Rate-Based Model Predictive Control of Turbofan Engine Clearance
NASA Technical Reports Server (NTRS)
DeCastro, Jonathan A.
2006-01-01
An innovative model predictive control strategy is developed for control of nonlinear aircraft propulsion systems and sub-systems. At the heart of the controller is a rate-based linear parameter-varying model that propagates the state derivatives across the prediction horizon, extending prediction fidelity to transient regimes where conventional models begin to lose validity. The new control law is applied to a demanding active clearance control application, where the objectives are to tightly regulate blade tip clearances and also anticipate and avoid detrimental blade-shroud rub occurrences by optimally maintaining a predefined minimum clearance. Simulation results verify that the rate-based controller is capable of satisfying the objectives during realistic flight scenarios where both a conventional Jacobian-based model predictive control law and an unconstrained linear-quadratic optimal controller are incapable of doing so. The controller is evaluated using a variety of different actuators, illustrating the efficacy and versatility of the control approach. It is concluded that the new strategy has promise for this and other nonlinear aerospace applications that place high importance on the attainment of control objectives during transient regimes.
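The rate-based prediction idea can be illustrated in a few lines. This is a purely illustrative sketch, not the paper's controller: the model propagates the state derivative across the horizon and integrates it to recover states, instead of mapping states directly. The matrices A(p), B(p) and all numbers below are hypothetical.

```python
import numpy as np

# Rate-based LPV prediction sketch (illustrative, not NASA's implementation):
# instead of x[k+1] = A x[k] + B u[k], propagate the state DERIVATIVE xdot
# across the horizon, then integrate to recover states. A(p), B(p) are
# hypothetical scheduling-parameter-dependent matrices.

def predict_rate_based(x0, xdot0, du_seq, p_seq, dt):
    """xdot[k+1] = A(p) xdot[k] + B(p) du[k];  x[k+1] = x[k] + dt*xdot[k]."""
    A = lambda p: np.array([[1.0 - 0.5 * dt * p, dt], [0.0, 1.0 - dt]])
    B = lambda p: np.array([[0.0], [dt * (1.0 + 0.2 * p)]])
    x, xdot = np.array(x0, float), np.array(xdot0, float)
    traj = [x.copy()]
    for du, p in zip(du_seq, p_seq):
        x = x + dt * xdot                            # integrate rate to state
        xdot = A(p) @ xdot + B(p) @ np.array([du])   # propagate the rate
        traj.append(x.copy())
    return np.array(traj)

traj = predict_rate_based([0.0, 0.0], [1.0, 0.0], du_seq=[0.1] * 10,
                          p_seq=[0.5] * 10, dt=0.05)
```

An MPC built on this prediction would optimize the input-rate sequence `du_seq` against clearance constraints at each step; only the prediction layer is sketched here.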
Rate-based degradation modeling of lithium-ion cells
E.V. Thomas; I. Bloom; J.P. Christophersen; V.S. Battaglia
2012-05-01
Accelerated degradation testing is commonly used as the basis to characterize battery cell performance over a range of stress conditions (e.g., temperatures). Performance is measured by some response that is assumed to be related to the state of health of the cell (e.g., discharge resistance). Often, the ultimate goal of such testing is to predict cell life at some reference stress condition, where cell life is defined to be the point in time where performance has degraded to some critical level. These predictions are based on a degradation model that expresses the expected performance level versus the time and conditions under which a cell has been aged. Usually, the degradation model relates the accumulated degradation to the time at a constant stress level. The purpose of this article is to present an alternative framework for constructing a degradation model that focuses on the degradation rate rather than the accumulated degradation. One benefit of this alternative approach is that prediction of cell life is greatly facilitated in situations where the temperature exposure is not isothermal. This alternative modeling framework is illustrated via a family of rate-based models and experimental data acquired during calendar-life testing of high-power lithium-ion cells.
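The core idea translates to code directly. The sketch below assumes an Arrhenius form for the degradation rate (the article's actual rate models and constants differ) and integrates that rate over an arbitrary, non-isothermal temperature profile:

```python
import numpy as np

# Rate-based degradation sketch with an ASSUMED Arrhenius rate
# r(T) = A*exp(-Ea/(R*T)); accumulated degradation over a non-isothermal
# profile is the time integral of the rate.
R_GAS = 8.314  # J/(mol K)

def accumulated_degradation(times_h, temps_K, A=1e5, Ea=5e4):
    t = np.asarray(times_h, float)
    r = A * np.exp(-Ea / (R_GAS * np.asarray(temps_K, float)))
    return float(np.sum(0.5 * (r[1:] + r[:-1]) * np.diff(t)))  # trapezoid rule

iso = accumulated_degradation([0, 1000], [298.15, 298.15])        # isothermal
cycled = accumulated_degradation([0, 500, 1000], [298.15, 318.15, 298.15])
```

Because the model is stated in terms of a rate, life prediction under an arbitrary temperature history reduces to integrating r(T(t)) until the accumulated degradation reaches the critical level, which is the benefit the article emphasizes for non-isothermal exposure.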
Global Earthquake Activity Rate models based on version 2 of the Global Strain Rate Map
NASA Astrophysics Data System (ADS)
Bird, P.; Kreemer, C.; Kagan, Y. Y.; Jackson, D. D.
2013-12-01
Global Earthquake Activity Rate (GEAR) models have usually been based on either relative tectonic motion (fault slip rates and/or distributed strain rates), or on smoothing of seismic catalogs. However, a hybrid approach appears to perform better than either parent, at least in some retrospective tests. First, we construct a Tectonic ('T') forecast of shallow (≤ 70 km) seismicity based on global plate-boundary strain rates from version 2 of the Global Strain Rate Map. Our approach is the SHIFT (Seismic Hazard Inferred From Tectonics) method described by Bird et al. [2010, SRL], in which the character of the strain rate tensor (thrusting and/or strike-slip and/or normal) is used to select the most comparable type of plate boundary for calibration of the coupled seismogenic lithosphere thickness and corner magnitude. One difference is that activity of offshore plate boundaries is spatially smoothed using empirical half-widths [Bird & Kagan, 2004, BSSA] before conversion to seismicity. Another is that the velocity-dependence of coupling in subduction and continental-convergent boundaries [Bird et al., 2009, BSSA] is incorporated. Another forecast component is the smoothed-seismicity ('S') forecast model of [Kagan & Jackson, 1994, JGR; Kagan & Jackson, 2010, GJI], which was based on optimized smoothing of the shallow part of the GCMT catalog, years 1977-2004. Both forecasts were prepared for threshold magnitude 5.767. Then, we create hybrid forecasts by one of 3 methods: (a) taking the greater of S or T; (b) simple weighted-average of S and T; or (c) log of the forecast rate is a weighted average of the logs of S and T. In methods (b) and (c) there is one free parameter, which is the fractional contribution from S. All hybrid forecasts are normalized to the same global rate. Pseudo-prospective tests for 2005-2012 (using versions of S and T calibrated on years 1977-2004) show that many hybrid models outperform both parents (S and T), and that the optimal weight on S
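The three hybridization rules are easy to state in code. This sketch applies them cell-by-cell to toy gridded rate forecasts S and T and renormalizes each hybrid to a common global rate, as the abstract describes; the example arrays and the weight w_s are illustrative.

```python
import numpy as np

# Hybrid forecast rules (a), (b), (c) from the abstract, applied to toy
# gridded rate forecasts S (smoothed seismicity) and T (tectonic).

def hybrid(S, T, method, w_s=0.6):
    S, T = np.asarray(S, float), np.asarray(T, float)
    if method == "max":                                  # rule (a)
        H = np.maximum(S, T)
    elif method == "linear":                             # rule (b)
        H = w_s * S + (1 - w_s) * T
    elif method == "loglinear":                          # rule (c)
        H = np.exp(w_s * np.log(S) + (1 - w_s) * np.log(T))
    else:
        raise ValueError(method)
    return H * (S.sum() / H.sum())    # normalize to the same global rate

S = np.array([0.1, 0.4, 0.5])   # toy smoothed-seismicity forecast
T = np.array([0.3, 0.3, 0.4])   # toy tectonic forecast
H = hybrid(S, T, "loglinear")
```

In rules (b) and (c) the single free parameter w_s is the fractional contribution from S, which the pseudo-prospective tests optimize.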
Nonparametric Hammerstein model based model predictive control for heart rate regulation.
Su, Steven W; Huang, Shoudong; Wang, Lu; Celler, Branko G; Savkin, Andrey V; Guo, Ying; Cheng, Teddy
2007-01-01
This paper proposes a novel nonparametric-model-based model predictive control approach for regulating heart rate during treadmill exercise. Because the model structure of the human cardiovascular system is often hard to determine, nonparametric modelling is a more realistic way to describe its complex behaviours. This paper presents a new nonparametric Hammerstein model identification approach for heart rate response modelling. Based on pseudo-random binary sequence experiment data, we decouple the identification of the linear dynamic part and the input nonlinearity of the Hammerstein system. Correlation analysis is applied to acquire the step response of the linear dynamic component. Support Vector Regression is adopted to obtain a nonparametric description of the inverse of the input static nonlinearity, which is utilized to form an approximate linear model of the Hammerstein system. Based on the established model, a model predictive controller under predefined speed and acceleration constraints is designed to achieve safer treadmill exercise. Simulation results show that the proposed control algorithm achieves optimal heart rate tracking performance under the predefined constraints.
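The Hammerstein structure and the role of the inverse nonlinearity can be shown with a toy example. Here f is a hypothetical input nonlinearity with a known analytic inverse (in the paper the inverse is estimated nonparametrically by Support Vector Regression); pre-compensating the input with the inverse makes the cascade behave linearly:

```python
import numpy as np

# Toy Hammerstein cascade (all forms assumed, for illustration): static input
# nonlinearity f followed by first-order linear dynamics y[k+1] = a*y[k] + b*f(u).

f = np.tanh            # hypothetical input nonlinearity
f_inv = np.arctanh     # its exact inverse (SVR-estimated in the paper)

def hammerstein(u_seq, a=0.9, b=0.1):
    y, out = 0.0, []
    for u in u_seq:
        y = a * y + b * f(u)    # linear dynamics driven by f(u)
        out.append(y)
    return np.array(out)

v = np.linspace(-0.9, 0.9, 50)      # desired "linearized" input
lin_out = hammerstein(f_inv(v))     # cascade now responds linearly in v
```

The resulting approximately linear model is what makes a standard linear MPC formulation applicable to the heart-rate regulation problem.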
NASA Astrophysics Data System (ADS)
Wang, WenBin; Wu, ZiNiu; Wang, ChunFeng; Hu, RuiFeng
2013-11-01
A model based on a thermodynamic approach is proposed for predicting the dynamics of communicable epidemics assumed to be governed by controlling efforts of multiple scales, so that an entropy is associated with the system. All the epidemic details are factored into a single time-dependent coefficient; the functional form of this coefficient is found through four constraints, including notably the existence of an inflexion point and a maximum. The model is solved to give a log-normal distribution for the spread rate, for which a Shannon entropy can be defined. The only parameter, which characterizes the width of the distribution function, is uniquely determined by maximizing the rate of entropy production. This entropy-based thermodynamic (EBT) model predicts the number of hospitalized cases with reasonable accuracy for SARS in 2003. The EBT model can be of use for potential epidemics such as avian influenza and H7N9 in China.
Mixture of a seismicity model based on the rate-and-state friction and ETAS model
NASA Astrophysics Data System (ADS)
Iwata, T.
2015-12-01
Currently the ETAS model [Ogata, 1988, JASA] is considered a standard model of seismicity. Because the ETAS model is purely statistical, however, the physics-based seismicity model derived from rate-and-state friction (hereafter the Dieterich model) [Dieterich, 1994, JGR] is frequently examined as an alternative. The original version of the Dieterich model has several problems in application to real earthquake sequences, and modifications have therefore been made in previous studies. Iwata [2015, Pageoph] is one such study and shows that the Dieterich model is significantly improved by including the effect of secondary aftershocks (i.e., aftershocks caused by previous aftershocks); still, the performance of the ETAS model remains superior to that of the improved Dieterich model. For further improvement, a mixture of the Dieterich and ETAS models is examined in this study. To achieve the mixture, the seismicity rate is represented as a sum of the ETAS and Dieterich models, whose weights are k and 1-k, respectively. This mixture model is applied to the aftershock sequences of the 1995 Kobe and 2004 Mid-Niigata earthquakes, which were analyzed in Iwata [2015]. Additionally, the sequence of the Matsushiro earthquake swarm in central Japan, 1965-1970, is analyzed. The value of k and the parameters of the ETAS and Dieterich models are estimated by maximum likelihood, and model performance is assessed on the basis of AIC. For the two aftershock sequences, the AIC values of the ETAS model are around 3-9 smaller (i.e., better) than those of the mixture model. On the contrary, for the Matsushiro swarm, the AIC value of the mixture model is 5.8 smaller than that of the ETAS model, indicating that mixing the two models significantly improves the seismicity model.
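The mixture construction itself is a one-liner. The sketch below uses stand-in toy rate functions (an Omori-type decay for the ETAS side and a simple decaying rate in place of the Dieterich model, neither of which is the fitted model) plus the AIC formula used for model comparison:

```python
import numpy as np

# Mixture sketch: total rate = k * rate_ETAS + (1-k) * rate_Dieterich.
# Both component rates are TOY stand-ins, not the fitted models; the study
# estimates k and all parameters by maximum likelihood and compares by AIC.

def rate_etas_toy(t, mu=0.1, K=1.0, c=0.01, p=1.1):
    return mu + K / (t + c) ** p          # Omori-type aftershock decay

def rate_dieterich_toy(t, r0=0.5, ta=50.0):
    return r0 * ta / (t + ta)             # stand-in decaying rate only

def rate_mixture(t, k):
    return k * rate_etas_toy(t) + (1 - k) * rate_dieterich_toy(t)

def aic(log_likelihood, n_params):
    return 2 * n_params - 2 * log_likelihood   # smaller is better

t = np.linspace(0.1, 100, 500)
r = rate_mixture(t, k=0.7)
```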
Rate-based process modeling study of CO2 capture with aqueous monoethanolamine solution
Zhang, Y.; Chen, H.; Chen, C.C.; Plaza, J.M.; Dugas, R.; Rochelle, G.T.
2009-10-15
Rate-based process modeling technology has matured and is increasingly gaining acceptance over traditional equilibrium-stage modeling approaches. Recently comprehensive pilot plant data for carbon dioxide (CO2) capture with aqueous monoethanolamine (MEA) solution have become available from the University of Texas at Austin. The pilot plant data cover key process variables including CO2 concentration in the gas stream, CO2 loading in lean MEA solution, liquid to gas ratio, and packing type. In this study, we model the pilot plant operation with Aspen RateSep, a second generation rate-based multistage separation unit operation model in Aspen Plus. After a brief review of rate-based modeling, thermodynamic and kinetic models for CO2 absorption with the MEA solution, and transport property models, we show excellent match of the rate-based model predictions against the comprehensive pilot plant data and we validate the superiority of the rate-based models over the traditional equilibrium-stage models. We further examine the impacts of key rate-based modeling options, i.e., film discretization options and flow model options. The rate-based model provides excellent predictive capability, and it should be very useful for design and scale-up of CO2 capture processes.
Acid–base chemical reaction model for nucleation rates in the polluted atmospheric boundary layer
Chen, Modi; Titcombe, Mari; Jiang, Jingkun; Jen, Coty; Kuang, Chongai; Fischer, Marc L.; Eisele, Fred L.; Siepmann, J. Ilja; Hanson, David R.; Zhao, Jun; McMurry, Peter H.
2012-01-01
Climate models show that particles formed by nucleation can affect cloud cover and, therefore, the earth's radiation budget. Measurements worldwide show that nucleation rates in the atmospheric boundary layer are positively correlated with concentrations of sulfuric acid vapor. However, current nucleation theories do not correctly predict either the observed nucleation rates or their functional dependence on sulfuric acid concentrations. This paper develops an alternative approach for modeling nucleation rates, based on a sequence of acid–base reactions. The model uses empirical estimates of sulfuric acid evaporation rates obtained from new measurements of neutral molecular clusters. The model predicts that nucleation rates equal the sulfuric acid vapor collision rate times a prefactor that is less than unity and that depends on the concentrations of basic gaseous compounds and preexisting particles. Predicted nucleation rates and their dependence on sulfuric acid vapor concentrations are in reasonable agreement with measurements from Mexico City and Atlanta. PMID:23091030
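The paper's central relation can be sketched in a few lines; the collision-rate coefficient and the prefactor values below are hypothetical placeholders, not the paper's empirical numbers:

```python
# Sketch of the abstract's central relation: nucleation rate J equals the
# sulfuric acid vapor self-collision rate times a prefactor P < 1 that
# depends on base gas concentrations and preexisting particles.

K_COLL = 5e-10  # cm^3 s^-1, ASSUMED H2SO4 self-collision rate coefficient

def nucleation_rate(h2so4, prefactor):
    """J [cm^-3 s^-1] = P * k * [H2SO4]^2, with 0 < P < 1."""
    assert 0.0 < prefactor < 1.0
    return prefactor * K_COLL * h2so4 ** 2

j_clean = nucleation_rate(1e7, prefactor=1e-3)     # base-poor conditions
j_polluted = nucleation_rate(1e7, prefactor=1e-1)  # base-rich (e.g. amines)
```

The quadratic dependence on [H2SO4] comes from the collision rate; the observed variability across sites is carried entirely by the prefactor.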
Modeling of Diffusion Based Correlations Between Heart Rate Modulations and Respiration Pattern
Langer, R.; Smorzik, Y.; Akselrod, S.
2001-10-25
[Partially recovered abstract] The model's first stage describes airflow through the generations of the bronchial tree; the second stage describes the oxygen diffusion process from the pulmonary gas in the alveoli into the pulmonary capillary blood. Correlations between heart rate modulations and respiration patterns (FRC, TV, rate) are modeled. Keywords: Modeling, Diffusion, Heart Rate fluctuations.
A model-based technique for predicting pilot opinion ratings for large commercial transports
NASA Technical Reports Server (NTRS)
Levison, W. H.
1980-01-01
A model-based technique for predicting pilot opinion ratings is described. Features of this procedure, which is based on the optimal-control model for pilot/vehicle systems, include (1) capability to treat 'unconventional' aircraft dynamics, (2) a relatively free-form pilot model, (3) a simple scalar metric for attentional workload, and (4) a straightforward manner of proceeding from descriptions of the flight task environment and requirements to a prediction of pilot opinion rating. The method is able to provide a good match to a set of pilot opinion ratings obtained in a manned simulation study of large commercial aircraft in landing approach.
A model-based technique for predicting pilot opinion ratings for large commercial transports
NASA Technical Reports Server (NTRS)
Levison, W. H.
1982-01-01
A model-based technique for predicting pilot opinion ratings is described. Features of this procedure, which is based on the optimal-control model for pilot/vehicle systems, include (1) capability to treat "unconventional" aircraft dynamics, (2) a relatively free-form pilot model, (3) a simple scalar metric for attentional workload, and (4) a straightforward manner of proceeding from descriptions of the flight task environment and requirements to a prediction of pilot opinion rating. The method was able to provide a good match to a set of pilot opinion ratings obtained in a manned simulation study of large commercial aircraft in landing approach.
Zeng, Yuehua; Shen, Zheng-Kang
2016-01-01
We invert Global Positioning System (GPS) velocity data to estimate fault slip rates in California using a fault‐based crustal deformation model with geologic constraints. The model assumes buried elastic dislocations across the region using Uniform California Earthquake Rupture Forecast Version 3 (UCERF3) fault geometries. New GPS velocity and geologic slip‐rate data were compiled by the UCERF3 deformation working group. The result of least‐squares inversion shows that the San Andreas fault slips at 19–22 mm/yr along Santa Cruz to the North Coast, 25–28 mm/yr along the central California creeping segment to the Carrizo Plain, 20–22 mm/yr along the Mojave, and 20–24 mm/yr along the Coachella to the Imperial Valley. Modeled slip rates are 7–16 mm/yr lower than the preferred geologic rates from the central California creeping section to the San Bernardino North section. For the Bartlett Springs section, fault slip rates of 7–9 mm/yr fall within the geologic bounds but are twice the preferred geologic rates. For the central and eastern Garlock, inverted slip rates of 7.5 and 4.9 mm/yr, respectively, match closely with the geologic rates. For the western Garlock, however, our result suggests a low slip rate of 1.7 mm/yr. Along the eastern California shear zone and southern Walker Lane, our model shows a cumulative slip rate of 6.2–6.9 mm/yr across its east–west transects, ∼1 mm/yr higher than the geologic estimates. For the off‐coast faults of central California, from Hosgri to San Gregorio, fault slips are modeled at 1–5 mm/yr, similar to the lower geologic bounds. For the off‐fault deformation, the total moment rate amounts to 0.88×10^19 N·m/yr, with fast straining regions found around the Mendocino triple junction, Transverse Ranges and Garlock fault zones, Landers and Brawley seismic zones, and farther south. The overall California moment rate is 2.76×10^19 N·m/yr.
Modeling of Rate-Dependent Hysteresis Using a GPO-Based Adaptive Filter
Zhang, Zhen; Ma, Yaopeng
2016-01-01
A novel generalized play operator-based (GPO-based) nonlinear adaptive filter is proposed to model rate-dependent hysteresis nonlinearity for smart actuators. In the proposed filter, the input signal vector consists of the output of a tapped delay line. GPOs with various thresholds are used to construct a nonlinear network and connected with the input signals. The output signal of the filter is composed of a linear combination of signals from the output of GPOs. The least-mean-square (LMS) algorithm is used to adjust the weights of the nonlinear filter. The modeling results of four adaptive filter methods are compared: GPO-based adaptive filter, Volterra filter, backlash filter and linear adaptive filter. Moreover, a phenomenological operator-based model, the rate-dependent generalized Prandtl-Ishlinskii (RDGPI) model, is compared to the proposed adaptive filter. The various rate-dependent modeling methods are applied to model the rate-dependent hysteresis of a giant magnetostrictive actuator (GMA). It is shown from the modeling results that the GPO-based adaptive filter can describe the rate-dependent hysteresis nonlinearity of the GMA more accurately and effectively. PMID:26861349
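A minimal sketch of the filter structure, with illustrative thresholds and step size: classical play operators with different thresholds form the nonlinear network, and the weights of their linear combination are adapted by LMS. (The paper's generalized play operators and tapped-delay-line input are simplified away here.)

```python
import numpy as np

# Play-operator network + LMS sketch (illustrative parameters throughout).

def play_operator(u_seq, r):
    y, out = 0.0, []
    for u in u_seq:
        y = max(u - r, min(u + r, y))   # classical play (backlash) operator
        out.append(y)
    return np.array(out)

def gpo_lms_fit(u_seq, d_seq, thresholds, mu=0.05, epochs=20):
    # Regressor matrix: one play-operator output per threshold.
    Phi = np.stack([play_operator(u_seq, r) for r in thresholds], axis=1)
    w = np.zeros(len(thresholds))
    for _ in range(epochs):
        for phi, d in zip(Phi, d_seq):
            w += mu * (d - phi @ w) * phi      # LMS weight update
    return w, Phi

t = np.linspace(0, 4 * np.pi, 400)
u = np.sin(t)
d = play_operator(u, 0.3)              # target: a hysteretic response
w, Phi = gpo_lms_fit(u, d, thresholds=[0.1, 0.3, 0.5])
err = float(np.mean((Phi @ w - d) ** 2))
```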
Model-based control of networked distributed systems with multi-rate state feedback updates
NASA Astrophysics Data System (ADS)
Garcia, Eloy; Antsaklis, Panos
2013-09-01
This paper presents a model-based multi-rate control technique for stabilisation of uncertain discrete-time systems that transmit information through a limited bandwidth communication network. This model-based multi-rate approach is applied to two networked architectures. First, we discuss the implementation of a centralised control system with distributed sensing capabilities and, second, we address the problem of stabilisation of networks of coupled subsystems with distributed sensors and controllers. In both cases, we provide necessary and sufficient conditions for stability of the uncertain system with multi-rate model updates. Furthermore, we show that, in general, an important reduction of network bandwidth can be obtained using the multi-rate approach with respect to the single-rate implementations. Finally, an extension is provided that addresses restricted access to the communication channel.
Decker, Scott L; Schneider, W Joel; Hale, James B
2012-01-01
Neuropsychologists frequently rely on a battery of neuropsychological tests, which are normally distributed, to determine impaired functioning. The statistical likelihood of Type I error in clinical decision-making is in part determined by the base rate at which normative individuals obtain atypical performance on neuropsychological tests. Base rates are most accurately obtained from co-normed measures, but this is rarely accomplished in neuropsychological testing. Several statistical methods have been proposed to estimate base rates for tests that are not co-normed. This study compared two statistical approaches (binomial and Monte Carlo models) used to estimate the base rates for flexible test batteries. The two approaches were compared against empirically derived base rates for a multitest co-normed battery of cognitive measures. Estimates were compared across a variety of conditions, including age and different α levels (N = 3,356). Monte Carlo R^2 estimates ranged from .980 to .997 across five different age groups, indicating a good fit. In contrast, the binomial model fit estimates ranged from .387 to .646. Results confirm that the binomial model is insufficient for estimating base rates because it does not take into account correlations among measures in a multitest battery. Although the Monte Carlo model produced more accurate results, minor biases occurred that are likely due to skewness and kurtosis of the test variables. Implications for future research and applied practice are discussed.
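The difference between the two approaches is easy to reproduce in a small simulation. The sketch below (battery size, inter-test correlation, and cutoff are hypothetical) estimates the base rate of at least one impaired score by Monte Carlo over correlated normal scores and compares it with the independence-based binomial value:

```python
import numpy as np
from math import erf, sqrt

# Monte Carlo vs binomial base rate of "at least one impaired score" in a
# battery of correlated, normally distributed tests; n_tests, rho and z_cut
# are illustrative, not the study's values.

def base_rate_mc(n_tests=6, rho=0.5, z_cut=-1.5, n_sims=200_000, seed=0):
    rng = np.random.default_rng(seed)
    cov = rho * np.ones((n_tests, n_tests)) + (1 - rho) * np.eye(n_tests)
    z = rng.multivariate_normal(np.zeros(n_tests), cov, size=n_sims)
    return float(np.mean((z < z_cut).any(axis=1)))  # P(any score below cutoff)

def base_rate_binomial(n_tests=6, z_cut=-1.5):
    p = 0.5 * (1 + erf(z_cut / sqrt(2)))   # Phi(z_cut): per-test tail prob
    return 1 - (1 - p) ** n_tests          # assumes tests are independent

mc = base_rate_mc()
```

Positive inter-test correlation lowers the probability of at least one low score, which is exactly the effect the binomial model cannot capture.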
Speciation rates decline through time in individual-based models of speciation and extinction.
Wang, Shaopeng; Chen, Anping; Fang, Jingyun; Pacala, Stephen W
2013-09-01
A well-documented pattern in the fossil record is a long-term decline in the origination rate of new taxa after diversity rebounds from a mass extinction. The mechanisms for this pattern remain elusive. In this article, we investigate the macroevolutionary predictions of an individual-based birth-death model (BDI model) where speciation and extinction rates emerge from population dynamics. We start with the simplest neutral model in which every individual has the same per capita rates of birth, death, and speciation. Although the prediction of the simplest neutral model agrees qualitatively with the fossil pattern, the predicted decline in per-species speciation rates is too fast to explain the long-term trend in fossil data. We thus consider models with variation among species in per capita rates of speciation and a suite of alternative assumptions about the heritability of speciation rate. The results show that interspecific variation in per capita speciation rate can induce differences among species in their ability to resist extinction because a low speciation rate confers a small but important demographic advantage. As a consequence, the model predicts an appropriately slow temporal decline in speciation rates, which provides a mechanistic explanation for the fossil pattern.
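A minimal neutral individual-based sketch (a Moran-type variant, simpler than the paper's BDI model, with all parameters illustrative) shows how speciation events emerge from individual-level dynamics:

```python
import numpy as np

# Neutral Moran-type community: fixed N individuals; each step one random
# individual is replaced by the offspring of another; with probability nu
# the offspring founds a new species (point-mutation speciation).

def simulate(N=200, nu=0.01, steps=20_000, seed=1):
    rng = np.random.default_rng(seed)
    species = np.zeros(N, dtype=int)   # species label per individual
    next_label, origins = 1, []        # origins: times of speciation events
    for t in range(steps):
        dead = rng.integers(N)         # individual replaced this step
        if rng.random() < nu:
            species[dead] = next_label
            origins.append(t)
            next_label += 1
        else:
            species[dead] = species[rng.integers(N)]  # copy a random parent
    return species, origins

species, origins = simulate()
n_species = len(np.unique(species))    # standing species richness at the end
```

Tracking `origins` against standing richness over time is how per-species origination rates would be measured in such a model; the paper's point is how those rates decline as the community fills after a diversity crash.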
Modeling Low-Dose-Rate Effects in Irradiated Bipolar-Base Oxides
Cirba, C.R.; Fleetwood, D.M.; Graves, R.J.; Michez, A.; Milanowski, R.J.; Saigne, F.; Schrimpf, R.D.; Witczak, S.C.
1998-10-26
A physical model is developed to quantify the contribution of oxide-trapped charge to enhanced low-dose-rate gain degradation in bipolar junction transistors. Multiple-trapping simulations show that space charge limited transport is partially responsible for low-dose-rate enhancement. At low dose rates, more holes are trapped near the silicon-oxide interface than at high dose rates, resulting in larger midgap voltage shifts at lower dose rates. The additional trapped charge near the interface may cause an exponential increase in excess base current, and a resultant decrease in current gain for some NPN bipolar technologies.
A New Statistically based Autoconversion rate Parameterization for use in Large-Scale Models
NASA Technical Reports Server (NTRS)
Lin, Bing; Zhang, Junhua; Lohmann, Ulrike
2002-01-01
The autoconversion rate is a key process for the formation of precipitation in warm clouds. In climate models, physical processes such as autoconversion rate, which are calculated from grid mean values, are biased, because they do not take subgrid variability into account. Recently, statistical cloud schemes have been introduced in large-scale models to account for partially cloud-covered grid boxes. However, these schemes do not include the in-cloud variability in their parameterizations. In this paper, a new statistically based autoconversion rate considering the in-cloud variability is introduced and tested in three cases using the Canadian Single Column Model (SCM) of the global climate model. The results show that the new autoconversion rate improves the model simulation, especially in terms of liquid water path in all three case studies.
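The subgrid bias the parameterization addresses follows from Jensen's inequality and is easy to demonstrate: averaging a convex rate over an assumed in-cloud distribution exceeds the rate evaluated at the mean. The power-law exponent below is a common Khairoutdinov-Kogan-type value, not necessarily the paper's scheme, and the gamma distribution is an assumption:

```python
import numpy as np

# Grid-mean vs distribution-averaged autoconversion with a convex rate
# P = c * q**p; in-cloud liquid water q drawn from an ASSUMED gamma
# distribution. By Jensen's inequality, E[P(q)] > P(E[q]).

rng = np.random.default_rng(0)
c, p = 1350.0, 2.47
q = rng.gamma(shape=2.0, scale=0.25e-3, size=100_000)  # in-cloud LWC [kg/kg]

rate_of_mean = c * q.mean() ** p            # what a grid-mean scheme computes
mean_of_rate = float(np.mean(c * q ** p))   # accounts for in-cloud variability
bias_factor = mean_of_rate / rate_of_mean
```

The statistically based parameterization effectively builds this distribution average into the scheme instead of using the biased grid-mean value.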
On rate-dependent mechanical model for adaptive magnetorheological elastomer base isolator
NASA Astrophysics Data System (ADS)
Li, Yancheng; Li, Jianchun
2017-04-01
This paper presents research on a phenomenological model of an adaptive base isolator. The adaptive base isolator is made of field-dependent magnetorheological elastomer (MRE), which can alter its physical properties under an applied magnetic field. Experimental testing demonstrated that the developed MRE base isolator possesses a remarkable ability to vary its stiffness under an applied magnetic field. However, several challenges are encountered in modeling such a novel device. For example, under large deformation the MRE base isolator exhibits a clear strain-stiffening effect, and this behavior escalates with increasing applied current. In addition, the MRE base isolator also shows typical rate-dependent behavior. Following a review of mechanical models for viscoelastic rubber devices, a novel rate-dependent model is proposed in this paper to capture the behavior of the new MRE base isolator. To develop a generalized model, the proposed model was evaluated on its performance under a random displacement input and a seismic input. It is shown that the proposed rate-dependent model can successfully describe the complex behavior of the device.
Lim, Hojun; Battaile, Corbett C.; Brown, Justin L.; Weinberger, Christopher R.
2016-06-14
In this work, we develop a tantalum strength model that incorporates effects of temperature, strain rate and pressure. Dislocation kink-pair theory is used to incorporate temperature and strain rate effects while the pressure dependent yield is obtained through the pressure dependent shear modulus. Material constants used in the model are parameterized from tantalum single crystal tests and polycrystalline ramp compression experiments. It is shown that the proposed strength model agrees well with the temperature and strain rate dependent yield obtained from polycrystalline tantalum experiments. Furthermore, the model accurately reproduces the pressure dependent yield stresses up to 250 GPa. The proposed strength model is then used to conduct simulations of a Taylor cylinder impact test and validated with experiments. This approach provides a physically-based multi-scale strength model that is able to predict the plastic deformation of polycrystalline tantalum through a wide range of temperature, strain and pressure regimes.
NASA Astrophysics Data System (ADS)
Wei, Xiaodan; Liu, Lijun; Zhou, Wenshu
2017-03-01
In this paper, we study the global stability and attractivity of the endemic equilibrium for a network-based SIS epidemic model with nonmonotone incidence rate. The model was introduced in Li (2015). We prove that the endemic equilibrium is globally asymptotically stable if α (a parameter of this model) is sufficiently large, and is globally attractive if the transmission rate λ satisfies λ/λc ∈(1 , 2 ] , where λc is the epidemic threshold. Some numerical experiments are also presented to illustrate the theoretical results.
A Numerical Study of Water Loss Rate Distributions in MDCT-based Human Airway Models
Wu, Dan; Miyawaki, Shinjiro; Tawhai, Merryn H.; Hoffman, Eric A.; Lin, Ching-Long
2015-01-01
Both three-dimensional (3D) and one-dimensional (1D) computational fluid dynamics (CFD) methods are applied to study regional water loss in three multi-detector row computed-tomography (MDCT)-based human airway models at the minute ventilations of 6, 15 and 30 L/min. The overall water losses predicted by both 3D and 1D models in the entire respiratory tract agree with available experimental measurements. However, 3D and 1D models reveal different regional water loss rate distributions due to the 3D secondary flows formed at bifurcations. The secondary flows cause local skewed temperature and humidity distributions on inspiration acting to elevate the local water loss rate; and the secondary flow at the carina tends to distribute more cold air to the lower lobes. As a result, the 3D model predicts that the water loss rate first increases with increasing airway generation, and then decreases as the air approaches saturation, while the 1D model predicts a monotonic decrease of water loss rate with increasing airway generation. Moreover, the 3D (or 1D) model predicts relatively higher water loss rates in lower (or upper) lobes. The regional water loss rate can be related to the non-dimensional wall shear stress (τ*) by the non-dimensional mass transfer coefficient (h0*) as h0* = 1.15(τ*)^0.272, R = 0.842. PMID:25869455
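As a rough illustration, the power-law correlation reported in that abstract can be wrapped in a one-line helper. The constants (1.15, exponent 0.272, R = 0.842) come from the abstract; the function and variable names are our own illustrative choices.

```python
# Hedged sketch of the reported correlation h0* = 1.15 (tau*)^0.272
# relating the non-dimensional mass transfer coefficient to the
# non-dimensional wall shear stress. Names are illustrative.

def mass_transfer_coefficient(tau_star):
    """Non-dimensional mass transfer coefficient h0* from the
    non-dimensional wall shear stress tau*."""
    return 1.15 * tau_star ** 0.272

# By construction h0* = 1.15 at tau* = 1; the weak exponent means
# h0* varies slowly with wall shear stress.
print(mass_transfer_coefficient(1.0))  # 1.15
```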
Residential air exchange rates (AERs) are a key determinant in the infiltration of ambient air pollution indoors. Population-based human exposure models using probabilistic approaches to estimate personal exposure to air pollutants have relied on input distributions from AER meas...
Comparison of two lung clearance models based on the dissolution rates of oxidized depleted uranium
Crist, K.C.
1984-10-01
An in-vitro dissolution study was conducted on two respirable oxidized depleted uranium samples. The dissolution rates generated from this study were then utilized in the International Commission on Radiological Protection Task Group lung clearance model and a lung clearance model proposed by Cuddihy. Predictions from both models based on the dissolution rates of the amount of oxidized depleted uranium that would be cleared to blood from the pulmonary region following an inhalation exposure were compared. It was found that the predictions made by both models differed considerably. The difference between the predictions was attributed to the differences in the way each model perceives the clearance from the pulmonary region. 33 references, 11 figures, 9 tables.
Dynamics of a network-based SIS epidemic model with nonmonotone incidence rate
NASA Astrophysics Data System (ADS)
Li, Chun-Hsien
2015-06-01
This paper studies the dynamics of a network-based SIS epidemic model with nonmonotone incidence rate. This type of nonlinear incidence can be used to describe the psychological effect of certain diseases spread in a contact network at high infective levels. We first find a threshold value for the transmission rate. This value completely determines the dynamics of the model and interestingly, the threshold is not dependent on the functional form of the nonlinear incidence rate. Furthermore, if the transmission rate is less than or equal to the threshold value, the disease will die out. Otherwise, it will be permanent. Numerical experiments are given to illustrate the theoretical results. We also consider the effect of the nonlinear incidence on the epidemic dynamics.
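The abstract does not give the incidence function, so the sketch below assumes the commonly used nonmonotone form g(I) = λI/(1 + αI²) in a single-population (rather than network) SIS simplification, with an assumed recovery rate γ. It only illustrates the threshold behavior described: below the threshold the disease dies out, above it an endemic level persists.

```python
# Illustrative single-population SIS model with an assumed nonmonotone
# incidence g(I) = lam * I / (1 + alpha * I**2); not the paper's exact
# network-based formulation.

def simulate_sis(lam, alpha, gamma=1.0, i0=0.1, dt=0.01, steps=20000):
    """Forward-Euler integration of dI/dt = g(I)*(1 - I) - gamma*I,
    returning the infected fraction at the final time."""
    i = i0
    for _ in range(steps):
        incidence = lam * i / (1.0 + alpha * i * i)
        i += dt * (incidence * (1.0 - i) - gamma * i)
    return i

# Linearizing at I = 0 gives dI/dt ~ (lam - gamma) * I, so lam = gamma
# acts as the threshold: below it the infection dies out, above it the
# solution settles at a positive endemic equilibrium.
```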
Cooling rate estimations based on kinetic modelling of Fe-Mg diffusion in olivine
NASA Technical Reports Server (NTRS)
Taylor, L. A.; Onorato, P. I. K.; Uhlmann, D. R.
1977-01-01
A finite one-dimensional kinetic model was developed to estimate the cooling rates of lunar rocks. The model takes into consideration the compositional zonation of olivine and applies Buening and Buseck (1973) data on ion diffusion in olivine. Since the 'as-solidified' profile of a given olivine is not known, a step-function, with infinite gradient, is assumed; the position of this step is based on mass balance considerations of the measured compositional profile. A minimum cooling rate would be associated with the preservation of a given gradient. The linear cooling rates of lunar rocks 12002 and 15555 were estimated by use of the olivine cooling-rate indicator to be 10 C/day and 5 C/day, respectively. These values are lower than those obtained by dynamic crystallization studies (10-20 C/day).
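The kind of one-dimensional calculation described, a step-function composition profile relaxing by diffusion, can be sketched with an explicit finite-difference scheme. The diffusivity, grid, and times below are purely illustrative, not the Buening and Buseck olivine values.

```python
# Minimal 1-D diffusion sketch: a step-function Fe/Mg profile relaxing
# by diffusion, solved with explicit finite differences. All parameter
# values are illustrative placeholders.

def diffuse_step(n=100, d=1.0, dt=4e-5, steps=1000):
    """Explicit finite-difference solution of dc/dt = d * d2c/dx2 on a
    unit interval (dx = 1/n) with a step initial profile; the endpoints
    are held fixed (Dirichlet boundaries). Stability needs
    d*dt/dx**2 <= 0.5 (here 0.4)."""
    dx = 1.0 / n
    c = [1.0 if i < n // 2 else 0.0 for i in range(n)]
    for _ in range(steps):
        new = c[:]
        for i in range(1, n - 1):
            new[i] = c[i] + d * dt / dx**2 * (c[i-1] - 2*c[i] + c[i+1])
        c = new
    return c

# The infinite-gradient step smooths into an error-function-like
# profile; the midpoint tends toward the mean of the two plateaus.
profile = diffuse_step()
```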
Introduction of a prediction model to assigning periodontal prognosis based on survival rates.
Martinez-Canut, Pedro; Alcaraz, Jaime; Alvarez-Novoa, Pablo; Alvarez-Novoa, Carmen; Marcos, Ana; Noguerol, Blas; Noguerol, Fernando; Zabalegui, Ion
2017-09-04
To develop a prediction model for tooth loss due to periodontal disease (TLPD) in patients following periodontal maintenance (PM), and assess its performance using a multicentre approach. A multilevel analysis of eleven predictors of TLPD in 500 patients following PM was carried out to calculate the probability of TLPD. This algorithm was applied to three different TLPD samples (369 teeth) gathered retrospectively by nine periodontists, associating several intervals of probability with the corresponding survival rates, based on significant differences in the mean survival rates. The reproducibility of these associations was assessed in each sample (one-way ANOVA and pair-wise comparison with Bonferroni corrections). The model presented high specificity and moderate sensitivity, with optimal calibration and discrimination measurements. Seven intervals of probability were associated with seven survival rates and these associations contained close to 80% of the cases: the probability predicted the survival rate at this percentage. The model performed well in the three samples, since the mean survival rates of each association were significantly different within each sample, while no significant differences between the samples were found in pair-wise comparisons of means. This model might be useful for predicting survival rates in different TLPD samples.
Sonne, DP; Hansen, M; Bagger, JI; Lund, A; Rehfeld, JF; Alskär, O; Karlsson, MO; Vilsbøll, T; Knop, FK; Bergstrand, M
2016-01-01
Bile acids released postprandially modify the rate and extent of absorption of lipophilic compounds. The present study aimed to predict gastric emptying (GE) rate and gallbladder emptying (GBE) patterns in response to caloric intake. A mechanism‐based model for GE, cholecystokinin plasma concentrations, and GBE was developed on data from 33 patients with type 2 diabetes and 33 matched nondiabetic individuals who were administered various test drinks. A feedback action of the caloric content entering the proximal small intestine was identified for the rate of GE. The cholecystokinin concentrations were not predictive of GBE, and an alternative model linking the nutrients amount in the upper intestine to GBE was preferred. Relative to fats, the potency on GBE was 68% for proteins and 2.3% for carbohydrates. The model predictions were robust across a broad range of nutritional content and may potentially be used to predict postprandial changes in drug absorption. PMID:28028939
Modelling heart rate kinetics.
Zakynthinaki, Maria S
2015-01-01
The objective of the present study was to formulate a simple and at the same time effective mathematical model of heart rate kinetics in response to movement (exercise). Based on an existing model, a system of two coupled differential equations which give the rate of change of heart rate and the rate of change of exercise intensity is used. The modifications introduced to the existing model are justified and discussed in detail, while models of blood lactate accumulation in respect to time and exercise intensity are also presented. The main modification is that the proposed model has now only one parameter which reflects the overall cardiovascular condition of the individual. The time elapsed after the beginning of the exercise, the intensity of the exercise, as well as blood lactate are also taken into account. Application of the model provides information regarding the individual's cardiovascular condition and is able to detect possible changes in it, across the data recording periods. To demonstrate examples of successful numerical fit of the model, constant intensity experimental heart rate data sets of two individuals have been selected and numerical optimization was implemented. In addition, numerical simulations provided predictions for various exercise intensities and various cardiovascular condition levels. The proposed model can serve as a powerful tool for a complete means of heart rate analysis, not only in exercise physiology (for efficiently designing training sessions for healthy subjects) but also in the areas of cardiovascular health and rehabilitation (including application in population groups for which direct heart rate recordings at intense exercises are not possible or not allowed, such as elderly or pregnant women).
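The paper's model couples heart rate and exercise intensity through two differential equations governed by a single cardiovascular-condition parameter. The sketch below is a deliberate simplification of that idea (first-order relaxation of heart rate toward an intensity-dependent demand), not the paper's actual equations; all names and parameter values are our own.

```python
# Hedged simplification: heart rate relaxes toward an intensity-dependent
# demand at a speed set by one "condition" parameter, echoing the paper's
# single-parameter design. Not the published model.

def heart_rate_response(condition, intensity, hr0=70.0, hr_max=200.0,
                        dt=0.1, t_end=300.0):
    """Euler integration of dHR/dt = condition * (demand - HR), where
    demand = hr0 + (hr_max - hr0) * intensity and intensity is in [0, 1].
    Returns the heart rate (bpm) at t_end seconds."""
    hr = hr0
    demand = hr0 + (hr_max - hr0) * intensity
    t = 0.0
    while t < t_end:
        hr += dt * condition * (demand - hr)
        t += dt
    return hr

# A larger 'condition' value reaches steady state faster; at steady
# state the heart rate matches the demand for the given intensity.
print(round(heart_rate_response(0.05, 0.5)))  # prints 135
```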
Effect of control sampling rates on model-based manipulator control schemes
NASA Technical Reports Server (NTRS)
Khosla, P. K.
1987-01-01
The effect of changing the control sampling period on the performance of the computed-torque and independent joint control schemes is discussed. While the former utilizes the complete dynamics model of the manipulator, the latter assumes a decoupled and linear model of the manipulator dynamics. Researchers discuss the design of controller gains for both the computed-torque and the independent joint control schemes and establish a framework for comparing their trajectory tracking performance. Experiments show that within each scheme the trajectory tracking accuracy varies slightly with the change of the sampling rate. However, at low sampling rates the computed-torque scheme outperforms the independent joint control scheme. Based on experimental results, researchers also conclusively establish the importance of high sampling rates as they result in an increased stiffness of the system.
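The computed-torque law the abstract refers to cancels the manipulator dynamics and imposes linear error dynamics. A minimal single-link sketch, with made-up parameters and gains (the paper's experiments used a full manipulator model), is:

```python
import math

# Illustrative computed-torque control of a single-link arm with
# dynamics I*q'' + m*g*l*sin(q) = tau. Parameters and gains are
# invented for the sketch.

def simulate_computed_torque(kp=100.0, kv=20.0, dt=0.001, t_end=2.0):
    """Simulates set-point regulation; returns the final tracking error."""
    m, l, g = 1.0, 1.0, 9.81
    inertia = m * l * l
    q, qd = 0.0, 0.0          # start at rest
    q_des = 1.0               # constant set-point (rad)
    t = 0.0
    while t < t_end:
        e, ed = q_des - q, -qd
        # Computed-torque law: cancel gravity, then impose the linear
        # error dynamics e'' + kv*e' + kp*e = 0.
        tau = inertia * (kp * e + kv * ed) + m * g * l * math.sin(q)
        qdd = (tau - m * g * l * math.sin(q)) / inertia
        qd += dt * qdd
        q += dt * qd
        t += dt
    return abs(q_des - q)

# With exact model cancellation the closed loop is a critically damped
# linear system and the error decays to near zero.
print(simulate_computed_torque() < 1e-3)  # prints True
```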
Time-dependent sleep stage transition model based on heart rate variability.
Takeda, Toki; Mizuno, Osamu; Tanaka, Tomohiro
2015-01-01
A new model is proposed to automatically classify sleep stages using heart rate variability (HRV). The generative model, based on the characteristics that the distribution and the transition probabilities of sleep stages depend on the elapsed time from the beginning of sleep, infers the sleep stage with a Gibbs sampler. Experiments were conducted using a public data set consisting of 45 healthy subjects and the model's classification accuracy was evaluated for three sleep stages: wake state, rapid eye movement (REM) sleep, and non-REM sleep. Experimental results demonstrated that the model provides more accurate sleep stage classification than conventional (naive Bayes and Support Vector Machine) models that do not take the above characteristics into account. Our study contributes to improve the quality of sleep monitoring in the daily life using easy-to-wear HRV sensors.
MRI-based anatomical model of the human head for specific absorption rate mapping
Makris, Nikos; Angelone, Leonardo; Tulloch, Seann; Sorg, Scott; Kaiser, Jonathan; Kennedy, David
2009-01-01
In this study, we present a magnetic resonance imaging (MRI)-based, high-resolution, numerical model of the head of a healthy human subject. In order to formulate the model, we performed quantitative volumetric segmentation on the human head, using T1-weighted MRI. The high spatial resolution used (1 × 1 × 1 mm³) allowed for the precise computation and visualization of a higher number of anatomical structures than provided by previous models. Furthermore, the high spatial resolution allowed us to study individual thin anatomical structures of clinical relevance not visible by the standard model currently adopted in computational bioelectromagnetics. When we computed the electromagnetic field and specific absorption rate (SAR) at 7 Tesla MRI using this high-resolution model, we were able to obtain a detailed visualization of such fine anatomical structures as the epidermis/dermis, bone structures, bone-marrow, white matter and nasal and eye structures. PMID:18985401
Balakrishnan, Narayanaswamy; Pal, Suvra
2016-08-01
Recently, a flexible cure rate survival model has been developed by assuming the number of competing causes of the event of interest to follow the Conway-Maxwell-Poisson distribution. This model includes some of the well-known cure rate models discussed in the literature as special cases. Data obtained from cancer clinical trials are often right censored and expectation maximization algorithm can be used in this case to efficiently estimate the model parameters based on right censored data. In this paper, we consider the competing cause scenario and assuming the time-to-event to follow the Weibull distribution, we derive the necessary steps of the expectation maximization algorithm for estimating the parameters of different cure rate survival models. The standard errors of the maximum likelihood estimates are obtained by inverting the observed information matrix. The method of inference developed here is examined by means of an extensive Monte Carlo simulation study. Finally, we illustrate the proposed methodology with a real data on cancer recurrence.
Zheng, Dandan; Hou, Huirang; Zhang, Tao
2016-04-01
For ultrasonic gas flow rate measurement based on the ultrasonic exponential model, when the noise frequency is close to that of the desired signals (similar-frequency noise), or when the received signal amplitude is small and unstable at large flow rates, the genetic-ant colony optimization-3cycles algorithm may converge to a local optimum, degrading measurement accuracy. Therefore, an improved method, energy genetic-ant colony optimization-3cycles (EGACO-3cycles), is proposed to solve this problem. By locating the maximum-energy position of the signal, the initial parameter range of the exponential model can be narrowed and local convergence avoided. Moreover, a DN100 flow rate measurement system using the EGACO-3cycles method is established based on an NI PCI-6110 board and a personal computer. A series of experiments is carried out to test the new method and the measurement system. It is shown that local convergence does not appear with the EGACO-3cycles method when similar-frequency noise is present and the flow rate is large, so the correct time of flight can be obtained. Furthermore, through flow calibration on this system, a measurement range ratio of 500:1 is achieved, with a measurement accuracy of 0.5% at a low transition velocity of 0.3 m/s.
2015-01-01
Background Different from other indicators of cardiac function, such as ejection fraction and transmitral early diastolic velocity, myocardial strain is promising to capture subtle alterations that result from early diseases of the myocardium. In order to extract the left ventricle (LV) myocardial strain and strain rate from cardiac cine-MRI, a modified hierarchical transformation model was proposed. Methods A hierarchical transformation model including the global and local LV deformations was employed to analyze the strain and strain rate of the left ventricle by cine-MRI image registration. The endocardial and epicardial contour information was introduced to enhance the registration accuracy by combining the original hierarchical algorithm with an Iterative Closest Points using Invariant Features algorithm. The hierarchical model was validated by a normal volunteer first and then applied to two clinical cases (i.e., the normal volunteer and a diabetic patient) to evaluate their respective function. Results Based on the two clinical cases, by comparing the displacement fields of two selected landmarks in the normal volunteer, the proposed method showed a better performance than the original or unmodified model. Meanwhile, the comparison of the radial strain between the volunteer and patient demonstrated their apparent functional difference. Conclusions The present method could be used to estimate the LV myocardial strain and strain rate during a cardiac cycle and thus to quantify the analysis of the LV motion function. PMID:25602778
Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests
Wang, Yueke; Xing, Kefei; Deng, Wei; Zhang, Zelong
2016-01-01
A method of evaluating the single-event effect soft-error vulnerability of space instruments before launched has been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze the fault diagnosis and meantime to failure (MTTF) for space instruments. A model for the system functional error rate (SFER) is proposed. In addition, an experimental method and accelerated radiation testing system for a signal processing platform based on the field programmable gate array (FPGA) is presented. Based on experimental results of different ions (O, Si, Cl, Ti) under the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10⁻³ (errors/particle/cm²), while the MTTF is approximately 110.7 h. PMID:27583533
Testing foetal-maternal heart rate synchronization via model-based analyses.
Riedl, Maik; van Leeuwen, Peter; Suhrbier, Alexander; Malberg, Hagen; Grönemeyer, Dietrich; Kurths, Jürgen; Wessel, Niels
2009-04-13
The investigation of foetal reaction to internal and external conditions and stimuli is an important tool in the characterization of the developing neural integration of the foetus. An interesting example of this is the study of the interrelationship between the foetal and the maternal heart rate. Recent studies have shown a certain likelihood of occasional heart rate synchronization between mother and foetus. In the case of respiratory-induced heart rate changes, the comparison with maternal surrogates suggests that the evidence for detected synchronization is largely statistical and does not result from physiological interaction. Rather, they simply reflect a stochastic, temporary stability of two independent oscillators with time-variant frequencies. We reanalysed three datasets from that study for a more local consideration. Epochs of assumed synchronization associated with short-term regulation of the foetal heart rate were selected and compared with synchronization resulting from white noise instead of the foetal signal. Using data-driven modelling analysis, it was possible to identify the consistent influence of the heartbeat duration of maternal beats preceding the foetal beats during epochs of synchronization. These maternal beats occurred approximately one maternal respiratory cycle prior to the affected foetal beat. A similar effect could not be found in the epochs without synchronization. Simulations based on the fitted models led to a higher likelihood of synchronization in the data segments with assumed foetal-maternal interaction than in the segment without such assumed interaction. We conclude that the data-driven model-based analysis can be a useful tool for the identification of synchronization.
Littoral transport rates in the Santa Barbara Littoral Cell: a process-based model analysis
Elias, E. P. L.; Barnard, Patrick L.; Brocatus, John
2009-01-01
Identification of the sediment transport patterns and pathways is essential for sustainable coastal zone management of the heavily modified coastline of Santa Barbara and Ventura County (California, USA). A process-based model application, based on Delft3D Online Morphology, is used to investigate the littoral transport potential along the Santa Barbara Littoral Cell (between Point Conception and Mugu Canyon). An advanced optimization procedure is applied to enable annual sediment transport computations by reducing the ocean wave climate to 10 wave-height and direction classes. Modeled littoral transport rates compare well with observed dredging volumes, and erosion or sedimentation hotspots coincide with the modeled divergence and convergence of the transport gradients. Sediment transport rates are strongly dependent on the alongshore variation in wave height due to wave sheltering, diffraction and focusing by the Northern Channel Islands, and the local orientation of the geologically-controlled coastline. Local transport gradients exceed the net eastward littoral transport, and are considered a primary driver for hot-spot erosion.
The rate equation based optical model for phosphor-converted white light-emitting diodes
NASA Astrophysics Data System (ADS)
Du, Kang; Li, Haokai; Guo, Keqin; Wang, Heng; Li, Dacheng; Zhang, Wending; Mei, Ting; Chua, Soo Jin
2017-03-01
An optical model based on the rate equation was developed to calculate the emission spectrum of a phosphor-converted white light-emitting diode (pc-WLED) taking into consideration the phosphor weight percentage, film thickness, and optical properties of phosphor, viz. absorption spectrum, quantum efficiency spectrum and fluorescent emission spectrum. Films containing a mixture of phosphor and silicone elastomer encapsulant were investigated using this model. A linear relationship was found between the peak absorption coefficient and the phosphor weight percentage with slopes of 66.76 ± 0.52 mm⁻¹ and 29.66 ± 2.05 mm⁻¹ for a red phosphor CaAlSiN3:Eu2+ and a yellow phosphor Y3Al5O12:Ce3+, respectively. With these parameters, the model predicted emission spectra which are in good agreement with measurement, thus verifying the validity of the model. The model correctly predicts redshift and spectral width reduction of the emission peak for increasing phosphor weight percentage or film thickness, as expected from the phenomenon of photon reabsorption by the phosphors. This model does not require the use of Monte Carlo simulation and Mie theory.
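The reported linear slope relation can be sketched directly. The slope values come from the abstract; treating the weight percentage as a fraction in [0, 1] and assuming a zero intercept are our assumptions, as are all names.

```python
# Sketch of the reported linear dependence of the peak absorption
# coefficient on phosphor loading. Slopes (mm^-1) are from the abstract;
# the fractional weight scale and zero intercept are assumptions.

SLOPES_MM = {
    "CaAlSiN3:Eu2+": 66.76,   # red phosphor
    "Y3Al5O12:Ce3+": 29.66,   # yellow phosphor
}

def peak_absorption_coefficient(phosphor, weight_fraction):
    """Peak absorption coefficient in mm^-1 for a given phosphor loading."""
    return SLOPES_MM[phosphor] * weight_fraction

# A 10% yellow-phosphor loading gives roughly 2.97 mm^-1 under these
# assumptions; the red phosphor absorbs more strongly at equal loading.
```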
The relationship between afterslip and aftershocks: a study based on Coulomb-Rate-and-State models
NASA Astrophysics Data System (ADS)
Cattania, Camilla; Hainzl, Sebastian; Roth, Frank; Wang, Lifeng
2014-05-01
The original Coulomb stress hypothesis, as well as most physics based models of aftershock sequences, assume that aftershocks are triggered by the instantaneous coseismic stress: in other words, the stress field is treated as stationary following the mainshock. However, several lines of evidence indicate that postseismic processes may affect aftershock triggering. The cumulative seismic moment of afterslip can be a significant fraction of the coseismic moment, generating comparable stress changes; moreover, afterslip has a similar time dependence as aftershocks, suggesting that the two processes may be linked. Aftershocks themselves contribute to the redistribution of stresses, and they can trigger their own aftershocks: spatial clustering, and the success of statistical models which include secondary triggering (ETAS) suggest that, even though aftershocks typically generate stresses orders of magnitude smaller than the mainshock, they are significant on a local scale. Our goal is to study the effect of postseismically induced stresses in the spatial and temporal distribution of aftershocks. We focus on the two processes described above (afterslip and secondary triggering), and do not consider other phenomena such as poroelastic response and viscoelastic relaxation. We study a period of 250 days following the mainshock, for two case studies: the Parkfield, Mw=6.0 and the Tohoku, Mw=9.0 earthquakes. We model the seismic response to stress changes using the Dieterich constitutive law, derived from a population of faults governed by Rate-and-State dependent friction; we also consider uncertainties in the input stress field using a Monte Carlo technique. We find that modeling secondary triggering systematically improves model performance; afterslip has a less significant overall impact on the model, but in both cases studies we observe clusters of seismicity which, due to their location relative to the coseismic and postseismic slip, are better explained when afterslip
Singh, Viraj; Misra, Anil; Parthasarathy, Ranganathan; Ye, Qiang; Park, Jonggu; Spencer, Paulette
2014-01-01
The aim of this study is to investigate the mechanical behavior of model methacrylate-based dentin adhesives under conditions that simulate the wet oral environment. A series of monotonic and creep experiments were performed on rectangular beam samples of dentin adhesive in three-point bending configuration under different moisture conditions. The monotonic test results show a significant effect of loading rate on the failure strength and the linear limit (yield point) of the stress-strain response. In addition, these tests show that the failure strength is low, and the failure occurs at a smaller deformation when the test is performed under continuously changing moisture conditions. The creep test results show that under constant moisture conditions, the model dentin adhesives can have a viscoelastic response under certain low loading levels. However, when the moisture conditions vary under the same low loading levels, the dentin adhesives have an anomalous creep response accompanied by large secondary creep and high strain accumulation. PMID:23744598
2011-01-01
The overall framework of this rate-dependent HCPD model follows the structure of the anisotropic Gurson-Tvergaard-Needleman (GTN) type elasto...with evolving porosity. The HCPD model follows the Gurson-Tvergaard-Needleman or GTN models framework established in [14, 15, 16, 17] that account for...this method. In [32, 33] the VCFEM model has been extended for rate-dependent elastic-viscoplastic porous ductile material. Micromechanical analysis
Model-based detection of heart rate turbulence using mean shape information.
Smith, Danny; Solem, Kristian; Laguna, Pablo; Martínez, Juan Pablo; Sörnmo, Leif
2010-02-01
A generalized likelihood ratio test (GLRT) statistic is proposed for detection of heart rate turbulence (HRT), where a set of Karhunen-Loève basis functions models HRT. The detector structure is based on the extended integral pulse frequency modulation model that accounts for the presence of ectopic beats and HRT. This new test statistic takes a priori information regarding HRT shape into account, whereas our previously presented GLRT detector relied solely on the energy contained in the signal subspace. The spectral relationship between heart rate variability (HRV) and HRT is investigated for the purpose of modeling the HRV "noise" present during the turbulence period, the results suggesting that the white noise assumption is feasible to pursue. The performance was studied for both simulated and real data, leading to results which show that the new GLRT detector is superior to the original one as well as to the commonly used turbulence slope (TS) parameter on both types of data. Averaging ten ventricular ectopic beats, the estimated detection probabilities of the new detector, the previous detector, and TS were found to be 0.83, 0.35, and 0.41, respectively, when the false alarm probability was held fixed at 0.1.
Yu, Xiaozhi; Ren, Jindong; Zhang, Qian; Liu, Qun; Liu, Honghao
2017-04-01
Reach envelopes are very useful for the design and layout of controls. In building reach envelopes, one of the key problems is to represent the reach limits accurately and conveniently. Spherical harmonics have proved to be an accurate and convenient method for fitting reach capability envelopes. However, further study is required on which components of spherical harmonics are needed to fit the envelope surfaces. For applications in the vehicle industry, an inevitable issue is to construct reach limit surfaces with consideration of the seating positions of the drivers, and it is desirable to use population envelopes rather than individual envelopes. However, it is relatively inconvenient to acquire reach envelopes via a test considering the seating positions of the drivers. In addition, the acquired envelopes are usually unsuitable for use with other vehicle models because they are dependent on the current cab packaging parameters. Therefore, it is of great significance to construct reach envelopes for real vehicle conditions based on individual capability data considering seating positions. Moreover, traditional reach envelopes provide little information regarding the assessment of reach difficulty. The application of reach envelopes will improve design quality by providing difficulty-rating information about reach operations. In this paper, using laboratory data of seated reach with consideration of the subjective difficulty ratings, a method of modeling reach envelopes is studied based on spherical harmonics. The surface fitting using spherical harmonics is conducted for circumstances both with and without seat adjustments. For use with an adjustable seat, the seating position model is introduced to re-locate the test data. The surface fitting is conducted for both population and individual reach envelopes, as well as for boundary envelopes. Comparison of the envelopes of the adjustable seat and the SAE J287 control reach envelope shows that the latter
Learning to maximize reward rate: a model based on semi-Markov decision processes
Khodadadi, Arash; Fakhari, Pegah; Busemeyer, Jerome R.
2014-01-01
When animals have to make a number of decisions during a limited time interval, they face a fundamental problem: how much time they should spend on each decision in order to achieve the maximum possible total outcome. Deliberating more on one decision usually leads to more outcome but less time will remain for other decisions. In the framework of sequential sampling models, the question is how animals learn to set their decision threshold such that the total expected outcome achieved during a limited time is maximized. The aim of this paper is to provide a theoretical framework for answering this question. To this end, we consider an experimental design in which each trial can come from one of several possible “conditions.” A condition specifies the difficulty of the trial, the reward, the penalty and so on. We show that to maximize the expected reward during a limited time, the subject should set a separate value of decision threshold for each condition. We propose a model of learning the optimal value of decision thresholds based on the theory of semi-Markov decision processes (SMDP). In our model, the experimental environment is modeled as an SMDP with each “condition” being a “state” and the value of decision thresholds being the “actions” taken in those states. The problem of finding the optimal decision thresholds is then cast as the stochastic optimal control problem of taking actions in each state in the corresponding SMDP such that the average reward rate is maximized. Our model utilizes a biologically plausible learning algorithm to solve this problem. The simulation results show that at the beginning of learning the model chooses high values of the decision threshold, which lead to sub-optimal performance. With experience, however, the model learns to lower the value of the decision thresholds until it finally finds the optimal values. PMID:24904252
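The paper's central objective can be illustrated compactly. The sketch below, a simplification of the SMDP formulation above, brute-forces the per-condition thresholds that maximize the overall reward rate (expected reward divided by expected time, weighted by condition probabilities); the paper instead learns these thresholds with a biologically plausible algorithm, and all numbers and option tuples here are hypothetical.

```python
from itertools import product

def best_thresholds(conditions):
    """Brute-force search for the per-condition decision thresholds that
    maximize the overall reward rate sum(p*r) / sum(p*t), i.e. expected
    reward per unit time across a session of mixed conditions."""
    names = list(conditions)
    option_sets = [conditions[n]["options"] for n in names]
    best_rate, best_choice = float("-inf"), None
    for combo in product(*option_sets):
        # Each option is (threshold, expected reward, expected decision time).
        r = sum(conditions[n]["p"] * opt[1] for n, opt in zip(names, combo))
        t = sum(conditions[n]["p"] * opt[2] for n, opt in zip(names, combo))
        if r / t > best_rate:
            best_rate = r / t
            best_choice = {n: opt[0] for n, opt in zip(names, combo)}
    return best_choice, best_rate

# Hypothetical two-condition task: higher thresholds buy accuracy with time.
conditions = {
    "easy": {"p": 0.5, "options": [(0.5, 0.70, 0.8), (1.0, 0.90, 1.5)]},
    "hard": {"p": 0.5, "options": [(0.5, 0.55, 0.9), (1.0, 0.75, 2.0)]},
}
choice, rate = best_thresholds(conditions)
```

With these toy numbers the fast, low-threshold option wins in both conditions: the extra accuracy of the slow option does not repay its time cost in reward-rate terms, which is exactly the trade-off the learning model must discover.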
Analysis and Computation of a Base Labor Rate for Cost Models of Major Weapon System Acquisition
1978-06-01
was to analyze and compare the Base Labor Rates determined by the full cost approach versus the Maintenance Cost System. If the labor rates were... Southeastern United States which support transport aircraft were studied. The elements of cost which make up the Depot Labor Rate were used to facilitate the...
Logarithmic rate based elasto-viscoplastic cyclic constitutive model for soft biological tissues.
Zhu, Yilin; Kang, Guozheng; Yu, Chao; Poh, Leong Hien
2016-08-01
Based on the logarithmic rate and piecewise linearization theory, a thermodynamically consistent elasto-viscoplastic constitutive model is developed in the framework of finite deformations to describe the nonlinear time-dependent biomechanical performance of soft biological tissues, such as nonlinear anisotropic monotonic stress-strain responses, stress relaxation, creep and ratchetting. In the proposed model, the soft biological tissue is assumed to be a typical composite consisting of an isotropic matrix and an anisotropic fiber aggregation. Accordingly, the free energy function and stress tensor are divided into two parts related to the matrix and fiber aggregation, respectively. The nonlinear biomechanical responses of the tissues are described by the piecewise linearization theory with hypo-elastic relations of fiber aggregation. The evolution equations of viscoplasticity are formulated from the dissipation inequalities by the co-directionality hypotheses. The anisotropy is considered in the hypo-elastic relations and viscoplastic flow rules by introducing material parameters dependent on the loading direction. The capability of the proposed model to describe the nonlinear time-dependent deformation of soft biological tissues is then verified by comparing the predictions with the corresponding experimental results of three tissues. The predicted monotonic stress-strain responses, stress relaxation, creep and ratchetting of soft biological tissues are in good agreement with the corresponding experimental ones. Copyright © 2016 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Murphy, Gregory J.
2012-01-01
This quantitative study explores the 2010 recommendation of the Educational Funding Advisory Board to consider the Evidence-Based Adequacy model of school funding in Illinois. This school funding model identifies and costs research based practices necessary in a prototypical school and sets funding levels based upon those practices. This study…
Stage-discharge rating curves based on satellite altimetry and modeled discharge in the Amazon basin
NASA Astrophysics Data System (ADS)
Paris, Adrien; Dias de Paiva, Rodrigo; Santos da Silva, Joecila; Medeiros Moreira, Daniel; Calmant, Stephane; Garambois, Pierre-André; Collischonn, Walter; Bonnet, Marie-Paule; Seyler, Frederique
2016-05-01
In this study, rating curves (RCs) were determined by applying satellite altimetry to a poorly gauged basin. This study demonstrates the synergistic application of remote sensing and watershed modeling to capture the dynamics and quantity of flow in the Amazon River Basin, respectively. Three major advancements for estimating basin-scale patterns in river discharge are described. The first advancement is the preservation of the hydrological meanings of the parameters expressed by Manning's equation to obtain a data set containing the elevations of the river beds throughout the basin. The second advancement is the provision of parameter uncertainties and, therefore, the uncertainties in the rated discharge. The third advancement concerns estimating the discharge while considering backwater effects. We analyzed the Amazon Basin using nearly one thousand series that were obtained from ENVISAT and Jason-2 altimetry for more than 100 tributaries. Discharge values and related uncertainties were obtained from the rainfall-discharge MGB-IPH model. We used a global optimization algorithm based on Markov chain Monte Carlo within a Bayesian framework to determine the rating curves. The data were randomly allocated into 80% calibration and 20% validation subsets. A comparison with the validation samples produced a Nash-Sutcliffe efficiency (Ens) of 0.68. When the MGB discharge uncertainties were less than 5%, the Ens value increased to 0.81 (mean). A comparison with the in situ discharge resulted in an Ens value of 0.71 for the validation samples (and 0.77 for calibration). The Ens values at the mouths of the rivers that experienced backwater effects significantly improved when the mean monthly slope was included in the RC. Our RCs were not mission-dependent, and the Ens value was preserved when applying ENVISAT rating curves to Jason-2 altimetry at crossovers. The cease-to-flow parameter of our RCs provided a good proxy for determining river bed elevation. This proxy was validated
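The rating-curve form implied above is a Manning-type power law, Q = a·(h − z0)^b, where z0 is the cease-to-flow elevation (the river-bed proxy). The study fits this with MCMC in a Bayesian framework; as a deliberately simpler sketch under synthetic data, the parameters can also be recovered by log-linear least squares with a grid search over z0. Everything below (data, grid, tolerances) is illustrative.

```python
import numpy as np

def fit_rating_curve(h, q, z0_grid):
    """Fit Q = a * (h - z0)**b by log-linear least squares, grid-searching
    the cease-to-flow elevation z0 (a proxy for river-bed elevation)."""
    best = None
    for z0 in z0_grid:
        d = h - z0
        if np.any(d <= 0):
            continue  # stage must exceed the cease-to-flow level
        # log Q = log a + b * log(h - z0): ordinary least squares
        A = np.vstack([np.ones_like(d), np.log(d)]).T
        coef, res, rank, _ = np.linalg.lstsq(A, np.log(q), rcond=None)
        rss = float(res[0]) if res.size else np.inf
        if best is None or rss < best[0]:
            best = (rss, np.exp(coef[0]), coef[1], float(z0))
    _, a, b, z0 = best
    return a, b, z0

# Synthetic stage-discharge pairs with known parameters a=5, b=1.7, z0=10
h = np.linspace(12.0, 20.0, 30)
q = 5.0 * (h - 10.0) ** 1.7
a, b, z0 = fit_rating_curve(h, q, z0_grid=np.arange(8.0, 11.5, 0.5))
```

A Bayesian/MCMC treatment, as used in the study, would additionally return parameter uncertainties rather than point estimates; the grid-plus-least-squares version above only shows why the cease-to-flow parameter is identifiable from stage-discharge pairs alone.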
NASA Astrophysics Data System (ADS)
Wang, Yunong; Cheng, Rongjun; Ge, Hongxia
2017-08-01
In this paper, a lattice hydrodynamic model is derived considering not only the effect of flow rate difference but also a delayed feedback control signal which includes more comprehensive information. The control method is used to analyze the stability of the model. Furthermore, the critical condition for linear steady traffic flow is deduced, and numerical simulation is carried out to investigate the advantage of the proposed model with and without the effect of flow rate difference and the control signal. The numerical results are consistent with the theoretical analysis.
Petersen, Mark D.; Zeng, Yuehua; Haller, Kathleen M.; McCaffrey, Robert; Hammond, William C.; Bird, Peter; Moschetti, Morgan; Shen, Zhengkang; Bormann, Jayne; Thatcher, Wayne
2014-01-01
The 2014 National Seismic Hazard Maps for the conterminous United States incorporate greater uncertainty in the fault slip-rate parameters that control earthquake-activity rates than was applied in previous versions of the hazard maps. This additional uncertainty is accounted for by new geodesy- and geology-based slip-rate models for the Western United States. Models that were considered include an updated geologic model based on expert opinion and four combined inversion models informed by both geologic and geodetic input. The two block models considered indicate significantly higher slip rates than the expert-opinion model and the two fault-based combined inversion models. For the hazard maps, we apply 20 percent weight, with equal weighting, for the two fault-based models. Off-fault geodetic-based models were not considered in this version of the maps. Resulting changes to the hazard maps are generally less than 0.05 g (acceleration of gravity). Future research will improve the maps and interpret differences between the new models.
Multi-sequence H.264/AVC Rate Control Based on the Linear Model
NASA Astrophysics Data System (ADS)
Pastuszak, Grzegorz; Pietrasiewicz, Andrzej
Multi-sequence video coding can distribute the bit budget among sequences to balance their qualities subject to a common limitation on the bit rate. This paper presents a method for selecting a common quantization parameter, which is applied concurrently to each sequence. The approach takes into account ρ-domain rate-distortion models kept independently for each video sequence and builds a common model. The output buffer is verified jointly for all the sequences and drives a joint bit allocation process. The method has been verified in simulation to demonstrate its usefulness in video encoding.
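The ρ-domain model referenced above is linear in the fraction ρ of quantized transform coefficients that are zero: R = θ·(1 − ρ). A minimal sketch of selecting one common QP across sequences under a joint budget might look as follows; the slope values θ and the ρ(QP) curves are hypothetical stand-ins for per-sequence models the encoder would maintain.

```python
def common_qp(theta, rho_curves, bit_budget, qp_range=range(0, 52)):
    """Select the smallest common quantization parameter whose predicted
    total rate fits the bit budget, using the rho-domain linear model
    R_i = theta_i * (1 - rho_i(QP)) for each sequence i."""
    for qp in qp_range:
        total = sum(t * (1.0 - rho(qp)) for t, rho in zip(theta, rho_curves))
        if total <= bit_budget:
            return qp  # lowest QP (best quality) that meets the budget
    return max(qp_range)

# Hypothetical per-sequence slopes and zero-coefficient fraction curves
theta = [8.0e5, 5.0e5]                          # bits when rho = 0
rho_curves = [lambda qp: min(1.0, 0.5 + qp / 100),
              lambda qp: min(1.0, 0.4 + qp / 80)]
qp = common_qp(theta, rho_curves, bit_budget=3.0e5)
```

In a real encoder the ρ(QP) mapping is estimated from coefficient histograms and updated per frame, and the joint buffer check described in the abstract would feed back into the budget; the sketch only shows the common-QP selection step.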
Estimation of inlet flow rates for image-based aneurysm CFD models: where and how to begin?
Valen-Sendstad, Kristian; Piccinelli, Marina; KrishnankuttyRema, Resmi; Steinman, David A
2015-06-01
Patient-specific flow rates are rarely available for image-based computational fluid dynamics models. Instead, flow rates are often assumed to scale according to the diameters of the arteries of interest. Our goal was to determine how choice of inlet location and scaling law affect such model-based estimation of inflow rates. We focused on 37 internal carotid artery (ICA) aneurysm cases from the Aneurisk cohort. An average ICA flow rate of 245 mL/min was assumed from the literature, and then rescaled for each case according to its inlet diameter squared (assuming a fixed velocity) or cubed (assuming a fixed wall shear stress). Scaling was based on diameters measured at various consistent anatomical locations along the models. Choice of location introduced a modest 17% average uncertainty in model-based flow rate, but within individual cases estimated flow rates could vary by >100 mL/min. A square law was found to be more consistent with physiological flow rates than a cube law. Although the impact of parent artery truncation on downstream flow patterns is well studied, our study highlights a more insidious and potentially equal impact of truncation site and scaling law on the uncertainty of assumed inlet flow rates and thus, potentially, downstream flow patterns.
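The square- and cube-law rescaling described above reduces to a one-line power law. In the sketch below, the reference flow of 245 mL/min comes from the abstract, while the reference diameter is a placeholder value, not a figure from the paper.

```python
def scaled_inflow(d_inlet_mm, d_ref_mm=4.0, q_ref_ml_min=245.0, exponent=2):
    """Rescale a literature-average ICA flow rate by inlet diameter.

    exponent=2: square law (assumes a fixed mean velocity);
    exponent=3: cube law (assumes a fixed wall shear stress).
    d_ref_mm is an illustrative reference diameter, not from the study.
    """
    return q_ref_ml_min * (d_inlet_mm / d_ref_mm) ** exponent

q_square = scaled_inflow(4.4)              # 10% wider inlet, square law
q_cube = scaled_inflow(4.4, exponent=3)    # same inlet, cube law
```

The comparison makes the abstract's point concrete: for the same 10% diameter deviation, the cube law inflates the assumed inflow more than the square law, so the choice of law (and of where the diameter is measured) directly shifts every downstream quantity in the CFD model.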
An Agent-Based Modeling Approach for Determining Corn Stover Removal Rate and Transboundary Effects
NASA Astrophysics Data System (ADS)
Gan, Jianbang; Langeveld, J. W. A.; Smith, C. T.
2014-02-01
Bioenergy production involves different agents with potentially different objectives, and an agent's decision often has transboundary impacts on other agents along the bioenergy value chain. Understanding and estimating the transboundary impacts is essential to portraying the interactions among the different agents and in the search for the optimal configuration of the bioenergy value chain. We develop an agent-based model to mimic the decision making by feedstock producers and feedstock-to-biofuel conversion plant operators and propose multipliers (i.e., ratios of economic values accruing to different segments and associated agents in the value chain) for assessing the transboundary impacts. Our approach is generic and thus applicable to a variety of bioenergy production systems at different sites and geographic scales. We apply it to the case of producing ethanol using corn stover in Iowa, USA. The results from the case study indicate that stover removal rate is site specific and varies considerably with soil type, as well as other factors, such as stover price and harvesting cost. In addition, ethanol production using corn stover in the study region would have strong positive ripple effects, with the values of multipliers varying with greenhouse gas price and national energy security premium. The relatively high multiplier values suggest that a large portion of the value associated with corn stover ethanol production would accrue to the downstream end of the value chain instead of stover producers.
Satisfaction Ratings of QOLPAV: Psychometric Properties Based on the Graded Response Model
ERIC Educational Resources Information Center
Chen, Ssu-Kuang; Hwang, Fang-Ming; Lin, Sunny S. J.
2013-01-01
A scale measuring quality of life (QOL) is important in adolescent research. Using the graded response model (GRM), this study evaluates the psychometric properties of the satisfaction ratings of the Quality of Life Profile Adolescent Version (QOLPAV). Data for 1,392 adolescents were used to check IRT assumptions such as unidimensionality and…
Evaluating crown fire rate of spread predictions from physics-based models
C. M. Hoffman; J. Ziegler; J. Canfield; R. R. Linn; W. Mell; C. H. Sieg; F. Pimont
2015-01-01
Modeling the behavior of crown fires is challenging due to the complex set of coupled processes that drive the characteristics of a spreading wildfire and the large range of spatial and temporal scales over which these processes occur. Detailed physics-based modeling approaches such as FIRETEC and the Wildland Urban Interface Fire Dynamics Simulator (WFDS) simulate...
Source mass eruption rate retrieved from satellite-based data using statistical modelling
NASA Astrophysics Data System (ADS)
Gouhier, Mathieu; Guillin, Arnaud; Azzaoui, Nourddine; Eychenne, Julia; Valade, Sébastien
2015-04-01
Ash clouds emitted during volcanic eruptions have long been recognized as a major hazard likely to have dramatic consequences on aircraft, the environment, and people. Thus, the International Civil Aviation Organization (ICAO) established nine Volcanic Ash Advisory Centers (VAACs) around the world, whose mission is to forecast the location and concentration of ash clouds over hours to days, using volcanic ash transport and dispersion models (VATDs). Those models use input parameters such as plume height (PH), particle size distribution (PSD), and mass eruption rate (MER), the latter being a key parameter as it directly controls the amount of ash injected into the atmosphere. The MER can be obtained rather accurately from detailed ground deposit studies, but this method does not match the operational requirements in case of a volcanic crisis. Thus, VAACs use empirical laws to determine the MER from the estimation of the plume height. In some cases, this method can be difficult to apply, either because plume height data are not available or because uncertainties related to this method are too large. We propose here an alternative method based on the utilization of satellite data to assess the MER at the source, during explosive eruptions. Satellite-based techniques allow fine ash cloud loading to be quantitatively retrieved far from the source vent. Those measurements can be carried out in a systematic and real-time fashion, using geostationary satellites in particular. Here we test the relationship likely to exist between the amount of fine ash dispersed in the atmosphere and the coarser tephra deposited on the ground, the sum of both contributions yielding an estimate of the MER. For this purpose we examined 19 eruptions (of known duration) in detail for which both (i) the amount of fine ash dispersed in the atmosphere, and (ii) the mass of tephra deposited on the ground have been estimated and published. We combined these data with contextual information that may
The Dynamics of Scaling: A Memory-Based Anchor Model of Category Rating and Absolute Identification
ERIC Educational Resources Information Center
Petrov, Alexander A.; Anderson, John R.
2005-01-01
A memory-based scaling model--ANCHOR--is proposed and tested. The perceived magnitude of the target stimulus is compared with a set of anchors in memory. Anchor selection is probabilistic and sensitive to similarity, base-level strength, and recency. The winning anchor provides a reference point near the target and thereby converts the global…
Zhang, Xiaoying; Liu, Chongxuan; Hu, Bill X.; Hu, Qinhong
2015-09-28
This study statistically analyzed a grain-size based additivity model that has been proposed to scale reaction rates and parameters from laboratory to field. The additivity model assumed that reaction properties in a sediment, including surface area, reactive site concentration, reaction rate, and extent, can be predicted from field-scale grain size distribution by linearly adding reaction properties for individual grain size fractions. This study focused on the statistical analysis of the additivity model with respect to reaction rate constants, using multi-rate uranyl (U(VI)) surface complexation reactions in a contaminated sediment as an example. Experimental data of rate-limited U(VI) desorption in a stirred flow-cell reactor were used to estimate the statistical properties of multi-rate parameters for individual grain size fractions. The statistical properties of the rate constants for the individual grain size fractions were then used to analyze the statistical properties of the additivity model to predict rate-limited U(VI) desorption in the composite sediment, and to evaluate the relative importance of individual grain size fractions to the overall U(VI) desorption. The results indicated that the additivity model provided a good prediction of the U(VI) desorption in the composite sediment. However, the rate constants were not directly scalable using the additivity model, and U(VI) desorption in individual grain size fractions has to be simulated in order to apply the additivity model. An approximate additivity model for directly scaling rate constants was subsequently proposed and evaluated. The results showed that the approximate model provided a good prediction of the experimental results within statistical uncertainty. This study also found that a gravel size fraction (2-8 mm), which is often ignored in modeling U(VI) sorption and desorption, is statistically significant to the U(VI) desorption in the sediment.
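The additivity assumption has a simple algebraic form: a composite property is the mass-weighted linear sum of per-fraction values. The sketch below shows that form with hypothetical fractions and rate constants; note the study's caveat that rate constants themselves were not directly scalable this way, so the sum is best read as applying to properties such as surface area or reactive site concentration, or to the approximate model the authors propose.

```python
def additive_composite(mass_fractions, fraction_props):
    """Additivity assumption: a composite reaction property is the
    mass-weighted linear sum of the per-grain-size-fraction values."""
    if abs(sum(mass_fractions) - 1.0) > 1e-9:
        raise ValueError("mass fractions must sum to 1")
    return sum(w * p for w, p in zip(mass_fractions, fraction_props))

# Hypothetical sediment with three grain-size fractions
weights = [0.5, 0.3, 0.2]            # mass fractions of each size class
rates = [1.2e-5, 4.0e-6, 8.0e-7]     # per-fraction rate constants (s^-1)
k_composite = additive_composite(weights, rates)
```

Because the sum is linear, dropping a fraction (as is often done with the gravel class) simply deletes its term, which is why the study's finding that the 2-8 mm fraction is statistically significant matters for field-scale predictions.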
Evaluation of Finite-Rate Gas-Surface Interaction Models for a Carbon-Based Ablator
NASA Technical Reports Server (NTRS)
Chen, Yih-Kanq; Goekcen, Tahir
2015-01-01
Two sets of finite-rate gas-surface interaction models between air and the carbon surface are studied. The first set is an engineering model with one-way chemical reactions, and the second set is a more detailed model with two-way chemical reactions. These two proposed models are intended to cover carbon surface ablation conditions including low-temperature rate-controlled oxidation, mid-temperature diffusion-controlled oxidation, and high-temperature sublimation. The prediction of carbon surface recession is achieved by coupling a material thermal response code and a Navier-Stokes flow code. The material thermal response code used in this study is the Two-dimensional Implicit Thermal-response and Ablation Program, which predicts charring material thermal response and shape change on hypersonic space vehicles. The flow code solves the reacting full Navier-Stokes equations using the Data Parallel Line Relaxation method. Recession analyses of stagnation tests conducted in NASA Ames Research Center arc-jet facilities with heat fluxes ranging from 45 to 1100 W/cm2 are performed and compared with data for model validation. The ablating material used in these arc-jet tests is Phenolic Impregnated Carbon Ablator. Additionally, computational predictions of surface recession and shape change are in good agreement with measurements for arc-jet conditions of the Small Probe Reentry Investigation for Thermal Protection System Engineering.
Development of a QTL-environment-based predictive model for node addition rate in common bean.
Zhang, Li; Gezan, Salvador A; Eduardo Vallejos, C; Jones, James W; Boote, Kenneth J; Clavijo-Michelangeli, Jose A; Bhakta, Mehul; Osorno, Juan M; Rao, Idupulapati; Beebe, Stephen; Roman-Paoli, Elvin; Gonzalez, Abiezer; Beaver, James; Ricaurte, Jaumer; Colbert, Raphael; Correll, Melanie J
2017-05-01
This work reports the effects of the genetic makeup, the environment and the genotype by environment interactions for node addition rate in an RIL population of common bean. This information was used to build a predictive model for node addition rate. To select a plant genotype that will thrive in targeted environments it is critical to understand the genotype by environment interaction (GEI). In this study, multi-environment QTL analysis was used to characterize node addition rate (NAR, nodes/day) on the main stem of the common bean (Phaseolus vulgaris L.). This analysis was carried out with field data of 171 recombinant inbred lines that were grown at five sites (Florida, Puerto Rico, 2 sites in Colombia, and North Dakota). Four QTLs (Nar1, Nar2, Nar3 and Nar4) were identified, one of which had significant QTL by environment interactions (QEI), that is, Nar2 with temperature. Temperature was identified as the main environmental factor affecting NAR while day length and solar radiation played a minor role. Integration of sites as covariates into a QTL mixed site-effect model, and further replacing the site component with explanatory environmental covariates (i.e., temperature, day length and solar radiation) yielded a model that explained 73% of the phenotypic variation for NAR with root mean square error of 16.25% of the mean. The QTL consistency and stability was examined through a tenfold cross validation with different sets of genotypes and these four QTLs were always detected with 50-90% probability. The final model was evaluated using the leave-one-site-out method to assess the influence of site on node addition rate. These analyses provided a quantitative measure of the effects on NAR of common beans exerted by the genetic makeup, the environment and their interactions.
NASA Astrophysics Data System (ADS)
Jeyakumar, Lordwin; Zhao, Yaqian
2014-05-01
Increased awareness of the impacts of diffuse pollution and their intensification has pushed forward the need for the development of low-cost wastewater treatment techniques. One such effort is the use of novel DASC (Dewatered Alum Sludge Cakes) based constructed wetlands (CWs) for removing nutrients, organics, trace elements and other pollutants from wastewater. Understanding of the processes in CWs requires a numerical model that describes the biochemical transformation and degradation processes in subsurface vertical flow (VF) CWs. Therefore, this research focuses on the development of a process-based model for phosphorus (P) and nitrogen (N) removal to achieve a stable performance by using DASC as a substrate in CWs treatment systems. An object-oriented modelling tool known as "STELLA", which works on the principle of system dynamics, is used for the development of the P and N model. The core objective of the modelling work is oriented towards understanding the processes in DASC-based CWs and optimizing design criteria. The P and N dynamic model is developed for DASC-based CWs. The P model developed exclusively for DASC-based CWs was able to simulate the effluent P concentration leaving the system satisfactorily. Moreover, the developed P dynamic model has identified the major P pathways as adsorption (72%) followed by plant uptake (20%) and microbial uptake (7%) in a single-stage laboratory scale DASC-based CW. Similarly, a P dynamic simulation model was developed to simulate the four-stage laboratory scale DASC-based CWs. It was found that simulated and observed values of P removal were in good agreement. The fate of P in all four stages clearly shows that adsorption played a pivotal role in each stage of the system due to the use of the DASC as a substrate. P adsorption by wetland substrate/DASC represents 59-75% of total P reduction. Subsequently, plant uptake and microbial uptake have a lesser role regarding P removal (as compared to adsorption). With regard
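The study builds its pathway model in STELLA; as a minimal Python analogue, competing first-order sinks on a single well-mixed P pool partition the removed mass in proportion to their rate constants. The constants below are chosen only so the shares echo the reported adsorption/plant/microbial split and are not calibrated to the DASC beds.

```python
def simulate_p(c_in, k, dt=0.01, t_end=10.0):
    """Euler integration of a single well-mixed P pool drained by
    competing first-order sinks (adsorption, plant and microbial
    uptake). Rate constants are illustrative, not calibrated."""
    c = c_in
    removed = {name: 0.0 for name in k}
    for _ in range(int(t_end / dt)):
        # All sinks draw from the same concentration within a step,
        # so removal shares stay proportional to the rate constants.
        fluxes = {name: ki * c * dt for name, ki in k.items()}
        for name, f in fluxes.items():
            removed[name] += f
        c -= sum(fluxes.values())
    return c, removed

k = {"adsorption": 0.72, "plant": 0.20, "microbial": 0.07}
c_out, removed = simulate_p(10.0, k)
share = removed["adsorption"] / sum(removed.values())
```

A system-dynamics tool like STELLA adds stocks, flows and feedbacks on top of this skeleton (e.g. a finite adsorption capacity for the DASC substrate), which is what lets the full model reproduce effluent concentrations rather than just removal shares.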
The effects of composition on glass dissolution rates: The application of four models to a data base
Geldart, R.W.; Kindle, C.H.
1988-01-01
Four models have been applied to a data base to relate glass dissolution in distilled water to composition. The data base is used to compare the precisions obtained from the models in fitting actual data. The usefulness of the data base in formulating a model is also demonstrated. Two related models in which the composite or pH-adjusted free energy of hydration of the glass is the correlating parameter are compared with experimental data. In a structural model, the nonbridging oxygen content of the glasses is used to correlate glass dissolution rate to composition. In a model formulated for this report, the cation valence and the oxygen content of the glass are compared with observed dissolution rates. The models were applied to the 28-day normalized silica release at 90 °C for over 285 glass compositions with surface area to volume ratios of 10 m⁻¹ (Materials Characterization Center MCC-1 glass durability test using distilled water). These glasses included the nonradioactive analogs of WV205 and SRL-165, as well as SRL-131, PNL 76-68, and a European glass, UK209. Predicted glass dissolution rates show similar fits to the data for all four models. The predictions of the models were also plotted for two subsets of the glasses: waste glasses and Savannah River Laboratory glasses. The model predictions fit the data for these groups much better than they fit the data for the entire set of glasses. 14 refs., 12 figs., 7 tabs.
[Estimation of the backscattering rate in Lake Chaohu based on a bio-optical model].
Liu, Zhong-Hua; Li, Yun-Mei; Lü, Heng; Xu, Yi-Fan; Xu, Xin; Huang, Jia-Zhu
2011-02-01
The backscattering rate is an important factor in the above-water spectrum. A bio-optical model, simplified in the near-infrared bands, was constructed to estimate the backscattering rate in Lake Chaohu using in-situ data measured in June 2009. The results show that the maximum backscattering rate in Lake Chaohu is 0.059, the minimum is 0.0014 and the mean is 0.0236. Both the backscattering rate and its spatial variability are greater in the west of Lake Chaohu than in the east. In addition, the particle refractive index of each sampling site was calculated from the backscattering rate, and the dominant in-water particle types were determined according to the range of refractive indices. The results show that 74% of sampling sites are dominated by inorganic particles, 18.5% are dominated by both phytoplankton and inorganic particles, and the remaining 7.4% are dominated by phytoplankton. This indicates that the backscattering character of Lake Chaohu is mainly governed by inorganic particles, while the effect of algal particles is relatively small.
Su, Xiang; Wang, Gang; Li, Jianfeng; Rong, Yiming
2016-01-01
The effects of strain rate and temperature on the dynamic behavior of a Fe-based high-temperature alloy were studied at strain rates of 0.001-12,000 s⁻¹ and temperatures ranging from room temperature to 800 °C. A phenomenological (power-law) constitutive model was proposed that accounts for adiabatic temperature rise and accurate material thermophysical properties; in particular, the effect of the specific heat capacity on the adiabatic temperature rise was examined. The constitutive model was verified to be accurate by comparison of predicted and experimental results.
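The abstract names a power-law constitutive form with an adiabatic temperature-rise correction but gives no equations. The sketch below illustrates one common reading of such a model; the functional form and every constant (K, n, m, melting temperature, density, heat capacity, Taylor-Quinney fraction) are illustrative placeholders, not the paper's fitted parameters for the Fe-based alloy.

```python
def flow_stress(strain, strain_rate, temperature,
                K=1500.0, n=0.12, m=0.015,
                ref_rate=1e-3, T_ref=293.0, T_melt=1700.0):
    """Flow stress (MPa): power-law hardening x rate sensitivity x
    linear thermal softening. All constants are assumed values."""
    hardening = K * strain ** n
    rate_term = (strain_rate / ref_rate) ** m
    softening = 1.0 - (temperature - T_ref) / (T_melt - T_ref)
    return hardening * rate_term * softening

def adiabatic_temperature_rise(strains, stresses_mpa,
                               density=7850.0, cp=460.0, beta=0.9):
    """Temperature rise (K) from plastic work, assuming a fraction
    beta of the work converts to heat (Taylor-Quinney assumption)."""
    dT = 0.0
    for i in range(1, len(strains)):
        d_eps = strains[i] - strains[i - 1]
        sigma = 0.5 * (stresses_mpa[i] + stresses_mpa[i - 1]) * 1e6  # Pa
        dT += beta * sigma * d_eps / (density * cp)
    return dT
```

In a full model the temperature fed to `flow_stress` would be updated with the computed rise at each strain increment, coupling hardening and thermal softening.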
NASA Astrophysics Data System (ADS)
Winarti, Yuyun Guna; Noviyanti, Lienda; Setyanto, Gatot R.
2017-03-01
Stock investment is a high-risk investment, so derivative securities exist to reduce that risk; one of them is the Asian option. The most fundamental problem for any option is its pricing. The factors that determine an option's price are the underlying asset price, strike price, maturity date, volatility, risk-free interest rate and dividends. Option pricing models usually assume the risk-free interest rate is constant, while in reality it is a stochastic process. Because the arithmetic Asian option has no closed-form distribution, its pricing is done with a modified Black-Scholes model; in this research, the modification uses the Curran approximation. The research focuses on arithmetic Asian option pricing without dividends. The data used are the daily closing stock prices of Telkom from January 1, 2016 to June 30, 2016. Finally, the resulting option prices can be used in an option trading strategy.
Innovative model-based flow rate optimization for vanadium redox flow batteries
NASA Astrophysics Data System (ADS)
König, S.; Suriyah, M. R.; Leibfried, T.
2016-11-01
In this paper, an innovative approach is presented to optimize the flow rate of a 6-kW vanadium redox flow battery with realistic stack dimensions. Efficiency is derived using a multi-physics battery model and a newly proposed instantaneous efficiency determination technique. An optimization algorithm is applied to identify optimal flow rates for operating points defined by state-of-charge (SoC) and current. The proposed method is evaluated against the conventional approach of applying Faraday's first law of electrolysis, scaled by the so-called flow factor. To make a fair comparison, the flow factor is also optimized by simulating cycles with different charging/discharging currents. The results show that efficiency is increased by up to 1.2 percentage points; in addition, discharge capacity is increased by up to 1.0 kWh, or 5.4%. A detailed loss analysis is carried out for the cycles with maximum and minimum charging/discharging currents, showing that the proposed method minimizes the sum of losses caused by concentration over-potential, pumping and diffusion. Furthermore, for the deployed Nafion 115 membrane, diffusion losses are observed to increase with stack SoC. Therefore, to decrease stack SoC and lower diffusion losses, a higher flow rate during charging than during discharging is reasonable.
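The conventional baseline the paper compares against, Faraday's first law scaled by a flow factor, can be sketched as follows. The cell count, concentration and flow-factor values are illustrative assumptions, not the paper's 6-kW stack parameters.

```python
F = 96485.0  # Faraday constant (C/mol)

def stoichiometric_flow(current_a, n_cells, c_vanadium_mol_l, soc, charging):
    """Minimum electrolyte flow (L/s) that delivers exactly the reactant
    consumed by the stack current (Faraday's first law). During charging
    the available reactant concentration scales with (1 - SoC), during
    discharging with SoC."""
    c_react = c_vanadium_mol_l * ((1.0 - soc) if charging else soc)
    return n_cells * current_a / (F * c_react)

def controlled_flow(current_a, n_cells, c_vanadium_mol_l, soc,
                    flow_factor=7.0, charging=True):
    """Conventional control: stoichiometric flow scaled by a constant
    flow factor (the value 7.0 is purely illustrative)."""
    return flow_factor * stoichiometric_flow(
        current_a, n_cells, c_vanadium_mol_l, soc, charging)
```

The paper's contribution is to replace the single constant `flow_factor` with flow rates optimized per (SoC, current) operating point.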
A Predictive Model for Vehicle Air Exchange Rates based on a Large, Representative Sample
Fruin, Scott A.; Hudda, Neelakshi; Sioutas, Constantinos; Delfino, Ralph J.
2014-01-01
The in-vehicle microenvironment is an important route of exposure to traffic-related pollutants, particularly ultrafine particles. However, significant particle losses can occur under conditions of low air exchange rate (AER) when windows are closed and air is recirculating. AERs are lower for newer vehicles and at lower speeds. Despite the importance of AER in affecting in-vehicle particle exposures, few studies have characterized AER and all have tested only a small number of cars. One reason for this is the difficulty in measuring AER with tracer gases such as SF6, the most common method. We developed a simplified yet accurate method for determining AER using the occupants' own production of CO2, a convenient compound to measure. By measuring initial CO2 build-up rates and equilibrium values of CO2 at fixed speeds, AER was calculated for 59 vehicles representative of California's fleet. AER measurements correlated and agreed well with the largest other study conducted (R² = 0.83). Multi-variable models captured 70% of the variability in observed AER using only age, mileage, manufacturer and speed. These results will be useful to exposure and epidemiological studies since all model variable values are easily obtainable through questionnaire. PMID:21428392
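The steady-state CO2 mass balance behind the equilibrium part of this method can be sketched as below. The per-person CO2 generation rate is a rough assumed literature value, and the function is an illustration, not the authors' published procedure.

```python
def aer_from_co2(co2_eq_ppm, co2_ambient_ppm, cabin_volume_m3,
                 n_occupants=1, co2_gen_m3_h=0.030):
    """Air exchange rate (1/h) from the steady-state mass balance
    G = AER * V * (C_eq - C_amb), where G is the occupants' CO2
    generation. The default ~0.030 m^3/h per adult is an assumption."""
    delta = (co2_eq_ppm - co2_ambient_ppm) * 1e-6  # ppm -> volume fraction
    return n_occupants * co2_gen_m3_h / (cabin_volume_m3 * delta)
```

For example, a 3 m³ cabin that equilibrates 2000 ppm above ambient implies an AER of about 5 h⁻¹ under these assumptions; a higher equilibrium CO2 implies a lower AER.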
NASA Astrophysics Data System (ADS)
Lim, Hojun; Battaile, Corbett C.; Carroll, Jay D.; Boyce, Brad L.; Weinberger, Christopher R.
2015-01-01
In this work, we develop a crystal plasticity finite element model (CP-FEM) that constitutively captures the temperature and strain rate dependent flow stresses in pure BCC refractory metals. This model is based on the kink-pair theory developed by Seeger (1981) and is calibrated to available data from single crystal experiments to produce accurate and convenient constitutive laws that are implemented into a BCC crystal plasticity model. The model is then used to predict temperature and strain rate dependent yield stresses of single and polycrystal BCC refractory metals (molybdenum, tantalum, tungsten and niobium) and compared with existing experimental data. To connect to larger length scales, classical continuum-scale constitutive models are fit to the CP-FEM predictions of polycrystal yield stresses. The results produced by this model, based on kink-pair theory and with origins in dislocation mechanics, show excellent agreement with the Mechanical Threshold Stress (MTS) model for temperature and strain-rate dependent flow. This framework provides a method to bridge multiple length scales in modeling the deformation of BCC metals.
A biophysically-based neuromorphic model of spike rate- and timing-dependent plasticity.
Rachmuth, Guy; Shouval, Harel Z; Bear, Mark F; Poon, Chi-Sang
2011-12-06
Current advances in neuromorphic engineering have made it possible to emulate complex neuronal ion channel and intracellular ionic dynamics in real time using highly compact and power-efficient complementary metal-oxide-semiconductor (CMOS) analog very-large-scale-integrated circuit technology. Recently, there has been growing interest in the neuromorphic emulation of the spike-timing-dependent plasticity (STDP) Hebbian learning rule by phenomenological modeling using CMOS, memristor or other analog devices. Here, we propose a CMOS circuit implementation of a biophysically grounded neuromorphic (iono-neuromorphic) model of synaptic plasticity that is capable of capturing both the spike rate-dependent plasticity (SRDP, of the Bienenstock-Cooper-Munro or BCM type) and STDP rules. The iono-neuromorphic model reproduces bidirectional synaptic changes with NMDA receptor-dependent and intracellular calcium-mediated long-term potentiation or long-term depression assuming retrograde endocannabinoid signaling as a second coincidence detector. Changes in excitatory or inhibitory synaptic weights are registered and stored in a nonvolatile and compact digital format analogous to the discrete insertion and removal of AMPA or GABA receptor channels. The versatile Hebbian synapse device is applicable to a variety of neuroprosthesis, brain-machine interface, neurorobotics, neuromimetic computation, machine learning, and neural-inspired adaptive control problems.
An enhanced rate-based emission trading program for NOX: the Dutch model.
Sholtz, A M; Van Amburg, B; Wochnick, V K
2001-12-01
Since 1997 government and industry in The Netherlands have been engaged in intensive policy discussions on how to design an emission trading program that would satisfy the Government's policy objectives within the national and international regulatory framework and accommodate industry's need for a flexible and cost-effective approach. Early on in the discussion the most promising solution was a rate-based approach, which dynamically allocated saleable emission credits based on a performance standard rate and actual energy used by facilities. All industrial facilities above a threshold of 20 MWth would be judged on their ability to meet this performance rate. Those "cleaner" than the standard can sell excess credits to others with an allocation that is less than their actual NOX emission. With some changes in law, such a design could be made to fit well into the national and EU legislative framework while at the same time uniquely meeting industry's requirement of flexibility toward economic growth and facility expansion. (An analysis of the legislative changes required will be given in a separate paper by Chris Dekkers.) However, the environmental outcome of such a system is not as certain as under an absolute emission cap. At the request of the Netherlands Ministry of Housing, Spatial Planning and the Environment (VROM), Automated Credit Exchange (ACE), in close cooperation with the working group of government and industry representatives introduced a number of features into the Dutch NOX program allowing full exploitation of market mechanisms while allowing intermediate adjustments in the performance standard rates. The design is geared toward meeting environmental targets without jeopardizing the trading market the program intends to create. The paper discusses the genesis of the two-tier credit system ACE helped to design, explains the differences between primary (fixed) and secondary (variable) credits, and outlines how the Dutch system is expected to
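The core allocation arithmetic of such a rate-based scheme, saleable credits equal to the performance-standard allocation minus actual emissions, can be sketched as below. The units and values are illustrative, not the Dutch program's actual rules.

```python
def nox_credits(actual_nox_kg, energy_used_gj, psr_kg_per_gj):
    """Saleable NOx credits (kg): allocation (performance standard rate
    x actual energy used) minus actual emissions. Positive -> facility
    is 'cleaner' than the standard and can sell; negative -> it must
    buy credits to cover the shortfall."""
    return psr_kg_per_gj * energy_used_gj - actual_nox_kg
```

Because the allocation scales with actual energy use, a growing facility's credit baseline grows with it, which is the flexibility toward economic growth the abstract describes.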
Evolution of the rate of biological aging using a phenotype based computational model.
Kittas, Aristotelis
2010-10-07
In this work I introduce a simple model to study how natural selection acts upon aging, focusing on the viability of each individual. The model reproduces the Gompertz law of mortality and can make predictions about the relation between mutation rates (beneficial/deleterious/neutral), age at reproductive maturity and the degree of biological aging. With no mutations, a population with low age at reproductive maturity R stabilizes at higher density values, while with mutations it reaches its maximum density, because even for long pre-reproductive periods each individual evolves to survive to maturity. Species with very short pre-reproductive periods can tolerate only a small number of detrimental mutations. The probabilities of detrimental (P(d)) or beneficial (P(b)) mutations are demonstrated to greatly affect the process: high absolute values produce peaks in the viability of the population over time, and mutations combined with low selection pressure move the system towards weaker phenotypes. For low values of the ratio P(d)/P(b), the speed at which aging occurs is almost independent of R, while higher values significantly favor species with high R. The value of R is critical to whether the population survives or dies out. The aging rate is controlled by P(d) and P(b), which modify the viability of each individual, with neutral mutations allowing the system more "room" to evolve. Even in this simple model, the process of aging is revealed to be fairly complex, yielding a rich variety of results.
Probabilistic short-term forecasting of eruption rate at Kīlauea Volcano using a physics-based model
NASA Astrophysics Data System (ADS)
Anderson, K. R.
2016-12-01
Deterministic models of volcanic eruptions yield predictions of future activity conditioned on uncertainty in the current state of the system. Physics-based eruption models are well-suited for deterministic forecasting as they can relate magma physics with a wide range of observations. Yet, physics-based eruption forecasting is strongly limited by an inadequate understanding of volcanic systems, and the need for eruption models to be computationally tractable. At Kīlauea Volcano, Hawaii, episodic depressurization-pressurization cycles of the magma system generate correlated, quasi-exponential variations in ground deformation and surface height of the active summit lava lake. Deflations are associated with reductions in eruption rate, or even brief eruptive pauses, and thus partly control lava flow advance rates and associated hazard. Because of the relatively well-understood nature of Kīlauea's shallow magma plumbing system, and because more than 600 of these events have been recorded to date, they offer a unique opportunity to refine a physics-based effusive eruption forecasting approach and apply it to lava eruption rates over short (hours to days) time periods. A simple physical model of the volcano ascribes observed data to temporary reductions in magma supply to an elastic reservoir filled with compressible magma. This model can be used to predict the evolution of an ongoing event, but because the mechanism that triggers events is unknown, event durations are modeled stochastically from previous observations. A Bayesian approach incorporates diverse data sets and prior information to simultaneously estimate uncertain model parameters and future states of the system. Forecasts take the form of probability distributions for eruption rate or cumulative erupted volume at some future time. Results demonstrate the significant uncertainties that still remain even for short-term eruption forecasting at a well-monitored volcano - but also the value of a physics-based
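A minimal version of the elastic-reservoir picture described above, with reservoir pressure relaxing toward a supply-dependent steady state and eruption rate proportional to pressure, can be sketched as follows. All parameter values are arbitrary illustrations, not calibrated Kīlauea values, and the real forecasting approach wraps such a model in a Bayesian treatment of parameter and duration uncertainty.

```python
def simulate_di_event(q_supply=2.0, q_reduced=0.5, k=1.0,
                      capacitance=5.0, event_len=12.0, total=36.0, dt=0.01):
    """Euler integration of dp/dt = (q_in - k*p)/C with a temporary
    magma-supply reduction; the eruption rate q_out = k*p deflates and
    then recovers quasi-exponentially, mimicking the deflation-
    inflation events described in the abstract."""
    p = q_supply / k               # start at the pre-event steady state
    t, rates = 0.0, []
    while t < total:
        q_in = q_reduced if t < event_len else q_supply
        p += dt * (q_in - k * p) / capacitance
        rates.append(k * p)
        t += dt
    return rates
```

The time constant of the quasi-exponential response is C/k, so jointly estimating reservoir capacitance and outlet conductance from observed deflation curves is the natural inverse problem.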
NASA Astrophysics Data System (ADS)
Martin, Yvonne E.; Johnson, E. A.; Chaikina, Olga
2017-02-01
Rainfall, snowmelt and/or other mass movements are possible triggers of debris flows. In supply-limited landscapes, clastic and organic materials (together termed debris) accumulate in the gully via various geomorphic processes on the gully sidewalls; this phenomenon has been conceptualized as the gully recharge rate, and several recent field studies have measured such rates in coastal British Columbia. In the present study, a simple numerical model is introduced to estimate debris flow volumes in Haida Gwaii, British Columbia based on debris flow recurrence intervals, gully recharge rates and factors affecting deposition of debris flow material. Debris flow volumes obtained in model runs are about half the field-based values, a reasonable result for this exploratory study. The annual erosion rate (clastic material) for debris flows in the model run is 0.031 mm yr⁻¹. This value is about 0.57× the field-based value and is lower than the erosion rate for debris slides in Haida Gwaii of 0.1 mm yr⁻¹. Deposition of debris flows in the model occurs in 60% of cases due to a decrease in channel gradient, with deposition resulting from high stream junction angles being less common. Initiation of debris flow deposition was located in stream orders 3 and 4 in 60% of cases. Sensitivity analysis shows that, compared to other model variables, the recharge rate has the greatest effect on the statistics and frequency distributions of debris flow volumes and on the total debris flow volume (the sum of all debris activity in a basin) over the study period.
Lum, Kristian; Swarup, Samarth; Eubank, Stephen; Hawdon, James
2014-09-06
We build an agent-based model of incarceration based on the susceptible-infected-suspectible (SIS) model of infectious disease propagation. Our central hypothesis is that the observed racial disparities in incarceration rates between Black and White Americans can be explained as the result of differential sentencing between the two demographic groups. We demonstrate that if incarceration can be spread through a social influence network, then even relatively small differences in sentencing can result in large disparities in incarceration rates. Controlling for effects of transmissibility, susceptibility and influence network structure, our model reproduces the observed large disparities in incarceration rates given the differences in sentence lengths for White and Black drug offenders in the USA without extensive parameter tuning. We further establish the suitability of the SIS model as applied to incarceration by demonstrating that the observed structural patterns of recidivism are an emergent property of the model. In fact, our model shows a remarkably close correspondence with California incarceration data. This work advances efforts to combine the theories and methods of epidemiology and criminology.
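A toy version of the SIS-on-a-network mechanism, where the only difference between two groups is sentence length (the "recovery" time), might look like the following. The ring network, transmission probability and sentence lengths are illustrative assumptions, not the paper's calibrated model.

```python
import random

def sis_incarceration(n=200, steps=400, p_transmit=0.02,
                      sentences=(5, 20), seed=7):
    """Toy SIS dynamics on a ring network: individuals alternate
    between two groups whose only difference is sentence length.
    Returns the final number incarcerated in each group."""
    rng = random.Random(seed)
    group = [i % 2 for i in range(n)]
    remaining = [0] * n              # >0 means currently incarcerated
    remaining[0] = sentences[0]      # seed one incarceration
    for _ in range(steps):
        nxt = list(remaining)
        for i in range(n):
            if remaining[i] > 0:
                nxt[i] -= 1          # serve out the sentence
            else:
                # "infection" pressure from incarcerated ring neighbours
                for j in ((i - 1) % n, (i + 1) % n):
                    if remaining[j] > 0 and rng.random() < p_transmit:
                        nxt[i] = sentences[group[i]]
        remaining = nxt
    return (sum(1 for i in range(n) if remaining[i] > 0 and group[i] == 0),
            sum(1 for i in range(n) if remaining[i] > 0 and group[i] == 1))
```

In SIS terms, a longer sentence is a longer infectious period, which raises both a group's prevalence and its transmission to neighbours; this is the amplification mechanism behind the paper's central hypothesis.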
Model-based setting of inspiratory pressure and respiratory rate in pressure-controlled ventilation.
Schranz, C; Becher, T; Schädler, D; Weiler, N; Möller, K
2014-03-01
Mechanical ventilation carries the risk of ventilator-induced lung injury (VILI). To minimize this risk, ventilator settings should be adapted to the individual patient's properties. Mathematical models of respiratory mechanics are able to capture the individual physiological condition and can be used to derive personalized ventilator settings. This paper presents model-based calculations of inspiratory pressure (pI) and inspiration and expiration times (tI, tE) in pressure-controlled ventilation (PCV), and a retrospective evaluation of the results in a group of mechanically ventilated patients. Incorporating the identified first-order model of respiratory mechanics into the basic equation of alveolar ventilation yields a nonlinear relation between ventilation parameters during PCV. Given this patient-specific relation, optimized settings in terms of minimal pI and adequate tE can be obtained. We then retrospectively analyzed data from 16 ICU patients with mixed pathologies, whose ventilation had previously been optimized by ICU physicians with the goal of minimizing inspiratory pressure, and compared the algorithm's 'optimized' settings to those chosen by the physicians. The presented algorithm visualizes the patient-specific relation between inspiratory pressure and inspiration time. The algorithm's calculated results correlate highly with the physicians' ventilation settings, with r = 0.975 for the inspiratory pressure and r = 0.902 for the inspiration time. The nonlinear patient-specific relations between ventilation parameters become transparent and support the determination of individualized ventilator settings according to therapeutic goals. Thus, the algorithm is feasible for a variety of ventilated ICU patients and has the potential to improve lung-protective ventilation by minimizing inspiratory pressures and helping to avoid the build-up of clinically significant intrinsic positive end-expiratory pressure.
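The inversion described above, a first-order (RC) respiratory model substituted into the alveolar ventilation equation VA = RR × (VT − VD), can be sketched as follows. Variable names, units and the assumed inspiratory duty fraction are illustrative, not the paper's algorithm.

```python
import math

def tidal_volume(p_insp, peep, compliance, resistance, t_insp):
    """First-order (RC) respiratory model: volume (L) delivered during
    one pressure-controlled inspiration of duration t_insp (s).
    compliance in L/cmH2O, resistance in cmH2O/(L/s)."""
    tau = resistance * compliance
    return compliance * (p_insp - peep) * (1.0 - math.exp(-t_insp / tau))

def required_p_insp(target_va, rr, dead_space, peep,
                    compliance, resistance, duty=1.0 / 3.0):
    """Invert VA = RR * (VT - VD) to obtain the inspiratory pressure
    (cmH2O) achieving a target alveolar ventilation (L/min). 'duty' is
    an assumed inspiratory fraction of the breath cycle."""
    t_insp = (60.0 / rr) * duty
    vt_needed = target_va / rr + dead_space
    tau = resistance * compliance
    return peep + vt_needed / (compliance * (1.0 - math.exp(-t_insp / tau)))
```

Because the exponential term couples pI and tI nonlinearly, sweeping the respiratory rate traces out the patient-specific pressure/time trade-off curve the paper visualizes.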
A model-based evaluation of sedimentary reconstructions of 10Be production rates
NASA Astrophysics Data System (ADS)
Carney, Lewis; Plancherel, Yves; Khatiwala, Samar; Henderson, Gideon
2016-04-01
Atmospheric production of 10Be is small when solar activity and, therefore, solar magnetic field and total solar irradiance are strong. Variations in solar activity affect climate and the production of other climate-relevant isotopes, such as 14C. Solar activity is thus an important variable to constrain. Since 10Be production is clearly related to solar activity and the cycle of beryllium is simpler than that of carbon, 10Be records in ice cores have been used to reconstruct total solar irradiance variability. Unfortunately, 10Be records in ice cores are not only affected by variations in atmospheric production, but are also modulated by changes in wind patterns since spatiotemporal atmospheric 10Be gradients are quite large. In that context, sedimentary 10Be records from the abyssal ocean could be of great interest: since the residence time of 10Be in the ocean is thought to be comparable to the overturning time-scale of the ocean, spatial 10Be gradients may be relatively weaker than those in the atmosphere. Under these conditions, regional oceanic variability should only weakly affect the distribution of 10Be in the ocean and local sedimentary 10Be records are expected to represent the global average 10Be production better than 10Be measured in ice cores. We here show results from a global ocean model of 10Be that we use to investigate the spatial variability of simulated sedimentary 10Be records and test the sensitivity of the 10Be sedimentary flux to uncertainties in the circulation field and in the particle chemistry of beryllium. Our ocean model is based on the Transport Matrix method. The surface 10Be input fluxes are taken from atmospheric model simulations. Our model experiments, constrained by available dissolved 10Be data, show that there exist regions in the ocean where the sedimentary 10Be flux is relatively insensitive to changes in input patterns and magnitudes, assumed particle chemistry and flux patterns, and ocean circulation. We submit that
NASA Astrophysics Data System (ADS)
Chen, Jian; Dong, Tiekuang; Ren, Zhongzhou
2017-04-01
A physical model based on the open-source toolkit Geant4 for production rates of cosmogenic nuclei on the lunar surface is proposed and calibrated. The fluxes of proton and neutron beneath the lunar surface are obtained by simulating the physical processes between the cosmic-ray particles and the lunar surface material. By combining the experimental proton cross sections and the a posteriori neutron cross sections, we calculate the production rate depth profiles of long-lived nuclei (10Be, 14C, 26Al, 36Cl, and 53Mn). Through comparing experimental and theoretical data for these nuclei, we find that for all the selected nuclei, experimental and theoretical production rate depth profiles agree well with each other by introducing a single normalization factor. It means that the physical model based on Geant4 can also reproduce the depth profiles of cosmogenic nuclei, and that this model can be used by everyone worldwide. In addition, we predict the production rates of three stable nuclei (21Ne, 22Ne, and 38Ar).
Avanzo, Michele; Stancanello, Joseph; Franchin, Giovanni; Sartor, Giovanna; Jena, Rajesh; Drigo, Annalisa; Dassie, Andrea; Gigante, Marco; Capra, Elvira
2010-04-15
Purpose: To extend the application of current radiation therapy (RT) based tumor control probability (TCP) models of nasopharyngeal carcinoma (NPC) to include the effects of hypoxia and chemoradiotherapy (CRT). Methods: A TCP model is described based on the linear-quadratic model modified to account for repopulation, chemotherapy, heterogeneity of dose to the tumor, and hypoxia. Sensitivity analysis was performed to determine which parameters exert the greatest influence on the uncertainty of modeled TCP. On the basis of the sensitivity analysis, the values of specific radiobiological parameters were set to nominal values reported in the literature for NPC or head and neck tumors. The remaining radiobiological parameters were determined by fitting TCP to clinical local control data from published randomized studies using both RT and CRT. Validation of the model was performed by comparison of estimated TCP and average overall local control rate (LCR) for 45 patients treated at the institution with conventional linear-accelerator-based or helical tomotherapy based intensity-modulated RT and neoadjuvant chemotherapy. Results: Sensitivity analysis demonstrates that the model is most sensitive to the radiosensitivity term α and the dose per fraction. The estimated values of α and OER from data fitting were 0.396 Gy⁻¹ and 1.417. The model estimate of TCP (average 90.9%, range 26.9%-99.2%) showed good correlation with the LCR (86.7%). Conclusions: The model implemented in this work provides clinicians with a useful tool to predict the success rate of treatment, optimize treatment plans, and compare the effects of multimodality therapy.
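A hedged sketch of a Poisson TCP built on the LQ model with repopulation and a simple uniform-hypoxia OER correction follows; this is one plausible reading of the model class described, not the authors' exact formulation, and every parameter value in it is a generic assumption rather than the paper's fit.

```python
import math

def tcp_lq(n_clonogens, alpha, alpha_beta, dose_per_fx, n_fx,
           treat_days, oer=1.0, t_pot=5.0, t_kickoff=21.0):
    """Poisson TCP = exp(-N_surviving) from the LQ model.
    Hypoxia: radiosensitivity alpha divided by OER and beta by OER^2
    (uniformly hypoxic tumour assumption). Repopulation: exponential
    regrowth with doubling time t_pot after kick-off day t_kickoff."""
    a = alpha / oer
    b = (alpha / alpha_beta) / (oer * oer)
    log_sf_per_fx = -(a * dose_per_fx + b * dose_per_fx ** 2)
    repop = math.exp(math.log(2.0) / t_pot *
                     max(0.0, treat_days - t_kickoff))
    surviving = n_clonogens * math.exp(n_fx * log_sf_per_fx) * repop
    return math.exp(-surviving)
```

Varying `alpha` and `dose_per_fx` in such a sketch quickly shows why the sensitivity analysis found them dominant: both sit inside the doubly exponentiated term.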
NASA Astrophysics Data System (ADS)
Maher, K.; Steefel, C. I.; Depaolo, D. J.
2004-12-01
Chemical weathering rates in natural systems are typically much slower than expected based on experiments and theory. There are several possible explanations. However, because it has been difficult to determine which effects in particular reduce the rates in specific settings, natural rates remain difficult to predict. Silicate-rich deep-sea sediments provide an ideal in-situ laboratory for investigating weathering rates because certain potentially important factors, such as advective transport through heterogeneous media, limitations on the availability of reactive surface area due to low porosity and/or cementation, unsaturated flow conditions, and seasonal variations in fluid flux and temperature, do not occur in this setting. Geochemical profiles from Site 984 in the North Atlantic are modeled using a multi-component reactive transport model (CRUNCH) to determine in-situ rates of plagioclase dissolution and other diagenetic processes, including sulfate reduction and anaerobic methane oxidation. Various processes that might contribute to slower rates in the field are considered, including the effect of mineral saturation state, secondary precipitation of clays, inhibition by dissolved aluminum, and the availability of reactive surface area. The reactive transport model includes an isotopic solid-solution formulation that tracks the isotopic composition of precipitating (calcite) and dissolving (plagioclase and calcite) phases, thus allowing the determination of plagioclase dissolution rates. The rate constants for plagioclase determined by geochemical transport modeling of major element profiles are within the range determined from U-series calculations and suggest that natural weathering rates for this system are on the order of 10^-17.5 to 10^-17.7 mol/m²/s, assuming estimates of reactive surface area are correct, several orders of magnitude slower than laboratory-derived rates. The slow plagioclase rates are most likely due to the fact that
Gandhi, O.P.; Gu, Y.G.; Chen, J.Y.; Bassen, H.I.
1992-09-01
The authors have previously reported local, layer-averaged, and whole-body-averaged specific absorption rates and induced currents for a 5,628-cell anatomically based model of a human for plane-wave exposures at 20-100 MHz. Using a higher resolution, 45,024-cell model of the human body, calculations have now been extended to 915 MHz using the finite-difference time-domain method. Because of the higher resolution of the model, it has been possible to calculate specific absorption rates for various organs (brain, eyes, heart, lungs, liver, kidneys, and intestines) and for various parts of the body (head, neck, torso, legs, and arms) as a function of frequency in the band 100-915 MHz. Consistent with some of the experimental data in the literature, the highest part-body-averaged specific absorption rate for the head and neck region (as well as for the eyes and brain) occurs at 200 MHz for the isolated condition and at 150 MHz for the grounded condition of the model. Also observed is an increasing specific absorption rate for the eyes for frequencies above 350 MHz due to the superficial nature of power deposition at increasing frequencies.
NASA Astrophysics Data System (ADS)
Sun, Yongle; Li, Q. M.; Withers, P. J.
2015-09-01
Realistic simulations are increasingly demanded to clarify the dynamic behaviour of foam materials: on one hand, the significant variability (e.g. 20% scatter band) of foam properties and the lack of reliable dynamic test methods for foams make it difficult to evaluate strain-rate sensitivity accurately in experiments; on the other hand, numerical models based on idealised cell structures (e.g. Kelvin and Voronoi) may not be sufficiently representative to capture the actual structural effect. To overcome these limitations, the strain-rate sensitivity of the compressive and tensile properties of closed-cell aluminium Alporas foam is investigated in this study by means of meso-scale realistic finite element (FE) simulations. The FE modelling method based on X-ray computed tomography (CT) images is introduced first, as well as its applications to foam materials. Then the compression and tension of Alporas foam at a wide variety of applied nominal strain-rates are simulated using an FE model constructed from the actual cell geometry obtained from the CT image. The strain-rate sensitivity of compressive strength (collapse stress) and tensile strength (0.2% offset yield point) is evaluated for different cell-wall material properties. The numerical results show that the rate dependence of the cell-wall material is the main cause of the strain-rate hardening of the compressive and tensile strengths at low and intermediate strain-rates. When the strain-rate is sufficiently high, shock compression is initiated, which significantly enhances the stress at the loading end and has a complicated effect on the stress at the supporting end. The plastic tensile wave effect is evident at high strain-rates, but shock tension cannot develop in Alporas foam owing to the softening associated with a single fracture process zone in the tensile response. In all cases the micro inertia of individual cell walls subjected to localised deformation is found to…
ERIC Educational Resources Information Center
Swanson, Patricia E.
2015-01-01
Elementary school mathematics is increasingly recognized for its crucial role in developing the foundational skills and understandings for algebra. In this article, the author uses a lesson to introduce the concept of "rates"--comparing two different types and units of measure--and how to graph them. Described is the lesson and shared…
Relaxed Poisson cure rate models.
Rodrigues, Josemar; Cordeiro, Gauss M; Cancho, Vicente G; Balakrishnan, N
2016-03-01
The purpose of this article is to make the standard promotion cure rate model (Yakovlev and Tsodikov, ) more flexible by assuming that the number of lesions or altered cells after a treatment follows a fractional Poisson distribution (Laskin, ). It is proved that the well-known Mittag-Leffler relaxation function (Berberan-Santos, ) is a simple way to obtain a new cure rate model that is a compromise between the promotion and geometric cure rate models allowing for superdispersion. So, the relaxed cure rate model developed here can be considered as a natural and less restrictive extension of the popular Poisson cure rate model at the cost of an additional parameter, but a competitor to negative-binomial cure rate models (Rodrigues et al., ). Some mathematical properties of a proper relaxed Poisson density are explored. A simulation study and an illustration of the proposed cure rate model from the Bayesian point of view are finally presented.
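The Mittag-Leffler relaxation function at the core of the relaxed model can be evaluated numerically from its power series. The sketch below is illustrative only (the function name and the truncation length are our own choices, not from the article) and is valid for moderate arguments where the series converges quickly.

```python
import math

def mittag_leffler(alpha, z, n_terms=80):
    """One-parameter Mittag-Leffler function E_alpha(z) via its power series:
    E_alpha(z) = sum_{k>=0} z**k / Gamma(alpha*k + 1).
    Adequate for moderate |z|; not suitable for large negative arguments."""
    return sum(z**k / math.gamma(alpha * k + 1) for k in range(n_terms))

# Sanity checks: E_1(z) reduces to exp(z), and E_alpha(0) = 1.
print(mittag_leffler(1.0, 1.0))   # ≈ e ≈ 2.71828
print(mittag_leffler(0.5, 0.0))   # 1.0
```

For alpha = 1 the relaxed model collapses back to the ordinary (exponential) Poisson case, which is one way to see it as a one-extra-parameter extension.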
Parent, B; Turc, O; Gibon, Y; Stitt, M; Tardieu, F
2010-05-01
Temperature fluctuates rapidly and affects all developmental and metabolic processes. When temperature fluctuates naturally, this often obscures the effects of developmental trends or of other environmental conditions. A method is proposed for modelling temperature-compensated rates, based on the coordination of temperature responses of developmental processes. In a data set comprising 41 experiments in the greenhouse, growth chamber, or the field, the temperature responses in the range of 6-36 degrees C for different processes were compared in three species: maize, rice, and Arabidopsis thaliana. Germination, cell division, expansive growth rate, leaf initiation, and phenology showed coordinated temperature responses and followed common laws within each species. The activities of 10 enzymes involved in carbon metabolism exhibited monotonic exponential responses across the whole range of 10-40 degrees C. Hence, the temperature dependence of developmental processes is not explained by a simple relationship to central metabolism. Temperature-compensated rates of development were calculated from the equations of the response curves, by expressing rates per unit equivalent time at 20 degrees C. This resulted in stable rates when temperatures fluctuated over a large range (for which classical thermal time was inefficient), and in time courses of leaf development that were common to several experiments with different temperature scenarios.
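The notion of "equivalent time at 20 degrees C" can be sketched as a rate-weighted accumulation of time. The response function below is a generic Arrhenius form with an invented activation energy, not the species-specific curves fitted in the study; only the weighting scheme itself is taken from the abstract.

```python
import math

R = 8.314  # gas constant, J/mol/K

def relative_rate(temp_c, ea=60000.0):
    """Relative process rate at temp_c (Celsius); illustrative Arrhenius form."""
    return math.exp(-ea / (R * (temp_c + 273.15)))

def equivalent_time_20c(temps_c, dt_hours=1.0):
    """Convert a fluctuating temperature record into equivalent hours at 20 C:
    each interval is weighted by its rate relative to the rate at 20 C."""
    r20 = relative_rate(20.0)
    return sum(relative_rate(t) / r20 * dt_hours for t in temps_c)

# A record held at exactly 20 C accumulates equivalent time 1:1.
print(equivalent_time_20c([20.0] * 24))  # 24.0
```

A warmer record accumulates more than one equivalent hour per clock hour, which is what makes developmental rates expressed per unit equivalent time stable under fluctuating temperatures.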
Cancer Detection Rates in a Population-Based, Opportunistic Screening Model, New Delhi, India
Shridhar, Krithiga; Dey, Subhojit; Bhan, Chandra Mohan; Bumb, Dipika; Govil, Jyostna; Dhillon, Preet K
2017-01-01
Background: In India, cancer accounts for 7.3% of DALYs and 14.3% of mortality, with an age-standardized incidence rate of 92.4/100,000 in men and 97.4/100,000 in women, and yet there are no nationwide screening programs. Materials and Methods: We calculated age-standardized and age-truncated (30-69 years) detection rates for men and women who attended the Indian Cancer Society detection centre, New Delhi, during 2011-12. All participants were registered with socio-demographic, medical, family and risk-factor history questionnaires, and administered clinical examinations to screen for breast, oral, gynecological and other cancers through a comprehensive physical examination and complete blood count. Patients with an abnormal clinical exam or blood result were referred to collaborating institutes for further investigation and follow-up. Results: A total of 3,503 people were screened during 2011-12 (47.8% men, 51.6% women and 0.6% children <15 years) with a mean age of 47.8 yrs (±15.1 yrs); 80.5% were aged 30-69 years and 77.1% had at least a secondary education. Tobacco use was reported by 15.8%, alcohol consumption by 11.9% and family history of cancer by 9.9% of participants. Follow-up of suspicious cases yielded 45 incident cancers (51.1% in men, 48.9% in women), consisting of 55.5% head and neck (72.0% oral), 28.9% breast, 6.7% gynecological and 8.9% other cancer sites. The age-standardized detection rate for all cancer sites was 340.8/100,000 men and 329.8/100,000 women. Conclusions: Cancer screening centres are an effective means of attracting high-risk persons in low-resource settings. Opportunistic screening is one feasible pathway to address the rising cancer burden in urban India through early detection. PMID:25773793
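Age-standardized rates of the kind reported here are computed by weighting age-specific rates with a standard population. A minimal sketch of direct standardization follows; the rates and weights are made-up numbers, not the study's data.

```python
def age_standardized_rate(age_specific_rates, standard_weights):
    """Directly standardized rate per 100,000: weighted mean of age-specific
    rates (per 100,000) using standard-population weights."""
    total_w = sum(standard_weights)
    return sum(r * w for r, w in zip(age_specific_rates, standard_weights)) / total_w

# Hypothetical detection rates per 100,000 for three age bands,
# with illustrative standard-population weights.
rates = [50.0, 200.0, 800.0]
weights = [0.5, 0.3, 0.2]
print(age_standardized_rate(rates, weights))  # 245.0
```

The same formula restricted to the 30-69-year bands gives an age-truncated rate.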
NASA Astrophysics Data System (ADS)
Nguyen, Phuong Hoa; Hofmann, Karl R.; Paasch, Gernot
2002-11-01
In advanced full-band Monte Carlo (MC) models, the Nordheim approximation with a spherical Wigner-Seitz cell for a lattice with two atoms per elementary cell is still common, and in the most detailed work on silicon, by Kunikiyo et al. [J. Appl. Phys. 74, 297 (1994)], the atomic positions in the cell have been incorrectly introduced in the phonon scattering rates. In this article the correct expressions for the phonon scattering rates based on the screened pseudopotential are formulated for the case of several atoms per unit cell. Furthermore, the simplest wave-number-dependent approximation is introduced, which contains an average of the cell structure factor and the acoustic and optical deformation potentials as two parameters to be fitted. While the band structure is determined by the pseudopotential at the reciprocal lattice vectors, the phonon scattering rates are essentially determined by wave numbers below the smallest reciprocal lattice vector. Thus, in the phonon scattering rates, the pseudopotential form factor is modeled by the simple Ashcroft model potential, in contrast to the full band structure, which is calculated using a nonlocal pseudopotential scheme. The parameter in the Ashcroft model potential is determined using a method based on the equilibrium condition. For the screening of the pseudopotential form factor, the Lindhard dielectric function is used. Compared to the Nordheim approximation with a spherical Wigner-Seitz cell, the approximation results in up to 10% lower phonon scattering rates. Examples from a detailed comparison of the influence of the two deformation potentials on the electron and hole drift velocities are presented for Ge and Si at different temperatures. The results are prerequisite for a well-founded choice of the two deformation potentials as fit parameters, and they provide an explanation of the differences between the two materials, the origin of the anisotropy of the drift velocities, and the origin of the dent in…
NASA Astrophysics Data System (ADS)
Wang, Lei
2017-04-01
In distributed hydrological modeling, surface air temperature (Tair) is of great importance in simulating cold region processes, while the near-surface-air-temperature lapse rate (NLR) is crucial to prepare Tair (when interpolating Tair from site observations to model grids). In this study, a distributed biosphere hydrological model with improved snow physics (WEB-DHM-S) was rigorously evaluated in a typical cold, large river basin (e.g., the upper Yellow River basin), given mean monthly NLRs. Based on the validated model, we have examined the influence of the NLR on the simulated snow processes and streamflows. We found that the NLR has a large effect on the simulated streamflows, with a maximum difference of greater than 24% among the various NLR scenarios considered. To supplement the insufficient number of near-surface-air-temperature monitoring sites in developing/undeveloped mountain regions, the nighttime MODIS LST is used as an alternative to derive the approximate NLR at a finer spatial scale (e.g., at different elevation bands, land covers, aspects, and snow conditions). Using the satellite-based estimation of NLR, the modeling of snow processes has been greatly refined. Results show that both the determination of rainfall/snowfall and the snowpack process were significantly improved, contributing to a reduced summer evapotranspiration and thus an improved streamflow simulation.
NASA Astrophysics Data System (ADS)
Wang, Lei; Sun, Litao; Shrestha, Maheswor; Li, Xiuping; Liu, Wenbin; Zhou, Jing; Yang, Kun; Lu, Hui; Chen, Deliang
2016-10-01
In distributed hydrological modeling, surface air temperature (Tair) is of great importance in simulating cold region processes, while the near-surface-air-temperature lapse rate (NLR) is crucial to prepare Tair (when interpolating Tair from site observations to model grids). In this study, a distributed biosphere hydrological model with improved snow physics (WEB-DHM-S) was rigorously evaluated in a typical cold, large river basin (e.g., the upper Yellow River basin), given mean monthly NLRs. Based on the validated model, we have examined the influence of the NLR on the simulated snow processes and streamflows. We found that the NLR has a large effect on the simulated streamflows, with a maximum difference of greater than 24% among the various NLR scenarios considered. To supplement the insufficient number of near-surface-air-temperature monitoring sites in developing/undeveloped mountain regions, the nighttime Moderate Resolution Imaging Spectroradiometer land surface temperature is used as an alternative to derive the approximate NLR at a finer spatial scale (e.g., at different elevation bands, land covers, aspects, and snow conditions). Using the satellite-based estimation of NLR, the modeling of snow processes has been greatly refined. Results show that both the determination of rainfall/snowfall and the snowpack process were significantly improved, contributing to a reduced summer evapotranspiration and thus an improved streamflow simulation.
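The role of the NLR in preparing Tair reduces to an elevation correction when interpolating a station observation to a model grid cell. A minimal sketch, with all numbers invented for illustration:

```python
def interpolate_tair(t_station_c, z_station_m, z_grid_m, nlr_c_per_100m=-0.65):
    """Estimate grid-cell air temperature from a station observation using a
    near-surface-air-temperature lapse rate (NLR), given in C per 100 m."""
    return t_station_c + nlr_c_per_100m * (z_grid_m - z_station_m) / 100.0

# Station at 3000 m reads 5.0 C; the grid cell sits at 4000 m.
print(interpolate_tair(5.0, 3000.0, 4000.0))  # -1.5
```

Because the correction scales with the elevation difference, an error in the NLR propagates directly into simulated rainfall/snowfall partitioning, which is why the choice of NLR matters so much here.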
NASA Astrophysics Data System (ADS)
Porta, Alberto; Bari, Vlasta; Ranuzzi, Giovanni; De Maria, Beatrice; Baselli, Giuseppe
2017-09-01
We propose a multiscale complexity (MSC) method that assesses irregularity in assigned frequency bands and is appropriate for analyzing short time series. It is grounded in the identification of the coefficients of an autoregressive model, the computation of the mean position of the poles generating the components of the power spectral density in an assigned frequency band, and the assessment of its distance from the unit circle in the complex plane. The MSC method was tested on simulations and applied to short heart period (HP) variability series recorded during graded head-up tilt in 17 subjects (age 21 to 54 years, median 28 years, 7 females) and during paced breathing protocols in 19 subjects (age 27 to 35 years, median 31 years, 11 females) to assess the contribution of time scales typical of cardiac autonomic control, namely the low frequency (LF, 0.04 to 0.15 Hz) and high frequency (HF, 0.15 to 0.5 Hz) bands, to the complexity of cardiac regulation. The proposed MSC technique was compared to a traditional model-free multiscale method grounded in information theory, i.e., multiscale entropy (MSE). The approach suggests that the reduction of HP variability complexity observed during graded head-up tilt is due to a regularization of the HP fluctuations in the LF band via a possible intervention of sympathetic control, and that the decrease of HP variability complexity observed during slow breathing is the result of the regularization of the HP variations in both LF and HF bands, thus implying the action of physiological mechanisms working at time scales even different from that of respiration. MSE did not distinguish experimental conditions at time scales larger than 1. Over short time series, MSC allows a more insightful association between cardiac control complexity and the physiological mechanisms modulating cardiac rhythm compared to a more traditional tool such as MSE.
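The core operation — locating the AR-model poles, keeping those whose frequencies fall in an assigned band, and measuring their mean distance from the unit circle — can be sketched as follows. The function name and the exact averaging are our own choices; the paper's precise definition may differ in detail.

```python
import numpy as np

def band_pole_distance(ar_coeffs, f_low, f_high, fs):
    """Mean distance from the unit circle of AR poles whose frequencies lie in
    [f_low, f_high] Hz. ar_coeffs = [1, a1, ..., ap] from
    A(z) = 1 + a1 z^-1 + ... + ap z^-p. Poles close to the unit circle
    (small distance) indicate a narrowband, regular oscillation in the band."""
    poles = np.roots(ar_coeffs)
    freqs = np.abs(np.angle(poles)) / (2 * np.pi) * fs
    in_band = poles[(freqs >= f_low) & (freqs <= f_high)]
    return float(np.mean(1.0 - np.abs(in_band))) if in_band.size else float("nan")

# AR(2) with a pole pair of radius 0.9 at frequency 0.1*fs:
# the band-restricted distance should be 1 - 0.9 = 0.1.
r, theta = 0.9, 2 * np.pi * 0.1
a = [1.0, -2 * r * np.cos(theta), r * r]
print(band_pole_distance(a, 0.05, 0.15, fs=1.0))  # ≈ 0.1
```

In this reading, "regularization" of HP fluctuations in a band corresponds to the band's poles migrating toward the unit circle.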
Yong, Kamuela E; Mubayi, Anuj; Kribs, Christopher M
2015-11-01
The parasite Trypanosoma cruzi, spread by triatomine vectors, affects over 100 mammalian species throughout the Americas, including humans, in whom it causes Chagas' disease. In the U.S., only a few autochthonous cases have been documented in humans, but prevalence is high in sylvatic hosts (primarily raccoons in the southeast and woodrats in Texas). Sylvatic transmission of T. cruzi is maintained by the vector species Triatoma sanguisuga and Triatoma gerstaeckeri biting their preferred hosts, thus creating multiple interacting vector-host cycles. The goal of this study is to quantify the rate of contacts between different host and vector species native to Texas using an agent-based model framework. The contact rates, which represent bites, are required to estimate transmission coefficients, which can be applied to models of infection dynamics. In addition to quantitative estimates, the results confirm host irritability (in conjunction with host density), vector starvation thresholds, and dispersal as determining factors for vector density as well as host-vector contact rates.
NASA Astrophysics Data System (ADS)
Sun, Jun; Gao, Wen; Zhao, Debin
2008-01-01
Fine-Granular SNR scalability (FGS) technologies in H.264/AVC-based scalable video coding (SVC) provide a flexible and effective foundation for scaling the FGS enhancement layer (EL) to accommodate different and variable network capacities. To support smooth quality extraction of SVC FGS videos, it is important to obtain the rate-distortion (R-D) function of each picture or group of pictures (GOP). In this paper, we first review the R-D analysis of SVC FGS coding from our prior work. Building on that analysis and its models, we present the virtual GOP concept and propose a virtual-GOP-based packet scheduling algorithm to acquire the optimal packet scheduling sequence in a virtual GOP. Based on the packet scheduling algorithm and the R-D analysis of the FGS EL, an effective and flexible D-R model is proposed to describe the D-R function of the virtual GOP. Then, with the R-D model of virtual GOPs, a practical non-search algorithm for smooth quality reconstruction is introduced. Compared to the quality layer method, the reconstructed video quality is improved both objectively and subjectively.
NASA Astrophysics Data System (ADS)
Nemat-Nasser, Sia
2005-08-01
Excluding high-temperature creep, the plastic deformation of metals occurs by the motion of dislocations that produce slip on various slip planes in various slip directions. It is thus natural to seek to develop constitutive relations for metal plasticity based on the concept of dislocations and their kinematics and kinetics. Such an approach has been successfully used by a number of investigators over the past several decades. More recently, however, the development of the recovery Hopkinson techniques by this writer and his coworkers at UCSD's CEAM has provided important experimental tools to obtain reliable data on the stress-strain response of a variety of metals over broad ranges of strain rates and temperatures. A wealth of information has become available to guide and verify constitutive models that are proposed to describe metal plasticity. Using such data, I have been able to create a class of dislocation-based models that involve a few material constants, and seem to accurately characterize the response of a large number of metals over strain rates of 10^(-4) to 10^5/s and temperatures of 77 to 1,300 K.
Real-time inversions for finite fault slip models and rupture geometry based on high-rate GPS data
Minson, Sarah E.; Murray, Jessica R.; Langbein, John O.; Gomberg, Joan S.
2015-01-01
We present an inversion strategy capable of using real-time high-rate GPS data to simultaneously solve for a distributed slip model and fault geometry in real time as a rupture unfolds. We employ Bayesian inference to find the optimal fault geometry and the distribution of possible slip models for that geometry using a simple analytical solution. By adopting an analytical Bayesian approach, we can solve this complex inversion problem (including calculating the uncertainties on our results) in real time. Furthermore, since the joint inversion for distributed slip and fault geometry can be computed in real time, the time required to obtain a source model of the earthquake does not depend on the computational cost. Instead, the time required is controlled by the duration of the rupture and the time required for information to propagate from the source to the receivers. We apply our modeling approach, called Bayesian Evidence-based Fault Orientation and Real-time Earthquake Slip, to the 2011 Tohoku-oki earthquake, 2003 Tokachi-oki earthquake, and a simulated Hayward fault earthquake. In all three cases, the inversion recovers the magnitude, spatial distribution of slip, and fault geometry in real time. Since our inversion relies on static offsets estimated from real-time high-rate GPS data, we also present performance tests of various approaches to estimating quasi-static offsets in real time. We find that the raw high-rate time series are the best data to use for determining the moment magnitude of the event, but slightly smoothing the raw time series helps stabilize the inversion for fault geometry.
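For a fixed fault geometry the slip inversion is linear, so the Gaussian posterior has a standard closed form. The sketch below shows that closed form on a toy two-parameter problem; the "Green's function" matrix, noise levels, and prior are all invented, and the real-time geometry search of the actual method is omitted.

```python
import numpy as np

def gaussian_posterior(G, d, Cd_inv, Cm_inv):
    """Closed-form posterior for the linear model d = G m + noise with a
    zero-mean Gaussian prior: returns posterior mean and covariance."""
    post_cov = np.linalg.inv(G.T @ Cd_inv @ G + Cm_inv)
    post_mean = post_cov @ G.T @ Cd_inv @ d
    return post_mean, post_cov

rng = np.random.default_rng(0)
m_true = np.array([1.0, -0.5])             # toy "slip" parameters
G = rng.normal(size=(50, 2))               # toy Green's-function matrix
d = G @ m_true + 0.01 * rng.normal(size=50)
Cd_inv = np.eye(50) / 0.01**2              # data precision (noise sigma = 0.01)
Cm_inv = np.eye(2) / 10.0**2               # weak prior on slip
m_post, m_cov = gaussian_posterior(G, d, Cd_inv, Cm_inv)
print(m_post)  # close to [1.0, -0.5]
```

Because the posterior is available analytically, both the estimate and its uncertainty come out of one small linear solve, which is what makes real-time operation feasible.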
NASA Astrophysics Data System (ADS)
Bieliński, Henryk
2016-09-01
The current paper presents the experimental validation of a generalized model of the two-phase thermosyphon loop. The generalized model is based on mass, momentum, and energy balances in the evaporators, rising tube, condensers, and falling tube. The theoretical analysis and the experimental data have been obtained for a newly designed variant: a thermosyphon loop with both minichannels and conventional tubes, consisting of an evaporator on the lower vertical section and a condenser on the upper vertical section. One-dimensional homogeneous and separated two-phase flow models were used in the calculations, and the latest minichannel heat transfer correlations available in the literature were applied. A numerical analysis of the volumetric flow rate in the steady state has been done. The experiment was conducted on a specially designed test apparatus, with ultrapure water as the working fluid. The results show that the theoretical predictions are in good agreement with the measured volumetric flow rate at steady state.
David F. DeSante; M. Philip Nott; Danielle R. Kaschube
2005-01-01
In this paper we argue that effective management of landbirds should be based on assessing and monitoring their vital rates (primary demographic parameters) as well as population trends. This is because environmental stressors and management actions affect vital rates directly and usually without time lags, and because monitoring vital rates provides a) information on...
Zhang, Xiaoying; Liu, Chongxuan; Hu, Bill X.; Hu, Qinhong
2015-09-28
The additivity model assumes that field-scale reaction properties of a sediment, including surface area, reactive site concentration, and reaction rate, can be predicted from the field-scale grain-size distribution by linearly adding reaction properties estimated in the laboratory for individual grain-size fractions. This study evaluated the additivity model in scaling mass transfer-limited, multi-rate uranyl (U(VI)) surface complexation reactions in a contaminated sediment. Experimental data on rate-limited U(VI) desorption in a stirred flow-cell reactor were used to estimate the statistical properties of the rate constants for individual grain-size fractions, which were then used to predict rate-limited U(VI) desorption in the composite sediment. The results indicated that the additivity model with respect to the rate of U(VI) desorption provided a good prediction of U(VI) desorption in the composite sediment. However, the rate constants were not directly scalable using the additivity model. An approximate additivity model for directly scaling rate constants was subsequently proposed and evaluated, and was found to provide a good prediction of the experimental results within statistical uncertainty. This study also found that a gravel-size fraction (2 to 8 mm), which is often ignored in modeling U(VI) sorption and desorption, is statistically significant to U(VI) desorption in the sediment.
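The additivity idea — predicting a composite-sediment property by mass-weighting properties measured on individual grain-size fractions — reduces to a weighted sum. A minimal sketch with invented fractions and values:

```python
def additive_property(mass_fractions, fraction_properties):
    """Field-scale property predicted by the additivity model: the
    mass-fraction-weighted sum of lab-measured per-fraction properties
    (e.g. reactive site concentration per gram of sediment)."""
    assert abs(sum(mass_fractions) - 1.0) < 1e-9, "fractions must sum to 1"
    return sum(f * p for f, p in zip(mass_fractions, fraction_properties))

# Hypothetical sediment: three grain-size fractions and illustrative
# lab-derived reactive site concentrations (mol/g).
fractions = [0.2, 0.5, 0.3]
sites = [5e-6, 1e-6, 2e-7]
print(additive_property(fractions, sites))  # 1.56e-06
```

The study's finding is that this works for desorption *rates* but not directly for rate *constants*, which is why an approximate additivity model for the constants was needed.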
Vali, Leila; Mastaneh, Zahra; Mouseli, Ali; Kardanmoghadam, Vida; Kamali, Sodabeh
2017-07-01
One of the ways to improve the quality of services in the health system is through clinical governance. This method aims to create a framework in which clinical service providers are accountable, in return for continuing improvement of quality and maintenance of service standards. The aim of this study was to evaluate the success rate of clinical governance implementation in Kerman teaching hospitals based on the 9 steps of Karsh's Model. This cross-sectional study was conducted in 2015 on 94 people, including chief executive officers (CEOs), nursing managers, clinical governance managers and experts, head nurses, and nurses. The required data were collected through a researcher-made questionnaire containing 38 questions with a three-point Likert scale (good, moderate, weak). Karsh's Model consists of nine steps: top management commitment to change, accountability for change, creating a structured approach for change, training, pilot implementation, communication, feedback, simulation, and end-user participation. Data analysis using descriptive statistics and the Mann-Whitney-Wilcoxon test was done with SPSS software version 16. About 81.9% of respondents were female and 74.5% held a Bachelor of Nursing (BN) degree. In general, the status of clinical governance implementation in the studied hospitals based on the 9 steps of the model was 44% (moderate). Significant relationships were observed between accountability and both organizational position (p=0.0012) and field of study (p=0.000). There were also significant relationships between the structured approach and organizational position (p=0.007), communication and demographic characteristics (p=0.000), and end-user participation and organizational position (p=0.03). Clinical governance should be implemented with correct needs assessment and the participation of all stakeholders, to ensure its enforcement in practice and to enhance the quality of services.
ERIC Educational Resources Information Center
Soderstrom, Irina R.; Leitner, Dennis W.
While it is imperative that attempts be made to assess the predictive accuracy of any prediction model, traditional measures of predictive accuracy have been criticized as suffering from "the base rate problem." The base rate refers to the relative frequency of occurrence of the event being studied in the population of interest, and the…
NASA Astrophysics Data System (ADS)
Brodie, E.; King, E.; Molins, S.; Karaoz, U.; Johnson, J. N.; Bouskill, N.; Hug, L. A.; Thomas, B. C.; Castelle, C. J.; Beller, H. R.; Banfield, J. F.; Steefel, C. I.
2014-12-01
In soils and sediments, microorganisms perform essential ecosystem services through their roles in regulating the stability of carbon, the flux of nutrients, and the purification of water. But these are complex systems in which the physical, chemical, and biological components are all intimately connected. Components of this complexity are gradually being uncovered, and our understanding of the extent of microbial functional diversity in particular has been enhanced greatly by the development of cultivation-independent approaches. However, we have not moved far beyond a descriptive and correlative use of this powerful resource. As the ability to reconstruct thousands of genomes from microbial populations using metagenomic techniques gains momentum, the challenge will be to develop an understanding of how these metabolic blueprints serve to influence the fitness of organisms within these complex systems, and how populations emerge and impact the physical and chemical properties of their environment. In the presentation we will discuss the development of a trait-based model of microbial activity that simulates coupled guilds of microorganisms parameterized with traits extracted from large-scale metagenomic data. Using a reactive transport framework, we simulate the thermodynamics of coupled electron donor and acceptor reactions to predict the energy available for respiration, biomass development, and exo-enzyme production. Each group within a functional guild is parameterized with a unique combination of traits governing organism fitness under dynamic environmental conditions. This presentation will address our latest developments in the estimation of trait values related to growth rate and the identification and linkage of key fitness traits associated with respiratory and fermentative pathways, macromolecule depolymerization enzymes, and nitrogen fixation from metagenomic data. We are testing model sensitivity to initial microbial composition and intra…
NASA Astrophysics Data System (ADS)
Fu, Guanghui; Chandra, Abhijit
2002-10-01
It is well known that within-wafer nonuniformity (WIWNU) due to the variation in material removal rate (MRR) in chemical mechanical polishing (CMP) significantly affects the yield of good dies. The process control for a batch CMP operation is further complicated by wafer-to-wafer nonuniformity (WTWNU) caused by MRR decay when a number of wafers are polished with the same unconditioned pad. Accordingly, the present work focuses on modeling the WIWNU and WTWNU in CMP processes. Various material removal models suggest that the MRR is strongly influenced by the interface pressure. It is also well known that the viscoelastic properties of the pad play an important role in CMP. In the present work, an analytical expression for pressure distribution (and its associated MRR) at the wafer-pad interface for a viscoelastic pad is developed. It is observed that under constant load, which is typical during main polishing in CMP, the spatial distribution of the interface pressure profile may change with time from edge-slow to edge-fast, depending on the combination of wafer curvature, down pressure, and pad properties. For constant displacement operations, the pressure profile retains its edge-slow or edge-fast characteristics over time. The analytical model predictions of MRR based on viscoelastic pad properties also correlate very well to existing experimental observations of MRR decay when an unconditioned pad is used to polish a number of wafers. Based on these observations, it may be conjectured that the viscoelastic material properties of the pad play a primary role in causing the observed MRR decay. The analytical results obtained in the present work can also provide an estimation of evolution of thickness removal distribution over the entire wafer. This may be used for determining the optimum thickness of the overburden material and its polishing time, and for effective control of CMP processes.
Kasting, G B; Saiyasombati, P
2001-02-01
Because of their potential for inducing allergic contact dermatitis (ACD) if used improperly, perfumes are carefully assessed for dermal safety prior to incorporation into cosmetic products. Exposure assessment for these materials often involves the conservative assumption of 100% absorption of each component. This report describes an improved method to estimate the absorption and evaporation of perfume ingredients from skin, based on their physico-chemical properties. The effect of environmental variables such as temperature and wind velocity can be accounted for in a logical way. This was accomplished using a first-order kinetic approach expected to be applicable for small doses applied to skin. Skin penetration rate was calculated as a fraction of the maximum flux estimated from the compound's lipid solubility, S_lip (represented by the product of the octanol/water partition coefficient, K_oct, and water solubility, S_w), and molecular weight, MW. Evaporation rates were estimated from a modified Henry's Law approach with a stagnant boundary layer whose thickness is a function of surface airflow, v. At a given value of v, the evaporation rate was assumed proportional to the ratio P_vp/S_lip, where P_vp is the vapour pressure of the ingredient at skin temperature, T. The model predicts a relationship for total evaporation from skin of the form %evap = 100x/(k + x), where x = P_vp·MW^2.7/(K_oct·S_w) and k is a parameter that depends only on v and T. Comparison with published data on perfume evaporation from human skin in vivo showed good agreement between theory and experiment for two closely related perfume mixtures (r^2 = 0.52-0.74, s = 12-14%, n = 10). Thus, the method would seem to have a good prospect of providing skin absorption estimates suitable for use in exposure assessment and improved understanding of dose-related contact allergy.
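The reported relationship for total evaporation can be coded directly. The sketch below takes the model at face value, treating k as a fitted environment-dependent parameter; all input values are invented for illustration and do not correspond to any real fragrance ingredient.

```python
def percent_evaporated(p_vp, mw, k_oct, s_w, k_env):
    """Percentage of an applied fragrance ingredient predicted to evaporate:
    %evap = 100*x / (k + x), with x = P_vp * MW**2.7 / (K_oct * S_w).
    p_vp: vapour pressure at skin temperature; k_env: fitted parameter
    depending only on airflow v and temperature T."""
    x = p_vp * mw**2.7 / (k_oct * s_w)
    return 100.0 * x / (k_env + x)

# A volatile, poorly lipid-soluble ingredient evaporates almost completely...
print(percent_evaporated(p_vp=10.0, mw=150.0, k_oct=100.0, s_w=1.0, k_env=1e4))
# ...while a low-vapour-pressure, lipophilic one is mostly absorbed instead.
print(percent_evaporated(p_vp=1e-4, mw=150.0, k_oct=1e4, s_w=0.1, k_env=1e4))
```

Since absorbed and evaporated fractions are complementary in this first-order picture, 100 - %evap gives the upper bound on dermal absorption used for exposure assessment.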
Thibbotuwawa, Namal; Oloyede, Adekunle; Senadeera, Wijitha; Li, Tong; Gu, YuanTong
2015-11-01
Solid-interstitial fluid interaction, which depends on tissue permeability, is significant to the strain-rate-dependent mechanical behavior of humeral head (shoulder) cartilage. Due to anatomical and biomechanical similarities to the human shoulder, kangaroos present a suitable animal model. Therefore, indentation experiments were conducted on kangaroo shoulder cartilage tissues from low (10^(-4)/s) to moderately high (10^(-2)/s) strain-rates. A porohyperelastic model was developed based on the experimental characterization, and a permeability function that takes into account the effect of strain-rate on permeability (strain-rate-dependent permeability) was introduced into the model to investigate the effect of rate-dependent fluid flow on tissue response. The prediction of this model was compared with those of models using constant permeability and strain-dependent permeability. Compared to the model with constant permeability, the models with strain-dependent and strain-rate-dependent permeability were better able to capture the experimental variation at all strain-rates (p < 0.05). Significant differences were not identified between the models with strain-dependent and strain-rate-dependent permeability at a strain-rate of 5 × 10^(-3)/s (p = 0.179). However, at a strain-rate of 10^(-2)/s, the model with strain-rate-dependent permeability was significantly better at capturing the experimental results (p < 0.005). The findings thus revealed the significance of rate-dependent fluid flow on tissue behavior at large strain-rates, which provides insight into the mechanical deformation mechanisms of cartilage tissues.
NASA Astrophysics Data System (ADS)
Hou, Huirang; Zheng, Dandan; Nie, Laixiao
2015-04-01
For gas ultrasonic flowmeters, the signals received by the ultrasonic sensors are susceptible to noise interference. If the signals are mingled with noise, the traditional double-threshold method can trigger mistakenly, causing a large error in flow measurement. To solve this problem, genetic-ant colony optimization (GACO) based on the ultrasonic pulse received-signal model is proposed. Furthermore, in consideration of the real-time performance of the flow measurement system, an improvement is proposed in which only the first three cycles of the received signal are processed rather than the whole signal. Simulation results show that the GACO algorithm has the best estimation accuracy and anti-noise ability compared with the genetic algorithm, ant colony optimization, the double-threshold method and enveloped zero-crossing. Local convergence does not appear with the GACO algorithm until the SNR drops to -10 dB. When only the first three cycles are used (called GACO-3cycles), the converging accuracy and converging speed are further improved and the amount of computation is reduced. Experimental results involving actual received signals show that the accuracy of single-gas ultrasonic flow rate measurement can reach 0.5% with GACO-3cycles, which is better than with the double-threshold method.
Wu, Hai-wei; Yu, Hai-ye; Zhang, Lei
2011-05-01
Using the K-fold cross-validation method, two support vector machine functions (epsilon-SVR and nu-SVR), four kernel functions, and grid-search, genetic-algorithm and particle-swarm-optimization parameter searches, the authors constructed support vector machine models with the best penalty parameter c and the best correlation coefficient. Using information granulation technology, the authors constructed a P particle and an epsilon particle from the factors affecting net photosynthetic rate, reducing the dimensionality of the problem. The P particle comprises the percentages of visible spectrum ingredients; the epsilon particle comprises leaf temperature, scattering radiation, air temperature, and so on. This technology makes it possible to obtain the best correlation coefficient among photosynthetically active radiation, the visible spectrum and individual net photosynthetic rate. The authors constructed a training set and a forecasting set including photosynthetically active radiation, the P particle and the epsilon particle. The results show that the epsilon-SVR-RBF-genetic-algorithm model, the nu-SVR-linear-grid-search model and the nu-SVR-RBF-genetic-algorithm model achieve correlation coefficients of up to 97% on the forecasting set including photosynthetically active radiation and the P particle. The penalty parameter c of the nu-SVR-linear-grid-search model is the smallest, so that model's generalization ability is the best. Forecasting the set including photosynthetically active radiation, the P particle and the epsilon particle with this model yields a correlation coefficient of up to 96%.
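The parameter-selection loop described above (K-fold cross validation over a grid of penalty values, scored by correlation coefficient) can be sketched as follows; to keep the example self-contained, closed-form ridge regression stands in for the SVR, and the data and penalty grid are invented:

```python
import numpy as np

def kfold_grid_search(X, y, penalties, k=5, seed=0):
    """Pick the ridge penalty with the best mean K-fold correlation.

    Mirrors the selection loop in the abstract (tuning the penalty
    parameter c by K-fold cross validation, scoring by correlation
    coefficient), with ridge regression as a stand-in for the SVR.
    """
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), k)
    best_lam, best_score = None, -np.inf
    for lam in penalties:
        scores = []
        for i in range(k):
            test = folds[i]
            train = np.concatenate([folds[j] for j in range(k) if j != i])
            Xtr, ytr = X[train], y[train]
            # Closed-form ridge solution: (X'X + lam*I) w = X'y
            w = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(X.shape[1]),
                                Xtr.T @ ytr)
            scores.append(np.corrcoef(X[test] @ w, y[test])[0, 1])
        mean_score = float(np.mean(scores))
        if mean_score > best_score:
            best_lam, best_score = lam, mean_score
    return best_lam, best_score
```

The same loop applies unchanged to any model with a fit/predict step, including the SVR variants named in the abstract.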
ERIC Educational Resources Information Center
Ghaffarzadegan, Navid; Stewart, Thomas R.
2011-01-01
Elwin, Juslin, Olsson, and Enkvist (2007) and Henriksson, Elwin, and Juslin (2010) offered the constructivist coding hypothesis to describe how people code the outcomes of their decisions when availability of feedback is conditional on the decision. They provided empirical evidence only for the 0.5 base rate condition. This commentary argues that…
Evidential Impact of Base Rates
1981-05-15
Darley-Latané study did not affect subjects' predictions of the behavior of an individual participant in the study, who was observed in a brief filmed...that the situation of the Darley-Latané study is not conducive to helping behavior. Whether an extreme base rate is attributed to an accident of
Hu, Jing-Xiao; Ran, Jia-Bing; Chen, Si; Shen, Xin-Yu; Tong, Hua
2015-12-01
In order to prepare sophisticated biomaterials using a biomimetic approach, a deeper understanding of biomineralization is needed. Of particular importance is the control and regulation of the mineralization process. In this study, a novel bilayer rate-controlling model was designed to investigate the factors potentially influencing mineralization. In the absence of a rate-controlling layer, nano-scale hydroxyapatite (HA) crystallites exhibited a spherical morphology, whereas in the presence of a rate-controlling layer, HA crystallites were homogeneously dispersed and spindle-like in structure. The mineralization rate had a significant effect on the morphology of the crystals. Furthermore, in vitro tests demonstrated that the reaction layer containing spindle-like HA crystallites possessed superior biological properties. These results suggest that a slow mineralization rate is required to control the morphology of inorganic crystallites, and that consumption of ammonia by the rate-controlling layer kept its concentration low. This study demonstrates that a biomimetic approach can be used to prepare novel biomaterials containing HA crystallites with different morphologies and biological properties. Copyright © 2015 Elsevier B.V. All rights reserved.
Wang, Yan; Tian, Qing-Jiu; Huang, Yan; Wei, Hong-Wei
2013-04-01
The present paper takes Chuzhou in Anhui Province as the research area and deciduous broad-leaved forest as the research object. A recognition model for deciduous broad-leaved forest was constructed using the NDVI difference rate between the leaf-expansion and the flowering/fruit-bearing phenological stages, and the model was applied to HJ-CCD remote sensing images from April 1, 2012 and May 4, 2012. The spatial distribution map of deciduous broad-leaved forest was then extracted effectively, and the extraction results were verified and evaluated. The result demonstrates the validity of the NDVI difference-rate extraction method proposed in this paper and also verifies the applicability of HJ-CCD data for vegetation classification and recognition.
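The NDVI difference-rate idea can be sketched as follows; the classification threshold is a hypothetical placeholder, since the abstract does not report the value used:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index."""
    return (nir - red) / (nir + red)

def ndvi_difference_rate(ndvi_t1, ndvi_t2):
    """Relative NDVI change between two acquisition dates."""
    return (ndvi_t2 - ndvi_t1) / ndvi_t1

def classify_deciduous(nir1, red1, nir2, red2, threshold=0.3):
    """Flag pixels whose NDVI rose sharply between leaf expansion and
    flowering/fruit-bearing; the 0.3 threshold is an assumed value."""
    rate = ndvi_difference_rate(ndvi(nir1, red1), ndvi(nir2, red2))
    return rate > threshold
```

Deciduous broad-leaved forest shows a large relative NDVI increase between the April and May scenes, while evergreen and non-forest cover stays comparatively flat, which is what the thresholded difference rate separates.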
NASA Astrophysics Data System (ADS)
Won, M.; Yoon, S.; Jang, K.; Lim, J.
2016-12-01
Most fires in Korea are human-caused, but meteorological factors are also major contributors to fire behavior and spread. Thus, meteorological as well as social factors are considered in fire danger rating systems. This study aims to develop an advanced Korean Forest Fire Danger Rating System (KFFDRS) using weather data from automatic mountain meteorology observation systems (AMOSs) to support forest fire prevention strategy in South Korea. The KFFDRS consists of three 10-scale indices: a daily weather index (DWI), a fuel model index (FMI), and a topography model index (TMI). The DWI represents meteorological characteristics such as humidity (relative and effective), temperature and wind speed, and nine regional logistic regression models from past work were integrated into one national model. The integrated national model for the probability of forest fire occurrence is [1 + exp{2.706 + (0.088 × maximum temperature) - (0.055 × relative humidity) - (0.023 × effective humidity) - (0.104 × mean wind speed)}]^-1, and all weather variables significantly (p < 0.01) affected the probability of forest fire occurrence across the regions. The predictive value of the model is 71.7 percent. The accuracy of forest fire occurrence prediction was also estimated before and after fusion of mountain weather data, using 55 random samples from forest fire event days. The integrated national model showed 10% higher accuracy than the nine regional logistic regression models when fused mountain weather data were applied. These findings should be useful to policy makers in the Republic of Korea for the prevention of forest fires.
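The integrated national model is a logistic function of the four weather variables. The sketch below reproduces the coefficients as printed in the abstract, reading the garbled exponent as a single ^-1; the sign conventions should be checked against the full paper:

```python
import math

def fire_occurrence_probability(t_max, rel_hum, eff_hum, wind):
    """DWI fire-occurrence probability from the integrated national model.

    P = [1 + exp{2.706 + 0.088*Tmax - 0.055*RH - 0.023*EH - 0.104*W}]^-1,
    with coefficients taken verbatim from the abstract.
    """
    z = 2.706 + 0.088 * t_max - 0.055 * rel_hum \
        - 0.023 * eff_hum - 0.104 * wind
    return 1.0 / (1.0 + math.exp(z))
```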
Thompson, C.D.; Krasodomski, H.T.; Lewis, N.; Makar, G.L.
1995-02-22
The Ford/Andresen slip dissolution SCC model, originally developed for stainless steel components in BWR environments, has been applied to Alloy 600 and Alloy X-750 tested in deaerated pure water chemistry. A method is described whereby the crack growth rates measured in compact tension specimens can be used to estimate crack growth in a component. Good agreement was found between model predictions and measured SCC in X-750 threaded fasteners over a wide range of temperatures, stresses, and material conditions. Most data support the model's basic assumption that cracks initiate early in life. The evidence supporting a particular SCC mechanism is mixed. Electrochemical repassivation data and estimates of oxide fracture strain indicate that the slip dissolution model can account for the observed crack growth rates, provided primary rather than secondary creep rates are used. However, approximately 100 cross-sectional TEM foils of SCC cracks, including crack tips, reveal no evidence of enhanced plasticity or unique dislocation patterns at the crack tip or along the crack to support a classic slip dissolution mechanism. No voids, hydrides, or microcracks are found in the vicinity of the crack tips, casting doubt on classic hydrogen-related mechanisms. The bulk oxide films exhibit a surface oxide that is often different from the oxides found within a crack. Although bulk chromium concentration affects the rate of SCC, analytical data indicate that the mechanism does not result from chromium depletion at the grain boundaries. The overall findings support a corrosion/dissolution mechanism, but not one necessarily related to slip at the crack tip.
NASA Technical Reports Server (NTRS)
Rickard, W. W.; Levison, W. H.
1981-01-01
A methodology was demonstrated for assessing longitudinal-axis handling qualities of transport aircraft on the basis of closed-loop criteria. Six longitudinal-axis approach configurations were studied covering a range of handling quality problems that included the presence of flexible aircraft modes. Using closed-loop performance requirements derived from task analyses and pilot interviews, predictions of performance/workload tradeoffs were obtained using an analytical pilot/vehicle model. A subsequent manned simulation study yielded objective performance measures and Cooper-Harper pilot ratings that were largely consistent with each other and with analytic predictions.
NASA Technical Reports Server (NTRS)
Chambers, A. B.; Blackaby, J. R.; Miles, J. B.
1973-01-01
Experimental results for three subjects walking on a treadmill at exercise rates of up to 590 watts showed that thermal comfort could be maintained in a liquid cooled garment by using an automatic temperature controller based on sweat rate. The addition of head- and neck-cooling to an Apollo type liquid cooled garment increased its effectiveness and resulted in greater subjective comfort. The biothermal model of man developed in the second portion of the study utilized heat rates and exchange coefficients based on the experimental data, and included the cooling provisions of a liquid-cooled garment with automatic temperature control based on sweat rate. Simulation results were good approximations of the experimental results.
Turkdogan-Aydinol, F Ilter; Yetilmezsoy, Kaan
2010-10-15
A MIMO (multiple-input, multiple-output) fuzzy-logic-based model was developed to predict biogas and methane production rates in a pilot-scale 90-L mesophilic up-flow anaerobic sludge blanket (UASB) reactor treating molasses wastewater. Five input variables, namely volumetric organic loading rate (OLR), volumetric total chemical oxygen demand (TCOD) removal rate (R(V)), influent alkalinity, influent pH and effluent pH, were fuzzified using an artificial-intelligence-based approach. Trapezoidal membership functions with eight levels were constructed for the fuzzy subsets, and a Mamdani-type fuzzy inference system was used to implement a total of 134 rules in IF-THEN format. The product (prod) method and the centre of gravity (COG, centroid) method were employed as the inference operator and defuzzification method, respectively. Fuzzy-logic predictions were compared with the outputs of two exponential non-linear regression models derived in this study. The UASB reactor showed remarkable performance in the treatment of molasses wastewater, with an average TCOD removal efficiency of 93 (+/-3)% and an average volumetric TCOD removal rate of 6.87 (+/-3.93) kg TCOD(removed)/m(3)-day. Findings of this study clearly indicated that, compared to the non-linear regression models, the proposed MIMO fuzzy-logic-based model produced smaller deviations and exhibited superior predictive performance in forecasting both biogas and methane production rates, with satisfactory determination coefficients above 0.98.
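The Mamdani mechanics (trapezoidal membership, rule inference, aggregation, centroid defuzzification) can be illustrated with a deliberately tiny system. The paper's model has five inputs, eight-level subsets and 134 rules; this sketch uses one input, two invented rules and invented membership breakpoints purely to show the machinery:

```python
import numpy as np

def trapmf(x, a, b, c, d):
    """Trapezoidal membership function evaluated on x (requires a<b, c<d)."""
    x = np.asarray(x, dtype=float)
    return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0.0, 1.0)

def mamdani_biogas_rate(olr, universe=np.linspace(0.0, 30.0, 301)):
    """Toy two-rule Mamdani system mapping OLR to a biogas rate.

    Rule 1: IF OLR is low THEN biogas is low.
    Rule 2: IF OLR is high THEN biogas is high.
    All breakpoints are invented for illustration.
    """
    low = float(trapmf(olr, -1.0, 0.0, 2.0, 5.0))
    high = float(trapmf(olr, 3.0, 6.0, 12.0, 13.0))
    # Min (clipping) inference against the two output subsets.
    out_low = np.minimum(low, trapmf(universe, -1.0, 0.0, 5.0, 12.0))
    out_high = np.minimum(high, trapmf(universe, 8.0, 15.0, 30.0, 31.0))
    agg = np.maximum(out_low, out_high)          # max aggregation
    if agg.sum() == 0.0:
        return 0.0
    return float((universe * agg).sum() / agg.sum())  # centroid (COG)
```

The paper uses the product operator for inference rather than the min shown here; swapping `np.minimum` for multiplication in the inference step reproduces that choice.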
Penloglou, Giannis; Vasileiadou, Athina; Chatzidoukas, Christos; Kiparissides, Costas
2017-08-01
An integrated metabolic-polymerization-macroscopic model, describing the microbial production of polyhydroxybutyrate (PHB) in Azohydromonas lata bacteria, was developed and validated using a comprehensive series of experimental measurements. The model accounted for biomass growth, biopolymer accumulation, carbon and nitrogen source utilization, oxygen mass transfer and uptake rates, and the average molecular weights of the accumulated PHB, produced under batch and fed-batch cultivation conditions. Model predictions were in excellent agreement with experimental measurements. The validated model was subsequently utilized to calculate optimal operating conditions and feeding policies for maximizing PHB productivity at desired PHB molecular properties. More specifically, two optimal fed-batch strategies were calculated and experimentally tested: (1) a nitrogen-limited fed-batch policy and (2) a nitrogen-sufficient one. The calculated optimal operating policies resulted in a maximum PHB content of 94% (g/g) in the cultivated bacteria and a biopolymer productivity of 4.2 g/(l h), respectively. Moreover, it was demonstrated that different PHB grades with weight-average molecular weights of up to 1513 kg/mol could be produced via optimal selection of the bioprocess operating conditions.
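A drastically reduced sketch of the macroscopic-balance layer of such a model, with logistic biomass growth and Luedeking-Piret product formation standing in for the full metabolic-polymerization kinetics (all parameter values invented for illustration):

```python
def simulate_phb_batch(t_end=40.0, dt=0.01, mu=0.25, x_max=20.0,
                       alpha=0.15, beta=0.08, x0=0.1):
    """Euler integration of a reduced biomass/PHB batch model.

    dX/dt = mu * X * (1 - X/x_max)          (logistic growth)
    dP/dt = alpha * dX/dt + beta * X        (Luedeking-Piret)

    Returns final biomass X (g/L) and product P (g/L). This keeps only
    the macroscopic balances; the paper's model additionally tracks
    substrate, oxygen and molecular-weight dynamics.
    """
    x, p = x0, 0.0
    for _ in range(int(t_end / dt)):
        dx = mu * x * (1.0 - x / x_max)
        dp = alpha * dx + beta * x   # growth- and non-growth-associated
        x += dx * dt
        p += dp * dt
    return x, p
```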
Dielectronic recombination rate in statistical model
NASA Astrophysics Data System (ADS)
Demura, A. V.; Leontyev, D. S.; Lisitsa, V. S.; Shurigyn, V. A.
2017-01-01
The dielectronic recombination (DR) rate of multielectron ions was calculated by means of a statistical approach based on the idea of collective excitations of atomic electrons at the local plasma frequencies, with these frequencies expressed via the Thomas-Fermi model electron density distribution. The statistical approach provides fast computation of DR rates, which are compared with modern quantum mechanical calculations. The results are important for current studies of thermonuclear plasmas with tungsten impurities.
NASA Astrophysics Data System (ADS)
Povak, Nicholas A.; Hessburg, Paul F.; McDonnell, Todd C.; Reynolds, Keith M.; Sullivan, Timothy J.; Salter, R. Brion; Cosby, Bernard J.
2014-04-01
Accurate estimates of soil mineral weathering are required for regional critical load (CL) modeling to identify ecosystems at risk of the deleterious effects from acidification. Within a correlative modeling framework, we used modeled catchment-level base cation weathering (BCw) as the response variable to identify key environmental correlates and predict a continuous map of BCw within the southern Appalachian Mountain region. More than 50 initial candidate predictor variables were submitted to a variety of conventional and machine learning regression models. Predictors included aspects of the underlying geology, soils, geomorphology, climate, topographic context, and acidic deposition rates. Low BCw rates were predicted in catchments with low precipitation, siliceous lithology, low soil clay, nitrogen and organic matter contents, and relatively high levels of canopy cover in mixed deciduous and coniferous forest types. Machine learning approaches, particularly random forest modeling, significantly improved model prediction of catchment-level BCw rates over traditional linear regression, with higher model accuracy and lower error rates. Our results confirmed findings from other studies, but also identified several influential climatic predictor variables, interactions, and nonlinearities among the predictors. Results reported here will be used to support regional sulfur critical loads modeling to identify areas impacted by industrially derived atmospheric S inputs. These methods are readily adapted to other regions where accurate CL estimates are required over broad spatial extents to inform policy and management decisions.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 3 2010-10-01 2010-10-01 false Rate base. 65.800 Section 65.800 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) INTERSTATE RATE OF RETURN PRESCRIPTION PROCEDURES AND METHODOLOGIES Rate Base § 65.800 Rate base. The rate base...
NASA Astrophysics Data System (ADS)
Vitale, M.; Matteucci, G.; Fares, S.; Davison, B.
2009-02-01
This paper concerns the application of a process-based model (MOCA, Modelling of Carbon Assessment) as a useful tool for estimating gas exchange, integrating empirical algorithms for the calculation of monoterpene fluxes, in a Mediterranean maquis of central Italy (Castelporziano, Rome). Simulations were carried out for a range of hypothetical but realistic canopies of the evergreens Quercus ilex (holm oak), Arbutus unedo (strawberry tree) and Phillyrea latifolia. Moreover, the dependence of canopy-scale monoterpene fluxes on total leaf area and leaf distribution was considered in the algorithms. Simulation of gas exchange rates showed higher values for P. latifolia and A. unedo (2.39±0.30 and 3.12±0.27 gC m-2 d-1, respectively) than for Q. ilex (1.67±0.08 gC m-2 d-1) during the measuring campaign (May-June). Average Gross Primary Production (GPP) values agreed well with those measured by eddy covariance (7.98±0.20 and 6.00±1.46 gC m-2 d-1, respectively, in May-June), although differences of about 30% were evident in a point-to-point comparison. These differences could be explained by the non-uniformity of the measuring site, where diurnal winds blowing from the S-SW direction affect the calculation of CO2 and water fluxes. The introduction of structural parameters into the monoterpene algorithms made it possible to simulate monoterpene emission rates and fluxes in accord with those measured (6.50±2.25 vs. 9.39±4.5 μg g-1DW h-1 for Q. ilex, and 0.63±0.207 vs. 0.98±0.30 μg g-1DW h-1 for P. latifolia). Some constraints of the MOCA model are discussed, but it is demonstrated to be a useful tool for simulating physiological processes and BVOC fluxes under very complicated plant distributions and environmental conditions, while requiring only a small number of input data.
NASA Astrophysics Data System (ADS)
Dolan, K. A.
2015-12-01
Disturbance plays a critical role in shaping the structure and function of forested ecosystems and the ecosystem services they provide, including but not limited to carbon storage, biodiversity habitat, water quality and flow, and land-atmosphere exchanges of energy and water. In addition, recent studies suggest that disturbance rates may increase in the future under altered climate and land use scenarios, so understanding how vulnerable forested ecosystems are to potential changes in disturbance rates is of high importance. This study calculated the theoretical threshold rate of disturbance above which forest ecosystems could no longer be sustained (λ*) across the conterminous U.S. using an advanced process-based ecosystem model (ED). Published rates of disturbance (λ) for 50 study sites were obtained from the North American Forest Disturbance (NAFD) program. Disturbance distance (λ* - λ) was calculated for each site by differencing the model-based threshold under current climate conditions and the average observed rate of disturbance over the last quarter century. Preliminary results confirm that all sampled forest sites have current average rates of disturbance below λ*, but interesting patterns emerged in the disturbance distances: western sites generally had much smaller disturbance distances, suggesting higher vulnerability to change, while eastern sites showed larger buffers. Ongoing work is assessing the vulnerability of these sites to potential future changes by propagating scenarios of future climate and land-use change through the analysis.
Sun, Yi; Li, Liang; Zhou, Tian-Yan; Lu, Wei
2014-12-01
This study compares the urate-lowering response rates of febuxostat and allopurinol in gout patients using a model-based meta-analysis. The literature search identified 22 clinical trials of gout with a total of 43 unique treatment arms that met our inclusion criteria, covering a total of 6,365 gout patients. The response rates of allopurinol and febuxostat were characterized by a Tmax model and an Emax model, respectively, and the effects of baseline serum uric acid (sUA) and patient type on drug effect were tested. The results showed that allopurinol reaches an average maximum response rate of 50.8%, while febuxostat can reach a 100% response rate within a very short time, with an ED50 of 34.3 mg. Covariate analysis revealed that baseline sUA has a negative effect on the response rate of allopurinol and a positive effect on the predicted ED50 of febuxostat. For patients who had shown an inadequate response to prior allopurinol treatment, the average response rate was about half that of allopurinol responders.
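The febuxostat dose-response is described by an Emax model; with the reported ED50 of 34.3 mg, and taking the reported 100% maximum response rate as Emax, the curve is:

```python
def emax_response(dose, emax=100.0, ed50=34.3):
    """Emax dose-response curve: response = Emax * D / (ED50 + D).

    ED50 = 34.3 mg is reported in the abstract; Emax = 100 (%) reflects
    the reported 100% maximum response rate for febuxostat.
    """
    return emax * dose / (ed50 + dose)
```

By construction, a dose equal to ED50 yields half the maximum response, and the curve saturates toward Emax at high doses.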
Base Rates: Both Neglected and Intuitive
ERIC Educational Resources Information Center
Pennycook, Gordon; Trippas, Dries; Handley, Simon J.; Thompson, Valerie A.
2014-01-01
Base-rate neglect refers to the tendency for people to underweight base-rate probabilities in favor of diagnostic information. It is commonly held that base-rate neglect occurs because effortful (Type 2) reasoning is required to process base-rate information, whereas diagnostic information is accessible to fast, intuitive (Type 1) processing…
Chung, Charles S; Ajo, David M; Kovács, Sándor J
2006-02-01
Transmitral Doppler echocardiography is the preferred method of noninvasive diastolic function assessment. Correlations between catheterization-based measures of isovolumic relaxation (IVR) and transmitral, early rapid filling (Doppler E-wave)-derived parameters have been observed, but no model-based, causal explanation has been offered. IVR has also been characterized in terms of its duration, the IVR time (IVRT), and by tau, the time constant of IVR, obtained by approximating the terminal left ventricular IVR pressure contour as P(t) = P∞ + P0·e^(-t/tau), where P(t) is the left ventricular pressure, P∞ and P0 are constants, t is time, and tau is the time constant of IVR. To characterize the relation between IVR and early rapid filling more fully, simultaneous (micromanometric) left ventricular pressure and transmitral Doppler E-wave data from 25 subjects undergoing elective cardiac catheterization and having normal physiology were analyzed. The time constant tau was determined from the dP/dt vs. P (phase) plane, and simultaneous Doppler E-waves provided global indexes of chamber viscosity/relaxation (c), chamber stiffness (k), and load (xo). We hypothesize that temporal continuity of pressure decay at mitral valve opening and physiological constraints permit the algebraic derivation of linear relations between 1/tau and both the peak atrioventricular pressure gradient (kxo) and the E-wave-derived viscosity/relaxation constant (c), but do not support a similar, causal (linear) relation between deceleration time and tau or IVRT. Both predicted linear relations were observed: kxo to 1/tau (r = 0.71) and viscosity/relaxation to 1/tau (r = 0.71). Similarly, as anticipated, only a weak linear correlation between deceleration time and IVRT or tau was observed (r = 0.41). The observed in vivo relationships provide insight into the isovolumic mechanism of relaxation and the changing-volume mechanism of early rapid filling via a link of the respective relaxation properties.
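The phase-plane determination of tau follows directly from the exponential pressure model: since dP/dt = -(P - P∞)/tau, the dP/dt-vs-P plot is a line of slope -1/tau. A sketch with synthetic pressure data (all values invented):

```python
import numpy as np

def tau_from_phase_plane(t, p):
    """Estimate the IVR time constant tau from the dP/dt-vs-P plane.

    For P(t) = P_inf + P0*exp(-t/tau), dP/dt is linear in P with slope
    -1/tau, so tau falls out of a least-squares line fit.
    """
    dpdt = np.gradient(p, t)
    slope, _intercept = np.polyfit(p, dpdt, 1)
    return -1.0 / slope

# Synthetic IVR segment: tau = 40 ms, P_inf = 5 mmHg, P0 = 60 mmHg.
t = np.linspace(0.0, 0.1, 200)
p = 5.0 + 60.0 * np.exp(-t / 0.040)
```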
NASA Astrophysics Data System (ADS)
Chun, Sejong; Jin, Jonghan; Cho, Wan-Ho
2017-05-01
Wave dispersion is the key feature in understanding pulsating flows in a rigid circular pipe of small diameter. Dispersion distorts the flow signals in pulsating flows through boundary conditions imposed at the pipe surface. A detailed description of this phenomenon can make the Greenfield-Fry model, which relates the pressure gradient to the flow rate in a rigid circular pipe, more practical. Because pressure-gradient measurement requires at least two pressure transducers, the approach would become more practical if only one pressure transducer were needed, which can be achieved by applying Taylor's frozen-field hypothesis. By using a frequency-variant convection velocity to account for wave dispersion, Taylor's frozen-field hypothesis relates the pressure signals to the flow signals according to the Greenfield-Fry model. In this study, Taylor's frozen-field hypothesis is reformulated into a simpler functional form with a frequency-variant convection velocity in a zero-dimensional model, under the assumption of Newtonian, uniform, laminar, axially one-dimensional pulsatile flow. An experiment with a blood flow simulator demonstrates the usefulness of this formulation for predicting the flow signals from the pressure signals with the Greenfield-Fry model. Moreover, the three-element Windkessel model is compared to emphasize the importance of a physical model derived from the Navier-Stokes equations, such as the Greenfield-Fry model, for pulsating flows.
NASA Astrophysics Data System (ADS)
Phanikumar, Mantha S.; McGuire, Jennifer T.
2010-08-01
Push-pull tests are a popular technique for investigating aquifer properties and microbial reaction kinetics in situ. Most previous studies have interpreted push-pull test data using approximate analytical solutions to estimate (generally first-order) reaction rate coefficients. Though useful, these analytical solutions may not be able to describe important complexities in rate data. This paper reports the development of a multi-species, radial-coordinate numerical model (PPTEST) that includes the effects of sorption, reaction lag time and arbitrary reaction-order kinetics to estimate rates in the presence of mixing interfaces such as those created between injected "push" water and native aquifer water. The model can describe an arbitrary number of species and user-defined reaction rate expressions, including Monod/Michaelis-Menten kinetics. The FORTRAN code uses a finite-difference numerical model based on the advection-dispersion-reaction equation and was developed to describe the radial flow and transport during a push-pull test. The accuracy of the numerical solutions was assessed by comparing numerical results with analytical solutions and field data available in the literature. The model described the observed breakthrough data for tracers (chloride and iodide-131) and reactive components (sulfate and strontium-85) well and proved useful for testing hypotheses related to the complex set of processes operating near mixing interfaces.
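For contrast with the numerical model, the simple first-order analytical treatment that PPTEST generalizes can be sketched in a few lines. The standard assumption is that the co-injected conservative tracer corrects for dilution, so ln(C_reactant/C_tracer) decays linearly in time at rate k (the data below are synthetic):

```python
import numpy as np

def first_order_rate(c_reactant, c_tracer, t):
    """First-order rate coefficient from paired reactant/tracer samples.

    Assumes the tracer accounts for dilution during the pull phase, so
    ln(C_r/C_tr) = -k*t; k comes from a least-squares line through the
    origin.
    """
    y = np.log(np.asarray(c_reactant) / np.asarray(c_tracer))
    t = np.asarray(t, dtype=float)
    return -float(np.sum(t * y) / np.sum(t * t))

# Synthetic pull-phase data with k = 0.2 per hour.
t = np.array([1.0, 2.0, 4.0, 8.0])
tracer = np.array([0.9, 0.7, 0.5, 0.3])      # dilution only
reactant = tracer * np.exp(-0.2 * t)          # dilution + reaction
```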
Zhang, Le; Strouthos, Costas G.; Wang, Zhihui; Deisboeck, Thomas S.
2008-01-01
We have extended our previously developed 3D multi-scale agent-based brain tumor model to simulate cancer heterogeneity and to analyze its impact across the scales of interest. While our algorithm continues to employ an epidermal growth factor receptor (EGFR) gene-protein interaction network to determine the cells’ phenotype, it now adds an implicit treatment of tumor cell adhesion related to the model’s biochemical microenvironment. We simulate a simplified tumor progression pathway that leads to the emergence of five distinct glioma cell clones with different EGFR density and cell ‘search precisions’. The in silico results show that microscopic tumor heterogeneity can impact the tumor system’s multicellular growth patterns. Our findings further confirm that EGFR density results in the more aggressive clonal populations switching earlier from proliferation-dominated to a more migratory phenotype. Moreover, analyzing the dynamic molecular profile that triggers the phenotypic switch between proliferation and migration, our in silico oncogenomics data display spatial and temporal diversity in documenting the regional impact of tumorigenesis, and thus support the added value of multi-site and repeated assessments in vitro and in vivo. Potential implications from this in silico work for experimental and computational studies are discussed. PMID:20047002
Rate control based on intermediate description
NASA Astrophysics Data System (ADS)
Liu, Meng; Guo, Yi; Li, Houqiang
2010-07-01
Video adaptation has proved to be an efficient technique for dealing with constraints such as bandwidth limitation and user requirements in multimedia applications. However, existing methods, including scalable video coding and transcoding, perform poorly when bandwidth constraints arise in various scenarios, particularly in real-time applications. In this paper, we propose a novel rate control scheme based on an intermediate description. The proposed scheme provides fast rate control for narrow and time-varying transmission channels in scenarios such as video streaming, video sharing and video on demand. In this scheme, the distribution of Discrete Cosine Transform (DCT) coefficients is modeled by a generalized Gaussian distribution, and the parameters of this model are stored as side information for rate control. With the stored parameters, an encoder or transcoder can achieve the target bit-rate with low complexity. Furthermore, an initial Quantization Parameter (QP) determination method is presented to calculate a proper QP for the Instantaneous Decoding Refresh (IDR) picture. Experimental results show that, compared with JVT-G012 in H.264, the proposed rate control scheme saves more than 85% of encoding time and attains the required bit-rate more precisely, while gaining an average performance improvement of 0.2 dB.
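One standard way to extract generalized-Gaussian side information from DCT coefficients is moment matching on the ratio (E|x|)^2/E[x^2]; the abstract does not specify the estimator used, so this is an illustrative sketch:

```python
import math
import numpy as np

def ggd_shape(coeffs, lo=0.05, hi=10.0, iters=200):
    """Estimate the generalized Gaussian shape parameter v of data.

    Uses the classic moment ratio r = (E|x|)^2 / E[x^2], which for a
    zero-mean GGD equals Gamma(2/v)^2 / (Gamma(1/v)*Gamma(3/v)); that
    expression increases monotonically in v, so it is solved by
    bisection. v = 2 recovers a Gaussian, v = 1 a Laplacian.
    """
    x = np.asarray(coeffs, dtype=float)
    r = np.mean(np.abs(x)) ** 2 / np.mean(x ** 2)
    def f(v):
        return math.gamma(2 / v) ** 2 / (math.gamma(1 / v) * math.gamma(3 / v))
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) < r:
            lo = mid      # ratio too small -> shape must be larger
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The single estimated shape (plus a scale) is exactly the kind of compact per-frame side information the scheme stores in place of the full coefficient histogram.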
Rate process models for thermal welding
NASA Astrophysics Data System (ADS)
Prahl, Scott A.; Pearson, S. D.
1997-06-01
Laser tissue welding is a thermal process for binding two tissues together. Optical and thermal models exist to calculate the temperatures of laser-irradiated tissues; however, a rate process model is required to relate the time-temperature history to a weld strength. This paper proposes a first-order rate process model based on contraction during heating. The entropy and enthalpy associated with contraction of porcine intestine in a water bath were measured and used to calculate the fraction of altered molecules for both water bath and laser welding. Intestine was welded to intestine in a water bath at 60-80 degrees C for seven minutes. Pulsed laser welding used 10-30 pulses and an exogenous chromophore. The yield strengths of the welds were measured and found to correlate roughly with the fraction of altered molecules estimated for both the water bath and laser welds.
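A first-order rate process driven by activation entropy and enthalpy can be sketched with Eyring kinetics integrated over the time-temperature history. The activation values below are placeholders, not the measured porcine-intestine numbers reported in the paper:

```python
import math

KB = 1.380649e-23    # Boltzmann constant, J/K
HP = 6.62607015e-34  # Planck constant, J*s
R = 8.314462618      # gas constant, J/(mol*K)

# Placeholder activation values for tissue contraction -- NOT the measured
# porcine-intestine numbers, which only the full paper reports.
DH = 430e3   # enthalpy of activation, J/mol
DS = 1000.0  # entropy of activation, J/(mol*K)

def rate_constant(temp_k):
    """First-order Eyring rate constant (1/s) at absolute temperature temp_k."""
    return (KB * temp_k / HP) * math.exp(DS / R) * math.exp(-DH / (R * temp_k))

def fraction_altered(temps_k, dt_s):
    """Fraction of altered molecules after a time-temperature history.

    temps_k: one temperature sample per time step; dt_s: step length (s).
    """
    omega = sum(rate_constant(t) * dt_s for t in temps_k)
    return 1.0 - math.exp(-omega)
```

For an isothermal water-bath weld the history is a constant list; for pulsed laser welding the same integral runs over the computed transient temperatures.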
A regression method for modelling geometric rates.
Bottai, Matteo
2015-09-18
The occurrence of an event of interest over time is often summarized by the incidence rate, defined as the average number of events per person-time. This type of rate applies to events that may occur repeatedly over time on any given subject, such as infections, and Poisson regression represents a natural regression method for modelling the effect of covariates on it. However, for events that can occur only once, such as death, the geometric rate may be a better summary measure. The geometric rate has long been utilized in demography for studying the growth of populations and in finance to compute compound interest on capital. This type of rate, however, is virtually unknown to medical research. This may be partly a consequence of the lack of a regression method for it. This paper describes a regression method for modelling the effect of covariates on the geometric rate. The described method is based on applying quantile regression to a transform of the time-to-event variable. The proposed method is used to analyze mortality in a randomized clinical trial and in an observational epidemiological study.
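The contrast between the two summary rates is easy to state numerically. The sketch below illustrates the definitions only; it is not the paper's quantile-regression estimator:

```python
def incidence_rate(events, person_time):
    """Average number of events per unit of person-time (Poisson-style rate)."""
    return events / person_time

def geometric_rate(surviving_fraction, t):
    """Constant per-period event probability g satisfying (1 - g)**t = S(t).

    Suited to one-time events such as death, where the at-risk set shrinks
    each period, analogous to compound interest on a declining balance.
    """
    return 1.0 - surviving_fraction ** (1.0 / t)
```

If 40% of a cohort has died by year 5, the geometric rate is about 9.7% per year; dividing events by total calendar years instead would ignore the shrinking at-risk population.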
NASA Astrophysics Data System (ADS)
Dolan, K. A.; Hurtt, G. C.; Fisk, J.; Flanagan, S.; LePage, Y.; Sahajpal, R.
2014-12-01
Disturbance plays a critical role in shaping the structure and function of forested ecosystems as well as the ecosystem services they provide, including but not limited to: carbon storage, biodiversity habitat, water quality and flow, and land-atmosphere exchanges of energy and water. As recent studies highlight novel disturbance regimes resulting from pollution, invasive pests and climate change, there is a need to include these alterations in predictions of future forest function and structure. The Ecosystem Demography (ED) model is a mechanistic model of forest ecosystem dynamics in which individual-based forest dynamics can be efficiently implemented over regional to global scales due to advanced scaling methods. We utilize ED to characterize the sensitivity of potential vegetation structure and function to changes in rates of density-independent mortality. Disturbance rate within ED can either be altered directly or through the development of sub-models. Disturbance sub-models in ED currently include fire, land use and hurricanes. We use a tiered approach to understand the sensitivity of North American ecosystems to changes in background density-independent mortality. Our first analyses were conducted at half-degree spatial resolution with a constant rate of disturbance in space and time, which was altered between runs. Annual climate was held constant at the site level and the land use and fire sub-models were turned off. Results showed an ~30% increase in non-forest area across the US when disturbance rates were changed from 0.6% a year to 1.2% a year and a more than 3.5-fold increase in non-forest area when disturbance rates doubled again from 1.2% to 2.4%. Continued runs altered natural background disturbance rates with the existing fire and hurricane sub-models turned on, as well as historic and future land use. By quantifying differences between model outputs that characterize ecosystem structure and function related to the carbon cycle across the US, we
Hallam, Brett; Abbott, Malcolm; Nampalli, Nitin; Hamer, Phill; Wenham, Stuart
2016-02-14
A three-state model is used to explore the influence of defect formation and passivation rates on carrier-induced degradation related to boron-oxygen complexes in boron-doped p-type silicon solar cells within a hydrogen-based model. The model highlights that the inability to effectively mitigate carrier-induced degradation at elevated temperatures in previous studies is due to the limited availability of defects for hydrogen passivation, rather than to the defect passivation rate. An acceleration of the defect formation rate is also observed to increase both the effectiveness and the speed of carrier-induced degradation mitigation, whereas increases in the passivation rate do not substantially accelerate the hydrogen passivation process. For high-throughput mitigation of such carrier-induced degradation on finished solar cell devices, two key factors were found to be required: high-injection conditions (such as high-intensity illumination) to accelerate defect formation whilst simultaneously enabling rapid passivation of the formed defects, and a high temperature to accelerate both defect formation and defect passivation whilst still ensuring effective mitigation of carrier-induced degradation.
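The kinetics described can be mimicked by a generic three-state scheme, precursor -> defect -> hydrogen-passivated, integrated here with a simple Euler loop. The rate constants are arbitrary illustrative values, not the paper's fitted ones:

```python
def three_state(n0, k_form, k_pass, t_end, dt=1e-3):
    """Euler integration of precursor (A) -> defect (B) -> passivated (C).

    n0: initial precursor population; k_form, k_pass: first-order rate
    constants for defect formation and hydrogen passivation (1/time).
    """
    a, b, c = n0, 0.0, 0.0
    steps = int(t_end / dt)
    for _ in range(steps):
        da = -k_form * a              # precursors convert to defects
        db = k_form * a - k_pass * b  # defects form and are passivated
        dc = k_pass * b               # passivated complexes accumulate
        a += da * dt
        b += db * dt
        c += dc * dt
    return a, b, c
```

With these illustrative numbers, speeding up formation raises the passivated fraction reached in a fixed anneal time, echoing the abstract's point that defect availability, not the passivation rate, is the bottleneck.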
A New Global Geodetic Strain Rate Model
NASA Astrophysics Data System (ADS)
Kreemer, C. W.; Klein, E. C.; Blewitt, G.; Shen, Z.; Wang, M.; Chamot-Rooke, N. R.; Rabaute, A.
2012-12-01
As part of the Global Earthquake Model (GEM) effort to improve global seismic hazard models, we present a new global geodetic strain rate model. This model (GSRM v. 2) is a vast improvement over the previous model from 2004 (v. 1.2). The model is still based on a finite-element-type approach and has deforming cells in between the assumed rigid plates. While v. 1.2 contained ~25,000 deforming cells of 0.6° by 0.5° dimension, the new model contains >136,000 cells of 0.25° by 0.2° dimension. We redefined the geometries of the deforming zones based on the definitions of Bird (2003) and Chamot-Rooke and Rabaute (2006), and made some adjustments to the grid geometry at places where seismicity and/or GPS velocities suggested the presence of deforming areas that those previous studies did not include. As a result, we assumed some plates/blocks identified by Bird (2003) to be deforming, and the total number of plates and blocks in GSRM v. 2 is 38 (including the Bering block, which Bird (2003) did not consider). GSRM v. 1.2 was based on ~5,200 GPS velocities taken from 86 studies; the new model is based on ~17,000 GPS velocities taken from 170 studies. The GPS velocity field consists of 1) ~4,900 velocities derived by us for GPS stations with publicly available RINEX data and >3.5 years of data, 2) ~1,200 velocities for China from a new analysis of all CMONOC data, and 3) velocities published in the literature or made otherwise available to us. All studies were combined into the same reference frame by a 6-parameter transformation using velocities at collocated stations. Because the goal of the project is to model the interseismic strain rate field, we model co-seismic jumps while estimating velocities, ignore periods of post-seismic deformation, and exclude time series that reflect magmatic and anthropogenic activity. GPS velocities were used to estimate angular velocities for most of the 38 rigid plates and blocks (the rest being taken from the literature), and these were used as boundary
NASA Astrophysics Data System (ADS)
Strasser, U.; Marke, T.
2010-05-01
This paper describes the spreadsheet-based point energy balance model ESCIMO.spread, which simulates the energy and mass balance as well as the melt rates of a snow surface. The model makes use of hourly recordings of temperature, precipitation, wind speed, relative humidity, and global and longwave radiation. The effect of potential climate change on the seasonal evolution of the snow cover can be estimated by modifying the time series of observed temperature and precipitation by means of adjustable parameters. Model output is graphically visualized in hourly and daily diagrams. The results compare well with weekly measured snow water equivalent (SWE). The model is easily portable and adjustable, and runs particularly fast: hourly calculation of one winter season is instantaneous on a standard computer. ESCIMO.spread can be obtained from the authors on request (contact: ulrich.strasser@uni-graz.at).
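The core flux-to-melt conversion at the heart of any such point energy balance model fits in a few lines. The partitioning below is a generic energy-balance sketch, not ESCIMO.spread's exact formulation:

```python
LATENT_HEAT_FUSION = 3.34e5  # latent heat of fusion of ice, J/kg

def melt_rate_mm_per_hour(shortwave_net, longwave_net, sensible, latent):
    """Hourly melt (mm water equivalent) for a ripe snowpack at 0 degC.

    All fluxes in W/m^2, positive toward the snow surface.
    """
    q = shortwave_net + longwave_net + sensible + latent
    if q <= 0.0:
        return 0.0  # energy deficit: the pack cools or refreezes, no melt
    # 1 kg/m^2 of meltwater over one hour equals 1 mm water equivalent
    return q / LATENT_HEAT_FUSION * 3600.0
```

Summing the hourly melt and subtracting it from accumulated precipitation-as-snow yields the seasonal SWE curve that the model compares against weekly measurements.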
Gao, Min-Jie; Zheng, Zhi-Yong; Wu, Jian-Rong; Dong, Shi-Juan; Li, Zhen; Jin, Hu; Zhan, Xiao-Bei; Lin, Chi-Chung
2012-02-01
Effective expression of porcine interferon-α (pIFN-α) with recombinant Pichia pastoris was conducted in a bench-scale fermentor, and the influence of the glycerol feeding strategy on the specific growth rate and protein production was investigated. The traditional DO-stat feeding strategy led to a very low cell growth rate, resulting in a low dry cell weight (DCW) of about 90 g/L during the subsequent induction phase. The previously reported Artificial Neural Network Pattern Recognition (ANNPR) model-based glycerol feeding strategy improved the cell density to 120 g DCW/L, but the specific growth rate decreased from 0.15-0.18 to 0.03-0.08 h(-1) during the last 10 h of the glycerol feeding stage, leading to variation in pIFN-α production, as the glycerol feeding scheme had a significant effect on the induction phase. This problem was resolved by an improved ANNPR model-based feeding strategy that maintained the specific growth rate above 0.11 h(-1). With this feeding strategy, the pIFN-α concentration reached 1.43 g/L, more than 1.5-fold higher than that obtained with the previously adopted strategy. Our results showed that increasing the specific growth rate favored target protein production and that the glycerol feeding method directly influenced the induction stage. Consequently, higher cell density and specific growth rate, as well as effective pIFN-α production, were achieved with our novel glycerol feeding strategy.
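The abstract does not give the ANNPR feeding law itself; as a point of reference, the standard open-loop exponential feed that holds a target specific growth rate in fed-batch culture looks like this (all parameter values are illustrative assumptions, not the study's):

```python
import math

def feed_rate(t_h, mu, x0_g_per_l, v0_l, yxs, s_feed_g_per_l):
    """Feed rate (L/h) that holds specific growth rate mu (1/h) in fed-batch.

    x0_g_per_l, v0_l: biomass concentration and volume at feed start;
    yxs: biomass yield on substrate (g/g);
    s_feed_g_per_l: substrate concentration in the feed solution.
    """
    # substrate demand grows exponentially with the (exponential) biomass
    return (mu * x0_g_per_l * v0_l / (yxs * s_feed_g_per_l)) * math.exp(mu * t_h)
```

Holding mu at, say, 0.11 h(-1) requires the feed to grow by a factor of e^(0.11 t) over t hours; a model-based scheme such as ANNPR effectively adapts this profile to the measured culture state.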
Li, Dan; Wang, Xia; Dey, Dipak K
2016-09-01
Our present work proposes a new survival model in a Bayesian context to analyze right-censored survival data for populations with a surviving fraction, assuming that the log failure time follows a generalized extreme value distribution. Many applications require a more flexible modeling of covariate information than a simple linear or parametric form for all covariate effects. It is also necessary to include spatial variation in the model, since it is sometimes unexplained by the covariates considered in the analysis. Therefore, the nonlinear covariate effects and the spatial effects are incorporated into the systematic component of our model. Gaussian processes (GPs) provide a natural framework for modeling potentially nonlinear relationships and have recently become extremely powerful in nonlinear regression. Our proposed model adopts a semiparametric Bayesian approach by imposing a GP prior on the nonlinear structure of the continuous covariates. With consideration of data availability and computational complexity, a conditionally autoregressive distribution is placed on the region-specific frailties to handle spatial correlation. The flexibility and gains of our proposed model are illustrated through analyses of simulated data examples as well as a dataset involving a colon cancer clinical trial from the state of Iowa.
Multiplicative earthquake likelihood models incorporating strain rates
NASA Astrophysics Data System (ADS)
Rhoades, D. A.; Christophersen, A.; Gerstenberger, M. C.
2017-01-01