Sample records for rate model based

  1. Geodesy- and geology-based slip-rate models for the Western United States (excluding California) national seismic hazard maps

    USGS Publications Warehouse

    Petersen, Mark D.; Zeng, Yuehua; Haller, Kathleen M.; McCaffrey, Robert; Hammond, William C.; Bird, Peter; Moschetti, Morgan; Shen, Zhengkang; Bormann, Jayne; Thatcher, Wayne

    2014-01-01

    The 2014 National Seismic Hazard Maps for the conterminous United States incorporate more uncertainty in the fault slip-rate parameter that controls earthquake-activity rates than was applied in previous versions of the hazard maps. This additional uncertainty is accounted for by new geodesy- and geology-based slip-rate models for the Western United States. Models that were considered include an updated geologic model based on expert opinion and four combined inversion models informed by both geologic and geodetic input. The two block models considered indicate significantly higher slip rates than the expert-opinion model and the two fault-based combined inversion models. For the hazard maps, we apply a total weight of 20 percent, split equally between the two fault-based models. Off-fault geodetic-based models were not considered in this version of the maps. Resulting changes to the hazard maps are generally less than 0.05 g (acceleration of gravity). Future research will improve the maps and interpret differences between the new models.

  2. Reacting Chemistry Based Burn Model for Explosive Hydrocodes

    NASA Astrophysics Data System (ADS)

    Schwaab, Matthew; Greendyke, Robert; Steward, Bryan

    2017-06-01

    Currently, in hydrocodes designed to simulate explosive material undergoing shock-induced ignition, the state of the art is to use one of numerous reaction burn rate models. These burn models are designed to estimate the bulk chemical reaction rate. Unfortunately, these models are largely based on empirical data and must be recalibrated for every new material being simulated. We propose that the use of an equilibrium Arrhenius-rate reacting chemistry model in place of these empirically derived burn models will improve the accuracy of these computational codes. Such models have been successfully used in codes simulating the flow physics around hypersonic vehicles. A reacting chemistry model of this form was developed for the cyclic nitramine RDX by the Naval Research Laboratory (NRL). Initial implementation of this chemistry-based burn model has been conducted on the Air Force Research Laboratory's MPEXS multi-phase continuum hydrocode. In its present form, the burn rate is based on the destruction rate of RDX from NRL's chemistry model. Early results using the chemistry-based burn model show promise in capturing deflagration-to-detonation features more accurately in continuum hydrocodes than previously achieved using empirically derived burn models.
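
    The abstract contrasts empirically recalibrated burn models with an Arrhenius-form reacting chemistry model. As a minimal sketch of the general idea, assuming illustrative placeholder parameters rather than NRL's RDX mechanism:

    ```python
    import numpy as np

    R_GAS = 8.314  # universal gas constant, J/(mol*K)

    def arrhenius_rate(temperature_k, pre_exp=1.0e12, activation_j_mol=2.0e5):
        """Arrhenius rate constant k(T) = A * exp(-Ea / (R*T)); A and Ea are
        illustrative placeholders, not the NRL RDX values."""
        return pre_exp * np.exp(-activation_j_mol / (R_GAS * temperature_k))

    def rdx_destruction_rate(rho_rdx, temperature_k):
        """First-order bulk destruction rate of the explosive, d(rho)/dt."""
        return -arrhenius_rate(temperature_k) * rho_rdx

    # Hotter cells react dramatically faster; this strong temperature
    # sensitivity is what empirical burn models must be recalibrated to
    # mimic for each new material.
    for T in (1000.0, 1500.0, 2000.0):
        print(T, rdx_destruction_rate(1600.0, T))
    ```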

  3. Acid–base chemical reaction model for nucleation rates in the polluted atmospheric boundary layer

    PubMed Central

    Chen, Modi; Titcombe, Mari; Jiang, Jingkun; Jen, Coty; Kuang, Chongai; Fischer, Marc L.; Eisele, Fred L.; Siepmann, J. Ilja; Hanson, David R.; Zhao, Jun; McMurry, Peter H.

    2012-01-01

    Climate models show that particles formed by nucleation can affect cloud cover and, therefore, the earth's radiation budget. Measurements worldwide show that nucleation rates in the atmospheric boundary layer are positively correlated with concentrations of sulfuric acid vapor. However, current nucleation theories do not correctly predict either the observed nucleation rates or their functional dependence on sulfuric acid concentrations. This paper develops an alternative approach for modeling nucleation rates, based on a sequence of acid–base reactions. The model uses empirical estimates of sulfuric acid evaporation rates obtained from new measurements of neutral molecular clusters. The model predicts that nucleation rates equal the sulfuric acid vapor collision rate times a prefactor that is less than unity and that depends on the concentrations of basic gaseous compounds and preexisting particles. Predicted nucleation rates and their dependence on sulfuric acid vapor concentrations are in reasonable agreement with measurements from Mexico City and Atlanta. PMID:23091030
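
    The model's central relation can be written compactly. A sketch in notation chosen here (the symbols are shorthand, not the paper's own):

    ```latex
    % J      = nucleation rate
    % k_coll = collision rate coefficient of sulfuric acid vapor
    % P      = prefactor (< 1) depending on gaseous bases [B] and the
    %          condensation sink CS of preexisting particles
    \[
      J \;=\; P([\mathrm{B}],\,\mathrm{CS}) \times k_{\mathrm{coll}}\,[\mathrm{H_2SO_4}]^2,
      \qquad 0 < P < 1 .
    \]
    ```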

  4. Acid-base chemical reaction model for nucleation rates in the polluted atmospheric boundary layer.

    PubMed

    Chen, Modi; Titcombe, Mari; Jiang, Jingkun; Jen, Coty; Kuang, Chongai; Fischer, Marc L; Eisele, Fred L; Siepmann, J Ilja; Hanson, David R; Zhao, Jun; McMurry, Peter H

    2012-11-13

    Climate models show that particles formed by nucleation can affect cloud cover and, therefore, the earth's radiation budget. Measurements worldwide show that nucleation rates in the atmospheric boundary layer are positively correlated with concentrations of sulfuric acid vapor. However, current nucleation theories do not correctly predict either the observed nucleation rates or their functional dependence on sulfuric acid concentrations. This paper develops an alternative approach for modeling nucleation rates, based on a sequence of acid-base reactions. The model uses empirical estimates of sulfuric acid evaporation rates obtained from new measurements of neutral molecular clusters. The model predicts that nucleation rates equal the sulfuric acid vapor collision rate times a prefactor that is less than unity and that depends on the concentrations of basic gaseous compounds and preexisting particles. Predicted nucleation rates and their dependence on sulfuric acid vapor concentrations are in reasonable agreement with measurements from Mexico City and Atlanta.

  5. Search algorithm complexity modeling with application to image alignment and matching

    NASA Astrophysics Data System (ADS)

    DelMarco, Stephen

    2014-05-01

    Search algorithm complexity modeling, in the form of penetration rate estimation, provides a useful way to estimate search efficiency in application domains which involve searching over a hypothesis space of reference templates or models, as in model-based object recognition, automatic target recognition, and biometric recognition. The penetration rate quantifies the expected portion of the database that must be searched, and is useful for estimating search algorithm computational requirements. In this paper we perform mathematical modeling to derive general equations for penetration rate estimates that are applicable to a wide range of recognition problems. We extend previous penetration rate analyses to use more general probabilistic modeling assumptions. In particular we provide penetration rate equations within the framework of a model-based image alignment application domain in which a prioritized hierarchical grid search is used to rank subspace bins based on matching probability. We derive general equations, and provide special cases based on simplifying assumptions. We show how previously-derived penetration rate equations are special cases of the general formulation. We apply the analysis to model-based logo image alignment in which a hierarchical grid search is used over a geometric misalignment transform hypothesis space. We present numerical results validating the modeling assumptions and derived formulation.
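
    As a toy illustration of the quantity being modeled (not DelMarco's derivation), the penetration rate is the expected fraction of the database examined when bins of the hypothesis space are searched in ranked order:

    ```python
    import numpy as np

    def penetration_rate(p, sizes):
        """p[i]: probability the match lies in bin i (bins sorted best-first);
        sizes[i]: fraction of the database occupied by bin i."""
        p = np.asarray(p, dtype=float)
        p = p / p.sum()                    # normalize to a proper distribution
        cum = np.cumsum(sizes)             # fraction searched through bin i
        return float(np.sum(p * cum))      # expected searched fraction

    # A good ranking concentrates probability in early, small bins:
    print(penetration_rate([0.6, 0.3, 0.1], [0.1, 0.3, 0.6]))  # 0.28
    print(penetration_rate([0.1, 0.3, 0.6], [0.1, 0.3, 0.6]))  # 0.73 (poor ranking)
    ```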

  6. Integration of Continuous-Time Dynamics in a Spiking Neural Network Simulator.

    PubMed

    Hahne, Jan; Dahmen, David; Schuecker, Jannis; Frommer, Andreas; Bolten, Matthias; Helias, Moritz; Diesmann, Markus

    2017-01-01

    Contemporary modeling approaches to the dynamics of neural networks include two important classes of models: biologically grounded spiking neuron models and functionally inspired rate-based units. We present a unified simulation framework that supports the combination of the two for multi-scale modeling, enables the quantitative validation of mean-field approaches by spiking network simulations, and provides an increase in reliability by usage of the same simulation code and the same network model specifications for both model classes. While most spiking simulations rely on the communication of discrete events, rate models require time-continuous interactions between neurons. Exploiting the conceptual similarity to the inclusion of gap junctions in spiking network simulations, we arrive at a reference implementation of instantaneous and delayed interactions between rate-based models in a spiking network simulator. The separation of rate dynamics from the general connection and communication infrastructure ensures flexibility of the framework. In addition to the standard implementation we present an iterative approach based on waveform-relaxation techniques to reduce communication and increase performance for large-scale simulations of rate-based models with instantaneous interactions. Finally we demonstrate the broad applicability of the framework by considering various examples from the literature, ranging from random networks to neural-field models. The study provides the prerequisite for interactions between rate-based and spiking models in a joint simulation.
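
    For intuition about the time-continuous interactions described here, a generic rate network of the kind such frameworks integrate can be sketched with Euler-Maruyama integration (a stand-alone toy, not the NEST implementation):

    ```python
    import numpy as np

    # tau * dx/dt = -x + W @ tanh(x) + noise  (a generic rate-based network)
    rng = np.random.default_rng(0)
    N, steps, dt, tau, sigma = 100, 1000, 0.1, 10.0, 0.5
    W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))    # random coupling matrix

    x = np.zeros(N)
    for _ in range(steps):
        drift = (-x + W @ np.tanh(x)) / tau
        x += dt * drift + sigma * np.sqrt(dt / tau) * rng.normal(size=N)

    print(x[:5])   # sample of the final rates
    ```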

  7. Integration of Continuous-Time Dynamics in a Spiking Neural Network Simulator

    PubMed Central

    Hahne, Jan; Dahmen, David; Schuecker, Jannis; Frommer, Andreas; Bolten, Matthias; Helias, Moritz; Diesmann, Markus

    2017-01-01

    Contemporary modeling approaches to the dynamics of neural networks include two important classes of models: biologically grounded spiking neuron models and functionally inspired rate-based units. We present a unified simulation framework that supports the combination of the two for multi-scale modeling, enables the quantitative validation of mean-field approaches by spiking network simulations, and provides an increase in reliability by usage of the same simulation code and the same network model specifications for both model classes. While most spiking simulations rely on the communication of discrete events, rate models require time-continuous interactions between neurons. Exploiting the conceptual similarity to the inclusion of gap junctions in spiking network simulations, we arrive at a reference implementation of instantaneous and delayed interactions between rate-based models in a spiking network simulator. The separation of rate dynamics from the general connection and communication infrastructure ensures flexibility of the framework. In addition to the standard implementation we present an iterative approach based on waveform-relaxation techniques to reduce communication and increase performance for large-scale simulations of rate-based models with instantaneous interactions. Finally we demonstrate the broad applicability of the framework by considering various examples from the literature, ranging from random networks to neural-field models. The study provides the prerequisite for interactions between rate-based and spiking models in a joint simulation. PMID:28596730

  8. Estimation of inlet flow rates for image-based aneurysm CFD models: where and how to begin?

    PubMed

    Valen-Sendstad, Kristian; Piccinelli, Marina; KrishnankuttyRema, Resmi; Steinman, David A

    2015-06-01

    Patient-specific flow rates are rarely available for image-based computational fluid dynamics models. Instead, flow rates are often assumed to scale according to the diameters of the arteries of interest. Our goal was to determine how choice of inlet location and scaling law affect such model-based estimation of inflow rates. We focused on 37 internal carotid artery (ICA) aneurysm cases from the Aneurisk cohort. An average ICA flow rate of 245 mL min⁻¹ was assumed from the literature, and then rescaled for each case according to its inlet diameter squared (assuming a fixed velocity) or cubed (assuming a fixed wall shear stress). Scaling was based on diameters measured at various consistent anatomical locations along the models. Choice of location introduced a modest 17% average uncertainty in model-based flow rate, but within individual cases estimated flow rates could vary by >100 mL min⁻¹. A square law was found to be more consistent with physiological flow rates than a cube law. Although the impact of parent artery truncation on downstream flow patterns is well studied, our study highlights a more insidious and potentially equal impact of truncation site and scaling law on the uncertainty of assumed inlet flow rates and thus, potentially, downstream flow patterns.
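
    The two scaling laws being compared reduce to a one-line rescaling of a literature-average flow rate. A sketch, with the average diameter below chosen only for illustration:

    ```python
    Q_AVG = 245.0   # mL/min, average ICA flow rate assumed from the literature
    D_AVG = 4.5     # mm, assumed population-average inlet diameter (illustrative)

    def scaled_inflow(d_mm, n):
        """Case-specific inflow estimate from inlet diameter d_mm.
        n = 2: square law (fixed velocity); n = 3: cube law (fixed wall shear)."""
        return Q_AVG * (d_mm / D_AVG) ** n

    d = 5.2  # measured inlet diameter of one model, mm
    print(scaled_inflow(d, 2))   # square law
    print(scaled_inflow(d, 3))   # cube law, more sensitive to diameter
    ```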

  9. Rain-rate data base development and rain-rate climate analysis

    NASA Technical Reports Server (NTRS)

    Crane, Robert K.

    1993-01-01

    The single-year rain-rate distribution data available within the archives of Consultative Committee for International Radio (CCIR) Study Group 5 were compiled into a data base for use in rain-rate climate modeling and for the preparation of predictions of attenuation statistics. The four-year set of tip-time sequences provided by J. Goldhirsh for locations near Wallops Island was processed to compile monthly and annual distributions of rain rate and of event durations for intervals above and below preset thresholds. A four-year data set of tropical rain-rate tip-time sequences was acquired from the NASA TRMM program for 30 gauges near Darwin, Australia. They were also processed for inclusion in the CCIR data base and the expanded data base for monthly observations at the University of Oklahoma. The empirical rain-rate distribution functions (EDFs) accepted for inclusion in the CCIR data base were used to estimate parameters for several rain-rate distribution models: the lognormal model, the Crane two-component model, and the three-parameter model proposed by Moupfuma. The intent of this segment of the study is to obtain a limited set of parameters that can be mapped globally for use in rain attenuation predictions. If the form of the distribution can be established, then perhaps available climatological data can be used to estimate the parameters rather than requiring years of rain-rate observations to set the parameters. The two-component model provided the best fit to the Wallops Island data, but the Moupfuma model provided the best fit to the Darwin data.
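
    One of the candidate models, the lognormal rain-rate distribution, can be fitted to tip-derived rain rates in a few lines; the data below are synthetic stand-ins, not the CCIR archive:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    rates = rng.lognormal(mean=1.0, sigma=1.2, size=5000)  # synthetic mm/h values

    # Fit a two-parameter lognormal (location pinned at zero).
    shape, loc, scale = stats.lognorm.fit(rates, floc=0.0)

    # Exceedance probability P(R > 25 mm/h) under the fitted model, the kind
    # of statistic rain attenuation predictions are built on:
    print(stats.lognorm.sf(25.0, shape, loc=loc, scale=scale))
    ```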

  10. Modeling of Diffusion Based Correlations Between Heart Rate Modulations and Respiration Pattern

    DTIC Science & Technology

    2001-10-25

    MODELING OF DIFFUSION BASED CORRELATIONS BETWEEN HEART RATE MODULATIONS AND RESPIRATION PATTERN. R. Langer, Y. Smorzik, S. Akselrod. (Fragmentary indexed excerpt:) ... generations of the bronchial tree. The second stage describes the oxygen diffusion process from the pulmonary gas in the alveoli into the pulmonary ... patterns (FRC, TV, rate). Keywords: modeling, diffusion, heart rate fluctuations. I. INTRODUCTION: Under a whole-body management perception, the ...

  11. Cybernetic modeling based on pathway analysis for Penicillium chrysogenum fed-batch fermentation.

    PubMed

    Geng, Jun; Yuan, Jingqi

    2010-08-01

    A macrokinetic model employing cybernetic methodology is proposed to describe mycelium growth and penicillin production. Based on the primordial and complete metabolic network of Penicillium chrysogenum found in the literature, the modeling procedure is guided by metabolic flux analysis and the cybernetic modeling framework. The abstracted cybernetic model describes the transients of the consumption rates of the substrates, the assimilation rates of intermediates, the biomass growth rate, as well as the penicillin formation rate. Combined with the bioreactor model, these reaction rates are linked with the most important state variables, i.e., mycelium, substrate, and product concentrations. The simplex method is used to estimate the sensitive parameters of the model. Finally, validation of the model is carried out with 20 batches of industrial-scale penicillin cultivation.

  12. Development and corroboration of a bioenergetics model for northern pikeminnow (Ptychocheilus oregonensis) feeding on juvenile salmonids in the Columbia River

    USGS Publications Warehouse

    Petersen, J.H.; Ward, D.L.

    1999-01-01

    A bioenergetics model was developed and corroborated for northern pikeminnow Ptychocheilus oregonensis, an important predator on juvenile salmonids in the Pacific Northwest. Predictions of modeled predation rate on salmonids were compared with field data from three areas of John Day Reservoir (Columbia River). To make bioenergetics model estimates of predation rate, three methods were used to approximate the change in mass of average predators during 30-d growth periods: observed change in mass between the first and the second month, predicted change in mass calculated with seasonal growth rates, and predicted change in mass based on an annual growth model. For all reservoir areas combined, bioenergetics model predictions of predation on salmon were 19% lower than field estimates based on observed masses, 45% lower than estimates based on seasonal growth rates, and 15% lower than estimates based on the annual growth model. For each growth approach, the largest differences in field-versus-model predation occurred at the midreservoir area (-84% to -67% difference). Model predictions of the rate of predation on salmonids were examined for sensitivity to parameter variation, swimming speed, sampling bias caused by gear selectivity, and asymmetric size distributions of predators. The specific daily growth rate of northern pikeminnow predicted by the model was highest in July and October and decreased during August. The bioenergetics model for northern pikeminnow performed well compared with models for other fish species that have been tested with field data. This model should be a useful tool for evaluating management actions such as predator removal, examining the influence of temperature on predation rates, and exploring interactions between predators in the Columbia River basin.

  13. A Kinetic Model Describing Injury-Burden in Team Sports.

    PubMed

    Fuller, Colin W

    2017-12-01

    Injuries in team sports are normally characterised by the incidence, severity, and location and type of injuries sustained: these measures, however, do not provide an insight into the variable injury burden experienced during a season. Injury burden varies according to the team's match and training loads, the rate at which injuries are sustained, and the time taken for these injuries to resolve. At the present time, this time-based variation of injury burden has not been modelled. The aims of this study were to develop a kinetic model describing the time-based injury burden experienced by teams in elite team sports and to demonstrate the model's utility. Rates of injury were quantified using a large eight-season database of rugby injuries (5253) and exposure (60,085 player-match-hours) in English professional rugby. Rates of recovery from injury were quantified using time-to-recovery analysis of the injuries. The kinetic model proposed for predicting a team's time-based injury burden is based on a composite rate equation developed from the incidence of injury, a first-order rate of recovery from injury, and the team's playing load. The utility of the model was demonstrated by examining common scenarios encountered in elite rugby. The kinetic model developed describes and predicts the variable injury burden arising from match play during a season of rugby union based on the incidence of match injuries, the rate of recovery from injury, and the playing load. The model is equally applicable to other team sports and other scenarios.
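
    One plausible reading of the composite rate equation, in notation invented here rather than taken from the paper:

    ```latex
    % B(t) = injury burden, i = injury incidence per unit playing load,
    % L(t) = playing load, k = first-order recovery rate constant.
    \[
      \frac{dB}{dt} \;=\; i\,L(t) \;-\; k\,B(t),
      \qquad
      B_{\mathrm{steady}} \;=\; \frac{i\,L}{k} \quad \text{for constant } L .
    \]
    ```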

  14. 3D modeling and characterization of a calorimetric flow rate sensor for sweat rate sensing applications

    NASA Astrophysics Data System (ADS)

    Iftekhar, Ahmed Tashfin; Ho, Jenny Che-Ting; Mellinger, Axel; Kaya, Tolga

    2017-03-01

    Sweat-based physiological monitoring has been intensively explored in the last decade with the hope of developing real-time hydration monitoring devices. Although the content of sweat (electrolytes, lactate, urea, etc.) provides significant information about the physiology, it is also very important to know the sweat rate at the time of sweat content measurements, because the sweat rate is known to alter the concentrations of sweat compounds. We developed a calorimetry-based flow rate sensor using polydimethylsiloxane (PDMS) that is suitable for sweat rate applications. Our simple approach of using temperature-based flow rate detection can easily be adapted to multiple sweat collection and analysis devices. Moreover, we have developed a 3D finite element analysis model of the device using COMSOL Multiphysics™ and verified the flow rate measurements. The experiment investigated flow rate values from 0.3 μl/min up to 2.1 ml/min, which covers the human sweat rate range (0.5 μl/min-10 μl/min). The 3D model simulations and analytical model calculations covered an even wider range in order to understand the main physical mechanisms of the device. With a verified 3D model, different environmental heat conditions could be further studied to shed light on the physiology of the sweat rate.

  15. ADHD bifactor model based on parent and teacher ratings of Malaysian children.

    PubMed

    Gomez, Rapson

    2014-04-01

    The study used confirmatory factor analysis to ascertain support for the bifactor model of Attention Deficit/Hyperactivity Disorder (ADHD) symptoms, based on parent and teacher ratings for a group of Malaysian children. Malaysian parents and teachers completed ratings of ADHD and Oppositional Defiant Disorder (ODD) symptoms for 934 children. For both sets of ratings, the findings indicated a good fit for the bifactor model, and the factors in this model showed differential associations with ODD, thereby supporting the internal and external validity of this model. The theoretical and clinical implications of the findings are discussed. Copyright © 2013 Elsevier B.V. All rights reserved.

  16. Exploring the relationships among performance-based functional ability, self-rated disability, perceived instrumental support, and depression: a structural equation model analysis.

    PubMed

    Weil, Joyce; Hutchinson, Susan R; Traxler, Karen

    2014-11-01

    Data from the Women's Health and Aging Study were used to test a model of factors explaining depressive symptomology. The primary purpose of the study was to explore the association between performance-based measures of functional ability and depression and to examine the role of self-rated physical difficulties and perceived instrumental support in mediating the relationship between performance-based functioning and depression. The inclusion of performance-based measures allows for the testing of functional ability as a clinical precursor to disability and depression: a critical, but rarely examined, association in the disablement process. Structural equation modeling supported the overall fit of the model and found an indirect relationship between performance-based functioning and depression, with perceived physical difficulties serving as a significant mediator. Our results highlight the complementary nature of performance-based and self-rated measures and the importance of including perception of self-rated physical difficulties when examining depression in older persons. © The Author(s) 2014.

  17. Recent topographic evolution and erosion of the deglaciated Washington Cascades inferred from a stochastic landscape evolution model

    NASA Astrophysics Data System (ADS)

    Moon, Seulgi; Shelef, Eitan; Hilley, George E.

    2015-05-01

    In this study, we model postglacial surface processes and examine the evolution of the topography and denudation rates within the deglaciated Washington Cascades to understand the controls on and time scales of landscape response to changes in the surface process regime after deglaciation. The postglacial adjustment of this landscape is modeled using a geomorphic-transport-law-based numerical model that includes processes of river incision, hillslope diffusion, and stochastic landslides. The surface lowering due to landslides is parameterized using a physically based slope stability model coupled to a stochastic model of the generation of landslides. The model parameters of river incision and stochastic landslides are calibrated based on the rates and distribution of thousand-year time scale denudation rates measured from cosmogenic ¹⁰Be isotopes. The probability distributions of those model parameters calculated based on a Bayesian inversion scheme show comparable ranges from previous studies in similar rock types and climatic conditions. The magnitude of landslide denudation rates is determined by failure density (similar to landslide frequency), whereas precipitation and slopes affect the spatial variation in landslide denudation rates. Simulation results show that postglacial denudation rates decay over time and take longer than 100 kyr to reach time-invariant rates. Over time, the landslides in the model consume the steep slopes characteristic of deglaciated landscapes. This response time scale is on the order of or longer than glacial/interglacial cycles, suggesting that frequent climatic perturbations during the Quaternary may produce a significant and prolonged impact on denudation and topography.

  18. [Prediction of schistosomiasis infection rates of population based on ARIMA-NARNN model].

    PubMed

    Ke-Wei, Wang; Yu, Wu; Jin-Ping, Li; Yu-Yu, Jiang

    2016-07-12

    To explore the effect of the autoregressive integrated moving average-nonlinear autoregressive neural network (ARIMA-NARNN) model on predicting schistosomiasis infection rates of the population. The ARIMA model, NARNN model and ARIMA-NARNN model were established based on monthly schistosomiasis infection rates from January 2005 to February 2015 in Jiangsu Province, China. The fitting and prediction performances of the three models were compared. Compared to the ARIMA model and NARNN model, the mean square error (MSE), mean absolute error (MAE) and mean absolute percentage error (MAPE) of the ARIMA-NARNN model were the least, with values of 0.0111, 0.0900 and 0.2824, respectively. The ARIMA-NARNN model could effectively fit and predict schistosomiasis infection rates of the population, which might have a great application value for the prevention and control of schistosomiasis.

  19. A thermal NOx prediction model - Scalar computation module for CFD codes with fluid and kinetic effects

    NASA Technical Reports Server (NTRS)

    Mcbeath, Giorgio; Ghorashi, Bahman; Chun, Kue

    1993-01-01

    A thermal NOx prediction model is developed to interface with a k-epsilon based CFD code. A converged solution from the CFD code is the input to the postprocessing model for prediction of thermal NOx. The model uses a decoupled analysis to estimate the equilibrium level (NOx)e, which is the constant-rate limit. This value is used to estimate the flame NOx and in turn predict the rate of formation at each node using a two-step Zeldovich mechanism. The rate is fixed on the NOx production-rate plot by estimating the time to reach equilibrium by a differential analysis based on the reaction O + N2 = NO + N. The rate is integrated in the nonequilibrium time space based on the residence time at each node in the computational domain. The sum of all nodal predictions yields the total NOx level.
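
    The two-step Zeldovich mechanism referenced above, with the standard quasi-steady result for the thermal NO formation rate (the Arrhenius expressions for the rate constants are omitted here):

    ```latex
    \begin{align*}
      \mathrm{O} + \mathrm{N_2} &\rightleftharpoons \mathrm{NO} + \mathrm{N} && (k_1)\\
      \mathrm{N} + \mathrm{O_2} &\rightleftharpoons \mathrm{NO} + \mathrm{O} && (k_2)\\
      \frac{d[\mathrm{NO}]}{dt} &\approx 2\,k_1\,[\mathrm{O}]\,[\mathrm{N_2}]
    \end{align*}
    ```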

  20. Concepts, challenges, and successes in modeling thermodynamics of metabolism.

    PubMed

    Cannon, William R

    2014-01-01

    The modeling of the chemical reactions involved in metabolism is a daunting task. Ideally, the modeling of metabolism would use kinetic simulations, but these simulations require knowledge of the thousands of rate constants involved in the reactions. The measurement of rate constants is very labor intensive, and hence rate constants for most enzymatic reactions are not available. Consequently, constraint-based flux modeling has been the method of choice because it does not require the use of the rate constants of the law of mass action. However, this convenience also limits the predictive power of constraint-based approaches in that the law of mass action is used only as a constraint, making it difficult to predict metabolite levels or energy requirements of pathways. An alternative to both of these approaches is to model metabolism using simulations of states rather than simulations of reactions, in which the state is defined as the set of all metabolite counts or concentrations. While kinetic simulations model reactions based on the likelihood of the reaction derived from the law of mass action, states are modeled based on likelihood ratios of mass action. Both approaches provide information on the energy requirements of metabolic reactions and pathways. However, modeling states rather than reactions has the advantage that the parameters needed to model states (chemical potentials) are much easier to determine than the parameters needed to model reactions (rate constants). Herein, we discuss recent results, assumptions, and issues in using simulations of state to model metabolism.

  1. What Are Error Rates for Classifying Teacher and School Performance Using Value-Added Models?

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley S.

    2013-01-01

    This article addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using a realistic performance measurement system scheme based on hypothesis testing, the authors develop error rate formulas based on ordinary least squares and…

  2. Direct coupling of a genome-scale microbial in silico model and a groundwater reactive transport model.

    PubMed

    Fang, Yilin; Scheibe, Timothy D; Mahadevan, Radhakrishnan; Garg, Srinath; Long, Philip E; Lovley, Derek R

    2011-03-25

    The activity of microorganisms often plays an important role in dynamic natural attenuation or engineered bioremediation of subsurface contaminants, such as chlorinated solvents, metals, and radionuclides. To evaluate and/or design bioremediated systems, quantitative reactive transport models are needed. State-of-the-art reactive transport models often ignore the microbial effects or simulate the microbial effects with static growth yield and constant reaction rate parameters over simulated conditions, while in reality microorganisms can dynamically modify their functionality (such as utilization of alternative respiratory pathways) in response to spatial and temporal variations in environmental conditions. Constraint-based genome-scale microbial in silico models, using genomic data and multiple-pathway reaction networks, have been shown to be able to simulate transient metabolism of some well studied microorganisms and identify growth rate, substrate uptake rates, and byproduct rates under different growth conditions. These rates can be identified and used to replace specific microbially-mediated reaction rates in a reactive transport model using local geochemical conditions as constraints. We previously demonstrated the potential utility of integrating a constraint-based microbial metabolism model with a reactive transport simulator as applied to bioremediation of uranium in groundwater. However, that work relied on an indirect coupling approach that was effective for initial demonstration but may not be extensible to more complex problems that are of significant interest (e.g., communities of microbial species and multiple constraining variables). Here, we extend that work by presenting and demonstrating a method of directly integrating a reactive transport model (FORTRAN code) with constraint-based in silico models solved with the IBM ILOG CPLEX linear optimizer base system (a C library). The models were integrated with BABEL, a language interoperability tool. The modeling system is designed in such a way that constraint-based models targeting different microorganisms or competing organism communities can be easily plugged into the system. Constraint-based modeling is very costly given the size of a genome-scale reaction network. To save computation time, a binary tree is traversed to examine the concentration and solution pool generated during the simulation in order to decide whether the constraint-based model should be called. We also show preliminary results from the integrated model, including a comparison of the direct and indirect coupling approaches and an evaluation of the ability of the approach to simulate a field experiment. Published by Elsevier B.V.

  3. A New Seismic Hazard Model for Mainland China

    NASA Astrophysics Data System (ADS)

    Rong, Y.; Xu, X.; Chen, G.; Cheng, J.; Magistrale, H.; Shen, Z. K.

    2017-12-01

    We are developing a new seismic hazard model for Mainland China by integrating historical earthquake catalogs, geological faults, geodetic GPS data, and geology maps. To build the model, we construct an Mw-based homogeneous historical earthquake catalog spanning from 780 B.C. to present, create fault models from active fault data, and derive a strain rate model based on the most complete GPS measurements and a new strain derivation algorithm. We divide China and the surrounding regions into about 20 large seismic source zones. For each zone, a tapered Gutenberg-Richter (TGR) magnitude-frequency distribution is used to model the seismic activity rates. The a- and b-values of the TGR distribution are calculated using observed earthquake data, while the corner magnitude is constrained independently using the seismic moment rate inferred from the geodetically-based strain rate model. Small and medium sized earthquakes are distributed within the source zones following the location and magnitude patterns of historical earthquakes. Some of the larger earthquakes are distributed onto active faults, based on their geological characteristics such as slip rate, fault length, down-dip width, and various paleoseismic data. The remaining larger earthquakes are then placed into the background. A new set of magnitude-rupture scaling relationships is developed based on earthquake data from China and vicinity. We evaluate and select appropriate ground motion prediction equations by comparing them with observed ground motion data and performing residual analysis. To implement the modeling workflow, we develop a tool that builds upon the functionalities of GEM's Hazard Modeler's Toolkit. The GEM OpenQuake software is used to calculate seismic hazard at various ground motion periods and various return periods. To account for site amplification, we construct a site condition map based on geology. The resulting new seismic hazard maps can be used for seismic risk analysis and management.
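
    For reference, the tapered Gutenberg-Richter distribution takes the following form when written for seismic moment (notation assumed here; the abstract does not spell it out):

    ```latex
    % G(M) = fraction of events with moment exceeding M, M_t = completeness
    % threshold, beta = index related to the b-value, M_c = corner moment
    % constrained by the geodetic moment rate.
    \[
      G(M) \;=\; \left(\frac{M_t}{M}\right)^{\beta}
      \exp\!\left(\frac{M_t - M}{M_c}\right), \qquad M \ge M_t .
    \]
    ```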

  4. A watershed scale spatially-distributed model for streambank erosion rate driven by channel curvature

    NASA Astrophysics Data System (ADS)

    McMillan, Mitchell; Hu, Zhiyong

    2017-10-01

    Streambank erosion is a major source of fluvial sediment, but few large-scale, spatially distributed models exist to quantify streambank erosion rates. We introduce a spatially distributed model for streambank erosion applicable to sinuous, single-thread channels. We argue that such a model can adequately characterize streambank erosion rates, measured at the outsides of bends over a 2-year time period, throughout a large region. The model is based on the widely used excess-velocity equation and comprises three components: a physics-based hydrodynamic model, a large-scale 1-dimensional model of average monthly discharge, and an empirical bank erodibility parameterization. The hydrodynamic submodel requires inputs of channel centerline, slope, width, depth, friction factor, and a scour factor A; the large-scale watershed submodel utilizes watershed-averaged monthly outputs of the Noah-2.8 land surface model; bank erodibility is based on tree cover and bank height as proxies for root density. The model was calibrated with erosion rates measured in sand-bed streams throughout the northern Gulf of Mexico coastal plain. The calibrated model outperforms a purely empirical model, as well as a model based only on excess velocity, illustrating the utility of combining a physics-based hydrodynamic model with an empirical bank erodibility relationship. The model could be improved by incorporating spatial variability in channel roughness and the hydrodynamic scour factor, which are here assumed constant. A reach-scale application of the model is illustrated on ∼1 km of a medium-sized, mixed forest-pasture stream, where the model identifies streambank erosion hotspots on forested and non-forested bends.
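
    The excess-velocity relation at the model's core can be stated in one line (symbols assumed here):

    ```latex
    % M   = bank migration (erosion) rate,
    % u_b = near-bank velocity, U = reach-average velocity,
    % E   = dimensionless bank erodibility coefficient.
    \[
      M \;=\; E\,(u_b - U)
    \]
    ```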

  5. Among-character rate variation distributions in phylogenetic analysis of discrete morphological characters.

    PubMed

    Harrison, Luke B; Larsson, Hans C E

    2015-03-01

    Likelihood-based methods are commonplace in phylogenetic systematics. Although much effort has been directed toward likelihood-based models for molecular data, comparatively less work has addressed models for discrete morphological character (DMC) data. Among-character rate variation (ACRV) may confound phylogenetic analysis, but there have been few analyses of the magnitude and distribution of rate heterogeneity among DMCs. Using 76 data sets covering a range of plants, invertebrate, and vertebrate animals, we used a modified version of MrBayes to test equal, gamma-distributed and lognormally distributed models of ACRV, integrating across phylogenetic uncertainty using Bayesian model selection. We found that in approximately 80% of data sets, unequal-rates models outperformed equal-rates models, especially among larger data sets. Moreover, although most data sets were equivocal, more data sets favored the lognormal rate distribution relative to the gamma rate distribution, lending some support for more complex character correlations than in molecular data. Parsimony estimation of the underlying rate distributions in several data sets suggests that the lognormal distribution is preferred when there are many slowly evolving characters and fewer quickly evolving characters. The commonly adopted four rate category discrete approximation used for molecular data was found to be sufficient to approximate a gamma rate distribution with discrete characters. However, among the two data sets tested that favored a lognormal rate distribution, the continuous distribution was better approximated with at least eight discrete rate categories. Although the effect of rate model on the estimation of topology was difficult to assess across all data sets, it appeared relatively minor between the unequal-rates models for the one data set examined carefully. As in molecular analyses, we argue that researchers should test and adopt the most appropriate model of rate variation for the data set in question. As discrete characters are increasingly used in more sophisticated likelihood-based phylogenetic analyses, it is important that these studies be built on the most appropriate and carefully selected underlying models of evolution. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
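
    The "four rate category discrete approximation" mentioned above can be reproduced in a few lines using the median method (after Yang's discrete-gamma construction); this is a generic sketch, not the modified MrBayes code:

    ```python
    import numpy as np
    from scipy import stats

    def discrete_gamma_rates(alpha, k=4):
        """k equal-probability rate categories for a mean-one gamma
        distribution with shape alpha, using category medians."""
        quantiles = (2 * np.arange(k) + 1) / (2 * k)
        rates = stats.gamma.ppf(quantiles, alpha, scale=1.0 / alpha)
        return rates * k / rates.sum()     # renormalize so the mean is 1

    print(discrete_gamma_rates(0.5, k=4))  # strong among-character variation
    print(discrete_gamma_rates(0.5, k=8))  # finer approximation (8+ categories)
    ```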

  6. Temperature-dependence of biomass accumulation rates during secondary succession.

    PubMed

    Anderson, Kristina J; Allen, Andrew P; Gillooly, James F; Brown, James H

    2006-06-01

    Rates of ecosystem recovery following disturbance affect many ecological processes, including carbon cycling in the biosphere. Here, we present a model that predicts the temperature dependence of the biomass accumulation rate following disturbances in forests. Model predictions are derived based on allometric and biochemical principles that govern plant energetics and are tested using a global database of 91 studies of secondary succession compiled from the literature. The rate of biomass accumulation during secondary succession increases with average growing season temperature as predicted based on the biochemical kinetics of photosynthesis in chloroplasts. In addition, the rate of biomass accumulation is greater in angiosperm-dominated communities than in gymnosperm-dominated ones and greater in plantations than in naturally regenerating stands. By linking the temperature-dependence of photosynthesis to the rate of whole-ecosystem biomass accumulation during secondary succession, our model and results provide one example of how emergent, ecosystem-level rate processes can be predicted based on the kinetics of individual metabolic rate.
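
    The kinetic prediction being tested is the standard metabolic-theory form, sketched here in notation assumed by this editor:

    ```latex
    % r = biomass accumulation rate, T = absolute growing-season temperature,
    % E = effective activation energy of photosynthesis, k = Boltzmann's constant.
    \[
      r(T) \;\propto\; e^{-E/(kT)}
    \]
    ```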

  7. Behavioral modeling of VCSELs for high-speed optical interconnects

    NASA Astrophysics Data System (ADS)

    Szczerba, Krzysztof; Kocot, Chris

    2018-02-01

    Transition from on-off keying to 4-level pulse amplitude modulation (PAM) in VCSEL-based optical interconnects allows for an increase in data rates, at the cost of a 4.8 dB sensitivity penalty. The resulting strained link budget creates a need for accurate VCSEL models for driver integrated circuit (IC) design and system-level simulations. Rate-equation-based equivalent circuit models are convenient for IC design, but system-level analysis requires computationally efficient closed-form behavioral models based on Volterra series and neural networks. In this paper we present and compare these models.

  8. Assessing the prediction accuracy of cure in the Cox proportional hazards cure model: an application to breast cancer data.

    PubMed

    Asano, Junichi; Hirakawa, Akihiro; Hamada, Chikuma

    2014-01-01

    A cure rate model is a survival model incorporating the cure rate with the assumption that the population contains both uncured and cured individuals. It is a powerful statistical tool for prognostic studies, especially in cancer. The cure rate is important for making treatment decisions in clinical practice. The proportional hazards (PH) cure model can predict the cure rate for each patient. It contains a logistic regression component for the cure rate and a Cox regression component to estimate the hazard for uncured patients. A measure for quantifying the predictive accuracy of the cure rate estimated by the Cox PH cure model is required, as there has been a lack of previous research in this area. We used the Cox PH cure model for the breast cancer data; however, the area under the receiver operating characteristic curve (AUC) could not be estimated, because many patients were censored. In this study, we used imputation-based AUCs to assess the predictive accuracy of the cure rate from the PH cure model. We examined the precision of these AUCs using simulation studies. The results demonstrated that the imputation-based AUCs were estimable and their biases were negligibly small in many cases, although the ordinary AUC could not be estimated. Additionally, we introduced a bias-correction method for imputation-based AUCs and found that the bias-corrected estimate successfully compensated for the overestimation in the simulation studies. We also illustrated the estimation of the imputation-based AUCs using the breast cancer data. Copyright © 2014 John Wiley & Sons, Ltd.
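
    The structure of the PH cure (mixture) model described above, in notation chosen here:

    ```latex
    % pi(x)      = cure probability from the logistic component,
    % S_u(t | z) = survival of uncured patients from the Cox component,
    % S_0(t)     = baseline survival.
    \begin{align*}
      S(t \mid x, z) &= \pi(x) + \bigl(1 - \pi(x)\bigr)\, S_u(t \mid z),\\
      \pi(x) &= \frac{\exp(\gamma^{\top} x)}{1 + \exp(\gamma^{\top} x)},
      \qquad
      S_u(t \mid z) = S_0(t)^{\exp(\beta^{\top} z)} .
    \end{align*}
    ```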

  9. Are Plant Species Able to Keep Pace with the Rapidly Changing Climate?

    PubMed Central

    Cunze, Sarah; Heydel, Felix; Tackenberg, Oliver

    2013-01-01

    Future climate change is predicted to advance faster than the postglacial warming. Migration may therefore become a key driver for future development of biodiversity and ecosystem functioning. For 140 European plant species we computed past range shifts since the last glacial maximum and future range shifts for a variety of Intergovernmental Panel on Climate Change (IPCC) scenarios and global circulation models (GCMs). Range shift rates were estimated by means of species distribution modelling (SDM). With process-based seed dispersal models we estimated species-specific migration rates for 27 dispersal modes addressing dispersal by wind (anemochory) for different wind conditions, as well as dispersal by mammals (dispersal on animal's coat – epizoochory and dispersal by animals after feeding and digestion – endozoochory) considering different animal species. Our process-based modelled migration rates generally exceeded the postglacial range shift rates indicating that the process-based models we used are capable of predicting migration rates that are in accordance with realized past migration. For most of the considered species, the modelled migration rates were considerably lower than the expected future climate change induced range shift rates. This implies that most plant species will not entirely be able to follow future climate-change-induced range shifts due to dispersal limitation. Animals with large day- and home-ranges are highly important for achieving high migration rates for many plant species, whereas anemochory is relevant for only few species. PMID:23894290

  10. [Establishing and applying of autoregressive integrated moving average model to predict the incidence rate of dysentery in Shanghai].

    PubMed

    Li, Jian; Wu, Huan-Yu; Li, Yan-Ting; Jin, Hui-Ming; Gu, Bao-Ke; Yuan, Zheng-An

    2010-01-01

    To explore the feasibility of establishing and applying an autoregressive integrated moving average (ARIMA) model to predict the incidence rate of dysentery in Shanghai, so as to provide a theoretical basis for the prevention and control of dysentery. An ARIMA model was established based on the monthly incidence rate of dysentery in Shanghai from 1990 to 2007. The parameters of the model were estimated through the unconditional least squares method, the structure was determined according to criteria of residual uncorrelation and conclusion, and the model goodness-of-fit was determined through the Akaike information criterion (AIC) and Schwarz Bayesian criterion (SBC). The constructed optimal model was applied to predict the incidence rate of dysentery in Shanghai in 2008 and to evaluate the validity of the model by comparing the predicted incidence rate with the actual one. The incidence rate of dysentery in 2010 was predicted by the ARIMA model based on the incidence rate from January 1990 to June 2009. The model ARIMA(1,1,1)(0,1,2)_12 fitted the incidence rate well, with the autoregressive coefficient (AR1 = 0.443), the moving average coefficient (MA1 = 0.806), and the seasonal moving average coefficients (SMA1 = 0.543, SMA2 = 0.321) all statistically significant (P < 0.01). AIC and SBC were 2.878 and 16.131, respectively, and the prediction error was white noise. The fitted model was (1 - 0.443B)(1 - B)(1 - B^12)Z_t = (1 - 0.806B)(1 - 0.543B^12)(1 - 0.321B^24)μ_t. The predicted incidence rate in 2008 was consistent with the actual one, with a relative error of 6.78%. The predicted incidence rate of dysentery in 2010, based on the incidence rate from January 1990 to June 2009, would be 9.390 per 100 000. The ARIMA model can be used to fit the changes in the incidence rate of dysentery and to forecast the future incidence rate in Shanghai. It is a prediction model of high precision for short-term forecasting.
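
    The reported seasonal structure is straightforward to refit with standard tools. A sketch using statsmodels on synthetic monthly data (the Shanghai series itself is not reproduced here):

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    rng = np.random.default_rng(42)
    idx = pd.date_range("1990-01", periods=234, freq="MS")  # Jan 1990 - Jun 2009
    y = pd.Series(10 + np.sin(np.arange(234) * 2 * np.pi / 12)
                  + rng.normal(0, 0.3, 234), index=idx)     # stand-in incidence

    # ARIMA(1,1,1)(0,1,2)_12, the structure reported in the abstract.
    fit = SARIMAX(y, order=(1, 1, 1), seasonal_order=(0, 1, 2, 12)).fit(disp=False)
    print(fit.aic)
    print(fit.forecast(steps=12))  # forecast the next 12 months
    ```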

  11. Comparison of field theory models of interest rates with market data

    NASA Astrophysics Data System (ADS)

    Baaquie, Belal E.; Srikant, Marakani

    2004-03-01

    We calibrate and test several variants of field theory models of the interest rate with data from Eurodollar futures. Models based on psychological factors are seen to provide the best fit to the market. We make a model-independent determination of the volatility function of the forward rates from market data.

  12. Testing the Predictive Power of Coulomb Stress on Aftershock Sequences

    NASA Astrophysics Data System (ADS)

    Woessner, J.; Lombardi, A.; Werner, M. J.; Marzocchi, W.

    2009-12-01

    Empirical and statistical models of clustered seismicity are usually strongly stochastic and perceived to be uninformative in their forecasts, since only marginal distributions are used, such as the Omori-Utsu and Gutenberg-Richter laws. In contrast, so-called physics-based aftershock models, based on seismic rate changes calculated from Coulomb stress changes and rate-and-state friction, make more specific predictions: anisotropic stress shadows and multiplicative rate changes. We test the predictive power of models based on Coulomb stress changes against statistical models, including the popular Short Term Earthquake Probabilities and Epidemic-Type Aftershock Sequences models: We score and compare retrospective forecasts on the aftershock sequences of the 1992 Landers, USA, the 1997 Colfiorito, Italy, and the 2008 Selfoss, Iceland, earthquakes. To quantify predictability, we use likelihood-based metrics that test the consistency of the forecasts with the data, including modified and existing tests used in prospective forecast experiments within the Collaboratory for the Study of Earthquake Predictability (CSEP). Our results indicate that a statistical model performs best. Moreover, two Coulomb model classes seem unable to compete: Models based on deterministic Coulomb stress changes calculated from a given fault-slip model, and those based on fixed receiver faults. One model of Coulomb stress changes does perform well and sometimes outperforms the statistical models, but its predictive information is diluted, because of uncertainties included in the fault-slip model. Our results suggest that models based on Coulomb stress changes need to incorporate stochastic features that represent model and data uncertainty.

  13. Quantification of biodegradation for o-xylene and naphthalene using first order decay models, Michaelis-Menten kinetics and stable carbon isotopes.

    PubMed

    Blum, Philipp; Hunkeler, Daniel; Weede, Matthias; Beyer, Christof; Grathwohl, Peter; Morasch, Barbara

    2009-04-01

    At a former wood preservation plant severely contaminated with coal tar oil, in situ bulk attenuation and biodegradation rate constants for several monoaromatic (BTEX) and polyaromatic hydrocarbons (PAH) were determined using (1) classical first-order decay models, (2) Michaelis-Menten degradation kinetics (MM), and (3) stable carbon isotopes, for o-xylene and naphthalene. The first-order bulk attenuation rate constant for o-xylene was calculated to be 0.0025 d⁻¹, and a novel stable isotope-based first-order model, which also accounted for the respective redox conditions, resulted in a slightly smaller biodegradation rate constant of 0.0019 d⁻¹. Based on MM kinetics, the o-xylene concentration decreased with a maximum rate of k_max = 0.1 μg/L/d. The bulk attenuation rate constant of naphthalene retrieved from the classical first-order decay model was 0.0038 d⁻¹. The stable isotope-based biodegradation rate constant of 0.0027 d⁻¹ was smaller in the reduced zone, while residual naphthalene in the oxic part of the plume further downgradient was degraded at a higher rate of 0.0038 d⁻¹. With MM kinetics a maximum degradation rate of k_max = 12 μg/L/d was determined. Although best fits were obtained by MM kinetics, we consider the carbon stable isotope-based approach more appropriate, as it is specific for biodegradation (not overall attenuation) and at the same time accounts for the dominant electron-accepting process. For o-xylene a field-based isotope enrichment factor ε_field of -1.4 could be determined using the Rayleigh model, which closely matched values from laboratory studies of o-xylene degradation under sulfate-reducing conditions.
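
    The Rayleigh model used to tie the isotope signal to biodegradation can be written as follows (notation assumed here; ε is the enrichment factor, ε_field = -1.4 for o-xylene above):

    ```latex
    % f = remaining fraction of the compound, R and R_0 = current and initial
    % carbon isotope ratios, epsilon = enrichment factor (in per mil).
    \[
      \ln\frac{R}{R_0} \;=\; \frac{\varepsilon}{1000}\,\ln f
      \quad\Longrightarrow\quad
      f \;=\; \left(\frac{R}{R_0}\right)^{1000/\varepsilon} .
    \]
    ```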

  14. Probabilistic estimation of residential air exchange rates for population-based human exposure modeling

    EPA Science Inventory

    Residential air exchange rates (AERs) are a key determinant in the infiltration of ambient air pollution indoors. Population-based human exposure models using probabilistic approaches to estimate personal exposure to air pollutants have relied on input distributions from AER meas...

  15. History, Epidemic Evolution, and Model Burn-In for a Network of Annual Invasion: Soybean Rust.

    PubMed

    Sanatkar, M R; Scoglio, C; Natarajan, B; Isard, S A; Garrett, K A

    2015-07-01

    Ecological history may be an important driver of epidemics and disease emergence. We evaluated the role of history and two related concepts, the evolution of epidemics and the burn-in period required for fitting a model to epidemic observations, for the U.S. soybean rust epidemic (caused by Phakopsora pachyrhizi). This disease allows evaluation of replicate epidemics because the pathogen reinvades the United States each year. We used a new maximum likelihood estimation approach for fitting the network model based on observed U.S. epidemics. We evaluated the model burn-in period by comparing model fit across each combination of other observation years. When miss error rates were weighted by 0.9 and false-alarm error rates by 0.1, the mean error rate declined, for most years, as more years were used to construct models. Models based on observations in years closer in time to the season being estimated gave lower miss error rates for later epidemic years. The weighted mean error rate was lower in backcasting than in forecasting, reflecting how the epidemic had evolved. Ongoing epidemic evolution, and potential model failure, can occur because of changes in climate, host resistance and spatial patterns, or pathogen evolution.

  16. The contagious nature of imprisonment: an agent-based model to explain racial disparities in incarceration rates.

    PubMed

    Lum, Kristian; Swarup, Samarth; Eubank, Stephen; Hawdon, James

    2014-09-06

    We build an agent-based model of incarceration based on the susceptible-infected-susceptible (SIS) model of infectious disease propagation. Our central hypothesis is that the observed racial disparities in incarceration rates between Black and White Americans can be explained as the result of differential sentencing between the two demographic groups. We demonstrate that if incarceration can be spread through a social influence network, then even relatively small differences in sentencing can result in large disparities in incarceration rates. Controlling for effects of transmissibility, susceptibility and influence network structure, our model reproduces the observed large disparities in incarceration rates given the differences in sentence lengths for White and Black drug offenders in the USA without extensive parameter tuning. We further establish the suitability of the SIS model as applied to incarceration by demonstrating that the observed structural patterns of recidivism are an emergent property of the model. In fact, our model shows a remarkably close correspondence with California incarceration data. This work advances efforts to combine the theories and methods of epidemiology and criminology.
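
    A toy version of the underlying SIS dynamics on a social influence network, with a group-specific release rate standing in for differential sentencing (all parameters illustrative, not the paper's calibration):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    N, steps, beta = 500, 200, 0.05            # agents, time steps, transmissibility
    A = (rng.random((N, N)) < 0.02).astype(float)
    A = np.triu(A, 1); A = A + A.T             # symmetric random influence ties
    group = rng.random(N) < 0.5                # two demographic groups
    gamma = np.where(group, 0.08, 0.12)        # release rates: longer vs shorter sentences
    state = rng.random(N) < 0.01               # initially incarcerated agents

    for _ in range(steps):
        pressure = A @ state                   # number of incarcerated neighbors
        infect = (rng.random(N) < 1 - (1 - beta) ** pressure) & ~state
        release = (rng.random(N) < gamma) & state
        state = (state | infect) & ~release

    # Small differences in release rate can produce large disparities in prevalence:
    print(state[group].mean(), state[~group].mean())
    ```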

  17. Finite-Rate Ablation Boundary Conditions for Carbon-Phenolic Heat-Shield

    NASA Technical Reports Server (NTRS)

    Chen, Y.-K.; Milos, Frank S.

    2003-01-01

    A formulation of finite-rate ablation surface boundary conditions, including oxidation, nitridation, and sublimation of carbonaceous material with pyrolysis gas injection, has been developed based on surface species mass conservation. These surface boundary conditions are discretized and integrated with a Navier-Stokes solver. This numerical procedure can predict aerothermal heating, chemical species concentration, and carbonaceous material ablation rate over the heatshield surface of re-entry space vehicles. In this study, the gas-gas and gas-surface interactions are established for air flow over a carbon-phenolic heatshield. Two finite-rate gas-surface interaction models are considered in the present study. The first model is based on the work of Park, and the second model includes the kinetics suggested by Zhluktov and Abe. Nineteen gas phase chemical reactions and four gas-surface interactions are considered in the present model. There is a total of fourteen gas phase chemical species, including five species for air and nine species for ablation products. Three test cases are studied in this paper. The first case is a graphite test model in the arc-jet stream; the second is a lightweight Phenolic Impregnated Carbon Ablator at the Stardust re-entry peak heating conditions, and the third is a fully dense carbon-phenolic heatshield at the peak heating point of a proposed Mars Sample Return Earth Entry Vehicle. Predictions based on both finite-rate gas-surface interaction models are compared with those obtained using B' tables, which were created based on the chemical equilibrium assumption. Stagnation point convective heat fluxes predicted using Park's finite-rate model are far below those obtained from chemical equilibrium B' tables and Zhluktov's model. Recession predictions from Zhluktov's model are generally lower than those obtained from Park's model and chemical equilibrium B' tables. The effect of species mass diffusion on predicted ablation rate is also examined.

  18. Uncertainty about fundamentals and herding behavior in the FOREX market

    NASA Astrophysics Data System (ADS)

    Kaltwasser, Pablo Rovira

    2010-03-01

    It is traditionally assumed in finance models that the fundamental value of assets is known with certainty. Although this is an appealing simplifying assumption, it is by no means based on empirical evidence. A simple heterogeneous agent model of the exchange rate is presented. In the model, traders do not observe the true underlying fundamental exchange rate and as a consequence they base their trades on beliefs about this variable. Despite the fact that only fundamentalist traders operate in the market, the model belongs to the heterogeneous agent literature, as traders have different beliefs about the fundamental rate.

  19. Safety Changes in the US Vehicle Fleet since Model Year 1990, Based on NASS Data

    PubMed Central

    Eigen, Ana Maria; Digges, Kennerly; Samaha, Randa Radwan

    2012-01-01

    Based on the National Automotive Sampling System Crashworthiness Data System since the 1988–1992 model years, there has been a reduction in the MAIS 3+ injury rate and the mean HARM for all crash modes. The largest improvement in vehicle safety has been in rollovers. There was an increase in the rollover injury rate in the 1993–1998 model year period, but a reduction since then. When comparing vehicles of the model years 1993 to 1998 with later model vehicles, the most profound difference was the reduction of rollover frequency for SUVs – down more than 20% when compared to other crash modes. When considering only model years since 2002, the rollover frequency reduction was nearly 40%. A 26% reduction in the rate of moderate and serious injuries for all drivers in rollovers was observed for the model years later than 1998. The overall belt use rate for drivers of late model vehicles with HARM-weighted injuries was 62%, up from 54% in earlier model vehicles. However, in rollover crashes, the same belt use rate lagged at 54%. PMID:23169134

  20. A fault‐based model for crustal deformation in the western United States based on a combined inversion of GPS and geologic inputs

    USGS Publications Warehouse

    Zeng, Yuehua; Shen, Zheng-Kang

    2017-01-01

    We develop a crustal deformation model to determine fault-slip rates for the western United States (WUS) using the Zeng and Shen (2014) method that is based on a combined inversion of Global Positioning System (GPS) velocities and geological slip-rate constraints. The model consists of six blocks with boundaries aligned along major faults in California and the Cascadia subduction zone, which are represented as buried dislocations in the Earth. Faults distributed within blocks have their geometrical structure and locking depths specified by the Uniform California Earthquake Rupture Forecast, version 3 (UCERF3) and the 2008 U.S. Geological Survey National Seismic Hazard Map Project model. Faults slip beneath a predefined locking depth, except for a few segments where shallow creep is allowed. The slip rates are estimated using a least-squares inversion. The model resolution analysis shows that the resulting model is influenced heavily by geologic input, which fits the UCERF3 geologic bounds on California B faults and ± one-half of the geologic slip rates for most other WUS faults. The modeled slip rates for the WUS faults are consistent with the observed GPS velocity field. Our fit to these velocities is measured in terms of a normalized chi-square, which is 6.5. This updated model fits the data better than most other geodetic-based inversion models. Major discrepancies between well-resolved GPS inversion rates and geologic-consensus rates occur along some of the northern California A faults, the Mojave to San Bernardino segments of the San Andreas fault, the western Garlock fault, the southern segment of the Wasatch fault, and other faults. Off-fault strain-rate distributions are consistent with regional tectonics, with total off-fault moment rates of 7.2×10^18 and 8.5×10^18 N·m/yr for California and the WUS outside California, respectively.
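
    A toy version of a combined inversion, under the simplifying assumption (not from the paper) that it reduces to stacking GPS observation equations with weighted geologic slip-rate constraints into one least-squares system. The Green's functions, slip rates, and weights below are synthetic; the weight w_geo plays the role of the geologic bounds that dominate the published model.

        import numpy as np

        # Toy combined inversion: solve G s = d for fault slip rates s, where d
        # stacks GPS velocities (via kernel G_gps) and geologic constraints.
        rng = np.random.default_rng(0)
        n_faults, n_gps = 4, 12
        G_gps = rng.normal(size=(n_gps, n_faults))   # elastic kernels (made up)
        s_true = np.array([5.0, 2.0, 8.0, 1.0])      # mm/yr
        d_gps = G_gps @ s_true + rng.normal(0.0, 0.3, n_gps)
        s_geo = np.array([4.5, 2.5, 7.0, 1.2])       # geologic slip-rate estimates
        w_geo = 1.0                                  # weight on geologic input

        # stacking: the weight controls how strongly geology pulls the model
        G = np.vstack([G_gps, w_geo * np.eye(n_faults)])
        d = np.concatenate([d_gps, w_geo * s_geo])
        s_hat, *_ = np.linalg.lstsq(G, d, rcond=None)
        print("recovered slip rates (mm/yr):", np.round(s_hat, 2))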

  1. A study of the thermoregulatory characteristics of a liquid-cooled garment with automatic temperature control based on sweat rate: Experimental investigation and biothermal man-model development

    NASA Technical Reports Server (NTRS)

    Chambers, A. B.; Blackaby, J. R.; Miles, J. B.

    1973-01-01

    Experimental results for three subjects walking on a treadmill at exercise rates of up to 590 watts showed that thermal comfort could be maintained in a liquid cooled garment by using an automatic temperature controller based on sweat rate. The addition of head- and neck-cooling to an Apollo type liquid cooled garment increased its effectiveness and resulted in greater subjective comfort. The biothermal model of man developed in the second portion of the study utilized heat rates and exchange coefficients based on the experimental data, and included the cooling provisions of a liquid-cooled garment with automatic temperature control based on sweat rate. Simulation results were good approximations of the experimental results.

  2. Toward Building a New Seismic Hazard Model for Mainland China

    NASA Astrophysics Data System (ADS)

    Rong, Y.; Xu, X.; Chen, G.; Cheng, J.; Magistrale, H.; Shen, Z.

    2015-12-01

    At present, the only publicly available seismic hazard model for mainland China was generated by the Global Seismic Hazard Assessment Program in 1999. We are building a new seismic hazard model by integrating historical earthquake catalogs, geological faults, geodetic GPS data, and geology maps. To build the model, we construct an Mw-based homogeneous historical earthquake catalog spanning from 780 B.C. to the present, create fault models from active fault data using the methodology recommended by the Global Earthquake Model (GEM), and derive a strain rate map based on the most complete GPS measurements and a new strain derivation algorithm. We divide China and the surrounding regions into about 20 large seismic source zones based on seismotectonics. For each zone, we use the tapered Gutenberg-Richter (TGR) relationship to model the seismicity rates. We estimate the TGR a- and b-values from the historical earthquake data, and constrain the corner magnitude using the seismic moment rate derived from the strain rate. From the TGR distributions, 10,000 to 100,000 years of synthetic earthquakes are simulated. Then, we distribute small and medium earthquakes according to locations and magnitudes of historical earthquakes. Some large earthquakes are distributed on active faults based on characteristics of the faults, including slip rate, fault length and width, and paleoseismic data, and the rest are distributed to the background based on the distributions of historical earthquakes and strain rate. We evaluate available ground motion prediction equations (GMPE) by comparison to observed ground motions. To apply appropriate GMPEs, we divide the region into active and stable tectonics. The seismic hazard will be calculated using the OpenQuake software developed by GEM. To account for site amplifications, we construct a site condition map based on geology maps. The resulting new seismic hazard map can be used for seismic risk analysis and management, and business and land-use planning.
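
    The tapered Gutenberg-Richter rates that balance catalogue-derived a- and b-values against a moment-rate-constrained corner magnitude can be sketched directly. The beta, corner magnitude, and threshold-magnitude rate below are illustrative stand-ins; the magnitude-to-moment conversion is the standard 10^(1.5m + 9.05) N·m.

        import numpy as np

        def moment(m):
            """Seismic moment (N·m) from moment magnitude."""
            return 10.0 ** (1.5 * m + 9.05)

        def tgr_rate(m, m_min, beta, m_corner, n_min):
            """Annual rate of events with magnitude >= m under the tapered
            Gutenberg-Richter relation (Pareto with exponential taper in moment)."""
            M, Mt, Mc = moment(m), moment(m_min), moment(m_corner)
            return n_min * (Mt / M) ** beta * np.exp((Mt - M) / Mc)

        mags = np.arange(5.0, 8.6, 0.5)
        rates = tgr_rate(mags, m_min=5.0, beta=0.65, m_corner=8.0, n_min=3.0)
        for m, r in zip(mags, rates):
            print(f"M>={m:.1f}: {r:.4f} events/yr")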

  3. A sediment graph model based on SCS-CN method

    NASA Astrophysics Data System (ADS)

    Singh, P. K.; Bhunya, P. K.; Mishra, S. K.; Chaube, U. C.

    2008-01-01

    This paper proposes new conceptual sediment graph models based on coupling of popular and extensively used methods, viz., the Nash model based instantaneous unit sediment graph (IUSG), the soil conservation service curve number (SCS-CN) method, and the Power law. These models vary in their complexity, and this paper tests their performance using data of the Nagwan watershed (area = 92.46 km²) (India). The sensitivity of total sediment yield and peak sediment flow rate computations to model parameterisation is analysed. The exponent of the Power law, β, is more sensitive than other model parameters. The models are found to have substantial potential for computing sediment graphs (temporal sediment flow rate distribution) as well as total sediment yield.
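
    For reference, the SCS-CN runoff relation that these sediment graph models build on is compact enough to state directly. The formula is standard; the storm depth and curve number below are arbitrary illustrative values.

        def scs_cn_runoff(p_mm, cn, lambda_ia=0.2):
            """Direct runoff depth Q (mm) from storm rainfall P (mm) by the
            SCS-CN method: S = 25400/CN - 254 (mm), Ia = lambda*S, and
            Q = (P - Ia)^2 / (P - Ia + S) for P > Ia, else Q = 0."""
            s = 25400.0 / cn - 254.0   # potential maximum retention, mm
            ia = lambda_ia * s         # initial abstraction
            if p_mm <= ia:
                return 0.0
            return (p_mm - ia) ** 2 / (p_mm - ia + s)

        print(scs_cn_runoff(p_mm=80.0, cn=75))  # illustrative storm and curve number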

  4. Model for macroevolutionary dynamics.

    PubMed

    Maruvka, Yosef E; Shnerb, Nadav M; Kessler, David A; Ricklefs, Robert E

    2013-07-02

    The highly skewed distribution of species among genera, although challenging to macroevolutionists, provides an opportunity to understand the dynamics of diversification, including species formation, extinction, and morphological evolution. Early models were based on either the work by Yule [Yule GU (1925) Philos Trans R Soc Lond B Biol Sci 213:21-87], which neglects extinction, or a simple birth-death (speciation-extinction) process. Here, we extend the more recent development of a generic, neutral speciation-extinction (of species)-origination (of genera; SEO) model for macroevolutionary dynamics of taxon diversification. Simulations show that deviations from the homogeneity assumptions in the model can be detected in species-per-genus distributions. The SEO model fits observed species-per-genus distributions well for class-to-kingdom-sized taxonomic groups. The model's predictions for the appearance times (the time of the first existing species) of the taxonomic groups also approximately match estimates based on molecular inference and fossil records. Unlike estimates based on analyses of phylogenetic reconstruction, fitted extinction rates for large clades are close to speciation rates, consistent with high rates of species turnover and the relatively slow change in diversity observed in the fossil record. Finally, the SEO model generally supports the consistency of generic boundaries based on morphological differences between species and provides a comparator for rates of lineage splitting and morphological evolution.

  5. Nanoseismicity and picoseismicity rate changes from static stress triggering caused by a Mw 2.2 earthquake in Mponeng gold mine, South Africa

    NASA Astrophysics Data System (ADS)

    Kozłowska, Maria; Orlecka-Sikora, Beata; Kwiatek, Grzegorz; Boettcher, Margaret S.; Dresen, Georg

    2015-01-01

    Static stress changes following large earthquakes are known to affect the rate and distribution of aftershocks, yet this process has not been thoroughly investigated for nanoseismicity and picoseismicity at centimeter length scales. Here we utilize a unique data set of M ≥ -3.4 earthquakes following a Mw 2.2 earthquake in Mponeng gold mine, South Africa, that was recorded during a quiet interval in the mine to investigate if rate- and state-based modeling is valid for shallow, mining-induced seismicity. We use Dieterich's (1994) rate- and state-dependent formulation for earthquake productivity, which requires estimation of four parameters: (1) Coulomb stress changes due to the main shock, (2) the reference seismicity rate, (3) frictional resistance parameter, and (4) the duration of aftershock relaxation time. Comparisons of the modeled spatiotemporal patterns of seismicity based on two different source models with the observed distribution show that while the spatial patterns match well, the rate of modeled aftershocks is lower than the observed rate. To test our model, we used three metrics of the goodness-of-fit evaluation. The null hypothesis, of no significant difference between modeled and observed seismicity rates, was only rejected in the depth interval containing the main shock. Results show that mining-induced earthquakes may be followed by a stress relaxation expressed through aftershocks located on the rupture plane and in regions of positive Coulomb stress change. Furthermore, we demonstrate that the main features of the temporal and spatial distributions of very small, mining-induced earthquakes can be successfully determined using rate- and state-based stress modeling.
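
    Dieterich's formulation gives a closed-form seismicity-rate response to a single Coulomb stress step; a sketch with invented parameter values follows, where the four inputs mirror the enumerated quantities above (stress change, reference rate, frictional resistance parameter, and aftershock relaxation time).

        import numpy as np

        def dieterich_rate(t, dcfs, a_sigma, r0, t_a):
            """Dieterich (1994) seismicity-rate response to a Coulomb stress step.
            t: time since main shock; dcfs: Coulomb stress change (MPa);
            a_sigma: frictional resistance parameter A*sigma (MPa);
            r0: reference (background) rate; t_a: aftershock relaxation time."""
            return r0 / (1.0 + (np.exp(-dcfs / a_sigma) - 1.0) * np.exp(-t / t_a))

        t = np.linspace(0.0, 2.0, 5)  # years
        print(dieterich_rate(t, dcfs=0.5, a_sigma=0.1, r0=1.0, t_a=1.0))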

  6. Malaria transmission rates estimated from serological data.

    PubMed Central

    Burattini, M. N.; Massad, E.; Coutinho, F. A.

    1993-01-01

    A mathematical model was used to estimate malaria transmission rates based on serological data. The model is minimally stochastic and assumes an age-dependent force of infection for malaria. The transmission rates estimated were applied to a simple compartmental model in order to mimic the malaria transmission. The model has shown a good retrieving capacity for serological and parasite prevalence data. PMID:8270011
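
    A minimal serocatalytic sketch of how a transmission rate can be estimated from serological data, assuming a constant force of infection (the paper uses an age-dependent one) and an invented survey; the expected seroprevalence at age a is 1 - exp(-lambda*a).

        import math

        def seroprevalence(age, lam):
            """Catalytic model: constant force of infection lam gives
            expected seroprevalence 1 - exp(-lam * age)."""
            return 1.0 - math.exp(-lam * age)

        def fit_lambda(ages, seropos, grid_max=1.0, steps=10000):
            """Crude maximum-likelihood grid search for the force of infection."""
            best, best_ll = None, -float("inf")
            for i in range(1, steps):
                lam = grid_max * i / steps
                ll = 0.0
                for a, y in zip(ages, seropos):
                    p = min(max(seroprevalence(a, lam), 1e-12), 1 - 1e-12)
                    ll += math.log(p) if y else math.log(1.0 - p)
                if ll > best_ll:
                    best, best_ll = lam, ll
            return best

        # illustrative survey: (age, seropositive?)
        ages = [2, 5, 8, 12, 20, 35]
        pos = [0, 0, 1, 1, 1, 1]
        print(f"estimated force of infection: {fit_lambda(ages, pos):.3f} per year")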

  7. COMPARISON OF THE USE OF A PHYSIOLOGICALLY-BASED PHARMACOKINETIC MODEL AND A CLASSICAL PHARMACOKINETIC MODEL FOR DIOXIN EXPOSURE ASSESSMENTS

    EPA Science Inventory

    In epidemiological studies, exposure assessments to TCDD, known as a possible human carcinogen, assume mono- or biphasic elimination rates. Recent data suggest a dose-dependent elimination rate for TCDD. A PBPK model, which uses a body burden dependent elimination rate, was dev...

  8. A continuous damage model based on stepwise-stress creep rupture tests

    NASA Technical Reports Server (NTRS)

    Robinson, D. N.

    1985-01-01

    A creep damage accumulation model is presented that makes use of the Kachanov damage rate concept with a provision accounting for damage that results from a variable stress history. This is accomplished through the introduction of an additional term in the Kachanov rate equation that is linear in the stress rate. Specification of the material functions and parameters in the model requires a data base constituted from two types of tests: (1) standard constant-stress creep rupture tests, and (2) a sequence of two-step creep rupture tests.
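
    A schematic of a Kachanov-type rate equation with the added stress-rate term the abstract describes. The functional form, constants, and two-step loading history below are purely illustrative, not the paper's calibration.

        import numpy as np

        def damage_history(stress, dt, A, n, B):
            """Schematic Kachanov-type damage accumulation with an extra term
            linear in the stress rate:
                d(omega)/dt = A * (sigma / (1 - omega))**n + B * d(sigma)/dt
            Rupture is taken as omega -> 1."""
            omega = 0.0
            sig_prev = stress[0]
            for sig in stress:
                sig_dot = (sig - sig_prev) / dt
                omega += dt * (A * (sig / (1.0 - omega)) ** n + B * sig_dot)
                sig_prev = sig
                if omega >= 1.0:
                    return 1.0
            return omega

        t = np.arange(0.0, 1000.0, 1.0)                   # hours
        step_stress = np.where(t < 500.0, 100.0, 140.0)   # two-step creep test, MPa
        print(damage_history(step_stress, dt=1.0, A=1e-12, n=4.0, B=1e-5))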

  9. Dynamics of a network-based SIS epidemic model with nonmonotone incidence rate

    NASA Astrophysics Data System (ADS)

    Li, Chun-Hsien

    2015-06-01

    This paper studies the dynamics of a network-based SIS epidemic model with nonmonotone incidence rate. This type of nonlinear incidence can be used to describe the psychological effect of certain diseases spread in a contact network at high infective levels. We first find a threshold value for the transmission rate. This value completely determines the dynamics of the model and interestingly, the threshold is not dependent on the functional form of the nonlinear incidence rate. Furthermore, if the transmission rate is less than or equal to the threshold value, the disease will die out. Otherwise, it will be permanent. Numerical experiments are given to illustrate the theoretical results. We also consider the effect of the nonlinear incidence on the epidemic dynamics.
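
    A degree-based mean-field sketch of such a model, assuming the common nonmonotone incidence form lambda*theta/(1 + alpha*theta^2) (the paper's exact form is not reproduced here). The classical uncorrelated-network threshold <k>/<k^2> illustrates the abstract's point that the threshold does not depend on the shape of the incidence; all parameters are illustrative.

        import numpy as np

        # Degree-based mean-field for network SIS with nonmonotone incidence:
        # the psychological effect suppresses transmission at high infective levels.
        degrees = np.arange(1, 51)
        pk = degrees.astype(float) ** -2.5        # heavy-tailed degree distribution
        pk /= pk.sum()
        k_mean = (degrees * pk).sum()
        k2_mean = (degrees ** 2 * pk).sum()
        print(f"epidemic threshold lambda_c = <k>/<k^2> = {k_mean / k2_mean:.4f}")

        lam, alpha, dt = 0.3, 10.0, 0.05
        rho = np.full_like(degrees, 0.01, dtype=float)  # infected fraction per degree class
        for _ in range(4000):
            # theta: probability a randomly chosen edge points to an infective
            theta = (degrees * pk * rho).sum() / k_mean
            g = lam * theta / (1.0 + alpha * theta ** 2)  # nonmonotone incidence
            rho += dt * (-rho + degrees * g * (1.0 - rho))
        print(f"endemic prevalence at lambda={lam}: {(pk * rho).sum():.4f}")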

  10. Micromechanical modeling of rate-dependent behavior of Connective tissues.

    PubMed

    Fallah, A; Ahmadian, M T; Firozbakhsh, K; Aghdam, M M

    2017-03-07

    In this paper, a constitutive and micromechanical model for prediction of the rate-dependent behavior of connective tissues (CTs) is presented. Connective tissues are considered as nonlinear viscoelastic material. The rate-dependent behavior of CTs is incorporated into the model using the well-known quasi-linear viscoelasticity (QLV) theory. A planar wavy representative volume element (RVE) is considered based on histological evidence of the tissue microstructure. The presented model parameters are identified based on the available experiments in the literature. The presented constitutive model is introduced into ABAQUS by means of a UMAT subroutine. Results show that monotonic uniaxial test predictions of the presented model at different strain rates for rat tail tendon (RTT) and human patellar tendon (HPT) are in good agreement with experimental data. Results of incremental stress-relaxation tests are also presented to investigate both the instantaneous and viscoelastic behavior of connective tissues.

  11. The relationship-based care model: evaluation of the impact on patient satisfaction, length of stay, and readmission rates.

    PubMed

    Cropley, Stacey

    2012-06-01

    The objective of this study was to assess the impact of the implementation of the relationship-based care (RBC) model on patient satisfaction, length of stay, and readmission rates in hospitalized patients. The RBC model promotes organizational viability in critical areas that measure success, inclusive of clinical quality, patient satisfaction, and robust financial standing. A retrospective secondary analysis of aggregate patient satisfaction data, length of stay, and readmission rates at a rural Texas hospital was conducted for the years 2009 and 2010. This study compared preimplementation data for year 2009 with postimplementation data for year 2010. The data support the positive impact of the RBC model: a negative correlation was noted with readmission rates, and a concomitant positive correlation with length of stay. Overall satisfaction with nursing did not reveal a significant correlation with the new care model. The RBC model supports a patient-centered, collaborative care environment, maximizing potential reimbursement.

  12. Comparison between phenomenological and ab-initio reaction and relaxation models in DSMC

    NASA Astrophysics Data System (ADS)

    Sebastião, Israel B.; Kulakhmetov, Marat; Alexeenko, Alina

    2016-11-01

    New state-specific vibrational-translational energy exchange and dissociation models, based on ab-initio data, are implemented in the direct simulation Monte Carlo (DSMC) method and compared to the established Larsen-Borgnakke (LB) and total collision energy (TCE) phenomenological models. For consistency, both the LB and TCE models are calibrated with QCT-calculated O2+O data. The model comparison test cases include 0-D thermochemical relaxation under adiabatic conditions and 1-D normal shock-wave calculations. The results show that both the ME-QCT-VT and LB models can reproduce vibrational relaxation accurately, but the TCE model is unable to reproduce nonequilibrium rates even when it is calibrated to accurate equilibrium rates. The new reaction model does capture QCT-calculated nonequilibrium rates. For all investigated cases, we discuss the prediction differences based on the new model features.

  13. Modeling the Endogenous Sunlight Inactivation Rates of Laboratory Strain and Wastewater E. coli and Enterococci Using Biological Weighting Functions.

    PubMed

    Silverman, Andrea I; Nelson, Kara L

    2016-11-15

    Models that predict sunlight inactivation rates of bacteria are valuable tools for predicting the fate of pathogens in recreational waters and designing natural wastewater treatment systems to meet disinfection goals. We developed biological weighting function (BWF)-based numerical models to estimate the endogenous sunlight inactivation rates of E. coli and enterococci. BWF-based models allow the prediction of inactivation rates under a range of environmental conditions that shift the magnitude or spectral distribution of sunlight irradiance (e.g., different times, latitudes, water absorbances, depth). Separate models were developed for laboratory strain bacteria cultured in the laboratory and indigenous organisms concentrated directly from wastewater. Wastewater bacteria were found to be 5-7 times less susceptible to full-spectrum simulated sunlight than the laboratory bacteria, highlighting the importance of conducting experiments with bacteria sourced directly from wastewater. The inactivation rate models fit experimental data well and were successful in predicting the inactivation rates of wastewater E. coli and enterococci measured in clear marine water by researchers from a different laboratory. Additional research is recommended to develop strategies to account for the effects of elevated water pH on predicted inactivation rates.
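
    The BWF prediction itself is a weighted spectral integral: the inactivation rate is the irradiance spectrum weighted by the organism's biological weighting function and integrated over wavelength. The spectra and weights below are invented stand-ins, and the units are illustrative only.

        import numpy as np

        def bwf_inactivation_rate(wavelengths, irradiance, weights):
            """First-order inactivation rate predicted from a BWF:
            k = integral over lambda of P(lambda) * E(lambda) d(lambda)."""
            return np.trapz(weights * irradiance, wavelengths)

        wl = np.arange(290.0, 500.0, 5.0)              # nm
        irr = 1.2e-3 * np.exp((wl - 290.0) / 60.0)     # toy solar spectrum, W m-2 nm-1
        bwf = 5.0 * np.exp(-(wl - 290.0) / 15.0)       # toy weighting function
        k = bwf_inactivation_rate(wl, irr, bwf)
        print(f"predicted inactivation rate: {k:.4f} 1/h")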

  14. Error rate information in attention allocation pilot models

    NASA Technical Reports Server (NTRS)

    Faulkner, W. H.; Onstott, E. D.

    1977-01-01

    The Northrop urgency decision pilot model was used in a command tracking task to compare the optimized performance of multiaxis attention allocation pilot models whose urgency functions were (1) based on tracking error alone, and (2) based on both tracking error and error rate. A matrix of system dynamics and command inputs was employed to create both symmetric and asymmetric two-axis compensatory tracking tasks. All tasks were single loop on each axis. Analysis showed that a model that allocates control attention through nonlinear urgency functions using only error information could not achieve the performance of the full model, whose attention-shifting algorithm included both error and error rate terms. Subsequent to this analysis, tracking performance predictions for the full model were verified by piloted flight simulation. Complete model and simulation data are presented.

  15. Treatment of Electronic Energy Level Transition and Ionization Following the Particle-Based Chemistry Model

    NASA Technical Reports Server (NTRS)

    Liechty, Derek S.; Lewis, Mark

    2010-01-01

    A new method of treating electronic energy level transitions as well as linking ionization to electronic energy levels is proposed following the particle-based chemistry model of Bird. Although the use of electronic energy levels and ionization reactions in DSMC are not new ideas, the current method of selecting what level to transition to, how to reproduce transition rates, and the linking of the electronic energy levels to ionization are, to the authors' knowledge, novel concepts. The resulting equilibrium temperatures are shown to remain constant, and the electronic energy level distributions are shown to reproduce the Boltzmann distribution. The electronic energy level transition rates and ionization rates due to electron impacts are shown to reproduce theoretical and measured rates. The rates due to heavy particle impacts, while not reproduced as closely as the electron impact rates, compare favorably to values from the literature. Thus, these new extensions to the particle-based chemistry model of Bird provide an accurate method for predicting electronic energy level transition and ionization rates in gases.
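
    The equilibrium check described, that simulated electronic level populations reproduce the Boltzmann distribution, reduces to the following standard calculation; the three-level system (degeneracies and energies) is illustrative only.

        import math

        KB_EV = 8.617333262e-5  # Boltzmann constant, eV/K

        def boltzmann_fractions(levels, T):
            """Equilibrium population fraction of each electronic level,
            given (g_i, E_i in eV): n_i / n = g_i exp(-E_i / kT) / Z."""
            weights = [g * math.exp(-e / (KB_EV * T)) for g, e in levels]
            z = sum(weights)
            return [w / z for w in weights]

        # illustrative three-level system: (degeneracy, energy in eV)
        levels = [(1, 0.0), (3, 2.0), (5, 4.5)]
        for i, f in enumerate(boltzmann_fractions(levels, T=15000.0)):
            print(f"level {i}: fraction {f:.3e}")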

  16. Modeling and forecasting US presidential election using learning algorithms

    NASA Astrophysics Data System (ADS)

    Zolghadr, Mohammad; Niaki, Seyed Armin Akhavan; Niaki, S. T. A.

    2017-09-01

    The primary objective of this research is to obtain an accurate forecasting model for the US presidential election. To identify a reliable model, artificial neural network (ANN) and support vector regression (SVR) models are compared based on specified performance measures. Moreover, six independent variables, including GDP, the unemployment rate, and the president's approval rating, are considered in a stepwise regression to identify significant variables. The president's approval rating is identified as the most significant variable, based on which eight other variables are identified and considered in the model development. Preprocessing methods are applied to prepare the data for the learning algorithms. The proposed procedure increases the accuracy of the model by 50%. The learning algorithms (ANN and SVR) proved superior to linear regression based on each method's calculated performance measures. The SVR model is identified as the most accurate of the models, as it successfully predicted the outcome of the last three elections (2004, 2008, and 2012). The proposed approach thus significantly increases the accuracy of the forecast.

  17. Forecasting Induced Seismicity Using Saltwater Disposal Data and a Hydromechanical Earthquake Nucleation Model

    NASA Astrophysics Data System (ADS)

    Norbeck, J. H.; Rubinstein, J. L.

    2017-12-01

    The earthquake activity in Oklahoma and Kansas that began in 2008 reflects the most widespread instance of induced seismicity observed to date. In this work, we demonstrate that the basement fault stressing conditions that drive seismicity rate evolution are related directly to the operational history of 958 saltwater disposal wells completed in the Arbuckle aquifer. We developed a fluid pressurization model based on the assumption that pressure changes are dominated by reservoir compressibility effects. Using injection well data, we established a detailed description of the temporal and spatial variability in stressing conditions over the 21.5-year period from January 1995 through June 2017. With this stressing history, we applied a numerical model based on rate-and-state friction theory to generate seismicity rate forecasts across a broad range of spatial scales. The model replicated the onset of seismicity, the timing of the peak seismicity rate, and the reduction in seismicity following decreased disposal activity. The behavior of the induced earthquake sequence was consistent with the prediction from rate-and-state theory that the system evolves toward a steady seismicity rate depending on the ratio between the current and background stressing rates. Seismicity rate transients occurred over characteristic timescales inversely proportional to stressing rate. We found that our hydromechanical earthquake rate model outperformed observational and empirical forecast models for one-year forecast durations over the period 2008 through 2016.

  18. Low-dimensional spike rate models derived from networks of adaptive integrate-and-fire neurons: Comparison and implementation.

    PubMed

    Augustin, Moritz; Ladenbauer, Josef; Baumann, Fabian; Obermayer, Klaus

    2017-06-01

    The spiking activity of single neurons can be well described by a nonlinear integrate-and-fire model that includes somatic adaptation. When exposed to fluctuating inputs, sparsely coupled populations of these model neurons exhibit stochastic collective dynamics that can be effectively characterized using the Fokker-Planck equation. This approach, however, leads to a model with an infinite-dimensional state space and non-standard boundary conditions. Here we derive from that description four simple models for the spike rate dynamics in terms of low-dimensional ordinary differential equations using two different reduction techniques: one uses the spectral decomposition of the Fokker-Planck operator, the other is based on a cascade of two linear filters and a nonlinearity, which are determined from the Fokker-Planck equation and semi-analytically approximated. We evaluate the reduced models for a wide range of biologically plausible input statistics and find that both approximation approaches lead to spike rate models that accurately reproduce the spiking behavior of the underlying adaptive integrate-and-fire population. Particularly the cascade-based models are overall most accurate and robust, especially in the sensitive region of rapidly changing input. For the mean-driven regime, when input fluctuations are not too strong and fast, however, the best performing model is based on the spectral decomposition. The low-dimensional models also well reproduce stable oscillatory spike rate dynamics that are generated either by recurrent synaptic excitation and neuronal adaptation or through delayed inhibitory synaptic feedback. The computational demands of the reduced models are very low, but the implementation complexity differs between the different model variants. Therefore we have made available implementations that allow the low-dimensional spike rate models, as well as the Fokker-Planck partial differential equation, to be numerically integrated in efficient ways for arbitrary model parametrizations, as open source software. The derived spike rate descriptions retain a direct link to the properties of single neurons, allow for convenient mathematical analyses of network states, and are well suited for application in neural mass/mean-field based brain network models.
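
    A toy stand-in for the cascade variant, one linear filter feeding a static nonlinearity. In the paper the filter and nonlinearity are derived from the Fokker-Planck operator; the one-pole filter and softplus used below are placeholders chosen only to show the structure.

        import numpy as np

        def cascade_rate_model(mu, dt, tau, gain, theta):
            """Generic linear-filter + static-nonlinearity cascade for a
            population spike rate: a first-order low-pass filter of the mean
            input drives a rectifying nonlinearity."""
            x, rates = 0.0, []
            for m in mu:
                x += dt / tau * (m - x)  # exponential (one-pole) filter
                rates.append(gain * np.log1p(np.exp(x - theta)))  # softplus
            return np.array(rates)

        dt = 0.1e-3                       # 0.1 ms time step
        t = np.arange(0.0, 0.5, dt)
        mu = 2.0 + 1.0 * (t > 0.25)       # step in the mean input at t = 0.25 s
        r = cascade_rate_model(mu, dt, tau=20e-3, gain=10.0, theta=1.5)
        print(f"rate before step: {r[2000]:.2f} Hz, after step: {r[-1]:.2f} Hz")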

  19. Low-dimensional spike rate models derived from networks of adaptive integrate-and-fire neurons: Comparison and implementation

    PubMed Central

    Baumann, Fabian; Obermayer, Klaus

    2017-01-01

    The spiking activity of single neurons can be well described by a nonlinear integrate-and-fire model that includes somatic adaptation. When exposed to fluctuating inputs, sparsely coupled populations of these model neurons exhibit stochastic collective dynamics that can be effectively characterized using the Fokker-Planck equation. This approach, however, leads to a model with an infinite-dimensional state space and non-standard boundary conditions. Here we derive from that description four simple models for the spike rate dynamics in terms of low-dimensional ordinary differential equations using two different reduction techniques: one uses the spectral decomposition of the Fokker-Planck operator, the other is based on a cascade of two linear filters and a nonlinearity, which are determined from the Fokker-Planck equation and semi-analytically approximated. We evaluate the reduced models for a wide range of biologically plausible input statistics and find that both approximation approaches lead to spike rate models that accurately reproduce the spiking behavior of the underlying adaptive integrate-and-fire population. Particularly the cascade-based models are overall most accurate and robust, especially in the sensitive region of rapidly changing input. For the mean-driven regime, when input fluctuations are not too strong and fast, however, the best performing model is based on the spectral decomposition. The low-dimensional models also well reproduce stable oscillatory spike rate dynamics that are generated either by recurrent synaptic excitation and neuronal adaptation or through delayed inhibitory synaptic feedback. The computational demands of the reduced models are very low, but the implementation complexity differs between the different model variants. Therefore we have made available implementations that allow the low-dimensional spike rate models, as well as the Fokker-Planck partial differential equation, to be numerically integrated in efficient ways for arbitrary model parametrizations, as open source software. The derived spike rate descriptions retain a direct link to the properties of single neurons, allow for convenient mathematical analyses of network states, and are well suited for application in neural mass/mean-field based brain network models. PMID:28644841

  20. Performability modeling based on real data: A case study

    NASA Technical Reports Server (NTRS)

    Hsueh, M. C.; Iyer, R. K.; Trivedi, K. S.

    1988-01-01

    Described is a measurement-based performability model based on error and resource usage data collected on a multiprocessor system. A method for identifying the model structure is introduced and the resulting model is validated against real data. Model development from the collection of raw data to the estimation of the expected reward is described. Both normal and error behavior of the system are characterized. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of different types of errors.

  1. Performability modeling based on real data: A case study

    NASA Technical Reports Server (NTRS)

    Hsueh, M. C.; Iyer, R. K.; Trivedi, K. S.

    1987-01-01

    Described is a measurement-based performability model based on error and resource usage data collected on a multiprocessor system. A method for identifying the model structure is introduced and the resulting model is validated against real data. Model development from the collection of raw data to the estimation of the expected reward is described. Both normal and error behavior of the system are characterized. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of different types of errors.

  2. Caliber Corrected Markov Modeling (C2M2): Correcting Equilibrium Markov Models.

    PubMed

    Dixit, Purushottam D; Dill, Ken A

    2018-02-13

    Rate processes are often modeled using Markov State Models (MSMs). Suppose you know a prior MSM and then learn that your prediction of some particular observable rate is wrong. What is the best way to correct the whole MSM? For example, molecular dynamics simulations of protein folding may sample many microstates, possibly giving correct pathways through them while also giving the wrong overall folding rate when compared to experiment. Here, we describe Caliber Corrected Markov Modeling (C2M2), an approach based on the principle of maximum entropy for updating a Markov model by imposing state- and trajectory-based constraints. We show that such corrections are equivalent to asserting position-dependent diffusion coefficients in continuous-time continuous-space Markov processes modeled by a Smoluchowski equation. We derive the functional form of the diffusion coefficient explicitly in terms of the trajectory-based constraints. We illustrate with examples of 2D particle diffusion and an overdamped harmonic oscillator.
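
    A stripped-down illustration of the maximum-entropy correction idea, not the paper's full C2M2 machinery: exponentially tilt a prior transition matrix by a state observable and tune the Lagrange multiplier by bisection until a target stationary average is met. The 3-state matrix, observable, and target are all invented.

        import numpy as np

        def tilt_msm(P, f, lam):
            """Exponentially tilt a transition matrix by a state observable f
            (maximum-entropy form P'_ij ~ P_ij * exp(lam * f_j)), renormalized."""
            Q = P * np.exp(lam * f)[None, :]
            return Q / Q.sum(axis=1, keepdims=True)

        def stationary(P):
            """Stationary distribution from the leading left eigenvector."""
            vals, vecs = np.linalg.eig(P.T)
            pi = np.real(vecs[:, np.argmax(np.real(vals))])
            return pi / pi.sum()

        # prior 3-state MSM and a target stationary average of observable f
        P0 = np.array([[0.90, 0.08, 0.02],
                       [0.05, 0.90, 0.05],
                       [0.02, 0.08, 0.90]])
        f = np.array([0.0, 1.0, 2.0])
        target = 1.2

        lo, hi = -5.0, 5.0
        for _ in range(60):  # bisection on the Lagrange multiplier
            lam = 0.5 * (lo + hi)
            avg = stationary(tilt_msm(P0, f, lam)) @ f
            lo, hi = (lam, hi) if avg < target else (lo, lam)
        print(f"lambda = {lam:.3f}, achieved <f> = {avg:.3f}")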

  3. A smoothed stochastic earthquake rate model considering seismicity and fault moment release for Europe

    NASA Astrophysics Data System (ADS)

    Hiemer, S.; Woessner, J.; Basili, R.; Danciu, L.; Giardini, D.; Wiemer, S.

    2014-08-01

    We present a time-independent gridded earthquake rate forecast for the European region including Turkey. The spatial component of our model is based on kernel density estimation techniques, which we applied to both past earthquake locations and fault moment release on mapped crustal faults and subduction zone interfaces with assigned slip rates. Our forecast relies on the assumption that the locations of past seismicity are a good guide to future seismicity, and that future large-magnitude events are more likely to occur in the vicinity of known faults. We show that the optimal weighted sum of the corresponding two spatial densities depends on the magnitude range considered. The kernel bandwidths and density weighting function are optimized using retrospective likelihood-based forecast experiments. We computed earthquake activity rates (a- and b-value) of the truncated Gutenberg-Richter distribution separately for crustal and subduction seismicity based on a maximum likelihood approach that considers the spatial and temporal completeness history of the catalogue. The final annual rate of our forecast is purely driven by the maximum likelihood fit of activity rates to the catalogue data, whereas its spatial component incorporates contributions from both earthquake and fault moment-rate densities. Our model constitutes one branch of the earthquake source model logic tree of the 2013 European seismic hazard model released by the EU-FP7 project 'Seismic HAzard haRmonization in Europe' (SHARE) and contributes to the assessment of epistemic uncertainties in earthquake activity rates. We performed retrospective and pseudo-prospective likelihood consistency tests to underline the reliability of our model and SHARE's area source model (ASM) using the testing algorithms applied in the Collaboratory for the Study of Earthquake Predictability (CSEP). We comparatively tested our model's forecasting skill against the ASM and find a statistically significantly better performance for testing periods of 10-20 yr. The testing results suggest that our model is a viable candidate model to serve for long-term forecasting on timescales of years to decades for the European region.
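
    The kernel-density ingredient can be sketched directly: place a Gaussian kernel on every past epicenter, sum at each forecast grid node, and normalize so the cell rates match a catalogue-derived annual total. Event locations, bandwidth, and the total rate below are synthetic.

        import numpy as np

        def gaussian_kde_rate(grid_xy, events_xy, bandwidth_km):
            """Smoothed spatial density of past epicenters: a Gaussian kernel
            on every event, summed at each forecast grid node."""
            d2 = ((grid_xy[:, None, :] - events_xy[None, :, :]) ** 2).sum(-1)
            k = np.exp(-0.5 * d2 / bandwidth_km ** 2) / (2 * np.pi * bandwidth_km ** 2)
            return k.sum(axis=1)

        rng = np.random.default_rng(3)
        events = rng.normal(loc=[100.0, 200.0], scale=15.0, size=(300, 2))  # km
        gx, gy = np.meshgrid(np.linspace(50, 150, 21), np.linspace(150, 250, 21))
        grid = np.column_stack([gx.ravel(), gy.ravel()])
        density = gaussian_kde_rate(grid, events, bandwidth_km=10.0)
        # normalize so cell rates sum to the catalogue-derived annual rate
        annual_rate = 4.2
        cell_rates = annual_rate * density / density.sum()
        print(f"max cell rate: {cell_rates.max():.4f} events/yr")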

  4. Essays in applied macroeconomics: Asymmetric price adjustment, exchange rate and treatment effect

    NASA Astrophysics Data System (ADS)

    Gu, Jingping

    This dissertation consists of three essays. Chapter II examines the possible asymmetric response of gasoline prices to crude oil price changes using an error correction model with GARCH errors. Recent papers have looked at this issue. Some of these papers estimate a form of error correction model, but none of them accounts for autoregressive heteroskedasticity in estimation and testing for asymmetry, and none of them takes the response of the crude oil price into consideration. We find that time-varying volatility of gasoline price disturbances is an important feature of the data, and when we allow for asymmetric GARCH errors and investigate the system-wide impulse response function, we find evidence of asymmetric adjustment to crude oil price changes in weekly retail gasoline prices. Chapter III discusses the relationship between the fiscal deficit and the exchange rate. Economic theory predicts that fiscal deficits can significantly affect real exchange rate movements, but existing empirical evidence reports only a weak impact of fiscal deficits on exchange rates. Based on US dollar-based real exchange rates in G5 countries and a flexible varying coefficient model, we show that the previously documented weak relationship between fiscal deficits and exchange rates may be the result of additive specifications, and that the relationship is stronger if we allow fiscal deficits to impact real exchange rates non-additively as well as nonlinearly. We find that the speed of exchange rate adjustment toward equilibrium depends on the state of the fiscal deficit; a fiscal contraction in the US can lead to less persistence in the deviation of exchange rates from fundamentals, and faster mean reversion to the equilibrium. Chapter IV proposes a kernel method to deal with the nonparametric regression model with only discrete covariates as regressors. This new approach is based on the recently developed least squares cross-validation kernel smoothing method. It can not only automatically smooth the irrelevant variables out of the nonparametric regression model, but also avoid the loss of efficiency associated with the traditional nonparametric frequency-based method and the misspecification problem of parametric models.

  5. Application of a Novel Grey Self-Memory Coupling Model to Forecast the Incidence Rates of Two Notifiable Diseases in China: Dysentery and Gonorrhea

    PubMed Central

    Guo, Xiaojun; Liu, Sifeng; Wu, Lifeng; Tang, Lingling

    2014-01-01

    Objective: In this study, a novel grey self-memory coupling model was developed to forecast the incidence rates of two notifiable infectious diseases (dysentery and gonorrhea); the effectiveness and applicability of this model was assessed based on its ability to predict the epidemiological trend of infectious diseases in China. Methods: The linear model, the conventional GM(1,1) model and the GM(1,1) model with self-memory principle (SMGM(1,1) model) were used to predict the incidence rates of the two notifiable infectious diseases based on statistical incidence data. Both simulation accuracy and prediction accuracy were assessed to compare the predictive performances of the three models. The best-fit model was applied to predict future incidence rates. Results: Simulation results show that the SMGM(1,1) model can take full advantage of the systematic multi-time historical data and possesses superior predictive performance compared with the linear model and the conventional GM(1,1) model. By applying the novel SMGM(1,1) model, we obtained the possible incidence rates of the two representative notifiable infectious diseases in China. Conclusion: The disadvantages of the conventional grey prediction model, such as sensitivity to initial value, can be overcome by the self-memory principle. The novel grey self-memory coupling model can predict the incidence rates of infectious diseases more accurately than the conventional model, and may provide useful references for making decisions involving infectious disease prevention and control. PMID:25546054

  6. Application of a novel grey self-memory coupling model to forecast the incidence rates of two notifiable diseases in China: dysentery and gonorrhea.

    PubMed

    Guo, Xiaojun; Liu, Sifeng; Wu, Lifeng; Tang, Lingling

    2014-01-01

    In this study, a novel grey self-memory coupling model was developed to forecast the incidence rates of two notifiable infectious diseases (dysentery and gonorrhea); the effectiveness and applicability of this model was assessed based on its ability to predict the epidemiological trend of infectious diseases in China. The linear model, the conventional GM(1,1) model and the GM(1,1) model with self-memory principle (SMGM(1,1) model) were used to predict the incidence rates of the two notifiable infectious diseases based on statistical incidence data. Both simulation accuracy and prediction accuracy were assessed to compare the predictive performances of the three models. The best-fit model was applied to predict future incidence rates. Simulation results show that the SMGM(1,1) model can take full advantage of the systematic multi-time historical data and possesses superior predictive performance compared with the linear model and the conventional GM(1,1) model. By applying the novel SMGM(1,1) model, we obtained the possible incidence rates of the two representative notifiable infectious diseases in China. The disadvantages of the conventional grey prediction model, such as sensitivity to initial value, can be overcome by the self-memory principle. The novel grey self-memory coupling model can predict the incidence rates of infectious diseases more accurately than the conventional model, and may provide useful references for making decisions involving infectious disease prevention and control.
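
    For concreteness, the conventional GM(1,1) component that the self-memory principle improves can be sketched as follows: fit the whitening equation dx1/dt + a*x1 = b on the accumulated series, forecast, and difference back. The incidence numbers are invented.

        import numpy as np

        def gm11_forecast(x0, n_ahead):
            """Conventional GM(1,1) grey model."""
            x0 = np.asarray(x0, dtype=float)
            x1 = np.cumsum(x0)                      # accumulated generating operation
            z1 = 0.5 * (x1[1:] + x1[:-1])           # background values
            B = np.column_stack([-z1, np.ones_like(z1)])
            a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
            k = np.arange(1, len(x0) + n_ahead)
            x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
            x0_hat = np.diff(np.concatenate([[x0[0]], x1_hat]))  # inverse AGO
            return x0_hat[-n_ahead:]

        incidence = [4.8, 4.5, 4.1, 3.9, 3.6, 3.4]  # illustrative rates per 100,000
        print(np.round(gm11_forecast(incidence, n_ahead=3), 3))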

  7. MicroRNAfold: pre-microRNA secondary structure prediction based on modified NCM model with thermodynamics-based scoring strategy.

    PubMed

    Han, Dianwei; Zhang, Jun; Tang, Guiliang

    2012-01-01

    An accurate prediction of the pre-microRNA secondary structure is important in miRNA informatics. Based on a recently proposed model, nucleotide cyclic motifs (NCM), to predict RNA secondary structure, we propose and implement a Modified NCM (MNCM) model with a physics-based scoring strategy to tackle the problem of pre-microRNA folding. Our microRNAfold is implemented using a global optimal algorithm based on the bottom-up local optimal solutions. Our experimental results show that microRNAfold outperforms the current leading prediction tools in terms of true negative rate, false negative rate, specificity, and the Matthews correlation coefficient.
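
    The Matthews correlation coefficient cited among the performance measures is computed from the binary confusion matrix as follows; the counts are invented for illustration.

        import math

        def matthews_corr(tp, tn, fp, fn):
            """Matthews correlation coefficient for a binary confusion matrix."""
            denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
            return (tp * tn - fp * fn) / denom if denom else 0.0

        # illustrative counts for base-pair prediction vs. a reference structure
        print(f"MCC = {matthews_corr(tp=90, tn=850, fp=25, fn=35):.3f}")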

  8. Improved community model for social networks based on social mobility

    NASA Astrophysics Data System (ADS)

    Lu, Zhe-Ming; Wu, Zhen; Luo, Hao; Wang, Hao-Xian

    2015-07-01

    This paper proposes an improved community model for social networks based on social mobility. The relationship between the group distribution and the community size is investigated in terms of communication rate and turnover rate. The degree distributions, clustering coefficients, average distances and diameters of networks are analyzed. Experimental results demonstrate that the proposed model possesses the small-world property and can reproduce social networks effectively and efficiently.

  9. Covariates of the Rating Process in Hierarchical Models for Multiple Ratings of Test Items

    ERIC Educational Resources Information Center

    Mariano, Louis T.; Junker, Brian W.

    2007-01-01

    When constructed response test items are scored by more than one rater, the repeated ratings allow for the consideration of individual rater bias and variability in estimating student proficiency. Several hierarchical models based on item response theory have been introduced to model such effects. In this article, the authors demonstrate how these…

  10. The Betting Odds Rating System: Using soccer forecasts to forecast soccer.

    PubMed

    Wunderlich, Fabian; Memmert, Daniel

    2018-01-01

    Betting odds are frequently found to outperform mathematical models in sports-related forecasting tasks; however, the factors contributing to betting odds are not fully traceable and, in contrast to rating-based forecasts, no straightforward measure of team-specific quality is deducible from the betting odds. The present study investigates the approach of combining the methods of mathematical models and the information included in betting odds. A soccer forecasting model based on the well-known ELO rating system and taking advantage of betting odds as a source of information is presented. Data from almost 15,000 soccer matches (seasons 2007/2008 until 2016/2017) are used, including both domestic matches (English Premier League, German Bundesliga, Spanish Primera Division and Italian Serie A) and international matches (UEFA Champions League, UEFA Europa League). The novel betting-odds-based ELO model is shown to outperform classic ELO models, thus demonstrating that betting odds prior to a match contain more relevant information than the result of the match itself. It is shown how the novel model can help to gain valuable insights into the quality of soccer teams and its development over time, thus having a practical benefit in performance analysis. Moreover, it is argued that network-based approaches might help in further improving rating and forecasting methods.
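
    The underlying ELO mechanics are standard and compact; the ratings, K-factor, and home advantage below are illustrative placeholders. The paper's variant would, roughly speaking, feed an odds-implied outcome probability into the update instead of only the raw result.

        def elo_expected(r_home, r_away, home_adv=80.0):
            """Expected score of the home team under the logistic ELO curve."""
            return 1.0 / (1.0 + 10.0 ** (-((r_home + home_adv) - r_away) / 400.0))

        def elo_update(r, expected, actual, k=20.0):
            """Standard ELO update; a betting-odds-based variant would replace
            `actual` with a match outcome probability implied by the odds."""
            return r + k * (actual - expected)

        home, away = 1650.0, 1700.0
        e = elo_expected(home, away)
        home = elo_update(home, e, actual=1.0)        # home win
        away = elo_update(away, 1.0 - e, actual=0.0)
        print(f"updated ratings: home {home:.1f}, away {away:.1f}")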

  11. The Betting Odds Rating System: Using soccer forecasts to forecast soccer

    PubMed Central

    Memmert, Daniel

    2018-01-01

    Betting odds are frequently found to outperform mathematical models in sports-related forecasting tasks; however, the factors contributing to betting odds are not fully traceable and, in contrast to rating-based forecasts, no straightforward measure of team-specific quality is deducible from the betting odds. The present study investigates the approach of combining the methods of mathematical models and the information included in betting odds. A soccer forecasting model based on the well-known ELO rating system and taking advantage of betting odds as a source of information is presented. Data from almost 15,000 soccer matches (seasons 2007/2008 until 2016/2017) are used, including both domestic matches (English Premier League, German Bundesliga, Spanish Primera Division and Italian Serie A) and international matches (UEFA Champions League, UEFA Europa League). The novel betting-odds-based ELO model is shown to outperform classic ELO models, thus demonstrating that betting odds prior to a match contain more relevant information than the result of the match itself. It is shown how the novel model can help to gain valuable insights into the quality of soccer teams and its development over time, thus having a practical benefit in performance analysis. Moreover, it is argued that network-based approaches might help in further improving rating and forecasting methods. PMID:29870554

  12. Chemistry Resolved Kinetic Flow Modeling of TATB Based Explosives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vitello, P A; Fried, L E; Howard, W M

    2011-07-21

    Detonation waves in insensitive, TATB-based explosives are believed to have multi-time-scale regimes. The initial burn rate of such explosives has a sub-microsecond time scale. However, significant late-time slow release of energy is believed to occur due to diffusion-limited growth of carbon. In the intermediate time scale, concentrations of product species likely change from being in equilibrium to being kinetic-rate controlled. They use the thermo-chemical code CHEETAH linked to an ALE hydrodynamics code to model detonations. They term their model chemistry-resolved kinetic flow, as CHEETAH tracks the time-dependent concentrations of individual species in the detonation wave and calculates EOS values based on the concentrations. A HE-validation suite of model simulations compared to experiments at ambient, hot, and cold temperatures has been developed. They present here a new rate model and comparison with experimental data.

  13. Computational Simulation of the High Strain Rate Tensile Response of Polymer Matrix Composites

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.

    2002-01-01

    A research program is underway to develop strain rate dependent deformation and failure models for the analysis of polymer matrix composites subject to high strain rate impact loads. Under these types of loading conditions, the material response can be highly strain rate dependent and nonlinear. State variable constitutive equations based on a viscoplasticity approach have been developed to model the deformation of the polymer matrix. The constitutive equations are then combined with a mechanics of materials based micromechanics model which utilizes fiber substructuring to predict the effective mechanical and thermal response of the composite. To verify the analytical model, tensile stress-strain curves are predicted for a representative composite over strain rates ranging from around 1×10^-5/sec to approximately 400/sec. The analytical predictions compare favorably to experimentally obtained values both qualitatively and quantitatively. Effective elastic and thermal constants are predicted for another composite, and compared to finite element results.

  14. Prediction of mortality rates using a model with stochastic parameters

    NASA Astrophysics Data System (ADS)

    Tan, Chon Sern; Pooi, Ah Hin

    2016-10-01

    Prediction of future mortality rates is crucial to insurance companies because they face longevity risks while providing retirement benefits to a population whose life expectancy is increasing. In the past literature, a time series model based on the multivariate power-normal distribution has been applied to mortality data from the United States for the years 1933 till 2000 to forecast the mortality rates for the years 2001 till 2010. In this paper, a more dynamic approach based on the multivariate time series is proposed, in which the model uses stochastic parameters that vary with time. The resulting prediction intervals obtained using the model with stochastic parameters perform better because, apart from covering the observed future mortality rates well, they also tend to have distinctly shorter interval lengths.

  15. Applying the compound Poisson process model to the reporting of injury-related mortality rates.

    PubMed

    Kegler, Scott R

    2007-02-16

    Injury-related mortality rate estimates are often analyzed under the assumption that case counts follow a Poisson distribution. Certain types of injury incidents occasionally involve multiple fatalities, however, resulting in dependencies between cases that are not reflected in the simple Poisson model and which can affect even basic statistical analyses. This paper explores the compound Poisson process model as an alternative, emphasizing adjustments to some commonly used interval estimators for population-based rates and rate ratios. The adjusted estimators involve relatively simple closed-form computations, which in the absence of multiple-case incidents reduce to familiar estimators based on the simpler Poisson model. Summary data from the National Violent Death Reporting System are referenced in several examples demonstrating application of the proposed methodology.

  16. Analysis of non-fatal and fatal injury rates for mine operator and contractor employees and the influence of work location.

    PubMed

    Karra, Vijia K

    2005-01-01

    Mining injury surveillance data are used as the basis for assessing the severity of injuries among operator and contractor employees in the underground and surface mining of various minerals. Injury rates during 1983-2002 derived from the Mine Safety and Health Administration (MSHA) database are analyzed using a negative binomial regression model. The logarithmic mean injury rate is expressed as a linear function of seven indicator variables representing Non-Coal Contractor, Metal Operator, Non-Metal Operator, Stone Operator, Sand and Gravel Operator, Coal Contractor, and Work Location, and a continuous variable, RelYear, representing the relative year starting with 1983 as the base year. Based on the model, the mean injury rate declined at a 1.69% annual rate, and the mean injury rate for work on the surface is 52.53% lower compared to the rate for work in the underground. With reference to the Coal Operator mean injury rate: the Non-Coal Contractor rate is 30.34% lower, the Metal Operator rate is 27.18% lower, the Non-Metal Operator rate is 37.51% lower, the Stone Operator rate is 23.44% lower, the Sand and Gravel Operator rate is 16.45% lower, and the Coal Contractor rate is 1.41% lower. Fatality rates during the same 20-year period are analyzed similarly using a Poisson regression model. Based on this model, the mean fatality rate declined at a 3.17% annual rate, and the rate for work on the surface is 64.3% lower compared to the rate for work in the underground. With reference to the Coal Operator mean fatality rate: the Non-Coal Contractor rate is 234.81% higher, the Metal Operator rate is 5.79% lower, the Non-Metal Operator rate is 47.36% lower, the Stone Operator rate is 8.29% higher, the Sand and Gravel Operator rate is 60.32% higher, and the Coal Contractor rate is 129.54% higher.
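
    A sketch of this kind of negative binomial rate regression with exposure, using synthetic data and an assumed statsmodels setup (the dispersion parameter alpha and all covariates are placeholders). Exponentiated coefficients give multiplicative rate effects like the percentage differences quoted above.

        import numpy as np
        import statsmodels.api as sm  # assumed available

        rng = np.random.default_rng(7)
        n = 200
        surface = rng.integers(0, 2, n)    # 1 = surface work, 0 = underground
        rel_year = rng.integers(0, 20, n)  # years since the 1983 base year
        hours = rng.uniform(1e4, 1e5, n)   # employee-hours (exposure)
        # synthetic injury counts: lower rate on the surface, declining over time
        lam = hours * 3e-4 * np.exp(-0.75 * surface - 0.017 * rel_year)
        y = rng.poisson(lam)               # stand-in for overdispersed counts

        X = sm.add_constant(np.column_stack([surface, rel_year]))
        model = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=0.5),
                       exposure=hours)
        res = model.fit()
        # exp(coefficients) ~ multiplicative effects on the mean injury rate
        print(np.round(np.exp(res.params), 4))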

  17. Growth and food consumption by tiger muskellunge: Effects of temperature and ration level on bioenergetic model predictions

    USGS Publications Warehouse

    Chipps, S.R.; Einfalt, L.M.; Wahl, David H.

    2000-01-01

    We measured growth of age-0 tiger muskellunge as a function of ration size (25, 50, 75, and 100% C(max)) and water temperature (7.5-25°C) and compared experimental results with those predicted from a bioenergetic model. Discrepancies between actual and predicted values varied appreciably with water temperature and growth rate. On average, model output overestimated winter consumption rates at 10 and 7.5°C by 113 to 328%, respectively, whereas model predictions in summer and autumn (20-25°C) were in better agreement with actual values (4 to 58%). We postulate that variation in model performance was related to seasonal changes in esocid metabolic rate, which were not accounted for in the bioenergetic model. Moreover, accuracy of model output varied with feeding and growth rate of tiger muskellunge. The model performed poorly for fish fed low rations compared with estimates based on fish fed ad libitum rations and was attributed, in part, to the influence of growth rate on the accuracy of bioenergetic predictions. Based on modeling simulations, we found that errors associated with bioenergetic parameters had more influence on model output when growth rate was low, which is consistent with our observations. In addition, reduced conversion efficiency at high ration levels may contribute to variable model performance, thereby implying that waste losses should be modeled as a function of ration size for esocids. Our findings support earlier field tests of the esocid bioenergetic model and indicate that food consumption is generally overestimated by the model, particularly in winter months and for fish exhibiting low feeding and growth rates.

  18. A measurement-based performability model for a multiprocessor system

    NASA Technical Reports Server (NTRS)

    Hsueh, M. C.; Iyer, Ravi K.; Trivedi, K. S.

    1987-01-01

    A measurement-based performability model based on real error data collected on a multiprocessor system is described. Model development from the raw error data to the estimation of cumulative reward is described. Both normal and failure behavior of the system are characterized. The measured data show that the holding times in key operational and failure states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of different failure types and recovery procedures.

  19. A Comparative Study of Spectral Auroral Intensity Predictions From Multiple Electron Transport Models

    NASA Astrophysics Data System (ADS)

    Grubbs, Guy; Michell, Robert; Samara, Marilia; Hampton, Donald; Hecht, James; Solomon, Stanley; Jahn, Jorg-Micha

    2018-01-01

    It is important to routinely examine and update models used to predict auroral emissions resulting from precipitating electrons in Earth's magnetotail. These models are commonly used to invert spectral auroral ground-based images to infer characteristics about incident electron populations when in situ measurements are unavailable. In this work, we examine and compare auroral emission intensities predicted by three commonly used electron transport models using varying electron population characteristics. We then compare model predictions to same-volume in situ electron measurements and ground-based imaging to qualitatively examine modeling prediction error. Initial comparisons showed differences in predictions by the GLobal airglOW (GLOW) model and the other transport models examined. Chemical reaction rates and radiative rates in GLOW were updated using recent publications, and predictions showed better agreement with the other models and the same-volume data, stressing that these rates are important to consider when modeling auroral processes. Predictions by each model exhibit similar behavior for varying atmospheric constants, energies, and energy fluxes. Same-volume electron data and images are highly correlated with predictions by each model, showing that these models can be used to accurately derive electron characteristics and ionospheric parameters based solely on multispectral optical imaging data.

  20. Modeling the time-varying subjective quality of HTTP video streams with rate adaptations.

    PubMed

    Chen, Chao; Choi, Lark Kwon; de Veciana, Gustavo; Caramanis, Constantine; Heath, Robert W; Bovik, Alan C

    2014-05-01

    Newly developed hypertext transfer protocol (HTTP)-based video streaming technologies enable flexible rate-adaptation under varying channel conditions. Accurately predicting the users' quality of experience (QoE) for rate-adaptive HTTP video streams is thus critical to achieve efficiency. An important aspect of understanding and modeling QoE is predicting the up-to-the-moment subjective quality of a video as it is played, which is difficult due to hysteresis effects and nonlinearities in human behavioral responses. This paper presents a Hammerstein-Wiener model for predicting the time-varying subjective quality (TVSQ) of rate-adaptive videos. To collect data for model parameterization and validation, a database of longer duration videos with time-varying distortions was built and the TVSQs of the videos were measured in a large-scale subjective study. The proposed method is able to reliably predict the TVSQ of rate adaptive videos. Since the Hammerstein-Wiener model has a very simple structure, the proposed method is suitable for online TVSQ prediction in HTTP-based streaming.
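
    A minimal sketch of the Hammerstein-Wiener structure the record describes: a memoryless input nonlinearity, a linear dynamic block carrying the temporal memory (hysteresis), and a memoryless output nonlinearity. The quality inputs, filter weights, and nonlinearities below are illustrative placeholders, not the paper's fitted components.

      import numpy as np

      def hammerstein_wiener(x, w, f_in, g_out):
          v = f_in(x)                      # static input nonlinearity
          u = np.convolve(v, w)[:len(x)]   # linear dynamic block (memory/hysteresis)
          return g_out(u)                  # static output nonlinearity

      stq = np.random.uniform(30, 90, 120)            # per-second short-time quality scores
      w = np.exp(-np.arange(10) / 3.0); w /= w.sum()  # decaying memory of recent quality
      f_in = lambda x: np.tanh((x - 50.0) / 20.0)     # saturating behavioral response
      g_out = lambda u: 50.0 + 40.0 * u               # map back to a subjective scale

      tvsq = hammerstein_wiener(stq, w, f_in, g_out)  # predicted time-varying quality
      print(tvsq[:5])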

  1. Physically-based strength model of tantalum incorporating effects of temperature, strain rate and pressure

    DOE PAGES

    Lim, Hojun; Battaile, Corbett C.; Brown, Justin L.; ...

    2016-06-14

    In this work, we develop a tantalum strength model that incorporates effects of temperature, strain rate and pressure. Dislocation kink-pair theory is used to incorporate temperature and strain rate effects while the pressure dependent yield is obtained through the pressure dependent shear modulus. Material constants used in the model are parameterized from tantalum single crystal tests and polycrystalline ramp compression experiments. It is shown that the proposed strength model agrees well with the temperature and strain rate dependent yield obtained from polycrystalline tantalum experiments. Furthermore, the model accurately reproduces the pressure dependent yield stresses up to 250 GPa. The proposed strength model is then used to conduct simulations of a Taylor cylinder impact test and validated with experiments. This approach provides a physically-based multi-scale strength model that is able to predict the plastic deformation of polycrystalline tantalum through a wide range of temperature, strain and pressure regimes.
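
    A schematic sketch of such a strength model, assuming a kink-pair-style thermal activation factor and yield scaled by a pressure-dependent shear modulus; all constants below are illustrative, not the paper's calibrated tantalum parameters.

      import math

      def shear_modulus(P, mu0=69e9, dmu_dP=1.5):
          # Linear pressure dependence of the shear modulus (assumed form).
          return mu0 + dmu_dP * P

      def flow_stress(T, edot, P, s0=500e6, edot0=1e7, dH=0.7 * 1.602e-19):
          # Thermally activated (kink-pair-style) factor times a pressure scaling.
          kB = 1.380649e-23
          thermal = max(0.0, 1.0 - (kB * T / dH) * math.log(edot0 / edot)) ** 2
          return (shear_modulus(P) / shear_modulus(0.0)) * s0 * thermal

      print(f"{flow_stress(300.0, 1e3, 0.0) / 1e6:.0f} MPa at 300 K, 1e3/s, 0 GPa")
      print(f"{flow_stress(300.0, 1e3, 50e9) / 1e6:.0f} MPa at 300 K, 1e3/s, 50 GPa")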

  2. Data Clustering and Evolving Fuzzy Decision Tree for Data Base Classification Problems

    NASA Astrophysics Data System (ADS)

    Chang, Pei-Chann; Fan, Chin-Yuan; Wang, Yen-Wen

    Data base classification suffers from two well-known difficulties, i.e., the high dimensionality and non-stationary variations within the large historic data. This paper presents a hybrid classification model by integrating a case based reasoning technique, a Fuzzy Decision Tree (FDT), and Genetic Algorithms (GA) to construct a decision-making system for data classification in various data base applications. The model is mainly based on the idea that the historic data base can be transformed into a smaller case-base together with a group of fuzzy decision rules. As a result, the model can respond more accurately to the current data under classification through the inductions of these smaller case-based fuzzy decision trees. Hit rate is applied as a performance measure, and the effectiveness of our proposed model is demonstrated by experimental comparison with other approaches on different data base classification applications. The average hit rate of our proposed model is the highest among others.

  3. A Symmetric Time-Varying Cluster Rate of Descent Model

    NASA Technical Reports Server (NTRS)

    Ray, Eric S.

    2015-01-01

    A model of the time-varying rate of descent of the Orion vehicle was developed based on the observed correlation between canopy projected area and drag coefficient. This initial version of the model assumes cluster symmetry and only varies the vertical component of velocity. The cluster fly-out angle is modeled as a series of sine waves based on flight test data. The projected area of each canopy is synchronized with the primary fly-out angle mode. The sudden loss of projected area during canopy collisions is modeled at minimum fly-out angles, leading to brief increases in rate of descent. The cluster geometry is converted to drag coefficient using empirically derived constants. A more complete model is under development, which computes the aerodynamic response of each canopy to its local incidence angle.
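
    A minimal sketch of the descent-rate construction, assuming a fly-out angle built from a sum of sine waves, projected area synchronized with the primary mode, and an empirical mapping from geometry to drag; every number below is a placeholder rather than a flight-test-derived value, and the canopy-collision area losses are omitted.

      import numpy as np

      t = np.linspace(0.0, 60.0, 6001)                         # time (s)
      phi = np.radians(8 + 5 * np.sin(2 * np.pi * 0.1 * t)     # fly-out angle as a
                       + 2 * np.sin(2 * np.pi * 0.27 * t))     # sum of sine waves
      area = 1100.0 * (0.85 + 0.15 * np.sin(2 * np.pi * 0.1 * t))  # area synced to primary mode, m^2
      cd = 0.85 * np.cos(phi)                                  # empirical geometry -> drag mapping

      W, rho = 9500.0 * 9.81, 1.225                            # vehicle weight (N), air density (kg/m^3)
      v = np.sqrt(2.0 * W / (rho * cd * area))                 # equilibrium vertical rate of descent
      print(f"descent rate range: {v.min():.1f}-{v.max():.1f} m/s")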

  4. Modelling rating curves using remotely sensed LiDAR data

    USGS Publications Warehouse

    Nathanson, Marcus; Kean, Jason W.; Grabs, Thomas J.; Seibert, Jan; Laudon, Hjalmar; Lyon, Steve W.

    2012-01-01

    Accurate stream discharge measurements are important for many hydrological studies. In remote locations, however, it is often difficult to obtain stream flow information because of the difficulty in making the discharge measurements necessary to define stage-discharge relationships (rating curves). This study investigates the feasibility of defining rating curves by using a fluid mechanics-based model constrained with topographic data from an airborne LiDAR scanning. The study was carried out for an 8m-wide channel in the boreal landscape of northern Sweden. LiDAR data were used to define channel geometry above a low flow water surface along the 90-m surveyed reach. The channel topography below the water surface was estimated using the simple assumption of a flat streambed. The roughness for the modelled reach was back calculated from a single measurement of discharge. The topographic and roughness information was then used to model a rating curve. To isolate the potential influence of the flat bed assumption, a 'hybrid model' rating curve was developed on the basis of data combined from the LiDAR scan and a detailed ground survey. Whereas this hybrid model rating curve was in agreement with the direct measurements of discharge, the LiDAR model rating curve agreed equally well with the medium and high flow measurements, based on confidence intervals calculated from the direct measurements. The discrepancy between the LiDAR model rating curve and the low flow measurements was likely due to reduced roughness associated with unresolved submerged bed topography. Scanning during periods of low flow can help minimize this deficiency. These results suggest that combined ground surveys and LiDAR scans or multifrequency LiDAR scans that see 'below' the water surface (bathymetric LiDAR) could be useful in generating data needed to run such a fluid mechanics-based model. This opens a realm of possibility to remotely sense and monitor stream flows in channels in remote locations.
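
    A minimal sketch of a hydraulics-based rating curve under the record's assumptions: rectangular geometry from the LiDAR-derived width, a flat streambed below the water surface, and roughness back-calculated from a single discharge measurement. The slope and gauging numbers are invented, and the paper's flow model is more complete than the Manning relation used here.

      import numpy as np

      width, bed_z, slope = 8.0, 0.0, 0.005    # LiDAR channel width; flat-bed assumption

      def manning_Q(stage, n):
          A = width * (stage - bed_z)          # flow area of a rectangular section
          P = width + 2.0 * (stage - bed_z)    # wetted perimeter
          R = A / P                            # hydraulic radius
          return A * R ** (2.0 / 3.0) * np.sqrt(slope) / n

      # Back-calculate roughness n from one measured discharge (Q is proportional to 1/n).
      stage_obs, Q_obs = 0.6, 2.4              # m, m^3/s (hypothetical gauging)
      n = manning_Q(stage_obs, 1.0) / Q_obs

      for h in (0.3, 0.6, 1.2):                # stages spanning low to high flow
          print(f"stage {h:.1f} m -> {manning_Q(h, n):.2f} m^3/s")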

  5. Chemistry resolved kinetic flow modeling of TATB based explosives

    NASA Astrophysics Data System (ADS)

    Vitello, Peter; Fried, Laurence E.; William, Howard; Levesque, George; Souers, P. Clark

    2012-03-01

    Detonation waves in insensitive, TATB-based explosives are believed to have multiple time scale regimes. The initial burn rate of such explosives has a sub-microsecond time scale. However, significant late-time slow release in energy is believed to occur due to diffusion limited growth of carbon. In the intermediate time scale concentrations of product species likely change from being in equilibrium to being kinetic rate controlled. We use the thermo-chemical code CHEETAH linked to an ALE hydrodynamics code to model detonations. We term our model chemistry resolved kinetic flow, since CHEETAH tracks the time dependent concentrations of individual species in the detonation wave and calculates EOS values based on the concentrations. We present here two variants of our new rate model and comparison with hot, ambient, and cold experimental data for PBX 9502.

  6. Vehicle crashworthiness ratings in Australia.

    PubMed

    Cameron, M; Mach, T; Neiger, D; Graham, A; Ramsay, R; Pappas, M; Haley, J

    1994-08-01

    The paper reviews the published vehicle safety ratings based on mass crash data from the United States, Sweden, and Great Britain. It then describes the development of vehicle crashworthiness ratings based on injury compensation claims and police accident reports from Victoria and New South Wales, the two most populous states in Australia. Crashworthiness was measured by a combination of injury severity (of injured drivers) and injury risk (of drivers involved in crashes). Injury severity was based on 22,600 drivers injured in crashes in the two states. Injury risk was based on 70,900 drivers in New South Wales involved in crashes after which a vehicle was towed away. Injury risk measured in this way was compared with the "relative injury risk" of particular model cars involved in two car crashes in Victoria (where essentially only casualty crashes are reported), which was based on the method developed by Folksam Insurance in Sweden from Evans' double-pair comparison method. The results include crashworthiness ratings for the makes and models crashing in Australia in sufficient numbers to measure their crash performance adequately. The ratings were normalised for the driver sex and speed limit at the crash location, the two factors found to be strongly related to injury risk and/or severity and to vary substantially across makes and models of Australian crash-involved cars. This allows differences in crashworthiness of individual models to be seen, uncontaminated by major crash exposure differences.

  7. Hybrid attacks on model-based social recommender systems

    NASA Astrophysics Data System (ADS)

    Yu, Junliang; Gao, Min; Rong, Wenge; Li, Wentao; Xiong, Qingyu; Wen, Junhao

    2017-10-01

    With the growing popularity of online social platforms, social network based approaches to recommendation have emerged. However, because of the open nature of rating systems and social networks, social recommender systems are susceptible to malicious attacks. In this paper, we present a novel attack, which inherits characteristics of both the rating attack and the relation attack, and term it the hybrid attack. Further, we explore the impact of the hybrid attack on model-based social recommender systems in multiple aspects. The experimental results show that the hybrid attack is more destructive than the rating attack in most cases. In addition, users and items with fewer ratings will be influenced more when attacked. Last but not least, the findings suggest that spammers do not depend on feedback links from normal users to become more powerful; unilateral links can make the hybrid attack effective enough. Since unilateral links are much cheaper, the hybrid attack poses a great threat to model-based social recommender systems.

  8. A multi-scale model of dislocation plasticity in α-Fe: Incorporating temperature, strain rate and non-Schmid effects

    DOE PAGES

    Lim, H.; Hale, L. M.; Zimmerman, J. A.; ...

    2015-01-05

    In this study, we develop an atomistically informed crystal plasticity finite element (CP-FE) model for body-centered-cubic (BCC) α-Fe that incorporates non-Schmid stress dependent slip with temperature and strain rate effects. Based on recent insights obtained from atomistic simulations, we propose a new constitutive model that combines a generalized non-Schmid yield law with aspects from a line tension (LT) model for describing the activation enthalpy required for the motion of dislocation kinks. Atomistic calculations are conducted to quantify the non-Schmid effects while both experimental data and atomistic simulations are used to assess the temperature and strain rate effects. The parameterized constitutive equation is implemented into a BCC CP-FE model to simulate plastic deformation of single and polycrystalline Fe which is compared with experimental data from the literature. This direct comparison demonstrates that the atomistically informed model accurately captures the effects of crystal orientation, temperature and strain rate on the flow behavior of single crystal Fe. Furthermore, our proposed CP-FE model exhibits temperature and strain rate dependent flow and yield surfaces in polycrystalline Fe that deviate from conventional CP-FE models based on Schmid's law.

  9. Modelling the spreading rate of controlled communicable epidemics through an entropy-based thermodynamic model

    NASA Astrophysics Data System (ADS)

    Wang, WenBin; Wu, ZiNiu; Wang, ChunFeng; Hu, RuiFeng

    2013-11-01

    A model based on a thermodynamic approach is proposed for predicting the dynamics of communicable epidemics assumed to be governed by controlling efforts of multiple scales, so that an entropy is associated with the system. All the epidemic details are factored into a single, time-dependent coefficient whose functional form is found through four constraints, including notably the existence of an inflexion point and a maximum. The model is solved to give a log-normal distribution for the spread rate, for which a Shannon entropy can be defined. The only parameter, which characterizes the width of the distribution function, is uniquely determined by maximizing the rate of entropy production. This entropy-based thermodynamic (EBT) model predicts the number of hospitalized cases with reasonable accuracy for SARS in the year 2003. This EBT model can be of use for potential epidemics such as avian influenza and H7N9 in China.

  10. Interest rate next-day variation prediction based on hybrid feedforward neural network, particle swarm optimization, and multiresolution techniques

    NASA Astrophysics Data System (ADS)

    Lahmiri, Salim

    2016-02-01

    Multiresolution analysis techniques including continuous wavelet transform, empirical mode decomposition, and variational mode decomposition are tested in the context of interest rate next-day variation prediction. In particular, multiresolution analysis techniques are used to decompose the actual interest rate variation, and a feedforward neural network is used for training and prediction. A particle swarm optimization technique is adopted to optimize its initial weights. For comparison purposes, an autoregressive moving average model, a random walk process and the naive model are used as the main reference models. In order to show the feasibility of the presented hybrid models that combine multiresolution analysis techniques and a feedforward neural network optimized by particle swarm optimization, we used a set of six illustrative interest rates, including Moody's seasoned Aaa corporate bond yield, Moody's seasoned Baa corporate bond yield, 3-month, 6-month and 1-year treasury bills, and the effective federal funds rate. The forecasting results show that all multiresolution-based prediction systems outperform the conventional reference models on the criteria of mean absolute error, mean absolute deviation, and root mean-squared error. Therefore, it is advantageous to adopt hybrid multiresolution techniques and soft computing models to forecast interest rate daily variations, as they provide good forecasting performance.

  11. Estimation of hydrolysis rate constants for carbamates ...

    EPA Pesticide Factsheets

    Cheminformatics based tools, such as the Chemical Transformation Simulator under development in EPA’s Office of Research and Development, are being increasingly used to evaluate chemicals for their potential to degrade in the environment or be transformed through metabolism. Hydrolysis represents a major environmental degradation pathway; unfortunately, only a small fraction of hydrolysis rates for the about 85,000 chemicals on the Toxic Substances Control Act (TSCA) inventory are in the public domain, making it critical to develop in silico approaches to estimate hydrolysis rate constants. In this presentation, we compare three complementary approaches to estimate hydrolysis rates for carbamates, an important chemical class widely used in agriculture as pesticides, herbicides and fungicides. Fragment-based Quantitative Structure Activity Relationships (QSARs) using Hammett-Taft sigma constants are widely published and implemented for relatively simple functional groups such as carboxylic acid esters, phthalate esters, and organophosphate esters, and we extend these to carbamates. We also develop a pKa based model and a quantitative structure property relationship (QSPR) model, and evaluate them against measured rate constants using R-squared and root mean square (RMS) error. Our work shows that for our relatively small sample size of carbamates, a Hammett-Taft based fragment model performs best, followed by a pKa and a QSPR model.

  12. Recovery after treatment and sensitivity to base rate.

    PubMed

    Doctor, J N

    1999-04-01

    Accurate classification of patients as having recovered after psychotherapy depends largely on the base rate of such recovery. This article presents methods for classifying participants as recovered after therapy. The approach described here considers base rate in the statistical model. These methods can be applied to psychotherapy outcome data for 2 purposes: (a) to determine the robustness of a data set to differing base-rate assumptions and (b) to formulate an appropriate cutoff that is beyond the range of cases that are not robust to plausible base-rate assumptions. Discussion addresses a fundamental premise underlying the study of recovery after psychotherapy.
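
    A minimal sketch of why classification is base-rate sensitive, using Bayes' rule with hypothetical sensitivity and specificity; sweeping the assumed base rate probes the robustness question the article raises.

      def posterior_recovered(base_rate, sensitivity=0.85, specificity=0.80):
          # Bayes' rule: P(truly recovered | classified as recovered) vs. base rate.
          p_positive = sensitivity * base_rate + (1.0 - specificity) * (1.0 - base_rate)
          return sensitivity * base_rate / p_positive

      for br in (0.2, 0.5, 0.8):   # probe robustness to differing base-rate assumptions
          print(f"base rate {br:.1f}: P(recovered | classified recovered) = {posterior_recovered(br):.2f}")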

  13. On Optimizing H.264/AVC Rate Control by Improving R-D Model and Incorporating HVS Characteristics

    NASA Astrophysics Data System (ADS)

    Zhu, Zhongjie; Wang, Yuer; Bai, Yongqiang; Jiang, Gangyi

    2010-12-01

    The state-of-the-art JVT-G012 rate control algorithm of H.264 is improved in two respects. First, the quadratic rate-distortion (R-D) model is modified based on both empirical observations and theoretical analysis. Second, based on existing physiological and psychological research findings on human vision, the rate control algorithm is optimized by incorporating the main characteristics of the human visual system (HVS), such as contrast sensitivity, multichannel theory, and the masking effect. Experiments are conducted, and the results show that the improved algorithm can simultaneously enhance overall subjective visual quality and improve rate control precision effectively.
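
    For reference, the quadratic R-D model underlying JVT-G012 can be inverted for the quantization step that meets a bit target; the sketch below uses illustrative coefficients and reduces the paper's HVS considerations to a single perceptual scale factor, purely as an assumption.

      import math

      def qstep_for_target(R_target, mad, c1=1.0, c2=0.5):
          # R(Q) = c1*MAD/Q + c2*MAD/Q^2 is quadratic in x = 1/Q; take the positive root.
          a, b, c = c2 * mad, c1 * mad, -R_target
          x = (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
          return 1.0 / x

      mad, target_bits = 4.0, 600.0
      hvs_weight = 0.8      # e.g. a perceptually less important region gets fewer bits
      print(f"quantization step: {qstep_for_target(hvs_weight * target_bits, mad):.3f}")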

  14. The Topp-Leone generalized Rayleigh cure rate model and its application

    NASA Astrophysics Data System (ADS)

    Nanthaprut, Pimwarat; Bodhisuwan, Winai; Patummasut, Mena

    2017-11-01

    Cure rate models are survival models that account for a proportion of censored subjects who will never experience the event of interest. In clinical trials, data representing the time to recurrence or death of patients are used to improve the efficiency of treatments. Each dataset can be separated into two groups: censored and uncensored data. In this work, a new mixture cure rate model is introduced based on the Topp-Leone generalized Rayleigh distribution. The Bayesian approach is employed to estimate its parameters. In addition, a breast cancer dataset is analyzed for model illustration purposes. According to the deviance information criterion, the Topp-Leone generalized Rayleigh cure rate model shows better results than the Weibull and exponential cure rate models.
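
    A minimal sketch of the mixture cure rate structure, S_pop(t) = pi + (1 - pi) * S0(t); a Weibull baseline stands in for the Topp-Leone generalized Rayleigh latency distribution purely for illustration, and the parameter values are invented.

      import math

      def population_survival(t, pi, shape=1.5, scale=20.0):
          s0 = math.exp(-((t / scale) ** shape))   # Weibull stand-in for latency survival
          return pi + (1.0 - pi) * s0              # the cured fraction pi never fails

      for t in (0.0, 12.0, 60.0):
          print(f"t = {t:5.1f} months: S_pop = {population_survival(t, pi=0.3):.3f}")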

  15. A likelihood-based biostatistical model for analyzing consumer movement in simultaneous choice experiments

    USDA-ARS?s Scientific Manuscript database

    Measures of animal movement versus consumption rates can provide valuable, ecologically relevant information on feeding preference, specifically estimates of attraction rate, leaving rate, tenure time, or measures of flight/walking path. Here, we develop a simple biostatistical model to analyze repe...

  16. Shilling attack detection for recommender systems based on credibility of group users and rating time series.

    PubMed

    Zhou, Wei; Wen, Junhao; Qu, Qiang; Zeng, Jun; Cheng, Tian

    2018-01-01

    Recommender systems are vulnerable to shilling attacks. Forged user-generated content data, such as user ratings and reviews, are used by attackers to manipulate recommendation rankings. Shilling attack detection in recommender systems is of great significance to maintain the fairness and sustainability of recommender systems. The current studies have problems in terms of the poor universality of algorithms, difficulty in selection of user profile attributes, and lack of an optimization mechanism. In this paper, a shilling behaviour detection structure based on abnormal group user findings and rating time series analysis is proposed. This paper adds to the current understanding in the field by studying the credibility evaluation model in-depth based on the rating prediction model to derive proximity-based predictions. A method for detecting suspicious ratings based on suspicious time windows and target item analysis is proposed. Suspicious rating time segments are determined by constructing a time series, and data streams of the rating items are examined and suspicious rating segments are checked. To analyse features of shilling attacks by a group user's credibility, an abnormal group user discovery method based on time series and time window is proposed. Standard testing datasets are used to verify the effect of the proposed method.

  17. Shilling attack detection for recommender systems based on credibility of group users and rating time series

    PubMed Central

    Wen, Junhao; Qu, Qiang; Zeng, Jun; Cheng, Tian

    2018-01-01

    Recommender systems are vulnerable to shilling attacks. Forged user-generated content data, such as user ratings and reviews, are used by attackers to manipulate recommendation rankings. Shilling attack detection in recommender systems is of great significance to maintain the fairness and sustainability of recommender systems. The current studies have problems in terms of the poor universality of algorithms, difficulty in selection of user profile attributes, and lack of an optimization mechanism. In this paper, a shilling behaviour detection structure based on abnormal group user findings and rating time series analysis is proposed. This paper adds to the current understanding in the field by studying the credibility evaluation model in-depth based on the rating prediction model to derive proximity-based predictions. A method for detecting suspicious ratings based on suspicious time windows and target item analysis is proposed. Suspicious rating time segments are determined by constructing a time series, and data streams of the rating items are examined and suspicious rating segments are checked. To analyse features of shilling attacks by a group user’s credibility, an abnormal group user discovery method based on time series and time window is proposed. Standard testing datasets are used to verify the effect of the proposed method. PMID:29742134

  18. Grain-Size Based Additivity Models for Scaling Multi-rate Uranyl Surface Complexation in Subsurface Sediments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xiaoying; Liu, Chongxuan; Hu, Bill X.

    This study statistically analyzed a grain-size based additivity model that has been proposed to scale reaction rates and parameters from laboratory to field. The additivity model assumed that reaction properties in a sediment including surface area, reactive site concentration, reaction rate, and extent can be predicted from the field-scale grain size distribution by linearly adding reaction properties for individual grain size fractions. This study focused on the statistical analysis of the additivity model with respect to reaction rate constants using multi-rate uranyl (U(VI)) surface complexation reactions in a contaminated sediment as an example. Experimental data of rate-limited U(VI) desorption in a stirred flow-cell reactor were used to estimate the statistical properties of multi-rate parameters for individual grain size fractions. The statistical properties of the rate constants for the individual grain size fractions were then used to analyze the statistical properties of the additivity model to predict rate-limited U(VI) desorption in the composite sediment, and to evaluate the relative importance of individual grain size fractions to the overall U(VI) desorption. The result indicated that the additivity model provided a good prediction of the U(VI) desorption in the composite sediment. However, the rate constants were not directly scalable using the additivity model, and U(VI) desorption in individual grain size fractions has to be simulated in order to apply the additivity model. An approximate additivity model for directly scaling rate constants was subsequently proposed and evaluated. The results showed that the approximate model provided a good prediction of the experimental results within statistical uncertainty. This study also found that a gravel size fraction (2-8 mm), which is often ignored in modeling U(VI) sorption and desorption, is statistically significant to the U(VI) desorption in the sediment.
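
    A minimal sketch of the additivity idea: a composite-sediment property computed as the mass-fraction-weighted sum over grain-size fractions. The fractions and per-fraction values below are hypothetical.

      fractions = {      # mass fraction of each grain-size class (sums to 1)
          "<0.5 mm": 0.25, "0.5-2 mm": 0.40, "2-8 mm (gravel)": 0.35,
      }
      site_conc = {      # reactive site concentration per fraction (umol/g), assumed
          "<0.5 mm": 1.8, "0.5-2 mm": 0.9, "2-8 mm (gravel)": 0.3,
      }

      composite = sum(fractions[k] * site_conc[k] for k in fractions)
      print(f"composite reactive site concentration: {composite:.2f} umol/g")
      # Caveat from the study: rate *constants* are not additive this way; desorption
      # must be simulated per fraction and then combined.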

  19. Hierarchical mark-recapture models: a framework for inference about demographic processes

    USGS Publications Warehouse

    Link, W.A.; Barker, R.J.

    2004-01-01

    The development of sophisticated mark-recapture models over the last four decades has provided fundamental tools for the study of wildlife populations, allowing reliable inference about population sizes and demographic rates based on clearly formulated models for the sampling processes. Mark-recapture models are now routinely described by large numbers of parameters. These large models provide the next challenge to wildlife modelers: the extraction of signal from noise in large collections of parameters. Pattern among parameters can be described by strong, deterministic relations (as in ultrastructural models) but is more flexibly and credibly modeled using weaker, stochastic relations. Trend in survival rates is not likely to be manifest by a sequence of values falling precisely on a given parametric curve; rather, if we could somehow know the true values, we might anticipate a regression relation between parameters and explanatory variables, in which true value equals signal plus noise. Hierarchical models provide a useful framework for inference about collections of related parameters. Instead of regarding parameters as fixed but unknown quantities, we regard them as realizations of stochastic processes governed by hyperparameters. Inference about demographic processes is based on investigation of these hyperparameters. We advocate the Bayesian paradigm as a natural, mathematically and scientifically sound basis for inference about hierarchical models. We describe analysis of capture-recapture data from an open population based on hierarchical extensions of the Cormack-Jolly-Seber model. In addition to recaptures of marked animals, we model first captures of animals and losses on capture, and are thus able to estimate survival probabilities w (i.e., the complement of death or permanent emigration) and per capita growth rates f (i.e., the sum of recruitment and immigration rates). Covariation in these rates, a feature of demographic interest, is explicitly described in the model.

  20. Increase in the CO2 exchange rate of leaves of Ilex rotunda with elevated atmospheric CO2 concentration in an urban canyon

    NASA Astrophysics Data System (ADS)

    Takagi, M.; Gyokusen, Koichiro; Saito, Akira

    It was found that the atmospheric carbon dioxide (CO2) concentration in an urban canyon in Fukuoka city, Japan during August 1997 was about 30 µmol mol^-1 higher than that in the suburbs. When fully exposed to sunlight, the in situ rate of photosynthesis in single leaves of Ilex rotunda planted in the urban canyon was higher when the atmospheric CO2 concentration was elevated. A biochemically based model was able to predict the in situ rate of photosynthesis well. The model also predicted an increase in the daily CO2 exchange rate for leaves in the urban canyon with an increase in atmospheric CO2 concentration. However, in situ such an increase in the daily CO2 exchange rate may be offset by diminished sunlight, a higher air temperature and a lower relative humidity. Thus, the daily CO2 exchange rate predicted using the model based solely on the environmental conditions prevailing in the urban canyon was lower than that predicted based only on environmental factors found in the suburbs.

  1. Fire flame detection based on GICA and target tracking

    NASA Astrophysics Data System (ADS)

    Rong, Jianzhong; Zhou, Dechuang; Yao, Wei; Gao, Wei; Chen, Juan; Wang, Jian

    2013-04-01

    To improve the video fire detection rate, a robust fire detection algorithm based on the color, motion and pattern characteristics of fire targets was proposed, which achieved a satisfactory fire detection rate for different fire scenes. In this fire detection algorithm: (a) a rule-based generic color model was developed based on analysis of a large quantity of flame pixels; (b) from the traditional GICA (Geometrical Independent Component Analysis) model, a Cumulative Geometrical Independent Component Analysis (C-GICA) model was developed for motion detection without a static background; and (c) a BP neural network fire recognition model based on multiple features of the fire pattern was developed. Fire detection tests on benchmark fire video clips of different scenes have shown the robustness, accuracy and fast response of the algorithm.
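
    A minimal sketch of a rule-based flame color test of the kind described in (a), assuming simple RGB ordering rules with illustrative thresholds; the paper derives its actual rules from a large set of labeled flame pixels.

      import numpy as np

      def flame_mask(img_rgb, r_thresh=150, margin=20):
          # Candidate flame pixels: a bright red channel with R >= G >= B ordering.
          r = img_rgb[..., 0].astype(int)
          g = img_rgb[..., 1].astype(int)
          b = img_rgb[..., 2].astype(int)
          return (r > r_thresh) & (r >= g + margin) & (g >= b)

      frame = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)  # stand-in frame
      print(f"candidate flame pixels: {int(flame_mask(frame).sum())}")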

  2. Estimating roadside encroachment rates with the combined strengths of accident- and encroachment-based approaches

    DOT National Transportation Integrated Search

    2001-09-01

    In two recent studies by Miaou, he proposed a method to estimate vehicle roadside encroachment rates using accident-based models. He further illustrated the use of this method to estimate roadside encroachment rates for rural two-lane undivided roads...

  3. Economic policy optimization based on both one stochastic model and the parametric control theory

    NASA Astrophysics Data System (ADS)

    Ashimov, Abdykappar; Borovskiy, Yuriy; Onalbekov, Mukhit

    2016-06-01

    A nonlinear dynamic stochastic general equilibrium model with financial frictions is developed to describe two interacting national economies in the environment of the rest of the world. Parameters of the nonlinear model are estimated by a Bayesian approach based on its log-linearization. The nonlinear model is verified by retroprognosis, by estimation of stability indicators of mappings specified by the model, and by estimating the degree of agreement between the effects of internal and external shocks on macroeconomic indicators computed with the estimated nonlinear model and with its log-linearization. On the basis of the nonlinear model, the parametric control problems of economic growth and volatility of macroeconomic indicators of Kazakhstan are formulated and solved for two exchange rate regimes (free floating and managed floating exchange rates).

  4. Recent topographic evolution and erosion of the deglaciated Washington Cascades inferred from a stochastic landscape evolution model

    NASA Astrophysics Data System (ADS)

    Moon, S.; Shelef, E.; Hilley, G. E.

    2013-12-01

    The Washington Cascades is currently in topographic and erosional disequilibrium after deglaciation occurred around 11-17 ka ago. The topography still shows the features inherited from prior alpine glacial processes (e.g., cirques, steep side-valleys, and flat valley bottoms), though postglacial processes are currently denuding this landscape. Our previous study in this area calculated the thousand-year-timescale denudation rates using cosmogenic 10Be concentration (CRN-denudation rates), and showed that they were ~4 times higher than million-year-timescale uplift rates. In addition, the spatial distribution of denudation rates showed a good correlation with a factor-of-ten variation in precipitation. We interpreted this correlation as reflecting the sensitivity of landslide triggering in over-steepened deglaciated topography to precipitation, which produced high denudation rates in wet areas that experienced frequent landsliding. We explored this interpretation using a model of postglacial surface processes that predicts the evolution of the topography and denudation rates within the deglaciated Washington Cascades. Specifically, we used the model to understand the controls on and timescales of landscape response to changes in the surface process regime after deglaciation. The postglacial adjustment of this landscape is modeled using a geomorphic-transport-law-based numerical model that includes processes of river incision, hillslope diffusion, and stochastic landslides. The surface lowering due to landslides is parameterized using a physically-based slope stability model coupled to a stochastic model of the generation of landslides. The model parameters of river incision and stochastic landslides are calibrated based on the rates and distribution of thousand-year-timescale denudation rates measured from cosmogenic 10Be isotopes. The probability distribution of model parameters required to fit the observed denudation rates shows ranges comparable to previous studies in similar rock types and climatic conditions. The calibrated parameters suggest that the dominant source of river sediments is stochastic landslides. The magnitude of landslide denudation rates is determined by failure density (similar to landslide frequency), while their spatial distribution is largely controlled by precipitation and slope angles. Simulation results show that denudation rates decay over time and take approximately 130-180 ka to reach steady-state rates. This response timescale is longer than glacial/interglacial cycles, suggesting that frequent climatic perturbations during the Quaternary may prevent these types of landscapes from reaching a dynamic equilibrium with postglacial processes.

  5. An equivalent dissipation rate model for capturing history effects in non-premixed flames

    DOE PAGES

    Kundu, Prithwish; Echekki, Tarek; Pei, Yuanjiang; ...

    2016-11-11

    The effects of strain rate history on turbulent flames have been studied in the past decades with 1D counter flow diffusion flame (CFDF) configurations subjected to oscillating strain rates. In this work, these unsteady effects are studied for complex hydrocarbon fuel surrogates at engine relevant conditions with unsteady strain rates experienced by flamelets in a typical spray flame. Tabulated combustion models are based on a steady scalar dissipation rate (SDR) assumption and hence cannot capture these unsteady strain effects, even though they can capture the unsteady chemistry. In this work, 1D CFDF with varying strain rates are simulated using two different modeling approaches: a steady SDR assumption and an unsteady flamelet model. Comparative studies show that the history effects due to unsteady SDR are directly proportional to the temporal gradient of the SDR. A new equivalent SDR model based on the history of a flamelet is proposed. An averaging procedure is constructed such that the most recent histories are given higher weights. This equivalent SDR is then used with the steady SDR assumption in 1D flamelets. Results show a good agreement between the tabulated flamelet solution and the unsteady flamelet results. This equivalent SDR concept is further implemented and compared against 3D spray flames (Engine Combustion Network Spray A). Tabulated models based on a steady SDR assumption under-predict autoignition and flame lift-off when compared with an unsteady Representative Interactive Flamelet (RIF) model. However, the equivalent SDR model coupled with the tabulated model predicted autoignition and flame lift-off very close to those reported by the RIF model. This model is further validated for a range of injection pressures for Spray A flames. As a result, the new modeling framework now enables tabulated models with significantly lower computational cost to account for unsteady history effects.
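
    One way to read the proposed averaging is as a recency-weighted mean of the flamelet's SDR history; the exponential weighting below is an assumption for illustration, not the paper's actual scheme, and the transient is synthetic.

      import numpy as np

      def equivalent_sdr(chi_history, dt, tau=0.05):
          t = np.arange(len(chi_history)) * dt
          w = np.exp((t - t[-1]) / tau)     # weight 1 now, decaying into the past
          return float(np.sum(w * chi_history) / np.sum(w))

      dt = 1e-3
      chi = 40.0 * np.exp(-np.arange(200) * dt / 0.02) + 5.0   # decaying strain transient, 1/s
      print(f"instantaneous SDR: {chi[-1]:.2f} 1/s, "
            f"equivalent SDR: {equivalent_sdr(chi, dt):.2f} 1/s")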

  6. An equivalent dissipation rate model for capturing history effects in non-premixed flames

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kundu, Prithwish; Echekki, Tarek; Pei, Yuanjiang

    The effects of strain rate history on turbulent flames have been studied in the past decades with 1D counter flow diffusion flame (CFDF) configurations subjected to oscillating strain rates. In this work, these unsteady effects are studied for complex hydrocarbon fuel surrogates at engine relevant conditions with unsteady strain rates experienced by flamelets in a typical spray flame. Tabulated combustion models are based on a steady scalar dissipation rate (SDR) assumption and hence cannot capture these unsteady strain effects, even though they can capture the unsteady chemistry. In this work, 1D CFDF with varying strain rates are simulated using two different modeling approaches: a steady SDR assumption and an unsteady flamelet model. Comparative studies show that the history effects due to unsteady SDR are directly proportional to the temporal gradient of the SDR. A new equivalent SDR model based on the history of a flamelet is proposed. An averaging procedure is constructed such that the most recent histories are given higher weights. This equivalent SDR is then used with the steady SDR assumption in 1D flamelets. Results show a good agreement between the tabulated flamelet solution and the unsteady flamelet results. This equivalent SDR concept is further implemented and compared against 3D spray flames (Engine Combustion Network Spray A). Tabulated models based on a steady SDR assumption under-predict autoignition and flame lift-off when compared with an unsteady Representative Interactive Flamelet (RIF) model. However, the equivalent SDR model coupled with the tabulated model predicted autoignition and flame lift-off very close to those reported by the RIF model. This model is further validated for a range of injection pressures for Spray A flames. As a result, the new modeling framework now enables tabulated models with significantly lower computational cost to account for unsteady history effects.

  7. Expectation maximization-based likelihood inference for flexible cure rate models with Weibull lifetimes.

    PubMed

    Balakrishnan, Narayanaswamy; Pal, Suvra

    2016-08-01

    Recently, a flexible cure rate survival model has been developed by assuming the number of competing causes of the event of interest to follow the Conway-Maxwell-Poisson distribution. This model includes some of the well-known cure rate models discussed in the literature as special cases. Data obtained from cancer clinical trials are often right censored, and the expectation maximization algorithm can be used in this case to efficiently estimate the model parameters based on right censored data. In this paper, we consider the competing cause scenario and, assuming the time-to-event to follow the Weibull distribution, we derive the necessary steps of the expectation maximization algorithm for estimating the parameters of different cure rate survival models. The standard errors of the maximum likelihood estimates are obtained by inverting the observed information matrix. The method of inference developed here is examined by means of an extensive Monte Carlo simulation study. Finally, we illustrate the proposed methodology with real data on cancer recurrence. © The Author(s) 2013.

  8. A fuzzy-logic-based model to predict biogas and methane production rates in a pilot-scale mesophilic UASB reactor treating molasses wastewater.

    PubMed

    Turkdogan-Aydinol, F Ilter; Yetilmezsoy, Kaan

    2010-10-15

    A MIMO (multiple inputs and multiple outputs) fuzzy-logic-based model was developed to predict biogas and methane production rates in a pilot-scale 90-L mesophilic up-flow anaerobic sludge blanket (UASB) reactor treating molasses wastewater. Five input variables, namely volumetric organic loading rate (OLR), volumetric total chemical oxygen demand (TCOD) removal rate (R(V)), influent alkalinity, influent pH and effluent pH, were fuzzified by the use of an artificial intelligence-based approach. Trapezoidal membership functions with eight levels were constructed for the fuzzy subsets, and a Mamdani-type fuzzy inference system was used to implement a total of 134 rules in the IF-THEN format. The product (prod) and the centre of gravity (COG, centroid) methods were employed as the inference operator and defuzzification methods, respectively. Fuzzy-logic predicted results were compared with the outputs of two exponential non-linear regression models derived in this study. The UASB reactor showed a remarkable performance in the treatment of molasses wastewater, with an average TCOD removal efficiency of 93 (+/-3)% and an average volumetric TCOD removal rate of 6.87 (+/-3.93) kg TCOD(removed)/m(3)-day, respectively. Findings of this study clearly indicated that, compared to non-linear regression models, the proposed MIMO fuzzy-logic-based model produced smaller deviations and exhibited a superior predictive performance in forecasting both biogas and methane production rates, with satisfactory determination coefficients over 0.98. 2010 Elsevier B.V. All rights reserved.
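
    A minimal Mamdani-style sketch with trapezoidal memberships, two illustrative IF-THEN rules, product inference, and centroid defuzzification; the paper's model uses five inputs, eight levels, and 134 rules, so everything below is a toy stand-in.

      import numpy as np

      def trap(x, a, b, c, d):
          # Trapezoidal membership: rises a->b, flat b->c, falls c->d.
          return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0.0, 1.0)

      olr = 6.0                                     # example input, kg COD/m^3-day
      olr_low, olr_high = trap(olr, 0, 2, 4, 7), trap(olr, 4, 8, 12, 15)

      y = np.linspace(0.0, 30.0, 301)               # biogas-rate output universe (L/h)
      gas_low, gas_high = trap(y, 0, 3, 8, 12), trap(y, 8, 15, 25, 30)

      # Rule 1: IF OLR is low THEN biogas is low; Rule 2: IF OLR is high THEN biogas is high.
      agg = np.maximum(olr_low * gas_low, olr_high * gas_high)   # product (prod) inference
      biogas = float(np.sum(y * agg) / np.sum(agg))              # centroid (COG) defuzzification
      print(f"predicted biogas production rate: {biogas:.1f} L/h")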

  9. Rate-Based Model Predictive Control of Turbofan Engine Clearance

    NASA Technical Reports Server (NTRS)

    DeCastro, Jonathan A.

    2006-01-01

    An innovative model predictive control strategy is developed for control of nonlinear aircraft propulsion systems and sub-systems. At the heart of the controller is a rate-based linear parameter-varying model that propagates the state derivatives across the prediction horizon, extending prediction fidelity to transient regimes where conventional models begin to lose validity. The new control law is applied to a demanding active clearance control application, where the objectives are to tightly regulate blade tip clearances and also anticipate and avoid detrimental blade-shroud rub occurrences by optimally maintaining a predefined minimum clearance. Simulation results verify that the rate-based controller is capable of satisfying the objectives during realistic flight scenarios where both a conventional Jacobian-based model predictive control law and an unconstrained linear-quadratic optimal controller are incapable of doing so. The controller is evaluated using a variety of different actuators, illustrating the efficacy and versatility of the control approach. It is concluded that the new strategy has promise for this and other nonlinear aerospace applications that place high importance on the attainment of control objectives during transient regimes.
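
    A minimal sketch of the rate-based prediction idea as described: propagate the state derivative across the horizon and integrate it to recover states, rather than propagating absolute states with a fixed Jacobian. The matrices are stand-ins for a scheduled clearance-control plant, not the paper's model.

      import numpy as np

      def predict_rate_based(x0, u_seq, A, B, dt):
          x, xdot = x0.copy(), np.zeros_like(x0)
          u_prev = np.zeros(B.shape[1])
          traj = [x.copy()]
          for u in u_seq:
              xdot = A @ xdot + B @ (u - u_prev)   # rate dynamics driven by input moves
              x = x + dt * xdot                    # integrate the rate to recover the state
              u_prev = u
              traj.append(x.copy())
          return np.array(traj)

      A = np.array([[0.9, 0.1], [0.0, 0.8]])       # stand-in scheduled dynamics
      B = np.array([[0.0], [0.05]])
      u_seq = [np.array([1.0])] * 20               # a step in the clearance actuator
      print(predict_rate_based(np.array([1.2, 0.0]), u_seq, A, B, dt=0.05)[-1])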

  10. Probabilistic short-term forecasting of eruption rate at Kīlauea Volcano using a physics-based model

    NASA Astrophysics Data System (ADS)

    Anderson, K. R.

    2016-12-01

    Deterministic models of volcanic eruptions yield predictions of future activity conditioned on uncertainty in the current state of the system. Physics-based eruption models are well-suited for deterministic forecasting as they can relate magma physics with a wide range of observations. Yet, physics-based eruption forecasting is strongly limited by an inadequate understanding of volcanic systems, and the need for eruption models to be computationally tractable. At Kīlauea Volcano, Hawaii, episodic depressurization-pressurization cycles of the magma system generate correlated, quasi-exponential variations in ground deformation and surface height of the active summit lava lake. Deflations are associated with reductions in eruption rate, or even brief eruptive pauses, and thus partly control lava flow advance rates and associated hazard. Because of the relatively well-understood nature of Kīlauea's shallow magma plumbing system, and because more than 600 of these events have been recorded to date, they offer a unique opportunity to refine a physics-based effusive eruption forecasting approach and apply it to lava eruption rates over short (hours to days) time periods. A simple physical model of the volcano ascribes observed data to temporary reductions in magma supply to an elastic reservoir filled with compressible magma. This model can be used to predict the evolution of an ongoing event, but because the mechanism that triggers events is unknown, event durations are modeled stochastically from previous observations. A Bayesian approach incorporates diverse data sets and prior information to simultaneously estimate uncertain model parameters and future states of the system. Forecasts take the form of probability distributions for eruption rate or cumulative erupted volume at some future time. Results demonstrate the significant uncertainties that still remain even for short-term eruption forecasting at a well-monitored volcano - but also the value of a physics-based, mixed deterministic-probabilistic eruption forecasting approach in reducing and quantifying these uncertainties.

  11. Incorporating variability in simulations of seasonally forced phenology using integral projection models

    DOE PAGES

    Goodsman, Devin W.; Aukema, Brian H.; McDowell, Nate G.; ...

    2017-11-26

    Phenology models are becoming increasingly important tools to accurately predict how climate change will impact the life histories of organisms. We propose a class of integral projection phenology models derived from stochastic individual-based models of insect development and demography. Our derivation, which is based on the rate summation concept, produces integral projection models that capture the effect of phenotypic rate variability on insect phenology, but which are typically more computationally frugal than equivalent individual-based phenology models. We demonstrate our approach using a temperature-dependent model of the demography of the mountain pine beetle (Dendroctonus ponderosae Hopkins), an insect that kills mature pine trees. This work illustrates how a wide range of stochastic phenology models can be reformulated as integral projection models. Due to their computational efficiency, these integral projection models are suitable for deployment in large-scale simulations, such as studies of altered pest distributions under climate change.
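
    A minimal sketch of one integral projection step over a development index, with a Gaussian kernel whose mean advance is temperature-dependent and whose spread encodes phenotypic rate variability; the kernel form and parameters are illustrative, not the mountain pine beetle model's.

      import numpy as np

      x = np.linspace(0.0, 1.0, 201)        # development index: egg (0) to adult (1)
      dx = x[1] - x[0]
      n = np.exp(-((x - 0.1) ** 2) / (2 * 0.02 ** 2))   # initial cohort near egg stage

      def kernel(y, x, temp_C):
          # Gaussian development kernel; mean daily advance is temperature-dependent
          # and the spread stands in for phenotypic rate variability (rate summation).
          mu = 0.002 * max(temp_C - 5.0, 0.0)
          sd = 0.01 + 0.2 * mu
          return np.exp(-((y[:, None] - x[None, :] - mu) ** 2) / (2 * sd ** 2)) / (
              np.sqrt(2 * np.pi) * sd)

      K = kernel(x, x, temp_C=15.0)
      for _ in range(30):                   # 30 simulated days at 15 C
          n = K @ n * dx                    # one integral projection step
      print(f"mean development index after 30 days: {float((x * n).sum() / n.sum()):.2f}")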

  12. Incorporating variability in simulations of seasonally forced phenology using integral projection models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goodsman, Devin W.; Aukema, Brian H.; McDowell, Nate G.

    Phenology models are becoming increasingly important tools to accurately predict how climate change will impact the life histories of organisms. We propose a class of integral projection phenology models derived from stochastic individual-based models of insect development and demography. Our derivation, which is based on the rate summation concept, produces integral projection models that capture the effect of phenotypic rate variability on insect phenology, but which are typically more computationally frugal than equivalent individual-based phenology models. We demonstrate our approach using a temperature-dependent model of the demography of the mountain pine beetle (Dendroctonus ponderosae Hopkins), an insect that kills mature pine trees. This work illustrates how a wide range of stochastic phenology models can be reformulated as integral projection models. Due to their computational efficiency, these integral projection models are suitable for deployment in large-scale simulations, such as studies of altered pest distributions under climate change.

  13. ESTIMATION OF THE RATE OF VOC EMISSIONS FROM SOLVENT-BASED INDOOR COATING MATERIALS BASED ON PRODUCT FORMULATION

    EPA Science Inventory

    Two computational methods are proposed for estimation of the emission rate of volatile organic compounds (VOCs) from solvent-based indoor coating materials based on the knowledge of product formulation. The first method utilizes two previously developed mass transfer models with ...

  14. Small area estimation for estimating the number of infant mortality in West Java, Indonesia

    NASA Astrophysics Data System (ADS)

    Anggreyani, Arie; Indahwati, Kurnia, Anang

    2016-02-01

    The Demographic and Health Survey Indonesia (DHSI) is a nationally designed survey that provides information regarding birth rate, mortality rate, family planning and health. DHSI was conducted by BPS in cooperation with the National Population and Family Planning Institution (BKKBN), the Indonesia Ministry of Health (KEMENKES) and USAID. Based on the publication of DHSI 2012, the infant mortality rate for the five-year period before the survey was 32 per 1000 live births. In this paper, Small Area Estimation (SAE) is used to estimate the number of infant deaths in districts of West Java. SAE is a special model of Generalized Linear Mixed Models (GLMM). In this case, the incidence of infant mortality is modeled with a Poisson distribution, which carries an equidispersion assumption. The methods used to handle overdispersion are the negative binomial and quasi-likelihood models. Based on the results of the analysis, the quasi-likelihood model is the best model to overcome the overdispersion problem. The small area estimation used the basic area-level model. Mean square error (MSE) based on a resampling method is used to measure the accuracy of the small area estimates.

  15. A Nonlinear Dynamic Inversion Predictor-Based Model Reference Adaptive Controller for a Generic Transport Model

    NASA Technical Reports Server (NTRS)

    Campbell, Stefan F.; Kaneshige, John T.

    2010-01-01

    Presented here is a Predictor-Based Model Reference Adaptive Control (PMRAC) architecture for a generic transport aircraft. At its core, this architecture features a three-axis, non-linear, dynamic-inversion controller. Command inputs for this baseline controller are provided by pilot roll-rate, pitch-rate, and sideslip commands. This paper will first thoroughly present the baseline controller followed by a description of the PMRAC adaptive augmentation to this control system. Results are presented via a full-scale, nonlinear simulation of NASA s Generic Transport Model (GTM).

  16. Error Rates in Measuring Teacher and School Performance Based on Student Test Score Gains. NCEE 2010-4004

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley S.

    2010-01-01

    This paper addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using realistic performance measurement system schemes based on hypothesis testing, we develop error rate formulas based on OLS and Empirical Bayes estimators…

  17. Biomechanical models for radial distance determination by the rat vibrissal system.

    PubMed

    Birdwell, J Alexander; Solomon, Joseph H; Thajchayapong, Montakan; Taylor, Michael A; Cheely, Matthew; Towal, R Blythe; Conradt, Jorg; Hartmann, Mitra J Z

    2007-10-01

    Rats use active, rhythmic movements of their whiskers to acquire tactile information about three-dimensional object features. There are no receptors along the length of the whisker; therefore all tactile information must be mechanically transduced back to receptors at the whisker base. This raises the question: how might the rat determine the radial contact position of an object along the whisker? We developed two complementary biomechanical models that show that the rat could determine radial object distance by monitoring the rate of change of moment (or equivalently, the rate of change of curvature) at the whisker base. The first model is used to explore the effects of taper and inherent whisker curvature on whisker deformation and used to predict the shapes of real rat whiskers during deflections at different radial distances. Predicted shapes closely matched experimental measurements. The second model describes the relationship between radial object distance and the rate of change of moment at the base of a tapered, inherently curved whisker. Together, these models can account for recent recordings showing that some trigeminal ganglion (Vg) neurons encode closer radial distances with increased firing rates. The models also suggest that four and only four physical variables at the whisker base -- angular position, angular velocity, moment, and rate of change of moment -- are needed to describe the dynamic state of a whisker. We interpret these results in the context of our evolving hypothesis that neural responses in Vg can be represented using a state-encoding scheme that includes combinations of these four variables.

  18. A new leakage measurement method for damaged seal material

    NASA Astrophysics Data System (ADS)

    Wang, Shen; Yao, Xue Feng; Yang, Heng; Yuan, Li; Dong, Yi Feng

    2018-07-01

    In this paper, a new leakage measurement method based on the temperature field and temperature gradient field is proposed for detecting the leakage location and measuring the leakage rate in damaged seal material. First, a heat transfer leakage model is established, which can calculate the leakage rate based on the temperature gradient field near the damaged zone. Second, a finite element model of an infinite plate with a damaged zone is built to calculate the leakage rate, which agrees well with the heat transfer model's predictions. Finally, specimens from a tubular rubber seal with different damage shapes are used to conduct the leakage experiment, validating the correctness of this new measurement principle for the leakage rate and the leakage position. The results indicate the feasibility of the leakage measurement method for damaged seal material based on the temperature gradient field from infrared thermography.

  19. The dynamic compressive behavior and constitutive modeling of D1 railway wheel steel over a wide range of strain rates and temperatures

    NASA Astrophysics Data System (ADS)

    Jing, Lin; Su, Xingya; Zhao, Longmao

    The dynamic compressive behavior of D1 railway wheel steel at high strain rates was investigated using a split Hopkinson pressure bar (SHPB) apparatus. Three types of specimens, which were derived from different positions (i.e., the rim, web and hub) of a railway wheel, were tested over a wide range of strain rates from 10^-3 s^-1 to 2.4 × 10^3 s^-1 and temperatures from 213 K to 973 K. Influences of the strain rate and temperature on flow stress were discussed, and rate- and temperature-dependent constitutive relationships were assessed by the Cowper-Symonds model, the Johnson-Cook model and a physically-based model, respectively. The experimental results show that the compressive true stress versus true strain response of D1 wheel steel is strain rate-dependent, and the strain hardening rate during the plastic flow stage decreases with the elevation of strain rate. In addition, the D1 wheel steel displays obvious temperature dependence, and the third-type strain aging (3rd SA) occurs in the temperature region of 673-973 K at a strain rate of ∼1500 s^-1. Comparisons of experimental results with theoretical predictions indicate that the physically-based model has a better prediction capability for the 3rd SA characteristic of the tested D1 wheel steel.
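
    For reference, the Johnson-Cook form assessed in the record multiplies strain hardening, a logarithmic rate term, and thermal softening; the constants below are illustrative placeholders, not the calibrated D1 wheel steel parameters.

      import math

      def johnson_cook(eps, epsdot, T, A=400e6, B=600e6, n=0.3, C=0.02, m=1.0,
                       epsdot0=1e-3, T_ref=293.0, T_melt=1723.0):
          # sigma = (A + B*eps^n) * (1 + C*ln(epsdot/epsdot0)) * (1 - T*^m)
          T_star = max((T - T_ref) / (T_melt - T_ref), 0.0)
          return (A + B * eps ** n) * (1.0 + C * math.log(epsdot / epsdot0)) \
                 * (1.0 - T_star ** m)

      print(f"{johnson_cook(0.1, 1.5e3, 293.0) / 1e6:.0f} MPa at 0.1 strain, 1500/s, 293 K")

    Because the rate and temperature terms enter only multiplicatively, a form like this cannot reproduce a regime such as the third-type strain aging reported above, which is consistent with the physically-based model predicting that behavior better.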

  20. Studies for the Loss of Atomic and Molecular Species from Io

    NASA Technical Reports Server (NTRS)

    Smyth, William H.

    1998-01-01

    Updated neutral emission rates for electron impact excitation of atomic oxygen and sulfur based upon the Collisional Radiative Equilibrium (COREQ) model have been incorporated in the neutral cloud models. An empirical model for the Io plasma torus wake has also been added in the neutral cloud model to describe important enhancements in the neutral emission rates and lifetime rates in this spatial region. New insights into Io's atmosphere and its interaction with the plasma torus are discussed. These insights are based upon an initial comparison of simultaneous Io observations on October 14, 1997, for [OI] 6300 Angstrom emissions acquired by groundbased facilities and several ultraviolet emissions acquired by HST/STIS in the form of high-spatial-resolution images for atomic oxygen and sulfur.

  1. An empirical and model study on automobile market in Taiwan

    NASA Astrophysics Data System (ADS)

    Tang, Ji-Ying; Qiu, Rong; Zhou, Yueping; He, Da-Ren

    2006-03-01

    We have carried out an empirical investigation of the automobile market in Taiwan, including the development of the companies' possession rates in the market from 1979 to 2003, the development of the largest possession rate, and so on. Based on the empirical study, a dynamic model is suggested for describing the competition between the companies. In the model, each company is given a long-term competition factor (such as technology, capital and scale) and a short-term competition factor (such as management, service and advertisement). The companies then play games in order to obtain more possession rate in the market under certain rules. Numerical simulations based on the model display a developing competition process that agrees qualitatively and quantitatively with our empirical investigation results.

  2. Computer modeling and design analysis of a bit rate discrimination circuit based dual-rate burst mode receiver

    NASA Astrophysics Data System (ADS)

    Kota, Sriharsha; Patel, Jigesh; Ghillino, Enrico; Richards, Dwight

    2011-01-01

    In this paper, we demonstrate a computer model for simulating a dual-rate burst mode receiver that can readily distinguish bit rates of 1.25Gbit/s and 10.3Gbit/s and demodulate data bursts with large power variations of above 5dB. To our knowledge, this is the first such model to demodulate data bursts of different bit rates without using any external control signal such as a reset signal or a bit rate select signal. The model is based on a burst-mode bit rate discrimination circuit (B-BDC) and makes use of a unique preamble sequence attached to each burst to separate out the data bursts with different bit rates. Here, the model is implemented using a combination of the optical system simulation suite OptSimTM and the electrical simulation engine SPICE. The reaction time of the burst mode receiver model is about 7ns, which corresponds to less than 8 preamble bits at the bit rate of 1.25Gbps. We believe that an accurate and robust simulation model for high-speed burst-mode transmission in GE-PON systems is indispensable: it tremendously speeds up ongoing research in the area, saves much of the time and effort involved in laboratory experiments, and provides flexibility in optimizing various system parameters for better performance of the receiver as a whole. Furthermore, we also study the effects of burst specifications, such as the length of the preamble sequence, and of other receiver design parameters on the reaction time of the receiver.

  3. Prediction Accuracy of Error Rates for MPTB Space Experiment

    NASA Technical Reports Server (NTRS)

    Buchner, S. P.; Campbell, A. B.; Davis, D.; McMorrow, D.; Petersen, E. L.; Stassinopoulos, E. G.; Ritter, J. C.

    1998-01-01

    This paper addresses the accuracy of radiation-induced upset-rate predictions in space using the results of ground-based measurements together with standard environmental and device models. The study is focused on two part types - 16 Mb NEC DRAM's (UPD4216) and 1 Kb SRAM's (AMD93L422) - both of which are currently in space on board the Microelectronics and Photonics Test Bed (MPTB). To date, ground-based measurements of proton-induced single-event upset (SEU) cross sections as a function of energy have been obtained and combined with models of the proton environment to predict proton-induced error rates in space. The role played by uncertainties in the environmental models will be determined by comparing the modeled radiation environment with the actual environment measured aboard MPTB. Heavy-ion induced upsets have also been obtained from MPTB and will be compared with the "predicted" error rate following ground testing that will be done in the near future. These results should help identify sources of uncertainty in predictions of SEU rates in space.
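
    The standard prediction pipeline alluded to above folds the ground-measured cross section into the modeled differential proton flux. A minimal sketch, with an entirely hypothetical energy grid, cross-section curve, and environment:

    ```python
    import numpy as np

    # hypothetical proton energy grid (MeV), ground-test SEU cross section
    # (cm^2 per bit), and orbit-averaged differential flux (p / (cm^2 s MeV))
    E     = np.array([ 20.,   50.,   100.,   200.,    500.])
    sigma = np.array([1e-14, 5e-14, 8e-14, 1.0e-13, 1.1e-13])
    phi   = np.array([1e2,   6e1,   3e1,   1e1,     2e0])

    rate_per_bit = np.trapz(sigma * phi, E)      # upsets per bit per second
    print(rate_per_bit * 86400 * 16 * 2**20)     # upsets/day for a 16 Mb part
    ```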

  4. Tensile Strength of Carbon Nanotubes Under Realistic Temperature and Strain Rate

    NASA Technical Reports Server (NTRS)

    Wei, Chen-Yu; Cho, Kyeong-Jae; Srivastava, Deepak; Biegel, Bryan (Technical Monitor)

    2002-01-01

    Strain rate and temperature dependence of the tensile strength of single-wall carbon nanotubes has been investigated with molecular dynamics simulations. The tensile failure or yield strain is found to be strongly dependent on the temperature and strain rate. A transition state theory based predictive model is developed for the tensile failure of nanotubes. Based on the parameters fitted from high-strain rate and temperature dependent molecular dynamics simulations, the model predicts that a defect free micrometer long single-wall nanotube at 300 K, stretched with a strain rate of 1%/hour, fails at about 9 plus or minus 1% tensile strain. This is in good agreement with recent experimental findings.

  5. Interferon-based anti-viral therapy for hepatitis C virus infection after renal transplantation: an updated meta-analysis.

    PubMed

    Wei, Fang; Liu, Junying; Liu, Fen; Hu, Huaidong; Ren, Hong; Hu, Peng

    2014-01-01

    Hepatitis C virus (HCV) infection is highly prevalent in renal transplant (RT) recipients. Currently, interferon-based (IFN-based) antiviral therapies are the standard approach to control HCV infection. In a post-transplantation setting, however, IFN-based therapies appear to have limited efficacy and their use remains controversial. The present study aimed to evaluate the efficacy and safety of IFN-based therapies for HCV infection post RT. We searched Pubmed, Embase, Web of Knowledge, and The Cochrane Library (1997-2013) for clinical trials in which transplant patients were given interferon (IFN), pegylated interferon (PEG), interferon plus ribavirin (IFN-RIB), or pegylated interferon plus ribavirin (PEG-RIB). The Sustained Virological Response (SVR) and/or drop-out rates were the primary outcomes. Summary estimates were calculated using the random-effects model of DerSimonian and Laird, with heterogeneity and sensitivity analyses. We identified 12 clinical trials (140 patients in total). The summary estimates for the SVR rate, drop-out rate and graft rejection rate were 26.6% (95% CI, 15.0-38.1%), 21.1% (95% CI, 10.9-31.2%) and 4% (95% CI, 0.8-7.1%), respectively. The overall SVR rate in PEG-based and standard IFN-based therapy was 40.6% (24/59) and 20.9% (17/81), respectively. The most frequent side-effect requiring discontinuation of treatment was graft dysfunction (14 cases, 45.1%). Meta-regression analysis showed that the included covariates contributed to the heterogeneity in the SVR logit rate, but not in the drop-out logit rate. Sensitivity analyses using the random-effects model yielded results very similar to those of the fixed-effects model. IFN-based therapy for HCV infection post RT has poor efficacy and limited safety. PEG-based therapy is a more effective approach for treating HCV infection post-RT than standard IFN-based therapy. Future research is required to develop novel strategies that improve therapeutic efficacy and tolerability, and reduce liver-related morbidity and mortality in this important patient population.
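
    The DerSimonian-Laird random-effects pooling used here has a compact closed form; a sketch on logit-transformed rates, with hypothetical study counts:

    ```python
    import numpy as np

    def dersimonian_laird(theta, v):
        """Pool effect sizes theta with within-study variances v."""
        w = 1.0 / v
        theta_f = np.sum(w * theta) / np.sum(w)            # fixed-effects mean
        Q = np.sum(w * (theta - theta_f)**2)               # Cochran's Q
        tau2 = max(0.0, (Q - (len(theta) - 1))
                   / (np.sum(w) - np.sum(w**2) / np.sum(w)))  # between-study var.
        w_s = 1.0 / (v + tau2)
        return np.sum(w_s * theta) / np.sum(w_s), np.sqrt(1.0 / np.sum(w_s))

    # hypothetical per-study SVR counts: x responders out of n patients
    x = np.array([4, 6, 2, 5]); n = np.array([12, 15, 10, 14])
    p = x / n
    theta = np.log(p / (1 - p))          # logit rates
    v = 1.0 / x + 1.0 / (n - x)          # approximate logit variances
    est, se = dersimonian_laird(theta, v)
    print(1.0 / (1.0 + np.exp(-est)))    # back-transformed pooled SVR rate
    ```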

  6. Valence-Dependent Belief Updating: Computational Validation

    PubMed Central

    Kuzmanovic, Bojana; Rigoux, Lionel

    2017-01-01

    People tend to update beliefs about their future outcomes in a valence-dependent way: they are likely to incorporate good news and to neglect bad news. However, belief formation is a complex process which depends not only on motivational factors such as the desire for favorable conclusions, but also on multiple cognitive variables such as prior beliefs, knowledge about personal vulnerabilities and resources, and the size of the probabilities and estimation errors. Thus, we applied computational modeling in order to test for valence-induced biases in updating while formally controlling for relevant cognitive factors. We compared biased and unbiased Bayesian models of belief updating, and specified alternative models based on reinforcement learning. The experiment consisted of 80 trials with 80 different adverse future life events. In each trial, participants estimated the base rate of one of these events and estimated their own risk of experiencing the event before and after being confronted with the actual base rate. Belief updates corresponded to the difference between the two self-risk estimates. Valence-dependent updating was assessed by comparing trials with good news (better-than-expected base rates) with trials with bad news (worse-than-expected base rates). After receiving bad relative to good news, participants' updates were smaller and deviated more strongly from rational Bayesian predictions, indicating a valence-induced bias. Model comparison revealed that the biased (i.e., optimistic) Bayesian model of belief updating better accounted for data than the unbiased (i.e., rational) Bayesian model, confirming that the valence of the new information influenced the amount of updating. Moreover, alternative computational modeling based on reinforcement learning demonstrated higher learning rates for good than for bad news, as well as a moderating role of personal knowledge. Finally, in this specific experimental context, the approach based on reinforcement learning was superior to the Bayesian approach. The computational validation of valence-dependent belief updating represents a novel support for a genuine optimism bias in human belief formation. Moreover, the precise control of relevant cognitive variables justifies the conclusion that the motivation to adopt the most favorable self-referential conclusions biases human judgments. PMID:28706499

  7. Valence-Dependent Belief Updating: Computational Validation.

    PubMed

    Kuzmanovic, Bojana; Rigoux, Lionel

    2017-01-01

    People tend to update beliefs about their future outcomes in a valence-dependent way: they are likely to incorporate good news and to neglect bad news. However, belief formation is a complex process which depends not only on motivational factors such as the desire for favorable conclusions, but also on multiple cognitive variables such as prior beliefs, knowledge about personal vulnerabilities and resources, and the size of the probabilities and estimation errors. Thus, we applied computational modeling in order to test for valence-induced biases in updating while formally controlling for relevant cognitive factors. We compared biased and unbiased Bayesian models of belief updating, and specified alternative models based on reinforcement learning. The experiment consisted of 80 trials with 80 different adverse future life events. In each trial, participants estimated the base rate of one of these events and estimated their own risk of experiencing the event before and after being confronted with the actual base rate. Belief updates corresponded to the difference between the two self-risk estimates. Valence-dependent updating was assessed by comparing trials with good news (better-than-expected base rates) with trials with bad news (worse-than-expected base rates). After receiving bad relative to good news, participants' updates were smaller and deviated more strongly from rational Bayesian predictions, indicating a valence-induced bias. Model comparison revealed that the biased (i.e., optimistic) Bayesian model of belief updating better accounted for data than the unbiased (i.e., rational) Bayesian model, confirming that the valence of the new information influenced the amount of updating. Moreover, alternative computational modeling based on reinforcement learning demonstrated higher learning rates for good than for bad news, as well as a moderating role of personal knowledge. Finally, in this specific experimental context, the approach based on reinforcement learning was superior to the Bayesian approach. The computational validation of valence-dependent belief updating represents a novel support for a genuine optimism bias in human belief formation. Moreover, the precise control of relevant cognitive variables justifies the conclusion that the motivation to adopt the most favorable self-referential conclusions biases human judgments.

  8. Evidence-Based Adequacy Model for School Funding: Success Rates in Illinois Schools that Meet Targets

    ERIC Educational Resources Information Center

    Murphy, Gregory J.

    2012-01-01

    This quantitative study explores the 2010 recommendation of the Educational Funding Advisory Board to consider the Evidence-Based Adequacy model of school funding in Illinois. This school funding model identifies and costs research based practices necessary in a prototypical school and sets funding levels based upon those practices. This study…

  9. Liquid phase methanol LaPorte process development unit: Modification, operation, and support studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The objectives of the present work can be divided into three parts. The first objective was to develop a best-fit model for the older methanol synthesis catalyst (BASF S3-85) data base. At the time that this work commenced (June 1989), the BASF S3-85 data base contained many rate measurements accumulated over a few years. The newer catalyst (BASF S3-86) data base, at that time, contained only a few observations and did not include a broad range of conditions. Thus, a second objective of this work was to expand the BASF S3-86 data base to include more rate observations over a broader range of conditions. Finally, after expansion of the BASF S3-86 data base, the third objective was to develop a rate expression to describe this data base. This would include the application of rate expressions developed for the BASF S3-85 catalyst, as well as new models. (VC)

  10. MODELING CHLORINE RESIDUALS IN DRINKING-WATER DISTRIBUTION SYSTEMS

    EPA Science Inventory

    A mass transfer-based model is developed for predicting chlorine decay in drinking water distribution networks. The model considers first order reactions of chlorine to occur both in the bulk flow and at the pipe wall. The overall rate of the wall reaction is a function of the rate...
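
    One common formulation of such a model (used, for example, in network water-quality solvers) combines first-order bulk decay with a wall reaction limited by mass transfer to the pipe wall; 4/d is the pipe's wall area per unit volume. A sketch with hypothetical coefficients:

    ```python
    import numpy as np

    def chlorine_residual(C0, t, kb, kw, kf, d):
        """Chlorine residual after travel time t (days) in a pipe parcel.

        kb: bulk decay rate (1/day); kw: wall rate constant (m/day);
        kf: mass-transfer coefficient (m/day); d: pipe diameter (m).
        """
        k_wall = (4.0 / d) * kw * kf / (kw + kf)   # per-volume wall rate, 1/day
        return C0 * np.exp(-(kb + k_wall) * t)

    print(chlorine_residual(1.0, t=0.5, kb=0.55, kw=1.5, kf=1.0, d=0.3))
    ```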

  11. COMPARISON OF IN VIVO DERIVED AND SCALED IN VITRO METABOLIC RATE CONSTANTS FOR SOME VOLATILE ORGANIC COMPOUNDS (VOCS)

    EPA Science Inventory

    The reliability of physiologically based pharmacokinetic (PBPK) models is directly related to the accuracy of the metabolic rate parameters used as model inputs. When metabolic rate parameters derived from in vivo experiments are unavailable, they can be estimated from in vitro d...

  12. Model-based estimation of individual fitness

    USGS Publications Warehouse

    Link, W.A.; Cooch, E.G.; Cam, E.

    2002-01-01

    Fitness is the currency of natural selection, a measure of the propagation rate of genotypes into future generations. Its various definitions have the common feature that they are functions of survival and fertility rates. At the individual level, the operative level for natural selection, these rates must be understood as latent features, genetically determined propensities existing at birth. This conception of rates requires that individual fitness be defined and estimated by consideration of the individual in a modelled relation to a group of similar individuals; the only alternative is to consider a sample of size one, unless a clone of identical individuals is available. We present hierarchical models describing individual heterogeneity in survival and fertility rates and allowing for associations between these rates at the individual level. We apply these models to an analysis of life histories of Kittiwakes (Rissa tridactyla) observed at several colonies on the Brittany coast of France. We compare Bayesian estimation of the population distribution of individual fitness with estimation based on treating individual life histories in isolation, as samples of size one (e.g. McGraw and Caswell, 1996).

  13. Model-based estimation of individual fitness

    USGS Publications Warehouse

    Link, W.A.; Cooch, E.G.; Cam, E.

    2002-01-01

    Fitness is the currency of natural selection, a measure of the propagation rate of genotypes into future generations. Its various definitions have the common feature that they are functions of survival and fertility rates. At the individual level, the operative level for natural selection, these rates must be understood as latent features, genetically determined propensities existing at birth. This conception of rates requires that individual fitness be defined and estimated by consideration of the individual in a modelled relation to a group of similar individuals; the only alternative is to consider a sample of size one, unless a clone of identical individuals is available. We present hierarchical models describing individual heterogeneity in survival and fertility rates and allowing for associations between these rates at the individual level. We apply these models to an analysis of life histories of Kittiwakes (Rissa tridactyla ) observed at several colonies on the Brittany coast of France. We compare Bayesian estimation of the population distribution of individual fitness with estimation based on treating individual life histories in isolation, as samples of size one (e.g. McGraw & Caswell, 1996).

  14. Exponential growth kinetics for Polyporus versicolor and Pleurotus ostreatus in submerged culture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carroad, P.A.; Wilke, C.R.

    1977-04-01

    Simple mathematical models for a batch culture of pellet-forming fungi in submerged culture were tested on growth data for Polyporus versicolor (ATCC 12679) and Pleurotus ostreatus (ATCC 9415). A kinetic model based on a growth rate proportional to the two-thirds power of the cell mass was shown to be satisfactory. A model based on a growth rate directly proportional to the cell mass fitted the data equally well, however, and may be preferable because of mathematical simplicity.
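
    Both candidate models have closed-form solutions, which makes the comparison in the abstract easy to reproduce: surface-proportional growth gives cubic growth in time, mass-proportional growth gives an exponential. Parameters below are hypothetical.

    ```python
    import numpy as np

    def cube_root_growth(t, M0, k):
        """dM/dt = k*M**(2/3)  ->  M(t) = (M0**(1/3) + k*t/3)**3
        (growth proportional to pellet surface area)."""
        return (M0**(1.0 / 3.0) + k * t / 3.0)**3

    def exponential_growth(t, M0, mu):
        """dM/dt = mu*M  ->  M(t) = M0*exp(mu*t)."""
        return M0 * np.exp(mu * t)

    t = np.linspace(0.0, 48.0, 9)                  # h, hypothetical sampling times
    print(cube_root_growth(t, M0=0.1, k=0.05))     # g/L
    print(exponential_growth(t, M0=0.1, mu=0.06))
    ```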

  15. Estimation of unemployment rates using small area estimation model by combining time series and cross-sectional data

    NASA Astrophysics Data System (ADS)

    Muchlisoh, Siti; Kurnia, Anang; Notodiputro, Khairil Anwar; Mangku, I. Wayan

    2016-02-01

    Labor force surveys conducted over time by the rotating panel design have been carried out in many countries, including Indonesia. The labor force survey in Indonesia is regularly conducted by Statistics Indonesia (Badan Pusat Statistik-BPS) and is known as the National Labor Force Survey (Sakernas). The main purpose of Sakernas is to obtain information about unemployment rates and their changes over time. Sakernas is a quarterly survey, designed only for estimating parameters at the provincial level. The quarterly unemployment rate published by BPS (official statistics) is calculated using only cross-sectional methods, despite the fact that the data are collected under a rotating panel design. The purpose of this study is to estimate quarterly unemployment rates at the district level with a small area estimation (SAE) model that combines time series and cross-sectional data. The study focuses on the application of, and comparison between, the Rao-Yu model and the dynamic model in the context of estimating the unemployment rate based on a rotating panel survey. The goodness of fit of the two models was very similar. Both models produced almost identical estimates that were better than the direct estimates, but the dynamic model was more capable than the Rao-Yu model of capturing heterogeneity across areas, although this was reduced over time.

  16. Shilling Attacks Detection in Recommender Systems Based on Target Item Analysis

    PubMed Central

    Zhou, Wei; Wen, Junhao; Koh, Yun Sing; Xiong, Qingyu; Gao, Min; Dobbie, Gillian; Alam, Shafiq

    2015-01-01

    Recommender systems are highly vulnerable to shilling attacks, both by individuals and groups. Attackers who introduce biased ratings in order to affect recommendations have been shown to negatively affect collaborative filtering (CF) algorithms. Previous research focuses only on the differences between genuine profiles and attack profiles, ignoring the group characteristics in attack profiles. In this paper, we study the use of statistical metrics to detect rating patterns of attackers and group characteristics in attack profiles. A further problem is that most existing detection methods are model-specific. Two metrics, Rating Deviation from Mean Agreement (RDMA) and Degree of Similarity with Top Neighbors (DegSim), are used for analyzing rating patterns between malicious profiles and genuine profiles in attack models. Building upon this, we also propose and evaluate a detection structure called RD-TIA for detecting shilling attacks in recommender systems using a statistical approach. In order to detect more complicated attack models, we propose a novel metric called DegSim’ based on DegSim. The experimental results show that our detection model based on target item analysis is an effective approach for detecting shilling attacks. PMID:26222882
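
    The RDMA metric has a standard definition — the average, over a user's rated items, of the rating's deviation from the item mean weighted by the inverse of the item's rating count — which is enough for a small sketch (the flagging threshold is hypothetical; DegSim additionally needs user-user similarities):

    ```python
    import numpy as np

    def rdma(R):
        """Rating Deviation from Mean Agreement, one score per user profile.

        R: users x items rating matrix with np.nan for unrated items.
        RDMA_u = mean over u's rated items of |r_ui - mean_i| / n_i.
        """
        item_mean = np.nanmean(R, axis=0)
        item_count = np.sum(~np.isnan(R), axis=0).astype(float)
        dev = np.abs(R - item_mean) / item_count    # nan where unrated
        return np.nanmean(dev, axis=1)

    R = np.array([[5, np.nan, 4],
                  [1, 5, np.nan],
                  [5, 5, 5]], dtype=float)
    s = rdma(R)
    print(s, s > s.mean() + 2.0 * s.std())          # crude flagging rule
    ```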

  17. Monitoring, modeling, and management: why base avian management on vital rates and how should it be done?

    Treesearch

    David F. DeSante; M. Philip Nott; Danielle R. Kaschube

    2005-01-01

    In this paper we argue that effective management of landbirds should be based on assessing and monitoring their vital rates (primary demographic parameters) as well as population trends. This is because environmental stressors and management actions affect vital rates directly and usually without time lags, and because monitoring vital rates provides a) information on...

  18. The Dissolution Behavior of Borosilicate Glasses in Far-From Equilibrium Conditions

    DOE PAGES

    Neeway, James J.; Rieke, Peter C.; Parruzot, Benjamin P.; ...

    2018-02-10

    An area of agreement in the waste glass corrosion community is that, at far-from-equilibrium conditions, the dissolution of borosilicate glasses used to immobilize nuclear waste is known to be a function of both temperature and pH. The aim of this work is to study the effects of temperature and pH on the dissolution rate of three model nuclear waste glasses (SON68, ISG, AFCI). The dissolution rate data are then used to parameterize a kinetic rate model based on Transition State Theory that has been developed to model glass corrosion behavior in dilute conditions. To do this, experiments were conducted at temperatures of 23, 40, 70, and 90 °C and pH(22 °C) values of 9, 10, 11, and 12 with the single-pass flow-through (SPFT) test method. Both the absolute dissolution rates and the rate model parameters are compared with previous results. Rate model parameters for the three glasses studied here are nearly equivalent within error and in relative agreement with previous studies though quantifiable differences exist. The glass dissolution rates were analyzed with a linear multivariate regression (LMR) and a nonlinear multivariate regression performed with the use of the Glass Corrosion Modeling Tool (GCMT), with which a robust uncertainty analysis is performed. This robust analysis highlights the high degree of correlation of various parameters in the kinetic rate model. As more data are obtained on borosilicate glasses with varying compositions, a mathematical description of the effect of glass composition on the rate parameter values should be possible. This would allow for the possibility of calculating the forward dissolution rate of glass based solely on composition. In addition, the method of determination of parameter uncertainty and correlation provides a framework for other rate models that describe the dissolution rates of other amorphous and crystalline materials in a wide range of chemical conditions. As a result, the higher level of uncertainty analysis would provide a basis for comparison of different rate models and allow for a better means of quantifiably comparing the various models.
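
    In its dilute-condition form, the TST-type rate law being parameterized reduces to an Arrhenius term with a power-law pH dependence, since the chemical-affinity term is ~1 far from equilibrium. A sketch with parameters that are hypothetical but of a plausible order for borosilicate glasses; note that taking log10 of the rate makes the expression linear in pH and 1/T, which is what enables the linear multivariate regression mentioned above.

    ```python
    import numpy as np

    R_GAS = 8.314  # J/(mol K)

    def forward_rate(T_K, pH, log10_k0, eta, Ea):
        """Forward dissolution rate, g m^-2 d^-1, dilute limit:
        r = k0 * 10**(eta*pH) * exp(-Ea / (R*T));
        the affinity term (1 - Q/K) ~ 1 far from equilibrium is omitted."""
        return 10.0**log10_k0 * 10.0**(eta * pH) * np.exp(-Ea / (R_GAS * T_K))

    for T in (296.0, 313.0, 343.0, 363.0):          # 23, 40, 70, 90 degC
        print(T, forward_rate(T, pH=10.0, log10_k0=7.5, eta=0.4, Ea=80e3))
    ```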

  19. The dissolution behavior of borosilicate glasses in far-from equilibrium conditions

    NASA Astrophysics Data System (ADS)

    Neeway, James J.; Rieke, Peter C.; Parruzot, Benjamin P.; Ryan, Joseph V.; Asmussen, R. Matthew

    2018-04-01

    An area of agreement in the waste glass corrosion community is that, at far-from-equilibrium conditions, the dissolution of borosilicate glasses used to immobilize nuclear waste is known to be a function of both temperature and pH. The aim of this work is to study the effects of temperature and pH on the dissolution rate of three model nuclear waste glasses (SON68, ISG, AFCI). The dissolution rate data are then used to parameterize a kinetic rate model based on Transition State Theory that has been developed to model glass corrosion behavior in dilute conditions. To do this, experiments were conducted at temperatures of 23, 40, 70, and 90 °C and pH (22 °C) values of 9, 10, 11, and 12 with the single-pass flow-through (SPFT) test method. Both the absolute dissolution rates and the rate model parameters are compared with previous results. Rate model parameters for the three glasses studied here are nearly equivalent within error and in relative agreement with previous studies though quantifiable differences exist. The glass dissolution rates were analyzed with a linear multivariate regression (LMR) and a nonlinear multivariate regression performed with the use of the Glass Corrosion Modeling Tool (GCMT), with which a robust uncertainty analysis is performed. This robust analysis highlights the high degree of correlation of various parameters in the kinetic rate model. As more data are obtained on borosilicate glasses with varying compositions, a mathematical description of the effect of glass composition on the rate parameter values should be possible. This would allow for the possibility of calculating the forward dissolution rate of glass based solely on composition. In addition, the method of determination of parameter uncertainty and correlation provides a framework for other rate models that describe the dissolution rates of other amorphous and crystalline materials in a wide range of chemical conditions. The higher level of uncertainty analysis would provide a basis for comparison of different rate models and allow for a better means of quantifiably comparing the various models.

  20. The Dissolution Behavior of Borosilicate Glasses in Far-From Equilibrium Conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neeway, James J.; Rieke, Peter C.; Parruzot, Benjamin P.

    An area of agreement in the waste glass corrosion community is that, at far-from-equilibrium conditions, the dissolution of borosilicate glasses used to immobilize nuclear waste is known to be a function of both temperature and pH. The aim of this work is to study the effects of temperature and pH on the dissolution rate of three model nuclear waste glasses (SON68, ISG, AFCI). The dissolution rate data are then used to parameterize a kinetic rate model based on Transition State Theory that has been developed to model glass corrosion behavior in dilute conditions. To do this, experiments were conducted at temperatures of 23, 40, 70, and 90 °C and pH(22 °C) values of 9, 10, 11, and 12 with the single-pass flow-through (SPFT) test method. Both the absolute dissolution rates and the rate model parameters are compared with previous results. Rate model parameters for the three glasses studied here are nearly equivalent within error and in relative agreement with previous studies though quantifiable differences exist. The glass dissolution rates were analyzed with a linear multivariate regression (LMR) and a nonlinear multivariate regression performed with the use of the Glass Corrosion Modeling Tool (GCMT), with which a robust uncertainty analysis is performed. This robust analysis highlights the high degree of correlation of various parameters in the kinetic rate model. As more data are obtained on borosilicate glasses with varying compositions, a mathematical description of the effect of glass composition on the rate parameter values should be possible. This would allow for the possibility of calculating the forward dissolution rate of glass based solely on composition. In addition, the method of determination of parameter uncertainty and correlation provides a framework for other rate models that describe the dissolution rates of other amorphous and crystalline materials in a wide range of chemical conditions. As a result, the higher level of uncertainty analysis would provide a basis for comparison of different rate models and allow for a better means of quantifiably comparing the various models.

  1. Prediction of the dollar to the ruble rate. A system-theoretic approach

    NASA Astrophysics Data System (ADS)

    Borodachev, Sergey M.

    2017-07-01

    We propose a simple state-space model of dollar rate formation based on changes in oil prices and on certain mechanisms of money transfer between the monetary and stock markets. A comparison of predictions from an input-output model and from the state-space model is made. We conclude that, with proper use of the statistical data (via a Kalman filter), the second approach provides more adequate predictions of the dollar rate.
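
    The record does not spell out the model's dynamics, so here is a minimal scalar sketch of the approach: a latent "fair" rate driven by oil-price changes, filtered from noisy observed rates with a Kalman filter. Coefficients and noise variances are hypothetical.

    ```python
    import numpy as np

    def kalman_filter(y, u, a, b, q, r, x0=0.0, P0=1.0):
        """Scalar Kalman filter for x_t = a*x_{t-1} + b*u_t + w_t (var q),
        y_t = x_t + v_t (var r); returns one-step-ahead rate forecasts."""
        x, P, preds = x0, P0, []
        for y_t, u_t in zip(y, u):
            x, P = a * x + b * u_t, a * a * P + q     # predict
            preds.append(x)                           # forecast before seeing y_t
            K = P / (P + r)                           # Kalman gain
            x, P = x + K * (y_t - x), (1.0 - K) * P   # update with observed rate
        return np.array(preds)

    # hypothetical observed RUB/USD rates y and oil-price changes u
    y = np.array([65.0, 66.2, 64.8, 63.5, 62.9])
    u = np.array([ 0.0, -1.5,  2.0,  3.1,  0.7])
    print(kalman_filter(y, u, a=1.0, b=-0.4, q=0.25, r=1.0, x0=65.0))
    ```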

  2. Modeling analysis of pulsed magnetization process of magnetic core based on inverse Jiles-Atherton model

    NASA Astrophysics Data System (ADS)

    Liu, Yi; Zhang, He; Liu, Siwei; Lin, Fuchang

    2018-05-01

    The J-A (Jiles-Atherton) model is widely used to describe the magnetization characteristics of magnetic cores in a low-frequency alternating field. However, this model is deficient in the quantitative analysis of the eddy current loss and residual loss in a high-frequency magnetic field. Based on the decomposition of magnetization intensity, an inverse J-A model is established which uses magnetic flux density B as an input variable. Static and dynamic core losses under high frequency excitation are separated based on the inverse J-A model. Optimized parameters of the inverse J-A model are obtained based on particle swarm optimization. The platform for the pulsed magnetization characteristic test is designed and constructed. The hysteresis curves of ferrite and Fe-based nanocrystalline cores at high magnetization rates are measured. The simulated and measured hysteresis curves are presented and compared. It is found that the inverse J-A model can be used to describe the magnetization characteristics at high magnetization rates and to separate the static loss and dynamic loss accurately.
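
    For orientation, the classical forward J-A magnetization (field H as input) can be sketched compactly; the inverse model described in the record instead takes B as input and inverts the same ingredients. The reversible component is omitted here (c = 0) for brevity, the parameters are hypothetical, and production implementations add numerical safeguards around the denominator.

    ```python
    import numpy as np

    def ja_forward(H, Ms=1.6e6, a=1000.0, alpha=1e-4, k=400.0):
        """Jiles-Atherton M(H) loop, irreversible magnetization only (c = 0).

        He = H + alpha*M,  Man = Ms*(coth(He/a) - a/He),
        dM/dH = (Man - M) / (k*delta - alpha*(Man - M)),  delta = sign(dH).
        """
        M = np.zeros_like(H)
        for i in range(1, len(H)):
            dH = H[i] - H[i - 1]
            delta = 1.0 if dH >= 0.0 else -1.0
            He = H[i - 1] + alpha * M[i - 1]
            x = He / a
            Man = Ms * (x / 3.0 if abs(x) < 1e-4 else 1.0 / np.tanh(x) - 1.0 / x)
            M[i] = M[i - 1] + dH * (Man - M[i - 1]) / (k * delta - alpha * (Man - M[i - 1]))
        return M

    H = 5000.0 * np.sin(np.linspace(0.0, 4.0 * np.pi, 2000))   # two field cycles
    M = ja_forward(H)   # plot M against H to see the hysteresis loop
    ```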

  3. Impact of the terrestrial-aquatic transition on disparity and rates of evolution in the carnivoran skull.

    PubMed

    Jones, Katrina E; Smaers, Jeroen B; Goswami, Anjali

    2015-02-04

    Which factors influence the distribution patterns of morphological diversity among clades? The adaptive radiation model predicts that a clade entering new ecological niche will experience high rates of evolution early in its history, followed by a gradual slowing. Here we measure disparity and rates of evolution in Carnivora, specifically focusing on the terrestrial-aquatic transition in Pinnipedia. We analyze fissiped (mostly terrestrial, arboreal, and semi-arboreal, but also including the semi-aquatic otter) and pinniped (secondarily aquatic) carnivorans as a case study of an extreme ecological transition. We used 3D geometric morphometrics to quantify cranial shape in 151 carnivoran specimens (64 fissiped, 87 pinniped) and five exceptionally-preserved fossil pinnipeds, including the stem-pinniped Enaliarctos emlongi. Range-based and variance-based disparity measures were compared between pinnipeds and fissipeds. To distinguish between evolutionary modes, a Brownian motion model was compared to selective regime shifts associated with the terrestrial-aquatic transition and at the base of Pinnipedia. Further, evolutionary patterns were estimated on individual branches using both Ornstein-Uhlenbeck and Independent Evolution models, to examine the origin of pinniped diversity. Pinnipeds exhibit greater cranial disparity than fissipeds, even though they are less taxonomically diverse and, as a clade nested within fissipeds, phylogenetically younger. Despite this, there is no increase in the rate of morphological evolution at the base of Pinnipedia, as would be predicted by an adaptive radiation model, and a Brownian motion model of evolution is supported. Instead basal pinnipeds populated new areas of morphospace via low to moderate rates of evolution in new directions, followed by later bursts within the crown-group, potentially associated with ecological diversification within the marine realm. The transition to an aquatic habitat in carnivorans resulted in a shift in cranial morphology without an increase in rate in the stem lineage, contra to the adaptive radiation model. Instead these data suggest a release from evolutionary constraint model, followed by aquatic diversifications within crown families.

  4. Forecasting Lightning Threat using Cloud-resolving Model Simulations

    NASA Technical Reports Server (NTRS)

    McCaul, E. W., Jr.; Goodman, S. J.; LaCasse, K. M.; Cecil, D. J.

    2009-01-01

    As numerical forecasts capable of resolving individual convective clouds become more common, it is of interest to see if quantitative forecasts of lightning flash rate density are possible, based on fields computed by the numerical model. Previous observational research has shown robust relationships between observed lightning flash rates and inferred updraft and large precipitation ice fields in the mixed phase regions of storms, and that these relationships might allow simulated fields to serve as proxies for lightning flash rate density. It is shown in this paper that two simple proxy fields do indeed provide reasonable and cost-effective bases for creating time-evolving maps of predicted lightning flash rate density, judging from a series of diverse simulation case study events in North Alabama for which Lightning Mapping Array data provide ground truth. One method is based on the product of upward velocity and the mixing ratio of precipitating ice hydrometeors, modeled as graupel only, in the mixed phase region of storms at the -15 °C level, while the second method is based on the vertically integrated amounts of ice hydrometeors in each model grid column. Each method can be calibrated by comparing domainwide statistics of the peak values of simulated flash rate proxy fields against domainwide peak total lightning flash rate density data from observations. Tests show that the first method is able to capture much of the temporal variability of the lightning threat, while the second method does a better job of depicting the areal coverage of the threat. A blended solution is designed to retain most of the temporal sensitivity of the first method, while adding the improved spatial coverage of the second. Weather Research and Forecast Model simulations of selected North Alabama cases show that this model can distinguish the general character and intensity of most convective events, and that the proposed methods show promise as a means of generating quantitatively realistic fields of lightning threat. However, because models tend to have more difficulty in correctly predicting the instantaneous placement of storms, forecasts of the detailed location of the lightning threat based on single simulations can be in error. Although these model shortcomings presently limit the precision of lightning threat forecasts from individual runs of current generation models, the techniques proposed herein should continue to be applicable as newer and more accurate physically-based model versions, physical parameterizations, initialization techniques and ensembles of cloud-allowing forecasts become available.
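
    The two proxies and their blend reduce to a few lines; the calibration constants and blend weights below are hypothetical placeholders for the values obtained by matching domainwide peak statistics against Lightning Mapping Array data, as described above.

    ```python
    import numpy as np

    def lightning_threat(w_m15, qg_m15, rho, q_ice, dz,
                         c1=0.04, c2=0.2, r1=0.95, r2=0.05):
        """Blended lightning-threat field (all constants hypothetical).

        F1: graupel flux w*qg at the -15 C level (temporal sensitivity).
        F2: vertically integrated ice per column (areal coverage).
        rho, q_ice, dz have shape (nz, ny, nx); the -15 C fields (ny, nx).
        """
        F1 = c1 * w_m15 * qg_m15                      # (m/s)*(g/kg), scaled
        F2 = c2 * np.sum(rho * q_ice * dz, axis=0)    # column ice mass, scaled
        return r1 * F1 + r2 * F2
    ```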

  5. Time-dependent oral absorption models

    NASA Technical Reports Server (NTRS)

    Higaki, K.; Yamashita, S.; Amidon, G. L.

    2001-01-01

    The plasma concentration-time profiles following oral administration of drugs are often irregular and cannot be interpreted easily with conventional models based on first- or zero-order absorption kinetics and lag time. Six new models were developed using a time-dependent absorption rate coefficient, ka(t), wherein the time dependency was varied to account for the dynamic processes such as changes in fluid absorption or secretion, in absorption surface area, and in motility with time, in the gastrointestinal tract. In the present study, the plasma concentration profiles of propranolol obtained in human subjects following oral dosing were analyzed using the newly derived models based on mass balance and compared with the conventional models. Nonlinear regression analysis indicated that the conventional compartment model including lag time (CLAG model) could not predict the rapid initial increase in plasma concentration after dosing and the predicted Cmax values were much lower than that observed. On the other hand, all models with the time-dependent absorption rate coefficient, ka(t), were superior to the CLAG model in predicting plasma concentration profiles. Based on Akaike's Information Criterion (AIC), the fluid absorption model without lag time (FA model) exhibited the best overall fit to the data. The two-phase model including lag time, TPLAG model was also found to be a good model judging from the values of sum of squares. This model also described the irregular profiles of plasma concentration with time and frequently predicted Cmax values satisfactorily. A comparison of the absorption rate profiles also suggested that the TPLAG model is better at prediction of irregular absorption kinetics than the FA model. In conclusion, the incorporation of a time-dependent absorption rate coefficient ka(t) allows the prediction of nonlinear absorption characteristics in a more reliable manner.
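
    A sketch of the general construction, with one hypothetical choice of time dependency (an exponentially decaying ka, as if the effective absorption surface shrinks); the paper's six models differ precisely in how ka(t) is specified.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def oral_pk(t, y, ke, V, ka0, tau):
        """One-compartment model with time-dependent absorption coefficient
        ka(t) = ka0 * exp(-t/tau). y = [amount in gut (mg), plasma conc (mg/L)]."""
        A_gi, C = y
        ka = ka0 * np.exp(-t / tau)
        return [-ka * A_gi, ka * A_gi / V - ke * C]

    # hypothetical parameters for an 80 mg oral dose
    sol = solve_ivp(oral_pk, (0.0, 12.0), [80.0, 0.0],
                    args=(0.3, 280.0, 1.5, 2.0), dense_output=True)
    print(sol.sol(np.array([0.5, 1.0, 2.0, 4.0, 8.0]))[1])   # plasma conc (mg/L)
    ```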

  6. Modeling the Impact of Smoking Cessation Treatment Policies on Quit Rates

    PubMed Central

    Levy, David T.; Graham, Amanda L.; Mabry, Patricia L.; Abrams, David B.; Orleans, C. Tracy

    2010-01-01

    Background: Smoking cessation treatment policies could yield substantial increases in adult quit rates in the U.S. Purpose: The goals of this paper are to model the effects of individual cessation treatment policies on population quit rates, and to illustrate the potential benefits of combining policies to leverage their synergistic effects. Methods: A mathematical model is updated to examine the impact of five cessation treatment policies on quit attempts, treatment use, and treatment effectiveness. Policies include: (1) Expand cessation treatment coverage and provider reimbursement; (2) Mandate adequate funding for the use and promotion of evidence-based state-sponsored telephone quitlines; (3) Support healthcare systems changes to prompt, guide, and incentivize tobacco treatment; (4) Support and promote evidence-based treatment via the Internet; and (5) Improve individually tailored, stepped care approaches and the long-term effectiveness of evidence-based treatments. Results: The annual baseline population quit rate is 4.3% of all current smokers. Implementing any policy in isolation is projected to make the quit rate increase to between 4.5% and 6%. By implementing all five policies in combination, the quit rate is projected to increase to 10.9%, or 2.5 times the baseline rate. Conclusions: If fully implemented in a coordinated fashion, cessation treatment policies could reduce smoking prevalence from its current rate of 20.5% to 17.2% within 1 year. By modeling the policy impacts on the components of the population quit rate (quit attempts, treatment use, treatment effectiveness), key indicators are identified to analyze in improving the effect of cessation treatment policies. PMID:20176309

  7. The Relationship Between Hospital Value-Based Purchasing Program Scores and Hospital Bond Ratings.

    PubMed

    Rangnekar, Anooja; Johnson, Tricia; Garman, Andrew; O'Neil, Patricia

    2015-01-01

    Tax-exempt hospitals and health systems often borrow long-term debt to fund capital investments. Lenders use bond ratings as a standard metric to assess whether to lend funds to a hospital. Credit rating agencies have historically relied on financial performance measures and a hospital's ability to service debt obligations to determine bond ratings. With the growth in pay-for-performance-based reimbursement models, rating agencies are expanding their hospital bond rating criteria to include hospital utilization and value-based purchasing (VBP) measures. In this study, we evaluated the relationship between the Hospital VBP domains--Clinical Process of Care, Patient Experience of Care, Outcome, and Medicare Spending per Beneficiary (MSPB)--and hospital bond ratings. Given the historical focus on financial performance, we hypothesized that hospital bond ratings are not associated with any of the Hospital VBP domains. This was a retrospective, cross-sectional study of all hospitals that were rated by Moody's for fiscal year 2012 and participated in the Centers for Medicare & Medicaid Services' VBP program as of January 2014 (N = 285). Of the 285 hospitals in the study, 15% had been assigned a bond rating of Aa, and 46% had been assigned an A rating. Using a binary logistic regression model, we found an association between MSPB only and bond ratings, after controlling for other VBP and financial performance scores; however, MSPB did not improve the overall predictive accuracy of the model. Inclusion of VBP scores in the methodology used to determine hospital bond ratings is likely to affect hospital bond ratings in the near term.

  8. Estimation of an optimal chemotherapy utilisation rate for cancer: setting an evidence-based benchmark for quality cancer care.

    PubMed

    Jacob, S A; Ng, W L; Do, V

    2015-02-01

    There is wide variation in the proportion of newly diagnosed cancer patients who receive chemotherapy, indicating the need for a benchmark rate of chemotherapy utilisation. This study describes an evidence-based model that estimates the proportion of new cancer patients in whom chemotherapy is indicated at least once (defined as the optimal chemotherapy utilisation rate). The optimal chemotherapy utilisation rate can act as a benchmark for measuring and improving the quality of care. Models of optimal chemotherapy utilisation were constructed for each cancer site based on indications for chemotherapy identified from evidence-based treatment guidelines. Data on the proportion of patient- and tumour-related attributes for which chemotherapy was indicated were obtained, using population-based data where possible. Treatment indications and epidemiological data were merged to calculate the optimal chemotherapy utilisation rate. Monte Carlo simulations and sensitivity analyses were used to assess the effect of controversial chemotherapy indications and variations in epidemiological data on our model. Chemotherapy is indicated at least once in 49.1% (95% confidence interval 48.8-49.6%) of all new cancer patients in Australia. The optimal chemotherapy utilisation rates for individual tumour sites ranged from a low of 13% in thyroid cancers to a high of 94% in myeloma. The optimal chemotherapy utilisation rate can serve as a benchmark for planning chemotherapy services on a population basis. The model can be used to evaluate service delivery by comparing the benchmark rate with patterns of care data. The overall estimate for other countries can be obtained by substituting the relevant distribution of cancer types. It can also be used to predict future chemotherapy workload and can be easily modified to take into account future changes in cancer incidence, presentation stage or chemotherapy indications.
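
    Structurally, the model is a probability tree: each guideline indication defines a branch with an epidemiological proportion, the optimal site rate is the summed probability of the branches in which chemotherapy is indicated, and the population benchmark is the incidence-weighted average over sites. A toy sketch with hypothetical proportions:

    ```python
    # one tumour site: (proportion of patients, chemotherapy indicated?)
    branches = [
        (0.30, True),    # e.g. stage III, fit for treatment
        (0.25, True),    # e.g. metastatic disease
        (0.35, False),   # e.g. early stage, locoregional treatment alone
        (0.10, False),   # e.g. unfit for chemotherapy
    ]
    site_rate = sum(p for p, indicated in branches if indicated)   # 0.55

    # population benchmark: incidence-weighted average over sites
    sites = {"site_A": (0.60, site_rate), "site_B": (0.40, 0.13)}  # (share, rate)
    print(sum(share * rate for share, rate in sites.values()))
    ```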

  9. Predation rates by North Sea cod (Gadus morhua) - Predictions from models on gastric evacuation and bioenergetics

    USGS Publications Warehouse

    Hansson, S.; Rudstam, L. G.; Kitchell, J.F.; Hilden, M.; Johnson, B.L.; Peppard, P.E.

    1996-01-01

    We compared four different methods for estimating predation rates by North Sea cod (Gadus morhua). Three estimates, based on gastric evacuation rates, came from an ICES multispecies working group and the fourth from a bioenergetics model. The bioenergetics model was developed from a review of literature on cod physiology. The three gastric evacuation rate models produced very different prey consumption estimates for small (2 kg) fish. For most size and age classes, the bioenergetics model predicted food consumption rates intermediate to those predicted by the gastric evacuation models. Using the standard ICES model and the average population abundance and age structure for 1974-1989, annual prey consumption by the North Sea cod population (age greater than or equal to 1) was 840 kilotons. The other two evacuation rate models produced estimates of 1020 and 1640 kilotons, respectively. The bioenergetics model estimate was 1420 kilotons. The major differences between models were due to consumption rate estimates for younger age groups of cod.

  10. Validation of statistical predictive models meant to select melanoma patients for sentinel lymph node biopsy.

    PubMed

    Sabel, Michael S; Rice, John D; Griffith, Kent A; Lowe, Lori; Wong, Sandra L; Chang, Alfred E; Johnson, Timothy M; Taylor, Jeremy M G

    2012-01-01

    To identify melanoma patients at sufficiently low risk of nodal metastases who could avoid sentinel lymph node biopsy (SLNB), several statistical models have been proposed based upon patient/tumor characteristics, including logistic regression, classification trees, random forests, and support vector machines. We sought to validate recently published models meant to predict sentinel node status. We queried our comprehensive, prospectively collected melanoma database for consecutive melanoma patients undergoing SLNB. Prediction values were estimated based upon four published models, calculating the same reported metrics: negative predictive value (NPV), rate of negative predictions (RNP), and false-negative rate (FNR). Logistic regression performed comparably with our data when considering NPV (89.4 versus 93.6%); however, the model's specificity was not high enough to significantly reduce the rate of biopsies (SLN reduction rate of 2.9%). When applied to our data, the classification tree produced NPV and reduction in biopsy rates that were lower (87.7 versus 94.1 and 29.8 versus 14.3, respectively). Two published models could not be applied to our data due to model complexity and the use of proprietary software. Published models meant to reduce the SLNB rate among patients with melanoma either underperformed when applied to our larger dataset, or could not be validated. Differences in selection criteria and histopathologic interpretation likely resulted in underperformance. Statistical predictive models must be developed in a clinically applicable manner to allow for both validation and ultimately clinical utility.
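
    The three validation metrics reported above are simple functions of the confusion matrix; a sketch, assuming the usual definitions (NPV among predicted negatives, RNP as the fraction of patients the model would spare a biopsy, FNR among node-positive patients):

    ```python
    import numpy as np

    def slnb_metrics(y_true, y_pred):
        """y_true: 1 = positive sentinel node; y_pred: 1 = predicted positive."""
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        tn = np.sum((y_pred == 0) & (y_true == 0))
        fn = np.sum((y_pred == 0) & (y_true == 1))
        tp = np.sum((y_pred == 1) & (y_true == 1))
        npv = tn / (tn + fn)          # negative predictive value
        rnp = np.mean(y_pred == 0)    # rate of negative predictions
        fnr = fn / (fn + tp)          # false-negative rate
        return npv, rnp, fnr

    print(slnb_metrics([0, 0, 1, 0, 1, 0], [0, 0, 1, 0, 0, 1]))
    ```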

  11. Effects of the DRG-based prospective payment system operated by the voluntarily participating providers on the cesarean section rates in Korea.

    PubMed

    Lee, Kwangsoo; Lee, Sangil

    2007-05-01

    This study explored the effects of the diagnosis-related group (DRG)-based prospective payment system (PPS) operated by voluntarily participating organizations on the cesarean section (CS) rates, and analyzed whether the participating health care organizations had similar CS rates despite the varied participation periods. The study sample included delivery claims data from the Korean national health insurance program for the year 2003. Risk factors were identified and used in the adjustment model to distinguish the main reason for CS. Their risk-adjusted CS rates were compared by the reimbursement methods, and the organizations' internal and external environments were controlled. The final risk-adjustment model for the CS rates meets the criteria for an effective model. There were no significant differences of CS rates between providers in the DRG and fee-for-service system after controlling for organizational variables. The CS rates did not vary significantly depending on the providers' DRG participation periods. The results provide evidence that the DRG payment system operated by volunteering health care organizations had no impact on the CS rates, which can lower the quality of care. Although the providers joined the DRG system in different years, there were no differences in the CS rates among the DRG providers. These results support the future expansion of the DRG-based PPS plan to all health care services in Korea.

  12. Dynamic ground-effect measurements on the F-15 STOL and Maneuver Technology Demonstrator (S/MTD) configuration

    NASA Technical Reports Server (NTRS)

    Kemmerly, Guy T.

    1990-01-01

    A moving-model ground-effect testing method was used to study the influence of rate-of-descent on the aerodynamic characteristics for the F-15 STOL and Maneuver Technology Demonstrator (S/MTD) configuration for both the approach and roll-out phases of landing. The approach phase was modeled for three rates of descent, and the results were compared to the predictions from the F-15 S/MTD simulation data base (prediction based on data obtained in a wind tunnel with zero rate of descent). This comparison showed significant differences due both to the rate of descent in the moving-model test and to the presence of the ground boundary layer in the wind tunnel test. Relative to the simulation data base predictions, the moving-model test showed substantially less lift increase in ground effect, less nose-down pitching moment, and less increase in drag. These differences became more prominent at the larger thrust vector angles. Over the small range of rates of descent tested using the moving-model technique, the effect of rate of descent on longitudinal aerodynamics was relatively constant. The results of this investigation indicate no safety-of-flight problems with the lower jets vectored up to 80 deg on approach. The results also indicate that this configuration could employ a nozzle concept using lower reverser vector angles up to 110 deg on approach if a no-flare approach procedure were adopted and if inlet reingestion does not pose a problem.

  13. Joint Machine Learning and Game Theory for Rate Control in High Efficiency Video Coding.

    PubMed

    Gao, Wei; Kwong, Sam; Jia, Yuheng

    2017-08-25

    In this paper, a joint machine learning and game theory modeling (MLGT) framework is proposed for inter-frame coding tree unit (CTU) level bit allocation and rate control (RC) optimization in High Efficiency Video Coding (HEVC). First, a support vector machine (SVM) based multi-classification scheme is proposed to improve the prediction accuracy of the CTU-level Rate-Distortion (R-D) model; this learning-based R-D model is proposed as a way around the legacy "chicken-and-egg" dilemma in video coding. Second, a mixed R-D model based cooperative bargaining game theory is proposed for bit allocation optimization, where the convexity of the mixed R-D model based utility function is proved, and the Nash bargaining solution (NBS) is achieved by the proposed iterative solution search method. The minimum utility is adjusted by the reference coding distortion and the frame-level quantization parameter (QP) change. Lastly, the intra-frame QP and the inter-frame adaptive bit ratios are adjusted to give inter frames more bit resources, maintaining smooth quality and bit consumption in the bargaining game optimization. Experimental results demonstrate that the proposed MLGT-based RC method achieves much better R-D performance, quality smoothness, bit rate accuracy, buffer control results and subjective visual quality than other state-of-the-art one-pass RC methods, and the achieved R-D performance is very close to the performance limits of the FixedQP method.

  14. Applying constraints on model-based methods: Estimation of rate constants in a second order consecutive reaction

    NASA Astrophysics Data System (ADS)

    Kompany-Zareh, Mohsen; Khoshkam, Maryam

    2013-02-01

    This paper describes the estimation of reaction rate constants and pure ultraviolet/visible (UV-vis) spectra of the components involved in a second order consecutive reaction between ortho-aminobenzoic acid (o-ABA) and diazonium ions (DIAZO), with one intermediate. In the described system, o-ABA does not absorb in the visible region of interest, so the closure rank deficiency problem did not exist. Concentration profiles were determined by solving the differential equations of the corresponding kinetic model. Three types of model-based procedures were applied to estimate the rate constants of the kinetic system, according to the Levenberg/Marquardt (NGL/M) algorithm. Original data-based, score-based and concentration-based objective functions were included in these nonlinear fitting procedures. The results showed that when there is error in the initial concentrations, the accuracy of the estimated rate constants strongly depends on the type of objective function applied in the fitting procedure. Moreover, flexibility in the application of different constraints and optimization of the initial concentration estimates during the fitting procedure were investigated. The results showed a considerable decrease in the ambiguity of the obtained parameters when appropriate constraints and adjustable initial reagent concentrations were applied.
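
    A minimal concentration-based version of such a fit — integrate the kinetic ODEs for a second-order consecutive scheme A + B -> C -> D and adjust (k1, k2) by Levenberg-Marquardt — can be sketched as follows. Species, concentrations, and rate constants are hypothetical, and the paper's data-based and score-based objective functions would replace the residual definition.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import least_squares

    def rhs(t, y, k1, k2):
        A, B, C, D = y
        r1 = k1 * A * B                      # second-order first step
        return [-r1, -r1, r1 - k2 * C, k2 * C]

    def residuals(k, t_obs, C_obs, y0):
        sol = solve_ivp(rhs, (0.0, t_obs[-1]), y0, t_eval=t_obs, args=tuple(k))
        return sol.y[2] - C_obs              # misfit on the intermediate

    y0 = [0.01, 0.012, 0.0, 0.0]             # initial concentrations (mol/L)
    t_obs = np.linspace(0.0, 600.0, 50)      # s
    C_obs = solve_ivp(rhs, (0.0, 600.0), y0, t_eval=t_obs, args=(2.5, 0.004)).y[2]

    fit = least_squares(residuals, x0=[1.0, 0.01], args=(t_obs, C_obs, y0),
                        method="lm")         # Levenberg-Marquardt
    print(fit.x)                             # recovers ~ [2.5, 0.004]
    ```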

  15. Effect of chemistry and turbulence on NO formation in oxygen-natural gas flames

    NASA Technical Reports Server (NTRS)

    Samaniego, J. -M.; Egolfopoulos, F. N.; Bowman, C. T.

    1996-01-01

    The effects of chemistry and turbulence on NO formation in oxygen-natural gas turbulent diffusion flames have been investigated. The chemistry of nitric oxides has been studied numerically in the counterflow configuration. Systematic calculations with the GRI 2.11 mechanism for combustion of methane and NO chemistry were conducted to provide a base case. It was shown that the 'simple' Zeldovich mechanism accounts for more than 75% of N2 consumption in the flame over a range of strain rates varying between 10 and 1000 s-1. The main shortcomings of this mechanism are: 1) overestimation (15%) of the NO production rate at low strain rates, because it does not capture the reburn due to the hydrocarbon chemistry, and 2) underestimation (25%) of the NO production rate at high strain rates, because it ignores NO production through the prompt mechanism. Reburn through the Zeldovich mechanism alone proves to be significant at low strain rates. A one-step model based on the Zeldovich mechanism and including reburn has been developed. It shows good agreement with the GRI mechanism at low strain rates but significantly underestimates N2 consumption (by about 50%) at high strain rates. The role of turbulence has been assessed by using an existing 3-D DNS data base of a diffusion flame in decaying turbulence. Two PDF closure models used in practical industrial codes for turbulent NO formation have been tested. A simpler version of the global one-step chemical scheme for NO, compared to that developed in this study, was used to test the closure assumptions of the PDF models, because the data base could not provide all the necessary ingredients. Despite this simplification, it was possible to demonstrate that the current PDF models for NO significantly overestimate the NO production rate because they neglect the correlations between the fluctuations in oxygen concentration and temperature. A single-scalar PDF model for temperature that accounts for such correlations, based on laminar flame considerations, has been developed and showed excellent agreement with the values given by the DNS.

  16. Expectation Maximization Algorithm for Box-Cox Transformation Cure Rate Model and Assessment of Model Misspecification Under Weibull Lifetimes.

    PubMed

    Pal, Suvra; Balakrishnan, Narayanaswamy

    2018-05-01

    In this paper, we develop likelihood inference based on the expectation maximization algorithm for the Box-Cox transformation cure rate model assuming the lifetimes to follow a Weibull distribution. A simulation study is carried out to demonstrate the performance of the proposed estimation method. Through Monte Carlo simulations, we also study the effect of model misspecification on the estimate of cure rate. Finally, we analyze a well-known data on melanoma with the model and the inferential method developed here.

  17. Antarctic sub-shelf melt rates via SIMPEL

    NASA Astrophysics Data System (ADS)

    Reese, Ronja; Albrecht, Torsten; Winkelmann, Ricarda

    2017-04-01

    Ocean-induced melting below ice shelves is currently suspected to be the dominant cause of mass loss from the Antarctic Ice Sheet (e.g. Depoorter et al. 2013). Although thinning of ice shelves does not directly contribute to sea-level rise, it may have a significant indirect impact through the potential of ice shelves to buttress their adjacent ice sheet. Hence, an appropriate representation of sub-shelf melt rates is essential for modelling the evolution of ice sheets with marine terminating outlet glaciers. Due to computational limits of fully-coupled ice and ocean models, sub-shelf melt rates are often parametrized in large-scale or long-term simulations (e.g. Martin et al. 2011, Pollard & DeConto 2012). These parametrizations usually depend on the depth of the ice shelf base or its local slope but do not include the physical processes in ice shelf cavities. Here, we present the Sub Ice shelf Melt Potsdam modEL (SIMPEL), which mimics the first-order large-scale circulation in ice shelf cavities based on an ocean box model (Olbers & Hellmer, 2010), implemented in the Parallel Ice Sheet Model (Bueler & Brown 2009, Winkelmann et al. 2011, www.pism-docs.org). In SIMPEL, ocean water is transported at depth towards the grounding line, where sub-shelf melt rates are highest, and then rises along the shelf base towards the calving front, where refreezing can occur. Melt rates are computed by a description of ice-ocean interaction commonly used in high-resolution models (McPhee 1992, Holland & Jenkins 1999). This enables the model to capture a wide range of melt rates, comparable to the observed range for Antarctic ice shelves (Rignot et al. 2013).

  18. Extension of Viscoplasticity Based on Overstress to Capture the Effects of Prior Aging on the Time Dependent Deformation Behavior of a High-Temperature Polymer: Experiments and Modeling

    DTIC Science & Technology

    2008-10-01

    The standard model characterization procedure is based on creep and recovery tests, where loading and unloading occur at a fast rate of 1.0 MPa/s. The model is formulated in terms of the overstress σ − g[ε] and the condition dg̊[ε]/dε = E, where g̊ is defined as the equilibrium stress g[ε] in the limit of extremely fast loading. [Excerpt fragment; Figure 2.10 of the source, a stress-strain curve schematic, shows curves for slow, medium, and fast strain rates, with plastic flow fully established.]

  19. Integrating data from multiple sources for insights into demographic processes: Simulation studies and proof of concept for hierarchical change-in-ratio models.

    PubMed

    Nilsen, Erlend B; Strand, Olav

    2018-01-01

    We developed a model for estimating demographic rates and population abundance based on multiple data sets revealing information about population age and sex structure. Such models have previously been described in the literature as change-in-ratio models, but we extend their applicability by i) using time series data that allow the full temporal dynamics to be modelled, ii) casting the model in an explicit hierarchical modelling framework, and iii) estimating parameters based on Bayesian inference. Based on sensitivity analyses, we conclude that the approach developed here can estimate demographic rates with high precision whenever unbiased data on population structure are available. Our simulations revealed that this was true even when data on population abundance were not available or not included in the modelling framework. Nevertheless, when data on population structure are biased due to different observability of different age and sex categories, this will affect estimates of all demographic rates. Estimates of population size are particularly sensitive to such biases, whereas demographic rates can be estimated relatively precisely even with biased observation data, as long as the bias is not severe. We then use the models to estimate demographic rates and population abundance for two Norwegian reindeer (Rangifer tarandus) populations for which age-sex data were available for all harvested animals, population structure surveys were carried out in early summer (after calving) and late fall (after the hunting season), and population size was counted in winter. We found that demographic rates were similar regardless of whether we included population count data in the modelling, but that the estimated population size was affected by this decision. This suggests that monitoring programs that focus on population age and sex structure will benefit from collecting additional data that allow estimation of observability for different age and sex classes. In addition, our sensitivity analysis suggests that focusing monitoring on changes in demographic rates might be more feasible than monitoring abundance in many situations where data on population age and sex structure can be collected.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neeway, James J.; Rieke, Peter C.; Parruzot, Benjamin P.

    In far-from-equilibrium conditions, the dissolution of borosilicate glasses used to immobilize nuclear waste is known to be a function of both temperature and pH. The aim of this paper is to study effects of these variables on three model waste glasses (SON68, ISG, AFCI). To do this, experiments were conducted at temperatures of 23, 40, 70, and 90 °C and pH(RT) values of 9, 10, 11, and 12 with the single-pass flow-through (SPFT) test method. The results from these tests were then used to parameterize a kinetic rate model based on transition state theory. Both the absolute dissolution rates and the rate model parameters are compared with previous results. Discrepancies in the absolute dissolution rates as compared to those obtained using other test methods are discussed. Rate model parameters for the three glasses studied here are nearly equivalent within error and in relative agreement with previous studies. The results were analyzed with a linear multivariate regression (LMR) and a nonlinear multivariate regression performed with the use of the Glass Corrosion Modeling Tool (GCMT), which is capable of providing a robust uncertainty analysis. This robust analysis highlights the high degree of correlation of various parameters in the kinetic rate model. As more data are obtained on borosilicate glasses with varying compositions, the effect of glass composition on the rate parameter values could possibly be obtained. This would allow for the possibility of predicting the forward dissolution rate of glass based solely on composition.
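
    Far from equilibrium, such a transition-state-theory rate law reduces to a temperature- and pH-dependent forward rate. A minimal sketch, with placeholder parameter values rather than the fitted values of this study:

```python
# Sketch: forward glass dissolution rate in the TST-based form
# r = k0 * 10**(eta*pH) * exp(-Ea/(R*T)), with the affinity term ~1
# far from equilibrium. All parameter values are placeholders.
import math

R = 8.314  # J mol^-1 K^-1

def forward_rate(T_K, pH, k0=1.0e6, eta=0.4, Ea=80e3):
    # g m^-2 d^-1; k0, eta (pH power-law coefficient), and Ea (activation
    # energy, J/mol) are the fitted parameters of such a rate model.
    return k0 * 10.0 ** (eta * pH) * math.exp(-Ea / (R * T_K))

# Example grid mirroring the SPFT matrix of this study (23-90 C, pH 9-12).
for T_C in (23, 40, 70, 90):
    row = [f"{forward_rate(T_C + 273.15, pH):.2e}" for pH in (9, 10, 11, 12)]
    print(T_C, row)
```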

  1. Interaction of rate- and size-effect using a dislocation density based strain gradient viscoplasticity model

    NASA Astrophysics Data System (ADS)

    Nguyen, Trung N.; Siegmund, Thomas; Tomar, Vikas; Kruzic, Jamie J.

    2017-12-01

    Size effects occur in non-uniform plastically deformed metals confined in a volume on the scale of micrometer or sub-micrometer. Such problems have been well studied using strain gradient rate-independent plasticity theories. Yet, plasticity theories describing the time-dependent behavior of metals in the presence of size effects are presently limited, and there is no consensus about how the size effects vary with strain rates or whether there is an interaction between them. This paper introduces a constitutive model which enables the analysis of complex load scenarios, including loading rate sensitivity, creep, relaxation and interactions thereof under the consideration of plastic strain gradient effects. A strain gradient viscoplasticity constitutive model based on the Kocks-Mecking theory of dislocation evolution, namely the strain gradient Kocks-Mecking (SG-KM) model, is established and allows one to capture both rate and size effects, and their interaction. A formulation of the model in the finite element analysis framework is derived. Numerical examples are presented. In a special virtual creep test with the presence of plastic strain gradients, creep rates are found to diminish with the specimen size, and are also found to depend on the loading rate in an initial ramp loading step. Stress relaxation in a solid medium containing cylindrical microvoids is predicted to increase with decreasing void radius and strain rate in a prior ramp loading step.
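
    The Kocks-Mecking backbone of such a model can be sketched without the gradient terms: a storage/recovery evolution law for dislocation density and a Taylor-type flow stress. The constants below are illustrative, aluminum-like values, and the recovery coefficient is held fixed rather than rate- and temperature-dependent as in the full SG-KM model.

```python
# Sketch: Kocks-Mecking dislocation-density evolution,
# drho/deps = k1*sqrt(rho) - k2*rho, with flow stress
# sigma = sigma0 + alpha*M*G*b*sqrt(rho).
import numpy as np

k1, k2 = 3.0e8, 10.0            # storage (1/m) and recovery (-) coefficients
alpha, M, G, b = 0.3, 3.06, 26e9, 2.86e-10  # Taylor factor, shear modulus, Burgers vector
sigma0 = 20e6                   # Pa, lattice friction stress

rho = 1e12                      # initial dislocation density, 1/m^2
deps = 1e-4                     # plastic strain increment
for _ in range(2000):           # integrate to 20% plastic strain
    rho += (k1 * np.sqrt(rho) - k2 * rho) * deps
stress = sigma0 + alpha * M * G * b * np.sqrt(rho)
print(f"flow stress at 20% strain: {stress/1e6:.0f} MPa")
```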

  2. Vehicle-specific emissions modeling based upon on-road measurements.

    PubMed

    Frey, H Christopher; Zhang, Kaishan; Rouphail, Nagui M

    2010-05-01

    Vehicle-specific microscale fuel use and emissions rate models are developed based upon real-world hot-stabilized tailpipe measurements made using a portable emissions measurement system. Consecutive averaging periods of one to three multiples of the response time are used to compare two semiempirical physically based modeling schemes. One scheme is based on internally observable variables (IOVs), such as engine speed and manifold absolute pressure, while the other is based on externally observable variables (EOVs), such as speed, acceleration, and road grade. For NO, HC, and CO emission rates, the average R^2 ranged from 0.41 to 0.66 for the former and from 0.17 to 0.30 for the latter. The EOV models have R^2 for CO2 of 0.43 to 0.79, versus 0.99 for the IOV models. The models are sensitive to episodic events in driving cycles such as high acceleration. Intervehicle and fleet average modeling approaches are compared; the former account for microscale variations that might be useful for some types of assessments. EOV-based models have practical value for traffic management or simulation applications since IOVs usually are not available or not used for emission estimation.
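
    A toy version of the EOV scheme is simply a regression of an emission rate on speed, acceleration, and grade; everything below (variables, coefficients, data) is synthetic and only illustrates the model form.

```python
# Sketch: externally-observable-variable (EOV) emission model fitted by
# ordinary least squares on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 500
speed = rng.uniform(0, 30, n)        # m/s
accel = rng.normal(0, 1, n)          # m/s^2
grade = rng.uniform(-0.05, 0.05, n)  # fraction

# Synthetic NO emission rate (mg/s) with noise, for illustration only.
no_rate = (0.02 * speed + 0.5 * np.clip(accel, 0, None) + 4.0 * grade
           + rng.normal(0, 0.1, n))

X = np.column_stack([np.ones(n), speed, np.clip(accel, 0, None), grade])
beta, *_ = np.linalg.lstsq(X, no_rate, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((no_rate - pred) ** 2) / np.sum((no_rate - no_rate.mean()) ** 2)
print("coefficients:", beta.round(3), " R^2:", round(r2, 2))
```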

  3. Cosmogenic Ne-21 Production Rates in H-Chondrites Based on Cl-36 - Ar-36 Ages

    NASA Technical Reports Server (NTRS)

    Leya, I.; Graf, Th.; Nishiizumi, K.; Guenther, D.; Wieler, R.

    2000-01-01

    We measured Ne-21 production rates in 14 H-chondrites; the rates are in good agreement with model calculations. The production rates are based on Ne-21 concentrations measured on bulk samples or the non-magnetic fraction and on Cl-36 - Ar-36 ages determined from the metal phase.

  4. Reaction Mechanism Generator: Automatic construction of chemical kinetic mechanisms

    NASA Astrophysics Data System (ADS)

    Gao, Connie W.; Allen, Joshua W.; Green, William H.; West, Richard H.

    2016-06-01

    Reaction Mechanism Generator (RMG) constructs kinetic models composed of elementary chemical reaction steps using a general understanding of how molecules react. Species thermochemistry is estimated through Benson group additivity and reaction rate coefficients are estimated using a database of known rate rules and reaction templates. At its core, RMG relies on two fundamental data structures: graphs and trees. Graphs are used to represent chemical structures, and trees are used to represent thermodynamic and kinetic data. Models are generated using a rate-based algorithm which excludes species from the model based on reaction fluxes. RMG can generate reaction mechanisms for species involving carbon, hydrogen, oxygen, sulfur, and nitrogen. It also has capabilities for estimating transport and solvation properties, and it automatically computes pressure-dependent rate coefficients and identifies chemically-activated reaction paths. RMG is an object-oriented program written in Python, which provides a stable, robust programming architecture for developing an extensible and modular code base with a large suite of unit tests. Computationally intensive functions are cythonized for speed improvements.
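
    The rate-based enlargement step can be sketched as follows, assuming a stand-in flux function in place of a real kinetics evaluation; the convergence criterion mirrors the flux-tolerance idea, not RMG's exact implementation.

```python
# Sketch of a rate-based enlargement loop: species start on the "edge";
# the edge species with the largest net formation flux is promoted into
# the core model while that flux exceeds a tolerance times the
# characteristic core flux.
def enlarge_mechanism(core, edge, flux_to, tol=0.1, max_iter=50):
    for _ in range(max_iter):
        if not edge:
            break
        r_char = max(abs(flux_to(sp, core)) for sp in core)  # characteristic flux
        sp, f = max(((sp, flux_to(sp, core)) for sp in edge),
                    key=lambda kv: abs(kv[1]))
        if abs(f) < tol * r_char:
            break                    # model converged at this tolerance
        core.add(sp)                 # promote the highest-flux edge species
        edge.remove(sp)
    return core

# Toy usage with made-up fluxes (mol/m^3/s):
rates = {"CH4": 5.0, "CH3": 4.0, "C2H6": 0.8, "C4H10": 0.01}
flux = lambda sp, core: rates.get(sp, 0.0)
print(enlarge_mechanism({"CH4"}, {"CH3", "C2H6", "C4H10"}, flux))
```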

  5. Modeling of the rough spherical nanoparticles manipulation on a substrate based on the AFM nanorobot

    NASA Astrophysics Data System (ADS)

    Zakeri, M.; Faraji, J.

    2014-12-01

    In this paper, the dynamic behavior of rough spherical micro/nanoparticles during pulling/pushing on a flat substrate has been investigated and analyzed. For this purpose, two hexagonal roughness models (George and Cooper) were first studied, and the adhesion force was evaluated for rough-particle manipulation on a flat substrate. The two models were then modified using the Rabinovich theory. The contact adhesion force between a rough particle and a flat substrate was evaluated, and the depth of penetration was determined using the Johnson-Kendall-Roberts contact mechanics theory and the Schwartz method, according to the Cooper and George roughness models. A novel contact theory was then used to derive a dynamic model for rough micro/nanoparticle manipulation on a flat substrate. Finally, the dynamic behavior of particles was simulated during pushing of rough spherical gold particles with radii of 50, 150, 400, 600, and 1,000 nm. Simulations of particles with several degrees of roughness on a flat substrate indicated that, compared with smooth particles, inherent particle roughness can reduce the critical force needed for sliding and rolling. For a fixed roughness radius, increasing the roughness height further reduced the critical sliding and rolling forces; the critical force likewise decreased as the roughness radius increased. Comparing the two models, the George roughness model predicted a larger adhesion force than the Cooper model, and as a result, the critical force predicted by the George roughness model was closer to the critical force value of a smooth particle.

  6. Modelling of plasma-based dry reforming: how do uncertainties in the input data affect the calculation results?

    NASA Astrophysics Data System (ADS)

    Wang, Weizong; Berthelot, Antonin; Zhang, Quanzhi; Bogaerts, Annemie

    2018-05-01

    One of the main issues in plasma chemistry modeling is that the cross sections and rate coefficients are subject to uncertainties, which yields uncertainties in the modeling results and hence hinders the predictive capabilities. In this paper, we reveal the impact of these uncertainties on the model predictions of plasma-based dry reforming in a dielectric barrier discharge. For this purpose, we performed a detailed uncertainty analysis and sensitivity study. 2000 different combinations of rate coefficients, based on the uncertainty from a log-normal distribution, are used to predict the uncertainties in the model output. The uncertainties in the electron density and electron temperature are around 11% and 8% at the maximum of the power deposition for a 70% confidence level. Still, this can have a major effect on the electron impact rates and hence on the calculated conversions of CO2 and CH4, as well as on the selectivities of CO and H2. For the CO2 and CH4 conversion, we obtain uncertainties of 24% and 33%, respectively. For the CO and H2 selectivity, the corresponding uncertainties are 28% and 14%, respectively. We also identify which reactions contribute most to the uncertainty in the model predictions. In order to improve the accuracy and reliability of plasma chemistry models, we recommend using only verified rate coefficients, and we point out the need for dedicated verification experiments.
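
    The Monte Carlo design described above can be sketched generically: sample each rate coefficient from a log-normal distribution around its nominal value and read a central 70% interval off the model outputs. The model function and uncertainty factors below are placeholders.

```python
# Sketch: propagate log-normally distributed rate-coefficient uncertainty
# through a model by Monte Carlo, mirroring the paper's 2000-sample design.
import numpy as np

rng = np.random.default_rng(42)
k_nominal = np.array([1e-16, 3e-11, 2e-9])   # nominal rate coefficients
unc_factor = np.array([2.0, 1.5, 3.0])       # multiplicative uncertainties

def model(k):
    # Placeholder for the plasma chemistry model; returns e.g. CO2 conversion.
    return 0.2 * (k[0] / 1e-16) ** 0.3 * (k[2] / 2e-9) ** 0.1

samples = []
for _ in range(2000):
    # sigma of ln(k) chosen so one "uncertainty factor" is one std dev
    k = k_nominal * np.exp(rng.normal(0.0, np.log(unc_factor)))
    samples.append(model(k))
lo, mid, hi = np.percentile(samples, [15, 50, 85])   # central 70% interval
print(f"conversion = {mid:.3f} (+{hi - mid:.3f} / -{mid - lo:.3f})")
```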

  7. Influence of government controls over the currency exchange rate in the evolution of Hurst's exponent: An autonomous agent-based model

    NASA Astrophysics Data System (ADS)

    Chávez Muñoz, Pablo; Fernandes da Silva, Marcus; Vivas Miranda, José; Claro, Francisco; Gomez Diniz, Raimundo

    2007-12-01

    We have studied the behavior of the Hurst exponent associated with the currency exchange rate in Brazil and Chile. It is shown that this index maps the degree of government control over the exchange rate. An autonomous-agent-based model of supply and demand is proposed that simulates a virtual market of sale and purchase, in which buyers and sellers are forced to negotiate through an intermediary. According to this model, the average price of daily transactions corresponds to the theoretical equilibrium predicted by the law of supply and demand. The influence of an added tendency factor is also analyzed.
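
    One common way to estimate the Hurst exponent from a return series is classical rescaled-range (R/S) analysis; the sketch below uses that textbook method, which is not necessarily the estimator used by the authors.

```python
# Sketch: Hurst exponent via rescaled-range (R/S) analysis. H ~ 0.5 for an
# uncontrolled random walk; persistent deviations can indicate intervention.
import numpy as np

def hurst_rs(x, window_sizes=(8, 16, 32, 64, 128)):
    x = np.asarray(x, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(x) - n + 1, n):   # non-overlapping windows
            w = x[start:start + n]
            dev = np.cumsum(w - w.mean())
            r = dev.max() - dev.min()               # range of cumulative deviations
            s = w.std(ddof=1)
            if s > 0:
                rs_vals.append(r / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_vals)))
    # Slope of log(R/S) vs log(n) estimates the Hurst exponent.
    return np.polyfit(log_n, log_rs, 1)[0]

returns = np.random.default_rng(7).normal(0.0, 0.01, 2000)  # i.i.d. -> H near 0.5
print("H =", round(hurst_rs(returns), 2))
```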

  8. A growth inhibitory model with SOx influenced effective growth rate for estimation of algal biomass concentration under flue gas atmosphere

    USDA-ARS?s Scientific Manuscript database

    A theoretical model for the prediction of biomass concentration under real flue gas emission has been developed. The model considers the CO2 mass transfer rate, the critical SOx concentration and its role on pH based inter-conversion of bicarbonate in model building. The calibration and subsequent v...

  9. Simulation of finite-strain inelastic phenomena governed by creep and plasticity

    NASA Astrophysics Data System (ADS)

    Li, Zhen; Bloomfield, Max O.; Oberai, Assad A.

    2017-11-01

    Inelastic mechanical behavior plays an important role in many applications in science and engineering. Phenomenologically, this behavior is often modeled as plasticity or creep. Plasticity is used to represent the rate-independent component of inelastic deformation and creep is used to represent the rate-dependent component. In several applications, especially those at elevated temperatures and stresses, these processes occur simultaneously. In order to model these processes, we develop a rate-objective, finite-deformation constitutive model for plasticity and creep. The plastic component of this model is based on rate-independent J_2 plasticity, and the creep component is based on a thermally activated Norton model. We describe the implementation of this model within a finite element formulation, and present a radial return mapping algorithm for it. This approach reduces the additional complexity of modeling plasticity and creep, over thermoelasticity, to just solving one nonlinear scalar equation at each quadrature point. We implement this algorithm within a multiphysics finite element code and evaluate the consistent tangent through automatic differentiation. We verify and validate the implementation, apply it to modeling the evolution of stresses in the flip chip manufacturing process, and test its parallel strong-scaling performance.

  10. Geometric model for softwood transverse thermal conductivity. Part I

    Treesearch

    Hong-mei Gu; Audrey Zink-Sharp

    2005-01-01

    Thermal conductivity is a very important parameter in determining heat transfer rate and is required for developing drying models and for industrial operations such as adhesive cure rate. Geometric models for predicting softwood thermal conductivity in the radial and tangential directions were generated in this study based on observation and measurements of wood...

  11. Coupling impervious surface rate derived from satellite remote sensing with distributed hydrological model for highly urbanized watershed flood forecasting

    NASA Astrophysics Data System (ADS)

    Dong, L.

    2017-12-01

    Abstract: The original urban surface structure has changed greatly because of rapid urbanization, and the impermeable area has increased substantially, putting great pressure on city flood control and drainage. The Songmushan reservoir basin, which has a high degree of urbanization, is taken as an example. Landsat pixels are decomposed with a linear spectral mixture model, and the proportion of urban area within each pixel is taken as the impervious rate. Based on impervious rate data from before and after urbanization, a physically based distributed hydrological model, the Liuxihe Model, is used to simulate the hydrological processes. The research shows that flood forecasting for this highly urbanized area with the Liuxihe Model performs well and meets the accuracy requirements of city flood control and drainage. The increase in impervious area speeds up flow concentration, increases peak flow, advances the time of peak flow, and increases the runoff coefficient. Key words: Liuxihe Model; Impervious rate; City flood control and drainage; Urbanization; Songmushan reservoir basin
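
    The unmixing step can be sketched as nonnegative least squares against endmember spectra, with the impervious fraction read from the normalized abundances; the endmember values below are made up for illustration.

```python
# Sketch: linear spectral mixture analysis of one Landsat pixel. The pixel
# spectrum is modeled as a nonnegative combination of endmember spectra
# (impervious, vegetation, soil); fractions are normalized to sum to one.
import numpy as np
from scipy.optimize import nnls

# Rows: bands; columns: endmembers (impervious, vegetation, soil).
E = np.array([[0.10, 0.04, 0.18],
              [0.12, 0.06, 0.22],
              [0.14, 0.05, 0.26],
              [0.18, 0.40, 0.30],
              [0.22, 0.25, 0.34],
              [0.24, 0.15, 0.38]])
pixel = 0.55 * E[:, 0] + 0.30 * E[:, 1] + 0.15 * E[:, 2]  # synthetic mixed pixel

frac, _ = nnls(E, pixel)            # nonnegative least squares
frac /= frac.sum()                  # enforce fractions summing to one
print("impervious rate:", round(frac[0], 2))
```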

  12. A mathematical model of microalgae growth in cylindrical photobioreactor

    NASA Astrophysics Data System (ADS)

    Bakeri, Noorhadila Mohd; Jamaian, Siti Suhana

    2017-08-01

    Microalgae are unicellular organisms that exist individually or in chains or groups and can be utilized in many applications. Researchers have made various efforts to increase the growth rate of microalgae, which have potential as an effective tool for wastewater treatment, as an alternative to fuels such as coal, and as a feedstock for biodiesel. The growth of microalgae can be estimated using the Geider model, which is based on the photosynthesis-irradiance curve (PI-curve) and focused on flat-panel photobioreactors. Therefore, in this study a mathematical model for microalgae growth in a cylindrical photobioreactor is proposed based on the Geider model. Light irradiance is the crucial factor affecting the growth rate of microalgae. The absorbed photon flux is determined by calculating the average light irradiance in a cylindrical system illuminated by a unidirectional parallel flux, treating the cylinder as a collection of differential parallelepipeds. Results from this study showed that the specific growth rate of microalgae increases until a constant level is reached. Therefore, the proposed mathematical model can be used to estimate the growth rate of microalgae in a cylindrical photobioreactor.

  13. Benzene patterns in different urban environments and a prediction model for benzene rates based on NOx values

    NASA Astrophysics Data System (ADS)

    Paz, Shlomit; Goldstein, Pavel; Kordova-Biezuner, Levana; Adler, Lea

    2017-04-01

    Exposure to benzene has been associated with multiple severe impacts on health. This notwithstanding, at most monitoring stations benzene is not monitored on a regular basis. The aims of the study were to compare benzene rates in different urban environments (a region with heavy traffic and an industrial region), to analyse the relationship between benzene and meteorological parameters in a Mediterranean climate, to estimate the linkages between benzene and NOx, and to suggest a prediction model for benzene rates based on NOx levels in order to contribute to a better estimation of benzene. Data were used from two different monitoring stations located on the eastern Mediterranean coast: 1) a traffic monitoring station in Tel Aviv, Israel (TLV), located in an urban region with heavy traffic; 2) a general air quality monitoring station in Haifa Bay (HIB), located in Israel's main industrial region. At each station, hourly, daily, monthly, seasonal, and annual data on benzene, NOx, mean temperature, relative humidity, inversion level, and temperature gradient were analysed over three years: 2008, 2009, and 2010. The severity of benzene pollution was found to be considerably higher at the traffic monitoring station (TLV) than at the general air quality station (HIB), despite the location of the latter in an industrial area. Hourly, daily, monthly, seasonal, and annual patterns have been shown to coincide with anthropogenic activities (traffic), the day of the week, and atmospheric conditions. A strong correlation between NOx and benzene allowed the development of a prediction model for benzene rates based on NOx (which is monitored regularly), the day of the week, and the month. The model succeeded in predicting benzene values throughout the year (except for September) and might be useful for identifying the potential risk of benzene in other urban environments.

  14. A novel epidemic spreading model with decreasing infection rate based on infection times

    NASA Astrophysics Data System (ADS)

    Huang, Yunhan; Ding, Li; Feng, Yun

    2016-02-01

    A new epidemic spreading model in which individuals can be infected repeatedly is proposed in this paper. The infection rate decreases according to the number of times an individual has been infected before, a phenomenon that may be caused by immunity or by the heightened alertness of individuals. We introduce a new parameter, called the decay factor, to evaluate the decrease of the infection rate. Our model bridges the Susceptible-Infected-Susceptible (SIS) model and the Susceptible-Infected-Recovered (SIR) model through this parameter. The proposed model has been studied by Monte Carlo numerical simulation. It is found that the initial infection rate has a greater impact on the peak value than the decay factor does. The effect of the decay factor on the final density and the outbreak threshold is dominant but weakens significantly when birth and death rates are considered. Besides, simulation results show that the influence of birth and death rates on the final density is non-monotonic in some circumstances.
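
    A minimal Monte Carlo version of this mechanism, assuming a random contact network and a susceptibility that decays geometrically with past infections (beta0 times delta to the power k); parameter values are arbitrary.

```python
# Sketch: SIS-like simulation where node i's susceptibility decays as
# beta0 * delta**k after k past infections, bridging SIS (delta = 1)
# and SIR-like behavior (delta -> 0).
import random

import networkx as nx

def simulate(n=2000, beta0=0.3, delta=0.5, mu=0.2, steps=200, seed=1):
    rng = random.Random(seed)
    g = nx.erdos_renyi_graph(n, 8.0 / n, seed=seed)
    infected = set(rng.sample(list(g.nodes), 20))
    times_infected = {v: 0 for v in g.nodes}
    for _ in range(steps):
        new_inf, recovered = set(), set()
        for v in infected:
            for u in g.neighbors(v):
                if u not in infected:
                    if rng.random() < beta0 * delta ** times_infected[u]:
                        new_inf.add(u)
            if rng.random() < mu:
                recovered.add(v)
        for u in new_inf:
            times_infected[u] += 1
        infected = (infected | new_inf) - recovered
    return len(infected) / n

print("final density:", simulate(delta=0.9), simulate(delta=0.3))
```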

  15. A Comprehensive Prediction Model of Hydraulic Extended-Reach Limit Considering the Allowable Range of Drilling Fluid Flow Rate in Horizontal Drilling.

    PubMed

    Li, Xin; Gao, Deli; Chen, Xuyue

    2017-06-08

    The hydraulic extended-reach limit (HERL) model of a horizontal extended-reach well (ERW) can predict the maximum measured depth (MMD) of the well. The HERL refers to the well's MMD when drilling fluid can no longer be normally circulated by the drilling pump. A previous model analyzed two constraint conditions, drilling pump rated pressure and rated power; however, the effects of the allowable range of drilling fluid flow rate (Qmin ≤ Q ≤ Qmax) were not considered. In this study, three cases of the HERL model are proposed according to the relationship between the allowable range of drilling fluid flow rate and the rated flow rate of the drilling pump (Qr). A horizontal ERW is analyzed to predict its HERL, especially its horizontal-section limit (Lh). Results show that when Qmin ≤ Qr ≤ Qmax (Case I), Lh depends both on the horizontal-section limit based on rated pump pressure (Lh1) and on the horizontal-section limit based on rated pump power (Lh2); when Qmin < Qmax < Qr (Case II), Lh is exclusively controlled by Lh1; while Lh is determined only by Lh2 when Qr < Qmin < Qmax (Case III). Furthermore, Lh1 first increases and then decreases with increasing drilling fluid flow rate, while Lh2 keeps decreasing as the drilling fluid flow rate increases. The comprehensive model provides a more accurate prediction of the HERL.
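
    The three-case logic itself is compact enough to state directly; in this sketch, Case I is taken as the minimum of the two limits, and the hydraulics formulas behind Lh1 and Lh2 are omitted.

```python
# Sketch of the three-case logic for the horizontal-section limit Lh.
# lh1/lh2 stand for the limits from rated pump pressure and rated pump
# power; their hydraulics formulas are in the paper and omitted here.
def horizontal_limit(q_min, q_max, q_r, lh1, lh2):
    if q_min <= q_r <= q_max:        # Case I: both constraints bind
        return min(lh1, lh2)         # taken here as the binding minimum
    if q_r > q_max:                  # Case II: rated pressure controls
        return lh1
    return lh2                       # Case III (q_r < q_min): power controls

print(horizontal_limit(1.0, 3.0, 2.5, lh1=1800.0, lh2=2100.0))  # m, Case I
```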

  16. Rating of personality disorder features in popular movie characters.

    PubMed

    Hesse, Morten; Schliewe, Sanna; Thomsen, Rasmus R

    2005-12-08

    Tools for training professionals in rating personality disorders are few. We present one such tool: rating of fictional persons. However, before ratings of fictional persons can be useful, we need to know whether raters get the same results, when rating fictional characters. Psychology students at the University of Copenhagen (N = 8) rated four different movie characters from four movies based on three systems: Global rating scales representing each of the 10 personality disorders in the DSM-IV, a criterion list of all criteria for all DSM-IV personality disorders in random order, and the Ten Item Personality Inventory for rating the five-factor model. Agreement was estimated based on intraclass-correlation. Agreement for rating scales for personality disorders ranged from 0.04 to 0.54. For personality disorder features based on DSM-IV criteria, agreement ranged from 0.24 to 0.89, and agreement for the five-factor model ranged from 0.05 to 0.88. The largest multivariate effect was observed for criteria count followed by the TIPI, followed by rating scales. Raters experienced personality disorder criteria as the easiest, and global personality disorder scales as the most difficult, but with significant variation between movies. Psychology students with limited or no clinical experience can agree well on the personality traits of movie characters based on watching the movie. Rating movie characters may be a way to practice assessment of personality.
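
    Agreement of several raters on the same targets is commonly quantified with a two-way intraclass correlation such as ICC(2,1) in the Shrout-Fleiss scheme; the sketch below implements that standard formula, though the paper's exact ICC variant is not stated.

```python
# Sketch: two-way random-effects intraclass correlation, ICC(2,1).
# ratings: rows = targets (e.g. movie characters), cols = raters.
import numpy as np

def icc_2_1(ratings):
    x = np.asarray(ratings, float)
    n, k = x.shape
    mean_t = x.mean(axis=1, keepdims=True)   # target means
    mean_r = x.mean(axis=0, keepdims=True)   # rater means
    grand = x.mean()
    bms = k * ((mean_t - grand) ** 2).sum() / (n - 1)   # between-targets MS
    jms = n * ((mean_r - grand) ** 2).sum() / (k - 1)   # between-raters MS
    ems = ((x - mean_t - mean_r + grand) ** 2).sum() / ((n - 1) * (k - 1))
    return (bms - ems) / (bms + (k - 1) * ems + k * (jms - ems) / n)

ratings = [[4, 5, 4], [2, 2, 3], [5, 4, 5], [1, 2, 1]]  # 4 targets, 3 raters
print(round(icc_2_1(ratings), 2))
```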

  18. Masquerade Detection Using a Taxonomy-Based Multinomial Modeling Approach in UNIX Systems

    DTIC Science & Technology

    2008-08-25

    [Excerpt fragments:] The approach relies primarily on the modeling of statistical features, such as the frequency of events, the duration of events, and the co-occurrence of multiple events. Once such behaviors are identified, features representing them can be extracted while auditing the user's behavior. (Figure 1 of the source shows a taxonomy of Linux and Unix commands.) The best performance is achieved when the features are extracted just from simple commands; a results table compares hit rate and false-positive rate by method, including a one-class SVM (ocSVM) using frequency-based simple commands.

  19. Using Dynamic Transmission Modeling to Determine Vaccination Coverage Rate Based on 5-Year Economic Burden of Infectious Disease: An Example of Pneumococcal Vaccine.

    PubMed

    Wen, Yu-Wen; Wu, Hsin; Chang, Chee-Jen

    2015-05-01

    Vaccination can reduce the incidence and mortality of an infectious disease and thus increase the years of life and productivity of the entire society. But when the vaccination coverage rate is determined, its economic burden is usually not taken into account. This article aimed to use dynamic transmission modeling (DTM), which is based on a susceptible-infectious-recovered model and is a system of differential equations, to find the optimal vaccination coverage rate based on the economic burden of an infectious disease. Vaccination for pneumococcal diseases was used as an example to demonstrate the main purpose. 23-valent pneumococcal polysaccharide vaccines (PPV23) and 13-valent pneumococcal conjugate vaccines (PCV13) have shown their cost-effectiveness in the elderly and in children, respectively. Scenario analysis of PPV23 for the elderly aged 65+ years and of PCV13 for children aged 0 to 4 years was applied to assess the optimal vaccination coverage rate based on the 5-year economic burden. Model parameters were derived from Taiwan's National Health Insurance Research Database, government data, and published literature. Various vaccination coverage rates, the vaccine efficacy, and all epidemiologic parameters were substituted into the DTM, and all differential equations were solved in the R statistical software. If the coverage rates of PPV23 for the elderly and of PCV13 for children both reach 50%, the economic burden due to pneumococcal disease will be acceptable. This article provided an alternative perspective, based on the economic burden of disease, for obtaining a vaccination coverage rate using the DTM. This will provide valuable information for vaccination policy decision makers.
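
    A skeletal version of such a transmission model adds vaccinated entrants to an SIR system and uses cumulative infections over five years as a stand-in for economic burden; all parameter values below are illustrative, not those of the Taiwan analysis.

```python
# Sketch: SIR-type dynamic transmission model where a fraction "cov" of
# new entrants is vaccinated with efficacy "eff". No mortality term,
# kept minimal; cumulative infections proxy the 5-year burden.
import numpy as np
from scipy.integrate import solve_ivp

def sir(t, y, beta, gamma, cov, eff, births):
    s, i, r, cum = y
    n = s + i + r
    inf = beta * s * i / n
    return [births * (1 - cov * eff) - inf,        # unprotected newcomers
            inf - gamma * i,
            gamma * i + births * cov * eff,        # vaccine-protected entrants
            inf]                                   # cumulative infections

def five_year_infections(cov, beta=0.4, gamma=0.2, eff=0.7, births=5e-5):
    y0 = [0.99, 0.01, 0.0, 0.0]                    # fractions of population
    sol = solve_ivp(sir, (0, 5 * 365), y0,
                    args=(beta, gamma, cov, eff, births), rtol=1e-8)
    return sol.y[3, -1]

for cov in (0.0, 0.25, 0.5, 0.75):
    print(f"coverage {cov:.0%}: cumulative infections {five_year_infections(cov):.3f}")
```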

  20. Validation of Statistical Predictive Models Meant to Select Melanoma Patients for Sentinel Lymph Node Biopsy

    PubMed Central

    Sabel, Michael S.; Rice, John D.; Griffith, Kent A.; Lowe, Lori; Wong, Sandra L.; Chang, Alfred E.; Johnson, Timothy M.; Taylor, Jeremy M.G.

    2013-01-01

    Introduction: To identify melanoma patients at sufficiently low risk of nodal metastases who could avoid sentinel lymph node biopsy (SLNB). Several statistical models have been proposed based upon patient/tumor characteristics, including logistic regression, classification trees, random forests, and support vector machines. We sought to validate recently published models meant to predict sentinel node status. Methods: We queried our comprehensive, prospectively collected melanoma database for consecutive melanoma patients undergoing SLNB. Prediction values were estimated based upon 4 published models, calculating the same reported metrics: negative predictive value (NPV), rate of negative predictions (RNP), and false negative rate (FNR). Results: Logistic regression performed comparably with our data when considering NPV (89.4% vs. 93.6%); however, the model's specificity was not high enough to significantly reduce the rate of biopsies (SLNB reduction rate of 2.9%). When applied to our data, the classification tree produced an NPV and a reduction-in-biopsies rate that differed from the published values (87.7% vs. 94.1% and 29.8% vs. 14.3%, respectively). Two published models could not be applied to our data due to model complexity and the use of proprietary software. Conclusions: Published models meant to reduce the SLNB rate among patients with melanoma either underperformed when applied to our larger dataset or could not be validated. Differences in selection criteria and histopathologic interpretation likely resulted in underperformance. Statistical predictive models must be developed in a clinically applicable manner to allow for both validation and, ultimately, clinical utility. PMID:21822550
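
    The three reported metrics are straightforward to compute from predictions and outcomes; a small sketch with hypothetical counts:

```python
# Sketch: validation metrics from a model's predictions.
# y_true: 1 = positive SLN; y_pred: 1 = model recommends biopsy.
import numpy as np

def validation_metrics(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    neg = y_pred == 0
    npv = np.mean(y_true[neg] == 0)          # negative predictive value
    rnp = np.mean(neg)                       # rate of negative predictions,
                                             # i.e. the SLNB reduction rate
    fnr = np.mean(neg[y_true == 1])          # false negative rate
    return npv, rnp, fnr

y_true = [0] * 90 + [1] * 10
y_pred = [0] * 60 + [1] * 30 + [1] * 8 + [0] * 2   # hypothetical model output
print([round(m, 3) for m in validation_metrics(y_true, y_pred)])
```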

  1. A site specific model and analysis of the neutral somatic mutation rate in whole-genome cancer data.

    PubMed

    Bertl, Johanna; Guo, Qianyun; Juul, Malene; Besenbacher, Søren; Nielsen, Morten Muhlig; Hornshøj, Henrik; Pedersen, Jakob Skou; Hobolth, Asger

    2018-04-19

    Detailed modelling of the neutral mutational process in cancer cells is crucial for identifying driver mutations and understanding the mutational mechanisms that act during cancer development. The neutral mutational process is very complex: whole-genome analyses have revealed that the mutation rate differs between cancer types, between patients, and along the genome depending on the genetic and epigenetic context. Therefore, methods that predict the number of different types of mutations in regions or specific genomic elements must consider local genomic explanatory variables. A major drawback of most methods is the need to average the explanatory variables across the entire region or genomic element. This procedure is particularly problematic if the explanatory variable varies dramatically in the element under consideration. To take the fine scale of the explanatory variables into account, we model the probabilities of different types of mutations for each position in the genome by multinomial logistic regression. We analyse 505 cancer genomes from 14 different cancer types and compare the performance in predicting mutation rate of both region-based and site-specific models. We show that for 1000 randomly selected genomic positions, the site-specific model predicts the mutation rate much better than region-based models. We use a forward selection procedure to identify the most important explanatory variables; the procedure identifies site-specific conservation (phyloP), replication timing, and expression level as the best predictors of the mutation rate. Finally, our model confirms and quantifies certain well-known mutational signatures. We find that our site-specific multinomial regression model outperforms the region-based models. The possibility of including genomic variables on different scales and patient-specific variables makes it a versatile framework for studying different mutational mechanisms. Our model can serve as the neutral null model for the mutational process; regions that deviate from the null model are candidates for elements that drive cancer development.
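
    The core of the site-specific approach, multinomial logistic regression of per-position mutation type on local covariates, can be sketched with synthetic data; the covariates mimic phyloP, replication timing, and expression, and a standard library estimator stands in for the authors' implementation.

```python
# Sketch: site-specific multinomial model of mutation-type probabilities.
# Classes 0..4 might be "no mutation" plus four substitution types.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([rng.normal(0, 1, n),     # phyloP-like conservation score
                     rng.uniform(0, 1, n),    # replication timing
                     rng.lognormal(0, 1, n)]) # expression level

# Synthetic ground truth: class 0 is the baseline, others are rare.
logits = np.column_stack([np.zeros(n),
                          -3.0 + 0.5 * X[:, 0], -3.2 + 0.8 * X[:, 1],
                          -3.5 + 0.1 * X[:, 2], -4.0 + 0.3 * X[:, 0]])
p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
y = np.array([rng.choice(5, p=pi) for pi in p])

model = LogisticRegression(max_iter=2000).fit(X, y)
print(model.predict_proba(X[:2]))   # per-site probabilities of each type
```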

  2. An analysis of USSPACECOM's space surveillance network sensor tasking methodology

    NASA Astrophysics Data System (ADS)

    Berger, Jeff M.; Moles, Joseph B.; Wilsey, David G.

    1992-12-01

    This study provides the basis for the development of a cost/benefit assessment model to determine the effects of alterations to the Space Surveillance Network (SSN) on orbital element (OE) set accuracy. It provides a review of current methods used by NORAD and the SSN to gather and process observations, an alternative to the current Gabbard classification method, and the development of a model to determine the effects of observation rate and correction interval on OE set accuracy. The proposed classification scheme is based on satellite J2 perturbations. Specifically, classes were established based on mean motion, eccentricity, and inclination, since J2 perturbation effects are functions of only these elements. Model development began by creating representative sensor observations using a highly accurate orbital propagation model. These observations were compared to predicted observations generated using the NORAD Simplified General Perturbation (SGP4) model and differentially corrected using a Bayes sequential estimation algorithm. A 10-run Monte Carlo analysis was performed with this model on 12 satellites using 16 different observation rate/correction interval combinations. An ANOVA and confidence interval analysis of the results shows that this model demonstrates the differences in steady-state position error based on varying observation rate and correction interval.

  3. Creating High Reliability in Health Care Organizations

    PubMed Central

    Pronovost, Peter J; Berenholtz, Sean M; Goeschel, Christine A; Needham, Dale M; Sexton, J Bryan; Thompson, David A; Lubomski, Lisa H; Marsteller, Jill A; Makary, Martin A; Hunt, Elizabeth

    2006-01-01

    Objective The objective of this paper was to present a comprehensive approach to help health care organizations reliably deliver effective interventions. Context Reliability in healthcare translates into using valid rate-based measures. Yet high reliability organizations have proven that the context in which care is delivered, called organizational culture, also has important influences on patient safety. Model for Improvement Our model to improve reliability, which also includes interventions to improve culture, focuses on valid rate-based measures. This model includes (1) identifying evidence-based interventions that improve the outcome, (2) selecting interventions with the most impact on outcomes and converting to behaviors, (3) developing measures to evaluate reliability, (4) measuring baseline performance, and (5) ensuring patients receive the evidence-based interventions. The comprehensive unit-based safety program (CUSP) is used to improve culture and guide organizations in learning from mistakes that are important, but cannot be measured as rates. Conclusions We present how this model was used in over 100 intensive care units in Michigan to improve culture and eliminate catheter-related blood stream infections—both were accomplished. Our model differs from existing models in that it incorporates efforts to improve a vital component for system redesign—culture, it targets 3 important groups—senior leaders, team leaders, and front line staff, and facilitates change management—engage, educate, execute, and evaluate for planned interventions. PMID:16898981

  4. Incorporating variability in simulations of seasonally forced phenology using integral projection models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goodsman, Devin W.; Aukema, Brian H.; McDowell, Nate G.

    Phenology models are becoming increasingly important tools to accurately predict how climate change will impact the life histories of organisms. We propose a class of integral projection phenology models derived from stochastic individual-based models of insect development and demography. Our derivation, which is based on the rate-summation concept, produces integral projection models that capture the effect of phenotypic rate variability on insect phenology, but which are typically more computationally frugal than equivalent individual-based phenology models. We demonstrate our approach using a temperature-dependent model of the demography of the mountain pine beetle (Dendroctonus ponderosae Hopkins), an insect that kills mature pine trees. This work illustrates how a wide range of stochastic phenology models can be reformulated as integral projection models. Due to their computational efficiency, these integral projection models are suitable for deployment in large-scale simulations, such as studies of altered pest distributions under climate change.
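
    The rate-summation concept can be sketched in a few lines: each individual integrates a temperature-dependent development rate, completes the stage when the integral reaches one, and phenotypic variability enters as an individual rate multiplier. Rates and forcing below are toy values.

```python
# Sketch: rate summation with phenotypic variability. Individual i emerges
# on the first day its development index D_i(t) = m_i * sum(r(T)) reaches 1.
import numpy as np

rng = np.random.default_rng(5)

def dev_rate(T):
    # simple degree-day-style rate above a 10 C threshold
    return np.maximum(T - 10.0, 0.0) / 300.0

days = np.arange(365)
T = 5.0 + 15.0 * np.sin(2 * np.pi * (days - 100) / 365)   # seasonal forcing

multipliers = rng.lognormal(0.0, 0.2, 1000)               # phenotypic variability
D = np.cumsum(dev_rate(T))[None, :] * multipliers[:, None]
emergence_day = (D >= 1.0).argmax(axis=1)                  # first day D >= 1
print("median emergence day:", int(np.median(emergence_day)))
```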

  5. Constitutive law for seismicity rate based on rate and state friction: Dieterich 1994 revisited.

    NASA Astrophysics Data System (ADS)

    Heimisson, E. R.; Segall, P.

    2017-12-01

    Dieterich [1994] derived a constitutive law for seismicity rate based on rate and state friction, which has been applied widely to aftershocks, earthquake triggering, and induced seismicity in various geological settings. Here, this influential work is revisited and re-derived in a more straightforward manner. By virtue of this new derivation, the model is generalized to include changes in effective normal stress associated with background seismicity. Furthermore, the general case in which the seismicity rate is not constant under a constant stressing rate is formulated. The new derivation directly provides practical integral expressions for the cumulative number of events and the rate of seismicity for an arbitrary stressing history. Arguably, the most prominent limitation of Dieterich's 1994 theory is the assumption that seismic sources do not interact. Here we derive a constitutive relationship that considers source interactions between sub-volumes of the crust, where the stress in each sub-volume is assumed constant. Interactions are considered both under constant stressing rate conditions and for arbitrary stressing history. This theory can be used to model seismicity rate due to stress changes, or to estimate stress changes using observed seismicity from triggered earthquake swarms, with earthquake interactions and magnitudes taken into account. We identify special conditions under which the influence of interactions cancels and the predictions reduce to those of Dieterich 1994. This remarkable result may explain the apparent success of the model when applied to observations of triggered seismicity. This approach has application to understanding and modeling induced and triggered seismicity, and to the quantitative interpretation of geodetic and seismic data. It enables simultaneous modeling of geodetic and seismic data in a self-consistent framework. To date, physics-based modeling of seismicity, with or without geodetic data, has given insight into various processes related to aftershocks, volcano-tectonic (VT) and injection-induced seismicity. However, the role of processes such as earthquake interactions, magnitudes, and effective normal stress has been unclear. The new theory presented here resolves some of the pertinent issues raised in the literature on the application of the Dieterich 1994 model.
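
    A numerical sketch of the original (non-interacting) constitutive law, as recalled from Dieterich [1994]: a state variable gamma evolves with the stressing history, and the seismicity rate is the background rate divided by gamma times the reference stressing rate. The notation and all parameter values here are assumptions for illustration.

```python
# Sketch: Dieterich-style seismicity-rate evolution. gamma obeys
# d(gamma) = (dt - gamma * dS) / (A*sigma); the rate is R = r / (gamma * Sdot).
# A positive stress step produces a rate jump ~exp(dS/(A*sigma)) followed by
# Omori-like decay back toward the background rate r.
import numpy as np

A_sigma = 0.1e6          # a*sigma, Pa (assumed)
sdot = 1e3               # background stressing rate, Pa/yr (assumed)
r = 10.0                 # background seismicity rate, events/yr
gamma = 1.0 / sdot       # steady state under background stressing

dt = 1e-3                # yr
rates = []
for step in range(int(20 / dt)):
    if step == int(1 / dt):
        gamma *= np.exp(-0.3e6 / A_sigma)   # instantaneous 0.3 MPa stress step
    gamma += (dt - gamma * sdot * dt) / A_sigma
    rates.append(r / (gamma * sdot))

i_peak = int(1 / dt)
print(f"peak/background rate ratio: {rates[i_peak] / r:.1f}")  # ~ exp(3) ~ 20
```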

  6. Adjusting for overdispersion in piecewise exponential regression models to estimate excess mortality rate in population-based research.

    PubMed

    Luque-Fernandez, Miguel Angel; Belot, Aurélien; Quaresma, Manuela; Maringe, Camille; Coleman, Michel P; Rachet, Bernard

    2016-10-01

    In population-based cancer research, piecewise exponential regression models are used to derive adjusted estimates of excess mortality due to cancer using the Poisson generalized linear modelling framework. However, the assumption that the conditional mean and variance of the rate parameter given the set of covariates x_i are equal is strong and may fail to account for overdispersion, i.e., variability of the rate parameter such that the variance exceeds the mean. Using an empirical example, we aimed to describe simple methods to test and correct for overdispersion. We used a regression-based score test for overdispersion under the relative survival framework and proposed different approaches to correct for overdispersion, including quasi-likelihood, robust standard error estimation, negative binomial regression, and flexible piecewise modelling. All piecewise exponential regression models showed the presence of significant inherent overdispersion (p-value < 0.001), but the flexible piecewise exponential model showed the smallest overdispersion parameter (3.2, versus 21.3 for the non-flexible piecewise exponential models). We found no major differences between the correction methods. However, flexible piecewise regression modelling, with either quasi-likelihood or robust standard errors, was the best approach, as it deals with both overdispersion due to model misspecification and true (inherent) overdispersion.
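
    A minimal sketch of the testing-and-correcting workflow on synthetic rates, using a standard GLM library: estimate the Pearson dispersion, then refit with a quasi-likelihood scale and with a negative binomial family (the offset carries person-time). This mirrors the general approach, not the authors' exact relative-survival implementation.

```python
# Sketch: overdispersion in a Poisson rate model and two corrections.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 400
x = rng.normal(size=n)
persontime = rng.uniform(50, 150, n)
mu = persontime * np.exp(-3.0 + 0.4 * x)
deaths = rng.negative_binomial(2, 2 / (2 + mu))        # overdispersed counts

X = sm.add_constant(x)
off = np.log(persontime)
poisson = sm.GLM(deaths, X, family=sm.families.Poisson(), offset=off).fit()
phi = poisson.pearson_chi2 / poisson.df_resid          # dispersion estimate
quasi = sm.GLM(deaths, X, family=sm.families.Poisson(),
               offset=off).fit(scale="X2")             # quasi-likelihood scale
negbin = sm.GLM(deaths, X, family=sm.families.NegativeBinomial(),
                offset=off).fit()

print(f"dispersion phi = {phi:.1f} (>1 indicates overdispersion)")
print("Poisson SE:", poisson.bse.round(3), " quasi SE:", quasi.bse.round(3))
```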

  7. Firing-rate response of linear and nonlinear integrate-and-fire neurons to modulated current-based and conductance-based synaptic drive.

    PubMed

    Richardson, Magnus J E

    2007-08-01

    Integrate-and-fire models are mainstays of the study of single-neuron response properties and emergent states of recurrent networks of spiking neurons. They also provide an analytical base for perturbative approaches that treat important biological details, such as synaptic filtering, synaptic conductance increase, and voltage-activated currents. Steady-state firing rates of both linear and nonlinear integrate-and-fire models, receiving fluctuating synaptic drive, can be calculated from the time-independent Fokker-Planck equation. The dynamic firing-rate response is less easy to extract, even at the first-order level of a weak modulation of the model parameters, but is an important determinant of neuronal response and network stability. For the linear integrate-and-fire model the response to modulations of current-based synaptic drive can be written in terms of hypergeometric functions. For the nonlinear exponential and quadratic models no such analytical forms for the response are available. Here it is demonstrated that a rather simple numerical method can be used to obtain the steady-state and dynamic response for both linear and nonlinear models to parameter modulation in the presence of current-based or conductance-based synaptic fluctuations. To complement the full numerical solution, generalized analytical forms for the high-frequency response are provided. A special case is also identified--time-constant modulation--for which the response to an arbitrarily strong modulation can be calculated exactly.

  8. The role of climate in the global patterns of ecosystem carbon turnover rates - contrasts between data and models

    NASA Astrophysics Data System (ADS)

    Carvalhais, N.; Forkel, M.; Khomik, M.; Bellarby, J.; Migliavacca, M.; Thurner, M.; Beer, C.; Jung, M.; Mu, M.; Randerson, J. T.; Saatchi, S. S.; Santoro, M.; Reichstein, M.

    2012-12-01

    The turnover rates of carbon in terrestrial ecosystems and their sensitivity to climate are instrumental properties for diagnosing the interannual variability and forecasting trends of biogeochemical processes and carbon-cycle-climate feedbacks. We propose to look globally at the spatial distribution of carbon turnover rates to explore the association between bioclimatic regimes and the rates at which carbon cycles in terrestrial ecosystems. Based on data-driven approaches to ecosystem carbon fluxes and data-based estimates of ecosystem carbon stocks, it is possible to build fully observationally supported diagnostics. These data-driven diagnostics support the benchmarking of CMIP5 (Coupled Model Intercomparison Project Phase 5) model outputs against observationally based estimates. The models' performance is addressed by confronting spatial patterns of carbon fluxes and stocks with data, as well as the global and regional sensitivities of turnover rates to climate. Our results show strong latitudinal gradients globally, mostly controlled by temperature, which are not always paralleled by the CMIP5 simulations. The northern colder regions are also where the largest difference in temperature sensitivity between models and data occurs. Interestingly, there seem to be two different statistical populations in the data (some with high, others with low apparent temperature sensitivity of carbon turnover rates), and individual models only seem to describe one or the other population. Additionally, comparisons within bioclimatic classes can even show opposite patterns between turnover rates and temperature in water-limited regions. Overall, our analysis emphasizes the role of finding patterns and intrinsic properties, rather than plain magnitudes of fluxes, for diagnosing the sensitivities of terrestrial biogeochemical cycles to climate. Further, our regional analysis suggests a significant gap in addressing the partial influence of water on ecosystem carbon turnover rates, especially in very cold or water-limited regions.

  9. Space-Time Earthquake Rate Models for One-Year Hazard Forecasts in Oklahoma

    NASA Astrophysics Data System (ADS)

    Llenos, A. L.; Michael, A. J.

    2017-12-01

    The recent one-year seismic hazard assessments for natural and induced seismicity in the central and eastern US (CEUS) (Petersen et al., 2016, 2017) rely on earthquake rate models based on declustered catalogs (i.e., catalogs with foreshocks and aftershocks removed), as is common practice in probabilistic seismic hazard analysis. However, standard declustering can remove over 90% of some induced sequences in the CEUS. Some of these earthquakes may still be capable of causing damage or concern (Petersen et al., 2015, 2016). The choices of whether and how to decluster can lead to seismicity rate estimates that vary by up to factors of 10-20 (Llenos and Michael, AGU, 2016). Therefore, in order to improve the accuracy of hazard assessments, we are exploring ways to make forecasts based on full, rather than declustered, catalogs. We focus on Oklahoma, where earthquake rates began increasing in late 2009 mainly in central Oklahoma and ramped up substantially in 2013 with the expansion of seismicity into northern Oklahoma and southern Kansas. We develop earthquake rate models using the space-time Epidemic-Type Aftershock Sequence (ETAS) model (Ogata, JASA, 1988; Ogata, AISM, 1998; Zhuang et al., JASA, 2002), which characterizes both the background seismicity rate as well as aftershock triggering. We examine changes in the model parameters over time, focusing particularly on background rate, which reflects earthquakes that are triggered by external driving forces such as fluid injection rather than other earthquakes. After the model parameters are fit to the seismicity data from a given year, forecasts of the full catalog for the following year can then be made using a suite of 100,000 ETAS model simulations based on those parameters. To evaluate this approach, we develop pseudo-prospective yearly forecasts for Oklahoma from 2013-2016 and compare them with the observations using standard Collaboratory for the Study of Earthquake Predictability tests for consistency.
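
    The temporal ETAS intensity at the heart of such rate models can be sketched directly from its standard definition; parameter values below are illustrative, not the fitted Oklahoma values.

```python
# Sketch: space-independent ETAS conditional intensity,
# lambda(t) = mu + sum_{t_i < t} K * exp(alpha*(M_i - M0)) / (t - t_i + c)**p.
# A yearly forecast would simulate many catalogs from this intensity.
import numpy as np

def etas_intensity(t, cat_t, cat_m,
                   mu=0.5, K=0.02, alpha=1.0, c=0.01, p=1.2, M0=3.0):
    cat_t, cat_m = np.asarray(cat_t), np.asarray(cat_m)
    past = cat_t < t
    contrib = (K * np.exp(alpha * (cat_m[past] - M0))
               / (t - cat_t[past] + c) ** p)
    return mu + contrib.sum()     # events per day

# Toy catalog: times (days) and magnitudes.
cat_t = [10.0, 10.3, 11.0, 40.0]
cat_m = [4.8, 3.2, 3.5, 4.1]
for t in (10.1, 12.0, 60.0):
    print(t, round(etas_intensity(t, cat_t, cat_m), 2))
```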

  10. Region-of-interest determination and bit-rate conversion for H.264 video transcoding

    NASA Astrophysics Data System (ADS)

    Huang, Shu-Fen; Chen, Mei-Juan; Tai, Kuang-Han; Li, Mian-Shiuan

    2013-12-01

    This paper presents a video bit-rate transcoder for the baseline profile of the H.264/AVC standard that fits the available channel bandwidth for the client when transmitting video bit-streams via communication channels. To maintain visual quality for low bit-rate video efficiently, this study analyzes the decoded information in the transcoder and proposes a Bayesian-theorem-based region-of-interest (ROI) determination algorithm. In addition, a curve-fitting scheme is employed to find models of video bit-rate conversion. The transcoded video conforms to the target bit-rate through re-quantization according to our proposed models. After integrating the ROI detection method and the bit-rate transcoding models, the ROI-based transcoder allocates more coding bits to ROI regions and reduces the complexity of the re-encoding procedure for non-ROI regions. Hence, it not only maintains coding quality but also improves the efficiency of video transcoding for low target bit-rates, making real-time transcoding more practical. Experimental results show that the proposed framework achieves significantly better visual quality.

  11. LS-DYNA Implementation of Polymer Matrix Composite Model Under High Strain Rate Impact

    NASA Technical Reports Server (NTRS)

    Zheng, Xia-Hua; Goldberg, Robert K.; Binienda, Wieslaw K.; Roberts, Gary D.

    2003-01-01

    A recently developed constitutive model is implemented into LS-DYNA as a user defined material model (UMAT) to characterize the nonlinear strain rate dependent behavior of polymers. By utilizing this model within a micromechanics technique based on a laminate analogy, an algorithm to analyze the strain rate dependent, nonlinear deformation of a fiber reinforced polymer matrix composite is then developed as a UMAT to simulate the response of these composites under high strain rate impact. The models are designed for shell elements in order to ensure computational efficiency. Experimental and numerical stress-strain curves are compared for two representative polymers and a representative polymer matrix composite, with the analytical model predicting the experimental response reasonably well.

  12. Long-term modeling of glass waste in portland cement- and clay-based matrices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stockman, H.W.; Nagy, K.L.; Morris, C.E.

    A set of "templates" was developed for modeling waste glass interactions with cement-based and clay-based matrices. The templates consist of a modified thermodynamic database and input files for the EQ3/6 reaction path code, containing embedded rate models and compositions for waste glass, cement, and several pozzolanic materials. Significant modifications were made in the thermodynamic data for Th, Pb, Ra, Ba, cement phases, and aqueous silica species. It was found that the cement-containing matrices could increase glass corrosion rates by several orders of magnitude (over matrixless or clay matrix systems), but they also offered the lowest overall solubility for Pb, Ra, Th, and U. Addition of pozzolans to cement decreased calculated glass corrosion rates by up to a factor of 30. It is shown that with current modeling capabilities, the "affinity effect" cannot be trusted to passivate glass if nuclei are available for precipitation of secondary phases that reduce silica activity.

  13. Littoral transport rates in the Santa Barbara Littoral Cell: a process-based model analysis

    USGS Publications Warehouse

    Elias, E. P. L.; Barnard, Patrick L.; Brocatus, John

    2009-01-01

    Identification of sediment transport patterns and pathways is essential for sustainable coastal zone management of the heavily modified coastline of Santa Barbara and Ventura County (California, USA). A process-based model application, based on Delft3D Online Morphology, is used to investigate the littoral transport potential along the Santa Barbara Littoral Cell (between Point Conception and Mugu Canyon). An advanced optimization procedure is applied to enable annual sediment transport computations by reducing the ocean wave climate to 10 wave height-direction classes. Modeled littoral transport rates compare well with observed dredging volumes, and erosion or sedimentation hotspots coincide with the modeled divergence and convergence of the transport gradients. Sediment transport rates are strongly dependent on the alongshore variation in wave height due to wave sheltering, diffraction, and focusing by the Northern Channel Islands, and on the local orientation of the geologically controlled coastline. Local transport gradients exceed the net eastward littoral transport and are considered a primary driver of hot-spot erosion.

  14. Chemistry Resolved Kinetic Flow Modeling of TATB Based Explosives

    NASA Astrophysics Data System (ADS)

    Vitello, Peter; Fried, Lawrence; Howard, Mike; Levesque, George; Souers, Clark

    2011-06-01

    Detonation waves in insensitive, TATB-based explosives are believed to have multiple time-scale regimes. The initial burn rate of such explosives has a sub-microsecond time scale; however, significant late-time slow release of energy is believed to occur due to diffusion-limited growth of carbon. In the intermediate time scale, concentrations of product species likely change from being in equilibrium to being kinetic-rate controlled. We use the thermo-chemical code CHEETAH linked to ALE hydrodynamics codes to model detonations. We term our model chemistry-resolved kinetic flow, as CHEETAH tracks the time-dependent concentrations of individual species in the detonation wave and calculates EOS values based on those concentrations. A validation suite of model simulations compared to recent high-fidelity metal push experiments at ambient and cold temperatures has been developed. We present here a study of multi-time-scale kinetic rate effects for these experiments. Prepared by LLNL under Contract DE-AC52-07NA27344.

  15. A semi-empirical model for the complete orientation dependence of the growth rate for vapor phase epitaxy - Chloride VPE of GaAs

    NASA Technical Reports Server (NTRS)

    Seidel-Salinas, L. K.; Jones, S. H.; Duva, J. M.

    1992-01-01

    A semi-empirical model has been developed to determine the complete crystallographic orientation dependence of the growth rate for vapor phase epitaxy (VPE). Previous researchers have been able to determine this dependence for a limited range of orientations; however, our model yields relative growth rate information for any orientation. This model for diamond and zincblende structure materials is based on experimental growth rate data, gas phase diffusion, and surface reactions. Data for GaAs chloride VPE is used to illustrate the model. The resulting growth rate polar diagrams are used in conjunction with Wulff constructions to simulate epitaxial layer shapes as grown on patterned substrates. In general, this model can be applied to a variety of materials and vapor phase epitaxy systems.

  16. Resolving Microzooplankton Functional Groups In A Size-Structured Planktonic Model

    NASA Astrophysics Data System (ADS)

    Taniguchi, D.; Dutkiewicz, S.; Follows, M. J.; Jahn, O.; Menden-Deuer, S.

    2016-02-01

    Microzooplankton are important marine grazers, often consuming a large fraction of primary productivity. They consist of a great diversity of organisms with different behaviors, characteristics, and rates. This functional diversity, and its consequences, are not currently reflected in large-scale ocean ecological simulations. How should these organisms be represented, and what are the implications for their biogeography? We develop a size-structured, trait-based model to characterize a diversity of microzooplankton functional groups. We compile and examine size-based laboratory data on the traits, revealing some patterns with size and functional group that we interpret with mechanistic theory. Fitting the model to the data provides parameterizations of key rates and properties, which we employ in a numerical ocean model. The diversity of grazing preference, rates, and trophic strategies enables the coexistence of different functional groups of micro-grazers under various environmental conditions, and the model produces testable predictions of the biogeography.

  17. An Integrated Intrusion Detection Model of Cluster-Based Wireless Sensor Network

    PubMed Central

    Sun, Xuemei; Yan, Bo; Zhang, Xinzhong; Rong, Chuitian

    2015-01-01

    Considering wireless sensor network characteristics, this paper combines anomaly and misuse detection and proposes an integrated intrusion detection model for cluster-based wireless sensor networks, aiming at enhancing the detection rate and reducing the false rate. An Adaboost algorithm with a hierarchical structure is used for anomaly detection at sensor nodes, cluster-head nodes and Sink nodes. A Cultural-Algorithm and Artificial-Fish-Swarm-Algorithm optimized Back Propagation network is applied to misuse detection at the Sink node. Extensive simulations demonstrate that this integrated model has strong intrusion detection performance. PMID:26447696
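
    As a rough illustration of the anomaly-detection tier, a boosted ensemble of weak learners can be trained on labeled traffic features. The sketch below uses scikit-learn's stock AdaBoost on synthetic data; it does not reproduce the paper's hierarchical structure or its Cultural-Algorithm / Artificial-Fish-Swarm optimization, and all feature semantics are hypothetical:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for per-node traffic features (packet rate,
# forwarding delay, ...) labeled normal (0) / anomalous (1).
X, y = make_classification(n_samples=2000, n_features=10,
                           weights=[0.9], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# One boosted detector; the paper stacks such detectors hierarchically
# (sensor -> cluster-head -> Sink), which is not reproduced here.
clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

detection_rate = clf.score(X_te[y_te == 1], y_te[y_te == 1])
false_rate = 1.0 - clf.score(X_te[y_te == 0], y_te[y_te == 0])
print(f"detection rate {detection_rate:.2f}, false rate {false_rate:.2f}")
```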

  18. Development of braided rope seals for hypersonic engine applications. Part 2: Flow modeling

    NASA Technical Reports Server (NTRS)

    Mutharasan, Rajakkannu; Steinetz, Bruce M.; Tao, Xiaoming; Ko, Frank

    1991-01-01

    Two models based on the Kozeny-Carman equation were developed to analyze the fluid flow through a new class of braided rope seals under development for advanced hypersonic engines. A hybrid seal geometry consisting of a braided sleeve and a substantial amount of longitudinal fibers with high packing density was selected for development based on its low leakage rates. The models developed allow prediction of the gas leakage rate as a function of fiber diameter, fiber packing density, gas properties, and pressure drop across the seal.
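
    The Kozeny-Carman equation relates permeability to fiber diameter and packing density, and Darcy's law then gives the leakage rate for a given pressure drop. A minimal sketch (the Kozeny constant and the seal dimensions below are illustrative assumptions, not the paper's values):

```python
def kozeny_carman_permeability(d_f, eps, c=180.0):
    """Permeability of a fibrous/granular bed (m^2).

    d_f : effective fiber diameter (m)
    eps : porosity = 1 - packing density
    c   : Kozeny constant (~180 for packed spheres; fiber beds differ)
    """
    return d_f**2 * eps**3 / (c * (1.0 - eps)**2)

def leakage_rate(d_f, packing, dp, mu, length, area):
    """Darcy volumetric leakage (m^3/s) across a seal of given length/area."""
    k = kozeny_carman_permeability(d_f, 1.0 - packing)
    return k * area * dp / (mu * length)

# Illustrative numbers only: 10-um fibers, 80% packing, 0.1 MPa drop of air
print(leakage_rate(d_f=10e-6, packing=0.80, dp=1e5, mu=1.8e-5,
                   length=6e-3, area=5e-5))
```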

  19. An Integrated Intrusion Detection Model of Cluster-Based Wireless Sensor Network.

    PubMed

    Sun, Xuemei; Yan, Bo; Zhang, Xinzhong; Rong, Chuitian

    2015-01-01

    Considering wireless sensor network characteristics, this paper combines anomaly and misuse detection and proposes an integrated intrusion detection model for cluster-based wireless sensor networks, aiming at enhancing the detection rate and reducing the false rate. An Adaboost algorithm with a hierarchical structure is used for anomaly detection at sensor nodes, cluster-head nodes and Sink nodes. A Cultural-Algorithm and Artificial-Fish-Swarm-Algorithm optimized Back Propagation network is applied to misuse detection at the Sink node. Extensive simulations demonstrate that this integrated model has strong intrusion detection performance.

  20. Reconstruction of interaction rate in holographic dark energy

    NASA Astrophysics Data System (ADS)

    Mukherjee, Ankan

    2016-11-01

    The present work is based on the holographic dark energy model with the Hubble horizon as the infrared cut-off. The interaction rate between dark energy and dark matter has been reconstructed for three different parameterizations of the deceleration parameter. Observational constraints on the model parameters have been obtained by maximum likelihood analysis using the observational Hubble parameter data (OHD), type Ia supernova data (SNe), baryon acoustic oscillation data (BAO) and the distance prior of the cosmic microwave background (CMB), namely the CMB shift parameter data (CMBShift). The interaction rate obtained in the present work remains positive throughout and increases with expansion. It is very similar to the result obtained by Sen and Pavon [1], where the interaction rate was reconstructed for a parametrization of the dark energy equation of state. Tighter constraints on the interaction rate have been obtained in the present work as it is based on larger data sets. The nature of the dark energy equation of state parameter has also been studied for the present models. Though the reconstruction is done from different parametrizations, the overall nature of the interaction rate is very similar in all the cases. Different information criteria and the Bayesian evidence, which have been invoked in the context of model selection, show that these models are in close proximity to one another.

  1. A mesoscopic reaction rate model for shock initiation of multi-component PBX explosives.

    PubMed

    Liu, Y R; Duan, Z P; Zhang, Z Y; Ou, Z C; Huang, F L

    2016-11-05

    The primary goal of this research is to develop a three-term mesoscopic reaction rate model, consisting of a hot-spot ignition term, a low-pressure slow-burning term and a high-pressure fast-reaction term, for shock initiation of multi-component Plastic Bonded Explosives (PBX). Based on the DZK hot-spot model for a single-component PBX explosive, the hot-spot ignition term and its reaction rate are obtained through a "mixing rule" over the explosive components; new expressions for the low-pressure slow-burning and high-pressure fast-reaction terms are obtained by relating the reaction rate of the multi-component PBX explosive to those of its components, starting from the corresponding terms of a mesoscopic reaction rate model. For verification, the new reaction rate model is incorporated into the DYNA2D code to simulate the shock initiation process of the PBXC03 and PBXC10 multi-component PBX explosives, and the numerical pressure histories at different Lagrange locations in the explosive are found to be in good agreement with previous experimental data. Copyright © 2016 Elsevier B.V. All rights reserved.
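
    The paper's calibrated expressions are not reproduced here, but a generic three-term burn model of the kind described (an ignition term plus slow and fast growth terms, in the spirit of ignition-and-growth models) can be sketched as follows, with placeholder constants:

```python
def reaction_rate(lam, p, eta):
    """Hypothetical three-term burn rate dlam/dt (lam = reacted fraction).

    Term 1: hot-spot ignition, driven by compression eta = rho/rho0 - 1
    Term 2: low-pressure slow burning
    Term 3: high-pressure fast reaction
    All constants below are placeholders, not the paper's calibration.
    """
    I, b, x, lam_ig = 4.0e6, 0.667, 4.0, 0.02
    G1, c, d, y = 80.0, 0.667, 0.111, 1.0
    G2, e, g, z = 40.0, 0.667, 0.333, 2.0
    r = I * (1 - lam)**b * max(eta, 0.0)**x if lam < lam_ig else 0.0
    r += G1 * (1 - lam)**c * lam**d * p**y      # slow burning
    r += G2 * (1 - lam)**e * lam**g * p**z      # fast reaction
    return r

print(reaction_rate(lam=0.01, p=5.0, eta=0.3))
```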

  2. The Design of an Instructional Model Based on Connectivism and Constructivism to Create Innovation in Real World Experience

    ERIC Educational Resources Information Center

    Jirasatjanukul, Kanokrat; Jeerungsuwan, Namon

    2018-01-01

    The objectives of the research were to (1) design an instructional model based on Connectivism and Constructivism to create innovation in real-world experience, and (2) assess the designed instructional model. The research involved 2 stages: (1) the instructional model design and (2) the instructional model rating. The sample…

  3. Estimating methane emissions from landfills based on rainfall, ambient temperature, and waste composition: The CLEEN model.

    PubMed

    Karanjekar, Richa V; Bhatt, Arpita; Altouqui, Said; Jangikhatoonabad, Neda; Durai, Vennila; Sattler, Melanie L; Hossain, M D Sahadat; Chen, Victoria

    2015-12-01

    Accurately estimating landfill methane emissions is important for quantifying a landfill's greenhouse gas emissions and power generation potential. Current models, including LandGEM and IPCC, often greatly simplify the treatment of factors like rainfall and ambient temperature, which can substantially impact gas production. The newly developed Capturing Landfill Emissions for Energy Needs (CLEEN) model aims to improve landfill methane generation estimates while requiring inputs that are fairly easy to obtain: waste composition, annual rainfall, and ambient temperature. To develop the model, methane generation was measured from 27 laboratory-scale landfill reactors with varying waste compositions (ranging from 0% to 100%), average rainfall rates of 2, 6, and 12 mm/day, and temperatures of 20, 30, and 37°C, according to a statistical experimental design. Refuse components considered were the major biodegradable wastes (food, paper, yard/wood, and textile) as well as inert inorganic waste. Based on the data collected, a multiple linear regression equation (R2 = 0.75) was developed to predict first-order methane generation rate constant values k as a function of waste composition, annual rainfall, and temperature. Because laboratory methane generation rates exceed field rates, a second scale-up regression equation for k was developed using actual gas-recovery data from 11 landfills in high-income countries with conventional operation. The CLEEN model was then built by incorporating both regression equations into a first-order-decay model for estimating methane generation rates from landfills. CLEEN model values were compared to actual field data from 6 US landfills, and to estimates from LandGEM and IPCC; for 4 of the 6 cases, the CLEEN estimates were closest to the actual values. Copyright © 2015 Elsevier Ltd. All rights reserved.
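
    The backbone of such models is first-order decay applied to each year's waste deposit. A minimal LandGEM-style sketch (the CLEEN model's regression for k and its scale-up step are not reproduced; all numbers are illustrative):

```python
import numpy as np

def methane_generation(masses, k, L0, years):
    """First-order-decay CH4 generation (m^3/yr), LandGEM-style.

    masses : waste accepted in each year i (Mg)
    k      : first-order rate constant (1/yr); in the CLEEN model this
             is regressed on waste composition, rainfall and temperature
    L0     : methane generation potential (m^3 CH4 / Mg waste)
    years  : years since opening at which to evaluate generation
    """
    q = np.zeros(len(years))
    for i, m in enumerate(masses):            # each year's deposit decays
        for j, t in enumerate(years):
            age = t - i
            if age > 0:
                q[j] += k * L0 * m * np.exp(-k * age)
    return q

print(methane_generation(masses=[1e4] * 5, k=0.05, L0=100,
                         years=range(1, 21))[:5])
```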

  4. Measurement and Modeling of Respiration Rate of Tomato (Cultivar Roma) for Modified Atmosphere Storage.

    PubMed

    Kandasamy, Palani; Moitra, Ranabir; Mukherjee, Souti

    2015-01-01

    Experiments were conducted to determine the respiration rate of tomato at 10, 20 and 30 °C using a closed respiration system. Oxygen depletion and carbon dioxide accumulation in the system containing tomato were monitored. The respiration rate was found to decrease with increasing CO2 and decreasing O2 concentration. A Michaelis-Menten type model based on enzyme kinetics was evaluated using the experimental data generated for predicting the respiration rate. The model parameters obtained from the respiration rate at different O2 and CO2 concentration levels were used to fit the model across the storage temperatures. The fit was fair (R2 = 0.923 to 0.970) when the respiration rate was expressed as a function of O2 concentration. Since the inhibition constant for CO2 tended towards negative values, the model was modified as a function of O2 concentration only. The modified model was fitted to the experimental data and showed good agreement (R2 = 0.998) with the experimentally estimated respiration rate.
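
    The modified model reduces to a one-substrate Michaelis-Menten form, which can be fitted directly to measured rates. A sketch with hypothetical data (the units and values below are illustrative, not the paper's measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def respiration_rate(o2, vm, km):
    """Michaelis-Menten respiration: R = Vm*[O2]/(Km + [O2]).

    The modified model drops the CO2 inhibition term, leaving O2
    concentration as the only driver.
    """
    return vm * o2 / (km + o2)

# Hypothetical data: O2 (%) vs measured respiration (mL kg^-1 h^-1)
o2 = np.array([2.0, 5.0, 8.0, 12.0, 16.0, 21.0])
r = np.array([4.1, 7.9, 10.2, 12.0, 13.1, 13.8])
(vm, km), _ = curve_fit(respiration_rate, o2, r, p0=(15.0, 5.0))
print(f"Vm = {vm:.1f}, Km = {km:.1f}")
```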

  5. New activity-based funding model for Australian private sector overnight rehabilitation cases: the rehabilitation Australian National Sub-Acute and Non-Acute Patient (AN-SNAP) model.

    PubMed

    Hanning, Brian; Predl, Nicolle

    2015-09-01

    Traditional overnight rehabilitation payment models in the private sector are not based on a rigorous classification system and vary greatly between contracts with no consideration of patient complexity. The payment rates are not based on relative cost and the length-of-stay (LOS) point at which a reduced rate applies (step downs) varies markedly. The rehabilitation Australian National Sub-Acute and Non-Acute Patient (AN-SNAP) model (RAM), which has been in place for over 2 years in some private hospitals, bases payment on a rigorous classification system, relative cost and industry LOS. RAM is in the process of being rolled out more widely. This paper compares and contrasts RAM with traditional overnight rehabilitation payment models. It considers the advantages of RAM for hospitals and Australian Health Service Alliance. It also considers payment model changes in the context of maintaining industry consistency with Electronic Claims Lodgement and Information Processing System Environment (ECLIPSE) and health reform generally.

  6. Design and Performance of the Sorbent-Based Atmosphere Revitalization System for Orion

    NASA Technical Reports Server (NTRS)

    Ritter, James A.; Reynolds, Steven P.; Ebner, Armin D.; Knox, James C.; LeVan, M. Douglas

    2007-01-01

    Validation and simulations of a real-time dynamic cabin model were conducted on the sorbent-based atmosphere revitalization system for Orion. The dynamic cabin model, which updates the concentration of H2O and CO2 every second during the simulation, was able to predict the steady state model values for H2O and CO2 for long periods of steady metabolic production for a 4-person crew. It also showed similar trends for the exercise periods, where there were quick changes in production rates. Once validated, the cabin model was used to determine the effects of feed flow rate, cabin volume and column volume. A higher feed flow rate reduced the cabin concentrations only slightly over the base case, a larger cabin volume was able to reduce the cabin concentrations even further, and the lower column volume led to much higher cabin concentrations. Finally, the cabin model was used to determine the effect of the amount of silica gel in the column. As the amount increased, the cabin concentration of H2O decreased, but the cabin concentration of CO2 increased.
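
    As a rough illustration of the cabin model's structure, each constituent obeys a single well-mixed mass-balance ODE stepped once per second. A minimal sketch, assuming a constant metabolic generation rate and a sorbent bed that returns a fixed fraction of its inlet concentration (both simplifications, not the actual Orion system behavior):

```python
def step_cabin(c, gen, flow, c_out, volume, dt=1.0):
    """One explicit-Euler update of a well-mixed cabin concentration.

    c      : current cabin concentration (kg/m^3)
    gen    : metabolic generation rate (kg/s), e.g. crew CO2 or H2O
    flow   : feed flow rate through the sorbent beds (m^3/s)
    c_out  : concentration returned by the beds (kg/m^3)
    volume : cabin free volume (m^3)
    """
    dcdt = (gen + flow * (c_out - c)) / volume
    return c + dt * dcdt

# Larger cabin volume damps swings; a less effective bed (higher c_out)
# raises the steady-state cabin concentration, as in the abstract.
c = 0.0
for _ in range(3600):                     # one hour, 1-s updates
    c = step_cabin(c, gen=1.0e-5, flow=0.01, c_out=0.2 * c, volume=20.0)
print(c)
```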

  7. Multi-scale Modeling of the Impact Response of a Strain Rate Sensitive High-Manganese Austenitic Steel

    NASA Astrophysics Data System (ADS)

    Önal, Orkun; Ozmenci, Cemre; Canadinc, Demircan

    2014-09-01

    A multi-scale modeling approach was applied to predict the impact response of a strain rate sensitive high-manganese austenitic steel. The roles of texture, geometry and strain rate sensitivity were successfully taken into account all at once by coupling crystal plasticity and finite element (FE) analysis. Specifically, crystal plasticity was utilized to obtain the multi-axial flow rule at different strain rates based on the experimental deformation response under uniaxial tensile loading. The equivalent stress - equivalent strain response was then incorporated into the FE model for the sake of a more representative hardening rule under impact loading. The current results demonstrate that reliable predictions can be obtained by proper coupling of crystal plasticity and FE analysis even if the experimental flow rule of the material is acquired under uniaxial loading and at moderate strain rates that are significantly slower than those attained during impact loading. Furthermore, the current findings also demonstrate the need for an experiment-based multi-scale modeling approach for the sake of reliable predictions of the impact response.

  8. Kinetic Modeling of Slow Energy Release in Non-Ideal Carbon Rich Explosives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vitello, P; Fried, L; Glaesemann, K

    2006-06-20

    We present here the first self-consistent kinetics-based model for long time-scale energy release in detonation waves in the non-ideal explosive LX-17. Non-ideal, insensitive carbon-rich explosives, such as those based on TATB, are believed to have significant late-time slow release in energy. One proposed source of this energy is diffusion-limited growth of carbon clusters. In this paper we consider the late-time energy release problem in detonation waves using the thermochemical code CHEETAH linked to a multidimensional ALE hydrodynamics model. The linked CHEETAH-ALE model treats slowly reacting chemical species using kinetic rate laws, with chemical equilibrium assumed for species coupled via fast time-scale reactions. In the model presented here we include separate rate equations for the transformation of the un-reacted explosive to product gases and for the growth of a small particulate form of condensed graphite to a large particulate form. The small particulate graphite is assumed to be in chemical equilibrium with the gaseous species, allowing for coupling between the instantaneous thermodynamic state and the production of graphite clusters. For the explosive burn rate a pressure-dependent rate law was used. Low-pressure freezing of the gas species mass fractions was also included to account for regions where the kinetic coupling rates become longer than the hydrodynamic time-scales. The model rate parameters were calibrated using cylinder and rate-stick experimental data. Excellent long-time agreement and size-effect results were achieved.

  9. Flow and fracture behavior of aluminum alloy 6082-T6 at different tensile strain rates and triaxialities.

    PubMed

    Chen, Xuanzhen; Peng, Yong; Peng, Shan; Yao, Song; Chen, Chao; Xu, Ping

    2017-01-01

    This study aims to investigate the flow and fracture behavior of aluminum alloy 6082-T6 (AA6082-T6) at different strain rates and triaxialities. Two groups of Charpy impact tests were carried out to further investigate its dynamic impact fracture properties, and a series of tensile tests and numerical simulations based on finite element analysis (FEA) were performed. Experimental data on smooth specimens under strain rates ranging from 0.0001 to 3400 s-1 show that AA6082-T6 is rather insensitive to strain rate in general. However, clear rate sensitivity was observed in the range of 0.001 to 1 s-1, although this characteristic is counteracted by adiabatic heating of the specimens at high strain rates. A Johnson-Cook (J-C) constitutive model was proposed based on the tensile tests at different strain rates. The average stress triaxiality and equivalent plastic strain at fracture obtained from the numerical simulations were used to calibrate the J-C fracture model. Both the J-C constitutive model and the fracture model were employed in numerical simulations, and the results were compared with experimental results. The calibrated J-C fracture model exhibits higher accuracy than the J-C fracture model obtained by the common method in predicting the fracture behavior of AA6082-T6. Finally, Scanning Electron Microscope (SEM) fractographs of specimens with different initial stress triaxialities were analyzed; the magnified fractographs indicate that high initial stress triaxiality likely results in dimple fracture.
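
    For reference, the Johnson-Cook flow stress combines a strain-hardening term, a logarithmic strain-rate term and a thermal-softening term. A sketch with placeholder constants (not the calibrated AA6082-T6 parameters from the paper):

```python
import numpy as np

def johnson_cook_stress(eps, eps_rate, T,
                        A=290e6, B=180e6, n=0.45, C=0.01,
                        eps_rate0=1e-3, T_room=293.0, T_melt=855.0, m=1.0):
    """Johnson-Cook flow stress (Pa): the constants here are
    illustrative placeholders, not the paper's calibration."""
    T_star = (T - T_room) / (T_melt - T_room)
    strain_term = A + B * eps**n                       # hardening
    rate_term = 1.0 + C * np.log(eps_rate / eps_rate0)  # rate sensitivity
    temp_term = 1.0 - np.clip(T_star, 0.0, 1.0)**m      # thermal softening
    return strain_term * rate_term * temp_term

print(johnson_cook_stress(eps=0.05, eps_rate=1.0, T=293.0))
```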

  10. Use of an uncertainty analysis for genome-scale models as a prediction tool for microbial growth processes in subsurface environments.

    PubMed

    Klier, Christine

    2012-03-06

    The integration of genome-scale, constraint-based models of microbial cell function into simulations of contaminant transport and fate in complex groundwater systems is a promising approach to help characterize the metabolic activities of microorganisms in natural environments. In constraint-based modeling, the specific uptake flux rates of external metabolites are usually determined by Michaelis-Menten kinetic theory. However, extensive data sets based on experimentally measured values are not always available. In this study, a genome-scale model of Pseudomonas putida was used to study the key issue of uncertainty arising from the parametrization of the influx of two growth-limiting substrates: oxygen and toluene. The results showed that simulated growth rates are highly sensitive to substrate affinity constants and that uncertainties in specific substrate uptake rates have a significant influence on the variability of simulated microbial growth. Michaelis-Menten kinetic theory does not, therefore, seem to be appropriate for descriptions of substrate uptake processes in the genome-scale model of P. putida. Microbial growth rates of P. putida in subsurface environments can only be accurately predicted if the processes of complex substrate transport and microbial uptake regulation are sufficiently understood in natural environments and if data-driven uptake flux constraints can be applied.
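
    In constraint-based models, Michaelis-Menten kinetics typically enter as an upper bound on the substrate exchange flux; the study's sensitivity concern is precisely that this bound depends on poorly known affinity constants. A minimal sketch (values are hypothetical):

```python
def uptake_bound(v_max, km, conc):
    """Michaelis-Menten cap on a substrate exchange flux
    (mmol gDW^-1 h^-1) for a constraint-based (FBA) model.

    Simulated growth is highly sensitive to km (the affinity
    constant) and v_max, which are poorly known for field conditions.
    """
    return v_max * conc / (km + conc)

# e.g. bounds on oxygen or toluene uptake under two assumed affinities
for km in (0.001, 0.1):                       # mM, hypothetical
    print(km, uptake_bound(v_max=10.0, km=km, conc=0.05))
```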

  11. Game-Theoretic Models of Information Overload in Social Networks

    NASA Astrophysics Data System (ADS)

    Borgs, Christian; Chayes, Jennifer; Karrer, Brian; Meeder, Brendan; Ravi, R.; Reagans, Ray; Sayedi, Amin

    We study the effect of information overload on user engagement in an asymmetric social network like Twitter. We introduce simple game-theoretic models that capture rate competition between celebrities producing updates in such networks where users non-strategically choose a subset of celebrities to follow based on the utility derived from high quality updates as well as disutility derived from having to wade through too many updates. Our two variants model the two behaviors of users dropping some potential connections (followership model) or leaving the network altogether (engagement model). We show that under a simple formulation of celebrity rate competition, there is no pure strategy Nash equilibrium under the first model. We then identify special cases in both models when pure rate equilibria exist for the celebrities: For the followership model, we show existence of a pure rate equilibrium when there is a global ranking of the celebrities in terms of the quality of their updates to users. This result also generalizes to the case when there is a partial order consistent with all the linear orders of the celebrities based on their qualities to the users. Furthermore, these equilibria can be computed in polynomial time. For the engagement model, pure rate equilibria exist when all users are interested in the same number of celebrities, or when they are interested in at most two. Finally, we also give a finite though inefficient procedure to determine if pure equilibria exist in the general case of the followership model.

  12. Testability analysis on a hydraulic system in a certain equipment based on simulation model

    NASA Astrophysics Data System (ADS)

    Zhang, Rui; Cong, Hua; Liu, Yuanhong; Feng, Fuzhou

    2018-03-01

    To address the complicated structure of hydraulic systems and the shortage of fault statistics, a multi-valued testability analysis method based on a simulation model is proposed. Using an AMESim simulation model, the method injects simulated faults and records the variation of test parameters, such as pressure and flow rate, at each test point relative to normal conditions. A multi-valued fault-test dependency matrix is thus established, from which the fault detection rate (FDR) and fault isolation rate (FIR) are calculated. The testability and fault diagnosis capability of the system are then analyzed and evaluated, reaching only 54% (FDR) and 23% (FIR). To improve the testability performance of the system, the number and position of the test points are optimized. Results show the proposed test placement scheme can address the difficulty, inefficiency and high cost of maintaining the system.
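
    Given a fault-test dependency matrix, the detection rate is the fraction of faults that change at least one test reading, and the isolation rate is the fraction of detected faults whose signature is unique. A small sketch with a hypothetical matrix:

```python
import numpy as np

# Hypothetical fault-test dependency matrix D: D[i, j] != 0 means
# fault i changes the reading at test point j (rows: faults).
D = np.array([[1, 0, 0],
              [1, 1, 0],
              [1, 1, 0],      # same signature as fault 2 -> not isolable
              [0, 0, 0]])     # silent fault -> not detectable

detected = D.any(axis=1)
fdr = detected.mean()                         # fault detection rate

sigs = [tuple(row) for row in D[detected]]    # signatures of detected faults
isolated = sum(sigs.count(s) == 1 for s in sigs)
fir = isolated / detected.sum()               # fault isolation rate

print(f"FDR = {fdr:.0%}, FIR = {fir:.0%}")    # 75% and ~33% here
```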

  13. Recent, climate-driven river incision rate fluctuations in the Mercantour crystalline massif, southern French Alps

    NASA Astrophysics Data System (ADS)

    Petit, C.; Goren, L.; Rolland, Y.; Bourlès, D.; Braucher, R.; Saillard, M.; Cassol, D.

    2017-06-01

    We present a new geomorphological analysis of the Tinée River tributaries in the southern French Alps based on numerical inverse and forward modelling of their longitudinal profiles. We model their relative uplift history with respect to the main channel, hence the incision rate history of this channel. Inverse models show that all tributaries have consistent incision rate histories with alternating high and low values. A comparison with global temperature curves shows that these variations correlate with Quaternary climate changes. We suggest that during warm periods a wave of regressive erosion propagates along the Tinée River, while its tributaries deeply incise their substratum to catch up with the falling base level. We also show that the post-140 ka history of this landscape evolution is dominated by fluvial incision. We then perform forward models of river incision and simulate the incision of the Tinée River system over a time span of 600 ka. This model allows us to extract the temporal and spatial incision rate variations of the Tinée River. With a background of a few mm yr-1, the incision rate can increase to more than 1 cm yr-1 during short periods due to climatic oscillations. This result is compatible with published cosmogenic-nuclide dating, which yielded incision rates from 0.2 to 24 mm yr-1. The part of the channel located between 12 and 20 km downstream from the source has undergone several periods of rapid incision, which could explain the steep hillslopes and the triggering of a landslide ∼10 ka ago.

  14. Estimating wildland fire rate of spread in a spatially nonuniform environment

    Treesearch

    Francis M Fujioka

    1985-01-01

    Estimating rate of fire spread is a key element in planning for effective fire control. Land managers use the Rothermel spread model, but the model assumptions are violated when fuel, weather, and topography are nonuniform. This paper compares three averaging techniques--arithmetic mean of spread rates, spread based on mean fuel conditions, and harmonic mean of spread...

  15. A new lattice hydrodynamic model based on control method considering the flux change rate and delay feedback signal

    NASA Astrophysics Data System (ADS)

    Qin, Shunda; Ge, Hongxia; Cheng, Rongjun

    2018-02-01

    In this paper, a new lattice hydrodynamic model is proposed that takes delay feedback and the flux change rate effect into account on a single lane. The linear stability condition of the new model is derived via control theory. Using the nonlinear analysis method, the mKdV equation near the critical point is deduced to describe traffic congestion. Numerical simulations demonstrate the advantage of the new model in suppressing traffic jams when the flux change rate effect is considered in the delay feedback model.

  16. The Dynamics of the Law of Effect: A Comparison of Models

    ERIC Educational Resources Information Center

    Navakatikyan, Michael A.; Davison, Michael

    2010-01-01

    Dynamical models based on three steady-state equations for the law of effect were constructed under the assumption that behavior changes in proportion to the difference between current behavior and the equilibrium implied by current reinforcer rates. A comparison of dynamical models showed that a model based on Navakatikyan's (2007) two-component…

  17. Why Doesn't the "High School Drop Out Rate" Drop?

    ERIC Educational Resources Information Center

    Truby, William F.

    2016-01-01

    This article provides information, questions, and answers about current approaches to dropping the dropout rate of our students. For example, our current model of education is based on the mass production or assembly line model promoted by Henry Ford back in early years of the 1900s (1900-1920). This model served both factory production and…

  18. Team-based versus traditional primary care models and short-term outcomes after hospital discharge.

    PubMed

    Riverin, Bruno D; Li, Patricia; Naimi, Ashley I; Strumpf, Erin

    2017-04-24

    Strategies to reduce hospital readmission have been studied mainly at the local level. We assessed associations between population-wide policies supporting team-based primary care delivery models and short-term outcomes after hospital discharge. We extracted claims data on hospital admissions for any cause from 2002 to 2009 in the province of Quebec. We included older or chronically ill patients enrolled in team-based or traditional primary care practices. Outcomes were rates of readmission, emergency department visits and mortality in the 90 days following hospital discharge. We used inverse probability weighting to balance exposure groups on covariates and used marginal structural survival models to estimate rate differences and hazard ratios. We included 620 656 index admissions involving 312 377 patients. Readmission rates at any point in the 90-day post-discharge period were similar between primary care models. Patients enrolled in team-based primary care practices had lower 30-day rates of emergency department visits not associated with readmission (adjusted difference 7.5 per 1000 discharges, 95% confidence interval [CI] 4.2 to 10.8) and lower 30-day mortality (adjusted difference 3.8 deaths per 1000 discharges, 95% CI 1.7 to 5.9). The 30-day difference for mortality differed according to morbidity level (moderate morbidity: 1.0 fewer deaths per 1000 discharges in team-based practices, 95% CI 0.3 more to 2.3 fewer deaths; very high morbidity: 4.2 fewer deaths per 1000 discharges, 95% CI 3.0 to 5.3; p < 0.001). Our study showed that enrolment in the newer team-based primary care practices was associated with lower rates of postdischarge emergency department visits and death. We did not observe differences in readmission rates, which suggests that more targeted or intensive efforts may be needed to affect this outcome. © 2017 Canadian Medical Association or its licensors.

  19. Rate and timing cues associated with the cochlear amplifier: level discrimination based on monaural cross-frequency coincidence detection.

    PubMed

    Heinz, M G; Colburn, H S; Carney, L H

    2001-10-01

    The perceptual significance of the cochlear amplifier was evaluated by predicting level-discrimination performance based on stochastic auditory-nerve (AN) activity. Performance was calculated for three models of processing: the optimal all-information processor (based on discharge times), the optimal rate-place processor (based on discharge counts), and a monaural coincidence-based processor that uses a non-optimal combination of rate and temporal information. An analytical AN model included compressive magnitude and level-dependent-phase responses associated with the cochlear amplifier, and high-, medium-, and low-spontaneous-rate (SR) fibers with characteristic frequencies (CFs) spanning the AN population. The relative contributions of nonlinear magnitude and nonlinear phase responses to level encoding were compared by using four versions of the model, which included and excluded the nonlinear gain and phase responses in all possible combinations. Nonlinear basilar-membrane (BM) phase responses are robustly encoded in near-CF AN fibers at low frequencies. Strongly compressive BM responses at high frequencies near CF interact with the high thresholds of low-SR AN fibers to produce large dynamic ranges. Coincidence performance based on a narrow range of AN CFs was robust across a wide dynamic range at both low and high frequencies, and matched human performance levels. Coincidence performance based on all CFs demonstrated the "near-miss" to Weber's law at low frequencies and the high-frequency "mid-level bump." Monaural coincidence detection is a physiologically realistic mechanism that is extremely general in that it can utilize AN information (average-rate, synchrony, and nonlinear-phase cues) from all SR groups.

  20. Deciphering mRNA Sequence Determinants of Protein Production Rate

    NASA Astrophysics Data System (ADS)

    Szavits-Nossan, Juraj; Ciandrini, Luca; Romano, M. Carmen

    2018-03-01

    One of the greatest challenges in biophysical models of translation is to identify coding sequence features that affect the rate of translation and therefore the overall protein production in the cell. We propose an analytic method to solve a translation model based on the inhomogeneous totally asymmetric simple exclusion process, which allows us to unveil simple design principles of nucleotide sequences determining protein production rates. Our solution shows an excellent agreement when compared to numerical genome-wide simulations of S. cerevisiae transcript sequences and predicts that the first 10 codons, which is the ribosome footprint length on the mRNA, together with the value of the initiation rate, are the main determinants of protein production rate under physiological conditions. Finally, we interpret the obtained analytic results based on the evolutionary role of the codons' choice for regulating translation rates and ribosome densities.
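
    The underlying picture can be illustrated with a small Gillespie-style simulation of the inhomogeneous TASEP: particles (ribosomes) initiate at rate alpha, hop along sites (codons) at site-specific rates when the next site is free, and terminate at rate beta. A simplified sketch (point-size particles and placeholder rates; the paper's analytic solution and the 10-codon ribosome footprint are not reproduced):

```python
import numpy as np

def tasep_protein_rate(rates, alpha, beta, steps=50000, seed=1):
    """Gillespie simulation of an open, inhomogeneous TASEP.

    rates : hop rate out of each codon i (to codon i+1)
    alpha : initiation rate; beta : termination rate
    Returns completed proteins per unit time (arbitrary units).
    """
    rng = np.random.default_rng(seed)
    L = len(rates)
    occ = np.zeros(L, dtype=bool)
    t, done = 0.0, 0
    for _ in range(steps):
        moves = []                                  # enabled transitions
        if not occ[0]:
            moves.append(("in", alpha))
        for i in range(L - 1):
            if occ[i] and not occ[i + 1]:
                moves.append((i, rates[i]))
        if occ[-1]:
            moves.append(("out", beta))
        total = sum(r for _, r in moves)
        t += rng.exponential(1.0 / total)           # time to next event
        pick = rng.choice(len(moves), p=[r / total for _, r in moves])
        mv = moves[pick][0]
        if mv == "in":
            occ[0] = True
        elif mv == "out":
            occ[-1] = False
            done += 1
        else:
            occ[mv], occ[mv + 1] = False, True
    return done / t

rates = np.full(100, 10.0)
rates[:10] = 5.0          # slow early codons throttle the whole mRNA
print(tasep_protein_rate(rates, alpha=0.5, beta=10.0))
```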

  1. A Physics-Based Vibrotactile Feedback Library for Collision Events.

    PubMed

    Park, Gunhyuk; Choi, Seungmoon

    2017-01-01

    We present PhysVib: a software solution on the mobile platform extending an open-source physics engine in a multi-rate rendering architecture for automatic vibrotactile feedback upon collision events. PhysVib runs concurrently with a physics engine at a low update rate and generates vibrotactile feedback commands at a high update rate based on the simulation results of the physics engine, using an exponentially decaying sinusoidal model. We demonstrate through a user study that this vibration model is more appropriate to our purpose in terms of perceptual quality than more complex models based on sound synthesis. We also evaluated the perceptual performance of PhysVib by comparing eight vibrotactile rendering methods. Experimental results suggest that PhysVib enables more realistic vibrotactile feedback than the other methods in terms of perceived similarity to the visual events. PhysVib is an effective solution for providing physically plausible vibrotactile responses while greatly reducing application development time.
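
    The vibration model itself is compact: each collision spawns an exponentially decaying sinusoid whose amplitude follows the collision strength. A sketch, where the impulse-to-amplitude mapping and the parameter values are assumptions rather than PhysVib's tuned settings:

```python
import numpy as np

def collision_vibration(t, impulse, freq=150.0, decay=60.0, gain=1.0):
    """Exponentially decaying sinusoid as the vibrotactile response to
    a collision: a(t) = A * exp(-decay*t) * sin(2*pi*freq*t).

    impulse : collision impulse from the physics engine; mapping it
              linearly to amplitude A via `gain` is an assumption.
    freq    : carrier frequency (Hz), near the skin's sensitivity peak
    decay   : decay constant (1/s) controlling perceived "hardness"
    """
    A = gain * impulse
    return A * np.exp(-decay * t) * np.sin(2 * np.pi * freq * t)

# The physics engine ticks at a low rate (e.g. 60 Hz) while vibration
# commands are rendered at a much higher rate (multi-rate architecture):
t = np.arange(0, 0.1, 1.0 / 8000.0)          # 8 kHz haptic update rate
waveform = collision_vibration(t, impulse=0.4)
```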

  2. Earthquake Rate Models for Evolving Induced Seismicity Hazard in the Central and Eastern US

    NASA Astrophysics Data System (ADS)

    Llenos, A. L.; Ellsworth, W. L.; Michael, A. J.

    2015-12-01

    Injection-induced earthquake rates can vary rapidly in space and time, which presents significant challenges to traditional probabilistic seismic hazard assessment methodologies that are based on a time-independent model of mainshock occurrence. To help society cope with rapidly evolving seismicity, the USGS is developing one-year hazard models for areas of induced seismicity in the central and eastern US to forecast the shaking due to all earthquakes, including aftershocks which are generally omitted from hazards assessments (Petersen et al., 2015). However, the spatial and temporal variability of the earthquake rates make them difficult to forecast even on time-scales as short as one year. An initial approach is to use the previous year's seismicity rate to forecast the next year's seismicity rate. However, in places such as northern Oklahoma the rates vary so rapidly over time that a simple linear extrapolation does not accurately forecast the future, even when the variability in the rates is modeled with simulations based on an Epidemic-Type Aftershock Sequence (ETAS) model (Ogata, JASA, 1988) to account for earthquake clustering. Instead of relying on a fixed time period for rate estimation, we explore another way to determine when the earthquake rate should be updated. This approach could also objectively identify new areas where the induced seismicity hazard model should be applied. We will estimate the background seismicity rate by optimizing a single set of ETAS aftershock triggering parameters across the most active induced seismicity zones -- Oklahoma, Guy-Greenbrier, the Raton Basin, and the Azle-Dallas-Fort Worth area -- with individual background rate parameters in each zone. The full seismicity rate, with uncertainties, can then be estimated using ETAS simulations and changes in rate can be detected by applying change point analysis in ETAS transformed time with methods already developed for Poisson processes.
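
    For reference, the ETAS conditional intensity used in such clustering models adds a background rate to power-law aftershock contributions from all past events; it is the background term that induced-seismicity forecasting must re-estimate as injection conditions change. A minimal sketch with placeholder parameters:

```python
import numpy as np

def etas_rate(t, events, mu, K, alpha, c, p, m0):
    """ETAS conditional intensity at time t (events per unit time):
    lam(t) = mu + sum_i K * exp(alpha*(m_i - m0)) / (t - t_i + c)**p

    events : list of (t_i, m_i) for past earthquakes
    mu     : background rate, the piece re-estimated when injection
             changes the hazard
    """
    lam = mu
    for t_i, m_i in events:
        if t_i < t:
            lam += K * np.exp(alpha * (m_i - m0)) / (t - t_i + c) ** p
    return lam

past = [(0.0, 4.0), (2.5, 3.2), (2.9, 5.1)]
print(etas_rate(3.0, past, mu=0.2, K=0.02, alpha=1.0, c=0.01, p=1.2, m0=3.0))
```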

  3. An autoregressive model-based particle filtering algorithms for extraction of respiratory rates as high as 90 breaths per minute from pulse oximeter.

    PubMed

    Lee, Jinseok; Chon, Ki H

    2010-09-01

    We present particle filtering (PF) algorithms for accurate respiratory rate extraction from pulse oximeter recordings over a broad range: 12-90 breaths/min. These methods are based on an autoregressive (AR) model, where the aim is to find the pole angle with the highest magnitude, as it corresponds to the respiratory rate. However, when the SNR is low, the pole angle with the highest magnitude may not always lead to an accurate estimate of the respiratory rate. To circumvent this limitation, we propose a probabilistic approach using a sequential Monte Carlo method, the particle filter, combined with an optimal parameter search (OPS) criterion for accurate AR-model-based respiratory rate extraction. The PF technique has been widely adopted in many tracking applications, especially for nonlinear and/or non-Gaussian problems. We examine the performance of five different likelihood functions for the PF algorithm: the strongest neighbor, nearest neighbor (NN), weighted nearest neighbor (WNN), probability data association (PDA), and weighted probability data association (WPDA). The performance of these five combined OPS-PF algorithms was measured against a solely OPS-based AR algorithm for respiratory rate extraction from pulse oximeter recordings. The pulse oximeter data were collected from 33 healthy subjects with breathing rates ranging from 12 to 90 breaths/min. It was found that significant improvement in accuracy can be achieved by employing particle filters, and that the combined OPS-PF employing either the NN or WNN likelihood function achieved the best results for all respiratory rates considered in this paper. The main advantage of the combined OPS-PF with either the NN or WNN likelihood function is that, for the first time, respiratory rates as high as 90 breaths/min can be accurately extracted from pulse oximeter recordings.
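
    The core AR idea can be sketched without the particle filter: fit an AR model to the respiratory-modulated waveform, take the pole with the largest magnitude, and convert its angle to a rate. A simplified sketch (plain least-squares AR fit; the paper's OPS model-order selection and PF refinement are omitted):

```python
import numpy as np

def ar_respiratory_rate(x, fs, order=10):
    """Estimate respiratory rate (breaths/min) from a respiratory-
    modulated signal via the dominant AR pole angle."""
    x = np.asarray(x, float) - np.mean(x)
    # Least-squares AR fit: x[n] = sum_k a[k] * x[n-k-1]
    X = np.column_stack([x[order - k - 1:len(x) - k - 1]
                         for k in range(order)])
    a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    poles = np.roots(np.concatenate(([1.0], -a)))
    pole = poles[np.argmax(np.abs(poles))]     # highest-magnitude pole
    f = abs(np.angle(pole)) * fs / (2 * np.pi)  # Hz
    return 60.0 * f

# Synthetic 0.5 Hz (30 breaths/min) modulation sampled at 4 Hz
fs = 4.0
t = np.arange(0, 60, 1 / fs)
sig = (np.sin(2 * np.pi * 0.5 * t)
       + 0.1 * np.random.default_rng(0).normal(size=t.size))
print(ar_respiratory_rate(sig, fs))            # ~30 breaths/min
```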

  4. Determination of fluence rate and temperature distributions in the rat brain; implications for photodynamic therapy.

    PubMed

    Angell-Petersen, Even; Hirschberg, Henry; Madsen, Steen J

    2007-01-01

    Light and heat distributions are measured in a rat glioma model used in photodynamic therapy. A fiber delivering 632-nm light is fixed in the brain of anesthetized BDIX rats. Fluence rates are measured using calibrated isotropic probes that are positioned stereotactically. Mathematical models are then used to derive tissue optical properties, enabling calculation of fluence rate distributions for general tumor and light application geometries. The fluence rates in tumor-free brains agree well with the models based on diffusion theory and Monte Carlo simulation. In both cases, the best fit is found for absorption and reduced scattering coefficients of 0.57 and 28 cm(-1), respectively. In brains with implanted BT(4)C tumors, a discrepancy between diffusion and Monte Carlo-derived two-layer models is noted. Both models suggest that tumor tissue has higher absorption and less scattering than normal brain. Temperatures are measured by inserting thermocouples directly into tumor-free brains. A model based on diffusion theory and the bioheat equation is found to be in good agreement with the experimental data and predict a thermal penetration depth of 0.60 cm in normal rat brain. The predicted parameters can be used to estimate the fluences, fluence rates, and temperatures achieved during photodynamic therapy.
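
    For an isotropic point source in an infinite homogeneous medium, diffusion theory gives a closed-form fluence rate, which is the kind of expression fitted to the probe measurements. A sketch using the normal-brain coefficients reported above (the source power and distances are illustrative):

```python
import numpy as np

def fluence_rate(r, power, mu_a, mu_s_prime):
    """Diffusion-approximation fluence rate (W/cm^2) around an
    isotropic point source in an infinite homogeneous medium:
        phi(r) = P * exp(-mu_eff * r) / (4 * pi * D * r)
    with D = 1/(3*(mu_a + mu_s')) and mu_eff = sqrt(mu_a / D)."""
    D = 1.0 / (3.0 * (mu_a + mu_s_prime))
    mu_eff = np.sqrt(mu_a / D)
    return power * np.exp(-mu_eff * r) / (4.0 * np.pi * D * r)

# Fitted normal-brain values from the abstract for 632-nm light:
# mu_a = 0.57 cm^-1, mu_s' = 28 cm^-1
r = np.array([0.2, 0.4, 0.8])                 # cm from the fiber tip
print(fluence_rate(r, power=0.1, mu_a=0.57, mu_s_prime=28.0))
```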

  5. Variations of leaf longevity in tropical moist forests predicted by a trait-driven carbon optimality model

    DOE PAGES

    Xu, Xiangtao; Medvigy, David; Wright, Stuart Joseph; ...

    2017-07-04

    Leaf longevity (LL) varies more than 20-fold in tropical evergreen forests, but it remains unclear how to capture these variations using predictive models. Current theories of LL based on carbon optimisation principles are challenging to assess quantitatively because of uncertainty across species in the 'ageing rate': the rate at which leaf photosynthetic capacity declines with age. Here we present a meta-analysis of 49 species across temperate and tropical biomes, demonstrating that the ageing rate of photosynthetic capacity is positively correlated with the mass-based carboxylation rate of mature leaves. We assess an improved trait-driven carbon optimality model with in situ LL data for 105 species in two Panamanian forests, and show that our model explains over 40% of the cross-species variation in LL under contrasting light environments. Collectively, our results reveal how variation in LL emerges from carbon optimisation constrained by both leaf structural traits and the abiotic environment.

  6. Heart rate prediction for coronary artery disease patients (CAD): Results of a clinical pilot study.

    PubMed

    Müller-von Aschwege, Frerk; Workowski, Anke; Willemsen, Detlev; Müller, Sebastian M; Hein, Andreas

    2015-01-01

    This paper describes the results of a pilot study with cardiac patients based on information that can be derived from a smartphone. The idea behind the study is to design a model for estimating the heart rate of a patient before an outdoor walking session for track planning, as well as using the model for guidance during an outdoor session. The model allows estimation of the heart rate several minutes in advance to guide the patient and avoid overstrain before its occurrence. This paper describes the first results of the clinical pilot study with cardiac patients taking β-blockers. 9 patients have been tested on a treadmill and during three outdoor sessions each. The results have been derived and three levels of improvement have been tested by cross validation. The overall result is an average Median Absolute Deviation (MAD) of 4.26 BPM between measured heart rate and smartphone sensor based model estimation.

  7. Variations of leaf longevity in tropical moist forests predicted by a trait-driven carbon optimality model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Xiangtao; Medvigy, David; Wright, Stuart Joseph

    Leaf longevity (LL) varies more than 20-fold in tropical evergreen forests, but it remains unclear how to capture these variations using predictive models. Current theories of LL based on carbon optimisation principles are challenging to assess quantitatively because of uncertainty across species in the 'ageing rate': the rate at which leaf photosynthetic capacity declines with age. Here we present a meta-analysis of 49 species across temperate and tropical biomes, demonstrating that the ageing rate of photosynthetic capacity is positively correlated with the mass-based carboxylation rate of mature leaves. We assess an improved trait-driven carbon optimality model with in situ LL data for 105 species in two Panamanian forests, and show that our model explains over 40% of the cross-species variation in LL under contrasting light environments. Collectively, our results reveal how variation in LL emerges from carbon optimisation constrained by both leaf structural traits and the abiotic environment.

  8. Orbital debris environment for spacecraft in low earth orbit

    NASA Technical Reports Server (NTRS)

    Kessler, Donald J.

    1990-01-01

    Modeling and measurement results used in formulating an environment model suitable for the engineering design of spacecraft are reviewed. Earth-based and space-based sensors are analyzed, and it is noted that the effects of satellite breakups can be modeled to predict an uncatalogued population if the nature of the breakup is understood. The telescopic data indicate that the current model is too low for sizes slightly larger than 10 cm, and may be too low for sizes between 2 cm and 10 cm, while uncertainty remains in the current environment model, especially for sizes smaller than 10 cm and at altitudes different from 500 km. Projections for the catastrophic collision rate under different growth conditions are made, emphasizing that the rate of growth of fragments will be twice that of intact objects.

  9. Flight test derived heating math models for critical locations on the orbiter during reentry

    NASA Technical Reports Server (NTRS)

    Hertzler, E. K.; Phillips, P. W.

    1983-01-01

    An analysis technique was developed for expanding the aerothermodynamic envelope of the Space Shuttle without subjecting the vehicle to sustained flight at more stressing heating conditions. A transient analysis program was developed to take advantage of the transient maneuvers that were flown as part of this analysis technique. Heat rates were derived from flight test data for various locations on the orbiter. The flight derived heat rates were used to update heating models based on predicted data. Future missions were then analyzed based on these flight adjusted models. A technique for comparing flight and predicted heating rate data and the extrapolation of the data to predict the aerothermodynamic environment of future missions is presented.

  10. Simulating bimodal tall fescue growth with a degree-day-based process-oriented plant model

    USDA-ARS?s Scientific Manuscript database

    Plant growth simulation models have a temperature response function driving development, with a base temperature and an optimum temperature defined. Such growth simulation models often function well when plant development rate shows a continuous change throughout the growing season. This approach ...

  11. Quantifying the effect of a community-based injury prevention program in Queensland using a generalized estimating equation approach.

    PubMed

    Yorkston, Emily; Turner, Catherine; Schluter, Philip J; McClure, Rod

    2007-06-01

    To develop a generalized estimating equation (GEE) model of childhood injury rates to quantify the effectiveness of a community-based injury prevention program implemented in 2 communities in Australia, in order to contribute to the discussion of community-based injury prevention program evaluation. An ecological study was conducted comparing injury rates in two intervention communities in rural and remote Queensland, Australia, with those of 16 control regions. A model of childhood injury was built using hospitalization injury rate data from 1 July 1991 to 30 June 2005 and 16 social variables. The model was built using GEE analysis and was used to estimate parameters and to test the effectiveness of the intervention. When social variables were controlled for, the intervention was associated with a decrease of 0.09 injuries/10,000 children aged 0-4 years (95% CI -0.29 to 0.11) in logarithmically transformed injury rates; however, this decrease was not significant (p = 0.36). The evaluation methods proposed in this study provide a way of determining the effectiveness of a community-based injury prevention program while considering the effect of baseline differences and secular changes in social variables.

  12. A trait-based approach for predicting species responses to environmental change from sparse data: how well might terrestrial mammals track climate change?

    PubMed

    Santini, Luca; Cornulier, Thomas; Bullock, James M; Palmer, Stephen C F; White, Steven M; Hodgson, Jenny A; Bocedi, Greta; Travis, Justin M J

    2016-07-01

    Estimating population spread rates across multiple species is vital for projecting biodiversity responses to climate change. A major challenge is to parameterise spread models for many species. We introduce an approach that addresses this challenge, coupling a trait-based analysis with spatial population modelling to project spread rates for 15 000 virtual mammals with life histories that reflect those seen in the real world. Covariances among life-history traits are estimated from an extensive terrestrial mammal data set using Bayesian inference. We elucidate the relative roles of different life-history traits in driving modelled spread rates, demonstrating that any one alone will be a poor predictor. We also estimate that around 30% of mammal species have potential spread rates slower than the global mean velocity of climate change. This novel trait-space-demographic modelling approach has broad applicability for tackling many key ecological questions for which we have the models but are hindered by data availability. © 2016 The Authors. Global Change Biology Published by John Wiley & Sons Ltd.

  13. Control effects of stimulus paradigms on characteristic firings of parkinsonism

    NASA Astrophysics Data System (ADS)

    Zhang, Honghui; Wang, Qingyun; Chen, Guanrong

    2014-09-01

    Experimental studies have shown that neuron populations located in the basal ganglia of parkinsonian primates can exhibit characteristic firing patterns, with firing rates differing from normal brain activity. Motivated by recent experimental findings, we investigate the effects of various stimulation paradigms on the firing rates of parkinsonism based on the proposed dynamical models. Our results show that closed-loop deep brain stimulation is superior in ameliorating the firing behaviors of parkinsonism, and other control strategies have similar effects, in accordance with electrophysiological experiments. In addition, in conformity with physiological experiments, we found that there exists an optimal input delay in the closed-loop GPtrain|M1 paradigm, at which more normal behavior is obtained. More interestingly, we observed that W-shaped curves of the firing rates always appear as the stimulus delay varies. We further verify the robustness of the obtained results by studying three pallidal discharge rates of parkinsonism based on the conductance-based model, as well as the integrate-and-fire-or-burst model. Finally, we show that short-term plasticity can improve the firing rates and optimize the control effects on parkinsonism. Our conclusions may give more theoretical insight into Parkinson's disease studies.

  14. The effect of learning models and emotional intelligence toward students learning outcomes on reaction rate

    NASA Astrophysics Data System (ADS)

    Sutiani, Ani; Silitonga, Mei Y.

    2017-08-01

    This research focused on the effect of learning models and emotional intelligence on students' chemistry learning outcomes for the reaction rate topic. To achieve the objectives of the research, a 2x2 factorial design was used, with two factors tested: the learning model (factor A) and emotional intelligence (factor B). Two learning models were used, problem-based learning/PBL (A1) and project-based learning/PjBL (A2), while emotional intelligence was divided into higher and lower types. The population comprised six classes containing 243 grade X students of SMAN 10 Medan, Indonesia; 15 students from each class were chosen as the sample using a purposive sampling technique. The data were analyzed by applying two-way analysis of variance (2x2) at the significance level α = 0.05. Based on hypothesis testing, there was an interaction between learning models and emotional intelligence in students' chemistry learning outcomes. The findings showed that the learning outcomes of students with higher emotional intelligence taught using PBL were higher than those of students taught using PjBL, while there was no significant difference between students with lower emotional intelligence taught using PBL and those taught using PjBL. Based on the findings, students with lower emotional intelligence found it quite hard to engage with other students in group discussion.

  15. Recreational Stream Crossing Effects on Sediment Delivery and Macroinvertebrates in Southwestern Virginia, USA

    NASA Astrophysics Data System (ADS)

    Kidd, Kathryn R.; Aust, W. Michael; Copenheaver, Carolyn A.

    2014-09-01

    Trail-based recreation has increased over recent decades, raising the environmental management issue of soil erosion that originates from unsurfaced, recreational trail systems. Trail-based soil erosion that occurs near stream crossings represents a non-point source of pollution to streams. We modeled soil erosion rates along multiple-use (hiking, mountain biking, and horseback riding) recreational trails that approach culvert and ford stream crossings as potential sources of sediment input and evaluated whether recreational stream crossings were impacting water quality based on downstream changes in macroinvertebrate-based indices within the Poverty Creek Trail System of the George Washington and Jefferson National Forest in southwestern Virginia, USA. We found modeled soil erosion rates for non-motorized recreational approaches that were lower than published estimates for an off-road vehicle approach, bare horse trails, and bare forest operational skid trail and road approaches, but were 13 times greater than estimated rates for undisturbed forests and 2.4 times greater than a 2-year old clearcut in this region. Estimated soil erosion rates were similar to rates for skid trails and horse trails where best management practices (BMPs) had been implemented. Downstream changes in macroinvertebrate-based indices indicated water quality was lower downstream from crossings than in upstream reference reaches. Our modeled soil erosion rates illustrate recreational stream crossing approaches have the potential to deliver sediment into adjacent streams, particularly where BMPs are not being implemented or where approaches are not properly managed, and as a result can negatively impact water quality below stream crossings.

  16. Modeling Effects of Local Extinctions on Culture Change and Diversity in the Paleolithic

    PubMed Central

    Premo, L. S.; Kuhn, Steven L.

    2010-01-01

    The persistence of early stone tool technologies has puzzled archaeologists for decades. Cognitively based explanations, which presume either lack of ability to innovate or extreme conformism, do not account for the totality of the empirical patterns. Following recent research, this study explores the effects of demographic factors on rates of culture change and diversification. We investigate whether the appearance of stability in early Paleolithic technologies could result from frequent extinctions of local subpopulations within a persistent metapopulation. A spatially explicit agent-based model was constructed to test the influence of local extinction rate on three general cultural patterns that archaeologists might observe in the material record: total diversity, differentiation among spatially defined groups, and the rate of cumulative change. The model shows that diversity, differentiation, and the rate of cumulative cultural change would be strongly affected by local extinction rates, in some cases mimicking the results of conformist cultural transmission. The results have implications for understanding spatial and temporal patterning in ancient material culture. PMID:21179418

  17. Simulation of lake ice and its effect on the late-Pleistocene evaporation rate of Lake Lahontan

    USGS Publications Warehouse

    Hostetler, S.W.

    1991-01-01

    A model of lake ice was coupled with a model of lake temperature and evaporation to assess the possible effect of ice cover on the late-Pleistocene evaporation rate of Lake Lahontan. The simulations were done using a data set based on proxy temperature indicators and features of the simulated late-Pleistocene atmospheric circulation over western North America. When a data set based on a mean-annual air temperature of 3 °C (7 °C colder than present) and reduced solar radiation from jet-stream-induced cloud cover was used as input to the model, ice cover lasting approximately 4 months was simulated. Simulated evaporation rates (490-527 mm a-1) were approximately 60% lower than the present-day evaporation rate (1300 mm a-1) of Pyramid Lake. With this reduced rate of evaporation, water inputs similar to the 1983 historical maxima that occurred in the Lahontan basin would have been sufficient to maintain the 13.5 ka BP high stand of Lake Lahontan. © 1991 Springer-Verlag.

  18. Modeling & processing of ceramic and polymer precursor ceramic matrix composite materials

    NASA Astrophysics Data System (ADS)

    Wang, Xiaolin

    Synthesis and processing of novel materials with various advanced approaches has attracted much attention from engineers and scientists over the past thirty years. Many advanced materials display a number of exceptional properties and can be produced with different novel processing techniques. For example, AlN is a promising candidate for electronic, optical and opto-electronic applications due to its high thermal conductivity, high electrical resistivity, high acoustic wave velocity and large band gap. Large bulk AlN crystals can be produced by sublimation of AlN powder. Novel nanostructured multicomponent refractory metal-based ceramics (carbides, borides and nitrides) show many exceptional mechanical, thermal and chemical properties, and can be easily produced by pyrolysis of suitable preceramic precursors mixed with metal particles. The objective of this work is to study sublimation and synthesis of AlN powder, and synthesis of SiC-based metal ceramics. For AlN sublimation crystal growth, the work focuses on modeling the processes in the powder source that significantly affect the sublimation growth as a whole. To understand the powder porosity evolution and vapor transport during powder sublimation, the interplay between vapor transport and powder sublimation is studied, and a physics-based computational model is developed considering powder sublimation and porosity evolution. Based on the proposed model, the effect of a central hole in the powder on the sublimation rate is studied and the result is compared to the case of powder without a hole. The effects of hole size, initial porosity, particle size and driving force on the sublimation rate are also studied. Moreover, the optimal growth conditions for large-diameter crystal quality and high growth rate are determined. For synthesis of SiC-based metal ceramics, the work focuses on developing a multi-scale process model to describe the dynamic behavior of filler-particle reaction and microstructure evolution at the microscale, as well as transient fluid flow, heat transfer, and species transport at the macroscale. The model comprises (i) a microscale model and (ii) a macroscale transport model, and aims to provide optimal conditions for the fabrication process of the ceramics. The porous-media macroscale model for SiC-based metal-ceramic materials processing is developed to understand thermal polymer pyrolysis, chemical reaction of active fillers and transport phenomena in the porous media. The macroscale model includes heat and mass transfer, curing, pyrolysis, chemical reaction and crystallization in a mixture of preceramic polymers and submicron/nano-sized metal particles of uranium, zirconium, niobium, or hafnium. The effects of heating rate, sample size, and size and volume ratio of the metal particles on the reaction rate and product uniformity are studied. The microscale model is developed for modeling the synthesis of the SiC matrix and metal particles. The macroscale model provides thermal boundary conditions to the microscale model. The microscale model applies to repetitive units in the porous structure and describes mass transport, composition changes and motion of metal particles. The unit cell is the representative unit of the source material, and it consists of several metal particles, SiC matrix and other components produced from the synthesis process. The reactions between the different components and the microstructure evolution of the product are considered, and the effects of heating rate and metal particle size on species uniformity and microstructure are investigated.

  19. GENERALIZED VISCOPLASTIC MODELING OF DEBRIS FLOW.

    USGS Publications Warehouse

    Chen, Cheng-lung

    1988-01-01

    The earliest model, developed by R. A. Bagnold, was based on the concept of the 'dispersive' pressure generated by grain collisions. Some efforts have recently been made by theoreticians in non-Newtonian fluid mechanics to modify or improve Bagnold's concept or model. A viable rheological model should consist of both a rate-independent part and a rate-dependent part. A generalized viscoplastic fluid (GVF) model that has both parts as well as two major rheological properties (i.e., the normal stress effect and a soil yield criterion) is shown to be sufficiently accurate, yet practical for general use in debris-flow modeling. In fact, Bagnold's model is found to be only a particular case of the GVF model. Analytical solutions for (steady) uniform debris flows in wide channels are obtained from the GVF model based on Bagnold's simplified assumption of constant grain concentration.
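
    A schematic way to write the two-part structure described above, with generic symbols rather than Chen's exact notation: the shear stress splits into a rate-independent yield part and a rate-dependent part,

    ```latex
    % Schematic GVF-style stress decomposition (generic symbols)
    \tau \;=\; \underbrace{c + \sigma_n \tan\phi}_{\text{rate-independent (yield)}}
    \;+\; \underbrace{\mu_1\!\left(\frac{du}{dy}\right)^{\eta}}_{\text{rate-dependent}}
    ```

    On this reading, Bagnold's grain-inertia regime is the special case in which the rate-dependent term scales with the square of the shear rate (η = 2), consistent with the abstract's remark that Bagnold's model is a particular case of the GVF model.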

  20. On a problematic procedure to manipulate response biases in recognition experiments: the case of "implied" base rates.

    PubMed

    Bröder, Arndt; Malejka, Simone

    2017-07-01

    The experimental manipulation of response biases in recognition-memory tests is an important means for testing recognition models and for estimating their parameters. The textbook manipulations for binary-response formats either vary the payoff scheme or the base rate of targets in the recognition test, with the latter being the more frequently applied procedure. However, some published studies reverted to implying different base rates by instruction rather than actually changing them. Aside from unnecessarily deceiving participants, this procedure may lead to cognitive conflicts that prompt response strategies unknown to the experimenter. To test our objection, implied base rates were compared to actual base rates in a recognition experiment followed by a post-experimental interview to assess participants' response strategies. The behavioural data show that recognition-memory performance was estimated to be lower in the implied base-rate condition. The interview data demonstrate that participants used various second-order response strategies that jeopardise the interpretability of the recognition data. We thus advise researchers against substituting actual base rates with implied base rates.
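
    Why base rates shift response bias can be seen in an equal-variance signal-detection sketch: the likelihood-ratio-optimal criterion moves with the target base rate, so a participant who merely believes a different base rate should adopt a different criterion. The d' value below is illustrative, not an estimate from this experiment.

    ```python
    # Optimal criterion shift with target base rate (equal-variance SDT).
    import numpy as np
    from scipy.stats import norm

    def optimal_criterion(base_rate, d_prime):
        # Place the criterion where posterior odds of "old" vs "new" are 1:
        # c = ln(beta) / d' + d' / 2, with beta = (1 - p) / p.
        beta = (1 - base_rate) / base_rate
        return np.log(beta) / d_prime + d_prime / 2

    d = 1.5  # illustrative sensitivity
    for p_old in (0.25, 0.50, 0.75):
        c = optimal_criterion(p_old, d)
        hit = 1 - norm.cdf(c - d)  # P(respond "old" | target)
        fa = 1 - norm.cdf(c)       # P(respond "old" | lure)
        print(f"base rate {p_old:.2f}: criterion {c:+.2f}, "
              f"hits {hit:.2f}, false alarms {fa:.2f}")
    ```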

  1. Likelihood-Based Random-Effect Meta-Analysis of Binary Events.

    PubMed

    Amatya, Anup; Bhaumik, Dulal K; Normand, Sharon-Lise; Greenhouse, Joel; Kaizar, Eloise; Neelon, Brian; Gibbons, Robert D

    2015-01-01

    Meta-analysis has been used extensively for evaluation of efficacy and safety of medical interventions. Its advantages and utilities are well known. However, recent studies have raised questions about the accuracy of the commonly used moment-based meta-analytic methods in general and for rare binary outcomes in particular. The issue is further complicated for studies with heterogeneous effect sizes. Likelihood-based mixed-effects modeling provides an alternative to moment-based methods such as inverse-variance weighted fixed- and random-effects estimators. In this article, we compare and contrast different mixed-effect modeling strategies in the context of meta-analysis. Their performance in estimation and testing of overall effect and heterogeneity is evaluated when combining results from studies with a binary outcome. Models that allow heterogeneity in both baseline rate and treatment effect across studies have low type I and type II error rates, and their estimates are the least biased among the models considered.

  2. Effects of tunnelling and asymmetry for system-bath models of electron transfer

    NASA Astrophysics Data System (ADS)

    Mattiat, Johann; Richardson, Jeremy O.

    2018-03-01

    We apply the newly derived nonadiabatic golden-rule instanton theory to asymmetric models describing electron-transfer in solution. The models go beyond the usual spin-boson description and have anharmonic free-energy surfaces with different values for the reactant and product reorganization energies. The instanton method gives an excellent description of the behaviour of the rate constant with respect to asymmetry for the whole range studied. We derive a general formula for an asymmetric version of the Marcus theory based on the classical limit of the instanton and find that this gives significant corrections to the standard Marcus theory. A scheme is given to compute this rate based only on equilibrium simulations. We also compare the rate constants obtained by the instanton method with its classical limit to study the effect of tunnelling and other quantum nuclear effects. These quantum effects can increase the rate constant by orders of magnitude.

  3. Fluid Flow Prediction with Development System Interwell Connectivity Influence

    NASA Astrophysics Data System (ADS)

    Bolshakov, M.; Deeva, T.; Pustovskikh, A.

    2016-03-01

    In this paper, interwell connectivity is studied. First, a literature review of existing methods was conducted; these methods fall into three groups: statistically based methods, material (fluid) propagation-based methods, and potential (pressure) change propagation-based methods. The disadvantage of the first two groups is that the methods do not model fluid flow through porous media and ignore changes in well conditions (BHP, skin factor, etc.); the last group accounts for both. In this work, the capacitance method (CM) was chosen for the research. This method is based on material balance and uses weight coefficients (lambdas) to assess interwell influence. Next, a synthetic model consisting of one injection well and one production well was created to examine the CM. The CM gave good results: the flow rates calculated by the analytical method matched the flow rates in the model. A further synthetic model with six production wells and one injection well was then created, representing a seven-spot pattern. To obtain the lambda weight coefficients, a delta function was introduced using a minimization algorithm. A synthetic model with three injectors and thirteen producers, simulating a seven-spot pattern production system, was also created. Finally, the CM was applied to real data from oil field Ω; in this case, the CM did not give satisfactory results for the field liquid rates. In conclusion, recommendations to simplify the CM calculations are given: field Ω is approximated as having one injection well and one production well, and in this case satisfactory results for production rates and cumulative production were obtained.
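
    The core of the capacitance method, producer rates expressed as injector signals filtered through a material-balance "tank" and weighted by interwell coefficients (lambdas), can be sketched compactly. The code below is a minimal illustration with synthetic data and an assumed known time constant tau; the full CM also estimates tau and includes bottom-hole-pressure terms.

    ```python
    # Minimal capacitance-model (CRM) sketch with synthetic data.
    import numpy as np
    from scipy.optimize import nnls

    def crm_filter(inj, dt, tau):
        """Exponentially smoothed injection signal (material balance)."""
        out = np.zeros_like(inj, dtype=float)
        a = np.exp(-dt / tau)
        for t in range(1, len(inj)):
            out[t] = a * out[t - 1] + (1 - a) * inj[t]
        return out

    rng = np.random.default_rng(0)
    dt, tau, nt = 1.0, 5.0, 200
    inj = rng.uniform(50, 150, size=(3, nt))            # three injectors
    true_lam = np.array([0.6, 0.3, 0.1])                # connectivities
    filtered = np.array([crm_filter(i, dt, tau) for i in inj])
    prod = true_lam @ filtered + rng.normal(0, 2, nt)   # one producer

    lam, _ = nnls(filtered.T, prod)  # non-negative least-squares fit
    print("recovered lambdas:", lam.round(2))
    ```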

  4. Radiation induced dissolution of UO2-based nuclear fuel - A critical review of predictive modelling approaches

    NASA Astrophysics Data System (ADS)

    Eriksen, Trygve E.; Shoesmith, David W.; Jonsson, Mats

    2012-01-01

    Radiation induced dissolution of uranium dioxide (UO2) nuclear fuel and the consequent release of radionuclides to intruding groundwater are key processes in the safety analysis of future deep geological repositories for spent nuclear fuel. For several decades, these processes have been studied experimentally using both spent fuel and various types of simulated spent fuels. The latter have been employed since it is difficult to draw mechanistic conclusions from real spent nuclear fuel experiments. Several predictive modelling approaches have been developed over the last two decades. These models are largely based on experimental observations. In this work we have performed a critical review of the modelling approaches developed based on the large body of chemical and electrochemical experimental data. The main conclusions are: (1) the use of measured interfacial rate constants gives results in generally better agreement with experimental results than simulations where homogeneous rate constants are used; (2) the use of spatial dose rate distributions is particularly important when simulating the behaviour over short time periods; and (3) the steady-state approach (the rate of oxidant consumption is equal to the rate of oxidant production) provides a simple but fairly accurate alternative, but errors in the reaction mechanism and in the kinetic parameters used may not be revealed by simple benchmarking. It is essential to use experimentally determined rate constants and verified reaction mechanisms, irrespective of whether the approach is chemical or electrochemical.

  5. Computational simulations of vocal fold vibration: Bernoulli versus Navier-Stokes.

    PubMed

    Decker, Gifford Z; Thomson, Scott L

    2007-05-01

    The use of the mechanical energy (ME) equation for fluid flow, an extension of the Bernoulli equation, to predict the aerodynamic loading on a two-dimensional finite element vocal fold model is examined. Three steady, one-dimensional ME flow models, incorporating different methods of flow separation point prediction, were compared. For two models, determination of the flow separation point was based on fixed ratios of the glottal area at separation to the minimum glottal area; for the third model, the separation point determination was based on fluid mechanics boundary layer theory. Results of flow rate, separation point, and intraglottal pressure distribution were compared with those of an unsteady, two-dimensional, finite element Navier-Stokes model. Cases were considered with a rigid glottal profile as well as with a vibrating vocal fold. For small glottal widths, the three ME flow models yielded good predictions of flow rate and intraglottal pressure distribution, but poor predictions of separation location. For larger orifice widths, the ME models were poor predictors of flow rate and intraglottal pressure, but they satisfactorily predicted separation location. For the vibrating vocal fold case, all models resulted in similar predictions of mean intraglottal pressure, maximum orifice area, and vibration frequency, but vastly different predictions of separation location and maximum flow rate.

  6. Modeling coral calcification accounting for the impacts of coral bleaching and ocean acidification

    NASA Astrophysics Data System (ADS)

    Evenhuis, C.; Lenton, A.; Cantin, N. E.; Lough, J. M.

    2014-01-01

    Coral reefs are diverse ecosystems threatened by rising CO2 levels that are driving the observed increases in sea surface temperature and ocean acidification. Here we present a new unified model that links changes in temperature and carbonate chemistry to coral health. Changes in coral health and population are explicitly modelled by linking the rates of growth, recovery and calcification to the rates of bleaching and temperature-stress-induced mortality. The model is underpinned by four key principles: the Arrhenius equation, thermal specialisation, resource allocation trade-offs, and adaptation to local environments. These general relationships allow the model to be constructed from a range of experimental and observational data. The different characteristics of the model are also assessed against independent data to show that it captures the observed response of corals. We also provide new insights into the factors that determine calcification rates and provide a framework based on well-known biological principles for understanding the observed global distribution of calcification rates. Our results suggest that, despite the implicit complexity of the coral reef environment, a simple model based on temperature, carbonate chemistry and different species can reproduce much of the observed response of corals to changes in temperature and ocean acidification.
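
    Two of the four principles, the Arrhenius equation and thermal specialisation, combine naturally into a temperature response of the kind described. The sketch below shows only the qualitative shape; the functional forms and all parameter values are illustrative assumptions, not the authors' calibration.

    ```python
    # Illustrative temperature response: Arrhenius scaling multiplied by
    # a thermal-specialisation penalty centred on a local optimum.
    import numpy as np

    R = 8.314  # gas constant, J mol^-1 K^-1

    def calcification_rate(T_C, g_ref=1.0, T_ref_C=27.0, Ea=6e4,
                           T_opt_C=28.0, width=3.0):
        T, T_ref = T_C + 273.15, T_ref_C + 273.15
        arrhenius = g_ref * np.exp(-Ea / R * (1 / T - 1 / T_ref))
        specialisation = np.exp(-((T_C - T_opt_C) / width) ** 2)
        return arrhenius * specialisation

    for t in (24.0, 27.0, 30.0, 33.0):
        print(t, round(float(calcification_rate(t)), 3))
    ```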

  7. POSTERIOR PREDICTIVE MODEL CHECKS FOR DISEASE MAPPING MODELS. (R827257)

    EPA Science Inventory

    Disease incidence or disease mortality rates for small areas are often displayed on maps. Maps of raw rates, disease counts divided by the total population at risk, have been criticized as unreliable due to non-constant variance associated with heterogeneity in base population sizes.

  8. Modeled Estimates of Soil and Dust Ingestion Rates for Children

    EPA Science Inventory

    Daily soil/dust ingestion rates typically used in exposure and risk assessments are based on tracer element studies, which have a number of limitations and do not separate contributions from soil and dust. This article presents an alternate approach of modeling soil and dust ingestion.

  9. Reaction modeling of drainage quality in the Duluth Complex, northern Minnesota, USA

    USGS Publications Warehouse

    Seal, Robert; Lapakko, Kim; Piatak, Nadine; Woodruff, Laurel G.

    2015-01-01

    Reaction modeling can be a valuable tool in predicting the long-term behavior of waste material if representative rate constants can be derived from long-term leaching tests or other approaches. Reaction modeling using the REACT program of the Geochemist’s Workbench was conducted to evaluate long-term drainage quality affected by disseminated Cu-Ni-(Co-)-PGM sulfide mineralization in the basal zone of the Duluth Complex where significant resources have been identified. Disseminated sulfide minerals, mostly pyrrhotite and Cu-Fe sulfides, are hosted by clinopyroxene-bearing troctolites. Carbonate minerals are scarce to non-existent. Long-term simulations of up to 20 years of weathering of tailings used two different sets of rate constants: one based on published laboratory single-mineral dissolution experiments, and one based on leaching experiments using bulk material from the Duluth Complex conducted by the Minnesota Department of Natural Resources (MNDNR). The simulations included only plagioclase, olivine, clinopyroxene, pyrrhotite, and water as starting phases. Dissolved oxygen concentrations were assumed to be in equilibrium with atmospheric oxygen. The simulations based on the published single-mineral rate constants predicted that pyrrhotite would be effectively exhausted in less than two years and pH would rise accordingly. In contrast, only 20 percent of the pyrrhotite was depleted after two years using the MNDNR rate constants. Predicted pyrrhotite depletion by the simulation based on the MNDNR rate constant matched well with published results of laboratory tests on tailings. Modeling long-term weathering of mine wastes also can provide important insights into secondary reactions that may influence the permeability of tailings and thereby affect weathering behavior. Both models predicted the precipitation of a variety of secondary phases including goethite, gibbsite, and clay (nontronite).

  10. A study of hydriding kinetics of metal hydrides using a physically based model

    NASA Astrophysics Data System (ADS)

    Voskuilen, Tyler G.

    The reaction of hydrogen with metals to form metal hydrides has numerous potential energy storage and management applications. The metal-hydrogen system has a high volumetric energy density and is often reversible with a high cycle life. The stored hydrogen can be used to produce energy through combustion, reaction in a fuel cell, or electrochemically in metal hydride batteries. The high enthalpy of the metal-hydrogen reaction can also be used for rapid heat removal or delivery. However, improving the often poor gravimetric performance of such systems through the use of lightweight metals usually comes at the cost of reduced reaction rates or the requirement of pressure and temperature conditions far from the desired operating conditions. In this work, a 700 bar Sievert system was developed at the Purdue Hydrogen Systems Laboratory to study the kinetic and thermodynamic behavior of high pressure hydrogen absorption under near-ambient temperatures. This system was used to determine the kinetic and thermodynamic properties of TiCrMn, an intermetallic metal hydride of interest due to its ambient temperature performance for vehicular applications. A commonly studied intermetallic hydride, LaNi5, was also characterized as a base case for the phase field model. The analysis of the data obtained from such a system necessitates the use of specialized techniques to decouple the measured reaction rates from experimental conditions. These techniques were also developed as a part of this work. Finally, a phase field model of metal hydride formation in mass-transport-limited interstitial solute reactions based on the regular solution model was developed and compared with measured kinetics of LaNi5 and TiCrMn. This model aided in the identification of key reaction features and was used to verify the proposed technique for the analysis of gas-solid reaction rates determined volumetrically. Additionally, the phase field model provided detailed quantitative predictions of the effects of multidimensional phase growth and transitions between rate-limiting processes on the experimentally determined reaction rates. Unlike conventional solid state reaction analysis methods, this model relies fully on rate parameters based on the physical mechanisms occurring in the hydride reaction and can be extended to reactions in any dimension.

  11. Cost-Effectiveness of Histamine2 Receptor Antagonists Versus Proton Pump Inhibitors for Stress Ulcer Prophylaxis in Critically Ill Patients.

    PubMed

    Hammond, Drayton A; Kathe, Niranjan; Shah, Anuj; Martin, Bradley C

    2017-01-01

    To determine the cost-effectiveness of stress ulcer prophylaxis with histamine 2 receptor antagonists (H2RAs) versus proton pump inhibitors (PPIs) in critically ill and mechanically ventilated adults. A decision analytic model estimating the costs and effectiveness of stress ulcer prophylaxis (with H2RAs and PPIs) from a health care institutional perspective. Adult mixed intensive care unit (ICU) population who received an H2RA or PPI for up to 9 days. Effectiveness measures were mortality during the ICU stay and complication rate. Costs (2015 U.S. dollars) were combined to include medication regimens and untoward events associated with stress ulcer prophylaxis (pneumonia, Clostridium difficile infection, and stress-related mucosal bleeding). Costs and probabilities for complications and mortality from complications came from randomized controlled trials and observational studies. A base case scenario was developed with pooled data from an observational study and a meta-analysis of randomized controlled trials. Scenarios based on observational and meta-analysis data alone were evaluated. Outcomes were expected and incremental costs, mortalities, and complication rates. Univariate sensitivity analyses were conducted to determine the influence of inputs on cost, mortality, and complication rates. Monte Carlo simulations evaluated second-order uncertainty. In the base case scenario, the costs, complication rates, and mortality rates were $9039, 17.6%, and 2.50%, respectively, for H2RAs and $11,249, 22.0%, and 3.34%, respectively, for PPIs, indicating that H2RAs dominated PPIs. The observational study-based model provided similar results; however, in the meta-analysis-based model, H2RAs had a cost of $8364 and mortality rate of 3.2% compared with $7676 and 2.0%, respectively, for PPIs. At a willingness-to-pay threshold of $100,000/death averted, H2RA therapy was superior or preferred in 70.3% of simulations in the base case and 97.0% in the observational study-based scenario; PPI therapy was preferred in 87.2% of simulations in the meta-analysis-based scenario. Providing stress ulcer prophylaxis with H2RA therapy may reduce costs, increase survival, and avoid complications compared with PPI therapy. This finding is highly sensitive to the pneumonia and stress-related mucosal bleeding rates and whether observational data are used to inform the model. © 2016 Pharmacotherapy Publications, Inc.

  12. Can hydraulic-modelled rating curves reduce uncertainty in high flow data?

    NASA Astrophysics Data System (ADS)

    Westerberg, Ida; Lam, Norris; Lyon, Steve W.

    2017-04-01

    Flood risk assessments rely on accurate discharge data records. Establishing a reliable rating curve for calculating discharge from stage at a gauging station normally takes years of data collection efforts. Estimation of high flows is particularly difficult as high flows occur rarely and are often practically difficult to gauge. Hydraulically-modelled rating curves can be derived based on as few as two concurrent stage-discharge and water-surface slope measurements at different flow conditions. This means that a reliable rating curve can, potentially, be derived much faster than a traditional rating curve based on numerous stage-discharge gaugings. In this study we compared the uncertainty in discharge data that resulted from these two rating curve modelling approaches. We applied both methods to a Swedish catchment, accounting for uncertainties in the stage-discharge gauging and water-surface slope data for the hydraulic model and in the stage-discharge gauging data and rating-curve parameters for the traditional method. We focused our analyses on high-flow uncertainty and the factors that could reduce this uncertainty. In particular, we investigated which data uncertainties were most important, and at what flow conditions the gaugings should preferably be taken. First results show that the hydraulically-modelled rating curves were more sensitive to uncertainties in the calibration measurements of discharge than to uncertainties in water-surface slope. The uncertainty of the hydraulically-modelled rating curves was lowest within the range of the three calibration stage-discharge gaugings (i.e. between median and two-times-median flow), whereas uncertainties were higher outside of this range. For instance, at the highest observed stage of the 24-year stage record, the 90% uncertainty band was -15% to +40% of the official rating curve. Additional gaugings at high flows (i.e. four to five times median flow) would likely substantially reduce those uncertainties. These first results show the potential of the hydraulically-modelled curves, particularly where the calibration gaugings are of high quality and cover a wide range of flow conditions.
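
    For contrast with the hydraulically modelled approach, the traditional curve in such comparisons is typically a power law fitted to stage-discharge gaugings, Q = a(h - h0)^b. A minimal sketch with synthetic gaugings (all parameter values illustrative):

    ```python
    # Power-law rating curve fit to synthetic stage-discharge gaugings.
    import numpy as np
    from scipy.optimize import curve_fit

    def rating(h, a, h0, b):
        return a * np.clip(h - h0, 1e-9, None) ** b

    rng = np.random.default_rng(0)
    h_obs = rng.uniform(0.4, 2.0, 25)  # gauged stages (m)
    q_obs = rating(h_obs, 12.0, 0.2, 1.8) * rng.lognormal(0, 0.05, 25)

    p, cov = curve_fit(rating, h_obs, q_obs, p0=[10.0, 0.1, 1.5])
    print("a, h0, b =", p.round(2), "+/-", np.sqrt(np.diag(cov)).round(2))
    # Extrapolating above the gauged range is where the uncertainty
    # discussed in the abstract grows fastest.
    print("Q at h = 3 m:", rating(3.0, *p).round(1))
    ```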

  13. Reconstructing shifts in vital rates driven by long-term environmental change: a new demographic method based on readily available data.

    PubMed

    González, Edgar J; Martorell, Carlos

    2013-07-01

    Frequently, vital rates are driven by directional, long-term environmental changes. Many of these are of great importance, such as land degradation, climate change, and succession. Traditional demographic methods assume a constant or stationary environment, and thus are inappropriate to analyze populations subject to these changes. They also require repeat surveys of the individuals as change unfolds. Methods for reconstructing such lengthy processes are needed. We present a model that, based on a time series of population size structures and densities, reconstructs the impact of directional environmental changes on vital rates. The model uses integral projection models and maximum likelihood to identify the rates that best reconstruct the time series. The procedure was validated with artificial and real data. The former involved simulated species with widely different demographic behaviors. The latter used a chronosequence of populations of an endangered cactus subject to increasing anthropogenic disturbance. In our simulations, the vital rates and their change were always reconstructed accurately. Nevertheless, the model frequently produced alternative results. The use of coarse knowledge of the species' biology (whether vital rates increase or decrease with size or their plausible values) allowed the correct rates to be identified with a 90% success rate. With real data, the model correctly reconstructed the effects of disturbance on vital rates. These effects were previously known from two populations for which demographic data were available. Our procedure seems robust, as the data violated several of the model's assumptions. Thus, time series of size structures and densities contain the necessary information to reconstruct changing vital rates. However, additional biological knowledge may be required to provide reliable results. Because time series of size structures and densities are available for many species or can be rapidly generated, our model can contribute to understanding populations that face highly pressing environmental problems.
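
    The integral projection machinery underlying the method can be sketched with the usual midpoint-rule discretization of the kernel. The vital-rate forms and parameter values below are illustrative placeholders, not the fitted rates from the cactus study.

    ```python
    # Midpoint-rule discretization of an integral projection model (IPM).
    import numpy as np
    from scipy.stats import norm

    n, L, U = 100, 0.0, 10.0
    edges = np.linspace(L, U, n + 1)
    z = 0.5 * (edges[:-1] + edges[1:])  # size-class midpoints
    h = edges[1] - edges[0]

    surv = 1 / (1 + np.exp(-(z - 3.0)))                  # survival(size)
    grow = norm.pdf(z[:, None], loc=1.0 + 0.9 * z[None, :], scale=0.8)
    fec = 0.2 * z                                        # fecundity(size)
    recr = norm.pdf(z, loc=1.0, scale=0.5)               # recruit sizes

    # Kernel K(z', z) = g(z'|z) s(z) + c(z') f(z), integrated by midpoint.
    K = h * (grow * surv[None, :] + np.outer(recr, fec))
    lam = np.max(np.abs(np.linalg.eigvals(K)))
    print("asymptotic growth rate lambda =", round(float(lam), 3))
    ```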

  15. Application of neural network in the study of combustion rate of natural gas/diesel dual fuel engine.

    PubMed

    Yan, Zhao-Da; Zhou, Chong-Guang; Su, Shi-Chuan; Liu, Zhen-Tao; Wang, Xi-Zhen

    2003-01-01

    In order to predict and improve the performance of a natural gas/diesel dual fuel engine (DFE), a combustion rate model based on a forward neural network was built to study the combustion process of the DFE. The effect of the operating parameters on combustion rate was also studied by means of this model. The study showed that the predicted results were in good agreement with the experimental data, demonstrating that the developed combustion rate model can be used to predict and optimize the combustion process of a dual fuel engine.
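
    A present-day equivalent of the forward-network regression described here is easy to set up. The sketch below uses scikit-learn with synthetic stand-in data; the paper's actual inputs, targets, and network architecture are not reproduced.

    ```python
    # Feedforward-network regression sketch for a burn-rate-like target.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    # Columns (assumed): engine speed, gas substitution ratio, timing.
    X = rng.uniform([1000, 0.1, 10], [2000, 0.9, 40], size=(500, 3))
    y = (0.5 * X[:, 0] / 1000 + 2.0 * X[:, 1] - 0.01 * X[:, 2]
         + rng.normal(0, 0.05, 500))

    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(16,),
                                       max_iter=5000, random_state=0))
    model.fit(Xtr, ytr)
    print("R^2 on held-out data:", round(model.score(Xte, yte), 3))
    ```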

  16. Risky forward interest rates and swaptions: Quantum finance model and empirical results

    NASA Astrophysics Data System (ADS)

    Baaquie, Belal Ehsan; Yu, Miao; Bhanap, Jitendra

    2018-02-01

    Risk free forward interest rates (Diebold and Li, 2006 [1]; Jamshidian, 1991 [2]) - and their realization by US Treasury bonds as the leading exemplar - have been studied extensively. In Baaquie (2010), models of risk free bonds and their forward interest rates based on the quantum field theoretic formulation of the risk free forward interest rates have been discussed, including the empirical evidence supporting these models. The quantum finance formulation of risk free forward interest rates is extended to the case of risky forward interest rates. The examples of the Singapore and Malaysian forward interest rates are used as specific cases. The main feature of the quantum finance model is that the risky forward interest rates are modeled both (a) as a stand-alone case and (b) as driven by the US forward interest rates plus a spread, having its own term structure, above the US forward interest rates. Both the US forward interest rates and the term structure for the spread are modeled by a two-dimensional Euclidean quantum field. As a precursor to the evaluation of a put option on the Singapore coupon bond, the quantum finance model for swaptions is tested using an empirical study of swaptions for the US Dollar, showing that the model is quite accurate. A prediction for the market price of the put option for the Singapore coupon bonds is obtained. The quantum finance model is generalized to study the Malaysian case, and the Malaysian forward interest rates are shown to have anomalies absent in the US and Singapore cases. The model's prediction for a Malaysian interest rate swap is obtained.

  17. Modeling the Flow Behavior, Recrystallization, and Crystallographic Texture in Hot-Deformed Fe-30 Wt Pct Ni Austenite

    NASA Astrophysics Data System (ADS)

    Abbod, M. F.; Sellars, C. M.; Cizek, P.; Linkens, D. A.; Mahfouf, M.

    2007-10-01

    The present work describes a hybrid modeling approach developed for predicting the flow behavior, recrystallization characteristics, and crystallographic texture evolution in a Fe-30 wt pct Ni austenitic model alloy subjected to hot plane strain compression. A series of compression tests were performed at temperatures between 850 °C and 1050 °C and strain rates between 0.1 and 10 s-1. The evolution of grain structure, crystallographic texture, and dislocation substructure was characterized in detail for a deformation temperature of 950 °C and strain rates of 0.1 and 10 s-1, using electron backscatter diffraction and transmission electron microscopy. The hybrid modeling method utilizes a combination of empirical, physically-based, and neuro-fuzzy models. The flow stress is described as a function of the applied variables of strain rate and temperature using an empirical model. The recrystallization behavior is predicted from the measured microstructural state variables of internal dislocation density, subgrain size, and misorientation between subgrains using a physically-based model. The texture evolution is modeled using artificial neural networks.

  18. The Rangeland Hydrology and Erosion Model

    NASA Astrophysics Data System (ADS)

    Nearing, M. A.

    2016-12-01

    The Rangeland Hydrology and Erosion Model (RHEM) is a process-based model designed to address rangeland conditions. RHEM is designed for government agencies, land managers and conservationists who need sound, science-based technology to model, assess, and predict runoff and erosion rates on rangelands and to assist in evaluating the effects of rangeland conservation practices. RHEM is an event-based model that estimates runoff, erosion, and sediment delivery rates and volumes at the spatial scale of the hillslope and the temporal scale of a single rainfall event. It represents erosion processes under normal and fire-impacted rangeland conditions. Moreover, it adopts a new splash erosion and thin sheet-flow transport equation developed from rangeland data, and it links the model's hydrologic and erosion parameters with rangeland plant communities by providing a new system of parameter estimation equations based on 204 plots at 49 rangeland sites distributed across 15 western U.S. states. A dynamic partial differential sediment continuity equation is used to model the total detachment rate from concentrated flow and from rain splash and sheet flow. RHEM is also designed to be used as a calculator, or "engine", within other watershed-scale models. From the research perspective, RHEM acts as a vehicle for incorporating new scientific findings from rangeland infiltration, runoff, and erosion studies. Current applications of the model include: 1) a web site for general use (conservation planning, research, etc.), 2) National Resource Inventory reports to Congress, 3) a computational engine within watershed-scale models (e.g., KINEROS, HEC), 4) Ecological Site and State-and-Transition Descriptions, and 5) a proposal in 2015 to become part of the NRCS Desktop applications for field offices.

  19. Dependence and risk assessment for oil prices and exchange rate portfolios: A wavelet based approach

    NASA Astrophysics Data System (ADS)

    Aloui, Chaker; Jammazi, Rania

    2015-10-01

    In this article, we propose a wavelet-based approach to accommodate the stylized facts and complex structure of financial data, caused by frequent and abrupt changes of markets and noises. Specifically, we show how the combination of both continuous and discrete wavelet transforms with traditional financial models helps improve portfolio's market risk assessment. In the empirical stage, three wavelet-based models (wavelet-EGARCH with dynamic conditional correlations, wavelet-copula, and wavelet-extreme value) are considered and applied to crude oil price and US dollar exchange rate data. Our findings show that the wavelet-based approach provides an effective and powerful tool for detecting extreme moments and improving the accuracy of VaR and Expected Shortfall estimates of oil-exchange rate portfolios after noise is removed from the original data.

  20. Equity venture capital platform model based on complex network

    NASA Astrophysics Data System (ADS)

    Guo, Dongwei; Zhang, Lanshu; Liu, Miao

    2018-05-01

    This paper uses small-world and random networks to simulate the relationships among investors and constructs a network model of an equity venture capital platform to explore the impact of the fraud rate and the bankruptcy rate on the robustness of the model, while also observing the impact of the average path length and average clustering coefficient of the investor relationship network on the income of the model. The study found that fraud and bankruptcy rates exceeding a certain threshold lead to network collapse; that the bankruptcy rate has a large influence on the income of the platform; that a risk premium exists, with better average returns within a certain range of bankruptcy risk; and that the structure of the investor relationship network has no effect on the income of the investment model.
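
    The robustness experiment described, failures propagating through a small-world investor network, can be sketched with networkx: build the network, remove a fraction of nodes standing in for fraud or bankruptcy, and track the surviving giant component. All parameters are illustrative.

    ```python
    # Giant-component robustness of a small-world network under failures.
    import random
    import networkx as nx

    def robustness(failure_rate, n=500, k=6, p=0.1, seed=0):
        random.seed(seed)
        g = nx.watts_strogatz_graph(n, k, p, seed=seed)
        removed = random.sample(list(g.nodes), int(failure_rate * n))
        g.remove_nodes_from(removed)  # fraud/bankruptcy stand-ins
        if g.number_of_nodes() == 0:
            return 0.0
        giant = max(nx.connected_components(g), key=len)
        return len(giant) / n

    for rate in (0.1, 0.3, 0.5, 0.7):
        print(rate, round(robustness(rate), 2))
    ```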

  1. Mξ, Mαβ, Mγ and Mm X-ray production cross-sections for elements with 71 ⩽ Z ⩽ 92 at 5.96 keV photon energy

    NASA Astrophysics Data System (ADS)

    Sharma, Manju; Sharma, Veena; Kumar, Sanjeev; Puri, S.; Singh, Nirmal

    2006-11-01

    The Mξ, Mαβ, Mγ and Mm X-ray production (XRP) cross-sections have been measured for the elements with 71 ⩽ Z ⩽ 92 at 5.96 keV incident photon energy satisfying EM1 < Einc < EL3, where EM1(L3) is the M1 (L3) subshell binding energy. These XRP cross-sections have been calculated using photoionization cross-sections based on the relativistic Dirac-Hartree-Slater (RDHS) model with three sets of X-ray emission rates, fluorescence, Coster-Kronig and super Coster-Kronig yields based on (i) the non-relativistic Hartree-Slater (NRHS) potential model, (ii) the RDHS model and (iii) the relativistic Dirac-Fock (RDF) model. For the third set, the Mi (i = 1-5) subshell fluorescence yields have been calculated using the RDF model-based X-ray emission rates and total widths reevaluated to incorporate the RDF model-based radiative widths. The measured cross-sections have been compared with the calculated values to check the applicability of the physical parameters based on different models.

  2. Physical initialization using SSM/I rain rates

    NASA Technical Reports Server (NTRS)

    Krishnamurti, T. N.; Bedi, H. S.; Ingles, Kevin

    1993-01-01

    Following our recent study on physical initialization for tropical prediction using rain rates based on outgoing long-wave radiation, the present study demonstrates a major improvement from the use of microwave radiance-based rain rates. A rain rate algorithm is used on the data from a special sensor microwave instrument (SSM/I). The initialization, as before, uses a reverse surface similarity theory, a reverse cumulus parameterization algorithm, and a bisection method to minimize the difference between satellite-based and the model-based outgoing long-wave radiation. These are invoked within a preforecast Newtonian relaxation phase of the initialization. These tests are carried out with a high-resolution global spectral model. The impact of the initialization on forecast is tested for a complex triple typhoon scenario over the Western Pacific Ocean during September 1987. A major impact from the inclusion of the SSM/I is demonstrated. Also addressed are the spin-up issues related to the typhoon structure and the improved water budget from the physical initialization.

  3. Generalization of exponential based hyperelastic to hyper-viscoelastic model for investigation of mechanical behavior of rate dependent materials.

    PubMed

    Narooei, K; Arman, M

    2018-03-01

    In this research, the exponential stretch-based hyperelastic strain energy was generalized to a hyper-viscoelastic model using the heredity integral of the deformation history to take into account strain rate effects on the mechanical behavior of materials. The heredity integral was approximated by the approach of Goh et al. to determine the model parameters, and the same estimation was used for constitutive modeling. To demonstrate the ability of the proposed hyper-viscoelastic model, the stress-strain response of thermoplastic elastomer gel tissue at strain rates from 0.001 to 100/s was studied. In addition to better agreement with experimental data than the extended Mooney-Rivlin hyper-viscoelastic model, stable material behavior was predicted for the pure shear and balanced biaxial deformation modes. To present an engineering application of the current model, the Kolsky bar impact test of gel tissue was simulated and the effects of specimen size and inertia on uniform deformation were investigated. As the mechanical response of polyurea is available over the wide strain-rate range of 0.0016-6500/s, the current model was applied to fit these experimental data. The results showed that more accuracy can be expected from the current model than from the extended Ogden hyper-viscoelastic model. In the final verification example, pig skin experimental data were used to determine the parameters of the hyper-viscoelastic model. Subsequently, a specimen of pig skin was loaded at different strain rates to a fixed strain and the change of stress with time (stress relaxation) was obtained. The stress relaxation results revealed that the peak stress increases with applied strain rate up to a saturated loading rate, and that an equilibrium stress of 0.281 MPa is reached. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Hierarchical Bayesian Modeling of Fluid-Induced Seismicity

    NASA Astrophysics Data System (ADS)

    Broccardo, M.; Mignan, A.; Wiemer, S.; Stojadinovic, B.; Giardini, D.

    2017-11-01

    In this study, we present a Bayesian hierarchical framework to model fluid-induced seismicity. The framework is based on a nonhomogeneous Poisson process with a fluid-induced seismicity rate proportional to the rate of injected fluid. The fluid-induced seismicity rate model depends upon a set of physically meaningful parameters and has been validated for six fluid-induced case studies. In line with the vision of hierarchical Bayesian modeling, the rate parameters are considered as random variables. We develop both the Bayesian inference and updating rules, which are used to develop a probabilistic forecasting model. We tested the Basel 2006 fluid-induced seismic case study to prove that the hierarchical Bayesian model offers a suitable framework to coherently encode both epistemic uncertainty and aleatory variability. Moreover, it provides a robust and consistent short-term seismic forecasting model suitable for online risk quantification and mitigation.
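
    The core rate model, a nonhomogeneous Poisson process whose intensity is a background term plus a term proportional to the injection rate, can be simulated by thinning. The injection profile and rate parameters below are illustrative, not the Basel 2006 values.

    ```python
    # Thinning (Lewis-Shedler) simulation of injection-driven seismicity.
    import numpy as np

    rng = np.random.default_rng(0)

    def injection_rate(t):  # simple ramp-up then shut-in profile
        return np.where(t < 5.0, 10.0 * t, 0.0)

    mu, a = 0.02, 0.05      # background rate; events per unit injected

    def seismicity_rate(t):
        return mu + a * injection_rate(t)

    T = 10.0
    lam_max = float(seismicity_rate(np.linspace(0.0, T, 1000)).max())
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)  # candidate event time
        if t > T:
            break
        if rng.random() < seismicity_rate(t) / lam_max:  # accept
            events.append(t)
    print(f"{len(events)} induced events" if events else "no events")
    ```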

  5. Strain rate dependent hyperelastic stress-stretch behavior of a silica nanoparticle reinforced poly (ethylene glycol) diacrylate nanocomposite hydrogel.

    PubMed

    Zhan, Yuexing; Pan, Yihui; Chen, Bing; Lu, Jian; Zhong, Zheng; Niu, Xinrui

    2017-11-01

    Poly (ethylene glycol) diacrylate (PEGDA) derivatives are important biomedical materials. PEGDA-based hydrogels have emerged as one of the popular regenerative orthopedic materials. This work aims to study the mechanical behavior of a PEGDA-based silica nanoparticle (NP) reinforced nanocomposite (NC) hydrogel at physiological strain rates. The work combines materials fabrication, mechanical experiments, mathematical modeling and structural analysis. The strain rate dependent stress-stretch behaviors were observed, analyzed and quantified. Visco-hyperelasticity was identified as the deformation mechanism of the nano-silica/PEGDA NC hydrogel. NPs showed a significant effect on both the initial shear modulus and the viscoelastic material properties. A structure-based quasi-linear viscoelastic (QLV) model was constructed and was capable of describing the visco-hyperelastic stress-stretch behavior of the NC hydrogel. A group of unified material parameters was extracted by the model from the stress-stretch curves obtained at different strain rates. The visco-hyperelastic behavior of the NP/polymer interphase was not only identified but also quantified. The work could provide guidance for the structural design of next-generation NC hydrogels. Copyright © 2017. Published by Elsevier Ltd.

  6. Reaction Mechanism Generator: Automatic construction of chemical kinetic mechanisms

    DOE PAGES

    Gao, Connie W.; Allen, Joshua W.; Green, William H.; ...

    2016-02-24

    Reaction Mechanism Generator (RMG) constructs kinetic models composed of elementary chemical reaction steps using a general understanding of how molecules react. Species thermochemistry is estimated through Benson group additivity and reaction rate coefficients are estimated using a database of known rate rules and reaction templates. At its core, RMG relies on two fundamental data structures: graphs and trees. Graphs are used to represent chemical structures, and trees are used to represent thermodynamic and kinetic data. Models are generated using a rate-based algorithm which excludes species from the model based on reaction fluxes. RMG can generate reaction mechanisms for species involving carbon, hydrogen, oxygen, sulfur, and nitrogen. It also has capabilities for estimating transport and solvation properties, and it automatically computes pressure-dependent rate coefficients and identifies chemically-activated reaction paths. RMG is an object-oriented program written in Python, which provides a stable, robust programming architecture for developing an extensible and modular code base with a large suite of unit tests. Computationally intensive functions are cythonized for speed improvements.
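
    The rate-based enlargement idea can be caricatured in a few lines: edge species are promoted into the core model while their formation flux exceeds a tolerance times a characteristic flux. The toy loop below only illustrates the algorithmic idea and is not RMG's implementation.

    ```python
    # Toy rate-based core/edge model enlargement loop.
    def enlarge(core, edge_fluxes, char_flux, tol=0.1):
        """core: set of species names; edge_fluxes: {species: flux}."""
        while edge_fluxes:
            species, flux = max(edge_fluxes.items(), key=lambda kv: kv[1])
            if flux < tol * char_flux:
                break  # remaining edge species are negligible
            core.add(species)
            del edge_fluxes[species]
            # A real generator would now create new reactions from the
            # promoted species and re-evaluate all edge fluxes here.
        return core

    core = {"CH4", "O2"}
    edge = {"CH3": 5.0, "HO2": 3.0, "CH3OOH": 0.2}
    print(sorted(enlarge(core, edge, char_flux=10.0)))
    ```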

  7. A Cost-Utility Analysis of Prostate Cancer Screening in Australia.

    PubMed

    Keller, Andrew; Gericke, Christian; Whitty, Jennifer A; Yaxley, John; Kua, Boon; Coughlin, Geoff; Gianduzzo, Troy

    2017-02-01

    The Göteborg randomised population-based prostate cancer screening trial demonstrated that prostate-specific antigen (PSA)-based screening reduces prostate cancer deaths compared with an age-matched control group. Utilising the prostate cancer detection rates from this study, we investigated the clinical and cost effectiveness of a similar PSA-based screening strategy for an Australian population of men aged 50-69 years. A decision model that incorporated Markov processes was developed from a health system perspective. The base-case scenario compared a population-based screening programme with current opportunistic screening practices. Costs, utility values, treatment patterns and background mortality rates were derived from Australian data. All costs were adjusted to reflect July 2015 Australian dollars (A$). An alternative scenario compared systematic with opportunistic screening but with optimisation of active surveillance (AS) uptake in both groups. A discount rate of 5 % for costs and benefits was utilised. Univariate and probabilistic sensitivity analyses were performed to assess the effect of variable uncertainty on model outcomes. Our model very closely replicated the number of deaths from both prostate cancer and background mortality in the Göteborg study. The incremental cost per quality-adjusted life-year (QALY) for PSA screening was A$147,528. However, for years of life gained (LYGs), PSA-based screening (A$45,890/LYG) appeared more favourable. Our alternative scenario with optimised AS improved cost utility to A$45,881/QALY, with screening becoming cost effective at a 92 % AS uptake rate. Both modelled scenarios were most sensitive to the utility of patients before and after intervention, and the discount rate used. PSA-based screening is not cost effective compared with Australia's assumed willingness-to-pay threshold of A$50,000/QALY. It appears more cost effective if LYGs are used as the relevant outcome, and is more cost effective than the established Australian breast cancer screening programme on this basis. Optimised utilisation of AS increases the cost effectiveness of prostate cancer screening dramatically.

  8. A rational approach to improving productivity in recombinant Pichia pastoris fermentation.

    PubMed

    d'Anjou, M C; Daugulis, A J

    2001-01-05

    A Mut(S) Pichia pastoris strain that had been genetically modified to produce and secrete sea raven antifreeze protein was used as a model system to demonstrate the implementation of a rational, model-based approach to improve process productivity. A set of glycerol/methanol mixed-feed continuous stirred-tank reactor (CSTR) experiments was performed at the 5-L scale to characterize the relationship between the specific growth rate and the cell yield on methanol, the specific methanol consumption rate, the specific recombinant protein formation rate, and the productivity based on secreted protein levels. The range of dilution rates studied was 0.01 to 0.10 h(-1), and the residual methanol concentration was kept constant at approximately 2 g/L (below the inhibitory level). With the assumption that the cell yield on glycerol was constant, the cell yield on methanol increased from approximately 0.5 to 1.5 over the range studied. A maximum specific methanol consumption rate of 20 mg/g·h was achieved at a dilution rate of 0.06 h(-1). The specific product formation rate and the volumetric productivity based on product continued to increase over the range of dilution rates studied, and the maximum values were 0.06 mg/g·h and 1.7 mg/L·h, respectively. Therefore, no evidence of repression by glycerol was observed over this range, and operating at the highest dilution rate studied maximized productivity. Fed-batch mass balance equations, based on Monod-type kinetics and parameters derived from data collected during the CSTR work, were then used to predict cell growth and recombinant protein production and to develop an exponential feeding strategy using two carbon sources. Two exponential fed-batch fermentations were conducted according to the predicted feeding strategy at specific growth rates of 0.03 h(-1) and 0.07 h(-1) to verify the accuracy of the model. Cell growth was accurately predicted in both fed-batch runs; however, the model underestimated recombinant product concentration. The overall volumetric productivity of both runs was approximately 2.2 mg/L·h, representing a tenfold increase in productivity compared with a heuristic feeding strategy. Copyright 2001 John Wiley & Sons, Inc.

  9. Energy minimization of mobile video devices with a hardware H.264/AVC encoder based on energy-rate-distortion optimization

    NASA Astrophysics Data System (ADS)

    Kang, Donghun; Lee, Jungeon; Jung, Jongpil; Lee, Chul-Hee; Kyung, Chong-Min

    2014-09-01

    In mobile video systems powered by battery, reducing the encoder's compression energy consumption is critical to prolonging its lifetime. Previous energy-rate-distortion (E-R-D) optimization methods based on a software codec are not suitable for practical mobile camera systems because the energy consumption is too large and the encoding rate is too low. In this paper, we propose an E-R-D model for a hardware codec based on a gate-level simulation framework to measure the switching activity and the energy consumption. From the proposed E-R-D model, an energy-minimizing algorithm for mobile video camera sensors has been developed with the GOP (Group of Pictures) size and QP (Quantization Parameter) as run-time control variables. Our experimental results show that the proposed algorithm provides up to 31.76% energy savings while satisfying the rate and distortion constraints.

  10. Predicting DNA hybridization kinetics from sequence

    NASA Astrophysics Data System (ADS)

    Zhang, Jinny X.; Fang, John Z.; Duan, Wei; Wu, Lucia R.; Zhang, Angela W.; Dalchau, Neil; Yordanov, Boyan; Petersen, Rasmus; Phillips, Andrew; Zhang, David Yu

    2018-01-01

    Hybridization is a key molecular process in biology and biotechnology, but so far there is no predictive model for accurately determining hybridization rate constants based on sequence information. Here, we report a weighted neighbour voting (WNV) prediction algorithm, in which the hybridization rate constant of an unknown sequence is predicted based on similarity reactions with known rate constants. To construct this algorithm we first performed 210 fluorescence kinetics experiments to observe the hybridization kinetics of 100 different DNA target and probe pairs (36 nt sub-sequences of the CYCS and VEGF genes) at temperatures ranging from 28 to 55 °C. Automated feature selection and weighting optimization resulted in a final six-feature WNV model, which can predict hybridization rate constants of new sequences to within a factor of 3 with ∼91% accuracy, based on leave-one-out cross-validation. Accurate prediction of hybridization kinetics allows the design of efficient probe sequences for genomics research.
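
    The spirit of weighted neighbour voting, predicting the rate constant of a query as a similarity-weighted average over reactions with known rate constants, can be sketched as follows. The features, weighting function, and data are all illustrative; the published model selects six sequence-derived features with optimized weights.

    ```python
    # Similarity-weighted neighbour prediction of log10(k), illustrative.
    import numpy as np

    def predict_log_k(query, feats, log_ks, scale=1.0):
        d = np.linalg.norm(feats - query, axis=1)  # feature distances
        w = np.exp(-d / scale)                     # closer reactions vote more
        return float(w @ log_ks / w.sum())

    rng = np.random.default_rng(0)
    feats = rng.normal(size=(100, 6))  # six features per known reaction
    log_ks = 5.0 + feats[:, 0] - 0.5 * feats[:, 1] + rng.normal(0, 0.1, 100)

    query = rng.normal(size=6)
    print("predicted log10(k):", round(predict_log_k(query, feats, log_ks), 2))
    ```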

  11. Performance Investigation of FSO-OFDM Communication Systems under the Heavy Rain Weather

    NASA Astrophysics Data System (ADS)

    Rashidi, Florence; He, Jing; Chen, Lin

    2017-12-01

    The challenge in free-space optical (FSO) communication is the propagation of the optical signal through different atmospheric conditions such as rain, snow and fog. In this paper, an orthogonal frequency-division multiplexing (OFDM) technique is proposed for the FSO communication system. Considering rain attenuation based on the Marshal & Palmer and Carbonneau models, the performance of the OFDM-based FSO communication system is evaluated under heavy-rain conditions in Changsha, China. The simulation results show that, under a heavy rainfall of 106.18 mm/h, with an attenuation factor of 7 dB/km based on the Marshal & Palmer model, data at bit rates of 2.5 and 4.0 Gbps can be transmitted over FSO channels of 1.6 and 1.3 km, respectively, with a bit error rate of less than 1E-4. In addition, the effect of rain attenuation on the FSO communication system under the Marshal & Palmer model is less than that under the Carbonneau model.

  12. Relative risk for HIV in India - An estimate using conditional auto-regressive models with Bayesian approach.

    PubMed

    Kandhasamy, Chandrasekaran; Ghosh, Kaushik

    2017-02-01

    Indian states are currently classified into HIV-risk categories based on the observed prevalence counts, the percentage of infected attendees at antenatal clinics, and the percentage of infected high-risk individuals. This method, however, does not account for the spatial dependence among the states, nor does it provide any measure of statistical uncertainty. We provide an alternative model-based approach to address these issues. Our method uses Poisson log-normal models having various conditional autoregressive structures with neighborhood- and distance-based weight matrices and incorporates all available covariate information. We use the R and WinBUGS software packages to fit these models to the 2011 HIV data. Based on the Deviance Information Criterion, the convolution model using the distance-based weight matrix and covariate information on female sex workers, literacy rate and intravenous drug users is found to have the best fit. The relative risk of HIV for the various states is estimated using the best model, and the states are then classified into risk categories based on these estimated values. An HIV risk map of India is constructed based on these results. The choice of the final model suggests that an HIV control strategy focusing on female sex workers, intravenous drug users and literacy rate would be most effective.

  13. An Analysis on the Constitutive Models for Forging of Ti6Al4V Alloy Considering the Softening Behavior

    NASA Astrophysics Data System (ADS)

    Souza, Paul M.; Beladi, Hossein; Singh, Rajkumar P.; Hodgson, Peter D.; Rolfe, Bernard

    2018-05-01

    This paper develops high-temperature deformation constitutive models for a Ti6Al4V alloy using an empirical Arrhenius equation and an enhanced version of the authors' physically based EM + Avrami equations. The initial microstructure was a partially equiaxed α + β grain structure. A wide range of experimental data was obtained from hot compression of the Ti6Al4V alloy at deformation temperatures ranging from 720 to 970 °C and strain rates varying from 0.01 to 10 s-1. The friction- and adiabatic-corrected flow curves were used to identify the parameter values of the constitutive models. Both models predicted the flow stress with good overall accuracy. The generalized modified Arrhenius model was better at predicting the flow stress at lower strain rates, but it was inaccurate in predicting the peak strain. In contrast, the enhanced physically based EM + Avrami model showed very good accuracy at intermediate and high strain rates and was also better at predicting the peak strain. Blind sample tests revealed that the EM + Avrami model maintained good predictions on new (unseen) data. Thus, the enhanced EM + Avrami model may be preferred over the Arrhenius model for predicting the flow behavior of Ti6Al4V alloy during industrial forging when the initial microstructure is partially equiaxed.
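
    The paper's modified Arrhenius formulation is not spelled out in the abstract; as a sketch of the model family it belongs to, the following implements the standard hyperbolic-sine Arrhenius (Sellars-Tegart) flow stress via the Zener-Hollomon parameter, with placeholder constants rather than the fitted Ti6Al4V values.

    ```python
    import numpy as np

    # Standard hyperbolic-sine Arrhenius (Sellars-Tegart) flow stress, the
    # family the modified Arrhenius model above belongs to. All parameter
    # values are hypothetical placeholders, not the fitted Ti6Al4V constants.
    R = 8.314          # gas constant, J/(mol K)

    def flow_stress(strain_rate, T_kelvin, A=1e13, alpha=0.01, n=4.5, Q=600e3):
        Z = strain_rate * np.exp(Q / (R * T_kelvin))    # Zener-Hollomon parameter
        return (1.0 / alpha) * np.arcsinh((Z / A) ** (1.0 / n))  # MPa if alpha in 1/MPa

    for rate in (0.01, 0.1, 1.0, 10.0):                 # s^-1, as in the tests
        print(f"rate {rate:5.2f} 1/s, 920 C -> {flow_stress(rate, 920 + 273.15):6.1f} MPa")
    ```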

  14. Evaluation of Second-Level Inference in fMRI Analysis

    PubMed Central

    Roels, Sanne P.; Loeys, Tom; Moerkerke, Beatrijs

    2016-01-01

    We investigate the impact of decisions in the second-level (i.e., over subjects) inferential process in functional magnetic resonance imaging on (1) the balance between false positives and false negatives and (2) data-analytical stability, both proxies for the reproducibility of results. Second-level analysis based on a mass univariate approach typically consists of 3 phases. First, one proceeds via a general linear model for a test image that consists of pooled information from different subjects. We evaluate models that take into account first-level (within-subjects) variability and models that do not. Second, one proceeds via inference based on parametric assumptions or via permutation-based inference. Third, we evaluate 3 commonly used procedures to address the multiple testing problem: familywise error rate correction, False Discovery Rate (FDR) correction, and a two-step procedure with a minimal cluster size. Based on a simulation study and real data, we find that the two-step procedure with a minimal cluster size yields the most stable results, followed by familywise error rate correction. FDR correction yields the most variable results, for both permutation-based and parametric inference. Modeling the subject-specific variability yields a better balance between false positives and false negatives when using parametric inference. PMID:26819578
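
    The FDR correction evaluated here is typically the Benjamini-Hochberg step-up procedure; assuming that is indeed the variant used, a minimal sketch of the standard procedure follows.

    ```python
    import numpy as np

    def benjamini_hochberg(p_values, q=0.05):
        """Standard Benjamini-Hochberg step-up procedure: return a boolean
        mask of rejected (declared active) tests at FDR level q."""
        p = np.asarray(p_values)
        order = np.argsort(p)
        m = len(p)
        thresholds = q * (np.arange(1, m + 1) / m)
        below = p[order] <= thresholds
        keep = np.zeros(m, dtype=bool)
        if below.any():
            k = np.max(np.nonzero(below)[0])   # largest i with p_(i) <= q*i/m
            keep[order[: k + 1]] = True
        return keep

    pvals = [0.001, 0.008, 0.039, 0.041, 0.22, 0.6]
    print(benjamini_hochberg(pvals, q=0.05))   # rejects the two smallest here
    ```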

  15. Development of a Clinical Forecasting Model to Predict Comorbid Depression Among Diabetes Patients and an Application in Depression Screening Policy Making.

    PubMed

    Jin, Haomiao; Wu, Shinyi; Di Capua, Paul

    2015-09-03

    Depression is a common but often undiagnosed comorbid condition of people with diabetes. Mass screening can detect undiagnosed depression but may require significant resources and time. The objectives of this study were 1) to develop a clinical forecasting model that predicts comorbid depression among patients with diabetes and 2) to evaluate a model-based screening policy that saves resources and time by screening only patients considered as depressed by the clinical forecasting model. We trained and validated 4 machine learning models by using data from 2 safety-net clinical trials; we chose the one with the best overall predictive ability as the ultimate model. We compared model-based policy with alternative policies, including mass screening and partial screening, on the basis of depression history or diabetes severity. Logistic regression had the best overall predictive ability of the 4 models evaluated and was chosen as the ultimate forecasting model. Compared with mass screening, the model-based policy can save approximately 50% to 60% of provider resources and time but will miss identifying about 30% of patients with depression. Partial-screening policy based on depression history alone found only a low rate of depression. Two other heuristic-based partial screening policies identified depression at rates similar to those of the model-based policy but cost more in resources and time. The depression prediction model developed in this study has compelling predictive ability. By adopting the model-based depression screening policy, health care providers can use their resources and time better and increase their efficiency in managing their patients with depression.
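
    A minimal sketch of the model-based policy on synthetic data follows: fit a logistic regression, then screen only patients whose predicted risk crosses a threshold. The features, prevalence, and 0.5 cutoff are all illustrative assumptions, not the study's specification.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Synthetic stand-in data: the real predictors (e.g. depression history,
    # diabetes severity) and prevalence are not reproduced here.
    n = 2000
    X = rng.normal(size=(n, 5))
    p_true = 1 / (1 + np.exp(-(X @ np.array([1.2, 0.8, 0.0, -0.5, 0.3]) - 1.0)))
    y = rng.binomial(1, p_true)

    model = LogisticRegression().fit(X[:1500], y[:1500])

    # Model-based policy: screen only patients the model flags as likely depressed.
    risk = model.predict_proba(X[1500:])[:, 1]
    flag = risk >= 0.5
    screened_fraction = flag.mean()            # proxy for provider effort spent
    missed = ((y[1500:] == 1) & ~flag).sum() / max((y[1500:] == 1).sum(), 1)
    print(f"screen {screened_fraction:.0%} of patients, miss {missed:.0%} of cases")
    ```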

  16. Information Filtering Based on Users' Negative Opinions

    NASA Astrophysics Data System (ADS)

    Guo, Qiang; Li, Yang; Liu, Jian-Guo

    2013-05-01

    The process of heat conduction (HC) has recently found application in information filtering [Zhang et al., Phys. Rev. Lett. 99, 154301 (2007)], where it yields high diversity but low accuracy. The classical HC model predicts objects of potential interest to users based on the objects they like, disregarding negative opinions. In terms of users' rating scores, we present an improved user-based HC (UHC) information model that takes into account users' positive and negative opinions. First, the objects rated by users are divided into positive and negative categories; then the predicted lists of interesting and disliked objects are generated by the UHC model. Finally, the recommendation lists are constructed by filtering out the disliked objects from the interesting lists. Implementing the new model with nine similarity measures, the experimental results for the MovieLens and Netflix datasets show that the new model, by considering negative opinions, greatly enhances the accuracy, measured by the average ranking score, from 0.049 to 0.036 for Netflix and from 0.1025 to 0.0570 for MovieLens, reductions of 26.53% and 44.39%, respectively. Since users prefer to give positive ratings rather than negative ones, negative opinions contain much more information than positive ones; they are therefore very important for understanding users' online collective behaviors and improving the performance of the HC model.
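
    The exact UHC update is not given in the abstract; the sketch below is a simplified heat-conduction-style pass on a toy user-item matrix that follows the three steps described (split ratings into positive and negative sets, score both, filter predicted dislikes out of the recommendation list). It is a schematic rendering, not the paper's formulation.

    ```python
    import numpy as np

    ratings = np.array([[5, 4, 0, 1, 0],   # rows: users, cols: items, 0 = unrated
                        [4, 0, 5, 2, 1],
                        [0, 5, 4, 0, 2]], dtype=float)

    def hc_scores(adj, user):
        """One heat-conduction-style pass on a user-item bipartite graph: an
        item's score is the average, over its users, of those users' average
        load from the target user's items."""
        k_item = adj.sum(axis=0).clip(min=1)
        k_user = adj.sum(axis=1).clip(min=1)
        f0 = adj[user]                           # initial resource on user's items
        user_temp = (adj * f0).sum(axis=1) / k_user
        return (adj * user_temp[:, None]).sum(axis=0) / k_item

    liked = (ratings >= 4).astype(float)         # positive category (threshold 4)
    disliked = ((ratings > 0) & (ratings <= 2)).astype(float)   # negative category

    u = 0
    interest, dislike = hc_scores(liked, u), hc_scores(disliked, u)
    unseen = ratings[u] == 0
    # Keep unseen items ranked by interest, dropping those dominated by dislike.
    rec = [j for j in np.argsort(-interest) if unseen[j] and dislike[j] <= interest[j]]
    print("recommend items:", rec)
    ```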

  17. Computing rates of Markov models of voltage-gated ion channels by inverting partial differential equations governing the probability density functions of the conducting and non-conducting states.

    PubMed

    Tveito, Aslak; Lines, Glenn T; Edwards, Andrew G; McCulloch, Andrew

    2016-07-01

    Markov models are ubiquitously used to represent the function of single ion channels. However, solving the inverse problem to construct a Markov model of single channel dynamics from bilayer or patch-clamp recordings remains challenging, particularly for channels involving complex gating processes. Methods for solving the inverse problem are generally based on data from voltage clamp measurements. Here, we describe an alternative approach to this problem based on measurements of voltage traces. The voltage traces define probability density functions of the functional states of an ion channel. These probability density functions can also be computed by solving a deterministic system of partial differential equations. The inversion is based on tuning the rates of the Markov models used in the deterministic system of partial differential equations such that the solution mimics the properties of the probability density functions gathered from (pseudo) experimental data as well as possible. The optimization is done by defining a cost function to measure the difference between the deterministic solution and the solution based on experimental data. By evoking the properties of this function, it is possible to infer whether the rates of the Markov model are identifiable by our method. We present applications to Markov models well known from the literature.

  18. Evaluation of induced seismicity forecast models in the Induced Seismicity Test Bench

    NASA Astrophysics Data System (ADS)

    Király, Eszter; Gischig, Valentin; Zechar, Jeremy; Doetsch, Joseph; Karvounis, Dimitrios; Wiemer, Stefan

    2016-04-01

    Induced earthquakes often accompany fluid injection, and the seismic hazard they pose threatens various underground engineering projects. Models to monitor and control induced seismic hazard with traffic light systems should be probabilistic, forward-looking, and updated as new data arrive. Here, we propose an Induced Seismicity Test Bench to test and rank such models. We apply the test bench to data from the Basel 2006 and Soultz-sous-Forêts 2004 geothermal stimulation projects, and we assess forecasts from two models that incorporate a different mix of physical understanding and stochastic representation of the induced sequences: Shapiro in Space (SiS) and Hydraulics and Seismics (HySei). SiS is based on three pillars: the seismicity rate is computed with the help of the seismogenic index and a simple exponential decay of the seismicity; the magnitude distribution follows the Gutenberg-Richter relation; and seismicity is distributed in space by smoothing the seismicity of the learning period with 3D Gaussian kernels. The HySei model describes seismicity triggered by pressure diffusion with irreversible permeability enhancement. Our results show that neither model is fully superior to the other. HySei forecasts the seismicity rate well but is only mediocre at forecasting the spatial distribution. On the other hand, SiS forecasts the spatial distribution well but not the seismicity rate. The shut-in phase is a difficult moment for both models in both reservoirs: the models tend to underpredict the seismicity rate around, and shortly after, shut-in. Ensemble models that combine HySei's rate forecast with SiS's spatial forecast outperform each individual model.
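
    As a toy illustration of the SiS pillars (rate tied to injection via a seismogenic index, exponential decay after shut-in, Gutenberg-Richter magnitudes), the following simulates a forecast; every parameter value and the flow history are invented for illustration and are not calibrated to Basel or Soultz.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    sigma, b_value, tau_days = 0.0, 1.1, 2.0   # seismogenic index, G-R b, decay time
    m_min = 0.5                                 # completeness magnitude
    dt = 0.1                                    # days per bin

    t = np.arange(0, 10, dt)
    flow = np.where(t < 5.0, 30.0, 0.0)         # shut-in at day 5 (toy flow, L/s)

    # Seismogenic-index scaling: rate above m_min proportional to flow rate.
    rate = 10 ** (sigma - b_value * m_min) * flow      # events/day
    decay = rate.copy()
    for i in range(1, len(t)):                  # exponential decay after shut-in
        if flow[i] == 0:
            decay[i] = decay[i - 1] * np.exp(-dt / tau_days)

    n_events = rng.poisson(decay * dt)          # Poisson counts per time bin
    # Gutenberg-Richter magnitudes: exponential above m_min with rate b*ln(10).
    mags = m_min + rng.exponential(1 / (b_value * np.log(10)), n_events.sum())
    print(f"{n_events.sum()} events, max magnitude {mags.max():.2f}")
    ```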

  19. Flow and fracture behavior of aluminum alloy 6082-T6 at different tensile strain rates and triaxialities

    PubMed Central

    Chen, Xuanzhen; Peng, Shan; Yao, Song; Chen, Chao; Xu, Ping

    2017-01-01

    This study investigates the flow and fracture behavior of aluminum alloy 6082-T6 (AA6082-T6) at different strain rates and triaxialities. Two groups of Charpy impact tests were carried out to further investigate its dynamic impact fracture properties, and a series of tensile tests and numerical simulations based on finite element analysis (FEA) were performed. Experimental data on smooth specimens under strain rates ranging from 0.0001 to 3400 s-1 show that AA6082-T6 is rather insensitive to strain rate in general. However, clear rate sensitivity was observed in the range of 0.001~1 s-1, although this characteristic is counteracted by the adiabatic heating of specimens at high strain rates. A Johnson-Cook (J-C) constitutive model was proposed based on the tensile tests at different strain rates. The average stress triaxiality and equivalent plastic strain at fracture obtained from the numerical simulations were used to calibrate the J-C fracture model. Both the J-C constitutive model and the fracture model were employed in numerical simulations, and the results were compared with experimental results. The calibrated J-C fracture model exhibits higher accuracy than the J-C fracture model obtained by the common method in predicting the fracture behavior of AA6082-T6. Finally, Scanning Electron Microscope (SEM) images of fractured specimens with different initial stress triaxialities were analyzed. The magnified fractographs indicate that high initial stress triaxiality likely results in dimple fracture. PMID:28759617
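
    The calibrated constants are not given in the abstract, so the sketch below just writes out the textbook Johnson-Cook flow stress and fracture strain forms with placeholder parameters.

    ```python
    import numpy as np

    # Johnson-Cook flow stress and fracture strain in their textbook forms;
    # all constants below are placeholders, not the calibrated AA6082-T6 values.

    def jc_stress(eps, eps_dot, T, A=290., B=200., n=0.4, C=0.01, m=1.0,
                  eps0=1e-3, T_room=293., T_melt=855.):
        T_star = (T - T_room) / (T_melt - T_room)      # homologous temperature
        return ((A + B * eps ** n)                     # strain hardening
                * (1 + C * np.log(eps_dot / eps0))     # strain-rate hardening
                * (1 - T_star ** m))                   # thermal softening

    def jc_fracture_strain(triax, eps_dot, T, D=(0.07, 1.3, -1.6, 0.01, 1.1),
                           eps0=1e-3, T_room=293., T_melt=855.):
        T_star = (T - T_room) / (T_melt - T_room)
        return ((D[0] + D[1] * np.exp(D[2] * triax))   # triaxiality dependence
                * (1 + D[3] * np.log(eps_dot / eps0))
                * (1 + D[4] * T_star))

    print(jc_stress(eps=0.1, eps_dot=3400., T=293.))           # high-rate flow stress
    print(jc_fracture_strain(triax=0.33, eps_dot=1.0, T=293.)) # smooth tension
    ```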

  20. Eruption mass estimation using infrasound waveform inversion and ash and gas measurements: Evaluation at Sakurajima Volcano, Japan

    NASA Astrophysics Data System (ADS)

    Fee, David; Izbekov, Pavel; Kim, Keehoon; Yokoo, Akihiko; Lopez, Taryn; Prata, Fred; Kazahaya, Ryunosuke; Nakamichi, Haruhisa; Iguchi, Masato

    2017-12-01

    Eruption mass and mass flow rate are critical parameters for determining the aerial extent and hazard of volcanic emissions. Infrasound waveform inversion is a promising technique to quantify volcanic emissions. Although topography may substantially alter the infrasound waveform as it propagates, advances in wave propagation modeling and station coverage permit robust inversion of infrasound data from volcanic explosions. The inversion can estimate eruption mass flow rate and total eruption mass if the flow density is known. However, infrasound-based eruption flow rates and mass estimates have yet to be validated against independent measurements, and numerical modeling has only recently been applied to the inversion technique. Here we present a robust full-waveform acoustic inversion method and use it to calculate eruption flow rates and masses for 49 explosions from Sakurajima Volcano, Japan. Six infrasound stations deployed from 12 to 20 February 2015 recorded the explosions. We compute numerical Green's functions using 3-D Finite Difference Time Domain modeling and a high-resolution digital elevation model. The inversion, assuming a simple acoustic monopole source, provides realistic eruption masses and excellent fit to the data for the majority of the explosions. The inversion results are compared to independent eruption masses derived from ground-based ash collection and volcanic gas measurements. Assuming realistic flow densities, our infrasound-derived eruption masses for ash-rich eruptions compare favorably to the ground-based estimates, with agreement ranging from within a factor of two to one order of magnitude. Uncertainties in the time-dependent flow density and acoustic propagation likely contribute to the mismatch between the methods. Our results suggest that realistic and accurate infrasound-based eruption mass and mass flow rate estimates can be computed using the method employed here. If accurate volcanic flow parameters are known, this technique could be applied broadly to enable near real-time calculation of eruption mass flow rates and total masses. These critical input parameters for volcanic eruption modeling and monitoring are not currently available.

  1. Regression rate behaviors of HTPB-based propellant combinations for hybrid rocket motor

    NASA Astrophysics Data System (ADS)

    Sun, Xingliang; Tian, Hui; Li, Yuelong; Yu, Nanjia; Cai, Guobiao

    2016-02-01

    The purpose of this paper is to characterize the regression rate behavior of hybrid rocket motor propellant combinations using hydrogen peroxide (HP), gaseous oxygen (GOX), or nitrous oxide (N2O) as the oxidizer and hydroxyl-terminated polybutadiene (HTPB) as the base fuel. To carry out this research by experiment and simulation, a hybrid rocket motor test system and a numerical simulation model are established. A series of hybrid rocket motor firing tests is conducted with different propellant combinations, and several of those are used as references for numerical simulations. The numerical simulation model is developed by combining the Navier-Stokes equations with a turbulence model, a one-step global reaction model, and a solid-gas coupling model. The distribution of the regression rate along the axis is determined by applying the simulation model to predict the combustion process and heat transfer inside the hybrid rocket motor. The time- and space-averaged regression rates show good agreement between the numerical values and the experimental data. The results indicate that the N2O/HTPB and GOX/HTPB propellant combinations have higher regression rates, and the enhancement for the latter is significant due to its higher flame temperature. Furthermore, adding aluminum (Al) and/or ammonium perchlorate (AP) to the grain does enhance the regression rate, mainly because more energy is released inside the chamber and fed back to the grain surface by aluminum combustion.
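
    Hybrid regression rate data of this kind are conventionally summarized by the power law r = a·G_ox^n in the oxidizer mass flux; a one-liner with illustrative (not fitted) coefficients:

    ```python
    # Classical hybrid-fuel regression rate law r = a * Gox**n, with Gox the
    # oxidizer mass flux in kg/m^2/s. The a and n below are illustrative, not
    # the values fitted for these HTPB combinations.

    def regression_rate(g_ox, a=0.08, n=0.6):
        return a * g_ox ** n          # mm/s

    for g_ox in (50.0, 100.0, 200.0):           # oxidizer mass flux sweep
        print(f"Gox = {g_ox:5.0f} kg/m^2/s -> r = {regression_rate(g_ox):.2f} mm/s")
    ```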

  2. Reconstruction of interaction rate in holographic dark energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mukherjee, Ankan, E-mail: ankan_ju@iiserkol.ac.in

    2016-11-01

    The present work is based on the holographic dark energy model with the Hubble horizon as the infrared cut-off. The interaction rate between dark energy and dark matter has been reconstructed for three different parameterizations of the deceleration parameter. Observational constraints on the model parameters have been obtained by maximum likelihood analysis using observational Hubble parameter data (OHD), type Ia supernova data (SNe), baryon acoustic oscillation data (BAO) and the distance prior of the cosmic microwave background (CMB), namely the CMB shift parameter (CMBShift). The interaction rate obtained in the present work remains always positive and increases with expansion. It is very similar to the result obtained by Sen and Pavon [1], where the interaction rate was reconstructed for a parametrization of the dark energy equation of state. Tighter constraints on the interaction rate have been obtained in the present work, as it is based on larger data sets. The nature of the dark energy equation of state parameter has also been studied for the present models. Though the reconstruction is done from different parametrizations, the overall nature of the interaction rate is very similar in all the cases. Different information criteria and the Bayesian evidence, which have been invoked in the context of model selection, show that these models are in close proximity to each other.

  3. Examining the effect of down regulation under high [CO2] on the growth of soybean assimilating a semi process-based model and FACE data

    NASA Astrophysics Data System (ADS)

    Sakurai, G.; Iizumi, T.; Yokozawa, M.

    2011-12-01

    The actual impact of elevated [CO2], interacting with other climatic factors, on crop growth is still debated. In many process-based crop models, the response of single-leaf photosynthesis to environmental factors is described using the biochemical model of Farquhar et al. (1980). However, the decline in photosynthetic enhancement known as down regulation has not been taken into account, and because the mechanisms causing photosynthetic down regulation are still unknown, it is difficult to include this effect in process-based crop models. Free-air CO2 enrichment (FACE) experiments have reported the effect of down regulation under actual environments. One effective approach to incorporating these results into future crop yield prediction is to develop a semi process-based crop growth model that includes the effect of photosynthetic down regulation as a statistical model, and to assimilate the data obtained in FACE experiments. In this study, we statistically estimated the parameters of a semi process-based model for soybean growth ('SPM-soybean') using a hierarchical Bayesian method with the FACE data on soybeans (Morgan et al. 2005). We also evaluated the effect of down regulation on soybean yield under future climatic conditions. The model selection analysis showed that an effective correction to the overestimation of Farquhar's biochemical C3 model was to reduce the maximum rate of carboxylation (Vcmax) under elevated [CO2]. Interestingly, however, the difference in the estimated final crop yields between the corrected and non-corrected models was very slight (Fig. 1a) for future projections under a climate change scenario (MIROC-ESM). This was because the reduction in Vcmax also reduced the base dark respiration rate of leaves. Because the dark respiration rate increases exponentially with temperature, a slight difference in the base respiration rate becomes a large difference at the high temperatures of future climate scenarios. In other words, if the temperature rise under elevated [CO2] is very small or zero, the effect of down regulation appears clearly (Fig. 1b). These results suggest that further experimental data considering combined high-CO2 and high-temperature effects under field conditions are important for refining model projections of future crop yield through data assimilation.

  4. Prediction of the Dynamic Yield Strength of Metals Using Two Structural-Temporal Parameters

    NASA Astrophysics Data System (ADS)

    Selyutina, N. S.; Petrov, Yu. V.

    2018-02-01

    The behavior of the yield strength of steel and a number of aluminum alloys is investigated over a wide range of strain rates, based on the incubation time criterion of yield and the empirical models of Johnson-Cook and Cowper-Symonds. Expressions for the parameters of the empirical models are derived from the characteristics of the incubation time criterion, and satisfactory agreement between these expressions and experimental results is obtained. The parameters of the empirical models are found to depend on the strain rate. The independence of the characteristics of the incubation time criterion from the loading history, and their connection with the structural and temporal features of the plastic deformation process, give the incubation-time approach an advantage over the empirical models and provide an effective and convenient equation for determining the yield strength over a wider range of strain rates.
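
    For reference, the two empirical forms the incubation time criterion is compared against have simple standard shapes; the sketch below writes them out with placeholder constants, not the paper's fitted values.

    ```python
    import math

    def cowper_symonds(sigma_static, eps_dot, D=1300.0, q=5.0):
        # Dynamic yield stress scaling: sigma_d = sigma_s * (1 + (eps_dot/D)**(1/q))
        return sigma_static * (1.0 + (eps_dot / D) ** (1.0 / q))

    def johnson_cook_rate_term(sigma_static, eps_dot, C=0.014, eps_dot0=1.0):
        # Rate-dependent part of Johnson-Cook: sigma = sigma_s * (1 + C ln(rate/ref))
        return sigma_static * (1.0 + C * math.log(eps_dot / eps_dot0))

    for rate in (1.0, 100.0, 10000.0):     # strain rates, s^-1
        print(f"{rate:8.0f} 1/s  CS: {cowper_symonds(350, rate):6.1f} MPa"
              f"   JC: {johnson_cook_rate_term(350, rate):6.1f} MPa")
    ```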

  5. Modeling the dissipation rate in rotating turbulent flows

    NASA Technical Reports Server (NTRS)

    Speziale, Charles G.; Raj, Rishi; Gatski, Thomas B.

    1990-01-01

    A variety of modifications to the modeled dissipation rate transport equation that have been proposed during the past two decades to account for rotational strains are examined. The models are subjected to two crucial test cases: the decay of isotropic turbulence in a rotating frame and homogeneous shear flow in a rotating frame. It is demonstrated that these modifications do not yield substantially improved predictions for these two test cases and in many instances give rise to unphysical behavior. An alternative proposal, based on the use of the tensor dissipation rate, is made for the development of improved models.

  6. A lattice hydrodynamic model based on delayed feedback control considering the effect of flow rate difference

    NASA Astrophysics Data System (ADS)

    Wang, Yunong; Cheng, Rongjun; Ge, Hongxia

    2017-08-01

    In this paper, a lattice hydrodynamic model is derived that considers not only the effect of the flow rate difference but also a delayed feedback control signal that includes more comprehensive information. The control method is used to analyze the stability of the model. Furthermore, the critical condition for linear stability of the traffic flow is deduced, and numerical simulation is carried out to investigate the advantage of the proposed model with and without the effect of the flow rate difference and the control signal. The results are consistent with the theoretical analysis.

  7. Modeling of the interest rate policy of the central bank of Russia

    NASA Astrophysics Data System (ADS)

    Shelomentsev, A. G.; Berg, D. B.; Detkov, A. A.; Rylova, A. P.

    2017-11-01

    This paper investigates interactions among the money supply, exchange rates, inflation, and nominal interest rates, which are the regulating parameters of the Central Bank's policy. The study is based on Russian data for 2002-2016. The major findings are: 1) the interest rate demonstrates almost no relation to inflation; 2) the ties between the money supply and the nominal interest rate are strong; 3) the money supply and inflation show meaningful relations only when compared through their growth rates. We have developed a dynamic model that can be used in forecasting macroeconomic processes.

  8. Estimates of Stellar Weak Interaction Rates for Nuclei in the Mass Range A=65-80

    NASA Astrophysics Data System (ADS)

    Pruet, Jason; Fuller, George M.

    2003-11-01

    We estimate lepton capture and emission rates, as well as neutrino energy loss rates, for nuclei in the mass range A=65-80. These rates are calculated on a temperature/density grid appropriate for a wide range of astrophysical applications including simulations of late time stellar evolution and X-ray bursts. The basic inputs in our single-particle and empirically inspired model are (i) experimentally measured level information, weak transition matrix elements, and lifetimes, (ii) estimates of matrix elements for allowed experimentally unmeasured transitions based on the systematics of experimentally observed allowed transitions, and (iii) estimates of the centroids of the GT resonances motivated by shell model calculations in the fp shell as well as by (n, p) and (p, n) experiments. Fermi resonances (isobaric analog states) are also included, and it is shown that Fermi transitions dominate the rates for most interesting proton-rich nuclei for which an experimentally determined ground state lifetime is unavailable. For the purposes of comparing our results with more detailed shell model based calculations we also calculate weak rates for nuclei in the mass range A=60-65 for which Langanke & Martinez-Pinedo have provided rates. The typical deviation in the electron capture and β-decay rates for these ~30 nuclei is less than a factor of 2 or 3 for a wide range of temperature and density appropriate for presupernova stellar evolution. We also discuss some subtleties associated with the partition functions used in calculations of stellar weak rates and show that the proper treatment of the partition functions is essential for estimating high-temperature β-decay rates. In particular, we show that partition functions based on unconverged Lanczos calculations can result in errors in estimates of high-temperature β-decay rates.

  9. Application of a data assimilation method via an ensemble Kalman filter to reactive urea hydrolysis transport modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Juxiu Tong; Bill X. Hu; Hai Huang

    2014-03-01

    With the growing importance of water resources worldwide, remediation of anthropogenic contamination involving reactive solute transport becomes ever more important. A good understanding of reactive rate parameters, such as kinetic parameters, is the key to accurately predicting reactive solute transport processes and designing corresponding remediation schemes. In modeling reactive solute transport, it is very difficult to estimate chemical reaction rate parameters due to the complexity of chemical reaction processes and limited available data. To obtain the reactive rate parameters for reactive urea hydrolysis transport modeling and more accurate predictions of the chemical concentrations, we developed a data assimilation method based on an ensemble Kalman filter (EnKF) to calibrate the reactive rate parameters for modeling urea hydrolysis transport in a synthetic one-dimensional laboratory-scale column and to update the model predictions. We applied a constrained EnKF method to impose constraints on the updated reactive rate parameters and the predicted solute concentrations, based on their physical meanings, after the data assimilation calibration. From the study results we concluded that we could efficiently improve the chemical reactive rate parameters with the data assimilation method via the EnKF and, at the same time, improve the solute concentration prediction. The more data we assimilated, the more accurate the reactive rate parameters and concentration predictions. The filter divergence problem was also solved in this study.
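
    A minimal sketch of the stochastic EnKF analysis step at the core of such a scheme follows, with a non-negativity clip standing in for the physically motivated constraints described above; the dimensions, observation operator, and noise levels are all illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_state, n_obs, n_ens = 4, 2, 50

    # Forecast ensemble: e.g. two rate parameters plus two concentrations.
    X = rng.normal(1.0, 0.3, size=(n_state, n_ens))
    H = np.zeros((n_obs, n_state)); H[0, 2] = H[1, 3] = 1.0   # observe concentrations
    R = 0.05 ** 2 * np.eye(n_obs)                              # observation noise
    d = np.array([0.8, 1.3])                                   # observed concentrations

    # Sample covariance of the forecast ensemble.
    A = X - X.mean(axis=1, keepdims=True)
    P = A @ A.T / (n_ens - 1)

    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)               # Kalman gain
    D = d[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T  # perturbed obs
    Xa = X + K @ (D - H @ X)                                   # analysis ensemble

    np.clip(Xa, 0.0, None, out=Xa)      # crude physical constraint: non-negative
    print("posterior mean:", Xa.mean(axis=1).round(3))
    ```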

  10. The effect of recruitment rate and other demographic parameters on the transmission of dengue disease

    NASA Astrophysics Data System (ADS)

    Supriatna, A. K.; Anggriani, N.

    2015-03-01

    One important factor that appears in most mathematical models of dengue transmission is the number of new susceptibles recruited into the susceptible compartment. In this paper we discuss the effect of different recruitment rates on the transmission of dengue disease. We choose a dengue transmission model with the most realistic form of recruitment rate and analyze the effect of environmental change on dengue transmission based on the selected model. We model the effect of environmental change by considering that it can alter the mosquito carrying capacity and the mosquito death rate. We found that the most prevalent effect of environmental change on dengue transmission occurs when it alters the mosquito death rate.

  11. Analytical Modelling of the Spread of Disease in Confined and Crowded Spaces

    NASA Astrophysics Data System (ADS)

    Goscé, Lara; Barton, David A. W.; Johansson, Anders

    2014-05-01

    Since 1927 and until recently, most models describing the spread of disease have been of compartmental type, based on the assumption that populations are homogeneous and well-mixed. Recent models have utilised agent-based models and complex networks to explicitly study heterogeneous interaction patterns, but this leads to increased computational complexity. Compartmental models are appealing because of their simplicity, but their parameters, especially the transmission rate, are complex and depend on a number of factors, which makes it hard to predict how a change of a single environmental, demographic, or epidemiological factor will affect the population. Therefore, in this contribution we propose a middle ground, utilising crowd-behaviour research to improve compartmental models in crowded situations. We show how both the rate of infection and the walking speed depend on the local crowd density around an infected individual. The combined effect is that the rate of infection at a population scale has an analytically tractable non-linear dependency on crowd density. We model the spread of a hypothetical disease in a corridor and compare our new model with a typical compartmental model, which highlights the regime in which current models may not produce credible results.

  12. Numerical modeling study on the epitaxial growth of silicon from dichlorosilane

    NASA Astrophysics Data System (ADS)

    Zaidi, Imama; Jang, Yeon-Ho; Ko, Dong Guk; Im, Ik-Tae

    2018-02-01

    Computer simulations play an important role in determining the optimal design parameters for chemical vapor deposition (CVD) reactors, such as flow rates, positions of the inlet and outlet orifices, and rotational rates. The reliability of these simulations depends on the set of chemical reactions used to represent the deposition process in the reactor. The aim of the present work is to validate a simple empirical reaction model for the epitaxial growth of silicon in a dichlorosilane (DCS)-H2 system. The governing equations for continuity, momentum, energy, and reacting species are solved numerically using the finite volume method. The agreement between experimental and predicted growth rates for various DCS flow rates is shown to be satisfactory. The increase in growth rate with increasing pressure is in accordance with the available data. Based on the validated chemical reaction model, a study was carried out to analyze the uniformity of the silicon layer thickness for two different flow rates in a planetary reactor. It was concluded that, for the operating conditions considered, the uniformity of the silicon layer over the wafer is independent of the satellite rotational rate in the reactor.

  13. Frequency-dependent selection predicts patterns of radiations and biodiversity.

    PubMed

    Melián, Carlos J; Alonso, David; Vázquez, Diego P; Regetz, James; Allesina, Stefano

    2010-08-26

    Most empirical studies support a decline in speciation rates through time, although evidence for constant speciation rates also exists. Declining rates have been explained by invoking pre-existing niches, whereas constant rates have been attributed to non-adaptive processes such as sexual selection and mutation. Trends in speciation rate and the processes underlying them remain unclear, representing a critical information gap in understanding patterns of global diversity. Here we show that the temporal trend in the speciation rate can also be explained by frequency-dependent selection. We construct a frequency-dependent and DNA sequence-based model of speciation. We compare our model to empirical diversity patterns observed for cichlid fish and Darwin's finches, two classic systems for which speciation rates and richness data exist. Negative frequency-dependent selection predicts well the declining speciation rate found in cichlid fish and explains their species richness. For groups like Darwin's finches, in which speciation rates are constant and diversity is lower, the speciation rate is better explained by a model without frequency-dependent selection. Our analysis shows that differences in diversity may be driven by incipient species abundance with frequency-dependent selection. Our results demonstrate that genetic-distance-based speciation and frequency-dependent selection are sufficient to explain the high diversity observed in natural systems and, importantly, predict the decay through time in speciation rate in the absence of pre-existing niches.

  14. Self-rated health, multimorbidity and depression in Mexican older adults: Proposal and evaluation of a simple conceptual model.

    PubMed

    Bustos-Vázquez, Eduardo; Fernández-Niño, Julián Alfredo; Astudillo-Garcia, Claudia Iveth

    2017-04-01

    Self-rated health is an individual and subjective conceptualization involving the intersection of biological, social and psychological factors. It provides an invaluable and unique evaluation of a person's general health status. Our objective was to propose and evaluate a simple conceptual model for understanding self-rated health and its relationship to multimorbidity, disability and depressive symptoms in Mexican older adults. We conducted a cross-sectional study based on a nationally representative sample of 8,874 adults 60 years of age and older. Self-perception of a positive health status was determined according to a Likert-type scale based on the question: "What do you think is your current health status?" Intermediate variables included multimorbidity, disability and depressive symptoms, as well as dichotomous exogenous variables (sex, having a partner, participation in decision-making and poverty). The proposed conceptual model was validated using a general structural equation model with a logit link function for positive self-rated health. A direct association was found between multimorbidity and positive self-rated health (OR=0.48; 95% CI: 0.42-0.55), disability and positive self-rated health (OR=0.35; 95% CI: 0.30-0.40), and depressive symptoms and positive self-rated health (OR=0.38; 95% CI: 0.34-0.43). The model also validated indirect associations between disability and depressive symptoms (OR=2.25; 95% CI: 2.01-2.52), multimorbidity and depressive symptoms (OR=1.79; 95% CI: 1.61-2.00) and multimorbidity and disability (OR=1.98; 95% CI: 1.78-2.20). A parsimonious theoretical model was thus empirically evaluated, enabling the identification of direct and indirect associations with positive self-rated health.

  15. Failure rate and reliability of the KOMATSU hydraulic excavator in surface limestone mine

    NASA Astrophysics Data System (ADS)

    Harish Kumar N., S.; Choudhary, R. P.; Murthy, Ch. S. N.

    2018-04-01

    A model with a bathtub-shaped failure rate function is helpful in the reliability analysis of any system, and particularly in reliability-based preventive maintenance. The usual Weibull distribution is, however, not capable of modeling the complete lifecycle of a system with a bathtub-shaped failure rate function. In this paper, a failure rate and reliability analysis of the KOMATSU hydraulic excavator/shovel in a surface limestone mine is presented, with the aim of improving the reliability and decreasing the failure rate of each subsystem of the shovel through preventive maintenance. The bathtub-shaped model for the shovel can also be seen as a simplification of the Weibull distribution.
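
    The Weibull hazard h(t) = (β/η)(t/η)^(β−1) is monotone in t, which is why a single Weibull cannot trace the full bathtub curve; a crude piecewise illustration with invented parameters:

    ```python
    # Weibull hazard: decreasing for beta < 1, constant for beta = 1, increasing
    # for beta > 1 -- so one Weibull piece covers only one life phase. All
    # parameters below are illustrative, not fitted to the excavator data.

    def weibull_hazard(t, beta, eta):
        return (beta / eta) * (t / eta) ** (beta - 1)

    phases = {"infant mortality": (0.6, 800.0),   # beta < 1: early failures
              "useful life":      (1.0, 1500.0),  # beta = 1: random failures
              "wear-out":         (2.8, 1200.0)}  # beta > 1: ageing

    t = 400.0   # operating hours
    for name, (beta, eta) in phases.items():
        print(f"{name:17s} h({t:.0f} h) = {weibull_hazard(t, beta, eta):.2e} /h")
    ```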

  16. A New Global Geodetic Strain Rate Model

    NASA Astrophysics Data System (ADS)

    Kreemer, C. W.; Klein, E. C.; Blewitt, G.; Shen, Z.; Wang, M.; Chamot-Rooke, N. R.; Rabaute, A.

    2012-12-01

    As part of the Global Earthquake Model (GEM) effort to improve global seismic hazard models, we present a new global geodetic strain rate model. This model (GSRM v. 2) is a vast improvement on the previous model from 2004 (v. 1.2). The model is still based on a finite-element type approach and has deforming cells in between the assumed rigid plates. While v. 1.2 contained ~25,000 deforming cells of 0.6° by 0.5° dimension, the new model contains >136,000 cells of 0.25° by 0.2° dimension. We redefined the geometries of the deforming zones based on the definitions of Bird (2003) and Chamot-Rooke and Rabaute (2006). We made some adjustments to the grid geometry at places where seismicity and/or GPS velocities suggested the presence of deforming areas where those previous studies did not. As a result, some plates/blocks identified by Bird (2003) are assumed here to deform, and the total number of plates and blocks in GSRM v. 2 is 38 (including the Bering block, which Bird (2003) did not consider). GSRM v. 1.2 was based on ~5,200 GPS velocities, taken from 86 studies. The new model is based on ~17,000 GPS velocities, taken from 170 studies. The GPS velocity field consists of 1) ~4,900 velocities derived by us for GPS stations with publicly available RINEX data and >3.5 years of data, 2) ~1,200 velocities for China from a new analysis of all CMONOC data, and 3) velocities published in the literature or made otherwise available to us. All studies were combined into the same reference frame by a 6-parameter transformation using velocities at collocated stations. Because the goal of the project is to model the interseismic strain rate field, we model co-seismic jumps while estimating velocities, ignore periods of post-seismic deformation, and exclude time series that reflect magmatic and anthropogenic activity. GPS velocities were used to estimate angular velocities for most of the 38 rigid plates and blocks (the rest being taken from the literature), and these were used as boundary conditions for the strain rate calculations. For the strain rate calculations we used the method of Haines and Holt. In order to fit the data equally well in slowly and rapidly deforming areas, we first calculated a very smooth model by setting the a priori variances of the strain rate components very low. We then used this model as a proxy for the a priori standard deviations of the final model. To add some more constraints to the model (to make it more stable), we manipulated the a priori covariance matrix to reflect the expected style of deformation derived from (an interpolation of) shallow earthquake focal mechanisms. We will show examples of the strain rate and velocity field results. We will also highlight how and where the results can be viewed and accessed through a dedicated web portal.

  17. Sobol‧ sensitivity analysis of NAPL-contaminated aquifer remediation process based on multiple surrogates

    NASA Astrophysics Data System (ADS)

    Luo, Jiannan; Lu, Wenxi

    2014-06-01

    Sobol' sensitivity analyses based on different surrogates were performed on a trichloroethylene (TCE)-contaminated aquifer to assess the sensitivity of remediation efficiency to the design variables: remediation duration, surfactant concentration and injection rates at four wells. First, surrogate models of a multi-phase flow simulation model were constructed using the radial basis function artificial neural network (RBFANN) and Kriging methods, and the two models were compared. Based on the developed surrogate models, the Sobol' method was used to calculate the sensitivity indices of the design variables that affect the remediation efficiency. The coefficient of determination (R2) and the mean square error (MSE) of these two surrogate models demonstrated that both had acceptable approximation accuracy; furthermore, the approximation accuracy of the Kriging model was slightly better than that of the RBFANN model. The Sobol' sensitivity analysis results demonstrated that the remediation duration was the most important variable influencing remediation efficiency, followed by the rates of injection at wells 1 and 3, while the rates of injection at wells 2 and 4 and the surfactant concentration had negligible influence on remediation efficiency. In addition, the high-order sensitivity indices were all smaller than 0.01, indicating that interaction effects of these six factors were practically insignificant. The proposed surrogate-based Sobol' sensitivity analysis is an effective tool for calculating sensitivity indices, because it shows the relative contribution of the design variables (individually and in interaction) to the output performance variability with a limited number of runs of a computationally expensive simulation model. The sensitivity analysis results lay a foundation for optimization of the groundwater remediation process.

  18. Estimation of rates-across-sites distributions in phylogenetic substitution models.

    PubMed

    Susko, Edward; Field, Chris; Blouin, Christian; Roger, Andrew J

    2003-10-01

    Previous work has shown that it is often essential to account for the variation in rates at different sites in phylogenetic models in order to avoid phylogenetic artifacts such as long branch attraction. In most current models, the gamma distribution is used for the rates-across-sites distribution and is implemented as an equal-probability discrete gamma. In this article, we introduce discrete distribution estimates with large numbers of equally spaced rate categories, allowing us to investigate the appropriateness of the gamma model. With large numbers of rate categories, these discrete estimates are flexible enough to approximate the shape of almost any distribution. Likelihood ratio statistical tests and a nonparametric bootstrap confidence-bound estimation procedure based on the discrete estimates are presented that can be used to test the fit of a parametric family. We applied the methodology to several different protein data sets and found that although the gamma model often provides a good parametric model for this type of data, rate estimates from an equal-probability discrete gamma model with a small number of categories will tend to underestimate the largest rates. In cases where the gamma model assumption is in doubt, rate estimates coming from the discrete rate distribution estimate with a large number of rate categories provide a robust alternative to gamma estimates. An alternative implementation of the gamma distribution is proposed that, for equal numbers of rate categories, is computationally more efficient during optimization than the standard gamma implementation and can provide more accurate estimates of site rates.
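
    The equal-probability discrete gamma referred to above (commonly attributed to Yang, 1994) can be written compactly: cut the unit-mean gamma distribution into K equiprobable categories and take each category's conditional mean as its rate. The sketch below (using SciPy) also shows how the largest rate shrinks when K is small, the underestimation effect the authors describe.

    ```python
    from scipy.stats import gamma

    def discrete_gamma_rates(alpha, k):
        """Equal-probability discrete gamma: cut the gamma(alpha) rate
        distribution (mean fixed at 1) into k equiprobable categories and use
        each category's conditional mean as its rate."""
        scale = 1.0 / alpha                       # mean = alpha * scale = 1
        bounds = gamma.ppf([i / k for i in range(k + 1)], alpha, scale=scale)
        # E[X in (a, b)] for a gamma uses the CDF with shape alpha + 1.
        upper = gamma.cdf(bounds[1:], alpha + 1, scale=scale)
        lower = gamma.cdf(bounds[:-1], alpha + 1, scale=scale)
        return (upper - lower) * k                # conditional means, average = 1

    # With few categories the top rate is pulled down toward its category mean.
    for k in (4, 16, 64):
        print(k, discrete_gamma_rates(0.5, k)[-1].round(3))
    ```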

  19. Prediction of subjective ratings of emotional pictures by EEG features

    NASA Astrophysics Data System (ADS)

    McFarland, Dennis J.; Parvaz, Muhammad A.; Sarnacki, William A.; Goldstein, Rita Z.; Wolpaw, Jonathan R.

    2017-02-01

    Objective. Emotion dysregulation is an important aspect of many psychiatric disorders. Brain-computer interface (BCI) technology could be a powerful new approach to facilitating therapeutic self-regulation of emotions. One possible BCI method would be to provide stimulus-specific feedback based on subject-specific electroencephalographic (EEG) responses to emotion-eliciting stimuli. Approach. To assess the feasibility of this approach, we studied the relationships between emotional valence/arousal and three EEG features: amplitude of alpha activity over frontal cortex; amplitude of theta activity over frontal midline cortex; and the late positive potential over central and posterior mid-line areas. For each feature, we evaluated its ability to predict emotional valence/arousal on both an individual and a group basis. Twenty healthy participants (9 men, 11 women; ages 22-68) rated each of 192 pictures from the IAPS collection in terms of valence and arousal twice (96 pictures on each of 4 d over 2 weeks). EEG was collected simultaneously and used to develop models based on canonical correlation to predict subject-specific single-trial ratings. Separate models were evaluated for the three EEG features: frontal alpha activity; frontal midline theta; and the late positive potential. In each case, these features were used to simultaneously predict both the normed ratings and the subject-specific ratings. Main results. Models using each of the three EEG features with data from individual subjects were generally successful at predicting subjective ratings on training data, but generalization to test data was less successful. Sparse models performed better than models without regularization. Significance. The results suggest that the frontal midline theta is a better candidate than frontal alpha activity or the late positive potential for use in a BCI-based paradigm designed to modify emotional reactions.

  20. The Modellers' Halting Foray into Ecological Theory: Or, What is This Thing Called 'Growth Rate'?

    PubMed

    Deveau, Michael; Karsten, Richard; Teismann, Holger

    2015-06-01

    This discussion paper describes the attempt of an imagined group of non-ecologists ("Modellers") to determine the population growth rate from field data. The Modellers wrestle with the multiple definitions of the growth rate available in the literature and the fact that, in their modelling, it appears to be drastically model-dependent, which seems to throw into question the very concept itself. Specifically, they observe that six representative models used to capture the data produce growth-rate values, which differ significantly. Almost ready to concede that the problem they set for themselves is ill-posed, they arrive at an alternative point of view that not only preserves the identity of the concept of the growth rate, but also helps discriminate between competing models for capturing the data. This is accomplished by assessing how robustly a given model is able to generate growth-rate values from randomized time-series data. This leads to the proposal of an iterative approach to ecological modelling in which the definition of theoretical concepts (such as the growth rate) and model selection complement each other. The paper is based on high-quality field data of mites on apple trees and may be called a "data-driven opinion piece".

  1. The Gifted Rating Scales-School Form: An Analysis of the Standardization Sample Based on Age, Gender, Race, and Diagnostic Efficiency

    ERIC Educational Resources Information Center

    Pfeiffer, Steven I.; Jarosewich, Tania

    2007-01-01

    This study analyzes the standardization sample of a new teacher rating scale designed to assist in the identification of gifted students. The Gifted Rating Scales-School Form (GRS-S) is based on a multidimensional model of giftedness. Results indicate no age or race/ethnicity differences on any of the scales and small but significant differences…

  2. Cyclic plasticity models and application in fatigue analysis

    NASA Technical Reports Server (NTRS)

    Kalev, I.

    1981-01-01

    An analytical procedure for predicting the effects of cyclic plasticity on both the structural fatigue life to crack initiation and the rate of crack growth is presented. The crack initiation criterion is based on the Coffin-Manson formulae, extended to multiaxial stress states and to include the mean stress effect. This criterion is also applied to the accumulated damage ahead of the existing crack tip, which is assumed to be related to the crack growth rate. Three cyclic plasticity models, based on the concept of a combination of several yield surfaces, are employed for computing the crack growth rate of a cracked plane stress panel under several cyclic loading conditions.
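
    The Coffin-Manson strain-life relation, with Morrow's correction as one common way of introducing the mean stress effect mentioned above, can be sketched as follows; the material constants are illustrative, roughly steel-like, and not taken from the paper.

    ```python
    import math

    def strain_amplitude(n_rev, sigma_f=900.0, b=-0.09, eps_f=0.6, c=-0.56,
                         E=200e3, sigma_mean=0.0):
        """Total strain amplitude at 2N = n_rev reversals (stresses in MPa)."""
        elastic = (sigma_f - sigma_mean) / E * n_rev ** b  # Basquin term, Morrow-corrected
        plastic = eps_f * n_rev ** c                       # Coffin-Manson term
        return elastic + plastic

    def life_for_strain(target, sigma_mean=0.0):
        """Invert the monotone strain-life curve by bisection on log(2N)."""
        lo, hi = 1.0, 1e9
        for _ in range(200):
            mid = math.sqrt(lo * hi)                       # geometric bisection
            if strain_amplitude(mid, sigma_mean=sigma_mean) > target:
                lo = mid
            else:
                hi = mid
        return mid

    print(f"2N at 0.4% strain amp: {life_for_strain(0.004):.3g} reversals")
    print(f"... with 100 MPa mean: {life_for_strain(0.004, sigma_mean=100.0):.3g}")
    ```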

  3. Analysis of an algae-based CELSS. I - Model development

    NASA Technical Reports Server (NTRS)

    Holtzapple, Mark T.; Little, Frank E.; Makela, Merry E.; Patterson, C. O.

    1989-01-01

    A steady state chemical model and computer program have been developed for a life support system and applied to trade-off studies. The model is based on human demand for food and oxygen determined from crew metabolic needs. The model includes modules for water recycle, waste treatment, CO2 removal and treatment, and food production. The computer program calculates rates of use and material balance for food, O2, the recycle of human waste and trash, H2O, N2, and food production/supply. A simple noniterative solution for the model has been developed using the steady state rate equations for the chemical reactions. The model and program have been used in system sizing and subsystem trade-off studies of a partially closed life support system.

  5. Mesoscopic modeling of DNA denaturation rates: Sequence dependence and experimental comparison

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dahlen, Oda, E-mail: oda.dahlen@ntnu.no; Erp, Titus S. van, E-mail: titus.van.erp@ntnu.no

    Using rare event simulation techniques, we calculated DNA denaturation rate constants for a range of sequences and temperatures for the Peyrard-Bishop-Dauxois (PBD) model with two different parameter sets. We studied a larger variety of sequences than previous studies, which only considered DNA homopolymers and DNA sequences containing an equal number of weak AT and strong GC base pairs. Our results show that, contrary to previous findings, an even distribution of the strong GC base pairs does not always result in the fastest possible denaturation. In addition, we applied an adaptation of the PBD model to study hairpin denaturation, for which experimental data are available. This is the first quantitative study in which dynamical results from the mesoscopic PBD model have been compared with experiments. Our results show that presently parameterized models, although giving good results for thermodynamic properties, overestimate denaturation rates by orders of magnitude. We believe that our dynamical approach is, therefore, an important tool for verifying DNA models and for developing next-generation models with higher predictive power than present ones.

  6. A Microstructure-Based Constitutive Model for Superplastic Forming

    NASA Astrophysics Data System (ADS)

    Jafari Nedoushan, Reza; Farzin, Mahmoud; Mashayekhi, Mohammad; Banabic, Dorel

    2012-11-01

    A constitutive model is proposed for simulations of hot metal forming processes. This model is constructed from the dominant mechanisms that take part in hot forming: intergranular deformation, grain boundary sliding, and grain boundary diffusion. A Taylor-type polycrystalline model is used to predict intergranular deformation. Previous works on grain boundary sliding and grain boundary diffusion are extended to derive three-dimensional macroscopic stress-strain rate relationships for each mechanism. In these relationships, the effect of grain size is also taken into account. The proposed model is first used to simulate step strain-rate tests, and the results are compared with experimental data. It is shown that the model can be used to predict flow stresses for various grain sizes and strain rates. The yield locus is then predicted for multiaxial stress states, and it is observed to be very close to the von Mises yield criterion. It is also shown that the proposed model can be directly used to simulate hot forming processes. The bulge forming process and gas-pressure tray forming are simulated, and the results are compared with experimental data.

  7. A review of air exchange rate models for air pollution exposure assessments.

    PubMed

    Breen, Michael S; Schultz, Bradley D; Sohn, Michael D; Long, Thomas; Langstaff, John; Williams, Ronald; Isaacs, Kristin; Meng, Qing Yu; Stallings, Casson; Smith, Luther

    2014-11-01

    A critical aspect of air pollution exposure assessments is estimation of the air exchange rate (AER) for various buildings where people spend their time. The AER, which is the rate of exchange of indoor air with outdoor air, is an important determinant for entry of outdoor air pollutants and for removal of indoor-emitted air pollutants. This paper presents an overview and critical analysis of the scientific literature on empirical and physically based AER models for residential and commercial buildings; the models highlighted here are feasible for exposure assessments as extensive inputs are not required. Models are included for the three types of airflows that can occur across building envelopes: leakage, natural ventilation, and mechanical ventilation. Guidance is provided to select the preferable AER model based on available data, desired temporal resolution, types of airflows, and types of buildings included in the exposure assessment. For exposure assessments with some limited building leakage or AER measurements, strategies are described to reduce AER model uncertainty. This review will facilitate the selection of AER models in support of air pollution exposure assessments.

  8. Application of a Reynolds stress turbulence model to the compressible shear layer

    NASA Technical Reports Server (NTRS)

    Sarkar, S.; Balakrishnan, L.

    1990-01-01

    Theoretically based turbulence models have had success in predicting many features of incompressible, free shear layers. However, attempts to extend these models to the high-speed, compressible shear layer have been less effective. In the present work, the compressible shear layer was studied with a second-order turbulence closure, which initially used only variable density extensions of incompressible models for the Reynolds stress transport equation and the dissipation rate transport equation. The quasi-incompressible closure was unsuccessful; the predicted effect of the convective Mach number on the shear layer growth rate was significantly smaller than that observed in experiments. Having thus confirmed that compressibility effects have to be explicitly considered, a new model for the compressible dissipation was introduced into the closure. This model is based on a low Mach number, asymptotic analysis of the Navier-Stokes equations, and on direct numerical simulation of compressible, isotropic turbulence. The use of the new model for the compressible dissipation led to good agreement of the computed growth rates with the experimental data. Both the computations and the experiments indicate a dramatic reduction in the growth rate when the convective Mach number is increased. Experimental data on the normalized maximum turbulence intensities and shear stress also show a reduction with increasing Mach number.

  9. Modelling coral calcification accounting for the impacts of coral bleaching and ocean acidification

    NASA Astrophysics Data System (ADS)

    Evenhuis, C.; Lenton, A.; Cantin, N. E.; Lough, J. M.

    2015-05-01

    Coral reefs are diverse ecosystems that are threatened by rising CO2 levels through increases in sea surface temperature and ocean acidification. Here we present a new unified model that links changes in temperature and carbonate chemistry to coral health. Changes in coral health and population are explicitly modelled by linking rates of growth, recovery and calcification to rates of bleaching and temperature-stress-induced mortality. The model is underpinned by four key principles: the Arrhenius equation, thermal specialisation, correlated up- and down-regulation of traits that are consistent with resource allocation trade-offs, and adaptation to local environments. These general relationships allow this model to be constructed from a range of experimental and observational data. The performance of the model is assessed against independent data to demonstrate how it can capture the observed response of corals to stress. We also provide new insights into the factors that determine calcification rates and provide a framework based on well-known biological principles to help understand the observed global distribution of calcification rates. Our results suggest that, despite the inherent complexity of the coral reef environment, a simple model based on temperature, carbonate chemistry and species differences can give insights into how corals respond to changes in temperature and ocean acidification.
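
    As a toy illustration of the first two principles (not the paper's calibrated model), the sketch below multiplies an Arrhenius baseline by a Gaussian thermal-specialisation window centred on an assumed locally adapted optimum; all constants are invented.

```python
# Toy calcification-rate curve: Arrhenius baseline times a thermal-
# specialisation window; every parameter value is invented for illustration.
import numpy as np

def calcification(T_C, A=1e10, Ea=60e3, T_opt=28.0, width=3.0):
    R = 8.314                                   # J/(mol K)
    T = T_C + 273.15
    arrhenius = A * np.exp(-Ea / (R * T))       # baseline kinetic speed-up
    specialisation = np.exp(-((T_C - T_opt) / width) ** 2)  # local adaptation
    return arrhenius * specialisation

for T in (24, 26, 28, 30, 32):                  # deg C
    print(T, f"{calcification(T):.3f}")         # unimodal response around T_opt
```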

  10. Zero velocity interval detection based on a continuous hidden Markov model in micro inertial pedestrian navigation

    NASA Astrophysics Data System (ADS)

    Sun, Wei; Ding, Wei; Yan, Huifang; Duan, Shunli

    2018-06-01

    Shoe-mounted pedestrian navigation systems based on micro inertial sensors rely on zero velocity updates to correct their positioning errors in time, so determining the zero velocity interval plays a key role during normal walking. However, as walking gaits are complicated and vary from person to person, it is difficult to detect walking gaits with a fixed-threshold method. This paper proposes a pedestrian gait classification method based on a hidden Markov model. Pedestrian gait data are collected with a micro inertial measurement unit installed at the instep. On the basis of analyzing the characteristics of the pedestrian walk, a single-direction angular rate gyro output is used to classify gait features. The angular rate data are modeled as a univariate Gaussian mixture with three components, and a four-state left–right continuous hidden Markov model (CHMM) is designed to classify the normal walking gait. The model parameters are trained and optimized using the Baum–Welch algorithm and then the sliding window Viterbi algorithm is used to decode the gait. Walking data are collected from eight subjects walking along the same route at three different speeds; the leave-one-subject-out cross validation method is used to test the model. Experimental results show that the proposed algorithm can accurately detect the zero velocity intervals of different walking gaits. The location experiment shows that the precision of CHMM-based pedestrian navigation improved by 40% compared to the angular rate threshold method.
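
    A minimal sketch of this pipeline, using the third-party hmmlearn package, is given below: a four-state left-right Gaussian HMM trained with Baum-Welch and decoded with Viterbi. The input file name is hypothetical, and single-Gaussian emissions stand in for the paper's three-component mixtures.

```python
# Minimal left-right HMM gait segmentation sketch (hmmlearn); the data file is
# hypothetical and single-Gaussian emissions replace the paper's 3-part mixtures.
import numpy as np
from hmmlearn import hmm

rate = np.loadtxt("gyro_angular_rate.txt").reshape(-1, 1)  # one gyro axis

model = hmm.GaussianHMM(n_components=4, covariance_type="diag",
                        n_iter=50, init_params="mc", params="stmc")
model.startprob_ = np.array([1.0, 0.0, 0.0, 0.0])
# Left-right topology: each state may stay or advance; the last state wraps
# around so successive strides can be decoded continuously.
model.transmat_ = np.array([[0.9, 0.1, 0.0, 0.0],
                            [0.0, 0.9, 0.1, 0.0],
                            [0.0, 0.0, 0.9, 0.1],
                            [0.1, 0.0, 0.0, 0.9]])
model.fit(rate)                           # Baum-Welch re-estimation
states = model.predict(rate)              # Viterbi decoding
zv_state = int(np.argmin(np.abs(model.means_)))  # quietest state = stance phase
zero_velocity = states == zv_state
```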

  11. [Comparison of Flu Outbreak Reporting Standards Based on Transmission Dynamics Model].

    PubMed

    Yang, Guo-jing; Yi, Qing-jie; Li, Qin; Zeng, Qing

    2016-05-01

    To compare the two current flu outbreak reporting standards for the purpose of better prevention and control of flu outbreaks. A susceptible-exposed-infectious/asymptomatic-removed (SEIAR) model without interventions was set up first, followed by a model with interventions based on the real situation. Simulated interventions were developed based on the two reporting standards and evaluated by the estimated duration of outbreaks, cumulative new cases, cumulative morbidity rates, percentage decline in morbidity rates, and cumulative secondary cases. The basic reproductive number of the outbreak was estimated as 8.2. The simulation produced results similar to the real situation. The effect of interventions based on reporting standard one (10 accumulated new cases in a week) was better than that of interventions based on reporting standard two (30 accumulated new cases in a week). Reporting standard one (10 accumulated new cases in a week) is more effective for the prevention and control of flu outbreaks.
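
    The intervention-free core of such an analysis reduces to five coupled ODEs. The sketch below integrates a generic SEIAR system with scipy; the parameter values and population size are illustrative assumptions, not the paper's calibrated estimates.

```python
# Generic SEIAR system without interventions; all parameters are illustrative.
import numpy as np
from scipy.integrate import odeint

def seiar(y, t, beta, sigma, gamma, p, q):
    S, E, I, A, R = y
    N = S + E + I + A + R
    force = beta * S * (I + q * A) / N      # asymptomatics transmit at q*beta
    return [-force,
            force - sigma * E,              # incubation leaves E at rate sigma
            p * sigma * E - gamma * I,      # fraction p become symptomatic
            (1 - p) * sigma * E - gamma * A,
            gamma * (I + A)]

t = np.linspace(0.0, 60.0, 601)             # days
y0 = [999.0, 0.0, 1.0, 0.0, 0.0]            # one case seeded in a school of 1000
sol = odeint(seiar, y0, t, args=(1.6, 1/1.5, 1/2.0, 0.6, 0.5))
print("cumulative infections:", y0[0] - sol[-1, 0])
```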

  12. Contrasting faith-based and traditional substance abuse treatment programs.

    PubMed

    Neff, James Alan; Shorkey, Clayton T; Windsor, Liliane Cambraia

    2006-01-01

    This article (a) discusses the definition of faith-based substance abuse treatment programs, (b) juxtaposes Durkheim's theory of religion with a treatment process model to highlight key dimensions of faith-based and traditional programs, and (c) presents results from a study of seven programs to identify key program dimensions and differences/similarities between program types. Focus group/Concept Mapping techniques yielded a clear "spiritual activities, beliefs, and rituals" dimension, rated as significantly more important to faith-based programs. Faith-based program staff also rated "structure and discipline" as more important and "work readiness" as less important. No differences were found for "group activities/cohesion," "role modeling/mentoring," "safe, supportive environment," and "traditional treatment modalities." Programs showed substantial similarities with regard to core social processes of treatment such as mentoring, role modeling, and social cohesion. Implications are considered for further research on treatment engagement, retention, and other outcomes.

  13. A testing-coverage software reliability model considering fault removal efficiency and error generation.

    PubMed

    Li, Qiuying; Pham, Hoang

    2017-01-01

    In this paper, we propose a software reliability model that considers not only error generation but also fault removal efficiency combined with testing coverage information, based on a nonhomogeneous Poisson process (NHPP). During the past four decades, many software reliability growth models (SRGMs) based on NHPP have been proposed to estimate software reliability measures, most of which share two assumptions: 1) the fault detection rate changes throughout the testing phase; 2) as a result of imperfect debugging, fault removal is accompanied by a fault re-introduction rate. Few SRGMs in the literature, however, differentiate between fault detection and fault removal, i.e. they seldom consider imperfect fault removal efficiency. In practical software development, fault removal efficiency cannot always be perfect: detected failures might not be removed completely, the original faults might remain, and new faults might be introduced in the process, which is referred to as the imperfect debugging phenomenon. In this study, a model incorporating the fault introduction rate, fault removal efficiency and testing coverage into software reliability evaluation is developed, using testing coverage to express the fault detection rate and fault removal efficiency to model fault repair. We compare the performance of the proposed model with several existing NHPP SRGMs using three sets of real failure data based on five criteria. The results show that the model gives better fitting and predictive performance.
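
    As a baseline for NHPP-based SRGMs (the proposed model additionally folds in testing coverage and removal efficiency), the sketch below fits the classic Goel-Okumoto mean value function m(t) = a(1 - exp(-bt)) to an invented cumulative fault count.

```python
# Baseline NHPP illustration: fit the Goel-Okumoto mean value function to
# cumulative fault counts (the data here are made up for demonstration).
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(1, 13)                                  # test weeks
faults = np.array([12, 21, 28, 34, 39, 43, 46, 48, 50, 51, 52, 53])

def m(t, a, b):
    return a * (1.0 - np.exp(-b * t))                 # expected faults by time t

(a, b), _ = curve_fit(m, t, faults, p0=(60.0, 0.2))
print(f"expected total faults a={a:.1f}, detection rate b={b:.3f}")
print("predicted cumulative faults by week 16:", m(16, a, b))
```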

  14. Dynamic classification of fetal heart rates by hierarchical Dirichlet process mixture models.

    PubMed

    Yu, Kezi; Quirk, J Gerald; Djurić, Petar M

    2017-01-01

    In this paper, we propose an application of non-parametric Bayesian (NPB) models for classification of fetal heart rate (FHR) recordings. More specifically, we propose models that are used to differentiate between FHR recordings from fetuses with and without adverse outcomes. In our work, we rely on models based on hierarchical Dirichlet processes (HDP) and the Chinese restaurant process with finite capacity (CRFC). Two mixture models were inferred from real recordings, one representing healthy and the other non-healthy fetuses. The models were then used to classify new recordings and provide the probability of the fetus being healthy. First, we compared the classification performance of the HDP models with that of support vector machines on real data and concluded that the HDP models achieved better performance. Then we demonstrated the use of mixture models based on CRFC for dynamic classification of FHR recordings in a real-time setting.

  15. Dynamic classification of fetal heart rates by hierarchical Dirichlet process mixture models

    PubMed Central

    Yu, Kezi; Quirk, J. Gerald

    2017-01-01

    In this paper, we propose an application of non-parametric Bayesian (NPB) models for classification of fetal heart rate (FHR) recordings. More specifically, we propose models that are used to differentiate between FHR recordings from fetuses with and without adverse outcomes. In our work, we rely on models based on hierarchical Dirichlet processes (HDP) and the Chinese restaurant process with finite capacity (CRFC). Two mixture models were inferred from real recordings, one representing healthy and the other non-healthy fetuses. The models were then used to classify new recordings and provide the probability of the fetus being healthy. First, we compared the classification performance of the HDP models with that of support vector machines on real data and concluded that the HDP models achieved better performance. Then we demonstrated the use of mixture models based on CRFC for dynamic classification of FHR recordings in a real-time setting. PMID:28953927

  16. Influence of the formation- and passivation rate of boron-oxygen defects for mitigating carrier-induced degradation in silicon within a hydrogen-based model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hallam, Brett, E-mail: brett.hallam@unsw.edu.au; Abbott, Malcolm; Nampalli, Nitin

    2016-02-14

    A three-state model is used to explore the influence of defect formation and passivation rates on carrier-induced degradation related to boron-oxygen complexes in boron-doped p-type silicon solar cells within a hydrogen-based model. The model highlights that the inability to effectively mitigate carrier-induced degradation at elevated temperatures in previous studies is due to the limited availability of defects for hydrogen passivation, rather than being limited by the defect passivation rate. An acceleration of the defect formation rate is also observed to increase both the effectiveness and speed of carrier-induced degradation mitigation, whereas increases in the passivation rate do not lead to a substantial acceleration of the hydrogen passivation process. For high-throughput mitigation of such carrier-induced degradation on finished solar cell devices, two key factors were found to be required: high-injection conditions (such as high-intensity illumination) to accelerate defect formation whilst simultaneously enabling rapid passivation of the formed defects, and a high temperature to accelerate both defect formation and defect passivation whilst still ensuring effective mitigation of carrier-induced degradation.

  17. A physically based analytical spatial air temperature and humidity model

    Treesearch

    Yang Yang; Theodore A. Endreny; David J. Nowak

    2013-01-01

    Spatial variation of urban surface air temperature and humidity influences human thermal comfort, the settling rate of atmospheric pollutants, and plant physiology and growth. Given the lack of observations, we developed a Physically based Analytical Spatial Air Temperature and Humidity (PASATH) model. The PASATH model calculates spatial solar radiation and heat...

  18. Modeling Public School Partnerships: Merging Corporate and Community Issues.

    ERIC Educational Resources Information Center

    Clark, Cynthia E.; Brill, Dale A.

    This paper describes a model that merges corporate community relations strategy and public relations pedagogy to accelerate the rate at which Internet-based technologies are integrated into the public schools system. The model provides Internet-based training for a select group of Key Contacts drawn from two urban middle schools. Training is…

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menikoff, Ralph

    Previously the SURFplus reactive burn model was calibrated for the TATB-based explosive PBX 9502. The calibration was based on fitting Pop plot data, the failure diameter and the limiting detonation speed, and curvature effect data for small curvature. The model failure diameter is determined utilizing 2-D simulations of an unconfined rate stick to find the minimum diameter for which a detonation wave propagates. Here we examine the effect of mesh resolution on an unconfined rate stick with a diameter (10 mm) slightly greater than the measured failure diameter (8 to 9 mm).

  20. Accretion flow dynamics during 1999 outburst of XTE J1859+226—modeling of broadband spectra and constraining the source mass

    NASA Astrophysics Data System (ADS)

    Nandi, Anuj; Mandal, S.; Sreehari, H.; Radhika, D.; Das, Santabrata; Chattopadhyay, I.; Iyer, N.; Agrawal, V. K.; Aktar, R.

    2018-05-01

    We examine the dynamical behavior of accretion flow around XTE J1859+226 during the 1999 outburst by analyzing the entire outburst data (~166 days) from the RXTE satellite. Towards this, we study the hysteresis behavior in the hardness intensity diagram (HID) based on broadband (3-150 keV) spectral modeling, the spectral signature of jet ejection and the evolution of quasi-periodic oscillation (QPO) frequencies using the two-component advective flow model around a black hole. We compute the flow parameters, namely the Keplerian accretion rate (ṁ_d), the sub-Keplerian accretion rate (ṁ_h), the shock location (r_s) and the black hole mass (M_BH), from the spectral modeling and study their evolution along the q-diagram. Subsequently, the kinetic jet power is computed as L_jet^obs ~ 3-6 × 10^37 erg s^-1 during one of the observed radio flares, which indicates that the jet power corresponds to an 8-16% mass outflow rate from the disc. This estimate of the mass outflow rate is in close agreement with the change in total accretion rate (~14%) required for spectral modeling before and during the flare. Finally, we provide a mass estimate of the source XTE J1859+226 based on the spectral modeling that lies in the range of 5.2-7.9 M_⊙ with 90% confidence.

  1. Using the Many-Facet Rasch Model to Evaluate Standard-Setting Judgments: Setting Performance Standards for Advanced Placement® Examinations

    ERIC Educational Resources Information Center

    Kaliski, Pamela; Wind, Stefanie A.; Engelhard, George, Jr.; Morgan, Deanna; Plake, Barbara; Reshetar, Rosemary

    2012-01-01

    The Many-Facet Rasch (MFR) Model is traditionally used to evaluate the quality of ratings on constructed response assessments; however, it can also be used to evaluate the quality of judgments from panel-based standard setting procedures. The current study illustrates the use of the MFR Model by examining the quality of ratings obtained from a…

  2. A Theory-Based Model for Understanding Faculty Intention to Use Students Ratings to Improve Teaching in a Health Sciences Institution in Puerto Rico

    ERIC Educational Resources Information Center

    Collazo, Andrés A.

    2018-01-01

    A model derived from the theory of planned behavior was empirically assessed for understanding faculty intention to use student ratings for teaching improvement. A sample of 175 professors participated in the study. The model was statistically significant and had a very large explanatory power. Instrumental attitude, affective attitude, perceived…

  3. Development of the hard and soft constraints based optimisation model for unit sizing of the hybrid renewable energy system designed for microgrid applications

    NASA Astrophysics Data System (ADS)

    Sundaramoorthy, Kumaravel

    2017-02-01

    Electricity generation based on hybrid energy systems (HESs) has become an attractive solution for rural electrification nowadays. Economically feasible and technically reliable HESs rest solidly on an optimisation stage. This article discusses an optimal unit sizing model whose objective function minimises the total cost of the HES. Three typical rural sites from the southern part of India have been selected for the application of the developed optimisation methodology. Feasibility studies and sensitivity analysis on the optimal HES are discussed elaborately in this article. A comparison has been carried out with the Hybrid Optimization Model for Electric Renewable optimisation model for the three sites. The optimal HES is found to have a lower total net present cost and cost of energy than the existing method.

  4. Evaluation of indoor air composition time variation in air-tight occupied spaces during night periods

    NASA Astrophysics Data System (ADS)

    Markov, Detelin

    2012-11-01

    This paper presents an easy-to-understand procedure for predicting the time variation of indoor air composition in air-tight occupied spaces during night periods. The mathematical model is based on the assumptions of homogeneity and perfect mixing of the indoor air, the ideal gas model for non-reacting gas mixtures, mass conservation equations for the entire system and for each species, a model for predicting the basal metabolic rate of humans, and a model for predicting the O2 consumption rate and the CO2 and H2O generation rates of breathing. The time variation of indoor air composition is predicted at constant indoor air temperature for three scenarios based on the analytical solution of the mathematical model. The results reveal both the most probable scenario for indoor air composition variation in air-tight occupied spaces and a likely cause of morning tiredness after sleeping in a modern, energy-efficient space.
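
    For a single species such as CO2, the perfect-mixing balance reduces to V dC/dt = G + Q(C_out - C), which has a closed-form solution at constant temperature. The sketch below evaluates it for one sleeping occupant; the room volume, infiltration flow and generation rate are assumed values for illustration.

```python
# Well-mixed CO2 balance V*dC/dt = G + Q*(C_out - C) over a night period;
# room size, infiltration flow and occupant generation rate are assumptions.
import numpy as np

V = 30.0          # room volume, m^3
Q = 0.5           # infiltration airflow, m^3/h (air-tight room => small)
G = 0.013         # CO2 generation of one sleeping adult, m^3/h (assumed)
C_out = 400e-6    # outdoor CO2, volume fraction
C0 = 450e-6       # indoor CO2 at bedtime

t = np.linspace(0.0, 8.0, 9)                     # hours of sleep
C_eq = C_out + G / Q                             # steady-state concentration
C = C_eq + (C0 - C_eq) * np.exp(-(Q / V) * t)    # exact solution at constant T
for ti, ci in zip(t, C):
    print(f"{ti:4.1f} h  {ci*1e6:7.0f} ppm")     # CO2 climbs toward C_eq
```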

  5. Research on the output bit error rate of 2DPSK signal based on stochastic resonance theory

    NASA Astrophysics Data System (ADS)

    Yan, Daqin; Wang, Fuzhong; Wang, Shuo

    2017-12-01

    The binary differential phase-shift keying (2DPSK) signal is mainly used for high-speed data transmission. However, the bit error rate of a digital signal receiver is high under poor channel conditions. In view of this situation, a novel method based on stochastic resonance (SR) is proposed, aimed at reducing the bit error rate of the 2DPSK signal received by coherent demodulation. According to the theory of SR, a nonlinear receiver model is established to receive the 2DPSK signal under small signal-to-noise ratio (SNR) conditions (between -15 dB and 5 dB), and it is compared with the conventional demodulation method. The experimental results demonstrate that when the input SNR is in the range of -15 dB to 5 dB, the output bit error rate of the SR-based nonlinear system model declines significantly compared to the conventional model; it is reduced by 86.15% when the input SNR equals -7 dB. Meanwhile, the peak value of the output signal spectrum is 4.25 times that of the conventional model. Consequently, the output signal of the system is more easily detected and the accuracy can be greatly improved.
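
    The usual SR front end is a bistable nonlinearity driven by the noisy input. The sketch below pushes a weak sinusoid plus white noise through dx/dt = a*x - b*x^3 + s(t) + n(t) with Euler-Maruyama stepping; the well parameters and noise intensity are illustrative, not the paper's tuned receiver.

```python
# Classic bistable SR system driven by a weak carrier plus white noise;
# parameters are illustrative, not the paper's receiver settings.
import numpy as np

a, b = 1.0, 1.0
fs, T = 1000.0, 10.0                    # sample rate (Hz), duration (s)
dt = 1.0 / fs
t = np.arange(0.0, T, dt)
signal = 0.1 * np.sin(2 * np.pi * 1.0 * t)                # weak 1 Hz carrier
noise = np.sqrt(2 * 0.5 / dt) * np.random.randn(t.size)   # white noise, D = 0.5

x = np.zeros_like(t)
for k in range(t.size - 1):             # Euler-Maruyama integration
    drift = a * x[k] - b * x[k] ** 3 + signal[k] + noise[k]
    x[k + 1] = x[k] + drift * dt

# After SR, the spectral line at 1 Hz should stand above the noise floor.
spec = np.abs(np.fft.rfft(x)) / t.size
freq = np.fft.rfftfreq(t.size, dt)
print("peak near 1 Hz:", spec[np.argmin(np.abs(freq - 1.0))])
```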

  6. In situ monitored in-pile creep testing of zirconium alloys

    NASA Astrophysics Data System (ADS)

    Kozar, R. W.; Jaworski, A. W.; Webb, T. W.; Smith, R. W.

    2014-01-01

    The experiments described herein were designed to investigate the detailed irradiation creep behavior of zirconium based alloys in the HALDEN Reactor spectrum. The HALDEN Test Reactor has the unique capability to control both applied stress and temperature independently and externally for each specimen while the specimen is in-reactor and under fast neutron flux. The ability to monitor in situ the creep rates following a stress and temperature change made possible the characterization of creep behavior over a wide stress-strain-rate-temperature design space for two model experimental heats, Zircaloy-2 and Zircaloy-2 + 1 wt%Nb, with only 12 test specimens in a 100-day in-pile creep test program. Zircaloy-2 specimens with and without 1 wt% Nb additions were tested at irradiation temperatures of 561 K and 616 K and stresses ranging from 69 MPa to 455 MPa. Various steady state creep models were evaluated against the experimental results. The irradiation creep model proposed by Nichols that separates creep behavior into low, intermediate, and high stress regimes was the best model for predicting steady-state creep rates. Dislocation-based primary creep, rather than diffusion-based transient irradiation creep, was identified as the mechanism controlling deformation during the transitional period of evolving creep rate following a step change to different test conditions.

  7. Tailoring drug release rates in hydrogel-based therapeutic delivery applications using graphene oxide

    PubMed Central

    Zhi, Z. L.; Craster, R. V.

    2018-01-01

    Graphene oxide (GO) is increasingly used for controlling mass diffusion in hydrogel-based drug delivery applications. On the macro-scale, the density of GO in the hydrogel is a critical parameter for modulating drug release. Here, we investigate the diffusion of a peptide drug through a network of GO membranes and GO-embedded hydrogels, modelled as porous matrices resembling both laminated and ‘house of cards’ structures. Our experiments use a therapeutic peptide and show a tunable nonlinear dependence of the peptide concentration upon time. We establish models using numerical simulations with a diffusion equation accounting for the photo-thermal degradation of fluorophores and an effective percolation model to simulate the experimental data. The modelling yields an interpretation of the control of drug diffusion through GO membranes, which is extended to the diffusion of the peptide in GO-embedded agarose hydrogels. Varying the density of micron-sized GO flakes allows for fine control of the drug diffusion. We further show that both GO density and size influence the drug release rate. The ability to tune the density of hydrogel-like GO membranes to control drug release rates offers guidelines for tailoring release profiles in hydrogel-based therapeutic delivery applications. PMID:29445040
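
    As a back-of-envelope complement (a deliberately simpler, Nielsen-type barrier estimate, not the authors' percolation model), the sketch below shows how platelet volume fraction and aspect ratio jointly suppress the effective diffusivity in a platelet-filled gel.

```python
# Nielsen-type tortuosity estimate for impermeable platelets in a matrix:
# D_eff/D0 = (1 - phi) / (1 + (alpha/2)*phi), with alpha the flake aspect
# ratio. A generic barrier model, not the paper's percolation treatment.
def d_eff_ratio(phi, alpha):
    return (1.0 - phi) / (1.0 + 0.5 * alpha * phi)

for phi in (0.001, 0.005, 0.01, 0.02):                    # GO volume fraction
    print(phi, round(d_eff_ratio(phi, alpha=1000.0), 3))  # micron-sized flakes
```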

  8. Experimental Study and Modelling of Poly (Methyl Methacrylate) and Polycarbonate Compressive Behavior from Low to High Strain Rates

    NASA Astrophysics Data System (ADS)

    El-Qoubaa, Z.; Colard, L.; Matadi Boumbimba, R.; Rusinek, A.

    2018-06-01

    This paper concerns an experimental investigation of Polycarbonate and Poly (methyl methacrylate) compressive behavior from low to high strain rates. Experiments were conducted from 0.001/s to ≈ 5000/s for PC and from 0.001/s to ≈ 2000/s for PMMA. The true stress-strain behavior is established and analyzed at various strain rates. The mechanical behavior of both PC and PMMA appears, as is known, to be strain rate and temperature dependent. The DSGZ model is selected for modelling the stress-strain curves, while the yield stress is reproduced using the cooperative model and a modified Eyring equation based on Eyring first process theory. All three models' predictions are in agreement with the experiments performed on PC and PMMA.

  9. Models for financial crisis detection in Indonesia based on bank deposits, real exchange rate and terms of trade indicators

    NASA Astrophysics Data System (ADS)

    Sugiyanto; Zukhronah, Etik; Nur Aini, Anis

    2017-12-01

    Indonesia has experienced financial crises several times, but the crisis that occurred in 1997 had a tremendous impact on the economy and national stability. The crisis caused the rupiah exchange rate to fall sharply against the dollar, so a financial crisis detection system is needed. Data on bank deposits, the real exchange rate and terms of trade indicators are used in this paper. Data taken from January 1990 until December 2016 are used to form three-state models. A combination of volatility and Markov switching models is used to model the data. The results suggest that the appropriate model for bank deposits and terms of trade is SWARCH(3,1), and for the real exchange rate it is SWARCH(3,2).
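
    statsmodels offers no SWARCH class, but a three-regime Markov switching model with regime-specific variances captures the same crisis-detection idea; the sketch below runs it on synthetic returns standing in for the 1990-2016 series.

```python
# Three-regime switching-variance model as a stand-in for SWARCH (statsmodels
# provides no SWARCH class); the return series here is synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
returns = rng.standard_t(df=5, size=324) * 0.02  # placeholder for 1990-2016 data

mod = sm.tsa.MarkovRegression(returns, k_regimes=3, trend="c",
                              switching_variance=True)
res = mod.fit()
print(res.summary())
# Regime labels are arbitrary: identify the crisis regime as the one with the
# largest fitted variance, then track its smoothed probability through time.
probs = res.smoothed_marginal_probabilities      # (nobs, 3) for array input
```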

  10. Experimental Study and Modelling of Poly (Methyl Methacrylate) and Polycarbonate Compressive Behavior from Low to High Strain Rates

    NASA Astrophysics Data System (ADS)

    El-Qoubaa, Z.; Colard, L.; Matadi Boumbimba, R.; Rusinek, A.

    2018-03-01

    This paper concerns an experimental investigation of Polycarbonate and Poly (methyl methacrylate) compressive behavior from low to high strain rates. Experiments were conducted from 0.001/s to ≈ 5000/s for PC and from 0.001/s to ≈ 2000/s for PMMA. The true stress-strain behavior is established and analyzed at various strain rates. The mechanical behavior of both PC and PMMA appears, as is known, to be strain rate and temperature dependent. The DSGZ model is selected for modelling the stress-strain curves, while the yield stress is reproduced using the cooperative model and a modified Eyring equation based on Eyring first process theory. All three models' predictions are in agreement with the experiments performed on PC and PMMA.

  11. Exploring Latent Class Based on Growth Rates in Number Sense Ability

    ERIC Educational Resources Information Center

    Kim, Dongil; Shin, Jaehyun; Lee, Kijyung

    2013-01-01

    The purpose of this study was to explore latent class based on growth rates in number sense ability by using latent growth class modeling (LGCM). LGCM is one of the noteworthy methods for identifying growth patterns of the progress monitoring within the response to intervention framework in that it enables us to analyze latent sub-groups based not…

  12. A study of life prediction differences for a nickel-base Alloy 690 using a threshold and a non-threshold model

    NASA Astrophysics Data System (ADS)

    Young, B. A.; Gao, Xiaosheng; Srivatsan, T. S.

    2009-10-01

    In this paper we compare and contrast crack growth rate models for a nickel-base superalloy (Alloy 690) in the Pressurized Water Reactor (PWR) environment. Over the last few years, a preponderance of test data has been gathered on both Alloy 690 thick plate and Alloy 690 tubing. The original model, essentially based on a small data set for thick plate, compensated for temperature, load ratio and stress-intensity range but did not account for the fatigue threshold of the material. As additional test data on both plate and tube products became available, the model was gradually revised to account for threshold properties. Both the original and revised models generated acceptable results for data above 1 × 10^-11 m/s. However, the test data at lower growth rates were over-predicted by the non-threshold model. Since the original model did not take the fatigue threshold into account, it predicted no operating stress below which the material would effectively stop undergoing fatigue crack growth. Because of this over-prediction of the growth rate below 1 × 10^-11 m/s, arising from a combination of low stress, small crack size and long rise time, the non-threshold model in general leads to an under-prediction of the total available life of the components.
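
    The qualitative difference can be reproduced with generic power-law forms (illustrative constants, not the qualified Alloy 690 correlations): without a threshold term the predicted growth rate stays finite at low stress-intensity ranges, consuming fatigue life that a threshold model would not.

```python
# Generic threshold vs non-threshold crack growth laws; constants are invented
# for illustration and are not the qualified Alloy 690 correlations.
import numpy as np

C, m, dK_th = 1e-12, 3.0, 8.0             # illustrative, dK in MPa*sqrt(m)

def dadn_no_threshold(dK):
    return C * dK**m                       # growth rate, m/cycle

def dadn_threshold(dK):
    return C * np.maximum(dK**m - dK_th**m, 0.0)  # vanishes below threshold

for dK in (8.5, 10.0, 15.0, 30.0):
    print(dK, dadn_no_threshold(dK), dadn_threshold(dK))
# Near dK_th the two laws differ by large factors; far above it they converge.
```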

  13. Model‐Based Approach to Predict Adherence to Protocol During Antiobesity Trials

    PubMed Central

    Sharma, Vishnu D.; Combes, François P.; Vakilynejad, Majid; Lahu, Gezim; Lesko, Lawrence J.

    2017-01-01

    Development of antiobesity drugs is continuously challenged by high dropout rates during clinical trials. The objective was to develop a population pharmacodynamic model that describes the temporal changes in body weight, considering disease progression, lifestyle intervention, and drug effects. Markov modeling (MM) was applied for quantification and characterization of responder and nonresponder as key drivers of dropout rates, to ultimately support the clinical trial simulations and the outcome in terms of trial adherence. Subjects (n = 4591) from 6 Contrave® trials were included in this analysis. An indirect‐response model developed by van Wart et al was used as a starting point. Inclusion of drug effect was dose driven using a population dose‐ and time‐dependent pharmacodynamic (DTPD) model. Additionally, a population‐pharmacokinetic parameter‐ and data (PPPD)‐driven model was developed using the final DTPD model structure and final parameter estimates from a previously developed population pharmacokinetic model based on available Contrave® pharmacokinetic concentrations. Last, MM was developed to predict transition rate probabilities among responder, nonresponder, and dropout states driven by the pharmacodynamic effect resulting from the DTPD or PPPD model. Covariates included in the models and parameters were diabetes mellitus and race. The linked DTPD‐MM and PPPD‐MM was able to predict transition rates among responder, nonresponder, and dropout states well. The analysis concluded that body‐weight change is an important factor influencing dropout rates, and the MM depicted that overall a DTPD model‐driven approach provides a reasonable prediction of clinical trial outcome probabilities similar to a pharmacokinetic‐driven approach. PMID:28858397

  14. Efficacy of bedrock erosion by subglacial water flow

    NASA Astrophysics Data System (ADS)

    Beaud, F.; Flowers, G. E.; Venditti, J. G.

    2015-09-01

    Bedrock erosion by sediment-bearing subglacial water remains little studied; however, the process is thought to contribute to bedrock erosion rates in glaciated landscapes and is implicated in the excavation of tunnel valleys and the incision of inner gorges. We adapt physics-based models of fluvial abrasion to the subglacial environment, assembling the first model designed to quantify bedrock erosion caused by transient subglacial water flow. The subglacial drainage model consists of a one-dimensional network of cavities dynamically coupled to one or several Röthlisberger channels (R-channels). The bedrock erosion model is based on the tools and cover effect, whereby particles entrained by the flow impact exposed bedrock. We explore the dependency of glacial meltwater erosion on the structure and magnitude of water input to the system, the ice geometry and the sediment supply. We find that erosion is not a function of water discharge alone, but also depends on channel size, water pressure and sediment supply, as in fluvial systems. Modelled glacial meltwater erosion rates are one to two orders of magnitude lower than the total glacial erosion rates required to produce the sediment supply rates we impose, suggesting that glacial meltwater erosion is negligible at the basin scale. Nevertheless, due to the extreme localization of glacial meltwater erosion (at the base of R-channels), this process can carve bedrock (Nye) channels. In fact, our simulations suggest that the incision of bedrock channels several centimetres deep and a few metres wide can occur in a single year. Modelled incision rates indicate that subglacial water flow can gradually carve a tunnel valley and enhance the relief or even initiate the carving of an inner gorge.
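
    The tools-and-cover idea carries over from fluvial saltation-abrasion models: erosion scales with sediment flux (tools) times the fraction of bed left exposed (cover). A generic sketch follows; the rate constant and fluxes are invented, and this is not the paper's coupled drainage model.

```python
# Generic tools-and-cover erosion law in the spirit of fluvial saltation-
# abrasion models; all numbers are invented for illustration.
import numpy as np

def erosion_rate(qs, qt, k=1e-4):
    """qs: sediment supply, qt: transport capacity (same units); k lumps rock
    resistance and impact energy. Tools effect ~ qs, cover effect ~ 1 - qs/qt."""
    qs = np.minimum(qs, qt)            # flow cannot carry more than capacity
    return k * qs * (1.0 - qs / qt)

qt = 1.0
for qs in (0.1, 0.25, 0.5, 0.75, 1.0):
    print(qs, erosion_rate(qs, qt))    # peaks at qs = qt/2, zero at full cover
```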

  15. Predicting paddlefish roe yields using an extension of the Beverton–Holt equilibrium yield-per-recruit model

    USGS Publications Warehouse

    Colvin, M.E.; Bettoli, Phillip William; Scholten, G.D.

    2013-01-01

    Equilibrium yield models predict the total biomass removed from an exploited stock; however, traditional yield models must be modified to simulate roe yields because a linear relationship between age (or length) and mature ovary weight does not typically exist. We extended the traditional Beverton-Holt equilibrium yield model to predict roe yields of Paddlefish Polyodon spathula in Kentucky Lake, Tennessee-Kentucky, as a function of varying conditional fishing mortality rates (10-70%), conditional natural mortality rates (cm; 9% and 18%), and four minimum size limits ranging from 864 to 1,016mm eye-to-fork length. These results were then compared to a biomass-based yield assessment. Analysis of roe yields indicated the potential for growth overfishing at lower exploitation rates and smaller minimum length limits than were suggested by the biomass-based assessment. Patterns of biomass and roe yields in relation to exploitation rates were similar regardless of the simulated value of cm, thus indicating that the results were insensitive to changes in cm. Our results also suggested that higher minimum length limits would increase roe yield and reduce the potential for growth overfishing and recruitment overfishing at the simulated cm values. Biomass-based equilibrium yield assessments are commonly used to assess the effects of harvest on other caviar-based fisheries; however, our analysis demonstrates that such assessments likely underestimate the probability and severity of growth overfishing when roe is targeted. Therefore, equilibrium roe yield-per-recruit models should also be considered to guide the management process for caviar-producing fish species.
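
    The extension amounts to running a standard per-recruit survival table while crediting yield in mature ovary mass rather than body mass. The sketch below does so for one combination of mortalities and a minimum age at entry standing in for a length limit; every input value is invented.

```python
# Roe yield-per-recruit sketch in the Beverton-Holt tradition: yield is taken
# in ovary weight instead of body weight. All inputs are invented.
import numpy as np

ages = np.arange(8, 31)                       # ages in the fishery
M = 0.10                                      # instantaneous natural mortality
cf = 0.30                                     # conditional fishing mortality
F = -np.log(1.0 - cf)                         # converted to instantaneous rate
min_age = 12                                  # age at the minimum length limit
ovary_wt = np.where(ages >= 10,               # kg of roe per fish (assumed)
                    0.05 * np.maximum(ages - 9, 0) ** 0.8, 0.0)

roe_yield, N = 0.0, 1.0                       # survivors per recruit
for age, O in zip(ages, ovary_wt):
    Fa = F if age >= min_age else 0.0         # fished only above the limit
    Z = M + Fa
    catch = (Fa / Z) * N * (1.0 - np.exp(-Z)) # Baranov catch equation
    roe_yield += catch * O
    N *= np.exp(-Z)
print("roe yield per recruit (kg):", round(roe_yield, 4))
```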

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liang, Linyun; Mei, Zhi -Gang; Kim, Yeon Soo

    A mesoscale model is developed by integrating the rate theory and phase-field models and is used to study the fission-induced recrystallization in U-7Mo alloy. The rate theory model is used to predict the dislocation density and the recrystallization nuclei density due to irradiation. The predicted fission rate and temperature dependences of the dislocation density are in good agreement with experimental measurements. This information is used as input for the multiphase phase-field model to investigate the fission-induced recrystallization kinetics. The simulated recrystallization volume fraction and bubble induced swelling agree well with experimental data. The effects of the fission rate, initial grain size, and grain morphology on the recrystallization kinetics are discussed based on an analysis of recrystallization growth rate using the modified Avrami equation. Here, we conclude that the initial microstructure of the U-Mo fuels, especially the grain size, can be used to effectively control the rate of fission-induced recrystallization and therefore swelling.

  17. A Thermodynamically-consistent FBA-based Approach to Biogeochemical Reaction Modeling

    NASA Astrophysics Data System (ADS)

    Shapiro, B.; Jin, Q.

    2015-12-01

    Microbial rates are critical to understanding biogeochemical processes in natural environments. Recently, flux balance analysis (FBA) has been applied to predict microbial rates in aquifers and other settings. FBA is a genome-scale constraint-based modeling approach that computes metabolic rates and other phenotypes of microorganisms. This approach requires prior knowledge of substrate uptake rates, which is not available for most natural microbes. Here we propose to constrain substrate uptake rates on the basis of microbial kinetics. Specifically, we calculate rates of respiration (and fermentation) using a revised Monod equation; this equation accounts for both the kinetics and thermodynamics of microbial catabolism. Substrate uptake rates are then computed from the rates of respiration and applied to FBA to predict rates of microbial growth. We implemented this method by linking two software tools, PHREEQC and COBRA Toolbox. We applied this method to acetotrophic methanogenesis by Methanosarcina barkeri and compared the simulation results to previous laboratory observations. The new method constrains acetate uptake by accounting for the kinetics and thermodynamics of methanogenesis, and predicted the observations of previous experiments well. In comparison, traditional methods of dynamic FBA constrain acetate uptake on the basis of enzyme kinetics, and failed to reproduce the experimental results. These results show that microbial rate laws may provide a better constraint than enzyme kinetics for applying FBA to biogeochemical reaction modeling.
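
    The revised Monod rate law referred to here multiplies the usual kinetic factor by a thermodynamic factor that vanishes as the catabolic energy yield approaches the ATP synthesis cost. Below is a sketch in the spirit of Jin and Bethke's formulation; all numerical values are illustrative assumptions, and the resulting rate would be handed to FBA as a substrate-uptake bound.

```python
# Kinetics-plus-thermodynamics rate law (after Jin & Bethke-style forms) used
# to cap the substrate uptake passed to FBA; all values are invented.
import numpy as np

R, T = 8.314, 298.15                      # J/(mol K), K

def respiration_rate(S, k=1e-12, K=0.5e-3, dG_redox=-25e3,
                     m_atp=0.5, dG_atp=45e3, chi=2.0):
    """mol/(cell s): Monod kinetic factor times a thermodynamic factor that
    shuts the reaction down as the energy yield nears the ATP cost."""
    f_kinetic = S / (K + S)
    f_thermo = max(0.0, 1.0 - np.exp((dG_redox + m_atp * dG_atp) / (chi * R * T)))
    return k * f_kinetic * f_thermo

for S in (1e-4, 1e-3, 1e-2):              # acetate concentration, mol/L
    print(S, respiration_rate(S))
```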

  18. Conditional Probabilities of Large Earthquake Sequences in California from the Physics-based Rupture Simulator RSQSim

    NASA Astrophysics Data System (ADS)

    Gilchrist, J. J.; Jordan, T. H.; Shaw, B. E.; Milner, K. R.; Richards-Dinger, K. B.; Dieterich, J. H.

    2017-12-01

    Within the SCEC Collaboratory for Interseismic Simulation and Modeling (CISM), we are developing physics-based forecasting models for earthquake ruptures in California. We employ the 3D boundary element code RSQSim (Rate-State Earthquake Simulator of Dieterich & Richards-Dinger, 2010) to generate synthetic catalogs with tens of millions of events that span up to a million years each. This code models rupture nucleation by rate- and state-dependent friction and Coulomb stress transfer in complex, fully interacting fault systems. The Uniform California Earthquake Rupture Forecast Version 3 (UCERF3) fault and deformation models are used to specify the fault geometry and long-term slip rates. We have employed the Blue Waters supercomputer to generate long catalogs of simulated California seismicity from which we calculate the forecasting statistics for large events. We have performed probabilistic seismic hazard analysis with RSQSim catalogs that were calibrated with system-wide parameters and found a remarkably good agreement with UCERF3 (Milner et al., this meeting). We build on this analysis, comparing the conditional probabilities of sequences of large events from RSQSim and UCERF3. In making these comparisons, we consider the epistemic uncertainties associated with the RSQSim parameters (e.g., rate- and state-frictional parameters), as well as the effects of model-tuning (e.g., adjusting the RSQSim parameters to match UCERF3 recurrence rates). The comparisons illustrate how physics-based rupture simulators might assist forecasters in understanding the short-term hazards of large aftershocks and multi-event sequences associated with complex, multi-fault ruptures.

  19. Room model based Monte Carlo simulation study of the relationship between the airborne dose rate and the surface-deposited radon progeny.

    PubMed

    Sun, Kainan; Field, R William; Steck, Daniel J

    2010-01-01

    The quantitative relationships between radon gas concentration, the surface-deposited activities of various radon progeny, the airborne radon progeny dose rate, and various residential environmental factors were investigated through a Monte Carlo simulation study based on the extended Jacobi room model. Airborne dose rates were calculated from the unattached and attached potential alpha-energy concentrations (PAECs) using two dosimetric models. Surface-deposited (218)Po and (214)Po were significantly correlated with radon concentration, PAECs, and airborne dose rate (p-values <0.0001) in both non-smoking and smoking environments. However, in non-smoking environments, the deposited radon progeny were not highly correlated to the attached PAEC. In multiple linear regression analysis, natural logarithm transformation was performed for airborne dose rate as a dependent variable, as well as for radon and deposited (218)Po and (214)Po as predictors. In non-smoking environments, after adjusting for the effect of radon, deposited (214)Po was a significant positive predictor for one dose model (RR 1.46, 95% CI 1.27-1.67), while deposited (218)Po was a negative predictor for the other dose model (RR 0.90, 95% CI 0.83-0.98). In smoking environments, after adjusting for radon and room size, deposited (218)Po was a significant positive predictor for one dose model (RR 1.10, 95% CI 1.02-1.19), while a significant negative predictor for the other model (RR 0.90, 95% CI 0.85-0.95). After adjusting for radon and deposited (218)Po, significant increases of 1.14 (95% CI 1.03-1.27) and 1.13 (95% CI 1.05-1.22) in the mean dose rates were found for large room sizes relative to small room sizes in the different dose models.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, Benjamin L; Bronkhorst, Curt; Beyerlein, Irene

    The goal of this work is to formulate a constitutive model for the deformation of metals over a wide range of strain rates. Damage and failure of materials frequently occurs at a variety of deformation rates within the same sample. The present state of the art in single crystal constitutive models relies on thermally-activated models which are believed to become less reliable for problems exceeding strain rates of 10^4 s^-1. This talk presents work in which we extend the applicability of the single crystal model to the strain rate region where dislocation drag is believed to dominate. The elastic model includes effects from volumetric change and pressure sensitive moduli. The plastic model transitions from the low-rate thermally-activated regime to the high-rate drag dominated regime. The direct use of dislocation density as a state parameter gives a measurable physical mechanism to strain hardening. Dislocation densities are separated according to type and given a systematic set of interaction rates adaptable by type. The form of the constitutive model is motivated by previously published dislocation dynamics work which articulated important behaviors unique to high-rate response in fcc systems. The proposed material model incorporates thermal coupling. The hardening model tracks the varying dislocation population with respect to each slip plane and computes the slip resistance based on those values. Comparisons can be made between the responses of single crystals and polycrystals at a variety of strain rates. The material model is fit to copper.

  1. Model-Based Approach to Predict Adherence to Protocol During Antiobesity Trials.

    PubMed

    Sharma, Vishnu D; Combes, François P; Vakilynejad, Majid; Lahu, Gezim; Lesko, Lawrence J; Trame, Mirjam N

    2018-02-01

    Development of antiobesity drugs is continuously challenged by high dropout rates during clinical trials. The objective was to develop a population pharmacodynamic model that describes the temporal changes in body weight, considering disease progression, lifestyle intervention, and drug effects. Markov modeling (MM) was applied for quantification and characterization of responder and nonresponder as key drivers of dropout rates, to ultimately support the clinical trial simulations and the outcome in terms of trial adherence. Subjects (n = 4591) from 6 Contrave ® trials were included in this analysis. An indirect-response model developed by van Wart et al was used as a starting point. Inclusion of drug effect was dose driven using a population dose- and time-dependent pharmacodynamic (DTPD) model. Additionally, a population-pharmacokinetic parameter- and data (PPPD)-driven model was developed using the final DTPD model structure and final parameter estimates from a previously developed population pharmacokinetic model based on available Contrave ® pharmacokinetic concentrations. Last, MM was developed to predict transition rate probabilities among responder, nonresponder, and dropout states driven by the pharmacodynamic effect resulting from the DTPD or PPPD model. Covariates included in the models and parameters were diabetes mellitus and race. The linked DTPD-MM and PPPD-MM was able to predict transition rates among responder, nonresponder, and dropout states well. The analysis concluded that body-weight change is an important factor influencing dropout rates, and the MM depicted that overall a DTPD model-driven approach provides a reasonable prediction of clinical trial outcome probabilities similar to a pharmacokinetic-driven approach. © 2017, The Authors. The Journal of Clinical Pharmacology published by Wiley Periodicals, Inc. on behalf of American College of Clinical Pharmacology.
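
    The Markov layer can be pictured as a three-state chain (responder, nonresponder, dropout) whose occupancy is propagated visit by visit. In the paper the transition probabilities are driven by the model-predicted weight change; the fixed matrix below is an invented stand-in.

```python
# Toy 3-state Markov chain (responder, nonresponder, dropout) propagated across
# visits; the transition probabilities here are invented placeholders.
import numpy as np

P = np.array([[0.80, 0.10, 0.10],    # responder ->
              [0.15, 0.65, 0.20],    # nonresponder ->
              [0.00, 0.00, 1.00]])   # dropout is absorbing

state = np.array([0.5, 0.5, 0.0])    # assumed mix at randomization
for visit in range(1, 13):           # monthly visits over a year
    state = state @ P
print("P(dropout) by visit 12:", round(state[2], 3))
```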

  2. Grain-Size Based Additivity Models for Scaling Multi-rate Uranyl Surface Complexation in Subsurface Sediments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xiaoying; Liu, Chongxuan; Hu, Bill X.

    The additivity model assumed that field-scale reaction properties in a sediment including surface area, reactive site concentration, and reaction rate can be predicted from field-scale grain-size distribution by linearly adding reaction properties estimated in laboratory for individual grain-size fractions. This study evaluated the additivity model in scaling mass transfer-limited, multi-rate uranyl (U(VI)) surface complexation reactions in a contaminated sediment. Experimental data of rate-limited U(VI) desorption in a stirred flow-cell reactor were used to estimate the statistical properties of the rate constants for individual grain-size fractions, which were then used to predict rate-limited U(VI) desorption in the composite sediment. The result indicated that the additivity model with respect to the rate of U(VI) desorption provided a good prediction of U(VI) desorption in the composite sediment. However, the rate constants were not directly scalable using the additivity model. An approximate additivity model for directly scaling rate constants was subsequently proposed and evaluated. The result found that the approximate model provided a good prediction of the experimental results within statistical uncertainty. This study also found that a gravel-size fraction (2 to 8 mm), which is often ignored in modeling U(VI) sorption and desorption, is statistically significant to the U(VI) desorption in the sediment.
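
    The additivity idea itself is a mass-weighted linear combination over grain-size fractions. The sketch below applies it to reactive-site concentrations (found directly additive in the study) and uses a site-weighted average as an approximate rule for the rate constant (which the study found only approximately scalable); all numbers are invented.

```python
# Additivity sketch: composite-sediment reaction properties as mass-weighted
# sums over grain-size fractions; fractions and rates are invented.
import numpy as np

mass_frac = np.array([0.35, 0.40, 0.15, 0.10])   # e.g. <0.5, 0.5-2, 2-4, 4-8 mm
site_conc = np.array([12.0, 6.0, 2.5, 0.9])      # umol sites/g per fraction
rate_k = np.array([3e-5, 1.8e-5, 9e-6, 4e-6])    # desorption rate constant, 1/s

sites_composite = mass_frac @ site_conc          # linear (directly additive)
# Approximate rule: weight each fraction's rate constant by its share of the
# reactive sites rather than by mass alone.
k_composite = (mass_frac * site_conc) @ rate_k / sites_composite
print(sites_composite, k_composite)
```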

  3. Model free approach to kinetic analysis of real-time hyperpolarized 13C magnetic resonance spectroscopy data.

    PubMed

    Hill, Deborah K; Orton, Matthew R; Mariotti, Erika; Boult, Jessica K R; Panek, Rafal; Jafar, Maysam; Parkes, Harold G; Jamin, Yann; Miniotis, Maria Falck; Al-Saffar, Nada M S; Beloueche-Babari, Mounia; Robinson, Simon P; Leach, Martin O; Chung, Yuen-Li; Eykyn, Thomas R

    2013-01-01

    Real-time detection of the rates of metabolic flux, or exchange rates of endogenous enzymatic reactions, is now feasible in biological systems using Dynamic Nuclear Polarization Magnetic Resonance. Derivation of reaction rate kinetics from this technique typically requires multi-compartmental modeling of dynamic data, and results are therefore model-dependent and prone to misinterpretation. We present a model-free formalism based on the ratio of total areas under the curve (AUC) of the injected and product metabolites, for example pyruvate and lactate. A theoretical framework to support this novel analysis approach is described, and demonstrates that the AUC ratio is proportional to the forward rate constant k. We show that the model-free approach strongly correlates with k for whole cell in vitro experiments across a range of cancer cell lines, and detects response in cells treated with the pan-class I PI3K inhibitor GDC-0941 with comparable or greater sensitivity. The same result is seen in vivo with tumor xenograft-bearing mice, in control tumors and following drug treatment with dichloroacetate. An important finding is that the area under the curve is independent of both the input function and of any other metabolic pathways arising from the injected metabolite. This model-free approach provides a robust and clinically relevant alternative to kinetic model-based rate measurements in the clinical translation of hyperpolarized (13)C metabolic imaging in humans, where measurement of the input function can be problematic.
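
    The readout itself is a one-liner. The sketch below computes the AUC ratio with a trapezoidal rule on synthetic pyruvate and lactate time courses; under the paper's framework this ratio is proportional to the forward rate constant k.

```python
# Model-free readout: ratio of areas under the product and injected-metabolite
# curves via a trapezoidal rule; the time courses here are synthetic.
import numpy as np

t = np.linspace(0.0, 60.0, 61)                            # s
pyruvate = np.exp(-t / 30.0) - np.exp(-t / 3.0)           # injected metabolite
lactate = 0.25 * (np.exp(-t / 45.0) - np.exp(-t / 10.0))  # product metabolite

auc_ratio = np.trapz(lactate, t) / np.trapz(pyruvate, t)
print("AUC(lactate)/AUC(pyruvate) =", round(auc_ratio, 3))  # ~ proportional to k
```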

  4. Model Free Approach to Kinetic Analysis of Real-Time Hyperpolarized 13C Magnetic Resonance Spectroscopy Data

    PubMed Central

    Mariotti, Erika; Boult, Jessica K. R.; Panek, Rafal; Jafar, Maysam; Parkes, Harold G.; Jamin, Yann; Miniotis, Maria Falck; Al-Saffar, Nada M. S.; Beloueche-Babari, Mounia; Robinson, Simon P.; Leach, Martin O.; Chung, Yuen-Li; Eykyn, Thomas R.

    2013-01-01

    Real-time detection of the rates of metabolic flux, or exchange rates of endogenous enzymatic reactions, is now feasible in biological systems using Dynamic Nuclear Polarization Magnetic Resonance. Derivation of reaction rate kinetics from this technique typically requires multi-compartmental modeling of dynamic data, and results are therefore model-dependent and prone to misinterpretation. We present a model-free formalism based on the ratio of total areas under the curve (AUC) of the injected and product metabolites, for example pyruvate and lactate. A theoretical framework to support this novel analysis approach is described, and demonstrates that the AUC ratio is proportional to the forward rate constant k. We show that the model-free approach strongly correlates with k for whole cell in vitro experiments across a range of cancer cell lines, and detects response in cells treated with the pan-class I PI3K inhibitor GDC-0941 with comparable or greater sensitivity. The same result is seen in vivo with tumor xenograft-bearing mice, in control tumors and following drug treatment with dichloroacetate. An important finding is that the area under the curve is independent of both the input function and of any other metabolic pathways arising from the injected metabolite. This model-free approach provides a robust and clinically relevant alternative to kinetic model-based rate measurements in the clinical translation of hyperpolarized 13C metabolic imaging in humans, where measurement of the input function can be problematic. PMID:24023724

  5. The Red Queen model of recombination hot-spot evolution: a theoretical investigation.

    PubMed

    Latrille, Thibault; Duret, Laurent; Lartillot, Nicolas

    2017-12-19

    In humans and many other species, recombination events cluster in narrow and short-lived hot spots distributed across the genome, whose location is determined by the Zn-finger protein PRDM9. To explain these fast evolutionary dynamics, an intra-genomic Red Queen model has been proposed, based on the interplay between two antagonistic forces: biased gene conversion, mediated by double-strand breaks, resulting in hot-spot extinction, followed by positive selection favouring new PRDM9 alleles recognizing new sequence motifs. Thus far, however, this Red Queen model has not been formalized as a quantitative population-genetic model, fully accounting for the intricate interplay between biased gene conversion, mutation, selection, demography and genetic diversity at the PRDM9 locus. Here, we explore the population genetics of the Red Queen model of recombination. A Wright-Fisher simulator was implemented, allowing exploration of the behaviour of the model (mean equilibrium recombination rate, diversity at the PRDM9 locus or turnover rate) as a function of the parameters (effective population size, mutation and erosion rates). In a second step, analytical results based on self-consistent mean-field approximations were derived, reproducing the scaling relations observed in the simulations. Empirical fit of the model to current data from the mouse suggests both a high mutation rate at PRDM9 and strong biased gene conversion on its targets.This article is part of the themed issue 'Evolutionary causes and consequences of recombination rate variation in sexual organisms'. © 2017 The Authors.
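
    A stripped-down Wright-Fisher version of this Red Queen can be simulated directly: each allele erodes its own target hot spots in proportion to its frequency, selection weights alleles by remaining activity, and mutation injects fresh, fully active alleles. The sketch below uses invented rates and omits diploidy, dominance and explicit binding motifs.

```python
# Compact Wright-Fisher sketch of the PRDM9 Red Queen; all rates are invented
# and the model is deliberately simpler than the paper's simulator.
import numpy as np

rng = np.random.default_rng(1)
N, u, g, gens = 1000, 1e-3, 5e-3, 5000    # pop size, mutation & erosion rates
alleles = {0: 1.0}                         # allele id -> remaining activity
counts = {0: N}
next_id = 1

for _ in range(gens):
    ids = list(counts)
    freqs = np.array([counts[i] for i in ids], float) / N
    for i, f in zip(ids, freqs):           # common alleles erode their own
        alleles[i] *= 1.0 - g * f          # hot spots faster (biased conversion)
    w = np.array([alleles[i] for i in ids]) * freqs   # selection on activity
    new = rng.multinomial(N, w / w.sum())             # multinomial drift
    counts = {i: c for i, c in zip(ids, new) if c > 0}
    for _ in range(rng.binomial(N, u)):    # mutation: new fully active allele
        ids2 = list(counts)
        p = np.array([counts[i] for i in ids2], float)
        donor = ids2[rng.choice(len(ids2), p=p / p.sum())]
        counts[donor] -= 1
        if counts[donor] == 0:
            del counts[donor]
        alleles[next_id] = 1.0
        counts[next_id] = 1
        next_id += 1

print("alleles segregating:", len(counts))
print("mean activity:", sum(alleles[i] * c for i, c in counts.items()) / N)
```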

  6. The Red Queen model of recombination hot-spot evolution: a theoretical investigation

    PubMed Central

    Latrille, Thibault; Duret, Laurent

    2017-01-01

    In humans and many other species, recombination events cluster in narrow and short-lived hot spots distributed across the genome, whose location is determined by the Zn-finger protein PRDM9. To explain these fast evolutionary dynamics, an intra-genomic Red Queen model has been proposed, based on the interplay between two antagonistic forces: biased gene conversion, mediated by double-strand breaks, resulting in hot-spot extinction, followed by positive selection favouring new PRDM9 alleles recognizing new sequence motifs. Thus far, however, this Red Queen model has not been formalized as a quantitative population-genetic model, fully accounting for the intricate interplay between biased gene conversion, mutation, selection, demography and genetic diversity at the PRDM9 locus. Here, we explore the population genetics of the Red Queen model of recombination. A Wright–Fisher simulator was implemented, allowing exploration of the behaviour of the model (mean equilibrium recombination rate, diversity at the PRDM9 locus or turnover rate) as a function of the parameters (effective population size, mutation and erosion rates). In a second step, analytical results based on self-consistent mean-field approximations were derived, reproducing the scaling relations observed in the simulations. Empirical fit of the model to current data from the mouse suggests both a high mutation rate at PRDM9 and strong biased gene conversion on its targets. This article is part of the themed issue ‘Evolutionary causes and consequences of recombination rate variation in sexual organisms’. PMID:29109226

  7. All Hands on Deck: A Comprehensive, Results-Driven Counseling Model

    ERIC Educational Resources Information Center

    Salina, Charles; Girtz, Suzann; Eppinga, Joanie; Martinez, David; Kilian, Diana Blumer; Lozano, Elizabeth; Martinez, Adrian P.; Crowe, Dustin; De La Barrera, Maria; Mendez, Maribel Madrigal; Shines, Terry

    2014-01-01

    A graduation rate of 49% alarmed Sunnyside High School in 2009. With graduation rates in the bottom 5% statewide, Sunnyside was awarded a federally funded School Improvement Grant. The "turnaround" principal and the school counselors aligned goals with the ASCA National Model through the program All Hands On Deck (AHOD), based on…

  8. RNA-DNA and DNA-DNA base-pairing at the upstream edge of the transcription bubble regulate translocation of RNA polymerase and transcription rate.

    PubMed

    Kireeva, Maria; Trang, Cyndi; Matevosyan, Gayane; Turek-Herman, Joshua; Chasov, Vitaly; Lubkowska, Lucyna; Kashlev, Mikhail

    2018-06-20

    Translocation of RNA polymerase (RNAP) along DNA may be rate-limiting for transcription elongation. The Brownian ratchet model posits that RNAP rapidly translocates back and forth until the post-translocated state is stabilized by NTP binding. An alternative model suggests that RNAP translocation is slow and poorly reversible. To distinguish between these two models, we take advantage of an observation that pyrophosphorolysis rates directly correlate with the abundance of the pre-translocated fraction. Pyrophosphorolysis by RNAP stabilized in the pre-translocated state by bacteriophage HK022 protein Nun was used as a reference point to determine the pre-translocated fraction in the absence of Nun. The stalled RNAP preferentially occupies the post-translocated state. The forward translocation rate depends, among other factors, on melting of the RNA-DNA base pair at the upstream edge of the transcription bubble. DNA-DNA base pairing immediately upstream from the RNA-DNA hybrid stabilizes the post-translocated state. This mechanism is conserved between E. coli RNAP and S. cerevisiae RNA polymerase II and is partially dependent on the lid domain of the catalytic subunit. Thus, the RNA-DNA hybrid and DNA reannealing at the upstream edge of the transcription bubble emerge as targets for regulation of the transcription elongation rate.

  9. Mesoscale model for fission-induced recrystallization in U-7Mo alloy

    DOE PAGES

    Liang, Linyun; Mei, Zhi-Gang; Kim, Yeon Soo; ...

    2016-08-09

    A mesoscale model is developed by integrating the rate theory and phase-field models and is used to study the fission-induced recrystallization in U-7Mo alloy. The rate theory model is used to predict the dislocation density and the recrystallization nuclei density due to irradiation. The predicted fission rate and temperature dependences of the dislocation density are in good agreement with experimental measurements. This information is used as input for the multiphase phase-field model to investigate the fission-induced recrystallization kinetics. The simulated recrystallization volume fraction and bubble induced swelling agree well with experimental data. The effects of the fission rate, initial grain size, and grain morphology on the recrystallization kinetics are discussed based on an analysis of recrystallization growth rate using the modified Avrami equation. Here, we conclude that the initial microstructure of the U-Mo fuels, especially the grain size, can be used to effectively control the rate of fission-induced recrystallization and therefore swelling.
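
    Since the kinetics analysis rests on the (modified) Avrami equation, the classical JMAK form is sketched below as a stand-in; the paper's exact modification and coefficients are not given in the abstract, so all values here are illustrative only.

    ```python
    import numpy as np

    # Classical Avrami (JMAK) kinetics, X(t) = 1 - exp(-(k t)^n), as a stand-in
    # for the modified form used in the paper: k bundles nucleation and growth
    # rates (fission-rate and grain-size dependent by assumption here) and n is
    # the Avrami exponent.
    def avrami(t, k, n=2.0):
        return 1.0 - np.exp(-(k * t) ** n)

    t = np.linspace(0.0, 10.0, 6)      # burnup-like time axis, arbitrary units
    for k in (0.2, 0.4):               # a larger k mimics a higher fission rate
        print(f"k={k}: recrystallized fraction", np.round(avrami(t, k), 3))
    ```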

  10. SCS-CN based time-distributed sediment yield model

    NASA Astrophysics Data System (ADS)

    Tyagi, J. V.; Mishra, S. K.; Singh, Ranvir; Singh, V. P.

    2008-05-01

    A sediment yield model is developed to estimate the temporal rates of sediment yield from rainfall events on natural watersheds. The model utilizes the SCS-CN based infiltration model for computation of the rainfall-excess rate, and the SCS-CN-inspired proportionality concept for computation of sediment-excess. For computation of sedimentographs, the sediment-excess is routed to the watershed outlet using a single linear reservoir technique. Analytical development of the model shows that the ratio of the potential maximum erosion (A) to the potential maximum retention (S) of the SCS-CN method is constant for a watershed. The model is calibrated and validated on a number of events using data from seven watersheds in India and the USA. Representative values of the A/S ratio computed for the watersheds from calibration are used for validation of the model. The encouraging results of the proposed simple four-parameter model demonstrate its potential for field application.
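
    For reference, the standard SCS-CN rainfall-excess relation that underlies the model's runoff step is sketched below; the paper's sediment-excess proportionality (the constant A/S ratio) and the single-linear-reservoir routing are specific to the study and not reproduced here.

    ```python
    # Standard SCS-CN rainfall-excess relation underlying the runoff computation.
    def scs_cn_runoff(P, CN, lam=0.2):
        """Direct runoff Q (mm) for storm rainfall P (mm) and curve number CN."""
        S = 25400.0 / CN - 254.0       # potential maximum retention S (mm)
        Ia = lam * S                   # initial abstraction, conventionally 0.2*S
        return 0.0 if P <= Ia else (P - Ia) ** 2 / (P - Ia + S)

    for P in (20.0, 50.0, 80.0):
        print(f"P = {P:5.1f} mm  ->  Q = {scs_cn_runoff(P, CN=75):5.1f} mm")
    ```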

  11. Dependence of credit spread and macro-conditions based on an alterable structure model.

    PubMed

    Xie, Yun; Tian, Yixiang; Xiao, Zhuang; Zhou, Xiangyun

    2018-01-01

    Fat-tailed financial data and cyclical financial markets make it difficult for a fixed-structure model based on the Gaussian distribution to characterize the dynamics of corporate bond spreads. Using a flexible-structure model based on the generalized error distribution, this paper focuses on the impact of macro-level factors on the spreads of corporate bonds in China. It is found that in China's corporate bond market, macroeconomic conditions have obvious structural transformational effects on bond spreads, and their structural features remain stable as bond ratings are downgraded. The impact of macroeconomic conditions on spreads is significant for the different structures, and the differences between the structures increase as ratings decline. For the different structures, the persistent characteristics of bond spreads are clearly stronger than the recursive ones, which suggests obvious speculation in the bond market. It is also found that the structure switching of bonds with different ratings is not synchronous, which indicates a shift of investment between different grades of bonds.

  12. Dependence of credit spread and macro-conditions based on an alterable structure model

    PubMed Central

    2018-01-01

    Fat-tailed financial data and cyclical financial markets make it difficult for a fixed-structure model based on the Gaussian distribution to characterize the dynamics of corporate bond spreads. Using a flexible-structure model based on the generalized error distribution, this paper focuses on the impact of macro-level factors on the spreads of corporate bonds in China. It is found that in China's corporate bond market, macroeconomic conditions have obvious structural transformational effects on bond spreads, and their structural features remain stable as bond ratings are downgraded. The impact of macroeconomic conditions on spreads is significant for the different structures, and the differences between the structures increase as ratings decline. For the different structures, the persistent characteristics of bond spreads are clearly stronger than the recursive ones, which suggests obvious speculation in the bond market. It is also found that the structure switching of bonds with different ratings is not synchronous, which indicates a shift of investment between different grades of bonds. PMID:29723295

  13. Astronomical tunings of the Oligocene-Miocene transition from Pacific Ocean Site U1334 and implications for the carbon cycle

    NASA Astrophysics Data System (ADS)

    Beddow, Helen M.; Liebrand, Diederik; Wilson, Douglas S.; Hilgen, Frits J.; Sluijs, Appy; Wade, Bridget S.; Lourens, Lucas J.

    2018-03-01

    Astronomical tuning of sediment sequences requires both unambiguous cycle-pattern recognition in climate proxy records and astronomical solutions, as well as independent information about the phase relationship between these two. Here we present two different astronomically tuned age models for the Oligocene-Miocene transition (OMT) from Integrated Ocean Drilling Program Site U1334 (equatorial Pacific Ocean) to assess the effect tuning has on astronomically calibrated ages and the geologic timescale. These alternative age models (roughly from ˜ 22 to ˜ 24 Ma) are based on different tunings between proxy records and eccentricity: the first age model is based on aligning CaCO3 weight percent (wt%) to Earth's orbital eccentricity, and the second age model is based on a direct age calibration of benthic foraminiferal stable carbon isotope ratios (δ13C) to eccentricity. To independently test which tuned age model and associated tuning assumptions are in best agreement with independent ages based on tectonic plate-pair spreading rates, we assign the tuned ages to magnetostratigraphic reversals identified in deep-marine magnetic anomaly profiles. Subsequently, we compute tectonic plate-pair spreading rates based on the tuned ages. The resultant alternative spreading-rate histories indicate that the CaCO3-tuned age model is most consistent with a conservative assumption of constant, or linearly changing, spreading rates. The CaCO3-tuned age model thus provides robust ages and durations for polarity chrons C6Bn.1n-C7n.1r, which are not based on astronomical tuning in the latest iteration of the geologic timescale. Furthermore, it provides independent evidence that the relatively large (several tens of thousands of years) time lags documented in the benthic foraminiferal isotope records relative to orbital eccentricity constitute a real feature of the Oligocene-Miocene climate system and carbon cycle. The age constraints from Site U1334 thus indicate that the delayed responses of the Oligocene-Miocene climate-cryosphere system and (marine) carbon cycle resulted from highly non-linear feedbacks to astronomical forcing.

  14. An Analysis of Counterinsurgency Campaigns Using Lanchestrian Based Marketing Differential Equations

    DTIC Science & Technology

    2010-09-01

    Coca-Cola would be assessed to be high relative to Shasta brand cola, as Coca-Cola advertises more than Shasta. The analogous comparison in our model… marketing models… have a strong resemblance to Lanchester’s models of warfare.” (Little, 1979) Mathematical modeling of marketing and advertising… advertising expenditure or effort, ρ is the response constant measuring the rate of effectiveness per unit of effort, and δ is the rate at which the

  15. Experimental and Numerical Study of Spacecraft Contamination Problems Associated With Gas and Gas-Droplet Thruster Plume Flows

    DTIC Science & Technology

    2006-04-17

    of the droplet phase are then used for validation of theoretical models of the gas-droplet plume flow. Based on experimental and numerical results… with the continuous model adequately reproduces the Arrhenius rate at high temperatures but significantly underpredicts the theoretical rate at low… continuous model and discrete model of real gas effects, and the results on the shock-wave stand-off distance were compared with the experimental data of

  16. Did case-based payment influence surgical readmission rates in France? A retrospective study

    PubMed Central

    Vuagnat, Albert; Yilmaz, Engin; Roussot, Adrien; Rodwin, Victor; Gadreau, Maryse; Bernard, Alain; Creuzot-Garcher, Catherine; Quantin, Catherine

    2018-01-01

    Objectives: To determine whether implementation of a case-based payment system changed all-cause readmission rates in the 30 days following discharge after surgery, we analysed all surgical procedures performed in all hospitals in France before (2002–2004), during (2005–2008) and after (2009–2012) its implementation. Setting: Our study is based on claims data for all surgical procedures performed in all acute care hospitals with >300 surgical admissions per year (740 hospitals) in France over 11 years (2002–2012; n=51.6 million admissions). Interventions: We analysed all-cause 30-day readmission rates after surgery using a logistic regression model and an interrupted time series analysis. Results: The overall 30-day all-cause readmission rate following discharge after surgery increased from 8.8% to 10.0% (P<0.001) for the public sector and from 5.9% to 8.6% (P<0.001) for the private sector. Interrupted time series models revealed a significant linear increase in readmission rates over the study period in all types of hospitals. However, the implementation of case-based payment was only associated with a significant increase in rehospitalisation rates for private hospitals (P<0.001). Conclusion: In France, the increase in the readmission rate appears to be relatively steady in both the private and public sector but appears not to have been affected by the introduction of a case-based payment system after accounting for changes in care practices in the public sector. PMID:29391376

  17. Probabilistic seismic hazard study based on active fault and finite element geodynamic models

    NASA Astrophysics Data System (ADS)

    Kastelic, Vanja; Carafa, Michele M. C.; Visini, Francesco

    2016-04-01

    We present a probabilistic seismic hazard analysis (PSHA) that is exclusively based on active faults and geodynamic finite element input models, whereas seismic catalogues were used only in a posterior comparison. We applied the developed model in the External Dinarides, a slowly deforming thrust-and-fold belt at the contact between Adria and Eurasia. Our method consists of establishing two earthquake rupture forecast models: (i) a geological active fault input (GEO) model and (ii) a finite element (FEM) model. The GEO model is based on an active fault database that provides information on fault locations and their geometric and kinematic parameters, together with estimates of their slip rates. By default, all deformation in this model is set to be released along the active faults. The FEM model is based on a numerical geodynamic model developed for the region of study. In this model the deformation is released not only along the active faults but also in the volumetric continuum elements. From both models we calculated the corresponding activity rates, earthquake rates and final expected peak ground accelerations. We investigated both the source model and the earthquake model uncertainties by varying the main active fault and earthquake rate calculation parameters through constructing corresponding branches of the seismic hazard logic tree. Hazard maps and UHS curves have been produced for horizontal ground motion on bedrock conditions (VS30 ≥ 800 m/s), thereby not considering local site amplification effects. The hazard was computed over a 0.2° spaced grid considering 648 branches of the logic tree for the mean 10% probability of exceedance in 50 years hazard level, while the 5th and 95th percentiles were also computed to investigate the model limits. We conducted a sensitivity analysis to determine which input parameters influence the final hazard results, and to what degree. This comparison shows that the deformation model, with its internal variability, and the choice of the ground motion prediction equations (GMPEs) are the most influential parameters; both have a significant effect on the hazard results. Thus, good knowledge of the existence of active faults and of their geometric and activity characteristics is of key importance. We also show that PSHA models based exclusively on active faults and geodynamic inputs, which are thus not dependent on past earthquake occurrences, provide a valid method for seismic hazard calculation.

  18. A comparison between rate-and-state friction and microphysical models, based on numerical simulations of fault slip

    NASA Astrophysics Data System (ADS)

    van den Ende, M. P. A.; Chen, J.; Ampuero, J.-P.; Niemeijer, A. R.

    2018-05-01

    Rate-and-state friction (RSF) is commonly used for the characterisation of laboratory friction experiments, such as velocity-step tests. However, the RSF framework provides little physical basis for the extrapolation of these results to the scales and conditions of natural fault systems, and so open questions remain regarding the applicability of the experimentally obtained RSF parameters for predicting seismic cycle transients. As an alternative to classical RSF, microphysics-based models offer means for interpreting laboratory and field observations, but are generally over-simplified with respect to heterogeneous natural systems. In order to bridge the temporal and spatial gap between the laboratory and nature, we have implemented existing microphysical model formulations into an earthquake cycle simulator. Through this numerical framework, we make a direct comparison between simulations exhibiting RSF-controlled fault rheology, and simulations in which the fault rheology is dictated by the microphysical model. Even though the input parameters for the RSF simulation are directly derived from the microphysical model, the microphysics-based simulations produce significantly smaller seismic event sizes than the RSF-based simulation, and suggest a more stable fault slip behaviour. Our results reveal fundamental limitations in using classical rate-and-state friction for the extrapolation of laboratory results. The microphysics-based approach offers a more complete framework in this respect, and may be used for a more detailed study of the seismic cycle in relation to material properties and fault zone pressure-temperature conditions.
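
    As a point of reference for the comparison the paper makes, the sketch below integrates classical rate-and-state friction (Dieterich aging law) for a spring-slider subjected to a velocity step; parameter values are illustrative laboratory-scale numbers, not those of the study.

    ```python
    import math

    # Classical rate-and-state friction (Dieterich aging law) for a spring-slider
    # driven through a 10x velocity step.
    a, b = 0.015, 0.010            # a > b: velocity strengthening, stable sliding
    Dc, mu0, V0 = 1e-5, 0.6, 1e-6  # critical slip distance (m), reference friction/velocity
    k, sigma_n = 1e8, 5e6          # spring stiffness (Pa/m), normal stress (Pa)

    V, theta, mu = V0, Dc / V0, mu0   # start at steady state
    V_lp, dt = 1e-5, 1e-3             # stepped load-point velocity; Euler step (s)
    for _ in range(500_000):          # 500 s of slip
        theta += dt * (1.0 - V * theta / Dc)      # aging-law state evolution
        mu += dt * k * (V_lp - V) / sigma_n       # elastic loading of the slider
        # invert mu = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc) for the slip rate V
        V = V0 * math.exp((mu - mu0 - b * math.log(V0 * theta / Dc)) / a)

    print("simulated steady-state friction:", round(mu, 4))
    print("analytic mu_ss = mu0 + (a - b)*ln(V_lp/V0) =",
          round(mu0 + (a - b) * math.log(V_lp / V0), 4))
    ```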

  19. Revisiting r > g-The asymptotic dynamics of wealth inequality

    NASA Astrophysics Data System (ADS)

    Berman, Yonatan; Shapira, Yoash

    2017-02-01

    Studying the underlying mechanisms of wealth inequality dynamics is essential for understanding it and for policy aiming to regulate its level. We apply a heterogeneous, non-interacting agent-based modeling approach, solved using iterated maps, to model the dynamics of wealth inequality based on three parameters: the economic output growth rate g, the capital value change rate a and the personal savings rate s. We show that for a < g the wealth distribution reaches an asymptotic shape and becomes close to the income distribution, while for a > g the wealth distribution constantly becomes more and more inegalitarian. We also show that when a < g, wealth is asymptotically accumulated at the same rate as the economic output, which also implies that the wealth-disposable income ratio asymptotically converges to s/(g - a).
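
    The asymptotic claim can be checked in a few lines: iterate the implied wealth map under a < g and compare the long-run wealth-to-income ratio with s/(g - a). The recursion below is a minimal reading of the model, not the authors' full heterogeneous-agent setup.

    ```python
    # Iterating the wealth map implied by the three parameters: income y grows
    # at rate g; wealth w earns capital-value change a plus savings s out of
    # income. For a < g the wealth/income ratio converges to s/(g - a).
    g, a, s = 0.03, 0.02, 0.10       # illustrative rates, a < g
    w, y = 1.0, 1.0
    for _ in range(2000):
        w = (1.0 + a) * w + s * y    # capital gains plus new savings
        y *= 1.0 + g                 # income tracks economic output
    print("wealth/income ratio:", round(w / y, 3),
          "  s/(g - a) =", round(s / (g - a), 3))
    ```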

  20. dK/da effects on the SCC growth rates of nickel base alloys in high-temperature water

    NASA Astrophysics Data System (ADS)

    Chen, Kai; Wang, Jiamei; Du, Donghai; Andresen, Peter L.; Zhang, Lefu

    2018-05-01

    The effect of dK/da on crack growth behavior of nickel base alloys has been studied by conducting stress corrosion cracking tests under positive and negative dK/da loading conditions on Alloys 690, 600 and X-750 in high temperature water. Results indicate that positive dK/da accelerates the SCC growth rates, and the accelerating effect increases with dK/da and the initial CGR. The FRI model was found to underestimate the dK/da effect by ∼100X, especially for strain hardening materials, and this underscores the need for improved insight and models for crack tip strain rate. The effect of crack tip strain rate and dK/dt in particular can explain the dK/da accelerating effect.

  1. A fault-based model for crustal deformation, fault slip-rates and off-fault strain rate in California

    USGS Publications Warehouse

    Zeng, Yuehua; Shen, Zheng-Kang

    2016-01-01

    We invert Global Positioning System (GPS) velocity data to estimate fault slip rates in California using a fault-based crustal deformation model with geologic constraints. The model assumes buried elastic dislocations across the region using Uniform California Earthquake Rupture Forecast Version 3 (UCERF3) fault geometries. New GPS velocity and geologic slip-rate data were compiled by the UCERF3 deformation working group. The result of least-squares inversion shows that the San Andreas fault slips at 19–22 mm/yr along Santa Cruz to the North Coast, 25–28 mm/yr along the central California creeping segment to the Carrizo Plain, 20–22 mm/yr along the Mojave, and 20–24 mm/yr along the Coachella to the Imperial Valley. Modeled slip rates are 7–16 mm/yr lower than the preferred geologic rates from the central California creeping section to the San Bernardino North section. For the Bartlett Springs section, fault slip rates of 7–9 mm/yr fall within the geologic bounds but are twice the preferred geologic rates. For the central and eastern Garlock, inverted slip rates of 7.5 and 4.9 mm/yr, respectively, match closely with the geologic rates. For the western Garlock, however, our result suggests a low slip rate of 1.7 mm/yr. Along the eastern California shear zone and southern Walker Lane, our model shows a cumulative slip rate of 6.2–6.9 mm/yr across its east–west transects, an ∼1 mm/yr increase over the geologic estimates. For the off-coast faults of central California, from Hosgri to San Gregorio, fault slips are modeled at 1–5 mm/yr, similar to the lower geologic bounds. For the off-fault deformation, the total moment rate amounts to 0.88×10¹⁹ N·m/yr, with fast straining regions found around the Mendocino triple junction, Transverse Ranges and Garlock fault zones, Landers and Brawley seismic zones, and farther south. The overall California moment rate is 2.76×10¹⁹ N·m/yr, which is a 16% increase compared with the UCERF2 model.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, C.; Potts, I.; Reeks, M. W., E-mail: mike.reeks@ncl.ac.uk

    We present a simple stochastic quadrant model for calculating the transport and deposition of heavy particles in a fully developed turbulent boundary layer based on the statistics of wall-normal fluid velocity fluctuations obtained from a fully developed channel flow. Individual particles are tracked through the boundary layer via their interactions with a succession of random eddies found in each of the quadrants of the fluid Reynolds shear stress domain in a homogeneous Markov chain process. In this way, we are able to account directly for the influence of ejection and sweeping events as others have done, but without resorting to the use of adjustable parameters. Deposition rate predictions for a wide range of heavy particles compare well with benchmark experimental measurements. In addition, deposition rates are compared with those obtained from continuous random walk models and Langevin equation based ejection and sweep models, which noticeably give significantly lower deposition rates. Various statistics related to the particle near-wall behavior are also presented. Finally, we consider the limitations of using the model to calculate deposition in more complex flows where the near-wall turbulence may be significantly different.

  3. Hypovigilance Detection for UCAV Operators Based on a Hidden Markov Model

    PubMed Central

    Kwon, Namyeon; Shin, Yongwook; Ryo, Chuh Yeop; Park, Jonghun

    2014-01-01

    With the advance of military technology, the number of unmanned combat aerial vehicles (UCAVs) has rapidly increased. However, it has been reported that the accident rate of UCAVs is much higher than that of manned combat aerial vehicles. One of the main reasons for the high accident rate of UCAVs is the hypovigilance problem which refers to the decrease in vigilance levels of UCAV operators while maneuvering. In this paper, we propose hypovigilance detection models for UCAV operators based on EEG signal to minimize the number of occurrences of hypovigilance. To enable detection, we have applied hidden Markov models (HMMs), two of which are used to indicate the operators' dual states, normal vigilance and hypovigilance, and, for each operator, the HMMs are trained as a detection model. To evaluate the efficacy and effectiveness of the proposed models, we conducted two experiments on the real-world data obtained by using EEG-signal acquisition devices, and they yielded satisfactory results. By utilizing the proposed detection models, the problem of hypovigilance of UCAV operators and the problem of high accident rate of UCAVs can be addressed. PMID:24963338
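
    As a schematic of the detection idea (not the paper's trained, per-operator models), the sketch below decodes a two-state vigilance HMM over a discretized EEG-feature stream with the Viterbi algorithm; all probabilities and the three observation bins are invented.

    ```python
    import numpy as np

    # Generic two-state HMM decoding (Viterbi) over a discretized EEG-feature
    # stream. States: 0 = normal vigilance, 1 = hypovigilance; observations are
    # three hypothetical alertness bins (0 = alert ... 2 = drowsy).
    pi = np.log([0.9, 0.1])               # initial state probabilities
    A = np.log([[0.95, 0.05],             # transition probabilities
                [0.10, 0.90]])
    B = np.log([[0.6, 0.3, 0.1],          # emission probabilities per state
                [0.1, 0.3, 0.6]])

    def viterbi(obs):
        V, back = pi + B[:, obs[0]], []
        for o in obs[1:]:
            scores = V[:, None] + A       # extend every path by one transition
            back.append(scores.argmax(axis=0))
            V = scores.max(axis=0) + B[:, o]
        path = [int(V.argmax())]
        for ptr in reversed(back):        # trace the best path backwards
            path.append(int(ptr[path[-1]]))
        return path[::-1]

    print("decoded states:", viterbi([0, 0, 1, 2, 2, 2, 1, 0]))
    ```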

  4. IRLT: Integrating Reputation and Local Trust for Trustworthy Service Recommendation in Service-Oriented Social Networks

    PubMed Central

    Liu, Zhiquan; Ma, Jianfeng; Jiang, Zhongyuan; Miao, Yinbin; Gao, Cong

    2016-01-01

    With the prevalence of Social Networks (SNs) and services, plenty of trust models for Trustworthy Service Recommendation (TSR) in Service-oriented SNs (S-SNs) have been proposed. The reputation-based schemes usually do not incorporate user preferences and are vulnerable to unfair rating attacks. Meanwhile, the local trust-based schemes generally have low reliability or even fail to work when the trust path is too long or does not exist. Thus it is beneficial to integrate them for TSR in S-SNs. This work improves the state-of-the-art Combining Global and Local Trust (CGLT) scheme and proposes a novel Integrating Reputation and Local Trust (IRLT) model which mainly includes four modules, namely the Service Recommendation Interface (SRI) module, the Local Trust-based Trust Evaluation (LTTE) module, the Reputation-based Trust Evaluation (RTE) module and the Aggregation Trust Evaluation (ATE) module. In addition, a synthetic S-SN based on the famous Advogato dataset is deployed and the well-known Discounted Cumulative Gain (DCG) metric is employed to measure the service recommendation performance of our IRLT model in comparison with the CGLT model. The results illustrate that our IRLT model is slightly superior to the CGLT model in an honest environment and significantly outperforms the CGLT model in terms of robustness against unfair rating attacks. PMID:26963089

  5. IRLT: Integrating Reputation and Local Trust for Trustworthy Service Recommendation in Service-Oriented Social Networks.

    PubMed

    Liu, Zhiquan; Ma, Jianfeng; Jiang, Zhongyuan; Miao, Yinbin; Gao, Cong

    2016-01-01

    With the prevalence of Social Networks (SNs) and services, plenty of trust models for Trustworthy Service Recommendation (TSR) in Service-oriented SNs (S-SNs) have been proposed. The reputation-based schemes usually do not incorporate user preferences and are vulnerable to unfair rating attacks. Meanwhile, the local trust-based schemes generally have low reliability or even fail to work when the trust path is too long or does not exist. Thus it is beneficial to integrate them for TSR in S-SNs. This work improves the state-of-the-art Combining Global and Local Trust (CGLT) scheme and proposes a novel Integrating Reputation and Local Trust (IRLT) model which mainly includes four modules, namely the Service Recommendation Interface (SRI) module, the Local Trust-based Trust Evaluation (LTTE) module, the Reputation-based Trust Evaluation (RTE) module and the Aggregation Trust Evaluation (ATE) module. In addition, a synthetic S-SN based on the famous Advogato dataset is deployed and the well-known Discounted Cumulative Gain (DCG) metric is employed to measure the service recommendation performance of our IRLT model in comparison with the CGLT model. The results illustrate that our IRLT model is slightly superior to the CGLT model in an honest environment and significantly outperforms the CGLT model in terms of robustness against unfair rating attacks.

  6. Efficient SRAM yield optimization with mixture surrogate modeling

    NASA Astrophysics Data System (ADS)

    Zhongjian, Jiang; Zuochang, Ye; Yan, Wang

    2016-12-01

    Largely repeated cells such as SRAM cells usually require extremely low failure rates to ensure a moderate chip yield. Though fast Monte Carlo methods such as importance sampling and its variants can be used for yield estimation, they are still very expensive if one needs to perform optimization based on such estimates. Yield calculation typically requires a large number of SPICE simulations, and these simulations account for the largest proportion of the computation time. In this paper, a new method is proposed to address this issue. The key idea is to establish an efficient mixture surrogate model over the design and process variables: SPICE simulation provides a set of sample points, on which the mixture surrogate model is trained with the lasso algorithm. Experimental results show that the proposed model calculates the yield accurately and brings significant speed-ups to the calculation of the failure rate. Based on the model, we also developed an accelerated algorithm to further enhance the speed of the yield calculation. The method is suitable for high-dimensional process variables and multi-performance applications.
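
    The fast Monte Carlo baseline mentioned here is easy to illustrate: mean-shifted importance sampling estimates a rare failure probability with far fewer samples than plain Monte Carlo would need. In the toy below, the "circuit" is a one-variable threshold test standing in for a SPICE run; everything is illustrative.

    ```python
    import math
    import numpy as np

    # Mean-shifted importance sampling for a rare failure rate: one process
    # variable x ~ N(0,1) "fails" when x > 4, so the indicator below stands in
    # for a SPICE simulation of the cell.
    rng = np.random.default_rng(1)
    n, shift = 20_000, 4.0
    x = rng.normal(shift, 1.0, n)                    # sample near the failure region
    log_w = 0.5 * (x - shift) ** 2 - 0.5 * x ** 2    # log[N(0,1)/N(shift,1)] weights
    p_hat = np.mean(np.exp(log_w) * (x > 4.0))
    exact = 0.5 * math.erfc(4.0 / math.sqrt(2.0))    # true Gaussian tail
    print(f"IS estimate: {p_hat:.3e}   exact tail probability: {exact:.3e}")
    ```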

  7. Translation elicits a growth rate-dependent, genome-wide, differential protein production in Bacillus subtilis.

    PubMed

    Borkowski, Olivier; Goelzer, Anne; Schaffer, Marc; Calabre, Magali; Mäder, Ulrike; Aymerich, Stéphane; Jules, Matthieu; Fromion, Vincent

    2016-05-17

    Complex regulatory programs control cell adaptation to environmental changes by setting condition-specific proteomes. In balanced growth, bacterial protein abundances depend on the dilution rate, transcript abundances and transcript-specific translation efficiencies. We revisited the current theory claiming the invariance of bacterial translation efficiency. By integrating genome-wide transcriptome datasets and datasets from a library of synthetic gfp-reporter fusions, we demonstrated that translation efficiencies in Bacillus subtilis decreased up to fourfold from slow to fast growth. The translation initiation regions elicited a growth rate-dependent, differential production of proteins without regulators, hence revealing a unique, hard-coded, growth rate-dependent mode of regulation. We combined model-based data analyses of transcript and protein abundances genome-wide and revealed that this global regulation is extensively used in B. subtilis We eventually developed a knowledge-based, three-step translation initiation model, experimentally challenged the model predictions and proposed that a growth rate-dependent drop in free ribosome abundance accounted for the differential protein production. © 2016 The Authors. Published under the terms of the CC BY 4.0 license.

  8. Characterization of exchange rate regimes based on scaling and correlation properties of volatility for ASEAN-5 countries

    NASA Astrophysics Data System (ADS)

    Muniandy, Sithi V.; Uning, Rosemary

    2006-11-01

    Foreign currency exchange rate policies of ASEAN member countries have undergone tremendous changes following the 1997 Asian financial crisis. In this paper, we study the fractal and long-memory characteristics in the volatility of the five ASEAN founding members’ exchange rates with respect to the US dollar. The impact of the exchange rate policies implemented by the ASEAN-5 countries on currency fluctuations during the pre-, mid- and post-crisis periods is briefly discussed. The time series considered are daily price returns, absolute returns and aggregated absolute returns, each partitioned into three segments based on the crisis regimes. These time series are then modeled using fractional Gaussian noise, the fractionally integrated ARFIMA (0,d,0) process and the generalized Cauchy process. The first two stationary models describe long-range dependence through the Hurst and fractional differencing parameters, respectively. Meanwhile, the generalized Cauchy process offers independent estimation of the fractal dimension and the long-memory exponent. Comparing the three models, we found that the generalized Cauchy process showed the greatest sensitivity to the transitions between exchange rate regimes implemented by the ASEAN-5 countries.

  9. A simplified model for glass formation

    NASA Technical Reports Server (NTRS)

    Uhlmann, D. R.; Onorato, P. I. K.; Scherer, G. W.

    1979-01-01

    A simplified model of glass formation based on the formal theory of transformation kinetics is presented, which describes the critical cooling rates implied by the occurrence of glassy or partly crystalline bodies. In addition, an approach based on the nose of the time-temperature-transformation (TTT) curve as an extremum in temperature and time has provided a relatively simple relation between the activation energy for viscous flow in the undercooled region and the temperature of the nose of the TTT curve. Using this relation together with the simplified model, it now seems possible to predict cooling rates using only the liquidus temperature, glass transition temperature, and heat of fusion.

  10. Functional response models to estimate feeding rates of wading birds

    USGS Publications Warehouse

    Collazo, J.A.; Gilliam, J.F.; Miranda-Castro, L.

    2010-01-01

    Forager (predator) abundance may mediate feeding rates in wading birds. Yet, when modeled, feeding rates are typically derived from the purely prey-dependent Holling Type II (HoII) functional response model. Estimates of feeding rates are necessary to evaluate wading bird foraging strategies and their role in food webs; thus, models that incorporate predator dependence warrant consideration. Here, data collected in a mangrove swamp in Puerto Rico in 1994 were reanalyzed, reporting feeding rates for mixed-species flocks after comparing fits of the HoII model, as used in the original work, to the Beddington-DeAngelis (BD) and Crowley-Martin (CM) predator-dependent models. Model CM received the most support (AICc wi = 0.44), but models BD and HoII were plausible alternatives (ΔAICc ≤ 2). Results suggested that feeding rates were constrained by predator abundance. Reductions in rates were attributed to interference, which was consistent with the independently observed increase in aggression as flock size increased (P < 0.05). Substantial discrepancies between the CM and HoII models were possible depending on the flock sizes used to model feeding rates. However, inferences derived from the HoII model, as used in the original work, were sound. While Holling's Type II and other purely prey-dependent models have fostered advances in wading bird foraging ecology, evaluating models that incorporate predator dependence could lead to a more adequate description of data and processes of interest. The mechanistic bases used to derive the models applied here lead to biologically interpretable results and advance understanding of wading bird foraging ecology.
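
    The three competing models have compact closed forms, sketched below with conventional parameter names (a = attack rate, h = handling time, c = interference coefficient); the parameter values are invented, not the fitted estimates.

    ```python
    # The three functional responses compared in the study, as feeding rates
    # f(N, P) for prey density N and predator (flock) density P.
    def holling2(N, P, a=0.5, h=1.0):
        return a * N / (1 + a * h * N)                  # purely prey-dependent (P unused)

    def beddington_deangelis(N, P, a=0.5, h=1.0, c=0.3):
        return a * N / (1 + a * h * N + c * P)          # interference enters additively

    def crowley_martin(N, P, a=0.5, h=1.0, c=0.3):
        return a * N / ((1 + a * h * N) * (1 + c * P))  # interference also while handling

    for P in (1, 5, 10):                                # growing flock size
        rates = [round(f(10, P), 3)
                 for f in (holling2, beddington_deangelis, crowley_martin)]
        print(f"P={P:2d}  HoII/BD/CM feeding rates: {rates}")
    ```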

  11. Using the Many-Faceted Rasch Model to Evaluate Standard Setting Judgments: An Illustration with the Advanced Placement Environmental Science Exam

    ERIC Educational Resources Information Center

    Kaliski, Pamela K.; Wind, Stefanie A.; Engelhard, George, Jr.; Morgan, Deanna L.; Plake, Barbara S.; Reshetar, Rosemary A.

    2013-01-01

    The many-faceted Rasch (MFR) model has been used to evaluate the quality of ratings on constructed response assessments; however, it can also be used to evaluate the quality of judgments from panel-based standard setting procedures. The current study illustrates the use of the MFR model for examining the quality of ratings obtained from a standard…

  12. Observer-based perturbation extremum seeking control with input constraints for direct-contact membrane distillation process

    NASA Astrophysics Data System (ADS)

    Eleiwi, Fadi; Laleg-Kirati, Taous Meriem

    2018-06-01

    An observer-based perturbation extremum seeking control is proposed for a direct-contact membrane distillation (DCMD) process. The process is described with a dynamic model that is based on a 2D advection-diffusion equation model which has pump flow rates as process inputs. The objective of the controller is to optimise the trade-off between the permeate mass flux and the energy consumption by the pumps inside the process. Cases of single and multiple control inputs are considered through the use of only the feed pump flow rate or both the feed and the permeate pump flow rates. A nonlinear Lyapunov-based observer is designed to provide an estimation for the temperature distribution all over the designated domain of the DCMD process. Moreover, control inputs are constrained with an anti-windup technique to be within feasible and physical ranges. Performance of the proposed structure is analysed, and simulations based on real DCMD process parameters for each control input are provided.
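
    A bare-bones version of perturbation extremum seeking conveys the control idea: dither the input, demodulate the measured objective to estimate the local gradient, and integrate uphill. The toy objective below stands in for the flux/energy trade-off; the paper's Lyapunov observer and anti-windup input constraints are omitted, and all gains are invented.

    ```python
    import math

    # Bare-bones perturbation extremum seeking on a static toy objective J(u).
    J = lambda u: -(u - 3.0) ** 2                  # unknown map, optimum at u = 3
    u_hat, a_d, omega, gain, dt = 0.5, 0.1, 5.0, 0.1, 0.01
    for k in range(500_000):                       # 5000 s of adaptation
        t = k * dt
        u = u_hat + a_d * math.sin(omega * t)      # dithered input (pump flow rate)
        u_hat += dt * gain * J(u) * math.sin(omega * t)  # demodulate and climb
    print("converged input:", round(u_hat, 2))     # ~3.0, the optimum
    ```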

  13. Machine learning and linear regression models to predict catchment-level base cation weathering rates across the southern Appalachian Mountain region, USA

    Treesearch

    Nicholas A. Povak; Paul F. Hessburg; Todd C. McDonnell; Keith M. Reynolds; Timothy J. Sullivan; R. Brion Salter; Bernard J. Crosby

    2014-01-01

    Accurate estimates of soil mineral weathering are required for regional critical load (CL) modeling to identify ecosystems at risk of the deleterious effects from acidification. Within a correlative modeling framework, we used modeled catchment-level base cation weathering (BCw) as the response variable to identify key environmental correlates and predict a continuous...

  14. Objective Quantification of Pre-and Postphonosurgery Vocal Fold Vibratory Characteristics Using High-Speed Videoendoscopy and a Harmonic Waveform Model

    ERIC Educational Resources Information Center

    Ikuma, Takeshi; Kunduk, Melda; McWhorter, Andrew J.

    2014-01-01

    Purpose: The model-based quantitative analysis of high-speed videoendoscopy (HSV) data at a low frame rate of 2,000 frames per second was assessed for its clinical adequacy. Stepwise regression was employed to evaluate the HSV parameters using harmonic models and their relationships to the Voice Handicap Index (VHI). Also, the model-based HSV…

  15. Using a Time-Driven Activity-Based Costing Model To Determine the Actual Cost of Services Provided by a Transgenic Core.

    PubMed

    Gerwin, Philip M; Norinsky, Rada M; Tolwani, Ravi J

    2018-03-01

    Laboratory animal programs and core laboratories often set service rates based on cost estimates. However, actual costs may be unknown, and service rates may not reflect the actual cost of services. Accurately evaluating the actual costs of services can be challenging and time-consuming. We used a time-driven activity-based costing (ABC) model to determine the cost of services provided by a resource laboratory at our institution. The time-driven approach is a more efficient approach to calculating costs than using a traditional ABC model. We calculated only 2 parameters: the time required to perform an activity and the unit cost of the activity based on employee cost. This method allowed us to rapidly and accurately calculate the actual cost of services provided, including microinjection of a DNA construct, microinjection of embryonic stem cells, embryo transfer, and in vitro fertilization. We successfully implemented a time-driven ABC model to evaluate the cost of these services and the capacity of labor used to deliver them. We determined how actual costs compared with current service rates. In addition, we determined that the labor supplied to conduct all services (10,645 min/wk) exceeded the practical labor capacity (8400 min/wk), indicating that the laboratory team was highly efficient and that additional labor capacity was needed to prevent overloading of the current team. Importantly, this time-driven ABC approach allowed us to establish a baseline model that can easily be updated to reflect operational changes or changes in labor costs. We demonstrated that a time-driven ABC model is a powerful management tool that can be applied to other core facilities as well as to entire animal programs, providing valuable information that can be used to set rates based on the actual cost of services and to improve operating efficiency.
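
    The two-parameter calculation is simple enough to sketch: the cost of a service is the sum of its activity minutes times the labor cost per minute, and the labor load is checked against practical capacity. Activity times and the $/min rate below are invented; only the min/wk figures come from the study.

    ```python
    # Time-driven ABC in its two parameters: activity minutes x labor $/min.
    COST_PER_MIN = 1.20                        # fully loaded labor cost ($/min), assumed

    services = {                               # activity minutes per unit of service
        "DNA construct microinjection": [("setup", 30), ("injection", 90), ("recovery", 20)],
        "embryo transfer":              [("prep", 25), ("transfer", 45)],
    }

    for name, acts in services.items():
        minutes = sum(m for _, m in acts)
        print(f"{name}: {minutes} min -> ${minutes * COST_PER_MIN:.2f} per unit")

    supplied, practical = 10_645, 8_400        # min/wk, figures from the study
    print(f"labor load is {supplied / practical:.0%} of practical capacity")
    ```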

  16. Reconstructing the 2003/2004 H3N2 influenza epidemic in Switzerland with a spatially explicit, individual-based model

    PubMed Central

    2011-01-01

    Background: Simulation models of influenza spread play an important role for pandemic preparedness. However, as the world has not faced a severe pandemic for decades, except the rather mild H1N1 one in 2009, pandemic influenza models are inherently hypothetical and validation is, thus, difficult. We aim at reconstructing a recent seasonal influenza epidemic that occurred in Switzerland and deem this to be a promising validation strategy for models of influenza spread. Methods: We present a spatially explicit, individual-based simulation model of influenza spread. The simulation model is based upon (i) simulated human travel data, (ii) data on human contact patterns and (iii) empirical knowledge on the epidemiology of influenza. For model validation we compare the simulation outcomes with empirical knowledge regarding (i) the shape of the epidemic curve, overall infection rate and reproduction number, (ii) age-dependent infection rates and time of infection, (iii) spatial patterns. Results: The simulation model is capable of reproducing the shape of the 2003/2004 H3N2 epidemic curve of Switzerland and generates an overall infection rate (14.9 percent) and reproduction numbers (between 1.2 and 1.3), which are realistic for seasonal influenza epidemics. Age and spatial patterns observed in empirical data are also reflected by the model: Highest infection rates are in children between 5 and 14 and the disease spreads along the main transport axes from west to east. Conclusions: We show that finding evidence for the validity of simulation models of influenza spread by challenging them with seasonal influenza outbreak data is possible and promising. Simulation models for pandemic spread gain more credibility if they are able to reproduce seasonal influenza outbreaks. For more robust modelling of seasonal influenza, serological data complementing sentinel information would be beneficial. PMID:21554680

  17. Investigation of Particle Deposition in Internal Cooling Cavities of a Nozzle Guide Vane

    NASA Astrophysics Data System (ADS)

    Casaday, Brian Patrick

    Experimental and computational studies were conducted regarding particle deposition in the internal film cooling cavities of nozzle guide vanes. An experimental facility was fabricated to simulate particle deposition on an impingement liner and upstream surface of a nozzle guide vane wall. The facility supplied particle-laden flow at temperatures up to 1000°F (540°C) to a simplified impingement cooling test section. The heated flow passed through a perforated impingement plate and impacted on a heated flat wall. The particle-laden impingement jets resulted in the buildup of deposit cones associated with individual impingement jets. The deposit growth rate increased with increasing temperature and decreasing impinging velocities. For some low flow rates or high flow temperatures, the deposit cone heights spanned the entire gap between the impingement plate and the wall, and grew through the impingement holes. For high flow rates, deposit structures were removed by shear forces from the flow. At low temperatures, deposit formed not only as individual cones, but also as ridges located at the mid-planes between impinging jets. A computational model was developed to predict the deposit buildup seen in the experiments. The test section geometry and fluid flow from the experiment were replicated computationally and an Eulerian-Lagrangian particle tracking technique was employed. Several particle sticking models were employed and tested for adequacy. Sticking models that accurately predicted locations and rates in external deposition experiments failed to predict certain structures or rates seen in internal applications. A geometry adaptation technique was employed and its effect on deposition prediction was discussed. A new computational sticking model was developed that predicts deposition rates based on the local wall shear. The growth patterns were compared to experiments under different operating conditions. Of all the sticking models employed, the model based on wall shear, in conjunction with geometry adaptation, proved to be the most accurate in predicting the forms of deposit growth. It was the only model that predicted the changing deposition trends based on flow temperature or Reynolds number, and is recommended for further investigation and application in the modeling of deposition in internal cooling cavities.

  18. Structure analysis of tax revenue and inflation rate in Banda Aceh using vector error correction model with multiple alpha

    NASA Astrophysics Data System (ADS)

    Sofyan, Hizir; Maulia, Eva; Miftahuddin

    2017-11-01

    A country has several important parameters for achieving economic prosperity, such as tax revenue and the inflation rate. One of the largest revenues in the State Budget of Indonesia comes from the tax sector. Meanwhile, the rate of inflation occurring in a country can be used as an indicator of the economic problems it faces. Given the importance of tax revenue and inflation-rate control in achieving economic prosperity, it is necessary to analyze the structural relationship between tax revenue and the inflation rate. This study aims to produce the best VECM (Vector Error Correction Model) with optimal lag using various significance levels (alpha) and to perform structural analysis using the Impulse Response Function (IRF) of the VECM models to examine the relationship between tax revenue and inflation in Banda Aceh. The results show that the best model for the tax revenue and inflation rate data in Banda Aceh City using alpha 0.01 is a VECM with optimal lag 2, while the best model using alpha 0.05 and 0.1 is a VECM with optimal lag 3. The VECM model with alpha 0.01 yielded four significant models: the income tax model and the models for the overall, health and education inflation rates in Banda Aceh. The VECM models with alpha 0.05 and 0.1 yielded one significant model, the income tax model. Based on these VECM models, two IRF structural analyses were formed to examine the relationship between tax revenue and inflation in Banda Aceh: the IRF with VECM(2) and the IRF with VECM(3).
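
    A sketch of this workflow using statsmodels is given below, run on synthetic cointegrated series since the study's Banda Aceh data are not included; method names follow the statsmodels VECM API as documented, and the alpha levels match those in the abstract. Treat it as an assumed outline rather than the authors' code.

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

    # Synthetic cointegrated stand-ins for the tax-revenue and inflation series.
    rng = np.random.default_rng(0)
    trend = np.cumsum(rng.normal(size=200))              # shared stochastic trend
    data = pd.DataFrame({
        "tax_revenue": trend + rng.normal(scale=0.5, size=200),
        "inflation":   0.8 * trend + rng.normal(scale=0.5, size=200),
    })

    # Johansen rank selection at one of the abstract's alpha levels, then a
    # VECM with 2 lagged differences, mirroring the alpha = 0.01 result.
    rank = select_coint_rank(data, det_order=0, k_ar_diff=2, signif=0.05)
    res = VECM(data, k_ar_diff=2, coint_rank=rank.rank, deterministic="ci").fit()
    print(res.summary())
    res.irf(10).plot()     # impulse response analysis (requires matplotlib)
    ```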

  19. Use, misuse and extensions of "ideal gas" models of animal encounter.

    PubMed

    Hutchinson, John M C; Waser, Peter M

    2007-08-01

    Biologists have repeatedly rediscovered classical models from physics predicting collision rates in an ideal gas. These models, and their two-dimensional analogues, have been used to predict rates and durations of encounters among animals or social groups that move randomly and independently, given population density, velocity, and distance at which an encounter occurs. They have helped to separate cases of mixed-species association based on behavioural attraction from those that simply reflect high population densities, and to detect cases of attraction or avoidance among conspecifics. They have been used to estimate the impact of population density, speeds of movement and size on rates of encounter between members of the opposite sex, between gametes, between predators and prey, and between observers and the individuals that they are counting. One limitation of published models has been that they predict rates of encounter, but give no means of determining whether observations differ significantly from predictions. Another uncertainty is the robustness of the predictions when animal movements deviate from the model's assumptions in specific, biologically relevant ways. Here, we review applications of the ideal gas model, derive extensions of the model to cover some more realistic movement patterns, correct several errors that have arisen in the literature, and show how to generate confidence limits for expected rates of encounter among independently moving individuals. We illustrate these results using data from mangabey monkeys originally used along with the ideal gas model to argue that groups avoid each other. Although agent-based simulations provide a more flexible alternative approach, the ideal gas model remains both a valuable null model and a useful, less onerous, approximation to biological reality.
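
    One core ingredient of the two-dimensional gas model is easy to verify numerically: for equal speeds v and independent, uniformly distributed headings, the mean relative speed is 4v/π, which feeds the per-individual encounter rate 2dρ(4v/π) for detection distance d and population density ρ. The check below is illustrative and independent of the paper's data.

    ```python
    import numpy as np

    # Monte Carlo check of the 2D ideal-gas mean relative speed, 4v/pi, and the
    # resulting per-individual encounter rate 2 * d * rho * (4v/pi).
    rng = np.random.default_rng(2)
    v = 1.3
    theta = rng.uniform(0.0, 2.0 * np.pi, 1_000_000)    # angle between two headings
    rel_speed = 2.0 * v * np.abs(np.sin(theta / 2.0))   # |v1 - v2| for equal speeds
    print("simulated mean relative speed:", round(float(rel_speed.mean()), 4))
    print("ideal-gas prediction 4v/pi:   ", round(4.0 * v / np.pi, 4))

    d, rho = 0.05, 0.2                                  # detection distance, density
    print("encounter rate per individual:", round(2 * d * rho * 4 * v / np.pi, 4))
    ```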

  20. Recognition ROCS Are Curvilinear--Or Are They? On Premature Arguments against the Two-High-Threshold Model of Recognition

    ERIC Educational Resources Information Center

    Broder, Arndt; Schutz, Julia

    2009-01-01

    Recent reviews of recognition receiver operating characteristics (ROCs) claim that their curvilinear shape rules out threshold models of recognition. However, the shape of ROCs based on confidence ratings is not diagnostic to refute threshold models, whereas ROCs based on experimental bias manipulations are. Also, fitting predicted frequencies to…

  1. Tuition Elasticity of the Demand for Higher Education among Current Students: A Pricing Model.

    ERIC Educational Resources Information Center

    Bryan, Glenn A.; Whipple, Thomas W.

    1995-01-01

    A pricing model is offered, based on retention of current students, that colleges can use to determine appropriate tuition. A computer-based model that quantifies the relationship between tuition elasticity and projected net return to the college was developed and applied to determine an appropriate tuition rate for a small, private liberal arts…

  2. Evaluating crown fire rate of spread predictions from physics-based models

    Treesearch

    C. M. Hoffman; J. Ziegler; J. Canfield; R. R. Linn; W. Mell; C. H. Sieg; F. Pimont

    2015-01-01

    Modeling the behavior of crown fires is challenging due to the complex set of coupled processes that drive the characteristics of a spreading wildfire and the large range of spatial and temporal scales over which these processes occur. Detailed physics-based modeling approaches such as FIRETEC and the Wildland Urban Interface Fire Dynamics Simulator (WFDS) simulate...

  3. Modelling of Dynamic Rock Fracture Process with a Rate-Dependent Combined Continuum Damage-Embedded Discontinuity Model Incorporating Microstructure

    NASA Astrophysics Data System (ADS)

    Saksala, Timo

    2016-10-01

    This paper deals with numerical modelling of rock fracture under dynamic loading. To this end, a combined continuum damage-embedded discontinuity model is applied in finite element modelling of crack propagation in rock. In this model, the strong loading-rate sensitivity of rock is captured by the rate-dependent continuum scalar damage model that controls the pre-peak nonlinear hardening part of rock behaviour. The post-peak exponential softening part of the rock behaviour is governed by the embedded displacement discontinuity model describing the mode I, mode II and mixed-mode fracture of rock. Rock heterogeneity is incorporated in the present approach by a random description of the rock mineral texture based on the Voronoi tessellation. The model performance is demonstrated in numerical examples where uniaxial tension and compression tests on rock are simulated. Finally, the dynamic three-point bending test of a semicircular disc is simulated in order to show that the model correctly predicts the strain-rate-dependent tensile strengths as well as the failure modes of rock in this test. Special emphasis is laid on modelling the loading-rate sensitivity of the tensile strength of Laurentian granite.

  4. Correction of distortions in distressed mothers' ratings of their preschool children's psychopathology.

    PubMed

    Müller, Jörg M; Furniss, Tilman

    2013-11-30

    The often-reported low agreement between multiple informants rating child psychopathology has led to various suggestions about how to address discrepant ratings. Among the factors discussed as potentially lowering agreement are informant credibility, reliability, and psychopathology; the last is of interest in this paper. We tested three different models, namely the accuracy model, the distortion model, and an integrated, so-called combined model, that conceptualize parental ratings of child psychopathology. The data comprise ratings of child psychopathology from multiple informants (mother, therapist and kindergarten teacher) and ratings of maternal psychopathology. The children were patients in a preschool psychiatry unit (N=247). The results from structural equation modeling show that maternal ratings of child psychopathology were biased by maternal psychopathology (distortion model). On this statistical basis, we suggest a method to adjust biased maternal ratings. We illustrate the maternal bias by comparing the mothers' ratings to expert ratings (combined kindergarten teacher and therapist ratings) and show that the correction equation increases the agreement between maternal and expert ratings. We conclude that this approach may help to reduce misclassification of preschool children as 'clinical' on the basis of biased maternal ratings. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  5. Fault-based PSHA of an active tectonic region characterized by low deformation rates: the case of the Lower Rhine Graben

    NASA Astrophysics Data System (ADS)

    Vanneste, Kris; Vleminckx, Bart; Camelbeeck, Thierry

    2016-04-01

    The Lower Rhine Graben (LRG) is one of the few regions in intraplate NW Europe where seismic activity can be linked to active faults, yet probabilistic seismic hazard assessments of this region have hitherto been based on area-source models, in which the LRG is modeled as a single or a small number of seismotectonic zones with uniform seismicity. While fault-based PSHA has become common practice in more active regions of the world (e.g., California, Japan, New Zealand, Italy), knowledge of active faults has been lagging behind in other regions, due to an incomplete tectonic inventory, a low level of seismicity, a lack of systematic fault parameterization, or a combination thereof. In the past few years, efforts have increasingly been directed to the inclusion of fault sources in PSHA in these regions as well, in order to predict hazard on a more physically sound basis. In Europe, the EC project SHARE ("Seismic Hazard Harmonization in Europe", http://www.share-eu.org/) represented an important step forward in this regard. In the frame of this project, we previously compiled the first parameterized fault model for the LRG that can be applied in PSHA. We defined 15 fault sources based on major stepovers, bifurcations, gaps, and important changes in strike, dip direction or slip rate. Based on the available data, we were able to place reasonable bounds on the parameters required for time-independent PSHA: length, width, strike, dip, rake, slip rate, and maximum magnitude. With long-term slip rates remaining below 0.1 mm/yr, the LRG can be classified as a low-deformation-rate structure. Information on recurrence interval and elapsed time since the last major earthquake is lacking for most faults, impeding time-dependent PSHA. We consider different models to construct the magnitude-frequency distribution (MFD) of each fault: a slip-rate-constrained form of the classical truncated Gutenberg-Richter MFD (Anderson & Luco, 1983) versus a characteristic MFD following Youngs & Coppersmith (1985). The summed Anderson & Luco fault MFDs show a remarkably good agreement with the MFD obtained from the historical and instrumental catalog for the entire LRG, whereas the summed Youngs & Coppersmith MFD clearly underpredicts low to moderate magnitudes, but yields higher occurrence rates for M > 6.3 than would be obtained by simple extrapolation of the catalog MFD. The moment rate implied by the Youngs & Coppersmith MFDs is about three times higher, but is still within the range allowed by current GPS uncertainties. Using the open-source hazard engine OpenQuake (http://openquake.org/), we compute hazard maps for return periods of 475, 2475, and 10,000 yr, and for spectral periods of 0 s (PGA) and 1 s. We explore the impact of various parameter choices, such as the MFD model, the GMPE distance metric, and the inclusion of a background zone to account for lower magnitudes, and we also compare the results with hazard maps based on area-source models. References: Anderson, J. G., and J. E. Luco (1983), Consequences of slip rate constraints on earthquake occurrence relations, Bull. Seismol. Soc. Am., 73(2), 471-496. Youngs, R. R., and K. J. Coppersmith (1985), Implications of fault slip rates and earthquake recurrence models to probabilistic seismic hazard estimates, Bull. Seismol. Soc. Am., 75(4), 939-964.
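
    The slip-rate-constrained construction can be sketched compactly: fix the truncated Gutenberg-Richter shape with the b-value, then scale the incremental rates so the summed moment release matches the geologic moment rate μAs (using the Hanks & Kanamori moment-magnitude relation). Fault dimensions, slip rate and b-value below are invented stand-ins, not the LRG fault parameters.

    ```python
    import numpy as np

    # Truncated Gutenberg-Richter MFD scaled to a geologic moment rate, in the
    # spirit of Anderson & Luco (1983). All fault parameters are illustrative.
    mu, A, slip = 3e10, 40e3 * 12e3, 0.05e-3   # rigidity (Pa), area (m^2), slip (m/yr)
    target = mu * A * slip                     # seismic moment rate to release (N*m/yr)

    b, Mmin, Mmax, dM = 1.0, 4.0, 6.8, 0.1
    M = np.arange(Mmin, Mmax + dM / 2, dM)     # magnitude bin centres
    shape = 10.0 ** (-b * M)                   # relative incremental rates
    M0 = 10.0 ** (1.5 * M + 9.05)              # moment per event (Hanks & Kanamori, N*m)
    rates = shape * target / np.sum(shape * M0)

    print(f"moment-rate check: {np.sum(rates * M0):.3e} vs target {target:.3e} N*m/yr")
    for m, r in zip(M[::7], rates[::7]):
        print(f"M = {m:.1f} bin: {r:.2e} /yr   (return period ~{1 / r:,.0f} yr)")
    ```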

  6. Risk of fetal mortality after exposure to Listeria monocytogenes based on dose-response data from pregnant guinea pigs and primates.

    PubMed

    Williams, Denita; Castleman, Jennifer; Lee, Chi-Ching; Mote, Beth; Smith, Mary Alice

    2009-11-01

    One-third of the annual cases of listeriosis in the United States occur during pregnancy and can lead to miscarriage or stillbirth, premature delivery, or infection of the newborn. Previous risk assessments completed by the Food and Drug Administration/the Food Safety Inspection Service of the U.S. Department of Agriculture/the Centers for Disease Control and Prevention (FDA/USDA/CDC) and the Food and Agricultural Organization/the World Health Organization (FAO/WHO) were based on dose-response data from mice. Recent animal studies using nonhuman primates and guinea pigs have both estimated LD50s of approximately 10⁷ Listeria monocytogenes colony forming units (cfu). The FAO/WHO estimated a human LD50 of 1.9 × 10⁶ cfu based on data from a pregnant woman consuming contaminated soft cheese. We reevaluated risk based on dose-response curves from pregnant rhesus monkeys and guinea pigs. Using standard risk assessment methodology including hazard identification, exposure assessment, hazard characterization, and risk characterization, risk was calculated based on the new dose-response information. To compare models, we looked at the mortality rate per serving at predicted doses ranging from 10⁻⁴ to 10¹² L. monocytogenes cfu. Based on a serving of 10⁶ L. monocytogenes cfu, the primate model predicts a death rate of 5.9 × 10⁻¹ compared to the FDA/USDA/CDC (fig. IV-12) predicted rate of 1.3 × 10⁻⁷. Based on the guinea pig and primate models, the mortality rate calculated by the FDA/USDA/CDC is underestimated for this susceptible population.

  7. DEVELOPMENT OF A PHYSIOLOGICALLY BASED PHARMACOKINETIC MODEL FOR DELTAMETHRIN IN DEVELOPING SPRAGUE-DAWLEY RATS

    EPA Science Inventory

    This work describes the development of a physiologically based pharmacokinetic (PBPK) model of deltamethrin, a type II pyrethroid, in the developing male Sprague-Dawley rat. Generalized Michaelis-Menten equations were used to calculate metabolic rate constants and organ weights ...

  8. Ground-Based Remote Retrievals of Cumulus Entrainment Rates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wagner, Timothy J.; Turner, David D.; Berg, Larry K.

    2013-07-26

    While fractional entrainment rates for cumulus clouds have typically been derived from airborne observations, this limits the size and scope of available data sets. To increase the number of continental cumulus entrainment rate observations available for study, an algorithm for retrieving them from ground-based remote sensing observations has been developed. This algorithm, called the Entrainment Rate In Cumulus Algorithm (ERICA), uses the suite of instruments at the Southern Great Plains (SGP) site of the United States Department of Energy's Atmospheric Radiation Measurement (ARM) Climate Research Facility as inputs into a Gauss-Newton optimal estimation scheme, in which an assumed guess of the entrainment rate is iteratively adjusted through intercomparison of modeled liquid water path and cloud droplet effective radius to their observed counterparts. The forward model in this algorithm is the Explicit Mixing Parcel Model (EMPM), a cloud parcel model that treats entrainment as a series of discrete entrainment events. A quantified value for measurement uncertainty is also returned as part of the retrieval. Sensitivity testing and information content analysis demonstrate the robust nature of this method for retrieving accurate observations of the entrainment rate without the drawbacks of airborne sampling. Results from a test of ERICA on three months of shallow cumulus cloud events show significant variability of the entrainment rate of clouds in a single day and from one day to the next. The mean value of 1.06 km⁻¹ for the entrainment rate in this dataset corresponds well with prior observations and simulations of the entrainment rate in cumulus clouds.

  9. An evaluation method of computer usability based on human-to-computer information transmission model.

    PubMed

    Ogawa, K

    1992-01-01

    This paper proposes a new evaluation and prediction method for computer usability. This method is based on our two previously proposed information transmission measures created from a human-to-computer information transmission model. The model has three information transmission levels: the device, software, and task content levels. Two measures, called the device independent information measure (DI) and the computer independent information measure (CI), defined on the software and task content levels respectively, are given as the amount of information transmitted. Two information transmission rates are defined as DI/T and CI/T, where T is the task completion time: the device independent information transmission rate (RDI) and the computer independent information transmission rate (RCI). The method utilizes the RDI and RCI rates to evaluate the relative usability of software and device operations on different computer systems. Experiments on three different systems, using a graphical information input task, confirm that the method offers an efficient way of determining computer usability.
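
    The two rates follow directly from the stated definitions, as in this minimal sketch (variable names and values are illustrative):

        def transmission_rates(di_bits, ci_bits, t_seconds):
            # RDI = DI/T and RCI = CI/T, in bits per second
            return di_bits / t_seconds, ci_bits / t_seconds

        rdi, rci = transmission_rates(di_bits=120.0, ci_bits=80.0, t_seconds=30.0)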

  10. Scaling in situ cosmogenic nuclide production rates using analytical approximations to atmospheric cosmic-ray fluxes

    NASA Astrophysics Data System (ADS)

    Lifton, Nathaniel; Sato, Tatsuhiko; Dunai, Tibor J.

    2014-01-01

    Several models have been proposed for scaling in situ cosmogenic nuclide production rates from the relatively few sites where they have been measured to other sites of interest. Two main types of models are recognized: (1) those based on data from nuclear disintegrations in photographic emulsions combined with various neutron detectors, and (2) those based largely on neutron monitor data. However, stubborn discrepancies between these model types have led to frequent confusion when calculating surface exposure ages from production rates derived from the models. To help resolve these discrepancies and identify the sources of potential biases in each model, we have developed a new scaling model based on analytical approximations to modeled fluxes of the main atmospheric cosmic-ray particles responsible for in situ cosmogenic nuclide production. Both the analytical formulations and the Monte Carlo model fluxes on which they are based agree well with measured atmospheric fluxes of neutrons, protons, and muons, indicating they can serve as a robust estimate of the atmospheric cosmic-ray flux based on first principles. We are also using updated records for quantifying temporal and spatial variability in geomagnetic and solar modulation effects on the fluxes. A key advantage of this new model (herein termed LSD) over previous Monte Carlo models of cosmogenic nuclide production is that it allows for faster estimation of scaling factors based on time-varying geomagnetic and solar inputs. Comparing scaling predictions derived from the LSD model with those of previously published models suggests that potential sources of bias in the latter can be largely attributed to two factors: different energy responses of the secondary neutron detectors used in developing the models, and different geomagnetic parameterizations. Given that the LSD model generates flux spectra for each cosmic-ray particle of interest, it is also relatively straightforward to generate nuclide-specific scaling factors based on recently updated neutron and proton excitation functions (the probability of nuclide production in a given nuclear reaction as a function of energy) for commonly measured in situ cosmogenic nuclides. Such scaling factors reflect the influence of the energy distribution of the flux folded with the relevant excitation functions. The resulting scaling factors indicate that 3He shows the strongest positive deviation from the flux-based scaling, while 14C exhibits a negative deviation. These results are consistent with a recent Monte Carlo-based study using a different cosmic-ray physics code package but the same excitation functions.

  11. The role of reaction affinity and secondary minerals in regulating chemical weathering rates at the Santa Cruz Soil Chronosequence, California

    USGS Publications Warehouse

    Maher, K.; Steefel, Carl; White, A.F.; Stonestrom, David A.

    2009-01-01

    In order to explore the reasons for the apparent discrepancy between laboratory and field weathering rates and to determine the extent to which weathering rates are controlled by the approach to thermodynamic equilibrium, secondary mineral precipitation, and flow rates, a multicomponent reactive transport model (CrunchFlow) was used to interpret soil profile development and mineral precipitation and dissolution rates at the 226 ka Marine Terrace Chronosequence near Santa Cruz, CA. Aqueous compositions, fluid chemistry, transport, and mineral abundances are well characterized [White A. F., Schulz M. S., Vivit D. V., Blum A., Stonestrom D. A. and Anderson S. P. (2008) Chemical weathering of a Marine Terrace Chronosequence, Santa Cruz, California. I: interpreting the long-term controls on chemical weathering based on spatial and temporal element and mineral distributions. Geochim. Cosmochim. Acta 72 (1), 36-68] and were used to constrain the reaction rates for the weathering and precipitating minerals in the reactive transport modeling. When primary mineral weathering rates are calculated with either of two experimentally determined rate constants, the nonlinear, parallel rate law formulation of Hellmann and Tisserand [Hellmann R. and Tisserand D. (2006) Dissolution kinetics as a function of the Gibbs free energy of reaction: An experimental study based on albite feldspar. Geochim. Cosmochim. Acta 70 (2), 364-383] or the aluminum inhibition model proposed by Oelkers et al. [Oelkers E. H., Schott J. and Devidal J. L. (1994) The effect of aluminum, pH, and chemical affinity on the rates of aluminosilicate dissolution reactions. Geochim. Cosmochim. Acta 58 (9), 2011-2024], modeling results are consistent with field-scale observations when independently constrained clay precipitation rates are accounted for. Experimental and field rates, therefore, can be reconciled at the Santa Cruz site. Additionally, observed maximum clay abundances in the argillic horizons occur at the depth and time where the reaction fronts of the primary minerals overlap. The modeling indicates that the argillic horizon at Santa Cruz can be explained almost entirely by weathering of primary minerals and in situ clay precipitation accompanied by undersaturation of kaolinite at the top of the profile. The rate constant for kaolinite precipitation was also determined based on model simulations of mineral abundances and dissolved Al, SiO2(aq) and pH in pore waters. Changes in the rate of kaolinite precipitation or the flow rate do not affect the gradient of the primary mineral weathering profiles, but instead control the rate of propagation of the primary mineral weathering fronts and thus total mass removed from the weathering profile. Our analysis suggests that secondary clay precipitation is as important as aqueous transport in governing the amount of dissolution that occurs within a profile because clay minerals exert a strong control over the reaction affinity of the dissolving primary minerals. The modeling also indicates that the weathering advance rate and the total mass of mineral dissolved is controlled by the thermodynamic saturation of the primary dissolving phases plagioclase and K-feldspar, as is evident from the difference in propagation rates of the reaction fronts for the two minerals despite their very similar kinetic rate laws. © 2009 Elsevier Ltd.
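
    For orientation, the reaction-affinity control discussed above is commonly expressed with a transition-state-theory rate law of the following generic form (the study itself uses the Hellmann and Tisserand parallel rate law and the Oelkers et al. aluminum-inhibition model, which elaborate on this):

        r = k\,A_{\mathrm{surf}}\left[1 - \exp\!\left(\frac{\Delta G_r}{RT}\right)\right],
        \qquad \Delta G_r = RT\,\ln\!\left(\frac{Q}{K_{eq}}\right)

    where r is the dissolution rate, k the rate constant, A_surf the reactive surface area, Q the ion activity product, and K_eq the equilibrium constant; the rate vanishes as the solution approaches equilibrium (Q -> K_eq), which is how secondary clays can throttle primary mineral dissolution.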

  12. Digital modulation and achievable information rates of thru-body haptic communications

    NASA Astrophysics Data System (ADS)

    Hanisch, Natalie; Pierobon, Massimiliano

    2017-05-01

    The ever increasing biocompatibility and pervasive nature of wearable and implantable devices demand novel sustainable solutions to realize their connectivity, which can impact broad application scenarios such as the defense, biomedicine, and entertainment fields. While wireless electromagnetic communications face challenges such as device miniaturization, energy scarcity, limited range, and the possibility of interception, solutions not only inspired by but also based on natural communication means may prove to be valid alternatives. In this paper, a communication paradigm where digital information is propagated through the nervous system is proposed and analyzed on the basis of achievable information rates. In particular, this paradigm is based on an analytical framework where the response of a system based on haptic (tactile) information transmission and ElectroEncephaloGraphy (EEG)-based reception is modeled and characterized. Computational neuroscience models of the somatosensory signal representation in the brain, coupled with models of the generation and propagation of somatosensory stimulation from skin mechanoreceptors, are employed in this paper to provide a proof-of-concept evaluation of achievable performance in encoding information bits into tactile stimulation, and decoding them from the recorded brain activity. Based on these models, the system is simulated and the resulting data are utilized to train a Support Vector Machine (SVM) classifier, which is finally used to provide a proof-of-concept validation of the system performance in terms of information rates against bit error probability at reception.

  13. A European benchmarking system to evaluate in-hospital mortality rates in acute coronary syndrome: the EURHOBOP project.

    PubMed

    Dégano, Irene R; Subirana, Isaac; Torre, Marina; Grau, María; Vila, Joan; Fusco, Danilo; Kirchberger, Inge; Ferrières, Jean; Malmivaara, Antti; Azevedo, Ana; Meisinger, Christa; Bongard, Vanina; Farmakis, Dimitros; Davoli, Marina; Häkkinen, Unto; Araújo, Carla; Lekakis, John; Elosua, Roberto; Marrugat, Jaume

    2015-03-01

    Hospital performance models in acute myocardial infarction (AMI) are useful to assess patient management. While models are available for individual countries, mainly the US, cross-European performance models are lacking. Thus, we aimed to develop a system to benchmark European hospitals in AMI and percutaneous coronary intervention (PCI), based on predicted in-hospital mortality. We used the EURopean HOspital Benchmarking by Outcomes in ACS Processes (EURHOBOP) cohort to develop the models, which included 11,631 AMI patients and 8276 acute coronary syndrome (ACS) patients who underwent PCI. Models were validated with a cohort of 55,955 European ACS patients. Multilevel logistic regression was used to predict in-hospital mortality in European hospitals for AMI and PCI. Administrative and clinical models were constructed with patient- and hospital-level covariates, as well as hospital- and country-based random effects. Internal cross-validation and external validation showed good discrimination at the patient level and good calibration at the hospital level, based on the C-index (0.736-0.819) and the concordance correlation coefficient (55.4%-80.3%). Mortality ratios (MRs) showed excellent concordance between administrative and clinical models (97.5% for AMI and 91.6% for PCI). Exclusion of transfers and hospital stays ≤1 day did not affect in-hospital mortality prediction in sensitivity analyses, as shown by MR concordance (80.9%-85.4%). Models were used to develop a benchmarking system to compare in-hospital mortality rates of European hospitals with similar characteristics. The developed system, based on the EURHOBOP models, is a simple and reliable tool to compare in-hospital mortality rates between European hospitals in AMI and PCI. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
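
    The hierarchical structure described (patient-level covariates with hospital- and country-level random intercepts) corresponds to a two-level random-intercept logistic model of the following generic form (notation ours, not the paper's):

        \operatorname{logit}\,\Pr(Y_{ij}=1) = \mathbf{x}_{ij}^{\top}\boldsymbol{\beta} + u_{h(ij)} + v_{c(ij)},
        \qquad u_h \sim N(0,\sigma_h^2),\quad v_c \sim N(0,\sigma_c^2)

    where Y_{ij} indicates in-hospital death of patient i in hospital j, x_{ij} collects the administrative or clinical covariates, and u and v are the hospital- and country-level random effects.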

  14. A citation-based assessment of the performance of U.S. boiling water reactors following extended power up-rates

    NASA Astrophysics Data System (ADS)

    Heidrich, Brenden J.

    Nuclear power plants produce 20 percent of the electricity generated in the U.S. Nuclear generated electricity is increasingly valuable to a utility because it can be produced at a low marginal cost and it does not release any carbon dioxide. It can also be a hedge against uncertain fossil fuel prices. The construction of new nuclear power plants in the U.S. is cautiously moving forward, restrained by high capital costs. Since 1998, nuclear utilities have been increasing the power output of their reactors by implementing extended power up-rates. Power increases of up to 20 percent are allowed under this process. The equivalent of nine large power plants has been added via extended power up-rates. These up-rates require the replacement of large capital equipment and are often performed in concert with other plant life extension activities such as license renewals. This dissertation examines the effect of these extended power up-rates on the safety performance of U.S. boiling water reactors. Licensing event reports are submitted by the utilities to the Nuclear Regulatory Commission, the federal nuclear regulator, for a wide range of abnormal events. Two methods are used to examine the effect of extended power up-rates on the frequency of abnormal events at the reactors. The Crow/AMSAA model, a univariate technique, is used to determine whether the implementation of an extended power up-rate affects the rate of abnormal events. The method has a long history in the aerospace industry and in the military. At a 95-percent confidence level, the rate of events requiring the submission of a licensing event report decreases following the implementation of an extended power up-rate. It is hypothesized that the improvement in performance is tied to the equipment replacement and refurbishment that is performed as part of the up-rate process. Reactor performance is also analyzed using the proportional hazards model. This technique allows for the estimation of the effects of multiple independent variables on the event rate. Both the Cox and Weibull formulations were tested. The Cox formulation is more commonly used in survival analysis because of its flexibility. The best Cox model included fixed effects at the multi-reactor site level. The Weibull parametric formulation has the same base hazard rate as the Crow/AMSAA model. This theoretical connection was confirmed through a series of tests that demonstrated that both models predicted the same base hazard rates. The Weibull formulation produced a model with most of the same statistically significant variables as the Cox model. The beneficial effect of extended power up-rates was predicted by the proportional hazards models as well as the Crow/AMSAA model. The Weibull model also indicated an effect that can be traced back to a plant's construction. Performance was also found to improve in plants that had been divested from their original owners. This research developed a consistent evaluation toolkit for nuclear power plant performance, using either a univariate method that allows for simple graphical evaluation at its heart or a more complex multivariate method that includes the effects of several independent variables, with data that are available from public sources. Utilities or regulators with access to proprietary data may be able to expand upon this research with additional data that is not readily available to an academic researcher. Even without access to special data, the methods developed are valuable tools for evaluating and predicting nuclear power plant reliability performance.
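
    The Crow/AMSAA model referred to above is a power-law non-homogeneous Poisson process with cumulative event count N(t) = lambda * t**beta, and its maximum-likelihood estimates have a simple closed form. The sketch below (Python, with made-up event times) shows the time-truncated case; beta < 1 indicates a decreasing event rate, the signature of improved performance after an up-rate.

        import numpy as np

        def crow_amsaa_mle(event_times, t_end):
            # Time-truncated MLEs for the power-law NHPP N(t) = lam * t**beta
            t = np.asarray(event_times, dtype=float)
            n = t.size
            beta = n / np.sum(np.log(t_end / t))
            lam = n / t_end**beta
            return lam, beta

        lam, beta = crow_amsaa_mle([12.0, 30.0, 41.0, 55.0, 90.0], t_end=100.0)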

  15. An Agent-Based Model for Studying Child Maltreatment and Child Maltreatment Prevention

    NASA Astrophysics Data System (ADS)

    Hu, Xiaolin; Puddy, Richard W.

    This paper presents an agent-based model that simulates the dynamics of child maltreatment and child maltreatment prevention. The developed model follows the principles of complex systems science and explicitly models a community and its families with multi-level factors and interconnections across the social ecology. This makes it possible to experiment with how different factors and prevention strategies can affect the rate of child maltreatment. We present the background of this work, give an overview of the agent-based model, and show some simulation results.

  16. Patterns of breast cancer mortality trends in Europe.

    PubMed

    Amaro, Joana; Severo, Milton; Vilela, Sofia; Fonseca, Sérgio; Fontes, Filipa; La Vecchia, Carlo; Lunet, Nuno

    2013-06-01

    To identify patterns of variation in breast cancer mortality in Europe (1980-2010), using a model-based approach. Mortality data were obtained from the World Health Organization database and mixed models were used to describe the time trends in the age-standardized mortality rates (ASMR). Model-based clustering was used to identify clusters of countries with homogeneous variation in ASMR. Three patterns were identified. Patterns 1 and 2 are characterized by stable or slightly increasing trends in ASMR in the first half of the period analyzed, and a clear decline is observed thereafter; in pattern 1 the median of the ASMR is higher, and the highest rates were reached sooner. Pattern 3 is characterized by a rapid increase in mortality until 1999, declining slowly thereafter. This study provides a general model for the description and interpretation of the variation in breast cancer mortality in Europe, based on three main patterns. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. Modeling the Declining Positivity Rates for Human Immunodeficiency Virus Testing in New York State.

    PubMed

    Martin, Erika G; MacDonald, Roderick H; Smith, Lou C; Gordon, Daniel E; Lu, Tao; O'Connell, Daniel A

    2015-01-01

    New York health care providers have experienced declining percentages of positive human immunodeficiency virus (HIV) tests among patients. Furthermore, observed positivity rates are lower than expected on the basis of the national estimate that one-fifth of HIV-infected residents are unaware of their infection. We used mathematical modeling to evaluate whether this decline could be a result of declining numbers of HIV-infected persons who are unaware of their infection, a quantity that is impossible to measure directly. A stock-and-flow mathematical model of HIV incidence, testing, and diagnosis was developed. The model includes stocks for uninfected, infected and unaware (in 4 disease stages), and diagnosed individuals. Inputs came from published literature and time series (2006-2009) for estimated new infections, newly diagnosed HIV cases, living diagnosed cases, mortality, and diagnosis rates in New York. Primary model outcomes were the percentage of HIV-infected persons unaware of their infection and the percentage of HIV tests with a positive result (HIV positivity rate). In the base case, the estimated percentage of unaware HIV-infected persons declined from 14.2% in 2006 (range, 11.9%-16.5%) to 11.8% in 2010 (range, 9.9%-13.1%). The HIV positivity rate, assuming testing occurred independent of risk, was 0.12% in 2006 (range, 0.11%-0.15%) and 0.11% in 2010 (range, 0.10%-0.13%). The observed HIV positivity rate was more than 4 times the expected positivity rate based on the model. HIV test positivity is a readily available indicator, but it cannot distinguish causes of underlying changes. Findings suggest that the percentage of unaware HIV-infected New Yorkers is lower than the national estimate and that the observed HIV test positivity rate is greater than expected if infected and uninfected individuals tested at the same rate, indicating that testing efforts are appropriately targeting undiagnosed cases.
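
    A stripped-down version of such a stock-and-flow model can be written as a pair of differential equations, as in the following Python sketch; the real model tracks four disease stages, and all numbers below are illustrative placeholders, not New York estimates.

        from scipy.integrate import solve_ivp

        def flows(t, y, incidence, diag_rate, mort_unaware, mort_diag):
            # U: infected and unaware; D: diagnosed and living
            U, D = y
            dU = incidence - diag_rate * U - mort_unaware * U
            dD = diag_rate * U - mort_diag * D
            return [dU, dD]

        sol = solve_ivp(flows, (0.0, 5.0), y0=[18000.0, 110000.0],
                        args=(4000.0, 0.25, 0.02, 0.03))
        U, D = sol.y[:, -1]
        print(f"unaware fraction after 5 yr: {U / (U + D):.3f}")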

  18. Quantitative extraction of the bedrock exposure rate based on unmanned aerial vehicle data and Landsat-8 OLI image in a karst environment

    NASA Astrophysics Data System (ADS)

    Wang, Hongyan; Li, Qiangzi; Du, Xin; Zhao, Longcai

    2017-12-01

    In the karst regions of southwest China, rocky desertification is one of the most serious problems in land degradation. The bedrock exposure rate is an important index to assess the degree of rocky desertification in karst regions. Because of its inherent merits of macro-scale coverage, observation frequency, efficiency, and synthesis, remote sensing is a promising method to monitor and assess karst rocky desertification on a large scale. However, actual measurement of the bedrock exposure rate is difficult, and existing remote-sensing methods cannot directly be exploited to extract the bedrock exposure rate owing to the high complexity and heterogeneity of karst environments. Therefore, using unmanned aerial vehicle (UAV) and Landsat-8 Operational Land Imager (OLI) data for Xingren County, Guizhou Province, a method for quantitative extraction of the bedrock exposure rate based on multi-scale remote-sensing data was developed. Firstly, we used an object-oriented method to carry out accurate classification of the UAV images. From the results of rock extraction, the bedrock exposure rate was calculated at the 30 m grid scale. Part of the calculated samples was used as training data; the other data were used for model validation. Secondly, in each grid the band reflectivity of the Landsat-8 OLI data was extracted and a variety of rock and vegetation indexes (e.g., NDVI and SAVI) were calculated. Finally, a network model was established to extract the bedrock exposure rate. The correlation coefficient of the network model was 0.855, that of the validation model was 0.677, and the root mean square error of the validation model was 0.073. This method is valuable for wide-scale estimation of the bedrock exposure rate in karst environments. Using the quantitative inversion model, a distribution map of the bedrock exposure rate in Xingren County was obtained.

  19. Modeling of copper sorption onto GFH and design of full-scale GFH adsorbers.

    PubMed

    Steiner, Michele; Pronk, Wouter; Boller, Markus A

    2006-03-01

    During rain events, copper washed off copper roofs poses an environmental hazard. In this study, columns filled with granulated ferric hydroxide (GFH) were used to treat copper-containing roof runoff. It was shown that copper could be removed to a high extent. A model was developed to describe this removal process. The model was based on the Two Region Model (TRM), extended with an additional diffusion zone. The extended model was able to describe the copper removal in long-term experiments (up to 125 days) with variable flow rates reflecting realistic runoff events. The four parameters of the model were estimated based on data gained from specific column experiments according to maximum sensitivity for each parameter. After model validation, the parameter set was used for the design of full-scale adsorbers. These full-scale adsorbers show high removal rates over extended periods of time.

  1. Home Energy Scoring Tools (website) and Application Programming Interfaces, APIs (aka HEScore)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mills, Evan; Bourassa, Norm; Rainer, Leo

    2016-04-22

    A web-based residential energy rating tool with APIs, hosted on the LBNL website. It provides customized estimates of residential energy use and energy bills based on building description information provided by the user. Energy use is estimated using engineering models developed at LBNL. Space heating and cooling use is based on the DOE-2.1E building simulation model. Other end uses (water heating, appliances, lighting, and misc. equipment) are based on engineering models developed by LBNL.

  2. Piezoresistivity, mechanisms and model of cement-based materials with CNT/NCB composite fillers

    NASA Astrophysics Data System (ADS)

    Zhang, Liqing; Ding, Siqi; Dong, Sufen; Li, Zhen; Ouyang, Jian; Yu, Xun; Han, Baoguo

    2017-12-01

    The use of conductive cement-based materials as sensors has attracted intense interest over the past decades. In this paper, carbon nanotube (CNT)/nano carbon black (NCB) composite fillers made by electrostatic self-assembly are used to fabricate conductive cement-based materials. The electrical and piezoresistive properties of the fabricated cement-based materials are investigated. The effects of filler content, load amplitude, and loading rate on the piezoresistive response within the elastic regime, as well as the piezoresistive behavior during compressive loading to failure, are explored. Finally, a model describing the piezoresistive behavior of cement-based materials with CNT/NCB composite fillers is established based on effective-conductive-path and tunneling-effect theory. The results demonstrate that filler content and load amplitude have an obvious effect on the piezoresistive response of the composite materials, while loading rate has little influence. During compressive loading to failure, the composites also show a sensitive piezoresistive response. Therefore, the cement-based composites can be used to monitor the health state of structures over their whole service life. The model describes the piezoresistive behavior of the composites during compressive loading to failure well, and the good match between the model and the experimental data indicates that the tunneling effect indeed contributes to the piezoresistive phenomenon.

  3. a New Dynamic Community Model for Social Networks

    NASA Astrophysics Data System (ADS)

    Lu, Zhe-Ming; Wu, Zhen; Guo, Shi-Ze; Chen, Zhe; Song, Guang-Hua

    2014-09-01

    In this paper, based on the phenomenon that individuals join and leave organizations in society, we propose a dynamic community model to construct social networks. Two parameters are adopted in our model: one is the communication rate Pa, which denotes the connection strength within an organization, and the other is the turnover rate Pb, which stands for the frequency of jumping between organizations. Based on simulations, we analyze not only the degree distribution, the clustering coefficient, the average distance and the network diameter, but also the group distribution, which is closely related to the community structure. Moreover, we discover that the networks generated by the proposed model possess the small-world property and can well reproduce the networks of social contacts.
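
    A toy implementation of the two-parameter process is sketched below in Python; everything beyond the stated roles of Pa (in-organization communication) and Pb (turnover between organizations) is our guess at reasonable defaults, not the authors' specification.

        import random

        def simulate(n_agents=200, n_orgs=10, pa=0.1, pb=0.02, steps=500, seed=1):
            rng = random.Random(seed)
            org = [rng.randrange(n_orgs) for _ in range(n_agents)]
            edges = set()
            for _ in range(steps):
                for i in range(n_agents):
                    if rng.random() < pa:   # communicate within the organization
                        peers = [j for j in range(n_agents)
                                 if j != i and org[j] == org[i]]
                        if peers:
                            j = rng.choice(peers)
                            edges.add((min(i, j), max(i, j)))
                    if rng.random() < pb:   # jump to another organization
                        org[i] = rng.randrange(n_orgs)
            return edges

        print(len(simulate()))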

  4. Discrete statistical model of fatigue crack growth in a Ni-base superalloy, capable of life prediction

    NASA Astrophysics Data System (ADS)

    Boyd-Lee, Ashley; King, Julia

    1992-07-01

    A discrete statistical model of fatigue crack growth in the nickel-base superalloy Waspaloy, which is quantitative from the start of the short crack regime to failure, is presented. Instantaneous crack growth rate distributions and persistence-of-arrest distributions are used to compute fatigue lives and worst-case scenarios without extrapolation. The basis of the model is not material specific; it provides an improved method of analyzing crack growth rate data. For Waspaloy, the model shows the importance of good bulk fatigue crack growth resistance in resisting early short fatigue crack growth, and the importance of maximizing crack arrest both by the presence of a proportion of small grains and by maximizing grain boundary corrugation.

  5. Formal implementation of a performance evaluation model for the face recognition system.

    PubMed

    Shin, Yong-Nyuo; Kim, Jason; Lee, Yong-Jun; Shin, Woochang; Choi, Jin-Young

    2008-01-01

    Due to its usability features, practical applications, and lack of intrusiveness, face recognition technology, based on information derived from individuals' facial features, has been attracting considerable attention recently. Reported recognition rates of commercialized face recognition systems cannot be admitted as official recognition rates, as they are based on assumptions that are beneficial to the specific system and face database. Therefore, performance evaluation methods and tools are necessary to objectively measure the accuracy and performance of any face recognition system. In this paper, we propose and formalize a performance evaluation model for biometric recognition systems, implementing an evaluation tool for face recognition systems based on the proposed model. Furthermore, we performed evaluations objectively by providing guidelines for the design and implementation of a performance evaluation system and by formalizing the performance test process.

  6. Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests

    PubMed Central

    Wang, Yueke; Xing, Kefei; Deng, Wei; Zhang, Zelong

    2016-01-01

    A method of evaluating the single-event effect soft-error vulnerability of space instruments before launch has been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze the fault diagnosis and mean time to failure (MTTF) of space instruments. A model for the system functional error rate (SFER) is proposed. In addition, an experimental method and an accelerated radiation testing system for a signal processing platform based on a field programmable gate array (FPGA) are presented. Based on experimental results for different ions (O, Si, Cl, Ti) under the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10⁻³ (error/particle/cm²), while the MTTF is approximately 110.7 h. PMID:27583533

  7. Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests.

    PubMed

    He, Wei; Wang, Yueke; Xing, Kefei; Deng, Wei; Zhang, Zelong

    2016-01-01

    A method of evaluating the single-event effect soft-error vulnerability of space instruments before launch has been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze the fault diagnosis and mean time to failure (MTTF) of space instruments. A model for the system functional error rate (SFER) is proposed. In addition, an experimental method and an accelerated radiation testing system for a signal processing platform based on a field programmable gate array (FPGA) are presented. Based on experimental results for different ions (O, Si, Cl, Ti) under the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10⁻³ (error/particle/cm²), while the MTTF is approximately 110.7 h.

  8. A grid of MHD models for stellar mass loss and spin-down rates of solar analogs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cohen, O.; Drake, J. J.

    2014-03-01

    Stellar winds are believed to be the dominant factor in the spin-down of stars over time. However, stellar winds of solar analogs are poorly constrained due to observational challenges. In this paper, we present a grid of magnetohydrodynamic models to study and quantify the values of stellar mass loss and angular momentum loss rates as a function of the stellar rotation period, magnetic dipole component, and coronal base density. We derive simple scaling laws for the loss rates as a function of these parameters, and constrain the possible mass loss rate of stars with thermally driven winds. Despite the success of our scaling law in matching the results of the model, we find a deviation between the 'solar dipole' case and a real case based on solar observations that overestimates the actual solar mass loss rate by a factor of three. This implies that the model for stellar fields might require further investigation with additional complexity. Mass loss rates in general are largely controlled by the magnetic field strength, with the wind density varying in proportion to the confining magnetic pressure B^2. We also find that the mass loss rates obtained using our grid models drop much faster with increasing rotation period than scaling laws derived using observed stellar activity. For main-sequence solar-like stars, our scaling law for angular momentum loss versus poloidal magnetic field strength retrieves the well-known Skumanich decline of angular velocity with time, Ω* ∝ t^(-1/2), if the large-scale poloidal magnetic field scales with rotation rate as B_p ∝ Ω*^2.

  9. Parameterized data-driven fuzzy model based optimal control of a semi-batch reactor.

    PubMed

    Kamesh, Reddi; Rani, K Yamuna

    2016-09-01

    A parameterized data-driven fuzzy (PDDF) model structure is proposed for semi-batch processes, and its application for optimal control is illustrated. The orthonormally parameterized input trajectories, initial states and process parameters are the inputs to the model, which predicts the output trajectories in terms of Fourier coefficients. Fuzzy rules are formulated based on the signs of a linear data-driven model, while the defuzzification step incorporates a linear regression model to shift the domain from input to output domain. The fuzzy model is employed to formulate an optimal control problem for single rate as well as multi-rate systems. Simulation study on a multivariable semi-batch reactor system reveals that the proposed PDDF modeling approach is capable of capturing the nonlinear and time-varying behavior inherent in the semi-batch system fairly accurately, and the results of operating trajectory optimization using the proposed model are found to be comparable to the results obtained using the exact first principles model, and are also found to be comparable to or better than parameterized data-driven artificial neural network model based optimization results. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  10. Delamination modeling of laminate plate made of sublaminates

    NASA Astrophysics Data System (ADS)

    Kormaníková, Eva; Kotrasová, Kamila

    2017-07-01

    The paper presents the mixed-mode delamination of plates made of sublaminates. For this purpose, an opening load mode of delamination is proposed as the failure model. The failure model is implemented in the ANSYS code to calculate the mixed-mode delamination response as an energy release rate. The analysis is based on interface techniques. Within the interface finite element model, the individual components of the damage parameters are calculated: spring reaction forces, relative displacements, and energy release rates along the delamination front.

  11. A transverse isotropic constitutive model for the aortic valve tissue incorporating rate-dependency and fibre dispersion: Application to biaxial deformation.

    PubMed

    Anssari-Benam, Afshin; Tseng, Yuan-Tsan; Bucchi, Andrea

    2018-05-26

    This paper presents a continuum-based transverse isotropic model incorporating rate-dependency and fibre dispersion, applied to the planar biaxial deformation of aortic valve (AV) specimens under various stretch rates. The rate dependency of the mechanical behaviour of the AV tissue under biaxial deformation, the (pseudo-) invariants of the right Cauchy-Green deformation-rate tensor Ċ associated with fibre dispersion, and a new fibre orientation density function motivated by fibre kinematics are presented for the first time. It is shown that the model captures the experimentally observed deformation of the specimens, and characterises a shear-thinning behaviour associated with the dissipative (viscous) kinematics of the matrix and the fibres. The application of the model for predicting the deformation behaviour of the AV under physiological rates is illustrated and an example of the predicted σ-λ curves is presented. While the development of the model was principally motivated by the AV biomechanics requisites, the comprehensive theoretical approach employed in the study renders the model suitable for application to other fibrous soft tissues that possess similar rate-dependent and structural attributes. Crown Copyright © 2018. Published by Elsevier Ltd. All rights reserved.

  12. Stationarity test with a direct test for heteroskedasticity in exchange rate forecasting models

    NASA Astrophysics Data System (ADS)

    Khin, Aye Aye; Chau, Wong Hong; Seong, Lim Chee; Bin, Raymond Ling Leh; Teng, Kevin Low Lock

    2017-05-01

    Global economic growth has been slowing in recent years, manifested by greater exchange-rate volatility in international commodity markets. This study analyzes some prominent exchange rate forecasting models for Malaysian commodity trading: univariate ARIMA, ARCH and GARCH models, in conjunction with a stationarity test and direct testing of the residuals for heteroskedasticity. All forecasting models utilized monthly data from 1990 to 2015, a total of 312 observations, to forecast both short-term and long-term exchange rates. The forecasting power statistics suggested that the forecasting performance of the ARIMA (1, 1, 1) model is more efficient than that of the ARCH (1) and GARCH (1, 1) models. For the ex-post forecast, the exchange rate increased from RM 3.50 per USD in January 2015 to RM 4.47 per USD in December 2015 based on the baseline data. For the short-term ex-ante forecast, the analysis results indicate a decrease in the exchange rate in June 2016 (RM 4.27 per USD) compared with December 2015. An appropriate exchange rate forecasting method is vital to aid decision-making and planning for sustainable commodity production in the world economy.
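
    Fitting and forecasting with an ARIMA(1, 1, 1) model of the kind found best here takes only a few lines with statsmodels; the series below is a synthetic placeholder, not the actual MYR/USD data.

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.arima.model import ARIMA

        # Synthetic stand-in for 312 monthly exchange-rate observations
        rng = np.random.default_rng(0)
        rate = pd.Series(3.5 + 0.003 * np.arange(312) + rng.normal(0, 0.02, 312),
                         index=pd.date_range("1990-01", periods=312, freq="MS"))

        fit = ARIMA(rate, order=(1, 1, 1)).fit()
        print(fit.forecast(steps=6))   # six-month ex-ante forecast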

  13. Functional response and capture timing in an individual-based model: predation by northern squawfish (Ptychocheilus oregonensis) on juvenile salmonids in the Columbia River

    USGS Publications Warehouse

    Petersen, James H.; DeAngelis, Donald L.

    1992-01-01

    The behavior of individual northern squawfish (Ptychocheilus oregonensis) preying on juvenile salmonids was modeled to address questions about capture rate and the timing of prey captures (random versus contagious). Prey density, predator weight, prey weight, temperature, and diel feeding pattern were first incorporated into predation equations analogous to Holling Type 2 and Type 3 functional response models. Type 2 and Type 3 equations fit field data from the Columbia River equally well, and both models predicted predation rates on five of seven independent dates. Selecting a functional response type may be complicated by variable predation rates, analytical methods, and assumptions of the model equations. Using the Type 2 functional response, random versus contagious timing of prey capture was tested using two related models. In the simpler model, salmon captures were assumed to be controlled by a Poisson renewal process; in the second model, several salmon captures were assumed to occur during brief "feeding bouts", modeled with a compound Poisson process. Salmon captures by individual northern squawfish were clustered through time, rather than random, based on comparison of model simulations and field data. The contagious-feeding result suggests that salmonids may be encountered as patches or schools in the river.
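
    For reference, the Holling Type 2 and Type 3 functional responses mentioned above have the following standard forms (a = attack rate, h = handling time per prey), sketched here in Python; the temperature and weight dependencies of the paper's actual predation equations are omitted.

        def holling_type2(N, a, h):
            # Capture rate saturates hyperbolically with prey density N
            return a * N / (1.0 + a * h * N)

        def holling_type3(N, a, h):
            # Sigmoid response: low capture rates at low prey density
            return a * N**2 / (1.0 + a * h * N**2)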

  14. Development of braided rope seals for hypersonic engine applications: Flow modeling

    NASA Technical Reports Server (NTRS)

    Mutharasan, Rajakkannu; Steinetz, Bruce M.; Tao, Xiaoming; Du, Guang-Wu; Ko, Frank

    1992-01-01

    A new type of engine seal is being developed to meet the needs of advanced hypersonic engines. A seal braided from emerging high-temperature ceramic fibers, with a sheath-core construction, was selected for study based on its low leakage rates. Flexible, low-leakage, high-temperature seals are required to seal the movable engine panels of advanced ramjet-scramjet engines, either preventing potentially dangerous leakage into backside engine cavities or limiting the purge coolant flow rates through the seals. To predict the leakage through these flexible, porous seal structures, new analytical flow models are required. Two such models based on the Kozeny-Carman equations are developed herein and are compared to experimental leakage measurements for simulated pressure and seal gap conditions. The models developed allow prediction of the gas leakage rate as a function of fiber diameter, fiber packing density, gas properties, and pressure drop across the seal. The first model treats the seal as a homogeneous fiber bed. The second model divides the seal into two homogeneous fiber beds identified as the core and the sheath of the seal. Flow resistances of each of the main seal elements are combined to determine the total flow resistance. Comparisons between measured leakage rates and model predictions for seal structures covering a wide range of braid architectures show good agreement. Within the experimental range, the second model provides a prediction within 6 to 13 percent of the flow for many of the cases examined. Areas where future model refinements are required are identified.
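
    A single homogeneous fiber bed of the kind used in the first model can be sketched with a Kozeny-Carman permeability and Darcy's law, as below; the Kozeny constant and all dimensions are generic placeholders, and the paper's second model additionally combines the sheath and core flow resistances.

        def kozeny_carman_permeability(d_fiber, porosity, kozeny_const=5.0):
            # Permeability (m^2) of a homogeneous bed of fibers of diameter d_fiber;
            # the constant ~5 is a typical packed-bed value, used as a placeholder
            return (d_fiber**2 * porosity**3
                    / (36.0 * kozeny_const * (1.0 - porosity)**2))

        def darcy_leakage(k, area, dp, length, mu=1.8e-5):
            # Volumetric leakage rate (m^3/s) across a seal of given flow length,
            # for a gas of dynamic viscosity mu (default: air, Pa*s)
            return k * area * dp / (mu * length)

        k = kozeny_carman_permeability(d_fiber=10e-6, porosity=0.3)
        q = darcy_leakage(k, area=1e-4, dp=50e3, length=0.01)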

  15. Development of a method to rate the primary safety of vehicles using linked New Zealand crash and vehicle licensing data.

    PubMed

    Keall, Michael D; Newstead, Stuart

    2016-01-01

    Vehicle safety rating systems aim firstly to inform consumers about safe vehicle choices and, secondly, to encourage vehicle manufacturers to aspire to safer levels of vehicle performance. Primary rating systems (that measure the ability of a vehicle to assist the driver in avoiding crashes) have not been developed for a variety of reasons, mainly associated with the difficult task of disassociating driver behavior and vehicle exposure characteristics from the estimation of crash involvement risk specific to a given vehicle. The aim of the current study was to explore different approaches to primary safety estimation, identifying which approaches (if any) may be most valid and most practical, given typical data that may be available for producing ratings. Data analyzed consisted of crash data and motor vehicle registration data for the period 2003 to 2012: 21,643,864 observations (representing vehicle-years) and 135,578 crashed vehicles. Various logistic models were tested as a means to estimate primary safety: Conditional models (conditioning on the vehicle owner over all vehicles owned); full models not conditioned on the owner, with all available owner and vehicle data; reduced models with few variables; induced exposure models; and models that synthesised elements from the latter two models. It was found that excluding young drivers (aged 25 and under) from all primary safety estimates attenuated some high risks estimated for make/model combinations favored by young people. The conditional model had clear biases that made it unsuitable. Estimates from a reduced model based just on crash rates per year (but including an owner location variable) produced estimates that were generally similar to the full model, although there was more spread in the estimates. The best replication of the full model estimates was generated by a synthesis of the reduced model and an induced exposure model. This study compared approaches to estimating primary safety that could mimic an analysis based on a very rich data set, using variables that are commonly available when registered fleet data are linked to crash data. This exploratory study has highlighted promising avenues for developing primary safety rating systems for vehicle makes and models.

  16. Prioritizing public- private partnership models for public hospitals of iran based on performance indicators.

    PubMed

    Gholamzadeh Nikjoo, Raana; Jabbari Beyrami, Hossein; Jannati, Ali; Asghari Jaafarabadi, Mohammad

    2012-01-01

    The present study was conducted to scrutinize Public-Private Partnership (PPP) models in public hospitals of different countries based on performance indicators in order to select appropriate models for Iran's hospitals. In this mixed (quantitative-qualitative) study, a systematic review and an expert panel were used to identify the various models of PPP as well as performance indicators. In the second step we prioritized performance indicators and PPP models based on selected performance indicators by the Analytical Hierarchy Process (AHP) technique. The data were analyzed with Excel 2007 and Expert Choice 11 software. In the quality-effectiveness area, indicators like the rate of hospital infections (100%), hospital accidents prevalence rate (73%), pure rate of hospital mortality (63%), and patient satisfaction percentage (53%); in the accessibility-equity area, indicators such as average inpatient waiting time (100%) and average outpatient waiting time (74%); and in the financial-efficiency area, indicators including average length of stay (100%), bed occupation ratio (99%), and specific income to total cost ratio (97%) were chosen as the key performance indicators. In the prioritization of the PPP models, the clinical outsourcing, management, privatization, BOO (build, own, operate) and non-clinical outsourcing models achieved high priority for the various performance indicator areas. This study has provided the most common PPP options in the field of public hospitals and has gathered suitable evidence from experts for choosing the appropriate PPP option for public hospitals. The effect of private sector presence on public hospital performance will differ depending on which PPP options are undertaken.

  17. Prioritizing Public- Private Partnership Models for Public Hospitals of Iran Based on Performance Indicators

    PubMed Central

    Gholamzadeh Nikjoo, Raana; Jabbari Beyrami, Hossein; Jannati, Ali; Asghari Jaafarabadi, Mohammad

    2012-01-01

    Background: The present study was conducted to scrutinize Public-Private Partnership (PPP) models in public hospitals of different countries based on performance indicators in order to select appropriate models for Iran's hospitals. Methods: In this mixed (quantitative-qualitative) study, a systematic review and an expert panel were used to identify the various models of PPP as well as performance indicators. In the second step we prioritized performance indicators and PPP models based on selected performance indicators by the Analytical Hierarchy Process (AHP) technique. The data were analyzed with Excel 2007 and Expert Choice 11 software. Results: In the quality-effectiveness area, indicators like the rate of hospital infections (100%), hospital accidents prevalence rate (73%), pure rate of hospital mortality (63%), and patient satisfaction percentage (53%); in the accessibility-equity area, indicators such as average inpatient waiting time (100%) and average outpatient waiting time (74%); and in the financial-efficiency area, indicators including average length of stay (100%), bed occupation ratio (99%), and specific income to total cost ratio (97%) were chosen as the key performance indicators. In the prioritization of the PPP models, the clinical outsourcing, management, privatization, BOO (build, own, operate) and non-clinical outsourcing models achieved high priority for the various performance indicator areas. Conclusion: This study has provided the most common PPP options in the field of public hospitals and has gathered suitable evidence from experts for choosing the appropriate PPP option for public hospitals. The effect of private sector presence on public hospital performance will differ depending on which PPP options are undertaken. PMID:24688942

  18. Modelling the epidemiology of Escherichia coli ST131 and the impact of interventions on the community and healthcare centres.

    PubMed

    Talaminos, A; López-Cerero, L; Calvillo, J; Pascual, A; Roa, L M; Rodríguez-Baño, J

    2016-07-01

    ST131 Escherichia coli is an emergent clonal group that has achieved successful worldwide spread through a combination of virulence and antimicrobial resistance. Our aim was to develop a mathematical model, based on current knowledge of the epidemiology of ESBL-producing and non-ESBL-producing ST131 E. coli, to provide a framework enabling a better understanding of its spread within the community, in hospitals and long-term care facilities, and of the potential impact of specific interventions on the rates of infection. A model belonging to the SEIS (Susceptible-Exposed-Infected-Susceptible) class of compartmental models, with specific modifications, was developed. Quantification of the model is based on the law of mass preservation, which helps determine the relationships between flows of individuals and the different compartments. Quantification is deterministic or probabilistic depending on subpopulation size. The assumptions of the model are based on several previously conducted epidemiological studies. Based on the assumptions of the model, an intervention capable of sustaining a 25% reduction in person-to-person transmission shows a significant reduction in the rate of infections caused by ST131; the impact is higher for non-ESBL-producing ST131 isolates than for ESBL producers. On the other hand, an isolated intervention reducing exposure to antimicrobial agents has a much more limited impact on the rate of ST131 infection. Our results suggest that interventions achieving a continuous reduction in the transmission of ST131 in households, nursing homes and hospitals offer the best chance of reducing the burden of the infections caused by these isolates.
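
    A minimal SEIS-type compartmental sketch (Python) conveys the structure; the actual model adds community, hospital, and long-term-care strata and ESBL status, and all rate constants below are illustrative.

        from scipy.integrate import solve_ivp

        def seis(t, y, beta, sigma, gamma):
            # S: susceptible, E: exposed/colonized, I: infected; recovery
            # returns individuals to S (no lasting immunity)
            S, E, I = y
            N = S + E + I
            dS = -beta * S * I / N + gamma * I
            dE = beta * S * I / N - sigma * E
            dI = sigma * E - gamma * I
            return [dS, dE, dI]

        # A sustained 25% cut in person-to-person transmission maps to beta -> 0.75 * beta
        sol = solve_ivp(seis, (0, 365), [9900.0, 50.0, 50.0], args=(0.08, 0.1, 0.05))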

  19. USE OF ROUGH SETS AND SPECTRAL DATA FOR BUILDING PREDICTIVE MODELS OF REACTION RATE CONSTANTS

    EPA Science Inventory

    A model for predicting the log of the rate constants for alkaline hydrolysis of organic esters has been developed with the use of gas-phase mid-infrared library spectra and a rule-building software system based on the mathematical theory of rough sets. A diverse set of 41 esters ...

  20. Wildfire risk and housing prices: a case study from Colorado Springs.

    Treesearch

    G.H. Donovan; P.A. Champ; D.T. Butry

    2007-01-01

    Unlike other natural hazards such as floods, hurricanes, and earthquakes, wildfire risk has not previously been examined using a hedonic property value model. In this article, we estimate a hedonic model based on parcel-level wildfire risk ratings from Colorado Springs. We found that providing homeowners with specific information about the wildfire risk rating of their...

  1. Galactic and solar radiation exposure to aircrew during a solar cycle.

    PubMed

    Lewis, B J; Bennett, L G I; Green, A R; McCall, M J; Ellaschuk, B; Butler, A; Pierre, M

    2002-01-01

    An on-going investigation using a tissue-equivalent proportional counter (TEPC) has been carried out to measure the ambient dose equivalent rate of the cosmic radiation exposure of aircrew during a solar cycle. A semi-empirical model has been derived from these data to allow for the interpolation of the dose rate for any global position. The model has been extended to an altitude of up to 32 km with further measurements made on board aircraft and several balloon flights. The effects of changing solar modulation during the solar cycle are characterised by correlating the dose rate data to different solar potential models. Through integration of the dose-rate function over a great circle flight path or between given waypoints, a Predictive Code for Aircrew Radiation Exposure (PCAIRE) has been further developed for estimation of the route dose from galactic cosmic radiation exposure. This estimate is provided in units of ambient dose equivalent as well as effective dose, based on E/H*(10) scaling functions as determined from transport code calculations with LUIN and FLUKA. This experimentally based treatment has also been compared with the CARI-6 and EPCARD codes that are derived solely from theoretical transport calculations. Using TEPC measurements taken aboard the International Space Station, ground based neutron monitoring, GOES satellite data and transport code analysis, an empirical model has been further proposed for estimation of aircrew exposure during solar particle events. This model has been compared to results obtained during recent solar flare events.

  2. [Study on brand traceability of vinegar based on near infrared spectroscopy technology].

    PubMed

    Guan, Xiao; Liu, Jing; Gu, Fang-Qing; Yang, Yong-Jian

    2014-09-01

    In the present paper, 152 vinegar samples of four different brands were chosen as research targets, and their near infrared spectra were collected in diffuse reflection and transmission modes. Furthermore, brand traceability models for edible vinegar were constructed. The effects of the spectrum collection mode and pretreatment methods on the precision of the traceability models were investigated intensively. The models constructed by the PLS1-DA modeling method using the spectrum data of 114 training samples were applied to predict 38 test samples; the R2, RMSEC and RMSEP of the model based on transmission mode data were 0.92, 0.113 and 0.127, respectively, with a recognition rate of 76.32%, and those based on diffuse reflection mode data were 0.97, 0.102 and 0.119, with a recognition rate of 86.84%. The results demonstrated that near infrared spectroscopy combined with PLS1-DA can be used to establish brand traceability models for edible vinegar, and that the diffuse reflection mode is more beneficial for the predictive ability of the model.

  3. Modelling high data rate communication network access protocol

    NASA Technical Reports Server (NTRS)

    Khanna, S.; Foudriat, E. C.; Paterra, Frank; Maly, Kurt J.; Overstreet, C. Michael

    1990-01-01

    Modeling of high data rate communication systems is different from that of low data rate systems. Three simulations were built during the development phase of the Carrier Sensed Multiple Access/Ring Network (CSMA/RN) modeling effort. The first was a model using SIMSCRIPT based upon the determination and processing of each event at each node. The second simulation was developed in C based upon isolating the distinct objects that can be identified as the ring, the message, the node, and the set of critical events. The third model further refined the basic network functionality by creating a single object, the node, which includes the set of critical events that occur at the node. The ring structure is implicit in the node structure. This model was also built in C. Each model is discussed and their features compared. It should be stated that the language used was mainly selected by the model developer because of his past familiarity. Further, the models were not built with the intent to compare structure or language; rather, because of the complexity of the problem and because initial results contained obvious errors, alternative models were built to isolate, determine, and correct programming and modeling errors. The CSMA/RN protocol is discussed in sufficient detail to understand the modeling complexities. Each model is described along with its features and problems. The models are compared, and concluding observations and remarks are presented.

  4. Models for H₃ receptor antagonist activity of sulfonylurea derivatives.

    PubMed

    Khatri, Naveen; Madan, A K

    2014-03-01

    The histamine H₃ receptor has been perceived as an auspicious target for the treatment of various central and peripheral nervous system diseases. In the present study, a wide variety of 60 2D and 3D molecular descriptors (MDs) were utilized for the development of models for the prediction of the antagonist activity of sulfonylurea derivatives at histamine H₃ receptors. Models were developed through decision tree (DT), random forest (RF) and moving average analysis (MAA). Dragon software version 6.0.28 was employed to calculate the diverse MDs of each analogue in the data set. The DT classified and correctly predicted the input data with an impressive non-error rate of 94% in the training set and 82.5% during cross validation. RF correctly classified the analogues into active and inactive with a non-error rate of 79.3%. The MAA-based models predicted the antagonist histamine H₃ receptor activity with a non-error rate of up to 90%. Active ranges of the proposed MAA-based models not only exhibited high potency but also showed improved safety, as indicated by relatively high values of the selectivity index. The statistical significance of the models was assessed through sensitivity, specificity, non-error rate, Matthew's correlation coefficient and intercorrelation analysis. The proposed models offer vast potential for providing lead structures for the development of potent but safe H₃ receptor antagonist sulfonylurea derivatives. Copyright © 2013 Elsevier Inc. All rights reserved.

  5. A multi-species reactive transport model to estimate biogeochemical rates based on single-well push-pull test data

    NASA Astrophysics Data System (ADS)

    Phanikumar, Mantha S.; McGuire, Jennifer T.

    2010-08-01

    Push-pull tests are a popular technique to investigate various aquifer properties and microbial reaction kinetics in situ. Most previous studies have interpreted push-pull test data using approximate analytical solutions to estimate (generally first-order) reaction rate coefficients. Though useful, these analytical solutions may not be able to describe important complexities in rate data. This paper reports the development of a multi-species, radial-coordinate numerical model (PPTEST) that includes the effects of sorption, reaction lag time and arbitrary reaction-order kinetics to estimate rates in the presence of mixing interfaces such as those created between injected "push" water and native aquifer water. The model can describe an arbitrary number of species and user-defined reaction rate expressions, including Monod/Michaelis-Menten kinetics. The FORTRAN code uses a finite-difference numerical scheme based on the advection-dispersion-reaction equation and was developed to describe the radial flow and transport during a push-pull test. The accuracy of the numerical solutions was assessed by comparing numerical results with analytical solutions and field data available in the literature. The model described the observed breakthrough data for tracers (chloride and iodide-131) and reactive components (sulfate and strontium-85) well and was found to be useful for testing hypotheses related to the complex set of processes operating near mixing interfaces.
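
    For reference, a common single-species form of the radial advection-dispersion-reaction equation that such models solve is

$$R\,\frac{\partial C}{\partial t} = \frac{1}{r}\frac{\partial}{\partial r}\!\left(r\,\alpha_L\,|v(r)|\,\frac{\partial C}{\partial r}\right) - v(r)\,\frac{\partial C}{\partial r} + \sum_k r_k(C), \qquad v(r)=\frac{Q}{2\pi r b\,\theta},$$

    where R is a retardation factor (sorption), v(r) the radial pore-water velocity for pumping rate Q, screen length b and porosity θ, α_L the longitudinal dispersivity, and r_k the user-defined reaction rates (e.g., Monod kinetics). This is an illustrative textbook form, not necessarily the exact formulation coded in PPTEST, which also carries lag-time terms and multiple coupled species.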

  6. A global reference for caesarean section rates (C-Model): a multicountry cross-sectional study.

    PubMed

    Souza, J P; Betran, A P; Dumont, A; de Mucio, B; Gibbs Pickens, C M; Deneux-Tharaux, C; Ortiz-Panozo, E; Sullivan, E; Ota, E; Togoobaatar, G; Carroli, G; Knight, H; Zhang, J; Cecatti, J G; Vogel, J P; Jayaratne, K; Leal, M C; Gissler, M; Morisaki, N; Lack, N; Oladapo, O T; Tunçalp, Ö; Lumbiganon, P; Mori, R; Quintana, S; Costa Passos, A D; Marcolin, A C; Zongo, A; Blondel, B; Hernández, B; Hogue, C J; Prunet, C; Landman, C; Ochir, C; Cuesta, C; Pileggi-Castro, C; Walker, D; Alves, D; Abalos, E; Moises, Ecd; Vieira, E M; Duarte, G; Perdona, G; Gurol-Urganci, I; Takahiko, K; Moscovici, L; Campodonico, L; Oliveira-Ciabati, L; Laopaiboon, M; Danansuriya, M; Nakamura-Pereira, M; Costa, M L; Torloni, M R; Kramer, M R; Borges, P; Olkhanud, P B; Pérez-Cuevas, R; Agampodi, S B; Mittal, S; Serruya, S; Bataglia, V; Li, Z; Temmerman, M; Gülmezoglu, A M

    2016-02-01

    To generate a global reference for caesarean section (CS) rates at health facilities. Cross-sectional study. Health facilities from 43 countries. Thirty-eight thousand three hundred and twenty-four women giving birth from 22 countries for model building and 10,045,875 women giving birth from 43 countries for model testing. We hypothesised that mathematical models could determine the relationship between clinical-obstetric characteristics and CS. These models generated probabilities of CS that could be compared with the observed CS rates. We devised a three-step approach to generate the global benchmark of CS rates at health facilities: creation of a multi-country reference population, building mathematical models, and testing these models. Area under the ROC curves, diagnostic odds ratio, expected CS rate, observed CS rate. According to the different versions of the model, areas under the ROC curves suggested a good discriminatory capacity of C-Model, with summary estimates ranging from 0.832 to 0.844. The C-Model was able to generate expected CS rates adjusted for the case-mix of the obstetric population. We have also prepared an e-calculator to facilitate use of C-Model (www.who.int/reproductivehealth/publications/maternal_perinatal_health/c-model/en/). This article describes the development of a global reference for CS rates. Based on maternal characteristics, this tool was able to generate an individualised expected CS rate for health facilities or groups of health facilities. With C-Model, obstetric teams, health system managers, health facilities, health insurance companies, and governments can produce a customised reference CS rate for assessing use (and overuse) of CS. The C-Model provides a customised benchmark for caesarean section rates in health facilities and systems. © 2015 World Health Organization; licensed by John Wiley & Sons Ltd on behalf of Royal College of Obstetricians and Gynaecologists.

  7. Development of interactive graphic user interfaces for modeling reaction-based biogeochemical processes in batch systems with BIOGEOCHEM

    NASA Astrophysics Data System (ADS)

    Chang, C.; Li, M.; Yeh, G.

    2010-12-01

    The BIOGEOCHEM numerical model (Yeh and Fang, 2002; Fang et al., 2003) was developed in FORTRAN for simulating reaction-based geochemical and biochemical processes with mixed equilibrium and kinetic reactions in batch systems. A complete suite of reactions, including aqueous complexation, adsorption/desorption, ion exchange, redox, precipitation/dissolution, acid-base reactions, and microbially mediated reactions, is embodied in this unique modeling tool. Any reaction can be treated as fast/equilibrium or slow/kinetic. An equilibrium reaction is modeled with an implicit finite rate governed by a mass action equilibrium equation or by a user-specified algebraic equation. A kinetic reaction is modeled with an explicit finite rate with an elementary rate law, microbially mediated enzymatic kinetics, or a user-specified rate equation. None of the existing models has encompassed this wide array of scopes. To ease the input/output learning curve for the unique features of BIOGEOCHEM, an interactive graphical user interface was developed with Microsoft Visual Studio and .Net tools. Several robust, user-friendly features, such as pop-up help windows, typo warning messages, and on-screen input hints, were implemented. All input data can be viewed in real time and are automatically formatted to conform to the input file format of BIOGEOCHEM. A post-processor for graphic visualization of simulated results was also embedded for immediate demonstrations. By following the data input windows step by step, error-free BIOGEOCHEM input files can be created even by users with little prior experience of FORTRAN. With this user-friendly interface, the time and effort needed to conduct simulations with BIOGEOCHEM can be greatly reduced.

  8. Predicting Nitrate Transport under Future Climate Scenarios beneath the Nebraska Management Systems Evaluation Area (MSEA) site

    NASA Astrophysics Data System (ADS)

    Li, Y.; Akbariyeh, S.; Gomez Peña, C. A.; Bartlet-Hunt, S.

    2017-12-01

    Understanding the impacts of future climate change on soil hydrological processes and solute transport is crucial to developing appropriate strategies to minimize the adverse impacts of agricultural activities on groundwater quality. The goal of this work is to evaluate the direct effects of climate change on the fate and transport of nitrate beneath a center-pivot irrigated corn field at the Nebraska Management Systems Evaluation Area (MSEA) site. Future groundwater recharge rate and actual evapotranspiration rate were predicted based on an inverse modeling approach using climate data generated by the Weather Research and Forecasting (WRF) model under the RCP 8.5 scenario, downscaled from the global CCSM4 model to a resolution of 24 by 24 km. A groundwater flow model was first calibrated against historical groundwater table measurements and was then applied to predict the future groundwater table for the period 2057-2060. Finally, the predicted future groundwater recharge rate, actual evapotranspiration rate, and groundwater level, together with future precipitation data from WRF, were used in a three-dimensional (3D) model, validated against a rich historical data set collected from 1993-1996, to predict nitrate concentrations in soil and groundwater from the year 2057 to 2060. Future groundwater recharge was found to be decreasing in the study area compared to average groundwater recharge data from the literature. Correspondingly, groundwater elevation was predicted to decrease (1 to 2 ft) over the simulation period. Higher predicted transpiration from the climate model resulted in lower infiltration and reduced nitrate concentrations in the subsurface within the root zone.

  9. A testing-coverage software reliability model considering fault removal efficiency and error generation

    PubMed Central

    Li, Qiuying; Pham, Hoang

    2017-01-01

    In this paper, we propose a software reliability model that considers not only error generation but also fault removal efficiency combined with testing coverage information, based on a nonhomogeneous Poisson process (NHPP). During the past four decades, many software reliability growth models (SRGMs) based on NHPP have been proposed to estimate software reliability measures, most of which share the following assumptions: 1) it is a common phenomenon that the fault detection rate changes throughout the testing phase; 2) as a result of imperfect debugging, fault removal is accompanied by a fault re-introduction rate. However, few SRGMs in the literature differentiate between fault detection and fault removal, i.e. they seldom consider imperfect fault removal efficiency. In a practical software development process, fault removal efficiency cannot always be perfect: detected failures might not be removed completely, the original faults might still exist, and new faults might be introduced meanwhile, which is referred to as the imperfect debugging phenomenon. In this study, a model incorporating the fault introduction rate, fault removal efficiency and testing coverage into software reliability evaluation is developed, using testing coverage to express the fault detection rate and fault removal efficiency to account for fault repair. We compare the performance of the proposed model with several existing NHPP SRGMs on three sets of real failure data using five criteria. The results show that the model gives better fitting and predictive performance. PMID:28750091
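
    One plausible way to write the structure this record describes, with testing coverage c(t) driving detection, removal efficiency p ≤ 1 and error generation α, is

$$\frac{dm_r(t)}{dt} = p\,\frac{c'(t)}{1-c(t)}\,\bigl[a(t)-m_r(t)\bigr],\qquad a(t)=a_0+\alpha\,m_r(t),$$

    where m_r(t) is the expected number of faults removed by time t and a(t) the total fault content. The functional forms used in the paper may differ; this is an illustrative skeleton of a coverage-based NHPP with imperfect debugging, not the authors' exact equations.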

  10. Modelling past, present and future peatland carbon accumulation across the pan-Arctic region

    NASA Astrophysics Data System (ADS)

    Chaudhary, Nitin; Miller, Paul A.; Smith, Benjamin

    2017-09-01

    Most northern peatlands developed during the Holocene, sequestering large amounts of carbon in terrestrial ecosystems. However, recent syntheses have highlighted the gaps in our understanding of peatland carbon accumulation. Assessments of long-term carbon accumulation rates and of possible warming-driven changes in these rates can therefore benefit from process-based modelling studies. We employed an individual-based dynamic global ecosystem model with dynamic peatland and permafrost functionalities and patch-based vegetation dynamics to quantify long-term carbon accumulation rates and to assess the effects of historical and projected climate change on peatland carbon balances across the pan-Arctic region. Our results are broadly consistent with published regional and global carbon accumulation estimates. A majority of modelled peatland sites in Scandinavia, Europe, Russia and central and eastern Canada change from carbon sinks through the Holocene to potential carbon sources in the coming century. In contrast, the carbon sink capacity of modelled sites in Siberia, far eastern Russia, Alaska and western and northern Canada was predicted to increase in the coming century. The greatest changes were evident in eastern Siberia, north-western Canada and Alaska, where peat production, hampered in the past by permafrost and by low productivity due to the cold climate of these regions, was simulated to increase greatly due to warming, a wetter climate and higher CO2 levels by the year 2100. In contrast, our model predicts that sites that are expected to experience reduced precipitation rates and are currently permafrost free will lose more carbon in the future.

  11. Development and external validation of a risk-prediction model to predict 5-year overall survival in advanced larynx cancer.

    PubMed

    Petersen, Japke F; Stuiver, Martijn M; Timmermans, Adriana J; Chen, Amy; Zhang, Hongzhen; O'Neill, James P; Deady, Sandra; Vander Poorten, Vincent; Meulemans, Jeroen; Wennerberg, Johan; Skroder, Carl; Day, Andrew T; Koch, Wayne; van den Brekel, Michiel W M

    2018-05-01

    TNM-classification inadequately estimates patient-specific overall survival (OS). We aimed to improve this by developing a risk-prediction model for patients with advanced larynx cancer. Cohort study. We developed a risk prediction model to estimate the 5-year OS rate based on a cohort of 3,442 patients with T3T4N0N+M0 larynx cancer. The model was internally validated using bootstrapping samples and externally validated on patient data from five external centers (n = 770). The main outcome was performance of the model as tested by discrimination, calibration, and the ability to distinguish risk groups based on tertiles from the derivation dataset. The model performance was compared to a model based on T and N classification only. We included age, gender, T and N classification, and subsite as prognostic variables in the standard model. After external validation, the standard model had a significantly better fit than a model based on T and N classification alone (C statistic, 0.59 vs. 0.55, P < .001). The model was able to distinguish well among three risk groups based on tertiles of the risk score. Adding treatment modality to the model did not decrease the predictive power. As a post hoc analysis, we tested the added value of comorbidity as scored by the American Society of Anesthesiologists score in a subsample, which increased the C statistic to 0.68. A risk prediction model for patients with advanced larynx cancer, consisting of readily available clinical variables, gives more accurate estimates of the 5-year survival rate than a model based on T and N classification alone. 2c. Laryngoscope, 128:1140-1145, 2018. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.

  12. Schistosome Materials for Vaccine Development.

    DTIC Science & Technology

    1981-09-01

    snail/day; 100 or more adult worms/mouse; snail infection rates of 90% or more; death rates of infected snails 10% or less biweekly. Exposure...Recorded cercarial output, miracidial infectivity and snail death rates from 1976 to present as baseline information for our snail-schistosome model

  13. Global Earthquake Activity Rate models based on version 2 of the Global Strain Rate Map

    NASA Astrophysics Data System (ADS)

    Bird, P.; Kreemer, C.; Kagan, Y. Y.; Jackson, D. D.

    2013-12-01

    Global Earthquake Activity Rate (GEAR) models have usually been based on either relative tectonic motion (fault slip rates and/or distributed strain rates), or on smoothing of seismic catalogs. However, a hybrid approach appears to perform better than either parent, at least in some retrospective tests. First, we construct a Tectonic ('T') forecast of shallow (≤ 70 km) seismicity based on global plate-boundary strain rates from version 2 of the Global Strain Rate Map. Our approach is the SHIFT (Seismic Hazard Inferred From Tectonics) method described by Bird et al. [2010, SRL], in which the character of the strain rate tensor (thrusting and/or strike-slip and/or normal) is used to select the most comparable type of plate boundary for calibration of the coupled seismogenic lithosphere thickness and corner magnitude. One difference is that activity of offshore plate boundaries is spatially smoothed using empirical half-widths [Bird & Kagan, 2004, BSSA] before conversion to seismicity. Another is that the velocity-dependence of coupling in subduction and continental-convergent boundaries [Bird et al., 2009, BSSA] is incorporated. Another forecast component is the smoothed-seismicity ('S') forecast model of [Kagan & Jackson, 1994, JGR; Kagan & Jackson, 2010, GJI], which was based on optimized smoothing of the shallow part of the GCMT catalog, years 1977-2004. Both forecasts were prepared for threshold magnitude 5.767. Then, we create hybrid forecasts by one of 3 methods: (a) taking the greater of S or T; (b) simple weighted-average of S and T; or (c) log of the forecast rate is a weighted average of the logs of S and T. In methods (b) and (c) there is one free parameter, which is the fractional contribution from S. All hybrid forecasts are normalized to the same global rate. Pseudo-prospective tests for 2005-2012 (using versions of S and T calibrated on years 1977-2004) show that many hybrid models outperform both parents (S and T), and that the optimal weight on S is in the neighborhood of 5/8. This is true whether forecast performance is scored by Kagan's [2009, GJI] I1 information score, or by the S-test of Zechar & Jordan [2010, BSSA]. These hybrids also score well (0.97) in the ASS-test of Zechar & Jordan [2008, GJI] with respect to prior relative intensity.
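
    The three hybridization rules are simple enough to state in a few lines of code. The sketch below combines gridded S and T rate maps under each rule and renormalizes to a common global rate, as described above; the array contents and grid shape are placeholders, not the actual forecasts.

```python
# Hybrid combination of smoothed-seismicity (S) and tectonic (T) forecasts.
import numpy as np

def hybrid(S, T, method="log", w=0.625):
    """Combine two gridded rate forecasts; w is the fractional weight on S
    (the pseudo-prospective tests above favour w near 5/8)."""
    if method == "max":          # (a) cellwise greater of S or T
        H = np.maximum(S, T)
    elif method == "avg":        # (b) simple weighted average
        H = w * S + (1.0 - w) * T
    else:                        # (c) weighted average of the logs
        H = np.exp(w * np.log(S) + (1.0 - w) * np.log(T))
    return H * (S.sum() / H.sum())  # normalize all hybrids to one global rate

# Toy positive rate grids standing in for the real S and T models:
rng = np.random.default_rng(1)
S = rng.gamma(2.0, 1e-4, size=(90, 180))
T = rng.gamma(2.0, 1e-4, size=(90, 180))
assert np.isclose(hybrid(S, T, "log").sum(), S.sum())  # equal global totals
```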

  14. Continuum Fatigue Damage Modeling for Use in Life Extending Control

    NASA Technical Reports Server (NTRS)

    Lorenzo, Carl F.

    1994-01-01

    This paper develops a simplified continuum (continuous with respect to time, stress, etc.) fatigue damage model for use in Life Extending Control (LEC) studies. The work is based on zero-mean-stress local strain cyclic damage modeling. New nonlinear explicit equation forms of cyclic damage in terms of stress amplitude are derived to facilitate the continuum modeling. Stress-based continuum models are derived, and extensions to plastic strain-strain rate models are also presented. Application of these models to LEC problems is considered. Progress toward a nonzero-mean-stress continuum model is presented, and new nonlinear explicit equation forms in terms of stress amplitude are derived for this case as well.

  15. DEVELOPMENT OF A PHYSIOLOGICALLY BASED PHARMACOKINETIC MODEL FOR DELTAMETHRIN IN ADULT AND DEVELOPING SPRAGUE-DAWLEY RATS

    EPA Science Inventory

    This work describes the development of a physiologically based pharmacokinetic (PBPK) model of deltamethrin, a type II pyrethroid, in the developing male Sprague-Dawley rat. Generalized Michaelis-Menten equations were used to calculate metabolic rate constants and organ weights ...

  16. A Case-Based Learning Model in Orthodontics.

    ERIC Educational Resources Information Center

    Engel, Francoise E.; Hendricson, William D.

    1994-01-01

    A case-based, student-centered instructional model designed to mimic orthodontic problem solving and decision making in dental general practice is described. Small groups of students analyze case data, then record and discuss their diagnoses and treatments. Students and instructors rated the seminars positively, and students reported improved…

  17. Mesoscopic modeling and parameter estimation of a lithium-ion battery based on LiFePO4/graphite

    NASA Astrophysics Data System (ADS)

    Jokar, Ali; Désilets, Martin; Lacroix, Marcel; Zaghib, Karim

    2018-03-01

    A novel numerical model for simulating the behavior of lithium-ion batteries based on LiFePO4 (LFP)/graphite is presented. The model is based on the modified Single Particle Model (SPM) coupled to a mesoscopic approach for the LFP electrode. The model comprises one representative spherical particle as the graphite electrode, and N LFP units as the positive electrode. All the SPM equations are retained to model the negative electrode performance. The mesoscopic model rests on non-equilibrium thermodynamic conditions and uses a non-monotonic open circuit potential for each unit. A parameter estimation study is also carried out to identify all the parameters needed for the model. The unknown parameters are the solid diffusion coefficient of the negative electrode (Ds,n), the reaction-rate constant of the negative electrode (Kn), the negative and positive electrode porosities (εn & εp), the initial state of charge of the negative electrode (SOCn,0), the initial partial composition of the LFP units (yk,0), the minimum and maximum resistance of the LFP units (Rmin & Rmax), and the solution resistance (Rcell). The results show that the mesoscopic model can successfully simulate the electrochemical behavior of lithium-ion batteries at low and high charge/discharge rates. The model also describes adequately the lithiation/delithiation of the LFP particles; however, it is computationally expensive compared to macro-based models.

  18. Simulation and analysis of traffic flow based on cellular automaton

    NASA Astrophysics Data System (ADS)

    Ren, Xianping; Liu, Xia

    2018-03-01

    In this paper, single-lane and two-lane traffic models are established based on cellular automata. Different values of the vehicle arrival rate at the entrance and the vehicle departure rate at the exit are set to analyze their effects on density, average speed and traffic flow. If the road exit is unblocked, vehicles can pass through the road smoothly regardless of the arrival rate at the entrance. If vehicles enter the road continuously, the traffic condition varies with the departure rate at the exit. To avoid traffic jams, a reasonable vehicle departure rate should be adopted.
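
    A minimal open-boundary cellular-automaton sketch in the spirit of the single-lane model is given below; injection and removal probabilities stand in for the arrival and departure rates. The update rules follow the standard Nagel-Schreckenberg scheme (accelerate, brake to the gap, random slowdown, move), applied sequentially from the exit backwards, which may differ in detail from the authors' model.

```python
# Single-lane traffic CA with open boundaries (illustrative sketch).
import numpy as np

L, VMAX, P_SLOW = 100, 5, 0.3  # road cells, speed limit, random-slowdown prob.
P_IN, P_OUT = 0.5, 0.8         # arrival prob. at entrance / departure prob. at exit
rng = np.random.default_rng(0)
cars = {}                      # cell index -> vehicle speed

for _ in range(1000):
    updated, ahead = {}, L + VMAX          # 'ahead' = nearest car downstream
    for pos in sorted(cars, reverse=True): # sweep from exit to entrance
        v = min(cars[pos] + 1, VMAX)       # accelerate toward the speed limit
        v = min(v, ahead - pos - 1)        # brake to avoid the car ahead
        if v > 0 and rng.random() < P_SLOW:
            v -= 1                         # random slowdown
        new_pos = pos + v
        if new_pos >= L and rng.random() < P_OUT:
            continue                       # vehicle departs at the exit
        new_pos = min(new_pos, L - 1)      # otherwise queue at the last cell
        updated[new_pos] = v if new_pos < L - 1 else 0
        ahead = new_pos
    if 0 not in updated and rng.random() < P_IN:
        updated[0] = 0                     # vehicle arrives at the entrance
    cars = updated

print("density:", len(cars) / L)
```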

  19. Human demography and reserve size predict wildlife extinction in West Africa.

    PubMed Central

    Brashares, J S; Arcese, P; Sam, M K

    2001-01-01

    Species-area models have become the primary tool used to predict baseline extinction rates for species in isolated habitats, and have influenced conservation and land-use planning worldwide. In particular, these models have been used to predict extinction rates following the loss or fragmentation of natural habitats in the absence of direct human influence on species persistence. Thus, where direct human influences, such as hunting, put added pressure on species in remnant habitat patches, we should expect to observe extinction rates higher than those predicted by simple species-area models. Here, we show that extinction rates for 41 species of large mammals in six nature reserves in West Africa are 14-307 times higher than those predicted by models based on reserve size alone. Human population and reserve size accounted for 98% of the observed variation in extinction rates between reserves. Extinction occurred at higher rates than predicted by species-area models for carnivores, primates and ungulates, and at the highest rates overall near reserve borders. Our results indicate that, where the harvest of wildlife is common, conservation plans should focus on increasing the size of reserves and reducing the rate of hunting. PMID:11747566
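
    For context, the baseline such models supply comes from the species-area relationship, commonly written

$$S = cA^{z} \quad\Longrightarrow\quad \frac{S_{\mathrm{reserve}}}{S_{\mathrm{original}}} = \left(\frac{A_{\mathrm{reserve}}}{A_{\mathrm{original}}}\right)^{z},$$

    so the predicted fraction of species lost after habitat contraction is 1 − (A_r/A_o)^z, with the exponent z typically near 0.15-0.35 for habitat islands (textbook values, not the paper's fitted ones). Extinction rates 14-307 times this baseline therefore indicate pressures, such as hunting, well beyond the effect of area loss alone.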

  1. Thermal history regulates methylbutenol basal emission rate in Pinus ponderosa.

    PubMed

    Gray, Dennis W; Goldstein, Allen H; Lerdau, Manuel T

    2006-07-01

    Methylbutenol (MBO) is a 5-carbon alcohol that is emitted by many pines in western North America, which may have important impacts on the tropospheric chemistry of this region. In this study, we document seasonal changes in basal MBO emission rates and test several models predicting these changes based on thermal history. These models represent extensions of the ISO G93 model that add a correction factor C(basal), allowing MBO basal emission rates to change as a function of thermal history. These models also allow the calculation of a new emission parameter E(standard30), which represents the inherent capacity of a plant to produce MBO, independent of current or past environmental conditions. Most single-component models exhibited large departures in early and late season, and predicted day-to-day changes in basal emission rate with temporal offsets of up to 3 d relative to measured basal emission rates. Adding a second variable describing thermal history at a longer time scale improved early and late season model performance while retaining the day-to-day performance of the parent single-component model. Out of the models tested, the T(amb),T(max7) model exhibited the best combination of day-to-day and seasonal predictions of basal MBO emission rates.
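
    In the G93 framework, the measured emission rate is a basal rate modulated by instantaneous light and temperature; the extension described here can be sketched as

$$E = E_{\mathrm{standard30}}\; C_{\mathrm{basal}}(T_{\mathrm{amb}}, T_{\mathrm{max7}})\; C_L\, C_T,$$

    where C_L and C_T are the standard G93 light and temperature response functions and C_basal rescales the basal emission rate from thermal history (here, ambient temperature and the 7-day maximum temperature). The exact functional form of C_basal differs among the candidate models tested, so this equation is an illustrative skeleton rather than the paper's final parameterization.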

  2. Bayesian semi-parametric analysis of Poisson change-point regression models: application to policy making in Cali, Colombia.

    PubMed

    Park, Taeyoung; Krafty, Robert T; Sánchez, Alvaro I

    2012-07-27

    A Poisson regression model with an offset assumes a constant baseline rate after accounting for measured covariates, which may lead to biased estimates of coefficients in an inhomogeneous Poisson process. To correctly estimate the effect of time-dependent covariates, we propose a Poisson change-point regression model with an offset that allows a time-varying baseline rate. When the nonconstant pattern of a log baseline rate is modeled with a nonparametric step function, the resulting semi-parametric model involves a model component of varying dimension and thus requires a sophisticated varying-dimensional inference to obtain correct estimates of model parameters of fixed dimension. To fit the proposed varying-dimensional model, we devise a state-of-the-art MCMC-type algorithm based on partial collapse. The proposed model and methods are used to investigate an association between daily homicide rates in Cali, Colombia and policies that restrict the hours during which the legal sale of alcoholic beverages is permitted. While simultaneously identifying the latent changes in the baseline homicide rate which correspond to the incidence of sociopolitical events, we explore the effect of policies governing the sale of alcohol on homicide rates and seek a policy that balances the economic and cultural dependencies on alcohol sales to the health of the public.
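
    In notation chosen here for illustration (not necessarily the authors'), the model has the form

$$y_t \sim \mathrm{Poisson}(\mu_t),\qquad \log \mu_t = \log o_t + \sum_{k=1}^{K+1} \alpha_k\,\mathbf{1}\{\tau_{k-1}\le t<\tau_k\} + x_t^{\top}\beta,$$

    with offset o_t, covariate effects β, and a step-function log baseline rate with heights α_k and change points τ_1 < … < τ_K. Because the number of change points K is itself unknown, the parameter space has varying dimension, which is what motivates the partially collapsed MCMC sampler described above.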

  3. O₂migration rates in [NiFe] hydrogenases. A joint approach combining free-energy calculations and kinetic modeling.

    PubMed

    Topin, Jérémie; Diharce, Julien; Fiorucci, Sébastien; Antonczak, Serge; Golebiowski, Jérôme

    2014-01-23

    Hydrogenases are promising candidates for the catalytic production of green energy by biological means. The major impediment to such production is rooted in their inhibition under aerobic conditions. In this work, we model dioxygen migration rates in mutants of a hydrogenase of Desulfovibrio fructosovorans. The approach relies on the calculation of the whole potential of mean force for O2 migration within the wild-type as well as the V74M, V74F, and V74Q mutant channels. The three free-energy barriers along the entire migration pathway are converted into chemical rates through modeling based on Transition State Theory. The use of such a model recovers the trend of O2 migration rates among the series.
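
    The standard Transition State Theory conversion from a free-energy barrier to a rate is the Eyring relation

$$k = \kappa\,\frac{k_B T}{h}\,\exp\!\left(-\frac{\Delta G^{\ddagger}}{k_B T}\right),$$

    where ΔG‡ is the barrier height from the potential of mean force, k_B and h are the Boltzmann and Planck constants, and κ ≤ 1 is a transmission coefficient. The paper's exact prefactor treatment may differ; this is the generic TST form on which such rate modeling rests.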

  4. A population model for a long-lived, resprouting chaparral shrub: Adenostoma fasciculatum

    USGS Publications Warehouse

    Stohlgren, Thomas J.; Rundel, Philip W.

    1986-01-01

    Extensive stands of Adenostoma fasciculatum H.&A. (chamise) in the chaparral of California are periodically rejuvenated by fire. A population model based on size-specific demographic characteristics (thinning and fire-caused mortality) was developed to generate probable age distributions within size classes and survivorship curves for typical stands. The model was modified to assess the long term effects of different mortality rates on age distributions. Under observed mean mortality rates (28.7%), model output suggests some shrubs can survive more than 23 fires. A 10% increase in mortality rate by size class slightly shortened the survivorship curve, while a 10% decrease in mortality rate by size class greatly elongated the curve. This approach may be applicable to other long-lived plant species with complex life histories.

  5. Improved model for the angular dependence of excimer laser ablation rates in polymer materials

    NASA Astrophysics Data System (ADS)

    Pedder, J. E. A.; Holmes, A. S.; Dyer, P. E.

    2009-10-01

    Measurements of the angle-dependent ablation rates of polymers that have applications in microdevice fabrication are reported. A simple model based on Beer's law, including plume absorption, is shown to give good agreement with the experimental findings for polycarbonate and SU8, ablated using the 193 and 248 nm excimer lasers, respectively. The modeling forms a useful tool for designing masks needed to fabricate complex surface relief by ablation.
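
    A simple Beer's-law etch model of the kind the abstract describes gives, for fluence F at angle of incidence θ, an ablation depth per pulse of roughly

$$d(\theta) = \frac{1}{\alpha_{\mathrm{eff}}}\,\ln\!\left(\frac{F\cos\theta}{F_{\mathrm{th}}}\right), \qquad F\cos\theta > F_{\mathrm{th}},$$

    where F_th is the ablation threshold and α_eff an effective absorption coefficient that can be augmented to include plume absorption. This is the textbook form, sketched here under the assumption of a sharp threshold; the paper's plume-absorption correction modifies the effective coefficient.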

  6. Geohydrology of the French Creek basin and simulated effects of drought and ground-water withdrawals, Chester County, Pennsylvania

    USGS Publications Warehouse

    Sloto, Ronald A.

    2004-01-01

    This report describes the results of a study by the U.S. Geological Survey, in cooperation with the Delaware River Basin Commission, to develop a regional ground-water-flow model of the French Creek Basin in Chester County, Pa. The model was used to assist water-resource managers by illustrating the interconnection between ground-water and surface-water systems. The 70.7-mi2 (square mile) French Creek Basin is in the Piedmont Physiographic Province and is underlain by crystalline and sedimentary fractured-rock aquifers. Annual water budgets were calculated for 1969-2001 for the French Creek Basin upstream of streamflow measurement station French Creek near Phoenixville (01472157). Average annual precipitation was 46.28 in. (inches), average annual streamflow was 20.29 in., average annual base flow determined by hydrograph separation was 12.42 in., and estimated average annual ET (evapotranspiration) was 26.10 in. Estimated average annual recharge was 14.32 in. and is equal to 31 percent of the average annual precipitation. Base flow made up an average of 61 percent of streamflow. Ground-water flow in the French Creek Basin was simulated using the finite-difference MODFLOW-96 computer program. The model structure is based on a simplified two-dimensional conceptualization of the ground-water-flow system. The modeled area was extended outside the French Creek Basin to natural hydrologic boundaries; the modeled area includes 40 mi2 of adjacent areas outside the basin. The hydraulic conductivity for each geologic unit was calculated from reported specific-capacity data determined from aquifer tests and was adjusted during model calibration. The model was calibrated for above-average conditions by simulating base-flow and water-level measurements made on May 1, 2001, using a recharge rate of 20 in/yr (inches per year). The model was calibrated for below-average conditions by simulating base-flow and water-level measurements made on September 11 and 17, 2001, using a recharge rate of 6.2 in/yr. Average conditions were simulated by adjusting the recharge rate until simulated streamflow at streamflow-measurement station 01472157 matched the long-term (1968-2001) average base flow of 54.1 cubic feet per second. The recharge rate used for average conditions was 15.7 in/yr. The effect of drought in the French Creek Basin was simulated using a drought-year recharge rate of 8 in/yr for 3 months. After 3 months of drought, the simulated streamflow of French Creek at streamflow-measurement station 01472157 decreased 34 percent. The simulations show that after 6 months of average recharge (15.7 in/yr) following drought, streamflow and water levels recovered almost to pre-drought conditions. The effect of increased ground-water withdrawals on stream base flow in the South Branch French Creek Subbasin was simulated under average and drought conditions with pumping rates equal to 50, 75, and 100 percent of the Delaware River Basin Commission Ground Water Protected Area (GWPA) withdrawal limit (1,393 million gallons per year) with all pumped water removed from the basin. For average recharge conditions, the simulated streamflow of South Branch French Creek at the mouth decreased 18, 28, and 37 percent at a withdrawal rate equal to 50, 75, and 100 percent of the GWPA limit, respectively.
After 3 months of drought recharge conditions, the simulated streamflow of South Branch French Creek at the mouth decreased 27, 40, and 52 percent at a withdrawal rate equal to 50, 75, and 100 percent of the GWPA limit, respectively. The effect of well location on base flow, water levels, and the sources of water to the well was simulated by locating a hypothetical well pumping 200 gallons per minute in different places in the Beaver Run Subbasin with all pumped water removed from the basin. The smallest reduction in the base flow of Beaver Run was from a well on the drainage divide.

  7. Mechanistic quantitative structure-activity relationship model for the photoinduced toxicity of polycyclic aromatic hydrocarbons. 2: An empirical model for the toxicity of 16 polycyclic aromatic hydrocarbons to the duckweed Lemna gibba L. G-3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, X.D.; Krylov, S.N.; Ren, L.

    1997-11-01

    Photoinduced toxicity of polycyclic aromatic hydrocarbons (PAHs) occurs via photosensitization reactions (e.g., generation of singlet-state oxygen) and by photomodification (photooxidation and/or photolysis) of the chemicals to more toxic species. The quantitative structure-activity relationship (QSAR) described in the companion paper predicted, in theory, that photosensitization and photomodification additively contribute to toxicity. To substantiate this QSAR modeling exercise it was necessary to show that toxicity can be described by empirically derived parameters. The toxicity of 16 PAHs to the duckweed Lemna gibba was measured as inhibition of leaf production in simulated solar radiation (a light source with a spectrum similar to that of sunlight). A predictive model for toxicity was generated based on the theoretical model developed in the companion paper. The photophysical descriptors required of each PAH for modeling were efficiency of photon absorbance, relative uptake, quantum yield for triplet-state formation, and the rate of photomodification. The photomodification rates of the PAHs showed a moderate correlation to toxicity, whereas a derived photosensitization factor (PSF; based on absorbance, triplet-state quantum yield, and uptake) for each PAH showed only a weak, complex correlation to toxicity. However, summing the rate of photomodification and the PSF resulted in a strong correlation to toxicity that had predictive value. When the PSF and a derived photomodification factor (PMF; based on the photomodification rate and toxicity of the photomodified PAHs) were summed, an excellent explanatory model of toxicity was produced, substantiating the additive contributions of the two factors.

  8. Attrition and changes in size distribution of lime sorbents during fluidization in a circulating fluidized bed absorber. Double quarterly report, January 1--August 31, 1993

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Sang-Kwun; Keener, T.C.; Cook, J.L.

    1993-12-31

    Experimental data on lime sorbent attrition obtained from attrition tests in a circulating fluidized bed absorber (CFBA) are presented. The results are interpreted as both a weight-based attrition rate and a size-based attrition rate. The weight-based attrition rate constants are obtained from a modified second-order attrition model incorporating a minimum fluidization weight, W_min, and excess velocity. Furthermore, this minimum fluidization weight W_min was found to be a function of both particle size and velocity. A plot of the natural log of the overall weight-based attrition rate constants (ln K_a) for Lime 1 (903 MMD) at superficial gas velocities of 2 m/s, 2.35 m/s, and 2.69 m/s and for Lime 2 (1764 MMD) at superficial gas velocities of 2 m/s, 3 m/s, 4 m/s and 5 m/s versus the energy term 1/(U-U_mf)^2 yielded a linear relationship, and a regression coefficient of 0.9386 for the linear regression confirms that K_a may be expressed in Arrhenius form. In addition, an unsteady-state population model is presented to predict the changes in size distribution of bed materials during fluidization. The unsteady-state population model was verified experimentally, and the solid size distribution predicted by the model agreed well with the corresponding experimental size distributions. The model may be applicable to batch and continuous operations of fluidized beds in which solids size reduction results predominantly from attrition and elutriation. Such mechanical attrition and elutriation are frequently significant in fast fluidized beds as well as in circulating fluidized beds.
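
    The Arrhenius-type form implied by that linear plot can be written, with the excess velocity U − U_mf as the driving term,

$$K_a = K_0\,\exp\!\left(-\frac{E_a}{(U-U_{mf})^2}\right) \quad\Longrightarrow\quad \ln K_a = \ln K_0 - \frac{E_a}{(U-U_{mf})^2},$$

    so that ln K_a plotted against 1/(U − U_mf)² is a straight line, consistent with the regression coefficient of 0.9386 reported above. The symbols K_0 and E_a are generic rate and "activation" constants, labelled here for illustration.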

  9. Antarctic sub-shelf melt rates via PICO

    NASA Astrophysics Data System (ADS)

    Reese, Ronja; Albrecht, Torsten; Mengel, Matthias; Asay-Davis, Xylar; Winkelmann, Ricarda

    2018-06-01

    Ocean-induced melting below ice shelves is one of the dominant drivers for mass loss from the Antarctic Ice Sheet at present. An appropriate representation of sub-shelf melt rates is therefore essential for model simulations of marine-based ice sheet evolution. Continental-scale ice sheet models often rely on simple melt-parameterizations, in particular for long-term simulations, when fully coupled ice-ocean interaction becomes computationally too expensive. Such parameterizations can account for the influence of the local depth of the ice-shelf draft or its slope on melting. However, they do not capture the effect of ocean circulation underneath the ice shelf. Here we present the Potsdam Ice-shelf Cavity mOdel (PICO), which simulates the vertical overturning circulation in ice-shelf cavities and thus enables the computation of sub-shelf melt rates consistent with this circulation. PICO is based on an ocean box model that coarsely resolves ice shelf cavities and uses a boundary layer melt formulation. We implement it as a module of the Parallel Ice Sheet Model (PISM) and evaluate its performance under present-day conditions of the Southern Ocean. We identify a set of parameters that yield two-dimensional melt rate fields that qualitatively reproduce the typical pattern of comparably high melting near the grounding line and lower melting or refreezing towards the calving front. PICO captures the wide range of melt rates observed for Antarctic ice shelves, with an average of about 0.1 m a-1 for cold sub-shelf cavities, for example, underneath Ross or Ronne ice shelves, to 16 m a-1 for warm cavities such as in the Amundsen Sea region. This makes PICO a computationally feasible and more physical alternative to melt parameterizations purely based on ice draft geometry.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tyldesley, Scott, E-mail: styldesl@bccancer.bc.c; Delaney, Geoff; Foroudi, Farshad

    Purpose: Estimates of the need for radiotherapy (RT) using different methods (criterion-based benchmarking [CBB] and the Canadian [C-EBEST] and Australian [A-EBEST] epidemiologically based estimates) exist for various cancer sites. We compared these model estimates to actual RT rates for lung, breast, and prostate cancers in British Columbia (BC). Methods and Materials: All cases of lung, breast, and prostate cancers in BC from 1997 to 2004 and all patients receiving RT within 1 year (RT_1Y) and within 5 years (RT_5Y) of diagnosis were identified. The RT_1Y and RT_5Y proportions in health regions with a cancer center were then calculated for the most recent year. RT rates were compared with the CBB and EBEST estimates of RT needs. Variation was assessed by time and region. Results: The RT_1Y rates in regions with a cancer center for lung, breast, and prostate cancers were 51%, 58%, and 33%, compared with 45%, 57%, and 32% for the C-EBEST and 41%, 61%, and 37% for the CBB models. The RT_5Y rates in regions with a cancer center for lung, breast, and prostate cancers were 59%, 61%, and 40%, compared with 61%, 66%, and 61% for the C-EBEST and 75%, 83%, and 60% for the A-EBEST models. The RT_1Y rates increased for breast and prostate cancers. Conclusions: The C-EBEST and CBB model estimates are closer to the actual RT rates than the A-EBEST estimates. Application of these model estimates by health care decision makers should be undertaken with an understanding of the methods used and the assumptions on which they were based.

  11. The Galactic Nova Rate Revisited

    NASA Astrophysics Data System (ADS)

    Shafter, A. W.

    2017-01-01

    Despite its fundamental importance, a reliable estimate of the Galactic nova rate has remained elusive. Here, the overall Galactic nova rate is estimated by extrapolating the observed rate for novae reaching m ≤ 2 to include the entire Galaxy, using a two-component disk plus bulge model for the distribution of stars in the Milky Way. The present analysis improves on previous work by considering important corrections for incompleteness in the observed rate of bright novae and by employing a Monte Carlo analysis to better estimate the uncertainty in the derived nova rates. Several models are considered to account for differences in the assumed properties of bulge and disk nova populations and in the absolute magnitude distribution. The simplest models, which assume uniform properties between bulge and disk novae, predict Galactic nova rates of ~50 to in excess of 100 per year, depending on the assumed incompleteness at bright magnitudes. Models where the disk novae are assumed to be more luminous than bulge novae are explored, and predict nova rates up to 30% lower, in the range of ~35 to ~75 per year. An average of the most plausible models yields a rate of 50 (+31, -23) yr^-1, which is arguably the best estimate currently available for the nova rate in the Galaxy. Virtually all models produce rates that represent significant increases over recent estimates, and bring the Galactic nova rate into better agreement with that expected based on comparison with the latest results from extragalactic surveys.

  12. Kinetic modeling of non-ideal explosives with CHEETAH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fried, L E; Howard, W M; Souers, P C

    1998-08-06

    We report an implementation of the Wood-Kirkwood kinetic detonation model based on multi-species equations of state and multiple reaction rate laws. Finite rate laws are used for the slowest chemical reactions. Other reactions are given infinite rates and are kept in constant thermodynamic equilibrium. We model a wide range of ideal and non-ideal composite energetic materials. We find that we can replicate experimental detonation velocities to within a few per cent, while obtaining good agreement with estimated reaction zone lengths. The detonation velocity as a function of charge radius is also correctly reproduced.

  13. Radiation Hormesis: Historical Perspective and Implications for Low-Dose Cancer Risk Assessment

    PubMed Central

    Vaiserman, Alexander M.

    2010-01-01

    Current guidelines for limiting exposure of humans to ionizing radiation are based on the linear-no-threshold (LNT) hypothesis for radiation carcinogenesis under which cancer risk increases linearly as the radiation dose increases. With the LNT model even a very small dose could cause cancer and the model is used in establishing guidelines for limiting radiation exposure of humans. A slope change at low doses and dose rates is implemented using an empirical dose and dose rate effectiveness factor (DDREF). This imposes usually unacknowledged nonlinearity but not a threshold in the dose-response curve for cancer induction. In contrast, with the hormetic model, low doses of radiation reduce the cancer incidence while it is elevated after high doses. Based on a review of epidemiological and other data for exposure to low radiation doses and dose rates, it was found that the LNT model fails badly. Cancer risk after ordinarily encountered radiation exposure (medical X-rays, natural background radiation, etc.) is much lower than projections based on the LNT model and is often less than the risk for spontaneous cancer (a hormetic response). Understanding the mechanistic basis for hormetic responses will provide new insights about both risks and benefits from low-dose radiation exposure. PMID:20585444

  14. Analysis of the dynamic behavior of structures using the high-rate GNSS-PPP method combined with a wavelet-neural model: Numerical simulation and experimental tests

    NASA Astrophysics Data System (ADS)

    Kaloop, Mosbeh R.; Yigit, Cemal O.; Hu, Jong W.

    2018-03-01

    Recently, the high-rate global navigation satellite system-precise point positioning (GNSS-PPP) technique has been used to detect the dynamic behavior of structures. This study aimed to increase the accuracy of extracting the oscillation properties of structural movements based on the high-rate (10 Hz) GNSS-PPP monitoring technique. A model based on the combination of wavelet packet transform (WPT) de-noising and neural network (NN) prediction was proposed to improve the detection of the dynamic behavior of structures with the GNSS-PPP method. A complicated numerical simulation involving highly noisy data and 13 experimental cases with different loads were utilized to confirm the efficiency of the proposed model design and the monitoring technique in detecting the dynamic behavior of structures. The results revealed that, when combined with the proposed model, the GNSS-PPP method can be used to accurately detect the dynamic behavior of engineering structures as an alternative to the relative GNSS method.

  15. Estimating the personal cure rate of cancer patients using population-based grouped cancer survival data.

    PubMed

    Binbing Yu; Tiwari, Ram C; Feuer, Eric J

    2011-06-01

    Cancer patients are subject to multiple competing risks of death and may die from causes other than the cancer diagnosed. The probability of not dying from the cancer diagnosed, which is one of the patients' main concerns, is sometimes called the 'personal cure' rate. Two approaches to modelling competing-risk survival data, namely the cause-specific hazards approach and the mixture model approach, have been used in the literature. In this article, we first show the connection and differences between crude cause-specific survival in the presence of other causes and net survival in the absence of other causes. The mixture survival model is then extended to population-based grouped survival data to estimate the personal cure rate. Using the colorectal cancer survival data from the Surveillance, Epidemiology and End Results Programme, we estimate the probabilities of dying from colorectal cancer, heart disease, and other causes by age at diagnosis, race and American Joint Committee on Cancer stage.
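
    In a mixture formulation of the kind described, the overall survival function decomposes by eventual cause of death; in illustrative notation,

$$S(t) = \sum_{j} p_j\,S_j(t), \qquad \sum_j p_j = 1,$$

    where p_j is the probability of eventually dying from cause j (the diagnosed cancer, heart disease, or other causes) and S_j(t) is the conditional survival given that cause. The personal cure rate is then 1 − p_cancer, the probability of never dying from the cancer diagnosed; the paper's extension estimates these quantities from population-based grouped survival data.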

  16. Validation of the generalized model of two-phase thermosyphon loop based on experimental measurements of volumetric flow rate

    NASA Astrophysics Data System (ADS)

    Bieliński, Henryk

    2016-09-01

    The current paper presents the experimental validation of the generalized model of the two-phase thermosyphon loop. The generalized model is based on mass, momentum, and energy balances in the evaporators, rising tube, condensers and falling tube. The theoretical analysis and the experimental data have been obtained for a newly designed variant. The variant refers to a thermosyphon loop with both minichannels and conventional tubes. The thermosyphon loop consists of an evaporator on the lower vertical section and a condenser on the upper vertical section. One-dimensional homogeneous and separated two-phase flow models were used in the calculations. The latest minichannel heat transfer correlations available in the literature were applied. A numerical analysis of the volumetric flow rate in the steady state has been carried out. The experiment was conducted on a specially designed test apparatus. Ultrapure water was used as the working fluid. The results show that the theoretical predictions are in good agreement with the measured volumetric flow rate at steady state.

  17. The Significance of Temperature Based Approach Over the Energy Based Approaches in the Buildings Thermal Assessment

    NASA Astrophysics Data System (ADS)

    Albatayneh, Aiman; Alterman, Dariusz; Page, Adrian; Moghtaderi, Behdad

    2017-05-01

    The design of low energy buildings requires accurate thermal simulation software to assess the heating and cooling loads. Such designs should sustain thermal comfort for occupants and promote lower energy usage over the lifetime of the building. One of the house energy rating tools used in Australia is AccuRate, a star-rating tool that assesses and compares the thermal performance of buildings; its heating and cooling loads are calculated based on fixed operational temperatures between 20 °C and 25 °C to sustain thermal comfort for the occupants. However, these fixed settings for time and temperature considerably increase the calculated heating and cooling loads. The adaptive thermal comfort model, on the other hand, accepts a broader range of indoor conditions, interacts with the occupants and promotes low energy solutions for maintaining thermal comfort. These can be achieved by natural ventilation (opening windows/doors), suitable clothing, shading and low energy heating/cooling solutions for the occupied spaces (rooms). Such measures save a significant amount of operating energy, which should be taken into account when predicting the energy consumption of a building. Most building thermal assessment tools, e.g. AccuRate in Australia, depend on energy-based approaches to predict the thermal performance of a building; this approach encourages the use of energy to maintain thermal comfort. This paper describes the advantages of a temperature-based approach to assessing a building's thermal performance (using an adaptive thermal comfort model) over an energy-based approach (the AccuRate software used in Australia). The temperature-based approach was validated and compared with the energy-based approach using four full-scale housing test modules located in Newcastle, Australia (Cavity Brick (CB), Insulated Cavity Brick (InsCB), Insulated Brick Veneer (InsBV) and Insulated Reverse Brick Veneer (InsRBV)) subjected to a range of seasonal conditions in a moderate climate. The time required for heating and/or cooling under the adaptive thermal comfort approach and under AccuRate predictions was estimated. Significant savings (of about 50%) in energy consumption, through minimizing the time required for heating and cooling, were achieved by using the adaptive thermal comfort model.
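
    The record does not reproduce its comfort equation, but a widely used adaptive relation of the kind such models employ is the ASHRAE Standard 55 formula

$$T_{\mathrm{comf}} = 0.31\,\bar{T}_{\mathrm{out}} + 17.8\ {}^{\circ}\mathrm{C},$$

    where T̄_out is the prevailing mean outdoor temperature, with an 80% acceptability band of roughly ±3.5 °C around T_comf. Hours in which the free-running indoor temperature stays inside this band require no heating or cooling, which is how a temperature-based assessment credits adaptive behaviour.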

  18. Rate-distortion analysis of dead-zone plus uniform threshold scalar quantization and its application--part II: two-pass VBR coding for H.264/AVC.

    PubMed

    Sun, Jun; Duan, Yizhou; Li, Jiangtao; Liu, Jiaying; Guo, Zongming

    2013-01-01

    In the first part of this paper, we derive a source model describing the relationship between the rate, distortion, and quantization steps of dead-zone plus uniform threshold scalar quantizers with nearly uniform reconstruction quantizers for generalized Gaussian distributions. This source model consists of rate-quantization, distortion-quantization (D-Q), and distortion-rate (D-R) models. In this part, we first rigorously confirm the accuracy of the proposed source model by comparing the calculated results with the coding data of JM 16.0. Efficient parameter estimation strategies are then developed to better employ this source model in our two-pass rate control method for H.264 variable bit rate coding. Based on our D-Q and D-R models, the proposed method offers high stability and low complexity and is easy to implement. Extensive experiments demonstrate that the proposed method achieves: 1) an average peak signal-to-noise ratio variance of only 0.0658 dB, compared with 1.8758 dB for JM 16.0's method, with an average rate control error of 1.95%; and 2) significant improvement in smoothing the video quality compared with the latest two-pass rate control method.

  19. Integrated Model for Performance Analysis of All-Optical Multihop Packet Switches

    NASA Astrophysics Data System (ADS)

    Jeong, Han-You; Seo, Seung-Woo

    2000-09-01

    The overall performance of an all-optical packet switching system is usually determined by two criteria, i.e., switching latency and packet loss rate. In some real-time applications, however, in which packets arriving later than a timeout period are discarded as lost, the packet loss rate becomes the dominant criterion for system performance. Here we focus on evaluating the performance of all-optical packet switches in terms of the packet loss rate, which normally arises from insufficient hardware or from degradation of the optical signal. Considering both aspects, we propose what we believe is a new analysis model for the packet loss rate that reflects the complicated interactions between physical impairments and system-level parameters. On the basis of the estimation model for signal quality degradation in a multihop path, we construct an equivalent analysis model of a switching network for evaluating the average bit error rate. With this model we then propose an integrated model for estimating the packet loss rate in three architectural examples of multihop packet switches, each of which is based on a different switching concept. We also derive bounds on the packet loss rate induced by bit errors. Finally, it is verified through simulation studies that our analysis model accurately predicts system performance.

  20. Estimating the variance for heterogeneity in arm-based network meta-analysis.

    PubMed

    Piepho, Hans-Peter; Madden, Laurence V; Roger, James; Payne, Roger; Williams, Emlyn R

    2018-04-19

    Network meta-analysis can be implemented by using arm-based or contrast-based models. Here we focus on arm-based models and fit them using generalized linear mixed model procedures. Full maximum likelihood (ML) estimation leads to biased trial-by-treatment interaction variance estimates for heterogeneity. Thus, our objective is to investigate alternative approaches to variance estimation that reduce bias compared with full ML. Specifically, we use penalized quasi-likelihood/pseudo-likelihood and hierarchical (h) likelihood approaches. In addition, we consider a novel model modification that yields estimators akin to the residual maximum likelihood estimator for linear mixed models. The proposed methods are compared by simulation, and 2 real datasets are used for illustration. Simulations show that penalized quasi-likelihood/pseudo-likelihood and h-likelihood reduce bias and yield satisfactory coverage rates. Sum-to-zero restriction and baseline contrasts for random trial-by-treatment interaction effects, as well as a residual ML-like adjustment, also reduce bias compared with an unconstrained model when ML is used, but coverage rates are not quite as good. Penalized quasi-likelihood/pseudo-likelihood and h-likelihood are therefore recommended. Copyright © 2018 John Wiley & Sons, Ltd.

  1. Development and Characterization of a Rate-Dependent Three-Dimensional Macroscopic Plasticity Model Suitable for Use in Composite Impact Problems

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.; Carney, Kelly S.; DuBois, Paul; Hoffarth, Canio; Rajan, Subramaniam; Blankenhorn, Gunther

    2015-01-01

    Several key capabilities have been identified by the aerospace community as lacking in the material models for composite materials currently available within commercial transient dynamic finite element codes such as LS-DYNA. Some of the specific desired features that have been identified include the incorporation of both plasticity and damage within the material model, the capability of using the material model to analyze the response of both three-dimensional solid elements and two-dimensional shell elements, and the ability to simulate the response of composites composed of a variety of composite architectures, including laminates, weaves and braids. In addition, a need has been expressed for a material model that utilizes tabulated, experimentally based input to define the evolution of plasticity and damage, as opposed to utilizing discrete input parameters (such as modulus and strength) and analytical functions based on curve fitting. To begin to address these needs, an orthotropic macroscopic plasticity based model suitable for implementation within LS-DYNA has been developed. Specifically, the Tsai-Wu composite failure model has been generalized and extended to a strain-hardening based orthotropic plasticity model with a non-associative flow rule. The coefficients in the yield function are determined based on tabulated stress-strain curves in the various normal and shear directions, along with selected off-axis curves. Incorporating rate dependence into the yield function is achieved by using a series of tabulated input curves, each at a different constant strain rate. The non-associative flow rule is used to compute the evolution of the effective plastic strain. Systematic procedures have been developed to determine the values of the various coefficients in the yield function and the flow rule based on the tabulated input data. An algorithm based on the radial return method has been developed to facilitate the numerical implementation of the material model. This paper presents in detail the development of the orthotropic plasticity model and the procedures used to obtain the required material parameters. Methods in which a combination of actual testing and selective numerical testing can be combined to yield the appropriate input data for the model will be described. A specific laminated polymer matrix composite is examined to demonstrate the application of the model.
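
    The radial return idea is easiest to see in one dimension. Below is a textbook 1D return-mapping step with linear isotropic hardening and invented material constants, offered only as a sketch of the predictor/corrector structure, not the paper's orthotropic, tabulated, rate-dependent model:

        # 1D rate-independent plasticity, radial return with linear isotropic
        # hardening: elastic predictor, then plastic corrector if yielding.
        def radial_return(strain_inc, state, E=70e3, H=1e3, sigma_y0=300.0):
            sigma, eps_p, alpha = state              # stress, plastic strain, accumulated plastic strain
            sigma_trial = sigma + E * strain_inc     # elastic predictor
            f_trial = abs(sigma_trial) - (sigma_y0 + H * alpha)
            if f_trial <= 0.0:                       # still elastic
                return sigma_trial, eps_p, alpha
            dgamma = f_trial / (E + H)               # consistency condition
            sign = 1.0 if sigma_trial > 0.0 else -1.0
            return (sigma_trial - E * dgamma * sign,
                    eps_p + dgamma * sign, alpha + dgamma)

        state = (0.0, 0.0, 0.0)
        for _ in range(10):                          # ten 0.1% strain increments
            state = radial_return(0.001, state)
        print(f"stress after 1% strain: {state[0]:.1f} MPa")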

  2. Scene-aware joint global and local homographic video coding

    NASA Astrophysics Data System (ADS)

    Peng, Xiulian; Xu, Jizheng; Sullivan, Gary J.

    2016-09-01

    Perspective motion is commonly represented in video content that is captured and compressed for various applications including cloud gaming, vehicle and aerial monitoring, etc. Existing approaches based on an eight-parameter homography motion model cannot deal with this efficiently, either due to low prediction accuracy or excessive bit rate overhead. In this paper, we consider the camera motion model and scene structure in such video content and propose a joint global and local homography motion coding approach for video with perspective motion. The camera motion is estimated by a computer vision approach, and camera intrinsic and extrinsic parameters are globally coded at the frame level. The scene is modeled as piece-wise planes, and three plane parameters are coded at the block level. Fast gradient-based approaches are employed to search for the plane parameters for each block region. In this way, improved prediction accuracy and low bit costs are achieved. Experimental results based on the HEVC test model show that up to 9.1% bit rate savings can be achieved (with equal PSNR quality) on test video content with perspective motion. Test sequences for the example applications showed a bit rate savings ranging from 3.7 to 9.1%.
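
    For reference, applying a 3x3 homography to a pixel is one matrix-vector product plus a perspective divide (the eight-parameter model fixes the bottom-right entry to 1); the matrix here is illustrative:

        import numpy as np

        H = np.array([[1.02, 0.01, 5.0],
                      [0.00, 1.01, -3.0],
                      [1e-5, 0.00, 1.0]])     # invented global homography

        def warp(H, x, y):
            p = H @ np.array([x, y, 1.0])
            return p[0] / p[2], p[1] / p[2]   # perspective divide

        print(warp(H, 640.0, 360.0))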

  3. A MATLAB framework for estimation of NLME models using stochastic differential equations: applications for estimation of insulin secretion rates.

    PubMed

    Mortensen, Stig B; Klim, Søren; Dammann, Bernd; Kristensen, Niels R; Madsen, Henrik; Overgaard, Rune V

    2007-10-01

    The non-linear mixed-effects model based on stochastic differential equations (SDEs) provides an attractive residual error model that is able to handle serially correlated residuals typically arising from structural misspecification of the true underlying model. The use of SDEs also opens up new tools for model development and easily allows unknown inputs and parameters to be tracked over time. An algorithm for maximum likelihood estimation of the model has been proposed earlier, and the present paper presents the first general implementation of this algorithm. The implementation is done in Matlab and also demonstrates the use of parallel computing for improved estimation times. Its use is illustrated by two applications which focus on the ability of the model to estimate unknown inputs, facilitated by the extension to SDEs. The first is a deconvolution-type estimation of the insulin secretion rate based on a linear two-compartment model for C-peptide measurements. In the second, the model is extended to also give an estimate of the time-varying liver extraction based on both C-peptide and insulin measurements.

  4. Information spreading dynamics in hypernetworks

    NASA Astrophysics Data System (ADS)

    Suo, Qi; Guo, Jin-Li; Shen, Ai-Zhong

    2018-04-01

    Contact patterns and spreading strategies fundamentally influence the spread of information. Current mathematical methods largely assume that contacts between individuals are fixed by networks. In fact, individuals are affected by all of their neighbors across different social relationships. Here, we develop a mathematical approach to depict the information spreading process in hypernetworks. Each individual is viewed as a node, and each social relationship containing the individual is viewed as a hyperedge. Based on the SIS epidemic model, we construct two spreading models: one based on global transmission, corresponding to the RP strategy, and the other based on local transmission, corresponding to the CP strategy. These models degenerate into complex network models for a special parameter value; hypernetwork models thus extend the traditional models and are more realistic. Further, we discuss the impact on the models of parameters including the structure parameters of the hypernetwork, the spreading rate, the recovering rate and the information seed. Propagation time and the density of informed nodes reveal the overall trend of information dissemination. Comparing the two models, we find that there is no spreading threshold in RP, while a spreading threshold exists in CP.
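
    A mean-field SIS integration (a deliberate simplification of the hypernetwork models, with invented parameters) illustrates the threshold behaviour reported for the CP strategy: with effective contact number k, the infection dies out when beta*k/gamma < 1 and persists above it.

        # Mean-field SIS: di/dt = beta*k*i*(1-i) - gamma*i
        def simulate_sis(beta, gamma, k, i0=0.01, dt=0.01, steps=20000):
            i = i0
            for _ in range(steps):
                i += dt * (beta * k * i * (1.0 - i) - gamma * i)
            return i

        for beta in (0.05, 0.2):   # with gamma=0.5, k=4 the threshold sits at beta=0.125
            print(f"beta={beta}: steady-state infected fraction ~ {simulate_sis(beta, 0.5, 4):.3f}")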

  5. An Improved Computing Method for 3D Mechanical Connectivity Rates Based on a Polyhedral Simulation Model of Discrete Fracture Network in Rock Masses

    NASA Astrophysics Data System (ADS)

    Li, Mingchao; Han, Shuai; Zhou, Sibao; Zhang, Ye

    2018-06-01

    Based on a 3D model of a discrete fracture network (DFN) in a rock mass, an improved projective method for computing the 3D mechanical connectivity rate was proposed. The Monte Carlo simulation method, 2D Poisson process and 3D geological modeling technique were integrated into a polyhedral DFN modeling approach, and the simulation results were verified by numerical tests and graphical inspection. Next, the traditional projective approach for calculating the rock mass connectivity rate was improved using the 3D DFN models by (1) using the polyhedral model to replace the Baecher disk model; (2) taking the real cross section of the rock mass, rather than a part of the cross section, as the test plane; and (3) dynamically searching the joint connectivity rates using different dip directions and dip angles at different elevations to calculate the maximum, minimum and average values of the joint connectivity at each elevation. In a case study, the improved method and the traditional method were used to compute the mechanical connectivity rate of the slope of a dam abutment. The results of the two methods were further used to compute the cohesive force of the rock masses. Finally, a comparison showed that the cohesive force derived from the traditional method had a higher error, whereas the cohesive force derived from the improved method was consistent with the suggested values. The effectiveness and validity of the improved method were thus verified indirectly.

  6. Probabilistic estimation of residential air exchange rates for ...

    EPA Pesticide Factsheets

    Residential air exchange rates (AERs) are a key determinant in the infiltration of ambient air pollution indoors. Population-based human exposure models using probabilistic approaches to estimate personal exposure to air pollutants have relied on input distributions from AER measurements. An algorithm for probabilistically estimating AER was developed based on the Lawrence Berkeley National Laboratory infiltration model, utilizing housing characteristics and meteorological data with adjustment for window opening behavior. The algorithm was evaluated by comparing modeled and measured AERs in four US cities (Los Angeles, CA; Detroit, MI; Elizabeth, NJ; and Houston, TX), inputting study-specific data. The impact on the modeled AER of using publicly available housing data representative of the region for each city was also assessed. Finally, modeled AER based on region-specific inputs was compared with AERs estimated using literature-based distributions. While modeled AERs were similar in magnitude to the measured AERs, they were consistently lower for all cities except Houston. AERs estimated using region-specific inputs were lower than those using study-specific inputs due to differences in window opening probabilities. The algorithm produced more spatially and temporally variable AERs compared with literature-based distributions, reflecting within- and between-city differences and helping reduce error in estimates of air pollutant exposure.
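
    A minimal sketch of the LBL-style infiltration relation such an algorithm builds on; the stack and wind coefficients and the leakage area below are generic ASHRAE-style assumptions, not values from this study, and the window-opening adjustment is omitted:

        import math

        # Q = A_L * sqrt(Cs*|dT| + Cw*U^2) with A_L in cm^2 and Q in L/s;
        # AER follows as Q (converted to m^3/h) over the house volume.
        def aer_lbl(leak_area_cm2, dT_K, wind_ms, volume_m3,
                    Cs=0.000145, Cw=0.000104):   # assumed stack/wind coefficients
            q_Ls = leak_area_cm2 * math.sqrt(Cs * abs(dT_K) + Cw * wind_ms ** 2)
            return 3.6 * q_Ls / volume_m3        # air changes per hour

        print(f"AER ~ {aer_lbl(500.0, 10.0, 4.0, 300.0):.2f} 1/h")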

  7. Fate of Volatile Organic Compounds in Constructed Wastewater Treatment Wetlands

    USGS Publications Warehouse

    Keefe, S.H.; Barber, L.B.; Runkel, R.L.; Ryan, J.N.

    2004-01-01

    The fate of volatile organic compounds was evaluated in a wastewater-dependent constructed wetland near Phoenix, AZ, using field measurements and solute transport modeling. Numerically based volatilization rates were determined using inverse modeling techniques and hydraulic parameters established by sodium bromide tracer experiments. Theoretical volatilization rates were calculated from the two-film method incorporating physicochemical properties and environmental conditions. Additional analyses were conducted using graphically determined volatilization rates based on field measurements. Transport (with first-order removal) simulations were performed using a range of volatilization rates and were evaluated with respect to field concentrations. The inverse and two-film reactive transport simulations demonstrated excellent agreement with measured concentrations for 1,4-dichlorobenzene, tetrachloroethene, dichloromethane, and trichloromethane and fair agreement for dibromochloromethane, bromo-dichloromethane, and toluene. Wetland removal efficiencies from inlet to outlet ranged from 63% to 87% for target compounds.
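
    For a first-order removal process in plug flow, the removal efficiency follows directly from the volatilization rate constant k and the hydraulic residence time tau, as the short example below shows (values illustrative only):

        import math

        # C_out/C_in = exp(-k*tau)  ->  removal = 1 - exp(-k*tau)
        def removal_efficiency(k_per_h, tau_h):
            return 1.0 - math.exp(-k_per_h * tau_h)

        for k in (0.05, 0.10, 0.20):
            print(f"k={k:.2f}/h, tau=10 h -> removal {100 * removal_efficiency(k, 10.0):.0f}%")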

  8. Prevalence of non-traumatic spinal cord injury in Victoria, Australia.

    PubMed

    New, P W; Farry, A; Baxter, D; Noonan, V K

    2013-02-01

    Forecasting using population modelling. To determine the prevalence of non-traumatic spinal cord injury (NTSCI) on 30 June 2010. Victoria, Australia. Modelling used the following data: incidence of NTSCI based on state-wide, population-based, health-administration database of hospital admissions; state and national population profiles and life tables; levels of NTSCI based on national rehabilitation outcomes data; and life expectancy for persons with SCI. The total population prevalence rate was 367.2 per million, whereas the prevalence in adults aged 16 years and older was estimated to be 2027, equivalent to a population prevalence rate of 455 per million persons. There were more males (1097) with NTSCI (prevalence rate males 197.8 per million population; females 169.1 per million population) and the prevalence was much higher among those with paraplegia (prevalence rate 269.3 per million compared to 97.8 per million with tetraplegia) and incomplete NTSCI. Ventilator dependency (prevalence rate 1.6 per million population) and paediatric NTSCI (prevalence rate 6 per million population ≤ 15 years old) were extremely rare. We have reported a method for calculating an estimate of the prevalence of NTSCI that provides information that will be vital to optimise health care planning for this group of highly disabled members of society. It is suggested that refinements to the modelling methods are required to enhance its reliability. Future projects should be directed at refining the mortality ratios and performing cohort survival studies.

  9. Mechanistic Kinetic Modeling of Thiol-Michael Addition Photopolymerizations via Photocaged "Superbase" Generators: An Analytical Approach.

    PubMed

    Claudino, Mauro; Zhang, Xinpeng; Alim, Marvin D; Podgórski, Maciej; Bowman, Christopher N

    2016-11-08

    A kinetic mechanism and the accompanying mathematical framework are presented for base-mediated thiol-Michael photopolymerization kinetics involving a photobase generator. Here, model kinetic predictions demonstrate excellent agreement with a representative experimental system composed of 2-(2-nitrophenyl)propyloxycarbonyl-1,1,3,3-tetramethylguanidine (NPPOC-TMG) as a photobase generator that is used to initiate thiol-vinyl sulfone Michael addition reactions and polymerizations. Modeling equations derived from a basic mechanistic scheme indicate overall polymerization rates that follow a pseudo-first-order kinetic process in the base and coreactant concentrations, controlled by the ratio of the propagation to chain-transfer kinetic parameters (k_p/k_CT), which is dictated by the rate-limiting step and controls the time necessary to reach gelation. Gelation occurs earlier as the k_p/k_CT ratio approaches a critical value, beyond which gel times become nearly independent of k_p/k_CT. The theoretical approach allowed the effect of induction time on the reaction kinetics to be determined, arising from initial acid-base neutralization of the photogenerated base caused by the presence of protic contaminants. Such inhibition kinetics may be challenging for reaction systems that require high curing rates but are relevant for chemical systems that need to remain kinetically dormant until activated, although at the ultimate cost of lower polymerization rates. The pure step-growth character of this living polymerization and the exhibited kinetics provide unique potential for extended dark-cure reactions and uniform material properties. The general kinetic model is applicable to photobase initiators where photolysis follows a unimolecular cleavage process releasing a strong base catalyst without cogeneration of intermediate radical species.
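
    A reduced pseudo-first-order sketch of the overall behaviour (invented rate constants; the full model resolves propagation versus chain transfer through k_p/k_CT):

        import math

        # Conversion with an induction delay t0 from acid-base neutralization,
        # then pseudo-first-order growth with k_obs = k_app * [base].
        def conversion(t_s, k_app=5.0, base_M=0.01, t0_s=30.0):
            k_obs = k_app * base_M           # 1/s (both factors are assumptions)
            return 1.0 - math.exp(-k_obs * max(t_s - t0_s, 0.0))

        for t in (0, 60, 120, 300):
            print(f"t={t:3d} s  conversion {conversion(t):.2f}")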

  10. Uncertainty analysis of multi-rate kinetics of uranium desorption from sediments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xiaoying; Liu, Chongxuan; Hu, Bill X.

    2014-01-01

    A multi-rate expression for uranyl [U(VI)] surface complexation reactions has been proposed to describe diffusion-limited U(VI) sorption/desorption in heterogeneous subsurface sediments. An important assumption in the rate expression is that its rate constants follow a certain type of probability distribution. In this paper, a Bayes-based Differential Evolution Markov Chain method was used to assess the distribution assumption and to analyze parameter and model structure uncertainties. U(VI) desorption from a contaminated sediment at the US Hanford 300 Area, Washington was used as an example for detailed analysis. The results indicated that: 1) the rate constants in the multi-rate expression have uneven uncertainties, with slower rate constants having relatively larger uncertainties; 2) the lognormal distribution is an effective assumption for the rate constants in the multi-rate model to simulate U(VI) desorption; 3) however, long-term prediction and its uncertainty may be significantly biased by the lognormal assumption for the smaller rate constants; and 4) both parameter and model structure uncertainties can affect the extrapolation of the multi-rate model, with the larger uncertainty arising from the model structure. The results provide important insights into the factors contributing to the uncertainties of the multi-rate expression commonly used to describe the diffusion- or mixing-limited sorption/desorption of both organic and inorganic contaminants in subsurface sediments.
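
    A minimal numerical sketch of the multi-rate first-order expression with lognormally distributed rate constants (distribution parameters invented for illustration):

        import numpy as np

        # Remaining sorbed fraction: S(t)/S0 = sum_i w_i * exp(-k_i * t)
        rng = np.random.default_rng(1)
        k = rng.lognormal(mean=-4.0, sigma=2.0, size=200)   # site rate constants, 1/h
        w = np.full(k.size, 1.0 / k.size)                   # equal site weights

        def sorbed_fraction(t_h):
            return float(np.sum(w * np.exp(-k * t_h)))

        for t in (1.0, 10.0, 100.0, 1000.0):
            print(f"t={t:6.0f} h  sorbed fraction {sorbed_fraction(t):.2f}")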

  11. Disturbance Distance: Combining a process based ecosystem model and remote sensing data to map the vulnerability of U.S. forested ecosystems to potentially altered disturbance rates

    NASA Astrophysics Data System (ADS)

    Dolan, K. A.

    2015-12-01

    Disturbance plays a critical role in shaping the structure and function of forested ecosystems as well as the ecosystem services they provide, including but not limited to: carbon storage, biodiversity habitat, water quality and flow, and land-atmosphere exchanges of energy and water. In addition, recent studies suggest that disturbance rates may increase in the future under altered climate and land use scenarios. Understanding how vulnerable forested ecosystems are to potential changes in disturbance rates is therefore of high importance. This study calculated the theoretical threshold rate of disturbance beyond which forest ecosystems could no longer be sustained (λ*) across the conterminous U.S. using an advanced process based ecosystem model (ED). Published rates of disturbance (λ) in 50 study sites were obtained from the North American Forest Disturbance (NAFD) program. Disturbance distance (λ* - λ) was calculated for each site by differencing the model based threshold under current climate conditions and average observed rates of disturbance over the last quarter century. Preliminary results confirm that all sampled forest sites have current average rates of disturbance below λ*, but there were interesting patterns in the recorded disturbance distances. In general, western sites had much smaller disturbance distances, suggesting higher vulnerability to change, while eastern sites showed larger buffers. Ongoing work is assessing the vulnerability of these sites to potential future changes by propagating scenarios of future climate and land-use change through the analysis.

  12. Improvement of specific growth rate of Pichia pastoris for effective porcine interferon-α production with an on-line model-based glycerol feeding strategy.

    PubMed

    Gao, Min-Jie; Zheng, Zhi-Yong; Wu, Jian-Rong; Dong, Shi-Juan; Li, Zhen; Jin, Hu; Zhan, Xiao-Bei; Lin, Chi-Chung

    2012-02-01

    Effective expression of porcine interferon-α (pIFN-α) with recombinant Pichia pastoris was conducted in a bench-scale fermentor, and the influence of the glycerol feeding strategy on the specific growth rate and protein production was investigated. The traditional DO-stat feeding strategy led to a very low cell growth rate, resulting in a dry cell weight (DCW) of only about 90 g/L during the subsequent induction phase. The previously reported Artificial Neural Network Pattern Recognition (ANNPR) model-based glycerol feeding strategy improved the cell density to 120 g DCW/L, but the specific growth rate decreased from 0.15-0.18 h(-1) to 0.03-0.08 h(-1) during the last 10 h of the glycerol feeding stage, leading to variation in porcine interferon-α production, as the glycerol feeding scheme had a significant effect on the induction phase. This problem was resolved by an improved ANNPR model-based feeding strategy that maintains the specific growth rate above 0.11 h(-1). With this feeding strategy, the pIFN-α concentration reached 1.43 g/L, more than 1.5-fold higher than that obtained with the previously adopted strategy. Our results showed that increasing the specific growth rate favored target protein production and that the glycerol feeding method directly influenced the induction stage. Consequently, higher cell density and specific growth rate, as well as effective porcine interferon-α production, were achieved by the novel glycerol feeding strategy.
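
    For orientation, the classical open-loop mass-balance feed profile that holds a target specific growth rate is sketched below; the paper's ANNPR strategy adapts feeding on-line, and every parameter value here is an assumption:

        import math

        # F(t) = (mu_set / Y_xs) * X0 * V0 * exp(mu_set * t) / S_feed
        def feed_rate_Lh(t_h, mu_set=0.11, Yxs=0.5, X0=40.0, V0=2.0, S_feed=500.0):
            return (mu_set / Yxs) * X0 * V0 * math.exp(mu_set * t_h) / S_feed

        for t in (0, 5, 10):
            print(f"t={t:2d} h  feed rate {feed_rate_Lh(t):.3f} L/h")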

  13. Application of an Uncoupled Elastic-plastic-creep Constitutive Model to Metals at High Temperature

    NASA Technical Reports Server (NTRS)

    Haisler, W. E.

    1983-01-01

    A uniaxial, uncoupled constitutive model to predict the response of thermal- and rate-dependent elastic-plastic material behavior is presented. The model is based on an incremental classical plasticity theory extended to account for thermal, creep, and transient temperature conditions. Revisions to the combined hardening rule of the theory allow better representation of cyclic phenomena, including the high rate of strain hardening upon cyclic reyield and cyclic saturation. An alternative approach is taken to model the rate-dependent inelastic deformation, utilizing hysteresis loops and stress relaxation test data at various temperatures. The model is evaluated and compared to experiments involving various thermal and mechanical load histories on 5086 aluminum alloy, 304 stainless steel and Hastelloy-X.

  14. Micromolecular modeling

    NASA Technical Reports Server (NTRS)

    Guillet, J. E.

    1984-01-01

    A reaction kinetics based model of the photodegradation process, which measures all important rate constants, and a computerized model capable of predicting the photodegradation rate and failure modes over a 30 year period, were developed. It is shown that the computerized photodegradation model for polyethylene correctly predicts failure of ELVAX 15 and cross-linked ELVAX 150 on outdoor exposure, and that cross-linking ethylene vinyl acetate (EVA) does not significantly change its degradation rate. The effect of the stabilizer package is approximately equivalent on both polymers. The computerized model indicates that peroxide decomposers and UV absorbers are the most effective stabilizers, and that a combination of UV absorbers and a hindered amine light stabilizer (HALS) is the most effective stabilizer system.

  15. ESTIMATION OF PHOSPHATE ESTER HYDROLYSIS RATE CONSTANTS. II. ACID AND GENERAL BASE CATALYZED HYDROLYSIS

    EPA Science Inventory

    SPARC (SPARC Performs Automated Reasoning in Chemistry) chemical reactivity models were extended to calculate acid and neutral hydrolysis rate constants of phosphate esters in water. The rate is calculated from the energy difference between the initial and transition states of a ...

  16. Performance comparison of LUR and OK in PM2.5 concentration mapping: a multidimensional perspective

    PubMed Central

    Zou, Bin; Luo, Yanqing; Wan, Neng; Zheng, Zhong; Sternberg, Troy; Liao, Yilan

    2015-01-01

    Methods of Land Use Regression (LUR) modeling and Ordinary Kriging (OK) interpolation have been widely used to offset the shortcomings of PM2.5 data observed at sparse monitoring sites. However, traditional point-based performance evaluation strategy for these methods remains stagnant, which could cause unreasonable mapping results. To address this challenge, this study employs ‘information entropy’, an area-based statistic, along with traditional point-based statistics (e.g. error rate, RMSE) to evaluate the performance of LUR model and OK interpolation in mapping PM2.5 concentrations in Houston from a multidimensional perspective. The point-based validation reveals significant differences between LUR and OK at different test sites despite the similar end-result accuracy (e.g. error rate 6.13% vs. 7.01%). Meanwhile, the area-based validation demonstrates that the PM2.5 concentrations simulated by the LUR model exhibits more detailed variations than those interpolated by the OK method (i.e. information entropy, 7.79 vs. 3.63). Results suggest that LUR modeling could better refine the spatial distribution scenario of PM2.5 concentrations compared to OK interpolation. The significance of this study primarily lies in promoting the integration of point- and area-based statistics for model performance evaluation in air pollution mapping. PMID:25731103
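
    For orientation, the area-based statistic reduces to the Shannon entropy of the binned map values; the two synthetic surfaces below (invented data) mimic a smooth OK-like map and a more variable LUR-like map:

        import numpy as np

        edges = np.arange(0.0, 25.0, 0.5)   # common concentration bins, ug/m^3

        def map_entropy(values):
            hist, _ = np.histogram(values, bins=edges)
            p = hist / hist.sum()
            p = p[p > 0]
            return float(-np.sum(p * np.log2(p)))

        rng = np.random.default_rng(0)
        smooth = rng.normal(12.0, 0.5, 10000)     # little spatial detail
        detailed = rng.normal(12.0, 3.0, 10000)   # more variation
        print(f"entropy: smooth {map_entropy(smooth):.2f} bits, detailed {map_entropy(detailed):.2f} bits")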

  17. Improved Early Cleft Lip and Palate Complications at a Surgery Specialty Center in the Developing World.

    PubMed

    Park, Eugene; Deshpande, Gaurav; Schonmeyr, Bjorn; Restrepo, Carolina; Campbell, Alex

    2018-01-01

    To evaluate complication rates following cleft lip and cleft palate repairs during the transition from mission-based care to center-based care in a developing region. We performed a retrospective review of 3419 patients who underwent cleft lip repair and 1728 patients who underwent cleft palate repair in Guwahati, India between December 2010 and February 2014. Of those who underwent cleft lip repair, 654 were treated during a surgical mission and 2765 were treated at a permanent center. Of those who underwent cleft palate repair, 236 were treated during a surgical mission and 1491 were treated at a permanent center. Two large surgical missions to Guwahati, India, and the Guwahati Comprehensive Cleft Care Center (GCCCC) in Assam, India. Overall complication rates following cleft lip and cleft palate repair. Overall complication rates following cleft lip repair were 13.2% for the first mission, 6.7% for the second mission, and 4.0% at GCCCC. Overall complication rates following cleft palate repair were 28.0% for the first mission, 30.0% for the second mission, and 15.8% at GCCCC. Complication rates following cleft palate repair by the subset of surgeons permanently based at GCCCC (7.2%) were lower than visiting surgeons ( P < .05). Our findings support the notion that transitioning from a mission-based model to a permanent facility-based model of cleft care delivery in the developing world can lead to decreased complication rates.

  18. The Integrated Soil Erosion Risk Management Model of Central Java, Indonesia

    NASA Astrophysics Data System (ADS)

    Setiawan, M. A.; Stoetter, J.; Sartohadi, J.; Christanto, N.

    2009-04-01

    Many types of soil erosion model have been developed worldwide; each model has its own advantages and assumptions rooted in the area for which it was developed. Ironically, in tropical countries, where rainfall intensities are higher than elsewhere, the soil erosion problem receives less attention. In Indonesia, owing to inadequate supporting data and methods, soil erosion management tends to receive low priority in policy decisions. There is therefore an increasing need to initiate and integrate a risk management model for soil erosion to prevent further land degradation in Indonesia. The main research objective is to generate a model that can analyse the dynamic system of the soil erosion problem. This model comprehensively considers four main aspects within the dynamic system analysis: soil erosion rate modelling, the tolerable soil erosion rate, the total soil erosion cost, and soil erosion management measures. The model involves several software components, i.e. PC Raster for the soil erosion modelling, Powersim Constructor Ver. 2.5 as the tool to analyse the dynamic system, and Python Ver. 2.6.1 to build the main graphical user interface. The first step addressed in this research is identifying the most appropriate soil erosion model to be applied in Indonesia given the landscape, climate and data availability; this model must be simple in its input data requirements while still supporting process based analysis. Using the soil erosion model results, the total soil erosion cost is calculated for both on-site and off-site effects and stated in Rupiah (the Indonesian currency) and Dollars. That total is then used as one of the input parameters for the tolerable soil erosion rate, which in turn decides whether the soil erosion rate has exceeded the allowed value. If the soil erosion rate exceeds the tolerable rate, soil erosion management is applied based on cost-benefit analysis. The soil erosion management measures serve as a decision-making aid for defining the best alternative soil conservation method in a given area. Besides engineering and theoretical methods, local wisdom is also taken into account in defining alternative soil erosion management measures. As a prototype, this integrated model will be generated and simulated for the Serayu Watershed, Central Java, since this area has serious soil erosion problems, mainly in the upstream area (the Dieng area). Extensive monoculture plantation (potatoes) and very intensive soil tillage without proper soil conservation have accelerated soil erosion and depleted soil fertility. Potato productivity data (kg/ha) from 1997-2007 show a declining trend of approximately -8.2% per year, while fertilizer and pesticide consumption on agricultural land increases significantly every year. At the same time, the high erosion rate causes serious sedimentation problems downstream. These conditions can be used as a case study for determining the elements at risk from soil erosion and the calculation method for the total soil erosion cost (on-site and off-site effects). Moreover, the Serayu Watershed consists of complex landforms which may have varying tolerable soil erosion rates.
    In the future, this integrated model can provide valuable baseline data on the soil erosion hazard as spatial and temporal information, including its total cost, the sustainable lifetime of a given parcel of land or agricultural area, and the cost consequences of applying a particular agricultural or soil management practice. Since the model gives results explicitly in space and time, it can be used by local authorities to test land use scenarios in terms of soil erosion impact before applying them in reality. In practice, such an integrated model could give local people a better understanding of soil erosion, its processes and impacts, and how to manage it. Keywords: Risk assessment, soil erosion, dynamic system, environmental valuation

  19. Category Rating Is Based on Prototypes and Not Instances: Evidence from Feedback-Dependent Context Effects

    ERIC Educational Resources Information Center

    Petrov, Alexander A.

    2011-01-01

    Context effects in category rating on a 7-point scale are shown to reverse direction depending on feedback. Context (skewed stimulus frequencies) was manipulated between and feedback within subjects in two experiments. The diverging predictions of prototype- and exemplar-based scaling theories were tested using two representative models: ANCHOR…

  20. Development and validation of a comprehensive model for MAP of fruits based on enzyme kinetics theory and Arrhenius relation.

    PubMed

    Mangaraj, S; K Goswami, T; Mahajan, P V

    2015-07-01

    MAP is a dynamic system in which respiration of the packaged product and gas permeation through the packaging film take place simultaneously. The desired levels of O2 and CO2 in a package are achieved by matching the film permeation rates for O2 and CO2 with the respiration rate of the packaged product. A mathematical model for MAP of fresh fruits was developed, applying an enzyme kinetics based respiration equation coupled with an Arrhenius-type temperature model, and was solved numerically using a MATLAB programme. The model was used to determine the time needed to reach the equilibrium concentrations inside the MA package and the levels of O2 and CO2 at equilibrium. The developed model for prediction of equilibrium O2 and CO2 concentrations was validated using experimental data for MA packaging of apple, guava and litchi.
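
    A minimal sketch of the package O2 balance at constant temperature, pairing film permeation with Michaelis-Menten respiration; all parameters are illustrative, and the paper additionally makes the kinetic constants temperature-dependent through the Arrhenius relation:

        # O2 fraction y: Vf*dy/dt = P_O2*(0.21 - y) - r(y)*W, with
        # Michaelis-Menten respiration r(y) = Vm*y/(Km + y).
        def simulate_map(hours=200.0, dt=0.01):
            y = 0.21             # package O2 fraction
            Vf, W = 2.0, 1.0     # free volume (L), produce mass (kg)
            P_O2 = 0.04          # whole-package film permeation, L/(h*atm)
            Vm, Km = 0.02, 0.05  # L O2/(kg*h), Michaelis constant (atm)
            t = 0.0
            while t < hours:
                r = Vm * y / (Km + y)
                y += dt * (P_O2 * (0.21 - y) - r * W) / Vf
                t += dt
            return y

        print(f"equilibrium O2 fraction ~ {simulate_map():.3f}")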

  1. A crystallographic model for nickel base single crystal alloys

    NASA Technical Reports Server (NTRS)

    Dame, L. T.; Stouffer, D. C.

    1988-01-01

    The purpose of this research is to develop a tool for the mechanical analysis of nickel-base single-crystal superalloys, specifically Rene N4, used in gas turbine engine components. This objective is achieved by developing a rate-dependent anisotropic constitutive model and implementing it in a nonlinear three-dimensional finite-element code. The constitutive model is developed from metallurgical concepts utilizing a crystallographic approach. An extension of Schmid's law is combined with the Bodner-Partom equations to model the inelastic tension/compression asymmetry and orientation-dependence in octahedral slip. Schmid's law is used to approximate the inelastic response of the material in cube slip. The constitutive equations model the tensile behavior, creep response and strain-rate sensitivity of the single-crystal superalloys. Methods for deriving the material constants from standard tests are also discussed. The model is implemented in a finite-element code, and the computed and experimental results are compared for several orientations and loading conditions.
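
    Schmid's law, the starting point of the octahedral-slip formulation, is easy to state concretely; this sketch computes the Schmid factor for [001] tension on a (111)[-101] slip system of an fcc lattice:

        import numpy as np

        # tau = sigma * cos(phi) * cos(lambda): phi between load axis and
        # slip-plane normal, lambda between load axis and slip direction.
        def schmid_factor(load, normal, slip_dir):
            load, normal, slip_dir = (np.asarray(v, float) for v in (load, normal, slip_dir))
            cos_phi = load @ normal / (np.linalg.norm(load) * np.linalg.norm(normal))
            cos_lam = load @ slip_dir / (np.linalg.norm(load) * np.linalg.norm(slip_dir))
            return abs(cos_phi * cos_lam)

        print(f"Schmid factor: {schmid_factor([0, 0, 1], [1, 1, 1], [-1, 0, 1]):.3f}")  # ~0.408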

  2. SPATIO-TEMPORAL MODELING OF AGRICULTURAL YIELD DATA WITH AN APPLICATION TO PRICING CROP INSURANCE CONTRACTS

    PubMed Central

    Ozaki, Vitor A.; Ghosh, Sujit K.; Goodwin, Barry K.; Shirota, Ricardo

    2009-01-01

    This article presents a statistical model of agricultural yield data based on a set of hierarchical Bayesian models that allows joint modeling of temporal and spatial autocorrelation. This method captures a comprehensive range of the various uncertainties involved in predicting crop insurance premium rates as opposed to the more traditional ad hoc, two-stage methods that are typically based on independent estimation and prediction. A panel data set of county-average yield data was analyzed for 290 counties in the State of Paraná (Brazil) for the period of 1990 through 2002. Posterior predictive criteria are used to evaluate different model specifications. This article provides substantial improvements in the statistical and actuarial methods often applied to the calculation of insurance premium rates. These improvements are especially relevant to situations where data are limited. PMID:19890450

  3. Modeling of the pyruvate production with Escherichia coli: comparison of mechanistic and neural networks-based models.

    PubMed

    Zelić, B; Bolf, N; Vasić-Racki, D

    2006-06-01

    Three different models: the unstructured mechanistic black-box model, the input-output neural network-based model and the externally recurrent neural network model were used to describe the pyruvate production process from glucose and acetate using the genetically modified Escherichia coli YYC202 ldhA::Kan strain. The experimental data were used from the recently described batch and fed-batch experiments [ Zelić B, Study of the process development for Escherichia coli-based pyruvate production. PhD Thesis, University of Zagreb, Faculty of Chemical Engineering and Technology, Zagreb, Croatia, July 2003. (In English); Zelić et al. Bioproc Biosyst Eng 26:249-258 (2004); Zelić et al. Eng Life Sci 3:299-305 (2003); Zelić et al Biotechnol Bioeng 85:638-646 (2004)]. The neural networks were built out of the experimental data obtained in the fed-batch pyruvate production experiments with the constant glucose feed rate. The model validation was performed using the experimental results obtained from the batch and fed-batch pyruvate production experiments with the constant acetate feed rate. Dynamics of the substrate and product concentration changes was estimated using two neural network-based models for biomass and pyruvate. It was shown that neural networks could be used for the modeling of complex microbial fermentation processes, even in conditions in which mechanistic unstructured models cannot be applied.

  4. Model of epidemic control based on quarantine and message delivery

    NASA Astrophysics Data System (ADS)

    Wang, Xingyuan; Zhao, Tianfang; Qin, Xiaomeng

    2016-09-01

    The model provides two novel strategies for the preventive control of epidemic diseases. The first concerns different isolation rates in the latent and invasion periods. Experiments show that increasing the isolation rate in the invasion period, once it exceeds 0.5, contributes little to preventing an epidemic; improving the isolation rate in the latent period is the key to controlling disease spread. The second is a specific mechanism of message delivery and forwarding, in which information quality and the information accumulation process are also considered. Macroscopically, diseases are easy to control as long as the immune messages reach a certain quality; individually, the accumulated messages give people a degree of immunity to the disease. The model is also evaluated on classic complex networks, such as scale-free and small-world networks, and on location-based social networks. Results show that the proposed measures demonstrate superior performance and significantly reduce the negative impact of epidemic disease.

  5. Modified retrieval algorithm for three types of precipitation distribution using x-band synthetic aperture radar

    NASA Astrophysics Data System (ADS)

    Xie, Yanan; Zhou, Mingliang; Pan, Dengke

    2017-10-01

    The forward-scattering model is introduced to describe the response of the normalized radar cross section (NRCS) of precipitation with synthetic aperture radar (SAR). Since the distribution of near-surface rainfall is related to the near-surface rainfall rate and a horizontal distribution factor, a retrieval algorithm called modified regression empirical and model-oriented statistical (M-M), based on Volterra integration theory, is proposed. Compared with the model-oriented statistical and Volterra integration (MOSVI) algorithm, the biggest difference is that the M-M algorithm relies on the modified regression empirical algorithm rather than a linear regression formula to retrieve the near-surface rainfall rate. Half of the empirical parameters in the weighted integration are eliminated, and a smaller average relative error is obtained when the rainfall rate is below 100 mm/h. The algorithm proposed in this paper can therefore obtain high-precision rainfall information.

  6. The Dynamics of Scaling: A Memory-Based Anchor Model of Category Rating and Absolute Identification

    ERIC Educational Resources Information Center

    Petrov, Alexander A.; Anderson, John R.

    2005-01-01

    A memory-based scaling model--ANCHOR--is proposed and tested. The perceived magnitude of the target stimulus is compared with a set of anchors in memory. Anchor selection is probabilistic and sensitive to similarity, base-level strength, and recency. The winning anchor provides a reference point near the target and thereby converts the global…

  7. Adjusted hospital death rates: a potential screen for quality of medical care.

    PubMed

    Dubois, R W; Brook, R H; Rogers, W H

    1987-09-01

    Increased economic pressure on hospitals has accelerated the need for a screening tool to identify hospitals that potentially provide poor quality care. Based upon data from 93 hospitals and 205,000 admissions, we used a multiple regression model to adjust each hospital's crude death rate. The adjustment process used age, origin of the patient from the emergency department or nursing home, and a hospital case mix index based on DRGs (diagnosis related groups). Before adjustment, hospital death rates ranged from 0.3 to 5.8 per 100 admissions. After adjustment, hospital death ratios (actual death rate divided by predicted death rate) ranged from 0.36 to 1.36. Eleven hospitals (12 per cent) were identified where the actual death rate exceeded the predicted death rate by more than two standard deviations. In nine hospitals (10 per cent), the predicted death rate exceeded the actual death rate by a similar statistical margin. The 11 hospitals with higher than predicted death rates may provide inadequate quality of care or have uniquely ill patient populations. The adjusted death rate model needs to be validated and generalized before it can be used routinely to screen hospitals. However, the remaining large differences in observed versus predicted death rates lead us to believe that important differences in hospital performance may exist.
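
    A sketch of the screening arithmetic (invented observed/expected counts, with a rough Poisson standard error in place of the paper's regression-based two-standard-deviation test):

        import math

        hospitals = {"A": (60, 40.0), "B": (35, 38.0), "C": (12, 30.0)}  # (observed, expected deaths)

        for name, (obs, expd) in hospitals.items():
            ratio = obs / expd                   # actual over predicted death rate
            se = math.sqrt(obs) / expd           # rough Poisson SE of the ratio
            flag = "flag" if abs(ratio - 1.0) > 2.0 * se else "ok"
            print(f"hospital {name}: O/E = {ratio:.2f} +/- {se:.2f} -> {flag}")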

  8. Vertical Bridgman growth of Hg1-xMnxTe with variational withdrawal rate

    NASA Astrophysics Data System (ADS)

    Zhi, Gu; Wan-Qi, Jie; Guo-Qiang, Li; Long, Zhang

    2004-09-01

    Based on solute redistribution models, vertical Bridgman growth of Hg1-xMnxTe with a variational withdrawal rate is studied. Both theoretical analysis and experimental results show that the axial composition uniformity is improved and the crystal growth rate is increased with the optimized variational withdrawal rate.

  9. Modelling the charring behaviour of structural lumber

    Treesearch

    Peter W.C. Lau; Robert White; Ineke Van Zealand

    1999-01-01

    Charring rates for large-section timber based on experimental data have been generally established. The established rates may not be appropriately used for the prediction of failure times of lumber members which are small by comparison. It is questionable whether a constant rate can be safely assumed for lumber members since the rate is likely to increase once the...

  10. Are Physics-Based Simulators Ready for Prime Time? Comparisons of RSQSim with UCERF3 and Observations.

    NASA Astrophysics Data System (ADS)

    Milner, K. R.; Shaw, B. E.; Gilchrist, J. J.; Jordan, T. H.

    2017-12-01

    Probabilistic seismic hazard analysis (PSHA) is typically performed by combining an earthquake rupture forecast (ERF) with a set of empirical ground motion prediction equations (GMPEs). ERFs have typically relied on observed fault slip rates and scaling relationships to estimate the rate of large earthquakes on pre-defined fault segments, either ignoring or relying on expert opinion to set the rates of multi-fault or multi-segment ruptures. Version 3 of the Uniform California Earthquake Rupture Forecast (UCERF3) is a significant step forward, replacing expert opinion and fault segmentation with an inversion approach that matches observations better than prior models while incorporating multi-fault ruptures. UCERF3 is a statistical model, however, and doesn't incorporate the physics of earthquake nucleation, rupture propagation, and stress transfer. We examine the feasibility of replacing UCERF3, or components therein, with physics-based rupture simulators such as the Rate-State Earthquake Simulator (RSQSim), developed by Dieterich & Richards-Dinger (2010). RSQSim simulations on the UCERF3 fault system produce catalogs of seismicity that match long term rates on major faults, and produce remarkable agreement with UCERF3 when carried through to PSHA calculations. Averaged over a representative set of sites, the RSQSim-UCERF3 hazard-curve differences are comparable to the small differences between UCERF3 and its predecessor, UCERF2. The hazard-curve agreement between the empirical and physics-based models provides substantial support for the PSHA methodology. RSQSim catalogs include many complex multi-fault ruptures, which we compare with the UCERF3 rupture-plausibility metrics as well as recent observations. Complications in generating physically plausible kinematic descriptions of multi-fault ruptures have thus far prevented us from using UCERF3 in the CyberShake physics-based PSHA platform, which replaces GMPEs with deterministic ground motion simulations. RSQSim produces full slip/time histories that can be directly implemented as sources in CyberShake, without relying on the conditional hypocenter and slip distributions needed for the UCERF models. We also compare RSQSim with time-dependent PSHA calculations based on multi-fault renewal models.
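
    For orientation, the final PSHA step that both UCERF3- and RSQSim-based forecasts feed into can be sketched as a Poisson combination of rupture rates and conditional exceedance probabilities (all numbers invented):

        import math

        # (long-term rate in events/yr, P(ground motion > x | rupture)) per rupture
        ruptures = [(0.01, 0.30), (0.002, 0.80), (0.05, 0.05)]

        def prob_exceed(t_years):
            lam = sum(rate * p for rate, p in ruptures)   # annual exceedance rate
            return 1.0 - math.exp(-lam * t_years)         # Poisson assumption

        print(f"50-yr exceedance probability: {prob_exceed(50.0):.3f}")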

  11. Solution processed deposition of electron transport layers on perovskite crystal surface-A modeling based study

    NASA Astrophysics Data System (ADS)

    Mortuza, S. M.; Taufique, M. F. N.; Banerjee, Soumik

    2017-02-01

    The power conversion efficiency (PCE) of planar perovskite solar cells (PSCs) has reached up to ∼20%. However, structural and chemical defects that lead to hysteresis in the perovskite-based thin film pose challenges. Recent work has shown that thin films of [6,6]-phenyl-C61-butyric acid methyl ester (PCBM) deposited on the photoabsorption layer using solution processing techniques minimize surface pinholes and defects, thereby increasing the PCE. We developed and employed a multiscale model based on molecular dynamics (MD) and kinetic Monte Carlo (kMC) to establish a relationship between deposition rate and surface coverage on the perovskite surface. The MD simulations of PCBMs dispersed in chlorobenzene, sandwiched between (110) perovskite substrates, indicate that PCBMs are deposited through anchoring of the oxygen atom of the carbonyl group to the exposed lead (Pb) atom of the (110) perovskite surface. Based on the rates of distinct deposition events calculated from MD, kMC simulations were run to determine surface coverage at much larger time and length scales than are accessible by MD alone. Based on the model, a generic relationship is established between the deposition rate of PCBMs and surface coverage on the perovskite crystal. The study also provides detailed insights into the morphology of the deposited film.

  12. Estimating Divergence Dates and Substitution Rates in the Drosophila Phylogeny

    PubMed Central

    Obbard, Darren J.; Maclennan, John; Kim, Kang-Wook; Rambaut, Andrew; O’Grady, Patrick M.; Jiggins, Francis M.

    2012-01-01

    An absolute timescale for evolution is essential if we are to associate evolutionary phenomena, such as adaptation or speciation, with potential causes, such as geological activity or climatic change. Timescales in most phylogenetic studies use geologically dated fossils or phylogeographic events as calibration points, but more recently, it has also become possible to use experimentally derived estimates of the mutation rate as a proxy for substitution rates. The large radiation of drosophilid taxa endemic to the Hawaiian islands has provided multiple calibration points for the Drosophila phylogeny, thanks to the "conveyor belt" process by which this archipelago forms and is colonized by species. However, published date estimates for key nodes in the Drosophila phylogeny vary widely, and many are based on simplistic models of colonization and coalescence or on estimates of island age that are not current. In this study, we use new sequence data from seven species of Hawaiian Drosophila to examine a range of explicit coalescent models and estimate substitution rates. We use these rates, along with a published experimentally determined mutation rate, to date key events in drosophilid evolution. Surprisingly, our estimate for the date for the most recent common ancestor of the genus Drosophila based on mutation rate (25–40 Ma) is closer to being compatible with independent fossil-derived dates (20–50 Ma) than are most of the Hawaiian-calibration models and also has smaller uncertainty. We find that Hawaiian-calibrated dates are extremely sensitive to model choice and give rise to point estimates that range between 26 and 192 Ma, depending on the details of the model. Potential problems with the Hawaiian calibration may arise from systematic variation in the molecular clock due to the long generation time of Hawaiian Drosophila compared with other Drosophila and/or uncertainty in linking island formation dates with colonization dates. As either source of error will bias estimates of divergence time, we suggest mutation rate estimates be used until better models are available. PMID:22683811
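
    The mutation-rate calibration reduces to a molecular-clock calculation: pairwise divergence d accumulates at twice the per-lineage rate r, so T = d/(2r). The numbers below are illustrative, not the paper's estimates:

        d = 0.20        # pairwise divergence, substitutions/site (assumed)
        r = 3.5e-9      # substitution rate, subs/site/year (assumed)
        T = d / (2.0 * r)
        print(f"divergence time ~ {T / 1e6:.0f} Ma")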

  13. Abnormal Strain Rate Sensitivity Driven by a Unit Dislocation-Obstacle Interaction in bcc Fe

    NASA Astrophysics Data System (ADS)

    Bai, Zhitong; Fan, Yue

    2018-03-01

    The interaction between an edge dislocation and a sessile vacancy cluster in bcc Fe is investigated over a wide range of strain rates, from 10⁸ down to 10³ s⁻¹, enabled by an energy landscape-based atomistic modeling algorithm. It is observed that, in the low strain rate regime below 10⁵ s⁻¹, the interaction leads to a surprising negative strain rate sensitivity because of the different intermediate microstructures that emerge under the complex interplay between thermal activation and applied strain rate. Implications of our findings for the previously established global diffusion model are also discussed.

  14. Prediction of indoor radon/thoron concentration in a model room from exhalation rates of building materials for different ventilation rates

    NASA Astrophysics Data System (ADS)

    Kumar, Manish; Sharma, Navjeet; Sarin, Amit

    2018-05-01

    Studies have confirmed that elevated levels of radon/thoron in human environments can substantially increase the risk of lung cancer in the general population. Building materials are the second largest contributor to indoor radon/thoron after the soil and bedrock beneath dwellings. In the present investigation, the exhalation rates of radon/thoron from different building material samples were analysed using an active technique. Radon/thoron concentrations in a model room were then predicted from the exhalation rates of the walls, floor and roof. The predicted indoor concentrations show significant variation depending on the ventilation rate and the type of building materials used.
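
    A steady-state mass balance shows how exhalation rate and ventilation set the indoor concentration; the room geometry and exhalation rate below are invented, and this simple form suits radon (thoron's ~56 s half-life makes decay, not ventilation, dominant):

        # C = E*A / (V * (lam + lam_v)): E exhalation (Bq m^-2 h^-1), A emitting
        # area, V room volume, lam radon decay (0.00755 1/h), lam_v ventilation.
        def indoor_radon(E, area_m2, volume_m3, vent_per_h, lam=0.00755):
            return E * area_m2 / (volume_m3 * (lam + vent_per_h))

        for vent in (0.1, 0.5, 1.0):
            c = indoor_radon(E=0.5, area_m2=74.0, volume_m3=30.0, vent_per_h=vent)
            print(f"ventilation {vent:.1f}/h -> radon ~ {c:.1f} Bq/m^3")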

  15. Applying the natural disasters vulnerability evaluation model to the March 2011 north-east Japan earthquake and tsunami.

    PubMed

    Ruiz Estrada, Mario Arturo; Yap, Su Fei; Park, Donghyun

    2014-07-01

    Natural hazards have a potentially large impact on economic growth, but measuring their economic impact is subject to a great deal of uncertainty. The central objective of this paper is to demonstrate a model, the natural disasters vulnerability evaluation (NDVE) model, that can be used to evaluate the impact of natural hazards on gross national product growth. The model is based on five basic indicators: natural hazards growth rates (αi), the national natural hazards vulnerability rate (ΩT), the natural disaster devastation magnitude rate (Π), the economic desgrowth rate (i.e. shrinkage of the economy) (δ), and the NHV surface. In addition, we apply the NDVE model to the north-east Japan earthquake and tsunami of March 2011 to evaluate its impact on the Japanese economy. © 2014 The Author(s). Disasters © Overseas Development Institute, 2014.

  16. What explains usage of mobile physician-rating apps? Results from a web-based questionnaire.

    PubMed

    Bidmon, Sonja; Terlutter, Ralf; Röttl, Johanna

    2014-06-11

    Consumers are increasingly accessing health-related information via mobile devices. Recently, several apps to rate and locate physicians have been released in the United States and Germany. However, knowledge about what kinds of variables explain usage of mobile physician-rating apps is still lacking. This study analyzes factors influencing the adoption of and willingness to pay for mobile physician-rating apps. A structural equation model was developed based on the Technology Acceptance Model and the literature on health-related information searches and usage of mobile apps. Relationships in the model were analyzed for moderating effects of physician-rating website (PRW) usage. A total of 1006 randomly selected German patients who had visited a general practitioner at least once in the 3 months before the beginning of the survey were randomly selected and surveyed. A total of 958 usable questionnaires were analyzed by partial least squares path modeling and moderator analyses. The suggested model yielded a high model fit. We found that perceived ease of use (PEOU) of the Internet to gain health-related information, the sociodemographic variables age and gender, and the psychographic variables digital literacy, feelings about the Internet and other Web-based applications in general, patients' value of health-related knowledgeability, as well as the information-seeking behavior variables regarding the amount of daily private Internet use for health-related information, frequency of using apps for health-related information in the past, and attitude toward PRWs significantly affected the adoption of mobile physician-rating apps. The sociodemographic variable age, but not gender, and the psychographic variables feelings about the Internet and other Web-based applications in general and patients' value of health-related knowledgeability, but not digital literacy, were significant predictors of willingness to pay. Frequency of using apps for health-related information in the past and attitude toward PRWs, but not the amount of daily Internet use for health-related information, were significant predictors of willingness to pay. The perceived usefulness of the Internet to gain health-related information and the amount of daily Internet use in general did not have any significant effect on both of the endogenous variables. The moderation analysis with the group comparisons for users and nonusers of PRWs revealed that the attitude toward PRWs had significantly more impact on the adoption and willingness to pay for mobile physician-rating apps in the nonuser group. Important variables that contribute to the adoption of a mobile physician-rating app and the willingness to pay for it were identified. The results of this study are important for researchers because they can provide important insights about the variables that influence the acceptance of apps that allow for ratings of physicians. They are also useful for creators of mobile physician-rating apps because they can help tailor mobile physician-rating apps to the consumers' characteristics and needs.


  18. Extending rule-based methods to model molecular geometry and 3D model resolution.

    PubMed

    Hoard, Brittany; Jacobson, Bruna; Manavi, Kasra; Tapia, Lydia

    2016-08-01

    Computational modeling is an important tool for the study of complex biochemical processes associated with cell signaling networks. However, it is challenging to simulate processes that involve hundreds of large molecules due to the high computational cost of such simulations. Rule-based modeling is a method that can be used to simulate these processes with reasonably low computational cost, but traditional rule-based modeling approaches do not include details of molecular geometry. The incorporation of geometry into biochemical models can more accurately capture details of these processes, and may lead to insights into how geometry affects the products that form. Furthermore, geometric rule-based modeling can be used to complement other computational methods that explicitly represent molecular geometry in order to quantify binding site accessibility and steric effects. We propose a novel implementation of rule-based modeling that encodes details of molecular geometry into the rules and binding rates. We demonstrate how rules are constructed according to the molecular curvature. We then perform a study of antigen-antibody aggregation using our proposed method. We simulate the binding of antibody complexes to binding regions of the shrimp allergen Pen a 1 using a previously developed 3D rigid-body Monte Carlo simulation, and we analyze the aggregate sizes. Then, using our novel approach, we optimize a rule-based model according to the geometry of the Pen a 1 molecule and the data from the Monte Carlo simulation. We use the distances between the binding regions of Pen a 1 to optimize the rules and binding rates. We perform this procedure for multiple conformations of Pen a 1 and analyze the impact of conformation and resolution on the optimal rule-based model. We find that the optimized rule-based models provide information about the average steric hindrance between binding regions and the probability that antibodies will bind to these regions. These optimized models quantify the variation in aggregate size that results from differences in molecular geometry and from model resolution.
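
    As a rough illustration of how geometry can be encoded into rules and binding rates, the following Python sketch runs a Gillespie-style stochastic simulation in which the effective on-rate of each binding region is penalized by the proximity of already-occupied neighbors. The coordinates, rate constants, and the linear steric penalty are all assumptions for illustration and are not the paper's optimized Pen a 1 models.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical 2D coordinates (nm) of four binding regions on an
        # elongated allergen; these only loosely mimic a rod-like molecule.
        sites = np.array([[0.0, 0.0], [4.0, 0.5], [8.0, 0.0], [12.0, 0.5]])
        K_ON = 1.0        # assumed intrinsic on-rate (1/s)
        STERIC_LEN = 5.0  # assumed range (nm) of the steric penalty

        def site_rate(i, occupied):
            # Geometric rule: each already-bound neighbor closer than
            # STERIC_LEN scales down the effective on-rate (toy penalty).
            rate = K_ON
            for j in occupied:
                d = np.linalg.norm(sites[i] - sites[j])
                if d < STERIC_LEN:
                    rate *= d / STERIC_LEN
            return rate

        def gillespie(t_end=50.0):
            t, occupied = 0.0, []
            while t < t_end:
                free = [i for i in range(len(sites)) if i not in occupied]
                rates = [site_rate(i, occupied) for i in free]
                total = sum(rates)
                if not free or total == 0.0:
                    break
                t += rng.exponential(1.0 / total)   # time to next event
                i = rng.choice(free, p=np.array(rates) / total)
                occupied.append(int(i))             # an antibody binds site i
            return occupied

        print("binding order:", gillespie())

    Distance-dependent rates of this kind are what the paper optimizes against Monte Carlo aggregate-size data; here they simply bias early binding away from crowded regions.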

  19. A mathematical framework for yield (vs. rate) optimization in constraint-based modeling and applications in metabolic engineering.

    PubMed

    Klamt, Steffen; Müller, Stefan; Regensburger, Georg; Zanghellini, Jürgen

    2018-05-01

    The optimization of metabolic rates (as linear objective functions) represents the methodical core of flux-balance analysis techniques which have become a standard tool for the study of genome-scale metabolic models. Besides (growth and synthesis) rates, metabolic yields are key parameters for the characterization of biochemical transformation processes, especially in the context of biotechnological applications. However, yields are ratios of rates, and hence the optimization of yields (as nonlinear objective functions) under arbitrary linear constraints is not possible with current flux-balance analysis techniques. Despite the fundamental importance of yields in constraint-based modeling, a comprehensive mathematical framework for yield optimization is still missing. We present a mathematical theory that allows one to systematically compute and analyze yield-optimal solutions of metabolic models under arbitrary linear constraints. In particular, we formulate yield optimization as a linear-fractional program. For practical computations, we transform the linear-fractional yield optimization problem to a (higher-dimensional) linear problem. Its solutions determine the solutions of the original problem and can be used to predict yield-optimal flux distributions in genome-scale metabolic models. For the theoretical analysis, we consider the linear-fractional problem directly. Most importantly, we show that the yield-optimal solution set (like the rate-optimal solution set) is determined by (yield-optimal) elementary flux vectors of the underlying metabolic model. However, yield- and rate-optimal solutions may differ from each other, and hence optimal (biomass or product) yields are not necessarily obtained at solutions with optimal (growth or synthesis) rates. Moreover, we discuss phase planes/production envelopes and yield spaces, in particular, we prove that yield spaces are convex and provide algorithms for their computation. We illustrate our findings by a small example and demonstrate their relevance for metabolic engineering with realistic models of E. coli. We develop a comprehensive mathematical framework for yield optimization in metabolic models. Our theory is particularly useful for the study and rational modification of cell factories designed under given yield and/or rate requirements. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
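
    The reduction of the yield (ratio) objective to a linear program can be illustrated with the classical Charnes-Cooper substitution w = t * v with d^T w = 1, which is presumably the kind of higher-dimensional linear problem the authors describe. The sketch below applies it to a hypothetical three-reaction toy network, not one of the paper's E. coli models, using scipy.optimize.linprog:

        import numpy as np
        from scipy.optimize import linprog

        # Toy network (hypothetical, not from the paper):
        #   R1: S -> A (uptake, v1), R2: A -> P (product, v2), R3: A -> B (v3)
        # Internal metabolite A must balance: v1 - v2 - v3 = 0.
        # Assumed maintenance-like constraint: v3 >= 0.2 * v1.
        # Objective: maximize the yield Y = v2 / v1 (product per substrate).
        # Charnes-Cooper: w = t * v with t > 0 and d^T w = 1, where
        # d^T v = v1 is the yield denominator. Variables: [w1, w2, w3, t].
        ub = np.array([10.0, 10.0, 10.0])       # upper flux bounds on v
        A_eq = np.array([
            [1.0, -1.0, -1.0, 0.0],             # S w = 0 (balance of A)
            [1.0,  0.0,  0.0, 0.0],             # d^T w = 1 (normalization)
        ])
        b_eq = np.array([0.0, 1.0])
        A_ub = np.vstack([
            np.hstack([np.eye(3), -ub.reshape(-1, 1)]),  # w <= ub * t
            [[0.2, 0.0, -1.0, 0.0]],                     # 0.2*w1 - w3 <= 0
        ])
        b_ub = np.zeros(4)
        c = np.array([0.0, -1.0, 0.0, 0.0])     # minimize -w2, i.e. max yield
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0.0, None)] * 4)
        w, t = res.x[:3], res.x[3]
        v = w / t                               # recover the flux vector
        print("optimal yield:", v[1] / v[0], "fluxes:", v)

    Here the maintenance-like constraint caps the optimal yield at 0.8; in this particular toy case the recovered flux distribution is also rate-optimal, though, as the abstract stresses, yield- and rate-optimal solutions need not coincide in general.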

  20. Disturbance Distance: Using a process-based ecosystem model to estimate and map potential thresholds in disturbance rates that would give rise to fundamentally altered ecosystems

    NASA Astrophysics Data System (ADS)

    Dolan, K. A.; Hurtt, G. C.; Fisk, J.; Flanagan, S.; LePage, Y.; Sahajpal, R.

    2014-12-01

    Disturbance plays a critical role in shaping the structure and function of forested ecosystems as well as the ecosystem services they provide, including but not limited to carbon storage, biodiversity habitat, water quality and flow, and land-atmosphere exchanges of energy and water. As recent studies highlight novel disturbance regimes resulting from pollution, invasive pests, and climate change, there is a need to include these alterations in predictions of future forest function and structure. The Ecosystem Demography (ED) model is a mechanistic model of forest ecosystem dynamics in which individual-based forest dynamics can be efficiently implemented over regional to global scales thanks to advanced scaling methods. We use ED to characterize the sensitivity of potential vegetation structure and function to changes in rates of density-independent mortality. The disturbance rate within ED can be altered either directly or through sub-models; disturbance sub-models in ED currently include fire, land use, and hurricanes. We take a tiered approach to understanding the sensitivity of North American ecosystems to changes in background density-independent mortality. Our first analyses were conducted at half-degree spatial resolution with a disturbance rate held constant in space and time but varied between runs. Annual climate was held constant at the site level, and the land use and fire sub-models were turned off. Results showed an ~30% increase in non-forest area across the US when the disturbance rate was raised from 0.6% per year to 1.2% per year, and a more than 3.5-fold increase in non-forest area when the rate doubled again from 1.2% to 2.4%. Subsequent runs altered natural background disturbance rates with the existing fire and hurricane sub-models turned on, together with historic and future land use. By quantifying differences between model outputs that characterize carbon-cycle-related ecosystem structure and function across the US, we are identifying areas and characteristics that are most sensitive to changes in disturbance rates.
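
    For intuition on why non-forest area grows nonlinearly with disturbance rate, a first-order stand-age argument helps: if stand-replacing disturbances arrive as a Poisson process at rate lam (per year) and a patch needs T years to recover to forest, the equilibrium stand-age distribution is exponential and the forest fraction is exp(-lam * T). The toy Python calculation below uses the abstract's three rates with an assumed T = 50 years; it is not the ED model and will not reproduce the reported sensitivities.

        import numpy as np

        T_CLOSURE = 50.0  # assumed years for a patch to recover to "forest"

        for lam in (0.006, 0.012, 0.024):  # disturbance rates from the runs
            forest = np.exp(-lam * T_CLOSURE)  # P(stand age > T_CLOSURE)
            print(f"rate {lam:.3f}/yr: forest fraction {forest:.2f}, "
                  f"non-forest {1.0 - forest:.2f}")

    Even this crude calculation shows non-forest area rising steeply with each doubling of the rate, the threshold-like behavior that the full ED runs map across North America.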
