Kumar, Atul; Samadder, S R
2017-10-01
Accurate prediction of the quantity of household solid waste generation is essential for effective management of municipal solid waste (MSW). In practice, modelling methods are often useful for precise prediction of the MSW generation rate. In this study, two models were proposed that establish relationships between the household solid waste generation rate and socioeconomic parameters such as household size, total family income, education, occupation and fuel used in the kitchen. Multiple linear regression was applied to develop the two models, one for the prediction of the biodegradable MSW generation rate and the other for the non-biodegradable MSW generation rate for individual households of the city of Dhanbad, India. The results of the two models showed that the coefficients of determination (R²) were 0.782 for the biodegradable waste generation rate and 0.676 for the non-biodegradable waste generation rate using the selected independent variables. The accuracy tests of the developed models showed convincing results, as the predicted values were very close to the observed values. Validation of the developed models with a new set of data indicated a good fit for actual prediction purposes, with predicted R² values of 0.76 and 0.64 for the biodegradable and non-biodegradable MSW generation rates, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.
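The approach described is ordinary multiple linear regression of household waste rates on socioeconomic predictors. A minimal sketch of that technique in Python; the predictor set and the synthetic data are illustrative placeholders, not the authors' Dhanbad dataset:

```python
import numpy as np

# Illustrative predictors: household size, income, education score,
# occupation code, kitchen-fuel code (synthetic placeholder data).
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = 0.3 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(scale=0.2, size=50)

# Fit y = b0 + b . x by ordinary least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Coefficient of determination R^2 on the training data.
resid = y - A @ coef
r2 = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
print(coef, r2)
```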
Developing models for the prediction of hospital healthcare waste generation rate.
Tesfahun, Esubalew; Kumie, Abera; Beyene, Abebe
2016-01-01
An increase in the number of health institutions, along with frequent use of disposable medical products, has contributed to an increasing healthcare waste generation rate. For proper handling of healthcare waste, it is crucial to predict the amount of waste generated beforehand. Predictive models can help to optimise healthcare waste management systems, set guidelines and evaluate the prevailing strategies for healthcare waste handling and disposal. However, no mathematical model has been developed for Ethiopian hospitals to predict the healthcare waste generation rate. Therefore, the objective of this research was to develop models for the prediction of the healthcare waste generation rate. A longitudinal study design was used to generate long-term data on solid healthcare waste composition and generation rate, and to develop predictive models. The results revealed that the healthcare waste generation rate has a strong linear correlation with the number of inpatients (R² = 0.965), and a weak one with the number of outpatients (R² = 0.424). Statistical analysis was carried out to develop models for the prediction of the quantity of waste generated at each hospital type (public, teaching and private). In these models, the numbers of inpatients and outpatients were revealed to be significant predictors of the quantity of waste generated. The influence of the number of inpatients and outpatients treated varies at different hospitals. Therefore, different models were developed based on the types of hospitals. © The Author(s) 2015.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keser, Saniye; Duzgun, Sebnem; Department of Geodetic and Geographic Information Technologies, Middle East Technical University, 06800 Ankara
Highlights: • Spatial autocorrelation exists in municipal solid waste generation rates for different provinces in Turkey. • Traditional non-spatial regression models may not provide sufficient information for better solid waste management. • Unemployment rate is a global variable that significantly impacts the waste generation rates in Turkey. • Significances of global parameters may diminish at local scale for some provinces. • The GWR model can be used to create clusters of cities for solid waste management. - Abstract: In studies focusing on the factors that impact solid waste generation habits and rates, the potential spatial dependency in solid waste generation data is not considered in relating the waste generation rates to their determinants. In this study, spatial dependency is taken into account in the determination of the significant socio-economic and climatic factors that may be of importance for the municipal solid waste (MSW) generation rates in different provinces of Turkey. Simultaneous spatial autoregression (SAR) and geographically weighted regression (GWR) models are used for the spatial data analyses. As in ordinary least squares regression (OLSR), regression coefficients are global in the SAR model; in other words, the effect of a given independent variable on the dependent variable is valid for the whole country. Unlike OLSR or SAR, GWR reveals the local impact of a given factor (or independent variable) on the waste generation rates of different provinces. Results show that provinces within closer neighborhoods have similar MSW generation rates. On the other hand, this spatial autocorrelation is not very high for the explanatory variables considered in the study. OLSR and SAR models have similar regression coefficients. GWR is useful for indicating the local determinants of MSW generation rates, and the GWR model can be utilized to plan waste management activities at local scale, including waste minimization, collection, treatment, and disposal. At global scale, the MSW generation rates in Turkey are significantly related to unemployment rate and asphalt-paved roads ratio. Yet the significance of these variables may diminish at local scale for some provinces, where different factors may be important in affecting MSW generation rates.
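GWR, the key method here, fits a separate weighted least-squares regression at each location, with observation weights that decay with distance from that location. A minimal numpy sketch under an assumed fixed Gaussian kernel and synthetic data (not the Turkish provincial dataset):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 60
coords = rng.uniform(0, 100, size=(n, 2))              # province centroids (synthetic)
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + one predictor
y = X @ np.array([2.0, 0.5]) + rng.normal(scale=0.3, size=n)

bandwidth = 25.0
local_coefs = np.empty((n, 2))
for i in range(n):
    d = np.linalg.norm(coords - coords[i], axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)            # Gaussian kernel weights
    W = np.diag(w)
    # Weighted least squares at location i: beta_i = (X'WX)^-1 X'Wy
    local_coefs[i] = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

print(local_coefs[:3])  # coefficients vary from province to province
```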
A global reference for caesarean section rates (C-Model): a multicountry cross-sectional study.
Souza, J P; Betran, A P; Dumont, A; de Mucio, B; Gibbs Pickens, C M; Deneux-Tharaux, C; Ortiz-Panozo, E; Sullivan, E; Ota, E; Togoobaatar, G; Carroli, G; Knight, H; Zhang, J; Cecatti, J G; Vogel, J P; Jayaratne, K; Leal, M C; Gissler, M; Morisaki, N; Lack, N; Oladapo, O T; Tunçalp, Ö; Lumbiganon, P; Mori, R; Quintana, S; Costa Passos, A D; Marcolin, A C; Zongo, A; Blondel, B; Hernández, B; Hogue, C J; Prunet, C; Landman, C; Ochir, C; Cuesta, C; Pileggi-Castro, C; Walker, D; Alves, D; Abalos, E; Moises, Ecd; Vieira, E M; Duarte, G; Perdona, G; Gurol-Urganci, I; Takahiko, K; Moscovici, L; Campodonico, L; Oliveira-Ciabati, L; Laopaiboon, M; Danansuriya, M; Nakamura-Pereira, M; Costa, M L; Torloni, M R; Kramer, M R; Borges, P; Olkhanud, P B; Pérez-Cuevas, R; Agampodi, S B; Mittal, S; Serruya, S; Bataglia, V; Li, Z; Temmerman, M; Gülmezoglu, A M
2016-02-01
To generate a global reference for caesarean section (CS) rates at health facilities. Cross-sectional study. Health facilities from 43 countries. Thirty-eight thousand three hundred and twenty-four women giving birth from 22 countries for model building, and 10,045,875 women giving birth from 43 countries for model testing. We hypothesised that mathematical models could determine the relationship between clinical-obstetric characteristics and CS. These models generated probabilities of CS that could be compared with the observed CS rates. We devised a three-step approach to generate the global benchmark of CS rates at health facilities: creation of a multi-country reference population, building mathematical models, and testing these models. Area under the ROC curves, diagnostic odds ratio, expected CS rate, observed CS rate. According to the different versions of the model, areas under the ROC curves suggested a good discriminatory capacity of C-Model, with summary estimates ranging from 0.832 to 0.844. The C-Model was able to generate expected CS rates adjusted for the case-mix of the obstetric population. We have also prepared an e-calculator to facilitate use of C-Model (www.who.int/reproductivehealth/publications/maternal_perinatal_health/c-model/en/). This article describes the development of a global reference for CS rates. Based on maternal characteristics, this tool was able to generate an individualised expected CS rate for health facilities or groups of health facilities. With C-Model, obstetric teams, health system managers, health facilities, health insurance companies, and governments can produce a customised reference CS rate for assessing use (and overuse) of CS. The C-Model provides a customised benchmark for caesarean section rates in health facilities and systems. © 2015 World Health Organization; licensed by John Wiley & Sons Ltd on behalf of Royal College of Obstetricians and Gynaecologists.
Daugirdas, John T; Depner, Thomas A
2017-11-01
A convenient method to estimate the creatinine generation rate and measures of creatinine clearance in hemodialysis patients using formal kinetic modeling and standard pre- and postdialysis blood samples has not been described. We used data from 366 dialysis sessions characterized during follow-up month 4 of the HEMO study, during which cross-dialyzer clearances for both urea and creatinine were available. Blood samples taken at 1 h into dialysis and 30 min and 60 min after dialysis were used to determine how well a two-pool kinetic model could predict creatinine concentrations and other kinetic parameters, including the creatinine generation rate. An extrarenal creatinine clearance of 0.038 l/kg/24 h was included in the model. Diffusive cross-dialyzer clearances of urea [230 (SD 37) mL/min] correlated well (R² = 0.78) with creatinine clearances [164 (SD 30) mL/min]. When the effective diffusion volume flow rate was set at 0.791 times the blood flow rate for the cross-dialyzer clearance measurements at 1 h into dialysis, the mean calculated volume of creatinine distribution averaged 29.6 (SD 7.2) L, compared with 31.6 (SD 7.0) L for urea (P < 0.01). The modeled creatinine generation rate [1183 (SD 463) mg/day] averaged 100.1% (SD 29; median 99.3) of that predicted in nondialysis patients by an anthropometric equation. A simplified method for modeling the creatinine generation rate using the urea distribution volume and urea dialyzer clearance, without use of the postdialysis serum creatinine measurement, gave results for the creatinine generation rate [1187 (SD 475) mg/day] that closely matched the formally modeled value (R² = 0.971). Our analysis confirms previous findings of similar distribution volumes for creatinine and urea. After taking extrarenal clearance into consideration, the creatinine generation rate in dialysis patients is similar to that in nondialysis patients. A simplified method based on urea clearance and urea distribution volume, not requiring a postdialysis serum creatinine measurement, can be used to yield creatinine generation rates that closely match those determined from standard modeling. © The Author 2017. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
Verification of Bwo Model of Vlf Chorus Generation Using Magion 5 Data
NASA Astrophysics Data System (ADS)
Titova, E. E.; Kozelov, B. V.; Jiricek, F.; Smilauer, J.; Demekhov, A. G.; Trakhtengerts, V. Yu.
We present a detailed study of chorus emissions in the magnetosphere detected onboard the Magion 5 satellite, when the satellite was at low magnetic latitudes. We determine the frequency sweep rate and the periods of electromagnetic VLF chorus emissions. These results are considered within the concept of the backward wave oscillator (BWO) regime of chorus generation. Comparison of the frequency sweep rate of chorus elements shows: (i) There is a correlation between the frequency sweep rates and the chorus amplitudes. The frequency sweep rate increases with chorus amplitude in accord with expectations from the BWO model. (ii) The chorus growth rate, estimated from the frequency sweep rate, is in accord with that inferred from the BWO generation mechanism. (iii) The BWO regime of chorus generation ensures the observed decrease in the frequency sweep rate of the chorus elements with increasing L shell. We also discuss the relationship between the observed periods of chorus elements and the predictions following from the BWO model of chorus generation.
Constructing stage-structured matrix population models from life tables: comparison of methods.
Fujiwara, Masami; Diaz-Lopez, Jasmin
2017-01-01
A matrix population model is a convenient tool for summarizing per capita survival and reproduction rates (collectively vital rates) of a population and can be used for calculating an asymptotic finite population growth rate (λ) and generation time. These two pieces of information can be used for determining the status of a threatened species. The use of stage-structured population models has increased in recent years, and the vital rates in such models are often estimated using a life table analysis. However, potential bias introduced when converting age-structured vital rates estimated from a life table into parameters for a stage-structured population model has not been assessed comprehensively. The objective of this study was to investigate the performance of methods for such conversions using simulated life histories of organisms. The underlying models incorporate various types of life history and true population growth rates of varying levels. The performance was measured by comparing differences in λ and the generation time calculated using the Euler-Lotka equation, age-structured population matrices, and several stage-structured population matrices that were obtained by applying different conversion methods. The results show that the discretization of age introduces only small bias in λ or generation time. Similarly, assuming a fixed age of maturation at the mean age of maturation does not introduce much bias. However, aggregating age-specific survival rates into a stage-specific survival rate and estimating a stage-transition rate can introduce substantial bias depending on the organism's life history type and the true values of λ. In order to aggregate survival rates, the use of the weighted arithmetic mean was the most robust method for estimating λ. Here, the weights are given by the survivorship curve after discounting with λ. To estimate a stage-transition rate, matching the proportion of individuals transitioning, with λ used for discounting the rate, was the best approach. However, stage-structured models performed poorly in estimating generation time, regardless of the methods used for constructing the models. Based on the results, we recommend using an age-structured matrix population model or the Euler-Lotka equation for calculating λ and generation time when life table data are available. Then, these age-structured vital rates can be converted into a stage-structured model for further analyses.
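The two baselines compared in this study can be computed directly: λ is the dominant eigenvalue of an age-structured (Leslie) matrix, and the same λ solves the Euler-Lotka equation Σ_x λ^(-x) l_x m_x = 1. A small sketch with made-up three-age-class vital rates:

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative 3-age-class Leslie matrix: fecundities on the first row,
# survival rates on the subdiagonal (made-up vital rates).
fecundity = np.array([0.0, 1.2, 1.5])
survival = np.array([0.6, 0.4])
L = np.zeros((3, 3))
L[0, :] = fecundity
L[1, 0], L[2, 1] = survival

lam_matrix = max(np.linalg.eigvals(L).real)   # dominant eigenvalue = lambda

# Euler-Lotka: sum_x lambda^(-x) * l_x * m_x = 1, with l_x = survivorship.
lx = np.array([1.0, 0.6, 0.6 * 0.4])          # survivorship to each age class
mx = fecundity
f = lambda lam: np.sum(lam ** -np.arange(1, 4) * lx * mx) - 1.0
lam_euler = brentq(f, 0.1, 5.0)
print(lam_matrix, lam_euler)                  # the two estimates agree
```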
Karanjekar, Richa V; Bhatt, Arpita; Altouqui, Said; Jangikhatoonabad, Neda; Durai, Vennila; Sattler, Melanie L; Hossain, M D Sahadat; Chen, Victoria
2015-12-01
Accurately estimating landfill methane emissions is important for quantifying a landfill's greenhouse gas emissions and power generation potential. Current models, including LandGEM and IPCC, often greatly simplify the treatment of factors like rainfall and ambient temperature, which can substantially impact gas production. The newly developed Capturing Landfill Emissions for Energy Needs (CLEEN) model aims to improve landfill methane generation estimates, but still requires inputs that are fairly easy to obtain: waste composition, annual rainfall, and ambient temperature. To develop the model, methane generation was measured from 27 laboratory-scale landfill reactors with varying waste compositions (ranging from 0% to 100%), average rainfall rates of 2, 6, and 12 mm/day, and temperatures of 20, 30, and 37°C, according to a statistical experimental design. Refuse components considered were the major biodegradable wastes (food, paper, yard/wood, and textile) as well as inert inorganic waste. Based on the data collected, a multiple linear regression equation (R² = 0.75) was developed to predict first-order methane generation rate constant values k as functions of waste composition, annual rainfall, and temperature. Because laboratory methane generation rates exceed field rates, a second scale-up regression equation for k was developed using actual gas-recovery data from 11 landfills in high-income countries with conventional operation. The CLEEN model was developed by incorporating both regression equations into the first-order decay based model for estimating methane generation rates from landfills. CLEEN model values were compared to actual field data from 6 US landfills, and to estimates from LandGEM and IPCC. For 4 of the 6 cases, CLEEN model estimates were the closest to actual. Copyright © 2015 Elsevier Ltd. All rights reserved.
Thermal modeling of the lithium/polymer battery
NASA Astrophysics Data System (ADS)
Pals, C. R.
1994-10-01
Research in the area of advanced batteries for electric-vehicle applications has increased steadily since the 1990 zero-emission-vehicle mandate of the California Air Resources Board. Due to their design flexibility and potentially high energy and power densities, lithium/polymer batteries are an emerging technology for electric-vehicle applications. Thermal modeling of lithium/polymer batteries is particularly important because the transport properties of the system depend exponentially on temperature. Two models have been presented for assessment of the thermal behavior of lithium/polymer batteries. The one-cell model predicts the cell potential, the concentration profiles, and the heat-generation rate during discharge. The cell-stack model predicts temperature profiles and heat transfer limitations of the battery. Due to the variation of ionic conductivity and salt diffusion coefficient with temperature, the performance of the lithium/polymer battery is greatly affected by temperature. Because of this variation, it is important to optimize the cell operating temperature and design a thermal management system for the battery. Since the thermal conductivity of the polymer electrolyte is very low, heat is not easily conducted in the direction perpendicular to cell layers. Temperature profiles in the cells are not as significant as expected because heat-generation rates in warmer areas of the cell stack are lower than heat-generation rates in cooler areas of the stack. This nonuniform heat-generation rate flattens the temperature profile. Temperature profiles as calculated by this model are not as steep as those calculated by previous models that assume a uniform heat-generation rate.
Modeled occupational exposures to gas-phase medical laser-generated air contaminants.
Lippert, Julia F; Lacey, Steven E; Jones, Rachael M
2014-01-01
Exposure monitoring data indicate the potential for substantive exposure to laser-generated air contaminants (LGAC); however, the diversity of medical lasers and their applications limits generalization from direct workplace monitoring. Emission rates of seven previously reported gas-phase constituents of medical LGAC were determined experimentally and used in a semi-empirical two-zone model to estimate a range of plausible occupational exposures for health care staff. Single-source emission rates were estimated in an emission chamber using a one-compartment mass-balance model at steady state. Clinical facility parameters such as room size and ventilation rate were based on standard ventilation and environmental conditions required for a laser surgical facility in compliance with regulatory agencies. All input variables in the model, including point-source emission rates, were varied over appropriate distributions in a Monte Carlo simulation to generate a range of time-weighted average (TWA) concentrations in the near-field and far-field zones of the room, a conservative approach inclusive of all contributing factors intended to inform future predictive models. The estimated concentrations were assessed for risk: the highest values were at least three orders of magnitude lower than the relevant occupational exposure limits (OELs). Estimated values do not appear to present a significant exposure hazard within the conditions of our emission rate estimates.
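In the standard near-field/far-field (two-zone) formulation, the steady-state far-field concentration is G/Q and the near field adds G/β, where G is the emission rate, Q the room airflow, and β the interzonal airflow. A minimal Monte Carlo sketch of that calculation; the input distributions are illustrative placeholders, not the study's fitted values:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Illustrative input distributions (placeholders, not the study's values).
G = rng.lognormal(mean=np.log(0.5), sigma=0.5, size=n)  # emission rate, mg/min
Q = rng.uniform(5.0, 20.0, size=n)                      # room airflow, m3/min
beta = rng.uniform(1.0, 5.0, size=n)                    # interzonal airflow, m3/min

# Steady-state two-zone concentrations (mg/m3).
C_far = G / Q
C_near = G / Q + G / beta

print(np.percentile(C_near, [50, 95]), np.percentile(C_far, [50, 95]))
```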
NASA Astrophysics Data System (ADS)
Kovalev, I. V.; Sidorov, V. G.; Zelenkov, P. V.; Khoroshko, A. Y.; Lelekov, A. T.
2015-10-01
To optimize the parameters of a beta-electric converter of Nickel-63 isotope radiation, a model of the distribution of the electron-hole pair (EHP) generation rate in the semiconductor must be derived. Using Monte Carlo methods in the GEANT4 system with ultra-low-energy electron physics models, this distribution in silicon was calculated and approximated with a Gaussian function. The maximal efficient isotope layer thickness and the maximal energy efficiency of EHP generation were estimated.
Azadi, Sama; Karimi-Jashni, Ayoub
2016-02-01
Predicting the mass of solid waste generated plays an important role in integrated solid waste management plans. In this study, the performance of two predictive models, Artificial Neural Network (ANN) and Multiple Linear Regression (MLR), was verified for predicting the mean Seasonal Municipal Solid Waste Generation (SMSWG) rate. The accuracy of the proposed models is illustrated through a case study of 20 cities located in Fars Province, Iran. Four performance measures (MAE, MAPE, RMSE and R) were used to evaluate the performance of these models. The MLR, as a conventional model, showed poor prediction performance. On the other hand, the results indicated that the ANN model, as a non-linear model, has higher predictive accuracy when it comes to prediction of the mean SMSWG rate. As a result, in order to develop a more cost-effective strategy for waste management in the future, the ANN model could be used to predict the mean SMSWG rate. Copyright © 2015 Elsevier Ltd. All rights reserved.
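The four performance measures named in the abstract are standard; a short sketch computing them, with synthetic arrays standing in for observed and predicted SMSWG rates:

```python
import numpy as np

obs = np.array([12.0, 15.5, 9.8, 20.1, 14.2])    # observed rates (synthetic)
pred = np.array([11.2, 16.0, 10.5, 18.9, 14.8])  # model predictions (synthetic)

mae = np.mean(np.abs(obs - pred))                # mean absolute error
mape = np.mean(np.abs((obs - pred) / obs)) * 100.0
rmse = np.sqrt(np.mean((obs - pred) ** 2))
r = np.corrcoef(obs, pred)[0, 1]                 # Pearson correlation coefficient
print(mae, mape, rmse, r)
```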
Reaction Mechanism Generator: Automatic construction of chemical kinetic mechanisms
NASA Astrophysics Data System (ADS)
Gao, Connie W.; Allen, Joshua W.; Green, William H.; West, Richard H.
2016-06-01
Reaction Mechanism Generator (RMG) constructs kinetic models composed of elementary chemical reaction steps using a general understanding of how molecules react. Species thermochemistry is estimated through Benson group additivity and reaction rate coefficients are estimated using a database of known rate rules and reaction templates. At its core, RMG relies on two fundamental data structures: graphs and trees. Graphs are used to represent chemical structures, and trees are used to represent thermodynamic and kinetic data. Models are generated using a rate-based algorithm which excludes species from the model based on reaction fluxes. RMG can generate reaction mechanisms for species involving carbon, hydrogen, oxygen, sulfur, and nitrogen. It also has capabilities for estimating transport and solvation properties, and it automatically computes pressure-dependent rate coefficients and identifies chemically-activated reaction paths. RMG is an object-oriented program written in Python, which provides a stable, robust programming architecture for developing an extensible and modular code base with a large suite of unit tests. Computationally intensive functions are cythonized for speed improvements.
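The rate-based enlargement step can be sketched abstractly: repeatedly promote the edge species with the largest formation flux into the core model until the largest remaining flux falls below a tolerance times a characteristic rate. A toy illustration of that selection loop (not RMG's actual code or API; all numbers are made up):

```python
# Toy sketch of a rate-based model-enlargement loop (not RMG's actual API).
core = {"fuel", "O2"}
edge_flux = {"CO2": 8.0, "CO": 3.5, "CH3": 0.02, "HO2": 0.4}  # made-up fluxes
characteristic_rate = 10.0   # made-up core flux scale
tolerance = 0.1

while edge_flux:
    species, flux = max(edge_flux.items(), key=lambda kv: kv[1])
    if flux < tolerance * characteristic_rate:
        break                      # remaining edge species are negligible
    core.add(species)              # promote the fastest-forming edge species
    del edge_flux[species]
    # A real tool would re-simulate here and recompute edge fluxes.

print(sorted(core))                # fuel, O2 plus the promoted species
```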
Energy metabolism, body composition, and urea generation rate in hemodialysis patients.
Sridharan, Sivakumar; Vilar, Enric; Berdeprado, Jocelyn; Farrington, Ken
2013-10-01
Hemodialysis (HD) adequacy is currently assessed using normalized urea clearance (Kt/V), although scaling based on Watson volume (V) may disadvantage women and men with low body weight. Alternative scaling factors such as resting energy expenditure and high metabolic rate organ mass have been suggested. The relationship between such factors and uremic toxin generation has not been established. We aimed to study the relationship between body size, energy metabolism, and urea generation rate. A cross-sectional cohort of 166 HD patients was studied. Anthropometric measurements were carried out on all patients. Resting energy expenditure was measured by indirect calorimetry, fat-free mass by bio-impedance, and total energy expenditure by combining resting energy expenditure with questionnaire-derived physical activity data. High metabolic rate organ mass was calculated using a published equation and urea generation rate using formal urea kinetic modeling. Metabolic factors including resting energy expenditure, total energy expenditure and fat-free mass correlated better with urea generation rate than did Watson volume. Total energy expenditure and fat-free mass (but not Watson volume) were independent predictors of urea generation rate, the model explaining 42% of its variation. Small women (
Are dialysis adequacy indices independent of solute generation rate?
Waniewski, Jacek; Debowska, Malgorzata; Lindholm, Bengt
2014-01-01
KT/V is by definition independent of solute generation rate. Alternative dialysis adequacy indices (DAIs) such as equivalent renal clearance (EKR), standard KT/V (stdKT/V), and solute removal index (SRI) are estimated as the ratio of solute mass removed to an average solute mass in the body or solute concentration in blood; both nominator and denominator in these formulas depend on the solute generation rate. Our objective was to investigate whether and under which conditions the alternative DAIs are independent of solute generation rate. By using general compartment modeling, we show that for the metabolically stable patient (in whom the solute generated during the dialysis cycle, typically, 1 week, is equal to the solute removed from the body), DAIs estimated for the dialysis cycle are in general independent of the average solute generation rate (although they may depend on the pattern of oscillations in the generation rate). However, the alternative adequacy parameters (such as EKR, stdKT/V, and SRI) may depend on solute generation rate for metabolically unstable patients.
NASA Astrophysics Data System (ADS)
Wang, Liping; Wang, Boquan; Zhang, Pu; Liu, Minghao; Li, Chuangang
2017-06-01
The study of deterministic optimal reservoir operation can improve the utilization rate of water resources and help hydropower stations develop more reasonable power generation schedules. However, imprecise forecast inflows may lead to output error and hinder the implementation of power generation schedules. In this paper, the output error generated by the uncertainty of the forecast inflow was treated as a variable in a short-term reservoir optimal operation model developed to reduce operation risk. To accomplish this, the concept of Value at Risk (VaR) was first applied to represent the maximum possible loss of power generation schedules, and then an extreme value theory-genetic algorithm (EVT-GA) was proposed to solve the model. The cascade reservoirs of the Yalong River Basin in China were selected as a case study to verify the model. According to the results, different assurance rates of schedules can be derived by the model, which presents more flexible options for decision makers, and the highest assurance rate can reach 99%, much higher than the 48% obtained without considering output error. In addition, the model can greatly improve power generation compared with the original reservoir operation scheme under the same confidence level and risk attitude. Therefore, the model proposed in this paper can significantly improve the effectiveness of power generation schedules and provide a more scientific reference for decision makers.
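Value at Risk here is a quantile of the loss distribution of scheduled output under forecast error. A minimal sketch of that computation; the error distribution and schedule are illustrative, not the Yalong River case study:

```python
import numpy as np

rng = np.random.default_rng(7)

planned_output = 1000.0            # MW, scheduled generation (synthetic)
# Output error induced by inflow-forecast uncertainty (synthetic distribution).
output_error = rng.normal(loc=0.0, scale=80.0, size=100_000)
realized = planned_output + output_error

loss = planned_output - realized   # shortfall relative to the schedule
var_95 = np.percentile(loss, 95)   # 95% Value at Risk
# Assurance rate: fraction of outcomes whose shortfall stays within the VaR.
assurance_rate = np.mean(loss <= var_95)
print(var_95, assurance_rate)
```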
Rodríguez, Sylian; Almquist, Catherine; Lee, Tai Gyu; Furuuchi, Masami; Hedrick, Elizabeth; Biswas, Pratim
2004-02-01
A mechanistic model to predict the capture of gas-phase mercury (Hg) species using in situ-generated titania nanosize particles activated by UV irradiation is developed. The model is an extension of a recently reported model for photochemical reactions by Almquist and Biswas that accounts for the rates of electron-hole pair generation, the adsorption of the compound to be oxidized, and the adsorption of water vapor. The role of water vapor in the removal efficiency of Hg was investigated by evaluating the rates of Hg oxidation at different water vapor concentrations. As the water vapor concentration is increased, more hydroxyl radical species are generated on the surface of the titania particles, increasing the number of active sites for the photooxidation and capture of Hg. At very high water vapor concentrations, competitive adsorption is expected to become important and reduce the number of sites available for photooxidation of Hg. The predictions of the developed phenomenological model agreed well with the measured Hg oxidation rates in this study and with the data on oxidation of organic compounds reported in the literature.
Sun, Dajun D; Lee, Ping I
2013-11-04
The combination of a rapidly dissolving and supersaturating "spring" with a precipitation-retarding "parachute" has often been pursued as an effective formulation strategy for amorphous solid dispersions (ASDs) to enhance the rate and extent of oral absorption. However, the interplay between these two rate processes in achieving and maintaining supersaturation remains inadequately understood, and the effect of the rate of supersaturation buildup on the overall time evolution of supersaturation during the dissolution of amorphous solids has not been explored. The objective of this study is to investigate the effect of the supersaturation generation rate on the resulting kinetic solubility profiles of amorphous pharmaceuticals and to delineate the evolution of supersaturation from a mechanistic viewpoint. Experimental concentration-time curves under varying rates of supersaturation generation and recrystallization for the model drugs indomethacin (IND), naproxen (NAP) and piroxicam (PIR) were generated by infusing dissolved drug (e.g., in ethanol) into the dissolution medium and compared with those predicted by a comprehensive mechanistic model based on classical nucleation theory taking into account both the particle growth and ripening processes. In the absence of any dissolved polymer to inhibit drug precipitation, both our experimental and predicted results show that the maximum achievable supersaturation (i.e., kinetic solubility) of the amorphous solids increases, the time to reach that maximum decreases, and the rate of concentration decline in the de-supersaturation phase increases with increasing rate of supersaturation generation (i.e., dissolution rate). Our mechanistic model also predicts the existence of an optimal supersaturation rate which maximizes the area under the curve (AUC) of the kinetic solubility concentration-time profile, in good agreement with experimental data. In the presence of a dissolved polymer from ASD dissolution, these trends also hold, except that the de-supersaturation phase is more extended owing to the crystallization-inhibition effect. Since the observed kinetic solubility of nonequilibrium amorphous solids depends on the rate of supersaturation generation, our results also highlight the underlying difficulty in determining a reproducible solubility advantage for amorphous solids.
Earnest, G S; Mickelsen, R L; McCammon, J B; O'Brien, D M
1997-11-01
This study modeled the time required for a gasoline-powered, 5 horsepower (hp), 4-cycle engine to generate carbon monoxide (CO) concentrations exceeding the National Institute for Occupational Safety and Health 200-ppm ceiling and the 1200-ppm immediately-dangerous-to-life-and-health concentration for various room sizes and ventilation rates. The model permitted the ambiguous term "well-ventilated area" to be defined. The model was compared with field data collected at a site where two workers were poisoned while operating a 5-hp concrete saw in a bathroom with open doors and an operating ventilation system. The modeled and field-generated data agree, indicating that hazardous CO concentrations can develop within minutes. Comparison of field and modeling data showed the measured CO generation rate to be approximately one-half of the value used in the model, which may be partly because the engine used in the field was not under load during data collection. The generation rate and room size from the actual poisoning were then used in the model. The model determined that ventilation rates of nearly 5000 ft3/min (120 air changes per hour) would be required to prevent the CO concentration from exceeding the 200-ppm ceiling for short periods. Results suggest that small gasoline-powered engines should not be operated inside buildings or in semienclosed spaces, and that manufacturers of such tools should improve their warnings and develop engineering control options for better user protection.
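The underlying calculation is a well-mixed-room mass balance: with generation rate G, ventilation rate Q and room volume V, C(t) = (G/Q)(1 - e^(-Qt/V)). A minimal sketch of the time to reach a ceiling concentration; the emission rate and room parameters are illustrative placeholders, not the study's measured values:

```python
import numpy as np

G = 2.0      # CO generation rate, ft3/min (illustrative placeholder)
Q = 500.0    # ventilation rate, ft3/min (illustrative)
V = 4000.0   # room volume, ft3 (illustrative)

t = np.linspace(0.0, 60.0, 601)                   # minutes
C_ppm = (G / Q) * (1.0 - np.exp(-Q * t / V)) * 1e6

ceiling = 200.0                                   # NIOSH ceiling, ppm
exceed = t[C_ppm > ceiling]
print(exceed[0] if exceed.size else "ceiling never reached")
```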
Translating landfill methane generation parameters among first-order decay models.
Krause, Max J; Chickering, Giles W; Townsend, Timothy G
2016-11-01
Landfill gas (LFG) generation is predicted by a first-order decay (FOD) equation that incorporates two parameters: a methane generation potential (L0) and a methane generation rate constant (k). Because non-hazardous waste landfills may accept many types of waste streams, multiphase models have been developed in an attempt to more accurately predict methane generation from heterogeneous waste streams. The ability of a single-phase FOD model to predict methane generation using weighted-average methane generation parameters and tonnages translated from multiphase models was assessed in two exercises. In the first exercise, waste composition from four Danish landfills represented by low-biodegradable waste streams was modeled in the Afvalzorg Multiphase Model and methane generation was compared to the single-phase Intergovernmental Panel on Climate Change (IPCC) Waste Model and LandGEM. In the second exercise, waste composition represented by IPCC waste components was modeled in the multiphase IPCC model and compared to single-phase LandGEM and Australia's Solid Waste Calculator (SWC). In both cases, weighted averaging of methane generation parameters from waste composition data in single-phase models was effective in predicting cumulative methane generation to within -7% to +6% of the multiphase models. The results underscore the understanding that multiphase models will not necessarily improve LFG generation prediction, because the uncertainty of the method rests largely within the input parameters. A unique method of calculating the methane generation rate constant by mass of anaerobically degradable carbon was presented (kc) and compared to existing methods, providing a better fit in 3 of 8 scenarios. Generally, single-phase models with weighted-average inputs can accurately predict methane generation from multiple waste streams with varied characteristics; weighted averages should therefore be used instead of regional default values when comparing models. Translating multiphase FOD model input parameters by weighted average shows that single-phase models can predict cumulative methane generation within the level of uncertainty of many of the input parameters as defined by the IPCC, which indicates that decreasing the uncertainty of the input parameters will make the model more accurate, rather than adding multiple phases or input parameters.
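The single-phase FOD form in question is the LandGEM-style sum Q_CH4(t) = Σ_i k L0 M_i e^(-k(t-t_i)) over annual deposits M_i. A sketch comparing a multiphase calculation with a single-phase run using mass-weighted average parameters; the component fractions, L0 and k values are illustrative, not the paper's datasets:

```python
import numpy as np

# Illustrative waste components: (mass fraction, L0 [m3 CH4/Mg], k [1/yr]).
components = [(0.4, 100.0, 0.06),   # food/garden (made-up values)
              (0.3, 150.0, 0.04),   # paper
              (0.3, 50.0, 0.02)]    # other

# Mass-weighted average parameters for an equivalent single-phase model.
f = np.array([c[0] for c in components])
L0 = np.array([c[1] for c in components])
k = np.array([c[2] for c in components])
L0_avg, k_avg = f @ L0, f @ k

annual_mass = np.full(20, 50_000.0)            # Mg/yr accepted (synthetic)
years = np.arange(1, 101)

def fod(mass, L0, k, years):
    """First-order-decay CH4 generation, summed over deposit years."""
    q = np.zeros_like(years, dtype=float)
    for i, m in enumerate(mass):               # deposit placed in year i
        age = years - i
        q += np.where(age > 0, k * L0 * m * np.exp(-k * age), 0.0)
    return q

q_single = fod(annual_mass, L0_avg, k_avg, years)
q_multi = sum(fod(fi * annual_mass, L, kk, years) for fi, L, kk in components)
print(q_single.sum() / q_multi.sum())          # cumulative ratio, close to 1
```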
A New Model that Generates Lotka's Law.
ERIC Educational Resources Information Center
Huber, John C.
2002-01-01
Develops a new model for a process that generates Lotka's Law. Topics include measuring scientific productivity through the number of publications; rate of production; career duration; randomness; Poisson distribution; computer simulations; goodness-of-fit; theoretical support for the model; and future research. (Author/LRW)
A Protocol for Generating and Exchanging (Genome-Scale) Metabolic Resource Allocation Models.
Reimers, Alexandra-M; Lindhorst, Henning; Waldherr, Steffen
2017-09-06
In this article, we present a protocol for generating a complete (genome-scale) metabolic resource allocation model, as well as a proposal for how to represent such models in the systems biology markup language (SBML). Such models are used to investigate enzyme levels and achievable growth rates in large-scale metabolic networks. Although the idea of metabolic resource allocation studies has been present in the field of systems biology for some years, no guidelines for generating such a model have been published up to now. This paper presents step-by-step instructions for building a (dynamic) resource allocation model, starting with prerequisites such as a genome-scale metabolic reconstruction, through building protein and noncatalytic biomass synthesis reactions and assigning turnover rates for each reaction. In addition, we explain how one can use SBML level 3 in combination with the flux balance constraints and our resource allocation modeling annotation to represent such models.
Periodic acoustic radiation from a low aspect ratio propeller
NASA Astrophysics Data System (ADS)
Muench, John David
An experimental program was conducted with the objective of providing high-fidelity measurements of propeller inflow, unsteady blade surface pressures, and discrete acoustic radiation over a wide range of speeds. Anechoic wind tunnel experiments were performed using the SISUP propeller. The upstream stator blades generate large wake deficits that result in periodic unsteady blade forces that radiate acoustically at blade-passing frequency and higher harmonics. The experimental portion of this research successfully measured the inflow velocity, blade-span unsteady pressures, and directive characteristics of the blade-rate radiated noise associated with this complex propeller geometry while the propeller was operating on-design. The spatial harmonic decomposition of the inflow revealed significant coefficients at 8, 16 and 24. The magnitude of the unsteady blade forces scales as U^4 and shifts linearly in frequency with speed. The magnitude of the discrete-frequency acoustic levels associated with blade rate scales as U^6 and also shifts linearly with speed. At blade-rate, the far-field acoustic directivity is dipole-like, oriented perpendicular to the inflow. At the first harmonic of blade-rate, the far-field directivity is not as well defined. The experimental inflow and blade surface pressure results were used to generate an acoustic prediction at blade rate based on a blade strip theory model developed by Blake (1986). The predicted acoustic levels were compared to the experimental results. The model adequately predicts the measured sound field at blade rate at 120 ft/sec. Radiated noise at blade-rate for 120 ft/s can be described by a dipole, oriented perpendicular to the flow, generated by the interaction of the rotating propeller with the 8th harmonic of the inflow. At blade-rate for 60 ft/s, the model underpredicts measured levels. At the first harmonic of blade-rate, for 120 ft/s, the sound field is described as a combination of dipole sources, one generated by the 16th harmonic, perpendicular to the inflow, and the other generated by the 12th harmonic of the inflow, parallel to the inflow. At the first harmonic of blade-rate for 60 ft/s, the model underpredicts measured levels.
Modeling streamflow from coupled airborne laser scanning and acoustic Doppler current profiler data
Norris, Lam; Kean, Jason W.; Lyon, Steve
2016-01-01
The rating curve enables the translation of water depth into stream discharge through a reference cross-section. This study investigates coupling national scale airborne laser scanning (ALS) and acoustic Doppler current profiler (ADCP) bathymetric survey data for generating stream rating curves. A digital terrain model was defined from these data and applied in a physically based 1-D hydraulic model to generate rating curves for a regularly monitored location in northern Sweden. Analysis of the ALS data showed that overestimation of the streambank elevation could be adjusted with a root mean square error (RMSE) block adjustment using a higher accuracy manual topographic survey. The results of our study demonstrate that the rating curve generated from the vertically corrected ALS data combined with ADCP data had lower errors (RMSE = 0.79 m3/s) than the empirical rating curve (RMSE = 1.13 m3/s) when compared to streamflow measurements. We consider these findings encouraging as hydrometric agencies can potentially leverage national-scale ALS and ADCP instrumentation to reduce the cost and effort required for maintaining and establishing rating curves at gauging station sites similar to the Röån River.
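Rating curves are commonly parameterized as Q = a(h - h0)^b and fit to paired stage-discharge observations, with RMSE quantifying fit quality as in the abstract. A minimal sketch under that assumed parameterization (synthetic observations, not the Röån River data):

```python
import numpy as np
from scipy.optimize import curve_fit

def rating(h, a, h0, b):
    """Power-law rating curve Q = a * (h - h0)**b (clipped for stability)."""
    return a * np.clip(h - h0, 1e-6, None) ** b

# Synthetic stage (m) / discharge (m3/s) observations.
h_obs = np.array([0.5, 0.8, 1.1, 1.5, 2.0, 2.6])
q_obs = np.array([0.9, 2.4, 4.6, 8.5, 14.8, 24.0])

popt, _ = curve_fit(rating, h_obs, q_obs, p0=[5.0, 0.2, 1.8])
rmse = np.sqrt(np.mean((q_obs - rating(h_obs, *popt)) ** 2))
print(popt, rmse)
```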
Thermal mathematical modeling of a multicell common pressure vessel nickel-hydrogen battery
NASA Technical Reports Server (NTRS)
Kim, Junbom; Nguyen, T. V.; White, R. E.
1992-01-01
A two-dimensional and time-dependent thermal model of a multicell common pressure vessel (CPV) nickel-hydrogen battery was developed. A finite element solver called PDE/Protran was used to solve this model. The model was used to investigate the effects of various design parameters on the temperature profile within the cell. The results were used to help find a design that will yield an acceptable temperature gradient inside a multicell CPV nickel-hydrogen battery. Steady-state and unsteady-state cases with a constant heat generation rate and a time-dependent heat generation rate were solved.
Thompson, Shirley; Sawyer, Jennifer; Bonam, Rathan; Valdivia, J E
2009-07-01
The German EPER, TNO, Belgium, LandGEM, and Scholl Canyon models for estimating methane production were compared to methane recovery rates for 35 Canadian landfills, assuming that 20% of emissions were not recovered. Two different fractions of degradable organic carbon (DOC(f)) were applied in all models. Most models performed better when the DOC(f) was 0.5 rather than 0.77. The Belgium, Scholl Canyon, and LandGEM version 2.01 models produced the best results of the existing models, with respective mean absolute errors relative to methane generation rates (recovery rates + 20%) of 91%, 71%, and 89% at 0.50 DOC(f), and 171%, 115%, and 81% at 0.77 DOC(f). The Scholl Canyon model typically overestimated methane recovery rates, and the LandGEM version 2.01 model, which modifies the Scholl Canyon model by dividing waste by 10, consistently underestimated methane recovery rates; this comparison suggested that setting the divisor for waste in the Scholl Canyon model between one and ten could improve its accuracy. The modified model had the lowest mean absolute error with a divisor of 1.5 at 0.50 DOC(f) (63 ± 45%) and with a divisor of 2.3 at 0.77 DOC(f) (57 ± 47%). These modified models reduced error and variability substantially, and both have a strong correlation of r = 0.92.
Govindan, Siva Shangari; Agamuthu, P
2014-10-01
Waste management can be regarded as a cross-cutting environmental 'mega-issue'. Sound waste management practices support the provision of basic needs for general health, such as clean air, clean water and a safe supply of food. In addition, climate change mitigation can be advanced by reducing greenhouse gas emissions from waste management operations, such as landfills. Landfills generate landfill gas, especially methane, as a result of anaerobic degradation of the degradable components of municipal solid waste. Evaluating the mode of generation and collection of landfill gas has posed a challenge over time. Landfill gas generation rates are presently estimated using numerical models. In this study the Intergovernmental Panel on Climate Change's Waste Model is used to estimate the methane generated from a Malaysian sanitary landfill. Key parameters of the model, the decay rate and degradable organic carbon, are analysed using two different approaches: the bulk waste approach and the waste composition approach. The model is then validated using error function analysis, and the optimum decay rate and degradable organic carbon for both approaches are obtained. The best-fitting values for the bulk waste approach are a decay rate of 0.08 y⁻¹ and a degradable organic carbon value of 0.12; for the waste composition approach, the decay rate was found to be 0.09 y⁻¹ and the degradable organic carbon value 0.08. From this validation exercise, the estimated error was reduced by 81% and 69% for the bulk waste and waste composition approaches, respectively. In conclusion, this type of modelling could constitute a sensible starting point for landfills to introduce careful planning for efficient gas recovery in individual landfills. © The Author(s) 2014.
Modeling of Diffusion Based Correlations Between Heart Rate Modulations and Respiration Pattern
2001-10-25
R. Langer, Y. Smorzik, S. Akselrod ... generations of the bronchial tree. The second stage describes the oxygen diffusion process from the pulmonary gas in the alveoli into the pulmonary ... patterns (FRC, TV, rate). Keywords: modeling, diffusion, heart rate fluctuations.
76 FR 11940 - Airworthiness Directives; Turbomeca Model Arriel 1E2, 1S, and 1S1 Turboshaft Engines
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-04
... discrepancies led to a "one-off" abnormal evolution of gas generator (NG) rating during engine starting. In one of these cases, this resulted in an...
NASA Astrophysics Data System (ADS)
Andre, B. J.; Rajaram, H.; Silverstein, J.
2010-12-01
Acid mine drainage (AMD) results from the oxidation of metal sulfide minerals (e.g. pyrite), producing ferrous iron and sulfuric acid. Acidophilic autotrophic bacteria such as Acidithiobacillus ferrooxidans and Leptospirillum ferrooxidans obtain energy by oxidizing ferrous iron back to ferric iron, using oxygen as the electron acceptor. Most existing models of AMD do not account for microbial kinetics or iron geochemistry rigorously. Instead they assume that oxygen limitation controls pyrite oxidation and thus focus on oxygen transport. These models have been used successfully to simulate conditions where oxygen availability is a limiting factor (e.g. source prevention by capping), but have not been shown to effectively model acid generation and effluent chemistry under a wider range of conditions. The key reactions, oxidation of pyrite and oxidation of ferrous iron, are both slow kinetic processes. Despite extensive study over the last thirty years, there is still no consensus in the literature on the basic mechanisms, limiting factors or rate expressions for microbially enhanced oxidation of metal sulfides. An indirect leaching mechanism (chemical oxidation of pyrite by ferric iron to produce ferrous iron, with regeneration of ferric iron by microbial oxidation of ferrous iron) is used as the foundation of a conceptual model for microbially enhanced oxidation of pyrite. Using literature data, a rate expression for microbial consumption of ferrous iron is developed that accounts for oxygen, ferrous iron and pH limitation. Reaction rate expressions for oxidation of pyrite and chemical oxidation of ferrous iron are selected from the literature. A completely mixed stirred tank reactor (CSTR) model is implemented coupling the kinetic rate expressions, speciation calculations and flow. The model simulates generation of AMD and effluent chemistry that qualitatively agree with column reactor and single-rock experiments. A one-dimensional reaction-diffusion model at the scale of a single rock is developed incorporating the proposed kinetic rate expressions. Simulations of initiation, washout and AMD flows are discussed to gain a better understanding of the role of porosity, effective diffusivity and reactive surface area in generating AMD. Simulations indicate that flow boundary conditions control the generation of acid rock drainage as porosity increases.
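The conceptual model can be sketched as a small CSTR ODE system: ferric iron oxidizes pyrite (FeS2 + 14 Fe3+ + 8 H2O → 15 Fe2+ + 2 SO4^2- + 16 H+), bacteria reoxidize Fe(II) to Fe(III) at a Monod-type rate, and flow dilutes everything. The rate forms and constants below are generic placeholders, not the rate expressions developed in the study:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Generic placeholder kinetics (not the study's fitted rate expressions).
k_py, k_bio, Ks, Y = 0.002, 2.0, 0.1, 0.1
D = 0.02                                    # dilution rate, 1/h

def rhs(t, y):
    fe2, fe3, X, P = y                      # Fe(II), Fe(III) (mM), biomass, pyrite
    r_py = k_py * fe3 * P                   # pyrite oxidation by Fe(III)
    r_bio = k_bio * X * fe2 / (Ks + fe2)    # microbial Fe(II) oxidation (Monod)
    # Stoichiometry: 15 Fe(II) produced and 14 Fe(III) consumed per pyrite.
    dfe2 = 15.0 * r_py - r_bio - D * fe2
    dfe3 = r_bio - 14.0 * r_py - D * fe3
    dX = Y * r_bio - D * X
    dP = -r_py                              # finite pyrite inventory depletes
    return [dfe2, dfe3, dX, dP]

sol = solve_ivp(rhs, (0.0, 1000.0), [1.0, 0.5, 0.01, 50.0])
print(sol.y[:, -1])                         # approach to washout/steady state
```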
The Validity of Poisson Assumptions in a Combined Loglinear/MDS Mapping Model.
ERIC Educational Resources Information Center
Everett, James E.
1993-01-01
Addresses objections to the validity of assuming a Poisson loglinear model as the generating process for citations from one journal into another. Fluctuations in citation rate, serial dependence on citations, impossibility of distinguishing between rate changes and serial dependence, evidence for changes in Poisson rate, and transitivity…
Accounting for orphaned aftershocks in the earthquake background rate
Van Der Elst, Nicholas
2017-01-01
Aftershocks often occur within cascades of triggered seismicity in which each generation of aftershocks triggers an additional generation, and so on. The rate of earthquakes in any particular generation follows Omori's law, going approximately as 1/t. This function decays rapidly, but is heavy-tailed, and aftershock sequences may persist for long times at a rate that is difficult to discriminate from background. It is likely that some apparently spontaneous earthquakes in the observational catalogue are orphaned aftershocks of long-past main shocks. To assess the relative proportion of orphaned aftershocks in the apparent background rate, I develop an extension of the ETAS model that explicitly includes the expected contribution of orphaned aftershocks to the apparent background rate. Applying this model to California, I find that the apparent background rate can be almost entirely attributed to orphaned aftershocks, depending on the assumed duration of an aftershock sequence. This implies an earthquake cascade with a branching ratio (the average number of directly triggered aftershocks per main shock) of nearly unity. In physical terms, this implies that very few earthquakes are completely isolated from the perturbing effects of other earthquakes within the fault system. Accounting for orphaned aftershocks in the ETAS model gives more accurate estimates of the true background rate, and more realistic expectations for long-term seismicity patterns.
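A toy branching simulation makes the mechanism concrete: each event spawns a Poisson number of direct aftershocks (mean equal to the branching ratio) at Omori-law delays, and delays that stretch past the observation window produce apparently spontaneous events. A sketch with made-up parameters (not the California ETAS fit):

```python
import numpy as np

rng = np.random.default_rng(3)

def omori_delays(n_events, c=0.01, p=1.1, t_max=1e4):
    """Sample aftershock delays from a truncated Omori law ~ (t + c)**-p."""
    u = rng.uniform(size=n_events)
    a, b = c ** (1 - p), (t_max + c) ** (1 - p)   # inverse-CDF sampling
    return (a + u * (b - a)) ** (1 / (1 - p)) - c

branching_ratio = 0.9          # mean direct aftershocks per event (made up)
times = list(rng.uniform(0, 1000.0, size=100))   # spontaneous "main shocks"
catalogue = []
while times:
    t = times.pop()
    catalogue.append(t)
    for dt in omori_delays(rng.poisson(branching_ratio)):
        times.append(t + dt)   # each aftershock can trigger its own cascade

catalogue = np.array(catalogue)
# Events long after their ancestors would look like background seismicity.
print(len(catalogue), np.mean(catalogue > 1000.0))
```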
The power and robustness of maximum LOD score statistics.
Yoo, Y J; Mendell, N R
2008-07-01
The maximum LOD score statistic is extremely powerful for gene mapping when calculated using the correct genetic parameter value. When the mode of genetic transmission is unknown, the maximum of the LOD scores obtained using several genetic parameter values is reported. This latter statistic requires a higher critical value than the maximum LOD score statistic calculated from a single genetic parameter value. In this paper, we compare the power of maximum LOD scores based on three fixed sets of genetic parameter values with the power of the LOD score obtained after maximizing over the entire range of genetic parameter values. We simulate family data under nine generating models. For generating models with non-zero phenocopy rates, LOD scores maximized over the entire range of genetic parameters yielded greater power than maximum LOD scores for fixed sets of parameter values with zero phenocopy rates. No maximum LOD score was consistently more powerful than the others for generating models with a zero phenocopy rate. The power loss of the LOD score maximized over the entire range of genetic parameters, relative to the maximum LOD score calculated using the correct genetic parameter value, appeared to be robust to the generating models.
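For the simplest phase-known linkage setting, the LOD score has a closed form: with r recombinants among n informative meioses, LOD(θ) = log10[θ^r (1-θ)^(n-r) / 0.5^n]. A sketch of maximizing this over a grid of θ (the counts are made up, and the paper's maximization is over penetrance and phenocopy parameters rather than θ alone):

```python
import numpy as np

r, n = 4, 20                 # recombinants among informative meioses (made up)
thetas = np.linspace(0.001, 0.5, 500)

# LOD(theta) = log10( L(theta) / L(0.5) ) for phase-known recombination counts.
lod = (r * np.log10(thetas) + (n - r) * np.log10(1 - thetas)
       - n * np.log10(0.5))

i = np.argmax(lod)
print(thetas[i], lod[i])     # maximum LOD score and the theta achieving it
```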
Generator replacement is associated with an increased rate of ICD lead alerts.
Lovelock, Joshua D; Cruz, Cesar; Hoskins, Michael H; Jones, Paul; El-Chami, Mikhael F; Lloyd, Michael S; Leon, Angel; DeLurgio, David B; Langberg, Jonathan J
2014-10-01
Lead malfunction is an important cause of morbidity and mortality in patients with an implantable cardioverter-defibrillator (ICD). We have shown that the failure of recalled high-voltage leads significantly increases after ICD generator replacement. However, generator replacement has not been recognized as a predictor of lead failure in general. The purpose of this study is to assess the effect of ICD generator exchange on the rate of ICD lead alerts. A time-dependent Cox proportional hazards model was used to analyze a database of remotely monitored ICDs. The model assessed the impact of generator exchange on the rate of lead alerts after ICD generator replacement. The analysis included 60,219 patients followed for 37 ± 19 months. The 5-year lead survival was 99.3% (95% confidence interval 99.2%-99.4%). Of 60,219 patients, 7458 patients (12.9%) underwent ICD generator exchange without lead replacement. After generator replacement, the rate of lead alerts was more than 5-fold higher than in controls with leads of the same age without generator replacement (hazard ratio 5.19; 95% confidence interval 3.45-7.84). A large number of leads alerted within 3 months of generator replacement. Lead alerts were more common in patients with single- vs dual-chamber ICDs and in younger patients. Sex was not associated with lead alerts. Routine generator replacement is associated with a 5-fold higher risk of lead alert compared to age-matched leads without generator replacement. This suggests the need for intense surveillance after generator replacement and the development of techniques to minimize the risk of lead damage during generator replacement. Copyright © 2014 Heart Rhythm Society. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Behrens, R.; Minier, L.; Bulusu, S.
1998-12-31
The time-dependent, solid-phase thermal decomposition behavior of 2,4-dinitroimidazole (2,4-DNI) has been measured utilizing simultaneous thermogravimetric modulated beam mass spectrometry (STMBMS) methods. The decomposition products consist of gaseous and non-volatile polymeric products. The temporal behavior of the gas formation rates of the identified products indicates that the overall thermal decomposition process is complex. In isothermal experiments with 2,4-DNI in the solid phase, four distinguishing features are observed: (1) elevated rates of gas formation are observed during the early stages of the decomposition, which appear to be correlated to the presence of exogenous water in the sample; (2) this is followed by a period of relatively constant rates of gas formation; (3) next, the rates of gas formation accelerate, characteristic of an autocatalytic reaction; (4) finally, the 2,4-DNI is depleted and gaseous decomposition products continue to evolve at a decreasing rate. A physicochemical and mathematical model of the decomposition of 2,4-DNI has been developed and applied to the experimental results. The first generation of this model is described in this paper. Differences between the first generation of the model and the experimental data collected under different conditions suggest refinements for the next generation of the model.
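The staged behavior described above (induction, near-constant rate, autocatalytic acceleration, depletion) can be reproduced qualitatively with a generic two-term rate law. This is a minimal sketch with hypothetical rate constants, not the paper's physicochemical 2,4-DNI model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Generic decomposition rate law with a first-order and an autocatalytic term:
#   d(alpha)/dt = k1*(1 - alpha) + k2*alpha*(1 - alpha)
# alpha is the fraction decomposed; k1, k2 are hypothetical rate constants.
k1, k2 = 1e-4, 5e-3   # 1/s, illustrative only

def rate(t, y):
    alpha = y[0]
    return [k1 * (1 - alpha) + k2 * alpha * (1 - alpha)]

sol = solve_ivp(rate, (0, 3600), [0.0], dense_output=True)
for ti in np.linspace(0, 3600, 7):
    print(f"t = {ti:6.0f} s  decomposed fraction = {sol.sol(ti)[0]:.3f}")
```

The k2 term is what produces the accelerating stage (3); as alpha approaches 1 both terms vanish, giving the decaying tail of stage (4).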
Kumar, Supriya; Piper, Kaitlin; Galloway, David D; Hadler, James L; Grefenstette, John J
2015-09-23
In New Haven County, CT (NHC), influenza hospitalization rates have been shown to increase with census tract poverty in multiple influenza seasons. Though multiple factors have been hypothesized to cause these inequalities, including population structure, differential vaccine uptake, and differential access to healthcare, the impact of each in generating observed inequalities remains unknown. We can design interventions targeting factors with the greatest explanatory power if we quantify the proportion of observed inequalities that hypothesized factors are able to generate. Here, we ask if population structure is sufficient to generate the observed area-level inequalities in NHC. To our knowledge, this is the first use of simulation models to examine the causes of differential poverty-related influenza rates. Using agent-based models with a census-informed, realistic representation of household size, age-structure, population density in NHC census tracts, and contact rates in workplaces, schools, households, and neighborhoods, we measured poverty-related differential influenza attack rates over the course of an epidemic with a 23% overall clinical attack rate. We examined the role of asthma prevalence rates as well as individual contact rates and infection susceptibility in generating observed area-level influenza inequalities. Simulated attack rates (AR) among adults increased with census tract poverty level (F = 30.5; P < 0.001) in an epidemic caused by a virus similar to A(H1N1)pdm09. We detected a steeper, earlier influenza rate increase in high-poverty census tracts, a finding that we corroborate with a temporal analysis of NHC surveillance data during the 2009 H1N1 pandemic. The ratio of the simulated adult AR in the highest- to lowest-poverty tracts was 33% of the ratio observed in surveillance data. Increasing individual contact rates in the neighborhood did not increase simulated area-level inequalities. When we modified individual susceptibility such that it was inversely proportional to household income, inequalities in AR between high- and low-poverty census tracts were comparable to those observed in reality. To our knowledge, this is the first study to use simulations to probe the causes of observed inequalities in influenza disease patterns. Knowledge of the causes and their relative explanatory power will allow us to design interventions that have the greatest impact on reducing inequalities. Differential exposure due to population structure in our realistic simulation model explains a third of the observed inequality. Differential susceptibility to disease due to prevailing chronic conditions, vaccine uptake, and smoking should be considered in future models in order to quantify the role of additional factors in generating influenza inequalities.
Akbarian, Vahe; Wang, Weijia; Audet, Julie
2012-05-01
Herein, we describe an experimental and computational approach to perform quantitative carboxyfluorescein diacetate succinimidyl ester (CFSE) cell-division tracking in cultures of primary colony-forming unit-erythroid (CFU-E) cells, a hematopoietic progenitor cell type, which is an important target for the treatment of blood disorders and for the manufacture of red blood cells. CFSE labeling of CFU-Es isolated from mouse fetal livers was performed to examine the effects of stem cell factor (SCF) and erythropoietin (EPO) in culture. We used a dynamic model of proliferation based on the Smith-Martin representation of the cell cycle to extract proliferation rates and death rates from CFSE time-series. However, we found that to accurately represent the cell population dynamics in differentiation cultures of CFU-Es, it was necessary to develop a model with generation-specific rate parameters. The generation-specific rates of proliferation and death were extracted for six generations (G0–G5) and they revealed that, although SCF alone or EPO alone supported similar total cell outputs in culture, stimulation with EPO resulted in significantly higher proliferation rates from G2 to G5 and higher death rates in G2, G3, and G5 compared with SCF. In addition, proliferation rates tended to increase from G1 to G5 in cultures supplemented with EPO and EPO + SCF, while they remained lower and more constant across generations with SCF. The results are consistent with the notion that SCF promotes CFU-E self-renewal while EPO promotes CFU-E differentiation in culture. Copyright © 2012 International Society for Advancement of Cytometry.
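A minimal sketch of a generation-structured birth-death model of the kind described above (the full Smith-Martin model additionally splits the cycle into A and B phases, which is omitted here); the generation-specific rates p and d below are hypothetical, not the fitted CFU-E values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Generation-structured population model: N[i] is the number of cells in
# generation i. Division moves cells from generation i to i+1 (two daughters):
#   dN_i/dt = 2*p_{i-1}*N_{i-1} - (p_i + d_i)*N_i
# p, d are generation-specific proliferation and death rates (1/h), assumed.
p = np.array([0.05, 0.06, 0.08, 0.09, 0.10, 0.0])  # G0..G5; G5 no longer divides
d = np.array([0.01, 0.01, 0.02, 0.02, 0.03, 0.03])

def rhs(t, N):
    dN = -(p + d) * N
    dN[1:] += 2 * p[:-1] * N[:-1]   # two daughters enter the next generation
    return dN

N0 = np.zeros(6)
N0[0] = 1e4                         # start with 10^4 undivided (G0) cells
sol = solve_ivp(rhs, (0, 72), N0)   # 72 h culture
print("cells per generation at 72 h:", np.round(sol.y[:, -1]).astype(int))
```

Fitting such a system to CFSE generation counts over time is what allows proliferation and death rates to be resolved per generation rather than as single population averages.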
Wisely, Beth A.; Schmidt, David A.; Weldon, Ray J.
2008-01-01
This Appendix contains 3 sections that 1) document published observations of surface creep on California faults, 2) construct line integrals across the WG-07 deformation model to compare to the Pacific–North America plate motion, and 3) construct strain tensors of volumes across the WG-07 deformation model to compare to the Pacific–North America plate motion. Observation of creep on faults is a critical part of our earthquake rupture model because if a fault is observed to creep the moment released as earthquakes is reduced from what would be inferred directly from the fault's slip rate. There is considerable debate about how representative creep measured at the surface during a short time period is of the whole fault surface through the entire seismic cycle (e.g. Hudnut and Clark, 1989). Observationally, it is clear that the amount of creep varies spatially and temporally on a fault. However, from a practical point of view a single creep rate is associated with a fault section and the reduction in seismic moment generated by the fault is accommodated in seismic hazard models by reducing the surface area that generates earthquakes or by reducing the slip rate that is converted into seismic energy. WG-07 decided to follow the practice of past Working Groups and the National Seismic Hazard Map and used creep rate (where it was judged to be interseismic, see Table P1) to reduce the area of the fault surface that generates seismic events. In addition to following past practice, this decision allowed the Working Group to use a reduction of slip rate as a separate factor to accommodate aftershocks, post seismic slip, possible aseismic permanent deformation along fault zones and other processes that are inferred to affect the entire surface area of a fault, and thus are better modeled as a reduction in slip rate. C-zones are also handled by a reduction in slip rate, because they are inferred to include regions of widely distributed shear that is not completely expressed as earthquakes large enough to model. Because the ratio of the rate of creep relative to the total slip rate is often used to infer the average depth of creep, the "depth" of creep can be calculated and used to reduce the surface area of a fault that generates earthquakes in our model. This reduction of surface area of rupture is described by an "aseismicity factor," assigned to each creeping fault in Appendix A. An aseismicity factor of less than 1 is only assigned to faults that are inferred to creep during the entire interseismic period. A single aseismicity factor was chosen for each section of the fault that creeps by expert opinion from the observations documented here. Uncertainties were not determined for the aseismicity factor, and thus it represents an unmodeled (and difficult to model) source of error. This Appendix simply provides the documentation of known creep, the type and precision of its measurement, and attempts to characterize the creep as interseismic, afterslip, transient or triggered. Parts 2 and 3 of this Appendix compare the WG-07 deformation model and the seismic source model it generates to the strain generated by the Pacific–North America plate motion. The concept is that plate motion generates essentially all of the elastic strain in the vicinity of the plate boundary that can be released as earthquakes. Adding up the slip rates on faults and all other sources of deformation (such as C-zones and distributed "background" seismicity) should approximately yield the plate motion.
This addition is usually accomplished by one of four approaches: 1) line integrals that sum deformation along discrete paths through the deforming zone between the two plates, 2) seismic moment tensors that add up seismic moment of a representative set of earthquakes generated by a crustal volume spanning the plate boundary, 3) strain tensors generated by adding up the strain associated with all of the faults in a crustal volume spanning the plate
Laboratory Photoionization Fronts in Nitrogen Gas: A Numerical Feasibility and Parameter Study
NASA Astrophysics Data System (ADS)
Gray, William J.; Keiter, P. A.; Lefevre, H.; Patterson, C. R.; Davis, J. S.; van Der Holst, B.; Powell, K. G.; Drake, R. P.
2018-05-01
Photoionization fronts play a dominant role in many astrophysical situations but remain difficult to achieve in a laboratory experiment. We present the results from a computational parameter study evaluating the feasibility of the photoionization experiment presented in the design paper by Drake et al. in which a photoionization front is generated in a nitrogen medium. The nitrogen gas density and the Planckian radiation temperature of the X-ray source define each simulation. Simulations modeled experiments in which the X-ray flux is generated by a laser-heated gold foil, suitable for experiments using many kJ of laser energy, and experiments in which the flux is generated by a “z-pinch” device, which implodes a cylindrical shell of conducting wires. The models are run using CRASH, our block-adaptive-mesh code for multimaterial radiation hydrodynamics. The radiative transfer model uses multigroup, flux-limited diffusion with 30 radiation groups. In addition, electron heat conduction is modeled using a single-group, flux-limited diffusion. In the theory, a photoionization front can exist only when the ratios of the electron recombination rate to the photoionization rate and the electron-impact ionization rate to the recombination rate lie in certain ranges. These ratios are computed for several ionization states of nitrogen. Photoionization fronts are found to exist for laser-driven models with moderate nitrogen densities (∼10²¹ cm⁻³) and radiation temperatures above 90 eV. For “z-pinch”-driven models, lower nitrogen densities are preferred (<10²¹ cm⁻³). We conclude that the proposed experiments are likely to generate photoionization fronts.
DOT National Transportation Integrated Search
2014-08-01
Workshop objectives: present the Texas Trip Generation Manual (how it was developed and how it can be used and built upon), with examples and discussion; present generic WP attraction rates; review trip attractions and advanced models.
Evolution of seafloor spreading rate based on Ar-40 degassing history
NASA Astrophysics Data System (ADS)
Tajika, Eiichi; Matsui, Takafumi
1993-05-01
A new degassing model of Ar-40 coupled with thermal evolution of the mantle is constructed to constrain the temporal variation of seafloor spreading rate. In this model, we take into account the effects of elemental partition and solubility during melt generation and bubble formation, and changes in both seafloor spreading rate and melt generation depth in the mantle. It is suggested that the seafloor spreading rate would have been almost the same as that of today over the history of the earth in order to explain the present amount of Ar-40 in the atmosphere. This result may also imply the mild degassing history of volatiles from the mantle.
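A toy box-model version of the argument: 40K decays with a branch fraction of about 0.105 to 40Ar, and mantle 40Ar degasses at a rate tied to the spreading rate. The degassing coefficient and the normalized inventories below are assumed for illustration; only the decay constant and branching fraction are standard values, and the paper's coupling to mantle thermal evolution, partitioning, and bubble formation is omitted.

```python
from scipy.integrate import solve_ivp

# Toy Ar-40 degassing model. State: [40K mantle, 40Ar mantle, 40Ar atmosphere],
# normalized to the initial 40K inventory.
lam  = 5.543e-10        # 1/yr, total 40K decay constant
f_ar = 0.105            # branching fraction to 40Ar
S0   = 3.0              # cm/yr, present-day spreading rate

def spreading(t):
    return S0           # roughly constant spreading, per the paper's result

def rhs(t, y):
    k40, ar_mantle, ar_atm = y
    prod  = f_ar * lam * k40
    degas = 1e-9 * spreading(t) * ar_mantle   # degassing coefficient assumed
    return [-lam * k40, prod - degas, degas]

sol = solve_ivp(rhs, (0, 4.5e9), [1.0, 0.0, 0.0], rtol=1e-8)
print(f"atmospheric 40Ar after 4.5 Gyr (normalized): {sol.y[2, -1]:.3f}")
```

Running the same system with a spreading history that was much faster early on would degas the mantle sooner and overshoot the present atmospheric 40Ar budget, which is the constraint the abstract describes.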
Assessment of Wind Resource in the Palk Strait using Different Methods
NASA Astrophysics Data System (ADS)
Gupta, T.; Khan, F.; Baidya Roy, S.; Miller, L.
2017-12-01
The Government of India has proposed a target of 60 GW of grid power from wind by the year 2022. The Palk Strait is one of the potential offshore wind power generation sites in India. It is a 65-135 km wide and 135 km long channel lying between the south eastern tip of India and northern Sri Lanka. The complex terrain bounding the two sides of the strait leads to enhanced wind speed and reduced variability in the wind direction. Here, we compare 3 distinct methodologies for estimating the generation rates for a hypothetical offshore wind farm array located in the strait. The methodologies include: 1) a traditional wind power density model that ignores the effect of turbine interactions on generation rates; 2) the PARK wake model; and 3) a high resolution weather model (WRF) with a wind turbine parameterization. Using the WRF model as our baseline, we find that the simple model overestimates generation by an order of magnitude, while the wake model underestimates generation rates by about 5%. The reason for these differences relates to the influence of wind turbines on the atmospheric flow, wherein the WRF model is able to capture the effect of both the complex terrain and wind turbine atmospheric boundary layer interactions. Lastly, a model evaluation is conducted which shows that 10-m wind speeds and directions from WRF are comparable with the satellite data. Hence, we conclude from the study that each of these methodologies may have merit, but should a wind farm be deployed in such complex terrain, we expect the WRF method to give better estimates of wind resource assessment, capturing the physical processes emerging from the interactions between the offshore wind farm and the surrounding terrain.
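The first two methodologies are simple enough to sketch in a few lines. The snippet below contrasts the no-interaction wind-power-density estimate with the Jensen/PARK wake deficit for a single upstream-downstream turbine pair; rotor size, thrust coefficient, wake decay constant, spacing, and power coefficient are all assumed values, and the WRF-based method has no comparably small analogue.

```python
import numpy as np

rho, D, v0 = 1.225, 100.0, 9.0   # air density (kg/m^3), rotor diameter (m), free wind (m/s)
Ct, k = 0.8, 0.04                # thrust coefficient and offshore wake decay constant (assumed)
Cp = 0.4                         # assumed power coefficient
A = np.pi * (D / 2)**2

# 1) Wind power density estimate: ignores turbine interactions entirely.
P_free = 0.5 * rho * A * v0**3 * Cp
print(f"no-wake estimate: {P_free/1e6:.2f} MW")

# 2) Jensen/PARK wake: wind speed x metres behind an upstream turbine,
#    v = v0 * (1 - 2a / (1 + 2*k*x/D)^2), with axial induction a.
a = 0.5 * (1 - np.sqrt(1 - Ct))
x = 700.0                        # downstream spacing (m), assumed
v_wake = v0 * (1 - 2 * a / (1 + 2 * k * x / D)**2)
P_wake = 0.5 * rho * A * v_wake**3 * Cp
print(f"waked wind {v_wake:.2f} m/s -> {P_wake/1e6:.2f} MW "
      f"({100*(1 - P_wake/P_free):.1f}% deficit)")
```

Because power scales with the cube of wind speed, even a modest wake deficit translates into a large generation loss, which is why the no-interaction estimate overshoots badly in a packed array.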
Advances in Time Estimation Methods for Molecular Data.
Kumar, Sudhir; Hedges, S Blair
2016-04-01
Molecular dating has become central to placing a temporal dimension on the tree of life. Methods for estimating divergence times have been developed for over 50 years, beginning with the proposal of the molecular clock in 1962. We categorize the chronological development of these methods into four generations based on the timing of their origin. In the first generation approaches (1960s-1980s), a strict molecular clock was assumed to date divergences. In the second generation approaches (1990s), the equality of evolutionary rates between species was first tested and then a strict molecular clock applied to estimate divergence times. The third generation approaches (since ∼2000) account for differences in evolutionary rates across the tree by using a statistical model, obviating the need to assume a clock or to test the equality of evolutionary rates among species. Bayesian methods in the third generation require a specific or uniform prior on the speciation process and enable the inclusion of uncertainty in clock calibrations. The fourth generation approaches (since 2012) allow rates to vary from branch to branch, but do not need prior selection of a statistical model to describe the rate variation or the specification of a speciation model. With high accuracy, comparable to Bayesian approaches, and speeds that are orders of magnitude faster, fourth generation methods are able to produce reliable timetrees of thousands of species using genome scale data. We found that early time estimates from second generation studies are similar to those of third and fourth generation studies, indicating that methodological advances have not fundamentally altered the timetree of life, but rather have facilitated time estimation by enabling the inclusion of more species. Nonetheless, we feel an urgent need for testing the accuracy and precision of third and fourth generation methods, including their robustness to misspecification of priors in the analysis of large phylogenies and data sets. © The Author(s) 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
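For concreteness, the first-generation (strict clock) estimate is a one-liner: a pairwise distance d accumulates along both lineages, so the divergence time is t = d/(2r). The numbers below are illustrative, not from the article.

```python
# Strict molecular clock: divergence time from a pairwise genetic distance.
d = 0.12        # substitutions per site between two species (illustrative)
r = 1.0e-9      # substitutions per site per year per lineage (assumed rate)
t = d / (2 * r)
print(f"estimated divergence time: {t/1e6:.0f} Myr")
```

Later generations of methods replace the single fixed r with rate models that vary across branches, which is what the review's third and fourth generations formalize.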
NASA Astrophysics Data System (ADS)
Khan, M. Ijaz; Hayat, Tasawar; Alsaedi, Ahmed
2018-02-01
This modeling and computational study presents viscous fluid flow with variable properties due to a rotating stretchable disk. The rotating flow is generated by a nonlinearly stretching rotating surface. Nonlinear thermal radiation and heat generation/absorption are studied. The fluid is electrically conducting in the presence of a constant applied magnetic field; electric polarization and the induced magnetic field are neglected. Attention is focused on the entropy generation rate and Bejan number. The entropy generation rate and Bejan number clearly depend on the velocity and thermal fields. The von Kármán approach is utilized to convert the partial differential expressions into ordinary ones. These expressions are non-dimensionalized, and numerical results are obtained for the flow variables. The effects of the magnetic parameter, Prandtl number, radiative parameter, heat generation/absorption parameter, and slip parameter on the velocity and temperature fields, as well as on the entropy generation rate and Bejan number, are discussed. Drag forces (radial and tangential) and heat transfer rates are calculated and discussed. Furthermore, the entropy generation rate is a decreasing function of the magnetic variable and Reynolds number. The Bejan number effect on the entropy generation rate is opposite to that of the magnetic variable. Also, opposite behavior of the heat transfer rate is observed for varying estimations of the radiative and slip variables.
Global sensitivity analysis of the BSM2 dynamic influent disturbance scenario generator.
Flores-Alsina, Xavier; Gernaey, Krist V; Jeppsson, Ulf
2012-01-01
This paper presents the results of a global sensitivity analysis (GSA) of a phenomenological model that generates dynamic wastewater treatment plant (WWTP) influent disturbance scenarios. This influent model is part of the Benchmark Simulation Model (BSM) family and creates realistic dry/wet weather files describing diurnal, weekend and seasonal variations through the combination of different generic model blocks, i.e. households, industry, rainfall and infiltration. The GSA is carried out by combining Monte Carlo simulations and standardized regression coefficients (SRC). Cluster analysis is then applied, classifying the influence of the model parameters into strong, medium and weak. The results show that the method is able to decompose the variance of the model predictions (R² > 0.9) satisfactorily, thus identifying the model parameters with strongest impact on several flow rate descriptors calculated at different time resolutions. Catchment size (PE) and the production of wastewater per person equivalent (QperPE) are two parameters that strongly influence the yearly average dry weather flow rate and its variability. Wet weather conditions are mainly affected by three parameters: (1) the probability of occurrence of a rain event (Llrain); (2) the catchment size, incorporated in the model as a parameter representing the conversion from mm rain·day⁻¹ to m³·day⁻¹ (Qpermm); and, (3) the quantity of rain falling on permeable areas (aH). The case study also shows that in both dry and wet weather conditions the SRC ranking changes when the time scale of the analysis is modified, thus demonstrating the potential to identify the effect of the model parameters on the fast/medium/slow dynamics of the flow rate. The paper ends with a discussion on the interpretation of GSA results and of the advantages of using synthetic dynamic flow rate data for WWTP influent scenario generation. This section also includes general suggestions on how to apply the proposed methodology to any influent generator to adapt the created time series to a modeller's demands.
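A minimal sketch of the GSA recipe described above (Monte Carlo sampling followed by standardized regression coefficients), applied to a toy influent model standing in for the BSM2 generator; the three inputs and their ranges are hypothetical, chosen only to echo the parameter names in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo sampling of a toy influent model: yearly average flow as a
# function of catchment size (PE), per-person wastewater production (QperPE),
# and an infiltration fraction. Ranges are hypothetical.
n = 5000
PE     = rng.uniform(2e4, 1e5, n)      # person equivalents
QperPE = rng.uniform(0.10, 0.25, n)    # m^3/person/day
infil  = rng.uniform(0.0, 0.3, n)      # infiltration as a fraction of DWF

Q = PE * QperPE * (1 + infil)          # toy model output (m^3/day)

# SRCs: regress the standardized output on the standardized inputs.
X  = np.column_stack([PE, QperPE, infil])
Xs = (X - X.mean(0)) / X.std(0)
ys = (Q - Q.mean()) / Q.std()
src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)

r2 = 1 - np.sum((ys - Xs @ src)**2) / np.sum(ys**2)
print(f"R^2 of the linear decomposition: {r2:.3f}")
for name, b in zip(["PE", "QperPE", "infil"], src):
    print(f"SRC({name}) = {b:+.2f}")
```

The R² of the regression tells you how much of the output variance the linear SRC decomposition captures, which is the same validity check (R² > 0.9) the paper reports.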
Geometric model for softwood transverse thermal conductivity. Part I
Hong-mei Gu; Audrey Zink-Sharp
2005-01-01
Thermal conductivity is a very important parameter in determining heat transfer rate and is required for the development of drying models and in industrial operations such as adhesive cure rate. Geometric models for predicting softwood thermal conductivity in the radial and tangential directions were generated in this study based on observation and measurements of wood...
Characterization of Filters Loaded With Reactor Strontium Carbonate - 13203
DOE Office of Scientific and Technical Information (OSTI.GOV)
Josephson, Walter S.; Steen, Franciska H.
A collection of three highly radioactive filters containing reactor strontium carbonate were being prepared for disposal. All three filters were approximately characterized at the time of manufacture by gravimetric methods. The first filter had been partially emptied, and the quantity of residual activity was uncertain. Dose rate to activity modeling using the Monte-Carlo N Particle (MCNP) code was selected to confirm the gravimetric characterization of the full filters, and to fully characterize the partially emptied filter. Although dose rate to activity modeling using MCNP is a common technique, it is not often used for Bremsstrahlung-dominant materials such as reactor strontium. As a result, different MCNP modeling options were compared to determine the optimum approach. This comparison indicated that the accuracy of the results was heavily dependent on the MCNP modeling details and the location of the dose rate measurement point. The optimum model utilized a photon spectrum generated by the Oak Ridge Isotope Generation and Depletion (ORIGEN) code and dose rates measured at 30 cm. Results from the optimum model agreed with the gravimetric estimates within 15%. It was demonstrated that dose rate to activity modeling can be successful for Bremsstrahlung-dominant radioactive materials. However, the degree of success is heavily dependent on the choice of modeling techniques. (authors)
The new Kuznets cycle: a test of the Easterlin-Wachter-Wachter hypothesis.
Ahlburg, D A
1982-01-01
The aim of this paper is to evaluate the Easterlin-Wachter-Wachter model of the effect of the size of one generation on the size of the succeeding generation. An attempt is made "to identify and test empirically each component of the Easterlin-Wachter-Wachter model..., to show how the components collapse to give a closed demographic model of generation size, and to investigate the impacts of relative cohort size on the economic performance of a cohort." The models derived are then used to generate forecasts of the U.S. birth rate to the year 2050. The results provide support for the major components of the original model. excerpt
Predictive model for CO2 generation and decay in building envelopes
NASA Astrophysics Data System (ADS)
Aglan, Heshmat A.
2003-01-01
Understanding carbon dioxide generation and decay patterns in buildings with high occupancy levels is useful for identifying their indoor air quality, air change rates, percent fresh air makeup, and occupancy pattern, and for determining how a variable air volume system can be modulated to offset undesirable CO2 levels. A mathematical model governing the generation and decay of CO2 in building envelopes with forced ventilation due to high occupancy is developed. The model has been verified experimentally in a newly constructed energy-efficient healthy house. It was shown that the model accurately predicts the CO2 concentration at any time during the generation and decay processes.
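The standard closed form for a single well-mixed zone follows from the mass balance V dC/dt = G + Q(C_out - C). A minimal sketch with illustrative room and occupancy values (the per-person CO2 generation rate of roughly 0.005 L/s is a typical sedentary figure, not the paper's measured value):

```python
import numpy as np

# Well-mixed single-zone CO2 balance with forced ventilation:
#   V*dC/dt = G + Q*(C_out - C)
# giving C(t) = C_out + G/Q + (C0 - C_out - G/Q) * exp(-Q*t/V).
V     = 500.0            # room volume, m^3 (assumed)
Q     = 0.25             # outdoor air flow, m^3/s (assumed)
C_out = 400e-6           # outdoor CO2, volume fraction
G     = 30 * 5.2e-6      # 30 occupants x ~0.0052 L/s CO2 each, in m^3/s
C0    = C_out            # room starts at outdoor concentration

for t in np.linspace(0, 3 * 3600, 4):    # three hours of occupancy
    C = C_out + G / Q + (C0 - C_out - G / Q) * np.exp(-Q * t / V)
    print(f"t = {t/3600:3.0f} h  CO2 = {C*1e6:6.0f} ppm")
```

Setting G to zero after the occupants leave gives the decay branch of the same solution, which is the generation-and-decay pairing the model describes.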
Sun, Bo; Dong, Hongyu; He, Di; Rao, Dandan; Guan, Xiaohong
2016-02-02
Permanganate can be activated by bisulfite to generate soluble Mn(III) (noncomplexed with ligands other than H2O and OH⁻) which oxidizes organic contaminants at extraordinarily high rates. However, the generation of Mn(III) in the permanganate/bisulfite (PM/BS) process and the reactivity of Mn(III) toward emerging contaminants have never been quantified. In this work, Mn(III) generated in the PM/BS process was shown to absorb at 230-290 nm for the first time and disproportionated more easily at higher pH, and thus, the utilization rate of Mn(III) for decomposing organic contaminant was low under alkaline conditions. A Mn(III) generation and utilization model was developed to get the second-order reaction rate parameters of benzene oxidation by soluble Mn(III), and then, benzene was chosen as the reference probe to build a competition kinetics method, which was employed to obtain the second-order rate constants of organic contaminants oxidation by soluble Mn(III). The results revealed that the second-order rate constants of aniline and bisphenol A oxidation by soluble Mn(III) were in the range of 10⁵-10⁶ M⁻¹ s⁻¹. With the presence of soluble Mn(III) at micromolar concentration, contaminants could be oxidized with the observed rates several orders of magnitude higher than those by common oxidation processes, implying the great potential application of the PM/BS process in water and wastewater treatment.
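The competition kinetics step can be sketched compactly: because target and probe are oxidized by the same transient oxidant via second-order kinetics, the slope of the target's log-decay against the probe's log-decay equals k_target/k_probe. The reference rate constant and the decay data below are hypothetical stand-ins, not the paper's measurements.

```python
import numpy as np

# Competition kinetics with a reference probe (benzene in the paper):
#   ln([T]0/[T]t) / ln([R]0/[R]t) = k_T / k_R
k_ref = 2.0e4                   # M^-1 s^-1, assumed rate constant for the probe

# Paired fractional survivals measured at several sampling times (hypothetical):
ref_decay    = np.array([0.90, 0.75, 0.60, 0.45])   # [R]t/[R]0
target_decay = np.array([0.55, 0.20, 0.05, 0.01])   # [T]t/[T]0

slope = np.polyfit(np.log(ref_decay), np.log(target_decay), 1)[0]
print(f"k_target ~ {slope * k_ref:.2e} M^-1 s^-1")
```

The method's appeal is that the transient oxidant concentration cancels out of the ratio, so the unknown Mn(III) exposure never needs to be measured directly.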
An Extended IEEE 118-Bus Test System With High Renewable Penetration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pena, Ivonne; Martinez-Anido, Carlo Brancucci; Hodge, Bri-Mathias
This article describes a new publicly available version of the IEEE 118-bus test system, named NREL-118. The database is based on the transmission representation (buses and lines) of the IEEE 118-bus test system, with a reconfigured generation representation using three regions of the US Western Interconnection from the latest Western Electricity Coordination Council (WECC) 2024 Common Case [1]. Time-synchronous hourly load, wind, and solar time series are provided for over one year (8784 hours). The public database presented and described in this manuscript will allow researchers to model a test power system using detailed transmission, generation, load, wind, and solar data. This database includes key additional features that add to the current IEEE 118-bus test model, such as: the inclusion of 10 generation technologies with different heat rate functions, minimum stable levels and ramping rates, GHG emissions rates, regulation and contingency reserves, and hourly time series data for one full year for load, wind and solar generation.
Roberts, Michael F; Lightfoot, Edwin N; Porter, Warren P
2011-01-01
Our recent article (Roberts et al. 2010) proposes a mechanistic model for the relation between basal metabolic rate (BMR) and body mass (M) in mammals. The model is based on heat-transfer principles in the form of an equation for distributed heat generation within the body. The model can also be written in the form of the allometric equation BMR = aM^b, in which a is the coefficient of the mass term and b is the allometric exponent. The model generates two interesting results: it predicts that b takes the value 2/3, indicating that BMR is proportional to surface area in endotherms. It also provides an explanation of the physiological components that make up a, that is, respiratory heat loss, core-skin thermal conductance, and core-skin thermal gradient. Some of the ideas in our article have been questioned (Seymour and White 2011), and this is our response to those questions. We specifically address the following points: whether a heat-transfer model can explain the level of BMR in mammals, whether our test of the model is inadequate because it uses the same literature data that generated the values of the physiological variables, and whether geometry and empirical values combine to make a "coincidence" that makes the model only appear to conform to real processes.
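The allometric form is conventionally fitted by least squares on log-transformed data, since log(BMR) = log(a) + b log(M) is linear in b. A minimal sketch with synthetic data generated around b = 2/3 (not the literature dataset used in the exchange above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic body masses (g) and BMR values (W) scattered around BMR = a*M^(2/3).
M   = np.logspace(1, 5, 60)
bmr = 0.06 * M**(2/3) * rng.lognormal(0.0, 0.1, 60)

# Ordinary least squares on the log-log transform recovers b and a.
b, log_a = np.polyfit(np.log(M), np.log(bmr), 1)
print(f"estimated exponent b = {b:.3f}, coefficient a = {np.exp(log_a):.3f}")
```

Much of the 2/3-versus-3/4 debate comes down to exactly this kind of regression on noisy, phylogenetically structured data, which is why the provenance of the fitted dataset matters in the exchange.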
Modeling of grain size strengthening in tantalum at high pressures and strain rates
Rudd, Robert E.; Park, H. -S.; Cavallo, R. M.; ...
2017-01-01
Laser-driven ramp wave compression experiments have been used to investigate the strength (flow stress) of tantalum and other metals at high pressures and high strain rates. Recently this kind of experiment has been used to assess the dependence of the strength on the average grain size of the material, finding no detectable variation with grain size. The insensitivity to grain size has been understood theoretically to result from the dominant effect of the high dislocation density generated at the extremely high strain rates of the experiment. Here we review the experiments and describe in detail the multiscale strength model used to simulate them. The multiscale strength model has been extended to include the effect of geometrically necessary dislocations generated at the grain boundaries during compatible plastic flow in the polycrystalline metal. Lastly, we use the extended model to make predictions of the threshold strain rates and grain sizes below which grain size strengthening would be observed in the laser-driven Rayleigh-Taylor experiments.
Finite Volume Numerical Methods for Aeroheating Rate Calculations from Infrared Thermographic Data
NASA Technical Reports Server (NTRS)
Daryabeigi, Kamran; Berry, Scott A.; Horvath, Thomas J.; Nowak, Robert J.
2003-01-01
The use of multi-dimensional finite volume numerical techniques with finite-thickness models for calculating aeroheating rates from measured global surface temperatures on hypersonic wind tunnel models was investigated. Both direct and inverse finite volume techniques were investigated and compared with the one-dimensional semi-infinite technique. Global transient surface temperatures were measured using an infrared thermographic technique on a 0.333-scale model of the Hyper-X forebody in the Langley Research Center 20-Inch Mach 6 Air tunnel. In these tests the effectiveness of vortices generated via gas injection for initiating hypersonic transition on the Hyper-X forebody was investigated. An array of streamwise-oriented heating striations was generated and visualized downstream of the gas injection sites. In regions without significant spatial temperature gradients, one-dimensional techniques provided accurate aeroheating rates. In regions with sharp temperature gradients due to the striation patterns, two-dimensional heat transfer techniques were necessary to obtain accurate heating rates. The use of the one-dimensional technique resulted in differences of 20% in the calculated heating rates because it did not account for lateral heat conduction in the model.
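A minimal sketch of the direct finite volume idea in one dimension: drive the front face of a finite-thickness wall with a measured surface temperature history, hold the back face adiabatic, and recover the surface heat flux from the front-cell energy balance. Material properties and the temperature trace are assumptions, not the Hyper-X test values, and the actual method is multi-dimensional.

```python
import numpy as np

k, rho, cp = 1.0, 1800.0, 1000.0   # W/m-K, kg/m^3, J/kg-K (assumed ceramic)
L, n = 0.01, 50                    # wall thickness (m), number of cells
dx = L / n
alpha = k / (rho * cp)
dt = 0.2 * dx**2 / alpha           # explicit stability margin

T = np.full(n, 300.0)              # initial temperature, K
t, t_end = 0.0, 2.0
while t < t_end:
    Ts = 300.0 + 100.0 * min(t, 1.0)      # "measured" surface temperature ramp
    Tn = T.copy()
    # interior cells: standard explicit finite volume update
    Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2*T[1:-1] + T[:-2])
    # front cell: imposed surface temperature at the face (distance dx/2)
    Tn[0] = T[0] + alpha * dt / dx**2 * (T[1] - 3*T[0] + 2*Ts)
    # back cell: adiabatic face
    Tn[-1] = T[-1] + alpha * dt / dx**2 * (T[-2] - T[-1])
    T, t = Tn, t + dt

q_surface = k * (Ts - T[0]) / (dx / 2)    # flux entering through the surface
print(f"recovered heating rate at t = {t_end:.1f} s: {q_surface/1e3:.1f} kW/m^2")
```

Extending the stencil to two dimensions is what lets the method account for the lateral conduction that the one-dimensional semi-infinite technique misses near the striations.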
Tewari, Shivendra G.; Bugenhagen, Scott M.; Palmer, Bradley M.; Beard, Daniel A.
2015-01-01
Despite extensive study over the past six decades the coupling of chemical reaction and mechanical processes in muscle dynamics is not well understood. We lack a theoretical description of how chemical processes (metabolite binding, ATP hydrolysis) influence and are influenced by mechanical processes (deformation and force generation). To address this need, a mathematical model of the muscle cross-bridge (XB) cycle based on Huxley’s sliding filament theory is developed that explicitly accounts for the chemical transformation events and the influence of strain on state transitions. The model is identified based on elastic and viscous moduli data from mouse and rat myocardial strips over a range of perturbation frequencies, and MgATP and inorganic phosphate (Pi) concentrations. Simulations of the identified model reproduce the observed effects of MgATP and MgADP on the rate of force development. Furthermore, simulations reveal that the rate of force re-development measured in slack-restretch experiments is not directly proportional to the rate of XB cycling. For these experiments, the model predicts that the observed increase in the rate of force generation with increased Pi concentration is due to inhibition of cycle turnover by Pi. Finally, the model captures the observed phenomena of force yielding suggesting that it is a result of rapid detachment of stretched attached myosin heads. PMID:25681584
Killeen, Peter R.; Sitomer, Matthew T.
2008-01-01
Mathematical Principles of Reinforcement (MPR) is a theory of reinforcement schedules. This paper reviews the origin of the principles constituting MPR: arousal, association and constraint. Incentives invigorate responses, in particular those preceding and predicting the incentive. The process that generates an associative bond between stimuli, responses and incentives is called coupling. The combination of arousal and coupling constitutes reinforcement. Models of coupling play a central role in the evolution of the theory. The time required to respond constrains the maximum response rates, and generates a hyperbolic relation between rate of responding and rate of reinforcement. Models of control by ratio schedules are developed to illustrate the interaction of the principles. Correlations among parameters are incorporated into the structure of the models, and assumptions that were made in the original theory are refined in light of current data. PMID:12729968
A continuum mathematical model of endothelial layer maintenance and senescence.
Wang, Ying; Aguda, Baltazar D; Friedman, Avner
2007-08-10
The monolayer of endothelial cells (ECs) lining the inner wall of blood vessels deteriorates as a person ages due to a complex interplay of a variety of causes including cell death arising from shear stress of blood flow and cellular oxidative stress, cellular senescence, and decreased rate of replacement of dead ECs by progenitor stem cells. A continuum mathematical model is developed to describe the dynamics of large EC populations of the endothelium using a system of differential equations for the number densities of cells of different generations starting from endothelial progenitors to senescent cells, as well as the densities of dead cells and the holes created upon clearing dead cells. Aging of cells is manifested in three ways, namely, losing the ability to divide when the Hayflick limit of 50 generations is reached, decreasing replication rate parameters and increasing death rate parameters as cells divide; due to the dependence of these rate parameters on cell generation, the model predicts a narrow distribution of cell densities peaking at a particular cell generation. As the chronological age of a person advances, the peak of the distribution - corresponding to the age of the endothelium - moves towards senescence correspondingly. However, computer simulations also demonstrate that sustained and enhanced stem cell homing can halt the aging process of the endothelium by maintaining a stationary cell density distribution that peaks well before the Hayflick limit. The healing rates of damaged endothelia for young, middle-aged, and old persons are compared and are found to be particularly sensitive to the stem cell homing parameter. The proposed model describes the aging of the endothelium as being driven by cellular senescence, with a rate that does not necessarily correspond to the chronological aging of a person. It is shown that the age of the endothelium depends sensitively on the homing rates of EC progenitor cells.
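A coarse sketch of such a generation-structured system (the paper's continuum model, here reduced to one ordinary differential equation per generation): replication falls and death rises with generation index, progenitor homing feeds generation 0, and division stops at the Hayflick limit. All rate values are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

G = 50                                  # Hayflick limit (generations)
i = np.arange(G + 1)
r = 0.1 * np.exp(-i / 5.0)              # replication rate falls with generation (1/day)
r[-1] = 0.0                             # no division past the Hayflick limit
d = 0.01 * (1 + i / 10.0)               # death rate rises with generation (1/day)
s = 0.05                                # progenitor homing into generation 0

def rhs(t, n):
    dn = -(r + d) * n
    dn[0] += s                          # stem-cell homing replenishes generation 0
    dn[1:] += 2 * r[:-1] * n[:-1]       # each division yields two daughters
    return dn

sol = solve_ivp(rhs, (0, 3000), np.zeros(G + 1), rtol=1e-6)  # ~8 years, in days
dist = sol.y[:, -1]
print(f"cell-density distribution peaks at generation {int(np.argmax(dist))}")
```

With sustained homing the peak of the distribution settles well before the Hayflick limit, which is the stationary, non-aging regime the abstract describes; setting s toward zero lets the peak drift toward senescence instead.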
Bendor, Daniel
2015-01-01
In auditory cortex, temporal information within a sound is represented by two complementary neural codes: a temporal representation based on stimulus-locked firing and a rate representation, where discharge rate co-varies with the timing between acoustic events but lacks a stimulus-synchronized response. Using a computational neuronal model, we find that stimulus-locked responses are generated when sound-evoked excitation is combined with strong, delayed inhibition. In contrast to this, a non-synchronized rate representation is generated when the net excitation evoked by the sound is weak, which occurs when excitation is coincident and balanced with inhibition. Using single-unit recordings from awake marmosets (Callithrix jacchus), we validate several model predictions, including differences in the temporal fidelity, discharge rates and temporal dynamics of stimulus-evoked responses between neurons with rate and temporal representations. Together these data suggest that feedforward inhibition provides a parsimonious explanation of the neural coding dichotomy observed in auditory cortex. PMID:25879843
Heylman, Christopher M; Santoso, Sharon; Krebs, Melissa D; Saidel, Gerald M; Alsberg, Eben; Muschler, George F
2014-04-01
We have developed a mathematical model that allows simulation of oxygen distribution in a bone defect as a tool to explore the likely effects of local changes in cell concentration, defect size or geometry, local oxygen delivery with oxygen-generating biomaterials (OGBs), and changes in the rate of oxygen consumption by cells within a defect. Experimental data for the oxygen release rate from an OGB and the oxygen consumption rate of a transplanted cell population are incorporated into the model. With these data, model simulations allow prediction of spatiotemporal oxygen concentration within a given defect and the sensitivity of oxygen tension to changes in critical variables. This information may help to minimize the number of experiments in animal models that determine the optimal combinations of cells, scaffolds, and OGBs in the design of current and future bone regeneration strategies. Bone marrow-derived nucleated cell data suggest that oxygen consumption is dependent on oxygen concentration. OGB oxygen release is shown to be a time-dependent function that must be measured for accurate simulation. Simulations quantify the dependency of oxygen gradients in an avascular defect on cell concentration, cell oxygen consumption rate, OGB oxygen generation rate, and OGB geometry.
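A minimal one-dimensional sketch of the oxygen balance such a model solves: diffusion from a vascularized edge, Michaelis-Menten consumption by transplanted cells, and a constant source term standing in for the OGB. All parameter values are order-of-magnitude assumptions, and the published model's measured, time-dependent OGB release is replaced here by a constant.

```python
import numpy as np

D     = 2.0e-9           # m^2/s, O2 diffusivity in tissue (assumed)
Vmax  = 2.0e-3           # mol/m^3/s, max cellular consumption (assumed)
Km    = 0.01             # mol/m^3, Michaelis constant (assumed)
S_ogb = 5.0e-4           # mol/m^3/s, constant OGB release (assumed)
L, n  = 2.0e-3, 100      # 2 mm domain, grid points
C_b   = 0.2              # mol/m^3, O2 at the vascularized edge

dx = L / (n - 1)
dt = 0.2 * dx**2 / D                      # explicit stability margin
C  = np.full(n, C_b)

for _ in range(100_000):                  # march to (near) steady state
    diff = D * (C[2:] - 2*C[1:-1] + C[:-2]) / dx**2
    C[1:-1] += dt * (diff - Vmax * C[1:-1] / (Km + C[1:-1]) + S_ogb)
    C[0]  = C_b                           # vascularized edge held at C_b
    C[-1] = C[-2]                         # no-flux far (avascular) edge
    np.clip(C, 0.0, None, out=C)

print(f"minimum O2 in the defect: {C.min():.4f} mol/m^3")
```

Sweeping cell density (through Vmax), defect size L, or the OGB source in this loop reproduces, in miniature, the sensitivity analysis the full model is built to perform.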
DOE Office of Scientific and Technical Information (OSTI.GOV)
Majumdar, S.
1997-02-01
Available models for predicting failure of flawed and unflawed steam generator tubes under normal operating, accident, and severe accident conditions are reviewed. Tests conducted in the past, though limited, tended to show that the earlier flow-stress model for part-through-wall axial cracks overestimated the damaging influence of deep cracks. This observation was confirmed by further tests at high temperatures, as well as by finite-element analysis. A modified correlation for deep cracks can correct this shortcoming of the model. Recent tests have shown that lateral restraint can significantly increase the failure pressure of tubes with unsymmetrical circumferential cracks. This observation was confirmed by finite-element analysis. The rate-independent flow stress models that are successful at low temperatures cannot predict the rate-sensitive failure behavior of steam generator tubes at high temperatures. Therefore, a creep rupture model for predicting failure was developed and validated by tests under various temperature and pressure loadings that can occur during postulated severe accidents.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pursley, J; Gueorguiev, G; Prichard, H
Purpose: To demonstrate the commissioning of constant dose rate volumetric modulated arc therapy (VMAT) in the Raystation treatment planning system for a Varian Clinac iX with Exact couch. Methods: Constant dose rate (CDR) VMAT is an option in the Raystation treatment planning system, enabling VMAT delivery on Varian linacs without a RapidArc upgrade. Raystation 4.7 was used to commission CDR-VMAT for a Varian Clinac iX. Raystation arc model parameters were selected to match machine deliverability characteristics. A Varian Exact couch model was added to Raystation 4.7 and commissioned for use in VMAT optimization. CDR-VMAT commissioning checks were performed on the linac, including patient-specific QA measurements for 10 test patients using both the ArcCHECK from Sun Nuclear Corporation and COMPASS from IBA Dosimetry. Multi-criteria optimization (MCO) in Raystation was used for CDR-VMAT planning. Results: Raystation 4.7 generated clinically acceptable and deliverable CDR-VMAT plans for the Varian Clinac. VMAT plans were optimized including a model of the Exact couch with both rails in the out positions. CDR-VMAT plans generated with MCO in Raystation were dosimetrically comparable to Raystation MCO-generated IMRT plans. Patient-specific QA measurements with the ArcCHECK on the couch showed good agreement with the treatment planning system prediction. Patient-specific, structure-specific, multi-statistical parameter 3D QA measurements with gantry-mounted COMPASS also showed good agreement. Conclusion: Constant dose rate VMAT was successfully modeled in Raystation 4.7 for a Varian Clinac iX, and Raystation’s multicriteria optimization generated constant dose rate VMAT plans which were deliverable and dosimetrically comparable to IMRT plans.
NASA Astrophysics Data System (ADS)
Rout, Bapin Kumar; Brooks, Geoff; Rhamdhani, M. Akbar; Li, Zushu; Schrama, Frank N. H.; Sun, Jianjun
2018-04-01
A multi-zone kinetic model coupled with a dynamic slag generation model was developed for the simulation of hot metal and slag composition during the basic oxygen furnace (BOF) operation. Three reaction zones, (i) the jet impact zone, (ii) the slag-bulk metal zone, and (iii) the slag-metal-gas emulsion zone, were considered for the calculation of overall refining kinetics. In the rate equations, the transient rate parameters were mathematically described as functions of the process variables. Micro- and macroscopic rate calculation methodologies (micro-kinetics and macro-kinetics) were developed to estimate the total refining contributed by the recirculating metal droplets through the slag-metal emulsion zone. The micro-kinetics involves developing the rate equation for individual droplets in the emulsion. The mathematical models for the size distribution of initial droplets, kinetics of simultaneous refining of elements, the residence time in the emulsion, and dynamic interfacial area change were established in the micro-kinetic model. In the macro-kinetics calculation, a droplet generation model was employed and the total amount of refining by emulsion was calculated by summing the refining from the entire population of returning droplets. A dynamic FetO generation model based on oxygen mass balance was developed and coupled with the multi-zone kinetic model. The effect of post-combustion on the evolution of slag and metal composition was investigated. The model was applied to a 200-ton top blowing converter and the simulated values of metal and slag composition were found to be in good agreement with the measured data. The post-combustion ratio was found to be an important factor in controlling the FetO content in the slag and the kinetics of Mn and P in a BOF process.
Cue generation and memory construction in direct and generative autobiographical memory retrieval.
Harris, Celia B; O'Connor, Akira R; Sutton, John
2015-05-01
Theories of autobiographical memory emphasise effortful, generative search processes in memory retrieval. However recent research suggests that memories are often retrieved directly, without effortful search. We investigated whether direct and generative retrieval differed in the characteristics of memories recalled, or only in terms of retrieval latency. Participants recalled autobiographical memories in response to cue words. For each memory, they reported whether it was retrieved directly or generatively, rated its visuo-spatial perspective, and judged its accompanying recollective experience. Our results indicated that direct retrieval was commonly reported and was faster than generative retrieval, replicating recent findings. The characteristics of directly retrieved memories differed from generatively retrieved memories: directly retrieved memories had higher field perspective ratings and lower observer perspective ratings. However, retrieval mode did not influence recollective experience. We discuss our findings in terms of cue generation and content construction, and the implication for reconstructive models of autobiographical memory. Copyright © 2015 Elsevier Inc. All rights reserved.
Moon, Byeong-Ui; Jones, Steven G; Hwang, Dae Kun; Tsai, Scott S H
2015-06-07
We present a technique that generates droplets using ultralow interfacial tension aqueous two-phase systems (ATPS). Our method combines a classical microfluidic flow focusing geometry with precisely controlled pulsating inlet pressure, to form monodisperse ATPS droplets. The dextran (DEX) disperse phase enters through the central inlet with variable on-off pressure cycles controlled by a pneumatic solenoid valve. The continuous phase polyethylene glycol (PEG) solution enters the flow focusing junction through the cross channels at a fixed flow rate. The on-off cycles of the applied pressure, combined with the fixed flow rate cross flow, make it possible for the ATPS jet to break up into droplets. We observe different droplet formation regimes with changes in the applied pressure magnitude and timing, and the continuous phase flow rate. We also develop a scaling model to predict the size of the generated droplets, and the experimental results show a good quantitative agreement with our scaling model. Additionally, we demonstrate the potential for scaling-up of the droplet production rate, with a simultaneous two-droplet generating geometry. We anticipate that this simple and precise approach to making ATPS droplets will find utility in biological applications where the all-biocompatibility of ATPS is desirable.
The TimeGeo modeling framework for urban mobility without travel surveys
Jiang, Shan; Yang, Yingxiang; Gupta, Siddharth; Veneziano, Daniele; Athavale, Shounak; González, Marta C.
2016-01-01
Well-established fine-scale urban mobility models today depend on detailed but cumbersome and expensive travel surveys for their calibration. Not much is known, however, about the set of mechanisms needed to generate complete mobility profiles if only using passive datasets with mostly sparse traces of individuals. In this study, we present a mechanistic modeling framework (TimeGeo) that effectively generates urban mobility patterns with resolution of 10 min and hundreds of meters. It ties together the inference of home and work activity locations from data, with the modeling of flexible activities (e.g., other) in space and time. The temporal choices are captured by only three features: the weekly home-based tour number, the dwell rate, and the burst rate. These combined generate for each individual: (i) stay duration of activities, (ii) number of visited locations per day, and (iii) daily mobility networks. These parameters capture how an individual deviates from the circadian rhythm of the population, and generate the wide spectrum of empirically observed mobility behaviors. The spatial choices of visited locations are modeled by a rank-based exploration and preferential return (r-EPR) mechanism that incorporates space in the EPR model. Finally, we show that a hierarchical multiplicative cascade method can measure the interaction between land use and generation of trips. In this way, urban structure is directly related to the observed distance of travels. This framework allows us to fully embrace the massive amount of individual data generated by information and communication technologies (ICTs) worldwide to comprehensively model urban mobility without travel surveys. PMID:27573826
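The exploration versus preferential-return choice at the heart of the spatial mechanism can be sketched in a few lines. TimeGeo's version is rank-based in space (the r-EPR above), which this location-agnostic sketch omits; rho and gamma are illustrative, not the calibrated values.

```python
import numpy as np

rng = np.random.default_rng(7)

# Exploration and preferential return (EPR): with probability rho * S^-gamma
# the individual visits a brand-new location (S = number of known locations);
# otherwise it returns to a known location proportionally to past visits.
rho, gamma = 0.6, 0.21
visits = {0: 1}                       # location id -> visit count (home)

for _ in range(500):                  # 500 successive displacements
    S = len(visits)
    if rng.random() < rho * S**(-gamma):
        visits[S] = 1                 # explore: add a new location
    else:                             # preferential return
        locs = list(visits)
        w = np.array([visits[l] for l in locs], dtype=float)
        choice = rng.choice(locs, p=w / w.sum())
        visits[choice] += 1

print(f"distinct locations visited: {len(visits)}")
print("top-5 visit counts:", sorted(visits.values(), reverse=True)[:5])
```

The rich-get-richer return step concentrates visits on a few locations (home, work) while the slowly decaying exploration probability keeps adding rare new places, matching the broad visitation distributions seen in passive mobility traces.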
Martins, Alexandra; Guilhermino, Lúcia
2018-08-01
The environmental contamination by microplastics is a global challenge to ecosystem and human health, and the knowledge on the long-term effects of such particles is limited. Thus, the effects of microplastics and post-exposure recovery were investigated over 4 generations (F0, F1, F2, F3) using Daphnia magna as a model. Effect criteria were parental mortality, growth, several reproductive parameters, and population growth rate. Microplastics exposure (0.1 mg/l of pristine polymer microspheres, 1-5 μm diameter) caused parental mortality (10-100%), and significantly (p≤0.05) decreased growth, reproduction, and population growth rate, leading to the extinction of the microplastics-exposed model population in the F1 generation. Females descending from those exposed to microplastics in F0 and exposed to clean medium presented some recovery, but up to the F3 generation they still had significantly (p≤0.05) reduced growth, reproduction, and population growth rate. Overall, these results indicate that D. magna recovery from chronic exposure to microplastics may take several generations, and that the continuous exposure over generations to microplastics may cause population extinction. These findings have implications for aquatic ecosystem functioning and services, and raise concern on the long-term animal and human exposure to microplastics through diverse routes. Copyright © 2018. Published by Elsevier B.V.
Kaman 40 kW wind turbine generator - control system dynamics
NASA Technical Reports Server (NTRS)
Perley, R.
1981-01-01
The generator design incorporates an induction generator for applications where a utility line is present and a synchronous generator for standalone applications. A combination of feedforward and feedback control is used to achieve synchronous speed prior to connecting the generator to the load, and to control the power level once the generator is connected. The dynamics of the drive train affect several aspects of the system operation. These were analyzed to arrive at the required shaft stiffness. The rotor parameters that affect the stability of the feedback control loop vary considerably over the wind speed range encountered. Therefore, the controller gain was made a function of wind speed in order to maintain consistent operation over the whole wind speed range. The velocity requirement for the pitch control mechanism is related to the nature of the wind gusts to be encountered, the dynamics of the system, and the acceptable power fluctuations and generator dropout rate. A model was developed that allows the probable dropout rate to be determined from a statistical model of wind gusts and the various system parameters, including the acceptable power fluctuation.
Models for nearly every occasion: Part I - One box models.
Hewett, Paul; Ganser, Gary H
2017-01-01
The standard "well mixed room," "one box" model cannot be used to predict occupational exposures whenever the scenario involves the use of local controls. New "constant emission" one box models are proposed that permit either local exhaust or local exhaust with filtered return, coupled with general room ventilation or the recirculation of a portion of the general room exhaust. New "two box" models are presented in Part II of this series. Both steady state and transient models were developed. The steady state equation for each model, including the standard one box steady state model, is augmented with an additional factor reflecting the fraction of time the substance was generated during each task. This addition allows the easy calculation of the average exposure for cyclic and irregular emission patterns, provided the starting and ending concentrations are zero or near zero, or the cumulative time across all tasks is long (e.g., several tasks to a full shift). The new models introduce additional variables, such as the efficiency of the local exhaust to immediately capture freshly generated contaminant and the filtration efficiency whenever filtered exhaust is returned to the workspace. Many of the model variables are knowable (e.g., room volume and ventilation rate). A structured procedure for calibrating a model to a work scenario is introduced that can be applied to both continuous and cyclic processes. The "calibration" procedure generates estimates of the generation rate and all of remaining unknown model variables.
Self-generated visual imagery alters the mere exposure effect.
Craver-Lemley, Catherine; Bornstein, Robert F
2006-12-01
To determine whether self-generated visual imagery alters liking ratings of merely exposed stimuli, 79 college students were repeatedly exposed to the ambiguous duck-rabbit figure. Half the participants were told to picture the image as a duck and half to picture it as a rabbit. When participants made liking ratings of both disambiguated versions of the figure, they rated the version consistent with earlier encoding more positively than the alternate version. Implications of these findings for theoretical models of the exposure effect are discussed.
Mathematical modeling to predict residential solid waste generation.
Benítez, Sara Ojeda; Lozano-Olvera, Gabriela; Morelos, Raúl Adalberto; Vega, Carolina Armijo de
2008-01-01
One of the challenges faced by waste management authorities is determining the amount of waste generated by households in order to establish waste management systems, to charge rates compatible with the principles applied worldwide, and to design a fair payment system for households according to the amount of residential solid waste (RSW) they generate. The goal of this research work was to establish mathematical models that correlate the generation of RSW per capita to the following variables: education, income per household, and number of residents. This work was based on data from a study on generation, quantification and composition of residential waste in a Mexican city carried out in three stages. In order to define prediction models, five variables were identified and included in the modelling. For each waste sampling stage a different mathematical model was developed, in order to find the model that showed the best linear relation for predicting residential solid waste generation. Later, models exploring combinations of the included variables were established, and those showing a higher R(2) were selected. The tests applied were normality, multicollinearity and heteroskedasticity. Another model, formulated with four variables, was generated and the Durbin-Watson test was applied to it. Finally, a general mathematical model is proposed to predict residential waste generation, which accounts for 51% of the total variation.
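A minimal sketch of the kind of multiple linear regression used here, with statsmodels and invented household data (the study's Mexican survey data are not reproduced); the Durbin-Watson statistic mentioned above is also computed.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

# Hypothetical predictors: education (years), household income, residents.
X = np.array([[6, 4500, 5], [9, 6200, 4], [12, 8800, 3],
              [16, 12500, 2], [8, 5100, 6], [14, 10200, 3]])
y = np.array([0.62, 0.71, 0.84, 0.95, 0.58, 0.88])  # kg/person/day

model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.params)                 # intercept plus one coefficient each
print(model.rsquared)               # analogous to the R(2) reported above
print(durbin_watson(model.resid))   # autocorrelation check, as in the study
```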
Effect of repeat copy number on variable-number tandem repeat mutations in Escherichia coli O157:H7.
Vogler, Amy J; Keys, Christine; Nemoto, Yoshimi; Colman, Rebecca E; Jay, Zack; Keim, Paul
2006-06-01
Variable-number tandem repeat (VNTR) loci have shown a remarkable ability to discriminate among isolates of the recently emerged clonal pathogen Escherichia coli O157:H7, making them a very useful molecular epidemiological tool. However, little is known about the rates at which these sequences mutate, the factors that affect mutation rates, or the mechanisms by which mutations occur at these loci. Here, we measure mutation rates for 28 VNTR loci and investigate the effects of repeat copy number and mismatch repair on mutation rate using in vitro-generated populations for 10 E. coli O157:H7 strains. We find single-locus rates as high as 7.0 × 10^-4 mutations/generation and a combined 28-locus rate of 6.4 × 10^-4 mutations/generation. We observed single- and multirepeat mutations that were consistent with a slipped-strand mispairing mutation model, as well as a smaller number of large repeat copy number mutations that were consistent with recombination-mediated events. Repeat copy number within an array was strongly correlated with mutation rate both at the most mutable locus, O157-10 (r^2 = 0.565, P = 0.0196), and across all mutating loci. The combined locus model was significant whether locus O157-10 was included (r^2 = 0.833, P < 0.0001) or excluded (r^2 = 0.452, P < 0.0001) from the analysis. Deficient mismatch repair did not affect mutation rate at any of the 28 VNTRs with repeat unit sizes of >5 bp, although a poly(G) homomeric tract was destabilized in the mutS strain. Finally, we describe a general model for VNTR mutations that encompasses insertions and deletions, single- and multiple-repeat mutations, and their relative frequencies based upon our empirical mutation rate data.
Karpušenkaitė, Aistė; Ruzgas, Tomas; Denafas, Gintaras
2018-05-01
The aim of the study was to create a hybrid forecasting method that could produce higher accuracy forecasts than previously used 'pure' time series methods. These methods had already been tested on total automotive waste, hazardous automotive waste, and total medical waste generation, but demonstrated at least a 6% error rate in different cases, and efforts were made to decrease it further. The newly developed hybrid models used a random start generation method to incorporate the advantages of different time series methods, which helped to increase the accuracy of forecasts by 3%-4% in the hazardous automotive waste and total medical waste generation cases; the new model did not increase the accuracy of total automotive waste generation forecasts. The developed models' abilities to produce short- and mid-term forecasts were tested using different prediction horizons.
NASA Astrophysics Data System (ADS)
Baaquie, Belal E.; Liang, Cui
2007-01-01
The industry standard for pricing an interest-rate caplet is Black's formula. A distinct price for the same caplet can be derived using a quantum field theory model of the forward interest rates. An empirical study is carried out to compare the two caplet pricing formulae. Historical volatility and correlation of forward interest rates are used to generate the field theory caplet price; another approach is to fit a parametric formula for the effective volatility using market caplet prices. The study shows that the field theory model generates the price of a caplet and cap fairly accurately. Black's formula for a caplet is compared with the field theory pricing formula, and it is seen that the field theory formula for the caplet price has many advantages over Black's formula.
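For reference, Black's caplet formula, the industry benchmark named above, can be implemented in a few lines; the numerical inputs below are illustrative.

```python
from math import log, sqrt
from statistics import NormalDist

def black_caplet(P0, F, K, sigma, T, delta):
    """Black's formula for a caplet. P0: discount factor to the payment
    date T+delta; F: forward rate for [T, T+delta]; K: cap rate; sigma:
    Black volatility; T: fixing time (years); delta: accrual fraction."""
    N = NormalDist().cdf
    d1 = (log(F / K) + 0.5 * sigma**2 * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return P0 * delta * (F * N(d1) - K * N(d2))

# At-the-money caplet on a quarterly rate, 20% volatility:
print(black_caplet(P0=0.95, F=0.03, K=0.03, sigma=0.20, T=1.0, delta=0.25))
```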
NASA Astrophysics Data System (ADS)
Li, Chunguang; Maini, Philip K.
2005-10-01
The Penna bit-string model successfully encompasses many phenomena of population evolution, including inheritance, mutation, evolution, and aging. If we consider social interactions among individuals in the Penna model, the population will form a complex network. In this paper, we first modify the Verhulst factor to control only the birth rate, and introduce activity-based preferential reproduction of offspring in the Penna model. The social interactions among individuals are generated by both inheritance and activity-based preferential increase. We then study the properties of the complex network generated by the modified Penna model, and find that the resulting network exhibits a small-world effect and assortative mixing.
Ma, Tian; Garg, Shikha; Miller, Christopher J; Waite, T David
2015-05-15
The kinetics and mechanism of light-mediated formic acid (HCOO(-)) degradation in the presence of semiconducting silver chloride particles are investigated in this study. Our experimental results show that visible-light irradiation of AgCl(s) results in the generation of holes and electrons, with the photo-generated holes, and the carbonate radicals formed as their initial oxidation product, oxidizing HCOO(-) to form CO2. The HCOO(-) degradation rate increases with increasing silver concentration due to an increased rate of hole photo-generation, while increasing chloride concentration decreases the HCOO(-) degradation rate as a result of the scavenging of holes by Cl(-), which lowers the hole and carbonate radical concentrations. The results obtained indicate that a variety of other solution conditions, including dioxygen concentration, bicarbonate concentration and pH, influence the availability of holes and hence the HCOO(-) degradation rate in a manner consistent with our understanding of the key processes. Based on our experimental results, we have developed a kinetic model capable of predicting AgCl(s)-mediated HCOO(-) photo-degradation over a wide range of conditions. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Sun, H. Y.; Lu, B. X.; Wang, M.; Guo, Q. F.; Feng, Q. K.
2017-10-01
The swarm parameters of the negative corona discharge are refined to calculate the discharge model under different environmental conditions. The effects of temperature, humidity, and air pressure are studied using a conventional needle-to-plane configuration in air. The electron density, electric field, electron generation rate, and photoelectron generation rate are discussed in this paper. The role of photoionization under these conditions is also studied by numerical simulation. The photoelectrons generated in the weak ionization region are shown to be dominant.
Crago, Patrick E; Makowski, Nathaniel S
2014-10-01
Stimulation of peripheral nerves is often superimposed on ongoing motor and sensory activity in the same axons, without a quantitative model of the net action potential train at the axon endpoint. We develop a model of action potential patterns elicited by superimposing constant frequency axonal stimulation on the action potentials arriving from a physiologically activated neural source. The model includes interactions due to collision block, resetting of the neural impulse generator, and the refractory period of the axon at the point of stimulation. Both the mean endpoint firing rate and the probability distribution of the action potential firing periods depend strongly on the relative firing rates of the two sources and the intersite conduction time between them. When the stimulus rate exceeds the neural rate, neural action potentials do not reach the endpoint and the rate of endpoint action potentials is the same as the stimulus rate, regardless of the intersite conduction time. However, when the stimulus rate is less than the neural rate, and the intersite conduction time is short, the two rates partially sum. Increases in stimulus rate produce non-monotonic increases in endpoint rate and continuously increasing block of neurally generated action potentials. Rate summation is reduced and more neural action potentials are blocked as the intersite conduction time increases. At long intersite conduction times, the endpoint rate simplifies to being the maximum of either the neural or the stimulus rate. This study highlights the potential of increasing the endpoint action potential rate and preserving neural information transmission by low rate stimulation with short intersite conduction times. Intersite conduction times can be decreased with proximal stimulation sites for muscles and distal stimulation sites for sensory endings. The model provides a basis for optimizing experiments and designing neuroprosthetic interventions involving motor or sensory stimulation.
Verification of an Analytical Method for Measuring Crystal Nucleation Rates in Glasses from DTA Data
NASA Technical Reports Server (NTRS)
Ranasinghe, K. S.; Wei, P. F.; Kelton, K. F.; Ray, C. S.; Day, D. E.
2004-01-01
A recently proposed analytical (DTA) method for estimating the nucleation rates in glasses has been evaluated by comparing experimental data with numerically computed nucleation rates for a model lithium disilicate glass. The time- and temperature-dependent nucleation rates were predicted using the model and compared with those values from an analysis of numerically calculated DTA curves. The validity of the numerical approach was demonstrated earlier by a comparison with experimental data. The excellent agreement between the nucleation rates from the model calculations and from the computer-generated DTA data demonstrates the validity of the proposed analytical DTA method.
Antiresonance induced spin-polarized current generation
NASA Astrophysics Data System (ADS)
Yin, Sun; Min, Wen-Jing; Gao, Kun; Xie, Shi-Jie; Liu, De-Sheng
2011-12-01
According to the one-dimensional antiresonance effect (Wang X R, Wang Y and Sun Z Z 2003 Phys. Rev. B 65 193402), we propose a possible spin-polarized current generation device. The proposed model consists of a chain and an impurity coupled to the chain. The energy level of the impurity can be occupied by an electron with a specific spin, and electrons with that spin are blocked because of the antiresonance effect. Based on this phenomenon, our model can generate a spin-polarized current flowing through the chain due to the different polarization rates. The device can also be used to measure the generated spin accumulation. Our model is feasible with today's technology.
Contribution to irradiation creep arising from gas-driven bubbles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woo, C.H.; Garner, F.A.
1998-03-01
In a previous paper, the relationship was defined between void swelling and irradiation creep arising from the interaction of the SIPA and SIG mechanisms. It was shown that creep-driven deformation and swelling-driven deformation are highly interactive in nature, and that the two contributions cannot be independently calculated and then considered as directly additive. This model could be used to explain the recent experimental observation that the creep-swelling coupling coefficient is not a constant as previously assumed, but declines continuously as the swelling rate increases. Such a model thereby explained the creep-disappearance and creep-damping anomalies observed in conditions where significant void swelling occurred before substantial creep deformation developed. At lower irradiation temperatures and high helium/hydrogen generation rates, such as found in light water cooled reactors and some fusion concepts, gas-filled cavities that have not yet exceeded the critical radius for bubble-void conversion should also exert an influence on irradiation creep. In this paper the original concept is adapted to include such conditions, and its predictions are then compared with available data. It is shown that a measurable increase in the creep rate is expected compared to the rate found in low gas-generating environments. The creep rate is directly related to the gas generation rate and thereby to the neutron flux and spectrum.
Cunningham, Marc; Bock, Ariella; Brown, Niquelle; Sacher, Suzy; Hatch, Benjamin; Inglis, Andrew; Aronovich, Dana
2015-09-01
Contraceptive prevalence rate (CPR) is a vital indicator used by country governments, international donors, and other stakeholders for measuring progress in family planning programs against country targets and global initiatives as well as for estimating health outcomes. Because of the need for more frequent CPR estimates than population-based surveys currently provide, alternative approaches for estimating CPRs are being explored, including using contraceptive logistics data. Using data from the Demographic and Health Surveys (DHS) in 30 countries, population data from the United States Census Bureau International Database, and logistics data from the Procurement Planning and Monitoring Report (PPMR) and the Pipeline Monitoring and Procurement Planning System (PipeLine), we developed and evaluated 3 models to generate country-level, public-sector contraceptive prevalence estimates for injectable contraceptives, oral contraceptives, and male condoms. Models included: direct estimation through existing couple-years of protection (CYP) conversion factors, bivariate linear regression, and multivariate linear regression. Model evaluation consisted of comparing the referent DHS prevalence rates for each short-acting method with the model-generated prevalence rate using multiple metrics, including mean absolute error and proportion of countries where the modeled prevalence rate for each method was within 1, 2, or 5 percentage points of the DHS referent value. For the methods studied, family planning use estimates from public-sector logistics data were correlated with those from the DHS, validating the quality and accuracy of current public-sector logistics data. Logistics data for oral and injectable contraceptives were significantly associated (P<.05) with the referent DHS values for both bivariate and multivariate models. For condoms, however, that association was only significant for the bivariate model. With the exception of the CYP-based model for condoms, models were able to estimate public-sector prevalence rates for each short-acting method to within 2 percentage points in at least 85% of countries. Public-sector contraceptive logistics data are strongly correlated with public-sector prevalence rates for short-acting methods, demonstrating the quality of current logistics data and their ability to provide relatively accurate prevalence estimates. The models provide a starting point for generating interim estimates of contraceptive use when timely survey data are unavailable. All models except the condoms CYP model performed well; the regression models were most accurate but the CYP model offers the simplest calculation method. Future work extending the research to other modern methods, relating subnational logistics data with prevalence rates, and tracking that relationship over time is needed. © Cunningham et al.
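The direct CYP-based model above reduces to a one-line calculation. The sketch below uses commonly cited CYP conversion factors (15 pill cycles, 4 injectable doses, or 120 condoms per couple-year of protection) and invented commodity and population figures; treat all names and numbers as assumptions rather than the study's calibrated values.

```python
def cyp_prevalence(units_distributed, cyp_factor, women_15_49):
    """Direct CYP-based estimate of public-sector contraceptive
    prevalence (%): convert annual commodity volumes to couple-years of
    protection, then divide by women of reproductive age."""
    return 100.0 * (units_distributed / cyp_factor) / women_15_49

women = 2_500_000  # hypothetical women aged 15-49
print(cyp_prevalence(1_200_000, 15, women))   # oral pills: 15 cycles/CYP
print(cyp_prevalence(800_000, 4, women))      # injectables: 4 doses/CYP
print(cyp_prevalence(6_000_000, 120, women))  # condoms: 120 units/CYP
```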
Peralta-Hernández, J M; Meas-Vong, Yunny; Rodríguez, Francisco J; Chapman, Thomas W; Maldonado, Manuel I; Godínez, Luis A
2006-05-01
In this work, the design and construction of an annular tube reactor for the electrochemical and photo-electrochemical in situ generation of H2O2 are described. By cathodic reduction of dissolved oxygen coupled with the oxidation of water at a UV-illuminated nanocrystalline TiO2 semiconductor anode, it was found that the electrochemically generated H2O2 can be employed to readily oxidize the model compound Direct Yellow-52 in dilute acidic solution at high rates in the presence of small quantities of dissolved iron(II). Although the model organic compound is chemically stable under UV radiation, its electrochemical oxidation rate increases substantially when the semiconductor anode is illuminated as compared to the same process carried out in the dark.
Electron beam induced current in the high injection regime.
Haney, Paul M; Yoon, Heayoung P; Koirala, Prakash; Collins, Robert W; Zhitenev, Nikolai B
2015-07-24
Electron beam induced current (EBIC) is a powerful technique which measures the charge collection efficiency of photovoltaics with sub-micron spatial resolution. The exciting electron beam results in a high generation rate density of electron-hole pairs, which may drive the system into nonlinear regimes. An analytic model is presented which describes the EBIC response when the total electron-hole pair generation rate exceeds the rate at which carriers are extracted by the photovoltaic cell, and charge accumulation and screening occur. The model provides a simple estimate of the onset of the high injection regime in terms of the material resistivity and thickness, and provides a straightforward way to predict the EBIC lineshape in the high injection regime. The model is verified by comparing its predictions to numerical simulations in one- and two-dimensions. Features of the experimental data, such as the magnitude and position of maximum collection efficiency versus electron beam current, are consistent with the three-dimensional model.
Materials Database Development for Ballistic Impact Modeling
NASA Technical Reports Server (NTRS)
Pereira, J. Michael
2007-01-01
A set of experimental data is being generated under the Fundamental Aeronautics Program Supersonics project to help create and validate accurate computational impact models of jet engine impact events. The data generated will include material property data measured at a range of strain rates, from 1 × 10^-4/s to 5 × 10^4/s, over a range of temperatures. In addition, carefully instrumented ballistic impact tests will be conducted on flat plates and curved structures to provide material and structural response information to help validate the computational models. The material property data and the ballistic impact data will be generated using materials from the same lot, as far as possible. It was found in preliminary testing that the surface finish of test specimens has an effect on the measured high-strain-rate tension response of Al 2024: both the maximum stress and maximum elongation are greater in specimens with a smoother finish. This report gives an overview of the testing being conducted and presents results of preliminary testing from the surface finish study.
Pilkington, Rhiannon; Taylor, Anne W; Hugo, Graeme; Wittert, Gary
2014-01-01
To determine differences in sociodemographic and health-related characteristics of Australian Baby Boomers and Generation X at the same relative age, the 1989/90 National Health Survey (NHS) for Boomers (born 1946-1965) and the 2007/08 NHS for Generation Xers (born 1966-1980) were used to compare the cohorts at the same age of 25-44 years. Generational differences for males and females in education, employment, smoking, physical activity, Body Mass Index (BMI), self-rated health, and diabetes were determined using Z tests. Prevalence estimates and p-values are reported. Logistic regression models examining overweight/obesity (BMI ≥ 25) and diabetes prevalence as the dependent variables, with generation as the independent variable, were adjusted for sex, age, education, physical activity, smoking and BMI (diabetes model only). Adjusted odds ratios (OR) and 95% confidence intervals (CI) are reported. At the same age, tertiary educational attainment was higher among Generation X males (27.6% vs. 15.2%, p<0.001) and females (30.0% vs. 10.6%, p<0.001). Boomer females had a higher rate of unemployment (5.6% vs. 2.5%, p<0.001). Boomer males and females had a higher prevalence of "excellent" self-reported health (35.9% vs. 21.8%, p<0.001; 36.3% vs. 25.1%, p<0.001) and smoking (36.3% vs. 30.4%, p<0.001; 28.3% vs. 22.3%, p<0.001). Generation X males (18.3% vs. 9.4%, p<0.001) and females (12.7% vs. 10.4%, p = 0.015) demonstrated a higher prevalence of obesity (BMI > 30). There were no differences in physical activity. Modelling indicated that Generation X were more likely than Boomers to be overweight/obese (OR: 2.09, 95% CI: 1.77-2.46) and to have diabetes (OR: 1.79, 95% CI: 1.47-2.18). Self-rated health has deteriorated while obesity and diabetes prevalence has increased, which may impact workforce participation and health care utilization in the future.
NASA Astrophysics Data System (ADS)
Srinivas, Vikram; Menon, Sandeep; Osterman, Michael; Pecht, Michael G.
2013-08-01
Solder durability models frequently focus on the applied strain range; however, the rate of applied loading, or strain rate, is also important. In this study, an approach to incorporate strain rate dependency into durability estimation for solder interconnects is examined. Failure data were collected for SAC105 solder ball grid arrays assembled with SAC305 solder that were subjected to displacement-controlled torsion loads. Strain-rate-dependent (Johnson-Cook model) and strain-rate-independent elastic-plastic properties were used to model the solders in finite-element simulation. Test data were then used to extract damage model constants for the reduced-Ag SAC solder. A generalized Coffin-Manson damage model was used to estimate the durability. The mechanical fatigue durability curve for reduced-silver SAC solder was generated and compared with durability curves for SAC305 and Sn-Pb from the literature.
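A sketch of extracting generalized Coffin-Manson constants by a log-log fit, with invented failure data rather than the SAC105 torsion results; the form N_f = C * (strain range)^(-n) is the one named above.

```python
import numpy as np

# Illustrative strain-range / cycles-to-failure pairs (not study data).
strain_range = np.array([0.004, 0.006, 0.009, 0.013])
cycles_to_failure = np.array([12000, 5200, 2100, 900])

# Fit log(N_f) = log(C) + n*log(strain range); the slope n is negative.
n, logC = np.polyfit(np.log(strain_range), np.log(cycles_to_failure), 1)
C = np.exp(logC)
print(f"N_f = {C:.3g} * (strain_range)^({n:.2f})")
```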
Future trends in computer waste generation in India.
Dwivedy, Maheshwar; Mittal, R K
2010-11-01
The objective of this paper is to estimate the future projection of computer waste in India and to subsequently analyze their flow at the end of their useful phase. For this purpose, the study utilizes the logistic model-based approach proposed by Yang and Williams to forecast future trends in computer waste. The model estimates future projection of computer penetration rate utilizing their first lifespan distribution and historical sales data. A bounding analysis on the future carrying capacity was simulated using the three parameter logistic curve. The observed obsolete generation quantities from the extrapolated penetration rates are then used to model the disposal phase. The results of the bounding analysis indicate that in the year 2020, around 41-152 million units of computers will become obsolete. The obsolete computer generation quantities are then used to estimate the End-of-Life outflows by utilizing a time-series multiple lifespan model. Even a conservative estimate of the future recycling capacity of PCs will reach upwards of 30 million units during 2025. Apparently, more than 150 million units could be potentially recycled in the upper bound case. However, considering significant future investment in the e-waste recycling sector from all stakeholders in India, we propose a logistic growth in the recycling rate and estimate the requirement of recycling capacity between 60 and 400 million units for the lower and upper bound case during 2025. Finally, we compare the future obsolete PC generation amount of the US and India. Copyright © 2010 Elsevier Ltd. All rights reserved.
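The Yang-Williams-style logistic bounding analysis above can be sketched as follows; the carrying capacity, growth rate, and the 5-year lifespan lag are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

def logistic_penetration(t, K, r, t0):
    """Three-parameter logistic curve: K is the carrying capacity
    (saturation stock of PCs in use), r the growth rate, t0 the
    inflection year."""
    return K / (1.0 + np.exp(-r * (t - t0)))

years = np.arange(2000, 2026)
in_use = logistic_penetration(years, K=150e6, r=0.25, t0=2015)
sales = np.diff(in_use)                 # net additions per year
# Obsolete units lag sales by a mean first lifespan (assumed ~5 years):
obsolete = np.roll(sales, 5)
obsolete[:5] = 0
print(obsolete[-5:])                    # units becoming obsolete per year
```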
Next-generation genome-scale models for metabolic engineering.
King, Zachary A; Lloyd, Colton J; Feist, Adam M; Palsson, Bernhard O
2015-12-01
Constraint-based reconstruction and analysis (COBRA) methods have become widely used tools for metabolic engineering in both academic and industrial laboratories. By employing a genome-scale in silico representation of the metabolic network of a host organism, COBRA methods can be used to predict optimal genetic modifications that improve the rate and yield of chemical production. A new generation of COBRA models and methods is now being developed, encompassing many biological processes and simulation strategies, and these next-generation models enable new types of predictions. Here, three key examples of applying COBRA methods to strain optimization are presented and discussed. Then, an outlook is provided on the next generation of COBRA models and the new types of predictions they will enable for systems metabolic engineering. Copyright © 2014 Elsevier Ltd. All rights reserved.
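At their core, COBRA growth and yield predictions rest on flux balance analysis, a linear program over the stoichiometric matrix. A self-contained toy instance with scipy is shown below; the three-reaction network is invented for illustration, whereas real genome-scale models have thousands of reactions.

```python
import numpy as np
from scipy.optimize import linprog

# Toy flux balance analysis: maximize "biomass" flux v3 subject to the
# steady-state constraint S @ v = 0 and flux bounds.
S = np.array([[1, -1,  0],   # metabolite A: made by v1, used by v2
              [0,  1, -1]])  # metabolite B: made by v2, used by v3
bounds = [(0, 10), (0, None), (0, None)]  # uptake v1 capped at 10 units

res = linprog(c=[0, 0, -1],               # linprog minimizes, so negate
              A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print(res.x)  # optimal flux distribution: [10, 10, 10]
```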
NASA Astrophysics Data System (ADS)
Sun, Hongyue; Luo, Shuai; Jin, Ran; He, Zhen
2017-07-01
Mathematical modeling is an important tool for investigating the performance of microbial fuel cells (MFCs) towards their optimized design. To overcome the shortcomings of traditional MFC models, an ensemble model is developed in this study by integrating an engineering model with statistical analytics for extrapolation scenarios. Such an ensemble model can reduce the labor of parameter calibration and requires fewer measurement data to achieve accuracy comparable to a traditional statistical model in both the normal and extreme operation regions. Based on different weightings of current generation and organic removal efficiency, the ensemble model can recommend input factor settings to achieve the best current generation and organic removal efficiency. The model predicts a set of optimal design factors for the present tubular MFCs, including an anode flow rate of 3.47 mL/min, an organic concentration of 0.71 g/L, and a catholyte pumping flow rate of 14.74 mL/min, to achieve the peak current of 39.2 mA. To maintain 100% organic removal efficiency, the anode flow rate and organic concentration should be kept below 1.04 mL/min and 0.22 g/L, respectively. The developed ensemble model can potentially be modified to model other types of MFCs or bioelectrochemical systems.
Bates, Jonathan; Parzynski, Craig S; Dhruva, Sanket S; Coppi, Andreas; Kuntz, Richard; Li, Shu-Xia; Marinac-Dabic, Danica; Masoudi, Frederick A; Shaw, Richard E; Warner, Frederick; Krumholz, Harlan M; Ross, Joseph S
2018-06-12
To estimate the medical device utilization needed to detect safety differences among implantable cardioverter defibrillator (ICD) generator models and compare these estimates to utilization in practice, we conducted repeated sample size estimates to calculate the medical device utilization needed, systematically varying device-specific safety event rate ratios and significance levels while maintaining 80% power, testing 3 average adverse event rates (3.9, 6.1, and 12.6 events per 100 person-years) estimated from the American College of Cardiology's 2006 to 2010 National Cardiovascular Data Registry of ICDs. We then compared these estimates with actual medical device utilization. At significance level 0.05 and 80% power, 34% or fewer ICD models accrued sufficient utilization in practice to detect safety differences for rate ratios <1.15 and an average event rate of 12.6 events per 100 person-years. For average event rates of 3.9 and 12.6 events per 100 person-years, 30% and 50% of ICD models, respectively, accrued sufficient utilization for a rate ratio of 1.25, and 52% and 67%, respectively, for a rate ratio of 1.50. Because actual ICD utilization was not uniformly distributed across ICD models, the proportion of individuals receiving any ICD that accrued sufficient utilization in practice was 0% to 21%, 32% to 70%, and 67% to 84% for rate ratios of 1.05, 1.15, and 1.25, respectively, over the range of 3 average adverse event rates. Small safety differences among ICD generator models are unlikely to be detected through routine surveillance given current ICD utilization in practice, but large safety differences can be detected for most patients at anticipated average adverse event rates. Copyright © 2018 John Wiley & Sons, Ltd.
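A sketch of the style of sample-size calculation described above, comparing two Poisson adverse-event rates via a normal approximation on the log rate ratio; this is an assumption about the calculation's general form, not the authors' exact procedure.

```python
from math import log
from statistics import NormalDist

def person_years_per_group(rate0, rate_ratio, alpha=0.05, power=0.80):
    """Approximate exposure (person-years per device model) needed to
    detect a rate ratio between two Poisson event rates, using
    Var(log RR) ~ 1/E0 + 1/E1 where E = expected events."""
    z = NormalDist().inv_cdf
    za, zb = z(1 - alpha / 2), z(power)
    rate1 = rate0 * rate_ratio
    return (za + zb) ** 2 * (1 / rate0 + 1 / rate1) / log(rate_ratio) ** 2

# 12.6 events per 100 person-years (0.126/yr), rate ratio 1.25:
print(person_years_per_group(0.126, 1.25))  # ~2,300 person-years per group
```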
Tsunami probability in the Caribbean Region
Parsons, T.; Geist, E.L.
2008-01-01
We calculated tsunami runup probability (in excess of 0.5 m) at coastal sites throughout the Caribbean region. We applied a Poissonian probability model because of the variety of uncorrelated tsunami sources in the region. Coastlines were discretized into 20 km by 20 km cells, and the mean tsunami runup rate was determined for each cell. The remarkable ~500-year empirical record compiled by O'Loughlin and Lander (2003) was used to calculate an empirical tsunami probability map, the first of three constructed for this study. However, it is unclear whether the 500-year record is complete, so we conducted a seismic moment-balance exercise using a finite-element model of the Caribbean-North American plate boundaries and the earthquake catalog, and found that moment could be balanced if the seismic coupling coefficient is c = 0.32. Modeled moment release was therefore used to generate synthetic earthquake sequences to calculate 50 tsunami runup scenarios for 500-year periods. We made a second probability map from numerically calculated runup rates in each cell. Differences between the first two probability maps based on empirical and numerically modeled rates suggest that each captured different aspects of tsunami generation; the empirical model may be deficient in primary plate-boundary events, whereas numerical model rates lack backarc fault and landslide sources. We thus prepared a third probability map using Bayesian likelihood functions derived from the empirical and numerical rate models and their attendant uncertainty to weight a range of rates in each 20 km by 20 km coastal cell. Our best-estimate map gives a range of 30-year runup probability from 0-30% regionally. © Birkhäuser 2008.
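Under the Poissonian model above, the cell-level probability calculation is one line; the mean rate below is an illustrative value.

```python
from math import exp

def runup_probability(mean_rate_per_year, window_years=30):
    """Poissonian probability of at least one runup >= 0.5 m in a coastal
    cell over a time window, given the cell's mean runup rate."""
    return 1.0 - exp(-mean_rate_per_year * window_years)

# A cell averaging one qualifying runup per ~150 years:
print(runup_probability(1 / 150.0))  # ~0.18, within the 0-30% range above
```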
Endogenous fertility, altruistic behavior across generations, and social security systems.
Prinz, A
1990-01-01
This study examines the possible link between the existence of a pay-as-you-go social security program and individual procreative behavior. When a public old-age income support system takes the place of within-family support, the theoretical literature predicts that fertility rates will decline, since children are no longer perceived as important to the old-age security of the parents. The author takes up this theoretical problem and examines it through three different but related issues: optimal capital accumulation, optimal population growth, and the role of social institutions affecting efficient intergenerational allocations. Econometric analysis employing a steady-state growth model is used. Altruism between generations is studied for its effect on the standard model. The model shows that for the social optimum the per capita pension is related to the growth rate of the population; therefore, for society as a whole, children are investment goods. However, given the existence of a social security system, it is in each household's best interest to have no children at all. Only a government transfer, a child allowance to parents, changes the model and fertility rates. When modified to account for "caring", the model demonstrates that altruistic behavior between generations is not symmetrical. The study concludes that a pay-as-you-go funded social security system should be supplemented by a system of child allowances or replaced by a fully funded social security system.
NASA Astrophysics Data System (ADS)
Adamkovics, M.; Boering, K. A.
2003-12-01
The presence of photochemically generated hazes has a significant impact on radiative transfer in planetary atmospheres. While the rates of particle formation have been inferred from photochemical or microphysical models constrained to match observations, these rates have not been determined experimentally. Thus, the fundamental kinetics of particle formation are not known and remain highly parameterized in planetary atmospheric models. We have developed instrumentation for measuring the formation rates and optical properties of organic aerosols produced by irradiating mixtures of precursor gases, via in situ optical (633 nm) scattering and online quadrupole mass spectrometry (1-200 amu). Results for the generation of particulate hydrocarbons from the irradiation of pure, gas-phase CH4 as well as CH4/CO2 mixtures with vacuum ultraviolet (120-160 nm) light, along with simultaneous measurements of the evolution of higher gas-phase hydrocarbons, will be presented.
Ultra-fast quantum randomness generation by accelerated phase diffusion in a pulsed laser diode.
Abellán, C; Amaya, W; Jofre, M; Curty, M; Acín, A; Capmany, J; Pruneri, V; Mitchell, M W
2014-01-27
We demonstrate a high bit-rate quantum random number generator by interferometric detection of phase diffusion in a gain-switched DFB laser diode. Gain switching at few-GHz frequencies produces a train of bright pulses with nearly equal amplitudes and random phases. An unbalanced Mach-Zehnder interferometer is used to interfere subsequent pulses and thereby generate strong random-amplitude pulses, which are detected and digitized to produce a high-rate random bit string. Using established models of semiconductor laser field dynamics, we predict a regime of high visibility interference and nearly complete vacuum-fluctuation-induced phase diffusion between pulses. These are confirmed by measurement of pulse power statistics at the output of the interferometer. Using a 5.825 GHz excitation rate and 14-bit digitization, we observe 43 Gbps quantum randomness generation.
Predictive models of radiative neutrino masses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Julio, J., E-mail: julio@lipi.go.id
2016-06-21
We discuss two models of radiative neutrino mass generation. The first model features a one-loop Zee model with Z_4 symmetry. The second is a two-loop neutrino mass model with singly- and doubly-charged scalars. These two models fit neutrino oscillation data well and predict interesting rates for lepton flavor violation processes.
Hadidi, Laith A; Omer, Mohamed Mahmoud
2017-01-01
Municipal Solid Waste (MSW) generation in Saudi Arabia is growing at a fast rate as the country hurtles towards ever increasing urban development, coupled with rapid economic growth and an expanding population. Saudi Arabia's energy demands are also rising quickly. The importance of an integrated waste management system in Saudi Arabia is therefore increasing, and introducing Waste to Energy (WTE) facilities is becoming an absolute necessity. This paper analyzes the current situation of MSW management in Saudi Arabia and develops a financial model to assess the viability of WTE investments that could address its waste management challenges and meet its forecasted energy demands. The model investigates the financial viability of WTE plants utilizing gasification and Anaerobic Digestion (AD) conversion technologies, providing a cost estimate for establishing both plant types in Saudi Arabia through a set of financial indicators: net present value (NPV), internal rate of return (IRR), modified internal rate of return (MIRR), profitability index (PI), payback period, discounted payback period, Levelized Cost of Electricity (LCOE) and Levelized Cost of Waste (LCOW). Finally, the analysis reveals the main factors affecting the gasification plant investment decision, namely facility generation capacity, generated electricity revenue, and the capacity factor; similarly, facility waste capacity and the capacity factor are identified as the main factors affecting the AD plant investment decision. Copyright © 2016 Elsevier Ltd. All rights reserved.
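Two of the indicators named above, NPV and IRR, can be sketched with invented cash flows; the capex and revenue figures below are assumptions, not the study's estimates.

```python
def npv(rate, cashflows):
    """Net present value of yearly cash flows (index 0 = year 0)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=1.0, tol=1e-9):
    """Internal rate of return by bisection (assumes one sign change)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical WTE plant: 100 M$ capex, 12 M$/yr net revenue for 20 years.
flows = [-100.0] + [12.0] * 20
print(npv(0.08, flows))  # NPV at an 8% discount rate
print(irr(flows))        # ~0.10, i.e. roughly a 10% IRR
```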
Rating knowledge sharing in cross-domain collaborative filtering.
Li, Bin; Zhu, Xingquan; Li, Ruijiang; Zhang, Chengqi
2015-05-01
Cross-domain collaborative filtering (CF) aims to share common rating knowledge across multiple related CF domains to boost CF performance. In this paper, we view CF domains as a 2-D site-time coordinate system, on which multiple related domains, such as similar recommender sites or successive time-slices, can share group-level rating patterns. We propose a unified framework for cross-domain CF over the site-time coordinate system by sharing group-level rating patterns and imposing user/item dependence across domains. A generative model, termed ratings over site-time (ROST), which can generate and predict ratings for multiple related CF domains, is developed as the basic model of the framework. We further introduce cross-domain user/item dependence into ROST and extend it to two real-world cross-domain CF scenarios: 1) ROST (sites) for alleviating rating sparsity in the target domain, where multiple similar sites are viewed as related CF domains and some items in the target domain depend on their correspondences in the related ones; and 2) ROST (time) for modeling user-interest drift over time, where a series of time-slices are viewed as related CF domains and a user at the current time-slice depends on herself in the previous time-slice. All these ROST models are instances of the proposed unified framework. The experimental results show that ROST (sites) can effectively alleviate the sparsity problem and improve rating prediction performance, and that ROST (time) can clearly track and visualize user-interest drift over time.
Ellens, Harma; Meng, Zhou; Le Marchand, Sylvain J; Bentz, Joe
2018-06-01
In vitro transporter kinetics are typically analyzed by steady-state Michaelis-Menten approximations. However, no clear evidence exists that these approximations, applied to multiple transporters in biological membranes, yield system-independent mechanistic parameters needed for reliable in vivo hypothesis generation and testing. Areas covered: The classical mass action model has been developed for P-glycoprotein (P-gp) mediated transport across confluent polarized cell monolayers. Numerical integration of the mass action equations for transport using a stable global optimization program yields fitted elementary rate constants that are system-independent. The efflux active P-gp was defined by the rate at which P-gp delivers drugs to the apical chamber, since as much as 90% of drugs effluxed by P-gp partition back into nearby microvilli prior to reaching the apical chamber. The efflux active P-gp concentration was 10-fold smaller than the total expressed P-gp for Caco-2 cells, due to their microvilli membrane morphology. The mechanistic insights from this analysis are readily extrapolated to P-gp mediated transport in vivo. Expert opinion: In vitro system-independent elementary rate constants for transporters are essential for the generation and validation of robust mechanistic PBPK models. Our modeling approach and programs have broad application potential. They can be used for any drug transporter with minor adaptations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
HU TA
2009-10-26
The steady-state flammability level was assessed at normal and off-normal ventilation conditions. The hydrogen generation rate was calculated for 177 tanks using the rate equation model. Flammability calculations based on hydrogen, ammonia, and methane were performed for the 177 tanks under various scenarios.
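A simplified sketch of a steady-state flammability check of this kind: the headspace concentration equals the generation rate over the ventilation rate, expressed against hydrogen's ~4 vol% lower flammability limit. The 25%-of-LFL criterion and all flow values below are assumptions for illustration, not the tanks' rate equation model.

```python
def h2_percent_lfl(gen_rate_cfm, vent_rate_cfm, lfl_vol_frac=0.04):
    """Steady-state headspace hydrogen level as a percentage of the lower
    flammability limit (~4 vol% for hydrogen in air), assuming a
    well-mixed headspace where concentration = generation/ventilation."""
    concentration = gen_rate_cfm / vent_rate_cfm   # volume fraction
    return 100.0 * concentration / lfl_vol_frac

# 0.02 cfm hydrogen into a headspace ventilated at 10 cfm:
print(h2_percent_lfl(0.02, 10.0))  # 5% of LFL, below a 25%-of-LFL limit
```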
Modelling small-area inequality in premature mortality using years of life lost rates
NASA Astrophysics Data System (ADS)
Congdon, Peter
2013-04-01
Analysis of premature mortality variations via standardized expected years of life lost (SEYLL) measures raises questions about suitable modelling for mortality data, especially when developing SEYLL profiles for areas with small populations. Existing fixed effects estimation methods take no account of correlations in mortality levels over ages, causes, socio-ethnic groups or areas. They also do not specify an underlying data generating process, or a likelihood model that can include trends or correlations, and are likely to produce unstable estimates for small areas. An alternative strategy involves a fully specified data generation process and a random effects model which "borrows strength" to produce stable SEYLL estimates, allowing for correlations between ages, areas and socio-ethnic groups. The resulting modelling strategy is applied to gender-specific differences in SEYLL rates in small areas of NE London, and to cause-specific mortality for the leading causes of premature mortality in these areas.
Siddiqi, Ariba; Poosapadi Arjunan, Sridhar; Kumar, Dinesh Kant
2018-01-16
This study describes a new model of the force generated by the tibialis anterior (TA) muscle with three new features: single-fiber action potential, twitch force, and pennation angle. This model was used to investigate the relative effects and interaction of ten age-associated neuromuscular parameters. Regression analysis (significance level of 0.05) between the neuromuscular properties and the corresponding simulated force produced at the footplate was performed. Standardized slope coefficients were computed to rank the effect of the parameters. The results show that reduction in the average firing rate is the main reason for the sharp decline in force, ahead of other factors such as the number of muscle fibers, specific force, pennation angle, and innervation ratio. The fast fiber ratio affects the simulated force through two significant interactions. This study has ranked the individual contributions of the neuromuscular factors to the muscle strength decline of the TA and identified firing rate decline as the biggest cause, followed by decreases in muscle fiber number and specific force. The strategy of strength preservation for the elderly should therefore focus on improving firing rate. Graphical abstract: neuromuscular properties of the tibialis anterior and the force generated during ankle dorsiflexion.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jahandideh, Sepideh; Jahandideh, Samad; Asadabadi, Ebrahim Barzegari
2009-11-15
Prediction of the amount of hospital waste production will be helpful in the storage, transportation and disposal aspects of hospital waste management. On this basis, two predictor models, artificial neural networks (ANNs) and multiple linear regression (MLR), were applied to predict the rate of medical waste generation in total and by type (sharp, infectious and general). In this study, a 5-fold cross-validation procedure on a database containing a total of 50 hospitals of Fars province (Iran) was used to verify the performance of the models. Three performance measures, MAR, RMSE and R^2, were used to evaluate the performance of the models. MLR, as a conventional model, obtained poor prediction performance measure values; however, MLR distinguished hospital capacity and bed occupancy as the more significant parameters. On the other hand, ANNs, a more powerful model that had not previously been applied to predicting medical waste generation rates, showed high performance measure values, especially an R^2 value of 0.99, confirming the good fit of the data. Such satisfactory results can be attributed to the non-linear nature of ANNs in problem solving, which provides the opportunity to relate independent variables to dependent ones non-linearly. In conclusion, the obtained results showed that the ANN-based model approach is very promising and may play a useful role in developing a better cost-effective strategy for waste management in the future.
NASA Astrophysics Data System (ADS)
Kangale, Akshay; Krishna Kumar, S.; Arshad Naeem, Mohd; Williams, Mark; Tiwari, M. K.
2016-10-01
With the massive growth of the internet, product reviews increasingly serve as an important source of information for customers making choices online. Customers depend on these reviews to understand other users' experience, and manufacturers rely on this user-generated content to capture user sentiment about their products. Therefore, it is in the best interest of both customers and manufacturers to have a portal where they can read a complete, comprehensive summary of these reviews in minimum time. With this in mind, our first objective is to generate a feature-based review summary. Our second objective is to develop a predictive model of the next week's product sales based on numerical review ratings and textual features embedded in the reviews. When it comes to product features, every user has different priorities for different features. To capture this aspect of decision-making, we have designed a new mechanism to generate a numerical rating for every feature of the product individually. The data were collected from a well-known commercial website for two different products, and the model was validated using a crowd-sourcing technique.
Kandasamy, Palani; Moitra, Ranabir; Mukherjee, Souti
2015-01-01
Experiments were conducted to determine the respiration rate of tomato at 10, 20 and 30 °C using a closed respiration system. Oxygen depletion and carbon dioxide accumulation in the system containing tomato were monitored. The respiration rate was found to decrease with increasing CO2 and decreasing O2 concentration. A Michaelis-Menten type model based on enzyme kinetics was evaluated using the experimental data generated for predicting the respiration rate. The model parameters obtained from the respiration rate at different O2 and CO2 concentration levels were used to fit the model across the storage temperatures. The fit was fair (R2 = 0.923 to 0.970) when the respiration rate was expressed as a function of O2 concentration. Since the inhibition constant for CO2 tended towards negative values, the model was modified to be a function of O2 concentration only. The modified model was fitted to the experimental data and showed good agreement (R2 = 0.998) with the experimentally estimated respiration rate.
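A sketch of fitting the modified (O2-only) Michaelis-Menten model with scipy; the readings below are invented stand-ins for the tomato data, and the parameter names are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def respiration(o2, vm, km):
    """Michaelis-Menten form without CO2 inhibition, as in the modified
    model above: R = Vm * [O2] / (Km + [O2])."""
    return vm * o2 / (km + o2)

# Hypothetical closed-system readings: O2 (%) vs respiration rate
# (mL CO2 per kg per h) at one storage temperature.
o2 = np.array([2.0, 5.0, 8.0, 12.0, 16.0, 20.9])
rate = np.array([6.1, 10.8, 13.6, 15.9, 17.2, 18.3])

(vm, km), _ = curve_fit(respiration, o2, rate, p0=[20.0, 5.0])
print(f"Vm = {vm:.1f}, Km = {km:.1f}")
```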
Scaled CMOS Technology Reliability Users Guide
NASA Technical Reports Server (NTRS)
White, Mark
2010-01-01
The desire to assess the reliability of emerging scaled microelectronics technologies through faster reliability trials and more accurate acceleration models motivates further research and experimentation in this field. The effect of semiconductor scaling on microelectronics product reliability is an important aspect for the high-reliability application user. From the perspective of a customer or user, who in many cases must deal with very limited, if any, manufacturer's reliability data to assess the product for a highly reliable application, product-level testing is critical in the characterization and reliability assessment of advanced nanometer semiconductor scaling effects on microelectronics reliability. A methodology for how to accomplish this, and techniques for deriving the expected product-level reliability of commercial memory products, are provided. Competing mechanism theory and the multiple failure mechanism model are applied to the experimental results of scaled SDRAM products. Accelerated stress testing at multiple conditions is applied at the product level of several scaled memory products to assess the performance degradation and product reliability. Acceleration models are derived for each case. For several scaled SDRAM products, retention time degradation is studied and two distinct soft error populations are observed with each technology generation: early breakdown, characterized by randomly distributed weak bits with Weibull slope β = 1, and a main population breakdown with an increasing failure rate. Retention time soft error rates are calculated and a multiple failure mechanism acceleration model with parameters is derived for each technology. Defect densities are calculated and reflect a decreasing trend in the percentage of random defective bits for each successive product generation. A normalized soft error failure rate of the memory data retention time in FIT/Gb and FIT/cm² for several scaled SDRAM generations is presented, revealing a power-law relationship. General models describing the soft error rates across scaled product generations are presented. The analysis methodology may be applied to other scaled microelectronic products and their key parameters.
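An Arrhenius acceleration factor of the kind used in such acceleration models can be computed directly; the activation energy and temperatures below are illustrative, not the derived SDRAM constants.

```python
from math import exp

def arrhenius_af(ea_ev, t_use_c, t_stress_c, k=8.617e-5):
    """Arrhenius acceleration factor between a use and a stress
    temperature; ea_ev is the activation energy in eV and k the
    Boltzmann constant in eV/K."""
    t_use, t_stress = t_use_c + 273.15, t_stress_c + 273.15
    return exp((ea_ev / k) * (1 / t_use - 1 / t_stress))

# A 0.7 eV mechanism accelerated from 55 C use to 125 C stress:
print(arrhenius_af(0.7, 55, 125))  # ~78x acceleration
```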
NASA Astrophysics Data System (ADS)
Sun, Congcong; Wang, Zhijie; Liu, Sanming; Jiang, Xiuchen; Sheng, Gehao; Liu, Tianyu
2017-05-01
Wind power has the advantage of being clean and non-polluting, and the development of bundled wind-thermal generation power systems (BWTGSs) is one of the important means of improving the wind power accommodation rate and implementing the "clean alternative" on the generation side. A two-stage optimization strategy for BWTGSs considering wind speed forecasting results and load characteristics is proposed. By taking the short-term wind speed forecasting results of the generation side and the load characteristics of the demand side into account, a two-stage optimization model for BWTGSs is formulated. Using the environmental benefit index of BWTGSs as the objective function, and supply-demand balance and generator operation as the constraints, the first-stage optimization model is developed with chance-constrained programming theory. Using the operation cost of BWTGSs as the objective function, the second-stage optimization model is developed with a greedy algorithm. An improved PSO algorithm is employed to solve the model, and numerical tests verify the effectiveness of the proposed strategy.
Pharmacophore-Map-Pick: A Method to Generate Pharmacophore Models for All Human GPCRs.
Dai, Shao-Xing; Li, Gong-Hua; Gao, Yue-Dong; Huang, Jing-Fei
2016-02-01
GPCR-based drug discovery is hindered by a lack of effective screening methods for the many GPCRs that have neither ligands nor high-quality structures. With the aim of identifying lead molecules for these GPCRs, we developed a new method called Pharmacophore-Map-Pick to generate pharmacophore models for all human GPCRs. The model of ADRB2 generated using this method not only predicts the binding mode of ADRB2 ligands correctly but also performs well in virtual screening. These findings demonstrate that the method is powerful for generating high-quality pharmacophore models. The average enrichment for the pharmacophore models of the 15 targets in different GPCR families reached 15-fold at a 0.5% false-positive rate. Therefore, the pharmacophore models can be applied in virtual screening directly, with no requirement for any ligand information or shape constraints. A total of 2386 pharmacophore models for 819 different GPCRs (99% coverage; 819/825) were generated and are available at http://bsb.kiz.ac.cn/GPCRPMD. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
[Vitamin K3-induced activation of molecular oxygen in glioma cells].
Krylova, N G; Kulagova, T A; Semenkova, G N; Cherenkevich, S N
2009-01-01
Fluorescence analysis showed that the rate of hydrogen peroxide generation in human U251 glioma cells differs under the action of lipophilic (menadione) and hydrophilic (vikasol) analogues of vitamin K3. The experimental data indicate that menadione undergoes one- and two-electron reduction by intracellular reductases in glioma cells. The reduced forms of menadione interact with molecular oxygen, leading to reactive oxygen species (ROS) generation. A theoretical model of ROS generation comprising two competitive processes, one- and two-electron reduction of menadione, is proposed. Rate constants of ROS generation mediated by the one-electron reduction process have been estimated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lombardo, N.J.; Marseille, T.J.; White, M.D.
TRUMP-BD (Boil Down) is an extension of the TRUMP (Edwards 1972) computer program for the analysis of nuclear fuel assemblies under severe accident conditions. This extension allows prediction of the heat transfer rates, metal-water oxidation rates, fission product release rates, steam generation and consumption rates, and temperature distributions for nuclear fuel assemblies under core uncovery conditions. The heat transfer processes include conduction in solid structures, convection across fluid-solid boundaries, and radiation between interacting surfaces. Metal-water reaction kinetics are modeled with empirical relationships to predict the oxidation rates of steam-exposed Zircaloy and uranium metal. The metal-water oxidation models are parabolic in form with an Arrhenius temperature dependence. Uranium oxidation begins when fuel cladding failure occurs; Zircaloy oxidation occurs continuously at temperatures above 1300°F when metal and steam are available. From the metal-water reactions, the hydrogen generation rate, total hydrogen release, and the temporal and spatial distribution of oxide formations are computed. Consumption of steam by the oxidation reactions and the effect of hydrogen on the coolant properties are modeled for independent coolant flow channels. Fission product release from exposed uranium-metal, Zircaloy-clad fuel is modeled using empirical time and temperature relationships that consider the release to be subject to oxidation and volatilization/diffusion (“bake-out”) release mechanisms. Release of the volatile species iodine (I), tellurium (Te), cesium (Cs), ruthenium (Ru), strontium (Sr), zirconium (Zr), cerium (Ce), and barium (Ba) from uranium metal fuel may be modeled.
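The parabolic-with-Arrhenius form named above is compact enough to show directly. The sketch below is a minimal illustration, not the TRUMP-BD implementation; the rate constants A and Ea are placeholders, not the code's actual values.

import numpy as np

A = 3.0e2      # pre-exponential factor, (kg/m^2)^2 / s  (assumed value)
Ea = 1.7e5     # activation energy, J/mol                (assumed value)
R = 8.314      # gas constant, J/(mol K)

def oxide_mass_gain(T_kelvin, t_seconds):
    """Parabolic law: w^2 = k(T) * t, with k(T) = A * exp(-Ea / (R T))."""
    k = A * np.exp(-Ea / (R * T_kelvin))
    return np.sqrt(k * t_seconds)  # mass of oxygen gained per unit area, kg/m^2

# Oxidation (and hence hydrogen generation) rises steeply with temperature:
for T in (1000.0, 1200.0, 1400.0):
    print(f"T = {T:.0f} K  w(1 h) = {oxide_mass_gain(T, 3600.0):.3e} kg/m^2")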
Radiolytic and thermolytic bubble gas hydrogen composition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woodham, W.
This report describes the development of a mathematical model for the estimation of the hydrogen composition of gas bubbles trapped in radioactive waste. The model described herein uses a material balance approach to accurately incorporate the rates of hydrogen generation by a number of physical phenomena and scale the aforementioned rates in a manner that allows calculation of the final hydrogen composition.
NASA Astrophysics Data System (ADS)
Farag, Mohammed; Sweity, Haitham; Fleckenstein, Matthias; Habibi, Saeid
2017-08-01
Real-time prediction of the battery's core temperature and terminal voltage is crucial for an accurate battery management system. In this paper, a combined electrochemical, heat generation, and thermal model is developed for large prismatic cells. The proposed model consists of three sub-models (an electrochemical model, a heat generation model, and a thermal model) coupled together in an iterative fashion through physicochemical temperature-dependent parameters. The proposed parameterization cycles identify the sub-models' parameters separately by exciting the battery under isothermal and non-isothermal operating conditions. The combined model structure shows accurate terminal voltage and core temperature prediction at various operating conditions while maintaining a simple mathematical structure, making it ideal for real-time BMS applications. Finally, the model is validated against both isothermal and non-isothermal drive cycles, covering a broad range of C-rates and the temperature range −25 °C to 45 °C.
Hemolytic potential of hydrodynamic cavitation.
Chambers, S D; Bartlett, R H; Ceccio, S L
2000-08-01
The purpose of this study was to determine the hemolytic potentials of discrete bubble cavitation and attached cavitation. To generate controlled cavitation events, a venturi-geometry hydrodynamic device, called a Cavitation Susceptibility Meter (CSM), was constructed. The hemolytic potentials of discrete bubble cavitation and attached cavitation were compared with a single-pass flow apparatus and a recirculating flow apparatus, both utilizing the CSM. An analytical model, based on spherical bubble dynamics, was developed for predicting the hemolysis caused by discrete bubble cavitation. Experimentally, discrete bubble cavitation did not correlate with a measurable increase in plasma-free hemoglobin (PFHb), as predicted by the analytical model. However, attached cavitation did result in significant PFHb generation. The rate of PFHb generation scaled inversely with the cavitation number at a constant flow rate, suggesting that the size of the attached cavity was the dominant hemolytic factor.
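The abstract's analytical model is built on spherical bubble dynamics; the standard formulation of that problem is the Rayleigh-Plesset equation, sketched below with illustrative water properties and a fixed far-field pressure drop. This is a generic example of the technique, not the authors' exact model.

import numpy as np
from scipy.integrate import solve_ivp

rho, mu, sigma = 998.0, 1.0e-3, 0.072   # density, viscosity, surface tension (water)
p0, p_inf, p_v = 101.3e3, 50e3, 2.3e3   # initial, venturi far-field, vapor pressure, Pa
R0 = 10e-6                              # initial bubble radius, m

def rayleigh_plesset(t, y):
    R, Rdot = y
    # isothermal gas content, initially in equilibrium at pressure p0
    p_gas = (p0 + 2 * sigma / R0 - p_v) * (R0 / R) ** 3
    Rddot = ((p_gas + p_v - p_inf - 2 * sigma / R - 4 * mu * Rdot / R) / rho
             - 1.5 * Rdot ** 2) / R
    return [Rdot, Rddot]

sol = solve_ivp(rayleigh_plesset, (0.0, 50e-6), [R0, 0.0], max_step=1e-8, rtol=1e-8)
print("max bubble radius (um):", 1e6 * sol.y[0].max())  # grows when p_inf < p0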
A Dynamic Mesh-Based Approach to Model Melting and Shape of an ESR Electrode
NASA Astrophysics Data System (ADS)
Karimi-Sibaki, E.; Kharicha, A.; Bohacek, J.; Wu, M.; Ludwig, A.
2015-10-01
This paper presents a numerical method to investigate the tip shape and melt rate of an electrode during the electroslag remelting process. The interactions between flow, temperature, and electromagnetic fields are taken into account. A dynamic mesh-based approach is employed to model the dynamic formation of the shape of the electrode tip. The effect of slag properties such as thermal and electrical conductivities on the melt rate and electrode immersion depth is discussed. The thermal conductivity of the slag has a dominant influence on heat transfer in the system, hence on the melt rate of the electrode; the melt rate decreases with increasing slag thermal conductivity. The electrical conductivity of the slag governs the electric current path, which in turn influences the flow and temperature fields. The melting of the electrode is quite unstable owing to the complex interaction between the melt rate, immersion depth, and shape of the electrode tip. Therefore, a numerical adaptation of the electrode position in the slag has been implemented in order to achieve steady-state melting. In fact, the melt rate, immersion depth, and tip shape are interdependent process parameters. The power generated in the system is found to depend on both the immersion depth and the shape of the electrode tip; in other words, the same amount of power was generated in systems whose tip shapes and immersion depths differed. Furthermore, the shape of the electrode tip was observed to be very similar for systems running with the same ratio of power generation to melt rate. Comparisons between simulations and experimental results were made to verify the numerical model.
Thermal Aspects of Lithium Ion Cells
NASA Technical Reports Server (NTRS)
Frank, H.; Shakkottai, P.; Bugga, R.; Smart, M.; Huang, C. K.; Timmerman, P.; Surampudi, S.
2000-01-01
This viewgraph presentation outlines the development of a thermal model of Li-ion cells in terms of heat generation, thermal mass, and thermal resistance, intended for incorporation into a battery model. The approach was to estimate heat generation with a semi-theoretical model and then check its accuracy with efficiency measurements. Further objectives were to compute the thermal mass from component weights and specific heats, and the thermal resistance from component dimensions and conductivities. Two lithium cell formats are compared: a cylindrical lithium battery and a prismatic lithium cell. The presentation reviews the methodology for estimating the heat generation rate, and provides graphs of the cells' open-circuit curves and of the heat evolution during discharge.
Field-circuit analysis and measurements of a single-phase self-excited induction generator
NASA Astrophysics Data System (ADS)
Makowski, Krzysztof; Leicht, Aleksander
2017-12-01
The paper deals with a single-phase induction machine operating as a stand-alone self-excited single-phase induction generator for generating electrical energy from renewable energy sources. By changing the number of turns and the wire size in the auxiliary stator winding, improvements in the performance characteristics of the generator were obtained with respect to the no-load and load voltages of the stator windings as well as the stator winding currents. Field-circuit simulation models of the generator, with a shunt capacitor in the main stator winding, were developed using the Flux2D software package. The results were validated experimentally on a laboratory setup using a single-phase capacitor induction motor of 1.1 kW rated power and 230 V rated voltage as the base machine for the generator.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gongalsky, Maxim B., E-mail: mgongalsky@gmail.com; Timoshenko, Victor Yu.
2014-12-28
We propose a phenomenological model to explain the photoluminescence degradation of silicon nanocrystals under singlet oxygen generation in gaseous and liquid systems. The model considers coupled rate equations, which take into account the exciton radiative recombination in silicon nanocrystals, photosensitization of singlet oxygen generation, defect formation on the surface of silicon nanocrystals, and quenching processes for both excitons and singlet oxygen molecules. The model describes well the experimentally observed power-law dependences of the photoluminescence intensity, singlet oxygen concentration, and lifetime versus photoexcitation time. The defect concentration in silicon nanocrystals increases by a power law with a fractional exponent, which depends on the singlet oxygen concentration and ambient conditions. The results are discussed in view of optimizing photosensitized singlet oxygen generation for biomedical applications.
Raghuram, Jayaram; Miller, David J; Kesidis, George
2014-07-01
We propose a method for detecting anomalous domain names, with a focus on algorithmically generated domain names, which are frequently associated with malicious activities such as fast flux service networks, particularly for bot networks (botnets), malware, and phishing. Our method is based on learning a (null hypothesis) probability model from a large set of domain names that have been whitelisted by some reliable authority. Since these names are mostly assigned by humans, they are pronounceable and tend to have a distribution of characters, words, word lengths, and number of words that is typical of some language (mostly English), and they often consist of words drawn from a known lexicon. Algorithmically generated domain names, by contrast, typically have distributions quite different from those of human-created domain names. We propose a fully generative model for the probability distribution of benign (whitelisted) domain names, which can be used in an anomaly detection setting for identifying putative algorithmically generated domain names. Unlike other methods, our approach can make detections without considering any additional (latency-producing) information sources often used to detect fast flux activity. Experiments on a publicly available, large data set of domain names associated with fast flux service networks show encouraging results relative to several baseline methods, with higher detection rates and low false positive rates.
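To make the null-hypothesis idea concrete, here is a minimal character-bigram version: learn transition counts from whitelisted names and flag names whose per-character log-likelihood is too low. The tiny training list, smoothing constant, and threshold are illustrative only, and the paper's generative model is richer (words, word lengths, lexicon).

from collections import defaultdict
import math

whitelist = ["google", "wikipedia", "weather", "amazon", "youtube", "reddit"]

counts = defaultdict(lambda: defaultdict(int))
for name in whitelist:
    for a, b in zip("^" + name, name + "$"):   # ^ and $ mark start/end
        counts[a][b] += 1

def log_likelihood_per_char(name, alpha=1.0, alphabet=27):
    """Average log P(next char | current char), with add-alpha smoothing."""
    total = 0.0
    for a, b in zip("^" + name, name + "$"):
        row = counts[a]
        total += math.log((row[b] + alpha) / (sum(row.values()) + alpha * alphabet))
    return total / (len(name) + 1)

for name in ["facebook", "xkqzjwpvtr"]:        # human-like vs algorithmic-like
    score = log_likelihood_per_char(name)
    print(name, round(score, 2), "ANOMALOUS" if score < -3.0 else "ok")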
Finite Volume Numerical Methods for Aeroheating Rate Calculations from Infrared Thermographic Data
NASA Technical Reports Server (NTRS)
Daryabeigi, Kamran; Berry, Scott A.; Horvath, Thomas J.; Nowak, Robert J.
2006-01-01
The use of multi-dimensional finite volume heat conduction techniques for calculating aeroheating rates from measured global surface temperatures on hypersonic wind tunnel models was investigated. Both direct and inverse finite volume techniques were investigated and compared with the standard one-dimensional semi-infinite technique. Global transient surface temperatures were measured using an infrared thermographic technique on a 0.333-scale model of the Hyper-X forebody in the NASA Langley Research Center 20-Inch Mach 6 Air tunnel. In these tests, the effectiveness of vortices generated via gas injection for initiating hypersonic transition on the Hyper-X forebody was investigated. An array of streamwise-orientated heating striations was generated and visualized downstream of the gas injection sites. In regions without significant spatial temperature gradients, one-dimensional techniques provided accurate aeroheating rates. In regions with sharp temperature gradients caused by the striation patterns, multi-dimensional heat transfer techniques were necessary to obtain more accurate heating rates. The one-dimensional technique produced heating rates that differed by 20% from the 2-D analysis because it did not account for lateral heat conduction in the model.
Improved trip generation data for Texas using work place and special generator survey data.
DOT National Transportation Integrated Search
2015-05-01
Travel estimates from models and manuals developed from trip attraction rates with high variances, due to few survey observations, can reduce confidence and accuracy in estimates. This project compiled and analyzed data from more than a decade of...
A kinetic model for stress generation in thin films grown from energetic vapor fluxes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chason, E.; Karlson, M.; Colin, J. J.
We have developed a kinetic model for residual stress generation in thin films grown from energetic vapor fluxes, encountered, e.g., during sputter deposition. The new analytical model considers sub-surface point defects created by atomic peening, along with processes treated in already existing stress models for non-energetic deposition, i.e., thermally activated diffusion processes at the surface and the grain boundary. According to the new model, ballistically induced sub-surface defects can get incorporated as excess atoms at the grain boundary, remain trapped in the bulk, or annihilate at the free surface, resulting in a complex dependence of the steady-state stress on the grain size, the growth rate, as well as the energetics of the incoming particle flux. We compare calculations from the model with in situ stress measurements performed on a series of Mo films sputter-deposited at different conditions and having different grain sizes. The model is able to reproduce the observed increase of compressive stress with increasing growth rate, behavior that is the opposite of what is typically seen under non-energetic growth conditions. On a grander scale, this study is a step towards obtaining a comprehensive understanding of stress generation and evolution in vapor deposited polycrystalline thin films.
Gas Generation Testing of Spherical Resorcinol-Formaldehyde (sRF) Resin
DOE Office of Scientific and Technical Information (OSTI.GOV)
Colburn, Heather A.; Bryan, Samuel A.; Camaioni, Donald M.
This report describes gas generation testing of the spherical resorcinol-formaldehyde (sRF) resin conducted to support the technology maturation of the LAWPS facility. The current safety basis for the LAWPS facility rests primarily on two studies that had limited or inconclusive data sets. The two studies indicated a 40% increase in the hydrogen generation rate of water (as predicted by the Hu model) when sRF resin was present, compared with water alone. However, the previous studies did not test the range of conditions (process fluids and temperatures) expected in the LAWPS facility, nor did they obtain replicate test results or comparable liquid-only control samples. All of the testing described in this report, conducted with water, 0.45 M nitric acid, and waste simulants with and without sRF resin, returned hydrogen generation rates that are within the current safety basis for the facility of 1.4 times the Hu model output for water.
Zarghami, Zabihullah; Akbari, Ahmad; Latifi, Ali Mohammad; Amani, Mohammad Ali
2016-04-01
In this research, different generations of PAMAM-grafted chitosan as integrated biosorbents were successfully synthesized via a step-by-step divergent growth approach. The synthesized products were utilized as adsorbents for removing heavy metals (Pb(2+) in this study) from aqueous solution, and their Pb(2+) removal potential was evaluated. The results showed that products bearing higher dendrimer generations have greater adsorption capacity than products with lower generations or chitosan alone; the adsorption capacity of the product with generation-3 dendrimer is 18 times that of chitosan alone. Thermodynamic and kinetic studies were performed to describe the equilibrium uptake capacity and the kinetic uptake rate, respectively. These studies showed that the Langmuir isotherm model and the pseudo-second-order kinetic model best describe the equilibrium uptake data and the kinetics of Pb(2+) uptake, respectively. Copyright © 2016 Elsevier Ltd. All rights reserved.
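The two models named above have standard closed forms that are easy to fit. The sketch below uses scipy's curve_fit on synthetic data standing in for the paper's Pb(2+) measurements; all numbers are invented for illustration.

import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    """Equilibrium uptake: qe = qmax * KL * Ce / (1 + KL * Ce)."""
    return qmax * KL * Ce / (1.0 + KL * Ce)

def pseudo_second_order(t, qe, k2):
    """Kinetic uptake: q(t) = k2 * qe^2 * t / (1 + k2 * qe * t)."""
    return k2 * qe**2 * t / (1.0 + k2 * qe * t)

rng = np.random.default_rng(1)
Ce = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 200.0])     # mg/L, synthetic
qe_obs = langmuir(Ce, 120.0, 0.03) * (1 + 0.02 * rng.standard_normal(6))

(qmax, KL), _ = curve_fit(langmuir, Ce, qe_obs, p0=(100.0, 0.01))
print(f"qmax = {qmax:.1f} mg/g, KL = {KL:.3f} L/mg")
print(pseudo_second_order(np.array([5.0, 15.0, 30.0, 60.0]), qe=110.0, k2=1e-3))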
NASA Astrophysics Data System (ADS)
Fujihara, S.; Korenaga, M.; Kawaji, K.; Akiyama, S.
2013-12-01
We compare and evaluate the nature of tsunami generation and seismic wave generation during the 2011 Tohoku-Oki earthquake (hereafter TOH11) in terms of two types of moment rate functions, inferred from finite source imaging of tsunami waveforms and of seismic waveforms. Since the 1970s, the nature of "tsunami earthquakes" has been discussed in many studies (e.g., Kanamori, 1972; Kanamori and Kikuchi, 1993; Kikuchi and Kanamori, 1995; Ide et al., 1993; Satake, 1994), mostly based on analysis of seismic waveform data, in terms of the "slow" nature of tsunami earthquakes (e.g., the 1992 Nicaragua earthquake). Although TOH11 is not necessarily understood as a tsunami earthquake, it is one of the historical earthquakes that simultaneously generated large seismic waves and a large tsunami, and it was observed both by the seismic observation network and by the tsunami observation network around the Japanese islands. Therefore, for the purpose of analyzing the nature of tsunami generation, we utilize tsunami waveform data as much as possible. In our previous studies of TOH11 (Fujihara et al., 2012a; Fujihara et al., 2012b), we inverted tsunami waveforms at the GPS wave gauges of NOWPHAS to image the spatio-temporal slip distribution. The temporal nature of our tsunami source model is generally consistent with other tsunami source models (e.g., Satake et al., 2013). For seismic waveform inversion based on a 1-D structure, we inverted broadband seismograms at GSN stations using the teleseismic body-wave inversion scheme (Kikuchi and Kanamori, 2003). For seismic waveform inversion considering the inhomogeneous internal structure, we inverted strong motion seismograms at K-NET and KiK-net stations based on 3-D Green's functions (Fujihara et al., 2013a; Fujihara et al., 2013b). The gross temporal nature of our seismic source models is generally consistent with other seismic source models (e.g., Yoshida et al., 2011; Ide et al., 2011; Yagi and Fukahata, 2011; Suzuki et al., 2011). The comparison of the two moment rate functions suggests that a time period common to both seismic wave generation and tsunami generation was followed by a time period unique to tsunami generation. At this point, we think that comparing the absolute values of the moment rates from the tsunami waveform inversion and the seismic waveform inversion is not very meaningful, because of the general ambiguity of the rigidity values of each subfault in the fault region (assuming the rigidity value of 30 GPa of Yoshida et al. (2011)). Considering this, the normalized moment rate functions were also evaluated; normalization does not change the general features of the two moment rate functions in terms of duration. Furthermore, the results suggest that the tsunami generation process took more time than the seismic wave generation process did. Tsunami can be generated even by "extra" motions resulting from several proposed anomalous mechanisms; these extra motions may account for tsunami generation on a larger scale than expected from the seismic ground motion magnitude, and for the longer duration of the tsunami generation process.
Trujillo, Francisco Javier; Knoerzer, Kai
2011-11-01
High power ultrasound reactors have gained a lot of interest in the food industry given the effects that can arise from ultrasound-induced cavitation in liquid foods. However, most new food processing developments have been based on empirical approaches. Thus, there is a need for mathematical models that help to understand, optimize, and scale up ultrasonic reactors. In this work, a computational fluid dynamics (CFD) model was developed to predict the acoustic streaming and the induced heat generated by an ultrasonic horn reactor. In the model it is assumed that the horn tip is a fluid inlet, where a turbulent jet flow is injected into the vessel. The hydrodynamic momentum rate of the incoming jet is assumed to be equal to the total acoustic momentum rate emitted by the acoustic power source. CFD velocity predictions show excellent agreement with the experimental data for power densities W0/V ≥ 25 kW/m³. This model successfully describes the hydrodynamic fields (streaming) generated by low-frequency, high-power ultrasound. Crown Copyright © 2011. Published by Elsevier B.V. All rights reserved.
A model for predicting Xanthomonas arboricola pv. pruni growth as a function of temperature
Llorente, Isidre; Montesinos, Emilio; Moragrega, Concepció
2017-01-01
A two-step modeling approach was used for predicting the effect of temperature on the growth of Xanthomonas arboricola pv. pruni, causal agent of bacterial spot disease of stone fruit. The in vitro growth of seven strains was monitored at temperatures from 5 to 35°C with a Bioscreen C system, and a calibrating equation was generated for converting optical densities to viable counts. In primary modeling, Baranyi, Buchanan, and modified Gompertz equations were fitted to viable count growth curves over the entire temperature range. The modified Gompertz model showed the best fit to the data, and it was selected to estimate the bacterial growth parameters at each temperature. Secondary modeling of maximum specific growth rate as a function of temperature was performed by using the Ratkowsky model and its variations. The modified Ratkowsky model showed the best goodness of fit to maximum specific growth rate estimates, and it was validated successfully for the seven strains at four additional temperatures. The model generated in this work will be used for predicting temperature-based Xanthomonas arboricola pv. pruni growth rate and derived potential daily doublings, and included as the inoculum potential component of a bacterial spot of stone fruit disease forecaster. PMID:28493954
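The primary and secondary models named above have compact standard forms. The sketch below uses the Zwietering parameterization of the modified Gompertz curve and the simple square-root Ratkowsky model (the paper's best fit was a modified Ratkowsky variant); all parameter values are placeholders, not the study's estimates.

import numpy as np

def modified_gompertz(t, y0, C, mu_max, lam):
    """Primary model: log-count y(t) = y0 + C*exp(-exp(mu_max*e/C*(lam - t) + 1))."""
    return y0 + C * np.exp(-np.exp(mu_max * np.e / C * (lam - t) + 1.0))

def ratkowsky_sqrt(T, b, Tmin):
    """Secondary model: sqrt(mu_max) = b*(T - Tmin), i.e. mu_max = b^2*(T - Tmin)^2."""
    return (b * (T - Tmin)) ** 2

t = np.linspace(0.0, 48.0, 7)                    # time, h
mu = ratkowsky_sqrt(25.0, b=0.03, Tmin=4.0)      # growth rate at 25 C (assumed b, Tmin)
print(modified_gompertz(t, y0=3.0, C=6.0, mu_max=mu, lam=5.0))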
Rapid recipe formulation for plasma etching of new materials
NASA Astrophysics Data System (ADS)
Chopra, Meghali; Zhang, Zizhuo; Ekerdt, John; Bonnecaze, Roger T.
2016-03-01
A fast and inexpensive scheme for etch rate prediction using flexible continuum models and Bayesian statistics is demonstrated. Bulk etch rates of MgO are predicted using a steady-state model with volume-averaged plasma parameters and classical Langmuir surface kinetics. Plasma particle and surface kinetics are modeled within a global plasma framework using single-component Metropolis-Hastings methods and limited data. The accuracy of these predictions is evaluated with synthetic and experimental etch rate data for magnesium oxide in an ICP-RIE system. This approach is compared with, and found superior to, factorial models generated from JMP, a software package frequently employed for recipe creation and optimization.
Modeling the growth of Listeria monocytogenes in mold-ripened cheeses.
Lobacz, Adriana; Kowalik, Jaroslaw; Tarczynska, Anna
2013-06-01
This study presents possible applications of predictive microbiology to model the safety of mold-ripened cheeses with respect to Listeria monocytogenes during (1) the ripening of Camembert cheese, (2) cold storage of Camembert cheese at temperatures ranging from 3 to 15°C, and (3) cold storage of blue cheese at temperatures ranging from 3 to 15°C. The primary models used in this study, the Baranyi model and the modified Gompertz function, were fitted to growth curves. The Baranyi model yielded the best goodness of fit, and the growth rates generated by this model were used for secondary modeling (Ratkowsky simple square root and polynomial models). The polynomial model more accurately predicted the influence of temperature on the growth rate, reaching adjusted coefficients of multiple determination of 0.97 and 0.92 for Camembert and blue cheese, respectively. The observed growth rates of L. monocytogenes in mold-ripened cheeses were compared with simulations run with the Pathogen Modeling Program (PMP 7.0, USDA, Wyndmoor, PA) and ComBase Predictor (Institute of Food Research, Norwich, UK); the latter predictions, however, proved to be consistently overestimated and carried a significant level of error. In addition, a validation against independent data on dairy products from the ComBase database (www.combase.cc) was performed. In conclusion, L. monocytogenes was found to grow much faster in Camembert than in blue cheese. Both the Baranyi and Gompertz models described this phenomenon accurately, although the Baranyi model carried a smaller error. Secondary modeling and further validation of the generated models highlighted the issue of the usability and applicability of predictive models in the food processing industry through models targeted at a specific product or a group of similar products. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Can low-resolution airborne laser scanning data be used to model stream rating curves?
Lyon, Steve; Nathanson, Marcus; Lam, Norris; Dahlke, Helen; Rutzinger, Martin; Kean, Jason W.; Laudon, Hjalmar
2015-01-01
This pilot study explores the potential of using low-resolution (0.2 points/m²) airborne laser scanning (ALS)-derived elevation data to model stream rating curves. Rating curves, which translate stream water depth into discharge and are therefore integral to water resource monitoring efforts, were modeled using a physics-based approach that captures basic geometric measurements to establish flow resistance due to implicit channel roughness. We tested synthetically thinned high-resolution (more than 2 points/m²) ALS data as a proxy for low-resolution data at a point density equivalent to that obtained within most national-scale ALS strategies. Our results show that the errors incurred due to the use of low-resolution rather than high-resolution ALS data were smaller than those due to flow measurement and empirical rating curve fitting uncertainties. As such, although there likely are scale and technical limitations to consider, it is theoretically possible to generate rating curves in a river network from ALS data of the resolution anticipated within national-scale ALS schemes (at least for rivers with relatively simple geometries). This is promising, since generating rating curves from ALS scans would greatly enhance our ability to monitor streamflow by simplifying the overall effort required.
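One common physics-based way to turn scanned channel geometry into a rating curve is the Manning equation, sketched below for a rectangular section. The roughness coefficient, slope, and section shape are assumptions for illustration, not the study's exact flow-resistance formulation.

import numpy as np

def manning_discharge(depth, width=8.0, n=0.05, slope=0.003):
    """Q = (1/n) * A * R^(2/3) * S^(1/2) for a rectangular channel section."""
    A = width * depth                  # flow area, m^2
    P = width + 2.0 * depth            # wetted perimeter, m
    R = A / P                          # hydraulic radius, m
    return (1.0 / n) * A * R ** (2.0 / 3.0) * np.sqrt(slope)

for h in (0.25, 0.5, 1.0, 1.5):        # stage (m) -> discharge (m^3/s): the rating curve
    print(f"h = {h:.2f} m  Q = {manning_discharge(h):.2f} m^3/s")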
Ensemble forecast of human West Nile virus cases and mosquito infection rates
NASA Astrophysics Data System (ADS)
Defelice, Nicholas B.; Little, Eliza; Campbell, Scott R.; Shaman, Jeffrey
2017-02-01
West Nile virus (WNV) is now endemic in the continental United States; however, our ability to predict spillover transmission risk and human WNV cases remains limited. Here we develop a model depicting WNV transmission dynamics, which we optimize using a data assimilation method and two observed data streams, mosquito infection rates and reported human WNV cases. The coupled model-inference framework is then used to generate retrospective ensemble forecasts of historical WNV outbreaks in Long Island, New York for 2001-2014. Accurate forecasts of mosquito infection rates are generated before peak infection, and >65% of forecasts accurately predict seasonal total human WNV cases up to 9 weeks before the past reported case. This work provides the foundation for implementation of a statistically rigorous system for real-time forecast of seasonal outbreaks of WNV.
Link, William A; Barker, Richard J
2005-03-01
We present a hierarchical extension of the Cormack-Jolly-Seber (CJS) model for open population capture-recapture data. In addition to recaptures of marked animals, we model first captures of animals and losses on capture. The parameter set includes capture probabilities, survival rates, and birth rates. The survival rates and birth rates are treated as a random sample from a bivariate distribution, thus the model explicitly incorporates correlation in these demographic rates. A key feature of the model is that the likelihood function, which includes a CJS model factor, is expressed entirely in terms of identifiable parameters; losses on capture can be factored out of the model. Since the computational complexity of classical likelihood methods is prohibitive, we use Markov chain Monte Carlo in a Bayesian analysis. We describe an efficient candidate-generation scheme for Metropolis-Hastings sampling of CJS models and extensions. The procedure is illustrated using mark-recapture data for the moth Gonodontis bidentata.
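The accept-reject mechanics underlying the Metropolis-Hastings sampling mentioned above can be shown in a few lines. This is not the paper's full CJS sampler; it is a minimal random-walk sampler for a single survival probability phi under a binomial likelihood, with invented summary counts.

import math, random

survived, released = 43, 100          # toy mark-recapture summary (assumed)
random.seed(1)

def log_post(phi):                    # flat prior on (0, 1)
    if not 0.0 < phi < 1.0:
        return float("-inf")
    return survived * math.log(phi) + (released - survived) * math.log(1.0 - phi)

phi, chain = 0.5, []
for _ in range(20000):
    cand = phi + random.gauss(0.0, 0.05)          # symmetric random-walk proposal
    if math.log(random.random()) < log_post(cand) - log_post(phi):
        phi = cand                                # accept; otherwise keep current phi
    chain.append(phi)

post = chain[5000:]                               # drop burn-in
print("posterior mean of phi:", sum(post) / len(post))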
Zeitler, Emily P; Patel, Divyang; Hasselblad, Vic; Sanders, Gillian D; Al-Khatib, Sana M
2015-07-01
The number of cardiac implantable electronic device (CIED) recalls and advisories has increased over the past 3 decades, yet no consensus exists on how best to manage patients with these CIEDs, partly because the rates of complications from prophylactic replacement are unknown. The purpose of this study was to establish the rates of complications when recalled CIED generators are replaced prophylactically. We searched MEDLINE and the Cochrane Controlled Trials Register for reports of prophylactic replacement of recalled CIED generators; studies with <20 subjects were excluded. We then conducted a meta-analysis of the qualifying studies to determine the rates of combined major complications, mortality, and reoperation. We identified 7 citations that met our inclusion criteria and reported at least one endpoint of interest. Four were single-center and three were multicenter; six collected data retrospectively (n = 1213) and one prospectively (n = 222). Using a random effects model to combine data from all included studies, the rate of major complications was 2.5% (95% confidence interval [CI] 1.0%-4.5%). Combining data from the 6 studies reporting mortality and reoperation, the rates were 0.5% (95% CI 0.1%-0.9%) and 2.5% (95% CI 0.8%-4.5%), respectively. Prophylactic replacement of recalled CIED generators is associated with a low mortality rate but nontrivial rates of other major complications, similar to those reported when CIED generators are replaced for other reasons. Thus, when considering replacing a recalled CIED generator, the known risks of elective generator replacement likely apply and can be weighed against the risks associated with device failure. Copyright © 2015 Heart Rhythm Society. Published by Elsevier Inc. All rights reserved.
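The abstract names a random effects model without specifying it; the DerSimonian-Laird estimator is the common choice, sketched below on invented study counts (the paper's per-study data are not reproduced here).

import math

events = [3, 5, 2, 8, 1, 4]            # complications per study (invented)
totals = [120, 210, 90, 310, 60, 180]  # replacements per study (invented)

p = [(e + 0.5) / (n + 1.0) for e, n in zip(events, totals)]  # continuity-corrected
v = [pi * (1 - pi) / n for pi, n in zip(p, totals)]          # within-study variance
w = [1.0 / vi for vi in v]

p_fixed = sum(wi * pi for wi, pi in zip(w, p)) / sum(w)
Q = sum(wi * (pi - p_fixed) ** 2 for wi, pi in zip(w, p))    # heterogeneity statistic
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - (len(p) - 1)) / c)                      # between-study variance

w_re = [1.0 / (vi + tau2) for vi in v]                       # random-effects weights
p_re = sum(wi * pi for wi, pi in zip(w_re, p)) / sum(w_re)
se = math.sqrt(1.0 / sum(w_re))
print(f"pooled rate = {100*p_re:.1f}% "
      f"(95% CI {100*(p_re - 1.96*se):.1f}%-{100*(p_re + 1.96*se):.1f}%)")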
Damage Propagation Modeling for Aircraft Engine Prognostics
NASA Technical Reports Server (NTRS)
Saxena, Abhinav; Goebel, Kai; Simon, Don; Eklund, Neil
2008-01-01
This paper describes how damage propagation can be modeled within the modules of aircraft gas turbine engines. To that end, response surfaces of all sensors are generated via a thermo-dynamical simulation model for the engine as a function of variations of flow and efficiency of the modules of interest. An exponential rate of change for flow and efficiency loss was imposed for each data set, starting at a randomly chosen initial deterioration set point. The rate of change of the flow and efficiency denotes an otherwise unspecified fault with increasingly worsening effect. The rates of change of the faults were constrained to an upper threshold but were otherwise chosen randomly. Damage propagation was allowed to continue until a failure criterion was reached. A health index was defined as the minimum of several superimposed operational margins at any given time instant and the failure criterion is reached when health index reaches zero. Output of the model was the time series (cycles) of sensed measurements typically available from aircraft gas turbine engines. The data generated were used as challenge data for the Prognostics and Health Management (PHM) data competition at PHM 08.
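The data-generation recipe above reduces to a short loop: exponential fault growth from a random onset, a health index equal to the minimum of the operating margins, and failure when that index reaches zero. The sketch below is a toy version; the margins, sensitivities, and rate bounds are illustrative, not the simulation's actual values.

import numpy as np

rng = np.random.default_rng(42)
onset = int(rng.integers(20, 80))       # randomly chosen initial deterioration point
rate = rng.uniform(0.01, 0.03)          # fault growth rate, random but bounded above

margins0 = np.array([1.0, 0.8, 0.9])    # initial operational margins (assumed)
sensitivity = np.array([1.0, 1.5, 0.7]) # margin loss per unit efficiency loss (assumed)

for cycle in range(1, 400):
    loss = np.exp(rate * max(0, cycle - onset)) - 1.0   # exponential flow/efficiency loss
    health = float(np.min(margins0 - sensitivity * loss))  # health index
    if health <= 0.0:
        print(f"failure criterion reached at cycle {cycle}")
        break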
Effect of Repeat Copy Number on Variable-Number Tandem Repeat Mutations in Escherichia coli O157:H7
Vogler, Amy J.; Keys, Christine; Nemoto, Yoshimi; Colman, Rebecca E.; Jay, Zack; Keim, Paul
2006-01-01
Variable-number tandem repeat (VNTR) loci have shown a remarkable ability to discriminate among isolates of the recently emerged clonal pathogen Escherichia coli O157:H7, making them a very useful molecular epidemiological tool. However, little is known about the rates at which these sequences mutate, the factors that affect mutation rates, or the mechanisms by which mutations occur at these loci. Here, we measure mutation rates for 28 VNTR loci and investigate the effects of repeat copy number and mismatch repair on mutation rate using in vitro-generated populations for 10 E. coli O157:H7 strains. We find single-locus rates as high as 7.0 × 10−4 mutations/generation and a combined 28-locus rate of 6.4 × 10−4 mutations/generation. We observed single- and multirepeat mutations that were consistent with a slipped-strand mispairing mutation model, as well as a smaller number of large repeat copy number mutations that were consistent with recombination-mediated events. Repeat copy number within an array was strongly correlated with mutation rate both at the most mutable locus, O157-10 (r2 = 0.565, P = 0.0196), and across all mutating loci. The combined locus model was significant whether locus O157-10 was included (r2 = 0.833, P < 0.0001) or excluded (r2 = 0.452, P < 0.0001) from the analysis. Deficient mismatch repair did not affect mutation rate at any of the 28 VNTRs with repeat unit sizes of >5 bp, although a poly(G) homomeric tract was destabilized in the mutS strain. Finally, we describe a general model for VNTR mutations that encompasses insertions and deletions, single- and multiple-repeat mutations, and their relative frequencies based upon our empirical mutation rate data. PMID:16740932
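Two quantities from the abstract are simple to compute: a per-locus mutation rate from counts, and a regression of (log) rate on repeat copy number. The numbers below are invented stand-ins for the paper's data; only the form of the calculation is taken from the text.

import numpy as np

mutants, generations = 21, 30000             # observed mutants over total cell generations
print("rate =", mutants / generations, "mutations/generation")  # order of 7.0e-4

copies = np.array([5, 8, 12, 17, 25])        # repeat copy number per locus (invented)
rates = np.array([2e-6, 1e-5, 6e-5, 2e-4, 7e-4])  # per-locus rates (invented)
slope, intercept = np.polyfit(copies, np.log10(rates), 1)
r2 = np.corrcoef(copies, np.log10(rates))[0, 1] ** 2
print(f"log10(rate) = {slope:.3f}*copies + {intercept:.2f}, r^2 = {r2:.2f}")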
Development and Validation of an NPSS Model of a Small Turbojet Engine
NASA Astrophysics Data System (ADS)
Vannoy, Stephen Michael
Recent studies have shown that integrated gas turbine engine (GT)/solid oxide fuel cell (SOFC) systems for combined propulsion and power on aircraft offer a promising route to more efficient onboard electrical power generation. However, it appears that no one has yet attempted to construct a hybrid GT/SOFC prototype for combined propulsion and electrical power generation. This thesis contributes to this ambition by developing an experimentally validated thermodynamic model of a small gas turbine (~230 N thrust) platform for a bench-scale GT/SOFC system. The thermodynamic model is implemented in a NASA-developed software environment called Numerical Propulsion System Simulation (NPSS). An indoor test facility was constructed to measure the engine's performance parameters: thrust, air flow rate, fuel flow rate, engine speed (RPM), and all axial stage stagnation temperatures and pressures. The NPSS model predictions are compared to the measured performance parameters for steady-state engine operation.
Multi-laboratory study of flow-induced hemolysis using the FDA benchmark nozzle model
Herbertson, Luke H.; Olia, Salim E.; Daly, Amanda; Noatch, Christopher P.; Smith, William A.; Kameneva, Marina V.; Malinauskas, Richard A.
2015-01-01
Multilaboratory in vitro blood damage testing was performed on a simple nozzle model to determine how different flow parameters and blood properties affect device-induced hemolysis and to generate data for comparison with computational fluid dynamics-based predictions of blood damage, as part of an FDA initiative for assessing medical device safety. Three independent laboratories evaluated hemolysis as a function of nozzle entrance geometry, flow rate, and blood properties. Bovine blood anticoagulated with acid citrate dextrose solution (2–80 h post-draw) was recirculated through nozzle-containing and paired nozzle-free control loops for 2 h. Controlled parameters included hematocrit (36 ± 1.5%), temperature (25°C), blood volume, flow rate, and pressure. Three nozzle test conditions were evaluated (n = 26–36 trials each): (i) sudden contraction at the entrance with a blood flow rate of 5 L/min, (ii) gradual cone at the entrance with a 6 L/min blood flow rate, and (iii) sudden-contraction inlet at 6 L/min. The blood damage caused only by the nozzle model was calculated by subtracting the hemolysis generated by the paired control loop test. Despite high intralaboratory variability, significant differences among the three test conditions were observed, with the sharp nozzle entrance causing the most hemolysis. Modified index of hemolysis values for the nozzle (MIH_nozzle) were 0.292 ± 0.249, 0.021 ± 0.128, and 1.239 ± 0.667 for conditions i–iii, respectively. Porcine blood generated hemolysis results similar to those obtained with bovine blood. Although the interlaboratory hemolysis results are only applicable to the specific blood parameters and nozzle model used here, these empirical data may help to advance computational fluid dynamics models for predicting blood damage. PMID:25180887
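A hemolysis index of this kind normalizes the rise in plasma free hemoglobin by the total hemoglobin pumped through the loop. The sketch below is written in the spirit of the ASTM F1841-style definitions used in such studies; treat the exact normalization and all input values as assumptions, not the paper's protocol.

def mih(delta_fhb_mg_per_L, volume_L, hct_pct, total_hb_mg_per_L, Q_L_per_min, T_min):
    """Modified index of hemolysis: plasma free-Hb increase, corrected for
    plasma fraction, normalized by total Hb pumped (dimensionless, x1e6)."""
    plasma_fraction = (100.0 - hct_pct) / 100.0
    return (delta_fhb_mg_per_L * volume_L * plasma_fraction * 1e6 /
            (total_hb_mg_per_L * Q_L_per_min * T_min))

# e.g. a 20 mg/L rise in plasma free Hb over 2 h at 6 L/min in a 450 mL loop,
# with total blood Hb of 12 g/dL (120,000 mg/L) -- all example numbers
print(mih(20.0, 0.45, 36.0, 120_000.0, 6.0, 120.0))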
Kwag, Jeehyun; Jang, Hyun Jae; Kim, Mincheol; Lee, Sujeong
2014-01-01
Rate and phase codes are believed to be important in neural information processing. Hippocampal place cells provide a good example where both coding schemes coexist during spatial information processing. Spike rate increases in the place field, whereas spike phase precesses relative to the ongoing theta oscillation. However, what intrinsic mechanism allows for a single neuron to generate spike output patterns that contain both neural codes is unknown. Using dynamic clamp, we simulate an in vivo-like subthreshold dynamics of place cells to in vitro CA1 pyramidal neurons to establish an in vitro model of spike phase precession. Using this in vitro model, we show that membrane potential oscillation (MPO) dynamics is important in the emergence of spike phase codes: blocking the slowly activating, non-inactivating K+ current (IM), which is known to control subthreshold MPO, disrupts MPO and abolishes spike phase precession. We verify the importance of adaptive IM in the generation of phase codes using both an adaptive integrate-and-fire and a Hodgkin–Huxley (HH) neuron model. Especially, using the HH model, we further show that it is the perisomatically located IM with slow activation kinetics that is crucial for the generation of phase codes. These results suggest an important functional role of IM in single neuron computation, where IM serves as an intrinsic mechanism allowing for dual rate and phase coding in single neurons. PMID:25100320
Crago, Patrick E; Makowski, Nathan S
2014-01-01
Objective: Stimulation of peripheral nerves is often superimposed on ongoing motor and sensory activity in the same axons, without a quantitative model of the net action potential train at the axon endpoint. Approach: We develop a model of action potential patterns elicited by superimposing constant frequency axonal stimulation on the action potentials arriving from a physiologically activated neural source. The model includes interactions due to collision block, resetting of the neural impulse generator, and the refractory period of the axon at the point of stimulation. Main results: Both the mean endpoint firing rate and the probability distribution of the action potential firing periods depend strongly on the relative firing rates of the two sources and the intersite conduction time between them. When the stimulus rate exceeds the neural rate, neural action potentials do not reach the endpoint and the rate of endpoint action potentials is the same as the stimulus rate, regardless of the intersite conduction time. However, when the stimulus rate is less than the neural rate, and the intersite conduction time is short, the two rates partially sum. Increases in stimulus rate produce non-monotonic increases in endpoint rate and continuously increasing block of neurally generated action potentials. Rate summation is reduced and more neural action potentials are blocked as the intersite conduction time increases. At long intersite conduction times, the endpoint rate simplifies to being the maximum of either the neural or the stimulus rate. Significance: This study highlights the potential of increasing the endpoint action potential rate and preserving neural information transmission by low rate stimulation with short intersite conduction times. Intersite conduction times can be decreased with proximal stimulation sites for muscles and distal stimulation sites for sensory endings. The model provides a basis for optimizing experiments and designing neuroprosthetic interventions involving motor or sensory stimulation. PMID:25161163
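The collision-block interaction described above can be illustrated with a short event-based simulation: a neural spike launched at time t collides with an antidromic stimulus spike fired at time s whenever |s - t| is less than the intersite conduction time. This sketch omits refractoriness and pacemaker resetting, which the full model includes, and all rates and delays are example values.

import numpy as np

rng = np.random.default_rng(0)
T, neural_rate, stim_rate, tau = 10.0, 40.0, 25.0, 0.004  # s, Hz, Hz, s (assumed)

neural = np.cumsum(rng.exponential(1.0 / neural_rate, int(T * neural_rate * 2)))
neural = neural[neural < T]                 # Poisson neural source spike times
stim = np.arange(0.0, T, 1.0 / stim_rate)   # constant-frequency stimulus times

endpoint = list(stim + tau)                 # stimulus spikes always reach the endpoint
for t in neural:
    # blocked if an antidromic stimulus spike occupies the intersite segment
    if not np.any(np.abs(stim - t) < tau):
        endpoint.append(t + 2 * tau)        # assumed source-to-endpoint delay
print("endpoint rate (Hz):", len(endpoint) / T)  # partial summation at short tau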
Min, Yul Ha; Park, Hyeoun-Ae; Lee, Joo Yun; Jo, Soo Jung; Jeon, Eunjoo; Byeon, Namsoo; Choi, Seung Yong; Chung, Eunja
2014-01-01
The aim of this study was to develop and evaluate a natural language generation system that populates nursing narratives using detailed clinical models. Semantic, contextual, and syntactic knowledge was extracted, and a natural language generation system linking these knowledge sources was developed. The quality of the generated nursing narratives was evaluated by three nurse experts using a five-point rating scale. With 82 detailed clinical models, a total of 66,888 nursing narratives of four different statement types were generated. The mean score for overall quality was 4.66; for content, 4.60; for grammaticality, 4.40; for writing style, 4.13; and for correctness, 4.60. The system developed in this study generated nursing narratives with different levels of granularity, and the generated narratives can improve the semantic interoperability of nursing data documented in nursing records.
Effects of yearling, juvenile and adult survival on reef manta ray (Manta alfredi) demography
van der Ouderaa, Isabelle B.C.; Tibiriçá, Yara
2016-01-01
Background: The trade in manta ray gill plates has considerably increased over the last two decades. The resulting increases in ray mortality, in addition to mortality caused by by-catch, have caused many ray populations to decrease in size. The aim of this study was to ascertain how yearling and juvenile growth and survival, and adult survival and reproduction, affect reef manta ray (Manta alfredi) population change, to increase our understanding of manta ray demography and thereby improve conservation research and measures for these fish. Methods: We developed a population projection model for reef manta rays, and used published life history data on yearling and juvenile growth and adult reproduction to parameterise the model. Because little is known about reef manta ray yearling and juvenile survival, we conducted our analyses using a range of plausible survival rate values for yearlings, juveniles and adults. Results: The model accurately captured observed variation in population growth rate, lifetime reproductive success and cohort generation time in different reef manta ray populations. Our demographic analyses revealed a range of population consequences in response to variation in demographic rates. For example, an increase in yearling or adult survival rates always elicited greater responses in population growth rate, lifetime reproductive success and cohort generation time than the same increase in juvenile survival rate. The population growth rate increased linearly, but lifetime reproductive success and cohort generation time increased at an accelerating rate with increasing yearling or adult survival rates. Hence, even a small increase in survival rate could increase lifetime reproductive success by one pup, and cohort generation time by several years. Elasticity analyses revealed that, depending on survival rate values of all life stages, the population growth rate is either most sensitive to changes in the rate with which juveniles survive but stay juveniles (i.e., do not mature into adults) or to changes in adult survival rate. However, when assessing these results against estimates on population growth and adult survival rates for populations off the coasts of Mozambique and Japan, we found that the population growth rate is predicted to be always most sensitive to changes in the adult survival rate. Discussion: It is important to gain an in-depth understanding of reef manta ray life histories, particularly of yearling and adult survival rates, as these can influence reef manta ray population dynamics in a variety of ways. For declining populations in particular, it is crucial to know which life stage should be targeted for their conservation. For one such declining population off the coast of Mozambique, adult annual survival rate has the greatest effect on population growth, and by increasing adult survival by protecting adult aggregation sites, this population’s decline could be halted or even reversed. PMID:27635337
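A stage-structured projection model of the kind used above can be written as a small matrix whose dominant eigenvalue is the population growth rate. The sketch below uses three stages (yearling, juvenile, adult); the survival, maturation, and fecundity values are placeholders, not the paper's estimates.

import numpy as np

s_y, s_j, s_a = 0.63, 0.80, 0.95   # yearling/juvenile/adult annual survival (assumed)
g_j = 0.10                         # probability a surviving juvenile matures (assumed)
f_a = 0.125                        # female pups per adult female per year (assumed)

A = np.array([
    [0.0, 0.0,              f_a],  # reproduction -> new yearlings
    [s_y, s_j * (1 - g_j),  0.0],  # survive to / remain in juvenile stage
    [0.0, s_j * g_j,        s_a],  # mature into / remain in adult stage
])

lam = np.linalg.eigvals(A).real.max()          # dominant eigenvalue = growth rate
print("lambda =", round(lam, 3))               # lambda < 1 indicates decline

# crude elasticity to adult survival: proportional change in lambda per
# proportional change in s_a, via a 1% perturbation
A2 = A.copy(); A2[2, 2] *= 1.01
lam2 = np.linalg.eigvals(A2).real.max()
print("elasticity(s_a) ~", round(((lam2 - lam) / lam) / 0.01, 2))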
Worth Longest, P; Hindle, Michael; Das Choudhuri, Suparna
2009-06-01
For most newly developed spray aerosol inhalers, the generation time is a potentially important variable that can be fully controlled. The objective of this study was to determine the effects of spray aerosol generation time on transport and deposition in a standard induction port (IP) and more realistic mouth-throat (MT) geometry. Capillary aerosol generation (CAG) was selected as a representative system in which spray momentum was expected to significantly impact deposition. Sectional and total depositions in the IP and MT geometries were assessed at a constant CAG flow rate of 25 mg/sec for aerosol generation times of 1, 2, and 4 sec using both in vitro experiments and a previously developed computational fluid dynamics (CFD) model. Both the in vitro and numerical results indicated that extending the generation time of the spray aerosol, delivered at a constant mass flow rate, significantly reduced deposition in the IP and more realistic MT geometry. Specifically, increasing the generation time of the CAG system from 1 to 4 sec reduced the deposition fraction in the IP and MT geometries by approximately 60 and 33%, respectively. Furthermore, the CFD predictions of deposition fraction were found to be in good agreement with the in vitro results for all times considered in both the IP and MT geometries. The numerical results indicated that the reduction in deposition fraction over time was associated with temporal dissipation of what was termed the spray aerosol "burst effect." Based on these results, increasing the spray aerosol generation time, at a constant mass flow rate, may be an effective strategy for reducing deposition in the standard IP and in more realistic MT geometries.
Biological Potential in Serpentinizing Systems
NASA Technical Reports Server (NTRS)
Hoehler, Tori M.
2016-01-01
Generation of the microbial substrate hydrogen during serpentinization, the aqueous alteration of ultramafic rocks, has focused interest on the potential of serpentinizing systems to support biological communities or even the origin of life. However, the process also generates considerable alkalinity, a challenge to life, and both pH and hydrogen concentrations vary widely across natural systems as a result of different host rock and fluid compositions and differing physical and hydrogeologic conditions. Biological potential is expected to vary in concert. We examined the impact of such variability on the bioenergetics of an example metabolism, methanogenesis, using a cell-scale reactive transport model to compare rates of metabolic energy generation as a function of physicochemical environment. Potential rates vary over more than 5 orders of magnitude, including bioenergetically non-viable conditions, across the range of naturally occurring conditions. In parallel, we assayed rates of hydrogen metabolism in wells associated with the actively serpentinizing Coast Range Ophiolite, which includes conditions more alkaline and considerably less reducing than is typical of serpentinizing systems. Hydrogen metabolism is observed at pH approaching 12 but, consistent with the model predictions, biological methanogenesis is not observed.
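The bioenergetic comparison described above rests on the free energy of hydrogenotrophic methanogenesis, ΔG = ΔG° + RT ln Q. A minimal sketch, assuming a commonly quoted standard free energy and hypothetical fluid compositions (none of these numbers come from the study):

```python
import numpy as np

R = 8.314e-3        # kJ mol^-1 K^-1
T = 298.15          # K (25 C; serpentinizing fluids vary widely)
dG0 = -131.0        # kJ/mol CH4 for CO2 + 4 H2 -> CH4 + 2 H2O: an assumed
                    # literature value at standard conditions, not from the paper

def delta_G(h2, co2, ch4):
    """Activities approximated by molal concentrations (a simplification)."""
    Q = ch4 / (co2 * h2**4)
    return dG0 + R * T * np.log(Q)

# Sweep H2 over the orders of magnitude seen across natural systems.
for h2 in [1e-9, 1e-7, 1e-5, 1e-3]:
    print(f"H2 = {h2:.0e} m: dG = {delta_G(h2, co2=1e-5, ch4=1e-6):+.1f} kJ/mol")
```

Even this toy calculation reproduces the qualitative point: as H2 falls over a few orders of magnitude, ΔG swings from strongly favorable to bioenergetically non-viable (positive).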
NASA Astrophysics Data System (ADS)
Changqing, Zhao; Kai, Liu; Tong, Zhao; Takei, Masahiro; Weian, Ren
2014-04-01
The mud-pulse logging instrument is an advanced measurement-while-drilling (MWD) tool widely used by the industry worldwide. To improve the signal transmission rate, ensure accurate transmission of information and address the issue of weak surface signals at oil and gas wells, the signal generator should send out strong mud-pulse signals of maximum amplitude. With the rotary-valve pulse generator as the study object, the three-dimensional Reynolds-averaged Navier-Stokes equations and the standard k-ε turbulence model were used as the mathematical model. The velocity-pressure coupling was computed with the SIMPLE algorithm to obtain the pressure-wave amplitudes for different flow rates and axial clearances. Tests were performed to verify the characteristics of the pressure signals, with the pressure signal captured by the standpipe pressure monitoring system. The study showed that as the axial clearance grew, the pressure-wave amplitude decreased, weakening the pulse signal; as the flow rate increased, the pressure-wave amplitude increased and the signal was enhanced.
Game-theoretic equilibrium analysis applications to deregulated electricity markets
NASA Astrophysics Data System (ADS)
Joung, Manho
This dissertation examines game-theoretic equilibrium analysis applications to deregulated electricity markets. In particular, three specific applications are discussed: analyzing the competitive effects of ownership of financial transmission rights, developing a dynamic game model considering the ramp rate constraints of generators, and analyzing strategic behavior in electricity capacity markets. In the financial transmission right application, an investigation is made of how generators' ownership of financial transmission rights may influence the effects of the transmission lines on competition. In the second application, the ramp rate constraints of generators are explicitly modeled using a dynamic game framework, and the equilibrium is characterized as the Markov perfect equilibrium. Finally, the strategic behavior of market participants in electricity capacity markets is analyzed and it is shown that the market participants may exaggerate their available capacity in a Nash equilibrium. It is also shown that the more conservative the independent system operator's capacity procurement, the higher the risk of exaggerated capacity offers.
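As a toy illustration of the equilibrium analyses discussed above, the following sketch solves a textbook two-generator Cournot game in closed form; it is not the dissertation's model, and all demand and cost parameters are invented.

```python
# Textbook two-generator Cournot sketch (not the dissertation's model):
# inverse demand P(Q) = a - b*Q, constant marginal costs c1 and c2.
a, b = 100.0, 1.0      # demand intercept ($/MWh) and slope
c1, c2 = 20.0, 30.0    # marginal costs of the two generators

# The best-response functions q_i = (a - c_i - b*q_j) / (2b) intersect at
# the closed-form Cournot-Nash equilibrium:
q1 = (a - 2*c1 + c2) / (3*b)
q2 = (a - 2*c2 + c1) / (3*b)
P = a - b*(q1 + q2)
print(f"q1 = {q1:.1f} MW, q2 = {q2:.1f} MW, price = {P:.1f} $/MWh")
```

The dynamic-game and capacity-market analyses in the dissertation enrich exactly this kind of equilibrium computation with ramp-rate constraints (Markov perfect equilibrium) and capacity-offer strategies.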
Forecast of the World's Electrical Demands until 2025.
ERIC Educational Resources Information Center
Claverie, Maurice J.; Dupas, Alain P.
1979-01-01
Models of global energy demand, a lower-growth-rate model developed at Case Western Reserve University and the H5 model of the Conservation Committee of the World Energy Conference, assess the features of decentralized and centralized electricity generation in the years 2000 and 2025. (BT)
Snip, L J P; Flores-Alsina, X; Aymerich, I; Rodríguez-Mozaz, S; Barceló, D; Plósz, B G; Corominas, Ll; Rodriguez-Roda, I; Jeppsson, U; Gernaey, K V
2016-11-01
The use of process models to simulate the fate of micropollutants in wastewater treatment plants is constantly growing. However, due to the high workload and cost of measuring campaigns, many simulation studies lack sufficiently long time series representing realistic wastewater influent dynamics. In this paper, the feasibility of the Benchmark Simulation Model No. 2 (BSM2) influent generator is tested to create realistic dynamic influent (micro)pollutant disturbance scenarios. The presented set of models is adjusted to describe the occurrence of three pharmaceutical compounds and one metabolite of each, with samples taken every 2-4 h: the anti-inflammatory drug ibuprofen (IBU), the antibiotic sulfamethoxazole (SMX) and the psychoactive carbamazepine (CMZ). Information about the type of excretion and total consumption rates forms the basis for creating the data-defined profiles used to generate the dynamic time series. In addition, the traditional influent characteristics such as flow rate, ammonium, particulate chemical oxygen demand and temperature are also modelled using the same framework with high frequency data. The calibration is performed semi-automatically with two different methods depending on data availability. The 'traditional' variables are calibrated with the Bootstrap method while the pharmaceutical loads are estimated with a least squares approach. The simulation results demonstrate that the BSM2 influent generator can describe the dynamics of both traditional variables and pharmaceuticals. Lastly, the study is complemented with: 1) the generation of longer time series for IBU following the same catchment principles; 2) the study of the impact of in-sewer SMX biotransformation when estimating the average daily load; and 3) a critical discussion of the results and the future opportunities of the presented approach, balancing model structure/calibration procedure complexity versus predictive capabilities. Copyright © 2016. Published by Elsevier B.V.
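The least-squares calibration idea, fitting periodic influent dynamics to sampled data, can be sketched as follows with synthetic data and a harmonic design matrix; this is a stand-in illustration, not the BSM2 influent generator itself.

```python
import numpy as np

# Fit a diurnal influent flow profile with a harmonic basis by least squares.
# Synthetic one-week record at 2-h resolution (values are invented).
rng = np.random.default_rng(1)
t = np.arange(0, 7*24, 2.0)                          # hours
flow = 2000 + 400*np.sin(2*np.pi*(t - 8)/24) + rng.normal(0, 60, t.size)

# Design matrix: mean level plus the first two daily harmonics.
X = np.column_stack([np.ones_like(t),
                     np.sin(2*np.pi*t/24), np.cos(2*np.pi*t/24),
                     np.sin(4*np.pi*t/24), np.cos(4*np.pi*t/24)])
coef, *_ = np.linalg.lstsq(X, flow, rcond=None)
residual = flow - X @ coef
print(f"fitted mean flow: {coef[0]:.0f} m3/d, residual std: {residual.std():.0f}")
```

The same least-squares machinery applies whether the fitted series is flow, ammonium, or an excretion-driven pharmaceutical load profile.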
Pilkington, Rhiannon; Taylor, Anne W.; Hugo, Graeme; Wittert, Gary
2014-01-01
Background To determine differences in sociodemographic and health related characteristics of Australian Baby Boomers and Generation X at the same relative age. Methods The 1989/90 National Health Survey (NHS) for Boomers (1946–1965) and the 2007/08 NHS for Generation Xers (1966–1980) were used to compare the cohorts at the same age of 25–44 years. Generational differences for males and females in education, employment, smoking, physical activity, Body Mass Index (BMI), self-rated health, and diabetes were determined using Z tests. Prevalence estimates and p-values are reported. Logistic regression models examining overweight/obesity (BMI≥25) and diabetes prevalence as the dependent variables, with generation as the independent variable, were adjusted for sex, age, education, physical activity, smoking and BMI (diabetes model only). Adjusted odds ratios (OR) and 95% confidence intervals are reported. Results At the same age, tertiary educational attainment was higher among Generation X males (27.6% vs. 15.2% p<0.001) and females (30.0% vs. 10.6% p<0.001). Boomer females had a higher rate of unemployment (5.6% vs. 2.5% p<0.001). Boomer males and females had a higher prevalence of "excellent" self-reported health (35.9% vs. 21.8% p<0.001; 36.3% vs. 25.1% p<0.001) and smoking (36.3% vs. 30.4% p<0.001; 28.3% vs. 22.3% p<0.001). Generation X males (18.3% vs. 9.4% p<0.001) and females (12.7% vs. 10.4% p = 0.015) had a higher prevalence of obesity (BMI>30). There were no differences in physical activity. Modelling indicated that Generation X were more likely than Boomers to be overweight/obese (OR: 2.09, 1.77–2.46) and to have diabetes (OR: 1.79, 1.47–2.18). Conclusion Self-rated health has deteriorated while obesity and diabetes prevalence has increased. This may impact workforce participation and health care utilization in the future. PMID:24671114
NASA Astrophysics Data System (ADS)
Munteshari, Obaidallah; Lau, Jonathan; Krishnan, Atindra; Dunn, Bruce; Pilon, Laurent
2018-01-01
Heat generation in electric double layer capacitors (EDLCs) may lead to temperature rise and reduce their lifetime and performance. This study aims to measure the time-dependent heat generation rate in individual carbon electrodes of EDLCs under various charging conditions. First, the design, fabrication, and validation of an isothermal calorimeter are presented. The calorimeter consisted of two thermoelectric heat flux sensors connected to a data acquisition system, two identical cold plates fed with a circulating coolant, and an electrochemical test section connected to a potentiostat/galvanostat system. The EDLC cells consisted of two identical activated carbon electrodes and a separator immersed in an electrolyte. Measurements were performed on three cells with different electrolytes under galvanostatic cycling for different current densities and polarities. The measured time-averaged irreversible heat generation rate was in excellent agreement with predictions for Joule heating. The reversible heat generation rate in the positive electrode was exothermic during charging and endothermic during discharging. By contrast, the negative electrode featured both exothermic and endothermic heat generation during both charging and discharging. The results of this study can be used to validate existing thermal models, to develop thermal management strategies, and to gain insight into physicochemical phenomena taking place during operation.
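The irreversible-heat check reported above (time-averaged measurement versus the Joule prediction) can be sketched in a few lines; the current, series resistance and heat-flux trace below are invented values, not the paper's measurements.

```python
import numpy as np

# Compare a time-averaged heat-flux measurement with the Joule prediction
# q = I^2 * R_s. All numbers are illustrative assumptions.
I = 0.010             # galvanostatic current, A
R_s = 2.5             # cell series resistance, ohm (assumed)
q_joule = I**2 * R_s  # expected irreversible heat rate, W

# Hypothetical heat-flux trace over one full charge/discharge cycle: the
# reversible (entropic) part alternates sign and averages to ~zero.
t = np.linspace(0, 120, 601)                       # s
q_meas = q_joule + 2e-5*np.sin(2*np.pi*t/120)      # W

print(f"Joule prediction:        {q_joule*1e6:.1f} uW")
print(f"time-averaged measured:  {np.trapz(q_meas, t)/t[-1]*1e6:.1f} uW")
```

Averaging over an integer number of cycles is what isolates the irreversible (Joule) component from the reversible electrode heats the abstract describes.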
Automatic Generation of Just-in-Time Online Assessments from Software Design Models
ERIC Educational Resources Information Center
Zualkernan, Imran A.; El-Naaj, Salim Abou; Papadopoulos, Maria; Al-Amoudi, Budoor K.; Matthews, Charles E.
2009-01-01
Computer software is pervasive in today's society. The rate at which new versions of computer software products are released is phenomenal when compared to the release rate of new products in traditional industries such as aircraft building. This rapid rate of change can partially explain why most certifications in the software industry are…
Potential and limits for rapid genetic adaptation to warming in a Great Barrier Reef coral.
Matz, Mikhail V; Treml, Eric A; Aglyamova, Galina V; Bay, Line K
2018-04-01
Can genetic adaptation in reef-building corals keep pace with the current rate of sea surface warming? Here we combine population genomics, biophysical modeling, and evolutionary simulations to predict future adaptation of the common coral Acropora millepora on the Great Barrier Reef (GBR). Genomics-derived migration rates were high (0.1-1% of immigrants per generation across half the latitudinal range of the GBR) and closely matched the biophysical model of larval dispersal. Both genetic and biophysical models indicated the prevalence of southward migration along the GBR that would facilitate the spread of heat-tolerant alleles to higher latitudes as the climate warms. We developed an individual-based metapopulation model of polygenic adaptation and parameterized it with population sizes and migration rates derived from the genomic analysis. We find that high migration rates do not disrupt local thermal adaptation, and that the resulting standing genetic variation should be sufficient to fuel rapid region-wide adaptation of A. millepora populations to gradual warming over the next 20-50 coral generations (100-250 years). Further adaptation based on novel mutations might also be possible, but this depends on the currently unknown genetic parameters underlying coral thermal tolerance and the rate of warming realized. Despite this capacity for adaptation, our model predicts that coral populations would become increasingly sensitive to random thermal fluctuations such as ENSO cycles or heat waves, which corresponds well with the recent increase in frequency of catastrophic coral bleaching events.
Rate distortion optimal bit allocation methods for volumetric data using JPEG 2000.
Kosheleva, Olga M; Usevitch, Bryan E; Cabrera, Sergio D; Vidal, Edward
2006-08-01
Computer modeling programs that generate three-dimensional (3-D) data on fine grids are capable of generating very large amounts of information. These data sets, as well as 3-D sensor/measured data sets, are prime candidates for the application of data compression algorithms. A very flexible and powerful compression algorithm for imagery data is the newly released JPEG 2000 standard. JPEG 2000 also has the capability to compress volumetric data, as described in Part 2 of the standard, by treating the 3-D data as separate slices. As a decoder standard, JPEG 2000 does not describe any specific method to allocate bits among the separate slices. This paper proposes two new bit allocation algorithms for accomplishing this task. The first procedure is rate distortion optimal (for mean squared error), and is conceptually similar to postcompression rate distortion optimization used for coding codeblocks within JPEG 2000. The disadvantage of this approach is its high computational complexity. The second bit allocation algorithm, here called the mixed model (MM) approach, mathematically models each slice's rate distortion curve using two distinct regions to get more accurate modeling at low bit rates. These two bit allocation algorithms are applied to a 3-D Meteorological data set. Test results show that the MM approach gives distortion results that are nearly identical to the optimal approach, while significantly reducing computational complexity.
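The first allocation procedure follows the classical equal-slope (Lagrangian) rate-distortion condition. A sketch under the textbook Gaussian model D_i(R_i) = var_i * 2^(-2*R_i), which is simpler than the paper's mixed-model curves but shows the same allocation principle:

```python
import numpy as np

# Equal-slope (reverse water-filling) bit allocation across 3-D slices,
# assuming D_i(R_i) = var_i * 2^(-2 R_i). Not the paper's MM model.
def allocate(variances, R_total):
    v = np.asarray(variances, dtype=float)
    n = v.size
    order = np.argsort(v)[::-1]          # try high-variance slices first
    for k in range(n, 0, -1):
        act = order[:k]                  # candidate active set
        geo = np.exp(np.mean(np.log(v[act])))          # geometric mean
        R = R_total/k + 0.5*np.log2(v[act]/geo)        # equal-slope rates
        if (R >= 0).all():               # drop slices that would get R < 0
            out = np.zeros(n)
            out[act] = R
            return out
    raise ValueError("no feasible allocation")

rates = allocate([4.0, 1.0, 0.25], R_total=6.0)
print(np.round(rates, 2), "bits/sample, total =", rates.sum())
```

With variances 4, 1 and 0.25 and 6 bits total, the allocation comes out as 3, 2 and 1 bits: high-variance slices earn more rate, exactly the behaviour the optimal and MM allocators trade off against complexity.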
Sengupta, Biswa; Laughlin, Simon Barry; Niven, Jeremy Edward
2014-01-01
Information is encoded in neural circuits using both graded and action potentials, converting between them within single neurons and successive processing layers. This conversion is accompanied by information loss and a drop in energy efficiency. We investigate the biophysical causes of this loss of information and efficiency by comparing spiking neuron models, containing stochastic voltage-gated Na+ and K+ channels, with generator potential and graded potential models lacking voltage-gated Na+ channels. We identify three causes of information loss in the generator potential that are the by-product of action potential generation: (1) the voltage-gated Na+ channels necessary for action potential generation increase intrinsic noise and (2) introduce non-linearities, and (3) the finite duration of the action potential creates a ‘footprint’ in the generator potential that obscures incoming signals. These three processes reduce information rates by ∼50% in generator potentials, to ∼3 times that of spike trains. Both generator potentials and graded potentials consume almost an order of magnitude less energy per second than spike trains. Because of the lower information rates of generator potentials they are substantially less energy efficient than graded potentials. However, both are an order of magnitude more efficient than spike trains due to the higher energy costs and low information content of spikes, emphasizing that there is a two-fold cost of converting analogue to digital; information loss and cost inflation. PMID:24465197
Krams, Indrikis A; Niemelä, Petri T; Trakimas, Giedrius; Krams, Ronalds; Burghardt, Gordon M; Krama, Tatjana; Kuusik, Aare; Mänd, Marika; Rantala, Markus J; Mänd, Raivo; Kekäläinen, Jukka; Sirkka, Ilkka; Luoto, Severi; Kortet, Raine
2017-03-29
The causes and consequences of among-individual variation and covariation in behaviours are of substantial interest to behavioural ecology, but the proximate mechanisms underpinning this (co)variation are still unclear. Previous research suggests metabolic rate as a potential proximate mechanism to explain behavioural covariation. We measured the resting metabolic rate (RMR), boldness and exploration in western stutter-trilling crickets, Gryllus integer, selected differentially for short and fast development over two generations. After applying mixed-effects models to reveal the sign of the covariation, we applied structural equation models to an individual-level covariance matrix to examine whether the RMR generates covariation between the measured behaviours. All traits showed among-individual variation and covariation: RMR and boldness were positively correlated, RMR and exploration were negatively correlated, and boldness and exploration were negatively correlated. However, the RMR was not a causal factor generating covariation between boldness and exploration. Instead, the covariation between all three traits was explained by another, unmeasured mechanism. The selection lines differed from each other in all measured traits and significantly affected the covariance matrix structure between the traits, suggesting that there is a genetic component in the trait integration. Our results emphasize that interpretations made solely from the correlation matrix might be misleading. © 2017 The Author(s).
Gioannis, G De; Muntoni, A; Cappai, G; Milia, S
2009-03-01
Mechanical biological treatment (MBT) of residual municipal solid waste (RMSW) was investigated with respect to landfill gas generation. Mechanically treated RMSW was sampled at a full-scale plant and aerobically stabilized for 8 and 15 weeks. Anaerobic tests were performed on the aerobically treated waste (MBTW) in order to estimate the gas generation rate constants (k, y(-1)), the potential gas generation capacity (L0, Nl/kg) and the amount of gasifiable organic carbon. Experimental results show how MBT allowed for a reduction of the non-methanogenic phase and of the landfill gas generation potential by, respectively, 67% and 83% (8 weeks of treatment) and 82% and 91% (15 weeks of treatment), compared to the raw waste. The amount of gasified organic carbon after 8 weeks and 15 weeks of treatment was equal to 11.01 ± 1.25 kg C/t MBTW and 4.54 ± 0.87 kg C/t MBTW, respectively, that is 81% and 93% less than the amount gasified from the raw waste. The values of the gas generation rate constants obtained for MBTW anaerobic degradation (0.0347-0.0803 y(-1)) resemble those usually reported for the slowly and moderately degradable fractions of raw MSW. Simulations performed using a prediction model support the hypothesis that, due to the low production rate, gas production from MBTW landfills is well-suited to a passive management strategy.
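The reported rate constants plug into the usual first-order gas generation model, G(t) = L0 * k * exp(-k*t). A sketch comparing raw waste with MBTW, where the raw-waste values are assumed for comparison and the MBTW values are chosen within or near the reported ranges:

```python
import numpy as np

# First-order landfill gas generation, G(t) = L0 * k * exp(-k t), per kg waste.
k_raw, L0_raw = 0.20, 100.0   # assumed raw-waste values (for comparison only)
k_mbt, L0_mbt = 0.05, 15.0    # within/near the ranges reported for MBTW

t = np.linspace(0, 60, 241)   # years after landfilling
g_raw = L0_raw * k_raw * np.exp(-k_raw * t)   # Nl kg^-1 y^-1
g_mbt = L0_mbt * k_mbt * np.exp(-k_mbt * t)

print(f"peak rate, raw: {g_raw.max():.1f}  MBTW: {g_mbt.max():.2f} Nl/kg/y")
print(f"cumulative MBTW gas over 60 y: {np.trapz(g_mbt, t):.1f} Nl/kg")
```

The order-of-magnitude drop in both k and L0 is what makes the low, slowly decaying MBTW emission curve compatible with passive landfill gas management.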
Combinatorial Histone Acetylation Patterns Are Generated by Motif-Specific Reactions.
Blasi, Thomas; Feller, Christian; Feigelman, Justin; Hasenauer, Jan; Imhof, Axel; Theis, Fabian J; Becker, Peter B; Marr, Carsten
2016-01-27
Post-translational modifications (PTMs) are pivotal to cellular information processing, but how combinatorial PTM patterns ("motifs") are set remains elusive. We develop a computational framework, which we provide as open source code, to investigate the design principles generating the combinatorial acetylation patterns on histone H4 in Drosophila melanogaster. We find that models assuming purely unspecific or lysine site-specific acetylation rates were insufficient to explain the experimentally determined motif abundances. Rather, these abundances were best described by an ensemble of models with acetylation rates that were specific to motifs. The model ensemble converged upon four acetylation pathways; we validated three of these using independent data from a systematic enzyme depletion study. Our findings suggest that histone acetylation patterns originate through specific pathways involving motif-specific acetylation activity. Copyright © 2016 Elsevier Inc. All rights reserved.
Direct estimate of the spontaneous germ line mutation rate in African green monkeys.
Pfeifer, Susanne P
2017-12-01
Here, I provide the first direct estimate of the spontaneous mutation rate in an Old World monkey, using a seven-individual, three-generation pedigree of African green monkeys. Eight de novo mutations were identified within ∼1.5 Gbp of accessible genome, corresponding to an estimated point mutation rate of 0.94 × 10^-8 per site per generation, suggesting an effective population size of ∼12,000 for the species. This estimate represents a significant improvement in our knowledge of the population genetics of the African green monkey, one of the most important nonhuman primate models in biomedical research. Furthermore, by comparing mutation rates in Old World monkeys with the only other direct estimates in primates to date (humans and chimpanzees), it is possible to uniquely address how mutation rates have evolved over longer time scales. While the estimated spontaneous mutation rate for African green monkeys is slightly lower than the rate of 1.2 × 10^-8 per base pair per generation reported in chimpanzees, it is similar to the lower range of rates of 0.96 × 10^-8 to 1.28 × 10^-8 per base pair per generation recently estimated from whole-genome pedigrees in humans. This result suggests a long-term constraint on the mutation rate that is quite different from similar evidence pertaining to recombination rate evolution in primates. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.
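The two headline numbers follow from standard pedigree formulas: mu = m / (2*C*n) for the mutation rate and Ne = pi / (4*mu) for the effective population size. In the sketch below the callable-site and diversity inputs are hypothetical (the published estimate also involves corrections for callability and false negatives that the abstract does not give):

```python
# Pedigree-based mutation-rate sketch: mu = m / (2 * C * n), where m is the
# de novo mutation count, C the callable diploid sites per offspring, and n
# the number of offspring genomes surveyed.
m = 8              # de novo mutations (from the abstract)
C = 4.25e8         # callable sites per offspring (hypothetical, chosen so the
                   # toy result lands near the reported value)
n = 1              # offspring genomes surveyed (hypothetical)

mu = m / (2 * C * n)
print(f"mu ~ {mu:.2e} per site per generation")     # ~0.94e-8 with these inputs

# Effective population size from the neutral expectation theta = 4 * Ne * mu:
pi = 4.5e-4        # assumed nucleotide diversity; not stated in the abstract
print(f"Ne ~ {pi / (4 * mu):,.0f}")                 # ~12,000 with these inputs
```

The point of the sketch is the logic, not the inputs: once mu is pinned down directly, any diversity estimate converts immediately into an effective population size.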
Modelling of the hole-initiated impact ionization current in the framework of hydrodynamic equations
NASA Astrophysics Data System (ADS)
Lorenzini, Martino; Van Houdt, Jan
2002-02-01
Several research papers have shown the feasibility of the hydrodynamic transport model to investigate impact ionization in semiconductor devices by means of mean-energy-dependent generation rates. However, the analysis has been usually carried out for the case of the electron-initiated impact ionization process and less attention has been paid to the modelling of the generation rate due to impact ionization events initiated by holes. This paper therefore presents an original model for the hole-initiated impact ionization in silicon and validates it by comparing simulation results with substrate currents taken from p-channel transistors manufactured in a 0.35 μm CMOS technology having three different channel lengths. The experimental data are successfully reproduced over a wide range of applied voltages using only one fitting parameter. Since the impact ionization of holes triggers the mechanism responsible for the back-bias enhanced gate current in deep submicron nMOS devices, the model can be exploited in the development of non-volatile memories programmed by secondary electron injection.
Cyclic Voltammetry of Polysulfide (Thiokol) Prepolymers and Related Compounds
1983-12-01
low scan rates suggest that A and B are unstable and undergo chemical reactions on the cyclic voltammetry time scale. A more detailed examination is...A Utah Electronics model 0152 potentiostat was used together with a model 0151 sweep generator. The voltammograms were recorded on a Rikadenki model
You, Shutang; Hadley, Stanton W.; Shankar, Mallikarjun; ...
2016-01-12
This paper studies the generation and transmission expansion co-optimization problem with a high wind power penetration rate in the US Eastern Interconnection (EI) power grid. The generation and transmission expansion problem for the EI system is modeled as a mixed-integer programming (MIP) problem. The paper also analyzes a time series generation method to capture the variation and correlation of both load and wind power across regions. The obtained series can be easily introduced into the expansion planning problem and then solved through existing MIP solvers. Simulation results show that the proposed planning model and series generation method can improve the expansion result significantly by modeling more detailed information of wind and load variation among regions in the US EI system. Moreover, the improved expansion plan that combines generation and transmission will aid system planners and policy makers in maximizing social welfare in large-scale power grids.
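A toy instance of this MIP problem class, written with the PuLP modelling library, shows the co-optimization structure: binary build decisions for generation and a tie-line, plus a flow variable coupling two regions. All loads, capacities and costs are invented, and the sketch is vastly simpler than the EI model.

```python
from pulp import LpProblem, LpVariable, LpMinimize, LpBinary, value

# Two-region generation/transmission co-expansion toy (not the EI model).
prob = LpProblem("expansion", LpMinimize)

build_wind = LpVariable("build_wind", cat=LpBinary)   # 300 MW wind, region A
build_gas  = LpVariable("build_gas",  cat=LpBinary)   # 200 MW gas,  region B
build_line = LpVariable("build_line", cat=LpBinary)   # 150 MW tie A->B
flow = LpVariable("flow_AB", lowBound=0)              # MW shipped A->B

# Objective: annualized capital costs (arbitrary units).
prob += 90*build_wind + 70*build_gas + 40*build_line

# Peak balance; wind derated to 40% of nameplate to mimic its variability.
prob += 0.4*300*build_wind - flow >= 100     # region A peak load
prob += 200*build_gas + flow >= 220          # region B peak load
prob += flow <= 150*build_line               # flow only on a built line

prob.solve()
print({v.name: v.value() for v in prob.variables()},
      "cost:", value(prob.objective))
```

In this instance all three assets must be built (wind serves region A, and its 20 MW surplus crosses the new tie-line to close region B's gap), illustrating why co-optimizing the two expansion decisions can beat planning them separately.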
NASA Astrophysics Data System (ADS)
Salem, Reza; Jiang, Zack; Liu, Dongfeng; Pafchek, Robert; Foy, Paul; Saad, Mohammed; Jenkins, Doug; Cable, Alex; Fendel, Peter
2016-03-01
We report mid-infrared supercontinuum (SC) generation in a dispersion-engineered step-index indium fluoride fiber pumped by a femtosecond fiber laser near 2 μm. The SC spans 1.8 octaves from 1.25 μm to 4.6 μm with an average output power of 270 mW. The pump source is an all-fiber femtosecond laser that generates sub-100 fs pulses at a 50 MHz repetition rate with 570 mW average power. The indium fluoride fiber used for SC generation is designed to have a zero-dispersion wavelength close to 1.9 μm. Two fiber lengths of 30 cm and 55 cm were selected for the SC generation experiments based on the numerical modelling results. The measured spectra and the numerical modelling results are presented, showing good agreement for both lengths. The femtosecond pumping regime is a key requirement for generating a coherent SC. We show by modelling that the SC is coherent for a pump with the same pulse width and energy as our fiber laser and added quantum-limited noise. The results are promising for the realization of coherent and high-repetition-rate SC sources, two conditions that are critical for spectroscopy applications using FTIR spectrometers. Additionally, the entire SC system is built using optical fibers with similar core diameters, which enables integration into a compact platform.
NASA Astrophysics Data System (ADS)
Rodriguez Marco, Albert
Battery management systems (BMS) require computationally simple but highly accurate models of the battery cells they are monitoring and controlling. Historically, empirical equivalent-circuit models have been used, but increasingly researchers are focusing their attention on physics-based models due to their greater predictive capabilities. These models are of high intrinsic computational complexity and so must undergo some kind of order-reduction process to make their use by a BMS feasible: we favor methods based on a transfer-function approach to battery-cell dynamics. In prior works, transfer functions have been found from full-order PDE models via two simplifying assumptions: (1) a linearization assumption, which is a fundamental necessity in order to obtain transfer functions, and (2) an assumption made out of expedience that decouples the electrolyte-potential and electrolyte-concentration PDEs in order to render an approach to solve for the transfer functions from the PDEs. This dissertation improves the fidelity of physics-based models by eliminating the need for the second assumption and by linearizing nonlinear dynamics around different constant currents. Electrochemical transfer functions are infinite-order and cannot be expressed as a ratio of polynomials in the Laplace variable s. Thus, for practical use, these systems need to be approximated using reduced-order models that capture the most significant dynamics. This dissertation improves the generation of physics-based reduced-order models by introducing different realization algorithms, which produce a low-order model from the infinite-order electrochemical transfer functions. Physics-based reduced-order models are linear and describe cell dynamics if operated near the setpoint at which they have been generated. Hence, multiple physics-based reduced-order models need to be generated at different setpoints (i.e., state-of-charge, temperature and C-rate) in order to extend the cell operating range. This dissertation improves the implementation of physics-based reduced-order models by introducing different blending approaches that combine the pre-computed models generated (offline) at different setpoints in order to produce good electrochemical estimates (online) across the cell's state-of-charge, temperature and C-rate range.
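The order-reduction step can be illustrated with a simple linearized rational fit (Levy's method) to samples of an irrational, infinite-order transfer function; this is only a stand-in for the dissertation's realization algorithms, and the diffusion-like target below is an assumption, not a battery model.

```python
import numpy as np

# Low-order rational fit to frequency-response samples of an "infinite-order"
# transfer function. Target exp(-sqrt(s)) is a classic diffusion-like response,
# chosen for illustration only.
w = np.logspace(-3, 1, 200)          # rad/s
s = 1j * w
H = np.exp(-np.sqrt(s))              # irrational target (illustrative)

# Fit H(s) ~ (b0 + b1 s) / (1 + a1 s + a2 s^2). Multiplying through by the
# denominator makes the problem linear in (b0, b1, a1, a2):
#   b0 + b1 s - H (a1 s + a2 s^2) = H
A = np.column_stack([np.ones_like(s), s, -H*s, -H*s**2])
M = np.vstack([A.real, A.imag])                   # stack real/imag parts
rhs = np.concatenate([H.real, H.imag])
b0, b1, a1, a2 = np.linalg.lstsq(M, rhs, rcond=None)[0]

H_fit = (b0 + b1*s) / (1 + a1*s + a2*s**2)
print("max |H_fit - H| over the band:", np.abs(H_fit - H).max())
```

Whatever realization algorithm is used, the output is the same kind of object: a low-order rational (hence state-space-realizable) surrogate valid near one linearization setpoint, which is why the blending of several setpoint models follows.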
Modeling free energy availability from Hadean hydrothermal systems to the first metabolism.
Simoncini, E; Russell, M J; Kleidon, A
2011-12-01
Off-axis Hydrothermal Systems (HSs) are seen as a possible setting for the emergence of life. As the availability of free energy is a general requirement to drive any form of metabolism, we ask here under which conditions free energy generation by geologic processes is greatest, and relate these conditions to those found at off-axis HSs. To do so, we present a conceptual model in which we explicitly capture the energetics of fluid motion and its interaction with exothermic reactions to maintain a state of chemical disequilibrium. Central to the interaction is the temperature at which the exothermic reactions take place. This temperature not only sets the equilibrium constant of the chemical reactions and thereby the distance of the actual state from chemical equilibrium; these reactions also shape the temperature gradient that drives convection and thereby the advection of reactants to the reaction sites and the removal of the products, both of which relate to geochemical free energy generation. This conceptual model shows that the positive feedback between convection and chemical kinetics found at HSs favors a greater rate of free energy generation than in the absence of convection. Because of the lower temperatures, and because the temperature of the reactions is determined more strongly by these dynamics than by an external heat flux, the conditions found at off-axis HSs should result in the greatest rates of geochemical free energy generation. Hence, we hypothesize from these thermodynamic considerations that off-axis HSs seem most conducive to the emergence of protometabolic pathways, as these provide the greatest abiotic generation rates of chemical free energy.
A population model for a long-lived, resprouting chaparral shrub: Adenostoma fasciculatum
Stohlgren, Thomas J.; Rundel, Philip W.
1986-01-01
Extensive stands of Adenostoma fasciculatum H.&A. (chamise) in the chaparral of California are periodically rejuvenated by fire. A population model based on size-specific demographic characteristics (thinning and fire-caused mortality) was developed to generate probable age distributions within size classes and survivorship curves for typical stands. The model was modified to assess the long term effects of different mortality rates on age distributions. Under observed mean mortality rates (28.7%), model output suggests some shrubs can survive more than 23 fires. A 10% increase in mortality rate by size class slightly shortened the survivorship curve, while a 10% decrease in mortality rate by size class greatly elongated the curve. This approach may be applicable to other long-lived plant species with complex life histories.
The wandering self: Tracking distracting self-generated thought in a cognitively demanding context.
Huijser, Stefan; van Vugt, Marieke K; Taatgen, Niels A
2018-02-01
We investigated how self-referential processing (SRP) affected self-generated thought in a complex working memory (CWM) task, to test the predictions of a computational cognitive model. This model described self-generated thought as resulting from competition between task processes and distracting processes, and predicted that self-generated thought interferes with rehearsal, reducing memory performance. SRP was hypothesized to influence this goal competition process by encouraging distracting self-generated thinking. We used a spatial CWM task to examine whether SRP instigated such thoughts, and employed eye-tracking to examine rehearsal interference in eye movements and self-generated thinking in pupil size. The results showed that SRP was associated with lower performance and higher rates of self-generated thought. Self-generated thought was associated with less rehearsal, and we observed a smaller pupil size during mind wandering. We conclude that SRP can instigate self-generated thought and that goal competition provides a likely explanation for how self-generated thought arises in a demanding task. Copyright © 2017 Elsevier Inc. All rights reserved.
Chatterjee, Abhijit; Bhattacharya, Swati
2015-09-21
Several studies in the past have generated Markov State Models (MSMs), i.e., kinetic models, of biomolecular systems by post-analyzing long standard molecular dynamics (MD) calculations at the temperature of interest and focusing on the maximally ergodic subset of states. Questions related to the goodness of these models, namely, the importance of the missing states and kinetic pathways, and the time for which the kinetic model is valid, are generally left unanswered. We show that similar questions arise when we generate a room-temperature MSM (denoted MSM-A) for solvated alanine dipeptide using state-constrained MD calculations at higher temperatures and the Arrhenius relation; the main advantage of such a procedure is a speed-up of several thousand times over standard MD-based MSM building procedures. Bounds for rate constants calculated using probability theory from state-constrained MD at room temperature help validate MSM-A. However, bounds for pathways possibly missing in MSM-A show that alternate kinetic models exist that produce the same dynamical behaviour as MSM-A at short time scales but diverge later. Even in the worst-case scenario, MSM-A is found to be valid for longer than the time required to generate it. The concepts introduced here can be straightforwardly extended to other MSM building techniques.
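The temperature-accelerated construction relies on the Arrhenius relation, k(T) = A * exp(-Ea/(R*T)). A sketch of extrapolating a room-temperature rate constant from estimates at two elevated temperatures, with invented numbers:

```python
import numpy as np

# Arrhenius extrapolation: fit ln k = ln A - Ea/(R T) through two
# high-temperature rate estimates, then evaluate at room temperature.
# All rate values are hypothetical.
R = 1.987e-3                       # kcal mol^-1 K^-1
T1, T2, T_room = 400.0, 450.0, 300.0
k1, k2 = 2.0e7, 1.1e8              # rates observed at T1, T2 (1/s, invented)

Ea = R * np.log(k2/k1) / (1/T1 - 1/T2)        # activation energy
lnA = np.log(k1) + Ea/(R*T1)                  # pre-exponential factor
k_room = np.exp(lnA - Ea/(R*T_room))
print(f"Ea = {Ea:.1f} kcal/mol, extrapolated k(300 K) = {k_room:.2e} 1/s")
```

Because high-temperature transitions happen orders of magnitude faster, each matrix element of the MSM can be estimated from much shorter constrained runs, which is the source of the quoted several-thousand-fold speed-up.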
The effect of gender and age structure on municipal waste generation in Poland
DOE Office of Scientific and Technical Information (OSTI.GOV)
Talalaj, Izabela Anna, E-mail: izabela.tj@gmail.com; Walery, Maria, E-mail: m.walery@pb.edu.pl
Highlights: • An effect of gender and age structure on municipal waste generation was presented. • The waste accumulation index is influenced by the number of unemployed women. • A greater share of women in society contributes to greater waste production. • A model describing the analyzed dependences was determined. - Abstract: In this study the effect of gender and age structure on municipal waste generation was investigated. Data from a 10-year period, from 2001 to 2010, were taken into consideration. The following parameters of gender and age structure were analyzed: numbers of men and women, female to male ratio, number of working, pre-working and post-working age men/women, and number of unemployed men/women. The results showed a strong correlation of the annual per capita waste generation rate with the number of unemployed women (r = 0.70) and the female to male ratio (r = 0.81). This indicates that the waste generation rate depends more on the ratio of men to women than on the quantitative size of each group. Using regression analysis, a model describing the dependence between the female to male ratio, the number of unemployed women and waste quantity was determined. The model explains 70% of the variation in waste quantity. The obtained results can be used both to improve waste management and to reach a fuller understanding of gender behavior.
International Geomagnetic Reference Field: the 12th generation
NASA Astrophysics Data System (ADS)
Thébault, Erwan; Finlay, Christopher C.; Beggan, Ciarán D.; Alken, Patrick; Aubert, Julien; Barrois, Olivier; Bertrand, Francois; Bondar, Tatiana; Boness, Axel; Brocco, Laura; Canet, Elisabeth; Chambodut, Aude; Chulliat, Arnaud; Coïsson, Pierdavide; Civet, François; Du, Aimin; Fournier, Alexandre; Fratter, Isabelle; Gillet, Nicolas; Hamilton, Brian; Hamoudi, Mohamed; Hulot, Gauthier; Jager, Thomas; Korte, Monika; Kuang, Weijia; Lalanne, Xavier; Langlais, Benoit; Léger, Jean-Michel; Lesur, Vincent; Lowes, Frank J.; Macmillan, Susan; Mandea, Mioara; Manoj, Chandrasekharan; Maus, Stefan; Olsen, Nils; Petrov, Valeriy; Ridley, Victoria; Rother, Martin; Sabaka, Terence J.; Saturnino, Diana; Schachtschneider, Reyko; Sirol, Olivier; Tangborn, Andrew; Thomson, Alan; Tøffner-Clausen, Lars; Vigneron, Pierre; Wardinski, Ingo; Zvereva, Tatiana
2015-05-01
The 12th generation of the International Geomagnetic Reference Field (IGRF) was adopted in December 2014 by the Working Group V-MOD appointed by the International Association of Geomagnetism and Aeronomy (IAGA). It updates the previous IGRF generation with a definitive main field model for epoch 2010.0, a main field model for epoch 2015.0, and a linear annual predictive secular variation model for 2015.0-2020.0. Here, we present the equations defining the IGRF model, provide the spherical harmonic coefficients, and provide maps of the magnetic declination, inclination, and total intensity for epoch 2015.0 and their predicted rates of change for 2015.0-2020.0. We also update the magnetic pole positions and discuss briefly the latest changes and possible future trends of the Earth's magnetic field.
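Between 2015.0 and 2020.0, any main-field coefficient is advanced linearly using its predictive secular-variation rate. A sketch for the leading dipole coefficient, using values that are, to my knowledge, those published in IGRF-12 (verify against the official coefficient table before use):

```python
# Linear secular-variation extrapolation of a single Gauss coefficient,
# as prescribed by the IGRF between model epochs.
g10_2015 = -29442.0    # nT, g(1,0) main-field coefficient at epoch 2015.0
sv_g10 = 10.3          # nT/yr, predictive secular variation for 2015.0-2020.0

def g10(year):
    """Valid for 2015.0 <= year <= 2020.0 under IGRF-12's linear SV model."""
    return g10_2015 + sv_g10 * (year - 2015.0)

print(f"g(1,0) at epoch 2017.5: {g10(2017.5):.1f} nT")
```

The full field evaluation sums such coefficients over a spherical harmonic expansion; the linear-in-time rule above is what makes the model usable between five-yearly revisions.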
NASA Technical Reports Server (NTRS)
Campbell, J. W.
1973-01-01
A stochastic model of the atmosphere between 30 and 90 km was developed for use in Monte Carlo space shuttle entry studies. The model is actually a family of models, one for each latitude-season category as defined in the 1966 U.S. Standard Atmosphere Supplements. Each latitude-season model generates a pseudo-random temperature profile whose mean is the appropriate temperature profile from the Standard Atmosphere Supplements. The standard deviation of temperature at each altitude for a given latitude-season model was estimated from sounding-rocket data. Departures from the mean temperature at each altitude were produced by assuming a linear regression of temperature on the solar heating rate of ozone. A profile of random ozone concentrations was first generated using an auxiliary stochastic ozone model, also developed as part of this study, and then solar heating rates were computed for the random ozone concentrations.
Al-Khatib, Issam A; Abu Fkhidah, Ismail; Khatib, Jumana I; Kontogianni, Stamatia
2016-03-01
Forecasting of hospital solid waste generation is a critical challenge for future planning. The proposed methodology of the present article was applied to the composition and generation rate of solid waste in hospital units in order to validate the results and secure the outcomes of the management plan in national hospitals. A set of three multiple-variable regression models has been derived for estimating the daily total hospital waste, general hospital waste, and total hazardous waste as a function of the number of inpatients, number of total patients, and number of beds. The application of several key indicators and validation procedures indicates the high significance and reliability of the developed models in predicting the hospital solid waste of any hospital. Methodology data were drawn from the existing scientific literature, and useful raw data were retrieved from international organisations and the investigated hospitals' personnel. The generation outcomes are compared with those of other local hospitals and with hospitals in other countries. The developed model results are presented and analysed thoroughly. The goal is for this model to act as leverage in the discussions among governmental authorities on the implementation of a national plan for safe hospital waste management in Palestine. © The Author(s) 2016.
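The model form, a multiple-variable linear regression of daily waste on hospital activity indicators, can be sketched with synthetic data and ordinary least squares; the coefficients below are invented, not the paper's.

```python
import numpy as np

# Multiple linear regression of daily waste (kg/day) on inpatients, total
# patients and beds. Data are synthetic stand-ins for hospital records.
rng = np.random.default_rng(0)
n = 40
inpatients = rng.integers(50, 400, n).astype(float)
total_pat  = inpatients + rng.integers(100, 900, n)
beds       = rng.integers(80, 500, n).astype(float)
waste = 1.2*inpatients + 0.15*total_pat + 0.4*beds + rng.normal(0, 25, n)

X = np.column_stack([np.ones(n), inpatients, total_pat, beds])
beta, *_ = np.linalg.lstsq(X, waste, rcond=None)
pred = X @ beta
r2 = 1 - ((waste - pred)**2).sum() / ((waste - waste.mean())**2).sum()
print("coefficients (intercept, inpatients, total, beds):", np.round(beta, 2))
print(f"R^2 = {r2:.3f}")
```

The paper's validation step corresponds to checking such an R^2 (and related indicators) on data withheld from the fit rather than on the fitting sample itself.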
Unsteady Crystal Growth Due to Step-Bunch Cascading
NASA Technical Reports Server (NTRS)
Vekilov, Peter G.; Lin, Hong; Rosenberger, Franz
1997-01-01
Based on our experimental findings of growth rate fluctuations during the crystallization of the protein lysozyme, we have developed a numerical model that combines diffusion in the bulk of a solution with diffusive transport to microscopic growth steps that propagate on a finite crystal facet. Nonlinearities in layer growth kinetics arising from step interaction by bulk and surface diffusion, and from step generation by surface nucleation, are taken into account. On evaluation of the model with properties characteristic of solute transport, and of the generation and propagation of steps in the lysozyme system, growth rate fluctuations of the same magnitude and characteristic time as in the experiments are obtained. The fluctuation time scale is large compared to that of step generation. Variations of the governing parameters of the model reveal that both the nonlinearity in step kinetics and mixed transport-kinetics control of the crystallization process are necessary conditions for the fluctuations. On a microscopic scale, the fluctuations are associated with a morphological instability of the vicinal face, in which a step bunch triggers a cascade of new step bunches through the microscopic interfacial supersaturation distribution.
Irreversible entropy model for damage diagnosis in resistors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cuadras, Angel, E-mail: angel.cuadras@upc.edu; Crisóstomo, Javier; Ovejas, Victoria J.
2015-10-28
We propose a method to characterize electrical resistor damage based on entropy measurements. Irreversible entropy and the rate at which it is generated are more convenient parameters than resistance for describing damage because they are essentially positive by virtue of the second law of thermodynamics, whereas resistance may increase or decrease depending on the degradation mechanism. Commercial resistors were tested in order to characterize the damage induced by power surges. Resistors were biased with constant and pulsed voltage signals, leading to power dissipation in the range of 4-8 W, which is well above the 0.25 W nominal power, to initiate failure. Entropy was inferred from the added power and temperature evolution. A model is proposed to understand the relationship among resistance, entropy, and damage. The power surge dissipates into heat (Joule effect) and damages the resistor. The results show a correlation between entropy generation rate and resistor failure. We conclude that damage can be conveniently assessed from irreversible entropy generation. Our results for resistors can be easily extrapolated to other systems or machines that can be modeled based on their resistance.
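The entropy bookkeeping is simple to reproduce: the dissipated power P = V^2/R divided by the device temperature gives the entropy generation rate, which is then integrated over the test. A sketch with invented resistance-drift and heating curves:

```python
import numpy as np

# Irreversible entropy generation for a biased resistor: S_dot = P / T.
# Resistance drift and heating curve below are toy assumptions.
t = np.linspace(0, 600, 3001)            # s
V = 14.0                                 # constant bias, V
R = 49.0 + 0.01*t                        # resistance drifting with damage, ohm
T = 300.0 + 60.0*(1 - np.exp(-t/120))    # device temperature, K

P = V**2 / R                             # dissipated power, W (~4 W here)
S_dot = P / T                            # entropy generation rate, W/K
S = np.trapz(S_dot, t)                   # accumulated irreversible entropy, J/K
print(f"final S_dot = {S_dot[-1]*1e3:.2f} mW/K, total S = {S:.1f} J/K")
```

Because S_dot is non-negative regardless of whether R drifts up or down, the accumulated S grows monotonically, which is exactly what makes it a cleaner damage coordinate than resistance itself.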
NASA Astrophysics Data System (ADS)
Zhang, Xia; Niu, Guo-Yue; Elshall, Ahmed S.; Ye, Ming; Barron-Gafford, Greg A.; Pavao-Zuckerman, Mitch
2014-09-01
Soil microbial respiration pulses in response to episodic rainfall pulses (the "Birch effect") are poorly understood. We developed and assessed five evolving microbial enzyme models against field measurements from a semiarid savannah characterized by pulsed precipitation to understand the mechanisms to generate the Birch pulses. The five models evolve from an existing four-carbon (C) pool model to models with additional C pools and explicit representations of soil moisture controls on C degradation and microbial uptake rates. Assessing the models using techniques of model selection and model averaging suggests that models with additional C pools for accumulation of degraded C in the dry zone of the soil pore space result in a higher probability of reproducing the observed Birch pulses. Degraded C accumulated in dry soil pores during dry periods becomes immediately accessible to microbes in response to rainstorms, providing a major mechanism to generate respiration pulses. Explicitly representing the transition of degraded C and enzymes between dry and wet soil pores in response to soil moisture changes and soil moisture controls on C degradation and microbial uptake rates improve the models' efficiency and robustness in simulating the Birch effect. Assuming that enzymes in the dry soil pores facilitate degradation of complex C during dry periods (though at a lower rate) results in a greater accumulation of degraded C and thus further improves the models' performance. However, the actual mechanism inducing the greater accumulation of labile C needs further experimental studies.
Determination of LEDs degradation with entropy generation rate
NASA Astrophysics Data System (ADS)
Cuadras, Angel; Yao, Jiaqiang; Quilez, Marcos
2017-10-01
We propose a method to assess the degradation and aging of light emitting diodes (LEDs) based on the irreversible entropy generation rate. We degraded several LEDs and monitored their entropy generation rate Ṡ in accelerated tests. We compared the thermoelectrical results with the evolution of optical light emission during degradation. We find a good relationship between aging and Ṡ(t), because Ṡ is related both to device parameters and to optical performance. We propose a threshold of Ṡ(t) as a reliable damage indicator of LED end-of-life that can avoid the need to perform optical measurements to assess optical aging. The method goes beyond the typical statistical laws for lifetime prediction provided by manufacturers. We tested different LED colors and electrical stresses to validate the electrical LED model and we analyzed the degradation mechanisms of the devices.
The Modellers' Halting Foray into Ecological Theory: Or, What is This Thing Called 'Growth Rate'?
Deveau, Michael; Karsten, Richard; Teismann, Holger
2015-06-01
This discussion paper describes the attempt of an imagined group of non-ecologists ("Modellers") to determine the population growth rate from field data. The Modellers wrestle with the multiple definitions of the growth rate available in the literature and the fact that, in their modelling, it appears to be drastically model-dependent, which seems to throw into question the very concept itself. Specifically, they observe that six representative models used to capture the data produce growth-rate values, which differ significantly. Almost ready to concede that the problem they set for themselves is ill-posed, they arrive at an alternative point of view that not only preserves the identity of the concept of the growth rate, but also helps discriminate between competing models for capturing the data. This is accomplished by assessing how robustly a given model is able to generate growth-rate values from randomized time-series data. This leads to the proposal of an iterative approach to ecological modelling in which the definition of theoretical concepts (such as the growth rate) and model selection complement each other. The paper is based on high-quality field data of mites on apple trees and may be called a "data-driven opinion piece".
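One concrete operationalisation of "growth rate", the slope of log abundance under an exponential model, can be sketched as follows; the data are synthetic, and the paper's point is precisely that other model choices embed different definitions of r:

```python
import numpy as np

# Estimate r from N(t) = N0 * exp(r t) by least squares on the log scale.
# Synthetic census data with multiplicative noise.
rng = np.random.default_rng(3)
t = np.arange(0, 10)                              # census times (weeks)
N = 20 * np.exp(0.35 * t) * rng.lognormal(0, 0.15, t.size)

r, logN0 = np.polyfit(t, np.log(N), 1)            # slope = growth rate
print(f"estimated r = {r:.3f} per week (true value 0.35)")

# A logistic, Ricker or stage-structured model fit to the same data would
# each define and estimate "r" differently, which is the paper's concern;
# its proposal is to prefer models whose r is robust to randomized data.
```

The robustness criterion in the paper can be mimicked by refitting r on bootstrap-resampled or shuffled versions of the series and comparing the spread across candidate models.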
The effect of gender and age structure on municipal waste generation in Poland.
Talalaj, Izabela Anna; Walery, Maria
2015-06-01
In this study the effect of gender and age structure on municipal waste generation was investigated. Data from a 10-year period, from 2001 to 2010, were taken into consideration. The following parameters of gender and age structure were analyzed: numbers of men and women, female to male ratio, number of working, pre-working and post-working age men/women, and number of unemployed men/women. The results showed a strong correlation of the annual per capita waste generation rate with the number of unemployed women (r=0.70) and the female to male ratio (r=0.81). This indicates that the waste generation rate depends more on the ratio of men to women than on the quantitative size of each group. Using regression analysis, a model describing the dependence between the female to male ratio, the number of unemployed women and waste quantity was determined. The model explains 70% of the variation in waste quantity. The obtained results can be used both to improve waste management and to reach a fuller understanding of gender behavior. Copyright © 2015 Elsevier Ltd. All rights reserved.
Li, Qiuying; Pham, Hoang
2017-01-01
In this paper, we propose a software reliability model that considers not only error generation but also fault removal efficiency combined with testing coverage information, based on a nonhomogeneous Poisson process (NHPP). During the past four decades, many software reliability growth models (SRGMs) based on NHPP have been proposed to estimate software reliability measures, most of which share the following assumptions: 1) it is a common phenomenon that the fault detection rate changes throughout the testing phase; 2) as a result of imperfect debugging, fault removal is associated with a fault re-introduction rate. However, few SRGMs in the literature differentiate between fault detection and fault removal, i.e. they seldom consider imperfect fault removal efficiency. In practical software development, fault removal efficiency cannot always be perfect: detected failures might not be removed completely, the original faults might still exist, and new faults might be introduced meanwhile, which is referred to as the imperfect debugging phenomenon. In this study, a model incorporating the fault introduction rate, fault removal efficiency and testing coverage into software reliability evaluation is developed, using testing coverage to express the fault detection rate and using fault removal efficiency to account for fault repair. We compare the performance of the proposed model with several existing NHPP SRGMs using three sets of real failure data based on five criteria. The results show that the model gives better fitting and predictive performance.
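For orientation, the common NHPP backbone these SRGMs share can be illustrated by fitting the Goel-Okumoto mean value function m(t) = a(1 - exp(-bt)) to cumulative failure counts; the proposed model adds testing coverage, fault introduction and imperfect removal on top of this. The failure counts below are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

# Baseline NHPP illustration: Goel-Okumoto mean value function.
def m(t, a, b):
    # a = expected total number of faults, b = per-fault detection rate
    return a * (1.0 - np.exp(-b * t))

t = np.arange(1, 21, dtype=float)                  # test weeks
faults = np.array([ 5,  9, 14, 17, 21, 24, 26, 29, 30, 32,
                   34, 35, 36, 38, 38, 39, 40, 40, 41, 41], dtype=float)

(a, b), _ = curve_fit(m, t, faults, p0=(50.0, 0.1))
print(f"expected total faults a = {a:.1f}, detection rate b = {b:.3f}/week")
```

Richer models of the kind proposed here replace the constant b with a coverage-driven detection rate and let the fault total a grow with the introduction rate, rather than keeping both fixed.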
NASA Astrophysics Data System (ADS)
Marsudi, Hidayat, Noor; Wibowo, Ratno Bagus Edy
2017-12-01
In this article, we present a deterministic model for the transmission dynamics of HIV/AIDS in which condom campaigns and antiretroviral (ARV) therapy are both important for disease management. We calculate the effective reproduction number using the next generation matrix method and investigate the local and global stability of the disease-free equilibrium of the model. Sensitivity analysis of the effective reproduction number with respect to the model parameters was carried out. Our results show that the efficacy rate of the condom campaign, the transmission rate for contact with the asymptomatic infective, the progression rate from the asymptomatic infective to the pre-AIDS infective, the transmission rate for contact with the pre-AIDS infective, the ARV therapy rate, the proportion of susceptibles receiving the condom campaign and the proportion of pre-AIDS infectives receiving ARV therapy are highly sensitive parameters that affect the transmission dynamics of HIV/AIDS infection.
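The next generation matrix computation itself is compact: with F collecting the new-infection terms and V the transitions out of the infected compartments, the reproduction number is the spectral radius of F V^-1. A sketch for a two-stage infective structure with hypothetical parameters (not the paper's full model):

```python
import numpy as np

# Next-generation-matrix sketch for a two-stage infection (asymptomatic ->
# pre-AIDS), with invented parameters.
beta1, beta2 = 0.30, 0.45   # transmission rates from the two infective stages
sigma = 0.10                # progression: asymptomatic -> pre-AIDS
alpha = 0.20                # ARV therapy (removal) rate from pre-AIDS
mu = 0.02                   # background exit rate

# F holds new-infection terms; V holds transitions out of infected stages.
F = np.array([[beta1, beta2],
              [0.0,   0.0  ]])
V = np.array([[sigma + mu, 0.0       ],
              [-sigma,     alpha + mu]])

R_eff = max(abs(np.linalg.eigvals(F @ np.linalg.inv(V))))
print(f"effective reproduction number R = {R_eff:.2f}")
```

Sensitivity indices of the kind reported in the abstract are then just normalized partial derivatives of R_eff with respect to each parameter, computable analytically or by finite differences on this function.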
Hook, T.O.; Rutherford, E.S.; Brines, Shannon J.; Geddes, C.A.; Mason, D.M.; Schwab, D.J.; Fleischer, G.W.
2004-01-01
The relative quality of a habitat can influence fish consumption, growth, mortality, and production. In order to quantify habitat quality, several authors have combined bioenergetic and foraging models to generate spatially explicit estimates of fish growth rate potential (GRP). However, the capacity of GRP to reflect the spatial distributions of fishes over large areas has not been fully evaluated. We generated landscape scale estimates of steelhead (Oncorhynchus mykiss) GRP throughout Lake Michigan for 1994-1996, and used these estimates to test the hypotheses that GRP is a good predictor of spatial patterns of steelhead catch rates. We used surface temperatures (measured with AVHRR satellite imagery) and acoustically measured steelhead prey densities (alewife, Alosa pseudoharengus) as inputs for the GRP model. Our analyses demonstrate that potential steelhead growth rates in Lake Michigan are highly variable in both space and time. Steelhead GRP tended to increase with latitude, and mean GRP was much higher during September 1995, compared to 1994 and 1996. In addition, our study suggests that landscape scale measures of GRP are not good predictors of steelhead catch rates throughout Lake Michigan, but may provide an index of interannual variation in system-wide habitat quality.
Control of serpentinisation rate by reaction-induced cracking
NASA Astrophysics Data System (ADS)
Malvoisin, Benjamin; Brantut, Nicolas; Kaczmarek, Mary-Alix
2017-10-01
Serpentinisation of mantle rocks requires the generation and maintenance of transport pathways for water. The solid volume increase during serpentinisation can lead to stress build-up and trigger cracking, which eases fluid penetration into the rock. The quantitative effect of this reaction-induced cracking mechanism on reactive surface generation is poorly constrained, thus hampering our ability to predict serpentinisation rates in geological environments. Here we use a combined approach of numerical modelling and observations of natural samples to provide estimates of the serpentinisation rate at mid-ocean ridges. We develop a micromechanical model to quantify the propagation of serpentinisation-induced cracks in olivine. The maximum crystallisation pressure deduced from thermodynamic calculations reaches several hundred megapascals but does not necessarily lead to crack propagation if the olivine grain is subjected to high compressive stresses. The micromechanical model is then coupled to a simple geometrical model to predict reactive surface area formation during grain splitting, and thus the bulk reaction rate. Our model quantitatively reproduces experimental kinetic data and the typical mesh texture formed during serpentinisation. We also compare the model results with olivine grain size distribution data obtained on natural serpentinised peridotites from the Marum ophiolite and the Papuan ultramafic belt (Papua New Guinea). The natural serpentinised peridotites show an increase in the number of olivine grains, and a decrease of the mean grain size by one order of magnitude, as the reaction progresses from 5 to 40%. These results are in agreement with our model predictions, suggesting that reaction-induced cracking controls the serpentinisation rate. We use our model to estimate that, at mid-ocean ridges, serpentinisation occurs down to 12 km depth and that reaction-induced cracking reduces the characteristic time of serpentinisation by one order of magnitude, down to values between 10 and 1000 yr. The increase of effective pressure with depth also prevents cracking, which positions the peak in serpentinisation rate at shallower depths, 4 km above previous predictions.
Estimating maquiladora hazardous waste generation on the U.S./Mexico border
NASA Astrophysics Data System (ADS)
Bowen, Mace M.; Kontuly, Thomas; Hepner, George F.
1995-03-01
Maquiladoras, manufacturing plants that primarily assemble foreign components for reexport, are located in concentrations along the northern frontier of the US/Mexico border. These plants process a wide variety of materials using modern industrial technologies within the context of developing world institutions and infrastructure. Hazardous waste generation by maquiladoras represents a critical environmental management issue because of the spatial concentration of these plants in border municipalities where the infrastructure for waste management is nonexistent or poor. These border municipalities contain rapidly increasing populations, which further stress their waste handling infrastructure capacities while exposing their populations to greater contaminant risks. Limited empirical knowledge exists concerning hazardous waste types and generation rates from maquiladoras. At this time there is no standard reporting method for waste generation, nor a methodology for estimating generation rates. This paper presents a method that can be used for the rapid assessment of hazardous waste generation. A first approximation of hazardous waste generation is produced for maquiladoras in the three municipalities of Nogales, Sonora; Mexicali, Baja California; and Cd. Juarez, Chihuahua, using the INVENT model developed by the World Bank. In addition, our intent is to evaluate the potential of the INVENT model for adaptation to the US/Mexico border industrial situation. The press of border industrial development, especially with the recent adoption of NAFTA, makes such assessments necessary as a basis for the environmental policy formulation and management needed in the immediate future.
NASA Astrophysics Data System (ADS)
Mishev, A. L.; Velinov, P. I. Y.
2014-12-01
In the last few years, essential progress has been made in the development of physical models for cosmic ray induced ionization in the atmosphere. The majority of these models are full target, i.e. based on Monte Carlo simulation of an electromagnetic-muon-nucleon cascade in the atmosphere. Typically, the contribution of protons is highlighted, while the contribution of primary cosmic ray α-particles and heavy nuclei to the atmospheric ionization is neglected or scaled to protons. The development of a cosmic ray induced atmospheric cascade is sensitive to the energy and mass of the primary cosmic ray particle. The largest uncertainties in Monte Carlo simulations of a cascade in the Earth's atmosphere are due to the assumed hadron interaction models, the so-called hadron generators. In the work presented here we compare the ionization yield functions Y for primary cosmic ray nuclei, such as α-particles, oxygen and iron nuclei, assuming different hadron interaction models. The computations are carried out with the CORSIKA 6.9 code using the GHEISHA 2002, FLUKA 2011 and UrQMD hadron generators for energies below 80 GeV/nucleon and QGSJET II for energies above 80 GeV/nucleon. The observed differences between the hadron generators are discussed in detail. The influence of different atmospheric parametrizations, namely the US standard atmosphere and its winter and summer profiles, on ion production rate is studied. Assuming a realistic primary cosmic ray mass composition, the ion production rate is obtained at several rigidity cut-offs, from 1 GV (high latitudes) to 15 GV (equatorial latitudes), using the various hadron generators. The computations are compared with experimental data, and a conclusion concerning the consistency of the hadron generators is drawn.
Claudino, Mauro; Zhang, Xinpeng; Alim, Marvin D; Podgórski, Maciej; Bowman, Christopher N
2016-11-08
A kinetic mechanism and the accompanying mathematical framework are presented for base-mediated thiol-Michael photopolymerization kinetics involving a photobase generator. Here, model kinetic predictions demonstrate excellent agreement with a representative experimental system composed of 2-(2-nitrophenyl)propyloxycarbonyl-1,1,3,3-tetramethylguanidine (NPPOC-TMG) as a photobase generator that is used to initiate thiol-vinyl sulfone Michael addition reactions and polymerizations. Modeling equations derived from a basic mechanistic scheme indicate overall polymerization rates that follow a pseudo-first-order kinetic process in the base and coreactant concentrations, controlled by the ratio of the propagation to chain-transfer kinetic parameters (k_p/k_CT), which is dictated by the rate-limiting step and controls the time necessary to reach gelation. Gelation occurs earlier as the k_p/k_CT ratio approaches a critical value, beyond which gel times become nearly independent of k_p/k_CT. The theoretical approach allowed us to determine the effect of induction time on the reaction kinetics due to initial acid-base neutralization of the photogenerated base caused by the presence of protic contaminants. Such inhibition kinetics may be challenging for reaction systems that require high curing rates, but are relevant for chemical systems that need to remain kinetically dormant until activated, although at the ultimate cost of lower polymerization rates. The pure step-growth character of this living polymerization and the exhibited kinetics provide unique potential for extended dark-cure reactions and uniform material properties. The general kinetic model is applicable to photobase initiators where photolysis follows a unimolecular cleavage process releasing a strong base catalyst without cogeneration of intermediate radical species.
Rating Movies and Rating the Raters Who Rate Them
Zhou, Hua; Lange, Kenneth
2010-01-01
The movie distribution company Netflix has generated considerable buzz in the statistics community by offering a million dollar prize for improvements to its movie rating system. Among the statisticians and computer scientists who have disclosed their techniques, the emphasis has been on machine learning approaches. This article has the modest goal of discussing a simple model for movie rating and other forms of democratic rating. Because the model involves a large number of parameters, it is nontrivial to carry out maximum likelihood estimation. Here we derive a straightforward EM algorithm from the perspective of the more general MM algorithm. The algorithm is capable of finding the global maximum on a likelihood landscape littered with inferior modes. We apply two variants of the model to a dataset from the MovieLens archive and compare their results. Our model identifies quirky raters, redefines the raw rankings, and permits imputation of missing ratings. The model is intended to stimulate discussion and development of better theory rather than to win the prize. It has the added benefit of introducing readers to some of the issues connected with analyzing high-dimensional data. PMID:20802818
De la Cruz, Florentino B; Barlaz, Morton A
2010-06-15
The current methane generation model used by the U.S. EPA (Landfill Gas Emissions Model) treats municipal solid waste (MSW) as a homogeneous waste with one decay rate. However, component-specific decay rates are required to evaluate the effects of changes in waste composition on methane generation. Laboratory-scale rate constants, k(lab), for the major biodegradable MSW components were used to derive field-scale decay rates (k(field)) for each waste component using the assumption that the average of the field-scale decay rates for each waste component, weighted by its composition, is equal to the bulk MSW decay rate. For an assumed bulk MSW decay rate of 0.04 yr(-1), k(field) was estimated to be 0.298, 0.171, 0.015, 0.144, 0.033, 0.02, 0.122, and 0.029 yr(-1), for grass, leaves, branches, food waste, newsprint, corrugated containers, coated paper, and office paper, respectively. The effect of landfill waste diversion programs on methane production was explored to illustrate the use of component-specific decay rates. One hundred percent diversion of yard waste and food waste reduced the year 20 methane production rate by 45%. When a landfill gas collection schedule was introduced, collectable methane was most influenced by food waste diversion at years 10 and 20 and paper diversion at year 40.
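The composition-weighting constraint and the first-order generation model described here are easy to reproduce. Below is a minimal sketch in Python, assuming a LandGEM-style first-order rate G(t) = L0 * k * exp(-k*t) per unit mass; the k values are the field-scale constants quoted above, while the composition fractions and the methane yield L0 are placeholders (chosen so the biodegradable components make up 40% of bulk MSW and the weighted decay rate recovers 0.04 yr(-1)).

import numpy as np

# Field-scale decay rates (1/yr) quoted in the abstract
k_field = {"grass": 0.298, "leaves": 0.171, "branches": 0.015,
           "food": 0.144, "newsprint": 0.033, "corrugated": 0.02,
           "coated_paper": 0.122, "office_paper": 0.029}

# Placeholder wet-mass fractions of bulk MSW (remaining 60% assumed inert)
frac = {"grass": 0.04, "leaves": 0.02, "branches": 0.02,
        "food": 0.12, "newsprint": 0.04, "corrugated": 0.08,
        "coated_paper": 0.02, "office_paper": 0.06}

# Consistency check: composition-weighted decay rate should equal the bulk k
k_bulk = sum(frac[c] * k_field[c] for c in k_field)
print(f"composition-weighted k = {k_bulk:.3f} 1/yr (target 0.04)")

# First-order methane generation rate; L0 is a placeholder yield (m3 CH4/Mg)
L0 = 100.0
t = np.linspace(0.0, 40.0, 401)  # years
rate = sum(frac[c] * L0 * k_field[c] * np.exp(-k_field[c] * t) for c in k_field)
print(f"year-20 generation rate: {np.interp(20.0, t, rate):.2f} m3 CH4/Mg/yr")

Zeroing out the frac entries for the yard and food components reproduces, in shape, the diversion experiment described above.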
NASA Astrophysics Data System (ADS)
Shao, Quanxi; Dutta, Dushmanta; Karim, Fazlul; Petheram, Cuan
2018-01-01
Streamflow discharge is a fundamental dataset required to effectively manage water and land resources. However, developing robust stage-discharge relationships, called rating curves, from which streamflow discharge is derived, is time consuming and costly, particularly in remote areas and especially at high stage levels. As a result, stage-discharge relationships are often heavily extrapolated. Hydrodynamic (HD) models are physically based models used to simulate the flow of water along river channels and over adjacent floodplains. In this paper we demonstrate a method by which a HD model can be used to generate a 'synthetic' stage-discharge relationship at high stages. The method uses a both-side Box-Cox transformation to calibrate the synthetic rating curve such that the regression residuals are as close to the normal distribution as possible. By doing this both-side transformation, the statistical uncertainty in the synthetically derived stage-discharge relationship can be calculated. This enables decision-makers to determine whether the uncertainty in the synthetically generated rating curve at high stage levels is acceptable for their decision. The proposed method is demonstrated at two streamflow gauging stations in north Queensland, Australia.
Generation expansion planning in a competitive electric power industry
NASA Astrophysics Data System (ADS)
Chuang, Angela Shu-Woan
This work investigates the application of non-cooperative game theory to generation expansion planning (GEP) in a competitive electricity industry. We identify fundamental ways competition changes the nature of GEP, review different models of oligopoly behavior, and argue that the assumptions of the Cournot model are compatible with GEP. Applying the Cournot theory of oligopoly behavior, we formulate a GEP model that may characterize expansion in the new competitive regime, particularly in pool-dominated generation supply industries. Our formulation incorporates multiple markets and is patterned after the basic design of the California ISO/PX system. Applying the model, we conduct numerical experiments on a test system, and analyze generation investment and market participation decisions of different candidate expansion units that vary in costs and forced outage rates. Simulations are performed under different scenarios of competition. In particular, we observe higher probabilistic measures of reliability from Cournot expansion compared to the expansion plan of a monopoly with an equivalent minimum reserve margin requirement. We prove several results for a subclass of problems encompassed by our formulation. In particular, we prove that under certain conditions Cournot competition leads to greater total capacity expansion than a situation in which generators collude in a cartel. We also show that industry output after the introduction of new technology is no less than monopoly output, so a monopoly may lack sufficient incentive to introduce new technologies. Finally, we discuss the association between capacity payments and the issue of pricing reliability, and derive a formula for computing ideal capacity payment rates by extending the Value of Service Reliability technique.
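The Cournot logic at the heart of such GEP formulations can be illustrated with the textbook closed-form solution. The sketch below assumes linear inverse demand P(Q) = a - b*Q and constant marginal costs, which is far simpler than the multi-market formulation described above; all numbers are invented.

import numpy as np

def cournot_equilibrium(a, b, costs):
    """Closed-form Cournot quantities for P(Q) = a - b*Q with constant
    marginal costs; negative solutions are clipped (inactive firms)."""
    costs = np.asarray(costs, dtype=float)
    n, total_cost = len(costs), costs.sum()
    q = (a - (n + 1) * costs + total_cost) / ((n + 1) * b)
    return np.clip(q, 0.0, None)

# Three hypothetical expansion units differing in marginal cost ($/MWh)
q = cournot_equilibrium(a=100.0, b=0.05, costs=[20.0, 25.0, 35.0])
P = 100.0 - 0.05 * q.sum()
print("Cournot quantities (MW):", q.round(1), " clearing price ($/MWh):", round(P, 2))

# Cartel benchmark: the cartel produces at the cheapest unit's cost,
# Q_m = (a - c_min) / (2b); Cournot total output exceeds it, echoing the
# capacity-expansion result proved in the paper.
Q_m = (100.0 - 20.0) / (2 * 0.05)
print("cartel output:", Q_m, " Cournot output:", round(q.sum(), 1))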
Influence of reaction-induced fracturing on serpentinisation rate
NASA Astrophysics Data System (ADS)
Malvoisin, B.; Brantut, N.; Kaczmarek, M. A.
2017-12-01
The alteration of mantle rocks at mid-ocean ridges (i.e. serpentinisation) can lead to a solid volume increase responsible for stress build-up and cracking during reaction (reaction-induced fracturing). This mechanism has been proposed to play a key role in maintaining fluid pathways during reaction. However, its impact on the reaction rate is not yet quantified. We propose here a micromechanical model to quantify the influence of the crystallisation pressure generated during serpentine precipitation on crack propagation in olivine. This model is then coupled to a simple geometrical model to calculate the generation of reactive surface area during grain splitting, and thus bulk reaction rate. The model is able to reproduce experimental kinetic data as well as the mesh texture observed in natural samples. The model results are compared to olivine grain size distributions in serpentinised peridotites from the Marum ophiolite and the Papuan ultramafic belt (Papua New Guinea). The observations and the model both indicate a decrease of the mean grain size by one order of magnitude as the reaction progresses from 5 to 40%. Based on this good agreement, we use our model to predict that cracking reduces the characteristic time of serpentinisation by one order of magnitude, down to values between 10 and 1,000 yr. The peak serpentinisation is also shifted 4 km above previous predictions due to the increase of effective pressure with depth.
NASA Astrophysics Data System (ADS)
Sun, Y.; Li, Y. P.; Huang, G. H.
2012-06-01
In this study, a queuing-theory-based interval-fuzzy robust two-stage programming (QB-IRTP) model is developed by introducing queuing theory into an interval-fuzzy robust two-stage (IRTP) optimization framework. The developed QB-IRTP model can not only address highly uncertain information for the lower and upper bounds of interval parameters, but can also be used for analysing a variety of policy scenarios that are associated with different levels of economic penalties when the promised targets are violated. Moreover, it can reflect uncertainties in queuing theory problems. The developed method has been applied to a case of long-term municipal solid waste (MSW) management planning. Interval solutions associated with different waste-generation rates, different waiting costs and different arrival rates have been obtained. They can be used for generating decision alternatives and thus help managers to identify desired MSW management policies under various economic objectives and system reliability constraints.
Yovich, John L; Conceicao, Jason L; Marjanovich, Nicole; Ye, Yun; Hinchliffe, Peter M; Dhaliwal, Satvinder S; Keane, Kevin N
2018-05-22
IVF cycles utilizing the ICSI technique for fertilization have been rising over the 25 years since its introduction, with indications now extending beyond male factor infertility. We have performed ICSI for 87% of cases compared with the ANZARD average of 67%. This retrospective study reports on the outcomes of 1547 autologous ART treatments undertaken over a recent 3-year period. Based on various indications, cases were managed within three groupings: IVF Only, ICSI Only or IVF-ICSI Split insemination, where oocytes were randomly allocated. Overall, 567 pregnancies arose from mostly single embryo transfer procedures up to December 2016, with 402 live births comprising 415 infants, and a low fetal abnormality rate (1.9%) was recorded. When the data were adjusted for confounders such as maternal age, measures of ovarian reserve and sperm quality, it appeared that IVF-generated and ICSI-generated embryos had a similar chance of both pregnancy and live birth. In the IVF-ICSI Split model, significantly more ICSI-generated embryos were utilised (2.5 vs 1.8; p < 0.003), with productivity rates of 67.8% for pregnancy and 43.4% for live births per OPU for this group. We conclude that ART clinics should apply the insemination method which will maximize embryo numbers, and that the first treatment for unexplained infertility should be undertaken within the IVF-ICSI Split model. Whilst ICSI-generated pregnancies are reported to have a higher rate of fetal abnormalities, our data are consistent with the view that this finding is not due to the ICSI technique per se. Copyright © 2018 Society for Biology of Reproduction & the Institute of Animal Reproduction and Food Research of Polish Academy of Sciences in Olsztyn. Published by Elsevier B.V. All rights reserved.
Photodynamic therapy: computer modeling of diffusion and reaction phenomena
NASA Astrophysics Data System (ADS)
Hampton, James A.; Mahama, Patricia A.; Fournier, Ronald L.; Henning, Jeffery P.
1996-04-01
We have developed a transient, one-dimensional mathematical model for the reaction and diffusion phenomena that occur during photodynamic therapy (PDT). This model is referred to as the PDTmodem program. The model is solved by the Crank-Nicolson finite difference technique and can be used to predict the fates of important molecular species within the intercapillary tissue undergoing PDT. The following factors govern molecular oxygen consumption and singlet oxygen generation within a tumor: (1) photosensitizer concentration; (2) fluence rate; and (3) intercapillary spacing. In an effort to maximize direct tumor cell killing, the model allows educated decisions to be made to ensure the uniform generation and exposure of singlet oxygen to tumor cells across the intercapillary space. Based on predictions made by the model, we have determined that the singlet oxygen concentration profile within the intercapillary space is controlled by the product of the drug concentration and light fluence rate. The model predicts that at high levels of this product, within seconds singlet oxygen generation is limited to a small core of cells immediately surrounding the capillary. The remainder of the tumor tissue in the intercapillary space is anoxic and protected from the generation and toxic effects of singlet oxygen. However, at lower values of this product, the PDT-induced anoxic regions are not observed. An important finding is that an optimal value of this product can be defined that maintains the singlet oxygen concentration throughout the intercapillary space at a near constant level. Direct tumor cell killing is therefore postulated to depend on the singlet oxygen exposure, defined as the product of the uniform singlet oxygen concentration and the time of exposure, and not on the total light dose.
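The Crank-Nicolson scheme referred to above is straightforward to set up for a one-dimensional reaction-diffusion equation. Below is a minimal sketch for u_t = D*u_xx - k*u with oxygen fixed at the capillary wall; the geometry, diffusivity and consumption rate are placeholders, not the PDT model's actual parameters.

import numpy as np

L, nx, dt, nt = 0.01, 101, 1e-3, 500   # domain (cm), nodes, time step (s), steps
D, k = 2e-5, 5.0                       # diffusivity (cm^2/s), consumption rate (1/s)
x = np.linspace(0.0, L, nx)
dx = x[1] - x[0]
r = D * dt / (2.0 * dx ** 2)

# Crank-Nicolson matrices: A u_new = B u_old, with the reaction term split
off = np.eye(nx, k=1) + np.eye(nx, k=-1)
A = np.eye(nx) * (1.0 + 2.0 * r + 0.5 * k * dt) - r * off
B = np.eye(nx) * (1.0 - 2.0 * r - 0.5 * k * dt) + r * off

# Dirichlet boundaries: u = 1 at the capillary wall, u = 0 far away
A[0, :], A[-1, :] = 0.0, 0.0
A[0, 0] = A[-1, -1] = 1.0

u = np.zeros(nx)
u[0] = 1.0
for _ in range(nt):
    rhs = B @ u
    rhs[0], rhs[-1] = 1.0, 0.0
    u = np.linalg.solve(A, rhs)

print("half-max O2 penetration depth:", x[np.argmax(u < 0.5)], "cm")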
NASA Astrophysics Data System (ADS)
Gilchrist, J. J.; Jordan, T. H.; Shaw, B. E.; Milner, K. R.; Richards-Dinger, K. B.; Dieterich, J. H.
2017-12-01
Within the SCEC Collaboratory for Interseismic Simulation and Modeling (CISM), we are developing physics-based forecasting models for earthquake ruptures in California. We employ the 3D boundary element code RSQSim (Rate-State Earthquake Simulator of Dieterich & Richards-Dinger, 2010) to generate synthetic catalogs with tens of millions of events that span up to a million years each. This code models rupture nucleation by rate- and state-dependent friction and Coulomb stress transfer in complex, fully interacting fault systems. The Uniform California Earthquake Rupture Forecast Version 3 (UCERF3) fault and deformation models are used to specify the fault geometry and long-term slip rates. We have employed the Blue Waters supercomputer to generate long catalogs of simulated California seismicity from which we calculate the forecasting statistics for large events. We have performed probabilistic seismic hazard analysis with RSQSim catalogs that were calibrated with system-wide parameters and found a remarkably good agreement with UCERF3 (Milner et al., this meeting). We build on this analysis, comparing the conditional probabilities of sequences of large events from RSQSim and UCERF3. In making these comparisons, we consider the epistemic uncertainties associated with the RSQSim parameters (e.g., rate- and state-frictional parameters), as well as the effects of model-tuning (e.g., adjusting the RSQSim parameters to match UCERF3 recurrence rates). The comparisons illustrate how physics-based rupture simulators might assist forecasters in understanding the short-term hazards of large aftershocks and multi-event sequences associated with complex, multi-fault ruptures.
Navy Nurse Corps manpower management model.
Kinstler, Daniel P; Johnson, Raymond W; Richter, Anke; Kocher, Kathryn
2008-01-01
The Navy Nurse Corps is part of a team of professionals that provides high quality, economical health care to approximately 700,000 active duty Navy and Marine Corps members, as well as 2.6 million retired and family members. Navy Nurse Corps manpower management efficiency is critical to providing this care. This paper aims to focus on manpower planning in the Navy Nurse Corps. The Nurse Corps manages personnel primarily through the recruitment process, drawing on multiple hiring sources. Promotion rates at the lowest two ranks are mandated, but not at the higher ranks. Retention rates vary across pay grades. Using these promotion and attrition rates, a Markov model was constructed to model the personnel flow of junior nurse corps officers. Hiring sources were shown to have a statistically significant effect on promotion and retention rates. However, these effects were not found to be practically significant in the Markov model. Only small improvements in rank imbalances are possible given current recruiting guidelines. Allowing greater flexibility in recruiting practices, fewer recruits would generate a 25 percent reduction in rank imbalances, but result in understaffing. Recruiting different ranks at entry would generate a 65 percent reduction in rank imbalances without understaffing issues. Policies adjusting promotion and retention rates are more powerful in controlling personnel flows than adjusting hiring sources. These policies are the only means for addressing the fundamental sources of rank imbalances in the Navy Nurse Corps arising from current manpower guidelines. The paper shows that modeling to improve manpower management may enable the Navy Nurse Corps to more efficiently fulfill its mandate for high-quality healthcare.
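The Markov personnel-flow idea is compact enough to sketch directly: a transition matrix propagates headcounts by rank from year to year, with recruiting added as an inflow. The ranks, probabilities and recruit numbers below are invented for illustration, not Navy figures.

import numpy as np

# States: O1, O2, O3, exited; rows are this year's state, columns next year's.
# Hypothetical annual stay/promote/leave probabilities (rows sum to 1).
P = np.array([
    [0.15, 0.75, 0.00, 0.10],   # O1: stay, promote to O2, -, leave
    [0.00, 0.20, 0.70, 0.10],   # O2
    [0.00, 0.00, 0.85, 0.15],   # O3
    [0.00, 0.00, 0.00, 1.00],   # exited (absorbing)
])
recruits = np.array([120.0, 0.0, 0.0, 0.0])   # all hires enter at O1

n = np.array([300.0, 400.0, 500.0, 0.0])      # initial headcounts
for year in range(10):
    n = n @ P + recruits
print("headcounts O1-O3 after 10 years:", n[:3].round(0))

Changing the recruits vector to allow entry at higher ranks is the "recruiting different ranks at entry" policy lever discussed above.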
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jarvie, D.M.; Elsinger, R.J.; Inden, R.F.
1996-06-01
Recent successes in the Lodgepole Waulsortian Mound play have resulted in the reevaluation of the Williston Basin petroleum systems. It has been postulated that hydrocarbons were generated from organic-rich Bakken Formation source rocks in the Williston Basin. However, Canadian geoscientists have indicated that the Lodgepole Formation is responsible for oil entrapped in Lodgepole Formation and other Madison traps in portions of the Canadian Williston Basin. Furthermore, geoscientists in the U.S. have recently shown that oils from mid-Madison conventional reservoirs in the U.S. Williston Basin were not derived from Bakken Formation source rocks. Kinetic data showing the rate of hydrocarbon formation from petroleum source rocks were measured on source rocks from the Lodgepole, False Bakken, and Bakken Formations. These results show a wide range of values in the rate of hydrocarbon generation. Oil prone facies within the Lodgepole Formation tend to generate hydrocarbons earlier than the oil prone facies in the Bakken Formation and the mixed oil/gas prone and gas prone facies in the Lodgepole Formation. A comparison of these source rocks using a geological model of hydrocarbon generation reveals differences in the timing of generation and the required level of maturity to generate significant amounts of hydrocarbons.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Energy Operation Model (EOM) simulates the operation of the electric grid at the zonal scale, including inter-zonal transmission constraints. It generates the production cost, power generation by plant and category, fuel usage, and locational marginal price (LMP), with a flexible way to constrain power production by environmental conditions (e.g., heat waves, drought). Unlike commercial software such as PROMOD IV, where generator capacity and heat rate efficiency can only be adjusted on a monthly basis, EOM calculates capacity impacts and plant efficiencies based on hourly ambient conditions (air temperature and humidity) and cooling water availability for thermal plants. A hydropower dispatch module is currently missing.
Hall, Matt; Christensen, Kim; di Collobiano, Simone A; Jensen, Henrik Jeldtoft
2002-07-01
We present a model of evolutionary ecology consisting of a web of interacting individuals, the tangled-nature model. The reproduction rate of individuals, characterized by their genome, depends on the composition of the population in genotype space. Ecological features such as the taxonomy and the macroevolutionary mode of the dynamics are emergent properties. The macrodynamics exhibit intermittent two-mode switching with a gradually decreasing extinction rate. The generated ecologies become gradually better adapted as well as more complex in a collective sense. The form of the species abundance curve compares well with observed functional forms. The model's error threshold can be understood in terms of the characteristics of the two dynamical modes of the system.
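A stripped-down sketch of the tangled-nature dynamics: binary genomes, a random interaction table J that makes each genotype's reproduction probability depend on the current population composition, constant death probability, and point mutations at reproduction. All parameter values are illustrative only, and the founder genotype is seeded with a viable niche so the toy run does not go extinct immediately.

import numpy as np

rng = np.random.default_rng(3)
L_GENOME, mu, p_kill = 8, 0.02, 0.2        # genome bits, mutation prob/bit, death prob
J = {(0, 0): 30.0}                         # couplings on demand; viable founder (assumption)

def coupling(a, b):
    if (a, b) not in J:
        J[(a, b)] = rng.normal(0.0, 20.0)
    return J[(a, b)]

pop = [0] * 100                            # integer-coded genomes, initially identical
for step in range(5000):
    if not pop:
        break
    g = pop[rng.integers(len(pop))]
    counts = {}
    for s in pop:
        counts[s] = counts.get(s, 0) + 1
    N = len(pop)
    # Fitness: couplings to the current composition, minus a crowding term
    H = sum(coupling(g, s) * n for s, n in counts.items()) / N - 0.1 * N
    if rng.random() < 1.0 / (1.0 + np.exp(-H)):
        flips = sum(1 << k for k in range(L_GENOME) if rng.random() < mu)
        pop.append(g ^ flips)              # offspring with point mutations
    if rng.random() < p_kill:
        pop.pop(rng.integers(len(pop)))

print("population:", len(pop), " distinct genotypes:", len(set(pop)))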
Modeling the Synergy of Cofilin and Arp2/3 in Lamellipodial Protrusive Activity
Tania, Nessy; Condeelis, John; Edelstein-Keshet, Leah
2013-01-01
Rapid polymerization of actin filament barbed ends generates protrusive forces at the cell edge, leading to cell migration. Two important regulators of free barbed ends, cofilin and Arp2/3, have been shown to work in synergy (net effect greater than additive). To explore this synergy, we model the dynamics of F-actin at the leading edge, motivated by data from EGF-stimulated mammary carcinoma cells. We study how synergy depends on the localized rates and relative timing of cofilin and Arp2/3 activation at the cell edge. The model incorporates diffusion of cofilin, membrane protrusion, F-actin capping, aging, and severing by cofilin and branch nucleation by Arp2/3 (but not G-actin recycling). In a well-mixed system, cofilin and Arp2/3 can each generate a large pulse of barbed ends on their own, but have little synergy; high synergy occurs only at low activation rates, when few barbed ends are produced. In the full spatially distributed model, both synergy and barbed-end production are significant over a range of activation rates. Furthermore, barbed-end production is greatest when Arp2/3 activation is delayed relative to cofilin. Our model supports a direct role for cofilin-mediated actin polymerization in stimulated cell migration, including chemotaxis and cancer invasion. PMID:24209839
Confidence and self-attribution bias in an artificial stock market.
Bertella, Mario A; Pires, Felipe R; Rego, Henio H A; Silva, Jonathas N; Vodenska, Irena; Stanley, H Eugene
2017-01-01
Using an agent-based model we examine the dynamics of stock price fluctuations and their rates of return in an artificial financial market composed of fundamentalist and chartist agents with and without confidence. We find that chartist agents who are confident generate higher price and rate of return volatilities than those who are not. We also find that kurtosis and skewness are lower in our simulation study of agents who are not confident. We show that the stock price and confidence index-both generated by our model-are cointegrated and that stock price affects confidence index but confidence index does not affect stock price. We next compare the results of our model with the S&P 500 index and its respective stock market confidence index using cointegration and Granger tests. As in our model, we find that stock prices drive their respective confidence indices, but that the opposite relationship, i.e., the assumption that confidence indices drive stock prices, is not significant.
Gulf of California Response to Hurricane Juliette
2010-01-01
[Abstract not recovered: the source record contained only fragments of the article's reference list. The one fully recoverable citation is: Barth, A., Alvera-Azcárate, A., Weisberg, R.H., 2008. A nested model study of the Loop Current generated variability and its impact on the West Florida Shelf. J. Geophys. Res. 113, C05009. doi:10.1029/2007JC004492.]
Gaussian Mixture Model of Heart Rate Variability
Costa, Tommaso; Boccignone, Giuseppe; Ferraro, Mario
2012-01-01
Heart rate variability (HRV) is an important measure of sympathetic and parasympathetic functions of the autonomic nervous system and a key indicator of cardiovascular condition. This paper proposes a novel method to investigate HRV, namely by modelling it as a linear combination of Gaussians. Results show that three Gaussians are enough to describe the stationary statistics of heart variability and to provide a straightforward interpretation of the HRV power spectrum. Comparisons have been made also with synthetic data generated from different physiologically based models showing the plausibility of the Gaussian mixture parameters. PMID:22666386
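A minimal sketch of the fitting step in Python, using scikit-learn's GaussianMixture on synthetic RR intervals in place of a real HRV recording; the three regimes and their parameters are invented.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic stand-in for RR intervals (ms): three overlapping regimes
rr = np.concatenate([rng.normal(800, 30, 2000),
                     rng.normal(900, 50, 1500),
                     rng.normal(1000, 40, 500)]).reshape(-1, 1)

gmm = GaussianMixture(n_components=3, random_state=0).fit(rr)
for w, m, v in zip(gmm.weights_, gmm.means_.ravel(), gmm.covariances_.ravel()):
    print(f"weight {w:.2f}  mean {m:.0f} ms  sd {np.sqrt(v):.0f} ms")

Comparing the Bayesian information criterion (gmm.bic(rr)) across n_components is one way to check the paper's finding that three Gaussians suffice.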
Gouvêa de Barros, Bruno; Weber dos Santos, Rodrigo; Alonso, Sergio
2015-01-01
The inclusion of nonconducting media, mimicking cardiac fibrosis, in two models of cardiac tissue produces the formation of ectopic beats. The fraction of nonconducting media in comparison with the fraction of healthy myocytes, together with the topological distribution of cells, determines the probability of ectopic beat generation. First, a detailed subcellular microscopic model that accounts for the microstructure of the cardiac tissue is constructed and employed for the numerical simulation of action potential propagation. Next, an equivalent discrete model is implemented, which permits a faster integration of the equations. This discrete model is a simplified version of the microscopic model that maintains the distribution of connections between cells. Both models produce similar results when describing action potential propagation in homogeneous tissue; however, they differ slightly in the generation of ectopic beats in heterogeneous tissue. Nevertheless, both models present the generation of reentry inside fibrotic tissues. This kind of reentry, restricted to microfibrosis regions, can result in the formation of ectopic pacemakers, that is, regions that will generate a series of ectopic stimuli at a fast pacing rate. In turn, such activity has been related to the triggering of fibrillation in the atria and in the ventricles in clinical and animal studies. PMID:26583127
Markov-modulated Markov chains and the covarion process of molecular evolution.
Galtier, N; Jean-Marie, A
2004-01-01
The covarion (or site specific rate variation, SSRV) process of biological sequence evolution is a process by which the evolutionary rate of a nucleotide/amino acid/codon position can change in time. In this paper, we introduce time-continuous, space-discrete, Markov-modulated Markov chains as a model for representing SSRV processes, generalizing existing theory to any model of rate change. We propose a fast algorithm for diagonalizing the generator matrix of relevant Markov-modulated Markov processes. This algorithm makes phylogeny likelihood calculation tractable even for a large number of rate classes and a large number of states, so that SSRV models become applicable to amino acid or codon sequence datasets. Using this algorithm, we investigate the accuracy of the discrete approximation to the Gamma distribution of evolutionary rates, widely used in molecular phylogeny. We show that a relatively large number of classes is required to achieve accurate approximation of the exact likelihood when the number of analyzed sequences exceeds 20, both under the SSRV and among site rate variation (ASRV) models.
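The generator of such a Markov-modulated substitution process has a convenient Kronecker structure: on the joint state space (rate class, character), class switching contributes S ⊗ I and substitution contributes diag(rates) ⊗ Q. A small sketch with placeholder matrices (Jukes-Cantor Q, three rate classes, uniform switching):

import numpy as np
from scipy.linalg import expm

# Jukes-Cantor substitution generator on 4 states (rows sum to zero)
Q = np.full((4, 4), 1.0 / 3.0)
np.fill_diagonal(Q, -1.0)

rates = np.array([0.2, 1.0, 2.8])     # rate classes (placeholder values)
s, k = 0.1, 3                         # switching rate between classes (placeholder)
S = np.full((k, k), s / (k - 1))
np.fill_diagonal(S, -s)

# Joint generator on (class, state) pairs
G = np.kron(S, np.eye(4)) + np.kron(np.diag(rates), Q)
print("generator rows sum to ~0:", np.allclose(G.sum(axis=1), 0.0))

# Transition probabilities over a branch of length t = 0.5
P = expm(G * 0.5)
print("transition matrix rows sum to 1:", np.allclose(P.sum(axis=1), 1.0))

The fast diagonalization proposed in the paper exploits exactly this structure rather than exponentiating the full joint matrix directly, which is what makes many rate classes and large state spaces tractable.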
Live Birth from Slow-Frozen Rabbit Oocytes after In Vivo Fertilisation
Jiménez-Trigos, Estrella; Vicente, José S.; Marco-Jiménez, Francisco
2013-01-01
In vivo fertilisation techniques such as intraoviductal oocyte transfer have been considered as alternatives to bypass the inadequacy of conventional in vitro fertilisation in rabbit. There is only one study in the literature, published in 1989, that reports live offspring from cryopreserved rabbit oocytes. The aim of the present study was to establish an in vivo fertilisation procedure to generate live offspring from frozen oocytes. First, the effects of two recipient models, (i) ovariectomised or (ii) oviduct ligated immediately after transfer, on the ability of fresh oocytes to fertilise were compared. Second, generation of live offspring from slow-frozen oocytes was carried out using the ligated oviduct recipient model. Throughout the experiment, recipients were artificially inseminated 9 hours prior to oocyte transfer. In the first experiment, two days after unilateral transfer of fresh oocytes, oviducts and uterine horns were flushed to assess embryo recovery rates. The embryo recovery rates were low compared to the control in both the ovariectomised and ligated oviduct groups. However, the ligated oviduct recipients showed significantly (P<0.05) higher embryo recovery rates compared to the ovariectomised and control-transferred groups. In the second experiment, using the bilateral oviduct ligation model, all females that received slow-frozen oocytes became pregnant and delivered a total of 4 live young naturally. Thus, in vivo fertilisation is an effective technique to generate live offspring using slow-frozen oocytes in rabbits. PMID:24358281
NASA Astrophysics Data System (ADS)
Liu, Ruipeng; Di Matteo, T.; Lux, Thomas
2007-09-01
In this paper, we consider daily financial data of a collection of different stock market indices, exchange rates, and interest rates, and we analyze their multi-scaling properties by estimating a simple specification of the Markov-switching multifractal (MSM) model. In order to see how well the estimated model captures the temporal dependence of the data, we estimate and compare the scaling exponents H(q) (for q=1,2) for both empirical data and simulated data of the MSM model. In most cases the multifractal model appears to generate ‘apparent’ long memory in agreement with the empirical scaling laws.
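The scaling exponents mentioned here come from the generalized Hurst relation E|X(t+tau) - X(t)|^q proportional to tau^(q*H(q)). A minimal estimation sketch, with a plain random walk standing in for the simulated MSM series (so H(q) should come out near 0.5):

import numpy as np

def generalized_hurst(x, q, taus):
    """Estimate H(q) from E|x(t+tau)-x(t)|**q ~ tau**(q*H(q))."""
    m = [np.mean(np.abs(x[tau:] - x[:-tau]) ** q) for tau in taus]
    slope = np.polyfit(np.log(taus), np.log(m), 1)[0]
    return slope / q

rng = np.random.default_rng(1)
x = np.cumsum(rng.standard_normal(20000))   # stand-in for a log-price series
taus = np.arange(1, 20)
for q in (1, 2):
    print(f"H({q}) = {generalized_hurst(x, q, taus):.3f}")

For genuinely multiscaling data, as the MSM model generates, H(1) and H(2) separate; that separation is the 'apparent' long-memory signature discussed above.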
Wind Technology Modeling Within the System Advisor Model (SAM) (Poster)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blair, N.; Dobos, A.; Ferguson, T.
This poster provides detail for the implementation and the underlying methodology for modeling wind power generation performance in the National Renewable Energy Laboratory's (NREL's) System Advisor Model (SAM). SAM's wind power model allows users to assess projects involving one or more large or small wind turbines with any of the detailed options for residential, commercial, or utility financing. The model requires information about the wind resource, wind turbine specifications, wind farm layout (if applicable), and costs, and provides analysis to compare the absolute or relative impact of these inputs. SAM is a system performance and economic model designed to facilitate analysis and decision-making for project developers, financers, policymakers, and energy researchers. The user pairs a generation technology with a financing option (residential, commercial, or utility) to calculate the cost of energy over the multi-year project period. Specifically, SAM calculates the value of projects which buy and sell power at retail rates for residential and commercial systems, and also for larger-scale projects which operate through a power purchase agreement (PPA) with a utility. The financial model captures complex financing and rate structures, taxes, and incentives.
Information Theoretic Secret Key Generation: Structured Codes and Tree Packing
ERIC Educational Resources Information Center
Nitinawarat, Sirin
2010-01-01
This dissertation deals with a multiterminal source model for secret key generation by multiple network terminals with prior and privileged access to a set of correlated signals complemented by public discussion among themselves. Emphasis is placed on a characterization of secret key capacity, i.e., the largest rate of an achievable secret key,…
Triangular Arbitrage as an Interaction in Foreign Exchange Markets
NASA Astrophysics Data System (ADS)
Aiba, Yukihiro; Hatano, Naomichi
Analyzing correlation in financial time series is a topic of considerable interest [1]-[17]. In the foreign exchange market, a correlation among the exchange rates can be generated by triangular arbitrage transactions. The purpose of this article is to review our recent work [18]-[23] on modeling the interaction generated by triangular arbitrage.
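The quantity at the heart of triangular arbitrage is the product of the three exchange rates around a currency loop; a product above one (before transaction costs) signals an arbitrage opportunity. A minimal sketch with invented quotes:

# Rate product around a hypothetical JPY -> USD -> EUR -> JPY loop.
# All quotes are invented; "rate" means units of currency bought per unit sold.
usd_per_jpy = 1.0 / 110.0
eur_per_usd = 0.92
jpy_per_eur = 121.5

mu = usd_per_jpy * eur_per_usd * jpy_per_eur
print(f"rate product mu = {mu:.5f}")
if mu > 1.0:
    print("triangular arbitrage opportunity (ignoring transaction costs)")
else:
    print("no arbitrage in this direction; the reverse loop has product 1/mu")

Executing such loops pushes mu back toward one, which is the interaction among rates studied in the articles cited above.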
An Approach for Reducing the Error Rate in Automated Lung Segmentation
Gill, Gurman; Beichel, Reinhard R.
2016-01-01
Robust lung segmentation is challenging, especially when tens of thousands of lung CT scans need to be processed, as required by large multi-center studies. The goal of this work was to develop and assess a method for the fusion of segmentation results from two different methods to generate lung segmentations that have a lower failure rate than individual input segmentations. As basis for the fusion approach, lung segmentations generated with a region growing and model-based approach were utilized. The fusion result was generated by comparing input segmentations and selectively combining them using a trained classification system. The method was evaluated on a diverse set of 204 CT scans of normal and diseased lungs. The fusion approach resulted in a Dice coefficient of 0.9855 ± 0.0106 and showed a statistically significant improvement compared to both input segmentation methods. In addition, the failure rate at different segmentation accuracy levels was assessed. For example, when requiring that lung segmentations must have a Dice coefficient of better than 0.97, the fusion approach had a failure rate of 6.13%. In contrast, the failure rate for region growing and model-based methods was 18.14% and 15.69%, respectively. Therefore, the proposed method improves the quality of the lung segmentations, which is important for subsequent quantitative analysis of lungs. Also, to enable a comparison with other methods, results on the LOLA11 challenge test set are reported. PMID:27447897
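The Dice coefficient used as the accuracy criterion is simple to compute from two binary masks; a minimal sketch (the trained fusion classifier itself is not reproduced here):

import numpy as np

def dice(a, b):
    """Dice overlap between two boolean segmentation masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy 1-D "masks" standing in for voxel-wise lung segmentations
ref = np.array([0, 1, 1, 1, 1, 0, 0, 1])
seg = np.array([0, 1, 1, 0, 1, 0, 1, 1])
print(f"Dice = {dice(ref, seg):.3f}")  # counts as a failure below a chosen level, e.g. 0.97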
Influence of Ar addition on ozone generation in a non-thermal plasma—a numerical investigation
NASA Astrophysics Data System (ADS)
Chen, Hsin Liang; Lee, How Ming; Chen, Shiaw Huei; Wei, Ta Chin; Been Chang, Moo
2010-10-01
A numerical model based on a dielectric barrier discharge is developed in this study to investigate the influence of Ar addition on ozone generation. The simulation results show good agreement with the experimental data, confirming the validity of the numerical model. The mechanisms by which Ar addition affects ozone generation are investigated with the assistance of numerical simulation by probing into the following two questions: (1) why the ozone concentration decreases only slightly in the low specific input energy (SIE, the ratio of discharge power to gas flow rate) region even if the inlet O2 concentration is substantially decreased, and (2) why the rate at which ozone concentration increases with SIE (i.e., the slope of ozone concentration versus SIE) varies more significantly for an O2/Ar mixture plasma. When SIE is relatively low, ozone decomposition through electron-impact and radical attack reactions is less significant because of the low ozone concentration and gas temperature. Therefore, the ozone concentration depends mainly on the amount of oxygen atoms generated. The simulation results indicate that the amounts of oxygen atoms generated per electronvolt for Ar concentrations of 0%, 10%, 30%, 50% and 80% are 0.178, 0.174, 0.169, 0.165 and 0.166, respectively, explaining why the ozone concentration does not decrease linearly with the inlet O2 concentration in the low SIE region. On the other hand, the simulation results show that increasing the Ar concentration leads to a lower reduced field and a higher gas temperature. The former leads to an increase in the rate constant of e + O3 → e + O + O2, while the latter results in a decrease in the rate constant of O + O2 + M → O3 + M and an increase in that of O3 + O → 2O2. The changes in the rate constants of these reactions have a negative effect on ozone generation, which answers the second question.
Booth, James F; Naud, Catherine M; Willison, Jeff
2018-03-01
The representation of extratropical cyclone (ETC) precipitation in general circulation models (GCMs) and a weather research and forecasting (WRF) model is analyzed. This work considers the link between ETC precipitation and dynamical strength, and tests whether parameterized convection affects this link for ETCs in the North Atlantic Basin. Lagrangian cyclone tracks of ETCs in ERA-Interim reanalysis (ERAI), the GISS and GFDL CMIP5 models, and WRF with two horizontal resolutions are utilized in a compositing analysis. The 20-km resolution WRF model generates stronger ETCs based on surface wind speed and cyclone precipitation. The GCMs and ERAI generate similar composite means and distributions for cyclone precipitation rates, but the GCMs generate weaker cyclone surface winds than ERAI. The amount of cyclone precipitation generated by the convection scheme differs significantly across the datasets, with GISS generating the most, followed by ERAI and then GFDL. The models and reanalysis generate relatively more parameterized convective precipitation when the total cyclone-averaged precipitation is smaller. This is partially due to the contribution of parameterized convective precipitation occurring more often late in the ETC life cycle. For the reanalysis and models, precipitation increases with both cyclone moisture and surface wind speed, whether or not the contribution from the parameterized convection scheme is large. This work shows that these different models generate similar total ETC precipitation despite large differences in the parameterized convection, and that these differences do not cause unexpected behavior in the sensitivity of ETC precipitation to cyclone moisture or surface wind speed.
Al-Khatib, Issam A; Eleyan, Derar; Garfield, Joy
2016-09-01
Hospitals and health centers provide a variety of healthcare services and normally generate hazardous waste as well as general waste. General waste has a similar nature to that of municipal solid waste and therefore could be disposed of in municipal landfills. However, hazardous waste poses risks to public health unless it is properly managed. The hospital waste management system encompasses many factors, i.e., number of beds, number of employees, level of service, population, birth rate, fertility rate, and the not-in-my-back-yard (NIMBY) syndrome. Therefore, this management system requires a comprehensive analysis to determine the role of each factor and its influence on the whole system. In this research, a hospital waste management simulation model is presented based on the system dynamics technique to determine the interaction among these factors, using the software package ithink. The model is used to estimate waste segregation, which is important in the hospital waste management system to minimize risk to public health. Real data were obtained from a case study of the city of Nablus, Palestine, to validate the model. The model represents the waste generated by three types of hospitals (private, charitable, and government) by considering the numbers of both inpatients and outpatients, which depend on the population of the city under study. The model also offers the facility to compare the total waste generated among these different types of hospitals and to predict future generated waste, both infectious and non-infectious, and the treatment cost incurred.
Appropriate prediction of residential air exchange rate (AER) is important for estimating human exposures in the residential microenvironment, as AER drives the infiltration of outdoor-generated air pollutants indoors. AER differences among homes may result from a number of fact...
Uncertainty estimation with bias-correction for flow series based on rating curve
NASA Astrophysics Data System (ADS)
Shao, Quanxi; Lerat, Julien; Podger, Geoff; Dutta, Dushmanta
2014-03-01
Streamflow discharge constitutes one of the fundamental data required to perform water balance studies and develop hydrological models. A rating curve, designed based on a series of concurrent stage and discharge measurements at a gauging location, provides a way to generate complete discharge time series of reasonable quality if sufficient measurement points are available. However, the associated uncertainty is frequently not available, even though it has a significant impact on hydrological modelling. In this paper, we identify discrepancies in the hydrographers' rating curves used to derive the historical discharge data series and propose a modification by bias correction which, like the traditional rating curve, takes the form of a power function. In order to obtain the uncertainty estimation, we propose a further both-side Box-Cox transformation to stabilize the regression residuals as close to the normal distribution as possible, so that a proper uncertainty can be attached to the whole discharge series in the ensemble generation. We demonstrate the proposed method by applying it to the gauging stations on the Flinders and Gilbert rivers in north-west Queensland, Australia.
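A sketch of the both-side Box-Cox idea: apply the same power transformation to both the observed and the modeled discharge, fit the rating-curve parameters by maximizing a Gaussian likelihood in the transformed space (including the Jacobian term), and use the fitted error model to generate ensembles. The data, starting values, and the simple rating form Q = a*(h - h0)^b are all synthetic assumptions.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
h = np.sort(rng.uniform(0.5, 5.0, 80))                 # stage (m), synthetic
Q = 3.0 * (h - 0.2) ** 1.8 * rng.lognormal(0.0, 0.08, h.size)

def boxcox(y, lam):
    return (y ** lam - 1.0) / lam if abs(lam) > 1e-8 else np.log(y)

def negloglik(p):
    a, b, h0, lam, logsig = p
    if a <= 0 or b <= 0 or h0 >= h.min():
        return 1e10                                    # reject invalid parameters
    resid = boxcox(Q, lam) - boxcox(a * (h - h0) ** b, lam)
    sig = np.exp(logsig)
    # Gaussian log-likelihood in transformed space plus the Box-Cox Jacobian
    return (0.5 * np.sum((resid / sig) ** 2) + h.size * np.log(sig)
            - (lam - 1.0) * np.sum(np.log(Q)))

fit = minimize(negloglik, x0=[2.0, 1.5, 0.0, 0.3, -2.0], method="Nelder-Mead",
               options={"maxiter": 5000})
print("fitted a, b, h0, lambda:", np.round(fit.x[:4], 3))

Sampling residuals from the fitted normal and back-transforming then yields the discharge ensembles from which uncertainty bands are read off.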
Flores-Alsina, Xavier; Saagi, Ramesh; Lindblom, Erik; Thirsing, Carsten; Thornberg, Dines; Gernaey, Krist V; Jeppsson, Ulf
2014-03-15
The objective of this paper is to demonstrate the full-scale feasibility of the phenomenological dynamic influent pollutant disturbance scenario generator (DIPDSG) that was originally used to create the influent data of the International Water Association (IWA) Benchmark Simulation Model No. 2 (BSM2). In this study, the influent characteristics of two large Scandinavian treatment facilities are studied for a period of two years. A step-wise procedure based on adjusting the most sensitive parameters at different time scales is followed to calibrate/validate the DIPDSG model blocks for: 1) flow rate; 2) pollutants (carbon, nitrogen); 3) temperature; and 4) transport. Simulation results show that the model successfully describes daily/weekly and seasonal variations and the effects of rainfall and snow melting on the influent flow rate, pollutant concentrations and temperature profiles. Furthermore, additional phenomena, such as the size of particulates and their accumulation/flush in the upstream catchment and sewer system, are incorporated in the simulated time series. Finally, this study is complemented with: 1) the generation of additional future scenarios showing the effects of different rainfall patterns (climate change) or influent biodegradability (process uncertainty) on the generated time series; 2) a demonstration of how to reduce the cost/workload of measuring campaigns by filling the gaps due to missing data in the influent profiles; and 3) a critical discussion of the presented results balancing model structure/calibration procedure complexity and prediction capabilities. Copyright © 2013 Elsevier Ltd. All rights reserved.
Modeling Japan-South Seas trade in forest products.
J.R. Vincent
1987-01-01
The international trade of forest products has generated increasing research interest, yet experience with modeling such trade is limited. Primary issues include the effects of trade barriers and exchange rates on trade patterns and national welfare. This paper attempts to add to experience by modeling hardwood log, lumber, and plywood trade in a region that has been...
Johnson-Cook Strength Model for Automotive Steels
NASA Astrophysics Data System (ADS)
Vedantam, K.
2005-07-01
Over the last few years, most automotive companies have been performing simulations of the capability of individual components, or of the entire structure of a motor vehicle, to adequately sustain shock (impacts) and to protect the occupants from injuries during crashes. These simulations require constitutive material models (e.g., Johnson-Cook) of the sheet steel and other components, based on compression/tension data obtained in a series of tests performed at quasi-static (~1/s) to high strain rates (~2000/s). One such study was undertaken by the recently formed IISI (International Iron and Steel Institute), which organized round-robin tests to compare the tensile data generated at our laboratory at strain rates of ~1/s, ~300/s, ~800/s, and ~2000/s on two grades of automotive steel (mild steel and dual phase DP-590) using a split Hopkinson bar with those generated at high strain rate testing facilities in Germany and Japan. Our tension data on mild steel (flow stress ~500 MPa) suggest a relatively small strain rate sensitivity of the material. The second steel grade tested (DP-590) exhibits significant strain rate sensitivity, in that the flow stress increases from about 700 MPa (at ~1/s) to 900 MPa (at ~2000/s). J-C strength model constants (A, B, n, and C) for the two steel grades will be presented.
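The Johnson-Cook strength model mentioned here writes the flow stress as sigma = (A + B*eps^n) * (1 + C*ln(eps_rate/eps_rate0)); the full model adds a thermal-softening term, omitted below since the abstract lists only A, B, n and C. The constants in this sketch are invented, order-of-magnitude values for a DP-590-like steel, not fitted results.

import numpy as np

def johnson_cook(eps, eps_rate, A, B, n, C, eps_rate0=1.0):
    """Flow stress (MPa) from the strain-hardening and rate terms of J-C."""
    return (A + B * eps ** n) * (1.0 + C * np.log(eps_rate / eps_rate0))

A, B, n, C = 430.0, 820.0, 0.50, 0.015   # invented constants, not fitted values
for rate in (1.0, 300.0, 800.0, 2000.0):
    s = johnson_cook(0.10, rate, A, B, n, C)
    print(f"strain rate {rate:6.0f}/s -> flow stress at 10% strain: {s:4.0f} MPa")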
Transgenerational Adaptation to Pollution Changes Energy Allocation in Populations of Nematodes.
Goussen, Benoit; Péry, Alexandre R R; Bonzom, Jean-Marc; Beaudouin, Rémy
2015-10-20
Assessing the evolutionary responses of long-term exposed populations requires multigeneration ecotoxicity tests. However, the analysis of the data from these tests is not straightforward. Mechanistic models allow the in-depth analysis of the variation of physiological traits over many generations, by quantifying the trend of the physiological and toxicological parameters of the model. In the present study, a bioenergetic mechanistic model has been used to assess the evolution of two populations of the nematode Caenorhabditis elegans in control conditions or exposed to uranium. This evolutionary pressure resulted in a brood size reduction of 60%. We showed an adaptation of individuals of both populations to experimental conditions (increase of maximal length, decrease of growth rate, decrease of brood size, and decrease of the elimination rate). In addition, differential evolution was also highlighted between the two populations once the maternal effects had been diminished after several generations. Thus, individuals that were greater in maximal length, but with apparently a greater sensitivity to uranium were selected in the uranium population. In this study, we showed that this bioenergetics mechanistic modeling approach provided a precise, certain, and powerful analysis of the life strategy of C. elegans populations exposed to heavy metals resulting in an evolutionary pressure across successive generations.
Start-up performance of parabolic trough concentrating solar power plants
NASA Astrophysics Data System (ADS)
Ferruzza, Davide; Topel, Monika; Basaran, Ibrahim; Laumert, Björn; Haglind, Fredrik
2017-06-01
Concentrating solar power plants, even though they can be integrated with thermal energy storage, are still subjected to cyclic start-ups and shutdowns. As a consequence, in order to maximize their profitability and performance, flexibility with respect to transient operations is essential. In this regard, two of the key components identified are the steam generation system and the steam turbine. In general, it is desirable to have fast ramp-up rates during the start-up of a power plant. However, ramp-up rates are limited by, among other things, thermal stresses, which, if high enough, can compromise the life of the components. Moreover, from an operability perspective it might not be optimal to design for the highest heating rates, as there may be other components limiting the power plant start-up. Therefore, it is important to look at the interaction between the steam turbine and the steam generator to determine the optimal ramp rates. This paper presents a methodology to account for thermal stress limitations during the power plant start-up, aiming at identifying which components limit the ramp rates. A detailed dynamic model of a parabolic trough power plant was developed and integrated with a control strategy to account for the start-up limitations of both the turbine and the steam generator. The models have been introduced in an existing techno-economic tool developed by the authors (DYESOPT). The results indicated that for each application an optimal range of heating rates can be identified. For the specific case presented in the paper, an optimal range of 7-10 K/min for the evaporator heating rate can result in a 1.7-2.1% increase in electricity production compared to a slower component (4 K/min).
The UK waste input-output table: Linking waste generation to the UK economy.
Salemdeeb, Ramy; Al-Tabbaa, Abir; Reynolds, Christian
2016-10-01
In order to achieve a circular economy, there must be a greater understanding of the links between economic activity and waste generation. This study introduces the first version of the UK waste input-output table, which can be used to quantify both direct and indirect waste arisings across the supply chain. The proposed waste input-output table features 21 industrial sectors and 34 waste types and covers the 2010 time-period. Using the waste input-output table, the study results quantitatively confirm that sectors with a long supply chain (i.e. manufacturing and services sectors) have higher indirect waste generation rates compared with industrial primary sectors (e.g. mining and quarrying) and sectors with a shorter supply chain (e.g. construction). Results also reveal that the construction and the mining and quarrying sectors have the highest waste generation rates, 742 and 694 tonnes per £1m of final demand, respectively. Owing to the aggregated format of the first version of the waste input-output table, the model does not address the relationship between waste generation and recycling activities. Therefore, an updated version of the waste input-output table is expected to be developed to consider this issue. The expanded model would lead to a better understanding of waste and resource flows in the supply chain. © The Author(s) 2016.
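In an input-output framework, the direct-plus-indirect waste intensities follow from the Leontief inverse: m = w(I - A)^(-1), where A holds the inter-industry technical coefficients and w the direct waste coefficients. A two-sector toy example with invented numbers (not the UK table):

import numpy as np

# Toy 2-sector economy: inter-industry requirements per £ of output (invented)
A = np.array([[0.10, 0.30],
              [0.20, 0.05]])
w = np.array([50.0, 700.0])          # direct waste, tonnes per £1m of output

L = np.linalg.inv(np.eye(2) - A)     # Leontief inverse
m = w @ L                            # total intensities per £1m of final demand
print("direct   :", w)
print("total    :", m.round(1))
print("indirect :", (m - w).round(1))

Sector 1 here plays the role of a long-supply-chain sector: its direct waste is small, but most of its footprint arrives indirectly through purchases from sector 2, which mirrors the manufacturing-versus-primary contrast reported above.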
Li, Qiuying; Pham, Hoang
2017-01-01
In this paper, we propose a software reliability model that considers not only error generation but also fault removal efficiency combined with testing coverage information, based on a nonhomogeneous Poisson process (NHPP). During the past four decades, many software reliability growth models (SRGMs) based on NHPP have been proposed to estimate software reliability measures, most of which share the following assumptions: 1) during the testing phase, the fault detection rate changes over time; 2) as a result of imperfect debugging, fault removal is accompanied by a fault re-introduction rate. However, few SRGMs in the literature differentiate between fault detection and fault removal, i.e., they seldom consider imperfect fault removal efficiency. In the practical software development process, fault removal efficiency cannot always be perfect, i.e., the failures detected might not be removed completely, the original faults might still exist, and new faults might be introduced meanwhile, which is referred to as the imperfect debugging phenomenon. In this study, a model aiming to incorporate the fault introduction rate, fault removal efficiency and testing coverage into software reliability evaluation is developed, using testing coverage to express the fault detection rate and using fault removal efficiency to consider the fault repair. We compare the performance of the proposed model with several existing NHPP SRGMs using three sets of real failure data based on five criteria. The results show that the model gives better fitting and predictive performance. PMID:28750091
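A sketch of the general shape of such a model: let c(t) be a testing-coverage function, p the fault removal efficiency, and a(t) = a0 + alpha*m(t) the fault content with error generation; the mean value function m(t) then solves dm/dt = c'(t)/(1 - c(t)) * (a(t) - p*m(t)). The functional forms and numbers below are generic assumptions, not the paper's exact model.

import numpy as np
from scipy.integrate import solve_ivp

a0, alpha, p = 100.0, 0.05, 0.9   # initial faults, introduction rate, removal efficiency
b, k = 0.15, 2.0                  # Weibull-type coverage parameters (assumed)

def c(t):
    return 1.0 - np.exp(-b * t ** k)          # testing coverage over time

def c_dot(t):
    return b * k * t ** (k - 1) * np.exp(-b * t ** k)

def dm_dt(t, m):
    a_t = a0 + alpha * m[0]                   # error generation during debugging
    return [c_dot(t) / (1.0 - c(t)) * (a_t - p * m[0])]

sol = solve_ivp(dm_dt, (1e-6, 20.0), [0.0], dense_output=True)
for t in (5.0, 10.0, 20.0):
    print(f"t = {t:4.1f}: expected detected faults m(t) = {sol.sol(t)[0]:6.1f}")

As t grows, m(t) approaches a0/(p - alpha), the level at which removal balances re-introduction.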
Generation of Complex Karstic Conduit Networks with a Hydro-chemical Model
NASA Astrophysics Data System (ADS)
De Rooij, R.; Graham, W. D.
2016-12-01
The discrete-continuum approach is very well suited to simulate flow and solute transport within karst aquifers. Using this approach, discrete one-dimensional conduits are embedded within a three-dimensional continuum representative of the porous limestone matrix. Typically, however, little is known about the geometry of the karstic conduit network. As such, the discrete-continuum approach is rarely used for practical applications. It may be argued, however, that the uncertainty associated with the geometry of the network could be handled by modeling an ensemble of possible karst conduit networks within a stochastic framework. We propose to generate stochastically realistic karst conduit networks by simulating the widening of conduits as caused by the dissolution of limestone over geologically relevant timescales. We illustrate that advanced numerical techniques make it possible to solve the non-linear, coupled hydro-chemical processes efficiently, so that relatively large and complex networks can be generated in acceptable time frames. Instead of specifying flow boundary conditions on conduit cells to recharge the network, as is typically done in classical speleogenesis models, we specify an effective rainfall rate over the land surface and let model physics determine the amount of water entering the network. This is advantageous since the amount of water entering the network is extremely difficult to reconstruct, whereas the effective rainfall rate may be quantified using paleoclimatic data. Furthermore, we show that poorly known flow conditions may be constrained by requiring a realistic flow field. Using our speleogenesis model we have investigated factors that influence the geometry of simulated conduit networks. We illustrate that our model generates typical branchwork, network and anastomotic conduit systems. Flow, solute transport and water ages in karst aquifers are simulated using a few illustrative networks.
Single photon counting linear mode avalanche photodiode technologies
NASA Astrophysics Data System (ADS)
Williams, George M.; Huntington, Andrew S.
2011-10-01
The false count rate of a single-photon-sensitive photoreceiver consisting of a high-gain, low-excess-noise linear-mode InGaAs avalanche photodiode (APD) and a high-bandwidth transimpedance amplifier (TIA) is fit to a statistical model. The peak height distribution of the APD's multiplied dark current is approximated by the weighted sum of McIntyre distributions, each characterizing dark current generated at a different location within the APD's junction. The peak height distribution approximated in this way is convolved with a Gaussian distribution representing the input-referred noise of the TIA to generate the statistical distribution of the uncorrelated sum. The cumulative distribution function (CDF) representing count probability as a function of detection threshold is computed, and the CDF model fit to empirical false count data. It is found that only k=0 McIntyre distributions fit the empirically measured CDF at high detection threshold, and that false count rate drops faster than photon count rate as detection threshold is raised. Once fit to empirical false count data, the model predicts the improvement of the false count rate to be expected from reductions in TIA noise and APD dark current. Improvement by at least three orders of magnitude is thought feasible with further manufacturing development and a capacitive-feedback TIA (CTIA).
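The statistical pipeline described above, convolving a gain distribution with Gaussian TIA noise and reading off the complementary CDF at each detection threshold, can be sketched numerically as follows; the exponential-tailed gain distribution here is only a stand-in for the weighted sum of McIntyre distributions, and all parameter values are assumed:

```python
import numpy as np

# Output amplitude grid (electrons) for the amplified dark-current peak heights
x = np.linspace(0, 2000, 4001)
dx = x[1] - x[0]

mean_gain = 200.0
p_gain = np.exp(-x / mean_gain)          # placeholder heavy-tailed gain density
p_gain /= p_gain.sum() * dx

sigma_tia = 50.0                          # input-referred TIA noise, e- rms (assumed)
noise_x = np.arange(-400, 400 + dx, dx)
p_noise = np.exp(-0.5 * (noise_x / sigma_tia) ** 2)
p_noise /= p_noise.sum() * dx

# Density of the uncorrelated sum (dark pulse height + TIA noise)
p_sum = np.convolve(p_gain, p_noise) * dx
x_sum = np.arange(len(p_sum)) * dx + x[0] + noise_x[0]

# Complementary CDF: probability a dark event exceeds a given threshold;
# multiplying by the dark-carrier rate would give the false count rate.
ccdf = np.cumsum(p_sum[::-1])[::-1] * dx
for thr in (100, 300, 500):
    print(thr, ccdf[np.searchsorted(x_sum, thr)])
```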
The use of vestibular models for design and evaluation of flight simulator motion
NASA Technical Reports Server (NTRS)
Bussolari, Steven R.; Young, Laurence R.; Lee, Alfred T.
1989-01-01
Quantitative models for the dynamics of the human vestibular system are applied to the design and evaluation of flight simulator platform motion. An optimal simulator motion control algorithm is generated to minimize the vector difference between perceived spatial orientation estimated in flight and in simulation. The motion controller has been implemented on the Vertical Motion Simulator at NASA Ames Research Center and evaluated experimentally through measurement of pilot performance and subjective rating during VTOL aircraft simulation. In general, pilot performance in a longitudinal tracking task (formation flight) did not appear to be sensitive to variations in platform motion condition as long as motion was present. However, pilot assessments of motion fidelity, made with a rating scale designed for this purpose, were sensitive to motion controller design. Platform motion generated with the optimal motion controller was found to be generally equivalent to that generated by conventional linear crossfeed washout. The vestibular models are used to evaluate the motion fidelity of transport category aircraft (Boeing 727) simulation in a pilot performance and simulator acceptability study at the Man-Vehicle Systems Research Facility at NASA Ames Research Center. Eighteen airline pilots, currently flying B-727, were given a series of flight scenarios in the simulator under various conditions of simulator motion. The scenarios were chosen to reflect the flight maneuvers that these pilots might expect to be given during a routine pilot proficiency check. Pilot performance and subjective ratings of simulator fidelity were relatively insensitive to the motion condition, despite large differences in the amplitude of motion provided. This lack of sensitivity may be explained by means of the vestibular models, which predict little difference in the modeled motion sensations of the pilots when different motion conditions are imposed.
A Model of Generating Visual Place Cells Based on Environment Perception and Similar Measure.
Zhou, Yang; Wu, Dewei
2016-01-01
Generating visual place cells (VPCs) is an important task in the field of bioinspired navigation. By analyzing the firing characteristics of biological place cells and existing methods for generating VPCs, a model of generating visual place cells based on environment perception and a similarity measure is abstracted in this paper. The VPC generation process is divided into three phases: environment perception, similarity measurement, and recruitment of a new place cell. According to this process, a specific method for generating VPCs is presented. External reference landmarks are obtained from local invariant image features, and a similarity measure function is designed based on Euclidean distance and a Gaussian function. Simulations validate that the proposed method is effective. The firing characteristics of the generated VPCs are similar to those of biological place cells, and the VPCs' firing fields can be adjusted flexibly by changing the adjustment factor of the firing field (AFFF) and the firing rate threshold (FRT).
NASA Technical Reports Server (NTRS)
Herskovits, E. H.; Itoh, R.; Melhem, E. R.
2001-01-01
OBJECTIVE: The objective of our study was to determine the effects of MR sequence (fluid-attenuated inversion-recovery [FLAIR], proton density-weighted, and T2-weighted) and of lesion location on sensitivity and specificity of lesion detection. MATERIALS AND METHODS: We generated FLAIR, proton density-weighted, and T2-weighted brain images with 3-mm lesions using published parameters for acute multiple sclerosis plaques. Each image contained from zero to five lesions that were distributed among cortical-subcortical, periventricular, and deep white matter regions; on either side; and anterior or posterior in position. We presented images of 540 lesions, distributed among 2592 image regions, to six neuroradiologists. We constructed a contingency table for image regions with lesions and another for image regions without lesions (normal). Each table included the following: the reviewer's number (1-6); the MR sequence; the side, position, and region of the lesion; and the reviewer's response (lesion present or absent [normal]). We performed chi-square and log-linear analyses. RESULTS: The FLAIR sequence yielded the highest true-positive rates (p < 0.001) and the highest true-negative rates (p < 0.001). Regions also differed in reviewers' true-positive rates (p < 0.001) and true-negative rates (p = 0.002). The true-positive rate model generated by log-linear analysis contained an additional sequence-location interaction. The true-negative rate model generated by log-linear analysis confirmed these associations, but no higher order interactions were added. CONCLUSION: We developed software with which we can generate brain images of a wide range of pulse sequences and that allows us to specify the location, size, shape, and intrinsic characteristics of simulated lesions. We found that the use of FLAIR sequences increases detection accuracy for cortical-subcortical and periventricular lesions over that associated with proton density- and T2-weighted sequences.
Scavenging and recombination kinetics in a radiation spur: The successive ordered scavenging events
NASA Astrophysics Data System (ADS)
Al-Samra, Eyad H.; Green, Nicholas J. B.
2018-03-01
This study describes stochastic models to investigate the successive ordered scavenging events in a spur of four radicals, a model system based on a radiation spur. Three simulation models have been developed to obtain the probabilities of the ordered scavenging events: (i) a Monte Carlo random flight (RF) model, (ii) hybrid simulations in which the reaction rate coefficient is used to generate scavenging times for the radicals and (iii) the independent reaction times (IRT) method. The results of these simulations are found to be in agreement with one another. In addition, a detailed master equation treatment is also presented, and used to extract simulated rate coefficients of the ordered scavenging reactions from the RF simulations. These rate coefficients are transient, the rate coefficients obtained for subsequent reactions are effectively equal, and in reasonable agreement with the simple correction for competition effects that has recently been proposed.
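The IRT idea in (iii) can be illustrated in a few lines: each radical independently draws a scavenging time from an exponential distribution with the pseudo-first-order rate, and sorting the draws within each spur gives the ordered events. The sketch below uses assumed rate parameters and ignores the competing recombination channel treated in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Pseudo-first-order scavenging rate (assumed values):
k_s, conc_s = 1.0e10, 1.0e-3      # rate coefficient (M^-1 s^-1), scavenger conc. (M)
rate = k_s * conc_s                # s^-1

n_spurs, n_radicals = 100_000, 4
# Independent scavenging times for each of four radicals in each spur
times = rng.exponential(1.0 / rate, size=(n_spurs, n_radicals))
ordered = np.sort(times, axis=1)   # 1st..4th scavenging event within each spur

print("mean time of k-th scavenging event (s):", ordered.mean(axis=0))
```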
Should metacognition be measured by logistic regression?
Rausch, Manuel; Zehetleitner, Michael
2017-03-01
Are logistic regression slopes suitable to quantify metacognitive sensitivity, i.e. the efficiency with which subjective reports differentiate between correct and incorrect task responses? We analytically show that logistic regression slopes are independent of rating criteria in one specific model of metacognition, which assumes (i) that rating decisions are based on sensory evidence generated independently of the sensory evidence used for primary task responses and (ii) that the distributions of evidence are logistic. Given a hierarchical model of metacognition, logistic regression slopes depend on rating criteria. According to all considered models, regression slopes depend on the primary task criterion. A reanalysis of previous data revealed that massive numbers of trials are required to distinguish between hierarchical and independent models with tolerable accuracy. It is argued that researchers who wish to use logistic regression as a measure of metacognitive sensitivity need to control the primary task criterion and rating criteria. Copyright © 2017 Elsevier Inc. All rights reserved.
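A minimal sketch of the quantity under discussion, the slope from a logistic regression of task accuracy on confidence ratings, is given below on simulated data; the generative settings are invented and correspond roughly to an independent-evidence model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated trials (assumed parameters): correct trials tend to receive
# higher confidence ratings than errors.
n = 2000
correct = rng.integers(0, 2, n)                  # primary-task accuracy (0/1)
rating = rng.normal(3 + 1.2 * correct, 1.0)      # confidence on a 1-6 scale
rating = np.clip(np.round(rating), 1, 6)

# The slope of accuracy regressed on rating is the candidate
# metacognitive-sensitivity index examined in the paper.
model = LogisticRegression().fit(rating.reshape(-1, 1), correct)
print("logistic slope:", model.coef_[0][0])
```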
NASA Astrophysics Data System (ADS)
Norbeck, J. H.; Rubinstein, J. L.
2017-12-01
The earthquake activity in Oklahoma and Kansas that began in 2008 reflects the most widespread instance of induced seismicity observed to date. In this work, we demonstrate that the basement fault stressing conditions that drive seismicity rate evolution are related directly to the operational history of 958 saltwater disposal wells completed in the Arbuckle aquifer. We developed a fluid pressurization model based on the assumption that pressure changes are dominated by reservoir compressibility effects. Using injection well data, we established a detailed description of the temporal and spatial variability in stressing conditions over the 21.5-year period from January 1995 through June 2017. With this stressing history, we applied a numerical model based on rate-and-state friction theory to generate seismicity rate forecasts across a broad range of spatial scales. The model replicated the onset of seismicity, the timing of the peak seismicity rate, and the reduction in seismicity following decreased disposal activity. The behavior of the induced earthquake sequence was consistent with the prediction from rate-and-state theory that the system evolves toward a steady seismicity rate depending on the ratio between the current and background stressing rates. Seismicity rate transients occurred over characteristic timescales inversely proportional to stressing rate. We found that our hydromechanical earthquake rate model outperformed observational and empirical forecast models for one-year forecast durations over the period 2008 through 2016.
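The rate-and-state ingredient can be sketched compactly. In a Dieterich (1994)-type formulation, a state variable γ evolves with the stressing history and the seismicity rate follows as R = r / (γ · Ṡ₀); the toy parameters and the square injection pulse below are assumptions for illustration, not values from the study:

```python
import numpy as np

# Assumed parameters for a Dieterich-type seismicity-rate model
a_sigma = 0.1          # a * sigma, MPa
s_background = 1e-4    # background (tectonic) stressing rate, MPa/yr
r_background = 1.0     # background seismicity rate, events/yr

dt = 0.01                                                  # yr
t = np.arange(0.0, 40.0, dt)
s_dot = np.where((t > 5) & (t < 20), 5e-3, s_background)   # injection pulse

gamma = 1.0 / s_background     # steady-state initial condition
rates = []
for s in s_dot:
    # State evolution: d(gamma)/dt = (1 - gamma * S_dot) / (a * sigma)
    gamma += dt * (1.0 - gamma * s) / a_sigma
    rates.append(r_background / (gamma * s_background))

print("peak rate:", max(rates), "final rate:", rates[-1])
```

At steady state γ → 1/Ṡ, so the rate relaxes toward r·(Ṡ/Ṡ₀), reproducing the paper's point that the system evolves toward a rate set by the ratio of current to background stressing.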
Vertical Integration of Geographic Information Sciences: A Recruitment Model for GIS Education
ERIC Educational Resources Information Center
Yu, Jaehyung; Huynh, Niem Tu; McGehee, Thomas Lee
2011-01-01
An innovative vertical integration model for recruiting to GIS education was introduced and tested following four driving forces: curriculum development, GIS presentations, institutional collaboration, and faculty training. Curriculum development was a useful approach to recruitment, student credit hour generation, and retention-rate improvement.…
Mutations, mutation rates, and evolution at the hypervariable VNTR loci of Yersinia pestis.
Vogler, Amy J; Keys, Christine E; Allender, Christopher; Bailey, Ira; Girard, Jessica; Pearson, Talima; Smith, Kimothy L; Wagner, David M; Keim, Paul
2007-03-01
VNTRs are able to discriminate among closely related isolates of recently emerged clonal pathogens, including Yersinia pestis, the etiologic agent of plague, because of their great diversity. Diversity is driven largely by mutation, but little is known about VNTR mutation rates, factors affecting mutation rates, or the mutational mechanisms. The molecular epidemiological utility of VNTRs will be greatly enhanced when this foundational knowledge is available. Here, we measure mutation rates for 43 VNTR loci in Y. pestis using an in vitro generated population encompassing approximately 96,000 generations. We estimate the combined 43-locus rate and individual rates for 14 loci. A comparison of Y. pestis and Escherichia coli O157:H7 VNTR mutation rates and products revealed a similar relationship between diversity and mutation rate in these two species. Likewise, the relationship between repeat copy number and mutation rate is nearly identical between these species, suggesting a generalized relationship that may be applicable to other species. The single- versus multiple-repeat mutation ratios and the insertion versus deletion mutation ratios were also similar, providing support for a general model for the mutations associated with VNTRs. Finally, we use two small sets of Y. pestis isolates to show how this general model and our estimated mutation rates can be used to compare alternate phylogenies, and to evaluate the significance of genotype matches, near-matches, and mismatches found in empirical comparisons with a reference database.
Structural and leakage integrity of tubes affected by circumferential cracking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hernalsteen, P.
1997-02-01
In this paper the author deals with the notion that circumferential cracks are generally considered unacceptable. He argues for the need to differentiate two facets of such cracks: the issue of the size and growth rate of a crack; and the issue of the structural strength and leakage potential of the tube in the presence of the crack. In this paper the author tries to show that the second point is not a major concern for such cracks. The paper presents data on the structural strength or burst pressure characteristics of steam generator tubes derived from models and databases of experimental work. He also presents a leak rate model, and compares the performance of circumferential and axial cracks as far as burst strength and leak rate. The final conclusion is that, subject to improvement in NDE capabilities (sizing, detection, growth), Steam Generator Defect Specific Management can be used to allow circumferentially degraded tubes to remain in service.
J. D. Carlson; Larry S. Bradshaw; Ralph M. Nelson; Randall R Bensch; Rafal Jabrzemski
2007-01-01
The application of a next-generation dead-fuel moisture model, the 'Nelson model', to four timelag fuel classes using an extensive 21-month dataset of dead-fuel moisture observations is described. Developed by Ralph Nelson in the 1990s, the Nelson model is a dead-fuel moisture model designed to take advantage of frequent automated weather observations....
A generalized land-use scenario generator: a case study for the Congo basin.
NASA Astrophysics Data System (ADS)
Caporaso, Luca; Tompkins, Adrian Mark; Biondi, Riccardo; Bell, Jean Pierre
2014-05-01
The impact of deforestation on climate is often studied using highly idealized "instant deforestation" experiments, owing to the lack of generalized deforestation scenario generators coupled to climate model land-surface schemes. A new deforestation scenario generator, the deforestation ScenArio GEnerator (FOREST-SAGE), has therefore been developed to fulfill this role. The model produces distributed maps of deforestation rates that account for local factors such as proximity to transport networks, distance-weighted population density, forest fragmentation, and the presence of protected areas and logging concessions. The integrated deforestation risk is scaled to give the deforestation rate specified by macro-region scenarios, such as "business as usual" or "increased protection legislation", as a function of future time. FOREST-SAGE was initialized and validated using the MODerate Resolution Imaging Spectroradiometer Vegetation Continuous Field data. Despite the high cloud coverage of the Congo Basin over the year, we were able to validate the results with high confidence from 2001 to 2010 in a large forested area. Furthermore, a set of scenarios has been used to provide a range of possible pathways for the evolution of land-use change over the Congo Basin for the period 2010-2030.
NASA Technical Reports Server (NTRS)
Madnia, C. K.; Frankel, S. H.; Givi, P.
1992-01-01
The presently obtained closed-form analytical expressions, which predict the limiting rate of mean reactant conversion in homogeneous turbulent flows under the influence of a binary reaction, are derived via the single-point pdf method based on amplitude mapping closure. With this model, the maximum rate of the mean reactant's decay can be conveniently expressed in terms of definite integrals of the parabolic cylinder functions. The results obtained are shown to be in good agreement with data generated by direct numerical simulations.
Drilling in bone: modeling heat generation and temperature distribution.
Davidson, Sean R; James, David F
2003-06-01
Thermo-mechanical equations were developed from machining theory to predict heat generation due to drilling and were coupled with a heat transfer FEM simulation to predict the temperature rise and thermal injury in bone during a drilling operation. The rotational speed, feed rate, drill geometry and bone material properties were varied in a parametric analysis to determine the importance of each on temperature rise and therefore on thermal damage. It was found that drill speed, feed rate and drill diameter had the most significant thermal impact while changes in drill helix angle, point angle and bone thermal properties had relatively little effect.
Mazutti, Marcio A; Zabot, Giovani; Boni, Gabriela; Skovronski, Aline; de Oliveira, Débora; Di Luccio, Marco; Rodrigues, Maria Isabel; Maugeri, Francisco; Treichel, Helen
2010-04-01
This work investigated the growth of Kluyveromyces marxianus NRRL Y-7571 in solid-state fermentation in a medium composed of sugarcane bagasse, molasses, corn steep liquor and soybean meal within a packed-bed bioreactor. Seven experimental runs were carried out to evaluate the effects of flow rate and inlet air temperature on the following microbial rates: cell mass production, total reducing sugar and oxygen consumption, carbon dioxide and ethanol production, metabolic heat and water generation. A mathematical model based on an artificial neural network was developed to predict the above-mentioned microbial rates as a function of the fermentation time, initial total reducing sugar concentration, inlet and outlet air temperatures. The results showed that the microbial rates were temperature dependent for the range 27-50 degrees C. The proposed model efficiently predicted the microbial rates, indicating that the neural network approach could be used to simulate the microbial growth in SSF.
DEPENDENCE OF X-RAY BURST MODELS ON NUCLEAR REACTION RATES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cyburt, R. H.; Keek, L.; Schatz, H.
2016-10-20
X-ray bursts are thermonuclear flashes on the surface of accreting neutron stars, and reliable burst models are needed to interpret observations in terms of properties of the neutron star and the binary system. We investigate the dependence of X-ray burst models on uncertainties in (p, γ), (α, γ), and (α, p) nuclear reaction rates using fully self-consistent burst models that account for the feedbacks between changes in nuclear energy generation and changes in astrophysical conditions. A two-step approach first identified sensitive nuclear reaction rates in a single-zone model with ignition conditions chosen to match calculations with a state-of-the-art 1D multi-zone model based on the Kepler stellar evolution code. All relevant reaction rates on neutron-deficient isotopes up to mass 106 were individually varied by a factor of 100 up and down. Calculations of the 84 changes in reaction rate with the highest impact were then repeated in the 1D multi-zone model. We find a number of uncertain reaction rates that affect predictions of light curves and burst ashes significantly. The results provide insights into the nuclear processes that shape observables from X-ray bursts, and guidance for future nuclear physics work to reduce nuclear uncertainties in X-ray burst models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, X.D.; Krylov, S.N.; Ren, L.
1997-11-01
Photoinduced toxicity of polycyclic aromatic hydrocarbons (PAHs) occurs via photosensitization reactions (e.g., generation of singlet-state oxygen) and by photomodification (photooxidation and/or photolysis) of the chemicals to more toxic species. The quantitative structure-activity relationship (QSAR) described in the companion paper predicted, in theory, that photosensitization and photomodification additively contribute to toxicity. To substantiate this QSAR modeling exercise it was necessary to show that toxicity can be described by empirically derived parameters. The toxicity of 16 PAHs to the duckweed Lemna gibba was measured as inhibition of leaf production in simulated solar radiation (a light source with a spectrum similar to that of sunlight). A predictive model for toxicity was generated based on the theoretical model developed in the companion paper. The photophysical descriptors required of each PAH for modeling were efficiency of photon absorbance, relative uptake, quantum yield for triplet-state formation, and the rate of photomodification. The photomodification rates of the PAHs showed a moderate correlation to toxicity, whereas a derived photosensitization factor (PSF; based on absorbance, triplet-state quantum yield, and uptake) for each PAH showed only a weak, complex correlation to toxicity. However, summing the rate of photomodification and the PSF resulted in a strong correlation to toxicity that had predictive value. When the PSF and a derived photomodification factor (PMF; based on the photomodification rate and toxicity of the photomodified PAHs) were summed, an excellent explanatory model of toxicity was produced, substantiating the additive contributions of the two factors.
On beyond the standard model for high explosives: challenges & obstacles to surmount
DOE Office of Scientific and Technical Information (OSTI.GOV)
Menikoff, Ralph Ds
2009-01-01
Plastic-bonded explosives (PBX) are heterogeneous materials. Nevertheless, current explosive models treat them as homogeneous materials. To compensate, an empirically determined effective burn rate is used in place of a chemical reaction rate. A significant limitation of these models is that different burn parameters are needed for applications in different regimes; for example, shock initiation of a PBX at different initial temperatures or different initial densities. This is due to temperature fluctuations generated when a heterogeneous material is shock compressed. Localized regions of high temperature are called hot spots. They dominate the reaction for shock initiation. The understanding of hot spot generation and subsequent evolution has been limited by the inability to measure transients on small spatial (≈1 µm) and small temporal (≈1 ns) scales in the harsh environment of a detonation. With the advances in computing power, it is natural to try to gain an understanding of hot-spot initiation with numerical experiments based on meso-scale simulations that resolve material heterogeneities and utilize realistic chemical reaction rates. However, to capture the underlying physics correctly, such high resolution simulations will require more than fast computers with a large amount of memory. Here we discuss some of the issues that need to be addressed. These include dissipative mechanisms that generate hot spots, accurate thermal properties for the equations of state of the reactants and products, and controlling numerical entropy error from shock impedance mismatches at material interfaces. The latter can generate artificial hot spots and lead to premature reaction. Eliminating numerical hot spots is critical for shock initiation simulations due to the positive feedback between the energy release from reaction and the hydrodynamic flow.
Extension of the master sintering curve for constant heating rate modeling
NASA Astrophysics Data System (ADS)
McCoy, Tammy Michelle
The purpose of this work is to extend the functionality of the Master Sintering Curve (MSC) such that it can be used as a practical tool for predicting sintering schedules that combine a constant heating rate with an isothermal hold. Rather than only predicting a final density for the object of interest, the extension to the MSC models a sintering run from start to finish. Because the Johnson model does not incorporate this capability, the work presented extends what has already been shown in the literature to be a valuable resource in many sintering situations. A predicted sintering curve that combines a constant heating rate with an isothermal hold is more representative of real-life sintering operations. This research offers the possibility of predicting the sintering schedule for a material, thereby providing advance information about the extent of sintering, the time schedule for sintering, and the sintering temperature with a high degree of accuracy and repeatability.

The research conducted in this thesis focuses on the development of a working model for predicting the sintering schedules of several stabilized zirconia powders having the compositions YSZ (HSY8), 10Sc1CeSZ, 10Sc1YSZ, and 11ScSZ1A. The compositions of the four powders are first verified using x-ray diffraction (XRD), and the particle size and surface area are verified using a particle size analyzer and BET analysis, respectively. The sintering studies were conducted on powder compacts using a double push-rod dilatometer. Density measurements are obtained both geometrically and using the Archimedes method. Each of the four powders is pressed into ¼" diameter pellets using a manual press with no additives, such as a binder or lubricant. Using the dilatometer, shrinkage data for the pellets are obtained over several different heating rates. The shrinkage data are then converted to the change in relative density of the pellets, based on the green density and the theoretical density of each composition. The Master Sintering Curve (MSC) model is then used to generate data from which the final density of the respective powder can be predicted over a range of heating rates.

The Elton Master Sintering Curve Extension (EMSCE) is developed to extend the functionality of the MSC tool. The parameters generated from the original MSC are used in tandem with the solution to the closed integral Θ ≡ (1/c) ∫_{T0}^{T} (1/T) exp(−Q/RT) dT over a set range of temperatures. The EMSCE is used to generate a set of sintering curves having both constant-heating-rate and isothermal-hold portions. The EMSCE extends the usefulness of the MSC by allowing the generation of a complete sintering schedule rather than only predicting the final relative density of a given material. The EMSCE is verified by generating a set of curves having both a constant heating rate and an isothermal hold for the heat treatment. The modeled curves are verified experimentally, and a comparison of the model and experimental results is given for a selected composition.

Porosity within the final product can hinder the product from sintering to full density. It is shown that some of the compositions studied did not sinter to full density because of the presence of large pores that could not be eliminated in a reasonable amount of time. A statistical analysis of the volume fraction of porosity is completed to show the significance of its presence in the final product.
The reason this is relevant to the MSC is that the model does not take into account the presence of porosity and assumes that the samples sinter to full density. When this does not happen, the model actually under-predicts the final density of the material.
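For reference, the work-parameter integral at the heart of the MSC is straightforward to evaluate numerically for a constant heating rate. The sketch below uses an assumed activation energy and temperature range; in practice Q is found by minimizing the scatter of the density-versus-Θ data across heating rates:

```python
import numpy as np
from scipy.integrate import quad

# MSC work parameter for a constant heating rate c (assumed values):
#   Theta = (1/c) * integral_{T0}^{T} (1/T) * exp(-Q / (R T)) dT
R = 8.314          # gas constant, J/(mol K)
Q = 600e3          # apparent activation energy, J/mol (assumed)
c = 10.0 / 60.0    # heating rate: 10 K/min expressed in K/s
T0, T1 = 300.0, 1700.0

theta, _ = quad(lambda T: np.exp(-Q / (R * T)) / T, T0, T1)
theta /= c
print(f"Theta({T1:.0f} K) = {theta:.3e}")
# Relative density is then read off the master curve rho(log Theta)
# constructed from dilatometry runs at several heating rates.
```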
Liu, Jianli; Lughofer, Edwin; Zeng, Xianyi
2015-01-01
Modeling human aesthetic perception of visual textures is important and valuable in numerous industrial domains, such as product design, architectural design, and decoration. Based on results from a semantic differential rating experiment, we modeled the relationship between low-level basic texture features and the aesthetic properties involved in human aesthetic texture perception. First, we compute basic texture features from textural images using four classical methods. These features are neutral, objective, and independent of the socio-cultural context of the visual textures. Then, we conduct a semantic differential rating experiment to collect from evaluators their aesthetic perceptions of selected textural stimuli. In the semantic differential rating experiment, eight pairs of aesthetic properties are chosen, which are strongly related to the socio-cultural context of the selected textures and to human emotions. They are easily understood and connected to everyday life. We propose a hierarchical feed-forward layer model of aesthetic texture perception and assign the eight pairs of aesthetic properties to different layers. Finally, we describe the generation of multiple linear and non-linear regression models for aesthetic prediction by taking dimensionality-reduced texture features and aesthetic properties of visual textures as dependent and independent variables, respectively. Our experimental results indicate that the relationships between each layer and its neighbors in the hierarchical feed-forward layer model of aesthetic texture perception can be fitted well by linear functions, and the models thus generated can successfully bridge the gap between computational texture features and aesthetic texture properties.
Sreedevi, Gudapati; Prasad, Yenumula Gerard; Prabhakar, Mathyam; Rao, Gubbala Ramachandra; Vennila, Sengottaiyan; Venkateswarlu, Bandi
2013-01-01
Temperature-driven development and survival rates of the mealybug, Phenacoccus solenopsis Tinsley (Hemiptera: Pseudococcidae), were examined at nine constant temperatures (15, 20, 25, 27, 30, 32, 35 and 40°C) on hibiscus (Hibiscus rosa-sinensis L.). Crawlers successfully completed development to the adult stage between 15 and 35°C, although their survival was affected at low temperatures. Two linear and four nonlinear models were fitted to describe developmental rates of P. solenopsis as a function of temperature, and for estimating thermal constants and bioclimatic thresholds (lower, optimum and upper temperature thresholds for development: Tmin, Topt and Tmax, respectively). Estimated thresholds between the two linear models were statistically similar. Ikemoto and Takai's linear model permitted testing the equivalence of lower developmental thresholds for life stages of P. solenopsis reared on two hosts, hibiscus and cotton. Thermal constants required for completion of cumulative development of female and male nymphs and for the whole generation were significantly lower on hibiscus (222.2, 237.0, 308.6 degree-days, respectively) compared to cotton. Three nonlinear models performed better in describing the developmental rate for immature instars and cumulative life stages of female and male and for generation, based on goodness-of-fit criteria. The simplified β type distribution function estimated Topt values closer to the observed maximum rates. The thermodynamic SSI model indicated no significant differences in the intrinsic optimum temperature estimates for different geographical populations of P. solenopsis. The estimated bioclimatic thresholds and the observed survival rates of P. solenopsis indicate the species to be high-temperature adaptive, and explain the field abundance of P. solenopsis on its host plants. PMID:24086597
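The linear degree-day portion of such an analysis is easy to reproduce. The sketch below fits r(T) = a + bT to hypothetical development-rate data and recovers the lower threshold Tmin = −a/b and the thermal constant K = 1/b in degree-days, the same quantities reported above (the data points are invented):

```python
import numpy as np

# Hypothetical development rates (1/day) at constant temperatures (°C)
T = np.array([15.0, 20.0, 25.0, 27.0, 30.0])
rate = np.array([0.010, 0.032, 0.055, 0.064, 0.077])

# Linear model r(T) = a + b*T; np.polyfit returns [slope, intercept]
slope, intercept = np.polyfit(T, rate, 1)

t_min = -intercept / slope      # lower developmental threshold, °C
K = 1.0 / slope                 # thermal constant, degree-days
print(f"Tmin = {t_min:.1f} °C, thermal constant K = {K:.0f} degree-days")
```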
Formation of inorganic nitrogenous byproducts in aqueous solution under ultrasound irradiation.
Yao, Juanjuan; Chen, Longfu; Chen, Xiangyu; Zhou, Lingxi; Liu, Wei; Zhang, Zhi
2018-04-01
The effects of ultrasonic frequency, power intensity, temperature and sparged gas on the generation of the nitrogenous by-products NO2- and NO3- have been investigated, and a new kinetics model of NO2- and NO3- generation was also explored. The results show that the highest primary generation rates of NO2- and NO3- by direct sonolysis in the cavitation bubbles (represented by k1' and k2', respectively) were obtained at 600 kHz and 200 kHz, respectively, within the applied ultrasonic frequency range of 200 to 800 kHz. The primary generation rate of NO2- (k1') increased with increasing ultrasonic intensity, while the primary generation rate of NO3- (k2') decreased. Lower temperature is beneficial to the primary generation of both NO2- and NO3- in the cavitation bubbles. The optimal overall yields of both NO2- and NO3- were obtained at an N2/O2 volume ratio (in the sparged gas) of 3:1, which is close to the ratio of N2/O2 in air. Dissolved O2 is the dominant oxygen source for both NO and NO2, compared with water vapor. Ultrasonic irradiation can significantly enhance the recovery rates of dissolved N2 and O2 and thus keep the N2 fixation reaction going even without aeration. Copyright © 2017 Elsevier B.V. All rights reserved.
New Methodology for Estimating Fuel Economy by Vehicle Class
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chin, Shih-Miao; Dabbs, Kathryn; Hwang, Ho-Ling
2011-01-01
This work was carried out for the Office of Highway Policy Information to develop a new methodology to generate annual estimates of average fuel efficiency and the number of motor vehicles registered by vehicle class for Table VM-1 of the Highway Statistics annual publication. This paper describes the new methodology developed under this effort and compares the results of the existing manual method and the new systematic approach. The methodology takes a two-step approach. First, preliminary fuel efficiency rates are estimated based on vehicle stock models for different classes of vehicles. Then, a reconciliation model is used to adjust the initial fuel consumption rates from the vehicle stock models to match the VMT information for each vehicle class and the reported total fuel consumption. This reconciliation model utilizes a systematic approach that produces documentable and reproducible results. The basic framework utilizes a mathematical programming formulation to minimize the deviations between the fuel economy estimates published in the previous year's Highway Statistics and the results from the vehicle stock models, subject to the constraint that fuel consumption for the different vehicle classes must sum to the total fuel consumption estimate published in Table MF-21 of the current year's Highway Statistics. The results generated from this new approach provide a smoother time series for the fuel economies by vehicle class. It also utilizes the most up-to-date and best available data with sound econometric models to generate MPG estimates by vehicle class.
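A toy version of that reconciliation step, stated as a constrained least-squares problem, might look as follows; the vehicle classes, VMT figures, and rates are placeholders, not Highway Statistics values:

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder inputs: adjust preliminary gallons-per-mile rates g_i (from
# the vehicle stock models) as little as possible while matching the
# published total fuel consumption.
vmt = np.array([1100.0, 300.0, 180.0, 290.0])   # billion miles by class
g0 = np.array([0.045, 0.055, 0.14, 0.17])        # preliminary gal/mile
total_fuel = 120.0                                # billion gallons (target total)

res = minimize(
    lambda g: np.sum(((g - g0) / g0) ** 2),      # stay close to initial rates
    g0,
    constraints=[{"type": "eq",
                  "fun": lambda g: vmt @ g - total_fuel}],
)
g = res.x
print("adjusted gal/mile:", g.round(4))
print("implied MPG by class:", (1.0 / g).round(1))
```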
Sarnoff JND Vision Model for Flat-Panel Design
NASA Technical Reports Server (NTRS)
Brill, Michael H.; Lubin, Jeffrey
1998-01-01
This document describes adaptation of the basic Sarnoff JND Vision Model created in response to the NASA/ARPA need for a general-purpose model to predict the perceived image quality attained by flat-panel displays. The JND model predicts the perceptual ratings that humans will assign to a degraded color-image sequence relative to its nondegraded counterpart. Substantial flexibility is incorporated into this version of the model so it may be used to model displays at the sub-pixel and sub-frame level. To model a display (e.g., an LCD), the input-image data can be sampled at many times the pixel resolution and at many times the digital frame rate. The first stage of the model downsamples each sequence in time and in space to physiologically reasonable rates, but with minimum interpolative artifacts and aliasing. Luma and chroma parts of the model generate (through multi-resolution pyramid representation) a map of differences between test and reference called the JND map, from which a summary rating predictor is derived. The latest model extensions have done well in calibration against psychophysical data and against image-rating data given a CRT-based front-end. The software was delivered to NASA Ames and is being integrated with LCD display models at that facility.
2011-06-01
Microturbine. Given the approximate nature of the source data and the gas production models, this material can only be used for a preliminary assessment... The methane generation rate, k, used in the first-order decay model can vary widely from landfill to landfill and is partly dependent on waste composition... [Tabular residue: state, status (active/closed/closure in progress), and gross power generation potential (kW) by installation; e.g., 345 kW, Army White Sands Missile Range, Dona Ana, NM, active.]
Forecasting paratransit services demand : review and recommendations.
DOT National Transportation Integrated Search
2013-06-01
Travel demand forecasting tools for Florida's paratransit services are outdated, utilizing old national trip generation rate generalities and simple linear regression models. In its guidance for the development of mandated Transportation Disadv...
Evaluating mallard adaptive management models with time series
Conn, P.B.; Kendall, W.L.
2004-01-01
Wildlife practitioners concerned with midcontinent mallard (Anas platyrhynchos) management in the United States have instituted a system of adaptive harvest management (AHM) as an objective format for setting harvest regulations. Under the AHM paradigm, predictions from a set of models that reflect key uncertainties about processes underlying population dynamics are used in coordination with optimization software to determine an optimal set of harvest decisions. Managers use comparisons of the predictive abilities of these models to gauge the relative truth of different hypotheses about density-dependent recruitment and survival, with better-predicting models given more weight in the determination of harvest regulations. We tested the effectiveness of this strategy by examining convergence rates of 'predictor' models when the true model for population dynamics was known a priori. We generated time series for cases when the a priori model was one of the predictor models, as well as for several cases when the a priori model was not in the model set. We further examined the addition of different levels of uncertainty into the variance structure of predictor models, reflecting different levels of confidence about estimated parameters. We showed that in certain situations the model-selection process favors a predictor model that incorporates the hypotheses of additive harvest mortality and weakly density-dependent recruitment, even when that model is not used to generate the data. Higher levels of predictor model variance led to decreased rates of convergence to the model that generated the data, but model weight trajectories were in general more stable. We suggest that predictive models should incorporate all sources of uncertainty about estimated parameters, that the variance structure should be similar for all predictor models, and that models with different functional forms for population dynamics should be considered for inclusion in predictor model sets. All of these suggestions should help lower the probability of erroneous learning in mallard AHM and adaptive management in general.
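The weight-updating mechanism at the center of AHM is ordinary Bayesian model averaging over a discrete model set. A minimal sketch with invented numbers, in which inflating the predictive variance visibly slows convergence of the weights, is:

```python
import numpy as np
from scipy.stats import norm

# Toy annual model-weight update: each predictor model issues a Gaussian
# prediction of next year's population; weights are updated by Bayes' rule
# using the likelihood of the observed population size.
weights = np.array([0.25, 0.25, 0.25, 0.25])   # prior model weights
pred_mean = np.array([7.8, 8.4, 8.9, 9.3])     # predicted size (millions), by model
pred_sd = np.array([0.6, 0.6, 0.6, 0.6])       # predictive sd, incl. parameter uncertainty
observed = 8.1                                  # observed population size

likelihood = norm.pdf(observed, loc=pred_mean, scale=pred_sd)
weights = weights * likelihood
weights /= weights.sum()
print("posterior model weights:", weights.round(3))
```

Re-running with a larger pred_sd flattens the likelihoods and hence the weight updates, which is the stability-versus-learning trade-off noted above.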
Multi-Item Direct Behavior Ratings: Dependability of Two Levels of Assessment Specificity
ERIC Educational Resources Information Center
Volpe, Robert J.; Briesch, Amy M.
2015-01-01
Direct Behavior Rating-Multi-Item Scales (DBR-MIS) have been developed as formative measures of behavioral assessment for use in school-based problem-solving models. Initial research has examined the dependability of composite scores generated by summing all items comprising the scales. However, it has been argued that DBR-MIS may offer assessment…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krogh, B.; Chow, J.H.; Javid, H.S.
1983-05-01
A multi-stage formulation of the problem of scheduling generation, load shedding and short term transmission capacity for the alleviation of a viability emergency is presented. The formulation includes generation rate of change constraints, a linear network solution, and a model of the short term thermal overload capacity of transmission lines. The concept of rotating transmission line overloads for emergency state control is developed. The ideas are illustrated by a numerical example.
Ferguson, Christobel M; Croke, Barry F W; Beatson, Peter J; Ashbolt, Nicholas J; Deere, Daniel A
2007-06-01
In drinking water catchments, reduction of pathogen loads delivered to reservoirs is an important priority for the management of raw source water quality. To assist with the evaluation of management options, a process-based mathematical model (pathogen catchment budgets - PCB) is developed to predict Cryptosporidium, Giardia and E. coli loads generated within and exported from drinking water catchments. The model quantifies the key processes affecting the generation and transport of microorganisms from humans and animals using land use and flow data, and catchment specific information including point sources such as sewage treatment plants and on-site systems. The resultant pathogen catchment budgets (PCB) can be used to prioritize the implementation of control measures for the reduction of pathogen risks to drinking water. The model is applied in the Wingecarribee catchment and used to rank those sub-catchments that would contribute the highest pathogen loads in dry weather, and in intermediate and large wet weather events. A sensitivity analysis of the model identifies that pathogen excretion rates from animals and humans, and manure mobilization rates are significant factors determining the output of the model and thus warrant further investigation.
Estimation of methane emission rate changes using age-defined waste in a landfill site.
Ishii, Kazuei; Furuichi, Toru
2013-09-01
Long term methane emissions from landfill sites are often predicted by first-order decay (FOD) models, in which the default coefficients for the methane generation potential and the methane generation rate given by the Intergovernmental Panel on Climate Change (IPCC) are usually used. However, previous studies have demonstrated the large uncertainty in these coefficients because they are derived from a calibration procedure under ideal steady-state conditions, not actual landfill site conditions. In this study, the coefficients in the FOD model were estimated by a new approach to predict long term methane generation more precisely by considering region-specific conditions. In the new approach, age-defined waste samples, which had been under actual landfill site conditions, were collected in Hokkaido, Japan (a cold region), and the time series data on the age-defined waste samples' methane generation potential were used to estimate the coefficients in the FOD model. The degradation coefficients were 0.0501/y and 0.0621/y for paper and food waste, and the methane generation potentials were 214.4 mL/g-wet waste and 126.7 mL/g-wet waste for paper and food waste, respectively. These coefficients were compared with the default coefficients given by the IPCC. Although the degradation coefficient for food waste was smaller than the default value, the other coefficients were within the range of the default coefficients. With these new coefficients, the long term methane emissions from the landfill site were estimated at 1.35 × 10^4 m^3 CH4, which corresponds to approximately 2.53% of the total carbon dioxide emissions in the city (5.34 × 10^5 t-CO2/y). Copyright © 2013 Elsevier Ltd. All rights reserved.
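Using the coefficients reported in this study, the FOD generation rate for a batch of waste follows directly; the disposal masses in the sketch below are placeholders:

```python
import numpy as np

# First-order decay (FOD) sketch using the coefficients estimated above;
# the waste masses are placeholders for a real disposal history.
k = {"paper": 0.0501, "food": 0.0621}     # degradation coefficients, 1/yr
L0 = {"paper": 214.4, "food": 126.7}      # generation potential, mL CH4 / g wet waste

def methane_rate(waste_g, kind, t_years):
    """CH4 generation rate (mL/yr) from one waste batch, t years after burial:
    dG/dt = L0 * k * exp(-k t) * mass."""
    return L0[kind] * k[kind] * np.exp(-k[kind] * t_years) * waste_g

t = np.arange(0, 30)
total = methane_rate(1e6, "paper", t) + methane_rate(1e6, "food", t)
print("year-1 generation rate (mL/yr):", round(total[1]))
```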
Wind-energy recovery by a static Scherbius induction generator
NASA Astrophysics Data System (ADS)
Smith, G. A.; Nigim, K. A.
1981-11-01
The paper describes a technique for controlling a doubly fed induction generator driven by a windmill, or other form of variable-speed prime mover, to provide power generation into the national grid system. The secondary circuit of the generator is supplied at a variable frequency from a current source inverter which for test purposes is rated to allow energy recovery, from a simulated windmill, from maximum speed to standstill. To overcome the stability problems normally associated with doubly fed machines a novel signal generator, which is locked in phase with the rotor EMF, controls the secondary power to provide operation over a wide range of subsynchronous and supersynchronous speeds. Consideration of power flow enables the VA rating of the secondary power source to be determined as a function of the gear ratio and online operating range of the system. A simple current source model is used to predict performance which is compared with experimental results. The results indicate a viable system, and suggestions for further work are proposed.
NASA Technical Reports Server (NTRS)
Glass, Christopher E.
1989-01-01
The effects of cylindrical leading-edge sweep on surface pressure and heat transfer rate for swept shock wave interference were investigated. Experimental tests were conducted in the Calspan 48-inch Hypersonic Shock Tunnel at a nominal Mach number of 8, a nominal unit Reynolds number of 1.5 × 10^6 per foot, leading-edge and incident-shock-generator sweep angles of 0, 15, and 30 deg, and an incident shock generator angle of attack fixed at 12.5 deg. Detailed surface pressure and heat transfer rate on the cylindrical leading edge of a swept shock wave interference model were measured in the region of maximum surface pressure and heat transfer rate. Results show that pressure and heat transfer rate on the cylindrical leading edge of the shock wave interference model were reduced as the sweep was increased over the range of tested parameters. Peak surface pressure and heat transfer rate on the cylinder were about 10 and 30 times the undisturbed-flow stagnation point values, respectively, for the 0 deg sweep test. A comparison of the 15 and 30 deg swept results with the 0 deg swept results showed that peak pressure was reduced by about 13 percent and 44 percent, respectively, and peak heat transfer rate was reduced by about 7 percent and 27 percent, respectively.
Neumann, Rebecca B.; Blazewicz, Steven J.; Conaway, Christopher H.; ...
2015-12-16
Quantifying rates of microbial carbon transformation in peatlands is essential for gaining mechanistic understanding of the factors that influence methane emissions from these systems, and for predicting how emissions will respond to climate change and other disturbances. In this study, we used porewater stable isotopes collected from both the edge and center of a thermokarst bog in Interior Alaska to estimate in situ microbial reaction rates. We expected that near the edge of the thaw feature, actively thawing permafrost and greater abundance of sedges would increase carbon, oxygen and nutrient availability, enabling faster microbial rates relative to the center of the thaw feature. We developed three different conceptual reaction networks that explained the temporal change in porewater CO2, CH4, δ13C-CO2 and δ13C-CH4. All three reaction-network models included methane production, methane oxidation and CO2 production, and two of the models included homoacetogenesis — a reaction not previously included in isotope-based porewater models. All three models fit the data equally well, but rates resulting from the models differed. Most notably, inclusion of homoacetogenesis altered the modeled pathways of methane production when the reaction was directly coupled to methanogenesis, and it decreased gross methane production rates by up to a factor of five when it remained decoupled from methanogenesis. The ability of all three conceptual reaction networks to successfully match the measured data indicates that this technique for estimating in-situ reaction rates requires other data and information from the site to confirm the considered set of microbial reactions. Despite these differences, all models indicated that, as expected, rates were greater at the edge than in the center of the thaw bog, that rates at the edge increased more during the growing season than did rates in the center, and that the ratio of acetoclastic to hydrogenotrophic methanogenesis was greater at the edge than in the center. In both locations, modeled rates (excluding methane oxidation) increased with depth. A puzzling outcome from the effort was that none of the models could fit the porewater dataset without generating "fugitive" carbon (i.e., methane or acetate generated by the models but not detected at the field site), indicating that either our conceptualization of the reactions occurring at the site remains incomplete or our site measurements are missing important carbon transformations and/or carbon fluxes. This model-data discrepancy will motivate and inform future research efforts focused on improving our understanding of carbon cycling in permafrost wetlands.
NASA Technical Reports Server (NTRS)
Weber, Arthur L.
2003-01-01
Our research objective is to understand and model the chemical processes on the primitive Earth that generated the first autocatalytic molecules and microstructures involved in the origin of life. Our approach involves: (a) investigation of a model origin-of-life process named the Sugar Model that is based on the reaction of formaldehyde-derived sugars (trioses and tetroses) with ammonia, and (b) elucidation of the constraints imposed on the chemistry of the origin of life by the fixed energies and rates of C,H,O-organic reactions under mild aqueous conditions. Recently, we demonstrated that under mild aqueous conditions the Sugar Model process yields autocatalytic products and generates organic microspherules (2-20 micron dia.) that exhibit budding, size uniformity, and chain formation. We also discovered that the sugar substrates of the Sugar Model are capable of reducing nitrite to ammonia under mild aqueous conditions. In addition, studies done in collaboration with Sandra Pizzarello (Arizona State University) revealed that chiral amino acids (including meteoritic isovaline) catalyze both the synthesis and the specific handedness of chiral sugars. Our systematic survey of the energies and rates of reactions of C,H,O-organic substrates under mild aqueous conditions revealed several general principles (rules) that govern the direction and rate of organic reactions. These reactivity principles constrain the structure of chemical pathways used in the origin of life, and in modern and primitive metabolism.
Prospects for distinguishing dark matter models using annual modulation
Witte, Samuel J.; Gluscevic, Vera; McDermott, Samuel D.
2017-02-24
It has recently been demonstrated that, in the event of a putative signal in dark matter direct detection experiments, properly identifying the underlying dark matter-nuclei interaction promises to be a challenging task. Given the most optimistic expectations for the number counts of recoil events in the forthcoming Generation 2 experiments, differentiating between interactions that produce distinct features in the recoil energy spectra will only be possible if a strong signal is observed simultaneously on a variety of complementary targets. However, there is a wide range of viable theories that give rise to virtually identical energy spectra, and may only differ by the dependence of the recoil rate on the dark matter velocity. In this work, we investigate how degeneracy between such competing models may be broken by analyzing the time dependence of nuclear recoils, i.e. the annual modulation of the rate. For this purpose, we simulate dark matter events for a variety of interactions and experiments, and perform a Bayesian model-selection analysis on all simulated data sets, evaluating the chance of correctly identifying the input model for a given experimental setup. Lastly, we find that including information on the annual modulation of the rate may significantly enhance the ability of a single target to distinguish dark matter models with nearly degenerate recoil spectra, but only with exposures beyond the expectations of Generation 2 experiments.
A thermodynamic framework for the study of crystallization in polymers
NASA Astrophysics Data System (ADS)
Rao, I. J.; Rajagopal, K. R.
In this paper, we present a new thermodynamic framework within the context of continuum mechanics, to predict the behavior of crystallizing polymers. The constitutive models that are developed within this thermodynamic setting are able to describe the main features of the crystallization process. The model is capable of capturing the transition from fluid-like to solid-like behavior in a rational manner without appealing to any ad hoc transition criterion. The anisotropy of the crystalline phase is built into the model, and the specific anisotropy of the crystalline phase depends on the deformation in the melt. These features are incorporated into a recent framework that associates different natural configurations and material symmetries with distinct microstructural features within the body that arise during the process under consideration. Specific models are generated by choosing particular forms for the internal energy, entropy and the rate of dissipation. Equations governing the evolution of the natural configurations and the rate of crystallization are obtained by maximizing the rate of dissipation, subject to appropriate constraints. The initiation criterion, marking the onset of crystallization, arises naturally in this setting in terms of the thermodynamic functions. The model generated within such a framework is used to simulate bi-axial extension of a polymer film that is undergoing crystallization. The predictions of the theory that has been proposed are consistent with the experimental results (see [28] and [7]).
NASA Technical Reports Server (NTRS)
Hoppa, Mary Ann; Wilson, Larry W.
1994-01-01
There are many software reliability models which try to predict future performance of software based on data generated by the debugging process. Our research has shown that by improving the quality of the data one can greatly improve the predictions. We are working on methodologies which control some of the randomness inherent in the standard data generation processes in order to improve the accuracy of predictions. Our contribution is twofold in that we describe an experimental methodology using a data structure called the debugging graph and apply this methodology to assess the robustness of existing models. The debugging graph is used to analyze the effects of various fault recovery orders on the predictive accuracy of several well-known software reliability algorithms. We found that, along a particular debugging path in the graph, the predictive performance of different models can vary greatly. Similarly, just because a model 'fits' a given path's data well does not guarantee that the model would perform well on a different path. Further, we observed bug interactions and noted their potential effects on the predictive process. We saw that not only do different faults fail at different rates, but that those rates can be affected by the particular debugging stage at which the rates are evaluated. Based on our experiment, we conjecture that the accuracy of a reliability prediction is affected by the fault recovery order as well as by fault interaction.
Modeling the synergy of cofilin and Arp2/3 in lamellipodial protrusive activity.
Tania, Nessy; Condeelis, John; Edelstein-Keshet, Leah
2013-11-05
Rapid polymerization of actin filament barbed ends generates protrusive forces at the cell edge, leading to cell migration. Two important regulators of free barbed ends, cofilin and Arp2/3, have been shown to work in synergy (net effect greater than additive). To explore this synergy, we model the dynamics of F-actin at the leading edge, motivated by data from EGF-stimulated mammary carcinoma cells. We study how synergy depends on the localized rates and relative timing of cofilin and Arp2/3 activation at the cell edge. The model incorporates diffusion of cofilin, membrane protrusion, F-actin capping, aging, and severing by cofilin and branch nucleation by Arp2/3 (but not G-actin recycling). In a well-mixed system, cofilin and Arp2/3 can each generate a large pulse of barbed ends on their own, but have little synergy; high synergy occurs only at low activation rates, when few barbed ends are produced. In the full spatially distributed model, both synergy and barbed-end production are significant over a range of activation rates. Furthermore, barbed-end production is greatest when Arp2/3 activation is delayed relative to cofilin. Our model supports a direct role for cofilin-mediated actin polymerization in stimulated cell migration, including chemotaxis and cancer invasion. Copyright © 2013 Biophysical Society. Published by Elsevier Inc. All rights reserved.
High-flow oxygen therapy: pressure analysis in a pediatric airway model.
Urbano, Javier; del Castillo, Jimena; López-Herce, Jesús; Gallardo, José A; Solana, María J; Carrillo, Ángel
2012-05-01
The mechanism of high-flow oxygen therapy and the pressures reached in the airway have not been defined. We hypothesized that the flow would generate a low continuous positive pressure, and that elevated flow rates in this model could produce moderate pressures. The objective of this study was to analyze the pressure generated by a high-flow oxygen therapy system in an experimental model of the pediatric airway. An experimental in vitro study was performed. A high-flow oxygen therapy system was connected to 3 types of interface (nasal cannulae, nasal mask, and oronasal mask) and applied to 2 types of pediatric manikin (infant and neonatal). The pressures generated in the circuit, in the airway, and in the pharynx were measured at different flow rates (5, 10, 15, and 20 L/min). The experiment was conducted with and without a leak (mouth sealed and unsealed). Linear regression analyses were performed for each set of measurements. The pressures generated with the different interfaces were very similar. The maximum pressure recorded was 4 cm H2O with a flow of 20 L/min via nasal cannulae or nasal mask. When the mouth of the manikin was held open, the pressures reached in the airway and pharynx were undetectable. Linear regression analyses showed a similar linear relationship between flow and pressures measured in the pharynx (pressure = -0.375 + 0.138 × flow) and in the airway (pressure = -0.375 + 0.158 × flow) with the closed-mouth condition. According to our hypothesis, high-flow oxygen therapy systems produced a low-level CPAP in an experimental pediatric model, even with the use of very high flow rates. Linear regression analyses showed similar linear relationships between flow and pressures measured in the pharynx and in the airway. This finding suggests that, at least in part, the effects may be due to other mechanisms.
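The two fitted lines reported above are simple enough to evaluate directly. A minimal sketch in Python, using the closed-mouth regression coefficients quoted in the abstract (the function names and the flow grid are illustrative):

```python
# Pressure predicted from flow for the closed-mouth condition, using the
# regression coefficients reported in the abstract (cm H2O, flow in L/min).
def pharynx_pressure(flow_lpm: float) -> float:
    return -0.375 + 0.138 * flow_lpm

def airway_pressure(flow_lpm: float) -> float:
    return -0.375 + 0.158 * flow_lpm

for flow in (5, 10, 15, 20):
    print(f"{flow:2d} L/min: pharynx {pharynx_pressure(flow):.2f}, "
          f"airway {airway_pressure(flow):.2f} cm H2O")
```

At the 20 L/min maximum, the fitted airway pressure is about 2.8 cm H2O, consistent with the low-level CPAP interpretation above.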
A model for bacterial colonization of sinking aggregates.
Bearon, R N
2007-01-01
Sinking aggregates provide important nutrient-rich environments for marine bacteria. Quantifying the rate at which motile bacteria colonize such aggregations is important in understanding the microbial loop in the pelagic food web. In this paper, a simple analytical model is presented to predict the rate at which bacteria undergoing a random walk encounter a sinking aggregate. The model incorporates the flow field generated by the sinking aggregate, the swimming behavior of the bacteria, and the interaction of the flow with the swimming behavior. An expression for the encounter rate is computed in the limit of large Péclet number when the random walk can be approximated by a diffusion process. Comparison with an individual-based numerical simulation is also given.
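The paper derives its own high-Péclet encounter expression; for orientation only, the sketch below combines the classical Smoluchowski diffusion-limited kernel for a sphere with a Sherwood-number enhancement of the standard high-Péclet form Sh ≈ 0.624 Pe^(1/3) for a solid sphere in Stokes flow. That closure and every parameter value here are assumptions for illustration, not the model of the paper.

```python
import math

D = 4e-10   # effective diffusivity of run-and-tumble bacteria, m^2/s (assumed)
a = 5e-4    # aggregate radius, m (assumed)
U = 1e-3    # aggregate sinking speed, m/s (assumed)
C = 1e9     # ambient bacterial concentration, cells/m^3 (assumed)

Pe = U * a / D                  # Peclet number: advection vs. diffusion
k_diff = 4 * math.pi * D * a    # Smoluchowski diffusive encounter kernel, m^3/s
Sh = 0.624 * Pe ** (1 / 3)      # assumed high-Pe closure for a solid sphere
encounters_per_s = Sh * k_diff * C
print(f"Pe = {Pe:.0f}, encounter rate = {encounters_per_s:.3g} cells/s")
```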
Dispatch Control with PEV Charging and Renewables for Multiplayer Game Application
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, Nathan; Johnson, Brian; McJunkin, Timothy
This paper presents a demand response model for a hypothetical microgrid that integrates renewable resources and plug-in electric vehicle (PEV) charging systems. It is assumed that the microgrid has black start capability and that external generation is available for purchase while grid connected to satisfy additional demand. The microgrid is developed such that in addition to renewable, non-dispatchable generation from solar, wind and run-of-the-river hydroelectric resources, local dispatchable generation is available in the form of small hydroelectric and moderately sized gas and coal fired facilities. To accurately model demand, the load model is separated into independent residential, commercial, industrial, and PEV charging systems. These are dispatched and committed based on a mixed integer linear program developed to minimize the cost of generation and load shedding while satisfying constraints associated with line limits, conservation of energy, and ramp rates of the generation units. The model extends a research tool to longer time frames intended for policy setting and educational environments and provides a realistic and intuitive understanding of beneficial and challenging aspects of electrification of vehicles combined with integration of green electricity production.
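In its simplest single-period form, a dispatch program of the kind described reduces to minimizing generation plus load-shedding cost subject to an energy balance and unit capacity limits. A minimal sketch with PuLP; the unit data, renewable output, and shed penalty are invented, and the full model adds commitment binaries, multi-period ramp limits, and line constraints:

```python
import pulp

# Hypothetical dispatchable units: (name, capacity MW, cost $/MWh)
units = [("small_hydro", 20, 10.0), ("gas", 50, 45.0), ("coal", 60, 30.0)]
renewable = 35.0    # non-dispatchable output this period, MW (assumed)
demand = 120.0      # aggregate residential+commercial+industrial+PEV load, MW
shed_cost = 1000.0  # $/MWh penalty for unserved load (assumed)

prob = pulp.LpProblem("dispatch", pulp.LpMinimize)
gen = {n: pulp.LpVariable(f"g_{n}", lowBound=0, upBound=cap)
       for n, cap, _ in units}
shed = pulp.LpVariable("shed", lowBound=0)

# Objective: generation cost plus load-shedding penalty.
prob += pulp.lpSum(c * gen[n] for n, _, c in units) + shed_cost * shed
# Energy balance: dispatchable + renewable + shed load covers demand.
prob += pulp.lpSum(gen.values()) + renewable + shed == demand

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for n, _, _ in units:
    print(n, gen[n].value(), "MW")
print("shed:", shed.value(), "MW")
```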
Modeling perspectives on echolocation strategies inspired by bats flying in groups.
Lin, Yuan; Abaid, Nicole
2015-12-21
Bats navigating with echolocation - which is a type of active sensing achieved by interpreting echoes resulting from self-generated ultrasonic pulses - exhibit unique behaviors during group flight. While bats may benefit from eavesdropping on their peers' echolocation, they also potentially suffer from confusion between their own and peers' pulses, caused by an effect called frequency jamming. This hardship of group flight is supported by experimental observations of bats simplifying their soundscape by shifting their pulse frequencies or suppressing echolocation altogether. Here, we investigate eavesdropping and varying pulse emission rate from a modeling perspective to understand these behaviors' potential benefits and detriments. We define an agent-based model of echolocating bats avoiding collisions in a three-dimensional tunnel. Through simulation, we show that bats with reasonably accurate eavesdropping can reduce collisions compared to those neglecting information from peers. In large populations, bats minimize frequency jamming by decreasing pulse emission rate, while collision risk increases; conversely, increasing pulse emission rate minimizes collisions by allowing more sensing information generated per bat. These strategies offer benefits for both biological and engineered systems, since frequency jamming is a concern in systems using active sensing. Copyright © 2015 Elsevier Ltd. All rights reserved.
Stability Assessment of a System Comprising a Single Machine and Inverter with Scalable Ratings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Brian B; Lin, Yashen; Gevorgian, Vahan
Synchronous machines have traditionally acted as the foundation of large-scale electrical infrastructures and their physical properties have formed the cornerstone of system operations. However, with the increased integration of distributed renewable resources and energy-storage technologies, there is a need to systematically acknowledge the dynamics of power-electronics inverters - the primary energy-conversion interface in such systems - in all aspects of modeling, analysis, and control of the bulk power network. In this paper, we assess the properties of coupled machine-inverter systems by studying an elementary system comprised of a synchronous generator, three-phase inverter, and a load. The inverter model is formulated such that its power rating can be scaled continuously across power levels while preserving its closed-loop response. Accordingly, the properties of the machine-inverter system can be assessed for varying ratios of machine-to-inverter power ratings. After linearizing the model and assessing its eigenvalues, we show that system stability is highly dependent on the inverter current controller and machine exciter, thus uncovering a key concern with mixed machine-inverter systems and motivating the need for next-generation grid-stabilizing inverter controls.
Developing Chemistry and Kinetic Modeling Tools for Low-Temperature Plasma Simulations
NASA Astrophysics Data System (ADS)
Jenkins, Thomas; Beckwith, Kris; Davidson, Bradley; Kruger, Scott; Pankin, Alexei; Roark, Christine; Stoltz, Peter
2015-09-01
We discuss the use of proper orthogonal decomposition (POD) methods in VSim, an FDTD plasma simulation code capable of both PIC/MCC and fluid modeling. POD methods efficiently generate smooth representations of noisy self-consistent or test-particle PIC data, and are thus advantageous in computing macroscopic fluid quantities from large PIC datasets (e.g. for particle-based closure computations) and in constructing optimal visual representations of the underlying physics. They may also confer performance advantages for massively parallel simulations, due to the significant reduction in dataset sizes conferred by truncated singular-value decompositions of the PIC data. We also demonstrate how complex LTP chemistry scenarios can be modeled in VSim via an interface with MUNCHKIN, a standalone python/C++/SQL code under development that identifies reaction paths for given input species, solves 1D rate equations for the time-dependent chemical evolution of the system, and generates corresponding VSim input blocks with appropriate cross-sections/reaction rates. MUNCHKIN also computes reaction rates from user-specified distribution functions, and conducts principal path analyses to reduce the number of simulated chemical reactions. Supported by U.S. Department of Energy SBIR program, Award DE-SC0009501.
PubChem3D: Conformer generation
2011-01-01
Background PubChem, an open archive for the biological activities of small molecules, provides search and analysis tools to assist users in locating desired information. Many of these tools focus on the notion of chemical structure similarity at some level. PubChem3D enables similarity of chemical structure 3-D conformers to augment the existing similarity of 2-D chemical structure graphs. It is also desirable to relate theoretical 3-D descriptions of chemical structures to experimental biological activity. As such, it is important to be assured that the theoretical conformer models can reproduce experimentally determined bioactive conformations. In the present study, we investigate the effects of three primary conformer generation parameters (the fragment sampling rate, the energy window size, and the force field variant) upon the accuracy of theoretical conformer models, and determine optimal settings for PubChem3D conformer model generation and conformer sampling. Results Using the software package OMEGA from OpenEye Scientific Software, Inc., theoretical 3-D conformer models were generated for 25,972 small-molecule ligands, whose 3-D structures were experimentally determined. Different values for primary conformer generation parameters were systematically tested to find optimal settings. Employing a greater fragment sampling rate than the default did not improve the accuracy of the theoretical conformer model ensembles. An ever increasing energy window did increase the overall average accuracy, with rapid convergence observed at 10 kcal/mol and 15 kcal/mol for model building and torsion search, respectively; however, subsequent study showed that an energy threshold of 25 kcal/mol for torsion search resulted in slightly improved results for larger and more flexible structures. Exclusion of coulomb terms from the 94s variant of the Merck molecular force field (MMFF94s) in the torsion search stage gave more accurate conformer models at lower energy windows. Overall average accuracy of reproduction of bioactive conformations was remarkably linear with respect to both non-hydrogen atom count ("size") and effective rotor count ("flexibility"). Using these as independent variables, a regression equation was developed to predict the RMSD accuracy of a theoretical ensemble to reproduce bioactive conformations. The equation was modified to give a minimum RMSD conformer sampling value to help ensure that 90% of the sampled theoretical models should contain at least one conformer within the RMSD sampling value to a "bioactive" conformation. Conclusion Optimal parameters for conformer generation using OMEGA were explored and determined. An equation was developed that provides an RMSD sampling value to use that is based on the relative accuracy to reproduce bioactive conformations. The optimal conformer generation parameters and RMSD sampling values determined are used by the PubChem3D project to generate theoretical conformer models.
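The abstract does not quote the fitted coefficients, so the sketch below only illustrates the functional form: a linear model in non-hydrogen atom count and effective rotor count, shifted by a margin so that roughly 90% of ensembles comply. All numeric values here are hypothetical placeholders, not PubChem3D's published fit.

```python
# Illustrative form of the accuracy model described above:
#   predicted_rmsd = b0 + b1 * heavy_atoms + b2 * effective_rotors
# Every coefficient below is an assumed placeholder, NOT the published fit.
B0, B1, B2 = 0.1, 0.02, 0.08   # hypothetical regression coefficients
MARGIN = 0.3                   # hypothetical shift for ~90% coverage

def rmsd_sampling_value(heavy_atoms: int, effective_rotors: float) -> float:
    """Minimum RMSD sampling threshold for conformer ensemble generation."""
    return B0 + B1 * heavy_atoms + B2 * effective_rotors + MARGIN

print(rmsd_sampling_value(25, 5.0))  # a mid-sized, moderately flexible ligand
```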
NASA Astrophysics Data System (ADS)
Flores, Robert Joseph
Distributed generation can provide many benefits over traditional central generation such as increased reliability and efficiency while reducing emissions. Despite these potential benefits, distributed generation is generally not purchased unless it reduces energy costs. Economic dispatch strategies can be designed such that distributed generation technologies reduce overall facility energy costs. In this thesis, a microturbine generator is dispatched using different economic control strategies, reducing the cost of energy to the facility. Several industrial and commercial facilities are simulated using acquired electrical, heating, and cooling load data. Industrial and commercial utility rate structures are modeled after Southern California Edison and Southern California Gas Company tariffs and used to find energy costs for the simulated buildings and corresponding microturbine dispatch. Using these control strategies, building models, and utility rate models, a parametric study examining various generator characteristics is performed. An economic assessment of the distributed generation is then performed for both the microturbine generator and parametric study. Without the ability to export electricity to the grid, the economic value of distributed generation is limited to reducing the individual costs that make up the cost of energy for a building. Any economic dispatch strategy must be built to reduce these individual costs. While the ability of distributed generation to reduce cost depends on factors such as electrical efficiency and operations and maintenance cost, the building energy demand being serviced has a strong effect on cost reduction. Buildings with low load factors can accept distributed generation with higher operating costs (low electrical efficiency and/or high operations and maintenance cost) due to the value of demand reduction. As load factor increases, lower operating cost generators are desired due to a larger portion of the building load being met in an effort to reduce demand. In addition, buildings with large thermal demand have access to the least expensive natural gas, lowering the cost of operating distributed generation. Recovery of exhaust heat from DG reduces cost only if the building's thermal demand coincides with the electrical demand. Capacity limits exist where annual savings from operation of distributed generation decrease if further generation is installed. For low operating cost generators, the approximate limit is the average building load. This limit decreases as operating costs increase. In addition, a high capital cost of distributed generation can be accepted if generator operating costs are low. As generator operating costs increase, capital cost must decrease if a positive economic performance is desired.
The fiscal outcome of artificial conception in Brazil--creating citizens in developing countries.
Kröger, G B; Ejzenberg, D
2012-01-01
Infertility is an important health issue, but only a small fraction of the affected population receives treatment in Brazil, because it is not covered by the government or private health insurance plans. We developed a generational accounting-based mathematical model to assess the direct economic result of creating a citizen through IVF in different economic scenarios, and the potential economic benefit generated by the individual and his/her future offspring. A mathematical model analyzes the revenues and expenses of an IVF-conceived individual over his lifetime. We calculated the net present value (NPV) of an IVF-conceived citizen, and this value corresponds to the fiscal contribution to the government by an individual, from birth through his predicted life expectancy. The calculation used discount rates of 4.0 and 7.0% to depreciate the money value by time. A 4.0% discount rate represents the most favorable economic scenario in Brazil, and it results in an NPV of US$ 61 428. A 7.0% discount rate represents a less favorable economic reality, and it results in a debit of US$ 563, but this debt may be compensated by his/her future offspring. The fiscal contribution generated by each IVF-conceived citizen can justify an initial government investment in infertility treatment. Poor economic times in Brazil can sometimes result in a fiscal debt from each new IVF-conceived child, but this initial expenditure may be compensated by the fiscal contribution in the next generation.
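The generational-accounting calculation is a standard discounted sum of yearly net fiscal flows. A minimal sketch with an invented net-flow profile (net cost in childhood, net contribution while working, net cost in retirement); only the two discount rates come from the study.

```python
def npv(net_flows, rate):
    """Present value at birth of yearly net fiscal flows (taxes - outlays)."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(net_flows))

# Invented stylized profile over an assumed 75-year horizon
# (currency units per year; NOT the study's cash flows).
flows = [-3000] * 18 + [6000] * 47 + [-4000] * 10

for rate in (0.04, 0.07):   # the two scenarios considered in the study
    print(f"discount {rate:.0%}: NPV = {npv(flows, rate):,.0f}")
```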
Model-based estimation of individual fitness
Link, W.A.; Cooch, E.G.; Cam, E.
2002-01-01
Fitness is the currency of natural selection, a measure of the propagation rate of genotypes into future generations. Its various definitions have the common feature that they are functions of survival and fertility rates. At the individual level, the operative level for natural selection, these rates must be understood as latent features, genetically determined propensities existing at birth. This conception of rates requires that individual fitness be defined and estimated by consideration of the individual in a modelled relation to a group of similar individuals; the only alternative is to consider a sample of size one, unless a clone of identical individuals is available. We present hierarchical models describing individual heterogeneity in survival and fertility rates and allowing for associations between these rates at the individual level. We apply these models to an analysis of life histories of Kittiwakes (Rissa tridactyla) observed at several colonies on the Brittany coast of France. We compare Bayesian estimation of the population distribution of individual fitness with estimation based on treating individual life histories in isolation, as samples of size one (e.g. McGraw and Caswell, 1996).
A Bayes linear Bayes method for estimation of correlated event rates.
Quigley, John; Wilson, Kevin J; Walls, Lesley; Bedford, Tim
2013-12-01
Typically, full Bayesian estimation of correlated event rates can be computationally challenging since estimators are intractable. When estimation of event rates represents one activity within a larger modeling process, there is an incentive to develop more efficient inference than provided by a full Bayesian model. We develop a new subjective inference method for correlated event rates based on a Bayes linear Bayes model under the assumption that events are generated from a homogeneous Poisson process. To reduce the elicitation burden we introduce homogenization factors to the model and, as an alternative to a subjective prior, an empirical method using the method of moments is developed. Inference under the new method is compared against estimates obtained under a full Bayesian model, which takes a multivariate gamma prior, where the predictive and posterior distributions are derived in terms of well-known functions. The mathematical properties of both models are presented. A simulation study shows that the Bayes linear Bayes inference method and the full Bayesian model provide equally reliable estimates. An illustrative example, motivated by a problem of estimating correlated event rates across different users in a simple supply chain, shows how ignoring the correlation leads to biased estimation of event rates. © 2013 Society for Risk Analysis.
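The univariate building block behind both comparators is the conjugate gamma-Poisson update, and a method-of-moments prior fitted from pooled historical rates mirrors the paper's empirical alternative. A minimal sketch of that building block only; the homogenization factors and the Bayes linear update across correlated rates are not reproduced here.

```python
# Conjugate gamma-Poisson update for a single event rate:
# prior rate ~ Gamma(alpha, beta); n events observed over exposure T
# => posterior ~ Gamma(alpha + n, beta + T).
def posterior(alpha, beta, n_events, exposure):
    return alpha + n_events, beta + exposure

# Method-of-moments prior from historical rates (empirical-Bayes style):
# for Gamma(alpha, beta), mean = alpha/beta and variance = alpha/beta^2.
def mom_prior(rates):
    m = sum(rates) / len(rates)
    v = sum((r - m) ** 2 for r in rates) / len(rates)
    return m * m / v, m / v   # alpha, beta

alpha, beta = mom_prior([0.8, 1.3, 1.1, 0.6])   # invented historical rates
a_post, b_post = posterior(alpha, beta, n_events=7, exposure=5.0)
print("posterior mean rate:", a_post / b_post)
```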
NASA Astrophysics Data System (ADS)
Takahashi, Hirona; Hagiwara, Kenta; Kawai, Akio
2016-11-01
Addition reaction of photo-generated radicals to double bonds of diethyl fumarate (deF) and diethyl maleate (deM), which are geometrical isomers, was studied by means of time-resolved (TR-) and pulsed-electron paramagnetic resonance (EPR). Analysis of TR-EPR spectra indicates that adduct radicals from deF and deM should have the same structure. The double bonds of these monomers are converted to single ones by the addition reaction, which allows hindered internal rotation to give the same structure of adduct radical. The rate constants for the addition reaction of photo-generated radicals were determined by Stern-Volmer analysis of the decay time of the electron spin-echo intensity of these radicals measured by the pulsed EPR method. Rate constants for deF were found to be larger than those for deM. This relation is consistent with the polymerisation efficiencies of deF and deM. The experimentally determined rate constants were evaluated by introducing an addition reaction model based on two important factors: enthalpy and polar effects.
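A Stern-Volmer analysis extracts the addition rate constant from the linear dependence of the inverse decay time on monomer concentration, 1/τ = 1/τ₀ + k_add[M]. A minimal sketch with invented decay-time data (not the paper's measurements); numpy's polyfit recovers k_add as the slope.

```python
import numpy as np

# Invented example data: monomer concentration (mol/L) vs. radical
# spin-echo decay time (microseconds) at each concentration.
conc = np.array([0.00, 0.05, 0.10, 0.20, 0.40])
tau_us = np.array([10.0, 6.7, 5.0, 3.3, 2.0])

# Stern-Volmer: 1/tau = 1/tau0 + k_add * [M]; the slope gives k_add.
slope, intercept = np.polyfit(conc, 1.0 / tau_us, 1)
k_add = slope * 1e6   # convert 1/(us M) to 1/(s M)
print(f"k_add = {k_add:.2e} L mol^-1 s^-1, tau0 = {1/intercept:.1f} us")
```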
Gas Generator Feedline Orifice Sizing Methodology: Effects of Unsteadiness and Non-Axisymmetric Flow
NASA Technical Reports Server (NTRS)
Rothermel, Jeffry; West, Jeffrey S.
2011-01-01
Engine LH2 and LO2 gas generator feed assemblies were modeled with computational fluid dynamics (CFD) methods at 100% rated power level, using on-center square- and round-edge orifices. The purpose of the orifices is to regulate the flow of fuel and oxidizer to the gas generator, enabling optimal power supply to the turbine and pump assemblies. The unsteady Reynolds-Averaged Navier-Stokes equations were solved on unstructured grids at second-order spatial and temporal accuracy. The LO2 model was validated against published experimental data and semi-empirical relationships for thin-plate orifices over a range of Reynolds numbers. Predictions for the LO2 square- and round-edge orifices precisely match experiment and semi-empirical formulas, despite complex feedline geometry whereby a portion of the flow from the engine main feedlines travels at a right-angle through a smaller-diameter pipe containing the orifice. Predictions for LH2 square- and round-edge orifice designs match experiment and semi-empirical formulas to varying degrees depending on the semi-empirical formula being evaluated. LO2 mass flow rate through the square-edge orifice is predicted to be 25 percent less than the flow rate budgeted in the original engine balance, which was subsequently modified. LH2 mass flow rate through the square-edge orifice is predicted to be 5 percent greater than the flow rate budgeted in the engine balance. Since CFD predictions for LO2 and LH2 square-edge orifice pressure loss coefficients, K, both agree with published data, the equation for K has been used to define a procedure for orifice sizing.
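The sizing procedure described rests on the incompressible loss relation Δp = K·ρV²/2 together with mass conservation ṁ = ρAV, which yields a closed-form diameter for a budgeted flow at a given pressure drop. The sketch below shows that algebra only, with invented LO2-like numbers; it is not the procedure of the paper verbatim.

```python
import math

def orifice_diameter(mdot, dp, rho, K):
    """Diameter passing mass flow mdot (kg/s) at pressure drop dp (Pa),
    given density rho (kg/m^3) and loss coefficient K = dp / (rho V^2 / 2)."""
    V = math.sqrt(2.0 * dp / (rho * K))   # velocity from the loss relation
    A = mdot / (rho * V)                  # area from mass conservation
    return math.sqrt(4.0 * A / math.pi)

# Invented illustrative values (not engine-balance data):
print(orifice_diameter(mdot=2.5, dp=3.0e5, rho=1140.0, K=1.8), "m")
```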
Thermal modeling of cometary nuclei
NASA Astrophysics Data System (ADS)
Weissman, P. R.; Kieffer, H. H.
1981-09-01
A model of the sublimation of volatile ices from a cometary nucleus is presented which includes the effects of (1) diurnal heating and cooling, (2) rotation period and pole orientation, (3) the thermal properties of the ice and subsurface layers, and (4) the contributions from coma opacity, scattering and thermal emission where the properties of the coma are derived from the integrated rate of volatile production by the nucleus. In applying the model to the case of the 1986 apparition of Halley's comet, it is found that the generation of a cometary dust coma increases the total energy reaching the Halley nucleus due to the greater geometrical cross-section of the coma as compared with the bare nucleus. The calculated coma opacity of Halley is about 0.2 at 1 AU from the sun and 1.2 at perihelion. Possible consequences of the results obtained for the generation of nongravitational forces, volatile production rates for comets and cometary lifetimes against sublimation are discussed.
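At the core of such models is a surface energy balance: absorbed sunlight equals thermal re-radiation plus the latent heat carried off by sublimating ice. The sketch below solves that balance for a water-ice surface at a given heliocentric distance; the vapor-pressure fit, the albedo and emissivity values, and the neglect of rotation, conduction, and coma opacity are simplifying assumptions, not the model above.

```python
import math
from scipy.optimize import brentq

S0 = 1361.0       # solar constant at 1 AU, W/m^2
SIGMA = 5.67e-8   # Stefan-Boltzmann constant
A, EPS = 0.04, 0.9   # assumed albedo and emissivity
L_SUB = 2.8e6        # latent heat of H2O ice sublimation, J/kg (approx.)

def z_sublimation(T):
    """Free sublimation flux of water ice, kg m^-2 s^-1 (Hertz-Knudsen,
    with an approximate Clausius-Clapeyron vapor-pressure fit for ice)."""
    p_vap = 3.56e12 * math.exp(-6141.7 / T)   # Pa, assumed fit
    m_h2o, k_b = 2.99e-26, 1.38e-23           # molecule mass, Boltzmann const.
    return p_vap * math.sqrt(m_h2o / (2.0 * math.pi * k_b * T))

def balance(T, r_au):
    absorbed = (1.0 - A) * S0 / r_au**2       # subsolar point, no rotation
    return absorbed - EPS * SIGMA * T**4 - L_SUB * z_sublimation(T)

for r in (0.6, 1.0, 2.0):
    T = brentq(balance, 50.0, 400.0, args=(r,))
    print(f"r = {r} AU: T = {T:.0f} K, Z = {z_sublimation(T):.2e} kg m^-2 s^-1")
```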
A Lagrangian Approach for Calculating Microsphere Deposition in a One-Dimensional Lung-Airway Model.
Vaish, Mayank; Kleinstreuer, Clement
2015-09-01
Using the open-source software OpenFOAM as the solver, a novel approach to calculate microsphere transport and deposition in a 1D human lung-equivalent trumpet model (TM) is presented. Specifically, for particle deposition in a nonlinear trumpetlike configuration a new radial force has been developed which, along with the regular drag force, generates particle trajectories toward the wall. The new semi-empirical force is a function of any given inlet volumetric flow rate, micron-particle diameter, and lung volume. Particle-deposition fractions (DFs) in the size range from 2 μm to 10 μm are in agreement with experimental datasets for different laminar and turbulent inhalation flow rates as well as total volumes. Typical run times on a single processor workstation to obtain actual total deposition results at comparable accuracy are 200 times less than that for an idealized whole-lung geometry (i.e., a 3D-1D model with airways up to the 23rd generation in single-path only).
Confidence and self-attribution bias in an artificial stock market
Bertella, Mario A.; Pires, Felipe R.; Rego, Henio H. A.; Vodenska, Irena; Stanley, H. Eugene
2017-01-01
Using an agent-based model we examine the dynamics of stock price fluctuations and their rates of return in an artificial financial market composed of fundamentalist and chartist agents with and without confidence. We find that chartist agents who are confident generate higher price and rate of return volatilities than those who are not. We also find that kurtosis and skewness are lower in our simulation study of agents who are not confident. We show that the stock price and confidence index—both generated by our model—are cointegrated and that stock price affects confidence index but confidence index does not affect stock price. We next compare the results of our model with the S&P 500 index and its respective stock market confidence index using cointegration and Granger tests. As in our model, we find that stock prices drive their respective confidence indices, but that the opposite relationship, i.e., the assumption that confidence indices drive stock prices, is not significant.
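Both time-series tests named above are available in statsmodels. A minimal sketch on synthetic stand-in series (the data-generating process is invented, cointegrated by construction): coint tests for a cointegrating relation, and grangercausalitytests checks one direction of predictive causality.

```python
import numpy as np
from statsmodels.tsa.stattools import coint, grangercausalitytests

rng = np.random.default_rng(0)
n = 500
price = np.cumsum(rng.normal(size=n))           # random-walk "price"
confidence = 0.8 * price + rng.normal(size=n)   # cointegrated by design

t_stat, p_value, _ = coint(price, confidence)
print(f"cointegration p-value: {p_value:.3f}")

# Does price Granger-cause confidence? The second column is tested
# as a cause of the first (differenced to stationarity).
data = np.column_stack([np.diff(confidence), np.diff(price)])
grangercausalitytests(data, maxlag=2)
```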
Remote sensing inputs to water demand modeling
NASA Technical Reports Server (NTRS)
Estes, J. E.; Jensen, J. R.; Tinney, L. R.; Rector, M.
1975-01-01
In an attempt to determine the ability of remote sensing techniques to economically generate data required by water demand models, the Geography Remote Sensing Unit, in conjunction with the Kern County Water Agency of California, developed an analysis model. As a result it was determined that agricultural cropland inventories utilizing both high altitude photography and LANDSAT imagery can be conducted cost effectively. In addition, by using average irrigation application rates in conjunction with cropland data, estimates of agricultural water demand can be generated. However, more accurate estimates are possible if crop type, acreage, and crop specific application rates are employed. An analysis of the effect of saline-alkali soils on water demand in the study area is also examined. Finally, reference is made to the detection and delineation of water tables that are perched near the surface by semi-permeable clay layers. Soil salinity prediction, automated crop identification on a by-field basis, and a potential input to the determination of zones of equal benefit taxation are briefly touched upon.
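The demand estimate described is a sum over crops of acreage times a crop-specific application rate, with the coarser alternative applying one average rate to total acreage. A minimal sketch (all acreages and rates are invented illustration values):

```python
# Crop-specific estimate: demand = sum(acreage_c * application_rate_c).
# Figures below are invented (acre-feet per acre per season).
fields = {"cotton": (12000, 3.2), "alfalfa": (8000, 5.5), "grapes": (5000, 2.1)}

demand_specific = sum(acres * rate for acres, rate in fields.values())
total_acres = sum(acres for acres, _ in fields.values())
demand_average = total_acres * 3.6   # single assumed average application rate

print(f"crop-specific: {demand_specific:,.0f} acre-feet")
print(f"average-rate:  {demand_average:,.0f} acre-feet")
```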
Finite element analysis of the high strain rate testing of polymeric materials
NASA Astrophysics Data System (ADS)
Gorwade, C. V.; Alghamdi, A. S.; Ashcroft, I. A.; Silberschmidt, V. V.; Song, M.
2012-08-01
Advanced polymer materials are finding an increasing range of industrial and defence applications. Ultra-high molecular weight polyethylene (UHMWPE) is already used in lightweight body armour because of its good impact resistance at low weight. However, a broader use of such materials is limited by the complexity of the manufacturing processes and the lack of experimental data on their behaviour and failure evolution under high-strain rate loading conditions. The current study deals with an investigation of the internal heat generation during tensile testing of UHMWPE. A 3D finite element (FE) model of the tensile test is developed and validated with experimental work. An elastic-plastic material model is used with adiabatic heat generation. The temperature and stresses obtained with the FE analysis are found to be in good agreement with the experimental results. The model can be used as a simple and cost-effective tool to predict the thermo-mechanical behaviour of a UHMWPE part under various loading conditions.
Lepton asymmetry rate from quantum field theory: NLO in the hierarchical limit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bödeker, D.; Sangel, M., E-mail: bodeker@physik.uni-bielefeld.de, E-mail: msangel@physik.uni-bielefeld.de
2017-06-01
The rates for generating a matter-antimatter asymmetry in extensions of the Standard Model (SM) containing right-handed neutrinos are the most interesting and least trivial coefficients in the rate equations for baryogenesis through thermal leptogenesis. We obtain a relation of these rates to finite-temperature real-time correlation functions, similar to the Kubo formulas for transport coefficients. Then we consider the case of hierarchical masses for the sterile neutrinos. At leading order in their Yukawa couplings we find a simple master formula which relates the rates to a single finite temperature three-point spectral function. It is valid to all orders in g, where g denotes a SM gauge or quark Yukawa coupling. We use it to compute the rate for generating a matter-antimatter asymmetry at next-to-leading order in g in the non-relativistic regime. The corrections are of order g^2, and they amount to 4% or less.
Accelerated lamellar disintegration in eutectoid steel
NASA Astrophysics Data System (ADS)
Mishra, Shakti; Mishra, Alok; Show, Bijay Kumar; Maity, Joydeep
2017-04-01
The fastest kinetics of lamellar disintegration (predicted duration of 44 min) in AISI 1080 steel is obtained with a novel approach of incomplete austenitisation-based cyclic heat treatment involving forced air cooling with an air flow rate of 8.7 m³ h⁻¹. A physical model for process kinetics is proposed that involves lamellar fragmentation, lamellar thickening, divorced eutectoid growth and generation of new lamellar faults in remaining cementite lamellae in each cycle. Lamellar fragmentation is accentuated at a faster cooling rate through generation of more intense lamellar faults, but divorced eutectoid growth ceases. Accordingly, as compared to still air cooling, much faster kinetics of lamellar disintegration is obtained by forced air cooling, together with the generation of much smaller submicroscopic cementite particles (containing a greater proportion of plate-shaped non-spheroids) in the divorced eutectoid region.
The economic impact of state ordered avoided cost rates for photovoltaic generated electricity
NASA Astrophysics Data System (ADS)
Bottaro, D.; Wheatley, N. J.
Various methods the states have devised to implement federal policy regarding the Public Utility Regulatory Policies Act (PURPA) of 1978, which requires that utilities pay their full 'avoided costs' to small power producers for the energy and capacity provided, are examined. The actions of several states are compared with rates estimated using utility expansion and rate-setting models, and the potential break-even capital costs of a photovoltaic system are estimated using models which calculate photovoltaic worth. The potential for the development of photovoltaics has been increased by the PURPA regulations more from the guarantee of utility purchase of photovoltaic power than from the high buy-back rates paid. The buy-back rate is high partly because of the surprisingly high effective capacity of photovoltaic systems in some locations.
A generating function approach to HIV transmission with dynamic contact rates
Romero-Severson, Ethan O.; Meadors, Grant D.; Volz, Erik M.
2014-04-24
The basic reproduction number, R0, is often defined as the average number of infections generated by a newly infected individual in a fully susceptible population. The interpretation, meaning, and derivation of R0 are controversial. However, in the context of mean field models, R0 demarcates the epidemic threshold below which the infected population approaches zero in the limit of time. In this manner, R0 has been proposed as a method for understanding the relative impact of public health interventions with respect to disease elimination from a theoretical perspective. The use of R0 is made more complex by both the strong dependency of R0 on the model form and the stochastic nature of transmission. A common assumption in models of HIV transmission that have closed form expressions for R0 is that a single individual's behavior is constant over time. For this research, we derive expressions for both R0 and the probability of an epidemic in a finite population under the assumption that people periodically change their sexual behavior over time. We illustrate the use of generating functions as a general framework to model the effects of potentially complex assumptions on the number of transmissions generated by a newly infected person in a susceptible population. In conclusion, we find that the relationship between the probability of an epidemic and R0 is not straightforward, but that as the rate of change in sexual behavior increases, both R0 and the probability of an epidemic decrease.
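The generating-function link between R0 and epidemic probability works as follows: if g(s) is the probability generating function of the number of transmissions from one infected person, the extinction probability q is the smallest fixed point of g on [0,1], and P(epidemic) = 1 - q. The sketch below assumes a negative-binomial offspring distribution (a common choice, not necessarily the paper's model) and finds q by fixed-point iteration.

```python
def offspring_pgf(s, r0, k):
    """PGF of a negative-binomial offspring distribution with mean r0
    and dispersion k (an assumed form, not necessarily the paper's)."""
    return (1.0 + (r0 / k) * (1.0 - s)) ** (-k)

def epidemic_probability(r0, k, tol=1e-12):
    q = 0.0                      # iterate q <- g(q) upward from 0
    for _ in range(10_000):
        q_new = offspring_pgf(q, r0, k)
        if abs(q_new - q) < tol:
            break
        q = q_new
    return 1.0 - q               # P(epidemic) = 1 - extinction probability

for r0 in (0.9, 1.5, 3.0):
    print(f"R0 = {r0}: P(epidemic) = {epidemic_probability(r0, k=0.5):.3f}")
```

For R0 below 1 the iteration converges to q = 1, so the epidemic probability is zero, matching the threshold interpretation above.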
Xu, Stanley; Newcomer, Sophia; Nelson, Jennifer; Qian, Lei; McClure, David; Pan, Yi; Zeng, Chan; Glanz, Jason
2014-05-01
The Vaccine Safety Datalink project captures electronic health record data including vaccinations and medically attended adverse events on 8.8 million enrollees annually from participating managed care organizations in the United States. While the automated vaccination data are generally of high quality, a presumptive adverse event based on diagnosis codes in automated health care data may not be true (misclassification). Consequently, analyses using automated health care data can generate false positive results, where an association between the vaccine and outcome is incorrectly identified, as well as false negative findings, where a true association or signal is missed. We developed novel conditional Poisson regression models and fixed effects models that accommodate misclassification of adverse event outcome for the self-controlled case series design. We conducted simulation studies to evaluate their performance in signal detection in vaccine safety hypothesis-generating (screening) studies. We also reanalyzed four previously identified signals in a recent vaccine safety study using the newly proposed models. Our simulation studies demonstrated that (i) outcome misclassification resulted in both false positive and false negative signals in screening studies; (ii) the newly proposed models reduced both the rates of false positive and false negative signals. In reanalyses of four previously identified signals using the novel statistical models, the incidence rate ratio estimates and statistical significances were similar to those using conventional models and including only medical record review confirmed cases. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Unification Theory of Optimal Life Histories and Linear Demographic Models in Internal Stochasticity
Oizumi, Ryo
2014-01-01
Life history of organisms is exposed to uncertainty generated by internal and external stochasticities. Internal stochasticity is generated by the randomness in each individual life history, such as randomness in food intake, genetic character and size growth rate, whereas external stochasticity is due to the environment. For instance, it is known that external stochasticity tends to affect population growth rate negatively. It has been shown in a recent theoretical study using path-integral formulation in structured linear demographic models that internal stochasticity can affect population growth rate positively or negatively. However, internal stochasticity has not been the main subject of research. Taking account of the effect of internal stochasticity on the population growth rate, the fittest organism has the optimal control of life history affected by the stochasticity in the habitat. The study of this control is known as the optimal life schedule problem. In order to analyze the optimal control under internal stochasticity, we need to make use of "Stochastic Control Theory" in the optimal life schedule problem. There is, however, no such theory unifying optimal life history and internal stochasticity. This study focuses on an extension of optimal life schedule problems to unify control theory of internal stochasticity into linear demographic models. First, we show the relationship between the general age-states linear demographic models and the stochastic control theory via several mathematical formulations, such as path-integral, integral equation, and transition matrix. Secondly, we apply our theory to a two-resource utilization model for two different breeding systems: semelparity and iteroparity. Finally, we show that the diversity of resources is important for species in one case. Our study shows that this unification theory can address risk hedges of life history in general age-states linear demographic models.
Takshak, Anjneya; Kunwar, Ambarish
2016-05-01
Many cellular processes are driven by collective forces generated by a team consisting of multiple molecular motor proteins. One aspect that has received less attention is the detachment rate of molecular motors under mechanical force/load. While the detachment rate of kinesin motors measured under backward force increases rapidly for forces beyond the stall force, this scenario is reversed for non-yeast dynein motors, whose detachment rate from the microtubule decreases, exhibiting a catch-bond type behavior. It has been shown recently that yeast dynein responds anisotropically to applied load, i.e. detachment rates are different under forward and backward pulling. Here, we use computational modeling to show that these anisotropic detachment rates might help yeast dynein motors to improve their collective force generation in the absence of catch-bond behavior. We further show that the travel distance of cargos would be longer if detachment rates are anisotropic. Our results suggest that anisotropic detachment rates could be an alternative strategy for motors to improve the transport properties and force production by the team. © 2016 The Protein Society.
Cyriac, Vivek Philip; Kodandaramaiah, Ullasa
2017-11-01
Understanding how and why diversification rates vary across evolutionary time is central to understanding how biodiversity is generated and maintained. Recent mathematical models that allow estimation of diversification rates across time from reconstructed phylogenies have enabled us to make inferences on how biodiversity copes with environmental change. Here, we explore patterns of temporal diversification in Uropeltidae, a diverse fossorial snake family. We generate a time-calibrated phylogenetic hypothesis for Uropeltidae and show a significant correlation between diversification rate and paleotemperature during the Cenozoic. We show that the temporal diversification pattern of this group is punctuated by one rate shift event with a decrease in diversification and turnover rate from ca. 11 Ma to the present, but there is no strong support for mass extinction events. The analysis indicates higher turnover during periods of drastic climatic fluctuations and reduced diversification rates associated with contraction and fragmentation of forest habitats during the late Miocene. Our study highlights the influence of environmental fluctuations on diversification rates in fossorial taxa such as uropeltids, and raises conservation concerns related to the present rate of climate change. Copyright © 2017 Elsevier Inc. All rights reserved.
Who gains and who loses with community rating for small business?
Buchanan, J L; Marquis, M S
1999-01-01
This paper compares community rating with experience rating for small businesses using a microsimulation model to determine what firms offer and who within these firms purchases insurance. We generate four years of data and find that our results are remarkably stable through time. Both offer and purchase rates are about five percentage points higher under experience rating, but community rating leads to more stable offerings. Under community rating, high-risk firms and families purchase insurance, whereas under experience rating, it is the low-risk firms and families who are the purchasers. Young families and poor families have the lowest purchase rates, with these rates being disproportionately low under community rating.
NASA Astrophysics Data System (ADS)
Bojko, Brian T.
Accounting for the effects of finite rate chemistry in reacting flows is intractable when considering the number of species and reactions to be solved for during a large scale flow simulation. This is especially complicated when solid/liquid fuels are also considered. While modeling the reacting boundary layer with the use of finite-rate chemistry may allow for a highly accurate description of the coupling between the flame and fuel surface, it is not tractable in large scale simulations when considering detailed chemical kinetics. It is the goal of this research to investigate a Flamelet-Generated Manifold (FGM) method in order to reduce the finite rate chemistry to a lookup table cataloged by progress variables and queried during runtime. In this study, simplified unsteady 1D flames with mass blowing are considered for a solid biomass fuel where the FGM method is employed as a model reduction strategy for potential application to multidimensional calculations. Two types of FGM are considered. The first is a set of steady-state flames differentiated by their scalar dissipation rate. Results show the use of steady flames produces unacceptable errors compared to the finite-rate chemistry solution, with temperature errors in excess of 45%. To avoid these errors, a new methodology for developing an unsteady FGM (UFGM) is presented that accounts for unsteady diffusion effects and greatly reduces errors in temperature, with differences that are under 10%. The FGM modeling is then extended to individual droplet combustion with the development of a Droplet Flamelet-Generated Manifold (DFGM) to account for the effects of finite-rate chemistry of individual droplets. A spherically symmetric droplet model is developed for methanol and aluminum. The inclusion of finite-rate chemistry allows the model to capture the transition from diffusion-controlled to kinetically controlled combustion as the droplet diameter decreases. The droplet model is then used to create a DFGM by successively solving the 1D flame equations at varying drop sizes, where the source terms for energy, mixture fraction, and progress variable are cataloged as a function of normalized diameter. A unique coupling of the DFGM and planar UFGM is developed and is used to account for individual and gas phase combustion processes in turbulent combustion situations, such as spray flames and particle-laden blasts. The DFGM for the methanol and aluminum droplets are used in mixed Eulerian and Eulerian-Lagrangian formulations of compressible multiphase flows. System level simulations are conducted and compared with experimental data for a methanol spray flame and an aluminized blast studied at the Explosives Components Facility (ECF) at Sandia National Laboratories.
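The core mechanic of an FGM is precomputing chemical source terms offline and interpolating them at runtime from a low-dimensional table keyed by progress variables. A minimal sketch of that lookup step (the table contents here are synthetic placeholders, not flamelet solutions):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Offline: tabulate a source term over (mixture fraction Z, progress c).
# The entries below are a synthetic placeholder, not flamelet data.
Z = np.linspace(0.0, 1.0, 41)
c = np.linspace(0.0, 1.0, 41)
Zg, cg = np.meshgrid(Z, c, indexing="ij")
omega_c = 50.0 * Zg * (1 - Zg) * cg * (1 - cg)   # placeholder source term

table = RegularGridInterpolator((Z, c), omega_c)

# Runtime: each CFD cell queries the manifold instead of integrating
# detailed finite-rate chemistry.
cells = np.array([[0.3, 0.1], [0.5, 0.6], [0.7, 0.9]])   # (Z, c) per cell
print(table(cells))   # progress-variable source for each cell
```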
Maximova, Katerina; O'Loughlin, Jennifer; Gray-Donald, Katherine
2011-04-01
We sought to determine if the rate of increase in body mass index (BMI) differs between first generation immigrant children (child and both parents born outside Canada); second generation immigrant children (child born in Canada with at least one parent born outside Canada); and native-born children (child and both parents born in Canada), and if the rate of increase varies across ethnic groups. Data were available from the evaluation of a 5-year heart health promotion program targeted to elementary school children from 24 schools in multi-ethnic, disadvantaged, inner-city neighborhoods in Montreal, Canada. Participants were 6392 children aged 9-12 years born in and outside of Canada. Height and weight were measured annually according to a standardized protocol. BMI increases with age were examined using individual growth models stratified by immigrant status grouping (first generation immigrant, second generation immigrant, native-born). On average, BMI increased by 0.59, 0.73, and 0.82 kg/m² with each year of age among first generation immigrant, second generation immigrant, and native-born children, respectively. These differences held across four family origin groupings (Europe, Asia, Central/South America, and Other). The protective effect of immigrant status on BMI increases with age dissipated in second generation immigrant children, whose rate of increase was similar to that of native-born children. Because immigrants constitute the fastest growing segment of the Canadian population, it is important to understand the causes of the higher BMI increases with successive generations. Copyright © 2011 Elsevier Inc. All rights reserved.
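An "individual growth model" of this kind is a linear mixed model: BMI as a function of age, with the age slope differing by immigrant-status group and a random intercept and slope per child. A minimal sketch on simulated data with statsmodels (the simulated effect sizes echo the group slopes quoted above; all other values are invented):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
slopes = {"first_gen": 0.59, "second_gen": 0.73, "native": 0.82}  # kg/m2 per yr
for child in range(300):
    group = rng.choice(list(slopes))
    base = rng.normal(17.5, 1.5)               # child-specific baseline BMI
    for age in range(9, 13):                   # annual measurements, ages 9-12
        bmi = base + slopes[group] * (age - 9) + rng.normal(0, 0.4)
        rows.append({"id": child, "group": group, "age": age - 9, "bmi": bmi})
data = pd.DataFrame(rows)

# Random intercept and age slope per child; fixed age-by-group interaction.
model = smf.mixedlm("bmi ~ age * group", data, groups=data["id"],
                    re_formula="~age")
print(model.fit().summary())
```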
2013-01-01
Background Next generation sequencing technologies have greatly advanced many research areas of the biomedical sciences through their capability to generate massive amounts of genetic information at unprecedented rates. The advent of next generation sequencing has led to the development of numerous computational tools to analyze and assemble the millions to billions of short sequencing reads produced by these technologies. While these tools filled an important gap, current approaches for storing, processing, and analyzing short read datasets generally have remained simple and lack the complexity needed to efficiently model the produced reads and assemble them correctly. Results Previously, we presented an overlap graph coarsening scheme for modeling read overlap relationships on multiple levels. Most current read assembly and analysis approaches use a single graph or set of clusters to represent the relationships among a read dataset. Instead, we use a series of graphs to represent the reads and their overlap relationships across a spectrum of information granularity. At each information level our algorithm is capable of generating clusters of reads from the reduced graph, forming an integrated graph modeling and clustering approach for read analysis and assembly. Previously we applied our algorithm to simulated and real 454 datasets to assess its ability to efficiently model and cluster next generation sequencing data. In this paper we extend our algorithm to large simulated and real Illumina datasets to demonstrate that our algorithm is practical for both sequencing technologies. Conclusions Our overlap graph theoretic algorithm is able to model next generation sequencing reads at various levels of granularity through the process of graph coarsening. Additionally, our model allows for efficient representation of the read overlap relationships, is scalable for large datasets, and is practical for both Illumina and 454 sequencing technologies.
The effect of the pulse repetition rate on the fast ionization wave discharge
NASA Astrophysics Data System (ADS)
Huang, Bang-Dou; Carbone, Emile; Takashima, Keisuke; Zhu, Xi-Ming; Czarnetzki, Uwe; Pu, Yi-Kang
2018-06-01
The effect of the pulse repetition rate (PRR) on the generation of high energy electrons in a fast ionization wave (FIW) discharge is investigated by both experiment and modelling. The FIW discharge is driven by nanosecond high voltage pulses and is generated in helium with a pressure of 30 mbar. The axial electric field (Ez), as the driving force of high energy electron generation, is strongly influenced by the PRR. Both the measurement and the model show that, during the breakdown, the peak value of Ez decreases with the PRR, while after the breakdown, the value of Ez increases with the PRR. The electron energy distribution function (EEDF) is calculated with a model similar to Boeuf and Pitchford (1995 Phys. Rev. E 51 1376). It is found that, with a low value of PRR, the EEDF during the breakdown is strongly non-Maxwellian with an elevated high energy tail, while the EEDF after the breakdown is also non-Maxwellian but with a much depleted population of high energy electrons. However, with a high value of PRR, the EEDF is Maxwellian-like without much temporal variation both during and after the breakdown. With the calculated EEDF, the temporal evolution of the population of helium excited species given by the model is in good agreement with the measured optical emission, which also depends critically on the shape of the EEDF.
Frequency-dependent selection can lead to evolution of high mutation rates.
Rosenbloom, Daniel I S; Allen, Benjamin
2014-05-01
Theoretical and experimental studies have shown that high mutation rates can be advantageous, especially in novel or fluctuating environments. Here we examine how frequency-dependent competition may lead to fluctuations in trait frequencies that exert upward selective pressure on mutation rates. We use a mathematical model to show that cyclical trait dynamics generated by "rock-paper-scissors" competition can cause the mutation rate in a population to converge to a high evolutionarily stable mutation rate, reflecting a trade-off between generating novelty and reproducing past success. Introducing recombination lowers the evolutionarily stable mutation rate but allows stable coexistence between mutation rates above and below the evolutionarily stable rate. Even considering strong mutational load and ignoring the costs of faithful replication, evolution favors positive mutation rates if the selective advantage of prevailing in competition exceeds the ratio of recombining to nonrecombining offspring. We discuss a number of genomic mechanisms that may meet our theoretical requirements for the adaptive evolution of mutation. Overall, our results suggest that local mutation rates may be higher on genes influencing cyclical competition and that global mutation rates in asexual species may be higher in populations subject to strong cyclical competition.
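A minimal numerical sketch of the kind of dynamics described, assuming a deterministic rock-paper-scissors payoff and two co-occurring mutation-rate classes; the payoff form, rates and update rule are illustrative stand-ins, not the authors' model.

```python
import numpy as np

def simulate(mus, s=0.5, steps=2000):
    """Trait-frequency dynamics under rock-paper-scissors competition
    for several mutation-rate classes; returns class totals over time."""
    k = len(mus)
    x = np.full((k, 3), 1.0 / (3 * k))       # x[m, t]: class m, trait t
    totals = []
    for _ in range(steps):
        f = x.sum(axis=0)                    # overall trait frequencies
        # trait i beats trait (i+1) % 3 and loses to trait (i+2) % 3
        fit = 1.0 + s * (np.roll(f, -1) - np.roll(f, -2))
        x = x * fit                          # selection on traits
        for m, mu in enumerate(mus):
            # with prob. mu an offspring takes a uniformly random trait;
            # this preserves each class's total mass
            x[m] = (1 - mu) * x[m] + mu * x[m].sum() / 3.0
        x /= x.sum()                         # renormalise the population
        totals.append(x.sum(axis=1).copy())
    return np.array(totals)

out = simulate(mus=[0.001, 0.05])
print("final class shares:", out[-1])        # compare class shares under cycling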
NASA Astrophysics Data System (ADS)
Miyawaki, Shinjiro; Tawhai, Merryn H.; Hoffman, Eric A.; Lin, Ching-Long
2014-11-01
The authors have developed a method to automatically generate non-uniform CFD meshes for image-based human airway models. The sizes of the generated tetrahedral elements vary in both the radial and longitudinal directions to account for the boundary layer and the multiscale nature of pulmonary airflow. The proposed method takes advantage of our previously developed centerline-based geometry reconstruction method. In order to generate the mesh branch by branch in parallel, we used the open-source programs Gmsh and TetGen for surface and volume meshes, respectively. Both programs can specify element sizes by means of a background mesh. The size of an arbitrary element in the domain is a function of wall distance, element size on the wall, and element size at the center of the airway lumen. The element sizes on the wall are computed based on local flow rate and airway diameter. The total number of elements in the non-uniform mesh (10 M) was about half of that in the uniform mesh, although the computational time for the non-uniform mesh was about twice as long (170 min). The proposed method generates CFD meshes with fine elements near the wall and smooth variation of element size in the longitudinal direction, which are required, e.g., for simulations with high flow rate. NIH Grants R01-HL094315, U01-HL114494, and S10-RR022421. Computer time provided by XSEDE.
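The background-mesh idea can be sketched as a size field. The geometric-growth form and the flow-rate scaling below are assumptions for illustration only, since the abstract states just that element size depends on wall distance, the element size on the wall, and the size at the lumen centre.

```python
import numpy as np

def element_size(d, h_wall, h_center, ratio=1.2):
    """Assumed size field: geometric growth from the wall element size
    toward the lumen-centre size, capped at h_center."""
    h = h_wall * ratio ** (d / h_wall)
    return np.minimum(h, h_center)

def wall_size(flow_rate, diameter, c=0.05):
    """Assumed scaling of wall element size with local flow rate and
    airway diameter (a stand-in for a Reynolds-number-based rule)."""
    re_like = flow_rate / diameter
    return c * diameter / np.sqrt(re_like + 1.0)

d = np.linspace(0.0, 2.0, 5)                 # wall distance, mm
hw = wall_size(flow_rate=100.0, diameter=4.0)
print("wall size:", round(hw, 4), "sizes:", element_size(d, hw, 0.5).round(3))
```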
Abdulredha, Muhammad; Al Khaddar, Rafid; Jordan, David; Kot, Patryk; Abdulridha, Ali; Hashim, Khalid
2018-04-26
Major religious festivals hosted in the city of Kerbala, Iraq, annually generate large quantities of Municipal Solid Waste (MSW), which negatively impact the environment and human health when poorly managed. The hospitality sector, specifically hotels, is one of the major sources of MSW generated during these festivals. Because it is essential to establish a proper waste management system for such festivals, accurate information regarding MSW generation is required. This study therefore investigated the rate of production of MSW from hotels in Kerbala during major festivals. A field questionnaire survey was conducted with 150 hotels during the Arba'een festival, one of the largest festivals in the world, attended by about 18 million participants, to identify how much MSW is produced and what features of hotels affect this. Hotel managers responded to questions regarding features of the hotel such as size (Hs), expenditure (Hex), area (Ha) and number of staff (Hst). An on-site audit was also carried out with all participating hotels to estimate the mass of MSW generated from these hotels. The results indicate that the MSW produced by hotels varies widely. In general, it was found that each hotel guest produces an estimated 0.89 kg of MSW per day. However, this figure varies according to the hotels' rating. Average rates of MSW production from one and four star hotels were 0.83 and 1.22 kg per guest per day, respectively. Statistically, it was found that the relationship between MSW production and hotel features can be modelled with an R 2 of 0.799, where the influence of hotel features on MSW production followed the order Hs > Hex > Hst. Copyright © 2018 Elsevier Ltd. All rights reserved.
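The reported feature model is a multiple linear regression. A minimal sketch in the spirit of the study follows, with synthetic data and invented coefficients; only the model form, MSW as a linear function of Hs, Hex and Hst, follows the abstract.

```python
import numpy as np

# Illustrative fit of per-hotel MSW production against hotel features
# (Hs: size, Hex: expenditure, Hst: staff); all numbers are made up.
rng = np.random.default_rng(1)
n = 150
Hs, Hex, Hst = rng.uniform(1, 5, n), rng.uniform(10, 90, n), rng.uniform(2, 40, n)
msw = 20 + 14 * Hs + 0.8 * Hex + 1.5 * Hst + rng.normal(0, 10, n)

X = np.column_stack([np.ones(n), Hs, Hex, Hst])   # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, msw, rcond=None)
pred = X @ beta
r2 = 1 - ((msw - pred) ** 2).sum() / ((msw - msw.mean()) ** 2).sum()
print("coefficients:", beta.round(2), " R2:", round(r2, 3))
```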
Modeling of microporous silicon betaelectric converter with 63Ni plating in GEANT4 toolkit*
NASA Astrophysics Data System (ADS)
Zelenkov, P. V.; Sidorov, V. G.; Lelekov, E. T.; Khoroshko, A. Y.; Bogdanov, S. V.; Lelekov, A. T.
2016-04-01
The model of the electron-hole pair generation rate distribution in a semiconductor is needed to optimize the parameters of a microporous silicon betaelectric converter, which uses 63Ni isotope radiation. Using Monte Carlo methods of the GEANT4 software with ultra-low energy electron physics models, this distribution in silicon was calculated and approximated with an exponential function. The optimal pore configuration was estimated.
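The fitting step can be sketched as follows; the exponential form is the one the abstract reports, while the Monte Carlo "data" here are synthetic placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative exponential approximation of an energy-deposition profile.
depth = np.linspace(0, 4, 40)                    # micrometres into silicon
noise = 1 + 0.05 * np.random.default_rng(0).normal(size=40)
g_mc = 1e18 * np.exp(-depth / 1.1) * noise       # fake Monte Carlo output

def model(x, g0, L):
    return g0 * np.exp(-x / L)                   # pairs per cm^3 per s

(g0, L), _ = curve_fit(model, depth, g_mc, p0=(1e18, 1.0))
print(f"fitted surface rate {g0:.3e}, attenuation length {L:.2f} um")
```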
Temperature and petroleum generation history of the Wilcox Formation, Louisiana
Pitman, Janet K.; Rowan, Elisabeth Rowan
2012-01-01
A one-dimensional petroleum system modeling study of Paleogene source rocks in Louisiana was undertaken in order to characterize their thermal history and to establish the timing and extent of petroleum generation. The focus of the modeling study was the Paleocene and Eocene Wilcox Formation, which contains the youngest source rock interval in the Gulf Coast Province. Stratigraphic input to the models included thicknesses and ages of deposition, lithologies, amounts and ages of erosion, and ages for periods of nondeposition. Oil-generation potential of the Wilcox Formation was modeled using an initial total organic carbon of 2 weight percent and an initial hydrogen index of 261 milligrams of hydrocarbon per gram of total organic carbon. Isothermal, hydrous-pyrolysis kinetics determined experimentally were used to simulate oil generation from coal, which is the primary source of oil in Eocene rocks. Model simulations indicate that generation of oil commenced in the Wilcox Formation during a fairly wide age range, from 37 million years ago to the present day. Differences in maturity with respect to oil generation occur across the Lower Cretaceous shelf edge. Source rocks that are thermally immature and have not generated oil (depths less than about 5,000 feet) lie updip and north of the shelf edge; source rocks that have generated all of their oil and are overmature (depths greater than about 13,000 feet) are present downdip and south of the shelf edge. High rates of sediment deposition coupled with increased accommodation space at the Cretaceous shelf margin led to deep burial of Cretaceous and Tertiary source rocks and, in turn, rapid generation of petroleum and, ultimately, cracking of oil to gas.
NASA Astrophysics Data System (ADS)
Zarifakis, Marios; Coffey, William T.; Kalmykov, Yuri P.; Titov, Sergei V.
2017-06-01
An ever-increasing requirement exists to integrate greater amounts of electrical energy from renewable sources, especially from wind turbines and solar photovoltaic installations, and recent experience on the island of Ireland demonstrates that this requirement influences the behaviour of conventional generating stations. One observation is the change in the electrical power output of synchronous generators following a transient disturbance, especially their oscillatory behaviour, accompanied by similar oscillatory behaviour of the grid frequency, both becoming more pronounced with reducing grid inertia. This behaviour cannot be reproduced with existing mathematical models, indicating that an understanding of the behaviour of synchronous generators subjected to various disturbances, especially in a system with low inertia, requires a new modelling technique. Thus, two models of a generating station based on a double pendulum, described by a system of coupled nonlinear differential equations and suitable for analysis of its stability, corresponding to infinite or finite grid inertia, are presented. Formal analytic solutions of the equations of motion are given and compared with numerical solutions. In particular, the new finite grid model will allow one to identify limitations to the operational range of the synchronous generators used in conventional power generation and also to identify limits, such as the allowable Rate of Change of Frequency, which is currently set to ± 0.5 Hz/s and is a major factor in describing the volatility of a grid, as well as identifying the total inertia necessary, which is currently provided by conventional power generators only, thus allowing one to maximise the usage of grid-connected non-synchronous generators, e.g., wind turbines and solar photovoltaic installations.
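A minimal sketch of a two-rotor model in this spirit, with a generator swinging against a finite-inertia grid through a synchronizing torque; the specific equations and all parameters are illustrative stand-ins, not the paper's double-pendulum formulation.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two coupled rotors: generator (H1) and aggregate grid (H2); all values
# are placeholders in per-unit. States: angles d1, d2 and speeds w1, w2.
H1, H2 = 4.0, 12.0        # inertia constants, s
K, D = 1.5, 0.05          # synchronising and damping coefficients
Pm, Pl = 0.8, 0.8         # mechanical input and grid load

def rhs(t, y):
    d1, w1, d2, w2 = y
    pe = K * np.sin(d1 - d2)                 # electrical power exchange
    dw1 = (Pm - pe - D * w1) / (2 * H1)
    dw2 = (pe - Pl - D * w2) / (2 * H2)      # a finite grid swings too
    return [w1, dw1, w2, dw2]

sol = solve_ivp(rhs, (0, 20), [0.3, 0.0, 0.0, 0.0], max_step=0.01)
f0 = 50.0
rocof = f0 * np.gradient(sol.y[3], sol.t)    # grid RoCoF in Hz/s
print("peak |RoCoF|:", round(abs(rocof).max(), 3))
```

Lowering H2 in such a sketch makes the grid-frequency oscillations more pronounced, which is the qualitative trend the abstract reports.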
NASA Technical Reports Server (NTRS)
Cummings, Kristin A.; Pickering, Kenneth; Barth, Mary; Weinheimer, A.; Bela, M.; Li, Y; Allen, D.; Bruning, E.; MacGorman, D.; Rutledge, S.;
2015-01-01
The Deep Convective Clouds and Chemistry (DC3) field campaign in 2012 provided a plethora of aircraft and ground-based observations (e.g., trace gases, lightning and radar) to study deep convective storms, their convective transport of trace gases, and associated lightning occurrence and production of nitrogen oxides (NOx). This is a continuation of previous work, which compared lightning observations (Oklahoma Lightning Mapping Array and National Lightning Detection Network) with flashes generated by various flash rate parameterization schemes (FRPSs) from the literature in a Weather Research and Forecasting Chemistry (WRF-Chem) model simulation of the 29-30 May 2012 Oklahoma thunderstorm. Based on the Oklahoma radar observations and Lightning Mapping Array data, new FRPSs are being generated and incorporated into the model. The focus of this analysis is on estimating the amount of lightning-generated nitrogen oxides (LNOx) produced per flash in this storm through a series of model simulations using different production per flash assumptions and comparisons with DC3 aircraft anvil observations. The result of this analysis will be compared with previously studied mid-latitude storms. Additional model simulations are conducted to investigate the upper troposphere transport, distribution, and chemistry of the LNOx plume during the 24 hours following the convective event to investigate ozone production. These model-simulated mixing ratios are compared against the aircraft observations made on 30 May over the southern Appalachians.
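For context, one widely cited FRPS of the kind compared in such studies relates total flash rate to convective cloud-top height (Price and Rind, 1992). The sketch below uses the continental form as an illustration; it is not necessarily one of the schemes tested for this storm.

```python
# Price and Rind (1992) continental flash-rate parameterization:
# total flashes per minute as a power law of cloud-top height in km.
def flashes_per_minute(cloud_top_km: float) -> float:
    return 3.44e-5 * cloud_top_km ** 4.9

for h in (10, 12, 14):
    print(h, "km ->", round(flashes_per_minute(h), 1), "flashes/min")
```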
Modeling the Impacts of Solar Distributed Generation on U.S. Water Resources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amanda, Smith; Omitaomu, Olufemi A; Jaron, Peck
2015-01-01
Distributed electric power generation technologies typically use little or no water per unit of electrical energy produced; in particular, renewable energy sources such as solar PV systems do not require cooling systems and present an opportunity to reduce water usage for power generation. Within the US, the fuel mix used for power generation varies regionally, and certain areas use more water for power generation than others. The need to reduce water usage for power generation is even more urgent in view of climate change uncertainties. In this paper, we present an example case within the state of Tennessee, one of the top four states in water consumption for power generation and one of the states with little or no potential for developing centralized renewable energy generation. The potential for developing PV generation within Knox County, Tennessee, is studied, along with the potential for reducing water withdrawal and consumption within the Tennessee Valley stream region. Electric power generation plants in the region are quantified for their electricity production and expected water withdrawal and consumption over one year, where electrical generation data is provided over one year and water usage is modeled based on the cooling system(s) in use. Potential solar PV electrical production is modeled based on LiDAR data and weather data for the same year. Our proposed methodology can be summarized as follows: First, the potential solar generation is compared against the local grid demand. Next, electrical generation reductions are specified that would result in a given reduction in water withdrawal and a given reduction in water consumption, and compared with the current water withdrawal and consumption rates for the existing fuel mix. The increase in solar PV development that would produce an equivalent amount of power is then determined. In this way, we consider how targeted local actions may affect the larger stream region through thoughtful energy development. This model can be applied to other regions and other types of distributed generation, and can be used as a framework for modeling alternative growth scenarios in power production capacity in addition to modeling adjustments to existing capacity.
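The accounting in the proposed methodology can be sketched with placeholder numbers; all intensities and the capacity factor below are assumptions for illustration, not the paper's values.

```python
# Back-of-envelope version of the accounting: PV generation needed to
# displace enough thermal generation to hit a water-withdrawal target.
withdrawal_intensity = 20_000   # gal withdrawn per MWh (placeholder)
consumption_intensity = 300     # gal consumed per MWh (placeholder)
target_withdrawal_cut = 1e9     # gal per year

displaced_mwh = target_withdrawal_cut / withdrawal_intensity
consumed_saved = displaced_mwh * consumption_intensity
pv_capacity_mw = displaced_mwh / (8760 * 0.18)   # assumed 18% capacity factor

print(f"PV needed: {pv_capacity_mw:.0f} MW to displace {displaced_mwh:.2e} MWh,"
      f" also saving {consumed_saved:.2e} gal consumed")
```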
Generative complexity of Gray-Scott model
NASA Astrophysics Data System (ADS)
Adamatzky, Andrew
2018-03-01
In the Gray-Scott reaction-diffusion system one reactant is constantly fed into the system, while another reactant is reproduced by consuming the supplied reactant and is also converted to an inert product. The rate of feeding one reactant into the system and the rate of removing the other reactant from the system determine the configurations of the concentration profiles: stripes, spots, waves. We calculate the generative complexity, a morphological complexity of concentration profiles grown from a point-wise perturbation of the medium, of the Gray-Scott system for a range of the feeding and removal rates. The morphological complexity is evaluated using Shannon entropy, Simpson diversity, an approximation of Lempel-Ziv complexity, and expressivity (Shannon entropy divided by space-filling). We analyse the behaviour of the systems with the highest values of the generative morphological complexity and show that the Gray-Scott systems expressing the highest levels of complexity are composed of wave-fragments (similar to wave-fragments in sub-excitable media) and travelling localisations (similar to quasi-dissipative solitons and gliders in Conway's Game of Life).
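For reference, a minimal Gray-Scott integrator with a point-wise perturbation and one of the named complexity measures (Shannon entropy of the binarised profile); parameter values and the threshold are illustrative.

```python
import numpy as np

# Explicit Gray-Scott step on a periodic grid; F is the feed rate and
# k the removal rate of the standard formulation.
def laplacian(z):
    return (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
            np.roll(z, 1, 1) + np.roll(z, -1, 1) - 4 * z)

n, Du, Dv, F, k, dt = 128, 0.16, 0.08, 0.035, 0.060, 1.0
u, v = np.ones((n, n)), np.zeros((n, n))
u[60:68, 60:68], v[60:68, 60:68] = 0.5, 0.25     # point-wise perturbation

for _ in range(5000):
    uvv = u * v * v
    u += dt * (Du * laplacian(u) - uvv + F * (1 - u))
    v += dt * (Dv * laplacian(v) + uvv - (F + k) * v)

# Shannon entropy of the binarised concentration profile
cells = (v > 0.1).ravel()
p = np.array([1 - cells.mean(), cells.mean()])
p = p[p > 0]
print("entropy:", round(-(p * np.log2(p)).sum(), 4))
```

Sweeping F and k over a grid and recording such measures reproduces the kind of complexity map the abstract describes.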
Statistical Methods for Rapid Aerothermal Analysis and Design Technology: Validation
NASA Technical Reports Server (NTRS)
DePriest, Douglas; Morgan, Carolyn
2003-01-01
The cost and safety goals for NASA's next generation of reusable launch vehicles (RLVs) will require that rapid high-fidelity aerothermodynamic design tools be used early in the design cycle. To meet these requirements, it is desirable to identify adequate statistical models that quantify and improve the accuracy, extend the applicability, and enable combined analyses using existing prediction tools. The initial research work focused on establishing suitable candidate models for these purposes. The second phase focused on assessing the performance of these models in accurately predicting the heat rate for a given candidate data set. This validation work compared models and methods that may be useful in predicting the heat rate.
NASA Technical Reports Server (NTRS)
Anghaie, S.; Chen, G.
1996-01-01
A computational model based on the axisymmetric, thin-layer Navier-Stokes equations is developed to predict the convective, radiative and conductive heat transfer in high temperature space nuclear reactors. An implicit-explicit, finite volume, MacCormack method in conjunction with the Gauss-Seidel line iteration procedure is utilized to solve the thermal and fluid governing equations. Simulation of coolant and propellant flows in these reactors involves the subsonic and supersonic flows of hydrogen, helium and uranium tetrafluoride under variable boundary conditions. An enthalpy-rebalancing scheme is developed and implemented to enhance and accelerate the rate of convergence when a wall heat flux boundary condition is used. The model also incorporates the Baldwin and Lomax two-layer algebraic turbulence scheme for the calculation of the turbulent kinetic energy and eddy diffusivity of energy. The Rosseland diffusion approximation is used to simulate the radiative energy transfer in the optically thick environment of gas core reactors. The computational model is benchmarked with experimental data on flow separation angle and drag force acting on a suspended sphere in a cylindrical tube. The heat transfer is validated by comparing the computed results with predictions of standard heat transfer correlations. The model is used to simulate flow and heat transfer under a variety of design conditions. The effect of internal heat generation on the heat transfer in the gas core reactors is examined for a variety of power densities: 100 W/cc, 500 W/cc and 1000 W/cc. The maximum temperatures, corresponding to these heat generation rates, are 2150 K, 2750 K and 3550 K, respectively. This analysis shows that the maximum temperature is strongly dependent on the value of the heat generation rate. It also indicates that a heat generation rate higher than 1000 W/cc is necessary to maintain the gas temperature at about 3500 K, which is the typical design temperature required to achieve high efficiency in the gas core reactors. The model is also used to predict the convective and radiative heat fluxes for the gas core reactors. The maximum value of heat flux occurs at the exit of the reactor core. Radiative heat flux increases with higher wall temperature; this behavior is due to the fact that the radiative heat flux is strongly dependent on wall temperature. This study also found that at temperatures close to 3500 K the radiative heat flux is comparable with the convective heat flux in a uranium fluoride fueled gas core reactor.
Thermal effects in two-phase flow through face seals. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Basu, Prithwish
1988-01-01
When liquid is sealed at high temperature, it flashes inside the seal due to pressure drop and/or viscous heat dissipation. Two-phase seals generally exhibit more erratic behavior than their single phase counterparts. Thermal effects, which are often neglected in single phase seal analyses, play an important role in determining seal behavior under two-phase operation. It is necessary to consider the heat generation due to viscous shear, conduction into the seal rings and convection with the leakage flow. The analytical models developed work reasonably well at the two extremes: for low leakage rates when convection is neglected, and for higher leakage rates when conduction is neglected. A preliminary model, known as the Film Coefficient Model, is presented which considers both conduction and convection, and allows continuous boiling over an extended region, unlike the previous low-leakage-rate model which neglects convection and always forces a discrete boiling interface. Another simplified, semi-analytical model, based on the assumption of isothermal conditions along the seal interface, has been developed for low leakage rates. The Film Coefficient Model may be used for a more accurate and realistic description.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zapol, Peter; Bourg, Ian; Criscenti, Louise Jacqueline
2011-10-01
This report summarizes research performed for the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Subcontinuum and Upscaling Task. The work conducted focused on developing a roadmap to include molecular scale, mechanistic information in continuum-scale models of nuclear waste glass dissolution. This information is derived from molecular-scale modeling efforts that are validated through comparison with experimental data. In addition to developing a master plan to incorporate a subcontinuum mechanistic understanding of glass dissolution into continuum models, methods were developed to generate constitutive dissolution rate expressions from quantum calculations, force field models were selected to generate multicomponent glass structures and gel layers, classical molecular modeling was used to study diffusion through nanopores analogous to those in the interfacial gel layer, and a micro-continuum model (KμC) was developed to study coupled diffusion and reaction at the glass-gel-solution interface.
Hamker, Fred H; Wiltschut, Jan
2007-09-01
Most computational models of coding are based on a generative model according to which the feedback signal aims to reconstruct the visual scene as closely as possible. Here we explore an alternative model of feedback. It is derived from studies of attention and is thus probably more flexible with respect to attentive processing in higher brain areas. According to this model, feedback implements a gain increase of the feedforward signal. We use a dynamic model with presynaptic inhibition and Hebbian learning to simultaneously learn feedforward and feedback weights. The weights converge to localized, oriented, and bandpass filters similar to the ones found in V1. Due to presynaptic inhibition the model predicts the organization of receptive fields within the feedforward pathway, whereas feedback primarily serves to tune early visual processing according to the needs of the task.
GENERALIZED VISCOPLASTIC MODELING OF DEBRIS FLOW.
Chen, Cheng-lung
1988-01-01
The earliest model, developed by R. A. Bagnold, was based on the concept of the 'dispersive' pressure generated by grain collisions. Some efforts have recently been made by theoreticians in non-Newtonian fluid mechanics to modify or improve Bagnold's concept or model. A viable rheological model should consist of both a rate-independent part and a rate-dependent part. A generalized viscoplastic fluid (GVF) model that has both parts as well as two major rheological properties (i.e., the normal stress effect and a soil yield criterion) is shown to be sufficiently accurate, yet practical for general use in debris-flow modeling. In fact, Bagnold's model is found to be only a particular case of the GVF model. Analytical solutions for (steady) uniform debris flows in wide channels are obtained from the GVF model based on Bagnold's simplified assumption of constant grain concentration.
Lenski, Richard E.; Wiser, Michael J.; Ribeck, Noah; Blount, Zachary D.; Nahum, Joshua R.; Morris, J. Jeffrey; Zaman, Luis; Turner, Caroline B.; Wade, Brian D.; Maddamsetti, Rohan; Burmeister, Alita R.; Baird, Elizabeth J.; Bundy, Jay; Grant, Nkrumah A.; Card, Kyle J.; Rowles, Maia; Weatherspoon, Kiyana; Papoulis, Spiridon E.; Sullivan, Rachel; Clark, Colleen; Mulka, Joseph S.; Hajela, Neerja
2015-01-01
Many populations live in environments subject to frequent biotic and abiotic changes. Nonetheless, it is interesting to ask whether an evolving population's mean fitness can increase indefinitely, and potentially without any limit, even in a constant environment. A recent study showed that fitness trajectories of Escherichia coli populations over 50 000 generations were better described by a power-law model than by a hyperbolic model. According to the power-law model, the rate of fitness gain declines over time but fitness has no upper limit, whereas the hyperbolic model implies a hard limit. Here, we examine whether the previously estimated power-law model predicts the fitness trajectory for an additional 10 000 generations. To that end, we conducted more than 1100 new competitive fitness assays. Consistent with the previous study, the power-law model fits the new data better than the hyperbolic model. We also analysed the variability in fitness among populations, finding subtle, but significant, heterogeneity in mean fitness. Some, but not all, of this variation reflects differences in mutation rate that evolved over time. Taken together, our results imply that both adaptation and divergence can continue indefinitely—or at least for a long time—even in a constant environment. PMID:26674951
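The model comparison can be sketched as two curve fits. The functional forms below follow the power-law and hyperbolic models discussed for this system (fitness unbounded versus approaching a hard limit), while the data points are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(t, a, b):
    return (b * t + 1) ** a            # unbounded fitness gain

def hyperbolic(t, a, b):
    return 1 + a * t / (t + b)         # hard upper limit of 1 + a

t = np.linspace(0, 60000, 25)          # generations
rng = np.random.default_rng(3)
w = power_law(t, 0.096, 0.001) + rng.normal(0, 0.02, t.size)  # fake fitness data

for name, f, p0 in [("power law", power_law, (0.1, 1e-3)),
                    ("hyperbolic", hyperbolic, (0.7, 5e3))]:
    p, _ = curve_fit(f, t, w, p0=p0, maxfev=20000)
    sse = ((w - f(t, *p)) ** 2).sum()
    print(f"{name}: params={np.round(p, 4)}, SSE={sse:.4f}")
```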
David, Matthias; Borde, Theda; Brenne, Silke; Henrich, Wolfgang; Breckenkamp, Jürgen; Razum, Oliver
2015-01-01
Objective The frequency of caesarean section delivery varies between countries and social groups. Among other factors, it is determined by the quality of obstetrics care. Rates of elective (planned) and emergency (in-labor) caesareans may also vary between immigrants (first generation), their offspring (second- and third-generation women), and non-immigrants because of access and language barriers. Other important points to be considered are whether caesarean section indications and the neonatal outcomes differ in babies delivered by caesarean between immigrants, their offspring, and non-immigrants. Methods A standardized interview on admission to delivery wards at three Berlin obstetric hospitals was performed in a 12-month period in 2011/2012. Questions on socio-demographic and care aspects and on migration (immigrated herself vs. second- and third-generation women vs. non-immigrant) and acculturation status were included. Data was linked with information from the expectant mothers’ antenatal records and with perinatal data routinely documented in the hospital. Regression modeling was used to adjust for age, parity and socio-economic status. Results The caesarean section rates for immigrants, second- and third-generation women, and non-immigrant women were similar. Neither indications for caesarean section delivery nor neonatal outcomes showed statistically significant differences. The only difference found was a somewhat higher rate of crash caesarean sections per 100 births among first generation immigrants compared to non-immigrants. Conclusion Unlike earlier German studies and current studies from other European countries, this study did not find an increased rate of caesarean sections among immigrants, as well as second- and third-generation women, with the possible exception of a small high-risk group. This indicates an equally high quality of perinatal care for women with and without a migration history. PMID:25985437
Generation and composition of medical wastes from private medical microbiology laboratories.
Komilis, Dimitrios; Makroleivaditis, Nikolaos; Nikolakopoulou, Eftychia
2017-03-01
A study on the generation rate and the composition of solid medical wastes (MW) produced by private medical microbiology laboratories (PMML) was conducted in Greece. The novelty of the work is that no such information exists in the literature for this type of laboratory worldwide. Seven laboratories were selected with capacities that ranged from 8 to 88 examinees per day. The study lasted 6 months and daily recording of MW weights was done over 30 days during that period. The rates were correlated to the number of examinees, examinations and personnel. Results indicated that on average 35% of the total MW was hazardous (infectious) medical waste (IFMW). The IFMW generation rates ranged from 11.5 to 32.5 g examinee⁻¹ d⁻¹, while the average value from all 7 labs was 19.6 ± 9.6 g examinee⁻¹ d⁻¹ or 2.27 ± 1.11 g examination⁻¹ d⁻¹. The average urban-type medical waste generation rate was 44.2 ± 32.5 g examinee⁻¹ d⁻¹. Using basic regression modeling, it was shown that the number of examinees and examinations can be predictors of the IFMW generation, but not of the urban-type MW generation. The number of examinations was a better predictor of the MW amounts than the number of examinees. Statistical comparison of the means of the 7 PMML was done with standard ANOVA techniques after checking the normality of the data and after doing the appropriate transformations. Based on the results of this work, it is approximated that 580 tonnes of infectious MW are generated annually by the PMML in Greece. Copyright © 2017 Elsevier Ltd. All rights reserved.
Modeling the effect of temperature on survival rate of Salmonella Enteritidis in yogurt.
Szczawiński, J; Szczawińska, M E; Łobacz, A; Jackowska-Tracz, A
2014-01-01
The aim of the study was to determine the inactivation rates of Salmonella Enteritidis in commercially produced yogurt and to generate primary and secondary mathematical models to predict the behaviour of these bacteria during storage at different temperatures. The samples were inoculated with a mixture of three S. Enteritidis strains and stored at 5 degrees C, 10 degrees C, 15 degrees C, 20 degrees C and 25 degrees C for 24 h. The number of salmonellae was determined every two hours. It was found that the number of bacteria decreased linearly with storage time in all samples. Storage temperature and pH of the yogurt significantly influenced the survival rate of S. Enteritidis (p < 0.05). In samples kept at 5 degrees C the number of salmonellae decreased at the lowest rate, whereas at 25 degrees C the reduction in the number of bacteria was the most dynamic. The natural logarithm of the mean inactivation rates of Salmonella calculated from the primary model was fitted to two secondary models: linear and polynomial. Equations obtained from both secondary models can be applied as a tool for prediction of the inactivation rate of Salmonella in yogurt stored in the temperature range from 5 to 25 degrees C; however, the polynomial model gave the better fit to the experimental data.
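A sketch of the two-stage fitting procedure, assuming a log-linear primary model and linear/polynomial secondary models as described in the abstract; all counts are synthetic.

```python
import numpy as np

# Primary model per temperature: log10 N linear in time; secondary model:
# ln(inactivation rate) as a function of temperature.
temps = np.array([5, 10, 15, 20, 25])            # deg C
hours = np.arange(0, 25, 2.0)
rng = np.random.default_rng(7)

ln_rates = []
for T in temps:
    k_true = 0.02 + 0.004 * T                    # assumed true rate, 1/h
    logN = 6 - k_true * hours / np.log(10) + rng.normal(0, 0.05, hours.size)
    slope = np.polyfit(hours, logN, 1)[0]        # primary model fit
    ln_rates.append(np.log(-slope * np.log(10))) # back to 1/h, then ln
ln_rates = np.array(ln_rates)

lin = np.polyfit(temps, ln_rates, 1)             # secondary: linear
poly = np.polyfit(temps, ln_rates, 2)            # secondary: polynomial
print("linear:", lin.round(4), " polynomial:", poly.round(5))
```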
Minakata, Daisuke; Crittenden, John
2011-04-15
The hydroxyl radical (HO(•)) is a strong oxidant that reacts with electron-rich sites on organic compounds and initiates complex radical chain reactions in aqueous phase advanced oxidation processes (AOPs). Computer-based kinetic modeling requires a reaction pathway generator and predictions of associated reaction rate constants. Previously, we reported a reaction pathway generator that can enumerate the most important elementary reactions for aliphatic compounds. For the reaction rate constant predictor, we develop linear free energy relationships (LFERs) between aqueous phase literature-reported HO(•) reaction rate constants and theoretically calculated free energies of activation for H-atom abstraction from a C-H bond and HO(•) addition to alkenes. The theoretical method uses ab initio quantum mechanical calculations, Gaussian 1-3, for gas phase reactions and a solvation method, COSMO-RS theory, to estimate the impact of water. Theoretically calculated free energies of activation are found to be within approximately ±3 kcal/mol of experimental values. Considering the errors that arise from quantum mechanical calculations and experiments, this should be within acceptable error. The established LFERs are used to predict the HO(•) reaction rate constants within a factor of 5 of the experimental values. This approach may be applied to other reaction mechanisms to establish a library of rate constant predictions for kinetic modeling of AOPs.
Oscillator Seeding of a High Gain Harmonic Generation FEL in a Radiator-First Configuration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gandhi, P.; Wurtele, J.; Penn, G.
2012-05-20
A longitudinally coherent X-ray pulse from a high repetition rate free electron laser (FEL) is desired for a wide variety of experimental applications. However, generating such a pulse with a repetition rate greater than 1 MHz is a significant challenge. The desired high repetition rate sources, primarily high harmonic generation with intense lasers in gases or plasmas, do not exist now and, for the multi-MHz bunch trains that superconducting accelerators can potentially produce, are likely not feasible with current technology. In this paper, we propose to place an oscillator downstream of a radiator. The oscillator generates radiation that is used as a seed for a high gain harmonic generation (HGHG) FEL which is upstream of the oscillator. For the first few pulses the oscillator builds up power and, until power is built up, the radiator has no HGHG seed. As power in the oscillator saturates, the HGHG is seeded and power is produced. The dynamics and stability of this radiator-first scheme is explored analytically and numerically. A single-pass map is derived using a semi-analytic model for FEL gain and saturation. Iteration of the map is shown to be in good agreement with simulations. A numerical example is presented for a soft X-ray FEL.
Research on fuzzy PID control to electronic speed regulator
NASA Astrophysics Data System (ADS)
Xu, Xiao-gang; Chen, Xue-hui; Zheng, Sheng-guo
2007-12-01
As an important part of a diesel engine, the speed regulator plays an important role in stabilizing speed and improving the engine's performance. Traditional PID control must account for many diesel-engine model parameters, and these parameters exhibit non-linear characteristics, so adjusting engine speed with a traditional PID controller is not the best approach, especially for a diesel-engine generator set. In this paper, a fuzzy PID control strategy is proposed and some problems concerning its use in an electronic speed regulator are discussed. A mathematical model of the electronic control system for a diesel-engine generator set is established, and the way the PID parameters in the model affect the behaviour of the system is analyzed. It is then shown that the differential coefficient must be applied in the control design to reduce the dynamic deviation of the system and the adjusting time. Based on this control theory, a study combining fuzzy control with the PID calculation for tuning the fuzzy PID parameters is implemented. A simulation experiment on the electronic speed regulator system was also conducted using Matlab/Simulink and the Fuzzy Toolbox. Compared with the traditional PID algorithm, the simulation results show obvious improvements in the instantaneous speed governing rate and the steady-state speed governing rate of the diesel-engine generator set when the fuzzy logic control strategy is used.
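A minimal sketch of a fuzzy-supervised PID loop of this general kind: triangular membership functions on the speed error rescale the proportional and derivative gains online. The rule base, membership functions and plant model are invented for illustration and are not the paper's design.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

class FuzzyPID:
    """PID whose Kp/Kd are rescaled online by fuzzy rules on |error|."""
    def __init__(self, kp=1.0, ki=0.2, kd=0.05):
        self.kp0, self.ki, self.kd0 = kp, ki, kd
        self.integ, self.prev = 0.0, 0.0

    def step(self, err, dt):
        e = min(abs(err), 1.5)               # clamp into the fuzzy universe
        small = tri(e, -1.0, 0.0, 0.5)
        med = tri(e, 0.0, 0.5, 1.0)
        large = tri(e, 0.5, 1.0, 2.0)
        w = small + med + large
        # rules: large error -> boost Kp, cut Kd; small error -> the reverse
        kp = self.kp0 * (0.8 * small + 1.0 * med + 1.5 * large) / w
        kd = self.kd0 * (1.2 * small + 1.0 * med + 0.6 * large) / w
        self.integ += err * dt
        deriv = (err - self.prev) / dt
        self.prev = err
        return kp * err + self.ki * self.integ + kd * deriv

ctrl, speed, target = FuzzyPID(), 0.0, 1.0
for _ in range(400):                         # crude first-order engine stand-in
    u = ctrl.step(target - speed, 0.01)
    speed += 0.01 * (u - 0.5 * speed)
print("final speed:", round(speed, 3))
```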
In vivo generator for radioimmunotherapy
Mausner, Leonard F.; Srivastava, Suresh G.; Straub, Rita F.
1988-01-01
The present invention involves labeling monoclonal antibodies with intermediate half-life radionuclides which decay to much shorter half-life daughters with desirable high energy beta emissions. Since the daughter will be in equilibrium with the parent, it can exert an in-situ tumoricidal effect over a prolonged period in a localized fashion, essentially as an "in-vivo generator". This approach circumvents the inverse relationship between half-life and beta decay energy. Compartmental modeling was used to determine the relative distribution of dose from both parent and daughter nuclei in target and non-target tissues. Actual antibody biodistribution data have been used to fit realistic rate constants for a model containing tumor, blood, and non-tumor compartments. These rate constants were then used in a variety of simulations for two generator systems, Ba-128/Cs-128 (t1/2 = 2.4 d / 3.6 min) and Pd-112/Ag-112 (t1/2 = 0.9 d / 192 min). The results show that higher tumor/background dose ratios may be achievable by virtue of the rapid excretion of a chemically different daughter during the uptake and clearance phases. This modeling also quantitatively demonstrates the favorable impact on activity distribution of a faster monoclonal antibody tumor uptake, especially when the antibody is labeled with a radionuclide with a comparable half-life.
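The compartmental idea can be sketched with a three-compartment ODE system. The half-lives follow the Pd-112/Ag-112 pair quoted above, while the uptake and clearance rate constants are placeholders, and daughter excretion is ignored for brevity.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Parent activity bound in a tumour compartment feeds a short-lived
# daughter generated in situ; all rates in 1/h.
lam_p = np.log(2) / (0.9 * 24)        # parent decay (t1/2 = 0.9 d)
lam_d = np.log(2) / (192 / 60)        # daughter decay (t1/2 = 192 min)
k_up, k_cl = 0.15, 0.02               # tumour uptake / clearance (assumed)

def rhs(t, y):
    blood_p, tum_p, tum_d = y
    return [-(lam_p + k_up) * blood_p,
            k_up * blood_p - (lam_p + k_cl) * tum_p,
            lam_p * tum_p - lam_d * tum_d]   # daughter produced in the tumour

sol = solve_ivp(rhs, (0, 120), [1.0, 0.0, 0.0], max_step=0.1)
print("peak daughter activity in tumour:", round((lam_d * sol.y[2]).max(), 4))
```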
Simulation of a Geiger-Mode Imaging LADAR System for Performance Assessment
Kim, Seongjoon; Lee, Impyeong; Kwon, Yong Joon
2013-01-01
As LADAR systems applications gradually become more diverse, new types of systems are being developed. When developing new systems, simulation studies are an essential prerequisite. A simulator enables performance predictions and optimal system parameters at the design level, as well as providing sample data for developing and validating application algorithms. The purpose of the study is to propose a method for simulating a Geiger-mode imaging LADAR system. We develop simulation software to assess system performance and generate sample data for the applications. The simulation is based on three aspects of modeling—the geometry, radiometry and detection. The geometric model computes the ranges to the reflection points of the laser pulses. The radiometric model generates the return signals, including the noises. The detection model determines the flight times of the laser pulses based on the nature of the Geiger-mode detector. We generated sample data using the simulator with the system parameters and analyzed the detection performance by comparing the simulated points to the reference points. The proportion of the outliers in the simulated points reached 25.53%, indicating the need for efficient outlier elimination algorithms. In addition, the false alarm rate and dropout rate of the designed system were computed as 1.76% and 1.06%, respectively. PMID:23823970
Experimental and numerical investigation of hydro power generator ventilation
NASA Astrophysics Data System (ADS)
Jamshidi, H.; Nilsson, H.; Chernoray, V.
2014-03-01
Improvements in ventilation and cooling offer means to run hydro power generators at higher power output and at varying operating conditions. The electromagnetic, frictional and windage losses generate heat. The heat is removed by an air flow that is driven by fans and/or the rotor itself. The air flow goes through ventilation channels in the stator, to limit the electrical insulation temperatures. The temperature should be kept limited and uniform in both time and space, avoiding thermal stresses and hot-spots. For that purpose it is important that the flow of cooling air is distributed uniformly, and that flow separation and recirculation are minimized. Improvements of the air flow properties also lead to an improvement of the overall efficiency of the machine. A significant part of the windage losses occurs at the entrance of the stator ventilation channels, where the air flow turns abruptly from tangential to radial. The present work focuses exclusively on the air flow inside a generator model, and in particular on the flow inside the stator channels. The generator model design of the present work is based on a real generator that was previously studied. The model is manufactured taking into consideration the needs of both the experimental and numerical methodologies. Computational Fluid Dynamics (CFD) results have been used in the process of designing the experimental setup. The rotor and stator are manufactured using rapid prototyping and plexiglass, yielding high geometrical accuracy and optical experimental access. A special inlet section is designed for accurate air flow rate and inlet velocity profile measurements. The experimental measurements include Particle Image Velocimetry (PIV) and total pressure measurements inside the generator. The CFD simulations are performed based on the OpenFOAM CFD toolbox and the steady-state frozen-rotor approach. Specific studies are performed on the effect of adding "pick-ups" to the spacers, and on the effect of the inlet fan blades on the flow rate through the model. The CFD results capture the experimental flow details to a reasonable level of accuracy.
Multiple exciton generation and recombination in carbon nanotubes and nanocrystals.
Kanemitsu, Yoshihiko
2013-06-18
Semiconducting nanomaterials such as single-walled carbon nanotubes (SWCNTs) and nanocrystals (NCs) exhibit unique size-dependent quantum properties. They have therefore attracted considerable attention from the viewpoints of fundamental physics and functional device applications. SWCNTs and NCs also provide an excellent new stage for experimental studies of many-body effects of electrons and excitons on optical processes in nanomaterials. In this Account, we discuss multiple exciton generation and recombination in SWCNTs and NCs for next-generation photovoltaics. Strongly correlated ensembles of conduction-band electrons and valence-band holes in semiconductors are complex quantum systems that exhibit unique optical phenomena. In bulk crystals, the carrier recombination dynamics can be described by a simple model, which includes the nonradiative single-carrier trapping rate, the radiative two-carrier recombination rate, and the nonradiative three-carrier Auger recombination rate. The nonradiative Auger recombination rate determines the carrier recombination dynamics at high carrier density and depends on the spatial localization of carriers in two-dimensional quantum wells. The Auger recombination and multiple exciton generation rates can be advantageously manipulated by nanomaterials with designated energy structures. In addition, SWCNTs and NCs show quantized recombination dynamics of multiple excitons and carriers. In one-dimensional SWCNTs, excitons have large binding energies and are very stable at room temperature. The extremely rapid Auger recombination between excitons determines the photoluminescence (PL) intensity, the PL linewidth, and the PL lifetime. SWCNTs can undergo multiple exciton generation, while strong exciton-exciton interactions and complicated exciton structures affect the quantized Auger rate and the multiple exciton generation efficiency. Interestingly, in zero-dimensional NC quantum dots, quantized Auger recombination causes unique optical phenomena. The breakdown of the k-conservation rule and strong Coulomb interactions between carriers in NCs enhance the Auger recombination rate and decrease the energy threshold for multiple exciton generation. We discuss this impact of the k-conservation rule on the two-carrier radiative recombination and the three-carrier Auger recombination processes in indirect-gap semiconductor Si NCs. In NCs and SWCNTs, multiple exciton generation competes with Auger recombination, surface trapping of excitons, and cooling of hot electrons or excitons. In addition, we explore heterostructured NCs and impurity-doped NCs in the context of the optimization of charge carrier extraction from excitons in NCs.
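The "simple model" referred to for bulk crystals is commonly written as the ABC rate equation; a sketch with typical order-of-magnitude coefficients (not values for any specific material) follows.

```python
import numpy as np
from scipy.integrate import solve_ivp

# ABC model: dn/dt = -A n - B n^2 - C n^3
# (single-carrier trapping, radiative recombination, Auger recombination)
A, B, C = 1e7, 1e-10, 1e-29           # s^-1, cm^3 s^-1, cm^6 s^-1

def rhs(t, n):
    return -A * n - B * n**2 - C * n**3

sol = solve_ivp(rhs, (0, 1e-6), [1e19], rtol=1e-8)
n = sol.y[0]
frac_auger = C * n**3 / (A * n + B * n**2 + C * n**3)
print("Auger share of decay: %.2f -> %.2f" % (frac_auger[0], frac_auger[-1]))
```

The cubic term dominates only at high carrier density, which is why Auger recombination controls the dynamics of photoexcited nanostructures at high excitation, as the Account describes.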
NASA Astrophysics Data System (ADS)
Saha, B.; Dietl, C.
2009-04-01
Previous studies on decollement kinematics have shed light on the differing structures of fold thrust belts forming above lithologically different decollements, such as shales, carbonates and evaporites. The factors affecting decollement kinematics most are (1) rock rheology and (2) deformation rate. This study is intended to explain the deformation style of the Naga fold thrust belt (NFTB, NE India) with the aid of sand box modelling performed at a basal temperature of 50 °C and deformed at strain rates varying from 3×10⁻⁶ s⁻¹ to 4×10⁻³ s⁻¹. The models are made up (from bottom to top) of a 0.25 cm thick layer of temperature-sensitive PDMS (polydimethylsiloxane), overlain by 1.75 cm of alternating black and yellow sand. The basal PDMS layer simulates a shale decollement. Decollements in the NFTB are generally developed in the Barail Shale of Oligocene age at 50 °C (the depth of the Barail Shale is about 2 km and the prevailing geothermal gradient is 25 °C/km). The sand layers simulate the brittle sandstones which prevail in the NFTB. All of the models were subjected to 35% compression, as the NFTB experienced similar shortening. The varying deformation velocities were chosen to model differing decollement rheologies. PDMS simulates a shale decollement, which is mobile when overpressured and undergoes compression. The rheology of PDMS changes considerably with the applied temperature and strain rate. PDMS, although generally regarded as Newtonian, behaves non-Newtonian at strain rates of 10⁻³ s⁻¹. The relation between decollement pore fluid overpressure and the model strain rate, the material rheology, the scaled body forces and the density of the decollement in nature can be expressed as: λ = 1 − [V η_model / (f H_model ρ_nature g H_nature σ*)], where λ = coefficient of pore fluid overpressure in the decollement, V = the deformation velocity with which the models are deforming, η_model = viscosity of the decollement material, f = the coefficient of overpressure, estimated at 0.85 for a frictional decollement, H_model = thickness of the decollement in the models, ρ_nature = density of the shale decollement in its natural analogue, g = the acceleration of gravity, H_nature = thickness of the decollement in nature, and σ* = the scaled body forces. Hence, it can be suggested that the value of pore fluid overpressure depends on the velocity of deformation, the viscosity and thickness of the model decollement, the nature-to-model ratio of body forces, and the density and thickness of the natural analogues. The values for the natural analogue and model decollement thickness are constant; only the viscosity (dependent on temperature and applied strain rate) varies between models, in turn altering the coefficient of overpressure. Rapid shortening rates (model group 1, deforming at strain rates from 4×10⁻⁵ s⁻¹ to 4×10⁻³ s⁻¹) generate more complicated structures than slower ones (model group 2, deforming at strain rates from 3×10⁻⁶ s⁻¹ to 1.6×10⁻⁵ s⁻¹). Thrust-related folds predominate in group 1 models, whereas thrusts and backthrusts dominate in group 2 models. Group 1 models display closely spaced horse blocks; shortening in the horse blocks is accommodated mainly by box folding, and they generate fewer backthrusts than group 2 models. Group 2 models develop large spacing between the horse blocks and show structural highs bordered by both forethrusts and backthrusts. The horses are persistent along the strike direction.
Group 1 models are higher and possess a higher structural taper than the group 2 models. In both groups it is observed that, once a new structure forms, deformation ceases in the old structure and it is structurally abandoned. The results of these physical models therefore demonstrate very well that the deformation rate and the decollement rheology are the key factors controlling the structural style of a fold thrust belt. Comparing the modelling results with the published seismic section of the NFTB, it becomes clear that the structures observed in the group 2 models, i.e. those deformed at slow strain rates, are very close to the deformation structures observed in the NFTB. The seismic section shows a basal decollement forming a low-angle thrust that reaches up to the surface. Thrust horses are separated by broad synclines. Furthermore, the data reveal the buried nature of the thrust front, with a triangle-zone geometry. This observation is in agreement with the results of the group 2 models, which show the development of a dominantly forward imbricate thrust sequence. Evidently, the deformation evolution and structural features of the NFTB are governed by its weak substrata deforming at a slow strain rate, resulting in the generation of an imbricate thrust zone.
1988/1989 household travel survey
DOT National Transportation Integrated Search
1989-07-01
The primary objectives of this study were to provide the data: (1) : to update the trip generation rates used in the Maricopa Association of Governments (MAG) travel demand forecasting process, and; (2) to validate the MAG trip distribution model. Th...
Kumal, Raju R; Abu-Laban, Mohammad; Landry, Corey R; Kruger, Blake; Zhang, Zhenyu; Hayes, Daniel J; Haber, Louis H
2016-10-11
The photocleaving dynamics of colloidal microRNA-functionalized nanoparticles are studied using time-dependent second harmonic generation (SHG) measurements. Model drug-delivery systems composed of oligonucleotides attached to either silver nanoparticles or polystyrene nanoparticles using a nitrobenzyl photocleavable linker are prepared and characterized. The photoactivated controlled release is observed to be most efficient on resonance at 365 nm irradiation, with pseudo-first-order rate constants that are linearly proportional to irradiation powers. Additionally, silver nanoparticles show a 6-fold plasmon enhancement in photocleaving efficiency over corresponding polystyrene nanoparticle rates, while our previous measurements on gold nanoparticles show a 2-fold plasmon enhancement compared to polystyrene nanoparticles. Characterizations including extinction spectroscopy, electrophoretic mobility, and fluorimetry measurements confirm the analysis from the SHG results. The real-time SHG measurements are shown to be a highly sensitive method for investigating plasmon-enhanced photocleaving dynamics in model drug delivery systems.
Pretest analysis of natural circulation on the PWR model PACTEL with horizontal steam generators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kervinen, T.; Riikonen, V.; Ritonummi, T.
A new test facility, the parallel channel test loop (PACTEL), has been designed and built to simulate the major components and system behavior of pressurized water reactors (PWRs) during postulated small- and medium-break loss-of-coolant accidents. Pretest calculations have been performed for the first test series, and the results of these calculations are being used for planning experiments, for adjusting the data acquisition system, and for choosing the optimal position and type of instrumentation. PACTEL is a volumetrically scaled (1:305) model of the VVER-440 PWR. In all the calculated cases, natural circulation was found to be effective in removing the heat from the core to the steam generator. The loop mass flow rate peaked at 60% mass inventory. The straightening of the loop seals increased the mass flow rate significantly.
Haji-Maghsoudi, Saiedeh; Haghdoost, Ali-akbar; Rastegari, Azam; Baneshi, Mohammad Reza
2013-01-01
Background: Policy makers need models to be able to detect groups at high risk of HIV infection. Incomplete records and dirty data are frequently seen in national data sets. The presence of missing data challenges the practice of model development. Several studies have suggested that the performance of imputation methods is acceptable when the missing rate is moderate. One issue that has received less attention, and is addressed here, is the role of the pattern of missing data. Methods: We used information on 2720 prisoners. Results derived from fitting a regression model to the whole data set served as the gold standard. Missing data were then generated so that 10%, 20% and 50% of data were lost. In scenario 1, we generated missing values, at the above rates, in one variable which was significant in the gold model (age). In scenario 2, a small proportion of each independent variable was dropped. Four imputation methods, under different Event Per Variable (EPV) values, were compared in terms of selection of important variables and parameter estimation. Results: In scenario 2, bias in the estimates was low and the performances of all methods for handling missing data were similar. All methods at all missing rates were able to detect the significance of age. In scenario 1, biases in the estimates increased, in particular at the 50% missing rate. Here, at EPVs of 10 and 5, imputation methods failed to capture the effect of age. Conclusion: In scenario 2, all imputation methods at all missing rates were able to detect age as being significant. This was not the case in scenario 1. Our results showed that the performance of imputation methods depends on the pattern of missing data. PMID:24596839
Doulamis, A D; Doulamis, N D; Kollias, S D
2003-01-01
Multimedia services, and especially digital video, are expected to be the major traffic component transmitted over communication networks [such as internet protocol (IP)-based networks]. For this reason, traffic characterization and modeling of such services are required for efficient network operation. The generated models can be used as traffic rate predictors during the network operation phase (online traffic modeling) or as video generators for estimating the network resources during the network design phase (offline traffic modeling). In this paper, an adaptable neural-network architecture is proposed covering both cases. The scheme is based on an efficient recursive weight estimation algorithm, which adapts the network response to current conditions. In particular, the algorithm updates the network weights so that 1) the network output, after the adaptation, is approximately equal to current bit rates (current traffic statistics) and 2) minimal degradation of the obtained network knowledge is incurred. It can be shown that the proposed adaptable neural-network architecture simulates a recursive nonlinear autoregressive model (RNAR), similar to the notation used in the linear case. The algorithm presents low computational complexity and high efficiency in tracking traffic rates, in contrast to conventional retraining schemes. Furthermore, for the problem of offline traffic modeling, a novel correlation mechanism is proposed for capturing the burstiness of the actual MPEG video traffic. The performance of the model is evaluated using several real-life MPEG coded video sources of long duration and compared with other linear/nonlinear techniques used for both cases. The results indicate that the proposed adaptable neural-network architecture presents better performance than the other examined techniques.
Statistical characteristics of climbing fiber spikes necessary for efficient cerebellar learning.
Kuroda, S; Yamamoto, K; Miyamoto, H; Doya, K; Kawato, M
2001-03-01
Mean firing rates (MFRs), with analogue values, have thus far been used as information carriers of neurons in most brain theories of learning. However, neurons transmit the signal by spikes, which are discrete events. The climbing fibers (CFs), which are known to be essential for cerebellar motor learning, fire at ultra-low rates (around 1 Hz), and it is not yet understood theoretically how high-frequency information can be conveyed and how learning of smooth and fast movements can be achieved. Here we address whether cerebellar learning can be achieved by CF spikes instead of the conventional MFR in an eye movement task, such as the ocular following response (OFR), and in an arm movement task. There are two major afferents into cerebellar Purkinje cells, the parallel fiber (PF) and the CF, and the synaptic weights between PFs and Purkinje cells have been shown to be modulated by the stimulation of both types of fiber. The modulation of the synaptic weights is regulated by cerebellar synaptic plasticity. In this study we simulated cerebellar learning using CF signals as spikes instead of the conventional MFR. To generate the spikes we used the following four spike generation models: (1) a Poisson model in which the spike interval probability follows a Poisson distribution, (2) a gamma model in which the spike interval probability follows the gamma distribution, (3) a max model in which a spike is generated when a synaptic input reaches maximum, and (4) a threshold model in which a spike is generated when the input crosses a certain small threshold. We found that, in an OFR task with a constant visual velocity, learning was successful with the stochastic models, such as the Poisson and gamma models, but not with the deterministic models, such as the max and threshold models. In an OFR task with a stepwise velocity change and in an arm movement task, learning could be achieved only with the Poisson model. In addition, for efficient cerebellar learning, the distribution of CF spike-occurrence time after stimulus onset must capture at least the first, second and third moments of the temporal distribution of error signals.
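The spike-generation schemes are straightforward to prototype. A minimal Python sketch of three of them follows (the rate signal, time step and gamma order are assumed values; the gamma process is approximated by thinning a faster Poisson process, which yields Erlang-distributed intervals):

import numpy as np

rng = np.random.default_rng(1)
dt = 1e-3                                        # 1 ms time step
t = np.arange(0.0, 10.0, dt)
rate = 1.0 + 0.5 * np.sin(2 * np.pi * 0.5 * t)   # ~1 Hz mean CF firing rate

# (1) Poisson model: spike probability in each bin is rate*dt
poisson_spikes = rng.random(t.size) < rate * dt

# (2) Gamma model (order k): run a Poisson process at rate k*rate and keep
#     every k-th event, so intervals are sums of k exponentials (Erlang/gamma)
k = 4
events = rng.random(t.size) < k * rate * dt
idx = np.flatnonzero(events)[::k]
gamma_spikes = np.zeros(t.size, dtype=bool)
gamma_spikes[idx] = True

# (4) Threshold-style deterministic model: spike whenever the integrated
#     rate crosses successive unit thresholds
phase = np.cumsum(rate) * dt
threshold_spikes = np.diff(np.floor(phase), prepend=0.0) > 0

print(poisson_spikes.sum(), gamma_spikes.sum(), threshold_spikes.sum())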
Experimental investigation of fan-folded piezoelectric energy harvesters for powering pacemakers
Ansari, M H; Karami, M Amin
2018-01-01
This paper studies the fabrication and testing of a magnet-free piezoelectric energy harvester (EH) for powering biomedical devices and sensors inside the body. The design for the EH is a fan-folded structure consisting of bimorph piezoelectric beams folding on top of each other. An actual-size experimental prototype is fabricated to verify the developed analytical models. The model is verified by matching the analytical results of the tip acceleration frequency response functions (FRF) and voltage FRF with the experimental results. The generated electricity is measured when the EH is excited by the heartbeat. A closed-loop shaker system is utilized to reproduce the heartbeat vibrations. Achieving a low fundamental natural frequency is a key factor in generating sufficient energy for pacemakers using heartbeat vibrations. It is shown that the natural frequency of the small-scale device is less than 20 Hz due to its unique fan-folded design. The experimental results show that the small-scale EH generates sufficient power for state-of-the-art pacemakers. The 1 cm3 EH with an 18.4 g tip mass generates more than 16 μW of power from a normal heartbeat waveform. The robustness of the device to the heart rate is also studied by measuring the relation between the power output and the heart rate. PMID:29674807
Venugopal, G; Deepak, P; Ghosh, Diptasree M; Ramakrishnan, S
2017-11-01
Surface electromyography is a non-invasive technique used for recording the electrical activity of neuromuscular systems. These signals are random, complex and multi-component. Several techniques exist to extract information about the force exerted by muscles during any activity. This work attempts to generate surface electromyography signals for various magnitudes of force under isometric non-fatigue and fatigue conditions using a feedback model. The model is based on existing current-distribution and volume-conductor relations, a feedback control algorithm for rate coding, and generation of the firing pattern. The results show that the synthetic surface electromyography signals are highly complex in both non-fatigue and fatigue conditions. Furthermore, the surface electromyography signals have higher amplitude and lower frequency under the fatigue condition. This model can be used to study the influence of various signal parameters under fatigue and non-fatigue conditions.
NASA Astrophysics Data System (ADS)
Hur, Ji-Hyun; Park, Junghak; Jeon, Sanghun
2017-02-01
A model that universally describes the characteristics of photocurrent in molybdenum disulphide (MoS2) thin-film transistor (TFT) photosensors in both ‘light on’ and ‘light off’ conditions is presented for the first time. We considered possible material-property-dependent carrier generation and recombination mechanisms in layered MoS2 channels with different numbers of layers. We propose that the recombination rates, composed mainly of direct band-to-band recombination and interface-trap-involved recombination, change with the light condition and the number of layers. By comparison with the experimental results, it is shown that the model performs well in describing the photocurrent behaviors of MoS2 TFT photosensors, including photocurrent generation under illumination and the very slow, persistent decay of the photocurrent in the dark, for a range of MoS2 layer numbers.
From Spiking Neuron Models to Linear-Nonlinear Models
Ostojic, Srdjan; Brunel, Nicolas
2011-01-01
Neurons transform time-varying inputs into action potentials emitted stochastically at a time dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to which extent the input-output mapping of biophysically more realistic, spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally we introduce an adaptive timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates. PMID:21283777
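The LN cascade itself is compact. A minimal Python sketch follows (the exponential filter, the rectifying nonlinearity and their constants are assumed stand-ins; the paper derives the filter and nonlinearity analytically for each neuron model):

import numpy as np

rng = np.random.default_rng(2)
dt = 1e-3
t = np.arange(0.0, 5.0, dt)
stimulus = rng.normal(0.0, 1.0, t.size)          # white-noise input current

# Linear stage: causal exponential temporal filter (an assumed form)
tau = 0.02
kernel = np.exp(-np.arange(0.0, 0.2, dt) / tau)
kernel /= kernel.sum()
filtered = np.convolve(stimulus, kernel, mode="full")[: t.size]

# Nonlinear stage: static rectifying map from filtered input to firing rate
r0, gain = 5.0, 20.0                              # Hz, Hz per unit input
rate = r0 + gain * np.maximum(filtered, 0.0)

# Spikes emitted as an inhomogeneous Poisson process at that rate
spikes = rng.random(t.size) < rate * dt
print(f"mean rate ~ {spikes.sum() / t[-1]:.1f} Hz")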
Early College Can Boost College Success Rates for Low-Income, First-Generation Students
ERIC Educational Resources Information Center
Ndiaye, Mamadou; Wolfe, Rebecca E.
2016-01-01
Early college high school models are designed to encourage and assist traditionally underrepresented groups of students (low-income, Latino, and African-American) to persist in and graduate from high school while earning college credit. Some of the models target high school dropouts, with the aim of helping them acquire the education and training…
2013-02-04
…i.e., volumetric muscle loss (VML). The explicit goal is to restore functional capacity to the injured tissue by promoting generation of muscle fibers… [3,23,25,27,28]. As a result, transplantation of a variety of ECMs in preclinical animal models has resulted in modest levels of muscle fiber generation at… the site of the defect during the initial months post-injury [20,28-30]. However, an apparent enhanced rate of muscle fiber generation at…
Titan I propulsion system modeling and possible performance improvements
NASA Astrophysics Data System (ADS)
Giusti, Oreste
This thesis features the Titan I propulsion systems and offers data-supported suggestions for improvements to increase performance. The original propulsion systems were modeled both graphically in CAD and via equations. Due to the limited availability of published information, it was necessary to create a more detailed, secondary set of models. Various engineering equations pertinent to rocket engine design were implemented in order to generate the desired extra detail. This study describes how these new models were then imported into the ESI CFD Suite. Various parameters, including bi-propellant combinations, pressures, temperatures, and mass flow rates, are applied to these imported models as inputs. The results were then processed with ESI VIEW, which is visualization software. The output files were analyzed for forces in the nozzle, and various results were generated, including sea-level thrust and Isp. Experimental data are provided to compare the original engine configuration models to the derivative suggested-improvement models.
Mou, Zishen; Scheutz, Charlotte; Kjeldsen, Peter
2015-06-01
Methane (CH₄) generated from low-organic waste degradation at four Danish landfills was estimated by three first-order decay (FOD) landfill gas (LFG) generation models (LandGEM, IPCC, and Afvalzorg). Actual waste data from Danish landfills were applied to fit the waste categories required by the IPCC and Afvalzorg models. In general, the single-phase model, LandGEM, significantly overestimated CH₄ generation, because it applied too-high default values for key parameters to handle low-organic waste scenarios. The key parameters were the biochemical CH₄ potential (BMP) and the CH₄ generation rate constant (k-value). In comparison to the IPCC model, the Afvalzorg model was more suitable for estimating CH₄ generation at Danish landfills, because it defined more appropriate waste categories rather than traditional municipal solid waste (MSW) fractions. Moreover, the Afvalzorg model could better show the influence of not only the total disposed waste amount but also the various waste categories. By using laboratory-determined BMPs and k-values for shredder, sludge, mixed bulky waste, and street-cleaning waste, the Afvalzorg model was revised. The revised model estimated smaller cumulative CH₄ generation at the four Danish landfills (from the start of disposal until 2020 and until 2100). Through a CH₄ mass balance approach, fugitive CH₄ emissions from whole sites and from a specific cell for shredder waste were aggregated based on the revised Afvalzorg model outcomes. The aggregated results were in good agreement with field measurements, indicating that the revised Afvalzorg model can provide practical and accurate estimation of Danish LFG emissions. This study is valuable for both researchers and engineers aiming to predict, control, and mitigate fugitive CH₄ emissions from landfills receiving low-organic waste. Landfill operators use first-order decay (FOD) models to estimate methane (CH₄) generation. A single-phase model (LandGEM) and a traditional model (IPCC) could result in overestimation when handling a low-organic waste scenario. Site-specific data were important and capable of calibrating key parameter values in FOD models. The comparison of the revised Afvalzorg model outcomes with field measurements at four Danish landfills provided a guideline for revising the Pollutants Release and Transfer Registers (PRTR) model, as well as indicating noteworthy waste fractions that could emit CH₄ at modern landfills.
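The first-order decay bookkeeping underlying all three models is simple to sketch. In the multi-phase form below (Python; the BMPs, k-values and tonnages are placeholders, not the laboratory-determined values), each waste category decays independently from its year of disposal:

import numpy as np

# Waste categories with assumed BMP (m3 CH4 per tonne) and k (1/yr)
categories = {
    "shredder":        {"bmp": 10.0, "k": 0.10},
    "sludge":          {"bmp": 25.0, "k": 0.20},
    "mixed_bulky":     {"bmp": 15.0, "k": 0.05},
    "street_cleaning": {"bmp":  5.0, "k": 0.10},
}
# tonnes disposed per category per year, 2000-2009 (synthetic)
disposal_years = np.arange(2000, 2010)
tonnes = {c: np.full(disposal_years.size, 1000.0) for c in categories}

def ch4_generated(year):
    """First-order decay: each year's waste decays exponentially from disposal."""
    total = 0.0
    for cat, p in categories.items():
        age = year - disposal_years
        mask = age >= 0
        # annual generation = BMP * W * (e^{-k*age} - e^{-k*(age+1)})
        total += np.sum(p["bmp"] * tonnes[cat][mask]
                        * (np.exp(-p["k"] * age[mask])
                           - np.exp(-p["k"] * (age[mask] + 1))))
    return total  # m3 CH4 generated in that year

cumulative_2020 = sum(ch4_generated(y) for y in range(2000, 2021))
print(f"cumulative CH4 to 2020: {cumulative_2020:.0f} m3")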
Rivera-Rivera, Carlos J.; Montoya-Burgos, Juan I.
2016-01-01
Phylogenetic inference artifacts can occur when sequence evolution deviates from assumptions made by the models used to analyze them. The combination of strong model assumption violations and highly heterogeneous lineage evolutionary rates can become problematic in phylogenetic inference, and lead to the well-described long-branch attraction (LBA) artifact. Here, we define an objective criterion for assessing lineage evolutionary rate heterogeneity among predefined lineages: the result of a likelihood ratio test between a model in which the lineages evolve at the same rate (homogeneous model) and a model in which different lineage rates are allowed (heterogeneous model). We implement this criterion in the algorithm Locus Specific Sequence Subsampling (LS³), aimed at reducing the effects of LBA in multi-gene datasets. For each gene, LS³ sequentially removes the fastest-evolving taxon of the ingroup and tests for lineage rate homogeneity until all lineages have uniform evolutionary rates. The sequences excluded from the homogeneously evolving taxon subset are flagged as potentially problematic. The software implementation provides the user with the possibility to remove the flagged sequences for generating a new concatenated alignment. We tested LS³ with simulations and two real datasets containing LBA artifacts: a nucleotide dataset regarding the position of Glires within mammals and an amino-acid dataset concerning the position of nematodes within bilaterians. The initially incorrect phylogenies were corrected in all cases upon removing data flagged by LS³. PMID:26912812
Forecasting Lightning Threat using Cloud-Resolving Model Simulations
NASA Technical Reports Server (NTRS)
McCaul, Eugene W., Jr.; Goodman, Steven J.; LaCasse, Katherine M.; Cecil, Daniel J.
2008-01-01
Two new approaches are proposed and developed for making time and space dependent, quantitative short-term forecasts of lightning threat, and a blend of these approaches is devised that capitalizes on the strengths of each. The new methods are distinctive in that they are based entirely on the ice-phase hydrometeor fields generated by regional cloud-resolving numerical simulations, such as those produced by the WRF model. These methods are justified by established observational evidence linking aspects of the precipitating ice hydrometeor fields to total flash rates. The methods are straightforward and easy to implement, and offer an effective near-term alternative to the incorporation of complex and costly cloud electrification schemes into numerical models. One method is based on upward fluxes of precipitating ice hydrometeors in the mixed-phase region at the -15 °C level, while the second method is based on the vertically integrated amounts of ice hydrometeors in each model grid column. Each method can be calibrated by comparing domain-wide statistics of the peak values of simulated flash rate proxy fields against domain-wide peak total lightning flash rate density data from observations. Tests show that the first method is able to capture much of the temporal variability of the lightning threat, while the second method does a better job of depicting the areal coverage of the threat. Our blended solution is designed to retain most of the temporal sensitivity of the first method, while adding the improved spatial coverage of the second. Exploratory tests for selected North Alabama cases show that, because WRF can distinguish the general character of most convective events, our methods show promise as a means of generating quantitatively realistic fields of lightning threat. However, because the models tend to have more difficulty in predicting the instantaneous placement of storms, forecasts of the detailed location of the lightning threat based on single simulations can be in error. Although these model shortcomings presently limit the precision of lightning threat forecasts from individual runs of current-generation models, the techniques proposed herein should continue to be applicable as newer and more accurate physically-based model versions, physical parameterizations, initialization techniques and ensembles of forecasts become available.
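Both proxies and the blend reduce to a few array operations on model output. A Python sketch with synthetic gridded fields (the grid, the -15 °C level index, the calibration peak and the blending weights are all assumptions for illustration):

import numpy as np

rng = np.random.default_rng(3)
nz, ny, nx = 40, 100, 100
dz = 500.0                                           # m, uniform layer depth
w = rng.normal(0.0, 2.0, (nz, ny, nx))               # vertical velocity (m/s)
q_ice = np.abs(rng.normal(0.0, 1e-4, (nz, ny, nx)))  # precipitating ice (kg/kg)
rho = np.linspace(1.1, 0.4, nz)[:, None, None]       # air density (kg/m3)
k15 = 25                                             # level nearest -15 °C

# Method 1: upward flux of precipitating ice at the -15 °C level
flux_proxy = np.maximum(w[k15], 0.0) * q_ice[k15] * rho[k15]

# Method 2: vertically integrated ice content in each grid column
vii_proxy = np.sum(q_ice * rho * dz, axis=0)

# Calibrate each proxy so its domain-wide peak matches an observed peak
# flash rate density (the calibration constant is illustrative)
obs_peak = 0.3                                       # flashes/km2/min, assumed
f1 = obs_peak * flux_proxy / flux_proxy.max()
f2 = obs_peak * vii_proxy / vii_proxy.max()

# Blend: weighted toward method 1 to retain its temporal sensitivity
blend = 0.95 * f1 + 0.05 * f2
print(blend.max())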
Ahmadpanah, J; Ghavi Hossein-Zadeh, N; Shadparvar, A A; Pakdel, A
2017-02-01
1. The objectives of the current study were to investigate the effect of the incidence rate (5%, 10%, 20%, 30% and 50%) of ascites syndrome (AS) on the expression of genetic characteristics for body weight at 5 weeks of age (BW5) and AS, and to compare different methods of genetic parameter estimation for these traits. 2. Based on stochastic simulation, a population with discrete generations was created in which random mating was used for 10 generations. Two methods, restricted maximum likelihood and a Bayesian approach via Gibbs sampling, were used for the estimation of genetic parameters. A bivariate model including maternal effects was used. The root mean square error (RMSE) for direct heritabilities was also calculated. 3. The results showed that when the incidence rate of ascites increased from 5% to 30%, the heritability of AS increased from 0.013 and 0.005 to 0.110 and 0.162 for the linear and threshold models, respectively. 4. Maternal effects were significant for both BW5 and AS. Genetic correlations decreased with increasing incidence rates of ascites in the population, from 0.678 and 0.587 at the 5% level of ascites to 0.393 and -0.260 at 50% occurrence for the linear and threshold models, respectively. 5. The RMSE of direct heritability from true values for BW5 was greater based on a linear-threshold model compared with the linear model of analysis (0.0092 vs. 0.0015). The RMSE of direct heritability from true values for AS was greater based on a linear-linear model (1.21 vs. 1.14). 6. In order to rank birds for ascites incidence, it is recommended to use a threshold model, because it resulted in higher heritability estimates compared with the linear model, and BW5 could be one of the main components of selection goals.
Cummins, Megan A; Dalal, Pavan J; Bugana, Marco; Severi, Stefano; Sobie, Eric A
2014-03-01
Reverse rate dependence is a problematic property of antiarrhythmic drugs that prolong the cardiac action potential (AP). The prolongation caused by reverse rate dependent agents is greater at slow heart rates, resulting in both reduced arrhythmia suppression at fast rates and increased arrhythmia risk at slow rates. The opposite property, forward rate dependence, would theoretically overcome these parallel problems, yet forward rate dependent (FRD) antiarrhythmics remain elusive. Moreover, there is evidence that reverse rate dependence is an intrinsic property of perturbations to the AP. We have addressed the possibility of forward rate dependence by performing a comprehensive analysis of 13 ventricular myocyte models. By simulating populations of myocytes with varying properties and analyzing population results statistically, we simultaneously predicted the rate-dependent effects of changes in multiple model parameters. An average of 40 parameters were tested in each model, and effects on AP duration were assessed at slow (0.2 Hz) and fast (2 Hz) rates. The analysis identified a variety of FRD ionic current perturbations and generated specific predictions regarding their mechanisms. For instance, an increase in L-type calcium current is FRD when this is accompanied by indirect, rate-dependent changes in slow delayed rectifier potassium current. A comparison of predictions across models identified inward rectifier potassium current and the sodium-potassium pump as the two targets most likely to produce FRD AP prolongation. Finally, a statistical analysis of results from the 13 models demonstrated that models displaying minimal rate-dependent changes in AP shape have little capacity for FRD perturbations, whereas models with large shape changes have considerable FRD potential. This can explain differences between species and between ventricular cell types. Overall, this study provides new insights, both specific and general, into the determinants of AP duration rate dependence, and illustrates a strategy for the design of potentially beneficial antiarrhythmic drugs.
Efficient and robust quantum random number generation by photon number detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Applegate, M. J.; Cavendish Laboratory, University of Cambridge, 19 JJ Thomson Avenue, Cambridge CB3 0HE; Thomas, O.
2015-08-17
We present an efficient and robust quantum random number generator based upon high-rate room temperature photon number detection. We employ an electric field-modulated silicon avalanche photodiode, a type of device particularly suited to high-rate photon number detection with excellent photon number resolution to detect, without an applied dead-time, up to 4 photons from the optical pulses emitted by a laser. By both measuring and modeling the response of the detector to the incident photons, we are able to determine the illumination conditions that achieve an optimal bit rate that we show is robust against variation in the photon flux. We extract random bits from the detected photon numbers with an efficiency of 99% corresponding to 1.97 bits per detected photon number yielding a bit rate of 143 Mbit/s, and verify that the extracted bits pass stringent statistical tests for randomness. Our scheme is highly scalable and has the potential of multi-Gbit/s bit rates.
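The entropy available from truncated photon-number detection is easy to estimate. A Python sketch (the Poisson photon statistics, the 0-to-4 count truncation and the pulse rate are assumptions; detector imperfections and extraction overhead are ignored) scans the mean photon number for the value that maximizes bits per pulse:

import numpy as np
from scipy.stats import poisson

pulse_rate = 80e6                              # laser pulse rate (assumed), Hz
for mu in (1.0, 1.5, 2.0, 2.5, 3.0):           # mean detected photons per pulse
    # detector resolves 0, 1, 2, 3 photons; the last bin means "4 or more"
    p = poisson.pmf(np.arange(4), mu)
    p = np.append(p, 1.0 - p.sum())
    H = -np.sum(p * np.log2(p))                # Shannon entropy, bits per pulse
    print(f"mu={mu:.1f}: {H:.2f} bits/pulse -> {H * pulse_rate / 1e6:.0f} Mbit/s")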
DOE Office of Scientific and Technical Information (OSTI.GOV)
Witte, Samuel J.; Gluscevic, Vera; McDermott, Samuel D.
It has recently been demonstrated that, in the event of a putative signal in dark matter direct detection experiments, properly identifying the underlying dark matter-nuclei interaction promises to be a challenging task. Given the most optimistic expectations for the number counts of recoil events in the forthcoming Generation 2 experiments, differentiating between interactions that produce distinct features in the recoil energy spectra will only be possible if a strong signal is observed simultaneously on a variety of complementary targets. However, there is a wide range of viable theories that give rise to virtually identical energy spectra, and may only differ by the dependence of the recoil rate on the dark matter velocity. In this work, we investigate how degeneracy between such competing models may be broken by analyzing the time dependence of nuclear recoils, i.e. the annual modulation of the rate. For this purpose, we simulate dark matter events for a variety of interactions and experiments, and perform a Bayesian model-selection analysis on all simulated data sets, evaluating the chance of correctly identifying the input model for a given experimental setup. Lastly, we find that including information on the annual modulation of the rate may significantly enhance the ability of a single target to distinguish dark matter models with nearly degenerate recoil spectra, but only with exposures beyond the expectations of Generation 2 experiments.
NASA Astrophysics Data System (ADS)
Song, Rui; Lei, Chengmin; Han, Kai; Chen, Zilun; Pu, Dongsheng; Hou, Jing
2017-05-01
Supercontinuum generation directly from a nonlinear fiber amplifier, especially from a nonlinear ytterbium-doped fiber amplifier, attracts more and more attention due to its all-fiber structure, high optical-to-optical conversion efficiency, and high-power output potential. However, the modeling of supercontinuum generation from a nonlinear fiber amplifier has rarely been reported. In this paper, a model of a tapered ytterbium-doped fiber amplifier for visible-extended-to-infrared supercontinuum generation is proposed based on the combination of the laser rate equations and the generalized nonlinear Schrödinger equation. An ytterbium-doped fiber amplifier generally cannot generate a visible-extended supercontinuum because of its pumping wavelength and zero-dispersion wavelength. However, appropriate tapering and four-wave mixing make visible-extended supercontinuum generation from an ytterbium-doped fiber amplifier possible. Tapering shifts the zero-dispersion wavelength of the ytterbium-doped fiber toward shorter wavelengths and minimizes the dispersion mismatch. Four-wave mixing plays an important role in the visible spectrum generation. The influence of pulse width and pump power on the supercontinuum generation is calculated and analyzed. The simulation results imply that it is promising and possible to fabricate a visible-to-infrared supercontinuum source with low pump power and a flat spectrum by using the tapered ytterbium-doped fiber amplifier scheme, as long as the related parameters are well selected.
NASA Astrophysics Data System (ADS)
Aziz, Asim; Jamshed, Wasim; Aziz, Taha
2018-04-01
In the present research a simplified mathematical model for solar thermal collectors is considered in the form of a non-uniform unsteady stretching surface. The non-Newtonian Maxwell nanofluid model is utilized for the working fluid along with slip and convective boundary conditions, and a comprehensive analysis of entropy generation in the system is also carried out. The effects of thermal radiation and variable thermal conductivity are also included in the present model. The mathematical formulation is carried out through a boundary layer approach, and the numerical computations are carried out for Cu-water and TiO2-water nanofluids. Results are presented for the velocity, temperature and entropy generation profiles, the skin friction coefficient and the Nusselt number. The discussion concludes with the effect of various governing parameters on the motion, temperature variation, entropy generation, velocity gradient and the rate of heat transfer at the boundary.
NASA Astrophysics Data System (ADS)
Yousefvand, Hossein Reza
2017-12-01
A self-consistent model of quantum cascade lasers (QCLs) is presented for studying QCL behavior under far-from-equilibrium conditions. The approach is developed by employing a number of physics-based models, such as the carrier and photon rate equations, the energy balance equation, the heat transfer equation and a simplified rate equation for the creation and annihilation of nonequilibrium optical phonons. The temperature dependence of the relevant physical effects, such as the stimulated gain cross section and the longitudinal optical (LO) phonon and hot-phonon generation rates, is included in the model. Using the presented model, the static and transient device characteristics are calculated and analyzed for a wide range of heat-sink temperatures. Besides the output characteristics, this model also provides a way to study the hot-phonon dynamics in the device, and to explore the electron temperature and thermal roll-over in QCLs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stewart, A.G.
1981-04-01
Superpower pulse generators are fast establishing themselves internationally as candidates for employment in a wide variety of military applications including electronic warfare and jamming, high energy beam weapons, and nuclear weapons effects simulation. Unfortunately, existing multimegajoule pulse power generators such as AURORA do not satisfy many Department of Defense goals for field-adaptable weapon systems, for example, repetition (rep) rate operation, high reliability, long life, ease of operation, and low maintenance. The Camelot concept is a multiterawatt rep-ratable pulse power source, adaptable to a wide range of output parameters, both charged particles and photons. An analytical computer model has been developed to predict the power flowing through the device. A 5-year development program, culminating in a source region electromagnetic pulse simulator, is presented.
Zhou, He; Ma, Tian-Yu; Zhang, Rui; Xu, Qi-Zheng; Shen, Fu; Qin, Yan-Jie; Xu, Wen; Wang, Yuan; Li, Ya-Juan
2016-01-01
In this study, we selected natural polyploid loach (diploid, triploid and tetraploid) and the hybrid F1 generations of the obverse cross (4 × 2) and inverse cross (2 × 4) between diploids and tetraploids as the research model. An MSAP (methylation-sensitive amplified polymorphism) reaction system established by our laboratory was used to explore methylation levels and changes in methylation patterns at the whole-genome level in the polyploid loach. The results showed that the total methylation and full methylation rates decreased with increasing ploidy, whereas the hemimethylation rate showed no consistent pattern. Compared with the diploid loach, the methylation patterns changed at 68.17% of tetraploid sites and at 73.05% of triploid sites. The proportion of hypermethylated genes was significantly higher than the proportion of demethylated genes. The methylation level of the reciprocal-cross F1 generations was lower than that of the male diploid parent and higher than that of the female tetraploid parent. The hemimethylation and total methylation rates of the inverse-cross F1 generation were significantly higher than those of the obverse-cross F1 generation (p < 0.01). After readjustment, the genome-wide DNA methylation patterns of the reciprocal hybrids changed by 69.59% and 72.83%, respectively. PMID:27556458
Nishiura, Hiroshi; Chowell, Gerardo; Safan, Muntaser; Castillo-Chavez, Carlos
2010-01-07
In many parts of the world, the exponential growth rate of infections during the initial epidemic phase has been used to make statistical inferences on the reproduction number, R, a summary measure of the transmission potential for the novel influenza A (H1N1) 2009. The growth rate at the initial stage of the epidemic in Japan led to estimates for R in the range 2.0 to 2.6, capturing the intensity of the initial outbreak among school-age children in May 2009. An updated estimate of R that takes into account the epidemic data from 29 May to 14 July is provided. An age-structured renewal process is employed to capture the age-dependent transmission dynamics, jointly estimating the reproduction number, the age-dependent susceptibility and the relative contribution of imported cases to secondary transmission. Pitfalls in estimating epidemic growth rates are identified and used for scrutinizing and re-assessing the results of our earlier estimate of R. Maximum likelihood estimates of R using the data from 29 May to 14 July ranged from 1.21 to 1.35. The next-generation matrix, based on our age-structured model, predicts that only 17.5% of the population will experience infection by the end of the first pandemic wave. Our earlier estimate of R did not fully capture the population-wide epidemic in quantifying the next-generation matrix from the estimated growth rate during the initial stage of the pandemic in Japan. In order to quantify R from the growth rate of cases, it is essential that the selected model captures the underlying transmission dynamics embedded in the data. Exploring additional epidemiological information will be useful for assessing the temporal dynamics. Although the simple concept of R is more easily grasped by the general public than that of the next-generation matrix, the matrix incorporating detailed information (e.g., age-specificity) is essential for reducing the levels of uncertainty in predictions and for assisting public health policymaking. Model-based prediction and policymaking are best described by sharing fundamental notions of heterogeneous risks of infection and death with non-experts to avoid potential confusion and/or possible misuse of modelling results.
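Two of the quantities discussed, R from an exponential growth rate and R from a next-generation matrix, can each be computed in a few lines. A Python sketch (the growth rate, the generation-interval parameters and the matrix entries are illustrative values, not the paper's estimates):

import numpy as np

# Euler-Lotka: 1/R = M(-r), with M the moment-generating function of the
# generation-interval distribution. For a gamma interval with mean m and
# shape a, this gives R = (1 + r*m/a)**a.
r = 0.1                                        # per day, assumed growth rate
m, a = 2.8, 2.0                                # interval mean (days) and shape
R = (1.0 + r * m / a) ** a
print(f"R from growth rate = {R:.2f}")

# Next-generation matrix for two age groups (children, adults): entry (i, j)
# is the mean number of cases in group i caused by one case in group j.
K = np.array([[1.2, 0.3],
              [0.4, 0.5]])                     # illustrative values only
R_ngm = np.max(np.real(np.linalg.eigvals(K)))  # dominant eigenvalue
print(f"R from NGM = {R_ngm:.2f}")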
Goovaerts, Pierre
2006-01-01
Boundary analysis of cancer maps may highlight areas where causative exposures change through geographic space, the presence of local populations with distinct cancer incidences, or the impact of different cancer control methods. Too often, such analysis ignores the spatial pattern of incidence or mortality rates and overlooks the fact that rates computed from sparsely populated geographic entities can be very unreliable. This paper proposes a new methodology that accounts for the uncertainty and spatial correlation of rate data in the detection of significant edges between adjacent entities or polygons. Poisson kriging is first used to estimate the risk value and the associated standard error within each polygon, accounting for the population size and the risk semivariogram computed from raw rates. The boundary statistic is then defined as half the absolute difference between kriged risks. Its reference distribution, under the null hypothesis of no boundary, is derived through the generation of multiple realizations of the spatial distribution of cancer risk values. This paper presents three types of neutral models generated using methods of increasing complexity: the common random shuffle of estimated risk values, a spatial re-ordering of these risks, or p-field simulation that accounts for the population size within each polygon. The approach is illustrated using age-adjusted pancreatic cancer mortality rates for white females in 295 US counties of the Northeast (1970–1994). Simulation studies demonstrate that Poisson kriging yields more accurate estimates of the cancer risk and how its value changes between polygons (i.e. boundary statistic), relatively to the use of raw rates or local empirical Bayes smoother. When used in conjunction with spatial neutral models generated by p-field simulation, the boundary analysis based on Poisson kriging estimates minimizes the proportion of type I errors (i.e. edges wrongly declared significant) while the frequency of these errors is predicted well by the p-value of the statistical test. PMID:19023455
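The boundary statistic and the simplest neutral model (random shuffling of the estimated risks) can be sketched directly; the Poisson kriging step itself is omitted here. In the Python sketch below, the risk values and the adjacency structure are synthetic placeholders:

import numpy as np

rng = np.random.default_rng(4)
n_poly = 295
risk = rng.gamma(5.0, 2.0, n_poly)             # stand-ins for kriged risks
# adjacency as index pairs of neighbouring polygons (synthetic chain here)
edges = np.column_stack([np.arange(n_poly - 1), np.arange(1, n_poly)])

def boundary_stat(values, edges):
    # half the absolute difference between kriged risks of adjacent polygons
    return 0.5 * np.abs(values[edges[:, 0]] - values[edges[:, 1]])

obs = boundary_stat(risk, edges)

# Neutral model I: random shuffle of the estimated risk values
n_sim = 999
null = np.empty((n_sim, edges.shape[0]))
for s in range(n_sim):
    null[s] = boundary_stat(rng.permutation(risk), edges)

# one-sided p-value per edge: fraction of neutral statistics >= observed
p = (1 + np.sum(null >= obs, axis=0)) / (n_sim + 1)
print(f"{np.sum(p < 0.05)} of {edges.shape[0]} edges significant at 5%")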
Developing and applying metamodels of high resolution ...
As defined by Wikipedia (https://en.wikipedia.org/wiki/Metamodeling), “(a) metamodel or surrogate model is a model of a model, and metamodeling is the process of generating such metamodels.” The goals of metamodeling include, but are not limited to (1) developing functional or statistical relationships between a model’s input and output variables for model analysis, interpretation, or information consumption by users or clients; (2) quantifying a model’s sensitivity to alternative or uncertain forcing functions, initial conditions, or parameters; and (3) characterizing the model’s response or state space. Using five existing models developed by the US Environmental Protection Agency, we generate a metamodeling database of the expected environmental and biological concentrations of 644 organic chemicals released into nine US rivers from wastewater treatment works (WTWs) assuming multiple loading rates and sizes of populations serviced. The chemicals of interest have log n-octanol/water partition coefficients (log Kow) ranging from 3 to 14, and the rivers of concern have mean annual discharges ranging from 1.09 to 3240 m3/s. Log-linear regression models are derived to predict mean annual dissolved and total water concentrations and total sediment concentrations of chemicals of concern based on their log Kow, Henry's Law Constant, and WTW loading rate and on the mean annual discharges of the receiving rivers. Metamodels are also derived to predict mean annual chemical…
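Log-linear metamodels of this kind amount to ordinary regression on log-transformed predictors. A Python sketch (the response surface is synthetic; the coefficient values are not those of the EPA metamodels):

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
n = 500
log_kow = rng.uniform(3, 14, n)
log_h = rng.uniform(-8, 0, n)                  # log Henry's Law Constant
log_load = rng.uniform(0, 3, n)                # log WTW loading rate
log_q = rng.uniform(np.log10(1.09), np.log10(3240), n)  # log river discharge

# Synthetic "high-resolution model" output standing in for the database
log_conc = (-2.0 + 0.3 * log_kow + 0.1 * log_h
            + 0.9 * log_load - 0.8 * log_q + rng.normal(0, 0.2, n))

X = np.column_stack([log_kow, log_h, log_load, log_q])
meta = LinearRegression().fit(X, log_conc)
print("R^2 =", round(meta.score(X, log_conc), 3))
print("coefficients:", meta.coef_.round(2))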
A hybrid procedure for MSW generation forecasting at multiple time scales in Xiamen City, China.
Xu, Lilai; Gao, Peiqing; Cui, Shenghui; Liu, Chun
2013-06-01
Accurate forecasting of municipal solid waste (MSW) generation is crucial and fundamental for the planning, operation and optimization of any MSW management system. Comprehensive information on waste generation for month-scale, medium-term and long-term time scales is especially needed, considering the necessity of MSW management upgrade facing many developing countries. Several existing models are available but of little use in forecasting MSW generation at multiple time scales. The goal of this study is to propose a hybrid model that combines the seasonal autoregressive integrated moving average (SARIMA) model and grey system theory to forecast MSW generation at multiple time scales without needing to consider other variables such as demographics and socioeconomic factors. To demonstrate its applicability, a case study of Xiamen City, China was performed. Results show that the model is robust enough to fit and forecast seasonal and annual dynamics of MSW generation at month-scale, medium- and long-term time scales with the desired accuracy. In the month-scale, MSW generation in Xiamen City will peak at 132.2 thousand tonnes in July 2015 - 1.5 times the volume in July 2010. In the medium term, annual MSW generation will increase to 1518.1 thousand tonnes by 2015 at an average growth rate of 10%. In the long term, a large volume of MSW will be output annually and will increase to 2486.3 thousand tonnes by 2020 - 2.5 times the value for 2010. The hybrid model proposed in this paper can enable decision makers to develop integrated policies and measures for waste management over the long term. Copyright © 2013 Elsevier Ltd. All rights reserved.
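A hybrid of this kind can be assembled from standard pieces: a SARIMA fit for the monthly seasonality and a GM(1,1) grey model for the annual trend. A Python sketch with synthetic data (the model orders, horizons and the series itself are assumptions):

import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(6)
# Synthetic monthly MSW series (thousand tonnes) with trend and seasonality
months = 96
t = np.arange(months)
msw = 60 + 0.5 * t + 8 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 2, months)

# Month scale: SARIMA captures the seasonal dynamics
sarima = SARIMAX(msw, order=(1, 1, 1), seasonal_order=(1, 1, 1, 12)).fit(disp=False)
monthly_forecast = sarima.forecast(steps=24)

# Medium/long term: GM(1,1) grey model on annual totals
annual = msw.reshape(-1, 12).sum(axis=1)
x1 = np.cumsum(annual)                          # accumulated generating operation
z = 0.5 * (x1[1:] + x1[:-1])                    # background values
B = np.column_stack([-z, np.ones(z.size)])
a, b = np.linalg.lstsq(B, annual[1:], rcond=None)[0]

def gm_predict(k):
    # restored series: x0_hat(k) = (x0(0) - b/a) * e^{-a k} * (1 - e^{a})
    return (annual[0] - b / a) * np.exp(-a * k) * (1 - np.exp(a))

annual_forecast = [gm_predict(k) for k in range(annual.size, annual.size + 10)]
print(monthly_forecast[:3], annual_forecast[:3])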
An epidemiological model with vaccination strategies
NASA Astrophysics Data System (ADS)
Prates, Dérek B.; Silva, Jaqueline M.; Gomes, Jessica L.; Kritz, Maurício V.
2016-06-01
Mathematical models describing epidemics are widely found in the literature. Epidemic models that use differential equations to represent such dynamics are especially sensitive to their parameters. This work analyzes a variation of the SIR model applied to an epidemic scenario that includes several aspects, such as constant vaccination, pulse vaccination, seasonality, a cross-immunity factor, and birth and death rates. The analysis and results are obtained through numerical solutions of the model, and special attention is given to the effects of parameter variation.
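A minimal version of such a model, SIR dynamics with seasonal transmission and periodic pulse vaccination, can be integrated piecewise between pulses. A Python sketch (all rates, the pulse fraction and the pulse period are illustrative choices):

import numpy as np
from scipy.integrate import solve_ivp

beta0, gamma, mu = 0.8, 0.2, 1 / (70 * 365)    # assumed rates (per day)
p_pulse, T_pulse = 0.2, 180                    # vaccinate 20% of S every 180 days

def sir(t, y):
    S, I, R = y
    beta = beta0 * (1 + 0.3 * np.cos(2 * np.pi * t / 365))  # seasonality
    dS = mu - beta * S * I - mu * S             # births minus infection/death
    dI = beta * S * I - gamma * I - mu * I
    dR = gamma * I - mu * R
    return [dS, dI, dR]

y = np.array([0.9, 0.01, 0.09])                # initial fractions S, I, R
traj = []
for k in range(10):                            # integrate from pulse to pulse
    sol = solve_ivp(sir, (k * T_pulse, (k + 1) * T_pulse), y, max_step=1.0)
    traj.append(sol.y)
    y = sol.y[:, -1].copy()
    moved = p_pulse * y[0]                     # instantaneous pulse vaccination
    y[0] -= moved
    y[2] += moved

S, I, R = np.hstack(traj)
print(f"final infectious fraction: {I[-1]:.4f}")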
E-Area LLWF Vadose Zone Model: Probabilistic Model for Estimating Subsided-Area Infiltration Rates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dyer, J.; Flach, G.
A probabilistic model employing a Monte Carlo sampling technique was developed in Python to generate statistical distributions of the upslope-intact-area to subsided-area ratio (Area_UAi/Area_SAi) for closure cap subsidence scenarios that differ in assumed percent subsidence and the total number of intact plus subsided compartments. The plan is to use this model as a component in the probabilistic system model for the E-Area Performance Assessment (PA), contributing uncertainty in infiltration estimates.
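One plausible reading of such a Monte Carlo sampler is sketched below in Python (the abstract confirms only that the model is a Python Monte Carlo sampler; the one-dimensional compartment geometry and the upslope run-length rule used here are assumptions for illustration):

import numpy as np

rng = np.random.default_rng(7)

def ratio_one_draw(n_comp, n_sub, rng):
    """Ratio of intact area immediately upslope of subsided compartments to
    subsided area, for compartments arranged in a line down the slope."""
    is_sub = np.zeros(n_comp, dtype=bool)
    is_sub[rng.choice(n_comp, n_sub, replace=False)] = True
    total_upslope, run = 0, 0
    for subsided in is_sub:          # iterate from the top of the slope down
        if subsided:
            total_upslope += run     # intact run immediately upslope drains in
            run = 0
        else:
            run += 1
    return total_upslope / n_sub

# e.g. 20 compartments with 20% subsidence
samples = np.array([ratio_one_draw(20, 4, rng) for _ in range(10_000)])
print("5th/50th/95th percentiles:", np.percentile(samples, [5, 50, 95]))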
AMTEC Generator: Phase 1 Propane System
2002-10-15
[Figure 18: Model Predictions with a 28 W Gross AMTEC Converter, 27 g/hr, 8.3% Overall Efficiency; design point: cell power = 28.3 W, η thermal = 8.3%, fuel flow rate = 7.4…; axes include cell hot temperature (°C), fuel flow rate (mg/s), efficiency, and electrical output.] …Metal Thermal to Electric Conversion (AMTEC) technology converts the heat from…
A Model for Forecasting Enlisted Student IA Billet Requirements
2016-03-01
[Acronym list fragment: …Professional Apprentice Career Track; PCS, Permanent Change of Station; PG, Paygrade; PFY, Previous Fiscal Year; POM, Program Objectives Memorandum; RCN, Rating…] …paygrade levels contribute to fleet manning issues. Rating Control Number (RCN) Fit measures fleet manning levels for each community. Excess manning in one…lower RCN Fit levels. Second, authorized billets in TFMMS serve as the primary input for generating Enlisted Programmed Authorizations (EPA…
Zhang, Y T; Frank, C B; Rangayyan, R M; Bell, G D
1992-09-01
Analysis of vibration signals emitted by the knee joint has the potential for the development of a noninvasive procedure for the diagnosis and monitoring of knee pathology. In order to obtain as much information as possible from the power density spectrum of the knee vibration signal, it is necessary to identify the physiological factors (or physiologically relevant parameters) that shape the spectrum. This paper presents a mathematical model for knee vibration signals, in particular the physiological patello-femoral pulse (PFP) train produced by slow knee movement. It demonstrates through the mathematical model that the repetition rate of the physiological PFP train introduces repeated peaks in the power spectrum, and that it affects the spectrum mainly at low frequencies. The theoretical results also show that the spectral peaks at multiples of the PFP repetition rate become more evident when the variance of the interpulse interval (IPI) is small, and that these spectral peaks shift toward higher frequencies with increasing PFP repetition rates. To evaluate the mathematical model, a simulation algorithm was developed, which generates PFP signals with adjustable repetition rate and IPI variance. Signals generated by simulation were seen to possess representative spectral characteristics typically observed in physiological PFP signals. This simulation procedure allows an interactive examination of several factors which affect the PFP train spectrum. Finally, in vivo measurements of physiological PFP signals of normal volunteers are presented. Results of simulations and analysis of signals recorded from human subjects support the mathematical model's prediction that the IPI statistics play a very significant role in determining the low-end power spectrum of the physiological PFP signal.(ABSTRACT TRUNCATED AT 250 WORDS)
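The simulation idea, a pulse train with gamma-distributed interpulse intervals whose spectrum shows peaks at multiples of the repetition rate, can be reproduced in outline. A Python sketch (the repetition rate, IPI coefficient of variation and damped-sinusoid pulse shape are assumptions, not the paper's physiological values):

import numpy as np

rng = np.random.default_rng(8)
fs = 2000.0                                    # sampling rate (Hz)
rep_rate = 15.0                                # PFP repetition rate (Hz), assumed
cv = 0.1                                       # IPI coefficient of variation

# Gamma-distributed interpulse intervals with the chosen mean and variance
mean_ipi = 1.0 / rep_rate
shape = 1.0 / cv**2
ipis = rng.gamma(shape, mean_ipi / shape, 2000)
times = np.cumsum(ipis)

# Each pulse is a short damped oscillation (an assumed pulse shape)
tp = np.arange(0, 0.02, 1 / fs)
pulse = np.exp(-tp / 0.004) * np.sin(2 * np.pi * 300 * tp)

signal = np.zeros(int(times[-1] * fs) + pulse.size)
for tk in times:
    i = int(tk * fs)
    signal[i:i + pulse.size] += pulse

# With small IPI variance, spectral peaks appear at multiples of rep_rate
spec = np.abs(np.fft.rfft(signal))**2
freqs = np.fft.rfftfreq(signal.size, 1 / fs)
band = (freqs > 5) & (freqs < 100)
print(f"dominant low-frequency peak: {freqs[band][np.argmax(spec[band])]:.1f} Hz")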
The use of direct numerical simulation data in turbulence modeling
NASA Technical Reports Server (NTRS)
Mansour, N. N.
1991-01-01
Direct numerical simulations (DNS) of turbulent flows provide a complete data base to develop and to test turbulence models. In this article, the progress made in developing models for the dissipation rate equation is reviewed. New scaling arguments for the various terms in the dissipation rate equation were tested using data from DNS of homogeneous shear flows. Modifications to the epsilon-equation model that take into account near-wall effects were developed using DNS of turbulent channel flows. Testing of new models for flows under mean compression was carried out using data from DNS of isotropically compressed turbulence. In all of these studies the data from the simulations was essential in guiding the model development. The next generation of DNS will be at higher Reynolds numbers, and will undoubtedly lead to improved models for computations of flows of practical interest.
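For reference, the standard high-Reynolds-number modeled dissipation-rate equation that such work builds on has the form below (the constants are the commonly quoted calibration values, not necessarily those of this article; near-wall variants multiply the source terms by damping functions calibrated against DNS channel-flow data):

\frac{D\varepsilon}{Dt}
  = \frac{\partial}{\partial x_j}\!\left[\left(\nu + \frac{\nu_t}{\sigma_\varepsilon}\right)
    \frac{\partial \varepsilon}{\partial x_j}\right]
  + C_{\varepsilon 1}\,\frac{\varepsilon}{k}\,P_k
  - C_{\varepsilon 2}\,\frac{\varepsilon^{2}}{k},
\qquad C_{\varepsilon 1}\approx 1.44,\quad C_{\varepsilon 2}\approx 1.92,\quad \sigma_\varepsilon\approx 1.3,

with $P_k$ the production of turbulent kinetic energy $k$ and $\nu_t$ the eddy viscosity.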
Lockwood, Sarah Y.; Meisel, Jayda E.; Monsma, Frederick J.; Spence, Dana M.
2016-01-01
The process of bringing a drug to market involves many steps, including the preclinical stage, where various properties of the drug candidate molecule are determined. These properties, which include drug absorption, distribution, metabolism, and excretion, are often displayed in a pharmacokinetic (PK) profile. While PK profiles are determined in animal models, in vitro systems that model in vivo processes are available, although each possesses shortcomings. Here, we present a 3D-printed, diffusion-based, and dynamic in vitro PK device. The device contains six flow channels, each with integrated porous membrane-based insert wells. The pores of these membranes enable drugs to freely diffuse back and forth between the flow channels and the inserts, thus enabling both loading and clearance portions of a standard PK curve to be generated. The device is designed to work with 96-well plate technology and consumes single-digit milliliter volumes to generate multiple PK profiles, simultaneously. Generation of PK profiles by use of the device was initially performed with fluorescein as a test molecule. Effects of such parameters as flow rate, loading time, volume in the insert well, and initial concentration of the test molecule were investigated. A prediction model was generated from this data, enabling the user to predict the concentration of the test molecule at any point along the PK profile within a coefficient of variation of ~5%. Depletion of the analyte from the well was characterized and was determined to follow first-order rate kinetics, indicated by statistically equivalent (p > 0.05) depletion half-lives that were independent of the starting concentration. A PK curve for an approved antibiotic, levofloxacin, was generated to show utility beyond the fluorescein test molecule. PMID:26727249
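The first-order depletion analysis reduces to fitting an exponential and reading off the half-life. A Python sketch with synthetic concentration data (the sampling times, rate constant and noise level are assumptions):

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(9)
t = np.linspace(0, 120, 13)                    # minutes, assumed sampling times
k_true, c0_true = 0.02, 50.0                   # 1/min and uM (synthetic)
conc = c0_true * np.exp(-k_true * t) * (1 + rng.normal(0, 0.03, t.size))

def first_order(t, c0, k):
    return c0 * np.exp(-k * t)

popt, pcov = curve_fit(first_order, t, conc, p0=(conc[0], 0.01))
c0_fit, k_fit = popt
half_life = np.log(2) / k_fit                  # first-order kinetics: t1/2 = ln2/k
print(f"k = {k_fit:.4f} 1/min, t1/2 = {half_life:.1f} min")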
Loman, Zachary G.; Monroe, Adrian; Riffell, Samuel K.; Miller, Darren A.; Vilella, Francisco; Wheat, Bradley R.; Rush, Scott A.; Martin, James A.
2018-01-01
Switchgrass (Panicum virgatum) intercropping is a novel forest management practice for biomass production intended to generate cellulosic feedstocks within intensively managed loblolly pine-dominated landscapes. These pine plantations are important for early-successional bird species, as short rotation times continually maintain early-successional habitat. We tested the efficacy of using community models compared to individual surrogate species models in understanding influences on nest survival. We analysed nest data to test for differences in habitat use for 14 bird species in plots managed for switchgrass intercropping and controls within loblolly pine (Pinus taeda) plantations in Mississippi, USA. We adapted hierarchical models using hyper-parameters to incorporate information from both common and rare species to understand community-level nest survival. This approach incorporates rare species that are often discarded due to low sample sizes, but that can inform community-level demographic parameter estimates. We illustrate the use of this approach in generating both species-level and community-wide estimates of daily survival rates for songbird nests. We were able to include rare species with low sample sizes (minimum n = 5) to inform a hyper-prior, allowing us to estimate effects of covariates on daily survival at the community level, and then compare this with a single-species approach using surrogate species. Using single-species models, we were unable to generate estimates below a sample size of 21 nests per species. Community model species-level survival and parameter estimates were similar to those generated by five single-species models, with improved precision in community model parameters. Covariates of nest placement indicated that switchgrass at the nest site (<4 m) reduced daily nest survival, although intercropping at the forest stand level increased daily nest survival. Synthesis and applications: Community models represent a viable method for estimating community nest survival rates and effects of covariates while incorporating limited data for rarely detected species. Intercropping switchgrass in loblolly pine plantations slightly increased daily nest survival at the research plot scale (0.1 km2), although at a local scale (50 m2) switchgrass negatively influenced nest survival. A likely explanation is that intercropping shifted community composition, favouring species with greater disturbance tolerance.
Effect of mutation mechanisms on variant composition and distribution in Caenorhabditis elegans
Wang, Jiou
2017-01-01
Genetic diversity is maintained by continuing generation and removal of variants. While examining over 800,000 DNA variants in wild isolates of Caenorhabditis elegans, we discovered that the proportions of variant types are not constant across the C. elegans genome. The variant proportion is defined as the fraction of a specific variant type (e.g. single nucleotide polymorphism (SNP) or indel) within a broader set of variants (e.g. all variants or all non-SNPs). The proportions of most variant types show a correlation with the recombination rate. These correlations can be explained as a result of a concerted action of two mutation mechanisms, which we named the Morgan and Sanger mechanisms. The two proposed mechanisms act according to the distinct components of the recombination rate, specifically the genetic and physical distance. Regression analysis was used to explore the characteristics and contributions of the two mutation mechanisms. According to our model, ~20–40% of all mutations in C. elegans wild populations are derived from programmed meiotic double strand breaks, which precede chromosomal crossovers and thus may be the point of origin for the Morgan mechanism. A substantial part of the known correlation between the recombination rate and variant distribution appears to be caused by the mutations generated by the Morgan mechanism. Mathematically integrating the mutation model with the background selection model gives a more complete depiction of how the variant landscape is shaped in C. elegans. Similar analysis should be possible in other species by examining the correlation between the recombination rate and variant landscape within the context of our mutation model. PMID:28135268
PWR Facility Dose Modeling Using MCNP5 and the CADIS/ADVANTG Variance-Reduction Methodology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blakeman, Edward D; Peplow, Douglas E.; Wagner, John C
2007-09-01
The feasibility of modeling a pressurized-water-reactor (PWR) facility and calculating dose rates at all locations within the containment and adjoining structures using MCNP5 with mesh tallies is presented. Calculations of dose rates resulting from neutron and photon sources from the reactor (operating and shut down for various periods) and the spent fuel pool, as well as for the photon source from the primary coolant loop, were all of interest. Identification of the PWR facility, development of the MCNP-based model and automation of the run process, calculation of the various sources, and development of methods for visually examining mesh tally files and extracting dose rates were all a significant part of the project. Advanced variance reduction, which was required because of the size of the model and the large amount of shielding, was performed via the CADIS/ADVANTG approach. This methodology uses an automatically generated three-dimensional discrete ordinates model to calculate adjoint fluxes from which MCNP weight windows and source bias parameters are generated. Investigative calculations were performed using a simple block model and a simplified full-scale model of the PWR containment, in which the adjoint source was placed in various regions. In general, it was shown that placement of the adjoint source on the periphery of the model provided adequate results for regions reasonably close to the source (e.g., within the containment structure for the reactor source). A modification to the CADIS/ADVANTG methodology was also studied in which a global adjoint source is weighted by the reciprocal of the dose response calculated by an earlier forward discrete ordinates calculation. This method showed improved results over those using the standard CADIS/ADVANTG approach, and its further investigation is recommended for future efforts.
NASA Astrophysics Data System (ADS)
Murru, M.; Falcone, G.; Taroni, M.; Console, R.
2017-12-01
In 2015 the Italian Department of Civil Protection started a project for upgrading the official Italian seismic hazard map (MPS04), inviting the Italian scientific community to participate in a joint effort for its realization. We participated by providing spatially variable time-independent (Poisson) long-term annual occurrence rates of seismic events over the entire Italian territory, considering cells of 0.1°×0.1° from M4.5 up to M8.1 for magnitude bins of 0.1 units. Our final model was an ensemble of two models merged with equal weight: the first was realized by a smoothed seismicity approach, the second using the seismogenic faults. The spatial smoothed seismicity was obtained using the smoothing method introduced by Frankel (1995) applied to the historical and instrumental seismicity. In this approach we adopted a tapered Gutenberg-Richter relation with a b-value fixed to 1 and a corner magnitude estimated from the largest events in the catalogs. For each seismogenic fault provided by the Database of the Individual Seismogenic Sources (DISS), we computed the annual rate (for each 0.1°×0.1° cell) for magnitude bins of 0.1 units, assuming that the seismic moments of the earthquakes generated by each fault are distributed according to the same tapered Gutenberg-Richter relation as the smoothed seismicity model. The annual rate for the final model was determined in the following way: if a cell falls within one of the seismic sources, we merged, with equal weight, the rate determined from the seismic moments of the earthquakes generated by the fault and the rate from the smoothed seismicity model; if instead the cell falls outside any seismic source, we used the rate obtained from the spatial smoothed seismicity. Here we present the final results of our study to be used for the new Italian seismic hazard map.
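As a rough illustration of the rate calculation described above, the sketch below (Python) evaluates a tapered Gutenberg-Richter relation with b = 1 on 0.1-magnitude bins from M4.5 to M8.1; the cell rate a_rate, the corner magnitude placement, and the moment-magnitude conversion are illustrative assumptions, not values from the study.

    import numpy as np

    def tapered_gr_rates(a_rate, m_min=4.5, m_max=8.1, dm=0.1, b=1.0, m_corner=8.1):
        """Annual rates per magnitude bin under a tapered Gutenberg-Richter
        relation expressed in seismic moment (exponential taper at the corner)."""
        beta = 2.0 * b / 3.0                          # moment-space exponent
        moment = lambda m: 10.0 ** (1.5 * m + 9.1)    # Mw-to-moment, N*m
        edges = np.arange(m_min, m_max + dm / 2, dm)
        m0, mc = moment(m_min), moment(m_corner)
        cum = a_rate * (m0 / moment(edges)) ** beta * np.exp((m0 - moment(edges)) / mc)
        return edges[:-1], cum[:-1] - cum[1:]         # rate in each 0.1 bin

    mags, rates = tapered_gr_rates(a_rate=0.05)       # a_rate: assumed cell rate above M4.5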
Prediction of dislocation generation during Bridgman growth of GaAs crystals
NASA Technical Reports Server (NTRS)
Tsai, C. T.; Yao, M. W.; Chait, Arnon
1992-01-01
Dislocation densities are generated in GaAs single crystals due to the excessive thermal stresses induced by temperature variations during growth. A viscoplastic material model for GaAs, which takes into account the movement and multiplication of dislocations in the plastic deformation, is developed according to Haasen's theory. The dislocation density is expressed as an internal state variable in this dynamic viscoplastic model. The deformation process is a nonlinear function of stress, strain rate, dislocation density and temperature. The dislocation density in the GaAs crystal during vertical Bridgman growth is calculated using a nonlinear finite element model. The dislocation multiplication in GaAs crystals is investigated for several temperature fields obtained from thermal modeling of both the GTE GaAs experimental data and artificially designed data.
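The Haasen-type coupling of dislocation multiplication and plastic strain rate described above can be sketched as a simple explicit update; all parameter values below are illustrative assumptions, not the calibrated GaAs constants of the study.

    import numpy as np

    K_B = 8.617e-5       # Boltzmann constant, eV/K
    Q = 1.5              # glide activation energy, eV (assumed)
    B_VEL = 1.0e-23      # velocity prefactor (assumed)
    M_EXP = 1.7          # stress exponent (assumed)
    BURGERS = 4.0e-10    # Burgers vector, m (assumed)
    K_MULT = 3.0e-4      # multiplication constant, m/N (assumed)
    HARDEN = 2.0         # strain-hardening factor, N/m (assumed)

    def haasen_step(n_disl, tau, temp, dt):
        """Advance dislocation density n_disl under resolved shear stress tau (Pa)."""
        tau_eff = max(tau - HARDEN * np.sqrt(n_disl), 0.0)       # back-stress correction
        v = B_VEL * tau_eff**M_EXP * np.exp(-Q / (K_B * temp))   # glide velocity
        eps_dot = n_disl * BURGERS * v                           # Orowan strain rate
        return n_disl + K_MULT * n_disl * v * tau_eff * dt, eps_dot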
Evaluation of machine learning algorithms for improved risk assessment for Down's syndrome.
Koivu, Aki; Korpimäki, Teemu; Kivelä, Petri; Pahikkala, Tapio; Sairanen, Mikko
2018-05-04
Prenatal screening generates a great amount of data that is used for predicting the risk of various disorders. Prenatal risk assessment is based on multiple clinical variables, and overall performance is defined by how well the risk algorithm is optimized for the population in question. This article evaluates machine learning algorithms to improve the performance of first trimester screening for Down syndrome. Machine learning algorithms pose an adaptive alternative for developing better risk assessment models using the existing clinical variables. Two real-world data sets were used to experiment with multiple classification algorithms. Implemented models were tested with a third, real-world, data set and performance was compared to a predicate method, a commercial risk assessment software. The best-performing deep neural network model gave an area under the curve of 0.96 and a detection rate of 78% at a 1% false positive rate with the test data. A support vector machine model gave an area under the curve of 0.95 and a detection rate of 61% at a 1% false positive rate with the same test data. When compared with the predicate method, the best support vector machine model was slightly inferior, but an optimized deep neural network model was able to give higher detection rates at the same false positive rate, or a similar detection rate at a markedly lower false positive rate. This finding could further improve first trimester screening for Down syndrome by using existing clinical variables and a large training data set derived from a specific population. Copyright © 2018 Elsevier Ltd. All rights reserved.
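The headline metric here, detection rate at a fixed 1% false-positive rate, can be read directly off a ROC curve; a minimal sketch with synthetic scores (not the study's data):

    import numpy as np
    from sklearn.metrics import roc_auc_score, roc_curve

    def detection_rate_at_fpr(y_true, y_score, target_fpr=0.01):
        """Sensitivity at a fixed false-positive rate, interpolated on the ROC curve."""
        fpr, tpr, _ = roc_curve(y_true, y_score)
        return float(np.interp(target_fpr, fpr, tpr))

    rng = np.random.default_rng(0)
    y = rng.integers(0, 2, 500)                  # synthetic labels
    score = y * 1.5 + rng.normal(size=500)       # synthetic risk scores
    print(roc_auc_score(y, score), detection_rate_at_fpr(y, score))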
Prosodic persistence in music performance and speech production
NASA Astrophysics Data System (ADS)
Jungers, Melissa K.; Palmer, Caroline; Speer, Shari R.
2002-05-01
Does the rate of melodies that listeners hear affect the rate of their performed melodies? Skilled adult pianists performed two short melodies as a measure of their preferred performance rate. Next they heard, on each trial, a computer-generated performance of a prime melody at a slow or fast rate (600 or 300 ms per quarter-note beat). Following each prime melody, the pianists performed a target melody from notation. The prime and target melodies were matched for meter and length. The rate of pianists' target melody performances was slower for performances that followed a slow prime than a fast prime, indicating that pianists' performances were influenced by the rate of the prime melody. Performance duration was predicted by a model that includes prime and preferred durations. Findings from an analogous speech production experiment show that a similar model predicts speakers' sentence rate from preferred and prime sentence rates. [Work supported by NIMH Grant 45764 and the Center for Cognitive Science.]
NASA Astrophysics Data System (ADS)
Sembroni, A.; Globig, J.; Rozel, A.; Faccenna, C.; Funiciello, F.; Fernandez, M.
2013-12-01
Density anomalies located beneath the lithosphere are thought to generate dynamic topography at the surface of the Earth. Tomographic models are often used to infer the lateral variations of the density field in the mantle. Surface topography can then be computed using analytical solutions or numerical simulations of mantle convection. It has been shown that the viscosity profile of the upper mantle has a strong influence on the magnitude and spectral signature of surface topography and uplift rate. Here we present results from analogue modeling of the interaction between a rising ball-shaped density anomaly and the lithosphere in an isoviscous, isothermal Newtonian mantle system. Preliminary data show that surface topography is strongly influenced not only by mantle viscosity but also by the density and viscosity profiles of the lithosphere. Our apparatus consists of a plexiglass square box (40×40×50 cm³) filled with glucose syrup. From the bottom, a silicone ball was free to rise until it impinged on a silicone plate floating on top of the syrup, mimicking the lithosphere. In order to investigate the roles of lithospheric thickness and a layered continental crust on stress partitioning, maximum dynamic topography, uplift rate and signal wavelength, two different configurations were tested: a homogeneous lithosphere and a stratified lithosphere including a low-viscosity lower crust. The topographic evolution of the surface was tracked using a laser scanning the top of the apparatus. The rise of the density anomaly was recorded by a side camera. We observe that a thicker, and thus more resistant, lithosphere produces topographic signatures that are up to 2 times lower and laterally wider. Layered lithospheres including a decoupling lower crust decrease the equilibrium topography and its lateral extent by ~30% to 40%. Most importantly, the uplift rate is strongly affected by the choice of lithosphere model. Both the lithosphere width and the presence of a decoupling lower crust may modify the uplift rate by a factor of 3. Thus, depending on the lithosphere rheology, we show that the uplift rate may vary by one order of magnitude for the same density anomaly and mantle viscosity. This result shows that surface uplift rate can be used to infer the viscosity of the upper mantle in specific Earth regions only if the rheology of the lithosphere is well constrained. With respect to previous approaches, whether numerical or analogue modeling of dynamic topography, our experiments represent a new attempt to investigate the propagation of normal stresses generated by mantle flow through a rheologically stratified lithosphere and its resulting topographic signal.
NASA Astrophysics Data System (ADS)
Moon, Seulgi; Shelef, Eitan; Hilley, George E.
2015-05-01
In this study, we model postglacial surface processes and examine the evolution of topography and denudation rates within the deglaciated Washington Cascades to understand the controls on, and time scales of, landscape response to changes in the surface process regime after deglaciation. The postglacial adjustment of this landscape is modeled using a geomorphic-transport-law-based numerical model that includes processes of river incision, hillslope diffusion, and stochastic landslides. The surface lowering due to landslides is parameterized using a physically based slope stability model coupled to a stochastic model of landslide generation. The model parameters for river incision and stochastic landslides are calibrated against the magnitudes and distribution of thousand-year timescale denudation rates measured from cosmogenic 10Be isotopes. The probability distributions of those model parameters, calculated with a Bayesian inversion scheme, show ranges comparable to previous studies in similar rock types and climatic conditions. The magnitude of landslide denudation rates is determined by failure density (similar to landslide frequency), whereas precipitation and slopes affect the spatial variation in landslide denudation rates. Simulation results show that postglacial denudation rates decay over time and take longer than 100 kyr to reach time-invariant rates. Over time, the landslides in the model consume the steep slopes characteristic of deglaciated landscapes. This response time scale is on the order of, or longer than, glacial/interglacial cycles, suggesting that frequent climatic perturbations during the Quaternary may produce a significant and prolonged impact on denudation and topography.
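A minimal sketch of the deterministic core of such a geomorphic-transport-law model (stream-power incision plus hillslope diffusion on a 1-D profile); the stochastic landslide component is omitted and all parameter values are assumed, not the study's calibrated ones:

    import numpy as np

    def evolve(z, area, dx, dt, K=1e-5, m=0.5, n=1.0, D=1e-2, U=1e-4):
        """One explicit step of dz/dt = U - K A^m S^n + D d2z/dx2."""
        slope = np.abs(np.gradient(z, dx))
        curvature = np.gradient(np.gradient(z, dx), dx)
        return z + dt * (U - K * area**m * slope**n + D * curvature)

    x = np.linspace(0.0, 1e4, 101)
    z = 1e-2 * (1e4 - x)            # initial ramp, m
    area = 1e6 + 50.0 * x           # crude downstream drainage area, m^2
    for _ in range(1000):
        z = evolve(z, area, dx=100.0, dt=50.0)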
Valeriani, M; Restuccia, D; Di Lazzaro, V; Le Pera, D; Barba, C; Tonali, P; Mauguiere, F
1998-06-01
Brain electrical source analysis (BESA) of the scalp electroencephalographic activity is well adapted to distinguish neighbouring cerebral generators precisely. Therefore, we performed dipolar source modelling of scalp median nerve somatosensory evoked potentials (SEPs) recorded at a 1.5-Hz stimulation rate, where all the early components should be identifiable. We built a four-dipole model, which was issued from the grand average, and applied it also to recordings from single individuals. Our model included a dipole at the base of the skull and three other perirolandic dipoles. The first of the latter dipoles was tangentially oriented and was active at the same latencies as the N20/P20 potential and, with opposite polarity, the P24/N24 response. The second perirolandic dipole showed an initial peak of activity slightly earlier than that of the N20/P20 dipolar source and, later, it was active at the same latency as the central P22 potential. Lastly, the third perirolandic dipole explaining the fronto-central N30 potential scalp distribution was constantly more posterior than the first one. In order to evaluate the effect of an increasing repetition frequency on the activity of SEP dipolar sources, we applied the model built from 1.5-Hz SEPs to traces recorded at 3-Hz and 10-Hz repetition rates. We found that the 10-Hz stimulus frequency selectively reduced the later of the two activity phases of the first perirolandic dipole. The decrement in strength of this dipolar source can be explained if we assume that: (a) the later activity of the first perirolandic dipole can represent the inhibitory phase of a "primary response"; (b) two different clusters of cells generate the opposite activities of the tangential perirolandic dipole. An additional finding in our model was that two different perirolandic dipoles contribute to the centro-parietal N20 potential generation.
Goikoetxea, Estibalitz; Murgia, Xabier; Serna-Grande, Pablo; Valls-i-Soler, Adolf; Rey-Santano, Carmen; Rivas, Alejandro; Antón, Raúl; Basterretxea, Francisco J.; Miñambres, Lorena; Méndez, Estíbaliz; Lopez-Arraiza, Alberto; Larrabe-Barrena, Juan Luis; Gomez-Solaetxe, Miguel Angel
2014-01-01
Objective Aerosol delivery holds potential to release surfactant or perfluorocarbon (PFC) to the lungs of neonates with respiratory distress syndrome with minimal airway manipulation. Nevertheless, lung deposition in neonates tends to be very low due to extremely low lung volumes, narrow airways and high respiratory rates. In the present study, the feasibility of enhancing lung deposition by intracorporeal delivery of aerosols was investigated using a physical model of neonatal conducting airways. Methods The main characteristics of the surfactant and PFC aerosols produced by a nebulization system, including the distal air pressure and air flow rate, liquid flow rate and mass median aerodynamic diameter (MMAD), were measured at different driving pressures (4–7 bar). Then, a three-dimensional model of the upper conducting airways of a neonate was manufactured by rapid prototyping and a deposition study was conducted. Results The nebulization system produced relatively large amounts of aerosol ranging between 0.3±0.0 ml/min for surfactant at a driving pressure of 4 bar, and 2.0±0.1 ml/min for distilled water (H2Od) at 6 bar, with MMADs between 2.61±0.1 µm for PFD at 7 bar and 10.18±0.4 µm for FC-75 at 6 bar. The deposition study showed that for surfactant and H2Od aerosols, the highest percentage of the aerosolized mass (∼65%) was collected beyond the third generation of branching in the airway model. The use of this delivery system in combination with continuous positive airway pressure set at 5 cmH2O only increased total airway pressure by 1.59 cmH2O at the highest driving pressure (7 bar). Conclusion This aerosol generating system has the potential to deliver relatively large amounts of surfactant and PFC beyond the third generation of branching in a neonatal airway model with minimal alteration of pre-set respiratory support. PMID:25211475
Lifetime earnings for physicians across specialties.
Leigh, J Paul; Tancredi, Daniel; Jerant, Anthony; Romano, Patrick S; Kravitz, Richard L
2012-12-01
Earlier studies estimated annual income differences across specialties, but lifetime income may be more relevant given physicians' long-term commitments to specialties. Annual income and work hours data were collected from 6381 physicians in the nationally representative 2004-2005 Community Tracking Study. Data regarding years of residency were collected from AMA FREIDA. Present value models were constructed assuming a 3% discount rate. Estimates were adjusted for demographic and market covariates. Sensitivity analyses included 4 alternative models involving work hours, retirement, exogenous variables, and a 1% discount rate. Estimates were generated for 4 broad specialty categories (Primary Care, Surgery, Internal Medicine and Pediatric Subspecialties, and Other), and for 41 specific specialties. The estimates of lifetime earnings for the broad categories of Surgery, Internal Medicine and Pediatric Subspecialties, and Other specialties were $1,587,722, $1,099,655, and $761,402 more than for Primary Care. For the 41 specific specialties, the top 3 (with family medicine as reference) were neurological surgery ($2,880,601), medical oncology ($2,772,665), and radiation oncology ($2,659,657). The estimates from models with varying rates of retirement and including only exogenous variables were similar to those in the preferred model. The 1% discount model generated estimates that were roughly 150% larger than the 3% model. There was considerable variation in lifetime earnings across physician specialties. After accounting for varying residency years and discounting future earnings, primary care specialties earned roughly $1-3 million less than other specialties. Earnings differences across specialties may undermine health reform efforts to control costs and ensure adequate numbers of primary care physicians.
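The present-value comparison reduces to discounting each year of an earnings stream; a toy sketch at the paper's 3% rate, with made-up salary figures:

    def lifetime_present_value(annual_earnings, rate=0.03):
        """Discounted present value of a stream of annual earnings."""
        return sum(e / (1.0 + rate) ** t for t, e in enumerate(annual_earnings))

    # illustrative: 3 residency years at $60k, then 35 career years at $250k
    stream = [60_000] * 3 + [250_000] * 35
    print(round(lifetime_present_value(stream)))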
Lifetime enhancement for multiphoton absorption in intermediate band solar cells
NASA Astrophysics Data System (ADS)
Bezerra, Anibal T.; Studart, Nelson
2017-08-01
A semiconductor structure consisting of two coupled quantum wells embedded in the intrinsic region of a p-i-n junction is proposed as an intermediate band solar cell with a photon ratchet state, which would lead to increased cell efficiency. The conduction subband of the right-hand side quantum well works as the intermediate band, whereas the excited conduction subband of the left-hand side quantum well operates as the ratchet state. The photoelectrons in the intermediate band are scattered through the thin inter-well barrier and accumulate in the ratchet subband. A rate equation model describing the charge transport properties is presented. The efficiency of the current generation is analyzed by studying the occupation of the well subbands, taking into account the charge dynamics provided by the electrical contacts connected to the cell. The current generation efficiency depends essentially on the relation between the generation and recombination rates and the scattering rate to the ratchet state. The inclusion of the ratchet state led to either an increase or a decrease in the cell current, depending on the transition rates. This suggests that the coupling between the intermediate band and the ratchet state is a key point in developing an efficient solar cell.
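A minimal steady-state version of such a rate-equation model, with the intermediate band feeding the ratchet state; all rates are arbitrary illustrative values, not fitted parameters:

    def extracted_current(G=1.0, S=5.0, R_ib=1.0, R_r=0.1, X=2.0):
        """G: generation into the intermediate band (IB); S: IB-to-ratchet
        scattering; R_ib, R_r: recombination; X: extraction at the contacts.
        Solves dn_ib/dt = 0 and dn_r/dt = 0 for the extracted current."""
        n_ib = G / (R_ib + S)
        n_r = S * n_ib / (R_r + X)
        return X * n_r

    print(extracted_current())   # raising S relative to R_ib boosts the current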
Storm Water Management Model Reference Manual Volume II ...
SWMM is a dynamic rainfall-runoff simulation model used for single event or long-term (continuous) simulation of runoff quantity and quality from primarily urban areas. The runoff component of SWMM operates on a collection of subcatchment areas that receive precipitation and generate runoff and pollutant loads. The routing portion of SWMM transports this runoff through a system of pipes, channels, storage/treatment devices, pumps, and regulators. SWMM tracks the quantity and quality of runoff generated within each subcatchment, and the flow rate, flow depth, and quality of water in each pipe and channel during a simulation period comprising multiple time steps. The reference manual for this edition of SWMM comprises three volumes. Volume I describes SWMM's hydrologic models, Volume II its hydraulic models, and Volume III its water quality and low impact development models. This document provides the underlying mathematics for the hydraulic calculations of the Storm Water Management Model (SWMM).
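As a taste of the Volume II material, conduit hydraulics in SWMM rests on uniform-flow relations such as Manning's equation; a small sketch (SI units, illustrative pipe geometry):

    import math

    def manning_flow(n, area, hyd_radius, slope):
        """Manning's equation: Q = (1/n) A R^(2/3) S^(1/2)."""
        return area * hyd_radius ** (2.0 / 3.0) * slope ** 0.5 / n

    # half-full 1 m circular pipe (R = D/4), n = 0.013, 0.5% slope
    q = manning_flow(0.013, math.pi * 0.5 ** 2 / 2.0, 0.25, 0.005)
    print(round(q, 3), "m^3/s")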
Singlet model interference effects with high scale UV physics
Dawson, S.; Lewis, I. M.
2017-01-06
One of the simplest extensions of the Standard Model (SM) is the addition of a scalar gauge singlet, S. If S is not forbidden by a symmetry from mixing with the Standard Model Higgs boson, the mixing will generate non-SM rates for Higgs production and decays. Generally, there could also be unknown high energy physics that generates additional effective low energy interactions. We show that interference effects between the scalar resonance of the singlet model and the effective field theory (EFT) operators can have significant effects in the Higgs sector. Here, we examine a non-Z2-symmetric scalar singlet model and demonstrate that a fit to the 125 GeV Higgs boson couplings and to limits on high mass resonances, S, exhibits an interesting structure and possible large cancellations of effects between the resonance contribution and the new EFT interactions that invalidate conclusions based on the renormalizable singlet model alone.
Latest generation, wide-angle, high-definition colonoscopes increase adenoma detection rate.
Adler, Andreas; Aminalai, Alireza; Aschenbeck, Jens; Drossel, Rolf; Mayr, Michael; Scheel, Mathias; Schröder, Andreas; Yenerim, Timur; Wiedenmann, Bertram; Gauger, Ulrich; Roll, Stephanie; Rösch, Thomas
2012-02-01
Improvements to endoscopy imaging technologies might improve detection rates of colorectal cancer and patient outcomes. We compared the accuracy of the latest generation of endoscopes with older generation models in the detection of colorectal adenomas. We compared data from 2 prospective screening colonoscopy studies (the Berlin Colonoscopy Project 6); each study lasted approximately 6 months and included the same 6 colonoscopists, who worked in private practice. Participants in group 1 (n = 1256) were all examined by using the latest generation of wide-angle, high-definition colonoscopes that were manufactured by the same company. Individuals in group 2 (n = 1400) were examined by endoscopists who used routine equipment (a mixture of endoscopes from different companies; none of those used to examine group 1). The adenoma detection rate was calculated on the basis of the number of all adenomas/number of all patients. There were no differences in patient parameters or withdrawal time between groups (8.0 vs 8.2 minutes). The adenoma detection rate was significantly higher in group 1 (0.33) than in group 2 (0.27; P = .01); a greater number of patients with at least 1 adenoma were identified in group 1 (22.1%) than in group 2 (18.2%; P = .01). A higher percentage of high-grade dysplastic adenomas was detected in group 1 (1.19%) than in group 2 (0.57%), but this difference was not statistically significant (P = .06). The latest generation of wide-angle, high-definition colonoscopes improves rates of adenoma detection by 22%, compared with the mixed, older technology endoscopes used in routine private practice. These findings might affect definitions of quality control parameters for colonoscopy screening for colorectal cancer. Copyright © 2012 AGA Institute. Published by Elsevier Inc. All rights reserved.
A geomorphic approach to 100-year floodplain mapping for the Conterminous United States
NASA Astrophysics Data System (ADS)
Jafarzadegan, Keighobad; Merwade, Venkatesh; Saksena, Siddharth
2018-06-01
Floodplain mapping using hydrodynamic models is difficult in data-scarce regions. Additionally, using hydrodynamic models to map floodplains over a large stream network can be computationally challenging. Some of these limitations of floodplain mapping using hydrodynamic modeling can be overcome by developing computationally efficient statistical methods to identify floodplains in large and ungauged watersheds using publicly available data. This paper proposes a geomorphic model to generate probabilistic 100-year floodplain maps for the Conterminous United States (CONUS). The proposed model first categorizes the watersheds in the CONUS into three classes based on the height of the water surface corresponding to the 100-year flood above the streambed. Next, the probability that any watershed in the CONUS belongs to one of these three classes is computed through supervised classification using watershed characteristics related to topography, hydrography, land use and climate. The result of this classification is then fed into a probabilistic threshold binary classifier (PTBC) to generate the probabilistic 100-year floodplain maps. The supervised classification algorithm is trained by using the 100-year Flood Insurance Rate Maps (FIRM) from the U.S. Federal Emergency Management Agency (FEMA). FEMA FIRMs are also used to validate the performance of the proposed model in areas not included in the training. Additionally, HEC-RAS model generated flood inundation extents are used to validate the model performance at fifteen sites that lack FEMA maps. Validation results show that the probabilistic 100-year floodplain maps generated by the proposed model match well with both FEMA and HEC-RAS generated maps. On average, the error of predicted flood extents is around 14% across the CONUS. The high accuracy of the validation results shows the reliability of the geomorphic model as an alternative approach for fast and cost-effective delineation of 100-year floodplains for the CONUS.
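A hedged sketch of the two-step scheme described above: a supervised classifier yields class probabilities per watershed, which a threshold binary classifier converts into a flood probability per cell. The synthetic data, the height-above-stream threshold form, and all cutoffs are assumptions for illustration only:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 6))        # watershed attributes (synthetic)
    y = rng.integers(0, 3, size=500)     # three 100-year water-height classes

    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    class_probs = clf.predict_proba(X)   # P(class | watershed)

    def floodplain_probability(height_above_stream, thresholds, p):
        """PTBC step: a cell is flooded under class k if its height above the
        stream is below that class's threshold; mix over class probabilities."""
        flooded = np.array([float(height_above_stream <= t) for t in thresholds])
        return float(p @ flooded)

    # cell 5 m above the stream; assumed class thresholds of 3, 6, 9 m
    print(floodplain_probability(5.0, [3.0, 6.0, 9.0], class_probs[0]))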
NASA Astrophysics Data System (ADS)
Savrda, Amanda Marie
2011-12-01
This study examines the thermal history of the southern Antarctic Peninsula through the application of thermochronometry, and presents the first high-resolution thermochronologic dataset for arc rocks of northwest Palmer Land. I present 19 new thermochronologic ages obtained via (U-Th-Sm)/He and fission-track analyses of apatite and zircon from arc granitoids of northwest Palmer Land and fore-arc rocks of the LeMay and Fossil Bluff Groups of Alexander Island. These data were modeled via Monte Carlo simulations to generate time-temperature pathways. Thermal models generated for arc granitoids of northwest Palmer Land reveal a Late Cretaceous-Early Cenozoic episode of accelerated cooling from ca. 78-55 Ma not previously recognized in the southern Antarctic Peninsula. Here, faster cooling at an average rate of ~15°C/Myr is bracketed by slower cooling at rates <3°C/Myr. Modeled thermal histories of metamorphosed fore-arc sedimentary rocks of Alexander Island reveal rapid cooling throughout the Eocene at an average rate of ~13°C/Myr, preceded and followed by slower rates of cooling on the order of <3°C/Myr. The spatial and temporal distribution of the observed cooling trends may reflect localized variations in the thermal regime due to regional changes in plate kinematics, subduction dynamics, and related magmatism, but the cooling rates are also within the range of those typical of exhumational processes such as normal faulting, ductile thinning, and erosion.
The Use of Ambient Humidity Conditions to Improve Influenza Forecast
NASA Astrophysics Data System (ADS)
Shaman, J. L.; Kandula, S.; Yang, W.; Karspeck, A. R.
2017-12-01
Laboratory and epidemiological evidence indicate that ambient humidity modulates the survival and transmission of influenza. Here we explore whether the inclusion of humidity forcing in mathematical models describing influenza transmission improves the accuracy of forecasts generated with those models. We generate retrospective forecasts for 95 cities over 10 seasons in the United States and assess both forecast accuracy and error. Overall, we find that humidity forcing improves forecast performance and that forecasts generated using daily climatological humidity forcing generally outperform forecasts that utilize daily observed humidity forcing. These findings hold for predictions of outbreak peak intensity, peak timing, and incidence over 2- and 4-week horizons. The results indicate that the use of climatological humidity forcing is warranted for current operational influenza forecasting and provide further evidence that humidity modulates rates of influenza transmission.
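A hedged sketch of how humidity forcing typically enters such transmission models: the basic reproduction number is modulated by daily specific humidity inside a SIRS step. The exponential forcing form and every parameter value below are assumptions for illustration, not the study's calibrated model:

    import numpy as np

    def sirs_step(S, I, N, q, dt=1.0, D=4.0, L=1460.0,
                  R0_min=1.2, R0_max=2.2, a=180.0):
        """q: specific humidity (kg/kg); D: infectious period (days);
        L: duration of immunity (days)."""
        R0 = (R0_max - R0_min) * np.exp(-a * q) + R0_min
        new_inf = (R0 / D) * S * I / N
        dS = (N - S - I) / L - new_inf
        dI = new_inf - I / D
        return S + dS * dt, I + dI * dt

    S, I, N = 6e5, 100.0, 1e6
    for day in range(180):
        q = 0.004 + 0.003 * np.cos(2 * np.pi * day / 365.0)  # synthetic humidity
        S, I = sirs_step(S, I, N, q)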
a New Dynamic Community Model for Social Networks
NASA Astrophysics Data System (ADS)
Lu, Zhe-Ming; Wu, Zhen; Guo, Shi-Ze; Chen, Zhe; Song, Guang-Hua
2014-09-01
In this paper, based on the phenomenon that individuals join and leave organizations in society, we propose a dynamic community model to construct social networks. Two parameters are adopted in our model: one is the communication rate Pa, which denotes the connection strength within an organization, and the other is the turnover rate Pb, which stands for the frequency of jumping among organizations. Based on simulations, we analyze not only the degree distribution, the clustering coefficient, the average distance and the network diameter but also the group distribution, which is closely related to the community structure. Moreover, we discover that the networks generated by the proposed model possess the small-world property and can reproduce networks of social contacts well.
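A plausible reading of the two-parameter growth rule, sketched with networkx; the exact update order and tie-breaking are assumptions, not the authors' algorithm:

    import random
    import networkx as nx

    def grow_network(n_agents=500, n_orgs=20, p_a=0.3, p_b=0.05, steps=10):
        G = nx.empty_graph(n_agents)
        org = {i: random.randrange(n_orgs) for i in range(n_agents)}
        for _ in range(steps):
            for i in range(n_agents):
                if random.random() < p_b:      # turnover: jump to a new organization
                    org[i] = random.randrange(n_orgs)
                if random.random() < p_a:      # communicate within the organization
                    peers = [j for j in range(n_agents) if org[j] == org[i] and j != i]
                    if peers:
                        G.add_edge(i, random.choice(peers))
        return G

    G = grow_network()
    print(nx.average_clustering(G))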
Heterogenous Combustion of Porous Graphite Particles in Normal and Microgravity
NASA Technical Reports Server (NTRS)
Chelliah, Harsha K.; Miller, Fletcher J.; Delisle, Andrew J.
2001-01-01
Combustion of solid fuel particles has many important applications, including power generation and space propulsion systems. The current models available for describing the combustion process of these particles, especially porous solid particles, include various simplifying approximations. One of the most limiting approximations is the lumping of the physical properties of the porous fuel with the heterogeneous chemical reaction rate constants. The primary objective of the present work is to develop a rigorous model that could decouple such physical and chemical effects from the global heterogeneous reaction rates. For the purpose of validating this model, experiments with porous graphite particles of varying sizes and porosity are being performed. The details of this experimental and theoretical model development effort are described.
Modeling laser velocimeter signals as triply stochastic Poisson processes
NASA Technical Reports Server (NTRS)
Mayo, W. T., Jr.
1976-01-01
Previous models of laser Doppler velocimeter (LDV) systems have not adequately described dual-scatter signals in a manner useful for analysis and simulation of low-level photon-limited signals. At low photon rates, an LDV signal at the output of a photomultiplier tube is a compound nonhomogeneous filtered Poisson process, whose intensity function is another (slower) Poisson process with the nonstationary rate and frequency parameters controlled by a random flow (slowest) process. In the present paper, generalized Poisson shot noise models are developed for low-level LDV signals. Theoretical results useful in detection error analysis and simulation are presented, along with measurements of burst amplitude statistics. Computer generated simulations illustrate the difference between Gaussian and Poisson models of low-level signals.
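A hedged sketch of simulating such photon-limited signals: a nonhomogeneous Poisson process whose rate follows a Doppler-burst envelope, sampled by Lewis-Shedler thinning. The burst parameters are illustrative, not taken from the paper:

    import numpy as np

    def burst_rate(t, r0=5e5, t0=5e-6, sigma=1.5e-6, f_d=2e6, vis=0.8):
        """Photon rate: Gaussian pedestal times a fringe modulation at f_d."""
        env = np.exp(-((t - t0) ** 2) / (2.0 * sigma ** 2))
        return r0 * env * (1.0 + vis * np.cos(2.0 * np.pi * f_d * t))

    def thin_poisson(rate_fn, t_max, rate_max, seed=0):
        """Lewis-Shedler thinning for a nonhomogeneous Poisson process."""
        rng = np.random.default_rng(seed)
        t, events = 0.0, []
        while True:
            t += rng.exponential(1.0 / rate_max)
            if t > t_max:
                return np.array(events)
            if rng.random() < rate_fn(t) / rate_max:
                events.append(t)

    arrivals = thin_poisson(burst_rate, t_max=1e-5, rate_max=1e6)  # photon times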
Ness, Rob W.; Morgan, Andrew D.; Vasanthakrishnan, Radhakrishnan B.; Colegrave, Nick; Keightley, Peter D.
2015-01-01
Describing the process of spontaneous mutation is fundamental for understanding the genetic basis of disease, the threat posed by declining population size in conservation biology, and much of evolutionary biology. Directly studying spontaneous mutation has been difficult, however, because new mutations are rare. Mutation accumulation (MA) experiments overcome this by allowing mutations to build up over many generations in the near absence of natural selection. Here, we sequenced the genomes of 85 MA lines derived from six genetically diverse strains of the green alga Chlamydomonas reinhardtii. We identified 6843 new mutations, more than any other study of spontaneous mutation. We observed sevenfold variation in the mutation rate among strains and that mutator genotypes arose, increasing the mutation rate approximately eightfold in some replicates. We also found evidence for fine-scale heterogeneity in the mutation rate, with certain sequence motifs mutating at much higher rates, and clusters of multiple mutations occurring at closely linked sites. There was little evidence, however, for mutation rate heterogeneity between chromosomes or over large genomic regions of 200 kbp. We generated a predictive model of the mutability of sites based on their genomic properties, including local GC content, gene expression level, and local sequence context. Our model accurately predicted the average mutation rate and natural levels of genetic diversity of sites across the genome. Notably, trinucleotides vary 17-fold in rate between the most and least mutable sites. Our results uncover a rich heterogeneity in the process of spontaneous mutation both among individuals and across the genome. PMID:26260971
Sculpting Mountains: Interactive Terrain Modeling Based on Subsurface Geology.
Cordonnier, Guillaume; Cani, Marie-Paule; Benes, Bedrich; Braun, Jean; Galin, Eric
2018-05-01
Most mountain ranges are formed by the compression and folding of colliding tectonic plates. Subduction of one plate causes large-scale asymmetry, while their layered composition (or stratigraphy) explains the multi-scale folded strata observed on real terrains. We introduce a novel interactive modeling technique to generate visually plausible, large-scale terrains that capture these phenomena. Our method draws on both geological knowledge for consistency and on sculpting systems for user interaction. The user is provided hands-on control over the shape and motion of tectonic plates, represented using a new geologically inspired model for the Earth's crust. The model captures their volume-preserving and complex folding behaviors under collision, causing mountains to grow. It generates a volumetric uplift map representing the growth rate of subsurface layers. Erosion and uplift movement are jointly simulated to generate the terrain. The stratigraphy allows us to render folded strata on eroded cliffs. We validated the usability of our sculpting interface through a user study, and compared the visual consistency of the Earth crust model with geological simulation results and real terrains.
Social anxiety and interpersonal stress generation: the moderating role of interpersonal distress.
Siegel, David M; Burke, Taylor A; Hamilton, Jessica L; Piccirillo, Marilyn L; Scharff, Adela; Alloy, Lauren B
2018-06-01
Existing models of social anxiety scarcely account for interpersonal stress generation. These models also seldom include interpersonal factors that compound the effects of social anxiety. Given recent findings that two forms of interpersonal distress, perceived burdensomeness and thwarted belongingness, intensify social anxiety and cause interpersonal stress generation, these two constructs may be especially relevant to examining social anxiety and interpersonal stress generation together. The current study extended prior research by examining the role of social anxiety in the occurrence of negative and positive interpersonal events and evaluated whether interpersonal distress moderated these associations. Undergraduate students (N = 243; M = 20.46 years; 83% female) completed self-report measures of social anxiety, perceived burdensomeness, and thwarted belongingness, as well as a self-report measure and clinician-rated interview assessing negative and positive interpersonal events that occurred over the past six weeks. Higher levels of social anxiety were associated only with a higher occurrence of negative interpersonal dependent events, after controlling for depressive symptoms. This relationship was stronger among individuals who also reported higher levels of perceived burdensomeness, but not thwarted belongingness. It may be important to more strongly consider interpersonal stress generation in models of social anxiety.
NASA Astrophysics Data System (ADS)
Li, Xuebao; Cui, Xiang; Lu, Tiebing; Ma, Wenzuo; Bian, Xingming; Wang, Donglai; Hiziroglu, Huseyin
2016-03-01
The corona-generated audible noise (AN) has become one of the decisive factors in the design of high voltage direct current (HVDC) transmission lines. The AN from transmission lines can be attributed to sound pressure pulses that are generated by the multiple corona sources formed on the conductor, i.e., the transmission lines. In this paper, the detailed time-domain characteristics of the sound pressure pulses generated by DC corona discharges formed over the surfaces of stranded conductors are investigated systematically in a laboratory setting using a corona cage structure. The amplitudes of the sound pressure pulses and their time intervals are extracted by observing a direct correlation between corona current pulses and corona-generated sound pressure pulses. Based on the statistical characteristics, a stochastic model is presented for simulating the sound pressure pulses due to DC corona discharges occurring on conductors. The proposed stochastic model is validated by comparing the calculated and measured A-weighted sound pressure level (SPL). The proposed model is then used to analyze the influence of the pulse amplitudes and pulse rate on the SPL. Furthermore, a mathematical relationship is found between the SPL and the conductor diameter, electric field, and radial distance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, Wang; Wang, Haiou; Kuenne, Guido
This supplementary material complements the article and provides additional information on the chemical mechanism used in this work, boundary conditions for the LES configuration and table generation, comparisons of axial velocities, results from a LES/finite-rate chemistry (FRC) approach, and results from the LES/DTF/SPF approach with a particular chemistry table that is generated using a single strained premixed flamelet solution.
Ji, Yue; Li, Xingfei; Wu, Tengfei; Chen, Cheng
2015-12-15
The magnetohydrodynamics angular rate sensor (MHD ARS) has received much attention for its ultra-low noise over an ultra-broad bandwidth and its impact resistance in harsh environments; however, its poor performance at low frequency hinders long-duration operation. This paper presents a modified MHD ARS combining the Coriolis effect with the MHD effect to extend the measurement scope throughout the whole bandwidth, in which an appropriate radial flow velocity should be provided to satisfy the simplified model of the modified MHD ARS. A method of generating radial velocity with an MHD pump in the MHD ARS is proposed. A device is designed to study the radial flow velocity generated by the MHD pump. The influence of structural and physical parameters is studied by numerical simulation and experiments with the device. The analytic expressions for the velocity generated by the energizing current, drawn from simulation and experiment, are consistent, which demonstrates the effectiveness of the method for generating radial velocity. The study can be applied to generate and control radial velocity in the modified MHD ARS, which is essential for combining the two effects throughout the whole bandwidth.
NASA Astrophysics Data System (ADS)
Xu, Qian
The Richtmyer-Meshkov Instability (RMI) (Commun. Pure Appl. Math 23, 297-319, 1960; Izv. Akad. Nauk. SSSR Mekh. Zhidk. Gaza. 4, 151-157, 1969) occurs when an impulsive acceleration acts on a perturbed interface between two fluids of different densities. In the experiments presented in this thesis, single-mode 3D RMI experiments are performed. An oscillating speaker generates a single-mode sinusoidal initial perturbation at an interface between two gases, air and SF6. A Mach 1.19 shock wave accelerates the interface and generates the Richtmyer-Meshkov Instability. Both gases are seeded with propylene glycol particles which are illuminated by an Nd:YLF pulsed laser. Three high-speed video cameras record image sequences of the experiment. Particle Image Velocimetry (PIV) is applied to measure the velocity field. Measurements of the amplitude for both spike and bubble are obtained, from which the growth rate is measured. For both spike and bubble experiments, amplitude and growth rate match linear stability theory at early time, but fall into a non-linear region with amplitude measurements lying between the modified 3D Sadot et al. model (Phys. Rev. Lett. 80, 1654-1657, 1998) and the Zhang & Sohn model (Phys. Fluids 9, 1106-1124, 1997; Z. Angew. Math. Phys. 50, 1-46, 1990) at late time. Amplitude and growth rate curves are found to lie above the modified 3D Sadot et al. model and below the Zhang & Sohn model for the spike experiments. Conversely, for the bubble experiments, both amplitude and growth rate curves lie above the Zhang & Sohn model and below the modified 3D Sadot et al. model. Circulation is also calculated using the vorticity and velocity fields from the PIV measurements. The calculated circulations are approximately equal and are found to grow with time, a result that differs from the modified Jacobs and Sheeley circulation model (Phys. Fluids 8, 405-415, 1996).
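For orientation, the early-time behavior referenced above follows the impulsive linear model, da/dt = k A Δu a0, with constant growth after shock passage; the numbers below are illustrative stand-ins, not the thesis measurements:

    import numpy as np

    def rmi_linear_amplitude(t, a0, wavelength, delta_u, atwood):
        """Impulsive-model amplitude: a(t) = a0 + (k * A * du * a0) * t."""
        k = 2.0 * np.pi / wavelength        # perturbation wavenumber
        return a0 + k * atwood * delta_u * a0 * t

    # air/SF6-like values (illustrative): a0 = 2 mm, lambda = 5 cm, du = 60 m/s
    a = rmi_linear_amplitude(t=1e-3, a0=2e-3, wavelength=0.05, delta_u=60.0, atwood=0.67)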
A model of metastable dynamics during ongoing and evoked cortical activity
NASA Astrophysics Data System (ADS)
La Camera, Giancarlo
The dynamics of simultaneously recorded spike trains in alert animals often evolve through temporal sequences of metastable states. Little is known about the network mechanisms responsible for the genesis of such sequences, or their potential role in neural coding. In the gustatory cortex of alert rats, state sequences can be observed also in the absence of overt sensory stimulation, and thus form the basis of the so-called "ongoing activity". This activity is characterized by a partial degree of coordination among neurons, sharp transitions among states, and multi-stability of single neurons' firing rates. A recurrent spiking network model with clustered topology can account for both the spontaneous generation of state sequences and the (network-generated) multi-stability. In the model, each network state results from the activation of specific neural clusters with potentiated intra-cluster connections. A mean field solution of the model shows a large number of stable states, each characterized by a subset of simultaneously active clusters. The firing rate in each cluster during ongoing activity depends on the number of active clusters, so that the same neuron can have different firing rates depending on the state of the network. Because of dense intra-cluster connectivity and recurrent inhibition, in finite networks the stable states lose stability due to finite size effects. Simulations of the dynamics show that the model ensemble activity continuously hops among the different states, reproducing the ongoing dynamics observed in the data. Moreover, when probed with external stimuli, the model correctly predicts the quenching of single neuron multi-stability into bi-stability, the reduction of dimensionality of the population activity, the reduction of trial-to-trial variability, and a potential role for metastable states in the anticipation of expected events. Altogether, these results provide a unified mechanistic model of ongoing and evoked cortical dynamics. NSF IIS-1161852, NIDCD K25-DC013557, NIDCD R01-DC010389.
Law, B.E.; Dickinson, W.W.
1985-01-01
The paper suggests that overpressured and underpressured gas accumulations of this type have a common origin. In basins containing overpressured gas accumulations, rates of thermogenic gas accumulation exceed gas loss, causing fluid (gas) pressure to rise above the regional hydrostatic pressure. Free water in the larger pores is forced out of the gas generation zone into overlying and updip, normally pressured, water-bearing rocks. While other diagenetic processes continue, a pore network with very low permeability develops. As a result, gas accumulates in these low-permeability reservoirs at rates higher than it is lost. In basins containing underpressured gas accumulations, rates of gas generation and accumulation are less than gas loss. The basin-center gas accumulation persists, but because of changes in the basin dynamics, the overpressured accumulation evolves into an underpressured system.
Approaches to setting organism-based ballast water discharge standards
Lee, Henry; Reusser, Deborah A.; Frazier, Melanie
2013-01-01
As a vector by which foreign species invade coastal and freshwater waterbodies, ballast water discharge from ships is recognized as a major environmental threat. The International Maritime Organization (IMO) drafted an international treaty establishing ballast water discharge standards based on the number of viable organisms per volume of ballast discharge for different organism size classes. Concerns that the IMO standards are not sufficiently protective have initiated several state and national efforts in the United States to develop more stringent standards. We evaluated seven approaches to establishing discharge standards for the >50-μm size class: (1) expert opinion/management consensus, (2) zero detectable living organisms, (3) natural invasion rates, (4) reaction–diffusion models, (5) population viability analysis (PVA) models, (6) per capita invasion probabilities (PCIP), and (7) experimental studies. Because of the difficulty in synthesizing scientific knowledge in an unbiased and transparent fashion, we recommend the use of quantitative models instead of expert opinion. The actual organism concentration associated with a “zero detectable organisms” standard is defined by the statistical rigor of its monitoring program; thus it is not clear whether such a standard is as stringent as other standards. For several reasons, the natural invasion rate, reaction–diffusion, and experimental approaches are not considered suitable for generating discharge standards. PVA models can be used to predict the likelihood of establishment of introduced species but are limited by a lack of population vital rates for species characteristic of ballast water discharges. Until such rates become available, PVA models are better suited to evaluate relative efficiency of proposed standards rather than predicting probabilities of invasion. The PCIP approach, which is based on historical invasion rates at a regional scale, appears to circumvent many of the indicated problems, although it may underestimate invasions by asexual and parthenogenic species. Further research is needed to better define propagule dose–responses, densities at which Allee effects occur, approaches to predicting the likelihood of invasion from multi-species introductions, and generation of formal comparisons of approaches using standardized scenarios.
Glaude, Pierre Alexandre; Herbinet, Olivier; Bax, Sarah; Biet, Joffrey; Warth, Valérie; Battin-Leclerc, Frédérique
2013-01-01
The modeling of the oxidation of methyl esters was investigated and the specific chemistry, which is due to the presence of the ester group in this class of molecules, is described. New reactions and rate parameters were defined and included in the software EXGAS for the automatic generation of kinetic mechanisms. Models generated with EXGAS were successfully validated against data from the literature (oxidation of methyl hexanoate and methyl heptanoate in a jet-stirred reactor) and a new set of experimental results for methyl decanoate. The oxidation of this last species was investigated in a jet-stirred reactor at temperatures from 500 to 1100 K, including the negative temperature coefficient region, under stoichiometric conditions, at a pressure of 1.06 bar and for a residence time of 1.5 s: more than 30 reaction products, including olefins, unsaturated esters, and cyclic ethers, were quantified and successfully simulated. Flow rate analysis showed that reaction pathways for the oxidation of methyl esters in the low-temperature range are similar to those of alkanes. PMID:23710076
Differences in contraceptive use across generations of migration among women of Mexican origin.
Wilson, Ellen K
2009-09-01
To explore differences in contraceptive use among women of Mexican origin across generations of migration. Logit models were used to assess contraceptive use among 1,830 women of Mexican origin in Cycles 5 (1995) and 6 (2002) of the National Survey of Family Growth (NSFG). Analyses were stratified by age. Initial models controlled for survey year and underlying differences across generations of migration in age and parity; subsequent models added a range of potential mediating variables. Models account for significant interactions between generation of migration and parity. Among women under age 30 who have not yet had any children, women in their twenties with parity 3 or more, and women 30 or older with parity 1 or 2, those born in the US are much more likely to use contraception than immigrant women. For other levels of parity, there are no significant differences in contraceptive use across generations of migration. Generational differences in marital status, socio-economic status, health insurance coverage, and Catholic religiosity did little to mediate the association between generation of migration and contraceptive use. Among women of Mexican origin, patterns of contraceptive use among first-generation immigrants and women of generation 1.5 are similar to those of women in Mexico, with very low rates of contraceptive use among young women who have not yet had a child. Further research is needed to investigate the extent to which this pattern is due to fertility preferences, contraceptive access, or concerns about side effects and infertility. Patterns of contraceptive use appear to change more slowly with acculturation than many other factors, such as education, income, and work force participation.
NASA Astrophysics Data System (ADS)
Heidrich, Brenden J.
Nuclear power plants produce 20 percent of the electricity generated in the U.S. Nuclear-generated electricity is increasingly valuable to a utility because it can be produced at a low marginal cost and it does not release any carbon dioxide. It can also be a hedge against uncertain fossil fuel prices. The construction of new nuclear power plants in the U.S. is cautiously moving forward, restrained by high capital costs. Since 1998, nuclear utilities have been increasing the power output of their reactors by implementing extended power up-rates. Power increases of up to 20 percent are allowed under this process. The equivalent of nine large power plants has been added via extended power up-rates. These up-rates require the replacement of large capital equipment and are often performed in concert with other plant life extension activities such as license renewals. This dissertation examines the effect of these extended power up-rates on the safety performance of U.S. boiling water reactors. Licensing event reports are submitted by the utilities to the Nuclear Regulatory Commission, the federal nuclear regulator, for a wide range of abnormal events. Two methods are used to examine the effect of extended power up-rates on the frequency of abnormal events at the reactors. The Crow/AMSAA model, a univariate technique, is used to determine whether the implementation of an extended power up-rate affects the rate of abnormal events. The method has a long history in the aerospace industry and in the military. At a 95-percent confidence level, the rate of events requiring the submission of a licensing event report decreases following the implementation of an extended power up-rate. It is hypothesized that the improvement in performance is tied to the equipment replacement and refurbishment performed as part of the up-rate process. Reactor performance is also analyzed using the proportional hazards model. This technique allows for the estimation of the effects of multiple independent variables on the event rate. Both the Cox and Weibull formulations were tested. The Cox formulation is more commonly used in survival analysis because of its flexibility. The best Cox model included fixed effects at the multi-reactor site level. The Weibull parametric formulation has the same base hazard rate as the Crow/AMSAA model. This theoretical connection was confirmed through a series of tests that demonstrated both models predicted the same base hazard rates. The Weibull formulation produced a model with most of the same statistically significant variables as the Cox model. The beneficial effect of extended power up-rates was predicted by the proportional hazards models as well as the Crow/AMSAA model. The Weibull model also indicated an effect that can be traced back to a plant's construction. Performance was also found to improve in plants that had been divested from their original owners. This research developed a consistent evaluation toolkit for nuclear power plant performance using either a univariate method that allows for simple graphical evaluation or a more complex multivariate method that includes the effects of several independent variables, with data that are available from public sources. Utilities or regulators with access to proprietary data may be able to expand upon this research with additional data that is not readily available to an academic researcher.
Even without access to special data, the methods developed are valuable tools in evaluating and predicting nuclear power plant reliability performance.
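The Crow/AMSAA model treats events as a power-law nonhomogeneous Poisson process with expected cumulative count N(t) = λ t^β, so β < 1 indicates an improving (decreasing) event rate. A minimal sketch using a simple log-log regression (the formal approach uses maximum likelihood), on made-up event times:

    import numpy as np

    def fit_crow_amsaa(event_times):
        """Fit N(t) = lam * t**beta by regressing log N on log t."""
        t = np.sort(np.asarray(event_times, dtype=float))
        counts = np.arange(1, len(t) + 1)
        beta, log_lam = np.polyfit(np.log(t), np.log(counts), 1)
        return np.exp(log_lam), beta

    lam, beta = fit_crow_amsaa([30, 90, 200, 420, 800, 1500])  # days, illustrative
    print(beta)   # beta < 1 suggests reliability growth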
Choi, Jungyill; Harvey, Judson W.; Conklin, Martha H.
2000-01-01
The fate of contaminants in streams and rivers is affected by exchange and biogeochemical transformation in slowly moving or stagnant flow zones that interact with rapid flow in the main channel. In a typical stream, there are multiple types of slowly moving flow zones in which exchange and transformation occur, such as stagnant or recirculating surface water as well as subsurface hyporheic zones. However, most investigators use transport models with just a single storage zone in their modeling studies, which assumes that the effects of multiple storage zones can be lumped together. Our study addressed the following question: Can a single‐storage zone model reliably characterize the effects of physical retention and biogeochemical reactions in multiple storage zones? We extended an existing stream transport model with a single storage zone to include a second storage zone. With the extended model we generated 500 data sets representing transport of nonreactive and reactive solutes in stream systems that have two different types of storage zones with variable hydrologic conditions. The one storage zone model was tested by optimizing the lumped storage parameters to achieve a best fit for each of the generated data sets. Multiple storage processes were categorized as possessing I, additive; II, competitive; or III, dominant storage zone characteristics. The classification was based on the goodness of fit of generated data sets, the degree of similarity in mean retention time of the two storage zones, and the relative distributions of exchange flux and storage capacity between the two storage zones. For most cases (>90%) the one storage zone model described either the effect of the sum of multiple storage processes (category I) or the dominant storage process (category III). Failure of the one storage zone model occurred mainly for category II, that is, when one of the storage zones had a much longer mean retention time (ts ratio > 5.0) and when the dominance of storage capacity and exchange flux occurred in different storage zones. We also used the one storage zone model to estimate a “single” lumped rate constant representing the net removal of a solute by biogeochemical reactions in multiple storage zones. For most cases the lumped rate constant that was optimized by one storage zone modeling estimated the flux‐weighted rate constant for multiple storage zones. Our results explain how the relative hydrologic properties of multiple storage zones (retention time, storage capacity, exchange flux, and biogeochemical reaction rate constant) affect the reliability of lumped parameters determined by a one storage zone transport model. We conclude that stream transport models with a single storage compartment will in most cases reliably characterize the dominant physical processes of solute retention and biogeochemical reactions in streams with multiple storage zones.
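A minimal sketch of the local exchange terms for one main-channel cell coupled to two storage zones (OTIS-style notation; advection and dispersion omitted, parameters illustrative):

    def exchange_step(C, Cs1, Cs2, dt, alpha1=1e-4, alpha2=1e-5,
                      beta1=0.2, beta2=0.5, lam1=0.0, lam2=1e-5):
        """alpha_k: exchange coefficients (1/s); beta_k = As_k/A, storage-to-channel
        area ratios; lam_k: first-order reaction rates in each storage zone."""
        dC = -alpha1 * (C - Cs1) - alpha2 * (C - Cs2)
        dCs1 = (alpha1 / beta1) * (C - Cs1) - lam1 * Cs1
        dCs2 = (alpha2 / beta2) * (C - Cs2) - lam2 * Cs2
        return C + dC * dt, Cs1 + dCs1 * dt, Cs2 + dCs2 * dt

    # relax a pulse of channel concentration into the two zones
    C, Cs1, Cs2 = 1.0, 0.0, 0.0
    for _ in range(10000):
        C, Cs1, Cs2 = exchange_step(C, Cs1, Cs2, dt=1.0)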
Shear band formation in plastic bonded explosive (PBX)
NASA Astrophysics Data System (ADS)
Dey, T. N.; Johnson, J. N.
1998-07-01
Adiabatic shear bands can be a source of ignition and lead to detonation. At low to moderate deformation rates, 10–1000 s⁻¹, two other mechanisms can also give rise to shear bands. These mechanisms are: 1) softening caused by micro-cracking and 2) a constitutive response with a non-associated flow rule, as is observed in granular materials such as soil. Brittle behavior at small strains and the granular nature of HMX suggest that PBX-9501 constitutive behavior may be similar to sand. A constitutive model for the first of these mechanisms is studied in a series of calculations. This viscoelastic constitutive model for PBX-9501 softens via a statistical crack model. A sand model is used to provide a non-associated flow rule; detailed results will be reported elsewhere. Both models generate shear band formation at 1-2% strain at nominal strain rates at and below 1000 s⁻¹. Shear band formation is suppressed at higher strain rates. Both mechanisms may accelerate the formation of adiabatic shear bands.
NASA Astrophysics Data System (ADS)
Markov, Detelin
2012-11-01
This paper presents an easy-to-understand procedure for predicting the time variation of indoor air composition in air-tight occupied spaces during night periods. The mathematical model is based on the assumptions of homogeneity and perfect mixing of the indoor air, the ideal gas model for non-reacting gas mixtures, mass conservation equations for the entire system and for each species, a model for prediction of the basal metabolic rate of humans, and a model for prediction of the O2 consumption rate and the CO2 and H2O generation rates of breathing. The time variation of indoor air composition is predicted at constant indoor air temperature for three scenarios based on the analytical solution of the mathematical model. The results reveal both the most probable scenario for indoor air composition variation in air-tight occupied spaces and the cause of morning tiredness after sleeping in a modern energy-efficient space.
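The core of such a model is a well-mixed mass balance for each species. For an air-tight space the ventilation term vanishes and the CO2 mole fraction grows linearly in time. A minimal sketch with an assumed per-person generation rate (not the paper's metabolic model):

```python
# Perfect-mixing mass balance for CO2 in an air-tight bedroom.
# All parameter values are illustrative assumptions.
V = 30.0          # room volume [m3]
n = 2             # occupants
G = 3.6e-6        # CO2 generation per sleeping person [m3/s] (~13 L/h, assumed)
C0 = 400e-6       # initial CO2 volume fraction (400 ppm)

def co2_ppm(t_hours):
    """Airtight room (no ventilation): dC/dt = n*G/V, so C grows linearly."""
    t = t_hours * 3600.0
    return (C0 + n * G * t / V) * 1e6

for h in (0, 2, 4, 8):
    print(f"{h} h: {co2_ppm(h):.0f} ppm")
```

With these assumed numbers the concentration passes several thousand ppm by morning, which is the mechanism behind the morning-tiredness result described above.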
Two-stage model of radon-induced malignant lung tumors in rats: effects of cell killing
NASA Technical Reports Server (NTRS)
Luebeck, E. G.; Curtis, S. B.; Cross, F. T.; Moolgavkar, S. H.
1996-01-01
A two-stage stochastic model of carcinogenesis is used to analyze lung tumor incidence in 3750 rats exposed to varying regimens of radon carried on a constant-concentration uranium ore dust aerosol. New to this analysis is the parameterization of the model such that cell killing by the alpha particles could be included. The model contains parameters characterizing the rate of the first mutation, the net proliferation rate of initiated cells, the ratio of the rates of cell loss (cell killing plus differentiation) and cell division, and the lag time between the appearance of the first malignant cell and the tumor. Data analysis was by standard maximum likelihood estimation techniques. Results indicate that the rate of the first mutation is dependent on radon and consistent with in vitro rates measured experimentally, and that the rate of the second mutation is not dependent on radon. An initial sharp rise in the net proliferation rate of initiated cells was found with increasing exposure rate (denoted model I), which leads to an unrealistically high cell-killing coefficient. A second model (model II) was studied, in which the initial rise was attributed to promotion via a step function, implying that it is due not to radon but to the uranium ore dust. This model resulted in values for the cell-killing coefficient consistent with those found for in vitro cells. An "inverse dose-rate" effect is seen, i.e. an increase in the lifetime probability of tumor with a decrease in exposure rate. This is attributed in large part to promotion of intermediate lesions. Model II is preferable on biological grounds, since it yields a plausible cell-killing coefficient and attributes promotion to a factor other than radon, such as the uranium ore dust. This analysis presents evidence that a two-stage model describes the data adequately and generates hypotheses regarding the mechanism of radon-induced carcinogenesis.
Extraterrestrial cold chemistry. A need for a specific database.
NASA Astrophysics Data System (ADS)
Pernot, P.; Carrasco, N.; Dobrijevic, M.; Hébrard, E.; Plessis, S.; Wakelam, V.
2008-09-01
The major resource databases for building chemical models for photochemistry in cold environments are mainly based on those designed for Earth atmospheric chemistry or combustion, in which reaction rates are reported for temperatures typically above 300 K [1,2]. Kinetic data measured at low temperatures are very sparse; for instance, in state-of-the-art photochemical models of Titan's atmosphere, less than 10% of the rates have been measured in the relevant temperature range (100-200 K) [3-5]. In consequence, photochemical models rely mostly on low-T extrapolations by Arrhenius-type laws. There is more and more evidence that this is often inappropriate [6], and low-T extrapolations are hindered by very high uncertainty [3] (Fig. 1). The predictions of models based on those extrapolations are expected to be very inaccurate [4,7]. We argue that there is not much sense in increasing the complexity of the present models as long as this predictivity issue has not been resolved. Fig. 1: Uncertainty of low-temperature extrapolation for the N(2D) + C2H4 reaction rate, from measurements in the range 225-292 K [10], assuming an Arrhenius law (blue line). The sample of rate laws is generated by Monte Carlo uncertainty propagation after a Bayesian Data reAnalysis (BDA) of experimental data. A dialogue between modellers and experimentalists is necessary to improve this situation. Considering the heavy costs of low-temperature reaction kinetics experiments, the identification of key reactions has to be based on an optimal strategy to improve the predictivity of photochemical models. This can be achieved by global sensitivity analysis, as illustrated on Titan atmospheric chemistry [8]. The main difficulty of this scheme is that it requires a lot of inputs, mainly the evaluation of uncertainty for extrapolated reaction rates. Although a large part has already been achieved by Hébrard et al. [3], extension and validation requires a group of experts. A new generation of collaborative kinetic database is needed to implement this scheme efficiently. The KIDA project [9], initiated by V. Wakelam for astrochemistry, has been joined by planetologists with similar prospects. EuroPlaNet will contribute to this effort through the organization of committees of experts on specific processes in atmospheric photochemistry.
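The uncertainty blow-up under low-temperature extrapolation is easy to demonstrate: propagate assumed uncertainties on the Arrhenius parameters by Monte Carlo and compare the spread of k(T) inside and below the measured range. All parameter values below are illustrative, not the N(2D) + C2H4 evaluation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical Arrhenius fit, k(T) = A * exp(-Ea/T), with assumed
# independent 1-sigma uncertainties on ln A and on the activation
# temperature Ea (in kelvin). All numbers are invented for illustration.
lnA_mu, lnA_sig = np.log(1.0e-10), 0.3     # A in cm3 s-1
Ea_mu, Ea_sig = 500.0, 150.0               # Ea/kB [K]

def k_samples(T, n=10000):
    lnA = rng.normal(lnA_mu, lnA_sig, n)
    Ea = rng.normal(Ea_mu, Ea_sig, n)
    return np.exp(lnA - Ea / T)

for T in (300.0, 150.0):
    k = k_samples(T)
    spread = np.exp(np.std(np.log(k)))
    print(f"T = {T:3.0f} K: median k = {np.median(k):.2e}, "
          f"geometric 1-sigma spread = x{spread:.1f}")
```

Because the Ea term enters as Ea/T, the same parameter uncertainty produces a far wider rate-constant spread at 150 K than at 300 K, which is the effect the figure caption above describes.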
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bodvarsson, G.S.; Pruess, K.; Stefansson, V.
A detailed three-dimensional well-by-well model of the East Olkaria geothermal field in Kenya has been developed. The model matches reasonably well the flow rate and enthalpy data from all wells, as well as the overall pressure decline in the reservoir. The model is used to predict the generating capacity of the field, well decline, enthalpy behavior, the number of make-up wells needed and the effects of injection on well performance and overall reservoir depletion. 26 refs., 10 figs.
Evaluation of on-line chelant addition to PWR steam generators. Steam generator cleaning project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tvedt, T.J.; Wallace, S.L.; Griffin, F. Jr.
1983-09-01
The investigation of chelating agents for continuous water treatment of the secondary loops of PWR steam generators was conducted in two general areas: the study of the chemistry of chelating agents and the study of materials compatibility with chelating agents. The thermostability of both EDTA and HEDTA metal chelates in All Volatile Treatment (AVT) water chemistry was shown to be greater than or equal to the thermostability of EDTA metal chelates in phosphate-sulfite water chemistry. HEDTA metal chelates were shown to have a much greater stability than EDTA metal chelates. Using samples taken from the EDTA metal chelate thermostability study and from the Commonwealth Research Corporation (CRC) model steam generators (MSG), EDTA decomposition products were determined. Active metal surfaces were shown to become passivated when exposed to EDTA and HEDTA concentrations as high as 0.1% w/w in AVT. Trace amounts of iron in the water were found to increase the rate of passivation. Material balance and visual inspection data from the CRC model steam generators showed that metal was transported through and cleaned from the MSGs. The Inconel 600 tubes of the salt-water-fouled model steam generators experienced pitting corrosion. The results of this study demonstrate the feasibility of EDTA as an on-line water treatment additive to maintain nuclear steam generators in a clean condition.
The social architecture of capitalism
NASA Astrophysics Data System (ADS)
Wright, Ian
2005-02-01
A dynamic model of the social relations between workers and capitalists is introduced. The model self-organises into a dynamic equilibrium with statistical properties that are in close qualitative and in many cases quantitative agreement with a broad range of known empirical distributions of developed capitalism, including the power-law firm size distribution, the Laplace firm and GDP growth distribution, the lognormal firm demises distribution, the exponential recession duration distribution, the lognormal-Pareto income distribution, and the gamma-like firm rate-of-profit distribution. Normally these distributions are studied in isolation, but this model unifies and connects them within a single causal framework. The model also generates business cycle phenomena, including fluctuating wage and profit shares in national income about values consistent with empirical studies. The generation of an approximately lognormal-Pareto income distribution and an exponential-Pareto wealth distribution demonstrates that the power-law regime of the income distribution can be explained by an additive process on a power-law network that models the social relation between employers and employees organised in firms, rather than a multiplicative process that models returns to investment in financial markets. A testable consequence of the model is the conjecture that the rate-of-profit distribution is consistent with a parameter-mix of a ratio of normal variates with means and variances that depend on a firm size parameter that is distributed according to a power-law.
Long-term cost-effectiveness of disease management in systolic heart failure.
Miller, George; Randolph, Stephen; Forkner, Emma; Smith, Brad; Galbreath, Autumn Dawn
2009-01-01
Although congestive heart failure (CHF) is a primary target for disease management programs, previous studies have generated mixed results regarding the effectiveness and cost savings of disease management when applied to CHF. We estimated the long-term impact of systolic heart failure disease management from the results of an 18-month clinical trial. We used data generated from the trial (starting population distributions, resource utilization, mortality rates, and transition probabilities) in a Markov model to project results of continuing the disease management program for the patients' lifetimes. Outputs included distribution of illness severity, mortality, resource consumption, and the cost of resources consumed. Both cost and effectiveness were discounted at a rate of 3% per year. Cost-effectiveness was computed as cost per quality-adjusted life year (QALY) gained. Model results were validated against trial data and indicated that, over their lifetimes, patients experienced a lifespan extension of 51 days. Combined discounted lifetime program and medical costs were $4850 higher in the disease management group than the control group, but the program had a favorable long-term discounted cost-effectiveness of $43,650/QALY. These results are robust to assumptions regarding mortality rates, the impact of aging on the cost of care, the discount rate, utility values, and the targeted population. Estimation of the clinical benefits and financial burden of disease management can be enhanced by model-based analyses to project costs and effectiveness. Our results suggest that disease management of heart failure patients can be cost-effective over the long term.
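The projection technique is a standard discounted Markov cohort model. A minimal sketch with hypothetical states, transition probabilities, costs, and utilities (not the trial's values) shows the mechanics; the cost-effectiveness ratio is then the difference in discounted cost between intervention and control cohorts divided by the difference in QALYs:

```python
import numpy as np

# Toy 3-state monthly Markov cohort model: alive-mild, alive-severe, dead.
# Transition matrix, costs, and utilities are all hypothetical.
P = np.array([[0.96, 0.03, 0.01],
              [0.05, 0.92, 0.03],
              [0.00, 0.00, 1.00]])
cost = np.array([300.0, 1200.0, 0.0])      # $ per monthly cycle by state
utility = np.array([0.80, 0.55, 0.0])      # QALY weight by state
disc = (1 + 0.03) ** (-1 / 12)             # 3%/yr discount, applied monthly

state = np.array([0.7, 0.3, 0.0])          # starting cohort distribution
total_cost = qalys = 0.0
for cycle in range(12 * 40):               # lifetime horizon (40 years)
    d = disc ** cycle
    total_cost += d * state @ cost
    qalys += d * state @ utility / 12.0    # utility accrued per month
    state = state @ P                      # advance the cohort one cycle

print(f"discounted cost = ${total_cost:,.0f}, QALYs = {qalys:.2f}")
```

Running this once with control-arm parameters and once with intervention-arm parameters, then forming (cost difference)/(QALY difference), yields the $/QALY figure of merit quoted above.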
Combustion Of Porous Graphite Particles In Oxygen Enriched Air
NASA Technical Reports Server (NTRS)
Delisle, Andrew J.; Miller, Fletcher J.; Chelliah, Harsha K.
2003-01-01
Combustion of solid fuel particles has many important applications, including power generation and space propulsion systems. The current models available for describing the combustion process of these particles, especially porous solid particles, include various simplifying approximations. One of the most limiting approximations is the lumping of the physical properties of the porous fuel with the heterogeneous chemical reaction rate constants [1]. The primary objective of the present work is to develop a rigorous modeling approach that can decouple such physical and chemical effects from the global heterogeneous reaction rates. For the purpose of validating this model, experiments with porous graphite particles of varying sizes and porosities are being performed under normal and microgravity conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenkranz, Joshua-Benedict; Brancucci Martinez-Anido, Carlo; Hodge, Bri-Mathias
Solar power generation, unlike conventional forms of electricity generation, has higher variability and uncertainty in its output because solar plant output is strongly impacted by weather. As the penetration rate of solar capacity increases, grid operators are increasingly concerned about accommodating the increased variability and uncertainty that solar power provides. This paper illustrates the impacts of increasing solar power penetration on the ramping of conventional electricity generators by simulating the operation of the Independent System Operator -- New England power system. A production cost model was used to simulate the power system under five different scenarios, one without solar power and four with increasing solar power penetrations up to 18%, in terms of annual energy. The impact of solar power is analyzed on six different temporal intervals, including hourly and multi-hourly (2- to 6-hour) ramping. The results show how the integration of solar power increases the 1- to 6-hour ramping events of the net load (electric load minus solar power). The study also analyzes the impact of solar power on the distribution of multi-hourly ramping events of fossil-fueled generators and shows increasing 1- to 6-hour ramping events for all different generators. Generators with higher ramp rates such as gas and oil turbine and internal combustion engine generators increased their ramping events by 200% to 280%. For other generator types--including gas combined-cycle generators, coal steam turbine generators, and gas and oil steam turbine generators--more and higher ramping events occurred as well for higher solar power penetration levels.
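The multi-hour ramp metric used in such studies is simply the change in a series over a sliding window. A minimal sketch on synthetic hourly load and solar profiles (the shapes and magnitudes are invented, not ISO-NE data):

```python
import numpy as np

def ramp_events(series, window_hours):
    """Ramps over a given horizon from an hourly series: x[t+w] - x[t]."""
    s = np.asarray(series, dtype=float)
    return s[window_hours:] - s[:-window_hours]

# Hypothetical hourly profiles for two days
hours = np.arange(48)
load = 10000 + 2500 * np.sin(2 * np.pi * (hours - 6) / 24)    # MW
solar = np.clip(3000 * np.sin(2 * np.pi * (hours - 6) / 24), 0, None)
net_load = load - solar                                       # what must be met

for w in (1, 3, 6):
    r = ramp_events(net_load, w)
    print(f"{w}-hour ramps: max up = {r.max():7.0f} MW, "
          f"max down = {r.min():7.0f} MW")
```

Comparing the ramp distributions of `load` and `net_load` shows how adding solar steepens the evening upward ramps that conventional generators must follow.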
Nanobubbles: An Effective Way to Study Gas-Generating Catalysis on a Single Nanoparticle.
Li, Shuping; Du, Ying; He, Ting; Shen, Yangbin; Bai, Chuang; Ning, Fandi; Hu, Xin; Wang, Wenhui; Xi, Shaobo; Zhou, Xiaochun
2017-10-11
Gas-generating catalysis is important to many energy-related research fields, such as photocatalytic water splitting, water electrolysis, etc. The technique of single-nanoparticle catalysis is an effective way to search for highly active nanocatalysts and elucidate the reaction mechanism. However, gas-generating catalysis remains difficult to investigate at the single-nanoparticle level because product gases, such as H 2 and O 2 , are difficult to detect on an individual nanoparticle. Here, we successfully find that nanobubbles can be used to study the gas-generating catalysis, i.e., H 2 generation from formic acid dehydrogenation on a single Pd-Ag nanoplate, with a high time resolution (50 ms) via dark-field microscopy. The research reveals that the nanobubble evolution process includes nucleation time and lifetime. The nucleation rate of nanobubbles is proportional to the catalytic activity of a single nanocatalyst. The relationship between the catalytic activity and the nucleation rate is quantitatively described by a mathematical model, which shows that an onset reaction rate (r onset ) exists for the generation of nanobubbles on a single Pd-Ag nanoplate. The research also reveals that a Pd-Ag nanoplate with larger size usually has a higher activity. However, some large-sized ones still have low activities, indicating that the size of the Pd-Ag nanoplate is not the only key factor for the activity. Notably, further research shows that Pd content is the key factor for the activity of single Pd-Ag nanoplates with similar size. The methodology and knowledge acquired from this research are also applicable to other important gas-generating catalysis reactions at the single-nanoparticle level.
NASA Astrophysics Data System (ADS)
Puram, Rakesh
The Renewable Portfolio Standard (RPS) has become a popular mechanism for states to promote renewable energy, and its popularity has spurred a potential bill within Congress for a nationwide federal RPS. While RPS benefits have been touted by several groups, it also has detractors. Among the concerns is that RPS standards could raise electricity rates, given that renewable energy is costlier than traditional fossil fuels. The evidence on the impact of RPS on electricity prices is murky at best: complex models by NREL and USEIA rely on computer programs with several assumptions, which makes empirical study difficult, and they predict only slight increases in electricity rates associated with RPS standards. Recent theoretical models and empirical studies have found price increases, but often fail to comprehensively include several sets of variables, which could confound results. Using a combination of past papers and studies to triangulate variables, this study develops both a rigorous fixed effects regression model and a theoretical framework to explain the results. The study analyzes state-level panel data from 2002 to 2008 to estimate the effect of RPS on residential, commercial, and industrial electricity prices, controlling for several factors including the amount of electricity generation from renewable and non-renewable sources, customer incentives for renewable energy, macroeconomic and demographic indicators, and the fuel price mix. The study contrasts several regressions to illustrate important relationships and how the inclusion and exclusion of various variables affect the estimates. Regression results indicate that the presence of RPS within a state increases commercial and residential electricity rates but has no discernible effect on the industrial electricity rate. Although RPS tends to increase electricity prices, the magnitude of the effect is small. The models also indicate that renewable and non-renewable energy generation jointly affect residential, industrial, and commercial prices. In addition, coal price, personal income, and the number of net metering customers in a state affect commercial, industrial, and residential electricity rates. Two main policy implications stem from this study. First, while RPS has an impact on residential and commercial electricity rates, the magnitude is small, especially given the average consumption patterns of households and commercial customers. Second, given the significance of several explanatory variables in the theoretical model, it is important to discuss the relevance of RPS within the context of electricity sources, both renewable and non-renewable, demand-side programs, economic factors, and fuel costs.
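The kind of fixed-effects regression described here can be sketched with two-way (state and year) dummies and state-clustered standard errors. The data below are simulated placeholders, and the variable names merely stand in for the controls listed above:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical state-year panel, 2002-2008, with made-up covariates.
rng = np.random.default_rng(0)
states, years = [f"S{i}" for i in range(50)], range(2002, 2009)
df = pd.DataFrame([(s, y) for s in states for y in years],
                  columns=["state", "year"])
df["rps"] = (rng.random(len(df)) < 0.4).astype(int)   # RPS in force (0/1)
df["coal_price"] = rng.normal(50, 8, len(df))
df["income"] = rng.normal(40, 5, len(df))
df["res_price"] = (8 + 0.3 * df.rps + 0.02 * df.coal_price
                   + 0.01 * df.income + rng.normal(0, 0.5, len(df)))

# Two-way fixed effects via state and year dummies; cluster-robust SEs
fit = smf.ols("res_price ~ rps + coal_price + income + C(state) + C(year)",
              data=df).fit(cov_type="cluster",
                           cov_kwds={"groups": df["state"]})
print(fit.params[["rps", "coal_price", "income"]])
```

The coefficient on `rps` is the within-state price change associated with adopting a standard, net of common year shocks, which is the quantity the study's regressions estimate.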
Genetically Engineered Mouse Model of Diffuse Intrinsic Pontine Glioma as a Preclinical Tool
2012-09-01
Hydrocephalus mice were excluded from this calculation. With this particular experiment the hydrocephalus rate is 57% (due to the formation of...is completed. We have generated 10 tumors by injecting 14 mice and an example of one is described in the figure below. Hydrocephalus mice were...excluded from the analysis. The hydrocephalus rate was 51% for this experiment due to the formation of leptomeningeal tumor.
Implications of the method of capital cost payment on the weighted average cost of capital.
Boles, K E
1986-01-01
The author develops a theoretical and mathematical model, based on published financial management literature, to describe the cost of capital structure for health care delivery entities. This model is then used to generate the implications of changing the capital cost reimbursement mechanism from a cost basis to a prospective basis. The implications are that the cost of capital is increased substantially, the use of debt must be restricted, interest rates for borrowed funds will increase, and, initially, firms utilizing debt efficiently under cost-basis reimbursement will be restricted to the generation of funds from equity only under a prospective system. PMID:3525468
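The weighted average cost of capital at the center of the model is the value-weighted sum of the component costs. A minimal sketch with hypothetical figures illustrates the paper's implication that restricting debt and raising borrowing rates pushes the overall cost of capital up:

```python
def wacc(equity, debt, cost_equity, cost_debt, tax_rate=0.0):
    """Weighted average cost of capital. Debt interest may be tax-deductible;
    tax_rate = 0 approximates a not-for-profit provider."""
    v = equity + debt
    return (equity / v) * cost_equity + (debt / v) * cost_debt * (1 - tax_rate)

# Illustrative hospital under cost-based reimbursement:
# $60M equity at 14%, $40M debt at 9% (all figures assumed)
print(f"WACC = {wacc(60e6, 40e6, 0.14, 0.09):.2%}")   # 12.00%
# Prospective payment: less debt usable, higher borrowing rate (assumed)
print(f"WACC = {wacc(80e6, 20e6, 0.14, 0.11):.2%}")   # 13.40%
```

The second line reflects the paper's argument: when debt must be restricted and interest rates rise, the blended cost of capital increases even though the component costs change only modestly.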
Improving SysSim's Planetary Occurrence Rate Estimates
NASA Astrophysics Data System (ADS)
Ashby, Keir; Ragozzine, Darin; Hsu, Danley; Ford, Eric B.
2017-10-01
Kepler's catalog of thousands of transiting planet candidates enables statistical characterization of the underlying planet occurrence rates as a function of period and radius. Due to geometric factors and general noise in measurements, we know that many planets--especially those with a small radius and/or long period--were not observed by Kepler. To account for Kepler's detection criteria, Hsu et al. 2017 expanded on work in Lissauer et al. 2011 to develop the Planetary System Simulator, or "SysSim". SysSim uses a forward model to generate simulated catalogs of exoplanet systems, determines which of those simulated planets would have been seen by Kepler in the presence of uncertainties, and then compares those “observed planets” to those actually seen by Kepler. It then uses Approximate Bayesian Computation to infer the posterior probability distributions of the input parameters used to generate the forward model. In Hsu et al. 2017, we focused on matching the observed frequency of planets by solving for the underlying occurrence rate in each bin of a 2-dimensional grid of radius and period. After summarizing the results of Hsu et al. 2017, we show new results that investigate the effect on occurrence rates of including more accurate completeness products (from the Kepler DR25 analysis) in SysSim.
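A stripped-down version of the inference loop: draw an occurrence rate from a prior, forward-model a catalog, apply a detection filter, and accept draws whose summary statistic lands near the observed one. Star count, detection probability, observed count, and tolerance below are all invented, and the real SysSim is far richer than this single-bin toy:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy ABC rejection sampler for the occurrence rate f in one
# period-radius bin (all numbers hypothetical).
n_stars = 150000
p_detect = 0.05          # geometric transit probability x pipeline efficiency
n_observed = 90          # planets actually catalogued in this bin

accepted = []
for _ in range(20000):
    f = rng.uniform(0, 0.1)                       # prior on occurrence rate
    n_planets = rng.poisson(f * n_stars)          # forward model: draw planets
    n_seen = rng.binomial(n_planets, p_detect)    # apply detection criteria
    if abs(n_seen - n_observed) <= 3:             # ABC acceptance tolerance
        accepted.append(f)

post = np.array(accepted)
print(f"posterior f = {post.mean():.4f} +/- {post.std():.4f}")
```

Improving the completeness model corresponds to replacing `p_detect` with a more accurate, planet-dependent detection probability, which shifts the inferred posterior exactly as the DR25 analysis investigates.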
Hahn, Philip J; McIntyre, Cameron C
2010-06-01
Deep brain stimulation (DBS) of the subthalamic nucleus (STN) represents an effective treatment for medically refractory Parkinson's disease; however, understanding of its effects on basal ganglia network activity remains limited. We constructed a computational model of the subthalamopallidal network, trained it to fit in vivo recordings from parkinsonian monkeys, and evaluated its response to STN DBS. The network model was created with synaptically connected single-compartment biophysical models of STN and pallidal neurons, and stochastically defined inputs driven by cortical beta rhythms. A least-mean-square-error training algorithm was developed to parameterize network connections and minimize error when compared to experimental spike and burst rates in the parkinsonian condition. The output of the trained network was then compared to experimental data not used in the training process. We found that reducing the influence of the cortical beta input on the model generated activity that agreed well with recordings from normal monkeys. Further, during STN DBS in the parkinsonian condition the simulations reproduced the reduction in GPi bursting found in existing experimental data. The model also provided the opportunity to greatly expand analysis of GPi bursting activity, generating three major predictions. First, its reduction was proportional to the volume of STN activated by DBS. Second, GPi bursting decreased in a stimulation-frequency-dependent manner, saturating at values consistent with clinically therapeutic DBS. And third, ablating STN neurons, reported to generate similar therapeutic outcomes as STN DBS, also reduced GPi bursting. Our theoretical analysis of stimulation-induced network activity suggests that regularization of GPi firing is dependent on the volume of STN tissue activated and that a threshold level of burst reduction may be necessary for therapeutic effect.
NASA Astrophysics Data System (ADS)
Zuzeek, Yvette; Choi, Inchul; Uddi, Mruthunjaya; Adamovich, Igor V.; Lempert, Walter R.
2010-03-01
Pure rotational CARS thermometry is used to study low-temperature plasma assisted fuel oxidation kinetics in a repetitive nanosecond pulse discharge in ethene-air at stoichiometric and fuel lean conditions at 40 Torr pressure. Air and fuel-air mixtures are excited by a burst of high-voltage nanosecond pulses (peak voltage, 20 kV; pulse duration, ~ 25 ns) at a 40 kHz pulse repetition rate and a burst repetition rate of 10 Hz. The number of pulses in the burst is varied from a few pulses to a few hundred pulses. The results are compared with the previously developed hydrocarbon-air plasma chemistry model, modified to incorporate non-empirical scaling of the nanosecond discharge pulse energy coupled to the plasma with number density, as well as one-dimensional conduction heat transfer. Experimental time-resolved temperature, determined as a function of the number of pulses in the burst, is found to agree well with the model predictions. The results demonstrate that the heating rate in fuel-air plasmas is much faster compared with air plasmas, primarily due to energy release in exothermic reactions of fuel with O atoms generated by the plasma. It is found that the initial heating rate in fuel-air plasmas is controlled by the rate of radical (primarily O atoms) generation and is nearly independent of the equivalence ratio. At long burst durations, the heating rate in lean fuel air-mixtures is significantly reduced when all fuel is oxidized.
Entropy generation analysis for film boiling: A simple model of quenching
NASA Astrophysics Data System (ADS)
Lotfi, Ali; Lakzian, Esmail
2016-04-01
In this paper, quenching in high-temperature materials processing is modeled as a superheated isothermal flat plate. In this process, a liquid flows over a highly superheated surface for cooling, so the surface and the liquid are separated by a vapor layer formed where the liquid contacts the superheated surface; this is known as forced film boiling. As an objective, the distribution of entropy generation in laminar forced film boiling is obtained by similarity solution for the first time in the quenching process. The governing partial differential equations of laminar film boiling, including continuity, momentum, and energy, are reduced to ordinary differential equations, and a dimensionless equation for entropy generation inside the liquid boundary layer and vapor layer is obtained. The ODEs are then solved by applying the fourth-order Runge-Kutta method with a shooting procedure. Moreover, the Bejan number is used as a design criterion for a qualitative study of the rate of cooling, and the effects of plate speed on the quenching process are studied. It is observed that for high plate speeds the rate of cooling (heat transfer) is greater.
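The solution strategy (reduce the PDEs to a similarity ODE system, then integrate with fourth-order Runge-Kutta and a shooting correction on the unknown wall condition) can be illustrated on the classic Blasius boundary-layer equation, used here only as a structurally similar stand-in for the coupled film-boiling system:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Blasius equation: f''' + 0.5 * f * f'' = 0,
# with f(0) = f'(0) = 0 and f'(inf) = 1.
def rhs(eta, y):
    f, fp, fpp = y
    return [fp, fpp, -0.5 * f * fpp]

def shoot(fpp0, eta_max=10.0):
    """Residual of the far-field condition f'(inf) = 1 for a guessed f''(0)."""
    sol = solve_ivp(rhs, (0, eta_max), [0.0, 0.0, fpp0], rtol=1e-8)
    return sol.y[1, -1] - 1.0

fpp0 = brentq(shoot, 0.1, 1.0)        # wall shear satisfying the far-field BC
print(f"f''(0) = {fpp0:.5f}")         # classical value, approx 0.33206
```

The film-boiling problem adds a second (vapor) layer and interface matching conditions, but the shoot-and-correct structure is the same.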
Optimized decoy state QKD for underwater free space communication
NASA Astrophysics Data System (ADS)
Lopes, Minal; Sarwade, Nisha
Quantum cryptography (QC) is envisioned as a solution for global key distribution through fiber optic, free space and underwater optical communication due to its unconditional security. In view of this, this paper investigates an underwater free space quantum key distribution (QKD) model for enhanced transmission distance, secret key rates and security. It has been reported that secure underwater free space QKD is feasible in the clearest ocean water with sifted key rates up to 207 kbps. This paper extends that work by testing the performance of an optimized decoy state QKD protocol with an underwater free space communication model. The attenuation of photons, quantum bit error rate and sifted key generation rate of underwater quantum communication are obtained with vector radiative transfer theory and the Monte Carlo method. It is observed from the simulations that optimized decoy state QKD evidently enhances the underwater secret key transmission distance as well as the secret key rates.
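A back-of-the-envelope sifted-key-rate model captures why attenuation dominates: channel transmittance falls exponentially with distance. Every number below (repetition rate, mean photon number, water attenuation, detector efficiency) is an assumed placeholder rather than a value from the paper:

```python
def sifted_key_rate(rep_rate, mu, alpha_db_per_m, distance_m,
                    eta_det=0.5, sift=0.5):
    """Crude sifted-key-rate estimate for a BB84-style underwater link.

    rep_rate: pulse repetition rate [Hz]; mu: mean photons per pulse;
    alpha_db_per_m: water attenuation; eta_det: detector efficiency;
    sift: basis-sifting factor. All values are illustrative.
    """
    eta_ch = 10 ** (-alpha_db_per_m * distance_m / 10)   # channel transmittance
    return sift * rep_rate * mu * eta_ch * eta_det

# Assumed ~0.2 dB/m attenuation in the clearest ocean water (blue-green window)
for d in (50, 100, 150):
    r = sifted_key_rate(1e8, 0.5, 0.2, d)
    print(f"{d:3d} m: {r / 1e3:8.1f} kbps")
```

Each additional 50 m costs 10 dB here, so the rate drops by a factor of ten; decoy-state optimization attacks the `mu` and security-overhead terms rather than this channel loss.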
NASA Astrophysics Data System (ADS)
Vázquez, Héctor; Troisi, Alessandro
2013-11-01
We investigate the process of exciton dissociation in ordered and disordered model donor/acceptor systems and describe a method to calculate exciton dissociation rates. We consider a one-dimensional system with Frenkel states in the donor material and states where charge transfer has taken place between donor and acceptor. We introduce a Green's function approach to calculate the generation rates of charge-transfer states. For disorder in the Frenkel states we find a clear exponential dependence of charge dissociation rates with exciton-interface distance, with a distance decay constant β that increases linearly with the amount of disorder. Disorder in the parameters that describe (final) charge-transfer states has little effect on the rates. Exciton dissociation invariably leads to partially separated charges. In all cases final states are “hot” charge-transfer states, with electron and hole located far from the interface.
An econometric model of the U.S. secondary copper industry: Recycling versus disposal
Slade, M.E.
1980-01-01
In this paper, a theoretical model of secondary recovery is developed that integrates microeconomic theories of production and cost with a dynamic model of scrap generation and accumulation. The model equations are estimated for the U.S. secondary copper industry and used to assess the impacts that various policies and future events have on copper recycling rates. The alternatives considered are: subsidies for secondary production, differing energy costs, and varying ore quality in primary production.
Microcanonical model for interface formation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rucklidge, A.; Zaleski, S.
1988-04-01
We describe a new cellular automaton model which allows us to simulate separation of phases. The model is an extension of existing cellular automata for the Ising model, such as Q2R. It conserves particle number and presents the qualitative features of spinodal decomposition. The dynamics is deterministic and does not require random number generators. The spins exchange energy with small local reservoirs or demons. The rate of relaxation to equilibrium is investigated, and the results are compared to the Lifshitz-Slyozov theory.
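The energy-exchange idea can be sketched with a Creutz-style demon update for an Ising lattice: a spin flips only if its local demon can absorb or supply the energy difference, so the dynamics is deterministic and energy-conserving. Note that the published model additionally conserves particle number through exchange moves, which this toy does not; randomness is used only for the initial state:

```python
import numpy as np

L = 64
rng = np.random.default_rng(3)
spin = rng.choice([-1, 1], size=(L, L))    # random initial configuration
demon = np.zeros((L, L), dtype=int)        # local reservoir energies >= 0

def sweep(spin, demon):
    """One deterministic microcanonical sweep with per-site demons."""
    for i in range(L):
        for j in range(L):
            nb = (spin[(i + 1) % L, j] + spin[(i - 1) % L, j]
                  + spin[i, (j + 1) % L] + spin[i, (j - 1) % L])
            dE = 2 * spin[i, j] * nb       # energy cost of flipping (i, j)
            if dE <= demon[i, j]:          # demon can pay for (or absorb) it
                demon[i, j] -= dE
                spin[i, j] *= -1

for step in range(100):
    sweep(spin, demon)
print("mean demon energy:", demon.mean())  # acts as a local temperature proxy
```

As in the abstract's model, total energy (spins plus demons) is exactly conserved, and the mean demon energy serves as a microcanonical thermometer while domains coarsen.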
Efficient Actor-Critic Algorithm with Hierarchical Model Learning and Planning.
Zhong, Shan; Liu, Quan; Fu, QiMing
2016-01-01
To improve the convergence rate and the sample efficiency, two efficient learning methods AC-HMLP and RAC-HMLP (AC-HMLP with ℓ 2 -regularization) are proposed by combining actor-critic algorithm with hierarchical model learning and planning. The hierarchical models consisting of the local and the global models, which are learned at the same time during learning of the value function and the policy, are approximated by local linear regression (LLR) and linear function approximation (LFA), respectively. Both the local model and the global model are applied to generate samples for planning; the former is used only if the state-prediction error does not surpass the threshold at each time step, while the latter is utilized at the end of each episode. The purpose of taking both models is to improve the sample efficiency and accelerate the convergence rate of the whole algorithm through fully utilizing the local and global information. Experimentally, AC-HMLP and RAC-HMLP are compared with three representative algorithms on two Reinforcement Learning (RL) benchmark problems. The results demonstrate that they perform best in terms of convergence rate and sample efficiency.
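The "learn a model, then plan with simulated samples" idea that AC-HMLP elaborates is easiest to see in tabular Dyna-Q form. The sketch below uses a toy chain MDP and makes no attempt at the paper's actor-critic machinery, LLR/LFA approximators, or hierarchical local/global models:

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 10, 2
Q = np.zeros((n_states, n_actions))
model = {}                                  # learned model: (s, a) -> (r, s')
alpha, gamma, eps, n_plan = 0.1, 0.95, 0.1, 20

def env_step(s, a):                         # toy deterministic chain MDP
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    return (1.0 if s2 == n_states - 1 else 0.0), s2

s = 0
for t in range(5000):
    a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
    r, s2 = env_step(s, a)
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])   # direct RL update
    model[(s, a)] = (r, s2)                                  # model learning
    for _ in range(n_plan):                                  # planning updates
        (ps, pa), (pr, ps2) = list(model.items())[rng.integers(len(model))]
        Q[ps, pa] += alpha * (pr + gamma * Q[ps2].max() - Q[ps, pa])
    s = s2 if s2 != n_states - 1 else 0     # reset at the goal

print(np.round(Q.max(axis=1), 2))
```

The planning loop is what buys sample efficiency: each real transition is reused many times through the learned model, the same motivation given for AC-HMLP's local and global models.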
Mesoscopic modeling of DNA denaturation rates: Sequence dependence and experimental comparison
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dahlen, Oda, E-mail: oda.dahlen@ntnu.no; Erp, Titus S. van, E-mail: titus.van.erp@ntnu.no
Using rare event simulation techniques, we calculated DNA denaturation rate constants for a range of sequences and temperatures for the Peyrard-Bishop-Dauxois (PBD) model with two different parameter sets. We studied a larger variety of sequences compared to previous studies that only consider DNA homopolymers and DNA sequences containing an equal amount of weak AT- and strong GC-base pairs. Our results show that, contrary to previous findings, an even distribution of the strong GC-base pairs does not always result in the fastest possible denaturation. In addition, we applied an adaptation of the PBD model to study hairpin denaturation for which experimental data are available. This is the first quantitative study in which dynamical results from the mesoscopic PBD model have been compared with experiments. Our results show that present parameterized models, although giving good results regarding thermodynamic properties, overestimate denaturation rates by orders of magnitude. We believe that our dynamical approach is, therefore, an important tool for verifying DNA models and for developing next generation models that have higher predictive power than present ones.
Chromosome damage evolution after low and high LET irradiation
NASA Astrophysics Data System (ADS)
Andreev, Sergey; Eidelman, Yuri
Ionizing radiation induces DNA and chromatin lesions which are converted to chromosome lesions detected in the first post-irradiation mitosis by classic cytogenetic techniques as chromosomal aberrations (CAs). These techniques also allow monitoring of delayed aberrations observed after many cell generations post-irradiation - the manifestation of the chromosomal instability (CIN) phenotype. The problem discussed is how to predict the time evolution from initial to delayed DNA/chromosome damage. To address this question, in the present work a mechanistic model of CIN is elaborated which integrates the pathways of (i) DNA damage induction and its conversion to chromosome lesions (aberrations) and (ii) lesion transmission and generation through cell cycles. Delayed aberrations in subsequent cycles are formed in the model through two pathways: de novo DNA damage generation and CA transmission from previous cycles. The DNA damage generation rate is assumed to consist of bystander and non-bystander components. Bystander signals impact all cells roughly equally, whereas the non-bystander DSB generation rate differs for the descendants of unirradiated and irradiated cells. Monte Carlo simulation of the processes underlying CIN allows prediction of the time evolution of initial radiation-induced damage - the kinetics curve for delayed unstable aberrations (dicentrics) together with the dose response and RBE as a function of time after high- vs low-LET irradiation. Experimental data for radiation-induced CIN in TK6 lymphoblastoid cells and human lymphocytes irradiated with low-LET (gamma) and high-LET (Fe, C) radiation are analyzed on the basis of the proposed model. One conclusion is that without bystander signaling, taking into account only the initial DNA damage and non-bystander DSB generation, it is impossible to describe the available experimental data for high-LET-induced CIN. The exact contribution of bystander effects for high vs low LET remains unknown, but the relative contribution may be assessed at large times after the initial acute irradiation. The RBE for delayed aberrations depends on LET, time and cell line, which probably reflects a genetic background for the bystander component. The proposed modelling approach creates a basis for integrating the complex network of bystander/inflammatory signaling into a systems-level platform for quantification of radiation-induced CIN.
Manfredi, Simone; Niskanen, Antti; Christensen, Thomas H
2009-05-01
The current landfill gas (LFG) management (based on flaring and utilization for heat generation of the collected gas) and three potential future gas management options (LFG flaring, heat generation and combined heat and power generation) for the Old Ammässuo landfill (Espoo, Finland) were evaluated by life-cycle assessment modeling. The evaluation accounts for all resource utilization and emissions to the environment related to the gas generation and management for a life-cycle time horizon of 100 yr. The assessment criteria comprise standard impact categories (global warming, photo-chemical ozone formation, stratospheric ozone depletion, acidification and nutrient enrichment) and toxicity-related impact categories (human toxicity via soil, via water and via air, eco-toxicity in soil and in water chronic). The results of the life-cycle impact assessment show that disperse emissions of LFG from the landfill surface determine the highest potential impacts in terms of global warming, stratospheric ozone depletion, and human toxicity via soil. Conversely, the impact potentials estimated for other categories are numerically-negative when the collected LFG is utilized for energy generation, demonstrating that net environmental savings can be obtained. Such savings are proportional to the amount of gas utilized for energy generation and the gas energy recovery efficiency achieved, which thus have to be regarded as key parameters. As a result, the overall best performance is found for the heat generation option - as it has the highest LFG utilization/energy recovery rates - whereas the worst performance is estimated for the LFG flaring option, as no LFG is here utilized for energy generation. Therefore, to reduce the environmental burdens caused by the current gas management strategy, more LFG should be used for energy generation. This inherently requires a superior LFG capture rate that, in addition, would reduce fugitive emissions of LFG from the landfill surface, bringing further environmental benefits.
2016-01-01
Muscle contractions are generated by cyclical interactions of myosin heads with actin filaments to form the actomyosin complex. To simulate actomyosin complex stable states, mathematical models usually define an energy landscape with a corresponding number of wells. The jumps between these wells are defined through rate constants. Almost all previous models assign these wells an infinite sharpness by imposing a relatively simple expression for the detailed balance, i.e., the ratio of the rate constants depends exponentially on the sole myosin elastic energy. Physically, this assumption corresponds to neglecting thermal fluctuations in the actomyosin complex stable states. By comparing three mathematical models, we examine the extent to which this hypothesis affects muscle model predictions at the single cross-bridge, single fiber, and organ levels in a ceteris paribus analysis. We show that including fluctuations in stable states allows the lever arm of the myosin to easily and dynamically explore all possible minima in the energy landscape, generating several backward and forward jumps between states during the lifetime of the actomyosin complex, whereas the infinitely sharp minima case is characterized by fewer jumps between states. Moreover, the analysis predicts that thermal fluctuations enable a more efficient contraction mechanism, in which a higher force is sustained by fewer attached cross-bridges. PMID:27626630
Tirone, Felice; Farioli-Vecchioli, Stefano; Micheli, Laura; Ceccarelli, Manuela; Leonardi, Luca
2013-01-01
Within the hippocampal circuitry, the basic function of the dentate gyrus is to transform the memory input coming from the enthorinal cortex into sparse and categorized outputs to CA3, in this way separating related memory information. New neurons generated in the dentate gyrus during adulthood appear to facilitate this process, allowing a better separation between closely spaced memories (pattern separation). The evidence underlying this model has been gathered essentially by ablating the newly adult-generated neurons. This approach, however, does not allow monitoring of the integration of new neurons into memory circuits and is likely to set in motion compensatory circuits, possibly leading to an underestimation of the role of new neurons. Here we review the background of the basic function of the hippocampus and of the known properties of new adult-generated neurons. In this context, we analyze the cognitive performance in mouse models generated by us and others, with modified expression of the genes Btg2 (PC3/Tis21), Btg1, Pten, BMP4, etc., where new neurons underwent a change in their differentiation rate or a partial decrease of their proliferation or survival rate rather than ablation. The effects of these modifications are equal or greater than full ablation, suggesting that the architecture of circuits, as it unfolds from the interaction between existing and new neurons, can have a greater functional impact than the sheer number of new neurons. We propose a model which attempts to measure and correlate the set of cellular changes in the process of neurogenesis with the memory function. PMID:23734097
Cascading of Fluctuations in Interdependent Energy Infrastructures. Gas-Grid Coupling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chertkov, Michael; Lebedev, Vladimir; Backhaus, Scott N.
2014-09-05
The revolution of hydraulic fracturing has dramatically increased the supply and lowered the cost of natural gas in the United States driving an expansion of natural gas-fired generation capacity in many electrical grids. Unrelated to the natural gas expansion, lower capital costs and renewable portfolio standards are driving an expansion of intermittent renewable generation capacity such as wind and photovoltaic generation. These two changes may potentially combine to create new threats to the reliability of these interdependent energy infrastructures. Natural gas-fired generators are often used to balance the fluctuating output of wind generation. However, the time-varying output of these generators results in time-varying natural gas burn rates that impact the pressure in interstate transmission pipelines. Fluctuating pressure impacts the reliability of natural gas deliveries to those same generators and the safety of pipeline operations. We adopt a partial differential equation model of natural gas pipelines and use this model to explore the effect of intermittent wind generation on the fluctuations of pressure in natural gas pipelines. The mean square pressure fluctuations are found to grow linearly in time with points of maximum deviation occurring at the locations of flow reversals.
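The headline result, mean-square pressure fluctuations growing linearly in time, already appears in a toy linepack balance driven by an uncorrelated balancing draw. The sketch below is a deliberately crude lumped model with invented parameters, not the report's PDE pipeline model:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy linepack mass balance: pressure responds to the imbalance between
# scheduled pipeline inflow and a gas generator's fluctuating draw while it
# balances wind. All parameters are illustrative assumptions.
dt = 60.0                     # time step [s]
c = 1e-5                      # pressure response per kg of net imbalance [bar/kg]
inflow = 50.0                 # scheduled inflow [kg/s]
n_steps, n_runs = 1440, 500   # one day, ensemble of realizations

p = np.full(n_runs, 60.0)     # initial pressure [bar]
var = []
for k in range(n_steps):
    draw = inflow + rng.normal(0.0, 5.0, n_runs)   # zero-mean wind balancing
    p += c * (inflow - draw) * dt                  # linepack responds to imbalance
    var.append(p.var())

# Uncorrelated balancing steps make the pressure a random walk, so its
# variance grows linearly in time:
print(f"var at 6 h / var at 3 h = {var[359] / var[179]:.2f}  (~2 expected)")
```

The PDE model in the report resolves where along the pipeline the fluctuations concentrate (at flow reversals); this lumped toy only reproduces the linear-in-time growth of their magnitude.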
NASA Astrophysics Data System (ADS)
Wang, Rong; Moreno-Cruz, Juan; Caldeira, Ken
2017-05-01
Integrated assessment models are commonly used to generate optimal carbon prices based on an objective function that maximizes social welfare. Such models typically project an initially low carbon price that increases with time. This framework does not reflect the incentives of decision makers who are responsible for generating tax revenue. If a rising carbon price is to result in near-zero emissions, it must ultimately result in near-zero carbon tax revenue. That means that at some point, policy makers will be asked to increase the tax rate on carbon emissions to such an extent that carbon tax revenue will fall. Therefore, there is a risk that the use of a carbon tax to generate revenue could eventually create a perverse incentive to continue carbon emissions in order to provide a continued stream of carbon tax revenue. Using the Dynamic Integrated Climate Economy (DICE) model, we provide evidence that this risk is not a concern for the immediate future but that a revenue-generating carbon tax could create this perverse incentive as time goes on. This incentive becomes perverse at about year 2085 under the default configuration of DICE, but the timing depends on a range of factors including the cost of climate damages and the cost of decarbonizing the global energy system. While our study is based on a schematic model, it highlights the importance of considering a broader spectrum of incentives in studies using more comprehensive integrated assessment models. Our study demonstrates that the use of a carbon tax for revenue generation could potentially motivate implementation of such a tax today, but this source of revenue generation risks motivating continued carbon emissions far into the future.
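The perverse-incentive mechanism is a Laffer-type revenue curve: if emissions decline with the tax rate, revenue = tax x emissions must eventually peak and fall, and beyond the peak a revenue-motivated policy maker loses by tightening further. A toy sketch with an assumed iso-elastic abatement response (illustrative numbers, not DICE output):

```python
import numpy as np

e0 = 40.0                              # unabated emissions [GtCO2/yr] (assumed)

def emissions(tax):
    """Assumed abatement response: emissions fall exponentially with the tax."""
    return e0 * np.exp(-tax / 150.0)   # tax in $/tCO2; 150 is an assumed scale

taxes = np.linspace(0, 600, 601)
revenue = taxes * emissions(taxes) * 1e9 / 1e12   # $ trillions per year
peak = taxes[revenue.argmax()]
print(f"revenue peaks at a tax of ~${peak:.0f}/tCO2")
```

With this response the revenue maximum sits exactly at the assumed abatement scale ($150/tCO2 here); any tax above it raises less money, which is the point at which the fiscal incentive described above turns perverse.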
Frank, Steven A.
2010-01-01
We typically observe large-scale outcomes that arise from the interactions of many hidden, small-scale processes. Examples include age of disease onset, rates of amino acid substitutions, and composition of ecological communities. The macroscopic patterns in each problem often vary around a characteristic shape that can be generated by neutral processes. A neutral generative model assumes that each microscopic process follows unbiased or random stochastic fluctuations: random connections of network nodes; amino acid substitutions with no effect on fitness; species that arise or disappear from communities randomly. These neutral generative models often match common patterns of nature. In this paper, I present the theoretical background by which we can understand why these neutral generative models are so successful. I show where the classic patterns come from, such as the Poisson pattern, the normal or Gaussian pattern, and many others. Each classic pattern was often discovered by a simple neutral generative model. The neutral patterns share a special characteristic: they describe the patterns of nature that follow from simple constraints on information. For example, any aggregation of processes that preserves information only about the mean and variance attracts to the Gaussian pattern; any aggregation that preserves information only about the mean attracts to the exponential pattern; any aggregation that preserves information only about the geometric mean attracts to the power law pattern. I present a simple and consistent informational framework of the common patterns of nature based on the method of maximum entropy. This framework shows that each neutral generative model is a special case that helps to discover a particular set of informational constraints; those informational constraints define a much wider domain of non-neutral generative processes that attract to the same neutral pattern. PMID:19538344
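The constraint-to-pattern correspondence can be stated compactly: maximizing entropy subject to normalization and expectation constraints yields an exponential family, from which the named patterns follow as special cases. A sketch of the standard results, consistent with the paper's claims:

```latex
% Maximize S[p] = -\int p(x)\,\ln p(x)\,dx subject to \int p(x)\,dx = 1
% and \langle f_i(x) \rangle = F_i. The stationary solution is
\[
  p(x) \;\propto\; \exp\!\Big(-\sum_i \lambda_i f_i(x)\Big).
\]
% Special cases:
%   f(x) = x  (fixed mean, x > 0):
%       p(x) \propto e^{-\lambda x}                      (exponential)
%   f_1(x) = x,\; f_2(x) = x^2  (fixed mean and variance):
%       p(x) \propto e^{-\lambda_1 x - \lambda_2 x^2}    (Gaussian)
%   f(x) = \ln x  (fixed geometric mean):
%       p(x) \propto x^{-\lambda}                        (power law)
```

Each "neutral" generative model in the paper can be read as one way of imposing the corresponding informational constraint, which is why many distinct microscopic processes attract to the same macroscopic pattern.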
Rivera-Rivera, Carlos J; Montoya-Burgos, Juan I
2016-06-01
Phylogenetic inference artifacts can occur when sequence evolution deviates from assumptions made by the models used to analyze them. The combination of strong model assumption violations and highly heterogeneous lineage evolutionary rates can become problematic in phylogenetic inference, and lead to the well-described long-branch attraction (LBA) artifact. Here, we define an objective criterion for assessing lineage evolutionary rate heterogeneity among predefined lineages: the result of a likelihood ratio test between a model in which the lineages evolve at the same rate (homogeneous model) and a model in which different lineage rates are allowed (heterogeneous model). We implement this criterion in the algorithm Locus Specific Sequence Subsampling (LS³), aimed at reducing the effects of LBA in multi-gene datasets. For each gene, LS³ sequentially removes the fastest-evolving taxon of the ingroup and tests for lineage rate homogeneity until all lineages have uniform evolutionary rates. The sequences excluded from the homogeneously evolving taxon subset are flagged as potentially problematic. The software implementation provides the user with the possibility to remove the flagged sequences for generating a new concatenated alignment. We tested LS³ with simulations and two real datasets containing LBA artifacts: a nucleotide dataset regarding the position of Glires within mammals and an amino-acid dataset concerning the position of nematodes within bilaterians. The initially incorrect phylogenies were corrected in all cases upon removing data flagged by LS³. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
Modeling heart rate variability including the effect of sleep stages
NASA Astrophysics Data System (ADS)
Soliński, Mateusz; Gierałtowski, Jan; Żebrowski, Jan
2016-02-01
We propose a model for heart rate variability (HRV) of a healthy individual during sleep with the assumption that the heart rate variability is predominantly a random process. Autonomic nervous system activity has different properties during different sleep stages, and this affects many physiological systems including the cardiovascular system. Different properties of HRV can be observed during each particular sleep stage. We believe that taking into account the sleep architecture is crucial for modeling the human nighttime HRV. The stochastic model of HRV introduced by Kantelhardt et al. was used as the initial starting point. We studied the statistical properties of sleep in healthy adults, analyzing 30 polysomnographic recordings, which provided realistic information about sleep architecture. Next, we generated synthetic hypnograms and included them in the modeling of nighttime RR interval series. The results of standard HRV linear analysis and of nonlinear analysis (Shannon entropy, Poincaré plots, and multiscale multifractal analysis) show that—in comparison with real data—the HRV signals obtained from our model have very similar properties, in particular including the multifractal characteristics at different time scales. The model described in this paper is discussed in the context of normal sleep. However, its construction is such that it should allow to model heart rate variability in sleep disorders. This possibility is briefly discussed.
The Integrated Soil Erosion Risk Management Model of Central Java, Indonesia
NASA Astrophysics Data System (ADS)
Setiawan, M. A.; Stoetter, J.; Sartohadi, J.; Christanto, N.
2009-04-01
Many soil erosion models have been developed worldwide; each has its own advantages and assumptions rooted in the region for which it was originally developed. Ironically, in tropical countries, where rainfall intensity is higher than elsewhere, the soil erosion problem receives less attention. In Indonesia, owing to inadequate supporting data and methods, soil erosion management receives low priority in policy decisions. Hence, there is an increasing need to initiate and integrate a risk management model for soil erosion to prevent further land degradation in Indonesia. The main research objective is to generate a model that can analyze the soil erosion problem as a dynamic system. The model comprehensively considers four main aspects within the dynamic system analysis: soil erosion rate modelling, the tolerable soil erosion rate, the total soil erosion cost, and soil erosion management measures. The model is built from several software components: PC Raster for the soil erosion modelling, Powersim Constructor Ver. 2.5 as the tool for the dynamic system analysis, and Python Ver. 2.6.1 for the main graphical user interface. The first step in this research is identifying the most appropriate soil erosion model for Indonesia based on landscape, climate, and data availability. This model must require simple input data while still supporting process-based analysis. Using the soil erosion model results, the total soil erosion cost is calculated for both on-site and off-site effects and stated in Rupiah (Indonesian currency) and Dollars. This total is then used as an input parameter for the tolerable soil erosion rate, which in turn determines whether the soil erosion rate has exceeded the allowed value. If the soil erosion rate exceeds the tolerable rate, soil erosion management is applied based on cost-benefit analysis. The soil erosion management measures serve as a decision-making step for selecting the best alternative soil conservation method in a given area. Besides engineering and theoretical methods, local wisdom is also taken into account when defining the alternatives. As a prototype, the integrated model will be generated and simulated for the Serayu Watershed, Central Java, since this area has a serious soil erosion problem, mainly in the upstream area (Dieng area). Extensive monoculture plantation (potatoes) and very intensive soil tillage without proper soil conservation have accelerated soil erosion and depleted soil fertility. Potato productivity data (kg/ha) from 1997-2007 show a declining trend of approximately 8.2% per year. Meanwhile, fertilizer and pesticide consumption on agricultural land is increasing significantly every year, and the high erosion rate causes serious sedimentation problems downstream. These conditions serve as a case study for determining the elements at risk from soil erosion and the calculation method for the total soil erosion cost (on-site and off-site effects). Moreover, the Serayu Watershed consists of complex landforms that may have varying tolerable soil erosion rates.
In the future, this integrated model can provide valuable baseline data on the soil erosion hazard, in spatial and temporal form, including its total cost, the sustainable lifetime of a given piece of land or agricultural area, and the cost consequences of applying a given agricultural or soil management practice. Since the model gives spatially and temporally explicit results, local authorities can use it to run land use scenarios and assess their soil erosion impacts before applying them in reality. In practice, such an integrated model could improve local people's understanding of soil erosion, its processes and impacts, and how to manage it. Keywords: risk assessment, soil erosion, dynamic system, environmental valuation
A personality theory of U.S. migration geography.
Stetzer, F C
1985-01-01
"Neoclassical models of migration fail to account adequately for individual differences in propensity to migrate, rates of emigration from states and cities, and the generally high rates of population circulation common in the United States. This article proposes that migration propensity is related to an individual's personality. Using the facts that personality traits are generationally regenerative, both through inheritance and culture, and that the United States was settled in a series of migration waves from east to west, this theory predicts a spatial structuring of emigration rates which closely correspond to actual rates for states and major cities." excerpt
Evaluation of a total energy-rate sensor on a transport airplane
NASA Technical Reports Server (NTRS)
Ostroff, A. J.; Hueschen, R. M.; Hellbaum, R. F.; Belcastro, C. M.; Creedon, J. F.
1983-01-01
A sensor that measures the rate of change of the total energy of an airplane with respect to the airstream has been evaluated. The sensor consists of two cylindrical probes located on the fuselage of a transport airplane, an in-line acoustic filter, and a pressure-sensing altitude-rate transducer. Sections of this report include the sensor description and experimental configuration, frequency response tests, analytical model development, and flight test results for several airplane maneuvers. The results section includes time-history comparisons between data generated by the total energy rate sensor and calculated data derived from independent sources.
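For reference, the quantity such a sensor tracks is the rate of change of the airplane's specific energy, e = h + V^2/(2g), so that de/dt = h_dot + V*V_dot/g. A minimal numerical sketch of that relation (an illustration only, not the sensor's pneumatic processing; all signal names are hypothetical):

```python
import numpy as np

g = 9.81  # gravitational acceleration, m/s^2

def specific_energy_rate(t, h, V):
    """Rate of change of specific energy e = h + V^2/(2g), in m/s.

    t: time stamps (s); h: altitude (m); V: true airspeed (m/s).
    Derivatives are taken by central differences, a stand-in for
    the sensor's own (pneumatic) differentiation."""
    h_dot = np.gradient(h, t)
    V_dot = np.gradient(V, t)
    return h_dot + V * V_dot / g

# usage: a gentle climb with a mild acceleration
t = np.linspace(0.0, 60.0, 601)
h = 1000.0 + 2.0 * t     # 2 m/s climb rate
V = 100.0 + 0.1 * t      # slow airspeed increase
print(specific_energy_rate(t, h, V)[:3])   # ~3.02 m/s throughout
```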
NASA Astrophysics Data System (ADS)
Zolotovskii, I. O.; Korobko, D. A.; Sysolyatin, A. A.
2018-02-01
We consider a model of a dissipative four-wave-mixing, mode-locked fibre ring laser with an intracavity interferometer. The necessary conditions required for mode locking are presented. Pulse train generation is numerically simulated at different repetition rates and gain levels. Admissible ranges of parameter values for which successful mode locking is possible are found. It is shown that, in the case of normal resonator dispersion, a laser with an intracavity interferometer can generate a train of pulses with an energy much greater than in the case of anomalous dispersion.
Vance, Marina E; Pegues, Valerie; Van Montfrans, Schuyler; Leng, Weinan; Marr, Linsey C
2017-09-05
Three-dimensional (3D) printers are known to emit aerosols, but questions remain about their composition and the fundamental processes driving emissions. The objective of this work was to characterize the aerosol emissions from the operation of a fused-deposition modeling 3D printer. We modeled the time- and size-resolved emissions of submicrometer aerosols from the printer in a chamber study, gained insight into the chemical composition of emitted aerosols using Raman spectroscopy, and measured the potential for exposure to the aerosols generated by 3D printers under real-use conditions in a variety of indoor environments. The average aerosol emission rates ranged from ∼10^8 to ∼10^11 particles min^-1, and the rates varied over the course of a print job. Acrylonitrile butadiene styrene (ABS) filaments generated the largest number of aerosols, and wood-infused polylactic acid (PLA) filaments generated the smallest amount. The emission factors ranged from 6 × 10^8 to 6 × 10^11 per gram of printed part, depending on the type of filament used. For ABS, the Raman spectra of the filament and the printed part were indistinguishable, while the aerosol spectra lacked important peaks corresponding to styrene and acrylonitrile, which are both present in ABS. This observation suggests that the aerosols are not a result of volatilization and subsequent nucleation of ABS or of direct release of ABS aerosols.
NASA Astrophysics Data System (ADS)
Vilella, Kenny; Deschamps, Frédéric
2018-07-01
Thermal evolution of terrestrial planets is controlled by heat transfer through their silicate mantles. A suitable framework for modelling this heat transport is a system including bottom heating (from the core) and internal heating, for example, generated by secular cooling or by the decay of radioactive isotopes. The mechanism of heat transfer depends on the physical properties of the system. In systems where convection is able to operate, two different regimes are possible depending on the relative amount of bottom and internal heating. For moderate internal heating rates, the system is composed of active hot upwellings and cold downwellings. For large internal heating rates, the bottom heat flux becomes negative and the system is only composed of active cold downwellings. Here, we build theoretical scaling laws for both convective regimes following the approach of Vilella & Kaminski (2017), which links the surface heat flux and the temperature jump across both the top and the bottom thermal boundary layer (TBL) to the Rayleigh number and the dimensionless internal heating rate. Theoretical predictions are then verified against numerical simulations performed in 2-D and 3-D Cartesian geometry, and covering a large range of the parameter space. Our theoretical scaling laws are more successful in predicting the thermal structure of systems with large internal heating rates than that of systems with no or moderate internal heating. The differences between moderate and large internal heating rates are interpreted as differences in the mechanisms generating thermal instabilities. We identified three mechanisms: conductive growth of the TBL, instability impacting, and TBL erosion, the last two being present only for moderate internal heating rates, in which hot plumes are generated at the bottom of the system and are able to reach the surface. Finally, we apply our scaling laws to the evolution of the early Earth, proposing a new model for the cooling of the primordial magma ocean that reconciles geochemical observations and magma ocean dynamics.
NASA Astrophysics Data System (ADS)
Vilella, Kenny; Deschamps, Frederic
2018-04-01
Thermal evolution of terrestrial planets is controlled by heat transfer through their silicate mantles. A suitable framework for modelling this heat transport is a system including bottom heating (from the core) and internal heating, e.g., generated by secular cooling or by the decay of radioactive isotopes. The mechanism of heat transfer depends on the physical properties of the system. In systems where convection is able to operate, two different regimes are possible depending on the relative amount of bottom and internal heating. For moderate internal heating rates, the system is composed of active hot upwellings and cold downwellings. For large internal heating rates, the bottom heat flux becomes negative and the system is only composed of active cold downwellings. Here, we build theoretical scaling laws for both convective regimes following the approach of Vilella & Kaminski (2017), which links the surface heat flux and the temperature jump across both the top and bottom thermal boundary layer (TBL) to the Rayleigh number and the dimensionless internal heating rate. Theoretical predictions are then verified against numerical simulations performed in 2D and 3D-Cartesian geometry, and covering a large range of the parameter space. Our theoretical scaling laws are more successful in predicting the thermal structure of systems with large internal heating rates than that of systems with no or moderate internal heating. The differences between moderate and large internal heating rates are interpreted as differences in the mechanisms generating thermal instabilities. We identified three mechanisms: conductive growth of the TBL, instability impacting, and TBL erosion, the last two being present only for moderate internal heating rates, in which hot plumes are generated at the bottom of the system and are able to reach the surface. Finally, we apply our scaling laws to the evolution of the early Earth, proposing a new model for the cooling of the primordial magma ocean that reconciles geochemical observations and magma ocean dynamics.
A forward model-based validation of cardiovascular system identification
NASA Technical Reports Server (NTRS)
Mukkamala, R.; Cohen, R. J.
2001-01-01
We present a theoretical evaluation of a cardiovascular system identification method that we previously developed for the analysis of beat-to-beat fluctuations in noninvasively measured heart rate, arterial blood pressure, and instantaneous lung volume. The method provides a dynamical characterization of the important autonomic and mechanical mechanisms responsible for coupling the fluctuations (inverse modeling). To carry out the evaluation, we developed a computational model of the cardiovascular system capable of generating realistic beat-to-beat variability (forward modeling). We applied the method to data generated from the forward model and compared the resulting estimated dynamics with the actual dynamics of the forward model, which were either precisely known or easily determined. We found that the estimated dynamics corresponded to the actual dynamics and that this correspondence was robust to forward model uncertainty. We also demonstrated the sensitivity of the method in detecting small changes in parameters characterizing autonomic function in the forward model. These results provide confidence in the performance of the cardiovascular system identification method when applied to experimental data.
Spacecraft software training needs assessment research, appendices
NASA Technical Reports Server (NTRS)
Ratcliff, Shirley; Golas, Katharine
1990-01-01
The appendices to the previously reported study are presented: statistical data from task rating worksheets; SSD references; survey forms; fourth generation language, a powerful, long-term solution to maintenance cost; task list; methodology; SwRI's instructional systems development model; relevant research; and references.
A Weight of Evidence Framework for Environmental Assessments: Inferring Quantities
Environmental assessments require the generation of quantitative parameters such as degradation rates and assessment products may be quantities such as criterion values or magnitudes of effects. When multiple data sets or outputs of multiple models are available, it may be appro...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dawson, S.; Lewis, I. M.
One of the simplest extensions of the Standard Model (SM) is the addition of a scalar gauge singlet, S. If S is not forbidden by a symmetry from mixing with the Standard Model Higgs boson, the mixing will generate non-SM rates for Higgs production and decays. Generally, there could also be unknown high energy physics that generates additional effective low energy interactions. We show that interference effects between the scalar resonance of the singlet model and the effective field theory (EFT) operators can have significant effects in the Higgs sector. Here, we examine a non-Z2-symmetric scalar singlet model and demonstrate that a fit to the 125 GeV Higgs boson couplings and to limits on high mass resonances, S, exhibits an interesting structure and possibly large cancellations between the resonance contribution and the new EFT interactions, which invalidate conclusions based on the renormalizable singlet model alone.
Modeling and performance assessment in QinetiQ of EO and IR airborne reconnaissance systems
NASA Astrophysics Data System (ADS)
Williams, John W.; Potter, Gary E.
2002-11-01
QinetiQ are the technical authority responsible for specifying the performance requirements for the procurement of airborne reconnaissance systems, on behalf of the UK MoD. They are also responsible for acceptance of delivered systems, overseeing and verifying the installed system performance as predicted and then assessed by the contractor. Measures of functional capability are central to these activities. The conduct of these activities utilises the broad technical insight and wide range of analysis tools and models available within QinetiQ. This paper focuses on the tools, methods and models that are applicable to systems based on EO and IR sensors. The tools, methods and models are described, and representative output for systems that QinetiQ has been responsible for is presented. The principal capability applicable to EO and IR airborne reconnaissance systems is the STAR (Simulation Tools for Airborne Reconnaissance) suite of models. STAR generates predictions of performance measures such as GRD (Ground Resolved Distance) and GIQE (General Image Quality Equation) NIIRS (National Imagery Interpretability Rating Scale). It also generates images representing sensor output, using the scene generation software CAMEO-SIM and the imaging sensor model EMERALD. The simulated image 'quality' is fully correlated with the predicted non-imaging performance measures. STAR also generates image and table data that is compliant with STANAG 7023, which may be used to test ground station functionality.
Athens, Jessica K.; Remington, Patrick L.; Gangnon, Ronald E.
2015-01-01
Objectives The University of Wisconsin Population Health Institute has published the County Health Rankings since 2010. These rankings use population-based data to highlight health outcomes and the multiple determinants of these outcomes and to encourage in-depth health assessment for all United States counties. A significant methodological limitation, however, is the uncertainty of rank estimates, particularly for small counties. To address this challenge, we explore the use of longitudinal and pooled outcome data in hierarchical Bayesian models to generate county ranks with greater precision. Methods In our models we used pooled outcome data for three measure groups: (1) Poor physical and poor mental health days; (2) percent of births with low birth weight and fair or poor health prevalence; and (3) age-specific mortality rates for nine age groups. We used the fixed and random effects components of these models to generate posterior samples of rates for each measure. We also used time-series data in longitudinal random effects models for age-specific mortality. Based on the posterior samples from these models, we estimate ranks and rank quartiles for each measure, as well as the probability of a county ranking in its assigned quartile. Rank quartile probabilities for univariate, joint outcome, and/or longitudinal models were compared to assess improvements in rank precision. Results The joint outcome model for poor physical and poor mental health days resulted in improved rank precision, as did the longitudinal model for age-specific mortality rates. Rank precision for low birth weight births and fair/poor health prevalence based on the univariate and joint outcome models were equivalent. Conclusion Incorporating longitudinal or pooled outcome data may improve rank certainty, depending on characteristics of the measures selected. For measures with different determinants, joint modeling neither improved nor degraded rank precision. This approach suggests a simple way to use existing information to improve the precision of small-area measures of population health. PMID:26098858
Effect of selection for growth rate on relative growth in rabbits.
Pascual, M; Pla, M; Blasco, A
2008-12-01
The effect of selection for growth rate on relative growth of the rabbit body components was studied. Animals from the 18th generation of a line selected for growth rate were compared with a contemporary control group formed with offspring of embryos that were frozen at the seventh generation of selection of the same line. A total of 313 animals were slaughtered at 4, 9, 13, 20, and 40 wk old. The offal, organs, tissues, and retail cuts were weighed, and several carcass linear measurements were recorded. Huxley's allometric equations relating the weights of the components with respect to BW were fitted. Butterfield's quadratic equations relating the degree of maturity of the components and the degree of maturity of BW were also fitted. In most of the components studied, both models lead to similar patterns of growth. Blood was isometric or early maturing and skin was late maturing or isometric depending on the use of Huxley's or Butterfield's model. Full gastrointestinal tract, liver, kidneys, thoracic viscera, and head were early maturing, and the chilled carcass and reference carcass were late maturing. The retail cuts of the reference carcass showed isometry (forelegs) or late maturing growth (breast and ribs, loin, hind legs, and abdominal walls). Dissectible fat of the carcass and meat of the hind leg had a late development, whereas bone of the hind leg was early maturing. Lumbar circumference length was later maturing than the carcass length and thigh length. Sex did not affect the relative growth of most of the components. Butterfield's model showed that males had an earlier development of full gastrointestinal tract and later growth of kidneys than females. No effect of selection on the relative growth of any of the components studied was found, leading to similar patterns of growth and similar carcass composition at a given degree of maturity after 11 generations of selection for growth rate.
Forecasting Lightning Threat using Cloud-resolving Model Simulations
NASA Technical Reports Server (NTRS)
McCaul, E. W., Jr.; Goodman, S. J.; LaCasse, K. M.; Cecil, D. J.
2009-01-01
As numerical forecasts capable of resolving individual convective clouds become more common, it is of interest to see if quantitative forecasts of lightning flash rate density are possible, based on fields computed by the numerical model. Previous observational research has shown robust relationships between observed lightning flash rates and inferred updraft and large precipitation ice fields in the mixed phase regions of storms, and that these relationships might allow simulated fields to serve as proxies for lightning flash rate density. It is shown in this paper that two simple proxy fields do indeed provide reasonable and cost-effective bases for creating time-evolving maps of predicted lightning flash rate density, judging from a series of diverse simulation case study events in North Alabama for which Lightning Mapping Array data provide ground truth. One method is based on the product of upward velocity and the mixing ratio of precipitating ice hydrometeors, modeled as graupel only, in the mixed phase region of storms at the -15 °C level, while the second method is based on the vertically integrated amounts of ice hydrometeors in each model grid column. Each method can be calibrated by comparing domainwide statistics of the peak values of simulated flash rate proxy fields against domainwide peak total lightning flash rate density data from observations. Tests show that the first method is able to capture much of the temporal variability of the lightning threat, while the second method does a better job of depicting the areal coverage of the threat. A blended solution is designed to retain most of the temporal sensitivity of the first method, while adding the improved spatial coverage of the second. Weather Research and Forecast Model simulations of selected North Alabama cases show that this model can distinguish the general character and intensity of most convective events, and that the proposed methods show promise as a means of generating quantitatively realistic fields of lightning threat. However, because models tend to have more difficulty in correctly predicting the instantaneous placement of storms, forecasts of the detailed location of the lightning threat based on single simulations can be in error. Although these model shortcomings presently limit the precision of lightning threat forecasts from individual runs of current generation models, the techniques proposed herein should continue to be applicable as newer and more accurate physically-based model versions, physical parameterizations, initialization techniques and ensembles of cloud-allowing forecasts become available.
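Both proxy fields reduce to simple array operations on model output. A minimal sketch under assumed array names (updraft speed and graupel mixing ratio interpolated to the -15 °C level; density, total ice mixing ratio and layer thickness on model levels), with the calibration constants k1, k2 and the blend weight as hypothetical stand-ins for the domainwide calibration described above:

```python
import numpy as np

def flash_proxy_1(w_m15, qg_m15, k1=1.0):
    """Proxy 1: product of updraft speed (m/s) and graupel mixing
    ratio (kg/kg) on the -15 C surface; arrays of shape (ny, nx)."""
    return k1 * w_m15 * qg_m15

def flash_proxy_2(rho, q_ice, dz, k2=1.0):
    """Proxy 2: vertically integrated ice content per grid column
    (kg/m^2); rho, q_ice, dz have shape (nz, ny, nx)."""
    return k2 * np.sum(rho * q_ice * dz, axis=0)

def blended_proxy(f1, f2, w1=0.5):
    """Weighted blend of the two calibrated proxies; the weighting
    used in the paper is not reproduced here."""
    return w1 * f1 + (1.0 - w1) * f2
```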
Biomedical progress rates as new parameters for models of economic growth in developed countries.
Zhavoronkov, Alex; Litovchenko, Maria
2013-11-08
While the doubling of life expectancy in developed countries during the 20th century can be attributed mostly to decreases in child mortality, the trillions of dollars spent on biomedical research by governments, foundations and corporations over the past sixty years are also yielding longevity dividends in both the working and retired populations. Biomedical progress will likely increase the healthy productive lifespan and the number of years of government support in old age. In this paper we introduce several new parameters that can be applied to established models of economic growth: the biomedical progress rate, the rate of clinical adoption and the rate of change in retirement age. The biomedical progress rate comprises the rejuvenation rate (extending the productive lifespan) and the non-rejuvenating rate (extending the lifespan beyond the age at which the net contribution to the economy becomes negative). While staying within the neoclassical economics framework and extending the overlapping generations (OLG) growth model and assumptions from the life cycle theory of saving behavior, we provide an example of the relations between these new parameters in the context of demographics, labor, households and the firm.
Biomedical Progress Rates as New Parameters for Models of Economic Growth in Developed Countries
Zhavoronkov, Alex; Litovchenko, Maria
2013-01-01
While the doubling of life expectancy in developed countries during the 20th century can be attributed mostly to decreases in child mortality, the trillions of dollars spent on biomedical research by governments, foundations and corporations over the past sixty years are also yielding longevity dividends in both the working and retired populations. Biomedical progress will likely increase the healthy productive lifespan and the number of years of government support in old age. In this paper we introduce several new parameters that can be applied to established models of economic growth: the biomedical progress rate, the rate of clinical adoption and the rate of change in retirement age. The biomedical progress rate comprises the rejuvenation rate (extending the productive lifespan) and the non-rejuvenating rate (extending the lifespan beyond the age at which the net contribution to the economy becomes negative). While staying within the neoclassical economics framework and extending the overlapping generations (OLG) growth model and assumptions from the life cycle theory of saving behavior, we provide an example of the relations between these new parameters in the context of demographics, labor, households and the firm. PMID:24217179
High rate constitutive modeling of aluminium alloy tube
NASA Astrophysics Data System (ADS)
Salisbury, C. P.; Worswick, M. J.; Mayer, R.
2006-08-01
As the need for fuel efficient automobiles increases, car designers are investigating light-weight materials for automotive bodies that will reduce the overall automobile weight. Aluminium alloy tube is a desirable material to use in automotive bodies due to its light weight. However, aluminium suffers from lower formability than steel and its energy absorption ability in a crash event after a forming operation is largely unknown. As part of a larger study on the relationship between crashworthiness and forming processes, constitutive models for 3 mm AA5754 aluminium tube were developed. A nominal strain rate of 100/s is often used to characterize overall automobile crash events, whereas strain rates on the order of 1000/s can occur locally. Therefore, tests were performed at quasi-static rates using an Instron test fixture and at strain rates of 500/s to 1500/s using a tensile split Hopkinson bar. High rate testing was then conducted at rates of 500/s, 1000/s and 1500/s at 21 °C, 150 °C and 300 °C. The generated data was then used to determine the constitutive parameters for the Johnson-Cook and Zerilli-Armstrong material models.
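For reference, the Johnson-Cook flow stress has the standard form sigma = (A + B*eps^n) * (1 + C*ln(eps_rate/eps_rate0)) * (1 - T*^m), with homologous temperature T* = (T - T_ref)/(T_melt - T_ref). A minimal sketch with illustrative constants (placeholders, not the AA5754 parameters fitted in this work):

```python
import numpy as np

def johnson_cook_stress(eps, eps_rate, T, A, B, n, C, m,
                        eps_rate0=1.0, T_ref=294.0, T_melt=880.0):
    """Johnson-Cook flow stress, in the units of A and B.

    eps: equivalent plastic strain; eps_rate: strain rate (1/s);
    T: temperature (K). T_ref and T_melt are reference and melting
    temperatures (placeholder values)."""
    T_star = (T - T_ref) / (T_melt - T_ref)
    return (A + B * eps**n) \
        * (1.0 + C * np.log(eps_rate / eps_rate0)) \
        * (1.0 - T_star**m)

# usage with illustrative (not fitted) constants, stress in MPa
print(johnson_cook_stress(eps=0.1, eps_rate=1000.0, T=294.0,
                          A=100.0, B=300.0, n=0.3, C=0.015, m=1.0))
```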
NASA Astrophysics Data System (ADS)
Joyce, C. J.; Schwadron, N. A.; Townsend, L. W.; deWet, W. C.; Wilson, J. K.; Spence, H. E.; Tobiska, W. K.; Shelton-Mur, K.; Yarborough, A.; Harvey, J.; Herbst, A.; Koske-Phillips, A.; Molina, F.; Omondi, S.; Reid, C.; Reid, D.; Shultz, J.; Stephenson, B.; McDevitt, M.; Phillips, T.
2016-09-01
We provide an analysis of the galactic cosmic ray radiation environment of Earth's atmosphere using measurements from the Cosmic Ray Telescope for the Effects of Radiation (CRaTER) aboard the Lunar Reconnaissance Orbiter (LRO) together with the Badhwar-O'Neil model and dose lookup tables generated by the Earth-Moon-Mars Radiation Environment Module (EMMREM). This study demonstrates an updated atmospheric radiation model that uses new dose tables to improve the accuracy of the modeled dose rates. Additionally, a method for computing geomagnetic cutoffs is incorporated into the model in order to account for location-dependent effects of the magnetosphere. Newly available measurements of atmospheric dose rates from instruments aboard commercial aircraft and high-altitude balloons enable us to evaluate the accuracy of the model in computing atmospheric dose rates. When compared to the available observations, the model seems to be reasonably accurate in modeling atmospheric radiation levels, overestimating airline dose rates by an average of 20%, which falls within the uncertainty limit recommended by the International Commission on Radiation Units and Measurements (ICRU). Additionally, measurements made aboard high-altitude balloons during simultaneous launches from New Hampshire and California provide an additional comparison to the model. We also find that the newly incorporated geomagnetic cutoff method enables the model to represent radiation variability as a function of location with sufficient accuracy.
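The role of geomagnetic cutoffs can be illustrated with the classical Störmer vertical cutoff rigidity for a dipole field; the cutoff method incorporated into the model above is more detailed, so the following is only a textbook approximation:

```python
import numpy as np

def stormer_vertical_cutoff(geomag_lat_deg, r_earth_radii=1.0):
    """Stormer vertical cutoff rigidity for a dipole field:
       Rc = 14.9 * cos^4(geomagnetic latitude) / r^2   [GV]
    Particles with rigidity below Rc cannot reach that location
    from the vertical direction. Dipole approximation only."""
    lam = np.radians(geomag_lat_deg)
    return 14.9 * np.cos(lam) ** 4 / r_earth_radii ** 2

for lat in (0.0, 30.0, 60.0):
    print(lat, stormer_vertical_cutoff(lat))   # ~14.9, ~8.4, ~0.93 GV
```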
2015-04-22
ceased. Oxygen concentration was continuously measured with a fast laser diode oxygen analyzer (O2CAP, Oxigraf, Inc., Mountain View, CA) throughout the ... duration of operation. The output generated from the COGs was analyzed by a gas mass spectrometer (QGA model HAS 301, Hiden Analytical, Livonia, MI) ... throughout the range of bolus volumes with each device at respiratory rates of 20 and 30 breaths/min with each bolus setting. Data were recorded every
Child allowances, fertility, and chaotic dynamics.
Chen, Hung-Ju; Li, Ming-Chia
2013-06-01
This paper analyzes the dynamics of an overlapping generations model with the provision of child allowances. Fertility is an increasing function of child allowances, and there is a threshold effect in the marginal effect of child allowances on fertility. We show that if the effectiveness of child allowances is sufficiently high, an intermediate-sized tax rate is enough to generate chaotic dynamics. Moreover, a decrease in the intertemporal elasticity of substitution prevents the occurrence of irregular cycles.
Predicting functional divergence in protein evolution by site-specific rate shifts
NASA Technical Reports Server (NTRS)
Gaucher, Eric A.; Gu, Xun; Miyamoto, Michael M.; Benner, Steven A.
2002-01-01
Most modern tools that analyze protein evolution allow individual sites to mutate at constant rates over the history of the protein family. However, Walter Fitch observed in the 1970s that, if a protein changes its function, the mutability of individual sites might also change. This observation is captured in the "non-homogeneous gamma model", which extracts functional information from gene families by examining the different rates at which individual sites evolve. This model has recently been coupled with structural and molecular biology to identify sites that are likely to be involved in changing function within the gene family. Applying this to multiple gene families highlights the widespread divergence of functional behavior among proteins to generate paralogs and orthologs.
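In practice, gamma-distributed among-site rate variation is usually handled with a discrete-gamma approximation: K equiprobable rate categories, each represented by a single rate. The sketch below uses the common median-of-category rule and is a generic illustration, not the cited non-homogeneous procedure itself:

```python
import numpy as np
from scipy.stats import gamma

def discrete_gamma_rates(alpha, K=4):
    """Median-based discrete-gamma rate categories for among-site
    rate variation, normalized so the mean rate is 1.

    alpha: gamma shape parameter; small alpha means strong
    site-to-site rate variation. K: number of categories."""
    # median of each of K equal-probability slices of Gamma(alpha, 1/alpha)
    q = (2.0 * np.arange(K) + 1.0) / (2.0 * K)
    rates = gamma.ppf(q, alpha, scale=1.0 / alpha)
    return rates / rates.mean()

print(discrete_gamma_rates(0.5))  # strong variation across sites
```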
NASA Astrophysics Data System (ADS)
Brown, S. M.; Behn, M. D.; Grove, T. L.
2017-12-01
We present results of a combined petrologic - geochemical (major and trace element) - geodynamical forward model for mantle melting and subsequent melt modification. The model advances Behn & Grove (2015), and is calibrated using experimental petrology. Our model allows for melting in the plagioclase, spinel, and garnet fields with a flexible retained melt fraction (from pure batch to pure fractional), tracks residual mantle composition, and includes melting with water, variable melt productivity, and mantle mode calculations. This approach is valuable for understanding oceanic crustal accretion, which involves mantle melting and melt modification by migration and aggregation. These igneous processes result in mid-ocean ridge basalts that vary in composition at the local (segment) and global scale. The important variables are geophysical and geochemical and include mantle composition, potential temperature, mantle flow, and spreading rate. Accordingly, our model allows us to systematically quantify the importance of each of these external variables. In addition to discriminating melt generation effects, we are able to discriminate the effects of different melt modification processes (inefficient pooling, melt-rock reaction, and fractional crystallization) in generating both local, segment-scale and global-scale compositional variability. We quantify the influence of a specific igneous process on the generation of oceanic crust as a function of variations in the external variables. We also find that it is unlikely that garnet lherzolite melting produces a signature in either major or trace element compositions formed from aggregated melts, because when melting does occur in the garnet field at high mantle temperature, it contributes a relatively small, uniform fraction (< 10%) of the pooled melt compositions at all spreading rates. Additionally, while increasing water content and/or temperature promote garnet melting, they also increase melt extent, pushing the pooled composition to lower Sm/Yb and higher Lu/Hf.
NASA Astrophysics Data System (ADS)
Liu, Dan; Li, Congsheng; Kang, Yangyang; Zhou, Zhou; Xie, Yi; Wu, Tongning
2017-09-01
In this study, the plane wave exposure of an infant to radiofrequency electromagnetic fields of 3.5 GHz was numerically analyzed to investigate the unintentional electromagnetic field (EMF) exposure of fifth generation (5G) signals during field test. The dosimetric influence of age-dependent dielectric properties and the influence of an adult body were evaluated using an infant model of 12 month old and an adult female model. The results demonstrated that the whole body-averaged specific absorption rate (WBASAR) was not significantly affected by age-dependent dielectric properties and the influence of the adult body did not enhance WBASAR. Taking the magnitude of the in situ
Self-Exciting Point Process Modeling of Conversation Event Sequences
NASA Astrophysics Data System (ADS)
Masuda, Naoki; Takaguchi, Taro; Sato, Nobuo; Yano, Kazuo
Self-exciting processes of Hawkes type have been used to model various phenomena including earthquakes, neural activities, and views of online videos. Studies of temporal networks have revealed that sequences of social interevent times for individuals are highly bursty. We examine some basic properties of event sequences generated by the Hawkes self-exciting process to show that it generates bursty interevent times for a wide parameter range. Then, we fit the model to data on conversation sequences recorded in company offices in Japan. In this way, we can estimate the relative magnitudes of the self-excitement, its temporal decay, and the base event rate independent of the self-excitation. These quantities depend strongly on the individual. We also point out an important limitation of the Hawkes model: the correlation in the interevent times and the burstiness cannot be independently modulated.
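A univariate Hawkes process with an exponential kernel, lambda(t) = mu + sum over past events of alpha*exp(-beta*(t - t_i)), can be simulated with Ogata's thinning method. A minimal sketch (a generic illustration, not the authors' fitting code):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_hawkes(mu, alpha, beta, t_end):
    """Simulate event times of a Hawkes process by Ogata thinning.
    Stationarity requires the branching ratio alpha/beta < 1."""
    events, t = [], 0.0
    while t < t_end:
        # intensity is non-increasing until the next event, so its
        # current value is a valid upper bound for thinning
        lam_bar = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
        t += rng.exponential(1.0 / lam_bar)
        lam_t = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
        if t < t_end and rng.uniform() <= lam_t / lam_bar:
            events.append(t)
    return np.array(events)

ev = simulate_hawkes(mu=0.1, alpha=0.8, beta=1.0, t_end=2000.0)
iet = np.diff(ev)
print(iet.std() / iet.mean())  # > 1 indicates bursty interevent times
```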
NASA Astrophysics Data System (ADS)
Aung, T. T.; Fujii, T.; Amo, M.; Suzuki, K.
2017-12-01
Understanding the potential methane flux from the Pleistocene fore-arc basin filled with turbiditic sedimentary formations along the eastern Nankai Trough is important for the quantitative assessment of gas hydrate resources. We consider that generated methane can exist in a sedimentary basin in three major forms: methane in methane hydrate, free gas, and methane dissolved in water. Generation of biomethane strongly depends on microbial activity, and microbes in turn survive over diverse ranges of temperature, salinity and pH. This study aims to understand the effects of reaction temperature and total organic carbon (TOC) on the generation of biomethane and its components. Biomarker analyses and culture experiments on core samples from the eastern Nankai Trough reveal that the methane generation rate peaks at various temperatures ranging from 12.5° to 35°. A simulation study of biomethane generation was made using a commercial basin-scale simulator, PetroMod, with different reaction temperatures and total organic carbon contents to predict how these affect the generation of biomethane. The reaction model is set by a Gaussian distribution with constant hydrogen index and a standard deviation of 1. A series of simulation cases with peak reaction temperatures ranging from 12.5° to 35° and total organic carbon of 0.6% to 3% were conducted and analyzed. The simulation results show a linear decrease in generation potential with increasing reaction temperature, and the decrease becomes larger in models with higher total organic carbon. At reaction temperatures above 30°, extremely low generation potential was found; this is because the modeled source formation is less than 1 km thick and most of the formation does not reach temperatures above 30°. In terms of the components, methane in methane hydrate and free methane increase with increasing TOC, with a drastic increase in free methane observed in the model with 3% TOC. The amount of methane dissolved in water is almost the same for all models.
Parisi Kern, Andrea; Ferreira Dias, Michele; Piva Kulakowski, Marlova; Paulo Gomes, Luciana
2015-05-01
Reducing construction waste is becoming a key environmental issue in the construction industry. The quantification of waste generation rates in the construction sector is an invaluable management tool in supporting mitigation actions. However, the quantification of waste can be a difficult process because of the specific characteristics and the wide range of materials used in different construction projects. Large variations are observed in the methods used to predict the amount of waste generated because of the range of variables involved in construction processes and the different contexts in which these methods are employed. This paper proposes a statistical model to determine the amount of waste generated in the construction of high-rise buildings by assessing the influence of the design process and the production system, often mentioned as the major culprits behind the generation of waste in construction. Multiple regression was used to conduct a case study based on multiple sources of data from eighteen residential buildings. The resulting statistical model relates the dependent variable (the amount of waste generated) to independent variables associated with the design and the production system used. The best regression model obtained from the sample data resulted in an adjusted R(2) value of 0.694, which means that it explains approximately 69% of the variation in waste generation in similar constructions. Most independent variables showed a low determination coefficient when assessed in isolation, which emphasizes the importance of assessing their joint influence on the response (dependent) variable. Copyright © 2015 Elsevier Ltd. All rights reserved.
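A minimal sketch of fitting such a multiple regression and reading off the adjusted R^2; the dataset and variable names below are hypothetical, not the eighteen-building sample of the study:

```python
import pandas as pd
import statsmodels.api as sm

# hypothetical data: one row per high-rise project
df = pd.DataFrame({
    "waste_m3":       [820, 640, 910, 450, 700, 560],   # response
    "floor_area_m2":  [9000, 7000, 11000, 5000, 8000, 6000],
    "design_changes": [12, 5, 15, 3, 9, 6],
    "prefab_ratio":   [0.10, 0.30, 0.05, 0.40, 0.20, 0.25],
})

X = sm.add_constant(df[["floor_area_m2", "design_changes", "prefab_ratio"]])
fit = sm.OLS(df["waste_m3"], X).fit()
print(fit.rsquared_adj)   # the analogue of the paper's adjusted R^2
print(fit.params)         # joint regression coefficients
```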
NASA Technical Reports Server (NTRS)
Mitrovica, J. X.; Davis, J. L.; Shapiro, I. I.
1993-01-01
We predict the present-day rates of change of the lengths of 19 North American baselines due to the glacial isostatic adjustment process. Contrary to previously published research, we find that the three-dimensional motion of each of the sites defining a baseline, rather than only the radial motions of these sites, needs to be considered to obtain an accurate estimate of the rate of change of the baseline length. Predictions are generated using a suite of Earth models and late Pleistocene ice histories; these include specific combinations of the two that have been proposed in the literature as satisfying a variety of rebound-related geophysical observations from the North American region. A number of these published models are shown to predict rates that differ significantly from the VLBI observations.
New insight on petroleum system modeling of Ghadames basin, Libya
NASA Astrophysics Data System (ADS)
Bora, Deepender; Dubey, Siddharth
2015-12-01
Underdown and Redfern (2008) performed a detailed petroleum system modeling of the Ghadames basin along an E-W section. However, hydrocarbon generation, migration and accumulation change significantly across the basin due to its complex geological history; a single section therefore cannot be considered representative of the whole basin. This study aims at bridging this gap by performing petroleum system modeling along a N-S section and provides new insights on source rock maturation, generation and migration of hydrocarbons using 2D basin modeling. This study, in conjunction with the earlier work, provides a 3D context for petroleum system modeling in the Ghadames basin. Hydrocarbon generation from the lower Silurian Tanezzuft formation and the Upper Devonian Aouinet Ouenine started during the late Carboniferous. However, the high subsidence rate during the middle to late Cretaceous and elevated heat flow in the Cenozoic had the greatest impact on source rock transformation and hydrocarbon generation, whereas large-scale uplift and erosion during the Alpine orogeny had a significant impact on migration and accumulation. Visible migration is observed along faults that were reactivated during the Austrian unconformity. Peak hydrocarbon expulsion was reached during the Oligocene for both the Tanezzuft and the Aouinet Ouenine source rocks. Based on the modeling results, capillary-entry-pressure-driven downward expulsion of hydrocarbons from the lower Silurian Tanezzuft formation into the underlying Bir Tlacsin formation is observed during the middle Cretaceous. Kinetic modeling has helped to model the composition and distribution of the hydrocarbons generated from both source rocks. Application of source-to-reservoir tracking technology suggests that some accumulations at shallow stratigraphic levels have received hydrocarbons from both the Tanezzuft and Aouinet Ouenine source rocks, implying charge mixing. Five petroleum systems were identified based on source-to-reservoir correlation technology in PetroMod*. This study builds upon the original work of Underdown and Redfern (2008) and offers new insights and interpretation of the data.
Performance of green waste biocovers for enhancing methane oxidation.
Mei, Changgen; Yazdani, Ramin; Han, Byunghyun; Mostafid, M Erfan; Chanton, Jeff; VanderGheynst, Jean; Imhoff, Paul
2015-05-01
Green waste aged 2 and 24 months, labeled "fresh" and "aged" green waste, respectively, were placed in biocover test cells and evaluated for their ability to oxidize methane (CH4) under high landfill gas loading over a 15-month testing period. These materials are less costly to produce than green waste compost, yet satisfied recommended respiration requirements for landfill compost covers. In field tests employing a novel gas tracer to correct for leakage, both green wastes oxidized CH4 at high rates during the first few months of operation - 140 and 200 g/m(2)/day for aged and fresh green waste, respectively. Biocover performance degraded during the winter and spring, with significant CH4 generated from anaerobic regions in the 60-80 cm thick biocovers. Concurrently, CH4 oxidation rates decreased. Two previously developed empirical models for the moisture and temperature dependency of CH4 oxidation in soils were used to test their applicability to green waste. The models accounted for 68% and 79% of the observed seasonal variations in CH4 oxidation rates for aged green waste. Neither model could describe similar seasonal changes for the less stable fresh green waste. This is the first field application and evaluation of these empirical models using media with high organic matter content. Given the difficulty of preventing undesired CH4 generation, green waste may not be a viable biocover material for many climates and landfill conditions. Copyright © 2015 Elsevier Ltd. All rights reserved.
RNA Recombination Enhances Adaptability and Is Required for Virus Spread and Virulence.
Xiao, Yinghong; Rouzine, Igor M; Bianco, Simone; Acevedo, Ashley; Goldstein, Elizabeth Faul; Farkov, Mikhail; Brodsky, Leonid; Andino, Raul
2016-04-13
Mutation and recombination are central processes driving microbial evolution. A high mutation rate fuels adaptation but also generates deleterious mutations. Recombination between two different genomes may resolve this paradox, alleviating the effects of clonal interference and purging deleterious mutations. Here we demonstrate that recombination significantly accelerates adaptation and evolution during acute virus infection. We identified a poliovirus recombination determinant within the virus polymerase, mutation of which reduces recombination rates without altering replication fidelity. By generating a panel of variants with distinct mutation rates and recombination ability, we demonstrate that recombination is essential to enrich the population in beneficial mutations and purge it of deleterious mutations. The concerted activities of mutation and recombination are key to virus spread and virulence in infected animals. These findings inform a mathematical model demonstrating that poliovirus adapts most rapidly at an optimal mutation rate determined by the trade-off between selection and the accumulation of detrimental mutations. Copyright © 2016 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morgan, R. V.; Likhachev, O. A.; Jacobs, J. W.
Theory and experiments are reported that explore the behaviour of the Rayleigh–Taylor instability initiated with a diffuse interface. Experiments are performed in which an interface between two gases of differing density is made unstable by acceleration generated by a rarefaction wave. Well-controlled, diffuse, two-dimensional and three-dimensional, single-mode perturbations are generated by oscillating the gases either side to side, or vertically for the three-dimensional perturbations. The puncturing of a diaphragm separating a vacuum tank beneath the test section generates a rarefaction wave that travels upwards and accelerates the interface downwards. This rarefaction wave generates a large, but non-constant, acceleration of the order of 1000g0, where g0 is the acceleration due to gravity. Initial interface thicknesses are measured using a Rayleigh scattering diagnostic and the instability is visualized using planar laser-induced Mie scattering. Growth rates agree well with theoretical values, and with the inviscid, dynamic diffusion model of Duff et al. (Phys. Fluids, vol. 5, 1962, pp. 417–425) when diffusion thickness is accounted for, and the acceleration is weighted using inviscid Rayleigh–Taylor theory. The linear stability formulation of Chandrasekhar (Proc. Camb. Phil. Soc., vol. 51, 1955, pp. 162–178) is solved numerically with an error function diffusion profile using the Riccati method. This technique exhibits good agreement with the dynamic diffusion model of Duff et al. for small wavenumbers, but produces larger growth rates for large-wavenumber perturbations. Asymptotic analysis shows a 1/k^2 decay in growth rates as k → ∞ for large-wavenumber perturbations.
Morgan, R. V.; Likhachev, O. A.; Jacobs, J. W.
2016-02-15
Theory and experiments are reported that explore the behaviour of the Rayleigh–Taylor instability initiated with a diffuse interface. Experiments are performed in which an interface between two gases of differing density is made unstable by acceleration generated by a rarefaction wave. Well-controlled, diffuse, two-dimensional and three-dimensional, single-mode perturbations are generated by oscillating the gases either side to side, or vertically for the three-dimensional perturbations. The puncturing of a diaphragm separating a vacuum tank beneath the test section generates a rarefaction wave that travels upwards and accelerates the interface downwards. This rarefaction wave generates a large, but non-constant, acceleration of the order of 1000g0, where g0 is the acceleration due to gravity. Initial interface thicknesses are measured using a Rayleigh scattering diagnostic and the instability is visualized using planar laser-induced Mie scattering. Growth rates agree well with theoretical values, and with the inviscid, dynamic diffusion model of Duff et al. (Phys. Fluids, vol. 5, 1962, pp. 417–425) when diffusion thickness is accounted for, and the acceleration is weighted using inviscid Rayleigh–Taylor theory. The linear stability formulation of Chandrasekhar (Proc. Camb. Phil. Soc., vol. 51, 1955, pp. 162–178) is solved numerically with an error function diffusion profile using the Riccati method. This technique exhibits good agreement with the dynamic diffusion model of Duff et al. for small wavenumbers, but produces larger growth rates for large-wavenumber perturbations. Asymptotic analysis shows a 1/k^2 decay in growth rates as k → ∞ for large-wavenumber perturbations.
NASA Astrophysics Data System (ADS)
Olson, Richard F.
2013-05-01
Rendering of point-scatterer-based radar scenes for millimeter wave (mmW) seeker tests in real-time hardware-in-the-loop (HWIL) scene generation requires efficient algorithms and vector-friendly computer architectures for complex signal synthesis. New processor technology from Intel implements an extended 256-bit vector SIMD instruction set (AVX, AVX2) in a multi-core CPU design providing peak execution rates of hundreds of GigaFLOPS (GFLOPS) on one chip. Real world mmW scene generation code can approach peak SIMD execution rates only after careful algorithm and source code design. An effective software design will maintain high computing intensity, emphasizing register-to-register SIMD arithmetic operations over data movement between CPU caches or off-chip memories. Engineers at the U.S. Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) applied two basic parallel coding methods to assess new 256-bit SIMD multi-core architectures for mmW scene generation in HWIL. These include use of POSIX threads built on vector library functions and more portable, high-level parallel code based on compiler technology (e.g. OpenMP pragmas and SIMD autovectorization). Since CPU technology is rapidly advancing toward high processor core counts and TeraFLOPS peak SIMD execution rates, it is imperative that coding methods be identified which produce efficient and maintainable parallel code. This paper describes the algorithms used in point scatterer target model rendering, the parallelization of those algorithms, and the execution performance achieved on an AVX multi-core machine using the two basic parallel coding methods. The paper concludes with estimates for scale-up performance on upcoming multi-core technology.
Towards a Universal Calving Law: Modeling Ice Shelves Using Damage Mechanics
NASA Astrophysics Data System (ADS)
Whitcomb, M.; Bassis, J. N.; Price, S. F.; Lipscomb, W. H.
2017-12-01
Modeling iceberg calving from ice shelves and ice tongues is a particularly difficult problem in glaciology because of the wide range of observed calving rates. Ice shelves naturally calve large tabular icebergs at infrequent intervals, but may instead calve smaller bergs regularly or disintegrate due to hydrofracturing in warmer conditions. Any complete theory of iceberg calving in ice shelves must be able to generate realistic calving rate values depending on the magnitudes of the external forcings. Here we show that a simple damage evolution law, which represents crevasse distributions as a continuum field, produces reasonable estimates of ice shelf calving rates when added to the Community Ice Sheet Model (CISM). Our damage formulation is based on a linear stability analysis and depends upon the bulk stress and strain rate in the ice shelf, as well as the surface and basal melt rates. The basal melt parameter in our model enhances crevasse growth near the ice shelf terminus, leading to an increased iceberg production rate. This implies that increasing ocean temperatures underneath ice shelves will drive ice shelf retreat, as has been observed in the Amundsen and Bellingshausen Seas. We show that our model predicts broadly correct calving rates for ice tongues ranging in length from 10 km (Erebus) to over 100 km (Drygalski), by matching the computed steady state lengths to observations. In addition, we apply the model to idealized Antarctic ice shelves and show that we can also predict realistic ice shelf extents. Our damage mechanics model provides a promising, computationally efficient way to compute calving fluxes and links ice shelf stability to climate forcing.
Enforced Sparse Non-Negative Matrix Factorization
2016-01-23
documents to find interesting pieces of information. With limited resources, analysts often employ automated text-mining tools that highlight common ... represented as an undirected bipartite graph. It has become a common method for generating topic models of text data because it is known to produce good results ... model and the convergence rate of the underlying algorithm. I. Introduction A common analyst challenge is searching through large quantities of text
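For context, the plain Lee-Seung multiplicative-update NMF that such topic-model work builds on fits V ≈ W H with nonnegative factors; the sketch below is the baseline algorithm only, not the enforced-sparsity variant the report describes:

```python
import numpy as np

def nmf(V, k, iters=200, eps=1e-9, seed=0):
    """Basic NMF via Lee-Seung multiplicative updates (Frobenius
    objective). V is a nonnegative document-term matrix; rows of H
    are topics, rows of W are per-document topic weights."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W, H = rng.random((n, k)), rng.random((k, m))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

V = np.random.default_rng(1).random((20, 50))  # toy document-term counts
W, H = nmf(V, k=5)
print(np.linalg.norm(V - W @ H))               # reconstruction error
```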
NASA Astrophysics Data System (ADS)
Engeland, Kolbjorn; Steinsland, Ingelin
2016-04-01
The aim of this study is to investigate how the inclusion of uncertainties in inputs and observed streamflow influences parameter estimation, streamflow predictions and model evaluation. In particular, we wanted to answer the following research questions: • What is the effect of including a random error in the precipitation and temperature inputs? • What is the effect of decreased information about precipitation by excluding the nearest precipitation station? • What is the effect of the uncertainty in streamflow observations? • What is the effect of reduced information about the true streamflow by using a rating curve where the measurements of the highest and lowest streamflows are excluded when estimating the rating curve? To answer these questions, we designed a set of calibration experiments and evaluation strategies. We used the elevation-distributed HBV model operating on daily time steps, combined with a Bayesian formulation and the MCMC routine Dream for parameter inference. The uncertainties in inputs were represented by creating ensembles of precipitation and temperature. The precipitation ensembles were created using a meta-Gaussian random field approach. The temperature ensembles were created using 3D Bayesian kriging with random sampling of the temperature lapse rate. The streamflow ensembles were generated by a Bayesian multi-segment rating curve model. Precipitation and temperatures were randomly sampled for every day, whereas the streamflow ensembles were generated from rating curve ensembles, and the same rating curve was always used for the whole time series in a calibration or evaluation run. We chose a catchment with a meteorological station measuring precipitation and temperature, and a rating curve of relatively high quality. This allowed us to investigate and further test the effect of having less information on precipitation and streamflow during model calibration, prediction and evaluation. The results showed that including uncertainty in the precipitation and temperature input has a negligible effect on the posterior distribution of parameters and on the Nash-Sutcliffe (NS) efficiency of the predicted flows, while the reliability and the continuous rank probability score (CRPS) improve. Reduced information in the precipitation input resulted in a shift in the water balance parameter Pcorr and in a model producing smoother streamflow predictions, giving poorer NS and CRPS but higher reliability. The effect of calibrating the hydrological model using wrong rating curves is mainly seen as variability in the water balance parameter Pcorr. When evaluating predictions obtained using a wrong rating curve, the evaluation scores vary depending on the true rating curve. Generally, the best evaluation scores were achieved not for the rating curve used for calibration, but for rating curves giving low variance in the streamflow observations. Reduced information in streamflow influenced the water balance parameter Pcorr and increased the spread in evaluation scores, giving both better and worse scores. This case study shows that estimating the water balance is challenging, since both precipitation inputs and streamflow observations have pronounced systematic components in their uncertainties.
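The rating-curve side of such an experiment can be illustrated with a single-segment power-law curve, Q = a*(h - h0)^b, with one parameter set drawn per ensemble member and applied to the whole series, mirroring the study's use of one curve per calibration or evaluation run. A toy sketch with hypothetical parameter distributions (the study itself used a Bayesian multi-segment model):

```python
import numpy as np

rng = np.random.default_rng(42)

def rating_curve_ensemble(stage, n_ens=100):
    """Toy streamflow ensemble from Q = a*(h - h0)**b with parameters
    drawn from hypothetical distributions; returns (n_ens, n_obs)."""
    a = rng.lognormal(mean=np.log(5.0), sigma=0.1, size=n_ens)
    b = rng.normal(2.0, 0.05, size=n_ens)
    h0 = rng.normal(0.2, 0.02, size=n_ens)
    return np.array([ai * np.clip(stage - h0i, 0.0, None) ** bi
                     for ai, bi, h0i in zip(a, b, h0)])

stage = np.array([0.8, 1.1, 1.6, 2.4])   # observed stage (m)
Q_ens = rating_curve_ensemble(stage)
print(Q_ens.mean(axis=0), Q_ens.std(axis=0))
```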
NASA Astrophysics Data System (ADS)
Campos, Joana; Van der Veer, Henk W.; Freitas, Vânia; Kooijman, Sebastiaan A. L. M.
2009-08-01
In this paper a contribution is made to the ongoing debate on which brown shrimp generation mostly sustains the autumn peak in coastal North Sea commercial fisheries: the generation born in summer, or the winter one. Since the two perspectives rest on different assumptions about the growth timeframe from settlement to commercial size, the Dynamic Energy Budget (DEB) theory was applied to predict maximum possible growth under natural conditions. First, the parameters of the standard DEB model for Crangon crangon L. were estimated using available data sets. These were insufficient to allow a direct estimation, requiring a special protocol to achieve consistency between parameters. Next, the DEB model was validated by comparing simulations with published experimental data on shrimp growth in relation to water temperature. Finally, the DEB model was applied to simulate growth under optimal food conditions using the prevailing water temperature conditions in the Wadden Sea. Results show clear differences between males and females, with the fastest growth rates observed in females. DEB model simulations of maximum growth in the Wadden Sea suggest that it is not the summer brood of the current year as Boddeke claimed, nor the previous winter generation as Kuipers and Dapper suggested, but more likely the summer generation of the previous year that contributes the bulk of the fisheries recruits in autumn.
Analysis of dynamic behavior of multiple-stage planetary gear train used in wind driven generator.
Wang, Jungang; Wang, Yong; Huo, Zhipu
2014-01-01
A dynamic model of a multiple-stage planetary gear train, composed of a two-stage planetary gear train and a one-stage parallel-axis gear, is proposed for use in a wind driven generator to analyze the influence of revolution speed and mesh error on the dynamic load sharing characteristic, based on lumped parameter theory. The dynamic equation of the model is solved numerically to analyze the uniform load distribution of the system. It is shown that the load sharing property of the system is significantly affected by mesh error and rotational speed; the load sharing coefficients and their rates of change for the internal and external meshing of the system differ markedly from each other. The study provides a useful theoretical guideline for the design of the multiple-stage planetary gear train of a wind driven generator.
Analysis of Dynamic Behavior of Multiple-Stage Planetary Gear Train Used in Wind Driven Generator
Wang, Jungang; Wang, Yong; Huo, Zhipu
2014-01-01
A dynamic model of a multiple-stage planetary gear train, composed of a two-stage planetary gear train and a one-stage parallel-axis gear, is proposed for use in a wind driven generator to analyze the influence of revolution speed and mesh error on the dynamic load sharing characteristic, based on lumped parameter theory. The dynamic equation of the model is solved numerically to analyze the uniform load distribution of the system. It is shown that the load sharing property of the system is significantly affected by mesh error and rotational speed; the load sharing coefficients and their rates of change for the internal and external meshing of the system differ markedly from each other. The study provides a useful theoretical guideline for the design of the multiple-stage planetary gear train of a wind driven generator. PMID:24511295
Possible Mechanism for the Generation of a Fundamental Unit of Charge (long version)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lestone, John Paul
2017-06-16
Various methods for calculating particle-emission rates from hot systems are reviewed. Semi-classically derived photon-emission rates often contain the term exp(-ε/T), which needs to be replaced with the corresponding Planckian factor [exp(ε/T)-1]^-1 to obtain the correct rate. This replacement is associated with the existence of stimulated emission. Simple arguments are used to demonstrate that black holes can also undergo stimulated emission, as previously determined by others. We extend these concepts to fundamental particles, and assume they can be stimulated to emit virtual photons with a cross section of πλ^2, in the case of an isolated particle, when the incident virtual-photon energy is < 2πmc^2. Stimulated virtual photons can be exchanged with other particles, generating a force. With the inclusion of near-field effects, the model choices presented give a calculated fundamental unit of charge of 1.6022×10^-19 C. If these choices are corroborated by detailed calculations, then an understanding of the numerical value of the fine structure constant may emerge. The present study suggests charge might be an emergent property generated by a simple interaction mechanism between point-like particles and the electromagnetic vacuum, similar to the process that generates the Lamb shift.
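The size of the correction described above depends on ε/T: the two factors agree at high energies and diverge at low ones, where stimulated emission dominates. A quick numerical comparison:

```python
import numpy as np

eps_over_T = np.array([0.1, 1.0, 3.0, 10.0])
boltzmann = np.exp(-eps_over_T)          # semi-classical factor
planck = 1.0 / np.expm1(eps_over_T)      # [exp(e/T) - 1]^-1
print(planck / boltzmann)  # -> 1 for e/T >> 1, diverges as e/T -> 0
```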
Tapered Screened Channel PMD for Cryogenic Liquids
NASA Astrophysics Data System (ADS)
Dodge, Franklin T.; Green, Steve T.; Walter, David B.
2004-02-01
If a conventional spacecraft propellant management device (PMD) of the screened-channel type were employed with a cryogenic liquid, vapor bubbles generated within the channel by heat transfer could "dry out" the channel screens and thereby cause the channels to admit large amounts of vapor from the tank into the liquid outflow. This paper describes a new tapered channel design that passively "pumps" bubbles away from the outlet port and vents them into the tank. A predictive mathematical model of the operating principle is presented and discussed. Scale-model laboratory tests were conducted, and the mathematical model agreed well with the measured bubble transport velocities. Finally, an example of the use of the predictive model for a realistic spacecraft application is presented. The model predicts that bubble clearing rates are acceptable even in tanks up to 2 m in length.
Modeling of fire smoke movement in multizone garments building using two open source platforms
NASA Astrophysics Data System (ADS)
Khandoker, Md. Arifur Rahman; Galib, Musanna; Islam, Adnan; Rahman, Md. Ashiqur
2017-06-01
Casualties among garment factory workers from factory fires in Bangladesh are a recurring tragedy. Smoke, which is more fatal than the fire itself, often propagates through different pathways from lower to upper floors during a building fire. Among the toxic gases produced by a building fire, carbon monoxide (CO) can be deadly even in small amounts. This paper models the propagation and transport of fire-induced smoke (CO) resulting from the burning of synthetic polyester fibers using two open source platforms, CONTAM and Fire Dynamics Simulator (FDS). Smoke migration in a generic multistoried garment factory building in Bangladesh is modeled in CONTAM, where each floor is compartmentalized into different zones. The elevator and stairway shafts are modeled as phantom zones to simulate contaminant (CO) transport from one floor to the floors above. The FDS analysis involves the burning of two different stacks of polyester jackets, six feet in height, with a maximum heat release rate per unit area of 1500 kW/m² over storage areas of 50 m² and 150 m², respectively. The resulting CO generation and removal rates from FDS are used in CONTAM to predict fire-borne CO propagation in the different zones of the garment building. The findings show that the contaminant flow rate is a strong function of the building geometry, the location of fire initiation, the amount of burnt material, the presence of an air handling unit (AHU), and the CO generation and removal rates at the source location. The transport of fire smoke through the building's hallways, stairways and lifts is also investigated in detail to assess the safe egress of occupants in case of fire.
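For readers unfamiliar with the multizone formulation, a minimal sketch of the zone mass balance that CONTAM-style models solve for a contaminant such as CO. The zone volumes, interzone flows, source strength and removal rate below are hypothetical, not values from the study.

```python
# Two-zone CO mass balance: for each zone i,
#   V_i dC_i/dt = sum_j Q_ji*C_j - sum_j Q_ij*C_i + S_i - R_i*C_i,
# with Q_ij the airflow from zone i to zone j [m^3/s], S_i the CO source
# [mg/s], and R_i a removal coefficient [m^3/s]. All numbers hypothetical.
import numpy as np

V = np.array([600.0, 600.0])   # zone volumes [m^3]: fire floor, upper floor
Q = np.array([[0.0, 0.5],      # Q[i][j]: flow from zone i to zone j [m^3/s]
              [0.5, 0.0]])     # (shaft flow up, make-up flow down)
S = np.array([50.0, 0.0])      # CO source [mg/s]: fire on the lower floor only
R = np.array([0.0, 0.1])       # removal, e.g. AHU extraction [m^3/s]

def step(C, dt):
    """Advance zone CO concentrations C [mg/m^3] by one explicit Euler step."""
    inflow = Q.T @ C                 # mass arriving in each zone [mg/s]
    outflow = Q.sum(axis=1) * C      # mass leaving each zone [mg/s]
    dCdt = (inflow - outflow + S - R * C) / V
    return C + dt * dCdt

C = np.zeros(2)
for _ in range(1800):                # 30 minutes at 1 s steps
    C = step(C, dt=1.0)
print(f"CO after 30 min: fire floor {C[0]:.1f}, upper floor {C[1]:.1f} mg/m^3")
```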
Earthquake Cycle Simulations with Rate-and-State Friction and Linear and Nonlinear Viscoelasticity
NASA Astrophysics Data System (ADS)
Allison, K. L.; Dunham, E. M.
2016-12-01
We have implemented a parallel code that simultaneously models both rate-and-state friction on a strike-slip fault and off-fault viscoelastic deformation throughout the earthquake cycle in 2D. Because we allow fault slip to evolve with a rate-and-state friction law and do not impose the depth of the brittle-to-ductile transition, we are able to address: the physical processes limiting the depth of large ruptures (with hazard implications); the degree of strain localization with depth; the relative partitioning of fault slip and viscous deformation in the brittle-to-ductile transition zone; and the relative contributions of afterslip and viscous flow to postseismic surface deformation. The method uses a discretization that accommodates variable off-fault material properties, depth-dependent frictional properties, and linear and nonlinear viscoelastic rheologies. All phases of the earthquake cycle are modeled, allowing the model to spontaneously generate earthquakes and to capture afterslip and postseismic viscous flow. We compare the effects of a linear Maxwell rheology, often used in geodetic models, with those of a nonlinear power-law rheology, which laboratory data indicate more accurately represents the lower crust and upper mantle. The viscosity of the Maxwell rheology is set by power-law rheological parameters with an assumed geotherm and strain rate, producing a viscosity that decays exponentially with depth and is constant in time. In contrast, the power-law rheology evolves an effective viscosity that is a function of the temperature profile and the stress state, and therefore varies both spatially and temporally. We will also integrate the energy equation for the thermomechanical problem, capturing frictional heat generation on the fault and off-fault viscous shear heating, and allowing these in turn to alter the effective viscosity.
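For context, a hedged sketch of how a power-law flow law translates into the effective viscosity discussed above. The flow-law parameters are illustrative wet-quartzite-like values, not those used in the simulations.

```python
# Effective viscosity implied by a power-law (dislocation-creep) rheology,
#   strain_rate = A * sigma^n * exp(-Q/(R*T));
# solving for stress and using eta = sigma / (2 * strain_rate) gives
#   eta_eff = 0.5 * A**(-1/n) * strain_rate**((1-n)/n) * exp(Q/(n*R*T)).
import numpy as np

R_GAS = 8.314  # gas constant [J/mol/K]

def eta_eff(strain_rate, T_kelvin, A=1e-20, n=3.0, Q=1.5e5):
    """Effective viscosity [Pa s] for a power-law rheology.

    strain_rate : strain rate [1/s]
    T_kelvin    : temperature [K]
    A, n, Q     : flow-law prefactor [Pa^-n/s], stress exponent, and
                  activation energy [J/mol] (illustrative values)
    """
    return 0.5 * A**(-1.0 / n) * strain_rate**((1.0 - n) / n) \
           * np.exp(Q / (n * R_GAS * T_kelvin))

# Example: 20 km depth with an assumed 20 K/km geotherm and a plate-rate
# background strain rate (both assumptions, as in the Maxwell comparison).
T = 273.15 + 20.0 * 20.0
print(f"eta_eff ~ {eta_eff(1e-14, T):.2e} Pa s")
```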
Hendrix, Kristin S; Downs, Stephen M; Brophy, Ginger; Carney Doebbeling, Caroline; Swigonski, Nancy L
2013-01-01
Most state Medicaid programs reimburse physicians for providing fluoride varnish, yet the only published studies of cost-effectiveness do not show cost-savings. Our objective is to apply state-specific claims data to an existing published model to quickly and inexpensively estimate the cost-savings of a policy consideration and better inform decisions; specifically, to assess whether Indiana Medicaid children's restorative service rates met the threshold to generate cost-savings. The threshold analysis was based on the 2006 model by Quiñonez et al. Simple calculations were used to "align" the Indiana Medicaid data with the published model. Quarterly likelihoods that a child would receive treatment for caries were annualized. The probability of a tooth developing a cavitated lesion was multiplied by the probability of using restorative services. Finally, this rate of restorative services given cavitation was multiplied by 1.5 to generate the threshold for attaining cost-savings. Restorative services utilization rates, extrapolated from available Indiana Medicaid claims, were compared with these thresholds. For children 1-2 years old, restorative services utilization was 2.6 percent, below the 5.8 percent threshold for cost-savings. However, for children 3-5 years of age, restorative services utilization was 23.3 percent, exceeding the 14.5 percent threshold that suggests cost-savings. Combining a published model with state-specific data, we were able to quickly and inexpensively demonstrate that restorative service utilization rates for children 36 months and older in Indiana are high enough that fluoride varnish regularly applied by physicians to children starting at 9 months of age could save Medicaid funds over a 3-year horizon. © 2013 American Association of Public Health Dentistry.
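The threshold logic lends itself to a short sketch. The observed rates and thresholds below are the ones reported in the abstract; the quarterly likelihood fed to annualize() is a hypothetical placeholder for the Quiñonez et al. model inputs, which the abstract does not report.

```python
# Threshold comparison for the fluoride-varnish cost-savings analysis.
def annualize(quarterly_prob):
    """Convert a quarterly probability of receiving treatment to an annual one."""
    return 1.0 - (1.0 - quarterly_prob) ** 4

def cost_savings_threshold(p_cavitation, p_restorative_given_cavitation):
    """Utilization threshold above which varnish saves money (the x1.5 rule)."""
    return 1.5 * p_cavitation * p_restorative_given_cavitation

print(f"hypothetical 10% quarterly likelihood -> {annualize(0.10):.1%} annual")

cases = {
    "age 1-2": {"observed": 0.026, "threshold": 0.058},  # from the abstract
    "age 3-5": {"observed": 0.233, "threshold": 0.145},  # from the abstract
}
for group, c in cases.items():
    verdict = "cost-saving" if c["observed"] > c["threshold"] else "not cost-saving"
    print(f"{group}: observed {c['observed']:.1%} vs "
          f"threshold {c['threshold']:.1%} -> {verdict}")
```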
Kinetic modeling of electro-Fenton reaction in aqueous solution.
Liu, H; Li, X Z; Leng, Y J; Wang, C
2007-03-01
To better describe the electro-Fenton (E-Fenton) reaction in aqueous solution, a new kinetic model was established according to the generally accepted mechanism of the E-Fenton reaction. The model gives special consideration to the rates of hydrogen peroxide (H2O2) generation and consumption in the reaction solution. The model also embraces three key operating factors affecting organic degradation in the E-Fenton reaction: current density, dissolved oxygen concentration and initial ferrous ion concentration. This analytical model was then validated by experiments on phenol degradation in aqueous solution. The experiments demonstrated that H2O2 gradually built up with time and eventually approached its maximum value in the reaction solution. The experiments also showed that phenol was degraded at a slow rate in the early stage of the reaction, a faster rate during the middle stage, and a slow rate again in the final stage. It was confirmed in all experiments that the curves of phenol degradation (concentration vs. time) took an inverted "S" shape. The experimental data were fitted using both the normal first-order model and the new model. The goodness of fit demonstrated that the new model fits the experimental data appreciably better than the first-order model, indicating that this analytical model better describes the kinetics of the E-Fenton reaction both mathematically and chemically.
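A hedged sketch of the kinetic structure the abstract describes (not the paper's exact model): H2O2 builds up toward a steady state set by its generation and consumption rates, and phenol is degraded at a rate proportional to both concentrations, which reproduces the inverted-"S" decay curve. The rate constants below are illustrative only.

```python
# Toy E-Fenton kinetics: H2O2 build-up plus second-order phenol degradation.
import numpy as np

K_GEN = 0.02    # H2O2 generation rate [mM/min] (illustrative; depends on
                # current density and dissolved O2 in the real system)
K_CONS = 0.05   # H2O2 consumption rate constant [1/min] (illustrative)
K_PH = 0.8      # phenol degradation rate constant [1/(mM*min)] (illustrative)

def simulate(c_phenol0=1.0, t_end=240.0, dt=0.1):
    """Explicit Euler integration of the two coupled rate equations."""
    n = int(t_end / dt)
    h2o2, phenol = 0.0, c_phenol0
    out = np.empty((n, 3))
    for i in range(n):
        out[i] = (i * dt, h2o2, phenol)
        dh = K_GEN - K_CONS * h2o2    # generation vs consumption
        dp = -K_PH * h2o2 * phenol    # H2O2-mediated attack on phenol
        h2o2 += dt * dh
        phenol += dt * dp
    return out

traj = simulate()
print(f"H2O2 plateau ~ {K_GEN / K_CONS:.2f} mM; "
      f"phenol at 4 h: {traj[-1, 2]:.3f} mM")
```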
Shear Band Formation in Plastic-Bonded Explosives (PBX)
NASA Astrophysics Data System (ADS)
Dey, Thomas N.; Johnson, James N.
1997-07-01
Adiabatic shear bands can be a source of ignition and lead to detonation. At low to moderate deformation rates, 10-1000 s⁻¹, two other mechanisms can also give rise to shear bands. These mechanisms are: (1) softening caused by micro-cracking, and (2) a constitutive response with a non-associated flow rule, as is observed in granular materials such as soil. Brittle behavior at small strains and the granular nature of HMX suggest that the constitutive behavior of PBX-9501 may be similar to that of sand. A constitutive model for each of these mechanisms is studied in a series of calculations. A viscoelastic constitutive model for PBX-9501 softens via a statistical crack model, based on the work of Dienes (1986). A sand model is used to provide a non-associated flow rule. Both models generate shear band formation at 1-2% strain at nominal strain rates at and below 1000 s⁻¹. Shear band formation is suppressed at higher strain rates. The sand model gives qualitative agreement with the location and orientation of shear bands observed in a punch experiment. Both mechanisms may accelerate the formation of adiabatic shear bands.
Shutdown Dose Rate Analysis for the long-pulse D-D Operation Phase in KSTAR
NASA Astrophysics Data System (ADS)
Park, Jin Hun; Han, Jung-Hoon; Kim, D. H.; Joo, K. S.; Hwang, Y. S.
2017-09-01
KSTAR is a medium-size fully superconducting tokamak. The deuterium-deuterium (D-D) reaction in the KSTAR tokamak generates neutrons with a peak yield of 3.5×10¹⁶ per second during a pulse operation of 100 seconds. The effects of neutron generation in the full D-D high-power KSTAR operation mode on the machine, such as activation, shutdown dose rate, and nuclear heating, are estimated to assure safety during operation, maintenance, and machine upgrades. The nuclear heating of the in-vessel components and the neutron activation of the surrounding materials have been investigated. The dose rates during operation and after shutdown of KSTAR have been calculated using a 3D CAD model of KSTAR with the Monte Carlo code MCNP5 (neutron flux and decay photons), the inventory code FISPACT (activation and decay photons) and the FENDL 2.1 nuclear data library.
Studies and comparison of currently utilized models for ablation in Electrothermal-chemical guns
NASA Astrophysics Data System (ADS)
Jia, Shenli; Li, Rui; Li, Xingwen
2009-10-01
Wall ablation is a key process taking place in the capillary plasma generator of electrothermal-chemical (ETC) guns, and its characteristics directly determine the generator's performance. In the present article, this ablation process is studied theoretically. Widely used mathematical models of the process are analyzed and compared: a recently developed kinetic model that accounts for the unsteady state in the plasma-wall transition region by dividing it into two sub-layers, a Knudsen layer and a collision-dominated non-equilibrium hydrodynamic layer; a model based on the Langmuir law; and a simplified model widely used for the arc-wall interaction process in circuit breakers, which assumes an empirically obtained proportionality factor and ablation enthalpy. The bulk plasma state and parameters are held identical while analyzing and comparing the models, so that only the differences caused by the models themselves are considered. Finally, the ablation rate is calculated with each model and the differences are discussed.
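Of the compared models, the Langmuir-law variant is compact enough to sketch: the free-evaporation mass flux from a surface at temperature T with equilibrium vapor pressure p_v(T) is j = p_v(T)·sqrt(M/(2πRT)). The vapor-pressure fit and molar mass below are hypothetical placeholders, not the capillary-liner data used in the paper.

```python
# Langmuir free-evaporation mass flux for a hot capillary wall.
import math

R_GAS = 8.314    # gas constant [J/mol/K]
M_MOLAR = 0.028  # molar mass of ablated vapor [kg/mol] (hypothetical)

def vapor_pressure(T):
    """Hypothetical Clausius-Clapeyron fit p_v(T) [Pa]."""
    return 1.0e13 * math.exp(-45000.0 / T)

def langmuir_mass_flux(T):
    """Free-evaporation mass flux [kg/(m^2 s)] at surface temperature T [K]."""
    return vapor_pressure(T) * math.sqrt(M_MOLAR / (2.0 * math.pi * R_GAS * T))

for T in (2000.0, 3000.0, 4000.0):
    print(f"T = {T:.0f} K: flux = {langmuir_mass_flux(T):.3e} kg/(m^2 s)")
```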
Laboratory hydraulic fracturing experiments in intact and pre-fractured rock
Zoback, M.D.; Rummel, F.; Jung, R.; Raleigh, C.B.
1977-01-01
Laboratory hydraulic fracturing experiments were conducted to investigate two factors that could influence the use of the hydrofrac technique for in-situ stress determinations: the possible dependence of the breakdown pressure upon the rate of borehole pressurization, and the influence of pre-existing cracks on the orientation of the generated fractures. The experiments have shown that while the rate of borehole pressurization has a marked effect on breakdown pressures, the pressure at which hydraulic fractures initiate (and thus the tensile strength) is independent of the rate of borehole pressurization when the effect of fluid penetration is negligible. Thus, the experiments indicate that use of breakdown pressures rather than fracture initiation pressures may lead to an erroneous estimate of tectonic stresses. A conceptual model is proposed to explain the anomalously high breakdown pressures observed when fracturing with high-viscosity fluids. In this model, initial fracture propagation is presumed to be stable due to large differences between the borehole pressure and the pressure within the fracture. In samples containing pre-existing fractures that were 'leaky' to water, we found it possible to generate hydraulic fractures oriented parallel to the direction of maximum compression if high-viscosity drilling mud was used as the fracturing fluid. © 1977.
Using cloud models of heartbeats as the entity identifier to secure mobile devices.
Fu, Donglai; Liu, Yanhua
2017-01-01
Mobile devices are extensively used to store private and often sensitive information, so it is important to protect them against unauthorised access. Authentication ensures that only authorised users can use a mobile device. However, traditional authentication methods, such as numerical or graphical passwords, are vulnerable to passive attacks; for example, an adversary can steal a password by snooping from a short distance. To avoid these problems, this study presents a biometric approach that uses cloud models of heartbeats as the entity identifier to secure mobile devices. Note that the terms 'cloud model' and 'cloud' as used here have nothing to do with cloud computing; the cloud model in this study is a cognitive model. In the proposed method, heartbeats are collected by two ECG electrodes connected to one mobile device. The backward normal cloud generator is used to generate ECG standard cloud models characterising the heartbeat template. When a user tries to access the mobile device, cloud models regenerated from fresh heartbeats are compared with the ECG standard cloud models to determine whether the current user may use the device. This authentication method was evaluated in three respects: accuracy, authentication time and energy consumption. The proposed method achieves a true acceptance rate of 86.04% with a false acceptance rate of 2.73%. One authentication can be done in 6 s, and this processing consumes about 2000 mW of power.
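The backward normal cloud generator step is standard enough to sketch: from feature samples it estimates the cloud model's three digital characteristics (Ex: expectation, En: entropy, He: hyper-entropy) using the textbook estimators. The ECG feature extraction that precedes this step is outside the sketch, and the sample data below are hypothetical.

```python
# Backward normal cloud generator: estimate (Ex, En, He) from 1-D samples.
import numpy as np

def backward_normal_cloud(samples):
    """Estimate the cloud model's digital characteristics from samples."""
    x = np.asarray(samples, dtype=float)
    ex = x.mean()                                  # expectation Ex
    en = np.sqrt(np.pi / 2.0) * np.abs(x - ex).mean()  # entropy En
    s2 = x.var(ddof=1)                             # sample variance
    he = np.sqrt(abs(s2 - en ** 2))                # hyper-entropy He
    return ex, en, he

# Example with synthetic 'R-R interval' features (hypothetical data)
rng = np.random.default_rng(0)
features = rng.normal(loc=0.82, scale=0.04, size=200)  # seconds
ex, en, he = backward_normal_cloud(features)
print(f"Ex={ex:.3f}, En={en:.3f}, He={he:.4f}")
```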