Science.gov

Sample records for evaluating value-at-risk models

  1. Multifractal Value at Risk model

    NASA Astrophysics Data System (ADS)

    Lee, Hojin; Song, Jae Wook; Chang, Woojin

    2016-06-01

    In this paper, a new Value at Risk (VaR) model is proposed and investigated. We consider the multifractal property of financial time series and develop a multifractal Value at Risk (MFVaR) model. The MFVaR introduced in this paper is analytically tractable and not based on simulation. The empirical study showed that MFVaR can provide more stable and accurate forecasting performance in volatile financial markets where large losses can be incurred. This implies that our multifractal VaR works well for the risk measurement of extreme credit events.

  2. Estimation of value at risk and conditional value at risk using normal mixture distributions model

    NASA Astrophysics Data System (ADS)

    Kamaruzzaman, Zetty Ain; Isa, Zaidi

    2013-04-01

    The normal mixture distribution model has been successfully applied in financial time series analysis. In this paper, we estimate the return distribution, value at risk (VaR) and conditional value at risk (CVaR) for monthly and weekly rates of return of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI) from July 1990 until July 2010 using the two-component univariate normal mixture distribution model. First, we present the application of the normal mixture distribution model in empirical finance, where we fit it to our real data. Second, we present its application in risk analysis, where we apply the normal mixture distribution model to evaluate value at risk (VaR) and conditional value at risk (CVaR), with model validation for both risk measures. The empirical results provide evidence that the two-component normal mixture distribution model fits the data well and performs better in estimating VaR and CVaR, as it can capture the stylized facts of non-normality and leptokurtosis in the returns distribution.
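The VaR and CVaR of a two-component normal mixture can be sketched numerically from the mixture CDF and the standard normal tail-expectation identity. The function names, bisection bounds, and test parameters below are illustrative assumptions of this sketch, not the authors' implementation.

```python
from statistics import NormalDist

def mixture_cdf(x, weights, mus, sigmas):
    """CDF of a finite normal mixture: a weighted sum of component CDFs."""
    return sum(w * NormalDist(m, s).cdf(x)
               for w, m, s in zip(weights, mus, sigmas))

def mixture_var(alpha, weights, mus, sigmas, lo=-10.0, hi=10.0, tol=1e-10):
    """VaR at level alpha: the loss L with P(return <= -L) = alpha.
    Solved by bisection, since the mixture quantile has no closed form."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mixture_cdf(mid, weights, mus, sigmas) < alpha:
            lo = mid
        else:
            hi = mid
    return -(lo + hi) / 2

def mixture_cvar(alpha, weights, mus, sigmas):
    """CVaR (expected shortfall): average loss beyond the VaR threshold,
    using E[X 1{X<=q}] = mu*Phi(z) - sigma*phi(z) per normal component."""
    q = -mixture_var(alpha, weights, mus, sigmas)
    std = NormalDist()
    tail_mean = sum(
        w * (m * NormalDist(m, s).cdf(q) - s * std.pdf((q - m) / s))
        for w, m, s in zip(weights, mus, sigmas))
    return -tail_mean / alpha
```

With both components standard normal the sketch reduces to the usual Gaussian VaR, which gives a quick sanity check; with distinct components it reproduces the fat-tailed behaviour the abstract describes.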

  3. Estimation of Value-at-Risk for Energy Commodities via CAViaR Model

    NASA Astrophysics Data System (ADS)

    Xiliang, Zhao; Xi, Zhu

    This paper uses the Conditional Autoregressive Value at Risk (CAViaR) model proposed by Engle and Manganelli (2004) to evaluate the value-at-risk of daily spot prices of Brent and West Texas Intermediate (WTI) crude oil covering the period May 21, 1987 to November 18, 2008. The accuracy of the estimates of the CAViaR, Normal-GARCH, and GED-GARCH models was then compared. The results show that all the methods do a good job at the low confidence level (95%): GED-GARCH is best for the spot WTI price, while Normal-GARCH and Adaptive-CAViaR are best for the spot Brent price. However, at the high confidence level (99%), Normal-GARCH does a good job for spot WTI, and GED-GARCH and the four CAViaR specifications do well for the spot Brent price, whereas Normal-GARCH does badly for spot Brent. The results seem to suggest that CAViaR does as well as GED-GARCH, since CAViaR directly models the quantile autoregression, but it does not outperform GED-GARCH, although it does outperform Normal-GARCH.
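The quantile recursion at the heart of CAViaR can be illustrated with the symmetric absolute value (SAV) specification from Engle and Manganelli (2004). The coefficient values and the flat initialisation below are hypothetical; in practice the parameters are estimated by regression quantiles, which is not shown here.

```python
def sav_caviar(returns, beta0, beta1, beta2, q0):
    """Symmetric absolute value CAViaR: the VaR forecast q_t evolves as
    q_t = beta0 + beta1 * q_{t-1} + beta2 * |r_{t-1}|,
    so the quantile reacts directly to the size of the last return."""
    q = [q0]  # q0: hypothetical starting quantile (e.g. an empirical quantile)
    for r in returns[:-1]:
        q.append(beta0 + beta1 * q[-1] + beta2 * abs(r))
    return q
```

A positive beta2 widens the VaR band after large moves of either sign, which is the mechanism that lets CAViaR track risk directly at the quantile rather than through a volatility model.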

  4. Application of the Beck model to stock markets: Value-at-Risk and portfolio risk assessment

    NASA Astrophysics Data System (ADS)

    Kozaki, M.; Sato, A.-H.

    2008-02-01

    We apply the Beck model, developed for turbulent systems that exhibit scaling properties, to stock markets. Our study reveals that the Beck model elucidates the properties of stock market returns and is applicable to practical use such as Value-at-Risk estimation and portfolio analysis. We perform empirical analysis with daily/intraday data of the S&P500 index return and find that the volatility fluctuation of real markets is consistent with the assumptions of the Beck model: the volatility fluctuates at a much larger time scale than the return itself, and the inverse of the variance, or “inverse temperature”, β obeys a Γ-distribution. As predicted by the Beck model, the distribution of returns is well fitted by the q-Gaussian distribution of Tsallis statistics. The evaluation method of Value-at-Risk (VaR), one of the most significant indicators in risk management, is studied for the q-Gaussian distribution. Our proposed method enables VaR evaluation in consideration of tail risk, which is underestimated by the variance-covariance method. A framework of portfolio risk assessment under the existence of tail risk is considered. We propose a multi-asset model with a single volatility fluctuation shared by all assets, named the single-β model, and empirically examine the agreement between the model and an imaginary portfolio with Dow Jones indices. It turns out that the single-β model gives a good approximation for portfolios composed of assets with non-Gaussian and correlated returns.

  5. Value-at-Risk forecasts by a spatiotemporal model in Chinese stock market

    NASA Astrophysics Data System (ADS)

    Gong, Pu; Weng, Yingliang

    2016-01-01

    This paper generalizes a recently proposed spatial autoregressive model and introduces a spatiotemporal model for forecasting stock returns. We support the view that stock returns are affected not only by the absolute values of factors such as firm size, book-to-market ratio and momentum but also by the relative values of factors like trading volume ranking and market capitalization ranking in each period. This article studies a new method for constructing stocks' reference groups, called the quartile method. Applying the method empirically to the Shanghai Stock Exchange 50 Index, we compare the daily volatility forecasting performance and the out-of-sample forecasting performance of Value-at-Risk (VaR) estimated by different models. The empirical results show that the spatiotemporal model performs surprisingly well in terms of capturing spatial dependences among individual stocks, and it produces more accurate VaR forecasts than the other three models introduced in the previous literature. Moreover, the findings indicate that both allowing for serial correlation in the disturbances and using time-varying spatial weight matrices can greatly improve the predictive accuracy of a spatial autoregressive model.

  6. Evaluating the RiskMetrics methodology in measuring volatility and Value-at-Risk in financial markets

    NASA Astrophysics Data System (ADS)

    Pafka, Szilárd; Kondor, Imre

    2001-10-01

    We analyze the performance of RiskMetrics, a widely used methodology for measuring market risk. Based on the assumption of normally distributed returns, the RiskMetrics model completely ignores the presence of fat tails in the distribution function, which is an important feature of financial data. Nevertheless, it was commonly found that RiskMetrics performs satisfactorily, and therefore the technique has become widely used in the financial industry. We find, however, that the success of RiskMetrics is an artifact of the choice of the risk measure. First, the outstanding performance of volatility estimates is basically due to the choice of a very short (one-period-ahead) forecasting horizon. Second, the satisfactory performance in obtaining Value-at-Risk by simply multiplying volatility by a constant factor is mainly due to the choice of the particular significance level.
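The mechanics the abstract critiques, a one-period-ahead EWMA volatility forecast multiplied by a constant quantile factor, can be sketched as follows. The decay factor 0.94 is RiskMetrics' published choice for daily data; seeding the recursion with the first squared return is an assumption of this sketch.

```python
def ewma_variance(returns, lam=0.94):
    """One-step-ahead EWMA variance forecasts:
    sigma2_t = lam * sigma2_{t-1} + (1 - lam) * r_{t-1}**2."""
    sigma2 = returns[0] ** 2  # seed with the first squared return (assumption)
    forecasts = []
    for r in returns:
        sigma2 = lam * sigma2 + (1 - lam) * r ** 2
        forecasts.append(sigma2)
    return forecasts

def riskmetrics_var(sigma2, z=1.645):
    """95% VaR as a constant multiple (the normal 5% quantile) of forecast
    volatility; this is the step the abstract identifies as fragile at
    other significance levels."""
    return z * sigma2 ** 0.5
```

Because the multiplier z is fixed by the normality assumption, the fat tails the abstract mentions only surface at more extreme quantiles, where a single constant no longer fits.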

  7. Modelling climate change impacts on and adaptation strategies for agriculture in Sardinia and Tunisia using AquaCrop and value-at-risk.

    PubMed

    Bird, David Neil; Benabdallah, Sihem; Gouda, Nadine; Hummel, Franz; Koeberl, Judith; La Jeunesse, Isabelle; Meyer, Swen; Prettenthaler, Franz; Soddu, Antonino; Woess-Gallasch, Susanne

    2016-02-01

    In Europe, there is concern that climate change will cause significant impacts around the Mediterranean. The goals of this study are to quantify the economic risk to crop production, to demonstrate the variability of yield by soil texture and climate model, and to investigate possible adaptation strategies. In the Rio Mannu di San Sperate watershed, located in Sardinia (Italy), we investigate production of wheat, a rainfed crop. In the Chiba watershed, located in Cap Bon (Tunisia), we analyze irrigated tomato production. We find, using the FAO model AquaCrop, that crop production will decrease significantly in a future climate (2040-2070) as compared to the present without adaptation measures. Using "value-at-risk", we show that production should be viewed in a statistical manner. Wheat yields in Sardinia are modelled to decrease by 64% on clay loams, and to increase by 8% and 26% respectively on sandy loams and sandy clay loams. Assuming constant irrigation, tomatoes sown in August in Cap Bon are modelled to have a 45% chance of crop failure on loamy sands; a 39% decrease in yields on sandy clay loams; and a 12% increase in yields on sandy loams. For tomatoes sown in March, sandy clay loams will fail 81% of the time; on loamy sands the crop yields will be 63% less, while on sandy loams the yield will increase by 12%. However, if one assumes 10% less water available for irrigation, then tomatoes sown in March are not viable. Some adaptation strategies will be able to counteract the modelled crop losses. Increasing the amount of irrigation is one strategy; however, this may not be sustainable. Changes in agricultural management, such as changing the planting date of wheat to coincide with changing rainfall patterns in Sardinia or mulching of tomatoes in Tunisia, can be effective at reducing crop losses. PMID:26187862

  8. How to estimate the Value at Risk under incomplete information

    NASA Astrophysics Data System (ADS)

    de Schepper, Ann; Heijnen, Bart

    2010-03-01

    A key problem in financial and actuarial research, and particularly in the field of risk management, is the choice of models so as to avoid systematic biases in the measurement of risk. An alternative consists of relaxing the assumption that the probability distribution is completely known, leading to interval estimates instead of point estimates. In the present contribution, we show how this is possible for the Value at Risk, by fixing only a small number of parameters of the underlying probability distribution. We start by deriving bounds on tail probabilities, and we show how a conversion leads to bounds for the Value at Risk. It will turn out that with a maximum of three given parameters, the best estimates are always realized in the case of a unimodal random variable for which two moments and the mode are given. It will also be shown that a lognormal model results in estimates for the Value at Risk that are much closer to the upper bound than to the lower bound.

  9. Multifractality and value-at-risk forecasting of exchange rates

    NASA Astrophysics Data System (ADS)

    Batten, Jonathan A.; Kinateder, Harald; Wagner, Niklas

    2014-05-01

    This paper addresses market risk prediction for high frequency foreign exchange rates under nonlinear risk scaling behaviour. We use a modified version of the multifractal model of asset returns (MMAR) where trading time is represented by the series of volume ticks. Our dataset consists of 138,418 5-min round-the-clock observations of EUR/USD spot quotes and trading ticks during the period January 5, 2006 to December 31, 2007. Considering fat-tails, long-range dependence as well as scale inconsistency with the MMAR, we derive out-of-sample value-at-risk (VaR) forecasts and compare our approach to historical simulation as well as a benchmark GARCH(1,1) location-scale VaR model. Our findings underline that the multifractal properties in EUR/USD returns in fact have notable risk management implications. The MMAR approach is a parsimonious model which produces admissible VaR forecasts at the 12-h forecast horizon. For the daily horizon, the MMAR outperforms both alternatives based on conditional as well as unconditional coverage statistics.
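Of the two benchmarks mentioned, historical simulation is simple enough to sketch: the VaR forecast is just a negated empirical quantile of past returns. The index convention below (rounding down, a conservative choice) is an assumption; implementations differ in how they interpolate between order statistics.

```python
def historical_var(returns, alpha):
    """Historical-simulation VaR: negate the empirical alpha-quantile
    of the return sample, so the result is a positive loss figure."""
    ordered = sorted(returns)
    k = max(0, int(alpha * len(ordered)) - 1)  # conservative index (assumption)
    return -ordered[k]
```

Historical simulation carries no scaling model at all, which is why the abstract contrasts it with both the GARCH(1,1) location-scale benchmark and the multifractal MMAR approach.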

  10. Heavy-tailed value-at-risk analysis for Malaysian stock exchange

    NASA Astrophysics Data System (ADS)

    Chin, Wen Cheong

    2008-07-01

    This article compares power-law value-at-risk (VaR) evaluation with quantile and non-linear time-varying volatility approaches. A simple Pareto distribution is proposed to account for the heavy-tailed property of the empirical distribution of returns. An alternative VaR measurement, the non-parametric quantile estimate, is implemented using an interpolation method. In addition, we also use the well-known two-component ARCH modelling technique under the assumptions of normal and heavy-tailed (Student-t distributed) innovations. Our results show that the VaR predicted under the Pareto distribution is similar to that of the symmetric heavy-tailed long-memory ARCH model. However, it is found that only the Pareto distribution provides a convenient framework for asymmetric properties in both the lower and upper tails.
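Under a simple Pareto tail of the kind the abstract proposes, VaR has a closed form obtained by inverting the survival function. The parameter values in the sketch are illustrative, not fitted to the Malaysian data.

```python
def pareto_var(alpha, x_m, a):
    """VaR for a Pareto loss tail: P(Loss > x) = (x / x_m) ** (-a) for
    x >= x_m. Inverting the survival function at tail probability alpha
    gives VaR_alpha = x_m * alpha ** (-1 / a)."""
    return x_m * alpha ** (-1.0 / a)
```

A heavier tail (smaller a) inflates the high-confidence VaR rapidly, since halving the tail index squares the multiplier alpha ** (-1 / a).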

  11. The social values at risk from sea-level rise

    SciTech Connect

    Graham, Sonia; Barnett, Jon; Fincher, Ruth; Hurlimann, Anna; Mortreux, Colette; Waters, Elissa

    2013-07-15

    Analysis of the risks of sea-level rise favours conventionally measured metrics such as the area of land that may be subsumed, the numbers of properties at risk, and the capital values of assets at risk. Despite this, it is clear that there exist many less material but no less important values at risk from sea-level rise. This paper re-theorises these multifarious social values at risk from sea-level rise, by explaining their diverse nature, and grounding them in the everyday practices of people living in coastal places. It is informed by a review and analysis of research on social values from within the fields of social impact assessment, human geography, psychology, decision analysis, and climate change adaptation. From this we propose that it is the ‘lived values’ of coastal places that are most at risk from sea-level rise. We then offer a framework that groups these lived values into five types: those that are physiological in nature, and those that relate to issues of security, belonging, esteem, and self-actualisation. This framework of lived values at risk from sea-level rise can guide empirical research investigating the social impacts of sea-level rise, as well as the impacts of actions to adapt to sea-level rise. It also offers a basis for identifying the distribution of related social outcomes across populations exposed to sea-level rise or sea-level rise policies.

  12. Portfolio Value-at-Risk with Time-Varying Copula: Evidence from Latin America

    NASA Astrophysics Data System (ADS)

    Ozun, Alper; Cifter, Atilla

    Model risk in the estimation of value-at-risk is a challenging threat for the success of any financial investment. The degree of model risk increases when the estimation process is constructed with a portfolio in emerging markets. The proper model should both provide flexible joint distributions, by splitting the marginals from the dependencies among the financial assets within the portfolio, and also capture the non-linear behaviours and extremes in the returns arising from the special features of emerging markets. In this study, we use a time-varying copula to estimate the value-at-risk of a portfolio comprised of the Bovespa and the IPC Mexico in equal and constant weights. A performance comparison of the copula model with the EWMA portfolio model using the Christoffersen back-test shows that the copula model captures the extremes most successfully. The copula model, by estimating the portfolio value-at-risk with the fewest violations in the back-tests, enables investors to allocate the minimum regulatory capital requirement in accordance with the Basel II Accord.

  13. Empirical application of normal mixture GARCH and value-at-risk estimation

    NASA Astrophysics Data System (ADS)

    Kamaruzzaman, Zetty Ain; Isa, Zaidi

    2014-06-01

    The normal mixture (NM) GARCH model can capture time variation in both conditional skewness and kurtosis. In this paper, we present the general framework of normal mixture GARCH(1,1). An empirical application is presented using Malaysian weekly stock market returns. This paper provides evidence that, for modeling stock market returns, the two-component normal mixture GARCH(1,1) model performs better than the Normal, symmetric and skewed Student's t-GARCH models. This model can quantify the volatility corresponding to stable and crash market circumstances. We also consider Value-at-Risk (VaR) estimation for the normal mixture GARCH model.

  14. Making the business case for process safety using value-at-risk concepts.

    PubMed

    Fang, Jayming S; Ford, David M; Mannan, M Sam

    2004-11-11

    An increasing emphasis on chemical process safety over the last two decades has led to the development and application of powerful risk assessment tools. Hazard analysis and risk evaluation techniques have developed to the point where quantitatively meaningful risks can be calculated for processes and plants. However, the results are typically presented in semi-quantitative "ranked list" or "categorical matrix" formats, which are certainly useful but not optimal for making business decisions. A relatively new technique for performing valuation under uncertainty, value at risk (VaR), has been developed in the financial world. VaR is a method of evaluating the probability of a gain or loss by a complex venture, by examining the stochastic behavior of its components. We believe that combining quantitative risk assessment techniques with VaR concepts will bridge the gap between engineers and scientists who determine process risk and business leaders and policy makers who evaluate, manage, or regulate risk. We present a few basic examples of the application of VaR to hazard analysis in the chemical process industry. PMID:15518960

  15. Value-at-risk estimation with wavelet-based extreme value theory: Evidence from emerging markets

    NASA Astrophysics Data System (ADS)

    Cifter, Atilla

    2011-06-01

    This paper introduces wavelet-based extreme value theory (EVT) for univariate value-at-risk estimation. Wavelets and EVT are combined for volatility forecasting to estimate a hybrid model. In the first stage, wavelets are used as a threshold in the generalized Pareto distribution, and in the second stage, EVT is applied with a wavelet-based threshold. This new model is applied to two major emerging stock markets: the Istanbul Stock Exchange (ISE) and the Budapest Stock Exchange (BUX). The relative performance of wavelet-based EVT is benchmarked against the Riskmetrics-EWMA, ARMA-GARCH, generalized Pareto distribution, and conditional generalized Pareto distribution models. The empirical results show that wavelet-based extreme value theory increases the predictive performance of financial forecasting according to the number of violations and tail-loss tests. The superior forecasting performance of the wavelet-based EVT model is also consistent with Basel II requirements, and this new model can be used by financial institutions as well.

  16. Value at risk estimation with entropy-based wavelet analysis in exchange markets

    NASA Astrophysics Data System (ADS)

    He, Kaijian; Wang, Lijun; Zou, Yingchao; Lai, Kin Keung

    2014-08-01

    In recent years, exchange markets have become increasingly integrated. Fluctuations and risks across different exchange markets exhibit co-moving and complex dynamics. In this paper we propose entropy-based multivariate wavelet approaches to analyze the multiscale characteristics in the multidimensional domain and further improve the reliability of Value at Risk estimation. Wavelet analysis is used to construct an entropy-based multiscale portfolio Value at Risk estimation algorithm that accounts for the multiscale dynamic correlation. The entropy measure, with the error minimization principle, is proposed as the more effective criterion for selecting the best basis when determining the wavelet family and the decomposition level to use. The empirical studies conducted in this paper provide positive evidence of the superior performance of the proposed approach on the closely related Chinese Renminbi and European Euro exchange markets.

  17. Measuring daily Value-at-Risk of SSEC index: A new approach based on multifractal analysis and extreme value theory

    NASA Astrophysics Data System (ADS)

    Wei, Yu; Chen, Wang; Lin, Yu

    2013-05-01

    Recent studies in the econophysics literature reveal that price variability has fractal and multifractal characteristics not only in developed financial markets, but also in emerging markets. Taking high-frequency intraday quotes of the Shanghai Stock Exchange Component (SSEC) Index as an example, this paper proposes a new method to measure daily Value-at-Risk (VaR) by combining the newly introduced multifractal volatility (MFV) model and the extreme value theory (EVT) method. Two VaR backtesting techniques are then employed to compare the performance of the model with that of a group of linear and nonlinear generalized autoregressive conditional heteroskedasticity (GARCH) models. The empirical results show the multifractal nature of price volatility in the Chinese stock market. VaR measures based on the multifractal volatility model and the EVT method outperform many GARCH-type models at high risk levels.
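The EVT side of such hybrid approaches typically relies on the peaks-over-threshold quantile formula. The sketch below assumes the generalized Pareto parameters have already been fitted to the exceedances; the multifractal volatility filtering step of the paper is not reproduced, and all numeric values are illustrative.

```python
def gpd_var(alpha, u, sigma, xi, n, n_exceed):
    """Peaks-over-threshold VaR: with threshold u and a generalized Pareto
    distribution (scale sigma, shape xi != 0) fitted to the n_exceed losses
    above u in a sample of n observations,
    VaR_alpha = u + (sigma / xi) * (((n / n_exceed) * alpha) ** (-xi) - 1)."""
    return u + (sigma / xi) * (((n / n_exceed) * alpha) ** (-xi) - 1.0)
```

The formula only extrapolates beyond the threshold u, which is why EVT-based VaR tends to shine at the high-risk levels the abstract highlights.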

  18. On Value at Risk for Foreign Exchange Rates --- the Copula Approach

    NASA Astrophysics Data System (ADS)

    Jaworski, P.

    2006-11-01

    The aim of this paper is to determine the Value at Risk (VaR) of a portfolio consisting of long positions in foreign currencies on an emerging market. Based on empirical data, we restrict ourselves to the case when the tail parts of the distributions of logarithmic returns of these assets follow power laws and the lower tail of the associated copula C follows a power law of degree 1. We illustrate the practical usefulness of this approach by an analysis of the exchange rates of EUR and CHF on the Polish forex market.

  19. 'Weather Value at Risk': A uniform approach to describe and compare sectoral income risks from climate change.

    PubMed

    Prettenthaler, Franz; Köberl, Judith; Bird, David Neil

    2016-02-01

    We extend the concept of 'Weather Value at Risk' - initially introduced to measure the economic risks resulting from current weather fluctuations - to describe and compare sectoral income risks from climate change. This is illustrated using the examples of wheat cultivation and summer tourism in (parts of) Sardinia. Based on climate scenario data from four different regional climate models we study the change in the risk of weather-related income losses between some reference (1971-2000) and some future (2041-2070) period. Results from both examples suggest an increase in weather-related risks of income losses due to climate change, which is somewhat more pronounced for summer tourism. Nevertheless, income from wheat cultivation is at much higher risk of weather-related losses than income from summer tourism, both under reference and future climatic conditions. A weather-induced loss of at least 5% - compared to the income associated with average reference weather conditions - shows a 40% (80%) probability of occurrence in the case of wheat cultivation, but only a 0.4% (16%) probability of occurrence in the case of summer tourism, given reference (future) climatic conditions. Whereas in the agricultural example increases in the weather-related income risks mainly result from an overall decrease in average wheat yields, the heightened risk in the tourism example stems mostly from a change in the weather-induced variability of tourism incomes. Because the extended 'Weather Value at Risk' concept can capture impacts from changes in both the mean and the variability of the climate, it is a powerful tool for presenting and disseminating the results of climate change impact assessments. Due to its flexibility, the concept can be applied to any economic sector and therefore provides a valuable tool for cross-sectoral comparisons of climate change impacts, but also for the assessment of the costs and benefits of adaptation measures. PMID:25929802

  20. Stochastic dynamic programming (SDP) with a conditional value-at-risk (CVaR) criterion for management of storm-water

    NASA Astrophysics Data System (ADS)

    Piantadosi, J.; Metcalfe, A. V.; Howlett, P. G.

    2008-01-01

    We present a new approach to stochastic dynamic programming (SDP) to determine a policy for management of urban storm-water that minimises conditional value-at-risk (CVaR). Storm-water flows into a large capture dam and is subsequently pumped to a holding dam. Water is then supplied directly to users or stored in an underground aquifer. We assume random inflow and constant demand. SDP is used to find a pumping policy that minimises CVaR, with a penalty for increased risk of environmental damage, and a pumping policy that maximises expected monetary value (EMV). We use both value iteration and policy improvement to show that the optimal policy under CVaR differs from the optimal policy under EMV.

  1. The EMEFS model evaluation

    SciTech Connect

    Barchet, W.R.; Dennis, R.L.; Seilkop, S.K.; Banic, C.M.; Davies, D.; Hoff, R.M.; Macdonald, A.M.; Mickle, R.E.; Padro, J.; Puckett, K.; Byun, D.; McHenry, J.N.

    1991-12-01

    The binational Eulerian Model Evaluation Field Study (EMEFS) consisted of several coordinated data gathering and model evaluation activities. In the EMEFS, data were collected by five air and precipitation monitoring networks between June 1988 and June 1990. Model evaluation is continuing. This interim report summarizes the progress made in the evaluation of the Regional Acid Deposition Model (RADM) and the Acid Deposition and Oxidant Model (ADOM) through the December 1990 completion of a State of Science and Technology report on model evaluation for the National Acid Precipitation Assessment Program (NAPAP). Because various assessment applications of RADM had to be evaluated for NAPAP, the report emphasizes the RADM component of the evaluation. A protocol for the evaluation was developed by the model evaluation team and defined the observed and predicted values to be used and the methods by which the observed and predicted values were to be compared. Scatter plots and time series of predicted and observed values were used to present the comparisons graphically. Difference statistics and correlations were used to quantify model performance. 64 refs., 34 figs., 6 tabs.

  2. ATMOSPHERIC MODEL EVALUATION

    EPA Science Inventory

    Evaluation of the Models-3/CMAQ is conducted in this task. The focus is on evaluation of ozone, other photochemical oxidants, and fine particles using data from both routine monitoring networks and special, intensive field programs. Two types of evaluations are performed here: pe...

  3. Integrated Assessment Model Evaluation

    NASA Astrophysics Data System (ADS)

    Smith, S. J.; Clarke, L.; Edmonds, J. A.; Weyant, J. P.

    2012-12-01

    Integrated assessment models of climate change (IAMs) are widely used to provide insights into the dynamics of the coupled human and socio-economic system, including emission mitigation analysis and the generation of future emission scenarios. Similar to the climate modeling community, the integrated assessment community has a two-decade history of model inter-comparison, which has served as one of the primary venues for model evaluation and confirmation. While analysis of historical trends in the socio-economic system has long played a key role in diagnostics of future scenarios from IAMs, formal hindcast experiments are just now being contemplated as evaluation exercises. Some initial thoughts on setting up such IAM evaluation experiments are discussed. Socio-economic systems do not follow strict physical laws, which means that evaluation needs to take place in a context, unlike that of physical system models, in which there are few fixed, unchanging relationships. Of course, strict validation of even earth system models is not possible (Oreskes et al. 2004), a fact borne out by the inability of models to constrain the climate sensitivity. Energy-system models have also been grappling with some of the same questions over the last quarter century. For example, one of "the many questions in the energy field that are waiting for answers in the next 20 years" identified by Hans Landsberg in 1985 was "Will the price of oil resume its upward movement?" Of course we are still asking this question today. While, arguably, even fewer constraints apply to socio-economic systems, numerous historical trends and patterns have been identified, although often only in broad terms, that are used to guide the development of model components, parameter ranges, and scenario assumptions. IAM evaluation exercises are expected to provide useful information for interpreting model results and improving model behavior. A key step is the recognition of model boundaries, that is, what is inside

  4. VPPA weld model evaluation

    NASA Technical Reports Server (NTRS)

    Mccutcheon, Kimble D.; Gordon, Stephen S.; Thompson, Paul A.

    1992-01-01

    NASA uses the Variable Polarity Plasma Arc Welding (VPPAW) process extensively for fabrication of Space Shuttle External Tanks. This welding process has been in use at NASA since the late 1970's but the physics of the process have never been satisfactorily modeled and understood. In an attempt to advance the level of understanding of VPPAW, Dr. Arthur C. Nunes, Jr., (NASA) has developed a mathematical model of the process. The work described in this report evaluated and used two versions (level-0 and level-1) of Dr. Nunes' model, and a model derived by the University of Alabama at Huntsville (UAH) from Dr. Nunes' level-1 model. Two series of VPPAW experiments were done, using over 400 different combinations of welding parameters. Observations were made of VPPAW process behavior as a function of specific welding parameter changes. Data from these weld experiments was used to evaluate and suggest improvements to Dr. Nunes' model. Experimental data and correlations with the model were used to develop a multi-variable control algorithm for use with a future VPPAW controller. This algorithm is designed to control weld widths (both on the crown and root of the weld) based upon the weld parameters, base metal properties, and real-time observation of the crown width. The algorithm exhibited accuracy comparable to that of the weld width measurements for both aluminum and mild steel welds.

  5. Evaluating Service Organization Models

    PubMed Central

    TOUATI, NASSERA; PINEAULT, RAYNALD; CHAMPAGNE, FRANÇOIS; DENIS, JEAN-LOUIS; BROUSSELLE, ASTRID; CONTANDRIOPOULOS, ANDRÉ-PIERRE; GENEAU, ROBERT

    2016-01-01

    Based on the example of the evaluation of service organization models, this article shows how a configurational approach overcomes the limits of traditional methods which for the most part have studied the individual components of various models considered independently of one another. These traditional methods have led to results (observed effects) that are difficult to interpret. The configurational approach, in contrast, is based on the hypothesis that effects are associated with a set of internally coherent model features that form various configurations. These configurations, like their effects, are context-dependent. We explore the theoretical basis of the configuration approach in order to emphasize its relevance, and discuss the methodological challenges inherent in the application of this approach through an in-depth analysis of the scientific literature. We also propose methodological solutions to these challenges. We illustrate from an example how a configurational approach has been used to evaluate primary care models. Finally, we begin a discussion on the implications of this new evaluation approach for the scientific and decision-making communities.

  6. Composite Load Model Evaluation

    SciTech Connect

    Lu, Ning; Qiao, Hong

    2007-09-30

    The WECC load modeling task force has dedicated its efforts in the past few years to developing a composite load model that can represent behaviors of different end-user components. The modeling structure of the composite load model is recommended by the WECC load modeling task force. GE Energy has implemented this composite load model with a new function CMPLDW in its power system simulation software package, PSLF. For the last several years, Bonneville Power Administration (BPA) has taken the lead and collaborated with GE Energy to develop the new composite load model. Pacific Northwest National Laboratory (PNNL) and BPA joined forces to evaluate the CMPLDW and test its parameter settings to make sure that (1) the model initializes properly, (2) all the parameter settings are functioning, and (3) the simulation results are as expected. The PNNL effort focused on testing the CMPLDW in a 4-bus system. Exhaustive testing of each parameter setting has been performed to verify that each setting works. This report is a summary of the PNNL testing results and conclusions.

  7. Pragmatic geometric model evaluation

    NASA Astrophysics Data System (ADS)

    Pamer, Robert

    2015-04-01

    Quantification of subsurface model reliability is mathematically and technically demanding, as there are many different sources of uncertainty and some factors can be assessed only subjectively. For many practical applications in industry or risk assessment (e.g. geothermal drilling) a quantitative estimation of possible geometric variations in depth units is preferred over relative numbers because of cost calculations for different scenarios. The talk gives an overview of several factors that affect the geometry of structural subsurface models that are based upon typical geological survey organization (GSO) data like geological maps, borehole data and conceptually driven construction of subsurface elements (e.g. fault network). Within the context of the trans-European project "GeoMol", uncertainty analysis has to be pragmatic, not least because data rights, data policies and modelling software differ between the project partners. In a case study a two-step evaluation methodology for geometric subsurface model uncertainty is being developed. In a first step several models of the same volume of interest have been calculated by omitting successively more and more input data types (seismic constraints, fault network, outcrop data). The positions of the various horizon surfaces are then compared. The procedure is equivalent to comparing data of various levels of detail and therefore structural complexity. This gives a measure of the structural significance of each data set in space, and as a consequence areas of geometric complexity are identified. These areas are usually very data-sensitive; hence geometric variability in between individual data points in these areas is higher than in areas of low structural complexity.
Instead of calculating a multitude of different models by varying some input data or parameters as it is done by Monte-Carlo-simulations, the aim of the second step of the evaluation procedure (which is part of the ongoing work) is to

  8. BioVapor Model Evaluation

    EPA Science Inventory

    General background on modeling and specifics of modeling vapor intrusion are given. Three classical model applications are described and related to the problem of petroleum vapor intrusion. These indicate the need for model calibration and uncertainty analysis. Evaluation of Bi...

  9. Social Program Evaluation: Six Models.

    ERIC Educational Resources Information Center

    New Directions for Program Evaluation, 1980

    1980-01-01

    Representative models of program evaluation are described by their approach to values, and categorized by empirical style: positivism versus humanism. The models are: social process audit; experimental/quasi-experimental research design; goal-free evaluation; systems evaluation; cost-benefit analysis; and accountability program evaluation. (CP)

  10. Evaluation Theory, Models, and Applications

    ERIC Educational Resources Information Center

    Stufflebeam, Daniel L.; Shinkfield, Anthony J.

    2007-01-01

    "Evaluation Theory, Models, and Applications" is designed for evaluators and students who need to develop a commanding knowledge of the evaluation field: its history, theory and standards, models and approaches, procedures, and inclusion of personnel as well as program evaluation. This important book shows how to choose from a growing array of…

  11. Model Program Evaluations. Fact Sheet

    ERIC Educational Resources Information Center

    Arkansas Safe Schools Initiative Division, 2002

    2002-01-01

    There are probably thousands of programs and courses intended to prevent or reduce violence in this nation's schools. Evaluating these many programs has become a problem or goal in itself. There are now many evaluation programs, with many levels of designations, such as model, promising, best practice, exemplary and noteworthy. "Model program" is…

  12. Evaluating Causal Models.

    ERIC Educational Resources Information Center

    Watt, James H., Jr.

    Pointing out that linear causal models can organize the interrelationships of a large number of variables, this paper contends that such models are particularly useful to mass communication research, which must by necessity deal with complex systems of variables. The paper first outlines briefly the philosophical requirements for establishing a…

  13. Advocacy Evaluation: A Model for Internal Evaluation Offices.

    ERIC Educational Resources Information Center

    Sonnichsen, Richard C.

    1988-01-01

    As evaluations are more often implemented by internal staff, internal evaluators must begin to assume decision-making and advocacy tasks. This advocacy evaluation concept is described using the Federal Bureau of Investigation evaluation staff as a model. (TJH)

  14. A Model for Curriculum Evaluation

    ERIC Educational Resources Information Center

    Crane, Peter; Abt, Clark C.

    1969-01-01

    Describes in some detail the Curriculum Evaluation Model, "a technique for calculating the cost-effectiveness of alternative curriculum materials by a detailed breakdown and analysis of their components, quality, and cost. Coverage, appropriateness, motivational effectiveness, and cost are the four major categories in terms of which the…

  15. Sequentially Executed Model Evaluation Framework

    SciTech Connect

    2015-10-20

    Provides a message passing framework between generic input, model and output drivers, and specifies an API for developing such drivers. Also provides batch and real-time controllers which step the model and I/O through the time domain (or other discrete domain), and sample I/O drivers. This is a library framework, and does not, itself, solve any problems or execute any modeling. The SeMe framework aids in development of models which operate on sequential information, such as time-series, where evaluation is based on prior results combined with new data for this iteration. Has applications in quality monitoring, and was developed as part of the CANARY-EDS software, where real-time water quality data is being analyzed for anomalies.
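
    The driver/controller pattern described above can be sketched minimally as follows. The class and method names are hypothetical; the real SeMe API used in CANARY-EDS may differ.

```python
# Hypothetical sketch: generic input, model, and output drivers plus a batch
# controller that steps them through a discrete (here, sample-index) domain.

class ListInputDriver:
    """Input driver yielding one sample per step from a sequence."""
    def __init__(self, samples):
        self._it = iter(samples)
    def read(self):
        return next(self._it, None)

class MovingMeanModel:
    """Model driver whose evaluation combines prior results with new data,
    as in the sequential-information pattern described above."""
    def __init__(self):
        self.n, self.mean = 0, 0.0
    def step(self, x):
        self.n += 1
        self.mean += (x - self.mean) / self.n  # incremental mean update
        return self.mean

class ListOutputDriver:
    """Output driver collecting model results."""
    def __init__(self):
        self.results = []
    def write(self, value):
        self.results.append(value)

def run_batch(inp, model, out):
    """Batch controller: step input -> model -> output until input runs dry."""
    while (x := inp.read()) is not None:
        out.write(model.step(x))
```

    A real-time controller would differ only in pacing the `read` calls against a clock rather than draining the input immediately.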

  16. Sequentially Executed Model Evaluation Framework

    SciTech Connect

    2014-02-14

    Provides a message passing framework between generic input, model and output drivers, and specifies an API for developing such drivers. Also provides batch and real-time controllers which step the model and I/O through the time domain (or other discrete domain), and sample I/O drivers. This is a library framework, and does not, itself, solve any problems or execute any modeling. The SeMe framework aids in development of models which operate on sequential information, such as time-series, where evaluation is based on prior results combined with new data for this iteration. Has applications in quality monitoring, and was developed as part of the CANARY-EDS software, where real-time water quality data is being analyzed for anomalies.

  17. Sequentially Executed Model Evaluation Framework

    Energy Science and Technology Software Center (ESTSC)

    2014-02-14

    Provides a message passing framework between generic input, model and output drivers, and specifies an API for developing such drivers. Also provides batch and real-time controllers which step the model and I/O through the time domain (or other discrete domain), and sample I/O drivers. This is a library framework, and does not, itself, solve any problems or execute any modeling. The SeMe framework aids in development of models which operate on sequential information, such as time-series, where evaluation is based on prior results combined with new data for this iteration. Has applications in quality monitoring, and was developed as part of the CANARY-EDS software, where real-time water quality data is being analyzed for anomalies.

  18. Sequentially Executed Model Evaluation Framework

    Energy Science and Technology Software Center (ESTSC)

    2015-10-20

    Provides a message passing framework between generic input, model and output drivers, and specifies an API for developing such drivers. Also provides batch and real-time controllers which step the model and I/O through the time domain (or other discrete domain), and sample I/O drivers. This is a library framework, and does not, itself, solve any problems or execute any modeling. The SeMe framework aids in development of models which operate on sequential information, such as time-series, where evaluation is based on prior results combined with new data for this iteration. Has applications in quality monitoring, and was developed as part of the CANARY-EDS software, where real-time water quality data is being analyzed for anomalies.

  19. Infrasound Sensor Models and Evaluations

    SciTech Connect

    KROMER,RICHARD P.; MCDONALD,TIMOTHY S.

    2000-07-31

    Sandia National Laboratories has continued to evaluate the performance of infrasound sensors that are candidates for use by the International Monitoring System (IMS) for the Comprehensive Nuclear-Test-Ban Treaty Organization. The performance criteria against which these sensors are assessed are specified in "Operational Manual for Infrasound Monitoring and the International Exchange of Infrasound Data". This presentation includes the results of efforts concerning two of these sensors: (1) Chaparral Physics Model 5; and (2) CEA MB2000. Sandia is working with Chaparral Physics in order to improve the capability of the Model 5 (a prototype sensor) to be calibrated and evaluated. With the assistance of the Scripps Institution of Oceanography, Sandia is also conducting tests to evaluate the performance of the CEA MB2000. Sensor models based on theoretical transfer functions and manufacturer specifications for these two devices have been developed. This presentation will feature the results of coherence-based data analysis of signals from a huddle test, utilizing several sensors of both types, in order to verify the sensor performance.
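
    The coherence-based huddle-test analysis mentioned above rests on a simple idea: co-located sensors recording the same signal should show magnitude-squared coherence near 1, while uncorrelated self-noise drives it toward zero. Below is a generic Welch-style coherence estimate for illustration, not Sandia's actual analysis code.

```python
import numpy as np

def msc(x, y, nperseg=256):
    """Magnitude-squared coherence of x and y via segment-averaged,
    Hann-windowed FFTs (Welch-style cross-spectral estimate)."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    win = np.hanning(nperseg)
    nseg = len(x) // nperseg
    pxx = pyy = pxy = 0
    for k in range(nseg):
        seg = slice(k * nperseg, (k + 1) * nperseg)
        fx = np.fft.rfft(win * x[seg])
        fy = np.fft.rfft(win * y[seg])
        pxx = pxx + np.abs(fx) ** 2          # auto-spectrum of x
        pyy = pyy + np.abs(fy) ** 2          # auto-spectrum of y
        pxy = pxy + fx * np.conj(fy)         # cross-spectrum
    return np.abs(pxy) ** 2 / (pxx * pyy)
```

    Averaging over segments is essential: with a single segment the estimate is identically 1 at every frequency regardless of the signals.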

  20. The Spiral-Interactive Program Evaluation Model.

    ERIC Educational Resources Information Center

    Khaleel, Ibrahim Adamu

    1988-01-01

    Describes the spiral interactive program evaluation model, which is designed to evaluate vocational-technical education programs in secondary schools in Nigeria. Program evaluation is defined; utility oriented and process oriented models for evaluation are described; and internal and external evaluative factors and variables that define each…

  1. Decisionmaking Context Model for Enhancing Evaluation Utilization.

    ERIC Educational Resources Information Center

    Brown, Robert D.; And Others

    1984-01-01

    This paper discusses two models that hold promise for helping evaluators understand and cope with different decision contexts: (1) the conflict Model (Janis and Mann, 1977) and the Social Process Model (Vroom and Yago, 1974). Implications and guidelines for using decisionmaking models in evaluation settings are presented. (BS)

  2. Beyond Evaluation: A Model for Cooperative Evaluation of Internet Resources.

    ERIC Educational Resources Information Center

    Kirkwood, Hal P., Jr.

    1998-01-01

    Presents a status report on Web site evaluation efforts, listing dead, merged, new review, Yahoo! wannabes, subject-specific review, former librarian-managed, and librarian-managed review sites; discusses how sites are evaluated; describes and demonstrates (reviewing company directories) the Marr/Kirkwood evaluation model; and provides an…

  3. Developing Useful Evaluation Capability: Lessons From the Model Evaluation Program.

    ERIC Educational Resources Information Center

    Waller, John D.; And Others

    The assessment of 12 model evaluation systems provides insight and guidance into their development for government managers and evaluators. The eight individual completed grants are documented in a series of case studies, while the synthesis of all the project experiences and results are summarized. There are things that evaluation systems can do…

  4. The EMEFS model evaluation. An interim report

    SciTech Connect

    Barchet, W.R.; Dennis, R.L.; Seilkop, S.K.; Banic, C.M.; Davies, D.; Hoff, R.M.; Macdonald, A.M.; Mickle, R.E.; Padro, J.; Puckett, K.; Byun, D.; McHenry, J.N.; Karamchandani, P.; Venkatram, A.; Fung, C.; Misra, P.K.; Hansen, D.A.; Chang, J.S.

    1991-12-01

    The binational Eulerian Model Evaluation Field Study (EMEFS) consisted of several coordinated data gathering and model evaluation activities. In the EMEFS, data were collected by five air and precipitation monitoring networks between June 1988 and June 1990. Model evaluation is continuing. This interim report summarizes the progress made in the evaluation of the Regional Acid Deposition Model (RADM) and the Acid Deposition and Oxidant Model (ADOM) through the December 1990 completion of a State of Science and Technology report on model evaluation for the National Acid Precipitation Assessment Program (NAPAP). Because various assessment applications of RADM had to be evaluated for NAPAP, the report emphasizes the RADM component of the evaluation. A protocol for the evaluation was developed by the model evaluation team and defined the observed and predicted values to be used and the methods by which the observed and predicted values were to be compared. Scatter plots and time series of predicted and observed values were used to present the comparisons graphically. Difference statistics and correlations were used to quantify model performance. 64 refs., 34 figs., 6 tabs.
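
    The difference statistics and correlations used above to quantify model performance can be sketched generically (this is standard practice, not the EMEFS protocol itself):

```python
import math

def difference_stats(observed, predicted):
    """Mean bias, RMSE, and Pearson correlation for paired
    observed/predicted values."""
    n = len(observed)
    bias = sum(p - o for o, p in zip(observed, predicted)) / n
    rmse = math.sqrt(sum((p - o) ** 2 for o, p in zip(observed, predicted)) / n)
    mo = sum(observed) / n
    mp = sum(predicted) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(observed, predicted)) / n
    so = math.sqrt(sum((o - mo) ** 2 for o in observed) / n)
    sp = math.sqrt(sum((p - mp) ** 2 for p in predicted) / n)
    return bias, rmse, cov / (so * sp)
```

    A constant offset between model and observations shows up entirely in the bias while leaving the correlation at 1, which is why both are reported together.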

  5. Evaluating modeling tools for the EDOS

    NASA Technical Reports Server (NTRS)

    Knoble, Gordon; Mccaleb, Frederick; Aslam, Tanweer; Nester, Paul

    1994-01-01

    The Earth Observing System (EOS) Data and Operations System (EDOS) Project is developing a functional, system performance model to support the system implementation phase of the EDOS which is being designed and built by the Goddard Space Flight Center (GSFC). The EDOS Project will use modeling to meet two key objectives: (1) manage system design impacts introduced by unplanned changes in mission requirements; and (2) evaluate evolutionary technology insertions throughout the development of the EDOS. To select a suitable modeling tool, the EDOS modeling team developed an approach for evaluating modeling tools and languages by deriving evaluation criteria from both the EDOS modeling requirements and the development plan. Essential and optional features for an appropriate modeling tool were identified and compared with known capabilities of several modeling tools. Vendors were also provided the opportunity to model a representative EDOS processing function to demonstrate the applicability of their modeling tool to the EDOS modeling requirements. This paper emphasizes the importance of using a well defined approach for evaluating tools to model complex systems like the EDOS. The results of this evaluation study do not in any way signify the superiority of any one modeling tool since the results will vary with the specific modeling requirements of each project.

  6. A Model for Administrative Evaluation by Subordinates.

    ERIC Educational Resources Information Center

    Budig, Jeanne E.

    Under the administrator evaluation program adopted at Vincennes University, all faculty and professional staff are invited to evaluate each administrator above them in the chain of command. Originally based on the Purdue University "cafeteria" system, this evaluation model has been used biannually for 10 years. In an effort to simplify the system,…

  7. Model Performance Evaluation and Scenario Analysis (MPESA) Tutorial

    EPA Science Inventory

    This tool consists of two parts: model performance evaluation and scenario analysis (MPESA). The model performance evaluation consists of two components: model performance evaluation metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit m...

  8. THE ATMOSPHERIC MODEL EVALUATION TOOL (AMET): METEOROLOGY MODULE

    EPA Science Inventory

    An Atmospheric Model Evaluation Tool (AMET), composed of meteorological and air quality components, is being developed to examine the error and uncertainty in the model simulations. AMET matches observations with the corresponding model-estimated values in space and time, and the...

  9. Black Model Appearance and Product Evaluations.

    ERIC Educational Resources Information Center

    Kerin, Roger A.

    1979-01-01

    Examines a study of how human models affect the impression conveyed by an advertisement, particularly the effect of a Black model's physical characteristics on product evaluations among Black and White females.Results show that the physical appearance of the model influenced impressions of product quality and suitability for personal use. (JMF)

  10. Comprehensive system models: Strategies for evaluation

    NASA Technical Reports Server (NTRS)

    Field, Christopher; Kutzbach, John E.; Ramanathan, V.; Maccracken, Michael C.

    1992-01-01

    The task of evaluating comprehensive earth system models is vast, involving validation of every model component at every scale of organization, as well as tests of all the individual linkages. Even the most detailed evaluation of each of the component processes and the individual links among them should not, however, engender confidence in the performance of the whole. The integrated earth system is so rich with complex feedback loops, often involving components of the atmosphere, oceans, biosphere, and cryosphere, that it is certain to exhibit emergent properties very difficult to predict from the perspective of a narrow focus on any individual component of the system. Therefore, a substantial share of the task of evaluating comprehensive earth system models must reside at the level of whole system evaluations. Since complete, integrated atmosphere/ocean/biosphere/hydrology models are not yet operational, questions of evaluation must be addressed at the level of the kinds of earth system processes that the models should be competent to simulate, rather than at the level of specific performance criteria. Here, we have tried to identify examples of earth system processes that are difficult to simulate with existing models and that involve a rich enough suite of feedbacks that they are unlikely to be satisfactorily described by highly simplified or toy models. Our purpose is not to specify a checklist of evaluation criteria but to introduce characteristics of the earth system that may present useful opportunities for model testing and, of course, improvement.

  11. Evaluation of Galactic Cosmic Ray Models

    NASA Technical Reports Server (NTRS)

    Adams, James H., Jr.; Heiblim, Samuel; Malott, Christopher

    2009-01-01

    Models of the galactic cosmic ray spectra have been tested by comparing their predictions to an evaluated database containing more than 380 measured cosmic ray spectra extending from 1960 to the present.

  12. Outcomes Evaluation: A Model for the Future.

    ERIC Educational Resources Information Center

    Blasi, John F.; Davis, Barbara S.

    1986-01-01

    Examines issues and problems related to the measurement of community college outcomes in relation to mission and goals. Presents a model for outcomes evaluation at the community college which derives from the mission statement and provides evaluative comment and comparison with institutional and national norms. (DMM)

  13. Evaluation Model for Career Programs. Final Report.

    ERIC Educational Resources Information Center

    Byerly, Richard L.; And Others

    A study was conducted to provide and test an evaluative model that could be utilized in providing curricular evaluation of the various career programs. Two career fields, dental assistant and auto mechanic, were chosen for study. A questionnaire based upon the actual job performance was completed by six groups connected with the auto mechanics and…

  14. SAPHIRE models and software for ASP evaluations

    SciTech Connect

    Sattison, M.B.; Schroeder, J.A.; Russell, K.D.

    1995-04-01

    The Idaho National Engineering Laboratory (INEL) over the past year has created 75 plant-specific Accident Sequence Precursor (ASP) models using the SAPHIRE suite of PRA codes. Along with the new models, the INEL has also developed a new module for SAPHIRE which is tailored specifically to the unique needs of conditional core damage probability (CCDP) evaluations. These models and software will be the next generation of risk tools for the evaluation of accident precursors by both NRR and AEOD. This paper presents an overview of the models and software. Key characteristics include: (1) classification of the plant models according to plant response with a unique set of event trees for each plant class, (2) plant-specific fault trees using supercomponents, (3) generation and retention of all system and sequence cutsets, (4) full flexibility in modifying logic, regenerating cutsets, and requantifying results, and (5) user interface for streamlined evaluation of ASP events.

  15. Rock mechanics models evaluation report. [Contains glossary

    SciTech Connect

    Not Available

    1987-08-01

    This report documents the evaluation of the thermal and thermomechanical models and codes for repository subsurface design and for design constraint analysis. The evaluation was based on a survey of the thermal and thermomechanical codes and models that are applicable to subsurface design, followed by a Kepner-Tregoe (KT) structured decision analysis of the codes and models. The primary recommendations of the analysis are that the DOT code be used for two-dimensional thermal analysis and that the STEALTH and HEATING 5/6 codes be used for three-dimensional and complicated two-dimensional thermal analysis. STEALTH and SPECTROM 32 are recommended for thermomechanical analyses. The other evaluated codes should be considered for use in certain applications. A separate review of salt creep models indicate that the commonly used exponential time law model is appropriate for use in repository design studies. 38 refs., 1 fig., 7 tabs.

  16. EPA EXPOSURE MODELS LIBRARY AND INTEGRATED MODEL EVALUATION SYSTEM

    EPA Science Inventory

    The third edition of the U.S. Environmental Protection Agencys (EPA) EML/IMES (Exposure Models Library and Integrated Model Evaluation System) on CD-ROM is now available. The purpose of the disc is to provide a compact and efficient means to distribute exposure models, documentat...

  17. Evaluation of constitutive models for crushed salt

    SciTech Connect

    Callahan, G.D.; Loken, M.C.; Hurtado, L.D.; Hansen, F.D.

    1996-05-01

    Three constitutive models are recommended as candidates for describing the deformation of crushed salt. These models are generalized to three-dimensional states of stress to include the effects of mean and deviatoric stress and modified to include effects of temperature, grain size, and moisture content. A database including hydrostatic consolidation and shear consolidation tests conducted on Waste Isolation Pilot Plant (WIPP) and southeastern New Mexico salt is used to determine material parameters for the models. To evaluate the capability of the models, parameter values obtained from fitting the complete database are used to predict the individual tests. Finite element calculations of a WIPP shaft with emplaced crushed salt demonstrate the model predictions.

  18. Evaluation of a lake whitefish bioenergetics model

    USGS Publications Warehouse

    Madenjian, Charles P.; O'Connor, Daniel V.; Pothoven, Steven A.; Schneeberger, Philip J.; Rediske, Richard R.; O'Keefe, James P.; Bergstedt, Roger A.; Argyle, Ray L.; Brandt, Stephen B.

    2006-01-01

    We evaluated the Wisconsin bioenergetics model for lake whitefish Coregonus clupeaformis in the laboratory and in the field. For the laboratory evaluation, lake whitefish were fed rainbow smelt Osmerus mordax in four laboratory tanks during a 133-d experiment. Based on a comparison of bioenergetics model predictions of lake whitefish food consumption and growth with observed consumption and growth, we concluded that the bioenergetics model furnished significantly biased estimates of both food consumption and growth. On average, the model overestimated consumption by 61% and underestimated growth by 16%. The source of the bias was probably an overestimation of the respiration rate. We therefore adjusted the respiration component of the bioenergetics model to obtain a good fit of the model to the observed consumption and growth in our laboratory tanks. Based on the adjusted model, predictions of food consumption over the 133-d period fell within 5% of observed consumption in three of the four tanks and within 9% of observed consumption in the remaining tank. We used polychlorinated biphenyls (PCBs) as a tracer to evaluate model performance in the field. Based on our laboratory experiment, the efficiency with which lake whitefish retained PCBs from their food (ρ) was estimated at 0.45. We applied the bioenergetics model to Lake Michigan lake whitefish and then used PCB determinations of both lake whitefish and their prey from Lake Michigan to estimate ρ in the field. Application of the original model to Lake Michigan lake whitefish yielded a field ρ estimate of 0.28, implying that the original formulation of the model overestimated consumption in Lake Michigan by 61%. Application of the bioenergetics model with the adjusted respiration component resulted in a field ρ estimate of 0.56, implying that this revised model underestimated consumption by 20%.
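
    The link between the retention-efficiency estimates and the consumption biases quoted above can be reproduced with a little arithmetic, assuming (as a simplification of the tracer logic) that estimated consumption is inversely proportional to the apparent retention efficiency at a fixed observed PCB burden:

```python
def implied_consumption_bias(rho_lab, rho_field):
    """Signed percentage consumption bias implied by a field retention
    estimate rho_field relative to the laboratory value rho_lab: a field
    estimate below the lab value implies overestimated consumption,
    above it underestimated consumption."""
    return 100.0 * (rho_lab / rho_field - 1.0)
```

    Plugging in the abstract's values: 0.45 vs. 0.28 gives roughly +61% (overestimation by the original model), and 0.45 vs. 0.56 gives roughly -20% (underestimation by the revised model), matching the figures quoted.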

  19. The Discrepancy Evaluation Model. I. Basic Tenets of the Model.

    ERIC Educational Resources Information Center

    Steinmetz, Andres

    1976-01-01

    The basic principles of the discrepancy evaluation model (DEM), developed by Malcolm Provus, are presented. The three concepts which are essential to DEM are defined: (1) the standard is a description of how something should be; (2) performance measures are used to find out the actual characteristics of the object being evaluated; and (3) the…

  20. Saphire models and software for ASP evaluations

    SciTech Connect

    Sattison, M.B.

    1997-02-01

    The Idaho National Engineering Laboratory (INEL) over the past three years has created 75 plant-specific Accident Sequence Precursor (ASP) models using the SAPHIRE suite of PRA codes. Along with the new models, the INEL has also developed a new module for SAPHIRE which is tailored specifically to the unique needs of ASP evaluations. These models and software will be the next generation of risk tools for the evaluation of accident precursors by both the U.S. Nuclear Regulatory Commission's (NRC's) Office of Nuclear Reactor Regulation (NRR) and the Office for Analysis and Evaluation of Operational Data (AEOD). This paper presents an overview of the models and software. Key characteristics include: (1) classification of the plant models according to plant response with a unique set of event trees for each plant class, (2) plant-specific fault trees using supercomponents, (3) generation and retention of all system and sequence cutsets, (4) full flexibility in modifying logic, regenerating cutsets, and requantifying results, and (5) user interface for streamlined evaluation of ASP events. Future plans for the ASP models are also presented.

  1. Multi-criteria evaluation of hydrological models

    NASA Astrophysics Data System (ADS)

    Rakovec, Oldrich; Clark, Martyn; Weerts, Albrecht; Hill, Mary; Teuling, Ryan; Uijlenhoet, Remko

    2013-04-01

    Over the last few years, there has been a tendency in the hydrological community to move from simple conceptual models towards more complex, physically/process-based hydrological models. This is because conceptual models often fail to simulate the dynamics of the observations. However, there is little agreement on how much complexity needs to be considered within the complex process-based models. One way to proceed is to improve understanding of what is important and unimportant in the models considered. The aim of this ongoing study is to evaluate structural model adequacy using alternative conceptual and process-based models of hydrological systems, with an emphasis on understanding how model complexity relates to observed hydrological processes. Some of the models require considerable execution time, and computationally frugal sensitivity analysis, model calibration and uncertainty quantification methods are well-suited to providing important insights for models with lengthy execution times. The current experiment evaluates two versions of the Framework for Understanding Structural Errors (FUSE), which both enable running model inter-comparison experiments. One supports computationally efficient conceptual models, and the second supports more process-based models that tend to have longer execution times. The conceptual FUSE combines components of 4 existing conceptual hydrological models. The process-based framework consists of different forms of Richards' equation, numerical solutions, groundwater parameterizations and hydraulic conductivity distributions. The hydrological analysis of the model processes has evolved from focusing only on simulated runoff (final model output), to also including other criteria such as soil moisture and groundwater levels. Parameter importance and associated structural importance are evaluated using different types of sensitivity analyses techniques, making use of both robust global methods (e.g. Sobol') as well as several
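
    The Sobol' sensitivity analysis mentioned above can be illustrated with a compact Monte Carlo estimator of first-order indices (Saltelli-style sampling). This is a generic toy on the unit hypercube, not the FUSE experiment's setup:

```python
import random

def sobol_first_order(f, n_vars, n_samples=10000, seed=1):
    """First-order Sobol' indices of f on the unit hypercube, estimated with
    two independent sample matrices A and B and the 'pick-freeze' matrices
    AB_i (column i of A replaced by column i of B)."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(n_vars)] for _ in range(n_samples)]
    B = [[rng.random() for _ in range(n_vars)] for _ in range(n_samples)]
    fA = [f(a) for a in A]
    fB = [f(b) for b in B]
    mean = sum(fA) / n_samples
    var = sum((y - mean) ** 2 for y in fA) / n_samples  # total output variance
    indices = []
    for i in range(n_vars):
        fABi = [f(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        s = sum(fb * (fab - fa)
                for fb, fab, fa in zip(fB, fABi, fA)) / n_samples
        indices.append(s / var)
    return indices
```

    For an additive function such as f(x) = 2*x0 + x1, the variance contributions are 4/12 and 1/12, so the indices should come out near 0.8 and 0.2.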

  2. Evaluation of trends in wheat yield models

    NASA Technical Reports Server (NTRS)

    Ferguson, M. C.

    1982-01-01

    Trend terms in models for wheat yield in the U.S. Great Plains for the years 1932 to 1976 are evaluated. The subset of meteorological variables yielding the largest adjusted R² is selected using the method of leaps and bounds. Latent root regression is used to eliminate multicollinearities, and generalized ridge regression is used to introduce bias to provide stability in the data matrix. The regression model used provides for two trends in each of two models: a dependent model in which the trend line is piece-wise continuous, and an independent model in which the trend line is discontinuous at the year of the slope change. It was found that the trend lines best describing the wheat yields consisted of combinations of increasing, decreasing, and constant trend: four combinations for the dependent model and seven for the independent model.
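
    The "dependent" (piecewise-continuous) trend model can be sketched as an ordinary least-squares fit with a hinge term at the year of slope change. The breakpoint and data in the usage below are illustrative, not the study's:

```python
import numpy as np

def fit_piecewise_trend(years, yields, break_year):
    """Least-squares fit of a piecewise-continuous trend:
    yield = intercept + slope_before * t + slope_change * max(0, t - break_year),
    so the fitted line is continuous at break_year with slope
    slope_before + slope_change afterwards."""
    t = np.asarray(years, float)
    X = np.column_stack([np.ones_like(t), t, np.maximum(0.0, t - break_year)])
    coef, *_ = np.linalg.lstsq(X, np.asarray(yields, float), rcond=None)
    return coef  # [intercept, slope_before, slope_change]
```

    The independent (discontinuous) variant would add a step indicator column `(t > break_year)` so the level, not just the slope, can jump at the breakpoint.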

  3. Dynamic Multicriteria Evaluation of Conceptual Hydrological Models

    NASA Astrophysics Data System (ADS)

    de Vos, N. J.; Rientjes, T. H.; Fenicia, F.; Gupta, H. V.

    2007-12-01

    Accurate and precise forecasts of river streamflow are crucial for successful water resources management, especially under the threat of hydrological extremes such as floods and droughts. Conceptual rainfall-runoff models are the most popular approach in flood forecasting. However, the calibration and evaluation of such models is often oversimplified by the use of performance statistics that largely ignore the dynamic character of a watershed system. This research aims to find novel ways of model evaluation by identifying periods of hydrological similarity and customizing the evaluation within each period using multiple criteria. A dynamic approach to hydrological model identification, calibration and testing can be realized by applying clustering algorithms (e.g., the Self-Organizing Map or the Fuzzy C-means algorithm) to hydrological data. These algorithms identify clusters in the data that represent periods of hydrological similarity, so that dynamic catchment behavior can be simplified within each identified cluster. Although clustering requires a number of subjective choices, it can yield new insights into the hydrological functioning of a catchment. Finally, separate multi-criteria calibration and evaluation is performed for each of the clusters. Such an evaluation procedure proves to be reliable and gives much-needed feedback on exactly where certain model structures fail. Several clustering algorithms were tested on data sets from meso-scale and large-scale catchments. The results show that the clustering algorithms define categories that reflect hydrological process understanding: dry/wet seasons, rising/falling hydrograph limbs, precipitation-driven/non-driven periods, etc. The results of the various clustering algorithms are compared and validated using expert knowledge. Calibration results on a conceptual hydrological model show that the common practice of single-criterion calibration over the complete time series fails to perform…
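
The period-identification step can be illustrated with plain k-means on a synthetic hydrograph; the study itself uses Self-Organizing Maps and Fuzzy C-means, and the data here are invented:

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    # plain k-means as a simple stand-in for SOM / Fuzzy C-means
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

# synthetic hydrograph: dry baseflow period followed by a wet, event-driven one
t = np.arange(200)
flow = np.where(t < 100, 1.0, 10.0)
flow = flow + 0.1 * np.random.default_rng(2).standard_normal(200)
# cluster on flow level and its rate of change, as a crude similarity measure
X = np.column_stack([flow, np.gradient(flow)])
labels, centers = kmeans(X, k=2)
# dry and wet periods fall into different clusters; calibration could then
# be performed separately within each cluster
```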

  4. Evaluating network models: A likelihood analysis

    NASA Astrophysics Data System (ADS)

    Wang, Wen-Qiang; Zhang, Qian-Ming; Zhou, Tao

    2012-04-01

    Many models have been put forward to mimic the evolution of real networked systems. A well-accepted way to judge their validity is to compare the modeling results with real networks with respect to several structural features. Even for a specific real network, however, we cannot fairly evaluate the goodness of different models, since there are too many structural features and no criterion for selecting and weighting them. Motivated by studies on link prediction algorithms, we propose a unified method to evaluate network models by comparing the likelihoods of the currently observed network under the different models, with the assumption that the higher the likelihood, the more accurate the model. We test our method on the real Internet at the Autonomous System (AS) level, and the results suggest that the Generalized Linear Preferential (GLP) model outperforms the Tel Aviv Network Generator (Tang), while both models are better than the Barabási-Albert (BA) and Erdős-Rényi (ER) models. Our method can further be applied to determine optimal parameter values, namely those that maximize the likelihood. The experiment indicates that the parameters obtained by our method capture the characteristics of newly added nodes and links in the AS-level Internet better than the original methods in the literature.
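
As a minimal illustration of the likelihood idea (not the paper's actual computation, which scores growth models such as BA and GLP), the Erdős-Rényi G(n, p) likelihood of an observed graph has closed form, and the maximum-likelihood p is simply the observed edge density:

```python
import numpy as np

def er_loglik(adj, p):
    # log-likelihood of an undirected simple graph under G(n, p):
    # each of the n(n-1)/2 possible edges is present independently with prob p
    n = adj.shape[0]
    pairs = n * (n - 1) // 2
    m = int(adj[np.triu_indices(n, 1)].sum())
    return m * np.log(p) + (pairs - m) * np.log(1.0 - p)

rng = np.random.default_rng(0)
n, p_true = 50, 0.3
upper = np.triu(rng.random((n, n)) < p_true, 1)
adj = (upper | upper.T).astype(int)

m = adj.sum() // 2
p_hat = m / (n * (n - 1) / 2)     # ML estimate of the density parameter
# the parameterization closest to the generating process earns the highest
# likelihood, which is the selection rule the paper proposes
```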

  5. PREFACE SPECIAL ISSUE ON MODEL EVALUATION: EVALUATION OF URBAN AND REGIONAL EULERIAN AIR QUALITY MODELS

    EPA Science Inventory

    The "Preface to the Special Edition on Model Evaluation: Evaluation of Urban and Regional Eulerian Air Quality Models" is a brief introduction to the papers included in a special issue of Atmospheric Environment. The Preface provides a background for the papers, which have thei...

  6. Evaluation of a habitat suitability index model

    USGS Publications Warehouse

    Farmer, A.H.; Cade, B.S.; Stauffer, D.F.

    2002-01-01

    We assisted with the development of a model of maternity habitat for the Indiana bat (Myotis sodalis), for use in assessing projects that potentially impact this endangered species. We started with an existing model, modified it in a workshop, and evaluated the revised model using data previously collected by others. Our analyses showed that higher indices of habitat suitability were associated with sites where Indiana bats were present, and thus the model may be useful for identifying suitable habitat. The utility of the model, however, rested on a single component: the density of suitable roost trees. The percentage of landscape in forest did not allow differentiation between sites occupied and not occupied by Indiana bats. Moreover, in spite of a general opinion among workshop participants that bodies of water were highly productive feeding areas and that a diversity of feeding habitats was optimal, we found no evidence to support either hypothesis.

  7. Performance Evaluation of Dense Gas Dispersion Models.

    NASA Astrophysics Data System (ADS)

    Touma, Jawad S.; Cox, William M.; Thistle, Harold; Zapert, James G.

    1995-03-01

    This paper summarizes the results of a study to evaluate the performance of seven dense gas dispersion models using data from three field experiments. Two models (DEGADIS and SLAB) are in the public domain and the other five (AIRTOX, CHARM, FOCUS, SAFEMODE, and TRACE) are proprietary. The field data used are the Desert Tortoise pressurized ammonia releases, the Burro liquefied natural gas spill tests, and the Goldfish anhydrous hydrofluoric acid spill experiments. The Desert Tortoise and Goldfish releases were simulated as horizontal jet releases, and Burro as a liquid pool. Performance statistics were used to compare maximum observed concentrations and plume half-widths with those predicted by each model. Model performance varied, and no model exhibited consistently good performance across all three databases. However, when combined across the three databases, all models performed within a factor of two. Problems encountered are discussed in order to help future investigators.
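
The factor-of-two criterion and the bias statistics used in such evaluations are easy to state precisely; a sketch with invented numbers (not the field-trial data):

```python
import numpy as np

def fac2(obs, pred):
    # fraction of predictions within a factor of two of the observations
    ratio = pred / obs
    return np.mean((ratio >= 0.5) & (ratio <= 2.0))

def fractional_bias(obs, pred):
    # 0 means unbiased; positive means underprediction; bounded by [-2, 2]
    return 2.0 * (np.mean(obs) - np.mean(pred)) / (np.mean(obs) + np.mean(pred))

obs = np.array([10.0, 20.0, 40.0, 80.0])    # hypothetical observed maxima
pred = np.array([12.0, 15.0, 70.0, 90.0])   # hypothetical model predictions
```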

  8. Optical Storage Performance Modeling and Evaluation.

    ERIC Educational Resources Information Center

    Behera, Bailochan; Singh, Harpreet

    1990-01-01

    Evaluates different types of storage media for long-term archival storage of large amounts of data. Existing storage media are reviewed, including optical disks, optical tape, magnetic storage, and microfilm; three models are proposed based on document storage requirements; performance analysis is considered; and cost effectiveness is discussed.…

  9. Evaluation of Usability Utilizing Markov Models

    ERIC Educational Resources Information Center

    Penedo, Janaina Rodrigues; Diniz, Morganna; Ferreira, Simone Bacellar Leal; Silveira, Denis S.; Capra, Eliane

    2012-01-01

    Purpose: The purpose of this paper is to analyze the usability of a remote learning system in its initial development phase, using a quantitative usability evaluation method through Markov models. Design/methodology/approach: The paper opted for an exploratory study. The data of interest of the research correspond to the possible accesses of users…

  10. Evaluation of help model replacement codes

    SciTech Connect

    Whiteside, Tad; Hang, Thong; Flach, Gregory

    2009-07-01

    This work evaluates the computer codes proposed for predicting percolation of water through the closure cap and into the waste containment zone at Department of Energy closure sites. It compares the currently used water-balance code (HELP) with newly developed computer codes that use unsaturated flow (Richards’ equation). It provides a literature review of the HELP model and the proposed codes, which resulted in two codes being recommended for further evaluation: HYDRUS-2D3D and VADOSE/W. This further evaluation involved performing simulations on a simple model and comparing the results with those obtained with the HELP code and with field data. From the results of this work, we conclude that the two new codes perform nearly the same; moving forward, we recommend HYDRUS-2D3D.

  11. Evaluation of a Mysis bioenergetics model

    USGS Publications Warehouse

    Chipps, S.R.; Bennett, D.H.

    2002-01-01

    Direct approaches for estimating the feeding rate of the opossum shrimp Mysis relicta can be hampered by variable gut residence time (evacuation rate models) and non-linear functional responses (clearance rate models). Bioenergetics modeling provides an alternative method, but the reliability of this approach needs to be evaluated using independent measures of growth and food consumption. In this study, we measured growth and food consumption for M. relicta and compared experimental results with those predicted from a Mysis bioenergetics model. For Mysis reared at 10 °C, model predictions were not significantly different from observed values. Moreover, decomposition of mean square error indicated that 70% of the variation between model predictions and observed values was attributable to random error. On average, model predictions were within 12% of observed values. A sensitivity analysis revealed that Mysis respiration and prey energy density were the most sensitive parameters affecting model output. By accounting for uncertainty (95% CLs) in Mysis respiration, we observed a significant improvement in the accuracy of model output (within 5% of observed values), illustrating the importance of sensitive input parameters for model performance. These findings help corroborate the Mysis bioenergetics model and demonstrate the usefulness of this approach for estimating Mysis feeding rate.
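
The mean-square-error decomposition used above splits the model-observation mismatch into a systematic part and a random part; the data here are hypothetical:

```python
import numpy as np

def mse_decomposition(obs, pred):
    # MSE = (mean error)^2 + variance of the errors:
    # the first term is systematic bias, the second is random scatter
    err = pred - obs
    mse = np.mean(err ** 2)
    bias_sq = np.mean(err) ** 2
    random_part = np.var(err)      # equals mse - bias_sq
    return mse, bias_sq, random_part

obs = np.array([5.0, 8.0, 12.0, 15.0, 20.0])
pred = np.array([6.0, 7.5, 13.0, 16.5, 19.0])
mse, bias_sq, random_part = mse_decomposition(obs, pred)
```

A large random share (as in the paper's 70% figure) indicates the remaining error is scatter rather than a correctable systematic offset.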

  12. Evaluation (not validation) of quantitative models.

    PubMed Central

    Oreskes, N

    1998-01-01

    The present regulatory climate has led to increasing demands for scientists to attest to the predictive reliability of numerical simulation models used to help set public policy, a process frequently referred to as model validation. But while model validation may reveal useful information, this paper argues that it is not possible to demonstrate the predictive reliability of any model of a complex natural system in advance of its actual use. All models embed uncertainties, and these uncertainties can and frequently do undermine predictive reliability. In the case of lead in the environment, we may categorize model uncertainties as theoretical, empirical, parametrical, and temporal. Theoretical uncertainties are aspects of the system that are not fully understood, such as the biokinetic pathways of lead metabolism. Empirical uncertainties are aspects of the system that are difficult (or impossible) to measure, such as actual lead ingestion by an individual child. Parametrical uncertainties arise when complexities in the system are simplified to provide manageable model input, such as representing longitudinal lead exposure by cross-sectional measurements. Temporal uncertainties arise from the assumption that systems are stable in time. A model may also be conceptually flawed. The Ptolemaic system of astronomy is a historical example of a model that was empirically adequate but based on a wrong conceptualization. Yet had it been computerized--and had the word then existed--its users would have had every right to call it validated. Thus, rather than talking about strategies for validation, we should be talking about means of evaluation. That is not to say that language alone will solve our problems or that the problems of model evaluation are primarily linguistic. The uncertainties inherent in large, complex models will not go away simply because we change the way we talk about them. But this is precisely the point: calling a model validated does not make it valid

  14. A model evaluation checklist for process-based environmental models

    NASA Astrophysics Data System (ADS)

    Jackson-Blake, Leah

    2015-04-01

    Mechanistic catchment-scale phosphorus models appear to perform poorly where diffuse sources dominate. The reasons for this were investigated for one commonly-applied model, the INtegrated model of CAtchment Phosphorus (INCA-P). Model output was compared to 18 months of daily water quality monitoring data in a small agricultural catchment in Scotland, and model structure, key model processes and internal model responses were examined. Although the model broadly reproduced dissolved phosphorus dynamics, it struggled with particulates. The reasons for poor performance were explored, together with ways in which improvements could be made. The process of critiquing and assessing model performance was then generalised to provide a broadly-applicable model evaluation checklist, incorporating: (1) Calibration challenges, relating to difficulties in thoroughly searching a high-dimensional parameter space and in selecting appropriate means of evaluating model performance. In this study, for example, model simplification was identified as a necessary improvement to reduce the number of parameters requiring calibration, whilst the traditionally-used Nash Sutcliffe model performance statistic was not able to discriminate between realistic and unrealistic model simulations, and alternative statistics were needed. (2) Data limitations, relating to a lack of (or uncertainty in) input data, data to constrain model parameters, data for model calibration and testing, and data to test internal model processes. In this study, model reliability could be improved by addressing all four kinds of data limitation. For example, there was insufficient surface water monitoring data for model testing against an independent dataset to that used in calibration, whilst additional monitoring of groundwater and effluent phosphorus inputs would help distinguish between alternative plausible model parameterisations. 
(3) Model structural inadequacies, whereby model structure may inadequately represent

  15. Evaluating model accuracy for model-based reasoning

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Roden, Joseph

    1992-01-01

    Described here is an approach to automatically assessing the accuracy of various components of a model. In this approach, actual data from the operation of a target system is used to drive statistical measures to evaluate the prediction accuracy of various portions of the model. We describe how these statistical measures of model accuracy can be used in model-based reasoning for monitoring and design. We then describe the application of these techniques to the monitoring and design of the water recovery system of the Environmental Control and Life Support System (ECLSS) of Space Station Freedom.

  16. AERMOD: Model formulation and evaluation results

    SciTech Connect

    Paine, R.J.; Lee, R.; Brode, R.; Wilson, R.; Cimorelli, A.; Perry, S.G.; Weil, J.; Venkatram, A.; Peters, W.

    1999-07-01

    AERMOD is an advanced plume model that incorporates updated treatments of boundary layer theory and the current understanding of turbulence and dispersion, and includes handling of terrain interactions. This paper presents an overview of AERMOD's features relative to ISCST3. AERMOD has been evaluated on 10 databases, which include flat and hilly terrain areas, urban and rural sites, and a mixture of tracer experiments as well as routine monitoring networks with a limited number of fixed monitoring sites. The paper concludes with a summary of the evaluation results of AERMOD against these diverse databases.

  18. Evaluating spatial patterns in hydrological modeling

    NASA Astrophysics Data System (ADS)

    Koch, Julian; Stisen, Simon; Høgh Jensen, Karsten

    2014-05-01

    Recent advances in hydrological modeling towards fully distributed, grid-based model codes, the increased availability of spatially distributed data (remote sensing and intensive field studies) and greater computational power allow a shift towards spatial model evaluation, away from the traditional aggregated evaluation. Considering only spatially aggregated observations, such as river discharge, in the evaluation process does not ensure a correct simulation of catchment-inherent distributed variables. The integration of spatial data and hydrological models is limited by a lack of suitable metrics for evaluating the similarity of spatial patterns. This study develops a novel set of performance metrics that capture spatial patterns and go beyond global statistics. The metrics are required to be simple, flexible and specifically targeted at comparing observed and simulated spatial patterns of hydrological variables. Four quantitative methodologies for comparing spatial patterns are brought forward: (1) a fuzzy set approach that incorporates both fuzziness of location and fuzziness of category; (2) the kappa statistic, which expresses the similarity between two maps based on a contingency table (error matrix); (3) an extended version of (2) that considers both fuzziness in location and fuzziness in category; (4) increasing the information content of a single cell by aggregating neighborhood cells at different window sizes and then computing the mean and standard deviation. The identified metrics are tested on observed and simulated land surface temperature maps in a groundwater-dominated catchment in western Denmark. The observed data originate from the MODIS satellite, and MIKE SHE, a coupled and fully distributed hydrological model, serves as the modelling tool. Synthetic land surface temperature maps are generated to further address strengths and weaknesses of the metrics. The metrics are tested in different parameter optimizing frameworks, where they are…
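
Of the four methodologies, the kappa statistic (2) is the most standard; a small sketch on two flattened categorical maps (toy data, not the Danish temperature maps):

```python
import numpy as np

def kappa(map_a, map_b):
    # Cohen's kappa from the contingency (error) matrix of two category maps
    cats = np.union1d(map_a, map_b)
    n = map_a.size
    table = np.array([[np.sum((map_a == i) & (map_b == j)) for j in cats]
                      for i in cats])
    po = np.trace(table) / n                                    # observed agreement
    pe = np.sum(table.sum(axis=0) * table.sum(axis=1)) / n**2   # chance agreement
    return (po - pe) / (1.0 - pe)

# two 'maps' flattened to 1-D, with three categories each
a = np.array([0, 0, 1, 1, 2, 2, 0, 1])
b = np.array([0, 0, 1, 2, 2, 2, 0, 1])
```

Kappa corrects raw cell-by-cell agreement for what two random maps with the same category proportions would achieve by chance.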

  19. Automated Expert Modeling and Student Evaluation

    Energy Science and Technology Software Center (ESTSC)

    2012-09-12

    AEMASE searches a database of recorded events for combinations of events that are of interest. It compares matching combinations to a statistical model to determine similarity to previous events of interest and alerts the user as new matching examples are found. AEMASE is currently used by weapons tactics instructors to find situations of interest in recorded tactical training scenarios. AEMASE builds on a sub-component, the Relational Blackboard (RBB), which is being released as open-source software. AEMASE builds on RBB by adding interactive expert model construction (automated knowledge capture) and re-evaluation of scenario data.

  1. CTBT integrated verification system evaluation model supplement

    SciTech Connect

    EDENBURN,MICHAEL W.; BUNTING,MARCUS; PAYNE JR.,ARTHUR C.; TROST,LAWRENCE C.

    2000-03-02

    Sandia National Laboratories has developed a computer-based model called IVSEM (Integrated Verification System Evaluation Model) to estimate the performance of a nuclear detonation monitoring system. The IVSEM project was initiated in June 1994 by Sandia's Monitoring Systems and Technology Center and has been funded by the U.S. Department of Energy's Office of Nonproliferation and National Security (DOE/NN). IVSEM is a simple, ''top-level'' modeling tool which estimates the performance of a Comprehensive Nuclear Test Ban Treaty (CTBT) monitoring system and can help explore the impact of various sensor system concepts and technology advancements on CTBT monitoring. One of IVSEM's unique features is that it integrates results from the various CTBT sensor technologies (seismic, infrasound, radionuclide, and hydroacoustic) and allows the user to investigate synergy among the technologies. Specifically, IVSEM estimates the detection effectiveness (probability of detection), location accuracy, and identification capability of the integrated system and of each technology subsystem individually. The model attempts to accurately estimate the monitoring system's performance at medium interfaces (air-land, air-water) and for some evasive testing methods such as seismic decoupling. The original IVSEM report, CTBT Integrated Verification System Evaluation Model, SAND97-2518, described version 1.2 of IVSEM. This report describes the changes made to IVSEM version 1.2 and the addition of identification capability estimates that have been incorporated into IVSEM version 2.0.
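
IVSEM's integration of the four technologies is far more detailed than this, but the basic synergy of independent detection systems reduces to one line, sketched here with invented per-technology probabilities:

```python
def combined_detection(probs):
    # P(at least one technology detects), assuming independent detections
    miss = 1.0
    for p in probs:
        miss *= (1.0 - p)
    return 1.0 - miss

# hypothetical detection probabilities for one event scenario
seismic, infrasound, radionuclide, hydroacoustic = 0.80, 0.40, 0.30, 0.10
p_system = combined_detection([seismic, infrasound, radionuclide, hydroacoustic])
# the integrated system outperforms its best individual technology
```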

  2. Programs Help Create And Evaluate Markov Models

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Boerschlein, David P.

    1993-01-01

    The Pade Approximation With Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) computer programs provide a flexible, user-friendly, language-based interface for the creation and evaluation of Markov models describing the behavior of fault-tolerant reconfigurable computer systems. They produce exact solutions for the probabilities of system failure and provide conservative estimates of the number of significant digits in the solutions. They are also offered as part of a bundled package with SURE and ASSIST, two other reliability analysis programs developed by the Systems Validation Methods group at Langley Research Center.
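
The core of such tools is a numerically careful matrix exponential for the Markov chain's transition probabilities. A minimal scaling-and-squaring sketch (using a truncated Taylor series rather than PAWS's Padé approximant; the failure and repair rates are invented):

```python
import numpy as np

def expm_scaled(Q, t, s=20, terms=10):
    # P(t) = exp(Qt) computed as (exp(Qt / 2^s))^(2^s); the scaled matrix is
    # tiny, so a short Taylor series for the inner exponential is accurate
    A = Q * t / (2.0 ** s)
    E = np.eye(Q.shape[0])
    term = np.eye(Q.shape[0])
    for k in range(1, terms + 1):
        term = term @ A / k       # accumulates A^k / k!
        E = E + term
    for _ in range(s):
        E = E @ E                 # repeated squaring undoes the scaling
    return E

# two-state fault/repair model: failure rate 1e-3/h, repair rate 1e-1/h
lam, mu = 1e-3, 1e-1
Q = np.array([[-lam, lam], [mu, -mu]])
P = expm_scaled(Q, t=10.0)
p_fail = P[0, 1]   # probability the system is in the failed state at t = 10 h
```

For this two-state chain the exact answer is lam/(lam+mu) * (1 - exp(-(lam+mu)*t)), which the sketch reproduces to near machine precision.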

  3. Radiation model for row crops: II. Model evaluation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Relatively few radiation transfer studies have considered the impact of the varying vegetation cover that typifies row crops, and methods to account for partial row crop cover have not been well investigated. Our objective was to evaluate a widely used radiation model that was modified for row crops ha...

  4. Using Critical Incidents To Model Effective Evaluation Practice in the Teaching of Evaluation.

    ERIC Educational Resources Information Center

    Preskill, Hallie

    1997-01-01

    Discusses the importance of modeling effective evaluation practice as teachers teach about evaluation. Using the critical incidents evaluation tool and process, students in graduate evaluation courses were asked to reflect on their learning as formative evaluation was modeled throughout the course, as a way to teach about evaluation practice. (SLD)

  5. User's appraisal of yield model evaluation criteria

    NASA Technical Reports Server (NTRS)

    Warren, F. B. (Principal Investigator)

    1982-01-01

    The five major potential USDA users of AgRISTARS crop yield forecast models rated the Yield Model Development (YMD) project Test and Evaluation Criteria by the importance placed on them. These users agreed that the "TIMELINESS" and "RELIABILITY" of the forecast yields would be of major importance in determining whether a proposed yield model was worthy of adoption. Although there was considerable difference of opinion as to the relative importance of the other criteria, "COST", "OBJECTIVITY", "ADEQUACY" and "MEASURES OF ACCURACY" generally were felt to be more important than "SIMPLICITY" and "CONSISTENCY WITH SCIENTIFIC KNOWLEDGE". However, some of the comments that accompanied the ratings did indicate that several of the definitions and descriptions of the criteria were confusing.

  6. Evaluation of a mallard productivity model

    USGS Publications Warehouse

    Johnson, D.H.; Cowardin, L.M.; Sparling, D.W.

    1986-01-01

    A stochastic model of mallard (Anas platyrhynchos) productivity has been developed over a 10-year period and successfully applied to several management questions. Here we review the model and describe some recent uses and improvements that increase its realism and applicability, including naturally occurring changes in wetland habitat, catastrophic weather events, and the migrational homing of mallards. The amount of wetland habitat influenced productivity primarily by affecting the renesting rate. Late snowstorms severely reduced productivity, whereas the loss of nests due to flooding was largely compensated for by increased renesting, often in habitats where hatching rates were better. Migrational homing was shown to be an important phenomenon in population modeling and should be considered when evaluating management plans.

  7. A Formulation of the Interactive Evaluation Model

    PubMed Central

    Walsh, Peter J.; Awad-Edwards, Roger; Engelhardt, K. G.; Perkash, Inder

    1985-01-01

    The development of highly technical devices for specialized users requires continual feedback from potential users to the project team designing the device, to assure that a useful product will result. This necessity for user input is the basis of the Interactive Evaluation Model, which has been applied to complex computer-assisted robotic aids for individuals with disabilities and has wide application to the development of a variety of technical devices. We present a preliminary mathematical formulation of the Interactive Evaluation Model which maximizes the rate of growth toward success, at a constant cost rate, of the efforts of a team having the diverse kinds of expertise needed to produce a complex technical product. Close interaction is simulated by a growth rate that is a multiplicative product involving the number of participants within each class of necessary expertise, and evaluation is included by demanding that users form one of the necessary classes. In the multipliers, the number of class participants is raised to a power termed the class weight exponent. In the simplest case, the optimum participant number varies as the ratio of the class weight exponent to the average class cost. An illustrative example, based on our experience with medical care assistive aids, shows the dramatic cost reduction possible with users on the team.
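
The stated optimum (participant number proportional to class weight exponent over class cost) follows from a short Lagrange-multiplier argument; the notation below is mine, not the paper's:

```latex
% growth rate g = \prod_i n_i^{w_i}; budget constraint \sum_i c_i n_i = C
% maximize \log g subject to the budget:
\max_{n_i} \; \sum_i w_i \ln n_i
  \quad \text{s.t.} \quad \sum_i c_i n_i = C
% stationarity: w_i / n_i = \lambda c_i \;\Rightarrow\; n_i = w_i / (\lambda c_i);
% substituting into the constraint gives \lambda = \tfrac{1}{C} \sum_j w_j, hence
n_i^{*} = \frac{C}{c_i} \cdot \frac{w_i}{\sum_j w_j} \;\propto\; \frac{w_i}{c_i}
```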

  8. Hazardous gas model evaluation with field observations

    NASA Astrophysics Data System (ADS)

    Hanna, S. R.; Chang, J. C.; Strimaitis, D. G.

    Fifteen hazardous gas models were evaluated using data from eight field experiments. The models include seven publicly available models (AFTOX, DEGADIS, HEGADAS, HGSYSTEM, INPUFF, OB/DG and SLAB), six proprietary models (AIRTOX, CHARM, FOCUS, GASTAR, PHAST and TRACE), and two "benchmark" analytical models (the Gaussian Plume Model and the analytical approximations to the Britter and McQuaid Workbook nomograms). The field data were divided into three groups—continuous dense gas releases (Burro LNG, Coyote LNG, Desert Tortoise NH3 (gas and aerosols), Goldfish HF (gas and aerosols), and Maplin Sands LNG), continuous passive gas releases (Prairie Grass and Hanford), and instantaneous dense gas releases (Thorney Island freon). The dense gas models that produced the most consistent predictions of plume centerline concentrations across the dense gas data sets are the Britter and McQuaid, CHARM, GASTAR, HEGADAS, HGSYSTEM, PHAST, SLAB and TRACE models, with relative mean biases of about ±30% or less and magnitudes of relative scatter that are about equal to the mean. The dense gas models tended to overpredict the plume widths and underpredict the plume depths by about a factor of two. All models except GASTAR, TRACE, and the area source version of DEGADIS perform fairly well with the continuous passive gas data sets. Some sensitivity studies were also carried out. It was found that three of the more widely used publicly available dense gas models (DEGADIS, HGSYSTEM and SLAB) predicted increases in concentration of about 70% as roughness length decreased by an order of magnitude for the Desert Tortoise and Goldfish field studies. It was also found that none of the dense gas models that were considered came close to simulating the observed factor-of-two increase in peak concentrations as averaging time decreased from several minutes to 1 s. Because of their assumption that a concentrated dense gas core existed that was unaffected by variations in averaging time, the dense gas…

  9. The natural emissions model (NEMO): Description, application and model evaluation

    NASA Astrophysics Data System (ADS)

    Liora, Natalia; Markakis, Konstantinos; Poupkou, Anastasia; Giannaros, Theodore M.; Melas, Dimitrios

    2015-12-01

    The aim of this study is the application and evaluation of a new computer model for quantifying emissions from natural sources. The Natural Emissions Model (NEMO) is driven by meteorological data from the mesoscale Weather Research and Forecasting (WRF) model and estimates particulate matter (PM) emissions from windblown dust, sea salt aerosols (SSA) and primary biological aerosol particles (PBAPs). It also includes emissions of Biogenic Volatile Organic Compounds (BVOCs) from vegetation; however, this study focuses only on particle emissions. An application and evaluation of NEMO at the European scale are presented. NEMO and a modelling system consisting of the WRF model and the Comprehensive Air Quality Model with extensions (CAMx) were applied to a 30 km European domain for the year 2009. The computed domain-wide annual PM10 emissions from windblown dust, sea salt and PBAPs were 0.57 Tg, 20 Tg and 0.12 Tg, respectively. PM2.5 represented 6% and 33% of emitted windblown dust and sea salt, respectively. Natural emissions are characterized by high geographical and seasonal variation: windblown dust emissions were highest during summer in southern Europe, and SSA production was highest in the Atlantic Ocean during the cold season, while in the Mediterranean Sea the highest SSA emissions were found over the Aegean Sea during summer. Modelled concentrations were compared with surface station measurements and showed that the model captures fairly well the contribution of natural sources to PM levels over Europe. Dust concentrations correlated better when dust transport events from the Sahara desert were absent, while the simulation of sea salt episodes led to improved model performance during the cold season.

  10. Evaluating face trustworthiness: a model based approach

    PubMed Central

    Baron, Sean G.; Oosterhof, Nikolaas N.

    2008-01-01

    Judgments of trustworthiness from faces determine basic approach/avoidance responses and approximate the valence evaluation of faces that runs across multiple person judgments. Here, based on trustworthiness judgments and using a computer model for face representation, we built a model for representing face trustworthiness (study 1). Using this model, we generated novel faces with an increased range of trustworthiness and used these faces as stimuli in a functional Magnetic Resonance Imaging study (study 2). Although participants did not engage in explicit evaluation of the faces, the amygdala response changed as a function of face trustworthiness. An area in the right amygdala showed a negative linear response—as the untrustworthiness of faces increased so did the amygdala response. Areas in the left and right putamen, the latter area extended into the anterior insula, showed a similar negative linear response. The response in the left amygdala was quadratic—strongest for faces on both extremes of the trustworthiness dimension. The medial prefrontal cortex and precuneus also showed a quadratic response, but their response was strongest to faces in the middle range of the trustworthiness dimension. PMID:19015102

  11. Evaluating conflation methods using uncertainty modeling

    NASA Astrophysics Data System (ADS)

    Doucette, Peter; Dolloff, John; Canavosio-Zuzelski, Roberto; Lenihan, Michael; Motsko, Dennis

    2013-05-01

    The classic problem of computer-assisted conflation involves the matching of individual features (e.g., point, polyline, or polygon vectors) as stored in a geographic information system (GIS), between two different sets (layers) of features. The classical goal of conflation is the transfer of feature metadata (attributes) from one layer to another. The age of free public and open source geospatial feature data has significantly increased the opportunity to conflate such data to create enhanced products. There are currently several spatial conflation tools in the marketplace with varying degrees of automation. An ability to evaluate conflation tool performance quantitatively is of operational value, although manual truthing of matched features is laborious and costly. In this paper, we present a novel methodology that uses spatial uncertainty modeling to simulate realistic feature layers to streamline evaluation of feature matching performance for conflation methods. Performance results are compiled for DCGIS street centerline features.
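
    The simulation-based evaluation idea in this abstract can be sketched in a few lines: perturb a known "truth" layer with a spatial uncertainty model to create a synthetic second layer, then score a simple nearest-neighbour matcher against it. The Gaussian error model, matching tolerance, and point grid below are illustrative assumptions, not the paper's actual parameters.

```python
# Illustrative sketch: simulate a conflation test layer via spatial uncertainty.
# The Gaussian error model and matching tolerance are assumptions for this demo.
import math
import random

random.seed(42)

def perturb(points, sigma):
    """Simulate a second feature layer by adding Gaussian positional error."""
    return [(x + random.gauss(0, sigma), y + random.gauss(0, sigma))
            for x, y in points]

def match_rate(layer_a, layer_b, tol):
    """Fraction of layer-A features whose nearest layer-B feature is within tol."""
    hits = 0
    for ax, ay in layer_a:
        nearest = min(math.hypot(ax - bx, ay - by) for bx, by in layer_b)
        hits += nearest <= tol
    return hits / len(layer_a)

# A known "truth" layer of point features on a grid (hypothetical units).
truth = [(i * 10.0, j * 10.0) for i in range(5) for j in range(5)]
simulated = perturb(truth, sigma=1.0)
rate = match_rate(truth, simulated, tol=3.0)
```

    Because the truth layer is known by construction, the matching score requires no manual truthing, which is the labour saving the abstract describes.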

  12. Data assimilation and model evaluation experiment datasets

    NASA Technical Reports Server (NTRS)

    Lai, Chung-Cheng A.; Qian, Wen; Glenn, Scott M.

    1994-01-01

    The Institute for Naval Oceanography, in cooperation with Naval Research Laboratories and universities, executed the Data Assimilation and Model Evaluation Experiment (DAMEE) for the Gulf Stream region during fiscal years 1991-1993. Enormous effort has gone into the preparation of several high-quality and consistent datasets for model initialization and verification. This paper describes the preparation process, the temporal and spatial scopes, the contents, the structure, etc., of these datasets. The goal of DAMEE and the need for data in the four phases of the experiment are briefly stated. The preparation of the DAMEE datasets consisted of a series of processes: (1) collection of observational data; (2) analysis and interpretation; (3) interpolation using the Optimum Thermal Interpolation System package; (4) quality control and re-analysis; and (5) data archiving and software documentation. The data products from these processes included a time series of 3D fields of temperature and salinity, 2D fields of surface dynamic height and mixed-layer depth, analysis of the Gulf Stream and rings system, and bathythermograph profiles. To date, these are the most detailed and highest-quality data available for mesoscale ocean modeling, data assimilation, and forecasting research. Feedback from ocean modeling groups who tested the data was incorporated into its refinement. Suggested uses of the DAMEE data include (1) ocean modeling and data assimilation studies, (2) diagnostic and theoretical studies, and (3) comparisons with locally detailed observations.

  13. Modelling approaches for evaluating multiscale tendon mechanics.

    PubMed

    Fang, Fei; Lake, Spencer P

    2016-02-01

    Tendon exhibits anisotropic, inhomogeneous and viscoelastic mechanical properties that are determined by its complicated hierarchical structure and varying amounts/organization of different tissue constituents. Although extensive research has been conducted to use modelling approaches to interpret tendon structure-function relationships in combination with experimental data, many issues remain unclear (i.e. the role of minor components such as decorin, aggrecan and elastin), and the integration of mechanical analysis across different length scales has not been well applied to explore stress or strain transfer from macro- to microscale. This review outlines mathematical and computational models that have been used to understand tendon mechanics at different scales of the hierarchical organization. Model representations at the molecular, fibril and tissue levels are discussed, including formulations that follow phenomenological and microstructural approaches (which include evaluations of crimp, helical structure and the interaction between collagen fibrils and proteoglycans). Multiscale modelling approaches incorporating tendon features are suggested to be an advantageous methodology to understand further the physiological mechanical response of tendon and corresponding adaptation of properties owing to unique in vivo loading environments. PMID:26855747

  14. CTBT Integrated Verification System Evaluation Model

    SciTech Connect

    Edenburn, M.W.; Bunting, M.L.; Payne, A.C. Jr.

    1997-10-01

    Sandia National Laboratories has developed a computer-based model called IVSEM (Integrated Verification System Evaluation Model) to estimate the performance of a nuclear detonation monitoring system. The IVSEM project was initiated in June 1994 by Sandia's Monitoring Systems and Technology Center and has been funded by the US Department of Energy's Office of Nonproliferation and National Security (DOE/NN). IVSEM is a simple, top-level modeling tool which estimates the performance of a Comprehensive Nuclear Test Ban Treaty (CTBT) monitoring system and can help explore the impact of various sensor system concepts and technology advancements on CTBT monitoring. One of IVSEM's unique features is that it integrates results from the various CTBT sensor technologies (seismic, infrasound, radionuclide, and hydroacoustic) and allows the user to investigate synergy among the technologies. Specifically, IVSEM estimates the detection effectiveness (probability of detection) and location accuracy of the integrated system and of each technology subsystem individually. The model attempts to accurately estimate the monitoring system's performance at medium interfaces (air-land, air-water) and for some evasive testing methods such as seismic decoupling. This report describes version 1.2 of IVSEM.
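
    The integration of per-technology results that the abstract highlights can be illustrated with a toy calculation. The abstract does not give IVSEM's actual formula, so the independence assumption and the probabilities below are purely illustrative.

```python
# Illustrative only: combine per-technology detection probabilities under an
# independence assumption; the abstract does not specify IVSEM's actual method.

def integrated_detection_probability(p_detect):
    """Probability that at least one independent subsystem detects the event."""
    p_miss = 1.0
    for p in p_detect:
        p_miss *= 1.0 - p  # every subsystem must miss simultaneously
    return 1.0 - p_miss

# Hypothetical per-technology detection probabilities:
# seismic, infrasound, radionuclide, hydroacoustic.
subsystems = [0.80, 0.50, 0.30, 0.40]
p_integrated = integrated_detection_probability(subsystems)
```

    Even under this simplistic assumption the integrated probability (here 0.958) exceeds that of any single subsystem, which is the synergy effect the abstract describes.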

  15. An evaluation framework for participatory modelling

    NASA Astrophysics Data System (ADS)

    Krueger, T.; Inman, A.; Chilvers, J.

    2012-04-01

    Strong arguments for participatory modelling in hydrology can be made on substantive, instrumental and normative grounds. These arguments have led to increasingly diverse groups of stakeholders (here anyone affecting or affected by an issue) getting involved in hydrological research and the management of water resources. In fact, participation has become a requirement of many research grants, programs, plans and policies. However, evidence of the beneficial outcomes suggested by these arguments is difficult to generate and therefore rare. This is because outcomes are diverse, distributed, often tacit, and take time to emerge. In this paper we develop an evaluation framework for participatory modelling focussed on learning outcomes. Learning encompasses many of the potential benefits of participation, such as better models through diversity of knowledge and scrutiny, stakeholder empowerment, greater trust in models and ownership of subsequent decisions, individual moral development, reflexivity, relationships, social capital, institutional change, resilience and sustainability. Based on the theories of experiential, transformative and social learning, complemented by practitioner experience, our framework examines if, when and how learning has occurred. Special emphasis is placed on the role of models as learning catalysts. We map the distribution of learning between stakeholders, scientists (as a subgroup of stakeholders) and models, and we analyse what type of learning has occurred: instrumental learning (broadly cognitive enhancement) and/or communicative learning (change in interpreting meanings, intentions and values associated with actions and activities; group dynamics).
We demonstrate how our framework can be translated into a questionnaire-based survey conducted with stakeholders and scientists at key stages of the participatory process, and show preliminary insights from applying the framework within a rural pollution management situation in

  16. Performance Evaluation Modeling of Network Sensors

    NASA Technical Reports Server (NTRS)

    Clare, Loren P.; Jennings, Esther H.; Gao, Jay L.

    2003-01-01

    Substantial benefits are promised by operating many spatially separated sensors collectively. Such systems are envisioned to consist of sensor nodes that are connected by a communications network. A simulation tool is being developed to evaluate the performance of networked sensor systems, incorporating such metrics as target detection probabilities, false alarm rates, and classification confusion probabilities. The tool will be used to determine configuration impacts associated with such aspects as spatial laydown, the mixture of different sensor types (acoustic, seismic, imaging, magnetic, RF, etc.), and fusion architecture. The QualNet discrete-event simulation environment serves as the underlying basis for model development and execution. This platform is recognized for its capabilities in efficiently simulating networking among mobile entities that communicate via wireless media. We are extending QualNet's communications modeling constructs to capture the sensing aspects of multi-target sensing (analogous to multiple access communications), unimodal multi-sensing (broadcast), and multi-modal sensing (multiple channels and correlated transmissions). Methods are also being developed for modeling the sensor signal sources (transmitters), signal propagation through the media, and sensors (receivers) that are consistent with the discrete event paradigm needed for performance determination of sensor network systems. This work is supported under the Microsensors Technical Area of the Army Research Laboratory (ARL) Advanced Sensors Collaborative Technology Alliance.

  17. An Integrated Model of Training Evaluation and Effectiveness

    ERIC Educational Resources Information Center

    Alvarez, Kaye; Salas, Eduardo; Garofano, Christina M.

    2004-01-01

    A decade of training evaluation and training effectiveness research was reviewed to construct an integrated model of training evaluation and effectiveness. This model integrates four prior evaluation models and results of 10 years of training effectiveness research. It is the first to be constructed using a set of strict criteria and to…

  18. 10 CFR Appendix K to Part 50 - ECCS Evaluation Models

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    10 CFR Part 50, Appendix K (Energy; Nuclear Regulatory Commission, Domestic Licensing of Production and Utilization Facilities): ECCS Evaluation Models. I. Required and Acceptable Features of Evaluation Models. II. Required Documentation.

  19. Evaluation of Mesoscale Model Phenomenological Verification Techniques

    NASA Technical Reports Server (NTRS)

    Lambert, Winifred

    2006-01-01

    Forecasters at the Spaceflight Meteorology Group, 45th Weather Squadron, and National Weather Service in Melbourne, FL use mesoscale numerical weather prediction model output in creating their operational forecasts. These models aid in forecasting weather phenomena that could compromise the safety of launch, landing, and daily ground operations, and must produce reasonable weather forecasts in order for their output to be useful in operations. Considering the importance of model forecasts to operations, their accuracy in forecasting critical weather phenomena must be verified to determine their usefulness. The traditional verification techniques currently in use involve an objective point-by-point comparison of model output and observations valid at the same time and location. The resulting statistics can unfairly penalize high-resolution models that make realistic forecasts of certain phenomena but are offset from the observations by small time and/or space increments. Manual subjective verification can provide a more valid representation of model performance, but is time-consuming and prone to personal biases. An objective technique that verifies specific meteorological phenomena, much in the way a human would in a subjective evaluation, would likely produce a more realistic assessment of model performance. Such techniques are being developed in the research community. The Applied Meteorology Unit (AMU) was tasked to conduct a literature search to identify phenomenological verification techniques being developed, determine if any are ready to use operationally, and outline the steps needed to implement any operationally-ready techniques into the Advanced Weather Information Processing System (AWIPS). The AMU conducted a search of all literature on the topic of phenomenological-based mesoscale model verification techniques and found 10 different techniques in various stages of development.
Six of the techniques were developed to verify precipitation forecasts, one

  20. Treatment modalities and evaluation models for periodontitis

    PubMed Central

    Tariq, Mohammad; Iqbal, Zeenat; Ali, Javed; Baboota, Sanjula; Talegaonkar, Sushama; Ahmad, Zulfiqar; Sahni, Jasjeet K

    2012-01-01

    Periodontitis is the most common localized dental inflammatory disease, associated with several pathological conditions such as inflammation of the gums (gingivitis), degeneration of the periodontal ligament and dental cementum, and alveolar bone loss. In this perspective, the various preventive and treatment modalities, including oral hygiene, gingival irrigation, mechanical instrumentation, full mouth disinfection, host modulation and antimicrobial therapy, which are used either as adjunctive treatments or as stand-alone therapies in the non-surgical management of periodontal infections, are discussed. Intra-pocket sustained-release systems have emerged as a novel paradigm for future research. In this article, special consideration is given to the different locally delivered anti-microbial and anti-inflammatory medications which are either commercially available or are currently under consideration for Food and Drug Administration (FDA) approval. The various in vitro dissolution models and microbiological strains investigated to mimic the infected and inflamed periodontal cavity and to predict the in vivo performance of treatment modalities are also discussed. Animal models that have been employed to explore the pathology of the different stages of periodontitis and to evaluate its treatment modalities are highlighted in this review. PMID:23373002

  1. A Multidisciplinary Model of Evaluation Capacity Building

    ERIC Educational Resources Information Center

    Preskill, Hallie; Boyle, Shanelle

    2008-01-01

    Evaluation capacity building (ECB) has become a hot topic of conversation, activity, and study within the evaluation field. Seeking to enhance stakeholders' understanding of evaluation concepts and practices, and in an effort to create evaluation cultures, organizations have been implementing a variety of strategies to help their members learn…

  2. The design and implementation of an operational model evaluation system

    SciTech Connect

    Foster, K.T.

    1995-06-01

    An evaluation of an atmospheric transport and diffusion model's operational performance typically involves the comparison of the model's calculations with measurements of an atmospheric pollutant's temporal and spatial distribution. These evaluations, however, often use data from a small number of experiments and may be limited to producing some of the commonly quoted statistics based on the differences between model calculations and the measurements. This paper presents efforts to develop a model evaluation system geared for both the objective statistical analysis and the more subjective visualization of the inter-relationships between a model's calculations and the appropriate field measurement data.

  3. A Hybrid Evaluation Model for Evaluating Online Professional Development

    ERIC Educational Resources Information Center

    Hahs-Vaughn, Debbie; Zygouris-Coe, Vicky; Fiedler, Rebecca

    2007-01-01

    Online professional development is multidimensional. It encompasses: a) an online, web-based format; b) professional development; and most likely c) specific objectives tailored to and created for the respective online professional development course. Evaluating online professional development is therefore also multidimensional and as such both…

  4. Modelling and evaluating against the violent insider

    SciTech Connect

    Fortney, D.S.; Al-Ayat, R.A.; Saleh, R.A.

    1991-07-01

    The violent insider threat poses a special challenge to facilities protecting special nuclear material from theft or diversion. These insiders could potentially behave as nonviolent insiders to deceitfully defeat certain safeguards elements and use violence to forcefully defeat hardware or personnel. While several vulnerability assessment tools are available to deal with the nonviolent insider, very limited effort has been directed to developing analysis tools for the violent threat. In this paper, we present an approach using the results of a vulnerability assessment for nonviolent insiders to evaluate certain violent insider scenarios. Since existing tools do not explicitly consider violent insiders, the approach is intended for experienced safeguards analysts and relies on the analyst to brainstorm possible violent actions, to assign detection probabilities, and to ensure consistency. We then discuss our efforts in developing an automated tool for assessing the vulnerability against those violent insiders who are willing to use force against barriers, but who are unwilling to kill or be killed. Specifically, we discuss our efforts in developing databases for violent insiders penetrating barriers, algorithms for considering the entry of contraband, and modelling issues in considering the use of violence.

  6. Evaluation of video quality models for multimedia

    NASA Astrophysics Data System (ADS)

    Brunnström, Kjell; Hands, David; Speranza, Filippo; Webster, Arthur

    2008-02-01

    The Video Quality Experts Group (VQEG) is a group of experts from industry, academia, government and standards organizations working in the field of video quality assessment. Over the last 10 years, VQEG has focused its efforts on the evaluation of objective video quality metrics for digital video. Objective video metrics are mathematical models that predict the picture quality as perceived by an average observer. VQEG has completed validation tests for full reference objective metrics for the Standard Definition Television (SDTV) format. From this testing, two ITU Recommendations were produced. This standardization effort is of great relevance to the video industries because objective metrics can be used for quality control of the video at various stages of the delivery chain. Currently, VQEG is undertaking several projects in parallel. The most mature project is concerned with objective measurement of multimedia content. This project is probably the largest coordinated set of video quality testing ever embarked upon. The project will involve the collection of a very large database of subjective quality data. About 40 subjective assessment experiments and more than 160,000 opinion scores will be collected. These will be used to validate the proposed objective metrics. This paper describes the test plan for the project, its current status, and one of the multimedia subjective tests.

  7. An Evaluation Model for Innovative Individualized Programs.

    ERIC Educational Resources Information Center

    Weber, Margaret B.

    1977-01-01

    Program evaluation is a tri-level process: evaluation of the learners, of the program against its own objectives, and as compared against a criterion program. Evaluation of innovative programs is primarily an issue of definition, and they should be judged in terms of the needs they were designed to satisfy. (Author/CTM)

  8. THE ATMOSPHERIC MODEL EVALUATION TOOL (AMET); AIR QUALITY MODULE

    EPA Science Inventory

    This presentation reviews the development of the Atmospheric Model Evaluation Tool (AMET) air quality module. The AMET tool is being developed to aid in the model evaluation. This presentation focuses on the air quality evaluation portion of AMET. Presented are examples of the...

  9. Formative Evaluation: A Revised Descriptive Theory and a Prescriptive Model.

    ERIC Educational Resources Information Center

    Braden, Roberts A.

    The premise is advanced that a major weakness of the everyday generic instructional systems design model stems from a too modest traditional conception of the purpose and potential of formative evaluation. In the typical ISD (instructional systems design) model formative evaluation is shown not at all or as a single, product evaluation step. Yet…

  10. Evaluating a Training Using the "Four Levels Model"

    ERIC Educational Resources Information Center

    Steensma, Herman; Groeneveld, Karin

    2010-01-01

    Purpose: The aims of this study are: to present a training evaluation based on the "four levels model"; to demonstrate the value of experimental designs in evaluation studies; and to take a first step in the development of an evidence-based training program. Design/methodology/approach: The Kirkpatrick four levels model was used to evaluate the…

  11. THE ATMOSPHERIC MODEL EVALUATION TOOL: METEOROLOGY MODULE

    EPA Science Inventory

    Air quality modeling is continuously expanding in sophistication and function. Currently, air quality models are being used for research, forecasting, regulatory related emission control strategies, and other applications. Results from air-quality model applications are closely ...

  12. Program evaluation models and related theories: AMEE guide no. 67.

    PubMed

    Frye, Ann W; Hemmer, Paul A

    2012-01-01

    This Guide reviews theories of science that have influenced the development of common educational evaluation models. Educators can be more confident when choosing an appropriate evaluation model if they first consider the model's theoretical basis against their program's complexity and their own evaluation needs. Reductionism, system theory, and (most recently) complexity theory have inspired the development of models commonly applied in evaluation studies today. This Guide describes experimental and quasi-experimental models, Kirkpatrick's four-level model, the Logic Model, and the CIPP (Context/Input/Process/Product) model in the context of the theories that influenced their development and that limit or support their ability to do what educators need. The goal of this Guide is for educators to become more competent and confident in being able to design educational program evaluations that support intentional program improvement while adequately documenting or describing the changes and outcomes, intended and unintended, associated with their programs. PMID:22515309

  13. Global daily reference evapotranspiration modeling and evaluation

    USGS Publications Warehouse

    Senay, G.B.; Verdin, J.P.; Lietzow, R.; Melesse, Assefa M.

    2008-01-01

    Accurate and reliable evapotranspiration (ET) datasets are crucial in regional water and energy balance studies. Due to the complex instrumentation requirements, actual ET values are generally estimated from reference ET values by adjustment factors using coefficients for water stress and vegetation conditions, commonly referred to as crop coefficients. Until recently, the modeling of reference ET has been based solely on important weather variables collected from weather stations that are generally located in selected agro-climatic locations. Since 2001, the National Oceanic and Atmospheric Administration's Global Data Assimilation System (GDAS) has been producing six-hourly climate parameter datasets that are used to calculate daily reference ET for the whole globe at 1-degree spatial resolution. The U.S. Geological Survey Center for Earth Resources Observation and Science has been producing daily reference ET (ETo) since 2001, and it has been used in a variety of operational hydrological models for drought and streamflow monitoring all over the world. With the increasing availability of local station-based reference ET estimates, we evaluated the GDAS-based reference ET estimates using data from the California Irrigation Management Information System (CIMIS). Daily CIMIS reference ET estimates from 85 stations were compared with GDAS-based reference ET at different spatial and temporal scales using five years of daily data from 2002 through 2006. Despite the large difference in spatial scale (point vs. a ~100 km grid cell) between the two datasets, the correlations between station-based ET and GDAS-ET were very high, exceeding 0.97 on a daily basis and 0.99 on time scales of more than 10 days. Both the temporal and spatial correspondence in trend/pattern and magnitude between the two datasets was satisfactory, suggesting the reliability of using GDAS parameter-based reference ET for regional water and energy balance studies in many parts of the world.
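
    The core comparison in this abstract is a Pearson correlation between two daily ET series at different time scales. A minimal sketch follows; the synthetic seasonal series below stand in for the CIMIS and GDAS data, which are assumptions for illustration.

```python
# Sketch of the paper's comparison: correlation between a station-based and a
# grid-based daily reference-ET series, at daily and 10-day-aggregated scales.
# The synthetic series are stand-ins for CIMIS and GDAS data (assumptions).
import math
import random

random.seed(0)

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def aggregate(series, window):
    """Non-overlapping block means, e.g. 10-day averages."""
    return [sum(series[i:i + window]) / window
            for i in range(0, len(series) - window + 1, window)]

# Synthetic "station" ET (mm/day) with a seasonal cycle, plus a noisy "grid" copy.
station = [3.0 + 2.0 * math.sin(2 * math.pi * d / 365) for d in range(365)]
grid = [v + random.gauss(0, 0.5) for v in station]

r_daily = pearson(station, grid)
r_10day = pearson(aggregate(station, 10), aggregate(grid, 10))
```

    Aggregation averages out day-to-day noise, so the correlation typically rises with the time scale, consistent with the 0.97 (daily) versus 0.99 (10-day) values reported.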

  14. Rhode Island Model Evaluation & Support System: Teacher. Edition III

    ERIC Educational Resources Information Center

    Rhode Island Department of Education, 2015

    2015-01-01

    Rhode Island educators believe that implementing a fair, accurate, and meaningful educator evaluation and support system will help improve teaching and learning. The primary purpose of the Rhode Island Model Teacher Evaluation and Support System (Rhode Island Model) is to help all teachers improve. Through the Model, the goal is to help create a…

  15. Animal models to evaluate bacterial biofilm development.

    PubMed

    Thomsen, Kim; Trøstrup, Hannah; Moser, Claus

    2014-01-01

    Medical biofilms have attracted substantial attention especially in the past decade. Animal models are contributing significantly to understand the pathogenesis of medical biofilms. In addition, animal models are an essential tool in testing the hypothesis generated from clinical observations in patients and preclinical testing of agents showing in vitro antibiofilm effect. Here, we describe three animal models - two non-foreign body Pseudomonas aeruginosa biofilm models and a foreign body Staphylococcus aureus model. PMID:24664830

  16. Evaluation of Models of Parkinson's Disease

    PubMed Central

    Jagmag, Shail A.; Tripathi, Naveen; Shukla, Sunil D.; Maiti, Sankar; Khurana, Sukant

    2016-01-01

    Parkinson's disease is one of the most common neurodegenerative diseases. Animal models have contributed substantially to our understanding of the disease and to the therapeutics developed for its treatment. Several more exhaustive reviews of the literature provide detailed insights into specific models; however, a novel synthesis of the basic advantages and disadvantages of the different models is much needed. Here we compare both neurotoxin-based and genetic models while suggesting some novel avenues in PD modeling. We also highlight the problems faced by, and the promise of, all the mammalian models, with the hope of providing a framework for comparison of various systems. PMID:26834536

  17. The Use of the Discrepancy Evaluation Model in Evaluating Educational Programs for Visually Handicapped Persons.

    ERIC Educational Resources Information Center

    Hill, Everett W.; Hill, Mary-Maureen

    1983-01-01

    The need to evaluate educational programs is briefly addressed, and the application of the Discrepancy Evaluation Model (DEM) at a hypothetical residential school for the visually handicapped is described. (Author/SW)

  18. Evaluating uncertainty in stochastic simulation models

    SciTech Connect

    McKay, M.D.

    1998-02-01

    This paper discusses fundamental concepts of uncertainty analysis relevant to both stochastic simulation models and deterministic models. A stochastic simulation model, called a simulation model, is a stochastic mathematical model that incorporates random numbers in the calculation of the model prediction. Queuing models are familiar simulation models in which random numbers are used for sampling interarrival and service times. Another example of simulation models is found in probabilistic risk assessments where atmospheric dispersion submodels are used to calculate movement of material. For these models, randomness comes not from the sampling of times but from the sampling of weather conditions, which are described by a frequency distribution of atmospheric variables like wind speed and direction as a function of height above ground. A common characteristic of simulation models is that single predictions, based on one interarrival time or one weather condition, for example, are not nearly as informative as the probability distribution of possible predictions induced by sampling the simulation variables like time and weather condition. The language of model analysis is often general and vague, with terms having mostly intuitive meaning. The definition and motivations for some of the commonly used terms and phrases offered in this paper lead to an analysis procedure based on prediction variance. In the following mathematical abstraction the authors present a setting for model analysis, relate practical objectives to mathematical terms, and show how two reasonable premises lead to a viable analysis strategy.
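
    The abstract's central point, that the distribution of predictions induced by sampling the simulation variables is more informative than any single prediction, can be sketched as a small Monte Carlo experiment. The dispersion stand-in model and the wind-speed distribution below are invented for illustration.

```python
# Toy version of the paper's point: for a stochastic simulation model, the
# distribution of predictions over sampled inputs (here, "weather conditions")
# is more informative than any single prediction. Model and numbers are invented.
import random
import statistics

random.seed(1)

def dispersion_model(wind_speed):
    """Stand-in simulation: predicted ground concentration falls with wind speed."""
    return 100.0 / (1.0 + wind_speed)

# Sample "weather conditions" from an assumed frequency distribution of wind speed
# (uniform between 1 and 10 m/s), then run the model once per sampled condition.
samples = [dispersion_model(random.uniform(1.0, 10.0)) for _ in range(10_000)]

mean_prediction = statistics.mean(samples)
prediction_variance = statistics.variance(samples)
```

    The prediction variance computed this way is exactly the quantity the paper's analysis procedure is built around: it summarizes how much the model's answer depends on the sampled simulation variables.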

  19. Likelihood-Based Climate Model Evaluation

    NASA Technical Reports Server (NTRS)

    Braverman, Amy; Cressie, Noel; Teixeira, Joao

    2012-01-01

Climate models are deterministic, mathematical descriptions of the physics of climate. Confidence in predictions of future climate is increased if the physics are verifiably correct. A necessary (but not sufficient) condition is that past and present climate be simulated well. We quantify the likelihood that a (summary statistic computed from a) set of observations arises from a physical system with the characteristics captured by a model-generated time series. Given a prior on models, we can go further and compute the posterior distribution of models given the observations.
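The likelihood-based comparison the abstract outlines can be sketched as follows: score each model by the likelihood of an observed summary statistic under that model's simulated distribution, then combine with a uniform prior to get a posterior over models. The model names, means, and observed value are hypothetical, and a Gaussian likelihood is assumed purely for illustration.

```python
import math

def gaussian_loglik(x, mean, std):
    """Log-likelihood of x under a normal distribution."""
    return -0.5 * math.log(2 * math.pi * std**2) - (x - mean)**2 / (2 * std**2)

# Hypothetical summary statistic (e.g. a decadal mean) simulated by
# each model, and one observed value.
model_stats = {"modelA": (14.2, 0.3), "modelB": (13.1, 0.3)}
obs = 14.0

logliks = {m: gaussian_loglik(obs, mu, sd) for m, (mu, sd) in model_stats.items()}

# Uniform prior over models -> posterior proportional to likelihood.
liks = {m: math.exp(ll) for m, ll in logliks.items()}
total = sum(liks.values())
posterior = {m: v / total for m, v in liks.items()}
print(posterior)
```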

  20. Evaluation of Traditional Medicines for Neurodegenerative Diseases Using Drosophila Models

    PubMed Central

    Lee, Soojin; Bang, Se Min; Lee, Joon Woo; Cho, Kyoung Sang

    2014-01-01

    Drosophila is one of the oldest and most powerful genetic models and has led to novel insights into a variety of biological processes. Recently, Drosophila has emerged as a model system to study human diseases, including several important neurodegenerative diseases. Because of the genomic similarity between Drosophila and humans, Drosophila neurodegenerative disease models exhibit a variety of human-disease-like phenotypes, facilitating fast and cost-effective in vivo genetic modifier screening and drug evaluation. Using these models, many disease-associated genetic factors have been identified, leading to the identification of compelling drug candidates. Recently, the safety and efficacy of traditional medicines for human diseases have been evaluated in various animal disease models. Despite the advantages of the Drosophila model, its usage in the evaluation of traditional medicines is only nascent. Here, we introduce the Drosophila model for neurodegenerative diseases and some examples demonstrating the successful application of Drosophila models in the evaluation of traditional medicines. PMID:24790636

  1. Evaluation of Fast-Time Wake Vortex Prediction Models

    NASA Technical Reports Server (NTRS)

    Proctor, Fred H.; Hamilton, David W.

    2009-01-01

    Current fast-time wake models are reviewed and three basic types are defined. Predictions from several of the fast-time models are compared. Previous statistical evaluations of the APA-Sarpkaya and D2P fast-time models are discussed. Root Mean Square errors between fast-time model predictions and Lidar wake measurements are examined for a 24 hr period at Denver International Airport. Shortcomings in current methodology for evaluating wake errors are also discussed.
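The root-mean-square error metric used to compare fast-time model predictions against Lidar wake measurements is straightforward to compute; the sketch below uses invented example values, not data from the Denver study.

```python
import math

def rmse(predicted, observed):
    """Root-mean-square error between paired predictions and measurements."""
    assert len(predicted) == len(observed)
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(predicted))

# Hypothetical wake-position predictions vs. Lidar observations (metres).
model = [10.0, 22.5, 31.0, 45.2]
lidar = [12.0, 20.0, 33.5, 44.0]
print(round(rmse(model, lidar), 3))
```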

  2. Evaluating Energy Efficiency Policies with Energy-Economy Models

    SciTech Connect

    Mundaca, Luis; Neij, Lena; Worrell, Ernst; McNeil, Michael A.

    2010-08-01

The growing complexities of energy systems, environmental problems and technology markets are driving and testing most energy-economy models to their limits. To further advance bottom-up models from a multidisciplinary energy efficiency policy evaluation perspective, we review and critically analyse bottom-up energy-economy models and corresponding evaluation studies on energy efficiency policies to induce technological change. We use the household sector as a case study. Our analysis focuses on decision frameworks for technology choice, type of evaluation being carried out, treatment of market and behavioural failures, evaluated policy instruments, and key determinants used to mimic policy instruments. Although the review confirms criticism related to energy-economy models (e.g. unrealistic representation of decision-making by consumers when choosing technologies), these models nevertheless provide valuable guidance for policy evaluation related to energy efficiency. Different areas to further advance the models remain open, particularly related to modelling issues, techno-economic and environmental aspects, behavioural determinants, and policy considerations.

  3. Evaluation study of building-resolved urban dispersion models

    SciTech Connect

    Flaherty, Julia E.; Allwine, K Jerry; Brown, Mike J.; Coirier, WIlliam J.; Ericson, Shawn C.; Hansen, Olav R.; Huber, Alan H.; Kim, Sura; Leach, Martin J.; Mirocha, Jeff D.; Newsom, Rob K.; Patnaik, Gopal; Senocak, Inanc

    2007-09-10

For effective emergency response and recovery planning, it is critically important that building-resolved urban dispersion models be evaluated using field data. Several full-physics computational fluid dynamics (CFD) models and semi-empirical building-resolved (SEB) models are being advanced and applied to simulating flow and dispersion in urban areas. To obtain an estimate of the current state of readiness of these classes of models, the Department of Homeland Security (DHS) funded a study to compare five CFD models and one SEB model with tracer data from the extensive Midtown Manhattan field study (MID05) conducted during August 2005 as part of the DHS Urban Dispersion Program (UDP; Allwine and Flaherty 2007). Six days of tracer and meteorological experiments were conducted over an approximately 2-km-by-2-km area in Midtown Manhattan just south of Central Park in New York City. A subset of these data was used for model evaluations. The study was conducted such that an evaluation team, independent of the six modeling teams, provided all the input data (e.g., building data, meteorological data and tracer release rates) and run conditions for each of four experimental periods simulated. Tracer concentration data for two of the four experimental periods were provided to the modeling teams for their own evaluation of their respective models to ensure proper setup and operation. Tracer data were not provided for the other two experimental periods, allowing for an independent evaluation of the models. The tracer concentrations resulting from the model simulations were provided to the evaluation team in a standard format for consistency in inter-comparing model results. An overview of the model evaluation approach will be given, followed by a discussion of the qualitative comparison of the respective models with the field data. Future model development efforts needed to address modeling gaps identified in this study will also be discussed.

  4. Novel methods to evaluate fracture risk models

    PubMed Central

    Donaldson, M.G.; Cawthon, P. M.; Schousboe, J.T.; Ensrud, K.E.; Lui, L.Y.; Cauley, J.A.; Hillier, T.A.; Taylor, B.C.; Hochberg, M.C.; Bauer, D.C.; Cummings, S.R.

    2013-01-01

Fracture prediction models help identify individuals at high risk who may benefit from treatment. The Area Under the Curve (AUC) is commonly used to compare prediction models. However, the AUC has limitations and may miss important differences between models. Novel reclassification methods quantify how accurately models classify patients who benefit from treatment and the proportion of patients above/below treatment thresholds. We applied two reclassification methods, using the NOF treatment thresholds, to compare two risk models: femoral neck BMD and age ("simple model") and FRAX ("FRAX model"). The Pepe method classifies based on case/non-case status and examines the proportion of each above and below thresholds. The Cook method examines fracture rates above and below thresholds. We applied these to the Study of Osteoporotic Fractures. There were 6036 (1037 fractures) and 6232 (389 fractures) participants with complete data for major osteoporotic and hip fracture, respectively. The two models had similar AUCs for major osteoporotic fracture (0.68 vs. 0.69) and hip fracture (0.75 vs. 0.76). In contrast, using reclassification methods, each model classified a substantial number of women differently. Using the Pepe method, the FRAX model (vs. the simple model) missed treating 70 (7%) cases of major osteoporotic fracture but avoided treating 285 (6%) non-cases. For hip fracture, the FRAX model missed treating 31 (8%) cases but avoided treating 1026 (18%) non-cases. The Cook method (both models, both fracture outcomes) showed similar fracture rates above/below the treatment thresholds. Compared with the AUC, the new methods provide more detailed information about how models classify patients. PMID:21351143
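The two kinds of metric contrasted in the abstract can both be computed simply: the AUC via the Mann-Whitney statistic, and a Pepe-style reclassification summary as the fraction of cases and non-cases above a treatment threshold. The risk scores and the 20% threshold below are invented for illustration, not taken from the study.

```python
def auc(scores_cases, scores_controls):
    """Area under the ROC curve via the Mann-Whitney U statistic."""
    wins = sum(
        1.0 if c > n else 0.5 if c == n else 0.0
        for c in scores_cases for n in scores_controls
    )
    return wins / (len(scores_cases) * len(scores_controls))

def above_threshold_fraction(scores, threshold):
    """Pepe-style summary: fraction of a group at or above the treatment threshold."""
    return sum(s >= threshold for s in scores) / len(scores)

# Hypothetical predicted fracture risks with a 20% treatment threshold.
cases = [0.25, 0.31, 0.18, 0.22]
controls = [0.10, 0.19, 0.05, 0.21, 0.08]
print(auc(cases, controls))
print(above_threshold_fraction(cases, 0.20), above_threshold_fraction(controls, 0.20))
```

Two models can share nearly identical AUCs yet place very different fractions of cases and non-cases above the threshold, which is the point the reclassification methods make.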

  5. Evaluation of stochastic reservoir operation optimization models

    NASA Astrophysics Data System (ADS)

    Celeste, Alcigeimes B.; Billib, Max

    2009-09-01

    This paper investigates the performance of seven stochastic models used to define optimal reservoir operating policies. The models are based on implicit (ISO) and explicit stochastic optimization (ESO) as well as on the parameterization-simulation-optimization (PSO) approach. The ISO models include multiple regression, two-dimensional surface modeling and a neuro-fuzzy strategy. The ESO model is the well-known and widely used stochastic dynamic programming (SDP) technique. The PSO models comprise a variant of the standard operating policy (SOP), reservoir zoning, and a two-dimensional hedging rule. The models are applied to the operation of a single reservoir damming an intermittent river in northeastern Brazil. The standard operating policy is also included in the comparison and operational results provided by deterministic optimization based on perfect forecasts are used as a benchmark. In general, the ISO and PSO models performed better than SDP and the SOP. In addition, the proposed ISO-based surface modeling procedure and the PSO-based two-dimensional hedging rule showed superior overall performance as compared with the neuro-fuzzy approach.
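The standard operating policy (SOP) used as a baseline in the comparison is simple to state: release the demand if enough water is available, otherwise release whatever is there, and spill any excess above capacity. The sketch below is a simplified single-reservoir illustration with invented volumes, not the authors' implementation.

```python
def sop_release(storage, inflow, demand, capacity):
    """Standard operating policy for one time step.

    Meets demand if water is available; any water above reservoir
    capacity is spilled. Returns (total release, new storage).
    """
    available = storage + inflow
    release = min(demand, available)
    new_storage = available - release
    spill = max(0.0, new_storage - capacity)
    return release + spill, new_storage - spill

# Run three time steps with illustrative inflows (volume units).
storage = 50.0
for inflow in [30.0, 5.0, 0.0]:
    release, storage = sop_release(storage, inflow, demand=20.0, capacity=100.0)
    print(release, storage)
```

Hedging rules modify exactly this policy: they deliberately release less than demand when storage is low, trading a small certain deficit now against a large deficit later.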

  6. A Generalized Evaluation Model for Primary Prevention Programs.

    ERIC Educational Resources Information Center

    Barling, Phillip W.; Cramer, Kathryn D.

    A generalized evaluation model (GEM) has been developed to evaluate primary prevention program impact. The GEM model views primary prevention dynamically; delineating four structural components (program, organization, target population, system) and four developmental stages (initiation, establishment, integration, continuation). The interaction of…

  7. Mathematical model of bisubject qualimetric arbitrary objects evaluation

    NASA Astrophysics Data System (ADS)

    Morozova, A.

    2016-04-01

An analytical basis for the bisubject qualimetric evaluation of arbitrary objects is developed, together with a formalization of the model's information spaces. The model is applicable to problems of control over both technical and socio-economic systems, where objects are evaluated using systems of parameters generated by different subjects, taking into account their performance and decision-making priorities.

  8. Evaluating a Community-School Model of Social Work Practice

    ERIC Educational Resources Information Center

    Diehl, Daniel; Frey, Andy

    2008-01-01

    While research has shown that social workers can have positive impacts on students' school adjustment, evaluations of overall practice models continue to be limited. This article evaluates a model of community-school social work practice by examining its effect on problem behaviors and concerns identified by teachers and parents at referral. As…

  9. An Alternative Feedback/Evaluation Model for Outdoor Wilderness Programs.

    ERIC Educational Resources Information Center

    Dawson, R.

    Project D.A.R.E. (Development through Adventure, Responsibility and Education), an adventure-based outdoor program, uses a feedback/evaluation model, combining a learning component with a two-part participant observational model. The first phase focuses on evaluation of the child and progress made while he is in the program (stages one to four);…

  10. Testing of a Program Evaluation Model: Final Report.

    ERIC Educational Resources Information Center

    Nagler, Phyllis J.; Marson, Arthur A.

    A program evaluation model developed by Moraine Park Technical Institute (MPTI) is described in this report. Following background material, the four main evaluation criteria employed in the model are identified as program quality, program relevance to community needs, program impact on MPTI, and the transition and growth of MPTI graduates in the…

  11. Rhode Island Model Evaluation & Support System: Building Administrator. Edition III

    ERIC Educational Resources Information Center

    Rhode Island Department of Education, 2015

    2015-01-01

    Rhode Island educators believe that implementing a fair, accurate, and meaningful educator evaluation and support system will help improve teaching, learning, and school leadership. The primary purpose of the Rhode Island Model Building Administrator Evaluation and Support System (Rhode Island Model) is to help all building administrators improve.…

  12. Modelling in Evaluating a Working Life Project in Higher Education

    ERIC Educational Resources Information Center

    Sarja, Anneli; Janhonen, Sirpa; Havukainen, Pirjo; Vesterinen, Anne

    2012-01-01

    This article describes an evaluation method based on collaboration between the higher education, a care home and university, in a R&D project. The aim of the project was to elaborate modelling as a tool of developmental evaluation for innovation and competence in project cooperation. The approach was based on activity theory. Modelling enabled a…

  13. Program Evaluation: The Accountability Bridge Model for Counselors

    ERIC Educational Resources Information Center

    Astramovich, Randall L.; Coker, J. Kelly

    2007-01-01

    The accountability and reform movements in education and the human services professions have pressured counselors to demonstrate outcomes of counseling programs and services. Evaluation models developed for large-scale evaluations are generally impractical for counselors to implement. Counselors require practical models to guide them in planning…

  14. Rhode Island Model Evaluation & Support System: Support Professional. Edition II

    ERIC Educational Resources Information Center

    Rhode Island Department of Education, 2015

    2015-01-01

    Rhode Island educators believe that implementing a fair, accurate, and meaningful evaluation and support system for support professionals will help improve student outcomes. The primary purpose of the Rhode Island Model Support Professional Evaluation and Support System (Rhode Island Model) is to help all support professionals do their best work…

  15. Evaluation of spinal cord injury animal models

    PubMed Central

    Zhang, Ning; Fang, Marong; Chen, Haohao; Gou, Fangming; Ding, Mingxing

    2014-01-01

    Because there is no curative treatment for spinal cord injury, establishing an ideal animal model is important to identify injury mechanisms and develop therapies for individuals suffering from spinal cord injuries. In this article, we systematically review and analyze various kinds of animal models of spinal cord injury and assess their advantages and disadvantages for further studies. PMID:25598784

  16. SUMMARY OF COMPLEX TERRAIN MODEL EVALUATION

    EPA Science Inventory

    The Environmental Protection Agency conducted a scientific review of a set of eight complex terrain dispersion models. TRC Environmental Consultants, Inc. calculated and tabulated a uniform set of performance statistics for the models using the Cinder Cone Butte and Westvaco Luke...

  17. Evaluating Aptness of a Regression Model

    ERIC Educational Resources Information Center

    Matson, Jack E.; Huguenard, Brian R.

    2007-01-01

    The data for 104 software projects is used to develop a linear regression model that uses function points (a measure of software project size) to predict development effort. The data set is particularly interesting in that it violates several of the assumptions required of a linear model; but when the data are transformed, the data set satisfies…

  18. Model for Evaluating Teacher and Trainer Competences

    ERIC Educational Resources Information Center

    Carioca, Vito; Rodrigues, Clara; Saude, Sandra; Kokosowski, Alain; Harich, Katja; Sau-Ek, Kristiina; Georgogianni, Nicole; Levy, Samuel; Speer, Sandra; Pugh, Terence

    2009-01-01

    A lack of common criteria for comparing education and training systems makes it difficult to recognise qualifications and competences acquired in different environments and levels of training. A valid basis for defining a framework for evaluating professional performance in European educational and training contexts must therefore be established.…

  19. Evaluating Individualized Reading Programs: A Bayesian Model.

    ERIC Educational Resources Information Center

    Maxwell, Martha

    Simple Bayesian approaches can be applied to answer specific questions in evaluating an individualized reading program. A small reading and study skills program located in the counseling center of a major research university collected and compiled data on student characteristics such as class, number of sessions attended, grade point average, and…

  20. Designing and Evaluating Representations to Model Pedagogy

    ERIC Educational Resources Information Center

    Masterman, Elizabeth; Craft, Brock

    2013-01-01

    This article presents the case for a theory-informed approach to designing and evaluating representations for implementation in digital tools to support Learning Design, using the framework of epistemic efficacy as an example. This framework, which is rooted in the literature of cognitive psychology, is operationalised through dimensions of fit…

  1. Ensemble evaluation of hydrological model hypotheses

    NASA Astrophysics Data System (ADS)

    Krueger, Tobias; Freer, Jim; Quinton, John N.; MacLeod, Christopher J. A.; Bilotta, Gary S.; Brazier, Richard E.; Butler, Patricia; Haygarth, Philip M.

    2010-07-01

    It is demonstrated for the first time how model parameter, structural and data uncertainties can be accounted for explicitly and simultaneously within the Generalized Likelihood Uncertainty Estimation (GLUE) methodology. As an example application, 72 variants of a single soil moisture accounting store are tested as simplified hypotheses of runoff generation at six experimental grassland field-scale lysimeters through model rejection and a novel diagnostic scheme. The fields, designed as replicates, exhibit different hydrological behaviors which yield different model performances. For fields with low initial discharge levels at the beginning of events, the conceptual stores considered reach their limit of applicability. Conversely, one of the fields yielding more discharge than the others, but having larger data gaps, allows for greater flexibility in the choice of model structures. As a model learning exercise, the study points to a "leaking" of the fields not evident from previous field experiments. It is discussed how understanding observational uncertainties and incorporating these into model diagnostics can help appreciate the scale of model structural error.
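The GLUE methodology at the core of this study follows a simple recipe: sample many parameter sets, score each with a likelihood measure against observations, reject those below a limit of acceptance, and weight the surviving "behavioral" models by their likelihood. The sketch below applies this to a toy single-store runoff model with synthetic data; the store model, threshold, and likelihood measure (Nash-Sutcliffe efficiency) are illustrative choices, not the paper's 72 structures or diagnostics.

```python
import random

def nse(sim, obs):
    """Nash-Sutcliffe efficiency, a common GLUE likelihood measure."""
    mean_obs = sum(obs) / len(obs)
    num = sum((s - o) ** 2 for s, o in zip(sim, obs))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def linear_store(k, rain):
    """Toy soil moisture accounting store: outflow = k * storage."""
    storage, flows = 0.0, []
    for r in rain:
        storage += r
        q = k * storage
        storage -= q
        flows.append(q)
    return flows

rng = random.Random(1)
rain = [rng.uniform(0, 10) for _ in range(50)]
obs = linear_store(0.3, rain)  # synthetic "observations" with known k

# GLUE: evaluate a grid of parameters, keep behavioral ones, weight them.
candidates = [i / 100 for i in range(1, 100)]
scored = [(k, nse(linear_store(k, rain), obs)) for k in candidates]
behavioral = [(k, l) for k, l in scored if l > 0.7]  # limit of acceptance
total = sum(l for _, l in behavioral)
weights = {k: l / total for k, l in behavioral}
print(len(behavioral), max(weights, key=weights.get))
```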

  2. Evaluating Models of Human Performance: Safety-Critical Systems Applications

    NASA Technical Reports Server (NTRS)

    Feary, Michael S.

    2012-01-01

    This presentation is part of panel discussion on Evaluating Models of Human Performance. The purpose of this panel is to discuss the increasing use of models in the world today and specifically focus on how to describe and evaluate models of human performance. My presentation will focus on discussions of generating distributions of performance, and the evaluation of different strategies for humans performing tasks with mixed initiative (Human-Automation) systems. I will also discuss issues with how to provide Human Performance modeling data to support decisions on acceptability and tradeoffs in the design of safety critical systems. I will conclude with challenges for the future.

  3. Guidelines for Evaluating Ground-Water Flow Models

    USGS Publications Warehouse

    Reilly, Thomas E.; Harbaugh, Arlen W.

    2004-01-01

    Ground-water flow modeling is an important tool frequently used in studies of ground-water systems. Reviewers and users of these studies have a need to evaluate the accuracy or reasonableness of the ground-water flow model. This report provides some guidelines and discussion on how to evaluate complex ground-water flow models used in the investigation of ground-water systems. A consistent thread throughout these guidelines is that the objectives of the study must be specified to allow the adequacy of the model to be evaluated.

  4. Models for evaluating the performability of degradable computing systems

    NASA Technical Reports Server (NTRS)

    Wu, L. T.

    1982-01-01

    Recent advances in multiprocessor technology established the need for unified methods to evaluate computing systems performance and reliability. In response to this modeling need, a general modeling framework that permits the modeling, analysis and evaluation of degradable computing systems is considered. Within this framework, several user oriented performance variables are identified and shown to be proper generalizations of the traditional notions of system performance and reliability. Furthermore, a time varying version of the model is developed to generalize the traditional fault tree reliability evaluation methods of phased missions.

  5. TMDL MODEL EVALUATION AND RESEARCH NEEDS

    EPA Science Inventory

    This review examines the modeling research needs to support environmental decision-making for the 303(d) requirements for development of total maximum daily loads (TMDLs) and related programs such as 319 Nonpoint Source Program activities, watershed management, stormwater permits...

  6. Perceptual evaluation of voice source models.

    PubMed

    Kreiman, Jody; Garellek, Marc; Chen, Gang; Alwan, Abeer; Gerratt, Bruce R

    2015-07-01

    Models of the voice source differ in their fits to natural voices, but it is unclear which differences in fit are perceptually salient. This study examined the relationship between the fit of five voice source models to 40 natural voices, and the degree of perceptual match among stimuli synthesized with each of the modeled sources. Listeners completed a visual sort-and-rate task to compare versions of each voice created with the different source models, and the results were analyzed using multidimensional scaling. Neither fits to pulse shapes nor fits to landmark points on the pulses predicted observed differences in quality. Further, the source models fit the opening phase of the glottal pulses better than they fit the closing phase, but at the same time similarity in quality was better predicted by the timing and amplitude of the negative peak of the flow derivative (part of the closing phase) than by the timing and/or amplitude of peak glottal opening. Results indicate that simply knowing how (or how well) a particular source model fits or does not fit a target source pulse in the time domain provides little insight into what aspects of the voice source are important to listeners. PMID:26233000

  7. Evaluation of Surrogate Animal Models of Melioidosis

    PubMed Central

    Warawa, Jonathan Mark

    2010-01-01

    Burkholderia pseudomallei is the Gram-negative bacterial pathogen responsible for the disease melioidosis. B. pseudomallei establishes disease in susceptible individuals through multiple routes of infection, all of which may proceed to a septicemic disease associated with a high mortality rate. B. pseudomallei opportunistically infects humans and a wide range of animals directly from the environment, and modeling of experimental melioidosis has been conducted in numerous biologically relevant models including mammalian and invertebrate hosts. This review seeks to summarize published findings related to established animal models of melioidosis, with an aim to compare and contrast the virulence of B. pseudomallei in these models. The effect of the route of delivery on disease is also discussed for intravenous, intraperitoneal, subcutaneous, intranasal, aerosol, oral, and intratracheal infection methodologies, with a particular focus on how they relate to modeling clinical melioidosis. The importance of the translational validity of the animal models used in B. pseudomallei research is highlighted as these studies have become increasingly therapeutic in nature. PMID:21772830

  8. The Discrepancy Evaluation Model: A Systematic Approach for the Evaluation of Career Planning and Placement Programs.

    ERIC Educational Resources Information Center

    Buttram, Joan L.; Covert, Robert W.

    The Discrepancy Evaluation Model (DEM), developed in 1966 by Malcolm Provus, provides information for program assessment and program improvement. Under the DEM, evaluation is defined as the comparison of an actual performance to a desired standard. The DEM embodies five stages of evaluation based upon a program's natural development: program…

  9. CFD code evaluation for internal flow modeling

    NASA Technical Reports Server (NTRS)

    Chung, T. J.

    1990-01-01

Research on the computational fluid dynamics (CFD) code evaluation with emphasis on supercomputing in reacting flows is discussed. Advantages of unstructured grids, multigrids, adaptive methods, improved flow solvers, vector processing, parallel processing, and reduction of memory requirements are discussed. As examples, researchers include applications of supercomputing to reacting flow Navier-Stokes equations including shock waves and turbulence and combustion instability problems associated with solid and liquid propellants. Evaluation of codes developed by other organizations is not included. Instead, the basic criteria for accuracy and efficiency have been established, and some applications on rocket combustion have been made. Research toward an ultimate goal, the most accurate and efficient CFD code, is in progress and will continue for years to come.

  10. Model structure identification based on ensemble model evaluation

    NASA Astrophysics Data System (ADS)

    Van Hoey, S.; van der Kwast, J.; Nopens, I.; Seuntjens, P.; Pereira, F.

    2012-04-01

    Identifying the most appropriate hydrological model for a given problem is more than fitting the parameters of a fixed model structure to reproduce the measured hydrograph. Defining the most appropriate model structure is dependent of the modeling objective, the characteristics of the system under investigation and the available data. To be able to adapt to the different conditions and to propose different hypotheses of the underlying system, a flexible model structure is preferred in combination with a rejectionist analysis based on different diagnostics supporting the model objective. By confronting the model structures with the model diagnostics, an identification of the dominant processes is attempted. In the presented work, a set of 24 model structures was constructed, by combining interchangeable components representing different hypotheses of the system under study, the Nete catchment in Belgium. To address the effect of different model diagnostics on the performance of the selected model structures, an optimization of the model structures was performed to identify the parameter sets minimizing specific objective functions, focusing on low or high flow conditions. Furthermore, the different model structures are compared simultaneously within the Generalized Likelihood Uncertainty Estimation (GLUE) approach. The rejection of inadequate model structures by specifying limits of acceptance and weighting of the accepted ones is the basis of the GLUE approach. Multiple measures are combined to give guidance about the suitability of the different structures and information about the identifiability and uncertainty of the parameters is extracted from the ensemble of selected structures. The results of the optimization demonstrate the relationship between the selected objective function and the behaviour of the model structures, but also the compensation for structural differences by different parameter values resulting in similar performance. 
The optimization gives…

  11. Numerical models for the evaluation of geothermal systems

    SciTech Connect

    Bodvarsson, G.S.; Pruess, K.; Lippmann, M.J.

    1986-08-01

We have carried out detailed simulations of various fields in the USA (Baca, New Mexico; Heber, California); Mexico (Cerro Prieto); Iceland (Krafla); and Kenya (Olkaria). These simulation studies have illustrated the usefulness of numerical models for the overall evaluation of geothermal systems. The methodology for modeling the behavior of geothermal systems, different approaches to geothermal reservoir modeling and how they can be applied in comprehensive evaluation work are discussed.

  12. A common fallacy in climate model evaluation

    NASA Astrophysics Data System (ADS)

    Annan, J. D.; Hargreaves, J. C.; Tachiiri, K.

    2012-04-01

    We discuss the assessment of model ensembles such as that arising from the CMIP3 coordinated multi-model experiments. An important aspect of this is not merely the closeness of the models to observations in absolute terms but also the reliability of the ensemble spread as an indication of uncertainty. In this context, it has been widely argued that the multi-model ensemble of opportunity is insufficiently broad to adequately represent uncertainties regarding future climate change. For example, the IPCC AR4 summarises the consensus with the sentence: "Those studies also suggest that the current AOGCMs may not cover the full range of uncertainty for climate sensitivity." Similar claims have been made in the literature for other properties of the climate system, including the transient climate response and efficiency of ocean heat uptake. Comparison of model outputs with observations of the climate system forms an essential component of model assessment and is crucial for building our confidence in model predictions. However, methods for undertaking this comparison are not always clearly justified and understood. Here we show that the popular approach which forms the basis for the above claims, of comparing the ensemble spread to a so-called "observationally-constrained pdf", can be highly misleading. Such a comparison will almost certainly result in disagreement, but in reality tells us little about the performance of the ensemble. We present an alternative approach based on an assessment of the predictive performance of the ensemble, and show how it may lead to very different, and rather more encouraging, conclusions. We additionally outline some necessary conditions for an ensemble (or more generally, a probabilistic prediction) to be challenged by an observation.
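One standard way to assess the predictive reliability of an ensemble, rather than compare its spread to an "observationally-constrained pdf", is a rank histogram: if the ensemble is reliable, the observation's rank among the sorted members is uniformly distributed. The abstract does not specify the authors' diagnostic, so the sketch below is an illustrative reliability check, not their method.

```python
import random

def rank_of_observation(ensemble, obs):
    """Rank of the observation within the ensemble (0 .. len(ensemble))."""
    return sum(member < obs for member in ensemble)

rng = random.Random(0)

# A reliable ensemble: the observation is drawn from the same
# distribution as the members, so ranks should be near-uniform.
counts = [0] * 6  # 5 members -> 6 rank bins
for _ in range(3000):
    ensemble = [rng.gauss(0, 1) for _ in range(5)]
    obs = rng.gauss(0, 1)
    counts[rank_of_observation(ensemble, obs)] += 1
print(counts)
```

An over-confident ensemble piles counts into the outer bins (the observation keeps falling outside the spread), and an over-dispersive one piles them into the middle, which is the distinction absolute-spread comparisons can miss.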

  13. Experimental evaluations of the microchannel flow model

    NASA Astrophysics Data System (ADS)

    Parker, K. J.

    2015-06-01

    Recent advances have enabled a new wave of biomechanics measurements, and have renewed interest in selecting appropriate rheological models for soft tissues such as the liver, thyroid, and prostate. The microchannel flow model was recently introduced to describe the linear response of tissue to stimuli such as stress relaxation or shear wave propagation. This model postulates a power law relaxation spectrum that results from a branching distribution of vessels and channels in normal soft tissue such as liver. In this work, the derivation is extended to determine the explicit link between the distribution of vessels and the relaxation spectrum. In addition, liver tissue is modified by temperature or salinity, and the resulting changes in tissue responses (by factors of 1.5 or greater) are reasonably predicted from the microchannel flow model, simply by considering the changes in fluid flow through the modified samples. The 2 and 4 parameter versions of the model are considered, and it is shown that in some cases the maximum time constant (corresponding to the minimum vessel diameters), could be altered in a way that has major impact on the observed tissue response. This could explain why an inflamed region is palpated as a harder bump compared to surrounding normal tissue.
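The key idea of the microchannel flow model, that a branching distribution of vessel sizes yields a power-law rather than single-exponential relaxation, can be illustrated by superposing exponential decays whose time constants span several decades. The time-constant band and power-law weighting below are illustrative assumptions, not the paper's fitted parameters.

```python
import math

def relaxation(t, taus, weights):
    """Stress relaxation as a weighted sum of exponential decays,
    one decay per channel time constant."""
    return sum(w * math.exp(-t / tau) for tau, w in zip(taus, weights))

# A wide band of time constants (0.01 s .. 100 s) standing in for the
# branching distribution of vessel diameters, with power-law weights.
taus = [10 ** (k / 4) for k in range(-8, 9)]
weights = [tau ** -0.5 for tau in taus]
norm = sum(weights)
weights = [w / norm for w in weights]

for t in [0.1, 1.0, 10.0]:
    print(round(relaxation(t, taus, weights), 4))
```

Truncating the band at a larger minimum time constant (larger minimum vessel diameter) reshapes the long-time tail, which is the mechanism the paper invokes for modified tissue feeling stiffer.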

  14. Evaluation of Model Fit in Cognitive Diagnosis Models

    ERIC Educational Resources Information Center

    Hu, Jinxiang; Miller, M. David; Huggins-Manley, Anne Corinne; Chen, Yi-Hsin

    2016-01-01

    Cognitive diagnosis models (CDMs) estimate student ability profiles using latent attributes. Model fit to the data needs to be ascertained in order to determine whether inferences from CDMs are valid. This study investigated the usefulness of some popular model fit statistics to detect CDM fit including relative fit indices (AIC, BIC, and CAIC),…
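
    The relative fit indices named here are standard penalized-log-likelihood criteria; lower values indicate better fit. A minimal sketch with hypothetical numbers (the model names, log-likelihoods, and parameter counts are invented for illustration, not taken from the study):

```python
import math

def aic(loglik, k):
    """Akaike information criterion: -2 log L + 2k."""
    return -2 * loglik + 2 * k

def bic(loglik, k, n):
    """Bayesian information criterion: -2 log L + k ln(n)."""
    return -2 * loglik + k * math.log(n)

def caic(loglik, k, n):
    """Consistent AIC: -2 log L + k (ln(n) + 1)."""
    return -2 * loglik + k * (math.log(n) + 1)

# Two hypothetical CDMs fit to n = 500 examinees; the model with the
# lower index value is preferred by that criterion.
m1 = {"loglik": -5210.0, "k": 30}   # sparsely parameterized model
m2 = {"loglik": -5150.0, "k": 80}   # more heavily parameterized model
models = {"m1": m1, "m2": m2}
prefer_bic = min(models, key=lambda name: bic(models[name]["loglik"], models[name]["k"], 500))
```

    In this toy comparison AIC favors the larger model while BIC's heavier penalty favors the smaller one, which is exactly the kind of disagreement such studies examine.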

  15. Evaluation of biological models using Spacelab

    NASA Technical Reports Server (NTRS)

    Tollinger, D.; Williams, B. A.

    1980-01-01

    Biological models of hypogravity effects are described, including the cardiovascular-fluid shift, musculoskeletal, embryological and space sickness models. These models predict such effects as loss of extracellular fluid and electrolytes, decrease in red blood cell mass, and the loss of muscle and bone mass in weight-bearing portions of the body. Experimentation in Spacelab by the use of implanted electromagnetic flow probes, by fertilizing frog eggs in hypogravity and fixing the eggs at various stages of early development and by assessing the role of the vestibulocular reflex arc in space sickness is suggested. It is concluded that the use of small animals eliminates the uncertainties caused by corrective or preventive measures employed with human subjects.

  16. Source term evaluation for combustion modeling

    NASA Technical Reports Server (NTRS)

    Sussman, Myles A.

    1993-01-01

    A modification is developed for application to the source terms used in combustion modeling. The modification accounts for the error of the finite difference scheme in regions where chain-branching chemical reactions produce exponential growth of species densities. The modification is first applied to a one-dimensional scalar model problem. It is then generalized to multiple chemical species, and used in quasi-one-dimensional computations of shock-induced combustion in a channel. Grid refinement studies demonstrate the improved accuracy of the method using this modification. The algorithm is applied in two spatial dimensions and used in simulations of steady and unsteady shock-induced combustion. Comparisons with ballistic range experiments give confidence in the numerical technique and the 9-species hydrogen-air chemistry model.

  17. Evaluating models of climate and forest vegetation

    NASA Technical Reports Server (NTRS)

    Clark, James S.

    1992-01-01

    Understanding how the biosphere may respond to increasing trace gas concentrations in the atmosphere requires models that contain vegetation responses to regional climate. Most of the processes ecologists study in forests, including trophic interactions, nutrient cycling, and disturbance regimes, and vital components of the world economy, such as forest products and agriculture, will be influenced in potentially unexpected ways by changing climate. These vegetation changes affect climate in the following ways: changing C, N, and S pools; trace gases; albedo; and water balance. The complexity of the indirect interactions among variables that depend on climate, together with the range of different space/time scales that best describe these processes, make the problems of modeling and prediction enormously difficult. These problems of predicting vegetation response to climate warming and potential ways of testing model predictions are the subjects of this chapter.

  18. Modeling procedures for handling qualities evaluation of flexible aircraft

    NASA Technical Reports Server (NTRS)

    Govindaraj, K. S.; Eulrich, B. J.; Chalk, C. R.

    1981-01-01

    This paper presents simplified modeling procedures to evaluate the impact of flexible modes and the unsteady aerodynamic effects on the handling qualities of Supersonic Cruise Aircraft (SCR). The modeling procedures involve obtaining reduced order transfer function models of SCR vehicles, including the important flexible mode responses and unsteady aerodynamic effects, and conversion of the transfer function models to time domain equations for use in simulations. The use of the modeling procedures is illustrated by a simple example.

  19. Evaluation of a hydrological model based on Bidirectional Reach (BReach)

    NASA Astrophysics Data System (ADS)

    Van Eerdenbrugh, Katrien; Van Hoey, Stijn; Verhoest, Niko E. C.

    2016-04-01

    Evaluation and discrimination of model structures are crucial to ensure an appropriate use of hydrological models. When evaluating model results by aggregating their quality in (a subset of) individual observations, overall results of this analysis sometimes conceal important detailed information about model structural deficiencies. Analyzing model results within their local (time) context can uncover this detailed information. In this research, a methodology called Bidirectional Reach (BReach) is proposed to evaluate and analyze results of a hydrological model by assessing the maximum left and right reach in each observation point that is used for model evaluation. These maximum reaches express the capability of the model to describe a subset of the evaluation data both in the direction of the previous (left) and of the following data (right). This capability is evaluated on two levels. First, on the level of individual observations, the combination of a parameter set and an observation is classified as non-acceptable if the deviation between the accompanying model result and the measurement exceeds observational uncertainty. Second, the behavior in a sequence of observations is evaluated by means of a tolerance degree. This tolerance degree expresses the condition for satisfactory model behavior in a data series and is defined by the percentage of observations within this series that can have non-acceptable model results. Based on both criteria, the maximum left and right reaches of a model in an observation represent the data points, in the direction of the previous and the following observations respectively, beyond which no sampled parameter set both satisfies the tolerance degree and yields an acceptable deviation. After assessing these reaches for a variety of tolerance degrees, results can be plotted in a combined BReach plot that shows temporal changes in the behavior of model results.
The methodology is applied to a Probability Distributed Model (PDM) of the river
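
    The reach computation described above can be sketched for a single parameter set. This is a deliberately simplified schematic: it uses a fixed absolute uncertainty bound and one simulation, whereas BReach proper works over many sampled parameter sets with per-observation uncertainty, and a left reach is computed symmetrically:

```python
def acceptable(sim_val, obs_val, tol_abs=0.5):
    """A model result is acceptable if its deviation from the
    observation stays within the observational uncertainty
    (here a fixed bound, for illustration)."""
    return abs(sim_val - obs_val) <= tol_abs

def right_reach(sim, obs, i, tolerance=0.1, tol_abs=0.5):
    """Furthest index j >= i such that at most a fraction `tolerance`
    of the points in sim[i..j] are non-acceptable. The left reach is
    the mirror image, scanning toward earlier observations."""
    bad = 0
    reach = i
    for j in range(i, len(obs)):
        if not acceptable(sim[j], obs[j], tol_abs):
            bad += 1
        if bad <= tolerance * (j - i + 1):
            reach = j
    return reach

# One bad simulated point at index 3 limits the strict (0%) reach,
# while a 20% tolerance degree lets the reach extend past it.
obs = [0.0] * 10
sim = [0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
strict = right_reach(sim, obs, 0, tolerance=0.0)
lenient = right_reach(sim, obs, 0, tolerance=0.2)
```

    Plotting such reaches for every observation point and several tolerance degrees yields the combined BReach plot the abstract describes.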

  20. TAMPA BAY MODEL EVALUATION AND ASSESSMENT

    EPA Science Inventory

    A long term goal of multimedia environmental management is to achieve sustainable ecological resources. Progress towards this goal rests on a foundation of science-based methods and data integrated into predictive multimedia, multi-stressor open architecture modeling systems. The...

  1. Evaluating a Model of Youth Physical Activity

    ERIC Educational Resources Information Center

    Heitzler, Carrie D.; Lytle, Leslie A.; Erickson, Darin J.; Barr-Anderson, Daheia; Sirard, John R.; Story, Mary

    2010-01-01

    Objective: To explore the relationship between social influences, self-efficacy, enjoyment, and barriers and physical activity. Methods: Structural equation modeling examined relationships between parent and peer support, parent physical activity, individual perceptions, and objectively measured physical activity using accelerometers among a…

  2. AERMOD: MODEL FORMULATION AND EVALUATION RESULTS

    EPA Science Inventory

    AERMOD is an advanced plume model that incorporates updated treatments of the boundary layer theory, understanding of turbulence and dispersion, and includes handling of terrain interactions. This paper presents an overview of AERMOD's features relative to ISCST3.

    AERM...

  3. Evaluating the Pedagogical Potential of Hybrid Models

    ERIC Educational Resources Information Center

    Levin, Tzur; Levin, Ilya

    2013-01-01

    The paper examines how the use of hybrid models--that consist of interacting continuous and discrete processes--may assist in teaching system thinking. We report an experiment in which undergraduate students were asked to choose between a hybrid and a continuous solution for a number of control problems. A correlation has been found between…

  4. Experiences in evaluating regional air quality models

    NASA Astrophysics Data System (ADS)

    Liu, Mei-Kao; Greenfield, Stanley M.

    Any area of the world concerned with the health and welfare of its people and the viability of its ecological system must eventually address the question of the control of air pollution. This is true in developed countries as well as countries that are undergoing a considerable degree of industrialization. The control or limitation of the emissions of a pollutant can be very costly. To avoid ineffective or unnecessary control, the nature of the problem must be fully understood and the relationship between source emissions and ambient concentrations must be established. Mathematical models, while admittedly containing large uncertainties, can be used to examine alternatives of emission restrictions for achieving safe ambient concentrations. The focus of this paper is to summarize our experiences with modeling regional air quality in the United States and Western Europe. The following modeling studies are drawn upon: future SO2 and sulfate distributions and projected acidic deposition as related to coal development in the northern Great Plains in the U.S.; analysis of regional ozone and sulfate episodes in the northeastern U.S.; analysis of the regional ozone problem in western Europe in support of alternative emission control strategies; analysis of distributions of toxic chemicals in the Southeast Ohio River Valley in support of the design of a monitoring network for human exposure. Collectively, these prior modeling analyses can be invaluable in examining similar problems in other parts of the world, such as the Pacific Rim in Asia.

  5. Evaluation of regional-scale receptor modeling.

    PubMed

    Lowenthal, Douglas H; Watson, John G; Koracin, Darko; Chen, L W Antony; Dubois, David; Vellore, Ramesh; Kumar, Naresh; Knipping, Eladio M; Wheeler, Neil; Craig, Kenneth; Reid, Stephen

    2010-01-01

    The ability of receptor models to estimate regional contributions to fine particulate matter (PM2.5) was assessed with synthetic, speciated datasets at Brigantine National Wildlife Refuge (BRIG) in New Jersey and Great Smoky Mountains National Park (GRSM) in Tennessee. Synthetic PM2.5 chemical concentrations were generated for the summer of 2002 using the Community Multiscale Air Quality (CMAQ) model and chemically speciated PM2.5 source profiles from the U.S. Environmental Protection Agency (EPA)'s SPECIATE and Desert Research Institute's source profile databases. CMAQ estimated the "true" contributions of seven regions in the eastern United States to chemical species concentrations and individual source contributions to primary PM2.5 at both sites. A seven-factor solution by the positive matrix factorization (PMF) receptor model explained approximately 99% of the variability in the data at both sites. At BRIG, PMF captured the first four major contributing sources (including a secondary sulfate factor), although diesel and gasoline vehicle contributions were not separated. However, at GRSM, the resolved factors did not correspond well to major PM2.5 sources. There were no correlations between PMF factors and regional contributions to sulfate at either site. Unmix produced five- and seven-factor solutions, including a secondary sulfate factor, at both sites. Some PMF factors were combined or missing in the Unmix factors. The trajectory mass balance regression (TMBR) model apportioned sulfate concentrations to the seven source regions using Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) trajectories based on Meteorological Model Version 5 (MM5) and Eta Data Simulation System (EDAS) meteorological input. The largest estimated sulfate contributions at both sites were from the local regions; this agreed qualitatively with the true regional apportionments. 
Estimated regional contributions depended on the starting elevation of the trajectories and on
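
    PMF factorizes the samples-by-species concentration matrix into non-negative factor contributions and source profiles. The sketch below illustrates that factorization idea with plain multiplicative-update NMF on synthetic data; real PMF additionally weights each residual by the measurement uncertainty of the species, which this toy omits:

```python
import numpy as np

def nmf(X, k, iters=500, seed=0):
    """Minimal non-negative matrix factorization X ~ G @ F via
    Lee-Seung multiplicative updates. G holds factor contributions
    per sample; F holds chemical source profiles."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    G = rng.random((n, k)) + 1e-3
    F = rng.random((k, m)) + 1e-3
    for _ in range(iters):
        F *= (G.T @ X) / (G.T @ G @ F + 1e-12)
        G *= (X @ F.T) / (G @ F @ F.T + 1e-12)
    return G, F

# Synthetic two-source dataset: 50 samples, 6 chemical species.
rng = np.random.default_rng(1)
true_profiles = np.array([[5, 1, 0, 2, 0, 1],
                          [0, 2, 4, 0, 3, 0]], dtype=float)
true_contrib = rng.random((50, 2))
X = true_contrib @ true_profiles
G, F = nmf(X, k=2)
explained = 1 - np.linalg.norm(X - G @ F) ** 2 / np.linalg.norm(X) ** 2
```

    As the study found, a high fraction of explained variability (here near 100% on exact synthetic data) does not by itself guarantee that the resolved factors correspond to physically meaningful sources or regions.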

  6. Evaluation information integration model on book purchasing bids

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Jiao, Yang

    2011-12-01

    A multi-attribute decision model is presented, based on a set of indicators for book-procurement bidders and on the characteristics of joint decision-making by multiple evaluators. For each evaluator, an ideal solution and a negative ideal solution are defined, and the relative closeness of each supplier is computed for each evaluator. The ideal solution and negative ideal solution of the evaluation committee are then defined based on the group closeness matrix, and the final supplier evaluation results are calculated for the decision-making group. The model is demonstrated through an application to experimental data.
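
    The ideal/negative-ideal construction with relative closeness is TOPSIS-style. A minimal sketch of the two-stage idea follows; for simplicity the group stage here just averages the evaluators' closeness scores, whereas the paper instead builds a second ideal-solution stage on the group closeness matrix, and the scores are invented:

```python
def closeness(scores):
    """Relative closeness of each alternative to the ideal solution
    for one evaluator's score matrix (rows = suppliers, columns =
    benefit criteria). The ideal (negative-ideal) solution takes the
    best (worst) value of each criterion; closeness is
    d_neg / (d_pos + d_neg)."""
    n_crit = len(scores[0])
    ideal = [max(row[j] for row in scores) for j in range(n_crit)]
    nadir = [min(row[j] for row in scores) for j in range(n_crit)]
    result = []
    for row in scores:
        d_pos = sum((x - i) ** 2 for x, i in zip(row, ideal)) ** 0.5
        d_neg = sum((x - a) ** 2 for x, a in zip(row, nadir)) ** 0.5
        result.append(d_neg / (d_pos + d_neg))
    return result

def group_ranking(evaluators):
    """Rank suppliers by mean closeness across evaluators
    (a simplified stand-in for the paper's group stage)."""
    per_eval = [closeness(s) for s in evaluators]
    n_sup = len(per_eval[0])
    mean = [sum(c[i] for c in per_eval) / len(per_eval) for i in range(n_sup)]
    return sorted(range(n_sup), key=lambda i: -mean[i])

# Three evaluators score three suppliers on two benefit criteria:
evals = [
    [[8, 7], [6, 9], [4, 5]],
    [[9, 6], [7, 8], [5, 5]],
    [[8, 8], [6, 7], [3, 6]],
]
ranking = group_ranking(evals)
```

    The output is an ordering of supplier indices from most to least preferred by the committee.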

  7. Teachers' Development Model to Authentic Assessment by Empowerment Evaluation Approach

    ERIC Educational Resources Information Center

    Charoenchai, Charin; Phuseeorn, Songsak; Phengsawat, Waro

    2015-01-01

    The purposes of this study were 1) to study teachers' authentic assessment practices, teachers' comprehension of authentic assessment, and teachers' needs for authentic assessment development; 2) to create a teacher development model; 3) to try out the teacher development model; and 4) to evaluate the effectiveness of the teacher development model. The research is divided into 4…

  8. An Evaluation of Some Models for Culture-Fair Selection.

    ERIC Educational Resources Information Center

    Petersen, Nancy S.; Novick, Melvin R.

    Models proposed by Cleary, Thorndike, Cole, Linn, Einhorn and Bass, Darlington, and Gross and Su for analyzing bias in the use of tests in a selection strategy are surveyed. Several additional models are also introduced. The purpose is to describe, compare, contrast, and evaluate these models while extracting such useful ideas as may be found in…

  9. Automated expert modeling for automated student evaluation.

    SciTech Connect

    Abbott, Robert G.

    2006-01-01

    The 8th International Conference on Intelligent Tutoring Systems provides a leading international forum for the dissemination of original results in the design, implementation, and evaluation of intelligent tutoring systems and related areas. The conference draws researchers from a broad spectrum of disciplines ranging from artificial intelligence and cognitive science to pedagogy and educational psychology. The conference explores the increasing real-world impact of intelligent tutoring systems on an increasingly global scale. Improved authoring tools and learning object standards enable fielding systems and curricula in real world settings on an unprecedented scale. Researchers deploy ITSs in ever larger studies and increasingly use data from real students, tasks, and settings to guide new research. With high volumes of student interaction data, data mining, and machine learning, tutoring systems can learn from experience and improve their teaching performance. The increasing number of realistic evaluation studies also broadens researchers' knowledge about the educational contexts for which ITSs are best suited. At the same time, researchers explore how to expand and improve ITS/student communications, for example, how to achieve more flexible and responsive discourse with students, help students integrate Web resources into learning, use mobile technologies and games to enhance student motivation and learning, and address multicultural perspectives.

  10. Case study of an evaluation coaching model: exploring the role of the evaluator.

    PubMed

    Ensminger, David C; Kallemeyn, Leanne M; Rempert, Tania; Wade, James; Polanin, Megan

    2015-04-01

    This study examined the role of the external evaluator as a coach. More specifically, using an evaluative inquiry framework (Preskill & Torres, 1999a; Preskill & Torres, 1999b), it explored the types of coaching that an evaluator employed to promote individual, team and organizational learning. The study demonstrated that evaluation coaching provided a viable means for an organization with a limited budget to conduct evaluations through support of a coach. It also demonstrated how the coaching processes supported the development of evaluation capacity within the organization. By examining coaching models outside of the field of evaluation, this study identified two forms of coaching--results coaching and developmental coaching--that promoted evaluation capacity building and have not been previously discussed in the evaluation literature. PMID:25677616

  11. Rule based design of conceptual models for formative evaluation

    NASA Technical Reports Server (NTRS)

    Moore, Loretta A.; Chang, Kai; Hale, Joseph P.; Bester, Terri; Rix, Thomas; Wang, Yaowen

    1994-01-01

    A Human-Computer Interface (HCI) Prototyping Environment with embedded evaluation capability has been investigated. This environment will be valuable in developing and refining HCI standards and evaluating program/project interface development, especially Space Station Freedom on-board displays for payload operations. This environment, which allows for rapid prototyping and evaluation of graphical interfaces, includes the following four components: (1) a HCI development tool; (2) a low fidelity simulator development tool; (3) a dynamic, interactive interface between the HCI and the simulator; and (4) an embedded evaluator that evaluates the adequacy of a HCI based on a user's performance. The embedded evaluation tool collects data while the user is interacting with the system and evaluates the adequacy of an interface based on a user's performance. This paper describes the design of conceptual models for the embedded evaluation system using a rule-based approach.

  13. A MULTILAYER BIOCHEMICAL DRY DEPOSITION MODEL 2. MODEL EVALUATION

    EPA Science Inventory

    The multilayer biochemical dry deposition model (MLBC) described in the accompanying paper was tested against half-hourly eddy correlation data from six field sites under a wide range of climate conditions with various plant types. Modeled CO2, O3, SO2<...

  14. Evaluation of an Infiltration Model with Microchannels

    NASA Astrophysics Data System (ADS)

    Garcia-Serrana, M.; Gulliver, J. S.; Nieber, J. L.

    2015-12-01

    The goal of this research is to develop and demonstrate the means by which roadside drainage ditches and filter strips can be assigned the appropriate volume reduction credits by infiltration. These vegetated surfaces convey stormwater, infiltrate runoff, and filter and/or settle solids, and are often placed along roads and other impermeable surfaces. Infiltration rates are typically calculated by assuming that water flows as sheet flow over the slope. However, at most rainfall intensities water flows in narrow, shallow micro-channels and concentrates in depressions. This channelization reduces the fraction of the soil surface covered with the water coming from the road. The non-uniform distribution of water along a hillslope directly affects infiltration. First, laboratory and field experiments have been conducted to characterize the spatial pattern of flow for stormwater runoff entering the sloped surface of a drainage ditch. In the laboratory experiments, different micro-topographies were tested over bare sandy loam soil: a smooth surface, and surfaces with three and five parallel rills. All the surfaces experienced erosion; the initially smooth surface developed a system of channels over time that increased runoff generation. On average, the initially smooth surfaces infiltrated 10% more volume than the initially rilled surfaces. The field experiments were performed on the side slope of established roadside drainage ditches. Three rates of runoff from a road surface into the swale slope were tested, representing runoff from 1-, 2-, and 10-year storm events. The average percentage of input runoff water infiltrated in the 32 experiments was 67%, with a 21% standard deviation. Multiple measurements of saturated hydraulic conductivity were conducted to account for its spatial variability. Second, a rate-based coupled infiltration and overland flow model has been designed that calculates stormwater infiltration efficiency of swales.
The Green-Ampt-Mein-Larson assumptions were
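
    The classic Green-Ampt capacity equation with the Mein-Larson ponding logic for steady rainfall can be sketched as below. All parameter values are illustrative placeholders, not the study's calibrated soil properties:

```python
def green_ampt_rate(F, Ks=1.0, psi=10.0, dtheta=0.3):
    """Green-Ampt infiltration capacity f(F) = Ks * (1 + psi*dtheta/F),
    where F is cumulative infiltration (cm), Ks saturated hydraulic
    conductivity (cm/h), psi wetting-front suction head (cm), and
    dtheta the soil moisture deficit."""
    return Ks * (1 + psi * dtheta / F)

def cumulative_infiltration(rain_rate, dt=0.01, t_end=2.0, **params):
    """Explicit time-stepping of the Mein-Larson scheme: before
    ponding (capacity > rain rate) all rainfall infiltrates; after
    ponding, infiltration is capacity-limited."""
    F, t = 1e-6, 0.0
    while t < t_end:
        f = min(rain_rate, green_ampt_rate(F, **params))
        F += f * dt
        t += dt
    return F

# Cumulative infiltration (cm) after 2 h of 5 cm/h rainfall:
F2 = cumulative_infiltration(rain_rate=5.0)
```

    In a swale model like the one described, the effective infiltrating area would further be reduced by the channelized fraction of the surface rather than assuming full sheet-flow coverage.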

  15. Evaluating Conceptual Site Models with Multicomponent Reactive Transport Modeling

    NASA Astrophysics Data System (ADS)

    Dai, Z.; Heffner, D.; Price, V.; Temples, T. J.; Nicholson, T. J.

    2005-05-01

    Modeling ground-water flow and multicomponent reactive chemical transport is a useful approach for testing conceptual site models and assessing the design of monitoring networks. A graded approach with three conceptual site models is presented here with a field case of tetrachloroethene (PCE) transport and biodegradation near Charleston, SC. The first model assumed a one-layer homogeneous aquifer structure with semi-infinite boundary conditions, in which an analytical solution of the reactive solute transport can be obtained with BIOCHLOR (Aziz et al., 1999). Due to the over-simplification of the aquifer structure, this simulation cannot reproduce the monitoring data. In the second approach we used GMS to develop the conceptual site model, a layer-cake multi-aquifer system, and applied a numerical module (MODFLOW and RT3D within GMS) to solve the flow and reactive transport problem. The results were better than the first approach but still did not fit the plume well because the geological structures were still inadequately defined. In the third approach we developed a complex conceptual site model by interpreting log and seismic survey data with Petra and PetraSeis. We detected a major channel and a younger channel cutting through the PCE source area. These channels control the local ground-water flow direction and provide a preferential chemical transport pathway. Results using the third conceptual site model agree well with the monitoring concentration data. This study confirms that the bias and uncertainty from inadequate conceptual models are much larger than those introduced from an inadequate choice of model parameter values (Neuman and Wierenga, 2003; Meyer et al., 2004). Numerical modeling in this case provides key insight into the hydrogeology and geochemistry of the field site for predicting contaminant transport in the future. Finally, critical monitoring points and performance indicator parameters are selected for future monitoring to confirm system

  16. [Effect evaluation of three cell culture models].

    PubMed

    Wang, Aiguo; Xia, Tao; Yuan, Jing; Chen, Xuemin

    2003-11-01

    Primary rat hepatocytes were cultured using three kinds of models in vitro, and enzyme leakage, albumin secretion, and cytochrome P450 1A (CYP 1A) activity were observed. The results showed that the level of LDH in the medium decreased over the period of culture. However, at day 5 LDH showed a significant increase in monolayer culture (MC), while after 8 days LDH was not detected in sandwich culture (SC). The levels of AST and ALT in the medium did not change significantly over the investigated time. The basic CYP 1A activity gradually decreased with time in MC and SC. The decline of CYP 1A in rat hepatocytes was faster in MC than in SC. This effect was partially reversed by using cytochrome P450 (CYP450) inducers such as omeprazol and 3-methylcholanthrene (3-MC), and the CYP 1A induction was always higher in MC than in SC. Basic CYP 1A activity in the bioreactor was maintained for over 2 weeks, and the highest albumin production was observed in the bioreactor, followed by SC and MC. In conclusion, our results clearly indicated that each of the models has advantages and disadvantages and can address different questions in the metabolism of toxicants and drugs. PMID:14963896

  17. Evaluation Of Hemolysis Models Using A High Fidelity Blood Model

    NASA Astrophysics Data System (ADS)

    Ezzeldin, Hussein; de Tullio, Marco; Solares, Santiago; Balaras, Elias

    2012-11-01

    Red blood cell (RBC) hemolysis is a critical concern in the design of heart-assist devices, such as prosthetic heart valves (PHVs). To date, a few analytical and numerical models have been proposed to relate either hydrodynamic stresses or RBC strains, resulting from the external hydrodynamic loading, to the expected degree of hemolysis as a function of time. Such models are based on either "lumped" descriptions of fluid stresses or an abstract analytical-numerical representation of the RBC that relies on simple geometrical assumptions. We introduce two new approaches based on an existing coarse-grained (CG) RBC structural model, which is utilized to explore the physics underlying each hemolysis model by applying a set of devised computational experiments. All the models are then subjected to pathlines calculated for a realistic PHV to predict the level of RBC trauma. Our results highlight the strengths and weaknesses of each approach and identify the key gaps that should be addressed in the development of new models. Finally, we present a two-layer CG model, coupling the spectrin network and the lipid bilayer, which provides invaluable information pertaining to RBC local strains and hence hemolysis. We acknowledge the support of NSF OCI-0904920 and CMMI-0841840 grants. Computing time was provided by XSEDE.

  18. EVALUATION OF MULTIPLE PHARMACOKINETIC MODELING STRUCTURES FOR TRICHLOROETHYLENE

    EPA Science Inventory

    A series of PBPK models were developed for trichloroethylene (TCE) to evaluate biological processes that may affect the absorption, distribution, metabolism and excretion (ADME) of TCE and its metabolites.

  19. Incorporating principal component analysis into air quality model evaluation

    EPA Science Inventory

    The efficacy of standard air quality model evaluation techniques is becoming compromised as the simulation periods continue to lengthen in response to ever increasing computing capacity. Accordingly, the purpose of this paper is to demonstrate a statistical approach called Princi...

  20. Evaluating Rapid Models for High-Throughput Exposure Forecasting (SOT)

    EPA Science Inventory

    High throughput exposure screening models can provide quantitative predictions for thousands of chemicals; however these predictions must be systematically evaluated for predictive ability. Without the capability to make quantitative, albeit uncertain, forecasts of exposure, the ...

  1. Solid rocket booster performance evaluation model. Volume 4: Program listing

    NASA Technical Reports Server (NTRS)

    1974-01-01

    All subprograms or routines associated with the solid rocket booster performance evaluation model are indexed in this computer listing. An alphanumeric list of each routine in the index is provided in a table of contents.

  2. Evaluation of Model Operational Analyses during DYNAMO

    NASA Astrophysics Data System (ADS)

    Ciesielski, Paul; Johnson, Richard

    2013-04-01

    A primary component of the observing system in the DYNAMO-CINDY2011-AMIE field campaign was an atmospheric sounding network composed of two sounding quadrilaterals, one north and one south of the equator over the central Indian Ocean. During the experiment a major effort was undertaken to ensure the real-time transmission of these data onto the GTS (Global Telecommunication System) for dissemination to the operational centers (ECMWF, NCEP, JMA, etc.). Preliminary estimates indicate that ~95% of the soundings from the enhanced sounding network were successfully transmitted and potentially used in their data assimilation systems. Because of the wide use of operational and reanalysis products (e.g., in process studies, initializing numerical simulations, construction of large-scale forcing datasets for CRMs, etc.), their validity will be examined by comparing a variety of basic and diagnosed fields from two operational analyses (ECMWF and NCEP) to similar analyses based solely on sounding observations. Particular attention will be given to the vertical structures of apparent heating (Q1) and drying (Q2) from the operational analyses (OA), which are strongly influenced by cumulus parameterizations, a source of model infidelity. Preliminary results indicate that the OA products did a reasonable job of capturing the mean and temporal characteristics of convection during the DYNAMO enhanced observing period, which included the passage of two significant MJO events during the October-November 2011 period. For example, temporal correlations between Q2-budget derived rainfall from the OA products and that estimated from the TRMM satellite (i.e., the 3B42V7 product) were greater than 0.9 over the Northern Sounding Array of DYNAMO. However, closer inspection of the budget profiles shows notable differences between the OA products and the sounding-derived results in low-level (surface to 700 hPa) heating and drying structures.
This presentation will examine these differences and
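
    The "Q2-budget derived rainfall" mentioned above follows from the standard Yanai-type column moisture budget: vertically integrating the apparent moisture sink Q2 gives precipitation minus surface evaporation. A minimal sketch with an idealized drying profile (values invented for illustration; sign convention: positive Q2 means net drying):

```python
G = 9.81    # gravitational acceleration, m s^-2
LV = 2.5e6  # latent heat of vaporization, J kg^-1

def q2_rainfall(q2_profile, p_levels, evap=0.0):
    """Convert an apparent moisture sink profile Q2 (J kg^-1 s^-1 on
    pressure levels in Pa) to a rainfall rate via
        P = E + (1 / (g * Lv)) * integral(Q2 dp),
    using trapezoidal integration. Returns kg m^-2 s^-1
    (numerically equal to mm s^-1 of rain)."""
    total = 0.0
    for i in range(len(p_levels) - 1):
        dp = abs(p_levels[i + 1] - p_levels[i])
        total += 0.5 * (q2_profile[i] + q2_profile[i + 1]) * dp
    return evap + total / (G * LV)

# Idealized Q2 profile peaking in the lower troposphere:
p = [100e2, 300e2, 500e2, 700e2, 850e2, 1000e2]       # Pa
q2 = [0.0, 0.02, 0.06, 0.08, 0.05, 0.01]              # J kg^-1 s^-1
rain = q2_rainfall(q2, p) * 86400                     # mm/day
```

    Comparing such budget-derived rainfall against an independent estimate like TRMM 3B42 is precisely the consistency check the abstract describes.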

  3. An Emerging Model for Student Feedback: Electronic Distributed Evaluation

    ERIC Educational Resources Information Center

    Brunk-Chavez, Beth; Arrigucci, Annette

    2012-01-01

    In this article we address several issues and challenges that the evaluation of writing presents individual instructors and composition programs as a whole. We present electronic distributed evaluation, or EDE, as an emerging model for feedback on student writing and describe how it was integrated into our program's course redesign. Because the…

  4. An Information Search Model of Evaluative Concerns in Intergroup Interaction

    ERIC Educational Resources Information Center

    Vorauer, Jacquie D.

    2006-01-01

    In an information search model, evaluative concerns during intergroup interaction are conceptualized as a joint function of uncertainty regarding and importance attached to out-group members' views of oneself. High uncertainty generally fosters evaluative concerns during intergroup exchanges. Importance depends on whether out-group members'…

  5. Interrater Agreement Evaluation: A Latent Variable Modeling Approach

    ERIC Educational Resources Information Center

    Raykov, Tenko; Dimitrov, Dimiter M.; von Eye, Alexander; Marcoulides, George A.

    2013-01-01

    A latent variable modeling method for evaluation of interrater agreement is outlined. The procedure is useful for point and interval estimation of the degree of agreement among a given set of judges evaluating a group of targets. In addition, the approach allows one to test for identity in underlying thresholds across raters as well as to identify…

  6. Information and complexity measures for hydrologic model evaluation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Hydrological models are commonly evaluated through residual-based performance measures such as the root-mean-square error or efficiency criteria. Such measures, however, do not evaluate the degree of similarity of patterns in simulated and measured time series. The objective of this study was to...

  7. Evaluating an English Language Teacher Education Program through Peacock's Model

    ERIC Educational Resources Information Center

    Coskun, Abdullah; Daloglu, Aysegul

    2010-01-01

    The main aim of this study is to draw attention to the importance of program evaluation for teacher education programs and to reveal the pre-service English teacher education program components that are in need of improvement or maintenance both from teachers' and students' perspectives by using Peacock's (2009) recent evaluation model in a…

  8. A Model for Evaluating and Acquiring Educational Software in Psychology.

    ERIC Educational Resources Information Center

    Brown, Stephen W.; And Others

    This paper describes a model for evaluating and acquiring instructionally effective and cost effective educational computer software in university psychology departments. Four stages in evaluating the software are developed: (1) establishing departmental goals and objectives for educational use of computers; (2) inventorying and evaluating…

  9. Estimating an Evaluation Utilization Model Using Conjoint Measurement and Analysis.

    ERIC Educational Resources Information Center

    Johnson, R. Burke

    1995-01-01

    The conjoint approach to measurement and analysis is demonstrated with a test of an evaluation utilization process-model that includes two endogenous variables (predicted participation and predicted instrumental evaluation). Conjoint measurement involves having respondents rate profiles that are analogues to concepts based on cells in a factorial…

  10. AQMEII: A New International Initiative on Air Quality Model Evaluation

    EPA Science Inventory

    We provide a conceptual view of the process of evaluating regional-scale three-dimensional numerical photochemical air quality modeling systems, based on an examination of existing approaches to the evaluation of such systems as they are currently used in a variety of applications…

  11. Regime-based evaluation of cloudiness in CMIP5 models

    NASA Astrophysics Data System (ADS)

    Jin, Daeho; Oreopoulos, Lazaros; Lee, Dongmin

    2016-04-01

    The concept of cloud regimes (CRs) is used to develop a framework for evaluating the cloudiness of 12 models from phase 5 of the Coupled Model Intercomparison Project (CMIP5). Reference CRs come from existing global International Satellite Cloud Climatology Project (ISCCP) weather states. The evaluation is made possible by the implementation in several CMIP5 models of the ISCCP simulator, which generates in each grid cell daily joint histograms of cloud optical thickness and cloud top pressure. Model performance is assessed with several metrics, such as CR global cloud fraction (CF), CR relative frequency of occurrence (RFO), their product [the long-term average total cloud amount (TCA)], cross-correlations of CR RFO maps, and a metric of resemblance between model and ISCCP CRs. In terms of CR global RFO, arguably the most fundamental metric, the models perform unsatisfactorily overall, except for CRs representing thick storm clouds. Because model CR CF is internally constrained by our method, RFO discrepancies also yield substantial TCA errors. Our results support previous findings that CMIP5 models underestimate cloudiness. The multi-model mean performs well in matching observed RFO maps for many CRs, but is still not the best for this or other metrics. When overall performance across all CRs is assessed, some models, despite shortcomings, apparently outperform Moderate Resolution Imaging Spectroradiometer cloud observations evaluated against ISCCP as if they were another model's output. Lastly, contrasting cloud simulation performance with each model's equilibrium climate sensitivity, in order to gain insight into whether good cloud simulation pairs with particular values of this parameter, yields no clear conclusions.
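
    The TCA metric mentioned above is simply the RFO-weighted sum of per-regime cloud fraction. A small sketch of that bookkeeping, with invented regime names and numbers (not the paper's values):

    ```python
    # Sketch of regime-based bookkeeping: long-term total cloud amount (TCA)
    # as the RFO-weighted sum of per-regime cloud fraction. Regime names
    # and all numbers are illustrative only.

    cloud_regimes = {
        # regime: (global cloud fraction CF, relative freq. of occurrence RFO)
        "thick_storm": (0.90, 0.05),
        "stratocumulus": (0.70, 0.20),
        "shallow_cumulus": (0.30, 0.45),
        "clear_sky_like": (0.10, 0.30),
    }

    # RFOs partition the record, so they must sum to 1 across regimes.
    assert abs(sum(rfo for _, rfo in cloud_regimes.values()) - 1.0) < 1e-9

    tca = sum(cf * rfo for cf, rfo in cloud_regimes.values())
    ```

    An RFO error in a frequent regime therefore propagates directly into TCA even when each regime's CF is well constrained, which is the coupling the abstract describes.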

  12. An evaluation of recent internal field models. [of earth magnetism

    NASA Technical Reports Server (NTRS)

    Mead, G. D.

    1979-01-01

    The paper reviews the current status of internal field models and evaluates several recently published models by comparing their predictions with annual means of the magnetic field measured at 140 magnetic observatories from 1973 to 1977. Three of the four models studied, viz. AWC/75, IGS/75, and Pogo 8/71, were nearly equal in their ability to predict the magnitude and direction of the current field. The fourth model, IGRF 1975, was significantly poorer in its ability to predict the current field. All models seemed to be able to extrapolate predictions quite well several years outside the data range used to construct the models.

  13. Development, Evaluation, and Design Applications of an AMTEC Converter Model

    NASA Astrophysics Data System (ADS)

    Spence, Cliff A.; Schuller, Michael; Lalk, Tom R.

    2003-01-01

    Issues associated with the development of an alkali metal thermal-to-electric conversion (AMTEC) converter model that serves as an effective design tool were investigated. The requirements and performance prediction equations for the model were evaluated, and a modeling methodology was established. By defining the requirements and equations for the model and establishing this methodology, it was determined that Thermal Desktop, a recently improved finite-difference software package, could be used to develop a model that serves as an effective design tool. Implementing the methodology within Thermal Desktop provides stability, high resolution, modular construction, easy-to-use interfaces, and modeling flexibility.

  14. Evaluation of one dimensional analytical models for vegetation canopies

    NASA Technical Reports Server (NTRS)

    Goel, Narendra S.; Kuusk, Andres

    1992-01-01

    The SAIL model for one-dimensional homogeneous vegetation canopies has been modified to include the specular reflectance and hot spot effects. This modified model and the Nilson-Kuusk model are evaluated by comparing the reflectances given by them against those given by a radiosity-based computer model, Diana, for a set of canopies, characterized by different leaf area index (LAI) and leaf angle distribution (LAD). It is shown that for homogeneous canopies, the analytical models are generally quite accurate in the visible region, but not in the infrared region. For architecturally realistic heterogeneous canopies of the type found in nature, these models fall short. These shortcomings are quantified.

  15. An Evaluation of Unsaturated Flow Models in an Arid Climate

    SciTech Connect

    Dixon, J.

    1999-12-01

    The objective of this study was to evaluate the effectiveness of two unsaturated flow models in arid regions. The area selected for the study was the Area 5 Radioactive Waste Management Site (RWMS) at the Nevada Test Site in Nye County, Nevada. The two models selected for this evaluation were HYDRUS-1D [Simunek et al., 1998] and the SHAW model [Flerchinger and Saxton, 1989]. Approximately 5 years of soil-water and atmospheric data collected from an instrumented weighing lysimeter site near the RWMS were used for building the models with actual initial and boundary conditions representative of the site. Physical processes affecting the site and model performance were explored. Model performance was based on a detailed sensitivity analysis and ultimately on storage comparisons. During the process of developing descriptive model input, procedures for converting hydraulic parameters for each model were explored. In addition, the compilation of atmospheric data collected at the site became a useful tool for developing predictive functions for future studies. The final model results were used to evaluate the capacities of the HYDRUS and SHAW models for predicting soil-moisture movement and variable surface phenomena for bare soil conditions in the arid vadose zone. The development of calibrated models along with the atmospheric and soil data collected at the site provide useful information for predicting future site performance at the RWMS.

  16. Putting Theory-Oriented Evaluation into Practice: A Logic Model Approach for Evaluating SIMGAME

    ERIC Educational Resources Information Center

    Hense, Jan; Kriz, Willy Christian; Wolfe, Joseph

    2009-01-01

    Evaluations of gaming simulations and business games as teaching devices are typically end-state driven. This emphasis fails to detect how the simulation being evaluated does or does not bring about its desired consequences. This paper advances the use of a logic model approach, which possesses a holistic perspective that aims at including all…

  17. Evaluating Organic Aerosol Model Performance: Impact of two Embedded Assumptions

    NASA Astrophysics Data System (ADS)

    Jiang, W.; Giroux, E.; Roth, H.; Yin, D.

    2004-05-01

    Organic aerosols are important due to their abundance in the polluted lower atmosphere and their impact on human health and vegetation. However, modeling organic aerosols is a very challenging task because of the complexity of aerosol composition, structure, and formation processes. Assumptions and their associated uncertainties in both models and measurement data make model performance evaluation a truly demanding job. Although some assumptions are obvious, others are hidden and embedded, and can significantly impact modeling results, possibly even changing conclusions about model performance. This paper focuses on analyzing the impact of two embedded assumptions on evaluation of organic aerosol model performance. One assumption is about the enthalpy of vaporization widely used in various secondary organic aerosol (SOA) algorithms. The other is about the conversion factor used to obtain ambient organic aerosol concentrations from measured organic carbon. These two assumptions reflect uncertainties in the model and in the ambient measurement data, respectively. For illustration purposes, various choices of the assumed values are implemented in the evaluation process for an air quality model based on CMAQ (the Community Multiscale Air Quality Model). Model simulations are conducted for the Lower Fraser Valley covering Southwest British Columbia, Canada, and Northwest Washington, United States, for a historical pollution episode in 1993. To understand the impact of the assumed enthalpy of vaporization on modeling results, its impact on instantaneous organic aerosol yields (IAY) through partitioning coefficients is analysed first. The analysis shows that utilizing different enthalpy of vaporization values causes changes in the shapes of IAY curves and in the response of SOA formation capability of reactive organic gases to temperature variations. These changes are then carried into the air quality model and cause substantial changes in the organic aerosol modeling…
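
    The first embedded assumption can be made concrete: in most SOA algorithms an assumed enthalpy of vaporization enters through a Clausius-Clapeyron-style temperature correction of the gas-particle partitioning coefficient (a Pankow-type formulation). The sketch below uses invented numbers and is not the CMAQ implementation:

    ```python
    # Hedged sketch: temperature dependence of an SOA partitioning
    # coefficient under an assumed enthalpy of vaporization dh_vap.
    # All numerical values are illustrative, not from the paper.
    from math import exp

    R = 8.314  # universal gas constant, J/(mol K)

    def k_partition(k_ref, t, t_ref=298.0, dh_vap=42e3):
        """Partitioning coefficient at temperature t (K), given its value
        k_ref at t_ref and an assumed enthalpy of vaporization (J/mol)."""
        return k_ref * (t / t_ref) * exp((dh_vap / R) * (1.0 / t - 1.0 / t_ref))

    # A colder atmosphere shifts semivolatiles toward the particle phase:
    k_cold = k_partition(k_ref=0.02, t=278.0)
    k_warm = k_partition(k_ref=0.02, t=308.0)
    ```

    Changing `dh_vap` changes how steeply the coefficient (and hence the instantaneous aerosol yield) responds to temperature, which is exactly the sensitivity the abstract discusses.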

  18. Model C Is Feasible for ESEA Title I Evaluation.

    ERIC Educational Resources Information Center

    Echternacht, Gary

    The assertion that Model C is feasible for Elementary Secondary Education Act Title I evaluation, why it is feasible, and reasons why it is so seldom used are explained. Two assumptions must be made to use the special regression model. First, a strict cut-off must be used on the pretest to assign students to Title I and comparison groups. Second,…

  19. A Constructivist Model for Evaluating Postgraduate Supervision: A Case Study

    ERIC Educational Resources Information Center

    Zuber-Skerritt, Ortrun; Roche, Val

    2004-01-01

    This paper presents a new constructivist model of knowledge development in a case study that illustrates how a group of postgraduate students defined and evaluated effective postgraduate supervision. This new model is based on "personal construct theory" and "repertory grid technology" which is combined with interviews and group discussion. It is…

  20. An Evaluation Model for Competency Based Teacher Preparatory Programs.

    ERIC Educational Resources Information Center

    Denton, Jon J.

    This discussion describes an evaluation model designed to complement a curriculum development project, the primary goal of which is to structure a performance based program for preservice teachers. Data collected from the implementation of this four-phase model can be used to make decisions for developing and changing performance objectives and…

  1. A Model for Integrating Program Development and Evaluation.

    ERIC Educational Resources Information Center

    Brown, J. Lynne; Kiernan, Nancy Ellen

    1998-01-01

    A communication model consisting of input from target audience, program delivery, and outcomes (receivers' perception of message) was applied to an osteoporosis-prevention program for working mothers ages 21 to 45. Due to poor completion rate on evaluation instruments and failure of participants to learn key concepts, the model was used to improve…

  2. The Impact of Spatial Correlation and Incommensurability on Model Evaluation

    EPA Science Inventory

    Standard evaluations of air quality models rely heavily on a direct comparison of monitoring data matched with the model output for the grid cell containing the monitor’s location. While such techniques may be adequate for some applications, conclusions are limited by such facto...

  3. Evaluation of radiation partitioning models at Bushland, Texas

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Crop growth and soil-vegetation-atmosphere continuum energy transfer models often require estimates of net radiation components, such as photosynthetic, solar, and longwave radiation to both the canopy and soil. We evaluated the 1998 radiation partitioning model of Campbell and Norman, herein referr...

  4. Faculty Performance Evaluation: The CIPP-SAPS Model.

    ERIC Educational Resources Information Center

    Mitcham, Maralynne

    1981-01-01

    The issues of faculty performance evaluation for allied health professionals are addressed. Daniel Stufflebeam's CIPP (context-input-process-product) model is introduced and its development into a CIPP-SAPS (self-administrative-peer-student) model is pursued. (Author/CT)

  5. An Alternative Model for the Evaluation of Change. Technical Report.

    ERIC Educational Resources Information Center

    Corder-Bolz, Charles R.

    Previous research has indicated that most mathematical models used to evaluate change due to experimental treatment are misleading because the procedures artificially reduced one of the estimates of error variance. Two modified models, based upon the expected values of the variance of scores and difference scores, were developed from a new…

  6. Solid rocket booster performance evaluation model. Volume 2: Users manual

    NASA Technical Reports Server (NTRS)

    1974-01-01

    This users manual for the solid rocket booster performance evaluation model (SRB-II) contains descriptions of the model, the program options, the required program inputs, the program output format and the program error messages. SRB-II is written in FORTRAN and is operational on both the IBM 370/155 and the MSFC UNIVAC 1108 computers.

  7. Rocky Mountain School Division No.15 Evaluation Model.

    ERIC Educational Resources Information Center

    Rocky Mountain School Div. No. 15, Rocky Mountain House (Alberta).

    This summary report presents methodologies, results, and conclusions of a two-year evaluation model implemented by an Alberta, Canada, rural school district to provide information for administrative and public decision making. An introductory chapter enumerates district goals for students and the model's objectives. Chapter 2 outlines how survey…

  8. FOLIAR WASHOFF OF PESTICIDES (FWOP) MODEL: DEVELOPMENT AND EVALUATION

    EPA Science Inventory

    The Foliar Washoff of Pesticides (FWOP) Model was developed to provide an empirical simulation of pesticide washoff from plant leaf surfaces as influenced by rainfall amount. To evaluate the technique, simulations by the FWOP Model were compared to those by the foliar washoff alg...

  9. Modeling nuisance variables for phenotypic evaluation of bull fertility

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This research determined which (available) nuisance variables should be included in a model for phenotypic evaluation of US service sire conception rate (CR), based on DHI data. Models were compared by splitting data into records for estimation (n=3,613,907) and set-aside data (n=2,025,884), computi...

  10. NEW CATEGORICAL METRICS FOR AIR QUALITY MODEL EVALUATION

    EPA Science Inventory

    Traditional categorical metrics used in model evaluations are "clear-cut" measures in that the model's ability to predict an exceedance is defined by a fixed threshold concentration and the metrics are defined by observation-forecast sets that are paired both in space and time. T...

  11. Groundwater modeling in RCRA assessment, corrective action design and evaluation

    SciTech Connect

    Rybak, I.; Henley, W.

    1995-12-31

    Groundwater modeling was conducted to design, implement, modify, and terminate corrective action at several RCRA sites in EPA Region 4. Groundwater flow, contaminant transport and unsaturated zone air flow models were used depending on the complexity of the site and the corrective action objectives. Software used included Modflow, Modpath, Quickflow, Bioplume 2, and AIR3D. Site assessment data, such as aquifer properties, site description, and surface water characteristics for each facility were used in constructing the models and designing the remedial systems. Modeling, in turn, specified additional site assessment data requirements for the remedial system design. The specific purpose of computer modeling is discussed with several case studies. These consist, among others, of the following: evaluation of the mechanism of the aquifer system and selection of a cost effective remedial option, evaluation of the capture zone of a pumping system, prediction of the system performance for different and difficult hydrogeologic settings, evaluation of the system performance, and trouble-shooting for the remedial system operation. Modeling is presented as a useful tool for corrective action system design, performance, evaluation, and trouble-shooting. The case studies exemplified the integration of diverse data sources, understanding the mechanism of the aquifer system, and evaluation of the performance of alternative remediation systems in a cost-effective manner. Pollutants of concern include metals and PAHs.

  12. The Pantex Process model: Formulations of the evaluation planning module

    SciTech Connect

    JONES,DEAN A.; LAWTON,CRAIG R.; LIST,GEORGE FISHER; TURNQUIST,MARK ALAN

    1999-12-01

    This paper describes formulations of the Evaluation Planning Module that have been developed since its inception. This module is one of the core algorithms in the Pantex Process Model, a computerized model to support production planning in a complex manufacturing system at the Pantex Plant, a US Department of Energy facility. Pantex is responsible for three major DOE programs -- nuclear weapons disposal, stockpile evaluation, and stockpile maintenance -- using shared facilities, technicians, and equipment. The model reflects the interactions of scheduling constraints, material flow constraints, and the availability of required technicians and facilities.

  13. Effects of distributed database modeling on evaluation of transaction rollbacks

    NASA Technical Reports Server (NTRS)

    Mukkamala, Ravi

    1991-01-01

    Data distribution, degree of data replication, and transaction access patterns are key factors in determining the performance of distributed database systems. In order to simplify the evaluation of performance measures, database designers and researchers tend to make simplistic assumptions about the system. The effect of modeling assumptions on the evaluation of one such measure, the number of transaction rollbacks, is studied in a partitioned distributed database system. Six probabilistic models are developed, along with expressions for the number of rollbacks under each of these models. Essentially, the models differ in terms of the available system information. The analytical results so obtained are compared to results from simulation. It is concluded that most of the probabilistic models yield overly conservative estimates of the number of rollbacks. The effect of transaction commutativity on system throughput is also grossly undermined when such models are employed.

  14. Effects of distributed database modeling on evaluation of transaction rollbacks

    NASA Technical Reports Server (NTRS)

    Mukkamala, Ravi

    1991-01-01

    Data distribution, degree of data replication, and transaction access patterns are key factors in determining the performance of distributed database systems. In order to simplify the evaluation of performance measures, database designers and researchers tend to make simplistic assumptions about the system. Here, researchers investigate the effect of modeling assumptions on the evaluation of one such measure, the number of transaction rollbacks in a partitioned distributed database system. The researchers developed six probabilistic models and expressions for the number of rollbacks under each of these models. Essentially, the models differ in terms of the available system information. The analytical results obtained are compared to results from simulation. It was concluded that most of the probabilistic models yield overly conservative estimates of the number of rollbacks. The effect of transaction commutativity on system throughput is also grossly undermined when such models are employed.
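
    The abstracts do not specify the six probabilistic models, so the sketch below uses a deliberately simpler, hypothetical conflict model just to show the general shape of such an estimate: each transaction rolls back if it conflicts, independently with fixed probability, with any earlier concurrent transaction.

    ```python
    # Hypothetical Monte Carlo sketch in the spirit of the rollback models
    # above -- NOT one of the paper's six models. Assumes pairwise conflicts
    # are independent with a fixed probability p_conflict.
    import random

    def simulate_rollbacks(n_txn, p_conflict, trials=2000, seed=1):
        """Average number of rolled-back transactions per batch of n_txn."""
        rng = random.Random(seed)
        total = 0
        for _ in range(trials):
            rolled_back = 0
            for i in range(n_txn):
                # Transaction i rolls back if it conflicts with any earlier one.
                if any(rng.random() < p_conflict for _ in range(i)):
                    rolled_back += 1
            total += rolled_back
        return total / trials

    avg_rollbacks = simulate_rollbacks(n_txn=20, p_conflict=0.05)
    ```

    Analytically this toy model gives the expected count as the sum of 1 - (1 - p)^i over transactions, roughly 7.2 for the parameters above; richer system information (as in the paper's models) would replace the fixed p_conflict.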

  15. Evaluating the accuracy of diffusion MRI models in white matter.

    PubMed

    Rokem, Ariel; Yeatman, Jason D; Pestilli, Franco; Kay, Kendrick N; Mezer, Aviv; van der Walt, Stefan; Wandell, Brian A

    2015-01-01

    Models of diffusion MRI within a voxel are useful for making inferences about the properties of the tissue and inferring fiber orientation distribution used by tractography algorithms. A useful model must fit the data accurately. However, evaluations of model-accuracy of commonly used models have not been published before. Here, we evaluate model-accuracy of the two main classes of diffusion MRI models. The diffusion tensor model (DTM) summarizes diffusion as a 3-dimensional Gaussian distribution. Sparse fascicle models (SFM) summarize the signal as a sum of signals originating from a collection of fascicles oriented in different directions. We use cross-validation to assess model-accuracy at different gradient amplitudes (b-values) throughout the white matter. Specifically, we fit each model to all the white matter voxels in one data set and then use the model to predict a second, independent data set. This is the first evaluation of model-accuracy of these models. In most of the white matter the DTM predicts the data more accurately than test-retest reliability; SFM model-accuracy is higher than test-retest reliability and also higher than the DTM model-accuracy, particularly for measurements with (a) a b-value above 1000 in locations containing fiber crossings, and (b) in the regions of the brain surrounding the optic radiations. The SFM also has better parameter-validity: it more accurately estimates the fiber orientation distribution function (fODF) in each voxel, which is useful for fiber tracking. PMID:25879933
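
    The cross-validation logic above is straightforward to sketch (this is not the paper's code, and the signals are invented): fit on one scan, score predictions against an independent repeat scan, and use test-retest disagreement as the noise floor.

    ```python
    # Sketch of cross-validated model accuracy vs. test-retest reliability.
    # The diffusion signals below are invented single-voxel values.
    from math import sqrt

    def rmse(a, b):
        """Root-mean-square error between two equal-length signals."""
        return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

    # Two hypothetical repeated measurements and a model prediction
    # obtained by fitting to the first measurement only.
    scan1 = [0.90, 0.75, 0.60, 0.52, 0.48, 0.45]
    scan2 = [0.88, 0.77, 0.58, 0.53, 0.47, 0.46]
    model_pred = [0.89, 0.76, 0.59, 0.52, 0.47, 0.45]  # fitted to scan1

    test_retest_err = rmse(scan1, scan2)
    model_err = rmse(model_pred, scan2)  # cross-validated: scored on scan2

    # A model "beats" test-retest reliability when its cross-validated
    # error is smaller than the disagreement between repeat measurements.
    beats_reliability = model_err < test_retest_err
    ```

    This is the sense in which the abstract says SFM accuracy is "higher than test-retest reliability": the fitted model predicts a repeat scan better than the first scan itself does.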

  17. Evaluation of dense-gas simulation models. Final report

    SciTech Connect

    Zapert, J.G.; Londergan, R.J.; Thistle, H.

    1991-05-01

    The report describes the approach and presents the results of an evaluation study of seven dense-gas simulation models using data from three experimental programs. The models evaluated are two in the public domain (DEGADIS and SLAB) and five that are proprietary (AIRTOX, CHARM, FOCUS, SAFEMODE, and TRACE). The databases used in the evaluation are the Desert Tortoise Pressurized Ammonia Releases, the Burro Liquefied Natural Gas Spill Tests, and the Goldfish Anhydrous Hydrofluoric Acid Spill Experiments. A uniform set of performance statistics is calculated and tabulated to compare maximum observed concentrations and cloud half-width to those predicted by each model. None of the models demonstrated consistently good performance across all three experimental programs.
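
    The abstract does not list which uniform performance statistics were used, so the sketch below shows two common choices in dense-gas model evaluations, fractional bias (FB) and normalized mean square error (NMSE), on made-up observed/predicted concentration pairs:

    ```python
    # Illustrative paired performance statistics for model evaluation.
    # Concentration values are invented, not from the field programs.

    def fractional_bias(obs, pred):
        """FB = 2(mean_obs - mean_pred)/(mean_obs + mean_pred); 0 is unbiased."""
        mo = sum(obs) / len(obs)
        mp = sum(pred) / len(pred)
        return 2.0 * (mo - mp) / (mo + mp)

    def nmse(obs, pred):
        """Normalized mean square error; 0 for a perfect model."""
        mo = sum(obs) / len(obs)
        mp = sum(pred) / len(pred)
        return sum((o - p) ** 2 for o, p in zip(obs, pred)) / (len(obs) * mo * mp)

    # Hypothetical maximum observed vs. modeled concentrations (ppm) per trial
    observed  = [120.0, 80.0, 45.0, 200.0]
    predicted = [100.0, 95.0, 40.0, 170.0]

    fb = fractional_bias(observed, predicted)
    err = nmse(observed, predicted)
    ```

    A positive FB here indicates underprediction on average; tabulating such statistics per model and per experiment is what allows the "no model is consistently good" conclusion.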

  18. How Do You Evaluate Everyone Who Isn't a Teacher? An Adaptable Evaluation Model for Professional Support Personnel.

    ERIC Educational Resources Information Center

    Stronge, James H.; And Others

    The evaluation of professional support personnel in the schools has been a neglected area in educational evaluation. The Center for Research on Educational Accountability and Teacher Evaluation (CREATE) has worked to develop a conceptually sound evaluation model and then to translate the model into practical evaluation procedures that facilitate…

  19. Evaluation of advanced geopotential models for operational orbit determination

    NASA Technical Reports Server (NTRS)

    Radomski, M. S.; Davis, B. E.; Samii, M. V.; Engel, C. J.; Doll, C. E.

    1988-01-01

    To meet future orbit determination accuracy requirements for different NASA projects, analyses are performed using Tracking and Data Relay Satellite System (TDRSS) tracking measurements and orbit determination improvements in areas such as the modeling of the Earth's gravitational field. Current operational requirements are satisfied using the Goddard Earth Model-9 (GEM-9) geopotential model with the harmonic expansion truncated at order and degree 21 (21-by-21). This study evaluates the performance of 36-by-36 geopotential models, such as the GEM-10B and Preliminary Goddard Solution-3117 (PGS-3117) models. The Earth Radiation Budget Satellite (ERBS) and LANDSAT-5 are the spacecraft considered in this study.
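
    A rough sense of why moving from a 21-by-21 to a 36-by-36 truncation matters computationally: a spherical-harmonic expansion complete to degree and order N carries (N+1)^2 coefficients (C and S terms combined), and the degree-0 and degree-1 terms are conventionally fixed. A small counting sketch (a back-of-envelope illustration, not the study's method):

    ```python
    # Count geopotential coefficients in an N-by-N spherical-harmonic model.
    # Complete to degree and order N there are (N+1)^2 C_nm/S_nm terms;
    # degree 0 (GM) and degree 1 (center of mass) are usually fixed,
    # removing 4 free coefficients.

    def harmonic_coefficients(n_max, fixed_low_degrees=True):
        total = (n_max + 1) ** 2
        return total - 4 if fixed_low_degrees else total

    gem9_coeffs = harmonic_coefficients(21)    # 21-by-21 truncation (GEM-9 use)
    gem10b_coeffs = harmonic_coefficients(36)  # 36-by-36 models like GEM-10B
    ```

    The 36-by-36 models carry nearly three times as many coefficients, which is the cost weighed against the accuracy gains evaluated in the study.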

  20. Human Thermal Model Evaluation Using the JSC Human Thermal Database

    NASA Technical Reports Server (NTRS)

    Bue, Grant; Makinen, Janice; Cognata, Thomas

    2012-01-01

    Human thermal modeling has considerable long-term utility to human space flight. Such models provide a tool to predict crew survivability in support of vehicle design and to evaluate crew response in untested space environments. It is to the benefit of any such model not only to collect relevant experimental data to correlate it against, but also to maintain an experimental standard or benchmark for future development in a readily and rapidly searchable, software-accessible format. The human thermal database project is intended to do just that: to collect relevant data from literature and experimentation and to store the data in a database structure for immediate and future use as a benchmark against which to judge human thermal models, in identifying model strengths and weaknesses, to support model development and improve correlation, and to statistically quantify a model's predictive quality. The human thermal database developed at the Johnson Space Center (JSC) is intended to evaluate a set of widely used human thermal models. This set includes the Wissler human thermal model, a model that has been widely used to predict the human thermoregulatory response to a variety of cold and hot environments. These models are statistically compared to the current database, which contains experiments on human subjects primarily in air, drawn from a literature survey ranging between 1953 and 2004 and from a suited experiment recently performed by the authors, for a quantitative study of the relative strength and predictive quality of the models.

  1. Evaluating Vocational Educators' Training Programs: A Kirkpatrick-Inspired Evaluation Model

    ERIC Educational Resources Information Center

    Ravicchio, Fabrizio; Trentin, Guglielmo

    2015-01-01

    The aim of the article is to describe the assessment model adopted by the SCINTILLA Project, a project in Italy aimed at the online vocational training of young, seriously-disabled subjects and their subsequent work inclusion in smart-work mode. It will thus describe the model worked out for evaluation of the training program conceived for the…

  2. New model framework and structure and the commonality evaluation model. [concerning unmanned spacecraft projects

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The development of a framework and structure for shuttle era unmanned spacecraft projects and the development of a commonality evaluation model is documented. The methodology developed for model utilization in performing cost trades and comparative evaluations for commonality studies is discussed. The model framework consists of categories of activities associated with the spacecraft system's development process. The model structure describes the physical elements to be treated as separate identifiable entities. Cost estimating relationships for subsystem and program-level components were calculated.

  3. Evaluating performances of simplified physically based landslide susceptibility models.

    NASA Astrophysics Data System (ADS)

    Capparelli, Giovanna; Formetta, Giuseppe; Versace, Pasquale

    2015-04-01

    Rainfall-induced shallow landslides cause significant damage, including loss of life and property. Predicting locations susceptible to shallow landslides is a complex task that involves many disciplines: hydrology, geotechnical science, geomorphology, and statistics. Usually this task is approached with one of two main families of methods: statistical or physically based models. This paper presents a package of GIS-based models for landslide susceptibility analysis. It was integrated into the NewAge-JGrass hydrological model using the Object Modeling System (OMS) modeling framework. The package includes three simplified physically based models for landslide susceptibility analysis (M1, M2, and M3) and a component for model verification. It computes eight goodness-of-fit (GOF) indices by comparing model results and measurement data pixel by pixel. Moreover, the package's integration in NewAge-JGrass allows the use of other components, such as geographic information system tools to manage input-output processes and automatic calibration algorithms to estimate model parameters. The system offers the possibility to investigate and fairly compare the quality and robustness of models and model parameters according to a procedure that includes: i) model parameter estimation by optimizing each GOF index separately, ii) model evaluation in the ROC plane using each optimal parameter set, and iii) GOF robustness evaluation by assessing sensitivity to input parameter variation. This procedure was repeated for all three models. The system was applied to a case study in Calabria (Italy) along the Salerno-Reggio Calabria highway, between Cosenza and the Altilia municipality. The analysis showed that, among all the optimized indices and all three models, Average Index (AI) optimization coupled with model M3 is the best modeling solution for our test case. This research was funded by PON Project No. 01_01503 "Integrated Systems for Hydrogeological Risk…
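
    Step ii) above, placing a model in the ROC plane, can be sketched from a pixel-by-pixel comparison of observed and modeled instability maps. The toy 0/1 maps below are illustrative only:

    ```python
    # Minimal sketch of pixel-by-pixel ROC verification for a landslide
    # susceptibility map. 1 = pixel flagged unstable, 0 = stable.

    def roc_point(observed, modeled):
        """Return (false_positive_rate, true_positive_rate) for binary maps."""
        tp = sum(1 for o, m in zip(observed, modeled) if o == 1 and m == 1)
        fn = sum(1 for o, m in zip(observed, modeled) if o == 1 and m == 0)
        fp = sum(1 for o, m in zip(observed, modeled) if o == 0 and m == 1)
        tn = sum(1 for o, m in zip(observed, modeled) if o == 0 and m == 0)
        return fp / (fp + tn), tp / (tp + fn)

    # Hypothetical landslide inventory and model output over 10 pixels
    obs_map   = [1, 1, 0, 0, 1, 0, 0, 0, 1, 0]
    model_map = [1, 1, 0, 1, 1, 0, 0, 0, 0, 0]

    fpr, tpr = roc_point(obs_map, model_map)
    ```

    Each (model, optimized GOF index) pair yields one such point; points nearer the ROC plane's upper-left corner indicate better susceptibility maps, which is how the M1-M3 comparison is made.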

  4. Study on Turbulent Modeling in Gas Entrainment Evaluation Method

    NASA Astrophysics Data System (ADS)

    Ito, Kei; Ohshima, Hiroyuki; Nakamine, Yoshiaki; Imai, Yasutomo

    Suppression of gas entrainment (GE) phenomena caused by free-surface vortices is very important for establishing an economically superior design of the Japanese sodium-cooled fast reactor (JSFR). However, due to the non-linearity and/or locality of GE phenomena, it is not easy to evaluate their occurrence accurately. In other words, the onset condition of GE phenomena in the JSFR is not easily predicted from scaled-model and/or partial-model experiments. Therefore, the authors are developing a CFD-based evaluation method in which the non-linearity and locality of GE phenomena can be considered. In the evaluation method, macroscopic vortex parameters, e.g. circulation, are determined by three-dimensional CFD, and then GE-related parameters, e.g. gas core (GC) length, are calculated using the Burgers vortex model. This procedure is efficient for evaluating GE phenomena in the JSFR. However, it is well known that the Burgers vortex model tends to overestimate the GC length because it neglects some physical mechanisms. Therefore, in this study, the authors develop a turbulent vortex model to evaluate GE phenomena more accurately. The improved GE evaluation method with the turbulent viscosity model is then validated by analyzing the GC lengths observed in a simple experiment. The evaluation results show that the GC lengths computed by the improved method are shorter than those from the original method and agree better with the experimental data.

  5. Evaluation of six ionospheric models as predictors of TEC

    SciTech Connect

    Brown, L.D.; Daniell, R.E.; Fox, M.W.; Klobuchar, J.A.; Doherty, P.H.

    1990-05-03

    The authors have gathered TEC data from a wide range of latitudes and longitudes for a complete range of solar activity. These data were used to evaluate the performance of six ionospheric models as predictors of Total Electron Content (TEC). The TEC parameter is important for correcting modern DOD space systems, which propagate radio signals from the earth to satellites, for the time-delay effects of the ionosphere. The TEC data were obtained from polarimeter receivers located in North America, the Pacific, and the East Coast of Asia. The ionospheric models evaluated are: (1) the International Reference Ionosphere (IRI); (2) the Bent model; (3) the Ionospheric Conductivity and Electron Density (ICED) model; (4) the Penn State model; (5) the Fully Analytic Ionospheric Model (FAIM, a modification of the Chiu model); and (6) the Damen-Hartranft model. They present extensive comparisons between monthly mean TEC at all local times and model TEC obtained by integrating electron density profiles produced by the six models. These comparisons demonstrate that even though most of the models do very well at representing f0F2, none of them do very well with TEC, probably because of inaccurate representation of the topside scale height. They suggest that one approach to obtaining better representations of TEC is the use of f0F2 from coefficients coupled with a new slab thickness developed at Boston University.
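
    The two quantities the abstract links, model TEC (the altitude integral of electron density) and slab thickness (TEC divided by the peak density, which follows from f0F2), can be sketched with an assumed Chapman-alpha layer. All parameter values below are hypothetical, not output of any of the six models:

```python
import numpy as np

# Hypothetical Chapman-alpha layer (illustrative values only)
NmF2 = 1.0e12        # peak electron density, m^-3 (determined by f0F2)
hmF2 = 300.0e3       # peak height, m
H = 60.0e3           # scale height, m

h = np.linspace(100.0e3, 1500.0e3, 20000)          # altitude grid, m
z = (h - hmF2) / H
Ne = NmF2 * np.exp(0.5 * (1.0 - z - np.exp(-z)))   # electron density profile

# Trapezoid rule: vertical TEC is the altitude integral of Ne
TEC = float(np.sum(0.5 * (Ne[1:] + Ne[:-1]) * np.diff(h)))
tau = TEC / NmF2     # equivalent slab thickness (~4.13 H for a Chapman layer)

print(f"TEC = {TEC / 1e16:.1f} TECU, slab thickness = {tau / 1e3:.0f} km")
```

    An error in the assumed topside scale height H feeds directly into TEC even when the peak (f0F2) is matched, which is the failure mode the abstract points to.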

  6. Obs4MIPS: Satellite Observations for Model Evaluation

    NASA Astrophysics Data System (ADS)

    Ferraro, R.; Waliser, D. E.; Gleckler, P. J.

    2015-12-01

    This poster will review the current status of the obs4MIPs project, whose purpose is to provide a limited collection of well-established and documented datasets for comparison with Earth system models (https://www.earthsystemcog.org/projects/obs4mips/). These datasets have been reformatted to correspond with the CMIP5 model output requirements, and include technical documentation specifically targeted for their use in model output evaluation. There are currently over 50 datasets containing observations that directly correspond to CMIP5 model output variables. We will review recent additions to the obs4MIPs collection, and provide updated download statistics. We will also provide an update on changes to submission and documentation guidelines, the work of the WCRP Data Advisory Council (WDAC) Observations for Model Evaluation Task Team, and engagement with the CMIP6 MIP experiments.

  7. Evaluation of potential crushed-salt constitutive models

    SciTech Connect

    Callahan, G.D.; Loken, M.C.; Sambeek, L.L. Van; Chen, R.; Pfeifle, T.W.; Nieland, J.D.

    1995-12-01

    Constitutive models describing the deformation of crushed salt are presented in this report. Ten constitutive models with potential to describe the phenomenological and micromechanical processes for crushed salt were selected from a literature search. Three of these ten constitutive models, termed Sjaardema-Krieg, Zeuch, and Spiers models, were adopted as candidate constitutive models. The candidate constitutive models were generalized in a consistent manner to three-dimensional states of stress and modified to include the effects of temperature, grain size, and moisture content. A database including hydrostatic consolidation and shear consolidation tests conducted on Waste Isolation Pilot Plant and southeastern New Mexico salt was used to determine material parameters for the candidate constitutive models. Nonlinear least-squares model fitting to data from the hydrostatic consolidation tests, the shear consolidation tests, and a combination of the shear and hydrostatic tests produces three sets of material parameter values for the candidate models. The change in material parameter values from test group to test group indicates the empirical nature of the models. To evaluate the predictive capability of the candidate models, each parameter value set was used to predict each of the tests in the database. Based on the fitting statistics and the ability of the models to predict the test data, the Spiers model appeared to perform slightly better than the other two candidate models. The work reported here is a first-of-its kind evaluation of constitutive models for reconsolidation of crushed salt. Questions remain to be answered. Deficiencies in models and databases are identified and recommendations for future work are made. 85 refs.

  8. Neutral models as a way to evaluate the Sea Level Affecting Marshes Model (SLAMM)

    EPA Science Inventory

    A commonly used landscape model to simulate wetland change – the Sea Level Affecting Marshes Model (SLAMM) – has rarely been explicitly assessed for its prediction accuracy. Here, we evaluated this model using recently proposed neutral models – including the random constraint matc...

  9. On the evaluation of box model results: the case of BOXURB model.

    PubMed

    Paschalidou, A K; Kassomenos, P A

    2009-08-01

    In the present paper, the BOXURB model results, obtained for the Greater Athens Area after applying the model on an hourly basis for the 10-year period 1995-2004, are evaluated in both time and space against observed pollutant concentration time series from 17 monitoring stations. The evaluation is performed on total, monthly, daily and hourly scales. The analysis also includes evaluation of the model performance with regard to the meteorological parameters. Finally, the model is evaluated as an air quality forecasting and urban planning tool. Given the simplicity of the model and the complexity of the area's topography, the model results are found to be in good agreement with the measured pollutant concentrations, especially at the heavy-traffic stations. The model can therefore be used for regulatory purposes by authorities for time-efficient, simple and reliable estimation of air pollution levels within city boundaries. PMID:18600462

  10. Evaluation of ADAM/1 model for advanced coal extraction concepts

    NASA Technical Reports Server (NTRS)

    Deshpande, G. K.; Gangal, M. D.

    1982-01-01

    Several existing computer programs for estimating the life cycle cost of mining systems were evaluated. A commercially available program, ADAM/1, was found to be satisfactory in relation to the needs of the advanced coal extraction project. Two test cases were run to confirm the ability of the program to handle nonconventional mining equipment and procedures. The results were satisfactory. The model, therefore, is recommended to the project team for evaluation of their conceptual designs.

  11. Vestibular models for design and evaluation of flight simulator motion

    NASA Technical Reports Server (NTRS)

    Bussolari, S. R.; Sullivan, R. B.; Young, L. R.

    1986-01-01

    The use of spatial orientation models in the design and evaluation of control systems for motion-base flight simulators is investigated experimentally. The development of a high-fidelity motion drive controller using an optimal control approach based on human vestibular models is described. The formulation and implementation of the optimal washout system are discussed. The effectiveness of the motion washout system was evaluated by studying the response of six motion washout systems on the NASA Ames Vertical Motion Simulator for a single dash-quick-stop maneuver. The effects of the motion washout system on pilot performance and simulator acceptability are examined. The data reveal that human spatial orientation models are useful for the design and evaluation of flight simulator motion fidelity.

  12. Human Thermal Model Evaluation Using the JSC Human Thermal Database

    NASA Technical Reports Server (NTRS)

    Cognata, T.; Bue, G.; Makinen, J.

    2011-01-01

    The human thermal database developed at the Johnson Space Center (JSC) is used to evaluate a set of widely used human thermal models. This database will facilitate a more accurate evaluation of human thermoregulatory response in a variety of situations, including those that might otherwise prove too dangerous for actual testing, such as extreme hot or cold splashdown conditions. This set includes the Wissler human thermal model, a model that has been widely used to predict the human thermoregulatory response to a variety of cold and hot environments. These models are statistically compared to the current database, which contains experiments on human subjects primarily in air, drawn from a literature survey ranging between 1953 and 2004 and from a suited experiment recently performed by the authors, for a quantitative study of the relative strength and predictive quality of the models. Human thermal modeling has considerable long-term utility to human space flight. Such models provide a tool to predict crew survivability in support of vehicle design and to evaluate crew response in untested environments. It is to the benefit of any such model not only to collect relevant experimental data to correlate against, but also to maintain an experimental standard or benchmark for future development in a readily and rapidly searchable, software-accessible format. The human thermal database project is intended to do just that: to collect relevant data from literature and experimentation and to store the data in a database structure for immediate and future use as a benchmark against which to judge human thermal models, to identify model strengths and weaknesses, to support model development and improve correlation, and to statistically quantify a model's predictive quality.

  13. Evaluation Model on Education Effect of Team Learning

    NASA Astrophysics Data System (ADS)

    Kobayashi, Hiroshi; Uchida, Tatsuo; Ishiyama, Jun-Ichi; Ito, Masahiko; Tanigaki, Miho; Kanno, Hiroyuki

    With accelerating globalization, people are increasingly mobile on a global scale, so higher education systems are expected both to guarantee the quality of education and to cope with diversifying social needs. Colleges and universities are therefore introducing activities that draw on their own originality and character, thereby promoting educational reform. However, participation in such activities is usually limited, and their educational effect is difficult to evaluate. In this paper, to contribute to building an appropriate evaluation model for team activities, the evaluation systems for such activities at our college (supplementary lessons, creation training, contests, etc.) are presented and assessed.

  14. Evaluation of Trapped Radiation Model Uncertainties for Spacecraft Design

    NASA Technical Reports Server (NTRS)

    Armstrong, T. W.; Colborn, B. L.

    2000-01-01

    The standard AP8 and AE8 models for predicting trapped proton and electron environments have been compared with several sets of flight data to evaluate model uncertainties. Model comparisons are made with flux, dose, and activation measurements made on various U.S. low-Earth orbit satellites (APEX, CRRES, DMSP, LDEF, NOAA) and Space Shuttle flights, on Russian satellites (Photon-8, Cosmos-1887, Cosmos-2044), and on the Russian Mir space station. This report gives a summary of the model-data comparisons; detailed results are given in a companion report. Results from the model comparisons with flight data show, for example, that the AP8 model underpredicts the trapped proton flux at low altitudes by a factor of about two (independent of proton energy and solar cycle conditions), and that the AE8 model overpredicts the flux in the outer electron belt by an order of magnitude or more.

  16. Ensemble-based evaluation for protein structure models

    PubMed Central

    Jamroz, Michal; Kolinski, Andrzej; Kihara, Daisuke

    2016-01-01

    Motivation: Comparing protein tertiary structures is a fundamental procedure in structural biology and protein bioinformatics. Structure comparison is particularly important for evaluating computational protein structure models. Most model evaluation methods perform rigid-body superimposition of a structure model onto its crystal structure and measure the difference of the corresponding residue or atom positions between them. However, these methods neglect the intrinsic flexibility of proteins by treating the native structure as a rigid molecule. Because different parts of proteins have different levels of flexibility, for example, exposed loop regions are usually more flexible than the core region of a protein structure, disagreement of a model with the native needs to be evaluated differently depending on the flexibility of residues in a protein. Results: We propose a score named FlexScore for comparing protein structures that considers the flexibility of each residue in the native state of the protein. Flexibility information may be extracted from experiments such as NMR or from molecular dynamics simulation. FlexScore considers an ensemble of conformations of a protein, described as a multivariate Gaussian distribution of atomic displacements, and compares a query computational model with the ensemble. We compare FlexScore with other commonly used structure similarity scores over various examples. FlexScore agrees with experts' intuitive assessment of computational models and provides information on the practical usefulness of models. Availability and implementation: https://bitbucket.org/mjamroz/flexscore Contact: dkihara@purdue.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307633
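
    FlexScore itself uses a multivariate Gaussian over atomic displacements; the sketch below makes a much simpler isotropic, per-residue Gaussian assumption (and synthetic data) just to illustrate the idea of flexibility-weighted deviation, where the same positional error costs less at a flexible residue than at a rigid one:

```python
import numpy as np

rng = np.random.default_rng(42)
n_conf, n_res = 100, 5

# Synthetic native ensemble: rigid termini, flexible middle residues
mean_coords = rng.normal(size=(n_res, 3))
sigma = np.array([0.2, 0.2, 1.5, 1.5, 0.2])    # per-residue flexibility (Angstrom)
ensemble = mean_coords[None] + rng.normal(size=(n_conf, n_res, 3)) * sigma[None, :, None]

def flex_score(model, ensemble):
    """Flexibility-weighted deviation: squared error per residue divided by
    the ensemble variance at that residue (isotropic assumption). Lower is better."""
    mu = ensemble.mean(axis=0)                 # (n_res, 3) ensemble mean structure
    var = ensemble.var(axis=0).mean(axis=1)    # (n_res,) isotropic variance
    d2 = ((model - mu) ** 2).sum(axis=1)       # squared deviation per residue
    return float(np.mean(d2 / var))

# The same 1 Angstrom error placed at a rigid vs. a flexible residue
rigid_err = mean_coords.copy()
rigid_err[0, 0] += 1.0
loop_err = mean_coords.copy()
loop_err[2, 0] += 1.0
print(flex_score(rigid_err, ensemble), flex_score(loop_err, ensemble))
```

    An unweighted RMSD-style measure would score both perturbed models identically; the variance weighting is what separates them.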

  17. Mathematical models and lymphatic filariasis control: monitoring and evaluating interventions.

    PubMed

    Michael, Edwin; Malecela-Lazaro, Mwele N; Maegga, Bertha T A; Fischer, Peter; Kazura, James W

    2006-11-01

    Monitoring and evaluation are crucially important to the scientific management of any mass parasite control programme. Monitoring enables the effectiveness of implemented actions to be assessed and necessary adaptations to be identified; it also determines when management objectives are achieved. Parasite transmission models can provide a scientific template for informing the optimal design of such monitoring programmes. Here, we illustrate the usefulness of using a model-based approach for monitoring and evaluating anti-parasite interventions and discuss issues that need addressing. We focus on the use of such an approach for the control and/or elimination of the vector-borne parasitic disease, lymphatic filariasis. PMID:16971182

  18. Evaluation of annual, global seismicity forecasts, including ensemble models

    NASA Astrophysics Data System (ADS)

    Taroni, Matteo; Zechar, Jeremy; Marzocchi, Warner

    2013-04-01

    In 2009, the Collaboratory for the Study of Earthquake Predictability (CSEP) initiated a prototype global earthquake forecast experiment. Three models participated in this experiment for 2009, 2010 and 2011: each model forecast the number of earthquakes above magnitude 6 in 1x1 degree cells spanning the globe. Here we use likelihood-based metrics to evaluate the consistency of the forecasts with the observed seismicity. We compare model performance with statistical tests and a new method based on the peer-to-peer gambling score. The results of the comparisons are used to build ensemble models that are a weighted combination of the individual models. Notably, in these experiments the ensemble model always performs significantly better than the single best-performing model. Our results indicate the following: i) time-varying forecasts, if not updated after each major shock, may not provide significant advantages with respect to time-invariant models in 1-year forecast experiments; ii) the spatial distribution seems to be the most important feature characterizing the different forecasting performances of the models; iii) the interpretation of consistency tests may be misleading, because some good models may be rejected while trivial models pass; iv) proper ensemble modeling seems to be a valuable procedure for obtaining the best-performing model for practical purposes.
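
    The likelihood-based scoring and weighted combination the abstract describes can be sketched with synthetic gridded rate forecasts. The softmax-of-mean-log-likelihood weighting below is an assumption for illustration; the paper's own weighting scheme and the gambling score are not reproduced here:

```python
import math

import numpy as np

rng = np.random.default_rng(1)
n_cells = 500

# Synthetic "true" annual M6+ rates per cell and one year of observed counts
true_rate = rng.gamma(2.0, 0.02, n_cells)
obs = rng.poisson(true_rate)

# Three hypothetical forecasts: per-cell expected counts with growing error
forecasts = [true_rate * rng.lognormal(0.0, s, n_cells) for s in (0.3, 0.8, 1.5)]

def poisson_loglik(rate, obs):
    """Joint Poisson log-likelihood of the observed counts under a rate forecast."""
    logfact = np.array([math.lgamma(k + 1.0) for k in obs])
    return float(np.sum(obs * np.log(rate) - rate - logfact))

# Weight each model by a softmax of its mean per-cell log-likelihood
lls = np.array([poisson_loglik(f, obs) for f in forecasts])
w = np.exp((lls - lls.max()) / n_cells)
w /= w.sum()
ensemble = sum(wi * f for wi, f in zip(w, forecasts))

print("weights:", np.round(w, 3))
print("model LLs:", np.round(lls, 1), "ensemble LL:", round(poisson_loglik(ensemble, obs), 1))
```

    Because the Poisson log-likelihood is concave in the rate, the ensemble's score is at least the weighted average of the member scores, which helps explain why well-weighted ensembles are hard to beat.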

  19. Development and evaluation of thermal model reduction algorithms for spacecraft

    NASA Astrophysics Data System (ADS)

    Deiml, Michael; Suderland, Martin; Reiss, Philipp; Czupalla, Markus

    2015-05-01

    This paper is concerned with the reduction of thermal models of spacecraft. The work presented here has been conducted in cooperation with the company OHB AG, formerly Kayser-Threde GmbH, and the Institute of Astronautics at Technische Universität München, with the goal of shortening and automating the time-consuming, manual process of thermal model reduction. The reduction of thermal models can be divided into the simplification of the geometry model, for calculation of external heat flows and radiative couplings, and the reduction of the underlying mathematical model. For the simplification, a method has been developed which approximates the reduced geometry model with the help of an optimization algorithm. Different linear and nonlinear model reduction techniques have been evaluated for their applicability to the reduction of the mathematical model. Compatibility with the thermal analysis tool ESATAN-TMS is of major concern here, which restricts the useful application of these methods. Additional model reduction methods have been developed that take these constraints into account. The Matrix Reduction method approximates the differential equation to reference values exactly, except for numerical errors. The Summation method enables a useful, applicable reduction of thermal models that can be used in industry. In this work a framework for the reduction of thermal models has been created, which can be used together with a newly developed graphical user interface for the reduction of thermal models in industry.

  20. New performance evaluation models for character detection in images

    NASA Astrophysics Data System (ADS)

    Wang, YanWei; Ding, XiaoQing; Liu, ChangSong; Wang, Kongqiao

    2010-02-01

    Detection of character regions is meaningful research work both for highlighting regions of interest and for recognition in further information processing. Much research has been performed on character localization and extraction, which creates a great need for performance evaluation schemes to inspect detection algorithms. In this paper, two probability models are established to accomplish evaluation tasks for different applications. For highlighting regions of interest, a Gaussian probability model, which simulates the property of a low-pass Gaussian filter of the human vision system (HVS), is constructed to allocate different weights to different character parts. It shows great potential for describing the performance of detectors, especially when the detected result is an incomplete character, where other methods cannot work effectively. For the recognition objective, we also introduce a weighted probability model to give an appropriate description of the contribution of detection results to final recognition results. The validity of the performance evaluation models proposed in this paper is demonstrated by experiments on web images and natural scene images. These models may also be applicable to evaluating algorithms that locate other objects, such as faces, and wider experiments need to be done to examine this assumption.
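
    The abstract does not specify the exact form of its Gaussian model, so the sketch below makes one plausible assumption: a 2-D Gaussian weight mask over a ground-truth character box, with a detection scored by the fraction of weighted mass it covers, so that missing the character's center costs more than missing its fringe:

```python
import numpy as np

def gaussian_weights(h, w, sigma_frac=0.3):
    """Per-pixel weights over a ground-truth character box: a 2-D Gaussian
    centered on the box, loosely mimicking the low-pass behaviour of the HVS."""
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    sy, sx = sigma_frac * h, sigma_frac * w
    return np.exp(-0.5 * (((ys - cy) / sy) ** 2 + ((xs - cx) / sx) ** 2))

def detection_score(weights, detected_mask):
    """Fraction of the Gaussian mass covered by the detection (weighted recall)."""
    return float((weights * detected_mask).sum() / weights.sum())

weights = gaussian_weights(20, 20)
left_half = np.zeros((20, 20))
left_half[:, :10] = 1.0                              # detector found the left half only
print(detection_score(weights, left_half))           # half of the Gaussian mass
print(detection_score(weights, np.ones((20, 20))))   # complete coverage scores 1.0
```

    Under a plain pixel-count measure, a centered patch and an equal-area corner patch would score the same; the Gaussian weighting ranks the centered one higher.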

  1. Progressive evaluation of incorporating information into a model building process

    NASA Astrophysics Data System (ADS)

    Gharari, Shervan; Hrachowitz, Markus; Fenicia, Fabrizio; Gao, Hongkai; Gupta, Hoshin; Savenije, Huub

    2014-05-01

    Catchments are open systems, meaning that it is impossible to determine the exact boundary conditions of the real system in space and time. Models are therefore essential tools for capturing system behaviour spatially and extrapolating it temporally for prediction. In recent years, conceptual models have been at the center of attention rather than so-called physically based models, which are often over-parameterized and encounter difficulties in up-scaling small-scale processes. Conceptual models, however, are heavily dependent on calibration, as one or more of their parameter values typically cannot be physically measured at the catchment scale. It is generally understood that increasing the complexity of a conceptual model to better represent hydrological process heterogeneity typically makes parameter identification more difficult; however, the amount of information contributed by each model element (control volumes, the so-called buckets; interconnecting fluxes; parameterization, i.e. constitutive functions; and finally parameter values) remains largely unknown. Each of these components contains information on the transformation of forcing (precipitation) into runoff, but their effects, individually and together, are not well understood. In this study we follow hierarchical steps for model building. First, the model structure is assembled from its building blocks (control volumes) and interconnecting fluxes; at this level, the effect of adding each control volume and of the model architecture (the arrangement of control volumes and fluxes) can be evaluated. Second, the parameterization of the model is evaluated; for example, the effect of a specific type of stage-discharge relation for a control volume can be explored. Finally, in the last step of model building, the information gained from parameter values is quantified. 
In each development level the value of information which are added

  2. Evaluation of different feed intake models for dairy cows.

    PubMed

    Krizsan, S J; Sairanen, A; Höjer, A; Huhtanen, P

    2014-01-01

    The objective of the current study was to evaluate feed intake prediction models of varying complexity using individual observations of lactating cows subjected to experimental dietary treatments in periodic sequences (i.e., change-over trials). Observed or previous-period animal data were combined with the current-period feed data in the evaluations of the different feed intake prediction models. This illustrates the situation and amount of available data when formulating rations for dairy cows in practice and tests the robustness of the models when milk yield is used in feed intake predictions. The models evaluated in the current study were chosen based on the input data they require and their applicability to Nordic conditions. A data set comprising 2,161 individual observations was constructed from 24 trials conducted at research barns in Denmark, Finland, Norway, and Sweden. Prediction models were evaluated by residual analysis using mixed and simple model regression. Great variation in animal and feed factors was observed in the data set, with ranges in total dry matter intake (DMI) from 10.4 to 30.8 kg/d, forage DMI from 4.1 to 23.0 kg/d, and milk yield from 8.4 to 51.1 kg/d. The mean biases of DMI predictions for the National Research Council, the Cornell Net Carbohydrate and Protein System, the British, Finnish, and Scandinavian models were -1.71, 0.67, 2.80, 0.83, and -0.60 kg/d, with prediction errors of 2.33, 1.71, 3.19, 1.62, and 2.03 kg/d, respectively, when observed milk yield was used in the predictions. The performance of the models was ranked the same using either mixed or simple model regression analysis, but generally the random contribution to the prediction error increased with simple rather than mixed model regression analysis. The prediction error of all models was generally greater when using previous-period data compared with the observed milk yield. 
When the average milk yield over all periods was used in the predictions
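
    The residual summaries reported above, mean bias and prediction error, can be sketched directly from paired observed and predicted intakes. Treating "prediction error" as the root mean squared error of the residuals is an assumption here, and the intake values are hypothetical:

```python
import numpy as np

def mean_bias_and_rmse(observed, predicted):
    """Residual summary for intake-model evaluation: mean bias is the average
    residual (observed - predicted); the prediction error is taken here as the
    root mean squared error of the residuals."""
    resid = np.asarray(observed) - np.asarray(predicted)
    return float(resid.mean()), float(np.sqrt((resid ** 2).mean()))

obs_dmi  = [18.2, 22.5, 25.1, 19.8, 21.0]   # hypothetical observed DMI, kg/d
pred_dmi = [19.0, 21.0, 24.0, 21.5, 20.0]   # hypothetical model predictions
bias, rmse = mean_bias_and_rmse(obs_dmi, pred_dmi)
print(f"mean bias = {bias:.2f} kg/d, prediction error = {rmse:.2f} kg/d")
```

    A positive bias means the model underpredicts intake on average; the RMSE additionally reflects the scatter of individual residuals.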

  3. Evaluating supervised topic models in the presence of OCR errors

    NASA Astrophysics Data System (ADS)

    Walker, Daniel; Ringger, Eric; Seppi, Kevin

    2013-01-01

    Supervised topic models are promising tools for text analytics that simultaneously model topical patterns in document collections and relationships between those topics and document metadata, such as timestamps. We examine empirically the effect of OCR noise on the ability of supervised topic models to produce high quality output through a series of experiments in which we evaluate three supervised topic models and a naive baseline on synthetic OCR data having various levels of degradation and on real OCR data from two different decades. The evaluation includes experiments with and without feature selection. Our results suggest that supervised topic models are no better, or at least not much better in terms of their robustness to OCR errors, than unsupervised topic models and that feature selection has the mixed result of improving topic quality while harming metadata prediction quality. For users of topic modeling methods on OCR data, supervised topic models do not yet solve the problem of finding better topics than the original unsupervised topic models.

  4. Forecasting in foodservice: model development, testing, and evaluation.

    PubMed

    Miller, J L; Thompson, P A; Orabella, M M

    1991-05-01

    This study was designed to develop, test, and evaluate mathematical models appropriate for forecasting menu-item production demand in foodservice. Data were collected from residence and dining hall foodservices at Ohio State University. Objectives of the study were to collect, code, and analyze the data; develop and test models using actual operation data; and compare forecasting results with current methods in use. Customer count was forecast using deseasonalized simple exponential smoothing. Menu-item demand was forecast by multiplying the count forecast by a predicted preference statistic. Forecasting models were evaluated using mean squared error, mean absolute deviation, and mean absolute percentage error techniques. All models were more accurate than current methods. A broad spectrum of forecasting techniques could be used by foodservice managers with access to a personal computer and spreadsheet and database-management software. The findings indicate that mathematical forecasting techniques may be effective in foodservice operations to control costs, increase productivity, and maximize profits. PMID:2019699
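
    The pipeline described above, deseasonalize, smooth, reseasonalize, multiply by a preference statistic, then score with MSE/MAD/MAPE, fits in a few lines. The customer counts, smoothing constant, and preference fraction below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

def ses(series, alpha=0.3):
    """Simple exponential smoothing: returns one-step-ahead forecasts."""
    f = [series[0]]                      # initialize with the first observation
    for y in series[:-1]:
        f.append(alpha * y + (1 - alpha) * f[-1])
    return np.array(f)

# Hypothetical daily customer counts: a weekly pattern plus noise (4 weeks)
week = np.array([520, 480, 500, 530, 610, 700, 420], dtype=float)
counts = np.tile(week, 4) + rng.normal(0.0, 20.0, 28)

season = counts.reshape(4, 7).mean(axis=0)     # average count by weekday
season /= season.mean()                        # weekly seasonal indices
fc = ses(counts / np.tile(season, 4)) * np.tile(season, 4)

pref = 0.35                # predicted preference statistic for one menu item
item_fc = fc * pref        # menu-item production demand forecast

err = counts - fc
mse = float(np.mean(err ** 2))
mad = float(np.mean(np.abs(err)))
mape = float(np.mean(np.abs(err) / counts) * 100.0)
print(f"MSE={mse:.1f}  MAD={mad:.1f}  MAPE={mape:.2f}%")
```

    All three error measures are computed on the count forecast; in practice the same checks would be run per menu item against served portions.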

  5. Software Platform Evaluation - Verifiable Fuel Cycle Simulation (VISION) Model

    SciTech Connect

    J. J. Jacobson; D. E. Shropshire; W. B. West

    2005-11-01

    The purpose of this Software Platform Evaluation (SPE) is to document the top-level evaluation of potential software platforms on which to construct a simulation model that satisfies the requirements for a Verifiable Fuel Cycle Simulation Model (VISION) of the Advanced Fuel Cycle (AFC). See the Software Requirements Specification for Verifiable Fuel Cycle Simulation (VISION) Model (INEEL/EXT-05-02643, Rev. 0) for a discussion of the objective and scope of the VISION model. VISION is intended to serve as a broad systems analysis and study tool applicable to work conducted as part of the AFCI (including costs estimates) and Generation IV reactor development studies. This document will serve as a guide for selecting the most appropriate software platform for VISION. This is a “living document” that will be modified over the course of the execution of this work.

  6. Road network safety evaluation using Bayesian hierarchical joint model.

    PubMed

    Wang, Jie; Huang, Helai

    2016-05-01

    Safety and efficiency are commonly regarded as two significant performance indicators of transportation systems. In practice, road network planning has focused on road capacity and transport efficiency whereas the safety level of a road network has received little attention in the planning stage. This study develops a Bayesian hierarchical joint model for road network safety evaluation to help planners take traffic safety into account when planning a road network. The proposed model establishes relationships between road network risk and micro-level variables related to road entities and traffic volume, as well as socioeconomic, trip generation and network density variables at macro level which are generally used for long term transportation plans. In addition, network spatial correlation between intersections and their connected road segments is also considered in the model. A road network is elaborately selected in order to compare the proposed hierarchical joint model with a previous joint model and a negative binomial model. According to the results of the model comparison, the hierarchical joint model outperforms the joint model and negative binomial model in terms of the goodness-of-fit and predictive performance, which indicates the reasonableness of considering the hierarchical data structure in crash prediction and analysis. Moreover, both random effects at the TAZ level and the spatial correlation between intersections and their adjacent segments are found to be significant, supporting the employment of the hierarchical joint model as an alternative in road-network-level safety modeling as well. PMID:26945109

  7. Evaluation of thermographic phosphor technology for aerodynamic model testing

    SciTech Connect

    Cates, M.R.; Tobin, K.W.; Smith, D.B.

    1990-08-01

    The goal for this project was to perform technology evaluations applicable to the development of higher-precision, higher-temperature aerodynamic model testing at Arnold Engineering Development Center (AEDC) in Tullahoma, Tennessee. With the advent of new programs for the design of aerospace craft that fly at higher speeds and altitudes, requirements for a detailed understanding of high-temperature materials become very important. Model testing is a natural and critical part of the development of these new initiatives. The well-established thermographic phosphor techniques of the Applied Technology Division at Oak Ridge National Laboratory are highly desirable for diagnostic evaluation of materials and aerodynamic shapes as studied in model tests. Combining this state-of-the-art thermographic technique with modern, higher-temperature models will greatly improve the practicability of tests for the advanced aerospace vehicles and will provide higher-precision diagnostic information for quantitative evaluation of these tests. The wavelength ratio method for measuring surface temperatures of aerodynamic models was demonstrated in measurements made for this project. In particular, it was shown that appropriate phosphors could be selected for the temperature range up to approximately 700 °F or higher, with emission line ratios of sufficient sensitivity to measure temperature with 1% precision or better. Further, it was demonstrated that two-dimensional image-processing methods, using standard hardware, can be successfully applied to surface thermography of aerodynamic models for AEDC applications.
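    The wavelength (two-line) ratio method reduces to calibrating the ratio of two emission lines against temperature and then inverting that calibration. The Boltzmann form and the constants `A` and `DELTA_E` below are hypothetical placeholders, not calibration data for any actual phosphor:

```python
import math

# Two-line ratio thermometry sketch: the ratio of two phosphor emission
# lines often follows a Boltzmann form R(T) = A * exp(-dE / (k * T)).
# A and DELTA_E are made-up calibration constants for illustration only.
K_B = 8.617e-5          # Boltzmann constant, eV/K
A, DELTA_E = 3.0, 0.25  # hypothetical pre-factor and energy gap (eV)

def line_ratio(temp_k):
    """Forward model: emission line ratio at a given temperature (K)."""
    return A * math.exp(-DELTA_E / (K_B * temp_k))

def temperature_from_ratio(ratio):
    """Invert the Boltzmann form to recover temperature from a measured ratio."""
    return DELTA_E / (K_B * math.log(A / ratio))
```

    The appeal of the ratio is that it cancels factors such as excitation intensity and viewing geometry that affect both lines equally.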

  8. Evaluation of Rainfall-Runoff Models for Mediterranean Subcatchments

    NASA Astrophysics Data System (ADS)

    Cilek, A.; Berberoglu, S.; Donmez, C.

    2016-06-01

    The development and application of rainfall-runoff models have been a cornerstone of hydrological research for many decades. The amount of rainfall and its intensity and variability control the generation of runoff and the erosional processes operating at different scales. These interactions can be greatly variable in Mediterranean catchments with marked hydrological fluctuations. The aim of the study was to evaluate the performance of a rainfall-runoff model for rainfall-runoff simulation in a Mediterranean subcatchment. The Pan-European Soil Erosion Risk Assessment (PESERA), a simplified hydrological process-based approach, was used in this study to combine hydrological surface runoff factors. In total, 128 input layers are required to run the model, derived from a data set that includes climate, topography, land use, crop type, planting date, and soil characteristics. Initial ground cover was estimated from Landsat ETM data provided by ESA. This hydrological model was evaluated in terms of its performance in the Goksu River Watershed, Turkey, located in the Central Eastern Mediterranean Basin of Turkey. The area is approximately 2000 km2. The landscape is dominated by bare ground, agriculture and forests. The average annual rainfall is 636.4 mm. This study has significant importance for evaluating different model performances in a complex Mediterranean basin. The results provided comprehensive insight, including the advantages and limitations of modelling approaches in the Mediterranean environment.

  9. Animal models to evaluate anti-atherosclerotic drugs.

    PubMed

    Priyadharsini, Raman P

    2015-08-01

    Atherosclerosis is a multifactorial condition characterized by endothelial injury, fatty streak deposition, and stiffening of the blood vessels. The pathogenesis is complex and mediated by adhesion molecules, inflammatory cells, and smooth muscle cells. Statins have been the major drugs for treating hypercholesterolemia for the past two decades despite their limited efficacy. There is an urgent need for new drugs that can replace statins or be combined with them. Preclinical studies evaluating atherosclerosis require an ideal animal model that resembles the disease condition, but no single animal model mimics the disease. The animal models used are rabbits, rats, mice, hamsters, mini pigs, etc. Each animal model has its own advantages and disadvantages. Methods of inducing atherosclerosis include diet, chemical induction, mechanically induced injuries, and genetically manipulated animal models. This review mainly focuses on the various animal models, methods of induction, their advantages and disadvantages, and current perspectives with regard to preclinical studies on atherosclerosis. PMID:26095240

  10. Information technology model for evaluating emergency medicine teaching

    NASA Astrophysics Data System (ADS)

    Vorbach, James; Ryan, James

    1996-02-01

    This paper describes work in progress to develop an Information Technology (IT) model and supporting information system for the evaluation of clinical teaching in the Emergency Medicine (EM) Department of North Shore University Hospital. In the academic hospital setting student physicians, i.e. residents, and faculty function daily in their dual roles as teachers and students respectively, and as health care providers. Databases exist that are used to evaluate both groups in either academic or clinical performance, but rarely has this information been integrated to analyze the relationship between academic performance and the ability to care for patients. The goal of the IT model is to improve the quality of teaching of EM physicians by enabling the development of integrable metrics for faculty and resident evaluation. The IT model will include (1) methods for tracking residents in order to develop experimental databases; (2) methods to integrate lecture evaluation, clinical performance, resident evaluation, and quality assurance databases; and (3) a patient flow system to monitor patient rooms and the waiting area in the Emergency Medicine Department, to record and display status of medical orders, and to collect data for analyses.

  11. Air Pollution Data for Model Evaluation and Application

    EPA Science Inventory

    One objective of designing an air pollution monitoring network is to obtain data for evaluating air quality models that are used in the air quality management process and scientific discovery. A common use is to relate emissions to air quality, including assessing ...

  12. Support for Career Development in Youth: Program Models and Evaluations

    ERIC Educational Resources Information Center

    Mekinda, Megan A.

    2012-01-01

    This article examines four influential programs--Citizen Schools, After School Matters, career academies, and Job Corps--to demonstrate the diversity of approaches to career programming for youth. It compares the specific program models and draws from the evaluation literature to discuss strengths and weaknesses of each. The article highlights…

  13. Evaluation of an Interdisciplinary, Physically Active Lifestyle Course Model

    ERIC Educational Resources Information Center

    Fede, Marybeth H.

    2009-01-01

    The purpose of this study was to evaluate a fit for life program at a university and to use the findings from an extensive literature review, consultations with formative and summative committees, and data collection to develop an interdisciplinary, physically active lifestyle (IPAL) course model. To address the 5 research questions examined in…

  14. Evaluation of a metabolic cotton seedling emergence model

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A model for cotton seedling emergence (MaGi) based on malate synthase kinetics was evaluated in the field at two locations, Lubbock and Big Spring, TX. Cotton, cvar. DP 444, was planted through the early spring and into typical planting times for the areas. Soil temperatures at seed depth was used a...

  15. Animal models of cerebral ischemia for evaluation of drugs.

    PubMed

    Gupta, Y K; Briyal, Seema

    2004-10-01

    Stroke is a major cause of death and disability worldwide. The resulting burden on society continues to grow with the increase in the incidence of stroke. Brain attack is a term introduced to describe the acute presentation of stroke, which emphasizes the need for urgent action to remedy the situation. Though a large number of therapeutic agents like thrombolytics, NMDA receptor antagonists, calcium channel blockers and antioxidants have been used or are being evaluated, there remains a large gap between the benefits of these agents and the properties an ideal drug for stroke should offer. In recent years much attention has been paid to the exploration of herbal preparations, antioxidant agents and combination therapies, including COX-2 inhibitors, in experimental models of stroke. For better evaluation of drugs and enhancement of their predictability from animal experimentation to clinical settings, it has been realized that the selection of animal models and the parameters to be evaluated should be critically assessed. Focal and global cerebral ischemia represent diseases that are common in the human population. Understanding the mechanisms of injury and neuroprotection in these diseases is important for identifying new target sites to treat ischemia. There are many animal models available to investigate injury mechanisms and neuroprotective strategies. In this article we attempt to summarize commonly explored animal models of focal and global cerebral ischemia and evaluate their advantages and limitations. PMID:15907047

  16. A Model for Evaluation of Mass Media Coverage.

    ERIC Educational Resources Information Center

    Johnson, Phylis

    1996-01-01

    Defines total community coverage as the presentation of divisive issues through such media as electronic town meetings and public debates. Suggests ways to improve these media formats, including a 4-level model. Describes in depth each level--Foundations, Conceptual Awareness, Investigation and Evaluation, and Action Skills. Presents a case study…

  17. EVALUATION OF THE EMPIRICAL KINETIC MODELING APPROACH (EKMA)

    EPA Science Inventory

    The EKMA is a Lagrangian photochemical air quality simulation model that calculates ozone from its precursors: nonmethane organic compounds (NMOC) and nitrogen oxides (NOx). This study evaluated the performance of the EKMA when it is used to estimate the maximum ozone concentrati...

  18. Career Alternatives Model for 1975-76. Final Evaluation Report.

    ERIC Educational Resources Information Center

    Washington Univ., Seattle. Bureau of School Service and Research.

    An evaluation of the Career Alternatives Model (CAM) program in the Highline School District, Washington, assessed the impact of four key components: (1) the film series "Bread and Butterflies" (for grades 4-6), designed to increase acceptance of responsibility for future career development and to increase maturity in career decision making; (2) a…

  19. Evaluation of a stratiform cloud parameterization for general circulation models

    SciTech Connect

    Ghan, S.J.; Leung, L.R.; McCaa, J.

    1996-04-01

    To evaluate the relative importance of horizontal advection of cloud versus cloud formation within the grid cell of a single column model (SCM), we have performed a series of simulations with our SCM driven by a fixed vertical velocity and various rates of horizontal advection.

  20. Field Evaluation of an Avian Risk Assessment Model

    EPA Science Inventory

    We conducted two laboratory subacute dietary toxicity tests and one outdoor subacute dietary toxicity test to determine the effectiveness of the U.S. Environmental Protection Agency's deterministic risk assessment model for evaluating the potential of adverse effects to birds in ...

  1. An IPA-Embedded Model for Evaluating Creativity Curricula

    ERIC Educational Resources Information Center

    Chang, Chi-Cheng

    2014-01-01

    How to diagnose the effectiveness of creativity-related curricula is a crucial concern in the pursuit of educational excellence. This paper introduces an importance-performance analysis (IPA)-embedded model for curriculum evaluation, using the example of an IT project implementation course to assess the creativity performance deduced from student…

  2. Evaluating Item Fit for Multidimensional Item Response Models

    ERIC Educational Resources Information Center

    Zhang, Bo; Stone, Clement A.

    2008-01-01

    This research examines the utility of the S-X² statistic proposed by Orlando and Thissen (2000) in evaluating item fit for multidimensional item response models. Monte Carlo simulation was conducted to investigate both the Type I error and statistical power of this fit statistic in analyzing two kinds of multidimensional test…

  3. Assessment and Evaluation Modeling. Symposium 38. [AHRD Conference, 2001].

    ERIC Educational Resources Information Center

    2001

    This symposium on assessment and evaluation modeling consists of three presentations. "Training Assessment Among Kenyan Smallholder Entrepreneurs" (George G. Shibanda, Jemymah Ingado, Bernard Nassiuma) reports a study that assessed the extent to which the need for knowledge, information, and skills among small scale farmers can promote effective…

  4. REVIEW OF MATHEMATICAL MODELING FOR EVALUATING SOIL VAPOR EXTRACTION SYSTEMS

    EPA Science Inventory

    Soil vapor extraction (SVE) is a commonly used remedial technology at sites contaminated with volatile organic compounds (VOCs) such as chlorinated solvents and hydrocarbon fuels. Modeling tools are available to help evaluate the feasibility, design, and performance of SVE system...

  5. TURBULENT DIFFUSION BEHIND VEHICLES: EVALUATION OF ROADWAY MODELS

    EPA Science Inventory

    The paper presents a statistical evaluation of three highway air pollution models (CALINE 3, HIWAY-2, and ROADWAY) using the tracer data from the General Motors Sulfate Dispersion Experiment. The bootstrap resampling procedure is used to quantify the variability in the observed c...

  6. Validation of an Evaluation Model for Learning Management Systems

    ERIC Educational Resources Information Center

    Kim, S. W.; Lee, M. G.

    2008-01-01

    This study aims to validate a model for evaluating learning management systems (LMS) used in e-learning fields. A survey of 163 e-learning experts, regarding 81 validation items developed through literature review, was used to ascertain the importance of the criteria. A concise list of explanatory constructs, including two principle factors, was…

  7. Evaluating the Predictive Value of Growth Prediction Models

    ERIC Educational Resources Information Center

    Murphy, Daniel L.; Gaertner, Matthew N.

    2014-01-01

    This study evaluates four growth prediction models--projection, student growth percentile, trajectory, and transition table--commonly used to forecast (and give schools credit for) middle school students' future proficiency. Analyses focused on vertically scaled summative mathematics assessments, and two performance standards conditions (high…

  8. Economic evaluations with agent-based modelling: an introduction.

    PubMed

    Chhatwal, Jagpreet; He, Tianhua

    2015-05-01

    Agent-based modelling (ABM) is a relatively new technique that overcomes some of the limitations of other methods commonly used for economic evaluations, including assumptions of linearity, homogeneity and stationarity. Agents in ABMs are autonomous entities who interact with each other and with the environment. ABMs provide an inductive or 'bottom-up' approach, i.e. individual-level behaviours define system-level components. ABMs have the unique ability to capture emergent phenomena that cannot otherwise be predicted from the combination of individual-level interactions. In this tutorial, we discuss the basic concepts and important features of ABMs. We present a case study of an application of a simple ABM to evaluate the cost effectiveness of screening for an infectious disease. We also provide our model, which was developed using an open-source software program, NetLogo. We discuss software, resources, challenges and future research opportunities of ABMs for economic evaluations. PMID:25609398
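    A deliberately tiny sketch of the kind of screening ABM the tutorial describes follows. The authors' NetLogo model is not reproduced here; the contact rule, costs, and parameters are all illustrative assumptions:

```python
import random

# Toy infectious-disease ABM: agents mix randomly each step; optional
# screening detects and treats one random agent per step at a per-test
# cost. All rates and costs are made-up illustrative values.
def run_abm(n=500, steps=100, p_transmit=0.05, screen=False,
            test_cost=10.0, case_cost=1000.0, seed=1):
    rng = random.Random(seed)
    infected = [False] * n
    infected[0] = True                               # one index case
    cost, cases = 0.0, 1
    for _ in range(steps):
        a, b = rng.randrange(n), rng.randrange(n)    # one random contact
        if infected[a] != infected[b] and rng.random() < p_transmit:
            infected[a] = infected[b] = True         # transmission event
            cases += 1
        if screen:                                   # screen one random agent
            cost += test_cost
            infected[rng.randrange(n)] = False       # treat on detection
    return cases, cost + cases * case_cost

base_cases, base_cost = run_abm(screen=False)
scr_cases, scr_cost = run_abm(screen=True)
```

    Comparing total costs and case counts between the two arms is the basic shape of a cost-effectiveness comparison; a real analysis would average over many seeds and discount costs over time.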

  9. A model for compression after impact strength evaluation

    NASA Technical Reports Server (NTRS)

    Ilcewicz, Larry B.; Dost, Ernst F.; Coggeshall, Randy L.

    1989-01-01

    One key property commonly used for evaluating composite material performance is compression after impact strength (CAI). Standard CAI tests typically use a specific laminate stacking sequence, coupon geometry, and impact level. In order to understand what material factors affect CAI, evaluation of test results should include more than comparisons of the measured strength for different materials. This study considers the effects of characteristic impact damage state, specimen geometry, material toughness, ply group thickness, undamaged strength, and failure mode. The results of parametric studies, using an analysis model developed to predict CAI, are discussed. Experimental results used to verify the model are also presented. Finally, recommended pre- and post-test CAI evaluation schemes which help link material behavior to structural performance are summarized.

  10. Quantitative comparison between crowd models for evacuation planning and evaluation

    NASA Astrophysics Data System (ADS)

    Viswanathan, Vaisagh; Lee, Chong Eu; Lees, Michael Harold; Cheong, Siew Ann; Sloot, Peter M. A.

    2014-02-01

    Crowd simulation is rapidly becoming a standard tool for evacuation planning and evaluation. However, the many crowd models in the literature are structurally different, and few have been rigorously calibrated against real-world egress data, especially in emergency situations. In this paper we describe a procedure to quantitatively compare different crowd models or between models and real-world data. We simulated three models: (1) the lattice gas model, (2) the social force model, and (3) the RVO2 model, and obtained the distributions of six observables: (1) evacuation time, (2) zoned evacuation time, (3) passage density, (4) total distance traveled, (5) inconvenience, and (6) flow rate. We then used the DISTATIS procedure to compute the compromise matrix of statistical distances between the three models. Projecting the three models onto the first two principal components of the compromise matrix, we find the lattice gas and RVO2 models are similar in terms of the evacuation time, passage density, and flow rates, whereas the social force and RVO2 models are similar in terms of the total distance traveled. Most importantly, we find that the zoned evacuation times of the three models to be very different from each other. Thus we propose to use this variable, if it can be measured, as the key test between different models, and also between models and the real world. Finally, we compared the model flow rates against the flow rate of an emergency evacuation during the May 2008 Sichuan earthquake, and found the social force model agrees best with this real data.
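    The comparison idea above, building a matrix of statistical distances between model-generated distributions of an observable, can be sketched with a two-sample Kolmogorov-Smirnov statistic. The three "models" below are placeholder random-number generators, not the lattice gas, social force, or RVO2 models, and the KS distance stands in for the distances fed to DISTATIS:

```python
import random

def ks_distance(xs, ys):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap between
    the two empirical cumulative distribution functions."""
    xs, ys = sorted(xs), sorted(ys)
    d = 0.0
    for v in sorted(set(xs + ys)):
        fx = sum(x <= v for x in xs) / len(xs)
        fy = sum(y <= v for y in ys) / len(ys)
        d = max(d, abs(fx - fy))
    return d

# Stand-in "evacuation time" samples from three toy models: A and B share a
# distribution, C differs. Parameters are illustrative, not calibrated.
rng = random.Random(42)
samples = {
    "model_A": [rng.gauss(60, 5) for _ in range(200)],
    "model_B": [rng.gauss(60, 5) for _ in range(200)],
    "model_C": [rng.gauss(80, 8) for _ in range(200)],
}
names = list(samples)
dist = {(a, b): ks_distance(samples[a], samples[b]) for a in names for b in names}
```

    In the paper's procedure, one such distance matrix is computed per observable and DISTATIS then combines them into a single compromise matrix.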

  11. Model Evaluation and Hindcasting: An Experiment with an Integrated Assessment Model

    SciTech Connect

    Chaturvedi, Vaibhav; Kim, Son H.; Smith, Steven J.; Clarke, Leon E.; Zhou, Yuyu; Kyle, G. Page; Patel, Pralit L.

    2013-11-01

    Integrated assessment models have been extensively used for analyzing long-term energy and greenhouse emissions trajectories and have influenced key policies on this subject. Though these models admittedly focus on long-term trajectories, how well they capture historical dynamics is an open question. In a first experiment of its kind, we present a framework for the evaluation of such integrated assessment models. We use the Global Change Assessment Model (GCAM) for this zero-order experiment, and focus on the building sector results for the USA. We calibrate the model for 1990 and run it forward up to 2095 in five-year time steps. This gives us results for 1995, 2000, 2005 and 2010, which we compare to observed historical data at both the fuel level and the service level. We focus on bringing out key insights for the wider process of model evaluation through our experiment with GCAM. We begin by highlighting that the creation of an evaluation dataset and the identification of key evaluation metrics are the foremost challenges in the evaluation process. Our analysis highlights that estimating the functional form of the relationship between energy service demand, which is an unobserved variable, and its drivers is a significant challenge in the absence of adequate historical data for both the dependent and driver variables. Historical data availability for key metrics is a serious limiting factor in the process of evaluation. Interestingly, the service level data against which such models need to be evaluated are themselves a result of models. Thus for energy services, the best we can do is compare our model results with other model results rather than with observed and measured data. We show that long-term models, by the nature of their construction, will most likely underestimate the rapid growth in some services observed in a short time span. Also, we learn that modeling saturated energy services like space heating is easier than modeling unsaturated services like space cooling.

  12. Evaluation of battery models for prediction of electric vehicle range

    NASA Technical Reports Server (NTRS)

    Frank, H. A.; Phillips, A. M.

    1977-01-01

    Three analytical models for predicting electric vehicle battery output and the corresponding electric vehicle range for various driving cycles were evaluated. The models were used to predict output and range, and the predictions were then compared with values determined experimentally by laboratory tests on batteries, using discharge cycles identical to those encountered by an actual electric vehicle on SAE cycles. Results indicate that the modified Hoxie model gave the best predictions, with an accuracy of about 97 to 98% in the best cases and 86% in the worst case. A computer program was written to perform the lengthy iterative calculations required. The program and the hardware used to automatically discharge the battery are described.
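    The abstract does not spell out the modified Hoxie model itself, so as a stand-in this sketch estimates discharge time and range with Peukert's law, a common empirical battery model of the same general kind; all parameter values are hypothetical:

```python
# Peukert's-law range sketch (a named stand-in, NOT the modified Hoxie
# model from the report). All parameters are hypothetical.
PEUKERT_K = 1.2        # Peukert exponent (illustrative)
C_RATED = 100.0        # rated capacity (Ah) at the rated discharge current
I_RATED = 5.0          # rated discharge current (A)

def hours_to_discharge(current_a):
    """Peukert's law: t = (C / I_rated) * (I_rated / I) ** k.
    Higher currents drain the battery disproportionately faster."""
    return (C_RATED / I_RATED) * (I_RATED / current_a) ** PEUKERT_K

def range_km(current_a, speed_kmh):
    """Range as discharge time times a constant cruising speed."""
    return hours_to_discharge(current_a) * speed_kmh
```

    A driving-cycle prediction would integrate this kind of relation over the varying current draw of the cycle rather than assume a constant current, which is where the iterative calculations mentioned in the abstract come in.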

  13. Evaluation of the St. Lucia geothermal resource: macroeconomic models

    SciTech Connect

    Burris, A.E.; Trocki, L.K.; Yeamans, M.K.; Kolstad, C.D.

    1984-08-01

    A macroeconometric model describing the St. Lucian economy was developed using 1970 to 1982 economic data. Results of macroeconometric forecasts for the period 1983 through 1985 show an increase in gross domestic product (GDP) for 1983 and 1984 with a decline in 1985. The rate of population growth is expected to exceed GDP growth so that a small decline in per capita GDP will occur. We forecast that garment exports will increase, providing needed employment and foreign exchange. To obtain a longer-term but more general outlook on St. Lucia's economy, and to evaluate the benefit of geothermal energy development, we applied a nonlinear programming model. The model maximizes discounted cumulative consumption.

  14. Evaluating Model Fit for Growth Curve Models: Integration of Fit Indices from SEM and MLM Frameworks

    ERIC Educational Resources Information Center

    Wu, Wei; West, Stephen G.; Taylor, Aaron B.

    2009-01-01

    Evaluating overall model fit for growth curve models involves 3 challenging issues. (a) Three types of longitudinal data with different implications for model fit may be distinguished: balanced on time with complete data, balanced on time with data missing at random, and unbalanced on time. (b) Traditional work on fit from the structural equation…

  15. Parameter Sensitivity Evaluation of the CLM-Crop model

    NASA Astrophysics Data System (ADS)

    Drewniak, B. A.; Zeng, X.; Mametjanov, A.; Anitescu, M.; Norris, B.; Kotamarthi, V. R.

    2011-12-01

    In order to improve carbon cycling within Earth System Models, crop representations for corn, spring wheat, and soybean species have been incorporated into the latest version of the Community Land Model (CLM), the land surface model in the Community Earth System Model. As a means to evaluate and improve the CLM-Crop model, we will determine the sensitivity of carbon fluxes (such as GPP and NEE), yields, and soil organic matter to various crop parameters. The sensitivity analysis will perform small perturbations over a range of values for each parameter on individual grid sites, for comparison with AmeriFlux data, as well as globally, so crop model parameters can be improved. Over 20 parameters have been identified for evaluation in this study, including carbon-nitrogen ratios for leaves, stems, roots, and organs; fertilizer applications; growing degree days for each growth stage; and more. Results from this study will be presented to give a better understanding of the sensitivity of the various parameters used to represent crops, which will help improve overall model performance and aid in determining the future influence climate change will have on cropland ecosystems.
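    The small-perturbation approach described above can be sketched as a one-at-a-time sensitivity loop. The toy yield function and the parameter names are invented placeholders, not CLM-Crop parameters:

```python
# One-at-a-time (OAT) perturbation sensitivity: nudge each parameter by a
# small relative step and record the normalized change in the output.
# The toy "crop model" below is a placeholder, not CLM-Crop.
def toy_yield(params):
    # Hypothetical response: yield rises with fertilizer, falls with leaf C:N.
    return 10.0 * params["fertilizer"] ** 0.5 / params["leaf_cn"] ** 0.3

def oat_sensitivity(model, base, rel_step=0.01):
    """Return the elasticity of the model output w.r.t. each parameter."""
    base_out = model(base)
    sens = {}
    for name, value in base.items():
        perturbed = dict(base, **{name: value * (1 + rel_step)})
        sens[name] = (model(perturbed) - base_out) / (base_out * rel_step)
    return sens

s = oat_sensitivity(toy_yield, {"fertilizer": 100.0, "leaf_cn": 25.0})
```

    For the power-law toy model the elasticities recover the exponents (about 0.5 and -0.3), which is a convenient self-check before applying the loop to an expensive model.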

  16. Toward diagnostic model calibration and evaluation: Approximate Bayesian computation

    NASA Astrophysics Data System (ADS)

    Vrugt, Jasper A.; Sadegh, Mojtaba

    2013-07-01

    The ever-increasing pace of computational power, along with continued advances in measurement technologies and improvements in process understanding, has stimulated the development of increasingly complex hydrologic models that simulate soil moisture flow, groundwater recharge, surface runoff, root water uptake, and river discharge at different spatial and temporal scales. Reconciling these high-order system models with perpetually larger volumes of field data is becoming more and more difficult, particularly because classical likelihood-based fitting methods lack the power to detect and pinpoint deficiencies in the model structure. Gupta et al. (2008) have recently proposed steps (amongst others) toward the development of a more robust and powerful method of model evaluation. Their diagnostic approach uses signature behaviors and patterns observed in the input-output data to illuminate to what degree a representation of the real world has been adequately achieved and how the model should be improved for the purpose of learning and scientific discovery. In this paper, we introduce approximate Bayesian computation (ABC) as a vehicle for diagnostic model evaluation. This statistical methodology relaxes the need for an explicit likelihood function in favor of one or multiple summary statistics rooted in hydrologic theory that together have a clearer and more compelling diagnostic power than some average measure of the size of the error residuals. Two illustrative case studies are used to demonstrate that ABC is relatively easy to implement, and readily employs signature-based indices to analyze and pinpoint which part of the model is malfunctioning and in need of further improvement.
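    The core of ABC rejection sampling, accepting parameter draws whose simulated summary statistic falls within a tolerance of the observed one, fits in a few lines. The Gaussian toy model below is illustrative and unrelated to the paper's hydrologic case studies:

```python
import random
import statistics

# Minimal ABC rejection sampler: draw a parameter from the prior, simulate
# a summary statistic, and keep the draw if it lands within `tol` of the
# observed summary. The toy model (unknown Gaussian mean) is illustrative.
def abc_rejection(observed_summary, simulate, prior_draw, tol,
                  n_draws=5000, seed=0):
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_draw(rng)
        if abs(simulate(theta, rng) - observed_summary) < tol:
            accepted.append(theta)
    return accepted

# "Observed" data: sample mean of 100 draws from N(2, 1).
true_mean = 2.0
rng0 = random.Random(1)
observed = statistics.mean(rng0.gauss(true_mean, 1.0) for _ in range(100))

post = abc_rejection(
    observed_summary=observed,
    simulate=lambda th, rng: statistics.mean(rng.gauss(th, 1.0) for _ in range(100)),
    prior_draw=lambda rng: rng.uniform(-10, 10),
    tol=0.2,
)
```

    In the diagnostic setting the paper advocates, the single summary statistic above would be replaced by several hydrologic signatures, each with its own tolerance, so that a rejected region of parameter space points at a specific structural deficiency.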

  17. Evaluation of the GEM-AQ simulations for the Air Quality Model Evaluation International Initiative (AQMEII)

    NASA Astrophysics Data System (ADS)

    Lobocki, Lech; Gawuc, Lech; Jefimow, Maciej; Kaminski, Jacek; Porebska, Magdalena; Struzewska, Joanna; Zdunek, Malgorzata

    2013-04-01

    A multiscale, on-line meteorological and air quality model GEM-AQ was used to simulate ozone and particulate matter over the European continent in 2006, as a part of the Air Quality Model Evaluation International Initiative (AQMEII). In contrast to the majority of models participating in the Phase I of AQMEII, the GEM-AQ configuration employed here utilized neither external meteorological fields nor lateral boundary conditions, owing to the global-extent and variable grid resolution of the model setup. We will present evaluation results for global model performance statistics calculated for the entire year and more detailed performance analysis of pollution episodes. Evaluation of meteorological parameters includes comparisons of model-predicted wind, temperature and cloudiness with hourly observations at surface weather stations, daily maxima, and comparison with upper-air soundings at selected sites. Frequency distribution of principal boundary layer parameters and its spatial structure will be presented. Air quality predictions are assessed in terms of ground-level daily mean ozone concentrations and its daily peak values, vertical structure as inferred from ozone soundings, and particulate matter daily mean concentrations at the surface.

  18. Evaluating climate models: Should we use weather or climate observations?

    SciTech Connect

    Oglesby, Robert J; Erickson III, David J

    2009-12-01

    Calling the numerical models that we use for simulations of climate change 'climate models' is a bit of a misnomer. These 'general circulation models' (GCMs, AKA global climate models) and their cousins the 'regional climate models' (RCMs) are actually physically-based weather simulators. That is, these models simulate, either globally or locally, daily weather patterns in response to some change in forcing or boundary condition. These simulated weather patterns are then aggregated into climate statistics, very much as we aggregate observations into 'real' climate statistics. Traditionally, the output of GCMs has been evaluated using climate statistics, as opposed to their ability to simulate realistic daily weather observations. At the coarse global scale this may be a reasonable approach; however, as RCMs downscale to increasingly higher resolutions, the conjunction between weather and climate becomes more problematic. We present results from a series of present-day climate simulations using the WRF ARW for domains that cover North America, much of Latin America, and South Asia. The basic domains are at a 12 km resolution, but several inner domains at 4 km have also been simulated. These include regions of complex topography in Mexico, Colombia, Peru, and Sri Lanka, as well as a region of low topography and fairly homogeneous land surface type (the U.S. Great Plains). Model evaluations are performed using standard climate analyses (e.g., reanalyses; NCDC data) but also using time series of daily station observations. Preliminary results suggest little difference in the assessment of long-term mean quantities, but the variability on seasonal and interannual timescales is better described. Furthermore, the value added by using daily weather observations as an evaluation tool increases with the model resolution.

  19. Towards the quantitative evaluation of visual attention models.

    PubMed

    Bylinskii, Z; DeGennaro, E M; Rajalingham, R; Ruda, H; Zhang, J; Tsotsos, J K

    2015-11-01

    Scores of visual attention models have been developed over the past several decades of research. Differences in implementation, assumptions, and evaluations have made comparison of these models very difficult. Taxonomies have been constructed in an attempt at the organization and classification of models, but are not sufficient at quantifying which classes of models are most capable of explaining available data. At the same time, a multitude of physiological and behavioral findings have been published, measuring various aspects of human and non-human primate visual attention. All of these elements highlight the need to integrate the computational models with the data by (1) operationalizing the definitions of visual attention tasks and (2) designing benchmark datasets to measure success on specific tasks, under these definitions. In this paper, we provide some examples of operationalizing and benchmarking different visual attention tasks, along with the relevant design considerations. PMID:25951756

  20. Distributed multi-criteria model evaluation and spatial association analysis

    NASA Astrophysics Data System (ADS)

    Scherer, Laura; Pfister, Stephan

    2015-04-01

    Model performance, if evaluated at all, is often communicated by a single indicator at an aggregated level; this captures neither the trade-offs between different indicators nor the inherent spatial heterogeneity of model efficiency. In this study, we simulated the water balance of the Mississippi watershed using the Soil and Water Assessment Tool (SWAT). The model was calibrated against monthly river discharge at 131 measurement stations; the time series were bisected to allow for subsequent validation at the same gauges. Furthermore, the model was validated against evapotranspiration, which was available as a continuous raster based on remote sensing. Model performance was evaluated for each of the 451 sub-watersheds using four criteria: 1) Nash-Sutcliffe efficiency (NSE), 2) percent bias (PBIAS), 3) root mean square error (RMSE) normalized to the standard deviation (RSR), and 4) a combined indicator of the squared correlation coefficient and the linear regression slope (bR2). Previous research indicates that aridity, very flat or steep relief, snowfall, and dams can all lead to poor model performance. In an attempt to explain spatial differences in model efficiency, the goodness of the model was therefore spatially compared to these four phenomena by means of a bivariate spatial association measure which combines Pearson's correlation coefficient and Moran's index of spatial autocorrelation. To assess the model performance of the Mississippi watershed as a whole, three different averages of the sub-watershed results were computed: 1) applying equal weights, 2) weighting by the mean observed river discharge, and 3) weighting by the upstream catchment area and the square root of the time series length. Ratings of model performance differed significantly in space and across efficiency criteria. 
The model performed much better in the humid Eastern region than in the arid Western region which was confirmed by the
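The four efficiency criteria named above have standard closed forms in the hydrology literature. A minimal sketch follows; the bR2 convention (slope-weighted R², after Krause et al. 2005) is one common choice and is an assumption here, not necessarily the exact variant used in this study:

```python
import numpy as np

def efficiency_criteria(obs, sim):
    """Compute NSE, PBIAS, RSR, and bR2 for paired observed/simulated series."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    resid = obs - sim
    # Nash-Sutcliffe efficiency: 1 is perfect, 0 means "no better than the mean"
    nse = 1.0 - np.sum(resid**2) / np.sum((obs - obs.mean())**2)
    # Percent bias: positive values indicate underestimation
    pbias = 100.0 * np.sum(resid) / np.sum(obs)
    # RMSE normalized by the standard deviation of the observations
    rsr = np.sqrt(np.mean(resid**2)) / obs.std()
    # bR2: R^2 weighted by the regression slope b of sim on obs
    b = np.polyfit(obs, sim, 1)[0]
    r2 = np.corrcoef(obs, sim)[0, 1] ** 2
    br2 = abs(b) * r2 if abs(b) <= 1 else r2 / abs(b)
    return {"NSE": nse, "PBIAS": pbias, "RSR": rsr, "bR2": br2}
```

A perfect simulation yields NSE = 1, PBIAS = 0, RSR = 0, and bR2 = 1; disagreement between the criteria on the same sub-watershed is exactly the kind of trade-off the multi-criteria evaluation is meant to expose.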

  1. Human Modeling Evaluations in Microgravity Workstation and Restraint Development

    NASA Technical Reports Server (NTRS)

    Whitmore, Mihriban; Chmielewski, Cynthia; Wheaton, Aneice; Hancock, Lorraine; Beierle, Jason; Bond, Robert L. (Technical Monitor)

    1999-01-01

    The International Space Station (ISS) will support long-term missions enabling astronauts to live, work, and conduct research in a microgravity environment. The dominant factor in space affecting the crew is "weightlessness", which creates a challenge for establishing workstation microgravity design requirements. Crewmembers will work at various workstations such as the Human Research Facility (HRF), the Microgravity Sciences Glovebox (MSG), and the Life Sciences Glovebox (LSG). Since the crew will spend a considerable amount of time at these workstations, it is critical that ergonomic design requirements be an integral part of the design and development effort. To achieve this goal, the Space Human Factors Laboratory in the Johnson Space Center Flight Crew Support Division was tasked to conduct integrated evaluations of workstations and associated crew restraints. A two-phase approach was used: 1) ground and microgravity evaluations of the physical dimensions and layout of the workstation components, and 2) human modeling analyses of the user interface. Computer-based human modeling evaluations were an important part of the approach throughout the design and development process. Human modeling during the conceptual design phase included crew reach and accessibility of individual equipment, as well as crew restraint needs. During later design phases, human modeling was used in conjunction with ground reviews and microgravity evaluations of the mock-ups to verify the human factors requirements. (Specific examples will be discussed.) This two-phase approach was the most efficient method for determining ergonomic design characteristics for workstations and restraints. The real-time evaluations provided hands-on implementation in a microgravity environment, although only a limited number of participants could be tested; the human modeling evaluations provided a more detailed analysis of the setup. 
The issues identified

  2. Evaluation of WRF-Urban Canopy Model over Seoul, Korea

    NASA Astrophysics Data System (ADS)

    Byon, J.; Seo, B.; Choi, Y.

    2008-12-01

    Fine-grid numerical models can be a useful tool for urban forecasting, providing input to air dispersion and pollution models. Urban forecasts may be produced with either a CFD model or a mesoscale model. The small domain of a CFD model limits the study of larger-scale forcing on the urban environment, whereas improvements in computing power and model physics now allow urban-scale prediction over a larger domain with a mesoscale model. A parameterization of urban effects has been implemented in the WRF mesoscale model developed at NCAR: NCAR coupled an urban canopy model (UCM) with the Noah land surface model in WRF to realistically represent the urban surface through high-resolution land-use and building information. This study focuses on the evaluation of WRF-UCM over the urban region of Seoul, South Korea during July 1-10 and October 6-12, 2007. WRF-UCM is run at 1 km resolution, with a 10 km WRF forecast from the Korea Meteorological Administration numerical weather prediction center providing the initial and boundary conditions. The urban land use is remapped using data from the Korean Ministry of Environment (KME), derived from 30-m-resolution Landsat imagery. The air temperature of the WRF model is lower than observed, while wind speed is overestimated in the model forecast. The temperature from WRF-UCM is higher than that from the standard WRF over Seoul: the coupled WRF-UCM captures the increase in urban heat caused by urban effects such as anthropogenic heating and buildings. The performance of the WRF-UCM over Seoul, South Korea will be presented at the conference; the results will contribute to the study of urban heat and air flow in the city.

  3. Evaluation of Black Carbon Estimations in Global Aerosol Models

    SciTech Connect

    Koch, D.; Schulz, M.; Kinne, Stefan; McNaughton, C. S.; Spackman, J. R.; Balkanski, Y.; Bauer, S.; Berntsen, T.; Bond, Tami C.; Boucher, Olivier; Chin, M.; Clarke, A. D.; De Luca, N.; Dentener, F.; Diehl, T.; Dubovik, O.; Easter, Richard C.; Fahey, D. W.; Feichter, J.; Fillmore, D.; Freitag, S.; Ghan, Steven J.; Ginoux, P.; Gong, S.; Horowitz, L.; Iversen, T.; Kirkevag, A.; Klimont, Z.; Kondo, Yutaka; Krol, M.; Liu, Xiaohong; Miller, R.; Montanaro, V.; Moteki, N.; Myhre, G.; Penner, J.; Perlwitz, Ja; Pitari, G.; Reddy, S.; Sahu, L.; Sakamoto, H.; Schuster, G.; Schwarz, J. P.; Seland, O.; Stier, P.; Takegawa, Nobuyuki; Takemura, T.; Textor, C.; van Aardenne, John; Zhao, Y.

    2009-11-27

    We evaluate black carbon (BC) model predictions from the AeroCom model intercomparison project by considering the diversity among year-2000 model simulations and comparing model predictions with available measurements. These model-measurement intercomparisons include BC surface and aircraft concentrations, aerosol absorption optical depth (AAOD) from AERONET and OMI retrievals, and BC column estimations based on AERONET. In regions other than Asia, most models are biased high compared to surface concentration measurements. However, compared with (column) AAOD or BC burden retrievals, the models are generally biased low. The average ratio of model to retrieved AAOD is less than 0.7 in South American and 0.6 in African biomass burning regions; both of these regions lack surface concentration measurements. In Asia the average model-to-observed ratio is 0.6 for AAOD and 0.5 for BC surface concentrations. Compared with aircraft measurements over the Americas at latitudes between 0 and 50N, the average model is a factor of 10 larger than observed, and most models exceed the measured BC standard deviation in the mid to upper troposphere. At higher latitudes the average model-to-aircraft BC ratio is 0.6, and models underestimate the observed BC loading in the lower and middle troposphere associated with springtime Arctic haze. The low model bias for AAOD but overestimation of surface and upper-atmospheric BC concentrations at lower latitudes suggests that most models underestimate BC absorption and should improve estimates of refractive index, particle size, and the optical effects of BC coating. Retrieval uncertainties and/or differences in model diagnostic treatment may also contribute to the model-measurement disparity. The largest AeroCom model diversity occurred in northern Eurasia and the remote Arctic, regions influenced by anthropogenic sources. Changing emissions, aging, removal, or optical properties within a single model generated a smaller change in model predictions than the

  4. Evaluation of reliability modeling tools for advanced fault tolerant systems

    NASA Technical Reports Server (NTRS)

    Baker, Robert; Scheper, Charlotte

    1986-01-01

    The Computer Aided Reliability Estimation (CARE III) and Automated Reliability Interactive Estimation System (ARIES 82) reliability tools were evaluated for application to advanced fault-tolerant aerospace systems. To determine reliability modeling requirements, the evaluation focused on the Draper Laboratories' Advanced Information Processing System (AIPS) architecture as an example architecture for fault-tolerant aerospace systems. Advantages and limitations were identified for each tool. The CARE III program was designed primarily for analyzing ultrareliable flight control systems; the ARIES 82 program's primary use was to support university research and teaching. Neither CARE III nor ARIES 82 was suited for determining the reliability of the complex nodal networks used to interconnect processing sites in the AIPS architecture. It was concluded that ARIES is not suitable for modeling advanced fault-tolerant systems. It was further concluded that, subject to some limitations (difficulty in modeling systems with unpowered spare modules, systems where equipment maintenance must be considered, systems where failure depends on the sequence in which faults occurred, and systems where multiple faults beyond double near-coincident faults must be considered), CARE III is best suited for evaluating the reliability of advanced fault-tolerant systems for air transport.

  5. Evaluation of Stratospheric Transport in New 3D Models Using the Global Modeling Initiative Grading Criteria

    NASA Technical Reports Server (NTRS)

    Strahan, Susan E.; Douglass, Anne R.; Einaudi, Franco (Technical Monitor)

    2001-01-01

    The Global Modeling Initiative (GMI) team developed objective criteria for model evaluation in order to identify the best representation of the stratosphere. This work created a method to quantitatively and objectively discriminate between different models. In the original GMI study, three different meteorological data sets were used to run an offline chemistry and transport model (CTM); observationally based grading criteria were derived and applied to these simulations, various aspects of stratospheric transport were evaluated, and grades were assigned. Here we report on the application of the GMI evaluation criteria to CTM simulations integrated with a new assimilated wind data set and a new general circulation model (GCM) wind data set. The Finite Volume Community Climate Model (FV-CCM) is a new GCM developed at Goddard which uses the NCAR CCM physics and the Lin and Rood advection scheme; the Finite Volume Data Assimilation System (FV-DAS) is a new data assimilation system which uses the FV-CCM as its core model. One-year CTM simulations at 2.5 degrees longitude by 2 degrees latitude resolution were run for each wind data set. We present the evaluation of temperature and annual transport cycles in the lower and middle stratosphere in the two new CTM simulations, including an evaluation of high-latitude transport which was not part of the original GMI criteria. Grades for the new simulations will be compared with those assigned during the original GMI evaluations, and areas of improvement will be identified.

  6. Evaluation of COMPASS ionospheric model in GNSS positioning

    NASA Astrophysics Data System (ADS)

    Wu, Xiaoli; Hu, Xiaogong; Wang, Gang; Zhong, Huijuan; Tang, Chengpan

    2013-03-01

    As important products of the GNSS navigation message, ionospheric delay model parameters are broadcast for single-frequency users to improve their positioning accuracy. GPS provides daily Klobuchar ionospheric model parameters based on a geomagnetic reference frame, while China's regional COMPASS satellite navigation system broadcasts an eight-parameter ionospheric model, the COMPASS Ionospheric Model (CIM), generated by processing data from continuous monitoring stations, with parameters updated every 2 h. To evaluate its performance, CIM predictions are compared to ionospheric delay measurements, along with GPS positioning accuracy comparisons. Analysis of real observed data indicates that CIM provides higher correction precision in middle-latitude regions, but relatively lower correction precision in low-latitude regions where the ionosphere is much more variable. CIM errors for some users show a common bias for incoming COMPASS signals from different satellites, and hence ionospheric model errors are partly absorbed into the receivers' clock error estimates. In addition, the CIM derived from the China regional monitoring network is further evaluated for global ionospheric corrections. Results show that in Northern Hemisphere areas including Asia, Europe and North America, the three-dimensional positioning accuracy using the CIM for ionospheric delay corrections is improved by 7.8%-35.3% compared to GPS single-frequency positioning using the Klobuchar model. However, positioning accuracy in the Southern Hemisphere is degraded, apparently due to the lack of monitoring stations there.
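For reference, the GPS Klobuchar model mentioned above maps eight broadcast coefficients (four α, four β) to a slant delay via a half-cosine fit to the daytime ionosphere. A compact sketch of the standard single-frequency algorithm follows (after IS-GPS-200; unit conventions vary across references, so treat the all-semicircles angle convention here as an assumption):

```python
import math

def klobuchar_delay(alpha, beta, lat, lon, elev, azim, gps_sec):
    """Slant ionospheric delay (seconds) from the 8-parameter Klobuchar model.

    lat, lon, elev, azim are in semicircles (1 semicircle = 180 degrees);
    alpha and beta are the four broadcast coefficients each.
    """
    # Earth-centred angle between user and ionospheric pierce point
    psi = 0.0137 / (elev + 0.11) - 0.022
    # Geodetic latitude of the pierce point, clamped to +/- 0.416 semicircles
    phi_i = max(-0.416, min(0.416, lat + psi * math.cos(azim * math.pi)))
    lam_i = lon + psi * math.sin(azim * math.pi) / math.cos(phi_i * math.pi)
    # Geomagnetic latitude of the pierce point
    phi_m = phi_i + 0.064 * math.cos((lam_i - 1.617) * math.pi)
    # Local time at the pierce point
    t = (43200.0 * lam_i + gps_sec) % 86400.0
    amp = max(0.0, sum(a * phi_m**n for n, a in enumerate(alpha)))
    per = max(72000.0, sum(b * phi_m**n for n, b in enumerate(beta)))
    x = 2.0 * math.pi * (t - 50400.0) / per
    # Obliquity factor for the slant path
    f = 1.0 + 16.0 * (0.53 - elev) ** 3
    if abs(x) < 1.57:  # daytime half-cosine, truncated cosine series
        return f * (5e-9 + amp * (1.0 - x**2 / 2.0 + x**4 / 24.0))
    return f * 5e-9  # night-time constant delay
```

With all coefficients zero the model reduces to the 5 ns night-time floor scaled by the obliquity factor; multiplying by the speed of light converts the delay to metres.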

  7. Evaluation of Model Recognition for Grammar-Based Automatic 3d Building Model Reconstruction

    NASA Astrophysics Data System (ADS)

    Yu, Qian; Helmholz, Petra; Belton, David

    2016-06-01

    In recent years, 3D city models have been in high demand by many public and private organisations, and demand continues to grow in terms of both quality and quantity. The quality evaluation of these 3D models is a relevant issue from both the scientific and the practical point of view. In this paper, we present a method for the quality evaluation of 3D building models reconstructed automatically from terrestrial laser scanning (TLS) data based on an attributed building grammar. The entire evaluation is performed in all three dimensions in terms of completeness and correctness of the reconstruction. Six quality measures are introduced and applied to four datasets of reconstructed building models in order to describe the quality of the automatic reconstruction; their validity as evaluation measures is also assessed.

  8. Fuzzy-based dynamic soil erosion model (FuDSEM): Modelling approach and preliminary evaluation

    NASA Astrophysics Data System (ADS)

    Cohen, Sagy; Svoray, Tal; Laronne, Jonathan B.; Alexandrov, Yulia

    2008-07-01

    Soil erosion models have advanced in recent years, becoming more physically based, with better representation of spatial patterns. Despite substantial progress, fundamental difficulties in catchment-scale applications have been widely reported. In this paper, we introduce a new catchment-scale soil erosion model. The model is designed for catchment interface and management purposes by: (1) using relatively common input data; (2) having a modular model structure; and (3) providing a clear and easily interpretable output analysis, producing possibility (potential) maps rather than quantitative erosion maps. The model (named FuDSEM, fuzzy-based dynamic soil erosion model) is spatially explicit, temporally dynamic, and formalized with fuzzy-logic equations. FuDSEM was initially evaluated on a small data-rich catchment and was found to be well calibrated. It was then implemented on a medium-sized heterogeneous catchment in central Israel. Initial evaluations of the medium-scale model predictions were conducted by: (1) comparing FuDSEM runoff predictions against measured runoff from five hydrological stations, and (2) a site-specific evaluation of the FuDSEM multi-year erosion prediction in two sub-catchments. FuDSEM was compared with two other erosion models (a temporally static version of itself and an established physically based model). The results show the advantages of FuDSEM over the other two models in evaluating the relative distribution of erosion, emphasizing the benefits of its temporally dynamic and fuzzy structure.

  9. Postural effects on intracranial pressure: modeling and clinical evaluation.

    PubMed

    Qvarlander, Sara; Sundström, Nina; Malm, Jan; Eklund, Anders

    2013-11-01

    The physiological effect of posture on intracranial pressure (ICP) is not well described. This study defined and evaluated three mathematical models describing the postural effects on ICP, designed to predict ICP at different head-up tilt angles from the supine ICP value. Model I was based on a hydrostatic indifference point for the cerebrospinal fluid (CSF) system, i.e., the existence of a point in the system where pressure is independent of body position. Models II and III were based on Davson's equation for CSF absorption, which relates ICP to venous pressure, and postulated that gravitational effects within the venous system are transferred to the CSF system. Model II assumed a fully communicating venous system, and model III assumed that collapse of the jugular veins at higher tilt angles creates two separate hydrostatic compartments. Evaluation of the models was based on ICP measurements at seven tilt angles (0-71°) in 27 normal pressure hydrocephalus patients. ICP decreased with tilt angle (ANOVA: P < 0.01). The reduction was well predicted by model III (ANOVA lack-of-fit: P = 0.65), which showed excellent fit against measured ICP. Neither model I nor II adequately described the reduction in ICP (ANOVA lack-of-fit: P < 0.01). Postural changes in ICP could not be predicted based on the currently accepted theory of a hydrostatic indifference point for the CSF system, but a new model combining Davson's equation for CSF absorption and hydrostatic gradients in a collapsible venous system performed well and can be useful in future research on gravity and CSF physiology. PMID:24052030
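The hydrostatic reasoning behind models of this kind can be illustrated with a toy calculation: head-up tilt lowers ICP by roughly ρ g L sin(θ), where L is the effective vertical distance to the governing reference point, and venous collapse above some angle switches the system to a shorter effective column. The sketch below is purely illustrative; the column lengths, collapse angle, and piecewise form are hypothetical placeholders, not the fitted parameters or exact equations of the paper's model III:

```python
import math

RHO_CSF = 1007.0   # kg/m^3, approximate CSF density
G = 9.81           # m/s^2
PA_PER_MMHG = 133.322

def icp_tilt(icp_supine_mmHg, angle_deg,
             L_comm=0.10, L_coll=0.05, collapse_deg=30.0):
    """Toy two-compartment prediction of ICP (mmHg) at a head-up tilt angle.

    Below `collapse_deg` the venous system communicates freely and ICP falls
    with the full hydrostatic column L_comm; above it, jugular collapse leaves
    only a shorter effective column L_coll. All lengths (metres) and the
    collapse angle are made-up illustrative values.
    """
    theta = math.radians(angle_deg)
    if angle_deg <= collapse_deg:
        drop_pa = RHO_CSF * G * L_comm * math.sin(theta)
    else:
        theta_c = math.radians(collapse_deg)
        # Full column up to the collapse angle, reduced column beyond it
        drop_pa = RHO_CSF * G * (L_comm * math.sin(theta_c)
                                 + L_coll * (math.sin(theta) - math.sin(theta_c)))
    return icp_supine_mmHg - drop_pa / PA_PER_MMHG
```

The kink at the collapse angle is what distinguishes a two-compartment prediction from the single-column alternatives the study rejects: beyond collapse, ICP keeps falling with tilt, but more slowly.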

  10. scoringRules - A software package for probabilistic model evaluation

    NASA Astrophysics Data System (ADS)

    Lerch, Sebastian; Jordan, Alexander; Krüger, Fabian

    2016-04-01

    Models in the geosciences are generally surrounded by uncertainty, and being able to quantify this uncertainty is key to good decision making. Accordingly, probabilistic forecasts in the form of predictive distributions have become popular over the last decades. With the proliferation of probabilistic models arises the need for decision-theoretically principled tools to evaluate the appropriateness of models and forecasts in a generalized way. Various scoring rules have been developed over the past decades to address this demand. Proper scoring rules are functions S(F,y) which evaluate the accuracy of a forecast distribution F, given that an outcome y was observed. As such, they allow comparison of alternative models, a crucial ability given the variety of theories, data sources and statistical specifications that are available in many situations. This poster presents the software package scoringRules for the statistical programming language R, which contains functions to compute popular scoring rules such as the continuous ranked probability score for a variety of distributions F that come up in applied work. Two main classes are parametric distributions such as normal, t, or gamma distributions, and distributions that are not known analytically but are indirectly described through a sample of simulation draws; Bayesian forecasts produced via Markov chain Monte Carlo, for example, take this form. Thereby, the scoringRules package provides a framework for generalized model evaluation that includes both Bayesian and classical parametric models. The scoringRules package aims to be a convenient dictionary-like reference for computing scoring rules. We offer state-of-the-art implementations of several known (but not routinely applied) formulas, and implement closed-form expressions that were previously unavailable. Whenever more than one implementation variant exists, we offer statistically principled default choices.
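As a concrete instance of the two forecast classes mentioned above, the continuous ranked probability score (CRPS) has a well-known closed form for a Gaussian forecast, and for simulation-based forecasts it can be estimated from a sample of draws. The Python sketch below mirrors (but does not use) the R package's functionality:

```python
import numpy as np
from scipy.stats import norm

def crps_normal(mu, sigma, y):
    """Closed-form CRPS of a N(mu, sigma^2) forecast for observation y."""
    z = (y - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1)
                    + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))

def crps_sample(draws, y):
    """Sample-based CRPS estimate: E|X - y| - 0.5 E|X - X'|.

    Suitable when F is only described by simulation draws (e.g. MCMC output).
    """
    draws = np.asarray(draws, float)
    return (np.mean(np.abs(draws - y))
            - 0.5 * np.mean(np.abs(draws[:, None] - draws[None, :])))
```

Lower scores are better, and because the CRPS is proper, a forecaster minimizes it in expectation by reporting their true predictive distribution; the sample estimator converges to the closed-form value as the number of draws grows.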

  11. Methods for evaluating the predictive accuracy of structural dynamic models

    NASA Technical Reports Server (NTRS)

    Hasselman, Timothy K.; Chrostowski, Jon D.

    1991-01-01

    Modeling uncertainty is defined in terms of the difference between predicted and measured eigenvalues and eigenvectors. Data compiled from 22 sets of analysis/test results were used to create statistical databases for large truss-type space structures and for both pretest and posttest models of conventional satellite-type space structures. Modeling uncertainty is propagated through the model to produce intervals of uncertainty on frequency response functions, in both amplitude and phase. This methodology was used successfully to evaluate the predictive accuracy of several structures, including the NASA CSI Evolutionary Structure tested at Langley Research Center. Test measurements for this structure were, for the most part, within the ± one-sigma intervals of predicted accuracy, demonstrating the validity of the methodology and computer code.

  12. An epidemic model to evaluate the homogeneous mixing assumption

    NASA Astrophysics Data System (ADS)

    Turnes, P. P.; Monteiro, L. H. A.

    2014-11-01

    Many epidemic models are written in terms of ordinary differential equations (ODEs). This approach relies on the homogeneous mixing assumption; that is, the topological structure of the contact network established by the individuals of the host population is taken to be irrelevant to predicting the spread of a pathogen in this population. Here, we propose an ODE-based epidemic model to study the propagation of contagious diseases conferring no immunity. The state variables of this model are the percentages of susceptible individuals, infectious individuals, and empty space. We show that this dynamical system can experience transcritical and Hopf bifurcations. We then employ the model to evaluate the validity of the homogeneous mixing assumption using real data on the transmission of gonorrhea, hepatitis C virus, human immunodeficiency virus, and obesity.
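A minimal sketch of an ODE system with this state space is given below: an SIS-type model (no immunity, so recovered individuals return to the susceptible class) with an explicit empty-space fraction fed by deaths and depleted by births. The rate structure and parameter values are illustrative inventions, not the authors' equations; the point is only that s + i + e = 1 is conserved by construction:

```python
from scipy.integrate import solve_ivp

# Illustrative rates (not the paper's values): birth into empty space,
# infection, recovery, natural death, disease-induced death
ALPHA, BETA, GAMMA, MU, DELTA = 0.8, 2.0, 0.4, 0.1, 0.3

def rhs(t, y):
    """s: susceptible, i: infectious, e: empty space; s + i + e = 1."""
    s, i, e = y
    ds = ALPHA * s * e - BETA * s * i + GAMMA * i - MU * s
    di = BETA * s * i - GAMMA * i - DELTA * i
    de = -ALPHA * s * e + MU * s + DELTA * i
    return [ds, di, de]   # the three terms cancel pairwise: sum is zero

sol = solve_ivp(rhs, (0.0, 200.0), [0.69, 0.01, 0.30])
s, i, e = sol.y[:, -1]
```

Because the right-hand sides sum to zero, s + i + e is a linear invariant and is preserved exactly by Runge-Kutta integration; with β above the recovery-plus-death rate, the disease-free state loses stability (the transcritical bifurcation the abstract mentions) and the infection persists.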

  13. obs4MIPS: Satellite Datasets for Model Evaluation

    NASA Astrophysics Data System (ADS)

    Ferraro, R.; Waliser, D. E.; Gleckler, P. J.

    2013-12-01

    This poster will review the current status of the obs4MIPs project, whose purpose is to provide a limited collection of well-established and documented datasets for comparison with Earth system models. These datasets have been reformatted to correspond with the CMIP5 model output requirements and include technical documentation specifically targeted at their use in model output evaluation. There are currently over 50 datasets containing observations that directly correspond to CMIP5 model output variables. We will review the rationale and requirements for obs4MIPs contributions, and provide summary information about the current obs4MIPs holdings on the Earth System Grid Federation. We will also provide some usage statistics, an update on governance for the obs4MIPs project, and plans for supporting CMIP6.

  14. The Application of a Model for the Evaluation of Educational Products.

    ERIC Educational Resources Information Center

    Bertram, Charles L.; And Others

    Papers presented at a symposium on "The Application of a Model for the Evaluation of Educational Products" are provided. The papers are: "A Model for the Evaluation of Educational Products" by Charles L. Bertram; "The Application of an Evaluation Model to a Preschool Intervention Program" by Brainard W. Hines; "An Evaluation Model for a Regional…

  15. Advances in the ARM Climate Research Facility for Model Evaluation

    NASA Astrophysics Data System (ADS)

    Mather, J. H., II; Voyles, J.

    2014-12-01

    The DOE Atmospheric Radiation Measurement (ARM) Climate Research Facility has been operating for over 20 years, providing observations of clouds, aerosols, and radiation to obtain a detailed description of the atmospheric state, support the study of atmospheric processes, and support the improvement of the parameterization of these processes in climate models. ARM facilities include long-term, multi-year deployments as well as shorter mobile-facility deployments in diverse climate regimes. With the goal of exploiting this increasingly diverse data set, we will explore relationships between cloud distributions and environmental parameters across climate regimes for the purpose of providing constraints on climate model simulations. We will also explore a new strategy ARM is developing to make use of high-resolution process models as another means to improve climate models. The Southern Great Plains and North Slope of Alaska sites are undergoing a reconfiguration to support the routine operation of high-resolution process models, with enhancements targeted at enabling both the initialization and the evaluation of these models; the goal is to use the combination of high-resolution observations and simulations to study key atmospheric processes. This type of observation/model integration is not new at ARM sites, but the creation of two new "supersites" will enable this observation/model synergy to be applied on a routine basis, further exploiting the long-term nature of the ARM observations.

  16. Evaluation of Improved Spacecraft Models for GLONASS Orbit Determination

    NASA Astrophysics Data System (ADS)

    Weiss, J. P.; Sibthorpe, A.; Harvey, N.; Bar-Sever, Y.; Kuang, D.

    2010-12-01

    High-fidelity spacecraft models become more important as orbit determination strategies achieve greater levels of precision and accuracy. In this presentation, we assess the impacts of new solar radiation pressure and attitude models on precise orbit determination (POD) for GLONASS spacecraft within JPL's GIPSY-OASIS software. A new solar radiation pressure model is developed by empirically fitting a Fourier expansion to the solar pressure forces acting on the spacecraft X, Y, and Z components, using one year of recent orbit data. Compared to a basic “box-wing” solar pressure model, the median 24-hour orbit prediction accuracy for one month of independent test data improves by 43%. We additionally implement an updated yaw attitude model for eclipse periods. We evaluate the impacts of both models on post-processed POD solutions spanning six months, considering metrics such as internal orbit and clock overlaps as well as comparisons to independent solutions. Improved yaw attitude modeling reduces the dependence of these metrics on the "solar elevation" angle. The updated solar pressure model improves orbit overlap statistics by several millimetres in the median sense and by centimetres in the max sense (1D). Orbit differences relative to the IGS combined solution are at or below the 5 cm level (1D RMS).
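Empirically fitting a truncated Fourier expansion to per-axis forces is a linear least-squares problem: build a design matrix of harmonics in some orbit angle and solve for the coefficients. The sketch below is generic; the angle variable, expansion order, and synthetic data are stand-ins, not details of the actual GLONASS model:

```python
import numpy as np

def fit_fourier(angle, force, order=3):
    """Least-squares fit of force(u) ~ a0 + sum_k [a_k cos(ku) + b_k sin(ku)].

    Returns the coefficient vector [a0, a1, b1, a2, b2, ...].
    """
    cols = [np.ones_like(angle)]
    for k in range(1, order + 1):
        cols += [np.cos(k * angle), np.sin(k * angle)]
    A = np.column_stack(cols)          # design matrix of harmonics
    coef, *_ = np.linalg.lstsq(A, force, rcond=None)
    return coef

# Synthetic check: recover known coefficients from noisy samples
rng = np.random.default_rng(1)
u = np.linspace(0.0, 2.0 * np.pi, 500)
truth = 1.0 + 0.5 * np.cos(u) - 0.2 * np.sin(2.0 * u)
coef = fit_fourier(u, truth + rng.normal(0.0, 0.01, u.size))
```

In practice one such fit would be performed per spacecraft axis, and the resulting coefficient sets replace the analytic box-wing accelerations in the force model.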

  17. Evaluating sediment chemistry and toxicity data using logistic regression modeling

    USGS Publications Warehouse

    Field, L.J.; MacDonald, D.D.; Norton, S.B.; Severn, C.G.; Ingersoll, C.G.

    1999-01-01

    This paper describes the use of logistic-regression modeling for evaluating matched sediment chemistry and toxicity data. Contaminant-specific logistic models were used to estimate the percentage of samples expected to be toxic at a given concentration. These models enable users to select the probability of effects of concern corresponding to their specific assessment or management objective, or to estimate the probability of observing specific biological effects at any contaminant concentration. The models were developed using a large database (n = 2,524) of matched saltwater sediment chemistry and toxicity data for field-collected samples compiled from a number of different sources and geographic areas. The models for seven chemicals selected as examples showed a wide range in goodness of fit, reflecting high variability in toxicity at low concentrations and limited data on toxicity at higher concentrations for some chemicals. The models for individual test endpoints (e.g., amphipod mortality) provided a better fit to the data than the models based on all endpoints combined. A comparison of the relative sensitivity of two amphipod species to specific contaminants illustrated an important application of the logistic model approach.
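Contaminant-specific models of this type take the familiar logistic form p(toxic) = 1 / (1 + exp(-(b0 + b1 · log10 C))). A sketch of fitting one by maximum likelihood follows; the concentrations, coefficients, and sample size are synthetic inventions, not the paper's data:

```python
import numpy as np
from scipy.optimize import minimize

def fit_logistic(logc, toxic):
    """Fit p(toxic) = 1/(1 + exp(-(b0 + b1*logc))) by maximum likelihood."""
    def nll(b):
        z = b[0] + b[1] * logc
        # Bernoulli negative log-likelihood, written in a numerically stable form
        return np.sum(np.logaddexp(0.0, z)) - np.sum(z * toxic)
    return minimize(nll, x0=np.zeros(2), method="BFGS").x

def p_toxic(b, logc):
    return 1.0 / (1.0 + np.exp(-(b[0] + b[1] * logc)))

# Synthetic example: toxicity probability rising with log10 concentration
rng = np.random.default_rng(2)
logc = rng.uniform(0.0, 3.0, 400)
p_true = 1.0 / (1.0 + np.exp(-(-3.0 + 2.0 * logc)))
toxic = (rng.random(400) < p_true).astype(float)
b = fit_logistic(logc, toxic)
```

Inverting the fitted curve gives the concentration associated with any chosen probability of effects, which is exactly the "select the probability of concern" use described in the abstract.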

  18. The establishment of the evaluation model for pupil's lunch suppliers

    NASA Astrophysics Data System (ADS)

    Lo, Chih-Yao; Hou, Cheng-I.; Ma, Rosa

    2011-10-01

    The aim of this study is to establish an evaluation model for the government-controlled private suppliers of school lunches in the public middle and primary schools of Miao-Li County. After a literature search and the integration of opinions from anonymous experts via the Modified Delphi Method, grading forms from relevant schools inside and outside Miao-Li County are first collected and the hierarchical structure for the evaluation is constructed. Data analysis is then performed on the retrieved questionnaires, which are designed in accordance with the Analytic Hierarchy Process (AHP). Finally, an evaluation form for the government-controlled private suppliers is constructed and presented, in the hope of benefiting the personnel in charge of school meal purchasing.
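The AHP step reduces to extracting priority weights from pairwise comparison matrices and checking their consistency. A generic sketch follows; the three criteria and the judgment matrix are invented for illustration and are not the study's actual hierarchy:

```python
import numpy as np

# Saaty's random consistency index for matrices of size n = 1..9
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_weights(M):
    """Priority weights from a pairwise comparison matrix M.

    Weights are the normalized principal eigenvector; the consistency
    ratio CR (judgments conventionally acceptable if CR < 0.1) is also
    returned.
    """
    M = np.asarray(M, float)
    vals, vecs = np.linalg.eig(M)
    k = np.argmax(vals.real)                  # principal eigenvalue
    w = np.abs(vecs[:, k].real)
    w /= w.sum()
    n = M.shape[0]
    ci = (vals[k].real - n) / (n - 1)         # consistency index
    cr = ci / RI[n] if RI[n] else 0.0
    return w, cr

# Hypothetical 3-criterion judgment matrix (e.g. food quality vs. hygiene
# vs. price): entry M[i][j] is how much criterion i outranks criterion j
M = [[1, 2, 4],
     [1/2, 1, 2],
     [1/4, 1/2, 1]]
w, cr = ahp_weights(M)
```

For this perfectly consistent matrix the weights come out in the ratio 4:2:1 and CR is zero; with real questionnaire data, rows whose CR exceeds 0.1 would typically be sent back to the expert for revision.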

  19. EVALUATION OF HYDROLOGIC MODELS IN THE DESIGN OF STABLE LANDFILL COVERS

    EPA Science Inventory

    The study evaluates the utility of two hydrologic models in designing stable landfill cover systems. The models evaluated were HELP (Hydrologic Evaluation of Landfill Performance) and CREAMS (Chemicals, Runoff, and Erosion from Agricultural Management Systems). Studies of paramet...

  20. [Evaluation of national prevention campaigns against AIDS: analysis model].

    PubMed

    Hausser, D; Lehmann, P; Dubois, F; Gutzwiller, F

    1987-01-01

    The evaluation of the "Stop-Aids" campaign is based upon a model of behaviour modification (McAlister) which includes the communication theory of McGuire and the social learning theory of Bandura. Using this model, it is possible to define key variables that are used to measure the impact of the campaign. Process evaluation allows identification of multipliers that reinforce and confirm the initial message of prevention (source) thereby encouraging behaviour modifications that are likely to reduce the transmission of HIV (condom use, no sharing of injection material, monogamous relationship, etc.). Twelve studies performed by seven teams in the three linguistic areas contribute to the project. A synthesis of these results will be performed by the IUMSP. PMID:3687209

  1. A Model Evaluation Data Set for the Tropical ARM Sites

    DOE Data Explorer

    Jakob, Christian

    2008-01-15

    This data set has been derived from various ARM and external data sources with the main aim of providing modelers easy access to quality controlled data for model evaluation. The data set contains highly aggregated (in time) data from a number of sources at the tropical ARM sites at Manus and Nauru. It spans the years of 1999 and 2000. The data set contains information on downward surface radiation; surface meteorology, including precipitation; atmospheric water vapor and cloud liquid water content; hydrometeor cover as a function of height; and cloud cover, cloud optical thickness and cloud top pressure information provided by the International Satellite Cloud Climatology Project (ISCCP).

  2. Evaluation of Differentiation Strategy in Shipping Enterprises with Simulation Model

    NASA Astrophysics Data System (ADS)

    Vaxevanou, Anthi Z.; Ferfeli, Maria V.; Damianos, Sakas P.

    2009-08-01

    This inquiry investigates the circumstances that prevail in European shipping enterprises, with special reference to Greek ones, in order to explore the potential implementation of strategies for creating a unique competitive advantage [1]. The shipping sector is composed of enterprises active mainly in three areas: passenger, commercial and naval. The main goal is to create a dynamic simulation model which, with reference to the STAIR strategic model, will evaluate the differentiation strategy choices that some of the shipping enterprises have made.

  3. [IDEAL as a model for the evaluation of implants].

    PubMed

    Rovers, M M; Tax, C

    2015-01-01

    - Medical devices have to be tested for safety before they can be brought onto the market; effectiveness does not have to be demonstrated.- The 'IDEAL model' is in place for the development and evaluation of new surgical interventions and procedures; 'IDEAL' stands for the 5 phases of the model: Idea, Development, Exploration, Assessment and Long-term study. The model is based on the assumption that innovation and evaluation should be interwoven from concept through to the, preferably randomised, clinical trial phase.- Prospective registration of new interventions from the pre-clinical development phase onwards can prevent waste of money and effort, as unsuccessful 'new' registered interventions will not have to be developed again in the same manner.- The IDEAL model, with a few adaptations, also seems to be a practically useful and suitable model for new medical devices. It will allow patients efficient access to new interventions and medical devices for which the safety and effectiveness have been sufficiently clinically established. PMID:27007931

  4. Evaluation of predictions in the CASP10 model refinement category

    PubMed Central

    Nugent, Timothy; Cozzetto, Domenico; Jones, David T

    2014-01-01

    Here we report on the assessment results of the third experiment to evaluate the state of the art in protein model refinement, where participants were invited to improve the accuracy of initial protein models for 27 targets. Using an array of complementary evaluation measures, we find that five groups performed better than the naïve (null) method—a marked improvement over CASP9, although only three were significantly better. The leading groups also demonstrated the ability to consistently improve both backbone and side chain positioning, while other groups reliably enhanced other aspects of protein physicality. The top-ranked group succeeded in improving the backbone conformation in almost 90% of targets, suggesting a strategy that for the first time in CASP refinement is successful in a clear majority of cases. A number of issues remain unsolved: the majority of groups still fail to improve the quality of the starting models; even successful groups are only able to make modest improvements; and no prediction is more similar to the native structure than to the starting model. Successful refinement attempts also often go unrecognized, as suggested by the relatively larger improvements when predictions not submitted as model 1 are also considered. Proteins 2014; 82(Suppl 2):98–111. PMID:23900810

  5. Evaluation of 3D-Jury on CASP7 models

    PubMed Central

    Kaján, László; Rychlewski, Leszek

    2007-01-01

    Background 3D-Jury, the structure prediction consensus method publicly available in the Meta Server, was evaluated using models gathered in the 7th round of the Critical Assessment of Techniques for Protein Structure Prediction (CASP7). 3D-Jury is an automated expert process that generates protein structure meta-predictions from sets of models obtained from partner servers. Results The performance of 3D-Jury was analysed in three respects. First, we examined the correlation between the 3D-Jury score and a model quality measure: the number of correctly predicted residues. The 3D-Jury score was shown to correlate significantly with the number of correctly predicted residues; the correlation is good enough to be used for prediction. 3D-Jury was also found to improve upon the competing servers' choice of the best structure model in most cases. The value of the 3D-Jury score as a generic reliability measure was also examined. We found that the 3D-Jury score separates bad models from good models better than the reliability score of the original server in 27 cases and falls short of it in only 5 cases out of a total of 38. We report the release of a new Meta Server feature: instant 3D-Jury scoring of uploaded user models. Conclusion The 3D-Jury score continues to be a good indicator of structural model quality. It also provides a generic reliability score, especially important for models that were not assigned such by the original server. Individual structure modellers can also benefit from the 3D-Jury scoring system by testing their models in the new instant scoring feature available in the Meta Server. PMID:17711571

  6. Modeling Healthcare Processes Using Commitments: An Empirical Evaluation.

    PubMed

    Telang, Pankaj R; Kalia, Anup K; Singh, Munindar P

    2015-01-01

    The two primary objectives of this paper are: (a) to demonstrate how Comma, a business modeling methodology based on commitments, can be applied in healthcare process modeling, and (b) to evaluate the effectiveness of such an approach in producing healthcare process models. We apply the Comma approach to a breast cancer diagnosis process adapted from an HHS committee report, and present the results of an empirical study that compares Comma with a traditional approach based on the HL7 Messaging Standard (Traditional-HL7). Our empirical study involved 47 subjects and two phases. In the first phase, we partitioned the subjects into two approximately equal groups. We gave each group the same requirements based on a process scenario for breast cancer diagnosis. Members of one group first applied Traditional-HL7 and then Comma, whereas members of the second group first applied Comma and then Traditional-HL7, each on the above-mentioned requirements. Thus, each subject produced two models, each model being a set of UML Sequence Diagrams. In the second phase, we repartitioned the subjects into two groups with approximately equal distributions from both original groups. We developed exemplar Traditional-HL7 and Comma models; we gave one repartitioned group our Traditional-HL7 model and the other repartitioned group our Comma model. We provided the same changed set of requirements to all subjects and asked them to modify the provided exemplar model to satisfy the new requirements. We assessed solutions produced by subjects in both phases with respect to measures of flexibility, time, difficulty, objective quality, and subjective quality. Our study found that Comma is superior to Traditional-HL7 in flexibility and objective quality as validated via Student's t-test at the 10% level of significance. Comma is a promising new approach for modeling healthcare processes. Further gains could be made through improved tooling and enhanced training of modeling personnel. PMID:26539985
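
    The study's headline comparison rests on Student's t-test at the 10% significance level. As an illustrative sketch only (not the authors' code), a paired design such as the two-phase study above, where each subject yields a numeric quality score under both approaches, could be tested as follows; the score values are hypothetical:

```python
import math

def paired_t_statistic(xs, ys):
    """Paired t statistic for matched samples, e.g. each subject's quality
    score under two modeling approaches. Assumes the differences are not
    all identical (non-zero sample variance)."""
    assert len(xs) == len(ys) and len(xs) > 1
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)
```

The resulting statistic would then be compared against the t distribution with n - 1 degrees of freedom at the chosen significance level.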

  7. Modeling Healthcare Processes Using Commitments: An Empirical Evaluation

    PubMed Central

    2015-01-01

    The two primary objectives of this paper are: (a) to demonstrate how Comma, a business modeling methodology based on commitments, can be applied in healthcare process modeling, and (b) to evaluate the effectiveness of such an approach in producing healthcare process models. We apply the Comma approach to a breast cancer diagnosis process adapted from an HHS committee report, and present the results of an empirical study that compares Comma with a traditional approach based on the HL7 Messaging Standard (Traditional-HL7). Our empirical study involved 47 subjects and two phases. In the first phase, we partitioned the subjects into two approximately equal groups. We gave each group the same requirements based on a process scenario for breast cancer diagnosis. Members of one group first applied Traditional-HL7 and then Comma, whereas members of the second group first applied Comma and then Traditional-HL7, each on the above-mentioned requirements. Thus, each subject produced two models, each model being a set of UML Sequence Diagrams. In the second phase, we repartitioned the subjects into two groups with approximately equal distributions from both original groups. We developed exemplar Traditional-HL7 and Comma models; we gave one repartitioned group our Traditional-HL7 model and the other repartitioned group our Comma model. We provided the same changed set of requirements to all subjects and asked them to modify the provided exemplar model to satisfy the new requirements. We assessed solutions produced by subjects in both phases with respect to measures of flexibility, time, difficulty, objective quality, and subjective quality. Our study found that Comma is superior to Traditional-HL7 in flexibility and objective quality as validated via Student's t-test at the 10% level of significance. Comma is a promising new approach for modeling healthcare processes. Further gains could be made through improved tooling and enhanced training of modeling personnel. PMID

  8. Evaluating Arctic warming mechanisms in CMIP5 models

    NASA Astrophysics Data System (ADS)

    Franzke, Christian L. E.; Lee, Sukyoung; Feldstein, Steven B.

    2016-07-01

    Arctic warming is one of the most striking signals of global warming. The Arctic is one of the fastest-warming regions on Earth and thus constitutes a good test bed for evaluating the ability of climate models to reproduce the physics and dynamics involved in Arctic warming. Different physical and dynamical mechanisms have been proposed to explain Arctic amplification, including the surface albedo feedback and poleward sensible and latent heat transport. During the winter season, when Arctic amplification is most pronounced, the first mechanism relies on an enhancement in upward surface heat flux, while the second does not. Downward infrared radiation (IR) has been proposed to play a role in these mechanisms to varying degrees. Here, we show that the current generation of CMIP5 climate models all reproduce Arctic warming, with high pattern correlations (typically greater than 0.9) between the surface air temperature (SAT) trend and the downward IR trend. However, we find two groups of CMIP5 models: one with small pattern correlations between the Arctic SAT trend and the surface vertical heat flux trend (Group 1), and one with large correlations between the same two variables (Group 2). The Group 1 models exhibit higher pattern correlations between the Arctic SAT and 500 hPa geopotential height trends than do the Group 2 models. These findings suggest that Arctic warming in Group 1 models is more closely related to changes in the large-scale atmospheric circulation, whereas in Group 2 the albedo feedback plays a more important role. Interestingly, while Group 1 models have a warm or weak bias in their Arctic SAT, Group 2 models show large cold biases. This stark difference in model bias leads us to hypothesize that, for a given model, the dominant Arctic warming mechanism and trend may depend on the bias of the model's mean state.
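
    The pattern correlations reported above are centered spatial correlations between trend maps; on a latitude-longitude grid they are normally area-weighted so that densely spaced polar grid points do not dominate. A minimal sketch, assuming the two trend fields have been flattened to equal-length lists with matching cos-latitude weights:

```python
import math

def pattern_correlation(a, b, weights):
    """Centered spatial correlation between two flattened trend fields,
    weighted (e.g. by cos(latitude)) so high-latitude points do not
    dominate the statistic."""
    sw = sum(weights)
    ma = sum(w * x for w, x in zip(weights, a)) / sw
    mb = sum(w * y for w, y in zip(weights, b)) / sw
    cov = sum(w * (x - ma) * (y - mb) for w, x, y in zip(weights, a, b)) / sw
    va = sum(w * (x - ma) ** 2 for w, x in zip(weights, a)) / sw
    vb = sum(w * (y - mb) ** 2 for w, y in zip(weights, b)) / sw
    return cov / math.sqrt(va * vb)
```

A value near 1, as between the SAT and downward-IR trends in the abstract, indicates that the two fields share nearly the same spatial structure.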

  9. Fuzzy Present Value Analysis Model for Evaluating Information System Projects

    SciTech Connect

    Omitaomu, Olufemi A; Badiru, Adedeji B

    2007-01-01

    In this article, the economic evaluation of information system projects using present value is analyzed based on triangular fuzzy numbers. Information system projects usually involve numerous uncertainties and several conditions of risk that make their economic evaluation a challenging task. Each year, several information system projects are cancelled before completion as a result of budget overruns, at a cost of several billion dollars to industry. Although engineering economic analysis offers tools and techniques for evaluating risky projects, these tools are not enough to place information system projects on a safe budget/selection track. There is a need for an integrative economic analysis model that accounts for the uncertainties in estimating the costs, benefits, and useful lives of uncertain and risky projects. In this study, we propose an approximate method of computing project present value using the concept of fuzzy modeling, with special reference to information system projects. The proposed model has the potential to enhance the project selection process by capturing a better economic picture of the project alternatives. The proposed methodology can also be used for other real-life projects with a high degree of uncertainty and risk.
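
    The abstract does not reproduce the authors' formulas, so the following is only a generic sketch of triangular-fuzzy discounting: the pessimistic cash estimate is discounted at the highest plausible rate and the optimistic estimate at the lowest. The function names and the crisp initial cost are illustrative assumptions, not taken from the paper:

```python
def fuzzy_pv(cash_flow, rate, year):
    """Present value of a triangular fuzzy cash flow at a triangular fuzzy
    discount rate; both arguments are (low, mode, high) triples."""
    a, b, c = cash_flow   # pessimistic, most likely, optimistic cash flow
    ra, rb, rc = rate     # lowest, most likely, highest discount rate
    return (a / (1 + rc) ** year,
            b / (1 + rb) ** year,
            c / (1 + ra) ** year)

def fuzzy_npv(cash_flows, rate, initial_cost):
    """Fuzzy NPV: component-wise sum of discounted yearly fuzzy cash flows
    minus a crisp (non-fuzzy) initial cost."""
    pvs = [fuzzy_pv(cf, rate, t) for t, cf in enumerate(cash_flows, start=1)]
    return tuple(sum(pv[i] for pv in pvs) - initial_cost for i in range(3))
```

The resulting (low, mode, high) NPV triple keeps the project's uncertainty visible instead of collapsing it to a single point estimate.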

  10. Development, Implementation, and Evaluation of a Model Program for Evaluating School Principals. Maxi-II Practicum Report.

    ERIC Educational Resources Information Center

    Seal, Edgar Z.

    The objective of this project was to improve the Orange, California Unified School District's system for evaluating school principals through the development, implementation, and evaluation of a model program. The intended result of improved evaluation through the model was to identify those management skills which each participating principal…

  11. A Reusable Framework for Regional Climate Model Evaluation

    NASA Astrophysics Data System (ADS)

    Hart, A. F.; Goodale, C. E.; Mattmann, C. A.; Lean, P.; Kim, J.; Zimdars, P.; Waliser, D. E.; Crichton, D. J.

    2011-12-01

    Climate observations are currently obtained through a diverse network of sensors and platforms that include space-based observatories, airborne and seaborne platforms, and distributed, networked, ground-based instruments. These global observational measurements are critical inputs to the efforts of the climate modeling community and can provide a corpus of data for use in analysis and validation of climate models. The Regional Climate Model Evaluation System (RCMES) is an effort currently being undertaken to address the challenges of integrating this vast array of observational climate data into a coherent resource suitable for performing model analysis at the regional level. Developed through a collaboration between the NASA Jet Propulsion Laboratory (JPL) and the UCLA Joint Institute for Regional Earth System Science and Engineering (JIFRESSE), the RCMES uses existing open source technologies (MySQL, Apache Hadoop, and Apache OODT), to construct a scalable, parametric, geospatial data store that incorporates decades of observational data from a variety of NASA Earth science missions, as well as other sources into a consistently annotated, highly available scientific resource. By eliminating arbitrary partitions in the data (individual file boundaries, differing file formats, etc), and instead treating each individual observational measurement as a unique, geospatially referenced data point, the RCMES is capable of transforming large, heterogeneous collections of disparate observational data into a unified resource suitable for comparison to climate model output. This facility is further enhanced by the availability of a model evaluation toolkit which consists of a set of Python libraries, a RESTful web service layer, and a browser-based graphical user interface that allows for orchestration of model-to-data comparisons by composing them visually through web forms. This combination of tools and interfaces dramatically simplifies the process of interacting with and

  12. Using water isotopes in the evaluation of land surface models

    NASA Astrophysics Data System (ADS)

    Guglielmo, Francesca; Risi, Camille; Ottlé, Catherine; Bastrikov, Vladislav; Valdayskikh, Victor; Cattani, Olivier; Jouzel, Jean; Gribanov, Konstantin; Nekrasova, Olga; Zacharov, Vyacheslav; Ogée, Jérôme; Wingate, Lisa; Raz-Yaseef, Naama

    2013-04-01

    Several studies show that uncertainties in the representation of land surface processes contribute significantly to the spread in projections for the hydrological cycle. Improvements in the evaluation of land surface models would therefore translate into more reliable predictions of future changes. The isotopic composition of water is affected by phase transitions and, for this reason, is a good tracer for the hydrological cycle. Particularly relevant for the assessment of land surface processes is the fact that bare soil evaporation and transpiration bear different isotopic signatures. Water isotopic measurement could thus be employed in the evaluation of the land surface hydrological budget. With this objective, isotopes have been implemented in the most recent version of the land surface model ORCHIDEE. This model has undergone considerable development in the past few years. In particular, a newly discretised (11 layers) hydrology aims at a more realistic representation of the soil water budget. In addition, biogeophysical processes, as, for instance, the dynamics of permafrost and of its interaction with snow and vegetation, have been included. This model version will allow us to better resolve vertical profiles of soil water isotopic composition and to more realistically simulate the land surface hydrological and isotopic budget in a broader range of climate zones. Model results have been evaluated against temperature profiles and isotopes measurements in soil and stem water at sites located in semi-arid (Yatir), temperate (Le Bray) and boreal (Labytnangi) regions. Seasonal cycles are reasonably well reproduced. Furthermore, a sensitivity analysis investigates to what extent water isotopic measurements in soil water can help constrain the representation of land surface processes, with a focus on the partitioning between evaporation and transpiration. In turn, improvements in the description of this partitioning may help reduce the uncertainties in the land

  13. Evaluation Model of Life Loss Due to Dam Failure

    NASA Astrophysics Data System (ADS)

    Huang, Dongjing

    2016-04-01

    Dam failure poses a serious threat to human life; however, there is still a lack of systematic research in China on the life loss due to dam failure. From the perspective of protecting human life, an evaluation model for life loss caused by dam failure is put forward. The model is built in three progressive steps. Twenty dam-failure cases in China are chosen as the basic data, considering the geographical location and construction time of the dams as well as the various conditions of dam failure. Twelve impact factors of life loss are then selected: severity of the flood, population at risk, understanding of dam failure, warning time, evacuation conditions, number of damaged buildings, water temperature, reservoir storage, dam height, dam type, break time and distance from the flooded area to the dam. Principal component analysis reduces these to four principal components: a flood-character component, a warning-system component, a human-character component and a space-time component. Combining multivariate nonlinear regression with ten-fold validation, the evaluation model for life loss is finally established. Its results are closer to the true values and fit better than those of the RESCDAM method and M. Peng's method. The proposed model can be applied to evaluate life loss and its rate under various dam-failure conditions in China, and it provides a reliable cause-analysis and prediction approach for reducing the risk of life loss.
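
    The reduction of twelve impact factors to four principal components rests on standard PCA machinery. As a self-contained illustration of the idea (not the authors' implementation), the leading component of a small standardized data set can be extracted by power iteration on the correlation matrix:

```python
import math

def first_principal_component(rows, iters=100):
    """Leading principal component of a data matrix (list of rows) via
    power iteration on the correlation matrix. Assumes no column is
    constant, so every column can be standardized."""
    n, p = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(p)]
    stds = [math.sqrt(sum((r[j] - means[j]) ** 2 for r in rows) / (n - 1))
            for j in range(p)]
    z = [[(r[j] - means[j]) / stds[j] for j in range(p)] for r in rows]
    # covariance of standardized data = correlation matrix
    c = [[sum(z[i][a] * z[i][b] for i in range(n)) / (n - 1)
          for b in range(p)] for a in range(p)]
    v = [1.0] + [0.0] * (p - 1)
    for _ in range(iters):
        w = [sum(c[a][b] * v[b] for b in range(p)) for a in range(p)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v
```

Further components would be obtained the same way after deflating the matrix; a regression on the component scores then plays the role of the abstract's multivariate model.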

  14. Evaluating cloud tuning in a climate model with satellite observations

    NASA Astrophysics Data System (ADS)

    Suzuki, K.; Golaz, J.; Stephens, G. L.

    2013-12-01

    Climate model representations of the aerosol indirect effect depend largely on how uncertain parameters in the models are tuned. The threshold particle radius triggering warm rain formation, among others, is one particular 'tunable knob' that severely affects the indirect radiative forcing. Alternate values of this parameter within its uncertainty range have been shown to produce severely different historical temperature trends, owing to differing magnitudes of aerosol indirect forcing. This study examines the validity of three different threshold radii assumed in GFDL CM3 against satellite observations, in an attempt to constrain which value is most plausible. For this purpose, methodologies developed to analyze multi-sensor satellite observations are employed to construct statistics that fingerprint process-level signatures of warm rain formation. The statistics are then used as observation-based metrics and compared between the model and satellite observations to examine how the alternate model configurations lead to different microphysical characteristics and how they compare to satellite observations. The results show that the threshold radius that best reproduces satellite-observed microphysical statistics leads to the historical temperature trend that matches the observed trend worst, and vice versa. This inconsistency between the 'bottom-up' process-based constraint and the 'top-down' temperature-trend constraint implies the presence of compensating errors in the model. This study underscores the importance of observation-based, process-level constraints on model microphysics uncertainties for more reliable predictions of aerosol indirect forcing.

  15. A quantitative evaluation of models for Aegean crustal deformation

    NASA Astrophysics Data System (ADS)

    Nyst, M.; Thatcher, W.

    2003-04-01

    Modeling studies of eastern Mediterranean tectonics show that Aegean deformation is mainly determined by WSW directed expulsion of Anatolia and SW directed extension due to roll-back of African lithosphere along the Hellenic trench. How motion is transferred across the Aegean remains a subject of debate. The two most widely used hypotheses for Aegean tectonics assert fundamentally different mechanisms. The first model describes deformation as a result of opposing rotations of two rigid microplates separated by a zone of extension. In the second model most motion is accommodated by shear on a series of dextral faults and extension on graben systems. These models make different quantitative predictions for the crustal deformation field that can be tested by a new, spatially dense GPS velocity data set. To convert the GPS data into crustal deformation parameters we use different methods to model complementary aspects of crustal deformation. We parameterize the main fault and plate boundary structures of both models and produce representations for the crustal deformation field that range from purely rigid rotations of microplates, via interacting, elastically deforming blocks separated by crustal faults to a continuous velocity gradient field. Critical evaluation of these models indicates strengths and limitations of each and suggests new measurements for further refining understanding of present-day Aegean tectonics.

  16. Evaluation of Plaid Models in Biclustering of Gene Expression Data

    PubMed Central

    Alavi Majd, Hamid; Shahsavari, Soodeh; Baghestani, Ahmad Reza; Tabatabaei, Seyyed Mohammad; Khadem Bashi, Naghme; Rezaei Tavirani, Mostafa; Hamidpour, Mohsen

    2016-01-01

    Background. Biclustering algorithms have been proposed for the analysis of high-dimensional gene expression data. Among them, the plaid model is arguably one of the most flexible biclustering models to date. Objective. The main goal of this study is to evaluate plaid models on both simulated data and real gene expression datasets. Methods. Two simulated matrices with different degrees of overlap and noise are generated, and the intrinsic structure of these data is compared with the biclustering results. We also search the discovered biclusters for biological significance using GO analysis. Results. When there is no noise, the algorithm discovers almost all of the biclusters, but with moderate noise in the dataset it cannot find overlapping biclusters well, and with large noise the biclustering result is not reliable. Conclusion. The plaid model needs to be modified, because it cannot find good biclusters when there is moderate or large noise in the data. It is a statistical and quite flexible model; to reduce the errors, the model can be adjusted and the distribution of the error terms changed. PMID:27051553
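
    Comparing the planted structure of a simulated matrix with the discovered biclusters requires a match score. A common choice, sketched below (not necessarily the score used in the paper), averages over the planted biclusters the best Jaccard overlap with any discovered bicluster:

```python
def jaccard(b1, b2):
    """Jaccard similarity between two biclusters, each given as a
    (row_index_set, column_index_set) pair; compared on their cell sets."""
    cells1 = {(r, c) for r in b1[0] for c in b1[1]}
    cells2 = {(r, c) for r in b2[0] for c in b2[1]}
    return len(cells1 & cells2) / len(cells1 | cells2)

def recovery_score(planted, found):
    """Average, over planted biclusters, of the best Jaccard match among
    the discovered biclusters: 1.0 means every planted bicluster was
    recovered exactly."""
    return sum(max(jaccard(p, f) for f in found) for p in planted) / len(planted)
```

Tracking this score as simulated noise increases reproduces the degradation pattern the abstract describes.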

  17. Evaluation of Plaid Models in Biclustering of Gene Expression Data.

    PubMed

    Alavi Majd, Hamid; Shahsavari, Soodeh; Baghestani, Ahmad Reza; Tabatabaei, Seyyed Mohammad; Khadem Bashi, Naghme; Rezaei Tavirani, Mostafa; Hamidpour, Mohsen

    2016-01-01

    Background. Biclustering algorithms have been proposed for the analysis of high-dimensional gene expression data. Among them, the plaid model is arguably one of the most flexible biclustering models to date. Objective. The main goal of this study is to evaluate plaid models on both simulated data and real gene expression datasets. Methods. Two simulated matrices with different degrees of overlap and noise are generated, and the intrinsic structure of these data is compared with the biclustering results. We also search the discovered biclusters for biological significance using GO analysis. Results. When there is no noise, the algorithm discovers almost all of the biclusters, but with moderate noise in the dataset it cannot find overlapping biclusters well, and with large noise the biclustering result is not reliable. Conclusion. The plaid model needs to be modified, because it cannot find good biclusters when there is moderate or large noise in the data. It is a statistical and quite flexible model; to reduce the errors, the model can be adjusted and the distribution of the error terms changed. PMID:27051553

  18. Statistics in asteroseismology: Evaluating confidence in stellar model fits

    NASA Astrophysics Data System (ADS)

    Johnson, Erik Stewart

    We evaluate techniques presently used to match slates of stellar evolution models to asteroseismic observations by using numeric simulations of the model fits with randomly generated numbers. Measuring the quality of the fit between a simulated model and the star by a raw chi-squared shows how well a reported model fit to a given star compares to a distribution of random model fits to the same star. The distribution of chi-squared between "models" and simulated pulsations behaves like a log-normal distribution, which suggests a link between the distribution and an analytic solution. Since the shape of the distribution depends strongly on the peculiar distribution of modes within the simulations, there appears to be no universal analytic quality-of-fit criterion, so seismic model fits must be evaluated on a case-by-case basis. We also perform numeric simulations to determine the validity of spacings between pulsations by comparing the spacings between the observed modes of a given star to those between 10^6 sets of random numbers, using the Q parameter of the Kolmogorov-Smirnov test. The observed periods in GD 358 and PG 1159-035 outperform these numeric simulations, validating their perceived spacings, while there is little support for spacings in PG 1219+534 or PG 0014+067. The best period spacing in BPM 37098 is marginally significant. The observed frequencies of eta Bootis outstrip random sets with an equal number of modes, but those modes were selectively chosen by the investigators from over 70 detected periodicities. When the random data are drawn from sets of 70 values, the observed modes' spacings are reproducible by at least 2% of the random sets. Comparing asteroseismic data to random numbers thus statistically gauges the prominence of any possible spacing, removing another element of bias from asteroseismic analysis.
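
    The thesis's exact Q formulation is not given in the abstract, but the core idea of a Kolmogorov-Smirnov spacing test can be sketched: fold the observed periods at a trial spacing and measure how far the resulting phases deviate from uniformity. A genuinely equal spacing concentrates the phases and inflates the KS distance, which can then be compared against the same statistic for random period sets. A hypothetical helper:

```python
def ks_uniform_statistic(values, spacing):
    """One-sample KS distance between the phases (value mod trial spacing,
    scaled to [0, 1)) and the uniform distribution. A large distance means
    the values cluster at one phase, i.e. the trial spacing is plausible."""
    phases = sorted((v % spacing) / spacing for v in values)
    n = len(phases)
    d = 0.0
    for i, x in enumerate(phases):
        # compare empirical CDF just before and just after each jump
        d = max(d, abs((i + 1) / n - x), abs(x - i / n))
    return d
```

Scanning this statistic over a range of trial spacings, and repeating the scan on many random period sets, gives the Monte Carlo significance comparison the abstract describes.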

  19. Evaluation of a distributed catchment scale water balance model

    NASA Technical Reports Server (NTRS)

    Troch, Peter A.; Mancini, Marco; Paniconi, Claudio; Wood, Eric F.

    1993-01-01

    The validity of some of the simplifying assumptions in a conceptual water balance model is investigated by comparing simulation results from the conceptual model with simulation results from a three-dimensional physically based numerical model and with field observations. We examine, in particular, assumptions and simplifications related to water table dynamics, vertical soil moisture and pressure head distributions, and subsurface flow contributions to stream discharge. The conceptual model relies on a topographic index to predict saturation excess runoff and on Philip's infiltration equation to predict infiltration excess runoff. The numerical model solves the three-dimensional Richards equation describing flow in variably saturated porous media, and handles seepage face boundaries, infiltration excess and saturation excess runoff production, and soil driven and atmosphere driven surface fluxes. The study catchments (a 7.2 sq km catchment and a 0.64 sq km subcatchment) are located in the North Appalachian ridge and valley region of eastern Pennsylvania. Hydrologic data collected during the MACHYDRO 90 field experiment are used to calibrate the models and to evaluate simulation results. It is found that water table dynamics as predicted by the conceptual model are close to the observations in a shallow water well and therefore, that a linear relationship between a topographic index and the local water table depth is found to be a reasonable assumption for catchment scale modeling. However, the hydraulic equilibrium assumption is not valid for the upper 100 cm layer of the unsaturated zone and a conceptual model that incorporates a root zone is suggested. Furthermore, theoretical subsurface flow characteristics from the conceptual model are found to be different from field observations, numerical simulation results, and theoretical baseflow recession characteristics based on Boussinesq's groundwater equation.
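
    The topographic index the conceptual model relies on is the TOPMODEL-style wetness index ln(a / tan β), where a is the upslope contributing area per unit contour length and β the local slope angle. A minimal helper, together with the linear local water-table relation the study finds reasonable; the scaling parameter m is a hypothetical placeholder, not a value fitted in the paper:

```python
import math

def topographic_index(upslope_area, slope_rad):
    """TOPMODEL wetness index ln(a / tan(beta)): largest for flat cells
    draining large areas, which are the first to saturate."""
    return math.log(upslope_area / math.tan(slope_rad))

def water_table_depth(ti, mean_ti, mean_depth, m=0.05):
    """Linear local water-table estimate from the topographic index:
    wetter-than-average cells (ti > mean_ti) have shallower water tables.
    The scaling parameter m is illustrative only."""
    return mean_depth - m * (ti - mean_ti)
```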

  20. Evaluation of semiempirical atmospheric density models for orbit determination applications

    NASA Technical Reports Server (NTRS)

    Cox, C. M.; Feiertag, R. J.; Oza, D. H.; Doll, C. E.

    1994-01-01

    This paper presents the results of an investigation of the orbit determination performance of the Jacchia-Roberts (JR), mass spectrometer incoherent scatter 1986 (MSIS-86), and drag temperature model (DTM) atmospheric density models. Evaluation of the models was performed to assess the modeling of the total atmospheric density. This study was made generic by using six spacecraft and selecting time periods of study representative of all portions of the 11-year cycle. Performance of the models was measured for multiple spacecraft, representing a selection of orbit geometries from near-equatorial to polar inclinations and altitudes from 400 kilometers to 900 kilometers. The orbit geometries represent typical low earth-orbiting spacecraft supported by the Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD). The best available modeling and orbit determination techniques using the Goddard Trajectory Determination System (GTDS) were employed to minimize the effects of modeling errors. The latest geopotential model available during the analysis, the Goddard earth model-T3 (GEM-T3), was employed to minimize geopotential model error effects on the drag estimation. Improved-accuracy techniques identified for TOPEX/Poseidon orbit determination analysis were used to improve the Tracking and Data Relay Satellite System (TDRSS)-based orbit determination used for most of the spacecraft chosen for this analysis. This paper shows that during periods of relatively quiet solar flux and geomagnetic activity near the solar minimum, the choice of atmospheric density model used for orbit determination is relatively inconsequential. During typical solar flux conditions near the solar maximum, the differences between the JR, DTM, and MSIS-86 models begin to become apparent. Time periods of extreme solar activity, those in which the daily and 81-day mean solar flux are high and change rapidly, result in significant differences between the models. During periods of high

  1. Evaluation of the Current State of Integrated Water Quality Modelling

    NASA Astrophysics Data System (ADS)

    Arhonditsis, G. B.; Wellen, C. C.; Ecological Modelling Laboratory

    2010-12-01

    Environmental policy and management implementation require robust methods for assessing the contribution of various point and non-point pollution sources to water quality problems as well as methods for estimating the expected and achieved compliance with the water quality goals. Water quality models have been widely used for creating the scientific basis for management decisions by providing a predictive link between restoration actions and ecosystem response. Modelling water quality and nutrient transport is challenging due to a number of constraints associated with the input data and existing knowledge gaps related to the mathematical description of landscape and in-stream biogeochemical processes. While enormous effort has been invested to make watershed models process-based and spatially distributed, there has not been a comprehensive meta-analysis of model credibility in the watershed modelling literature. In this study, we evaluate the current state of integrated water quality modelling across the range of temporal and spatial scales typically utilized. We address several common modelling questions by providing a quantitative assessment of model performance and by assessing how model performance depends on model development. The data compiled represent a heterogeneous group of modelling studies, especially with respect to complexity, spatial and temporal scales and model development objectives. Beginning from 1992, the year when Beven and Binley published their seminal paper on uncertainty analysis in hydrological modelling, and ending in 2009, we selected over 150 papers fitting a number of criteria. These criteria involved publications that: (i) employed distributed or semi-distributed modelling approaches; (ii) provided predictions on flow and nutrient concentration state variables; and (iii) reported fit to measured data. Model performance was quantified with the Nash-Sutcliffe Efficiency, the relative error, and the coefficient of determination. Further, our
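    Two of the performance statistics used in this meta-analysis are simple to compute. A minimal sketch (illustrative flow values, not from any of the reviewed studies) of the Nash-Sutcliffe Efficiency and the relative error:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 is a perfect fit; 0 means the model is no
    better than predicting the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2))

def relative_error(obs, sim):
    """Mean simulated-minus-observed bias as a fraction of the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float((sim.mean() - obs.mean()) / obs.mean())

# Illustrative streamflow series:
obs = [3.1, 4.0, 5.2, 6.8, 4.4]
sim = [2.9, 4.3, 5.0, 6.5, 4.8]
print(nash_sutcliffe(obs, sim))   # about 0.95
print(relative_error(obs, sim))   # approximately 0 for these values
```

    The coefficient of determination follows the same pattern (squared correlation of observed and simulated series).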

  2. Evaluation of Nanoparticle Uptake in Co-culture Cancer Models

    PubMed Central

    Costa, Elisabete C.; Gaspar, Vítor M.; Marques, João G.; Coutinho, Paula; Correia, Ilídio J.

    2013-01-01

    Co-culture models are currently bridging the gap between classical cultures and in vivo animal models. Exploring this novel approach unlocks the possibility to mimic the tumor microenvironment in vitro, through the establishment of cancer-stroma synergistic interactions. Notably, these organotypic models offer a perfect platform for the development and pre-clinical evaluation of candidate nanocarriers loaded with anti-tumoral drugs in a high-throughput screening mode, with lower costs and absence of ethical issues. However, this evaluation was until now limited to co-culture systems established with precise cell ratios, not addressing the natural cell heterogeneity commonly found in different tumors. Therefore, herein the efficiency of multifunctional nanocarriers was characterized in various fibroblast-MCF-7 co-culture systems containing different cell ratios, in order to unravel key design parameters that influence nanocarrier performance and the therapeutic outcome. The successful establishment of the co-culture models was confirmed by the tissue-like distribution of the different cells in culture. Nanoparticle incubation in the various co-culture systems revealed that these nanocarriers possess targeting specificity for cancer cells, indicating their suitability for use in cancer therapy. Additionally, by using different co-culture ratios, different nanoparticle uptake profiles were obtained. These findings are of crucial importance for the future design and optimization of new drug delivery systems, since their real targeting capacity must be addressed in heterogeneous cell populations, such as those found in tumors. PMID:23922909

  3. Evaluation of a chinook salmon (Oncorhynchus tshawytscha) bioenergetics model

    USGS Publications Warehouse

    Madenjian, Charles P.; O'Connor, Daniel V.; Chernyak, Sergei M.; Rediske, Richard R.; O'Keefe, James P.

    2004-01-01

    We evaluated the Wisconsin bioenergetics model for chinook salmon (Oncorhynchus tshawytscha) in both the laboratory and the field. Chinook salmon in laboratory tanks were fed alewife (Alosa pseudoharengus), the predominant food of chinook salmon in Lake Michigan. Food consumption and growth by chinook salmon during the experiment were measured. To estimate the efficiency with which chinook salmon retain polychlorinated biphenyls (PCBs) from their food in the laboratory, PCB concentrations of the alewife and of the chinook salmon at both the beginning and end of the experiment were determined. Based on our laboratory evaluation, the bioenergetics model was furnishing unbiased estimates of food consumption by chinook salmon. Additionally, from the laboratory experiment, we calculated that chinook salmon retained 75% of the PCBs contained within their food. In an earlier study, assimilation rate of PCBs to chinook salmon from their food in Lake Michigan was estimated at 53%, thereby suggesting that the model was substantially overestimating food consumption by chinook salmon in Lake Michigan. However, we concluded that field performance of the model could not be accurately assessed because PCB assimilation efficiency is dependent on feeding rate, and feeding rate of chinook salmon was likely much lower in our laboratory tanks than in Lake Michigan.

  4. Energy from biomass: Land analysis and evaluation of supply models

    NASA Astrophysics Data System (ADS)

    Shen, S. Y.; Stavrou, J.; Nelson, C. H.; Vyas, A.

    1982-01-01

    Methods of determining the potential overall impact of land-based biomass production on the agricultural and forestry sectors of the US economy were evaluated. The availability of the factor that possibly limits biomass production the most, land, is examined. A summary by US Department of Agriculture regions of the amount of available land with potential for biomass production and not presently in food crop production is presented. Then several currently used agricultural and forestry models that could be used to determine the impact of increased land-based biomass production on the agricultural and forestry sectors are evaluated. It was found that the forestry sector would not be significantly affected even by a level of biomass production with an energy yield as high as 11 quads. It was recommended that a suitable linear programming model from Iowa State University's Center for Agricultural and Rural Development (CARD) modeling system be used for future analysis. The CARD model would have to be appropriately modified so that biomass grasses and short-rotation trees could be added to the agricultural crops.

  5. Effects of question formats on causal judgments and model evaluation

    PubMed Central

    Smithson, Michael

    2015-01-01

    Evaluation of causal reasoning models depends on how well the subjects’ causal beliefs are assessed. Elicitation of causal beliefs is determined by the experimental questions put to subjects. We examined the impact of question formats commonly used in causal reasoning research on participants’ responses. The results of our experiment (Study 1) demonstrate that both the mean and homogeneity of the responses can be substantially influenced by the type of question (structure induction versus strength estimation versus prediction). Study 2A demonstrates that subjects’ responses to a question requiring them to predict the effect of a candidate cause can be significantly lower and more heterogeneous than their responses to a question asking them to diagnose a cause when given an effect. Study 2B suggests that diagnostic reasoning can strongly benefit from cues relating to temporal precedence of the cause in the question. Finally, we evaluated 16 variations of recent computational models and found that model fitting was substantially influenced by the type of question. Our results show that future research in causal reasoning should place a high priority on disentangling the effects of question formats from the effects of experimental manipulations, because that will enable comparisons between models of causal reasoning uncontaminated by method artifact. PMID:25954225

  6. Evaluation of weather-based rice yield models in India.

    PubMed

    Sudharsan, D; Adinarayana, J; Reddy, D Raji; Sreenivas, G; Ninomiya, S; Hirafuji, M; Kiura, T; Tanaka, K; Desai, U B; Merchant, S N

    2013-01-01

    The objective of this study was to compare two different rice simulation models--standalone (Decision Support System for Agrotechnology Transfer [DSSAT]) and web-based (SImulation Model for RIce-Weather relations [SIMRIW])--with agrometeorological data and agronomic parameters for estimation of rice crop production in the southern semi-arid tropics of India. Studies were carried out on the BPT5204 rice variety to evaluate the two crop simulation models. Long-term experiments were conducted in a research farm of Acharya N G Ranga Agricultural University (ANGRAU), Hyderabad, India. Initially, the results were obtained using 4 years (1994-1997) of data with weather parameters from a local weather station to evaluate DSSAT simulated results against observed values. Linear regression models used for the purpose showed a close relationship between DSSAT and observed yield. Subsequently, yield comparisons were also carried out with SIMRIW and DSSAT, and validated with actual observed values. As the correlation coefficients of the SIMRIW simulations were within acceptable limits, further rice experiments in monsoon (Kharif) and post-monsoon (Rabi) agricultural seasons (2009, 2010 and 2011) were carried out with a location-specific distributed sensor network system. These proximal systems help to simulate dry weight, leaf area index and potential yield with the Java-based SIMRIW on a daily/weekly/monthly/seasonal basis. These dynamic parameters are useful to the farming community for necessary decision making in a ubiquitous manner. However, SIMRIW requires fine tuning for better results/decision making. PMID:22422393

  7. The Third Phase of AQMEII: Evaluation Strategy and Multi-Model Performance Analysis

    EPA Science Inventory

    AQMEII (Air Quality Model Evaluation International Initiative) is an extraordinary effort promoting policy-relevant research on regional air quality model evaluation across the European and North American atmospheric modelling communities, providing the ideal platform for advanci...

  8. Looking beyond general metrics for model evaluation - lessons from an international model intercomparison study

    NASA Astrophysics Data System (ADS)

    Bouaziz, Laurène; de Boer-Euser, Tanja; Brauer, Claudia; Drogue, Gilles; Fenicia, Fabrizio; Grelier, Benjamin; de Niel, Jan; Nossent, Jiri; Pereira, Fernando; Savenije, Hubert; Thirel, Guillaume; Willems, Patrick

    2016-04-01

    International collaboration between institutes and universities is a promising way to reach consensus on hydrological model development. Education, experience and expert knowledge of the hydrological community have resulted in the development of a great variety of model concepts, calibration methods and analysis techniques. Although comparison studies are very valuable for international cooperation, they often do not lead to very clear new insights regarding the relevance of the modelled processes. We hypothesise that this is partly caused by model complexity and by comparison methods that focus on good overall performance instead of on specific events. We propose an approach that focuses on the evaluation of specific events. Eight international research groups calibrated their model for the Ourthe catchment in Belgium (1607 km2) and carried out a validation in time for the Ourthe (i.e. on two different periods, one of them in blind mode for the modellers) and a validation in space for nested and neighbouring catchments of the Meuse in completely blind mode. For each model, the same protocol was followed and an ensemble of best-performing parameter sets was selected. Signatures were first used to assess model performance in the different catchments during validation. Comparison of the models was then followed by evaluation of selected events, including low flows, high flows and the transition from low to high flows. While the models show rather similar performance based on general metrics (i.e. Nash-Sutcliffe Efficiency), clear differences can be observed for specific events. While most models are able to simulate high flows well, large differences are observed during low flows and in the ability to capture the first peaks after drier months. The transferability of model parameters to neighbouring and nested catchments is assessed as an additional measure in the model evaluation. This suggested approach helps to select, among competing

  9. Animal Models for Evaluation of Bone Implants and Devices: Comparative Bone Structure and Common Model Uses.

    PubMed

    Wancket, L M

    2015-09-01

    Bone implants and devices are a rapidly growing field within biomedical research, and implants have the potential to significantly improve human and animal health. Animal models play a key role in initial product development and are important components of nonclinical data included in applications for regulatory approval. Pathologists are increasingly being asked to evaluate these models at the initial developmental and nonclinical biocompatibility testing stages, and it is important to understand the relative merits and deficiencies of various species when evaluating a new material or device. This article summarizes characteristics of the most commonly used species in studies of bone implant materials, including detailed information about the relevance of a particular model to human bone physiology and pathology. Species reviewed include mice, rats, rabbits, guinea pigs, dogs, sheep, goats, and nonhuman primates. Ultimately, a comprehensive understanding of the benefits and limitations of different model species will aid in rigorously evaluating a novel bone implant material or device. PMID:26163303

  10. Performance Tuning and Evaluation of a Parallel Community Climate Model

    SciTech Connect

    Drake, J.B.; Worley, P.H.; Hammond, S.

    1999-11-13

    The Parallel Community Climate Model (PCCM) is a message-passing parallelization of version 2.1 of the Community Climate Model (CCM) developed by researchers at Argonne and Oak Ridge National Laboratories and at the National Center for Atmospheric Research in the early to mid 1990s. In preparation for use in the Department of Energy's Parallel Climate Model (PCM), PCCM has recently been updated with new physics routines from version 3.2 of the CCM, improvements to the parallel implementation, and ports to the SGI/Cray Research T3E and Origin 2000. We describe our experience in porting and tuning PCCM on these new platforms, evaluating the performance of different parallel algorithm options and comparing performance between the T3E and Origin 2000.

  11. The algorithmic anatomy of model-based evaluation

    PubMed Central

    Daw, Nathaniel D.; Dayan, Peter

    2014-01-01

    Despite many debates in the first half of the twentieth century, it is now largely a truism that humans and other animals build models of their environments and use them for prediction and control. However, model-based (MB) reasoning presents severe computational challenges. Alternative, computationally simpler, model-free (MF) schemes have been suggested in the reinforcement learning literature, and have afforded influential accounts of behavioural and neural data. Here, we study the realization of MB calculations, and the ways that this might be woven together with MF values and evaluation methods. There are as yet mostly only hints in the literature as to the resulting tapestry, so we offer more preview than review. PMID:25267820

  12. Performance Evaluation of 3D Modeling Software for UAV Photogrammetry

    NASA Astrophysics Data System (ADS)

    Yanagi, H.; Chikatsu, H.

    2016-06-01

    UAV (Unmanned Aerial Vehicle) photogrammetry, which combines UAVs with freely available internet-based 3D modelling software, is widely used as a low-cost and user-friendly photogrammetry technique in fields such as remote sensing and geosciences. In UAV photogrammetry, only the platform used in conventional aerial photogrammetry is changed; consequently, 3D modelling software contributes significantly to its expansion. However, the algorithms of the 3D modelling software are black boxes, and as a result only a few studies have been able to evaluate their accuracy using 3D coordinate check points. With this motivation, Smart3DCapture and Pix4Dmapper were downloaded from the Internet and the commercial software PhotoScan was also employed; investigations were performed in this paper using check points and images obtained from a UAV.

  13. Dynamic simulation of tibial tuberosity realignment: model evaluation.

    PubMed

    Purevsuren, Tserenchimed; Elias, John J; Kim, Kyungsoo; Kim, Yoon Hyuk

    2015-01-01

    This study was performed to evaluate a dynamic multibody model developed to characterize the influence of tibial tuberosity realignment procedures on patellofemoral motion and loading. Computational models were created to represent four knees previously tested at 40°, 60°, and 80° of flexion with the tibial tuberosity in lateral, medial, and anteromedial positions. The experimentally loaded muscles, major ligaments of the knee, and patellar tendon were represented. A repeated measures ANOVA with post-hoc testing was performed at each flexion angle to compare data between the three positions of the tibial tuberosity. Significant experimental trends for decreased patella flexion due to tuberosity anteriorization and a decrease in the lateral contact force due to tuberosity medialization were reproduced computationally. The dynamic multibody modeling technique will allow simulation of function for symptomatic knees to identify optimal surgical treatment methods based on parameters related to knee pathology and pre-operative kinematics. PMID:25025488

  14. Evaluation of 'partnership care model' in the control of hypertension.

    PubMed

    Mohammadi, Eesa; Abedi, Heidar Ali; Jalali, Farzad; Gofranipour, Fazlolah; Kazemnejad, Anoshirvan

    2006-06-01

    One of the shared common goals of World Hypertension League (WHL) and World Health Organization (WHO) is the control of hypertension. Despite many local and international interventions, the goal has not been achieved. This study evaluated an intervention based on the partnership care model to control hypertension in a rural population in the north of Iran. The results showed that the intervention was effective in decreasing systolic and diastolic blood pressure and in increasing the rate of controlled hypertensives (based on criteria of WHO/WHL). The intervention also had positive effects on health-related quality of life, body mass index, anxiety, high density lipoprotein level and compliance score. Based on these results, the partnership care model is effective in hypertension control and is recommended as a model to replace previous approaches in hypertension control. PMID:16674782

  15. Model evaluation using a community benchmarking system for land surface models

    NASA Astrophysics Data System (ADS)

    Mu, M.; Hoffman, F. M.; Lawrence, D. M.; Riley, W. J.; Keppel-Aleks, G.; Kluzek, E. B.; Koven, C. D.; Randerson, J. T.

    2014-12-01

    Evaluation of atmosphere, ocean, sea ice, and land surface models is an important step in identifying deficiencies in Earth system models and developing improved estimates of future change. For the land surface and carbon cycle, the design of an open-source system has been an important objective of the International Land Model Benchmarking (ILAMB) project. Here we evaluated CMIP5 and CLM models using a benchmarking system that enables users to specify models, data sets, and scoring systems so that results can be tailored to specific model intercomparison projects. Our scoring system used information from four different aspects of global datasets, including climatological mean spatial patterns, seasonal cycle dynamics, interannual variability, and long-term trends. Variable-to-variable comparisons enable investigation of the mechanistic underpinnings of model behavior, and allow for some control of biases in model drivers. Graphics modules allow users to evaluate model performance at local, regional, and global scales. Use of modular structures makes it relatively easy for users to add new variables, diagnostic metrics, benchmarking datasets, or model simulations. Diagnostic results are automatically organized into HTML files, so users can conveniently share results with colleagues. We used this system to evaluate atmospheric carbon dioxide, burned area, global biomass and soil carbon stocks, net ecosystem exchange, gross primary production, ecosystem respiration, terrestrial water storage, evapotranspiration, and surface radiation from CMIP5 historical and ESM historical simulations. We found that the multi-model mean often performed better than many of the individual models for most variables. We plan to publicly release a stable version of the software during fall of 2014 that has land surface, carbon cycle, hydrology, radiation and energy cycle components.
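    The ILAMB scoring machinery is considerably more elaborate than can be shown here, but the idea of scoring each aspect of a dataset and combining the scores can be sketched as follows (the score function, weights, and values are illustrative assumptions, not the ILAMB formulas):

```python
import numpy as np

def component_score(model, ref):
    """Map a normalized RMSE onto (0, 1]; a model identical to the reference scores 1."""
    model, ref = np.asarray(model, float), np.asarray(ref, float)
    rmse = np.sqrt(np.mean((model - ref) ** 2))
    scale = ref.std() if ref.std() > 0 else 1.0
    return float(np.exp(-rmse / scale))

def overall_score(scores, weights=None):
    """Weighted mean of per-aspect scores (mean state, seasonal cycle,
    interannual variability, long-term trend)."""
    vals = np.array(list(scores.values()), float)
    w = np.ones_like(vals) if weights is None else np.asarray(weights, float)
    return float(np.sum(w * vals) / np.sum(w))

# Illustrative: a model that reproduces an idealized seasonal cycle with a small bias.
ref = np.sin(np.linspace(0.0, 2.0 * np.pi, 12))
scores = {
    "seasonal_cycle": component_score(ref + 0.1, ref),
    "mean_state": 0.9,              # placeholder scores for the other aspects
    "interannual_variability": 0.8,
    "trend": 0.7,
}
print(overall_score(scores))
```

    A modular structure like this (one scoring function per aspect, one combiner) is what lets users plug in new variables, metrics, and benchmark datasets.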

  16. Performance evaluation of four directional emissivity analytical models with thermal SAIL model and airborne images.

    PubMed

    Ren, Huazhong; Liu, Rongyuan; Yan, Guangjian; Li, Zhao-Liang; Qin, Qiming; Liu, Qiang; Nerry, Françoise

    2015-04-01

    Land surface emissivity is a crucial parameter in the surface status monitoring. This study aims at the evaluation of four directional emissivity models, including two bi-directional reflectance distribution function (BRDF) models and two gap-frequency-based models. Results showed that the kernel-driven BRDF model could well represent directional emissivity with an error less than 0.002, and was consequently used to retrieve emissivity with an accuracy of about 0.012 from an airborne multi-angular thermal infrared data set. Furthermore, we updated the cavity effect factor relating to multiple scattering inside canopy, which improved the performance of the gap-frequency-based models. PMID:25968800

  17. Metal mixture modeling evaluation project: 2. Comparison of four modeling approaches.

    PubMed

    Farley, Kevin J; Meyer, Joseph S; Balistrieri, Laurie S; De Schamphelaere, Karel A C; Iwasaki, Yuichi; Janssen, Colin R; Kamo, Masashi; Lofts, Stephen; Mebane, Christopher A; Naito, Wataru; Ryan, Adam C; Santore, Robert C; Tipping, Edward

    2015-04-01

    As part of the Metal Mixture Modeling Evaluation (MMME) project, models were developed by the National Institute of Advanced Industrial Science and Technology (Japan), the US Geological Survey (USA), HDR|HydroQual (USA), and the Centre for Ecology and Hydrology (United Kingdom) to address the effects of metal mixtures on biological responses of aquatic organisms. A comparison of the 4 models, as they were presented at the MMME workshop in Brussels, Belgium (May 2012), is provided in the present study. Overall, the models were found to be similar in structure (free ion activities computed by the Windermere humic aqueous model [WHAM]; specific or nonspecific binding of metals/cations in or on the organism; specification of metal potency factors or toxicity response functions to relate metal accumulation to biological response). Major differences in modeling approaches are attributed to various modeling assumptions (e.g., single vs multiple types of binding sites on the organism) and specific calibration strategies that affected the selection of model parameters. The models provided a reasonable description of additive (or nearly additive) toxicity for a number of individual toxicity test results. Less-than-additive toxicity was more difficult to describe with the available models. Because of limitations in the available datasets and the strong interrelationships among the model parameters (binding constants, potency factors, toxicity response parameters), further evaluation of specific model assumptions and calibration strategies is needed. PMID:25418584

  18. Statistical procedures for evaluating daily and monthly hydrologic model predictions

    USGS Publications Warehouse

    Coffey, M.E.; Workman, S.R.; Taraba, J.L.; Fogle, A.W.

    2004-01-01

    The overall study objective was to evaluate the applicability of different qualitative and quantitative methods for comparing daily and monthly SWAT computer model hydrologic streamflow predictions to observed data, and to recommend statistical methods for use in future model evaluations. Statistical methods were tested using daily streamflows and monthly equivalent runoff depths. The statistical techniques included linear regression, Nash-Sutcliffe efficiency, nonparametric tests, t-test, objective functions, autocorrelation, and cross-correlation. None of the methods specifically applied to the non-normal distribution and dependence between data points for the daily predicted and observed data. Of the tested methods, median objective functions, sign test, autocorrelation, and cross-correlation were most applicable for the daily data. The robust coefficient of determination (CD*) and robust modeling efficiency (EF*) objective functions were the preferred methods for daily model results due to the ease of comparing these values with a fixed ideal reference value of one. Predicted and observed monthly totals were more normally distributed, and there was less dependence between individual monthly totals than was observed for the corresponding predicted and observed daily values. More statistical methods were available for comparing SWAT model-predicted and observed monthly totals. The 1995 monthly SWAT model predictions and observed data had a regression R2 of 0.70, a Nash-Sutcliffe efficiency of 0.41, and the t-test failed to reject the equal data means hypothesis. The Nash-Sutcliffe coefficient and the R2 coefficient were the preferred methods for monthly results due to the ability to compare these coefficients to a set ideal value of one.
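    Two of the statistics recommended for the daily data, autocorrelation and cross-correlation, can be sketched in a few lines (a minimal sketch with made-up daily flows; the study's own implementations may differ):

```python
import numpy as np

def lag1_autocorr(x):
    """Lag-1 autocorrelation: dependence of each day's flow on the previous day's."""
    x = np.asarray(x, float)
    a, b = x[:-1] - x[:-1].mean(), x[1:] - x[1:].mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2)))

def cross_corr(obs, sim):
    """Zero-lag cross-correlation between observed and simulated daily series."""
    o, s = np.asarray(obs, float), np.asarray(sim, float)
    o, s = o - o.mean(), s - s.mean()
    return float(np.sum(o * s) / np.sqrt(np.sum(o ** 2) * np.sum(s ** 2)))

# Illustrative daily flows: a persistent (autocorrelated) observed series and
# a simulation that tracks it imperfectly.
obs = [3.0, 3.4, 3.9, 4.1, 3.8, 3.2, 2.9]
sim = [2.8, 3.5, 3.7, 4.3, 4.0, 3.3, 3.1]
print(lag1_autocorr(obs), cross_corr(obs, sim))
```

    Comparing the autocorrelation structure of the simulated series against the observed one checks whether the model reproduces the day-to-day persistence of streamflow, not just its magnitude.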

  19. Evaluation of a laboratory model of human head impact biomechanics.

    PubMed

    Hernandez, Fidel; Shull, Peter B; Camarillo, David B

    2015-09-18

    This work describes methodology for evaluating laboratory models of head impact biomechanics. Using this methodology, we investigated: how closely does twin-wire drop testing model head rotation in American football impacts? Head rotation is believed to cause mild traumatic brain injury (mTBI) but helmet safety standards only model head translations believed to cause severe TBI. It is unknown whether laboratory head impact models in safety standards, like twin-wire drop testing, reproduce six degree-of-freedom (6DOF) head impact biomechanics that may cause mTBI. We compared 6DOF measurements of 421 American football head impacts to twin-wire drop tests at impact sites and velocities weighted to represent typical field exposure. The highest rotational velocities produced by drop testing were the 74th percentile of non-injury field impacts. For a given translational acceleration level, drop testing underestimated field rotational acceleration by 46% and rotational velocity by 72%. Primary rotational acceleration frequencies were much larger in drop tests (~100 Hz) than field impacts (~10 Hz). Drop testing was physically unable to produce acceleration directions common in field impacts. Initial conditions of a single field impact were highly resolved in stereo high-speed video and reconstructed in a drop test. Reconstruction results reflected aggregate trends of lower amplitude rotational velocity and higher frequency rotational acceleration in drop testing, apparently due to twin-wire constraints and the absence of a neck. These results suggest twin-wire drop testing is limited in modeling head rotation during impact, and motivate continued evaluation of head impact models to ensure helmets are tested under conditions that may cause mTBI. PMID:26117075

  20. Evaluation of wave runup predictions from numerical and parametric models

    USGS Publications Warehouse

    Stockdon, Hilary F.; Thompson, David M.; Plant, Nathaniel G.; Long, Joseph W.

    2014-01-01

    Wave runup during storms is a primary driver of coastal evolution, including shoreline and dune erosion and barrier island overwash. Runup and its components, setup and swash, can be predicted from a parameterized model that was developed by comparing runup observations to offshore wave height, wave period, and local beach slope. Because observations during extreme storms are often unavailable, a numerical model is used to simulate the storm-driven runup to compare to the parameterized model and then develop an approach to improve the accuracy of the parameterization. Numerically simulated and parameterized runup were compared to observations to evaluate model accuracies. The analysis demonstrated that setup was accurately predicted by both the parameterized model and numerical simulations. Infragravity swash heights were most accurately predicted by the parameterized model. The numerical model suffered from bias and gain errors that depended on whether a one-dimensional or two-dimensional spatial domain was used. Nonetheless, all of the predictions were significantly correlated to the observations, implying that the systematic errors can be corrected. The numerical simulations did not resolve the incident-band swash motions, as expected, and the parameterized model performed best at predicting incident-band swash heights. An assimilated prediction using a weighted average of the parameterized model and the numerical simulations resulted in a reduction in prediction error variance. Finally, the numerical simulations were extended to include storm conditions that have not been previously observed. These results indicated that the parameterized predictions of setup may need modification for extreme conditions; numerical simulations can be used to extend the validity of the parameterized predictions of infragravity swash; and numerical simulations systematically underpredict incident swash, which is relatively unimportant under extreme conditions.
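    The assimilated prediction described above is a weighted average of the two estimators. One standard choice of weights (an assumption here, not necessarily the authors') is inverse-variance weighting, which guarantees the combined error variance is no larger than either input's:

```python
def assimilate(x_param, var_param, x_num, var_num):
    """Inverse-variance weighted average of a parameterized-model estimate and a
    numerical-simulation estimate; the combined variance is never larger than
    the smaller of the two input variances."""
    w_p, w_n = 1.0 / var_param, 1.0 / var_num
    x = (w_p * x_param + w_n * x_num) / (w_p + w_n)
    return x, 1.0 / (w_p + w_n)

# Two hypothetical runup estimates (metres) with error variances from validation:
runup, var = assimilate(2.0, 0.04, 2.4, 0.12)   # approximately (2.1, 0.03)
```

    With these illustrative numbers the combined estimate leans toward the lower-variance parameterized prediction, and its variance drops below both inputs.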

  1. Optical CD metrology model evaluation and refining for manufacturing

    NASA Astrophysics Data System (ADS)

    Wang, S.-B.; Huang, C. L.; Chiu, Y. H.; Tao, H. J.; Mii, Y. J.

    2009-03-01

    Optical critical dimension (OCD) metrology has been well accepted as a standard inline metrology tool in semiconductor manufacturing since the 65nm technology node for its non-destructive and versatile advantages. Many geometry parameters can be obtained in a single measurement with good accuracy if the model is well established and calibrated by transmission electron microscopy (TEM). However, from a manufacturing viewpoint, there is no effective index for model quality and, based on that, for model refining. Moreover, as device structures become more complicated, as with strained-silicon technology, more parameters must be determined in each measurement. The model therefore requires more attention to ensure inline metrology reliability. GOF (goodness of fit), one model index given by a commercial OCD metrology tool, for example, is not sensitive enough, while correlation and sensitivity coefficients, the other two indexes, are evaluated under metrology tool noise only and are not directly related to inline production measurement uncertainty. In this article, we propose a sensitivity matrix for measurement uncertainty estimation, in which each entry is defined as the correlation coefficient between the corresponding two floating parameters and is obtained by linearization. The uncertainty is estimated in combination with production-line variation and is found, for the first time, to be much larger than that due to metrology tool noise alone, which indicates that model quality control is critical for nanometer device production control. The uncertainty, in comparison with production requirements, also serves as an index for model refining, either by grid-size rescaling or by structure model modification. This method is verified by TEM measurement and, finally, a flow chart for model refining is proposed.
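    The linearization step can be sketched as standard least-squares error propagation: for a Jacobian J of the modeled spectrum with respect to the floating parameters and measurement noise sigma, the parameter covariance is sigma^2 (J^T J)^-1, from which pairwise correlation coefficients follow. The Jacobian values and parameter names below are illustrative, not from the paper:

```python
import numpy as np

def parameter_stats(J, sigma):
    """Covariance sigma^2 * (J^T J)^-1 of the floating parameters for a
    linearized least-squares fit, plus the derived correlation matrix."""
    cov = sigma ** 2 * np.linalg.inv(J.T @ J)
    d = np.sqrt(np.diag(cov))
    return cov, cov / np.outer(d, d)

# Illustrative Jacobian: sensitivity of three spectral samples to two
# hypothetical floating parameters (e.g. bottom CD and sidewall angle).
J = np.array([[1.0, 0.5],
              [0.2, 1.0],
              [0.8, 0.3]])
cov, corr = parameter_stats(J, sigma=0.01)
print(corr[0, 1])   # correlation between the two parameters
```

    A correlation magnitude near 1 signals that the two parameters cannot be separated by the measurement, which is the kind of condition that would trigger model refining.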

  2. Obs4MIPS: Satellite Observations for CMIP6 Model Evaluation

    NASA Astrophysics Data System (ADS)

    Ferraro, R.; Waliser, D. E.; Gleckler, P. J.; Taylor, K. E.; Eyring, V.

    2014-12-01

    This poster will review the current status of the obs4MIPs project, whose purpose is to provide a limited collection of well-established and documented datasets for comparison with Earth system models (https://www.earthsystemcog.org/projects/obs4mips/). These datasets have been reformatted to correspond with the CMIP5 model output requirements, and include technical documentation specifically targeted for their use in model output evaluation. There are currently over 50 datasets containing observations that directly correspond to CMIP5 model output variables. We will review the results and recommendations from the recent obs4MIPs - CMIP6 planning meeting, which gathered over 50 experts in satellite observations and CMIP modeling, to assess the observations needed to support the next round of CMIP experiments. The recommendations address key issues regarding the inclusion of higher-frequency datasets (both observations and model output), the need for error and bias characterization, the inclusion of reanalyses, and support for observation simulators. An update on the governance of the obs4MIPs project and recent usage statistics will also be presented.

  3. Evaluation of an Urban Canopy Parameterization in a Mesoscale Model

    SciTech Connect

    Chin, H S; Leach, M J; Sugiyama, G A; Leone, Jr., J M; Walker, H; Nasstrom, J; Brown, M J

    2004-03-18

    A modified urban canopy parameterization (UCP) is developed and evaluated in a three-dimensional mesoscale model to assess the urban impact on surface and lower-atmospheric properties. This parameterization accounts for the effects of building drag, turbulent production, radiation balance, anthropogenic heating, and building rooftop heating/cooling. USGS land-use data are also utilized to derive the urban infrastructure and urban surface properties needed to drive the UCP. An intensive observational period over the Salt Lake Valley on 25-26 October 2000, with clear sky, strong ambient wind and drainage flow, and the absence of a land-lake breeze, is selected for this study. A series of sensitivity experiments is performed to gain understanding of the urban impact in the mesoscale model. Results indicate that within the selected urban environment, urban surface characteristics and anthropogenic heating play little role in the formation of the modeled nocturnal urban boundary layer. The rooftop effect appears to be the main contributor to this urban boundary layer. Sensitivity experiments also show that for this weak urban heat island case, the model horizontal grid resolution is important in simulating the elevated inversion layer. The root mean square errors of the predicted wind and temperature with respect to surface station measurements exhibit substantially larger discrepancies at the urban locations than at their rural counterparts. However, the close agreement of modeled tracer concentrations with observations lends reasonable support to the modeled urban impact on wind-direction shift and wind drag effects.

  4. Gridded global surface ozone metrics for atmospheric chemistry model evaluation

    NASA Astrophysics Data System (ADS)

    Sofen, E. D.; Bowdalo, D.; Evans, M. J.; Apadula, F.; Bonasoni, P.; Cupeiro, M.; Ellul, R.; Galbally, I. E.; Girgzdiene, R.; Luppo, S.; Mimouni, M.; Nahas, A. C.; Saliba, M.; Tørseth, K.

    2016-02-01

    The concentration of ozone at the Earth's surface is measured at many locations across the globe for the purposes of air quality monitoring and atmospheric chemistry research. We have brought together all publicly available surface ozone observations from online databases from the modern era to build a consistent data set for the evaluation of chemical transport and chemistry-climate (Earth System) models for projects such as the Chemistry-Climate Model Initiative and AerChemMIP. From a total data set of approximately 6600 sites and 500 million hourly observations from 1971-2015, approximately 2200 sites and 200 million hourly observations pass screening as high-quality sites in regionally representative locations that are appropriate for use in global model evaluation. Data volume is generally good from the start of air quality monitoring networks in 1990 through 2013. Ozone observations are biased heavily toward North America and Europe, with sparse coverage over the rest of the globe. This data set is made available for the purposes of model evaluation as a set of gridded metrics intended to describe the distribution of ozone concentrations on monthly and annual timescales. Metrics include the moments of the distribution, percentiles, the maximum daily 8 h average (MDA8), the sum of daily MDA8 exceedances over 35 ppb (SOMO35), the accumulated ozone exposure above a threshold of 40 ppbv (AOT40), and metrics related to air quality regulatory thresholds. Gridded data sets are stored as netCDF-4 files and are available to download from the British Atmospheric Data Centre (doi: 10.5285/08fbe63d-fa6d-4a7a-b952-5932e3ab0452). We provide recommendations to the ozone measurement community regarding improving metadata reporting to simplify ongoing and future efforts in working with ozone data from disparate networks in a consistent manner.
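
The three exposure metrics named above are simple functions of the hourly series. A simplified sketch of their definitions (one day of data; real implementations also handle data-capture thresholds, time zones, the next day's early hours for MDA8, and the growing-season window for AOT40):

```python
import numpy as np

def mda8(hourly):
    """Maximum daily 8 h running mean (simplified: windows drawn from
    one day's 24 hourly values only)."""
    return max(hourly[i:i + 8].mean() for i in range(len(hourly) - 7))

def somo35(daily_mda8):
    """Sum of daily MDA8 exceedances over 35 ppb (ppb days)."""
    return sum(max(v - 35.0, 0.0) for v in daily_mda8)

def aot40(hourly, daylight=slice(8, 20)):
    """Accumulated hourly exposure over 40 ppb during daylight (ppb h).
    The daylight window and growing-season filter are simplified here."""
    day = np.asarray(hourly, dtype=float)[daylight]
    return float(np.sum(np.clip(day - 40.0, 0.0, None)))

day = np.full(24, 30.0)
day[10:18] = 60.0        # an 8 h afternoon ozone episode
print(mda8(day), somo35([mda8(day)]), aot40(day))  # 60.0 25.0 160.0
```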

  5. Gridded global surface ozone metrics for atmospheric chemistry model evaluation

    NASA Astrophysics Data System (ADS)

    Sofen, E. D.; Bowdalo, D.; Evans, M. J.; Apadula, F.; Bonasoni, P.; Cupeiro, M.; Ellul, R.; Galbally, I. E.; Girgzdiene, R.; Luppo, S.; Mimouni, M.; Nahas, A. C.; Saliba, M.; Tørseth, K.; and all other contributors to the WMO GAW, EPA AQS, EPA CASTNET, CAPMoN, NAPS, AirBase, EMEP, and EANET ozone datasets

    2015-07-01

    The concentration of ozone at the Earth's surface is measured at many locations across the globe for the purposes of air quality monitoring and atmospheric chemistry research. We have brought together all publicly available surface ozone observations from online databases from the modern era to build a consistent dataset for the evaluation of chemical transport and chemistry-climate (Earth System) models for projects such as the Chemistry-Climate Model Initiative and AerChemMIP. From a total dataset of approximately 6600 sites and 500 million hourly observations from 1971-2015, approximately 2200 sites and 200 million hourly observations pass screening as high-quality sites in regional background locations that are appropriate for use in global model evaluation. There is generally good data volume since the start of air quality monitoring networks in 1990 through 2013. Ozone observations are biased heavily toward North America and Europe, with sparse coverage over the rest of the globe. This dataset is made available for the purposes of model evaluation as a set of gridded metrics intended to describe the distribution of ozone concentrations on monthly and annual timescales. Metrics include the moments of the distribution, percentiles, maximum daily eight-hour average (MDA8), SOMO35, AOT40, and metrics related to air quality regulatory thresholds. Gridded datasets are stored as netCDF-4 files and are available to download from the British Atmospheric Data Centre (doi:10.5285/08fbe63d-fa6d-4a7a-b952-5932e3ab0452). We provide recommendations to the ozone measurement community regarding improving metadata reporting to simplify ongoing and future efforts in working with ozone data from disparate networks in a consistent manner.

  6. Evaluating the uncertainty of input quantities in measurement models

    NASA Astrophysics Data System (ADS)

    Possolo, Antonio; Elster, Clemens

    2014-06-01

    The Guide to the Expression of Uncertainty in Measurement (GUM) gives guidance about how values and uncertainties should be assigned to the input quantities that appear in measurement models. This contribution offers a concrete proposal for how that guidance may be updated in light of the advances in the evaluation and expression of measurement uncertainty that were made in the course of the twenty years that have elapsed since the publication of the GUM, and also considering situations that the GUM does not yet contemplate. Our motivation is the ongoing conversation about a new edition of the GUM. While generally we favour a Bayesian approach to uncertainty evaluation, we also recognize the value that other approaches may bring to the problems considered here, and focus on methods for uncertainty evaluation and propagation that are widely applicable, including to cases that the GUM has not yet addressed. In addition to Bayesian methods, we discuss maximum-likelihood estimation, robust statistical methods, and measurement models where values of nominal properties play the same role that input quantities play in traditional models. We illustrate these general-purpose techniques in concrete examples, employing data sets that are realistic but that also are of conveniently small sizes. The supplementary material available online lists the R computer code that we have used to produce these examples (stacks.iop.org/Met/51/3/339/mmedia). Although we strive to stay close to clause 4 of the GUM, which addresses the evaluation of uncertainty for input quantities, we depart from it as we review the classes of measurement models that we believe are generally useful in contemporary measurement science. We also considerably expand and update the treatment that the GUM gives to Type B evaluations of uncertainty: reviewing the state-of-the-art, disciplined approach to the elicitation of expert knowledge, and its encapsulation in probability distributions that are usable in
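
As a concrete instance of the Type B evaluations the abstract discusses, the GUM's classic recipes convert an assumed input distribution into a standard uncertainty: for a quantity known only to lie within ±a, a rectangular distribution gives u = a/√3 and a triangular one gives u = a/√6. A minimal sketch (the scenario below is hypothetical):

```python
import math

def type_b_rectangular(half_width):
    """Standard uncertainty when a quantity is only known to lie within
    +/- half_width with equal probability (rectangular distribution)."""
    return half_width / math.sqrt(3)

def type_b_triangular(half_width):
    """Standard uncertainty for a triangular distribution on the same
    interval, appropriate when central values are more likely."""
    return half_width / math.sqrt(6)

# A hypothetical certificate states +/- 0.3 K with no further detail:
print(round(type_b_rectangular(0.3), 4))  # 0.1732
```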

  7. Evaluating Alzheimer's Disease Progression by Modeling Crosstalk Network Disruption

    PubMed Central

    Liu, Haochen; Wei, Chunxiang; He, Hua; Liu, Xiaoquan

    2016-01-01

    Aβ, tau, and P-tau have been widely accepted as reliable markers for Alzheimer's disease (AD). The crosstalk between these markers forms a complex network, and AD may induce integral variation and disruption of this network. The aim of this study was to develop a novel mathematical model based on a simplified crosstalk network to evaluate the disease progression of AD. The integral variation of the network is measured by three integral disruption parameters, and the robustness of the network is evaluated by the network disruption probability. The results show that the network disruption probability has a good linear relationship with the Mini Mental State Examination (MMSE). The proposed model combined with a support vector machine (SVM) achieves relatively high 10-fold cross-validated performance in the classification of AD vs. normal and mild cognitive impairment (MCI) vs. normal (95% accuracy, 95% sensitivity, 95% specificity for AD vs. normal; 90% accuracy, 94% sensitivity, 83% specificity for MCI vs. normal). This research evaluates the progression of AD and facilitates AD early diagnosis. PMID:26834548

  8. Evaluation of Data Used for Modelling the Stratosphere of Saturn

    NASA Astrophysics Data System (ADS)

    Armstrong, Eleanor Sophie; Irwin, Patrick G. J.; Moses, Julianne I.

    2015-11-01

    Planetary atmospheres are modeled through the use of a photochemical and kinetic reaction scheme constructed from experimentally and theoretically determined rate coefficients, photoabsorption cross sections and branching ratios for the molecules described within them. The KINETICS architecture has previously been developed to model planetary atmospheres and is applied here to Saturn’s stratosphere. We consider the pathways that comprise the reaction scheme of a current model, and update the reaction scheme according to the findings of a literature investigation. We evaluate contemporary photochemical literature, studying recent data sets of cross sections and branching ratios for a number of hydrocarbons used in the photochemical scheme of Model C of KINETICS. In particular, we consider new photodissociation branching ratios for CH4, C2H2, C2H4, C3H3, C3H5 and C4H2, and new cross-sectional data for C2H2, C2H4, C2H6, C3H3, C4H2, C6H2 and C8H2. By evaluating the techniques used and the data sets obtained, a new reaction scheme selection was drawn up. These data are then used within the preferred reaction scheme of the thesis and applied to the KINETICS atmospheric model to produce a model of the stratosphere of Saturn in a steady state. A total output of the preferred reaction scheme is presented, and the data are compared both with the previous reaction scheme and with data from the Cassini spacecraft in orbit around Saturn. One of the key findings of this work is that there is significant change in the model’s output as a result of temperature-dependent data determination. Although only shown within the changes to the photochemical portion of the preferred reaction scheme, it is suggested that an equally important temperature dependence will be exhibited in the kinetic section of the reaction scheme. The photochemical model output is shown to be highly dependent on the preferred reaction scheme used within it by this thesis. The importance of correct

  9. Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation

    NASA Astrophysics Data System (ADS)

    Tsai, Frank T.-C.; Elshall, Ahmed S.

    2013-09-01

    Analysts are often faced with competing propositions for each uncertain model component. How can we judge whether we have selected the correct proposition(s) for an uncertain model component out of numerous possibilities? We introduce the hierarchical Bayesian model averaging (HBMA) method as a multimodel framework for uncertainty analysis. The HBMA allows for segregating, prioritizing, and evaluating different sources of uncertainty and their corresponding competing propositions through a hierarchy of BMA models that forms a BMA tree. We apply the HBMA to conduct uncertainty analysis on the reconstructed hydrostratigraphic architectures of the Baton Rouge aquifer-fault system, Louisiana. Due to uncertainty in model data, structure, and parameters, multiple possible hydrostratigraphic models are produced and calibrated as base models. The study considers four sources of uncertainty. With respect to data uncertainty, the study considers two calibration data sets. With respect to model structure, the study considers three different variogram models, two geological stationarity assumptions, and two fault conceptualizations. The base models are produced following a combinatorial design to allow for uncertainty segregation. Thus, these four uncertain model components with their corresponding competing propositions result in 2 × 3 × 2 × 2 = 24 base models. The results show that the systematic dissection of the uncertain model components along with their corresponding competing propositions allows for detecting the robust model propositions and the major sources of uncertainty.
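
The combinatorial design described above is easy to reproduce: take the Cartesian product of the competing propositions for each uncertain component. The proposition names below are placeholders, not the ones used in the study:

```python
from itertools import product

# Four uncertain components with 2, 3, 2, and 2 competing propositions,
# mirroring the study's design; proposition names are placeholders.
components = {
    "calibration_data": ["set_1", "set_2"],
    "variogram": ["model_A", "model_B", "model_C"],
    "stationarity": ["stationary", "nonstationary"],
    "fault_conceptualization": ["concept_1", "concept_2"],
}

base_models = [dict(zip(components, combo))
               for combo in product(*components.values())]
print(len(base_models))  # 2 * 3 * 2 * 2 = 24
```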

  10. A Multiscale Model Evaluates Screening for Neoplasia in Barrett's Esophagus.

    PubMed

    Curtius, Kit; Hazelton, William D; Jeon, Jihyoun; Luebeck, E Georg

    2015-05-01

    Barrett's esophagus (BE) patients are routinely screened for high grade dysplasia (HGD) and esophageal adenocarcinoma (EAC) through endoscopic screening, during which multiple esophageal tissue samples are removed for histological analysis. We propose a computational method called the multistage clonal expansion for EAC (MSCE-EAC) screening model that is used for screening BE patients in silico to evaluate the effects of biopsy sampling, diagnostic sensitivity, and treatment on disease burden. Our framework seamlessly integrates relevant cell-level processes during EAC development with a spatial screening process to provide a clinically relevant model for detecting dysplastic and malignant clones within the crypt-structured BE tissue. With this computational approach, we retain spatio-temporal information about small, unobserved tissue lesions in BE that may remain undetected during biopsy-based screening but could be detected with high-resolution imaging. This allows evaluation of the efficacy and sensitivity of current screening protocols to detect neoplasia (dysplasia and early preclinical EAC) in the esophageal lining. We demonstrate the clinical utility of this model by predicting three important clinical outcomes: (1) the probability that small cancers are missed during biopsy-based screening, (2) the potential gains in neoplasia detection probabilities if screening occurred via high-resolution tomographic imaging, and (3) the efficacy of ablative treatments that result in the curative depletion of metaplastic and neoplastic cell populations in BE in terms of the long-term impact on reducing EAC incidence. PMID:26001209

  11. Evaluation of weather-based rice yield models in India

    NASA Astrophysics Data System (ADS)

    Sudharsan, D.; Adinarayana, J.; Reddy, D. Raji; Sreenivas, G.; Ninomiya, S.; Hirafuji, M.; Kiura, T.; Tanaka, K.; Desai, U. B.; Merchant, S. N.

    2013-01-01

    The objective of this study was to compare two different rice simulation models—standalone (Decision Support System for Agrotechnology Transfer [DSSAT]) and web-based (SImulation Model for RIce-Weather relations [SIMRIW])—with agrometeorological data and agronomic parameters for estimation of rice crop production in the southern semi-arid tropics of India. Studies were carried out on the BPT5204 rice variety to evaluate the two crop simulation models. Long-term experiments were conducted on a research farm of Acharya N G Ranga Agricultural University (ANGRAU), Hyderabad, India. Initially, the results were obtained using 4 years (1994-1997) of data with weather parameters from a local weather station to evaluate DSSAT simulated results against observed values. Linear regression models used for the purpose showed a close relationship between DSSAT and observed yield. Subsequently, yield comparisons were also carried out with SIMRIW and DSSAT, and validated against actual observed values. As the correlation coefficients of the SIMRIW simulations were within acceptable limits, further rice experiments in the monsoon (Kharif) and post-monsoon (Rabi) agricultural seasons (2009, 2010 and 2011) were carried out with a location-specific distributed sensor network system. These proximal systems help to simulate dry weight, leaf area index and potential yield with the Java-based SIMRIW on a daily/weekly/monthly/seasonal basis. These dynamic parameters are useful to the farming community for necessary decision making in a ubiquitous manner. However, SIMRIW requires fine tuning for better results/decision making.

  12. Uncertainty evaluation in numerical modeling of complex devices

    NASA Astrophysics Data System (ADS)

    Cheng, X.; Monebhurrun, V.

    2014-10-01

    Numerical simulation is an efficient tool for exploring and understanding the physics of complex devices, e.g. mobile phones. For meaningful results, it is important to evaluate the uncertainty of the numerical simulation. Uncertainty quantification in specific absorption rate (SAR) calculation using a full computer-aided design (CAD) mobile phone model is a challenging task. Since a typical SAR numerical simulation is computationally expensive, the traditional Monte Carlo (MC) simulation method proves inadequate. The unscented transformation (UT) is an alternative and numerically efficient method, herein investigated to evaluate the uncertainty in the SAR calculation using realistic models of two commercially available mobile phones. The electromagnetic simulation process is modeled as a nonlinear mapping in which uncertainty in the inputs, e.g. the relative permittivity values of the mobile phone material properties, induces an uncertainty in the output, e.g. the peak spatial-average SAR value. The numerical simulation results demonstrate that the UT may be a potential candidate for uncertainty quantification in SAR calculations, since only a few simulations are necessary to obtain results similar to those obtained after hundreds or thousands of MC simulations.
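
A minimal sketch of the standard scaled unscented transformation for a scalar-valued mapping with Gaussian inputs; this is the generic UT, not necessarily the exact variant used in the paper, and the toy mapping below merely stands in for an expensive EM solver:

```python
import numpy as np

def unscented_transform(f, mean, cov, alpha=0.1, beta=2.0, kappa=0.0):
    """Propagate Gaussian input uncertainty through a nonlinear scalar
    mapping f using 2n+1 sigma points (standard scaled UT)."""
    n = len(mean)
    lam = alpha ** 2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)
    sigma = [mean] + [mean + S[:, i] for i in range(n)] \
                   + [mean - S[:, i] for i in range(n)]
    wm = np.full(2 * n + 1, 0.5 / (n + lam))
    wm[0] = lam / (n + lam)
    wc = wm.copy()
    wc[0] += 1.0 - alpha ** 2 + beta
    y = np.array([f(x) for x in sigma])
    y_mean = wm @ y
    y_var = wc @ (y - y_mean) ** 2
    return y_mean, y_var

# Toy nonlinear mapping standing in for the EM solver: f(x) = x0^2 + x1
f = lambda x: x[0] ** 2 + x[1]
m, v = unscented_transform(f, np.array([1.0, 0.0]), np.diag([0.01, 0.04]))
print(m, v)  # analytic values: mean 1.01, variance 0.0802
```

For this quadratic mapping the UT mean is exact and the variance is within a fraction of a percent of the analytic value, using only 2n+1 = 5 evaluations of f, which is the efficiency argument made in the abstract.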

  13. Durability evaluation techniques and modeling for highway materials

    SciTech Connect

    Biswas, M.; Muchane, G.K.

    1995-06-01

    For satisfactory long-term performance of highway facilities, the authors are concerned about the durability of materials, in addition to their initial strength. Besides conventional materials, such as Portland cement concrete and asphalt concrete, their interests include high-performance materials such as polymer concrete and polymer-modified concrete. Degradation of materials may occur over time due to exposure to a number of aggravating conditions and environments. For the investigation of durability, the aggravating exposures that the authors have considered include repeated loading and freeze-thaw cycling. Methods of evaluating the performance of materials include the application of vibration spectral techniques for assessing material stiffness and damage. Materials are modeled to characterize their performance under repeated loads and other aggravating exposures.

  14. Animal models for bladder cancer: The model establishment and evaluation (Review)

    PubMed Central

    ZHANG, NING; LI, DONGYANG; SHAO, JIALIANG; WANG, XIANG

    2015-01-01

    Bladder cancer is the most common type of tumor in the urogenital system. Approximately 75% of patients with bladder cancer present with non-muscle-invasive cancer, which is generally treated by transurethral resection and intravesical chemotherapy. In spite of different therapeutic options, there remains a very variable risk of recurrence and progression. Novel therapeutic methods of treating bladder cancer are urgently required. The exploration and preclinical evaluation of new treatments requires an animal tumor model that mimics the human counterpart. Animal models are key in bladder cancer research and provide a bridge to the clinic. Various animal bladder cancer models have been described to date, but the tumor take rate is reported to be 30–100%. Establishment of reliable, simple, practicable and reproducible animal models remains an ongoing challenge. The present review summarizes the latest developments with regard to the establishment of animal models and tumor evaluation. PMID:25788992

  15. An Evaluation of a Diagnostic Wind Model (CALMET)

    SciTech Connect

    Wang, Weiguo; Shaw, William J.; Seiple, Timothy E.; Rishel, Jeremy P.; Xie, YuLong

    2008-06-01

    An EPA-recommended diagnostic wind model (CALMET) was evaluated during a typical lake-breeze event in the Chicago region. We focused on the performance of CALMET in simulating winds that were highly variable in space and time. The reference winds were generated by the PSU/NCAR MM5 assimilating system, with which the CALMET results were compared. Statistical evaluations were conducted to quantify overall errors in wind speed and direction over the domain. Within the atmospheric boundary layer (ABL), relative errors in (layer-averaged) wind speed were about 25% to 40% during the simulation period; wind direction errors generally ranged from 6 to 20 degrees. Above the ABL, the errors became larger due to a lack of upper-air stations in the studied domain. Analyses implied that model errors were time dependent owing to time-dependent spatial variability in the winds. Trajectory analyses were made to examine the likely spatial dependence of model errors within the domain, suggesting that the quality of CALMET winds in local areas depended on their locations with respect to the lake-breeze front position. Large errors usually occurred near the front area, where observations cannot resolve the spatial variability of the wind, or on the fringe of the domain, where observations are lacking. We also compared results simulated using different datasets and model options. Model errors tended to be reduced with data sampled from more stations or from more uniformly distributed stations. Suggestions are offered for further improving or interpreting CALMET results under complex wind conditions in the Chicago region, which may also apply to other regions.
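
Wind-direction statistics such as those quoted above require a circular difference, so that 350° vs. 10° counts as a 20° error rather than 340°. A small sketch of how such errors are typically computed (the sample values are illustrative only):

```python
import numpy as np

def direction_error(obs_deg, mod_deg):
    """Smallest angular difference between modeled and observed wind
    directions, in degrees (result lies in [0, 180])."""
    d = np.abs(np.asarray(mod_deg) - np.asarray(obs_deg)) % 360.0
    return np.where(d > 180.0, 360.0 - d, d)

def rmse(err):
    return float(np.sqrt(np.mean(np.square(err))))

obs = np.array([350.0, 10.0, 90.0])
mod = np.array([10.0, 350.0, 120.0])
err = direction_error(obs, mod)
print(err.tolist(), round(rmse(err), 2))  # [20.0, 20.0, 30.0] 23.8
```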

  16. Modelling public risk evaluation of natural hazards: a conceptual approach

    NASA Astrophysics Data System (ADS)

    Plattner, Th.

    2005-04-01

    In recent years, the management of natural hazards in Switzerland has shifted away from a hazard-oriented approach towards a risk-based one. Decreasing societal acceptance of risk, accompanied by increasing marginal costs of protective measures and decreasing financial resources, causes an optimization problem. The new focus therefore lies on mitigating the hazard's risk in accordance with economic, ecological and social considerations. This modern approach requires that not only the technological, engineering or scientific aspects of defining the hazard and computing the risk be considered, but also public concerns about the acceptance of these risks. These aspects of a modern risk approach enable a comprehensive assessment of the (risk) situation and, thus, sound risk management decisions. In Switzerland, however, the competent authorities suffer from a lack of decision criteria, as they do not know what risk level the public is willing to accept. Consequently, the authorities need to know what society thinks about risks. A formalized model that allows at least a crude simulation of public risk evaluation could therefore be a useful tool to support effective and efficient risk mitigation measures. This paper presents a conceptual approach to such an evaluation model using perception-affecting factors (PAF), evaluation criteria (EC) and several factors that relate not to the risk itself but to the evaluating person. Finally, the decision about the acceptance Acc of a certain risk i is made by comparing the perceived risk Ri,perc with the acceptable risk Ri,acc.
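
A toy formalization of the comparison step, with an assumed multiplicative form for how the PAF distort the objective risk; the paper's model is conceptual, and the functional form, weights and values below are invented for illustration:

```python
def perceived_risk(objective_risk, paf_weights, paf_values):
    """Toy model: perception-affecting factors (PAF) scale the objective
    risk multiplicatively. Form and numbers are assumed, not from the
    paper."""
    factor = 1.0
    for w, v in zip(paf_weights, paf_values):
        factor *= v ** w
    return objective_risk * factor

def accepted(r_perc, r_acc):
    """Acceptance decision: risk i is accepted iff Ri,perc <= Ri,acc."""
    return r_perc <= r_acc

# A dreaded, involuntary hazard (both PAF > 1) inflates perceived risk
# threefold, so it is rejected against an acceptable level of 1e-4:
r = perceived_risk(1e-4, [0.5, 1.0], [4.0, 1.5])
print(r, accepted(r, 1e-4))
```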

  17. Laboratory evaluation of a walleye (Sander vitreus) bioenergetics model.

    PubMed

    Madenjian, Charles P; Wang, Chunfang; O'Brien, Timothy P; Holuszko, Melissa J; Ogilvie, Lynn M; Stickel, Richard G

    2010-03-01

    Walleye (Sander vitreus) is an important game fish throughout much of North America. We evaluated the performance of the Wisconsin bioenergetics model for walleye in the laboratory. Walleyes were fed rainbow smelt (Osmerus mordax) in four laboratory tanks during a 126-day experiment. Based on a statistical comparison of bioenergetics model predictions of monthly consumption with the observed monthly consumption, we concluded that the bioenergetics model significantly underestimated food consumption by walleye in the laboratory. The degree of underestimation appeared to depend on the feeding rate. For the tank with the lowest feeding rate (1.4% of walleye body weight per day), the agreement between the bioenergetics model prediction of cumulative consumption over the entire 126-day experiment and the observed cumulative consumption was remarkably close, as the prediction was within 0.1% of the observed cumulative consumption. Feeding rates in the other three tanks ranged from 1.6% to 1.7% of walleye body weight per day, and bioenergetics model predictions of cumulative consumption over the 126-day experiment ranged between 11 and 15% less than the observed cumulative consumption. PMID:18979219

  18. Proper bibeta ROC model: algorithm, software, and performance evaluation

    NASA Astrophysics Data System (ADS)

    Chen, Weijie; Hu, Nan

    2016-03-01

    Semi-parametric models are often used to fit data collected in receiver operating characteristic (ROC) experiments to obtain a smooth ROC curve and ROC parameters for statistical inference purposes. The proper bibeta model, as recently proposed by Mossman and Peng, enjoys several theoretical properties. In addition to having explicit density functions for the latent decision variable and an explicit functional form of the ROC curve, the two-parameter bibeta model also has simple closed-form expressions for the true-positive fraction (TPF), the false-positive fraction (FPF), and the area under the ROC curve (AUC). In this work, we developed a computational algorithm and an R package implementing this model for ROC curve fitting. Our algorithm can deal with any ordinal data (categorical or continuous). To improve the accuracy, efficiency, and reliability of our software, we adopted several strategies in our computational algorithm, including: (1) the LABROC4 categorization to obtain the true maximum likelihood estimates of the ROC parameters; (2) a principled approach to initializing parameters; (3) analytical first-order and second-order derivatives of the likelihood function; (4) an efficient optimization procedure (the L-BFGS algorithm in the R package "nlopt"); and (5) an analytical delta method to estimate the variance of the AUC. We evaluated the performance of our software with intensive simulation studies and compared it with the conventional binormal and the proper binormal-likelihood-ratio models developed at the University of Chicago. Our simulation results indicate that our software is highly accurate, efficient, and reliable.
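
To illustrate the general idea of a two-beta latent-variable ROC model, here is a numerical sketch using the special shapes Beta(a, 1) for diseased and Beta(1, b) for non-diseased cases, which have closed-form survival functions. This is an analogous construction, not the Mossman-Peng parameterization:

```python
def roc_curve(a, b, n=2001):
    """ROC points for diseased ~ Beta(a, 1) and non-diseased ~ Beta(1, b),
    whose survival functions are closed-form: P(X > c) = 1 - c**a and
    (1 - c)**b respectively. Illustrative shapes only."""
    cs = [i / (n - 1) for i in range(n)]
    fpf = [(1.0 - c) ** b for c in cs]   # threshold sweep, FPF decreasing
    tpf = [1.0 - c ** a for c in cs]
    return fpf, tpf

def auc_trapezoid(fpf, tpf):
    """Area under the ROC curve by the trapezoidal rule."""
    return sum((fpf[i - 1] - fpf[i]) * 0.5 * (tpf[i - 1] + tpf[i])
               for i in range(1, len(fpf)))

fpf, tpf = roc_curve(2.0, 2.0)
print(round(auc_trapezoid(fpf, tpf), 3))  # 0.833
```

For these shapes the area has a closed form, AUC = 1 - a·B(a, b+1); with a = b = 2 that is 5/6, and the trapezoidal estimate agrees to three decimals.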

  19. Miniaturized three-dimensional cancer model for drug evaluation.

    PubMed

    Lovitt, Carrie J; Shelper, Todd B; Avery, Vicky M

    2013-09-01

    A more relevant in vitro cell culture model that closely mimics tumor biology and provides better predictive information on anticancer therapies has been the focus of much attention in recent years. We have developed a three-dimensional (3D) human tumor cell culture model that attempts to recreate the in vivo microenvironment and tumor biology in a miniaturized 384-well plate format. This model aims to exploit the potential of 3D cell culture as a screening tool for novel therapeutics in discovery programs. Here we have evaluated a Matrigel™-based induction of 3D tumor formation using standard labware and plate-reading equipment. We have demonstrated that, with an optimized protocol, reproducible proliferation and cell viability data can be obtained across a range of cell lines and reagent batches. A panel of reference drugs was used to validate the suitability of the assays for a high-throughput drug discovery program. Indicators of assay reproducibility, such as the Z'-factor and coefficient of variation, as well as dose-response curves, confirmed the robustness of the assays. Several methods of drug activity determination were examined, including metabolic and imaging-based assays. These data demonstrate this model to be a robust tool for drug discovery, bridging the gap between monolayer cell culture and animal models and providing insights into drug efficacy at an earlier time point, ultimately reducing costs and high attrition rates. PMID:25310845

  20. An Evaluation of Recent Gravity Models wrt. Altimeter Satellite Missions

    NASA Technical Reports Server (NTRS)

    Lemoine, Frank G.; Zelensky, N. P.; Luthcke, S. B.; Beckley, B. D.; Chinn, D. S.; Rowlands, D. D.

    2003-01-01

    With the launch of CHAMP and GRACE, we have entered a new phase in the history of satellite geodesy. For the first time, geopotential models are now available based almost exclusively on satellite-satellite tracking: either with GPS in the case of the CHAMP-based geopotential models, or co-orbital intersatellite ultra-precise ranging in the case of GRACE. Different groups have analyzed these data and produced a series of geopotential models (e.g., EIGEN-1S, EIGEN-2, GGM01S, GGM01C) that incorporate the new data. We will compare the performance of these "newer" geopotential models with the standard models now used for computations (e.g., JGM-3, EGM96, PGS7727, and GRIM5-C1) for TOPEX, JASON, Geosat Follow-On (GFO), and Envisat, using standard metrics such as SLR RMS of fit, altimeter crossovers, and orbit overlaps. Where covariances are available, we can evaluate the predicted geographically correlated orbit error. These predicted results can be compared with the Earth-fixed differences between dynamic and reduced-dynamic orbits to test the predictive accuracy of the covariances, as well as to calibrate the error of the solutions.

  1. Preliminary evaluation of a lake whitefish (Coregonus clupeaformis) bioenergetics model

    USGS Publications Warehouse

    Madenjian, Charles P.; Pothoven, Steven A.; Schneeberger, Philip J.; O'Connor, Daniel V.; Brandt, Stephen B.

    2005-01-01

    We conducted a preliminary evaluation of a lake whitefish (Coregonus clupeaformis) bioenergetics model by applying the model to size-at-age data for lake whitefish from northern Lake Michigan. We then compared estimates of gross growth efficiency (GGE) from our bioenergetics model with previously published estimates of GGE for bloater (C. hoyi) in Lake Michigan and for lake whitefish in Quebec. According to our model, the GGE of Lake Michigan lake whitefish decreased from 0.075 to 0.02 as age increased from 2 to 5 years. In contrast, the GGE of lake whitefish in Quebec inland waters decreased from 0.12 to 0.05 for the same ages. When our swimming-speed submodel was replaced with a submodel that had been used for lake trout (Salvelinus namaycush) in Lake Michigan and an observed predator energy density for Lake Michigan lake whitefish was employed, our model predicted that the GGE of Lake Michigan lake whitefish decreased from 0.12 to 0.04 as age increased from 2 to 5 years.

  2. An evaluation of attention models for use in SLAM

    NASA Astrophysics Data System (ADS)

    Dodge, Samuel; Karam, Lina

    2013-12-01

    In this paper we study the application of visual saliency models to the simultaneous localization and mapping (SLAM) problem. We consider visual SLAM, where the location of the camera and a map of the environment can be generated using images from a single moving camera. In visual SLAM, the interest point detector is of key importance. This detector must be invariant to certain image transformations so that features can be matched across different frames. Recent work has used a model of human visual attention to detect interest points; however, it is unclear which attention model is best for this purpose. To this end, we compare the performance of interest points from four saliency models (Itti, GBVS, RARE, and AWS) with the performance of four traditional interest point detectors (Harris, Shi-Tomasi, SIFT, and FAST). We evaluate these detectors under several different types of image transformation and find that the Itti saliency model, in general, achieves the best performance in terms of keypoint repeatability.
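
The keypoint-repeatability metric used in this record can be sketched in a few lines of NumPy. This is a generic illustration under the usual definition (a keypoint in image A counts as repeated if, after warping into image B by the ground-truth homography, a keypoint detected in B lies within a small pixel radius); the function names and the 2-pixel tolerance are our own choices, not taken from the paper.

```python
import numpy as np

def apply_homography(pts, H):
    """Map Nx2 points through a 3x3 homography in homogeneous coordinates."""
    ones = np.ones((pts.shape[0], 1))
    ph = (H @ np.hstack([pts, ones]).T).T
    return ph[:, :2] / ph[:, 2:3]

def repeatability(kps_a, kps_b, H_ab, eps=2.0):
    """Fraction of keypoints detected in image A that, once warped into
    image B by the ground-truth homography H_ab, have a keypoint detected
    in B within eps pixels."""
    warped = apply_homography(kps_a, H_ab)
    # pairwise distances: every warped A-keypoint against every B-keypoint
    d = np.linalg.norm(warped[:, None, :] - kps_b[None, :, :], axis=2)
    return float(np.mean(d.min(axis=1) <= eps))
```

In a real evaluation, `kps_a` and `kps_b` would come from the detectors under test (Harris, SIFT, a saliency model, etc.) run on the original and transformed images.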

  3. Laboratory evaluation of a walleye (Sander vitreus) bioenergetics model

    USGS Publications Warehouse

    Madenjian, C.P.; Wang, C.; O'Brien, T. P.; Holuszko, M.J.; Ogilvie, L.M.; Stickel, R.G.

    2010-01-01

    Walleye (Sander vitreus) is an important game fish throughout much of North America. We evaluated the performance of the Wisconsin bioenergetics model for walleye in the laboratory. Walleyes were fed rainbow smelt (Osmerus mordax) in four laboratory tanks during a 126-day experiment. Based on a statistical comparison of bioenergetics model predictions of monthly consumption with the observed monthly consumption, we concluded that the bioenergetics model significantly underestimated food consumption by walleye in the laboratory. The degree of underestimation appeared to depend on the feeding rate. For the tank with the lowest feeding rate (1.4% of walleye body weight per day), the agreement between the bioenergetics model prediction of cumulative consumption over the entire 126-day experiment and the observed cumulative consumption was remarkably close, as the prediction was within 0.1% of the observed cumulative consumption. Feeding rates in the other three tanks ranged from 1.6% to 1.7% of walleye body weight per day, and bioenergetics model predictions of cumulative consumption over the 126-day experiment ranged between 11 and 15% less than the observed cumulative consumption. © 2008 Springer Science+Business Media B.V.
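
Wisconsin-style bioenergetics models, as evaluated in the two fish records above, rest on a simple energy mass balance, consumption = respiration + SDA + waste + growth, from which gross growth efficiency (GGE = growth/consumption) follows. The sketch below is schematic only: treating egestion, excretion and SDA as fixed fractions of consumption is a simplification of the full model, and every parameter value shown is hypothetical, not from either paper.

```python
def gross_growth_efficiency(consumption_J, respiration_J,
                            egestion_frac, excretion_frac, sda_frac):
    """Schematic Wisconsin-style energy budget:
       C = R + SDA + F + U + G  =>  G = C - R - SDA - F - U.
       Egestion (F), excretion (U) and specific dynamic action (SDA)
       are modelled here as fixed fractions of consumption, a
       simplification of the full model."""
    F = egestion_frac * consumption_J
    U = excretion_frac * consumption_J
    SDA = sda_frac * consumption_J
    growth = consumption_J - respiration_J - SDA - F - U
    return growth / consumption_J

# Hypothetical illustration: 1000 J consumed, 550 J respired,
# egestion 15%, excretion 8%, SDA 17% of consumption -> GGE = 0.05,
# within the 0.02-0.12 range reported in the records above.
gge = gross_growth_efficiency(1000.0, 550.0, 0.15, 0.08, 0.17)
```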

  4. Advancing Models and Evaluation of Cumulus, Climate and Aerosol Interactions

    SciTech Connect

    Gettelman, Andrew

    2015-10-27

    This project was successfully able to meet its goals, but faced some serious challenges due to personnel issues. Nonetheless, it was largely successful. The Project Objectives were as follows: 1. Develop a unified representation of stratiform and cumulus cloud microphysics for NCAR/DOE global community models. 2. Examine the effects of aerosols on clouds and their impact on precipitation in stratiform and cumulus clouds. We will also explore the effects of clouds and precipitation on aerosols. 3. Test these new formulations using advanced evaluation techniques and observations and release

  5. A Framework and Model for Evaluating Clinical Decision Support Architectures

    PubMed Central

    Wright, Adam; Sittig, Dean F.

    2008-01-01

    In this paper, we develop a four-phase model for evaluating architectures for clinical decision support that focuses on: (1) defining a set of desirable features for a decision support architecture; (2) building a proof-of-concept prototype; (3) demonstrating that the architecture is useful by showing that it can be integrated with existing decision support systems; and (4) comparing its coverage to that of other architectures. We apply this framework to several well-known decision support architectures, including Arden Syntax, GLIF, SEBASTIAN, and SAGE. PMID:18462999

  6. Interfacial Micromechanics in Fibrous Composites: Design, Evaluation, and Models

    PubMed Central

    Lei, Zhenkun; Li, Xuan; Qin, Fuyong; Qiu, Wei

    2014-01-01

    Recent advances in interfacial micromechanics of fiber-reinforced composites using micro-Raman spectroscopy are reviewed. The mechanical problems faced in interface design of fibrous composites are elaborated from three optimization perspectives: material, interface, and computation. Reasons why existing interfacial evaluation methods struggle to guarantee integrity, repeatability, and consistency are discussed. Micro-Raman studies of fiber interface failure behavior and the main interface mechanical problems in fibrous composites are summarized, including interfacial stress transfer, strength criteria for interface debonding and failure, fiber bridging, frictional slip, slip transition, and friction reloading. Theoretical models of the above interface mechanical problems are given. PMID:24977189

  7. Evaluation and cross-validation of Environmental Models

    NASA Astrophysics Data System (ADS)

    Lemaire, Joseph

    Before scientific models (statistical or empirical models based on experimental measurements; physical or mathematical models) can be proposed and selected as ISO Environmental Standards, a commission of professional experts appointed by an established international union or association (e.g. IAGA for Geomagnetism and Aeronomy) should have been able to study, document, evaluate and validate the best alternative models available at a given epoch. Examples will be given indicating that different values for the Earth radius have been employed in different data processing laboratories, institutes or agencies to process, analyse or retrieve series of experimental observations. Furthermore, invariant magnetic coordinates like B and L, commonly used in the study of Earth's radiation belt fluxes and for their mapping, differ from one space mission data center to another, from team to team, and from country to country. Worse, users of empirical models generally fail to use the original magnetic field model that had been employed to compute B and L, and thus to build these environmental models. These are just some flagrant examples of inconsistencies and misuses identified so far; there are probably more of them to be uncovered by careful, independent examination and benchmarking. Consider the meter prototype, the standard unit of length determined on 20 May 1875 during the Diplomatic Conference of the Meter and deposited at the BIPM (Bureau International des Poids et Mesures). By the same token, to coordinate and safeguard progress in the field of Space Weather, similar initiatives need to be undertaken to prevent wild, uncontrolled dissemination of pseudo environmental models and standards. Indeed, unless validation tests have been performed, there is no guarantee, a priori, that all models on the market place have been built consistently with the same system of units, and that they are based on identical definitions of the coordinate systems, etc. Therefore

  8. An evaluation of sex-age-kill (SAK) model performance

    USGS Publications Warehouse

    Millspaugh, Joshua J.; Skalski, John R.; Townsend, Richard L.; Diefenbach, Duane R.; Boyce, Mark S.; Hansen, Lonnie P.; Kammermeyer, Kent

    2009-01-01

    The sex-age-kill (SAK) model is widely used to estimate abundance of harvested large mammals, including white-tailed deer (Odocoileus virginianus). Despite a long history of use, few formal evaluations of SAK performance exist. We investigated how violations of the stable age distribution and stationary population assumption, changes to male or female harvest, stochastic effects (i.e., random fluctuations in recruitment and survival), and sampling efforts influenced SAK estimation. When the simulated population had a stable age distribution and λ > 1, the SAK model underestimated abundance. Conversely, when λ < 1, the SAK overestimated abundance. When changes to male harvest were introduced, SAK estimates were opposite the true population trend. In contrast, SAK estimates were robust to changes in female harvest rates. Stochastic effects caused SAK estimates to fluctuate about their equilibrium abundance, but the effect dampened as the size of the surveyed population increased. When we considered both stochastic effects and sampling error at a deer management unit scale, the resultant abundance estimates were within ±121.9% of the true population level 95% of the time. These combined results demonstrate extreme sensitivity to model violations and scale of analysis. Without changes to model formulation, the SAK model will be biased when λ ≠ 1. Furthermore, any factor that alters the male harvest rate, such as changes to regulations or changes in hunter attitudes, will bias population estimates. Sex-age-kill estimates may be precise at large spatial scales, such as the state level, but less so at the individual management unit level. Alternative models, such as statistical age-at-harvest models, which require similar data types, might allow for more robust, broad-scale demographic assessments.

  9. Evaluation of internal noise methods for Hotelling observer models

    SciTech Connect

    Zhang Yani; Pham, Binh T.; Eckstein, Miguel P.

    2007-08-15

    The inclusion of internal noise in model observers is a common method to allow for quantitative comparisons between human and model observer performance in visual detection tasks. In this article, we studied two different strategies for inserting internal noise into Hotelling model observers. In the first strategy, internal noise was added to the output of individual channels: (a) Independent nonuniform channel noise, (b) independent uniform channel noise. In the second strategy, internal noise was added to the decision variable arising from the combination of channel responses. The standard deviation of the zero mean internal noise was either constant or proportional to: (a) the decision variable's standard deviation due to the external noise, (b) the decision variable's variance caused by the external noise, (c) the decision variable magnitude on a trial to trial basis. We tested three model observers: square window Hotelling observer (HO), channelized Hotelling observer (CHO), and Laguerre-Gauss Hotelling observer (LGHO) using a four alternative forced choice (4AFC) signal known exactly but variable task with a simulated signal embedded in real x-ray coronary angiogram backgrounds. The results showed that the internal noise method that led to the best prediction of human performance differed across the studied model observers. The CHO model best predicted human observer performance with the channel internal noise. The HO and LGHO best predicted human observer performance with the decision variable internal noise. The present results might guide researchers with the choice of methods to include internal noise into Hotelling model observers when evaluating and optimizing medical image quality.
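
The Hotelling observer computations described above can be illustrated with a small NumPy sketch. Under the standard linear-observer formulas, the template is w = Σ⁻¹Δs, and adding zero-mean internal noise to the decision variable (the second strategy in the record) simply inflates the decision-variable variance, lowering detectability d'. This is a generic sketch of that strategy, not the authors' code; the function name and arguments are our own.

```python
import numpy as np

def hotelling_dprime(delta_s, cov, internal_var=0.0):
    """Hotelling observer detectability d', with optional zero-mean
    internal noise of variance `internal_var` added to the decision
    variable. delta_s: mean signal difference in the (channel) feature
    space; cov: external-noise covariance in that space."""
    w = np.linalg.solve(cov, delta_s)   # Hotelling template w = cov^-1 * delta_s
    signal = w @ delta_s                # mean decision-variable separation
    ext_var = w @ cov @ w               # decision-variable variance (external noise)
    return signal / np.sqrt(ext_var + internal_var)
```

With `internal_var=0` this reduces to the ideal-observer value d' = sqrt(Δsᵀ Σ⁻¹ Δs); increasing `internal_var` degrades d' toward human-like performance.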

  10. Large animal models of human cauda equina injury and repair: evaluation of a novel goat model.

    PubMed

    Chen, Wen-Tao; Zhang, Pei-Xun; Xue, Feng; Yin, Xiao-Feng; Qi, Cao-Yuan; Ma, Jun; Chen, Bo; Yu, You-Lai; Deng, Jiu-Xu; Jiang, Bao-Guo

    2015-01-01

    Previous animal studies of cauda equina injury have primarily used rat models, which display significant differences from humans. Furthermore, most studies have focused on electrophysiological examination. To better mimic the outcome after surgical repair of cauda equina injury, a novel animal model was established in the goat. Electrophysiological, histological and magnetic resonance imaging methods were used to evaluate the morphological and functional outcome after cauda equina injury and end-to-end suture. Our results demonstrate successful establishment of the goat experimental model of cauda equina injury. This novel model can provide detailed information on the nerve regenerative process following surgical repair of cauda equina injury. PMID:25788921

  11. Support for career development in youth: program models and evaluations.

    PubMed

    Mekinda, Megan A

    2012-01-01

    This article examines four influential programs-Citizen Schools, After School Matters, career academies, and Job Corps-to demonstrate the diversity of approaches to career programming for youth. It compares the specific program models and draws from the evaluation literature to discuss strengths and weaknesses of each. The article highlights three key lessons derived from the models that have implications for career development initiatives more generally: (1) career programming can and should be designed for youth across a broad age range, (2) career programming does not have to come at the expense of academic training or preparation for college, and (3) program effectiveness depends on intentional design and high-quality implementation. PMID:22826165

  12. Physical model assisted probability of detection in nondestructive evaluation

    SciTech Connect

    Li, M.; Meeker, W. Q.; Thompson, R. B.

    2011-06-23

    Nondestructive evaluation is used widely in many engineering and industrial areas to detect defects or flaws such as cracks inside parts or structures during manufacturing or for products in service. The standard statistical model is a simple empirical linear regression between the (possibly transformed) signal response variables and the (possibly transformed) explanatory variables. For some applications, such a simple empirical approach is inadequate. An important alternative approach is to use knowledge of the physics of the inspection process to provide information about the underlying relationship between the response and explanatory variables. Use of such knowledge can greatly increase the power and accuracy of the statistical analysis and enable, when needed, proper extrapolation outside the range of the observed explanatory variables. This paper describes a set of physical model-assisted analyses to study the capability of two different ultrasonic testing inspection methods to detect synthetic hard alpha inclusion and flat-bottom hole defects in a titanium forging disk.
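
A common statistical core of such POD studies is the "â versus a" signal-response model: an ordinary least-squares regression of log signal on log flaw size with normal scatter, under which POD(a) is the probability that the predicted signal exceeds a decision threshold. The sketch below is a minimal generic version of that model, not the physics-assisted analysis of the paper (there, physical knowledge informs the regression rather than replacing it); all names are our own.

```python
import math
import numpy as np

def fit_pod(a, ahat, threshold):
    """Classic 'a-hat versus a' POD model:
       log(ahat) = b0 + b1*log(a) + e,   e ~ N(0, s^2).
    A flaw of size a is called 'detected' when ahat exceeds the decision
    threshold, so POD(a) = Phi((b0 + b1*log(a) - log(threshold)) / s)."""
    x, y = np.log(a), np.log(ahat)
    b1, b0 = np.polyfit(x, y, 1)            # slope, intercept
    resid = y - (b0 + b1 * x)
    s = resid.std(ddof=2)                   # two fitted parameters
    def pod(size):
        z = (b0 + b1 * math.log(size) - math.log(threshold)) / s
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return pod
```

The returned `pod` function is monotone in flaw size whenever the fitted slope is positive, which is the physically expected case.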

  13. Statistical evaluation and modeling of Internet dial-up traffic

    NASA Astrophysics Data System (ADS)

    Faerber, Johannes; Bodamer, Stefan; Charzinski, Joachim

    1999-08-01

    In times when Internet access is a popular consumer application even for 'normal' residential users, some telephone exchanges are congested by customers using modem or ISDN dial-up connections to their Internet Service Providers. In order to estimate the number of additional lines and the switching capacity required in an exchange or a trunk group, Internet access traffic must be characterized in terms of holding time and call interarrival time distributions. In this paper, we analyze log files tracing the usage of the central ISDN access line pool at the University of Stuttgart over a period of six months. Mathematical distributions are fitted to the measured data, and the fit quality is evaluated with respect to the blocking probability caused by the synthetic traffic in a multiple-server loss system. We show how the synthetic traffic model scales with the number of subscribers and how the model could be applied to compute economy-of-scale results for Internet access trunks or access servers.
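
The blocking probability of the multiple-server loss system mentioned above is given by the Erlang B formula, which in practice is evaluated with a numerically stable recursion rather than from factorials directly. A minimal sketch (the modem-pool numbers in the comment are hypothetical, not from the paper's measurements):

```python
def erlang_b(offered_load, n_lines):
    """Blocking probability of an M/G/n/n loss system (Erlang B),
    via the stable recursion B(0) = 1; B(k) = A*B(k-1) / (k + A*B(k-1)),
    where A is the offered load in Erlang."""
    b = 1.0
    for k in range(1, n_lines + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

# Hypothetical sizing example: mean holding time 20 min, 300 calls/hour
# => offered load A = 300 * (20/60) = 100 Erlang; sweep n_lines until
# erlang_b(100.0, n) drops below the target blocking probability.
```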

  14. Reliability evaluation of a photovoltaic module using accelerated degradation model

    NASA Astrophysics Data System (ADS)

    Laronde, Rémi; Charki, Abdérafi; Bigaud, David; Excoffier, Philippe

    2011-09-01

    Many photovoltaic modules are installed all around the world, yet the reliability of this product is still not well known. The electrical power decreases over time, mainly due to corrosion, encapsulation discoloration and solder bond failure. A photovoltaic module is considered to have failed when the electrical power degradation reaches a threshold value. Accelerated life tests are commonly used to estimate the reliability of photovoltaic modules; however, such tests yield few data on failure of this product and are expensive to carry out. As a solution, an accelerated degradation test can be carried out using only one stress if the parameters of the acceleration model are known. A Wiener process associated with the accelerated failure time model makes it possible to carry out many simulations and to determine the failure time distribution when the threshold value is reached. In this way, the failure time distribution and the lifetime (mean and uncertainty) can be evaluated.
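
The Wiener degradation model described above can be illustrated with a short Monte-Carlo sketch: simulate drifting Brownian degradation paths and record the first time each one crosses the failure threshold. For positive drift μ the first-passage time is inverse-Gaussian distributed with mean threshold/μ, which the simulation should approximately reproduce. All parameter values below are illustrative, not the paper's.

```python
import numpy as np

def simulate_lifetimes(mu, sigma, threshold, dt=0.05, t_max=25.0,
                       n_paths=500, seed=1):
    """Monte-Carlo first-passage times of a Wiener degradation process
    D(t) = mu*t + sigma*B(t); a unit 'fails' when D(t) first reaches
    `threshold`. For mu > 0 the first-passage time is inverse-Gaussian
    with mean threshold/mu."""
    rng = np.random.default_rng(seed)
    n_steps = int(t_max / dt)
    # one row of Gaussian increments per simulated module
    inc = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
    paths = np.cumsum(inc, axis=1)
    crossed = paths >= threshold
    first = np.argmax(crossed, axis=1)   # index of first crossing (0 if never)
    ok = crossed.any(axis=1)             # keep only paths that actually failed
    return (first[ok] + 1) * dt
```

Discrete monitoring slightly overestimates the true continuous first-passage time, so the simulated mean sits a little above threshold/μ for coarse time steps.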

  15. Physical Model Assisted Probability of Detection in Nondestructive Evaluation

    NASA Astrophysics Data System (ADS)

    Li, M.; Meeker, W. Q.; Thompson, R. B.

    2011-06-01

    Nondestructive evaluation is used widely in many engineering and industrial areas to detect defects or flaws such as cracks inside parts or structures during manufacturing or for products in service. The standard statistical model is a simple empirical linear regression between the (possibly transformed) signal response variables and the (possibly transformed) explanatory variables. For some applications, such a simple empirical approach is inadequate. An important alternative approach is to use knowledge of the physics of the inspection process to provide information about the underlying relationship between the response and explanatory variables. Use of such knowledge can greatly increase the power and accuracy of the statistical analysis and enable, when needed, proper extrapolation outside the range of the observed explanatory variables. This paper describes a set of physical model-assisted analyses to study the capability of two different ultrasonic testing inspection methods to detect synthetic hard alpha inclusion and flat-bottom hole defects in a titanium forging disk.

  16. Central California coastal air-quality model validation study: Data analysis and model evaluation

    SciTech Connect

    Dabberdt, W.F.; Johnson, W.B.; Brodzinsky, R.; Ruff, R.E.

    1984-08-01

    The objectives of the study were: to obtain a comprehensive experimental data base of overwater and inland dispersion along the central California coast; to evaluate air-quality models presently being used by MMS for determining air-quality impacts from offshore emission sources; to evaluate various schemes for determining atmospheric stability and methods of determining dispersion parameters (sigma-y and sigma-z) overwater; and to provide data needed for an overwater dispersion model presently under development by MMS.

  17. A model system for the evaluation of radioimmunoimaging of tumors

    SciTech Connect

    Koizumi, M.; Endo, K.; Sakahara, H.; Nakashima, T.; Kunimatsu, M.; Ohta, H.; Konishi, J.; Torizuka, K.

    1985-05-01

    The authors have developed a simple model system that can be used to evaluate methods of radioimmunoimaging of tumors, using human chorionic gonadotropin (hCG) as a model antigen and a monoclonal antibody against the hCG β-subunit as a model antibody. HCG was coated onto spherical polystyrene beads a quarter inch in diameter, and the coated beads were washed extensively with phosphate-buffered saline and glycine acid buffer to remove easily dissociable antigen. The hCG-coated beads were implanted into the subcutaneous tissue on the backs of mice. At 24 hr after transplantation, when serum hCG was not detectable by conventional RIA, radiolabeled antibodies were injected and their biodistribution was monitored. The %ID/g for the hCG-coated beads increased to a maximum at 48 hr after injection of radioiodinated antibody, whereas the %ID/g for most organs decreased with time. As a nonspecific antigen control, beads coated with bovine serum albumin were transplanted; their uptake was only about one-fiftieth that of the hCG-coated beads. The %ID/g of radioiodinated monoclonal antibody against human thyroglobulin (a nonspecific antibody) for hCG-coated beads was also negligible. Thus, the localization index (%ID of specific antibody / %ID of nonspecific antibody) reached 15.0 at 24 hr, 35.5 at 48 hr and 57.8 at 96 hr after injection. The biodistribution of In-111 labeled specific monoclonal antibody, prepared through chelation with DTPA, demonstrated results similar to those of the radioiodinated antibodies. This mouse model system, which does not involve the use of tumors, yielded a high localization index and good reproducibility and could be used to evaluate different methods for radiolabeling monoclonal antibodies.

  18. Fast Prediction and Evaluation of Gravitational Waveforms Using Surrogate Models

    NASA Astrophysics Data System (ADS)

    Field, Scott E.; Galley, Chad R.; Hesthaven, Jan S.; Kaye, Jason; Tiglio, Manuel

    2014-07-01

    We propose a solution to the problem of quickly and accurately predicting gravitational waveforms within any given physical model. The method is relevant for both real-time applications and more traditional scenarios where the generation of waveforms using standard methods can be prohibitively expensive. Our approach is based on three offline steps resulting in an accurate reduced order model in both parameter and physical dimensions that can be used as a surrogate for the true or fiducial waveform family. First, a set of m parameter values is determined using a greedy algorithm from which a reduced basis representation is constructed. Second, these m parameters induce the selection of m time values for interpolating a waveform time series using an empirical interpolant that is built for the fiducial waveform family. Third, a fit in the parameter dimension is performed for the waveform's value at each of these m times. The cost of predicting L waveform time samples for a generic parameter choice is of order O(mL + m c_fit) online operations, where c_fit denotes the fitting function operation count and, typically, m ≪ L. The result is a compact, computationally efficient, and accurate surrogate model that retains the original physics of the fiducial waveform family while also being fast to evaluate. We generate accurate surrogate models for effective-one-body waveforms of nonspinning binary black hole coalescences with durations as long as 10^5 M, mass ratios from 1 to 10, and for multiple spherical harmonic modes. We find that these surrogates are more than 3 orders of magnitude faster to evaluate as compared to the cost of generating effective-one-body waveforms in standard ways. Surrogate model building for other waveform families and models follows the same steps and has the same low computational online scaling cost. For expensive numerical simulations of binary black hole coalescences, we thus anticipate extremely large speedups in generating new waveforms with a
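
The first offline step above, greedy selection of a reduced basis from a training set of waveforms, can be sketched generically in NumPy: repeatedly add the normalized residual of whichever training waveform the current basis represents worst. This is a toy version run on a synthetic rank-2 "waveform family" in the test, not the authors' effective-one-body pipeline; names and tolerances are our own.

```python
import numpy as np

def greedy_basis(training_set, tol=1e-10):
    """Greedy reduced-basis construction. training_set is an
    (n_waveforms, n_samples) array whose rows are waveforms; returns an
    orthonormal basis (rows) whose span reproduces every training
    waveform to within `tol` in the Euclidean norm."""
    basis = []
    residuals = training_set.astype(float).copy()
    errors = np.linalg.norm(residuals, axis=1)
    while errors.max() > tol and len(basis) < len(training_set):
        k = int(np.argmax(errors))                  # worst-approximated waveform
        e = residuals[k] / np.linalg.norm(residuals[k])
        basis.append(e)
        residuals -= np.outer(residuals @ e, e)     # project out new basis vector
        errors = np.linalg.norm(residuals, axis=1)
    return np.array(basis)
```

Because each new basis vector is built from residuals orthogonal to all previous ones, the returned rows are orthonormal, and projection onto them is a simple matrix product.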

  19. visCOS: An R-package to evaluate model performance of hydrological models

    NASA Astrophysics Data System (ADS)

    Klotz, Daniel; Herrnegger, Mathew; Wesemann, Johannes; Schulz, Karsten

    2016-04-01

    The evaluation of model performance is a central part of (hydrological) modelling. Much attention has been given to the development of evaluation criteria and diagnostic frameworks (Klemeš, 1986; Gupta et al., 2008; among many others). Nevertheless, many applications exist for which objective functions do not yet provide satisfying summaries. Thus, the necessity to visualize results arises in order to explore a wider range of model capacities, be it strengths or deficiencies. Visualizations are usually devised for specific projects and these efforts are often not distributed to a broader community (e.g. via open source software packages). Hence, the opportunity to explicitly discuss a state-of-the-art presentation technique is often missed. We therefore present a comprehensive R-package for evaluating model performance by visualizing and exploring different aspects of hydrological time-series. The presented package comprises a set of useful plots and visualization methods, which complement existing packages, such as hydroGOF (Zambrano-Bigiarini et al., 2012). It is derived from practical applications of the hydrological models COSERO and COSEROreg (Kling et al., 2014). visCOS, providing an interface in R, represents an easy-to-use software package for visualizing and assessing model performance and can be implemented in the process of model calibration or model development. The package provides functions to load hydrological data into R, clean the data, process, visualize, explore and finally save the results in a consistent way. Together with an interactive zoom function for the time series, an online calculation of the objective functions for variable time-windows is included. Common hydrological objective functions, such as the Nash-Sutcliffe Efficiency and the Kling-Gupta Efficiency, can also be evaluated and visualized in different ways for defined sub-periods like hydrological years or seasonal sections. Many hydrologists use long-term water-balances as a
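
The two objective functions named above, the Nash-Sutcliffe efficiency (NSE) and the Kling-Gupta efficiency (KGE, 2009 formulation), are compact enough to state directly. The sketch below uses Python purely for illustration; visCOS itself is an R package.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 minus the ratio of the model's squared
    error to the variance of the observations. 1 is perfect; 0 means the
    model is no better than predicting the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim):
    """Kling-Gupta efficiency (2009 form):
       KGE = 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2)
    with r the correlation, alpha = sigma_sim/sigma_obs the variability
    ratio, and beta = mean_sim/mean_obs the bias ratio."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]
    alpha = sim.std() / obs.std()
    beta = sim.mean() / obs.mean()
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)
```

Evaluating these over sub-periods (hydrological years, seasons) is then just a matter of slicing the time series before calling the functions.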

  20. Design, modeling, simulation and evaluation of a distributed energy system

    NASA Astrophysics Data System (ADS)

    Cultura, Ambrosio B., II

    This dissertation presents the design, modeling, simulation and evaluation of distributed energy resources (DER) consisting of photovoltaics (PV), wind turbines, batteries, a PEM fuel cell and supercapacitors. The distributed energy resources installed at UMass Lowell consist of the following: 2.5 kW PV, 44 kWhr lead acid batteries and 1500 W, 500 W & 300 W wind turbines, which were installed before year 2000. Recently added to these are the following: a 10.56 kW PV array, a 2.4 kW wind turbine, 29 kWhr lead acid batteries, a 1.2 kW PEM fuel cell and four 140 F supercapacitors. Each newly added energy resource was designed, modeled, simulated and evaluated before its integration into the existing PV/wind grid-connected system. The mathematical and Simulink models of each system were derived and validated by comparing the simulated and experimental results. The simulated results of energy generated from the 10.56 kW PV system are in good agreement with the experimental results. A detailed electrical model of a 2.4 kW wind turbine system equipped with a permanent magnet generator, diode rectifier, boost converter and inverter is presented. The analysis of the results demonstrates the effectiveness of the constructed Simulink model, which can be used to predict the performance of the wind turbine. It was observed that a PEM fuel cell has a very fast response to load changes. Moreover, the model has validated the actual operation of the PEM fuel cell, showing that the simulated results in Matlab Simulink are consistent with the experimental results. The equivalent mathematical equation, derived from an electrical model of the supercapacitor, is used to simulate its voltage response. The model is fully capable of simulating the supercapacitor's voltage behavior and can predict the charge and discharge times of the voltages on the supercapacitor. A bi-directional dc-dc converter was designed in order to connect the 48 V battery bank storage to the 24 V battery bank storage. This connection was
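
The supercapacitor voltage-response modelling mentioned above reduces, in its simplest form, to integrating a single RC branch. The sketch below is deliberately minimal: real supercapacitor equivalent circuits add a series resistance and several parallel branches, and the component values used in the example are illustrative only (the 140 F figure echoes the rating in the abstract, but the 10 Ω discharge resistance is hypothetical).

```python
import math

def rc_discharge(v0, r_ohm, c_farad, t_end, dt=1.0):
    """Forward-Euler integration of a single-branch RC model discharging
    through resistance R: dV/dt = -V / (R*C). Returns a list of (t, V)
    samples; the analytic solution is V(t) = V0 * exp(-t / (R*C))."""
    v, t = v0, 0.0
    trace = [(0.0, v0)]
    while t < t_end - 1e-12:
        v += dt * (-v / (r_ohm * c_farad))
        t += dt
        trace.append((t, v))
    return trace
```

The charge case is the mirror image (drive toward the source voltage through the same RC time constant), so charge and discharge times both follow from τ = RC.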

  1. Evaluation of data driven models for river suspended sediment concentration modeling

    NASA Astrophysics Data System (ADS)

    Zounemat-Kermani, Mohammad; Kişi, Özgür; Adamowski, Jan; Ramezani-Charmahineh, Abdollah

    2016-04-01

    Using eight-year data series from hydrometric stations located in Arkansas, Delaware and Idaho (USA), the ability of artificial neural network (ANN) and support vector regression (SVR) models to forecast/estimate daily suspended sediment concentrations ([SS]d) was evaluated and compared to that of traditional multiple linear regression (MLR) and sediment rating curve (SRC) models. Three different ANN model algorithms were tested [gradient descent, conjugate gradient and Broyden-Fletcher-Goldfarb-Shanno (BFGS)], along with four different SVR model kernels [linear, polynomial, sigmoid and Radial Basis Function (RBF)]. The reliability of the applied models was evaluated based on the statistical performance criteria of root mean square error (RMSE), Pearson's correlation coefficient (PCC) and Nash-Sutcliffe model efficiency coefficient (NSE). Based on RMSE values, and averaged across the three hydrometric stations, the ANN and SVR models showed, respectively, 23% and 18% improvements in forecasting and 18% and 15% improvements in estimation over traditional models. The use of the BFGS training algorithm for ANN, and the RBF kernel function for SVR models are recommended as useful options for simulating hydrological phenomena.
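
One of the traditional baselines above, the sediment rating curve (SRC), is a power law C = a·Q^b fitted as a straight line in log-log space. A minimal sketch (variable and function names are our own):

```python
import numpy as np

def fit_rating_curve(discharge, concentration):
    """Sediment rating curve C = a * Q^b, fitted by ordinary least
    squares on the log-transformed relation log C = log a + b log Q."""
    b, log_a = np.polyfit(np.log(discharge), np.log(concentration), 1)
    return np.exp(log_a), b

def predict_concentration(a, b, discharge):
    """Back-transform the log-space fit to concentration units."""
    return a * np.asarray(discharge, float) ** b
```

The ANN and SVR models in the record are compared against exactly this kind of baseline using RMSE, correlation, and Nash-Sutcliffe scores.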

  2. Error apportionment for atmospheric chemistry-transport models - a new approach to model evaluation

    NASA Astrophysics Data System (ADS)

    Solazzo, Efisio; Galmarini, Stefano

    2016-05-01

    In this study, methods are proposed to diagnose the causes of errors in air quality (AQ) modelling systems. We investigate the deviation between modelled and observed time series of surface ozone through a revised formulation for breaking down the mean square error (MSE) into bias, variance and the minimum achievable MSE (mMSE). The bias measures the accuracy and implies the existence of systematic errors and poor representation of data complexity, the variance measures the precision and provides an estimate of the variability of the modelling results in relation to the observed data, and the mMSE reflects unsystematic errors and provides a measure of the associativity between the modelled and the observed fields through the correlation coefficient. Each of the error components is analysed independently and apportioned to resolved processes based on the corresponding timescale (long scale, synoptic, diurnal, and intra-day) and as a function of model complexity. The apportionment of the error is applied to the AQMEII (Air Quality Model Evaluation International Initiative) group of models, which embrace the majority of regional AQ modelling systems currently used in Europe and North America. The proposed technique has proven to be a compact estimator of the operational metrics commonly used for model evaluation (bias, variance, and correlation coefficient), and has the further benefit of apportioning the error to the originating timescale, thus allowing for a clearer diagnosis of the processes that caused the error.
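
The breakdown described above is closely related to the standard identity MSE = bias² + (σₘ − σₒ)² + 2σₘσₒ(1 − r), in which the last, correlation-driven term plays the role of the minimum achievable MSE. The sketch below implements that textbook identity using population moments, so the three components sum exactly to the MSE; the paper's revised formulation and its timescale apportionment go beyond this.

```python
import numpy as np

def mse_decomposition(mod, obs):
    """Split the mean square error between a modelled and an observed
    series into three non-negative parts:
       MSE = (mean_m - mean_o)^2          # squared bias (accuracy)
           + (sigma_m - sigma_o)^2        # variance mismatch (precision)
           + 2*sigma_m*sigma_o*(1 - r)    # correlation term ('mMSE' role)
    Population moments (ddof=0) make the identity exact."""
    mod, obs = np.asarray(mod, float), np.asarray(obs, float)
    bias2 = (mod.mean() - obs.mean()) ** 2
    sm, so = mod.std(), obs.std()
    r = np.corrcoef(mod, obs)[0, 1]
    return bias2, (sm - so) ** 2, 2.0 * sm * so * (1.0 - r)
```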

  3. An 8-Stage Model for Evaluating the Tennis Serve

    PubMed Central

    Kovacs, Mark; Ellenbecker, Todd

    2011-01-01

Background: The tennis serve is a complex stroke characterized by a series of segmental rotations involving the entire kinetic chain. Many overhead athletes use a basic 6-stage throwing model; however, the tennis serve differs in several respects. Evidence Acquisition: To support the present 8-stage descriptive model, data were gathered from PubMed and SPORTDiscus databases using keywords tennis and serve for publications between 1980 and 2010. Results: An 8-stage model of analysis for the tennis serve that includes 3 distinct phases—preparation, acceleration, and follow-through—provides a more tennis-specific analysis than that previously presented in the clinical tennis literature. When a serve is evaluated, the total body perspective is just as important as the individual segments alone. Conclusion: The 8-stage model provides a more in-depth analysis that should be utilized with all tennis players to help better understand areas of weakness, potential areas of injury, as well as components that can be improved for greater performance. PMID:23016050

  4. Application of Meta-Heuristic Models for Local Scour Evaluation

    NASA Astrophysics Data System (ADS)

    Mahjoobi, Javad; Sabzianpoor, Ali; Jabbari, Ebrahim

    2010-11-01

One of the most important causes of bridge failure is scour around bridge piers, so evaluating the scour depth around a pier is critical in bridge design. Although several empirical formulas exist to predict scour depth, their results are often inaccurate. Consequently, soft computing tools such as artificial neural networks (ANNs), fuzzy inference systems (FIS) and adaptive network-based fuzzy inference systems (ANFIS) are increasingly used for this purpose. Model trees are one of the prevailing data mining tools. In the present study, model trees (M5' algorithm) and regression trees (CART algorithm) were employed to predict scour depth, using experimental scour measurements under clear-water conditions. Both models were applied in two cases, first with the original (dimensional) data and second with non-dimensional data, and the results were compared with those of six empirical formulas. The results showed that model trees and regression trees predict scour depth more accurately than the empirical formulas.
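The six empirical formulas compared in the study are not listed in the abstract. As an illustration of the class of formulas meant, the widely used CSU equation from FHWA's HEC-18 estimates equilibrium pier scour depth from approach flow depth, pier width, and Froude number (the K correction factors for pier shape, attack angle, and bed condition are set to 1 here for simplicity):

```python
import math

def hec18_scour_depth(y1, a, v, k1=1.0, k2=1.0, k3=1.0, g=9.81):
    """CSU/HEC-18 pier scour estimate (illustrative, SI units):
      ys = 2.0 * y1 * K1*K2*K3 * (a/y1)^0.65 * Fr^0.43
    y1: approach flow depth [m], a: pier width [m], v: approach velocity [m/s].
    """
    fr = v / math.sqrt(g * y1)          # approach Froude number
    return 2.0 * y1 * k1 * k2 * k3 * (a / y1) ** 0.65 * fr ** 0.43
```

Formulas of this shape are fitted to laboratory data, which is one reason they generalize poorly to field conditions and why the data-driven trees in this study can outperform them.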

  5. Skill metrics for evaluation and comparison of sea ice models

    NASA Astrophysics Data System (ADS)

    Dukhovskoy, Dmitry S.; Ubnoske, Jonathan; Blanchard-Wrigglesworth, Edward; Hiester, Hannah R.; Proshutinsky, Andrey

    2015-09-01

    Five quantitative methodologies (metrics) that may be used to assess the skill of sea ice models against a control field are analyzed. The methodologies are Absolute Deviation, Root-Mean-Square Deviation, Mean Displacement, Hausdorff Distance, and Modified Hausdorff Distance. The methodologies are employed to quantify similarity between spatial distribution of the simulated and control scalar fields providing measures of model performance. To analyze their response to dissimilarities in two-dimensional fields (contours), the metrics undergo sensitivity tests (scale, rotation, translation, and noise). Furthermore, in order to assess their ability to quantify resemblance of three-dimensional fields, the metrics are subjected to sensitivity tests where tested fields have continuous random spatial patterns inside the contours. The Modified Hausdorff Distance approach demonstrates the best response to tested differences, with the other methods limited by weak responses to scale and translation. Both Hausdorff Distance and Modified Hausdorff Distance metrics are robust to noise, as opposed to the other methods. The metrics are then employed in realistic cases that validate sea ice concentration fields from numerical models and sea ice mean outlook against control data and observations. The Modified Hausdorff Distance method again exhibits high skill in quantifying similarity between both two-dimensional (ice contour) and three-dimensional (ice concentration) sea ice fields. The study demonstrates that the Modified Hausdorff Distance is a mathematically tractable and efficient method for model skill assessment and comparison providing effective and objective evaluation of both two-dimensional and three-dimensional sea ice characteristics across data sets.
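The Modified Hausdorff Distance referred to here is presumably the Dubuisson-Jain (1994) formulation, which replaces the max over points by a mean in each direction; a compact numpy sketch:

```python
import numpy as np

def modified_hausdorff(a_pts, b_pts):
    """Modified Hausdorff Distance between two point sets (rows = points):
      MHD = max( mean_a min_b ||a-b|| , mean_b min_a ||a-b|| )
    """
    a = np.atleast_2d(np.asarray(a_pts, float))
    b = np.atleast_2d(np.asarray(b_pts, float))
    # pairwise Euclidean distance matrix, shape (len(a), len(b))
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return float(max(d.min(axis=1).mean(), d.min(axis=0).mean()))
```

Averaging instead of taking the maximum is what makes the measure robust to noise, consistent with the sensitivity results reported in the study.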

  6. Addressing Conceptual Model Uncertainty in the Evaluation of Model Prediction Errors

    NASA Astrophysics Data System (ADS)

    Carrera, J.; Pool, M.

    2014-12-01

Model predictions are uncertain because of errors in model parameters, future forcing terms, and model concepts. The latter remain the largest and most difficult to assess source of uncertainty in long term model predictions. We first review existing methods to evaluate conceptual model uncertainty. We argue that they are highly sensitive to the ingenuity of the modeler, in the sense that they rely on the modeler's ability to propose alternative model concepts. Worse, we find that the standard practice of stochastic methods leads to poor, potentially biased and often too optimistic, estimation of actual model errors. This is bad news because stochastic methods are purported to properly represent uncertainty. We contend that the problem does not lie in the stochastic approach itself, but in the way it is applied. Specifically, stochastic inversion methodologies, which demand quantitative information, tend to ignore geological understanding, which is conceptually rich. We illustrate some of these problems with the application to the Mar del Plata aquifer, where extensive data are available for nearly a century. Geologically based models, where spatial variability is handled through zonation, yield calibration fits similar to geostatistically based models, but much better predictions. In fact, the appearance of the stochastic T fields is similar to the geologically based models only in areas with high density of data. We take this finding to illustrate the ability of stochastic models to accommodate many data, but also, ironically, their inability to address conceptual model uncertainty. In fact, stochastic model realizations tend to be too close to the "most likely" one (i.e., they do not really realize the full conceptual uncertainty). The second part of the presentation is devoted to argue that acknowledging model uncertainty may lead to qualitatively different decisions than just working with "most likely" model predictions. Therefore, efforts should concentrate on

  7. Sustainable Deforestation Evaluation Model and System Dynamics Analysis

    PubMed Central

    Feng, Huirong; Lim, C. W.; Chen, Liqun; Zhou, Xinnian; Zhou, Chengjun; Lin, Yi

    2014-01-01

    The current study used the improved fuzzy analytic hierarchy process to construct a sustainable deforestation development evaluation system and evaluation model, which has refined a diversified system to evaluate the theory of sustainable deforestation development. Leveraging the visual image of the system dynamics causal and power flow diagram, we illustrated here that sustainable forestry development is a complex system that encompasses the interaction and dynamic development of ecology, economy, and society and has reflected the time dynamic effect of sustainable forestry development from the three combined effects. We compared experimental programs to prove the direct and indirect impacts of the ecological, economic, and social effects of the corresponding deforest techniques and fully reflected the importance of developing scientific and rational ecological harvesting and transportation technologies. Experimental and theoretical results illustrated that light cableway skidding is an ecoskidding method that is beneficial for the sustainable development of resources, the environment, the economy, and society and forecasted the broad potential applications of light cableway skidding in timber production technology. Furthermore, we discussed the sustainable development countermeasures of forest ecosystems from the aspects of causality, interaction, and harmony. PMID:25254225

  9. Evaluation Digital Elevation Model Generated by Synthetic Aperture Radar Data

    NASA Astrophysics Data System (ADS)

    Makineci, H. B.; Karabörk, H.

    2016-06-01

A digital elevation model (DEM), representing the physical topography of the earth's surface, is a three-dimensional digital model derived from surface elevations using an appropriate interpolation method. DEMs are used in many areas, such as management of natural resources, engineering and infrastructure projects, disaster and risk analysis, archaeology, security, aviation, forestry, energy, topographic mapping, landslide and flood analysis, and Geographic Information Systems (GIS). DEMs, a fundamental component of cartography, can be produced by many methods: in general, by terrestrial surveying or by digitizing existing maps on a digital platform. Today, with advances in technology, DEM data are also generated by processing stereo optical satellite images, radar images (radargrammetry, interferometry) and lidar data using remote sensing and photogrammetric techniques. Radar, one of the fundamental components of remote sensing, is now highly advanced and is used increasingly in many fields, among which determining the shape of the topography and creating digital elevation models are leading applications. This work aims to evaluate the quality of a DEM derived from a Sentinel-1A SAR image, acquired by the European Space Agency (ESA) in Interferometric Wide Swath mode in C band, against DTED-2 (Digital Terrain Elevation Data). Precision is assessed with an RMS statistic. Results show that the variance of elevation differences decreases markedly from mountainous areas to flat areas.

  10. Evaluating wind extremes in CMIP5 climate models

    NASA Astrophysics Data System (ADS)

    Kumar, Devashish; Mishra, Vimal; Ganguly, Auroop R.

    2015-07-01

Wind extremes have consequences for renewable energy sectors, critical infrastructures, coastal ecosystems, and the insurance industry. Considerable debates remain regarding the impacts of climate change on wind extremes. While climate models have occasionally shown increases in regional wind extremes, a decline in the magnitude of mean and extreme near-surface wind speeds has been recently reported over most regions of the Northern Hemisphere using observed data. Previous studies of wind extremes under climate change have focused on selected regions and employed outputs from the regional climate models (RCMs). However, RCMs ultimately rely on the outputs of global circulation models (GCMs), and the value-addition from the former over the latter has been questioned. Regional model runs rarely employ the full suite of GCM ensembles, and hence may not be able to encapsulate the most likely projections or their variability. Here we evaluate the performance of the latest generation of GCMs, the Coupled Model Intercomparison Project phase 5 (CMIP5), in simulating extreme winds. We find that the multimodel ensemble (MME) mean captures the spatial variability of annual maximum wind speeds over most regions except over the mountainous terrains. However, the historical temporal trends in annual maximum wind speeds for the reanalysis data, ERA-Interim, are not well represented in the GCMs. The historical trends in extreme winds from GCMs are statistically not significant over most regions. The MME mean also reproduces the spatial patterns of extreme winds for 25-100 year return periods. The projected extreme winds from GCMs exhibit statistically less significant trends compared to the historical reference period.
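The abstract does not say how the 25-100 year return levels were estimated. One common approach for annual maxima is a Gumbel fit (a GEV with zero shape parameter) by the method of moments; a sketch under that assumption:

```python
import math

def gumbel_return_level(annual_maxima, T):
    """Return level for return period T (years) from annual maxima,
    using a Gumbel distribution fitted by the method of moments."""
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    var = sum((x - mean) ** 2 for x in annual_maxima) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi      # scale parameter
    mu = mean - 0.5772 * beta                  # location (Euler-Mascheroni const.)
    # invert the Gumbel CDF at non-exceedance probability 1 - 1/T
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))
```

A full analysis would fit the three-parameter GEV by maximum likelihood or L-moments; the Gumbel version above is only the simplest member of that family.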

  11. Evaluation of atmospheric chemical models using aircraft data (Invited)

    NASA Astrophysics Data System (ADS)

    Freeman, S.; Grossberg, N.; Pierce, R.; Lee, P.; Ngan, F.; Yates, E. L.; Iraci, L. T.; Lefer, B. L.

    2013-12-01

Air quality prediction is an important and growing field, as the adverse health effects of ozone (O3) are becoming more important to the general public. Two atmospheric chemical models, the Realtime Air Quality Modeling System (RAQMS) and the Community Multiscale Air Quality modeling system (CMAQ) are evaluated during NASA's Student Airborne Research Project (SARP) and the NASA Alpha Jet Atmospheric eXperiment (AJAX) flights. CO, O3, and NOx data simulated by the models are interpolated using an inverse distance weighting in space and a linear interpolation in time to both the SARP and AJAX flight tracks and compared to the CO, O3, and NOx observations at those points. Results for the seven flights included show moderate error in O3 during the flights, with RAQMS having a high O3 bias (+15.7 ppbv average) above 6 km and a low O3 bias (-17.5 ppbv average) below 4 km. CMAQ was found to have a low O3 bias (-13.0 ppbv average) everywhere. Additionally, little bias (-5.36% RAQMS, -11.8% CMAQ) in the CO data was observed with the exception of a wildfire smoke plume that was flown through on one SARP flight, as CMAQ lacks any wildfire sources and RAQMS resolution is too coarse to resolve narrow plumes. This indicates improvement in emissions inventories compared to previous studies. CMAQ additionally incorrectly predicted a NOx plume due to incorrectly vertically advecting it from the surface, which caused NOx titration to occur, limiting the production of ozone. This study shows that these models perform reasonably well in most conditions; however, more work must be done to assimilate wildfires, improve emissions inventories, and improve meteorological forecasts for the models.
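The interpolation scheme described, inverse distance weighting in space followed by linear interpolation in time, can be sketched as follows; the function names and the handling of a point that coincides with a grid node are our assumptions:

```python
import numpy as np

def idw_in_space(grid_xy, grid_vals, point_xy, power=2.0):
    """Inverse-distance-weighted estimate of a model field at one point.
    grid_xy: (n, 2) grid coordinates; grid_vals: (n,) field values."""
    d = np.linalg.norm(np.asarray(grid_xy, float) - np.asarray(point_xy, float),
                       axis=1)
    if np.any(d == 0.0):                 # point sits exactly on a grid node
        return float(np.asarray(grid_vals, float)[np.argmin(d)])
    w = 1.0 / d ** power
    return float(np.sum(w * np.asarray(grid_vals, float)) / np.sum(w))

def linear_in_time(t0, v0, t1, v1, t):
    """Linear interpolation between two model output times, t0 <= t <= t1."""
    f = (t - t0) / (t1 - t0)
    return (1.0 - f) * v0 + f * v1
```

In practice the spatial step would be applied at both bracketing output times, and the temporal step would blend those two values at each aircraft observation time.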

  12. Evaluation of field development plans using 3-D reservoir modelling

    SciTech Connect

    Seifert, D.; Lewis, J.J.M.; Newbery, J.D.H.

    1997-08-01

Three-dimensional reservoir modelling has become an accepted tool in reservoir description and is used for various purposes, such as reservoir performance prediction or integration and visualisation of data. In this case study, a small Northern North Sea turbiditic reservoir was to be developed with a line drive strategy utilising a series of horizontal producer and injector pairs, oriented north-south. This development plan was to be evaluated and the expected outcome of the wells was to be assessed and risked. Detailed analyses of core, well log and analogue data have led to the development of two geological "end member" scenarios. Both scenarios have been stochastically modelled using the Sequential Indicator Simulation method. The resulting equiprobable realisations have been subjected to detailed statistical well placement optimisation techniques. Based upon bivariate statistical evaluation of more than 1000 numerical well trajectories for each of the two scenarios, it was found that a well's inclination and length had a great impact on its success, whereas the azimuth was found to have only a minor impact. After integration of the above results, the actual well paths were redesigned to meet external drilling constraints, resulting in substantial reductions in drilling time and costs.

  13. Evaluation Between Existing and Improved CCF Modeling Using the NRC SPAR Models

    SciTech Connect

    James K. Knudsen

    2010-06-01

    Abstract: The NRC SPAR models currently employ the alpha factor common cause failure (CCF) methodology and model CCF for a group of redundant components as a single “rolled-up” basic event. These SPAR models will be updated to employ a more computationally intensive and accurate approach by expanding the CCF basic events for all active components to include all terms that appear in the Basic Parameter Model (BPM). A discussion is provided to detail the differences between the rolled-up common cause group (CCG) and expanded BPM adjustment concepts based on differences in core damage frequency and individual component importance measures. Lastly, a hypothetical condition is evaluated with a SPAR model to show the difference in results between the current adjustment method (rolled-up CCF events) and the newer method employing all of the expanded terms in the BPM. The event evaluation on the SPAR model employing the expanded terms will be solved using the graphical evaluation module (GEM) and the proposed method discussed in Reference 1.

  14. Use of Numerical Groundwater Modeling to Evaluate Uncertainty in Conceptual Models of Recharge and Hydrostratigraphy

    SciTech Connect

    Pohlmann, Karl; Ye, Ming; Pohll, Greg; Chapman, Jenny

    2007-01-19

    Numerical groundwater models are based on conceptualizations of hydrogeologic systems that are by necessity developed from limited information and therefore are simplifications of real conditions. Each aspect (e.g. recharge, hydrostratigraphy, boundary conditions) of the groundwater model is often based on a single conceptual model that is considered to be the best representation given the available data. However, the very nature of their construction means that each conceptual model is inherently uncertain and the available information may be insufficient to refute plausible alternatives, thereby raising the possibility that the flow model is underestimating overall uncertainty. In this study we use the Death Valley Regional Flow System model developed by the U.S. Geological Survey as a framework to predict regional groundwater flow southward into Yucca Flat on the Nevada Test Site. An important aspect of our work is to evaluate the uncertainty associated with multiple conceptual models of groundwater recharge and subsurface hydrostratigraphy and quantify the impacts of this uncertainty on model predictions. In our study, conceptual model uncertainty arises from two sources: (1) alternative interpretations of the hydrostratigraphy in the northern portion of Yucca Flat where, owing to sparse data, the hydrogeologic system can be conceptualized in different ways, and (2) uncertainty in groundwater recharge in the region as evidenced by the existence of several independent approaches for estimating this aspect of the hydrologic system. The composite prediction of groundwater flow is derived from the regional model that formally incorporates the uncertainty in these alternative input models using the maximum likelihood Bayesian model averaging method. An assessment of the joint predictive uncertainty of the input conceptual models is also produced. During this process, predictions of the alternative models are weighted by model probability, which is the degree of
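The maximum likelihood Bayesian model averaging step can be illustrated with a small sketch: posterior model weights derived from an information criterion (KIC is assumed here, as is common in MLBMA), then a BMA prediction whose total variance splits into within-model and between-model parts:

```python
import numpy as np

def mlbma_weights(kic, prior=None):
    """Posterior model probabilities from model-selection criteria.
    Lower KIC -> higher weight; shifted by the minimum for stability."""
    kic = np.asarray(kic, float)
    prior = (np.ones_like(kic) / kic.size if prior is None
             else np.asarray(prior, float))
    w = np.exp(-0.5 * (kic - kic.min())) * prior
    return w / w.sum()

def bma_prediction(weights, means, variances):
    """BMA mean and total variance.
    Total variance = within-model variance + between-model variance."""
    w, m, v = (np.asarray(x, float) for x in (weights, means, variances))
    mean = float(np.sum(w * m))
    total_var = float(np.sum(w * v) + np.sum(w * (m - mean) ** 2))
    return mean, total_var
```

The between-model term is what captures conceptual model uncertainty: it vanishes only when all alternative conceptualizations agree on the prediction.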

  15. A Comprehensive and Systematic Model of User Evaluation of Web Search Engines: II. An Evaluation by Undergraduates.

    ERIC Educational Resources Information Center

    Su, Louise T.

    2003-01-01

    Presents an application of a model of user evaluation of four major Web search engines (Alta Vista, Excite, Infoseek, and Lycos) by undergraduates. Evaluation was based on 16 performance measures representing five evaluation criteria-relevance, efficiency, utility, user satisfaction, and connectivity. Content analysis of verbal data identified a…

  16. From site measurements to spatial modelling - multi-criteria model evaluation

    NASA Astrophysics Data System (ADS)

    Gottschalk, Pia; Roers, Michael; Wechsung, Frank

    2015-04-01

Hydrological models are traditionally evaluated at gauge stations for river runoff, which is assumed to be the valid and global test for model performance. One model output is assumed to reflect the performance of all implemented processes and parameters. This neglects the complex interactions of landscape processes which are actually simulated by the model but not tested. The application of a spatial hydrological model, however, offers a vast potential of evaluation aspects, which shall be presented here with the example of the eco-hydrological model SWIM. We present current activities to evaluate SWIM at the lysimeter site Brandis, the eddy-covariance site Gebesee and with spatial crop yields of Germany to constrain model performance in addition to river runoff. The lysimeter site is used to evaluate actual evapotranspiration, total runoff below the soil profile and crop yields. The eddy-covariance site Gebesee offers data to study crop growth via net-ecosystem carbon exchange and actual evapotranspiration. The performance of the vegetation module is tested via spatial crop yields at county level of Germany. Crop yields are an indirect measure of crop growth, which is an important driver of the landscape water balance and therefore eventually determines river runoff as well. First results at the lysimeter site show that simulated soil water dynamics are less sensitive to soil type than measured soil water dynamics. First results from the simulation of actual evapotranspiration and carbon exchange at Gebesee show satisfactory model performance, though with difficulties in capturing initial vegetation growth in spring. The latter is a hint at problems capturing winter growth conditions and subsequent impacts on crop growth. This is also reflected in the performance of simulated crop yields for Germany, where the model reflects crop yields of silage maize much better than those of winter wheat. With the given approach we would like to highlight the advantages and

  17. Development and Evaluation of Land-Use Regression Models Using Modeled Air Quality Concentrations

    EPA Science Inventory

    Abstract Land-use regression (LUR) models have emerged as a preferred methodology for estimating individual exposure to ambient air pollution in epidemiologic studies in absence of subject-specific measurements. Although there is a growing literature focused on LUR evaluation, fu...

  18. Evaluation of Turbulence-Model Performance in Jet Flows

    NASA Technical Reports Server (NTRS)

    Woodruff, S. L.; Seiner, J. M.; Hussaini, M. Y.; Erlebacher, G.

    2001-01-01

    The importance of reducing jet noise in both commercial and military aircraft applications has made jet acoustics a significant area of research. A technique for jet noise prediction commonly employed in practice is the MGB approach, based on the Lighthill acoustic analogy. This technique requires as aerodynamic input mean flow quantities and turbulence quantities like the kinetic energy and the dissipation. The purpose of the present paper is to assess existing capabilities for predicting these aerodynamic inputs. Two modern Navier-Stokes flow solvers, coupled with several modern turbulence models, are evaluated by comparison with experiment for their ability to predict mean flow properties in a supersonic jet plume. Potential weaknesses are identified for further investigation. Another comparison with similar intent is discussed by Barber et al. The ultimate goal of this research is to develop a reliable flow solver applicable to the low-noise, propulsion-efficient, nozzle exhaust systems being developed in NASA focused programs. These programs address a broad range of complex nozzle geometries operating in high temperature, compressible, flows. Seiner et al. previously discussed the jet configuration examined here. This convergent-divergent nozzle with an exit diameter of 3.6 inches was designed for an exhaust Mach number of 2.0 and a total temperature of 1680 F. The acoustic and aerodynamic data reported by Seiner et al. covered a range of jet total temperatures from 104 F to 2200 F at the fully-expanded nozzle pressure ratio. The aerodynamic data included centerline mean velocity and total temperature profiles. Computations were performed independently with two computational fluid dynamics (CFD) codes, ISAAC and PAB3D. Turbulence models employed include the k-epsilon model, the Gatski-Speziale algebraic-stress model and the Girimaji model, with and without the Sarkar compressibility correction. Centerline values of mean velocity and mean temperature are
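The turbulence quantities named above as MGB inputs, the kinetic energy k and its dissipation, are the same quantities a k-epsilon closure combines into an eddy viscosity; for reference, with the standard model constant:

```python
def eddy_viscosity(rho, k, eps, c_mu=0.09):
    """Turbulent (eddy) viscosity of the standard k-epsilon model:
        mu_t = rho * C_mu * k^2 / eps
    with the usual constant C_mu = 0.09.
    rho: density, k: turbulence kinetic energy, eps: dissipation rate."""
    return rho * c_mu * k * k / eps
```

The quality of the k and epsilon fields predicted by the flow solvers therefore matters twice: directly as acoustic-analogy inputs, and indirectly through the momentum closure that shapes the mean flow itself.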

  19. Catchment Classification via Hydrologic Modeling: Evaluating the Relative Importance of Model Selection, Parameterization and Classification Techniques

    NASA Astrophysics Data System (ADS)

    Marshall, L. A.; Smith, T. J.; To, L.

    2015-12-01

Classification has emerged as an important tool for evaluating the runoff generating mechanisms in catchments and for providing a basis on which to group catchments having similar characteristics. These methods are particularly important for transferring models from one catchment to another in the case of data scarce regions or paired catchment studies. In many cases, the goal of catchment classification is to be able to identify models or parameter sets that could be applied to similar catchments for predictive purposes. A potential impediment to this goal is the impact of error in both the classification technique and the hydrologic model. In this study, we examine the relationship between catchment classification, hydrologic models, and model parameterizations for the purpose of transferring models between similar catchments. Building on previous work using a data set of over 100 catchments from south-east Australia, we identify several hydrologic model structures and calibrate each model for each catchment. We use clustering to identify groups of catchments with similar hydrologic response (as characterized through the calibrated model parameters). We examine the dependency of the clustered catchment groups on the pre-selected model, the uncertainty in the calibrated model parameters, and the clustering or classification algorithm. Further, we investigate the relationship between the catchment clusters and certain catchment physical characteristics or signatures, which are more typically used for catchment classification. Overall, our work is aimed at elucidating the potential sources of uncertainty in catchment classification, and the utility of classification for improving hydrologic predictions.
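Clustering catchments by their calibrated parameter vectors can be sketched with a plain k-means; the study's actual clustering algorithm is not specified in the abstract, and parameter vectors should be standardized before clustering:

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Plain k-means on rows of X (e.g. one calibrated parameter
    vector per catchment). Returns (labels, centroids)."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, float)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # assign each row to its nearest centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1)
        labels = d.argmin(axis=1)
        # recompute centroids; keep old centroid if a cluster went empty
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids
```

Because the calibrated parameters are themselves uncertain, one sensible extension (in the spirit of this study) is to repeat the clustering over parameter samples and check how stable the group memberships are.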

  20. New Methods for Air Quality Model Evaluation with Satellite Data

    NASA Astrophysics Data System (ADS)

    Holloway, T.; Harkey, M.

    2015-12-01

Despite major advances in the ability of satellites to detect gases and aerosols in the atmosphere, there remains significant, untapped potential to apply space-based data to air quality regulatory applications. Here, we showcase research findings geared toward increasing the relevance of satellite data to support operational air quality management, focused on model evaluation. Particular emphasis is given to nitrogen dioxide (NO2) and formaldehyde (HCHO) from the Ozone Monitoring Instrument aboard the NASA Aura satellite, and evaluation of simulations from the EPA Community Multiscale Air Quality (CMAQ) model. This work is part of the NASA Air Quality Applied Sciences Team (AQAST), and is motivated by ongoing dialog with state and federal air quality management agencies. We present the response of satellite-derived NO2 to meteorological conditions, satellite-derived HCHO:NO2 ratios as an indicator of ozone production regime, and the ability of models to capture these sensitivities over the continental U.S. In the case of NO2-weather sensitivities, we find boundary layer height, wind speed, temperature, and relative humidity to be the most important variables in determining near-surface NO2 variability. CMAQ agreed with relationships observed in satellite data, as well as in ground-based data, over most regions. However, we find that the southwest U.S. is a problem area for CMAQ, where modeled NO2 responses to insolation, boundary layer height, and other variables are at odds with the observations. Our analyses utilize a software package developed by our team, the Wisconsin Horizontal Interpolation Program for Satellites (WHIPS): a free, open-source program designed to make satellite-derived air quality data more usable. WHIPS interpolates level 2 satellite retrievals onto a user-defined fixed grid, in effect creating a custom-gridded level 3 satellite product. Currently, WHIPS can process the following data products: OMI NO2 (NASA retrieval); OMI NO2 (KNMI retrieval); OMI
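A much-simplified stand-in for this kind of level 2-to-level 3 gridding, ignoring the pixel footprints, quality flags, and retrieval weights that a tool like WHIPS handles, is a per-cell average of pixel values on a fixed lat/lon grid:

```python
import numpy as np

def grid_level2(lon, lat, values, lon_edges, lat_edges):
    """Average level 2 pixel values into a fixed grid.
    Returns an array of cell means; NaN where a cell received no pixels."""
    lon, lat, values = (np.asarray(x, float) for x in (lon, lat, values))
    sums, _, _ = np.histogram2d(lon, lat, bins=[lon_edges, lat_edges],
                                weights=values)
    counts, _, _ = np.histogram2d(lon, lat, bins=[lon_edges, lat_edges])
    with np.errstate(invalid="ignore"):   # 0/0 -> NaN for empty cells
        return sums / counts
```

Real level 3 products additionally weight each pixel by its overlap area with the cell and by retrieval quality, which is precisely the bookkeeping WHIPS automates.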

  1. Evaluating biomarkers to model cancer risk post cosmic ray exposure.

    PubMed

    Sridharan, Deepa M; Asaithamby, Aroumougame; Blattnig, Steve R; Costes, Sylvain V; Doetsch, Paul W; Dynan, William S; Hahnfeldt, Philip; Hlatky, Lynn; Kidane, Yared; Kronenberg, Amy; Naidu, Mamta D; Peterson, Leif E; Plante, Ianik; Ponomarev, Artem L; Saha, Janapriya; Snijders, Antoine M; Srinivasan, Kalayarasan; Tang, Jonathan; Werner, Erica; Pluth, Janice M

    2016-06-01

Robust predictive models are essential to manage the risk of radiation-induced carcinogenesis. Chronic exposure to cosmic rays in the context of the complex deep space environment may place astronauts at high cancer risk. To estimate this risk, it is critical to understand how radiation-induced cellular stress impacts cell fate decisions and how this in turn alters the risk of carcinogenesis. Exposure to the heavy ion component of cosmic rays triggers a multitude of cellular changes, depending on the rate of exposure, the type of damage incurred and individual susceptibility. Heterogeneity in dose, dose rate, radiation quality, energy and particle flux contribute to the complexity of risk assessment. To unravel the impact of each of these factors, it is critical to identify sensitive biomarkers that can serve as inputs for robust modeling of individual risk of cancer or other long-term health consequences of exposure. Limitations in sensitivity of biomarkers to dose and dose rate, and the complexity of longitudinal monitoring, are some of the factors that increase uncertainties in the output from risk prediction models. Here, we critically evaluate candidate early and late biomarkers of radiation exposure and discuss their usefulness in predicting cell fate decisions. Some of the biomarkers we have reviewed include complex clustered DNA damage, persistent DNA repair foci, reactive oxygen species, chromosome aberrations and inflammation. Other biomarkers discussed, often assayed at later time points post exposure, include mutations, chromosome aberrations, reactive oxygen species and telomere length changes. We discuss the relationship of biomarkers to different potential cell fates, including proliferation, apoptosis, senescence, and loss of stemness, which can propagate genomic instability and alter tissue composition and the underlying mRNA signatures that contribute to cell fate decisions. Our goal is to highlight factors that are important in choosing

  2. Evaluating biomarkers to model cancer risk post cosmic ray exposure

    NASA Astrophysics Data System (ADS)

    Sridharan, Deepa M.; Asaithamby, Aroumougame; Blattnig, Steve R.; Costes, Sylvain V.; Doetsch, Paul W.; Dynan, William S.; Hahnfeldt, Philip; Hlatky, Lynn; Kidane, Yared; Kronenberg, Amy; Naidu, Mamta D.; Peterson, Leif E.; Plante, Ianik; Ponomarev, Artem L.; Saha, Janapriya; Snijders, Antoine M.; Srinivasan, Kalayarasan; Tang, Jonathan; Werner, Erica; Pluth, Janice M.

    2016-06-01

    Robust predictive models are essential to manage the risk of radiation-induced carcinogenesis. Chronic exposure to cosmic rays in the context of the complex deep space environment may place astronauts at high cancer risk. To estimate this risk, it is critical to understand how radiation-induced cellular stress impacts cell fate decisions and how this in turn alters the risk of carcinogenesis. Exposure to the heavy ion component of cosmic rays triggers a multitude of cellular changes, depending on the rate of exposure, the type of damage incurred and individual susceptibility. Heterogeneity in dose, dose rate, radiation quality, energy and particle flux contribute to the complexity of risk assessment. To unravel the impact of each of these factors, it is critical to identify sensitive biomarkers that can serve as inputs for robust modeling of individual risk of cancer or other long-term health consequences of exposure. Limitations in sensitivity of biomarkers to dose and dose rate, and the complexity of longitudinal monitoring, are some of the factors that increase uncertainties in the output from risk prediction models. Here, we critically evaluate candidate early and late biomarkers of radiation exposure and discuss their usefulness in predicting cell fate decisions. Some of the biomarkers we have reviewed include complex clustered DNA damage, persistent DNA repair foci, reactive oxygen species, chromosome aberrations and inflammation. Other biomarkers discussed, often assayed for at longer points post exposure, include mutations, chromosome aberrations, reactive oxygen species and telomere length changes. We discuss the relationship of biomarkers to different potential cell fates, including proliferation, apoptosis, senescence, and loss of stemness, which can propagate genomic instability and alter tissue composition and the underlying mRNA signatures that contribute to cell fate decisions. Our goal is to highlight factors that are important in choosing

  3. Towards systematic evaluation of crop model outputs for global land-use models

    NASA Astrophysics Data System (ADS)

    Leclere, David; Azevedo, Ligia B.; Skalský, Rastislav; Balkovič, Juraj; Havlík, Petr

    2016-04-01

    Land provides vital socioeconomic resources to society, albeit at the cost of substantial environmental degradation. Global integrated models combining high resolution global gridded crop models (GGCMs) and global economic models (GEMs) are increasingly being used to inform sustainable solutions for agricultural land-use. However, little effort has yet been made to evaluate and compare the accuracy of GGCM outputs. In addition, GGCM datasets require a large number of parameters whose values, and whose variability across space, are weakly constrained: increasing the accuracy of such datasets has a very high computing cost. Innovative evaluation methods are required both to lend credibility to the global integrated models and to allow efficient parameter specification of GGCMs. We propose an evaluation strategy for GGCM datasets from the perspective of their use in GEMs, illustrated with preliminary results from a novel dataset (the Hypercube) generated by the EPIC GGCM and used in the GLOBIOM land use GEM to inform on present-day crop yield, water and nutrient input needs for 16 crops x 15 management intensities, at a spatial resolution of 5 arc-minutes. We adopt the following principle: evaluation should provide a transparent diagnosis of model adequacy for its intended use. We briefly describe how the Hypercube data is generated and how it articulates with GLOBIOM in order to transparently identify the performances to be evaluated, as well as the main assumptions and data processing involved. Expected performances include adequately representing the sub-national heterogeneity in crop yield and input needs: i) in space, ii) across crop species, and iii) across management intensities. We will present and discuss measures of these expected performances and weigh the relative contributions of crop model, input data and data processing steps to performance. We will also compare obtained yield gaps and main yield-limiting factors against the M3 dataset. Next steps include

  4. The Pyramid Model: An Integrated Approach for Evaluating Continuing Education Programs and Outcomes.

    ERIC Educational Resources Information Center

    Hawkins, Victoria E.; Sherwood, Gwen D.

    1999-01-01

    Describes the steps of the Pyramid Evaluation Model to evaluate programs and outcomes systematically and comprehensively through an impact model that examines goals, reviews program design, monitors program implementation, assesses outcomes and impact, and analyzes efficiency. (Author/JOW)

  5. Evaluation of Atmospheric Loading and Improved Troposphere Modelling

    NASA Technical Reports Server (NTRS)

    Zelensky, Nikita P.; Chinn, Douglas S.; Lemoine, F. G.; Le Bail, Karine; Pavlis, Despina E.

    2012-01-01

    Forward modeling of non-tidal atmospheric loading displacements at geodetic tracking stations has not routinely been included in Doppler Orbitography and Radiopositioning Integrated by Satellite (DORIS) or Satellite Laser Ranging (SLR) station analyses for either POD applications or reference frame determination. The displacements, which are computed from 6-hourly models such as the ECMWF, can amount to 3-10 mm in the east, north and up components depending on the tracking station locations. We evaluate the application of atmospheric loading in a number of ways using the NASA GSFC GEODYN software. First, we assess the impact on SLR- and DORIS-determined orbits such as Jason-2, where we evaluate the impact on the tracking data RMS of fit and how the total orbits are changed with the application of this correction. Preliminary results show an RMS radial change of 0.5 mm for Jason-2 over 54 cycles and a total change in the Z-centering of the orbit of 3 mm peak-to-peak over one year. We also evaluate the effects on other DORIS satellites such as Cryosat-2, Envisat and the SPOT satellites. In the second step, we produce two SINEX time series based on data from available DORIS satellites and assess the differences in WRMS, scale and Helmert translation parameters. Troposphere refraction is obviously an important correction for radiometric data types such as DORIS. We evaluate recent improvements in DORIS processing at GSFC, including the application of the Vienna Mapping Function (VMF1) grids with a-priori hydrostatic (VZHDs) and wet (VZWDs) zenith delays. We reduce the gridded VZHDs to the station heights using pressure and temperature derived from GPT (strategy 1) and Saastamoinen. We discuss the validation of the VMF1 implementation and its application to the Jason-2 POD processing, compared to corrections using the Niell mapping function and the GMF. Using one year of data, we also assess the impact of the new troposphere corrections on the DORIS-only solutions, most
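
    As a hedged illustration of how the zenith delays and mapping functions above combine: the tropospheric slant delay at elevation e is the hydrostatic zenith delay (ZHD) times a hydrostatic mapping function plus the wet zenith delay (ZWD) times a wet mapping function. The simple 1/sin(e) mapping below is an illustrative stand-in for VMF1, Niell or GMF, and the delay magnitudes are typical values, not data from this study.

```python
import math

def slant_delay_m(zhd_m, zwd_m, elevation_deg):
    """Total tropospheric slant delay (metres) with a simple 1/sin(e) mapping.

    Real analyses use separate hydrostatic and wet mapping functions
    (e.g. VMF1); here one crude mapping stands in for both.
    """
    mf = 1.0 / math.sin(math.radians(elevation_deg))
    return zhd_m * mf + zwd_m * mf

# Typical magnitudes: ~2.3 m hydrostatic and ~0.2 m wet delay at zenith.
print(round(slant_delay_m(2.3, 0.2, 90.0), 2))  # 2.5
```

The delay grows rapidly at low elevations, which is why the choice of mapping function matters most for low-elevation observations.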

  6. A new approach toward evaluation of fish bioenergetics models

    USGS Publications Warehouse

    Madenjian, Charles P.; O'Connor, Daniel V.; Nortrup, David A.

    2000-01-01

    A new approach was used to evaluate the Wisconsin bioenergetics model for lake trout (Salvelinus namaycush). Lake trout in laboratory tanks were fed alewife (Alosa pseudoharengus) and rainbow smelt (Osmerus mordax), prey typical of lake trout in Lake Michigan. Food consumption and growth by lake trout during the experiment were measured. Polychlorinated biphenyl (PCB) concentrations of the alewife and rainbow smelt, as well as of the lake trout at the beginning and end of the experiment, were determined. From these data, we calculated that lake trout retained 81% of the PCBs contained within their food. In an earlier study, application of the Wisconsin lake trout bioenergetics model to growth and diet data for lake trout in Lake Michigan, in conjunction with PCB data for lake trout and prey fish from Lake Michigan, yielded an estimate of PCB assimilation efficiency from food of 81%. This close agreement in the estimates of efficiency with which lake trout retain PCBs from their food indicated that the bioenergetics model was furnishing accurate predictions of food consumption by lake trout in Lake Michigan.
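
    The study's key number comes from a simple mass balance: retention efficiency is the PCB mass gained by the fish divided by the PCB mass in the consumed food. A minimal sketch of that arithmetic, with purely illustrative numbers (not data from the study):

```python
def pcb_retention_efficiency(pcb_fish_start_ug, pcb_fish_end_ug,
                             food_consumed_g, pcb_conc_food_ug_per_g):
    """Fraction of dietary PCBs retained by the fish over the experiment."""
    pcb_ingested_ug = food_consumed_g * pcb_conc_food_ug_per_g
    pcb_retained_ug = pcb_fish_end_ug - pcb_fish_start_ug
    return pcb_retained_ug / pcb_ingested_ug

# Illustrative values: the fish gains 81 ug of PCB after eating 1000 g of
# prey containing 0.1 ug/g PCB, giving the 81% retention reported above.
eff = pcb_retention_efficiency(10.0, 91.0, 1000.0, 0.1)
print(round(eff, 2))  # 0.81
```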

  7. Material Model Evaluation of a Composite Honeycomb Energy Absorber

    NASA Technical Reports Server (NTRS)

    Jackson, Karen E.; Annett, Martin S.; Fasanella, Edwin L.; Polanco, Michael A.

    2012-01-01

    A study was conducted to evaluate four different material models in predicting the dynamic crushing response of solid-element-based models of a composite honeycomb energy absorber, designated the Deployable Energy Absorber (DEA). Dynamic crush tests of three DEA components were simulated using the nonlinear, explicit transient dynamic code LS-DYNA. In addition, a full-scale crash test of an MD-500 helicopter, retrofitted with DEA blocks, was simulated. The four material models used to represent the DEA included: *MAT_CRUSHABLE_FOAM (Mat 63), *MAT_HONEYCOMB (Mat 26), *MAT_SIMPLIFIED_RUBBER/FOAM (Mat 181), and *MAT_TRANSVERSELY_ANISOTROPIC_CRUSHABLE_FOAM (Mat 142). Test-analysis calibration metrics included simple percentage error comparisons of initial peak acceleration, sustained crush stress, and peak compaction acceleration of the DEA components. In addition, the Roadside Safety Verification and Validation Program (RSVVP) was used to assess similarities and differences between the experimental and analytical curves for the full-scale crash test.

  8. EMPIRE: Nuclear Reaction Model Code System for Data Evaluation

    SciTech Connect

    Herman, M. Capote, R.; Carlson, B.V.; Oblozinsky, P.; Sin, M.; Trkov, A.; Wienke, H.; Zerkin, V.

    2007-12-15

    files using the accompanying code EMPEND and completed with neutron resonances extracted from the existing evaluations. The package contains the full EXFOR (CSISRS) library of experimental reaction data that are automatically retrieved during the calculations. Publication quality graphs can be obtained using the powerful and flexible plotting package ZVView. The graphical user interface, written in Tcl/Tk, provides for easy operation of the system. This paper describes the capabilities of the code, outlines the physical models and indicates the parameter libraries used by EMPIRE to predict reaction cross sections and spectra, mainly for nucleon-induced reactions. Selected applications of EMPIRE are discussed, the most important being the extensive use of the code in evaluations of neutron reactions for the new US library ENDF/B-VII.0. Future extensions of the system are outlined, including a neutron resonance module as well as capabilities for generating covariances, using both KALMAN and Monte Carlo methods, that are still being advanced and refined.

  9. Review of models used for determining consequences of UF{sub 6} release: Development of model evaluation criteria. Volume 1

    SciTech Connect

    Nair, S.K.; Chambers, D.B.; Park, S.H.; Hoffman, F.O.

    1997-11-01

    The objective of this study is to examine the usefulness and effectiveness of currently existing models that simulate the release of uranium hexafluoride from UF{sub 6}-handling facilities, subsequent reactions of UF{sub 6} with atmospheric moisture, and the dispersion of UF{sub 6} and reaction products in the atmosphere. The study evaluates screening-level and detailed public-domain models that were specifically developed for UF{sub 6} and models that were originally developed for the treatment of dense gases but are applicable to UF{sub 6} release, reaction, and dispersion. The model evaluation process is divided into three specific tasks: model-component evaluation; applicability evaluation; and user interface and quality assurance and quality control (QA/QC) evaluation. Within the model-component evaluation process, a model's treatment of source term, thermodynamics, and atmospheric dispersion are considered and model predictions are compared with actual observations. Within the applicability evaluation process, a model's applicability to Integrated Safety Analysis, Emergency Response Planning, and Post-Accident Analysis, and to site-specific considerations are assessed. Finally, within the user interface and QA/QC evaluation process, a model's user-friendliness, presence and clarity of documentation, ease of use, etc. are assessed, along with its handling of QA/QC. This document presents the complete methodology used in the evaluation process.

  10. A generalised model for traffic induced road dust emissions. Model description and evaluation

    NASA Astrophysics Data System (ADS)

    Berger, Janne; Denby, Bruce

    2011-07-01

    This paper concerns the development and evaluation of a new and generalised road dust emission model. Most of today's road dust emission models are based on local measurements and/or contain empirical emission factors that are specific for a given road environment. In this study, a more generalised road dust emission model is presented and evaluated. We have based the emissions on road, tyre and brake wear rates and used the mass balance concept to describe the build-up of road dust on the road surface and road shoulder. The model separates the emissions into a direct part and a resuspension part, and treats the road surface and road shoulder as two different sources. We tested the model under idealized conditions as well as on two datasets in and just outside of Oslo in Norway during the studded tyre season. We found that the model reproduced the observed increase in road dust emissions directly after drying of the road surface. The time scale for the build-up of road dust on the road surface is less than an hour for medium to heavy traffic density. The model performs well for temperatures above 0 °C and less well during colder periods. Since the model does not yet include salting as an additional mass source, underestimations are evident during dry periods with temperatures around 0 °C, under which salting occurs. The model overestimates the measured PM10 (particulate matter less than 10 μm in diameter) concentrations during heavy precipitation events since the model does not take the amount of precipitation into account. There is a strong sensitivity of the modelled emissions to the road surface conditions, and the current parameterisations of the effects of precipitation, runoff and evaporation seem inadequate.
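
    A minimal sketch of the mass-balance idea described above: dust builds up on the road surface from wear inputs and is depleted by traffic-induced resuspension. The parameter names and values are illustrative assumptions, not the paper's actual parameterisation.

```python
def simulate_road_dust(hours, wear_rate_g_per_h, resus_frac_per_h, m0=0.0):
    """Euler integration of surface dust mass M (g) over `hours` one-hour steps.

    dM/dt = wear_rate - resus_frac * M   (resuspension proportional to mass)
    Returns the mass time series, starting from initial mass m0.
    """
    masses = [m0]
    m = m0
    for _ in range(hours):
        m += wear_rate_g_per_h - resus_frac_per_h * m
        masses.append(m)
    return masses

# With proportional depletion, the surface load approaches the steady state
# wear_rate / resus_frac, and the build-up time scale shortens as traffic
# (and hence the resuspension fraction) increases, consistent with the
# sub-hour time scale for medium to heavy traffic mentioned above.
series = simulate_road_dust(hours=48, wear_rate_g_per_h=5.0, resus_frac_per_h=0.5)
print(round(series[-1], 2))  # 10.0 (= 5.0 / 0.5)
```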

  11. Modelling phosphorus intake, digestion, retention and excretion in growing and finishing pig: model evaluation.

    PubMed

    Symeou, V; Leinonen, I; Kyriazakis, I

    2014-10-01

    A deterministic, dynamic model was developed to enable predictions of phosphorus (P) digested, retained and excreted for different pig genotypes and under different dietary conditions. Before confidence can be placed in the predictions of the model, its evaluation was required. A sensitivity analysis of model predictions to ±20% changes in the model parameters was undertaken using a basal UK industry standard diet and a pig genotype characterized by the British Society of Animal Science as being of 'intermediate growth'. Model outputs were most sensitive to the values of the efficiency of digestible P utilization for growth and the non-phytate P absorption coefficient from the small intestine into the bloodstream; all other model parameters influenced model outputs by <10%, with the majority of the parameters influencing outputs by <5%. Independent data sets from published experiments were used to evaluate model performance based on graphical comparisons and statistical analysis. The literature studies were selected on the basis of the following criteria: pigs were within the BW range of 20 to 120 kg; they grew in a thermo-neutral environment; and the studies provided information on P intake, retention and excretion. In general, the model satisfactorily predicted the quantitative pig responses, in terms of P digested, retained and excreted, to variation in dietary inorganic P supply, Ca and phytase supplementation. The model performed well with 'conventional' European feed ingredients and poorly with 'less conventional' ones, such as dried distillers grains with solubles and canola meal. Explanations for these inconsistencies in the predictions are offered in the paper and they are expected to lead to further model development and improvement. The latter would include the characterization of the origin of phytate in pig diets. PMID:24923282
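
    The ±20% one-at-a-time sensitivity analysis described above can be sketched as follows. The stand-in model `toy_p_retained` is a hypothetical placeholder, not the paper's P digestion model; only the perturbation scheme is illustrated.

```python
def toy_p_retained(params):
    """Stand-in output: retained P as efficiency * absorption * intake."""
    return params["efficiency"] * params["absorption"] * params["p_intake_g"]

def oat_sensitivity(model, base_params, delta=0.20):
    """Percent change in model output for a +/-delta change in each parameter,
    varied one at a time with all others held at their base values."""
    base = model(base_params)
    results = {}
    for name in base_params:
        for sign in (+1, -1):
            p = dict(base_params)
            p[name] = base_params[name] * (1 + sign * delta)
            results[(name, sign * delta)] = 100.0 * (model(p) - base) / base
    return results

base = {"efficiency": 0.8, "absorption": 0.7, "p_intake_g": 10.0}
sens = oat_sensitivity(toy_p_retained, base)
print(round(sens[("efficiency", 0.2)], 2))  # 20.0: output is linear in each parameter here
```

In the real model the responses are nonlinear, which is exactly what makes identifying the <5% versus >10% parameters informative.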

  12. Model selection on solid ground: Rigorous comparison of nine ways to evaluate Bayesian model evidence

    NASA Astrophysics Data System (ADS)

    Schöniger, Anneli; Wöhling, Thomas; Samaniego, Luis; Nowak, Wolfgang

    2014-12-01

    Bayesian model selection or averaging objectively ranks a number of plausible, competing conceptual models based on Bayes' theorem. It implicitly performs an optimal trade-off between performance in fitting available data and minimum model complexity. The procedure requires determining Bayesian model evidence (BME), which is the likelihood of the observed data integrated over each model's parameter space. The computation of this integral is highly challenging because it is as high-dimensional as the number of model parameters. Three classes of techniques to compute BME are available, each with its own challenges and limitations: (1) Exact and fast analytical solutions are limited by strong assumptions. (2) Numerical evaluation quickly becomes unfeasible for expensive models. (3) Approximations known as information criteria (ICs) such as the AIC, BIC, or KIC (Akaike, Bayesian, or Kashyap information criterion, respectively) yield contradicting results with regard to model ranking. Our study features a theory-based intercomparison of these techniques. We further assess their accuracy in a simplistic synthetic example where for some scenarios an exact analytical solution exists. In more challenging scenarios, we use a brute-force Monte Carlo integration method as reference. We continue this analysis with a real-world application of hydrological model selection. This is a first-time benchmarking of the various methods for BME evaluation against true solutions. Results show that BME values from ICs are often heavily biased and that the choice of approximation method substantially influences the accuracy of model ranking. For reliable model selection, bias-free numerical methods should be preferred over ICs whenever computationally feasible.
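
    The brute-force Monte Carlo reference method mentioned above can be sketched in a few lines: Bayesian model evidence is the likelihood averaged over parameter samples drawn from the prior. The toy linear model, data and Gaussian noise level below are illustrative assumptions, not from the study.

```python
import math
import random

random.seed(1)

data_x = [0.0, 1.0, 2.0, 3.0]
data_y = [0.1, 1.1, 1.9, 3.2]   # roughly y = x with noise (toy data)
sigma = 0.2                      # assumed known observation noise

def log_likelihood(slope):
    """Gaussian iid log-likelihood of the data under y = slope * x."""
    return sum(-0.5 * ((y - slope * x) / sigma) ** 2
               - math.log(sigma * math.sqrt(2 * math.pi))
               for x, y in zip(data_x, data_y))

def mc_evidence(n_samples=20_000, prior_lo=-2.0, prior_hi=2.0):
    """BME = E_prior[likelihood], estimated with uniform prior samples."""
    total = 0.0
    for _ in range(n_samples):
        slope = random.uniform(prior_lo, prior_hi)
        total += math.exp(log_likelihood(slope))
    return total / n_samples

print(mc_evidence() > 0.0)  # True
```

Note the cost: each evidence estimate requires thousands of model runs, which is why the abstract calls numerical evaluation unfeasible for expensive models and why information criteria are attractive despite their bias.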

  13. Model selection on solid ground: Rigorous comparison of nine ways to evaluate Bayesian model evidence

    PubMed Central

    Schöniger, Anneli; Wöhling, Thomas; Samaniego, Luis; Nowak, Wolfgang

    2014-01-01

    Bayesian model selection or averaging objectively ranks a number of plausible, competing conceptual models based on Bayes' theorem. It implicitly performs an optimal trade-off between performance in fitting available data and minimum model complexity. The procedure requires determining Bayesian model evidence (BME), which is the likelihood of the observed data integrated over each model's parameter space. The computation of this integral is highly challenging because it is as high-dimensional as the number of model parameters. Three classes of techniques to compute BME are available, each with its own challenges and limitations: (1) Exact and fast analytical solutions are limited by strong assumptions. (2) Numerical evaluation quickly becomes unfeasible for expensive models. (3) Approximations known as information criteria (ICs) such as the AIC, BIC, or KIC (Akaike, Bayesian, or Kashyap information criterion, respectively) yield contradicting results with regard to model ranking. Our study features a theory-based intercomparison of these techniques. We further assess their accuracy in a simplistic synthetic example where for some scenarios an exact analytical solution exists. In more challenging scenarios, we use a brute-force Monte Carlo integration method as reference. We continue this analysis with a real-world application of hydrological model selection. This is a first-time benchmarking of the various methods for BME evaluation against true solutions. Results show that BME values from ICs are often heavily biased and that the choice of approximation method substantially influences the accuracy of model ranking. For reliable model selection, bias-free numerical methods should be preferred over ICs whenever computationally feasible. PMID:25745272

  14. Interactive model evaluation tool based on IPython notebook

    NASA Astrophysics Data System (ADS)

    Balemans, Sophie; Van Hoey, Stijn; Nopens, Ingmar; Seuntjes, Piet

    2015-04-01

    In hydrological modelling, some kind of parameter optimization is usually performed. This can be the selection of a single best parameter set, a split into behavioural and non-behavioural parameter sets based on a selected threshold, or a posterior parameter distribution derived with a formal Bayesian approach. The selection of the criterion to measure the goodness of fit (likelihood or any objective function) is an essential step in all of these methodologies and will affect the final selected parameter subset. Moreover, the discriminative power of the objective function also depends on the time period used. In practice, the optimization process is an iterative procedure. As such, in the course of the modelling process, an increasing number of simulations is performed. However, the information carried by these simulation outputs is not always fully exploited. In this respect, we developed and present an interactive environment that enables the user to intuitively evaluate model performance. The aim is to explore the parameter space graphically and to visualize the impact of the selected objective function on model behaviour. First, a set of model simulation results is loaded along with the corresponding parameter sets and a data set of the same variable as the model outcome (mostly discharge). The ranges of the loaded parameter sets define the parameter space. The user can select the two parameters to be visualised. Furthermore, an objective function and a time period of interest need to be selected. Based on this information, a two-dimensional parameter response surface is created, which shows a scatter plot of the parameter combinations and assigns a color scale corresponding to the goodness of fit of each parameter combination. Finally, a slider is available to change the color mapping of the points. The slider provides a threshold to exclude non-behavioural parameter sets, and the color scale is only attributed to the
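
    The core of the tool described above, stripped of plotting, is: score each parameter set with an objective function over a chosen time window, then apply a slider-style threshold that separates behavioural from non-behavioural sets. The function and variable names below are illustrative, not the tool's actual API.

```python
def rmse(observed, simulated):
    """Root-mean-square error over the selected evaluation period."""
    n = len(observed)
    return (sum((o - s) ** 2 for o, s in zip(observed, simulated)) / n) ** 0.5

def classify_behavioural(param_sets, simulations, observed, threshold):
    """Return (params, score, is_behavioural) per set; low RMSE is good."""
    out = []
    for params, sim in zip(param_sets, simulations):
        score = rmse(observed, sim)
        out.append((params, score, score <= threshold))
    return out

observed = [1.0, 2.0, 3.0]                        # e.g. discharge observations
param_sets = [{"k": 0.5}, {"k": 1.0}]
simulations = [[0.5, 1.0, 1.5], [1.0, 2.0, 3.0]]  # second set fits exactly
result = classify_behavioural(param_sets, simulations, observed, threshold=0.5)
print([flag for _, _, flag in result])  # [False, True]
```

In the interactive tool, the scores would colour a scatter plot of two chosen parameters, and moving the slider re-evaluates the `threshold` argument.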

  15. A linear programming model for optimizing HDR brachytherapy dose distributions with respect to mean dose in the DVH-tail

    SciTech Connect

    Holm, Åsa; Larsson, Torbjörn; Tedgren, Åsa Carlsson

    2013-08-15

    Purpose: Recent research has shown that the optimization model hitherto used in high-dose-rate (HDR) brachytherapy corresponds weakly to the dosimetric indices used to evaluate the quality of a dose distribution. Although alternative models that explicitly include such dosimetric indices have been presented, the inclusion of the dosimetric indices explicitly yields intractable models. The purpose of this paper is to develop a model for optimizing dosimetric indices that is easier to solve than those proposed earlier. Methods: In this paper, the authors present an alternative approach for optimizing dose distributions for HDR brachytherapy where dosimetric indices are taken into account through surrogates based on the conditional value-at-risk concept. This yields a linear optimization model that is easy to solve, and has the advantage that the constraints are easy to interpret and modify to obtain satisfactory dose distributions. Results: The authors show by experimental comparisons, carried out retrospectively for a set of prostate cancer patients, that their proposed model corresponds well with constraining dosimetric indices. All modifications of the parameters in the authors' model yield the expected result. The dose distributions generated are also comparable to those generated by the standard model with respect to the dosimetric indices that are used for evaluating quality. Conclusions: The authors' new model is a viable surrogate to optimizing dosimetric indices and quickly and easily yields high quality dose distributions.
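
    A hedged illustration of the conditional value-at-risk concept used as a surrogate above: the upper CVaR at level alpha is the mean dose in the hottest (1 - alpha) fraction of dose points, i.e. the mean of the DVH tail. The dose values are toy numbers, not patient data.

```python
def upper_cvar(doses, alpha):
    """Mean of the worst (highest) (1 - alpha) fraction of dose values."""
    tail_n = max(1, round(len(doses) * (1 - alpha)))
    tail = sorted(doses, reverse=True)[:tail_n]
    return sum(tail) / tail_n

doses = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]  # Gy, toy values
print(upper_cvar(doses, alpha=0.8))  # 9.5: mean of the top 20% (9 and 10)
```

Because this tail mean admits a linear-programming representation, constraints on it stay linear, which is consistent with the abstract's point that the CVaR surrogate yields an easy-to-solve linear optimization model.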

  16. Alternative separation evaluations in model rechargeable silver-zinc cells

    NASA Astrophysics Data System (ADS)

    Lewis, Harlan L.; Danko, Thomas; Himy, Albert; Johnson, William

    Several varieties of standard and reinforced, cellulose-based, sausage casing films derived from wood pulp have been evaluated in model (nominal 28 A h) rechargeable silver-zinc cells. The cell performance data for both cycle life and wet stand life have been compared with cells equipped with conventional 1 mil (0.025 mm) cellophane. Although shorting was the most common failure mode in the cells with sausage casing separation, remarkably good cycle and wet life were obtained when the separation wrap also included PVA film. This paper reports the cycle and wet life comparison data for these substitute separators, with respect to conventional cellophane separation, as well as separation physical property data and silver migration rates in the cells as a function of cell life.

  17. Novel Planar Electromagnetic Sensors: Modeling and Performance Evaluation

    PubMed Central

    Mukhopadhyay, Subhas C.

    2005-01-01

    High-performance planar electromagnetic sensors, their modeling and a few applications are reported in this paper. Research employing planar-type electromagnetic sensors began quite a few years ago, with the initial emphasis on the inspection of defects on printed circuit boards. The use of the planar-type sensing system has been extended to the evaluation of near-surface material properties such as conductivity, permittivity, permeability, etc., and it can also be used for the inspection of defects in the near-surface region of materials. Recently the sensor has been used for the inspection of the quality of saxophone reeds and dairy products. The electromagnetic responses of planar interdigital sensors with pork meats have also been investigated.

  18. Evaluation of CM5 Charges for Condensed-Phase Modeling.

    PubMed

    Vilseck, Jonah Z; Tirado-Rives, Julian; Jorgensen, William L

    2014-07-01

    The recently developed Charge Model 5 (CM5) is tested for its utility in condensed-phase simulations. The CM5 approach, which derives partial atomic charges from Hirshfeld population analyses, provides excellent results for gas-phase dipole moments and is applicable to all elements of the periodic table. Herein, the adequacy of scaled CM5 charges for use in modeling aqueous solutions has been evaluated by computing free energies of hydration (ΔGhyd) for 42 neutral organic molecules via Monte Carlo statistical mechanics. An optimal scaling factor for the CM5 charges was determined to be 1.27, resulting in a mean unsigned error (MUE) of 1.1 kcal/mol for the free energies of hydration. Testing for an additional 20 molecules gave an MUE of 1.3 kcal/mol. The high precision of the results is confirmed by free energy calculations using both sequential perturbations and complete molecular annihilation. Performance for specific functional groups is discussed; sulfur-containing molecules yield the largest errors. In addition, the scaling factor of 1.27 is shown to be appropriate for CM5 charges derived from a variety of density functional methods and basis sets. Though the average errors from the 1.27*CM5 results are only slightly lower than those using 1.14*CM1A charges, the broader applicability and easier access to CM5 charges via the Gaussian program are additional attractive features. The 1.27*CM5 charge model can be used for an enormous variety of applications in conjunction with many fixed-charge force fields and molecular modeling programs. PMID:25061445
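
    A small sketch of the two computations the abstract relies on: uniformly scaling gas-phase CM5 charges for condensed-phase use, and scoring predicted hydration free energies by mean unsigned error (MUE). The charges and free energies below are illustrative stand-ins, not study data.

```python
SCALE = 1.27  # optimal scaling factor reported for CM5 charges

def scale_charges(cm5_charges):
    """Uniformly scaled charges for condensed-phase simulation."""
    return [SCALE * q for q in cm5_charges]

def mean_unsigned_error(predicted, experimental):
    """MUE in the same units as the inputs (kcal/mol in the study)."""
    n = len(predicted)
    return sum(abs(p - e) for p, e in zip(predicted, experimental)) / n

# Uniform scaling preserves molecular neutrality (hypothetical charges):
print(sum(scale_charges([-0.5, 0.25, 0.25])))  # 0.0
# MUE of two hypothetical hydration free energies vs experiment:
print(round(mean_unsigned_error([-5.2, 2.0], [-6.3, 1.1]), 2))  # 1.0
```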

  19. Evaluating Status Change of Soil Potassium from Path Model

    PubMed Central

    He, Wenming; Chen, Fang

    2013-01-01

    The purpose of this study is to determine the critical environmental parameters of soil K availability and to quantify their contributions by using a proposed path model. In this study, plot experiments were designed with different treatments, and soil samples were collected and analyzed in the laboratory to investigate the influence of soil properties on soil potassium forms (water-soluble K, exchangeable K, non-exchangeable K). Furthermore, path analysis based on the proposed path model was carried out to evaluate the relationship between potassium forms and soil properties. The research findings were as follows. Firstly, the key direct factors were soil S, the sodium-potassium ratio (Na/K), the chemical index of alteration (CIA), soil organic matter in soil solution (SOM), Na and total nitrogen in soil solution (TN), and the key indirect factors were carbonate (CO3), Mg, pH, Na, S, and SOM. Secondly, the path model can effectively determine the direction and quantity of potassium status changes between exchangeable potassium (eK), non-exchangeable potassium (neK) and water-soluble potassium (wsK) under the influence of specific environmental parameters. In one reversible equilibrium state, the K balance was inclined to move in the β and χ directions in treatments of potassium shortage; in the other reversible equilibrium, the K balance was inclined to move in the θ and λ directions in treatments of water shortage. Results showed that the proposed path model was able to quantitatively disclose the direction of K status changes and quantify its equilibrium threshold. It provides a theoretical and practical basis for scientific and effective fertilization in agricultural plant growth. PMID:24204659

  20. Development and evaluation of a bioenergetics model for bull trout

    USGS Publications Warehouse

    Mesa, Matthew G.; Welland, Lisa K.; Christiansen, Helena E.; Sauter, Sally T.; Beauchamp, David A.

    2013-01-01

    We conducted laboratory experiments to parameterize a bioenergetics model for wild Bull Trout Salvelinus confluentus, estimating the effects of body mass (12–1,117 g) and temperature (3–20°C) on maximum consumption (C max) and standard metabolic rates. The temperature associated with the highest C max was 16°C, and C max showed the characteristic dome-shaped temperature-dependent response. Mass-dependent values of C max (N = 28) at 16°C ranged from 0.03 to 0.13 g·g−1·d−1. The standard metabolic rates of fish (N = 110) ranged from 0.0005 to 0.003 g·O2·g−1·d−1 and increased with increasing temperature but declined with increasing body mass. In two separate evaluation experiments, which were conducted at only one ration level (40% of estimated C max), the model predicted final weights that were, on average, within 1.2 ± 2.5% (mean ± SD) of observed values for fish ranging from 119 to 573 g and within 3.5 ± 4.9% of values for 31–65 g fish. Model-predicted consumption was within 5.5 ± 10.9% of observed values for larger fish and within 12.4 ± 16.0% for smaller fish. Our model should be useful to those dealing with issues currently faced by Bull Trout, such as climate change or alterations in prey availability.

  1. Integrated modelling approach for the evaluation of low emission zones.

    PubMed

    Dias, Daniela; Tchepel, Oxana; Antunes, António Pais

    2016-07-15

    Low emission zones (LEZ) are areas where the most polluting vehicles are restricted or deterred from entering. In recent years, LEZ became a popular option to reduce traffic-related air pollution and have been implemented in many cities worldwide, notably in Europe. However, the evidence about their effectiveness is inconsistent. This calls for the development of tools to evaluate ex-ante the air quality impacts of a LEZ. The integrated modelling approach we propose in this paper aims to respond to this call. It links a transportation model with an emissions model and an air quality model operating over a GIS-based platform. Through the application of the approach, it is possible to estimate the changes induced by the creation of a LEZ applied to private cars with respect to air pollution levels not only inside the LEZ, but also, more generally, in the city where it is located. The usefulness of the proposed approach was demonstrated for a case study involving the city of Coimbra (Portugal), where the creation of a LEZ is being sought to mitigate the air quality problems that its historic centre currently faces. The main result of this study was that PM10 and NO2 emissions from private cars would decrease significantly inside the LEZ (63% and 52%, respectively) but the improvement in air quality would be small and exceedances to the air pollution limits adopted in the European Union would not be fully avoided. In contrast, at city level, total emissions increase and a deterioration of air quality is expected to occur. PMID:27107951

  2. Evaluation of CM5 Charges for Condensed-Phase Modeling

    PubMed Central

    2015-01-01

    The recently developed Charge Model 5 (CM5) is tested for its utility in condensed-phase simulations. The CM5 approach, which derives partial atomic charges from Hirshfeld population analyses, provides excellent results for gas-phase dipole moments and is applicable to all elements of the periodic table. Herein, the adequacy of scaled CM5 charges for use in modeling aqueous solutions has been evaluated by computing free energies of hydration (ΔGhyd) for 42 neutral organic molecules via Monte Carlo statistical mechanics. An optimal scaling factor for the CM5 charges was determined to be 1.27, resulting in a mean unsigned error (MUE) of 1.1 kcal/mol for the free energies of hydration. Testing for an additional 20 molecules gave an MUE of 1.3 kcal/mol. The high precision of the results is confirmed by free energy calculations using both sequential perturbations and complete molecular annihilation. Performance for specific functional groups is discussed; sulfur-containing molecules yield the largest errors. In addition, the scaling factor of 1.27 is shown to be appropriate for CM5 charges derived from a variety of density functional methods and basis sets. Though the average errors from the 1.27*CM5 results are only slightly lower than those using 1.14*CM1A charges, the broader applicability and easier access to CM5 charges via the Gaussian program are additional attractive features. The 1.27*CM5 charge model can be used for an enormous variety of applications in conjunction with many fixed-charge force fields and molecular modeling programs. PMID:25061445
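Two of the quantities in this abstract are simple to compute: the uniform scaling of CM5 partial charges by the reported factor of 1.27, and the mean unsigned error (MUE) used to score hydration free energies. The sketch below illustrates both; the charge and free-energy values are invented placeholders, not data from the paper.

```python
# Sketch of uniform charge scaling and the mean unsigned error (MUE) metric.
# All numeric inputs below are made-up placeholders for illustration.

SCALE = 1.27  # optimal scaling factor reported for CM5 charges

def scale_charges(charges, factor=SCALE):
    """Apply a uniform scaling factor to a list of partial atomic charges."""
    return [q * factor for q in charges]

def mean_unsigned_error(computed, reference):
    """MUE = mean absolute deviation between computed and reference values."""
    assert len(computed) == len(reference)
    return sum(abs(c - r) for c, r in zip(computed, reference)) / len(computed)

cm5 = [-0.68, 0.34, 0.34]        # water-like O, H, H charges (placeholders)
print(scale_charges(cm5))        # charges as they would enter a force field

computed_dg = [-6.1, -3.0, 1.9]  # ΔG_hyd in kcal/mol, hypothetical
reference_dg = [-6.3, -4.0, 2.1]
print(mean_unsigned_error(computed_dg, reference_dg))
```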

  3. Statistical models and computation to evaluate measurement uncertainty

    NASA Astrophysics Data System (ADS)

    Possolo, Antonio

    2014-08-01

    In the course of the twenty years since the publication of the Guide to the Expression of Uncertainty in Measurement (GUM), the recognition has been steadily growing of the value that statistical models and statistical computing bring to the evaluation of measurement uncertainty, and of how they enable its probabilistic interpretation. These models and computational methods can address all the problems originally discussed and illustrated in the GUM, and enable addressing other, more challenging problems, that measurement science is facing today and that it is expected to face in the years ahead. These problems that lie beyond the reach of the techniques in the GUM include (i) characterizing the uncertainty associated with the assignment of value to measurands of greater complexity than, or altogether different in nature from, the scalar or vectorial measurands entertained in the GUM: for example, sequences of nucleotides in DNA, calibration functions and optical and other spectra, spatial distribution of radioactivity over a geographical region, shape of polymeric scaffolds for bioengineering applications, etc; (ii) incorporating relevant information about the measurand that predates or is otherwise external to the measurement experiment; (iii) combining results from measurements of the same measurand that are mutually independent, obtained by different methods or produced by different laboratories. This review of several of these statistical models and computational methods illustrates some of the advances that they have enabled, and in the process invites a reflection on the interesting historical fact that these very same models and methods, by and large, were already available twenty years ago, when the GUM was first published—but then the dialogue between metrologists, statisticians and mathematicians was still in bud. It is in full bloom today, much to the benefit of all.
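A concrete instance of the statistical computing the review discusses is Monte Carlo evaluation of measurement uncertainty in the spirit of GUM Supplement 1: propagate probability distributions for the inputs through the measurement model and summarize the output distribution. The sketch below uses a hypothetical measurand, resistance R = V / I with Gaussian inputs; all numbers are illustrative.

```python
# Minimal Monte Carlo evaluation of measurement uncertainty (GUM Supplement 1
# style). Measurand: R = V / I, with V and I modeled as Gaussian inputs.
# The means and standard uncertainties below are hypothetical.
import random
import statistics

random.seed(42)

def simulate_r(n=100_000, v_mean=12.0, v_sd=0.05, i_mean=2.0, i_sd=0.01):
    """Draw n realizations of R = V / I from the input distributions."""
    draws = []
    for _ in range(n):
        v = random.gauss(v_mean, v_sd)
        i = random.gauss(i_mean, i_sd)
        draws.append(v / i)
    return draws

r = simulate_r()
print(statistics.mean(r))   # best estimate of R (~6.0 ohm)
print(statistics.stdev(r))  # standard uncertainty u(R) from the draws
```

The sample standard deviation of the draws plays the role of the standard uncertainty, and quantiles of the draws give probabilistically interpretable coverage intervals.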

  4. Evaluation of data for Sinkhole-development risk models

    NASA Astrophysics Data System (ADS)

    Upchurch, Sam B.; Littlefield, James R.

    1988-10-01

    Before risk assessments for sinkhole damage and indemnification are developed, a database must be created to predict the occurrence and distribution of sinkholes. This database must be evaluated in terms of the following questions: (1) are available records of modern sinkhole development adequate, (2) can the distribution of ancient sinks be used for predictive purposes, and (3) at what areal scale must sinkhole occurrences be evaluated for predictive and risk analysis purposes? Twelve 7.5' quadrangles with varying karst development in Hillsborough County, Florida, provide insight into these questions. The area includes 179 modern sinks that developed between 1964 and 1985 and 2,303 ancient sinks. The sinks occur in urban, suburban, agricultural, and major forest wetland areas. The number of ancient sinks ranges from 0.1 to 3.2/km2 and averages 1.1/km2 for the entire area. The quadrangle area occupied by ancient sinks ranges from 0.3 to 10.2 percent. The distribution of ancient sinkholes within a quadrangle ranges from 0 to over 25 percent of the land surface. In bare karst areas, the sinks are localized along major lineaments, especially at lineament intersections. Where there is covered karst, ancient sinks may be obscured. Modern sinkholes did not develop uniformly through time; annual counts ranged from 0 to 29/yr. The regional occurrence rate is 7.6/yr. Most were reported in urban or suburban areas, and their locations coincide with the lineament-controlled areas of ancient karst. Moving-average analysis indicates that the distribution of modern sinks is highly localized and ranges from 0 to 1.9/km2. Chi-square tests show that the distribution of ancient sinks in bare karst areas significantly predicts the locations of modern sinks. In areas of covered karst, the locations of ancient sinkholes do not predict modern sinks. It appears that risk-assessment models for sinkhole development can use the distribution of ancient sinks where bare karst is present. 
In covered karst areas
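The chi-square test mentioned in the abstract can be sketched as a goodness-of-fit test: do modern sinkholes fall inside mapped ancient-karst zones more often than the area fraction of those zones would predict under a uniform null? The counts and area fraction below are invented for illustration (only the total of 179 modern sinks comes from the abstract).

```python
# Hedged sketch of a chi-square goodness-of-fit test for sinkhole prediction.
# Null hypothesis: modern sinks fall uniformly at random over the study area.
# The karst-area fraction and in/out split are hypothetical.

def chi_square(observed, expected):
    """Pearson chi-square statistic over matched observed/expected counts."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

total_modern = 179            # modern sinks reported in the study area
frac_ancient_karst = 0.10     # hypothetical fraction of area mapped as ancient karst

expected = [total_modern * frac_ancient_karst,
            total_modern * (1 - frac_ancient_karst)]
observed = [120, 59]          # hypothetical: most sinks inside karst zones

stat = chi_square(observed, expected)
print(stat)  # compare against the df = 1 critical value (3.84 at p = 0.05)
```

A statistic far above the critical value would reject the uniform null, i.e., ancient-karst zones significantly predict modern sink locations.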

  5. Storytelling Voice Conversion: Evaluation Experiment Using Gaussian Mixture Models

    NASA Astrophysics Data System (ADS)

    Přibil, Jiří; Přibilová, Anna; Ďuračková, Daniela

    2015-07-01

    In the development of voice conversion and personification for text-to-speech (TTS) systems, it is essential to have feedback on users' opinions of the resulting synthetic speech quality. Therefore, the main aim of the experiments described in this paper was to find out whether a classifier based on Gaussian mixture models (GMM) could be applied to evaluate different storytelling voices created by transformation of sentences generated by the Czech and Slovak TTS system. We suppose that this GMM-based statistical evaluation can be combined with, or can replace, classical listening tests. The results obtained in this way were in good correlation with the results of the conventional listening test, confirming the practical usability of the developed GMM classifier. With the help of the performed analysis, the optimal setting of the initial parameters and the structure of the input feature set for recognition of the storytelling voices were finally determined.

  6. Evaluation of two pollutant dispersion models over continental scales

    NASA Astrophysics Data System (ADS)

    Rodriguez, D.; Walker, H.; Klepikova, N.; Kostrikov, A.; Zhuk, Y.

    Two long-range, emergency response models—one based on the particle-in-cell method of pollutant representation (ADPIC/U.S.), the other based on the superposition of Gaussian puffs released periodically in time (EXPRESS/Russia)—are evaluated using perfluorocarbon tracer data from the Across North America Tracer Experiment (ANATEX). The purpose of the study is to assess our current capabilities for simulating continental-scale dispersion processes and to use these assessments as a means to improve our modeling tools. The criteria for judging model performance are based on protocols devised by the Environmental Protection Agency and on other complementary tests. Most of these measures require the formation and analysis of surface concentration footprints (the surface manifestations of tracer clouds, which are sampled over 24-h intervals), whose dimensions, center-of-mass coordinates and integral characteristics provide a basis for comparing observed and calculated concentration distributions. Generally speaking, the plumes associated with the 20 releases of perfluorocarbon (10 each from sources at Glasgow, MT and St. Cloud, MN) in January 1987 are poorly resolved by the sampling network when the source-to-receptor distances are less than about 1000 km. Within this undersampled region, both models chronically overpredict the sampler concentrations. Given this tendency, the computed areas of the surface footprints and their integral concentrations are likewise excessive. When the actual plumes spread out sufficiently for reasonable resolution, the observed (O) and calculated (C) footprint areas are usually within a factor of two of one another, thereby suggesting that the models possess some skill in the prediction of long-range diffusion. Deviations in the O and C plume trajectories, as measured by the distances of separation between the plume centroids, are on the order of 125 km d−1 for both models. It appears that the inability of the models to simulate large
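The factor-of-two agreement noted above is often summarized as the FAC2 metric: the fraction of calculated values within a factor of two of the corresponding observations. The sketch below shows that metric; the paired footprint areas are invented for illustration, not ANATEX data.

```python
# Sketch of the factor-of-two agreement measure (FAC2): the fraction of
# calculated values C within a factor of 2 of observed O. Data are invented.

def fac2(observed, calculated):
    """Fraction of positive (O, C) pairs with 0.5 <= C/O <= 2.0."""
    pairs = [(o, c) for o, c in zip(observed, calculated) if o > 0 and c > 0]
    hits = sum(1 for o, c in pairs if 0.5 <= c / o <= 2.0)
    return hits / len(pairs)

obs_area = [1200.0, 3400.0, 800.0, 150.0]   # footprint areas, km^2 (hypothetical)
calc_area = [1500.0, 2100.0, 2500.0, 220.0]

print(fac2(obs_area, calc_area))  # fraction of footprints within a factor of 2
```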

  7. The western Pacific monsoon in CMIP5 models: Model evaluation and projections

    NASA Astrophysics Data System (ADS)

    Brown, Josephine R.; Colman, Robert A.; Moise, Aurel F.; Smith, Ian N.

    2013-11-01

    The ability of 35 models from the Coupled Model Intercomparison Project Phase 5 (CMIP5) to simulate the western Pacific (WP) monsoon is evaluated over four representative regions around Timor, New Guinea, the Solomon Islands and Palau. Coupled model simulations are compared with atmosphere-only model simulations (with observed sea surface temperatures, SSTs) to determine the impact of SST biases on model performance. Overall, the CMIP5 models simulate the WP monsoon better than previous-generation Coupled Model Intercomparison Project Phase 3 (CMIP3) models, but some systematic biases remain. The atmosphere-only models are better able to simulate the seasonal cycle of zonal winds than the coupled models, but display comparable biases in the rainfall. The CMIP5 models are able to capture features of interannual variability in response to the El Niño-Southern Oscillation. In climate projections under the RCP8.5 scenario, monsoon rainfall is increased over most of the WP monsoon domain, while wind changes are small. Widespread rainfall increases at low latitudes in the summer hemisphere appear robust as a large majority of models agree on the sign of the change. There is less agreement on rainfall changes in winter. Interannual variability of monsoon wet season rainfall is increased in a warmer climate, particularly over Palau, Timor and the Solomon Islands. A subset of the models showing greatest skill in the current climate confirms the overall projections, although showing markedly smaller rainfall increases in the western equatorial Pacific. The changes found here may have large impacts on Pacific island countries influenced by the WP monsoon.

  8. Risk evaluation of uranium mining: A geochemical inverse modelling approach

    NASA Astrophysics Data System (ADS)

    Rillard, J.; Zuddas, P.; Scislewski, A.

    2011-12-01

    It is well known that uranium extraction operations can increase risks linked to radiation exposure. The toxicity of uranium and associated heavy metals is the main environmental concern regarding exploitation and processing of U-ore. In areas where U mining is planned, a careful assessment of toxic and radioactive element concentrations is recommended before the start of mining activities. A background evaluation of harmful elements is important in order to prevent and/or quantify future water contamination resulting from possible migration of toxic metals coming from ore and waste water interaction. Controlled leaching experiments were carried out to investigate processes of ore and waste (leached ore) degradation, using samples from the uranium exploitation site located in Caetité-Bahia, Brazil. In experiments in which the reaction of waste with water was tested, we found that the water had low pH and high levels of sulphates and aluminium. On the other hand, in experiments in which ore was tested, the water had a chemical composition comparable to natural water found in the region of Caetité. On the basis of our experiments, we suggest that waste resulting from sulphuric acid treatment can induce acidification and salinization of surface and ground water. For this reason proper storage of waste is imperative. As a tool to evaluate the risks, a geochemical inverse modelling approach was developed to estimate the water-mineral interaction involving the presence of toxic elements. We used a method earlier described by Scislewski and Zuddas (2010) (Geochim. Cosmochim. Acta 74, 6996-7007) in which the reactive surface area of mineral dissolution can be estimated. We found that the reactive surface area of rock parent minerals is not constant over time but varies by several orders of magnitude within only two months of interaction. 
We propose that parent mineral heterogeneity and particularly, neogenic phase formation may explain the observed variation of the

  9. Solid rocket booster performance evaluation model. Volume 1: Engineering description

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The space shuttle solid rocket booster performance evaluation model (SRB-II) is made up of analytical and functional simulation techniques linked together so that a single pass through the model will predict the performance of the propulsion elements of a space shuttle solid rocket booster. The available options allow the user to predict static test performance, predict nominal and off nominal flight performance, and reconstruct actual flight and static test performance. Options selected by the user are dependent on the data available. These can include data derived from theoretical analysis, small scale motor test data, large motor test data and motor configuration data. The user has several options for output format that include print, cards, tape and plots. Output includes all major performance parameters (Isp, thrust, flowrate, mass accounting and operating pressures) as a function of time as well as calculated single point performance data. The engineering description of SRB-II discusses the engineering and programming fundamentals used, the function of each module, and the limitations of each module.

  10. Evaluation of five fracture models in Taylor impact fracture

    NASA Astrophysics Data System (ADS)

    Zhang, Wei; Xiao, Xin-Ke; Wei, Gang; Guo, Zitao

    2012-03-01

    Taylor impact tests presented in a previous study on a commercial high-strength, super-hard aluminum alloy 7A04-T6 are numerically evaluated using the finite element code ABAQUS/Explicit. In the present study, the influence of the fracture criterion in numerical simulations of the deformation and fracture behavior of the Taylor rod has been studied. Included in the paper are a modified version of the Johnson-Cook, the Cockcroft-Latham (C-L), the constant fracture strain, the maximum shear stress, and the maximum principal stress fracture models. Model constants for each criterion are calibrated from material tests. The modified version of the Johnson-Cook fracture criterion with the stress triaxiality cutoff idea is found to give good predictions of the Taylor impact fracture behavior. However, this study also shows that the C-L fracture criterion, for which only one simple material test is required for calibration, gives reasonable predictions. Unfortunately, the other three criteria are not able to reproduce the experimentally obtained fracture behavior. The study indicates that the stress triaxiality cutoff idea is necessary to predict Taylor impact fracture.

  11. Evaluation of Five Fracture Models in Taylor Impact Fracture

    NASA Astrophysics Data System (ADS)

    Zhang, Wei; Xiao, Xinke; Wei, Gang; Guo, Zitao

    2011-06-01

    Taylor impact tests presented in a previous study on a commercial high-strength, super-hard aluminum alloy 7A04-T6 are numerically evaluated using the finite element code ABAQUS/Explicit. In the present study, the influence of the fracture criterion in numerical simulations of the deformation and fracture behavior of the Taylor rod has been studied. Included in the paper are a modified version of the Johnson-Cook, the Cockcroft-Latham (C-L), the constant fracture strain, the maximum shear stress, and the maximum principal stress fracture models. Model constants for each criterion are calibrated from material tests. The modified version of the Johnson-Cook fracture criterion with the stress triaxiality cutoff idea is found to give good predictions of the Taylor impact fracture behavior. However, this study also shows that the C-L fracture criterion, for which only one simple material test is required for calibration, gives reasonable predictions. Unfortunately, the other three criteria are not able to reproduce the experimentally obtained fracture behavior. The study indicates that the stress triaxiality cutoff idea is necessary to predict Taylor impact fracture. Supported by the National Natural Science Foundation of China (No. 11072072).

  12. Bioenergy Crop Model Simulation and Evaluation of Miscanthus

    NASA Astrophysics Data System (ADS)

    di Vittorio, A.; Miller, N. L.

    2008-12-01

    With rising demand for biofuels there is a need to convert some abandoned grasslands to biofuel crop cultivation. Miscanthus as a feedstock is a highly productive grass under well-watered and fertilized conditions, and has the potential to efficiently provide biomass and lignocellulosic ethanol. Few studies have addressed Miscanthus under ambient conditions, and those that have focused on mixed grasslands within the endemic range of Miscanthus sp. Here a series of benchmark simulations were performed to evaluate the productivity of Miscanthus giganteus in a converted Mediterranean C3 grassland site under ambient conditions. To assess the potential of Miscanthus as a biofuel crop, we use an ecosystem biogeochemistry model, Biome-BGC, to simulate crop productivity at the Mediterranean site in California. The site is a C3 grassland in the Sierra Nevada foothills and has an annual average precipitation of 61.6 cm. Initial soil and vegetation conditions were determined using historical climate data and current vegetation distributions. The current vegetation was replaced with Miscanthus sp., and Biome-BGC simulations were made for the planting season, growing season, and harvest under ambient climatic conditions. Modeled biofuel crop productivity is compared with published productivities from field studies.

  13. Reliability of Bolton analysis evaluation in tridimensional virtual models

    PubMed Central

    Brandão, Marianna Mendonca; Sobral, Marcio Costal; Vogel, Carlos Jorge

    2015-01-01

    Objective: The present study aimed at evaluating the reliability of Bolton analysis in tridimensional virtual models, comparing it with the manual method carried out with dental casts. Methods: The present investigation was performed using 56 pairs of dental casts produced from the dental arches of patients in perfect conditions and randomly selected from Universidade Federal da Bahia, School of Dentistry, Orthodontics Postgraduate Program. Manual measurements were obtained with the aid of a digital Cen-Tech 4"(r) caliper (Harbor Freight Tools, Calabasas, CA, USA). Subsequently, samples were digitized on a 3Shape(r) R-700T scanner (Copenhagen, Denmark) and digital measurements were obtained with Ortho Analyzer software. Results: Data were subjected to statistical analysis, and results revealed no statistically significant differences between measurements, with p = 0.173 and p = 0.239 for total and anterior proportions, respectively. Conclusion: Based on these findings, it is possible to deduce that Bolton analysis performed on tridimensional virtual models is as reliable as measurements obtained from dental casts, with satisfactory agreement. PMID:26560824
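The Bolton analysis compared above reduces to simple ratios of summed mesiodistal tooth widths: the classic norms are 91.3% for the overall ratio (12 teeth per arch) and 77.2% for the anterior ratio (6 teeth per arch). The sketch below computes the anterior ratio from hypothetical widths; the measurements are invented, not the study's data.

```python
# Illustrative computation of a Bolton tooth-size ratio. The classic norms
# are 91.3% (overall, 12 teeth) and 77.2% (anterior, 6 teeth). Widths in mm
# below are hypothetical.

def bolton_ratio(mandibular_widths, maxillary_widths):
    """Bolton ratio: 100 * (sum of mandibular widths) / (sum of maxillary)."""
    return 100.0 * sum(mandibular_widths) / sum(maxillary_widths)

# Hypothetical anterior (canine-to-canine) mesiodistal widths, mm:
mandibular_6 = [5.3, 5.9, 6.8, 6.8, 5.9, 5.3]
maxillary_6 = [7.0, 7.3, 9.0, 9.0, 7.3, 7.0]

anterior = bolton_ratio(mandibular_6, maxillary_6)
print(anterior)  # compare against the 77.2% anterior norm
```

Whether the ratio is computed from caliper measurements on casts or from distances picked on a scanned 3-D model, the arithmetic is identical; the study's question is only whether the input widths agree.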

  14. Evaluation of Spatial Agreement of Distinct Landslide Prediction Models

    NASA Astrophysics Data System (ADS)

    Sterlacchini, Simone; Bordogna, Gloria; Frigerio, Ivan

    2013-04-01

    derived to test agreement among the maps. Nevertheless, no information was made available about where the predictions of two or more maps agreed and where they did not. Thus we wanted to study whether the models also agreed spatially, predicting the same or similar values. To this end we adopted a previously proposed soft image fusion approach, defined as a group decision-making model for ranking spatial alternatives based on a soft fusion of coherent evaluations. In order to apply this approach, the prediction maps were categorized into 10 distinct classes using an equal-area criterion so that the predicted results could be compared. We then applied soft fusion to the prediction maps, regarded as evaluations by distinct human experts. The fusion process requires the definition of a "fuzzy majority", provided by a linguistic quantifier, to determine the coherence of a majority of maps at each pixel of the territory. On this basis, the overall spatial coherence among a majority of the prediction maps was evaluated. The spatial coherence among a fuzzy majority is defined on the basis of Minkowski OWA operators. The result made it possible to spatially identify sectors of the study area in which the predictions agreed on the same or close classes of susceptibility, or were discordant, even by distant classes. We studied the spatial agreement among a "fuzzy majority" defined as "80% of the 13 coherent maps", thus requiring that at least 11 out of 13 agree, since previous results showed that two maps were in disagreement. The fuzzy majority AtLeast80% was therefore defined by a quantifier with a linearly increasing membership function on (0.8, 1). The coherence metric used was the Euclidean distance. We then computed the soft fusion of AtLeast80% coherent maps for homogeneous groups of classes. We considered as homogeneous classes the highest two classes (9 and 10), the lowest two classes, and the central classes (4, 5 and 6). 
We then fused the maps
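The quantifier-guided OWA aggregation described above can be sketched generically: derive weights from the linguistic quantifier as w_i = Q(i/n) − Q((i−1)/n) and apply them to the values sorted in descending order. The per-pixel class values below are invented; this is a minimal illustration of the mechanism, not the study's actual fusion pipeline.

```python
# Sketch of OWA aggregation with an "AtLeast80%" linguistic quantifier,
# as used to fuse per-pixel susceptibility classes. Values are invented.

def quantifier(x, a=0.8, b=1.0):
    """Linearly increasing membership on (a, b): 0 below a, 1 above b."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    return (x - a) / (b - a)

def owa(values, a=0.8, b=1.0):
    """Ordered weighted average with quantifier-derived weights."""
    n = len(values)
    ordered = sorted(values, reverse=True)
    weights = [quantifier((i + 1) / n, a, b) - quantifier(i / n, a, b)
               for i in range(n)]
    return sum(w * v for w, v in zip(weights, ordered))

# 13 per-map susceptibility classes at one hypothetical pixel (scale 1-10):
classes = [9, 9, 9, 8, 9, 9, 8, 9, 9, 8, 9, 2, 3]
print(owa(classes))
```

With an "at least 80%" quantifier the weight mass sits on the lowest-ranked values of the top 80%+, so a couple of strongly dissenting maps pull the fused score down, which is exactly the conservative behavior a fuzzy-majority fusion is meant to have.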

  15. Global Modeling of Tropospheric Chemistry with Assimilated Meteorology: Model Description and Evaluation

    NASA Technical Reports Server (NTRS)

    Bey, Isabelle; Jacob, Daniel J.; Yantosca, Robert M.; Logan, Jennifer A.; Field, Brendan D.; Fiore, Arlene M.; Li, Qin-Bin; Liu, Hong-Yu; Mickley, Loretta J.; Schultz, Martin G.

    2001-01-01

    We present a first description and evaluation of GEOS-CHEM, a global three-dimensional (3-D) model of tropospheric chemistry driven by assimilated meteorological observations from the Goddard Earth Observing System (GEOS) of the NASA Data Assimilation Office (DAO). The model is applied to a 1-year simulation of tropospheric ozone-NOx-hydrocarbon chemistry for 1994, and is evaluated with observations both for 1994 and for other years. It usually reproduces, to within 10 ppb, the concentrations of ozone observed from the worldwide ozonesonde data network. It simulates correctly the seasonal phases and amplitudes of ozone concentrations for different regions and altitudes, but tends to underestimate the seasonal amplitude at northern midlatitudes. Concentrations of NO and peroxyacetylnitrate (PAN) observed in aircraft campaigns are generally reproduced to within a factor of 2 and often much better. Concentrations of HNO3 in the remote troposphere are overestimated typically by a factor of 2-3, a common problem in global models that may reflect a combination of insufficient precipitation scavenging and gas-aerosol partitioning not resolved by the model. The model yields an atmospheric lifetime of methylchloroform (proxy for global OH) of 5.1 years, as compared to a best estimate from observations of 5.5 plus or minus 0.8 years, and simulates H2O2 concentrations observed from aircraft with significant regional disagreements but no global bias. The OH concentrations are approximately 20% higher than in our previous global 3-D model which included a UV-absorbing aerosol. Concentrations of CO tend to be underestimated by the model, often by 10-30 ppb, which could reflect a combination of excessive OH (a 20% decrease in model OH could be accommodated by the methylchloroform constraint) and an underestimate of CO sources (particularly biogenic). 
The model underestimates observed acetone concentrations over the South Pacific in fall by a factor of 3; a missing source

  16. Evaluation and Applications of the Prediction of Intensity Model Error (PRIME) Model

    NASA Astrophysics Data System (ADS)

    Bhatia, K. T.; Nolan, D. S.; Demaria, M.; Schumacher, A.

    2015-12-01

    Forecasters and end users of tropical cyclone (TC) intensity forecasts would greatly benefit from a reliable expectation of model error to counteract the lack of consistency in TC intensity forecast performance. As a first step towards producing error predictions to accompany each TC intensity forecast, Bhatia and Nolan (2013) studied the relationship between synoptic parameters, TC attributes, and forecast errors. In this study, we build on previous results of Bhatia and Nolan (2013) by testing the ability of the Prediction of Intensity Model Error (PRIME) model to forecast the absolute error and bias of four leading intensity models available for guidance in the Atlantic basin. PRIME forecasts are independently evaluated at each 12-hour interval from 12 to 120 hours during the 2007-2014 Atlantic hurricane seasons. The absolute error and bias predictions of PRIME are compared to their respective climatologies to determine their skill. In addition to these results, we will present the performance of the operational version of PRIME run during the 2015 hurricane season. PRIME verification results show that it can reliably anticipate situations where particular models excel, and therefore could lead to a more informed protocol for hurricane evacuations and storm preparations. These positive conclusions suggest that PRIME forecasts also have the potential to lower the error in the original intensity forecasts of each model. As a result, two techniques are proposed to develop a post-processing procedure for a multimodel ensemble based on PRIME. The first approach is to inverse-weight models using PRIME absolute error predictions (higher predicted absolute error corresponds to lower weights). The second multimodel ensemble applies PRIME bias predictions to each model's intensity forecast and the mean of the corrected models is evaluated. The forecasts of both of these experimental ensembles are compared to those of the equal-weight ICON ensemble, which currently
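The first post-processing idea described above, inverse-weighting ensemble members by their PRIME-predicted absolute error, can be sketched directly: weights are proportional to 1/error and normalized to sum to one. All numbers below are invented placeholders, not real model or PRIME output.

```python
# Sketch of an inverse-error-weighted multimodel intensity ensemble:
# weight_i ∝ 1 / predicted_abs_error_i, normalized to sum to 1.
# Forecast and error values are hypothetical.

def inverse_error_weights(predicted_abs_errors):
    """Normalized weights inversely proportional to predicted absolute error."""
    inv = [1.0 / e for e in predicted_abs_errors]
    total = sum(inv)
    return [w / total for w in inv]

def weighted_ensemble(forecasts_kt, predicted_abs_errors):
    """Weighted mean of the member forecasts (intensity in kt)."""
    weights = inverse_error_weights(predicted_abs_errors)
    return sum(w * f for w, f in zip(weights, forecasts_kt))

forecasts = [95.0, 105.0, 100.0, 110.0]   # four model intensity forecasts (kt)
prime_errors = [5.0, 10.0, 8.0, 20.0]     # PRIME-predicted absolute errors (kt)

print(weighted_ensemble(forecasts, prime_errors))
```

Higher predicted error maps to lower weight, so the ensemble leans toward the models PRIME expects to perform well, unlike an equal-weight consensus such as ICON.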

  17. Evaluation of Model Microphysics within Precipitation Bands of Extratropical Cyclones

    NASA Technical Reports Server (NTRS)

    Colle, Brian A.; Yu, Ruyi; Molthan, Andrew L.; Nesbitt, Steven

    2013-01-01

    Recent studies evaluating the bulk microphysical schemes (BMPs) within cloud resolving models (CRMs) have indicated large uncertainties and errors in the amount and size distributions of snow and cloud ice aloft. The snow prediction is sensitive to the snow densities, habits, and degree of riming within the BMPs. Improving these BMPs is a crucial step toward improving both weather forecasting and climate predictions. Several microphysical schemes in the Weather Research and Forecasting (WRF) model down to 1.33-km grid spacing are evaluated using aircraft, radar, and ground in situ data from the Global Precipitation Mission Coldseason Precipitation Experiment (GCPEx) experiment, as well as a few years (15 winter storms) of surface measurements of riming, crystal habit, snow density, and radar measurements at Stony Brook, NY (SBNY on north shore of Long Island) during the 2009-2012 winter seasons. Surface microphysical measurements at SBNY were taken every 15 to 30 minutes using a stereo microscope and camera, and snow depth and snow density were also recorded. During these storms, a vertically-pointing Ku-band radar was used to observe the vertical evolution of reflectivity and Doppler vertical velocities. A Particle Size and Velocity (PARSIVEL) disdrometer was also used to measure the surface size distribution and fall speeds of snow at SBNY. For the 15 cases at SBNY, the WSM6, Morrison (MORR), Thompson (THOM2), and Stony Brook (SBU-YLIN) BMPs were validated. A non-spherical snow assumption (THOM2 and SBU-YLIN) simulated a more realistic distribution of reflectivity than spherical snow assumptions in the WSM6 and MORR schemes. The MORR, WSM6, and SBU-YLIN schemes are comparable to the observed velocity distribution in light and moderate riming periods. The THOM2 is approx 0.25 m/s too slow with its velocity distribution in these periods. 
In heavier riming, the vertical Doppler velocities in the WSM6, THOM2, and MORR schemes were approx 0.25 m/s too slow, while the

  18. Evaluation of Model Microphysics Within Precipitation Bands of Extratropical Cyclones

    NASA Technical Reports Server (NTRS)

    Colle, Brian A.; Molthan, Andrew; Yu, Ruyi; Stark, David; Yuter, Sandra; Nesbitt, Steven

    2013-01-01

    Recent studies evaluating the bulk microphysical schemes (BMPs) within cloud resolving models (CRMs) have indicated large uncertainties and errors in the amount and size distributions of snow and cloud ice aloft. The snow prediction is sensitive to the snow densities, habits, and degree of riming within the BMPs. Improving these BMPs is a crucial step toward improving both weather forecasting and climate predictions. Several microphysical schemes in the Weather Research and Forecasting (WRF) model down to 1.33-km grid spacing are evaluated using aircraft, radar, and ground in situ data from the Global Precipitation Mission Coldseason Precipitation Experiment (GCPEx) experiment, as well as a few years (15 winter storms) of surface measurements of riming, crystal habit, snow density, and radar measurements at Stony Brook, NY (SBNY on north shore of Long Island) during the 2009-2012 winter seasons. Surface microphysical measurements at SBNY were taken every 15 to 30 minutes using a stereo microscope and camera, and snow depth and snow density were also recorded. During these storms, a vertically-pointing Ku-band radar was used to observe the vertical evolution of reflectivity and Doppler vertical velocities. A Particle Size and Velocity (PARSIVEL) disdrometer was also used to measure the surface size distribution and fall speeds of snow at SBNY. For the 15 cases at SBNY, the WSM6, Morrison (MORR), Thompson (THOM2), and Stony Brook (SBU-YLIN) BMPs were validated. A non-spherical snow assumption (THOM2 and SBU-YLIN) simulated a more realistic distribution of reflectivity than spherical snow assumptions in the WSM6 and MORR schemes. The MORR, WSM6, and SBU-YLIN schemes are comparable to the observed velocity distribution in light and moderate riming periods. The THOM2 is 0.25 meters per second too slow with its velocity distribution in these periods. In heavier riming, the vertical Doppler velocities in the WSM6, THOM2, and MORR schemes were 0.25 meters per second too

  19. Collaborative evaluation and market research converge: an innovative model agricultural development program evaluation in Southern Sudan.

    PubMed

    O'Sullivan, John M; O'Sullivan, Rita

    2012-11-01

    In June and July 2006, a team of outside experts arrived in Yei, Southern Sudan, through an AID project to provide support to a local agricultural development project. The team brought evaluation, agricultural marketing, and financial management expertise to the in-country partners looking at steps to rebuild the economy of the war-ravaged region. A partnership of local officials, agricultural development staff, and students worked with the outside team to craft a survey of agricultural traders working between northern Uganda and Southern Sudan, following the steps of a collaborative evaluation model. The goal was to create a market directory of use to producers, government officials, and others interested in stimulating agricultural trade. The directory of agricultural producers and distributors served as an agricultural development and promotion tool, as did the collaborative process itself. PMID:22309968

  20. Model-based damage evaluation of layered CFRP structures

    NASA Astrophysics Data System (ADS)

    Munoz, Rafael; Bochud, Nicolas; Rus, Guillermo; Peralta, Laura; Melchor, Juan; Chiachío, Juan; Chiachío, Manuel; Bond, Leonard J.

    2015-03-01

    An ultrasonic evaluation technique for damage identification of layered CFRP structures is presented. This approach relies on a model-based estimation procedure that combines experimental data and simulation of ultrasonic damage-propagation interactions. The CFRP structure, a [0/90]4s lay-up, has been tested in an immersion through-transmission experiment, where a scan has been performed on a damaged specimen. Most ultrasonic techniques in industrial practice consider only a few features of the received signals, namely time of flight, amplitude, attenuation, frequency contents, and so forth. In this case, once signals are captured, an algorithm is used to reconstruct the complete signal waveform and extract the unknown damage parameters by means of modeling procedures. A linear version of the data processing has been performed, where only Young's modulus has been monitored, and in a second, nonlinear version, the first-order nonlinear coefficient β was incorporated to test the possibility of detecting early damage. The aforementioned physical simulation models are solved by the Transfer Matrix formalism, which has been extended from the linear case to the nonlinear harmonic generation technique. The damage parameter search strategy is based on minimizing the mismatch between the captured and simulated signals in the time domain in an automated way using Genetic Algorithms. By processing all scanned locations, a C-scan of the parameter of each layer can be reconstructed, obtaining the information describing the state of each layer and each interface. Damage can be located and quantified in terms of changes in the selected parameter with a measurable extension. In the case of the first-order nonlinear coefficient, there is evidence of higher sensitivity to damage than imaging the linearly estimated Young's modulus.
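
    The search strategy described above (minimizing the time-domain mismatch between captured and simulated signals with a genetic algorithm) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the forward model here is a toy damped cosine standing in for the Transfer Matrix simulation, and the GA operators (blend crossover, Gaussian mutation, elitism) are generic choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def forward_model(params, t):
        """Toy stand-in for the Transfer Matrix simulation: a damped cosine
        whose stiffness-like parameter sets the frequency (hypothetical)."""
        stiffness, damping = params
        return np.exp(-damping * t) * np.cos(stiffness * t)

    def mismatch(params, t, captured):
        # Time-domain L2 mismatch between captured and simulated signals
        return np.sum((forward_model(params, t) - captured) ** 2)

    def genetic_search(t, captured, bounds, pop_size=40, generations=60):
        lo, hi = np.array(bounds).T
        pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
        for _ in range(generations):
            cost = np.array([mismatch(p, t, captured) for p in pop])
            order = np.argsort(cost)
            elite = pop[order[: pop_size // 4]]            # selection
            parents = elite[rng.integers(0, len(elite), (pop_size, 2))]
            children = parents.mean(axis=1)                # blend crossover
            children += rng.normal(0, 0.05 * (hi - lo), children.shape)  # mutation
            children = np.clip(children, lo, hi)
            children[0] = elite[0]                         # elitism
            pop = children
        cost = np.array([mismatch(p, t, captured) for p in pop])
        return pop[np.argmin(cost)]

    # Synthetic "captured" signal with known parameters, then inversion
    t = np.linspace(0.0, 5.0, 400)
    true_params = (4.0, 0.6)
    captured = forward_model(true_params, t)
    best = genetic_search(t, captured, bounds=[(1.0, 8.0), (0.1, 2.0)])
    ```

    In the paper's setting the recovered parameters would be mapped back to per-layer properties at each scanned location to build the C-scan.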

  1. MODELING AND BIOPHARMACEUTICAL EVALUATION OF SEMISOLID SYSTEMS WITH ROSEMARY EXTRACT.

    PubMed

    Ramanauskiene, Kristina; Zilius, Modestas; Kancauskas, Marius; Juskaite, Vaida; Cizinauskas, Vytis; Inkeniene, Asta; Petrikaite, Vilma; Rimdeika, Rytis; Briedis, Vitalis

    2016-01-01

    Scientific literature provides a great deal of studies supporting the antioxidant effects of rosemary, which protect the body's cells against reactive oxygen species and their negative impact. Ethanol rosemary extracts were produced by the maceration method. To assess the biological activity of the rosemary extracts, antioxidant and antimicrobial activity tests were performed. Antimicrobial activity tests revealed that Gram-positive microorganisms are most sensitive to liquid rosemary extract, while Gram-negative microorganisms are most resistant to it. For the experiments, five types of semisolid systems were modeled: hydrogel, oleogel, absorption-hydrophobic ointment, oil-in-water-type cream, and water-in-oil-type cream, each containing rosemary extract as an active ingredient. Study results show that liquid rosemary extract was distributed evenly in the aqueous phase of the water-in-oil-type system, forming stable emulsion systems. The research aim was to model semisolid preparations with liquid rosemary extract, determine the influence of excipients on their quality, and perform in vitro studies of the release of active ingredients and of antimicrobial activity. It was found that the oil-in-water-type gel-cream has antimicrobial activity against Staphylococcus epidermidis bacteria and the Candida albicans fungus, while the hydrogel affected only Candida albicans. According to the results of the biopharmaceutical study, the modeled semisolid systems with rosemary extract can be arranged in ascending order of the release of phenolic compounds: water-in-oil-type cream < absorption-hydrophobic ointment < Pionier PLW oleogel < oil-in-water-type eucerin cream < hydrogel < oil-in-water-type gel-cream. Study results showed that the oil-in-water-type gel-cream is the most suitable vehicle for liquid rosemary extract used as an active ingredient. PMID:27008810

  2. Evaluating extreme precipitation events using a mesoscale atmosphere model

    NASA Astrophysics Data System (ADS)

    Yucel, I.; Onen, A.

    2012-04-01

    Evidence shows that global warming or climate change has a direct influence on changes in precipitation and the hydrological cycle. Extreme weather events such as heavy rainfall and flooding are projected to become much more frequent as the climate warms. Mesoscale atmospheric models coupled with land surface models provide efficient forecasts for meteorological events at long lead times, and therefore they should be used for flood forecasting and warning, as they provide more continuous monitoring of precipitation over large areas. This study examines the performance of the Weather Research and Forecasting (WRF) model in producing the temporal and spatial characteristics of a number of extreme precipitation events observed in the West Black Sea Region of Turkey. These extreme precipitation events usually resulted in flood conditions as the associated hydrologic response of the basin. The performance of the WRF system is further investigated by using the three-dimensional variational (3D-VAR) data assimilation scheme within WRF. WRF performance with and without data assimilation at high spatial resolution (4 km) is evaluated by comparison with gauge precipitation and satellite-estimated rainfall data from Multi Precipitation Estimates (MPE). WRF-derived precipitation showed capabilities in capturing the timing of the precipitation extremes and, to some extent, the spatial distribution and magnitude of the heavy rainfall events. These precipitation characteristics are enhanced with the use of the 3D-VAR scheme in the WRF system. Data assimilation improved area-averaged precipitation forecasts by 9 percent, and at some points there is a quantitative match in precipitation events, which is critical for hydrologic forecast applications.

  3. Simulation Modeling and Performance Evaluation of Space Networks

    NASA Technical Reports Server (NTRS)

    Jennings, Esther H.; Segui, John

    2006-01-01

    In space exploration missions, the coordinated use of spacecraft as communication relays increases the efficiency of the endeavors. To conduct trade-off studies of the performance and resource usage of different communication protocols and network designs, JPL designed a comprehensive extendable tool, the Multi-mission Advanced Communications Hybrid Environment for Test and Evaluation (MACHETE). The design and development of MACHETE began in 2000 and is constantly evolving. Currently, MACHETE contains Consultative Committee for Space Data Systems (CCSDS) protocol standards such as Proximity-1, Advanced Orbiting Systems (AOS), Packet Telemetry/Telecommand, Space Communications Protocol Specification (SCPS), and the CCSDS File Delivery Protocol (CFDP). MACHETE uses the Aerospace Corporation's Satellite Orbital Analysis Program (SOAP) to generate the orbital geometry information and contact opportunities. Matlab scripts provide the link characteristics. At the core of MACHETE is a discrete event simulator, QualNet. Delay Tolerant Networking (DTN) is an end-to-end architecture providing communication in and/or through highly stressed networking environments. Stressed networking environments include those with intermittent connectivity, large and/or variable delays, and high bit error rates. To provide its services, the DTN protocols reside at the application layer of the constituent internets, forming a store-and-forward overlay network. The key capabilities of the bundling protocols include custody-based reliability, ability to cope with intermittent connectivity, ability to take advantage of scheduled and opportunistic connectivity, and late binding of names to addresses. In this presentation, we report on the addition of MACHETE models needed to support DTN, namely the Bundle Protocol (BP) model. To illustrate the use of MACHETE with the additional DTN model, we provide an example simulation to benchmark its performance.
We demonstrate the use of the DTN protocol

  4. Carbosoil, a land evaluation model for soil carbon accounting

    NASA Astrophysics Data System (ADS)

    Anaya-Romero, M.; Muñoz-Rojas, M.; Pino, R.; Jordan, A.; Zavala, L. M.; De la Rosa, D.

    2012-04-01

    Belowground carbon content is particularly difficult to quantify, and most of the time it is assumed to be a fixed fraction or ignored for lack of better information. In this respect, this research presents a land evaluation tool, Carbosoil, for predicting soil carbon accounting where such data are scarce or not available, as a new component of MicroLEIS DSS. The pilot study area was a Mediterranean region (Andalusia, Southern Spain) during 1956-2007. Input data were obtained from different data sources and include 1689 soil profiles from Andalusia. Previously, detailed studies of changes in land use (LU) and vegetation carbon stocks, and of soil organic carbon (SOC) dynamics, were carried out. These results showed the influence of LU, climate (mean temperature and rainfall), and soil variables related to SOC dynamics. For instance, SOC stocks decreased in Cambisols and Regosols by 80% when LU changed from forest to heterogeneous agricultural areas. Taking this into account, the input variables considered were LU, site (elevation, slope, erosion, type of drainage, and soil depth), climate (mean winter/summer temperature and annual precipitation), and soil (pH, nitrates, CEC, sand/clay content, bulk density, and field capacity). The available data set was randomly split into two parts: a training set (75%) and a validation set (25%). The model was built by using multiple linear regression. The coefficient of determination (R2) obtained in the calibration and validation of Carbosoil was >0.9 for the considered soil sections (0-25, 25-50, and 50-75 cm). The validation showed the high accuracy of the model and its capacity to discriminate carbon distribution under different climate, LU, and soil management scenarios. The Carbosoil model, together with the methodologies and information generated in this work, will be a useful basis for accurately quantifying and understanding the distribution of soil carbon, helpful for decision makers.
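
    The fitting procedure described above (a random 75/25 training/validation split followed by multiple linear regression and an R2 check) can be sketched as below. The predictors and coefficients are synthetic stand-ins, not the Carbosoil inputs or fitted values; only the profile count (1689) and the split ratio follow the abstract.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Synthetic stand-in for the Carbosoil inputs (site, climate, soil variables)
    n = 1689                                   # soil profiles, as in the abstract
    X = rng.normal(size=(n, 5))                # e.g. elevation, slope, temp, precip, clay
    true_beta = np.array([2.0, -1.5, 0.8, 1.2, -0.5])   # illustrative only
    y = X @ true_beta + 10.0 + rng.normal(scale=0.5, size=n)   # SOC-like response

    # Random 75% training / 25% validation split
    idx = rng.permutation(n)
    train, valid = idx[: int(0.75 * n)], idx[int(0.75 * n):]

    # Ordinary least squares with an intercept column
    A = np.column_stack([np.ones(len(train)), X[train]])
    coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)

    # Validation R^2 = 1 - SSE / SST
    pred = np.column_stack([np.ones(len(valid)), X[valid]]) @ coef
    ss_res = np.sum((y[valid] - pred) ** 2)
    ss_tot = np.sum((y[valid] - y[valid].mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    ```

    Holding out the validation quarter before fitting is what makes the reported R2 > 0.9 a check of predictive accuracy rather than of in-sample fit.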

  5. Evaluation of an in vitro toxicogenetic mouse model for hepatotoxicity

    SciTech Connect

    Martinez, Stephanie M.; Bradford, Blair U.; Soldatow, Valerie Y.; Witek, Rafal; Kaiser, Robert; Stewart, Todd; Amaral, Kirsten; Freeman, Kimberly; Black, Chris; LeCluyse, Edward L.; Ferguson, Stephen S.

    2010-12-15

    Numerous studies support the fact that a genetically diverse mouse population may be useful as an animal model to understand and predict toxicity in humans. We hypothesized that cultures of hepatocytes obtained from a large panel of inbred mouse strains can produce data indicative of inter-individual differences in in vivo responses to hepato-toxicants. In order to test this hypothesis and establish whether in vitro studies using cultured hepatocytes from genetically distinct mouse strains are feasible, we aimed to determine whether viable cells may be isolated from different mouse inbred strains, evaluate the reproducibility of cell yield, viability and functionality over subsequent isolations, and assess the utility of the model for toxicity screening. Hepatocytes were isolated from 15 strains of mice (A/J, B6C3F1, BALB/cJ, C3H/HeJ, C57BL/6J, CAST/EiJ, DBA/2J, FVB/NJ, BALB/cByJ, AKR/J, MRL/MpJ, NOD/LtJ, NZW/LacJ, PWD/PhJ and WSB/EiJ males) and cultured for up to 7 days in traditional 2-dimensional culture. Cells from B6C3F1, C57BL/6J, and NOD/LtJ strains were treated with acetaminophen, WY-14,643 or rifampin and concentration-response effects on viability and function were established. Our data suggest that high yield and viability can be achieved across a panel of strains. Cell function and expression of key liver-specific genes of hepatocytes isolated from different strains and cultured under standardized conditions are comparable. Strain-specific responses to toxicant exposure have been observed in cultured hepatocytes and these experiments open new opportunities for further developments of in vitro models of hepatotoxicity in a genetically diverse population.

  6. A multimedia fate and chemical transport modeling system for pesticides: II. Model evaluation

    NASA Astrophysics Data System (ADS)

    Li, Rong; Scholtz, M. Trevor; Yang, Fuquan; Sloan, James J.

    2011-07-01

    Pesticides have adverse health effects and can be transported over long distances to contaminate sensitive ecosystems. To address problems caused by environmental pesticides we developed a multimedia multi-pollutant modeling system, and here we present an evaluation of the model by comparing modeled results against measurements. The modeled toxaphene air concentrations for two sites, in Louisiana (LA) and Michigan (MI), are in good agreement with measurements (average concentrations agree to within a factor of 2). Because the residue inventory showed no soil residues at these two sites, resulting in no emissions, the concentrations must be caused by transport; the good agreement between the modeled and measured concentrations suggests that the model simulates atmospheric transport accurately. Compared to the LA and MI sites, the measured air concentrations at two other sites having toxaphene soil residues leading to emissions, in Indiana and Arkansas, showed more pronounced seasonal variability (higher in warmer months); this pattern was also captured by the model. The model-predicted toxaphene concentration fraction on particles (0.5-5%) agrees well with measurement-based estimates (3% or 6%). There is also good agreement between modeled and measured dry (1:1) and wet (within a factor of less than 2) depositions in Lake Ontario. Additionally this study identified erroneous soil residue data around a site in Texas in a published US toxaphene residue inventory, which led to very low modeled air concentrations at this site. Except for the erroneous soil residue data around this site, the good agreement between the modeled and observed results implies that both the US and Mexican toxaphene soil residue inventories are reasonably good. This agreement also suggests that the modeling system is capable of simulating the important physical and chemical processes in the multimedia compartments.

  7. Modeling Urban Dynamics Using Random Forest: Implementing Roc and Toc for Model Evaluation

    NASA Astrophysics Data System (ADS)

    Ahmadlou, M.; Delavar, M. R.; Shafizadeh-Moghadam, H.; Tayyebi, A.

    2016-06-01

    The importance of spatial accuracy of land use/cover change maps necessitates the use of high-performance models. To reach this goal, calibrating machine learning (ML) approaches to model land use/cover conversions has received increasing interest among scholars. This originates from the strength of these techniques, which powerfully account for the complex relationships underlying urban dynamics. Compared to other ML techniques, random forest has rarely been used for modeling urban growth. This paper, drawing on information from multi-temporal Landsat satellite images of 1985, 2000, and 2015, calibrates a random forest regression (RFR) model to quantify variable importance and simulate urban change spatial patterns. The results and performance of the RFR model were evaluated using two complementary tools, relative operating characteristics (ROC) and total operating characteristics (TOC), by overlaying the map of observed change and the modeled suitability map for land use change (error map). The suitability map produced by the RFR model showed an area under the curve of 82.48% for ROC, which indicates a very good performance and highlights its appropriateness for simulating urban growth.
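
    The ROC evaluation step (overlaying the modeled suitability map with the observed-change map and computing the area under the curve) can be illustrated with the rank-statistic form of AUC. The suitability scores and change labels below are synthetic, and the RFR model itself is not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def roc_auc(scores, labels):
        """Area under the ROC curve via the rank (Mann-Whitney U) identity:
        AUC = P(score of a changed cell > score of an unchanged cell)."""
        pos, neg = scores[labels == 1], scores[labels == 0]
        greater = (pos[:, None] > neg[None, :]).sum()   # pairwise comparisons
        ties = (pos[:, None] == neg[None, :]).sum()     # ties count half
        return (greater + 0.5 * ties) / (len(pos) * len(neg))

    # Synthetic suitability scores vs. observed change: changed cells tend
    # to receive higher suitability (the separation of 1.3 is illustrative)
    n = 5000
    labels = rng.integers(0, 2, n)
    scores = rng.normal(loc=labels * 1.3, scale=1.0)

    auc = roc_auc(scores, labels)
    ```

    An AUC of 0.5 would mean the suitability map is no better than chance at ranking changed cells above unchanged ones; values in the 0.8 range, as reported in the abstract, indicate strong discrimination.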

  8. MIRAGE: Model Description and Evaluation of Aerosols and Trace Gases

    SciTech Connect

    Easter, Richard C.; Ghan, Steven J.; Zhang, Yang; Saylor, Rick D.; Chapman, Elaine G.; Laulainen, Nels S.; Abdul-Razzak, Hayder; Leung, Lai-Yung R.; Bian, Xindi; Zaveri, Rahul A.

    2004-10-27

    The MIRAGE (Model for Integrated Research on Atmospheric Global Exchanges) modeling system, designed to study the impacts of anthropogenic aerosols on the global environment, is described. MIRAGE consists of a chemical transport model coupled on line with a global climate model. The chemical transport model simulates trace gases, aerosol number, and aerosol chemical component mass [sulfate, MSA, organic matter, black carbon (BC), sea salt, mineral dust] for four aerosol modes (Aitken, accumulation, coarse sea salt, coarse mineral dust) using the modal aerosol dynamics approach. Cloud-phase and interstitial aerosol are predicted separately. The climate model, based on the CCM2, has physically-based treatments of aerosol direct and indirect forcing. Stratiform cloud water and droplet number are simulated using a bulk microphysics parameterization that includes aerosol activation. Aerosol and trace gas species simulated by MIRAGE are presented and evaluated using surface and aircraft measurements. Surface-level SO2 in N. American and European source regions is higher than observed. SO2 above the boundary layer is in better agreement with observations, and surface-level SO2 at marine locations is somewhat lower than observed. Comparison with other models suggests insufficient SO2 dry deposition; increasing the deposition velocity improves simulated SO2. Surface-level sulfate in N. American and European source regions is in good agreement with observations, although the seasonal cycle in Europe is stronger than observed. Surface-level sulfate at high-latitude and marine locations, and sulfate above the boundary layer, are higher than observed. This is attributed primarily to insufficient wet removal; increasing the wet removal improves simulated sulfate at remote locations and aloft. Because of the high sulfate bias, radiative forcing estimates for anthropogenic sulfur in Ghan et al. [2001c] are probably too high. Surface-level DMS is ~40% higher than observed

  9. Evaluating ET estimates from the Simplified Surface Energy Balance (SSEB) model using METRIC model output

    NASA Astrophysics Data System (ADS)

    Senay, G. B.; Budde, M. E.; Allen, R. G.; Verdin, J. P.

    2008-12-01

    Evapotranspiration (ET) is an important component of the hydrologic budget because it expresses the exchange of mass and energy between the soil-water-vegetation system and the atmosphere. Since direct measurement of ET is difficult, various modeling methods are used to estimate actual ET (ETa). Generally, the choice of method for ET estimation depends on the objective of the study and is further limited by the availability of data and the desired accuracy of the ET estimate. Operational monitoring of crop performance requires processing large data sets and a quick response time. A Simplified Surface Energy Balance (SSEB) model was developed by the U.S. Geological Survey's Famine Early Warning Systems Network to estimate irrigation water use in remote places of the world. In this study, we evaluated the performance of the SSEB model against the METRIC (Mapping Evapotranspiration at high Resolution and with Internalized Calibration) model, which has been evaluated by several researchers using lysimeter data. The METRIC model has been proven to provide reliable ET estimates in different regions of the world. Reference ET fractions of both models (ETrF of METRIC vs. ETf of SSEB) were generated and compared using individual Landsat thermal images collected from 2000 through 2005 in Idaho, New Mexico, and California. In addition, the models were compared using monthly and seasonal total ETa estimates. The SSEB model reproduced both the spatial and temporal variability exhibited by METRIC on land surfaces, explaining up to 80 percent of the spatial variability. However, the ETa estimates over water bodies were systematically higher in the SSEB output, which could be improved by using a correction coefficient to take into account the absorption of solar energy by deeper water layers, which contributes little to the ET process. This study demonstrated the usefulness of the SSEB method for large-scale agro-hydrologic applications for operational monitoring and assessing of
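
    The headline comparison (SSEB explaining up to 80 percent of METRIC's spatial variability) amounts to a squared correlation between the two ET-fraction grids. A synthetic sketch, with illustrative rasters, gain, offset, and noise in place of the real ETrF/ETf Landsat-derived maps:

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    # Synthetic ET-fraction grids standing in for METRIC's ETrF (reference)
    # and SSEB's ETf (estimate); the relationship below is illustrative only
    etrf = rng.uniform(0.0, 1.0, size=(200, 200))
    etf = 0.9 * etrf + 0.05 + rng.normal(0.0, 0.1, etrf.shape)

    # Fraction of spatial variability in ETrF explained by ETf:
    # squared Pearson correlation over all pixels
    r = np.corrcoef(etrf.ravel(), etf.ravel())[0, 1]
    explained = r ** 2
    ```

    A pixelwise comparison like this measures agreement in spatial pattern; systematic offsets (such as the water-body bias noted in the abstract) would show up as a non-unit slope or intercept rather than a reduced correlation.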

  10. Physically-based landslide susceptibility modelling: geotechnical testing and model evaluation issues

    NASA Astrophysics Data System (ADS)

    Marchesini, Ivan; Mergili, Martin; Schneider-Muntau, Barbara; Alvioli, Massimiliano; Rossi, Mauro; Guzzetti, Fausto

    2015-04-01

    We used the software r.slope.stability for physically-based landslide susceptibility modelling in the 90 km² Collazzone area, Central Italy, exploiting a comprehensive set of lithological, geotechnical, and landslide inventory data. The model results were evaluated against the inventory. r.slope.stability is a GIS-supported tool for modelling shallow and deep-seated slope stability and slope failure probability at comparatively broad scales. Developed as a raster module of the GRASS GIS software, r.slope.stability evaluates the slope stability for a large number of randomly selected ellipsoidal potential sliding surfaces. The bottom of the soil (for shallow slope stability) or the bedding planes of lithological layers (for deep-seated slope stability) are taken as potential sliding surfaces by truncating the ellipsoids, allowing for the analysis of relatively complex geological structures. To account for the uncertain geotechnical and geometric parameters, r.slope.stability computes the slope failure probability by testing multiple parameter combinations, sampled deterministically or stochastically, and evaluating the ratio between the number of parameter combinations yielding a factor of safety below 1 and the total number of tested combinations. Any single raster cell may be intersected by multiple sliding surfaces, each associated with a slope failure probability; the most critical sliding surface is relevant for each pixel. Intensive use of r.slope.stability in the Collazzone area has opened up two questions elaborated in the present work: (i) To what extent does a larger number of geotechnical tests help to better constrain the geotechnical characteristics of the study area and, consequently, to improve the model results? The ranges of values of cohesion and angle of internal friction obtained through 13 direct shear tests correspond remarkably well to the range of values suggested by a geotechnical textbook. We elaborate how far an increased number of
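
    The failure-probability definition quoted above (the share of sampled parameter combinations whose factor of safety falls below 1) can be sketched with a simple infinite-slope model in place of r.slope.stability's ellipsoidal sliding surfaces. The parameter distributions are illustrative, not the Collazzone test results.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def factor_of_safety(cohesion, phi_deg, slope_deg=35.0, gamma=19.0, depth=2.0):
        """Infinite-slope factor of safety (a simplified stand-in for the
        ellipsoidal sliding surfaces used by r.slope.stability)."""
        beta = np.radians(slope_deg)
        phi = np.radians(phi_deg)
        resisting = cohesion + gamma * depth * np.cos(beta) ** 2 * np.tan(phi)
        driving = gamma * depth * np.sin(beta) * np.cos(beta)
        return resisting / driving

    # Stochastic sampling of the uncertain geotechnical parameters
    # (means and spreads are hypothetical, for illustration only)
    n = 100_000
    cohesion = rng.normal(8.0, 2.0, n).clip(min=0.0)   # kPa
    phi = rng.normal(30.0, 4.0, n)                     # degrees

    fos = factor_of_safety(cohesion, phi)
    p_failure = np.mean(fos < 1.0)   # fraction of combinations with FoS < 1
    ```

    Tightening the cohesion and friction-angle distributions (as more shear tests would allow) shrinks the spread of the FoS distribution, which is exactly why question (i) above matters for the resulting probability map.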

  11. Critical evaluation and modeling of algal harvesting using dissolved air flotation. DAF Algal Harvesting Modeling

    DOE PAGESBeta

    Zhang, Xuezhi; Hewson, John C.; Amendola, Pasquale; Reynoso, Monica; Sommerfeld, Milton; Chen, Yongsheng; Hu, Qiang

    2014-07-14

    In our study, Chlorella zofingiensis harvesting by dissolved air flotation (DAF) was critically evaluated with regard to algal concentration, culture conditions, type and dosage of coagulants, and recycle ratio. Harvesting efficiency increased with coagulant dosage and leveled off at 81%, 86%, 91%, and 87% when chitosan, Al3+, Fe3+, and cetyl trimethylammonium bromide (CTAB) were used at dosages of 70, 180, 250, and 500 mg g-1, respectively. The DAF efficiency-coagulant dosage relationship changed with algal culture conditions. Evaluating the influence of the initial algal concentration and recycle ratio revealed that, under conditions typical for algal harvesting, the number of bubbles may be insufficient. A DAF algal harvesting model was developed to explain this observation by introducing mass-based floc size distributions and a bubble limitation into the white water blanket model. Moreover, the model revealed the importance of coagulation in increasing floc-bubble collision and attachment, and the preferential interaction of bubbles with larger flocs, which limited the availability of bubbles to the smaller flocs. The harvesting efficiencies predicted by the model agree reasonably with experimental data obtained at different Al3+ dosages, algal concentrations, and recycle ratios. Based on this modeling, critical parameters for efficient algal harvesting were identified.

  12. Critical evaluation and modeling of algal harvesting using dissolved air flotation. DAF Algal Harvesting Modeling

    SciTech Connect

    Zhang, Xuezhi; Hewson, John C.; Amendola, Pasquale; Reynoso, Monica; Sommerfeld, Milton; Chen, Yongsheng; Hu, Qiang

    2014-07-14

    In our study, Chlorella zofingiensis harvesting by dissolved air flotation (DAF) was critically evaluated with regard to algal concentration, culture conditions, type and dosage of coagulants, and recycle ratio. Harvesting efficiency increased with coagulant dosage and leveled off at 81%, 86%, 91%, and 87% when chitosan, Al3+, Fe3+, and cetyl trimethylammonium bromide (CTAB) were used at dosages of 70, 180, 250, and 500 mg g-1, respectively. The DAF efficiency-coagulant dosage relationship changed with algal culture conditions. Evaluating the influence of the initial algal concentration and recycle ratio revealed that, under conditions typical for algal harvesting, the number of bubbles may be insufficient. A DAF algal harvesting model was developed to explain this observation by introducing mass-based floc size distributions and a bubble limitation into the white water blanket model. Moreover, the model revealed the importance of coagulation in increasing floc-bubble collision and attachment, and the preferential interaction of bubbles with larger flocs, which limited the availability of bubbles to the smaller flocs. The harvesting efficiencies predicted by the model agree reasonably with experimental data obtained at different Al3+ dosages, algal concentrations, and recycle ratios. Based on this modeling, critical parameters for efficient algal harvesting were identified.

  13. ROCKY MOUNTAIN ACID DEPOSITION MODEL ASSESSMENT: EVALUATION OF MESOSCALE ACID DEPOSITION MODELS FOR USE IN COMPLEX TERRAIN

    EPA Science Inventory

    The report includes an evaluation of candidate meteorological models and acid deposition models. The hybrid acid deposition/air quality modeling system for the Rocky Mountains makes use of a mesoscale meteorological model, which includes a new diagnostic wind model, as a driver f...

  14. Evaluating the SWAT Model for Hydrological Modeling in the Xixian Watershed and A Comparison with the XAJ Model

    SciTech Connect

    Shi, Peng; Chen, Chao; Srinivasan, Raghavan; Zhang, Xuesong; Cai, Tao; Fang, Xiuqin; Qu, Simin; Chen, Xi; Li, Qiongfang

    2011-09-10

    Already declining water availability in the Huaihe River, the sixth largest river in China, is further stressed by climate change and intense human activities. There is a pressing need for a watershed model to better understand the interaction between land use activities and hydrologic processes and to support sustainable water use planning. In this study, we evaluated the performance of SWAT for hydrologic modeling in the Xixian River Basin, located at the headwaters of the Huaihe River, and compared its performance with the Xinanjiang (XAJ) model that has been widely used in China.

  15. Evaluating the performance of copula models in phase I-II clinical trials under model misspecification

    PubMed Central

    2014-01-01

    Background: Traditionally, phase I oncology trials are designed to determine the maximum tolerated dose (MTD), defined as the highest dose with an acceptable probability of dose-limiting toxicities (DLT), of a new treatment via a dose escalation study. An alternate approach is to jointly model toxicity and efficacy and allow dose escalation to depend on a pre-specified efficacy/toxicity tradeoff in a phase I-II design. Several phase I-II trial designs have been discussed in the literature; while these model-based designs are attractive in their performance, they are potentially vulnerable to model misspecification. Methods: Phase I-II designs often rely on copula models to specify the joint distribution of toxicity and efficacy, which include an additional correlation parameter that can be difficult to estimate. We compare and contrast three models for the joint probability of toxicity and efficacy: two copula models that have been proposed for use in phase I-II clinical trials and a simple model that assumes the two outcomes are independent. We evaluate the performance of the various models through simulation, both when the models are correct and under model misspecification. Results: Both copula models exhibited similar performance, as measured by the probability of correctly identifying the optimal dose and the number of subjects treated at the optimal dose, regardless of whether the data were generated from the correct or incorrect copula, even when there is substantial correlation between the two outcomes. Similar results were observed for the simple model that assumes independence, even in the presence of strong correlation. Further simulation results indicate that estimating the correlation parameter in copula models is difficult with the sample sizes used in phase I-II clinical trials.
Conclusions: Our simulation results indicate that the operating characteristics of phase I-II clinical trials are robust to misspecification of the copula model but that a simple
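
    The copula idea at the heart of these designs (a joint toxicity/efficacy distribution with fixed marginals plus a correlation parameter) can be illustrated with a Gaussian copula. This is a generic sketch, not one of the two copulas compared in the paper, and the probabilities and correlation are arbitrary.

    ```python
    import numpy as np
    from statistics import NormalDist

    rng = np.random.default_rng(3)

    def joint_outcomes(p_tox, p_eff, rho, n):
        """Correlated (toxicity, efficacy) Bernoulli pairs via a Gaussian
        copula with latent correlation rho (one common phase I-II choice)."""
        cov = [[1.0, rho], [rho, 1.0]]
        z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
        # Thresholding each latent normal at the marginal quantile keeps the
        # marginal toxicity/efficacy probabilities exact for any rho
        tox = (z[:, 0] < NormalDist().inv_cdf(p_tox)).astype(int)
        eff = (z[:, 1] < NormalDist().inv_cdf(p_eff)).astype(int)
        return tox, eff

    tox, eff = joint_outcomes(p_tox=0.25, p_eff=0.5, rho=0.6, n=200_000)
    ```

    Because rho only shifts the joint cell probabilities while the marginals stay fixed, dose-finding performance driven mainly by the marginals can remain robust even when the correlation parameter, as the simulations above suggest, is poorly estimated.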

  16. Modeling irrigation-based climate change adaptation in agriculture: Model development and evaluation in Northeast China

    NASA Astrophysics Data System (ADS)

    Okada, Masashi; Iizumi, Toshichika; Sakurai, Gen; Hanasaki, Naota; Sakai, Toru; Okamoto, Katsuo; Yokozawa, Masayuki

    2015-09-01

    Replacing a rainfed cropping system with an irrigated one is widely assumed to be an effective measure for climate change adaptation. However, many agricultural impact studies have not necessarily accounted for the space-time variations in water availability under changing climate and land use. Moreover, many hydrologic and agricultural assessments of climate change impacts are not fully integrated. To overcome this shortcoming, a tool that can simultaneously simulate the dynamic interactions between crop production and water resources in a watershed is essential. Here we propose the regional production and circulation coupled model (CROVER) by embedding the PRYSBI-2 (Process-based Regional Yield Simulator with Bayesian Inference version 2) large-area crop model into the global water resources model H08, and apply this model to the Songhua River watershed in Northeast China. The evaluation reveals that the model's performance in capturing the major characteristics of historical change in surface soil moisture, river discharge, actual crop evapotranspiration, and soybean yield relative to the reference data during the interval 1979-2010 is satisfactory. Simulation experiments using the model demonstrated that subregional irrigation management, such as designating the area to which irrigation is primarily applied, has measurable influences on regional crop production in a drought year. This finding suggests that reassessing climate change risk in agriculture using this type of modeling is crucial to avoid overestimating the potential of irrigation-based adaptation.

  17. Evaluation of fish models of soluble epoxide hydrolase inhibition.

    PubMed Central

    Newman, J W; Denton, D L; Morisseau, C; Koger, C S; Wheelock, C E; Hinton, D E; Hammock, B D

    2001-01-01

Substituted ureas and carbamates are mechanistic inhibitors of the soluble epoxide hydrolase (sEH). We screened a set of chemicals containing these functionalities in larval fathead minnow (Pimephales promelas) and embryo/larval golden medaka (Oryzias latipes) models to evaluate the utility of these systems for investigating sEH inhibition in vivo. Both fathead minnow and medaka sEHs were functionally similar to the tested mammalian orthologs (murine and human) with respect to substrate hydrolysis and inhibitor susceptibility. Low lethality was observed in either larval or embryonic fish exposed to diuron [N-(3,4-dichlorophenyl), N'-dimethyl urea], desmethyl diuron [N-(3,4-dichlorophenyl), N'-methyl urea], or siduron [N-(1-methylcyclohexyl), N'-phenyl urea]. Dose-dependent inhibition of sEH was a sublethal effect of substituted urea exposure with the potency of siduron < desmethyl diuron = diuron, differing from the observed in vitro sEH inhibition potency of siduron > desmethyl diuron > diuron. Further, siduron exposure synergized the toxicity of trans-stilbene oxide in fathead minnows. Medaka embryos exposed to diuron, desmethyl diuron, or siduron displayed dose-dependent delays in hatch, and elevated concentrations of diuron and desmethyl diuron produced developmental toxicity. The dose-dependent toxicity and in vivo sEH inhibition correlated, suggesting a potential, albeit undefined, relationship between these factors. Additionally, the observed inversion of in vitro to in vivo potency suggests that these fish models may provide tools for investigating the in vivo stability of in vitro inhibitors while screening for untoward effects. PMID:11171526

  18. Underwater blasting effects' models: A critical evaluation of IBLAST

    SciTech Connect

    Hempen, G.L.; Keevin, T.M.

    1995-12-31

A user-friendly program for estimating environmental effects from underwater explosions is presently available. The IBLAST program (Coastline Environmental Services 1986) includes estimates both for borehole-loaded explosives, for demolition or rock removal purposes, and for midwater shots. Midwater shots may be used for fish population assessments or some seismic surveys. The program produces approximate mortality distances for planning purposes. IBLAST does, however, have serious shortcomings of which users should be aware. First, the equipment chosen for recording underwater pressures led to some moot results: the underwater recording equipment's time increment was too long to adequately record blast pressures. Blast data must be processed on a microsecond basis, and the inability to recover blast records at short time increments eliminates the analysis of pressure data. The subject study and many earlier field assessments of pressure used millisecond intervals, which are much too long. Secondly, some pressure values may have exceeded the range of the recording system, causing clipping of the record. Impulse and energy assessments are determined from the pressure wave; therefore, inaccurate pressures yield invalid impulse values. Other assumptions of IBLAST would also be more useful if revised. Lastly, differing species will have differing mortalities. Current studies being completed by the St. Louis District (SLD) have yielded data on caged bluegill mortality due to shallow, mid-water blasting effects. Microsecond recording increments were made with presently available equipment and provide significantly different records than those on which IBLAST was based. The SLD work is presently being evaluated for adjustments to IBLAST or the creation of a new model. IBLAST can be the basis of mitigation determinations, and any model revision will aid underwater blasting mitigation efforts.

  19. Evaluating antithrombotic activity of HY023016 on rat hypercoagulable model.

    PubMed

    Chen, Qiu-Fang; Li, Yun-Zhan; Wang, Xin-Hui; Su, You-Rui; Cui, Shuang; Miao, Ming-Xing; Jiang, Zhen-Zhou; Jiang, Mei-Ling; Jiang, Ai-Dou; Chen, Xiang; Xu, Yun-Gen; Gong, Guo-Qing

    2016-06-15

The generation of thrombus is not considered an isolated progression; other pathological processes may also enhance the procoagulant state. The purpose of this study was to assess whether HY023016, a novel dabigatran prodrug and oral direct thrombin inhibitor, or dabigatran etexilate, another thrombin inhibitor, can improve the state of whole-blood hypercoagulability in vitro and in vivo. Using whole-blood flow cytometry, we explored the effects of HY023016 and dabigatran etexilate on thrombin- and ADP-induced human platelet-leukocyte aggregation generated in vitro. By continuous intravenous infusion of thrombin, we successfully established a rat hypercoagulable model and evaluated the effect of HY023016 or dabigatran etexilate in vivo. HY023016 inhibited thrombin- or ADP-induced platelet P-selectin or CD40L expression, leukocyte CD11b expression, and formation of platelet-leukocyte aggregates in a dose-dependent manner. Dabigatran etexilate did not affect ADP-induced platelet P-selectin or CD40L expression, leukocyte CD11b expression, or formation of platelet-leukocyte aggregates. In the rat hypercoagulable model, dabigatran etexilate reversed the thrombin-induced hypercoagulable state of the circulatory system in a concentration-dependent manner. Dabigatran etexilate also inhibited electrical-stimulation-induced formation of arterial thrombus in rats under the hypercoagulable state, and extracorporeal-circulation-induced formation of thrombus, in a dose-dependent manner. Compared with dabigatran etexilate, HY023016 showed nearly equal or even better antithrombotic activity, whether in reversing the rat hypercoagulable state or in inhibiting platelet-leukocyte aggregation. In summary, HY023016 could effectively improve the hypercoagulable state of the circulatory system. PMID:27085896

  20. Intranasal curcumin and its evaluation in murine model of asthma.

    PubMed

    Subhashini; Chauhan, Preeti S; Kumari, Sharda; Kumar, Jarajana Pradeep; Chawla, Ruchi; Dash, D; Singh, Mandavi; Singh, Rashmi

    2013-11-01

Curcumin, a phytochemical present in turmeric, the rhizome of Curcuma longa, has been shown to have a wide variety of pharmacological activities, including anti-inflammatory, anti-allergic and anti-asthmatic properties. Curcumin's low systemic bioavailability and rapid metabolism through the oral route have limited its applications. Over recent decades, interest in intranasal delivery as a non-invasive route for drugs has increased, since the nasal mucosa offers numerous benefits as a target tissue for drug delivery. In this study, we evaluated intranasal curcumin following its absorption through the nasal mucosa, using a sensitive and validated high-performance liquid chromatography (HPLC) method for the determination of intranasal curcumin in mouse blood plasma and lung tissue. Intranasal curcumin was detected in plasma from 15 min to 3 h at a pharmacological dose (5 mg/kg, i.n.), at which it showed anti-asthmatic potential by inhibiting bronchoconstriction and inflammatory cell recruitment to the lungs. At this considerably lower dose it proved better than the standard drug disodium cromoglycate (DSCG 50 mg/kg, i.p.) in reducing inflammatory cell infiltration and histamine release in a mouse model of asthma. HPLC detection revealed that curcumin absorption in the lungs began 30 min after intranasal administration and was retained until 3 h before declining. The present investigations suggest that intranasal curcumin (5.0 mg/kg, i.n.) was effectively absorbed, was detected in both plasma and lungs, and suppressed airway inflammation at doses lower than those used earlier for detection (100-200 mg/kg, i.p.) and for pharmacological studies (10-20 mg/kg, i.p.) in the mouse model of asthma. The present study supports the possibility of curcumin as a complementary medication in the development of nasal drops to prevent airway inflammation and bronchoconstriction in asthma without side effects. PMID:24021755

  1. Logic Models for Program Design, Implementation, and Evaluation: Workshop Toolkit. REL 2015-057

    ERIC Educational Resources Information Center

    Shakman, Karen; Rodriguez, Sheila M.

    2015-01-01

    The Logic Model Workshop Toolkit is designed to help practitioners learn the purpose of logic models, the different elements of a logic model, and the appropriate steps for developing and using a logic model for program evaluation. Topics covered in the sessions include an overview of logic models, the elements of a logic model, an introduction to…

  2. Evaluation of Aerosol-Cloud Interactions in GISS ModelE Using ASR Observations

    NASA Astrophysics Data System (ADS)

    de Boer, G.; Menon, S.; Bauer, S. E.; Toto, T.; Bennartz, R.; Cribb, M.

    2011-12-01

The impacts of aerosol particles on clouds continue to rank among the largest uncertainties in global climate simulation. In this work we assess the capability of the NASA GISS ModelE, coupled to MATRIX aerosol microphysics, in correctly representing warm-phase aerosol-cloud interactions. This evaluation is completed through the analysis of a nudged, multi-year global simulation using measurements from various US Department of Energy sponsored measurement campaigns and satellite-based observations. Campaign observations include the Aerosol Intensive Operations Period (Aerosol IOP) and Routine ARM Aerial Facility Clouds with Low Optical Water Depths (CLOWD) Optical Radiative Observations (RACORO) at the Southern Great Plains site in Oklahoma, the Marine Stratus Radiation, Aerosol, and Drizzle (MASRAD) campaign at Pt. Reyes, California, and the ARM mobile facility's 2008 deployment to China. This combination of datasets provides a variety of aerosol and atmospheric conditions under which to test ModelE parameterizations. In addition to these localized comparisons, we provide the results of global evaluations completed using measurements derived from satellite remote sensors. We will provide a basic overview of simulation performance, as well as a detailed analysis of parameterizations relevant to aerosol indirect effects.

  3. A Program Evaluation Model: Using Bloom's Taxonomy to Identify Outcome Indicators in Outcomes-Based Program Evaluations

    ERIC Educational Resources Information Center

    McNeil, Rita C.

    2011-01-01

    Outcomes-based program evaluation is a systematic approach to identifying outcome indicators and measuring results against those indicators. One dimension of program evaluation is assessing the level of learner acquisition to determine if learning objectives were achieved as intended. The purpose of the proposed model is to use Bloom's Taxonomy to…

  4. An Organizational Model to Distinguish between and Integrate Research and Evaluation Activities in a Theory Based Evaluation

    ERIC Educational Resources Information Center

    Sample McMeeking, Laura B.; Basile, Carole; Cobb, R. Brian

    2012-01-01

    Theory-based evaluation (TBE) is an evaluation method that shows how a program will work under certain conditions and has been supported as a viable, evidence-based option in cases where randomized trials or high-quality quasi-experiments are not feasible. Despite the model's widely accepted theoretical appeal there are few examples of its…

  5. A Model for Program Development and Evaluation: The Formative Role of Summative Evaluation and Research in Science Education.

    ERIC Educational Resources Information Center

    Pines, A. Leon

    This paper describes a specific case study of the development of a continued evaluation program in science education. Introduced is the Audio-Tutorial Elementary Science Project (A-TESP) developed by Joseph D. Novak, which provides a model of program development, implementation, and evaluation. Three phases, characterized as the pre-developmental,…

  6. Evaluation of air pollution modelling tools as environmental engineering courseware.

    PubMed

    Souto González, J A; Bello Bugallo, P M; Casares Long, J J

    2004-01-01

The study of phenomena related to the dispersion of pollutants usually takes advantage of mathematical models based on descriptions of the different processes involved. This educational approach is especially important in air pollution dispersion, where the processes behave non-linearly, making it difficult to understand the relationships between inputs and outputs, and in a 3D context, where it becomes hard to analyze alphanumeric results. In this work, three different software tools, serving as computer solvers for typical air pollution dispersion phenomena, are presented. Each software tool, developed to be implemented on PCs, follows an approach representing one of three generations of programming languages (Fortran 77, Visual Basic and Java), applied over three different environments: MS-DOS, MS-Windows and the world wide web. The software tools were tested by students of environmental engineering (undergraduate) and chemical engineering (postgraduate), in order to evaluate the ability of these software tools to improve both theoretical and practical knowledge of the air pollution dispersion problem, and the impact of the different environments on the learning process in terms of content, ease of use and visualization of results. PMID:15193095

  7. Biomechanical modelling and evaluation of construction jobs for performance improvement.

    PubMed

    Parida, Ratri; Ray, Pradip Kumar

    2012-01-01

Occupational risk factors related to construction MMH activities, such as awkward posture, repetition, lack of rest, insufficient illumination and heavy workload, may cause musculoskeletal disorders and poor worker performance. Ergonomic design of construction worksystems was therefore a critical need for improving workers' health and safety, for which dynamic biomechanical models were required to be empirically developed and tested at a construction site of Tata Steel, the largest private-sector steel-making company in India. In this study, a comprehensive framework is proposed for biomechanical evaluation of shovelling and grinding under diverse work environments. The benefit of such an analysis lies in its usefulness in setting guidelines for designing such jobs so as to minimize the risk of musculoskeletal disorders (MSDs) and in reinforcing correct methods of carrying out the jobs, leading to reduced fatigue and physical stress. Data based on direct observations and videography were collected for the shovellers and grinders over a number of work cycles. Compressive forces and moments for a number of segments and joints were computed with respect to joint flexion and extension. The results indicate that moments and compressive forces at the L5/S1 link are significant for shovellers, while moments at the elbow and wrist are significant for grinders. PMID:22317733

  8. Linear multivariate evaluation models for spatial perception of soundscape.

    PubMed

    Deng, Zhiyong; Kang, Jian; Wang, Daiwei; Liu, Aili; Kang, Joe Zhengyu

    2015-11-01

Soundscape is a sound environment that emphasizes the awareness of auditory perception and social or cultural understandings. Spatial perception is significant to soundscape, yet previous studies on the auditory spatial perception of the soundscape environment have been limited. Based on 21 native binaural-recorded soundscape samples and a set of auditory experiments for subjective spatial perception (SSP), an analysis among semantic parameters, the inter-aural cross-correlation coefficient (IACC), the A-weighted equivalent sound pressure level (Leq), dynamic (D), and SSP is introduced to verify the independent effect of each parameter and to re-determine some of their possible relationships. The results show that the more noisiness the audience perceived, the worse their spatial awareness, while the closer and more directional the sound-source image variations, dynamics, and numbers of sound sources in the soundscape are, the better the spatial awareness. Thus, the sensations of roughness, sound intensity, and transient dynamic, and the values of Leq and IACC, have a suitable range for better spatial perception. Better spatial awareness also seems to promote audience preference slightly. Finally, setting SSPs as functions of the semantic parameters and Leq-D-IACC, two linear multivariate evaluation models of subjective spatial perception are proposed. PMID:26627762
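A linear multivariate evaluation model of the kind described above can be sketched as an ordinary least-squares fit of SSP against Leq, D, and IACC. The data and coefficients below are illustrative placeholders, not values from the study:

```python
import numpy as np

# Illustrative stand-in data for 21 soundscape samples (hypothetical values):
# Leq (A-weighted equivalent sound pressure level, dB), D (dynamic), IACC.
rng = np.random.default_rng(0)
leq = rng.uniform(50.0, 75.0, 21)
d = rng.uniform(5.0, 20.0, 21)
iacc = rng.uniform(0.2, 0.9, 21)
# A made-up "true" relationship plus noise, standing in for subjective ratings.
ssp = 8.0 - 0.05 * leq + 0.1 * d + 2.0 * iacc + rng.normal(0.0, 0.3, 21)

# Fit SSP = b0 + b1*Leq + b2*D + b3*IACC by ordinary least squares.
X = np.column_stack([np.ones_like(leq), leq, d, iacc])
coef, *_ = np.linalg.lstsq(X, ssp, rcond=None)

# Goodness of fit (coefficient of determination).
predicted = X @ coef
r2 = 1.0 - np.sum((ssp - predicted) ** 2) / np.sum((ssp - ssp.mean()) ** 2)
print(coef, r2)
```

With real ratings, the fitted coefficients would quantify the independent contribution of each acoustic parameter to spatial perception, which is the role the Leq-D-IACC model plays in the abstract.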

  9. Field evaluation of an avian risk assessment model

    USGS Publications Warehouse

    Vyas, N.B.; Spann, J.W.; Hulse, C.S.; Borges, S.L.; Bennett, R.S.; Torrez, M.; Williams, B.I.; Leffel, R.

    2006-01-01

We conducted two laboratory subacute dietary toxicity tests and one outdoor subacute dietary toxicity test to determine the effectiveness of the U.S. Environmental Protection Agency's deterministic risk assessment model for evaluating the potential of adverse effects to birds in the field. We tested technical-grade diazinon and its DZN 50W (50% diazinon active ingredient wettable powder) formulation on Canada goose (Branta canadensis) goslings. Brain acetylcholinesterase activity was measured, and the feathers, skin, feet, and gastrointestinal contents were analyzed for diazinon residues. The dose-response curves showed that diazinon was significantly more toxic to goslings in the outdoor test than in the laboratory tests. The deterministic risk assessment method identified the potential for risk to birds in general, but the factors associated with extrapolating from the laboratory to the field, and from the laboratory test species to other species, resulted in the underestimation of risk to the goslings. The present study indicates that laboratory-based risk quotients should be interpreted with caution.
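The deterministic screening step behind such assessments reduces to a risk quotient: an estimated exposure divided by a toxicity endpoint, compared against a level of concern. The numbers and threshold below are illustrative, not the study's data:

```python
def risk_quotient(estimated_exposure: float, toxicity_endpoint: float) -> float:
    """Deterministic risk quotient: estimated exposure concentration divided by
    a toxicity endpoint (e.g., a dietary EEC over an LC50). A quotient at or
    above the level of concern flags potential risk for further evaluation."""
    return estimated_exposure / toxicity_endpoint

# Hypothetical values: dietary exposure of 120 ppm against an LC50 of 300 ppm.
rq = risk_quotient(120.0, 300.0)
LEVEL_OF_CONCERN = 0.5  # example threshold, not a regulatory value
print(rq, rq >= LEVEL_OF_CONCERN)  # 0.4 False
```

The abstract's caution applies exactly here: a quotient below the threshold in a laboratory-derived screen can still understate field risk, as the outdoor gosling test showed.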

  10. Maintaining Adherence Programme: evaluation of an innovative service model

    PubMed Central

    Lewis, Llewellyn; O'Keeffe, Christine; Smyth, Ian; Mallalieu, Judi; Baldock, Laura; Oliver, Sam

    2016-01-01

    Aims and method The Maintaining Adherence Programme (MAP) is a new model of care for patients with schizophrenia, schizoaffective disorder and bipolar affective disorder which aims to encourage adherence and prevent relapse. This evaluation, conducted by retrospective and prospective data collection (including patient questionnaires and staff interviews), aimed to describe MAP's impact on healthcare resource use, clinical measures and patient and staff satisfaction, following its implementation in a university National Health Service (NHS) foundation trust in England. We included 143 consenting patients who entered MAP before 31 March 2012. Results In-patient bed days and non-MAP NHS costs reduced significantly in the 18 months post-MAP entry. At 15–18 months post-MAP, Medication Adherence Rating Scale scores had improved significantly from baseline and there was a shift towards less severe clinician-rated disease categories. Based on patient surveys, 96% would recommend MAP to friends, and staff were also overwhelmingly positive about the service. Clinical implications MAP was associated with reduced cost of treatment, improvements in clinical outcomes and very high patient and staff satisfaction. PMID:26958352

  11. A MODEL FOR THE EVALUATION OF A TESTING PROGRAM.

    ERIC Educational Resources Information Center

    COX, RICHARD C.; UNKS, NANCY J.

    THE EVALUATION OF AN EDUCATIONAL PROGRAM TYPICALLY IMPLIES MEASUREMENT. MEASUREMENT, IN TURN, IMPLIES TESTING IN ONE FORM OR ANOTHER. IN ORDER TO CARRY OUT THE TESTING NECESSARY FOR THE EVALUATION OF AN EDUCATIONAL PROGRAM, RESEARCHERS OFTEN DEVELOP A COMPLETE TESTING SUB-PROGRAM. THE EVALUATION OF THE TOTAL PROJECT MAY DEPEND UPON THE TESTING…

  12. Understanding Evaluation Influence within Public Sector Partnerships: A Conceptual Model

    ERIC Educational Resources Information Center

    Appleton-Dyer, Sarah; Clinton, Janet; Carswell, Peter; McNeill, Rob

    2012-01-01

    The importance of evaluation use has led to a large amount of theoretical and empirical study. Evaluation use, however, is still not well understood. There is a need to capture the complexity of this phenomenon across a diverse range of contexts. In response to such complexities, the notion of "evaluation influence" emerged. This article presents…

  13. A Meta-Model for Evaluating Information Retrieval Serviceability.

    ERIC Educational Resources Information Center

    Hjerppe, Roland

    This document first outlines considerations relative to a systems approach to evaluation, and then argues for such an approach to the evaluation of information retrieval systems (ISR). The criterion of such evaluations should be the utility of the information retrieved to the user, and the ISR ought to be regarded as one of three interrelated…

  14. Using Hierarchical Linear Modeling for Proformative Evaluation: A Case Example

    ERIC Educational Resources Information Center

    Coryn, Chris L. S.

    2007-01-01

    Proformative evaluation--first introduced in Scriven's (2006) "The great enigma: An evaluation design puzzle"--"is motivated, like formative evaluation, by the intention to improve something that is still developing, but unlike formative, the improvement is only possible by taking action, hence proactive instead of reactive, hence both, hence…

  15. A Program Evaluation Model for Migrant Higher Education.

    ERIC Educational Resources Information Center

    California State Univ., Fresno.

    This technical report outlines the evaluation methodology used to conduct the 1984-85 High School Equivalency Programs/College Assistance for Migrant Programs (HEP/CAMP) National Evaluation Project, with special emphasis upon how this methodology might be adapted to meet evaluation requirements of local HEP and CAMP programs. Section 1 describes…

  16. Evaluating pharmacological models of high and low anxiety in sheep

    PubMed Central

    Lee, Caroline; McGill, David M.; Mendl, Michael

    2015-01-01

    New tests of animal affect and welfare require validation in subjects experiencing putatively different states. Pharmacological manipulations of affective state are advantageous because they can be administered in a standardised fashion, and the duration of their action can be established and tailored to suit the length of a particular test. To this end, the current study aimed to evaluate a pharmacological model of high and low anxiety in an important agricultural and laboratory species, the sheep. Thirty-five 8-month-old female sheep received either an intramuscular injection of the putatively anxiogenic drug 1-(m-chlorophenyl)piperazine (mCPP; 1 mg/kg; n = 12), an intravenous injection of the putatively anxiolytic drug diazepam (0.1 mg/kg; n = 12), or acted as a control (saline intramuscular injection n = 11). Thirty minutes after the treatments, sheep were individually exposed to a variety of tests assessing their general movement, performance in a ‘runway task’ (moving down a raceway for a food reward), response to startle, and behaviour in isolation. A test to assess feeding motivation was performed 2 days later following administration of the drugs to the same animals in the same manner. The mCPP sheep had poorer performance in the two runway tasks (6.8 and 7.7 × slower respectively than control group; p < 0.001), a greater startle response (1.4 vs. 0.6; p = 0.02), a higher level of movement during isolation (9.1 steps vs. 5.4; p < 0.001), and a lower feeding motivation (1.8 × slower; p < 0.001) than the control group, all of which act as indicators of anxiety. These results show that mCPP is an effective pharmacological model of high anxiety in sheep. Comparatively, the sheep treated with diazepam did not display any differences compared to the control sheep. Thus we suggest that mCPP is an effective treatment to validate future tests aimed at assessing anxiety in sheep, and that future studies should include other subtle indicators of positive

  17. Evaluating pharmacological models of high and low anxiety in sheep.

    PubMed

    Doyle, Rebecca E; Lee, Caroline; McGill, David M; Mendl, Michael

    2015-01-01

    New tests of animal affect and welfare require validation in subjects experiencing putatively different states. Pharmacological manipulations of affective state are advantageous because they can be administered in a standardised fashion, and the duration of their action can be established and tailored to suit the length of a particular test. To this end, the current study aimed to evaluate a pharmacological model of high and low anxiety in an important agricultural and laboratory species, the sheep. Thirty-five 8-month-old female sheep received either an intramuscular injection of the putatively anxiogenic drug 1-(m-chlorophenyl)piperazine (mCPP; 1 mg/kg; n = 12), an intravenous injection of the putatively anxiolytic drug diazepam (0.1 mg/kg; n = 12), or acted as a control (saline intramuscular injection n = 11). Thirty minutes after the treatments, sheep were individually exposed to a variety of tests assessing their general movement, performance in a 'runway task' (moving down a raceway for a food reward), response to startle, and behaviour in isolation. A test to assess feeding motivation was performed 2 days later following administration of the drugs to the same animals in the same manner. The mCPP sheep had poorer performance in the two runway tasks (6.8 and 7.7 × slower respectively than control group; p < 0.001), a greater startle response (1.4 vs. 0.6; p = 0.02), a higher level of movement during isolation (9.1 steps vs. 5.4; p < 0.001), and a lower feeding motivation (1.8 × slower; p < 0.001) than the control group, all of which act as indicators of anxiety. These results show that mCPP is an effective pharmacological model of high anxiety in sheep. Comparatively, the sheep treated with diazepam did not display any differences compared to the control sheep. Thus we suggest that mCPP is an effective treatment to validate future tests aimed at assessing anxiety in sheep, and that future studies should include other subtle indicators of positive affective

  18. Models and techniques for evaluating the effectiveness of aircraft computing systems

    NASA Technical Reports Server (NTRS)

    Meyer, J. F.

    1978-01-01

Progress in the development of system models and techniques for the formulation and evaluation of aircraft computer system effectiveness is reported. Topics covered include: analysis of functional dependence; a prototype software package, METAPHOR, developed to aid the evaluation of performability; and a comprehensive performability modeling and evaluation exercise involving the SIFT computer.

  19. THE DEVELOPMENT AND TESTING OF AN EVALUATION MODEL FOR VOCATIONAL PILOT PROGRAMS. FINAL REPORT.

    ERIC Educational Resources Information Center

    TUCKMAN, BRUCE W.

    THE OBJECTIVES OF THE PROJECT WERE (1) TO DEVELOP AN EVALUATION MODEL IN THE FORM OF A HOW-TO-DO-IT MANUAL WHICH OUTLINES PROCEDURES FOR OBTAINING IMMEDIATE INFORMATION REGARDING THE DEGREE TO WHICH A PILOT PROGRAM ACHIEVES ITS STATED FINAL OBJECTIVES, (2) TO EVALUATE THIS MODEL BY USING IT TO EVALUATE TWO ONGOING PILOT PROGRAMS, AND (3) TO…

  20. Models and Mechanisms for Evaluating Government-Funded Research: An International Comparison

    ERIC Educational Resources Information Center

    Coryn, Chris L. S.; Hattie, John A.; Scriven, Michael; Hartmann, David J.

    2007-01-01

    This research describes, classifies, and comparatively evaluates national models and mechanisms used to evaluate research and allocate research funding in 16 countries. Although these models and mechanisms vary widely in terms of how research is evaluated and financed, nearly all share the common characteristic of relating funding to some measure…

  1. A Multi-Component Model for Assessing Learning Objects: The Learning Object Evaluation Metric (LOEM)

    ERIC Educational Resources Information Center

    Kay, Robin H.; Knaack, Liesel

    2008-01-01

    While discussion of the criteria needed to assess learning objects has been extensive, a formal, systematic model for evaluation has yet to be thoroughly tested. The purpose of the following study was to develop and assess a multi-component model for evaluating learning objects. The Learning Object Evaluation Metric (LOEM) was developed from a…

  2. Evaluation of a gully headcut retreat model using multitemporal aerial photographs and digital elevation models

    NASA Astrophysics Data System (ADS)

    Campo-Bescós, M. A.; Flores-Cervantes, J. H.; Bras, R. L.; Casalí, J.; Giráldez, J. V.

    2013-12-01

A large fraction of soil erosion in temperate climate systems proceeds from gully headcut growth processes. Nevertheless, headcut retreat is not well understood. Few erosion models include gully headcut growth processes, and none of the existing headcut retreat models have been tested against long-term retreat rate estimates. In this work the headcut retreat resulting from plunge pool erosion in the Channel Hillslope Integrated Landscape Development (CHILD) model is calibrated and compared to long-term evolution measurements of six gullies at the Bardenas Reales, northeast Spain. The headcut retreat module of CHILD was calibrated by adjusting the shape factor parameter to fit the observed retreat and volumetric soil loss of one gully during a 36 year period, using reported and collected field data to parameterize the rest of the model. To test the calibrated model, estimates by CHILD were compared to observations of headcut retreat from five other neighboring gullies. The differences in volumetric soil loss rates between the simulations and observations were less than 0.05 m3 yr-1, on average, with standard deviations smaller than 0.35 m3 yr-1. These results are the first evaluation of the headcut retreat module implemented in CHILD with a field data set. They also show the usefulness of the model as a tool for simulating long-term volumetric gully evolution due to plunge pool erosion.
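The one-parameter calibration described above, tuning a shape factor until simulated volumetric soil loss matches a 36-year observation, can be sketched as a grid search. The retreat function and numbers here are a toy stand-in, not the CHILD plunge-pool model:

```python
import numpy as np

def simulated_soil_loss(shape_factor: float, years: int = 36) -> float:
    # Toy stand-in for the model: cumulative loss scales with the shape factor.
    return shape_factor * 0.9 * years  # m^3 over the period

observed_loss = 30.0  # hypothetical 36-year volumetric soil loss, m^3

# Grid search over candidate shape factors, keeping the best fit to observation.
candidates = np.linspace(0.1, 2.0, 191)
errors = [abs(simulated_soil_loss(s) - observed_loss) for s in candidates]
best = candidates[int(np.argmin(errors))]
print(best)
```

In the study the calibrated value was then held fixed and the model evaluated against the five remaining gullies, which is the standard split between calibration and validation data.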

  3. Evaluation of the meteorological forcing used for the Air Quality Model Evaluation International Initiative (AQMEII) air quality simulations

    NASA Astrophysics Data System (ADS)

    Vautard, Robert; Moran, Michael D.; Solazzo, Efisio; Gilliam, Robert C.; Matthias, Volker; Bianconi, Roberto; Chemel, Charles; Ferreira, Joana; Geyer, Beate; Hansen, Ayoe B.; Jericevic, Amela; Prank, Marje; Segers, Arjo; Silver, Jeremy D.; Werhahn, Johannes; Wolke, Ralf; Rao, S. T.; Galmarini, Stefano

    2012-06-01

    Accurate regional air pollution simulation relies strongly on the accuracy of the mesoscale meteorological simulation used to drive the air quality model. The framework of the Air Quality Model Evaluation International Initiative (AQMEII), which involved a large international community of modeling groups in Europe and North America, offered a unique opportunity to evaluate the skill of mesoscale meteorological models for two continents for the same period. More than 20 groups worldwide participated in AQMEII, using several meteorological and chemical transport models with different configurations. The evaluation has been performed over a full year (2006) for both continents. The focus for this particular evaluation was meteorological parameters relevant to air quality processes such as transport and mixing, chemistry, and surface fluxes. The unprecedented scale of the exercise (one year, two continents) allowed us to examine the general characteristics of meteorological models' skill and uncertainty. In particular, we found that there was a large variability between models or even model versions in predicting key parameters such as surface shortwave radiation. We also found several systematic model biases such as wind speed overestimations, particularly during stable conditions. We conclude that major challenges still remain in the simulation of meteorology, such as nighttime meteorology and cloud/radiation processes, for air quality simulation.

  4. Evaluating soil carbon in global climate models: benchmarking, future projections, and model drivers

    NASA Astrophysics Data System (ADS)

    Todd-Brown, K. E.; Randerson, J. T.; Post, W. M.; Allison, S. D.

    2012-12-01

    The carbon cycle plays a critical role in how the climate responds to anthropogenic carbon dioxide. To evaluate how well Earth system models (ESMs) from the Coupled Model Intercomparison Project (CMIP5) represent the carbon cycle, we examined predictions of current soil carbon stocks from the historical simulation. We compared the soil and litter carbon pools from 17 ESMs with data on soil carbon stocks from the Harmonized World Soil Database (HWSD). We also examined soil carbon predictions for 2100 from 16 ESMs from the rcp85 (highest radiative forcing) simulation to investigate the effects of climate change on soil carbon stocks. In both analyses, we used a reduced complexity model to separate the effects of variation in model drivers from the effects of model parameters on soil carbon predictions. Drivers included NPP, soil temperature, and soil moisture, and the reduced complexity model represented one pool of soil carbon as a function of these drivers. The ESMs predicted global soil carbon totals of 500 to 2980 Pg-C, compared to 1260 Pg-C in the HWSD. This 5-fold variation in predicted soil stocks was a consequence of a 3.4-fold variation in NPP inputs and 3.8-fold variability in mean global turnover times. None of the ESMs correlated well with the global distribution of soil carbon in the HWSD (Pearson's correlation <0.40, RMSE 9-22 kg m-2). On a biome level there was a broad range of agreement between the ESMs and the HWSD. Some models predicted HWSD biome totals well (R2=0.91) while others did not (R2=0.23). All of the ESM terrestrial decomposition models are structurally similar, with outputs that were well described by a reduced complexity model that included NPP and soil temperature (R2 of 0.73-0.93). However, MPI-ESM-LR outputs showed only a moderate fit to this model (R2=0.51), and CanESM2 outputs were better described by a reduced model that included soil moisture (R2=0.74). We also found a broad range in soil carbon responses to climate change
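    The one-pool reduced-complexity model described above can be sketched as a steady-state balance: soil carbon equals NPP input times a temperature-dependent turnover time, which is how the analysis factors ESM spread into NPP variation and turnover-time variation. The Q10 form and all parameter values below are illustrative assumptions, not quantities fitted to any ESM.

```python
# Sketch of a one-pool soil carbon model: C = NPP * tau(T).
# tau0, q10, and ref_temp are illustrative, not fitted values.

def turnover_time(temp_c, tau0=30.0, q10=2.0, ref_temp=15.0):
    """Turnover time (yr) shortens as temperature rises (Q10 formulation)."""
    return tau0 * q10 ** (-(temp_c - ref_temp) / 10.0)

def steady_state_soil_c(npp, temp_c):
    """Soil carbon stock (kg C m^-2) = input flux (kg C m^-2 yr^-1) * tau."""
    return npp * turnover_time(temp_c)

# A warmer grid cell holds less carbon for the same NPP input:
print(steady_state_soil_c(0.5, 5.0))   # cool cell -> 30.0
print(steady_state_soil_c(0.5, 25.0))  # warm cell -> 7.5
```

    Fitting such a two-driver model to each ESM's gridded output is what yields the R2 of 0.73-0.93 reported above and exposes outliers like CanESM2, whose behavior required a soil moisture term.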

  5. A dynamic model of metabolizable energy utilization in growing and mature cattle. III. Model evaluation.

    PubMed

    Williams, C B; Jenkins, T G

    2003-06-01

    Component models of heat production identified in a proposed system of partitioning ME intake and a dynamic systems model that predicts gain in empty BW in cattle resulting from a known intake of ME were evaluated. Evaluations were done in four main areas: 1) net efficiency of ME utilization for gain, 2) relationship between recovered energy and ME intake, 3) predicting gain in empty BW from recovered energy, and 4) predicting gain in empty BW from ME intake. An analysis of published data showed that the net partial efficiencies of ME utilization for protein and fat gain were approximately 0.2 and 0.75, respectively, and that the net efficiency of ME utilization for gain could be estimated using these net partial efficiencies and the fraction of recovered energy that is contained in protein. Analyses of published sheep and cattle experimental data showed a significant linear relationship between recovered energy and ME intake, with no evidence for a nonlinear relationship. Growth and body composition of Hereford x Angus steers simulated from weaning to slaughter showed that over the finishing period, 20.8% of ME intake was recovered in gain. These results were similar to observed data and comparable to feedlot data of 26.5% for a shorter finishing period with a higher-quality diet. The component model to predict gain in empty BW from recovered energy was evaluated with growth and body composition data of five steer genotypes on two levels of nutrition. Linear regression of observed on predicted values for empty BW resulted in an intercept and slope that were not different (P < 0.05) from 0 and 1, respectively. Evaluations of the dynamic systems model to predict gain in empty BW using ME intake as the input showed close agreement between predicted and observed final empty BW for steers that were finished on high-energy diets, and the model accurately predicted growth patterns for Angus, Charolais, and Simmental reproducing females from 10 mo to 7 yr of age. PMID
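    The efficiency result above follows from a standard energy-balance identity: the ME required per unit of recovered energy is the sum of the requirements for the protein and fat components, weighted by the protein fraction of the recovered energy. The function below sketches that identity using the partial efficiencies (~0.2 for protein, ~0.75 for fat) reported in the abstract; it is not code from the paper.

```python
# Overall net efficiency of ME use for gain from the partial efficiencies
# for protein and fat deposition, weighted by the fraction of recovered
# energy stored as protein. Partial efficiencies from the abstract.

def net_efficiency_for_gain(protein_fraction, k_protein=0.2, k_fat=0.75):
    """ME needed per unit recovered energy is additive over components,
    so the overall efficiency is the reciprocal of the weighted sum."""
    fat_fraction = 1.0 - protein_fraction
    me_per_unit_re = protein_fraction / k_protein + fat_fraction / k_fat
    return 1.0 / me_per_unit_re

# Lean gain (more recovered energy in protein) is less efficient than fat gain:
print(net_efficiency_for_gain(0.6))  # ~0.28
print(net_efficiency_for_gain(0.1))  # ~0.59
```

    This is why finishing cattle, which deposit mostly fat, recover a higher fraction of ME intake in gain than growing animals depositing lean tissue.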

  6. Model Evaluation and Ensemble Modelling of Surface-Level Ozone in Europe and North America in the Context of AQMEII

    EPA Science Inventory

    More than ten state-of-the-art regional air quality models have been applied as part of the Air Quality Model Evaluation International Initiative (AQMEII). These models were run by twenty independent groups in Europe and North America. Standardised modelling outputs over a full y...

  7. Validating the ACE Model for Evaluating Student Performance Using a Teaching-Learning Process Based on Computational Modeling Systems

    ERIC Educational Resources Information Center

    Louzada, Alexandre Neves; Elia, Marcos da Fonseca; Sampaio, Fábio Ferrentini; Vidal, Andre Luiz Pestana

    2014-01-01

    The aim of this work is to adapt and test, in a Brazilian public school, the ACE model proposed by Borkulo for evaluating student performance as a teaching-learning process based on computational modeling systems. The ACE model is based on different types of reasoning involving three dimensions. In addition to adapting the model and introducing…

  8. Isoprene emissions modelling for West Africa: MEGAN model evaluation and sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Ferreira, J.; Reeves, C. E.; Murphy, J. G.; Garcia-Carreras, L.; Parker, D. J.; Oram, D. E.

    2010-09-01

    Isoprene emissions are the largest source of reactive carbon to the atmosphere, with the tropics being a major source region. These natural emissions are expected to change with changing climate and human impact on land use. As part of the African Monsoon Multidisciplinary Analyses (AMMA) project the Model of Emissions of Gases and Aerosols from Nature (MEGAN) has been used to estimate the spatial and temporal distribution of isoprene emissions over the West African region. During the AMMA field campaign, carried out in July and August 2006, isoprene mixing ratios were measured on board the FAAM BAe-146 aircraft. These data have been used to make a qualitative evaluation of the model performance. MEGAN was first applied to a large area covering much of West Africa, from the Gulf of Guinea in the south to the desert in the north, and was able to capture the large-scale spatial distribution of isoprene emissions as inferred from the observed isoprene mixing ratios. In particular the model captures the transition from the forested area in the south to the bare soils in the north, but some discrepancies have been identified over the bare soil, mainly due to the emission factors used. Sensitivity analyses were performed to assess the model response to changes in driving parameters, namely Leaf Area Index (LAI), Emission Factors (EF), temperature and solar radiation. A high-resolution simulation was made of a limited area south of Niamey, Niger, where the highest concentrations of isoprene were observed. This simulation was used to evaluate the model's ability to reproduce smaller-scale spatial features and to examine the influence of the driving parameters on an hourly basis through a case study of a flight on 17 August 2006. This study highlights the complex interactions between land surface processes and the meteorological dynamics and chemical composition of the PBL. This has implications for quantifying the impact of biogenic emissions on the atmospheric composition over West

  9. Evaluating the use of different precipitation datasets in flood modelling

    NASA Astrophysics Data System (ADS)

    Akyurek, Zuhal; Soytekin, Arzu

    2016-04-01

    Satellite-based precipitation products, numerical weather prediction model precipitation forecasts and weather radar precipitation estimates can be a remedy for gauge-sparse regions, especially in flood forecasting studies. However, there is a strong need to evaluate the performance and limitations of these estimates in hydrology. This study compares the Hydro-Estimator precipitation product, Weather Research and Forecasting (WRF) model precipitation and weather radar values with gauge data in the Samsun-Terme region, located in the eastern Black Sea region of Turkey, which generally receives high rainfall on the north-facing slopes of its mountains. Using different statistical factors, the performance of the precipitation estimates is compared in a point-based and an areal-based manner. In the point-based comparisons, three matching methods are used for the flood event of 22.11.2014, which lasted 40 hours: the direct matching method (DM), the probability matching method (PMM) and the window correlation matching method (WCMM). Hourly rainfall data from 13 ground observation stations were used in the analyses. This flood event created a 541 m3/sec peak discharge at the 22-45 discharge observation station and flooding at the downstream end of the basin. The general trend of the rainfall is captured well by the radar rainfall estimates, but the radar underestimates the peaks. Moreover, the assessment factor (gauge rainfall / radar rainfall estimate) does not depend on the distance between the radar and the gauge station. In the WCMM calculation, enlarging the spatial window from 1x1 to 5x5 does not improve the results dramatically. In the areal-based comparisons, the distribution of the HE product in the time series does not show similarity with the other datasets.
Furthermore, the geometry of the subbasins, the size of the area in 2D and 3D, and the average elevation do not have an impact on the mean statistics, RMSE, r and bias calculation for both radar
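    The point-based direct matching comparison described above can be sketched as pairing each gauge with the radar estimate at the overlying pixel and computing the assessment factor (gauge / radar) plus simple error statistics. The rainfall values below are invented; the actual study used 13 stations over a 40-hour event.

```python
# Illustrative direct-matching comparison of gauge vs. radar rainfall.
# All values invented; assessment factor > 1 means radar underestimates.
import math

gauge = [4.0, 10.0, 2.5, 7.0]   # mm/h at gauge stations
radar = [3.2,  6.5, 2.4, 5.1]   # mm/h radar estimate at matching pixels

assessment = [g / r for g, r in zip(gauge, radar)]
bias = sum(r - g for g, r in zip(gauge, radar)) / len(gauge)
rmse = math.sqrt(sum((r - g) ** 2 for g, r in zip(gauge, radar)) / len(gauge))

print(assessment)  # all > 1 here: radar underestimates everywhere
print(bias)        # negative => systematic radar underestimation
print(rmse)
```

    Window-based variants such as WCMM replace the single overlying pixel with the best-correlated pixel in an NxN neighborhood around the gauge, which is the 1x1-versus-5x5 comparison mentioned above.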

  10. The Discrepancy Evaluation Model: A Strategy for Improving a Simulation and Determining Effectiveness.

    ERIC Educational Resources Information Center

    Morra, Linda G.

    This paper presents the Discrepancy Evaluation Model (DEM) as an overall strategy or framework for both the improvement and assessment of effectiveness of simulation/games. While application of the evaluation model to simulation/games rather than educational programs requires modification of the model, its critical features remain. These include:…

  11. A Model of Evaluation Planning, Implementation and Management toward a "Culture of Information" within Organizations.

    ERIC Educational Resources Information Center

    Bhola, H. S.

    The argument underlying the ongoing paradigm shift from logical-positivism to constructionism is briefly outlined. A model of evaluation planning, implementation, and management (the P-I-M model) is then presented, which assumes a complementarity between the two paradigms. The P-I-M Model includes three components of educational evaluation: a…

  12. Empirical Evaluation of a Forecasting Model for Successful Facilitation in Telematic Learning Programmes.

    ERIC Educational Resources Information Center

    Bisschoff, A.; Bisschoff, C. A.

    2002-01-01

    Potchefstroom University for Christian Higher Education evaluated the usefulness of a model created to predict the success of distance education course facilitators. The model identified eight key attributes based on performance measures from the 1999 Facilitator Customer Service Survey. The evaluation accredited the model while suggesting…

  13. [Evaluation of imaging biomarker by transgenic mouse models].

    PubMed

    Maeda, Jun; Higuchi, Makoto; Suhara, Tetsuya

    2009-04-01

    The invention of transgenic and gene-knockout mice has contributed to the understanding of various brain functions. With previous-generation positron emission tomography (PET) cameras it was impossible to visualize mouse brain functions, whereas the newly developed small-animal PET camera has sufficient resolution to do so. In the present study, we investigated the visualization of functional brain images for a few transgenic mouse models using small-animal PET. In neurodegenerative illnesses such as Alzheimer disease (AD), the relationship between etiopathology and main symptoms has been elucidated relatively well; therefore, several transgenic mice have already been developed. We succeeded in visualizing amyloid images in the brains of human mutant amyloid precursor protein (APP) transgenic mice. This result suggested that small-animal PET enables quantitative analysis of pathologies in the Tg mouse brain. Psychiatric disorders are presumed to have underlying multiple neural dysfunctions. Although some effective medicinal therapies have already been established, the etiopathology of mental illness and its biological markers have not been clarified. Thus, we investigated the type II Ca-calmodulin-dependent protein kinase alpha (CaMKII alpha) heterozygous knockout (hKO) mouse; CaMKII alpha is a major protein kinase in the brain. The CaMKII alpha hKO mice have several abnormal behavioral phenotypes, such as hyperaggression and a lack of anxiogenic responses; therefore, CaMKII alpha might be involved in the pathogenesis of mood disorders and affect personality traits. Furthermore, serotonin (5-HT) 1A receptor density in the CaMKII alpha hKO mouse brain differed across various brain regions compared to wild-type mice. These PET assays of Tg mice that we have established provide an efficient methodology for preclinical evaluation of emerging diagnostic and therapeutic agents for neurodegenerative and psychiatric illnesses

  14. ACTINIDE REMOVAL PROCESS SAMPLE ANALYSIS, CHEMICAL MODELING, AND FILTRATION EVALUATION

    SciTech Connect

    Martino, C.; Herman, D.; Pike, J.; Peters, T.

    2014-06-05

    Filtration within the Actinide Removal Process (ARP) currently limits the throughput in interim salt processing at the Savannah River Site. In this process, batches of salt solution with Monosodium Titanate (MST) sorbent are concentrated by crossflow filtration. The filtrate is subsequently processed to remove cesium in the Modular Caustic Side Solvent Extraction Unit (MCU) followed by disposal in saltstone grout. The concentrated MST slurry is washed and sent to the Defense Waste Processing Facility (DWPF) for vitrification. During recent ARP processing, there has been a degradation of filter performance manifested as the inability to maintain high filtrate flux throughout a multi-batch cycle. The objectives of this effort were to characterize the feed streams, to determine if solids (in addition to MST) are precipitating and causing the degraded performance of the filters, and to assess the particle size and rheological data to address potential filtration impacts. Equilibrium modelling with OLI Analyzer™ and OLI ESP™ was performed to determine chemical components at risk of precipitation and to simulate the ARP process. The performance of ARP filtration was evaluated to review potential causes of the observed filter behavior. Task activities for this study included extensive physical and chemical analysis of samples from the Late Wash Pump Tank (LWPT) and the Late Wash Hold Tank (LWHT) within ARP as well as samples of the tank farm feed from Tank 49H. The samples from the LWPT and LWHT were obtained from several stages of processing of Salt Batch 6D, Cycle 6, Batch 16.

  15. A Framework for Multifaceted Evaluation of Student Models

    ERIC Educational Resources Information Center

    Huang, Yun; González-Brenes, José P.; Kumar, Rohit; Brusilovsky, Peter

    2015-01-01

    Latent variable models, such as the popular Knowledge Tracing method, are often used to enable adaptive tutoring systems to personalize education. However, finding optimal model parameters is usually a difficult non-convex optimization problem when considering latent variable models. Prior work has reported that latent variable models obtained…

  16. Evaluating Latent Growth Curve Models Using Individual Fit Statistics

    ERIC Educational Resources Information Center

    Coffman, Donna L.; Millsap, Roger E.

    2006-01-01

    The usefulness of assessing individual fit in latent growth curve models was examined. The study used simulated data based on an unconditional and a conditional latent growth curve model with a linear component and a small quadratic component and a linear model was fit to the data. Then the overall fit of linear and quadratic models to these data…

  17. Relative and Absolute Fit Evaluation in Cognitive Diagnosis Modeling

    ERIC Educational Resources Information Center

    Chen, Jinsong; de la Torre, Jimmy; Zhang, Zao

    2013-01-01

    As with any psychometric models, the validity of inferences from cognitive diagnosis models (CDMs) determines the extent to which these models can be useful. For inferences from CDMs to be valid, it is crucial that the fit of the model to the data is ascertained. Based on a simulation study, this study investigated the sensitivity of various fit…

  18. COMBINED SEWER OVERFLOW SEDIMENT TRANSPORT MODEL: DOCUMENTATION AND EVALUATION

    EPA Science Inventory

    A modeling package for studying the movement and fate of combined sewer overflow (CSO) sediment in receiving waters is described. The package contains a linear, implicit, finite-difference flow model and an explicit, finite-difference sediment transport model. The sediment model ...

  19. Evaluating mallard adaptive management models with time series

    USGS Publications Warehouse

    Conn, P.B.; Kendall, W.L.

    2004-01-01

    Wildlife practitioners concerned with midcontinent mallard (Anas platyrhynchos) management in the United States have instituted a system of adaptive harvest management (AHM) as an objective format for setting harvest regulations. Under the AHM paradigm, predictions from a set of models that reflect key uncertainties about processes underlying population dynamics are used in coordination with optimization software to determine an optimal set of harvest decisions. Managers use comparisons of the predictive abilities of these models to gauge the relative truth of different hypotheses about density-dependent recruitment and survival, with better-predicting models given more weight in the determination of harvest regulations. We tested the effectiveness of this strategy by examining convergence rates of 'predictor' models when the true model for population dynamics was known a priori. We generated time series for cases when the a priori model was 1 of the predictor models as well as for several cases when the a priori model was not in the model set. We further examined the addition of different levels of uncertainty into the variance structure of predictor models, reflecting different levels of confidence about estimated parameters. We showed that in certain situations, the model-selection process favors a predictor model that incorporates the hypotheses of additive harvest mortality and weakly density-dependent recruitment, even when the model is not used to generate data. Higher levels of predictor model variance led to decreased rates of convergence to the model that generated the data, but model weight trajectories were in general more stable. We suggest that predictive models should incorporate all sources of uncertainty about estimated parameters, that the variance structure should be similar for all predictor models, and that models with different functional forms for population dynamics should be considered for inclusion in predictor model sets.
All of these
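    The model-weight updating at the heart of AHM, whose convergence behavior the study above examines, can be sketched as a Bayesian update: each predictor model's weight is multiplied by its likelihood of the newly observed population size, then the weights are renormalized. The normal likelihood and all numbers below are illustrative assumptions, not the actual AHM protocol.

```python
# Bayesian model-weight update: new weight proportional to
# old weight * likelihood of the observation under that model.
# Normal likelihoods and all values are illustrative.
import math

def normal_pdf(x, mean, sd):
    """Density of a normal distribution at x."""
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def update_weights(weights, predictions, sds, observed):
    """One step of Bayes' rule over the predictor model set."""
    likes = [w * normal_pdf(observed, p, s)
             for w, p, s in zip(weights, predictions, sds)]
    total = sum(likes)
    return [l / total for l in likes]

# Two models start at equal weight; the one predicting closer to the
# observed population size gains weight.
w = update_weights([0.5, 0.5], predictions=[7.0, 9.0], sds=[1.0, 1.0],
                   observed=7.4)
print(w)  # first model's weight exceeds the second's
```

    Inflating the sds (the predictor model variances) flattens the likelihoods and slows this weight transfer, which is the stability-versus-convergence trade-off the abstract reports.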

  20. A diagnostic evaluation model for complex research partnerships with community engagement: the partnership for Native American Cancer Prevention (NACP) model.

    PubMed

    Trotter, Robert T; Laurila, Kelly; Alberts, David; Huenneke, Laura F

    2015-02-01

    Complex community-oriented health care prevention and intervention partnerships fail or only partially succeed at alarming rates. In light of the current rapid expansion of critically needed programs targeted at health disparities in minority populations, we have designed and are testing a "logic model plus" evaluation model that combines classic logic model and query-based evaluation designs (CDC, NIH, Kellogg Foundation) with advances in community-engaged designs derived from industry-university partnership models. These approaches support the application of a "near real time" feedback system (diagnosis and intervention) based on organizational theory, social network theory, and logic model metrics directed at partnership dynamics. PMID:25265164