Science.gov

Sample records for evaluating value-at-risk models

  1. Multifractal Value at Risk model

    NASA Astrophysics Data System (ADS)

    Lee, Hojin; Song, Jae Wook; Chang, Woojin

    2016-06-01

    In this paper, a new Value at Risk (VaR) model is proposed and investigated. We consider the multifractal property of financial time series and develop a multifractal Value at Risk (MFVaR) model. The MFVaR introduced in this paper is analytically tractable and not based on simulation. An empirical study shows that MFVaR provides more stable and accurate forecasting performance in volatile financial markets where large losses can be incurred. This implies that our multifractal VaR works well for measuring the risk of extreme credit events.
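
The abstracts in this collection all build on the same baseline definition of VaR: the loss threshold exceeded with a given small probability. As a hedged point of reference (this is a generic historical-simulation baseline, not the MFVaR model itself, whose analytic form the abstract does not give), a minimal sketch:

```python
import math
import random

def historical_var(returns, alpha=0.99):
    """Historical-simulation VaR: the smallest loss L such that the
    empirical probability of a loss exceeding L is at most 1 - alpha."""
    losses = sorted(-r for r in returns)      # losses in ascending order
    k = math.ceil(alpha * len(losses)) - 1    # index of the alpha-quantile
    return losses[k]

# toy data: 1000 synthetic daily returns (illustrative only)
random.seed(0)
rets = [random.gauss(0.0, 0.01) for _ in range(1000)]
var99 = historical_var(rets, alpha=0.99)      # 99% one-day VaR of the sample
```

The quantile convention (smallest observation whose empirical CDF reaches alpha) is one of several in common use; parametric models such as MFVaR replace the sorted sample with an analytic distribution.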

  2. Estimation of value at risk and conditional value at risk using normal mixture distributions model

    NASA Astrophysics Data System (ADS)

    Kamaruzzaman, Zetty Ain; Isa, Zaidi

    2013-04-01

    The normal mixture distributions model has been successfully applied in financial time series analysis. In this paper, we estimate the return distribution, value at risk (VaR) and conditional value at risk (CVaR) for monthly and weekly rates of return of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI) from July 1990 until July 2010 using a two-component univariate normal mixture distributions model. First, we present the application of the normal mixture distributions model in empirical finance, fitting it to our real data. Second, we apply the model in risk analysis, using it to evaluate VaR and CVaR with model validation for both risk measures. The empirical results provide evidence that the two-component normal mixture distributions model fits the data well and performs better in estimating VaR and CVaR, as it captures the stylized facts of non-normality and leptokurtosis in the returns distribution.
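
The two-component mixture VaR evaluated here has no closed form, but the defining equation (find the return quantile where the mixture CDF equals 1 - alpha) can be sketched directly. The parameter values below are illustrative, not the paper's fitted estimates:

```python
import math

def mixture_cdf(x, p, mu1, s1, mu2, s2):
    """CDF of a two-component normal mixture with weight p on component 1."""
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return p * phi((x - mu1) / s1) + (1 - p) * phi((x - mu2) / s2)

def mixture_var(alpha, p, mu1, s1, mu2, s2):
    """VaR at confidence alpha: the loss L with P(return <= -L) = 1 - alpha,
    found by bisection on the mixture CDF (no closed form exists)."""
    lo, hi = -10.0, 10.0                      # generous bracket for returns
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mixture_cdf(mid, p, mu1, s1, mu2, s2) < 1 - alpha:
            lo = mid
        else:
            hi = mid
    return -0.5 * (lo + hi)

# a calm component plus a rarer, wider "crash" component (illustrative)
var95 = mixture_var(0.95, 0.9, 0.0, 0.01, -0.01, 0.03)
```

With p = 1 the function collapses to the single-normal quantile, which is a convenient sanity check on the bisection.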

  3. Estimation of Value-at-Risk for Energy Commodities via CAViaR Model

    NASA Astrophysics Data System (ADS)

    Xiliang, Zhao; Xi, Zhu

    This paper uses the Conditional Autoregressive Value at Risk (CAViaR) model proposed by Engle and Manganelli (2004) to evaluate the value-at-risk of daily spot prices of Brent crude oil and West Texas Intermediate (WTI) crude oil covering the period May 21, 1987 to November 18, 2008. The accuracy of the estimates from the CAViaR, Normal-GARCH, and GED-GARCH models is then compared. The results show that all the methods do a good job at the low confidence level (95%): GED-GARCH is best for the spot WTI price, while Normal-GARCH and Adaptive-CAViaR are best for the spot Brent price. However, at the high confidence level (99%), Normal-GARCH does a good job for spot WTI, and GED-GARCH and all four CAViaR specifications do well for the spot Brent price, whereas Normal-GARCH does badly for spot Brent. These results suggest that CAViaR performs about as well as GED-GARCH, since CAViaR directly models the quantile autoregression; it does not outperform GED-GARCH, although it does outperform Normal-GARCH.
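
Among the four CAViaR specifications compared, the Symmetric Absolute Value form has the simplest recursion, and it shows concretely what "directly modelling the quantile autoregression" means. The coefficients below are placeholders; Engle and Manganelli estimate them by minimising a regression-quantile objective:

```python
def caviar_sav(returns, beta0, beta1, beta2, var0):
    """Symmetric Absolute Value CAViaR recursion:
    VaR_t = beta0 + beta1 * VaR_{t-1} + beta2 * |r_{t-1}|.
    The betas here are illustrative inputs, not fitted values."""
    var = [var0]                          # initial VaR level
    for r in returns[:-1]:                # each return drives next-day VaR
        var.append(beta0 + beta1 * var[-1] + beta2 * abs(r))
    return var
```

The autoregressive term beta1 makes the VaR path smooth, while the |r| term lets yesterday's shock widen today's quantile regardless of sign.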

  4. Application of the Beck model to stock markets: Value-at-Risk and portfolio risk assessment

    NASA Astrophysics Data System (ADS)

    Kozaki, M.; Sato, A.-H.

    2008-02-01

    We apply the Beck model, developed for turbulent systems that exhibit scaling properties, to stock markets. Our study reveals that the Beck model elucidates the properties of stock market returns and is applicable to practical uses such as Value-at-Risk estimation and portfolio analysis. We perform empirical analysis with daily/intraday data of the S&P500 index return and find that the volatility fluctuation of real markets is consistent with the assumptions of the Beck model: the volatility fluctuates on a much larger time scale than the return itself, and the inverse of the variance, or “inverse temperature”, β obeys a Γ-distribution. As predicted by the Beck model, the distribution of returns is well fitted by the q-Gaussian distribution of Tsallis statistics. The evaluation of Value-at-Risk (VaR), one of the most significant indicators in risk management, is studied for the q-Gaussian distribution. Our proposed method enables VaR evaluation that takes account of tail risk, which is underestimated by the variance-covariance method. A framework for portfolio risk assessment in the presence of tail risk is then considered. We propose a multi-asset model with a single volatility fluctuation shared by all assets, named the single-β model, and empirically examine the agreement between the model and an imaginary portfolio of Dow Jones indices. It turns out that the single-β model gives a good approximation for portfolios composed of assets with non-Gaussian and correlated returns.
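
The Beck model's central assumption (inverse variance β drawn from a Γ-distribution, returns conditionally Gaussian given β) lends itself to a small Monte Carlo sketch of tail-aware VaR; the marginal return distribution is then heavy-tailed (q-Gaussian, equivalently Student-t). The Gamma parameters below are illustrative, not fitted S&P500 values:

```python
import math
import random

def beck_var(alpha, shape, scale, n=100_000, seed=42):
    """Monte Carlo VaR under a superstatistics sketch: beta ~ Gamma(shape,
    scale); conditional on beta, returns are N(0, 1/beta).  With
    shape*scale = 1 the marginal is Student-t with 2*shape d.o.f."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n):
        beta = rng.gammavariate(shape, scale)        # fluctuating inverse variance
        draws.append(rng.gauss(0.0, 1.0 / math.sqrt(beta)))
    draws.sort()
    k = int((1 - alpha) * n)                          # lower-tail index
    return -draws[k]                                  # loss quantile at level alpha

v99 = beck_var(0.99, 5.0, 0.2)   # heavier tail than the Gaussian 2.33
```

The resulting 99% VaR exceeds the Gaussian value 2.33 for unit variance, which is exactly the tail risk the abstract says the variance-covariance method underestimates.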

  5. Value-at-Risk forecasts by a spatiotemporal model in Chinese stock market

    NASA Astrophysics Data System (ADS)

    Gong, Pu; Weng, Yingliang

    2016-01-01

    This paper generalizes a recently proposed spatial autoregressive model and introduces a spatiotemporal model for forecasting stock returns. We support the view that stock returns are affected not only by the absolute values of factors such as firm size, book-to-market ratio and momentum, but also by the relative values of factors such as trading-volume ranking and market-capitalization ranking in each period. This article introduces a new method, the quartile method, for constructing stocks' reference groups. Applying the method empirically to the Shanghai Stock Exchange 50 Index, we compare the daily volatility forecasting performance and the out-of-sample Value-at-Risk (VaR) forecasting performance of different models. The empirical results show that the spatiotemporal model performs surprisingly well in capturing spatial dependences among individual stocks, and it produces more accurate VaR forecasts than the three other models from the previous literature. Moreover, the findings indicate that both allowing for serial correlation in the disturbances and using time-varying spatial weight matrices can greatly improve the predictive accuracy of a spatial autoregressive model.

  6. Evaluating the RiskMetrics methodology in measuring volatility and Value-at-Risk in financial markets

    NASA Astrophysics Data System (ADS)

    Pafka, Szilárd; Kondor, Imre

    2001-10-01

    We analyze the performance of RiskMetrics, a widely used methodology for measuring market risk. Based on the assumption of normally distributed returns, the RiskMetrics model completely ignores the presence of fat tails in the distribution function, an important feature of financial data. Nevertheless, RiskMetrics was commonly found to perform satisfactorily, and the technique has become widely used in the financial industry. We find, however, that the success of RiskMetrics is an artifact of the choice of risk measure. First, the outstanding performance of the volatility estimates is basically due to the choice of a very short (one-period-ahead) forecasting horizon. Second, the satisfactory performance in obtaining Value-at-Risk by simply multiplying volatility by a constant factor is mainly due to the choice of the particular significance level.
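
The "volatility times a constant factor" recipe the abstract dissects is short enough to state in full. A minimal sketch (λ = 0.94 is the classic RiskMetrics daily decay; the flat initialisation from the first return is a simplification):

```python
import math

def riskmetrics_var(returns, alpha=0.95, lam=0.94):
    """RiskMetrics sketch: EWMA variance
    sigma2_t = lam * sigma2_{t-1} + (1 - lam) * r_{t-1}**2,
    then one-step VaR = z_alpha * sigma_t under the normality
    assumption this paper criticises."""
    z = {0.95: 1.6449, 0.99: 2.3263}[alpha]   # standard-normal quantiles
    sigma2 = returns[0] ** 2                   # simple initialisation
    for r in returns[1:]:
        sigma2 = lam * sigma2 + (1 - lam) * r * r
    return z * math.sqrt(sigma2)
```

The paper's point is visible in the last line: whatever the true tail shape, the tail correction is a fixed normal quantile, so the method's apparent success depends on the significance level chosen.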

  7. Modelling climate change impacts on and adaptation strategies for agriculture in Sardinia and Tunisia using AquaCrop and value-at-risk.

    PubMed

    Bird, David Neil; Benabdallah, Sihem; Gouda, Nadine; Hummel, Franz; Koeberl, Judith; La Jeunesse, Isabelle; Meyer, Swen; Prettenthaler, Franz; Soddu, Antonino; Woess-Gallasch, Susanne

    2016-02-01

    In Europe, there is concern that climate change will cause significant impacts around the Mediterranean. The goals of this study are to quantify the economic risk to crop production, to demonstrate the variability of yield by soil texture and climate model, and to investigate possible adaptation strategies. In the Rio Mannu di San Sperate watershed, located in Sardinia (Italy), we investigate production of wheat, a rainfed crop. In the Chiba watershed, located in Cap Bon (Tunisia), we analyze irrigated tomato production. We find, using the FAO model AquaCrop, that crop production will decrease significantly in a future climate (2040-2070) compared to the present without adaptation measures. Using "value-at-risk", we show that production should be viewed in a statistical manner. Wheat yields in Sardinia are modelled to decrease by 64% on clay loams, and to increase by 8% and 26% on sandy loams and sandy clay loams, respectively. Assuming constant irrigation, tomatoes sown in August in Cap Bon are modelled to have a 45% chance of crop failure on loamy sands, a 39% decrease in yields on sandy clay loams, and a 12% increase in yields on sandy loams. For tomatoes sown in March, sandy clay loams will fail 81% of the time; on loamy sands the crop yields will be 63% less, while on sandy loams the yield will increase by 12%. However, if one assumes 10% less water available for irrigation, then tomatoes sown in March are not viable. Some adaptation strategies will be able to counteract the modelled crop losses. Increasing the amount of irrigation is one strategy; however, it may not be sustainable. Changes in agricultural management, such as changing the planting date of wheat to coincide with changing rainfall patterns in Sardinia or mulching of tomatoes in Tunisia, can be effective at reducing crop losses. PMID:26187862

  8. How to estimate the Value at Risk under incomplete information

    NASA Astrophysics Data System (ADS)

    de Schepper, Ann; Heijnen, Bart

    2010-03-01

    A key problem in financial and actuarial research, and particularly in the field of risk management, is the choice of models so as to avoid systematic biases in the measurement of risk. An alternative to committing to a single model consists of relaxing the assumption that the probability distribution is completely known, leading to interval estimates instead of point estimates. In the present contribution, we show how this is possible for the Value at Risk, by fixing only a small number of parameters of the underlying probability distribution. We start by deriving bounds on tail probabilities, and we show how a conversion leads to bounds for the Value at Risk. It turns out that with a maximum of three given parameters, the best estimates are always realized in the case of a unimodal random variable for which two moments and the mode are given. It is also shown that a lognormal model results in estimates for the Value at Risk that are much closer to the upper bound than to the lower bound.
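
The paper's moment-based bounds can be illustrated with the simplest member of the family: when only the mean and variance are fixed, Cantelli's one-sided inequality already yields a distribution-free upper bound on VaR. (The paper's three-parameter bounds, which also use the mode, are tighter; this two-moment sketch just shows the mechanism of converting a tail-probability bound into a VaR bound.)

```python
import math

def cantelli_var_bound(mu, sigma, eps):
    """Distribution-free upper bound on the loss quantile at tail
    probability eps, given only mean mu and standard deviation sigma:
    Cantelli gives P(X - mu <= -t) <= sigma**2 / (sigma**2 + t**2),
    and setting the right side equal to eps solves for t."""
    t = sigma * math.sqrt((1 - eps) / eps)
    return t - mu            # worst-case loss over all such distributions

# at eps = 1% the bound is sigma * sqrt(99), far above the normal 2.33 * sigma
bound = cantelli_var_bound(0.0, 0.01, 0.01)
```

The wide gap between this bound and the normal quantile is exactly why adding a third parameter (the mode, with unimodality) matters in the paper.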

  9. Multifractality and value-at-risk forecasting of exchange rates

    NASA Astrophysics Data System (ADS)

    Batten, Jonathan A.; Kinateder, Harald; Wagner, Niklas

    2014-05-01

    This paper addresses market risk prediction for high frequency foreign exchange rates under nonlinear risk scaling behaviour. We use a modified version of the multifractal model of asset returns (MMAR) where trading time is represented by the series of volume ticks. Our dataset consists of 138,418 5-min round-the-clock observations of EUR/USD spot quotes and trading ticks during the period January 5, 2006 to December 31, 2007. Considering fat-tails, long-range dependence as well as scale inconsistency with the MMAR, we derive out-of-sample value-at-risk (VaR) forecasts and compare our approach to historical simulation as well as a benchmark GARCH(1,1) location-scale VaR model. Our findings underline that the multifractal properties in EUR/USD returns in fact have notable risk management implications. The MMAR approach is a parsimonious model which produces admissible VaR forecasts at the 12-h forecast horizon. For the daily horizon, the MMAR outperforms both alternatives based on conditional as well as unconditional coverage statistics.
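
The "unconditional coverage" statistic used to rank the MMAR against its benchmarks is Kupiec's proportion-of-failures test; a compact sketch (the chi-square(1) critical value 3.84 corresponds to a 5% test size):

```python
import math

def kupiec_lr(n, x, p):
    """Kupiec unconditional-coverage likelihood ratio for VaR backtesting:
    n observations, x violations, nominal violation rate p.  Under the
    null of correct coverage the statistic is approximately chi2(1)."""
    if x == 0:
        return -2.0 * n * math.log(1 - p)
    phat = x / n
    ll0 = (n - x) * math.log(1 - p) + x * math.log(p)        # null likelihood
    ll1 = (n - x) * math.log(1 - phat) + x * math.log(phat)  # observed rate
    return -2.0 * (ll0 - ll1)

# 250 days at 5% VaR: 12.5 violations expected, so 13 observed is unremarkable
lr = kupiec_lr(250, 13, 0.05)
```

A model is rejected when the statistic exceeds the critical value, i.e. when the observed violation count is implausible under the nominal rate.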

  10. Heavy-tailed value-at-risk analysis for Malaysian stock exchange

    NASA Astrophysics Data System (ADS)

    Chin, Wen Cheong

    2008-07-01

    This article compares power-law value-at-risk (VaR) evaluation with quantile and non-linear time-varying volatility approaches. A simple Pareto distribution is proposed to account for the heavy-tailed property of the empirical distribution of returns. An alternative VaR measurement, the non-parametric quantile estimate, is implemented using an interpolation method. In addition, we also use the well-known two-component ARCH modelling technique under the assumptions of normal and heavy-tailed (Student's t) distributions for the innovations. Our results show that the VaR predicted under the Pareto distribution is similar to that of the symmetric heavy-tailed long-memory ARCH model. However, we find that only the Pareto distribution provides a convenient framework for asymmetric properties in both the lower and upper tails.
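
Part of the Pareto distribution's appeal here is that its VaR is available in closed form. A sketch under the standard tail parameterisation P(Loss > x) = (x_m / x)**a for x >= x_m (the parameter values in the example are illustrative, not the paper's estimates):

```python
def pareto_var(xm, a, eps):
    """Closed-form loss quantile for a Pareto tail: solving
    (xm / x)**a = eps for x gives the loss exceeded with
    probability eps.  Smaller tail index a means a heavier tail."""
    return xm * eps ** (-1.0 / a)

# 1% tail with scale 0.01: tail index 2 gives a 10% loss quantile
q = pareto_var(0.01, 2.0, 0.01)
```

The formula makes the role of the tail index explicit: halving a multiplies the log-quantile distance from the scale x_m by two.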

  11. The social values at risk from sea-level rise

    SciTech Connect

    Graham, Sonia; Barnett, Jon; Fincher, Ruth; Hurlimann, Anna; Mortreux, Colette; Waters, Elissa

    2013-07-15

    Analysis of the risks of sea-level rise favours conventionally measured metrics such as the area of land that may be subsumed, the numbers of properties at risk, and the capital values of assets at risk. Despite this, it is clear that there exist many less material but no less important values at risk from sea-level rise. This paper re-theorises these multifarious social values at risk from sea-level rise, by explaining their diverse nature, and grounding them in the everyday practices of people living in coastal places. It is informed by a review and analysis of research on social values from within the fields of social impact assessment, human geography, psychology, decision analysis, and climate change adaptation. From this we propose that it is the ‘lived values’ of coastal places that are most at risk from sea-level rise. We then offer a framework that groups these lived values into five types: those that are physiological in nature, and those that relate to issues of security, belonging, esteem, and self-actualisation. This framework of lived values at risk from sea-level rise can guide empirical research investigating the social impacts of sea-level rise, as well as the impacts of actions to adapt to sea-level rise. It also offers a basis for identifying the distribution of related social outcomes across populations exposed to sea-level rise or sea-level rise policies.

  12. Portfolio Value-at-Risk with Time-Varying Copula: Evidence from Latin America

    NASA Astrophysics Data System (ADS)

    Ozun, Alper; Cifter, Atilla

    Model risk in the estimation of value-at-risk is a challenging threat to the success of any financial investment. The degree of model risk increases when the estimation process is constructed with a portfolio in emerging markets. A proper model should both provide flexible joint distributions, by separating the marginals from the dependencies among the financial assets within the portfolio, and capture the non-linear behaviours and extremes in the returns arising from the special features of emerging markets. In this study, we use a time-varying copula to estimate the value-at-risk of a portfolio comprised of the Bovespa and the IPC Mexico in equal and constant weights. A performance comparison of the copula model with the EWMA portfolio model based on the Christoffersen back-test shows that the copula model captures the extremes most successfully. The copula model, by estimating the portfolio value-at-risk with the fewest violations in the back-tests, enables investors to allocate the minimum regulatory capital requirement in accordance with the Basel II Accord.

  13. Empirical application of normal mixture GARCH and value-at-risk estimation

    NASA Astrophysics Data System (ADS)

    Kamaruzzaman, Zetty Ain; Isa, Zaidi

    2014-06-01

    The normal mixture (NM) GARCH model can capture time variation in both conditional skewness and kurtosis. In this paper, we present the general framework of normal mixture GARCH(1,1). An empirical application is presented using Malaysian weekly stock market returns. This paper provides evidence that, for modeling stock market returns, the two-component normal mixture GARCH(1,1) model performs better than the Normal, symmetric and skewed Student's t-GARCH models. This model can quantify the volatility corresponding to stable and crash market circumstances. We also consider Value-at-Risk (VaR) estimation for the normal mixture GARCH model.

  14. Making the business case for process safety using value-at-risk concepts.

    PubMed

    Fang, Jayming S; Ford, David M; Mannan, M Sam

    2004-11-11

    An increasing emphasis on chemical process safety over the last two decades has led to the development and application of powerful risk assessment tools. Hazard analysis and risk evaluation techniques have developed to the point where quantitatively meaningful risks can be calculated for processes and plants. However, the results are typically presented in semi-quantitative "ranked list" or "categorical matrix" formats, which are certainly useful but not optimal for making business decisions. A relatively new technique for performing valuation under uncertainty, value at risk (VaR), has been developed in the financial world. VaR is a method of evaluating the probability of a gain or loss by a complex venture, by examining the stochastic behavior of its components. We believe that combining quantitative risk assessment techniques with VaR concepts will bridge the gap between engineers and scientists who determine process risk and business leaders and policy makers who evaluate, manage, or regulate risk. We present a few basic examples of the application of VaR to hazard analysis in the chemical process industry. PMID:15518960

  15. Value-at-risk estimation with wavelet-based extreme value theory: Evidence from emerging markets

    NASA Astrophysics Data System (ADS)

    Cifter, Atilla

    2011-06-01

    This paper introduces wavelet-based extreme value theory (EVT) for univariate value-at-risk estimation. Wavelets and EVT are combined for volatility forecasting to estimate a hybrid model. In the first stage, wavelets are used as a threshold in the generalized Pareto distribution, and in the second stage, EVT is applied with the wavelet-based threshold. This new model is applied to two major emerging stock markets: the Istanbul Stock Exchange (ISE) and the Budapest Stock Exchange (BUX). The relative performance of wavelet-based EVT is benchmarked against the RiskMetrics-EWMA, ARMA-GARCH, generalized Pareto distribution, and conditional generalized Pareto distribution models. The empirical results show that wavelet-based extreme value theory increases the predictive performance of financial forecasting according to the number of violations and tail-loss tests. The superior forecasting performance of the wavelet-based EVT model is also consistent with Basel II requirements, and this new model can be used by financial institutions as well.
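
Once a threshold u is chosen (the step this paper delegates to wavelets) and a generalized Pareto distribution is fitted to the exceedances, the VaR quantile follows from the standard peaks-over-threshold formula. A sketch with illustrative inputs (not the ISE/BUX estimates):

```python
def gpd_var(u, sigma, xi, n, nu, alpha):
    """Peaks-over-threshold VaR from a generalized Pareto fit to the
    nu exceedances (out of n observations) above threshold u:
    VaR_alpha = u + (sigma / xi) * (((n / nu) * (1 - alpha))**(-xi) - 1),
    valid for shape xi != 0 and alpha large enough that VaR > u."""
    return u + (sigma / xi) * (((n / nu) * (1 - alpha)) ** (-xi) - 1.0)

# 50 exceedances above u = 0.02 in 1000 observations, mildly heavy tail
v99 = gpd_var(0.02, 0.01, 0.1, 1000, 50, 0.99)
```

Everything except the threshold choice is routine maximum-likelihood fitting, which is why threshold selection (here, wavelet-based) dominates the quality of the final VaR.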

  16. Value at risk estimation with entropy-based wavelet analysis in exchange markets

    NASA Astrophysics Data System (ADS)

    He, Kaijian; Wang, Lijun; Zou, Yingchao; Lai, Kin Keung

    2014-08-01

    In recent years, exchange markets have become increasingly integrated. Fluctuations and risks across different exchange markets exhibit co-moving and complex dynamics. In this paper we propose entropy-based multivariate wavelet approaches to analyze the multiscale characteristics of the multidimensional domain and further improve the reliability of Value at Risk estimation. Wavelet analysis is introduced to construct an entropy-based multiscale portfolio Value at Risk estimation algorithm that accounts for multiscale dynamic correlation. The entropy measure, following the error-minimization principle, is proposed as the more effective criterion for selecting the best basis when determining the wavelet family and decomposition level to use. Empirical studies of the closely related Chinese renminbi and European euro exchange markets provide positive evidence of the superior performance of the proposed approach.

  17. Measuring daily Value-at-Risk of SSEC index: A new approach based on multifractal analysis and extreme value theory

    NASA Astrophysics Data System (ADS)

    Wei, Yu; Chen, Wang; Lin, Yu

    2013-05-01

    Recent studies in the econophysics literature reveal that price variability has fractal and multifractal characteristics not only in developed financial markets, but also in emerging markets. Taking high-frequency intraday quotes of the Shanghai Stock Exchange Component (SSEC) Index as an example, this paper proposes a new method to measure daily Value-at-Risk (VaR) by combining the newly introduced multifractal volatility (MFV) model and the extreme value theory (EVT) method. Two VaR backtesting techniques are then employed to compare the performance of the model with that of a group of linear and nonlinear generalized autoregressive conditional heteroskedasticity (GARCH) models. The empirical results show the multifractal nature of price volatility in the Chinese stock market. VaR measures based on the multifractal volatility model and the EVT method outperform many GARCH-type models at high risk levels.

  18. On Value at Risk for Foreign Exchange Rates --- the Copula Approach

    NASA Astrophysics Data System (ADS)

    Jaworski, P.

    2006-11-01

    The aim of this paper is to determine the Value at Risk (VaR) of a portfolio consisting of long positions in foreign currencies on an emerging market. Based on empirical data, we restrict ourselves to the case when the tail parts of the distributions of logarithmic returns of these assets follow power laws and the lower tail of the associated copula C follows a power law of degree 1. We illustrate the practical usefulness of this approach by an analysis of the exchange rates of EUR and CHF on the Polish forex market.

  19. 'Weather Value at Risk': A uniform approach to describe and compare sectoral income risks from climate change.

    PubMed

    Prettenthaler, Franz; Köberl, Judith; Bird, David Neil

    2016-02-01

    We extend the concept of 'Weather Value at Risk' - initially introduced to measure the economic risks resulting from current weather fluctuations - to describe and compare sectoral income risks from climate change. This is illustrated using the examples of wheat cultivation and summer tourism in (parts of) Sardinia. Based on climate scenario data from four different regional climate models, we study the change in the risk of weather-related income losses between a reference (1971-2000) and a future (2041-2070) period. Results from both examples suggest an increase in weather-related risks of income losses due to climate change, which is somewhat more pronounced for summer tourism. Nevertheless, income from wheat cultivation is at much higher risk of weather-related losses than income from summer tourism, both under reference and future climatic conditions. A weather-induced loss of at least 5% - compared to the income associated with average reference weather conditions - has a 40% (80%) probability of occurrence in the case of wheat cultivation, but only a 0.4% (16%) probability in the case of summer tourism, given reference (future) climatic conditions. Whereas in the agricultural example the increase in weather-related income risk mainly results from an overall decrease in average wheat yields, the heightened risk in the tourism example stems mostly from a change in the weather-induced variability of tourism incomes. With the extended 'Weather Value at Risk' concept able to capture both impacts from changes in the mean and impacts from changes in the variability of the climate, it is a powerful tool for presenting and disseminating the results of climate change impact assessments. Due to its flexibility, the concept can be applied to any economic sector and therefore provides a valuable tool for cross-sectoral comparisons of climate change impacts, as well as for the assessment of the costs and benefits of adaptation measures. PMID:25929802

  20. Stochastic dynamic programming (SDP) with a conditional value-at-risk (CVaR) criterion for management of storm-water

    NASA Astrophysics Data System (ADS)

    Piantadosi, J.; Metcalfe, A. V.; Howlett, P. G.

    2008-01-01

    We present a new approach to stochastic dynamic programming (SDP) to determine a policy for management of urban storm-water that minimises conditional value-at-risk (CVaR). Storm-water flows into a large capture dam and is subsequently pumped to a holding dam. Water is then supplied directly to users or stored in an underground aquifer. We assume random inflow and constant demand. SDP is used to find a pumping policy that minimises CVaR, with a penalty for increased risk of environmental damage, and a pumping policy that maximises expected monetary value (EMV). We use both value iteration and policy improvement to show that the optimal policy under CVaR differs from the optimal policy under EMV.
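
CVaR, the criterion minimised here, has a simple sample estimator, the average of the worst (1 - α) fraction of outcomes, which is part of what makes it tractable inside value iteration. A sketch (the rounding rule for the tail size is one common convention, not the paper's):

```python
def sample_cvar(losses, alpha=0.95):
    """Sample CVaR (expected shortfall) at level alpha: the mean of the
    worst (1 - alpha) fraction of losses.  Always at least as large as
    the corresponding VaR, and unlike VaR it is a coherent risk measure."""
    xs = sorted(losses, reverse=True)                  # worst losses first
    k = max(1, int(round((1 - alpha) * len(xs))))      # size of the tail
    return sum(xs[:k]) / k
```

Because CVaR averages over the tail rather than reading off a single quantile, a policy that minimises it is penalised for the severity of rare bad outcomes, not just their frequency, which is the contrast with the EMV policy drawn in the abstract.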

  1. The EMEFS model evaluation

    SciTech Connect

    Barchet, W.R. ); Dennis, R.L. ); Seilkop, S.K. ); Banic, C.M.; Davies, D.; Hoff, R.M.; Macdonald, A.M.; Mickle, R.E.; Padro, J.; Puckett, K. ); Byun, D.; McHenry, J.N.

    1991-12-01

    The binational Eulerian Model Evaluation Field Study (EMEFS) consisted of several coordinated data gathering and model evaluation activities. In the EMEFS, data were collected by five air and precipitation monitoring networks between June 1988 and June 1990. Model evaluation is continuing. This interim report summarizes the progress made in the evaluation of the Regional Acid Deposition Model (RADM) and the Acid Deposition and Oxidant Model (ADOM) through the December 1990 completion of a State of Science and Technology report on model evaluation for the National Acid Precipitation Assessment Program (NAPAP). Because various assessment applications of RADM had to be evaluated for NAPAP, the report emphasizes the RADM component of the evaluation. A protocol for the evaluation was developed by the model evaluation team and defined the observed and predicted values to be used and the methods by which the observed and predicted values were to be compared. Scatter plots and time series of predicted and observed values were used to present the comparisons graphically. Difference statistics and correlations were used to quantify model performance. 64 refs., 34 figs., 6 tabs.
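
The "difference statistics and correlations" the protocol prescribes for quantifying model performance are conventional; a sketch of three common ones (mean bias, RMSE, Pearson correlation), with the caveat that the actual EMEFS protocol defines its own specific set of measures:

```python
import math

def evaluation_stats(obs, pred):
    """Paired observed/predicted comparison: mean bias (signed average
    error), RMSE (typical error magnitude), and Pearson correlation
    (agreement in temporal/spatial pattern)."""
    n = len(obs)
    bias = sum(p - o for o, p in zip(obs, pred)) / n
    rmse = math.sqrt(sum((p - o) ** 2 for o, p in zip(obs, pred)) / n)
    mo, mp = sum(obs) / n, sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    return bias, rmse, cov / (so * sp)
```

The three numbers answer different questions: bias detects systematic over- or under-prediction, RMSE the typical error size, and correlation whether the model tracks the pattern of the observations, which is why evaluation protocols report them together rather than any one alone.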

  2. ATMOSPHERIC MODEL EVALUATION

    EPA Science Inventory

    Evaluation of the Models-3/CMAQ is conducted in this task. The focus is on evaluation of ozone, other photochemical oxidants, and fine particles using data from both routine monitoring networks and special, intensive field programs. Two types of evaluations are performed here: pe...

  3. Integrated Assessment Model Evaluation

    NASA Astrophysics Data System (ADS)

    Smith, S. J.; Clarke, L.; Edmonds, J. A.; Weyant, J. P.

    2012-12-01

    Integrated assessment models of climate change (IAMs) are widely used to provide insights into the dynamics of the coupled human and socio-economic system, including emission mitigation analysis and the generation of future emission scenarios. Similar to the climate modeling community, the integrated assessment community has a two-decade history of model inter-comparison, which has served as one of the primary venues for model evaluation and confirmation. While analysis of historical trends in the socio-economic system has long played a key role in diagnostics of future scenarios from IAMs, formal hindcast experiments are just now being contemplated as evaluation exercises. Some initial thoughts on setting up such IAM evaluation experiments are discussed. Socio-economic systems do not follow strict physical laws, which means that evaluation needs to take place in a context, unlike that of physical system models, in which there are few fixed, unchanging relationships. Of course strict validation of even earth system models is not possible (Oreskes et al. 2004), a fact borne out by the inability of models to constrain the climate sensitivity. Energy-system models have also been grappling with some of the same questions over the last quarter century. For example, one of "the many questions in the energy field that are waiting for answers in the next 20 years" identified by Hans Landsberg in 1985 was "Will the price of oil resume its upward movement?" Of course we are still asking this question today. While, arguably, even fewer constraints apply to socio-economic systems, numerous historical trends and patterns have been identified, although often only in broad terms, that are used to guide the development of model components, parameter ranges, and scenario assumptions. IAM evaluation exercises are expected to provide useful information for interpreting model results and improving model behavior. A key step is the recognition of model boundaries, that is, what is inside

  4. VPPA weld model evaluation

    NASA Technical Reports Server (NTRS)

    Mccutcheon, Kimble D.; Gordon, Stephen S.; Thompson, Paul A.

    1992-01-01

    NASA uses the Variable Polarity Plasma Arc Welding (VPPAW) process extensively for fabrication of Space Shuttle External Tanks. This welding process has been in use at NASA since the late 1970s, but the physics of the process have never been satisfactorily modeled and understood. In an attempt to advance the level of understanding of VPPAW, Dr. Arthur C. Nunes, Jr., (NASA) has developed a mathematical model of the process. The work described in this report evaluated and used two versions (level-0 and level-1) of Dr. Nunes' model, and a model derived by the University of Alabama at Huntsville (UAH) from Dr. Nunes' level-1 model. Two series of VPPAW experiments were done, using over 400 different combinations of welding parameters. Observations were made of VPPAW process behavior as a function of specific welding parameter changes. Data from these weld experiments were used to evaluate and suggest improvements to Dr. Nunes' model. Experimental data and correlations with the model were used to develop a multi-variable control algorithm for use with a future VPPAW controller. This algorithm is designed to control weld widths (both on the crown and root of the weld) based upon the weld parameters, base metal properties, and real-time observation of the crown width. The algorithm exhibited accuracy comparable to that of the weld width measurements for both aluminum and mild steel welds.

  5. Evaluating Service Organization Models

    PubMed Central

    TOUATI, NASSERA; PINEAULT, RAYNALD; CHAMPAGNE, FRANÇOIS; DENIS, JEAN-LOUIS; BROUSSELLE, ASTRID; CONTANDRIOPOULOS, ANDRÉ-PIERRE; GENEAU, ROBERT

    2016-01-01

    Based on the example of the evaluation of service organization models, this article shows how a configurational approach overcomes the limits of traditional methods which for the most part have studied the individual components of various models considered independently of one another. These traditional methods have led to results (observed effects) that are difficult to interpret. The configurational approach, in contrast, is based on the hypothesis that effects are associated with a set of internally coherent model features that form various configurations. These configurations, like their effects, are context-dependent. We explore the theoretical basis of the configuration approach in order to emphasize its relevance, and discuss the methodological challenges inherent in the application of this approach through an in-depth analysis of the scientific literature. We also propose methodological solutions to these challenges. We illustrate from an example how a configurational approach has been used to evaluate primary care models. Finally, we begin a discussion on the implications of this new evaluation approach for the scientific and decision-making communities.

  6. Composite Load Model Evaluation

    SciTech Connect

    Lu, Ning; Qiao, Hong

    2007-09-30

    The WECC load modeling task force has dedicated its effort in the past few years to develop a composite load model that can represent behaviors of different end-user components. The modeling structure of the composite load model is recommended by the WECC load modeling task force. GE Energy has implemented this composite load model with a new function CMPLDW in its power system simulation software package, PSLF. For the last several years, Bonneville Power Administration (BPA) has taken the lead and collaborated with GE Energy to develop the new composite load model. Pacific Northwest National Laboratory (PNNL) and BPA joined forces to evaluate the CMPLDW and test its parameter settings to make sure that: • the model initializes properly, • all the parameter settings are functioning, and • the simulation results are as expected. The PNNL effort focused on testing the CMPLDW in a 4-bus system. Exhaustive testing of each parameter setting was performed to verify that each setting works. This report is a summary of the PNNL testing results and conclusions.

  7. Pragmatic geometric model evaluation

    NASA Astrophysics Data System (ADS)

    Pamer, Robert

    2015-04-01

    Quantification of subsurface model reliability is mathematically and technically demanding as there are many different sources of uncertainty and some of the factors can be assessed merely in a subjective way. For many practical applications in industry or risk assessment (e.g. geothermal drilling) a quantitative estimation of possible geometric variations in depth units is preferred over relative numbers because of cost calculations for different scenarios. The talk gives an overview of several factors that affect the geometry of structural subsurface models that are based upon typical geological survey organization (GSO) data like geological maps, borehole data and conceptually driven construction of subsurface elements (e.g. fault network). Within the context of the trans-European project "GeoMol" uncertainty analysis has to be very pragmatic, not least because of differing data rights, data policies and modelling software among the project partners. In a case study a two-step evaluation methodology for geometric subsurface model uncertainty is being developed. In a first step several models of the same volume of interest have been calculated by omitting successively more and more input data types (seismic constraints, fault network, outcrop data). The positions of the various horizon surfaces are then compared. The procedure is equivalent to comparing data of various levels of detail and therefore structural complexity. This gives a measure of the structural significance of each data set in space, and as a consequence areas of geometric complexity are identified. These areas are usually very data-sensitive; hence geometric variability in between individual data points in these areas is higher than in areas of low structural complexity.
Instead of calculating a multitude of different models by varying some input data or parameters as is done in Monte Carlo simulations, the aim of the second step of the evaluation procedure (which is part of the ongoing work) is to
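
    The surface-comparison step of the first evaluation stage can be illustrated with a toy example. The grids and depth values below are invented; real horizon surfaces would come from the modelling software:

```python
# Minimal sketch: compare the same horizon surface from two model variants
# (full data vs. one input data type omitted) and flag data-sensitive areas.
# Grid coordinates and depths (in metres) are invented for illustration.

full_model = {(x, y): 1200.0 + 5.0 * x + 3.0 * y for x in range(4) for y in range(4)}
reduced_model = {(x, y): z + (8.0 if x >= 2 else 1.0)  # larger misfit where the
                 for (x, y), z in full_model.items()}  # omitted data mattered

diffs = {p: abs(reduced_model[p] - full_model[p]) for p in full_model}
max_diff = max(diffs.values())
sensitive_area = [p for p, d in diffs.items() if d > 5.0]

print(max_diff, len(sensitive_area))  # → 8.0 8
```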

  8. BioVapor Model Evaluation

    EPA Science Inventory

    General background on modeling and specifics of modeling vapor intrusion are given. Three classical model applications are described and related to the problem of petroleum vapor intrusion. These indicate the need for model calibration and uncertainty analysis. Evaluation of Bi...

  9. Social Program Evaluation: Six Models.

    ERIC Educational Resources Information Center

    New Directions for Program Evaluation, 1980

    1980-01-01

    Representative models of program evaluation are described by their approach to values, and categorized by empirical style: positivism versus humanism. The models are: social process audit; experimental/quasi-experimental research design; goal-free evaluation; systems evaluation; cost-benefit analysis; and accountability program evaluation. (CP)

  10. Evaluation Theory, Models, and Applications

    ERIC Educational Resources Information Center

    Stufflebeam, Daniel L.; Shinkfield, Anthony J.

    2007-01-01

    "Evaluation Theory, Models, and Applications" is designed for evaluators and students who need to develop a commanding knowledge of the evaluation field: its history, theory and standards, models and approaches, procedures, and inclusion of personnel as well as program evaluation. This important book shows how to choose from a growing array of…

  11. Model Program Evaluations. Fact Sheet

    ERIC Educational Resources Information Center

    Arkansas Safe Schools Initiative Division, 2002

    2002-01-01

    There are probably thousands of programs and courses intended to prevent or reduce violence in this nation's schools. Evaluating these many programs has become a problem or goal in itself. There are now many evaluation programs, with many levels of designations, such as model, promising, best practice, exemplary and noteworthy. "Model program" is…

  12. Evaluating Causal Models.

    ERIC Educational Resources Information Center

    Watt, James H., Jr.

    Pointing out that linear causal models can organize the interrelationships of a large number of variables, this paper contends that such models are particularly useful to mass communication research, which must by necessity deal with complex systems of variables. The paper first outlines briefly the philosophical requirements for establishing a…

  13. Advocacy Evaluation: A Model for Internal Evaluation Offices.

    ERIC Educational Resources Information Center

    Sonnichsen, Richard C.

    1988-01-01

    As evaluations are more often implemented by internal staff, internal evaluators must begin to assume decision-making and advocacy tasks. This advocacy evaluation concept is described using the Federal Bureau of Investigation evaluation staff as a model. (TJH)

  14. A Model for Curriculum Evaluation

    ERIC Educational Resources Information Center

    Crane, Peter; Abt, Clark C.

    1969-01-01

    Describes in some detail the Curriculum Evaluation Model, "a technique for calculating the cost-effectiveness of alternative curriculum materials by a detailed breakdown and analysis of their components, quality, and cost. Coverage, appropriateness, motivational effectiveness, and cost are the four major categories in terms of which the…

  15. Sequentially Executed Model Evaluation Framework

    Energy Science and Technology Software Center (ESTSC)

    2014-02-14

    Provides a message passing framework between generic input, model and output drivers, and specifies an API for developing such drivers. Also provides batch and real-time controllers which step the model and I/O through the time domain (or other discrete domain), and sample I/O drivers. This is a library framework, and does not, itself, solve any problems or execute any modeling. The SeMe framework aids in development of models which operate on sequential information, such as time-series, where evaluation is based on prior results combined with new data for this iteration. Has applications in quality monitoring, and was developed as part of the CANARY-EDS software, where real-time water quality data is being analyzed for anomalies.

  16. Sequentially Executed Model Evaluation Framework

    Energy Science and Technology Software Center (ESTSC)

    2015-10-20

    Provides a message passing framework between generic input, model and output drivers, and specifies an API for developing such drivers. Also provides batch and real-time controllers which step the model and I/O through the time domain (or other discrete domain), and sample I/O drivers. This is a library framework, and does not, itself, solve any problems or execute any modeling. The SeMe framework aids in development of models which operate on sequential information, such as time-series, where evaluation is based on prior results combined with new data for this iteration. Has applications in quality monitoring, and was developed as part of the CANARY-EDS software, where real-time water quality data is being analyzed for anomalies.

  17. Sequentially Executed Model Evaluation Framework

    SciTech Connect

    2015-10-20

    Provides a message passing framework between generic input, model and output drivers, and specifies an API for developing such drivers. Also provides batch and real-time controllers which step the model and I/O through the time domain (or other discrete domain), and sample I/O drivers. This is a library framework, and does not, itself, solve any problems or execute any modeling. The SeMe framework aids in development of models which operate on sequential information, such as time-series, where evaluation is based on prior results combined with new data for this iteration. Has applications in quality monitoring, and was developed as part of the CANARY-EDS software, where real-time water quality data is being analyzed for anomalies.

  18. Sequentially Executed Model Evaluation Framework

    SciTech Connect

    2014-02-14

    Provides a message passing framework between generic input, model and output drivers, and specifies an API for developing such drivers. Also provides batch and real-time controllers which step the model and I/O through the time domain (or other discrete domain), and sample I/O drivers. This is a library framework, and does not, itself, solve any problems or execute any modeling. The SeMe framework aids in development of models which operate on sequential information, such as time-series, where evaluation is based on prior results combined with new data for this iteration. Has applications in quality monitoring, and was developed as part of the CANARY-EDS software, where real-time water quality data is being analyzed for anomalies.

  19. Infrasound Sensor Models and Evaluations

    SciTech Connect

    KROMER,RICHARD P.; MCDONALD,TIMOTHY S.

    2000-07-31

    Sandia National Laboratories has continued to evaluate the performance of infrasound sensors that are candidates for use by the International Monitoring System (IMS) for the Comprehensive Nuclear-Test-Ban Treaty Organization. The performance criteria against which these sensors are assessed are specified in "Operational Manual for Infra-sound Monitoring and the International Exchange of Infrasound Data". This presentation includes the results of efforts concerning two of these sensors: (1) Chaparral Physics Model 5; and (2) CEA MB2000. Sandia is working with Chaparral Physics in order to improve the capability of the Model 5 (a prototype sensor) to be calibrated and evaluated. With the assistance of the Scripps Institution of Oceanography, Sandia is also conducting tests to evaluate the performance of the CEA MB2000. Sensor models based on theoretical transfer functions and manufacturer specifications for these two devices have been developed. This presentation will feature the results of coherence-based data analysis of signals from a huddle test, utilizing several sensors of both types, in order to verify the sensor performance.

  20. The Spiral-Interactive Program Evaluation Model.

    ERIC Educational Resources Information Center

    Khaleel, Ibrahim Adamu

    1988-01-01

    Describes the spiral interactive program evaluation model, which is designed to evaluate vocational-technical education programs in secondary schools in Nigeria. Program evaluation is defined; utility oriented and process oriented models for evaluation are described; and internal and external evaluative factors and variables that define each…

  1. Decisionmaking Context Model for Enhancing Evaluation Utilization.

    ERIC Educational Resources Information Center

    Brown, Robert D.; And Others

    1984-01-01

    This paper discusses two models that hold promise for helping evaluators understand and cope with different decision contexts: (1) the Conflict Model (Janis and Mann, 1977) and (2) the Social Process Model (Vroom and Yago, 1974). Implications and guidelines for using decisionmaking models in evaluation settings are presented. (BS)

  2. Beyond Evaluation: A Model for Cooperative Evaluation of Internet Resources.

    ERIC Educational Resources Information Center

    Kirkwood, Hal P., Jr.

    1998-01-01

    Presents a status report on Web site evaluation efforts, listing dead, merged, new review, Yahoo! wannabes, subject-specific review, former librarian-managed, and librarian-managed review sites; discusses how sites are evaluated; describes and demonstrates (reviewing company directories) the Marr/Kirkwood evaluation model; and provides an…

  3. Developing Useful Evaluation Capability: Lessons From the Model Evaluation Program.

    ERIC Educational Resources Information Center

    Waller, John D.; And Others

    The assessment of 12 model evaluation systems provides insight and guidance into their development for government managers and evaluators. The eight individual completed grants are documented in a series of case studies, while the synthesis of all the project experiences and results are summarized. There are things that evaluation systems can do…

  4. The EMEFS model evaluation. An interim report

    SciTech Connect

    Barchet, W.R.; Dennis, R.L.; Seilkop, S.K.; Banic, C.M.; Davies, D.; Hoff, R.M.; Macdonald, A.M.; Mickle, R.E.; Padro, J.; Puckett, K.; Byun, D.; McHenry, J.N.; Karamchandani, P.; Venkatram, A.; Fung, C.; Misra, P.K.; Hansen, D.A.; Chang, J.S.

    1991-12-01

    The binational Eulerian Model Evaluation Field Study (EMEFS) consisted of several coordinated data gathering and model evaluation activities. In the EMEFS, data were collected by five air and precipitation monitoring networks between June 1988 and June 1990. Model evaluation is continuing. This interim report summarizes the progress made in the evaluation of the Regional Acid Deposition Model (RADM) and the Acid Deposition and Oxidant Model (ADOM) through the December 1990 completion of a State of Science and Technology report on model evaluation for the National Acid Precipitation Assessment Program (NAPAP). Because various assessment applications of RADM had to be evaluated for NAPAP, the report emphasizes the RADM component of the evaluation. A protocol for the evaluation was developed by the model evaluation team and defined the observed and predicted values to be used and the methods by which the observed and predicted values were to be compared. Scatter plots and time series of predicted and observed values were used to present the comparisons graphically. Difference statistics and correlations were used to quantify model performance. 64 refs., 34 figs., 6 tabs.
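
    The difference statistics and correlations mentioned above are standard quantities; a minimal sketch with invented deposition values (not EMEFS data) might look like:

```python
# Illustrative difference statistics (mean bias, RMSE) and Pearson
# correlation between predicted and observed values; the numbers are
# invented, not EMEFS network data.
import math

observed = [2.1, 3.4, 1.8, 4.0, 2.9, 3.6]
predicted = [2.4, 3.1, 2.2, 4.4, 2.6, 3.9]

n = len(observed)
bias = sum(p - o for p, o in zip(predicted, observed)) / n
rmse = math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / n)

mean_o = sum(observed) / n
mean_p = sum(predicted) / n
cov = sum((o - mean_o) * (p - mean_p)
          for o, p in zip(observed, predicted)) / n
sd_o = math.sqrt(sum((o - mean_o) ** 2 for o in observed) / n)
sd_p = math.sqrt(sum((p - mean_p) ** 2 for p in predicted) / n)
r = cov / (sd_o * sd_p)

print(round(bias, 3), round(rmse, 3), round(r, 3))  # → 0.133 0.337 0.925
```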

  5. Evaluating modeling tools for the EDOS

    NASA Technical Reports Server (NTRS)

    Knoble, Gordon; Mccaleb, Frederick; Aslam, Tanweer; Nester, Paul

    1994-01-01

    The Earth Observing System (EOS) Data and Operations System (EDOS) Project is developing a functional, system performance model to support the system implementation phase of the EDOS which is being designed and built by the Goddard Space Flight Center (GSFC). The EDOS Project will use modeling to meet two key objectives: (1) manage system design impacts introduced by unplanned changes in mission requirements; and (2) evaluate evolutionary technology insertions throughout the development of the EDOS. To select a suitable modeling tool, the EDOS modeling team developed an approach for evaluating modeling tools and languages by deriving evaluation criteria from both the EDOS modeling requirements and the development plan. Essential and optional features for an appropriate modeling tool were identified and compared with known capabilities of several modeling tools. Vendors were also provided the opportunity to model a representative EDOS processing function to demonstrate the applicability of their modeling tool to the EDOS modeling requirements. This paper emphasizes the importance of using a well defined approach for evaluating tools to model complex systems like the EDOS. The results of this evaluation study do not in any way signify the superiority of any one modeling tool since the results will vary with the specific modeling requirements of each project.

  6. A Model for Administrative Evaluation by Subordinates.

    ERIC Educational Resources Information Center

    Budig, Jeanne E.

    Under the administrator evaluation program adopted at Vincennes University, all faculty and professional staff are invited to evaluate each administrator above them in the chain of command. Originally based on the Purdue University "cafeteria" system, this evaluation model has been used biannually for 10 years. In an effort to simplify the system,…

  7. Model Performance Evaluation and Scenario Analysis (MPESA) Tutorial

    EPA Science Inventory

    This tool consists of two parts: model performance evaluation and scenario analysis (MPESA). The model performance evaluation consists of two components: model performance evaluation metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit m...
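
    One widely used goodness-of-fit metric of the kind such tools report is the Nash-Sutcliffe efficiency (NSE); a minimal sketch with invented data:

```python
# Nash-Sutcliffe efficiency (NSE): 1 for a perfect fit, 0 when the model
# is no better than predicting the observed mean. Data are invented.

def nash_sutcliffe(observed, predicted):
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

obs = [10.0, 12.0, 9.0, 15.0, 11.0]
good = [10.5, 11.5, 9.4, 14.6, 11.2]
poor = [11.4, 11.4, 11.4, 11.4, 11.4]  # constant at the observed mean

print(round(nash_sutcliffe(obs, good), 3))  # close to 1
print(round(nash_sutcliffe(obs, poor), 3))  # → 0.0
```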

  8. THE ATMOSPHERIC MODEL EVALUATION (AMET): METEOROLOGY MODULE

    EPA Science Inventory

    An Atmospheric Model Evaluation Tool (AMET), composed of meteorological and air quality components, is being developed to examine the error and uncertainty in the model simulations. AMET matches observations with the corresponding model-estimated values in space and time, and the...
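
    The space-time matching step described above can be sketched as a nearest-neighbour lookup; the grid points, hours, and values here are invented for illustration:

```python
# Toy version of matching observations to model-estimated values in space
# and time: pick the model grid point and hour nearest each observation.
# The grid, times, and values are invented, not AMET data.

# model output keyed by (lat, lon, hour)
model = {(40.0, -75.0, 0): 31.0, (40.0, -75.0, 1): 33.0,
         (41.0, -75.0, 0): 28.0, (41.0, -75.0, 1): 27.5}

def match(obs_lat, obs_lon, obs_hour):
    """Return the model value at the nearest grid point and hour."""
    key = min(model, key=lambda k: (k[0] - obs_lat) ** 2
                                   + (k[1] - obs_lon) ** 2
                                   + (k[2] - obs_hour) ** 2)
    return model[key]

# observations: (lat, lon, hour, observed value)
obs = [(40.2, -75.1, 1, 32.0), (40.9, -74.8, 0, 29.0)]
pairs = [(value, match(lat, lon, hour)) for lat, lon, hour, value in obs]
print(pairs)  # each observed value paired with its matched model value
```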

  9. Black Model Appearance and Product Evaluations.

    ERIC Educational Resources Information Center

    Kerin, Roger A.

    1979-01-01

    Examines a study of how human models affect the impression conveyed by an advertisement, particularly the effect of a Black model's physical characteristics on product evaluations among Black and White females. Results show that the physical appearance of the model influenced impressions of product quality and suitability for personal use. (JMF)

  10. Comprehensive system models: Strategies for evaluation

    NASA Technical Reports Server (NTRS)

    Field, Christopher; Kutzbach, John E.; Ramanathan, V.; Maccracken, Michael C.

    1992-01-01

    The task of evaluating comprehensive earth system models is vast, involving validation of every model component at every scale of organization, as well as tests of all the individual linkages. Even the most detailed evaluation of each of the component processes and the individual links among them should not, however, engender confidence in the performance of the whole. The integrated earth system is so rich with complex feedback loops, often involving components of the atmosphere, oceans, biosphere, and cryosphere, that it is certain to exhibit emergent properties very difficult to predict from the perspective of a narrow focus on any individual component of the system. Therefore, a substantial share of the task of evaluating comprehensive earth system models must reside at the level of whole system evaluations. Since complete, integrated atmosphere/ocean/biosphere/hydrology models are not yet operational, questions of evaluation must be addressed at the level of the kinds of earth system processes that the models should be competent to simulate, rather than at the level of specific performance criteria. Here, we have tried to identify examples of earth system processes that are difficult to simulate with existing models and that involve a rich enough suite of feedbacks that they are unlikely to be satisfactorily described by highly simplified or toy models. Our purpose is not to specify a checklist of evaluation criteria but to introduce characteristics of the earth system that may present useful opportunities for model testing and, of course, improvement.

  11. Evaluation of Galactic Cosmic Ray Models

    NASA Technical Reports Server (NTRS)

    Adams, James H., Jr.; Heiblim, Samuel; Malott, Christopher

    2009-01-01

    Models of the galactic cosmic ray spectra have been tested by comparing their predictions to an evaluated database containing more than 380 measured cosmic ray spectra extending from 1960 to the present.

  12. Outcomes Evaluation: A Model for the Future.

    ERIC Educational Resources Information Center

    Blasi, John F.; Davis, Barbara S.

    1986-01-01

    Examines issues and problems related to the measurement of community college outcomes in relation to mission and goals. Presents a model for outcomes evaluation at the community college which derives from the mission statement and provides evaluative comment and comparison with institutional and national norms. (DMM)

  13. Evaluation Model for Career Programs. Final Report.

    ERIC Educational Resources Information Center

    Byerly, Richard L.; And Others

    A study was conducted to provide and test an evaluative model that could be utilized in providing curricular evaluation of the various career programs. Two career fields, dental assistant and auto mechanic, were chosen for study. A questionnaire based upon the actual job performance was completed by six groups connected with the auto mechanics and…

  14. SAPHIRE models and software for ASP evaluations

    SciTech Connect

    Sattison, M.B.; Schroeder, J.A.; Russell, K.D.

    1995-04-01

    The Idaho National Engineering Laboratory (INEL) over the past year has created 75 plant-specific Accident Sequence Precursor (ASP) models using the SAPHIRE suite of PRA codes. Along with the new models, the INEL has also developed a new module for SAPHIRE which is tailored specifically to the unique needs of conditional core damage probability (CCDP) evaluations. These models and software will be the next generation of risk tools for the evaluation of accident precursors by both NRR and AEOD. This paper presents an overview of the models and software. Key characteristics include: (1) classification of the plant models according to plant response with a unique set of event trees for each plant class, (2) plant-specific fault trees using supercomponents, (3) generation and retention of all system and sequence cutsets, (4) full flexibility in modifying logic, regenerating cutsets, and requantifying results, and (5) user interface for streamlined evaluation of ASP events.
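
    The cutset quantification underlying a conditional core damage probability (CCDP) figure can be sketched with the standard minimal-cut-set upper bound. The basic events, probabilities, and cutsets below are invented, not SAPHIRE data:

```python
# Toy CCDP-style quantification from minimal cutsets using the common
# min-cut upper bound: P(top) ≈ 1 - prod(1 - P(cutset_i)). Events and
# probabilities are invented for illustration.

basic_events = {"DG-A": 0.05, "DG-B": 0.05, "HPI": 1.0e-3, "AFW": 2.0e-3}

# each minimal cutset is a set of basic events that together cause the
# undesired plant state
cutsets = [{"DG-A", "DG-B"}, {"HPI", "AFW"}]

def cutset_prob(cutset):
    p = 1.0
    for event in cutset:
        p *= basic_events[event]
    return p

ccdp = 1.0
for cs in cutsets:
    ccdp *= 1.0 - cutset_prob(cs)
ccdp = 1.0 - ccdp

print(f"{ccdp:.3e}")  # → 2.502e-03
```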

  15. Rock mechanics models evaluation report. [Contains glossary

    SciTech Connect

    Not Available

    1987-08-01

    This report documents the evaluation of the thermal and thermomechanical models and codes for repository subsurface design and for design constraint analysis. The evaluation was based on a survey of the thermal and thermomechanical codes and models that are applicable to subsurface design, followed by a Kepner-Tregoe (KT) structured decision analysis of the codes and models. The primary recommendations of the analysis are that the DOT code be used for two-dimensional thermal analysis and that the STEALTH and HEATING 5/6 codes be used for three-dimensional and complicated two-dimensional thermal analysis. STEALTH and SPECTROM 32 are recommended for thermomechanical analyses. The other evaluated codes should be considered for use in certain applications. A separate review of salt creep models indicates that the commonly used exponential time law model is appropriate for use in repository design studies. 38 refs., 1 fig., 7 tabs.
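
    A Kepner-Tregoe style analysis ranks candidates by weighted criteria scores; a minimal sketch, with criteria, weights, and scores invented for illustration (not the report's actual KT worksheet):

```python
# Toy Kepner-Tregoe style weighted scoring: each candidate code is scored
# against weighted criteria and ranked by weighted sum. All weights and
# scores below are invented.

weights = {"accuracy": 0.4, "dimensionality": 0.3, "usability": 0.3}

candidates = {
    "DOT":         {"accuracy": 8, "dimensionality": 5, "usability": 9},
    "STEALTH":     {"accuracy": 9, "dimensionality": 9, "usability": 6},
    "HEATING 5/6": {"accuracy": 7, "dimensionality": 8, "usability": 7},
}

scores = {name: sum(weights[c] * s for c, s in crit.items())
          for name, crit in candidates.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 2))  # → STEALTH 8.1
```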

  16. EPA EXPOSURE MODELS LIBRARY AND INTEGRATED MODEL EVALUATION SYSTEM

    EPA Science Inventory

    The third edition of the U.S. Environmental Protection Agency's (EPA) EML/IMES (Exposure Models Library and Integrated Model Evaluation System) on CD-ROM is now available. The purpose of the disc is to provide a compact and efficient means to distribute exposure models, documentat...

  17. Evaluation of constitutive models for crushed salt

    SciTech Connect

    Callahan, G.D.; Loken, M.C.; Hurtado, L.D.; Hansen, F.D.

    1996-05-01

    Three constitutive models are recommended as candidates for describing the deformation of crushed salt. These models are generalized to three-dimensional states of stress to include the effects of mean and deviatoric stress and modified to include effects of temperature, grain size, and moisture content. A database including hydrostatic consolidation and shear consolidation tests conducted on Waste Isolation Pilot Plant (WIPP) and southeastern New Mexico salt is used to determine material parameters for the models. To evaluate the capability of the models, parameter values obtained from fitting the complete database are used to predict the individual tests. Finite element calculations of a WIPP shaft with emplaced crushed salt demonstrate the model predictions.

  18. Evaluation of a lake whitefish bioenergetics model

    USGS Publications Warehouse

    Madenjian, Charles P.; O'Connor, Daniel V.; Pothoven, Steven A.; Schneeberger, Philip J.; Rediske, Richard R.; O'Keefe, James P.; Bergstedt, Roger A.; Argyle, Ray L.; Brandt, Stephen B.

    2006-01-01

    We evaluated the Wisconsin bioenergetics model for lake whitefish Coregonus clupeaformis in the laboratory and in the field. For the laboratory evaluation, lake whitefish were fed rainbow smelt Osmerus mordax in four laboratory tanks during a 133-d experiment. Based on a comparison of bioenergetics model predictions of lake whitefish food consumption and growth with observed consumption and growth, we concluded that the bioenergetics model furnished significantly biased estimates of both food consumption and growth. On average, the model overestimated consumption by 61% and underestimated growth by 16%. The source of the bias was probably an overestimation of the respiration rate. We therefore adjusted the respiration component of the bioenergetics model to obtain a good fit of the model to the observed consumption and growth in our laboratory tanks. Based on the adjusted model, predictions of food consumption over the 133-d period fell within 5% of observed consumption in three of the four tanks and within 9% of observed consumption in the remaining tank. We used polychlorinated biphenyls (PCBs) as a tracer to evaluate model performance in the field. Based on our laboratory experiment, the efficiency with which lake whitefish retained PCBs from their food (ρ) was estimated at 0.45. We applied the bioenergetics model to Lake Michigan lake whitefish and then used PCB determinations of both lake whitefish and their prey from Lake Michigan to estimate ρ in the field. Application of the original model to Lake Michigan lake whitefish yielded a field estimate of 0.28, implying that the original formulation of the model overestimated consumption in Lake Michigan by 61%. Application of the bioenergetics model with the adjusted respiration component resulted in a field ρ estimate of 0.56, implying that this revised model underestimated consumption by 20%.
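
    The bias figures quoted above are percent differences between predicted and observed quantities; a minimal sketch of that arithmetic, with invented observed and predicted values (not the study's data):

```python
# Percent-bias arithmetic behind statements like "overestimated consumption
# by 61%" and "underestimated growth by 16%". All numbers are invented.

def percent_bias(predicted, observed):
    return 100.0 * (predicted - observed) / observed

# consumption: model predicts 161 g where 100 g was observed -> +61%
print(round(percent_bias(161.0, 100.0), 1))  # → 61.0

# growth: model predicts 42 g where 50 g was observed -> -16%
print(round(percent_bias(42.0, 50.0), 1))  # → -16.0
```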

  19. The Discrepancy Evaluation Model. I. Basic Tenets of the Model.

    ERIC Educational Resources Information Center

    Steinmetz, Andres

    1976-01-01

    The basic principles of the discrepancy evaluation model (DEM), developed by Malcolm Provus, are presented. The three concepts which are essential to DEM are defined: (1) the standard is a description of how something should be; (2) performance measures are used to find out the actual characteristics of the object being evaluated; and (3) the…

  20. Saphire models and software for ASP evaluations

    SciTech Connect

    Sattison, M.B.

    1997-02-01

    The Idaho National Engineering Laboratory (INEL) over the past three years has created 75 plant-specific Accident Sequence Precursor (ASP) models using the SAPHIRE suite of PRA codes. Along with the new models, the INEL has also developed a new module for SAPHIRE which is tailored specifically to the unique needs of ASP evaluations. These models and software will be the next generation of risk tools for the evaluation of accident precursors by both the U.S. Nuclear Regulatory Commission`s (NRC`s) Office of Nuclear Reactor Regulation (NRR) and the Office for Analysis and Evaluation of Operational Data (AEOD). This paper presents an overview of the models and software. Key characteristics include: (1) classification of the plant models according to plant response with a unique set of event trees for each plant class, (2) plant-specific fault trees using supercomponents, (3) generation and retention of all system and sequence cutsets, (4) full flexibility in modifying logic, regenerating cutsets, and requantifying results, and (5) user interface for streamlined evaluation of ASP events. Future plans for the ASP models are also presented.

  1. Multi-criteria evaluation of hydrological models

    NASA Astrophysics Data System (ADS)

    Rakovec, Oldrich; Clark, Martyn; Weerts, Albrecht; Hill, Mary; Teuling, Ryan; Uijlenhoet, Remko

    2013-04-01

    Over the last few years, there has been a tendency in the hydrological community to move from simple conceptual models towards more complex, physically/process-based hydrological models. This is because conceptual models often fail to simulate the dynamics of the observations. However, there is little agreement on how much complexity needs to be considered within the complex process-based models. One way to proceed is to improve understanding of what is important and unimportant in the models considered. The aim of this ongoing study is to evaluate structural model adequacy using alternative conceptual and process-based models of hydrological systems, with an emphasis on understanding how model complexity relates to observed hydrological processes. Some of the models require considerable execution time, and the computationally frugal sensitivity analysis, model calibration and uncertainty quantification methods are well-suited to providing important insights for models with lengthy execution times. The current experiment evaluates two versions of the Framework for Understanding Structural Errors (FUSE), which both enable running model inter-comparison experiments. One supports computationally efficient conceptual models, and the second supports more process-based models that tend to have longer execution times. The conceptual FUSE combines components of four existing conceptual hydrological models. The process-based framework consists of different forms of Richards' equation, numerical solutions, groundwater parameterizations and hydraulic conductivity distribution. The hydrological analysis of the model processes has evolved from focusing only on simulated runoff (final model output), to also including other criteria such as soil moisture and groundwater levels. Parameter importance and associated structural importance are evaluated using different types of sensitivity analysis techniques, making use of both robust global methods (e.g. Sobol') as well as several
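
    Variance-based sensitivity analysis of the Sobol' type attributes output variance to individual parameters. A toy first-order estimate via binning, with an invented two-parameter model (not the FUSE setup):

```python
# Toy variance-based sensitivity analysis in the spirit of the Sobol'
# method: first-order indices estimated by binning each parameter and
# computing Var(E[Y|X_i]) / Var(Y). The "model" function is invented.
import random

random.seed(0)

def model(x1, x2):
    # x1 should dominate the output variance
    return 3.0 * x1 + 0.3 * x2

n = 20000
samples = [(random.random(), random.random()) for _ in range(n)]
ys = [model(x1, x2) for x1, x2 in samples]
mean_y = sum(ys) / n
var_y = sum((y - mean_y) ** 2 for y in ys) / n

def first_order(index, bins=20):
    """Estimate S_i = Var(E[Y|X_i]) / Var(Y) by binning parameter i."""
    groups = [[] for _ in range(bins)]
    for sample, y in zip(samples, ys):
        groups[min(int(sample[index] * bins), bins - 1)].append(y)
    var_cond = sum((len(g) / n) * (sum(g) / len(g) - mean_y) ** 2
                   for g in groups if g)
    return var_cond / var_y

# x1 accounts for nearly all of the variance, x2 for almost none
print(round(first_order(0), 2), round(first_order(1), 2))
```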

  2. Evaluation of trends in wheat yield models

    NASA Technical Reports Server (NTRS)

    Ferguson, M. C.

    1982-01-01

    Trend terms in models for wheat yield in the U.S. Great Plains for the years 1932 to 1976 are evaluated. The subset of meteorological variables yielding the largest adjusted R² is selected using the method of leaps and bounds. Latent root regression is used to eliminate multicollinearities, and generalized ridge regression is used to introduce bias to provide stability in the data matrix. The regression model used provides for two trends in each of two models: a dependent model in which the trend line is piece-wise continuous, and an independent model in which the trend line is discontinuous at the year of the slope change. It was found that the trend lines best describing the wheat yields consisted of combinations of increasing, decreasing, and constant trend: four combinations for the dependent model and seven for the independent model.
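
    The "independent model" above fits separate trend lines before and after the slope-change year, so the fitted trend may be discontinuous there. A minimal sketch with invented yields:

```python
# Toy independent-model trend fit: separate ordinary-least-squares lines
# before and after a candidate slope-change year. Yields are invented
# (increasing trend to 1950, constant afterwards), not Great Plains data.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

years = list(range(1932, 1977))
yields = [10.0 + 0.5 * (y - 1932) if y <= 1950 else 19.0 for y in years]

change = 1950
before = [(y, v) for y, v in zip(years, yields) if y <= change]
after = [(y, v) for y, v in zip(years, yields) if y > change]

slope_before, _ = fit_line([y for y, _ in before], [v for _, v in before])
slope_after, _ = fit_line([y for y, _ in after], [v for _, v in after])
print(round(slope_before, 2), round(slope_after, 2))  # → 0.5 0.0
```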

  3. Dynamic Multicriteria Evaluation of Conceptual Hydrological Models

    NASA Astrophysics Data System (ADS)

    de Vos, N. J.; Rientjes, T. H.; Fenicia, F.; Gupta, H. V.

    2007-12-01

    Accurate and precise forecasts of river streamflows are crucial for successful management of water resources, especially under the threat of hydrological extremes such as floods and droughts. Conceptual rainfall-runoff models are the most popular approach in flood forecasting. However, the calibration and evaluation of such models is often oversimplified by the use of performance statistics that largely ignore the dynamic character of a watershed system. This research aims to find novel ways of model evaluation by identifying periods of hydrologic similarity and customizing evaluation within each period using multiple criteria. A dynamic approach to hydrologic model identification, calibration and testing can be realized by applying clustering algorithms (e.g., the Self-Organizing Map or the Fuzzy C-means algorithm) to hydrological data. These algorithms are able to identify clusters in the data that represent periods of hydrological similarity. In this way, dynamic catchment system behavior can be simplified within the clusters that are identified. Although clustering requires a number of subjective choices, new insights into the hydrological functioning of a catchment can be obtained. Finally, separate multi-criteria model calibration and evaluation is performed for each of the clusters. Such a model evaluation procedure proves to be reliable and gives much-needed feedback on exactly where certain model structures fail. Several clustering algorithms were tested on two data sets of meso-scale and large-scale catchments. The results show that the clustering algorithms define categories that reflect hydrological process understanding: dry/wet seasons, rising/falling hydrograph limbs, precipitation-driven/non-driven periods, etc. The results of various clustering algorithms are compared and validated using expert knowledge.
Calibration results on a conceptual hydrological model show that the common practice of single-criteria calibration over the complete time series fails to perform
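
    The clustering step this abstract describes can be illustrated with a minimal k-means sketch on synthetic (precipitation, flow) pairs; the SOM and Fuzzy C-means variants used in the study follow the same identify-periods-then-evaluate pattern:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic daily (precipitation, flow) pairs: a "dry" and a "wet" regime
dry = rng.normal([1.0, 2.0], 0.3, size=(150, 2))
wet = rng.normal([8.0, 12.0], 0.8, size=(150, 2))
data = np.vstack([dry, wet])

def kmeans(X, k, iters=50):
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each sample to its nearest center
        labels = ((X[:, None] - centers) ** 2).sum(-1).argmin(axis=1)
        # Recompute centers; keep the old one if a cluster goes empty
        centers = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return labels, centers

labels, centers = kmeans(data, 2)
# Each cluster is a candidate "period of hydrological similarity";
# calibration and evaluation would then be run separately per cluster
```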

  4. Evaluating network models: A likelihood analysis

    NASA Astrophysics Data System (ADS)

    Wang, Wen-Qiang; Zhang, Qian-Ming; Zhou, Tao

    2012-04-01

    Many models have been put forward to mimic the evolution of real networked systems. A well-accepted way to judge their validity is to compare the modeling results with real networks with respect to several structural features. Even for a specific real network, we cannot fairly evaluate the goodness of different models, since there are too many structural features and no criterion for selecting and weighting them. Motivated by studies on link prediction algorithms, we propose a unified method to evaluate network models via comparison of the likelihoods of the currently observed network under different models, with the assumption that the higher the likelihood is, the more accurate the model is. We test our method on the real Internet at the Autonomous System (AS) level, and the results suggest that the Generalized Linear Preferential (GLP) model outperforms the Tel Aviv Network Generator (Tang), while both models are better than the Barabási-Albert (BA) and Erdős-Rényi (ER) models. Our method can further be applied to determine the optimal parameter values that correspond to the maximal likelihood. The experiment indicates that the parameters obtained by our method can better capture the characteristics of newly added nodes and links in the AS-level Internet than the original methods in the literature.
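
    A minimal sketch of the likelihood idea, using the simple Erdős-Rényi model rather than the GLP or Tang generators (the edge list below is hypothetical): the parameter maximising the likelihood of the observed graph is its edge density.

```python
import math
from itertools import combinations

# Hypothetical observed undirected network as an edge set
nodes = range(6)
edges = {(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 5)}
pairs = list(combinations(nodes, 2))
m, M = len(edges), len(pairs)

def log_likelihood(p):
    # Under Erdos-Renyi G(n, p), each pair is an edge independently with prob p
    return m * math.log(p) + (M - m) * math.log(1.0 - p)

# The observed edge density m/M maximises the ER likelihood of this graph
candidates = [0.1, 0.2, m / M, 0.6, 0.9]
best = max(candidates, key=log_likelihood)
```

    Comparing models then amounts to comparing each model's maximised log-likelihood of the same observed network, exactly the ranking criterion the abstract proposes.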

  5. PREFACE SPECIAL ISSUE ON MODEL EVALUATION: EVALUATION OF URBAN AND REGIONAL EULERIAN AIR QUALITY MODELS

    EPA Science Inventory

    The "Preface to the Special Edition on Model Evaluation: Evaluation of Urban and Regional Eulerian Air Quality Models" is a brief introduction to the papers included in a special issue of Atmospheric Environment. The Preface provides a background for the papers, which have thei...

  6. Performance Evaluation of Dense Gas Dispersion Models.

    NASA Astrophysics Data System (ADS)

    Touma, Jawad S.; Cox, William M.; Thistle, Harold; Zapert, James G.

    1995-03-01

    This paper summarizes the results of a study to evaluate the performance of seven dense gas dispersion models using data from three field experiments. Two models (DEGADIS and SLAB) are in the public domain and the other five (AIRTOX, CHARM, FOCUS, SAFEMODE, and TRACE) are proprietary. The field data used are the Desert Tortoise pressurized ammonia releases, Burro liquefied natural gas spill tests, and the Goldfish anhydrous hydrofluoric acid spill experiments. Desert Tortoise and Goldfish releases were simulated as horizontal jet releases, and Burro as a liquid pool. Performance statistics were used to compare maximum observed concentrations and plume half-width to those predicted by each model. Model performance varied and no model exhibited consistently good performance across all three databases. However, when combined across the three databases, all models performed within a factor of 2. Problems encountered are discussed in order to help future investigators.
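
    The factor-of-2 criterion reported above is commonly computed as the FAC2 fraction; the paired observed/predicted maximum concentrations below are hypothetical:

```python
import numpy as np

# Hypothetical paired observed / predicted maximum concentrations (ppm)
obs  = np.array([10.0, 25.0, 4.0, 60.0, 15.0])
pred = np.array([14.0, 18.0, 9.0, 45.0, 16.0])

ratio = pred / obs
# FAC2: fraction of predictions within a factor of two of the observations
fac2 = float(np.mean((ratio >= 0.5) & (ratio <= 2.0)))
```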

  7. Evaluation of a habitat suitability index model

    USGS Publications Warehouse

    Farmer, A.H.; Cade, B.S.; Stauffer, D.F.

    2002-01-01

    We assisted with development of a model for maternity habitat of the Indiana bat (Myotis sodalis), for use in conducting assessments of projects potentially impacting this endangered species. We started with an existing model, modified that model in a workshop, and evaluated the revised model, using data previously collected by others. Our analyses showed that higher indices of habitat suitability were associated with sites where Indiana bats were present and, thus, the model may be useful for identifying suitable habitat. Utility of the model, however, was based on a single component: density of suitable roost trees. Percentage of landscape in forest did not allow differentiation between sites occupied and not occupied by Indiana bats. Moreover, in spite of a general opinion by participants in the workshop that bodies of water were highly productive feeding areas and that a diversity of feeding habitats was optimal, we found no evidence to support either hypothesis.

  8. Optical Storage Performance Modeling and Evaluation.

    ERIC Educational Resources Information Center

    Behera, Bailochan; Singh, Harpreet

    1990-01-01

    Evaluates different types of storage media for long-term archival storage of large amounts of data. Existing storage media are reviewed, including optical disks, optical tape, magnetic storage, and microfilm; three models are proposed based on document storage requirements; performance analysis is considered; and cost effectiveness is discussed.…

  9. Evaluation of Usability Utilizing Markov Models

    ERIC Educational Resources Information Center

    Penedo, Janaina Rodrigues; Diniz, Morganna; Ferreira, Simone Bacellar Leal; Silveira, Denis S.; Capra, Eliane

    2012-01-01

    Purpose: The purpose of this paper is to analyze the usability of a remote learning system in its initial development phase, using a quantitative usability evaluation method through Markov models. Design/methodology/approach: The paper opted for an exploratory study. The data of interest of the research correspond to the possible accesses of users…
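
    As a sketch of the Markov-model approach to usability analysis (the states and transition probabilities below are hypothetical, not the system studied in the paper), the long-run share of time users spend in each interface state is the stationary distribution of the transition matrix:

```python
import numpy as np

# Hypothetical user-navigation Markov chain over three interface states
states = ["menu", "lesson", "quiz"]
P = np.array([[0.2, 0.6, 0.2],    # transitions from "menu"
              [0.3, 0.4, 0.3],    # transitions from "lesson"
              [0.5, 0.3, 0.2]])   # transitions from "quiz"

# Stationary distribution: left eigenvector of P for eigenvalue 1, normalised
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()
# pi[i] = long-run fraction of navigation steps spent in states[i]
```

    States with unexpectedly high stationary mass (users stuck circling a screen) are the usability red flags such a model surfaces.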

  10. Evaluation of help model replacement codes

    SciTech Connect

    Whiteside, Tad; Hang, Thong; Flach, Gregory

    2009-07-01

    This work evaluates the computer codes that are proposed to be used to predict percolation of water through the closure-cap and into the waste containment zone at the Department of Energy closure sites. This work compares the currently used water-balance code (HELP) with newly developed computer codes that use unsaturated flow (Richards’ equation). It provides a literature review of the HELP model and the proposed codes, which result in two recommended codes for further evaluation: HYDRUS-2D3D and VADOSE/W. This further evaluation involved performing actual simulations on a simple model and comparing the results of those simulations to those obtained with the HELP code and the field data. From the results of this work, we conclude that the new codes perform nearly the same, although moving forward, we recommend HYDRUS-2D3D.

  11. Evaluation of a Mysis bioenergetics model

    USGS Publications Warehouse

    Chipps, S.R.; Bennett, D.H.

    2002-01-01

    Direct approaches for estimating the feeding rate of the opossum shrimp Mysis relicta can be hampered by variable gut residence time (evacuation rate models) and non-linear functional responses (clearance rate models). Bioenergetics modeling provides an alternative method, but the reliability of this approach needs to be evaluated using independent measures of growth and food consumption. In this study, we measured growth and food consumption for M. relicta and compared experimental results with those predicted from a Mysis bioenergetics model. For Mysis reared at 10 °C, model predictions were not significantly different from observed values. Moreover, decomposition of mean square error indicated that 70% of the variation between model predictions and observed values was attributable to random error. On average, model predictions were within 12% of observed values. A sensitivity analysis revealed that Mysis respiration and prey energy density were the most sensitive parameters affecting model output. By accounting for uncertainty (95% CLs) in Mysis respiration, we observed a significant improvement in the accuracy of model output (within 5% of observed values), illustrating the importance of sensitive input parameters for model performance. These findings help corroborate the Mysis bioenergetics model and demonstrate the usefulness of this approach for estimating Mysis feeding rate.
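
    The decomposition of mean square error into systematic and random components mentioned above can be sketched as a Theil-style partition (the growth data below are hypothetical):

```python
import numpy as np

# Hypothetical observed vs. model-predicted Mysis growth increments (mg)
obs  = np.array([1.8, 2.4, 3.1, 2.9, 3.6, 4.2])
pred = np.array([2.0, 2.2, 3.4, 3.0, 3.3, 4.6])

mse = np.mean((pred - obs) ** 2)

bias2  = (pred.mean() - obs.mean()) ** 2               # systematic offset
sdiff2 = (pred.std() - obs.std()) ** 2                 # amplitude mismatch
r = np.corrcoef(pred, obs)[0, 1]
random_part = 2.0 * pred.std() * obs.std() * (1.0 - r)  # random (unexplained) error

# The three components reconstruct the MSE exactly; a large random share,
# as in the abstract's 70%, means errors are mostly noise, not structure
```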

  12. Evaluation (not validation) of quantitative models.

    PubMed

    Oreskes, N

    1998-12-01

    The present regulatory climate has led to increasing demands for scientists to attest to the predictive reliability of numerical simulation models used to help set public policy, a process frequently referred to as model validation. But while model validation may reveal useful information, this paper argues that it is not possible to demonstrate the predictive reliability of any model of a complex natural system in advance of its actual use. All models embed uncertainties, and these uncertainties can and frequently do undermine predictive reliability. In the case of lead in the environment, we may categorize model uncertainties as theoretical, empirical, parametrical, and temporal. Theoretical uncertainties are aspects of the system that are not fully understood, such as the biokinetic pathways of lead metabolism. Empirical uncertainties are aspects of the system that are difficult (or impossible) to measure, such as actual lead ingestion by an individual child. Parametrical uncertainties arise when complexities in the system are simplified to provide manageable model input, such as representing longitudinal lead exposure by cross-sectional measurements. Temporal uncertainties arise from the assumption that systems are stable in time. A model may also be conceptually flawed. The Ptolemaic system of astronomy is a historical example of a model that was empirically adequate but based on a wrong conceptualization. Yet had it been computerized--and had the word then existed--its users would have had every right to call it validated. Thus, rather than talking about strategies for validation, we should be talking about means of evaluation. That is not to say that language alone will solve our problems or that the problems of model evaluation are primarily linguistic. The uncertainties inherent in large, complex models will not go away simply because we change the way we talk about them. But this is precisely the point: calling a model validated does not make it valid

  13. Evaluation (not validation) of quantitative models.

    PubMed Central

    Oreskes, N

    1998-01-01

    The present regulatory climate has led to increasing demands for scientists to attest to the predictive reliability of numerical simulation models used to help set public policy, a process frequently referred to as model validation. But while model validation may reveal useful information, this paper argues that it is not possible to demonstrate the predictive reliability of any model of a complex natural system in advance of its actual use. All models embed uncertainties, and these uncertainties can and frequently do undermine predictive reliability. In the case of lead in the environment, we may categorize model uncertainties as theoretical, empirical, parametrical, and temporal. Theoretical uncertainties are aspects of the system that are not fully understood, such as the biokinetic pathways of lead metabolism. Empirical uncertainties are aspects of the system that are difficult (or impossible) to measure, such as actual lead ingestion by an individual child. Parametrical uncertainties arise when complexities in the system are simplified to provide manageable model input, such as representing longitudinal lead exposure by cross-sectional measurements. Temporal uncertainties arise from the assumption that systems are stable in time. A model may also be conceptually flawed. The Ptolemaic system of astronomy is a historical example of a model that was empirically adequate but based on a wrong conceptualization. Yet had it been computerized--and had the word then existed--its users would have had every right to call it validated. Thus, rather than talking about strategies for validation, we should be talking about means of evaluation. That is not to say that language alone will solve our problems or that the problems of model evaluation are primarily linguistic. The uncertainties inherent in large, complex models will not go away simply because we change the way we talk about them. But this is precisely the point: calling a model validated does not make it valid

  14. A model evaluation checklist for process-based environmental models

    NASA Astrophysics Data System (ADS)

    Jackson-Blake, Leah

    2015-04-01

    Mechanistic catchment-scale phosphorus models appear to perform poorly where diffuse sources dominate. The reasons for this were investigated for one commonly-applied model, the INtegrated model of CAtchment Phosphorus (INCA-P). Model output was compared to 18 months of daily water quality monitoring data in a small agricultural catchment in Scotland, and model structure, key model processes and internal model responses were examined. Although the model broadly reproduced dissolved phosphorus dynamics, it struggled with particulates. The reasons for poor performance were explored, together with ways in which improvements could be made. The process of critiquing and assessing model performance was then generalised to provide a broadly-applicable model evaluation checklist, incorporating: (1) Calibration challenges, relating to difficulties in thoroughly searching a high-dimensional parameter space and in selecting appropriate means of evaluating model performance. In this study, for example, model simplification was identified as a necessary improvement to reduce the number of parameters requiring calibration, whilst the traditionally-used Nash Sutcliffe model performance statistic was not able to discriminate between realistic and unrealistic model simulations, and alternative statistics were needed. (2) Data limitations, relating to a lack of (or uncertainty in) input data, data to constrain model parameters, data for model calibration and testing, and data to test internal model processes. In this study, model reliability could be improved by addressing all four kinds of data limitation. For example, there was insufficient surface water monitoring data for model testing against an independent dataset to that used in calibration, whilst additional monitoring of groundwater and effluent phosphorus inputs would help distinguish between alternative plausible model parameterisations. 
(3) Model structural inadequacies, whereby model structure may inadequately represent

  15. Evaluating model accuracy for model-based reasoning

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Roden, Joseph

    1992-01-01

    Described here is an approach to automatically assessing the accuracy of various components of a model. In this approach, actual data from the operation of a target system is used to drive statistical measures to evaluate the prediction accuracy of various portions of the model. We describe how these statistical measures of model accuracy can be used in model-based reasoning for monitoring and design. We then describe the application of these techniques to the monitoring and design of the water recovery system of the Environmental Control and Life Support System (ECLSS) of Space Station Freedom.

  16. AERMOD: Model formulation and evaluation results

    SciTech Connect

    Paine, R.J.; Lee, R.; Brode, R.; Wilson, R.; Cimorelli, A.; Perry, S.G.; Weil, J.; Venkatram, A.; Peters, W.

    1999-07-01

    AERMOD is an advanced plume model that incorporates updated treatments of the boundary layer theory, understanding of turbulence and dispersion, and includes handling of terrain interactions. This paper presents an overview of AERMOD's features relative to ISCST3. AERMOD has been evaluated on 10 databases, which include flat and hilly terrain areas, urban and rural sites, and a mixture of tracer experiments as well as routine monitoring networks with a limited number of fixed monitoring sites. This paper presents a summary of the evaluation results of AERMOD with these diverse databases.

  17. AERMOD: Model formulation and evaluation results

    SciTech Connect

    Paine, R.; Lee, R.; Brode, R.; Wilson, R.; Cimorelli, A.

    1999-07-01

    AERMOD is an advanced plume model that incorporates updated treatments of boundary layer theory and the understanding of turbulence and dispersion, and includes handling of terrain interactions. This paper presents an overview of AERMOD's features relative to ISCST3. AERMOD has been evaluated on 10 databases, which include flat and hilly terrain areas, urban and rural sites, and a mixture of tracer experiments as well as routine monitoring networks with a limited number of fixed monitoring sites. This paper presents a summary of the evaluation results of AERMOD with these diverse databases.

  18. Evaluating spatial patterns in hydrological modeling

    NASA Astrophysics Data System (ADS)

    Koch, Julian; Stisen, Simon; Høgh Jensen, Karsten

    2014-05-01

    Recent advances in hydrological modeling towards fully distributed grid-based model codes, increased availability of spatially distributed data (remote sensing and intensive field studies) and more computational power allow a shift towards spatial model evaluation, away from the traditional aggregated evaluation. The consideration of spatially aggregated observations, in the form of river discharge, in the evaluation process does not ensure a correct simulation of catchment-inherent distributed variables. The integration of spatial data and hydrological models is limited by a lack of suitable metrics to evaluate the similarity of spatial patterns. This study is concerned with the development of a novel set of performance metrics that capture spatial patterns and go beyond global statistics. The metrics are required to be simple, flexible and especially targeted at comparing observed and simulated spatial patterns of hydrological variables. Four quantitative methodologies for comparing spatial patterns are brought forward: (1) A fuzzy set approach that incorporates both fuzziness of location and fuzziness of category. (2) The kappa statistic, which expresses the similarity between two maps based on a contingency table (error matrix). (3) An extended version of (2) that considers both fuzziness in location and fuzziness in category. (4) Increasing the information content of a single cell by aggregating neighborhood cells at different window sizes, then computing the mean and standard deviation. The identified metrics are tested on observed and simulated land surface temperature maps in a groundwater-dominated catchment in western Denmark. The observed data originate from the MODIS satellite, and MIKE SHE, a coupled and fully distributed hydrological model, serves as the modelling tool. Synthetic land surface temperature maps are generated to further address strengths and weaknesses of the metrics. The metrics are tested in different parameter optimizing frameworks, where they are
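
    Methodology (2), the kappa statistic from a contingency table, reduces to a few lines; the table below is a hypothetical cross-tabulation of categorised observed versus simulated maps over the same grid cells:

```python
import numpy as np

# Hypothetical contingency table: categorised observed (rows) vs. simulated
# (cols) land-surface-temperature classes, counts of grid cells
table = np.array([[30,  5,  1],
                  [ 4, 25,  6],
                  [ 2,  7, 20]])

n = table.sum()
po = np.trace(table) / n                                    # observed agreement
pe = (table.sum(axis=0) * table.sum(axis=1)).sum() / n**2   # chance agreement
kappa = (po - pe) / (1.0 - pe)   # 1 = perfect agreement, 0 = chance level
```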

  19. Automated Expert Modeling and Student Evaluation

    SciTech Connect

    2012-09-12

    AEMASE searches a database of recorded events for combinations of events that are of interest. It compares matching combinations to a statistical model to determine similarity to previous events of interest and alerts the user as new matching examples are found. AEMASE is currently used by weapons tactics instructors to find situations of interest in recorded tactical training scenarios. AEMASE builds on a sub-component, the Relational Blackboard (RBB), which is being released as open-source software. AEMASE builds on RBB adding interactive expert model construction (automated knowledge capture) and re-evaluation of scenario data.

  20. Automated Expert Modeling and Student Evaluation

    Energy Science and Technology Software Center (ESTSC)

    2012-09-12

    AEMASE searches a database of recorded events for combinations of events that are of interest. It compares matching combinations to a statistical model to determine similarity to previous events of interest and alerts the user as new matching examples are found. AEMASE is currently used by weapons tactics instructors to find situations of interest in recorded tactical training scenarios. AEMASE builds on a sub-component, the Relational Blackboard (RBB), which is being released as open-source software. AEMASE builds on RBB adding interactive expert model construction (automated knowledge capture) and re-evaluation of scenario data.

  1. CTBT integrated verification system evaluation model supplement

    SciTech Connect

    EDENBURN,MICHAEL W.; BUNTING,MARCUS; PAYNE JR.,ARTHUR C.; TROST,LAWRENCE C.

    2000-03-02

    Sandia National Laboratories has developed a computer based model called IVSEM (Integrated Verification System Evaluation Model) to estimate the performance of a nuclear detonation monitoring system. The IVSEM project was initiated in June 1994, by Sandia's Monitoring Systems and Technology Center and has been funded by the U.S. Department of Energy's Office of Nonproliferation and National Security (DOE/NN). IVSEM is a simple, "top-level" modeling tool which estimates the performance of a Comprehensive Nuclear Test Ban Treaty (CTBT) monitoring system and can help explore the impact of various sensor system concepts and technology advancements on CTBT monitoring. One of IVSEM's unique features is that it integrates results from the various CTBT sensor technologies (seismic, infrasound, radionuclide, and hydroacoustic) and allows the user to investigate synergy among the technologies. Specifically, IVSEM estimates the detection effectiveness (probability of detection), location accuracy, and identification capability of the integrated system and of each technology subsystem individually. The model attempts to accurately estimate the monitoring system's performance at medium interfaces (air-land, air-water) and for some evasive testing methods such as seismic decoupling. The original IVSEM report, CTBT Integrated Verification System Evaluation Model, SAND97-2518, described version 1.2 of IVSEM. This report describes the changes made to IVSEM version 1.2 and the addition of identification capability estimates that have been incorporated into IVSEM version 2.0.

  2. Programs Help Create And Evaluate Markov Models

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Boerschlein, David P.

    1993-01-01

    The Pade Approximation With Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) computer programs provide a flexible, user-friendly, language-based interface for the creation and evaluation of Markov models describing the behavior of fault-tolerant reconfigurable computer systems. They produce exact solutions for the probabilities of system failure and provide conservative estimates of the number of significant digits in those solutions. They are also offered as part of a bundled package with SURE and ASSIST, two other reliability-analysis programs developed by the Systems Validation Methods group at Langley Research Center.

  3. Radiation model for row crops: II. Model evaluation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Relatively few radiation transfer studies have considered the impact of varying vegetation cover that typifies row crops, and methods to account for partial row crop cover have not been well investigated. Our objective was to evaluate a widely used radiation model that was modified for row crops ha...

  4. Using Critical Incidents To Model Effective Evaluation Practice in the Teaching of Evaluation.

    ERIC Educational Resources Information Center

    Preskill, Hallie

    1997-01-01

    Discusses the importance of modeling effective evaluation practice as teachers teach about evaluation. Using the critical incidents evaluation tool and process, students in graduate evaluation courses were asked to reflect on their learning, with formative evaluation modeled throughout the course as a way to teach about evaluation practice. (SLD)

  5. Evaluation of a mallard productivity model

    USGS Publications Warehouse

    Johnson, D.H.; Cowardin, L.M.; Sparling, D.W.

    1986-01-01

    A stochastic model of mallard (Anas platyrhynchos) productivity has been developed over a 10-year period and successfully applied to several management questions. Here we review the model and describe some recent uses and improvements that increase its realism and applicability, including naturally occurring changes in wetland habitat, catastrophic weather events, and the migrational homing of mallards. The amount of wetland habitat influenced productivity primarily by affecting the renesting rate. Late snowstorms severely reduced productivity, whereas the loss of nests due to flooding was largely compensated for by increased renesting, often in habitats where hatching rates were better. Migrational homing was shown to be an important phenomenon in population modeling and should be considered when evaluating management plans.

  6. User's appraisal of yield model evaluation criteria

    NASA Technical Reports Server (NTRS)

    Warren, F. B. (Principal Investigator)

    1982-01-01

    The five major potential USDA users of AgRISTARS crop yield forecast models rated the Yield Model Development (YMD) project Test and Evaluation Criteria by the importance placed on them. These users agreed that the "TIMELINESS" and "RELIABILITY" of the forecast yields would be of major importance in determining whether a proposed yield model was worthy of adoption. Although there was considerable difference of opinion as to the relative importance of the other criteria, "COST", "OBJECTIVITY", "ADEQUACY", and "MEASURES OF ACCURACY" generally were felt to be more important than "SIMPLICITY" and "CONSISTENCY WITH SCIENTIFIC KNOWLEDGE". However, some of the comments that accompanied the ratings did indicate that several of the definitions and descriptions of the criteria were confusing.

  7. A Formulation of the Interactive Evaluation Model

    PubMed Central

    Walsh, Peter J.; Awad-Edwards, Roger; Engelhardt, K. G.; Perkash, Inder

    1985-01-01

    The development of highly technical devices for specialized users requires continual feedback from potential users to the project team designing the device to assure that a useful product will result. This necessity for user input is the basis for the Interactive Evaluation Model, which has been applied to complex computer assisted robotic aids for individuals with disabilities and has wide application to the development of a variety of technical devices. We present a preliminary mathematical formulation of the Interactive Evaluation Model which maximizes the rate of growth toward success, at a constant cost rate, of the efforts of a team having the diverse expertise needed to produce a complex technical product. Close interaction is simulated by a growth rate which is a multiplicative product involving the number of participants within a given class of necessary expertise, and evaluation is included by demanding that users form one of the necessary classes. In the multipliers, the number of class participants is raised to a power termed the class weight exponent. In the simplest case, the optimum participant number varies as the ratio of the class weight exponent to the average class cost. An illustrative example, based on our experience with medical care assistive aids, shows the dramatic cost reduction possible with users on the team.
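
    The closed-form result quoted above (optimum participant number proportional to the class weight exponent divided by the class cost) can be checked numerically under a multiplicative, Cobb-Douglas-style growth rate; the weights, costs and budget below are hypothetical:

```python
from itertools import product

# Hypothetical classes of expertise: weight exponents and per-participant costs
weights = [0.5, 0.3, 0.2]      # e.g. engineers, clinicians, users
costs   = [4.0, 2.0, 1.0]
budget  = 40.0

def growth(n):
    # Multiplicative growth rate: product over classes of n_i ** w_i
    g = 1.0
    for ni, wi in zip(n, weights):
        g *= ni ** wi
    return g

# Closed form at fixed total cost: n_i = (w_i / sum(w)) * budget / c_i
closed = [wi / sum(weights) * budget / ci for wi, ci in zip(weights, costs)]

# Brute-force check over a grid of allocations with exactly the same cost
grid = [0.5 * x for x in range(1, 40)]
best, best_g = None, -1.0
for n in product(grid, repeat=3):
    if sum(ni * ci for ni, ci in zip(n, costs)) == budget:
        g = growth(n)
        if g > best_g:
            best, best_g = n, g
```

    The grid optimum coincides with the closed form, i.e. each class size is proportional to its weight exponent over its cost, matching the abstract's "simplest case".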

  8. Hazardous gas model evaluation with field observations

    NASA Astrophysics Data System (ADS)

    Hanna, S. R.; Chang, J. C.; Strimaitis, D. G.

    Fifteen hazardous gas models were evaluated using data from eight field experiments. The models include seven publicly available models (AFTOX, DEGADIS, HEGADAS, HGSYSTEM, INPUFF, OB/DG and SLAB), six proprietary models (AIRTOX, CHARM, FOCUS, GASTAR, PHAST and TRACE), and two "benchmark" analytical models (the Gaussian Plume Model and the analytical approximations to the Britter and McQuaid Workbook nomograms). The field data were divided into three groups—continuous dense gas releases (Burro LNG, Coyote LNG, Desert Tortoise NH 3-gas and aerosols, Goldfish HF-gas and aerosols, and Maplin Sands LNG), continuous passive gas releases (Prairie Grass and Hanford), and instantaneous dense gas releases (Thorney Island freon). The dense gas models that produced the most consistent predictions of plume centerline concentrations across the dense gas data sets are the Britter and McQuaid, CHARM, GASTAR, HEGADAS, HGSYSTEM, PHAST, SLAB and TRACE models, with relative mean biases of about ±30% or less and magnitudes of relative scatter that are about equal to the mean. The dense gas models tended to overpredict the plume widths and underpredict the plume depths by about a factor of two. All models except GASTAR, TRACE, and the area source version of DEGADIS perform fairly well with the continuous passive gas data sets. Some sensitivity studies were also carried out. It was found that three of the more widely used publicly-available dense gas models (DEGADIS, HGSYSTEM and SLAB) predicted increases in concentration of about 70% as roughness length decreased by an order of magnitude for the Desert Tortoise and Goldfish field studies. It was also found that none of the dense gas models that were considered came close to simulating the observed factor of two increase in peak concentrations as averaging time decreased from several minutes to 1 s. Because of their assumption that a concentrated dense gas core existed that was unaffected by variations in averaging time, the dense gas

  9. The natural emissions model (NEMO): Description, application and model evaluation

    NASA Astrophysics Data System (ADS)

    Liora, Natalia; Markakis, Konstantinos; Poupkou, Anastasia; Giannaros, Theodore M.; Melas, Dimitrios

    2015-12-01

    The aim of this study is the application and evaluation of a new computer model used for the quantification of emissions coming from natural sources. The Natural Emissions Model (NEMO) is driven by the meteorological data of the mesoscale numerical Weather Research and Forecasting (WRF) model, and it estimates particulate matter (PM) emissions from windblown dust, sea salt aerosols (SSA) and primary biological aerosol particles (PBAPs). It also includes emissions of Biogenic Volatile Organic Compounds (BVOCs) from vegetation; however, this study focuses only on particle emissions. An application and evaluation of NEMO at the European scale are presented. NEMO and a modelling system consisting of the WRF model and the Comprehensive Air Quality Model with extensions (CAMx) were applied over a 30 km European domain for the year 2009. The computed domain-wide annual PM10 emissions from windblown dust, sea salt and PBAPs were 0.57 Tg, 20 Tg and 0.12 Tg, respectively. PM2.5 represented 6% and 33% of emitted windblown dust and sea salt, respectively. Natural emissions are characterized by high geographical and seasonal variations; windblown dust emissions were highest during summer in southern Europe, and SSA production was highest in the Atlantic Ocean during the cold season, while in the Mediterranean Sea the highest SSA emissions were found over the Aegean Sea during summer. Modelled concentrations were compared with surface station measurements and showed that the model captured fairly well the contribution of the natural sources to PM levels over Europe. Dust concentrations correlated better when dust transport events from the Sahara desert were absent, while the simulation of sea salt episodes led to an improvement of model performance during the cold season.

  10. Evaluating face trustworthiness: a model based approach

    PubMed Central

    Baron, Sean G.; Oosterhof, Nikolaas N.

    2008-01-01

    Judgments of trustworthiness from faces determine basic approach/avoidance responses and approximate the valence evaluation of faces that runs across multiple person judgments. Here, based on trustworthiness judgments and using a computer model for face representation, we built a model for representing face trustworthiness (study 1). Using this model, we generated novel faces with an increased range of trustworthiness and used these faces as stimuli in a functional Magnetic Resonance Imaging study (study 2). Although participants did not engage in explicit evaluation of the faces, the amygdala response changed as a function of face trustworthiness. An area in the right amygdala showed a negative linear response: as the untrustworthiness of faces increased, so did the amygdala response. Areas in the left and right putamen, the latter extending into the anterior insula, showed a similar negative linear response. The response in the left amygdala was quadratic: strongest for faces at both extremes of the trustworthiness dimension. The medial prefrontal cortex and precuneus also showed a quadratic response, but their response was strongest to faces in the middle range of the trustworthiness dimension. PMID:19015102
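
    The contrast between the linear and quadratic response profiles reported above can be illustrated with a small numerical sketch (all values are invented, not the study's data), correlating hypothetical regional responses with linear and quadratic regressors along the trustworthiness dimension:

```python
# Hypothetical illustration: a negative linear profile (right amygdala) and a
# U-shaped quadratic profile (left amygdala) along a trustworthiness axis.
trust = [-3, -2, -1, 0, 1, 2, 3]        # face positions on the dimension
right_amygdala = [6, 5, 4, 3, 2, 1, 0]  # response rises as untrustworthiness rises
left_amygdala = [9, 4, 1, 0, 1, 4, 9]   # response strongest at both extremes

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

linear_reg = trust
quad_reg = [t * t for t in trust]
# The linear regressor captures the right-amygdala profile (negative),
# while the quadratic regressor captures the left-amygdala profile (positive).
print(round(pearson(linear_reg, right_amygdala), 2),
      round(pearson(quad_reg, left_amygdala), 2))  # → -1.0 1.0
```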

  11. Evaluating conflation methods using uncertainty modeling

    NASA Astrophysics Data System (ADS)

    Doucette, Peter; Dolloff, John; Canavosio-Zuzelski, Roberto; Lenihan, Michael; Motsko, Dennis

    2013-05-01

    The classic problem of computer-assisted conflation involves the matching of individual features (e.g., point, polyline, or polygon vectors) as stored in a geographic information system (GIS), between two different sets (layers) of features. The classical goal of conflation is the transfer of feature metadata (attributes) from one layer to another. The age of free public and open source geospatial feature data has significantly increased the opportunity to conflate such data to create enhanced products. There are currently several spatial conflation tools in the marketplace with varying degrees of automation. An ability to evaluate conflation tool performance quantitatively is of operational value, although manual truthing of matched features is laborious and costly. In this paper, we present a novel methodology that uses spatial uncertainty modeling to simulate realistic feature layers to streamline evaluation of feature matching performance for conflation methods. Performance results are compiled for DCGIS street centerline features.
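
    The simulation idea described above can be sketched in miniature (a toy stand-in, not the paper's methodology): generate a "truth" point layer, perturb it with zero-mean positional noise of varying magnitude, and score a naive nearest-neighbour matcher against the known correspondence:

```python
import random, math

# Toy sketch: simulate a truth layer, perturb it, and score feature matching.
random.seed(0)
truth = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(50)]

def perturb(layer, sigma):
    """Add zero-mean Gaussian positional noise to every feature."""
    return [(x + random.gauss(0, sigma), y + random.gauss(0, sigma)) for x, y in layer]

def match_rate(layer_a, layer_b, tolerance=2.0):
    """Fraction of features whose nearest neighbour is the true counterpart."""
    hits = 0
    for i, (ax, ay) in enumerate(layer_a):
        j = min(range(len(layer_b)),
                key=lambda k: math.hypot(ax - layer_b[k][0], ay - layer_b[k][1]))
        if j == i and math.hypot(ax - layer_b[j][0], ay - layer_b[j][1]) <= tolerance:
            hits += 1
    return hits / len(layer_a)

# Matching performance degrades as simulated positional uncertainty grows.
print(match_rate(truth, perturb(truth, 0.5)), match_rate(truth, perturb(truth, 5.0)))
```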

  12. Data assimilation and model evaluation experiment datasets

    NASA Technical Reports Server (NTRS)

    Lai, Chung-Cheng A.; Qian, Wen; Glenn, Scott M.

    1994-01-01

    The Institute for Naval Oceanography, in cooperation with Naval Research Laboratories and universities, executed the Data Assimilation and Model Evaluation Experiment (DAMEE) for the Gulf Stream region during fiscal years 1991-1993. Enormous effort has gone into the preparation of several high-quality and consistent datasets for model initialization and verification. This paper describes the preparation process, the temporal and spatial scopes, the contents, the structure, etc., of these datasets. The goal of DAMEE and the need for data in the four phases of the experiment are briefly stated. The preparation of DAMEE datasets consisted of a series of processes: (1) collection of observational data; (2) analysis and interpretation; (3) interpolation using the Optimum Thermal Interpolation System package; (4) quality control and re-analysis; and (5) data archiving and software documentation. The data products from these processes included a time series of 3D fields of temperature and salinity, 2D fields of surface dynamic height and mixed-layer depth, analysis of the Gulf Stream and rings system, and bathythermograph profiles. To date, these are the most detailed and high-quality data for mesoscale ocean modeling, data assimilation, and forecasting research. Feedback from ocean modeling groups who tested this data was incorporated into its refinement. Suggestions for DAMEE data usage include (1) ocean modeling and data assimilation studies, (2) diagnosis and theoretical studies, and (3) comparisons with locally detailed observations.

  13. Modelling approaches for evaluating multiscale tendon mechanics.

    PubMed

    Fang, Fei; Lake, Spencer P

    2016-02-01

    Tendon exhibits anisotropic, inhomogeneous and viscoelastic mechanical properties that are determined by its complicated hierarchical structure and varying amounts/organization of different tissue constituents. Although extensive research has been conducted to use modelling approaches to interpret tendon structure-function relationships in combination with experimental data, many issues remain unclear (i.e. the role of minor components such as decorin, aggrecan and elastin), and the integration of mechanical analysis across different length scales has not been well applied to explore stress or strain transfer from macro- to microscale. This review outlines mathematical and computational models that have been used to understand tendon mechanics at different scales of the hierarchical organization. Model representations at the molecular, fibril and tissue levels are discussed, including formulations that follow phenomenological and microstructural approaches (which include evaluations of crimp, helical structure and the interaction between collagen fibrils and proteoglycans). Multiscale modelling approaches incorporating tendon features are suggested to be an advantageous methodology to understand further the physiological mechanical response of tendon and corresponding adaptation of properties owing to unique in vivo loading environments. PMID:26855747

  14. CTBT Integrated Verification System Evaluation Model

    SciTech Connect

    Edenburn, M.W.; Bunting, M.L.; Payne, A.C. Jr.

    1997-10-01

    Sandia National Laboratories has developed a computer-based model called IVSEM (Integrated Verification System Evaluation Model) to estimate the performance of a nuclear detonation monitoring system. The IVSEM project was initiated in June 1994 by Sandia's Monitoring Systems and Technology Center and has been funded by the US Department of Energy's Office of Nonproliferation and National Security (DOE/NN). IVSEM is a simple, top-level modeling tool which estimates the performance of a Comprehensive Nuclear Test Ban Treaty (CTBT) monitoring system and can help explore the impact of various sensor system concepts and technology advancements on CTBT monitoring. One of IVSEM's unique features is that it integrates results from the various CTBT sensor technologies (seismic, infrasound, radionuclide, and hydroacoustic) and allows the user to investigate synergy among the technologies. Specifically, IVSEM estimates the detection effectiveness (probability of detection) and location accuracy of the integrated system and of each technology subsystem individually. The model attempts to accurately estimate the monitoring system's performance at medium interfaces (air-land, air-water) and for some evasive testing methods such as seismic decoupling. This report describes version 1.2 of IVSEM.
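
    The notion of integrating detection across technologies can be sketched as follows. This is a simplified illustration under an independence assumption, with invented probabilities; IVSEM's actual algorithms are more detailed:

```python
# If each sensor technology detects an event independently with its own
# probability, the integrated system's detection probability is one minus
# the chance that every technology misses.

def integrated_detection(p_detect):
    """P(at least one technology detects), assuming independence."""
    p_all_miss = 1.0
    for p in p_detect:
        p_all_miss *= (1.0 - p)
    return 1.0 - p_all_miss

# seismic, infrasound, radionuclide, hydroacoustic (illustrative values)
print(round(integrated_detection([0.8, 0.5, 0.4, 0.3]), 3))  # → 0.958
```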

  15. An evaluation framework for participatory modelling

    NASA Astrophysics Data System (ADS)

    Krueger, T.; Inman, A.; Chilvers, J.

    2012-04-01

    Strong arguments for participatory modelling in hydrology can be made on substantive, instrumental and normative grounds. These arguments have led to increasingly diverse groups of stakeholders (here anyone affecting or affected by an issue) getting involved in hydrological research and the management of water resources. In fact, participation has become a requirement of many research grants, programs, plans and policies. However, evidence of the beneficial outcomes of participation suggested by these arguments is difficult to generate and therefore rare. This is because outcomes are diverse, distributed, often tacit, and take time to emerge. In this paper we develop an evaluation framework for participatory modelling focussed on learning outcomes. Learning encompasses many of the potential benefits of participation, such as better models through diversity of knowledge and scrutiny, stakeholder empowerment, greater trust in models and ownership of subsequent decisions, individual moral development, reflexivity, relationships, social capital, institutional change, resilience and sustainability. Based on the theories of experiential, transformative and social learning, complemented by practitioner experience, our framework examines if, when and how learning has occurred. Special emphasis is placed on the role of models as learning catalysts. We map the distribution of learning between stakeholders, scientists (as a subgroup of stakeholders) and models, and we analyse what type of learning has occurred: instrumental learning (broadly cognitive enhancement) and/or communicative learning (change in interpreting meanings, intentions and values associated with actions and activities; group dynamics).
We demonstrate how our framework can be translated into a questionnaire-based survey conducted with stakeholders and scientists at key stages of the participatory process, and show preliminary insights from applying the framework within a rural pollution management situation in

  16. Performance Evaluation Modeling of Network Sensors

    NASA Technical Reports Server (NTRS)

    Clare, Loren P.; Jennings, Esther H.; Gao, Jay L.

    2003-01-01

    Substantial benefits are promised by operating many spatially separated sensors collectively. Such systems are envisioned to consist of sensor nodes that are connected by a communications network. A simulation tool is being developed to evaluate the performance of networked sensor systems, incorporating such metrics as target detection probabilities, false alarm rates, and classification confusion probabilities. The tool will be used to determine configuration impacts associated with such aspects as spatial laydown, mixture of different types of sensors (acoustic, seismic, imaging, magnetic, RF, etc.), and fusion architecture. The QualNet discrete-event simulation environment serves as the underlying basis for model development and execution. This platform is recognized for its capabilities in efficiently simulating networking among mobile entities that communicate via wireless media. We are extending QualNet's communications modeling constructs to capture the sensing aspects of multi-target sensing (analogous to multiple access communications), unimodal multi-sensing (broadcast), and multi-modal sensing (multiple channels and correlated transmissions). Methods are also being developed for modeling the sensor signal sources (transmitters), signal propagation through the media, and sensors (receivers) that are consistent with the discrete event paradigm needed for performance determination of sensor network systems. This work is supported under the Microsensors Technical Area of the Army Research Laboratory (ARL) Advanced Sensors Collaborative Technology Alliance.

  17. An Integrated Model of Training Evaluation and Effectiveness

    ERIC Educational Resources Information Center

    Alvarez, Kaye; Salas, Eduardo; Garofano, Christina M.

    2004-01-01

    A decade of training evaluation and training effectiveness research was reviewed to construct an integrated model of training evaluation and effectiveness. This model integrates four prior evaluation models and results of 10 years of training effectiveness research. It is the first to be constructed using a set of strict criteria and to…

  18. 10 CFR Appendix K to Part 50 - ECCS Evaluation Models

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    10 CFR Part 50, Appendix K (Energy; Nuclear Regulatory Commission; Domestic Licensing of Production and Utilization Facilities): ECCS Evaluation Models. I. Required and Acceptable Features of Evaluation Models. II. Required Documentation.

  19. Evaluation of Mesoscale Model Phenomenological Verification Techniques

    NASA Technical Reports Server (NTRS)

    Lambert, Winifred

    2006-01-01

    Forecasters at the Spaceflight Meteorology Group, 45th Weather Squadron, and National Weather Service in Melbourne, FL use mesoscale numerical weather prediction model output in creating their operational forecasts. These models aid in forecasting weather phenomena that could compromise the safety of launch, landing, and daily ground operations, and they must produce reasonable weather forecasts in order for their output to be useful in operations. Considering the importance of model forecasts to operations, their accuracy in forecasting critical weather phenomena must be verified to determine their usefulness. The currently used traditional verification techniques involve an objective point-by-point comparison of model output and observations valid at the same time and location. The resulting statistics can unfairly penalize high-resolution models that make realistic forecasts of certain phenomena but are offset from the observations by small time and/or space increments. Manual subjective verification can provide a more valid representation of model performance, but is time-consuming and prone to personal biases. An objective technique that verifies specific meteorological phenomena, much in the way a human would in a subjective evaluation, would likely produce a more realistic assessment of model performance. Such techniques are being developed in the research community. The Applied Meteorology Unit (AMU) was tasked to conduct a literature search to identify phenomenological verification techniques being developed, determine if any are ready for operational use, and outline the steps needed to implement any operationally ready techniques into the Advanced Weather Information Processing System (AWIPS). The AMU conducted a search of all literature on the topic of phenomenologically based mesoscale model verification techniques and found 10 different techniques in various stages of development.
Six of the techniques were developed to verify precipitation forecasts, one
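
    The penalty that point-by-point verification imposes on offset forecasts, as described above, can be demonstrated with a toy example (values invented): a forecast that reproduces an observed feature but shifts it by one grid cell scores worse than a flat forecast with no feature at all:

```python
# Toy illustration of the "double penalty" in point-by-point verification.
observed = [0, 0, 5, 0, 0, 0]   # observed rain peak at position 2
shifted  = [0, 0, 0, 5, 0, 0]   # realistic forecast, peak offset by one cell
flat     = [1, 1, 1, 1, 1, 1]   # smooth forecast with no feature at all

def rmse(a, b):
    """Root-mean-square error between two equal-length series."""
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

# The realistic-but-offset forecast scores worse than the featureless one.
print(rmse(observed, shifted) > rmse(observed, flat))  # → True
```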

  20. Treatment modalities and evaluation models for periodontitis

    PubMed Central

    Tariq, Mohammad; Iqbal, Zeenat; Ali, Javed; Baboota, Sanjula; Talegaonkar, Sushama; Ahmad, Zulfiqar; Sahni, Jasjeet K

    2012-01-01

    Periodontitis is the most common localized dental inflammatory disease, associated with several pathological conditions such as inflammation of the gums (gingivitis), degeneration of the periodontal ligament and dental cementum, and alveolar bone loss. In this perspective, the various preventive and treatment modalities, including oral hygiene, gingival irrigation, mechanical instrumentation, full-mouth disinfection, host modulation and antimicrobial therapy, which are used either as adjunctive treatments or as stand-alone therapies in the non-surgical management of periodontal infections, are discussed. Intra-pocket sustained-release systems have emerged as a novel paradigm for future research. In this article, special consideration is given to different locally delivered antimicrobial and anti-inflammatory medications which are either commercially available or currently under consideration for Food and Drug Administration (FDA) approval. The various in vitro dissolution models and microbiological strains investigated to mimic the infected and inflamed periodontal cavity and to predict the in vivo performance of treatment modalities are also discussed. Animal models that have been employed to explore the pathology at different stages of periodontitis and to evaluate its treatment modalities are also reviewed. PMID:23373002

  1. A Multidisciplinary Model of Evaluation Capacity Building

    ERIC Educational Resources Information Center

    Preskill, Hallie; Boyle, Shanelle

    2008-01-01

    Evaluation capacity building (ECB) has become a hot topic of conversation, activity, and study within the evaluation field. Seeking to enhance stakeholders' understanding of evaluation concepts and practices, and in an effort to create evaluation cultures, organizations have been implementing a variety of strategies to help their members learn…

  2. The design and implementation of an operational model evaluation system

    SciTech Connect

    Foster, K.T.

    1995-06-01

    An evaluation of an atmospheric transport and diffusion model's operational performance typically involves comparing the model's calculations with measurements of an atmospheric pollutant's temporal and spatial distribution. These evaluations, however, often use data from a small number of experiments and may be limited to producing some of the commonly quoted statistics based on the differences between model calculations and measurements. This paper presents efforts to develop a model evaluation system geared toward both the objective statistical analysis and the more subjective visualization of the inter-relationships between a model's calculations and the appropriate field measurement data.
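
    Two of the statistics commonly quoted for such model-measurement comparisons are fractional bias (FB) and the fraction of predictions within a factor of two of the measurements (FAC2); a minimal sketch with invented data:

```python
# Illustrative evaluation statistics for a transport/diffusion model.
measured  = [1.0, 2.0, 4.0, 8.0]   # invented measured concentrations
predicted = [1.5, 1.8, 5.0, 4.5]   # invented model calculations

def fractional_bias(obs, pred):
    """FB = 2 * (mean_obs - mean_pred) / (mean_obs + mean_pred)."""
    mo = sum(obs) / len(obs)
    mp = sum(pred) / len(pred)
    return 2.0 * (mo - mp) / (mo + mp)

def fac2(obs, pred):
    """Fraction of predictions within a factor of two of the observations."""
    within = sum(1 for o, p in zip(obs, pred) if 0.5 <= p / o <= 2.0)
    return within / len(obs)

print(round(fractional_bias(measured, predicted), 3), fac2(measured, predicted))
```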

  3. A Hybrid Evaluation Model for Evaluating Online Professional Development

    ERIC Educational Resources Information Center

    Hahs-Vaughn, Debbie; Zygouris-Coe, Vicky; Fiedler, Rebecca

    2007-01-01

    Online professional development is multidimensional. It encompasses: a) an online, web-based format; b) professional development; and most likely c) specific objectives tailored to and created for the respective online professional development course. Evaluating online professional development is therefore also multidimensional and as such both…

  4. Evaluation of video quality models for multimedia

    NASA Astrophysics Data System (ADS)

    Brunnström, Kjell; Hands, David; Speranza, Filippo; Webster, Arthur

    2008-02-01

    The Video Quality Experts Group (VQEG) is a group of experts from industry, academia, government and standards organizations working in the field of video quality assessment. Over the last 10 years, VQEG has focused its efforts on the evaluation of objective video quality metrics for digital video. Objective video metrics are mathematical models that predict the picture quality as perceived by an average observer. VQEG has completed validation tests for full reference objective metrics for the Standard Definition Television (SDTV) format. From this testing, two ITU Recommendations were produced. This standardization effort is of great relevance to the video industries because objective metrics can be used for quality control of the video at various stages of the delivery chain. Currently, VQEG is undertaking several projects in parallel. The most mature project is concerned with objective measurement of multimedia content. This project is probably the largest coordinated video quality testing effort ever undertaken. It will involve the collection of a very large database of subjective quality data. About 40 subjective assessment experiments and more than 160,000 opinion scores will be collected. These will be used to validate the proposed objective metrics. This paper describes the test plan for the project, its current status, and one of the multimedia subjective tests.
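
    The validation step described above typically correlates each objective metric's output with the mean opinion scores (MOS) from the subjective tests; a minimal sketch with invented scores:

```python
# Invented data: subjective MOS and an objective metric's output for five clips.
mos    = [1.2, 2.5, 3.1, 3.9, 4.6]
metric = [20.0, 35.0, 42.0, 55.0, 70.0]

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# A useful objective metric correlates strongly with the subjective data.
print(round(pearson(metric, mos), 3))
```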

  5. Modelling and evaluating against the violent insider

    SciTech Connect

    Fortney, D.S.; Al-Ayat, R.A.; Saleh, R.A.

    1991-07-01

    The violent insider threat poses a special challenge to facilities protecting special nuclear material from theft or diversion. These insiders could potentially behave as nonviolent insiders to deceitfully defeat certain safeguards elements and use violence to forcefully defeat hardware or personnel. While several vulnerability assessment tools are available to deal with the nonviolent insider, very limited effort has been directed to developing analysis tools for the violent threat. In this paper, we present an approach using the results of a vulnerability assessment for nonviolent insiders to evaluate certain violent insider scenarios. Since existing tools do not explicitly consider violent insiders, the approach is intended for experienced safeguards analysts and relies on the analyst to brainstorm possible violent actions, to assign detection probabilities, and to ensure consistency. We then discuss our efforts in developing an automated tool for assessing the vulnerability against those violent insiders who are willing to use force against barriers, but who are unwilling to kill or be killed. Specifically, we discuss our efforts in developing databases for violent insiders penetrating barriers, algorithms for considering the entry of contraband, and modelling issues in considering the use of violence.

  7. An Evaluation Model for Innovative Individualized Programs.

    ERIC Educational Resources Information Center

    Weber, Margaret B.

    1977-01-01

    Program evaluation is a tri-level process: evaluation of the learners, evaluation of the program against its own objectives, and evaluation of the program as compared against a criterion program. Evaluation of innovative programs is primarily an issue of definition, and such programs should be judged in terms of the needs they were designed to satisfy. (Author/CTM)

  8. THE ATMOSPHERIC MODEL EVALUATION TOOL (AMET); AIR QUALITY MODULE

    EPA Science Inventory

    This presentation reviews the development of the Atmospheric Model Evaluation Tool (AMET) air quality module. The AMET tool is being developed to aid in the model evaluation. This presentation focuses on the air quality evaluation portion of AMET. Presented are examples of the...

  9. Evaluating a Training Using the "Four Levels Model"

    ERIC Educational Resources Information Center

    Steensma, Herman; Groeneveld, Karin

    2010-01-01

    Purpose: The aims of this study are: to present a training evaluation based on the "four levels model"; to demonstrate the value of experimental designs in evaluation studies; and to take a first step in the development of an evidence-based training program. Design/methodology/approach: The Kirkpatrick four levels model was used to evaluate the…

  10. Formative Evaluation: A Revised Descriptive Theory and a Prescriptive Model.

    ERIC Educational Resources Information Center

    Braden, Roberts A.

    The premise is advanced that a major weakness of the everyday generic instructional systems design model stems from a too modest traditional conception of the purpose and potential of formative evaluation. In the typical ISD (instructional systems design) model formative evaluation is shown not at all or as a single, product evaluation step. Yet…

  11. THE ATMOSPHERIC MODEL EVALUATION TOOL: METEOROLOGY MODULE

    EPA Science Inventory

    Air quality modeling is continuously expanding in sophistication and function. Currently, air quality models are being used for research, forecasting, regulatory related emission control strategies, and other applications. Results from air-quality model applications are closely ...

  12. Program evaluation models and related theories: AMEE guide no. 67.

    PubMed

    Frye, Ann W; Hemmer, Paul A

    2012-01-01

    This Guide reviews theories of science that have influenced the development of common educational evaluation models. Educators can be more confident when choosing an appropriate evaluation model if they first consider the model's theoretical basis against their program's complexity and their own evaluation needs. Reductionism, system theory, and (most recently) complexity theory have inspired the development of models commonly applied in evaluation studies today. This Guide describes experimental and quasi-experimental models, Kirkpatrick's four-level model, the Logic Model, and the CIPP (Context/Input/Process/Product) model in the context of the theories that influenced their development and that limit or support their ability to do what educators need. The goal of this Guide is for educators to become more competent and confident in being able to design educational program evaluations that support intentional program improvement while adequately documenting or describing the changes and outcomes, intended and unintended, associated with their programs. PMID:22515309

  13. Rhode Island Model Evaluation & Support System: Teacher. Edition III

    ERIC Educational Resources Information Center

    Rhode Island Department of Education, 2015

    2015-01-01

    Rhode Island educators believe that implementing a fair, accurate, and meaningful educator evaluation and support system will help improve teaching and learning. The primary purpose of the Rhode Island Model Teacher Evaluation and Support System (Rhode Island Model) is to help all teachers improve. Through the Model, the goal is to help create a…

  14. Global daily reference evapotranspiration modeling and evaluation

    USGS Publications Warehouse

    Senay, G.B.; Verdin, J.P.; Lietzow, R.; Melesse, Assefa M.

    2008-01-01

    Accurate and reliable evapotranspiration (ET) datasets are crucial in regional water and energy balance studies. Due to the complex instrumentation requirements, actual ET values are generally estimated from reference ET values by adjustment factors using coefficients for water stress and vegetation conditions, commonly referred to as crop coefficients. Until recently, the modeling of reference ET has been solely based on important weather variables collected from weather stations that are generally located in selected agro-climatic locations. Since 2001, the National Oceanic and Atmospheric Administration's Global Data Assimilation System (GDAS) has been producing six-hourly climate parameter datasets that are used to calculate daily reference ET for the whole globe at 1-degree spatial resolution. The U.S. Geological Survey Center for Earth Resources Observation and Science has been producing daily reference ET (ETo) since 2001, and it has been used on a variety of operational hydrological models for drought and streamflow monitoring all over the world. With the increasing availability of local station-based reference ET estimates, we evaluated the GDAS-based reference ET estimates using data from the California Irrigation Management Information System (CIMIS). Daily CIMIS reference ET estimates from 85 stations were compared with GDAS-based reference ET at different spatial and temporal scales using five-year daily data from 2002 through 2006. Despite the large difference in spatial scale (point vs. ~100 km grid cell) between the two datasets, the correlations between station-based ET and GDAS-ET were very high, exceeding 0.97 on a daily basis to more than 0.99 on time scales of more than 10 days. Both the temporal and spatial correspondences in trend/pattern and magnitudes between the two datasets were satisfactory, suggesting the reliability of using GDAS parameter-based reference ET for regional water and energy balance studies in many parts of the world.
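
    The rise in correlation with time-scale aggregation reported above can be reproduced qualitatively with synthetic data (a sketch, not the GDAS/CIMIS data): averaging two noisy versions of a common signal to 10-day means damps the independent daily noise and raises the correlation:

```python
import random

# Synthetic "station" and "model" series: a shared slow signal plus
# independent daily noise (all values illustrative).
random.seed(42)
signal  = [5 + 3 * ((d % 365) / 365) for d in range(1800)]  # slow seasonal ramp
station = [s + random.gauss(0, 1.0) for s in signal]
model   = [s + random.gauss(0, 1.0) for s in signal]

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def block_means(x, k):
    """Non-overlapping k-day means of a daily series."""
    return [sum(x[i:i + k]) / k for i in range(0, len(x) - k + 1, k)]

daily_r  = pearson(station, model)
tenday_r = pearson(block_means(station, 10), block_means(model, 10))
# Averaging to 10-day means damps the independent noise, raising correlation.
print(daily_r < tenday_r)
```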

  15. Animal models to evaluate bacterial biofilm development.

    PubMed

    Thomsen, Kim; Trøstrup, Hannah; Moser, Claus

    2014-01-01

    Medical biofilms have attracted substantial attention especially in the past decade. Animal models are contributing significantly to understand the pathogenesis of medical biofilms. In addition, animal models are an essential tool in testing the hypothesis generated from clinical observations in patients and preclinical testing of agents showing in vitro antibiofilm effect. Here, we describe three animal models - two non-foreign body Pseudomonas aeruginosa biofilm models and a foreign body Staphylococcus aureus model. PMID:24664830

  16. Evaluation of Models of Parkinson's Disease

    PubMed Central

    Jagmag, Shail A.; Tripathi, Naveen; Shukla, Sunil D.; Maiti, Sankar; Khurana, Sukant

    2016-01-01

    Parkinson's disease (PD) is one of the most common neurodegenerative diseases. Animal models have contributed substantially to our understanding of PD and to the therapeutics developed for its treatment. Several more exhaustive reviews of the literature provide deeper insights into specific models; however, a concise synthesis of the basic advantages and disadvantages of the different models is much needed. Here we compare both neurotoxin-based and genetic models while suggesting some novel avenues in PD modeling. We also highlight the problems faced and promises of all the mammalian models with the hope of providing a framework for comparison of various systems. PMID:26834536

  17. The Use of the Discrepancy Evaluation Model in Evaluating Educational Programs for Visually Handicapped Persons.

    ERIC Educational Resources Information Center

    Hill, Everett W.; Hill, Mary-Maureen

    1983-01-01

    The need to evaluate educational programs is briefly addressed, and the application of the Discrepancy Evaluation Model (DEM) at a hypothetical residential school for the visually handicapped is described. (Author/SW)

  18. Evaluating uncertainty in stochastic simulation models

    SciTech Connect

    McKay, M.D.

    1998-02-01

    This paper discusses fundamental concepts of uncertainty analysis relevant to both stochastic simulation models and deterministic models. A stochastic simulation model, called a simulation model, is a stochastic mathematical model that incorporates random numbers in the calculation of the model prediction. Queuing models are familiar simulation models in which random numbers are used for sampling interarrival and service times. Another example of simulation models is found in probabilistic risk assessments where atmospheric dispersion submodels are used to calculate movement of material. For these models, randomness comes not from the sampling of times but from the sampling of weather conditions, which are described by a frequency distribution of atmospheric variables like wind speed and direction as a function of height above ground. A common characteristic of simulation models is that single predictions, based on one interarrival time or one weather condition, for example, are not nearly as informative as the probability distribution of possible predictions induced by sampling the simulation variables like time and weather condition. The language of model analysis is often general and vague, with terms having mostly intuitive meaning. The definition and motivations for some of the commonly used terms and phrases offered in this paper lead to an analysis procedure based on prediction variance. In the following mathematical abstraction the authors present a setting for model analysis, relate practical objectives to mathematical terms, and show how two reasonable premises lead to a viable analysis strategy.
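
    The point made above, that the distribution of predictions induced by sampling the simulation variables is more informative than any single prediction, can be sketched with a toy dispersion model (the function and numbers below are illustrative assumptions, not from the paper):

```python
import random

random.seed(7)

def toy_dispersion(wind_speed):
    """Toy stand-in for a dispersion submodel: concentration falls with wind."""
    return 100.0 / (1.0 + wind_speed)

def sample_wind():
    """Sample a wind speed from an assumed frequency distribution."""
    return random.lognormvariate(1.0, 0.5)

# The distribution of predictions over sampled weather conditions, summarized
# by its mean and variance, conveys uncertainty a single run cannot.
predictions = [toy_dispersion(sample_wind()) for _ in range(10_000)]
mean = sum(predictions) / len(predictions)
variance = sum((p - mean) ** 2 for p in predictions) / len(predictions)
print(round(mean, 1), round(variance, 1))
```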

  19. Likelihood-Based Climate Model Evaluation

    NASA Technical Reports Server (NTRS)

    Braverman, Amy; Cressie, Noel; Teixeira, Joao

    2012-01-01

    Climate models are deterministic, mathematical descriptions of the physics of climate. Confidence in predictions of future climate is increased if the physics are verifiably correct. A necessary (but not sufficient) condition is that past and present climate be simulated well. We quantify the likelihood that a summary statistic computed from a set of observations arises from a physical system with the characteristics captured by a model-generated time series. Given a prior on models, we can go further and compute the posterior distribution of a model given the observations.
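The likelihood-and-posterior idea in the abstract can be illustrated as follows (the two candidate models, their time series, the observed statistic, and the Gaussian fit to each model's summary-statistic distribution are all invented assumptions for illustration):

```python
import math
import random

def gaussian_loglik(x, mu, sigma):
    """Log-density of x under a normal distribution with mean mu, sd sigma."""
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

rng = random.Random(0)
observed_mean = 14.2  # hypothetical observed summary statistic

# Hypothetical model-generated time series for two candidate models.
models = {
    "model_a": [14.0 + rng.gauss(0, 0.5) for _ in range(100)],
    "model_b": [15.5 + rng.gauss(0, 0.5) for _ in range(100)],
}
prior = {"model_a": 0.5, "model_b": 0.5}

# Likelihood that the observed statistic arises from each model's
# summary-statistic distribution (fitted as a Gaussian here).
loglik = {}
for name, series in models.items():
    mu = sum(series) / len(series)
    sigma = (sum((x - mu) ** 2 for x in series) / (len(series) - 1)) ** 0.5
    loglik[name] = gaussian_loglik(observed_mean, mu, sigma)

# Posterior over models given the observation (Bayes' rule).
unnorm = {m: prior[m] * math.exp(loglik[m]) for m in models}
z = sum(unnorm.values())
posterior = {m: unnorm[m] / z for m in models}
print(posterior)
```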

  20. Evaluation of Traditional Medicines for Neurodegenerative Diseases Using Drosophila Models

    PubMed Central

    Lee, Soojin; Bang, Se Min; Lee, Joon Woo; Cho, Kyoung Sang

    2014-01-01

    Drosophila is one of the oldest and most powerful genetic models and has led to novel insights into a variety of biological processes. Recently, Drosophila has emerged as a model system to study human diseases, including several important neurodegenerative diseases. Because of the genomic similarity between Drosophila and humans, Drosophila neurodegenerative disease models exhibit a variety of human-disease-like phenotypes, facilitating fast and cost-effective in vivo genetic modifier screening and drug evaluation. Using these models, many disease-associated genetic factors have been identified, leading to the identification of compelling drug candidates. Recently, the safety and efficacy of traditional medicines for human diseases have been evaluated in various animal disease models. Despite the advantages of the Drosophila model, its usage in the evaluation of traditional medicines is only nascent. Here, we introduce the Drosophila model for neurodegenerative diseases and some examples demonstrating the successful application of Drosophila models in the evaluation of traditional medicines. PMID:24790636

  1. Evaluation of Fast-Time Wake Vortex Prediction Models

    NASA Technical Reports Server (NTRS)

    Proctor, Fred H.; Hamilton, David W.

    2009-01-01

    Current fast-time wake models are reviewed and three basic types are defined. Predictions from several of the fast-time models are compared. Previous statistical evaluations of the APA-Sarpkaya and D2P fast-time models are discussed. Root Mean Square errors between fast-time model predictions and Lidar wake measurements are examined for a 24 hr period at Denver International Airport. Shortcomings in current methodology for evaluating wake errors are also discussed.
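As a minimal illustration of the root-mean-square-error comparison the abstract describes, assuming hypothetical model predictions and Lidar measurements (not the Denver data):

```python
import math

def rmse(predicted, observed):
    """Root-mean-square error between model predictions and measurements."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(predicted))

# Hypothetical wake-vortex lateral positions (m): fast-time model vs. Lidar
model = [10.0, 12.5, 15.0, 18.0]
lidar = [9.0, 13.0, 14.5, 19.0]
print(round(rmse(model, lidar), 3))  # → 0.791
```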

  2. Evaluating Energy Efficiency Policies with Energy-Economy Models

    SciTech Connect

    Mundaca, Luis; Neij, Lena; Worrell, Ernst; McNeil, Michael A.

    2010-08-01

    The growing complexities of energy systems, environmental problems and technology markets are driving and testing most energy-economy models to their limits. To further advance bottom-up models from a multidisciplinary energy efficiency policy evaluation perspective, we review and critically analyse bottom-up energy-economy models and corresponding evaluation studies on energy efficiency policies to induce technological change. We use the household sector as a case study. Our analysis focuses on decision frameworks for technology choice, type of evaluation being carried out, treatment of market and behavioural failures, evaluated policy instruments, and key determinants used to mimic policy instruments. Although the review confirms criticism related to energy-economy models (e.g. unrealistic representation of decision-making by consumers when choosing technologies), they provide valuable guidance for policy evaluation related to energy efficiency. Different areas to further advance models remain open, particularly related to modelling issues, techno-economic and environmental aspects, behavioural determinants, and policy considerations.

  3. Evaluation study of building-resolved urban dispersion models

    SciTech Connect

    Flaherty, Julia E.; Allwine, K Jerry; Brown, Mike J.; Coirier, WIlliam J.; Ericson, Shawn C.; Hansen, Olav R.; Huber, Alan H.; Kim, Sura; Leach, Martin J.; Mirocha, Jeff D.; Newsom, Rob K.; Patnaik, Gopal; Senocak, Inanc

    2007-09-10

    For effective emergency response and recovery planning, it is critically important that building-resolved urban dispersion models be evaluated using field data. Several full-physics computational fluid dynamics (CFD) models and semi-empirical building-resolved (SEB) models are being advanced and applied to simulating flow and dispersion in urban areas. To obtain an estimate of the current state-of-readiness of these classes of models, the Department of Homeland Security (DHS) funded a study to compare five CFD models and one SEB model with tracer data from the extensive Midtown Manhattan field study (MID05) conducted during August 2005 as part of the DHS Urban Dispersion Program (UDP; Allwine and Flaherty 2007). Six days of tracer and meteorological experiments were conducted over an approximately 2-km-by-2-km area in Midtown Manhattan just south of Central Park in New York City. A subset of these data was used for model evaluations. The study was conducted such that an evaluation team, independent of the six modeling teams, provided all the input data (e.g., building data, meteorological data, and tracer release rates) and run conditions for each of four experimental periods simulated. Tracer concentration data for two of the four experimental periods were provided to the modeling teams for their own evaluation of their respective models to ensure proper setup and operation. Tracer data were not provided for the second two experimental periods, to allow for an independent evaluation of the models. The tracer concentrations resulting from the model simulations were provided to the evaluation team in a standard format for consistency in inter-comparing model results. An overview of the model evaluation approach will be given, followed by a discussion of the qualitative comparison of the respective models with the field data. Future model development efforts needed to address modeling gaps identified in this study will also be discussed.

  4. Novel methods to evaluate fracture risk models

    PubMed Central

    Donaldson, M.G.; Cawthon, P. M.; Schousboe, J.T.; Ensrud, K.E.; Lui, L.Y.; Cauley, J.A.; Hillier, T.A.; Taylor, B.C.; Hochberg, M.C.; Bauer, D.C.; Cummings, S.R.

    2013-01-01

    Fracture prediction models help identify individuals at high risk who may benefit from treatment. Area Under the Curve (AUC) is used to compare prediction models. However, the AUC has limitations and may miss important differences between models. Novel reclassification methods quantify how accurately models classify patients who benefit from treatment and the proportion of patients above/below treatment thresholds. We applied two reclassification methods, using the NOF treatment thresholds, to compare two risk models: femoral neck BMD and age (“simple model”) and FRAX (”FRAX model”). The Pepe method classifies based on case/non-case status and examines the proportion of each above and below thresholds. The Cook method examines fracture rates above and below thresholds. We applied these to the Study of Osteoporotic Fractures. There were 6036 (1037 fractures) and 6232 (389 fractures) participants with complete data for major osteoporotic and hip fracture respectively. Both models for major osteoporotic fracture (0.68 vs. 0.69) and hip fracture (0.75 vs. 0.76) had similar AUCs. In contrast, using reclassification methods, each model classified a substantial number of women differently. Using the Pepe method, the FRAX model (vs. simple model), missed treating 70 (7%) cases of major osteoporotic fracture but avoided treating 285 (6%) non-cases. For hip fracture, the FRAX model missed treating 31 (8%) cases but avoided treating 1026 (18%) non-cases. The Cook method (both models, both fracture outcomes) had similar fracture rates above/below the treatment thresholds. Compared with the AUC, new methods provide more detailed information about how models classify patients. PMID:21351143
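The contrast between AUC and a Pepe-style reclassification summary can be sketched with invented risk scores and a hypothetical treatment threshold (not the NOF thresholds or the SOF data): two models with identical AUCs can still classify patients differently relative to the threshold.

```python
def auc(scores, labels):
    """Rank-based AUC: probability a random case outranks a random non-case."""
    cases = [s for s, y in zip(scores, labels) if y == 1]
    controls = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((c > d) + 0.5 * (c == d) for c in cases for d in controls)
    return wins / (len(cases) * len(controls))

def above_threshold(scores, labels, thresh):
    """Pepe-style summary: fraction of cases and of non-cases at/above threshold."""
    cases = [s >= thresh for s, y in zip(scores, labels) if y == 1]
    controls = [s >= thresh for s, y in zip(scores, labels) if y == 0]
    return sum(cases) / len(cases), sum(controls) / len(controls)

# Hypothetical 10-year risk scores from two models on the same five patients.
labels = [1, 1, 0, 0, 0]
simple = [0.30, 0.12, 0.25, 0.10, 0.05]
frax = [0.28, 0.13, 0.15, 0.12, 0.04]

print(auc(simple, labels), auc(frax, labels))  # identical AUCs...
print(above_threshold(simple, labels, 0.20))
print(above_threshold(frax, labels, 0.20))     # ...different non-case treatment
```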

  5. Evaluation of stochastic reservoir operation optimization models

    NASA Astrophysics Data System (ADS)

    Celeste, Alcigeimes B.; Billib, Max

    2009-09-01

    This paper investigates the performance of seven stochastic models used to define optimal reservoir operating policies. The models are based on implicit (ISO) and explicit stochastic optimization (ESO) as well as on the parameterization-simulation-optimization (PSO) approach. The ISO models include multiple regression, two-dimensional surface modeling and a neuro-fuzzy strategy. The ESO model is the well-known and widely used stochastic dynamic programming (SDP) technique. The PSO models comprise a variant of the standard operating policy (SOP), reservoir zoning, and a two-dimensional hedging rule. The models are applied to the operation of a single reservoir damming an intermittent river in northeastern Brazil. The standard operating policy is also included in the comparison and operational results provided by deterministic optimization based on perfect forecasts are used as a benchmark. In general, the ISO and PSO models performed better than SDP and the SOP. In addition, the proposed ISO-based surface modeling procedure and the PSO-based two-dimensional hedging rule showed superior overall performance as compared with the neuro-fuzzy approach.
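Of the policies compared, the standard operating policy (SOP) is the simplest and can be sketched directly: release the demand if water is available, otherwise release all that is available, and spill any excess above capacity (the reservoir numbers below are hypothetical):

```python
def standard_operating_policy(storage, inflow, demand, capacity):
    """SOP for a single reservoir over one period.

    Returns (release, new_storage, spill) in consistent volume units.
    """
    available = storage + inflow
    release = min(available, demand)
    new_storage = min(available - release, capacity)
    spill = max(available - release - capacity, 0.0)
    return release, new_storage, spill

# A wet period: demand is met in full.
print(standard_operating_policy(storage=40.0, inflow=10.0, demand=30.0, capacity=60.0))
# A dry period: only the available water can be released.
print(standard_operating_policy(storage=5.0, inflow=8.0, demand=30.0, capacity=60.0))
```

Hedging rules generalize this by deliberately releasing less than the demand when storage is low, spreading a small deficit over time to avoid a single large one.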

  6. Rhode Island Model Evaluation & Support System: Support Professional. Edition II

    ERIC Educational Resources Information Center

    Rhode Island Department of Education, 2015

    2015-01-01

    Rhode Island educators believe that implementing a fair, accurate, and meaningful evaluation and support system for support professionals will help improve student outcomes. The primary purpose of the Rhode Island Model Support Professional Evaluation and Support System (Rhode Island Model) is to help all support professionals do their best work…

  7. Modelling in Evaluating a Working Life Project in Higher Education

    ERIC Educational Resources Information Center

    Sarja, Anneli; Janhonen, Sirpa; Havukainen, Pirjo; Vesterinen, Anne

    2012-01-01

    This article describes an evaluation method based on collaboration between a higher education institution, a care home, and a university in an R&D project. The aim of the project was to elaborate modelling as a tool of developmental evaluation for innovation and competence in project cooperation. The approach was based on activity theory. Modelling enabled a…

  8. A Generalized Evaluation Model for Primary Prevention Programs.

    ERIC Educational Resources Information Center

    Barling, Phillip W.; Cramer, Kathryn D.

    A generalized evaluation model (GEM) has been developed to evaluate primary prevention program impact. The GEM model views primary prevention dynamically, delineating four structural components (program, organization, target population, system) and four developmental stages (initiation, establishment, integration, continuation). The interaction of…

  9. Mathematical model of bisubject qualimetric arbitrary objects evaluation

    NASA Astrophysics Data System (ADS)

    Morozova, A.

    2016-04-01

    An analytical basis and a formalization process for the information spaces of a mathematical model for bisubject qualimetric evaluation of arbitrary objects are developed. The model is applicable to problems of control over both technical and socio-economic systems, where objects are evaluated using systems of parameters generated by different subjects, taking into account their performance and decision-making priorities.

  10. Evaluating a Community-School Model of Social Work Practice

    ERIC Educational Resources Information Center

    Diehl, Daniel; Frey, Andy

    2008-01-01

    While research has shown that social workers can have positive impacts on students' school adjustment, evaluations of overall practice models continue to be limited. This article evaluates a model of community-school social work practice by examining its effect on problem behaviors and concerns identified by teachers and parents at referral. As…

  11. An Alternative Feedback/Evaluation Model for Outdoor Wilderness Programs.

    ERIC Educational Resources Information Center

    Dawson, R.

    Project D.A.R.E. (Development through Adventure, Responsibility and Education), an adventure-based outdoor program, uses a feedback/evaluation model, combining a learning component with a two-part participant observational model. The first phase focuses on evaluation of the child and progress made while he is in the program (stages one to four);…

  12. Testing of a Program Evaluation Model: Final Report.

    ERIC Educational Resources Information Center

    Nagler, Phyllis J.; Marson, Arthur A.

    A program evaluation model developed by Moraine Park Technical Institute (MPTI) is described in this report. Following background material, the four main evaluation criteria employed in the model are identified as program quality, program relevance to community needs, program impact on MPTI, and the transition and growth of MPTI graduates in the…

  13. Rhode Island Model Evaluation & Support System: Building Administrator. Edition III

    ERIC Educational Resources Information Center

    Rhode Island Department of Education, 2015

    2015-01-01

    Rhode Island educators believe that implementing a fair, accurate, and meaningful educator evaluation and support system will help improve teaching, learning, and school leadership. The primary purpose of the Rhode Island Model Building Administrator Evaluation and Support System (Rhode Island Model) is to help all building administrators improve.…

  14. Program Evaluation: The Accountability Bridge Model for Counselors

    ERIC Educational Resources Information Center

    Astramovich, Randall L.; Coker, J. Kelly

    2007-01-01

    The accountability and reform movements in education and the human services professions have pressured counselors to demonstrate outcomes of counseling programs and services. Evaluation models developed for large-scale evaluations are generally impractical for counselors to implement. Counselors require practical models to guide them in planning…

  15. Evaluating Aptness of a Regression Model

    ERIC Educational Resources Information Center

    Matson, Jack E.; Huguenard, Brian R.

    2007-01-01

    The data for 104 software projects is used to develop a linear regression model that uses function points (a measure of software project size) to predict development effort. The data set is particularly interesting in that it violates several of the assumptions required of a linear model; but when the data are transformed, the data set satisfies…

  16. SUMMARY OF COMPLEX TERRAIN MODEL EVALUATION

    EPA Science Inventory

    The Environmental Protection Agency conducted a scientific review of a set of eight complex terrain dispersion models. TRC Environmental Consultants, Inc. calculated and tabulated a uniform set of performance statistics for the models using the Cinder Cone Butte and Westvaco Luke...

  17. Evaluation of spinal cord injury animal models

    PubMed Central

    Zhang, Ning; Fang, Marong; Chen, Haohao; Gou, Fangming; Ding, Mingxing

    2014-01-01

    Because there is no curative treatment for spinal cord injury, establishing an ideal animal model is important to identify injury mechanisms and develop therapies for individuals suffering from spinal cord injuries. In this article, we systematically review and analyze various kinds of animal models of spinal cord injury and assess their advantages and disadvantages for further studies. PMID:25598784

  18. Model for Evaluating Teacher and Trainer Competences

    ERIC Educational Resources Information Center

    Carioca, Vito; Rodrigues, Clara; Saude, Sandra; Kokosowski, Alain; Harich, Katja; Sau-Ek, Kristiina; Georgogianni, Nicole; Levy, Samuel; Speer, Sandra; Pugh, Terence

    2009-01-01

    A lack of common criteria for comparing education and training systems makes it difficult to recognise qualifications and competences acquired in different environments and levels of training. A valid basis for defining a framework for evaluating professional performance in European educational and training contexts must therefore be established.…

  19. Designing and Evaluating Representations to Model Pedagogy

    ERIC Educational Resources Information Center

    Masterman, Elizabeth; Craft, Brock

    2013-01-01

    This article presents the case for a theory-informed approach to designing and evaluating representations for implementation in digital tools to support Learning Design, using the framework of epistemic efficacy as an example. This framework, which is rooted in the literature of cognitive psychology, is operationalised through dimensions of fit…

  20. Evaluating Individualized Reading Programs: A Bayesian Model.

    ERIC Educational Resources Information Center

    Maxwell, Martha

    Simple Bayesian approaches can be applied to answer specific questions in evaluating an individualized reading program. A small reading and study skills program located in the counseling center of a major research university collected and compiled data on student characteristics such as class, number of sessions attended, grade point average, and…

  1. Ensemble evaluation of hydrological model hypotheses

    NASA Astrophysics Data System (ADS)

    Krueger, Tobias; Freer, Jim; Quinton, John N.; MacLeod, Christopher J. A.; Bilotta, Gary S.; Brazier, Richard E.; Butler, Patricia; Haygarth, Philip M.

    2010-07-01

    It is demonstrated for the first time how model parameter, structural and data uncertainties can be accounted for explicitly and simultaneously within the Generalized Likelihood Uncertainty Estimation (GLUE) methodology. As an example application, 72 variants of a single soil moisture accounting store are tested as simplified hypotheses of runoff generation at six experimental grassland field-scale lysimeters through model rejection and a novel diagnostic scheme. The fields, designed as replicates, exhibit different hydrological behaviors which yield different model performances. For fields with low initial discharge levels at the beginning of events, the conceptual stores considered reach their limit of applicability. Conversely, one of the fields yielding more discharge than the others, but having larger data gaps, allows for greater flexibility in the choice of model structures. As a model learning exercise, the study points to a "leaking" of the fields not evident from previous field experiments. It is discussed how understanding observational uncertainties and incorporating these into model diagnostics can help appreciate the scale of model structural error.
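The GLUE workflow this study builds on, sampling parameter hypotheses, rejecting those outside a limit of acceptance, and likelihood-weighting the rest, can be sketched with a toy one-parameter model (the linear store, the NSE acceptance limit of 0.7, and all data below are illustrative assumptions, not the lysimeter setup):

```python
import random

def simulate(param, forcing):
    """Toy one-parameter linear store: discharge = param * input (hypothetical)."""
    return [param * x for x in forcing]

def nse(sim, obs):
    """Nash-Sutcliffe efficiency, a common informal likelihood measure in GLUE."""
    mean_obs = sum(obs) / len(obs)
    num = sum((s - o) ** 2 for s, o in zip(sim, obs))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1 - num / den

rng = random.Random(1)
forcing = [1.0, 2.0, 3.0, 4.0, 5.0]
obs = [0.5, 1.1, 1.4, 2.1, 2.4]  # hypothetical observations; true param ~ 0.5

# Monte Carlo sampling of parameter hypotheses; reject those below the
# limit of acceptance (here NSE > 0.7, an assumed threshold).
candidates = [rng.uniform(0.0, 1.0) for _ in range(1000)]
scored = [(p, nse(simulate(p, forcing), obs)) for p in candidates]
behavioural = [(p, e) for p, e in scored if e > 0.7]

# Likelihood-weight the accepted ("behavioural") parameter sets.
total = sum(e for _, e in behavioural)
weights = [(p, e / total) for p, e in behavioural]
print(len(behavioural), round(sum(w for _, w in weights), 6))
```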

  2. Evaluating Models of Human Performance: Safety-Critical Systems Applications

    NASA Technical Reports Server (NTRS)

    Feary, Michael S.

    2012-01-01

    This presentation is part of a panel discussion on Evaluating Models of Human Performance. The purpose of this panel is to discuss the increasing use of models in the world today and specifically focus on how to describe and evaluate models of human performance. My presentation will focus on generating distributions of performance, and on the evaluation of different strategies for humans performing tasks with mixed-initiative (Human-Automation) systems. I will also discuss issues with how to provide Human Performance modeling data to support decisions on acceptability and tradeoffs in the design of safety-critical systems. I will conclude with challenges for the future.

  3. Guidelines for Evaluating Ground-Water Flow Models

    USGS Publications Warehouse

    Reilly, Thomas E.; Harbaugh, Arlen W.

    2004-01-01

    Ground-water flow modeling is an important tool frequently used in studies of ground-water systems. Reviewers and users of these studies have a need to evaluate the accuracy or reasonableness of the ground-water flow model. This report provides some guidelines and discussion on how to evaluate complex ground-water flow models used in the investigation of ground-water systems. A consistent thread throughout these guidelines is that the objectives of the study must be specified to allow the adequacy of the model to be evaluated.

  4. Models for evaluating the performability of degradable computing systems

    NASA Technical Reports Server (NTRS)

    Wu, L. T.

    1982-01-01

    Recent advances in multiprocessor technology established the need for unified methods to evaluate computing systems performance and reliability. In response to this modeling need, a general modeling framework that permits the modeling, analysis and evaluation of degradable computing systems is considered. Within this framework, several user oriented performance variables are identified and shown to be proper generalizations of the traditional notions of system performance and reliability. Furthermore, a time varying version of the model is developed to generalize the traditional fault tree reliability evaluation methods of phased missions.

  5. TMDL MODEL EVALUATION AND RESEARCH NEEDS

    EPA Science Inventory

    This review examines the modeling research needs to support environmental decision-making for the 303(d) requirements for development of total maximum daily loads (TMDLs) and related programs such as 319 Nonpoint Source Program activities, watershed management, stormwater permits...

  6. The Discrepancy Evaluation Model: A Systematic Approach for the Evaluation of Career Planning and Placement Programs.

    ERIC Educational Resources Information Center

    Buttram, Joan L.; Covert, Robert W.

    The Discrepancy Evaluation Model (DEM), developed in 1966 by Malcolm Provus, provides information for program assessment and program improvement. Under the DEM, evaluation is defined as the comparison of an actual performance to a desired standard. The DEM embodies five stages of evaluation based upon a program's natural development: program…

  7. Perceptual evaluation of voice source models.

    PubMed

    Kreiman, Jody; Garellek, Marc; Chen, Gang; Alwan, Abeer; Gerratt, Bruce R

    2015-07-01

    Models of the voice source differ in their fits to natural voices, but it is unclear which differences in fit are perceptually salient. This study examined the relationship between the fit of five voice source models to 40 natural voices, and the degree of perceptual match among stimuli synthesized with each of the modeled sources. Listeners completed a visual sort-and-rate task to compare versions of each voice created with the different source models, and the results were analyzed using multidimensional scaling. Neither fits to pulse shapes nor fits to landmark points on the pulses predicted observed differences in quality. Further, the source models fit the opening phase of the glottal pulses better than they fit the closing phase, but at the same time similarity in quality was better predicted by the timing and amplitude of the negative peak of the flow derivative (part of the closing phase) than by the timing and/or amplitude of peak glottal opening. Results indicate that simply knowing how (or how well) a particular source model fits or does not fit a target source pulse in the time domain provides little insight into what aspects of the voice source are important to listeners. PMID:26233000

  8. Evaluation of Surrogate Animal Models of Melioidosis

    PubMed Central

    Warawa, Jonathan Mark

    2010-01-01

    Burkholderia pseudomallei is the Gram-negative bacterial pathogen responsible for the disease melioidosis. B. pseudomallei establishes disease in susceptible individuals through multiple routes of infection, all of which may proceed to a septicemic disease associated with a high mortality rate. B. pseudomallei opportunistically infects humans and a wide range of animals directly from the environment, and modeling of experimental melioidosis has been conducted in numerous biologically relevant models including mammalian and invertebrate hosts. This review seeks to summarize published findings related to established animal models of melioidosis, with an aim to compare and contrast the virulence of B. pseudomallei in these models. The effect of the route of delivery on disease is also discussed for intravenous, intraperitoneal, subcutaneous, intranasal, aerosol, oral, and intratracheal infection methodologies, with a particular focus on how they relate to modeling clinical melioidosis. The importance of the translational validity of the animal models used in B. pseudomallei research is highlighted as these studies have become increasingly therapeutic in nature. PMID:21772830

  9. CFD code evaluation for internal flow modeling

    NASA Technical Reports Server (NTRS)

    Chung, T. J.

    1990-01-01

    Research on the computational fluid dynamics (CFD) code evaluation with emphasis on supercomputing in reacting flows is discussed. Advantages of unstructured grids, multigrids, adaptive methods, improved flow solvers, vector processing, parallel processing, and reduction of memory requirements are discussed. As examples, researchers include applications of supercomputing to reacting flow Navier-Stokes equations including shock waves and turbulence and combustion instability problems associated with solid and liquid propellants. Evaluations of codes developed by other organizations are not included. Instead, the basic criteria for accuracy and efficiency have been established, and some applications on rocket combustion have been made. Research toward an ultimate goal, the most accurate and efficient CFD code, is in progress and will continue for years to come.

  10. Model structure identification based on ensemble model evaluation

    NASA Astrophysics Data System (ADS)

    Van Hoey, S.; van der Kwast, J.; Nopens, I.; Seuntjens, P.; Pereira, F.

    2012-04-01

    Identifying the most appropriate hydrological model for a given problem is more than fitting the parameters of a fixed model structure to reproduce the measured hydrograph. Defining the most appropriate model structure is dependent of the modeling objective, the characteristics of the system under investigation and the available data. To be able to adapt to the different conditions and to propose different hypotheses of the underlying system, a flexible model structure is preferred in combination with a rejectionist analysis based on different diagnostics supporting the model objective. By confronting the model structures with the model diagnostics, an identification of the dominant processes is attempted. In the presented work, a set of 24 model structures was constructed, by combining interchangeable components representing different hypotheses of the system under study, the Nete catchment in Belgium. To address the effect of different model diagnostics on the performance of the selected model structures, an optimization of the model structures was performed to identify the parameter sets minimizing specific objective functions, focusing on low or high flow conditions. Furthermore, the different model structures are compared simultaneously within the Generalized Likelihood Uncertainty Estimation (GLUE) approach. The rejection of inadequate model structures by specifying limits of acceptance and weighting of the accepted ones is the basis of the GLUE approach. Multiple measures are combined to give guidance about the suitability of the different structures and information about the identifiability and uncertainty of the parameters is extracted from the ensemble of selected structures. The results of the optimization demonstrate the relationship between the selected objective function and the behaviour of the model structures, but also the compensation for structural differences by different parameter values resulting in similar performance. The optimization gives…

  11. Numerical models for the evaluation of geothermal systems

    SciTech Connect

    Bodvarsson, G.S.; Pruess, K.; Lippmann, M.J.

    1986-08-01

    We have carried out detailed simulations of various fields in the USA (Baca, New Mexico; Heber, California); Mexico (Cerro Prieto); Iceland (Krafla); and Kenya (Olkaria). These simulation studies have illustrated the usefulness of numerical models for the overall evaluation of geothermal systems. The methodology for modeling the behavior of geothermal systems, different approaches to geothermal reservoir modeling and how they can be applied in comprehensive evaluation work are discussed.

  12. A common fallacy in climate model evaluation

    NASA Astrophysics Data System (ADS)

    Annan, J. D.; Hargreaves, J. C.; Tachiiri, K.

    2012-04-01

    We discuss the assessment of model ensembles such as that arising from the CMIP3 coordinated multi-model experiments. An important aspect of this is not merely the closeness of the models to observations in absolute terms but also the reliability of the ensemble spread as an indication of uncertainty. In this context, it has been widely argued that the multi-model ensemble of opportunity is insufficiently broad to adequately represent uncertainties regarding future climate change. For example, the IPCC AR4 summarises the consensus with the sentence: "Those studies also suggest that the current AOGCMs may not cover the full range of uncertainty for climate sensitivity." Similar claims have been made in the literature for other properties of the climate system, including the transient climate response and efficiency of ocean heat uptake. Comparison of model outputs with observations of the climate system forms an essential component of model assessment and is crucial for building our confidence in model predictions. However, methods for undertaking this comparison are not always clearly justified and understood. Here we show that the popular approach which forms the basis for the above claims, of comparing the ensemble spread to a so-called "observationally-constrained pdf", can be highly misleading. Such a comparison will almost certainly result in disagreement, but in reality tells us little about the performance of the ensemble. We present an alternative approach based on an assessment of the predictive performance of the ensemble, and show how it may lead to very different, and rather more encouraging, conclusions. We additionally outline some necessary conditions for an ensemble (or more generally, a probabilistic prediction) to be challenged by an observation.
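The predictive-performance assessment the authors advocate is often operationalised with a rank histogram: if the observation is statistically indistinguishable from the ensemble members, its rank within the pooled ensemble is uniformly distributed. A minimal sketch under that assumption (synthetic Gaussian ensemble, not CMIP3 output):

```python
import random

def rank_of_obs(obs, ensemble):
    """Rank of the observation within the pooled ensemble (0..len(ensemble))."""
    return sum(m < obs for m in ensemble)

rng = random.Random(7)

# Hypothetical "reliable" ensemble: the observation is drawn from the same
# distribution as the members, so every rank is equally likely.
ranks = []
for _ in range(2000):
    members = [rng.gauss(0, 1) for _ in range(9)]
    obs = rng.gauss(0, 1)
    ranks.append(rank_of_obs(obs, members))

# A flat histogram suggests the ensemble spread is a reliable indication of
# uncertainty; a U-shape indicates underdispersion (spread too narrow).
counts = [ranks.count(r) for r in range(10)]
print(counts)
```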

  13. Experimental evaluations of the microchannel flow model

    NASA Astrophysics Data System (ADS)

    Parker, K. J.

    2015-06-01

    Recent advances have enabled a new wave of biomechanics measurements, and have renewed interest in selecting appropriate rheological models for soft tissues such as the liver, thyroid, and prostate. The microchannel flow model was recently introduced to describe the linear response of tissue to stimuli such as stress relaxation or shear wave propagation. This model postulates a power law relaxation spectrum that results from a branching distribution of vessels and channels in normal soft tissue such as liver. In this work, the derivation is extended to determine the explicit link between the distribution of vessels and the relaxation spectrum. In addition, liver tissue is modified by temperature or salinity, and the resulting changes in tissue responses (by factors of 1.5 or greater) are reasonably predicted from the microchannel flow model, simply by considering the changes in fluid flow through the modified samples. The 2 and 4 parameter versions of the model are considered, and it is shown that in some cases the maximum time constant (corresponding to the minimum vessel diameters), could be altered in a way that has major impact on the observed tissue response. This could explain why an inflamed region is palpated as a harder bump compared to surrounding normal tissue.

  14. Evaluation of Model Fit in Cognitive Diagnosis Models

    ERIC Educational Resources Information Center

    Hu, Jinxiang; Miller, M. David; Huggins-Manley, Anne Corinne; Chen, Yi-Hsin

    2016-01-01

    Cognitive diagnosis models (CDMs) estimate student ability profiles using latent attributes. Model fit to the data needs to be ascertained in order to determine whether inferences from CDMs are valid. This study investigated the usefulness of some popular model fit statistics to detect CDM fit including relative fit indices (AIC, BIC, and CAIC),…

  15. Evaluation of biological models using Spacelab

    NASA Technical Reports Server (NTRS)

    Tollinger, D.; Williams, B. A.

    1980-01-01

    Biological models of hypogravity effects are described, including the cardiovascular-fluid shift, musculoskeletal, embryological and space sickness models. These models predict such effects as loss of extracellular fluid and electrolytes, decrease in red blood cell mass, and the loss of muscle and bone mass in weight-bearing portions of the body. Experimentation in Spacelab by the use of implanted electromagnetic flow probes, by fertilizing frog eggs in hypogravity and fixing the eggs at various stages of early development and by assessing the role of the vestibulocular reflex arc in space sickness is suggested. It is concluded that the use of small animals eliminates the uncertainties caused by corrective or preventive measures employed with human subjects.

  16. Evaluating models of climate and forest vegetation

    NASA Technical Reports Server (NTRS)

    Clark, James S.

    1992-01-01

    Understanding how the biosphere may respond to increasing trace gas concentrations in the atmosphere requires models that contain vegetation responses to regional climate. Most of the processes ecologists study in forests, including trophic interactions, nutrient cycling, and disturbance regimes, and vital components of the world economy, such as forest products and agriculture, will be influenced in potentially unexpected ways by changing climate. These vegetation changes affect climate in the following ways: changing C, N, and S pools; trace gases; albedo; and water balance. The complexity of the indirect interactions among variables that depend on climate, together with the range of different space/time scales that best describe these processes, make the problems of modeling and prediction enormously difficult. These problems of predicting vegetation response to climate warming and potential ways of testing model predictions are the subjects of this chapter.

  17. Source term evaluation for combustion modeling

    NASA Technical Reports Server (NTRS)

    Sussman, Myles A.

    1993-01-01

    A modification is developed for application to the source terms used in combustion modeling. The modification accounts for the error of the finite difference scheme in regions where chain-branching chemical reactions produce exponential growth of species densities. The modification is first applied to a one-dimensional scalar model problem. It is then generalized to multiple chemical species, and used in quasi-one-dimensional computations of shock-induced combustion in a channel. Grid refinement studies demonstrate the improved accuracy of the method using this modification. The algorithm is applied in two spatial dimensions and used in simulations of steady and unsteady shock-induced combustion. Comparisons with ballistic range experiments give confidence in the numerical technique and the 9-species hydrogen-air chemistry model.

  18. Modeling procedures for handling qualities evaluation of flexible aircraft

    NASA Technical Reports Server (NTRS)

    Govindaraj, K. S.; Eulrich, B. J.; Chalk, C. R.

    1981-01-01

    This paper presents simplified modeling procedures to evaluate the impact of flexible modes and the unsteady aerodynamic effects on the handling qualities of Supersonic Cruise Aircraft (SCR). The modeling procedures involve obtaining reduced order transfer function models of SCR vehicles, including the important flexible mode responses and unsteady aerodynamic effects, and conversion of the transfer function models to time domain equations for use in simulations. The use of the modeling procedures is illustrated by a simple example.

  19. Evaluation of a hydrological model based on Bidirectional Reach (BReach)

    NASA Astrophysics Data System (ADS)

    Van Eerdenbrugh, Katrien; Van Hoey, Stijn; Verhoest, Niko E. C.

    2016-04-01

Evaluation and discrimination of model structures is crucial to ensure an appropriate use of hydrological models. When evaluating model results by aggregating their quality in (a subset of) individual observations, the overall results of this analysis sometimes conceal important detailed information about model structural deficiencies. Analyzing model results within their local (time) context can uncover this detailed information. In this research, a methodology called Bidirectional Reach (BReach) is proposed to evaluate and analyze results of a hydrological model by assessing the maximum left and right reach in each observation point used for model evaluation. These maximum reaches express the capability of the model to describe a subset of the evaluation data both in the direction of the previous data (left) and of the following data (right). This capability is evaluated on two levels. First, on the level of individual observations, the combination of a parameter set and an observation is classified as non-acceptable if the deviation between the accompanying model result and the measurement exceeds observational uncertainty. Second, the behavior in a sequence of observations is evaluated by means of a tolerance degree. This tolerance degree expresses the condition for satisfactory model behavior in a data series and is defined by the percentage of observations within this series that can have non-acceptable model results. Based on both criteria, the maximum left and right reaches of a model in an observation represent the data points, in the direction of the previous and the following observations respectively, beyond which no sampled parameter set both behaves satisfactorily and results in an acceptable deviation. After assessing these reaches for a variety of tolerance degrees, results can be plotted in a combined BReach plot that shows temporal changes in the behavior of model results.
The methodology is applied on a Probability Distributed Model (PDM) of the river
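The reach computation described above can be sketched directly: given, for one sampled parameter set, a boolean series marking whether each model result deviates acceptably from its observation, the maximum right (and, by mirroring, left) reach from a point is the farthest index at which the fraction of non-acceptable results still respects the tolerance degree. The function names are illustrative; this is one plausible reading of the criteria, not the authors' reference implementation:

```python
def max_right_reach(acceptable, i, tolerance):
    """Farthest index j >= i such that the fraction of non-acceptable
    results in acceptable[i..j] does not exceed the tolerance degree.

    acceptable: sequence of booleans, True where the model-observation
    deviation is within observational uncertainty.
    tolerance: allowed fraction of non-acceptable points (e.g. 0.05).
    """
    reach, bad = i, 0
    for j in range(i, len(acceptable)):
        bad += not acceptable[j]
        if bad / (j - i + 1) <= tolerance:
            reach = j
    return reach

def max_left_reach(acceptable, i, tolerance):
    # Mirror the series to reuse the right-reach logic.
    n = len(acceptable)
    return n - 1 - max_right_reach(acceptable[::-1], n - 1 - i, tolerance)
```

In the full method this is repeated over all sampled parameter sets, and the reaches plotted against tolerance degree form the BReach plot.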

  20. Evaluating the Pedagogical Potential of Hybrid Models

    ERIC Educational Resources Information Center

    Levin, Tzur; Levin, Ilya

    2013-01-01

    The paper examines how the use of hybrid models--that consist of the interacting continuous and discrete processes--may assist in teaching system thinking. We report an experiment in which undergraduate students were asked to choose between a hybrid and a continuous solution for a number of control problems. A correlation has been found between…

  1. Experiences in evaluating regional air quality models

    NASA Astrophysics Data System (ADS)

    Liu, Mei-Kao; Greenfield, Stanley M.

Any area of the world concerned with the health and welfare of its people and the viability of its ecological system must eventually address the question of the control of air pollution. This is true in developed countries as well as in countries that are undergoing a considerable degree of industrialization. The control or limitation of the emissions of a pollutant can be very costly. To avoid ineffective or unnecessary control, the nature of the problem must be fully understood and the relationship between source emissions and ambient concentrations must be established. Mathematical models, while admittedly containing large uncertainties, can be used to examine alternative emission restrictions for achieving safe ambient concentrations. The focus of this paper is to summarize our experiences with modeling regional air quality in the United States and Western Europe. The following modeling experiences have been used: future SO2 and sulfate distributions and projected acidic deposition as related to coal development in the northern Great Plains in the U.S.; analysis of regional ozone and sulfate episodes in the northeastern U.S.; analysis of the regional ozone problem in western Europe in support of alternative emission control strategies; and analysis of distributions of toxic chemicals in the Southeast Ohio River Valley in support of the design of a monitoring network for human exposure. Collectively, these prior modeling analyses can be invaluable in examining similar problems in other parts of the world, such as the Pacific Rim in Asia.

  2. TAMPA BAY MODEL EVALUATION AND ASSESSMENT

    EPA Science Inventory

    A long term goal of multimedia environmental management is to achieve sustainable ecological resources. Progress towards this goal rests on a foundation of science-based methods and data integrated into predictive multimedia, multi-stressor open architecture modeling systems. The...

  3. Evaluating a Model of Youth Physical Activity

    ERIC Educational Resources Information Center

    Heitzler, Carrie D.; Lytle, Leslie A.; Erickson, Darin J.; Barr-Anderson, Daheia; Sirard, John R.; Story, Mary

    2010-01-01

    Objective: To explore the relationship between social influences, self-efficacy, enjoyment, and barriers and physical activity. Methods: Structural equation modeling examined relationships between parent and peer support, parent physical activity, individual perceptions, and objectively measured physical activity using accelerometers among a…

  4. AERMOD: MODEL FORMULATION AND EVALUATION RESULTS

    EPA Science Inventory

    AERMOD is an advanced plume model that incorporates updated treatments of the boundary layer theory, understanding of turbulence and dispersion, and includes handling of terrain interactions. This paper presents an overview of AERMOD's features relative to ISCST3.

    AERM...

  5. Evaluation of regional-scale receptor modeling.

    PubMed

    Lowenthal, Douglas H; Watson, John G; Koracin, Darko; Chen, L W Antony; Dubois, David; Vellore, Ramesh; Kumar, Naresh; Knipping, Eladio M; Wheeler, Neil; Craig, Kenneth; Reid, Stephen

    2010-01-01

    The ability of receptor models to estimate regional contributions to fine particulate matter (PM2.5) was assessed with synthetic, speciated datasets at Brigantine National Wildlife Refuge (BRIG) in New Jersey and Great Smoky Mountains National Park (GRSM) in Tennessee. Synthetic PM2.5 chemical concentrations were generated for the summer of 2002 using the Community Multiscale Air Quality (CMAQ) model and chemically speciated PM2.5 source profiles from the U.S. Environmental Protection Agency (EPA)'s SPECIATE and Desert Research Institute's source profile databases. CMAQ estimated the "true" contributions of seven regions in the eastern United States to chemical species concentrations and individual source contributions to primary PM2.5 at both sites. A seven-factor solution by the positive matrix factorization (PMF) receptor model explained approximately 99% of the variability in the data at both sites. At BRIG, PMF captured the first four major contributing sources (including a secondary sulfate factor), although diesel and gasoline vehicle contributions were not separated. However, at GRSM, the resolved factors did not correspond well to major PM2.5 sources. There were no correlations between PMF factors and regional contributions to sulfate at either site. Unmix produced five- and seven-factor solutions, including a secondary sulfate factor, at both sites. Some PMF factors were combined or missing in the Unmix factors. The trajectory mass balance regression (TMBR) model apportioned sulfate concentrations to the seven source regions using Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) trajectories based on Meteorological Model Version 5 (MM5) and Eta Data Simulation System (EDAS) meteorological input. The largest estimated sulfate contributions at both sites were from the local regions; this agreed qualitatively with the true regional apportionments. 
Estimated regional contributions depended on the starting elevation of the trajectories and on
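Positive matrix factorization, as applied above, is at its core a nonnegatively constrained factorization of the samples-by-species concentration matrix into factor contributions G and factor profiles F. A minimal unweighted sketch using Lee-Seung multiplicative updates on synthetic data (PMF proper additionally down-weights each residual by its measurement uncertainty; all names and sizes here are illustrative):

```python
import numpy as np

def nmf(X, k, iters=500, seed=0):
    """Factor X (samples x species) into nonnegative G (contributions)
    and F (profiles) with Lee-Seung multiplicative updates."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    G = rng.random((n, k)) + 0.1
    F = rng.random((k, m)) + 0.1
    eps = 1e-9   # guards against division by zero
    for _ in range(iters):
        G *= (X @ F.T) / (G @ F @ F.T + eps)
        F *= (G.T @ X) / (G.T @ G @ F + eps)
    return G, F

# Synthetic check: two known source profiles mixed across 200 samples,
# mimicking the paper's use of synthetic speciated datasets.
rng = np.random.default_rng(1)
true_F = rng.random((2, 6))                  # species signatures
true_G = rng.random((200, 2))                # per-sample source strengths
X = true_G @ true_F + 0.01 * rng.random((200, 6))

G, F = nmf(X, k=2)
explained = 1 - np.linalg.norm(X - G @ F) / np.linalg.norm(X)
```

As the abstract illustrates at GRSM, a high explained fraction on its own does not guarantee that the factors correspond to physical sources.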

  6. Evaluation information integration model on book purchasing bids

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Jiao, Yang

    2011-12-01

A multi-attribute decision model is presented, based on a number of indicators of book-procurement bidders and on the characteristic that several persons engage in joint decision-making. For each evaluator, an ideal solution and a negative ideal solution are defined, and the relative closeness of each supplier is computed for each evaluator. The ideal and negative ideal solutions of the evaluation committee are then defined on the basis of the group closeness matrix, and the final supplier evaluation results are calculated for the decision-making group. The model is illustrated through the application of experimental data.
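The ideal-solution/negative-ideal-solution/relative-closeness construction described here is the TOPSIS scheme. A single-evaluator sketch (the supplier scores and equal weights are invented for illustration; the paper's group step would repeat this over the evaluators' closeness matrix):

```python
import numpy as np

def topsis(scores, weights=None):
    """Relative closeness of each alternative (supplier) to the ideal
    solution. scores: (n_alternatives, n_criteria), larger = better."""
    X = np.asarray(scores, dtype=float)
    if weights is None:
        weights = np.ones(X.shape[1]) / X.shape[1]
    # Vector-normalize each criterion column, then apply weights.
    V = weights * X / np.linalg.norm(X, axis=0)
    ideal, anti = V.max(axis=0), V.min(axis=0)
    d_pos = np.linalg.norm(V - ideal, axis=1)   # distance to ideal
    d_neg = np.linalg.norm(V - anti, axis=1)    # distance to negative ideal
    return d_neg / (d_pos + d_neg)

# Four hypothetical suppliers scored on three criteria by one evaluator.
closeness = topsis([[7, 9, 9], [8, 7, 8], [9, 6, 8], [6, 7, 8]])
ranking = np.argsort(-closeness)   # best supplier first
```

A supplier dominated on every criterion (here the fourth, versus the second) always receives a lower closeness score.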

  7. Teachers' Development Model to Authentic Assessment by Empowerment Evaluation Approach

    ERIC Educational Resources Information Center

    Charoenchai, Charin; Phuseeorn, Songsak; Phengsawat, Waro

    2015-01-01

The purposes of this study were to 1) study teachers' authentic assessment practices, their comprehension of authentic assessment, and their needs for authentic assessment development; 2) create a teacher development model; 3) experiment with the teacher development model; and 4) evaluate the effectiveness of the teacher development model. The research is divided into 4…

  8. An Evaluation of Some Models for Culture-Fair Selection.

    ERIC Educational Resources Information Center

    Petersen, Nancy S.; Novick, Melvin R.

    Models proposed by Cleary, Thorndike, Cole, Linn, Einhorn and Bass, Darlington, and Gross and Su for analyzing bias in the use of tests in a selection strategy are surveyed. Several additional models are also introduced. The purpose is to describe, compare, contrast, and evaluate these models while extracting such useful ideas as may be found in…

  9. Automated expert modeling for automated student evaluation.

    SciTech Connect

    Abbott, Robert G.

    2006-01-01

The 8th International Conference on Intelligent Tutoring Systems provides a leading international forum for the dissemination of original results in the design, implementation, and evaluation of intelligent tutoring systems and related areas. The conference draws researchers from a broad spectrum of disciplines, ranging from artificial intelligence and cognitive science to pedagogy and educational psychology. The conference explores intelligent tutoring systems' increasing real-world impact on an increasingly global scale. Improved authoring tools and learning object standards enable fielding systems and curricula in real-world settings on an unprecedented scale. Researchers deploy ITSs in ever larger studies and increasingly use data from real students, tasks, and settings to guide new research. With high volumes of student interaction data, data mining, and machine learning, tutoring systems can learn from experience and improve their teaching performance. The increasing number of realistic evaluation studies also broadens researchers' knowledge about the educational contexts for which ITSs are best suited. At the same time, researchers explore how to expand and improve ITS/student communications, for example, how to achieve more flexible and responsive discourse with students, help students integrate Web resources into learning, use mobile technologies and games to enhance student motivation and learning, and address multicultural perspectives.

  10. Case study of an evaluation coaching model: exploring the role of the evaluator.

    PubMed

    Ensminger, David C; Kallemeyn, Leanne M; Rempert, Tania; Wade, James; Polanin, Megan

    2015-04-01

    This study examined the role of the external evaluator as a coach. More specifically, using an evaluative inquiry framework (Preskill & Torres, 1999a; Preskill & Torres, 1999b), it explored the types of coaching that an evaluator employed to promote individual, team and organizational learning. The study demonstrated that evaluation coaching provided a viable means for an organization with a limited budget to conduct evaluations through support of a coach. It also demonstrated how the coaching processes supported the development of evaluation capacity within the organization. By examining coaching models outside of the field of evaluation, this study identified two forms of coaching--results coaching and developmental coaching--that promoted evaluation capacity building and have not been previously discussed in the evaluation literature. PMID:25677616

  11. Rule based design of conceptual models for formative evaluation

    NASA Technical Reports Server (NTRS)

    Moore, Loretta A.; Chang, Kai; Hale, Joseph P.; Bester, Terri; Rix, Thomas; Wang, Yaowen

    1994-01-01

    A Human-Computer Interface (HCI) Prototyping Environment with embedded evaluation capability has been investigated. This environment will be valuable in developing and refining HCI standards and evaluating program/project interface development, especially Space Station Freedom on-board displays for payload operations. This environment, which allows for rapid prototyping and evaluation of graphical interfaces, includes the following four components: (1) a HCI development tool; (2) a low fidelity simulator development tool; (3) a dynamic, interactive interface between the HCI and the simulator; and (4) an embedded evaluator that evaluates the adequacy of a HCI based on a user's performance. The embedded evaluation tool collects data while the user is interacting with the system and evaluates the adequacy of an interface based on a user's performance. This paper describes the design of conceptual models for the embedded evaluation system using a rule-based approach.

  13. A MULTILAYER BIOCHEMICAL DRY DEPOSITION MODEL 2. MODEL EVALUATION

    EPA Science Inventory

    The multilayer biochemical dry deposition model (MLBC) described in the accompanying paper was tested against half-hourly eddy correlation data from six field sites under a wide range of climate conditions with various plant types. Modeled CO2, O3, SO2<...

  14. Evaluation of an Infiltration Model with Microchannels

    NASA Astrophysics Data System (ADS)

    Garcia-Serrana, M.; Gulliver, J. S.; Nieber, J. L.

    2015-12-01

The goal of this research is to develop and demonstrate the means by which roadside drainage ditches and filter strips can be assigned the appropriate volume reduction credits by infiltration. These vegetated surfaces convey stormwater, infiltrate runoff, and filter and/or settle solids, and are often placed along roads and other impermeable surfaces. Infiltration rates are typically calculated by assuming that water flows as sheet flow over the slope. However, for most intensities water flow occurs in narrow and shallow micro-channels and concentrates in depressions. This channelization reduces the fraction of the soil surface covered by the water coming from the road. The non-uniform distribution of water along a hillslope directly affects infiltration. First, laboratory and field experiments have been conducted to characterize the spatial pattern of flow for stormwater runoff entering onto a sloped surface in a drainage ditch. In the laboratory experiments, different micro-topographies were tested over bare sandy loam soil: a smooth surface, and three and five parallel rills. All the surfaces experienced erosion; the initially smooth surface developed a system of channels over time that increased runoff generation. On average, the initially smooth surfaces infiltrated 10% more volume than the initially rilled surfaces. The field experiments were performed on the side slopes of established roadside drainage ditches. Three rates of runoff from a road surface into the swale slope were tested, representing runoff from 1-, 2-, and 10-year storm events. The average percentage of input runoff water infiltrated in the 32 experiments was 67%, with a 21% standard deviation. Multiple measurements of saturated hydraulic conductivity were conducted to account for its spatial variability. Second, a rate-based coupled infiltration and overland flow model has been designed that calculates the stormwater infiltration efficiency of swales. 
The Green-Ampt-Mein-Larson assumptions were
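Under ponded conditions the Green-Ampt family of models referenced above determines cumulative infiltration F(t) implicitly from F - S ln(1 + F/S) = Kt, with S the product of wetting-front suction and moisture deficit. A sketch solving this by fixed-point iteration (the parameter values are loam-like illustrations, not the study's; the Mein-Larson extension additionally delays ponding under finite rainfall intensity):

```python
import math

def green_ampt_F(t, K, psi, dtheta, tol=1e-10):
    """Cumulative infiltration F(t) [cm] under ponding, from
    F - S*ln(1 + F/S) = K*t with S = psi*dtheta, solved via the
    contractive fixed-point map F <- K*t + S*ln(1 + F/S).
    t [h], K = saturated conductivity [cm/h],
    psi = wetting-front suction [cm], dtheta = moisture deficit [-]."""
    S = psi * dtheta
    F = max(K * t, tol)   # initial guess
    while True:
        F_new = K * t + S * math.log(1 + F / S)
        if abs(F_new - F) < tol:
            return F_new
        F = F_new

# Illustration: K = 1 cm/h, psi = 11 cm, dtheta = 0.3 after 1 h
F1 = green_ampt_F(1.0, 1.0, 11.0, 0.3)
rate = 1.0 * (1 + 11.0 * 0.3 / F1)   # infiltration capacity f = K(1 + S/F)
```

The iteration converges because the map's derivative, S/(S + F), is strictly less than one for positive F.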

  15. Evaluating Conceptual Site Models with Multicomponent Reactive Transport Modeling

    NASA Astrophysics Data System (ADS)

    Dai, Z.; Heffner, D.; Price, V.; Temples, T. J.; Nicholson, T. J.

    2005-05-01

    Modeling ground-water flow and multicomponent reactive chemical transport is a useful approach for testing conceptual site models and assessing the design of monitoring networks. A graded approach with three conceptual site models is presented here with a field case of tetrachloroethene (PCE) transport and biodegradation near Charleston, SC. The first model assumed a one-layer homogeneous aquifer structure with semi-infinite boundary conditions, in which an analytical solution of the reactive solute transport can be obtained with BIOCHLOR (Aziz et al., 1999). Due to the over-simplification of the aquifer structure, this simulation cannot reproduce the monitoring data. In the second approach we used GMS to develop the conceptual site model, a layer-cake multi-aquifer system, and applied a numerical module (MODFLOW and RT3D within GMS) to solve the flow and reactive transport problem. The results were better than the first approach but still did not fit the plume well because the geological structures were still inadequately defined. In the third approach we developed a complex conceptual site model by interpreting log and seismic survey data with Petra and PetraSeis. We detected a major channel and a younger channel, through the PCE source area. These channels control the local ground-water flow direction and provide a preferential chemical transport pathway. Results using the third conceptual site model agree well with the monitoring concentration data. This study confirms that the bias and uncertainty from inadequate conceptual models are much larger than those introduced from an inadequate choice of model parameter values (Neuman and Wierenga, 2003; Meyer et al., 2004). Numerical modeling in this case provides key insight into the hydrogeology and geochemistry of the field site for predicting contaminant transport in the future. Finally, critical monitoring points and performance indicator parameters are selected for future monitoring to confirm system

  16. [Effect evaluation of three cell culture models].

    PubMed

    Wang, Aiguo; Xia, Tao; Yuan, Jing; Chen, Xuemin

    2003-11-01

Primary rat hepatocytes were cultured using three kinds of models in vitro, and enzyme leakage, albumin secretion, and cytochrome P450 1A (CYP 1A) activity were observed. The results showed that the level of LDH in the medium decreased over time in the period of culture. However, at 5 days, LDH showed a significant increase in monolayer culture (MC), while after 8 days LDH was not detected in sandwich culture (SC). The levels of AST and ALT in the medium did not change significantly over the investigated time. The basic CYP 1A activity gradually decreased with time in MC and SC. The decline of CYP 1A in rat hepatocytes was faster in MC than in SC. This effect was partially reversed by using cytochrome P450 (CYP450) inducers such as omeprazole and 3-methylcholanthrene (3-MC), and the CYP 1A induction was always higher in MC than in SC. Basic CYP 1A activity in the bioreactor was maintained for over 2 weeks, and the highest albumin production was observed in the bioreactor, followed by SC and MC. In conclusion, our results clearly indicated that each of the models has advantages and disadvantages, and that each can address different questions in the metabolism of toxicants and drugs. PMID:14963896

  17. Evaluation Of Hemolysis Models Using A High Fidelity Blood Model

    NASA Astrophysics Data System (ADS)

    Ezzeldin, Hussein; de Tullio, Marco; Solares, Santiago; Balaras, Elias

    2012-11-01

Red blood cell (RBC) hemolysis is a critical concern in the design of heart-assist devices, such as prosthetic heart valves (PHVs). To date, a few analytical and numerical models have been proposed to relate either hydrodynamic stresses or RBC strains, resulting from the external hydrodynamic loading, to the expected degree of hemolysis as a function of time. Such models are based on either ``lumped'' descriptions of fluid stresses or an abstract analytical-numerical representation of the RBC relying on simple geometrical assumptions. We introduce two new approaches based on an existing coarse-grained (CG) RBC structural model, which is utilized to explore the physics underlying each hemolysis model by applying a set of devised computational experiments. Then, all the models are subjected to pathlines calculated for a realistic PHV to predict the level of RBC trauma. Our results highlight the strengths and weaknesses of each approach and identify the key gaps that should be addressed in the development of new models. Finally, we present a two-layer CG model, coupling the spectrin network and the lipid bilayer, which provides invaluable information pertaining to RBC local strains and hence hemolysis. We acknowledge the support of NSF OCI-0904920 and CMMI-0841840 grants. Computing time was provided by XSEDE.

  18. Solid rocket booster performance evaluation model. Volume 4: Program listing

    NASA Technical Reports Server (NTRS)

    1974-01-01

    All subprograms or routines associated with the solid rocket booster performance evaluation model are indexed in this computer listing. An alphanumeric list of each routine in the index is provided in a table of contents.

  19. EVALUATION OF MULTIPLE PHARMACOKINETIC MODELING STRUCTURES FOR TRICHLOROETHYLENE

    EPA Science Inventory

    A series of PBPK models were developed for trichloroethylene (TCE) to evaluate biological processes that may affect the absorption, distribution, metabolism and excretion (ADME) of TCE and its metabolites.

  20. Incorporating principal component analysis into air quality model evaluation

    EPA Science Inventory

    The efficacy of standard air quality model evaluation techniques is becoming compromised as the simulation periods continue to lengthen in response to ever increasing computing capacity. Accordingly, the purpose of this paper is to demonstrate a statistical approach called Princi...

  1. Evaluating Rapid Models for High-Throughput Exposure Forecasting (SOT)

    EPA Science Inventory

    High throughput exposure screening models can provide quantitative predictions for thousands of chemicals; however these predictions must be systematically evaluated for predictive ability. Without the capability to make quantitative, albeit uncertain, forecasts of exposure, the ...

  2. Evaluation of Model Operational Analyses during DYNAMO

    NASA Astrophysics Data System (ADS)

    Ciesielski, Paul; Johnson, Richard

    2013-04-01

    A primary component of the observing system in the DYNAMO-CINDY2011-AMIE field campaign was an atmospheric sounding network comprised of two sounding quadrilaterals, one north and one south of the equator over the central Indian Ocean. During the experiment a major effort was undertaken to ensure the real-time transmission of these data onto the GTS (Global Telecommunication System) for dissemination to the operational centers (ECMWF, NCEP, JMA, etc.). Preliminary estimates indicate that ~95% of the soundings from the enhanced sounding network were successfully transmitted and potentially used in their data assimilation systems. Because of the wide use of operational and reanalysis products (e.g., in process studies, initializing numerical simulations, construction of large-scale forcing datasets for CRMs, etc.), their validity will be examined by comparing a variety of basic and diagnosed fields from two operational analyses (ECMWF and NCEP) to similar analyses based solely on sounding observations. Particular attention will be given to the vertical structures of apparent heating (Q1) and drying (Q2) from the operational analyses (OA), which are strongly influenced by cumulus parameterizations, a source of model infidelity. Preliminary results indicate that the OA products did a reasonable job at capturing the mean and temporal characteristics of convection during the DYNAMO enhanced observing period, which included the passage of two significant MJO events during the October-November 2011 period. For example, temporal correlations between Q2-budget derived rainfall from the OA products and that estimated from the TRMM satellite (i.e., the 3B42V7 product) were greater than 0.9 over the Northern Sounding Array of DYNAMO. However closer inspection of the budget profiles show notable differences between the OA products and the sounding-derived results in low-level (surface to 700 hPa) heating and drying structures. 
This presentation will examine these differences and

  3. Interrater Agreement Evaluation: A Latent Variable Modeling Approach

    ERIC Educational Resources Information Center

    Raykov, Tenko; Dimitrov, Dimiter M.; von Eye, Alexander; Marcoulides, George A.

    2013-01-01

    A latent variable modeling method for evaluation of interrater agreement is outlined. The procedure is useful for point and interval estimation of the degree of agreement among a given set of judges evaluating a group of targets. In addition, the approach allows one to test for identity in underlying thresholds across raters as well as to identify…

  4. Information and complexity measures for hydrologic model evaluation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Hydrological models are commonly evaluated through the residual-based performance measures such as the root-mean square error or efficiency criteria. Such measures, however, do not evaluate the degree of similarity of patterns in simulated and measured time series. The objective of this study was to...

  5. Evaluating an English Language Teacher Education Program through Peacock's Model

    ERIC Educational Resources Information Center

    Coskun, Abdullah; Daloglu, Aysegul

    2010-01-01

    The main aim of this study is to draw attention to the importance of program evaluation for teacher education programs and to reveal the pre-service English teacher education program components that are in need of improvement or maintenance both from teachers' and students' perspectives by using Peacock's (2009) recent evaluation model in a…

  6. An Emerging Model for Student Feedback: Electronic Distributed Evaluation

    ERIC Educational Resources Information Center

    Brunk-Chavez, Beth; Arrigucci, Annette

    2012-01-01

    In this article we address several issues and challenges that the evaluation of writing presents individual instructors and composition programs as a whole. We present electronic distributed evaluation, or EDE, as an emerging model for feedback on student writing and describe how it was integrated into our program's course redesign. Because the…

  7. An Information Search Model of Evaluative Concerns in Intergroup Interaction

    ERIC Educational Resources Information Center

    Vorauer, Jacquie D.

    2006-01-01

    In an information search model, evaluative concerns during intergroup interaction are conceptualized as a joint function of uncertainty regarding and importance attached to out-group members' views of oneself. High uncertainty generally fosters evaluative concerns during intergroup exchanges. Importance depends on whether out-group members'…

  8. AQMEII: A New International Initiative on Air Quality Model Evaluation

    EPA Science Inventory

    We provide a conceptual view of the process of evaluating regional-scale three-dimensional numerical photochemical air quality modeling systems, based on an examination of existing approaches to the evaluation of such systems as they are currently used in a variety of application....

  9. A Model for Evaluating and Acquiring Educational Software in Psychology.

    ERIC Educational Resources Information Center

    Brown, Stephen W.; And Others

    This paper describes a model for evaluating and acquiring instructionally effective and cost effective educational computer software in university psychology departments. Four stages in evaluating the software are developed: (1) establishing departmental goals and objectives for educational use of computers; (2) inventorying and evaluating…

  10. Estimating an Evaluation Utilization Model Using Conjoint Measurement and Analysis.

    ERIC Educational Resources Information Center

    Johnson, R. Burke

    1995-01-01

    The conjoint approach to measurement and analysis is demonstrated with a test of an evaluation utilization process-model that includes two endogenous variables (predicted participation and predicted instrumental evaluation). Conjoint measurement involves having respondents rate profiles that are analogues to concepts based on cells in a factorial…

  11. Regime-based evaluation of cloudiness in CMIP5 models

    NASA Astrophysics Data System (ADS)

    Jin, Daeho; Oreopoulos, Lazaros; Lee, Dongmin

    2016-04-01

    The concept of cloud regimes (CRs) is used to develop a framework for evaluating the cloudiness of 12 models from phase 5 of the Coupled Model Intercomparison Project (CMIP5). Reference CRs come from existing global International Satellite Cloud Climatology Project (ISCCP) weather states. The evaluation is made possible by the implementation in several CMIP5 models of the ISCCP simulator, which generates in each grid cell daily joint histograms of cloud optical thickness and cloud top pressure. Model performance is assessed with several metrics: CR global cloud fraction (CF), CR relative frequency of occurrence (RFO), their product [the long-term average total cloud amount (TCA)], cross-correlations of CR RFO maps, and a metric of resemblance between model and ISCCP CRs. In terms of CR global RFO, arguably the most fundamental metric, the models perform unsatisfactorily overall, except for CRs representing thick storm clouds. Because model CR CF is internally constrained by our method, RFO discrepancies also yield substantial TCA errors. Our results support previous findings that CMIP5 models underestimate cloudiness. The multi-model mean performs well in matching observed RFO maps for many CRs, but is still not the best for this or other metrics. When overall performance across all CRs is assessed, some models, despite shortcomings, apparently outperform Moderate Resolution Imaging Spectroradiometer cloud observations when the latter are evaluated against ISCCP as if they were another model's output. Lastly, contrasting cloud simulation performance with each model's equilibrium climate sensitivity, to gain insight into whether good cloud simulation pairs with particular values of this parameter, yields no clear conclusions.
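The metric relationship described above, long-term total cloud amount as the RFO-weighted sum of per-regime cloud fractions, can be sketched minimally; the regime names and numbers below are hypothetical, not the paper's values.

```python
# Hypothetical cloud regimes: (relative frequency of occurrence, cloud fraction)
cloud_regimes = {
    "thick_storm":   (0.05, 0.95),
    "stratocumulus": (0.20, 0.70),
    "shallow_cu":    (0.45, 0.30),
    "clear_ish":     (0.30, 0.10),
}

# RFOs must sum to 1 over the regime set
assert abs(sum(rfo for rfo, _ in cloud_regimes.values()) - 1.0) < 1e-9

# Total cloud amount (TCA) = sum over regimes of RFO * CF
tca = sum(rfo * cf for rfo, cf in cloud_regimes.values())
print(round(tca, 3))
```

Because TCA couples the two metrics multiplicatively, an RFO error in any regime propagates directly into a TCA error even when the per-regime CFs are fixed, which is the mechanism the abstract points to.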

  12. An evaluation of recent internal field models. [of earth magnetism

    NASA Technical Reports Server (NTRS)

    Mead, G. D.

    1979-01-01

    The paper reviews the current status of internal field models and evaluates several recently published models by comparing their predictions with annual means of the magnetic field measured at 140 magnetic observatories from 1973 to 1977. Three of the four models studied, viz. AWC/75, IGS/75, and Pogo 8/71, were nearly equal in their ability to predict the magnitude and direction of the current field. The fourth model, IGRF 1975, was significantly poorer in its ability to predict the current field. All models seemed to be able to extrapolate predictions quite well several years outside the data range used to construct the models.

  13. Evaluation of one dimensional analytical models for vegetation canopies

    NASA Technical Reports Server (NTRS)

    Goel, Narendra S.; Kuusk, Andres

    1992-01-01

    The SAIL model for one-dimensional homogeneous vegetation canopies has been modified to include the specular reflectance and hot spot effects. This modified model and the Nilson-Kuusk model are evaluated by comparing the reflectances given by them against those given by a radiosity-based computer model, Diana, for a set of canopies, characterized by different leaf area index (LAI) and leaf angle distribution (LAD). It is shown that for homogeneous canopies, the analytical models are generally quite accurate in the visible region, but not in the infrared region. For architecturally realistic heterogeneous canopies of the type found in nature, these models fall short. These shortcomings are quantified.

  14. Development, Evaluation, and Design Applications of an AMTEC Converter Model

    NASA Astrophysics Data System (ADS)

    Spence, Cliff A.; Schuller, Michael; Lalk, Tom R.

    2003-01-01

    Issues associated with the development of an alkali metal thermal-to-electric conversion (AMTEC) converter model that serves as an effective design tool were investigated. The requirements and performance prediction equations for the model were evaluated, and a modeling methodology was established. By defining the model's requirements and equations and establishing this methodology, it was determined that Thermal Desktop, a recently improved finite-difference software package, could be used to develop a model that serves as an effective design tool. Implementing the methodology within Thermal Desktop provides stability, high resolution, modular construction, easy-to-use interfaces, and modeling flexibility.

  15. An Evaluation of Unsaturated Flow Models in an Arid Climate

    SciTech Connect

    Dixon, J.

    1999-12-01

    The objective of this study was to evaluate the effectiveness of two unsaturated flow models in arid regions. The area selected for the study was the Area 5 Radioactive Waste Management Site (RWMS) at the Nevada Test Site in Nye County, Nevada. The two models selected for this evaluation were HYDRUS-1D [Simunek et al., 1998] and the SHAW model [Flerchinger and Saxton, 1989]. Approximately 5 years of soil-water and atmospheric data collected from an instrumented weighing lysimeter site near the RWMS were used for building the models with actual initial and boundary conditions representative of the site. Physical processes affecting the site and model performance were explored. Model performance was based on a detailed sensitivity analysis and ultimately on storage comparisons. During the process of developing descriptive model input, procedures for converting hydraulic parameters for each model were explored. In addition, the compilation of atmospheric data collected at the site became a useful tool for developing predictive functions for future studies. The final model results were used to evaluate the capacities of the HYDRUS and SHAW models for predicting soil-moisture movement and variable surface phenomena for bare soil conditions in the arid vadose zone. The development of calibrated models along with the atmospheric and soil data collected at the site provide useful information for predicting future site performance at the RWMS.

  16. Putting Theory-Oriented Evaluation into Practice: A Logic Model Approach for Evaluating SIMGAME

    ERIC Educational Resources Information Center

    Hense, Jan; Kriz, Willy Christian; Wolfe, Joseph

    2009-01-01

    Evaluations of gaming simulations and business games as teaching devices are typically end-state driven. This emphasis fails to detect how the simulation being evaluated does or does not bring about its desired consequences. This paper advances the use of a logic model approach, which possesses a holistic perspective that aims at including all…

  17. Evaluating Organic Aerosol Model Performance: Impact of two Embedded Assumptions

    NASA Astrophysics Data System (ADS)

    Jiang, W.; Giroux, E.; Roth, H.; Yin, D.

    2004-05-01

    Organic aerosols are important due to their abundance in the polluted lower atmosphere and their impact on human health and vegetation. However, modeling organic aerosols is a very challenging task because of the complexity of aerosol composition, structure, and formation processes. Assumptions and their associated uncertainties in both models and measurement data make model performance evaluation a truly demanding job. Although some assumptions are obvious, others are hidden and embedded, and can significantly impact modeling results, possibly even changing conclusions about model performance. This paper focuses on analyzing the impact of two embedded assumptions on evaluation of organic aerosol model performance. One assumption is about the enthalpy of vaporization widely used in various secondary organic aerosol (SOA) algorithms. The other is about the conversion factor used to obtain ambient organic aerosol concentrations from measured organic carbon. These two assumptions reflect uncertainties in the model and in the ambient measurement data, respectively. For illustration purposes, various choices of the assumed values are implemented in the evaluation process for an air quality model based on CMAQ (the Community Multiscale Air Quality Model). Model simulations are conducted for the Lower Fraser Valley covering Southwest British Columbia, Canada, and Northwest Washington, United States, for a historical pollution episode in 1993. To understand the impact of the assumed enthalpy of vaporization on modeling results, its impact on instantaneous organic aerosol yields (IAY) through partitioning coefficients is analysed first. The analysis shows that utilizing different enthalpy of vaporization values causes changes in the shapes of IAY curves and in the response of SOA formation capability of reactive organic gases to temperature variations. These changes are then carried into the air quality model and cause substantial changes in the organic aerosol modeling
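The role of the assumed enthalpy of vaporization can be illustrated with the standard Clausius-Clapeyron-style temperature scaling of an absorptive partitioning coefficient Kp (Pankow-type framework). This is a generic sketch, not the study's implementation; the reference Kp and the two enthalpy values are illustrative assumptions.

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def kp_at_temperature(kp_ref, t_ref, t, dh_vap):
    """Scale a reference partitioning coefficient kp_ref (given at t_ref [K])
    to temperature t [K], using an assumed enthalpy of vaporization
    dh_vap [J/mol] (Clausius-Clapeyron-style correction)."""
    return kp_ref * (t / t_ref) * math.exp(dh_vap / R * (1.0 / t - 1.0 / t_ref))

kp_298 = 0.01  # m^3/ug at 298 K, hypothetical reference value
# Two enthalpy assumptions of the kind compared in SOA modeling studies:
for dh in (42e3, 156e3):
    print(dh, kp_at_temperature(kp_298, 298.0, 288.0, dh))
```

Cooling by 10 K raises Kp in both cases, but the high-enthalpy assumption raises it far more, which is how the embedded assumption feeds through partitioning into the instantaneous aerosol yields discussed above.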

  18. Rocky Mountain School Division No.15 Evaluation Model.

    ERIC Educational Resources Information Center

    Rocky Mountain School Div. No. 15, Rocky Mountain House (Alberta).

    This summary report presents methodologies, results, and conclusions of a two-year evaluation model implemented by an Alberta, Canada, rural school district to provide information for administrative and public decision making. An introductory chapter enumerates district goals for students and the model's objectives. Chapter 2 outlines how survey…

  19. A Model for Integrating Program Development and Evaluation.

    ERIC Educational Resources Information Center

    Brown, J. Lynne; Kiernan, Nancy Ellen

    1998-01-01

    A communication model consisting of input from target audience, program delivery, and outcomes (receivers' perception of message) was applied to an osteoporosis-prevention program for working mothers ages 21 to 45. Due to poor completion rate on evaluation instruments and failure of participants to learn key concepts, the model was used to improve…

  20. The Impact of Spatial Correlation and Incommensurability on Model Evaluation

    EPA Science Inventory

    Standard evaluations of air quality models rely heavily on a direct comparison of monitoring data matched with the model output for the grid cell containing the monitor’s location. While such techniques may be adequate for some applications, conclusions are limited by such facto...

  1. Evaluation of radiation partitioning models at Bushland, Texas

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Crop growth and soil-vegetation-atmosphere continuum energy transfer models often require estimates of net radiation components, such as photosynthetic, solar, and longwave radiation to both the canopy and soil. We evaluated the 1998 radiation partitioning model of Campbell and Norman, herein referr...

  2. Faculty Performance Evaluation: The CIPP-SAPS Model.

    ERIC Educational Resources Information Center

    Mitcham, Maralynne

    1981-01-01

    The issues of faculty performance evaluation for allied health professionals are addressed. Daniel Stufflebeam's CIPP (context-input-process-product) model is introduced and its development into a CIPP-SAPS (self-administrative-peer-student) model is pursued. (Author/CT)

  3. An Alternative Model for the Evaluation of Change. Technical Report.

    ERIC Educational Resources Information Center

    Corder-Bolz, Charles R.

    Previous research has indicated that most mathematical models used to evaluate change due to experimental treatment are misleading because the procedures artificially reduced one of the estimates of error variance. Two modified models, based upon the expected values of the variance of scores and difference scores, were developed from a new…

  4. Solid rocket booster performance evaluation model. Volume 2: Users manual

    NASA Technical Reports Server (NTRS)

    1974-01-01

    This users manual for the solid rocket booster performance evaluation model (SRB-II) contains descriptions of the model, the program options, the required program inputs, the program output format and the program error messages. SRB-II is written in FORTRAN and is operational on both the IBM 370/155 and the MSFC UNIVAC 1108 computers.

  5. Modeling nuisance variables for phenotypic evaluation of bull fertility

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This research determined which (available) nuisance variables should be included in a model for phenotypic evaluation of US service sire conception rate (CR), based on DHI data. Models were compared by splitting data into records for estimation (n=3,613,907) and set-aside data (n=2,025,884), computi...

  6. Model C Is Feasible for ESEA Title I Evaluation.

    ERIC Educational Resources Information Center

    Echternacht, Gary

    The assertion that Model C is feasible for Elementary Secondary Education Act Title I evaluation, why it is feasible, and reasons why it is so seldom used are explained. Two assumptions must be made to use the special regression model. First, a strict cut-off must be used on the pretest to assign students to Title I and comparison groups. Second,…

  7. A Constructivist Model for Evaluating Postgraduate Supervision: A Case Study

    ERIC Educational Resources Information Center

    Zuber-Skerritt, Ortrun; Roche, Val

    2004-01-01

    This paper presents a new constructivist model of knowledge development in a case study that illustrates how a group of postgraduate students defined and evaluated effective postgraduate supervision. This new model is based on "personal construct theory" and "repertory grid technology" which is combined with interviews and group discussion. It is…

  8. An Evaluation Model for Competency Based Teacher Preparatory Programs.

    ERIC Educational Resources Information Center

    Denton, Jon J.

    This discussion describes an evaluation model designed to complement a curriculum development project, the primary goal of which is to structure a performance based program for preservice teachers. Data collected from the implementation of this four-phase model can be used to make decisions for developing and changing performance objectives and…

  9. FOLIAR WASHOFF OF PESTICIDES (FWOP) MODEL: DEVELOPMENT AND EVALUATION

    EPA Science Inventory

    The Foliar Washoff of Pesticides (FWOP) Model was developed to provide an empirical simulation of pesticide washoff from plant leaf surfaces as influenced by rainfall amount. To evaluate the technique, simulations by the FWOP Model were compared to those by the foliar washoff alg...

  10. NEW CATEGORICAL METRICS FOR AIR QUALITY MODEL EVALUATION

    EPA Science Inventory

    Traditional categorical metrics used in model evaluations are "clear-cut" measures in that the model's ability to predict an exceedance is defined by a fixed threshold concentration and the metrics are defined by observation-forecast sets that are paired both in space and time. T...

  11. Groundwater modeling in RCRA assessment, corrective action design and evaluation

    SciTech Connect

    Rybak, I.; Henley, W.

    1995-12-31

    Groundwater modeling was conducted to design, implement, modify, and terminate corrective action at several RCRA sites in EPA Region 4. Groundwater flow, contaminant transport and unsaturated zone air flow models were used depending on the complexity of the site and the corrective action objectives. Software used included Modflow, Modpath, Quickflow, Bioplume 2, and AIR3D. Site assessment data, such as aquifer properties, site description, and surface water characteristics for each facility were used in constructing the models and designing the remedial systems. Modeling, in turn, specified additional site assessment data requirements for the remedial system design. The specific purpose of computer modeling is discussed with several case studies. These consist, among others, of the following: evaluation of the mechanism of the aquifer system and selection of a cost effective remedial option, evaluation of the capture zone of a pumping system, prediction of the system performance for different and difficult hydrogeologic settings, evaluation of the system performance, and trouble-shooting for the remedial system operation. Modeling is presented as a useful tool for corrective action system design, performance, evaluation, and trouble-shooting. The case studies exemplified the integration of diverse data sources, understanding the mechanism of the aquifer system, and evaluation of the performance of alternative remediation systems in a cost-effective manner. Pollutants of concern include metals and PAHs.

  12. The Pantex Process model: Formulations of the evaluation planning module

    SciTech Connect

    JONES,DEAN A.; LAWTON,CRAIG R.; LIST,GEORGE FISHER; TURNQUIST,MARK ALAN

    1999-12-01

    This paper describes formulations of the Evaluation Planning Module that have been developed since its inception. This module is one of the core algorithms in the Pantex Process Model, a computerized model to support production planning in a complex manufacturing system at the Pantex Plant, a US Department of Energy facility. Pantex is responsible for three major DOE programs -- nuclear weapons disposal, stockpile evaluation, and stockpile maintenance -- using shared facilities, technicians, and equipment. The model reflects the interactions of scheduling constraints, material flow constraints, and the availability of required technicians and facilities.

  13. Effects of distributed database modeling on evaluation of transaction rollbacks

    NASA Technical Reports Server (NTRS)

    Mukkamala, Ravi

    1991-01-01

    Data distribution, degree of data replication, and transaction access patterns are key factors in determining the performance of distributed database systems. In order to simplify the evaluation of performance measures, database designers and researchers tend to make simplistic assumptions about the system. The effect of modeling assumptions is studied for the evaluation of one such measure, the number of transaction rollbacks, in a partitioned distributed database system. Six probabilistic models are developed, with expressions for the number of rollbacks under each model. Essentially, the models differ in terms of the available system information. The analytical results so obtained are compared to results from simulation. It is concluded that most of the probabilistic models yield overly conservative estimates of the number of rollbacks. The effect of transaction commutativity on system throughput is also grossly undermined when such models are employed.
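As a toy illustration of the modeling question (a deliberate simplification of my own, not one of the paper's six models), compare an analytic rollback estimate under an independence assumption with a small Monte Carlo simulation:

```python
import random

def expected_rollbacks(n_txn, p_conflict):
    """Analytic estimate: each transaction independently conflicts (and is
    rolled back) with probability p_conflict."""
    return n_txn * p_conflict

def simulated_rollbacks(n_txn, p_conflict, trials=2000, seed=42):
    """Monte Carlo counterpart of the same toy model."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        total += sum(rng.random() < p_conflict for _ in range(n_txn))
    return total / trials

analytic = expected_rollbacks(100, 0.05)    # 100 * 0.05 = 5.0
empirical = simulated_rollbacks(100, 0.05)
print(analytic, round(empirical, 2))
```

The paper's point is precisely that richer system information (replication, access patterns) changes such estimates: models that ignore it, like this independence toy, can be systematically conservative.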

  14. Effects of distributed database modeling on evaluation of transaction rollbacks

    NASA Technical Reports Server (NTRS)

    Mukkamala, Ravi

    1991-01-01

    Data distribution, degree of data replication, and transaction access patterns are key factors in determining the performance of distributed database systems. In order to simplify the evaluation of performance measures, database designers and researchers tend to make simplistic assumptions about the system. Here, researchers investigate the effect of modeling assumptions on the evaluation of one such measure, the number of transaction rollbacks in a partitioned distributed database system. The researchers developed six probabilistic models and expressions for the number of rollbacks under each of these models. Essentially, the models differ in terms of the available system information. The analytical results obtained are compared to results from simulation. It was concluded that most of the probabilistic models yield overly conservative estimates of the number of rollbacks. The effect of transaction commutativity on system throughput is also grossly undermined when such models are employed.

  15. Evaluating the accuracy of diffusion MRI models in white matter.

    PubMed

    Rokem, Ariel; Yeatman, Jason D; Pestilli, Franco; Kay, Kendrick N; Mezer, Aviv; van der Walt, Stefan; Wandell, Brian A

    2015-01-01

    Models of diffusion MRI within a voxel are useful for making inferences about the properties of the tissue and inferring fiber orientation distribution used by tractography algorithms. A useful model must fit the data accurately. However, evaluations of model-accuracy of commonly used models have not been published before. Here, we evaluate model-accuracy of the two main classes of diffusion MRI models. The diffusion tensor model (DTM) summarizes diffusion as a 3-dimensional Gaussian distribution. Sparse fascicle models (SFM) summarize the signal as a sum of signals originating from a collection of fascicles oriented in different directions. We use cross-validation to assess model-accuracy at different gradient amplitudes (b-values) throughout the white matter. Specifically, we fit each model to all the white matter voxels in one data set and then use the model to predict a second, independent data set. This is the first evaluation of model-accuracy of these models. In most of the white matter the DTM predicts the data more accurately than test-retest reliability; SFM model-accuracy is higher than test-retest reliability and also higher than the DTM model-accuracy, particularly for measurements with (a) a b-value above 1000 in locations containing fiber crossings, and (b) in the regions of the brain surrounding the optic radiations. The SFM also has better parameter-validity: it more accurately estimates the fiber orientation distribution function (fODF) in each voxel, which is useful for fiber tracking. PMID:25879933
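The cross-validation logic described above can be shown schematically with toy numbers (not diffusion data): fit a stand-in model to one "scan", predict a repeated "scan", and compare the prediction error with the test-retest error floor.

```python
import math

def rmse(a, b):
    """Root-mean-square error between two equal-length sequences."""
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

# Two repeated "measurements" of the same voxel signals (invented values).
bvals = [0.0, 0.5, 1.0, 1.5, 2.0]
scan1 = [0.90, 0.72, 0.55, 0.41, 0.33]
scan2 = [0.88, 0.75, 0.52, 0.43, 0.31]

# Deliberately simple one-parameter model (exponential decay fitted to scan1
# by grid search) standing in for fitting DTM/SFM parameters.
best_d = min((d / 100 for d in range(1, 200)),
             key=lambda d: rmse([math.exp(-d * b) for b in bvals], scan1))
prediction = [math.exp(-best_d * b) for b in bvals]

model_err = rmse(prediction, scan2)   # cross-validated model error
retest_err = rmse(scan1, scan2)       # test-retest reliability floor
print(round(model_err, 3), round(retest_err, 3))
```

A model whose cross-validated error approaches the test-retest floor is extracting essentially all the reliable signal; in this toy case the crude one-parameter model stays above the floor, analogous to the DTM/SFM comparisons in the abstract.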

  16. Evaluating the Accuracy of Diffusion MRI Models in White Matter

    PubMed Central

    Rokem, Ariel; Yeatman, Jason D.; Pestilli, Franco; Kay, Kendrick N.; Mezer, Aviv; van der Walt, Stefan; Wandell, Brian A.

    2015-01-01

    Models of diffusion MRI within a voxel are useful for making inferences about the properties of the tissue and inferring fiber orientation distribution used by tractography algorithms. A useful model must fit the data accurately. However, evaluations of model-accuracy of commonly used models have not been published before. Here, we evaluate model-accuracy of the two main classes of diffusion MRI models. The diffusion tensor model (DTM) summarizes diffusion as a 3-dimensional Gaussian distribution. Sparse fascicle models (SFM) summarize the signal as a sum of signals originating from a collection of fascicles oriented in different directions. We use cross-validation to assess model-accuracy at different gradient amplitudes (b-values) throughout the white matter. Specifically, we fit each model to all the white matter voxels in one data set and then use the model to predict a second, independent data set. This is the first evaluation of model-accuracy of these models. In most of the white matter the DTM predicts the data more accurately than test-retest reliability; SFM model-accuracy is higher than test-retest reliability and also higher than the DTM model-accuracy, particularly for measurements with (a) a b-value above 1000 in locations containing fiber crossings, and (b) in the regions of the brain surrounding the optic radiations. The SFM also has better parameter-validity: it more accurately estimates the fiber orientation distribution function (fODF) in each voxel, which is useful for fiber tracking. PMID:25879933

  17. Evaluation of dense-gas simulation models. Final report

    SciTech Connect

    Zapert, J.G.; Londergan, R.J.; Thistle, H.

    1991-05-01

    The report describes the approach and presents the results of an evaluation study of seven dense-gas simulation models using data from three experimental programs. The models evaluated are two in the public domain (DEGADIS and SLAB) and five that are proprietary (AIRTOX, CHARM, FOCUS, SAFEMODE, and TRACE). The databases used in the evaluation are the Desert Tortoise Pressurized Ammonia Releases, the Burro Liquefied Natural Gas Spill Tests, and the Goldfish Anhydrous Hydrofluoric Acid Spill Experiments. A uniform set of performance statistics is calculated and tabulated to compare maximum observed concentrations and cloud half-width to those predicted by each model. None of the models demonstrated good performance consistently for all three experimental programs.
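Uniform performance statistics of the kind used in dense-gas model evaluations typically include fractional bias (FB) and normalized mean square error (NMSE) between observed and predicted concentrations. The sketch below uses these two common statistics with invented concentration pairs; the report's exact statistics may differ.

```python
def fractional_bias(obs, pred):
    """FB = 2*(mean_obs - mean_pred) / (mean_obs + mean_pred); 0 is unbiased."""
    mo = sum(obs) / len(obs)
    mp = sum(pred) / len(pred)
    return 2.0 * (mo - mp) / (mo + mp)

def nmse(obs, pred):
    """Normalized mean square error; 0 is a perfect match."""
    mo = sum(obs) / len(obs)
    mp = sum(pred) / len(pred)
    return sum((o - p) ** 2 for o, p in zip(obs, pred)) / (len(obs) * mo * mp)

# Hypothetical maximum concentrations (ppm) over four trials.
observed  = [120.0, 85.0, 40.0, 15.0]
predicted = [100.0, 95.0, 30.0, 20.0]

print(round(fractional_bias(observed, predicted), 3),
      round(nmse(observed, predicted), 3))
```

Tabulating such statistics per model and per experimental program is what makes a statement like "none of the models performed well consistently" checkable.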

  18. How Do You Evaluate Everyone Who Isn't a Teacher? An Adaptable Evaluation Model for Professional Support Personnel.

    ERIC Educational Resources Information Center

    Stronge, James H.; And Others

    The evaluation of professional support personnel in the schools has been a neglected area in educational evaluation. The Center for Research on Educational Accountability and Teacher Evaluation (CREATE) has worked to develop a conceptually sound evaluation model and then to translate the model into practical evaluation procedures that facilitate…

  19. Evaluation of advanced geopotential models for operational orbit determination

    NASA Technical Reports Server (NTRS)

    Radomski, M. S.; Davis, B. E.; Samii, M. V.; Engel, C. J.; Doll, C. E.

    1988-01-01

    To meet future orbit determination accuracy requirements for different NASA projects, analyses are performed using Tracking and Data Relay Satellite System (TDRSS) tracking measurements and orbit determination improvements in areas such as the modeling of the Earth's gravitational field. Current operational requirements are satisfied using the Goddard Earth Model-9 (GEM-9) geopotential model with the harmonic expansion truncated at order and degree 21 (21-by-21). This study evaluates the performance of 36-by-36 geopotential models, such as the GEM-10B and Preliminary Goddard Solution-3117 (PGS-3117) models. The Earth Radiation Budget Satellite (ERBS) and LANDSAT-5 are the spacecraft considered in this study.

  20. Human Thermal Model Evaluation Using the JSC Human Thermal Database

    NASA Technical Reports Server (NTRS)

    Bue, Grant; Makinen, Janice; Cognata, Thomas

    2012-01-01

    Human thermal modeling has considerable long-term utility to human space flight. Such models provide a tool to predict crew survivability in support of vehicle design and to evaluate crew response in untested space environments. Any such model benefits not only from relevant experimental data against which to correlate it, but also from an experimental standard or benchmark for future development, maintained in a readily and rapidly searchable, software-accessible format. The human thermal database project is intended to do just that: to collect relevant data from literature and experimentation and to store the data in a database structure for immediate and future use as a benchmark against which to judge human thermal models, in identifying model strengths and weaknesses, to support model development and improve correlation, and to statistically quantify a model's predictive quality. The human thermal database developed at the Johnson Space Center (JSC) is intended to evaluate a set of widely used human thermal models. This set includes the Wissler human thermal model, a model that has been widely used to predict the human thermoregulatory response to a variety of cold and hot environments. These models are statistically compared to the current database, which contains experiments on human subjects primarily in air, drawn from a literature survey ranging between 1953 and 2004 and from a suited experiment recently performed by the authors, for a quantitative study of the relative strength and predictive quality of the models.

  1. Evaluating Vocational Educators' Training Programs: A Kirkpatrick-Inspired Evaluation Model

    ERIC Educational Resources Information Center

    Ravicchio, Fabrizio; Trentin, Guglielmo

    2015-01-01

    The aim of the article is to describe the assessment model adopted by the SCINTILLA Project, a project in Italy aimed at the online vocational training of young, seriously-disabled subjects and their subsequent work inclusion in smart-work mode. It will thus describe the model worked out for evaluation of the training program conceived for the…

  2. New model framework and structure and the commonality evaluation model. [concerning unmanned spacecraft projects

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The development of a framework and structure for shuttle era unmanned spacecraft projects and the development of a commonality evaluation model is documented. The methodology developed for model utilization in performing cost trades and comparative evaluations for commonality studies is discussed. The model framework consists of categories of activities associated with the spacecraft system's development process. The model structure describes the physical elements to be treated as separate identifiable entities. Cost estimating relationships for subsystem and program-level components were calculated.

  3. Evaluating performances of simplified physically based landslide susceptibility models.

    NASA Astrophysics Data System (ADS)

    Capparelli, Giovanna; Formetta, Giuseppe; Versace, Pasquale

    2015-04-01

    Rainfall-induced shallow landslides cause significant damage, including loss of life and property. Predicting locations susceptible to shallow landslides is a complex task that involves many disciplines: hydrology, geotechnical science, geomorphology, and statistics. Usually one of two main approaches is used to accomplish this task: statistical or physically based modeling. This paper presents a package of GIS-based models for landslide susceptibility analysis, integrated into the NewAge-JGrass hydrological model using the Object Modeling System (OMS) modeling framework. The package includes three simplified physically based models for landslide susceptibility analysis (M1, M2, and M3) and a component for model verification. It computes eight goodness-of-fit (GOF) indices by comparing model results and measurement data pixel by pixel. Moreover, the package's integration in NewAge-JGrass allows the use of other components, such as geographic information system tools to manage input-output processes and automatic calibration algorithms to estimate model parameters. The system offers the possibility to investigate and fairly compare the quality and robustness of the models and model parameters, according to a procedure that includes: i) model parameter estimation by optimizing each GOF index separately, ii) model evaluation in the ROC plane using each optimal parameter set, and iii) GOF robustness evaluation by assessing sensitivity to input parameter variation. This procedure was repeated for all three models. The system was applied to a case study in Calabria (Italy) along the Salerno-Reggio Calabria highway, between Cosenza and the Altilia municipality. The analysis showed that, among all the optimized indices and all three models, Average Index (AI) optimization coupled with model M3 is the best modeling solution for our test case. This research was funded by PON Project No. 01_01503 "Integrated Systems for Hydrogeological Risk
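A loose sketch of the pixel-by-pixel verification idea (not the package's actual component): build a confusion matrix of a binary susceptibility map against observed landslide pixels and derive a couple of GOF indices plus the ROC-plane coordinates. All data below are invented.

```python
def confusion(pred, obs):
    """Confusion-matrix counts (tp, fp, fn, tn) for binary pixel maps."""
    tp = sum(1 for p, o in zip(pred, obs) if p and o)
    fp = sum(1 for p, o in zip(pred, obs) if p and not o)
    fn = sum(1 for p, o in zip(pred, obs) if not p and o)
    tn = sum(1 for p, o in zip(pred, obs) if not p and not o)
    return tp, fp, fn, tn

predicted = [1, 1, 0, 1, 0, 0, 1, 0]   # model: unstable (1) / stable (0)
observed  = [1, 0, 0, 1, 0, 1, 1, 0]   # mapped landslide pixels

tp, fp, fn, tn = confusion(predicted, observed)
accuracy = (tp + tn) / len(observed)
tpr = tp / (tp + fn)        # true positive rate (ROC y-axis)
fpr = fp / (fp + tn)        # false positive rate (ROC x-axis)
csi = tp / (tp + fp + fn)   # critical success index, one common GOF index
print(accuracy, tpr, fpr, csi)
```

Each parameter set yields one (FPR, TPR) point, so optimizing different GOF indices and plotting the resulting points in the ROC plane is exactly the comparison procedure the abstract describes.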

  4. Study on Turbulent Modeling in Gas Entrainment Evaluation Method

    NASA Astrophysics Data System (ADS)

    Ito, Kei; Ohshima, Hiroyuki; Nakamine, Yoshiaki; Imai, Yasutomo

    Suppression of gas entrainment (GE) phenomena caused by free surface vortices is very important to establish an economically superior design of the sodium-cooled fast reactor in Japan (JSFR). However, due to the non-linearity and/or locality of the GE phenomena, it is not easy to evaluate the occurrence of the GE phenomena accurately. In other words, the onset condition of the GE phenomena in the JSFR is not easily predicted based on scaled-model and/or partial-model experiments. Therefore, the authors are developing a CFD-based evaluation method in which the non-linearity and locality of the GE phenomena can be considered. In the evaluation method, macroscopic vortex parameters, e.g. circulation, are determined by three-dimensional CFD, and then GE-related parameters, e.g. gas core (GC) length, are calculated using the Burgers vortex model. This procedure is efficient for evaluating the GE phenomena in the JSFR. However, it is well known that the Burgers vortex model tends to overestimate the GC length due to its lack of consideration of some physical mechanisms. Therefore, in this study, the authors develop a turbulent vortex model to evaluate the GE phenomena more accurately. The improved GE evaluation method with the turbulent viscosity model is then validated by analyzing the GC lengths observed in a simple experiment. The evaluation results show that the GC lengths analyzed by the improved method are shorter than those from the original method and give better agreement with the experimental data.

  5. Evaluation of six ionospheric models as predictors of TEC

    SciTech Connect

    Brown, L.D.; Daniell, R.E.; Fox, M.W.; Klobuchar, J.A.; Doherty, P.H.

    1990-05-03

    The authors have gathered TEC data from a wide range of latitudes and longitudes for a complete range of solar activity. These data were used to evaluate the performance of six ionospheric models as predictors of Total Electron Content (TEC). The TEC parameter is important in correcting modern DOD space systems, which propagate radio signals from the earth to satellites, for the time-delay effects of the ionosphere. The TEC data were obtained from polarimeter receivers located in North America, the Pacific, and the East Coast of Asia. The ionospheric models evaluated are: (1) the International Reference Ionosphere (IRI); (2) the Bent model; (3) the Ionospheric Conductivity and Electron Density (ICED) model; (4) the Penn State model; (5) the Fully Analytic Ionospheric Model (FAIM, a modification of the Chiu model); and (6) the Damen-Hartranft model. They will present extensive comparisons between monthly mean TEC at all local times and model TEC obtained by integrating electron density profiles produced by the six models. These comparisons demonstrate that even though most of the models do very well at representing f0F2, none of them do very well with TEC, probably because of inaccurate representation of the topside scale height. They suggest that one approach to obtaining better representations of TEC is the use of f0F2 from coefficients coupled with a new slab thickness developed at Boston University.
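The time-delay correction mentioned above follows from the standard first-order relation between TEC and excess group path, delay ≈ 40.3·TEC/f² metres. A small sketch (the frequency and TEC values below are illustrative, not from the study):

```python
C = 299_792_458.0  # speed of light, m/s

def iono_delay_m(tec_el_per_m2, freq_hz):
    """First-order excess group path length (m) caused by the ionosphere."""
    return 40.3 * tec_el_per_m2 / freq_hz**2

def iono_delay_s(tec_el_per_m2, freq_hz):
    """The same delay expressed in seconds."""
    return iono_delay_m(tec_el_per_m2, freq_hz) / C

# One TEC unit (1e16 el/m^2) at the GPS L1 frequency (1575.42 MHz):
d = iono_delay_m(1e16, 1575.42e6)
print(round(d, 3), "m")  # ≈ 0.162 m of excess path per TECU
```

The inverse-square dependence on frequency is why TEC errors matter more for lower-frequency systems.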

  6. Obs4MIPS: Satellite Observations for Model Evaluation

    NASA Astrophysics Data System (ADS)

    Ferraro, R.; Waliser, D. E.; Gleckler, P. J.

    2015-12-01

    This poster will review the current status of the obs4MIPs project, whose purpose is to provide a limited collection of well-established and documented datasets for comparison with Earth system models (https://www.earthsystemcog.org/projects/obs4mips/). These datasets have been reformatted to correspond with the CMIP5 model output requirements, and include technical documentation specifically targeted for their use in model output evaluation. There are currently over 50 datasets containing observations that directly correspond to CMIP5 model output variables. We will review recent additions to the obs4MIPs collection, and provide updated download statistics. We will also provide an update on changes to submission and documentation guidelines, the work of the WCRP Data Advisory Council (WDAC) Observations for Model Evaluation Task Team, and engagement with the CMIP6 MIP experiments.

  7. Evaluation of potential crushed-salt constitutive models

    SciTech Connect

    Callahan, G.D.; Loken, M.C.; Sambeek, L.L. Van; Chen, R.; Pfeifle, T.W.; Nieland, J.D.

    1995-12-01

    Constitutive models describing the deformation of crushed salt are presented in this report. Ten constitutive models with potential to describe the phenomenological and micromechanical processes for crushed salt were selected from a literature search. Three of these ten constitutive models, termed the Sjaardema-Krieg, Zeuch, and Spiers models, were adopted as candidate constitutive models. The candidate constitutive models were generalized in a consistent manner to three-dimensional states of stress and modified to include the effects of temperature, grain size, and moisture content. A database including hydrostatic consolidation and shear consolidation tests conducted on Waste Isolation Pilot Plant and southeastern New Mexico salt was used to determine material parameters for the candidate constitutive models. Nonlinear least-squares model fitting to data from the hydrostatic consolidation tests, the shear consolidation tests, and a combination of the shear and hydrostatic tests produces three sets of material parameter values for the candidate models. The change in material parameter values from test group to test group indicates the empirical nature of the models. To evaluate the predictive capability of the candidate models, each parameter value set was used to predict each of the tests in the database. Based on the fitting statistics and the ability of the models to predict the test data, the Spiers model appeared to perform slightly better than the other two candidate models. The work reported here is a first-of-its-kind evaluation of constitutive models for reconsolidation of crushed salt. Questions remain to be answered. Deficiencies in models and databases are identified and recommendations for future work are made. 85 refs.
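The fitting procedure described above, determining material parameters per test group by nonlinear least squares, can be sketched as follows. The exponential consolidation law here is a toy stand-in; the real candidate models (Sjaardema-Krieg, Zeuch, Spiers) are far more elaborate:

```python
import numpy as np
from scipy.optimize import least_squares

def predicted_density(params, t):
    """Toy consolidation law: density approaches a final value rho_f
    at rate k. Purely illustrative, not one of the candidate models."""
    rho_f, k = params
    return rho_f * (1.0 - np.exp(-k * t))

def fit_model(t, rho_obs, x0=(2.0, 0.5)):
    """Nonlinear least-squares fit of material parameters to one test group."""
    res = least_squares(lambda p: predicted_density(p, t) - rho_obs, x0)
    return res.x

t = np.array([0.5, 1.0, 2.0, 4.0])
rho = predicted_density((1.8, 0.7), t)  # synthetic, noise-free "test data"
params = fit_model(t, rho)              # recovers (1.8, 0.7)
```

Cross-prediction, as in the report, would then apply the parameters fitted on one test group to the residuals of the other groups.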

  8. Neutral models as a way to evaluate the Sea Level Affecting Marshes Model (SLAMM)

    EPA Science Inventory

    A commonly used landscape model to simulate wetland change – the Sea Level Affecting Marshes Model(SLAMM) – has rarely been explicitly assessed for its prediction accuracy. Here, we evaluated this model using recently proposed neutral models – including the random constraint matc...

  9. On the evaluation of box model results: the case of BOXURB model.

    PubMed

    Paschalidou, A K; Kassomenos, P A

    2009-08-01

    In the present paper, the BOXURB model results, as they occurred in the Greater Area of Athens after model application on an hourly basis for the 10-year period 1995-2004, are evaluated both in time and space in the light of observed pollutant concentration time series from 17 monitoring stations. The evaluation is performed at total, monthly, daily and hourly scales. The analysis also includes evaluation of the model performance with regard to the meteorological parameters. Finally, the model is evaluated as an air quality forecasting and urban planning tool. Given the simplicity of the model and the complexity of the area topography, the model results are found to be in good agreement with the measured pollutant concentrations, especially at the heavy-traffic stations. Therefore, the model can be used for regulatory purposes by authorities for time-efficient, simple and reliable estimation of air pollution levels within city boundaries. PMID:18600462

  10. Evaluation of ADAM/1 model for advanced coal extraction concepts

    NASA Technical Reports Server (NTRS)

    Deshpande, G. K.; Gangal, M. D.

    1982-01-01

    Several existing computer programs for estimating life cycle cost of mining systems were evaluated. A commercially available program, ADAM/1 was found to be satisfactory in relation to the needs of the advanced coal extraction project. Two test cases were run to confirm the ability of the program to handle nonconventional mining equipment and procedures. The results were satisfactory. The model, therefore, is recommended to the project team for evaluation of their conceptual designs.

  11. Vestibular models for design and evaluation of flight simulator motion

    NASA Technical Reports Server (NTRS)

    Bussolari, S. R.; Sullivan, R. B.; Young, L. R.

    1986-01-01

    The use of spatial orientation models in the design and evaluation of control systems for motion-base flight simulators is investigated experimentally. The development of a high-fidelity motion drive controller using an optimal control approach based on human vestibular models is described. The formulation and implementation of the optimal washout system are discussed. The effectiveness of the motion washout system was evaluated by studying the response of six motion washout systems to the NASA/AMES Vertical Motion Simulator for a single dash-quick-stop maneuver. The effects of the motion washout system on pilot performance and simulator acceptability are examined. The data reveal that human spatial orientation models are useful for the design and evaluation of flight simulator motion fidelity.

  12. Human Thermal Model Evaluation Using the JSC Human Thermal Database

    NASA Technical Reports Server (NTRS)

    Cognata, T.; Bue, G.; Makinen, J.

    2011-01-01

    The human thermal database developed at the Johnson Space Center (JSC) is used to evaluate a set of widely used human thermal models. This database will facilitate a more accurate evaluation of human thermoregulatory response in a variety of situations, including those that might otherwise prove too dangerous for actual testing, such as extreme hot or cold splashdown conditions. This set includes the Wissler human thermal model, a model that has been widely used to predict the human thermoregulatory response to a variety of cold and hot environments. These models are statistically compared to the current database, which contains experiments on human subjects primarily in air, drawn from a literature survey spanning 1953 to 2004 and from a suited experiment recently performed by the authors, for a quantitative study of the relative strength and predictive quality of the models. Human thermal modeling has considerable long-term utility to human space flight. Such models provide a tool to predict crew survivability in support of vehicle design and to evaluate crew response in untested environments. It is to the benefit of any such model not only to collect relevant experimental data to correlate it against, but also to maintain an experimental standard or benchmark for future development in a readily and rapidly searchable, software-accessible format. The human thermal database project is intended to do just that: to collect relevant data from literature and experimentation and to store the data in a database structure for immediate and future use as a benchmark against which to judge human thermal models, to identify model strengths and weaknesses, to support model development and improve correlation, and to statistically quantify a model's predictive quality.

  13. Evaluation Model on Education Effect of Team Learning

    NASA Astrophysics Data System (ADS)

    Kobayashi, Hiroshi; Uchida, Tatsuo; Ishiyama, Jun-Ichi; Ito, Masahiko; Tanigaki, Miho; Kanno, Hiroyuki

    With accelerating globalization, people now move across the globe with increasing fluidity. Higher education systems are therefore expected to guarantee educational quality and to cope with diversifying social needs. Accordingly, colleges and universities are introducing activities that draw on their originality and character, thereby promoting educational reform. In these activities, however, participation is usually limited, or the educational effect is difficult to evaluate. In this paper, to contribute to building an appropriate evaluation model for team activities, the evaluation systems for such activities at our college (supplementary lessons, creation training, contests, etc.) are presented and assessed.

  14. Evaluation of Trapped Radiation Model Uncertainties for Spacecraft Design

    NASA Technical Reports Server (NTRS)

    Armstrong, T. W.; Colborn, B. L.

    2000-01-01

    The standard AP8 and AE8 models for predicting trapped proton and electron environments have been compared with several sets of flight data to evaluate model uncertainties. Model comparisons are made with flux, dose, and activation measurements made on various U.S. low-Earth orbit satellites (APEX, CRRES, DMSP, LDEF, NOAA) and Space Shuttle flights, on Russian satellites (Photon-8, Cosmos-1887, Cosmos-2044), and on the Russian Mir Space Station. This report gives a summary of the model-data comparisons; detailed results are given in a companion report. Results from the model comparisons with flight data show, for example, that the AP8 model underpredicts the trapped proton flux at low altitudes by a factor of about two (independent of proton energy and solar cycle conditions), and that the AE8 model overpredicts the flux in the outer electron belt by an order of magnitude or more.

  16. Ensemble-based evaluation for protein structure models

    PubMed Central

    Jamroz, Michal; Kolinski, Andrzej; Kihara, Daisuke

    2016-01-01

    Motivation: Comparing protein tertiary structures is a fundamental procedure in structural biology and protein bioinformatics. Structure comparison is particularly important for evaluating computational protein structure models. Most model structure evaluation methods perform rigid-body superimposition of a structure model onto its crystal structure and measure the difference of the corresponding residue or atom positions between them. However, these methods neglect the intrinsic flexibility of proteins by treating the native structure as a rigid molecule. Because different parts of proteins have different levels of flexibility (for example, exposed loop regions are usually more flexible than the core region of a protein structure), disagreement of a model with the native needs to be evaluated differently depending on the flexibility of residues in a protein. Results: We propose a score named FlexScore for comparing protein structures that considers the flexibility of each residue in the native state of the protein. Flexibility information may be extracted from experiments such as NMR or from molecular dynamics simulation. FlexScore considers an ensemble of conformations of a protein described as a multivariate Gaussian distribution of atomic displacements and compares a query computational model with the ensemble. We compare FlexScore with other commonly used structure similarity scores over various examples. FlexScore agrees with experts' intuitive assessment of computational models and provides information on the practical usefulness of models. Availability and implementation: https://bitbucket.org/mjamroz/flexscore Contact: dkihara@purdue.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307633
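The core idea, penalizing deviations less where the native ensemble is more flexible, can be sketched as follows. This illustrates the general approach, not the published FlexScore formulation; the diagonal variance-based weighting is an assumption:

```python
import numpy as np

def flexibility_weighted_score(model_xyz, ensemble_xyz):
    """Flexibility-aware model-vs-native deviation (lower is better).

    model_xyz:    (N, 3) model coordinates, assumed already superimposed.
    ensemble_xyz: (M, N, 3) coordinates of M native-state conformations.
    """
    mean = ensemble_xyz.mean(axis=0)              # (N, 3) ensemble mean
    var = ensemble_xyz.var(axis=0).sum(axis=1)    # (N,) per-residue variance
    dev2 = ((model_xyz - mean) ** 2).sum(axis=1)  # squared deviation per residue
    # Flexible residues (large variance) are penalized less than rigid ones.
    return float(np.mean(dev2 / (var + 1e-6)))
```

A model identical to the ensemble mean scores near zero; the same absolute deviation costs more in a rigid region than in a floppy loop.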

  17. Mathematical models and lymphatic filariasis control: monitoring and evaluating interventions.

    PubMed

    Michael, Edwin; Malecela-Lazaro, Mwele N; Maegga, Bertha T A; Fischer, Peter; Kazura, James W

    2006-11-01

    Monitoring and evaluation are crucially important to the scientific management of any mass parasite control programme. Monitoring enables the effectiveness of implemented actions to be assessed and necessary adaptations to be identified; it also determines when management objectives are achieved. Parasite transmission models can provide a scientific template for informing the optimal design of such monitoring programmes. Here, we illustrate the usefulness of using a model-based approach for monitoring and evaluating anti-parasite interventions and discuss issues that need addressing. We focus on the use of such an approach for the control and/or elimination of the vector-borne parasitic disease, lymphatic filariasis. PMID:16971182

  18. Development and evaluation of thermal model reduction algorithms for spacecraft

    NASA Astrophysics Data System (ADS)

    Deiml, Michael; Suderland, Martin; Reiss, Philipp; Czupalla, Markus

    2015-05-01

    This paper is concerned with the reduction of thermal models of spacecraft. The work presented here has been conducted in cooperation with the company OHB AG, formerly Kayser-Threde GmbH, and the Institute of Astronautics at Technische Universität München, with the goal of shortening and automating the time-consuming, manual process of thermal model reduction. The reduction of thermal models can be divided into the simplification of the geometry model, for calculation of external heat flows and radiative couplings, and the reduction of the underlying mathematical model. For the simplification, a method has been developed which approximates the reduced geometry model with the help of an optimization algorithm. Different linear and nonlinear model reduction techniques have been evaluated for their applicability to the reduction of the mathematical model. Compatibility with the thermal analysis tool ESATAN-TMS is of major concern, which restricts the useful application of these methods. Additional model reduction methods have been developed that account for these constraints. The Matrix Reduction method approximates the differential equation to reference values exactly, except for numerical errors. The summation method enables a useful, applicable reduction of thermal models that can be used in industry. In this work a framework for the reduction of thermal models has been created, which can be used together with a newly developed graphical user interface for the reduction of thermal models in industry.
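The node-summation idea can be illustrated on a lumped-parameter thermal network: grouped nodes pool their heat capacities, and couplings between groups are summed. A sketch under those assumptions, not OHB's actual implementation:

```python
import numpy as np

def reduce_thermal_model(C, G, groups):
    """Reduce a lumped-parameter thermal model by node summation.

    C: (n,) nodal heat capacities.
    G: (n, n) symmetric conductance matrix, G[i, j] = coupling between
       nodes i and j.
    groups: list of index lists; each group becomes one reduced node.
    """
    m = len(groups)
    Cr = np.array([C[g].sum() for g in groups])  # pooled capacities
    Gr = np.zeros((m, m))
    for a in range(m):
        for b in range(a + 1, m):
            # Total coupling between the two groups of detailed nodes.
            Gr[a, b] = Gr[b, a] = G[np.ix_(groups[a], groups[b])].sum()
    return Cr, Gr
```

This preserves total thermal mass and inter-group heat flow paths, which is why such a summation can remain usable for industrial analyses.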

  19. Evaluation of annual, global seismicity forecasts, including ensemble models

    NASA Astrophysics Data System (ADS)

    Taroni, Matteo; Zechar, Jeremy; Marzocchi, Warner

    2013-04-01

    In 2009, the Collaboratory for the Study of Earthquake Predictability (CSEP) initiated a prototype global earthquake forecast experiment. Three models participated in this experiment for 2009, 2010 and 2011; each model forecast the number of earthquakes above magnitude 6 in 1x1 degree cells spanning the globe. Here we use likelihood-based metrics to evaluate the consistency of the forecasts with the observed seismicity. We compare model performance with statistical tests and a new method based on the peer-to-peer gambling score. The results of the comparisons are used to build ensemble models that are a weighted combination of the individual models. Notably, in these experiments the ensemble model always performs significantly better than the single best-performing model. Our results indicate the following: i) time-varying forecasts, if not updated after each major shock, may not provide significant advantages over time-invariant models in 1-year forecast experiments; ii) the spatial distribution seems to be the most important feature in characterizing the different forecasting performances of the models; iii) the interpretation of consistency tests may be misleading, because some good models may be rejected while trivial models pass; iv) proper ensemble modeling seems to be a valuable procedure for obtaining the best-performing model for practical purposes.
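A score-weighted ensemble of grid-cell rate forecasts can be sketched as follows, assuming Poisson-distributed cell counts; the likelihood-proportional weighting is illustrative, not CSEP's exact scheme:

```python
import math
import numpy as np

def poisson_loglik(rates, counts):
    """Joint log-likelihood of observed counts under independent Poisson cells."""
    rates = np.clip(rates, 1e-10, None)  # guard against empty forecast cells
    return float(np.sum(counts * np.log(rates) - rates
                        - [math.lgamma(k + 1) for k in counts]))

def ensemble_forecast(model_rates, past_counts):
    """Weight each model by its likelihood on past observations and return
    the weights plus the weighted-average rate forecast."""
    ll = np.array([poisson_loglik(r, past_counts) for r in model_rates])
    w = np.exp(ll - ll.max())  # subtract max for numerical stability
    w /= w.sum()
    return w, sum(wi * ri for wi, ri in zip(w, model_rates))

m1 = np.array([1.0, 2.0])   # model A: rates per cell
m2 = np.array([5.0, 5.0])   # model B: rates per cell
w, rate = ensemble_forecast([m1, m2], past_counts=np.array([1, 2]))
```

Here model A, which matches the observed counts better, dominates the ensemble; with more models and more data the weights adapt accordingly.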

  20. New performance evaluation models for character detection in images

    NASA Astrophysics Data System (ADS)

    Wang, YanWei; Ding, XiaoQing; Liu, ChangSong; Wang, Kongqiao

    2010-02-01

    Detecting character regions is meaningful research for both highlighting regions of interest and for recognition in further information processing. Much research has been performed on character localization and extraction, creating a great need for performance evaluation schemes to inspect detection algorithms. In this paper, two probability models are established to accomplish evaluation tasks for different applications. For highlighting regions of interest, a Gaussian probability model, which simulates the low-pass Gaussian filtering property of the human visual system (HVS), was constructed to allocate different weights to different character parts. It shows the greatest potential to describe detector performance, especially when the detection result is an incomplete character, where other methods cannot work effectively. For recognition, we also introduce a weighted probability model that appropriately describes the contribution of detection results to final recognition results. The validity of the performance evaluation models proposed in this paper is demonstrated by experiments on web images and natural scene images. These models may also be applicable to evaluating algorithms that locate other objects, such as faces, and wider experiments are needed to examine this assumption.
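The Gaussian weighting idea, giving partial credit for partially detected characters according to an HVS-like low-pass weight map, can be sketched as follows; the parameters and scoring form are illustrative, not the paper's exact model:

```python
import numpy as np

def gaussian_weight_map(h, w, cy, cx, sigma):
    """Weight map peaked at a character's centre, echoing the low-pass
    Gaussian behaviour of the human visual system described above."""
    yy, xx = np.mgrid[0:h, 0:w]
    return np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))

def detection_score(weight_map, detected_mask):
    """Fraction of a character's perceptual weight covered by the detector;
    an incomplete detection earns proportional partial credit."""
    return float((weight_map * detected_mask).sum() / weight_map.sum())
```

Unlike strict box-overlap metrics, this score degrades smoothly as a detector misses more of the perceptually salient centre of a character.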

  1. Progressive evaluation of incorporating information into a model building process

    NASA Astrophysics Data System (ADS)

    Gharari, Shervan; Hrachowitz, Markus; Fenicia, Fabrizio; Gao, Hongkai; Gupta, Hoshin; Savenije, Huub

    2014-05-01

    Catchments are open systems, meaning that the exact boundary conditions of the real system cannot be determined spatially or temporally. Models are therefore essential tools for capturing system behaviour spatially and extrapolating it temporally for prediction. In recent years, conceptual models have been the center of attention rather than so-called physically based models, which are often over-parameterized and encounter difficulties in up-scaling small-scale processes. Conceptual models, however, are heavily dependent on calibration, as one or more of their parameter values typically cannot be physically measured at the catchment scale. It is generally understood that increasing the complexity of a conceptual model to better represent the heterogeneity of hydrological processes typically makes parameter identification more difficult; however, the amount of information contributed by each of the model elements, control volumes (so-called buckets), interconnecting fluxes, parameterization (constitutive functions), and finally parameter values, is largely unknown. Each of these components contains information on the transformation of forcing (precipitation) into runoff, but the effect of each of them, alone and together, is not well understood. In this study we follow hierarchical steps for model building. First, the model structure is built from its building blocks (control volumes) and interconnecting fluxes; the effect of adding each control volume, and of the model architecture (the arrangement of control volumes and fluxes), can be evaluated at this level. In the second layer, the parameterization of the model is evaluated; for example, the effect of a specific type of stage-discharge relation for a control volume can be explored. Finally, in the last step of model building, the information gained from parameter values is quantified. 
In each development level the value of information which are added

  2. Evaluation of different feed intake models for dairy cows.

    PubMed

    Krizsan, S J; Sairanen, A; Höjer, A; Huhtanen, P

    2014-01-01

    The objective of the current study was to evaluate feed intake prediction models of varying complexity using individual observations of lactating cows subjected to experimental dietary treatments in periodic sequences (i.e., change-over trials). Observed or previous-period animal data were combined with the current-period feed data in the evaluations of the different feed intake prediction models. This illustrates the situation, and the amount of data available, when formulating rations for dairy cows in practice, and tests the robustness of the models when milk yield is used in feed intake predictions. The models evaluated in the current study were chosen based on the input data they require and their applicability to Nordic conditions. A data set comprising 2,161 individual observations was constructed from 24 trials conducted at research barns in Denmark, Finland, Norway, and Sweden. Prediction models were evaluated by residual analysis using mixed and simple model regression. Great variation in animal and feed factors was observed in the data set, with ranges in total dry matter intake (DMI) from 10.4 to 30.8 kg/d, forage DMI from 4.1 to 23.0 kg/d, and milk yield from 8.4 to 51.1 kg/d. The mean biases of DMI predictions for the National Research Council, the Cornell Net Carbohydrate and Protein System, the British, Finnish, and Scandinavian models were -1.71, 0.67, 2.80, 0.83, and -0.60 kg/d, with prediction errors of 2.33, 1.71, 3.19, 1.62, and 2.03 kg/d, respectively, when observed milk yield was used in the predictions. The performance of the models was ranked the same using either mixed or simple model regression analysis, but generally the random contribution to the prediction error increased with simple rather than mixed model regression analysis. The prediction error of all models was generally greater when using previous-period data compared with the observed milk yield. 
When the average milk yield over all periods was used in the predictions
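The mean bias and prediction error reported above can be computed as follows; the sign convention (predicted minus observed) and the root-mean-square form of the prediction error are assumptions:

```python
import numpy as np

def mean_bias(observed, predicted):
    """Mean bias of predictions (predicted - observed), e.g. in kg/d."""
    return float(np.mean(np.asarray(predicted) - np.asarray(observed)))

def prediction_error(observed, predicted):
    """Root mean squared prediction error, same units as the observations."""
    r = np.asarray(predicted) - np.asarray(observed)
    return float(np.sqrt(np.mean(r ** 2)))

obs = [20.0, 22.0, 18.0]   # observed DMI, kg/d (illustrative values)
pred = [19.0, 23.0, 18.5]  # model predictions, kg/d
print(mean_bias(obs, pred), round(prediction_error(obs, pred), 3))
```

A negative mean bias, as for the NRC model above, indicates systematic underprediction of intake.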

  3. Evaluating supervised topic models in the presence of OCR errors

    NASA Astrophysics Data System (ADS)

    Walker, Daniel; Ringger, Eric; Seppi, Kevin

    2013-01-01

    Supervised topic models are promising tools for text analytics that simultaneously model topical patterns in document collections and relationships between those topics and document metadata, such as timestamps. We examine empirically the effect of OCR noise on the ability of supervised topic models to produce high quality output through a series of experiments in which we evaluate three supervised topic models and a naive baseline on synthetic OCR data having various levels of degradation and on real OCR data from two different decades. The evaluation includes experiments with and without feature selection. Our results suggest that supervised topic models are no better, or at least not much better in terms of their robustness to OCR errors, than unsupervised topic models and that feature selection has the mixed result of improving topic quality while harming metadata prediction quality. For users of topic modeling methods on OCR data, supervised topic models do not yet solve the problem of finding better topics than the original unsupervised topic models.

  4. Software Platform Evaluation - Verifiable Fuel Cycle Simulation (VISION) Model

    SciTech Connect

    J. J. Jacobson; D. E. Shropshire; W. B. West

    2005-11-01

    The purpose of this Software Platform Evaluation (SPE) is to document the top-level evaluation of potential software platforms on which to construct a simulation model that satisfies the requirements for a Verifiable Fuel Cycle Simulation Model (VISION) of the Advanced Fuel Cycle (AFC). See the Software Requirements Specification for Verifiable Fuel Cycle Simulation (VISION) Model (INEEL/EXT-05-02643, Rev. 0) for a discussion of the objective and scope of the VISION model. VISION is intended to serve as a broad systems analysis and study tool applicable to work conducted as part of the AFCI (including costs estimates) and Generation IV reactor development studies. This document will serve as a guide for selecting the most appropriate software platform for VISION. This is a “living document” that will be modified over the course of the execution of this work.

  5. Forecasting in foodservice: model development, testing, and evaluation.

    PubMed

    Miller, J L; Thompson, P A; Orabella, M M

    1991-05-01

    This study was designed to develop, test, and evaluate mathematical models appropriate for forecasting menu-item production demand in foodservice. Data were collected from residence and dining hall foodservices at Ohio State University. Objectives of the study were to collect, code, and analyze the data; develop and test models using actual operation data; and compare forecasting results with current methods in use. Customer count was forecast using deseasonalized simple exponential smoothing. Menu-item demand was forecast by multiplying the count forecast by a predicted preference statistic. Forecasting models were evaluated using mean squared error, mean absolute deviation, and mean absolute percentage error techniques. All models were more accurate than current methods. A broad spectrum of forecasting techniques could be used by foodservice managers with access to a personal computer and spreadsheet and database-management software. The findings indicate that mathematical forecasting techniques may be effective in foodservice operations to control costs, increase productivity, and maximize profits. PMID:2019699
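The forecasting scheme described above (deseasonalized simple exponential smoothing for customer counts, multiplied by a preference statistic for item demand) can be sketched as follows; the smoothing constant and the multiplicative seasonal indices are assumptions, not the study's fitted values:

```python
def ses_forecast(series, seasonal_index, alpha=0.3):
    """Deseasonalized simple exponential smoothing.

    series:         observed customer counts.
    seasonal_index: multiplicative index for each observation, with the
                    index for the next (forecast) period appended last.
    alpha:          smoothing constant (illustrative).
    """
    level = series[0] / seasonal_index[0]
    for y, s in zip(series[1:], seasonal_index[1:len(series)]):
        level = alpha * (y / s) + (1 - alpha) * level  # smooth deseasonalized counts
    return level * seasonal_index[len(series)]         # reseasonalize the forecast

def menu_item_forecast(count_forecast, preference):
    """Menu-item demand = customer-count forecast x predicted preference."""
    return count_forecast * preference

f = ses_forecast([100.0, 110.0], [1.0, 1.1, 1.0])
print(f, menu_item_forecast(f, 0.25))
```

Evaluating such a model with MSE, MAD, and MAPE then only requires comparing these forecasts against the realized counts and item tallies.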

  6. Road network safety evaluation using Bayesian hierarchical joint model.

    PubMed

    Wang, Jie; Huang, Helai

    2016-05-01

    Safety and efficiency are commonly regarded as two significant performance indicators of transportation systems. In practice, road network planning has focused on road capacity and transport efficiency whereas the safety level of a road network has received little attention in the planning stage. This study develops a Bayesian hierarchical joint model for road network safety evaluation to help planners take traffic safety into account when planning a road network. The proposed model establishes relationships between road network risk and micro-level variables related to road entities and traffic volume, as well as socioeconomic, trip generation and network density variables at macro level which are generally used for long term transportation plans. In addition, network spatial correlation between intersections and their connected road segments is also considered in the model. A road network is elaborately selected in order to compare the proposed hierarchical joint model with a previous joint model and a negative binomial model. According to the results of the model comparison, the hierarchical joint model outperforms the joint model and negative binomial model in terms of the goodness-of-fit and predictive performance, which indicates the reasonableness of considering the hierarchical data structure in crash prediction and analysis. Moreover, both random effects at the TAZ level and the spatial correlation between intersections and their adjacent segments are found to be significant, supporting the employment of the hierarchical joint model as an alternative in road-network-level safety modeling as well. PMID:26945109

  7. Evaluation of Rainfall-Runoff Models for Mediterranean Subcatchments

    NASA Astrophysics Data System (ADS)

    Cilek, A.; Berberoglu, S.; Donmez, C.

    2016-06-01

    The development and application of rainfall-runoff models have been a cornerstone of hydrological research for many decades. The amount of rainfall and its intensity and variability control the generation of runoff and the erosional processes operating at different scales. These interactions can be greatly variable in Mediterranean catchments with marked hydrological fluctuations. The aim of the study was to evaluate the performance of a rainfall-runoff model for rainfall-runoff simulation in a Mediterranean subcatchment. The Pan-European Soil Erosion Risk Assessment (PESERA), a simplified hydrological process-based approach, was used in this study to combine hydrological surface runoff factors. In total, 128 input layers are required to run the model, derived from a data set that includes climate, topography, land use, crop type, planting date, and soil characteristics. Initial ground cover was estimated from Landsat ETM data provided by ESA. The model was evaluated in terms of its performance in the Goksu River Watershed, Turkey, located in the Central Eastern Mediterranean Basin of Turkey. The area is approximately 2000 km2. The landscape is dominated by bare ground, agricultural land, and forests. The average annual rainfall is 636.4 mm. This study is of significant importance for evaluating different model performances in a complex Mediterranean basin. The results provided comprehensive insight, including advantages and limitations of modelling approaches in the Mediterranean environment.
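    Rainfall-runoff model performance of the kind evaluated here is conventionally summarized with goodness-of-fit statistics such as the Nash-Sutcliffe efficiency (NSE) and percent bias (PBIAS). A minimal sketch follows; the monthly runoff values are hypothetical, not from the Goksu catchment.

    ```python
    def nse(observed, simulated):
        """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model is
        no better than always predicting the mean of the observations."""
        mean_obs = sum(observed) / len(observed)
        num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
        den = sum((o - mean_obs) ** 2 for o in observed)
        return 1 - num / den

    def pbias(observed, simulated):
        """Percent bias: positive values indicate the model underestimates."""
        return 100 * sum(o - s for o, s in zip(observed, simulated)) / sum(observed)

    # Hypothetical monthly runoff depths (mm) for a subcatchment.
    obs = [42.0, 55.0, 30.0, 12.0, 5.0, 2.0]
    sim = [40.0, 60.0, 28.0, 15.0, 6.0, 1.5]
    fit = nse(obs, sim)
    bias = pbias(obs, sim)
    ```

    Reporting both statistics matters in Mediterranean catchments: NSE is dominated by the large winter flows, while PBIAS exposes systematic over- or underestimation of the small summer flows.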

  8. Evaluation of thermographic phosphor technology for aerodynamic model testing

    SciTech Connect

    Cates, M.R.; Tobin, K.W.; Smith, D.B.

    1990-08-01

    The goal for this project was to perform technology evaluations applicable to the development of higher-precision, higher-temperature aerodynamic model testing at Arnold Engineering Development Center (AEDC) in Tullahoma, Tennessee. With the advent of new programs for design of aerospace craft that fly at higher speeds and altitudes, requirements for detailed understanding of high-temperature materials become very important. Model testing is a natural and critical part of the development of these new initiatives. The well-established thermographic phosphor techniques of the Applied Technology Division at Oak Ridge National Laboratory are highly desirable for diagnostic evaluation of materials and aerodynamic shapes as studied in model tests. Combining this state-of-the-art thermographic technique with modern, higher-temperature models will greatly improve the practicability of tests for the advanced aerospace vehicles and will provide higher-precision diagnostic information for quantitative evaluation of these tests. The wavelength ratio method for measuring surface temperatures of aerodynamic models was demonstrated in measurements made for this project. In particular, it was shown that appropriate phosphors could be selected for the temperature range up to approximately 700 degrees F or higher, with emission line ratios of sufficient sensitivity to measure temperature with 1% precision or better. Further, it was demonstrated that two-dimensional image-processing methods, using standard hardware, can be successfully applied to surface thermography of aerodynamic models for AEDC applications.
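    The wavelength ratio method works because the intensity ratio of two phosphor emission lines varies monotonically with temperature, so a measured ratio can be inverted through a calibration curve. The sketch below shows that inversion by linear interpolation; the calibration table is entirely hypothetical, not ORNL data.

    ```python
    import bisect

    # Hypothetical calibration: emission-line intensity ratio I(line1)/I(line2)
    # versus surface temperature in degrees F. Real curves come from furnace
    # calibration of the specific phosphor.
    CAL_TEMPS_F = [200.0, 300.0, 400.0, 500.0, 600.0, 700.0]
    CAL_RATIOS  = [0.35, 0.52, 0.71, 0.93, 1.18, 1.46]

    def temperature_from_ratio(ratio):
        """Invert the calibration curve by linear interpolation."""
        if not CAL_RATIOS[0] <= ratio <= CAL_RATIOS[-1]:
            raise ValueError("ratio outside calibrated range")
        i = bisect.bisect_left(CAL_RATIOS, ratio)
        if CAL_RATIOS[i] == ratio:
            return CAL_TEMPS_F[i]
        r0, r1 = CAL_RATIOS[i - 1], CAL_RATIOS[i]
        t0, t1 = CAL_TEMPS_F[i - 1], CAL_TEMPS_F[i]
        return t0 + (t1 - t0) * (ratio - r0) / (r1 - r0)
    ```

    Because the method uses a ratio of two lines from the same phosphor spot, it cancels variations in excitation intensity and viewing geometry, which is what makes per-pixel application to two-dimensional images of a model surface practical.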

  9. Animal models to evaluate anti-atherosclerotic drugs.

    PubMed

    Priyadharsini, Raman P

    2015-08-01

    Atherosclerosis is a multifactorial condition characterized by endothelial injury, fatty streak deposition, and stiffening of the blood vessels. The pathogenesis is complex and mediated by adhesion molecules, inflammatory cells, and smooth muscle cells. Statins have been the major drugs for treating hypercholesterolemia for the past two decades despite their limited efficacy. There is an urgent need for new drugs that can replace statins or be combined with them. Preclinical studies evaluating atherosclerosis require an ideal animal model that resembles the disease condition, but no single animal model mimics the disease. The animal models used include rabbits, rats, mice, hamsters, and mini pigs. Each animal model has its own advantages and disadvantages. Methods of inducing atherosclerosis include diet, chemical induction, mechanically induced injuries, and genetically manipulated animal models. This review mainly focuses on the various animal models, methods of induction, their advantages and disadvantages, and current perspectives with regard to preclinical studies on atherosclerosis. PMID:26095240

  10. Information technology model for evaluating emergency medicine teaching

    NASA Astrophysics Data System (ADS)

    Vorbach, James; Ryan, James

    1996-02-01

    This paper describes work in progress to develop an Information Technology (IT) model and supporting information system for the evaluation of clinical teaching in the Emergency Medicine (EM) Department of North Shore University Hospital. In the academic hospital setting, student physicians (residents) and faculty function daily in their dual roles as teachers and students, and as health care providers. Databases exist that are used to evaluate both groups in either academic or clinical performance, but rarely has this information been integrated to analyze the relationship between academic performance and the ability to care for patients. The goal of the IT model is to improve the quality of teaching of EM physicians by enabling the development of integrated metrics for faculty and resident evaluation. The IT model will include (1) methods for tracking residents in order to develop experimental databases; (2) methods to integrate lecture evaluation, clinical performance, resident evaluation, and quality assurance databases; and (3) a patient flow system to monitor patient rooms and the waiting area in the EM Department, to record and display the status of medical orders, and to collect data for analyses.