Science.gov

Sample records for evaluating value-at-risk models

  1. Multifractal Value at Risk model

    NASA Astrophysics Data System (ADS)

    Lee, Hojin; Song, Jae Wook; Chang, Woojin

    2016-06-01

    In this paper a new Value at Risk (VaR) model is proposed and investigated. We consider the multifractal property of financial time series and develop a multifractal Value at Risk (MFVaR) model. The MFVaR introduced in this paper is analytically tractable and is not based on simulation. An empirical study shows that MFVaR provides more stable and accurate forecasting performance in volatile financial markets where large losses can be incurred. This implies that our multifractal VaR works well for measuring the risk of extreme credit events.

  2. Estimation of value at risk and conditional value at risk using normal mixture distributions model

    NASA Astrophysics Data System (ADS)

    Kamaruzzaman, Zetty Ain; Isa, Zaidi

    2013-04-01

    The normal mixture distributions model has been successfully applied in financial time series analysis. In this paper, we estimate the return distribution, value at risk (VaR) and conditional value at risk (CVaR) for monthly and weekly rates of return on the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI) from July 1990 until July 2010 using a two-component univariate normal mixture distributions model. First, we present the application of the normal mixture distributions model in empirical finance, fitting it to our real data. Second, we apply the model in risk analysis, evaluating VaR and CVaR with model validation for both risk measures. The empirical results provide evidence that the two-component normal mixture distributions model fits the data well and performs better in estimating VaR and CVaR, as it captures the stylized facts of non-normality and leptokurtosis in the return distribution.
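
    The VaR and CVaR of a fitted normal mixture can be computed up to a one-dimensional root-finding step. The sketch below (plain Python, with illustrative rather than fitted parameters; the paper's own estimates are not reproduced here) finds the tail quantile of the mixture by bisection and uses the closed-form truncated-normal mean for CVaR.

```python
import math

def norm_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def norm_pdf(x, mu, sigma):
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def mixture_var_cvar(components, alpha=0.95):
    """VaR and CVaR (reported as positive losses) at confidence `alpha` for
    returns whose density is a finite normal mixture:
    components = [(weight, mu, sigma), ...]."""
    tail = 1.0 - alpha
    cdf = lambda x: sum(w * norm_cdf(x, m, s) for w, m, s in components)
    lo, hi = -50.0, 50.0                  # bracket for the tail quantile
    for _ in range(200):                  # bisection: solve cdf(q) = tail
        mid = 0.5 * (lo + hi)
        if cdf(mid) < tail:
            lo = mid
        else:
            hi = mid
    q = 0.5 * (lo + hi)
    # E[r ; r <= q] for one normal component is mu*Phi(z) - sigma*phi(z),
    # with z = (q - mu)/sigma; sigma*phi(z) equals sigma^2 * norm_pdf(q, mu, sigma)
    partial = sum(w * (m * norm_cdf(q, m, s) - s * s * norm_pdf(q, m, s))
                  for w, m, s in components)
    return -q, -partial / tail            # (VaR, CVaR)
```

    For a single standard-normal component this reproduces the textbook values VaR ≈ 1.645 and CVaR ≈ 2.063 at the 95% level; a second, wider component fattens the tail and raises both measures.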

  3. Application of the Beck model to stock markets: Value-at-Risk and portfolio risk assessment

    NASA Astrophysics Data System (ADS)

    Kozaki, M.; Sato, A.-H.

    2008-02-01

    We apply the Beck model, developed for turbulent systems that exhibit scaling properties, to stock markets. Our study reveals that the Beck model elucidates the properties of stock market returns and is applicable to practical uses such as Value-at-Risk estimation and portfolio analysis. We perform empirical analysis with daily/intraday data of the S&P500 index return and find that the volatility fluctuation of real markets is consistent with the assumptions of the Beck model: the volatility fluctuates on a much larger time scale than the return itself, and the inverse of the variance, or “inverse temperature”, β obeys a Γ-distribution. As predicted by the Beck model, the distribution of returns is well fitted by the q-Gaussian distribution of Tsallis statistics. The evaluation of Value-at-Risk (VaR), one of the most significant indicators in risk management, is studied for the q-Gaussian distribution. Our proposed method enables VaR evaluation that takes account of tail risk, which is underestimated by the variance-covariance method. A framework for portfolio risk assessment in the presence of tail risk is also considered. We propose a multi-asset model with a single volatility fluctuation shared by all assets, named the single-β model, and empirically examine the agreement between the model and an imaginary portfolio of Dow Jones indices. It turns out that the single-β model gives a good approximation for portfolios composed of assets with non-Gaussian and correlated returns.
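
    The Beck-model return distribution can be simulated directly from its defining assumption: draw an inverse variance β from a Γ-distribution, then draw the return from a normal with variance 1/β (the resulting marginal is a Student-t, i.e. a q-Gaussian). A minimal sketch, with illustrative Γ parameters rather than values fitted to S&P500 data:

```python
import random
import statistics

def simulate_beck_returns(n_samples, gamma_shape=2.5, gamma_scale=0.4, seed=7):
    """Returns with Gamma-fluctuating inverse variance ("inverse temperature").
    Marginally these are Student-t / q-Gaussian distributed; the Gamma
    parameters here are illustrative, not fitted values."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_samples):
        beta = rng.gammavariate(gamma_shape, gamma_scale)   # inverse variance
        out.append(rng.gauss(0.0, (1.0 / beta) ** 0.5))
    return out

def empirical_var(returns, alpha=0.99):
    """VaR as the positive loss at the (1 - alpha) empirical quantile."""
    xs = sorted(returns)
    k = int((1.0 - alpha) * len(xs))
    return -xs[k]
```

    Comparing the empirical 99% quantile with the 2.326σ quantile of a Gaussian of matching variance illustrates the tail risk that the variance-covariance method misses.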

  4. Value-at-Risk forecasts by a spatiotemporal model in Chinese stock market

    NASA Astrophysics Data System (ADS)

    Gong, Pu; Weng, Yingliang

    2016-01-01

    This paper generalizes a recently proposed spatial autoregressive model and introduces a spatiotemporal model for forecasting stock returns. We support the view that stock returns are affected not only by the absolute values of factors such as firm size, book-to-market ratio and momentum, but also by the relative values of factors such as trading volume ranking and market capitalization ranking in each period. This article proposes a new method, the quartile method, for constructing stocks' reference groups. Applying the method empirically to the Shanghai Stock Exchange 50 Index, we compare the daily volatility forecasting performance and the out-of-sample Value-at-Risk (VaR) forecasting performance of different models. The empirical results show that the spatiotemporal model performs surprisingly well in capturing spatial dependences among individual stocks, and it produces more accurate VaR forecasts than the three other models from the previous literature. Moreover, the findings indicate that both allowing for serial correlation in the disturbances and using time-varying spatial weight matrices can greatly improve the predictive accuracy of a spatial autoregressive model.
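
    The abstract does not spell out the quartile method, but one plausible reading is: rank stocks each period by the chosen factor (e.g. trading volume), split them into quartiles, and let each stock's reference group be the other stocks in its quartile. A hypothetical sketch of the resulting row-normalized spatial weight matrix:

```python
def quartile_weight_matrix(factor_values):
    """Row-normalized spatial weight matrix: stock j is a neighbour of stock i
    iff both fall in the same quartile of the ranked factor (e.g. trading
    volume). A sketch of one plausible reading of the 'quartile method',
    not the paper's exact construction."""
    n = len(factor_values)
    order = sorted(range(n), key=lambda i: factor_values[i])
    quartile = {}
    for rank, idx in enumerate(order):
        quartile[idx] = min(3, 4 * rank // n)   # quartile index 0..3
    W = [[0.0] * n for _ in range(n)]
    for i in range(n):
        peers = [j for j in range(n) if j != i and quartile[j] == quartile[i]]
        for j in peers:                         # equal weight on each peer
            W[i][j] = 1.0 / len(peers)
    return W
```

    In a time-varying variant, W would be rebuilt every period from that period's rankings, which is what the abstract's "time-varying spatial weight matrices" suggests.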

  5. The value-at-risk evaluation of Brent's crude oil market

    NASA Astrophysics Data System (ADS)

    Cheong, Chin Wen; Isa, Zaidi; Ying, Khor Chia; Lai, Ng Sew

    2014-06-01

    This study investigates the market risk of the Brent crude oil market. First, the long-memory time-varying volatility is modelled under Chung's specification. Second, in model adequacy evaluations, the heavy-tailed, long-memory and endogenously estimated power-transformation models indicate superior performance in out-of-sample forecasts. Lastly, these findings are applied to market risk evaluations of long and short trading positions in the Brent market.

  6. Modelling climate change impacts on and adaptation strategies for agriculture in Sardinia and Tunisia using AquaCrop and value-at-risk.

    PubMed

    Bird, David Neil; Benabdallah, Sihem; Gouda, Nadine; Hummel, Franz; Koeberl, Judith; La Jeunesse, Isabelle; Meyer, Swen; Prettenthaler, Franz; Soddu, Antonino; Woess-Gallasch, Susanne

    2016-02-01

    In Europe, there is concern that climate change will cause significant impacts around the Mediterranean. The goals of this study are to quantify the economic risk to crop production, to demonstrate the variability of yield by soil texture and climate model, and to investigate possible adaptation strategies. In the Rio Mannu di San Sperate watershed, located in Sardinia (Italy), we investigate production of wheat, a rainfed crop. In the Chiba watershed, located in Cap Bon (Tunisia), we analyze irrigated tomato production. Using the FAO model AquaCrop, we find that crop production will decrease significantly in a future climate (2040-2070) as compared to the present without adaptation measures. Using "value-at-risk", we show that production should be viewed in a statistical manner. Wheat yields in Sardinia are modelled to decrease by 64% on clay loams, and to increase by 8% and 26% on sandy loams and sandy clay loams respectively. Assuming constant irrigation, tomatoes sown in August in Cap Bon are modelled to have a 45% chance of crop failure on loamy sands, a 39% decrease in yields on sandy clay loams, and a 12% increase in yields on sandy loams. For tomatoes sown in March, crops on sandy clay loams will fail 81% of the time, yields on loamy sands will be 63% lower, and yields on sandy loams will increase by 12%. However, if one assumes 10% less water available for irrigation, then tomatoes sown in March are not viable. Some adaptation strategies will be able to counteract the modelled crop losses. Increasing the amount of irrigation is one strategy; however, it may not be sustainable. Changes in agricultural management, such as changing the planting date of wheat to coincide with changing rainfall patterns in Sardinia or mulching of tomatoes in Tunisia, can be effective at reducing crop losses. PMID:26187862

  8. Multifractality and value-at-risk forecasting of exchange rates

    NASA Astrophysics Data System (ADS)

    Batten, Jonathan A.; Kinateder, Harald; Wagner, Niklas

    2014-05-01

    This paper addresses market risk prediction for high frequency foreign exchange rates under nonlinear risk scaling behaviour. We use a modified version of the multifractal model of asset returns (MMAR) where trading time is represented by the series of volume ticks. Our dataset consists of 138,418 5-min round-the-clock observations of EUR/USD spot quotes and trading ticks during the period January 5, 2006 to December 31, 2007. Considering fat-tails, long-range dependence as well as scale inconsistency with the MMAR, we derive out-of-sample value-at-risk (VaR) forecasts and compare our approach to historical simulation as well as a benchmark GARCH(1,1) location-scale VaR model. Our findings underline that the multifractal properties in EUR/USD returns in fact have notable risk management implications. The MMAR approach is a parsimonious model which produces admissible VaR forecasts at the 12-h forecast horizon. For the daily horizon, the MMAR outperforms both alternatives based on conditional as well as unconditional coverage statistics.
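
    The unconditional coverage comparison mentioned above is commonly run with Kupiec's likelihood-ratio test on the count of VaR violations. A minimal implementation (the statistic is compared against the χ²(1) critical value, 3.841 at the 5% significance level):

```python
import math

def kupiec_lr(n_obs, n_violations, p):
    """Kupiec (1995) unconditional-coverage LR statistic for VaR backtesting.
    Under a correct VaR model the violation rate equals p and LR ~ chi2(1)."""
    x, n = n_violations, n_obs
    pi = x / n                                   # observed violation rate
    if x == 0:                                   # ML fit is pi = 0
        return -2.0 * n * math.log(1.0 - p)
    if x == n:                                   # ML fit is pi = 1
        return -2.0 * n * math.log(p)
    log_h0 = (n - x) * math.log(1.0 - p) + x * math.log(p)
    log_h1 = (n - x) * math.log(1.0 - pi) + x * math.log(pi)
    return -2.0 * (log_h0 - log_h1)
```

    For example, 2 violations in 250 days of 99% VaR forecasts is consistent with correct coverage, while 10 violations rejects it.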

  9. Heavy-tailed value-at-risk analysis for Malaysian stock exchange

    NASA Astrophysics Data System (ADS)

    Chin, Wen Cheong

    2008-07-01

    This article investigates the comparison of power-law value-at-risk (VaR) evaluation with quantile and non-linear time-varying volatility approaches. A simple Pareto distribution is proposed to account for the heavy-tailed property in the empirical distribution of returns. An alternative VaR measurement, the non-parametric quantile estimate, is implemented using an interpolation method. In addition, we also use the well-known two-component ARCH modelling technique under the assumptions of normality and heavy tails (Student-t distribution) for the innovations. Our results show that the VaR predicted under the Pareto distribution is similar to that of the symmetric heavy-tailed long-memory ARCH model. However, it is found that only the Pareto distribution provides a convenient framework for asymmetric properties in both the lower and upper tails.
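
    Part of what makes the power-law approach convenient is that Pareto-tail VaR has a simple closed form. Assuming the survival function P(L > x) = (x_m/x)^α for losses x ≥ x_m (a simplification; the paper's exact parameterization may differ):

```python
def pareto_var(x_m, alpha, confidence):
    """VaR under a Pareto loss tail P(L > x) = (x_m / x)**alpha for x >= x_m.
    Solves P(L > VaR) = 1 - confidence in closed form:
    VaR = x_m * (1 - confidence)**(-1/alpha)."""
    tail = 1.0 - confidence
    return x_m * tail ** (-1.0 / alpha)
```

    A smaller tail index α (a heavier tail) yields a larger VaR at the same confidence level, which is exactly the effect the variance-based approaches understate.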

  10. The social values at risk from sea-level rise

    SciTech Connect

    Graham, Sonia; Barnett, Jon; Fincher, Ruth; Hurlimann, Anna; Mortreux, Colette; Waters, Elissa

    2013-07-15

    Analysis of the risks of sea-level rise favours conventionally measured metrics such as the area of land that may be subsumed, the numbers of properties at risk, and the capital values of assets at risk. Despite this, it is clear that there exist many less material but no less important values at risk from sea-level rise. This paper re-theorises these multifarious social values at risk from sea-level rise, by explaining their diverse nature, and grounding them in the everyday practices of people living in coastal places. It is informed by a review and analysis of research on social values from within the fields of social impact assessment, human geography, psychology, decision analysis, and climate change adaptation. From this we propose that it is the ‘lived values’ of coastal places that are most at risk from sea-level rise. We then offer a framework that groups these lived values into five types: those that are physiological in nature, and those that relate to issues of security, belonging, esteem, and self-actualisation. This framework of lived values at risk from sea-level rise can guide empirical research investigating the social impacts of sea-level rise, as well as the impacts of actions to adapt to sea-level rise. It also offers a basis for identifying the distribution of related social outcomes across populations exposed to sea-level rise or sea-level rise policies.

  11. Value-at-risk estimation with wavelet-based extreme value theory: Evidence from emerging markets

    NASA Astrophysics Data System (ADS)

    Cifter, Atilla

    2011-06-01

    This paper introduces wavelet-based extreme value theory (EVT) for univariate value-at-risk estimation. Wavelets and EVT are combined for volatility forecasting to estimate a hybrid model. In the first stage, wavelets are used as a threshold in the generalized Pareto distribution, and in the second stage, EVT is applied with the wavelet-based threshold. This new model is applied to two major emerging stock markets: the Istanbul Stock Exchange (ISE) and the Budapest Stock Exchange (BUX). The relative performance of wavelet-based EVT is benchmarked against the Riskmetrics-EWMA, ARMA-GARCH, generalized Pareto distribution, and conditional generalized Pareto distribution models. The empirical results show that wavelet-based extreme value theory increases the predictive performance of financial forecasting according to the number-of-violations and tail-loss tests. The superior forecasting performance of the wavelet-based EVT model is also consistent with Basel II requirements, and the new model can be used by financial institutions as well.
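
    The EVT stage of such a hybrid typically uses the peaks-over-threshold estimator: with a generalized Pareto distribution (scale σ, shape ξ) fitted to the n_exceed losses exceeding the threshold u, out of n observations, the VaR at confidence q follows in closed form. A sketch (the threshold and GPD parameters below are placeholders, not the wavelet-selected ones):

```python
import math

def evt_var(u, sigma, xi, n, n_exceed, q):
    """Peaks-over-threshold VaR at confidence q: GPD(scale=sigma, shape=xi)
    fitted to the n_exceed losses above threshold u, out of n observations.
    VaR = u + (sigma/xi) * (((n/n_exceed) * (1 - q))**(-xi) - 1)."""
    excess_prob = (n / n_exceed) * (1.0 - q)
    if xi == 0.0:                       # GPD degenerates to an exponential tail
        return u - sigma * math.log(excess_prob)
    return u + (sigma / xi) * (excess_prob ** -xi - 1.0)
```

    Sanity check: when q equals the empirical threshold probability (here 95% with 50 exceedances in 1000 observations), the formula returns the threshold itself.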

  12. Value at risk estimation with entropy-based wavelet analysis in exchange markets

    NASA Astrophysics Data System (ADS)

    He, Kaijian; Wang, Lijun; Zou, Yingchao; Lai, Kin Keung

    2014-08-01

    In recent years, exchange markets have become increasingly integrated. Fluctuations and risks across different exchange markets exhibit co-moving and complex dynamics. In this paper we propose entropy-based multivariate wavelet approaches to analyze the multiscale characteristics in the multidimensional domain and further improve the reliability of Value at Risk estimation. Wavelet analysis is used to construct an entropy-based multiscale portfolio Value at Risk estimation algorithm that accounts for the multiscale dynamic correlation. The entropy measure, combined with an error-minimization principle, is proposed as a more effective criterion for selecting the best basis when determining the wavelet family and the decomposition level to use. The empirical studies conducted in this paper, using the closely related Chinese renminbi and euro exchange markets, provide positive evidence of the superior performance of the proposed approach.

  13. Measuring daily Value-at-Risk of SSEC index: A new approach based on multifractal analysis and extreme value theory

    NASA Astrophysics Data System (ADS)

    Wei, Yu; Chen, Wang; Lin, Yu

    2013-05-01

    Recent studies in the econophysics literature reveal that price variability has fractal and multifractal characteristics not only in developed financial markets, but also in emerging markets. Taking high-frequency intraday quotes of the Shanghai Stock Exchange Component (SSEC) Index as an example, this paper proposes a new method to measure daily Value-at-Risk (VaR) by combining the newly introduced multifractal volatility (MFV) model and the extreme value theory (EVT) method. Two VaR backtesting techniques are then employed to compare the performance of the model with that of a group of linear and nonlinear generalized autoregressive conditional heteroskedasticity (GARCH) models. The empirical results show the multifractal nature of price volatility in the Chinese stock market. VaR measures based on the multifractal volatility model and the EVT method outperform many GARCH-type models at high-risk levels.
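
    The abstract does not name its two backtesting techniques; a standard pair is Kupiec's coverage test and Christoffersen's independence test, the latter checking whether violations cluster in time rather than occurring independently. A sketch of the Christoffersen likelihood-ratio statistic on a 0/1 violation sequence:

```python
import math

def christoffersen_independence(violations):
    """Christoffersen (1998) independence LR statistic on a 0/1 violation
    sequence. Large values signal first-order Markov dependence (e.g.
    clustering of violations); under independence LR ~ chi2(1)."""
    n = [[0, 0], [0, 0]]
    for prev, cur in zip(violations, violations[1:]):
        n[prev][cur] += 1                       # transition counts
    n00, n01, n10, n11 = n[0][0], n[0][1], n[1][0], n[1][1]
    pi01 = n01 / (n00 + n01) if n00 + n01 else 0.0   # P(violation | calm)
    pi11 = n11 / (n10 + n11) if n10 + n11 else 0.0   # P(violation | violation)
    pi = (n01 + n11) / (n00 + n01 + n10 + n11)       # pooled violation rate
    def ll(p, zeros, ones):                     # Bernoulli log-likelihood
        out = 0.0
        if zeros and p < 1.0:
            out += zeros * math.log(1.0 - p)
        if ones and p > 0.0:
            out += ones * math.log(p)
        return out
    log_h0 = ll(pi, n00 + n10, n01 + n11)
    log_h1 = ll(pi01, n00, n01) + ll(pi11, n10, n11)
    return -2.0 * (log_h0 - log_h1)
```

    A run of consecutive violations drives the statistic far above the 3.841 critical value, even when the overall violation rate looks acceptable to a pure coverage test.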

  14. On Value at Risk for Foreign Exchange Rates --- the Copula Approach

    NASA Astrophysics Data System (ADS)

    Jaworski, P.

    2006-11-01

    The aim of this paper is to determine the Value at Risk (VaR) of a portfolio consisting of long positions in foreign currencies on an emerging market. Based on empirical data, we restrict ourselves to the case where the tail parts of the distributions of logarithmic returns of these assets follow power laws and the lower tail of the associated copula C follows a power law of degree 1. We illustrate the practical usefulness of this approach by an analysis of the exchange rates of EUR and CHF on the Polish forex market.
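
    A copula with a power-law lower tail exhibits lower-tail dependence, which is what drives joint crashes. As a hedged illustration (using a Clayton copula, which has this property, and Pareto loss marginals; this is not the specific copula or the marginals estimated in the paper), portfolio VaR can be obtained by Monte Carlo:

```python
import random

def clayton_pareto_portfolio_var(theta, tail_alpha, n_sims=50000,
                                 level=0.99, seed=1):
    """Monte-Carlo VaR of an equally weighted two-asset loss portfolio whose
    dependence is a Clayton copula (lower-tail dependent) and whose loss
    marginals are Pareto(tail_alpha). All parameter values are illustrative,
    not calibrated to any market."""
    rng = random.Random(seed)
    losses = []
    for _ in range(n_sims):
        u = rng.random()
        w = rng.random()
        # conditional inversion sampler for the Clayton copula
        v = (u ** -theta * (w ** (-theta / (1.0 + theta)) - 1.0)
             + 1.0) ** (-1.0 / theta)
        l1 = u ** (-1.0 / tail_alpha)      # small u => large Pareto loss
        l2 = v ** (-1.0 / tail_alpha)
        losses.append(0.5 * (l1 + l2))
    losses.sort()
    return losses[int(level * n_sims)]     # empirical 99% loss quantile
```

    Strengthening the dependence parameter θ raises the portfolio VaR for the same marginals, which is the diversification loss that a tail-independent (e.g. Gaussian) copula would hide.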

  15. Continuous-time mean-variance portfolio selection with value-at-risk and no-shorting constraints

    NASA Astrophysics Data System (ADS)

    Yan, Wei

    2012-01-01

    An investment problem is considered with a dynamic mean-variance (M-V) portfolio criterion under discontinuous prices that follow jump-diffusion processes, consistent with the actual prices of stocks and the normality and stability of the financial market. Short-selling of stocks is prohibited in this mathematical model. The corresponding stochastic Hamilton-Jacobi-Bellman (HJB) equation of the problem is presented, and its solution is obtained based on the theory of stochastic LQ control and viscosity solutions. The efficient frontier and optimal strategies of the original dynamic M-V portfolio selection problem are also provided. The effects of a value-at-risk constraint on the efficient frontier are then illustrated. Finally, an example illustrating M-V portfolio selection under discontinuous prices is presented.

  16. 'Weather Value at Risk': A uniform approach to describe and compare sectoral income risks from climate change.

    PubMed

    Prettenthaler, Franz; Köberl, Judith; Bird, David Neil

    2016-02-01

    We extend the concept of 'Weather Value at Risk', initially introduced to measure the economic risks resulting from current weather fluctuations, to describe and compare sectoral income risks from climate change. This is illustrated using the examples of wheat cultivation and summer tourism in (parts of) Sardinia. Based on climate scenario data from four different regional climate models, we study the change in the risk of weather-related income losses between a reference (1971-2000) and a future (2041-2070) period. Results from both examples suggest an increase in weather-related risks of income losses due to climate change, which is somewhat more pronounced for summer tourism. Nevertheless, income from wheat cultivation is at much higher risk of weather-related losses than income from summer tourism, under both reference and future climatic conditions. A weather-induced loss of at least 5%, compared to the income associated with average reference weather conditions, has a 40% (80%) probability of occurrence in the case of wheat cultivation, but only a 0.4% (16%) probability of occurrence in the case of summer tourism, given reference (future) climatic conditions. Whereas in the agricultural example increases in the weather-related income risks mainly result from an overall decrease in average wheat yields, the heightened risk in the tourism example stems mostly from a change in the weather-induced variability of tourism incomes. Since the extended 'Weather Value at Risk' concept can capture impacts from changes in both the mean and the variability of the climate, it is a powerful tool for presenting and disseminating the results of climate change impact assessments. Due to its flexibility, the concept can be applied to any economic sector and therefore provides a valuable tool for cross-sectoral comparisons of climate change impacts, as well as for the assessment of the costs and benefits of adaptation measures. PMID:25929802
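
    The headline numbers above (e.g. a 40% probability of a weather-induced loss of at least 5%) are tail frequencies of a simulated income distribution relative to the income under average weather. A toy sketch of that computation, with made-up income samples rather than the paper's model output:

```python
def shortfall_probability(incomes, reference_income, threshold=0.05):
    """Fraction of simulated weather years whose income falls at least
    `threshold` (e.g. 5%) below the income under average weather conditions.
    A toy illustration of the 'Weather Value at Risk' idea, not the paper's
    data or model chain."""
    cutoff = reference_income * (1.0 - threshold)
    return sum(1 for y in incomes if y <= cutoff) / len(incomes)
```

    Running the same computation on reference-period and future-period income samples gives the paired probabilities (such as 40% vs 80%) that the abstract reports.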

  18. Evaluation models and evaluation use

    PubMed Central

    Contandriopoulos, Damien; Brousselle, Astrid

    2012-01-01

    The use of evaluation results is at the core of evaluation theory and practice. Major debates in the field have emphasized the importance of both the evaluator’s role and the evaluation process itself in fostering evaluation use. A recent systematic review of interventions aimed at influencing policy-making or organizational behavior through knowledge exchange offers a new perspective on evaluation use. We propose here a framework for better understanding the embedded relations between evaluation context, choice of an evaluation model and use of results. The article argues that the evaluation context presents conditions that affect both the appropriateness of the evaluation model implemented and the use of results. PMID:23526460

  19. VPPA weld model evaluation

    NASA Astrophysics Data System (ADS)

    McCutcheon, Kimble D.; Gordon, Stephen S.; Thompson, Paul A.

    1992-07-01

    NASA uses the Variable Polarity Plasma Arc Welding (VPPAW) process extensively for fabrication of Space Shuttle External Tanks. This welding process has been in use at NASA since the late 1970s, but the physics of the process have never been satisfactorily modeled and understood. In an attempt to advance the level of understanding of VPPAW, Dr. Arthur C. Nunes, Jr., (NASA) has developed a mathematical model of the process. The work described in this report evaluated and used two versions (level-0 and level-1) of Dr. Nunes' model, and a model derived by the University of Alabama at Huntsville (UAH) from Dr. Nunes' level-1 model. Two series of VPPAW experiments were done, using over 400 different combinations of welding parameters. Observations were made of VPPAW process behavior as a function of specific welding parameter changes. Data from these weld experiments were used to evaluate and suggest improvements to Dr. Nunes' model. Experimental data and correlations with the model were used to develop a multi-variable control algorithm for use with a future VPPAW controller. This algorithm is designed to control weld widths (both on the crown and root of the weld) based upon the weld parameters, base metal properties, and real-time observation of the crown width. The algorithm exhibited accuracy comparable to that of the weld width measurements for both aluminum and mild steel welds.

  1. Evaluating Service Organization Models

    PubMed Central

    TOUATI, NASSERA; PINEAULT, RAYNALD; CHAMPAGNE, FRANÇOIS; DENIS, JEAN-LOUIS; BROUSSELLE, ASTRID; CONTANDRIOPOULOS, ANDRÉ-PIERRE; GENEAU, ROBERT

    2016-01-01

    Based on the example of the evaluation of service organization models, this article shows how a configurational approach overcomes the limits of traditional methods, which for the most part have studied the individual components of various models considered independently of one another. These traditional methods have led to results (observed effects) that are difficult to interpret. The configurational approach, in contrast, is based on the hypothesis that effects are associated with a set of internally coherent model features that form various configurations. These configurations, like their effects, are context-dependent. We explore the theoretical basis of the configurational approach in order to emphasize its relevance, and discuss the methodological challenges inherent in its application through an in-depth analysis of the scientific literature. We also propose methodological solutions to these challenges. We illustrate with an example how a configurational approach has been used to evaluate primary care models. Finally, we begin a discussion on the implications of this new evaluation approach for the scientific and decision-making communities. PMID:27274682

  2. Pragmatic geometric model evaluation

    NASA Astrophysics Data System (ADS)

    Pamer, Robert

    2015-04-01

    Quantification of subsurface model reliability is mathematically and technically demanding, as there are many different sources of uncertainty and some of the factors can be assessed only subjectively. For many practical applications in industry or risk assessment (e.g. geothermal drilling), a quantitative estimate of possible geometric variations in depth units is preferred over relative numbers because of cost calculations for different scenarios. The talk gives an overview of several factors that affect the geometry of structural subsurface models based upon typical geological survey organization (GSO) data such as geological maps, borehole data and conceptually driven construction of subsurface elements (e.g. fault networks). Within the context of the trans-European project "GeoMol", uncertainty analysis has to be very pragmatic, partly because of differing data rights, data policies and modelling software among the project partners. In a case study, a two-step evaluation methodology for geometric subsurface model uncertainty is being developed. In a first step, several models of the same volume of interest have been calculated by successively omitting more and more input data types (seismic constraints, fault network, outcrop data). The positions of the various horizon surfaces are then compared. The procedure is equivalent to comparing data of various levels of detail and therefore structural complexity. This gives a measure of the structural significance of each data set in space, and as a consequence areas of geometric complexity are identified. These areas are usually very data-sensitive; hence geometric variability between individual data points in these areas is higher than in areas of low structural complexity. Instead of calculating a multitude of different models by varying some input data or parameters, as is done in Monte Carlo simulations, the aim of the second step of the evaluation procedure (which is part of ongoing work) is to

  3. Social Program Evaluation: Six Models.

    ERIC Educational Resources Information Center

    New Directions for Program Evaluation, 1980

    1980-01-01

    Representative models of program evaluation are described by their approach to values, and categorized by empirical style: positivism versus humanism. The models are: social process audit; experimental/quasi-experimental research design; goal-free evaluation; systems evaluation; cost-benefit analysis; and accountability program evaluation. (CP)

  4. Evaluation Theory, Models, and Applications

    ERIC Educational Resources Information Center

    Stufflebeam, Daniel L.; Shinkfield, Anthony J.

    2007-01-01

    "Evaluation Theory, Models, and Applications" is designed for evaluators and students who need to develop a commanding knowledge of the evaluation field: its history, theory and standards, models and approaches, procedures, and inclusion of personnel as well as program evaluation. This important book shows how to choose from a growing array of…

  5. Evaluating Causal Models.

    ERIC Educational Resources Information Center

    Watt, James H., Jr.

    Pointing out that linear causal models can organize the interrelationships of a large number of variables, this paper contends that such models are particularly useful to mass communication research, which must by necessity deal with complex systems of variables. The paper first outlines briefly the philosophical requirements for establishing a…

  6. Evaluation of Advocacy Models.

    ERIC Educational Resources Information Center

    Bradley, Valerie J.

    The paper describes approaches and findings of an evaluation of 10 advocacy projects providing services to developmentally disabled and mentally ill persons across the country. The projects included internal rights protection organizations, independent legal advocacy mechanisms, self-advocacy training centers, and legal advocacy providers in…

  7. Nuclear models relevant to evaluation

    SciTech Connect

    Arthur, E.D.; Chadwick, M.B.; Hale, G.M.; Young, P.G.

    1991-01-01

    The widespread use of nuclear models continues in the creation of data evaluations. The reasons include extension of data evaluations to higher energies, creation of data libraries for isotopic components of natural materials, and production of evaluations for radioactive target species. In these cases, experimental data are often sparse or nonexistent. As this trend continues, the nuclear models employed in evaluation work move towards more microscopically based theoretical methods, prompted in part by the availability of increasingly powerful computational resources. Advances in nuclear models applicable to evaluation will be reviewed. These include advances in optical model theory, microscopic and phenomenological state and level density theory, unified models that consistently describe both equilibrium and nonequilibrium reaction mechanisms, and improved methodologies for calculation of prompt radiation from fission. 84 refs., 8 figs.

  8. Advocacy Evaluation: A Model for Internal Evaluation Offices.

    ERIC Educational Resources Information Center

    Sonnichsen, Richard C.

    1988-01-01

    As evaluations are more often implemented by internal staff, internal evaluators must begin to assume decision-making and advocacy tasks. This advocacy evaluation concept is described using the Federal Bureau of Investigation evaluation staff as a model. (TJH)

  9. A Model for Curriculum Evaluation

    ERIC Educational Resources Information Center

    Crane, Peter; Abt, Clark C.

    1969-01-01

    Describes in some detail the Curriculum Evaluation Model, "a technique for calculating the cost-effectiveness of alternative curriculum materials by a detailed breakdown and analysis of their components, quality, and cost. Coverage, appropriateness, motivational effectiveness, and cost are the four major categories in terms of which the…

  10. Sequentially Executed Model Evaluation Framework

    SciTech Connect

    2015-10-20

    Provides a message passing framework between generic input, model and output drivers, and specifies an API for developing such drivers. Also provides batch and real-time controllers which step the model and I/O through the time domain (or other discrete domain), and sample I/O drivers. This is a library framework, and does not, itself, solve any problems or execute any modeling. The SeMe framework aids in development of models which operate on sequential information, such as time-series, where evaluation is based on prior results combined with new data for this iteration. Has applications in quality monitoring, and was developed as part of the CANARY-EDS software, where real-time water quality data is being analyzed for anomalies.
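The driver/controller pattern described above can be sketched as follows. The class and method names are illustrative stand-ins, not the actual SeMe API; the "model" is a running mean, a minimal example of evaluation based on prior results combined with new data at each iteration:

```python
# Minimal sketch of generic input/model/output drivers plus a batch
# controller that steps them through a discrete time domain.
class InputDriver:
    def __init__(self, series):
        self.series = list(series)
    def read(self, step):
        return self.series[step]

class Model:
    """Running mean: each step combines prior results with new data."""
    def __init__(self):
        self.n, self.mean = 0, 0.0
    def step(self, value):
        self.n += 1
        self.mean += (value - self.mean) / self.n
        return self.mean

class OutputDriver:
    def __init__(self):
        self.results = []
    def write(self, result):
        self.results.append(result)

def batch_controller(inp, model, out, steps):
    # Passes messages input -> model -> output, one time step at a time.
    for t in range(steps):
        out.write(model.step(inp.read(t)))

inp, model, out = InputDriver([2.0, 4.0, 6.0]), Model(), OutputDriver()
batch_controller(inp, model, out, 3)
print(out.results)  # running means: [2.0, 3.0, 4.0]
```

A real-time controller would differ only in where `read` blocks: on a live data stream rather than a stored series.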

  11. Sequentially Executed Model Evaluation Framework

    SciTech Connect

    2014-02-14

    Provides a message passing framework between generic input, model and output drivers, and specifies an API for developing such drivers. Also provides batch and real-time controllers which step the model and I/O through the time domain (or other discrete domain), and sample I/O drivers. This is a library framework, and does not, itself, solve any problems or execute any modelling. The SeMe framework aids in development of models which operate on sequential information, such as time-series, where evaluation is based on prior results combined with new data for this iteration. Has applications in quality monitoring, and was developed as part of the CANARY-EDS software, where real-time water quality data is being analyzed for anomalies.

  14. Infrasound Sensor Models and Evaluations

    SciTech Connect

    KROMER,RICHARD P.; MCDONALD,TIMOTHY S.

    2000-07-31

    Sandia National Laboratories has continued to evaluate the performance of infrasound sensors that are candidates for use by the International Monitoring System (IMS) for the Comprehensive Nuclear-Test-Ban Treaty Organization. The performance criteria against which these sensors are assessed are specified in "Operational Manual for Infrasound Monitoring and the International Exchange of Infrasound Data". This presentation includes the results of efforts concerning two of these sensors: (1) Chaparral Physics Model 5; and (2) CEA MB2000. Sandia is working with Chaparral Physics in order to improve the capability of the Model 5 (a prototype sensor) to be calibrated and evaluated. With the assistance of the Scripps Institution of Oceanography, Sandia is also conducting tests to evaluate the performance of the CEA MB2000. Sensor models based on theoretical transfer functions and manufacturer specifications for these two devices have been developed. This presentation will feature the results of coherence-based data analysis of signals from a huddle test, utilizing several sensors of both types, in order to verify the sensor performance.
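Coherence-based huddle-test analysis of the kind mentioned can be sketched with synthetic data: two co-located sensors recording the same signal plus independent instrument noise should be highly coherent at the signal frequency. The sample rate, signal frequency, and noise level below are illustrative, not the actual test parameters:

```python
import numpy as np
from scipy.signal import coherence

# Synthetic huddle test: two co-located sensors record the same infrasound
# signal plus independent instrument noise.
fs = 20.0                        # sample rate, Hz
t = np.arange(0, 600, 1 / fs)    # ten minutes of data
rng = np.random.default_rng(1)
signal = np.sin(2 * np.pi * 0.5 * t)   # common 0.5 Hz pressure signal
s1 = signal + 0.1 * rng.standard_normal(t.size)
s2 = signal + 0.1 * rng.standard_normal(t.size)

# Magnitude-squared coherence between the two records: near 1 where the
# common signal dominates the self-noise, which is how huddle-test data
# are used to verify sensor performance.
f, cxy = coherence(s1, s2, fs=fs, nperseg=1024)
peak = cxy[np.argmin(np.abs(f - 0.5))]
print(f"coherence at 0.5 Hz: {peak:.3f}")
```

Away from the signal frequency the coherence drops toward zero, so the coherence spectrum separates sensor self-noise from the common ambient field.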

  15. The Spiral-Interactive Program Evaluation Model.

    ERIC Educational Resources Information Center

    Khaleel, Ibrahim Adamu

    1988-01-01

    Describes the spiral interactive program evaluation model, which is designed to evaluate vocational-technical education programs in secondary schools in Nigeria. Program evaluation is defined; utility oriented and process oriented models for evaluation are described; and internal and external evaluative factors and variables that define each…

  16. Comprehensive Evaluation Model for Nursing Education.

    ERIC Educational Resources Information Center

    Reed, Suellen B.; Riley, William

    1979-01-01

    The comprehensive model for evaluating nursing education programs is described in terms of what is evaluated; who conducts the evaluation; and why it is conducted. A structure for further action and decision making is also presented. (GDC)

  17. Beyond Evaluation: A Model for Cooperative Evaluation of Internet Resources.

    ERIC Educational Resources Information Center

    Kirkwood, Hal P., Jr.

    1998-01-01

    Presents a status report on Web site evaluation efforts, listing dead, merged, new review, Yahoo! wannabes, subject-specific review, former librarian-managed, and librarian-managed review sites; discusses how sites are evaluated; describes and demonstrates (reviewing company directories) the Marr/Kirkwood evaluation model; and provides an…

  18. Toward a Theoretical Model of Evaluation Utilization.

    ERIC Educational Resources Information Center

    Johnson, R. Burke

    1998-01-01

    A metamodel of evaluation utilization was developed from implicit and explicit process models and ideas developed in recent research. The model depicts evaluation use as occurring in an internal environment situated in an external environment. Background variables, international or social psychological variables, and evaluation use variables are…

  19. Toward an Ecological Evaluation Model.

    ERIC Educational Resources Information Center

    Parker, Jackson; Patterson, Jerry L.

    1979-01-01

    The authors suggest that the aura of authority traditionally placed on educational research and evaluation has been based on an outdated understanding of the scientific enterprise. They outline an alternative view of science which is more ecological and provides more scope and power for evaluating educational programs. They propose a new framework…

  20. Evaluating modeling tools for the EDOS

    NASA Technical Reports Server (NTRS)

    Knoble, Gordon; Mccaleb, Frederick; Aslam, Tanweer; Nester, Paul

    1994-01-01

    The Earth Observing System (EOS) Data and Operations System (EDOS) Project is developing a functional, system performance model to support the system implementation phase of the EDOS, which is being designed and built by the Goddard Space Flight Center (GSFC). The EDOS Project will use modeling to meet two key objectives: (1) manage system design impacts introduced by unplanned changes in mission requirements; and (2) evaluate evolutionary technology insertions throughout the development of the EDOS. To select a suitable modeling tool, the EDOS modeling team developed an approach for evaluating modeling tools and languages by deriving evaluation criteria from both the EDOS modeling requirements and the development plan. Essential and optional features for an appropriate modeling tool were identified and compared with the known capabilities of several modeling tools. Vendors were also provided the opportunity to model a representative EDOS processing function to demonstrate the applicability of their modeling tool to the EDOS modeling requirements. This paper emphasizes the importance of using a well-defined approach for evaluating tools to model complex systems like the EDOS. The results of this evaluation study do not in any way signify the superiority of any one modeling tool, since the results will vary with the specific modeling requirements of each project.

  1. A Model for Administrative Evaluation by Subordinates.

    ERIC Educational Resources Information Center

    Budig, Jeanne E.

    Under the administrator evaluation program adopted at Vincennes University, all faculty and professional staff are invited to evaluate each administrator above them in the chain of command. Originally based on the Purdue University "cafeteria" system, this evaluation model has been used biannually for 10 years. In an effort to simplify the system,…

  2. IAQ evaluation by dynamic modeling

    SciTech Connect

    Meckler, M.

    1995-12-01

    The current ASHRAE Standard 62-1989, in addition to the ventilation rate (VR) procedure, now contains an alternative procedure in Appendix E to achieve acceptable indoor air quality (IAQ). In this article, the author develops a dynamic model for each of the seven most commonly used HVAC systems listed in Appendix E of ASHRAE Standard 62-1989 and demonstrates how these dynamic models work by providing an illustrative example. In this example, the author estimates the concentration of formaldehyde as a function of time in an office occupancy for three types of filters and outlines how to choose filters to decrease outside air flow requirements.
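A minimal single-zone mass balance of the kind such dynamic IAQ models build on can be sketched as follows. The volume, flows, source strength, and filter efficiency are illustrative values, not the Appendix E formulation or the author's example numbers:

```python
# Single-zone contaminant mass balance with recirculation through a filter:
#   V dC/dt = S + Qo*Co - Qo*C - eta*Qr*C
V   = 500.0    # zone volume, m^3
S   = 5.0      # formaldehyde source strength, mg/h
Qo  = 250.0    # outdoor air flow, m^3/h
Co  = 0.002    # outdoor formaldehyde concentration, mg/m^3
Qr  = 1000.0   # recirculated air flow through the filter, m^3/h
eta = 0.3      # filter removal efficiency

C, dt = 0.0, 0.001  # initial concentration (mg/m^3) and time step (h)
for _ in range(int(8 / dt)):            # integrate over an 8-hour occupancy
    dCdt = (S + Qo * Co - Qo * C - eta * Qr * C) / V
    C += dCdt * dt

steady = (S + Qo * Co) / (Qo + eta * Qr)
print(f"C after 8 h: {C:.5f} mg/m^3 (steady state {steady:.5f})")
```

A more efficient filter (larger `eta`) lowers the steady-state concentration, which is the mechanism by which filter choice can offset outside-air flow requirements.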

  3. The KICS Model: Evaluation Studies.

    ERIC Educational Resources Information Center

    Holovet, Jennifer, Ed.; Guess, Doug, Ed.

    The manual presents 12 papers summarizing research on the effectiveness of the Kansas Individualized Curriculum Sequencing (KICS) model for severely handicapped students. The first three papers examine the effects of distributed practice schedules on the learning, generalization and initiation of students. The use of distributed practice, an…

  4. Evaluating survival model performance: a graphical approach.

    PubMed

    Mandel, M; Galai, N; Simchen, E

    2005-06-30

    In the last decade, many statistics have been suggested to evaluate the performance of survival models. These statistics evaluate the overall performance of a model, ignoring possible variability in performance over time. Using an extension of measures used in binary regression, we propose a graphical method to depict the performance of a survival model over time. The method provides estimates of performance at specific time points and can be used as an informal test for detecting time-varying effects of covariates in the Cox model framework. The method is illustrated on real and simulated data using the Cox proportional hazards model and rank statistics.
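The idea of depicting performance over time can be sketched by computing a discrimination measure at several horizons from a model's risk scores. This toy version treats "failed by time t" as a binary outcome and ignores censoring, which the paper's measures handle properly:

```python
import numpy as np

# Synthetic survival data: higher risk score -> earlier failure.
rng = np.random.default_rng(2)
n = 400
risk = rng.standard_normal(n)                 # model risk scores
times = rng.exponential(scale=np.exp(-risk))  # failure times

def auc_at(t):
    """AUC for the binary event 'failed by time t' (no censoring)."""
    failed = times <= t
    pos, neg = risk[failed], risk[~failed]
    if len(pos) == 0 or len(neg) == 0:
        return np.nan
    # probability that a failed subject outranks a surviving one
    return (pos[:, None] > neg[None, :]).mean()

for t in [0.25, 0.5, 1.0, 2.0]:
    print(f"AUC at t={t}: {auc_at(t):.2f}")
```

Plotting `auc_at(t)` against `t` gives the kind of performance-over-time curve the graphical approach describes; a drifting curve hints at time-varying covariate effects.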

  5. Model Performance Evaluation and Scenario Analysis (MPESA) Tutorial

    EPA Science Inventory

    This tool consists of two parts: model performance evaluation and scenario analysis (MPESA). The model performance evaluation consists of two components: model performance evaluation metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit m...
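Goodness-of-fit metrics of the kind such evaluation tools report can be sketched as follows; the exact metric set in MPESA is not specified in this summary, so these three are common illustrative choices:

```python
import numpy as np

def rmse(obs, sim):
    """Root mean square error of simulated vs. observed values."""
    return np.sqrt(np.mean((sim - obs) ** 2))

def percent_bias(obs, sim):
    """Overall tendency to over- (positive) or under-predict (negative)."""
    return 100.0 * (sim - obs).sum() / obs.sum()

def nash_sutcliffe(obs, sim):
    """1 = perfect; <= 0 means no better than predicting the observed mean."""
    return 1.0 - ((sim - obs) ** 2).sum() / ((obs - obs.mean()) ** 2).sum()

obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
sim = np.array([1.1, 1.9, 3.2, 3.8, 5.3])
print(rmse(obs, sim), percent_bias(obs, sim), nash_sutcliffe(obs, sim))
```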

  6. THE ATMOSPHERIC MODEL EVALUATION TOOL (AMET): METEOROLOGY MODULE

    EPA Science Inventory

    An Atmospheric Model Evaluation Tool (AMET), composed of meteorological and air quality components, is being developed to examine the error and uncertainty in the model simulations. AMET matches observations with the corresponding model-estimated values in space and time, and the...

  7. Comprehensive system models: Strategies for evaluation

    NASA Technical Reports Server (NTRS)

    Field, Christopher; Kutzbach, John E.; Ramanathan, V.; Maccracken, Michael C.

    1992-01-01

    The task of evaluating comprehensive earth system models is vast, involving validation of every model component at every scale of organization, as well as tests of all the individual linkages. Even the most detailed evaluation of each of the component processes and the individual links among them should not, however, engender confidence in the performance of the whole. The integrated earth system is so rich with complex feedback loops, often involving components of the atmosphere, oceans, biosphere, and cryosphere, that it is certain to exhibit emergent properties very difficult to predict from the perspective of a narrow focus on any individual component of the system. Therefore, a substantial share of the task of evaluating comprehensive earth system models must reside at the level of whole-system evaluations. Since complete, integrated atmosphere/ocean/biosphere/hydrology models are not yet operational, questions of evaluation must be addressed at the level of the kinds of earth system processes that the models should be competent to simulate, rather than at the level of specific performance criteria. Here, we have tried to identify examples of earth system processes that are difficult to simulate with existing models and that involve a rich enough suite of feedbacks that they are unlikely to be satisfactorily described by highly simplified or toy models. Our purpose is not to specify a checklist of evaluation criteria but to introduce characteristics of the earth system that may present useful opportunities for model testing and, of course, improvement.

  8. [Evaluation model for municipal health planning management].

    PubMed

    Berretta, Isabel Quint; Lacerda, Josimari Telino de; Calvo, Maria Cristina Marino

    2011-11-01

    This article presents an evaluation model for municipal health planning management. The basis was a methodological study using the health planning theoretical framework to construct the evaluation matrix, in addition to an understanding of the organization and functioning designed by the Planning System of the Unified National Health System (PlanejaSUS) and definition of responsibilities for the municipal level under the Health Management Pact. The indicators and measures were validated using the consensus technique with specialists in planning and evaluation. The applicability was tested in 271 municipalities (counties) in the State of Santa Catarina, Brazil, based on population size. The proposed model features two evaluative dimensions which reflect the municipal health administrator's commitment to planning: the guarantee of resources and the internal and external relations needed for developing the activities. The data were analyzed using indicators, sub-dimensions, and dimensions. The study concludes that the model is feasible and appropriate for evaluating municipal performance in health planning management.

  9. Evaluation of Galactic Cosmic Ray Models

    NASA Technical Reports Server (NTRS)

    Adams, James H., Jr.; Heiblim, Samuel; Malott, Christopher

    2009-01-01

    Models of the galactic cosmic ray spectra have been tested by comparing their predictions to an evaluated database containing more than 380 measured cosmic ray spectra extending from 1960 to the present.

  10. Outcomes Evaluation: A Model for the Future.

    ERIC Educational Resources Information Center

    Blasi, John F.; Davis, Barbara S.

    1986-01-01

    Examines issues and problems related to the measurement of community college outcomes in relation to mission and goals. Presents a model for outcomes evaluation at the community college which derives from the mission statement and provides evaluative comment and comparison with institutional and national norms. (DMM)

  11. Metrics for Evaluation of Student Models

    ERIC Educational Resources Information Center

    Pelanek, Radek

    2015-01-01

    Researchers use many different metrics for evaluation of performance of student models. The aim of this paper is to provide an overview of commonly used metrics, to discuss properties, advantages, and disadvantages of different metrics, to summarize current practice in educational data mining, and to provide guidance for evaluation of student…

  12. SAPHIRE models and software for ASP evaluations

    SciTech Connect

    Sattison, M.B.; Schroeder, J.A.; Russell, K.D.

    1995-04-01

    The Idaho National Engineering Laboratory (INEL) over the past year has created 75 plant-specific Accident Sequence Precursor (ASP) models using the SAPHIRE suite of PRA codes. Along with the new models, the INEL has also developed a new module for SAPHIRE which is tailored specifically to the unique needs of conditional core damage probability (CCDP) evaluations. These models and software will be the next generation of risk tools for the evaluation of accident precursors by both NRR and AEOD. This paper presents an overview of the models and software. Key characteristics include: (1) classification of the plant models according to plant response with a unique set of event trees for each plant class, (2) plant-specific fault trees using supercomponents, (3) generation and retention of all system and sequence cutsets, (4) full flexibility in modifying logic, regenerating cutsets, and requantifying results, and (5) a user interface for streamlined evaluation of ASP events.
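Quantifying a sequence from its retained cut sets can be sketched with the minimal-cut-set upper bound. The basic events, probabilities, and cut sets below are illustrative, not an actual SAPHIRE plant model:

```python
# Conditional core damage probability from minimal cut sets, using the
# minimal-cut-set upper bound and assuming independent basic events.
basic_events = {"DG-A-FAILS": 0.05, "DG-B-FAILS": 0.05,
                "PUMP-FAILS": 0.01, "OPERATOR-ERROR": 0.003}

cutsets = [("DG-A-FAILS", "DG-B-FAILS"),    # loss of both diesel generators
           ("PUMP-FAILS", "OPERATOR-ERROR")]

def cutset_prob(cs):
    p = 1.0
    for ev in cs:                 # product over independent basic events
        p *= basic_events[ev]
    return p

ccdp = 1.0
for cs in cutsets:                # minimal-cut-set upper bound
    ccdp *= 1.0 - cutset_prob(cs)
ccdp = 1.0 - ccdp
print(f"conditional core damage probability ~ {ccdp:.2e}")
```

Requantifying after a logic change amounts to regenerating `cutsets` and re-running this product, which is the flexibility item (4) describes.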

  13. Rock mechanics models evaluation report. [Contains glossary]

    SciTech Connect

    Not Available

    1987-08-01

    This report documents the evaluation of the thermal and thermomechanical models and codes for repository subsurface design and for design constraint analysis. The evaluation was based on a survey of the thermal and thermomechanical codes and models that are applicable to subsurface design, followed by a Kepner-Tregoe (KT) structured decision analysis of the codes and models. The primary recommendations of the analysis are that the DOT code be used for two-dimensional thermal analysis and that the STEALTH and HEATING 5/6 codes be used for three-dimensional and complicated two-dimensional thermal analysis. STEALTH and SPECTROM 32 are recommended for thermomechanical analyses. The other evaluated codes should be considered for use in certain applications. A separate review of salt creep models indicates that the commonly used exponential time law model is appropriate for use in repository design studies. 38 refs., 1 fig., 7 tabs.

  14. An empirical evaluation of spatial regression models

    NASA Astrophysics Data System (ADS)

    Gao, Xiaolu; Asami, Yasushi; Chung, Chang-Jo F.

    2006-10-01

    Conventional statistical methods are often ineffective for evaluating spatial regression models. One reason is that spatial regression models usually have more parameters or smaller sample sizes than a simple model, so their degrees of freedom are reduced; evaluating them with traditional tests is therefore often infeasible. Another reason, theoretically associated with statistical methods, is that statistical criteria depend crucially on assumptions such as normality, independence, and homogeneity. This may create problems because the assumptions themselves are open to question. In view of these problems, this paper proposes an alternative empirical evaluation method. To illustrate the idea, a few hedonic regression models for a house and land price data set are evaluated, including a simple, ordinary linear regression model and three spatial models. Their performance as to how well the price of the house and land can be predicted is examined. With a cross-validation technique, the price at each sample point is predicted with a model estimated from all samples excluding the point in question. Empirical criteria are then established whereby the predicted prices are compared with the real, observed prices. The proposed method provides objective guidance for the selection of a suitable model specification for a data set. Moreover, the method is seen as an alternative way to test the significance of the spatial relationships represented in spatial regression models.
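The cross-validation procedure described can be sketched as follows, with synthetic data standing in for the house-and-land price set and an ordinary linear model standing in for the candidate specifications:

```python
import numpy as np

# Leave-one-out cross-validation: each price is predicted by a model fitted
# to all other samples, then compared with the observed price.
rng = np.random.default_rng(3)
n = 100
floor_area = rng.uniform(50, 200, n)     # hypothetical covariates
dist_cbd = rng.uniform(1, 30, n)
price = 2.0 * floor_area - 1.5 * dist_cbd + 10 * rng.standard_normal(n)

X = np.column_stack([np.ones(n), floor_area, dist_cbd])

def loo_predictions(X, y):
    preds = np.empty(len(y))
    for i in range(len(y)):
        mask = np.arange(len(y)) != i          # drop the point in question
        beta, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        preds[i] = X[i] @ beta
    return preds

preds = loo_predictions(X, price)
mae = np.abs(preds - price).mean()
print(f"LOO mean absolute error: {mae:.1f}")
```

Comparing this empirical criterion across specifications (ordinary vs. spatial) selects the model that predicts prices best, without leaning on normality or independence assumptions.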

  15. Evaluation of constitutive models for crushed salt

    SciTech Connect

    Callahan, G.D.; Loken, M.C.; Hurtado, L.D.; Hansen, F.D.

    1996-05-01

    Three constitutive models are recommended as candidates for describing the deformation of crushed salt. These models are generalized to three-dimensional states of stress to include the effects of mean and deviatoric stress and modified to include effects of temperature, grain size, and moisture content. A database including hydrostatic consolidation and shear consolidation tests conducted on Waste Isolation Pilot Plant (WIPP) and southeastern New Mexico salt is used to determine material parameters for the models. To evaluate the capability of the models, parameter values obtained from fitting the complete database are used to predict the individual tests. Finite element calculations of a WIPP shaft with emplaced crushed salt demonstrate the model predictions.

  16. Evaluation of a lake whitefish bioenergetics model

    USGS Publications Warehouse

    Madenjian, Charles P.; O'Connor, Daniel V.; Pothoven, Steven A.; Schneeberger, Philip J.; Rediske, Richard R.; O'Keefe, James P.; Bergstedt, Roger A.; Argyle, Ray L.; Brandt, Stephen B.

    2006-01-01

    We evaluated the Wisconsin bioenergetics model for lake whitefish Coregonus clupeaformis in the laboratory and in the field. For the laboratory evaluation, lake whitefish were fed rainbow smelt Osmerus mordax in four laboratory tanks during a 133-d experiment. Based on a comparison of bioenergetics model predictions of lake whitefish food consumption and growth with observed consumption and growth, we concluded that the bioenergetics model furnished significantly biased estimates of both food consumption and growth. On average, the model overestimated consumption by 61% and underestimated growth by 16%. The source of the bias was probably an overestimation of the respiration rate. We therefore adjusted the respiration component of the bioenergetics model to obtain a good fit of the model to the observed consumption and growth in our laboratory tanks. Based on the adjusted model, predictions of food consumption over the 133-d period fell within 5% of observed consumption in three of the four tanks and within 9% of observed consumption in the remaining tank. We used polychlorinated biphenyls (PCBs) as a tracer to evaluate model performance in the field. Based on our laboratory experiment, the efficiency with which lake whitefish retained PCBs from their food (ρ) was estimated at 0.45. We applied the bioenergetics model to Lake Michigan lake whitefish and then used PCB determinations of both lake whitefish and their prey from Lake Michigan to estimate ρ in the field. Application of the original model to Lake Michigan lake whitefish yielded a field ρ estimate of 0.28, implying that the original formulation of the model overestimated consumption in Lake Michigan by 61%. Application of the bioenergetics model with the adjusted respiration component resulted in a field ρ estimate of 0.56, implying that this revised model underestimated consumption by 20%.
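The reported consumption biases follow arithmetically from the tracer estimates: if modeled consumption is biased, the apparent field retention efficiency scales inversely with it, so (writing the efficiency as rho) rho_field = rho_lab × (true consumption / modeled consumption). A quick check of the numbers above:

```python
# Consumption bias implied by the PCB-retention tracer estimates.
rho_lab = 0.45                 # retention efficiency from the lab experiment

rho_field_original = 0.28      # original model applied to Lake Michigan
bias = rho_lab / rho_field_original - 1.0
print(f"original model: consumption overestimated by {bias:.0%}")

rho_field_adjusted = 0.56      # model with adjusted respiration component
bias = 1.0 - rho_lab / rho_field_adjusted
print(f"adjusted model: consumption underestimated by {bias:.0%}")
```

Both results reproduce the 61% overestimate and 20% underestimate quoted in the abstract.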

  17. Evaluation of an Upslope Precipitation Model

    NASA Astrophysics Data System (ADS)

    Barstad, I.; Smith, R. B.

    2002-12-01

    A linear orographic precipitation model applicable to complex terrain for an arbitrary wind direction has been developed. The model includes mountain wave dynamics as well as condensed-water advection and two microphysical time-delay mechanisms. Atmospheric input variables of the model are wind speed and direction, specific humidity, wet static stability and two conversion factors for the microphysics. In addition, the underlying terrain is needed. Various closed-form solutions for the precipitation behavior over ideal mountains have been derived and verified with numerical mesoscale models. The model is tested in real terrain against observations. Several locations are used to evaluate the model performance (southern Norway, the Alps and the Wasatch mountains in Utah). The model results are of the same magnitude as the observations, which indicates that the fundamental physics is included in the model. The ratio of condensate that is carried over the mountain crest to the amount that is left as precipitation is crucial, and the model seems to reproduce this well. When the model results are evaluated against observations with statistical measures such as the correlation coefficient, it performs well overall. This requires that detailed input information such as wind direction and stability is provided and that the observations are taken frequently. Traditional observation samplings are normally unevenly distributed between valleys and mountain tops, which causes a bias in objective analysis. Such an analysis can therefore not be compared directly with model results. For the same reason, if a model, for instance, performs well on mountain tops but poorly in valleys, observations will give a wrong impression of the model performance. From our tests, the model performs well in smaller regions where the input variables are representative for the whole area. Some model deficiencies are also discovered.
The model performance seems to improve with slightly smoothed terrain which

  18. The Discrepancy Evaluation Model. I. Basic Tenets of the Model.

    ERIC Educational Resources Information Center

    Steinmetz, Andres

    1976-01-01

    The basic principles of the discrepancy evaluation model (DEM), developed by Malcolm Provus, are presented. The three concepts which are essential to DEM are defined: (1) the standard is a description of how something should be; (2) performance measures are used to find out the actual characteristics of the object being evaluated; and (3) the…

  19. Saphire models and software for ASP evaluations

    SciTech Connect

    Sattison, M.B.

    1997-02-01

    The Idaho National Engineering Laboratory (INEL) over the past three years has created 75 plant-specific Accident Sequence Precursor (ASP) models using the SAPHIRE suite of PRA codes. Along with the new models, the INEL has also developed a new module for SAPHIRE which is tailored specifically to the unique needs of ASP evaluations. These models and software will be the next generation of risk tools for the evaluation of accident precursors by both the U.S. Nuclear Regulatory Commission's (NRC's) Office of Nuclear Reactor Regulation (NRR) and the Office for Analysis and Evaluation of Operational Data (AEOD). This paper presents an overview of the models and software. Key characteristics include: (1) classification of the plant models according to plant response with a unique set of event trees for each plant class, (2) plant-specific fault trees using supercomponents, (3) generation and retention of all system and sequence cutsets, (4) full flexibility in modifying logic, regenerating cutsets, and requantifying results, and (5) a user interface for streamlined evaluation of ASP events. Future plans for the ASP models are also presented.

  20. Evaluation of trends in wheat yield models

    NASA Technical Reports Server (NTRS)

    Ferguson, M. C.

    1982-01-01

    Trend terms in models for wheat yield in the U.S. Great Plains for the years 1932 to 1976 are evaluated. The subset of meteorological variables yielding the largest adjusted R^2 is selected using the method of leaps and bounds. Latent root regression is used to eliminate multicollinearities, and generalized ridge regression is used to introduce bias to provide stability in the data matrix. The regression model used provides for two trend segments in each of two models: a dependent model in which the trend line is piecewise continuous, and an independent model in which the trend line is discontinuous at the year of the slope change. It was found that the trend lines best describing the wheat yields consisted of combinations of increasing, decreasing, and constant trends: four combinations for the dependent model and seven for the independent model.
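The two trend forms can be sketched with ordinary least squares on synthetic yields: the dependent model uses a hinge term, so the trend is piecewise linear and continuous at the year of slope change, while the independent model fits separate segments and allows a jump there. The break year and data below are hypothetical:

```python
import numpy as np

# Synthetic wheat yields over 1932-1976 with a slope change 20 years in.
years = np.arange(1932, 1977)
t = (years - years[0]).astype(float)
rng = np.random.default_rng(4)
yield_bu = 12 + 0.05 * t + 0.25 * np.maximum(t - 20, 0) + rng.standard_normal(t.size)

break_t = 20.0                                 # hypothesized slope-change year

# Dependent model: intercept, trend, hinge -- continuous at the break.
X_dep = np.column_stack([np.ones_like(t), t, np.maximum(t - break_t, 0)])

# Independent model: separate intercept and slope on each side of the break.
after = (t > break_t).astype(float)
X_ind = np.column_stack([np.ones_like(t), t, after, after * (t - break_t)])

results = {}
for name, X in [("dependent", X_dep), ("independent", X_ind)]:
    beta, *_ = np.linalg.lstsq(X, yield_bu, rcond=None)
    results[name] = float(((yield_bu - X @ beta) ** 2).sum())
    print(f"{name} model RSS: {results[name]:.1f}")
```

Because the dependent model is nested in the independent one, the independent fit can never have a larger residual sum of squares; the paper's latent root and ridge machinery addresses the multicollinearity that plain least squares ignores here.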

  1. PREFACE SPECIAL ISSUE ON MODEL EVALUATION: EVALUATION OF URBAN AND REGIONAL EULERIAN AIR QUALITY MODELS

    EPA Science Inventory

    The "Preface to the Special Edition on Model Evaluation: Evaluation of Urban and Regional Eulerian Air Quality Models" is a brief introduction to the papers included in a special issue of Atmospheric Environment. The Preface provides a background for the papers, which have thei...

  2. Evaluation of a habitat suitability index model

    USGS Publications Warehouse

    Farmer, A.H.; Cade, B.S.; Stauffer, D.F.

    2002-01-01

    We assisted with development of a model for maternity habitat of the Indiana bat (Myotis sodalis), for use in conducting assessments of projects potentially impacting this endangered species. We started with an existing model, modified that model in a workshop, and evaluated the revised model, using data previously collected by others. Our analyses showed that higher indices of habitat suitability were associated with sites where Indiana bats were present and, thus, the model may be useful for identifying suitable habitat. Utility of the model, however, was based on a single component: density of suitable roost trees. Percentage of landscape in forest did not allow differentiation between sites occupied and not occupied by Indiana bats. Moreover, in spite of a general opinion by participants in the workshop that bodies of water were highly productive feeding areas and that a diversity of feeding habitats was optimal, we found no evidence to support either hypothesis.

  3. Evaluation of help model replacement codes

    SciTech Connect

    Whiteside, Tad; Hang, Thong; Flach, Gregory

    2009-07-01

    This work evaluates the computer codes that are proposed to be used to predict percolation of water through the closure-cap and into the waste containment zone at Department of Energy closure sites. This work compares the currently used water-balance code (HELP) with newly developed computer codes that use unsaturated flow (Richards’ equation). It provides a literature review of the HELP model and the proposed codes, which resulted in two codes being recommended for further evaluation: HYDRUS-2D3D and VADOSE/W. This further evaluation involved performing simulations on a simple model and comparing the results of those simulations to those obtained with the HELP code and to field data. From the results of this work, we conclude that the two new codes perform nearly identically; moving forward, we recommend HYDRUS-2D3D.

  4. Evaluation of a Mysis bioenergetics model

    USGS Publications Warehouse

    Chipps, S.R.; Bennett, D.H.

    2002-01-01

    Direct approaches for estimating the feeding rate of the opossum shrimp Mysis relicta can be hampered by variable gut residence time (evacuation rate models) and non-linear functional responses (clearance rate models). Bioenergetics modeling provides an alternative method, but the reliability of this approach needs to be evaluated using independent measures of growth and food consumption. In this study, we measured growth and food consumption for M. relicta and compared experimental results with those predicted from a Mysis bioenergetics model. For Mysis reared at 10 °C, model predictions were not significantly different from observed values. Moreover, decomposition of mean square error indicated that 70% of the variation between model predictions and observed values was attributable to random error. On average, model predictions were within 12% of observed values. A sensitivity analysis revealed that Mysis respiration and prey energy density were the most sensitive parameters affecting model output. By accounting for uncertainty (95% CLs) in Mysis respiration, we observed a significant improvement in the accuracy of model output (within 5% of observed values), illustrating the importance of sensitive input parameters for model performance. These findings help corroborate the Mysis bioenergetics model and demonstrate the usefulness of this approach for estimating Mysis feeding rate.
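
    The abstract does not spell out which mean-square-error decomposition was used. One standard choice that splits MSE into squared bias, a variability-mismatch term, and a random (lack-of-correlation) component looks like the following sketch; it is an assumed illustration, not the authors' code.

```python
import numpy as np

def mse_decomposition(observed, predicted):
    """Split mean square error into three additive parts:
    squared bias (SB), squared difference of standard deviations
    (SDSD), and a random lack-of-correlation component (LCS):
    MSE = SB + SDSD + 2*sd_o*sd_p*(1 - r)."""
    o = np.asarray(observed, dtype=float)
    p = np.asarray(predicted, dtype=float)
    sb = (o.mean() - p.mean()) ** 2
    so, sp = o.std(), p.std()          # population std (ddof=0)
    r = np.corrcoef(o, p)[0, 1]
    sdsd = (so - sp) ** 2
    lcs = 2.0 * so * sp * (1.0 - r)
    mse = np.mean((o - p) ** 2)
    return {"mse": mse, "bias": sb, "sd_diff": sdsd, "random": lcs}

# Illustrative growth data (made up for the example).
growth_obs = np.array([1.0, 2.1, 3.0, 4.2, 4.9])
growth_sim = np.array([1.2, 1.9, 3.1, 4.0, 5.1])
parts = mse_decomposition(growth_obs, growth_sim)
```

The three parts sum exactly to the MSE, so the "fraction attributable to random error" reported in the abstract would be `parts["random"] / parts["mse"]` under this decomposition.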

  5. Evaluation (not validation) of quantitative models.

    PubMed

    Oreskes, N

    1998-12-01

    The present regulatory climate has led to increasing demands for scientists to attest to the predictive reliability of numerical simulation models used to help set public policy, a process frequently referred to as model validation. But while model validation may reveal useful information, this paper argues that it is not possible to demonstrate the predictive reliability of any model of a complex natural system in advance of its actual use. All models embed uncertainties, and these uncertainties can and frequently do undermine predictive reliability. In the case of lead in the environment, we may categorize model uncertainties as theoretical, empirical, parametrical, and temporal. Theoretical uncertainties are aspects of the system that are not fully understood, such as the biokinetic pathways of lead metabolism. Empirical uncertainties are aspects of the system that are difficult (or impossible) to measure, such as actual lead ingestion by an individual child. Parametrical uncertainties arise when complexities in the system are simplified to provide manageable model input, such as representing longitudinal lead exposure by cross-sectional measurements. Temporal uncertainties arise from the assumption that systems are stable in time. A model may also be conceptually flawed. The Ptolemaic system of astronomy is a historical example of a model that was empirically adequate but based on a wrong conceptualization. Yet had it been computerized--and had the word then existed--its users would have had every right to call it validated. Thus, rather than talking about strategies for validation, we should be talking about means of evaluation. That is not to say that language alone will solve our problems or that the problems of model evaluation are primarily linguistic. The uncertainties inherent in large, complex models will not go away simply because we change the way we talk about them. But this is precisely the point: calling a model validated does not make it valid.

  8. Evaluating the TD model of classical conditioning.

    PubMed

    Ludvig, Elliot A; Sutton, Richard S; Kehoe, E James

    2012-09-01

    The temporal-difference (TD) algorithm from reinforcement learning provides a simple method for incrementally learning predictions of upcoming events. Applied to classical conditioning, TD models suppose that animals learn a real-time prediction of the unconditioned stimulus (US) on the basis of all available conditioned stimuli (CSs). In the TD model, similar to other error-correction models, learning is driven by prediction errors--the difference between the change in US prediction and the actual US. With the TD model, however, learning occurs continuously from moment to moment and is not artificially constrained to occur in trials. Accordingly, a key feature of any TD model is the assumption about the representation of a CS on a moment-to-moment basis. Here, we evaluate the performance of the TD model with a heretofore unexplored range of classical conditioning tasks. To do so, we consider three stimulus representations that vary in their degree of temporal generalization and evaluate how the representation influences the performance of the TD model on these conditioning tasks.
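
    As a rough illustration of the ideas above, a TD(0) learner with a complete-serial-compound (CSC) stimulus representation, one of the representations commonly considered in this literature, can be sketched as follows. All timings and parameters are invented for the example and are not taken from the paper.

```python
import numpy as np

def td_conditioning(cs_onset=2, us_time=8, n_steps=10,
                    n_trials=200, alpha=0.1, gamma=0.95):
    """TD(0) learning of a real-time US prediction from a
    complete-serial-compound (CSC) representation: one feature per
    time step within the trial, active only at and after CS onset.
    The weight w[t] is then the learned US prediction at step t."""
    w = np.zeros(n_steps)
    for _ in range(n_trials):
        for t in range(n_steps - 1):
            x_t = np.zeros(n_steps)
            x_next = np.zeros(n_steps)
            if t >= cs_onset:
                x_t[t] = 1.0
            if t + 1 >= cs_onset:
                x_next[t + 1] = 1.0
            us = 1.0 if t + 1 == us_time else 0.0
            # TD error: US plus discounted next prediction,
            # minus the current prediction.
            delta = us + gamma * (w @ x_next) - (w @ x_t)
            w += alpha * delta * x_t
    return w

w = td_conditioning()
```

With this representation the learned prediction ramps up across the CS-US interval, the hallmark real-time behavior the abstract describes; swapping in coarser representations (e.g. a single "presence" feature) changes the temporal generalization of what is learned.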

  9. A Traceability Framework to facilitate model evaluation

    NASA Astrophysics Data System (ADS)

    Luo, Yiqi; Xia, Jianyang; Hararuk, Sasha; Wang, Ying Ping

    2013-04-01

    Land models have been developed to account for more and more processes, making their complex structures difficult to understand and evaluate. Here we introduce a framework to decompose a complex land model into traceable components based on the mutually independent properties of modeled biogeochemical processes. The framework traces modeled ecosystem carbon storage capacity (X_ss) to (1) the product of net primary productivity (NPP) and ecosystem residence time (τ_E). The latter can be further traced to (2) baseline carbon residence times (τ′_E), which are usually preset in a model according to vegetation characteristics and soil types, (3) environmental scalars (ξ) including temperature and water scalars, and (4) environmental forcings. We have applied the framework to the Australian Community Atmosphere Biosphere Land Exchange (CABLE) model to help understand differences in modeled carbon processes among biomes and as influenced by nitrogen processes. Our framework could be used to facilitate data-model comparisons and model intercomparisons via tracking a few traceable components for all terrestrial carbon cycle models.
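
    The decomposition described above can be sketched directly. The multiplicative form below (the environmental scalar ξ multiplying the baseline decomposition rate, so that realized residence time is baseline time divided by ξ) is an assumption for illustration; CABLE's actual implementation is more detailed, and the numbers are made up.

```python
def carbon_storage_capacity(npp, baseline_tau, temp_scalar, water_scalar):
    """Traceability sketch: ecosystem carbon storage capacity is
    X_ss = NPP * tau_E. The realized residence time tau_E comes from
    the baseline residence time and an environmental scalar
    xi = temp_scalar * water_scalar: xi scales the baseline
    decomposition rate, so tau_E = baseline_tau / xi (smaller xi,
    i.e. colder or drier conditions, lengthens residence time)."""
    xi = temp_scalar * water_scalar
    tau_e = baseline_tau / xi
    return npp * tau_e, tau_e

# Illustrative values only: NPP in kg C m^-2 yr^-1, times in years.
x_ss, tau_e = carbon_storage_capacity(0.5, 20.0, 0.8, 0.5)
```

Tracking just these few components (NPP, τ′_E, ξ, forcing) is what lets the framework attribute differences in X_ss between models or biomes to specific causes.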

  10. Evaluation of infiltration models in contaminated landscape.

    PubMed

    Sadegh Zadeh, Kouroush; Shirmohammadi, Adel; Montas, Hubert J; Felton, Gary

    2007-06-01

    The infiltration models of Kostiakov, Green-Ampt, and Philip (two- and three-term equations) were used, calibrated, and evaluated to simulate in-situ infiltration in nine different soil types. The Osborne-Moré modified version of the Levenberg-Marquardt optimization algorithm was coupled with the experimental data obtained by the double ring infiltrometers and the infiltration equations, to estimate the model parameters. Comparison of the model outputs with the experimental data indicates that the models can successfully describe cumulative infiltration in different soil types. However, since Kostiakov's equation fails to accurately simulate the infiltration rate as time approaches infinity, Philip's two-term equation, in some cases, produces negative values for the saturated hydraulic conductivity of soils, and the Green-Ampt model uses piston flow assumptions, we suggest using Philip's three-term equation to simulate infiltration and to estimate the saturated hydraulic conductivity of soils. PMID:17558778
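
    Philip's two-term equation, I(t) = S·sqrt(t) + A·t, is linear in the sorptivity S and the coefficient A, so for a sketch it can be fit by ordinary least squares rather than the iterative Levenberg-Marquardt procedure the study used. The data below are synthetic and the units illustrative.

```python
import numpy as np

def fit_philip_two_term(t, cum_infiltration):
    """Fit Philip's two-term equation I(t) = S*sqrt(t) + A*t.
    The model is linear in S and A, so ordinary least squares
    suffices here; a negative fitted A corresponds to the pathology
    noted in the abstract (a negative apparent saturated hydraulic
    conductivity)."""
    t = np.asarray(t, dtype=float)
    infl = np.asarray(cum_infiltration, dtype=float)
    design = np.column_stack([np.sqrt(t), t])
    (S, A), *_ = np.linalg.lstsq(design, infl, rcond=None)
    return S, A

# Synthetic double-ring data generated with S = 2.0 and A = 0.5.
t = np.array([0.25, 1.0, 4.0, 9.0, 16.0])
I = 2.0 * np.sqrt(t) + 0.5 * t
S, A = fit_philip_two_term(t, I)
```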

  11. A model evaluation checklist for process-based environmental models

    NASA Astrophysics Data System (ADS)

    Jackson-Blake, Leah

    2015-04-01

    Mechanistic catchment-scale phosphorus models appear to perform poorly where diffuse sources dominate. The reasons for this were investigated for one commonly-applied model, the INtegrated model of CAtchment Phosphorus (INCA-P). Model output was compared to 18 months of daily water quality monitoring data in a small agricultural catchment in Scotland, and model structure, key model processes and internal model responses were examined. Although the model broadly reproduced dissolved phosphorus dynamics, it struggled with particulates. The reasons for poor performance were explored, together with ways in which improvements could be made. The process of critiquing and assessing model performance was then generalised to provide a broadly-applicable model evaluation checklist, incorporating: (1) Calibration challenges, relating to difficulties in thoroughly searching a high-dimensional parameter space and in selecting appropriate means of evaluating model performance. In this study, for example, model simplification was identified as a necessary improvement to reduce the number of parameters requiring calibration, whilst the traditionally-used Nash Sutcliffe model performance statistic was not able to discriminate between realistic and unrealistic model simulations, and alternative statistics were needed. (2) Data limitations, relating to a lack of (or uncertainty in) input data, data to constrain model parameters, data for model calibration and testing, and data to test internal model processes. In this study, model reliability could be improved by addressing all four kinds of data limitation. For example, there was insufficient surface water monitoring data for model testing against an independent dataset to that used in calibration, whilst additional monitoring of groundwater and effluent phosphorus inputs would help distinguish between alternative plausible model parameterisations. (3) Model structural inadequacies, whereby model structure may inadequately represent
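
    For reference, the Nash-Sutcliffe efficiency statistic that the checklist found unable to discriminate between realistic and unrealistic simulations is straightforward to compute; a minimal numpy sketch:

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / sum of squares about the
    observed mean. 1.0 is a perfect fit; 0.0 means the model is no
    better than predicting the observed mean; negative values mean
    it is worse than the mean."""
    o = np.asarray(observed, dtype=float)
    s = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((o - s) ** 2) / np.sum((o - o.mean()) ** 2)

perfect = nash_sutcliffe([0.1, 0.4, 0.9], [0.1, 0.4, 0.9])
mean_only = nash_sutcliffe([1.0, 2.0, 3.0], [2.0, 2.0, 2.0])
```

Because the denominator is dominated by the largest observed departures from the mean, NSE rewards matching peaks and can score implausible simulations well, which is consistent with the limitation the study reports.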

  12. Evaluating model accuracy for model-based reasoning

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Roden, Joseph

    1992-01-01

    Described here is an approach to automatically assessing the accuracy of various components of a model. In this approach, actual data from the operation of a target system is used to drive statistical measures to evaluate the prediction accuracy of various portions of the model. We describe how these statistical measures of model accuracy can be used in model-based reasoning for monitoring and design. We then describe the application of these techniques to the monitoring and design of the water recovery system of the Environmental Control and Life Support System (ECLSS) of Space Station Freedom.

  13. Two models for evaluating landslide hazards

    USGS Publications Warehouse

    Davis, J.C.; Chung, C.-J.; Ohlmacher, G.C.

    2006-01-01

    Two alternative procedures for estimating landslide hazards were evaluated using data on topographic digital elevation models (DEMs) and bedrock lithologies in an area adjacent to the Missouri River in Atchison County, Kansas, USA. The two procedures are based on the likelihood ratio model but utilize different assumptions. The empirical likelihood ratio model is based on non-parametric empirical univariate frequency distribution functions under an assumption of conditional independence, while the multivariate logistic discriminant model assumes that likelihood ratios can be expressed in terms of logistic functions. The relative hazards of occurrence of landslides were estimated by an empirical likelihood ratio model and by multivariate logistic discriminant analysis. Predictor variables consisted of grids containing topographic elevations, slope angles, and slope aspects calculated from a 30-m DEM. An integer grid of coded bedrock lithologies taken from digitized geologic maps was also used as a predictor variable. Both statistical models yield relative estimates in the form of the proportion of total map area predicted to already contain or to be the site of future landslides. The stabilities of estimates were checked by cross-validation of results from random subsamples, using each of the two procedures. Cell-by-cell comparisons of hazard maps made by the two models show that the two sets of estimates are virtually identical. This suggests that the empirical likelihood ratio and the logistic discriminant analysis models are robust with respect to the conditional independence assumption and the logistic function assumption, respectively, and that either model can be used successfully to evaluate landslide hazards. © 2006.
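
    A toy sketch of the empirical likelihood ratio model under conditional independence follows. The grid, predictor classes, and smoothing constant are invented for illustration; the study used continuous DEM-derived predictors and non-parametric frequency distributions.

```python
import numpy as np

def likelihood_ratio_hazard(predictors, landslide_mask, eps=1e-9):
    """Empirical likelihood-ratio hazard under conditional independence:
    for each discretized predictor, estimate P(bin | landslide) /
    P(bin | no landslide) from class-wise bin frequencies, then
    multiply the ratios across predictors for every grid cell.
    eps is a small smoothing constant to avoid division by zero."""
    mask = np.asarray(landslide_mask, dtype=bool)
    lr = np.ones(mask.size)
    for bins in predictors:
        bins = np.asarray(bins)
        pos, neg = bins[mask], bins[~mask]
        for b in np.unique(bins):
            p_pos = np.mean(pos == b)   # frequency among landslide cells
            p_neg = np.mean(neg == b)   # frequency among stable cells
            lr[bins == b] *= (p_pos + eps) / (p_neg + eps)
    return lr

# Toy 6-cell 'map': slope class is predictive of slides, lithology is not.
slope_class = np.array([0, 0, 1, 1, 1, 1])
lithology = np.array([0, 1, 0, 1, 0, 1])
slides = np.array([False, False, True, True, True, False])
hazard = likelihood_ratio_hazard([slope_class, lithology], slides)
```

Ranking cells by the combined ratio gives the relative hazard map; the logistic discriminant alternative would instead fit the log-ratio as a logistic function of the predictors.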

  14. CTBT integrated verification system evaluation model supplement

    SciTech Connect

    EDENBURN,MICHAEL W.; BUNTING,MARCUS; PAYNE JR.,ARTHUR C.; TROST,LAWRENCE C.

    2000-03-02

    Sandia National Laboratories has developed a computer based model called IVSEM (Integrated Verification System Evaluation Model) to estimate the performance of a nuclear detonation monitoring system. The IVSEM project was initiated in June 1994, by Sandia's Monitoring Systems and Technology Center and has been funded by the U.S. Department of Energy's Office of Nonproliferation and National Security (DOE/NN). IVSEM is a simple, ''top-level,'' modeling tool which estimates the performance of a Comprehensive Nuclear Test Ban Treaty (CTBT) monitoring system and can help explore the impact of various sensor system concepts and technology advancements on CTBT monitoring. One of IVSEM's unique features is that it integrates results from the various CTBT sensor technologies (seismic, infrasound, radionuclide, and hydroacoustic) and allows the user to investigate synergy among the technologies. Specifically, IVSEM estimates the detection effectiveness (probability of detection), location accuracy, and identification capability of the integrated system and of each technology subsystem individually. The model attempts to accurately estimate the monitoring system's performance at medium interfaces (air-land, air-water) and for some evasive testing methods such as seismic decoupling. The original IVSEM report, CTBT Integrated Verification System Evaluation Model, SAND97-2518, described version 1.2 of IVSEM. This report describes the changes made to IVSEM version 1.2 and the addition of identification capability estimates that have been incorporated into IVSEM version 2.0.
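
    The abstract does not give IVSEM's internal math. Under the simplest possible assumption, that the technology subsystems detect an event independently, an integrated detection probability combines as one minus the product of the per-subsystem miss probabilities; the sketch and numbers below are illustrative only.

```python
def integrated_detection_probability(subsystem_probs):
    """Combine per-technology detection probabilities assuming the
    subsystems detect independently: the integrated system misses an
    event only if every subsystem misses it, so
    P(detect) = 1 - prod(1 - p_i)."""
    miss = 1.0
    for p in subsystem_probs:
        miss *= 1.0 - p
    return 1.0 - miss

# Hypothetical subsystem probabilities for one event: seismic,
# infrasound, radionuclide, hydroacoustic.
p_detect = integrated_detection_probability([0.8, 0.5, 0.4, 0.3])
```

Even this crude combination shows the synergy effect the abstract mentions: the integrated probability exceeds that of any single subsystem.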

  15. Automated Expert Modeling and Student Evaluation

    SciTech Connect

    2012-09-12

    AEMASE searches a database of recorded events for combinations of events that are of interest. It compares matching combinations to a statistical model to determine similarity to previous events of interest and alerts the user as new matching examples are found. AEMASE is currently used by weapons tactics instructors to find situations of interest in recorded tactical training scenarios. AEMASE builds on a sub-component, the Relational Blackboard (RBB), which is being released as open-source software. AEMASE builds on RBB adding interactive expert model construction (automated knowledge capture) and re-evaluation of scenario data.

  17. User's appraisal of yield model evaluation criteria

    NASA Technical Reports Server (NTRS)

    Warren, F. B. (Principal Investigator)

    1982-01-01

    The five major potential USDA users of AgRISTARS crop yield forecast models rated the Yield Model Development (YMD) project Test and Evaluation Criteria by the importance placed on them. These users agreed that the "TIMELINESS" and "RELIABILITY" of the forecast yields would be of major importance in determining whether a proposed yield model was worthy of adoption. Although there was considerable difference of opinion as to the relative importance of the other criteria, "COST", "OBJECTIVITY", "ADEQUACY", and "MEASURES OF ACCURACY" generally were felt to be more important than "SIMPLICITY" and "CONSISTENCY WITH SCIENTIFIC KNOWLEDGE". However, some of the comments which accompanied the ratings did indicate that several of the definitions and descriptions of the criteria were confusing.

  18. Evaluation of a mallard productivity model

    USGS Publications Warehouse

    Johnson, D.H.; Cowardin, L.M.; Sparling, D.W.; Verner, J.; Morrison, L.M.; Ralph, C.J.

    1986-01-01

    A stochastic model of mallard (Anas platyrhynchos) productivity has been developed over a 10-year period and successfully applied to several management questions. Here we review the model and describe some recent uses and improvements that increase its realism and applicability, including naturally occurring changes in wetland habitat, catastrophic weather events, and the migrational homing of mallards. The amount of wetland habitat influenced productivity primarily by affecting the renesting rate. Late snowstorms severely reduced productivity, whereas the loss of nests due to flooding was largely compensated for by increased renesting, often in habitats where hatching rates were better. Migrational homing was shown to be an important phenomenon in population modeling and should be considered when evaluating management plans.

  19. Evaluating face trustworthiness: a model based approach

    PubMed Central

    Todorov, Alexander; Baron, Sean G.; Oosterhof, Nikolaas N.

    2008-01-01

    Judgments of trustworthiness from faces determine basic approach/avoidance responses and approximate the valence evaluation of faces that runs across multiple person judgments. Here, based on trustworthiness judgments and using a computer model for face representation, we built a model for representing face trustworthiness (study 1). Using this model, we generated novel faces with an increased range of trustworthiness and used these faces as stimuli in a functional Magnetic Resonance Imaging study (study 2). Although participants did not engage in explicit evaluation of the faces, the amygdala response changed as a function of face trustworthiness. An area in the right amygdala showed a negative linear response—as the untrustworthiness of faces increased so did the amygdala response. Areas in the left and right putamen, the latter area extended into the anterior insula, showed a similar negative linear response. The response in the left amygdala was quadratic—strongest for faces on both extremes of the trustworthiness dimension. The medial prefrontal cortex and precuneus also showed a quadratic response, but their response was strongest to faces in the middle range of the trustworthiness dimension. PMID:19015102

  1. Evaluating conflation methods using uncertainty modeling

    NASA Astrophysics Data System (ADS)

    Doucette, Peter; Dolloff, John; Canavosio-Zuzelski, Roberto; Lenihan, Michael; Motsko, Dennis

    2013-05-01

    The classic problem of computer-assisted conflation involves the matching of individual features (e.g., point, polyline, or polygon vectors) as stored in a geographic information system (GIS), between two different sets (layers) of features. The classical goal of conflation is the transfer of feature metadata (attributes) from one layer to another. The age of free public and open source geospatial feature data has significantly increased the opportunity to conflate such data to create enhanced products. There are currently several spatial conflation tools in the marketplace with varying degrees of automation. An ability to evaluate conflation tool performance quantitatively is of operational value, although manual truthing of matched features is laborious and costly. In this paper, we present a novel methodology that uses spatial uncertainty modeling to simulate realistic feature layers to streamline evaluation of feature matching performance for conflation methods. Performance results are compiled for DCGIS street centerline features.
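
    The evaluation idea above, simulating a perturbed feature layer from a known truth so matching performance can be scored without manual ground-truthing, can be sketched as a small Monte Carlo experiment. Everything here (grid layout, noise model, nearest-neighbour matcher) is an illustrative assumption, not the paper's methodology.

```python
import math
import random

def simulated_match_rate(points, sigma, trials=500, seed=7):
    """Perturb points drawn from a known 'truth' layer with Gaussian
    positional noise (std sigma), re-match each perturbed point to its
    nearest original, and report the fraction matched back to the
    correct feature. Because the truth is simulated, no manual
    truthing of matches is required."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        i = rng.randrange(len(points))
        x, y = points[i]
        px = x + rng.gauss(0.0, sigma)
        py = y + rng.gauss(0.0, sigma)
        # Nearest-neighbour match back to the original layer.
        nearest = min(range(len(points)),
                      key=lambda k: math.hypot(points[k][0] - px,
                                               points[k][1] - py))
        correct += (nearest == i)
    return correct / trials

# Hypothetical 'street intersection' layer on a 10-unit grid.
grid = [(10.0 * i, 10.0 * j) for i in range(5) for j in range(5)]
rate = simulated_match_rate(grid, sigma=1.0)
```

Sweeping sigma relative to feature spacing maps out how quickly a matcher's performance degrades with positional uncertainty, which is the kind of quantitative evaluation the paper streamlines.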

  2. Data Assimilation and Model Evaluation Experiment Datasets.

    NASA Astrophysics Data System (ADS)

    Lai, Chung-Chieng A.; Qian, Wen; Glenn, Scott M.

    1994-05-01

    The Institute for Naval Oceanography, in cooperation with Naval Research Laboratories and universities, executed the Data Assimilation and Model Evaluation Experiment (DAMEE) for the Gulf Stream region during fiscal years 1991-1993. Enormous effort has gone into the preparation of several high-quality and consistent datasets for model initialization and verification. This paper describes the preparation process, the temporal and spatial scopes, the contents, the structure, etc., of these datasets. The goal of DAMEE and the need for data in the four phases of the experiment are briefly stated. The preparation of DAMEE datasets consisted of a series of processes: 1) collection of observational data; 2) analysis and interpretation; 3) interpolation using the Optimum Thermal Interpolation System package; 4) quality control and re-analysis; and 5) data archiving and software documentation. The data products from these processes included a time series of 3D fields of temperature and salinity, 2D fields of surface dynamic height and mixed-layer depth, analysis of the Gulf Stream and rings system, and bathythermograph profiles. To date, these are the most detailed and high-quality data for mesoscale ocean modeling, data assimilation, and forecasting research. Feedback from ocean modeling groups who tested this data was incorporated into its refinement. Suggestions for DAMEE data usages include 1) ocean modeling and data assimilation studies, 2) diagnosis and theoretical studies, and 3) comparisons with locally detailed observations.

  3. Computer model evaluates heavy oil pumping units

    SciTech Connect

    Brunings, C.; Moya, J.; Morales, J.

    1989-04-10

    The need for Corpoven, S.A., an affiliate of Petroleos de Venezuela, S.A., to obtain a model for use in the evaluation of pumping units and downhole equipment in heavy oil wells resulted in the development of an applicable design and optimization technique. All existing models are based on parameters related to wells and equipment for light and medium crudes. Because Venezuela continues to produce large quantities of nonconventional heavy oil, a new computer model was developed. The automation of the artificial lift operations, developed as a pilot project, permitted the monitoring of four wells in a cluster within the Orinoco heavy oil field by a telemetry system and comparison of the new model with existing models. In addition, remote control of sucker rod systems appears to have many advantages such as permanent supervision of a pumping unit, monitoring of preventive maintenance requirements, and close observation of the well behavior. The results of this pilot project are very encouraging, and a study is under way to expand the telemetry system to include more wells from the Orinoco heavy oil field.

  4. Evaluation of models of waste glass durability

    SciTech Connect

    Ellison, A.

    1995-08-01

    The main variable under the control of the waste glass producer is the composition of the glass; thus a need exists to establish functional relationships between the composition of a waste glass and measures of processability, product consistency, and durability. Many years of research show that the structure and properties of a glass depend on its composition, so it seems reasonable to assume that there is also a relationship between the composition of a waste glass and its resistance to attack by an aqueous solution. Several models have been developed to describe this dependence, and an evaluation of their predictive capabilities is the subject of this paper. The objective is to determine whether any of these models describes the "correct" functional relationship between composition and corrosion rate. A more thorough treatment of the relationships between glass composition and durability has been presented elsewhere, and the reader is encouraged to consult it for a more detailed discussion. The models examined in this study are the free energy of hydration model, developed at the Savannah River Laboratory, the structural bond strength model, developed at the Vitreous State Laboratory at the Catholic University of America, and the Composition Variation Study, developed at Pacific Northwest Laboratory.

  5. Wear Modeling: Evaluation and Categorization of Wear Models

    NASA Astrophysics Data System (ADS)

    Meng, Hsien-Chung

    The objective of this study was to evaluate progress in wear modeling and propose guidelines for future work. Such guidelines can help wear modelers to make appropriate decisions and ascertain information relevant to wear modeling. Over 5,000 papers in the literature were surveyed, and 182 erosion wear and sliding wear equations proposed between 1957 and 1992 were found and studied. Two approaches were taken to analyze the surveyed models. The first approach focuses on common features and variations in each of five wear modeling steps. The second approach identifies characteristics of the overall development of wear modeling. The conclusions and recommendations of this study are: (1) No single universal equation or extensively accepted theory fully explains the many types of wear behavior. (2) Wear mechanisms as typically described in the research literature are not fundamental processes of material loss. Mechanical, chemical, physical and metallurgical actions are the four fundamental processes: future wear models should include consideration of these four simultaneously, together with their interactions. (3) Wear models based on a single academic discipline cannot fully explain a wearing process, even for a single wear mechanism. An interdisciplinary approach should be applied to build wear models. (4) Two characteristics in the past development of wear modeling are positive and should be carried into future work: (a) progressively more local information about the variations of working conditions has been considered in wear modeling, and (b) the approaches of different disciplines have been more frequently and extensively applied together to build wear models which explain progressively more wear phenomena in a wearing system. (5) Wear modelers should derive wear equations to concisely present research results and to complete wear modeling.
Out of the 5000 papers considered in this study, only 182 presented equations as well as word descriptions to describe erosion and

  6. CTBT Integrated Verification System Evaluation Model

    SciTech Connect

    Edenburn, M.W.; Bunting, M.L.; Payne, A.C. Jr.

    1997-10-01

    Sandia National Laboratories has developed a computer-based model called IVSEM (Integrated Verification System Evaluation Model) to estimate the performance of a nuclear detonation monitoring system. The IVSEM project was initiated in June 1994, by Sandia's Monitoring Systems and Technology Center and has been funded by the US Department of Energy's Office of Nonproliferation and National Security (DOE/NN). IVSEM is a simple, top-level modeling tool which estimates the performance of a Comprehensive Nuclear Test Ban Treaty (CTBT) monitoring system and can help explore the impact of various sensor system concepts and technology advancements on CTBT monitoring. One of IVSEM's unique features is that it integrates results from the various CTBT sensor technologies (seismic, infrasound, radionuclide, and hydroacoustic) and allows the user to investigate synergy among the technologies. Specifically, IVSEM estimates the detection effectiveness (probability of detection) and location accuracy of the integrated system and of each technology subsystem individually. The model attempts to accurately estimate the monitoring system's performance at medium interfaces (air-land, air-water) and for some evasive testing methods such as seismic decoupling. This report describes version 1.2 of IVSEM.

  7. Modelling approaches for evaluating multiscale tendon mechanics.

    PubMed

    Fang, Fei; Lake, Spencer P

    2016-02-01

    Tendon exhibits anisotropic, inhomogeneous and viscoelastic mechanical properties that are determined by its complicated hierarchical structure and varying amounts/organization of different tissue constituents. Although extensive research has been conducted to use modelling approaches to interpret tendon structure-function relationships in combination with experimental data, many issues remain unclear (i.e. the role of minor components such as decorin, aggrecan and elastin), and the integration of mechanical analysis across different length scales has not been well applied to explore stress or strain transfer from macro- to microscale. This review outlines mathematical and computational models that have been used to understand tendon mechanics at different scales of the hierarchical organization. Model representations at the molecular, fibril and tissue levels are discussed, including formulations that follow phenomenological and microstructural approaches (which include evaluations of crimp, helical structure and the interaction between collagen fibrils and proteoglycans). Multiscale modelling approaches incorporating tendon features are suggested to be an advantageous methodology to understand further the physiological mechanical response of tendon and corresponding adaptation of properties owing to unique in vivo loading environments.

  8. An evaluation framework for participatory modelling

    NASA Astrophysics Data System (ADS)

    Krueger, T.; Inman, A.; Chilvers, J.

    2012-04-01

    Strong arguments for participatory modelling in hydrology can be made on substantive, instrumental and normative grounds. These arguments have led to increasingly diverse groups of stakeholders (here anyone affecting or affected by an issue) getting involved in hydrological research and the management of water resources. In fact, participation has become a requirement of many research grants, programs, plans and policies. However, evidence of the beneficial outcomes of participation suggested by these arguments is difficult to generate and therefore rare. This is because outcomes are diverse, distributed, often tacit, and take time to emerge. In this paper we develop an evaluation framework for participatory modelling focussed on learning outcomes. Learning encompasses many of the potential benefits of participation, such as better models through diversity of knowledge and scrutiny, stakeholder empowerment, greater trust in models and ownership of subsequent decisions, individual moral development, reflexivity, relationships, social capital, institutional change, resilience and sustainability. Based on the theories of experiential, transformative and social learning, complemented by practitioner experience, our framework examines if, when and how learning has occurred. Special emphasis is placed on the role of models as learning catalysts. We map the distribution of learning between stakeholders, scientists (as a subgroup of stakeholders) and models, and we analyse what type of learning has occurred: instrumental learning (broadly cognitive enhancement) and/or communicative learning (change in interpreting meanings, intentions and values associated with actions and activities; group dynamics).
We demonstrate how our framework can be translated into a questionnaire-based survey conducted with stakeholders and scientists at key stages of the participatory process, and show preliminary insights from applying the framework within a rural pollution management situation in

  9. Metal mixtures modeling evaluation project: 1. Background.

    PubMed

    Meyer, Joseph S; Farley, Kevin J; Garman, Emily R

    2015-04-01

    Despite more than 5 decades of aquatic toxicity tests conducted with metal mixtures, there is still a need to understand how metals interact in mixtures and to predict their toxicity more accurately than what is currently done. The present study provides a background for understanding the terminology, regulatory framework, qualitative and quantitative concepts, experimental approaches, and visualization and data-analysis methods for chemical mixtures, with an emphasis on bioavailability and metal-metal interactions in mixtures of waterborne metals. In addition, a Monte Carlo-type randomization statistical approach to test for nonadditive toxicity is presented, and an example with a binary-metal toxicity data set demonstrates the challenge involved in inferring statistically significant nonadditive toxicity. This background sets the stage for the toxicity results, data analyses, and bioavailability models related to metal mixtures that are described in the remaining articles in this special section from the Metal Mixture Modeling Evaluation project and workshop. It is concluded that although qualitative terminology such as additive and nonadditive toxicity can be useful to convey general concepts, failure to expand beyond that limited perspective could impede progress in understanding and predicting metal mixture toxicity. Instead of focusing on whether a given metal mixture causes additive or nonadditive toxicity, effort should be directed to develop models that can accurately predict the toxicity of metal mixtures.
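
    The randomization approach mentioned above can be sketched in a few lines. This is a minimal illustration only: it assumes response addition (E_add = Ea + Eb - Ea*Eb) as the null additivity model, and the function name and all data are hypothetical, not taken from the Metal Mixture Modeling Evaluation project.

```python
import random

def randomization_test(effect_a, effect_b, effect_mix, n_iter=10000, seed=1):
    """Monte Carlo randomization test for non-additive toxicity (sketch).

    effect_a, effect_b : replicate fractional effects (0-1) of each metal alone
    effect_mix         : replicate fractional effects of the binary mixture
    Null model (an assumption of this sketch): response addition,
    E_add = Ea + Eb - Ea*Eb.
    """
    rng = random.Random(seed)
    obs_mix = sum(effect_mix) / len(effect_mix)
    # Additive predictions for every pairing of single-metal replicates.
    e_add = [ea + eb - ea * eb for ea in effect_a for eb in effect_b]
    pred = sum(e_add) / len(e_add)
    obs_dev = obs_mix - pred  # observed departure from additivity

    # Null distribution: shuffle replicates between the "mixture" pool and
    # the "additive prediction" pool, recomputing the deviation each time.
    pool = list(effect_mix) + e_add
    n_mix = len(effect_mix)
    extreme = 0
    for _ in range(n_iter):
        rng.shuffle(pool)
        dev = (sum(pool[:n_mix]) / n_mix
               - sum(pool[n_mix:]) / (len(pool) - n_mix))
        if abs(dev) >= abs(obs_dev):
            extreme += 1
    return obs_dev, extreme / n_iter  # deviation and two-sided p-value
```

    A small p-value suggests the observed mixture effect departs from the additive prediction by more than chance shuffling would produce, which is the kind of statistically significant non-additivity the article notes is hard to infer.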

  10. An Integrated Model of Training Evaluation and Effectiveness

    ERIC Educational Resources Information Center

    Alvarez, Kaye; Salas, Eduardo; Garofano, Christina M.

    2004-01-01

    A decade of training evaluation and training effectiveness research was reviewed to construct an integrated model of training evaluation and effectiveness. This model integrates four prior evaluation models and results of 10 years of training effectiveness research. It is the first to be constructed using a set of strict criteria and to…

  11. A Model for Evaluating Student Clinical Psychomotor Skills.

    ERIC Educational Resources Information Center

    Fiel, Nicholas J.; And Others

    1979-01-01

    A long-range plan to evaluate medical students' physical examination skills was undertaken at the Ingham Family Medical Clinic at Michigan State University. The development of the psychomotor skills evaluation model to evaluate the skill of blood pressure measurement, tests of the model's reliability, and the use of the model are described. (JMD)

  12. Evaluation of Mesoscale Model Phenomenological Verification Techniques

    NASA Technical Reports Server (NTRS)

    Lambert, Winifred

    2006-01-01

    Forecasters at the Spaceflight Meteorology Group, 45th Weather Squadron, and National Weather Service in Melbourne, FL use mesoscale numerical weather prediction model output in creating their operational forecasts. These models aid in forecasting weather phenomena that could compromise the safety of launch, landing, and daily ground operations and must produce reasonable weather forecasts in order for their output to be useful in operations. Considering the importance of model forecasts to operations, their accuracy in forecasting critical weather phenomena must be verified to determine their usefulness. The currently used traditional verification techniques involve an objective point-by-point comparison of model output and observations valid at the same time and location. The resulting statistics can unfairly penalize high-resolution models that make realistic forecasts of certain phenomena but are offset from the observations by small time and/or space increments. Manual subjective verification can provide a more valid representation of model performance, but is time-consuming and prone to personal biases. An objective technique that verifies specific meteorological phenomena, much in the way a human would in a subjective evaluation, would likely produce a more realistic assessment of model performance. Such techniques are being developed in the research community. The Applied Meteorology Unit (AMU) was tasked to conduct a literature search to identify phenomenological verification techniques being developed, determine if any are ready to use operationally, and outline the steps needed to implement any operationally-ready techniques into the Advanced Weather Information Processing System (AWIPS). The AMU conducted a search of all literature on the topic of phenomenological-based mesoscale model verification techniques and found 10 different techniques in various stages of development.
Six of the techniques were developed to verify precipitation forecasts, one

  13. Rainwater harvesting: model-based design evaluation.

    PubMed

    Ward, S; Memon, F A; Butler, D

    2010-01-01

    The rate of uptake of rainwater harvesting (RWH) in the UK has been slow to date, but is expected to gain momentum in the near future. The designs of two different new-build rainwater harvesting systems, based on simple methods, are evaluated using three different design methods, including a continuous simulation modelling approach. The RWH systems are shown to fulfill 36% and 46% of WC demand. Financial analyses reveal that RWH systems within large commercial buildings may be more financially viable than smaller domestic systems. It is identified that design methods based on simple approaches generate tank sizes substantially larger than the continuous simulation. Comparison of the actual tank sizes and those calculated using continuous simulation established that the tanks installed are oversized for their associated demand level and catchment size. Oversizing tanks can lead to excessive system capital costs, which currently hinders the uptake of systems. Furthermore, it is demonstrated that the catchment area size is often overlooked when designing UK-based RWH systems. With respect to these findings, a recommendation for a transition from the use of simple tools to continuous simulation models is made.
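
    The continuous simulation approach recommended above can be sketched as a daily mass balance over the storage tank. This is a minimal illustration using a yield-after-spillage operating rule; the function name, runoff coefficient and all parameter values are hypothetical assumptions, not figures from the study.

```python
def simulate_rwh(rainfall_mm, roof_area_m2, tank_m3, demand_m3_per_day,
                 runoff_coeff=0.85):
    """Daily mass-balance (continuous simulation) of an RWH tank, using a
    yield-after-spillage rule: add inflow, spill the excess above capacity,
    then draw demand. Returns the fraction of total demand met."""
    store = 0.0
    supplied = 0.0
    for rain in rainfall_mm:
        inflow = rain / 1000.0 * roof_area_m2 * runoff_coeff  # m^3
        store = min(store + inflow, tank_m3)  # spill anything above capacity
        draw = min(store, demand_m3_per_day)
        store -= draw
        supplied += draw
    return supplied / (demand_m3_per_day * len(rainfall_mm))
```

    Running the same rainfall series and demand against a range of tank sizes shows where extra capacity stops adding yield, which is how continuous simulation exposes tanks that simple design methods have oversized.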

  14. Treatment modalities and evaluation models for periodontitis

    PubMed Central

    Tariq, Mohammad; Iqbal, Zeenat; Ali, Javed; Baboota, Sanjula; Talegaonkar, Sushama; Ahmad, Zulfiqar; Sahni, Jasjeet K

    2012-01-01

    Periodontitis is the most common localized dental inflammatory disease, associated with several pathological conditions such as inflammation of the gums (gingivitis), degeneration of the periodontal ligament and dental cementum, and alveolar bone loss. In this perspective, the various preventive and treatment modalities, including oral hygiene, gingival irrigation, mechanical instrumentation, full-mouth disinfection, host modulation and antimicrobial therapy, which are used either as adjunctive treatments or as stand-alone therapies in the non-surgical management of periodontal infections, are discussed. Intra-pocket sustained-release systems have emerged as a novel paradigm for future research. In this article, special consideration is given to the different locally delivered antimicrobial and anti-inflammatory medications that are either commercially available or currently under consideration for Food and Drug Administration (FDA) approval. The various in vitro dissolution models and microbiological strains investigated to simulate the infected and inflamed periodontal cavity and to predict the in vivo performance of treatment modalities are also discussed in detail. Animal models that have been employed to explore the pathology of the different stages of periodontitis and to evaluate its treatment modalities are also reviewed. PMID:23373002

  15. Report of the Inter-Organizational Committee on Evaluation. Internal Evaluation Model.

    ERIC Educational Resources Information Center

    White, Roy; Murray, John

    Based upon the premise that school divisions in Manitoba, Canada, should evaluate and improve upon themselves, this evaluation model was developed. The participating personnel and the development of the evaluation model are described. The model has 11 parts: (1) needs assessment; (2) statement of objectives; (3) definition of objectives; (4)…

  16. The design and implementation of an operational model evaluation system

    SciTech Connect

    Foster, K.T.

    1995-06-01

    An evaluation of an atmospheric transport and diffusion model's operational performance typically involves comparing the model's calculations with measurements of an atmospheric pollutant's temporal and spatial distribution. These evaluations, however, often use data from a small number of experiments and may be limited to producing some of the commonly quoted statistics based on the differences between model calculations and the measurements. This paper presents efforts to develop a model evaluation system geared toward both the objective statistical analysis and the more subjective visualization of the inter-relationships between a model's calculations and the appropriate field measurement data.
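
    The paper does not specify its metric set, but the kind of objective statistics such a system computes for paired model calculations and measurements can be sketched with standard dispersion-model evaluation statistics (fractional bias, normalized mean square error, factor-of-two fraction), used here purely for illustration.

```python
def evaluation_stats(observed, modeled):
    """Objective statistics for paired model calculations and field
    measurements (all values assumed positive):
      FB   - fractional bias (0 = unbiased)
      NMSE - normalized mean square error (0 = perfect)
      FAC2 - fraction of predictions within a factor of two of observations
    """
    n = len(observed)
    mo = sum(observed) / n
    mp = sum(modeled) / n
    fb = 2.0 * (mo - mp) / (mo + mp)
    nmse = sum((o - p) ** 2 for o, p in zip(observed, modeled)) / (n * mo * mp)
    fac2 = sum(1 for o, p in zip(observed, modeled) if 0.5 <= p / o <= 2.0) / n
    return fb, nmse, fac2
```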

  17. A Hybrid Evaluation Model for Evaluating Online Professional Development

    ERIC Educational Resources Information Center

    Hahs-Vaughn, Debbie; Zygouris-Coe, Vicky; Fiedler, Rebecca

    2007-01-01

    Online professional development is multidimensional. It encompasses: a) an online, web-based format; b) professional development; and most likely c) specific objectives tailored to and created for the respective online professional development course. Evaluating online professional development is therefore also multidimensional and as such both…

  18. Modelling and evaluating against the violent insider

    SciTech Connect

    Fortney, D.S.; Al-Ayat, R.A.; Saleh, R.A.

    1991-01-01

    The violent insider threat poses a special challenge to facilities protecting special nuclear material from theft or diversion. These insiders could potentially behave as nonviolent insiders to deceitfully defeat certain safeguards elements and use violence to forcefully defeat hardware or personnel. While several vulnerability assessment tools are available to deal with the nonviolent insider, very limited effort has been directed to developing analysis tools for the violent threat. In this paper, the authors present an approach using the results of a vulnerability assessment for nonviolent insiders to evaluate certain violent insider scenarios. Since existing tools do not explicitly consider violent insiders, the approach is intended for experienced safeguards analysts and relies on the analyst to brainstorm possible violent actions, to assign detection probabilities, and to ensure consistency. The authors then discuss our efforts in developing an automated tool for assessing the vulnerability against those violent insiders who are willing to use force against barriers, but who are unwilling to kill or be killed. Specifically, the authors discuss our efforts in developing databases for violent insiders penetrating barriers, algorithms for considering the entry of contraband, and modelling issues in considering the use of violence.

  20. Evaluation of video quality models for multimedia

    NASA Astrophysics Data System (ADS)

    Brunnström, Kjell; Hands, David; Speranza, Filippo; Webster, Arthur

    2008-02-01

    The Video Quality Experts Group (VQEG) is a group of experts from industry, academia, government and standards organizations working in the field of video quality assessment. Over the last 10 years, VQEG has focused its efforts on the evaluation of objective video quality metrics for digital video. Objective video metrics are mathematical models that predict the picture quality as perceived by an average observer. VQEG has completed validation tests for full reference objective metrics for the Standard Definition Television (SDTV) format. From this testing, two ITU Recommendations were produced. This standardization effort is of great relevance to the video industries because objective metrics can be used for quality control of the video at various stages of the delivery chain. Currently, VQEG is undertaking several projects in parallel. The most mature project is concerned with objective measurement of multimedia content. This project is probably the largest coordinated set of video quality testing ever embarked upon. The project will involve the collection of a very large database of subjective quality data. About 40 subjective assessment experiments and more than 160,000 opinion scores will be collected. These will be used to validate the proposed objective metrics. This paper describes the test plan for the project, its current status, and one of the multimedia subjective tests.
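
    Validating an objective metric against subjective data of this kind typically involves correlating model scores with mean opinion scores (MOS); VQEG-style evaluations also use criteria such as RMSE and outlier ratio, which are omitted here. A minimal Pearson correlation sketch:

```python
def pearson_corr(x, y):
    """Pearson linear correlation between objective model scores (x) and
    mean opinion scores (y); pure-Python sketch."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)
```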

  1. Simplified cost models for prefeasibility mineral evaluations

    USGS Publications Warehouse

    Camm, Thomas W.

    1991-01-01

    This report contains 2 open pit models, 6 underground mine models, 11 mill models, and cost equations for access roads, power lines, and tailings ponds. In addition, adjustment factors for variation in haulage distances are provided for open pit models and variation in mining depths for underground models.

  2. THE ATMOSPHERIC MODEL EVALUATION TOOL (AMET); AIR QUALITY MODULE

    EPA Science Inventory

    This presentation reviews the development of the Atmospheric Model Evaluation Tool (AMET) air quality module. The AMET tool is being developed to aid in model evaluation. This presentation focuses on the air quality evaluation portion of AMET. Presented are examples of the...

  3. Evaluating a Training Using the "Four Levels Model"

    ERIC Educational Resources Information Center

    Steensma, Herman; Groeneveld, Karin

    2010-01-01

    Purpose: The aims of this study are: to present a training evaluation based on the "four levels model"; to demonstrate the value of experimental designs in evaluation studies; and to take a first step in the development of an evidence-based training program. Design/methodology/approach: The Kirkpatrick four levels model was used to evaluate the…

  4. Formative Evaluation: A Revised Descriptive Theory and a Prescriptive Model.

    ERIC Educational Resources Information Center

    Braden, Roberts A.

    The premise is advanced that a major weakness of the everyday generic instructional systems design model stems from a too-modest traditional conception of the purpose and potential of formative evaluation. In the typical ISD (instructional systems design) model, formative evaluation is shown either not at all or as a single product evaluation step. Yet…

  5. Global daily reference evapotranspiration modeling and evaluation

    USGS Publications Warehouse

    Senay, G.B.; Verdin, J.P.; Lietzow, R.; Melesse, Assefa M.

    2008-01-01

    Accurate and reliable evapotranspiration (ET) datasets are crucial in regional water and energy balance studies. Due to the complex instrumentation requirements, actual ET values are generally estimated from reference ET values by adjustment factors using coefficients for water stress and vegetation conditions, commonly referred to as crop coefficients. Until recently, the modeling of reference ET has been solely based on important weather variables collected from weather stations that are generally located in selected agro-climatic locations. Since 2001, the National Oceanic and Atmospheric Administration's Global Data Assimilation System (GDAS) has been producing six-hourly climate parameter datasets that are used to calculate daily reference ET for the whole globe at 1-degree spatial resolution. The U.S. Geological Survey Center for Earth Resources Observation and Science has been producing daily reference ET (ETo) since 2001, and it has been used in a variety of operational hydrological models for drought and streamflow monitoring all over the world. With the increasing availability of local station-based reference ET estimates, we evaluated the GDAS-based reference ET estimates using data from the California Irrigation Management Information System (CIMIS). Daily CIMIS reference ET estimates from 85 stations were compared with GDAS-based reference ET at different spatial and temporal scales using five-year daily data from 2002 through 2006. Despite the large difference in spatial scale (point vs. ~100 km grid cell) between the two datasets, the correlations between station-based ET and GDAS-ET were very high, ranging from more than 0.97 on a daily basis to more than 0.99 on time scales of more than 10 days. Both the temporal and spatial correspondences in trend/pattern and magnitudes between the two datasets were satisfactory, suggesting the reliability of using GDAS parameter-based reference ET for regional water and energy balance studies in many parts of the world

  6. Program evaluation models and related theories: AMEE guide no. 67.

    PubMed

    Frye, Ann W; Hemmer, Paul A

    2012-01-01

    This Guide reviews theories of science that have influenced the development of common educational evaluation models. Educators can be more confident when choosing an appropriate evaluation model if they first consider the model's theoretical basis against their program's complexity and their own evaluation needs. Reductionism, system theory, and (most recently) complexity theory have inspired the development of models commonly applied in evaluation studies today. This Guide describes experimental and quasi-experimental models, Kirkpatrick's four-level model, the Logic Model, and the CIPP (Context/Input/Process/Product) model in the context of the theories that influenced their development and that limit or support their ability to do what educators need. The goal of this Guide is for educators to become more competent and confident in being able to design educational program evaluations that support intentional program improvement while adequately documenting or describing the changes and outcomes-intended and unintended-associated with their programs.

  8. Rhode Island Model Evaluation & Support System: Teacher. Edition III

    ERIC Educational Resources Information Center

    Rhode Island Department of Education, 2015

    2015-01-01

    Rhode Island educators believe that implementing a fair, accurate, and meaningful educator evaluation and support system will help improve teaching and learning. The primary purpose of the Rhode Island Model Teacher Evaluation and Support System (Rhode Island Model) is to help all teachers improve. Through the Model, the goal is to help create a…

  9. Evaluating Latent Variable Growth Models through Ex Post Simulation.

    ERIC Educational Resources Information Center

    Kaplan, David; George, Rani

    1998-01-01

    The use of ex post (historical) simulation statistics as a means of evaluating latent growth models is considered, and a variety of simulation quality statistics are applied to such models. Results illustrate the importance of using these measures as adjuncts to more traditional forms of model evaluation. (SLD)

  10. Animal models to evaluate bacterial biofilm development.

    PubMed

    Thomsen, Kim; Trøstrup, Hannah; Moser, Claus

    2014-01-01

    Medical biofilms have attracted substantial attention especially in the past decade. Animal models are contributing significantly to understand the pathogenesis of medical biofilms. In addition, animal models are an essential tool in testing the hypothesis generated from clinical observations in patients and preclinical testing of agents showing in vitro antibiofilm effect. Here, we describe three animal models - two non-foreign body Pseudomonas aeruginosa biofilm models and a foreign body Staphylococcus aureus model.

  11. Evaluating Vocational Programs: A Three Dimensional Model.

    ERIC Educational Resources Information Center

    Rehman, Sharaf N.; Nejad, Mahmoud

    The traditional methods of assessing the academic programs in the liberal arts are inappropriate for evaluating vocational and technical programs. In traditional academic disciplines, assessment of instruction is conducted in two fashions: student evaluation at the end of a course and institutional assessment of its goals and mission. Because of…

  12. Evaluation of Models of Parkinson's Disease

    PubMed Central

    Jagmag, Shail A.; Tripathi, Naveen; Shukla, Sunil D.; Maiti, Sankar; Khurana, Sukant

    2016-01-01

    Parkinson's disease (PD) is one of the most common neurodegenerative diseases. Animal models have contributed substantially to our understanding of PD and to the therapeutics developed for its treatment. Several more exhaustive reviews of the literature provide deeper insights into specific models; however, a concise synthesis of the basic advantages and disadvantages of the different models is much needed. Here we compare both neurotoxin-based and genetic models while suggesting some novel avenues in PD modeling. We also highlight the problems and promises of all the mammalian models, with the hope of providing a framework for comparison of various systems. PMID:26834536

  13. Evaluation of Traditional Medicines for Neurodegenerative Diseases Using Drosophila Models

    PubMed Central

    Lee, Soojin; Bang, Se Min; Lee, Joon Woo; Cho, Kyoung Sang

    2014-01-01

    Drosophila is one of the oldest and most powerful genetic models and has led to novel insights into a variety of biological processes. Recently, Drosophila has emerged as a model system to study human diseases, including several important neurodegenerative diseases. Because of the genomic similarity between Drosophila and humans, Drosophila neurodegenerative disease models exhibit a variety of human-disease-like phenotypes, facilitating fast and cost-effective in vivo genetic modifier screening and drug evaluation. Using these models, many disease-associated genetic factors have been identified, leading to the identification of compelling drug candidates. Recently, the safety and efficacy of traditional medicines for human diseases have been evaluated in various animal disease models. Despite the advantages of the Drosophila model, its usage in the evaluation of traditional medicines is only nascent. Here, we introduce the Drosophila model for neurodegenerative diseases and some examples demonstrating the successful application of Drosophila models in the evaluation of traditional medicines. PMID:24790636

  14. Evaluation of Fast-Time Wake Vortex Prediction Models

    NASA Technical Reports Server (NTRS)

    Proctor, Fred H.; Hamilton, David W.

    2009-01-01

    Current fast-time wake models are reviewed and three basic types are defined. Predictions from several of the fast-time models are compared. Previous statistical evaluations of the APA-Sarpkaya and D2P fast-time models are discussed. Root Mean Square errors between fast-time model predictions and Lidar wake measurements are examined for a 24 hr period at Denver International Airport. Shortcomings in current methodology for evaluating wake errors are also discussed.

  15. Evaluating Energy Efficiency Policies with Energy-Economy Models

    SciTech Connect

    Mundaca, Luis; Neij, Lena; Worrell, Ernst; McNeil, Michael A.

    2010-08-01

    The growing complexities of energy systems, environmental problems and technology markets are driving and testing most energy-economy models to their limits. To further advance bottom-up models from a multidisciplinary energy efficiency policy evaluation perspective, we review and critically analyse bottom-up energy-economy models and corresponding evaluation studies on energy efficiency policies to induce technological change. We use the household sector as a case study. Our analysis focuses on decision frameworks for technology choice, type of evaluation being carried out, treatment of market and behavioural failures, evaluated policy instruments, and key determinants used to mimic policy instruments. Although the review confirms criticism related to energy-economy models (e.g. unrealistic representation of decision-making by consumers when choosing technologies), these models provide valuable guidance for policy evaluation related to energy efficiency. Different areas to further advance the models remain open, particularly relating to modelling issues, techno-economic and environmental aspects, behavioural determinants, and policy considerations.

  16. Evaluation of the Ems Estuary ecosystem model

    NASA Astrophysics Data System (ADS)

    Baretta, J. W.; Ruardij, P.

    1987-11-01

    An ecosystem model is used to calculate and summarize carbon budgets within the Ems Estuary, The Netherlands. The similarity between model calculations and field data is established using a validation procedure. Model results show that the seaward boundary concentration for suspended matter is important in determining whether an estuary is an importer or exporter of carbon. Lowered boundary concentrations of suspended matter enhance pelagic primary production, but reduce sedimentation and hence the carbon flux from pelagic to benthic systems.

  17. Evaluation study of building-resolved urban dispersion models

    SciTech Connect

    Flaherty, Julia E.; Allwine, K Jerry; Brown, Mike J.; Coirier, WIlliam J.; Ericson, Shawn C.; Hansen, Olav R.; Huber, Alan H.; Kim, Sura; Leach, Martin J.; Mirocha, Jeff D.; Newsom, Rob K.; Patnaik, Gopal; Senocak, Inanc

    2007-09-10

    For effective emergency response and recovery planning, it is critically important that building-resolved urban dispersion models be evaluated using field data. Several full-physics computational fluid dynamics (CFD) models and semi-empirical building-resolved (SEB) models are being advanced and applied to simulating flow and dispersion in urban areas. To obtain an estimate of the current state-of-readiness of these classes of models, the Department of Homeland Security (DHS) funded a study to compare five CFD models and one SEB model with tracer data from the extensive Midtown Manhattan field study (MID05) conducted during August 2005 as part of the DHS Urban Dispersion Program (UDP; Allwine and Flaherty 2007). Six days of tracer and meteorological experiments were conducted over an approximately 2-km-by-2-km area in Midtown Manhattan just south of Central Park in New York City. A subset of these data was used for model evaluations. The study was conducted such that an evaluation team, independent of the six modeling teams, provided all the input data (e.g., building data, meteorological data and tracer release rates) and run conditions for each of four experimental periods simulated. Tracer concentration data for two of the four experimental periods were provided to the modeling teams for their own evaluation of their respective models to ensure proper setup and operation. Tracer data were not provided for the second two experimental periods to provide for an independent evaluation of the models. The tracer concentrations resulting from the model simulations were provided to the evaluation team in a standard format for consistency in inter-comparing model results. An overview of the model evaluation approach will be given, followed by a discussion of the qualitative comparison of the respective models with the field data. Future model development efforts needed to address modeling gaps identified from this study will also be discussed.

  18. Evaluation of the BioVapor Model

    EPA Science Inventory

    The BioVapor model addresses transport and biodegradation of petroleum vapors in the subsurface. This presentation describes basic background on the nature and scientific basis of environmental transport models. It then describes a series of parameter uncertainty runs of the Bi...

  19. Evaluation of spinal cord injury animal models

    PubMed Central

    Zhang, Ning; Fang, Marong; Chen, Haohao; Gou, Fangming; Ding, Mingxing

    2014-01-01

    Because there is no curative treatment for spinal cord injury, establishing an ideal animal model is important to identify injury mechanisms and develop therapies for individuals suffering from spinal cord injuries. In this article, we systematically review and analyze various kinds of animal models of spinal cord injury and assess their advantages and disadvantages for further studies. PMID:25598784

  20. Selected Models and Elements of Evaluation for Vocational Educators.

    ERIC Educational Resources Information Center

    Orlich, Donald C.; Murphy, Ronald R.

    The purpose of this manual is to provide vocational educators with evaluation elements and tested models which can assist them in designing evaluation systems. Chapter 1 provides several sets of criteria for inclusion in any general program evaluation. The eleven general areas for which criteria are included are administrative procedures,…

  1. A Context-Adaptive Model for Program Evaluation.

    ERIC Educational Resources Information Center

    Lynch, Brian K.

    1990-01-01

    Presents an adaptable, context-sensitive model for ESL/EFL program evaluation, consisting of seven steps that guide an evaluator through consideration of relevant issues, information, and design elements. Examples from an evaluation of the Reading for Science and Technology Project at the University of Guadalajara, Mexico are given. (31…

  2. Testing of a Program Evaluation Model: Final Report.

    ERIC Educational Resources Information Center

    Nagler, Phyllis J.; Marson, Arthur A.

    A program evaluation model developed by Moraine Park Technical Institute (MPTI) is described in this report. Following background material, the four main evaluation criteria employed in the model are identified as program quality, program relevance to community needs, program impact on MPTI, and the transition and growth of MPTI graduates in the…

  3. TOWARDS AN EVALUATION MODEL--A SYSTEMS APPROACH.

    ERIC Educational Resources Information Center

    ALKIN, MARVIN C.

    A model for evaluating instructional programs at the school district level is developed. The model is basically a discussion and amplification of the definition of "evaluation." It consists of six main elements: (1) student inputs, (2) financial inputs, (3) external systems, (4) mediating factors, (5) student outputs, and (6) nonstudent outputs.…

  4. Modelling in Evaluating a Working Life Project in Higher Education

    ERIC Educational Resources Information Center

    Sarja, Anneli; Janhonen, Sirpa; Havukainen, Pirjo; Vesterinen, Anne

    2012-01-01

    This article describes an evaluation method based on collaboration between higher education, a care home, and a university in an R&D project. The aim of the project was to elaborate modelling as a tool of developmental evaluation for innovation and competence in project cooperation. The approach was based on activity theory. Modelling enabled a…

  5. Rhode Island Model Evaluation & Support System: Support Professional. Edition II

    ERIC Educational Resources Information Center

    Rhode Island Department of Education, 2015

    2015-01-01

    Rhode Island educators believe that implementing a fair, accurate, and meaningful evaluation and support system for support professionals will help improve student outcomes. The primary purpose of the Rhode Island Model Support Professional Evaluation and Support System (Rhode Island Model) is to help all support professionals do their best work…

  6. Rhode Island Model Evaluation & Support System: Building Administrator. Edition III

    ERIC Educational Resources Information Center

    Rhode Island Department of Education, 2015

    2015-01-01

    Rhode Island educators believe that implementing a fair, accurate, and meaningful educator evaluation and support system will help improve teaching, learning, and school leadership. The primary purpose of the Rhode Island Model Building Administrator Evaluation and Support System (Rhode Island Model) is to help all building administrators improve.…

  7. Evaluating a Community-School Model of Social Work Practice

    ERIC Educational Resources Information Center

    Diehl, Daniel; Frey, Andy

    2008-01-01

    While research has shown that social workers can have positive impacts on students' school adjustment, evaluations of overall practice models continue to be limited. This article evaluates a model of community-school social work practice by examining its effect on problem behaviors and concerns identified by teachers and parents at referral. As…

  8. Program Evaluation: The Accountability Bridge Model for Counselors

    ERIC Educational Resources Information Center

    Astramovich, Randall L.; Coker, J. Kelly

    2007-01-01

    The accountability and reform movements in education and the human services professions have pressured counselors to demonstrate outcomes of counseling programs and services. Evaluation models developed for large-scale evaluations are generally impractical for counselors to implement. Counselors require practical models to guide them in planning…

  9. Model for Evaluating Teacher and Trainer Competences

    ERIC Educational Resources Information Center

    Carioca, Vito; Rodrigues, Clara; Saude, Sandra; Kokosowski, Alain; Harich, Katja; Sau-Ek, Kristiina; Georgogianni, Nicole; Levy, Samuel; Speer, Sandra; Pugh, Terence

    2009-01-01

    A lack of common criteria for comparing education and training systems makes it difficult to recognise qualifications and competences acquired in different environments and levels of training. A valid basis for defining a framework for evaluating professional performance in European educational and training contexts must therefore be established.…

  10. A Theoretical Model for Faculty "Peer" Evaluation

    ERIC Educational Resources Information Center

    Sauter, Robert C.; Walker, James K.

    1976-01-01

    The current state of research by the authors into the development of a peer evaluation instrument for faculty is reported. Appraisal areas include: clarity and appropriateness of course objectives; agreement between objectives and course content; instructional material; mastery of content; communication skills; student encouragement; organization;…

  11. Evaluating Individualized Reading Programs: A Bayesian Model.

    ERIC Educational Resources Information Center

    Maxwell, Martha

    Simple Bayesian approaches can be applied to answer specific questions in evaluating an individualized reading program. A small reading and study skills program located in the counseling center of a major research university collected and compiled data on student characteristics such as class, number of sessions attended, grade point average, and…

  12. Designing and Evaluating Representations to Model Pedagogy

    ERIC Educational Resources Information Center

    Masterman, Elizabeth; Craft, Brock

    2013-01-01

    This article presents the case for a theory-informed approach to designing and evaluating representations for implementation in digital tools to support Learning Design, using the framework of epistemic efficacy as an example. This framework, which is rooted in the literature of cognitive psychology, is operationalised through dimensions of fit…

  13. A Model for Evaluating Learning Resources.

    ERIC Educational Resources Information Center

    Ruth, Lester R.

    In 1977-78 and again in 1980-81, extensive evaluations of learning resource services at Lake-Sumter Community College (LSCC) were conducted to determine: (1) the extent to which the Learning Resource Center (LRC) contributes to the stated purpose of the college; (2) the extent to which it meets its own objectives; (3) the extent to which the LRC…

  14. An Ontological Model of Evaluation: A Dynamic Model for Aiding Organizational Development.

    ERIC Educational Resources Information Center

    Peper, John B.

    Evaluation models imply or assume theories of organization, behavior, and decision-making. Seldom does an evaluation model specify these assumptions. As a result, program evaluators often choose mechanistic models and their resultant information is either inadequate or inappropriate for most of the client's purposes. The Ontological Evaluation…

  15. TMDL MODEL EVALUATION AND RESEARCH NEEDS

    EPA Science Inventory

    This review examines the modeling research needs to support environmental decision-making for the 303(d) requirements for development of total maximum daily loads (TMDLs) and related programs such as 319 Nonpoint Source Program activities, watershed management, stormwater permits...

  16. Models for evaluating the performability of degradable computing systems

    NASA Technical Reports Server (NTRS)

    Wu, L. T.

    1982-01-01

    Recent advances in multiprocessor technology established the need for unified methods to evaluate computing systems performance and reliability. In response to this modeling need, a general modeling framework that permits the modeling, analysis and evaluation of degradable computing systems is considered. Within this framework, several user oriented performance variables are identified and shown to be proper generalizations of the traditional notions of system performance and reliability. Furthermore, a time varying version of the model is developed to generalize the traditional fault tree reliability evaluation methods of phased missions.

  17. Evaluating Models of Human Performance: Safety-Critical Systems Applications

    NASA Technical Reports Server (NTRS)

    Feary, Michael S.

    2012-01-01

    This presentation is part of panel discussion on Evaluating Models of Human Performance. The purpose of this panel is to discuss the increasing use of models in the world today and specifically focus on how to describe and evaluate models of human performance. My presentation will focus on discussions of generating distributions of performance, and the evaluation of different strategies for humans performing tasks with mixed initiative (Human-Automation) systems. I will also discuss issues with how to provide Human Performance modeling data to support decisions on acceptability and tradeoffs in the design of safety critical systems. I will conclude with challenges for the future.

  18. Evaluation of Surrogate Animal Models of Melioidosis

    PubMed Central

    Warawa, Jonathan Mark

    2010-01-01

    Burkholderia pseudomallei is the Gram-negative bacterial pathogen responsible for the disease melioidosis. B. pseudomallei establishes disease in susceptible individuals through multiple routes of infection, all of which may proceed to a septicemic disease associated with a high mortality rate. B. pseudomallei opportunistically infects humans and a wide range of animals directly from the environment, and modeling of experimental melioidosis has been conducted in numerous biologically relevant models including mammalian and invertebrate hosts. This review seeks to summarize published findings related to established animal models of melioidosis, with an aim to compare and contrast the virulence of B. pseudomallei in these models. The effect of the route of delivery on disease is also discussed for intravenous, intraperitoneal, subcutaneous, intranasal, aerosol, oral, and intratracheal infection methodologies, with a particular focus on how they relate to modeling clinical melioidosis. The importance of the translational validity of the animal models used in B. pseudomallei research is highlighted as these studies have become increasingly therapeutic in nature. PMID:21772830

  19. CFD code evaluation for internal flow modeling

    NASA Technical Reports Server (NTRS)

    Chung, T. J.

    1990-01-01

    Research on the computational fluid dynamics (CFD) code evaluation with emphasis on supercomputing in reacting flows is discussed. Advantages of unstructured grids, multigrids, adaptive methods, improved flow solvers, vector processing, parallel processing, and reduction of memory requirements are discussed. As examples, applications of supercomputing to reacting-flow Navier-Stokes equations, including shock waves and turbulence, and to combustion instability problems associated with solid and liquid propellants are included. Evaluation of codes developed by other organizations is not included. Instead, the basic criteria for accuracy and efficiency have been established, and some applications to rocket combustion have been made. Research toward an ultimate goal, the most accurate and efficient CFD code, is in progress and will continue for years to come.

  20. The Discrepancy Evaluation Model: A Systematic Approach for the Evaluation of Career Planning and Placement Programs.

    ERIC Educational Resources Information Center

    Buttram, Joan L.; Covert, Robert W.

    The Discrepancy Evaluation Model (DEM), developed in 1966 by Malcolm Provus, provides information for program assessment and program improvement. Under the DEM, evaluation is defined as the comparison of an actual performance to a desired standard. The DEM embodies five stages of evaluation based upon a program's natural development: program…

  1. Experimental evaluations of the microchannel flow model

    NASA Astrophysics Data System (ADS)

    Parker, K. J.

    2015-06-01

    Recent advances have enabled a new wave of biomechanics measurements, and have renewed interest in selecting appropriate rheological models for soft tissues such as the liver, thyroid, and prostate. The microchannel flow model was recently introduced to describe the linear response of tissue to stimuli such as stress relaxation or shear wave propagation. This model postulates a power law relaxation spectrum that results from a branching distribution of vessels and channels in normal soft tissue such as liver. In this work, the derivation is extended to determine the explicit link between the distribution of vessels and the relaxation spectrum. In addition, liver tissue is modified by temperature or salinity, and the resulting changes in tissue responses (by factors of 1.5 or greater) are reasonably predicted from the microchannel flow model, simply by considering the changes in fluid flow through the modified samples. The 2- and 4-parameter versions of the model are considered, and it is shown that in some cases the maximum time constant (corresponding to the minimum vessel diameters) could be altered in a way that has major impact on the observed tissue response. This could explain why an inflamed region is palpated as a harder bump compared to surrounding normal tissue.

  2. Numerical models for the evaluation of geothermal systems

    SciTech Connect

    Bodvarsson, G.S.; Pruess, K.; Lippmann, M.J.

    1986-08-01

    We have carried out detailed simulations of various fields in the USA (Baca, New Mexico; Heber, California); Mexico (Cerro Prieto); Iceland (Krafla); and Kenya (Olkaria). These simulation studies have illustrated the usefulness of numerical models for the overall evaluation of geothermal systems. The methodology for modeling the behavior of geothermal systems, different approaches to geothermal reservoir modeling, and how they can be applied in comprehensive evaluation work are discussed.

  3. Evaluation of biological models using Spacelab

    NASA Technical Reports Server (NTRS)

    Tollinger, D.; Williams, B. A.

    1980-01-01

    Biological models of hypogravity effects are described, including the cardiovascular-fluid shift, musculoskeletal, embryological, and space sickness models. These models predict such effects as loss of extracellular fluid and electrolytes, decrease in red blood cell mass, and loss of muscle and bone mass in weight-bearing portions of the body. Experimentation in Spacelab by the use of implanted electromagnetic flow probes, by fertilizing frog eggs in hypogravity and fixing the eggs at various stages of early development, and by assessing the role of the vestibulo-ocular reflex arc in space sickness is suggested. It is concluded that the use of small animals eliminates the uncertainties caused by corrective or preventive measures employed with human subjects.

  4. Evaluating models of climate and forest vegetation

    NASA Technical Reports Server (NTRS)

    Clark, James S.

    1992-01-01

    Understanding how the biosphere may respond to increasing trace gas concentrations in the atmosphere requires models that contain vegetation responses to regional climate. Most of the processes ecologists study in forests, including trophic interactions, nutrient cycling, and disturbance regimes, and vital components of the world economy, such as forest products and agriculture, will be influenced in potentially unexpected ways by changing climate. These vegetation changes affect climate in the following ways: changing C, N, and S pools; trace gases; albedo; and water balance. The complexity of the indirect interactions among variables that depend on climate, together with the range of different space/time scales that best describe these processes, make the problems of modeling and prediction enormously difficult. These problems of predicting vegetation response to climate warming and potential ways of testing model predictions are the subjects of this chapter.

  5. Source term evaluation for combustion modeling

    NASA Technical Reports Server (NTRS)

    Sussman, Myles A.

    1993-01-01

    A modification is developed for application to the source terms used in combustion modeling. The modification accounts for the error of the finite difference scheme in regions where chain-branching chemical reactions produce exponential growth of species densities. The modification is first applied to a one-dimensional scalar model problem. It is then generalized to multiple chemical species, and used in quasi-one-dimensional computations of shock-induced combustion in a channel. Grid refinement studies demonstrate the improved accuracy of the method using this modification. The algorithm is applied in two spatial dimensions and used in simulations of steady and unsteady shock-induced combustion. Comparisons with ballistic range experiments give confidence in the numerical technique and the 9-species hydrogen-air chemistry model.

  6. Evaluation of Model Fit in Cognitive Diagnosis Models

    ERIC Educational Resources Information Center

    Hu, Jinxiang; Miller, M. David; Huggins-Manley, Anne Corinne; Chen, Yi-Hsin

    2016-01-01

    Cognitive diagnosis models (CDMs) estimate student ability profiles using latent attributes. Model fit to the data needs to be ascertained in order to determine whether inferences from CDMs are valid. This study investigated the usefulness of some popular model fit statistics to detect CDM fit including relative fit indices (AIC, BIC, and CAIC),…

  7. Evaluating a Model of Youth Physical Activity

    PubMed Central

    Heitzler, Carrie D.; Lytle, Leslie A.; Erickson, Darin J.; Barr-Anderson, Daheia; Sirard, John R.; Story, Mary

    2011-01-01

    Objective To explore the relationship between social influences, self-efficacy, enjoyment, and barriers and physical activity. Methods Structural equation modeling examined relationships between parent and peer support, parent physical activity, individual perceptions, and objectively measured physical activity using accelerometers among a sample of youth aged 10–17 years (N=720). Results Peer support, parent physical activity, and perceived barriers were directly related to youth activity. The proposed model accounted for 14.7% of the variance in physical activity. Conclusions The results demonstrate a need to further explore additional individual, social, and environmental factors that may influence youth’s regular participation in physical activity. PMID:20524889

  8. Modeling procedures for handling qualities evaluation of flexible aircraft

    NASA Technical Reports Server (NTRS)

    Govindaraj, K. S.; Eulrich, B. J.; Chalk, C. R.

    1981-01-01

    This paper presents simplified modeling procedures to evaluate the impact of flexible modes and the unsteady aerodynamic effects on the handling qualities of Supersonic Cruise Aircraft (SCR). The modeling procedures involve obtaining reduced order transfer function models of SCR vehicles, including the important flexible mode responses and unsteady aerodynamic effects, and conversion of the transfer function models to time domain equations for use in simulations. The use of the modeling procedures is illustrated by a simple example.

  9. Evaluating a Model of Youth Physical Activity

    ERIC Educational Resources Information Center

    Heitzler, Carrie D.; Lytle, Leslie A.; Erickson, Darin J.; Barr-Anderson, Daheia; Sirard, John R.; Story, Mary

    2010-01-01

    Objective: To explore the relationship between social influences, self-efficacy, enjoyment, and barriers and physical activity. Methods: Structural equation modeling examined relationships between parent and peer support, parent physical activity, individual perceptions, and objectively measured physical activity using accelerometers among a…

  10. Evaluating the Pedagogical Potential of Hybrid Models

    ERIC Educational Resources Information Center

    Levin, Tzur; Levin, Ilya

    2013-01-01

    The paper examines how the use of hybrid models--which consist of interacting continuous and discrete processes--may assist in teaching system thinking. We report an experiment in which undergraduate students were asked to choose between a hybrid and a continuous solution for a number of control problems. A correlation has been found between…

  11. AERMOD: MODEL FORMULATION AND EVALUATION RESULTS

    EPA Science Inventory

    AERMOD is an advanced plume model that incorporates updated treatments of the boundary layer theory, understanding of turbulence and dispersion, and includes handling of terrain interactions. This paper presents an overview of AERMOD's features relative to ISCST3.

    AERM...

  12. Climate Model Evaluation in Distributed Environments.

    NASA Astrophysics Data System (ADS)

    Braverman, A. J.

    2014-12-01

    As the volume of climate-model-generated and observational data increases, it has become infeasible to perform large-scale comparisons of model output against observations by moving the data to a central location. Data reduction techniques, such as gridding or subsetting, can reduce data volume, but also sacrifice information about spatial and temporal variability that may be important for the comparison. Alternatively, it is generally recognized that "moving the computation to the data" is more efficient for leveraging large data sets. In the spirit of the latter approach, we describe a new methodology for comparing the time series structure of model-generated and observational time series when those data are stored on different computers. The method involves simulating the sampling distribution of the difference between a statistic computed from the model output and the same statistic computed from the observed data. This is accomplished with separate wavelet decompositions of the two time series on their respective local machines, and the transmission of only a very small set of information computed from the wavelet coefficients. The smaller that set is, the cheaper it is to transmit, but also the less accurate will be the result. From the standpoint of the analysis of distributed data, the main question concerns the nature of that trade-off. In this talk, we describe the comparison methodology and the results of some preliminary studies on the cost-accuracy trade-off.
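    As a rough illustration of the idea (not the authors' implementation), each site can run a Haar wavelet decomposition locally and transmit only a short list of per-level detail energies; the function names and the choice of energy as the exchanged summary are assumptions for the sketch:

    ```python
    def haar_energies(x):
        """Per-level Haar detail-coefficient energies for a series whose
        length is a power of two. Each site computes this locally; only
        this short list of numbers needs to be transmitted."""
        energies = []
        while len(x) > 1:
            pairs = list(zip(x[::2], x[1::2]))
            detail = [(a - b) / 2 for a, b in pairs]          # fine-scale differences
            energies.append(sum(d * d for d in detail) / len(detail))
            x = [(a + b) / 2 for a, b in pairs]               # coarser approximation
        return energies

    def energy_differences(model_summary, obs_summary):
        """Statistic formed from the two transmitted summaries: the
        per-level gap in detail energy between model and observations."""
        return [m - o for m, o in zip(model_summary, obs_summary)]

    model = haar_energies([1.0, -1.0, 1.0, -1.0])   # computed at the model site
    obs = haar_energies([1.0, 1.0, 1.0, 1.0])       # computed at the data site
    print(energy_differences(model, obs))           # prints [1.0, 0.0]
    ```

    A flat series has zero energy at every level, while an alternating series concentrates energy at the finest level, so the per-level differences localize the scale at which model and observations disagree while transmitting only log2(n) numbers per site.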

  13. Evaluation of a hydrological model based on Bidirectional Reach (BReach)

    NASA Astrophysics Data System (ADS)

    Van Eerdenbrugh, Katrien; Van Hoey, Stijn; Verhoest, Niko E. C.

    2016-04-01

    Evaluation and discrimination of model structures is crucial to ensure an appropriate use of hydrological models. When evaluating model results by aggregating their quality in (a subset of) individual observations, overall results of this analysis sometimes conceal important detailed information about model structural deficiencies. Analyzing model results within their local (time) context can uncover this detailed information. In this research, a methodology called Bidirectional Reach (BReach) is proposed to evaluate and analyze results of a hydrological model by assessing the maximum left and right reach in each observation point that is used for model evaluation. These maximum reaches express the capability of the model to describe a subset of the evaluation data both in the direction of the previous (left) and of the following data (right). This capability is evaluated on two levels. First, on the level of individual observations, the combination of a parameter set and an observation is classified as non-acceptable if the deviation between the accompanying model result and the measurement exceeds observational uncertainty. Second, the behavior in a sequence of observations is evaluated by means of a tolerance degree. This tolerance degree expresses the condition for satisfactory model behavior in a data series and is defined by the percentage of observations within this series that can have non-acceptable model results. Based on both criteria, the maximum left and right reaches of a model in an observation represent the data points in the direction of the previous and the following observations, respectively, beyond which none of the sampled parameter sets both are satisfactory and result in an acceptable deviation. After assessing these reaches for a variety of tolerance degrees, results can be plotted in a combined BReach plot that shows temporal changes in the behavior of model results.
The methodology is applied to a Probability Distributed Model (PDM) of the river
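    A minimal sketch of the reach computation may help fix ideas. It simplifies to a single parameter set (the paper samples many) and the function name is an assumption: starting from an observation, the window is extended leftward or rightward until the fraction of non-acceptable results exceeds the tolerance degree.

    ```python
    def reach(acceptable, i, tol, direction):
        """Furthest index reachable from observation i in the given direction
        (-1 = left, +1 = right) such that the fraction of non-acceptable
        results in the window never exceeds the tolerance degree `tol`.
        `acceptable` is a per-observation boolean classification."""
        j, fails, furthest = i, 0, i
        while 0 <= j < len(acceptable):
            if not acceptable[j]:
                fails += 1
            window = abs(j - i) + 1
            if fails / window > tol:      # tolerance degree violated: stop
                break
            furthest = j
            j += direction
        return furthest

    acc = [True, True, False, True, True, True, False, False, True]
    print(reach(acc, 4, 0.25, -1))  # left reach from index 4 -> 3
    print(reach(acc, 4, 0.25, +1))  # right reach from index 4 -> 5
    ```

    In the full method this is repeated over all sampled parameter sets and several tolerance degrees, and an observation's reach is taken over the best-performing parameter sets; plotting the reaches against time gives the combined BReach plot described above.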

  14. Automated expert modeling for automated student evaluation.

    SciTech Connect

    Abbott, Robert G.

    2006-01-01

    The 8th International Conference on Intelligent Tutoring Systems provides a leading international forum for the dissemination of original results in the design, implementation, and evaluation of intelligent tutoring systems and related areas. The conference draws researchers from a broad spectrum of disciplines ranging from artificial intelligence and cognitive science to pedagogy and educational psychology. The conference explores intelligent tutoring systems' increasing real-world impact on an increasingly global scale. Improved authoring tools and learning object standards enable fielding systems and curricula in real-world settings on an unprecedented scale. Researchers deploy ITS's in ever larger studies and increasingly use data from real students, tasks, and settings to guide new research. With high volumes of student interaction data, data mining, and machine learning, tutoring systems can learn from experience and improve their teaching performance. The increasing number of realistic evaluation studies also broadens researchers' knowledge about the educational contexts for which ITS's are best suited. At the same time, researchers explore how to expand and improve ITS/student communications, for example, how to achieve more flexible and responsive discourse with students, help students integrate Web resources into learning, use mobile technologies and games to enhance student motivation and learning, and address multicultural perspectives.

  15. Evaluation and development of physically-based embankment breach models

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The CEATI Dam Safety Interest Group (DSIG) working group on embankment erosion and breach modelling has evaluated three physically-based numerical models used to simulate embankment erosion and breach development. The three models identified by the group were considered to be good candidates for fu...

  16. Teachers' Development Model to Authentic Assessment by Empowerment Evaluation Approach

    ERIC Educational Resources Information Center

    Charoenchai, Charin; Phuseeorn, Songsak; Phengsawat, Waro

    2015-01-01

    The purposes of this study were 1) to study teachers' authentic assessment practices, their comprehension of authentic assessment, and their needs for authentic assessment development; 2) to create a teachers' development model; 3) to test the teachers' development model experimentally; and 4) to evaluate the effectiveness of the teachers' development model. The research is divided into 4…

  17. INVERSE MODEL ESTIMATION AND EVALUATION OF SEASONAL NH 3 EMISSIONS

    EPA Science Inventory

    The presentation topic is inverse modeling for the estimation and evaluation of emissions. The case study presented is the need for seasonal estimates of NH3 emissions for air quality modeling. The inverse modeling application approach is first described, and then the NH

  18. An Evaluation of Some Models for Culture-Fair Selection.

    ERIC Educational Resources Information Center

    Petersen, Nancy S.; Novick, Melvin R.

    Models proposed by Cleary, Thorndike, Cole, Linn, Einhorn and Bass, Darlington, and Gross and Su for analyzing bias in the use of tests in a selection strategy are surveyed. Several additional models are also introduced. The purpose is to describe, compare, contrast, and evaluate these models while extracting such useful ideas as may be found in…

  19. A Context-Restrictive Model for Program Evaluation?

    ERIC Educational Resources Information Center

    Swales, John M.

    1990-01-01

    Discusses a proposed "context adaptive" model for English-as-a-Second-Language program evaluation and suggests that the boundaries are set too narrowly within this model between phenomena and contexts and that the model of the Reading English for Science and Technology program in Guadalajara (Mexico) suffers from this constriction. (eight…

  20. [Applying multilevel models in evaluation of bioequivalence (I)].

    PubMed

    Liu, Qiao-lan; Shen, Zhuo-zhi; Chen, Feng; Li, Xiao-song; Yang, Min

    2009-12-01

    This study aims to explore the application value of multilevel models for bioequivalence evaluation. Using a real example of a 2 x 4 cross-over experimental design for evaluating the bioequivalence of an antihypertensive drug, this paper explores the complex variance components corresponding to the criteria statistics in the methods recommended by the FDA, here obtained through multilevel model analysis. Results are compared with those from the FDA standard Method of Moments, specifically on the feasibility and applicability of multilevel models in directly assessing the average bioequivalence (ABE), the population bioequivalence (PBE) and the individual bioequivalence (IBE). When measuring ln(AUC), all variance components of the test and reference groups, such as total variance (sigma(TT)(2) and sigma(TR)(2)), between-subject variance (sigma(BT)(2) and sigma(BR)(2)) and within-subject variance (sigma(WT)(2) and sigma(WR)(2)), estimated by simple 2-level models are very close to those estimated using the FDA Method of Moments. In practice, bioequivalence evaluation can be carried out directly by multilevel models, or by the FDA criteria based on variance components estimated from multilevel models. Both approaches produce consistent results. Multilevel models can be used to evaluate bioequivalence in cross-over test designs. Compared to the FDA methods, the multilevel approach is more flexible in decomposing total variance into subcomponents in order to evaluate the ABE, PBE and IBE. Multilevel models provide a new way into the practice of bioequivalence evaluation.
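The ABE decision referred to here can be illustrated with a minimal stdlib-only sketch. This is an assumption-laden simplification: a paired design on ln(AUC), a normal quantile in place of the exact t quantile, and none of the paper's multilevel variance decomposition:

```python
import math
from statistics import NormalDist, mean, stdev

def average_bioequivalence(ln_auc_test, ln_auc_ref, alpha=0.10):
    """Sketch of the average-bioequivalence (ABE) decision on ln(AUC).

    Paired design: each subject contributes one test and one reference
    value.  ABE is accepted if the 90% CI for the mean test-reference
    difference lies within [ln 0.8, ln 1.25].  A normal quantile stands
    in for the exact t quantile to keep the sketch stdlib-only.
    """
    diffs = [t - r for t, r in zip(ln_auc_test, ln_auc_ref)]
    d_bar = mean(diffs)
    se = stdev(diffs) / len(diffs) ** 0.5
    z = NormalDist().inv_cdf(1 - alpha / 2)   # about 1.645 for a 90% CI
    lo, hi = d_bar - z * se, d_bar + z * se
    return math.log(0.8) <= lo and hi <= math.log(1.25)
```

With a real crossover data set the sequence, period, and subject effects would enter the model, which is exactly where the multilevel formulation earns its keep.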

  1. Case study of an evaluation coaching model: exploring the role of the evaluator.

    PubMed

    Ensminger, David C; Kallemeyn, Leanne M; Rempert, Tania; Wade, James; Polanin, Megan

    2015-04-01

    This study examined the role of the external evaluator as a coach. More specifically, using an evaluative inquiry framework (Preskill & Torres, 1999a; Preskill & Torres, 1999b), it explored the types of coaching that an evaluator employed to promote individual, team and organizational learning. The study demonstrated that evaluation coaching provided a viable means for an organization with a limited budget to conduct evaluations through support of a coach. It also demonstrated how the coaching processes supported the development of evaluation capacity within the organization. By examining coaching models outside of the field of evaluation, this study identified two forms of coaching--results coaching and developmental coaching--that promoted evaluation capacity building and have not been previously discussed in the evaluation literature.

  2. A MULTILAYER BIOCHEMICAL DRY DEPOSITION MODEL 2. MODEL EVALUATION

    EPA Science Inventory

    The multilayer biochemical dry deposition model (MLBC) described in the accompanying paper was tested against half-hourly eddy correlation data from six field sites under a wide range of climate conditions with various plant types. Modeled CO2, O3, SO2...

  3. Evaluation of an Infiltration Model with Microchannels

    NASA Astrophysics Data System (ADS)

    Garcia-Serrana, M.; Gulliver, J. S.; Nieber, J. L.

    2015-12-01

    The goal of this research is to develop and demonstrate the means by which roadside drainage ditches and filter strips can be assigned the appropriate volume reduction credits by infiltration. These vegetated surfaces convey stormwater, infiltrate runoff, and filter and/or settle solids, and are often placed along roads and other impermeable surfaces. Infiltration rates are typically calculated by assuming that water flows as sheet flow over the slope. However, for most rainfall intensities water flow occurs in narrow and shallow micro-channels and concentrates in depressions. This channelization reduces the fraction of the soil surface covered with the water coming from the road. The non-uniform distribution of water along a hillslope directly affects infiltration. First, laboratory and field experiments were conducted to characterize the spatial pattern of flow for stormwater runoff entering onto the sloped surface of a drainage ditch. In the laboratory experiments, different micro-topographies were tested over bare sandy loam soil: a smooth surface, and three and five parallel rills. All the surfaces experienced erosion; the initially smooth surface developed a system of channels over time that increased runoff generation. On average, the initially smooth surfaces infiltrated 10% more volume than the initially rilled surfaces. The field experiments were performed on the side slopes of established roadside drainage ditches. Three rates of runoff from a road surface into the swale slope were tested, representing runoff from 1-, 2-, and 10-year storm events. The average percentage of input runoff water infiltrated in the 32 experiments was 67%, with a 21% standard deviation. Multiple measurements of saturated hydraulic conductivity were conducted to account for its spatial variability. Second, a rate-based coupled infiltration and overland flow model has been designed that calculates the stormwater infiltration efficiency of swales. The Green-Ampt-Mein-Larson assumptions were
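The Green-Ampt equation underlying the GAML approach relates cumulative infiltration F to time only implicitly, so it is commonly solved by fixed-point iteration. A sketch of the textbook ponded-infiltration form (illustrative parameter values; this is not the authors' coupled swale model):

```python
import math

def green_ampt_F(t, K, psi, dtheta, iters=50):
    """Cumulative infiltration F(t) from the Green-Ampt equation
    F = K*t + psi*dtheta*ln(1 + F/(psi*dtheta)), solved by fixed-point
    iteration (the map is a contraction, so iteration converges).

    K      saturated hydraulic conductivity [cm/h]
    psi    wetting-front suction head [cm]
    dtheta soil moisture deficit (porosity minus initial content) [-]
    t      time since ponding [h]
    """
    s = psi * dtheta                    # suction-deficit product [cm]
    F = K * t if K * t > 0 else 1e-6    # initial guess
    for _ in range(iters):
        F = K * t + s * math.log(1 + F / s)
    return F
```

The suction term makes F(t) exceed K*t at early times, which is why infiltration credits depend so strongly on how much of the surface the water actually wets.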

  4. Structural equation modeling: building and evaluating causal models: Chapter 8

    USGS Publications Warehouse

    Grace, James B.; Scheiner, Samuel M.; Schoolmaster, Donald R.

    2015-01-01

    Scientists frequently wish to study hypotheses about causal relationships, rather than just statistical associations. This chapter addresses the question of how scientists might approach this ambitious task. Here we describe structural equation modeling (SEM), a general modeling framework for the study of causal hypotheses. Our goals are to (a) concisely describe the methodology, (b) illustrate its utility for investigating ecological systems, and (c) provide guidance for its application. Throughout our presentation, we rely on a study of the effects of human activities on wetland ecosystems to make our description of methodology more tangible. We begin by presenting the fundamental principles of SEM, including both its distinguishing characteristics and the requirements for modeling hypotheses about causal networks. We then illustrate SEM procedures and offer guidelines for conducting SEM analyses. Our focus in this presentation is on basic modeling objectives and core techniques. Pointers to additional modeling options are also given.

  5. Evaluation Of Hemolysis Models Using A High Fidelity Blood Model

    NASA Astrophysics Data System (ADS)

    Ezzeldin, Hussein; de Tullio, Marco; Solares, Santiago; Balaras, Elias

    2012-11-01

    Red blood cell (RBC) hemolysis is a critical concern in the design of heart-assist devices, such as prosthetic heart valves (PHVs). To date, a few analytical and numerical models have been proposed to relate either hydrodynamic stresses or RBC strains, resulting from the external hydrodynamic loading, to the expected degree of hemolysis as a function of time. Such models are based on either ``lumped'' descriptions of fluid stresses or an abstract analytical-numerical representation of the RBC relying on simple geometrical assumptions. We introduce two new approaches based on an existing coarse-grained (CG) RBC structural model, which is utilized to explore the physics underlying each hemolysis model by applying a set of devised computational experiments. Then, all the models are subjected to pathlines calculated for a realistic PHV to predict the level of RBC trauma. Our results highlight the strengths and weaknesses of each approach and identify the key gaps that should be addressed in the development of new models. Finally, we present a two-layer CG model, coupling the spectrin network and the lipid bilayer, which provides invaluable information pertaining to local RBC strains and hence hemolysis. We acknowledge the support of NSF OCI-0904920 and CMMI-0841840 grants. Computing time was provided by XSEDE.

  6. Evaluation of Model Operational Analyses during DYNAMO

    NASA Astrophysics Data System (ADS)

    Ciesielski, Paul; Johnson, Richard

    2013-04-01

    A primary component of the observing system in the DYNAMO-CINDY2011-AMIE field campaign was an atmospheric sounding network comprising two sounding quadrilaterals, one north and one south of the equator over the central Indian Ocean. During the experiment a major effort was undertaken to ensure the real-time transmission of these data onto the GTS (Global Telecommunication System) for dissemination to the operational centers (ECMWF, NCEP, JMA, etc.). Preliminary estimates indicate that ~95% of the soundings from the enhanced sounding network were successfully transmitted and potentially used in their data assimilation systems. Because of the wide use of operational and reanalysis products (e.g., in process studies, initializing numerical simulations, construction of large-scale forcing datasets for CRMs, etc.), their validity will be examined by comparing a variety of basic and diagnosed fields from two operational analyses (ECMWF and NCEP) to similar analyses based solely on sounding observations. Particular attention will be given to the vertical structures of apparent heating (Q1) and drying (Q2) from the operational analyses (OA), which are strongly influenced by cumulus parameterizations, a source of model infidelity. Preliminary results indicate that the OA products did a reasonable job of capturing the mean and temporal characteristics of convection during the DYNAMO enhanced observing period, which included the passage of two significant MJO events during the October-November 2011 period. For example, temporal correlations between Q2-budget-derived rainfall from the OA products and that estimated from the TRMM satellite (i.e., the 3B42V7 product) were greater than 0.9 over the Northern Sounding Array of DYNAMO. However, closer inspection of the budget profiles shows notable differences between the OA products and the sounding-derived results in low-level (surface to 700 hPa) heating and drying structures. This presentation will examine these differences and
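The apparent heating and drying diagnostics named above are, in the standard Yanai-type budget formulation (assumed here; the presentation's exact forms are not shown in the record),

```latex
Q_1 = c_p \left(\frac{p}{p_0}\right)^{\kappa}
      \left( \frac{\partial \theta}{\partial t}
           + \mathbf{V} \cdot \nabla \theta
           + \omega \frac{\partial \theta}{\partial p} \right),
\qquad
Q_2 = -L \left( \frac{\partial q}{\partial t}
             + \mathbf{V} \cdot \nabla q
             + \omega \frac{\partial q}{\partial p} \right)
```

where theta is potential temperature, q specific humidity, omega pressure vertical velocity, and L the latent heat of vaporization. Vertically integrating Q2 over the column yields a precipitation-minus-evaporation estimate, which is the basis of the Q2-budget-derived rainfall compared against TRMM above.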

  7. EVALUATION OF MULTIPLE PHARMACOKINETIC MODELING STRUCTURES FOR TRICHLOROETHYLENE

    EPA Science Inventory

    A series of PBPK models were developed for trichloroethylene (TCE) to evaluate biological processes that may affect the absorption, distribution, metabolism and excretion (ADME) of TCE and its metabolites.

  8. Solid rocket booster performance evaluation model. Volume 4: Program listing

    NASA Technical Reports Server (NTRS)

    1974-01-01

    All subprograms or routines associated with the solid rocket booster performance evaluation model are indexed in this computer listing. An alphanumeric list of each routine in the index is provided in a table of contents.

  9. Incorporating principal component analysis into air quality model evaluation

    EPA Science Inventory

    The efficacy of standard air quality model evaluation techniques is becoming compromised as the simulation periods continue to lengthen in response to ever increasing computing capacity. Accordingly, the purpose of this paper is to demonstrate a statistical approach called Princi...

  10. Evaluating Rapid Models for High-Throughput Exposure Forecasting (SOT)

    EPA Science Inventory

    High throughput exposure screening models can provide quantitative predictions for thousands of chemicals; however these predictions must be systematically evaluated for predictive ability. Without the capability to make quantitative, albeit uncertain, forecasts of exposure, the ...

  11. Faculty performance evaluation: the CIPP-SAPS model.

    PubMed

    Mitcham, M

    1981-11-01

    The issues of faculty performance evaluation for allied health professionals are addressed. Daniel Stufflebeam's CIPP (context-input-process-product) model is introduced and its development into a CIPP-SAPS (self-administrative-peer-student) model is pursued. Data sources for the SAPS portion of the model are discussed. A suggestion for the use of the CIPP-SAPS model within a teaching contract plan is explored.

  12. An Information Search Model of Evaluative Concerns in Intergroup Interaction

    ERIC Educational Resources Information Center

    Vorauer, Jacquie D.

    2006-01-01

    In an information search model, evaluative concerns during intergroup interaction are conceptualized as a joint function of uncertainty regarding and importance attached to out-group members' views of oneself. High uncertainty generally fosters evaluative concerns during intergroup exchanges. Importance depends on whether out-group members'…

  13. Interrater Agreement Evaluation: A Latent Variable Modeling Approach

    ERIC Educational Resources Information Center

    Raykov, Tenko; Dimitrov, Dimiter M.; von Eye, Alexander; Marcoulides, George A.

    2013-01-01

    A latent variable modeling method for evaluation of interrater agreement is outlined. The procedure is useful for point and interval estimation of the degree of agreement among a given set of judges evaluating a group of targets. In addition, the approach allows one to test for identity in underlying thresholds across raters as well as to identify…

  14. AQMEII: A New International Initiative on Air Quality Model Evaluation

    EPA Science Inventory

    We provide a conceptual view of the process of evaluating regional-scale three-dimensional numerical photochemical air quality modeling systems, based on an examination of existing approaches to the evaluation of such systems as they are currently used in a variety of application....

  15. Information and complexity measures for hydrologic model evaluation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Hydrological models are commonly evaluated through the residual-based performance measures such as the root-mean square error or efficiency criteria. Such measures, however, do not evaluate the degree of similarity of patterns in simulated and measured time series. The objective of this study was to...

  16. Evaluating an English Language Teacher Education Program through Peacock's Model

    ERIC Educational Resources Information Center

    Coskun, Abdullah; Daloglu, Aysegul

    2010-01-01

    The main aim of this study is to draw attention to the importance of program evaluation for teacher education programs and to reveal the pre-service English teacher education program components that are in need of improvement or maintenance both from teachers' and students' perspectives by using Peacock's (2009) recent evaluation model in a…

  17. An Emerging Model for Student Feedback: Electronic Distributed Evaluation

    ERIC Educational Resources Information Center

    Brunk-Chavez, Beth; Arrigucci, Annette

    2012-01-01

    In this article we address several issues and challenges that the evaluation of writing presents individual instructors and composition programs as a whole. We present electronic distributed evaluation, or EDE, as an emerging model for feedback on student writing and describe how it was integrated into our program's course redesign. Because the…

  18. Estimating an Evaluation Utilization Model Using Conjoint Measurement and Analysis.

    ERIC Educational Resources Information Center

    Johnson, R. Burke

    1995-01-01

    The conjoint approach to measurement and analysis is demonstrated with a test of an evaluation utilization process-model that includes two endogenous variables (predicted participation and predicted instrumental evaluation). Conjoint measurement involves having respondents rate profiles that are analogues to concepts based on cells in a factorial…

  19. Regime-based evaluation of cloudiness in CMIP5 models

    NASA Astrophysics Data System (ADS)

    Jin, Daeho; Oreopoulos, Lazaros; Lee, Dongmin

    2016-04-01

    The concept of cloud regimes (CRs) is used to develop a framework for evaluating the cloudiness of 12 models from phase 5 of the Coupled Model Intercomparison Project (CMIP5). Reference CRs come from existing global International Satellite Cloud Climatology Project (ISCCP) weather states. The evaluation is made possible by the implementation in several CMIP5 models of the ISCCP simulator, which generates in each grid cell daily joint histograms of cloud optical thickness and cloud top pressure. Model performance is assessed with several metrics such as CR global cloud fraction (CF), CR relative frequency of occurrence (RFO), their product [long-term average total cloud amount (TCA)], cross-correlations of CR RFO maps, and a metric of resemblance between model and ISCCP CRs. In terms of CR global RFO, arguably the most fundamental metric, the models perform unsatisfactorily overall, except for CRs representing thick storm clouds. Because model CR CF is internally constrained by our method, RFO discrepancies also yield substantial TCA errors. Our results support previous findings that CMIP5 models underestimate cloudiness. The multi-model mean performs well in matching observed RFO maps for many CRs, but is still not the best for this or other metrics. When overall performance across all CRs is assessed, some models, despite shortcomings, apparently outperform Moderate Resolution Imaging Spectroradiometer (MODIS) cloud observations when the latter are evaluated against ISCCP as if they were another model's output. Lastly, contrasting cloud simulation performance with each model's equilibrium climate sensitivity, in order to gain insight into whether good cloud simulation pairs with particular values of this parameter, yields no clear conclusions.
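The TCA and RFO-map metrics described here combine per-regime quantities in a simple way; a minimal sketch with toy values (not the paper's data):

```python
from statistics import mean

def total_cloud_amount(cf, rfo):
    """Long-term average total cloud amount (TCA) as the RFO-weighted
    sum of per-regime cloud fractions.  cf and rfo are per-regime
    sequences; rfo should sum to 1."""
    return sum(c * r for c, r in zip(cf, rfo))

def rfo_map_correlation(model_rfo, obs_rfo):
    """Pearson cross-correlation of model vs. observed RFO maps,
    flattened to 1-D sequences of grid-cell values."""
    mx, my = mean(model_rfo), mean(obs_rfo)
    num = sum((x - mx) * (y - my) for x, y in zip(model_rfo, obs_rfo))
    dx = sum((x - mx) ** 2 for x in model_rfo) ** 0.5
    dy = sum((y - my) ** 2 for y in obs_rfo) ** 0.5
    return num / (dx * dy)
```

Because TCA is the product of CF and RFO summed over regimes, a model can match total cloudiness while badly misallocating it across regimes, which is why the per-regime RFO metric is described as the more fundamental one.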

  20. An evaluation of recent internal field models. [of earth magnetism

    NASA Technical Reports Server (NTRS)

    Mead, G. D.

    1979-01-01

    The paper reviews the current status of internal field models and evaluates several recently published models by comparing their predictions with annual means of the magnetic field measured at 140 magnetic observatories from 1973 to 1977. Three of the four models studied, viz. AWC/75, IGS/75, and Pogo 8/71, were nearly equal in their ability to predict the magnitude and direction of the current field. The fourth model, IGRF 1975, was significantly poorer in its ability to predict the current field. All models seemed to be able to extrapolate predictions quite well several years outside the data range used to construct the models.

  1. Statistical evaluation and choice of soil water retention models

    NASA Astrophysics Data System (ADS)

    Lennartz, Franz; Müller, Hans-Otfried; Nollau, Volker; Schmitz, Gerd H.; El-Shehawy, Shaban A.

    2008-12-01

    This paper presents the results of statistical investigations for the evaluation of soil water retention models (SWRMs). We employed three different methods developed for model selection in the field of nonlinear regression, namely, simulation studies, analysis of nonlinearity measures, and resampling strategies such as cross validation and bootstrap methods. Using these methods together with small data sets, we evaluated the performance of three exemplary types of SWRMs with respect to their parameter properties and the reliability of model predictions. The resulting rankings of models show that the favorable models are characterized by few parameters with an almost linear estimation behavior and close-to-symmetric distributions. To further demonstrate the potential of the statistical methods in the field of model selection, a modification of the four-parameter van Genuchten model is proposed which shows significantly improved and robust statistical properties.
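The four-parameter van Genuchten model mentioned above has a standard closed form; a sketch with the common Mualem constraint m = 1 - 1/n (the paper's proposed modification is not shown in this record, so only the baseline model is reproduced):

```python
def van_genuchten(h, theta_r, theta_s, alpha, n):
    """Volumetric water content theta(h) from the four-parameter
    van Genuchten soil water retention model.

    h        suction head, positive, in units of 1/alpha
    theta_r  residual water content [-]
    theta_s  saturated water content [-]
    alpha    inverse air-entry scaling parameter [1/length]
    n        pore-size distribution shape parameter (n > 1)
    """
    m = 1.0 - 1.0 / n                    # Mualem constraint
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m
```

The near-linear estimation behavior the authors favor depends on how these parameters interact near a given data set; alpha and n in particular are often strongly correlated, which is one motivation for modifying the parameterization.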

  2. Evaluation of performance of predictive models for deoxynivalenol in wheat.

    PubMed

    van der Fels-Klerx, H J

    2014-02-01

    The aim of this study was to evaluate the performance of two predictive models for deoxynivalenol contamination of wheat at harvest in the Netherlands, including the use of weather forecast data and external model validation. Data were collected in a different year and from different wheat fields than the data used for model development. The two models were run for six preset scenarios, varying in the period for which weather forecast data were used, from zero days (historical data only) to a 13-day period around wheat flowering. Model predictions using forecast weather data were compared to those using historical data. Furthermore, model predictions using historical weather data were evaluated against observed deoxynivalenol contamination of the wheat fields. Results showed that the use of weather forecast data rather than observed data only slightly influenced model predictions. The percentage of correct model predictions, given a threshold of 1,250 μg/kg (the legal limit in the European Union), was about 95% for the two models. However, only three samples had a deoxynivalenol concentration above this threshold, and the models were not able to predict these samples correctly. It was concluded that two-week weather forecast data can reliably be used in descriptive models for deoxynivalenol contamination of wheat, resulting in more timely model predictions. The two models are able to predict lower deoxynivalenol contamination correctly, but model performance in situations with high deoxynivalenol contamination needs to be further validated. This will require years with environmental conditions conducive to deoxynivalenol contamination of wheat.
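The percent-correct metric used in this kind of evaluation is a threshold agreement rate: the model and the observation must fall on the same side of the legal limit. A sketch with hypothetical values (the study's data are not reproduced):

```python
def percent_correct(predicted, observed, threshold=1250.0):
    """Fraction of fields where model and observation agree on which
    side of the regulatory limit (1,250 ug/kg here) they fall."""
    agree = sum((p > threshold) == (o > threshold)
                for p, o in zip(predicted, observed))
    return agree / len(predicted)
```

As the abstract notes, this metric is dominated by the majority class: with only three exceedances in the sample, a model that never predicts an exceedance still scores near 95%, so high percent-correct alone does not validate performance on contaminated fields.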

  3. An Evaluation of Unsaturated Flow Models in an Arid Climate

    SciTech Connect

    Dixon, J.

    1999-12-01

    The objective of this study was to evaluate the effectiveness of two unsaturated flow models in arid regions. The area selected for the study was the Area 5 Radioactive Waste Management Site (RWMS) at the Nevada Test Site in Nye County, Nevada. The two models selected for this evaluation were HYDRUS-1D [Simunek et al., 1998] and the SHAW model [Flerchinger and Saxton, 1989]. Approximately 5 years of soil-water and atmospheric data collected from an instrumented weighing lysimeter site near the RWMS were used for building the models with actual initial and boundary conditions representative of the site. Physical processes affecting the site and model performance were explored. Model performance was based on a detailed sensitivity analysis and ultimately on storage comparisons. During the process of developing descriptive model input, procedures for converting hydraulic parameters for each model were explored. In addition, the compilation of atmospheric data collected at the site became a useful tool for developing predictive functions for future studies. The final model results were used to evaluate the capacities of the HYDRUS and SHAW models for predicting soil-moisture movement and variable surface phenomena for bare soil conditions in the arid vadose zone. The development of calibrated models along with the atmospheric and soil data collected at the site provide useful information for predicting future site performance at the RWMS.

  4. Putting Theory-Oriented Evaluation into Practice: A Logic Model Approach for Evaluating SIMGAME

    ERIC Educational Resources Information Center

    Hense, Jan; Kriz, Willy Christian; Wolfe, Joseph

    2009-01-01

    Evaluations of gaming simulations and business games as teaching devices are typically end-state driven. This emphasis fails to detect how the simulation being evaluated does or does not bring about its desired consequences. This paper advances the use of a logic model approach, which possesses a holistic perspective that aims at including all…

  5. Evaluating Organic Aerosol Model Performance: Impact of two Embedded Assumptions

    NASA Astrophysics Data System (ADS)

    Jiang, W.; Giroux, E.; Roth, H.; Yin, D.

    2004-05-01

    Organic aerosols are important due to their abundance in the polluted lower atmosphere and their impact on human health and vegetation. However, modeling organic aerosols is a very challenging task because of the complexity of aerosol composition, structure, and formation processes. Assumptions and their associated uncertainties in both models and measurement data make model performance evaluation a truly demanding job. Although some assumptions are obvious, others are hidden and embedded, and can significantly impact modeling results, possibly even changing conclusions about model performance. This paper focuses on analyzing the impact of two embedded assumptions on evaluation of organic aerosol model performance. One assumption is about the enthalpy of vaporization widely used in various secondary organic aerosol (SOA) algorithms. The other is about the conversion factor used to obtain ambient organic aerosol concentrations from measured organic carbon. These two assumptions reflect uncertainties in the model and in the ambient measurement data, respectively. For illustration purposes, various choices of the assumed values are implemented in the evaluation process for an air quality model based on CMAQ (the Community Multiscale Air Quality Model). Model simulations are conducted for the Lower Fraser Valley covering Southwest British Columbia, Canada, and Northwest Washington, United States, for a historical pollution episode in 1993. To understand the impact of the assumed enthalpy of vaporization on modeling results, its impact on instantaneous organic aerosol yields (IAY) through partitioning coefficients is analysed first. The analysis shows that utilizing different enthalpy of vaporization values causes changes in the shapes of IAY curves and in the response of SOA formation capability of reactive organic gases to temperature variations. These changes are then carried into the air quality model and cause substantial changes in the organic aerosol modeling

  6. Logistics Enterprise Evaluation Model Based On Fuzzy Clustering Analysis

    NASA Astrophysics Data System (ADS)

    Fu, Pei-hua; Yin, Hong-bo

    In this thesis, we introduce an evaluation model for logistics enterprises based on a fuzzy clustering algorithm. First of all, we present the evaluation index system, which contains basic information, management level, technical strength, transport capacity, informatization level, market competition, and customer service. We decided the index weights according to the grades and evaluated the integrated ability of the logistics enterprises using the fuzzy cluster analysis method. We describe the system evaluation module and the cluster analysis module in detail, including how these two modules were implemented. Finally, we give the results of the system.
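The fuzzy-cluster step can be sketched with one membership update of standard fuzzy c-means applied to vectors of weighted index scores (toy data; the thesis's own index system, weights, and algorithmic details are assumptions here, not reproduced from it):

```python
def fcm_memberships(points, centers, m=2.0):
    """One membership-update step of fuzzy c-means.

    u[i][k] is the degree to which enterprise i (a vector of weighted
    index scores) belongs to cluster k.  Standard FCM update:
    u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1)).
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    u = []
    for p in points:
        d = [max(dist(p, c), 1e-12) for c in centers]  # avoid div by zero
        row = []
        for k in range(len(centers)):
            s = sum((d[k] / d[j]) ** (2.0 / (m - 1.0))
                    for j in range(len(centers)))
            row.append(1.0 / s)
        u.append(row)
    return u
```

A full run would alternate this update with recomputing cluster centers as membership-weighted means until convergence; each enterprise's final membership row then grades its integrated ability against the cluster prototypes.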

  7. Outline and Preliminary Evaluation of the Classical Digital Library Model.

    ERIC Educational Resources Information Center

    MacCall, Steven L.; Cleveland, Ana D.; Gibson, Ian E.

    1999-01-01

    Outlines the classical digital library model, which is derived from traditional practices of library and information science professionals, as an alternative to the database retrieval model. Reports preliminary results from an evaluation study of library and information professionals and endusers involved with primary care medicine. (AEF)

  8. An Evaluation Model for Competency Based Teacher Preparatory Programs.

    ERIC Educational Resources Information Center

    Denton, Jon J.

    This discussion describes an evaluation model designed to complement a curriculum development project, the primary goal of which is to structure a performance based program for preservice teachers. Data collected from the implementation of this four-phase model can be used to make decisions for developing and changing performance objectives and…

  9. NEW CATEGORICAL METRICS FOR AIR QUALITY MODEL EVALUATION

    EPA Science Inventory

    Traditional categorical metrics used in model evaluations are "clear-cut" measures in that the model's ability to predict an exceedance is defined by a fixed threshold concentration and the metrics are defined by observation-forecast sets that are paired both in space and time. T...

  10. The Impact of Spatial Correlation and Incommensurability on Model Evaluation

    EPA Science Inventory

    Standard evaluations of air quality models rely heavily on a direct comparison of monitoring data matched with the model output for the grid cell containing the monitor’s location. While such techniques may be adequate for some applications, conclusions are limited by such facto...

  11. Evaluation of radiation partitioning models at Bushland, Texas

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Crop growth and soil-vegetation-atmosphere continuum energy transfer models often require estimates of net radiation components, such as photosynthetic, solar, and longwave radiation to both the canopy and soil. We evaluated the 1998 radiation partitioning model of Campbell and Norman, herein referr...

  12. Faculty Performance Evaluation: The CIPP-SAPS Model.

    ERIC Educational Resources Information Center

    Mitcham, Maralynne

    1981-01-01

    The issues of faculty performance evaluation for allied health professionals are addressed. Daniel Stufflebeam's CIPP (context-input-process-product) model is introduced and its development into a CIPP-SAPS (self-administrative-peer-student) model is pursued. (Author/CT)

  13. Solid rocket booster performance evaluation model. Volume 2: Users manual

    NASA Technical Reports Server (NTRS)

    1974-01-01

    This users manual for the solid rocket booster performance evaluation model (SRB-II) contains descriptions of the model, the program options, the required program inputs, the program output format and the program error messages. SRB-II is written in FORTRAN and is operational on both the IBM 370/155 and the MSFC UNIVAC 1108 computers.

  14. A Model for Integrating Program Development and Evaluation.

    ERIC Educational Resources Information Center

    Brown, J. Lynne; Kiernan, Nancy Ellen

    1998-01-01

    A communication model consisting of input from target audience, program delivery, and outcomes (receivers' perception of message) was applied to an osteoporosis-prevention program for working mothers ages 21 to 45. Due to a poor completion rate on evaluation instruments and the failure of participants to learn key concepts, the model was used to improve…

  15. A Model Vocational Evaluation Center in a Public School System.

    ERIC Educational Resources Information Center

    Quinones, Wm. A.

    A model public school vocational evaluation center for handicapped students is described. The model's battery of work samples and tests of vocational aptitudes, personal and social adjustment, physical capacities, and work habits are listed. In addition, observation of such work behaviors as remembering instructions, correcting errors, reacting to…

  16. An Alternative Model for the Evaluation of Change. Technical Report.

    ERIC Educational Resources Information Center

    Corder-Bolz, Charles R.

    Previous research has indicated that most mathematical models used to evaluate change due to experimental treatment are misleading because the procedures artificially reduced one of the estimates of error variance. Two modified models, based upon the expected values of the variance of scores and difference scores, were developed from a new…

  17. The Pantex Process model: Formulations of the evaluation planning module

    SciTech Connect

    JONES,DEAN A.; LAWTON,CRAIG R.; LIST,GEORGE FISHER; TURNQUIST,MARK ALAN

    1999-12-01

    This paper describes formulations of the Evaluation Planning Module that have been developed since its inception. This module is one of the core algorithms in the Pantex Process Model, a computerized model to support production planning in a complex manufacturing system at the Pantex Plant, a US Department of Energy facility. Pantex is responsible for three major DOE programs -- nuclear weapons disposal, stockpile evaluation, and stockpile maintenance -- using shared facilities, technicians, and equipment. The model reflects the interactions of scheduling constraints, material flow constraints, and the availability of required technicians and facilities.

  18. Effects of distributed database modeling on evaluation of transaction rollbacks

    NASA Technical Reports Server (NTRS)

    Mukkamala, Ravi

    1991-01-01

    Data distribution, degree of data replication, and transaction access patterns are key factors in determining the performance of distributed database systems. In order to simplify the evaluation of performance measures, database designers and researchers tend to make simplistic assumptions about the system. The effect of modeling assumptions is studied for one such measure, the number of transaction rollbacks, in a partitioned distributed database system. Six probabilistic models are developed, and expressions for the number of rollbacks are derived under each of these models. Essentially, the models differ in terms of the available system information. The analytical results so obtained are compared to results from simulation. From this, it is concluded that most of the probabilistic models yield overly conservative estimates of the number of rollbacks. The effect of transaction commutativity on system throughput is also grossly underestimated when such models are employed.
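
A toy Monte Carlo sketch of the kind of rollback simulation that such probabilistic models are compared against. The conflict rule (a transaction rolls back if its item set intersects any concurrent transaction's), the concurrency window, and all parameter values below are hypothetical illustrations, not taken from the paper:

```python
import random

def simulate_rollbacks(n_transactions, n_items, items_per_txn, window=5, seed=42):
    """Toy model: each transaction accesses a random set of data items and is
    rolled back if that set conflicts (intersects) with any of the last
    `window` non-rolled-back transactions still considered concurrent."""
    rng = random.Random(seed)
    active = []          # item sets of recent, concurrent transactions
    rollbacks = 0
    for _ in range(n_transactions):
        txn = set(rng.sample(range(n_items), items_per_txn))
        if any(txn & other for other in active):
            rollbacks += 1                     # conflict -> rollback
        else:
            active.append(txn)
            if len(active) > window:
                active.pop(0)                  # oldest transaction commits
    return rollbacks

rb = simulate_rollbacks(n_transactions=1000, n_items=200, items_per_txn=4)
rate = rb / 1000
```

Varying how much "system information" the conflict test uses (exact item sets versus only set sizes, as in the paper's coarser models) is what separates optimistic from conservative rollback estimates.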

  19. Evaluation of dense-gas simulation models. Final report

    SciTech Connect

    Zapert, J.G.; Londergan, R.J.; Thistle, H.

    1991-05-01

    The report describes the approach and presents the results of an evaluation study of seven dense gas simulation models using data from three experimental programs. The models evaluated are two in the public domain (DEGADIS and SLAB) and five that are proprietary (AIRTOX, CHARM, FOCUS, SAFEMODE, and TRACE). The data bases used in the evaluation are the Desert Tortoise Pressurized Ammonia Releases, Burro Liquefied Natural Gas Spill Tests and the Goldfish Anhydrous Hydrofluoric Acid Spill Experiments. A uniform set of performance statistics is calculated and tabulated to compare maximum observed concentrations and cloud half-widths to those predicted by each model. None of the models demonstrated consistently good performance across all three experimental programs.
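
Paired observed-versus-predicted statistics of the kind described above can be sketched as follows. Fractional bias (FB) and normalized mean square error (NMSE) are standard dispersion-model evaluation metrics; the concentration values here are hypothetical, and the abstract does not specify which statistics the report actually uses:

```python
def mean(xs):
    return sum(xs) / len(xs)

def fractional_bias(obs, pred):
    # FB = 2 * (mean_obs - mean_pred) / (mean_obs + mean_pred); 0 is unbiased
    return 2.0 * (mean(obs) - mean(pred)) / (mean(obs) + mean(pred))

def nmse(obs, pred):
    # NMSE = mean((obs - pred)^2) / (mean_obs * mean_pred); 0 is a perfect fit
    return mean([(o - p) ** 2 for o, p in zip(obs, pred)]) / (mean(obs) * mean(pred))

# Hypothetical maximum arc-wise concentrations (ppm) for one model, four trials
observed  = [120.0, 85.0, 60.0, 200.0]
predicted = [100.0, 90.0, 45.0, 260.0]

fb = fractional_bias(observed, predicted)
err = nmse(observed, predicted)
```

Tabulating FB and NMSE per model and per experimental program is one common way to show that no model performs consistently well across all programs.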

  20. Evaluation of the suicide prevention program in Kaohsiung City, Taiwan, using the CIPP evaluation model.

    PubMed

    Ho, Wen-Wei; Chen, Wei-Jen; Ho, Chi-Kung; Lee, Ming-Been; Chen, Cheng-Chung; Chou, Frank Huang-Chih

    2011-10-01

    The purpose of this study is to evaluate the effectiveness of the Kaohsiung Suicide Prevention Center (KSPC) of Kaohsiung City, Taiwan, during the period from June 2005 to June 2008. We used a modified CIPP evaluation model to evaluate the suicide prevention program in Kaohsiung. Four evaluation models were applied to evaluate the KSPC: a context evaluation of the background and origin of the center, an input evaluation of the resources of the center, a process evaluation of the activities of the suicide prevention project, and a product evaluation of the ascertainment of project objectives. The context evaluation revealed that the task of the KSPC is to lower mortality. The input evaluation assessed the efficiency of manpower and the grants supported by Taiwan's Department of Health and Kaohsiung City government's Bureau of Health. In the process evaluation, we inspected the suicide prevention strategies of the KSPC, which are a modified version of the National Suicide Prevention Strategy of Australia. In the product evaluation, four major objectives were evaluated: (1) the suicide rate in Kaohsiung, (2) the reported suicidal cases, (3) crisis line calls, and (4) telephone counseling. From 2005 to 2008, the number of telephone counseling sessions (1,432, 2,010, 7,051, 12,517) and crisis line calls (0, 4,320, 10,339, 14,502) increased. Because of the increase in reported suicidal cases (1,328, 2,625, 2,795, and 2,989, respectively), cases that were underreported in the past, we have increasingly been able to contact the people who need help. During this same time period, the half-year suicide re-attempt rate decreased significantly for those who received services, and the completed suicide rate (21.4, 20.1, 18.2, and 17.8 per 100,000 population, respectively) also decreased. The suicide prevention program in Kaohsiung is worth implementing on a continual basis if financial constraints are addressed.

  1. Human Thermal Model Evaluation Using the JSC Human Thermal Database

    NASA Technical Reports Server (NTRS)

    Bue, Grant; Makinen, Janice; Cognata, Thomas

    2012-01-01

    Human thermal modeling has considerable long-term utility to human space flight. Such models provide a tool to predict crew survivability in support of vehicle design and to evaluate crew response in untested space environments. It is to the benefit of any such model not only to collect relevant experimental data to correlate it against, but also to maintain an experimental standard or benchmark for future development in a readily and rapidly searchable, software-accessible format. The human thermal database project is intended to do just that: to collect relevant data from literature and experimentation and to store the data in a database structure for immediate and future use as a benchmark to judge human thermal models against, to identify model strengths and weaknesses, to support model development and improve correlation, and to statistically quantify a model's predictive quality. The human thermal database developed at the Johnson Space Center (JSC) is intended to evaluate a set of widely used human thermal models. This set includes the Wissler human thermal model, a model that has been widely used to predict the human thermoregulatory response to a variety of cold and hot environments. These models are statistically compared to the current database, which contains experiments on human subjects primarily in air, drawn from a literature survey spanning 1953 to 2004 and from a suited experiment recently performed by the authors, for a quantitative study of the relative strength and predictive quality of the models.

  2. Evaluating Climate Models with MISR Joint Histograms of Cloud Properties

    NASA Astrophysics Data System (ADS)

    Ackerman, T. P.; Marchand, R.; Hillman, B. R.

    2009-12-01

    Following the approach pioneered by ISCCP, joint histograms of cloud optical depth and cloud-top height (pressure) are being produced by MISR and MODIS for the evaluation of climate models. There are significant differences among the histograms due to the differences in sensors and retrieval algorithms. These differences provide insight into the properties of the observed cloud fields. MISR retrievals of stereo cloud height, in particular, provide a unique perspective on the distribution of cloud heights. MISR, due to its stereo imaging, is more effective in identifying low clouds and retrieving their height, while MODIS is a more reliable detector of high clouds. In analogy to the ISCCP simulator, cloud fields generated in global climate models can be processed through a MISR simulator, which we have developed, to produce joint histograms of model clouds. Comparing observed joint histograms with simulated joint histograms allows us to determine where the model produces clouds well and where it does not. We have applied this technique to results from the Multiscale Modeling Framework (MMF; also called the “superparameterization” model) and are currently applying it to the NCAR Community Atmosphere Model and the GFDL AM2 model. The MMF computes cloud properties using an embedded 2D cloud-resolving model (CRM) in each grid square of the large-scale climate model. We have run versions of the MMF with CRM horizontal resolutions of 4 km and 1 km and with 26 and 52 vertical levels in order to explore the effect of resolution on model clouds. Comparison with MISR joint histograms shows that the model run with 52 levels at 1 km resolution provides an improved simulation, but low cloud amounts are still considerably lower than observed. We discuss possible solutions to this problem. Evaluations of the CAM and AM2 models are in progress and will be presented.
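
A minimal sketch of building such a joint histogram from per-pixel retrievals. The bin edges below are ISCCP-like illustrative values and the retrievals are hypothetical; the actual MISR/MODIS binning may differ:

```python
def joint_histogram(tau, height, tau_edges, h_edges):
    """Count cloudy pixels falling in each (optical depth, cloud-top height) bin."""
    hist = [[0] * (len(h_edges) - 1) for _ in range(len(tau_edges) - 1)]
    for t, h in zip(tau, height):
        for i in range(len(tau_edges) - 1):
            if tau_edges[i] <= t < tau_edges[i + 1]:
                for j in range(len(h_edges) - 1):
                    if h_edges[j] <= h < h_edges[j + 1]:
                        hist[i][j] += 1
    return hist

# Hypothetical per-pixel retrievals: optical depth and cloud-top height (km)
tau    = [0.5, 3.0, 10.0, 25.0, 1.2]
height = [1.0, 2.5, 8.0, 12.0, 0.8]
tau_edges = [0, 1.3, 3.6, 9.4, 23, 60]   # ISCCP-like optical-depth bin edges
h_edges   = [0, 2, 4, 6, 8, 10, 16]      # cloud-top height bin edges (km)

hist = joint_histogram(tau, height, tau_edges, h_edges)
total = sum(sum(row) for row in hist)
```

The same binning applied to simulator output from a model yields a directly comparable histogram, so observed-minus-modeled differences can be examined bin by bin.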

  3. Evaluating Vocational Educators' Training Programs: A Kirkpatrick-Inspired Evaluation Model

    ERIC Educational Resources Information Center

    Ravicchio, Fabrizio; Trentin, Guglielmo

    2015-01-01

    The aim of the article is to describe the assessment model adopted by the SCINTILLA Project, a project in Italy aimed at the online vocational training of young, seriously-disabled subjects and their subsequent work inclusion in smart-work mode. It will thus describe the model worked out for evaluation of the training program conceived for the…

  4. A model to evaluate quality and effectiveness of disease management.

    PubMed

    Lemmens, K M M; Nieboer, A P; van Schayck, C P; Asin, J D; Huijsman, R

    2008-12-01

    Disease management has emerged as a new strategy to enhance quality of care for patients suffering from chronic conditions, and to control healthcare costs. So far, however, the effects of this strategy remain unclear. Although current models define the concept of disease management, they do not provide a systematic development or an explanatory theory of how disease management affects the outcomes of care. The objective of this paper is to present a framework for valid evaluation of disease-management initiatives. The evaluation model is built on two pillars of disease management: patient-related and professional-directed interventions. The effectiveness of these interventions is thought to be affected by the organisational design of the healthcare system. Disease management requires a multifaceted approach; hence disease-management programme evaluations should focus on the effects of multiple interventions, namely patient-related, professional-directed and organisational interventions. The framework has been built upon the conceptualisation of these disease-management interventions. Analysis of the underlying mechanisms of these interventions revealed that learning and behavioural theories support the core assumptions of disease management. The evaluation model can be used to identify the components of disease-management programmes and the mechanisms behind them, making valid comparison feasible. In addition, this model links the programme interventions to indicators that can be used to evaluate the disease-management programme. Consistent use of this framework will enable comparisons among disease-management programmes and outcomes in evaluation research.

  5. Evaluating performances of simplified physically based landslide susceptibility models.

    NASA Astrophysics Data System (ADS)

    Capparelli, Giovanna; Formetta, Giuseppe; Versace, Pasquale

    2015-04-01

    Rainfall-induced shallow landslides cause significant damage, involving loss of life and property. Predicting locations susceptible to shallow landslides is a complex task that involves many disciplines: hydrology, geotechnical science, geomorphology, and statistics. Usually two main approaches are used to accomplish this task: statistical or physically based models. This paper presents a package of GIS-based models for landslide susceptibility analysis. It was integrated into the NewAge-JGrass hydrological model using the Object Modeling System (OMS) modeling framework. The package includes three simplified physically based models for landslide susceptibility analysis (M1, M2, and M3) and a component for model verification. It computes eight goodness-of-fit (GOF) indices by comparing model results and measured data pixel by pixel. Moreover, the package's integration in NewAge-JGrass allows the use of other components, such as geographic information system tools to manage input-output processes and automatic calibration algorithms to estimate model parameters. The system offers the possibility to investigate and fairly compare the quality and robustness of models and model parameters, according to a procedure that includes: i) model parameter estimation by optimizing each GOF index separately, ii) model evaluation in the ROC plane using each optimal parameter set, and iii) GOF robustness evaluation by assessing sensitivity to input parameter variation. This procedure was repeated for all three models. The system was applied to a case study in Calabria (Italy) along the Salerno-Reggio Calabria highway, between Cosenza and the Altilia municipality. The analysis showed that, among all the optimized indices and all three models, Average Index (AI) optimization coupled with model M3 is the best modeling solution for our test case. This research was funded by PON Project No. 01_01503 "Integrated Systems for Hydrogeological Risk
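
The pixel-by-pixel comparison behind a ROC-plane evaluation can be sketched as below. The binary maps are hypothetical (1 = unstable pixel), and the paper's eight GOF indices are not reproduced here; only the ROC coordinates (hit rate, false-alarm rate) are shown:

```python
def roc_point(observed, predicted):
    """True- and false-positive rates from paired binary maps (1 = unstable)."""
    tp = sum(1 for o, p in zip(observed, predicted) if o == 1 and p == 1)
    fn = sum(1 for o, p in zip(observed, predicted) if o == 1 and p == 0)
    fp = sum(1 for o, p in zip(observed, predicted) if o == 0 and p == 1)
    tn = sum(1 for o, p in zip(observed, predicted) if o == 0 and p == 0)
    tpr = tp / (tp + fn)   # hit rate (y-axis of the ROC plane)
    fpr = fp / (fp + tn)   # false-alarm rate (x-axis of the ROC plane)
    return tpr, fpr

# Hypothetical landslide inventory (observed) vs. model susceptibility map
obs  = [1, 1, 0, 0, 1, 0, 0, 1]
pred = [1, 0, 0, 1, 1, 0, 0, 1]

tpr, fpr = roc_point(obs, pred)
```

A parameter set optimized for one GOF index can then be plotted as one (fpr, tpr) point per model, making the three models directly comparable in the ROC plane.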

  6. Study on Turbulent Modeling in Gas Entrainment Evaluation Method

    NASA Astrophysics Data System (ADS)

    Ito, Kei; Ohshima, Hiroyuki; Nakamine, Yoshiaki; Imai, Yasutomo

    Suppression of gas entrainment (GE) phenomena caused by free-surface vortices is very important to establish an economically superior design of the sodium-cooled fast reactor in Japan (JSFR). However, due to the non-linearity and/or locality of the GE phenomena, it is not easy to evaluate the occurrence of the GE phenomena accurately. In other words, the onset condition of the GE phenomena in the JSFR is not easily predicted based on scaled-model and/or partial-model experiments. Therefore, the authors are developing a CFD-based evaluation method in which the non-linearity and locality of the GE phenomena can be considered. In the evaluation method, macroscopic vortex parameters, e.g. circulation, are determined by three-dimensional CFD, and then GE-related parameters, e.g. gas core (GC) length, are calculated by using the Burgers vortex model. This procedure is efficient for evaluating the GE phenomena in the JSFR. However, it is well known that the Burgers vortex model tends to overestimate the GC length due to the lack of consideration of some physical mechanisms. Therefore, in this study, the authors develop a turbulent vortex model to evaluate the GE phenomena more accurately. Then, the improved GE evaluation method with the turbulent viscosity model is validated by analyzing the GC lengths observed in a simple experiment. The evaluation results show that the GC lengths analyzed by the improved method are shorter in comparison to the original method and give better agreement with the experimental data.
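
For reference, the tangential velocity profile of a Burgers-type vortex mentioned above can be sketched as follows. The circulation, core radius, and the exact core-decay form used here are illustrative assumptions, not values or formulations from the study:

```python
import math

def burgers_tangential_velocity(r, gamma, r_core):
    """Tangential velocity of a Burgers-type vortex:
    u_theta(r) = Gamma / (2*pi*r) * (1 - exp(-(r/r_core)^2)),
    solid-body-like inside the core, potential-vortex-like (Gamma/2*pi*r) outside."""
    if r == 0.0:
        return 0.0
    return gamma / (2.0 * math.pi * r) * (1.0 - math.exp(-(r / r_core) ** 2))

# Hypothetical circulation (m^2/s) and viscous core radius (m)
gamma, r_core = 0.05, 0.01
radii = [0.001 * k for k in range(1, 101)]
u = [burgers_tangential_velocity(r, gamma, r_core) for r in radii]
u_max = max(u)   # peak velocity occurs just outside the core radius
```

In a GE evaluation of this kind, the vortex circulation comes from the CFD field, and the gas-core depression depth follows from the pressure drop implied by this velocity profile.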

  7. New model framework and structure and the commonality evaluation model. [concerning unmanned spacecraft projects

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The development of a framework and structure for shuttle era unmanned spacecraft projects and the development of a commonality evaluation model is documented. The methodology developed for model utilization in performing cost trades and comparative evaluations for commonality studies is discussed. The model framework consists of categories of activities associated with the spacecraft system's development process. The model structure describes the physical elements to be treated as separate identifiable entities. Cost estimating relationships for subsystem and program-level components were calculated.

  8. A standard telemental health evaluation model: the time is now.

    PubMed

    Kramer, Greg M; Shore, Jay H; Mishkind, Matt C; Friedl, Karl E; Poropatich, Ronald K; Gahm, Gregory A

    2012-05-01

    The telehealth field has advanced historic promises to improve access, cost, and quality of care. However, the extent to which it is delivering on its promises is unclear, as the scientific evidence needed to justify success is still emerging. Many have identified the need to advance the scientific knowledge base to better quantify success. One method for advancing that knowledge base is a standard telemental health evaluation model. Telemental health is defined here as the provision of mental health services using live, interactive video-teleconferencing technology. Evaluation in the telemental health field largely consists of descriptive and small pilot studies, is often defined by the individual goals of the specific programs, and is typically focused on only one outcome. The field should adopt new evaluation methods that consider the co-adaptive interaction between users (patients and providers), healthcare costs and savings, and the rapid evolution in communication technologies. Acceptance of a standard evaluation model will improve perceptions of telemental health as an established field, promote development of a sounder empirical base, promote interagency collaboration, and provide a framework for more multidisciplinary research that integrates measuring the impact of the technology and the overall healthcare aspect. We suggest that consideration of a standard model is timely given telemental health's current stage of scientific progress. We will broadly recommend some elements of what such a standard evaluation model might include for telemental health and suggest a way forward for adopting such a model.

  9. Evaluation of six ionospheric models as predictors of TEC

    SciTech Connect

    Brown, L.D.; Daniell, R.E.; Fox, M.W.; Klobuchar, J.A.; Doherty, P.H.

    1990-05-03

    The authors have gathered TEC data from a wide range of latitudes and longitudes for a complete range of solar activity. These data were used to evaluate the performance of six ionospheric models as predictors of Total Electron Content (TEC). The TEC parameter is important in correcting modern DOD space systems, which propagate radio signals from the earth to satellites, for the time delay effects of the ionosphere. The TEC data were obtained from polarimeter receivers located in North America, the Pacific, and the East Coast of Asia. The ionospheric models evaluated are: (1) the International Reference Ionosphere (IRI); (2) the Bent model; (3) the Ionospheric Conductivity and Electron Density (ICED) model; (4) the Penn State model; (5) the Fully Analytic Ionospheric Model (FAIM, a modification of the Chiu model); and (6) the Damen-Hartranft model. They will present extensive comparisons between monthly mean TEC at all local times and model TEC obtained by integrating electron density profiles produced by the six models. These comparisons demonstrate that even though most of the models do very well at representing f0F2, none of them do very well with TEC, probably because of inaccurate representation of the topside scale height. They suggest that one approach to obtaining better representations of TEC is the use of f0F2 from coefficients coupled with a new slab thickness developed at Boston University.
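
Model TEC is obtained by vertically integrating an electron-density profile. A minimal sketch, using a hypothetical Chapman-layer profile (the peak density, peak height, and scale height below are illustrative, not from any of the six models) and trapezoidal integration:

```python
import math

def tec_from_profile(heights_km, ne_per_m3):
    """Trapezoid-rule vertical integral of an electron-density profile.
    Heights in km, Ne in electrons/m^3; result in TEC units (1 TECU = 1e16 el/m^2)."""
    total = 0.0
    for (h1, n1), (h2, n2) in zip(zip(heights_km, ne_per_m3),
                                  zip(heights_km[1:], ne_per_m3[1:])):
        total += 0.5 * (n1 + n2) * (h2 - h1) * 1000.0   # km -> m
    return total / 1e16

# Hypothetical Chapman-layer profile peaking near 300 km
heights = list(range(100, 1001, 20))                 # 100 to 1000 km
nmf2, hmf2, scale_h = 1.0e12, 300.0, 60.0            # el/m^3, km, km
ne = [nmf2 * math.exp(0.5 * (1 - (h - hmf2) / scale_h
                             - math.exp(-(h - hmf2) / scale_h)))
      for h in heights]

tec = tec_from_profile(heights, ne)   # of order 25 TECU for these values
```

The abstract's point about topside scale height shows up directly here: a model can match the peak (f0F2) exactly and still misestimate TEC if the profile above the peak decays too quickly or too slowly.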

  10. Classification and moral evaluation of uncertainties in engineering modeling.

    PubMed

    Murphy, Colleen; Gardoni, Paolo; Harris, Charles E

    2011-09-01

    Engineers must deal with risks and uncertainties as a part of their professional work and, in particular, uncertainties are inherent to engineering models. Models play a central role in engineering. Models often represent an abstract and idealized version of the mathematical properties of a target. Using models, engineers can investigate and acquire understanding of how an object or phenomenon will perform under specified conditions. This paper defines the different stages of the modeling process in engineering, classifies the various sources of uncertainty that arise in each stage, and discusses the categories into which these uncertainties fall. The paper then considers the way uncertainty and modeling are approached in science and the criteria for evaluating scientific hypotheses, in order to highlight the very different criteria appropriate for the development of models and the treatment of the inherent uncertainties in engineering. Finally, the paper puts forward nine guidelines for the treatment of uncertainty in engineering modeling.

  11. Obs4MIPS: Satellite Observations for Model Evaluation

    NASA Astrophysics Data System (ADS)

    Ferraro, R.; Waliser, D. E.; Gleckler, P. J.

    2015-12-01

    This poster will review the current status of the obs4MIPs project, whose purpose is to provide a limited collection of well-established and documented datasets for comparison with Earth system models (https://www.earthsystemcog.org/projects/obs4mips/). These datasets have been reformatted to correspond with the CMIP5 model output requirements, and include technical documentation specifically targeted for their use in model output evaluation. There are currently over 50 datasets containing observations that directly correspond to CMIP5 model output variables. We will review recent additions to the obs4MIPs collection, and provide updated download statistics. We will also provide an update on changes to submission and documentation guidelines, the work of the WCRP Data Advisory Council (WDAC) Observations for Model Evaluation Task Team, and engagement with the CMIP6 MIP experiments.

  12. Evaluation of potential crushed-salt constitutive models

    SciTech Connect

    Callahan, G.D.; Loken, M.C.; Sambeek, L.L. Van; Chen, R.; Pfeifle, T.W.; Nieland, J.D.

    1995-12-01

    Constitutive models describing the deformation of crushed salt are presented in this report. Ten constitutive models with potential to describe the phenomenological and micromechanical processes for crushed salt were selected from a literature search. Three of these ten constitutive models, termed the Sjaardema-Krieg, Zeuch, and Spiers models, were adopted as candidate constitutive models. The candidate constitutive models were generalized in a consistent manner to three-dimensional states of stress and modified to include the effects of temperature, grain size, and moisture content. A database including hydrostatic consolidation and shear consolidation tests conducted on Waste Isolation Pilot Plant and southeastern New Mexico salt was used to determine material parameters for the candidate constitutive models. Nonlinear least-squares model fitting to data from the hydrostatic consolidation tests, the shear consolidation tests, and a combination of the shear and hydrostatic tests produced three sets of material parameter values for the candidate models. The change in material parameter values from test group to test group indicates the empirical nature of the models. To evaluate the predictive capability of the candidate models, each parameter value set was used to predict each of the tests in the database. Based on the fitting statistics and the ability of the models to predict the test data, the Spiers model appeared to perform slightly better than the other two candidate models. The work reported here is a first-of-its-kind evaluation of constitutive models for reconsolidation of crushed salt. Questions remain to be answered. Deficiencies in models and databases are identified and recommendations for future work are made. 85 refs.
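
A minimal sketch of least-squares parameter fitting of the kind described above, using a hypothetical one-parameter consolidation law (not one of the actual candidate models) that is linearized so the fit reduces to a slope through the origin:

```python
import math

def fit_rate_constant(times, densities, rho0, rho_f):
    """Least-squares estimate of k in rho(t) = rho_f - (rho_f - rho0) * exp(-k*t),
    obtained by linearizing: ln((rho_f - rho) / (rho_f - rho0)) = -k * t."""
    xs, ys = [], []
    for t, rho in zip(times, densities):
        if t > 0 and rho < rho_f:
            xs.append(t)
            ys.append(math.log((rho_f - rho) / (rho_f - rho0)))
    # best-fit slope through the origin: k = -sum(x*y) / sum(x*x)
    return -sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Hypothetical hydrostatic-consolidation data: time (h) vs. dry density (kg/m^3)
rho0, rho_f = 1400.0, 2160.0          # initial and fully consolidated density
times = [0, 10, 20, 40, 80]
dens  = [1400.0, 1537.8, 1650.6, 1818.5, 2006.6]

k = fit_rate_constant(times, dens, rho0, rho_f)   # close to 0.02 per hour here
```

Fitting the same law separately to hydrostatic, shear, and combined test groups, as the report does for its candidate models, would yield three parameter sets; their spread is a direct measure of how empirical the model is.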

  13. Evaluation of clinical teaching models for nursing practice.

    PubMed

    Croxon, Lyn; Maginnis, Cathy

    2009-07-01

    Clinical placements provide opportunities for student nurses to learn experientially. To create a constructive learning environment, staff need to be friendly, approachable, available and willing to teach. There must be adequate opportunities for students to develop confidence and competence in clinical skills, with a focus on student learning needs rather than the service needs of facilities. A popular model for clinical teaching of nursing students is the preceptor model. This model involves a student working under the supervision of individual registered nurses who are part of the clinical staff. This model was failing to meet students' needs in acute nursing practice areas, largely due to Registered Nurse staff shortages and demanding workloads. The students' evaluations led to the trial of a 'cluster' or group model of eight students, with a clinical facilitator who is paid by the university, in each acute nursing ward. Twenty nursing students' perceptions of their acute nursing practice clinical placements were evaluated using a mixed-method approach to compare the two models of student supervision. Results indicate that the students prefer small groups with the clinical facilitator in one area. Evaluation of, and feedback on, students' perceptions of their clinical placements is thus essential. PMID:18722161

  14. Neutral models as a way to evaluate the Sea Level Affecting Marshes Model (SLAMM)

    EPA Science Inventory

    A commonly used landscape model to simulate wetland change – the Sea Level Affecting Marshes Model(SLAMM) – has rarely been explicitly assessed for its prediction accuracy. Here, we evaluated this model using recently proposed neutral models – including the random constraint matc...

  15. Evaluation of ADAM/1 model for advanced coal extraction concepts

    NASA Technical Reports Server (NTRS)

    Deshpande, G. K.; Gangal, M. D.

    1982-01-01

    Several existing computer programs for estimating life cycle cost of mining systems were evaluated. A commercially available program, ADAM/1 was found to be satisfactory in relation to the needs of the advanced coal extraction project. Two test cases were run to confirm the ability of the program to handle nonconventional mining equipment and procedures. The results were satisfactory. The model, therefore, is recommended to the project team for evaluation of their conceptual designs.

  16. Human Thermal Model Evaluation Using the JSC Human Thermal Database

    NASA Technical Reports Server (NTRS)

    Cognata, T.; Bue, G.; Makinen, J.

    2011-01-01

    The human thermal database developed at the Johnson Space Center (JSC) is used to evaluate a set of widely used human thermal models. This database will facilitate a more accurate evaluation of human thermoregulatory response in a variety of situations, including those situations that might otherwise prove too dangerous for actual testing--such as extreme hot or cold splashdown conditions. This set includes the Wissler human thermal model, a model that has been widely used to predict the human thermoregulatory response to a variety of cold and hot environments. These models are statistically compared to the current database, which contains experiments on human subjects primarily in air, drawn from a literature survey spanning 1953 to 2004 and from a suited experiment recently performed by the authors, for a quantitative study of the relative strength and predictive quality of the models. Human thermal modeling has considerable long-term utility to human space flight. Such models provide a tool to predict crew survivability in support of vehicle design and to evaluate crew response in untested environments. It is to the benefit of any such model not only to collect relevant experimental data to correlate it against, but also to maintain an experimental standard or benchmark for future development in a readily and rapidly searchable, software-accessible format. The human thermal database project is intended to do just that: to collect relevant data from literature and experimentation and to store the data in a database structure for immediate and future use as a benchmark to judge human thermal models against, to identify model strengths and weaknesses, to support model development and improve correlation, and to statistically quantify a model's predictive quality.

  17. AQA - Air Quality model for Austria - Evaluation and Developments

    NASA Astrophysics Data System (ADS)

    Hirtl, M.; Krüger, B. C.; Baumann-Stanzer, K.; Skomorowski, P.

    2009-04-01

    The regional weather forecast model ALADIN of the Central Institute for Meteorology and Geodynamics (ZAMG) is used in combination with the chemical transport model CAMx (www.camx.com) to conduct forecasts of gaseous and particulate air pollution over Europe. The forecasts, which are done in cooperation with the University of Natural Resources and Applied Life Sciences in Vienna (BOKU), have been supported by the regional governments since 2005, with the main interest being the prediction of tropospheric ozone. The daily ozone forecasts are evaluated for the summer of 2008 against the observations of about 150 air quality stations in Austria. In 2008 the emission model SMOKE was integrated into the modelling system to calculate the biogenic emissions. The anthropogenic emissions are based on the newest EMEP data set as well as on regional inventories for the core domain. The performance of SMOKE is shown for a summer period in 2007. In the frame of the COST action 728, "Enhancing mesoscale meteorological modelling capabilities for air pollution and dispersion applications", multi-model ensembles are used to conduct an international model evaluation. The model calculations of meteorological and concentration fields are compared to measurements on the ensemble platform at the Joint Research Centre (JRC) in Ispra. The results for 2 episodes in 2006 show the performance of the different models as well as of the model ensemble.

  18. [Organ trade versus reciprocity model. An ethical evaluation].

    PubMed

    Illies, C; Weber, F

    2004-02-01

    We perform an ethical evaluation of two models that promise to solve the increasing shortage of organs for transplantations: firstly, the legalization of organ trade, and, secondly, the so-called "Reciprocity Model". Unrestricted respect for the individual human being thereby serves as the ethical standard. We conclude that the Reciprocity Model is ethically much more acceptable than organ trade, even if this trade were limited to Europe. In addition, the Reciprocity Model can easily be integrated into the current Eurotransplant system of organ allocation. PMID:14750056

  19. Ensemble-based evaluation for protein structure models

    PubMed Central

    Jamroz, Michal; Kolinski, Andrzej; Kihara, Daisuke

    2016-01-01

    Motivation: Comparing protein tertiary structures is a fundamental procedure in structural biology and protein bioinformatics. Structure comparison is particularly important for evaluating computational protein structure models. Most model evaluation methods perform a rigid-body superimposition of a structure model onto its crystal structure and measure the difference of the corresponding residue or atom positions between them. However, these methods neglect the intrinsic flexibility of proteins by treating the native structure as a rigid molecule. Because different parts of proteins have different levels of flexibility (for example, exposed loop regions are usually more flexible than the core region of a protein structure), disagreement of a model with the native structure needs to be evaluated differently depending on the flexibility of residues in a protein. Results: We propose a score named FlexScore for comparing protein structures that considers the flexibility of each residue in the native state of the protein. Flexibility information may be extracted from experiments such as NMR or from molecular dynamics simulation. FlexScore considers an ensemble of conformations of a protein, described as a multivariate Gaussian distribution of atomic displacements, and compares a query computational model with the ensemble. We compare FlexScore with other commonly used structure similarity scores over various examples. FlexScore agrees with experts’ intuitive assessment of computational models and provides information on the practical usefulness of models. Availability and implementation: https://bitbucket.org/mjamroz/flexscore Contact: dkihara@purdue.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307633
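
    The flexibility-weighting idea can be sketched in a few lines. The toy score below is hypothetical, not the published FlexScore (which uses a full multivariate Gaussian of atomic displacements); it uses a diagonal variance approximation to down-weight deviations at residues whose positions vary strongly across the native ensemble:

```python
import numpy as np

def flexibility_weighted_score(model, ensemble):
    """Toy flexibility-aware deviation score (diagonal simplification).

    model:    (n_res, 3) coordinates of the computational model
    ensemble: (n_conf, n_res, 3) native ensemble (e.g. NMR conformers),
              assumed already superimposed on a common frame
    Returns a mean Mahalanobis-like distance: deviations at flexible
    residues (high ensemble variance) are penalized less.
    """
    mean = ensemble.mean(axis=0)                 # (n_res, 3) ensemble average
    var = ensemble.var(axis=0).sum(axis=1)       # (n_res,) positional variance
    var = np.maximum(var, 1e-6)                  # guard against rigid residues
    sq_dev = ((model - mean) ** 2).sum(axis=1)   # (n_res,) squared deviation
    return float(np.sqrt(sq_dev / var).mean())

# Demo with synthetic coordinates
rng = np.random.default_rng(0)
native = rng.normal(size=(10, 3))
ensemble = native + rng.normal(scale=0.1, size=(20, 10, 3))
score = flexibility_weighted_score(native + 0.05, ensemble)
```

    With this weighting, a mispredicted loop costs far less than an equally large positional error in the rigid core.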

  20. Mathematical models and lymphatic filariasis control: monitoring and evaluating interventions.

    PubMed

    Michael, Edwin; Malecela-Lazaro, Mwele N; Maegga, Bertha T A; Fischer, Peter; Kazura, James W

    2006-11-01

    Monitoring and evaluation are crucially important to the scientific management of any mass parasite control programme. Monitoring enables the effectiveness of implemented actions to be assessed and necessary adaptations to be identified; it also determines when management objectives are achieved. Parasite transmission models can provide a scientific template for informing the optimal design of such monitoring programmes. Here, we illustrate the usefulness of using a model-based approach for monitoring and evaluating anti-parasite interventions and discuss issues that need addressing. We focus on the use of such an approach for the control and/or elimination of the vector-borne parasitic disease, lymphatic filariasis. PMID:16971182

  1. Development and evaluation of thermal model reduction algorithms for spacecraft

    NASA Astrophysics Data System (ADS)

    Deiml, Michael; Suderland, Martin; Reiss, Philipp; Czupalla, Markus

    2015-05-01

    This paper is concerned with the reduction of thermal models of spacecraft. The work presented here has been conducted in cooperation with the company OHB AG, formerly Kayser-Threde GmbH, and the Institute of Astronautics at Technische Universität München, with the goal of shortening and automating the time-consuming, manual process of thermal model reduction. The reduction of thermal models can be divided into the simplification of the geometry model, used for calculating external heat flows and radiative couplings, and the reduction of the underlying mathematical model. For the simplification, a method has been developed that approximates the reduced geometry model with the help of an optimization algorithm. Different linear and nonlinear model reduction techniques have been evaluated for their applicability to the reduction of the mathematical model. Compatibility with the thermal analysis tool ESATAN-TMS is a major concern and restricts the useful application of these methods. Additional model reduction methods have therefore been developed that accommodate these constraints. The Matrix Reduction method matches the differential equation to reference values exactly, except for numerical errors. The summation method enables a practical reduction of thermal models that can be used in industry. In this work a framework for thermal model reduction has been created, which can be used together with a newly developed graphical user interface for industrial application.
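
    As an illustration of the node-summation idea (a generic sketch, not ESATAN-TMS's actual implementation), a linear thermal network C·dT/dt = -G·T + q can be reduced by merging groups of nodes, summing their capacities, loads, and inter-group conductances:

```python
import numpy as np

def lump_nodes(C, G, q, groups):
    """Reduce a linear thermal network C*dT/dt = -G*T + q by node lumping.

    C: (n,) nodal heat capacities
    G: (n, n) symmetric conductance matrix (Laplacian-like; rows sum to
       zero for an insulated network)
    q: (n,) nodal heat loads
    groups: list of index lists; each group is merged into one reduced
            node by summing capacities, loads, and conductances.
    """
    m = len(groups)
    # P[i, k] = 1 if original node i belongs to reduced node k
    P = np.zeros((len(C), m))
    for k, idx in enumerate(groups):
        P[idx, k] = 1.0
    C_r = P.T @ C          # summed capacities
    G_r = P.T @ G @ P      # summed conductances (intra-group terms cancel)
    q_r = P.T @ q          # summed loads
    return C_r, G_r, q_r

# Three nodes in a chain; lump the last two together
C = np.array([1.0, 2.0, 3.0])
G = np.array([[ 1.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])
q = np.array([5.0, 0.0, 0.0])
C_r, G_r, q_r = lump_nodes(C, G, q, [[0], [1, 2]])
print(C_r)   # [1. 5.]
```

    Intra-group conductances cancel in P.T @ G @ P, so the reduced matrix keeps the Laplacian structure (rows summing to zero) of the original network.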

  2. Progressive evaluation of incorporating information into a model building process

    NASA Astrophysics Data System (ADS)

    Gharari, Shervan; Hrachowitz, Markus; Fenicia, Fabrizio; Gao, Hongkai; Gupta, Hoshin; Savenije, Huub

    2014-05-01

    Catchments are open systems, meaning that the exact boundary conditions of the real system cannot be determined, either spatially or temporally. Models are therefore essential tools for capturing system behaviour in space and extrapolating it in time for prediction. In recent years conceptual models have been the center of attention, rather than so-called physically based models, which are often over-parameterized and encounter difficulties in up-scaling small-scale processes. Conceptual models, however, are heavily dependent on calibration, as one or more of their parameter values typically cannot be physically measured at the catchment scale. It is generally understood that increasing the complexity of a conceptual model for a better representation of hydrological process heterogeneity typically makes parameter identification more difficult; however, the amount of information contributed by each of the model elements (control volumes, or so-called buckets; interconnecting fluxes; parameterizations, i.e. constitutive functions; and, finally, parameter values) is largely unknown. Each of these components contains information on the transformation of forcing (precipitation) into runoff, but the effect of each of them, separately and together, is not well understood. In this study we follow hierarchical steps for model building. First, the model structure is built from its building blocks (control volumes) and interconnecting fluxes; the effect of adding each control volume and of the model architecture (the arrangement of control volumes and fluxes) can be evaluated at this level. At the second level the parameterization of the model is evaluated; for example, the effect of a specific type of stage-discharge relation for a control volume can be explored. Finally, in the last step of model building, the information gained from the parameter values is quantified. At each level of development, the value of the added information is assessed.

  3. Evaluating supervised topic models in the presence of OCR errors

    NASA Astrophysics Data System (ADS)

    Walker, Daniel; Ringger, Eric; Seppi, Kevin

    2013-01-01

    Supervised topic models are promising tools for text analytics that simultaneously model topical patterns in document collections and relationships between those topics and document metadata, such as timestamps. We examine empirically the effect of OCR noise on the ability of supervised topic models to produce high-quality output through a series of experiments in which we evaluate three supervised topic models and a naive baseline on synthetic OCR data with various levels of degradation and on real OCR data from two different decades. The evaluation includes experiments with and without feature selection. Our results suggest that supervised topic models are no better, or at least not much better, than unsupervised topic models in terms of their robustness to OCR errors, and that feature selection has the mixed result of improving topic quality while harming metadata prediction quality. For users of topic modeling methods on OCR data, supervised topic models do not yet solve the problem of finding better topics than the original unsupervised topic models.

  4. Forecasting in foodservice: model development, testing, and evaluation.

    PubMed

    Miller, J L; Thompson, P A; Orabella, M M

    1991-05-01

    This study was designed to develop, test, and evaluate mathematical models appropriate for forecasting menu-item production demand in foodservice. Data were collected from residence and dining hall foodservices at Ohio State University. Objectives of the study were to collect, code, and analyze the data; develop and test models using actual operation data; and compare forecasting results with current methods in use. Customer count was forecast using deseasonalized simple exponential smoothing. Menu-item demand was forecast by multiplying the count forecast by a predicted preference statistic. Forecasting models were evaluated using mean squared error, mean absolute deviation, and mean absolute percentage error techniques. All models were more accurate than current methods. A broad spectrum of forecasting techniques could be used by foodservice managers with access to a personal computer and spreadsheet and database-management software. The findings indicate that mathematical forecasting techniques may be effective in foodservice operations to control costs, increase productivity, and maximize profits. PMID:2019699
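
    The study's forecasting chain can be sketched as follows. This is a minimal illustration with hypothetical numbers: the series is assumed to be already deseasonalized, and the preference statistic is taken as a fixed fraction rather than a fitted value.

```python
def ses_forecast(series, alpha=0.3):
    """One-step-ahead simple exponential smoothing forecasts."""
    level = series[0]
    forecasts = []
    for y in series:
        forecasts.append(level)            # forecast made before seeing y
        level = alpha * y + (1 - alpha) * level
    return forecasts

def evaluate(actual, forecast):
    """The three accuracy measures used in the study: MSE, MAD, MAPE."""
    errors = [a - f for a, f in zip(actual, forecast)]
    n = len(errors)
    mse = sum(e * e for e in errors) / n
    mad = sum(abs(e) for e in errors) / n
    mape = 100 * sum(abs(e) / a for e, a in zip(errors, actual)) / n
    return mse, mad, mape

# Hypothetical daily customer counts; a menu item chosen by ~40% of customers
counts = [500, 520, 480, 510, 530, 495]
count_fc = ses_forecast(counts)
item_fc = [0.40 * f for f in count_fc]     # demand = count forecast x preference
mse, mad, mape = evaluate(counts, count_fc)
```

    Comparing the three error measures across candidate models, as the study does, guards against a model that minimizes squared error while making occasional large percentage misses.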

  5. Software Platform Evaluation - Verifiable Fuel Cycle Simulation (VISION) Model

    SciTech Connect

    J. J. Jacobson; D. E. Shropshire; W. B. West

    2005-11-01

    The purpose of this Software Platform Evaluation (SPE) is to document the top-level evaluation of potential software platforms on which to construct a simulation model that satisfies the requirements for a Verifiable Fuel Cycle Simulation Model (VISION) of the Advanced Fuel Cycle (AFC). See the Software Requirements Specification for Verifiable Fuel Cycle Simulation (VISION) Model (INEEL/EXT-05-02643, Rev. 0) for a discussion of the objective and scope of the VISION model. VISION is intended to serve as a broad systems analysis and study tool applicable to work conducted as part of the AFCI (including cost estimates) and Generation IV reactor development studies. This document will serve as a guide for selecting the most appropriate software platform for VISION. This is a “living document” that will be modified over the course of the execution of this work.

  7. Moving beyond qualitative evaluations of Bayesian models of cognition.

    PubMed

    Hemmer, Pernille; Tauber, Sean; Steyvers, Mark

    2015-06-01

    Bayesian models of cognition provide a powerful way to understand the behavior and goals of individuals from a computational point of view. Much of the focus in the Bayesian cognitive modeling approach has been on qualitative model evaluations, where predictions from the models are compared to data that is often averaged over individuals. In many cognitive tasks, however, there are pervasive individual differences. We introduce an approach to directly infer individual differences related to subjective mental representations within the framework of Bayesian models of cognition. In this approach, Bayesian data analysis methods are used to estimate cognitive parameters and motivate the inference process within a Bayesian cognitive model. We illustrate this integrative Bayesian approach on a model of memory. We apply the model to behavioral data from a memory experiment involving the recall of heights of people. A cross-validation analysis shows that the Bayesian memory model with inferred subjective priors predicts withheld data better than a Bayesian model where the priors are based on environmental statistics. In addition, the model with inferred priors at the individual subject level led to the best overall generalization performance, suggesting that individual differences are important to consider in Bayesian models of cognition.
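
    The idea of inferring a subjective prior from behavior can be sketched with the standard Gaussian recall model (an illustrative reconstruction, not the authors' actual procedure): recalled values regress toward the individual's prior mean, so a least-squares fit of recall against stimulus recovers both the regression weight and that prior.

```python
def fit_subjective_prior(stimuli, recalls):
    """Recover an individual's prior from stimulus/recall pairs.

    Gaussian recall model: recall = w*x + (1 - w)*mu, where mu is the
    subjective prior mean and w = tau^2 / (tau^2 + sigma^2). Fitting a
    least-squares line recall = a*x + b then gives w = a, mu = b/(1 - a).
    """
    n = len(stimuli)
    mx = sum(stimuli) / n
    my = sum(recalls) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(stimuli, recalls))
    var = sum((x - mx) ** 2 for x in stimuli)
    a = cov / var
    b = my - a * mx
    return a, b / (1 - a)        # (w, inferred prior mean)

# Hypothetical heights (cm): recalls regress toward a subjective prior of 170
stimuli = [150.0, 160.0, 170.0, 180.0, 190.0]
recalls = [0.8 * x + 0.2 * 170.0 for x in stimuli]   # noiseless for clarity
w, mu = fit_subjective_prior(stimuli, recalls)
print(round(w, 3), round(mu, 1))   # 0.8 170.0
```

    In a cross-validation setup like the paper's, the inferred mu for each subject would be compared against a prior built from environmental statistics on withheld recall data.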

  8. Road network safety evaluation using Bayesian hierarchical joint model.

    PubMed

    Wang, Jie; Huang, Helai

    2016-05-01

    Safety and efficiency are commonly regarded as two significant performance indicators of transportation systems. In practice, road network planning has focused on road capacity and transport efficiency, whereas the safety level of a road network has received little attention in the planning stage. This study develops a Bayesian hierarchical joint model for road network safety evaluation to help planners take traffic safety into account when planning a road network. The proposed model establishes relationships between road network risk and micro-level variables related to road entities and traffic volume, as well as socioeconomic, trip generation and network density variables at the macro level which are generally used for long-term transportation plans. In addition, network spatial correlation between intersections and their connected road segments is also considered in the model. A road network is elaborately selected in order to compare the proposed hierarchical joint model with a previous joint model and a negative binomial model. According to the results of the model comparison, the hierarchical joint model outperforms the joint model and the negative binomial model in terms of goodness-of-fit and predictive performance, which indicates the reasonableness of considering the hierarchical data structure in crash prediction and analysis. Moreover, both random effects at the TAZ (traffic analysis zone) level and the spatial correlation between intersections and their adjacent segments are found to be significant, supporting the employment of the hierarchical joint model as an alternative in road-network-level safety modeling.

  9. Evaluation of Rainfall-Runoff Models for Mediterranean Subcatchments

    NASA Astrophysics Data System (ADS)

    Cilek, A.; Berberoglu, S.; Donmez, C.

    2016-06-01

    The development and application of rainfall-runoff models have been a corner-stone of hydrological research for many decades. The amount of rainfall and its intensity and variability control the generation of runoff and the erosional processes operating at different scales. These interactions can be highly variable in Mediterranean catchments with marked hydrological fluctuations. The aim of the study was to evaluate the performance of a rainfall-runoff model for rainfall-runoff simulation in a Mediterranean subcatchment. The Pan-European Soil Erosion Risk Assessment (PESERA), a simplified hydrological process-based approach, was used in this study to combine hydrological surface runoff factors. In total, 128 input layers, derived from a data set that includes climate, topography, land use, crop type, planting date, and soil characteristics, are required to run the model. Initial ground cover was estimated from Landsat ETM data provided by ESA. The hydrological model was evaluated in terms of its performance in the Goksu River Watershed, Turkey, located in the Central Eastern Mediterranean Basin of Turkey; the area is approximately 2000 km². The landscape is dominated by bare ground, agricultural land and forests. The average annual rainfall is 636.4 mm. This study is of significant importance for evaluating different model performances in a complex Mediterranean basin. The results provided comprehensive insight, including the advantages and limitations of modelling approaches in the Mediterranean environment.

  10. Evaluation of thermographic phosphor technology for aerodynamic model testing

    SciTech Connect

    Cates, M.R.; Tobin, K.W.; Smith, D.B.

    1990-08-01

    The goal for this project was to perform technology evaluations applicable to the development of higher-precision, higher-temperature aerodynamic model testing at Arnold Engineering Development Center (AEDC) in Tullahoma, Tennessee. With the advent of new programs for the design of aerospace craft that fly at higher speeds and altitudes, a detailed understanding of high-temperature materials becomes very important. Model testing is a natural and critical part of the development of these new initiatives. The well-established thermographic phosphor techniques of the Applied Technology Division at Oak Ridge National Laboratory are highly desirable for diagnostic evaluation of materials and aerodynamic shapes as studied in model tests. Combining this state-of-the-art thermographic technique with modern, higher-temperature models will greatly improve the practicability of tests for advanced aerospace vehicles and will provide higher-precision diagnostic information for quantitative evaluation of these tests. The wavelength ratio method for measuring surface temperatures of aerodynamic models was demonstrated in measurements made for this project. In particular, it was shown that appropriate phosphors could be selected for temperature ranges up to approximately 700 °F or higher, with emission-line ratios of sufficient sensitivity to measure temperature with 1% precision or better. Further, it was demonstrated that two-dimensional image-processing methods, using standard hardware, can be successfully applied to surface thermography of aerodynamic models for AEDC applications.
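
    At its core, the wavelength-ratio method reduces to inverting a calibration curve: the ratio of two phosphor emission-line intensities is measured against known temperatures, and unknown temperatures are read back by interpolation. The sketch below uses hypothetical calibration numbers; real phosphor calibrations and line choices differ.

```python
import numpy as np

# Hypothetical calibration: emission-line intensity ratio I1/I2 of a
# thermographic phosphor, measured at known reference temperatures (deg F).
cal_temp = np.array([200.0, 300.0, 400.0, 500.0, 600.0, 700.0])
cal_ratio = np.array([0.95, 0.80, 0.62, 0.45, 0.30, 0.18])  # decreases with T

def ratio_to_temperature(ratio):
    """Invert the calibration curve by linear interpolation.

    np.interp requires ascending x values, so the monotonically
    decreasing ratio axis is reversed along with the temperatures.
    """
    return float(np.interp(ratio, cal_ratio[::-1], cal_temp[::-1]))

print(ratio_to_temperature(0.62))   # 400.0
```

    Because the ratio cancels common factors such as illumination and viewing geometry, only the ratio-temperature curve needs to be calibrated, which is what makes the method attractive for imaging a whole model surface.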

  11. Animal models to evaluate anti-atherosclerotic drugs.

    PubMed

    Priyadharsini, Raman P

    2015-08-01

    Atherosclerosis is a multifactorial condition characterized by endothelial injury, fatty streak deposition, and stiffening of the blood vessels. The pathogenesis is complex and mediated by adhesion molecules, inflammatory cells, and smooth muscle cells. Statins have been the major drugs in treating hypercholesterolemia for the past two decades despite their limited efficacy. There is an urgent need for new drugs that can replace statins or be combined with them. Preclinical studies of atherosclerosis require an ideal animal model that resembles the disease condition, but no single animal model fully mimics the disease. The animal models used include rabbits, rats, mice, hamsters, and mini pigs. Each animal model has its own advantages and disadvantages. Methods of inducing atherosclerosis include diet, chemical induction, mechanically induced injuries, and genetically manipulated animal models. This review mainly focuses on the various animal models, methods of induction, their advantages and disadvantages, and current perspectives on preclinical studies of atherosclerosis.

  12. Use of wind tunnel modeling to evaluate stable plume impact

    SciTech Connect

    Petersen, R.L.; Parce, D.K.; Spellman, D.L.

    1994-12-31

    In complex terrain situations, where the stack exhaust is at or below the height of nearby terrain features, EPA (1990) recommends various screening techniques to evaluate plume impact during stable conditions. The preferred screening techniques are: (1) Valley; (2) CTSCREEN; (3) COMPLEX I; (4) SHORTZ/LONGZ; and (5) Rough Terrain Dispersion Model (RTDM). If these screening techniques demonstrate a possible exceedance of the NAAQS, EPA suggests that a more refined analysis may need to be conducted. The Complex Terrain Dispersion Model Plus Algorithms for Unstable Situations (CTDMPLUS) is the EPA preferred air quality model for this situation. This paper discusses the dispersion models, the wind tunnel modeling methodology, and the comparison between the screening model and wind tunnel concentration predictions.

  13. The Iterative Research Cycle: Process-Based Model Evaluation

    NASA Astrophysics Data System (ADS)

    Vrugt, J. A.

    2014-12-01

    The ever-increasing availability of computational power, along with continued advances in measurement technologies and improvements in process understanding, has stimulated the development of increasingly complex physics-based models that simulate a myriad of processes at different spatial and temporal scales. Reconciling these high-order system models with perpetually larger volumes of field data is becoming more and more difficult, particularly because classical likelihood-based fitting methods lack the power to detect and pinpoint deficiencies in the model structure. In this talk I will give an overview of our latest research on process-based model calibration and evaluation. This approach, rooted in Bayesian theory, uses summary metrics of the calibration data rather than the data itself to help detect which component(s) of the model is (are) malfunctioning and in need of improvement. A few case studies involving hydrologic and geophysical models will be used to demonstrate the proposed methodology.

  14. Information technology model for evaluating emergency medicine teaching

    NASA Astrophysics Data System (ADS)

    Vorbach, James; Ryan, James

    1996-02-01

    This paper describes work in progress to develop an Information Technology (IT) model and supporting information system for the evaluation of clinical teaching in the Emergency Medicine (EM) Department of North Shore University Hospital. In the academic hospital setting, student physicians (i.e., residents) and faculty function daily in their dual roles as students and teachers, respectively, and as health care providers. Databases exist that are used to evaluate both groups in either academic or clinical performance, but rarely has this information been integrated to analyze the relationship between academic performance and the ability to care for patients. The goal of the IT model is to improve the quality of teaching of EM physicians by enabling the development of integrable metrics for faculty and resident evaluation. The IT model will include (1) methods for tracking residents in order to develop experimental databases; (2) methods to integrate lecture evaluation, clinical performance, resident evaluation, and quality assurance databases; and (3) a patient flow system to monitor patient rooms and the waiting area in the Emergency Medicine Department, to record and display the status of medical orders, and to collect data for analyses.

  15. Developing, implementing, and evaluating a professional practice model.

    PubMed

    Basol, Roberta; Hilleren-Listerud, Amy; Chmielewski, Linda

    2015-01-01

    This article describes how The Compass, a professional practice model (PPM), was developed through clinical nurse involvement, review of literature, expert opinion, and an innovative schematic. Implementation was supported through a dynamic video account of a patient story, interwoven with The Compass. Postproject evaluation of PPM integration demonstrates opportunities for professional nursing development and future planning. PMID:25479174

  16. Support for Career Development in Youth: Program Models and Evaluations

    ERIC Educational Resources Information Center

    Mekinda, Megan A.

    2012-01-01

    This article examines four influential programs--Citizen Schools, After School Matters, career academies, and Job Corps--to demonstrate the diversity of approaches to career programming for youth. It compares the specific program models and draws from the evaluation literature to discuss strengths and weaknesses of each. The article highlights…

  17. Input, Process, Output: A Model for Evaluating Training.

    ERIC Educational Resources Information Center

    Bushnell, David S.

    1990-01-01

    IBM has found that an input-process-output (IPO) approach to training evaluation enables decision makers to select the package that will ensure the effectiveness of a training program. Those who use the IPO model can determine whether programs are achieving their purposes and can detect the changes needed to improve course design, content, and…

  18. Assessment and Evaluation Modeling. Symposium 38. [AHRD Conference, 2001].

    ERIC Educational Resources Information Center

    2001

    This symposium on assessment and evaluation modeling consists of three presentations. "Training Assessment Among Kenyan Smallholder Entrepreneurs" (George G. Shibanda, Jemymah Ingado, Bernard Nassiuma) reports a study that assessed the extent to which the need for knowledge, information, and skills among small scale farmers can promote effective…

  19. A Model For Evaluating Didactic Profiles in an Engineering Curriculum.

    ERIC Educational Resources Information Center

    Waks, S.

    1989-01-01

    Describes general and specific didactic profiles of engineering courseware for evaluating a curriculum. To carry out a diagnosis of written material, the two profiles and a complexity facet were prepared. Provides a model in the self-instructional course, Digital System. (YP)

  20. A Model for Evaluation of Mass Media Coverage.

    ERIC Educational Resources Information Center

    Johnson, Phylis

    1996-01-01

    Defines total community coverage as the presentation of divisive issues through such media as electronic town meetings and public debates. Suggests ways to improve these media formats, including a 4-level model. Describes in depth each level--Foundations, Conceptual Awareness, Investigation and Evaluation, and Action Skills. Presents a case study…

  1. Evaluation of a stratiform cloud parameterization for general circulation models

    SciTech Connect

    Ghan, S.J.; Leung, L.R.; McCaa, J.

    1996-04-01

    To evaluate the relative importance of horizontal advection of cloud versus cloud formation within the grid cell of a single column model (SCM), we have performed a series of simulations with our SCM driven by a fixed vertical velocity and various rates of horizontal advection.

  2. Field Evaluation of an Avian Risk Assessment Model

    EPA Science Inventory

    We conducted two laboratory subacute dietary toxicity tests and one outdoor subacute dietary toxicity test to determine the effectiveness of the U.S. Environmental Protection Agency's deterministic risk assessment model for evaluating the potential of adverse effects to birds in ...

  3. Air Pollution Data for Model Evaluation and Application

    EPA Science Inventory

    One objective of designing an air pollution monitoring network is to obtain data for evaluating air quality models that are used in the air quality management process and scientific discovery. A common use is to relate emissions to air quality, including assessing ...

  4. REVIEW OF MATHEMATICAL MODELING FOR EVALUATING SOIL VAPOR EXTRACTION SYSTEMS

    EPA Science Inventory

    Soil vapor extraction (SVE) is a commonly used remedial technology at sites contaminated with volatile organic compounds (VOCs) such as chlorinated solvents and hydrocarbon fuels. Modeling tools are available to help evaluate the feasibility, design, and performance of SVE system...

  5. Evaluating Item Fit for Multidimensional Item Response Models

    ERIC Educational Resources Information Center

    Zhang, Bo; Stone, Clement A.

    2008-01-01

    This research examines the utility of the S-X² statistic proposed by Orlando and Thissen (2000) in evaluating item fit for multidimensional item response models. Monte Carlo simulation was conducted to investigate both the Type I error and statistical power of this fit statistic in analyzing two kinds of multidimensional test…

  6. An IPA-Embedded Model for Evaluating Creativity Curricula

    ERIC Educational Resources Information Center

    Chang, Chi-Cheng

    2014-01-01

    How to diagnose the effectiveness of creativity-related curricula is a crucial concern in the pursuit of educational excellence. This paper introduces an importance-performance analysis (IPA)-embedded model for curriculum evaluation, using the example of an IT project implementation course to assess the creativity performance deduced from student…

  7. Evaluation of an Interdisciplinary, Physically Active Lifestyle Course Model

    ERIC Educational Resources Information Center

    Fede, Marybeth H.

    2009-01-01

    The purpose of this study was to evaluate a fit for life program at a university and to use the findings from an extensive literature review, consultations with formative and summative committees, and data collection to develop an interdisciplinary, physically active lifestyle (IPAL) course model. To address the 5 research questions examined in…

  8. An Evaluation-Accountability Model for Regional Education Centers.

    ERIC Educational Resources Information Center

    Barber, R. Jerry; Benson, Charles W.

    This paper presents the rationale, techniques, and structure used to develop and implement an evaluation-accountability program for a new regional Education Service Center in Texas. Needs assessment, a critical element in this model, consists of objectively identifying the educational needs of clients and establishing an initial list of…

  9. The SCRAPE Model; A Conceptual Approach to Educational Program Evaluation.

    ERIC Educational Resources Information Center

    Liberty, Paul G., Jr.

    Using effectiveness, efficiency, self-sustenance, and communicability as criteria, a conceptual model, called SCRAPE, was developed at the University of Texas to systematically describe educational behaviors. The key elements of the system are: (1) diagnosis and prescription, (2) instructional events, (3) achievement evaluation, and (4) consequent…

  10. Evaluating the Predictive Value of Growth Prediction Models

    ERIC Educational Resources Information Center

    Murphy, Daniel L.; Gaertner, Matthew N.

    2014-01-01

    This study evaluates four growth prediction models--projection, student growth percentile, trajectory, and transition table--commonly used to forecast (and give schools credit for) middle school students' future proficiency. Analyses focused on vertically scaled summative mathematics assessments, and two performance standards conditions (high…

  11. Quantitative comparison between crowd models for evacuation planning and evaluation

    NASA Astrophysics Data System (ADS)

    Viswanathan, Vaisagh; Lee, Chong Eu; Lees, Michael Harold; Cheong, Siew Ann; Sloot, Peter M. A.

    2014-02-01

    Crowd simulation is rapidly becoming a standard tool for evacuation planning and evaluation. However, the many crowd models in the literature are structurally different, and few have been rigorously calibrated against real-world egress data, especially in emergency situations. In this paper we describe a procedure to quantitatively compare different crowd models, or models and real-world data. We simulated three models: (1) the lattice gas model, (2) the social force model, and (3) the RVO2 model, and obtained the distributions of six observables: (1) evacuation time, (2) zoned evacuation time, (3) passage density, (4) total distance traveled, (5) inconvenience, and (6) flow rate. We then used the DISTATIS procedure to compute the compromise matrix of statistical distances between the three models. Projecting the three models onto the first two principal components of the compromise matrix, we find the lattice gas and RVO2 models are similar in terms of the evacuation time, passage density, and flow rates, whereas the social force and RVO2 models are similar in terms of the total distance traveled. Most importantly, we find the zoned evacuation times of the three models to be very different from each other. Thus we propose to use this variable, if it can be measured, as the key test between different models, and also between models and the real world. Finally, we compared the model flow rates against the flow rate of an emergency evacuation during the May 2008 Sichuan earthquake, and found that the social force model agrees best with this real data.
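
    The comparison pipeline can be approximated in a short script. As a crude stand-in for DISTATIS, the sketch below averages per-observable Kolmogorov-Smirnov distance matrices and projects the models into two dimensions with classical multidimensional scaling; all data are hypothetical.

```python
import numpy as np

def ks_distance(a, b):
    """Two-sample Kolmogorov-Smirnov statistic (max ECDF difference)."""
    allv = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), allv, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), allv, side="right") / len(b)
    return np.max(np.abs(cdf_a - cdf_b))

def compare_models(samples):
    """samples[m][k]: draws of observable k from model m.

    Averages the per-observable KS distance matrices (a crude stand-in
    for the DISTATIS compromise) and maps the models to 2-D by classical
    multidimensional scaling on the averaged matrix.
    """
    n = len(samples)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            D[i, j] = np.mean([ks_distance(samples[i][k], samples[j][k])
                               for k in range(len(samples[i]))])
    # Classical MDS: double-center the squared distances, eigen-project
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    w, V = np.linalg.eigh(B)
    order = np.argsort(w)[::-1]
    return V[:, order[:2]] * np.sqrt(np.maximum(w[order[:2]], 0))

rng = np.random.default_rng(1)
# Three toy "models", two observables each; models 0 and 1 are similar
samples = [[rng.normal(0, 1, 500), rng.normal(5, 1, 500)],
           [rng.normal(0.1, 1, 500), rng.normal(5, 1, 500)],
           [rng.normal(2, 1, 500), rng.normal(8, 1, 500)]]
coords = compare_models(samples)
```

    In the resulting 2-D map, models whose observable distributions are close cluster together, which is how a discriminating observable such as zoned evacuation time separates otherwise similar models.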

  12. Two models of suicide treatment: evaluation and recommendations.

    PubMed

    Pulakos, J

    1993-01-01

    Treating suicidal patients is one of the most stressful aspects of psychotherapeutic work. This paper describes and evaluates two models of therapy with suicidal patients: the crisis-intervention model, which assumes suicidal feelings are acute and suicide is preventable, and the continuing-therapy model, which emphasizes chronic suicidal feelings and posits that suicide is not preventable. Ethical and legal issues as well as treatment strategies for each model are described. Both therapy models stress the importance of assessing, understanding, and validating the patient's feelings, as well as establishing a good therapeutic relationship. The crisis-intervention model recommends an active, directive intervention, while the continuing-therapy model emphasizes ongoing therapy principles. After reviewing the two models, this article concludes that the assumptions of the crisis-intervention model are not supported while those of the continuing-therapy model are. It also concludes that there are more therapeutic advantages to employing the continuing-therapy model, including taking short-term risks for long-term gain, treating the patient as a responsible adult, and seeing the suicidal behavior in the context of the total personality.

  13. Evaluating plume dispersion models: Expanding the practice to include the model physics

    SciTech Connect

    Weil, J.C.

    1994-12-31

    Plume dispersion models are used in a variety of air-quality applications including the determination of source emission limits, new source sites, etc. The cost of pollution control and siting has generated much interest in model evaluation and accuracy. Two questions are of primary concern: (1) How well does a model predict the high ground-level concentrations (GLCs) that are necessary in assessing compliance with air-quality regulations? This prompts an operational performance evaluation; (2) Is the model based on sound physical principles and does it give good predictions for the "right" reasons? This prompts a model physics evaluation. Although air-quality managers are interested primarily in operational performance, model physics should be an equally important issue. The purpose in establishing good physics is to build confidence in model predictions beyond the limited experimental range, i.e., for new source applications.

  14. Model Evaluation and Hindcasting: An Experiment with an Integrated Assessment Model

    SciTech Connect

    Chaturvedi, Vaibhav; Kim, Son H.; Smith, Steven J.; Clarke, Leon E.; Zhou, Yuyu; Kyle, G. Page; Patel, Pralit L.

    2013-11-01

    Integrated assessment models have been used extensively to analyze long-term energy and greenhouse gas emissions trajectories and have influenced key policies on this subject. Though these models admittedly focus on long-term trajectories, how well they capture historical dynamics is an open question. In a first experiment of its kind, we present a framework for the evaluation of such integrated assessment models. We use the Global Change Assessment Model (GCAM) for this zero-order experiment, and focus on the building sector results for the USA. We calibrate the model for 1990 and run it forward to 2095 in five-year time steps. This gives us results for 1995, 2000, 2005, and 2010, which we compare to observed historical data at both the fuel level and the service level. We focus on bringing out key insights for the wider process of model evaluation through our experiment with GCAM. We begin by highlighting that creation of an evaluation dataset and identification of key evaluation metrics are the foremost challenges in the evaluation process. Our analysis highlights that estimating the functional form of the relationship between energy service demand, which is an unobserved variable, and its drivers is a significant challenge in the absence of adequate historical data for both the dependent and driver variables. Historical data availability for key metrics is a serious limiting factor in the evaluation process. Interestingly, the service-level data against which such models need to be evaluated are themselves the result of models; thus, for energy services, the best we can do is compare our model results with other model results rather than with observed and measured data. We show that long-term models, by the nature of their construction, will most likely underestimate the rapid growth in some services observed over a short time span. We also find that modeling saturated energy services like space heating is easier than modeling unsaturated services like space cooling.
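
    The hindcast-scoring step described above can be sketched as follows; GCAM itself is not reproduced, and the building-sector numbers below are placeholders, not the study's data.

```python
# Sketch: score run-forward model output against observed historical
# data at each available year. All values here are invented.

def hindcast_errors(modeled, observed):
    """Relative error (fraction) of modeled vs. observed, per year."""
    return {yr: (modeled[yr] - observed[yr]) / observed[yr]
            for yr in observed if yr in modeled}

# Hypothetical US building-sector final energy (EJ).
modeled  = {1995: 19.0, 2000: 20.1, 2005: 21.0, 2010: 21.4}
observed = {1995: 18.5, 2000: 20.8, 2005: 21.9, 2010: 22.4}

errors = hindcast_errors(modeled, observed)
worst_year = max(errors, key=lambda yr: abs(errors[yr]))
```

    In practice the hard part, as the abstract stresses, is assembling the `observed` series at the service level at all.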

  15. Evaluation of battery models for prediction of electric vehicle range

    NASA Technical Reports Server (NTRS)

    Frank, H. A.; Phillips, A. M.

    1977-01-01

    Three analytical models for predicting electric vehicle battery output and the corresponding electric vehicle range for various driving cycles were evaluated. The models were used to predict output and range, and the predictions were then compared with values determined experimentally by laboratory tests on batteries, using discharge cycles identical to those encountered by an actual electric vehicle on SAE cycles. Results indicate that the modified Hoxie model gave the best predictions, with an accuracy of about 97 to 98% in the best cases and 86% in the worst case. A computer program was written to perform the lengthy iterative calculations required. The program and the hardware used to automatically discharge the battery are described.
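
    The accuracy figures quoted above suggest a simple relative-error reading of "accuracy"; the sketch below assumes that interpretation (it is not taken from the report), with invented range values.

```python
# Sketch: one plausible accuracy measure for range prediction --
# 100% when prediction equals measurement. Numbers are invented.

def prediction_accuracy(predicted_km, measured_km):
    """Accuracy in percent: 100 * (1 - relative error)."""
    return 100.0 * (1.0 - abs(predicted_km - measured_km) / measured_km)

best  = prediction_accuracy(predicted_km=97.0, measured_km=100.0)
worst = prediction_accuracy(predicted_km=86.0, measured_km=100.0)
```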

  16. Fractional Modeling Method of Cognition Process in Teaching Evaluation

    NASA Astrophysics Data System (ADS)

    Zhao, Chunna; Wu, Minhua; Zhao, Yu; Luo, Liming; Li, Yingshun

    In some assessment and decision systems, the cognition process is translated into other quantitative indicators. This paper proposes a fractional cognition-process model for teaching evaluation. The model is built on fractional calculus theory combined with features of classroom teaching. The fractional coefficient is determined from actual course information, and a student-specific parameter is determined by the actual situation and potential of each individual student. The model is described in detail through a block diagram. The fractional cognition-process model provides an objective quantitative description, on the basis of which teaching quality assessments become more objective and accurate.
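
    The fractional-calculus machinery such a model rests on can be illustrated with the Grünwald-Letnikov approximation of a fractional derivative; the paper's specific cognition model and its data-driven coefficients are not reproduced here.

```python
# Sketch: Grünwald-Letnikov estimate of the fractional derivative
# D^alpha f at t, using the standard recurrence for the signed
# binomial coefficients c_k = (-1)^k * C(alpha, k).
import math

def gl_fractional_derivative(f, t, alpha, h=1e-3):
    """Grünwald-Letnikov estimate of D^alpha f at t (t > 0, h > 0)."""
    n = round(t / h)
    coeff, total = 1.0, f(t)          # c_0 = 1
    for k in range(1, n + 1):
        coeff *= (k - 1 - alpha) / k  # c_k = c_{k-1} * (k-1-alpha)/k
        total += coeff * f(t - k * h)
    return total / h ** alpha

# alpha = 1 reduces to an ordinary (backward-difference) derivative:
d1 = gl_fractional_derivative(lambda t: t * t, t=1.0, alpha=1.0)
# Half-derivative of f(t) = t at t = 1 is 2/sqrt(pi) ~ 1.1284:
d_half = gl_fractional_derivative(lambda t: t, t=1.0, alpha=0.5)
```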

  17. Evaluation of nearshore wave models in steep reef environments

    NASA Astrophysics Data System (ADS)

    Buckley, Mark; Lowe, Ryan; Hansen, Jeff

    2014-06-01

    To provide coastal engineers and scientists with a quantitative evaluation of nearshore numerical wave models in reef environments, we review and compare three commonly used models with detailed laboratory observations. These models are the following: (1) SWASH (Simulating WAves till SHore) (Zijlema et al. 2011), a phase-resolving nonlinear shallow-water wave model with added nonhydrostatic terms; (2) SWAN (Simulating WAve Nearshore) (Booij et al. 1999), a phase-averaged spectral wave model; and (3) XBeach (Roelvink et al. 2009), a coupled phase-averaged spectral wave model (applied to modeling sea-swell waves) and a nonlinear shallow-water model (applied to modeling infragravity waves). A quantitative assessment was made of each model's ability to predict sea-swell (SS) wave height, infragravity (IG) wave height, wave spectra, and wave setup at five locations across the laboratory fringing reef profile of Demirbilek et al. (2007). Simulations were performed with the "recommended" empirical coefficients as documented for each model, and then the key wave-breaking parameter for each model (α in SWASH and γ in both SWAN and XBeach) was optimized to most accurately reproduce the observations. SWASH, SWAN, and XBeach were found to be capable of predicting SS wave height variations across the steep fringing reef profile with reasonable accuracy using the default coefficients. Nevertheless, tuning of the key wave-breaking parameter improved the accuracy of each model's predictions. SWASH and XBeach were also able to predict IG wave height and spectral transformation. Although SWAN was capable of modeling the SS wave height, in its current form, it was not capable of modeling the spectral transformation into lower frequencies, as evident in the underprediction of the low-frequency waves.

  18. Evaluation of mycobacterial virulence using rabbit skin liquefaction model.

    PubMed

    Zhang, Guoping; Zhu, Bingdong; Shi, Wanliang; Wang, Mingzhu; Da, Zejiao; Zhang, Ying

    2010-01-01

    Liquefaction is an important pathological process that can subsequently lead to cavitation, where large numbers of bacilli can be coughed up, which in turn causes the spread of tuberculosis in humans. Current animal models to study the liquefaction process and to evaluate the virulence of mycobacteria are tedious. In this study, we evaluated a rabbit skin model as a rapid model for liquefaction and virulence assessment using M. bovis BCG, the M. tuberculosis avirulent strain H37Ra, M. smegmatis, and H37Ra strains complemented with selected genes from the virulent M. tuberculosis strain H37Rv. We found that with prime and/or boosting immunization, all of these live bacteria at sufficiently high numbers could induce liquefaction, and that boosting induced stronger liquefaction and more severe lesions in a shorter time than the prime injection. The skin lesions caused by high-dose live BCG (5×10^6) were the most severe, followed by live M. tuberculosis H37Ra, with M. smegmatis being the least pathogenic. It is of interest to note that none of the above heat-killed mycobacteria induced liquefaction. When H37Ra was complemented with certain wild-type genes of H37Rv, some of the complemented H37Ra strains produced more severe skin lesions than H37Ra. These results suggest that the rabbit skin liquefaction model can be a more visual, convenient, rapid, and useful model to evaluate the virulence of different mycobacteria and to study the mechanisms of liquefaction.

  19. Toward diagnostic model calibration and evaluation: Approximate Bayesian computation

    NASA Astrophysics Data System (ADS)

    Vrugt, Jasper A.; Sadegh, Mojtaba

    2013-07-01

    Ever-increasing computational power, along with continued advances in measurement technologies and improvements in process understanding, has stimulated the development of increasingly complex hydrologic models that simulate soil moisture flow, groundwater recharge, surface runoff, root water uptake, and river discharge at different spatial and temporal scales. Reconciling these high-order system models with perpetually larger volumes of field data is becoming more and more difficult, particularly because classical likelihood-based fitting methods lack the power to detect and pinpoint deficiencies in the model structure. Gupta et al. (2008) have recently proposed steps (amongst others) toward the development of a more robust and powerful method of model evaluation. Their diagnostic approach uses signature behaviors and patterns observed in the input-output data to illuminate to what degree a representation of the real world has been adequately achieved and how the model should be improved for the purpose of learning and scientific discovery. In this paper, we introduce approximate Bayesian computation (ABC) as a vehicle for diagnostic model evaluation. This statistical methodology relaxes the need for an explicit likelihood function in favor of one or more summary statistics rooted in hydrologic theory that together have a clearer and more compelling diagnostic power than some average measure of the size of the error residuals. Two illustrative case studies are used to demonstrate that ABC is relatively easy to implement and readily employs signature-based indices to analyze and pinpoint which part of the model is malfunctioning and in need of further improvement.
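
    The core ABC idea, accepting a candidate parameter when its simulated summary statistic falls within a tolerance of the observed one, can be sketched on a toy problem (inferring a Gaussian mean; not one of the paper's hydrologic case studies):

```python
# Sketch: ABC rejection sampling on a toy problem. The "signature"
# summary statistic here is simply the sample mean.
import random

random.seed(1)

def simulate(theta, n=50):
    """Forward model: n noisy observations around theta."""
    return [random.gauss(theta, 1.0) for _ in range(n)]

def summary(data):
    """Summary statistic standing in for a hydrologic signature."""
    return sum(data) / len(data)

observed = simulate(3.0)          # pretend these are field data
s_obs = summary(observed)

accepted = []
for _ in range(5000):
    theta = random.uniform(0.0, 6.0)          # draw from the prior
    if abs(summary(simulate(theta)) - s_obs) < 0.1:
        accepted.append(theta)                # keep close matches

posterior_mean = sum(accepted) / len(accepted)
```

    With a hydrologic model one would replace `summary` by signature indices (e.g., runoff-coefficient or flow-duration measures), which is where the diagnostic power comes from.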

  20. Parameter Sensitivity Evaluation of the CLM-Crop model

    NASA Astrophysics Data System (ADS)

    Drewniak, B. A.; Zeng, X.; Mametjanov, A.; Anitescu, M.; Norris, B.; Kotamarthi, V. R.

    2011-12-01

    In order to improve carbon cycling within Earth system models, crop representations for corn, spring wheat, and soybean have been incorporated into the latest version of the Community Land Model (CLM), the land surface model in the Community Earth System Model. As a means to evaluate and improve the CLM-Crop model, we will determine the sensitivity of carbon fluxes (such as GPP and NEE), yields, and soil organic matter to various crop parameters. The sensitivity analysis will apply small perturbations over a range of values for each parameter, both on individual grid sites for comparison with AmeriFlux data and globally, so that crop model parameters can be improved. Over 20 parameters have been identified for evaluation in this study, including carbon-nitrogen ratios for leaves, stems, roots, and organs; fertilizer applications; growing degree days for each growth stage; and more. Results from this study will give a better understanding of the sensitivity of the various parameters used to represent crops, which will help improve overall model performance and aid in determining the future influence of climate change on cropland ecosystems.
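
    The perturbation procedure described above can be sketched generically; the stand-in model and parameter values below are invented for illustration, not CLM-Crop's.

```python
# Sketch: one-at-a-time parameter sensitivity. Each parameter is
# perturbed by a small fraction and parameters are ranked by the
# relative change in a model output (here a made-up "yield").

def toy_yield(params):
    """Stand-in crop model: yield responds to each parameter."""
    return (params["leaf_cn"] * 0.1
            + params["gdd_maturity"] * 0.002
            + params["fertilizer"] * 0.5)

base = {"leaf_cn": 25.0, "gdd_maturity": 1600.0, "fertilizer": 150.0}

def sensitivities(model, params, frac=0.05):
    """Relative output change per parameter for a +frac perturbation."""
    y0 = model(params)
    out = {}
    for name in params:
        bumped = dict(params, **{name: params[name] * (1 + frac)})
        out[name] = (model(bumped) - y0) / y0
    return out

sens = sensitivities(toy_yield, base)
most_sensitive = max(sens, key=sens.get)
```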

  1. Evaluating climate models: Should we use weather or climate observations?

    SciTech Connect

    Oglesby, Robert J; Erickson III, David J

    2009-12-01

    Calling the numerical models that we use for simulations of climate change 'climate models' is a bit of a misnomer. These 'general circulation models' (GCMs, also known as global climate models) and their cousins the 'regional climate models' (RCMs) are actually physically based weather simulators. That is, these models simulate, either globally or locally, daily weather patterns in response to some change in forcing or boundary conditions. These simulated weather patterns are then aggregated into climate statistics, very much as we aggregate observations into 'real' climate statistics. Traditionally, the output of GCMs has been evaluated using climate statistics, as opposed to their ability to simulate realistic daily weather observations. At the coarse global scale this may be a reasonable approach; however, as RCMs downscale to increasingly high resolutions, the conjunction between weather and climate becomes more problematic. We present results from a series of present-day climate simulations using the WRF ARW for domains that cover North America, much of Latin America, and South Asia. The basic domains are at 12 km resolution, but several inner domains at 4 km have also been simulated. These include regions of complex topography in Mexico, Colombia, Peru, and Sri Lanka, as well as a region of low topography and fairly homogeneous land surface type (the U.S. Great Plains). Model evaluations are performed using standard climate analyses (e.g., reanalyses; NCDC data) but also using time series of daily station observations. Preliminary results suggest little difference in the assessment of long-term mean quantities, but variability on seasonal and interannual timescales is better characterized. Furthermore, the value added by using daily weather observations as an evaluation tool increases with model resolution.

  2. Towards the quantitative evaluation of visual attention models.

    PubMed

    Bylinskii, Z; DeGennaro, E M; Rajalingham, R; Ruda, H; Zhang, J; Tsotsos, J K

    2015-11-01

    Scores of visual attention models have been developed over the past several decades of research. Differences in implementation, assumptions, and evaluations have made comparison of these models very difficult. Taxonomies have been constructed in an attempt at the organization and classification of models, but are not sufficient at quantifying which classes of models are most capable of explaining available data. At the same time, a multitude of physiological and behavioral findings have been published, measuring various aspects of human and non-human primate visual attention. All of these elements highlight the need to integrate the computational models with the data by (1) operationalizing the definitions of visual attention tasks and (2) designing benchmark datasets to measure success on specific tasks, under these definitions. In this paper, we provide some examples of operationalizing and benchmarking different visual attention tasks, along with the relevant design considerations.

  4. Human Modeling Evaluations in Microgravity Workstation and Restraint Development

    NASA Technical Reports Server (NTRS)

    Whitmore, Mihriban; Chmielewski, Cynthia; Wheaton, Aneice; Hancock, Lorraine; Beierle, Jason; Bond, Robert L. (Technical Monitor)

    1999-01-01

    The International Space Station (ISS) will provide long-term missions which will enable the astronauts to live and work, as well as conduct research, in a microgravity environment. The dominant factor in space affecting the crew is "weightlessness", which creates a challenge for establishing workstation microgravity design requirements. The crewmembers will work at various workstations such as the Human Research Facility (HRF), Microgravity Sciences Glovebox (MSG), and Life Sciences Glovebox (LSG). Since the crew will spend a considerable amount of time at these workstations, it is critical that ergonomic design requirements be an integral part of the design and development effort. In order to achieve this goal, the Space Human Factors Laboratory in the Johnson Space Center Flight Crew Support Division has been tasked to conduct integrated evaluations of workstations and associated crew restraints. A two-phase approach was used: 1) ground and microgravity evaluations of the physical dimensions and layout of the workstation components, and 2) human modeling analyses of the user interface. Computer-based human modeling evaluations were an important part of the approach throughout the design and development process. Human modeling during the conceptual design phase included crew reach and accessibility of individual equipment, as well as crew restraint needs. During later design phases, human modeling was used in conjunction with ground reviews and microgravity evaluations of the mock-ups in order to verify the human factors requirements. (Specific examples will be discussed.) This two-phase approach was the most efficient method to determine ergonomic design characteristics for workstations and restraints. The real-time evaluations provided hands-on implementation in a microgravity environment; on the other hand, only a limited number of participants could be tested. The human modeling evaluations provided a more detailed analysis of the setup.
The issues identified

  5. Evaluation of Black Carbon Estimations in Global Aerosol Models

    SciTech Connect

    Koch, D.; Schulz, M.; Kinne, Stefan; McNaughton, C. S.; Spackman, J. R.; Balkanski, Y.; Bauer, S.; Berntsen, T.; Bond, Tami C.; Boucher, Olivier; Chin, M.; Clarke, A. D.; De Luca, N.; Dentener, F.; Diehl, T.; Dubovik, O.; Easter, Richard C.; Fahey, D. W.; Feichter, J.; Fillmore, D.; Freitag, S.; Ghan, Steven J.; Ginoux, P.; Gong, S.; Horowitz, L.; Iversen, T.; Kirkevag, A.; Klimont, Z.; Kondo, Yutaka; Krol, M.; Liu, Xiaohong; Miller, R.; Montanaro, V.; Moteki, N.; Myhre, G.; Penner, J.; Perlwitz, Ja; Pitari, G.; Reddy, S.; Sahu, L.; Sakamoto, H.; Schuster, G.; Schwarz, J. P.; Seland, O.; Stier, P.; Takegawa, Nobuyuki; Takemura, T.; Textor, C.; van Aardenne, John; Zhao, Y.

    2009-11-27

    We evaluate black carbon (BC) model predictions from the AeroCom model intercomparison project by considering the diversity among year 2000 model simulations and comparing model predictions with available measurements. These model-measurement intercomparisons include BC surface and aircraft concentrations, aerosol absorption optical depth (AAOD) from AERONET and OMI retrievals, and BC column estimations based on AERONET. In regions other than Asia, most models are biased high compared to surface concentration measurements. However, compared with (column) AAOD or BC burden retrievals, the models are generally biased low. The average ratio of model to retrieved AAOD is less than 0.7 in South American and 0.6 in African biomass burning regions; both of these regions lack surface concentration measurements. In Asia the average model to observed ratio is 0.6 for AAOD and 0.5 for BC surface concentrations. Compared with aircraft measurements over the Americas at latitudes between 0 and 50N, the average model is a factor of 10 larger than observed, and most models exceed the measured BC standard deviation in the mid to upper troposphere. At higher latitudes the average model to aircraft BC ratio is 0.6, and models underestimate the observed BC loading in the lower and middle troposphere associated with springtime Arctic haze. Low model bias for AAOD but overestimation of surface and upper atmospheric BC concentrations at lower latitudes suggests that most models are underestimating BC absorption and should improve estimates for refractive index, particle size, and optical effects of BC coating. Retrieval uncertainties and/or differences with model diagnostic treatment may also contribute to the model-measurement disparity. The largest AeroCom model diversity occurred in northern Eurasia and the remote Arctic, regions influenced by anthropogenic sources. Changing emissions, aging, removal, or optical properties within a single model generated a smaller change in model predictions than the

  6. Evaluation of reliability modeling tools for advanced fault tolerant systems

    NASA Technical Reports Server (NTRS)

    Baker, Robert; Scheper, Charlotte

    1986-01-01

    The Computer Aided Reliability Estimation (CARE III) and Automated Reliability Interactive Estimation System (ARIES 82) reliability tools were evaluated for application to advanced fault tolerant aerospace systems. To determine reliability modeling requirements, the evaluation focused on the Draper Laboratories' Advanced Information Processing System (AIPS) architecture as an example architecture for fault tolerant aerospace systems. Advantages and limitations were identified for each reliability evaluation tool. The CARE III program was designed primarily for analyzing ultrareliable flight control systems. The ARIES 82 program's primary use was to support university research and teaching. Neither CARE III nor ARIES 82 was suited for determining the reliability of complex nodal networks of the type used to interconnect processing sites in the AIPS architecture. It was concluded that ARIES was not suitable for modeling advanced fault tolerant systems. It was further concluded that, subject to some limitations (difficulty in modeling systems with unpowered spare modules, systems where equipment maintenance must be considered, systems where failure depends on the sequence in which faults occurred, and systems where multiple faults beyond double near-coincident faults must be considered), CARE III is best suited for evaluating the reliability of advanced fault tolerant systems for air transport.

  7. Evaluation of Medical Education virtual Program: P3 model

    PubMed Central

    REZAEE, RITA; SHOKRPOUR, NASRIN; BOROUMAND, MARYAM

    2016-01-01

    Introduction: In e-learning, people get involved in a process, create the content (product), and make it available to virtual learners. The present study was carried out in order to evaluate the first virtual master's program in medical education at Shiraz University of Medical Sciences according to the P3 model. Methods: This is an evaluation research study with a single-group posttest design used to determine how effective this program was. All 60 students who had participated for more than one year in this virtual program and 21 experts, including teachers and directors, participated in this evaluation project. Based on the P3 e-learning model, an evaluation tool with a 5-point Likert rating scale was designed and applied to collect the descriptive data. Results: Students reported storyboard and course design as the most desirable element of the learning environment (2.30±0.76), but they rated technical support as the least desirable part (1.17±1.23). Conclusion: A framework of this kind, used with appropriate evaluation tools for e-learning in the country's universities and higher education institutes that offer e-learning curricula, may contribute to the efficient implementation of present and future e-learning curricula and help guarantee that they are implemented appropriately.

  8. A review and evaluation of intraurban air pollution exposure models.

    PubMed

    Jerrett, Michael; Arain, Altaf; Kanaroglou, Pavlos; Beckerman, Bernardo; Potoglou, Dimitri; Sahsuvaroglu, Talar; Morrison, Jason; Giovis, Chris

    2005-03-01

    The development of models to assess air pollution exposures within cities for assignment to subjects in health studies has been identified as a priority area for future research. This paper reviews models for assessing intraurban exposure, grouped into six classes: (i) proximity-based assessments, (ii) statistical interpolation, (iii) land use regression models, (iv) line dispersion models, (v) integrated emission-meteorological models, and (vi) hybrid models combining personal or household exposure monitoring with one of the preceding methods. We enrich this review of the modelling procedures and results with applied examples from Hamilton, Canada. In addition, we qualitatively evaluate the models based on key criteria important to health effects assessment research. Hybrid models appear well suited to overcoming the problem of achieving population-representative samples while understanding the role of exposure variation at the individual level. Remote sensing and activity-space analysis will complement refinements in pre-existing methods, and with expected advances, the field of exposure assessment may help to reduce scientific uncertainties that now impede policy intervention aimed at protecting public health.

  9. Postural effects on intracranial pressure: modeling and clinical evaluation.

    PubMed

    Qvarlander, Sara; Sundström, Nina; Malm, Jan; Eklund, Anders

    2013-11-01

    The physiological effect of posture on intracranial pressure (ICP) is not well described. This study defined and evaluated three mathematical models describing the postural effects on ICP, designed to predict ICP at different head-up tilt angles from the supine ICP value. Model I was based on a hydrostatic indifference point for the cerebrospinal fluid (CSF) system, i.e., the existence of a point in the system where pressure is independent of body position. Models II and III were based on Davson's equation for CSF absorption, which relates ICP to venous pressure, and postulated that gravitational effects within the venous system are transferred to the CSF system. Model II assumed a fully communicating venous system, and model III assumed that collapse of the jugular veins at higher tilt angles creates two separate hydrostatic compartments. Evaluation of the models was based on ICP measurements at seven tilt angles (0-71°) in 27 normal pressure hydrocephalus patients. ICP decreased with tilt angle (ANOVA: P < 0.01). The reduction was well predicted by model III (ANOVA lack-of-fit: P = 0.65), which showed excellent fit against measured ICP. Neither model I nor II adequately described the reduction in ICP (ANOVA lack-of-fit: P < 0.01). Postural changes in ICP could not be predicted based on the currently accepted theory of a hydrostatic indifference point for the CSF system, but a new model combining Davson's equation for CSF absorption and hydrostatic gradients in a collapsible venous system performed well and can be useful in future research on gravity and CSF physiology.

  10. Evaluation of Model Recognition for Grammar-Based Automatic 3d Building Model Reconstruction

    NASA Astrophysics Data System (ADS)

    Yu, Qian; Helmholz, Petra; Belton, David

    2016-06-01

    In recent years, 3D city models have been in high demand by many public and private organisations, and steady growth in both their quality and quantity is further increasing that demand. The quality evaluation of these 3D models is a relevant issue from both the scientific and practical points of view. In this paper, we present a method for the quality evaluation of 3D building models which are reconstructed automatically from terrestrial laser scanning (TLS) data based on an attributed building grammar. The entire evaluation process is performed in all three dimensions in terms of the completeness and correctness of the reconstruction. Six quality measures are introduced and applied to four datasets of reconstructed building models in order to describe the quality of the automatic reconstruction, and their validity is also assessed from the evaluation point of view.
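
    Completeness and correctness are conventionally the recall and precision of matching the reconstruction against a reference; the sketch below assumes a voxel-overlap matching with invented data, as an illustration rather than the paper's six measures.

```python
# Sketch: completeness/correctness of a reconstruction as set overlap
# between reconstructed and reference elements (e.g., occupied voxels).

def completeness(reconstructed, reference):
    """Share of the reference captured by the reconstruction (recall)."""
    return len(reconstructed & reference) / len(reference)

def correctness(reconstructed, reference):
    """Share of the reconstruction supported by the reference (precision)."""
    return len(reconstructed & reference) / len(reconstructed)

# Invented example: a 10x10 reference footprint; the reconstruction
# misses two columns and adds two spurious voxels elsewhere.
reference     = {(x, y, 0) for x in range(10) for y in range(10)}
reconstructed = ({(x, y, 0) for x in range(8) for y in range(10)}
                 | {(20, 20, 0), (21, 20, 0)})

comp = completeness(reconstructed, reference)   # 80 of 100 captured
corr = correctness(reconstructed, reference)    # 80 of 82 supported
```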

  11. Evaluation of a locally homogeneous flow model of spray combustion

    NASA Technical Reports Server (NTRS)

    Mao, C.-P.; Szekely, G. A., Jr.; Faeth, G. M.

    1980-01-01

    A simplified model of spray combustion was evaluated. The model was compared with measurements in both a gaseous propane flame and an air atomized n-pentane spray flame (35 micron Sauter mean diameter). Profiles of mean velocity, temperature, and species concentrations, as well as velocity fluctuations and Reynolds stress, were measured. The predictions for the gas flame were in excellent agreement with measurements, except for product species concentrations where errors due to finite reaction rates were detected. Predictions within the spray were qualitatively correct, but the model overestimated the rate of development of the flow; e.g., predicted flame lengths were 30% shorter than measured. Calibrated drop-life-history calculations showed that finite interphase transport rates caused the discrepancy and that initial drop diameters less than 20 microns would be required for quantitative accuracy of the model.

  12. obs4MIPS: Satellite Datasets for Model Evaluation

    NASA Astrophysics Data System (ADS)

    Ferraro, R.; Waliser, D. E.; Gleckler, P. J.

    2013-12-01

    This poster will review the current status of the obs4MIPs project, whose purpose is to provide a limited collection of well-established and documented datasets for comparison with Earth system models. These datasets have been reformatted to correspond with the CMIP5 model output requirements, and include technical documentation specifically targeted for their use in model output evaluation. There are currently over 50 datasets containing observations that directly correspond to CMIP5 model output variables. We will review the rationale and requirements for obs4MIPs contributions, and provide summary information about the current obs4MIPs holdings on the Earth System Grid Federation. We will also provide some usage statistics, an update on governance for the obs4MIPs project, and plans for supporting CMIP6.

  13. Methods for evaluating the predictive accuracy of structural dynamic models

    NASA Technical Reports Server (NTRS)

    Hasselman, Timothy K.; Chrostowski, Jon D.

    1991-01-01

Modeling uncertainty is defined in terms of the difference between predicted and measured eigenvalues and eigenvectors. Data compiled from 22 sets of analysis/test results were used to create statistical databases for large truss-type space structures and both pretest and posttest models of conventional satellite-type space structures. Modeling uncertainty is propagated through the model to produce intervals of uncertainty on frequency response functions, both amplitude and phase. This methodology was used successfully to evaluate the predictive accuracy of several structures, including the NASA CSI Evolutionary Structure tested at Langley Research Center. Test measurements for this structure were, for the most part, within the predicted one-sigma intervals, demonstrating the validity of the methodology and computer code.

  14. An epidemic model to evaluate the homogeneous mixing assumption

    NASA Astrophysics Data System (ADS)

    Turnes, P. P.; Monteiro, L. H. A.

    2014-11-01

    Many epidemic models are written in terms of ordinary differential equations (ODE). This approach relies on the homogeneous mixing assumption; that is, the topological structure of the contact network established by the individuals of the host population is not relevant to predict the spread of a pathogen in this population. Here, we propose an epidemic model based on ODE to study the propagation of contagious diseases conferring no immunity. The state variables of this model are the percentages of susceptible individuals, infectious individuals and empty space. We show that this dynamical system can experience transcritical and Hopf bifurcations. Then, we employ this model to evaluate the validity of the homogeneous mixing assumption by using real data related to the transmission of gonorrhea, hepatitis C virus, human immunodeficiency virus, and obesity.
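The abstract above describes an ODE-based compartment model; as a hedged illustration (not the authors' equations, which include an empty-space compartment), a minimal homogeneous-mixing SIS model with no immunity can be integrated and checked against its analytic endemic equilibrium:

```python
import numpy as np
from scipy.integrate import solve_ivp

def sis_rhs(t, y, beta, gamma):
    """SIS model under homogeneous mixing: infection by mass action,
    recovered individuals return directly to the susceptible pool."""
    s, i = y
    new_inf = beta * s * i
    recov = gamma * i
    return [-new_inf + recov, new_inf - recov]

beta, gamma = 0.5, 0.2   # illustrative rates, not the paper's values
sol = solve_ivp(sis_rhs, (0.0, 400.0), [0.99, 0.01],
                args=(beta, gamma), rtol=1e-8, atol=1e-10)
i_endemic = 1.0 - gamma / beta   # analytic equilibrium i* = 1 - gamma/beta
```

For beta > gamma the infectious fraction converges to i* regardless of the (nonzero) initial condition, which is the kind of equilibrium behavior the bifurcation analysis in the paper characterizes.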

  15. scoringRules - A software package for probabilistic model evaluation

    NASA Astrophysics Data System (ADS)

    Lerch, Sebastian; Jordan, Alexander; Krüger, Fabian

    2016-04-01

Models in the geosciences are generally surrounded by uncertainty, and being able to quantify this uncertainty is key to good decision making. Accordingly, probabilistic forecasts in the form of predictive distributions have become popular over the last decades. With the proliferation of probabilistic models arises the need for decision-theoretically principled tools to evaluate the appropriateness of models and forecasts in a generalized way. Various scoring rules have been developed over the past decades to address this demand. Proper scoring rules are functions S(F,y) which evaluate the accuracy of a forecast distribution F, given that an outcome y was observed. As such, they allow the comparison of alternative models, a crucial ability given the variety of theories, data sources and statistical specifications that are available in many situations. This poster presents the software package scoringRules for the statistical programming language R, which contains functions to compute popular scoring rules, such as the continuous ranked probability score, for a variety of distributions F that come up in applied work. Two main classes are parametric distributions, like normal, t, or gamma distributions, and distributions that are not known analytically but are indirectly described through a sample of simulation draws; Bayesian forecasts produced via Markov chain Monte Carlo, for example, take this form. The scoringRules package thereby provides a framework for generalized model evaluation that includes both Bayesian and classical parametric models. The scoringRules package aims to be a convenient, dictionary-like reference for computing scoring rules. We offer state-of-the-art implementations of several known (but not routinely applied) formulas, and implement closed-form expressions that were previously unavailable. Whenever more than one implementation variant exists, we offer statistically principled default choices.
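scoringRules itself is an R package; as a language-neutral sketch of what such a scoring rule computes, the well-known closed form of the continuous ranked probability score for a Gaussian forecast can be written in a few lines of Python (scipy assumed):

```python
import math
from scipy.stats import norm

def crps_normal(mu, sigma, y):
    """Closed-form CRPS of a Gaussian forecast N(mu, sigma^2) against an
    observation y; lower is better. z is the standardized observation."""
    z = (y - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1)
                    + 2 * norm.pdf(z)
                    - 1 / math.sqrt(math.pi))

# A sharper forecast centered on the outcome scores better (lower):
score_sharp = crps_normal(0.0, 0.5, 0.0)
score_wide = crps_normal(0.0, 1.0, 0.0)
```

Being proper, the score rewards both calibration and sharpness: at y = mu the CRPS is proportional to sigma, so the sharper of two well-centered forecasts wins.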

  16. Evaluation of Improved Spacecraft Models for GLONASS Orbit Determination

    NASA Astrophysics Data System (ADS)

    Weiss, J. P.; Sibthorpe, A.; Harvey, N.; Bar-Sever, Y.; Kuang, D.

    2010-12-01

High-fidelity spacecraft models become more important as orbit determination strategies achieve greater levels of precision and accuracy. In this presentation, we assess the impacts of new solar radiation pressure and attitude models on precise orbit determination (POD) for GLONASS spacecraft within JPL's GIPSY-OASIS software. A new solar radiation pressure model is developed by empirically fitting a Fourier expansion to the solar pressure forces acting on the spacecraft X, Y, Z components, using one year of recent orbit data. Compared to a basic "box-wing" solar pressure model, the median 24-hour orbit prediction accuracy for one month of independent test data improves by 43%. We additionally implement an updated yaw attitude model for eclipse periods. We evaluate the impacts of both models on post-processed POD solutions spanning six months, considering metrics such as internal orbit and clock overlaps as well as comparisons to independent solutions. Improved yaw attitude modeling reduces the dependence of these metrics on the "solar elevation" angle. The updated solar pressure model improves orbit overlap statistics by several millimeters in the median sense and by centimeters in the maximum sense (1D). Orbit differences relative to the IGS combined solution are at or below the 5 cm level (1D RMS).
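A minimal sketch of the kind of empirical Fourier fit described above, assuming synthetic acceleration data and a hypothetical angular argument eps (the actual JPL model, its argument, and its expansion order are not specified in the abstract):

```python
import numpy as np

def fit_fourier(eps, accel, order):
    """Least-squares fit of a truncated Fourier series
    a(eps) ~ a0 + sum_k [a_k cos(k*eps) + b_k sin(k*eps)]."""
    cols = [np.ones_like(eps)]
    for k in range(1, order + 1):
        cols += [np.cos(k * eps), np.sin(k * eps)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, accel, rcond=None)
    return coef, A @ coef

# Synthetic "measured" accelerations (m/s^2) over one revolution of eps:
eps = np.linspace(0.0, 2.0 * np.pi, 200)
truth = 1e-7 * (0.5 + 0.3 * np.cos(eps) - 0.1 * np.sin(2 * eps))
coef, fitted = fit_fourier(eps, truth, order=2)
```

Because the basis spans the synthetic signal exactly, the recovered coefficients [a0, a1, b1, a2, b2] match the generating values to machine precision; with real orbit data a residual term would remain.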

  17. The establishment of the evaluation model for pupil's lunch suppliers

    NASA Astrophysics Data System (ADS)

    Lo, Chih-Yao; Hou, Cheng-I.; Ma, Rosa

    2011-10-01

The aim of this study is to establish an evaluation model for the government-controlled private suppliers of school lunches in the public middle and primary schools of Miao-Li County. After the literature search and the integration of opinions from anonymous experts via the Modified Delphi Method, grading forms from relevant schools in and outside Miao-Li County will first be collected and the layered evaluation structure constructed. Data analysis will then be performed on the retrieved questionnaires, which are designed in accordance with the Analytic Hierarchy Process (AHP). Finally, the evaluation form for the government-controlled private suppliers can be constructed and presented, in the hope of benefiting the personnel in charge of school meal purchasing.
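For context, a hedged sketch of the AHP priority-weight computation such a questionnaire analysis would rest on, using the standard principal-eigenvector method and Saaty's consistency ratio (the comparison matrix below is purely illustrative, not the study's data):

```python
import numpy as np

def ahp_weights(M):
    """Priority weights from a pairwise comparison matrix via the principal
    eigenvector, plus Saaty's consistency ratio CR = CI / RI."""
    vals, vecs = np.linalg.eig(M)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()                               # normalize weights to sum to 1
    n = M.shape[0]
    ci = (vals[k].real - n) / (n - 1)          # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n)    # Saaty's random index
    return w, (ci / ri if ri else None)

# Hypothetical pairwise judgments among three supplier criteria:
M = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])
w, cr = ahp_weights(M)
```

A CR below 0.1 is the conventional threshold for accepting a respondent's judgments as consistent.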

  18. Photovoltaic performance models: an evaluation with actual field data

    NASA Astrophysics Data System (ADS)

    TamizhMani, Govindasamy; Ishioye, John-Paul; Voropayev, Arseniy; Kang, Yi

    2008-08-01

Prediction of energy production is crucial to the design and installation of building integrated photovoltaic systems. This prediction should be attainable from commonly available parameters such as system size, orientation and tilt angle. Several commercial as well as freely downloadable software tools exist to predict energy production. Six software models have been evaluated in this study: PV Watts, PVsyst, MAUI, Clean Power Estimator, Solar Advisor Model (SAM) and RETScreen. The evaluation compares the monthly, seasonal and annual predicted data with the actual field data obtained over a one-year period on a large number of residential PV systems ranging between 2 and 3 kWdc. All the systems are located in Arizona, within the Phoenix metropolitan area (latitude 33° North, longitude 112° West), and are all connected to the electrical grid.

  19. Evaluation of predictions in the CASP10 model refinement category

    PubMed Central

    Nugent, Timothy; Cozzetto, Domenico; Jones, David T

    2014-01-01

    Here we report on the assessment results of the third experiment to evaluate the state of the art in protein model refinement, where participants were invited to improve the accuracy of initial protein models for 27 targets. Using an array of complementary evaluation measures, we find that five groups performed better than the naïve (null) method—a marked improvement over CASP9, although only three were significantly better. The leading groups also demonstrated the ability to consistently improve both backbone and side chain positioning, while other groups reliably enhanced other aspects of protein physicality. The top-ranked group succeeded in improving the backbone conformation in almost 90% of targets, suggesting a strategy that for the first time in CASP refinement is successful in a clear majority of cases. A number of issues remain unsolved: the majority of groups still fail to improve the quality of the starting models; even successful groups are only able to make modest improvements; and no prediction is more similar to the native structure than to the starting model. Successful refinement attempts also often go unrecognized, as suggested by the relatively larger improvements when predictions not submitted as model 1 are also considered. Proteins 2014; 82(Suppl 2):98–111. PMID:23900810

  20. A Model Evaluation Data Set for the Tropical ARM Sites

    DOE Data Explorer

    Jakob, Christian

    2008-01-15

    This data set has been derived from various ARM and external data sources with the main aim of providing modelers easy access to quality controlled data for model evaluation. The data set contains highly aggregated (in time) data from a number of sources at the tropical ARM sites at Manus and Nauru. It spans the years of 1999 and 2000. The data set contains information on downward surface radiation; surface meteorology, including precipitation; atmospheric water vapor and cloud liquid water content; hydrometeor cover as a function of height; and cloud cover, cloud optical thickness and cloud top pressure information provided by the International Satellite Cloud Climatology Project (ISCCP).

  1. Evaluation of Differentiation Strategy in Shipping Enterprises with Simulation Model

    NASA Astrophysics Data System (ADS)

    Vaxevanou, Anthi Z.; Ferfeli, Maria V.; Damianos, Sakas P.

    2009-08-01

The present study investigates the circumstances that prevail in European shipping enterprises, with special reference to Greek ones, in order to explore the potential implementation of strategies for creating a unique competitive advantage [1]. The shipping sector is composed of enterprises that are mainly active in three areas: passenger, commercial and naval. The main target is to create a dynamic simulation model which, with reference to the STAIR strategic model, will evaluate the differentiation strategy choices that some of the shipping enterprises have made.

  2. Evaluating Arctic warming mechanisms in CMIP5 models

    NASA Astrophysics Data System (ADS)

    Franzke, Christian L. E.; Lee, Sukyoung; Feldstein, Steven B.

    2016-07-01

Arctic warming is one of the most striking signals of global warming. The Arctic is one of the fastest warming regions on Earth and thus constitutes a good test bed for evaluating the ability of climate models to reproduce the physics and dynamics involved in Arctic warming. Different physical and dynamical mechanisms have been proposed to explain Arctic amplification, including the surface albedo feedback and poleward sensible and latent heat transport processes. During the winter season, when Arctic amplification is most pronounced, the first mechanism relies on an enhancement in upward surface heat flux, while the second mechanism does not. In these mechanisms, downward infrared radiation (IR) has been proposed to play a role to varying degrees. Here, we show that the current generation of CMIP5 climate models all reproduce Arctic warming, with high pattern correlations (typically greater than 0.9) between the surface air temperature (SAT) trend and the downward IR trend. However, we find that there are two groups of CMIP5 models: one with small pattern correlations between the Arctic SAT trend and the surface vertical heat flux trend (Group 1), and the other with large correlations between the same two variables (Group 2). The Group 1 models exhibit higher pattern correlations between the Arctic SAT and 500 hPa geopotential height trends than do the Group 2 models. These findings suggest that Arctic warming in Group 1 models is more closely related to changes in the large-scale atmospheric circulation, whereas in Group 2 the albedo feedback effect plays a more important role. Interestingly, while Group 1 models have a warm or weak bias in their Arctic SAT, Group 2 models show large cold biases. This stark difference in model bias leads us to hypothesize that, for a given model, the dominant Arctic warming mechanism and trend may depend on the bias of the model mean state.

  3. Modeling Healthcare Processes Using Commitments: An Empirical Evaluation

    PubMed Central

    2015-01-01

The two primary objectives of this paper are: (a) to demonstrate how Comma, a business modeling methodology based on commitments, can be applied in healthcare process modeling, and (b) to evaluate the effectiveness of such an approach in producing healthcare process models. We apply the Comma approach to a breast cancer diagnosis process adapted from an HHS committee report, and present the results of an empirical study that compares Comma with a traditional approach based on the HL7 Messaging Standard (Traditional-HL7). Our empirical study involved 47 subjects and two phases. In the first phase, we partitioned the subjects into two approximately equal groups. We gave each group the same requirements based on a process scenario for breast cancer diagnosis. Members of one group first applied Traditional-HL7 and then Comma, whereas members of the second group first applied Comma and then Traditional-HL7, each on the above-mentioned requirements. Thus, each subject produced two models, each model being a set of UML Sequence Diagrams. In the second phase, we repartitioned the subjects into two groups with approximately equal distributions from both original groups. We developed exemplar Traditional-HL7 and Comma models; we gave one repartitioned group our Traditional-HL7 model and the other repartitioned group our Comma model. We provided the same changed set of requirements to all subjects and asked them to modify the provided exemplar model to satisfy the new requirements. We assessed solutions produced by subjects in both phases with respect to measures of flexibility, time, difficulty, objective quality, and subjective quality. Our study found that Comma is superior to Traditional-HL7 in flexibility and objective quality, as validated via Student's t-test at the 10% level of significance. Comma is a promising new approach for modeling healthcare processes. Further gains could be made through improved tooling and enhanced training of modeling personnel. PMID

  4. Modeling Healthcare Processes Using Commitments: An Empirical Evaluation.

    PubMed

    Telang, Pankaj R; Kalia, Anup K; Singh, Munindar P

    2015-01-01

The two primary objectives of this paper are: (a) to demonstrate how Comma, a business modeling methodology based on commitments, can be applied in healthcare process modeling, and (b) to evaluate the effectiveness of such an approach in producing healthcare process models. We apply the Comma approach to a breast cancer diagnosis process adapted from an HHS committee report, and present the results of an empirical study that compares Comma with a traditional approach based on the HL7 Messaging Standard (Traditional-HL7). Our empirical study involved 47 subjects and two phases. In the first phase, we partitioned the subjects into two approximately equal groups. We gave each group the same requirements based on a process scenario for breast cancer diagnosis. Members of one group first applied Traditional-HL7 and then Comma, whereas members of the second group first applied Comma and then Traditional-HL7, each on the above-mentioned requirements. Thus, each subject produced two models, each model being a set of UML Sequence Diagrams. In the second phase, we repartitioned the subjects into two groups with approximately equal distributions from both original groups. We developed exemplar Traditional-HL7 and Comma models; we gave one repartitioned group our Traditional-HL7 model and the other repartitioned group our Comma model. We provided the same changed set of requirements to all subjects and asked them to modify the provided exemplar model to satisfy the new requirements. We assessed solutions produced by subjects in both phases with respect to measures of flexibility, time, difficulty, objective quality, and subjective quality. Our study found that Comma is superior to Traditional-HL7 in flexibility and objective quality, as validated via Student's t-test at the 10% level of significance. Comma is a promising new approach for modeling healthcare processes. Further gains could be made through improved tooling and enhanced training of modeling personnel.
PMID:26539985

  5. Risk assessment and remedial policy evaluation using predictive modeling

    SciTech Connect

    Linkov, L.; Schell, W.R.

    1996-06-01

As a result of nuclear industry operation and accidents, large areas of natural ecosystems have been contaminated by radionuclides and toxic metals. Extensive societal pressure has been exerted to decrease the radiation dose to the population and to the environment. Thus, in making abatement and remediation policy decisions, not only economic costs but also human and environmental risk assessments are desired. This paper introduces a general framework for risk assessment and remedial policy evaluation using predictive modeling. Ecological risk assessment requires evaluation of the radionuclide distribution in ecosystems. The FORESTPATH model is used for predicting the radionuclide fate in forest compartments after deposition, as well as for evaluating the efficiency of remedial policies. The time of intervention and the radionuclide deposition profile were predicted to be crucial for remediation efficiency. Risk assessment conducted for a critical group of forest users in Belarus shows that consumption of forest products (berries and mushrooms) leads to about 0.004% risk of a fatal cancer annually. Cost-benefit analysis for forest cleanup suggests that complete removal of the organic layer is too expensive for application in Belarus and that a better methodology is required. In conclusion, the FORESTPATH modeling framework could have wide applications in environmental remediation of radionuclides and toxic metals, as well as in dose reconstruction and risk assessment.

  6. An Evaluation of the Decision-Making Capacity Assessment Model

    PubMed Central

    Brémault-Phillips, Suzette C.; Parmar, Jasneet; Friesen, Steven; Rogers, Laura G.; Pike, Ashley; Sluggett, Bryan

    2016-01-01

    Background The Decision-Making Capacity Assessment (DMCA) Model includes a best-practice process and tools to assess DMCA, and implementation strategies at the organizational and assessor levels to support provision of DMCAs across the care continuum. A Developmental Evaluation of the DMCA Model was conducted. Methods A mixed methods approach was used. Survey (N = 126) and focus group (N = 49) data were collected from practitioners utilizing the Model. Results Strengths of the Model include its best-practice and implementation approach, applicability to independent practitioners and inter-professional teams, focus on training/mentoring to enhance knowledge/skills, and provision of tools/processes. Post-training, participants agreed that they followed the Model’s guiding principles (90%), used problem-solving (92%), understood discipline-specific roles (87%), were confident in their knowledge of DMCAs (75%) and pertinent legislation (72%), accessed consultative services (88%), and received management support (64%). Model implementation is impeded when role clarity, physician engagement, inter-professional buy-in, accountability, dedicated resources, information sharing systems, and remuneration are lacking. Dedicated resources, job descriptions inclusive of DMCAs, ongoing education/mentoring supports, access to consultative services, and appropriate remuneration would support implementation. Conclusions The DMCA Model offers practitioners, inter-professional teams, and organizations a best-practice and implementation approach to DMCAs. Addressing barriers and further contextualizing the Model would be warranted. PMID:27729947

  7. Evaluating sand and clay models: do rheological differences matter?

    NASA Astrophysics Data System (ADS)

    Eisenstadt, Gloria; Sims, Darrell

    2005-08-01

    Dry sand and wet clay are the most frequently used materials for physical modeling of brittle deformation. We present a series of experiments that shows when the two materials can be used interchangeably, document the differences in deformation patterns and discuss how best to evaluate and apply results of physical models. Extension and shortening produce similar large-scale deformation patterns in dry sand and wet clay models, indicating that the two materials can be used interchangeably for analysis of gross deformation geometries. There are subtle deformation features that are significantly different: (1) fault propagation and fault linkage; (2) fault width, spacing and displacement; (3) extent of deformation zone; and (4) amount of folding vs. faulting. These differences are primarily due to the lower cohesion of sand and its larger grain size. If these features are of interest, the best practice would be to repeat the experiments with more than one material to ensure that rheological differences are not biasing results. Dry sand and wet clay produce very different results in inversion models; almost all faults are reactivated in wet clay, and few, if any, are significantly reactivated in sand models. Fault reactivation is attributed to high fluid pressure along the fault zone in the wet clay, a situation that may be analogous to many rocks. Sand inversion models may be best applied to areas where most faults experience little to no reactivation, while clay models best fit areas where most pre-existing normal faults are reactivated.

  8. Performance criteria to evaluate air quality modeling applications

    NASA Astrophysics Data System (ADS)

    Thunis, P.; Pederzoli, A.; Pernigotti, D.

    2012-11-01

A set of statistical indicators fit for air quality model evaluation is selected based on experience and the literature: the Root Mean Square Error (RMSE), the bias, the Standard Deviation (SD) and the correlation coefficient (R). Among these, the RMSE is proposed as the key indicator of model skill. Model Performance Criteria (MPC) to investigate whether model results are 'good enough' for a given application are calculated based on the observation uncertainty (U). The basic concept is to allow model results a similar margin of tolerance (in terms of uncertainty) as observations. U is pollutant, concentration level and station dependent; the proposed MPC are therefore normalized by U. Some composite diagrams are adapted or introduced to visualize model performance in terms of the proposed MPC and are illustrated in a real modeling application. The Target diagram, used to visualize the RMSE, is adapted with a new normalization on its axes, and complementary diagrams are proposed. In this first application the dependence of U on concentration level is ignored, and an assumption on the pollutant-dependent relative error is made. The advantages of this new approach are finally described.
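A minimal sketch of these indicators together with an uncertainty-normalized criterion; normalizing the RMSE by twice the RMS observation uncertainty is one common convention and may differ in detail from the paper's MPC (the data values are invented):

```python
import numpy as np

def model_performance(obs, mod, u_obs):
    """Basic skill statistics plus an uncertainty-normalized criterion:
    RMSE divided by twice the RMS observation uncertainty; values <= 1
    are read as 'good enough' under this convention."""
    bias = np.mean(mod - obs)
    rmse = np.sqrt(np.mean((mod - obs) ** 2))
    r = np.corrcoef(obs, mod)[0, 1]
    criterion = rmse / (2.0 * np.sqrt(np.mean(u_obs ** 2)))
    return bias, rmse, r, criterion

# Hypothetical daily PM10 concentrations (ug/m3) and 25% obs uncertainty:
obs = np.array([20.0, 35.0, 50.0, 28.0, 42.0])
mod = np.array([24.0, 31.0, 55.0, 25.0, 40.0])
u = 0.25 * obs
bias, rmse, r, mqi = model_performance(obs, mod, u)
```

The point of the normalization is that a model is not penalized for disagreeing with observations by less than the observations' own margin of error.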

  9. Evaluation Model of Life Loss Due to Dam Failure

    NASA Astrophysics Data System (ADS)

    Huang, Dongjing

    2016-04-01

Dam failure poses a serious threat to human life, yet there is still a lack of systematic research in China on the life loss it causes. From the perspective of protecting human life, an evaluation model for life loss caused by dam failure is put forward. The model is built in three progressive steps. First, twenty dam-failure cases in China are chosen as the basic data, considering the geographical location and construction time of the dams as well as the various conditions of failure. Then twelve impact factors of life loss are selected: severity of the flood, population at risk, understanding of dam failure, warning time, evacuation condition, number of damaged buildings, water temperature, reservoir storage, dam height, dam type, break time and distance from the flooded area to the dam. Principal component analysis yields four principal components: a flood character component, a warning system component, a human character component and a space-time impact component. Finally, combining multivariate nonlinear regression with ten-fold cross-validation, the evaluation model for life loss is established. The result of the proposed model is closer to the true value, and fits better, than the results of the RESCDAM method and the M. Peng method. The proposed model can not only be applied to evaluate life loss and its rate under various dam-failure conditions in China, but also provides a reliable cause analysis and prediction approach for reducing the risk of life loss.
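A hedged sketch of the principal component step described above, reducing a standardized cases-by-factors matrix via SVD (the data here are random placeholders, not the study's twenty dam-failure cases):

```python
import numpy as np

def pca(X, n_components):
    """PCA via SVD of the column-standardized data matrix; returns component
    scores and the fraction of total variance each retained component explains."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    scores = Z @ Vt[:n_components].T
    explained = S[:n_components] ** 2 / np.sum(S ** 2)
    return scores, explained

# Placeholder: 20 cases x 12 impact factors, reduced to 4 components:
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 12))
scores, explained = pca(X, 4)
```

The retained component scores, rather than the twelve raw factors, would then feed the nonlinear regression, which reduces collinearity among correlated factors such as warning time and evacuation condition.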

  10. Using water isotopes in the evaluation of land surface models

    NASA Astrophysics Data System (ADS)

    Guglielmo, Francesca; Risi, Camille; Ottlé, Catherine; Bastrikov, Vladislav; Valdayskikh, Victor; Cattani, Olivier; Jouzel, Jean; Gribanov, Konstantin; Nekrasova, Olga; Zacharov, Vyacheslav; Ogée, Jérôme; Wingate, Lisa; Raz-Yaseef, Naama

    2013-04-01

Several studies show that uncertainties in the representation of land surface processes contribute significantly to the spread in projections for the hydrological cycle. Improvements in the evaluation of land surface models would therefore translate into more reliable predictions of future changes. The isotopic composition of water is affected by phase transitions and, for this reason, is a good tracer for the hydrological cycle. Particularly relevant for the assessment of land surface processes is the fact that bare soil evaporation and transpiration bear different isotopic signatures. Water isotope measurements could thus be employed in the evaluation of the land surface hydrological budget. With this objective, isotopes have been implemented in the most recent version of the land surface model ORCHIDEE. This model has undergone considerable development in the past few years. In particular, a newly discretised (11 layers) hydrology aims at a more realistic representation of the soil water budget. In addition, biogeophysical processes, such as the dynamics of permafrost and its interaction with snow and vegetation, have been included. This model version will allow us to better resolve vertical profiles of soil water isotopic composition and to more realistically simulate the land surface hydrological and isotopic budget in a broader range of climate zones. Model results have been evaluated against temperature profiles and isotope measurements in soil and stem water at sites located in semi-arid (Yatir), temperate (Le Bray) and boreal (Labytnangi) regions. Seasonal cycles are reasonably well reproduced. Furthermore, a sensitivity analysis investigates to what extent water isotope measurements in soil water can help constrain the representation of land surface processes, with a focus on the partitioning between evaporation and transpiration. In turn, improvements in the description of this partitioning may help reduce the uncertainties in the land

  11. Evaluation of a distributed catchment scale water balance model

    NASA Technical Reports Server (NTRS)

    Troch, Peter A.; Mancini, Marco; Paniconi, Claudio; Wood, Eric F.

    1993-01-01

The validity of some of the simplifying assumptions in a conceptual water balance model is investigated by comparing simulation results from the conceptual model with simulation results from a three-dimensional physically based numerical model and with field observations. We examine, in particular, assumptions and simplifications related to water table dynamics, vertical soil moisture and pressure head distributions, and subsurface flow contributions to stream discharge. The conceptual model relies on a topographic index to predict saturation excess runoff and on Philip's infiltration equation to predict infiltration excess runoff. The numerical model solves the three-dimensional Richards equation describing flow in variably saturated porous media, and handles seepage face boundaries, infiltration excess and saturation excess runoff production, and soil driven and atmosphere driven surface fluxes. The study catchments (a 7.2 sq km catchment and a 0.64 sq km subcatchment) are located in the North Appalachian ridge and valley region of eastern Pennsylvania. Hydrologic data collected during the MACHYDRO 90 field experiment are used to calibrate the models and to evaluate simulation results. It is found that water table dynamics as predicted by the conceptual model are close to the observations in a shallow water well; a linear relationship between a topographic index and the local water table depth is therefore a reasonable assumption for catchment-scale modeling. However, the hydraulic equilibrium assumption is not valid for the upper 100 cm layer of the unsaturated zone, and a conceptual model that incorporates a root zone is suggested. Furthermore, theoretical subsurface flow characteristics from the conceptual model are found to be different from field observations, numerical simulation results, and theoretical baseflow recession characteristics based on Boussinesq's groundwater equation.
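For reference, the topographic index such conceptual models conventionally use is ln(a/tan β), the upslope contributing area per unit contour length over the tangent of the local slope; a small sketch (units and grid handling are simplified assumptions):

```python
import numpy as np

def topographic_index(upslope_area, slope_deg):
    """TOPMODEL-style wetness index ln(a / tan(beta)) per grid cell:
    large contributing area and gentle slope give a high index,
    i.e. a location prone to saturation."""
    return np.log(upslope_area / np.tan(np.deg2rad(slope_deg)))

# Three illustrative cells: small/steep, medium/steep, large/gentle
ti = topographic_index(np.array([10.0, 100.0, 1000.0]),
                       np.array([45.0, 45.0, 5.0]))
```

Under the conceptual model's assumption, cells sharing the same index value respond identically, so the local water table depth can be mapped linearly from this single quantity.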

  12. Comparison of multiplicative heterogeneous variance adjustment models for genetic evaluations.

    PubMed

    Márkus, Sz; Mäntysaari, E A; Strandén, I; Eriksson, J-Å; Lidauer, M H

    2014-06-01

    Two heterogeneous variance adjustment methods and two variance models were compared in a simulation study. The method used for heterogeneous variance adjustment in the Nordic test-day model, which is a multiplicative method based on Meuwissen (J. Dairy Sci., 79, 1996, 310), was compared with a restricted multiplicative method where the fixed effects were not scaled. Both methods were tested with two different variance models, one with a herd-year and the other with a herd-year-month random effect. The simulation study was built on two field data sets from Swedish Red dairy cattle herds. For both data sets, 200 herds with test-day observations over a 12-year period were sampled. For one data set, herds were sampled randomly, while for the other, each herd was required to have at least 10 first-calving cows per year. The simulations supported the applicability of both methods and models, but the multiplicative mixed model was more sensitive in the case of small strata sizes. Estimation of variance components for the variance models resulted in different parameter estimates, depending on the applied heterogeneous variance adjustment method and variance model combination. Our analyses showed that the assumption of a first-order autoregressive correlation structure between random-effect levels is reasonable when within-herd heterogeneity is modelled by year classes, but less appropriate for within-herd heterogeneity by month classes. Of the studied alternatives, the multiplicative method and a variance model with a random herd-year effect were found most suitable for the Nordic test-day model for dairy cattle evaluation.

  13. Evaluating Small Sample Approaches for Model Test Statistics in Structural Equation Modeling

    ERIC Educational Resources Information Center

    Nevitt, Jonathan; Hancock, Gregory R.

    2004-01-01

    Through Monte Carlo simulation, small sample methods for evaluating overall data-model fit in structural equation modeling were explored. Type I error behavior and power were examined using maximum likelihood (ML), Satorra-Bentler scaled and adjusted (SB; Satorra & Bentler, 1988, 1994), residual-based (Browne, 1984), and asymptotically…

  14. USE OF PHARMACOKINETIC MODELING TO DESIGN STUDIES FOR PATHWAY-SPECIFIC EXPOSURE MODEL EVALUATION

    EPA Science Inventory

    Validating an exposure pathway model is difficult because the biomarker, which is often used to evaluate the model prediction, is an integrated measure for exposures from all the exposure routes/pathways. The purpose of this paper is to demonstrate a method to use pharmacokeneti...

  15. Evaluation of semiempirical atmospheric density models for orbit determination applications

    NASA Technical Reports Server (NTRS)

    Cox, C. M.; Feiertag, R. J.; Oza, D. H.; Doll, C. E.

    1994-01-01

    This paper presents the results of an investigation of the orbit determination performance of the Jacchia-Roberts (JR), mass spectrometer incoherent scatter 1986 (MSIS-86), and drag temperature model (DTM) atmospheric density models. Evaluation of the models was performed to assess the modeling of the total atmospheric density. This study was made generic by using six spacecraft and selecting time periods of study representative of all portions of the 11-year cycle. Performance of the models was measured for multiple spacecraft, representing a selection of orbit geometries from near-equatorial to polar inclinations and altitudes from 400 kilometers to 900 kilometers. The orbit geometries represent typical low earth-orbiting spacecraft supported by the Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD). The best available modeling and orbit determination techniques using the Goddard Trajectory Determination System (GTDS) were employed to minimize the effects of modeling errors. The latest geopotential model available during the analysis, the Goddard earth model-T3 (GEM-T3), was employed to minimize geopotential model error effects on the drag estimation. Improved-accuracy techniques identified for TOPEX/Poseidon orbit determination analysis were used to improve the Tracking and Data Relay Satellite System (TDRSS)-based orbit determination used for most of the spacecraft chosen for this analysis. This paper shows that during periods of relatively quiet solar flux and geomagnetic activity near the solar minimum, the choice of atmospheric density model used for orbit determination is relatively inconsequential. During typical solar flux conditions near the solar maximum, the differences between the JR, DTM, and MSIS-86 models begin to become apparent. Time periods of extreme solar activity, those in which the daily and 81-day mean solar flux are high and change rapidly, result in significant differences between the models. During periods of high

  16. A model for evaluating physico-chemical substance properties required by consequence analysis models.

    PubMed

    Nikmo, Juha; Kukkonen, Jaakko; Riikonen, Kari

    2002-04-26

    Modeling systems for analyzing the consequences of chemical emergencies require as input values a number of physico-chemical substance properties, commonly as a function of temperature at atmospheric pressure. This paper presents a mathematical model "CHEMIC", which can be used for evaluating such substance properties, assuming that six basic constant quantities are available (molecular weight, freezing or melting point, normal boiling point, critical temperature, critical pressure and critical volume). The model has been designed to yield reasonably accurate numerical predictions, while at the same time keeping the amount of input data to a minimum. The model is based on molecular theory or thermodynamics, together with empirical corrections. Mostly, model equations are based on the so-called law of corresponding states. The model evaluates substance properties as a function of temperature at atmospheric pressure. These include seven properties commonly required by consequence analysis and heavy gas dispersion modeling systems: vapor pressure, vapor and liquid densities, heat of vaporization, vapor and liquid viscosities and binary diffusion coefficient. The model predictions for vapor pressure, vapor and liquid densities and heat of vaporization have been evaluated by using the Clausius-Clapeyron equation. We have also compared the predictions of the CHEMIC model with those of the DATABANK database (developed by AEA Technology, UK), which includes detailed semi-empirical correlations. The computer program CHEMIC could be easily introduced into consequence analysis modeling systems in order to extend their performance to address a wider selection of substances.
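
    The Clausius-Clapeyron relation used above to evaluate the vapor-pressure predictions can be illustrated with a minimal sketch; the anchoring at the normal boiling point and the water constants in the example are assumptions for illustration, not CHEMIC's actual correlations:

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def vapor_pressure(T, T_boil, dH_vap):
    """Clausius-Clapeyron estimate of vapor pressure (Pa) at temperature
    T (K), anchored at the normal boiling point T_boil (101325 Pa) and
    assuming a temperature-independent heat of vaporization dH_vap
    (J/mol)."""
    return 101325.0 * math.exp(-dH_vap / R * (1.0 / T - 1.0 / T_boil))

# Water at 25 degC (T_boil = 373.15 K, dH_vap ~ 40.66 kJ/mol): the
# estimate is of the right order (a few kPa).
p25 = vapor_pressure(298.15, 373.15, 40660.0)
```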

  17. Evaluation of the Current State of Integrated Water Quality Modelling

    NASA Astrophysics Data System (ADS)

    Arhonditsis, G. B.; Wellen, C. C.; Ecological Modelling Laboratory

    2010-12-01

    Environmental policy and management implementation require robust methods for assessing the contribution of various point and non-point pollution sources to water quality problems as well as methods for estimating the expected and achieved compliance with the water quality goals. Water quality models have been widely used for creating the scientific basis for management decisions by providing a predictive link between restoration actions and ecosystem response. Modelling water quality and nutrient transport is challenging due to a number of constraints associated with the input data and existing knowledge gaps related to the mathematical description of landscape and in-stream biogeochemical processes. While enormous effort has been invested to make watershed models process-based and spatially-distributed, there has not been a comprehensive meta-analysis of model credibility in the watershed modelling literature. In this study, we evaluate the current state of integrated water quality modeling across the range of temporal and spatial scales typically utilized. We address several common modeling questions by providing a quantitative assessment of model performance and by assessing how model performance depends on model development. The data compiled represent a heterogeneous group of modeling studies, especially with respect to complexity, spatial and temporal scales and model development objectives. Beginning from 1992, the year when Beven and Binley published their seminal paper on uncertainty analysis in hydrological modelling, and ending in 2009, we selected over 150 papers fitting a number of criteria. These criteria involved publications that: (i) employed distributed or semi-distributed modelling approaches; (ii) provided predictions on flow and nutrient concentration state variables; and (iii) reported fit to measured data. Model performance was quantified with the Nash-Sutcliffe Efficiency, the relative error, and the coefficient of determination. Further, our
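
    The three performance metrics named above can be computed as follows (a minimal, plain-Python sketch):

```python
def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 - SSE / SS_tot. A value of 1 is a
    perfect fit; values <= 0 mean the model predicts no better than the
    mean of the observations."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / ss_tot

def relative_error(obs, sim):
    """Relative error of the simulated total against the observed total."""
    return (sum(sim) - sum(obs)) / sum(obs)

def r_squared(obs, sim):
    """Coefficient of determination as the squared Pearson correlation."""
    n = len(obs)
    mo, ms = sum(obs) / n, sum(sim) / n
    cov = sum((o - mo) * (s - ms) for o, s in zip(obs, sim))
    vo = sum((o - mo) ** 2 for o in obs)
    vs = sum((s - ms) ** 2 for s in sim)
    return cov * cov / (vo * vs)
```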

  18. Effects of question formats on causal judgments and model evaluation

    PubMed Central

    Smithson, Michael

    2015-01-01

    Evaluation of causal reasoning models depends on how well the subjects’ causal beliefs are assessed. Elicitation of causal beliefs is determined by the experimental questions put to subjects. We examined the impact of question formats commonly used in causal reasoning research on participants’ responses. The results of our experiment (Study 1) demonstrate that both the mean and homogeneity of the responses can be substantially influenced by the type of question (structure induction versus strength estimation versus prediction). Study 2A demonstrates that subjects’ responses to a question requiring them to predict the effect of a candidate cause can be significantly lower and more heterogeneous than their responses to a question asking them to diagnose a cause when given an effect. Study 2B suggests that diagnostic reasoning can strongly benefit from cues relating to temporal precedence of the cause in the question. Finally, we evaluated 16 variations of recent computational models and found that model fit was substantially influenced by the type of questions. Our results show that future research in causal reasoning should place a high priority on disentangling the effects of question formats from the effects of experimental manipulations, because that will enable comparisons between models of causal reasoning uncontaminated by method artifact. PMID:25954225

  19. Effects of question formats on causal judgments and model evaluation.

    PubMed

    Shou, Yiyun; Smithson, Michael

    2015-01-01

    Evaluation of causal reasoning models depends on how well the subjects' causal beliefs are assessed. Elicitation of causal beliefs is determined by the experimental questions put to subjects. We examined the impact of question formats commonly used in causal reasoning research on participants' responses. The results of our experiment (Study 1) demonstrate that both the mean and homogeneity of the responses can be substantially influenced by the type of question (structure induction versus strength estimation versus prediction). Study 2A demonstrates that subjects' responses to a question requiring them to predict the effect of a candidate cause can be significantly lower and more heterogeneous than their responses to a question asking them to diagnose a cause when given an effect. Study 2B suggests that diagnostic reasoning can strongly benefit from cues relating to temporal precedence of the cause in the question. Finally, we evaluated 16 variations of recent computational models and found that model fit was substantially influenced by the type of questions. Our results show that future research in causal reasoning should place a high priority on disentangling the effects of question formats from the effects of experimental manipulations, because that will enable comparisons between models of causal reasoning uncontaminated by method artifact.

  20. Evaluating Uncertainty Estimates Produced by Dose Assessment Models

    NASA Astrophysics Data System (ADS)

    Meyer, P. D.; Orr, S.

    2001-05-01

    Assessments of the dose and/or risk from contaminated sites and waste disposal facilities may rely on the use of relatively simplified models of subsurface flow and transport. Common simplifications include steady-state, one-dimensional flow; homogeneous and isotropic transport medium properties; and unit hydraulic gradient in the unsaturated zone. Because of their relative computational speed, such simplified models are particularly attractive when the impact of uncertainty in flow and transport needs to be evaluated. Simplifications in the representation of flow and transport have the potential to result in an unrepresentative estimate of uncertainty in dose/risk. `Unrepresentative' is used here to describe an estimate of uncertainty that significantly misrepresents the actual uncertainty. Such misrepresentation may have important consequences for decisions based on the dose/risk assessments. The significance of this concern is evaluated here by comparing test case results from uncertainty assessments conducted using a simplified modeling approach and a more complex/realistic modeling approach. The test case follows the U.S. Nuclear Regulatory Commission's framework for site decommissioning analyses. Subsurface properties are derived from data obtained in the Las Cruces Trench experiments with source term data reflecting an actual decommissioning case. Comparisons between the two approaches include the probability distribution of peak dose, the relative importance of parameters, and the value of site-specific data in reducing uncertainty.
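
    The kind of uncertainty propagation described above can be sketched with a toy Monte Carlo over a simplified travel-time/decay dose model; every parameter value below is illustrative and unrelated to the test case:

```python
import math
import random

def peak_dose_samples(n, seed=1):
    """Toy Monte Carlo: propagate uncertainty in hydraulic conductivity
    through a simplified 1-D advective travel-time model with
    radioactive decay in transit. All parameter values are
    illustrative."""
    rng = random.Random(seed)
    length, gradient, porosity = 10.0, 0.01, 0.3   # m, -, -
    half_life, dose0 = 30.0, 1.0                   # yr, arbitrary dose units
    doses = []
    for _ in range(n):
        K = math.exp(rng.gauss(math.log(100.0), 0.5))     # m/yr, lognormal
        travel_time = length * porosity / (K * gradient)  # yr
        doses.append(dose0 * 0.5 ** (travel_time / half_life))
    return doses

# The spread of the sampled doses is the uncertainty estimate whose
# representativeness is at issue when the flow model is simplified.
samples = peak_dose_samples(1000)
```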

  1. Evaluation of weather-based rice yield models in India.

    PubMed

    Sudharsan, D; Adinarayana, J; Reddy, D Raji; Sreenivas, G; Ninomiya, S; Hirafuji, M; Kiura, T; Tanaka, K; Desai, U B; Merchant, S N

    2013-01-01

    The objective of this study was to compare two different rice simulation models--standalone (Decision Support System for Agrotechnology Transfer [DSSAT]) and web based (SImulation Model for RIce-Weather relations [SIMRIW])--with agrometeorological data and agronomic parameters for estimation of rice crop production in southern semi-arid tropics of India. Studies were carried out on the BPT5204 rice variety to evaluate two crop simulation models. Long-term experiments were conducted in a research farm of Acharya N G Ranga Agricultural University (ANGRAU), Hyderabad, India. Initially, the results were obtained using 4 years (1994-1997) of data with weather parameters from a local weather station to evaluate DSSAT-simulated results against observed values. Linear regression models used for the purpose showed a close relationship between DSSAT and observed yield. Subsequently, yield comparisons were also carried out with SIMRIW and DSSAT, and validated with actual observed values. Because the correlation coefficients of the SIMRIW simulations were within acceptable limits, further rice experiments in monsoon (Kharif) and post-monsoon (Rabi) agricultural seasons (2009, 2010 and 2011) were carried out with a location-specific distributed sensor network system. These proximal systems help to simulate dry weight, leaf area index and potential yield by the Java-based SIMRIW on a daily/weekly/monthly/seasonal basis. These dynamic parameters are useful to the farming community for necessary decision making in a ubiquitous manner. However, SIMRIW requires fine tuning for better results/decision making.
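
    A linear regression of simulated against observed yields, as used above, can be sketched as follows (ordinary least squares; a generic helper, not the authors' code):

```python
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept. For a
    simulated-versus-observed comparison, a slope near 1 and an
    intercept near 0 indicate close agreement."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx
```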

  2. Looking beyond general metrics for model evaluation - lessons from an international model intercomparison study

    NASA Astrophysics Data System (ADS)

    Bouaziz, Laurène; de Boer-Euser, Tanja; Brauer, Claudia; Drogue, Gilles; Fenicia, Fabrizio; Grelier, Benjamin; de Niel, Jan; Nossent, Jiri; Pereira, Fernando; Savenije, Hubert; Thirel, Guillaume; Willems, Patrick

    2016-04-01

    International collaboration between institutes and universities is a promising way to reach consensus on hydrological model development. Education, experience and expert knowledge of the hydrological community have resulted in the development of a great variety of model concepts, calibration methods and analysis techniques. Although comparison studies are very valuable for international cooperation, they often do not lead to clear new insights regarding the relevance of the modelled processes. We hypothesise that this is partly caused by model complexity and the comparison methods used, which focus on a good overall performance instead of on specific events. We propose an approach that focuses on the evaluation of specific events. Eight international research groups calibrated their model for the Ourthe catchment in Belgium (1607 km2) and carried out a validation in time for the Ourthe (i.e. on two different periods, one of them in blind mode for the modellers) and a validation in space for nested and neighbouring catchments of the Meuse in a completely blind mode. For each model, the same protocol was followed and an ensemble of best performing parameter sets was selected. Signatures were first used to assess model performances in the different catchments during validation. Comparison of the models was then followed by evaluation of selected events, which include: low flows, high flows and the transition from low to high flows. While the models show rather similar performances based on general metrics (i.e. Nash-Sutcliffe Efficiency), clear differences can be observed for specific events. While most models are able to simulate high flows well, large differences are observed during low flows and in the ability to capture the first peaks after drier months. The transferability of model parameters to neighbouring and nested catchments is assessed as an additional measure in the model evaluation. 
This suggested approach helps to select, among competing

  3. Performance Evaluation of 3d Modeling Software for Uav Photogrammetry

    NASA Astrophysics Data System (ADS)

    Yanagi, H.; Chikatsu, H.

    2016-06-01

    UAV (Unmanned Aerial Vehicle) photogrammetry, which combines UAV and freely available internet-based 3D modeling software, is widely used as a low-cost and user-friendly photogrammetry technique in fields such as remote sensing and geosciences. In UAV photogrammetry, only the platform used in conventional aerial photogrammetry is changed. Consequently, 3D modeling software contributes significantly to its expansion. However, the algorithms of such 3D modelling software are black boxes; as a result, only a few studies have been able to evaluate their accuracy using 3D coordinate check points. With this motivation, Smart3DCapture and Pix4Dmapper were downloaded from the Internet and the commercial software PhotoScan was also employed; investigations were performed in this paper using check points and images obtained from UAV.

  4. Evaluations of Particle Scattering Models for Falling Snow

    NASA Astrophysics Data System (ADS)

    Duffy, G.; Nesbitt, S. W.; McFarquhar, G. M.

    2014-12-01

    Several millimeter wavelength scattering models have been developed over the past decade that could potentially be more accurate than the standard "soft sphere" model, a model which is used in GPM algorithms to retrieve snowfall precipitation rates from dual frequency radar measurements. Results from the GCPEx mission, a GPM Ground Validation experiment that flew HVPS and CIP particle imaging probes through snowstorms within fields of Ku/Ka band reflectivity, provide the data necessary to evaluate simulations of non-Rayleigh reflectivity against measured values. This research uses T-Matrix spheroid, RGA spheroid, and Mie Sphere simulations, as well as variations on axial ratio and diameter-density relationships, to quantify the merits and errors of different forward simulation strategies.

  5. The algorithmic anatomy of model-based evaluation

    PubMed Central

    Daw, Nathaniel D.; Dayan, Peter

    2014-01-01

    Despite many debates in the first half of the twentieth century, it is now largely a truism that humans and other animals build models of their environments and use them for prediction and control. However, model-based (MB) reasoning presents severe computational challenges. Alternative, computationally simpler, model-free (MF) schemes have been suggested in the reinforcement learning literature, and have afforded influential accounts of behavioural and neural data. Here, we study the realization of MB calculations, and the ways that this might be woven together with MF values and evaluation methods. There are as yet mostly only hints in the literature as to the resulting tapestry, so we offer more preview than review. PMID:25267820

  6. The fence experiment — a first evaluation of shelter models

    NASA Astrophysics Data System (ADS)

    Peña, Alfredo; Bechmann, Andreas; Conti, Davide; Angelou, Nikolas; Mann, Jakob

    2016-09-01

    We present a preliminary evaluation of shelter models of different degrees of complexity using full-scale lidar measurements of the shelter on a vertical plane behind and orthogonal to a fence. Model results accounting for the distribution of the relative wind direction within the observed direction interval are in better agreement with the observations than those that correspond to the simulation at the center of the direction interval, particularly in the far-wake region, for six vertical levels up to two fence heights. Generally, the CFD results are in better agreement with the observations than those from two engineering-like obstacle models but the latter two follow well the behavior of the observations in the far-wake region.

  7. Performance Tuning and Evaluation of a Parallel Community Climate Model

    SciTech Connect

    Drake, J.B.; Worley, P.H.; Hammond, S.

    1999-11-13

    The Parallel Community Climate Model (PCCM) is a message-passing parallelization of version 2.1 of the Community Climate Model (CCM) developed by researchers at Argonne and Oak Ridge National Laboratories and at the National Center for Atmospheric Research in the early to mid 1990s. In preparation for use in the Department of Energy's Parallel Climate Model (PCM), PCCM has recently been updated with new physics routines from version 3.2 of the CCM, improvements to the parallel implementation, and ports to the SGI/Cray Research T3E and Origin 2000. We describe our experience in porting and tuning PCCM on these new platforms, evaluating the performance of different parallel algorithm options and comparing performance between the T3E and Origin 2000.

  8. Animal Models for Evaluation of Bone Implants and Devices: Comparative Bone Structure and Common Model Uses.

    PubMed

    Wancket, L M

    2015-09-01

    Bone implants and devices are a rapidly growing field within biomedical research, and implants have the potential to significantly improve human and animal health. Animal models play a key role in initial product development and are important components of nonclinical data included in applications for regulatory approval. Pathologists are increasingly being asked to evaluate these models at the initial developmental and nonclinical biocompatibility testing stages, and it is important to understand the relative merits and deficiencies of various species when evaluating a new material or device. This article summarizes characteristics of the most commonly used species in studies of bone implant materials, including detailed information about the relevance of a particular model to human bone physiology and pathology. Species reviewed include mice, rats, rabbits, guinea pigs, dogs, sheep, goats, and nonhuman primates. Ultimately, a comprehensive understanding of the benefits and limitations of different model species will aid in rigorously evaluating a novel bone implant material or device.

  9. The Third Phase of AQMEII: Evaluation Strategy and Multi-Model Performance Analysis

    EPA Science Inventory

    AQMEII (Air Quality Model Evaluation International Initiative) is an extraordinary effort promoting policy-relevant research on regional air quality model evaluation across the European and North American atmospheric modelling communities, providing the ideal platform for advanci...

  10. Model evaluation using a community benchmarking system for land surface models

    NASA Astrophysics Data System (ADS)

    Mu, M.; Hoffman, F. M.; Lawrence, D. M.; Riley, W. J.; Keppel-Aleks, G.; Kluzek, E. B.; Koven, C. D.; Randerson, J. T.

    2014-12-01

    Evaluation of atmosphere, ocean, sea ice, and land surface models is an important step in identifying deficiencies in Earth system models and developing improved estimates of future change. For the land surface and carbon cycle, the design of an open-source system has been an important objective of the International Land Model Benchmarking (ILAMB) project. Here we evaluated CMIP5 and CLM models using a benchmarking system that enables users to specify models, data sets, and scoring systems so that results can be tailored to specific model intercomparison projects. Our scoring system used information from four different aspects of global datasets, including climatological mean spatial patterns, seasonal cycle dynamics, interannual variability, and long-term trends. Variable-to-variable comparisons enable investigation of the mechanistic underpinnings of model behavior, and allow for some control of biases in model drivers. Graphics modules allow users to evaluate model performance at local, regional, and global scales. Use of modular structures makes it relatively easy for users to add new variables, diagnostic metrics, benchmarking datasets, or model simulations. Diagnostic results are automatically organized into HTML files, so users can conveniently share results with colleagues. We used this system to evaluate atmospheric carbon dioxide, burned area, global biomass and soil carbon stocks, net ecosystem exchange, gross primary production, ecosystem respiration, terrestrial water storage, evapotranspiration, and surface radiation from CMIP5 historical and ESM historical simulations. We found that the multi-model mean often performed better than many of the individual models for most variables. We plan to publicly release a stable version of the software during fall of 2014 that has land surface, carbon cycle, hydrology, radiation and energy cycle components.
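
    A scoring system of the kind described can be sketched as follows; the exponential skill score and the equal weighting are illustrative assumptions, not ILAMB's actual metrics:

```python
import math

def aspect_score(sim, obs):
    """Illustrative skill score in (0, 1]: exp of minus the RMSE
    normalized by the observed standard deviation. A score of 1 means a
    perfect match; larger errors decay the score toward 0."""
    n = len(obs)
    mean_obs = sum(obs) / n
    std_obs = math.sqrt(sum((o - mean_obs) ** 2 for o in obs) / n)
    rmse = math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs)) / n)
    return math.exp(-rmse / std_obs)

def overall_score(per_aspect_scores):
    """Equal-weight combination of per-aspect scores (e.g. mean spatial
    pattern, seasonal cycle, interannual variability, trend)."""
    return sum(per_aspect_scores) / len(per_aspect_scores)
```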

  11. Evaluation of a laboratory model of human head impact biomechanics

    PubMed Central

    Hernandez, Fidel; Shull, Peter B.; Camarillo, David B.

    2015-01-01

    This work describes methodology for evaluating laboratory models of head impact biomechanics. Using this methodology, we investigated: how closely does twin-wire drop testing model head rotation in American football impacts? Head rotation is believed to cause mild traumatic brain injury (mTBI) but helmet safety standards only model head translations believed to cause severe TBI. It is unknown whether laboratory head impact models in safety standards, like twin-wire drop testing, reproduce six degree-of-freedom (6DOF) head impact biomechanics that may cause mTBI. We compared 6DOF measurements of 421 American football head impacts to twin-wire drop tests at impact sites and velocities weighted to represent typical field exposure. The highest rotational velocities produced by drop testing were the 74th percentile of non-injury field impacts. For a given translational acceleration level, drop testing underestimated field rotational acceleration by 46% and rotational velocity by 72%. Primary rotational acceleration frequencies were much larger in drop tests (~100 Hz) than field impacts (~10 Hz). Drop testing was physically unable to produce acceleration directions common in field impacts. Initial conditions of a single field impact were highly resolved in stereo high-speed video and reconstructed in a drop test. Reconstruction results reflected aggregate trends of lower amplitude rotational velocity and higher frequency rotational acceleration in drop testing, apparently due to twin-wire constraints and the absence of a neck. These results suggest twin-wire drop testing is limited in modeling head rotation during impact, and motivate continued evaluation of head impact models to ensure helmets are tested under conditions that may cause mTBI. PMID:26117075

  12. Evaluation of the WIND System atmospheric models: An analytic approach

    SciTech Connect

    Fast, J.D.

    1991-11-25

    An analytic approach was used in this study to test the logic, coding, and the theoretical limits of the WIND System atmospheric models for the Savannah River Plant. In this method, dose or concentration estimates predicted by the models were compared to the analytic solutions to evaluate their performance. The results from AREA EVACUATION and PUFF/PLUME were very nearly identical to the analytic solutions they are based on and the evaluation procedure demonstrated that these models were able to reproduce the theoretical characteristics of a puff or a plume. The dose or concentration predicted by PUFF/PLUME was always within 1% of the analytic solution. Differences between the dose predicted by 2DPUF and its analytic solution were substantially greater than those associated with PUFF/PLUME, but were usually smaller than 6%. This behavior was expected because PUFF/PLUME solves a form of the analytic solution for a single puff, and 2DPUF performs an integration over a period of time for several puffs to obtain the dose. Relatively large differences between the dose predicted by 2DPUF and its analytic solution were found to occur close to the source under stable atmospheric conditions. WIND System users should be aware of these situations in which the assumptions of the System atmospheric models may be violated so that dose predictions can be interpreted correctly. The WIND System atmospheric models are similar to many other dispersion codes used by the EPA, NRC, and DOE. If the quality of the source term and meteorological data is high, relatively accurate and timely forecasts for emergency response situations can be made by the WIND System atmospheric models.
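
    The analytic solutions that such plume models are compared against have this general textbook form for a continuous Gaussian plume with ground reflection (a sketch, not the WIND System code):

```python
import math

def gaussian_plume(Q, u, y, z, sigma_y, sigma_z, H):
    """Steady-state Gaussian plume concentration with ground reflection:
    emission rate Q (g/s), wind speed u (m/s), crosswind offset y (m),
    receptor height z (m), dispersion coefficients sigma_y, sigma_z (m)
    evaluated at the receptor's downwind distance, and effective release
    height H (m)."""
    lateral = math.exp(-y**2 / (2.0 * sigma_y**2))
    vertical = (math.exp(-(z - H)**2 / (2.0 * sigma_z**2))
                + math.exp(-(z + H)**2 / (2.0 * sigma_z**2)))
    return Q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Ground-level centerline concentration exceeds an off-axis value.
c_center = gaussian_plume(1.0, 5.0, 0.0, 0.0, 20.0, 10.0, 50.0)
c_off = gaussian_plume(1.0, 5.0, 50.0, 0.0, 20.0, 10.0, 50.0)
```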

  13. Evaluation of a laboratory model of human head impact biomechanics.

    PubMed

    Hernandez, Fidel; Shull, Peter B; Camarillo, David B

    2015-09-18

    This work describes methodology for evaluating laboratory models of head impact biomechanics. Using this methodology, we investigated: how closely does twin-wire drop testing model head rotation in American football impacts? Head rotation is believed to cause mild traumatic brain injury (mTBI) but helmet safety standards only model head translations believed to cause severe TBI. It is unknown whether laboratory head impact models in safety standards, like twin-wire drop testing, reproduce six degree-of-freedom (6DOF) head impact biomechanics that may cause mTBI. We compared 6DOF measurements of 421 American football head impacts to twin-wire drop tests at impact sites and velocities weighted to represent typical field exposure. The highest rotational velocities produced by drop testing were the 74th percentile of non-injury field impacts. For a given translational acceleration level, drop testing underestimated field rotational acceleration by 46% and rotational velocity by 72%. Primary rotational acceleration frequencies were much larger in drop tests (~100 Hz) than field impacts (~10 Hz). Drop testing was physically unable to produce acceleration directions common in field impacts. Initial conditions of a single field impact were highly resolved in stereo high-speed video and reconstructed in a drop test. Reconstruction results reflected aggregate trends of lower amplitude rotational velocity and higher frequency rotational acceleration in drop testing, apparently due to twin-wire constraints and the absence of a neck. These results suggest twin-wire drop testing is limited in modeling head rotation during impact, and motivate continued evaluation of head impact models to ensure helmets are tested under conditions that may cause mTBI.

  14. Statistical procedures for evaluating daily and monthly hydrologic model predictions

    USGS Publications Warehouse

    Coffey, M.E.; Workman, S.R.; Taraba, J.L.; Fogle, A.W.

    2004-01-01

    The overall study objective was to evaluate the applicability of different qualitative and quantitative methods for comparing daily and monthly SWAT computer model hydrologic streamflow predictions to observed data, and to recommend statistical methods for use in future model evaluations. Statistical methods were tested using daily streamflows and monthly equivalent runoff depths. The statistical techniques included linear regression, Nash-Sutcliffe efficiency, nonparametric tests, t-test, objective functions, autocorrelation, and cross-correlation. None of the methods specifically applied to the non-normal distribution and dependence between data points for the daily predicted and observed data. Of the tested methods, median objective functions, sign test, autocorrelation, and cross-correlation were most applicable for the daily data. The robust coefficient of determination (CD*) and robust modeling efficiency (EF*) objective functions were the preferred methods for daily model results due to the ease of comparing these values with a fixed ideal reference value of one. Predicted and observed monthly totals were more normally distributed, and there was less dependence between individual monthly totals than was observed for the corresponding predicted and observed daily values. More statistical methods were available for comparing SWAT model-predicted and observed monthly totals. The 1995 monthly SWAT model predictions and observed data had a regression R2 of 0.70, a Nash-Sutcliffe efficiency of 0.41, and the t-test failed to reject the equal data means hypothesis. The Nash-Sutcliffe coefficient and the R2 coefficient were the preferred methods for monthly results due to the ability to compare these coefficients to a set ideal value of one.
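
    The sign test recommended above for daily data can be implemented directly; this is the generic paired sign test, not the authors' code:

```python
from math import comb

def sign_test_p(obs, sim):
    """Two-sided p-value for the paired sign test (ties dropped): under
    H0 the signs of obs - sim are Bernoulli(1/2), so the count of
    positive differences is Binomial(n, 1/2)."""
    diffs = [o - s for o, s in zip(obs, sim) if o != s]
    n = len(diffs)
    k = sum(d > 0 for d in diffs)
    tail = min(k, n - k)
    p_one_sided = sum(comb(n, i) for i in range(tail + 1)) / 2.0**n
    return min(1.0, 2.0 * p_one_sided)
```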

  15. Evaluation of wave runup predictions from numerical and parametric models

    USGS Publications Warehouse

    Stockdon, Hilary F.; Thompson, David M.; Plant, Nathaniel G.; Long, Joseph W.

    2014-01-01

    Wave runup during storms is a primary driver of coastal evolution, including shoreline and dune erosion and barrier island overwash. Runup and its components, setup and swash, can be predicted from a parameterized model that was developed by comparing runup observations to offshore wave height, wave period, and local beach slope. Because observations during extreme storms are often unavailable, a numerical model is used to simulate the storm-driven runup to compare to the parameterized model and then develop an approach to improve the accuracy of the parameterization. Numerically simulated and parameterized runup were compared to observations to evaluate model accuracies. The analysis demonstrated that setup was accurately predicted by both the parameterized model and numerical simulations. Infragravity swash heights were most accurately predicted by the parameterized model. The numerical model suffered from bias and gain errors that depended on whether a one-dimensional or two-dimensional spatial domain was used. Nonetheless, all of the predictions were significantly correlated to the observations, implying that the systematic errors can be corrected. The numerical simulations did not resolve the incident-band swash motions, as expected, and the parameterized model performed best at predicting incident-band swash heights. An assimilated prediction using a weighted average of the parameterized model and the numerical simulations resulted in a reduction in prediction error variance. Finally, the numerical simulations were extended to include storm conditions that have not been previously observed. These results indicated that the parameterized predictions of setup may need modification for extreme conditions; numerical simulations can be used to extend the validity of the parameterized predictions of infragravity swash; and numerical simulations systematically underpredict incident swash, which is relatively unimportant under extreme conditions.
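
    The assimilated prediction above is a weighted average of the two predictors; one standard way such an average reduces error variance is inverse-variance weighting. A minimal sketch (the variances and runup values here are assumed for illustration, not taken from the study):

```python
import numpy as np

# Assumed error variances of the two predictors (not values from the study).
var_param, var_numeric = 0.04, 0.09   # parameterized model, numerical model

# Inverse-variance weights minimize the variance of the combined estimate.
w_param = (1 / var_param) / (1 / var_param + 1 / var_numeric)
w_numeric = 1.0 - w_param

runup_param, runup_numeric = 1.8, 2.2  # illustrative runup predictions (m)
runup_combined = w_param * runup_param + w_numeric * runup_numeric

# The variance of the weighted average is below either input variance.
var_combined = 1.0 / (1 / var_param + 1 / var_numeric)
print(runup_combined, var_combined)
```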

  16. Performance evaluation of four directional emissivity analytical models with thermal SAIL model and airborne images.

    PubMed

    Ren, Huazhong; Liu, Rongyuan; Yan, Guangjian; Li, Zhao-Liang; Qin, Qiming; Liu, Qiang; Nerry, Françoise

    2015-04-01

    Land surface emissivity is a crucial parameter in monitoring land surface status. This study evaluates four directional emissivity models, including two bi-directional reflectance distribution function (BRDF) models and two gap-frequency-based models. Results showed that the kernel-driven BRDF model could represent directional emissivity well, with an error of less than 0.002, and it was consequently used to retrieve emissivity with an accuracy of about 0.012 from an airborne multi-angular thermal infrared data set. Furthermore, we updated the cavity effect factor relating to multiple scattering inside the canopy, which improved the performance of the gap-frequency-based models.

  17. [Evaluation of a face model for surgical education].

    PubMed

    Schneider, G; Voigt, S; Rettinger, G

    2011-09-01

    The complex anatomy of the human face requires a high degree of experience and skill in the surgical repair of facial soft tissue defects. Training to date has consisted of literature study and supervision during surgery, depending on the surgical spectrum of the teaching hospital. A structured curriculum that includes training of different surgical methods on a model, with a gradual increase in complexity, could considerably improve the subsequent patient-related education. In a cooperative project, the 3 di GmbH and the Department of Otolaryngology at the Friedrich-Schiller-University Jena developed a face model for surgical education that allows the training of surgical interventions in the face. The model was used during the 6th and 8th Jena Workshop for Functional and Aesthetic Surgery as well as a workshop on surgical suturing, and was tested and evaluated by the attendees. The attendees mostly rated the workability of the models and the possibility to practice on a realistic face model with artificial skin as very good and beneficial. This model allows repeatable and structured training of surgical standards, and is very helpful in preparation for operating on facial defects of a patient.

  18. Evaluation of an Urban Canopy Parameterization in a Mesoscale Model

    SciTech Connect

    Chin, H S; Leach, M J; Sugiyama, G A; Leone, Jr., J M; Walker, H; Nasstrom, J; Brown, M J

    2004-03-18

    A modified urban canopy parameterization (UCP) is developed and evaluated in a three-dimensional mesoscale model to assess the urban impact on surface and lower atmospheric properties. This parameterization accounts for the effects of building drag, turbulent production, radiation balance, anthropogenic heating, and building rooftop heating/cooling. USGS land-use data are also utilized to derive urban infrastructure and urban surface properties needed for driving the UCP. An intensive observational period with clear-sky, strong ambient wind and drainage flow, and the absence of land-lake breeze over the Salt Lake Valley, occurring on 25-26 October 2000, is selected for this study. A series of sensitivity experiments are performed to gain understanding of the urban impact in the mesoscale model. Results indicate that within the selected urban environment, urban surface characteristics and anthropogenic heating play little role in the formation of the modeled nocturnal urban boundary layer. The rooftop effect appears to be the main contributor to this urban boundary layer. Sensitivity experiments also show that for this weak urban heat island case, the model horizontal grid resolution is important in simulating the elevated inversion layer. The root mean square errors of the predicted wind and temperature with respect to surface station measurements exhibit substantially larger discrepancies at the urban locations than at their rural counterparts. However, the close agreement of modeled tracer concentrations with observations lends reasonable support to the modeled urban impacts on wind-direction shift and wind drag.

  19. Obs4MIPS: Satellite Observations for CMIP6 Model Evaluation

    NASA Astrophysics Data System (ADS)

    Ferraro, R.; Waliser, D. E.; Gleckler, P. J.; Taylor, K. E.; Eyring, V.

    2014-12-01

    This poster will review the current status of the obs4MIPs project, whose purpose is to provide a limited collection of well-established and documented datasets for comparison with Earth system models (https://www.earthsystemcog.org/projects/obs4mips/). These datasets have been reformatted to correspond with the CMIP5 model output requirements, and include technical documentation specifically targeted for their use in model output evaluation. There are currently over 50 datasets containing observations that directly correspond to CMIP5 model output variables. We will review the results and recommendations from the recent obs4MIPs - CMIP6 planning meeting, which gathered over 50 experts in satellite observations and CMIP modeling, to assess the needed observations to support the next round of CMIP experiments. The recommendations address key issues regarding the inclusion of higher frequency datasets (both observations and model output), the need for error and bias characterization, the inclusion of reanalysis, and support for observation simulators. An update on the governance for the obs4MIPs project and recent usage statistics will also be presented.

  20. Evaluating thermoregulation in reptiles: an appropriate null model.

    PubMed

    Christian, Keith A; Tracy, Christopher R; Tracy, C Richard

    2006-09-01

    Established indexes of thermoregulation in ectotherms compare body temperatures of real animals with a null distribution of operative temperatures from a physical or mathematical model with the same size, shape, and color as the actual animal but without mass. These indexes, however, do not account for thermal inertia or the effects of inertia when animals move through thermally heterogeneous environments. Some recent models have incorporated body mass, to account for thermal inertia and the physiological control of warming and cooling rates seen in most reptiles, and other models have incorporated movement through the environment, but none includes all pertinent variables explaining body temperature. We present a new technique for calculating the distribution of body temperatures available to ectotherms that have thermal inertia, random movements, and different rates of warming and cooling. The approach uses a biophysical model of heat exchange in ectotherms and a model of random interaction with thermal environments over the course of a day to create a null distribution of body temperatures that can be used with conventional thermoregulation indexes. This new technique provides an unbiased method for evaluating thermoregulation in large ectotherms that store heat while moving through complex environments, but it can also generate null models for ectotherms of all sizes.
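
    A hedged sketch of such a null model: Newtonian heat exchange with asymmetric warming/cooling rate constants and random movement among operative temperatures. All rate constants and temperatures below are illustrative assumptions, not parameters from the study:

```python
import numpy as np

def null_body_temps(op_temps, k_warm=0.08, k_cool=0.04, t0=25.0):
    """Null distribution of body temperatures for an ectotherm with thermal
    inertia moving randomly among operative temperatures (first-order
    Newtonian model). k_warm > k_cool mimics the physiological control of
    warming/cooling rates seen in most reptiles."""
    tb, out = t0, []
    for te in op_temps:                    # one operative temperature per step
        k = k_warm if te > tb else k_cool  # asymmetric rate constants
        tb += k * (te - tb)                # fractional approach toward Te
        out.append(tb)
    return np.array(out)

rng = np.random.default_rng(1)
te_series = rng.uniform(15, 45, size=1440)  # random microsite temps over a day
tb = null_body_temps(te_series)
print(tb.min(), tb.max())
```

    The resulting `tb` distribution can then be fed to conventional thermoregulation indexes in place of the massless-model operative temperatures.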

  2. Metal mixture modeling evaluation project: 2. Comparison of four modeling approaches.

    PubMed

    Farley, Kevin J; Meyer, Joseph S; Balistrieri, Laurie S; De Schamphelaere, Karel A C; Iwasaki, Yuichi; Janssen, Colin R; Kamo, Masashi; Lofts, Stephen; Mebane, Christopher A; Naito, Wataru; Ryan, Adam C; Santore, Robert C; Tipping, Edward

    2015-04-01

    As part of the Metal Mixture Modeling Evaluation (MMME) project, models were developed by the National Institute of Advanced Industrial Science and Technology (Japan), the US Geological Survey (USA), HDR|HydroQual (USA), and the Centre for Ecology and Hydrology (United Kingdom) to address the effects of metal mixtures on biological responses of aquatic organisms. A comparison of the 4 models, as they were presented at the MMME workshop in Brussels, Belgium (May 2012), is provided in the present study. Overall, the models were found to be similar in structure (free ion activities computed by the Windermere humic aqueous model [WHAM]; specific or nonspecific binding of metals/cations in or on the organism; specification of metal potency factors or toxicity response functions to relate metal accumulation to biological response). Major differences in modeling approaches are attributed to various modeling assumptions (e.g., single vs multiple types of binding sites on the organism) and specific calibration strategies that affected the selection of model parameters. The models provided a reasonable description of additive (or nearly additive) toxicity for a number of individual toxicity test results. Less-than-additive toxicity was more difficult to describe with the available models. Because of limitations in the available datasets and the strong interrelationships among the model parameters (binding constants, potency factors, toxicity response parameters), further evaluation of specific model assumptions and calibration strategies is needed.

  3. Gridded global surface ozone metrics for atmospheric chemistry model evaluation

    NASA Astrophysics Data System (ADS)

    Sofen, E. D.; Bowdalo, D.; Evans, M. J.; Apadula, F.; Bonasoni, P.; Cupeiro, M.; Ellul, R.; Galbally, I. E.; Girgzdiene, R.; Luppo, S.; Mimouni, M.; Nahas, A. C.; Saliba, M.; Tørseth, K.

    2016-02-01

    The concentration of ozone at the Earth's surface is measured at many locations across the globe for the purposes of air quality monitoring and atmospheric chemistry research. We have brought together all publicly available surface ozone observations from online databases from the modern era to build a consistent data set for the evaluation of chemical transport and chemistry-climate (Earth System) models for projects such as the Chemistry-Climate Model Initiative and Aer-Chem-MIP. From a total data set of approximately 6600 sites and 500 million hourly observations from 1971-2015, approximately 2200 sites and 200 million hourly observations pass screening as high-quality sites in regionally representative locations that are appropriate for use in global model evaluation. There is generally good data volume since the start of air quality monitoring networks in 1990 through 2013. Ozone observations are biased heavily toward North America and Europe with sparse coverage over the rest of the globe. This data set is made available for the purposes of model evaluation as a set of gridded metrics intended to describe the distribution of ozone concentrations on monthly and annual timescales. Metrics include the moments of the distribution, percentiles, maximum daily 8-hour average (MDA8), sum of means over 35 ppb (daily maximum 8-h; SOMO35), accumulated ozone exposure above a threshold of 40 ppbv (AOT40), and metrics related to air quality regulatory thresholds. Gridded data sets are stored as netCDF-4 files and are available to download from the British Atmospheric Data Centre (doi: 10.5285/08fbe63d-fa6d-4a7a-b952-5932e3ab0452). We provide recommendations to the ozone measurement community regarding improving metadata reporting to simplify ongoing and future efforts in working with ozone data from disparate networks in a consistent manner.
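
    The MDA8 and AOT40 metrics named above have simple definitions on hourly data; a minimal sketch (the diurnal cycle below is synthetic, and AOT40 is conventionally restricted to daylight hours in a growing season):

```python
import numpy as np

def mda8(hourly):
    """Maximum daily 8-h average (ppb) from one day of hourly ozone."""
    hourly = np.asarray(hourly, float)
    windows = [hourly[i:i + 8].mean() for i in range(len(hourly) - 7)]
    return max(windows)

def aot40(hourly_daylight):
    """AOT40: accumulated hourly exceedance above 40 ppb (ppb h)."""
    h = np.asarray(hourly_daylight, float)
    return np.sum(np.clip(h - 40.0, 0.0, None))

day = 30 + 15 * np.sin(np.linspace(0, np.pi, 24))  # illustrative diurnal cycle
print(mda8(day), aot40(day))
```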

  4. Gridded global surface ozone metrics for atmospheric chemistry model evaluation

    NASA Astrophysics Data System (ADS)

    Sofen, E. D.; Bowdalo, D.; Evans, M. J.; Apadula, F.; Bonasoni, P.; Cupeiro, M.; Ellul, R.; Galbally, I. E.; Girgzdiene, R.; Luppo, S.; Mimouni, M.; Nahas, A. C.; Saliba, M.; Tørseth, K.; and all other contributors to the WMO GAW, EPA AQS, EPA CASTNET, CAPMoN, NAPS, AirBase, EMEP, and EANET ozone datasets

    2015-07-01

    The concentration of ozone at the Earth's surface is measured at many locations across the globe for the purposes of air quality monitoring and atmospheric chemistry research. We have brought together all publicly available surface ozone observations from online databases from the modern era to build a consistent dataset for the evaluation of chemical transport and chemistry-climate (Earth System) models for projects such as the Chemistry-Climate Model Initiative and Aer-Chem-MIP. From a total dataset of approximately 6600 sites and 500 million hourly observations from 1971-2015, approximately 2200 sites and 200 million hourly observations pass screening as high-quality sites in regional background locations that are appropriate for use in global model evaluation. There is generally good data volume since the start of air quality monitoring networks in 1990 through 2013. Ozone observations are biased heavily toward North America and Europe with sparse coverage over the rest of the globe. This dataset is made available for the purposes of model evaluation as a set of gridded metrics intended to describe the distribution of ozone concentrations on monthly and annual timescales. Metrics include the moments of the distribution, percentiles, maximum daily eight-hour average (MDA8), SOMO35, AOT40, and metrics related to air quality regulatory thresholds. Gridded datasets are stored as netCDF-4 files and are available to download from the British Atmospheric Data Centre (doi:10.5285/08fbe63d-fa6d-4a7a-b952-5932e3ab0452). We provide recommendations to the ozone measurement community regarding improving metadata reporting to simplify ongoing and future efforts in working with ozone data from disparate networks in a consistent manner.

  5. Evaluating the uncertainty of input quantities in measurement models

    NASA Astrophysics Data System (ADS)

    Possolo, Antonio; Elster, Clemens

    2014-06-01

    The Guide to the Expression of Uncertainty in Measurement (GUM) gives guidance about how values and uncertainties should be assigned to the input quantities that appear in measurement models. This contribution offers a concrete proposal for how that guidance may be updated in light of the advances in the evaluation and expression of measurement uncertainty that were made in the course of the twenty years that have elapsed since the publication of the GUM, and also considering situations that the GUM does not yet contemplate. Our motivation is the ongoing conversation about a new edition of the GUM. While generally we favour a Bayesian approach to uncertainty evaluation, we also recognize the value that other approaches may bring to the problems considered here, and focus on methods for uncertainty evaluation and propagation that are widely applicable, including to cases that the GUM has not yet addressed. In addition to Bayesian methods, we discuss maximum-likelihood estimation, robust statistical methods, and measurement models where values of nominal properties play the same role that input quantities play in traditional models. We illustrate these general-purpose techniques in concrete examples, employing data sets that are realistic but that also are of conveniently small sizes. The supplementary material available online lists the R computer code that we have used to produce these examples (stacks.iop.org/Met/51/3/339/mmedia). Although we strive to stay close to clause 4 of the GUM, which addresses the evaluation of uncertainty for input quantities, we depart from it as we review the classes of measurement models that we believe are generally useful in contemporary measurement science. We also considerably expand and update the treatment that the GUM gives to Type B evaluations of uncertainty: reviewing the state-of-the-art, disciplined approach to the elicitation of expert knowledge, and its encapsulation in probability distributions that are usable in
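
    A minimal sketch of Monte Carlo propagation of input uncertainties through a measurement model, in the spirit of the propagation-of-distributions approach the contribution discusses (the model Y = X1/X2 and both input distributions are assumptions chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

x1 = rng.normal(10.0, 0.2, n)   # Type A evaluation: normal from repeated observations
x2 = rng.uniform(4.9, 5.1, n)   # Type B evaluation: rectangular from stated limits

y = x1 / x2                     # measurement model (assumed for this sketch)

estimate = y.mean()
std_unc = y.std(ddof=1)                 # standard uncertainty of the output
lo, hi = np.percentile(y, [2.5, 97.5])  # 95% coverage interval
print(f"y = {estimate:.4f} ± {std_unc:.4f}, 95% interval [{lo:.4f}, {hi:.4f}]")
```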

  6. Evaluating Alzheimer's Disease Progression by Modeling Crosstalk Network Disruption

    PubMed Central

    Liu, Haochen; Wei, Chunxiang; He, Hua; Liu, Xiaoquan

    2016-01-01

    Aβ, tau, and P-tau have been widely accepted as reliable markers for Alzheimer's disease (AD). The crosstalk between these markers forms a complex network. AD may induce the integral variation and disruption of the network. The aim of this study was to develop a novel mathematical model based on a simplified crosstalk network to evaluate the disease progression of AD. The integral variation of the network is measured by three integral disruption parameters. The robustness of the network is evaluated by a network disruption probability. Presented results show that network disruption probability has a good linear relationship with the Mini Mental State Examination (MMSE). The proposed model combined with a support vector machine (SVM) achieves relatively high 10-fold cross-validated performance in classification of AD vs. normal and mild cognitive impairment (MCI) vs. normal (95% accuracy, 95% sensitivity, 95% specificity for AD vs. normal; 90% accuracy, 94% sensitivity, 83% specificity for MCI vs. normal). This research evaluates the progression of AD and facilitates AD early diagnosis. PMID:26834548

  7. Evaluation of Data Used for Modelling the Stratosphere of Saturn

    NASA Astrophysics Data System (ADS)

    Armstrong, Eleanor Sophie; Irwin, Patrick G. J.; Moses, Julianne I.

    2015-11-01

    Planetary atmospheres are modeled through the use of a photochemical and kinetic reaction scheme constructed from experimentally and theoretically determined rate coefficients, photoabsorption cross sections and branching ratios for the molecules described within them. The KINETICS architecture has previously been developed to model planetary atmospheres and is applied here to Saturn’s stratosphere. We consider the pathways that comprise the reaction scheme of a current model, and update the reaction scheme according to the findings of a literature investigation. We evaluate contemporary photochemical literature, studying recent data sets of cross sections and branching ratios for a number of hydrocarbons used in the photochemical scheme of Model C of KINETICS. In particular, we consider new photodissociation branching ratios for CH4, C2H2, C2H4, C3H3, C3H5 and C4H2, and new cross-sectional data for C2H2, C2H4, C2H6, C3H3, C4H2, C6H2 and C8H2. By evaluating the techniques used and data sets obtained, a new reaction scheme selection was drawn up. These data are then used within the preferred reaction scheme of the thesis and applied to the KINETICS atmospheric model to produce a model of the stratosphere of Saturn in a steady state. A total output of the preferred reaction scheme is presented, and the data are compared both with the previous reaction scheme and with data from the Cassini spacecraft in orbit around Saturn. One of the key findings of this work is that there is significant change in the model’s output as a result of temperature-dependent data determination. Although only shown within the changes to the photochemical portion of the preferred reaction scheme, it is suggested that an equally important temperature dependence will be exhibited in the kinetic section of the reaction scheme. The photochemical model output is shown to be highly dependent on the preferred reaction scheme used within it by this thesis. The importance of correct

  8. Integrating fire with hydrological projections: model evaluation to identify uncertainties and tradeoffs in model complexity

    NASA Astrophysics Data System (ADS)

    Kennedy, M.; McKenzie, D.

    2013-12-01

    It is imperative for resource managers to understand how a changing climate might modify future watershed and hydrological processes, and such an understanding is incomplete if disturbances such as fire are not integrated with hydrological projections. Can a robust fire spread model be developed that approximates patterns of fire spread in response to varying topography, wind patterns, and fuel loads and moistures, without requiring intensive calibration to each new study area or time frame? We assessed the performance of a stochastic model of fire spread (WMFire), integrated with the Regional Hydro-Ecological Simulation System (RHESSys), for projecting the effects of climatic change on mountain watersheds. We first use Monte Carlo inference to determine that the fire spread model is able to replicate the spatial pattern of fire spread for a contemporary wildfire in Washington State (the Tripod fire), measured by the lacunarity and fractal dimension of the fire. We then integrate a version of WMFire able to replicate the contemporary wildfire with RHESSys and simulate a New Mexico watershed over the calibration period of RHESSys (1941-1997). In comparing the fire spread model to a single contemporary wildfire we found issues in parameter identifiability for several of the nine parameters, due to model input uncertainty and insensitivity of the mathematical function to certain ranges of the parameter values. Model input uncertainty is caused by the inherent difficulty in reconstructing fuel loads and fuel moistures for a fire event after the fire has occurred, as well as by issues in translating variables relevant to hydrological processes produced by the hydrological model to those known to affect fire spread and fire severity. The first stage in the model evaluation aided the improvement of the model in both of these regards. In transporting the model to a new landscape in order to evaluate fire regimes in addition to patterns of fire spread, we find reasonable
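
    Fractal dimension, one of the two fire-pattern metrics used above, is commonly estimated by box counting; a rough sketch assuming a square, power-of-two binary burned/unburned grid (this is a generic estimator, not WMFire's own implementation):

```python
import numpy as np

def box_count_dimension(grid):
    """Box-counting fractal dimension of a binary burned/unburned grid."""
    grid = np.asarray(grid, bool)
    n = grid.shape[0]                 # assumes a square, power-of-two grid
    sizes, counts = [], []
    size = n
    while size >= 1:
        k = n // size
        # count boxes of side `size` that contain any burned cell
        blocks = grid.reshape(k, size, k, size).any(axis=(1, 3))
        sizes.append(size)
        counts.append(blocks.sum())
        size //= 2
    # slope of log(count) vs log(1/size) estimates the dimension
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Sanity check: a completely burned square has dimension 2.
filled = np.ones((64, 64), dtype=bool)
print(box_count_dimension(filled))
```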

  9. Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation

    NASA Astrophysics Data System (ADS)

    Tsai, Frank T.-C.; Elshall, Ahmed S.

    2013-09-01

    Analysts are often faced with competing propositions for each uncertain model component. How can we judge that we select a correct proposition(s) for an uncertain model component out of numerous possible propositions? We introduce the hierarchical Bayesian model averaging (HBMA) method as a multimodel framework for uncertainty analysis. The HBMA allows for segregating, prioritizing, and evaluating different sources of uncertainty and their corresponding competing propositions through a hierarchy of BMA models that forms a BMA tree. We apply the HBMA to conduct uncertainty analysis on the reconstructed hydrostratigraphic architectures of the Baton Rouge aquifer-fault system, Louisiana. Due to uncertainty in model data, structure, and parameters, multiple possible hydrostratigraphic models are produced and calibrated as base models. The study considers four sources of uncertainty. With respect to data uncertainty, the study considers two calibration data sets. With respect to model structure, the study considers three different variogram models, two geological stationarity assumptions and two fault conceptualizations. The base models are produced following a combinatorial design to allow for uncertainty segregation. Thus, these four uncertain model components with their corresponding competing model propositions result in 24 base models. The results show that the systematic dissection of the uncertain model components along with their corresponding competing propositions allows for detecting the robust model propositions and the major sources of uncertainty.
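
    The HBMA tree builds a hierarchy over ordinary BMA weighting. A generic sketch of posterior model probabilities via a BIC approximation with equal priors (the BIC values and predictions below are hypothetical, and the study's own weighting may differ):

```python
import numpy as np

def bma_weights(bic):
    """Posterior model probabilities from BIC values (equal priors)."""
    bic = np.asarray(bic, float)
    delta = bic - bic.min()          # subtract the minimum to stabilize exp
    w = np.exp(-0.5 * delta)
    return w / w.sum()

bics = [102.3, 104.1, 110.7]         # hypothetical base-model BICs
w = bma_weights(bics)

# BMA prediction: weight each base model's prediction by its probability.
predictions = np.array([3.1, 2.8, 3.6])  # hypothetical model outputs
print(w, np.dot(w, predictions))
```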

  10. Metal Mixture Modeling Evaluation project: 2. Comparison of four modeling approaches

    USGS Publications Warehouse

    Farley, Kevin J.; Meyer, Joe; Balistrieri, Laurie S.; DeSchamphelaere, Karl; Iwasaki, Yuichi; Janssen, Colin; Kamo, Masashi; Lofts, Steve; Mebane, Christopher A.; Naito, Wataru; Ryan, Adam C.; Santore, Robert C.; Tipping, Edward

    2015-01-01

    As part of the Metal Mixture Modeling Evaluation (MMME) project, models were developed by the National Institute of Advanced Industrial Science and Technology (Japan), the U.S. Geological Survey (USA), HDR|HydroQual, Inc. (USA), and the Centre for Ecology and Hydrology (UK) to address the effects of metal mixtures on biological responses of aquatic organisms. A comparison of the 4 models, as they were presented at the MMME Workshop in Brussels, Belgium (May 2012), is provided herein. Overall, the models were found to be similar in structure (free ion activities computed by WHAM; specific or non-specific binding of metals/cations in or on the organism; specification of metal potency factors and/or toxicity response functions to relate metal accumulation to biological response). Major differences in modeling approaches are attributed to various modeling assumptions (e.g., single versus multiple types of binding site on the organism) and specific calibration strategies that affected the selection of model parameters. The models provided a reasonable description of additive (or nearly additive) toxicity for a number of individual toxicity test results. Less-than-additive toxicity was more difficult to describe with the available models. Because of limitations in the available datasets and the strong inter-relationships among the model parameters (log KM values, potency factors, toxicity response parameters), further evaluation of specific model assumptions and calibration strategies is needed.

  11. Evaluation of weather-based rice yield models in India

    NASA Astrophysics Data System (ADS)

    Sudharsan, D.; Adinarayana, J.; Reddy, D. Raji; Sreenivas, G.; Ninomiya, S.; Hirafuji, M.; Kiura, T.; Tanaka, K.; Desai, U. B.; Merchant, S. N.

    2013-01-01

    The objective of this study was to compare two different rice simulation models—standalone (Decision Support System for Agrotechnology Transfer [DSSAT]) and web based (SImulation Model for RIce-Weather relations [SIMRIW])—with agrometeorological data and agronomic parameters for estimation of rice crop production in the southern semi-arid tropics of India. Studies were carried out on the BPT5204 rice variety to evaluate the two crop simulation models. Long-term experiments were conducted in a research farm of Acharya N G Ranga Agricultural University (ANGRAU), Hyderabad, India. Initially, the results were obtained using 4 years (1994-1997) of data with weather parameters from a local weather station to evaluate DSSAT simulated results against observed values. Linear regression models used for the purpose showed a close relationship between DSSAT and observed yield. Subsequently, yield comparisons were also carried out with SIMRIW and DSSAT, and validated with actual observed values. As the correlation coefficients of the SIMRIW simulations were within acceptable limits, further rice experiments in the monsoon (Kharif) and post-monsoon (Rabi) agricultural seasons (2009, 2010 and 2011) were carried out with a location-specific distributed sensor network system. These proximal systems help to simulate dry weight, leaf area index and potential yield by the Java based SIMRIW on a daily/weekly/monthly/seasonal basis. These dynamic parameters are useful to the farming community for necessary decision making in a ubiquitous manner. However, SIMRIW requires fine tuning for better results/decision making.

  12. A neural network model for credit risk evaluation.

    PubMed

    Khashman, Adnan

    2009-08-01

    Credit scoring is one of the key analytical techniques in credit risk evaluation which has been an active research area in financial risk management. This paper presents a credit risk evaluation system that uses a neural network model based on the back propagation learning algorithm. We train and implement the neural network to decide whether to approve or reject a credit application, using seven learning schemes and real world credit applications from the Australian credit approval datasets. A comparison of the system performance under the different learning schemes is provided, furthermore, we compare the performance of two neural networks; with one and two hidden layers following the ideal learning scheme. Experimental results suggest that neural networks can be effectively used in automatic processing of credit applications.
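
    A minimal sketch of the kind of back-propagation credit classifier described: one hidden layer trained by gradient descent on synthetic applicant data. The architecture, learning rate, and data below are illustrative assumptions, not the paper's configuration or the Australian dataset:

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(200, 7))                    # 7 applicant features (synthetic)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)  # synthetic approval rule

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(scale=0.5, size=(7, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.5

for _ in range(3000):
    h = sigmoid(X @ W1 + b1)                # hidden activations
    p = sigmoid(h @ W2 + b2).ravel()        # approval probability
    grad_out = (p - y)[:, None] / len(y)    # log-loss gradient at the output
    grad_h = grad_out @ W2.T * h * (1 - h)  # back-propagate to the hidden layer
    W2 -= lr * (h.T @ grad_out); b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * (X.T @ grad_h);   b1 -= lr * grad_h.sum(axis=0)

accuracy = np.mean((p > 0.5) == (y > 0.5))
print(f"training accuracy: {accuracy:.2f}")
```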

  13. An Evaluation of a Diagnostic Wind Model (CALMET)

    SciTech Connect

    Wang, Weiguo; Shaw, William J.; Seiple, Timothy E.; Rishel, Jeremy P.; Xie, YuLong

    2008-06-01

    An EPA-recommended diagnostic wind model (CALMET) was evaluated during a typical lake-breeze event in the Chicago region. We focused on the performance of CALMET in terms of simulating winds that were highly variable in space and time. The reference winds were generated by the PSU/NCAR MM5 assimilating system, with which CALMET results were compared. Statistical evaluations were conducted to quantify overall errors in wind speed and direction over the domain. Within the atmospheric boundary layer (ABL), relative errors in (layer-averaged) wind speed were about 25% to 40% during the simulation period; wind direction errors generally ranged from 6 to 20 deg. Above the ABL, the errors became larger due to lack of upper air stations in the studied domain. Analyses implied that model errors were dependent on time due to time-dependent spatial variability in winds. Trajectory analyses were made to examine the likely spatial dependence of model errors within the domain, suggesting that the quality of CALMET winds in local areas depended on their locations with respect to the lake-breeze front position. Large errors usually occurred near the front area, where observations cannot resolve the spatial variability of wind, or in the fringe of the domain, where observations are lacking. We also compared results simulated using different datasets and model options. Model errors tended to be reduced with data sampled from more stations or from more uniformly-distributed stations. Suggestions are offered for further improving or interpreting CALMET results under complex wind conditions in the Chicago region, which may also apply to other regions.
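
    Wind-direction errors such as those reported above (6 to 20 deg) must be computed on the circle, so that 355° vs 5° counts as 10° rather than 350°; a small sketch with illustrative values:

```python
import numpy as np

def direction_error(obs_deg, sim_deg):
    """Mean absolute wind-direction error (degrees), wrapped to [-180, 180)."""
    d = (np.asarray(sim_deg, float) - np.asarray(obs_deg, float)
         + 180.0) % 360.0 - 180.0
    return np.mean(np.abs(d))

obs = np.array([350.0, 10.0, 180.0, 90.0])  # illustrative observed directions
sim = np.array([5.0, 355.0, 170.0, 110.0])  # illustrative modeled directions
print(direction_error(obs, sim))
```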

  14. Preliminary evaluation of a lake whitefish (Coregonus clupeaformis) bioenergetics model

    USGS Publications Warehouse

    Madenjian, Charles P.; Pothoven, Steven A.; Schneeberger, Philip J.; O'Connor, Daniel V.; Brandt, Stephen B.

    2005-01-01

    We conducted a preliminary evaluation of a lake whitefish (Coregonus clupeaformis) bioenergetics model by applying the model to size-at-age data for lake whitefish from northern Lake Michigan. We then compared estimates of gross growth efficiency (GGE) from our bioenergetics model with previously published estimates of GGE for bloater (C. hoyi) in Lake Michigan and for lake whitefish in Quebec. According to our model, the GGE of Lake Michigan lake whitefish decreased from 0.075 to 0.02 as age increased from 2 to 5 years. In contrast, the GGE of lake whitefish in Quebec inland waters decreased from 0.12 to 0.05 for the same ages. When our swimming-speed submodel was replaced with a submodel that had been used for lake trout (Salvelinus namaycush) in Lake Michigan and an observed predator energy density for Lake Michigan lake whitefish was employed, our model predicted that the GGE of Lake Michigan lake whitefish decreased from 0.12 to 0.04 as age increased from 2 to 5 years.

  15. An evaluation of attention models for use in SLAM

    NASA Astrophysics Data System (ADS)

    Dodge, Samuel; Karam, Lina

    2013-12-01

    In this paper we study the application of visual saliency models to the simultaneous localization and mapping (SLAM) problem. We consider visual SLAM, where the location of the camera and a map of the environment can be generated using images from a single moving camera. In visual SLAM, the interest point detector is of key importance. This detector must be invariant to certain image transformations so that features can be matched across different frames. Recent work has used a model of human visual attention to detect interest points; however, it is unclear which attention model is best suited for this purpose. To this aim, we compare the performance of interest points from four saliency models (Itti, GBVS, RARE, and AWS) with that of four traditional interest point detectors (Harris, Shi-Tomasi, SIFT, and FAST). We evaluate these detectors under several different types of image transformation and find that the Itti saliency model, in general, achieves the best performance in terms of keypoint repeatability.
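    Keypoint repeatability, the evaluation metric named above, can be sketched as follows; the detections, the known transform, and the matching tolerance are hypothetical:

```python
def repeatability(kps_ref, kps_test, transform, tol=2.0):
    """Fraction of reference keypoints re-detected after an image transform.
    A reference point counts as repeated if some detection in the transformed
    image lies within `tol` pixels of its mapped location."""
    matched = 0
    for p in kps_ref:
        q = transform(p)
        if any((q[0] - r[0]) ** 2 + (q[1] - r[1]) ** 2 <= tol ** 2 for r in kps_test):
            matched += 1
    return matched / min(len(kps_ref), len(kps_test))

# Hypothetical detections: the second image is the first shifted by (5, 0) pixels.
shift = lambda p: (p[0] + 5.0, p[1])
ref  = [(10.0, 10.0), (30.0, 40.0), (55.0, 20.0), (70.0, 80.0)]
test = [(15.0, 10.0), (35.0, 41.0), (90.0, 90.0), (75.0, 80.0)]
score = repeatability(ref, test, shift)  # 3 of 4 reference points recur -> 0.75
```

In practice the transform comes from a known homography between test images, and the score is averaged over many frames.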

  16. Laboratory evaluation of a walleye (Sander vitreus) bioenergetics model

    USGS Publications Warehouse

    Madenjian, C.P.; Wang, C.; O'Brien, T. P.; Holuszko, M.J.; Ogilvie, L.M.; Stickel, R.G.

    2010-01-01

    Walleye (Sander vitreus) is an important game fish throughout much of North America. We evaluated the performance of the Wisconsin bioenergetics model for walleye in the laboratory. Walleyes were fed rainbow smelt (Osmerus mordax) in four laboratory tanks during a 126-day experiment. Based on a statistical comparison of bioenergetics model predictions of monthly consumption with the observed monthly consumption, we concluded that the bioenergetics model significantly underestimated food consumption by walleye in the laboratory. The degree of underestimation appeared to depend on the feeding rate. For the tank with the lowest feeding rate (1.4% of walleye body weight per day), the agreement between the bioenergetics model prediction of cumulative consumption over the entire 126-day experiment and the observed cumulative consumption was remarkably close, as the prediction was within 0.1% of the observed cumulative consumption. Feeding rates in the other three tanks ranged from 1.6% to 1.7% of walleye body weight per day, and bioenergetics model predictions of cumulative consumption over the 126-day experiment were 11% to 15% less than the observed cumulative consumption. © 2008 Springer Science+Business Media B.V.

  17. Proper bibeta ROC model: algorithm, software, and performance evaluation

    NASA Astrophysics Data System (ADS)

    Chen, Weijie; Hu, Nan

    2016-03-01

    Semi-parametric models are often used to fit data collected in receiver operating characteristic (ROC) experiments to obtain a smooth ROC curve and ROC parameters for statistical inference purposes. The proper bibeta model, recently proposed by Mossman and Peng, enjoys several desirable theoretical properties. In addition to having explicit density functions for the latent decision variable and an explicit functional form of the ROC curve, the two-parameter bibeta model also has simple closed-form expressions for the true-positive fraction (TPF), the false-positive fraction (FPF), and the area under the ROC curve (AUC). In this work, we developed a computational algorithm and an R package implementing this model for ROC curve fitting. Our algorithm can deal with any ordinal data (categorical or continuous). To improve the accuracy, efficiency, and reliability of our software, we adopted several strategies in our computational algorithm, including: (1) the LABROC4 categorization to obtain the true maximum likelihood estimation of the ROC parameters; (2) a principled approach to initializing parameters; (3) analytical first-order and second-order derivatives of the likelihood function; (4) an efficient optimization procedure (the L-BFGS algorithm in the R package "nlopt"); and (5) an analytical delta method to estimate the variance of the AUC. We evaluated the performance of our software with intensive simulation studies and compared it with the conventional binormal and the proper binormal-likelihood-ratio models developed at the University of Chicago. Our simulation results indicate that our software is highly accurate, efficient, and reliable.
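    The flavor of a two-beta score model can be sketched by Monte Carlo; the beta parameters below are illustrative only and are not necessarily the Mossman-Peng parametrization:

```python
import random

random.seed(42)

# Illustrative two-beta score model (parametrization chosen for this sketch,
# not necessarily Mossman and Peng's): latent scores lie in (0, 1), with
# signal-present cases skewed high and signal-absent cases skewed low.
absent  = [random.betavariate(2.0, 5.0) for _ in range(1000)]
present = [random.betavariate(5.0, 2.0) for _ in range(1000)]

def empirical_auc(neg, pos):
    """Mann-Whitney estimate of the area under the ROC curve: the probability
    that a signal-present score exceeds a signal-absent one (ties count half)."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

auc = empirical_auc(absent, present)
```

The point of the fitted bibeta model is that quantities like this AUC come out in closed form rather than by pairwise counting.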

  18. Interfacial Micromechanics in Fibrous Composites: Design, Evaluation, and Models

    PubMed Central

    Lei, Zhenkun; Li, Xuan; Qin, Fuyong; Qiu, Wei

    2014-01-01

    Recent advances in interfacial micromechanics of fiber-reinforced composites using micro-Raman spectroscopy are reviewed. The mechanical problems faced in interface design for fibrous composites are discussed from three optimization perspectives: material, interface, and computation. Reasons are given why current interfacial evaluation methods struggle to guarantee integrity, repeatability, and consistency. Micro-Raman studies of fiber interface failure behavior and the main interface mechanics problems in fibrous composites are summarized, including interfacial stress transfer, strength criteria for interface debonding and failure, fiber bridging, frictional slip, slip transition, and friction reloading. Theoretical models of the above interface mechanics problems are given. PMID:24977189

  19. Further Evaluation of a Brief, Intensive Teacher-Training Model

    PubMed Central

    Lerman, Dorothea C; Tetreault, Allison; Hovanetz, Alyson; Strobel, Margaret; Garro, Joanie

    2008-01-01

    The purpose of this study was to further evaluate the outcomes of a model program designed to train current teachers of children with autism. Nine certified special education teachers participating in an intensive 5-day summer training program were taught a relatively large number of specific skills in two areas (preference assessment and direct teaching). The teachers met the mastery criteria for all of the skills during the summer training. Follow-up observations up to 6 months after training suggested that the skills generalized to their classrooms and were maintained for most teachers with brief feedback only. PMID:18595288

  20. Advancing Models and Evaluation of Cumulus, Climate and Aerosol Interactions

    SciTech Connect

    Gettelman, Andrew

    2015-10-27

    This project met its goals, despite facing some serious challenges due to personnel issues. The Project Objectives were as follows: 1. Develop a unified representation of stratiform and cumulus cloud microphysics for NCAR/DOE global community models. 2. Examine the effects of aerosols on clouds and their impact on precipitation in stratiform and cumulus clouds. We will also explore the effects of clouds and precipitation on aerosols. 3. Test these new formulations using advanced evaluation techniques and observations and release

  1. Interfacial micromechanics in fibrous composites: design, evaluation, and models.

    PubMed

    Lei, Zhenkun; Li, Xuan; Qin, Fuyong; Qiu, Wei

    2014-01-01

    Recent advances in interfacial micromechanics of fiber-reinforced composites using micro-Raman spectroscopy are reviewed. The mechanical problems faced in interface design for fibrous composites are discussed from three optimization perspectives: material, interface, and computation. Reasons are given why current interfacial evaluation methods struggle to guarantee integrity, repeatability, and consistency. Micro-Raman studies of fiber interface failure behavior and the main interface mechanics problems in fibrous composites are summarized, including interfacial stress transfer, strength criteria for interface debonding and failure, fiber bridging, frictional slip, slip transition, and friction reloading. Theoretical models of the above interface mechanics problems are given. PMID:24977189

  2. Evaluation and cross-validation of Environmental Models

    NASA Astrophysics Data System (ADS)

    Lemaire, Joseph

    Before scientific models (statistical or empirical models based on experimental measurements; physical or mathematical models) can be proposed and selected as ISO Environmental Standards, a commission of professional experts appointed by an established international union or association (e.g. IAGA for Geomagnetism and Aeronomy, ...) should have been able to study, document, evaluate and validate the best alternative models available at a given epoch. Examples will be given indicating that different values for the Earth's radius have been employed in different data processing laboratories, institutes or agencies to process, analyse or retrieve series of experimental observations. Furthermore, invariant magnetic coordinates like B and L, commonly used in the study of the Earth's radiation belt fluxes and for their mapping, differ from one space mission data center to another, from team to team, and from country to country. Worse, users of empirical models generally fail to use the original magnetic model that had been employed to compile B and L, and thus to build these environmental models. These are just some flagrant examples of inconsistencies and misuses identified so far; there are probably more to be uncovered by careful, independent examination and benchmarking. Consider the meter prototype: the standard unit of length, determined on 20 May 1875 during the Diplomatic Conference of the Meter, and deposited at the BIPM (Bureau International des Poids et Mesures). By the same token, to coordinate and safeguard progress in the field of Space Weather, similar initiatives need to be undertaken to prevent the wild, uncontrolled dissemination of pseudo environmental models and standards. Indeed, unless validation tests have been performed, there is no guarantee, a priori, that all models on the market have been built consistently with the same system of units, or that they are based on identical definitions for the coordinate systems, etc... Therefore

  3. Evaluation of internal noise methods for Hotelling observer models

    SciTech Connect

    Zhang Yani; Pham, Binh T.; Eckstein, Miguel P.

    2007-08-15

    The inclusion of internal noise in model observers is a common method to allow for quantitative comparisons between human and model observer performance in visual detection tasks. In this article, we studied two different strategies for inserting internal noise into Hotelling model observers. In the first strategy, internal noise was added to the output of individual channels: (a) independent nonuniform channel noise, or (b) independent uniform channel noise. In the second strategy, internal noise was added to the decision variable arising from the combination of channel responses. The standard deviation of the zero-mean internal noise was either constant or proportional to: (a) the decision variable's standard deviation due to the external noise, (b) the decision variable's variance caused by the external noise, or (c) the decision variable's magnitude on a trial-to-trial basis. We tested three model observers: the square window Hotelling observer (HO), the channelized Hotelling observer (CHO), and the Laguerre-Gauss Hotelling observer (LGHO), using a four-alternative forced-choice (4AFC) signal-known-exactly-but-variable task with a simulated signal embedded in real x-ray coronary angiogram backgrounds. The results showed that the internal noise method that led to the best prediction of human performance differed across the studied model observers. The CHO model best predicted human observer performance with the channel internal noise. The HO and LGHO best predicted human observer performance with the decision variable internal noise. The present results might guide researchers in choosing methods to include internal noise in Hotelling model observers when evaluating and optimizing medical image quality.
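    The second strategy (decision-variable internal noise with standard deviation proportional to the decision variable's external-noise standard deviation) can be sketched as follows; the mean separation, sample sizes, and proportionality factor are hypothetical:

```python
import random
import statistics

random.seed(1)

# External-noise-limited decision variables for signal-absent and signal-present
# trials (hypothetical mean separation, unit external noise).
absent  = [random.gauss(0.0, 1.0) for _ in range(1000)]
present = [random.gauss(1.5, 1.0) for _ in range(1000)]

def auc(neg, pos):
    """Empirical AUC: fraction of (positive, negative) pairs ranked correctly."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def add_internal_noise(dv, alpha):
    """Zero-mean Gaussian internal noise whose standard deviation is
    proportional (factor alpha) to the decision variable's standard deviation."""
    s = statistics.pstdev(dv)
    return [x + random.gauss(0.0, alpha * s) for x in dv]

auc_external_only = auc(absent, present)
auc_with_internal = auc(add_internal_noise(absent, 1.0),
                        add_internal_noise(present, 1.0))
```

Tuning alpha until the degraded model AUC matches human performance is the essence of the calibration the abstract describes.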

  4. An evaluation of sex-age-kill (SAK) model performance

    USGS Publications Warehouse

    Millspaugh, Joshua J.; Skalski, John R.; Townsend, Richard L.; Diefenbach, Duane R.; Boyce, Mark S.; Hansen, Lonnie P.; Kammermeyer, Kent

    2009-01-01

    The sex-age-kill (SAK) model is widely used to estimate abundance of harvested large mammals, including white-tailed deer (Odocoileus virginianus). Despite a long history of use, few formal evaluations of SAK performance exist. We investigated how violations of the stable age distribution and stationary population assumption, changes to male or female harvest, stochastic effects (i.e., random fluctuations in recruitment and survival), and sampling efforts influenced SAK estimation. When the simulated population had a stable age distribution and λ > 1, the SAK model underestimated abundance. Conversely, when λ < 1, the SAK overestimated abundance. When changes to male harvest were introduced, SAK estimates trended opposite to the true population trend. In contrast, SAK estimates were robust to changes in female harvest rates. Stochastic effects caused SAK estimates to fluctuate about their equilibrium abundance, but the effect dampened as the size of the surveyed population increased. When we considered both stochastic effects and sampling error at a deer management unit scale, the resultant abundance estimates were within ±121.9% of the true population level 95% of the time. These combined results demonstrate extreme sensitivity to model violations and scale of analysis. Without changes to model formulation, the SAK model will be biased when λ ≠ 1. Furthermore, any factor that alters the male harvest rate, such as changes to regulations or changes in hunter attitudes, will bias population estimates. Sex-age-kill estimates may be precise at large spatial scales, such as the state level, but less so at the individual management unit level. Alternative models, such as statistical age-at-harvest models, which require similar data types, might allow for more robust, broad-scale demographic assessments.

  5. A model for evaluating the ballistic resistance of stratified packs

    NASA Astrophysics Data System (ADS)

    Pirvu, C.; Georgescu, C.; Badea, S.; Deleanu, L.

    2016-08-01

    Models for evaluating the ballistic performance of stratified packs are useful for reducing laboratory testing time, understanding the failure process, and identifying key factors to improve the architecture of the packs. The authors present the results of simulating bullet impact on a pack made of 24 layers, taking into consideration the friction between layers (μ = 0.4) and the friction between bullet and layers (μ = 0.3). The aim of this study is to find a number of layers that allows for bullet arrest within the pack while leaving several layers undamaged, in order to offer a high level of safety for packs of this kind that could be included in individual armor. The model takes into account the yield and fracture limits of the two materials the bullet is made of and those of a single layer, here considered as an orthotropic material with a maximum equivalent plastic strain of 0.06. All materials are considered to have bilinear isotropic hardening behavior. Following a literature survey, the model was designed as isothermal because the thermal influence of the impact is considered low at these impact velocities. The model was developed with the help of Ansys 14.5. Each layer measures 200 mm × 200 mm × 0.35 mm. The bullet velocity just before impact was 400 m/s, characteristic of the average values obtained at close range with a ballistic barrel, and the bullet model follows the shape and dimensions of the 9 mm FMJ (full metal jacket). The model and the results concerning the number of broken layers were validated by experiments: the number of broken layers for the actual pack (made of 24 layers of LFT SB1) was also seven to eight. Models for ballistic impact are most useful when they are formulated to resemble the actual projectile-target system.

  6. Evaluating prospective hydrological model improvements with consideration of data and model uncertainty

    NASA Astrophysics Data System (ADS)

    Craig, James; Sgro, Nicholas; Tolson, Bryan

    2016-04-01

    New algorithms for simulating hydrological processes are regularly proposed in the hydrological literature. These algorithms are often promoted as being more physically based or better at capturing hydrologic phenomena seen in the field. However, the tests used to evaluate the effectiveness of these algorithms are typically no more than history matching - an improved model hydrograph is (often inappropriately) interpreted as an improved model. Here, a simple and more stringent method is proposed for comparing two model algorithms in terms of their ability to provide distinguishably different validation results under the impact of uncertainty in observation data and forcings. A key output of the test is whether results from two model configurations are fundamentally differentiable. This test can be used both to support improved algorithm development and to aid in hypothesis testing about watershed functioning or in model selection. As may be expected, our ability to identify the preferred hydrologic algorithm is significantly diminished when model/data uncertainty is incorporated into the evaluation process. The information content of the data and compensatory parameter effects play a key role in our ability to distinguish one model algorithm from another, and the results suggest that simpler models justified by the available data may have more utility than complex physically based models which can fit the data at the cost of poor validation performance. They also suggest that finding the "best" model structure is (unsurprisingly) dependent upon both the quality and information content of the available observation data.

  7. Physical model assisted probability of detection in nondestructive evaluation

    SciTech Connect

    Li, M.; Meeker, W. Q.; Thompson, R. B.

    2011-06-23

    Nondestructive evaluation is used widely in many engineering and industrial areas to detect defects or flaws such as cracks inside parts or structures during manufacturing or for products in service. The standard statistical model is a simple empirical linear regression between the (possibly transformed) signal response variables and the (possibly transformed) explanatory variables. For some applications, such a simple empirical approach is inadequate. An important alternative approach is to use knowledge of the physics of the inspection process to provide information about the underlying relationship between the response and explanatory variables. Use of such knowledge can greatly increase the power and accuracy of the statistical analysis and enable, when needed, proper extrapolation outside the range of the observed explanatory variables. This paper describes a set of physical model-assisted analyses to study the capability of two different ultrasonic testing inspection methods to detect synthetic hard alpha inclusion and flat-bottom hole defects in a titanium forging disk.
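    The standard empirical approach mentioned above - a linear regression of (log-transformed) signal response on (log-transformed) flaw size, turned into a probability-of-detection curve - can be sketched like this, with invented data and an invented detection threshold:

```python
import math

# Hypothetical (flaw size a, signal response ahat) pairs, arbitrary units.
data = [(0.5, 0.6), (1.0, 1.1), (2.0, 2.3), (3.0, 3.2), (4.0, 4.4), (6.0, 6.1)]

# Fit log(ahat) = b0 + b1*log(a) + eps by ordinary least squares.
xs = [math.log(a) for a, _ in data]
ys = [math.log(h) for _, h in data]
n = len(data)
xbar, ybar = sum(xs) / n, sum(ys) / n
b1 = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
     sum((x - xbar) ** 2 for x in xs)
b0 = ybar - b1 * xbar
resid = [y - (b0 + b1 * x) for x, y in zip(xs, ys)]
sigma = math.sqrt(sum(r * r for r in resid) / (n - 2))

def pod(a, ahat_threshold):
    """P(log ahat exceeds log threshold | flaw size a) under the fitted model,
    assuming Gaussian residuals."""
    z = (b0 + b1 * math.log(a) - math.log(ahat_threshold)) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

curve = [pod(a, ahat_threshold=1.5) for a in (0.5, 1.0, 2.0, 4.0, 8.0)]
```

The physics-assisted analyses the abstract describes replace this purely empirical regression line with a response predicted by a model of the inspection physics.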

  8. Statistical evaluation and modeling of Internet dial-up traffic

    NASA Astrophysics Data System (ADS)

    Faerber, Johannes; Bodamer, Stefan; Charzinski, Joachim

    1999-08-01

    Now that Internet access is a popular consumer application even for `normal' residential users, some telephone exchanges are congested by customers using modem or ISDN dial-up connections to their Internet Service Providers. In order to estimate the number of additional lines and the switching capacity required in an exchange or a trunk group, Internet access traffic must be characterized in terms of holding time and call interarrival time distributions. In this paper, we analyze log files tracing the usage of the central ISDN access line pool at the University of Stuttgart for a period of six months. Mathematical distributions are fitted to the measured data, and the fit quality is evaluated with respect to the blocking probability caused by the synthetic traffic in a multiple-server loss system. We show how the synthetic traffic model scales with the number of subscribers and how the model could be applied to compute economy-of-scale results for Internet access trunks or access servers.
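    Blocking in such a multiple-server loss system is classically given by the Erlang B formula; a sketch with hypothetical traffic values:

```python
def erlang_b(offered_erlangs, lines):
    """Blocking probability of an N-line loss system offered A erlangs of
    traffic (Erlang B), via the standard numerically stable recursion
    B(n) = A*B(n-1) / (n + A*B(n-1)), with B(0) = 1."""
    b = 1.0
    for n in range(1, lines + 1):
        b = (offered_erlangs * b) / (n + offered_erlangs * b)
    return b

# Hypothetical dial-up pool: 5 erlangs of Internet traffic offered to a trunk group.
blocking_small = erlang_b(5.0, 5)    # few lines: high blocking
blocking_large = erlang_b(5.0, 10)   # more lines: much lower blocking
```

Dimensioning then amounts to increasing the line count until the blocking probability drops below the target grade of service.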

  9. Anatomical evaluation of CT-MRI combined femoral model

    PubMed Central

    Lee, Yeon S; Seon, Jong K; Shin, Vladimir I; Kim, Gyu-Ha; Jeon, Moongu

    2008-01-01

    Background Both CT and MRI are complementary to each other in that CT can produce a distinct contour of bones, and MRI can show the shape of both ligaments and bones. It would be ideal to build a combined CT-MRI model to take advantage of the complementary information of each modality. This study evaluated the accuracy of the combined femoral model in terms of anatomical inspection. Methods Six normal porcine femora (180 ± 10 days, 3 lefts and 3 rights) with ball markers were scanned by CT and MRI. The 3D/3D registration was performed by two methods, i.e. landmark-based 3 points-to-3 points matching and surface matching using the iterative closest point (ICP) algorithm. The matching accuracy of the combined model was evaluated using global statistical deviation and locally measured anatomical contour-based deviation. Statistical analysis to assess any significant difference between the accuracies of the two methods was performed using univariate repeated measures ANOVA with the Tukey post hoc test. Results This study revealed that the local 2D contour-based measurement of matching deviation was 0.5 ± 0.3 mm in the femoral condyle and in the middle femoral shaft. The global 3D contour matching deviation of the landmark-based matching was 1.1 ± 0.3 mm, but the local 2D contour deviation found through anatomical inspection was much larger, as much as 3.0 ± 1.8 mm. Conclusion Even with human-factor-derived errors accumulated from segmentation of MRI images and limited image quality, the matching accuracy of the combined CT-and-MRI 3D models was 0.5 ± 0.3 mm in terms of local anatomical inspection. PMID:18234068

  10. Model Evaluation for Low-Level Cloud Feedback

    NASA Astrophysics Data System (ADS)

    Shin, S.-H.

    2012-04-01

    The purpose of this research is to address cloud feedbacks in future climate as predicted by global climate models. To understand the variability of low clouds in the current climate, variations in cloud cover, as well as the relationship between cloud cover and other variables, are examined using the adjusted International Satellite Cloud Climatology Project (ISCCP) data and Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4) models. The study focuses on low-cloud amount, whose variability is critical in balancing the earth's radiation budget. The correlations of the observed low-cloud cover anomalies with a variety of variables suggest that low clouds in tropical marine areas (persistent low-cloud regions) are associated with a cool sea surface, stronger stability, higher sea level pressure, and subsidence. An increase in SST causes a reduction in lower tropospheric stability, and the reduced stability allows for more vertical motion within and around the cloud deck, leading to increased entrainment of dry air. This brings about a reduction in cloudiness and a transition from low-cloud to high-cloud types. Higher SLP could also produce more subsidence aloft, increasing LTS independent of SST. Understanding the physical processes that control the cloud response to climate variability, and evaluating some components of cloud feedbacks in current models, should help to assess which of the model estimates of cloud feedback is the most reliable. Rooted in these observed features of total and low-cloud variability, we evaluate the performance and realism of simulations from various coupled GCMs, leading to the selection of reliable models, CGCM3 (from CCCMa) and HadGEM1 (from UKMO). These two models exhibit considerably good agreement in net cloud radiative forcing and produce a reduction in cloud throughout much of the Pacific in response to greenhouse gas forcing (i.e., a positive feedback). In this study

  11. Fast Prediction and Evaluation of Gravitational Waveforms Using Surrogate Models

    NASA Astrophysics Data System (ADS)

    Field, Scott E.; Galley, Chad R.; Hesthaven, Jan S.; Kaye, Jason; Tiglio, Manuel

    2014-07-01

    We propose a solution to the problem of quickly and accurately predicting gravitational waveforms within any given physical model. The method is relevant for both real-time applications and more traditional scenarios where the generation of waveforms using standard methods can be prohibitively expensive. Our approach is based on three offline steps resulting in an accurate reduced order model in both parameter and physical dimensions that can be used as a surrogate for the true or fiducial waveform family. First, a set of m parameter values is determined using a greedy algorithm from which a reduced basis representation is constructed. Second, these m parameters induce the selection of m time values for interpolating a waveform time series using an empirical interpolant that is built for the fiducial waveform family. Third, a fit in the parameter dimension is performed for the waveform's value at each of these m times. The cost of predicting L waveform time samples for a generic parameter choice is of order O(mL + m c_fit) online operations, where c_fit denotes the fitting function operation count and, typically, m ≪ L. The result is a compact, computationally efficient, and accurate surrogate model that retains the original physics of the fiducial waveform family while also being fast to evaluate. We generate accurate surrogate models for effective-one-body waveforms of nonspinning binary black hole coalescences with durations as long as 10^5 M, mass ratios from 1 to 10, and for multiple spherical harmonic modes. We find that these surrogates are more than 3 orders of magnitude faster to evaluate as compared to the cost of generating effective-one-body waveforms in standard ways. Surrogate model building for other waveform families and models follows the same steps and has the same low computational online scaling cost. For expensive numerical simulations of binary black hole coalescences, we thus anticipate extremely large speedups in generating new waveforms with a
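    The online stage described above (assembling a waveform from a precomputed basis and parametric fits at O(mL) cost) can be sketched on a toy family constructed so that a two-element basis is exact; the waveform family itself is invented for the example:

```python
import math

# Toy waveform family h(t; lam) = lam*f1(t) + lam^2*f2(t), constructed so a
# two-element reduced basis {f1, f2} reproduces it exactly (m = 2).
L = 200
ts = [i * 0.05 for i in range(L)]
basis = [[math.sin(t) for t in ts],           # B_1(t), from the greedy/offline stage
         [math.cos(2.0 * t) for t in ts]]     # B_2(t)
fits = [lambda lam: lam,                      # a_1(lam), from the parametric-fit stage
        lambda lam: lam * lam]                # a_2(lam)

def surrogate(lam):
    """Online step: O(m*L) assembly h(t; lam) ~ sum_i a_i(lam) * B_i(t)."""
    coeffs = [fit(lam) for fit in fits]
    return [sum(c * b[j] for c, b in zip(coeffs, basis)) for j in range(L)]

def true_waveform(lam):
    return [lam * math.sin(t) + lam * lam * math.cos(2.0 * t) for t in ts]

lam = 3.0
err = max(abs(s - h) for s, h in zip(surrogate(lam), true_waveform(lam)))
```

For a realistic family the basis comes from a greedy sweep over the parameter space and the fits are only approximate, so the surrogate error is bounded rather than zero.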

  12. visCOS: An R-package to evaluate model performance of hydrological models

    NASA Astrophysics Data System (ADS)

    Klotz, Daniel; Herrnegger, Mathew; Wesemann, Johannes; Schulz, Karsten

    2016-04-01

    The evaluation of model performance is a central part of (hydrological) modelling. Much attention has been given to the development of evaluation criteria and diagnostic frameworks (Klemeš, 1986; Gupta et al., 2008; among many others). Nevertheless, many applications exist for which objective functions do not yet provide satisfying summaries. Thus, the necessity to visualize results arises in order to explore a wider range of model capacities, be it strengths or deficiencies. Visualizations are usually devised for specific projects, and these efforts are often not distributed to a broader community (e.g. via open-source software packages). Hence, the opportunity to explicitly discuss a state-of-the-art presentation technique is often missed. We therefore present a comprehensive R-package for evaluating model performance by visualizing and exploring different aspects of hydrological time-series. The presented package comprises a set of useful plots and visualization methods, which complement existing packages, such as hydroGOF (Zambrano-Bigiarini et al., 2012). It is derived from practical applications of the hydrological models COSERO and COSEROreg (Kling et al., 2014). visCOS, providing an interface in R, represents an easy-to-use software package for visualizing and assessing model performance and can be used in the process of model calibration or model development. The package provides functions to load hydrological data into R, clean the data, process, visualize, explore and finally save the results in a consistent way. It also includes an interactive zoom function for the time series and online calculation of the objective functions over variable time windows. Common hydrological objective functions, such as the Nash-Sutcliffe Efficiency and the Kling-Gupta Efficiency, can also be evaluated and visualized in different ways for defined sub-periods like hydrological years or seasonal sections. Many hydrologists use long-term water-balances as a
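    The two objective functions named above can be computed directly; the discharge series below is hypothetical, and the KGE is written in its 2009 Gupta et al. form:

```python
import math

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 is a perfect fit; 0 means the simulation
    is no better than the mean of the observations."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def kge(obs, sim):
    """Kling-Gupta Efficiency: 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2),
    with r the correlation, alpha the std ratio, beta the mean ratio."""
    n = len(obs)
    mo, ms = sum(obs) / n, sum(sim) / n
    so = math.sqrt(sum((o - mo) ** 2 for o in obs) / n)
    ss = math.sqrt(sum((s - ms) ** 2 for s in sim) / n)
    r = sum((o - mo) * (s - ms) for o, s in zip(obs, sim)) / (n * so * ss)
    alpha, beta = ss / so, ms / mo
    return 1.0 - math.sqrt((r - 1.0) ** 2 + (alpha - 1.0) ** 2 + (beta - 1.0) ** 2)

# Hypothetical daily discharge (m^3/s): observed vs. simulated.
obs = [5.0, 7.0, 12.0, 30.0, 22.0, 14.0, 9.0, 6.0]
sim = [6.0, 6.5, 11.0, 26.0, 24.0, 15.0, 8.0, 7.0]
scores = (nse(obs, sim), kge(obs, sim))
```

Evaluating these over sub-periods (hydrological years, seasons) rather than the whole record is exactly the kind of sliced diagnostic the package visualizes.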

  13. An updated summary of MATHEW/ADPIC model evaluation studies

    SciTech Connect

    Foster, K.T.; Dickerson, M.H.

    1990-05-01

    This paper summarizes the major model evaluation studies conducted for the MATHEW/ADPIC atmospheric transport and diffusion models used by the US Department of Energy's Atmospheric Release Advisory Capability. These studies have taken place over the last 15 years and involve field tracer releases influenced by a variety of meteorological and topographical conditions. Neutrally buoyant tracers released as both surface and elevated point sources, as well as material dispersed by explosive, thermally buoyant release mechanisms, have been studied. Results from these studies show that the MATHEW/ADPIC models estimate the tracer air concentrations to within a factor of two of the measured values 20% to 50% of the time, and within a factor of five of the measurements 35% to 85% of the time, depending on the complexity of the meteorology and terrain and the release height of the tracer. Comparisons of model estimates to peak downwind deposition and air concentration measurements from explosive releases are shown to be generally within a factor of two to three. 24 refs., 14 figs., 3 tabs.
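    The factor-of-two and factor-of-five skill measures quoted above can be computed as follows, with hypothetical concentration pairs:

```python
def within_factor(pred, meas, factor):
    """Fraction of prediction/measurement pairs agreeing within a given factor
    (the FAC2/FAC5-style skill measure used in transport model evaluation)."""
    hits = sum(1 for p, m in zip(pred, meas)
               if m / factor <= p <= m * factor)
    return hits / len(pred)

# Hypothetical tracer air concentrations (arbitrary units).
measured  = [1.0, 2.0, 4.0, 8.0, 0.5, 3.0]
predicted = [1.5, 1.1, 9.0, 7.0, 2.0, 2.8]
fac2 = within_factor(predicted, measured, 2.0)  # fraction within a factor of two
fac5 = within_factor(predicted, measured, 5.0)  # fraction within a factor of five
```

By construction the factor-of-five fraction is never smaller than the factor-of-two fraction, matching the ordering of the percentages quoted in the abstract.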

  14. An 8-Stage Model for Evaluating the Tennis Serve

    PubMed Central

    Kovacs, Mark; Ellenbecker, Todd

    2011-01-01

    Background: The tennis serve is a complex stroke characterized by a series of segmental rotations involving the entire kinetic chain. Many overhead athletes use a basic 6-stage throwing model; however, the tennis serve does provide some differences. Evidence Acquisition: To support the present 8-stage descriptive model, data were gathered from PubMed and SPORTDiscus databases using keywords tennis and serve for publications between 1980 and 2010. Results: An 8-stage model of analysis for the tennis serve that includes 3 distinct phases—preparation, acceleration, and follow-through—provides a more tennis-specific analysis than that previously presented in the clinical tennis literature. When a serve is evaluated, the total body perspective is just as important as the individual segments alone. Conclusion: The 8-stage model provides a more in-depth analysis that should be utilized in all tennis players to help better understand areas of weakness, potential areas of injury, as well as components that can be improved for greater performance. PMID:23016050

  15. Evaluation of data driven models for river suspended sediment concentration modeling

    NASA Astrophysics Data System (ADS)

    Zounemat-Kermani, Mohammad; Kişi, Özgür; Adamowski, Jan; Ramezani-Charmahineh, Abdollah

    2016-04-01

    Using eight-year data series from hydrometric stations located in Arkansas, Delaware and Idaho (USA), the ability of artificial neural network (ANN) and support vector regression (SVR) models to forecast/estimate daily suspended sediment concentrations ([SS]d) was evaluated and compared to that of traditional multiple linear regression (MLR) and sediment rating curve (SRC) models. Three different ANN model algorithms were tested [gradient descent, conjugate gradient and Broyden-Fletcher-Goldfarb-Shanno (BFGS)], along with four different SVR model kernels [linear, polynomial, sigmoid and Radial Basis Function (RBF)]. The reliability of the applied models was evaluated based on the statistical performance criteria of root mean square error (RMSE), Pearson's correlation coefficient (PCC) and Nash-Sutcliffe model efficiency coefficient (NSE). Based on RMSE values, and averaged across the three hydrometric stations, the ANN and SVR models showed, respectively, 23% and 18% improvements in forecasting and 18% and 15% improvements in estimation over traditional models. The use of the BFGS training algorithm for ANN models, and of the RBF kernel function for SVR models, is recommended as a useful option for simulating hydrological phenomena.
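    The three performance criteria named above (RMSE, PCC, NSE) can each be computed in a few lines; the concentration arrays below are invented stand-ins for station data, not values from the study:

```python
import numpy as np

def rmse(obs, sim):
    """Root mean square error."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

def pcc(obs, sim):
    """Pearson's correlation coefficient."""
    return float(np.corrcoef(obs, sim)[0, 1])

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 matches the mean
    of the observations, negative is worse than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(1.0 - np.sum((sim - obs) ** 2)
                 / np.sum((obs - obs.mean()) ** 2))

# Hypothetical daily suspended-sediment concentrations (mg/L)
obs = np.array([110.0, 95.0, 130.0, 160.0, 120.0, 90.0])
sim = np.array([105.0, 100.0, 125.0, 150.0, 128.0, 97.0])
print(rmse(obs, sim), pcc(obs, sim), nse(obs, sim))
```

    Averaging RMSE across stations, as the study does, then reduces each model to a single comparable score.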

  16. Evaluating models in systems ergonomics with a taxonomy of model attributes.

    PubMed

    Sheridan, Thomas B

    2014-01-01

    A model, as the term is used here, is a way of representing knowledge for the purpose of thinking, communicating to others, or implementing decisions as in system analysis, design or operations. It can be said that to the extent that we can model some aspect of nature we understand it. Models can range from fleeting mental images to highly refined mathematical equations of computer algorithms that precisely predict physical events. In constructing and evaluating models of ergonomic systems it is important that we consider the attributes of our models in relation to our objectives and what we can reasonably aspire to. To that end this paper proposes a taxonomy of models in terms of six independent attributes: applicability to observables, dimensionality, metricity, robustness, social penetration and conciseness. Each of these attributes is defined along with the meaning of different levels of each. The attribute taxonomy may be used to evaluate the quality of a model. Examples of system ergonomics models having different combinations of attributes at different levels are provided. Philosophical caveats regarding models in system ergonomics are discussed, as well as the relation to scientific method. PMID:23615659

  17. Error apportionment for atmospheric chemistry-transport models - a new approach to model evaluation

    NASA Astrophysics Data System (ADS)

    Solazzo, Efisio; Galmarini, Stefano

    2016-05-01

    In this study, methods are proposed to diagnose the causes of errors in air quality (AQ) modelling systems. We investigate the deviation between modelled and observed time series of surface ozone through a revised formulation for breaking down the mean square error (MSE) into bias, variance and the minimum achievable MSE (mMSE). The bias measures the accuracy and implies the existence of systematic errors and poor representation of data complexity, the variance measures the precision and provides an estimate of the variability of the modelling results in relation to the observed data, and the mMSE reflects unsystematic errors and provides a measure of the associativity between the modelled and the observed fields through the correlation coefficient. Each of the error components is analysed independently and apportioned to resolved processes based on the corresponding timescale (long scale, synoptic, diurnal, and intra-day) and as a function of model complexity. The apportionment of the error is applied to the AQMEII (Air Quality Model Evaluation International Initiative) group of models, which embrace the majority of regional AQ modelling systems currently used in Europe and North America. The proposed technique has proven to be a compact estimator of the operational metrics commonly used for model evaluation (bias, variance, and correlation coefficient), and has the further benefit of apportioning the error to the originating timescale, thus allowing for a clearer diagnosis of the processes that caused the error.
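    The bias/variance/mMSE breakdown described above can be illustrated with the standard decomposition MSE = bias² + (σm − σo)² + 2σmσo(1 − r); the paper's revised formulation may differ in detail, and the ozone series here is invented:

```python
import numpy as np

def mse_apportionment(obs, mod):
    """Split the MSE into a squared bias, a variance term, and a
    covariance-driven minimum term (zero when correlation r = 1)."""
    obs, mod = np.asarray(obs, float), np.asarray(mod, float)
    bias2 = (mod.mean() - obs.mean()) ** 2          # systematic error
    sm, so = mod.std(), obs.std()                   # population std devs
    r = np.corrcoef(obs, mod)[0, 1]
    variance = (sm - so) ** 2                       # amplitude mismatch
    mmse = 2.0 * sm * so * (1.0 - r)                # unsystematic error
    return bias2, variance, mmse

# Hypothetical hourly surface-ozone values (ppb)
obs = np.array([30.0, 42.0, 55.0, 61.0, 48.0, 35.0])
mod = np.array([36.0, 45.0, 50.0, 66.0, 52.0, 40.0])
b, v, m = mse_apportionment(obs, mod)
mse = float(np.mean((mod - obs) ** 2))
print(b + v + m, mse)   # the three parts sum exactly to the MSE
```

    Applying the same split per timescale (after filtering the series into long-term, synoptic, diurnal and intra-day components) gives the apportionment the abstract describes.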

  18. Sustainable deforestation evaluation model and system dynamics analysis.

    PubMed

    Feng, Huirong; Lim, C W; Chen, Liqun; Zhou, Xinnian; Zhou, Chengjun; Lin, Yi

    2014-01-01

    The current study used the improved fuzzy analytic hierarchy process to construct a sustainable deforestation development evaluation system and evaluation model, which has refined a diversified system to evaluate the theory of sustainable deforestation development. Leveraging the visual image of the system dynamics causal and power flow diagram, we illustrated here that sustainable forestry development is a complex system that encompasses the interaction and dynamic development of ecology, economy, and society and has reflected the time dynamic effect of sustainable forestry development from the three combined effects. We compared experimental programs to prove the direct and indirect impacts of the ecological, economic, and social effects of the corresponding deforestation techniques and fully reflected the importance of developing scientific and rational ecological harvesting and transportation technologies. Experimental and theoretical results illustrated that light cableway skidding is an ecoskidding method that is beneficial for the sustainable development of resources, the environment, the economy, and society and forecasted the broad potential applications of light cableway skidding in timber production technology. Furthermore, we discussed the sustainable development countermeasures of forest ecosystems from the aspects of causality, interaction, and harmony. PMID:25254225

  20. Sustainable Deforestation Evaluation Model and System Dynamics Analysis

    PubMed Central

    Feng, Huirong; Lim, C. W.; Chen, Liqun; Zhou, Xinnian; Zhou, Chengjun; Lin, Yi

    2014-01-01

    The current study used the improved fuzzy analytic hierarchy process to construct a sustainable deforestation development evaluation system and evaluation model, which has refined a diversified system to evaluate the theory of sustainable deforestation development. Leveraging the visual image of the system dynamics causal and power flow diagram, we illustrated here that sustainable forestry development is a complex system that encompasses the interaction and dynamic development of ecology, economy, and society and has reflected the time dynamic effect of sustainable forestry development from the three combined effects. We compared experimental programs to prove the direct and indirect impacts of the ecological, economic, and social effects of the corresponding deforestation techniques and fully reflected the importance of developing scientific and rational ecological harvesting and transportation technologies. Experimental and theoretical results illustrated that light cableway skidding is an ecoskidding method that is beneficial for the sustainable development of resources, the environment, the economy, and society and forecasted the broad potential applications of light cableway skidding in timber production technology. Furthermore, we discussed the sustainable development countermeasures of forest ecosystems from the aspects of causality, interaction, and harmony. PMID:25254225

  1. Evaluation of atmospheric chemical models using aircraft data (Invited)

    NASA Astrophysics Data System (ADS)

    Freeman, S.; Grossberg, N.; Pierce, R.; Lee, P.; Ngan, F.; Yates, E. L.; Iraci, L. T.; Lefer, B. L.

    2013-12-01

    Air quality prediction is an important and growing field, as the adverse health effects of ozone (O3) are becoming more important to the general public. Two atmospheric chemical models, the Realtime Air Quality Modeling System (RAQMS) and the Community Multiscale Air Quality modeling system (CMAQ), are evaluated during NASA's Student Airborne Research Project (SARP) and the NASA Alpha Jet Atmospheric eXperiment (AJAX) flights. CO, O3, and NOx data simulated by the models are interpolated to both the SARP and AJAX flight tracks, using inverse distance weighting in space and linear interpolation in time, and compared to the CO, O3, and NOx observations at those points. Results for the seven flights included show moderate error in O3 during the flights, with RAQMS having a high O3 bias (+15.7 ppbv average) above 6 km and a low O3 bias (-17.5 ppbv average) below 4 km. CMAQ was found to have a low O3 bias (-13.0 ppbv average) everywhere. Additionally, little bias (-5.36% RAQMS, -11.8% CMAQ) in the CO data was observed, with the exception of a wildfire smoke plume that was flown through on one SARP flight, as CMAQ lacks any wildfire sources and RAQMS resolution is too coarse to resolve narrow plumes. This indicates improvement in emissions inventories compared to previous studies. CMAQ also incorrectly predicted a NOx plume by spuriously advecting it vertically from the surface, which caused NOx titration to occur and limited the production of ozone. This study shows that these models perform reasonably well in most conditions; however, more work must be done to assimilate wildfires, improve emissions inventories, and improve meteorological forecasts for the models.
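    The model-to-flight-track interpolation described above (inverse distance weighting in space, linear in time) can be sketched as follows; the grid nodes, O3 values and track point are hypothetical, not data from the flights:

```python
import numpy as np

def idw(points, values, target, power=2.0):
    """Inverse-distance-weighted average of model grid values at one
    flight-track location (illustrative 2-D version)."""
    d = np.linalg.norm(points - target, axis=1)
    if np.any(d == 0):                  # exact hit on a grid node
        return values[np.argmin(d)]
    w = 1.0 / d ** power
    return float(np.sum(w * values) / np.sum(w))

def interp_time(t, t0, t1, v0, v1):
    """Linear interpolation between two model output times."""
    f = (t - t0) / (t1 - t0)
    return (1.0 - f) * v0 + f * v1

# Four surrounding grid nodes (x, y) and their O3 values at two output times
nodes = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
o3_t0 = np.array([40.0, 44.0, 42.0, 46.0])
o3_t1 = np.array([42.0, 46.0, 44.0, 48.0])
track_pt = np.array([0.3, 0.6])         # aircraft position at time 0.5
v = interp_time(0.5, 0.0, 1.0,
                idw(nodes, o3_t0, track_pt),
                idw(nodes, o3_t1, track_pt))
print(v)
```

    The interpolated value is then differenced against the in-situ observation at the same point to build the bias statistics quoted in the abstract.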

  2. Evaluating wind extremes in CMIP5 climate models

    NASA Astrophysics Data System (ADS)

    Kumar, Devashish; Mishra, Vimal; Ganguly, Auroop R.

    2015-07-01

    Wind extremes have consequences for renewable energy sectors, critical infrastructures, coastal ecosystems, and the insurance industry. Considerable debates remain regarding the impacts of climate change on wind extremes. While climate models have occasionally shown increases in regional wind extremes, a decline in the magnitude of mean and extreme near-surface wind speeds has been recently reported over most regions of the Northern Hemisphere using observed data. Previous studies of wind extremes under climate change have focused on selected regions and employed outputs from regional climate models (RCMs). However, RCMs ultimately rely on the outputs of global circulation models (GCMs), and the value-addition from the former over the latter has been questioned. Regional model runs rarely employ the full suite of GCM ensembles, and hence may not be able to encapsulate the most likely projections or their variability. Here we evaluate the performance of the latest generation of GCMs, the Coupled Model Intercomparison Project phase 5 (CMIP5), in simulating extreme winds. We find that the multimodel ensemble (MME) mean captures the spatial variability of annual maximum wind speeds over most regions except over mountainous terrain. However, the historical temporal trends in annual maximum wind speeds found in the reanalysis data, ERA-Interim, are not well represented in the GCMs. The historical trends in extreme winds from GCMs are statistically not significant over most regions. The MME also reproduces the spatial patterns of extreme winds for 25-100 year return periods. The projected extreme winds from GCMs exhibit trends that are statistically less significant than those of the historical reference period.
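    Return levels such as the 25-100 year values mentioned above are typically obtained by fitting a generalized extreme value (GEV) distribution to annual maxima; a minimal sketch with synthetic wind-speed data (the GEV parameters below are invented, not from CMIP5):

```python
import numpy as np
from scipy.stats import genextreme

# Synthetic annual-maximum wind speeds (m/s) standing in for one grid cell
annmax = genextreme.rvs(c=-0.1, loc=18.0, scale=2.5, size=60, random_state=0)

# Fit a GEV to the annual maxima and read off T-year return levels:
# the T-year level is the quantile with exceedance probability 1/T
c, loc, scale = genextreme.fit(annmax)
levels = {T: float(genextreme.ppf(1.0 - 1.0 / T, c, loc=loc, scale=scale))
          for T in (25, 50, 100)}
print(levels)   # return level grows with return period
```

    Comparing maps of such fitted return levels between a GCM ensemble and a reanalysis is one way to evaluate the spatial patterns discussed in the abstract.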

  3. Evaluation Digital Elevation Model Generated by Synthetic Aperture Radar Data

    NASA Astrophysics Data System (ADS)

    Makineci, H. B.; Karabörk, H.

    2016-06-01

    A digital elevation model (DEM), representing the physical topography of the earth, is a three-dimensional digital model obtained from surface elevations using an appropriate interpolation method. DEMs are used in many areas, such as management of natural resources, engineering and infrastructure projects, disaster and risk analysis, archaeology, security, aviation, forestry, energy, topographic mapping, landslide and flood analysis, and Geographic Information Systems (GIS). DEMs, which are fundamental components of cartography, can be produced by many methods; in general they are obtained by terrestrial survey or by digitizing existing maps. Today, with improving technology, DEM data are generated by processing stereo optical satellite images, radar images (radargrammetry, interferometry) and lidar data using remote sensing and photogrammetric techniques. Radar technology, one of the fundamental components of remote sensing, is now highly advanced and is used ever more frequently in various fields; determining the shape of topography and creating digital elevation models are among the foremost of these applications. This work aims to evaluate the quality of, and the differences between, a DEM generated from a Sentinel-1A SAR image (C band, Interferometric Wide Swath imaging mode), provided by the European Space Agency (ESA), and DTED-2 (Digital Terrain Elevation Data). The evaluation applies an RMS statistical method to assess the precision of the data. Results show that the variance of the points decreases sharply from mountainous areas to flat areas.

  4. Addressing Conceptual Model Uncertainty in the Evaluation of Model Prediction Errors

    NASA Astrophysics Data System (ADS)

    Carrera, J.; Pool, M.

    2014-12-01

    Model predictions are uncertain because of errors in model parameters, future forcing terms, and model concepts. The latter remain the largest and most difficult to assess source of uncertainty in long term model predictions. We first review existing methods to evaluate conceptual model uncertainty. We argue that they are highly sensitive to the ingenuity of the modeler, in the sense that they rely on the modeler's ability to propose alternative model concepts. Worse, we find that the standard practice of stochastic methods leads to poor, potentially biased and often too optimistic, estimation of actual model errors. This is bad news because stochastic methods are purported to properly represent uncertainty. We contend that the problem does not lie in the stochastic approach itself, but in the way it is applied. Specifically, stochastic inversion methodologies, which demand quantitative information, tend to ignore geological understanding, which is conceptually rich. We illustrate some of these problems with the application to the Mar del Plata aquifer, where extensive data are available for nearly a century. Geologically based models, where spatial variability is handled through zonation, yield calibration fits similar to geostatistically based models, but much better predictions. In fact, the appearance of the stochastic T fields is similar to the geologically based models only in areas with high density of data. We take this finding to illustrate the ability of stochastic models to accommodate many data, but also, ironically, their inability to address conceptual model uncertainty. In fact, stochastic model realizations tend to be too close to the "most likely" one (i.e., they do not really realize the full conceptual uncertainty). The second part of the presentation is devoted to argue that acknowledging model uncertainty may lead to qualitatively different decisions than just working with "most likely" model predictions. Therefore, efforts should concentrate on

  5. Development of Conceptual Benchmark Models to Evaluate Complex Hydrologic Model Calibration in Managed Basins Using Python

    NASA Astrophysics Data System (ADS)

    Hughes, J. D.; White, J.

    2013-12-01

    For many numerical hydrologic models it is a challenge to quantitatively demonstrate that complex models are preferable to simpler models. Typically, a decision is made to develop and calibrate a complex model at the beginning of a study. The value of selecting a complex model over simpler models is commonly inferred from use of a model with fewer simplifications of the governing equations because it can be time consuming to develop another numerical code with data processing and parameter estimation functionality. High-level programming languages like Python can greatly reduce the effort required to develop and calibrate simple models that can be used to quantitatively demonstrate the increased value of a complex model. We have developed and calibrated a spatially-distributed surface-water/groundwater flow model for managed basins in southeast Florida, USA, to (1) evaluate the effect of municipal groundwater pumpage on surface-water/groundwater exchange, (2) investigate how the study area will respond to sea-level rise, and (3) explore combinations of these forcing functions. To demonstrate the increased value of this complex model, we developed a two-parameter conceptual-benchmark-discharge model for each basin in the study area. The conceptual-benchmark-discharge model includes seasonal scaling and lag parameters and is driven by basin rainfall. The conceptual-benchmark-discharge models were developed in the Python programming language and used weekly rainfall data. Calibration was implemented with the Broyden-Fletcher-Goldfarb-Shanno method available in the Scientific Python (SciPy) library. Normalized benchmark efficiencies calculated using output from the complex model and the corresponding conceptual-benchmark-discharge model indicate that the complex model has more explanatory power than the simple model driven only by rainfall.
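    The calibration workflow described above can be sketched with SciPy's BFGS implementation; the model form (a scaling factor and a one-week lag weight) and the rainfall/discharge numbers are invented stand-ins, since the abstract gives only the parameter count and the optimizer:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical weekly basin rainfall (mm)
rain = np.array([10., 0., 25., 5., 40., 15., 0., 30., 20., 10.])

def simulate(params, rain):
    """Two-parameter benchmark discharge model: a scale factor and a
    one-week lag weight (a stand-in for the paper's seasonal-scaling/
    lag formulation, whose exact form the abstract does not give)."""
    scale, w = params
    lagged = np.concatenate(([rain[0]], rain[:-1]))
    return scale * ((1.0 - w) * rain + w * lagged)

# Synthetic "observed" discharge: known parameters plus small noise
truth = simulate([0.7, 0.3], rain)
obs = truth + np.array([0.5, -0.3, 0.2, -0.4, 0.1,
                        0.3, -0.2, 0.4, -0.1, 0.2])

def sse(params):
    """Sum of squared errors, the objective BFGS minimizes."""
    return float(np.sum((simulate(params, rain) - obs) ** 2))

fit = minimize(sse, x0=[1.0, 0.5], method="BFGS")
print(fit.x)   # recovered (scale, lag weight), close to (0.7, 0.3)
```

    Output from a calibrated benchmark like this is what the normalized benchmark efficiencies compare the complex model against.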

  6. In-vitro model for evaluation of pulse oximetry

    NASA Astrophysics Data System (ADS)

    Vegfors, Magnus; Lindberg, Lars-Goeran; Lennmarken, Claes; Oberg, P. Ake

    1991-06-01

    An in vitro model with blood circulating in a silicone tubing system and including an artificial arterial bed is an important tool for evaluation of the pulse oximetry technique. The oxygen saturation was measured on an artificial finger using a pulse oximeter (SpO2) and on blood samples using a hemoximeter (SaO2). Measurements were performed at different blood flows and at different blood hematocrits. An increase in steady as well as in pulsatile blood flow was followed by an increase in pulse oximeter readings and a better agreement between SpO2 and SaO2 readings. After diluting the blood with normal saline (decreased hematocrit) the agreement was further improved. These results indicate that the pulse oximeter signal is related to blood hematocrit and the velocity of blood. The flow-related dependence of SpO2 was also evaluated in a human model. These results provided evidence that the pulse oximeter signal is dependent on vascular changes.

  7. Evaluation of field development plans using 3-D reservoir modelling

    SciTech Connect

    Seifert, D.; Lewis, J.J.M.; Newbery, J.D.H.

    1997-08-01

    Three-dimensional reservoir modelling has become an accepted tool in reservoir description and is used for various purposes, such as reservoir performance prediction or integration and visualisation of data. In this case study, a small Northern North Sea turbiditic reservoir was to be developed with a line drive strategy utilising a series of horizontal producer and injector pairs, oriented north-south. This development plan was to be evaluated and the expected outcome of the wells was to be assessed and risked. Detailed analyses of core, well log and analogue data have led to the development of two geological "end member" scenarios. Both scenarios have been stochastically modelled using the Sequential Indicator Simulation method. The resulting equiprobable realisations have been subjected to detailed statistical well placement optimisation techniques. Based upon bivariate statistical evaluation of more than 1000 numerical well trajectories for each of the two scenarios, it was found that the wells' inclinations and lengths had a great impact on their success, whereas the azimuth was found to have only a minor impact. After integration of the above results, the actual well paths were redesigned to meet external drilling constraints, resulting in substantial reductions in drilling time and costs.

  8. Evaluation Between Existing and Improved CCF Modeling Using the NRC SPAR Models

    SciTech Connect

    James K. Knudsen

    2010-06-01

    The NRC SPAR models currently employ the alpha factor common cause failure (CCF) methodology and model CCF for a group of redundant components as a single "rolled-up" basic event. These SPAR models will be updated to employ a more computationally intensive and accurate approach by expanding the CCF basic events for all active components to include all terms that appear in the Basic Parameter Model (BPM). A discussion is provided to detail the differences between the rolled-up common cause group (CCG) and expanded BPM adjustment concepts based on differences in core damage frequency and individual component importance measures. Lastly, a hypothetical condition is evaluated with a SPAR model to show the difference in results between the current adjustment method (rolled-up CCF events) and the newer method employing all of the expanded terms in the BPM. The event evaluation on the SPAR model employing the expanded terms will be solved using the graphical evaluation module (GEM) and the proposed method discussed in Reference 1.
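    The expansion from a total component failure probability into individual BPM terms is commonly done with the alpha-factor model; a sketch of the non-staggered-testing form (the SPAR models' exact treatment may differ, and the alpha values below are invented for illustration):

```python
from math import comb

def bpm_terms(alphas, qt):
    """Expand a component's total failure probability qt into Basic
    Parameter Model terms Q_k (probability that exactly k of the m
    components fail together), using the alpha-factor formula for
    non-staggered testing: Q_k = k / C(m-1, k-1) * alpha_k/alpha_t * qt."""
    m = len(alphas)
    alpha_t = sum(k * a for k, a in enumerate(alphas, start=1))
    return [k / comb(m - 1, k - 1) * alphas[k - 1] / alpha_t * qt
            for k in range(1, m + 1)]

# Illustrative alpha factors for a 3-component redundant group
alphas = [0.95, 0.03, 0.02]          # alpha_1..alpha_3, sum to 1
q = bpm_terms(alphas, qt=1.0e-3)
print(q)   # [Q1, Q2, Q3]: independent, double-CCF and triple-CCF terms
```

    Each Q_k then becomes a separate basic event in the expanded fault tree, rather than one rolled-up CCF event.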

  9. Use of Numerical Groundwater Modeling to Evaluate Uncertainty in Conceptual Models of Recharge and Hydrostratigraphy

    SciTech Connect

    Pohlmann, Karl; Ye, Ming; Pohll, Greg; Chapman, Jenny

    2007-01-19

    Numerical groundwater models are based on conceptualizations of hydrogeologic systems that are by necessity developed from limited information and therefore are simplifications of real conditions. Each aspect (e.g. recharge, hydrostratigraphy, boundary conditions) of the groundwater model is often based on a single conceptual model that is considered to be the best representation given the available data. However, the very nature of their construction means that each conceptual model is inherently uncertain and the available information may be insufficient to refute plausible alternatives, thereby raising the possibility that the flow model is underestimating overall uncertainty. In this study we use the Death Valley Regional Flow System model developed by the U.S. Geological Survey as a framework to predict regional groundwater flow southward into Yucca Flat on the Nevada Test Site. An important aspect of our work is to evaluate the uncertainty associated with multiple conceptual models of groundwater recharge and subsurface hydrostratigraphy and quantify the impacts of this uncertainty on model predictions. In our study, conceptual model uncertainty arises from two sources: (1) alternative interpretations of the hydrostratigraphy in the northern portion of Yucca Flat where, owing to sparse data, the hydrogeologic system can be conceptualized in different ways, and (2) uncertainty in groundwater recharge in the region as evidenced by the existence of several independent approaches for estimating this aspect of the hydrologic system. The composite prediction of groundwater flow is derived from the regional model that formally incorporates the uncertainty in these alternative input models using the maximum likelihood Bayesian model averaging method. An assessment of the joint predictive uncertainty of the input conceptual models is also produced. During this process, predictions of the alternative models are weighted by model probability, which is the degree of
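    The maximum likelihood Bayesian model averaging step described above can be sketched as weighting each alternative conceptual model by its posterior probability; the KIC scores, priors and predictions below are invented for illustration:

```python
import numpy as np

def mlbma_weights(kic, priors=None):
    """Posterior model probabilities for maximum likelihood Bayesian
    model averaging: weights proportional to prior * exp(-dKIC/2),
    where dKIC is each model's KIC (or BIC) minus the minimum."""
    kic = np.asarray(kic, float)
    if priors is None:
        priors = np.ones_like(kic) / len(kic)   # uniform prior
    d = kic - kic.min()
    w = priors * np.exp(-d / 2.0)
    return w / w.sum()

# Four recharge/hydrostratigraphy alternatives with hypothetical KIC scores
w = mlbma_weights([110.2, 112.8, 111.5, 118.0])
preds = np.array([3.1, 2.6, 2.9, 4.0])      # each model's flux prediction
print(w, float(np.sum(w * preds)))          # weights and averaged prediction
```

    The spread of the individual predictions around the weighted average is one ingredient of the joint predictive uncertainty the abstract mentions.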

  10. A Comprehensive and Systematic Model of User Evaluation of Web Search Engines: II. An Evaluation by Undergraduates.

    ERIC Educational Resources Information Center

    Su, Louise T.

    2003-01-01

    Presents an application of a model of user evaluation of four major Web search engines (Alta Vista, Excite, Infoseek, and Lycos) by undergraduates. Evaluation was based on 16 performance measures representing five evaluation criteria-relevance, efficiency, utility, user satisfaction, and connectivity. Content analysis of verbal data identified a…

  11. Evaluation of Turbulence-Model Performance in Jet Flows

    NASA Technical Reports Server (NTRS)

    Woodruff, S. L.; Seiner, J. M.; Hussaini, M. Y.; Erlebacher, G.

    2001-01-01

    The importance of reducing jet noise in both commercial and military aircraft applications has made jet acoustics a significant area of research. A technique for jet noise prediction commonly employed in practice is the MGB approach, based on the Lighthill acoustic analogy. This technique requires as aerodynamic input mean flow quantities and turbulence quantities like the kinetic energy and the dissipation. The purpose of the present paper is to assess existing capabilities for predicting these aerodynamic inputs. Two modern Navier-Stokes flow solvers, coupled with several modern turbulence models, are evaluated by comparison with experiment for their ability to predict mean flow properties in a supersonic jet plume. Potential weaknesses are identified for further investigation. Another comparison with similar intent is discussed by Barber et al. The ultimate goal of this research is to develop a reliable flow solver applicable to the low-noise, propulsion-efficient, nozzle exhaust systems being developed in NASA focused programs. These programs address a broad range of complex nozzle geometries operating in high temperature, compressible, flows. Seiner et al. previously discussed the jet configuration examined here. This convergent-divergent nozzle with an exit diameter of 3.6 inches was designed for an exhaust Mach number of 2.0 and a total temperature of 1680 F. The acoustic and aerodynamic data reported by Seiner et al. covered a range of jet total temperatures from 104 F to 2200 F at the fully-expanded nozzle pressure ratio. The aerodynamic data included centerline mean velocity and total temperature profiles. Computations were performed independently with two computational fluid dynamics (CFD) codes, ISAAC and PAB3D. Turbulence models employed include the k-epsilon model, the Gatski-Speziale algebraic-stress model and the Girimaji model, with and without the Sarkar compressibility correction. Centerline values of mean velocity and mean temperature are

  12. Development and Evaluation of Land-Use Regression Models Using Modeled Air Quality Concentrations

    EPA Science Inventory

    Abstract Land-use regression (LUR) models have emerged as a preferred methodology for estimating individual exposure to ambient air pollution in epidemiologic studies in absence of subject-specific measurements. Although there is a growing literature focused on LUR evaluation, fu...

  13. Evaluating biomarkers to model cancer risk post cosmic ray exposure

    NASA Astrophysics Data System (ADS)

    Sridharan, Deepa M.; Asaithamby, Aroumougame; Blattnig, Steve R.; Costes, Sylvain V.; Doetsch, Paul W.; Dynan, William S.; Hahnfeldt, Philip; Hlatky, Lynn; Kidane, Yared; Kronenberg, Amy; Naidu, Mamta D.; Peterson, Leif E.; Plante, Ianik; Ponomarev, Artem L.; Saha, Janapriya; Snijders, Antoine M.; Srinivasan, Kalayarasan; Tang, Jonathan; Werner, Erica; Pluth, Janice M.

    2016-06-01

    Robust predictive models are essential to manage the risk of radiation-induced carcinogenesis. Chronic exposure to cosmic rays in the context of the complex deep space environment may place astronauts at high cancer risk. To estimate this risk, it is critical to understand how radiation-induced cellular stress impacts cell fate decisions and how this in turn alters the risk of carcinogenesis. Exposure to the heavy ion component of cosmic rays triggers a multitude of cellular changes, depending on the rate of exposure, the type of damage incurred and individual susceptibility. Heterogeneity in dose, dose rate, radiation quality, energy and particle flux contribute to the complexity of risk assessment. To unravel the impact of each of these factors, it is critical to identify sensitive biomarkers that can serve as inputs for robust modeling of individual risk of cancer or other long-term health consequences of exposure. Limitations in sensitivity of biomarkers to dose and dose rate, and the complexity of longitudinal monitoring, are some of the factors that increase uncertainties in the output from risk prediction models. Here, we critically evaluate candidate early and late biomarkers of radiation exposure and discuss their usefulness in predicting cell fate decisions. Some of the biomarkers we have reviewed include complex clustered DNA damage, persistent DNA repair foci, reactive oxygen species, chromosome aberrations and inflammation. Other biomarkers discussed, often assayed for at longer points post exposure, include mutations, chromosome aberrations, reactive oxygen species and telomere length changes. We discuss the relationship of biomarkers to different potential cell fates, including proliferation, apoptosis, senescence, and loss of stemness, which can propagate genomic instability and alter tissue composition and the underlying mRNA signatures that contribute to cell fate decisions. Our goal is to highlight factors that are important in choosing

  14. Evaluating biomarkers to model cancer risk post cosmic ray exposure.

    PubMed

    Sridharan, Deepa M; Asaithamby, Aroumougame; Blattnig, Steve R; Costes, Sylvain V; Doetsch, Paul W; Dynan, William S; Hahnfeldt, Philip; Hlatky, Lynn; Kidane, Yared; Kronenberg, Amy; Naidu, Mamta D; Peterson, Leif E; Plante, Ianik; Ponomarev, Artem L; Saha, Janapriya; Snijders, Antoine M; Srinivasan, Kalayarasan; Tang, Jonathan; Werner, Erica; Pluth, Janice M

    2016-06-01

    Robust predictive models are essential to manage the risk of radiation-induced carcinogenesis. Chronic exposure to cosmic rays in the context of the complex deep space environment may place astronauts at high cancer risk. To estimate this risk, it is critical to understand how radiation-induced cellular stress impacts cell fate decisions and how this in turn alters the risk of carcinogenesis. Exposure to the heavy ion component of cosmic rays triggers a multitude of cellular changes, depending on the rate of exposure, the type of damage incurred and individual susceptibility. Heterogeneity in dose, dose rate, radiation quality, energy and particle flux contribute to the complexity of risk assessment. To unravel the impact of each of these factors, it is critical to identify sensitive biomarkers that can serve as inputs for robust modeling of individual risk of cancer or other long-term health consequences of exposure. Limitations in sensitivity of biomarkers to dose and dose rate, and the complexity of longitudinal monitoring, are some of the factors that increase uncertainties in the output from risk prediction models. Here, we critically evaluate candidate early and late biomarkers of radiation exposure and discuss their usefulness in predicting cell fate decisions. Some of the biomarkers we have reviewed include complex clustered DNA damage, persistent DNA repair foci, reactive oxygen species, chromosome aberrations and inflammation. Other biomarkers discussed, often assayed for at longer points post exposure, include mutations, chromosome aberrations, reactive oxygen species and telomere length changes. We discuss the relationship of biomarkers to different potential cell fates, including proliferation, apoptosis, senescence, and loss of stemness, which can propagate genomic instability and alter tissue composition and the underlying mRNA signatures that contribute to cell fate decisions. Our goal is to highlight factors that are important in choosing

  15. New Methods for Air Quality Model Evaluation with Satellite Data

    NASA Astrophysics Data System (ADS)

    Holloway, T.; Harkey, M.

    2015-12-01

    Despite major advances in the ability of satellites to detect gases and aerosols in the atmosphere, there remains significant, untapped potential to apply space-based data to air quality regulatory applications. Here, we showcase research findings geared toward increasing the relevance of satellite data for operational air quality management, focused on model evaluation. Particular emphasis is given to nitrogen dioxide (NO2) and formaldehyde (HCHO) from the Ozone Monitoring Instrument (OMI) aboard the NASA Aura satellite, and to evaluation of simulations from the EPA Community Multiscale Air Quality (CMAQ) model. This work is part of the NASA Air Quality Applied Sciences Team (AQAST) and is motivated by ongoing dialog with state and federal air quality management agencies. We present the response of satellite-derived NO2 to meteorological conditions, satellite-derived HCHO:NO2 ratios as an indicator of ozone production regime, and the ability of models to capture these sensitivities over the continental U.S. In the case of NO2-weather sensitivities, we find boundary layer height, wind speed, temperature, and relative humidity to be the most important variables in determining near-surface NO2 variability. CMAQ agreed with relationships observed in satellite data, as well as in ground-based data, over most regions. However, we find that the southwest U.S. is a problem area for CMAQ, where modeled NO2 responses to insolation, boundary layer height, and other variables are at odds with the observations. Our analyses utilize software developed by our team, the Wisconsin Horizontal Interpolation Program for Satellites (WHIPS): a free, open-source program designed to make satellite-derived air quality data more usable. WHIPS interpolates level 2 satellite retrievals onto a user-defined fixed grid, in effect creating a custom-gridded level 3 satellite product. Currently, WHIPS can process the following data products: OMI NO2 (NASA retrieval); OMI NO2 (KNMI retrieval); OMI
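
    The core regridding step described here, mapping irregular level 2 pixels onto a user-defined fixed grid, can be sketched as a simple bin-and-average. This is an illustrative sketch only, not WHIPS's actual algorithm, which supports more elaborate weighting and quality filtering; all names are hypothetical.

```python
import numpy as np

def grid_level2(lats, lons, values, lat_edges, lon_edges):
    """Bin-and-average irregular level 2 retrievals onto a fixed lat/lon grid.
    Illustrative sketch only; a production tool would also apply quality
    flags and pixel-area weights."""
    sums = np.zeros((len(lat_edges) - 1, len(lon_edges) - 1))
    counts = np.zeros_like(sums)
    ii = np.digitize(lats, lat_edges) - 1   # row index of each pixel
    jj = np.digitize(lons, lon_edges) - 1   # column index of each pixel
    for i, j, v in zip(ii, jj, values):
        if 0 <= i < sums.shape[0] and 0 <= j < sums.shape[1]:
            sums[i, j] += v
            counts[i, j] += 1
    # cells that received no pixels become NaN rather than zero
    return np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)
```

    Pixels falling outside the grid edges are simply dropped; each grid cell ends up holding the mean of the level 2 values whose footprints' centers fall inside it.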

  16. Evaluating biomarkers to model cancer risk post cosmic ray exposure.

    PubMed

    Sridharan, Deepa M; Asaithamby, Aroumougame; Blattnig, Steve R; Costes, Sylvain V; Doetsch, Paul W; Dynan, William S; Hahnfeldt, Philip; Hlatky, Lynn; Kidane, Yared; Kronenberg, Amy; Naidu, Mamta D; Peterson, Leif E; Plante, Ianik; Ponomarev, Artem L; Saha, Janapriya; Snijders, Antoine M; Srinivasan, Kalayarasan; Tang, Jonathan; Werner, Erica; Pluth, Janice M

    2016-06-01

    Robust predictive models are essential to manage the risk of radiation-induced carcinogenesis. Chronic exposure to cosmic rays in the context of the complex deep space environment may place astronauts at high cancer risk. To estimate this risk, it is critical to understand how radiation-induced cellular stress impacts cell fate decisions and how this in turn alters the risk of carcinogenesis. Exposure to the heavy ion component of cosmic rays triggers a multitude of cellular changes, depending on the rate of exposure, the type of damage incurred and individual susceptibility. Heterogeneity in dose, dose rate, radiation quality, energy and particle flux contributes to the complexity of risk assessment. To unravel the impact of each of these factors, it is critical to identify sensitive biomarkers that can serve as inputs for robust modeling of individual risk of cancer or other long-term health consequences of exposure. Limitations in the sensitivity of biomarkers to dose and dose rate, and the complexity of longitudinal monitoring, are some of the factors that increase uncertainties in the output of risk prediction models. Here, we critically evaluate candidate early and late biomarkers of radiation exposure and discuss their usefulness in predicting cell fate decisions. Some of the biomarkers we have reviewed include complex clustered DNA damage, persistent DNA repair foci, reactive oxygen species, chromosome aberrations and inflammation. Other biomarkers discussed, often assayed at later time points post-exposure, include mutations, chromosome aberrations, reactive oxygen species and telomere length changes. We discuss the relationship of biomarkers to different potential cell fates, including proliferation, apoptosis, senescence, and loss of stemness, which can propagate genomic instability and alter tissue composition, and the underlying mRNA signatures that contribute to cell fate decisions. Our goal is to highlight factors that are important in choosing

  17. Material Model Evaluation of a Composite Honeycomb Energy Absorber

    NASA Technical Reports Server (NTRS)

    Jackson, Karen E.; Annett, Martin S.; Fasanella, Edwin L.; Polanco, Michael A.

    2012-01-01

    A study was conducted to evaluate four different material models in predicting the dynamic crushing response of solid-element-based models of a composite honeycomb energy absorber, designated the Deployable Energy Absorber (DEA). Dynamic crush tests of three DEA components were simulated using the nonlinear, explicit transient dynamic code LS-DYNA. In addition, a full-scale crash test of an MD-500 helicopter, retrofitted with DEA blocks, was simulated. The four material models used to represent the DEA were: *MAT_CRUSHABLE_FOAM (Mat 63), *MAT_HONEYCOMB (Mat 26), *MAT_SIMPLIFIED_RUBBER/FOAM (Mat 181), and *MAT_TRANSVERSELY_ANISOTROPIC_CRUSHABLE_FOAM (Mat 142). Test-analysis calibration metrics included simple percentage error comparisons of initial peak acceleration, sustained crush stress, and peak compaction acceleration of the DEA components. In addition, the Roadside Safety Verification and Validation Program (RSVVP) was used to assess similarities and differences between the experimental and analytical curves for the full-scale crash test.

  18. A new approach toward evaluation of fish bioenergetics models

    USGS Publications Warehouse

    Madenjian, Charles P.; O'Connor, Daniel V.; Nortrup, David A.

    2000-01-01

    A new approach was used to evaluate the Wisconsin bioenergetics model for lake trout (Salvelinus namaycush). Lake trout in laboratory tanks were fed alewife (Alosa pseudoharengus) and rainbow smelt (Osmerus mordax), prey typical of lake trout in Lake Michigan. Food consumption and growth by lake trout during the experiment were measured. Polychlorinated biphenyl (PCB) concentrations of the alewife and rainbow smelt, as well as of the lake trout at the beginning and end of the experiment, were determined. From these data, we calculated that lake trout retained 81% of the PCBs contained within their food. In an earlier study, application of the Wisconsin lake trout bioenergetics model to growth and diet data for lake trout in Lake Michigan, in conjunction with PCB data for lake trout and prey fish from Lake Michigan, yielded an estimate of PCB assimilation efficiency from food of 81%. This close agreement in the estimates of efficiency with which lake trout retain PCBs from their food indicated that the bioenergetics model was furnishing accurate predictions of food consumption by lake trout in Lake Michigan.
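
    The mass-balance logic behind the 81% figure can be shown in a few lines: assimilation efficiency is the PCB body burden gained over the experiment divided by the total PCBs ingested with food. The numbers below are illustrative, not the study's data.

```python
def pcb_assimilation_efficiency(burden_start, burden_end, pcb_ingested):
    """Fraction of ingested PCBs retained in the body:
    (body burden gained) / (total PCBs eaten with food)."""
    return (burden_end - burden_start) / pcb_ingested

# illustrative numbers (not the study's data): a tank lake trout starts with
# 10 ug of PCBs, ends with 91 ug, and its measured ration contained 100 ug
efficiency = pcb_assimilation_efficiency(10.0, 91.0, 100.0)   # 0.81
```

    Matching this laboratory-derived efficiency against the efficiency implied by the bioenergetics model's consumption estimate is what constitutes the evaluation.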

  19. Evaluation of Influenza Vaccination Efficacy: A Universal Epidemic Model

    PubMed Central

    Bazhan, Sergei I.

    2016-01-01

    By means of a designed epidemic model, we evaluated the influence of seasonal vaccination coverage, as well as of a potential universal vaccine with differing efficacy, on the aftermath of seasonal and pandemic influenza. The modeling results enabled us to conclude that, to control a seasonal influenza epidemic with a reproduction coefficient R0 ≤ 1.5, 35% vaccination coverage with the current seasonal influenza vaccine formulation is sufficient, provided that other epidemiological measures are regularly implemented. Higher R0 levels of pandemic strains will obviously require stronger interventions. In addition, seasonal influenza vaccines fail to confer protection against antigenically distinct pandemic influenza strains. Therefore, the necessity of a universal influenza vaccine is clear. The model predicts that a potential universal vaccine will be able to provide sufficiently reliable (90%) protection against pandemic influenza only if its efficacy is comparable with the effectiveness of modern vaccines against seasonal influenza strains (70%–80%), provided that at least 40% of the population has been vaccinated in advance, ill individuals have been isolated (observed), and a quarantine has been introduced. If other antiepidemic measures are absent, vaccination coverage of at least 80% is required. PMID:27668256

  20. Experimental performance evaluation of human balance control models.

    PubMed

    Huryn, Thomas P; Blouin, Jean-Sébastien; Croft, Elizabeth A; Koehle, Michael S; Van der Loos, H F Machiel

    2014-11-01

    Two factors commonly differentiate proposed balance control models for quiet human standing: 1) intermittent muscle activation and 2) prediction that overcomes sensorimotor time delays. In this experiment we assessed the viability and performance of intermittent activation and prediction in a balance control loop that included the neuromuscular dynamics of human calf muscles. Muscles were driven by functional electrical stimulation (FES). The performance of the different controllers was compared based on sway patterns and mechanical effort required to balance a human body load on a robotic balance simulator. All evaluated controllers balanced subjects with and without a neural block applied to their common peroneal and tibial nerves, showing that the models can produce stable balance in the absence of natural activation. Intermittent activation required less stimulation energy than continuous control but predisposed the system to increased sway. Relative to intermittent control, continuous control reproduced the sway size of natural standing better. Prediction was not necessary for stable balance control but did improve stability when control was intermittent, suggesting a possible benefit of a predictor for intermittent activation. Further application of intermittent activation and predictive control models may drive prolonged, stable FES-controlled standing that improves quality of life for people with balance impairments. PMID:24771586
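
    The role of prediction in bridging a sensorimotor delay can be illustrated with a linearized inverted pendulum under delayed PD control, where the predictor replays the stored control history through the plant model to estimate the current state. This is an idealized sketch with hypothetical gains and delay, not the experimental FES controller.

```python
G_OVER_L = 9.81   # linearized inverted pendulum, g/L with L = 1 m (illustrative)

def step(theta, omega, u, dt):
    """One Euler step of the unstable plant: theta'' = (g/L)*theta + u."""
    return theta + dt * omega, omega + dt * (G_OVER_L * theta + u)

def simulate(t_end=5.0, dt=0.001, delay_steps=100, kp=30.0, kd=10.0):
    """PD control acting on a 0.1 s delayed measurement, with a model-based
    predictor that replays the stored controls to bridge the delay.
    Gains and delay are hypothetical values."""
    n = int(t_end / dt)
    theta, omega = 0.05, 0.0            # start leaning ~2.9 degrees
    states = [(theta, omega)]
    controls = [0.0] * n
    for k in range(n):
        m = max(k - delay_steps, 0)
        th, om = states[m]              # delayed measurement
        for j in range(m, k):           # predictor: replay past controls
            th, om = step(th, om, controls[j], dt)
        u = -kp * th - kd * om          # PD law on the predicted state
        controls[k] = u
        theta, omega = step(theta, omega, u, dt)
        states.append((theta, omega))
    return theta
```

    Because the predictor uses the exact plant model here, control behaves as if undelayed and the lean angle decays to zero; with model mismatch, prediction only partially compensates the delay, which is consistent with its observed benefit for intermittent control.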

  1. Evaluation of Influenza Vaccination Efficacy: A Universal Epidemic Model

    PubMed Central

    Bazhan, Sergei I.

    2016-01-01

    By means of a designed epidemic model, we evaluated the influence of seasonal vaccination coverage, as well as of a potential universal vaccine with differing efficacy, on the aftermath of seasonal and pandemic influenza. The modeling results enabled us to conclude that, to control a seasonal influenza epidemic with a reproduction coefficient R0 ≤ 1.5, 35% vaccination coverage with the current seasonal influenza vaccine formulation is sufficient, provided that other epidemiological measures are regularly implemented. Higher R0 levels of pandemic strains will obviously require stronger interventions. In addition, seasonal influenza vaccines fail to confer protection against antigenically distinct pandemic influenza strains. Therefore, the necessity of a universal influenza vaccine is clear. The model predicts that a potential universal vaccine will be able to provide sufficiently reliable (90%) protection against pandemic influenza only if its efficacy is comparable with the effectiveness of modern vaccines against seasonal influenza strains (70%–80%), provided that at least 40% of the population has been vaccinated in advance, ill individuals have been isolated (observed), and a quarantine has been introduced. If other antiepidemic measures are absent, vaccination coverage of at least 80% is required.

  2. A Method of Evaluating Atmospheric Models Using Tracer Measurements.

    NASA Astrophysics Data System (ADS)

    Koračin, Darko; Frye, James; Isakov, Vlad

    2000-02-01

    The authors have developed a method that uses tracer measurements as the basis for comparing and evaluating wind fields. An important advantage of the method is that the wind fields are evaluated from the tracer measurements without introducing dispersion calculations. The method can be applied to wind fields predicted by different atmospheric models or to wind fields obtained from interpolation and extrapolation of measured data. The method uses a cost function to quantify the success of wind fields in representing tracer transport. A cost function, "tracer potential," is defined to account for the magnitude of the tracer concentration at the tracer receptors and the separation between each segment of a trajectory representing wind-field transport and each of the tracer receptors. The tracer potential resembles a general expression for a physical potential because the success of a wind-field trajectory is directly proportional to the magnitude of the tracer concentration and inversely proportional to its distance from this concentration. A reference tracer potential, defined at the initial location of any trajectory at the source, is required to evaluate the relative success of the wind fields. The method then calculates the tracer potential continuously along each trajectory as determined by the wind fields in time and space. Increased potential relative to the reference potential along the trajectory indicates good performance of the wind fields, and vice versa. If there is sufficient spatial coverage of near and far receptors around the source, the net tracer-potential area can be used to infer the overall success of the wind fields. If there are mainly near-source receptors, the positive tracer-potential area should be used. If the vertical velocity of the wind fields is not available, the success of the wind fields can be estimated from the vertically integrated area under the tracer-potential curve. A trajectory with a maximum
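
    A minimal implementation of a cost function with the stated properties, potential directly proportional to receptor concentration and inversely proportional to distance, referenced to the trajectory's starting point, might look like the sketch below. The exact functional form used by the authors may differ.

```python
import numpy as np

def tracer_potential(point, receptor_xy, receptor_conc):
    """Sum over receptors of (measured concentration / distance to point):
    proportional to concentration, inversely proportional to distance,
    mirroring the paper's description (exact form may differ)."""
    d = np.hypot(receptor_xy[:, 0] - point[0], receptor_xy[:, 1] - point[1])
    d = np.maximum(d, 1e-6)             # guard against zero distance
    return float(np.sum(receptor_conc / d))

def relative_potential(trajectory, receptor_xy, receptor_conc):
    """Potential along a trajectory minus the reference value at its start;
    positive values indicate transport toward high measured concentrations."""
    ref = tracer_potential(trajectory[0], receptor_xy, receptor_conc)
    return [tracer_potential(p, receptor_xy, receptor_conc) - ref
            for p in trajectory]
```

    Integrating the relative potential along each model trajectory then gives the area measures (net or positive-only) used to rank competing wind fields.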

  3. Evaluation of Atmospheric Loading and Improved Troposphere Modelling

    NASA Technical Reports Server (NTRS)

    Zelensky, Nikita P.; Chinn, Douglas S.; Lemoine, F. G.; Le Bail, Karine; Pavlis, Despina E.

    2012-01-01

    Forward modeling of non-tidal atmospheric loading displacements at geodetic tracking stations has not routinely been included in Doppler Orbitography and Radiopositioning Integrated by Satellite (DORIS) or Satellite Laser Ranging (SLR) station analyses, either for POD applications or for reference frame determination. The displacements, computed from 6-hourly models such as ECMWF, can amount to 3-10 mm in the east, north and up components, depending on the tracking station location. We evaluate the application of atmospheric loading in a number of ways using the NASA GSFC GEODYN software. First, we assess the impact on SLR- and DORIS-determined orbits such as Jason-2, where we evaluate the impact on the tracking data RMS of fit and how the total orbits change with the application of this correction. Preliminary results show an RMS radial change of 0.5 mm for Jason-2 over 54 cycles and a total change in the Z-centering of the orbit of 3 mm peak-to-peak over one year. We also evaluate the effects on other DORIS satellites such as Cryosat-2, Envisat and the SPOT satellites. In a second step, we produce two SINEX time series based on data from the available DORIS satellites and assess the differences in WRMS, scale and Helmert translation parameters. Troposphere refraction is obviously an important correction for radiometric data types such as DORIS. We evaluate recent improvements in DORIS processing at GSFC, including the application of the Vienna Mapping Function (VMF1) grids with a priori hydrostatic (VZHD) and wet (VZWD) zenith delays. We reduce the gridded VZHDs to the station heights using pressure and temperature derived from GPT (strategy 1) and Saastamoinen. We discuss the validation of the VMF1 implementation and its application to Jason-2 POD processing, compared with corrections using the Niell mapping function and the GMF. Using one year of data, we also assess the impact of the new troposphere corrections on the DORIS-only solutions, most
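
    The a priori hydrostatic zenith delay mentioned here is commonly computed with the Saastamoinen model from surface pressure, latitude and height. A sketch of that standard formula (a generic model, not GSFC's specific implementation):

```python
import math

def saastamoinen_zhd(pressure_hpa, lat_rad, height_m):
    """Hydrostatic zenith delay in metres from the Saastamoinen model:
    ZHD = 0.0022768 * P / (1 - 0.00266*cos(2*lat) - 0.00028*h_km)."""
    f = 1.0 - 0.00266 * math.cos(2.0 * lat_rad) - 0.00028 * height_m / 1000.0
    return 0.0022768 * pressure_hpa / f

# standard-atmosphere sea-level pressure at 45 deg latitude: ~2.31 m delay
zhd = saastamoinen_zhd(1013.25, math.radians(45.0), 0.0)
```

    The mapping function (Niell, GMF or VMF1) then projects this zenith value onto the actual elevation angle of each observation.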

  4. Towards systematic evaluation of crop model outputs for global land-use models

    NASA Astrophysics Data System (ADS)

    Leclere, David; Azevedo, Ligia B.; Skalský, Rastislav; Balkovič, Juraj; Havlík, Petr

    2016-04-01

    Land provides vital socioeconomic resources to society, though at the cost of severe environmental degradation. Global integrated models combining high-resolution global gridded crop models (GGCMs) and global economic models (GEMs) are increasingly used to inform sustainable solutions for agricultural land use. However, little effort has yet been made to evaluate and compare the accuracy of GGCM outputs. In addition, GGCM datasets require a large number of parameters whose values, and their variability across space, are weakly constrained: increasing the accuracy of such datasets has a very high computing cost. Innovative evaluation methods are required both to lend credibility to the global integrated models and to allow efficient parameter specification of GGCMs. We propose an evaluation strategy for GGCM datasets from the perspective of use in GEMs, illustrated with preliminary results from a novel dataset (the Hypercube) generated by the EPIC GGCM and used in the GLOBIOM land-use GEM to inform on present-day crop yield, water and nutrient input needs for 16 crops × 15 management intensities, at a spatial resolution of 5 arc-minutes. We adopt the following principle: evaluation should provide a transparent diagnosis of model adequacy for its intended use. We briefly describe how the Hypercube data are generated and how they articulate with GLOBIOM in order to transparently identify the performances to be evaluated, as well as the main assumptions and data processing involved. Expected performances include adequately representing the sub-national heterogeneity in crop yield and input needs: i) in space, ii) across crop species, and iii) across management intensities. We will present and discuss measures of these expected performances and weigh the relative contributions of the crop model, input data and data processing steps to performance. We will also compare the obtained yield gaps and main yield-limiting factors against the M3 dataset. Next steps include

  5. EMPIRE: Nuclear Reaction Model Code System for Data Evaluation

    NASA Astrophysics Data System (ADS)

    Herman, M.; Capote, R.; Carlson, B. V.; Obložinský, P.; Sin, M.; Trkov, A.; Wienke, H.; Zerkin, V.

    2007-12-01

    accompanying code EMPEND and completed with neutron resonances extracted from the existing evaluations. The package contains the full EXFOR (CSISRS) library of experimental reaction data, which are automatically retrieved during calculations. Publication-quality graphs can be obtained using the powerful and flexible plotting package ZVView. The graphical user interface, written in Tcl/Tk, provides for easy operation of the system. This paper describes the capabilities of the code, outlines the physical models and indicates the parameter libraries used by EMPIRE to predict reaction cross sections and spectra, mainly for nucleon-induced reactions. Selected applications of EMPIRE are discussed, the most important being the extensive use of the code in evaluations of neutron reactions for the new US library ENDF/B-VII.0. Future extensions of the system are outlined, including a neutron resonance module as well as capabilities for generating covariances, using both KALMAN and Monte Carlo methods, which are still being advanced and refined.

  6. A Student Evaluation of Molecular Modeling in First Year College Chemistry.

    ERIC Educational Resources Information Center

    Ealy, Julie B.

    1999-01-01

    Evaluates first-year college students' perceptions of molecular modeling. Examines the effectiveness, integration with course content, interests, benefits, advantages, and disadvantages of molecular modeling. (Author/CCM)

  7. A generalised model for traffic induced road dust emissions. Model description and evaluation

    NASA Astrophysics Data System (ADS)

    Berger, Janne; Denby, Bruce

    2011-07-01

    This paper concerns the development and evaluation of a new, generalised road dust emission model. Most of today's road dust emission models are based on local measurements and/or contain empirical emission factors that are specific to a given road environment. In this study, a more generalised road dust emission model is presented and evaluated. We have based the emissions on road, tyre and brake wear rates and used the mass-balance concept to describe the build-up of road dust on the road surface and road shoulder. The model separates the emissions into a direct part and a resuspension part, and treats the road surface and road shoulder as two different sources. We tested the model under idealized conditions as well as on two datasets in and just outside Oslo, Norway, during the studded tyre season. We found that the model reproduced the observed increase in road dust emissions directly after drying of the road surface. The time scale for the build-up of road dust on the road surface is less than an hour for medium to heavy traffic density. The model performs well for temperatures above 0 °C and less well during colder periods. Since the model does not yet include salting as an additional mass source, underestimations are evident during dry periods with temperatures around 0 °C, when salting occurs. The model overestimates the measured PM10 (particulate matter less than 10 μm in diameter) concentrations during heavy precipitation events, since it does not take the amount of precipitation into account. The modelled emissions are strongly sensitive to road surface conditions, and the current parameterisations of the effects of precipitation, runoff and evaporation seem inadequate.
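
    The mass-balance concept, build-up of surface dust from wear and depletion by traffic-induced resuspension, can be sketched as a simple discrete-time balance. Parameter names and values below are illustrative, not the paper's calibrated model, which also distinguishes road surface from shoulder and direct from resuspended emissions.

```python
def road_dust_mass(m0, wear_rate, traffic, f_resusp, dt, steps):
    """Discrete-time mass balance for dust on the road surface:
    build-up from road/tyre/brake wear, depletion by traffic-induced
    resuspension. Units and parameter values are illustrative."""
    m, masses, emissions = m0, [], []
    for _ in range(steps):
        e = f_resusp * traffic * m * dt   # resuspended emission this step
        m = m + wear_rate * dt - e        # wear deposits minus resuspension
        masses.append(m)
        emissions.append(e)
    return masses, emissions
```

    At equilibrium the surface load settles at wear_rate / (f_resusp * traffic), so resuspended emissions exactly balance deposition; salting, precipitation and runoff would enter this balance as additional source and sink terms, consistent with the limitations noted above.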

  8. Model selection on solid ground: Rigorous comparison of nine ways to evaluate Bayesian model evidence

    PubMed Central

    Schöniger, Anneli; Wöhling, Thomas; Samaniego, Luis; Nowak, Wolfgang

    2014-01-01

    Bayesian model selection or averaging objectively ranks a number of plausible, competing conceptual models based on Bayes' theorem. It implicitly performs an optimal trade-off between performance in fitting available data and minimum model complexity. The procedure requires determining Bayesian model evidence (BME), which is the likelihood of the observed data integrated over each model's parameter space. The computation of this integral is highly challenging because it is as high-dimensional as the number of model parameters. Three classes of techniques to compute BME are available, each with its own challenges and limitations: (1) Exact and fast analytical solutions are limited by strong assumptions. (2) Numerical evaluation quickly becomes unfeasible for expensive models. (3) Approximations known as information criteria (ICs) such as the AIC, BIC, or KIC (Akaike, Bayesian, or Kashyap information criterion, respectively) yield contradicting results with regard to model ranking. Our study features a theory-based intercomparison of these techniques. We further assess their accuracy in a simplistic synthetic example where for some scenarios an exact analytical solution exists. In more challenging scenarios, we use a brute-force Monte Carlo integration method as reference. We continue this analysis with a real-world application of hydrological model selection. This is a first-time benchmarking of the various methods for BME evaluation against true solutions. Results show that BME values from ICs are often heavily biased and that the choice of approximation method substantially influences the accuracy of model ranking. For reliable model selection, bias-free numerical methods should be preferred over ICs whenever computationally feasible. PMID:25745272
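
    The brute-force Monte Carlo reference used here averages the likelihood over draws from the prior. For a conjugate Gaussian example the estimate can be checked against the exact analytical evidence; this is a minimal sketch, not the study's hydrological setup.

```python
import math
import random

def gauss_pdf(x, mu, var):
    """Density of N(mu, var) at x."""
    return math.exp(-(x - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def mc_evidence(y, sigma2, tau2, n=200_000, seed=1):
    """Brute-force Monte Carlo BME: draw the parameter from the prior
    N(0, tau2) and average the likelihood N(y; theta, sigma2)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        theta = rng.gauss(0.0, math.sqrt(tau2))
        total += gauss_pdf(y, theta, sigma2)
    return total / n

# conjugate check: for this model the exact evidence is N(y; 0, sigma2 + tau2)
estimate = mc_evidence(0.5, 1.0, 1.0)
exact = gauss_pdf(0.5, 0.0, 2.0)
```

    For expensive models this direct averaging is precisely what becomes unfeasible, which is why the biased but cheap information criteria remain attractive despite the ranking errors reported here.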

  9. Modelling phosphorus intake, digestion, retention and excretion in growing and finishing pig: model evaluation.

    PubMed

    Symeou, V; Leinonen, I; Kyriazakis, I

    2014-10-01

    A deterministic, dynamic model was developed to enable predictions of phosphorus (P) digested, retained and excreted for different pig genotypes under different dietary conditions. Before confidence can be placed in the model's predictions, it must be evaluated. A sensitivity analysis of model predictions to ±20% changes in the model parameters was undertaken using a basal UK industry-standard diet and a pig genotype characterized by the British Society of Animal Science as being of 'intermediate growth'. Model outputs were most sensitive to the values of the efficiency of digestible P utilization for growth and the non-phytate P absorption coefficient from the small intestine into the bloodstream; all other model parameters influenced model outputs by <10%, with the majority influencing outputs by <5%. Independent data sets from published experiments were used to evaluate model performance based on graphical comparisons and statistical analysis. The literature studies were selected on the following criteria: pigs were within the BW range of 20 to 120 kg; they grew in a thermo-neutral environment; and the studies provided information on P intake, retention and excretion. In general, the model satisfactorily predicted the quantitative pig responses, in terms of P digested, retained and excreted, to variation in dietary inorganic P supply, Ca and phytase supplementation. The model performed well with 'conventional' European feed ingredients and poorly with 'less conventional' ones, such as dried distillers grains with solubles and canola meal. Explanations for these inconsistencies in the predictions are offered in the paper and are expected to lead to further model development and improvement. The latter would include characterization of the origin of phytate in pig diets.
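
    The ±20% one-at-a-time sensitivity analysis is generic and easy to sketch for any parameterized model. The toy model below is purely illustrative, not the pig phosphorus model.

```python
def sensitivity(model, base_params, frac=0.2):
    """One-at-a-time sensitivity: percent change in model output when each
    parameter is perturbed by -frac and +frac (here 20%, as in the paper)."""
    base = model(base_params)
    result = {}
    for name, value in base_params.items():
        low = model(dict(base_params, **{name: value * (1 - frac)}))
        high = model(dict(base_params, **{name: value * (1 + frac)}))
        result[name] = (100.0 * (low - base) / base,
                        100.0 * (high - base) / base)
    return result

# toy model: output linear in a but quadratic in b, so b dominates
toy = lambda p: p["a"] * p["b"] ** 2
effects = sensitivity(toy, {"a": 2.0, "b": 3.0})
```

    Parameters whose ±20% perturbation moves the output by less than, say, 5% can then be fixed at nominal values, concentrating calibration effort on the influential ones.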

  10. Model selection on solid ground: Rigorous comparison of nine ways to evaluate Bayesian model evidence

    NASA Astrophysics Data System (ADS)

    Schöniger, Anneli; Wöhling, Thomas; Samaniego, Luis; Nowak, Wolfgang

    2014-12-01

    Bayesian model selection or averaging objectively ranks a number of plausible, competing conceptual models based on Bayes' theorem. It implicitly performs an optimal trade-off between performance in fitting available data and minimum model complexity. The procedure requires determining Bayesian model evidence (BME), which is the likelihood of the observed data integrated over each model's parameter space. The computation of this integral is highly challenging because it is as high-dimensional as the number of model parameters. Three classes of techniques to compute BME are available, each with its own challenges and limitations: (1) Exact and fast analytical solutions are limited by strong assumptions. (2) Numerical evaluation quickly becomes unfeasible for expensive models. (3) Approximations known as information criteria (ICs) such as the AIC, BIC, or KIC (Akaike, Bayesian, or Kashyap information criterion, respectively) yield contradicting results with regard to model ranking. Our study features a theory-based intercomparison of these techniques. We further assess their accuracy in a simplistic synthetic example where for some scenarios an exact analytical solution exists. In more challenging scenarios, we use a brute-force Monte Carlo integration method as reference. We continue this analysis with a real-world application of hydrological model selection. This is a first-time benchmarking of the various methods for BME evaluation against true solutions. Results show that BME values from ICs are often heavily biased and that the choice of approximation method substantially influences the accuracy of model ranking. For reliable model selection, bias-free numerical methods should be preferred over ICs whenever computationally feasible.

  11. Novel Planar Electromagnetic Sensors: Modeling and Performance Evaluation

    PubMed Central

    Mukhopadhyay, Subhas C.

    2005-01-01

    High-performance planar electromagnetic sensors, their modeling and a few applications are reported in this paper. Research employing planar-type electromagnetic sensors began quite a few years ago, with the initial emphasis on the inspection of defects in printed circuit boards. The use of planar-type sensing systems has since been extended to the evaluation of near-surface material properties such as conductivity, permittivity and permeability, and to the inspection of defects near the surface of materials. Recently the sensor has been used to inspect the quality of saxophone reeds and dairy products. The electromagnetic responses of planar interdigital sensors to pork meat have also been investigated.

  12. Integrated modelling approach for the evaluation of low emission zones.

    PubMed

    Dias, Daniela; Tchepel, Oxana; Antunes, António Pais

    2016-07-15

    Low emission zones (LEZ) are areas where the most polluting vehicles are restricted or deterred from entering. In recent years, LEZ became a popular option to reduce traffic-related air pollution and have been implemented in many cities worldwide, notably in Europe. However, the evidence about their effectiveness is inconsistent. This calls for the development of tools to evaluate ex-ante the air quality impacts of a LEZ. The integrated modelling approach we propose in this paper aims to respond to this call. It links a transportation model with an emissions model and an air quality model operating over a GIS-based platform. Through the application of the approach, it is possible to estimate the changes induced by the creation of a LEZ applied to private cars with respect to air pollution levels not only inside the LEZ, but also, more generally, in the city where it is located. The usefulness of the proposed approach was demonstrated for a case study involving the city of Coimbra (Portugal), where the creation of a LEZ is being sought to mitigate the air quality problems that its historic centre currently faces. The main result of this study was that PM10 and NO2 emissions from private cars would decrease significantly inside the LEZ (63% and 52%, respectively) but the improvement in air quality would be small and exceedances to the air pollution limits adopted in the European Union would not be fully avoided. In contrast, at city level, total emissions increase and a deterioration of air quality is expected to occur.

  13. Evaluation of CM5 Charges for Condensed-Phase Modeling.

    PubMed

    Vilseck, Jonah Z; Tirado-Rives, Julian; Jorgensen, William L

    2014-07-01

    The recently developed Charge Model 5 (CM5) is tested for its utility in condensed-phase simulations. The CM5 approach, which derives partial atomic charges from Hirshfeld population analyses, provides excellent results for gas-phase dipole moments and is applicable to all elements of the periodic table. Herein, the adequacy of scaled CM5 charges for use in modeling aqueous solutions has been evaluated by computing free energies of hydration (ΔGhyd) for 42 neutral organic molecules via Monte Carlo statistical mechanics. An optimal scaling factor for the CM5 charges was determined to be 1.27, resulting in a mean unsigned error (MUE) of 1.1 kcal/mol for the free energies of hydration. Testing for an additional 20 molecules gave an MUE of 1.3 kcal/mol. The high precision of the results is confirmed by free energy calculations using both sequential perturbations and complete molecular annihilation. Performance for specific functional groups is discussed; sulfur-containing molecules yield the largest errors. In addition, the scaling factor of 1.27 is shown to be appropriate for CM5 charges derived from a variety of density functional methods and basis sets. Though the average errors from the 1.27*CM5 results are only slightly lower than those using 1.14*CM1A charges, the broader applicability and easier access to CM5 charges via the Gaussian program are additional attractive features. The 1.27*CM5 charge model can be used for an enormous variety of applications in conjunction with many fixed-charge force fields and molecular modeling programs. PMID:25061445
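
    Selecting an optimal scaling factor by minimizing the mean unsigned error can be sketched as a simple grid search. The ΔG values below are made up for illustration; in the actual study each candidate scale requires fresh free-energy simulations rather than a table lookup.

```python
def mue(predicted, experimental):
    """Mean unsigned error (kcal/mol), the metric used to score
    computed hydration free energies against experiment."""
    return sum(abs(p - e) for p, e in zip(predicted, experimental)) / len(predicted)

def best_scale(dg_by_scale, experimental, scales):
    """Pick the charge-scaling factor whose computed hydration free
    energies give the lowest MUE (hypothetical input data)."""
    return min(scales, key=lambda s: mue(dg_by_scale[s], experimental))

exp_dg = [-5.0, -3.0]                    # experimental ΔG_hyd, kcal/mol (made up)
computed = {1.00: [-3.0, -1.0],          # unscaled charges: too positive
            1.27: [-4.9, -3.2]}          # scaled charges: close to experiment
```

    Unscaled gas-phase charges typically underestimate condensed-phase polarization, which is why a uniform scale factor greater than one improves hydration free energies.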

  14. Evaluating status change of soil potassium from path model.

    PubMed

    He, Wenming; Chen, Fang

    2013-01-01

    The purpose of this study is to determine critical environmental parameters of soil K availability and to quantify their contributions by using a proposed path model. In this study, plot experiments were designed with different treatments, and soil samples were collected and analyzed in the laboratory to investigate the influence of soil properties on soil potassium forms (water-soluble K, exchangeable K, non-exchangeable K). Furthermore, path analysis based on the proposed path model was carried out to evaluate the relationship between potassium forms and soil properties. The research findings were as follows. Firstly, the key direct factors were soil S, the sodium-potassium ratio (Na/K), the chemical index of alteration (CIA), soil organic matter in soil solution (SOM), Na and total nitrogen in soil solution (TN), and the key indirect factors were carbonate (CO3), Mg, pH, Na, S, and SOM. Secondly, the path model can effectively determine the direction and magnitude of potassium status changes between exchangeable potassium (eK), non-exchangeable potassium (neK) and water-soluble potassium (wsK) under the influence of specific environmental parameters. In the reversible equilibrium state of [Formula: see text], the K balance was inclined to move in the β and χ directions in treatments of potassium shortage. However, in the reversible equilibrium of [Formula: see text], the K balance was inclined to move in the θ and λ directions in treatments of water shortage. The results showed that the proposed path model was able to quantitatively disclose the direction of K status change and quantify its equilibrium threshold. It provides a theoretical and practical basis for scientific and effective fertilization in agricultural plant growth. PMID:24204659
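The distinction between direct and indirect factors in a path model comes from decomposing each total effect into a direct path plus products of coefficients along mediated paths. A minimal sketch of that bookkeeping, with made-up coefficients rather than the fitted soil-K values:

```python
# Sketch of effect decomposition in path analysis: the total effect of X on Y
# is the direct path coefficient plus, for each mediator M, the product of the
# X -> M and M -> Y path coefficients. All coefficients here are hypothetical.

def total_effect(direct, indirect_paths):
    """indirect_paths: list of (coef_X_to_M, coef_M_to_Y) pairs."""
    return direct + sum(a * b for a, b in indirect_paths)

# e.g. soil S acting on exchangeable K directly and via pH and SOM (made up):
direct = 0.40
via_pH = (0.50, 0.30)    # S -> pH, pH -> eK
via_SOM = (0.20, 0.25)   # S -> SOM, SOM -> eK
print(round(total_effect(direct, [via_pH, via_SOM]), 2))
```

A factor like CO3 or Mg that has a small direct coefficient but large products along mediated paths would be classed as a key indirect factor in this scheme.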

  15. Statistical models and computation to evaluate measurement uncertainty

    NASA Astrophysics Data System (ADS)

    Possolo, Antonio

    2014-08-01

    In the course of the twenty years since the publication of the Guide to the Expression of Uncertainty in Measurement (GUM), the recognition has been steadily growing of the value that statistical models and statistical computing bring to the evaluation of measurement uncertainty, and of how they enable its probabilistic interpretation. These models and computational methods can address all the problems originally discussed and illustrated in the GUM, and enable addressing other, more challenging problems, that measurement science is facing today and that it is expected to face in the years ahead. These problems that lie beyond the reach of the techniques in the GUM include (i) characterizing the uncertainty associated with the assignment of value to measurands of greater complexity than, or altogether different in nature from, the scalar or vectorial measurands entertained in the GUM: for example, sequences of nucleotides in DNA, calibration functions and optical and other spectra, spatial distribution of radioactivity over a geographical region, shape of polymeric scaffolds for bioengineering applications, etc; (ii) incorporating relevant information about the measurand that predates or is otherwise external to the measurement experiment; (iii) combining results from measurements of the same measurand that are mutually independent, obtained by different methods or produced by different laboratories. This review of several of these statistical models and computational methods illustrates some of the advances that they have enabled, and in the process invites a reflection on the interesting historical fact that these very same models and methods, by and large, were already available twenty years ago, when the GUM was first published—but then the dialogue between metrologists, statisticians and mathematicians was still in bud. It is in full bloom today, much to the benefit of all.
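One of the statistical computing techniques the review alludes to is Monte Carlo propagation of distributions in the style of GUM Supplement 1. The sketch below uses an assumed, illustrative measurement model (a resistance from voltage and current); it is not an example taken from the review.

```python
# Minimal Monte Carlo uncertainty propagation in the spirit of GUM
# Supplement 1: draw inputs from their assigned distributions, push them
# through the measurement model, and summarize the output distribution.
# The measurement model (R = V/I) and the input values are assumptions.
import random
import statistics

random.seed(42)  # reproducible draws

def measurement_model(v, i):
    return v / i  # R = V / I

N = 100_000
samples = [
    measurement_model(random.gauss(10.0, 0.05),   # V: mean 10 V, u = 0.05 V
                      random.gauss(2.0, 0.01))    # I: mean 2 A,  u = 0.01 A
    for _ in range(N)
]

r_hat = statistics.fmean(samples)
u_r = statistics.stdev(samples)
print(f"R = {r_hat:.3f} ohm, standard uncertainty = {u_r:.4f} ohm")
```

The output distribution directly supports the probabilistic interpretation of uncertainty the review emphasizes, including coverage intervals taken from its quantiles.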

  16. Development and evaluation of a bioenergetics model for bull trout

    USGS Publications Warehouse

    Mesa, Matthew G.; Welland, Lisa K.; Christiansen, Helena E.; Sauter, Sally T.; Beauchamp, David A.

    2013-01-01

    We conducted laboratory experiments to parameterize a bioenergetics model for wild Bull Trout Salvelinus confluentus, estimating the effects of body mass (12–1,117 g) and temperature (3–20°C) on maximum consumption (C max) and standard metabolic rates. The temperature associated with the highest C max was 16°C, and C max showed the characteristic dome-shaped temperature-dependent response. Mass-dependent values of C max (N = 28) at 16°C ranged from 0.03 to 0.13 g·g−1·d−1. The standard metabolic rates of fish (N = 110) ranged from 0.0005 to 0.003 g·O2·g−1·d−1 and increased with increasing temperature but declined with increasing body mass. In two separate evaluation experiments, which were conducted at only one ration level (40% of estimated C max), the model predicted final weights that were, on average, within 1.2 ± 2.5% (mean ± SD) of observed values for fish ranging from 119 to 573 g and within 3.5 ± 4.9% of values for 31–65 g fish. Model-predicted consumption was within 5.5 ± 10.9% of observed values for larger fish and within 12.4 ± 16.0% for smaller fish. Our model should be useful to those dealing with issues currently faced by Bull Trout, such as climate change or alterations in prey availability.
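The evaluation statistic quoted in this record (predictions within a mean ± SD percent of observed values) is easy to make concrete. The sketch below uses hypothetical weights, not the Bull Trout data.

```python
# Sketch of the bioenergetics-model evaluation metric: percent difference
# between model-predicted and observed final weights, summarized as
# mean +/- SD. All weights below are hypothetical.
import statistics

def percent_errors(predicted, observed):
    """Signed percent difference of each prediction from its observation."""
    return [100.0 * (p - o) / o for p, o in zip(predicted, observed)]

pred = [120.5, 250.0, 560.1]  # model-predicted final weights (g)
obs = [119.0, 245.0, 573.0]   # observed final weights (g)
errs = percent_errors(pred, obs)
print(f"{statistics.fmean(errs):.1f} +/- {statistics.stdev(errs):.1f} %")
```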

  17. Evaluation of data for Sinkhole-development risk models

    NASA Astrophysics Data System (ADS)

    Upchurch, Sam B.; Littlefield, James R.

    1988-10-01

    Before risk assessments for sinkhole damage and indemnification are developed, a database must be created to predict the occurrence and distribution of sinkholes. This database must be evaluated in terms of the following questions: (1) are available records of modern sinkhole development adequate, (2) can the distribution of ancient sinks be used for predictive purposes, and (3) at what areal scale must sinkhole occurrences be evaluated for predictive and risk analysis purposes? Twelve 7.5' quadrangles with varying karst development in Hillsborough County, Florida provide insight into these questions. The area includes 179 modern sinks that developed between 1964 and 1985 and 2,303 ancient sinks. The sinks occur in urban, suburban, agricultural, and major forest wetland areas. The density of ancient sinks ranges from 0.1 to 3.2/km2 and averages 1.1/km2 for the entire area. The quadrangle area occupied by ancient sinks ranges from 0.3 to 10.2 percent. The distribution of ancient sinkholes within a quadrangle ranges from 0 to over 25 percent of the land surface. In bare karst areas, the sinks are localized along major lineaments, especially at lineament intersections. Where there is covered karst, ancient sinks may be obscured. Modern sinkholes did not develop uniformly through time; annual occurrences ranged from 0 to 29/yr. The regional occurrence rate is 7.6/yr. Most were reported in urban or suburban areas and their locations coincide with the lineament-controlled areas of ancient karst. Moving-average analysis indicates that the distribution of modern sinks is highly localized and ranges from 0 to 1.9/km2. Chi-square tests show that the distribution of ancient sinks in bare karst areas significantly predicts the locations of modern sinks. In areas of covered karst, the locations of ancient sinkholes do not predict modern sinks. It appears that risk-assessment models for sinkhole development can use the distribution of ancient sinks where bare karst is present.
In covered karst areas
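The chi-square comparison described in this record, testing whether modern sinkholes fall inside mapped ancient-sink areas more often than land area alone would predict, can be sketched as follows. The inside/outside counts and the 10%/90% area split are hypothetical, not the Hillsborough County data.

```python
# Sketch of a chi-square goodness-of-fit test for sinkhole risk data:
# observed counts of modern sinks inside vs. outside ancient-sink areas,
# with expected counts proportional to land area. Counts are hypothetical.

def chi_square(observed, expected):
    """Pearson chi-square statistic: sum((O - E)^2 / E)."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

observed = [45, 134]          # modern sinks inside / outside ancient-sink areas
total = sum(observed)
expected = [0.10 * total, 0.90 * total]  # ancient-sink areas cover 10% of land

stat = chi_square(observed, expected)
print(round(stat, 1))  # compare against the chi-square critical value, df = 1
```

A statistic far above the df = 1 critical value (3.84 at the 5% level) would indicate, as in the bare-karst case, that ancient sinks significantly predict modern ones.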

  18. Storytelling Voice Conversion: Evaluation Experiment Using Gaussian Mixture Models

    NASA Astrophysics Data System (ADS)

    Přibil, Jiří; Přibilová, Anna; Ďuračková, Daniela

    2015-07-01

    In the development of voice conversion and personification for text-to-speech (TTS) systems, it is essential to have feedback about users' opinions of the resulting synthetic speech quality. Therefore, the main aim of the experiments described in this paper was to find out whether a classifier based on Gaussian mixture models (GMM) could be applied to the evaluation of different storytelling voices created by transformation of sentences generated by the Czech and Slovak TTS system. We suppose that this GMM-based statistical evaluation can be combined with, or even replace, classical listening tests. The results obtained in this way were in good correlation with the results of the conventional listening test, confirming the practical usability of the developed GMM classifier. With the help of the performed analysis, the optimal setting of the initial parameters and the structure of the input feature set for recognition of the storytelling voices were finally determined.
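The core of a GMM classifier is likelihood comparison: each voice class is modeled as a mixture of Gaussians over acoustic features, and an utterance is assigned to the class with the highest likelihood. The sketch below shrinks this to one made-up scalar feature with fixed mixture parameters; the real system uses multidimensional spectral features and fitted mixtures.

```python
# Toy GMM-based classification of storytelling voices: each class is a
# mixture of 1-D Gaussians over a single pitch-like feature. All class
# names, parameters and feature values are hypothetical.
import math

def gaussian_pdf(x, mean, std):
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def mixture_likelihood(x, components):
    """components: list of (weight, mean, std); weights sum to 1."""
    return sum(w * gaussian_pdf(x, m, s) for w, m, s in components)

classes = {
    "narrator": [(0.6, 110.0, 15.0), (0.4, 140.0, 20.0)],
    "villain": [(0.5, 80.0, 10.0), (0.5, 100.0, 12.0)],
}

def classify(x):
    return max(classes, key=lambda c: mixture_likelihood(x, classes[c]))

print(classify(125.0))  # feature near the "narrator" components
print(classify(85.0))   # feature near the "villain" components
```

Agreement between such maximum-likelihood labels and listener judgments is what the paper uses to validate the classifier against listening tests.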

  19. [Tridimensional evaluation model of health promotion in school -- a proposition].

    PubMed

    Kulmatycki, Lesław

    2005-01-01

    A good school health programme can be one of the most cost-effective investments for simultaneously improving education and health. The general direction of WHO's European Network of Health Promoting Schools and Global School Health Initiative is guided by the holistic approach and the Ottawa Charter for Health Promotion (1986). A health promoting school strives to improve the health and well-being of school pupils as well as school personnel, families and community members, and works with community leaders to help them understand how the community contributes to health and education. Evaluation research is essential to describe the nature and effectiveness of school health promoting activity. The overall aim of this paper is to help school leaders and health promotion coordinators evaluate their work effectively. The specific aim is to offer a practical three-dimensional evaluation model for health promoting schools. The material is presented in two sections. The first is a 'theoretical base' for health promotion, identified from broad-based daily health promotion activities, strategies and intersectoral interventions closely related to the philosophy of the holistic approach. The three dimensions refer to: 1. 'areas' -- according to the mandala of health; 2. 'actions' -- according to Ottawa Charter strategies, which should be adapted to the local school networks; 3. 'data' -- according to different groups of evidence (process, changes and progress). The second section, building on this base, presents the three 'core elements': standards, criteria and indicators. In conclusion, this article provides a practical answer to the dilemma of the evaluation model in the network of the local school environment. This proposition is addressed to school staff and school health promotion providers to make their work as effective as possible in improving pupils' health. A health promoting school can be characterized as a school constantly

  20. Evaluating Consistency in the Ocean Model Component of the Community Earth System Model

    NASA Astrophysics Data System (ADS)

    Hammerling, D.; Hu, Y.; Baker, A. H.; Huang, X.; Tseng, Y. H.; Bryan, F.

    2015-12-01

    We developed a new ensemble-based statistical method for evaluating consistency in the Parallel Ocean Program (POP), the ocean model component of the Community Earth System Model (CESM). Since ocean dynamics are chaotic in nature, a roundoff-level perturbation in the initial conditions will potentially result in distinct model solutions. No bit-for-bit (BFB) identical results in ocean solutions can be guaranteed for even a tiny code modification. Our approach takes the natural variability of the ocean model into account through POP ensemble simulations. In particular, the statistical distribution from an ensemble of POP simulations is used to determine the standard score of any new model solution at each grid point. This setup accounts for the spatial heterogeneity in variability within the ensemble. Then the percentage of grid points with scores greater than a specified threshold indicates whether the new model simulation is statistically distinguishable from the ensemble simulations. We evaluate the new tool on three types of scenarios: running with different processor layouts, changing the physical parameterization, and varying the convergence tolerance in the barotropic solver. Results indicate that our new testing tool is capable of distinguishing cases which should be consistent with the ensemble, such as solutions with different processor layouts, from those which should not, such as increasing a certain physical parameter by two or more times. This new tool provides a simple, objective and systematic way to evaluate the difference between a given solution and the ensemble, thus facilitating the detection of errors introduced during model development.
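The test described above reduces to per-grid-point standard scores against the ensemble, followed by a count of exceedances. A minimal sketch on tiny synthetic "grids" (real POP fields are large 2-D/3-D arrays):

```python
# Sketch of the ensemble-based consistency test: z-score a new solution
# at each grid point against the ensemble mean and standard deviation,
# then report the fraction of points exceeding a threshold. Data synthetic.
import statistics

def failing_fraction(ensemble, new_run, threshold=2.0):
    """ensemble: list of runs, each a list of grid-point values."""
    n_points = len(new_run)
    n_fail = 0
    for j in range(n_points):
        vals = [run[j] for run in ensemble]
        mu = statistics.fmean(vals)
        sigma = statistics.stdev(vals)
        if abs(new_run[j] - mu) / sigma > threshold:
            n_fail += 1
    return n_fail / n_points

ensemble = [[10.0, 20.0, 30.0],
            [10.2, 19.8, 30.3],
            [9.9, 20.1, 29.8]]

consistent = [10.1, 20.0, 30.1]     # within the ensemble spread everywhere
inconsistent = [12.0, 25.0, 30.0]   # far outside the spread at two points
print(failing_fraction(ensemble, consistent))
print(failing_fraction(ensemble, inconsistent))
```

Using per-point standard deviations is what lets the test respect the spatial heterogeneity of ocean variability mentioned in the abstract.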

  1. The western Pacific monsoon in CMIP5 models: Model evaluation and projections

    NASA Astrophysics Data System (ADS)

    Brown, Josephine R.; Colman, Robert A.; Moise, Aurel F.; Smith, Ian N.

    2013-11-01

    The ability of 35 models from the Coupled Model Intercomparison Project Phase 5 (CMIP5) to simulate the western Pacific (WP) monsoon is evaluated over four representative regions around Timor, New Guinea, the Solomon Islands and Palau. Coupled model simulations are compared with atmosphere-only model simulations (with observed sea surface temperatures, SSTs) to determine the impact of SST biases on model performance. Overall, the CMIP5 models simulate the WP monsoon better than previous-generation Coupled Model Intercomparison Project Phase 3 (CMIP3) models, but some systematic biases remain. The atmosphere-only models are better able to simulate the seasonal cycle of zonal winds than the coupled models, but display comparable biases in the rainfall. The CMIP5 models are able to capture features of interannual variability in response to the El Niño-Southern Oscillation. In climate projections under the RCP8.5 scenario, monsoon rainfall is increased over most of the WP monsoon domain, while wind changes are small. Widespread rainfall increases at low latitudes in the summer hemisphere appear robust as a large majority of models agree on the sign of the change. There is less agreement on rainfall changes in winter. Interannual variability of monsoon wet season rainfall is increased in a warmer climate, particularly over Palau, Timor and the Solomon Islands. A subset of the models showing greatest skill in the current climate confirms the overall projections, although showing markedly smaller rainfall increases in the western equatorial Pacific. The changes found here may have large impacts on Pacific island countries influenced by the WP monsoon.
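The robustness criterion used here (a projected change is robust where a large majority of models agree on its sign) is a simple counting rule. A sketch with illustrative numbers standing in for per-model rainfall changes:

```python
# Sketch of the sign-agreement robustness check for multi-model projections.
# The per-model changes below are illustrative numbers, not CMIP5 output.

def sign_agreement(changes):
    """Fraction of models agreeing with the majority sign of the change."""
    pos = sum(1 for c in changes if c > 0)
    neg = sum(1 for c in changes if c < 0)
    return max(pos, neg) / len(changes)

# Hypothetical summer rainfall changes (mm/day) from seven "models":
summer_rain_changes = [0.4, 0.8, 0.1, 0.6, -0.2, 0.3, 0.5]
agreement = sign_agreement(summer_rain_changes)
print(f"{agreement:.0%} of models agree on the sign of the change")
```

A typical convention is to call the change robust when this fraction exceeds some high threshold (e.g. 80%), which is the sense of "a large majority of models agree" in the abstract.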

  2. Risk evaluation of uranium mining: A geochemical inverse modelling approach

    NASA Astrophysics Data System (ADS)

    Rillard, J.; Zuddas, P.; Scislewski, A.

    2011-12-01

    It is well known that uranium extraction operations can increase risks linked to radiation exposure. The toxicity of uranium and associated heavy metals is the main environmental concern regarding exploitation and processing of U-ore. In areas where U mining is planned, a careful assessment of toxic and radioactive element concentrations is recommended before the start of mining activities. A background evaluation of harmful elements is important in order to prevent and/or quantify future water contamination resulting from possible migration of toxic metals coming from ore and waste water interaction. Controlled leaching experiments were carried out to investigate processes of ore and waste (leached ore) degradation, using samples from the uranium exploitation site located in Caetité-Bahia, Brazil. In experiments in which the reaction of waste with water was tested, we found that the water had low pH and high levels of sulphates and aluminium. On the other hand, in experiments in which ore was tested, the water had a chemical composition comparable to natural water found in the region of Caetité. On the basis of our experiments, we suggest that waste resulting from sulphuric acid treatment can induce acidification and salinization of surface and ground water. For this reason proper storage of waste is imperative. As a tool to evaluate the risks, a geochemical inverse modelling approach was developed to estimate the water-mineral interaction involving the presence of toxic elements. We used a method described earlier by Scislewski and Zuddas (2010) (Geochim. Cosmochim. Acta 74, 6996-7007) in which the reactive surface area of minerals during dissolution can be estimated. We found that the reactive surface area of the parent minerals is not constant over time but varies by several orders of magnitude in only two months of interaction.
We propose that parent mineral heterogeneity and particularly, neogenic phase formation may explain the observed variation of the

  3. Evaluation of five fracture models in Taylor impact fracture

    NASA Astrophysics Data System (ADS)

    Zhang, Wei; Xiao, Xin-Ke; Wei, Gang; Guo, Zitao

    2012-03-01

    Taylor impact tests presented in a previous study on a commercial high-strength, super-hard aluminum alloy 7A04-T6 are numerically evaluated using the finite element code ABAQUS/Explicit. In the present study, the influence of the fracture criterion on numerical simulations of the deformation and fracture behavior of the Taylor rod has been studied. Included in the paper are a modified version of the Johnson-Cook, the Cockcroft-Latham (C-L), the constant fracture strain, the maximum shear stress and the maximum principal stress fracture models. Model constants for each criterion are calibrated from material tests. The modified version of the Johnson-Cook fracture criterion with the stress triaxiality cut-off idea is found to give good predictions of the Taylor impact fracture behavior. However, this study also shows that the C-L fracture criterion, where only one simple material test is required for calibration, gives reasonable predictions. Unfortunately, the other three criteria are not able to reproduce the experimentally obtained fracture behavior. The study indicates that the stress triaxiality cut-off idea is necessary to predict Taylor impact fracture.
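The triaxiality cut-off highlighted in this record can be sketched with the standard Johnson-Cook fracture-strain form: fracture strain depends on stress triaxiality, strain rate and temperature, and no damage accumulates below the cut-off (commonly taken at a triaxiality of -1/3). The D1..D5 constants below are illustrative, not the calibrated 7A04-T6 values.

```python
# Sketch of a Johnson-Cook fracture criterion with a stress triaxiality
# cut-off. Damage accumulates as D = sum(delta_eps / eps_f); fracture is
# predicted when D >= 1. Constants D1..D5 are hypothetical.
import math

CUTOFF = -1.0 / 3.0  # triaxiality below which no fracture is predicted

def fracture_strain(eta, strain_rate_ratio=1.0, homologous_temp=0.0,
                    D1=0.05, D2=0.8, D3=-1.5, D4=0.01, D5=0.0):
    """Johnson-Cook fracture strain; infinite below the triaxiality cut-off."""
    if eta <= CUTOFF:
        return math.inf  # strong compression: no damage accumulates
    return ((D1 + D2 * math.exp(D3 * eta))
            * (1.0 + D4 * math.log(strain_rate_ratio))
            * (1.0 + D5 * homologous_temp))

def damage(strain_path):
    """strain_path: list of (plastic strain increment, triaxiality) pairs."""
    return sum(d_eps / fracture_strain(eta) for d_eps, eta in strain_path)

# Tensile-dominated path (eta ~ 1/3) vs. a compressive path below the cut-off:
tension = [(0.12, 0.33)] * 5
compression = [(0.10, -0.50)] * 5
print(damage(tension) >= 1.0, damage(compression))
```

Returning an infinite fracture strain under strong compression is what implements the cut-off: the mushroomed, compressed head of a Taylor rod then accumulates no damage, while tensile regions can still fail.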

  4. Evaluation of Five Fracture Models in Taylor Impact Fracture

    NASA Astrophysics Data System (ADS)

    Zhang, Wei; Xiao, Xinke; Wei, Gang; Guo, Zitao

    2011-06-01

    Taylor impact tests presented in a previous study on a commercial high-strength, super-hard aluminum alloy 7A04-T6 are numerically evaluated using the finite element code ABAQUS/Explicit. In the present study, the influence of the fracture criterion on numerical simulations of the deformation and fracture behavior of the Taylor rod has been studied. Included in the paper are a modified version of the Johnson-Cook, the Cockcroft-Latham (C-L), the constant fracture strain, the maximum shear stress and the maximum principal stress fracture models. Model constants for each criterion are calibrated from material tests. The modified version of the Johnson-Cook fracture criterion with the stress triaxiality cut-off idea is found to give good predictions of the Taylor impact fracture behavior. However, this study also shows that the C-L fracture criterion, where only one simple material test is required for calibration, gives reasonable predictions. Unfortunately, the other three criteria are not able to reproduce the experimentally obtained fracture behavior. The study indicates that the stress triaxiality cut-off idea is necessary to predict Taylor impact fracture. This work was supported by the National Natural Science Foundation of China (No. 11072072).

  5. Reliability of Bolton analysis evaluation in tridimensional virtual models

    PubMed Central

    Brandão, Marianna Mendonca; Sobral, Marcio Costal; Vogel, Carlos Jorge

    2015-01-01

    Objective: The present study aimed at evaluating the reliability of Bolton analysis in tridimensional virtual models, comparing it with the manual method carried out on dental casts. Methods: The present investigation was performed using 56 pairs of dental casts produced from the dental arches of patients in perfect condition, randomly selected from the Universidade Federal da Bahia, School of Dentistry, Orthodontics Postgraduate Program. Manual measurements were obtained with the aid of a digital Cen-Tech 4"(r) caliper (Harbor Freight Tools, Calabasas, CA, USA). Subsequently, samples were digitized on a 3Shape(r) R-700T scanner (Copenhagen, Denmark) and digital measures were obtained with Ortho Analyzer software. Results: Data were subjected to statistical analysis and results revealed no statistically significant differences between measurements, with p = 0.173 and p = 0.239 for total and anterior proportions, respectively. Conclusion: Based on these findings, Bolton analysis performed on tridimensional virtual models is as reliable as measurements obtained from dental casts, with satisfactory agreement. PMID:26560824
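The quantities being compared between casts and virtual models are the Bolton ratios: the overall ratio compares the summed mesiodistal widths of the 12 mandibular vs. 12 maxillary teeth, and the anterior ratio the 6 anterior teeth. A sketch with hypothetical width sums:

```python
# Sketch of the Bolton tooth-size ratios evaluated in the study. The summed
# mesiodistal widths (mm) below are hypothetical, not the study's data.

def bolton_ratio(sum_mandibular, sum_maxillary):
    """Ratio (%) of mandibular to maxillary summed mesiodistal tooth widths."""
    return 100.0 * sum_mandibular / sum_maxillary

overall = bolton_ratio(85.0, 93.0)    # 12 teeth per arch
anterior = bolton_ratio(35.8, 46.3)   # canine to canine
print(f"overall {overall:.1f}%  anterior {anterior:.1f}%")
```

Bolton's published reference means are about 91.3% for the overall ratio and 77.2% for the anterior ratio; whether calipers on casts and software on scanned models yield the same ratios is exactly what the p-values in this record test.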

  6. Solid rocket booster performance evaluation model. Volume 1: Engineering description

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The space shuttle solid rocket booster performance evaluation model (SRB-II) is made up of analytical and functional simulation techniques linked together so that a single pass through the model will predict the performance of the propulsion elements of a space shuttle solid rocket booster. The available options allow the user to predict static test performance, predict nominal and off nominal flight performance, and reconstruct actual flight and static test performance. Options selected by the user are dependent on the data available. These can include data derived from theoretical analysis, small scale motor test data, large motor test data and motor configuration data. The user has several options for output format that include print, cards, tape and plots. Output includes all major performance parameters (Isp, thrust, flowrate, mass accounting and operating pressures) as a function of time as well as calculated single point performance data. The engineering description of SRB-II discusses the engineering and programming fundamentals used, the function of each module, and the limitations of each module.

  7. Evaluating Geographically Weighted Regression Models for Environmental Chemical Risk Analysis

    PubMed Central

    Czarnota, Jenna; Wheeler, David C; Gennings, Chris

    2015-01-01

    In the evaluation of cancer risk related to environmental chemical exposures, the effect of many correlated chemicals on disease is often of interest. The relationship between correlated environmental chemicals and health effects is not always constant across a study area, as exposure levels may change spatially due to various environmental factors. Geographically weighted regression (GWR) has been proposed to model spatially varying effects. However, concerns about collinearity effects, including regression coefficient sign reversal (ie, reversal paradox), may limit the applicability of GWR for environmental chemical risk analysis. A penalized version of GWR, the geographically weighted lasso, has been proposed to remediate the collinearity effects in GWR models. Our focus in this study was on assessing through a simulation study the ability of GWR and GWL to correctly identify spatially varying chemical effects for a mixture of correlated chemicals within a study area. Our results showed that GWR suffered from the reversal paradox, while GWL overpenalized the effects for the chemical most strongly related to the outcome. PMID:25983546
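The core GWR computation the record discusses is a weighted least-squares fit at each location, with weights that decay with distance (here a Gaussian kernel, one common choice). The sketch below uses one synthetic exposure and outcome; real GWR fits many correlated chemicals at once, which is where the collinearity problems arise.

```python
# Sketch of geographically weighted regression at one focal location:
# observations are down-weighted by distance via a Gaussian kernel, and a
# weighted least-squares slope is fit locally. Data and bandwidth synthetic.
import math

def gaussian_weights(dists, bandwidth):
    return [math.exp(-0.5 * (d / bandwidth) ** 2) for d in dists]

def weighted_slope(x, y, w):
    """Weighted simple-regression slope for one local fit."""
    sw = sum(w)
    xb = sum(wi * xi for wi, xi in zip(w, x)) / sw
    yb = sum(wi * yi for wi, yi in zip(w, y)) / sw
    num = sum(wi * (xi - xb) * (yi - yb) for wi, xi, yi in zip(w, x, y))
    den = sum(wi * (xi - xb) ** 2 for wi, xi in zip(w, x))
    return num / den

x = [1.0, 2.0, 3.0, 4.0]        # exposure at four monitoring sites
y = [1.1, 2.1, 2.9, 4.2]        # outcome at the same sites
dists = [0.0, 1.0, 5.0, 10.0]   # distances from the focal location
print(round(weighted_slope(x, y, gaussian_weights(dists, bandwidth=2.0)), 2))
```

Repeating this fit at every location yields spatially varying coefficients; the geographically weighted lasso adds an L1 penalty to each local fit to tame the collinearity-driven sign reversals described above.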

  8. Evaluation of Spatial Agreement of Distinct Landslide Prediction Models

    NASA Astrophysics Data System (ADS)

    Sterlacchini, Simone; Bordogna, Gloria; Frigerio, Ivan

    2013-04-01

    derived to test agreement among the maps. Nevertheless, no information was made available about the locations where the predictions of two or more maps agreed and where they did not. Thus we wanted to study whether the spatial agreement of the models extended to predicting the same or similar values. To this end we adopted a soft image fusion approach proposed in. It is defined as a group decision making model for ranking spatial alternatives based on a soft fusion of coherent evaluations. In order to apply this approach, the prediction maps were categorized into 10 distinct classes by using an equal-area criterion to compare the predicted results. Thus we applied soft fusion to the prediction maps, regarded as evaluations of distinct human experts. The fusion process needs the definition of the concept of "fuzzy majority", provided by a linguistic quantifier, in order to determine the coherence of a majority of maps in each pixel of the territory. Based on this, the overall spatial coherence among the majority of the prediction maps was evaluated. The spatial coherence among a fuzzy majority is defined based on the Minkowski OWA operators. The result made it possible to spatially identify sectors of the study area in which the predictions were in agreement for the same or close classes of susceptibility, or discordant, or even in distant classes. We studied the spatial agreement among a "fuzzy majority" defined as "80% of the 13 coherent maps", thus requiring that at least 11 out of 13 agree, since from previous results we knew that two maps were in disagreement. So the fuzzy majority AtLeast80% was defined by a quantifier with a linearly increasing membership function (0.8, 1). The coherence metric used was the Euclidean distance. We thus computed the soft fusion of AtLeast80% coherent maps for homogeneous groups of classes. We considered as homogeneous classes the highest two classes (9 and 10), the lowest two classes, and the central classes (4, 5 and 6).
We then fused the maps
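The quantifier-guided OWA aggregation used in this record can be sketched concretely: the "AtLeast80%" quantifier has a linear membership rising from 0.8 to 1.0, its OWA weights are differences of the quantifier at successive ranks, and the weights land on the lowest-ranked evaluations, so a pixel scores high only when a large majority of maps agree. (This basic OWA sketch omits the Minkowski/Euclidean coherence metric of the full method; the per-pixel map values are synthetic.)

```python
# Sketch of quantifier-guided OWA fusion of 13 susceptibility maps at one
# pixel. Weights w_i = Q(i/n) - Q((i-1)/n) for the linear quantifier Q
# rising from 0.8 to 1.0 ("AtLeast80%"). Map values are synthetic.

def quantifier(r, a=0.8, b=1.0):
    if r <= a:
        return 0.0
    if r >= b:
        return 1.0
    return (r - a) / (b - a)

def owa_weights(n, q=quantifier):
    return [q(i / n) - q((i - 1) / n) for i in range(1, n + 1)]

def owa(values, weights):
    ordered = sorted(values, reverse=True)  # OWA: weights apply by rank
    return sum(w * v for w, v in zip(weights, ordered))

w = owa_weights(13)
unanimous = [0.9] * 13                  # all 13 maps agree on high scores
dissenting = [0.9] * 11 + [0.2, 0.1]    # 11 agree, 2 maps disagree
print(round(owa(unanimous, w), 3), round(owa(dissenting, w), 3))
```

With this quantifier the weights are concentrated on ranks 11 to 13 of 13, so even two strongly dissenting maps pull the fused score well below the majority value, mirroring the "at least 11 out of 13 agree" requirement.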

  9. An evaluation of evaluative personality terms: a comparison of the big seven and five-factor model in predicting psychopathology.

    PubMed

    Durrett, Christine; Trull, Timothy J

    2005-09-01

    Two personality models are compared regarding their relationship with personality disorder (PD) symptom counts and with lifetime Axis I diagnoses. These models share 5 similar domains, and the Big 7 model also includes 2 domains assessing self-evaluation: positive and negative valence. The Big 7 model accounted for more variance in PDs than the 5-factor model, primarily because of the association of negative valence with most PDs. Although low positive valence was associated with most Axis I diagnoses, the 5-factor model generally accounted for more variance in Axis I diagnoses than the Big 7 model. Some predicted associations between self-evaluation and psychopathology were not found, and unanticipated associations emerged. These findings are discussed with regard to the utility of evaluative terms in clinical assessment.

  10. A merged model of quality improvement and evaluation: maximizing return on investment.

    PubMed

    Woodhouse, Lynn D; Toal, Russ; Nguyen, Trang; Keene, DeAnna; Gunn, Laura; Kellum, Andrea; Nelson, Gary; Charles, Simone; Tedders, Stuart; Williams, Natalie; Livingood, William C

    2013-11-01

    Quality improvement (QI) and evaluation are frequently considered to be alternative approaches for monitoring and assessing program implementation and impact. The emphasis on third-party evaluation, particularly associated with summative evaluation, and the grounding of evaluation in the social and behavioral sciences contrast with an emphasis on the integration of the QI process within programs or organizations and its origins in management science and industrial engineering. Working with a major philanthropic organization in Georgia, we illustrate how a QI model is integrated with evaluation for five asthma prevention and control sites serving poor and underserved communities in rural and urban Georgia. A primary foundation of this merged model of QI and evaluation is a refocusing of the evaluation from an intimidating report-card summative evaluation by external evaluators to an internally engaged program focus on developmental evaluation. The benefits of the merged model to both QI and evaluation are discussed. The use of evaluation-based logic models can help anchor a QI program in evidence-based practice and provide linkage between process and outputs and the longer term distal outcomes. Merging the QI approach with evaluation has major advantages, particularly related to enhancing the funder's return on investment. We illustrate how a Plan-Do-Study-Act model of QI can (a) be integrated with evaluation-based logic models, (b) help refocus emphasis from summative to developmental evaluation, (c) enhance program ownership and engagement in evaluation activities, and (d) increase the role of evaluators in providing technical assistance and support.

  11. Evaluation of Model Microphysics Within Precipitation Bands of Extratropical Cyclones

    NASA Technical Reports Server (NTRS)

    Colle, Brian A.; Molthan, Andrew; Yu, Ruyi; Stark, David; Yuter, Sandra; Nesbitt, Steven

    2013-01-01

    Recent studies evaluating the bulk microphysical schemes (BMPs) within cloud resolving models (CRMs) have indicated large uncertainties and errors in the amount and size distributions of snow and cloud ice aloft. The snow prediction is sensitive to the snow densities, habits, and degree of riming within the BMPs. Improving these BMPs is a crucial step toward improving both weather forecasting and climate predictions. Several microphysical schemes in the Weather Research and Forecasting (WRF) model down to 1.33-km grid spacing are evaluated using aircraft, radar, and ground in situ data from the Global Precipitation Mission Coldseason Precipitation Experiment (GCPEx) experiment, as well as a few years (15 winter storms) of surface measurements of riming, crystal habit, snow density, and radar measurements at Stony Brook, NY (SBNY on north shore of Long Island) during the 2009-2012 winter seasons. Surface microphysical measurements at SBNY were taken every 15 to 30 minutes using a stereo microscope and camera, and snow depth and snow density were also recorded. During these storms, a vertically-pointing Ku-band radar was used to observe the vertical evolution of reflectivity and Doppler vertical velocities. A Particle Size and Velocity (PARSIVEL) disdrometer was also used to measure the surface size distribution and fall speeds of snow at SBNY. For the 15 cases at SBNY, the WSM6, Morrison (MORR), Thompson (THOM2), and Stony Brook (SBU-YLIN) BMPs were validated. A non-spherical snow assumption (THOM2 and SBU-YLIN) simulated a more realistic distribution of reflectivity than spherical snow assumptions in the WSM6 and MORR schemes. The MORR, WSM6, and SBU-YLIN schemes are comparable to the observed velocity distribution in light and moderate riming periods. The THOM2 is 0.25 meters per second too slow with its velocity distribution in these periods. In heavier riming, the vertical Doppler velocities in the WSM6, THOM2, and MORR schemes were 0.25 meters per second too

  12. Evaluation of Model Microphysics within Precipitation Bands of Extratropical Cyclones

    NASA Technical Reports Server (NTRS)

    Colle, Brian A.; Yu, Ruyi; Molthan, Andrew L.; Nesbitt, Steven

    2013-01-01

Recent studies evaluating the bulk microphysical schemes (BMPs) within cloud-resolving models (CRMs) have indicated large uncertainties and errors in the amount and size distributions of snow and cloud ice aloft. The snow prediction is sensitive to the snow densities, habits, and degree of riming within the BMPs. Improving these BMPs is a crucial step toward improving both weather forecasting and climate predictions. Several microphysical schemes in the Weather Research and Forecasting (WRF) model down to 1.33-km grid spacing are evaluated using aircraft, radar, and ground in situ data from the Global Precipitation Measurement (GPM) Cold-season Precipitation Experiment (GCPEx), as well as a few years (15 winter storms) of surface measurements of riming, crystal habit, snow density, and radar measurements at Stony Brook, NY (SBNY, on the north shore of Long Island) during the 2009-2012 winter seasons. Surface microphysical measurements at SBNY were taken every 15 to 30 minutes using a stereo microscope and camera, and snow depth and snow density were also recorded. During these storms, a vertically pointing Ku-band radar was used to observe the vertical evolution of reflectivity and Doppler vertical velocities. A Particle Size and Velocity (PARSIVEL) disdrometer was also used to measure the surface size distribution and fall speeds of snow at SBNY. For the 15 cases at SBNY, the WSM6, Morrison (MORR), Thompson (THOM2), and Stony Brook (SBU-YLIN) BMPs were validated. The non-spherical snow assumptions (THOM2 and SBU-YLIN) simulated a more realistic distribution of reflectivity than the spherical snow assumptions in the WSM6 and MORR schemes. The MORR, WSM6, and SBU-YLIN schemes are comparable to the observed velocity distribution in light and moderate riming periods, while THOM2 is approximately 0.25 m/s too slow in these periods.
In heavier riming, the vertical Doppler velocities in the WSM6, THOM2, and MORR schemes were approximately 0.25 m/s too slow, while the

  13. Global Modeling of Tropospheric Chemistry with Assimilated Meteorology: Model Description and Evaluation

    NASA Technical Reports Server (NTRS)

    Bey, Isabelle; Jacob, Daniel J.; Yantosca, Robert M.; Logan, Jennifer A.; Field, Brendan D.; Fiore, Arlene M.; Li, Qin-Bin; Liu, Hong-Yu; Mickley, Loretta J.; Schultz, Martin G.

    2001-01-01

We present a first description and evaluation of GEOS-CHEM, a global three-dimensional (3-D) model of tropospheric chemistry driven by assimilated meteorological observations from the Goddard Earth Observing System (GEOS) of the NASA Data Assimilation Office (DAO). The model is applied to a 1-year simulation of tropospheric ozone-NOx-hydrocarbon chemistry for 1994, and is evaluated with observations both for 1994 and for other years. It reproduces the concentrations of ozone observed from the worldwide ozonesonde data network, usually to within 10 ppb. It simulates correctly the seasonal phases and amplitudes of ozone concentrations for different regions and altitudes, but tends to underestimate the seasonal amplitude at northern midlatitudes. Concentrations of NO and peroxyacetylnitrate (PAN) observed in aircraft campaigns are generally reproduced to within a factor of 2, and often much better. Concentrations of HNO3 in the remote troposphere are overestimated, typically by a factor of 2-3, a common problem in global models that may reflect a combination of insufficient precipitation scavenging and gas-aerosol partitioning not resolved by the model. The model yields an atmospheric lifetime of methylchloroform (a proxy for global OH) of 5.1 years, as compared to a best estimate from observations of 5.5 plus or minus 0.8 years, and simulates H2O2 concentrations observed from aircraft with significant regional disagreements but no global bias. The OH concentrations are approximately 20% higher than in our previous global 3-D model, which included a UV-absorbing aerosol. Concentrations of CO tend to be underestimated by the model, often by 10-30 ppb, which could reflect a combination of excessive OH (a 20% decrease in model OH could be accommodated by the methylchloroform constraint) and an underestimate of CO sources (particularly biogenic). 
The model underestimates observed acetone concentrations over the South Pacific in fall by a factor of 3; a missing source

  14. Evaluation and Applications of the Prediction of Intensity Model Error (PRIME) Model

    NASA Astrophysics Data System (ADS)

    Bhatia, K. T.; Nolan, D. S.; Demaria, M.; Schumacher, A.

    2015-12-01

Forecasters and end users of tropical cyclone (TC) intensity forecasts would greatly benefit from a reliable expectation of model error to counteract the lack of consistency in TC intensity forecast performance. As a first step towards producing error predictions to accompany each TC intensity forecast, Bhatia and Nolan (2013) studied the relationship between synoptic parameters, TC attributes, and forecast errors. In this study, we build on those results by testing the ability of the Prediction of Intensity Model Error (PRIME) model to forecast the absolute error and bias of four leading intensity models available for guidance in the Atlantic basin. PRIME forecasts are independently evaluated at each 12-hour interval from 12 to 120 hours during the 2007-2014 Atlantic hurricane seasons. The absolute error and bias predictions of PRIME are compared to their respective climatologies to determine their skill. In addition to these results, we will present the performance of the operational version of PRIME run during the 2015 hurricane season. PRIME verification results show that it can reliably anticipate situations where particular models excel, and therefore could lead to a more informed protocol for hurricane evacuations and storm preparations. These positive conclusions suggest that PRIME forecasts also have the potential to lower the error in the original intensity forecasts of each model. As a result, two techniques are proposed to develop a post-processing procedure for a multimodel ensemble based on PRIME. The first approach is to inverse-weight models using PRIME absolute error predictions (higher predicted absolute error corresponds to lower weights). The second multimodel ensemble applies PRIME bias predictions to each model's intensity forecast and the mean of the corrected models is evaluated. The forecasts of both of these experimental ensembles are compared to those of the equal-weight ICON ensemble, which currently
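The first post-processing approach described above, inverse-weighting member models by their PRIME-predicted absolute errors, can be sketched briefly. The forecast and predicted-error values below are hypothetical illustrations, not values from the study:

```python
def inverse_error_weights(predicted_abs_errors):
    """Weight each model inversely to its PRIME-predicted absolute error."""
    inv = [1.0 / e for e in predicted_abs_errors]
    total = sum(inv)
    return [v / total for v in inv]

def weighted_consensus(forecasts, weights):
    """Weighted-mean intensity forecast across the member models."""
    return sum(f * w for f, w in zip(forecasts, weights))

# Hypothetical intensity forecasts (kt) from four guidance models and
# the PRIME-predicted absolute error (kt) for each:
forecasts = [95.0, 100.0, 110.0, 105.0]
pred_errors = [8.0, 12.0, 20.0, 10.0]

w = inverse_error_weights(pred_errors)
consensus = weighted_consensus(forecasts, w)
```

Models with lower predicted error dominate the consensus; with equal predicted errors the scheme reduces to an equal-weight ICON-style mean.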

  15. Global modeling of tropospheric chemistry with assimilated meteorology: Model description and evaluation

    NASA Astrophysics Data System (ADS)

Bey, Isabelle; Jacob, Daniel J.; Yantosca, Robert M.; Logan, Jennifer A.; Field, Brendan D.; Fiore, Arlene M.; Li, Qinbin; Liu, Hongyu; Mickley, Loretta J.; Schultz, Martin G.

    2001-10-01

We present a first description and evaluation of GEOS-CHEM, a global three-dimensional (3-D) model of tropospheric chemistry driven by assimilated meteorological observations from the Goddard Earth Observing System (GEOS) of the NASA Data Assimilation Office (DAO). The model is applied to a 1-year simulation of tropospheric ozone-NOx-hydrocarbon chemistry for 1994, and is evaluated with observations both for 1994 and for other years. It reproduces the concentrations of ozone observed from the worldwide ozonesonde data network, usually to within 10 ppb. It simulates correctly the seasonal phases and amplitudes of ozone concentrations for different regions and altitudes, but tends to underestimate the seasonal amplitude at northern midlatitudes. Concentrations of NO and peroxyacetylnitrate (PAN) observed in aircraft campaigns are generally reproduced to within a factor of 2, and often much better. Concentrations of HNO3 in the remote troposphere are overestimated, typically by a factor of 2-3, a common problem in global models that may reflect a combination of insufficient precipitation scavenging and gas-aerosol partitioning not resolved by the model. The model yields an atmospheric lifetime of methylchloroform (a proxy for global OH) of 5.1 years, as compared to a best estimate from observations of 5.5 +/- 0.8 years, and simulates H2O2 concentrations observed from aircraft with significant regional disagreements but no global bias. The OH concentrations are ~20% higher than in our previous global 3-D model, which included a UV-absorbing aerosol. Concentrations of CO tend to be underestimated by the model, often by 10-30 ppb, which could reflect a combination of excessive OH (a 20% decrease in model OH could be accommodated by the methylchloroform constraint) and an underestimate of CO sources (particularly biogenic). The model underestimates observed acetone concentrations over the South Pacific in fall by a factor of 3; a missing source from the ocean may be

  16. Evaluating experimental design for soil-plant model selection with Bayesian model averaging

    NASA Astrophysics Data System (ADS)

    Wöhling, Thomas; Geiges, Andreas; Nowak, Wolfgang; Gayler, Sebastian

    2013-04-01

    The objective selection of appropriate models for realistic simulations of coupled soil-plant processes is a challenging task since the processes are complex, not fully understood at larger scales, and highly non-linear. Also, comprehensive data sets are scarce, and measurements are uncertain. In the past decades, a variety of different models have been developed that exhibit a wide range of complexity regarding their approximation of processes in the coupled model compartments. We present a method for evaluating experimental design for maximum confidence in the model selection task. The method considers uncertainty in parameters, measurements and model structures. Advancing the ideas behind Bayesian Model Averaging (BMA), the model weights in BMA are perceived as uncertain quantities with assigned probability distributions that narrow down as more data are made available. This allows assessing the power of different data types, data densities and data locations in identifying the best model structure from among a suite of plausible models. The models considered in this study are the crop models CERES, SUCROS, GECROS and SPASS, which are coupled to identical routines for simulating soil processes within the modelling framework Expert-N. The four models considerably differ in the degree of detail at which crop growth and root water uptake are represented. Monte-Carlo simulations were conducted for each of these models considering their uncertainty in soil hydraulic properties and selected crop model parameters. The models were then conditioned on field measurements of soil moisture, leaf-area index (LAI), and evapotranspiration rates (from eddy-covariance measurements) during a vegetation period of winter wheat at the Nellingen site in Southwestern Germany. Following our new method, we derived the BMA model weights (and their distributions) when using all data or different subsets thereof. 
We discuss to which degree the posterior BMA mean outperformed the prior BMA
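The central quantity in BMA is the posterior weight of each model, proportional to its evidence p(D|Mk) times its prior. A minimal sketch, assuming uniform priors and precomputed log-evidences; the numeric values are hypothetical stand-ins for the four crop models, not results from the study:

```python
import math

def bma_weights(log_evidences, priors=None):
    """Posterior BMA model weights from log-evidences log p(D|Mk),
    normalized with the log-sum-exp trick for numerical stability."""
    n = len(log_evidences)
    if priors is None:
        priors = [1.0 / n] * n  # uniform model prior
    log_post = [le + math.log(p) for le, p in zip(log_evidences, priors)]
    m = max(log_post)
    unnorm = [math.exp(lp - m) for lp in log_post]
    total = sum(unnorm)
    return [u / total for u in unnorm]

# Hypothetical log-evidences for four competing crop models:
w = bma_weights([-120.3, -118.9, -125.0, -119.5])
```

As more conditioning data arrive, the log-evidences spread apart and the weight distribution narrows onto the best-supported model, which is the behavior the experimental-design analysis exploits.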

  17. Collaborative evaluation and market research converge: an innovative model agricultural development program evaluation in Southern Sudan.

    PubMed

    O'Sullivan, John M; O'Sullivan, Rita

    2012-11-01

In June and July 2006 a team of outside experts arrived in Yei, Southern Sudan through an AID project to provide support to a local agricultural development project. The team brought evaluation, agricultural marketing, and financial management expertise to the in-country partners looking at steps to rebuild the economy of the war-ravaged region. A partnership of local officials, agricultural development staff, and students worked with the outside team to craft a survey of agricultural traders working between northern Uganda and Southern Sudan, following the steps of a collaborative evaluation model. The goal was to create a market directory of use to producers, government officials, and others interested in stimulating agricultural trade. The directory of agricultural producers and distributors served as an agricultural development and promotion tool, as did the collaborative process itself.

  19. Simulation Modeling and Performance Evaluation of Space Networks

    NASA Technical Reports Server (NTRS)

    Jennings, Esther H.; Segui, John

    2006-01-01

In space exploration missions, the coordinated use of spacecraft as communication relays increases the efficiency of the endeavors. To conduct trade-off studies of the performance and resource usage of different communication protocols and network designs, JPL designed a comprehensive extendable tool, the Multi-mission Advanced Communications Hybrid Environment for Test and Evaluation (MACHETE). The design and development of MACHETE began in 2000 and is constantly evolving. Currently, MACHETE contains Consultative Committee for Space Data Systems (CCSDS) protocol standards such as Proximity-1, Advanced Orbiting Systems (AOS), Packet Telemetry/Telecommand, Space Communications Protocol Specification (SCPS), and the CCSDS File Delivery Protocol (CFDP). MACHETE uses the Aerospace Corporation's Satellite Orbital Analysis Program (SOAP) to generate the orbital geometry information and contact opportunities. Matlab scripts provide the link characteristics. At the core of MACHETE is a discrete event simulator, QualNet. Delay Tolerant Networking (DTN) is an end-to-end architecture providing communication in and/or through highly stressed networking environments. Stressed networking environments include those with intermittent connectivity, large and/or variable delays, and high bit error rates. To provide its services, the DTN protocols reside at the application layer of the constituent internets, forming a store-and-forward overlay network. The key capabilities of the bundling protocols include custody-based reliability, ability to cope with intermittent connectivity, ability to take advantage of scheduled and opportunistic connectivity, and late binding of names to addresses. In this presentation, we report on the addition of MACHETE models needed to support DTN, namely the Bundle Protocol (BP) model. To illustrate the use of MACHETE with the additional DTN model, we provide an example simulation to benchmark its performance. 
We demonstrate the use of the DTN protocol

  20. Evaluation of an in vitro toxicogenetic mouse model for hepatotoxicity

    SciTech Connect

    Martinez, Stephanie M.; Bradford, Blair U.; Soldatow, Valerie Y.; Witek, Rafal; Kaiser, Robert; Stewart, Todd; Amaral, Kirsten; Freeman, Kimberly; Black, Chris; LeCluyse, Edward L.; Ferguson, Stephen S.

    2010-12-15

    Numerous studies support the fact that a genetically diverse mouse population may be useful as an animal model to understand and predict toxicity in humans. We hypothesized that cultures of hepatocytes obtained from a large panel of inbred mouse strains can produce data indicative of inter-individual differences in in vivo responses to hepato-toxicants. In order to test this hypothesis and establish whether in vitro studies using cultured hepatocytes from genetically distinct mouse strains are feasible, we aimed to determine whether viable cells may be isolated from different mouse inbred strains, evaluate the reproducibility of cell yield, viability and functionality over subsequent isolations, and assess the utility of the model for toxicity screening. Hepatocytes were isolated from 15 strains of mice (A/J, B6C3F1, BALB/cJ, C3H/HeJ, C57BL/6J, CAST/EiJ, DBA/2J, FVB/NJ, BALB/cByJ, AKR/J, MRL/MpJ, NOD/LtJ, NZW/LacJ, PWD/PhJ and WSB/EiJ males) and cultured for up to 7 days in traditional 2-dimensional culture. Cells from B6C3F1, C57BL/6J, and NOD/LtJ strains were treated with acetaminophen, WY-14,643 or rifampin and concentration-response effects on viability and function were established. Our data suggest that high yield and viability can be achieved across a panel of strains. Cell function and expression of key liver-specific genes of hepatocytes isolated from different strains and cultured under standardized conditions are comparable. Strain-specific responses to toxicant exposure have been observed in cultured hepatocytes and these experiments open new opportunities for further developments of in vitro models of hepatotoxicity in a genetically diverse population.

  1. Small animal model of weightlessness for pharmacokinetic evaluation.

    PubMed

    Feldman, S; Brunner, L J

    1994-06-01

As the United States seeks a greater presence in space, physiologic changes associated with space flight become of greater concern. Exposure to a weightless environment has been shown to have numerous effects on body composition and organ function. Alterations include decreases in muscle and liver mass, changes in bone structure and integrity, and fluid shifts markedly affecting cardiovascular functioning. Furthermore, metabolic activity of the liver has been found to be altered in rats after extended periods of weightlessness. As the length of space travel increases, the probability of needing to administer pharmacologic agents to crew members during space flight for prophylaxis or treatment becomes greater. Thus, because of the observed physiologic and metabolic changes associated with weightlessness, it is reasonable to assume that the pharmacokinetics and pharmacodynamics of xenobiotics administered during space flight may be different from those found in a 1g environment. To address these possible changes, the development of a model of weightlessness to investigate possible alterations in drug pharmacokinetics and pharmacodynamics before space flight is of importance. The tail-suspended rat is a well-described model of weightlessness. During the time of suspension, the pharmacokinetics of marker compounds can be used to evaluate changes in hepatic and renal physiology. Suspending rats for different periods allows investigation of the relationship between the length of weightlessness exposure and drug pharmacology. Results from the use of the suspended rat model provide valuable information regarding possible pharmacokinetic and pharmacodynamic changes associated with weightlessness and, therefore, provide space biomedical researchers with a method of investigating drug administration during space flight missions.

  2. Model-based damage evaluation of layered CFRP structures

    NASA Astrophysics Data System (ADS)

    Munoz, Rafael; Bochud, Nicolas; Rus, Guillermo; Peralta, Laura; Melchor, Juan; Chiachío, Juan; Chiachío, Manuel; Bond, Leonard J.

    2015-03-01

An ultrasonic evaluation technique for damage identification of layered CFRP structures is presented. This approach relies on a model-based estimation procedure that combines experimental data and simulation of ultrasonic damage-propagation interactions. The CFRP structure, a [0/90]4s lay-up, has been tested in an immersion through-transmission experiment, where a scan has been performed on a damaged specimen. Most ultrasonic techniques in industrial practice consider only a few features of the received signals, namely, time of flight, amplitude, attenuation, frequency content, and so forth. In this case, once signals are captured, an algorithm is used to reconstruct the complete signal waveform and extract the unknown damage parameters by means of modeling procedures. A linear version of the data processing has been performed, where only Young's modulus has been monitored, and, in a second nonlinear version, the first-order nonlinear coefficient β was incorporated to test the possibility of detecting early damage. The aforementioned physical simulation models are solved by the Transfer Matrix formalism, which has been extended from the linear to the nonlinear harmonic generation technique. The damage parameter search strategy is based on minimizing the mismatch between the captured and simulated signals in the time domain in an automated way using Genetic Algorithms. Processing all scanned locations, a C-scan of the parameter of each layer can be reconstructed, obtaining the information describing the state of each layer and each interface. Damage can be located and quantified in terms of changes in the selected parameter with a measurable extension. In the case of the first-order nonlinear coefficient, evidence of higher sensitivity to damage than imaging the linearly estimated Young's modulus is provided.
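The inversion step, minimizing the time-domain mismatch between captured and simulated waveforms, can be illustrated with a toy example. Here a simple exponential-decay forward model and a grid search stand in for the transfer-matrix simulation and the genetic algorithm of the paper; the attenuation parameter alpha is purely illustrative:

```python
import numpy as np

def mismatch(captured, simulated):
    """L2 mismatch between captured and simulated waveforms (time domain)."""
    return float(np.sum((captured - simulated) ** 2))

def forward(alpha, t):
    """Toy forward model: a 5 MHz tone with damage-dependent decay."""
    return np.exp(-alpha * t) * np.sin(2 * np.pi * 5e6 * t)

t = np.linspace(0.0, 2e-6, 500)
captured = forward(3e6, t)  # synthetic "measurement" with true alpha = 3e6

# Parameter search (a grid here, in place of the genetic algorithm):
alphas = np.linspace(1e6, 5e6, 81)
best = min(alphas, key=lambda a: mismatch(captured, forward(a, t)))
```

Repeating such a search at every scan location yields the per-layer parameter estimates that are assembled into the C-scan.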

  3. MODELING AND BIOPHARMACEUTICAL EVALUATION OF SEMISOLID SYSTEMS WITH ROSEMARY EXTRACT.

    PubMed

    Ramanauskiene, Kristina; Zilius, Modestas; Kancauskas, Marius; Juskaite, Vaida; Cizinauskas, Vytis; Inkeniene, Asta; Petrikaite, Vilma; Rimdeika, Rytis; Briedis, Vitalis

    2016-01-01

Scientific literature provides a great deal of studies supporting the antioxidant effects of rosemary, protecting the body's cells against reactive oxygen species and their negative impact. Ethanol rosemary extracts were produced by the maceration method. To assess the biological activity of rosemary extracts, antioxidant and antimicrobial activity tests were performed. Antimicrobial activity tests revealed that G+ microorganisms are most sensitive to liquid rosemary extract, while G- microorganisms are most resistant to it. For the purposes of the experiments, five types of semisolid systems were modeled: hydrogel, oleogel, absorption-hydrophobic ointment, oil-in-water-type cream, and water-in-oil-type cream, which contained rosemary extract as an active ingredient. Study results show that liquid rosemary extract was distributed evenly in the aqueous phase of the water-in-oil-type system, forming stable emulsion systems. The following research aim was chosen to evaluate the semisolid systems with rosemary extract: to model semisolid preparations with liquid rosemary extract, determine the influence of excipients on their quality, and perform in vitro studies of the release of active ingredients and antimicrobial activity. It was found that the oil-in-water-type gel-cream has antimicrobial activity against Staphylococcus epidermidis bacteria and the Candida albicans fungus, while the hydrogel affected only Candida albicans. According to the results of the biopharmaceutical study, the modeled semisolid systems with rosemary extract can be arranged in ascending order of the release of phenolic compounds from the forms: water-in-oil-type cream < absorption-hydrophobic ointment < Pionier PLW oleogel < oil-in-water-type eucerin cream < hydrogel < oil-in-water-type gel-cream. Study results showed that the oil-in-water-type gel-cream is the most suitable vehicle for liquid rosemary extract used as an active ingredient.

  4. Carbosoil, a land evaluation model for soil carbon accounting

    NASA Astrophysics Data System (ADS)

    Anaya-Romero, M.; Muñoz-Rojas, M.; Pino, R.; Jordan, A.; Zavala, L. M.; De la Rosa, D.

    2012-04-01

The belowground carbon content is particularly difficult to quantify, and most of the time it is assumed to be a fixed fraction or ignored for lack of better information. In this respect, this research presents a land evaluation tool, Carbosoil, for predicting soil carbon accounting where such data are scarce or not available, as a new component of MicroLEIS DSS. The pilot study area was a Mediterranean region (Andalusia, Southern Spain) during 1956-2007. Input data were obtained from different data sources and include 1689 soil profiles from Andalusia (S Spain). Previously, detailed studies of changes in LU and vegetation carbon stocks and soil organic carbon (SOC) dynamics were carried out. Previous results showed the influence of LU, climate (mean temperature and rainfall), and soil variables related to SOC dynamics. For instance, soil carbon stocks decreased in Cambisols and Regosols by 80% when LU changed from forest to heterogeneous agricultural areas. Taking this into account, the input variables considered were LU, site (elevation, slope, erosion, type of drainage, and soil depth), climate (mean winter/summer temperature and annual precipitation), and soil (pH, nitrates, CEC, sand/clay content, bulk density, and field capacity). The available data set was randomly split into two parts: a training set (75%) and a validation set (25%). The model was built by using multiple linear regression. The regression coefficient (R2) obtained in the calibration and validation of Carbosoil was >0.9 for the considered soil sections (0-25, 25-50, and 50-75 cm). The validation showed the high accuracy of the model and its capacity to discriminate carbon distribution regarding different climate, LU, and soil management scenarios. The Carbosoil model, together with the methodologies and information generated in this work, will be a useful basis for accurately quantifying and understanding the distribution of soil carbon stocks, helpful for decision makers.
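The fitting procedure described (multiple linear regression on a random 75/25 train/validation split, scored by R2) can be sketched with synthetic data; the predictors and coefficients below are placeholders, not the actual Carbosoil variables:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for four predictors (e.g. elevation, temperature,
# precipitation, clay content) and a soil-carbon response.
n = 400
X = rng.normal(size=(n, 4))
true_beta = np.array([1.5, -0.8, 0.6, 2.0])
y = X @ true_beta + 3.0 + rng.normal(scale=0.3, size=n)

# Random 75/25 train/validation split.
idx = rng.permutation(n)
train, valid = idx[:300], idx[300:]

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(train.size), X[train]])
beta, *_ = np.linalg.lstsq(A, y[train], rcond=None)

# R^2 on the held-out validation set.
Av = np.column_stack([np.ones(valid.size), X[valid]])
resid = y[valid] - Av @ beta
r2 = 1.0 - resid.var() / y[valid].var()
```

Holding out the validation quarter, as the study does, guards against the R2 > 0.9 figure merely reflecting overfit to the calibration profiles.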

  5. A Geospatial Model for Remedial Design Optimization and Performance Evaluation

    SciTech Connect

    Madrid, V M; Demir, Z; Gregory, S; Valett, J; Halden, R U

    2002-02-19

    Soil and ground water remediation projects require collection and interpretation of large amounts of spatial data. Two-dimensional (2D) mapping techniques are often inadequate for characterizing complex subsurface conditions at contaminated sites. To interpret data from these sites, we developed a methodology that allows integration of multiple, three-dimensional (3D) data sets for spatial analysis. This methodology was applied to the Department of Energy (DOE) Building 834 Operable Unit at Lawrence Livermore National Laboratory Site 300, in central California. This site is contaminated with a non-aqueous phase liquid (NAPL) mixture consisting of trichloroethene (TCE) and tetrakis (2-ethylbutoxy) silane (TKEBS). In the 1960s and 1970s, releases of this heat-exchange fluid to the environment resulted in TCE concentrations up to 970 mg/kg in soil and dissolved-phase concentrations approaching the solubility limit in a shallow, perched water-bearing zone. A geospatial model was developed using site hydrogeological data, and monitoring data for volatile organic compounds (VOCs) and biogeochemical parameters. The model was used to characterize the distribution of contamination in different geologic media, and to track changes in subsurface contaminant mass related to treatment facility operation and natural attenuation processes. Natural attenuation occurs mainly as microbial reductive dechlorination of TCE which is dependent on the presence of TKEBS, whose fermentation provides the hydrogen required for microbial reductive dechlorination of VOCs. Output of the geospatial model shows that soil vapor extraction (SVE) is incompatible with anaerobic VOC transformation, presumably due to temporary microbial inhibition caused by oxygen influx into the subsurface. Geospatial analysis of monitoring data collected over a three-year period allowed for generation of representative monthly VOC plume maps and dissolved-phase mass estimates. The latter information proved to be

  7. Evaluation for School Improvement: A Multi-Level, Multi-Purpose Model. Project: Multi-Level Evaluation Systems.

    ERIC Educational Resources Information Center

    Herman, Joan L.

    A model for a comprehensive, multi-purpose, multi-user evaluation system is presented to facilitate educational decision making and to support school improvement and renewal. The model is school district-based but oriented to meet state-, school-, and classroom-level needs as well. The model emphasizes the usefulness of common or compatible…

  8. A multimedia fate and chemical transport modeling system for pesticides: II. Model evaluation

    NASA Astrophysics Data System (ADS)

    Li, Rong; Scholtz, M. Trevor; Yang, Fuquan; Sloan, James J.

    2011-07-01

    Pesticides have adverse health effects and can be transported over long distances to contaminate sensitive ecosystems. To address problems caused by environmental pesticides we developed a multimedia multi-pollutant modeling system, and here we present an evaluation of the model by comparing modeled results against measurements. The modeled toxaphene air concentrations for two sites, in Louisiana (LA) and Michigan (MI), are in good agreement with measurements (average concentrations agree to within a factor of 2). Because the residue inventory showed no soil residues at these two sites, resulting in no emissions, the concentrations must be caused by transport; the good agreement between the modeled and measured concentrations suggests that the model simulates atmospheric transport accurately. Compared to the LA and MI sites, the measured air concentrations at two other sites having toxaphene soil residues leading to emissions, in Indiana and Arkansas, showed more pronounced seasonal variability (higher in warmer months); this pattern was also captured by the model. The model-predicted toxaphene concentration fraction on particles (0.5-5%) agrees well with measurement-based estimates (3% or 6%). There is also good agreement between modeled and measured dry (1:1) and wet (within a factor of less than 2) depositions in Lake Ontario. Additionally this study identified erroneous soil residue data around a site in Texas in a published US toxaphene residue inventory, which led to very low modeled air concentrations at this site. Except for the erroneous soil residue data around this site, the good agreement between the modeled and observed results implies that both the US and Mexican toxaphene soil residue inventories are reasonably good. This agreement also suggests that the modeling system is capable of simulating the important physical and chemical processes in the multimedia compartments.
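Several of the comparisons above are stated as agreement "to within a factor of 2". That criterion is easy to make precise; a small helper, with illustrative numbers:

```python
def within_factor(modeled, observed, factor=2.0):
    """True if modeled and observed agree to within the given factor."""
    if modeled <= 0 or observed <= 0:
        return False
    ratio = modeled / observed
    return 1.0 / factor <= ratio <= factor

# e.g. modeled vs. measured average air concentrations (arbitrary units):
ok = within_factor(1.4, 0.9)   # ratio ~1.56, within a factor of 2
bad = within_factor(5.0, 1.0)  # ratio 5, outside a factor of 2
```

The same test with factor=2 or 3 matches the thresholds quoted for the air-concentration and deposition comparisons.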

  9. MIRAGE: Model Description and Evaluation of Aerosols and Trace Gases

    SciTech Connect

    Easter, Richard C.; Ghan, Steven J.; Zhang, Yang; Saylor, Rick D.; Chapman, Elaine G.; Laulainen, Nels S.; Abdul-Razzak, Hayder; Leung, Lai-Yung R.; Bian, Xindi; Zaveri, Rahul A.

    2004-10-27

The MIRAGE (Model for Integrated Research on Atmospheric Global Exchanges) modeling system, designed to study the impacts of anthropogenic aerosols on the global environment, is described. MIRAGE consists of a chemical transport model coupled on line with a global climate model. The chemical transport model simulates trace gases, aerosol number, and aerosol chemical component mass [sulfate, MSA, organic matter, black carbon (BC), sea salt, mineral dust] for four aerosol modes (Aitken, accumulation, coarse sea salt, coarse mineral dust) using the modal aerosol dynamics approach. Cloud-phase and interstitial aerosol are predicted separately. The climate model, based on the CCM2, has physically-based treatments of aerosol direct and indirect forcing. Stratiform cloud water and droplet number are simulated using a bulk microphysics parameterization that includes aerosol activation. Aerosol and trace gas species simulated by MIRAGE are presented and evaluated using surface and aircraft measurements. Surface-level SO2 in N. American and European source regions is higher than observed. SO2 above the boundary layer is in better agreement with observations, and surface-level SO2 at marine locations is somewhat lower than observed. Comparison with other models suggests insufficient SO2 dry deposition; increasing the deposition velocity improves simulated SO2. Surface-level sulfate in N. American and European source regions is in good agreement with observations, although the seasonal cycle in Europe is stronger than observed. Surface-level sulfate at high-latitude and marine locations, and sulfate above the boundary layer, are higher than observed. This is attributed primarily to insufficient wet removal; increasing the wet removal improves simulated sulfate at remote locations and aloft. Because of the high sulfate bias, radiative forcing estimates for anthropogenic sulfur in Ghan et al. [2001c] are probably too high. Surface-level DMS is ~40% higher than observed

  10. Wind-blown sand on beaches: an evaluation of models

    NASA Astrophysics Data System (ADS)

    Sherman, Douglas J.; Jackson, Derek W. T.; Namikas, Steven L.; Wang, Jinkang

    1998-03-01

    Five models for predicting rates of aeolian sand transport were evaluated using empirical data obtained from field experiments conducted in April 1994 at a beach on Inch Spit, Co. Kerry, Republic of Ireland. Measurements were made of vertical wind profiles (to derive shear velocity estimates), beach slope, and rates of sand transport. Sediment samples were taken to assess characteristics of grain size and surface moisture content. Estimates of threshold shear velocity were derived using grain size data. After parsing the field data on the basis of the quality of shear velocity estimation and the occurrence of blowing sand, 51 data sets describing rates of sand transport and environmental conditions were retained. Mean grain diameter was 0.17 mm. Surface slopes ranged from 0.02 on the foreshore to about 0.11 near the dune toe. Mean shear velocities ranged from 0.23 m s⁻¹ (just above the observed transport threshold) to 0.65 m s⁻¹. Rates of transport ranged from 0.02 kg m⁻¹ h⁻¹ to more than 80 kg m⁻¹ h⁻¹. These data were used as input to the models of Bagnold [Bagnold, R.A., 1936. The Movement of Desert Sand. Proc. R. Soc. London, A157, 594-620], Kawamura [Kawamura, R., 1951. Study of Sand Movement by Wind. Translated (1965) as University of California Hydraulics Engineering Laboratory Report HEL 2-8, Berkeley], Zingg [Zingg, A.W., 1953. Wind tunnel studies of the movement of sedimentary material. Proc. 5th Hydraulics Conf. Bull. 34, Iowa City, Inst. of Hydraulics, pp. 111-135], Kadib [Kadib, A.A., 1965. A function for sand movement by wind. University of California Hydraulics Engineering Laboratory Report HEL 2-8, Berkeley], and Lettau and Lettau [Lettau, K. and Lettau, H., 1977. Experimental and Micrometeorological Field Studies of Dune Migration. In: K. Lettau and H. Lettau (Eds.), Exploring the World's Driest Climate. University of Wisconsin-Madison, IES Report 101, pp. 110-147].
Correction factors to adjust predictions of the rate of transport to account
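The transport relations compared above share a strong (roughly cubic) dependence on shear velocity; a minimal sketch of two of them follows. The functional forms are standard, but the coefficient values, grain-size reference, and threshold shear velocity below are assumptions for illustration, not values taken from the paper.

```python
import math

# Assumed constants; not values from the paper.
RHO_AIR = 1.25  # kg m^-3, air density
G = 9.81        # m s^-2, gravitational acceleration

def q_bagnold(u_star, d=0.17e-3, d_ref=0.25e-3, c=1.8):
    """Bagnold (1936)-type rate, q = C*sqrt(d/D)*(rho/g)*u*^3, in kg m^-1 s^-1."""
    return c * math.sqrt(d / d_ref) * (RHO_AIR / G) * u_star ** 3

def q_lettau(u_star, u_star_t=0.23, d=0.17e-3, d_ref=0.25e-3, c=6.7):
    """Lettau and Lettau (1977)-type rate with an explicit threshold term."""
    if u_star <= u_star_t:
        return 0.0
    return c * math.sqrt(d / d_ref) * (RHO_AIR / G) * u_star ** 2 * (u_star - u_star_t)

# Shear velocities spanning the range reported in the field data (0.23-0.65 m/s)
for u in (0.25, 0.45, 0.65):
    print(f"u* = {u:.2f} m/s: Bagnold {q_bagnold(u):.4f}, Lettau {q_lettau(u):.4f} kg/m/s")
```

The threshold term in the Lettau-type relation is why predictions diverge most near-threshold, which is where field conditions in this study began.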

  11. Modeling Urban Dynamics Using Random Forest: Implementing Roc and Toc for Model Evaluation

    NASA Astrophysics Data System (ADS)

    Ahmadlou, M.; Delavar, M. R.; Shafizadeh-Moghadam, H.; Tayyebi, A.

    2016-06-01

    The importance of spatial accuracy in land use/cover change maps necessitates the use of high-performance models. To reach this goal, calibrating machine learning (ML) approaches to model land use/cover conversions has received increasing interest among scholars. This interest originates from the strength of these techniques, which powerfully account for the complex relationships underlying urban dynamics. Compared to other ML techniques, random forest has rarely been used for modeling urban growth. This paper, drawing on information from multi-temporal Landsat satellite images of 1985, 2000 and 2015, calibrates a random forest regression (RFR) model to quantify variable importance and simulate the spatial pattern of urban change. The results and performance of the RFR model were evaluated using two complementary tools, relative operating characteristics (ROC) and total operating characteristics (TOC), by overlaying the map of observed change and the modeled suitability map for land use change (error map). The suitability map produced by the RFR model showed an area under the curve of 82.48% for the ROC, which indicates a very good performance and highlights its appropriateness for simulating urban growth.
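The ROC evaluation described above reduces to ranking cells by their modeled change suitability and measuring how well that ranking separates observed-change cells from unchanged ones. A minimal sketch with made-up data follows; `roc_auc`, `observed`, and `suitability` are illustrative, not values from the study.

```python
# AUC via the rank-sum (Mann-Whitney) formulation; ties get the average rank.
def roc_auc(labels, scores):
    pairs = sorted(zip(scores, labels))
    ranks = {}
    i = 0
    while i < len(pairs):
        j = i
        while j < len(pairs) and pairs[j][0] == pairs[i][0]:
            j += 1
        avg = (i + j + 1) / 2  # 1-based average rank for the tie block
        for k in range(i, j):
            ranks[k] = avg
        i = j
    pos = [ranks[k] for k, (_, lab) in enumerate(pairs) if lab == 1]
    n_pos, n_neg = len(pos), len(pairs) - len(pos)
    return (sum(pos) - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

observed = [1, 1, 0, 1, 0, 0, 0, 1, 0, 0]                  # 1 = cell urbanized
suitability = [.9, .8, .7, .65, .6, .4, .35, .3, .2, .1]   # RFR-style scores
print(f"AUC = {roc_auc(observed, suitability):.3f}")
```

An AUC of 0.5 means the suitability map ranks no better than chance; 1.0 means every changed cell outranks every unchanged one.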

  12. A linear programming model for optimizing HDR brachytherapy dose distributions with respect to mean dose in the DVH-tail

    SciTech Connect

    Holm, Åsa; Larsson, Torbjörn; Tedgren, Åsa Carlsson

    2013-08-15

    Purpose: Recent research has shown that the optimization model hitherto used in high-dose-rate (HDR) brachytherapy corresponds weakly to the dosimetric indices used to evaluate the quality of a dose distribution. Although alternative models that explicitly include such dosimetric indices have been presented, the inclusion of the dosimetric indices explicitly yields intractable models. The purpose of this paper is to develop a model for optimizing dosimetric indices that is easier to solve than those proposed earlier. Methods: In this paper, the authors present an alternative approach for optimizing dose distributions for HDR brachytherapy where dosimetric indices are taken into account through surrogates based on the conditional value-at-risk concept. This yields a linear optimization model that is easy to solve, and has the advantage that the constraints are easy to interpret and modify to obtain satisfactory dose distributions. Results: The authors show by experimental comparisons, carried out retrospectively for a set of prostate cancer patients, that their proposed model corresponds well with constraining dosimetric indices. All modifications of the parameters in the authors' model yield the expected result. The dose distributions generated are also comparable to those generated by the standard model with respect to the dosimetric indices that are used for evaluating quality. Conclusions: The authors' new model is a viable surrogate to optimizing dosimetric indices and quickly and easily yields high quality dose distributions.
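The conditional value-at-risk surrogate has a simple empirical reading that matches the "mean dose in the DVH-tail" of the title: at level α it is the mean of the hottest (1 − α) fraction of dose values, a quantity that admits a linear-programming reformulation. A minimal sketch on made-up dose values; `cvar_upper` and all numbers are illustrative, not patient data.

```python
# Made-up dose values (Gy) for illustration; not patient data.
doses = [8.2, 7.9, 9.5, 10.1, 8.8, 7.5, 9.9, 8.0, 9.1, 8.4]

def cvar_upper(dose_values, alpha):
    """Mean of the hottest (1 - alpha) fraction of the dose values.

    Assumes (1 - alpha) * len(dose_values) rounds to a whole number of values,
    which keeps the sketch free of interpolation details.
    """
    n_tail = round((1 - alpha) * len(dose_values))
    tail = sorted(dose_values, reverse=True)[:n_tail]
    return sum(tail) / n_tail

print(f"mean of hottest 10% (CVaR at alpha=0.9): {cvar_upper(doses, 0.9):.2f} Gy")
print(f"mean of hottest 20% (CVaR at alpha=0.8): {cvar_upper(doses, 0.8):.2f} Gy")
```

Constraining this tail mean (rather than a hard dosimetric index) is what keeps the resulting optimization model linear and tractable.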

  13. Evaluating ET estimates from the Simplified Surface Energy Balance (SSEB) model using METRIC model output

    NASA Astrophysics Data System (ADS)

    Senay, G. B.; Budde, M. E.; Allen, R. G.; Verdin, J. P.

    2008-12-01

    Evapotranspiration (ET) is an important component of the hydrologic budget because it expresses the exchange of mass and energy between the soil-water-vegetation system and the atmosphere. Since direct measurement of ET is difficult, various modeling methods are used to estimate actual ET (ETa). Generally, the choice of method for ET estimation depends on the objective of the study and is further limited by the availability of data and the desired accuracy of the ET estimate. Operational monitoring of crop performance requires processing large data sets with a quick response time. A Simplified Surface Energy Balance (SSEB) model was developed by the U.S. Geological Survey's Famine Early Warning Systems Network to estimate irrigation water use in remote places of the world. In this study, we evaluated the performance of the SSEB model against the METRIC (Mapping Evapotranspiration at high Resolution and with Internalized Calibration) model, which has been evaluated by several researchers using lysimeter data. The METRIC model has been proven to provide reliable ET estimates in different regions of the world. Reference ET fractions of both models (ETrF of METRIC vs. ETf of SSEB) were generated and compared using individual Landsat thermal images collected from 2000 through 2005 in Idaho, New Mexico, and California. In addition, the models were compared using monthly and seasonal total ETa estimates. The SSEB model reproduced both the spatial and temporal variability exhibited by METRIC on land surfaces, explaining up to 80 percent of the spatial variability. However, the ETa estimates over water bodies were systematically higher in the SSEB output, which could be improved by using a correction coefficient to account for the absorption of solar energy by deeper water layers, which contributes little to the ET process. This study demonstrated the usefulness of the SSEB method for large-scale agro-hydrologic applications for operational monitoring and assessing of
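The SSEB ET fraction can be sketched as a linear scaling of each pixel's surface temperature between "hot" (dry) and "cold" (well-watered) calibration pixels. The form below and all numbers are assumptions for illustration, not the USGS implementation.

```python
# Assumed form of the SSEB scaling; temperatures in kelvin, ET in mm/day.
def sseb_et_fraction(t_surface, t_hot, t_cold):
    """ETf = (T_hot - T_s) / (T_hot - T_cold), clipped to [0, 1]."""
    etf = (t_hot - t_surface) / (t_hot - t_cold)
    return max(0.0, min(1.0, etf))

def sseb_eta(t_surface, t_hot, t_cold, et_ref):
    """Actual ET as the ET fraction times a reference ET (cf. METRIC's ETrF)."""
    return sseb_et_fraction(t_surface, t_hot, t_cold) * et_ref

# Hypothetical calibration pixels and reference ET for one Landsat scene
T_HOT, T_COLD, ET_REF = 318.0, 298.0, 8.0
for ts in (300.0, 308.0, 316.0):
    print(f"Ts = {ts:.0f} K -> ETa = {sseb_eta(ts, T_HOT, T_COLD, ET_REF):.2f} mm/day")
```

The clipping reflects the physical bounds: a pixel hotter than the hot reference transpires nothing, and one colder than the cold reference is at the reference rate.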

  14. Critical evaluation and modeling of algal harvesting using dissolved air flotation. DAF Algal Harvesting Modeling

    SciTech Connect

    Zhang, Xuezhi; Hewson, John C.; Amendola, Pasquale; Reynoso, Monica; Sommerfeld, Milton; Chen, Yongsheng; Hu, Qiang

    2014-07-14

    In our study, Chlorella zofingiensis harvesting by dissolved air flotation (DAF) was critically evaluated with regard to algal concentration, culture conditions, type and dosage of coagulants, and recycle ratio. Harvesting efficiency increased with coagulant dosage and leveled off at 81%, 86%, 91%, and 87% when chitosan, Al3+, Fe3+, and cetyl trimethylammonium bromide (CTAB) were used at dosages of 70, 180, 250, and 500 mg g⁻¹, respectively. The DAF efficiency-coagulant dosage relationship changed with algal culture conditions. Evaluation of the influence of initial algal concentration and recycle ratio revealed that, under conditions typical for algal harvesting, the number of bubbles can be insufficient. A DAF algal harvesting model was developed to explain this observation by introducing mass-based floc size distributions and a bubble limitation into the white water blanket model. Moreover, the model revealed the importance of coagulation in increasing floc-bubble collision and attachment, and the preferential interaction of bubbles with larger flocs, which limited the availability of bubbles to the smaller flocs. The harvesting efficiencies predicted by the model agree reasonably with experimental data obtained at different Al3+ dosages, algal concentrations, and recycle ratios. Based on this modeling, critical parameters for efficient algal harvesting were identified.

  15. Critical evaluation and modeling of algal harvesting using dissolved air flotation. DAF Algal Harvesting Modeling

    DOE PAGES

    Zhang, Xuezhi; Hewson, John C.; Amendola, Pasquale; Reynoso, Monica; Sommerfeld, Milton; Chen, Yongsheng; Hu, Qiang

    2014-07-14

    In our study, Chlorella zofingiensis harvesting by dissolved air flotation (DAF) was critically evaluated with regard to algal concentration, culture conditions, type and dosage of coagulants, and recycle ratio. Harvesting efficiency increased with coagulant dosage and leveled off at 81%, 86%, 91%, and 87% when chitosan, Al3+, Fe3+, and cetyl trimethylammonium bromide (CTAB) were used at dosages of 70, 180, 250, and 500 mg g⁻¹, respectively. The DAF efficiency-coagulant dosage relationship changed with algal culture conditions. Evaluation of the influence of initial algal concentration and recycle ratio revealed that, under conditions typical for algal harvesting, the number of bubbles can be insufficient. A DAF algal harvesting model was developed to explain this observation by introducing mass-based floc size distributions and a bubble limitation into the white water blanket model. Moreover, the model revealed the importance of coagulation in increasing floc-bubble collision and attachment, and the preferential interaction of bubbles with larger flocs, which limited the availability of bubbles to the smaller flocs. The harvesting efficiencies predicted by the model agree reasonably with experimental data obtained at different Al3+ dosages, algal concentrations, and recycle ratios. Based on this modeling, critical parameters for efficient algal harvesting were identified.

  16. Logic models: a useful way to study theories of evaluation practice?

    PubMed

    Miller, Robin Lin

    2013-06-01

    This paper comments on the papers in the special volume on logic modeling and evaluation theory. Logic modeling offers a potentially useful approach to learning about the assumptions, activities, and consequences described in an evaluation theory and may facilitate comparative analysis of evaluation theories. However, logic models are imperfect vehicles for depicting the contingent and dynamic nature of evaluation theories. Alternative approaches to studying theories are necessary to capture the essence of theories as they may work in actual practice.

  17. Evaluating the SWAT Model for Hydrological Modeling in the Xixian Watershed and A Comparison with the XAJ Model

    SciTech Connect

    Shi, Peng; Chen, Chao; Srinivasan, Raghavan; Zhang, Xuesong; Cai, Tao; Fang, Xiuqin; Qu, Simin; Chen, Xi; Li, Qiongfang

    2011-09-10

    Already declining water availability in the Huaihe River, the sixth largest river in China, is further stressed by climate change and intense human activities. There is a pressing need for a watershed model to better understand the interaction between land use activities and hydrologic processes and to support sustainable water use planning. In this study, we evaluated the performance of SWAT for hydrologic modeling in the Xixian River Basin, located at the headwaters of the Huaihe River, and compared its performance with the Xinanjiang (XAJ) model that has been widely used in China.

  18. A Large Animal Survival Model to Evaluate Bariatric Surgery Mechanisms

    PubMed Central

    Simianu, Vlad V.; Sham, Jonathan G.; Wright, Andrew S.; Stewart, Skye D.; Alloosh, Mouhamad; Sturek, Michael; Cummings, David E.; Flum, David R.

    2016-01-01

    Background: The impact of Roux-en-Y gastric bypass (RYGB) on type 2 diabetes mellitus is thought to result from upper and/or lower gut hormone alterations. Evidence supporting these mechanisms is incomplete, in part because of limitations in relevant bariatric-surgery animal models, specifically the lack of naturally insulin-resistant large animals. With overfeeding, Ossabaw swine develop a robust metabolic syndrome, and may be suitable for studying post-surgical physiology. Whether bariatric surgery is feasible in these animals with acceptable survival is unknown. Methods: Thirty-two Ossabaws were fed a high-fat, high-cholesterol diet to induce obesity and insulin resistance. These animals were assigned to RYGB (n = 8), RYGB with vagotomy (RYGB-V, n = 5), gastrojejunostomy (GJ, n = 10), GJ with duodenal exclusion (GJD, n = 7), or sham operation (n = 2) and were euthanized 60 days post-operatively. Post-operative changes in weight and food intake are reported. Results: Survival to scheduled necropsy among surgical groups was 77%, living an average of 57 days post-operatively. Cardiac arrest under anesthesia occurred in 4 pigs. Greatest weight loss (18.0% ± 6%) and food intake decrease (57.0% ± 20%) occurred following RYGB while animals undergoing RYGB-V showed only 6.6% ± 3% weight loss despite 50.8% ± 25% food intake decrease. GJ (12.7% ± 4%) and GJD (1.2% ± 1%) pigs gained weight, but less than sham controls (13.4% ± 10%). Conclusions: A survival model of metabolic surgical procedures is feasible, leads to significant weight loss, and provides the opportunity to evaluate new interventions and subtle variations in surgical technique (e.g. vagus nerve sparing) that may provide new mechanistic insights. PMID:27213116

  19. Linear multivariate evaluation models for spatial perception of soundscape.

    PubMed

    Deng, Zhiyong; Kang, Jian; Wang, Daiwei; Liu, Aili; Kang, Joe Zhengyu

    2015-11-01

    Soundscape is a sound environment that emphasizes the awareness of auditory perception and social or cultural understandings. Spatial perception is a significant aspect of soundscape, yet previous studies of auditory spatial perception in soundscape environments have been limited. Based on 21 native binaural-recorded soundscape samples and a set of auditory experiments for subjective spatial perception (SSP), an analysis relating semantic parameters, the inter-aural cross-correlation coefficient (IACC), the A-weighted equivalent sound pressure level (Leq), dynamic (D), and SSP is introduced to verify the independent effect of each parameter and to re-determine some of their possible relationships. The results show that the more noisiness the listeners perceived, the worse their spatial awareness, while the closer and more directional the sound-source image variations, dynamics, and numbers of sound sources in the soundscape are, the better the spatial awareness. Thus, the sensations of roughness, sound intensity, and transient dynamic, and the values of Leq and IACC, have a suitable range for better spatial perception. Better spatial awareness also appears to slightly increase listener preference. Finally, setting SSPs as functions of the semantic parameters and Leq-D-IACC, two linear multivariate evaluation models of subjective spatial perception are proposed.

  20. Modelling and Evaluation of Spectra in Beam Aided Spectroscopy

    SciTech Connect

    Hellermann, M. G. von; Delabie, E.; Jaspers, R.; Lotte, P.; Summers, H. P.

    2008-10-22

    The evaluation of active beam induced spectra requires advanced modelling of both active and passive features. Three types of line shapes are addressed in this paper: thermal spectra representing Maxwellian distribution functions described by Gaussian-like line shapes; broad-band fast ion spectra with energies well above local ion temperatures; and, finally, the narrow line shapes of the equi-spaced Motional Stark multiplet (MSE) of excited neutral beam particles travelling through the magnetic field confining the plasma. In each case additional line shape broadening caused by Gaussian-like instrument functions is taken into account. Further broadening effects are induced by collision-velocity-dependent effective atomic rates, where the observed spectral shape is the result of a convolution of the emission rate function and the velocity distribution function projected into the direction of observation. In the case of Beam Emission Spectroscopy, which encompasses the Motional Stark features, line broadening is also caused by the finite angular spread of injected neutrals and by a ripple in the acceleration voltage associated with high-energy neutral beams.
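The instrument-function broadening described above has a closed form for the Gaussian-on-Gaussian case: the convolution of two Gaussians is again a Gaussian whose widths add in quadrature. A small sketch with assumed widths, checked numerically via the equivalence between convolving two densities and summing two independent draws.

```python
import math
import random
import statistics

def convolved_sigma(sigma_line, sigma_instr):
    """Width of the convolution of two Gaussians: the quadrature sum."""
    return math.hypot(sigma_line, sigma_instr)

SIGMA_DOPPLER = 0.12  # nm, assumed thermal Doppler width
SIGMA_INSTR = 0.05    # nm, assumed instrument-function width

# Numerical check: convolving two densities gives the distribution of the sum
# of two independent draws, so the sample spread should match the closed form.
random.seed(1)
samples = [random.gauss(0.0, SIGMA_DOPPLER) + random.gauss(0.0, SIGMA_INSTR)
           for _ in range(50_000)]
print(f"closed form: {convolved_sigma(SIGMA_DOPPLER, SIGMA_INSTR):.4f} nm, "
      f"sampled: {statistics.stdev(samples):.4f} nm")
```

The non-Gaussian cases in the paper (rate-weighted velocity distributions, Stark multiplets) need an explicit numerical convolution, but the quadrature rule is the limiting sanity check.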

  1. Evaluation of the Actuator Line Model with coarse resolutions

    NASA Astrophysics Data System (ADS)

    Draper, M.; Usera, G.

    2015-06-01

    The aim of the present paper is to evaluate the Actuator Line Model (ALM) at spatial resolutions coarser than what is generally recommended, also using larger time steps. To accomplish this, the ALM has been implemented in the open source code caffa3d.MBRi and validated against experimental measurements from two wind tunnel campaigns (a stand-alone wind turbine and two wind turbines in line, cases A and B respectively), taking into account two spatial resolutions: R/8 and R/15 (R is the rotor radius). A sensitivity analysis in case A was performed in order to get some insight into the influence of the smearing factor (3D Gaussian distribution) and time step size on power and thrust, as well as on the wake, without applying a tip loss correction factor (TLCF), for one tip speed ratio (TSR). It is concluded that as the smearing factor increases or the time step size decreases, the computed power increases, but the velocity deficit is not as strongly affected. From this analysis, a smearing factor was obtained in order to calculate precisely the power coefficient for that TSR without applying TLCF. Results with this approach were compared with another simulation choosing a larger smearing factor and applying Prandtl's TLCF, for three values of TSR. It is found that applying the TLCF improves the power estimation and weakens the influence of the smearing factor. Finally, these two alternatives were tested in case B, confirming that conclusion.
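The smearing factor discussed above enters the ALM as the width ε of a 3-D Gaussian kernel that projects each actuator-point force onto the flow grid. The kernel form below is the standard ALM regularization; the ε value and grid spacing are arbitrary example numbers. Because the kernel integrates to one, a larger ε spreads the same force more thinly rather than changing its total.

```python
import math

def alm_kernel(r, eps):
    """3-D Gaussian regularization kernel: eta(r) = exp(-(r/eps)^2) / (eps^3 pi^1.5)."""
    return math.exp(-((r / eps) ** 2)) / (eps ** 3 * math.pi ** 1.5)

EPS = 2.0  # smearing factor, arbitrary example value in grid-spacing units
DR = 0.01

# The kernel integrates to one over all space, so the projected force is
# conserved however widely it is smeared; check with a shell-by-shell sum.
total = sum(alm_kernel(i * DR, EPS) * 4.0 * math.pi * (i * DR) ** 2 * DR
            for i in range(1, 2000))
print(f"kernel volume integral ~ {total:.3f}")
```

On a coarse grid, ε must stay large enough relative to the cell size for this integral to be resolved, which is one reason the smearing factor and resolution interact in the sensitivity analysis.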

  2. Field evaluation of an avian risk assessment model

    USGS Publications Warehouse

    Vyas, N.B.; Spann, J.W.; Hulse, C.S.; Borges, S.L.; Bennett, R.S.; Torrez, M.; Williams, B.I.; Leffel, R.

    2006-01-01

    We conducted two laboratory subacute dietary toxicity tests and one outdoor subacute dietary toxicity test to determine the effectiveness of the U.S. Environmental Protection Agency's deterministic risk assessment model for evaluating the potential of adverse effects to birds in the field. We tested technical-grade diazinon and its DZN 50W (50% diazinon active ingredient wettable powder) formulation on Canada goose (Branta canadensis) goslings. Brain acetylcholinesterase activity was measured, and the feathers, skin, feet, and gastrointestinal contents were analyzed for diazinon residues. The dose-response curves showed that diazinon was significantly more toxic to goslings in the outdoor test than in the laboratory tests. The deterministic risk assessment method identified the potential for risk to birds in general, but the factors associated with extrapolating from the laboratory to the field, and from the laboratory test species to other species, resulted in the underestimation of risk to the goslings. The present study indicates that laboratory-based risk quotients should be interpreted with caution.
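The deterministic screening model referred to above compares a risk quotient (estimated exposure divided by a toxicity endpoint) against a level of concern. A sketch of that arithmetic follows; every number below is a hypothetical placeholder, not a value from this study.

```python
# All numbers are hypothetical placeholders, not values from the study.
def risk_quotient(eec_ppm, toxicity_ppm):
    """Deterministic screening: RQ = estimated exposure / toxicity endpoint."""
    return eec_ppm / toxicity_ppm

EEC = 240.0       # ppm, hypothetical residue concentration on forage
LC50_DIET = 47.0  # ppm, hypothetical subacute dietary LC50
LOC = 0.5         # level of concern used to flag potential acute risk

rq = risk_quotient(EEC, LC50_DIET)
print(f"RQ = {rq:.2f} -> {'exceeds' if rq > LOC else 'below'} LOC of {LOC}")
```

The study's point is that the toxicity endpoint in the denominator, measured in the laboratory, can understate field toxicity, so an RQ below the LOC does not guarantee safety.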

  3. Evaluation of air pollution modelling tools as environmental engineering courseware.

    PubMed

    Souto González, J A; Bello Bugallo, P M; Casares Long, J J

    2004-01-01

    The study of phenomena related to the dispersion of pollutants usually takes advantage of mathematical models based on the description of the different processes involved. This educational approach is especially important in air pollution dispersion, where the processes follow non-linear behaviour, making it difficult to understand the relationships between inputs and outputs, and in a 3D context where it becomes hard to analyze alphanumeric results. In this work, three different software tools, serving as computer solvers for typical air pollution dispersion phenomena, are presented. Each software tool, developed to be implemented on PCs, follows an approach representing one of three generations of programming languages (Fortran 77, Visual Basic, and Java), applied over three different environments: MS-DOS, MS-Windows, and the world wide web. The software tools were tested by students of environmental engineering (undergraduate) and chemical engineering (postgraduate), in order to evaluate the ability of these software tools to improve both theoretical and practical knowledge of the air pollution dispersion problem, and the impact of the different environments on the learning process in terms of content, ease of use, and visualization of results. PMID:15193095

  4. Modelling and Evaluation of Spectra in Beam Aided Spectroscopy

    NASA Astrophysics Data System (ADS)

    von Hellermann, M. G.; Delabie, E.; Jaspers, R.; Lotte, P.; Summers, H. P.

    2008-10-01

    The evaluation of active beam induced spectra requires advanced modelling of both active and passive features. Three types of line shapes are addressed in this paper: thermal spectra representing Maxwellian distribution functions described by Gaussian-like line shapes; broad-band fast ion spectra with energies well above local ion temperatures; and, finally, the narrow line shapes of the equi-spaced Motional Stark multiplet (MSE) of excited neutral beam particles travelling through the magnetic field confining the plasma. In each case additional line shape broadening caused by Gaussian-like instrument functions is taken into account. Further broadening effects are induced by collision-velocity-dependent effective atomic rates, where the observed spectral shape is the result of a convolution of the emission rate function and the velocity distribution function projected into the direction of observation. In the case of Beam Emission Spectroscopy, which encompasses the Motional Stark features, line broadening is also caused by the finite angular spread of injected neutrals and by a ripple in the acceleration voltage associated with high-energy neutral beams.

  5. Biomechanical modelling and evaluation of construction jobs for performance improvement.

    PubMed

    Parida, Ratri; Ray, Pradip Kumar

    2012-01-01

    Occupational risk factors related to construction manual materials handling (MMH) activities, such as awkward posture, repetition, lack of rest, insufficient illumination, and heavy workload, may cause musculoskeletal disorders and poor worker performance. Ergonomic design of construction worksystems is therefore a critical need for improving workers' health and safety, for which dynamic biomechanical models had to be empirically developed and tested at a construction site of Tata Steel, the largest private-sector steelmaker in India. In this study, a comprehensive framework is proposed for biomechanical evaluation of shovelling and grinding under diverse work environments. The benefit of such an analysis lies in its usefulness in setting guidelines for designing such jobs to minimize the risk of musculoskeletal disorders (MSDs) and to promote correct methods of carrying out the jobs, leading to reduced fatigue and physical stress. Data based on direct observations and videography were collected for the shovellers and grinders over a number of work cycles. Compressive forces and moments for a number of segments and joints were computed with respect to joint flexion and extension. The results indicate that moments and compressive forces at the L5/S1 link are significant for shovellers, while moments at the elbow and wrist are significant for grinders.
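The joint moments and compressive forces reported above follow from static equilibrium at each joint. A back-of-envelope sketch for L5/S1 during shovelling is below; the load mass, horizontal distances, trunk mass, and the 5 cm erector spinae lever arm are all assumed textbook-style values, not study data.

```python
G = 9.81  # m s^-2

def l5s1_moment(load_kg, d_load_m, trunk_kg=35.0, d_trunk_m=0.25):
    """Static extensor moment (N*m) at L5/S1 balancing the shovel load and the
    upper-body weight, each acting at a horizontal distance from the joint."""
    return load_kg * G * d_load_m + trunk_kg * G * d_trunk_m

def erector_compression(moment_nm, lever_m=0.05):
    """Muscle force (N) balancing the moment over an assumed 5 cm erector
    spinae lever arm; this force loads the L5/S1 disc in compression."""
    return moment_nm / lever_m

m = l5s1_moment(load_kg=8.0, d_load_m=0.45)  # assumed shovel load and reach
print(f"L5/S1 moment ~ {m:.0f} N*m, "
      f"compressive muscle force ~ {erector_compression(m):.0f} N")
```

The short muscle lever arm is why modest loads held at arm's length generate kilonewton-scale spinal compression, the quantity the study's dynamic models track per work cycle.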

  6. Logic Models for Program Design, Implementation, and Evaluation: Workshop Toolkit. REL 2015-057

    ERIC Educational Resources Information Center

    Shakman, Karen; Rodriguez, Sheila M.

    2015-01-01

    The Logic Model Workshop Toolkit is designed to help practitioners learn the purpose of logic models, the different elements of a logic model, and the appropriate steps for developing and using a logic model for program evaluation. Topics covered in the sessions include an overview of logic models, the elements of a logic model, an introduction to…

  7. Evaluation of Aerosol-Cloud Interactions in GISS ModelE Using ASR Observations

    NASA Astrophysics Data System (ADS)

    de Boer, G.; Menon, S.; Bauer, S. E.; Toto, T.; Bennartz, R.; Cribb, M.

    2011-12-01

    The impacts of aerosol particles on clouds continue to rank among the largest uncertainties in global climate simulation. In this work we assess the capability of the NASA GISS ModelE, coupled to MATRIX aerosol microphysics, in correctly representing warm-phase aerosol-cloud interactions. This evaluation is completed through the analysis of a nudged, multi-year global simulation using measurements from various US Department of Energy sponsored measurement campaigns and satellite-based observations. Campaign observations include the Aerosol Intensive Operations Period (Aerosol IOP) and Routine ARM Aerial Facility Clouds with Low Optical Water Depths (CLOWD) Optical Radiative Observations (RACORO) at the Southern Great Plains site in Oklahoma, the Marine Stratus Radiation, Aerosol, and Drizzle (MASRAD) campaign at Pt. Reyes, California, and the ARM mobile facility's 2008 deployment to China. This combination of datasets provides a variety of aerosol and atmospheric conditions under which to test ModelE parameterizations. In addition to these localized comparisons, we provide the results of global evaluations completed using measurements derived from satellite remote sensors. We will provide a basic overview of simulation performance, as well as a detailed analysis of parameterizations relevant to aerosol indirect effects.

  8. A MODEL FOR THE EVALUATION OF A TESTING PROGRAM.

    ERIC Educational Resources Information Center

    COX, RICHARD C.; UNKS, NANCY J.

    The evaluation of an educational program typically implies measurement. Measurement, in turn, implies testing in one form or another. In order to carry out the testing necessary for the evaluation of an educational program, researchers often develop a complete testing sub-program. The evaluation of the total project may depend upon the testing…

  9. Understanding Evaluation Influence within Public Sector Partnerships: A Conceptual Model

    ERIC Educational Resources Information Center

    Appleton-Dyer, Sarah; Clinton, Janet; Carswell, Peter; McNeill, Rob

    2012-01-01

    The importance of evaluation use has led to a large amount of theoretical and empirical study. Evaluation use, however, is still not well understood. There is a need to capture the complexity of this phenomenon across a diverse range of contexts. In response to such complexities, the notion of "evaluation influence" emerged. This article presents…

  10. A Meta-Model for Evaluating Information Retrieval Serviceability.

    ERIC Educational Resources Information Center

    Hjerppe, Roland

    This document first outlines considerations relative to a systems approach to evaluation, and then argues for such an approach to the evaluation of information retrieval systems (ISR). The criterion of such evaluations should be the utility of the information retrieved to the user, and the ISR ought to be regarded as one of three interrelated…

  11. An Organizational Model to Distinguish between and Integrate Research and Evaluation Activities in a Theory Based Evaluation

    ERIC Educational Resources Information Center

    Sample McMeeking, Laura B.; Basile, Carole; Cobb, R. Brian

    2012-01-01

    Theory-based evaluation (TBE) is an evaluation method that shows how a program will work under certain conditions and has been supported as a viable, evidence-based option in cases where randomized trials or high-quality quasi-experiments are not feasible. Despite the model's widely accepted theoretical appeal there are few examples of its…

  12. A Program Evaluation Model: Using Bloom's Taxonomy to Identify Outcome Indicators in Outcomes-Based Program Evaluations

    ERIC Educational Resources Information Center

    McNeil, Rita C.

    2011-01-01

    Outcomes-based program evaluation is a systematic approach to identifying outcome indicators and measuring results against those indicators. One dimension of program evaluation is assessing the level of learner acquisition to determine if learning objectives were achieved as intended. The purpose of the proposed model is to use Bloom's Taxonomy to…

  13. Evaluating pharmacological models of high and low anxiety in sheep

    PubMed Central

    Lee, Caroline; McGill, David M.; Mendl, Michael

    2015-01-01

    New tests of animal affect and welfare require validation in subjects experiencing putatively different states. Pharmacological manipulations of affective state are advantageous because they can be administered in a standardised fashion, and the duration of their action can be established and tailored to suit the length of a particular test. To this end, the current study aimed to evaluate a pharmacological model of high and low anxiety in an important agricultural and laboratory species, the sheep. Thirty-five 8-month-old female sheep received either an intramuscular injection of the putatively anxiogenic drug 1-(m-chlorophenyl)piperazine (mCPP; 1 mg/kg; n = 12), an intravenous injection of the putatively anxiolytic drug diazepam (0.1 mg/kg; n = 12), or acted as a control (saline intramuscular injection n = 11). Thirty minutes after the treatments, sheep were individually exposed to a variety of tests assessing their general movement, performance in a ‘runway task’ (moving down a raceway for a food reward), response to startle, and behaviour in isolation. A test to assess feeding motivation was performed 2 days later following administration of the drugs to the same animals in the same manner. The mCPP sheep had poorer performance in the two runway tasks (6.8 and 7.7 × slower respectively than control group; p < 0.001), a greater startle response (1.4 vs. 0.6; p = 0.02), a higher level of movement during isolation (9.1 steps vs. 5.4; p < 0.001), and a lower feeding motivation (1.8 × slower; p < 0.001) than the control group, all of which act as indicators of anxiety. These results show that mCPP is an effective pharmacological model of high anxiety in sheep. Comparatively, the sheep treated with diazepam did not display any differences compared to the control sheep. Thus we suggest that mCPP is an effective treatment to validate future tests aimed at assessing anxiety in sheep, and that future studies should include other subtle indicators of positive

  16. Information and communication technology: models of evaluation in France.

    PubMed

    Baron, Georges-Louis; Bruillard, Eric

    2003-05-01

    This paper aims at analyzing the evaluation of information and communication technology (ICT) in educational settings in France. First, it focuses on some characteristics of the French educational system and analyzes the trend towards a more decentralized management of education, which raises several important issues, including the trend for central evaluation to evolve from control to communication. Secondly, we define our view of ICT and evaluation. Then we present an overview of evaluation at the national level and European level and discuss some of the main research approaches in France concerning students' learning, learning instruments, and teachers' communities. Finally, some perspectives for the future of ICT evaluation are proposed. PMID:24011486

  17. Models and techniques for evaluating the effectiveness of aircraft computing systems

    NASA Technical Reports Server (NTRS)

    Meyer, J. F.

    1978-01-01

    Progress in the development of system models and techniques for the formulation and evaluation of aircraft computer system effectiveness is reported. Topics covered include: analysis of functional dependence; METAPHOR, a prototype software package developed to aid the evaluation of performability; and a comprehensive performability modeling and evaluation exercise involving the SIFT computer.

  18. A Multi-Component Model for Assessing Learning Objects: The Learning Object Evaluation Metric (LOEM)

    ERIC Educational Resources Information Center

    Kay, Robin H.; Knaack, Liesel

    2008-01-01

    While discussion of the criteria needed to assess learning objects has been extensive, a formal, systematic model for evaluation has yet to be thoroughly tested. The purpose of the following study was to develop and assess a multi-component model for evaluating learning objects. The Learning Object Evaluation Metric (LOEM) was developed from a…

  19. Evaluation of a gully headcut retreat model using multitemporal aerial photographs and digital elevation models

    NASA Astrophysics Data System (ADS)

    Campo-Bescós, M. A.; Flores-Cervantes, J. H.; Bras, R. L.; Casalí, J.; Giráldez, J. V.

    2013-12-01

    A large fraction of soil erosion in temperate climate systems proceeds from gully headcut growth processes. Nevertheless, headcut retreat is not well understood. Few erosion models include gully headcut growth processes, and none of the existing headcut retreat models have been tested against long-term retreat rate estimates. In this work the headcut retreat resulting from plunge pool erosion in the Channel Hillslope Integrated Landscape Development (CHILD) model is calibrated and compared against long-term evolution measurements of six gullies at the Bardenas Reales, northeast Spain. The headcut retreat module of CHILD was calibrated by adjusting the shape factor parameter to fit the observed retreat and volumetric soil loss of one gully during a 36-year period, using reported and collected field data to parameterize the rest of the model. To test the calibrated model, CHILD estimates were compared to observations of headcut retreat from five other neighboring gullies. The differences in volumetric soil loss rates between the simulations and observations were less than 0.05 m3 yr-1 on average, with standard deviations smaller than 0.35 m3 yr-1. These results are the first evaluation of the headcut retreat module implemented in CHILD against a field data set, and they show the usefulness of the model as a tool for simulating long-term volumetric gully evolution due to plunge pool erosion.
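
    The single-parameter calibration described above can be sketched as a simple sweep: choose the shape factor that minimises the mismatch with the observed long-term soil loss. The model function and all numbers below are hypothetical stand-ins for a CHILD run, not values from the study.

```python
# Minimal single-parameter calibration sketch: sweep the shape factor and
# keep the value that best fits the observed long-term volumetric soil loss.
# simulated_soil_loss is a hypothetical stand-in for a full CHILD simulation.

def simulated_soil_loss(shape_factor, years=36):
    """Hypothetical stand-in model: annual loss scales with the shape factor."""
    return 0.04 * shape_factor * years  # m^3 over the simulated period

observed_loss = 1.3  # m^3 over 36 yr (illustrative, not from the study)

# brute-force sweep over candidate shape factors, keeping the smallest error
best = min((abs(simulated_soil_loss(s) - observed_loss), s)
           for s in [0.5 + 0.01 * i for i in range(200)])
print(f"calibrated shape factor ~ {best[1]:.2f}")
```

    In practice a root finder or optimiser would replace the brute-force sweep, but the structure (one free parameter, one long-term observation) is the same.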

  20. Development of an efficient coupled model for soil-atmosphere modelling (FHAVeT): model evaluation and comparison

    NASA Astrophysics Data System (ADS)

    Tinet, A.-J.; Chanzy, A.; Braud, I.; Crevoisier, D.; Lafolie, F.

    2014-07-01

    In agricultural management, good timing of operations such as irrigation or sowing is essential to enhance both economic and environmental performance. To improve such timing, predictive software is of particular interest. An optimal decision-making tool would require process modules that provide robust, efficient and accurate predictions while relying on a minimal number of easily available parameters. This paper develops a coupled soil-atmosphere model based on the Ross fast solution for Richards' equation, heat transfer and a detailed surface energy balance. The developed model, FHAVeT (Fast Hydro Atmosphere Vegetation Temperature), has been evaluated in bare soil conditions against TEC, the coupled model based on the De Vries description. The two models were compared for different climatic and soil conditions. Moreover, the model allows the use of various pedotransfer functions. The FHAVeT model showed better performance with regard to mass balance. In order to allow a more precise comparison, six time windows were selected. The study demonstrated that the FHAVeT behaviour is quite similar to the TEC behaviour except under some dry conditions. An evaluation of day detection with regard to moisture thresholds is also performed.

  1. A dynamic model of metabolizable energy utilization in growing and mature cattle. III. Model evaluation.

    PubMed

    Williams, C B; Jenkins, T G

    2003-06-01

    Component models of heat production identified in a proposed system of partitioning ME intake and a dynamic systems model that predicts gain in empty BW in cattle resulting from a known intake of ME were evaluated. Evaluations were done in four main areas: 1) net efficiency of ME utilization for gain, 2) relationship between recovered energy and ME intake, 3) predicting gain in empty BW from recovered energy, and 4) predicting gain in empty BW from ME intake. An analysis of published data showed that the net partial efficiencies of ME utilization for protein and fat gain were approximately 0.2 and 0.75, respectively, and that the net efficiency of ME utilization for gain could be estimated using these net partial efficiencies and the fraction of recovered energy that is contained in protein. Analyses of published sheep and cattle experimental data showed a significant linear relationship between recovered energy and ME intake, with no evidence for a nonlinear relationship. Growth and body composition of Hereford x Angus steers simulated from weaning to slaughter showed that over the finishing period, 20.8% of ME intake was recovered in gain. These results were similar to observed data and comparable to feedlot data of 26.5% for a shorter finishing period with a higher-quality diet. The component model to predict gain in empty BW from recovered energy was evaluated with growth and body composition data of five steer genotypes on two levels of nutrition. Linear regression of observed on predicted values for empty BW resulted in an intercept and slope that were not different (P < 0.05) from 0 and 1, respectively. Evaluations of the dynamic systems model to predict gain in empty BW using ME intake as the input showed close agreement between predicted and observed final empty BW for steers that were finished on high-energy diets, and the model accurately predicted growth patterns for Angus, Charolais, and Simmental reproducing females from 10 mo to 7 yr of age.
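
    The reported relationship between the net partial efficiencies and the overall net efficiency of gain can be illustrated with a short calculation. The combining rule below (ME costs per unit of recovered energy add) is our assumption about how the partial efficiencies of ~0.2 (protein) and ~0.75 (fat) would be applied; p is the fraction of recovered energy contained in protein.

```python
# Sketch of estimating net efficiency of ME use for gain from the net partial
# efficiencies for protein and fat deposition. The additive-ME-cost combining
# rule is our assumption, not a formula quoted from the paper.

K_PROTEIN, K_FAT = 0.2, 0.75  # net partial efficiencies reported above

def net_efficiency_of_gain(p):
    """p: fraction of recovered energy contained in protein (0..1)."""
    # ME required per unit of recovered energy is the weighted sum of costs
    return 1.0 / (p / K_PROTEIN + (1.0 - p) / K_FAT)

for p in (0.2, 0.4):
    print(f"p = {p:.1f}  ->  k_g = {net_efficiency_of_gain(p):.3f}")
```

    The efficiency falls as the protein fraction of gain rises, which is the qualitative behaviour the partial efficiencies imply.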

  3. Evaluating the use of different precipitation datasets in flood modelling

    NASA Astrophysics Data System (ADS)

    Akyurek, Zuhal; Soytekin, Arzu

    2016-04-01

    Satellite-based precipitation products, numerical weather prediction model forecasts and weather radar estimates can be a remedy for gauge-sparse regions, especially in flood forecasting studies. However, there is a strong need to evaluate the performance and limitations of these estimates in hydrology. This study compares the Hydro-Estimator precipitation product, Weather Research and Forecasting (WRF) model precipitation and weather radar values with gauge data in the Samsun-Terme region, located in the eastern Black Sea region of Turkey, which generally receives high rainfall on north-facing mountain slopes. Using different statistical measures, the performance of the precipitation estimates is compared in both a point-based and an areal-based manner. In the point-based comparisons, three matching methods are used for the flood event of 22.11.2014, which lasted 40 hours: the direct matching method (DM), the probability matching method (PMM) and the window correlation matching method (WCMM). Hourly rainfall data from 13 ground observation stations were used in the analyses. This flood event produced a peak discharge of 541 m3/s at discharge observation station 22-45 and flooding downstream of the basin. The general trend of the rainfall is captured well by the radar estimates, but the radar underestimates the peaks. Moreover, the assessment factor (gauge rainfall / radar rainfall estimate) does not depend on the distance between the radar and the gauge station. In the WCMM calculations, enlarging the space window from 1x1 to 5x5 does not improve the results dramatically. In the areal-based comparisons, the time-series distribution of the HE product does not resemble that of the other datasets. Furthermore, the geometry of the subbasins, the size of the area in 2D and 3D, and the average elevation have no impact on the mean statistics, RMSE, r and bias calculations for both radar
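
    The point-based statistics used in comparisons like the one above (bias, RMSE, correlation, and a gauge/radar assessment factor) can be computed directly; the series below are illustrative, not data from the study.

```python
# Point-based gauge vs. radar comparison statistics (direct matching style).
# All rainfall values are illustrative.
import math

gauge = [0.0, 1.2, 3.5, 8.0, 4.1, 0.6]   # mm/h at a station
radar = [0.1, 1.0, 2.8, 5.9, 3.6, 0.5]   # mm/h, nearest radar pixel

n = len(gauge)
bias = sum(r - g for g, r in zip(gauge, radar)) / n               # mean error
rmse = math.sqrt(sum((r - g) ** 2 for g, r in zip(gauge, radar)) / n)
# assessment factor: total gauge rainfall over total radar estimate
af = sum(gauge) / sum(radar)

mg, mr = sum(gauge) / n, sum(radar) / n
corr = (sum((g - mg) * (r - mr) for g, r in zip(gauge, radar))
        / math.sqrt(sum((g - mg) ** 2 for g in gauge)
                    * sum((r - mr) ** 2 for r in radar)))
print(f"bias={bias:.2f} mm/h  rmse={rmse:.2f} mm/h  AF={af:.2f}  r={corr:.2f}")
```

    With these illustrative numbers the radar tracks the gauge well (high r) while underestimating totals (AF > 1, negative bias), the same qualitative pattern the abstract reports.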

  4. ACTINIDE REMOVAL PROCESS SAMPLE ANALYSIS, CHEMICAL MODELING, AND FILTRATION EVALUATION

    SciTech Connect

    Martino, C.; Herman, D.; Pike, J.; Peters, T.

    2014-06-05

    Filtration within the Actinide Removal Process (ARP) currently limits the throughput in interim salt processing at the Savannah River Site. In this process, batches of salt solution with Monosodium Titanate (MST) sorbent are concentrated by crossflow filtration. The filtrate is subsequently processed to remove cesium in the Modular Caustic Side Solvent Extraction Unit (MCU) followed by disposal in saltstone grout. The concentrated MST slurry is washed and sent to the Defense Waste Processing Facility (DWPF) for vitrification. During recent ARP processing, there has been a degradation of filter performance manifested as the inability to maintain high filtrate flux throughout a multi-batch cycle. The objectives of this effort were to characterize the feed streams, to determine if solids (in addition to MST) are precipitating and causing the degraded performance of the filters, and to assess the particle size and rheological data to address potential filtration impacts. Equilibrium modelling with OLI Analyzer™ and OLI ESP™ was performed to determine chemical components at risk of precipitation and to simulate the ARP process. The performance of ARP filtration was evaluated to review potential causes of the observed filter behavior. Task activities for this study included extensive physical and chemical analysis of samples from the Late Wash Pump Tank (LWPT) and the Late Wash Hold Tank (LWHT) within ARP as well as samples of the tank farm feed from Tank 49H. The samples from the LWPT and LWHT were obtained from several stages of processing of Salt Batch 6D, Cycle 6, Batch 16.

  5. Model Evaluation and Ensemble Modelling of Surface-Level Ozone in Europe and North America in the Context of AQMEII

    EPA Science Inventory

    More than ten state-of-the-art regional air quality models have been applied as part of the Air Quality Model Evaluation International Initiative (AQMEII). These models were run by twenty independent groups in Europe and North America. Standardised modelling outputs over a full y...

  6. Validating the ACE Model for Evaluating Student Performance Using a Teaching-Learning Process Based on Computational Modeling Systems

    ERIC Educational Resources Information Center

    Louzada, Alexandre Neves; Elia, Marcos da Fonseca; Sampaio, Fábio Ferrentini; Vidal, Andre Luiz Pestana

    2014-01-01

    The aim of this work is to adapt and test, in a Brazilian public school, the ACE model proposed by Borkulo for evaluating student performance as a teaching-learning process based on computational modeling systems. The ACE model is based on different types of reasoning involving three dimensions. In addition to adapting the model and introducing…

  7. The Discrepancy Evaluation Model: A Strategy for Improving a Simulation and Determining Effectiveness.

    ERIC Educational Resources Information Center

    Morra, Linda G.

    This paper presents the Discrepancy Evaluation Model (DEM) as an overall strategy or framework for both the improvement and assessment of effectiveness of simulation/games. While application of the evaluation model to simulation/games rather than educational programs requires modification of the model, its critical features remain. These include:…

  8. A Model of Evaluation Planning, Implementation and Management toward a "Culture of Information" within Organizations.

    ERIC Educational Resources Information Center

    Bhola, H. S.

    The argument underlying the ongoing paradigm shift from logical-positivism to constructionism is briefly outlined. A model of evaluation planning, implementation, and management (the P-I-M model) is then presented, which assumes a complementarity between the two paradigms. The P-I-M Model includes three components of educational evaluation: a…

  9. Relative and Absolute Fit Evaluation in Cognitive Diagnosis Modeling

    ERIC Educational Resources Information Center

    Chen, Jinsong; de la Torre, Jimmy; Zhang, Zao

    2013-01-01

    As with any psychometric models, the validity of inferences from cognitive diagnosis models (CDMs) determines the extent to which these models can be useful. For inferences from CDMs to be valid, it is crucial that the fit of the model to the data is ascertained. Based on a simulation study, this study investigated the sensitivity of various fit…

  10. A Framework for Multifaceted Evaluation of Student Models

    ERIC Educational Resources Information Center

    Huang, Yun; González-Brenes, José P.; Kumar, Rohit; Brusilovsky, Peter

    2015-01-01

    Latent variable models, such as the popular Knowledge Tracing method, are often used to enable adaptive tutoring systems to personalize education. However, finding optimal model parameters is usually a difficult non-convex optimization problem when considering latent variable models. Prior work has reported that latent variable models obtained…

  11. Towards Model Diagnosis in Hydrologic Models: Evaluation of the abcd Water Balance Model Using the HCDN Dataset

    NASA Astrophysics Data System (ADS)

    Martinez Baquero, G. F.; Gupta, H. V.

    2006-12-01

    Increasing model complexity demands the development of new methods able to mine larger amounts of information from model results and available data. Measures commonly used to compare models with data typically lack diagnostic "power". This work therefore explores the design of more powerful strategies to identify the causes of discrepancy between models and hydrologic phenomena, as well as to increase the knowledge about the input-output relationship of the system. In this context, we evaluate how the abcd monthly water balance model, used to infer soil moisture conditions or groundwater recharge, performs in 764 watersheds of the conterminous United States. Work done under these guidelines required the integration of the Hydro-Climatic Data Network dataset with spatial information to summarize the results and relate the performance with the model assumptions and specific conditions of the basins. The diagnostic process is implemented by the definition of appropriate hydrologic signatures to measure the capability of watersheds to transform environmental inputs and propose equivalent modeling structures. Knowledge acquired during this process is used to test modifications of the model in hydrologic regions where the performance is poor.
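
    For reference, one step of the abcd model under its standard formulation (Thomas, 1981): available water is partitioned between evapotranspiration opportunity, soil storage, groundwater recharge and runoff via the four parameters a-d. Parameter values and forcing below are illustrative.

```python
# One monthly step of the abcd water balance model (standard Thomas, 1981
# formulation). Parameters: a (runoff propensity, 0<a<=1), b (upper limit on
# ET opportunity), c (recharge fraction), d (baseflow recession constant).
import math

def abcd_step(P, PET, S_prev, G_prev, a, b, c, d):
    """P, PET: monthly precipitation / potential ET (mm); S, G: soil and
    groundwater storages (mm). Returns (streamflow Q, S, G)."""
    W = P + S_prev                                        # available water
    # evapotranspiration opportunity Y(W)
    Y = (W + b) / (2 * a) - math.sqrt(((W + b) / (2 * a)) ** 2 - W * b / a)
    S = Y * math.exp(-PET / b)                            # end-of-month soil moisture
    G = (G_prev + c * (W - Y)) / (1 + d)                  # groundwater storage
    Q = (1 - c) * (W - Y) + d * G                         # direct runoff + baseflow
    return Q, S, G

Q, S, G = abcd_step(P=90.0, PET=60.0, S_prev=120.0, G_prev=40.0,
                    a=0.98, b=250.0, c=0.4, d=0.2)
print(f"Q={Q:.1f} mm  S={S:.1f} mm  G={G:.1f} mm")
```

    Iterating this step over a monthly forcing series and comparing simulated Q against gauged streamflow is the basis of the watershed-by-watershed evaluation described above.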

  12. Evaluating mallard adaptive management models with time series

    USGS Publications Warehouse

    Conn, P.B.; Kendall, W.L.

    2004-01-01

    Wildlife practitioners concerned with midcontinent mallard (Anas platyrhynchos) management in the United States have instituted a system of adaptive harvest management (AHM) as an objective format for setting harvest regulations. Under the AHM paradigm, predictions from a set of models that reflect key uncertainties about processes underlying population dynamics are used in coordination with optimization software to determine an optimal set of harvest decisions. Managers use comparisons of the predictive abilities of these models to gauge the relative truth of different hypotheses about density-dependent recruitment and survival, with better-predicting models given more weight in the determination of harvest regulations. We tested the effectiveness of this strategy by examining convergence rates of 'predictor' models when the true model for population dynamics was known a priori. We generated time series for cases when the a priori model was one of the predictor models, as well as for several cases when the a priori model was not in the model set. We further examined the addition of different levels of uncertainty into the variance structure of predictor models, reflecting different levels of confidence about estimated parameters. We showed that in certain situations the model-selection process favors a predictor model that incorporates the hypotheses of additive harvest mortality and weakly density-dependent recruitment, even when that model is not used to generate the data. Higher levels of predictor model variance led to decreased rates of convergence to the model that generated the data, but model weight trajectories were in general more stable. We suggest that predictive models should incorporate all sources of uncertainty about estimated parameters, that the variance structure should be similar for all predictor models, and that models with different functional forms for population dynamics should be considered for inclusion in predictor model sets. All of these
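
    The weight-updating scheme at the heart of AHM can be sketched as a Bayesian update: each predictor model's weight is multiplied by the likelihood of the observed population size under that model's prediction and then renormalised. The model names, predictions, variances and observation below are illustrative, not the actual AHM model set.

```python
# Bayesian model-weight update sketch for an AHM-style model set.
# All numbers are illustrative.
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# four hypothetical models crossing harvest-mortality and recruitment hypotheses
weights = {"additive+weak_dd": 0.25, "additive+strong_dd": 0.25,
           "compensatory+weak_dd": 0.25, "compensatory+strong_dd": 0.25}
predictions = {"additive+weak_dd": (7.9, 0.6), "additive+strong_dd": (7.1, 0.6),
               "compensatory+weak_dd": (8.6, 0.6), "compensatory+strong_dd": (8.1, 0.6)}
observed = 7.8  # observed population size (millions), illustrative

# Bayes: posterior weight proportional to prior weight x predictive likelihood
posterior = {m: w * normal_pdf(observed, *predictions[m]) for m, w in weights.items()}
total = sum(posterior.values())
posterior = {m: v / total for m, v in posterior.items()}
for m, w in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{m:24s} {w:.3f}")
```

    Repeating this update each year is what drives the convergence (or non-convergence) of model weights that the study examines, and the prediction variance sigma plays exactly the role of the "levels of uncertainty" discussed above.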

  13. Course Content and Program Evaluation Model. Final Report.

    ERIC Educational Resources Information Center

    Callahan, Mary Patricia; Marson, Arthur

    In order to evaluate the content of the courses and programs of the Moraine Park Technical Institute (MPTI) and to identify weaknesses and strengths in meeting the needs of the employee and employer, an in-depth evaluation of the school's five departments (trade and industry, business education, health occupations, agriculture, and general…

  14. Evaluating Organizational Performance: Rational, Natural, and Open System Models

    ERIC Educational Resources Information Center

    Martz, Wes

    2013-01-01

    As the definition of organization has evolved, so have the approaches used to evaluate organizational performance. During the past 60 years, organizational theorists and management scholars have developed a comprehensive line of thinking with respect to organizational assessment that serves to inform and be informed by the evaluation discipline.…

  15. Improving the Evaluation Model for the Lithuanian Informatics Olympiads

    ERIC Educational Resources Information Center

    Skupiene, Jurate

    2010-01-01

    The Lithuanian Informatics Olympiads (LitIO) is a problem solving programming contest for students in secondary education. The work of the student to be evaluated is an algorithm designed by the student and implemented as a working program. The current evaluation process involves both automated (for correctness and performance of programs with the…

  16. A diagnostic evaluation model for complex research partnerships with community engagement: the partnership for Native American Cancer Prevention (NACP) model.

    PubMed

    Trotter, Robert T; Laurila, Kelly; Alberts, David; Huenneke, Laura F

    2015-02-01

    Complex community-oriented health care prevention and intervention partnerships fail or only partially succeed at alarming rates. In light of the current rapid expansion of critically needed programs targeted at health disparities in minority populations, we have designed and are testing a "logic model plus" evaluation model that combines classic logic model and query-based evaluation designs (CDC, NIH, Kellogg Foundation) with advances in community-engaged designs derived from industry-university partnership models. These approaches support the application of a "near real time" feedback system (diagnosis and intervention) based on organizational theory, social network theory, and logic model metrics directed at partnership dynamics.

  17. Evaluation of a Linear Mixing Model to Retrieve Soil and Vegetation Temperatures of Land Targets

    NASA Astrophysics Data System (ADS)

    Yang, Jinxin; Jia, Li; Cui, Yaokui; Zhou, Jie; Menenti, Massimo

    2014-03-01

    A simple linear mixing model of a heterogeneous soil-vegetation system, and the retrieval of component temperatures from directional remote sensing measurements by inverting this model, are evaluated in this paper using observations from a thermal camera. The thermal camera was used to obtain multi-angular TIR (Thermal Infra-Red) images over vegetable and orchard canopies. A whole thermal camera image was treated as one pixel of a satellite image to evaluate the model for the two-component system, i.e. soil and vegetation. The evaluation included two parts: evaluation of the linear mixing model itself, and evaluation of the inversion of the model to retrieve component temperatures. For the linear mixing model, the RMSE between the observed and modelled brightness temperatures is 0.2 K, which indicates that the linear mixing model works well under most conditions. For the model inversion, the RMSE between the retrieved and observed vegetation temperatures is 1.6 K; correspondingly, the RMSE between the observed and retrieved soil temperatures is 2.0 K. An evaluation of the sensitivity of the retrieved component temperatures to fractional cover shows that the linear mixing model gives more accurate retrievals for both soil and vegetation temperatures under intermediate fractional cover conditions.
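
    A minimal sketch of the two-component mixing model and its inversion: with fractional vegetation cover varying across view angles, the two component temperatures follow from linear least squares. Mixing directly in brightness temperature (rather than radiance) and all numbers below are simplifying assumptions.

```python
# Two-component linear mixing, Tb = f*Tv + (1-f)*Ts, inverted by least squares
# from multi-angular observations. Covers and temperatures are illustrative.

# fractional vegetation cover seen at each view angle, and observed Tb (K)
f_veg = [0.3, 0.5, 0.7, 0.8]
t_obs = [301.0, 299.1, 297.0, 295.9]

# normal equations for the 2-unknown least-squares system in (Tv, Ts)
s_ff = sum(f * f for f in f_veg)
s_fg = sum(f * (1 - f) for f in f_veg)
s_gg = sum((1 - f) ** 2 for f in f_veg)
s_ft = sum(f * t for f, t in zip(f_veg, t_obs))
s_gt = sum((1 - f) * t for f, t in zip(f_veg, t_obs))
det = s_ff * s_gg - s_fg * s_fg

t_veg = (s_gg * s_ft - s_fg * s_gt) / det   # retrieved vegetation temperature
t_soil = (s_ff * s_gt - s_fg * s_ft) / det  # retrieved soil temperature
print(f"T_veg ~ {t_veg:.1f} K, T_soil ~ {t_soil:.1f} K")
```

    The sensitivity finding quoted above follows from this structure: when f is near 0 or 1 at all angles, the system is ill-conditioned (det small) and one component's retrieval degrades.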

  18. Statistical and graphical methods for evaluating solute transport models: Overview and application

    NASA Astrophysics Data System (ADS)

    Loague, Keith; Green, Richard E.

    1991-01-01

    Mathematical modeling is the major tool to predict the mobility and the persistence of pollutants to and within groundwater systems. Several comprehensive institutional models have been developed in recent years for this purpose. However, evaluation procedures are not well established for models of saturated-unsaturated soil-water flow and chemical transport. This paper consists of three parts: (1) an overview of various aspects of mathematical modeling focused upon solute transport models; (2) an introduction to statistical criteria and graphical displays that can be useful for model evaluation; and (3) an example of model evaluation for a mathematical model of pesticide leaching. The model testing example uses observed and predicted atrazine concentration profiles from a small catchment in Georgia. The model tested is the EPA pesticide root zone model (PRZM).
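
    Two statistical criteria of the kind discussed for solute transport model evaluation, modelling efficiency (EF) and coefficient of residual mass (CRM), can be computed as follows; the observed and predicted concentrations are illustrative.

```python
# Statistical evaluation criteria for observed vs. predicted concentrations.
# EF = 1 is a perfect model, EF < 0 is worse than predicting the observed mean;
# CRM = 0 indicates no net over- or under-prediction. Data are illustrative.

observed  = [0.9, 1.8, 3.2, 2.5, 1.1]   # e.g. atrazine concentration profile
predicted = [1.0, 1.5, 2.9, 2.7, 1.3]   # model output at the same depths

mean_obs = sum(observed) / len(observed)
ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
ss_tot = sum((o - mean_obs) ** 2 for o in observed)

EF = 1.0 - ss_res / ss_tot
CRM = (sum(observed) - sum(predicted)) / sum(observed)
print(f"EF = {EF:.3f}, CRM = {CRM:.3f}")
```

    Such summary statistics are typically paired with the graphical displays the paper describes, since a single number can hide systematic depth-wise errors.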

  19. EVALUATION OF UNSATURATED/VADOSE ZONE MODELS FOR SUPERFUND SITES

    EPA Science Inventory

    Mathematical models of water and chemical movement in soils are being used as decision aids for defining groundwater protection practices for Superfund sites. Numerous transport models exist for predicting movementand degradation of hazardous chemicals through soils. Many of thes...

  20. An evaluation of recent quantitative magnetospheric magnetic field models

    NASA Technical Reports Server (NTRS)

    Walker, R. J.

    1976-01-01

    Magnetospheric field models involving dipole tilt effects are discussed, with particular reference to defined magnetopause models and boundary surface models. The models are compared with observations and with each other wherever possible. It is shown that models containing only contributions from magnetopause and tail current systems reproduce the observed quiet-time field only qualitatively. The best quantitative agreement between models and observations occurs when currents distributed in the inner magnetosphere are added to the magnetopause and tail current systems. One region in which all the models fall short is the area around the polar cusp. Obtaining physically reasonable gradients there should have high priority in the development of future models.

  1. Multimedia modeling of engineered nanoparticles with SimpleBox4nano: model definition and evaluation.

    PubMed

    Meesters, Johannes A J; Koelmans, Albert A; Quik, Joris T K; Hendriks, A Jan; van de Meent, Dik

    2014-05-20

    Screening level models for environmental assessment of engineered nanoparticles (ENP) are not generally available. Here, we present SimpleBox4Nano (SB4N) as the first model of this type, assess its validity, and evaluate it by comparisons with a known material flow model. SB4N expresses ENP transport and concentrations in and across air, rain, surface waters, soil, and sediment, accounting for nanospecific processes such as aggregation, attachment, and dissolution. The model solves simultaneous mass balance equations (MBE) using simple matrix algebra. The MBEs link all concentrations and transfer processes using first-order rate constants for all processes known to be relevant for ENPs. The first-order rate constants are obtained from the literature. The output of SB4N is mass concentrations of ENPs as free dispersive species, heteroaggregates with natural colloids, and larger natural particles in each compartment in time and at steady state. Known scenario studies for Switzerland were used to demonstrate the impact of the transport processes included in SB4N on the prediction of environmental concentrations. We argue that SB4N-predicted environmental concentrations are useful as background concentrations in environmental risk assessment.
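
    The mass balance approach can be illustrated in miniature: first-order rate constants couple the ENP pools into a linear system dm/dt = A·m + e, whose steady state is m = -A⁻¹·e. The compartments and rate constants below are illustrative, not SB4N's actual parameterisation.

```python
# Steady-state solution of coupled first-order mass balance equations via
# matrix algebra, SB4N-style. Pools and rate constants are illustrative.
import numpy as np

# pools: [free ENP in water, heteroaggregated ENP, ENP in sediment]
k_het, k_sed_free, k_sed_agg, k_diss = 0.5, 0.05, 0.8, 0.01  # 1/day

A = np.array([
    [-(k_het + k_sed_free + k_diss), 0.0,          0.0],     # free: losses only
    [k_het,                          -k_sed_agg,   0.0],     # heteroagg: gain from free
    [k_sed_free,                     k_sed_agg,    -k_diss], # sediment: settling in, dissolution out
])
e = np.array([1.0, 0.0, 0.0])  # kg/day emission of free ENPs into water

# dm/dt = A m + e = 0  =>  m = -A^{-1} e
m_ss = np.linalg.solve(A, -e)
print(dict(zip(["free", "heteroagg", "sediment"], m_ss.round(2))))
```

    With many more compartments and nanospecific rate constants this is the same linear-algebra step SB4N performs; transient behaviour comes from integrating the same system in time.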

  2. A Simplified Land Model (SLM) for use in cloud-resolving models: Formulation and evaluation

    NASA Astrophysics Data System (ADS)

    Lee, Jungmin M.; Khairoutdinov, Marat

    2015-09-01

    A Simplified Land Model (SLM) that uses a minimalist set of parameters, with single-layer vegetation and a multilevel soil structure, has been developed that distinguishes canopy and under-canopy energy budgets. The primary motivation has been to design a land model for use in the System for Atmospheric Modeling (SAM) cloud-resolving model to study land-atmosphere interactions with a sufficient level of realism. SLM uses simplified expressions for the transport of heat, moisture, momentum, and radiation in the soil-vegetation system. The SLM performance has been evaluated over several land surface types using summertime tower observations of micrometeorological and biophysical data from three AmeriFlux sites, which include grassland, cropland, and deciduous-broadleaf forest. In general, the SLM captures the observed diurnal cycle of the surface energy budget and soil temperature reasonably well, although reproducing the evolution of soil moisture, especially after rain events, has been challenging. The SLM coupled to SAM has been applied to a case of summertime shallow cumulus convection over land based on Atmospheric Radiation Measurement (ARM) Southern Great Plains (SGP) observations. The simulated surface latent and sensible heat fluxes, as well as the evolution of thermodynamic profiles in the convective boundary layer, agree well with estimates based on the observations. The sensitivity of atmospheric boundary layer development to soil moisture and different land cover types has also been examined.
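A simplified land model of this kind closes a surface energy budget at each point: net radiation Rn is partitioned into sensible heat H, latent heat LE, and ground heat flux G. The sketch below illustrates that bookkeeping with a bulk-aerodynamic sensible heat flux; the transfer coefficient, forcing values, and the fixed ground-flux fraction are assumptions for illustration, not SLM's actual parameterization:

```python
# Sketch of a surface energy budget closure, Rn = H + LE + G, of the kind a
# simplified land model evaluates. All numeric parameters are assumed.
def surface_energy_budget(Rn, rho=1.2, cp=1004.0, Ch=0.01, U=3.0,
                          Ts=300.0, Ta=295.0, G_frac=0.1):
    """Rn: net radiation (W m^-2); Ts, Ta: surface and air temperature (K);
    Ch: bulk transfer coefficient; U: wind speed (m/s); all assumed values."""
    H = rho * cp * Ch * U * (Ts - Ta)  # bulk-aerodynamic sensible heat flux
    G = G_frac * Rn                    # ground heat flux as a fixed fraction of Rn
    LE = Rn - H - G                    # latent heat flux closes the budget
    return H, LE, G

H, LE, G = surface_energy_budget(Rn=500.0)
print(H, LE, G)  # fluxes in W m^-2; they sum to Rn by construction
```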

  3. Evaluating Mixture Modeling for Clustering: Recommendations and Cautions

    ERIC Educational Resources Information Center

    Steinley, Douglas; Brusco, Michael J.

    2011-01-01

    This article provides a large-scale investigation into several of the properties of mixture-model clustering techniques (also referred to as latent class cluster analysis, latent profile analysis, model-based clustering, probabilistic clustering, Bayesian classification, unsupervised learning, and finite mixture models; see Vermunt & Magdison,…

  4. Application of an Aesthetic Evaluation Model to Data Entry Screens.

    ERIC Educational Resources Information Center

    Ngo, D. C. L.; Byrne, J. G.

    2001-01-01

    Describes a new model for quantitatively assessing screen formats. Results of applying the model to data entry screens support the use of the model. Also described is a critiquing mechanism embedded in a user interface design environment as a demonstration of this approach. (Author/AEF)

  5. Evaluating performances of simplified physically based models for landslide susceptibility

    NASA Astrophysics Data System (ADS)

    Formetta, G.; Capparelli, G.; Versace, P.

    2015-12-01

    Rainfall-induced shallow landslides cause loss of life and significant damage to private and public property, transportation systems, etc. Predicting locations susceptible to shallow landslides is a complex task that involves many disciplines: hydrology, geotechnical science, geomorphology, and statistics. Two main approaches are usually used to accomplish this task: statistical or physically based models. Reliable model applications involve automatic parameter calibration, objective quantification of the quality of susceptibility maps, and model sensitivity analysis. This paper presents a methodology to systematically and objectively calibrate, verify, and compare different models and different model performance indicators in order to identify and select the models whose behavior is most reliable for a given case study. The procedure was implemented in a package of models for landslide susceptibility analysis integrated in the NewAge-JGrass hydrological model. The package includes three simplified physically based models for landslide susceptibility analysis (M1, M2, and M3) and a component for model verification. It computes eight goodness-of-fit indices by comparing model results and measurement data pixel by pixel. Moreover, the package's integration in NewAge-JGrass allows the use of other components, such as geographic information system tools to manage input-output processes and automatic calibration algorithms to estimate model parameters. The system was applied to a case study in Calabria (Italy) along the Salerno-Reggio Calabria highway, between Cosenza and the Altilia municipality. The analysis showed that, among all the optimized indices and all three models, optimizing the distance-to-perfect-classification index in the receiver operating characteristic plane (D2PC) coupled with model M3 is the best modeling solution for our test case.
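The D2PC index mentioned above has a simple geometric definition: in the ROC plane a perfect classifier sits at (FPR, TPR) = (0, 1), and D2PC is the Euclidean distance of a model's operating point from that corner. A minimal pixel-by-pixel sketch (the helper name and toy arrays are ours, not from the paper's package):

```python
import numpy as np

# Distance to perfect classification (D2PC) for a binary susceptibility map:
# D2PC = sqrt((1 - TPR)^2 + FPR^2), smaller is better, 0 is perfect.
def d2pc(predicted, observed):
    """predicted, observed: boolean arrays of pixel-wise landslide occurrence."""
    predicted = np.asarray(predicted, dtype=bool)
    observed = np.asarray(observed, dtype=bool)
    tp = np.sum(predicted & observed)    # hits
    fn = np.sum(~predicted & observed)   # misses
    fp = np.sum(predicted & ~observed)   # false alarms
    tn = np.sum(~predicted & ~observed)  # correct rejections
    tpr = tp / (tp + fn)  # true positive rate (sensitivity)
    fpr = fp / (fp + tn)  # false positive rate
    return np.sqrt((1.0 - tpr) ** 2 + fpr ** 2)

# Toy 2x2 map: a map identical to the observations scores a perfect 0.
obs = np.array([[1, 0], [1, 0]], dtype=bool)
print(d2pc(obs, obs))  # 0.0
```

Minimizing D2PC during calibration, as the paper does for model M3, drives the map toward the perfect-classification corner of the ROC plane.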

  6. Staying in the Light: Evaluating Sustainability Models for Brokering Software

    NASA Astrophysics Data System (ADS)

    Powers, L. A.; Benedict, K. K.; Best, M.; Fyfe, S.; Jacobs, C. A.; Michener, W. K.; Pearlman, J.; Turner, A.; Nativi, S.

    2015-12-01

    The Business Models Team of the Research Data Alliance Brokering Governance Working Group examined several support models proposed to promote the long-term sustainability of brokering middleware. The business model analysis includes examination of funding sources, implementation frameworks and obstacles, and policy and legal considerations. The issue of sustainability is not unique to brokering software, and these models may be relevant to many applications. Results of this comprehensive analysis highlight the advantages and disadvantages of the various models with respect to the specific requirements of brokering services. We offer recommendations based on the outcomes of this analysis while recognizing that all software is part of an evolutionary process and has a lifespan.

  7. Development and evaluation of a dense gas plume model

    SciTech Connect

    Matthias, C.S.

    1994-12-31

    The dense gas plume model (continuous release) described in this paper has been developed using the same principles as for a dense gas puff model (instantaneous release). It is a box model for which the main goal is to predict the height H, width W, and maximum concentration C_b for a steady dense plume. A secondary goal is to distribute the mass more realistically by empirically attaching Gaussian distributions in the horizontal and vertical directions. For ease of reference, the models and supporting programs will be referred to as DGM (Dense Gas Models).
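The box-model idea above — a uniform core of concentration C_b with Gaussian tails attached beyond the box edges — can be sketched as follows. The way the tail widths are tied to the box dimensions here is purely an assumption for illustration; it is not the DGM formulation:

```python
import numpy as np

# Hypothetical sketch of a box model with Gaussian edges: concentration equals
# C_b inside the box of width W and height H, and decays as a Gaussian outside.
# The sigma choices below are assumed, not taken from DGM.
def plume_concentration(y, z, C_b, W, H):
    """y: crosswind distance from plume centerline; z: height above ground."""
    half_w = W / 2.0
    sigma_y = W / 4.0   # assumed horizontal tail width
    sigma_z = H / 2.0   # assumed vertical tail width
    cy = np.where(np.abs(y) <= half_w, 1.0,
                  np.exp(-((np.abs(y) - half_w) ** 2) / (2 * sigma_y ** 2)))
    cz = np.where(z <= H, 1.0,
                  np.exp(-((z - H) ** 2) / (2 * sigma_z ** 2)))
    return C_b * cy * cz

# Inside the box the concentration is the box value C_b; far outside it
# decays smoothly to zero instead of dropping as a sharp step.
print(plume_concentration(0.0, 0.0, C_b=2.5, W=10.0, H=3.0))
```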

  8. Evaluation of a bridge using simplified element modeling

    SciTech Connect

    Farrar, C.R.; Duffey, T.A.

    1995-02-01

    An experimental-numerical comparison of the forced and ambient vibrations of a multi-span composite plate-girder bridge was performed. The bridge was modeled using a finite element program at three levels of complexity, including a simple 250-DOF model that uses a single beam element to represent the entire bridge cross section. Difficulties encountered in the development of the simple model are discussed. The dynamic properties predicted by the simple model were consistent with those measured on the bridge and computed using more detailed finite element models.

  9. Cancer cell spheroids as a model to evaluate chemotherapy protocols.

    PubMed

    Perche, Federico; Torchilin, Vladimir P

    2012-10-01

    To determine whether the spheroid culture can be used to evaluate drug efficacy, we have evaluated the toxicity of free or carrier-associated doxorubicin as a single drug or in combination with other antineoplastic agents using the spheroid cultures of drug-resistant cancer cells. Paclitaxel, cisplatin, dexamethasone, mitoxantrone, sclareol or methotrexate were used in combination with doxorubicin. The effect of the treatment protocols on free, micellar and liposomal doxorubicin accumulation in spheroids and on resulting toxicity was evaluated by fluorescence and lactate dehydrogenase release, respectively. Enhanced doxorubicin accumulation and toxicity were observed after spheroid pretreatment with mitoxantrone or paclitaxel. Effects of the drug combination with doxorubicin were sequence dependent, with doxorubicin as the first drug inducing the least toxicity. Finally, spheroids were recognized by a cancer cell-specific antibody. Our results suggest the usefulness of spheroids to evaluate chemotherapy combinations. PMID:22892843

  10. Cancer cell spheroids as a model to evaluate chemotherapy protocols

    PubMed Central

    Perche, Federico; Torchilin, Vladimir P.

    2012-01-01

    To determine whether the spheroid culture can be used to evaluate drug efficacy, we have evaluated the toxicity of free or carrier-associated doxorubicin as a single drug or in combination with other antineoplastic agents using the spheroid cultures of drug-resistant cancer cells. Paclitaxel, cisplatin, dexamethasone, mitoxantrone, sclareol or methotrexate were used in combination with doxorubicin. The effect of the treatment protocols on free, micellar and liposomal doxorubicin accumulation in spheroids and on resulting toxicity was evaluated by fluorescence and lactate dehydrogenase release, respectively. Enhanced doxorubicin accumulation and toxicity were observed after spheroid pretreatment with mitoxantrone or paclitaxel. Effects of the drug combination with doxorubicin were sequence dependent, with doxorubicin as the first drug inducing the least toxicity. Finally, spheroids were recognized by a cancer cell-specific antibody. Our results suggest the usefulness of spheroids to evaluate chemotherapy combinations. PMID:22892843

  11. Increasing Student Evaluation Capacity through a Collaborative Community-Based Program Evaluation Teaching Model

    ERIC Educational Resources Information Center

    Carlisle, Shauna K.; Kruzich, Jean M.

    2013-01-01

    The evaluation literature reflects a long-standing interest in ways to provide practical hands-on training experience in evaluation courses. Concomitantly, some funders have shown rising expectations for increased accountability on the part of community-based organizations (CBOs), even though agencies often lack the associated funding and…

  12. Collaborative Evaluation Communities in Urban Schools: A Model of Evaluation Capacity Building for STEM Education

    ERIC Educational Resources Information Center

    Huffman, Douglas; Lawrenz, Frances; Thomas, Kelli; Clarkson, Lesa

    2006-01-01

    Building the evaluation capacity of K-12 schools is clearly an important goal for the field of evaluation, especially in the current educational environment, which is dominated by issues of accountability and high-stakes testing. The focus on student achievement has forced schools to become more data-driven as they attempt to analyze test scores…

  13. Software Quality Evaluation Models Applicable in Health Information and Communications Technologies. A Review of the Literature.

    PubMed

    Villamor Ordozgoiti, Alberto; Delgado Hito, Pilar; Guix Comellas, Eva María; Fernandez Sanchez, Carlos Manuel; Garcia Hernandez, Milagros; Lluch Canut, Teresa

    2016-01-01

    The use of Information and Communications Technologies (ICT) in healthcare has increased the need to consider quality criteria through standardised processes. The aim of this study was to analyse the software quality evaluation models applicable to healthcare from the perspective of ICT purchasers. Through a systematic literature review with the keywords software, product, quality, evaluation and health, we selected and analysed 20 original research papers published between 2005 and 2016 in health science and technology databases. The results showed four main topics: non-ISO models, software quality evaluation models based on ISO/IEC standards, studies analysing software quality evaluation models, and studies analysing ISO standards for software quality evaluation. The models provide cost-efficiency criteria for specific software and improve usage outcomes. The ISO/IEC 25000 standard emerged as the most suitable for evaluating the quality of ICTs for healthcare use from the perspective of institutional acquisition. PMID:27350495

  14. Quantized Step-up Model for Evaluation of Internship in Teaching of Prospective Science Teachers.

    ERIC Educational Resources Information Center

    Sindhu, R. S.

    2002-01-01

    Describes the quantized step-up model developed for the evaluation purposes of internship in teaching which is an analogous model of the atomic structure. Assesses prospective teachers' abilities in lesson delivery. (YDS)

  15. Evaluation of COSMO-ART in the Framework of the Air Quality Model Evaluation International Initiative (AQMEII)

    NASA Astrophysics Data System (ADS)

    Giordano, Lea; Brunner, Dominik; Im, Ulas; Galmarini, Stefano

    2014-05-01

    The Air Quality Model Evaluation International Initiative (AQMEII), coordinated by the EC-JRC and US-EPA, has promoted research on regional air quality model evaluation across the atmospheric modelling communities of Europe and North America since 2008. AQMEII has now reached its Phase 2, which is dedicated to the evaluation of on-line coupled chemistry-meteorology models, as opposed to Phase 1, where only off-line models were considered. At the European level, AQMEII collaborates with the COST Action "European framework for on-line integrated air quality and meteorology modelling" (EuMetChem). All European groups participating in AQMEII performed simulations over the same spatial domain (Europe at a resolution of about 20 km), using the same simulation strategy (e.g., no nudging allowed) and, as far as possible, the same input data. The initial and boundary conditions (IC/BC) were shared among all groups. Emissions were provided by the TNO-MACC database for anthropogenic emissions and the FMI database for biomass burning emissions. Chemical IC/BC data were taken from IFS-MOZART output, and meteorological IC/BC from the ECMWF global model. Evaluation data sets were collected by the Joint Research Center (JRC) and include measurements from surface in situ networks (AirBase and EMEP), vertical profiles from ozone sondes and aircraft (MOZAIC), and remote sensing (AERONET, satellites). Since Phase 2 focuses on on-line coupled models, a special effort is devoted to the detailed speciation of particulate matter components, with the goal of studying feedback processes. For the AQMEII exercise, COSMO-ART has been run with 40 levels of vertical resolution and a chemical scheme that includes the SCAV module of Knote and Brunner (ACP 2013) for wet-phase chemistry and SOA treatment according to the VBS (volatility basis set) approach (Athanasopoulou et al., ACP 2013). The COSMO-ART evaluation shows that, next to a good performance in the meteorology, the gas phase chemistry is well

  16. Evaluation of the new EMAC-SWIFT chemistry climate model

    NASA Astrophysics Data System (ADS)

    Scheffler, Janice; Langematz, Ulrike; Wohltmann, Ingo; Rex, Markus

    2016-04-01

    It is well known that the representation of atmospheric ozone chemistry in weather and climate models is essential for a realistic simulation of the atmospheric state. Including atmospheric ozone chemistry in climate simulations is usually done by prescribing a climatological ozone field, by including a fast linear ozone scheme in the model, or by using a climate model with complex interactive chemistry. While prescribed climatological ozone fields are often not aligned with the modelled dynamics, a linear ozone scheme may not be applicable to a wide range of climatological conditions. Although interactive chemistry provides a realistic representation of atmospheric chemistry, such model simulations are computationally very expensive and hence not suitable for ensemble simulations or simulations with multiple climate change scenarios. A new approach to representing atmospheric chemistry in climate models, one which can cope with non-linearities in ozone chemistry and is applicable to a wide range of climatic states, is the Semi-empirical Weighted Iterative Fit Technique (SWIFT), which is driven by reanalysis data and has been validated against observational satellite data and runs of a full Chemistry and Transport Model. SWIFT has recently been implemented in the ECHAM/MESSy (EMAC) chemistry climate model, which uses a modular approach to climate modelling where individual model components can be switched on and off. Here, we show first results of EMAC-SWIFT simulations and validate them against EMAC simulations using the complex interactive chemistry scheme MECCA, and against observations.

  17. A YEAR-LONG MM5 EVALUATION USING A MODEL EVALUATION TOOLKIT

    EPA Science Inventory

    Air quality modeling has expanded in both sophistication and application over the past decade. Meteorological and air quality modeling tools are being used for research, forecasting, and regulatory related emission control strategies. Results from air quality simulations have far...

  18. Experimental Evaluation of Equivalent-Fluid Models for Melamine Foam

    NASA Technical Reports Server (NTRS)

    Allen, Albert R.; Schiller, Noah H.

    2016-01-01

    Melamine foam is a soft porous material commonly used in noise control applications. Many models exist to represent porous materials at various levels of fidelity. This work focuses on rigid frame equivalent fluid models, which represent the foam as a fluid with a complex speed of sound and density. There are several empirical models available to determine these frequency dependent parameters based on an estimate of the material flow resistivity. Alternatively, these properties can be experimentally educed using an impedance tube setup. Since vibroacoustic models are generally sensitive to these properties, this paper assesses the accuracy of several empirical models relative to impedance tube measurements collected with melamine foam samples. Diffuse field sound absorption measurements collected using large test articles in a laboratory are also compared with absorption predictions determined using model-based and measured foam properties. Melamine foam slabs of various thicknesses are considered.
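One classic example of the empirical equivalent-fluid models the abstract refers to is the Delany-Bazley correlation, which derives a complex characteristic impedance and wavenumber from the flow resistivity alone. The sketch below uses the standard published coefficients; the flow resistivity value is an assumed order of magnitude for melamine foam, not a measurement from this work:

```python
import numpy as np

# Delany-Bazley empirical equivalent-fluid model: complex characteristic
# impedance Zc and wavenumber k from flow resistivity sigma, then the
# equivalent complex density and speed of sound the abstract describes.
def delany_bazley(f, sigma, rho0=1.21, c0=343.0):
    """f: frequency (Hz); sigma: flow resistivity (Pa*s/m^2);
    rho0, c0: density and speed of sound of air."""
    X = rho0 * f / sigma  # dimensionless frequency parameter
    Zc = rho0 * c0 * (1 + 0.0571 * X**-0.754 - 1j * 0.087 * X**-0.732)
    k = (2 * np.pi * f / c0) * (1 + 0.0978 * X**-0.700 - 1j * 0.189 * X**-0.595)
    rho_eq = Zc * k / (2 * np.pi * f)   # equivalent complex density
    c_eq = 2 * np.pi * f / k            # equivalent complex speed of sound
    return rho_eq, c_eq

# Assumed flow resistivity of ~10500 Pa*s/m^2, a plausible magnitude for
# melamine foam used purely for illustration.
rho_eq, c_eq = delany_bazley(f=1000.0, sigma=10500.0)
print(rho_eq, c_eq)
```

The complex-valued density and sound speed encode the viscous and thermal losses in the foam, which is what makes the rigid-frame fluid representation useful in vibroacoustic predictions.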

  19. Evaluation of a Computational Model of Situational Awareness

    NASA Technical Reports Server (NTRS)

    Burdick, Mark D.; Shively, R. Jay; Rutkewski, Michael (Technical Monitor)

    2000-01-01

    Although the use of the psychological construct of situational awareness (SA) assists researchers in creating a flight environment that is safer and more predictable, its true potential remains untapped until a valid means of predicting SA a priori becomes available. Previous work proposed a computational model of SA (CSA) that sought to fill that void. The current line of research is aimed at validating that model. The results show that the model accurately predicted SA in a piloted simulation.

  20. Evaluating Child Welfare policies with decision-analytic simulation models

    PubMed Central

    Goldhaber-Fiebert, Jeremy D.; Bailey, Stephanie L.; Hurlburt, Michael S.; Zhang, Jinjin; Snowden, Lonnie R.; Wulczyn, Fred; Landsverk, John; Horwitz, Sarah M.

    2013-01-01

    The objective was to demonstrate decision-analytic modeling in support of Child Welfare policymakers considering implementing evidence-based interventions. Outcomes included permanency (e.g., adoptions) and stability (e.g., foster placement changes). Analyses of a randomized trial of KEEP -- a foster parenting intervention -- and NSCAW-1 estimated placement change rates and KEEP's effects. A microsimulation model generalized these findings to other Child Welfare systems. The model projected that KEEP could increase permanency and stability, identifying strategies targeting higher-risk children and geographical regions that achieve benefits efficiently. Decision-analytic models enable planners to gauge the value of potential implementations. PMID:21861204
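A microsimulation model of the kind described above follows individual children through time and applies event rates that an intervention modifies. The toy sketch below illustrates that mechanism only; the monthly placement-change rate and the assumed 30% rate reduction are illustrative stand-ins, not estimates from KEEP or NSCAW-1:

```python
import random

# Toy microsimulation: each simulated child can experience a placement change
# each month with a fixed probability; an intervention lowers that probability.
# All rates are assumed for illustration.
def simulate_changes(n_children, monthly_rate, months=24, seed=0):
    """Returns the mean number of placement changes per child."""
    rng = random.Random(seed)  # fixed seed for a reproducible comparison
    total = 0
    for _ in range(n_children):
        for _ in range(months):
            if rng.random() < monthly_rate:
                total += 1
    return total / n_children

baseline = simulate_changes(5000, monthly_rate=0.05)
with_intervention = simulate_changes(5000, monthly_rate=0.05 * 0.7)  # assumed 30% reduction
print(baseline, with_intervention)  # fewer changes per child under the intervention
```

Scaling such a model over subgroups (e.g., higher-risk children or specific regions) is what lets planners identify where an implementation yields benefits most efficiently.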