Science.gov

Sample records for evaluating value-at-risk models

  1. Multifractal Value at Risk model

    NASA Astrophysics Data System (ADS)

    Lee, Hojin; Song, Jae Wook; Chang, Woojin

    2016-06-01

    In this paper a new Value at Risk (VaR) model is proposed and investigated. We consider the multifractal property of financial time series and develop a multifractal Value at Risk (MFVaR) model. The MFVaR introduced in this paper is analytically tractable and not based on simulation. An empirical study showed that MFVaR can provide more stable and accurate forecasting performance in volatile financial markets where large losses can be incurred. This implies that our multifractal VaR works well for the risk measurement of extreme credit events.

  2. Quantile uncertainty and value-at-risk model risk.

    PubMed

    Alexander, Carol; Sarabia, José María

    2012-08-01

    This article develops a methodology for quantifying model risk in quantile risk estimates. The application of quantile estimates to risk assessment has become common practice in many disciplines, including hydrology, climate change, statistical process control, insurance and actuarial science, and the uncertainty surrounding these estimates has long been recognized. Our work is particularly important in finance, where quantile estimates (called Value-at-Risk) have been the cornerstone of banking risk management since the mid-1990s. A recent amendment to the Basel II Accord recommends additional market risk capital to cover all sources of "model risk" in the estimation of these quantiles. We provide a novel and elegant framework whereby quantile estimates are adjusted for model risk, relative to a benchmark which represents the state of knowledge of the authority that is responsible for model risk. A simulation experiment in which the degree of model risk is controlled illustrates how to quantify Value-at-Risk model risk and compute the required regulatory capital add-on for banks. An empirical example based on real data shows how the methodology can be put into practice, using only two time series (daily Value-at-Risk and daily profit and loss) from a large bank. We conclude with a discussion of potential applications to nonfinancial risks.
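
    As a rough sketch of the idea (not the authors' actual framework), model risk in a quantile estimate can be illustrated by comparing a parametric VaR against a benchmark quantile and treating the shortfall as a capital add-on. Everything below is synthetic: the "model" is a normal fit and the "benchmark" is simply the empirical quantile.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic heavy-tailed daily P&L that a normal model would misfit.
pnl = rng.standard_t(4, 2500)

alpha = 0.01  # 99% VaR

# Model quantile: a normal distribution fitted by moments (possibly misspecified).
var_model = -(pnl.mean() + pnl.std() * stats.norm.ppf(alpha))
# Benchmark quantile: the empirical quantile, standing in for the
# authority's benchmark model in the paper.
var_bench = -np.quantile(pnl, alpha)

# A crude model-risk adjustment: shortfall of the model VaR vs. the benchmark.
add_on = max(var_bench - var_model, 0.0)
print(var_model, var_bench, add_on)
```

    Because the t-distributed P&L has fatter tails than the fitted normal, the benchmark VaR typically exceeds the model VaR, producing a positive add-on.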

  3. Value-at-risk prediction using context modeling

    NASA Astrophysics Data System (ADS)

    Denecker, K.; van Assche, S.; Crombez, J.; Vander Vennet, R.; Lemahieu, I.

    2001-04-01

    In financial market risk measurement, Value-at-Risk (VaR) techniques have proven to be a very useful and popular tool. Unfortunately, most VaR estimation models suffer from major drawbacks: the lognormal (Gaussian) modeling of the returns does not take into account the observed fat tail distribution and the non-stationarity of the financial instruments severely limits the efficiency of the VaR predictions. In this paper, we present a new approach to VaR estimation which is based on ideas from the field of information theory and lossless data compression. More specifically, the technique of context modeling is applied to estimate the VaR by conditioning the probability density function on the present context. Tree-structured vector quantization is applied to partition the multi-dimensional state space of both macroeconomic and microeconomic priors into an increasing but limited number of context classes. Each class can be interpreted as a state of aggregation with its own statistical and dynamic behavior, or as a random walk with its own drift and step size. Results on the US S&P500 index, obtained using several evaluation methods, show the strong potential of this approach and prove that it can be applied successfully for, amongst other useful applications, VaR and volatility prediction. The October 1997 crash is indicated in time.

  4. Application of the Beck model to stock markets: Value-at-Risk and portfolio risk assessment

    NASA Astrophysics Data System (ADS)

    Kozaki, M.; Sato, A.-H.

    2008-02-01

    We apply the Beck model, developed for turbulent systems that exhibit scaling properties, to stock markets. Our study reveals that the Beck model elucidates the properties of stock market returns and is applicable to practical use such as Value-at-Risk estimation and portfolio analysis. We perform empirical analysis with daily/intraday data of the S&P500 index return and find that the volatility fluctuation of real markets is consistent with the assumptions of the Beck model: the volatility fluctuates at a much larger time scale than the return itself, and the inverse of variance, or “inverse temperature”, β follows a Γ-distribution. As predicted by the Beck model, the distribution of returns is well-fitted by a q-Gaussian distribution of Tsallis statistics. The evaluation method of Value-at-Risk (VaR), one of the most significant indicators in risk management, is studied for the q-Gaussian distribution. Our proposed method enables VaR evaluation in consideration of tail risk, which is underestimated by the variance-covariance method. A framework of portfolio risk assessment under the existence of tail risk is considered. We propose a multi-asset model with a single volatility fluctuation shared by all assets, named the single β model, and empirically examine the agreement between the model and an imaginary portfolio with Dow Jones indices. It turns out that the single β model gives a good approximation to portfolios composed of assets with non-Gaussian and correlated returns.
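
    A minimal sketch of the tail-risk point: a q-Gaussian with 1 < q < 3 coincides with a scaled Student-t with ν = (3 − q)/(q − 1) degrees of freedom, so a tail-aware VaR can be read off Student-t quantiles and compared with the Gaussian (variance-covariance) figure. The parameter values below are illustrative, not estimates from S&P500 data:

```python
import numpy as np
from scipy import stats

q = 1.4                      # illustrative Tsallis parameter, 1 < q < 3
nu = (3 - q) / (q - 1)       # equivalent Student-t degrees of freedom (= 4 here)

alpha = 0.01                 # 99% confidence VaR
sigma = 0.01                 # assumed daily return scale (standard deviation)

# Variance-covariance (Gaussian) VaR vs. fat-tailed q-Gaussian/Student-t VaR,
# both scaled to the same standard deviation for comparability.
var_gauss = -sigma * stats.norm.ppf(alpha)
t_scale = sigma * np.sqrt((nu - 2) / nu)   # Student-t rescaled to unit variance
var_tail = -t_scale * stats.t.ppf(alpha, df=nu)

print(var_gauss, var_tail)
```

    At the 99% level the fat-tailed VaR comes out larger, which is precisely the tail risk the variance-covariance method misses.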

  5. Value-at-Risk forecasts by a spatiotemporal model in Chinese stock market

    NASA Astrophysics Data System (ADS)

    Gong, Pu; Weng, Yingliang

    2016-01-01

    This paper generalizes a recently proposed spatial autoregressive model and introduces a spatiotemporal model for forecasting stock returns. We support the view that stock returns are affected not only by the absolute values of factors such as firm size, book-to-market ratio and momentum but also by the relative values of factors like trading volume ranking and market capitalization ranking in each period. This article studies a new method for constructing stocks' reference groups; the method is called quartile method. Applying the method empirically to the Shanghai Stock Exchange 50 Index, we compare the daily volatility forecasting performance and the out-of-sample forecasting performance of Value-at-Risk (VaR) estimated by different models. The empirical results show that the spatiotemporal model performs surprisingly well in terms of capturing spatial dependences among individual stocks, and it produces more accurate VaR forecasts than the other three models introduced in the previous literature. Moreover, the findings indicate that both allowing for serial correlation in the disturbances and using time-varying spatial weight matrices can greatly improve the predictive accuracy of a spatial autoregressive model.

  6. Value at risk estimation using independent component analysis-generalized autoregressive conditional heteroscedasticity (ICA-GARCH) models.

    PubMed

    Wu, Edmond H C; Yu, Philip L H; Li, W K

    2006-10-01

    We suggest using independent component analysis (ICA) to decompose multivariate time series into statistically independent time series. Then, we propose to use ICA-GARCH models which are computationally efficient to estimate the multivariate volatilities. The experimental results show that the ICA-GARCH models are more effective than existing methods, including DCC, PCA-GARCH, and EWMA. We also apply the proposed models to compute value at risk (VaR) for risk management applications. The backtesting and the out-of-sample tests validate the performance of ICA-GARCH models for value at risk estimation.
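
    The GARCH half of the pipeline can be sketched in a few lines: a GARCH(1,1) variance filter plus a normal quantile gives a one-step-ahead VaR. This is a generic illustration of the volatility model, not the authors' ICA decomposition; the parameters are assumed rather than estimated by maximum likelihood, as they would be in practice.

```python
import numpy as np

def garch11_var(returns, omega, alpha_g, beta, z=-2.326):
    """One-step-ahead VaR from a GARCH(1,1) volatility filter.
    z is the 1% normal quantile; parameters are assumed given."""
    sigma2 = np.empty(len(returns) + 1)
    sigma2[0] = returns.var()                      # initialize at sample variance
    for t in range(len(returns)):
        sigma2[t + 1] = omega + alpha_g * returns[t] ** 2 + beta * sigma2[t]
    return -z * np.sqrt(sigma2[-1])                # positive VaR number

rng = np.random.default_rng(0)
r = rng.normal(0.0, 0.01, 1000)                    # synthetic daily returns
v = garch11_var(r, omega=1e-6, alpha_g=0.05, beta=0.90)
print(v)
```

    In the ICA-GARCH setting, a filter like this would be run on each statistically independent component, and the component variances recombined into the multivariate volatility.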

  7. The value-at-risk evaluation of Brent's crude oil market

    NASA Astrophysics Data System (ADS)

    Cheong, Chin Wen; Isa, Zaidi; Ying, Khor Chia; Lai, Ng Sew

    2014-06-01

    This study investigates the market risk of the Brent crude oil market. First, the long-memory time-varying volatility is modelled under Chung's specification. Second, model adequacy evaluations indicate that the heavy-tailed, long-memory and endogenously estimated power-transformation models deliver superior performance in out-of-sample forecasts. Lastly, these findings are further applied to the long and short trading positions in market risk evaluations of the Brent market.

  8. Comparison of new conditional value-at-risk-based management models for optimal allocation of uncertain water supplies

    NASA Astrophysics Data System (ADS)

    Yamout, Ghina M.; Hatfield, Kirk; Romeijn, H. Edwin

    2007-07-01

    The paper studies the effect of incorporating the conditional value-at-risk (CVaRα) in analyzing a water allocation problem versus using the frequently used expected value, two-stage modeling, scenario analysis, and linear optimization tools. Five models are developed to examine water resource allocation when available supplies are uncertain: (1) a deterministic expected value model, (2) a scenario analysis model, (3) a two-stage stochastic model with recourse, (4) a CVaRα objective function model, and (5) a CVaRα constraint model. The models are applied over a region of east central Florida. Results show the deterministic expected value model underestimates system costs and water shortage. Furthermore, the expected value model produces identical cost estimates for water-supply distributions that share a mean but differ in standard deviation. From the scenario analysis model it is again demonstrated that the expected value of results taken from many scenarios underestimates costs and water shortages. Using a two-stage stochastic mixed integer formulation with recourse permits an improved representation of uncertainties and real-life decision making, which in turn predicts higher costs. The inclusion of a CVaRα objective function in the latter provides for the optimization and control of high-risk events. Minimizing CVaRα does not, however, permit control of lower-risk events. Constraining CVaRα while minimizing cost, on the other hand, allows for the control of high-risk events while minimizing the costs of all events. Results show CVaRα exhibits continuous and consistent behavior with respect to the confidence level α, when compared to value-at-risk (VaRα).
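
    The VaR/CVaR distinction the paper exploits is easy to see empirically: VaR is the α-quantile of the loss (cost) distribution, while CVaR is the average loss beyond that quantile, so CVaR always dominates VaR and varies continuously in α. A small sketch on hypothetical shortage costs (the lognormal data is invented for illustration):

```python
import numpy as np

def var_cvar(losses, alpha=0.95):
    """Empirical VaR and CVaR at confidence level alpha (losses: higher = worse)."""
    losses = np.asarray(losses, dtype=float)
    var = np.quantile(losses, alpha)
    cvar = losses[losses >= var].mean()   # average loss in the worst (1 - alpha) tail
    return var, cvar

rng = np.random.default_rng(7)
cost = rng.lognormal(mean=10.0, sigma=0.5, size=10_000)  # hypothetical shortage costs
v, c = var_cvar(cost, alpha=0.95)
print(v, c)
```

    In an optimization model, this tail average is what appears either in the objective (model 4) or in a constraint (model 5).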

  9. A multi-objective optimization model with conditional value-at-risk constraints for water allocation equality

    NASA Astrophysics Data System (ADS)

    Hu, Zhineng; Wei, Changting; Yao, Liming; Li, Ling; Li, Chaozhi

    2016-11-01

    Water scarcity is a global problem which causes economic and political conflicts as well as degradation of ecosystems. Moreover, the uncertainty caused by extreme weather increases the risk of economic inefficiency, an essential consideration for water users. In this study, a multi-objective model involving water allocation equality and economic efficiency risk control is developed to help water managers mitigate these problems. The Gini coefficient is introduced to optimize water allocation equality across water use sectors (agricultural, domestic, and industrial), and CVaR is integrated into the model constraints to control the economic efficiency loss risk corresponding to variations in water availability. The case study demonstrates the practicability and rationality of the developed model, allowing the river basin authority to determine water allocation strategies for a single river basin.
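
    The equality objective rests on the Gini coefficient, which can be computed directly from the mean absolute difference of the allocations: G = Σᵢⱼ|xᵢ − xⱼ| / (2 n² μ). A short sketch with invented per-sector allocations:

```python
import numpy as np

def gini(x):
    """Gini coefficient via mean absolute difference: G = mean|xi - xj| / (2 mu)."""
    x = np.asarray(x, dtype=float)
    mad = np.abs(x[:, None] - x[None, :]).mean()   # mean pairwise absolute difference
    return mad / (2.0 * x.mean())

# Hypothetical water allocations to three sectors (units arbitrary).
equal = gini([100.0, 100.0, 100.0])    # perfect equality -> 0
skewed = gini([10.0, 50.0, 240.0])     # heavily skewed allocation
print(equal, skewed)
```

    A multi-objective solver would push this value toward 0 while the CVaR constraint bounds the tail of the economic-loss distribution.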

  10. Modelling climate change impacts on and adaptation strategies for agriculture in Sardinia and Tunisia using AquaCrop and value-at-risk.

    PubMed

    Bird, David Neil; Benabdallah, Sihem; Gouda, Nadine; Hummel, Franz; Koeberl, Judith; La Jeunesse, Isabelle; Meyer, Swen; Prettenthaler, Franz; Soddu, Antonino; Woess-Gallasch, Susanne

    2016-02-01

    In Europe, there is concern that climate change will cause significant impacts around the Mediterranean. The goals of this study are to quantify the economic risk to crop production, to demonstrate the variability of yield by soil texture and climate model and to investigate possible adaptation strategies. In the Rio Mannu di San Sperate watershed, located in Sardinia (Italy), we investigate production of wheat, a rainfed crop. In the Chiba watershed located in Cap Bon (Tunisia), we analyze irrigated tomato production. We find, using the FAO model AquaCrop, that crop production will decrease significantly in a future climate (2040-2070) as compared to the present without adaptation measures. Using "value-at-risk", we show that production should be viewed in a statistical manner. Wheat yields in Sardinia are modelled to decrease by 64% on clay loams, and to increase by 8% and 26% respectively on sandy loams and sandy clay loams. Assuming constant irrigation, tomatoes sown in August in Cap Bon are modelled to have a 45% chance of crop failure on loamy sands; a 39% decrease in yields on sandy clay loams; and a 12% increase in yields on sandy loams. For tomatoes sown in March, sandy clay loams will fail 81% of the time; on loamy sands the crop yields will be 63% less, while on sandy loams the yield will increase by 12%. However, if one assumes 10% less water available for irrigation, then tomatoes sown in March are not viable. Some adaptation strategies will be able to counteract the modelled crop losses. Increasing the amount of irrigation is one strategy; however, this may not be sustainable. Changes in agricultural management, such as changing the planting date of wheat to coincide with changing rainfall patterns in Sardinia or mulching of tomatoes in Tunisia, can be effective at reducing crop losses.

  11. Multifractality and value-at-risk forecasting of exchange rates

    NASA Astrophysics Data System (ADS)

    Batten, Jonathan A.; Kinateder, Harald; Wagner, Niklas

    2014-05-01

    This paper addresses market risk prediction for high frequency foreign exchange rates under nonlinear risk scaling behaviour. We use a modified version of the multifractal model of asset returns (MMAR) where trading time is represented by the series of volume ticks. Our dataset consists of 138,418 5-min round-the-clock observations of EUR/USD spot quotes and trading ticks during the period January 5, 2006 to December 31, 2007. Considering fat-tails, long-range dependence as well as scale inconsistency with the MMAR, we derive out-of-sample value-at-risk (VaR) forecasts and compare our approach to historical simulation as well as a benchmark GARCH(1,1) location-scale VaR model. Our findings underline that the multifractal properties in EUR/USD returns in fact have notable risk management implications. The MMAR approach is a parsimonious model which produces admissible VaR forecasts at the 12-h forecast horizon. For the daily horizon, the MMAR outperforms both alternatives based on conditional as well as unconditional coverage statistics.

  12. The social values at risk from sea-level rise

    SciTech Connect

    Graham, Sonia; Barnett, Jon; Fincher, Ruth; Hurlimann, Anna; Mortreux, Colette; Waters, Elissa

    2013-07-15

    Analysis of the risks of sea-level rise favours conventionally measured metrics such as the area of land that may be subsumed, the numbers of properties at risk, and the capital values of assets at risk. Despite this, it is clear that there exist many less material but no less important values at risk from sea-level rise. This paper re-theorises these multifarious social values at risk from sea-level rise, by explaining their diverse nature, and grounding them in the everyday practices of people living in coastal places. It is informed by a review and analysis of research on social values from within the fields of social impact assessment, human geography, psychology, decision analysis, and climate change adaptation. From this we propose that it is the ‘lived values’ of coastal places that are most at risk from sea-level rise. We then offer a framework that groups these lived values into five types: those that are physiological in nature, and those that relate to issues of security, belonging, esteem, and self-actualisation. This framework of lived values at risk from sea-level rise can guide empirical research investigating the social impacts of sea-level rise, as well as the impacts of actions to adapt to sea-level rise. It also offers a basis for identifying the distribution of related social outcomes across populations exposed to sea-level rise or sea-level rise policies.

  13. Value-at-risk estimation with wavelet-based extreme value theory: Evidence from emerging markets

    NASA Astrophysics Data System (ADS)

    Cifter, Atilla

    2011-06-01

    This paper introduces wavelet-based extreme value theory (EVT) for univariate value-at-risk estimation. Wavelets and EVT are combined for volatility forecasting to estimate a hybrid model. In the first stage, wavelets are used as a threshold in the generalized Pareto distribution, and in the second stage, EVT is applied with a wavelet-based threshold. This new model is applied to two major emerging stock markets: the Istanbul Stock Exchange (ISE) and the Budapest Stock Exchange (BUX). The relative performance of wavelet-based EVT is benchmarked against the Riskmetrics-EWMA, ARMA-GARCH, generalized Pareto distribution, and conditional generalized Pareto distribution models. The empirical results show that wavelet-based extreme value theory increases the predictive performance of financial forecasting according to the number of violations and tail-loss tests. The superior forecasting performance of the wavelet-based EVT model is also consistent with Basel II requirements, and this new model can be used by financial institutions as well.
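
    The EVT stage of such a model typically follows the peaks-over-threshold recipe: fit a generalized Pareto distribution (GPD) to exceedances over a high threshold u and invert the tail estimator, VaR_p = u + (β/ξ)[(n/Nᵤ · (1 − p))^(−ξ) − 1]. A sketch on synthetic heavy-tailed losses, using a plain empirical quantile as the threshold rather than the paper's wavelet-based one:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
losses = rng.standard_t(4, 5000)        # synthetic heavy-tailed daily losses

# Peaks-over-threshold: fit a GPD to exceedances over a high threshold u.
u = np.quantile(losses, 0.95)
exc = losses[losses > u] - u
xi, loc, beta = stats.genpareto.fit(exc, floc=0.0)   # fix GPD location at 0

# EVT tail estimator for VaR at level p.
n, n_u = len(losses), len(exc)
p = 0.99
var_evt = u + (beta / xi) * ((n / n_u * (1 - p)) ** (-xi) - 1.0)
print(var_evt)
```

    The resulting 99% VaR sits above the 95% threshold, as the tail formula requires.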

  14. Solving portfolio selection problems with minimum transaction lots based on conditional-value-at-risk

    NASA Astrophysics Data System (ADS)

    Setiawan, E. P.; Rosadi, D.

    2017-01-01

    Portfolio selection problems conventionally means ‘minimizing the risk, given the certain level of returns’ from some financial assets. This problem is frequently solved with quadratic or linear programming methods, depending on the risk measure that used in the objective function. However, the solutions obtained by these method are in real numbers, which may give some problem in real application because each asset usually has its minimum transaction lots. In the classical approach considering minimum transaction lots were developed based on linear Mean Absolute Deviation (MAD), variance (like Markowitz’s model), and semi-variance as risk measure. In this paper we investigated the portfolio selection methods with minimum transaction lots with conditional value at risk (CVaR) as risk measure. The mean-CVaR methodology only involves the part of the tail of the distribution that contributed to high losses. This approach looks better when we work with non-symmetric return probability distribution. Solution of this method can be found with Genetic Algorithm (GA) methods. We provide real examples using stocks from Indonesia stocks market.

  15. Value at risk estimation with entropy-based wavelet analysis in exchange markets

    NASA Astrophysics Data System (ADS)

    He, Kaijian; Wang, Lijun; Zou, Yingchao; Lai, Kin Keung

    2014-08-01

    In recent years, exchange markets have become increasingly integrated. Fluctuations and risks across different exchange markets exhibit co-moving and complex dynamics. In this paper we propose entropy-based multivariate wavelet approaches to analyze the multiscale characteristics in the multidimensional domain and further improve the reliability of Value at Risk estimation. Wavelet analysis is introduced to construct an entropy-based multiscale portfolio Value at Risk estimation algorithm that accounts for the multiscale dynamic correlation. The entropy measure, combined with an error-minimization principle, is proposed as a more effective criterion for selecting the best basis when determining the wavelet family and the decomposition level to use. The empirical studies conducted in this paper provide positive evidence of the superior performance of the proposed approach, using the closely related Chinese renminbi and European euro exchange markets.

  16. Measuring daily Value-at-Risk of SSEC index: A new approach based on multifractal analysis and extreme value theory

    NASA Astrophysics Data System (ADS)

    Wei, Yu; Chen, Wang; Lin, Yu

    2013-05-01

    Recent studies in the econophysics literature reveal that price variability has fractal and multifractal characteristics not only in developed financial markets, but also in emerging markets. Taking high-frequency intraday quotes of the Shanghai Stock Exchange Component (SSEC) Index as an example, this paper proposes a new method to measure daily Value-at-Risk (VaR) by combining the newly introduced multifractal volatility (MFV) model and the extreme value theory (EVT) method. Two VaR backtesting techniques are then employed to compare the performance of the model with that of a group of linear and nonlinear generalized autoregressive conditional heteroskedasticity (GARCH) models. The empirical results show the multifractal nature of price volatility in the Chinese stock market. VaR measures based on the multifractal volatility model and EVT method outperform many GARCH-type models at high-risk levels.
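
    A standard backtest of the kind used to compare such VaR models is Kupiec's proportion-of-failures (POF) test: under the null that violations occur with probability p, the likelihood ratio below is asymptotically χ²(1). The abstract does not say which two backtests were used, so this is a representative example with invented counts:

```python
import numpy as np
from scipy import stats

def kupiec_pof(n_obs, n_violations, p=0.01):
    """Kupiec proportion-of-failures LR test for VaR coverage.
    H0: the violation probability equals p. Returns (LR statistic, p-value)."""
    x, n = n_violations, n_obs
    phat = x / n
    def loglik(q):
        return x * np.log(q) + (n - x) * np.log(1 - q)
    lr = -2.0 * (loglik(p) - loglik(phat)) if 0 < x < n else np.inf
    pval = 1.0 - stats.chi2.cdf(lr, df=1)
    return lr, pval

# 250 trading days of 99% VaR: ~2.5 violations expected; 9 is suspicious.
lr, pval = kupiec_pof(250, 9, p=0.01)
print(lr, pval)
```

    Here the test rejects correct coverage at the 1% level, flagging the hypothetical model as underestimating risk.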

  17. On Value at Risk for Foreign Exchange Rates --- the Copula Approach

    NASA Astrophysics Data System (ADS)

    Jaworski, P.

    2006-11-01

    The aim of this paper is to determine the Value at Risk (VaR) of a portfolio consisting of long positions in foreign currencies on an emerging market. Based on empirical data, we restrict ourselves to the case when the tail parts of the distributions of logarithmic returns of these assets follow power laws and the lower tail of the associated copula C follows a power law of degree 1. We illustrate the practical usefulness of this approach by analyzing the exchange rates of EUR and CHF on the Polish forex market.
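
    The lower-tail behaviour of a copula that drives joint-crash risk can be probed empirically via the finite-level tail-dependence ratio λ_L(q) = C(q, q)/q ≈ P(U ≤ q, V ≤ q)/q on rank-transformed data. The helper below and the two synthetic return series (sharing a heavy-tailed common factor) are invented for illustration; this is not the paper's estimator:

```python
import numpy as np

def lower_tail_dep(x, y, q=0.05):
    """Empirical lower-tail dependence lambda_L(q) = P(U <= q, V <= q) / q,
    with U, V the rank-transformed (copula-scale) samples."""
    n = len(x)
    u = (np.argsort(np.argsort(x)) + 1) / (n + 1)
    v = (np.argsort(np.argsort(y)) + 1) / (n + 1)
    return np.mean((u <= q) & (v <= q)) / q

rng = np.random.default_rng(3)
# Two hypothetical log-return series sharing a common crash factor.
common = rng.standard_t(3, 20_000)
x = common + 0.5 * rng.standard_normal(20_000)
y = common + 0.5 * rng.standard_normal(20_000)
lam = lower_tail_dep(x, y)
print(lam)
```

    Under independence the ratio would be near q itself (0.05); a value far above that signals the joint lower-tail dependence that inflates portfolio VaR.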

  18. 'Weather Value at Risk': A uniform approach to describe and compare sectoral income risks from climate change.

    PubMed

    Prettenthaler, Franz; Köberl, Judith; Bird, David Neil

    2016-02-01

    We extend the concept of 'Weather Value at Risk' - initially introduced to measure the economic risks resulting from current weather fluctuations - to describe and compare sectoral income risks from climate change. This is illustrated using the examples of wheat cultivation and summer tourism in (parts of) Sardinia. Based on climate scenario data from four different regional climate models we study the change in the risk of weather-related income losses between a reference (1971-2000) and a future (2041-2070) period. Results from both examples suggest an increase in weather-related risks of income losses due to climate change, which is somewhat more pronounced for summer tourism. Nevertheless, income from wheat cultivation is at much higher risk of weather-related losses than income from summer tourism, both under reference and future climatic conditions. A weather-induced loss of at least 5% - compared to the income associated with average reference weather conditions - shows a 40% (80%) probability of occurrence in the case of wheat cultivation, but only a 0.4% (16%) probability of occurrence in the case of summer tourism, given reference (future) climatic conditions. Whereas in the agricultural example increases in the weather-related income risks mainly result from an overall decrease in average wheat yields, the heightened risk in the tourism example stems mostly from a change in the weather-induced variability of tourism incomes. With the extended 'Weather Value at Risk' concept able to capture impacts from changes in both the mean and the variability of the climate, it is a powerful tool for presenting and disseminating the results of climate change impact assessments. Due to its flexibility, the concept can be applied to any economic sector and therefore provides a valuable tool for cross-sectoral comparisons of climate change impacts, but also for the assessment of the costs and benefits of adaptation measures.
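
    The loss-exceedance probabilities quoted above boil down to a simple tail count over simulated incomes: the share of scenarios where income falls at least 5% below the average-weather baseline. The numbers below are invented stand-ins (not the paper's Sardinia results) just to show the mechanics:

```python
import numpy as np

def shortfall_prob(incomes, baseline, loss=0.05):
    """Probability that income falls at least `loss` (fraction) below the
    baseline income associated with average weather conditions."""
    return np.mean(incomes <= (1.0 - loss) * baseline)

rng = np.random.default_rng(5)
baseline = 100.0
# Hypothetical simulated annual incomes under reference vs. future climate.
ref = rng.normal(100.0, 6.0, 10_000)
fut = rng.normal(92.0, 9.0, 10_000)   # lower mean, higher variability
p_ref = shortfall_prob(ref, baseline)
p_fut = shortfall_prob(fut, baseline)
print(p_ref, p_fut)
```

    Shifting the mean down and widening the spread both raise the exceedance probability, mirroring the wheat (mean-driven) versus tourism (variability-driven) contrast in the study.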

  19. Deficient Contractor Business Systems: Applying the Value at Risk (VAR) Model to Earned Value Management Systems

    DTIC Science & Technology

    2013-06-01

    Institute/Electronic Industries Association (ANSI/EIA) 748 (DCMA, 2011). Dibert and Velez (2006) stated that the guidelines provide a practical...EVMS guidelines as issued by ANSI/EIA 748. However, the present research project focuses only on 13 EVMS guidelines that Senior DCMA EVM specialists...(2012a). Failure to meet ANSI/EIA 748's standards for any of the 13 guidelines results in a significant deficiency and disapproval of the EVM

  20. Deficient Contractor Business Systems: Applying the Value at Risk (VaR) Model to Earned Value Management Systems

    DTIC Science & Technology

    2013-06-30

    Battalion, 4th Brigade Combat Team, 1st Cavalry Division, Mosul, Iraq, in support of Operation New Dawn; company commander, Yongsan Readiness Center...LIST OF REFERENCES Alleman, G. (2012). Establishing the performance

  1. The EMEFS model evaluation

    SciTech Connect

    Barchet, W.R. ); Dennis, R.L. ); Seilkop, S.K. ); Banic, C.M.; Davies, D.; Hoff, R.M.; Macdonald, A.M.; Mickle, R.E.; Padro, J.; Puckett, K. ); Byun, D.; McHenry, J.N.

    1991-12-01

    The binational Eulerian Model Evaluation Field Study (EMEFS) consisted of several coordinated data gathering and model evaluation activities. In the EMEFS, data were collected by five air and precipitation monitoring networks between June 1988 and June 1990. Model evaluation is continuing. This interim report summarizes the progress made in the evaluation of the Regional Acid Deposition Model (RADM) and the Acid Deposition and Oxidant Model (ADOM) through the December 1990 completion of a State of Science and Technology report on model evaluation for the National Acid Precipitation Assessment Program (NAPAP). Because various assessment applications of RADM had to be evaluated for NAPAP, the report emphasizes the RADM component of the evaluation. A protocol for the evaluation was developed by the model evaluation team and defined the observed and predicted values to be used and the methods by which the observed and predicted values were to be compared. Scatter plots and time series of predicted and observed values were used to present the comparisons graphically. Difference statistics and correlations were used to quantify model performance. 64 refs., 34 figs., 6 tabs.
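
    The "difference statistics and correlations" used to quantify model performance in paired evaluations like this typically reduce to mean bias, root-mean-square error, and the Pearson correlation between observed and predicted values. A minimal sketch with invented deposition values:

```python
import numpy as np

def eval_stats(obs, pred):
    """Difference statistics and correlation for paired model evaluation."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    bias = (pred - obs).mean()                    # mean bias (signed)
    rmse = np.sqrt(((pred - obs) ** 2).mean())    # root-mean-square error
    corr = np.corrcoef(obs, pred)[0, 1]           # Pearson correlation
    return bias, rmse, corr

# Hypothetical observed vs. predicted deposition values (units arbitrary).
obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
pred = np.array([1.2, 2.1, 3.3, 3.9, 5.4])
b, r, c = eval_stats(obs, pred)
print(b, r, c)
```

    Scatter plots and time series would present the same pairs graphically; these three numbers summarize them.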

  2. Guidelines for Model Evaluation.

    DTIC Science & Technology

    1979-01-01

    by a decisionmaker. The full-scale evaluation of a complex model can be an expensive, time-consuming effort requiring diverse talents and skills...relative to PIES, were documented in a report to the Congress. 2/ An important side-effect of that document was that a foundation was laid for model...while for model evaluation there are no generally accepted standards or methods. Hence, GAO perceives the need to expand upon the lessons learned in

  3. Climate models and model evaluation

    SciTech Connect

    Gates, W.L.

    1994-12-31

    This brief overview addresses aspects of the nature, uses, evaluation and limitations of climate models. A comprehensive global modeling capability has been achieved only for the physical climate system, which is characterized by processes that serve to transport and exchange momentum, heat and moisture within and between the atmosphere, ocean and land surface. The fundamental aim of climate modeling, and the justification for the use of climate models, is the need to achieve a quantitative understanding of the operation of the climate system and to exploit any potential predictability that may exist.

  4. CMAQ Model Evaluation Framework

    EPA Pesticide Factsheets

    CMAQ is tested to establish the modeling system’s credibility in predicting pollutants such as ozone and particulate matter. Evaluation of CMAQ has been designed to assess the model’s performance for specific time periods and for specific uses.

  5. VPPA weld model evaluation

    NASA Technical Reports Server (NTRS)

    Mccutcheon, Kimble D.; Gordon, Stephen S.; Thompson, Paul A.

    1992-01-01

    NASA uses the Variable Polarity Plasma Arc Welding (VPPAW) process extensively for fabrication of Space Shuttle External Tanks. This welding process has been in use at NASA since the late 1970's but the physics of the process have never been satisfactorily modeled and understood. In an attempt to advance the level of understanding of VPPAW, Dr. Arthur C. Nunes, Jr., (NASA) has developed a mathematical model of the process. The work described in this report evaluated and used two versions (level-0 and level-1) of Dr. Nunes' model, and a model derived by the University of Alabama at Huntsville (UAH) from Dr. Nunes' level-1 model. Two series of VPPAW experiments were done, using over 400 different combinations of welding parameters. Observations were made of VPPAW process behavior as a function of specific welding parameter changes. Data from these weld experiments was used to evaluate and suggest improvements to Dr. Nunes' model. Experimental data and correlations with the model were used to develop a multi-variable control algorithm for use with a future VPPAW controller. This algorithm is designed to control weld widths (both on the crown and root of the weld) based upon the weld parameters, base metal properties, and real-time observation of the crown width. The algorithm exhibited accuracy comparable to that of the weld width measurements for both aluminum and mild steel welds.

  6. Pragmatic geometric model evaluation

    NASA Astrophysics Data System (ADS)

    Pamer, Robert

    2015-04-01

    Quantification of subsurface model reliability is mathematically and technically demanding as there are many different sources of uncertainty and some of the factors can be assessed merely in a subjective way. For many practical applications in industry or risk assessment (e. g. geothermal drilling) a quantitative estimation of possible geometric variations in depth unit is preferred over relative numbers because of cost calculations for different scenarios. The talk gives an overview of several factors that affect the geometry of structural subsurface models that are based upon typical geological survey organization (GSO) data like geological maps, borehole data and conceptually driven construction of subsurface elements (e. g. fault network). Within the context of the trans-European project "GeoMol" uncertainty analysis has to be very pragmatic also because of different data rights, data policies and modelling software between the project partners. In a case study a two-step evaluation methodology for geometric subsurface model uncertainty is being developed. In a first step several models of the same volume of interest have been calculated by omitting successively more and more input data types (seismic constraints, fault network, outcrop data). The positions of the various horizon surfaces are then compared. The procedure is equivalent to comparing data of various levels of detail and therefore structural complexity. This gives a measure of the structural significance of each data set in space and as a consequence areas of geometric complexity are identified. These areas are usually very data sensitive hence geometric variability in between individual data points in these areas is higher than in areas of low structural complexity. 
Instead of calculating a multitude of different models by varying some input data or parameters as it is done by Monte-Carlo-simulations, the aim of the second step of the evaluation procedure (which is part of the ongoing work) is to

  7. THE ATMOSPHERIC MODEL EVALUATION TOOL

    EPA Science Inventory

    This poster describes a model evaluation tool that is currently being developed and applied for meteorological and air quality model evaluation. The poster outlines the framework and provides examples of statistical evaluations that can be performed with the model evaluation tool...

  8. BioVapor Model Evaluation

    EPA Science Inventory

    General background on modeling and specifics of modeling vapor intrusion are given. Three classical model applications are described and related to the problem of petroleum vapor intrusion. These indicate the need for model calibration and uncertainty analysis. Evaluation of Bi...

  9. Evaluation Theory, Models, and Applications

    ERIC Educational Resources Information Center

    Stufflebeam, Daniel L.; Shinkfield, Anthony J.

    2007-01-01

    "Evaluation Theory, Models, and Applications" is designed for evaluators and students who need to develop a commanding knowledge of the evaluation field: its history, theory and standards, models and approaches, procedures, and inclusion of personnel as well as program evaluation. This important book shows how to choose from a growing…

  10. Evaluating Causal Models.

    ERIC Educational Resources Information Center

    Watt, James H., Jr.

    Pointing out that linear causal models can organize the interrelationships of a large number of variables, this paper contends that such models are particularly useful to mass communication research, which must by necessity deal with complex systems of variables. The paper first outlines briefly the philosophical requirements for establishing a…

  11. Model Program Evaluations. Fact Sheet

    ERIC Educational Resources Information Center

    Arkansas Safe Schools Initiative Division, 2002

    2002-01-01

    There are probably thousands of programs and courses intended to prevent or reduce violence in this nation's schools. Evaluating these many programs has become a problem or goal in itself. There are now many evaluation programs, with many levels of designations, such as model, promising, best practice, exemplary and noteworthy. "Model program" is…

  12. Advocacy Evaluation: A Model for Internal Evaluation Offices.

    ERIC Educational Resources Information Center

    Sonnichsen, Richard C.

    1988-01-01

    As evaluations are more often implemented by internal staff, internal evaluators must begin to assume decision-making and advocacy tasks. This advocacy evaluation concept is described using the Federal Bureau of Investigation evaluation staff as a model. (TJH)

  13. Sequentially Executed Model Evaluation Framework

    SciTech Connect

    2015-10-20

    Provides a message passing framework between generic input, model and output drivers, and specifies an API for developing such drivers. Also provides batch and real-time controllers which step the model and I/O through the time domain (or other discrete domain), and sample I/O drivers. This is a library framework, and does not, itself, solve any problems or execute any modeling. The SeMe framework aids in development of models which operate on sequential information, such as time-series, where evaluation is based on prior results combined with new data for this iteration. Has applications in quality monitoring, and was developed as part of the CANARY-EDS software, where real-time water quality data is being analyzed for anomalies.
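The driver-and-controller pattern described above can be sketched in a few lines. This is an illustrative reconstruction, not the actual SeMe API: the names (`MovingAverageModel`, `step`, `run`) are invented, and the toy model simply combines prior results with new data at each iteration, as the abstract describes.

```python
class MovingAverageModel:
    """Toy sequential model: each evaluation combines prior results
    (the retained history window) with the new datum for this iteration."""
    def __init__(self, window=3):
        self.window = window
        self.history = []

    def step(self, value):
        # Retain only the last `window` values, then evaluate.
        self.history = (self.history + [value])[-self.window:]
        return sum(self.history) / len(self.history)

def run(input_stream, model, output):
    """Batch controller: steps the model and I/O through the time domain."""
    for value in input_stream:
        output.append(model.step(value))

results = []
run([1.0, 2.0, 3.0, 4.0], MovingAverageModel(window=2), results)
print(results)  # [1.0, 1.5, 2.5, 3.5]
```

A real-time controller would differ only in where the input stream comes from (e.g. a live sensor feed, as in CANARY-EDS); the model's `step` contract stays the same.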

  14. Sequentially Executed Model Evaluation Framework

    SciTech Connect

    2014-02-14

    Provides a message passing framework between generic input, model and output drivers, and specifies an API for developing such drivers. Also provides batch and real-time controllers which step the model and I/O through the time domain (or other discrete domain), and sample I/O drivers. This is a library framework, and does not, itself, solve any problems or execute any modeling. The SeMe framework aids in development of models which operate on sequential information, such as time-series, where evaluation is based on prior results combined with new data for this iteration. Has applications in quality monitoring, and was developed as part of the CANARY-EDS software, where real-time water quality data is being analyzed for anomalies.

  15. A Taxonomy of Evaluation Models: Use of Evaluation Models in Program Evaluation.

    ERIC Educational Resources Information Center

    Carter, Wayne E.

    In the nine years following the passage of the Elementary Secondary Education Act (ESEA), several models have been developed to attempt to remedy the deficiencies in existing educational evaluation and decision theory noted by Stufflebeam and co-workers. Compilations of evaluation models have been undertaken and listings exist of models available…

  16. Infrasound Sensor Models and Evaluations

    SciTech Connect

    KROMER,RICHARD P.; MCDONALD,TIMOTHY S.

    2000-07-31

    Sandia National Laboratories has continued to evaluate the performance of infrasound sensors that are candidates for use by the International Monitoring System (IMS) for the Comprehensive Nuclear-Test-Ban Treaty Organization. The performance criteria against which these sensors are assessed are specified in the "Operational Manual for Infrasound Monitoring and the International Exchange of Infrasound Data". This presentation includes the results of efforts concerning two of these sensors: (1) Chaparral Physics Model 5; and (2) CEA MB2000. Sandia is working with Chaparral Physics in order to improve the capability of the Model 5 (a prototype sensor) to be calibrated and evaluated. With the assistance of the Scripps Institution of Oceanography, Sandia is also conducting tests to evaluate the performance of the CEA MB2000. Sensor models based on theoretical transfer functions and manufacturer specifications for these two devices have been developed. This presentation will feature the results of coherence-based data analysis of signals from a huddle test, utilizing several sensors of both types, in order to verify the sensor performance.

  17. Evaluation strategies for CNSs: application of an evaluation model.

    PubMed

    Kennedy-Malone, L M

    1996-07-01

    Program development has become an essential role function for today's CNS, who must be able to evaluate programs to determine their efficacy. A useful evaluation guide is Stufflebeam's CIPP (context, input, process, and product) model, which includes a framework to evaluate indirect care measures directly affecting cost-effectiveness and accountability. The model's core consists of (1) context evaluation leading to informed, contemplated decisions; (2) input evaluation directing structured decisions; (3) process evaluation guiding implemented decisions; and (4) product evaluation serving to recycle decisions. Strategies for using Stufflebeam's CIPP model are described.

  18. The Idiographic Evaluation Model in Crime Control.

    ERIC Educational Resources Information Center

    Hurwitz, Jacob I.

    1984-01-01

    Presents some recent developments in the evaluation of crime prevention and control programs, including the increased use of process evaluation models. Describes the nature, methods, and advantages of the idiographic (or single subject) model as used in social work. (JAC)

  19. Model Performance Evaluation and Scenario Analysis (MPESA)

    EPA Pesticide Factsheets

    Model Performance Evaluation and Scenario Analysis (MPESA) assesses the performance with which models predict time series data. The tool was developed for the Hydrological Simulation Program-Fortran (HSPF) and the Stormwater Management Model (SWMM).

  20. Model Evaluation of Continuous Data Pharmacometric Models: Metrics and Graphics

    PubMed Central

    Nguyen, THT; Mouksassi, M‐S; Holford, N; Al‐Huniti, N; Freedman, I; Hooker, AC; John, J; Karlsson, MO; Mould, DR; Pérez Ruixo, JJ; Plan, EL; Savic, R; van Hasselt, JGC; Weber, B; Zhou, C; Comets, E

    2017-01-01

    This article represents the first in a series of tutorials on model evaluation in nonlinear mixed effect models (NLMEMs), from the International Society of Pharmacometrics (ISoP) Model Evaluation Group. Numerous tools are available for evaluation of NLMEM, with a particular emphasis on visual assessment. This first basic tutorial focuses on presenting graphical evaluation tools of NLMEM for continuous data. It illustrates graphs for correct or misspecified models, discusses their pros and cons, and recalls the definition of metrics used. PMID:27884052

  1. Model Evaluation of Continuous Data Pharmacometric Models: Metrics and Graphics.

    PubMed

    Nguyen, Tht; Mouksassi, M-S; Holford, N; Al-Huniti, N; Freedman, I; Hooker, A C; John, J; Karlsson, M O; Mould, D R; Pérez Ruixo, J J; Plan, E L; Savic, R; van Hasselt, Jgc; Weber, B; Zhou, C; Comets, E; Mentré, F

    2017-02-01

    This article represents the first in a series of tutorials on model evaluation in nonlinear mixed effect models (NLMEMs), from the International Society of Pharmacometrics (ISoP) Model Evaluation Group. Numerous tools are available for evaluation of NLMEM, with a particular emphasis on visual assessment. This first basic tutorial focuses on presenting graphical evaluation tools of NLMEM for continuous data. It illustrates graphs for correct or misspecified models, discusses their pros and cons, and recalls the definition of metrics used.

  2. Evaluating Interactive Instructional Technologies: A Cognitive Model.

    ERIC Educational Resources Information Center

    Tucker, Susan A.

    Strengths and weaknesses of prevailing evaluation models are analyzed, with attention to the role of feedback in each paradigm. A framework is then presented for analyzing issues faced by evaluators of interactive instructional technologies. The current practice of evaluation relies heavily on 3 models developed over 20 years ago: (1) the…

  3. EPA Corporate GHG Goal Evaluation Model

    EPA Pesticide Factsheets

    The EPA Corporate GHG Goal Evaluation Model provides companies with a transparent and publicly available benchmarking resource to help evaluate and establish new or existing GHG goals that go beyond business as usual for their individual sectors.

  4. Differential program evaluation model in child protection.

    PubMed

    Lalayants, Marina

    2012-01-01

    Increasingly, attention has been focused on the degree to which social programs have effectively and efficiently delivered services. Using the differential program evaluation model by Tripodi, Fellin, and Epstein (1978) and by Bielawski and Epstein (1984), this paper described the application of this model to evaluating a multidisciplinary clinical consultation practice in child protection. This paper discussed the uses of the model by demonstrating them through the four stages of program initiation, contact, implementation, and stabilization. This organizational case study made a contribution to the model by introducing essential and interrelated elements of a "practical evaluation" methodology for evaluating social programs, such as a participatory evaluation approach; learning, empowerment, and sustainability; and a flexible, individualized approach to evaluation. The study results demonstrated that by applying the program development model, child-protective administrators and practitioners were able to evaluate the existing practices and recognize areas for program improvement.

  5. Toward an Ecological Evaluation Model.

    ERIC Educational Resources Information Center

    Parker, Jackson; Patterson, Jerry L.

    1979-01-01

    The authors suggest that the aura of authority traditionally placed on educational research and evaluation has been based on an outdated understanding of the scientific enterprise. They outline an alternative view of science which is more ecological and provides more scope and power for evaluating educational programs. They propose a new framework…

  6. Using multifractals to evaluate oceanographic model skill

    NASA Astrophysics Data System (ADS)

    Skákala, Jozef; Cazenave, Pierre W.; Smyth, Timothy J.; Torres, Ricardo

    2016-08-01

    We are in an era of unprecedented data volumes generated from observations and model simulations. This is particularly true of satellite Earth Observations (EO) and global-scale oceanographic models. This presents us with an opportunity to evaluate large-scale oceanographic model outputs using EO data. Previous work on model skill evaluation has led to a plethora of metrics. The paper defines two new model skill evaluation metrics. The metrics are based on the theory of universal multifractals, and their purpose is to measure the structural similarity between the model predictions and the EO data. The two metrics have the following advantages over standard techniques: (a) they are scale-free, and (b) they carry an important part of the information about how the model represents different oceanographic drivers. These two metrics are then used in the paper to evaluate the performance of the FVCOM model in the shelf seas around the south-west coast of the UK.
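The authors' metrics are not reproduced in the abstract, but multifractal analysis of this kind typically rests on structure-function scaling exponents zeta(q), estimated by a log-log fit of S_q(l) = mean(|x(t+l) - x(t)|^q) against lag l; a model and an observed field can then be compared through their zeta(q) curves. A minimal illustrative sketch (not the paper's metric; the function name and the Brownian-motion test signal are my own):

```python
import math, random

def scaling_exponents(series, qs, lags):
    """Estimate structure-function scaling exponents zeta(q) by a
    log-log least-squares fit of S_q(l) against lag l."""
    zetas = []
    for q in qs:
        xs, ys = [], []
        for lag in lags:
            increments = [abs(series[i + lag] - series[i])
                          for i in range(len(series) - lag)]
            sq = sum(v ** q for v in increments) / len(increments)
            xs.append(math.log(lag))
            ys.append(math.log(sq))
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                 / sum((x - mx) ** 2 for x in xs))
        zetas.append(slope)
    return zetas

# Sanity check on a random walk, for which zeta(q) = q/2 in theory.
random.seed(0)
walk = [0.0]
for _ in range(2000):
    walk.append(walk[-1] + random.gauss(0, 1))
zeta = scaling_exponents(walk, qs=[1, 2], lags=[1, 2, 4, 8, 16])
# Expect roughly [0.5, 1.0], up to sampling noise.
```

Comparing zeta(q) curves is what makes such metrics scale-free: the exponents characterize structure across all fitted lags rather than at one resolution.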

  7. The EMEFS model evaluation. An interim report

    SciTech Connect

    Barchet, W.R.; Dennis, R.L.; Seilkop, S.K.; Banic, C.M.; Davies, D.; Hoff, R.M.; Macdonald, A.M.; Mickle, R.E.; Padro, J.; Puckett, K.; Byun, D.; McHenry, J.N.; Karamchandani, P.; Venkatram, A.; Fung, C.; Misra, P.K.; Hansen, D.A.; Chang, J.S.

    1991-12-01

    The binational Eulerian Model Evaluation Field Study (EMEFS) consisted of several coordinated data gathering and model evaluation activities. In the EMEFS, data were collected by five air and precipitation monitoring networks between June 1988 and June 1990. Model evaluation is continuing. This interim report summarizes the progress made in the evaluation of the Regional Acid Deposition Model (RADM) and the Acid Deposition and Oxidant Model (ADOM) through the December 1990 completion of a State of Science and Technology report on model evaluation for the National Acid Precipitation Assessment Program (NAPAP). Because various assessment applications of RADM had to be evaluated for NAPAP, the report emphasizes the RADM component of the evaluation. A protocol for the evaluation was developed by the model evaluation team and defined the observed and predicted values to be used and the methods by which the observed and predicted values were to be compared. Scatter plots and time series of predicted and observed values were used to present the comparisons graphically. Difference statistics and correlations were used to quantify model performance. 64 refs., 34 figs., 6 tabs.
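The difference statistics and correlations mentioned in the protocol are standard paired comparisons of predicted and observed values. A generic sketch (not the EMEFS code; the function name is mine):

```python
import math

def difference_statistics(predicted, observed):
    """Mean bias, RMSE, and Pearson correlation for paired values."""
    n = len(predicted)
    bias = sum(p - o for p, o in zip(predicted, observed)) / n
    rmse = math.sqrt(sum((p - o) ** 2
                         for p, o in zip(predicted, observed)) / n)
    mp = sum(predicted) / n
    mo = sum(observed) / n
    cov = sum((p - mp) * (o - mo) for p, o in zip(predicted, observed))
    sp = math.sqrt(sum((p - mp) ** 2 for p in predicted))
    so = math.sqrt(sum((o - mo) ** 2 for o in observed))
    return bias, rmse, cov / (sp * so)

bias, rmse, corr = difference_statistics([2.0, 3.0, 5.0], [1.0, 3.0, 4.0])
print(round(bias, 3), round(rmse, 3), round(corr, 3))  # 0.667 0.816 0.929
```

Scatter plots and time series of the same paired values then show graphically what these three numbers summarize.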

  8. Large Signal Evaluation of Nonlinear HBT Model

    NASA Astrophysics Data System (ADS)

    Angelov, Iltcho; Inoue, Akira; Watanabe, Shinsuke

    The performance of a recently developed large-signal (LS) HBT model was evaluated with extensive LS measurements, including power spectrum, load-pull, and intermodulation investigations. The proposed model adopts a temperature-dependent leakage resistance and simplified capacitance models. The model was implemented in ADS as an SDD. An important feature of the model is that the main model parameters are taken directly from measurements in a rather simple and understandable way. Results show good accuracy despite the simplicity of the model. To our knowledge, this HBT model is one of only a few that can handle high-current and high-power HBT devices with significantly fewer model parameters while maintaining good accuracy.

  9. Evaluating modeling tools for the EDOS

    NASA Technical Reports Server (NTRS)

    Knoble, Gordon; Mccaleb, Frederick; Aslam, Tanweer; Nester, Paul

    1994-01-01

    The Earth Observing System (EOS) Data and Operations System (EDOS) Project is developing a functional, system performance model to support the system implementation phase of the EDOS which is being designed and built by the Goddard Space Flight Center (GSFC). The EDOS Project will use modeling to meet two key objectives: (1) manage system design impacts introduced by unplanned changes in mission requirements; and (2) evaluate evolutionary technology insertions throughout the development of the EDOS. To select a suitable modeling tool, the EDOS modeling team developed an approach for evaluating modeling tools and languages by deriving evaluation criteria from both the EDOS modeling requirements and the development plan. Essential and optional features for an appropriate modeling tool were identified and compared with known capabilities of several modeling tools. Vendors were also provided the opportunity to model a representative EDOS processing function to demonstrate the applicability of their modeling tool to the EDOS modeling requirements. This paper emphasizes the importance of using a well defined approach for evaluating tools to model complex systems like the EDOS. The results of this evaluation study do not in any way signify the superiority of any one modeling tool since the results will vary with the specific modeling requirements of each project.

  10. A Model for Administrative Evaluation by Subordinates.

    ERIC Educational Resources Information Center

    Budig, Jeanne E.

    Under the administrator evaluation program adopted at Vincennes University, all faculty and professional staff are invited to evaluate each administrator above them in the chain of command. Originally based on the Purdue University "cafeteria" system, this evaluation model has been used biannually for 10 years. In an effort to simplify the system,…

  11. Program Development and Evaluation: A Modeling Process.

    ERIC Educational Resources Information Center

    Green, Donald W.; Corgiat, RayLene

    A model of program development and evaluation was developed at Genesee Community College, utilizing a system theory/process of deductive and inductive reasoning to ensure coherence and continuity within the program. The model links activities to specific measurable outcomes. Evaluation checks and feedback are built in at various levels so that…

  12. Corrections Education Evaluation System Model.

    ERIC Educational Resources Information Center

    Nelson, Orville; And Others

    The purpose of this project was to develop an evaluation system for the competency-based vocational program developed by Wisconsin's Division of Corrections, Department of Public Instruction (DPI), and the Vocational, Technical, and Adult Education System (VTAE). Site visits were conducted at five correctional institutions in March and April of…

  13. Model Performance Evaluation and Scenario Analysis (MPESA) Tutorial

    EPA Science Inventory

    This tool consists of two parts: model performance evaluation and scenario analysis (MPESA). The model performance evaluation consists of two components: model performance evaluation metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit m...
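The list of MPESA metrics is truncated above, but two goodness-of-fit statistics that are standard in HSPF/SWMM evaluation, Nash-Sutcliffe efficiency and percent bias, illustrate the kind of measure involved. A generic sketch, not the MPESA implementation:

```python
def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect; 0 means the model is no
    better than predicting the mean of the observations."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

def percent_bias(obs, sim):
    """PBIAS: positive values indicate the model underestimates on average."""
    return 100.0 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)

obs = [10.0, 20.0, 30.0, 40.0]
sim = [12.0, 18.0, 33.0, 39.0]
print(round(nash_sutcliffe(obs, sim), 3), round(percent_bias(obs, sim), 1))
# 0.964 -2.0
```

Model diagnostics would then probe *where* in the time series the misfit concentrates, which a single summary statistic cannot show.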

  14. Comprehensive system models: Strategies for evaluation

    NASA Technical Reports Server (NTRS)

    Field, Christopher; Kutzbach, John E.; Ramanathan, V.; Maccracken, Michael C.

    1992-01-01

    The task of evaluating comprehensive earth system models is vast involving validations of every model component at every scale of organization, as well as tests of all the individual linkages. Even the most detailed evaluation of each of the component processes and the individual links among them should not, however, engender confidence in the performance of the whole. The integrated earth system is so rich with complex feedback loops, often involving components of the atmosphere, oceans, biosphere, and cryosphere, that it is certain to exhibit emergent properties very difficult to predict from the perspective of a narrow focus on any individual component of the system. Therefore, a substantial share of the task of evaluating comprehensive earth system models must reside at the level of whole system evaluations. Since complete, integrated atmosphere/ ocean/ biosphere/ hydrology models are not yet operational, questions of evaluation must be addressed at the level of the kinds of earth system processes that the models should be competent to simulate, rather than at the level of specific performance criteria. Here, we have tried to identify examples of earth system processes that are difficult to simulate with existing models and that involve a rich enough suite of feedbacks that they are unlikely to be satisfactorily described by highly simplified or toy models. Our purpose is not to specify a checklist of evaluation criteria but to introduce characteristics of the earth system that may present useful opportunities for model testing and, of course, improvement.

  15. Evaluation and Comparison of Computational Models

    PubMed Central

    Myung, Jay; Tang, Yun; Pitt, Mark A.

    2009-01-01

    Computational models are powerful tools that can enhance the understanding of scientific phenomena. The enterprise of modeling is most productive when the reasons underlying the adequacy of a model, and possibly its superiority to other models, are understood. This chapter begins with an overview of the main criteria that must be considered in model evaluation and selection, in particular explaining why generalizability is the preferred criterion for model selection. This is followed by a review of measures of generalizability. The final section demonstrates the use of five versatile and easy-to-use selection methods for choosing between two mathematical models of protein folding. PMID:19216931
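Generalizability-based selection trades goodness of fit against model complexity. The chapter's five methods are not reproduced here, but the Akaike information criterion is one widely used instance of the idea; the numbers below are invented for illustration:

```python
def aic(log_likelihood, k):
    """Akaike information criterion: lower is better; k = number of
    free parameters. The 2*k term penalizes complexity."""
    return 2 * k - 2 * log_likelihood

# Model B fits slightly better but uses three more parameters,
# so the complexity penalty outweighs the improvement in fit.
aic_a = aic(log_likelihood=-100.0, k=2)  # 204.0
aic_b = aic(log_likelihood=-99.0, k=5)   # 208.0
print("prefer A" if aic_a < aic_b else "prefer B")  # prefer A
```

The same comparison could be run with BIC, cross-validation, or minimum description length; they differ in how heavily complexity is penalized.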

  16. [Evaluation model for municipal health planning management].

    PubMed

    Berretta, Isabel Quint; Lacerda, Josimari Telino de; Calvo, Maria Cristina Marino

    2011-11-01

    This article presents an evaluation model for municipal health planning management. The basis was a methodological study using the health planning theoretical framework to construct the evaluation matrix, in addition to an understanding of the organization and functioning designed by the Planning System of the Unified National Health System (PlanejaSUS) and definition of responsibilities for the municipal level under the Health Management Pact. The indicators and measures were validated using the consensus technique with specialists in planning and evaluation. The applicability was tested in 271 municipalities (counties) in the State of Santa Catarina, Brazil, based on population size. The proposed model features two evaluative dimensions which reflect the municipal health administrator's commitment to planning: the guarantee of resources and the internal and external relations needed for developing the activities. The data were analyzed using indicators, sub-dimensions, and dimensions. The study concludes that the model is feasible and appropriate for evaluating municipal performance in health planning management.

  17. Evaluation of Galactic Cosmic Ray Models

    NASA Technical Reports Server (NTRS)

    Adams, James H., Jr.; Heiblim, Samuel; Malott, Christopher

    2009-01-01

    Models of the galactic cosmic ray spectra have been tested by comparing their predictions to an evaluated database containing more than 380 measured cosmic ray spectra extending from 1960 to the present.

  18. Metrics for Evaluation of Student Models

    ERIC Educational Resources Information Center

    Pelanek, Radek

    2015-01-01

    Researchers use many different metrics for evaluation of performance of student models. The aim of this paper is to provide an overview of commonly used metrics, to discuss properties, advantages, and disadvantages of different metrics, to summarize current practice in educational data mining, and to provide guidance for evaluation of student…

  19. Evaluation Model for Career Programs. Final Report.

    ERIC Educational Resources Information Center

    Byerly, Richard L.; And Others

    A study was conducted to provide and test an evaluative model that could be utilized in providing curricular evaluation of the various career programs. Two career fields, dental assistant and auto mechanic, were chosen for study. A questionnaire based upon the actual job performance was completed by six groups connected with the auto mechanics and…

  20. SAPHIRE models and software for ASP evaluations

    SciTech Connect

    Sattison, M.B.; Schroeder, J.A.; Russell, K.D.

    1995-04-01

    The Idaho National Engineering Laboratory (INEL) over the past year has created 75 plant-specific Accident Sequence Precursor (ASP) models using the SAPHIRE suite of PRA codes. Along with the new models, the INEL has also developed a new module for SAPHIRE which is tailored specifically to the unique needs of conditional core damage probability (CCDP) evaluations. These models and software will be the next generation of risk tools for the evaluation of accident precursors by both NRR and AEOD. This paper presents an overview of the models and software. Key characteristics include: (1) classification of the plant models according to plant response with a unique set of event trees for each plant class, (2) plant-specific fault trees using supercomponents, (3) generation and retention of all system and sequence cutsets, (4) full flexibility in modifying logic, regenerating cutsets, and requantifying results, and (5) user interface for streamlined evaluation of ASP events.

  1. The Air Quality Model Evaluation International Initiative ...

    EPA Pesticide Factsheets

    This presentation provides an overview of the Air Quality Model Evaluation International Initiative (AQMEII). It contains a synopsis of the three phases of AQMEII, including objectives, logistics, and timelines. It also provides a number of examples of analyses conducted through AQMEII with a particular focus on past and future analyses of deposition. The National Exposure Research Laboratory (NERL) Computational Exposure Division (CED) develops and evaluates data, decision-support tools, and models to be applied to media-specific or receptor-specific problem areas. CED uses modeling-based approaches to characterize exposures, evaluate fate and transport, and support environmental diagnostics/forensics with input from multiple data sources. It also develops media- and receptor-specific models, process models, and decision support tools for use both within and outside of EPA.

  2. Evaluating AEROCOM Models with Remote Sensing Datasets

    NASA Astrophysics Data System (ADS)

    Schutgens, N.; Gryspeerdt, E.; Weigum, N.; Veira, A.; Partridge, D.; Stier, P.

    2014-12-01

    We present an in-depth evaluation of AEROCOM models with a variety of remote sensing datasets: MODIS AOT (& AE over ocean); AERONET AOT, AE & SSA; and Maritime Aerosol Network (MAN) AOT & AE. Together these datasets provide extensive global and temporal coverage and measure both extensive (AOT) as well as intensive aerosol properties (AE & SSA). Models and observations differ strongly in their spatio-temporal sampling. Model results are typical of large gridboxes (100 by 100 km), while observations are made over much smaller areas (10 by 10 km for MODIS, even smaller for AERONET and MAN). Model results are always available, in contrast to observations, which are intermittent due to orbital constraints, retrieval limitations and instrument failure/maintenance. We find that differences in AOT due to sampling effects can be 100% for instantaneous values and can still be 40% for monthly or yearly averages. Such differences are comparable to or larger than typical retrieval errors in the observations. We propose strategies (temporal colocation, spatial aggregation) for reducing these sampling errors. Finally, we evaluate one year of co-located AOT, AE and SSA from several AEROCOM models against MODIS, AERONET and MAN observations. Where the observational datasets overlap, they give similar results, but in general they allow us to evaluate models in very different spatio-temporal domains. We show that even small datasets like MAN AOT or AERONET SSA provide a useful standard for evaluating models thanks to temporal colocation. The models differ quite a bit from the observations, and each model differs in its own way. These results are presented through global maps of yearly averaged differences, time-series of modelled and observed data, scatter plots of correlations among observables (e.g. SSA vs AE) and Taylor diagrams. In particular, we find that the AEROCOM emissions substantially underestimate wildfire emissions and that many models have aerosol that is too absorbing.
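Temporal colocation, as proposed above, means averaging the model only at times where a retrieval exists. A toy illustration with invented daily AOT values (plain Python, not the AEROCOM tooling) shows how the fully sampled model mean differs from the colocated one:

```python
# Invented daily AOT samples; None marks days with no retrieval
# (cloud cover, orbit gaps, instrument downtime).
model = [0.10, 0.30, 0.20, 0.40, 0.25, 0.35]
obs   = [0.12, None, None, 0.38, None, 0.33]

# Mean over all model samples vs. mean over observation times only.
full_mean = sum(model) / len(model)
pairs = [(m, o) for m, o in zip(model, obs) if o is not None]
coloc_mean = sum(m for m, _ in pairs) / len(pairs)
obs_mean = sum(o for _, o in pairs) / len(pairs)

print(round(full_mean, 3), round(coloc_mean, 3), round(obs_mean, 3))
# 0.267 0.283 0.277
```

Here the sampling effect alone shifts the model mean by several percent; comparing `full_mean` with `obs_mean` would mistake part of that shift for model error, whereas `coloc_mean` vs `obs_mean` compares like with like.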

  3. Evaluation of a lake whitefish bioenergetics model

    USGS Publications Warehouse

    Madenjian, Charles P.; O'Connor, Daniel V.; Pothoven, Steven A.; Schneeberger, Philip J.; Rediske, Richard R.; O'Keefe, James P.; Bergstedt, Roger A.; Argyle, Ray L.; Brandt, Stephen B.

    2006-01-01

    We evaluated the Wisconsin bioenergetics model for lake whitefish Coregonus clupeaformis in the laboratory and in the field. For the laboratory evaluation, lake whitefish were fed rainbow smelt Osmerus mordax in four laboratory tanks during a 133-d experiment. Based on a comparison of bioenergetics model predictions of lake whitefish food consumption and growth with observed consumption and growth, we concluded that the bioenergetics model furnished significantly biased estimates of both food consumption and growth. On average, the model overestimated consumption by 61% and underestimated growth by 16%. The source of the bias was probably an overestimation of the respiration rate. We therefore adjusted the respiration component of the bioenergetics model to obtain a good fit of the model to the observed consumption and growth in our laboratory tanks. Based on the adjusted model, predictions of food consumption over the 133-d period fell within 5% of observed consumption in three of the four tanks and within 9% of observed consumption in the remaining tank. We used polychlorinated biphenyls (PCBs) as a tracer to evaluate model performance in the field. Based on our laboratory experiment, the efficiency with which lake whitefish retained PCBs from their food (ρ) was estimated at 0.45. We applied the bioenergetics model to Lake Michigan lake whitefish and then used PCB determinations of both lake whitefish and their prey from Lake Michigan to estimate ρ in the field. Application of the original model to Lake Michigan lake whitefish yielded a field estimate of 0.28, implying that the original formulation of the model overestimated consumption in Lake Michigan by 61%. Application of the bioenergetics model with the adjusted respiration component resulted in a field ρ estimate of 0.56, implying that this revised model underestimated consumption by 20%.
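The reported percentages follow from a simple mass-balance argument: for a fixed PCB budget, the retention efficiency required to close the budget scales inversely with assumed consumption, so the ratio of the laboratory efficiency to the field-implied efficiency gives the consumption bias. A back-of-envelope check of the abstract's numbers under that proportionality assumption:

```python
# Retention efficiencies from the abstract.
rho_lab = 0.45             # measured in the laboratory experiment
rho_field_original = 0.28  # implied by the original model in Lake Michigan
rho_field_adjusted = 0.56  # implied by the adjusted-respiration model

# If consumption is proportional to 1/rho for a fixed PCB budget:
overestimate = rho_lab / rho_field_original - 1   # original model
underestimate = 1 - rho_lab / rho_field_adjusted  # adjusted model

print(round(overestimate, 2), round(underestimate, 2))  # 0.61 0.2
```

This reproduces the abstract's 61% overestimate and roughly 20% underestimate, which is consistent with the stated interpretation of the field ρ estimates.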

  4. Saphire models and software for ASP evaluations

    SciTech Connect

    Sattison, M.B.

    1997-02-01

    The Idaho National Engineering Laboratory (INEL) over the past three years has created 75 plant-specific Accident Sequence Precursor (ASP) models using the SAPHIRE suite of PRA codes. Along with the new models, the INEL has also developed a new module for SAPHIRE which is tailored specifically to the unique needs of ASP evaluations. These models and software will be the next generation of risk tools for the evaluation of accident precursors by both the U.S. Nuclear Regulatory Commission's (NRC's) Office of Nuclear Reactor Regulation (NRR) and the Office for Analysis and Evaluation of Operational Data (AEOD). This paper presents an overview of the models and software. Key characteristics include: (1) classification of the plant models according to plant response with a unique set of event trees for each plant class, (2) plant-specific fault trees using supercomponents, (3) generation and retention of all system and sequence cutsets, (4) full flexibility in modifying logic, regenerating cutsets, and requantifying results, and (5) user interface for streamlined evaluation of ASP events. Future plans for the ASP models are also presented.

  5. Evaluation of trends in wheat yield models

    NASA Technical Reports Server (NTRS)

    Ferguson, M. C.

    1982-01-01

    Trend terms in models for wheat yield in the U.S. Great Plains for the years 1932 to 1976 are evaluated. The subset of meteorological variables yielding the largest adjusted R² is selected using the method of leaps and bounds. Latent root regression is used to eliminate multicollinearities, and generalized ridge regression is used to introduce bias to provide stability in the data matrix. The regression model used provides for two trends in each of two models: a dependent model in which the trend line is piece-wise continuous, and an independent model in which the trend line is discontinuous at the year of the slope change. It was found that the trend lines best describing the wheat yields consisted of combinations of increasing, decreasing, and constant trend: four combinations for the dependent model and seven for the independent model.
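The two trend forms described above can be written as regression design matrices: a continuous "hinge" term for the dependent model, and a separate post-break intercept and slope for the independent model. This is an illustrative reconstruction with synthetic data, not the study's code or results:

```python
import numpy as np

# Illustrative sketch of the two trend forms compared in the study:
# a "dependent" model whose two segments join at the break year, and an
# "independent" model that allows a jump there. Data below are synthetic.

def dependent_design(years, break_year):
    # y = b0 + b1*t + b2*max(year - break, 0): piece-wise continuous trend
    t = years - years[0]
    hinge = np.maximum(years - break_year, 0)
    return np.column_stack([np.ones_like(t), t, hinge])

def independent_design(years, break_year):
    # Separate intercept and slope after the break: discontinuous trend
    after = (years > break_year).astype(float)
    t = years - years[0]
    return np.column_stack([np.ones_like(t), t, after, after * t])

years = np.arange(1932, 1977, dtype=float)
# Synthetic yields: constant trend, then increasing trend from 1955 onward
rng = np.random.default_rng(0)
y = 10 + 0.05 * np.maximum(years - 1955, 0) + rng.normal(0, 0.1, years.size)

X = dependent_design(years, 1955.0)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
# coef[1] is the pre-break slope; coef[1] + coef[2] is the post-break slope
```

Fitting either design by least squares recovers the slope combination (increasing, decreasing, or constant on each side of the break) that best describes the series.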

  6. [Evaluation of the Dresden Tympanoplasty Model (DTM)].

    PubMed

    Beleites, T; Neudert, M; Lasurashvili, N; Kemper, M; Offergeld, C; Hofmann, G; Zahnert, T

    2011-11-01

    The training of microsurgical motor skills is essential for surgical education if the interests of the patient are to be safeguarded. In otosurgery, the complex anatomy of the temporal bone and its variations necessitate special training before performing surgery on a patient. We therefore developed and evaluated a simplified middle ear model for acquiring first microsurgical skills in tympanoplasty. The simplified tympanoplasty model consists of the outer ear canal and a tympanic cavity. A stapes model is placed in projection of the upper posterior tympanic membrane quadrant at the medial wall of the simulated tympanic cavity. To imitate the flexibility of the annular ligament, the stapes is fixed on a soft plastic pad. 41 subjects evaluated the model's anatomical fidelity, its comparability to the real surgical situation, and its general properties using a special questionnaire. The tympanoplasty model was rated very positively by all participants. It is a reasonably priced model and a useful tool in microsurgical skills training, closing the gap between theoretical training and real operating conditions.

  7. PREFACE SPECIAL ISSUE ON MODEL EVALUATION: EVALUATION OF URBAN AND REGIONAL EULERIAN AIR QUALITY MODELS

    EPA Science Inventory

    The "Preface to the Special Edition on Model Evaluation: Evaluation of Urban and Regional Eulerian Air Quality Models" is a brief introduction to the papers included in a special issue of Atmospheric Environment. The Preface provides a background for the papers, which have thei...

  8. Mesoscale Wind Predictions for Wave Model Evaluation

    DTIC Science & Technology

    2016-06-07

    The scanned report documentation page is largely illegible; the recoverable fragment of the abstract describes applying high-resolution (< 10 km) atmospheric wind and surface stress fields, produced by an atmospheric mesoscale data assimilation system, to the numerical prediction of waves.

  9. Evaluating snow models for hydrological applications

    NASA Astrophysics Data System (ADS)

    Jonas, T.; Magnusson, J.; Wever, N.; Essery, R.; Helbig, N.

    2014-12-01

    Much effort has been invested in developing snow models over several decades, resulting in a wide variety of empirical and physically-based snow models. Within the two categories, models are built on the same principles but mainly differ in choices of model simplifications and parameterizations describing individual processes. In this study, we demonstrate an informative method for evaluating a large range of snow model structures for hydrological applications, using an existing multi-model energy-balance framework and data from two well-instrumented sites with a seasonal snow cover. We also include two temperature-index snow models and one physically-based multi-layer snow model in our analyses. Our results show that the ability of models to predict snowpack runoff is strongly related to the agreement of observed and modelled snow water equivalent, whereas no such relationship is present for snow depth or snow surface temperature measurements. For snow water equivalent and runoff, the models seem transferable between our two study sites, a behaviour which is not observed for snow surface temperature predictions due to the site-specificity of turbulent heat transfer formulations. Uncertainties in the input and validation data, rather than model formulation, appear to contribute most to low model performance in some winters. More importantly, we find that model complexity is not a determinant of whether daily snow water equivalent and runoff are predicted reliably, but choosing an appropriate model structure is. Our study shows the usefulness of the multi-model framework for identifying appropriate models under given constraints such as data availability, properties of interest and computational cost.

  10. Performance Evaluation of Dense Gas Dispersion Models.

    NASA Astrophysics Data System (ADS)

    Touma, Jawad S.; Cox, William M.; Thistle, Harold; Zapert, James G.

    1995-03-01

    This paper summarizes the results of a study to evaluate the performance of seven dense gas dispersion models using data from three field experiments. Two models (DEGADIS and SLAB) are in the public domain and the other five (AIRTOX, CHARM, FOCUS, SAFEMODE, and TRACE) are proprietary. The field data used are the Desert Tortoise pressurized ammonia releases, Burro liquefied natural gas spill tests, and the Goldfish anhydrous hydrofluoric acid spill experiments. Desert Tortoise and Goldfish releases were simulated as horizontal jet releases, and Burro as a liquid pool. Performance statistics were used to compare maximum observed concentrations and plume half-width to those predicted by each model. Model performance varied and no model exhibited consistently good performance across all three databases. However, when combined across the three databases, all models performed within a factor of 2. Problems encountered are discussed in order to help future investigators.
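Performance statistics of the kind used in this study are standard in dispersion-model evaluation. A hedged sketch of two common ones, fractional bias and the fraction of predictions within a factor of two of observations (FAC2); the metric names are standard in the literature, and the data are invented:

```python
# Sketch of two standard dispersion-model performance statistics (these
# are textbook definitions, not necessarily the paper's exact measures).

def fractional_bias(obs, pred):
    """2*(mean_obs - mean_pred)/(mean_obs + mean_pred); 0 is unbiased."""
    mo = sum(obs) / len(obs)
    mp = sum(pred) / len(pred)
    return 2.0 * (mo - mp) / (mo + mp)

def fac2(obs, pred):
    """Fraction of predictions within a factor of two of observations."""
    ok = sum(1 for o, p in zip(obs, pred) if 0.5 <= p / o <= 2.0)
    return ok / len(obs)

obs = [1.0, 2.0, 4.0, 8.0]    # observed peak concentrations (invented)
pred = [1.2, 1.5, 9.0, 6.0]   # modelled peaks (invented)
fb = fractional_bias(obs, pred)   # < 0 indicates overprediction on average
frac = fac2(obs, pred)            # 0.75 here: 9.0 vs 4.0 falls outside 2x
```

A statement such as "all models performed within a factor of 2" corresponds to FAC2 near 1 when the data sets are combined.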

  11. Evaluation of a habitat suitability index model

    USGS Publications Warehouse

    Farmer, A.H.; Cade, B.S.; Stauffer, D.F.

    2002-01-01

    We assisted with development of a model for maternity habitat of the Indiana bat (Myotis sodalis), for use in conducting assessments of projects potentially impacting this endangered species. We started with an existing model, modified that model in a workshop, and evaluated the revised model using data previously collected by others. Our analyses showed that higher indices of habitat suitability were associated with sites where Indiana bats were present, and thus the model may be useful for identifying suitable habitat. Utility of the model, however, rested on a single component: density of suitable roost trees. Percentage of landscape in forest did not allow differentiation between sites occupied and not occupied by Indiana bats. Moreover, in spite of a general opinion among workshop participants that bodies of water were highly productive feeding areas and that a diversity of feeding habitats was optimal, we found no evidence to support either hypothesis.

  12. CMAQ Involvement in Air Quality Model Evaluation International Initiative

    EPA Pesticide Factsheets

    Description of Air Quality Model Evaluation International Initiative (AQMEII). Different chemical transport models are applied by different groups over North America and Europe and evaluated against observations.

  13. Evaluation of Usability Utilizing Markov Models

    ERIC Educational Resources Information Center

    Penedo, Janaina Rodrigues; Diniz, Morganna; Ferreira, Simone Bacellar Leal; Silveira, Denis S.; Capra, Eliane

    2012-01-01

    Purpose: The purpose of this paper is to analyze the usability of a remote learning system in its initial development phase, using a quantitative usability evaluation method through Markov models. Design/methodology/approach: The paper opted for an exploratory study. The data of interest of the research correspond to the possible accesses of users…
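The Markov-model style of usability analysis the abstract describes can be illustrated with an absorbing chain over user navigation states, where the fundamental matrix gives the expected number of actions before a task is completed. The states, transition probabilities, and interpretation below are invented for illustration:

```python
import numpy as np

# Hedged sketch of a Markov usability model: states represent where a
# user can be in the remote learning system, and transition probabilities
# come from observed navigation. All states and numbers are invented.

# States: 0 = main menu, 1 = lesson page, 2 = lost/help, 3 = task done
P = np.array([
    [0.1, 0.6, 0.2, 0.1],
    [0.2, 0.3, 0.1, 0.4],
    [0.5, 0.2, 0.2, 0.1],
    [0.0, 0.0, 0.0, 1.0],   # "task done" is absorbing
])

Q = P[:3, :3]                       # transitions among transient states
N = np.linalg.inv(np.eye(3) - Q)    # fundamental matrix of the chain
expected_steps = N.sum(axis=1)      # expected actions until task completion
```

States with a large expected number of remaining actions (here the "lost/help" state) point to where the interface costs users the most effort, which is the kind of quantitative usability signal the method yields.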

  14. Optical Storage Performance Modeling and Evaluation.

    ERIC Educational Resources Information Center

    Behera, Bailochan; Singh, Harpreet

    1990-01-01

    Evaluates different types of storage media for long-term archival storage of large amounts of data. Existing storage media are reviewed, including optical disks, optical tape, magnetic storage, and microfilm; three models are proposed based on document storage requirements; performance analysis is considered; and cost effectiveness is discussed.…

  15. Performance Evaluation Model for Application Layer Firewalls

    PubMed Central

    Yang, Wu; Dong, Hui; Zhang, Jiangchuan

    2016-01-01

    Application layer firewalls protect the trusted area network against information security risks. However, firewall performance may affect user experience. Therefore, performance analysis plays a significant role in the evaluation of application layer firewalls. This paper presents an analytic model of the application layer firewall, based on a system analysis to evaluate the capability of the firewall. In order to enable users to improve the performance of the application layer firewall with limited resources, resource allocation was evaluated to obtain the optimal resource allocation scheme in terms of throughput, delay, and packet loss rate. The proposed model employs the Erlangian queuing model to analyze the performance parameters of the system with regard to the three layers (network, transport, and application layers). Then, the analysis results of all the layers are combined to obtain the overall system performance indicators. A discrete event simulation method was used to evaluate the proposed model. Finally, limited service desk resources were allocated to obtain the values of the performance indicators under different resource allocation scenarios in order to determine the optimal allocation scheme. Under limited resource allocation, this scheme enables users to maximize the performance of the application layer firewall. PMID:27893803

  16. Performance Evaluation Model for Application Layer Firewalls.

    PubMed

    Xuan, Shichang; Yang, Wu; Dong, Hui; Zhang, Jiangchuan

    2016-01-01

    Application layer firewalls protect the trusted area network against information security risks. However, firewall performance may affect user experience. Therefore, performance analysis plays a significant role in the evaluation of application layer firewalls. This paper presents an analytic model of the application layer firewall, based on a system analysis to evaluate the capability of the firewall. In order to enable users to improve the performance of the application layer firewall with limited resources, resource allocation was evaluated to obtain the optimal resource allocation scheme in terms of throughput, delay, and packet loss rate. The proposed model employs the Erlangian queuing model to analyze the performance parameters of the system with regard to the three layers (network, transport, and application layers). Then, the analysis results of all the layers are combined to obtain the overall system performance indicators. A discrete event simulation method was used to evaluate the proposed model. Finally, limited service desk resources were allocated to obtain the values of the performance indicators under different resource allocation scenarios in order to determine the optimal allocation scheme. Under limited resource allocation, this scheme enables users to maximize the performance of the application layer firewall.
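The Erlangian queueing analysis described above can be illustrated with the textbook M/M/c (Erlang-C) result for a pool of service desks; this is a sketch of the general technique under invented parameters, not the paper's layered model:

```python
import math

# Illustrative Erlang-C sketch: given arrival rate lam, per-desk service
# rate mu, and c service desks, compute the probability a request must
# queue and its mean waiting time. Parameters below are invented.

def erlang_c(lam, mu, c):
    a = lam / mu                       # offered load in Erlangs
    rho = a / c                        # per-desk utilisation, must be < 1
    s = sum(a**k / math.factorial(k) for k in range(c))
    top = a**c / (math.factorial(c) * (1 - rho))
    p_wait = top / (s + top)           # probability an arrival queues
    w = p_wait / (c * mu - lam)        # mean waiting time in the queue
    return p_wait, w

p_wait, w = erlang_c(lam=8.0, mu=1.0, c=10)   # 80% utilisation
```

Sweeping `c` under a fixed resource budget and reading off delay and queueing probability is the shape of the resource-allocation comparison the paper describes; adding more desks lowers both.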

  17. Evaluation of help model replacement codes

    SciTech Connect

    Whiteside, Tad; Hang, Thong; Flach, Gregory

    2009-07-01

    This work evaluates the computer codes that are proposed to be used to predict percolation of water through the closure cap and into the waste containment zone at Department of Energy closure sites. It compares the currently used water-balance code (HELP) with newly developed computer codes that use unsaturated flow (Richards' equation). It provides a literature review of the HELP model and the proposed codes, which results in two codes recommended for further evaluation: HYDRUS-2D3D and VADOSE/W. This further evaluation involved performing simulations on a simple model and comparing the results of those simulations with those obtained from the HELP code and with field data. From the results of this work, we conclude that the two new codes perform nearly identically; moving forward, we recommend HYDRUS-2D3D.

  18. Evaluation of a Mysis bioenergetics model

    USGS Publications Warehouse

    Chipps, S.R.; Bennett, D.H.

    2002-01-01

    Direct approaches for estimating the feeding rate of the opossum shrimp Mysis relicta can be hampered by variable gut residence time (evacuation rate models) and non-linear functional responses (clearance rate models). Bioenergetics modeling provides an alternative method, but the reliability of this approach needs to be evaluated using independent measures of growth and food consumption. In this study, we measured growth and food consumption for M. relicta and compared experimental results with those predicted from a Mysis bioenergetics model. For Mysis reared at 10??C, model predictions were not significantly different from observed values. Moreover, decomposition of mean square error indicated that 70% of the variation between model predictions and observed values was attributable to random error. On average, model predictions were within 12% of observed values. A sensitivity analysis revealed that Mysis respiration and prey energy density were the most sensitive parameters affecting model output. By accounting for uncertainty (95% CLs) in Mysis respiration, we observed a significant improvement in the accuracy of model output (within 5% of observed values), illustrating the importance of sensitive input parameters for model performance. These findings help corroborate the Mysis bioenergetics model and demonstrate the usefulness of this approach for estimating Mysis feeding rate.
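The decomposition of mean square error mentioned above partitions the disagreement between predictions and observations into mean, slope, and random components, so that the share attributable to random error (70% in the study) can be reported. A sketch using Theil's partition, with invented data:

```python
# Sketch of a mean-square-error decomposition (Theil's partition into
# mean, slope, and random components). Data below are invented; the
# components are returned as proportions of total MSE and sum to 1.

def mse_decomposition(obs, pred):
    n = len(obs)
    mo = sum(obs) / n
    mp = sum(pred) / n
    mse = sum((p - o) ** 2 for o, p in zip(obs, pred)) / n
    s_oo = sum((o - mo) ** 2 for o in obs) / n
    s_pp = sum((p - mp) ** 2 for p in pred) / n
    s_op = sum((o - mo) * (p - mp) for o, p in zip(obs, pred)) / n
    b = s_op / s_oo                                  # slope of pred on obs
    mean_comp = (mp - mo) ** 2                       # systematic offset
    slope_comp = (b - 1) ** 2 * s_oo                 # systematic slope error
    random_comp = (1 - s_op**2 / (s_oo * s_pp)) * s_pp   # unexplained scatter
    parts = dict(mean=mean_comp, slope=slope_comp, random=random_comp)
    return {k: v / mse for k, v in parts.items()}

obs = [1.0, 2.0, 3.0, 4.0, 5.0]      # observed growth or consumption
pred = [1.2, 2.1, 2.8, 4.3, 5.1]     # model predictions (invented)
parts = mse_decomposition(obs, pred)  # proportions summing to 1
```

A large `random` share, as in the abstract, indicates that remaining disagreement is scatter rather than systematic model bias.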

  19. Evaluation (not validation) of quantitative models.

    PubMed

    Oreskes, N

    1998-12-01

    The present regulatory climate has led to increasing demands for scientists to attest to the predictive reliability of numerical simulation models used to help set public policy, a process frequently referred to as model validation. But while model validation may reveal useful information, this paper argues that it is not possible to demonstrate the predictive reliability of any model of a complex natural system in advance of its actual use. All models embed uncertainties, and these uncertainties can and frequently do undermine predictive reliability. In the case of lead in the environment, we may categorize model uncertainties as theoretical, empirical, parametrical, and temporal. Theoretical uncertainties are aspects of the system that are not fully understood, such as the biokinetic pathways of lead metabolism. Empirical uncertainties are aspects of the system that are difficult (or impossible) to measure, such as actual lead ingestion by an individual child. Parametrical uncertainties arise when complexities in the system are simplified to provide manageable model input, such as representing longitudinal lead exposure by cross-sectional measurements. Temporal uncertainties arise from the assumption that systems are stable in time. A model may also be conceptually flawed. The Ptolemaic system of astronomy is a historical example of a model that was empirically adequate but based on a wrong conceptualization. Yet had it been computerized--and had the word then existed--its users would have had every right to call it validated. Thus, rather than talking about strategies for validation, we should be talking about means of evaluation. That is not to say that language alone will solve our problems or that the problems of model evaluation are primarily linguistic. The uncertainties inherent in large, complex models will not go away simply because we change the way we talk about them. But this is precisely the point: calling a model validated does not make it valid

  20. Evaluation (not validation) of quantitative models.

    PubMed Central

    Oreskes, N

    1998-01-01

    The present regulatory climate has led to increasing demands for scientists to attest to the predictive reliability of numerical simulation models used to help set public policy, a process frequently referred to as model validation. But while model validation may reveal useful information, this paper argues that it is not possible to demonstrate the predictive reliability of any model of a complex natural system in advance of its actual use. All models embed uncertainties, and these uncertainties can and frequently do undermine predictive reliability. In the case of lead in the environment, we may categorize model uncertainties as theoretical, empirical, parametrical, and temporal. Theoretical uncertainties are aspects of the system that are not fully understood, such as the biokinetic pathways of lead metabolism. Empirical uncertainties are aspects of the system that are difficult (or impossible) to measure, such as actual lead ingestion by an individual child. Parametrical uncertainties arise when complexities in the system are simplified to provide manageable model input, such as representing longitudinal lead exposure by cross-sectional measurements. Temporal uncertainties arise from the assumption that systems are stable in time. A model may also be conceptually flawed. The Ptolemaic system of astronomy is a historical example of a model that was empirically adequate but based on a wrong conceptualization. Yet had it been computerized--and had the word then existed--its users would have had every right to call it validated. Thus, rather than talking about strategies for validation, we should be talking about means of evaluation. That is not to say that language alone will solve our problems or that the problems of model evaluation are primarily linguistic. The uncertainties inherent in large, complex models will not go away simply because we change the way we talk about them. But this is precisely the point: calling a model validated does not make it valid

  1. Evaluating computational models of cholesterol metabolism.

    PubMed

    Paalvast, Yared; Kuivenhoven, Jan Albert; Groen, Albert K

    2015-10-01

    Regulation of cholesterol homeostasis has been studied extensively during the last decades. Many of the metabolic pathways involved have been discovered. Yet important gaps in our knowledge remain. For example, knowledge on intracellular cholesterol traffic and its relation to the regulation of cholesterol synthesis and plasma cholesterol levels is incomplete. One way of addressing the remaining questions is by making use of computational models. Here, we critically evaluate existing computational models of cholesterol metabolism that make use of ordinary differential equations, and we address whether they use assumptions and make predictions in line with current knowledge on cholesterol homeostasis. Having studied the results described by the authors, we have also tested their models, primarily by testing the effect of statin treatment in each model. Ten out of eleven models tested made assumptions in line with current knowledge of cholesterol metabolism. Three out of the ten remaining models made correct predictions, i.e., predicted a decrease in plasma total and LDL cholesterol, or increased LDL uptake, upon statin treatment. In conclusion, few models of cholesterol metabolism are able to pass a functional test. Apparently most models have not undergone the critical iterative systems biology cycle of validation. We expect modeling of cholesterol metabolism to go through many more model topologies and iterative cycles, and we welcome the increased understanding of cholesterol metabolism these are likely to bring.

  2. A model evaluation checklist for process-based environmental models

    NASA Astrophysics Data System (ADS)

    Jackson-Blake, Leah

    2015-04-01

    Mechanistic catchment-scale phosphorus models appear to perform poorly where diffuse sources dominate. The reasons for this were investigated for one commonly-applied model, the INtegrated model of CAtchment Phosphorus (INCA-P). Model output was compared to 18 months of daily water quality monitoring data in a small agricultural catchment in Scotland, and model structure, key model processes and internal model responses were examined. Although the model broadly reproduced dissolved phosphorus dynamics, it struggled with particulates. The reasons for poor performance were explored, together with ways in which improvements could be made. The process of critiquing and assessing model performance was then generalised to provide a broadly-applicable model evaluation checklist, incorporating: (1) Calibration challenges, relating to difficulties in thoroughly searching a high-dimensional parameter space and in selecting appropriate means of evaluating model performance. In this study, for example, model simplification was identified as a necessary improvement to reduce the number of parameters requiring calibration, whilst the traditionally-used Nash-Sutcliffe model performance statistic was not able to discriminate between realistic and unrealistic model simulations, and alternative statistics were needed. (2) Data limitations, relating to a lack of (or uncertainty in) input data, data to constrain model parameters, data for model calibration and testing, and data to test internal model processes. In this study, model reliability could be improved by addressing all four kinds of data limitation. For example, there was insufficient surface water monitoring data for model testing against an independent dataset to that used in calibration, whilst additional monitoring of groundwater and effluent phosphorus inputs would help distinguish between alternative plausible model parameterisations. (3) Model structural inadequacies, whereby model structure may inadequately represent
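The Nash-Sutcliffe statistic discussed under calibration challenges is straightforward to compute; a minimal sketch with invented flow data:

```python
# Minimal sketch of the Nash-Sutcliffe efficiency (NSE): 1 is a perfect
# fit, 0 means the model is no better than predicting the observed mean,
# and negative values are worse than the mean. Data below are invented.

def nse(obs, sim):
    mo = sum(obs) / len(obs)
    ss_res = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_tot = sum((o - mo) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

obs = [0.2, 0.5, 3.0, 8.0, 2.0, 0.6]   # observed daily loads (invented)
sim = [0.3, 0.4, 2.5, 7.0, 2.4, 0.5]   # simulated loads (invented)
score = nse(obs, sim)
```

Because the squared residuals are dominated by the largest observations, a high NSE can coexist with unrealistic behaviour elsewhere in the record, which is one way the statistic can fail to discriminate between realistic and unrealistic simulations.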

  3. Evaluating model accuracy for model-based reasoning

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Roden, Joseph

    1992-01-01

    Described here is an approach to automatically assessing the accuracy of various components of a model. In this approach, actual data from the operation of a target system is used to drive statistical measures to evaluate the prediction accuracy of various portions of the model. We describe how these statistical measures of model accuracy can be used in model-based reasoning for monitoring and design. We then describe the application of these techniques to the monitoring and design of the water recovery system of the Environmental Control and Life Support System (ECLSS) of Space Station Freedom.

  4. Two models for evaluating landslide hazards

    USGS Publications Warehouse

    Davis, J.C.; Chung, C.-J.; Ohlmacher, G.C.

    2006-01-01

    Two alternative procedures for estimating landslide hazards were evaluated using data on topographic digital elevation models (DEMs) and bedrock lithologies in an area adjacent to the Missouri River in Atchison County, Kansas, USA. The two procedures are based on the likelihood ratio model but utilize different assumptions. The empirical likelihood ratio model is based on non-parametric empirical univariate frequency distribution functions under an assumption of conditional independence, while the multivariate logistic discriminant model assumes that likelihood ratios can be expressed in terms of logistic functions. The relative hazards of occurrence of landslides were estimated by an empirical likelihood ratio model and by multivariate logistic discriminant analysis. Predictor variables consisted of grids containing topographic elevations, slope angles, and slope aspects calculated from a 30-m DEM. An integer grid of coded bedrock lithologies taken from digitized geologic maps was also used as a predictor variable. Both statistical models yield relative estimates in the form of the proportion of total map area predicted to already contain or to be the site of future landslides. The stabilities of estimates were checked by cross-validation of results from random subsamples, using each of the two procedures. Cell-by-cell comparisons of hazard maps made by the two models show that the two sets of estimates are virtually identical. This suggests that the empirical likelihood ratio and the logistic discriminant analysis models are robust with respect to the conditional independence assumption and the logistic function assumption, respectively, and that either model can be used successfully to evaluate landslide hazards. © 2006.
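The empirical likelihood ratio idea can be sketched as per-bin frequency ratios between landslide and stable cells, combined (under the conditional independence assumption) by summing log-ratios across predictor grids. The bins, cell counts, and smoothing choice below are invented for illustration:

```python
import math

# Illustrative sketch of an empirical likelihood ratio model: for each
# predictor bin, estimate P(bin | landslide) / P(bin | stable); under
# conditional independence, predictors combine by summing log-ratios.
# All data and the add-one smoothing are invented for this example.

def likelihood_ratios(values, labels, bins):
    """Per-bin ratio P(bin | slide) / P(bin | stable), add-one smoothed."""
    slide = [0] * bins
    stable = [0] * bins
    for v, y in zip(values, labels):
        (slide if y else stable)[v] += 1
    ns, nt = sum(slide) + bins, sum(stable) + bins
    return [((slide[b] + 1) / ns) / ((stable[b] + 1) / nt) for b in range(bins)]

# Binned slope classes (0=gentle, 1=moderate, 2=steep) for mapped cells
slope_bin = [0, 0, 0, 1, 1, 2, 2, 2, 1, 0, 2, 2]
is_slide  = [0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0]
lr = likelihood_ratios(slope_bin, is_slide, bins=3)

# Relative hazard score for a steep cell: sum of log likelihood ratios
# over predictors (a single slope predictor in this toy example).
score = math.log(lr[2])
```

The logistic discriminant alternative fits the same log-ratio as a logistic function of the predictors instead of estimating it bin by bin, which is why the two sets of estimates can be compared cell by cell.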

  5. CTBT integrated verification system evaluation model supplement

    SciTech Connect

    EDENBURN,MICHAEL W.; BUNTING,MARCUS; PAYNE JR.,ARTHUR C.; TROST,LAWRENCE C.

    2000-03-02

    Sandia National Laboratories has developed a computer-based model called IVSEM (Integrated Verification System Evaluation Model) to estimate the performance of a nuclear detonation monitoring system. The IVSEM project was initiated in June 1994 by Sandia's Monitoring Systems and Technology Center and has been funded by the U.S. Department of Energy's Office of Nonproliferation and National Security (DOE/NN). IVSEM is a simple, "top-level" modeling tool which estimates the performance of a Comprehensive Nuclear Test Ban Treaty (CTBT) monitoring system and can help explore the impact of various sensor system concepts and technology advancements on CTBT monitoring. One of IVSEM's unique features is that it integrates results from the various CTBT sensor technologies (seismic, infrasound, radionuclide, and hydroacoustic) and allows the user to investigate synergy among the technologies. Specifically, IVSEM estimates the detection effectiveness (probability of detection), location accuracy, and identification capability of the integrated system and of each technology subsystem individually. The model attempts to accurately estimate the monitoring system's performance at medium interfaces (air-land, air-water) and for some evasive testing methods such as seismic decoupling. The original IVSEM report, CTBT Integrated Verification System Evaluation Model, SAND97-2518, described version 1.2 of IVSEM. This report describes the changes made to IVSEM version 1.2 and the addition of identification capability estimates that have been incorporated into IVSEM version 2.0.

  6. An Evaluation of Software Cost Estimating Models.

    DTIC Science & Technology

    1981-06-01

    The scanned report documentation page is largely illegible; recoverable details identify the report as an evaluation of software cost estimating models covering September 1973 to October 1979, authored by Robert Thibodeau. Legible abstract fragments: "...review of the draft DCP begins, the program can be terminated with the approval of the highest command level which authorized it. Once DSARC review begins ... in concert with many other elements. Initially, we might speak of the navigation subsystem and its functions. Later, we would describe the alignment element..."

  7. Evaluation of Community Land Model Hydrologic Predictions

    NASA Astrophysics Data System (ADS)

    Li, K. Y.; Lettenmaier, D. P.; Bohn, T.; Delire, C.

    2005-12-01

    Confidence in representation and parameterization of land surface processes in coupled land-atmosphere models is strongly dependent on a diversity of opportunities for model testing, since such coupled models are usually intended for application in a wide range of conditions (regional models) or globally. Land surface models have been increasing in complexity over the past decade, which has increased the demands on data sets appropriate for model testing and evaluation. In this study, we compare the performance of two commonly used land surface schemes - the Variable Infiltration Capacity (VIC) and Community Land Model (CLM) with respect to their ability to reproduce observed water and energy fluxes in off-line tests for two large river basins with contrasting hydroclimatic conditions spanning the range from temperate continental to arctic, and for five point (column flux) sites spanning the range from tropical to arctic. The two large river basins are the Arkansas-Red in U.S. southern Great Plains, and the Torne-Kalix in northern Scandinavia. The column flux evaluations are for a tropical forest site at Reserva Jaru (ABRACOS) in Brazil, a prairie site (FIFE) near Manhattan, Kansas in the central U.S., a soybean site at Caumont (HAPEX-Monbilhy) in France, a meadow site at Cabauw in the Netherlands, and a small grassland catchment at Valday, Russia. The results indicate that VIC can reasonably well capture the land surface biophysical processes, while CLM is somewhat less successful. We suggest changes to the CLM parameterizations that would improve its general performance with respect to its representation of land surface hydrologic processes.

  8. Training Module on the Evaluation of Best Modeling Practices

    EPA Pesticide Factsheets

    Building upon the fundamental concepts outlined in previous modules, the objectives of this module are to explore the topic of model evaluation and identify the 'best modeling practices' and strategies for the Evaluation Stage of the model life-cycle.

  9. User's appraisal of yield model evaluation criteria

    NASA Technical Reports Server (NTRS)

    Warren, F. B. (Principal Investigator)

    1982-01-01

    The five major potential USDA users of AgRISTARS crop yield forecast models rated the Yield Model Development (YMD) project Test and Evaluation Criteria by the importance placed on them. These users agreed that the "TIMELINESS" and "RELIABILITY" of the forecast yields would be of major importance in determining whether a proposed yield model was worthy of adoption. Although there was considerable difference of opinion as to the relative importance of the other criteria, "COST", "OBJECTIVITY", "ADEQUACY", and "MEASURES OF ACCURACY" generally were felt to be more important than "SIMPLICITY" and "CONSISTENCY WITH SCIENTIFIC KNOWLEDGE". However, some of the comments which accompanied the ratings did indicate that several of the definitions and descriptions of the criteria were confusing.

  10. Evaluation of a mallard productivity model

    USGS Publications Warehouse

    Johnson, D.H.; Cowardin, L.M.; Sparling, D.W.; Verner, J.; Morrison, L.M.; Ralph, C.J.

    1986-01-01

    A stochastic model of mallard (Anas platyrhynchos) productivity has been developed over a 10-year period and successfully applied to several management questions. Here we review the model and describe some recent uses and improvements that increase its realism and applicability, including naturally occurring changes in wetland habitat, catastrophic weather events, and the migrational homing of mallards. The amount of wetland habitat influenced productivity primarily by affecting the renesting rate. Late snowstorms severely reduced productivity, whereas the loss of nests due to flooding was largely compensated for by increased renesting, often in habitats where hatching rates were better. Migrational homing was shown to be an important phenomenon in population modeling and should be considered when evaluating management plans.

  11. Hazardous gas model evaluation with field observations

    NASA Astrophysics Data System (ADS)

    Hanna, S. R.; Chang, J. C.; Strimaitis, D. G.

    Fifteen hazardous gas models were evaluated using data from eight field experiments. The models include seven publicly available models (AFTOX, DEGADIS, HEGADAS, HGSYSTEM, INPUFF, OB/DG and SLAB), six proprietary models (AIRTOX, CHARM, FOCUS, GASTAR, PHAST and TRACE), and two "benchmark" analytical models (the Gaussian Plume Model and the analytical approximations to the Britter and McQuaid Workbook nomograms). The field data were divided into three groups—continuous dense gas releases (Burro LNG, Coyote LNG, Desert Tortoise NH 3-gas and aerosols, Goldfish HF-gas and aerosols, and Maplin Sands LNG), continuous passive gas releases (Prairie Grass and Hanford), and instantaneous dense gas releases (Thorney Island freon). The dense gas models that produced the most consistent predictions of plume centerline concentrations across the dense gas data sets are the Britter and McQuaid, CHARM, GASTAR, HEGADAS, HGSYSTEM, PHAST, SLAB and TRACE models, with relative mean biases of about ±30% or less and magnitudes of relative scatter that are about equal to the mean. The dense gas models tended to overpredict the plume widths and underpredict the plume depths by about a factor of two. All models except GASTAR, TRACE, and the area source version of DEGADIS perform fairly well with the continuous passive gas data sets. Some sensitivity studies were also carried out. It was found that three of the more widely used publicly-available dense gas models (DEGADIS, HGSYSTEM and SLAB) predicted increases in concentration of about 70% as roughness length decreased by an order of magnitude for the Desert Tortoise and Goldfish field studies. It was also found that none of the dense gas models that were considered came close to simulating the observed factor of two increase in peak concentrations as averaging time decreased from several minutes to 1 s. Because of their assumption that a concentrated dense gas core existed that was unaffected by variations in averaging time, the dense gas

  12. [Systemic-psychodynamic model for family evaluation].

    PubMed

    Salinas, J L; Pérez, M P; Viniegra, L; Armando Barriguete, J; Casillas, J; Valencia, A

    1992-01-01

    In this paper, a family evaluation instrument called the systemic-psychodynamic family evaluation model is described, along with the second stage of the instrument's validation study, which deals with inter-observer variation. Twenty families were studied. They were always assessed by the same interviewers, designated as experts; all are family therapy specialists, and their assessment was used as the evaluation reference standard or "gold standard". The observers were psychiatrists without previous training in family therapy. For the purpose of the interview, both experts and observers were blind to the patients' medical diagnoses. During the first stage of the validation study the observers did not have a reference guide, which resulted in a low concordance rating. For the second stage, a 177-item guide was used and a considerable increase in the concordance rating was observed. Validation studies like this one are of considerable value in increasing the reliability and further utilisation of evaluation instruments of this type.

  13. A pesticide emission model (PEM) Part II: model evaluation

    NASA Astrophysics Data System (ADS)

    Scholtz, M. T.; Voldner, E.; Van Heyst, B. J.; McMillan, A. C.; Pattey, E.

    In the first part of this paper, the development of a numerical pesticide emission model (PEM) is described for predicting the volatilization of pesticides applied to agricultural soils and crops through soil incorporation, surface spraying, or in the furrow at the time of planting. In this paper, the results of three steps toward the evaluation of PEM are reported. The evaluation involves: (i) verifying the numerical algorithms and computer code through comparison of PEM simulations with an available analytical solution of the advection/diffusion equation for semi-volatile solutes in soil; (ii) comparing hourly heat, moisture and emission fluxes of trifluralin and triallate modeled by PEM with fluxes measured using the relaxed eddy-accumulation technique; and (iii) comparing the PEM predictions of persistence half-life for 29 pesticides with the ranges of persistence found in the literature. The overall conclusion from this limited evaluation study is that PEM is a useful model for estimating the volatilization rates of pesticides from agricultural soils and crops. The lack of reliable estimates of the chemical and photochemical degradation rates of pesticides on foliage, however, introduces large uncertainties into the estimates from any model of the volatilization of pesticide that impacts the canopy.

  14. Evaluating conflation methods using uncertainty modeling

    NASA Astrophysics Data System (ADS)

    Doucette, Peter; Dolloff, John; Canavosio-Zuzelski, Roberto; Lenihan, Michael; Motsko, Dennis

    2013-05-01

    The classic problem of computer-assisted conflation involves the matching of individual features (e.g., point, polyline, or polygon vectors) as stored in a geographic information system (GIS), between two different sets (layers) of features. The classical goal of conflation is the transfer of feature metadata (attributes) from one layer to another. The age of free public and open source geospatial feature data has significantly increased the opportunity to conflate such data to create enhanced products. There are currently several spatial conflation tools in the marketplace with varying degrees of automation. An ability to evaluate conflation tool performance quantitatively is of operational value, although manual truthing of matched features is laborious and costly. In this paper, we present a novel methodology that uses spatial uncertainty modeling to simulate realistic feature layers to streamline evaluation of feature matching performance for conflation methods. Performance results are compiled for DCGIS street centerline features.

  15. Pasteur Institute of Iran- An Evaluation Model

    PubMed Central

    Dejman, Masoumeh; Habibi, Elham; Baradarn Eftekhari, Monir; Falahat, Katayoun; Malekafzali, Hossein

    2014-01-01

    Background: Pasteur Institute of Iran was established in 1919 with the aim of producing vaccines and preventing communicable diseases in Iran. Over time, its activities extended into the areas of research, education and services. Naturally, such a vast development calls for the establishment of a comprehensive management and monitoring system. With this outlook, the present study was carried out with the aim of designing a performance assessment model for Pasteur Institute of Iran that, in addition to determining evaluation indicators, could prepare the necessary grounds for a unified assessment model for the global network of Pasteur Institutes. Method: This study was designed and performed in 4 stages: first, design of indicators and determination of their scores; second, editing of indicators according to the outcome of discussions and debates held with members of the Research Council of Pasteur Institute of Iran; third, implementation of a pilot model based on the Institute’s activities in 2011; fourth, providing the pilot model feedback to the stakeholders and finalizing the model according to an opinion survey. Results: Based on the results obtained, the indicators developed for the evaluation of Pasteur Institute of Iran were designed in 10 axes and 18 sub-axes, which included 101 major and 58 minor indicators. The axes included governance and leadership, resources and facilities, capacity building, knowledge production and collaborations, reference services, economic value of products and services, participation in industrial exhibitions, status of the institute, satisfaction, and the institute’s role in health promotion. Conclusion: The indicators presented in this article have been prepared based on balance among the Institute’s four missions, to provide the basis for assessment of the Institute’s activities in consecutive years and the possibility of comparison with other institutes worldwide. PMID:24842146

  16. Brief Lags in Interrupted Sequential Performance: Evaluating a Model and Model Evaluation Method

    DTIC Science & Technology

    2015-01-05

    Keywords: task interruption; sequence errors; cognitive modeling; goodness-of-fit testing. Abstract: We examined effects of adding brief (1 second) lags... and we incorporate a rehearsal mechanism in the model. To evaluate the model we developed a simple new goodness-of-fit test based on analysis of variance that offers an inferential basis for...

  17. Evaluation of models of waste glass durability

    SciTech Connect

    Ellison, A.

    1995-08-01

    The main variable under the control of the waste glass producer is the composition of the glass; thus a need exists to establish functional relationships between the composition of a waste glass and measures of processability, product consistency, and durability. Many years of research show that the structure and properties of a glass depend on its composition, so it seems reasonable to assume that there is also a relationship between the composition of a waste glass and its resistance to attack by an aqueous solution. Several models have been developed to describe this dependence, and an evaluation of their predictive capabilities is the subject of this paper. The objective is to determine whether any of these models describes the "correct" functional relationship between composition and corrosion rate. A more thorough treatment of the relationships between glass composition and durability has been presented elsewhere, and the reader is encouraged to consult it for a more detailed discussion. The models examined in this study are the free energy of hydration model, developed at the Savannah River Laboratory; the structural bond strength model, developed at the Vitreous State Laboratory at the Catholic University of America; and the Composition Variation Study, developed at Pacific Northwest Laboratory.

  18. Data Assimilation and Model Evaluation Experiment Datasets.

    NASA Astrophysics Data System (ADS)

    Lai, Chung-Chieng A.; Qian, Wen; Glenn, Scott M.

    1994-05-01

    The Institute for Naval Oceanography, in cooperation with Naval Research Laboratories and universities, executed the Data Assimilation and Model Evaluation Experiment (DAMEE) for the Gulf Stream region during fiscal years 1991-1993. Enormous effort has gone into the preparation of several high-quality and consistent datasets for model initialization and verification. This paper describes the preparation process, the temporal and spatial scopes, the contents, the structure, etc., of these datasets. The goal of DAMEE and the need of data for the four phases of the experiment are briefly stated. The preparation of DAMEE datasets consisted of a series of processes: 1) collection of observational data; 2) analysis and interpretation; 3) interpolation using the Optimum Thermal Interpolation System package; 4) quality control and re-analysis; and 5) data archiving and software documentation. The data products from these processes included a time series of 3D fields of temperature and salinity, 2D fields of surface dynamic height and mixed-layer depth, analysis of the Gulf Stream and rings system, and bathythermograph profiles. To date, these are the most detailed and high-quality data for mesoscale ocean modeling, data assimilation, and forecasting research. Feedback from ocean modeling groups who tested this data was incorporated into its refinement. Suggestions for DAMEE data usages include 1) ocean modeling and data assimilation studies, 2) diagnosis and theoretical studies, and 3) comparisons with locally detailed observations.

  19. Data assimilation and model evaluation experiment datasets

    NASA Technical Reports Server (NTRS)

    Lai, Chung-Cheng A.; Qian, Wen; Glenn, Scott M.

    1994-01-01

    The Institute for Naval Oceanography, in cooperation with Naval Research Laboratories and universities, executed the Data Assimilation and Model Evaluation Experiment (DAMEE) for the Gulf Stream region during fiscal years 1991-1993. Enormous effort has gone into the preparation of several high-quality and consistent datasets for model initialization and verification. This paper describes the preparation process, the temporal and spatial scopes, the contents, the structure, etc., of these datasets. The goal of DAMEE and the need of data for the four phases of experiment are briefly stated. The preparation of DAMEE datasets consisted of a series of processes: (1) collection of observational data; (2) analysis and interpretation; (3) interpolation using the Optimum Thermal Interpolation System package; (4) quality control and re-analysis; and (5) data archiving and software documentation. The data products from these processes included a time series of 3D fields of temperature and salinity, 2D fields of surface dynamic height and mixed-layer depth, analysis of the Gulf Stream and rings system, and bathythermograph profiles. To date, these are the most detailed and high-quality data for mesoscale ocean modeling, data assimilation, and forecasting research. Feedback from ocean modeling groups who tested this data was incorporated into its refinement. Suggestions for DAMEE data usages include (1) ocean modeling and data assimilation studies, (2) diagnosis and theoretical studies, and (3) comparisons with locally detailed observations.

  20. Acceptance criteria for urban dispersion model evaluation

    NASA Astrophysics Data System (ADS)

    Hanna, Steven; Chang, Joseph

    2012-05-01

    The authors suggested acceptance criteria for rural dispersion models' performance measures in this journal in 2004. The current paper suggests modified values of acceptance criteria for urban applications and tests them with tracer data from four urban field experiments. For the arc-maximum concentrations, the fractional bias should have a magnitude <0.67 (i.e., the relative mean bias is less than a factor of 2); the normalized mean-square error should be <6 (i.e., the random scatter is less than about 2.4 times the mean); and the fraction of predictions that are within a factor of two of the observations (FAC2) should be >0.3. For all data paired in space, for which a threshold concentration must always be defined, the normalized absolute difference should be <0.50, when the threshold is three times the instrument's limit of quantification (LOQ). An overall criterion is then applied that the total set of acceptance criteria should be satisfied in at least half of the field experiments. These acceptance criteria are applied to evaluations of the US Department of Defense's Joint Effects Model (JEM) with tracer data from US urban field experiments in Salt Lake City (U2000), Oklahoma City (JU2003), and Manhattan (MSG05 and MID05). JEM includes the SCIPUFF dispersion model with the urban canopy option and the urban dispersion model (UDM) option. In each set of evaluations, three or four likely options are tested for meteorological inputs (e.g., a local building top wind speed, the closest National Weather Service airport observations, or outputs from numerical weather prediction models). It is found that, due to large natural variability in the urban data, there is not a large difference between the performance measures for the two model options and the three or four meteorological input options. The more detailed UDM and the state-of-the-art numerical weather models do provide a slight improvement over the other options. 
The proposed urban dispersion model acceptance
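    As a rough sketch, the arc-maximum acceptance criteria listed above can be turned into a mechanical check. The metric definitions below follow the standard air-quality forms of FB, NMSE and FAC2; the tracer concentrations are hypothetical, and the paired-in-space NAD criterion is omitted for brevity:

```python
def fractional_bias(obs, pred):
    """FB = 2 * (mean(Co) - mean(Cp)) / (mean(Co) + mean(Cp))."""
    mo, mp = sum(obs) / len(obs), sum(pred) / len(pred)
    return 2.0 * (mo - mp) / (mo + mp)

def nmse(obs, pred):
    """Normalized mean-square error: mean((Co - Cp)^2) / (mean(Co) * mean(Cp))."""
    mo, mp = sum(obs) / len(obs), sum(pred) / len(pred)
    mse = sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs)
    return mse / (mo * mp)

def fac2(obs, pred):
    """Fraction of predictions within a factor of two of the observations."""
    hits = sum(1 for o, p in zip(obs, pred) if 0.5 <= p / o <= 2.0)
    return hits / len(obs)

def meets_urban_criteria(obs, pred):
    """Apply the arc-maximum thresholds quoted in the abstract:
    |FB| < 0.67, NMSE < 6, FAC2 > 0.3."""
    return (abs(fractional_bias(obs, pred)) < 0.67
            and nmse(obs, pred) < 6.0
            and fac2(obs, pred) > 0.3)

# Hypothetical arc-maximum concentrations from one field experiment
obs = [100.0, 50.0, 20.0, 10.0]
pred = [80.0, 60.0, 30.0, 4.0]
print(meets_urban_criteria(obs, pred))
```

    Under the overall criterion, a check like this would then be required to pass in at least half of the field experiments.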

  1. Modelling approaches for evaluating multiscale tendon mechanics

    PubMed Central

    Fang, Fei; Lake, Spencer P.

    2016-01-01

    Tendon exhibits anisotropic, inhomogeneous and viscoelastic mechanical properties that are determined by its complicated hierarchical structure and varying amounts/organization of different tissue constituents. Although extensive research has been conducted to use modelling approaches to interpret tendon structure–function relationships in combination with experimental data, many issues remain unclear (i.e. the role of minor components such as decorin, aggrecan and elastin), and the integration of mechanical analysis across different length scales has not been well applied to explore stress or strain transfer from macro- to microscale. This review outlines mathematical and computational models that have been used to understand tendon mechanics at different scales of the hierarchical organization. Model representations at the molecular, fibril and tissue levels are discussed, including formulations that follow phenomenological and microstructural approaches (which include evaluations of crimp, helical structure and the interaction between collagen fibrils and proteoglycans). Multiscale modelling approaches incorporating tendon features are suggested to be an advantageous methodology to understand further the physiological mechanical response of tendon and corresponding adaptation of properties owing to unique in vivo loading environments. PMID:26855747

  2. Two criteria for evaluating risk prediction models.

    PubMed

    Pfeiffer, R M; Gail, M H

    2011-09-01

    We propose and study two criteria to assess the usefulness of models that predict risk of disease incidence for screening and prevention, or the usefulness of prognostic models for management following disease diagnosis. The first criterion, the proportion of cases followed PCF (q), is the proportion of individuals who will develop disease who are included in the proportion q of individuals in the population at highest risk. The second criterion is the proportion needed to follow-up, PNF (p), namely the proportion of the general population at highest risk that one needs to follow in order that a proportion p of those destined to become cases will be followed. PCF (q) assesses the effectiveness of a program that follows 100q% of the population at highest risk. PNF (p) assesses the feasibility of covering 100p% of cases by indicating how much of the population at highest risk must be followed. We show the relationship of those two criteria to the Lorenz curve and its inverse, and present distribution theory for estimates of PCF and PNF. We develop new methods, based on influence functions, for inference for a single risk model, and also for comparing the PCFs and PNFs of two risk models, both of which were evaluated in the same validation data.
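    PCF(q) and PNF(p) as defined above can be estimated empirically from a validation sample of risk scores and case indicators. A minimal sketch, with hypothetical data and no tie handling or inference (the paper's influence-function machinery is not reproduced):

```python
def pcf(risk, case, q):
    """Proportion of cases followed, PCF(q): fraction of eventual cases
    captured by following the proportion q of the population at highest risk."""
    n = len(risk)
    order = sorted(range(n), key=lambda i: risk[i], reverse=True)
    k = max(1, round(q * n))
    total_cases = sum(case)
    return sum(case[i] for i in order[:k]) / total_cases

def pnf(risk, case, p):
    """Proportion needed to follow, PNF(p): smallest fraction of the
    population, taken from the top of the risk distribution, that
    captures at least the proportion p of cases."""
    n = len(risk)
    order = sorted(range(n), key=lambda i: risk[i], reverse=True)
    total_cases = sum(case)
    captured = 0
    for rank, i in enumerate(order, start=1):
        captured += case[i]
        if captured >= p * total_cases:
            return rank / n
    return 1.0

# Hypothetical risk scores and eventual case indicators
risk = [0.9, 0.8, 0.4, 0.3, 0.2, 0.1]
case = [1,   1,   0,   1,   0,   0]
print(pcf(risk, case, 1 / 3))  # follow the top third of the population
print(pnf(risk, case, 1.0))    # fraction needed to capture every case
```

    The sorted-by-risk traversal is exactly how the Lorenz-curve connection arises: pcf traces the curve at a fixed abscissa, pnf inverts it at a fixed ordinate.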

  3. CTBT Integrated Verification System Evaluation Model

    SciTech Connect

    Edenburn, M.W.; Bunting, M.L.; Payne, A.C. Jr.

    1997-10-01

    Sandia National Laboratories has developed a computer-based model called IVSEM (Integrated Verification System Evaluation Model) to estimate the performance of a nuclear detonation monitoring system. The IVSEM project was initiated in June 1994 by Sandia's Monitoring Systems and Technology Center and has been funded by the US Department of Energy's Office of Nonproliferation and National Security (DOE/NN). IVSEM is a simple, top-level modeling tool which estimates the performance of a Comprehensive Nuclear Test Ban Treaty (CTBT) monitoring system and can help explore the impact of various sensor system concepts and technology advancements on CTBT monitoring. One of IVSEM's unique features is that it integrates results from the various CTBT sensor technologies (seismic, infrasound, radionuclide, and hydroacoustic) and allows the user to investigate synergy among the technologies. Specifically, IVSEM estimates the detection effectiveness (probability of detection) and location accuracy of the integrated system and of each technology subsystem individually. The model attempts to accurately estimate the monitoring system's performance at medium interfaces (air-land, air-water) and for some evasive testing methods such as seismic decoupling. This report describes version 1.2 of IVSEM.
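    The abstract does not state how IVSEM itself combines the subsystem results. A common first-order approach, sketched below under the assumption of independent subsystems, combines per-technology detection probabilities as P = 1 - prod(1 - p_i); the per-technology values are hypothetical:

```python
def combined_pd(probabilities):
    """Probability that at least one subsystem detects the event.

    Assumes the subsystems detect independently; this is an assumption of
    this sketch, not a claim about IVSEM's internal model.
    """
    miss = 1.0
    for p in probabilities:
        miss *= (1.0 - p)
    return 1.0 - miss

# Hypothetical per-technology detection probabilities for one event
subsystems = {"seismic": 0.70, "infrasound": 0.40,
              "radionuclide": 0.25, "hydroacoustic": 0.10}
print(combined_pd(subsystems.values()))
```

    This already exhibits the synergy the abstract mentions: the integrated probability exceeds that of any single technology.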

  4. An evaluation framework for participatory modelling

    NASA Astrophysics Data System (ADS)

    Krueger, T.; Inman, A.; Chilvers, J.

    2012-04-01

    Strong arguments for participatory modelling in hydrology can be made on substantive, instrumental and normative grounds. These arguments have led to increasingly diverse groups of stakeholders (here anyone affecting or affected by an issue) getting involved in hydrological research and the management of water resources. In fact, participation has become a requirement of many research grants, programs, plans and policies. However, evidence of beneficial outcomes of participation as suggested by the arguments is difficult to generate and therefore rare. This is because outcomes are diverse, distributed, often tacit, and take time to emerge. In this paper we develop an evaluation framework for participatory modelling focussed on learning outcomes. Learning encompasses many of the potential benefits of participation, such as better models through diversity of knowledge and scrutiny, stakeholder empowerment, greater trust in models and ownership of subsequent decisions, individual moral development, reflexivity, relationships, social capital, institutional change, resilience and sustainability. Based on the theories of experiential, transformative and social learning, complemented by practitioner experience our framework examines if, when and how learning has occurred. Special emphasis is placed on the role of models as learning catalysts. We map the distribution of learning between stakeholders, scientists (as a subgroup of stakeholders) and models. And we analyse what type of learning has occurred: instrumental learning (broadly cognitive enhancement) and/or communicative learning (change in interpreting meanings, intentions and values associated with actions and activities; group dynamics). 
We demonstrate how our framework can be translated into a questionnaire-based survey conducted with stakeholders and scientists at key stages of the participatory process, and show preliminary insights from applying the framework within a rural pollution management situation in

  5. Performance Evaluation Modeling of Network Sensors

    NASA Technical Reports Server (NTRS)

    Clare, Loren P.; Jennings, Esther H.; Gao, Jay L.

    2003-01-01

    Substantial benefits are promised by operating many spatially separated sensors collectively. Such systems are envisioned to consist of sensor nodes that are connected by a communications network. A simulation tool is being developed to evaluate the performance of networked sensor systems, incorporating such metrics as target detection probabilities, false alarm rates, and classification confusion probabilities. The tool will be used to determine configuration impacts associated with such aspects as spatial laydown, mixture of different types of sensors (acoustic, seismic, imaging, magnetic, RF, etc.), and fusion architecture. The QualNet discrete-event simulation environment serves as the underlying basis for model development and execution. This platform is recognized for its capabilities in efficiently simulating networking among mobile entities that communicate via wireless media. We are extending QualNet's communications modeling constructs to capture the sensing aspects of multi-target sensing (analogous to multiple access communications), unimodal multi-sensing (broadcast), and multi-modal sensing (multiple channels and correlated transmissions). Methods are also being developed for modeling the sensor signal sources (transmitters), signal propagation through the media, and sensors (receivers) that are consistent with the discrete event paradigm needed for performance determination of sensor network systems. This work is supported under the Microsensors Technical Area of the Army Research Laboratory (ARL) Advanced Sensors Collaborative Technology Alliance.

  6. Biological Modeling As A Method for Data Evaluation and ...

    EPA Pesticide Factsheets

    Biological Models, evaluating consistency of data and integrating diverse data, examples of pharmacokinetics and response and pharmacodynamics

  7. A Model for the Evaluation of Educational Products.

    ERIC Educational Resources Information Center

    Bertram, Charles L.

    A model for the evaluation of educational products based on experience with development of three such products is described. The purpose of the evaluation model is to indicate the flow of evaluation activity as products undergo development. Evaluation is given Stufflebeam's definition as the process of delineating, obtaining, and providing useful…

  8. Moisture evaluation by dynamic thermography data modeling

    NASA Astrophysics Data System (ADS)

    Bison, Paolo G.; Grinzato, Ermanno G.; Marinetti, Sergio

    1994-03-01

    This paper discusses the design of a nondestructive method for in situ detection of moistened areas in buildings and the evaluation of the water content in porous materials by thermographic analysis. Using a heat transfer model to interpret the data improves measurement accuracy by taking into account the actual boundary conditions. The relative increase in computation time is balanced by the additional advantage of optimizing the testing procedure for different objects by simulating the heat transfer. Experimental results on bricks used in building restoration activities are discussed. The water content measured under different hygrometric conditions is compared with known values. A correction to the absorptivity coefficient, dependent on water content, is introduced.

  9. ZATPAC: a model consortium evaluates teen programs.

    PubMed

    Owen, Kathryn; Murphy, Dana; Parsons, Chris

    2009-09-01

    How do we advance the environmental literacy of young people, support the next generation of environmental stewards and increase the diversity of the leadership of zoos and aquariums? We believe it is through ongoing evaluation of zoo and aquarium teen programming and have founded a consortium to pursue those goals. The Zoo and Aquarium Teen Program Assessment Consortium (ZATPAC) is an initiative by six of the nation's leading zoos and aquariums to strengthen institutional evaluation capacity, model a collaborative approach toward assessing the impact of youth programs, and bring additional rigor to evaluation efforts within the field of informal science education. Since its beginning in 2004, ZATPAC has researched, developed, pilot-tested and implemented a pre-post program survey instrument designed to assess teens' knowledge of environmental issues, skills and abilities to take conservation actions, self-efficacy in environmental actions, and engagement in environmentally responsible behaviors. Findings from this survey indicate that teens who join zoo/aquarium programs are already actively engaged in many conservation behaviors. After participating in the programs, teens showed a statistically significant increase in their reported knowledge of conservation and environmental issues and their abilities to research, explain, and find resources to take action on conservation issues of personal concern. Teens also showed statistically significant increases pre-program to post-program for various conservation behaviors, including "I talk with my family and/or friends about things they can do to help the animals or the environment," "I save water...," "I save energy...," "When I am shopping I look for recycled products," and "I help with projects that restore wildlife habitat."

  10. A Model for Evaluating Student Clinical Psychomotor Skills.

    ERIC Educational Resources Information Center

    Fiel, Nicholas J.; And Others

    1979-01-01

    A long-range plan to evaluate medical students' physical examination skills was undertaken at the Ingham Family Medical Clinic at Michigan State University. The development of the psychomotor skills evaluation model to evaluate the skill of blood pressure measurement, tests of the model's reliability, and the use of the model are described. (JMD)

  11. Evaluation of Mesoscale Model Phenomenological Verification Techniques

    NASA Technical Reports Server (NTRS)

    Lambert, Winifred

    2006-01-01

    Forecasters at the Spaceflight Meteorology Group, 45th Weather Squadron, and National Weather Service in Melbourne, FL use mesoscale numerical weather prediction model output in creating their operational forecasts. These models aid in forecasting weather phenomena that could compromise the safety of launch, landing, and daily ground operations, and must produce reasonable weather forecasts in order for their output to be useful in operations. Considering the importance of model forecasts to operations, their accuracy in forecasting critical weather phenomena must be verified to determine their usefulness. The currently used traditional verification techniques involve an objective point-by-point comparison of model output and observations valid at the same time and location. The resulting statistics can unfairly penalize high-resolution models that make realistic forecasts of certain phenomena but are offset from the observations by small time and/or space increments. Manual subjective verification can provide a more valid representation of model performance, but is time-consuming and prone to personal biases. An objective technique that verifies specific meteorological phenomena, much in the way a human would in a subjective evaluation, would likely produce a more realistic assessment of model performance. Such techniques are being developed in the research community. The Applied Meteorology Unit (AMU) was tasked to conduct a literature search to identify phenomenological verification techniques being developed, determine if any are ready to use operationally, and outline the steps needed to implement any operationally-ready techniques into the Advanced Weather Information Processing System (AWIPS). The AMU conducted a search of all literature on the topic of phenomenological-based mesoscale model verification techniques and found 10 different techniques in various stages of development. 
Six of the techniques were developed to verify precipitation forecasts, one

  12. Treatment modalities and evaluation models for periodontitis

    PubMed Central

    Tariq, Mohammad; Iqbal, Zeenat; Ali, Javed; Baboota, Sanjula; Talegaonkar, Sushama; Ahmad, Zulfiqar; Sahni, Jasjeet K

    2012-01-01

    Periodontitis is the most common localized dental inflammatory disease, associated with several pathological conditions such as inflammation of the gums (gingivitis), degeneration of the periodontal ligament and dental cementum, and alveolar bone loss. In this perspective, the various preventive and treatment modalities, including oral hygiene, gingival irrigation, mechanical instrumentation, full mouth disinfection, host modulation and antimicrobial therapy, which are used either as adjunctive treatments or as stand-alone therapies in the non-surgical management of periodontal infections, are discussed. Intra-pocket sustained-release systems have emerged as a novel paradigm for future research. In this article, special consideration is given to the different locally delivered anti-microbial and anti-inflammatory medications that are either commercially available or currently under consideration for Food and Drug Administration (FDA) approval. The various in vitro dissolution models and microbiological strains investigated to mimic the infected and inflamed periodontal cavity and to predict the in vivo performance of treatment modalities are also discussed. Animal models that have been employed to explore the pathology of the different stages of periodontitis and to evaluate its treatment modalities are also reviewed. PMID:23373002

  13. Report of the Inter-Organizational Committee on Evaluation. Internal Evaluation Model.

    ERIC Educational Resources Information Center

    White, Roy; Murray, John

    Based upon the premise that school divisions in Manitoba, Canada, should evaluate and improve upon themselves, this evaluation model was developed. The participating personnel and the development of the evaluation model are described. The model has 11 parts: (1) needs assessment; (2) statement of objectives; (3) definition of objectives; (4)…

  14. Increasing the Use of Evaluation Information: An Evaluator-Manager Interaction Model.

    ERIC Educational Resources Information Center

    Alexander, Jay; And Others

    An evaluator-manager interaction model is developed for predicting the impact of evaluation and research findings. Instruments are developed for measuring the variables of interpersonal involvement, impact of evaluation, and managerial style in the relationship between evaluator and manager. The hypothesis advanced suggests that evaluators can…

  15. The design and implementation of an operational model evaluation system

    SciTech Connect

    Foster, K.T.

    1995-06-01

    An evaluation of an atmospheric transport and diffusion model's operational performance typically involves the comparison of the model's calculations with measurements of an atmospheric pollutant's temporal and spatial distribution. These evaluations, however, often use data from a small number of experiments and may be limited to producing some of the commonly quoted statistics based on the differences between model calculations and the measurements. This paper presents efforts to develop a model evaluation system geared toward both the objective statistical analysis and the more subjective visualization of the inter-relationships between a model's calculations and the appropriate field measurement data.

  16. Model Performance Evaluation and Scenario Analysis (MPESA) Tutorial

    EPA Pesticide Factsheets

    The model performance evaluation consists of metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit measures that capture magnitude-only, sequence-only, and combined magnitude and sequence errors.

  17. A Hybrid Evaluation Model for Evaluating Online Professional Development

    ERIC Educational Resources Information Center

    Hahs-Vaughn, Debbie; Zygouris-Coe, Vicky; Fiedler, Rebecca

    2007-01-01

    Online professional development is multidimensional. It encompasses: a) an online, web-based format; b) professional development; and most likely c) specific objectives tailored to and created for the respective online professional development course. Evaluating online professional development is therefore also multidimensional and as such both…

  18. Modelling and evaluating against the violent insider

    SciTech Connect

    Fortney, D.S.; Al-Ayat, R.A.; Saleh, R.A.

    1991-07-01

    The violent insider threat poses a special challenge to facilities protecting special nuclear material from theft or diversion. These insiders could potentially behave as nonviolent insiders to deceitfully defeat certain safeguards elements and use violence to forcefully defeat hardware or personnel. While several vulnerability assessment tools are available to deal with the nonviolent insider, very limited effort has been directed to developing analysis tools for the violent threat. In this paper, we present an approach using the results of a vulnerability assessment for nonviolent insiders to evaluate certain violent insider scenarios. Since existing tools do not explicitly consider violent insiders, the approach is intended for experienced safeguards analysts and relies on the analyst to brainstorm possible violent actions, to assign detection probabilities, and to ensure consistency. We then discuss our efforts in developing an automated tool for assessing the vulnerability against those violent insiders who are willing to use force against barriers, but who are unwilling to kill or be killed. Specifically, we discuss our efforts in developing databases for violent insiders penetrating barriers, algorithms for considering the entry of contraband, and modelling issues in considering the use of violence.

  19. A Model for Evaluating Title 1 Programs.

    ERIC Educational Resources Information Center

    Rost, Paul; And Others

    Albuquerque's Title I evaluation staff is in the process of generating a comprehensive local evaluation design because it considers the federally required product evaluation unsatisfactory. The required mean-gain comparisons were extended beyond the dimension of program to the dimensions of school, grade, and Title I instructor. This evaluation…

  20. THE ATMOSPHERIC MODEL EVALUATION TOOL (AMET); AIR QUALITY MODULE

    EPA Science Inventory

    This presentation reviews the development of the Atmospheric Model Evaluation Tool (AMET) air quality module. The AMET tool is being developed to aid in the model evaluation. This presentation focuses on the air quality evaluation portion of AMET. Presented are examples of the...

  1. Superquantile Regression with Applications to Buffered Reliability, Uncertainty Quantification, and Conditional Value-at-Risk

    DTIC Science & Technology

    2013-02-06

    Y ) dβ. (1) Since a superquantile is a coherent measure of risk and, by virtue of being an 'average' of quantiles, is also more stable than a...determination for superquantile regression similarly in the case where the distribution of (X,Y) has a finite support of cardinality ν. Definition 2
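The superquantile (conditional value-at-risk) referenced in this record is the tail average of quantiles at and beyond a given confidence level. A minimal numerical sketch of that definition, using synthetic loss data (illustrative only; the discretization of the tail integral is an assumption of this sketch):

```python
import numpy as np

def superquantile(losses, beta):
    """Approximate the beta-superquantile (CVaR): the average of the
    alpha-quantiles of the loss distribution over alpha in [beta, 1)."""
    alphas = np.linspace(beta, 0.999, 500)  # discretize the tail integral
    return np.quantile(losses, alphas).mean()

rng = np.random.default_rng(0)
losses = rng.normal(0.0, 1.0, 100_000)  # hypothetical standardized daily losses

q95 = np.quantile(losses, 0.95)      # 95% quantile (Value-at-Risk)
sq95 = superquantile(losses, 0.95)   # 95% superquantile (CVaR)
print(sq95 > q95)  # the superquantile always dominates the quantile
```

Because it averages the whole loss tail rather than reading off a single order statistic, the superquantile is the more stable estimate that the excerpt alludes to.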

  2. Simplified cost models for prefeasibility mineral evaluations

    USGS Publications Warehouse

    Camm, Thomas W.

    1991-01-01

    This report contains 2 open pit models, 6 underground mine models, 11 mill models, and cost equations for access roads, power lines, and tailings ponds. In addition, adjustment factors for variation in haulage distances are provided for open pit models and variation in mining depths for underground models.

  3. Program evaluation models and related theories: AMEE guide no. 67.

    PubMed

    Frye, Ann W; Hemmer, Paul A

    2012-01-01

    This Guide reviews theories of science that have influenced the development of common educational evaluation models. Educators can be more confident when choosing an appropriate evaluation model if they first consider the model's theoretical basis against their program's complexity and their own evaluation needs. Reductionism, system theory, and (most recently) complexity theory have inspired the development of models commonly applied in evaluation studies today. This Guide describes experimental and quasi-experimental models, Kirkpatrick's four-level model, the Logic Model, and the CIPP (Context/Input/Process/Product) model in the context of the theories that influenced their development and that limit or support their ability to do what educators need. The goal of this Guide is for educators to become more competent and confident in being able to design educational program evaluations that support intentional program improvement while adequately documenting or describing the changes and outcomes-intended and unintended-associated with their programs.

  4. Global daily reference evapotranspiration modeling and evaluation

    USGS Publications Warehouse

    Senay, G.B.; Verdin, J.P.; Lietzow, R.; Melesse, Assefa M.

    2008-01-01

    Accurate and reliable evapotranspiration (ET) datasets are crucial in regional water and energy balance studies. Due to the complex instrumentation requirements, actual ET values are generally estimated from reference ET values by adjustment factors using coefficients for water stress and vegetation conditions, commonly referred to as crop coefficients. Until recently, the modeling of reference ET has been solely based on important weather variables collected from weather stations that are generally located in selected agro-climatic locations. Since 2001, the National Oceanic and Atmospheric Administration’s Global Data Assimilation System (GDAS) has been producing six-hourly climate parameter datasets that are used to calculate daily reference ET for the whole globe at 1-degree spatial resolution. The U.S. Geological Survey Center for Earth Resources Observation and Science has been producing daily reference ET (ETo) since 2001, and it has been used on a variety of operational hydrological models for drought and streamflow monitoring all over the world. With the increasing availability of local station-based reference ET estimates, we evaluated the GDAS-based reference ET estimates using data from the California Irrigation Management Information System (CIMIS). Daily CIMIS reference ET estimates from 85 stations were compared with GDAS-based reference ET at different spatial and temporal scales using five-year daily data from 2002 through 2006. Despite the large difference in spatial scale (point vs. ∼100 km grid cell) between the two datasets, the correlations between station-based ET and GDAS-ET were very high, exceeding 0.97 on a daily basis to more than 0.99 on time scales of more than 10 days. Both the temporal and spatial correspondences in trend/pattern and magnitudes between the two datasets were satisfactory, suggesting the reliability of using GDAS parameter-based reference ET for regional water and energy balance studies in many parts of the world
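The abstract reports correlations that rise from about 0.97 daily to above 0.99 at 10-day aggregation. That effect of temporal averaging can be illustrated with two synthetic series sharing a seasonal signal plus independent noise (toy data, not the CIMIS/GDAS records):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 365 * 5  # five years of daily values
season = 5 + 3 * np.sin(2 * np.pi * np.arange(n) / 365)  # shared seasonal ET signal
station = season + rng.normal(0, 0.8, n)   # point measurement + local noise
gridded = season + rng.normal(0, 0.8, n)   # coarse grid-cell estimate + noise

def corr_at_scale(a, b, window):
    """Correlation after averaging both series over non-overlapping windows."""
    m = (len(a) // window) * window
    aw = a[:m].reshape(-1, window).mean(axis=1)
    bw = b[:m].reshape(-1, window).mean(axis=1)
    return np.corrcoef(aw, bw)[0, 1]

daily_r = corr_at_scale(station, gridded, 1)
tenday_r = corr_at_scale(station, gridded, 10)
print(tenday_r > daily_r)  # averaging suppresses the independent noise
```

Averaging over 10-day windows shrinks the independent noise variance by roughly a factor of ten while leaving the slow seasonal signal intact, which is why the longer time scale shows the higher correlation.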

  5. Rhode Island Model Evaluation & Support System: Teacher. Edition III

    ERIC Educational Resources Information Center

    Rhode Island Department of Education, 2015

    2015-01-01

    Rhode Island educators believe that implementing a fair, accurate, and meaningful educator evaluation and support system will help improve teaching and learning. The primary purpose of the Rhode Island Model Teacher Evaluation and Support System (Rhode Island Model) is to help all teachers improve. Through the Model, the goal is to help create a…

  6. The Relevance of the CIPP Evaluation Model for Educational Accountability.

    ERIC Educational Resources Information Center

    Stufflebeam, Daniel L.

    The CIPP Evaluation Model was originally developed to provide timely information in a systematic way for decision making, which is a proactive application of evaluation. This article examines whether the CIPP model also serves the retroactive purpose of providing information for accountability. Specifically, can the CIPP Model adequately assist…

  7. Promoting Excellence in Nursing Education (PENE): Pross evaluation model.

    PubMed

    Pross, Elizabeth A

    2010-08-01

    The purpose of this article is to examine the Promoting Excellence in Nursing Education (PENE) Pross evaluation model. A conceptual evaluation model, such as the one described here, may be useful to nurse academicians in the ongoing evaluation of educational programs, especially those with goals of excellence. Frameworks for evaluating nursing programs are necessary because they offer a way to systematically assess the educational effectiveness of complex nursing programs. This article describes the conceptual framework and its tenets of excellence.

  8. Evaluation of Models of Parkinson's Disease

    PubMed Central

    Jagmag, Shail A.; Tripathi, Naveen; Shukla, Sunil D.; Maiti, Sankar; Khurana, Sukant

    2016-01-01

    Parkinson's disease (PD) is one of the most common neurodegenerative diseases. Animal models have contributed substantially to our understanding of PD and to the therapeutics developed for its treatment. More exhaustive reviews of the literature provide insights into specific models; however, a synthesis of the basic advantages and disadvantages of the different models is much needed. Here we compare both neurotoxin-based and genetic models while suggesting some novel avenues in PD modeling. We also highlight the problems faced by, and the promises of, all the mammalian models, with the hope of providing a framework for comparison of the various systems. PMID:26834536

  9. Evaluating uncertainty in stochastic simulation models

    SciTech Connect

    McKay, M.D.

    1998-02-01

    This paper discusses fundamental concepts of uncertainty analysis relevant to both stochastic simulation models and deterministic models. A stochastic simulation model, called a simulation model, is a stochastic mathematical model that incorporates random numbers in the calculation of the model prediction. Queuing models are familiar simulation models in which random numbers are used for sampling interarrival and service times. Another example of simulation models is found in probabilistic risk assessments where atmospheric dispersion submodels are used to calculate movement of material. For these models, randomness comes not from the sampling of times but from the sampling of weather conditions, which are described by a frequency distribution of atmospheric variables like wind speed and direction as a function of height above ground. A common characteristic of simulation models is that single predictions, based on one interarrival time or one weather condition, for example, are not nearly as informative as the probability distribution of possible predictions induced by sampling the simulation variables like time and weather condition. The language of model analysis is often general and vague, with terms having mostly intuitive meaning. The definition and motivations for some of the commonly used terms and phrases offered in this paper lead to an analysis procedure based on prediction variance. In the following mathematical abstraction the authors present a setting for model analysis, relate practical objectives to mathematical terms, and show how two reasonable premises lead to a viable analysis strategy.
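The abstract's central point, that a single simulation prediction is far less informative than the distribution of predictions induced by the sampled inputs, can be seen in a toy single-server queue replication study (parameters and replicate count are assumptions of this sketch):

```python
import numpy as np

def mean_wait(rng, n_customers=2000, arrival_rate=0.9, service_rate=1.0):
    """One replicate of a single-server FIFO queue: mean customer wait time."""
    inter = rng.exponential(1 / arrival_rate, n_customers)   # interarrival times
    serve = rng.exponential(1 / service_rate, n_customers)   # service times
    arrive = np.cumsum(inter)
    start = np.empty(n_customers)
    finish = np.empty(n_customers)
    start[0] = arrive[0]
    finish[0] = start[0] + serve[0]
    for i in range(1, n_customers):
        start[i] = max(arrive[i], finish[i - 1])  # wait for the server to free up
        finish[i] = start[i] + serve[i]
    return (start - arrive).mean()

rng = np.random.default_rng(2)
replicates = np.array([mean_wait(rng) for _ in range(30)])
# The spread across replicates is the prediction variance the paper analyzes.
print(replicates.mean(), replicates.std())
```

Any single replicate could land well above or below the ensemble mean; reporting the replicate distribution (or its variance) conveys what one run cannot.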

  10. Modeling and Evaluating Emotions Impact on Cognition

    DTIC Science & Technology

    2013-07-01

    International Conference on Automatic Face and Gesture Recognition. Shanghai, China, April 2013 • Wenji Mao and Jonathan Gratch. Modeling Social...Modeling, Lorentz Center, Leiden. August 2011 • Keynote speaker, IEEE International Conference on Automatic Face and Gesture Recognition, Santa

  11. Likelihood-Based Climate Model Evaluation

    NASA Technical Reports Server (NTRS)

    Braverman, Amy; Cressie, Noel; Teixeira, Joao

    2012-01-01

    Climate models are deterministic, mathematical descriptions of the physics of climate. Confidence in predictions of future climate is increased if the physics are verifiably correct. A necessary (but not sufficient) condition is that past and present climate be simulated well. We quantify the likelihood that a summary statistic computed from a set of observations arises from a physical system with the characteristics captured by a model-generated time series. Given a prior on models, we can go further: the posterior distribution of the model given the observations.

  12. Evaluation of Traditional Medicines for Neurodegenerative Diseases Using Drosophila Models

    PubMed Central

    Lee, Soojin; Bang, Se Min; Lee, Joon Woo; Cho, Kyoung Sang

    2014-01-01

    Drosophila is one of the oldest and most powerful genetic models and has led to novel insights into a variety of biological processes. Recently, Drosophila has emerged as a model system to study human diseases, including several important neurodegenerative diseases. Because of the genomic similarity between Drosophila and humans, Drosophila neurodegenerative disease models exhibit a variety of human-disease-like phenotypes, facilitating fast and cost-effective in vivo genetic modifier screening and drug evaluation. Using these models, many disease-associated genetic factors have been identified, leading to the identification of compelling drug candidates. Recently, the safety and efficacy of traditional medicines for human diseases have been evaluated in various animal disease models. Despite the advantages of the Drosophila model, its usage in the evaluation of traditional medicines is only nascent. Here, we introduce the Drosophila model for neurodegenerative diseases and some examples demonstrating the successful application of Drosophila models in the evaluation of traditional medicines. PMID:24790636

  13. Evaluation of Fast-Time Wake Vortex Prediction Models

    NASA Technical Reports Server (NTRS)

    Proctor, Fred H.; Hamilton, David W.

    2009-01-01

    Current fast-time wake models are reviewed and three basic types are defined. Predictions from several of the fast-time models are compared. Previous statistical evaluations of the APA-Sarpkaya and D2P fast-time models are discussed. Root Mean Square errors between fast-time model predictions and Lidar wake measurements are examined for a 24 hr period at Denver International Airport. Shortcomings in current methodology for evaluating wake errors are also discussed.
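The root-mean-square error statistic used to score fast-time wake predictions against Lidar measurements is a standard computation; a minimal sketch with made-up vortex-altitude numbers (not the Denver data):

```python
import numpy as np

def rmse(predicted, observed):
    """Root mean square error between model predictions and measurements."""
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return np.sqrt(np.mean((predicted - observed) ** 2))

# Hypothetical wake-vortex altitudes (m): fast-time model vs. Lidar
model = [120.0, 110.0, 98.0, 85.0]
lidar = [118.0, 112.0, 95.0, 88.0]
print(round(rmse(model, lidar), 2))  # → 2.55
```

One shortcoming the paper alludes to is that a single aggregate RMSE hides whether errors are conservative (wake predicted to persist longer than observed) or unconservative, which matters for spacing decisions.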

  14. Evaluating Energy Efficiency Policies with Energy-Economy Models

    SciTech Connect

    Mundaca, Luis; Neij, Lena; Worrell, Ernst; McNeil, Michael A.

    2010-08-01

    The growing complexities of energy systems, environmental problems and technology markets are driving and testing most energy-economy models to their limits. To further advance bottom-up models from a multidisciplinary energy efficiency policy evaluation perspective, we review and critically analyse bottom-up energy-economy models and corresponding evaluation studies on energy efficiency policies to induce technological change. We use the household sector as a case study. Our analysis focuses on decision frameworks for technology choice, the type of evaluation being carried out, the treatment of market and behavioural failures, the policy instruments evaluated, and the key determinants used to mimic policy instruments. Although the review confirms criticism of energy-economy models (e.g. unrealistic representation of decision-making by consumers when choosing technologies), these models provide valuable guidance for policy evaluation related to energy efficiency. Several areas for further advancing the models remain open, particularly those relating to modelling issues, techno-economic and environmental aspects, behavioural determinants, and policy considerations.

  15. Evaluation study of building-resolved urban dispersion models

    SciTech Connect

    Flaherty, Julia E.; Allwine, K Jerry; Brown, Mike J.; Coirier, WIlliam J.; Ericson, Shawn C.; Hansen, Olav R.; Huber, Alan H.; Kim, Sura; Leach, Martin J.; Mirocha, Jeff D.; Newsom, Rob K.; Patnaik, Gopal; Senocak, Inanc

    2007-09-10

    For effective emergency response and recovery planning, it is critically important that building-resolved urban dispersion models be evaluated using field data. Several full-physics computational fluid dynamics (CFD) models and semi-empirical building-resolved (SEB) models are being advanced and applied to simulating flow and dispersion in urban areas. To obtain an estimate of the current state-of-readiness of these classes of models, the Department of Homeland Security (DHS) funded a study to compare five CFD models and one SEB model with tracer data from the extensive Midtown Manhattan field study (MID05) conducted during August 2005 as part of the DHS Urban Dispersion Program (UDP; Allwine and Flaherty 2007). Six days of tracer and meteorological experiments were conducted over an approximately 2-km-by-2-km area in Midtown Manhattan just south of Central Park in New York City. A subset of these data was used for model evaluations. The study was conducted such that an evaluation team, independent of the six modeling teams, provided all the input data (e.g., building data, meteorological data and tracer release rates) and run conditions for each of four experimental periods simulated. Tracer concentration data for two of the four experimental periods were provided to the modeling teams for their own evaluation of their respective models to ensure proper setup and operation. Tracer data were not provided for the second two experimental periods, to allow for an independent evaluation of the models. The tracer concentrations resulting from the model simulations were provided to the evaluation team in a standard format for consistency in inter-comparing model results. An overview of the model evaluation approach will be given, followed by a discussion of the qualitative comparison of the respective models with the field data. Future model development efforts needed to address modeling gaps identified in this study will also be discussed.

  16. Evaluation of the BioVapor Model

    EPA Science Inventory

    The BioVapor model addresses transport and biodegradation of petroleum vapors in the subsurface. This presentation describes basic background on the nature and scientific basis of environmental transport models. It then describes a series of parameter uncertainty runs of the Bi...

  17. Evaluation of spinal cord injury animal models

    PubMed Central

    Zhang, Ning; Fang, Marong; Chen, Haohao; Gou, Fangming; Ding, Mingxing

    2014-01-01

    Because there is no curative treatment for spinal cord injury, establishing an ideal animal model is important to identify injury mechanisms and develop therapies for individuals suffering from spinal cord injuries. In this article, we systematically review and analyze various kinds of animal models of spinal cord injury and assess their advantages and disadvantages for further studies. PMID:25598784

  18. Four-dimensional evaluation of regional air quality models

    EPA Science Inventory

    We present highlights of the results obtained in the third phase of the Air Quality Model Evaluation International Initiative (AQMEII3). Activities in AQMEII3 were focused on evaluating the performance of global, hemispheric and regional modeling systems over Europe and North Ame...

  19. Rhode Island Model Evaluation & Support System: Support Professional. Edition II

    ERIC Educational Resources Information Center

    Rhode Island Department of Education, 2015

    2015-01-01

    Rhode Island educators believe that implementing a fair, accurate, and meaningful evaluation and support system for support professionals will help improve student outcomes. The primary purpose of the Rhode Island Model Support Professional Evaluation and Support System (Rhode Island Model) is to help all support professionals do their best work…

  20. The Development of Educational Evaluation Models in Indonesia.

    ERIC Educational Resources Information Center

    Nasoetion, N.; And Others

    The primary purpose of this project was to develop model evaluation procedures that could be applied to large educational undertakings in Indonesia. Three programs underway in Indonesia were selected for the development of evaluation models: the Textbook-Teacher Upgrading Project, the Development School Project, and the Examinations (Item Bank)…

  1. Modelling in Evaluating a Working Life Project in Higher Education

    ERIC Educational Resources Information Center

    Sarja, Anneli; Janhonen, Sirpa; Havukainen, Pirjo; Vesterinen, Anne

    2012-01-01

    This article describes an evaluation method based on collaboration among a higher education institution, a care home, and a university in an R&D project. The aim of the project was to elaborate modelling as a tool of developmental evaluation for innovation and competence in project cooperation. The approach was based on activity theory. Modelling enabled a…

  2. Evaluating a Training Using the "Four Levels Model"

    ERIC Educational Resources Information Center

    Steensma, Herman; Groeneveld, Karin

    2010-01-01

    Purpose: The aims of this study are: to present a training evaluation based on the "four levels model"; to demonstrate the value of experimental designs in evaluation studies; and to take a first step in the development of an evidence-based training program. Design/methodology/approach: The Kirkpatrick four levels model was used to…

  3. A Generalized Evaluation Model for Primary Prevention Programs.

    ERIC Educational Resources Information Center

    Barling, Phillip W.; Cramer, Kathryn D.

    A generalized evaluation model (GEM) has been developed to evaluate primary prevention program impact. The GEM views primary prevention dynamically, delineating four structural components (program, organization, target population, system) and four developmental stages (initiation, establishment, integration, continuation). The interaction of…

  4. Rhode Island Model Evaluation & Support System: Building Administrator. Edition III

    ERIC Educational Resources Information Center

    Rhode Island Department of Education, 2015

    2015-01-01

    Rhode Island educators believe that implementing a fair, accurate, and meaningful educator evaluation and support system will help improve teaching, learning, and school leadership. The primary purpose of the Rhode Island Model Building Administrator Evaluation and Support System (Rhode Island Model) is to help all building administrators improve.…

  5. Testing of a Program Evaluation Model: Final Report.

    ERIC Educational Resources Information Center

    Nagler, Phyllis J.; Marson, Arthur A.

    A program evaluation model developed by Moraine Park Technical Institute (MPTI) is described in this report. Following background material, the four main evaluation criteria employed in the model are identified as program quality, program relevance to community needs, program impact on MPTI, and the transition and growth of MPTI graduates in the…

  6. A Model for Evaluating Development Programs. Miscellaneous Report.

    ERIC Educational Resources Information Center

    Burton, John E., Jr.; Rogers, David L.

    Taking the position that the Classical Experimental Evaluation (CEE) Model does not do justice to the process of acquiring information necessary for decision making re planning, programming, implementing, and recycling program activities, this paper presents the Inductive, System-Process (ISP) evaluation model as an alternative to be used in…

  7. Program Evaluation: The Accountability Bridge Model for Counselors

    ERIC Educational Resources Information Center

    Astramovich, Randall L.; Coker, J. Kelly

    2007-01-01

    The accountability and reform movements in education and the human services professions have pressured counselors to demonstrate outcomes of counseling programs and services. Evaluation models developed for large-scale evaluations are generally impractical for counselors to implement. Counselors require practical models to guide them in planning…

  8. Center for Integrated Nanotechnologies (CINT) Chemical Release Modeling Evaluation

    SciTech Connect

    Stirrup, Timothy Scott

    2016-12-20

    This evaluation documents the methodology and results of chemical release modeling for operations at Building 518, Center for Integrated Nanotechnologies (CINT) Core Facility. This evaluation is intended to supplement an update to the CINT [Standalone] Hazards Analysis (SHA). This evaluation also updates the original [Design] Hazards Analysis (DHA) completed in 2003 during the design and construction of the facility; since the original DHA, additional toxic materials have been evaluated and modeled to confirm the continued low hazard classification of the CINT facility and operations. This evaluation addresses the potential catastrophic release of the current inventory of toxic chemicals at Building 518 based on a standard query in the Chemical Information System (CIS).

  9. Statistical modeling for visualization evaluation through data fusion.

    PubMed

    Chen, Xiaoyu; Jin, Ran

    2017-01-19

    There is high demand for data visualization that provides insights to users in various applications. However, a consistent, online visualization evaluation method to quantify mental workload or user preference has been lacking, which leads to an inefficient visualization and user interface design process. Recently, advances in interactive and sensing technologies have made electroencephalogram (EEG) signals, eye movements and visualization logs available for user-centered evaluation. This paper proposes a data fusion model and an application procedure for quantitative, online visualization evaluation. Fifteen participants joined the study, based on three different visualization designs. The results provide a regularized regression model that can accurately predict the user's evaluation of task complexity, and indicate the significance of all three types of sensing data for visualization evaluation. This model can be widely applied to data visualization evaluation, as well as to other user-centered design evaluations and data analysis in human factors and ergonomics.
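A regularized regression fusing several sensing features into a task-complexity score can be sketched in closed form. The synthetic features below (stand-ins for EEG band power, fixation duration, and log event rate) and the ridge formulation are assumptions of this sketch, not the paper's actual specification:

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X'X + lam*I)^{-1} X'y."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

rng = np.random.default_rng(3)
n = 200
# Synthetic stand-ins for the three fused sensing channels.
X = rng.normal(size=(n, 3))
true_w = np.array([0.8, -0.5, 0.3])
y = X @ true_w + rng.normal(0, 0.1, n)  # simulated task-complexity ratings

w = ridge_fit(X, y, lam=1.0)
print(np.round(w, 2))  # recovered weights land close to the generating ones
```

The regularization term keeps the fused model stable when the sensing channels are correlated with one another, which is the usual motivation for penalized rather than ordinary least squares in this setting.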

  10. Designing and Evaluating Representations to Model Pedagogy

    ERIC Educational Resources Information Center

    Masterman, Elizabeth; Craft, Brock

    2013-01-01

    This article presents the case for a theory-informed approach to designing and evaluating representations for implementation in digital tools to support Learning Design, using the framework of epistemic efficacy as an example. This framework, which is rooted in the literature of cognitive psychology, is operationalised through dimensions of fit…

  11. Evaluation of Numerical Storm Surge Models.

    DTIC Science & Technology

    1980-12-01

    of Defense, has primary responsibility for design of coastal protective works and for recommendations, where appropriate, for the management of exposed...coastal areas. In addition, the Federal Insurance Administration (FIA), of the Federal Emergency Management Agency (FEMA), is responsible for...study management and the responsibility to compare and evaluate the results of the computations were assigned to the Committee on Tidal Hydraulics

  12. Ohio Principal Evaluation System: Model Packet

    ERIC Educational Resources Information Center

    Ohio Department of Education, 2011

    2011-01-01

    The Ohio Principal Evaluation System (OPES) was collaboratively developed by Ohio superintendents, school administrators, higher education faculty, and representatives from Ohio's administrator associations. It was designed to be research-based, transparent, fair and adaptable to the specific contexts of Ohio's districts (rural, urban, suburban,…

  13. Model for Evaluating Teacher and Trainer Competences

    ERIC Educational Resources Information Center

    Carioca, Vito; Rodrigues, Clara; Saude, Sandra; Kokosowski, Alain; Harich, Katja; Sau-Ek, Kristiina; Georgogianni, Nicole; Levy, Samuel; Speer, Sandra; Pugh, Terence

    2009-01-01

    A lack of common criteria for comparing education and training systems makes it difficult to recognise qualifications and competences acquired in different environments and levels of training. A valid basis for defining a framework for evaluating professional performance in European educational and training contexts must therefore be established.…

  14. Evaluating Models of Human Performance: Safety-Critical Systems Applications

    NASA Technical Reports Server (NTRS)

    Feary, Michael S.

    2012-01-01

    This presentation is part of panel discussion on Evaluating Models of Human Performance. The purpose of this panel is to discuss the increasing use of models in the world today and specifically focus on how to describe and evaluate models of human performance. My presentation will focus on discussions of generating distributions of performance, and the evaluation of different strategies for humans performing tasks with mixed initiative (Human-Automation) systems. I will also discuss issues with how to provide Human Performance modeling data to support decisions on acceptability and tradeoffs in the design of safety critical systems. I will conclude with challenges for the future.

  15. Models for evaluating the performability of degradable computing systems

    NASA Technical Reports Server (NTRS)

    Wu, L. T.

    1982-01-01

    Recent advances in multiprocessor technology established the need for unified methods to evaluate computing systems performance and reliability. In response to this modeling need, a general modeling framework that permits the modeling, analysis and evaluation of degradable computing systems is considered. Within this framework, several user oriented performance variables are identified and shown to be proper generalizations of the traditional notions of system performance and reliability. Furthermore, a time varying version of the model is developed to generalize the traditional fault tree reliability evaluation methods of phased missions.

  16. Impact of model defect and experimental uncertainties on evaluated output

    NASA Astrophysics Data System (ADS)

    Neudecker, D.; Capote, R.; Leeb, H.

    2013-09-01

    One of the current major problems in nuclear data evaluation is the unreasonably small evaluated uncertainties often obtained. These small uncertainties are partly attributed to missing correlations of experimental uncertainties as well as to deficiencies of the model employed for the prior information. In this article, both uncertainty sources are included in an evaluation of 55Mn cross-sections for incident neutrons. Their impact on the evaluated output is studied using a prior obtained by the Full Bayesian Evaluation Technique and a prior obtained by the nuclear model program EMPIRE. It is shown analytically and by means of an evaluation that unreasonably small evaluated uncertainties can be obtained not only if correlated systematic uncertainties of the experiment are neglected but also if prior uncertainties are smaller or about the same magnitude as the experimental ones. Furthermore, it is shown that including model defect uncertainties in the evaluation of 55Mn leads to larger evaluated uncertainties for channels where the model is deficient. It is concluded that including correlated experimental uncertainties is equally important as model defect uncertainties, if the model calculations deviate significantly from the measurements.
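The mechanism the abstract describes, where the evaluated uncertainty drops below either input whenever the prior is treated as independent information, can be seen in the scalar inverse-variance combination of two estimates (illustrative numbers, not the 55Mn evaluation):

```python
import numpy as np

def combine(mean_a, var_a, mean_b, var_b):
    """Inverse-variance (least-squares) combination of two independent estimates."""
    w_a, w_b = 1 / var_a, 1 / var_b
    mean = (w_a * mean_a + w_b * mean_b) / (w_a + w_b)
    var = 1 / (w_a + w_b)
    return mean, var

# Hypothetical prior (model) and experimental cross-section values, arbitrary units
prior_mean, prior_var = 1.00, 0.02 ** 2
exp_mean, exp_var = 1.05, 0.03 ** 2

post_mean, post_var = combine(prior_mean, prior_var, exp_mean, exp_var)
# The combined variance is smaller than BOTH inputs -- so a prior claimed to be
# as precise as the data halves the evaluated uncertainty even before any
# neglected experimental correlations make things worse.
print(post_var < prior_var and post_var < exp_var)
```

Inflating the prior variance with a model-defect term, as the paper advocates for deficient channels, directly weakens the prior's weight `w_a` and yields the larger, more honest evaluated uncertainty.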

  17. TMDL MODEL EVALUATION AND RESEARCH NEEDS

    EPA Science Inventory

    This review examines the modeling research needs to support environmental decision-making for the 303(d) requirements for development of total maximum daily loads (TMDLs) and related programs such as 319 Nonpoint Source Program activities, watershed management, stormwater permits...

  18. EVALUATING AND USING AIR QUALITY MODELS

    EPA Science Inventory

    Grid-based models are being used to assess the magnitude of the pollution problem and to design emission control strategies to achieve compliance with the relevant air quality standards in the United States.

  19. Evaluation of Surrogate Animal Models of Melioidosis

    PubMed Central

    Warawa, Jonathan Mark

    2010-01-01

    Burkholderia pseudomallei is the Gram-negative bacterial pathogen responsible for the disease melioidosis. B. pseudomallei establishes disease in susceptible individuals through multiple routes of infection, all of which may proceed to a septicemic disease associated with a high mortality rate. B. pseudomallei opportunistically infects humans and a wide range of animals directly from the environment, and modeling of experimental melioidosis has been conducted in numerous biologically relevant models including mammalian and invertebrate hosts. This review seeks to summarize published findings related to established animal models of melioidosis, with an aim to compare and contrast the virulence of B. pseudomallei in these models. The effect of the route of delivery on disease is also discussed for intravenous, intraperitoneal, subcutaneous, intranasal, aerosol, oral, and intratracheal infection methodologies, with a particular focus on how they relate to modeling clinical melioidosis. The importance of the translational validity of the animal models used in B. pseudomallei research is highlighted as these studies have become increasingly therapeutic in nature. PMID:21772830

  20. CFD code evaluation for internal flow modeling

    NASA Technical Reports Server (NTRS)

    Chung, T. J.

    1990-01-01

    Research on computational fluid dynamics (CFD) code evaluation, with emphasis on supercomputing in reacting flows, is discussed. Advantages of unstructured grids, multigrids, adaptive methods, improved flow solvers, vector processing, parallel processing, and reduction of memory requirements are discussed. As examples, the researchers include applications of supercomputing to the reacting-flow Navier-Stokes equations, including shock waves, turbulence, and combustion instability problems associated with solid and liquid propellants. Evaluation of codes developed by other organizations is not included. Instead, the basic criteria for accuracy and efficiency have been established, and some applications to rocket combustion have been made. Research toward the ultimate goal, the most accurate and efficient CFD code, is in progress and will continue for years to come.

  1. Drug Evaluation in the Plasmodium Falciparum - Aotus Model.

    DTIC Science & Technology

    1992-03-23

    PRINCIPAL INVESTIGATOR: Richard N. Rossan, Ph.D. CONTRACTING ORGANIZATION: PROMED TRADING, S.A., P.O. Box 025426, PTY-051, Miami, Florida. REPORT PERIOD: 3/1/91 - 2/28/92. TITLE: Drug Evaluation in the Plasmodium falciparum - Aotus Model. Contract No. DAMD17-91-C-1072. The Panamanian Aotus - Plasmodium falciparum model was used to evaluate potential antimalaria drugs. Neither protriptyline nor tetrandrine, each

  2. Experimental evaluations of the microchannel flow model

    NASA Astrophysics Data System (ADS)

    Parker, K. J.

    2015-06-01

    Recent advances have enabled a new wave of biomechanics measurements, and have renewed interest in selecting appropriate rheological models for soft tissues such as the liver, thyroid, and prostate. The microchannel flow model was recently introduced to describe the linear response of tissue to stimuli such as stress relaxation or shear wave propagation. This model postulates a power law relaxation spectrum that results from a branching distribution of vessels and channels in normal soft tissue such as liver. In this work, the derivation is extended to determine the explicit link between the distribution of vessels and the relaxation spectrum. In addition, liver tissue is modified by temperature or salinity, and the resulting changes in tissue responses (by factors of 1.5 or greater) are reasonably predicted from the microchannel flow model, simply by considering the changes in fluid flow through the modified samples. The 2 and 4 parameter versions of the model are considered, and it is shown that in some cases the maximum time constant (corresponding to the minimum vessel diameters), could be altered in a way that has major impact on the observed tissue response. This could explain why an inflamed region is palpated as a harder bump compared to surrounding normal tissue.
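The central postulate above, that a branching distribution of vessels yields a power-law relaxation spectrum, can be checked numerically: superposing exponential relaxations whose time constants carry power-law weights produces near power-law stress relaxation. A schematic sketch (the exponent, time-constant range, and discretization are assumptions for illustration, not the paper's fitted values):

```python
import numpy as np

b = 0.5                              # assumed spectrum exponent
taus = np.logspace(-3, 3, 200)       # relaxation times (s), assumed range
weights = taus**(-b)                 # power-law weights on a log grid
t = np.logspace(-1, 1, 50)           # observation times (s)

# Relaxation function as a weighted sum of exponential decays.
G = (weights[None, :] * np.exp(-t[:, None] / taus[None, :])).sum(axis=1)

# On a log-log plot G(t) is close to a straight line of slope -b
# over intermediate times, i.e. power-law stress relaxation.
slope = np.polyfit(np.log(t), np.log(G), 1)[0]
print(round(slope, 2))
```

The emergent power law is the signature the microchannel flow model exploits; altering the smallest vessel diameters truncates the spectrum and changes the observed relaxation, as the abstract notes.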

  3. Experimental evaluations of the microchannel flow model.

    PubMed

    Parker, K J

    2015-06-07

    Recent advances have enabled a new wave of biomechanics measurements, and have renewed interest in selecting appropriate rheological models for soft tissues such as the liver, thyroid, and prostate. The microchannel flow model was recently introduced to describe the linear response of tissue to stimuli such as stress relaxation or shear wave propagation. This model postulates a power law relaxation spectrum that results from a branching distribution of vessels and channels in normal soft tissue such as liver. In this work, the derivation is extended to determine the explicit link between the distribution of vessels and the relaxation spectrum. In addition, liver tissue is modified by temperature or salinity, and the resulting changes in tissue responses (by factors of 1.5 or greater) are reasonably predicted from the microchannel flow model, simply by considering the changes in fluid flow through the modified samples. The 2 and 4 parameter versions of the model are considered, and it is shown that in some cases the maximum time constant (corresponding to the minimum vessel diameters), could be altered in a way that has major impact on the observed tissue response. This could explain why an inflamed region is palpated as a harder bump compared to surrounding normal tissue.

  4. Evaluation of biological models using Spacelab

    NASA Technical Reports Server (NTRS)

    Tollinger, D.; Williams, B. A.

    1980-01-01

    Biological models of hypogravity effects are described, including the cardiovascular-fluid shift, musculoskeletal, embryological, and space sickness models. These models predict such effects as loss of extracellular fluid and electrolytes, decrease in red blood cell mass, and loss of muscle and bone mass in weight-bearing portions of the body. Experimentation in Spacelab by the use of implanted electromagnetic flow probes, by fertilizing frog eggs in hypogravity and fixing the eggs at various stages of early development, and by assessing the role of the vestibulo-ocular reflex arc in space sickness is suggested. It is concluded that the use of small animals eliminates the uncertainties caused by corrective or preventive measures employed with human subjects.

  5. Evaluating models of climate and forest vegetation

    NASA Technical Reports Server (NTRS)

    Clark, James S.

    1992-01-01

    Understanding how the biosphere may respond to increasing trace gas concentrations in the atmosphere requires models that contain vegetation responses to regional climate. Most of the processes ecologists study in forests, including trophic interactions, nutrient cycling, and disturbance regimes, and vital components of the world economy, such as forest products and agriculture, will be influenced in potentially unexpected ways by changing climate. These vegetation changes affect climate in the following ways: changing C, N, and S pools; trace gases; albedo; and water balance. The complexity of the indirect interactions among variables that depend on climate, together with the range of different space/time scales that best describe these processes, make the problems of modeling and prediction enormously difficult. These problems of predicting vegetation response to climate warming and potential ways of testing model predictions are the subjects of this chapter.

  6. Evaluation of Model Fit in Cognitive Diagnosis Models

    ERIC Educational Resources Information Center

    Hu, Jinxiang; Miller, M. David; Huggins-Manley, Anne Corinne; Chen, Yi-Hsin

    2016-01-01

    Cognitive diagnosis models (CDMs) estimate student ability profiles using latent attributes. Model fit to the data needs to be ascertained in order to determine whether inferences from CDMs are valid. This study investigated the usefulness of some popular model fit statistics to detect CDM fit including relative fit indices (AIC, BIC, and CAIC),…

  7. Modeling procedures for handling qualities evaluation of flexible aircraft

    NASA Technical Reports Server (NTRS)

    Govindaraj, K. S.; Eulrich, B. J.; Chalk, C. R.

    1981-01-01

    This paper presents simplified modeling procedures to evaluate the impact of flexible modes and the unsteady aerodynamic effects on the handling qualities of Supersonic Cruise Aircraft (SCR). The modeling procedures involve obtaining reduced order transfer function models of SCR vehicles, including the important flexible mode responses and unsteady aerodynamic effects, and conversion of the transfer function models to time domain equations for use in simulations. The use of the modeling procedures is illustrated by a simple example.
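The conversion step the abstract describes, from a reduced-order transfer function model to time-domain equations suitable for simulation, is a standard state-space realization. A hedged sketch using SciPy (the second-order system below is illustrative, not an SCR vehicle model):

```python
import numpy as np
from scipy import signal

# Reduced-order transfer function G(s) = 1 / (s^2 + 0.8 s + 4)
num = [1.0]
den = [1.0, 0.8, 4.0]

# Convert to time-domain (state-space) form: x' = Ax + Bu, y = Cx + Du
A, B, C, D = signal.tf2ss(num, den)

# The state-space model can now be driven in a simulation; here, a step.
t = np.linspace(0.0, 10.0, 500)
t_out, y = signal.step((A, B, C, D), T=t)
print(round(y[-1], 2))   # settles near the DC gain 1/4
```

For a flexible-aircraft model the same call would be applied to each reduced-order transfer function (rigid-body plus retained flexible modes) before assembling the simulation.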

  8. Evaluation protocol for the WIND system atmospheric models

    SciTech Connect

    Fast, J.D.

    1991-12-31

    Atmospheric transport and diffusion models have been developed for real-time calculations of the location and concentration of toxic or radioactive materials during an accidental release at the Savannah River Site (SRS). These models have been incorporated into an automated, menu-driven, computer-based system called the WIND (Weather INformation and Display) system. In an effort to establish more formal quality assurance procedures for the WIND system atmospheric codes, a software evaluation protocol is being developed. An evaluation protocol is necessary to determine how well the models may perform in emergency response (real-time) situations. The evaluation of high-impact software must be conducted in accordance with WSRC QA Manual, 1Q, QAP 20-1. This report describes the method that will be used to evaluate the atmospheric models. The evaluation will determine the effectiveness of the atmospheric models in emergency response situations, which is not necessarily the same as the procedure used for research purposes. The format of the evaluation plan will provide guidance for the evaluation of atmospheric models that may be added to the WIND system in the future. The evaluation plan is designed to provide the user with information about the WIND system atmospheric models that is necessary for emergency response situations.
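The kind of calculation such transport-and-diffusion codes perform can be illustrated, under textbook assumptions, with a steady-state Gaussian plume (this is a generic classroom formula, not the WIND system's model; the linear dispersion coefficients below are crude stand-ins for stability-class curves):

```python
import math

def gaussian_plume(Q, u, x, y, z, H, a=0.08, b=0.06):
    """Textbook steady-state Gaussian plume concentration (g/m^3).
    Q: release rate (g/s); u: wind speed (m/s); x, y, z: downwind,
    crosswind, and vertical receptor coordinates (m); H: effective
    release height (m). sigma_y = a*x and sigma_z = b*x are assumed
    dispersion curves, not site-specific ones."""
    sy, sz = a * x, b * x
    lateral = math.exp(-y**2 / (2 * sy**2))
    vertical = (math.exp(-(z - H)**2 / (2 * sz**2))
                + math.exp(-(z + H)**2 / (2 * sz**2)))  # ground reflection
    return Q / (2 * math.pi * u * sy * sz) * lateral * vertical

# Ground-level centerline concentration 1 km downwind of a 50 m release
print(gaussian_plume(Q=1.0, u=5.0, x=1000.0, y=0.0, z=0.0, H=50.0))
```

An evaluation protocol such as the one described would compare outputs like this against tracer measurements under emergency-response time constraints.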

  9. Evaluation protocol for the WIND system atmospheric models

    SciTech Connect

    Fast, J.D.

    1991-01-01

    Atmospheric transport and diffusion models have been developed for real-time calculations of the location and concentration of toxic or radioactive materials during an accidental release at the Savannah River Site (SRS). These models have been incorporated into an automated, menu-driven, computer-based system called the WIND (Weather INformation and Display) system. In an effort to establish more formal quality assurance procedures for the WIND system atmospheric codes, a software evaluation protocol is being developed. An evaluation protocol is necessary to determine how well the models may perform in emergency response (real-time) situations. The evaluation of high-impact software must be conducted in accordance with WSRC QA Manual, 1Q, QAP 20-1. This report describes the method that will be used to evaluate the atmospheric models. The evaluation will determine the effectiveness of the atmospheric models in emergency response situations, which is not necessarily the same as the procedure used for research purposes. The format of the evaluation plan will provide guidance for the evaluation of atmospheric models that may be added to the WIND system in the future. The evaluation plan is designed to provide the user with information about the WIND system atmospheric models that is necessary for emergency response situations.

  10. Cutter Resource Effectiveness Evaluation Model. Executive Summary.

    DTIC Science & Technology

    1977-06-01

    D. S. Prerau, Transportation Systems Center, Kendall Square, Cambridge, MA 02142. June 1977. Final Report. Performing organizations: USCG R&D Center, Avery Point, and Transportation Systems Center, Kendall Square. This report documents the Cutter Resource Effectiveness Evaluation Project at the CG R&D Center and Transportation Systems Center. This report provides a

  11. Evaluating the Pedagogical Potential of Hybrid Models

    ERIC Educational Resources Information Center

    Levin, Tzur; Levin, Ilya

    2013-01-01

    The paper examines how the use of hybrid models--that consist of the interacting continuous and discrete processes--may assist in teaching system thinking. We report an experiment in which undergraduate students were asked to choose between a hybrid and a continuous solution for a number of control problems. A correlation has been found between…

  12. Working with Teaching Assistants: Three Models Evaluated

    ERIC Educational Resources Information Center

    Cremin, Hilary; Thomas, Gary; Vincett, Karen

    2005-01-01

    Questions about how best to deploy teaching assistants (TAs) are particularly apposite given the greatly increasing numbers of TAs in British schools and given findings about the difficulty of effecting adult teamwork in classrooms. In six classrooms, three models of team organisation and planning for the work of teaching assistants -- "room…

  13. AERMOD: MODEL FORMULATION AND EVALUATION RESULTS

    EPA Science Inventory

    AERMOD is an advanced plume model that incorporates updated treatments of the boundary layer theory, understanding of turbulence and dispersion, and includes handling of terrain interactions. This paper presents an overview of AERMOD's features relative to ISCST3.

    AERM...

  14. Evaluating a Model of Youth Physical Activity

    ERIC Educational Resources Information Center

    Heitzler, Carrie D.; Lytle, Leslie A.; Erickson, Darin J.; Barr-Anderson, Daheia; Sirard, John R.; Story, Mary

    2010-01-01

    Objective: To explore the relationship between social influences, self-efficacy, enjoyment, and barriers and physical activity. Methods: Structural equation modeling examined relationships between parent and peer support, parent physical activity, individual perceptions, and objectively measured physical activity using accelerometers among a…

  15. An Evaluation of Title I Model C1: The Special Regression Model.

    ERIC Educational Resources Information Center

    Mandeville, Garrett K.

    The RMC Research Corporation evaluation model C1--the special regression model (SRM)--was evaluated through a series of computer simulations and compared with an alternative model, the norm referenced model (NRM). Using local data and national norm data to determine reasonable values for sample size and pretest posttest correlation parameters, the…

  16. Evaluation of a hydrological model based on Bidirectional Reach (BReach)

    NASA Astrophysics Data System (ADS)

    Van Eerdenbrugh, Katrien; Van Hoey, Stijn; Verhoest, Niko E. C.

    2016-04-01

    Evaluation and discrimination of model structures is crucial to ensure an appropriate use of hydrological models. When evaluating model results by aggregating their quality in (a subset of) individual observations, the overall results of this analysis can conceal important detailed information about model structural deficiencies. Analyzing model results within their local (time) context can uncover this detailed information. In this research, a methodology called Bidirectional Reach (BReach) is proposed to evaluate and analyze results of a hydrological model by assessing the maximum left and right reach in each observation point used for model evaluation. These maximum reaches express the capability of the model to describe a subset of the evaluation data both in the direction of the previous data (left) and of the following data (right). This capability is evaluated on two levels. First, on the level of individual observations, the combination of a parameter set and an observation is classified as non-acceptable if the deviation between the accompanying model result and the measurement exceeds observational uncertainty. Second, the behavior in a sequence of observations is evaluated by means of a tolerance degree. This tolerance degree expresses the condition for satisfactory model behavior in a data series and is defined by the percentage of observations within this series that may have non-acceptable model results. Based on both criteria, the maximum left and right reaches of a model in an observation represent the data points, in the directions of the previous and the following observations respectively, beyond which no sampled parameter set is both satisfactory and results in an acceptable deviation. After assessing these reaches for a variety of tolerance degrees, results can be plotted in a combined BReach plot that shows temporal changes in the behavior of model results.
    The methodology is applied to a Probability Distributed Model (PDM) of the river
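The reach computation described above can be sketched as follows (this is an assumed reading of the method, not the authors' code): for each observation and each sampled parameter set, a window is extended leftward and rightward while the fraction of non-acceptable residuals stays within the tolerance degree, and the reach is the maximum extension over all parameter sets.

```python
import numpy as np

def reaches(acceptable, j, tol):
    """acceptable: bool array (n_param_sets, n_obs), True where the
    model-measurement deviation is within observational uncertainty.
    Returns the maximum left and right reach (in observations) at
    observation j, over all sampled parameter sets."""
    n_sets, n_obs = acceptable.shape
    left = right = 0
    for p in range(n_sets):
        if not acceptable[p, j]:
            continue  # the parameter set must be acceptable at j itself
        # extend leftward while the non-acceptable fraction stays <= tol
        bad, k = 0, j
        while k > 0:
            bad += not acceptable[p, k - 1]
            if bad / (j - k + 2) > tol:
                break
            k -= 1
        left = max(left, j - k)
        # extend rightward symmetrically
        bad, k = 0, j
        while k < n_obs - 1:
            bad += not acceptable[p, k + 1]
            if bad / (k + 2 - j) > tol:
                break
            k += 1
        right = max(right, k - j)
    return left, right

acc = np.array([[True, True, False, True, True, True]])
print(reaches(acc, 3, tol=0.0))   # (0, 2): left blocked by the failure
```

Repeating this for every observation and several tolerance degrees yields the data behind a BReach plot.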

  17. Evaluating the accuracy of diffusion MRI models in white matter.

    PubMed

    Rokem, Ariel; Yeatman, Jason D; Pestilli, Franco; Kay, Kendrick N; Mezer, Aviv; van der Walt, Stefan; Wandell, Brian A

    2015-01-01

    Models of diffusion MRI within a voxel are useful for making inferences about the properties of the tissue and inferring fiber orientation distribution used by tractography algorithms. A useful model must fit the data accurately. However, evaluations of model-accuracy of commonly used models have not been published before. Here, we evaluate model-accuracy of the two main classes of diffusion MRI models. The diffusion tensor model (DTM) summarizes diffusion as a 3-dimensional Gaussian distribution. Sparse fascicle models (SFM) summarize the signal as a sum of signals originating from a collection of fascicles oriented in different directions. We use cross-validation to assess model-accuracy at different gradient amplitudes (b-values) throughout the white matter. Specifically, we fit each model to all the white matter voxels in one data set and then use the model to predict a second, independent data set. This is the first evaluation of model-accuracy of these models. In most of the white matter the DTM predicts the data more accurately than test-retest reliability; SFM model-accuracy is higher than test-retest reliability and also higher than the DTM model-accuracy, particularly for measurements with (a) a b-value above 1000 in locations containing fiber crossings, and (b) in the regions of the brain surrounding the optic radiations. The SFM also has better parameter-validity: it more accurately estimates the fiber orientation distribution function (fODF) in each voxel, which is useful for fiber tracking.
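The accuracy test used in the paper, fit on one dataset and predict an independent repeat, then compare prediction error against test-retest reliability, can be sketched with synthetic data (a polynomial stands in for the DTM/SFM; nothing below is dMRI-specific):

```python
import numpy as np

rng = np.random.default_rng(0)

x = np.linspace(0, 1, 60)
truth = np.sin(2 * np.pi * x)                # underlying signal
meas1 = truth + rng.normal(0, 0.1, x.size)   # "dataset 1"
meas2 = truth + rng.normal(0, 0.1, x.size)   # independent repeat

coef = np.polyfit(x, meas1, deg=7)           # fit the model to dataset 1
pred = np.polyval(coef, x)                   # predict dataset 2

rmse_model = np.sqrt(np.mean((pred - meas2) ** 2))
rmse_retest = np.sqrt(np.mean((meas1 - meas2) ** 2))
# A model that generalizes beats test-retest, because fitting averages
# out measurement noise; this is the benchmark the paper applies.
print(rmse_model, rmse_retest)
```

In the paper the same comparison is carried out voxel-by-voxel across the white matter, with the DTM and SFM in place of the polynomial.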

  18. Evaluation of regional-scale receptor modeling.

    PubMed

    Lowenthal, Douglas H; Watson, John G; Koracin, Darko; Chen, L W Antony; Dubois, David; Vellore, Ramesh; Kumar, Naresh; Knipping, Eladio M; Wheeler, Neil; Craig, Kenneth; Reid, Stephen

    2010-01-01

    The ability of receptor models to estimate regional contributions to fine particulate matter (PM2.5) was assessed with synthetic, speciated datasets at Brigantine National Wildlife Refuge (BRIG) in New Jersey and Great Smoky Mountains National Park (GRSM) in Tennessee. Synthetic PM2.5 chemical concentrations were generated for the summer of 2002 using the Community Multiscale Air Quality (CMAQ) model and chemically speciated PM2.5 source profiles from the U.S. Environmental Protection Agency (EPA)'s SPECIATE and Desert Research Institute's source profile databases. CMAQ estimated the "true" contributions of seven regions in the eastern United States to chemical species concentrations and individual source contributions to primary PM2.5 at both sites. A seven-factor solution by the positive matrix factorization (PMF) receptor model explained approximately 99% of the variability in the data at both sites. At BRIG, PMF captured the first four major contributing sources (including a secondary sulfate factor), although diesel and gasoline vehicle contributions were not separated. However, at GRSM, the resolved factors did not correspond well to major PM2.5 sources. There were no correlations between PMF factors and regional contributions to sulfate at either site. Unmix produced five- and seven-factor solutions, including a secondary sulfate factor, at both sites. Some PMF factors were combined or missing in the Unmix factors. The trajectory mass balance regression (TMBR) model apportioned sulfate concentrations to the seven source regions using Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) trajectories based on Meteorological Model Version 5 (MM5) and Eta Data Simulation System (EDAS) meteorological input. The largest estimated sulfate contributions at both sites were from the local regions; this agreed qualitatively with the true regional apportionments. 
Estimated regional contributions depended on the starting elevation of the trajectories and on

  19. Automated expert modeling for automated student evaluation.

    SciTech Connect

    Abbott, Robert G.

    2006-01-01

    The 8th International Conference on Intelligent Tutoring Systems provides a leading international forum for the dissemination of original results in the design, implementation, and evaluation of intelligent tutoring systems and related areas. The conference draws researchers from a broad spectrum of disciplines ranging from artificial intelligence and cognitive science to pedagogy and educational psychology. The conference explores intelligent tutoring systems' increasing real-world impact on an increasingly global scale. Improved authoring tools and learning-object standards enable fielding systems and curricula in real-world settings on an unprecedented scale. Researchers deploy ITSs in ever larger studies and increasingly use data from real students, tasks, and settings to guide new research. With high volumes of student interaction data, data mining, and machine learning, tutoring systems can learn from experience and improve their teaching performance. The increasing number of realistic evaluation studies also broadens researchers' knowledge about the educational contexts for which ITSs are best suited. At the same time, researchers explore how to expand and improve ITS-student communications, for example, how to achieve more flexible and responsive discourse with students, help students integrate Web resources into learning, use mobile technologies and games to enhance student motivation and learning, and address multicultural perspectives.

  20. Evaluation and development of physically-based embankment breach models

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The CEATI Dam Safety Interest Group (DSIG) working group on embankment erosion and breach modelling has evaluated three physically-based numerical models used to simulate embankment erosion and breach development. The three models identified by the group were considered to be good candidates for fu...

  1. INVERSE MODEL ESTIMATION AND EVALUATION OF SEASONAL NH 3 EMISSIONS

    EPA Science Inventory

    The presentation topic is inverse modeling for estimate and evaluation of emissions. The case study presented is the need for seasonal estimates of NH3 emissions for air quality modeling. The inverse modeling application approach is first described, and then the NH

  2. Class Ranking Models for Deans' Letters and Their Psychometric Evaluation.

    ERIC Educational Resources Information Center

    Blacklow, Robert S.; And Others

    1991-01-01

    A study developed and evaluated five class ranking models for graduating medical students (n=1,283) in which performance data from both basic and clinical sciences could be used to study the models' predictive validity. Two models yielded higher validity; one is recommended for balance of clinical and basic science measures. (MSE)

  3. Teachers' Development Model to Authentic Assessment by Empowerment Evaluation Approach

    ERIC Educational Resources Information Center

    Charoenchai, Charin; Phuseeorn, Songsak; Phengsawat, Waro

    2015-01-01

    The purposes of this study were 1) to study teachers' authentic assessment practices, teachers' comprehension of authentic assessment, and teachers' needs for authentic assessment development; 2) to create a teacher development model; 3) to test the teacher development model; and 4) to evaluate the effectiveness of the teacher development model. The research is divided into 4…

  4. [Applying multilevel models in evaluation of bioequivalence (I)].

    PubMed

    Liu, Qiao-lan; Shen, Zhuo-zhi; Chen, Feng; Li, Xiao-song; Yang, Min

    2009-12-01

    This study aims to explore the applicability of multilevel models for bioequivalence evaluation. Using a real example of a 2 x 4 cross-over experimental design for evaluating the bioequivalence of an antihypertensive drug, this paper explores the complex variance components that correspond to the criterion statistics in the methods recommended by the FDA but are obtained here through multilevel model analysis. Results are compared with those from the FDA's standard Method of Moments, focusing on the feasibility and applicability of multilevel models for directly assessing average bioequivalence (ABE), population bioequivalence (PBE), and individual bioequivalence (IBE). When measuring ln(AUC), all variance components of the test and reference groups, such as total variance (sigma(TT)(2) and sigma(TR)(2)), between-subject variance (sigma(BT)(2) and sigma(BR)(2)), and within-subject variance (sigma(WT)(2) and sigma(WR)(2)), estimated by simple 2-level models are very close to those obtained using the FDA Method of Moments. In practice, bioequivalence evaluation can be carried out directly with multilevel models, or with the FDA criteria based on variance components estimated from multilevel models; both approaches produce consistent results. Multilevel models can thus be used to evaluate bioequivalence in cross-over designs. Compared to the FDA methods, the multilevel approach is more flexible in decomposing total variance into subcomponents in order to evaluate ABE, PBE, and IBE, and it provides a new way to practice bioequivalence evaluation.
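The ABE criterion the abstract refers to is a simple decision rule once the mean difference and its standard error are in hand: the 90% confidence interval for the test-minus-reference difference in ln(AUC) must lie inside the FDA limits [ln 0.8, ln 1.25]. A minimal sketch (illustrative numbers; the variance components would come from the multilevel model or Method of Moments):

```python
import math

def abe(diff_ln, se, t_crit):
    """Average bioequivalence: 90% two-sided CI for the test-minus-
    reference mean difference in ln(AUC) must lie strictly inside
    [ln 0.8, ln 1.25]."""
    lo, hi = diff_ln - t_crit * se, diff_ln + t_crit * se
    return math.log(0.8) < lo and hi < math.log(1.25)

# t_crit is the upper 5% point of the t distribution for the design's
# error degrees of freedom (1.717 for df = 22; numbers are illustrative)
print(abe(diff_ln=0.05, se=0.06, t_crit=1.717))   # True
```

The PBE and IBE criteria are aggregate statistics built from the same variance components, which is why estimating those components well (the point of the multilevel approach) matters.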

  5. Evaluating the double Poisson generalized linear model.

    PubMed

    Zou, Yaotian; Geedipally, Srinivas Reddy; Lord, Dominique

    2013-10-01

    The objectives of this study are to: (1) examine the applicability of the double Poisson (DP) generalized linear model (GLM) for analyzing motor vehicle crash data characterized by over- and under-dispersion and (2) compare the performance of the DP GLM with the Conway-Maxwell-Poisson (COM-Poisson) GLM in terms of goodness-of-fit and theoretical soundness. The DP distribution has seldom been investigated and applied since its first introduction two decades ago. The hurdle for applying the DP is related to its normalizing constant (or multiplicative constant) which is not available in closed form. This study proposed a new method to approximate the normalizing constant of the DP with high accuracy and reliability. The DP GLM and COM-Poisson GLM were developed using two observed over-dispersed datasets and one observed under-dispersed dataset. The modeling results indicate that the DP GLM with its normalizing constant approximated by the new method can handle crash data characterized by over- and under-dispersion. Its performance is comparable to the COM-Poisson GLM in terms of goodness-of-fit (GOF), although COM-Poisson GLM provides a slightly better fit. For the over-dispersed data, the DP GLM performs similar to the NB GLM. Considering the fact that the DP GLM can be easily estimated with inexpensive computation and that it is simpler to interpret coefficients, it offers a flexible and efficient alternative for researchers to model count data.
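The normalizing-constant difficulty the abstract highlights can be made concrete: the double-Poisson pmf (Efron, 1986) is only approximately normalized, so its constant must be approximated. The sketch below normalizes by direct summation over a truncated support; this is one straightforward approximation, and the paper's own method may differ.

```python
import math

def dp_log_unnorm(y, mu, theta):
    """Log of the unnormalized double-Poisson pmf (Efron, 1986);
    theta < 1 gives over-dispersion, theta > 1 under-dispersion."""
    if y == 0:
        return 0.5 * math.log(theta) - theta * mu
    return (0.5 * math.log(theta) - theta * mu
            - y + y * math.log(y) - math.lgamma(y + 1)
            + theta * y * (1.0 + math.log(mu) - math.log(y)))

def dp_pmf(y, mu, theta, y_max=200):
    """Approximate the normalizing constant by summing over 0..y_max."""
    c = sum(math.exp(dp_log_unnorm(k, mu, theta)) for k in range(y_max + 1))
    return math.exp(dp_log_unnorm(y, mu, theta)) / c

# theta = 1 recovers the ordinary Poisson pmf
print(round(dp_pmf(2, mu=3.0, theta=1.0), 4))   # 0.224, the Poisson value
probs = [dp_pmf(y, 3.0, 0.5) for y in range(201)]
print(round(sum(probs), 6))   # 1.0 by construction
```

Working in log space avoids the overflow that a naive evaluation of y**y and y! produces for large counts.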

  6. Case study of an evaluation coaching model: exploring the role of the evaluator.

    PubMed

    Ensminger, David C; Kallemeyn, Leanne M; Rempert, Tania; Wade, James; Polanin, Megan

    2015-04-01

    This study examined the role of the external evaluator as a coach. More specifically, using an evaluative inquiry framework (Preskill & Torres, 1999a; Preskill & Torres, 1999b), it explored the types of coaching that an evaluator employed to promote individual, team and organizational learning. The study demonstrated that evaluation coaching provided a viable means for an organization with a limited budget to conduct evaluations through support of a coach. It also demonstrated how the coaching processes supported the development of evaluation capacity within the organization. By examining coaching models outside of the field of evaluation, this study identified two forms of coaching--results coaching and developmental coaching--that promoted evaluation capacity building and have not been previously discussed in the evaluation literature.

  7. Rule based design of conceptual models for formative evaluation

    NASA Technical Reports Server (NTRS)

    Moore, Loretta A.; Chang, Kai; Hale, Joseph P.; Bester, Terri; Rix, Thomas; Wang, Yaowen

    1994-01-01

    A Human-Computer Interface (HCI) Prototyping Environment with embedded evaluation capability has been investigated. This environment will be valuable in developing and refining HCI standards and evaluating program/project interface development, especially Space Station Freedom on-board displays for payload operations. This environment, which allows for rapid prototyping and evaluation of graphical interfaces, includes the following four components: (1) an HCI development tool; (2) a low fidelity simulator development tool; (3) a dynamic, interactive interface between the HCI and the simulator; and (4) an embedded evaluator that evaluates the adequacy of an HCI based on a user's performance. The embedded evaluation tool collects data while the user is interacting with the system and evaluates the adequacy of an interface based on a user's performance. This paper describes the design of conceptual models for the embedded evaluation system using a rule-based approach.
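The rule-based approach to judging interface adequacy from user performance can be illustrated with a toy evaluator (the rules, metrics, and thresholds below are invented for illustration; they are not the paper's conceptual models):

```python
# Each rule: (performance metric, threshold, verdict when exceeded)
rules = [
    ("mean_task_time_s", 120.0, "inadequate: tasks take too long"),
    ("error_rate", 0.10, "inadequate: too many user errors"),
    ("help_requests", 5, "inadequate: interface not self-explanatory"),
]

def evaluate_interface(performance):
    """Fire every rule whose threshold the observed performance
    exceeds; an interface with no fired rules is judged adequate."""
    findings = [verdict for metric, limit, verdict in rules
                if performance.get(metric, 0) > limit]
    return findings or ["adequate"]

# Data the embedded evaluator might collect during one session
session = {"mean_task_time_s": 95.0, "error_rate": 0.18, "help_requests": 2}
print(evaluate_interface(session))   # ['inadequate: too many user errors']
```

In the described environment, such rules would fire against data collected while the user interacts with the prototyped display.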

  8. Rule based design of conceptual models for formative evaluation

    NASA Technical Reports Server (NTRS)

    Moore, Loretta A.; Chang, Kai; Hale, Joseph P.; Bester, Terri; Rix, Thomas; Wang, Yaowen

    1994-01-01

    A Human-Computer Interface (HCI) Prototyping Environment with embedded evaluation capability has been investigated. This environment will be valuable in developing and refining HCI standards and evaluating program/project interface development, especially Space Station Freedom on-board displays for payload operations. This environment, which allows for rapid prototyping and evaluation of graphical interfaces, includes the following four components: (1) an HCI development tool, (2) a low fidelity simulator development tool, (3) a dynamic, interactive interface between the HCI and the simulator, and (4) an embedded evaluator that evaluates the adequacy of an HCI based on a user's performance. The embedded evaluation tool collects data while the user is interacting with the system and evaluates the adequacy of an interface based on a user's performance. This paper describes the design of conceptual models for the embedded evaluation system using a rule-based approach.

  9. Evaluation of an Infiltration Model with Microchannels

    NASA Astrophysics Data System (ADS)

    Garcia-Serrana, M.; Gulliver, J. S.; Nieber, J. L.

    2015-12-01

    The goal of this research is to develop and demonstrate the means by which roadside drainage ditches and filter strips can be assigned appropriate volume-reduction credits for infiltration. These vegetated surfaces convey stormwater, infiltrate runoff, and filter and/or settle solids, and are often placed along roads and other impermeable surfaces. Infiltration rates are typically calculated by assuming that water flows as sheet flow over the slope. However, at most intensities water flow occurs in narrow, shallow micro-channels and concentrates in depressions. This channelization reduces the fraction of the soil surface covered by water coming from the road. The non-uniform distribution of water along a hillslope directly affects infiltration. First, laboratory and field experiments were conducted to characterize the spatial pattern of flow for stormwater runoff entering the sloped surface of a drainage ditch. In the laboratory experiments, different micro-topographies were tested over bare sandy loam soil: a smooth surface, and three and five parallel rills. All the surfaces experienced erosion; the initially smooth surface developed a system of channels over time that increased runoff generation. On average, the initially smooth surfaces infiltrated 10% more volume than the initially rilled surfaces. The field experiments were performed on the side slopes of established roadside drainage ditches. Three rates of runoff from a road surface into the swale slope were tested, representing runoff from 1-, 2-, and 10-year storm events. The average percentage of input runoff water infiltrated in the 32 experiments was 67%, with a 21% standard deviation. Multiple measurements of saturated hydraulic conductivity were conducted to account for its spatial variability. Second, a rate-based coupled infiltration and overland flow model has been designed that calculates the stormwater infiltration efficiency of swales. The Green-Ampt-Mein-Larson assumptions were
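The abstract's model builds on the Green-Ampt-Mein-Larson framework. As a hedged illustration of the underlying Green-Ampt relation only (not the authors' coupled swale model), the sketch below solves the implicit cumulative-infiltration equation F − S·ln(1 + F/S) = K·t by fixed-point iteration; all parameter values are invented stand-ins.

```python
# Minimal Green-Ampt sketch (not the authors' coupled swale model): solve
# F - S*ln(1 + F/S) = K*t for cumulative infiltration F by fixed-point
# iteration, where S = psi*dtheta. All parameter values are illustrative.

import math

def green_ampt_F(t, K, psi, dtheta, tol=1e-9, max_iter=200):
    """Cumulative infiltration F (cm) after time t (h) under ponded conditions."""
    S = psi * dtheta        # wetting-front suction times moisture deficit (cm)
    F = max(K * t, tol)     # initial guess
    for _ in range(max_iter):
        F_new = K * t + S * math.log(1.0 + F / S)
        if abs(F_new - F) < tol:
            break
        F = F_new
    return F_new

# Illustrative sandy-loam-like values: K = 1.09 cm/h, psi = 11.01 cm, dtheta = 0.3
F1 = green_ampt_F(t=1.0, K=1.09, psi=11.01, dtheta=0.3)
f1 = 1.09 * (1.0 + 11.01 * 0.3 / F1)  # infiltration rate f = K*(1 + S/F)
print(round(F1, 2), round(f1, 2))     # prints: 3.45 2.13
```

The fixed-point map converges because its derivative, S/(S + F), is below one for any positive F.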

  10. Comparisons and Evaluation of Hall Thruster Models

    DTIC Science & Technology

    2007-11-02

    electromagnets and to be unaffected by the discharge, so that it can be treated as input data. Both models calculate ..., to facilitate the solution of the ... large number of neutrals are calculated, where collisions are treated with random numbers. This approach is realistic but takes much computation time ... g(v, θ, φ) ∝ v³ exp(−mv²/2kT) cos θ (8), where v is the speed, θ the angle with the axial direction, and φ an angle in the plane perpendicular to

  11. The CREATIVE Decontamination Performance Evaluation Model

    DTIC Science & Technology

    2008-06-01

    Model to Describe Penetration of Skin by Sorbed Liquids by Contact", CRDEC-CR-87100; 5. Clarke, A., "Spreading and Imbibition of Liquid Drops on ... δy and δz = f·δx; δt ≤ δx² / (2D(2 + 1/f²)); C @ next time step ... Finite Difference Application: Drop ... boundaries: 1) Drop: constant source until drop disappears; 2) Base of substrate considered impenetrable; 3) Sides and top of coupon allow mass to escape
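The snippet quotes an explicit finite-difference stability condition that, as reconstructed from the garbled extraction, reads δt ≤ δx²/(2D(2 + 1/f²)) for a grid with δy = δz = f·δx. The sketch below simply evaluates that bound; the inequality itself is a reconstruction and every numeric value is invented.

```python
# Stability-bound sketch for the explicit finite-difference scheme in the
# snippet, taking the inequality as reconstructed here: with dy = dz = f*dx,
# dt <= dx**2 / (2*D*(2 + 1/f**2)). All values are invented for illustration.

def max_stable_dt(dx, f, D):
    """Largest stable timestep for the quoted explicit diffusion scheme."""
    return dx**2 / (2.0 * D * (2.0 + 1.0 / f**2))

# Hypothetical grid: dx = 0.01 cm, dy = dz = 0.005 cm (f = 0.5), D = 1e-5 cm^2/s
print(max_stable_dt(dx=0.01, f=0.5, D=1e-5))
```

As expected for explicit diffusion schemes, coarsening the transverse spacing (larger f) relaxes the bound.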

  12. A MULTILAYER BIOCHEMICAL DRY DEPOSITION MODEL 2. MODEL EVALUATION

    EPA Science Inventory

    The multilayer biochemical dry deposition model (MLBC) described in the accompanying paper was tested against half-hourly eddy correlation data from six field sites under a wide range of climate conditions with various plant types. Modeled CO2, O3, SO2 ...

  13. Structural equation modeling: building and evaluating causal models: Chapter 8

    USGS Publications Warehouse

    Grace, James B.; Scheiner, Samuel M.; Schoolmaster, Donald R.

    2015-01-01

    Scientists frequently wish to study hypotheses about causal relationships, rather than just statistical associations. This chapter addresses the question of how scientists might approach this ambitious task. Here we describe structural equation modeling (SEM), a general modeling framework for the study of causal hypotheses. Our goals are to (a) concisely describe the methodology, (b) illustrate its utility for investigating ecological systems, and (c) provide guidance for its application. Throughout our presentation, we rely on a study of the effects of human activities on wetland ecosystems to make our description of methodology more tangible. We begin by presenting the fundamental principles of SEM, including both its distinguishing characteristics and the requirements for modeling hypotheses about causal networks. We then illustrate SEM procedures and offer guidelines for conducting SEM analyses. Our focus in this presentation is on basic modeling objectives and core techniques. Pointers to additional modeling options are also given.
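The simplest instance of the causal-network modeling the chapter describes is a path model. As a hedged sketch of the idea only (synthetic data, not the wetland study, and plain OLS rather than full SEM estimation), the example below estimates the two paths of a chain X → M → Y and multiplies them to get the implied indirect effect.

```python
# Hedged sketch of SEM-style path analysis on a causal chain X -> M -> Y:
# each path is estimated by OLS, and the indirect effect of X on Y is the
# product of the path coefficients. Data are synthetic; this is not the
# wetland analysis described in the chapter.

import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)                         # exogenous driver
m = 0.8 * x + rng.normal(scale=0.5, size=n)    # mediator, true path X->M = 0.8
y = 0.5 * m + rng.normal(scale=0.5, size=n)    # outcome, true path M->Y = 0.5

def slope(a, b):
    """OLS slope of b regressed on a (both centered)."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (a @ a))

p_xm = slope(x, m)        # estimated path X -> M
p_my = slope(m, y)        # estimated path M -> Y
indirect = p_xm * p_my    # implied indirect effect of X on Y through M
print(round(p_xm, 2), round(p_my, 2), round(indirect, 2))
```

With enough data the estimates recover the generating coefficients, which is what distinguishes a correctly specified causal model from a mere association.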

  14. [Effect evaluation of three cell culture models].

    PubMed

    Wang, Aiguo; Xia, Tao; Yuan, Jing; Chen, Xuemin

    2003-11-01

    Primary rat hepatocytes were cultured using three kinds of in vitro models, and enzyme leakage, albumin secretion, and cytochrome P450 1A (CYP 1A) activity were observed. The results showed that the level of LDH in the medium decreased over the culture period. However, at day 5, LDH showed a significant increase in monolayer culture (MC), while after 8 days LDH was not detected in sandwich culture (SC). The levels of AST and ALT in the medium did not change significantly over the investigated time. Basal CYP 1A activity gradually decreased with time in MC and SC, and the decline of CYP 1A in rat hepatocytes was faster in MC than in SC. This effect was partially reversed by using cytochrome P450 (CYP450) inducers such as omeprazole and 3-methylcholanthrene (3-MC), and CYP 1A induction was always higher in MC than in SC. Basal CYP 1A activity in the bioreactor was kept for over 2 weeks, and the highest albumin production was observed in the bioreactor, followed by SC and MC. In conclusion, our results clearly indicate that each of these models has advantages and disadvantages and can address different questions in the metabolism of toxicants and drugs.

  15. Industrial Waste Management Evaluation Model Version 3.1

    EPA Pesticide Factsheets

    IWEM is a screening-level ground water model designed to simulate contaminant fate and transport. IWEM v3.1 is the latest version of the IWEM software and includes additional tools to evaluate the beneficial use of industrial materials.

  16. Evaluating Rapid Models for High-Throughput Exposure Forecasting (SOT)

    EPA Science Inventory

    High throughput exposure screening models can provide quantitative predictions for thousands of chemicals; however these predictions must be systematically evaluated for predictive ability. Without the capability to make quantitative, albeit uncertain, forecasts of exposure, the ...

  17. EVALUATION OF MULTIPLE PHARMACOKINETIC MODELING STRUCTURES FOR TRICHLOROETHYLENE

    EPA Science Inventory

    A series of PBPK models were developed for trichloroethylene (TCE) to evaluate biological processes that may affect the absorption, distribution, metabolism and excretion (ADME) of TCE and its metabolites.

  18. The Air Quality Model Evaluation International Initiative (AQMEII)

    EPA Science Inventory

    This presentation provides an overview of the Air Quality Model Evaluation International Initiative (AQMEII). It contains a synopsis of the three phases of AQMEII, including objectives, logistics, and timelines. It also provides a number of examples of analyses conducted through ...

  19. Solid rocket booster performance evaluation model. Volume 4: Program listing

    NASA Technical Reports Server (NTRS)

    1974-01-01

    All subprograms or routines associated with the solid rocket booster performance evaluation model are indexed in this computer listing. An alphanumeric list of each routine in the index is provided in a table of contents.

  20. [Development of human embryonic stem cell model for toxicity evaluation].

    PubMed

    Yu, Guang-yan; Cao, Tong; Ouyang, Hong-wei; Peng, Shuang-qing; Deng, Xu-liang; Li, Sheng-lin; Liu, He; Zou, Xiao-hui; Fu, Xin; Peng, Hui; Wang, Xiao-ying; Zhan, Yuan

    2013-02-18

    The current international standards for toxicity screening of biomedical devices and materials recommend the use of immortalized cell lines because their homogeneous morphology and infinite proliferation provide good reproducibility for in vitro cytotoxicity screening. However, most of the widely used immortalized cell lines are derived from animals and may not be representative of normal human cell behavior in vivo, in particular in terms of the cytotoxic and genotoxic response. Therefore, it is vital to develop a model for toxicity evaluation. In our studies, two Chinese human embryonic stem cell (hESC) lines were established as toxicity models, and hESC-derived tissue/organ cell models for tissue/organ-specific toxicity evaluation were developed. The efficiency and accuracy of using hESC models for cytotoxicity, embryotoxicity and genotoxicity evaluation were confirmed. The results indicated that hESCs may be good tools for toxicity testing and biosafety evaluation in vitro.

  1. Incorporating principal component analysis into air quality model evaluation

    EPA Science Inventory

    The efficacy of standard air quality model evaluation techniques is becoming compromised as the simulation periods continue to lengthen in response to ever increasing computing capacity. Accordingly, the purpose of this paper is to demonstrate a statistical approach called Princi...

  2. Faculty performance evaluation: the CIPP-SAPS model.

    PubMed

    Mitcham, M

    1981-11-01

    The issues of faculty performance evaluation for allied health professionals are addressed. Daniel Stufflebeam's CIPP (context-input-process-product) model is introduced and its development into a CIPP-SAPS (self-administrative-peer-student) model is pursued. Data sources for the SAPS portion of the model are discussed. A suggestion for the use of the CIPP-SAPS model within a teaching contract plan is explored.

  3. The Implementation of a District-Wide Evaluation Model.

    ERIC Educational Resources Information Center

    Gess, Diane; And Others

    This publication describes a practicum project that developed a comprehensive educational evaluation system for collecting, storing, and displaying pertinent data for use in planning educational programs at both the district and school level in the City School District of New Rochelle. The resulting New Rochelle Evaluation Model was developed from…

  4. Interrater Agreement Evaluation: A Latent Variable Modeling Approach

    ERIC Educational Resources Information Center

    Raykov, Tenko; Dimitrov, Dimiter M.; von Eye, Alexander; Marcoulides, George A.

    2013-01-01

    A latent variable modeling method for evaluation of interrater agreement is outlined. The procedure is useful for point and interval estimation of the degree of agreement among a given set of judges evaluating a group of targets. In addition, the approach allows one to test for identity in underlying thresholds across raters as well as to identify…

  5. An Evaluation of Cluster Analytic Approaches to Initial Model Specification.

    ERIC Educational Resources Information Center

    Bacon, Donald R.

    2001-01-01

    Evaluated the performance of several alternative cluster analytic approaches to initial model specification using population parameter analyses and a Monte Carlo simulation. Of the six cluster approaches evaluated, the one using the correlations of item correlations as a proximity metric and average linking as a clustering algorithm performed the…

  6. Information and complexity measures for hydrologic model evaluation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Hydrological models are commonly evaluated through the residual-based performance measures such as the root-mean square error or efficiency criteria. Such measures, however, do not evaluate the degree of similarity of patterns in simulated and measured time series. The objective of this study was to...

  7. An Emerging Model for Student Feedback: Electronic Distributed Evaluation

    ERIC Educational Resources Information Center

    Brunk-Chavez, Beth; Arrigucci, Annette

    2012-01-01

    In this article we address several issues and challenges that the evaluation of writing presents individual instructors and composition programs as a whole. We present electronic distributed evaluation, or EDE, as an emerging model for feedback on student writing and describe how it was integrated into our program's course redesign. Because the…

  8. Regime-based evaluation of cloudiness in CMIP5 models

    NASA Astrophysics Data System (ADS)

    Jin, Daeho; Oreopoulos, Lazaros; Lee, Dongmin

    2017-01-01

    The concept of cloud regimes (CRs) is used to develop a framework for evaluating the cloudiness of 12 models from phase 5 of the Coupled Model Intercomparison Project (CMIP5). Reference CRs come from existing global International Satellite Cloud Climatology Project (ISCCP) weather states. The evaluation is made possible by the implementation in several CMIP5 models of the ISCCP simulator, which generates in each grid cell daily joint histograms of cloud optical thickness and cloud-top pressure. Model performance is assessed with several metrics, such as CR global cloud fraction (CF), CR relative frequency of occurrence (RFO), their product [long-term average total cloud amount (TCA)], cross-correlations of CR RFO maps, and a metric of resemblance between model and ISCCP CRs. In terms of CR global RFO, arguably the most fundamental metric, the models perform unsatisfactorily overall, except for CRs representing thick storm clouds. Because model CR CF is internally constrained by our method, RFO discrepancies also yield substantial TCA errors. Our results support previous findings that CMIP5 models underestimate cloudiness. The multi-model mean performs well in matching observed RFO maps for many CRs, but is still not the best for this or other metrics. When overall performance across all CRs is assessed, some models, despite shortcomings, apparently outperform Moderate Resolution Imaging Spectroradiometer cloud observations evaluated against ISCCP as if they were another model's output. Lastly, contrasting cloud-simulation performance against each model's equilibrium climate sensitivity, in order to gain insight on whether good cloud simulation pairs with particular values of this parameter, yields no clear conclusions.
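The regime metrics named in the abstract compose in a simple way: per-regime RFO times per-regime CF, summed over regimes, recovers the long-term average total cloud amount. A hedged sketch with synthetic daily labels (not ISCCP weather states):

```python
# Hedged sketch of the regime-based metrics: relative frequency of occurrence
# (RFO) per cloud regime, mean cloud fraction (CF) within each regime, and
# their product summed over regimes, which recovers the long-term average
# total cloud amount (TCA). Labels and fractions below are synthetic.

import numpy as np

rng = np.random.default_rng(1)
n_days, n_regimes = 1000, 4
regime = rng.integers(0, n_regimes, size=n_days)   # daily CR label
cf_day = rng.uniform(0.2, 0.9, size=n_days)        # daily total cloud fraction

rfo = np.bincount(regime, minlength=n_regimes) / n_days                # regime RFO
cf = np.array([cf_day[regime == k].mean() for k in range(n_regimes)])  # regime CF
tca = float(np.sum(rfo * cf))   # long-term average total cloud amount

# Sanity check: summing RFO*CF over regimes recovers the plain daily mean CF.
assert abs(tca - cf_day.mean()) < 1e-9
print(round(tca, 3))
```

This identity is why, as the abstract notes, RFO errors propagate directly into TCA errors once CF is constrained.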

  9. The Dynamic Integrated Evaluation Model (DIEM): Achieving Sustainability in Organizational Intervention through a Participatory Evaluation Approach.

    PubMed

    von Thiele Schwarz, Ulrica; Lundmark, Robert; Hasson, Henna

    2016-10-01

    Recently, there have been calls to develop ways of using a participatory approach when conducting interventions, including evaluating the process and context to improve and adapt the intervention as it evolves over time. The need to integrate interventions into daily organizational practices, thereby increasing the likelihood of successful implementation and sustainable changes, has also been highlighted. We propose an evaluation model-the Dynamic Integrated Evaluation Model (DIEM)-that takes this into consideration. In the model, evaluation is fitted into a co-created iterative intervention process, in which the intervention activities can be continuously adapted based on collected data. By explicitly integrating process and context factors, DIEM also considers the dynamic sustainability of the intervention over time. It emphasizes the practical value of these evaluations for organizations, as well as the importance of their rigorousness for research purposes. Copyright © 2016 John Wiley & Sons, Ltd.

  10. An evaluation of recent internal field models. [of earth magnetism

    NASA Technical Reports Server (NTRS)

    Mead, G. D.

    1979-01-01

    The paper reviews the current status of internal field models and evaluates several recently published models by comparing their predictions with annual means of the magnetic field measured at 140 magnetic observatories from 1973 to 1977. Three of the four models studied, viz. AWC/75, IGS/75, and Pogo 8/71, were nearly equal in their ability to predict the magnitude and direction of the current field. The fourth model, IGRF 1975, was significantly poorer in its ability to predict the current field. All models seemed to be able to extrapolate predictions quite well several years outside the data range used to construct the models.

  11. Statistical evaluation and choice of soil water retention models

    NASA Astrophysics Data System (ADS)

    Lennartz, Franz; Müller, Hans-Otfried; Nollau, Volker; Schmitz, Gerd H.; El-Shehawy, Shaban A.

    2008-12-01

    This paper presents the results of statistical investigations for the evaluation of soil water retention models (SWRMs). We employed three different methods developed for model selection in the field of nonlinear regression, namely, simulation studies, analysis of nonlinearity measures, and resampling strategies such as cross validation and bootstrap methods. Using these methods together with small data sets, we evaluated the performance of three exemplarily chosen types of SWRMs with respect to their parameter properties and the reliability of model predictions. The resulting rankings of models show that the favorable models are characterized by few parameters with an almost linear estimation behavior and close to symmetric distributions. To further demonstrate the potential of the statistical methods in the field of model selection, a modification of the four-parameter van Genuchten model is proposed which shows significantly improved and robust statistical properties.
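One of the resampling strategies the abstract names, cross validation, can be shown in miniature. The sketch below uses leave-one-out cross-validation to choose between two candidate retention forms on a small synthetic data set; the candidates and data are invented stand-ins, not the SWRMs evaluated in the paper.

```python
# Hedged sketch of resampling-based model choice: leave-one-out cross-
# validation (LOOCV) comparing two candidate soil-water-retention forms on
# the same small data set. Candidates and data are synthetic stand-ins.

import numpy as np

rng = np.random.default_rng(2)
h = np.linspace(1.0, 150.0, 15)     # suction head (cm), small data set
theta = 0.45 - 0.06 * np.log(h) + rng.normal(scale=0.005, size=h.size)

def loocv_rmse(x, y):
    """Leave-one-out RMSE of a straight-line fit y ~ x."""
    errs = []
    for i in range(len(x)):
        mask = np.arange(len(x)) != i
        b, a = np.polyfit(x[mask], y[mask], 1)   # slope, intercept
        errs.append(y[i] - (a + b * x[i]))
    return float(np.sqrt(np.mean(np.square(errs))))

rmse_log = loocv_rmse(np.log(h), theta)  # candidate 1: theta linear in ln(h)
rmse_lin = loocv_rmse(h, theta)          # candidate 2: theta linear in h
print(rmse_log < rmse_lin)               # prints: True
```

Because the data were generated from the log-linear form, LOOCV prefers it; on real data the same machinery ranks competing SWRMs by out-of-sample prediction error rather than in-sample fit.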

  12. An Evaluation of Unsaturated Flow Models in an Arid Climate

    SciTech Connect

    Dixon, J.

    1999-12-01

    The objective of this study was to evaluate the effectiveness of two unsaturated flow models in arid regions. The area selected for the study was the Area 5 Radioactive Waste Management Site (RWMS) at the Nevada Test Site in Nye County, Nevada. The two models selected for this evaluation were HYDRUS-1D [Simunek et al., 1998] and the SHAW model [Flerchinger and Saxton, 1989]. Approximately 5 years of soil-water and atmospheric data collected from an instrumented weighing lysimeter site near the RWMS were used for building the models with actual initial and boundary conditions representative of the site. Physical processes affecting the site and model performance were explored. Model performance was based on a detailed sensitivity analysis and ultimately on storage comparisons. During the process of developing descriptive model input, procedures for converting hydraulic parameters for each model were explored. In addition, the compilation of atmospheric data collected at the site became a useful tool for developing predictive functions for future studies. The final model results were used to evaluate the capacities of the HYDRUS and SHAW models for predicting soil-moisture movement and variable surface phenomena for bare soil conditions in the arid vadose zone. The development of calibrated models along with the atmospheric and soil data collected at the site provide useful information for predicting future site performance at the RWMS.

  13. Evaluation of performance of predictive models for deoxynivalenol in wheat.

    PubMed

    van der Fels-Klerx, H J

    2014-02-01

    The aim of this study was to evaluate the performance of two predictive models for deoxynivalenol contamination of wheat at harvest in the Netherlands, including the use of weather forecast data and external model validation. Data were collected in a different year and from different wheat fields than the data used for model development. The two models were run for six preset scenarios, varying in the period for which weather forecast data were used, from zero days (historical data only) to a 13-day period around wheat flowering. Model predictions using forecast weather data were compared to those using historical data. Furthermore, model predictions using historical weather data were evaluated against observed deoxynivalenol contamination of the wheat fields. Results showed that the use of weather forecast data rather than observed data only slightly influenced model predictions. The percent of correct model predictions, given a threshold of 1,250 μg/kg (the legal limit in the European Union), was about 95% for both models. However, only three samples had a deoxynivalenol concentration above this threshold, and the models were not able to predict these samples correctly. It was concluded that two-week weather forecast data can reliably be used in descriptive models for deoxynivalenol contamination of wheat, resulting in more timely model predictions. The two models are able to predict lower deoxynivalenol contamination correctly, but model performance in situations with high deoxynivalenol contamination needs further validation. This will require years with environmental conditions conducive to deoxynivalenol contamination of wheat.
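The "percent of correct predictions given a threshold" scoring used above is straightforward to compute. A hedged sketch with invented numbers (not the study's data):

```python
# Hedged sketch of the scoring step described in the abstract: classify
# predicted and observed deoxynivalenol levels against the 1,250 ug/kg EU
# limit and report the share of samples where the model lands on the correct
# side. All numbers below are invented for illustration.

THRESHOLD = 1250.0  # ug/kg, the EU legal limit cited in the abstract

def percent_correct(predicted, observed, limit=THRESHOLD):
    """Share (%) of samples classified on the correct side of the limit."""
    hits = sum((p > limit) == (o > limit) for p, o in zip(predicted, observed))
    return 100.0 * hits / len(observed)

pred = [300, 800, 1100, 1500, 200, 900]   # hypothetical model predictions
obs  = [250, 700, 1400, 1600, 180, 850]   # hypothetical field measurements
print(round(percent_correct(pred, obs), 1))  # prints: 83.3 (one exceedance missed)
```

The abstract's caveat is visible even in this toy: with few true exceedances, a high overall percent-correct can coexist with every exceedance being missed.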

  14. Putting Theory-Oriented Evaluation into Practice: A Logic Model Approach for Evaluating SIMGAME

    ERIC Educational Resources Information Center

    Hense, Jan; Kriz, Willy Christian; Wolfe, Joseph

    2009-01-01

    Evaluations of gaming simulations and business games as teaching devices are typically end-state driven. This emphasis fails to detect how the simulation being evaluated does or does not bring about its desired consequences. This paper advances the use of a logic model approach, which possesses a holistic perspective that aims at including all…

  15. Using a model to evaluate nursing education and professional practise.

    PubMed

    Kapborg, Inez; Fischbein, Siv

    2002-01-01

    The concept of evaluation is becoming increasingly ambiguous, and many processes may be called evaluation without any clear definition. A theoretical frame of reference may function as a compass in an evaluation context when collecting, analysing and interpreting data as well as drawing conclusions. The purpose of the present study was to present and discuss the applicability of an educational interaction model for the evaluation of nursing education programs and the professional competence of nurses. The model combines different dimensions of the educational process, using both a student and an educational perspective. It is not uncommon for evaluations to concentrate on one dimension only, which tends to give an insufficient picture of the process of interaction. Examples are provided from nursing students' education and professional practice to show that the relationship between students' abilities and educational factors, in the form of intentional goals and educational frameworks, has an influence on educational outcomes.

  16. Faculty Performance Evaluation: The CIPP-SAPS Model.

    ERIC Educational Resources Information Center

    Mitcham, Maralynne

    1981-01-01

    The issues of faculty performance evaluation for allied health professionals are addressed. Daniel Stufflebeam's CIPP (context-input-process-product) model is introduced and its development into a CIPP-SAPS (self-administrative-peer-student) model is pursued. (Author/CT)

  17. A Model Vocational Evaluation Center in a Public School System.

    ERIC Educational Resources Information Center

    Quinones, Wm. A.

    A model public school vocational evaluation center for handicapped students is described. The model's battery of work samples and tests of vocational aptitudes, personal and social adjustment, physical capacities, and work habits are listed. In addition, observation of such work behaviors as remembering instructions, correcting errors, reacting to…

  18. [Application of multilevel models in the evaluation of bioequivalence (II).].

    PubMed

    Liu, Qiao-lan; Shen, Zhuo-zhi; Li, Xiao-song; Chen, Feng; Yang, Min

    2010-03-01

    The main purpose of this paper is to explore the applicability of multivariate multilevel models for bioequivalence evaluation. Using an example of a 4 x 4 cross-over test design in evaluating the bioequivalence of homemade and imported rosiglitazone maleate tablets, this paper illustrates the multivariate-model-based method for partitioning the total variances of ln(AUC) and ln(C(max)) in the framework of multilevel models. It examines the feasibility of multivariate multilevel models in directly evaluating average bioequivalence (ABE), population bioequivalence (PBE) and individual bioequivalence (IBE). Taking into account the correlation between ln(AUC) and ln(C(max)) of rosiglitazone maleate tablets, the proposed models suggested no statistical difference between the two effect measures in ABE via joint tests, whereas a contradictory conclusion was derived from univariate multilevel models. Furthermore, the PBE and IBE for both ln(AUC) and ln(C(max)) of the two types of tablets were assessed with no statistical difference based on estimates of variance components from the proposed models. Multivariate multilevel models can be used to analyze the bioequivalence of multiple effect measures simultaneously, providing a new statistical approach to evaluating bioequivalence.
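The multilevel machinery above generalizes the conventional univariate ABE check. As a hedged sketch of that conventional baseline only (not the authors' multivariate multilevel method, and with synthetic data), the example computes a 90% confidence interval for the test/reference geometric-mean ratio on the log scale and compares it to the standard 0.80-1.25 window.

```python
# Hedged sketch of the standard univariate average-bioequivalence (ABE) check
# that the paper's multilevel models generalize: a 90% CI for the test/
# reference geometric-mean ratio of AUC from paired subject data on the log
# scale. Data are synthetic; this is not the authors' multivariate method.

import math
import statistics

# ln(AUC) for test (T) and reference (R) in the same 12 subjects (synthetic)
ln_auc_T = [5.01, 4.88, 5.10, 4.95, 5.20, 4.99, 5.05, 4.90, 5.12, 5.00, 4.97, 5.08]
ln_auc_R = [4.98, 4.91, 5.05, 4.97, 5.15, 5.02, 5.00, 4.93, 5.08, 5.03, 4.95, 5.04]

d = [t - r for t, r in zip(ln_auc_T, ln_auc_R)]   # within-subject log differences
n = len(d)
mean_d = statistics.mean(d)
se = statistics.stdev(d) / math.sqrt(n)
t90 = 1.796   # t quantile for df = 11, one-sided 0.95 (i.e., 90% two-sided CI)

lo, hi = math.exp(mean_d - t90 * se), math.exp(mean_d + t90 * se)
bioequivalent = 0.80 <= lo and hi <= 1.25   # conventional ABE acceptance window
print(round(lo, 3), round(hi, 3), bioequivalent)
```

Working on the log scale is what makes the interval a statement about the ratio of geometric means; the multilevel approach in the paper extends this to AUC and C(max) jointly.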

  19. NEW CATEGORICAL METRICS FOR AIR QUALITY MODEL EVALUATION

    EPA Science Inventory

    Traditional categorical metrics used in model evaluations are "clear-cut" measures in that the model's ability to predict an exceedance is defined by a fixed threshold concentration and the metrics are defined by observation-forecast sets that are paired both in space and time. T...

  20. Solid rocket booster performance evaluation model. Volume 2: Users manual

    NASA Technical Reports Server (NTRS)

    1974-01-01

    This users manual for the solid rocket booster performance evaluation model (SRB-II) contains descriptions of the model, the program options, the required program inputs, the program output format and the program error messages. SRB-II is written in FORTRAN and is operational on both the IBM 370/155 and the MSFC UNIVAC 1108 computers.

  1. Logic Models: Evaluating Education Doctorates in Educational Administration

    ERIC Educational Resources Information Center

    Creighton, Theodore

    2008-01-01

    The author suggests the Logic Model, used especially in the Health Science field, as a model for evaluating the quality of the educational doctorate (i.e., EdD). The manuscript highlights the newly developed EdD program at Virginia Tech.

  2. An Evaluation Model for Competency Based Teacher Preparatory Programs.

    ERIC Educational Resources Information Center

    Denton, Jon J.

    This discussion describes an evaluation model designed to complement a curriculum development project, the primary goal of which is to structure a performance based program for preservice teachers. Data collected from the implementation of this four-phase model can be used to make decisions for developing and changing performance objectives and…

  3. The Impact of Spatial Correlation and Incommensurability on Model Evaluation

    EPA Science Inventory

    Standard evaluations of air quality models rely heavily on a direct comparison of monitoring data matched with the model output for the grid cell containing the monitor’s location. While such techniques may be adequate for some applications, conclusions are limited by such facto...

  4. A Model for Integrating Program Development and Evaluation.

    ERIC Educational Resources Information Center

    Brown, J. Lynne; Kiernan, Nancy Ellen

    1998-01-01

    A communication model consisting of input from the target audience, program delivery, and outcomes (receivers' perception of the message) was applied to an osteoporosis-prevention program for working mothers ages 21 to 45. Due to a poor completion rate on evaluation instruments and participants' failure to learn key concepts, the model was used to improve…

  5. Evaluation of Computational Codes for Underwater Hull Analysis Model Applications

    DTIC Science & Technology

    2014-02-05

    Elsyca CP Master was selected as the best basis for the Underwater Hull Analysis Model; however, additional work performed with COMSOL Multiphysics ... since the selection indicates that COMSOL should be re-evaluated if the Underwater Hull Analysis Model program is renewed at some future date.

  6. Evaluation of subgrid dispersion models for LES of spray flames

    NASA Astrophysics Data System (ADS)

    Wang, Qing; Zhao, Xinyu; Esclapez, Lucas; Govindaraju, Pavan; Ihme, Matthias

    2016-11-01

    Turbulent dispersion models for particle-laden turbulent flows have been studied extensively over the past few decades, and different modeling approaches have been proposed and tested. However, the significance of the subgrid dispersion model and its influence on flame dynamics in spray combustion have not been examined. To evaluate the performance of dispersion models for spray combustion, direct numerical simulations (DNS) of three-dimensional counterflow spray flames are studied. The DNS configuration features a series of droplet sizes to study the effects of different Stokes numbers. An a priori comparison of the statistics generated from three subgrid dispersion models is made for both non-reacting and reacting conditions. The stochastic model and the regularized deconvolution model show better agreement with DNS than a closure-free model. The effect of filter size in relation to droplet size is investigated for all models. Subsequently, a posteriori modeling of the same configuration at different resolutions is performed to compare these models in the presence of other subgrid models. Finally, models for the subgrid closure of scalar transport for multiphase droplet combustion are proposed and evaluated.

  7. Groundwater modeling in RCRA assessment, corrective action design and evaluation

    SciTech Connect

    Rybak, I.; Henley, W.

    1995-12-31

    Groundwater modeling was conducted to design, implement, modify, and terminate corrective action at several RCRA sites in EPA Region 4. Groundwater flow, contaminant transport and unsaturated zone air flow models were used depending on the complexity of the site and the corrective action objectives. Software used included Modflow, Modpath, Quickflow, Bioplume 2, and AIR3D. Site assessment data, such as aquifer properties, site description, and surface water characteristics for each facility were used in constructing the models and designing the remedial systems. Modeling, in turn, specified additional site assessment data requirements for the remedial system design. The specific purpose of computer modeling is discussed with several case studies. These consist, among others, of the following: evaluation of the mechanism of the aquifer system and selection of a cost effective remedial option, evaluation of the capture zone of a pumping system, prediction of the system performance for different and difficult hydrogeologic settings, evaluation of the system performance, and trouble-shooting for the remedial system operation. Modeling is presented as a useful tool for corrective action system design, performance, evaluation, and trouble-shooting. The case studies exemplified the integration of diverse data sources, understanding the mechanism of the aquifer system, and evaluation of the performance of alternative remediation systems in a cost-effective manner. Pollutants of concern include metals and PAHs.

  8. EPA (Environmental Protection Agency) oxidant model: description and evaluation plan

    SciTech Connect

    Schere, K.L.; Fabrick, A.J.

    1985-09-01

    The U.S. EPA Regional Oxidant Model (ROM) and NEROS data base are described. The model incorporates a comprehensive description of the physical and chemical processes thought to be important to tropospheric O3 production on 1000-km scales. The data base employed for the first application of the ROM was collected during the summers of 1979 and 1980 in the Northeast U.S. It contains meteorological and air-quality data from regular monitoring networks and from enhanced networks or special field-project measurements during that period. The evaluation procedure that will be used to determine the ROM performance on this data base is outlined. A number of episodes will be simulated from the period July 23 through August 16, 1980, for which performance statistics will be developed. The evaluation of any given day within an episode will proceed in two distinct stages. The first stage will focus on model performance for an individual model realization, irrespective of all other realizations. Model realizations for a given day are functions of the possible flow fields that existed for the day. The second stage will attempt to evaluate model performance using the full probabilistic abilities of the ROM that consider all realizations concurrently. The focus of the evaluation will be on O3. The exact pathway through the evaluation study will be determined by the resources available at the time.

  9. Effects of distributed database modeling on evaluation of transaction rollbacks

    NASA Technical Reports Server (NTRS)

    Mukkamala, Ravi

    1991-01-01

    Data distribution, degree of data replication, and transaction access patterns are key factors in determining the performance of distributed database systems. In order to simplify the evaluation of performance measures, database designers and researchers tend to make simplistic assumptions about the system. The effect of modeling assumptions on the evaluation of one such measure, the number of transaction rollbacks in a partitioned distributed database system, is studied. Six probabilistic models are developed, and expressions for the number of rollbacks are derived under each of these models. Essentially, the models differ in terms of the available system information. The analytical results so obtained are compared to results from simulation. It is concluded that most of the probabilistic models yield overly conservative estimates of the number of rollbacks. The effect of transaction commutativity on system throughput is also grossly underestimated when such models are employed.
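    Under the simplest assumption set of the kind the abstract describes — a transaction rolls back whenever any of the k sites it accesses lies in an unreachable partition, with each site independently reachable with probability p — the expected number of rollbacks has a closed form. A minimal sketch (all names and parameter values are hypothetical, not taken from the paper):

```python
import random

def expected_rollbacks(n_txns, k, p_avail):
    """Closed form: a transaction rolls back if any of its k sites
    is unreachable; each site is reachable with probability p_avail."""
    return n_txns * (1.0 - p_avail ** k)

def simulate_rollbacks(n_txns, k, p_avail, seed=0):
    """Monte Carlo counterpart of the closed form above."""
    rng = random.Random(seed)
    return sum(
        1 for _ in range(n_txns)
        if any(rng.random() > p_avail for _ in range(k))
    )
```

    Richer models of the kind the paper compares would condition on partition structure and access patterns rather than assuming site independence.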

  10. A Qualitative and Quantitative Evaluation of 8 Clear Sky Models.

    PubMed

    Bruneton, Eric

    2016-10-27

    We provide a qualitative and quantitative evaluation of 8 clear sky models used in Computer Graphics. We compare the models with each other as well as with measurements and with a reference model from the physics community. After a short summary of the physics of the problem, we present the measurements and the reference model, and how we "invert" it to get the model parameters. We then give an overview of each CG model, and detail its scope, its algorithmic complexity, and its results using the same parameters as in the reference model. We also compare the models with a perceptual study. Our quantitative results confirm that the fewer the simplifications and approximations used to solve the physical equations, the more accurate the results. We conclude with a discussion of the advantages and drawbacks of each model, and how to further improve their accuracy.

  11. Evaluation of the suicide prevention program in Kaohsiung City, Taiwan, using the CIPP evaluation model.

    PubMed

    Ho, Wen-Wei; Chen, Wei-Jen; Ho, Chi-Kung; Lee, Ming-Been; Chen, Cheng-Chung; Chou, Frank Huang-Chih

    2011-10-01

    The purpose of this study is to evaluate the effectiveness of the Kaohsiung Suicide Prevention Center (KSPC) of Kaohsiung City, Taiwan, during the period from June 2005 to June 2008. We used a modified CIPP evaluation model to evaluate the suicide prevention program in Kaohsiung. Four evaluation models were applied to evaluate the KSPC: a context evaluation of the background and origin of the center, an input evaluation of the resources of the center, a process evaluation of the activities of the suicide prevention project, and a product evaluation of the ascertainment of project objectives. The context evaluation revealed that the task of the KSPC is to lower mortality. The input evaluation assessed the efficiency of manpower and the grants supported by Taiwan's Department of Health and Kaohsiung City government's Bureau of Health. In the process evaluation, we inspected the suicide prevention strategies of the KSPC, which are a modified version of the National Suicide Prevention Strategy of Australia. In the product evaluation, four major objectives were evaluated: (1) the suicide rate in Kaohsiung, (2) the reported suicidal cases, (3) crisis line calls, and (4) telephone counseling. From 2005 to 2008, the number of telephone counseling sessions (1,432, 2,010, 7,051, 12,517) and crisis line calls (0, 4,320, 10,339, 14,502) increased. Because of the increase in reported suicidal cases (1,328, 2,625, 2,795, and 2,989, respectively), cases which were underreported in the past, we have increasingly been able to contact the people who need help. During this same time period, the half-year suicide re-attempt rate decreased significantly for those who received services, and the completed suicide rate (21.4, 20.1, 18.2, and 17.8 per 100,000 population, respectively) also decreased. The suicide prevention program in Kaohsiung is worth implementing on a continual basis if financial constraints are addressed.

  12. Evaluation of dense-gas simulation models. Final report

    SciTech Connect

    Zapert, J.G.; Londergan, R.J.; Thistle, H.

    1991-05-01

    The report describes the approach and presents the results of an evaluation study of seven dense gas simulation models using data from three experimental programs. The models evaluated are two in the public domain (DEGADIS and SLAB) and five that are proprietary (AIRTOX, CHARM, FOCUS, SAFEMODE, and TRACE). The data bases used in the evaluation are the Desert Tortoise Pressurized Ammonia Releases, Burro Liquefied Natural Gas Spill Tests and the Goldfish Anhydrous Hydrofluoric Acid Spill Experiments. A uniform set of performance statistics is calculated and tabulated to compare maximum observed concentrations and cloud half-width to those predicted by each model. None of the models demonstrated good performance consistently for all three experimental programs.
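    Performance statistics of the kind tabulated in such evaluations commonly include the fractional bias and the normalized mean square error of predicted versus observed concentrations. A sketch of these two standard measures (the specific statistics used in the report may differ):

```python
def fractional_bias(obs, pred):
    """FB = 2*(mean_obs - mean_pred) / (mean_obs + mean_pred); 0 is unbiased,
    positive values indicate underprediction."""
    mo = sum(obs) / len(obs)
    mp = sum(pred) / len(pred)
    return 2.0 * (mo - mp) / (mo + mp)

def nmse(obs, pred):
    """NMSE = mean((obs - pred)^2) / (mean_obs * mean_pred); 0 is perfect."""
    mo = sum(obs) / len(obs)
    mp = sum(pred) / len(pred)
    return sum((o - p) ** 2 for o, p in zip(obs, pred)) / (len(obs) * mo * mp)
```
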

  13. How Do You Evaluate Everyone Who Isn't a Teacher? An Adaptable Evaluation Model for Professional Support Personnel.

    ERIC Educational Resources Information Center

    Stronge, James H.; And Others

    The evaluation of professional support personnel in the schools has been a neglected area in educational evaluation. The Center for Research on Educational Accountability and Teacher Evaluation (CREATE) has worked to develop a conceptually sound evaluation model and then to translate the model into practical evaluation procedures that facilitate…

  14. A resource dependent protein synthesis model for evaluating synthetic circuits.

    PubMed

    Halter, Wolfgang; Montenbruck, Jan Maximilian; Tuza, Zoltan A; Allgöwer, Frank

    2017-03-09

    Reliable in silico design of synthetic gene networks necessitates novel approaches to model the process of protein synthesis under the influence of limited resources. We present such a novel protein synthesis model which originates from the Ribosome Flow Model and, among other things, describes the movement of ribosomes and RNA polymerase on mRNA and DNA templates, respectively. By analyzing the convergence properties of this model based upon geometric considerations, we present additional insights into the dynamic mechanisms of the process of protein synthesis. Further, we demonstrate how this model can be used to evaluate the performance of synthetic gene circuits under different loading scenarios.
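    In its standard form, the Ribosome Flow Model referenced above is a chain of n site occupancies x_i in [0, 1] coupled by transition rates lambda_0..lambda_n, with virtual boundary occupancies of 1 upstream (initiation) and 0 downstream (exit). A minimal dynamics sketch (rate values hypothetical, and this is the basic RFM rather than the paper's extended model):

```python
def rfm_rhs(x, lam):
    """dx_i/dt = lam[i]*x_{i-1}*(1 - x_i) - lam[i+1]*x_i*(1 - x_{i+1}),
    with occupancy 1 upstream of the first site and 0 downstream of the
    last. Requires len(lam) == len(x) + 1."""
    n = len(x)
    dx = []
    for i in range(n):
        left = x[i - 1] if i > 0 else 1.0
        right = x[i + 1] if i < n - 1 else 0.0
        dx.append(lam[i] * left * (1.0 - x[i]) - lam[i + 1] * x[i] * (1.0 - right))
    return dx

def rfm_steady_state(lam, n, dt=0.01, steps=50000):
    """Forward-Euler integration toward the (unique) steady state;
    returns the occupancies and the production rate lam[n] * x_{n-1}."""
    x = [0.0] * n
    for _ in range(steps):
        dx = rfm_rhs(x, lam)
        x = [xi + dt * di for xi, di in zip(x, dx)]
    return x, lam[n] * x[-1]
```

    At steady state the flow through every site is equal, so the initiation flux lam[0]*(1 - x_0) matches the production rate.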

  15. Human Thermal Model Evaluation Using the JSC Human Thermal Database

    NASA Technical Reports Server (NTRS)

    Bue, Grant; Makinen, Janice; Cognata, Thomas

    2012-01-01

    Human thermal modeling has considerable long term utility to human space flight. Such models provide a tool to predict crew survivability in support of vehicle design and to evaluate crew response in untested space environments. It is to the benefit of any such model not only to collect relevant experimental data to correlate it against, but also to maintain an experimental standard or benchmark for future development in a readily and rapidly searchable and software accessible format. The Human thermal database project is intended to do just that: to collect relevant data from literature and experimentation and to store the data in a database structure for immediate and future use as a benchmark to judge human thermal models against, in identifying model strengths and weaknesses, to support model development and improve correlation, and to statistically quantify a model's predictive quality. The human thermal database developed at the Johnson Space Center (JSC) is intended to evaluate a set of widely used human thermal models. This set includes the Wissler human thermal model, a model that has been widely used to predict the human thermoregulatory response to a variety of cold and hot environments. These models are statistically compared to the current database, which contains experiments of human subjects primarily in air from a literature survey ranging between 1953 and 2004 and from a suited experiment recently performed by the authors, for a quantitative study of relative strength and predictive quality of the models.

  16. Evaluating Vocational Educators' Training Programs: A Kirkpatrick-Inspired Evaluation Model

    ERIC Educational Resources Information Center

    Ravicchio, Fabrizio; Trentin, Guglielmo

    2015-01-01

    The aim of the article is to describe the assessment model adopted by the SCINTILLA Project, a project in Italy aimed at the online vocational training of young, seriously-disabled subjects and their subsequent work inclusion in smart-work mode. It will thus describe the model worked out for evaluation of the training program conceived for the…

  17. A model to evaluate quality and effectiveness of disease management.

    PubMed

    Lemmens, K M M; Nieboer, A P; van Schayck, C P; Asin, J D; Huijsman, R

    2008-12-01

    Disease management has emerged as a new strategy to enhance quality of care for patients suffering from chronic conditions, and to control healthcare costs. So far, however, the effects of this strategy remain unclear. Although current models define the concept of disease management, they do not provide a systematic development or an explanatory theory of how disease management affects the outcomes of care. The objective of this paper is to present a framework for valid evaluation of disease-management initiatives. The evaluation model is built on two pillars of disease management: patient-related and professional-directed interventions. The effectiveness of these interventions is thought to be affected by the organisational design of the healthcare system. Disease management requires a multifaceted approach; hence disease-management programme evaluations should focus on the effects of multiple interventions, namely patient-related, professional-directed and organisational interventions. The framework has been built upon the conceptualisation of these disease-management interventions. Analysis of the underlying mechanisms of these interventions revealed that learning and behavioural theories support the core assumptions of disease management. The evaluation model can be used to identify the components of disease-management programmes and the mechanisms behind them, making valid comparison feasible. In addition, this model links the programme interventions to indicators that can be used to evaluate the disease-management programme. Consistent use of this framework will enable comparisons among disease-management programmes and outcomes in evaluation research.

  18. Evaluating performances of simplified physically based landslide susceptibility models.

    NASA Astrophysics Data System (ADS)

    Capparelli, Giovanna; Formetta, Giuseppe; Versace, Pasquale

    2015-04-01

    Rainfall induced shallow landslides cause significant damages involving loss of life and properties. Prediction of shallow landslide susceptible locations is a complex task that involves many disciplines: hydrology, geotechnical science, geomorphology, and statistics. Usually two main approaches are used to accomplish this task: statistical or physically based models. This paper presents a package of GIS based models for landslide susceptibility analysis. It was integrated in the NewAge-JGrass hydrological model using the Object Modeling System (OMS) modeling framework. The package includes three simplified physically based models for landslide susceptibility analysis (M1, M2, and M3) and a component for model verification. It computes eight goodness of fit (GOF) indices by comparing pixel-by-pixel model results and measurement data. Moreover, the package integration in NewAge-JGrass allows the use of other components such as geographic information system tools to manage input-output processes, and automatic calibration algorithms to estimate model parameters. The system offers the possibility to investigate and fairly compare the quality and the robustness of models and model parameters, according to a procedure that includes: i) model parameter estimation by optimizing each of the GOF indices separately, ii) model evaluation in the ROC plane using each of the optimal parameter sets, and iii) GOF robustness evaluation by assessing their sensitivity to input parameter variation. This procedure was repeated for all three models. The system was applied to a case study in Calabria (Italy) along the Salerno-Reggio Calabria highway, between Cosenza and Altilia municipality. The analysis showed that among all the optimized indices and all three models, Average Index (AI) optimization coupled with model M3 is the best modeling solution for our test case. This research was funded by PON Project No. 01_01503 "Integrated Systems for Hydrogeological Risk
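    Pixel-by-pixel comparison of a binary susceptibility map against mapped landslides reduces to a confusion matrix, from which GOF indices and ROC-plane coordinates follow directly. A sketch of a few common indices (the eight indices used by the package may differ):

```python
def gof_indices(obs, pred):
    """obs, pred: flat sequences of 0/1 pixel labels (1 = landslide).
    Returns a few common goodness-of-fit indices."""
    tp = sum(1 for o, p in zip(obs, pred) if o == 1 and p == 1)
    tn = sum(1 for o, p in zip(obs, pred) if o == 0 and p == 0)
    fp = sum(1 for o, p in zip(obs, pred) if o == 0 and p == 1)
    fn = sum(1 for o, p in zip(obs, pred) if o == 1 and p == 0)
    return {
        "accuracy": (tp + tn) / len(obs),
        "tpr": tp / (tp + fn),       # ROC y-axis (hit rate)
        "fpr": fp / (fp + tn),       # ROC x-axis (false alarm rate)
        "csi": tp / (tp + fp + fn),  # critical success index
    }
```

    Optimizing a model's parameters against each index separately, as the procedure above describes, then amounts to maximizing one of these values over the parameter space.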

  19. Study on Turbulent Modeling in Gas Entrainment Evaluation Method

    NASA Astrophysics Data System (ADS)

    Ito, Kei; Ohshima, Hiroyuki; Nakamine, Yoshiaki; Imai, Yasutomo

    Suppression of gas entrainment (GE) phenomena caused by free surface vortices is very important to establish an economically superior design of the sodium-cooled fast reactor in Japan (JSFR). However, due to the non-linearity and/or locality of the GE phenomena, it is not easy to evaluate the occurrence of the GE phenomena accurately. In other words, the onset condition of the GE phenomena in the JSFR is not predicted easily based on scaled-model and/or partial-model experiments. Therefore, the authors are developing a CFD-based evaluation method in which the non-linearity and locality of the GE phenomena can be considered. In the evaluation method, macroscopic vortex parameters, e.g. circulation, are determined by three-dimensional CFD and then GE-related parameters, e.g. gas core (GC) length, are calculated by using the Burgers vortex model. This procedure is efficient to evaluate the GE phenomena in the JSFR. However, it is well known that the Burgers vortex model tends to overestimate the GC length due to the lack of consideration of some physical mechanisms. Therefore, in this study, the authors develop a turbulent vortex model to evaluate the GE phenomena more accurately. Then, the improved GE evaluation method with the turbulent viscosity model is validated by analyzing the GC lengths observed in a simple experiment. The evaluation results show that the GC lengths analyzed by the improved method are shorter in comparison to the original method, and give better agreement with the experimental data.
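    For reference, the laminar Burgers vortex that the original evaluation method builds on gives the tangential velocity profile v(r) = Gamma/(2*pi*r) * (1 - exp(-a*r^2/(2*nu))), where Gamma is the circulation, a the axial strain rate, and nu the kinematic viscosity. A sketch (parameter values hypothetical; the paper's turbulent model modifies this profile):

```python
import math

def burgers_vtheta(r, gamma, a, nu):
    """Tangential velocity of a Burgers vortex:
    v = gamma/(2*pi*r) * (1 - exp(-a*r**2/(2*nu))).
    Near r = 0 the profile is solid-body-like, ~ gamma*a*r/(4*pi*nu);
    far from the core it approaches the potential vortex gamma/(2*pi*r)."""
    if r == 0.0:
        return 0.0
    return gamma / (2.0 * math.pi * r) * (1.0 - math.exp(-a * r * r / (2.0 * nu)))
```
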

  20. New model framework and structure and the commonality evaluation model. [concerning unmanned spacecraft projects

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The development of a framework and structure for shuttle era unmanned spacecraft projects and the development of a commonality evaluation model is documented. The methodology developed for model utilization in performing cost trades and comparative evaluations for commonality studies is discussed. The model framework consists of categories of activities associated with the spacecraft system's development process. The model structure describes the physical elements to be treated as separate identifiable entities. Cost estimating relationships for subsystem and program-level components were calculated.

  1. A Generic Evaluation Model for Semantic Web Services

    NASA Astrophysics Data System (ADS)

    Shafiq, Omair

    Semantic Web Services research has gained momentum over the last few years and by now several realizations exist. They are being used in a number of industrial use-cases. Soon software developers will be expected to use this infrastructure to build their B2B applications requiring dynamic integration. However, there is still a lack of guidelines for the evaluation of tools developed to realize Semantic Web Services and applications built on top of them. In normal software engineering practice such guidelines can already be found for traditional component-based systems. Also some efforts are being made to build performance models for service-based systems. Drawing on these related efforts in component-oriented and service-based systems, we identified the need for a generic evaluation model for Semantic Web Services applicable to any realization. The generic evaluation model will help users and customers to orient their systems and solutions towards using Semantic Web Services. In this chapter, we have presented the requirements for the generic evaluation model for Semantic Web Services and further discussed the initial steps that we took to sketch such a model. Finally, we discuss related activities for evaluating semantic technologies.

  2. Compartmental models for apical efflux by P-glycoprotein. Part 1. Evaluation of model complexity

    PubMed Central

    Nagar, Swati; Tucker, Jalia; Weiskircher, Erica A.; Bhoopathy, Siddhartha; Hidalgo, Ismael J.; Korzekwa, Ken

    2013-01-01

    Purpose With the goal of quantifying P-gp transport kinetics, Part 1 of these manuscripts evaluates different compartmental models and Part 2 applies these models to kinetic data. Methods Models were developed to simulate the effect of apical efflux transporters on intracellular concentrations of six drugs. The effect of experimental variability on model predictions was evaluated. Several models were evaluated, and characteristics including membrane configuration, lipid content, and apical surface area (asa) were varied. Results Passive permeabilities from MDCK-MDR1 cells in the presence of cyclosporine gave lower model errors than from MDCK control cells. Consistent with the results in Part 2, model configuration had little impact on calculated model errors. The 5-compartment model was the simplest model that reproduced experimental lag times. Lipid content and asa had minimal effect on model errors, predicted lag times, and intracellular concentrations. Including endogenous basolateral uptake activity can decrease model errors. Models with and without explicit membrane barriers differed markedly in their predicted intracellular concentrations for basolateral drug exposure. Single point data resulted in clearances similar to time course data. Conclusions Compartmental models are useful to evaluate the impact of efflux transporters on intracellular concentrations. Whereas a 3-compartment model may be sufficient to predict the impact of transporters that efflux drugs from the cell, a 5-compartment model with explicit membranes may be required to predict intracellular concentrations when efflux occurs from the membrane. More complex models including additional compartments may be unnecessary. PMID:24019023
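    A minimal 3-compartment (apical medium, cell, basolateral medium) version of such a model can be sketched as coupled mass balances with passive permeability-surface-area products and an apical efflux clearance. All parameter names and values below are hypothetical illustrations, not the configurations or fitted values from the paper:

```python
def three_compartment(ca, cc, cb, params, dt=0.01, steps=10000):
    """Apical-cell-basolateral model with P-gp-like efflux from cell to
    apical. params = (ps_a, ps_b, cl_pgp, va, vc, vb): passive
    permeability-surface products, efflux clearance, and volumes.
    Integrated with forward Euler; returns final concentrations."""
    ps_a, ps_b, cl_pgp, va, vc, vb = params
    for _ in range(steps):
        j_a = ps_a * (ca - cc)   # apical <-> cell passive flux
        j_b = ps_b * (cb - cc)   # basolateral <-> cell passive flux
        j_e = cl_pgp * cc        # active efflux, cell -> apical
        ca += dt * (-j_a + j_e) / va
        cc += dt * (j_a + j_b - j_e) / vc
        cb += dt * (-j_b) / vb
    return ca, cc, cb
```

    Because every flux appears once with each sign, the scheme conserves total mass exactly, and at steady state the efflux term holds the apical concentration above the basolateral one.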

  3. Evaluating alternate biokinetic models for trace pollutant cometabolism.

    PubMed

    Liu, Li; Binning, Philip J; Smets, Barth F

    2015-02-17

    Mathematical models of cometabolic biodegradation kinetics can improve our understanding of the relevant microbial reactions and allow us to design in situ or in-reactor applications of cometabolic bioremediation. A variety of models are available, but their ability to describe experimental data has not been systematically evaluated for a variety of operational/experimental conditions. Here five different models were considered: first-order; Michaelis-Menten; reductant; competition; and combined models. The models were assessed on their ability to fit data from simulated batch experiments covering a realistic range of experimental conditions. The simulated observations were generated by using the most complex model structure and parameters based on the literature, with added experimental error. Three criteria were used to evaluate model fit: ability to fit the simulated experimental data, identifiability of parameters using a colinearity analysis, and suitability of the model size and complexity using the Bayesian and Akaike Information criteria. Results show that no single model fits data well for a range of experimental conditions. The reductant model achieved best results, but required very different parameter sets to simulate each experiment. Parameter nonuniqueness was likely due to parameter correlation. These results suggest that the cometabolic models must be further developed if they are to reliably simulate experimental and operational data.
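    Two of the simpler rate laws compared above, together with the least-squares form of the Akaike Information Criterion used to penalize model size, can be sketched as follows (function and symbol names are illustrative, not the paper's notation):

```python
import math

def first_order(s, k1):
    """First-order cometabolic degradation rate, k1 * S."""
    return k1 * s

def michaelis_menten(s, vmax, km):
    """Michaelis-Menten (saturable) degradation rate, vmax * S / (Km + S)."""
    return vmax * s / (km + s)

def aic_ls(n, rss, n_params):
    """AIC for a least-squares fit: n*ln(RSS/n) + 2k. Lower is better;
    an extra parameter must buy a real drop in residual sum of squares."""
    return n * math.log(rss / n) + 2 * n_params
```
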

  4. Classification and moral evaluation of uncertainties in engineering modeling.

    PubMed

    Murphy, Colleen; Gardoni, Paolo; Harris, Charles E

    2011-09-01

    Engineers must deal with risks and uncertainties as a part of their professional work and, in particular, uncertainties are inherent to engineering models. Models play a central role in engineering. Models often represent an abstract and idealized version of the mathematical properties of a target. Using models, engineers can investigate and acquire understanding of how an object or phenomenon will perform under specified conditions. This paper defines the different stages of the modeling process in engineering, classifies the various sources of uncertainty that arise in each stage, and discusses the categories into which these uncertainties fall. The paper then considers the way uncertainty and modeling are approached in science and the criteria for evaluating scientific hypotheses, in order to highlight the very different criteria appropriate for the development of models and the treatment of the inherent uncertainties in engineering. Finally, the paper puts forward nine guidelines for the treatment of uncertainty in engineering modeling.

  5. Evaluation of artificial intelligence based models for chemical biodegradability prediction.

    PubMed

    Baker, James R; Gamberger, Dragan; Mihelcic, James R; Sabljić, Aleksandar

    2004-12-31

    This study presents a review of biodegradability modeling efforts, including a detailed assessment of two models developed using an artificial intelligence based methodology. Validation results for these models using an independent, quality-reviewed database demonstrate that the models perform well when compared, on the same data, to another commonly used biodegradability model. The ability of models induced by an artificial intelligence methodology to accommodate complex interactions in detailed systems, and the demonstrated reliability of the approach evaluated by this study, indicate that the methodology may have application in broadening the scope of biodegradability models. Given adequate data for biodegradability of chemicals under environmental conditions, this may allow for the development of future models that include such things as surface interface impacts on biodegradability.

  6. [Evaluation of landscape connectivity based on least-cost model].

    PubMed

    Wu, Chang-Guang; Zhou, Zhi-Xiang; Wang, Peng-Cheng; Xiao, Wen-Fa; Teng, Ming-Jun; Peng, Li

    2009-08-01

    Landscape connectivity, as a dominant factor affecting species dispersal, reflects the degree to which the landscape facilitates or impedes organisms' movement among resource patches. It is also an important indicator in sustainable land use and biological conservation. The least-cost model originates from graph theory, and integrates detailed geographical information with organisms' behavioral characteristics in the landscape. Through cost distance analysis, this model can describe species connectivity in a heterogeneous landscape intuitively and visually. Due to the simple algorithm performed in GIS packages and the demand of moderate data information, the least-cost model has gained extensive attention in the evaluation of large-scale landscape connectivity. Based on the current studies of landscape connectivity, this paper elaborated the significance, principles, and operation processes of the least-cost model in evaluating landscape connectivity, and discussed the existing problems of the model in its practical applications, which should benefit further related studies and biodiversity conservation.
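    The cost-distance analysis at the heart of the least-cost model is essentially a shortest-path computation over a resistance raster. A minimal sketch using Dijkstra's algorithm with 4-connected moves, where the cost of a step is the resistance of the cell being entered (conventions differ across GIS packages, e.g. averaging the two cells or weighting diagonals):

```python
import heapq

def least_cost(grid, start, goal):
    """Minimum accumulated cost from start to goal over a resistance
    raster; each move pays the resistance of the cell entered."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + grid[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf")
```

    On a raster with a high-resistance barrier, the computed path routes around it even when that path is geometrically longer, which is exactly the behavior the least-cost model uses to represent dispersal.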

  7. Evaluation of potential crushed-salt constitutive models

    SciTech Connect

    Callahan, G.D.; Loken, M.C.; Sambeek, L.L. Van; Chen, R.; Pfeifle, T.W.; Nieland, J.D.

    1995-12-01

    Constitutive models describing the deformation of crushed salt are presented in this report. Ten constitutive models with potential to describe the phenomenological and micromechanical processes for crushed salt were selected from a literature search. Three of these ten constitutive models, termed Sjaardema-Krieg, Zeuch, and Spiers models, were adopted as candidate constitutive models. The candidate constitutive models were generalized in a consistent manner to three-dimensional states of stress and modified to include the effects of temperature, grain size, and moisture content. A database including hydrostatic consolidation and shear consolidation tests conducted on Waste Isolation Pilot Plant and southeastern New Mexico salt was used to determine material parameters for the candidate constitutive models. Nonlinear least-squares model fitting to data from the hydrostatic consolidation tests, the shear consolidation tests, and a combination of the shear and hydrostatic tests produces three sets of material parameter values for the candidate models. The change in material parameter values from test group to test group indicates the empirical nature of the models. To evaluate the predictive capability of the candidate models, each parameter value set was used to predict each of the tests in the database. Based on the fitting statistics and the ability of the models to predict the test data, the Spiers model appeared to perform slightly better than the other two candidate models. The work reported here is a first-of-its-kind evaluation of constitutive models for reconsolidation of crushed salt. Questions remain to be answered. Deficiencies in models and databases are identified and recommendations for future work are made. 85 refs.

  8. Evaluation of an Individual Placement and Support model (IPS) program.

    PubMed

    Lucca, Anna M; Henry, Alexis D; Banks, Steven; Simon, Lorna; Page, Stephanie

    2004-01-01

    While randomized clinical trials (RCTs) have helped to establish Individual Placement and Support (IPS) programs as an evidence-based practice, it is important to evaluate whether "real world" IPS programs can be implemented with fidelity and achieve outcomes comparable to programs evaluated in RCTs. The current evaluation retrospectively examined employment outcomes for 90 participants from an IPS-model Services for Employment and Education (SEE) program in Massachusetts over a 4.5-year period. Evaluators accessed demographic, functioning, and employment data from three sources--SEE program records/database, clinical records, and the Massachusetts Department of Mental Health Client Tracking system. Results indicate that the SEE program maintained high IPS fidelity and achieved employment outcomes comparable or superior to other SE and IPS model programs described in the literature.

  9. Neutral models as a way to evaluate the Sea Level Affecting Marshes Model (SLAMM)

    EPA Science Inventory

    A commonly used landscape model to simulate wetland change – the Sea Level Affecting Marshes Model(SLAMM) – has rarely been explicitly assessed for its prediction accuracy. Here, we evaluated this model using recently proposed neutral models – including the random constraint matc...

  10. Process evaluation of an integrated model of discharge planning.

    PubMed

    LeClerc, M; Wells, D L

    2001-01-01

    In this study, a new, empirically-derived model of discharge planning for acutely-ill elderly was evaluated to determine (a) whether it could be implemented in a hospital setting, and (b) what facilitated or challenged the implementation. The process evaluation involved four case studies conducted on three in-patient units of two acute-care hospitals. Data were analyzed using explanation-building and case comparison methods. Three main study results emerged: (a) The integrated model had the potential to be implemented in a hospital setting when certain conditions were in place, (b) use of the integrated approach to discharge planning contributed to patient satisfaction, and (c) the materials developed as part of the discharge planning protocol required only minor formatting modifications in order to be rendered user-friendly. In this article, recommendations are made that will facilitate the model's implementation and utilization in other clinical settings and ongoing and future process evaluations.

  11. Evaluation of ADAM/1 model for advanced coal extraction concepts

    NASA Technical Reports Server (NTRS)

    Deshpande, G. K.; Gangal, M. D.

    1982-01-01

    Several existing computer programs for estimating life cycle cost of mining systems were evaluated. A commercially available program, ADAM/1 was found to be satisfactory in relation to the needs of the advanced coal extraction project. Two test cases were run to confirm the ability of the program to handle nonconventional mining equipment and procedures. The results were satisfactory. The model, therefore, is recommended to the project team for evaluation of their conceptual designs.

  12. Advanced Nondestructive Evaluation (NDE) Sensor Modeling For Multisite Inspection

    DTIC Science & Technology

    2008-10-01

    The finite element method is not well suited for the open-region problems encountered in wave regimes, which has motivated meshless approaches such as the element-free Galerkin method for modeling microwave NDE. A magnetoresistive (MR) sensor inspection method was quantitatively evaluated and validated against the conventionally applied eddy current method.

  13. Human Thermal Model Evaluation Using the JSC Human Thermal Database

    NASA Technical Reports Server (NTRS)

    Cognata, T.; Bue, G.; Makinen, J.

    2011-01-01

    The human thermal database developed at the Johnson Space Center (JSC) is used to evaluate a set of widely used human thermal models. This database will facilitate a more accurate evaluation of human thermoregulatory response in a variety of situations, including those situations that might otherwise prove too dangerous for actual testing--such as extreme hot or cold splashdown conditions. This set includes the Wissler human thermal model, a model that has been widely used to predict the human thermoregulatory response to a variety of cold and hot environments. These models are statistically compared to the current database, which contains experiments of human subjects primarily in air from a literature survey ranging between 1953 and 2004 and from a suited experiment recently performed by the authors, for a quantitative study of relative strength and predictive quality of the models. Human thermal modeling has considerable long term utility to human space flight. Such models provide a tool to predict crew survivability in support of vehicle design and to evaluate crew response in untested environments. It is to the benefit of any such model not only to collect relevant experimental data to correlate it against, but also to maintain an experimental standard or benchmark for future development in a readily and rapidly searchable and software accessible format. The Human thermal database project is intended to do just that: to collect relevant data from literature and experimentation and to store the data in a database structure for immediate and future use as a benchmark to judge human thermal models against, in identifying model strengths and weaknesses, to support model development and improve correlation, and to statistically quantify a model's predictive quality.

  14. Evaluating human performance modeling for system assessment: Promise and problems

    NASA Technical Reports Server (NTRS)

    Patterson, Robert W.; Young, Michael J.

    1992-01-01

    The development and evaluation of computational human performance models is examined. The intention is to develop models that can interact with system prototypes and simulations to perform system assessment. Currently LR is working on a set of models emulating cognitive, psychomotor, auditory, and visual activity for multiple operator positions of a command and control simulation system. These models, developed in conjunction with BBN Systems and Technologies, function within the simulation environment and allow for both unmanned system assessment and manned (human-in-the-loop) assessment of system interfaces and team interactions. These are relatively generic models with built-in flexibility that allows modification of some model parameters. They have great potential for improving the efficiency and effectiveness of system design, test, and evaluation; however, the extent of their practical utility is unclear. Initial verification efforts comparing model performance within the simulation to actual human operators on a similar, independent simulation have been performed, and current efforts are directed at comparing human and model performance within the same simulation environment.

  15. Evaluating snow models with varying process representations for hydrological applications

    NASA Astrophysics Data System (ADS)

    Magnusson, Jan; Wever, Nander; Essery, Richard; Helbig, Nora; Winstral, Adam; Jonas, Tobias

    2015-04-01

    Much effort has been invested in developing snow models over several decades, resulting in a wide variety of empirical and physically based snow models. For the most part, these models are built on similar principles. The greatest differences are found in how each model parameterizes individual processes (e.g., surface albedo and snow compaction). Parameterization choices naturally span a wide range of complexities. In this study, we evaluate the performance of different snow model parameterizations for hydrological applications using an existing multimodel energy-balance framework and data from two well-instrumented alpine sites with seasonal snow cover. We also include two temperature-index snow models and a detailed, physically based multilayer snow model in our analyses. Our results show that snow mass observations provide useful information for evaluating the ability of a model to predict snowpack runoff, whereas snow depth data alone do not. For snow mass and runoff, the energy-balance models appear transferable between our two study sites, a behavior not observed for snow surface temperature predictions due to the site-specificity of turbulent heat transfer formulations. Errors in the input and validation data, rather than model formulation, seem to be the greatest factor affecting model performance. The three model types provide similar ability to reproduce daily observed snowpack runoff when appropriate model structures are chosen, and model complexity was not a determinant of the ability to predict daily snowpack mass and runoff reliably. Our study shows the usefulness of the multimodel framework for identifying appropriate models under given constraints such as data availability, properties of interest, and computational cost.
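
    Runoff skill of the kind described above is commonly summarized with the Nash-Sutcliffe efficiency (NSE); the abstract does not name the metric used, so the sketch below is only illustrative, with hypothetical daily runoff values:

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 is no better than the mean."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    residual = np.sum((observed - simulated) ** 2)
    variance = np.sum((observed - np.mean(observed)) ** 2)
    return 1.0 - residual / variance

# Hypothetical daily snowpack runoff (mm/day)
obs = [0.0, 1.2, 3.5, 5.1, 4.0, 2.2]
sim = [0.1, 1.0, 3.9, 4.8, 4.3, 2.0]
print(round(nash_sutcliffe(obs, sim), 3))
```

    A model-comparison study would compute such a score per model and per site, alongside separate error measures for snow mass and surface temperature.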

  16. Evaluation of Trapped Radiation Model Uncertainties for Spacecraft Design

    NASA Technical Reports Server (NTRS)

    Armstrong, T. W.; Colborn, B. L.

    2000-01-01

    The standard AP8 and AE8 models for predicting trapped proton and electron environments have been compared with several sets of flight data to evaluate model uncertainties. Model comparisons are made with flux, dose, and activation measurements made on various U.S. low-Earth orbit satellites (APEX, CRRES, DMSP, LDEF, NOAA) and Space Shuttle flights, on Russian satellites (Photon-8, Cosmos-1887, Cosmos-2044), and on the Russian Mir Space Station. This report gives a summary of the model-data comparisons; detailed results are given in a companion report. Results from the model comparisons with flight data show, for example, that the AP8 model underpredicts the trapped proton flux at low altitudes by a factor of about two (independent of proton energy and solar cycle conditions), and that the AE8 model overpredicts the flux in the outer electron belt by an order of magnitude or more.
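
    An over- or underprediction factor like those quoted above can be summarized as a geometric mean of model-to-data flux ratios. The sketch below uses hypothetical flux values, not the actual AP8/AE8 comparison data:

```python
import math

def geometric_mean_ratio(model_flux, measured_flux):
    """Geometric mean of model/data ratios; ~0.5 means underprediction by ~2x."""
    logs = [math.log(m / d) for m, d in zip(model_flux, measured_flux)]
    return math.exp(sum(logs) / len(logs))

# Hypothetical low-altitude trapped proton fluxes (particles/cm^2/s)
model = [120.0, 95.0, 210.0]
data = [250.0, 180.0, 430.0]
print(f"model/data = {geometric_mean_ratio(model, data):.2f}")
```

    The geometric mean is the natural average here because flux discrepancies are multiplicative and can span orders of magnitude.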

  18. Mathematical models and lymphatic filariasis control: monitoring and evaluating interventions.

    PubMed

    Michael, Edwin; Malecela-Lazaro, Mwele N; Maegga, Bertha T A; Fischer, Peter; Kazura, James W

    2006-11-01

    Monitoring and evaluation are crucially important to the scientific management of any mass parasite control programme. Monitoring enables the effectiveness of implemented actions to be assessed and necessary adaptations to be identified; it also determines when management objectives are achieved. Parasite transmission models can provide a scientific template for informing the optimal design of such monitoring programmes. Here, we illustrate the usefulness of using a model-based approach for monitoring and evaluating anti-parasite interventions and discuss issues that need addressing. We focus on the use of such an approach for the control and/or elimination of the vector-borne parasitic disease, lymphatic filariasis.

  19. New performance evaluation models for character detection in images

    NASA Astrophysics Data System (ADS)

    Wang, YanWei; Ding, XiaoQing; Liu, ChangSong; Wang, Kongqiao

    2010-02-01

    Detection of character regions is meaningful research for both highlighting regions of interest and recognition for further information processing. Much research has been performed on character localization and extraction, creating a great need for performance evaluation schemes to inspect detection algorithms. In this paper, two probability models are established to accomplish evaluation tasks for different applications. For highlighting regions of interest, a Gaussian probability model, which simulates the low-pass Gaussian filtering property of the human vision system (HVS), is constructed to allocate different weights to different character parts. It shows great potential for describing detector performance, especially when the detected result is an incomplete character, where other methods cannot work effectively. For the recognition application, we also introduce a weighted probability model to appropriately describe the contribution of detection results to final recognition results. The validity of the performance evaluation models proposed in this paper is demonstrated by experiments on web images and natural scene images. These models may also be applicable to evaluating algorithms that locate other objects, such as faces, though wider experiments are needed to examine this assumption.
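
    As a rough illustration of the Gaussian weighting idea (not the paper's exact model), the sketch below scores a partial detection by Gaussian-weighted coverage of the ground-truth character region; the mask shapes and the sigma parameter are hypothetical:

```python
import numpy as np

def gaussian_weights(h, w, sigma_frac=0.3):
    """2-D Gaussian weight map centered on a character's bounding box."""
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    sy, sx = sigma_frac * h, sigma_frac * w
    wmap = np.exp(-(((ys - cy) / sy) ** 2 + ((xs - cx) / sx) ** 2) / 2.0)
    return wmap / wmap.sum()

def weighted_recall(char_mask, detected_mask):
    """Score a (possibly partial) detection by Gaussian-weighted coverage."""
    wmap = gaussian_weights(*char_mask.shape)
    return float((wmap * char_mask * detected_mask).sum() / (wmap * char_mask).sum())

char = np.ones((20, 12))    # ground-truth character region
partial = np.zeros((20, 12))
partial[:10, :] = 1         # detector found only the top half
print(weighted_recall(char, partial))
```

    Unlike a plain pixel count, such a weighting penalizes a detector more for missing the (heavily weighted) center of a character than its edges.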

  20. Evaluating supervised topic models in the presence of OCR errors

    NASA Astrophysics Data System (ADS)

    Walker, Daniel; Ringger, Eric; Seppi, Kevin

    2013-01-01

    Supervised topic models are promising tools for text analytics that simultaneously model topical patterns in document collections and relationships between those topics and document metadata, such as timestamps. We examine empirically the effect of OCR noise on the ability of supervised topic models to produce high quality output through a series of experiments in which we evaluate three supervised topic models and a naive baseline on synthetic OCR data having various levels of degradation and on real OCR data from two different decades. The evaluation includes experiments with and without feature selection. Our results suggest that supervised topic models are no better, or at least not much better in terms of their robustness to OCR errors, than unsupervised topic models and that feature selection has the mixed result of improving topic quality while harming metadata prediction quality. For users of topic modeling methods on OCR data, supervised topic models do not yet solve the problem of finding better topics than the original unsupervised topic models.

  1. Software Platform Evaluation - Verifiable Fuel Cycle Simulation (VISION) Model

    SciTech Connect

    J. J. Jacobson; D. E. Shropshire; W. B. West

    2005-11-01

    The purpose of this Software Platform Evaluation (SPE) is to document the top-level evaluation of potential software platforms on which to construct a simulation model that satisfies the requirements for a Verifiable Fuel Cycle Simulation Model (VISION) of the Advanced Fuel Cycle (AFC). See the Software Requirements Specification for Verifiable Fuel Cycle Simulation (VISION) Model (INEEL/EXT-05-02643, Rev. 0) for a discussion of the objective and scope of the VISION model. VISION is intended to serve as a broad systems analysis and study tool applicable to work conducted as part of the AFCI (including cost estimates) and Generation IV reactor development studies. This document will serve as a guide for selecting the most appropriate software platform for VISION. This is a “living document” that will be modified over the course of the execution of this work.

  2. A research and evaluation capacity building model in Western Australia.

    PubMed

    Lobo, Roanna; Crawford, Gemma; Hallett, Jonathan; Laing, Sue; Mak, Donna B; Jancey, Jonine; Rowell, Sally; McCausland, Kahlia; Bastian, Lisa; Sorenson, Anne; Tilley, P J Matt; Yam, Simon; Comfort, Jude; Brennan, Sean; Doherty, Maryanne

    2016-12-27

    Evaluation of public health programs, services and policies is increasingly required to demonstrate effectiveness. Funding constraints necessitate that existing programs, services and policies be evaluated and their findings disseminated. Evidence-informed practice and policy is also desirable to maximise investments in public health. Partnerships between public health researchers, service providers and policymakers can help address evaluation knowledge and skills gaps. The Western Australian Sexual Health and Blood-borne Virus Applied Research and Evaluation Network (SiREN) aims to build research and evaluation capacity in the sexual health and blood-borne virus sector in Western Australia (WA). Partners' perspectives of the SiREN model after 2 years were explored. Qualitative written responses from service providers, policymakers and researchers about the SiREN model were analysed thematically. Service providers reported that participation in SiREN prompted them to consider evaluation earlier in the planning process and increased their appreciation of the value of evaluation. Policymakers noted benefits of the model in generating local evidence and highlighting local issues of importance for consideration at a national level. Researchers identified challenges communicating the services available through SiREN and the time investment needed to develop effective collaborative partnerships. Stronger engagement between public health researchers, service providers and policymakers through collaborative partnerships has the potential to improve evidence generation and evidence translation. These outcomes require long-term funding and commitment from all partners to develop and maintain partnerships. Ongoing monitoring and evaluation can ensure the partnership remains responsive to the needs of key stakeholders. The findings are applicable to many sectors.

  3. Road network safety evaluation using Bayesian hierarchical joint model.

    PubMed

    Wang, Jie; Huang, Helai

    2016-05-01

    Safety and efficiency are commonly regarded as two significant performance indicators of transportation systems. In practice, road network planning has focused on road capacity and transport efficiency whereas the safety level of a road network has received little attention in the planning stage. This study develops a Bayesian hierarchical joint model for road network safety evaluation to help planners take traffic safety into account when planning a road network. The proposed model establishes relationships between road network risk and micro-level variables related to road entities and traffic volume, as well as socioeconomic, trip generation and network density variables at macro level which are generally used for long term transportation plans. In addition, network spatial correlation between intersections and their connected road segments is also considered in the model. A road network is elaborately selected in order to compare the proposed hierarchical joint model with a previous joint model and a negative binomial model. According to the results of the model comparison, the hierarchical joint model outperforms the joint model and negative binomial model in terms of the goodness-of-fit and predictive performance, which indicates the reasonableness of considering the hierarchical data structure in crash prediction and analysis. Moreover, both random effects at the TAZ level and the spatial correlation between intersections and their adjacent segments are found to be significant, supporting the employment of the hierarchical joint model as an alternative in road-network-level safety modeling as well.

  4. Evaluation of Rainfall-Runoff Models for Mediterranean Subcatchments

    NASA Astrophysics Data System (ADS)

    Cilek, A.; Berberoglu, S.; Donmez, C.

    2016-06-01

    The development and application of rainfall-runoff models have been a cornerstone of hydrological research for many decades. The amount of rainfall and its intensity and variability control the generation of runoff and the erosional processes operating at different scales. These interactions can be highly variable in Mediterranean catchments with marked hydrological fluctuations. The aim of the study was to evaluate the performance of a rainfall-runoff model for rainfall-runoff simulation in a Mediterranean subcatchment. The Pan-European Soil Erosion Risk Assessment (PESERA), a simplified hydrological process-based approach, was used in this study to combine hydrological surface runoff factors. In total, 128 input layers, derived from a data set that includes climate, topography, land use, crop type, planting date, and soil characteristics, are required to run the model. Initial ground cover was estimated from Landsat ETM data provided by ESA. The hydrological model was evaluated in terms of its performance in the Goksu River Watershed, Turkey, located in the Central Eastern Mediterranean Basin of Turkey. The area is approximately 2000 km2. The landscape is dominated by bare ground, agriculture, and forest. The average annual rainfall is 636.4 mm. This study is significant for evaluating model performance in a complex Mediterranean basin. The results provide comprehensive insight into the advantages and limitations of modelling approaches in the Mediterranean environment.

  5. Evaluation of thermographic phosphor technology for aerodynamic model testing

    SciTech Connect

    Cates, M.R.; Tobin, K.W.; Smith, D.B.

    1990-08-01

    The goal for this project was to perform technology evaluations applicable to the development of higher-precision, higher-temperature aerodynamic model testing at Arnold Engineering Development Center (AEDC) in Tullahoma, Tennessee. With the advent of new programs for the design of aerospace craft that fly at higher speeds and altitudes, a detailed understanding of high-temperature materials becomes very important. Model testing is a natural and critical part of the development of these new initiatives. The well-established thermographic phosphor techniques of the Applied Technology Division at Oak Ridge National Laboratory are highly desirable for diagnostic evaluation of materials and aerodynamic shapes as studied in model tests. Combining this state-of-the-art thermographic technique with modern, higher-temperature models will greatly improve the practicability of tests for advanced aerospace vehicles and will provide higher-precision diagnostic information for quantitative evaluation of these tests. The wavelength ratio method for measuring surface temperatures of aerodynamic models was demonstrated in measurements made for this project. In particular, it was shown that appropriate phosphors could be selected for the temperature range up to approximately 700 degrees F or higher, with emission line ratios of sufficient sensitivity to measure temperature with 1% precision or better. Further, it was demonstrated that two-dimensional image-processing methods, using standard hardware, can be successfully applied to surface thermography of aerodynamic models for AEDC applications.
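
    The wavelength (two-line) ratio method recovers surface temperature by inverting a measured calibration curve of emission-line intensity ratio versus temperature. The sketch below interpolates a purely hypothetical calibration; real phosphor calibrations are measured, not assumed:

```python
import numpy as np

# Hypothetical calibration: emission-line intensity ratio vs. temperature (deg F)
cal_temp = np.array([100.0, 250.0, 400.0, 550.0, 700.0])
cal_ratio = np.array([0.35, 0.52, 0.78, 1.10, 1.55])  # ratio rises with temperature

def temperature_from_ratio(ratio):
    """Invert the calibration curve by linear interpolation."""
    return float(np.interp(ratio, cal_ratio, cal_temp))

print(temperature_from_ratio(0.94))
```

    Because the ratio of two emission lines cancels common factors such as illumination and viewing geometry, only this one calibration curve is needed per phosphor.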

  6. Use of field experimental studies to evaluate emergency response models

    SciTech Connect

    Gudiksen, P.H.; Lange, R.; Rodriguez, D.J.; Nasstrom, J.S.

    1985-07-16

    The three-dimensional diagnostic wind field model (MATHEW) and the particle-in-cell atmospheric transport and diffusion model (ADPIC) are used by the Atmospheric Release Advisory Capability to estimate the environmental consequences of accidental releases of radioactivity into the atmosphere. These models have undergone extensive evaluations against field experiments conducted in a variety of environmental settings ranging from relatively flat to very complex terrain areas. Simulations of tracer experiments conducted in a complex mountain valley setting revealed that 35 to 50% of the comparisons between calculated and measured tracer concentrations were within a factor of 5. This may be compared with a factor of 2 for 50% of the comparisons for relatively flat terrain. This degradation of results in complex terrain is due to a variety of factors such as the limited representativeness of measurements in complex terrain, the limited spatial resolution afforded by the models, and the turbulence parameterization based on sigma-theta (σθ) measurements to evaluate the eddy diffusivities. Measurements of sigma-theta in complex terrain exceed those measured over flat terrain by a factor of 2 to 3, leading to eddy diffusivities that are unrealistically high. The results of model evaluations are very sensitive to the quality and the representativeness of the meteorological data. This is particularly true for measurements near the source. The capability of the models to simulate the dispersion of an instantaneously produced cloud of particulates was illustrated to be generally within a factor of 2 over flat terrain. 19 refs., 16 figs.
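
    The "within a factor of N" agreement statistic used in such evaluations is simple to compute; a minimal sketch with hypothetical tracer concentration pairs:

```python
def fraction_within_factor(predicted, observed, factor=5.0):
    """Fraction of prediction/observation pairs agreeing within a given factor."""
    pairs = [(p, o) for p, o in zip(predicted, observed) if p > 0 and o > 0]
    hits = sum(1 for p, o in pairs if 1.0 / factor <= p / o <= factor)
    return hits / len(pairs)

# Hypothetical tracer concentrations (arbitrary units)
pred = [1.0, 4.0, 0.05, 9.0, 2.0]
obs = [2.0, 1.0, 1.0, 1.5, 0.3]
print(fraction_within_factor(pred, obs, factor=5.0))
```

    Factor-based agreement is preferred over absolute error for dispersion studies because concentrations vary over orders of magnitude between receptors.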

  7. The Iterative Research Cycle: Process-Based Model Evaluation

    NASA Astrophysics Data System (ADS)

    Vrugt, J. A.

    2014-12-01

    The ever increasing pace of computational power, along with continued advances in measurement technologies and improvements in process understanding has stimulated the development of increasingly complex physics based models that simulate a myriad of processes at different spatial and temporal scales. Reconciling these high-order system models with perpetually larger volumes of field data is becoming more and more difficult, particularly because classical likelihood-based fitting methods lack the power to detect and pinpoint deficiencies in the model structure. In this talk I will give an overview of our latest research on process-based model calibration and evaluation. This approach, rooted in Bayesian theory, uses summary metrics of the calibration data rather than the data itself to help detect which component(s) of the model is (are) malfunctioning and in need of improvement. A few case studies involving hydrologic and geophysical models will be used to demonstrate the proposed methodology.

  8. Animal models to evaluate anti-atherosclerotic drugs.

    PubMed

    Priyadharsini, Raman P

    2015-08-01

    Atherosclerosis is a multifactorial condition characterized by endothelial injury, fatty streak deposition, and stiffening of the blood vessels. The pathogenesis is complex and mediated by adhesion molecules, inflammatory cells, and smooth muscle cells. Statins have been the major drugs for treating hypercholesterolemia for the past two decades despite limited efficacy. There is an urgent need for new drugs that can replace statins or be combined with them. Preclinical studies evaluating atherosclerosis require an ideal animal model that resembles the disease condition, but no single animal model fully mimics the disease. The animal models used include rabbits, rats, mice, hamsters, and mini pigs. Each animal model has its own advantages and disadvantages. Methods of inducing atherosclerosis include diet, chemical induction, mechanically induced injury, and genetic manipulation. This review focuses on the various animal models, methods of induction, their advantages and disadvantages, and current perspectives on preclinical studies of atherosclerosis.

  9. Information technology model for evaluating emergency medicine teaching

    NASA Astrophysics Data System (ADS)

    Vorbach, James; Ryan, James

    1996-02-01

    This paper describes work in progress to develop an Information Technology (IT) model and supporting information system for the evaluation of clinical teaching in the Emergency Medicine (EM) Department of North Shore University Hospital. In the academic hospital setting student physicians, i.e. residents, and faculty function daily in their dual roles as teachers and students respectively, and as health care providers. Databases exist that are used to evaluate both groups in either academic or clinical performance, but rarely has this information been integrated to analyze the relationship between academic performance and the ability to care for patients. The goal of the IT model is to improve the quality of teaching of EM physicians by enabling the development of integrable metrics for faculty and resident evaluation. The IT model will include (1) methods for tracking residents in order to develop experimental databases; (2) methods to integrate lecture evaluation, clinical performance, resident evaluation, and quality assurance databases; and (3) a patient flow system to monitor patient rooms and the waiting area in the Emergency Medicine Department, to record and display status of medical orders, and to collect data for analyses.

  10. Technology evaluation, assessment, modeling, and simulation: the TEAMS capability

    NASA Astrophysics Data System (ADS)

    Holland, Orgal T.; Stiegler, Robert L.

    1998-08-01

    The United States Marine Corps' Technology Evaluation, Assessment, Modeling and Simulation (TEAMS) capability, located at the Naval Surface Warfare Center in Dahlgren, Virginia, provides an environment for detailed test, evaluation, and assessment of live and simulated sensor and sensor-to-shooter systems for the joint warfare community. Frequent use of modeling and simulation allows for cost-effective testing, benchmarking, and evaluation of various levels of sensors and sensor-to-shooter engagements. Interconnectivity to live, instrumented equipment operating in real battle space environments and to remote modeling and simulation facilities participating in advanced distributed simulation (ADS) exercises is available to support a wide range of situational assessment requirements. TEAMS provides a valuable resource for a variety of users. Engineers, analysts, and other technology developers can use TEAMS to evaluate, assess, and analyze tactically relevant phenomenological data on tactical situations. Expeditionary warfare and USMC concept developers can use the facility to support and execute advanced warfighting experiments (AWE) to better assess operational maneuver from the sea (OMFTS) concepts, doctrines, and technology developments. Developers can use the facility to support sensor system hardware, software, and algorithm development as well as combat development, acquisition, and engineering processes. Test and evaluation specialists can use the facility to plan, assess, and augment their processes. This paper presents an overview of the TEAMS capability and focuses specifically on the technical challenges associated with the integration of live sensor hardware into a synthetic environment and how those challenges are being met. Existing sensors, recent experiments, and facility specifications are featured.

  11. Evaluation of a Neuromechanical Walking Control Model Using Disturbance Experiments

    PubMed Central

    Song, Seungmoon; Geyer, Hartmut

    2017-01-01

    Neuromechanical simulations have been used to study the spinal control of human locomotion, which involves complex mechanical dynamics. So far, most neuromechanical simulation studies have focused on demonstrating the capability of a proposed control model in generating normal walking. As many of these models with competing control hypotheses can generate human-like normal walking behaviors, a more in-depth evaluation is required. Here, we conduct such an evaluation on a spinal-reflex-based control model using five representative gait disturbances, ranging from electrical stimulation to mechanical perturbation at individual leg joints and at the whole body. The immediate changes in muscle activations of the model are compared to those of humans across different gait phases and disturbance magnitudes. Remarkably similar response trends for the majority of investigated muscles and experimental conditions reinforce the plausibility of the reflex circuits of the model. However, the model's responses fall short in amplitude for two experiments with whole-body disturbances, suggesting that in these cases the proposed reflex circuits need to be amplified by additional control structures such as location-specific cutaneous reflexes. A model that captures these selective amplifications would be able to explain both steady and reactive spinal control of human locomotion. Neuromechanical simulations that investigate hypothesized control models are complementary to gait experiments in better understanding the control of human locomotion. PMID:28381996

  12. Evaluation of articulation simulation system using artificial maxillectomy models.

    PubMed

    Elbashti, M E; Hattori, M; Sumita, Y I; Taniguchi, H

    2015-09-01

    Acoustic evaluation is valuable for guiding the treatment of maxillofacial defects and determining the effectiveness of rehabilitation with an obturator prosthesis. Model simulations are important in terms of pre-surgical planning and pre- and post-operative speech function. This study aimed to evaluate the acoustic characteristics of voice generated by an articulation simulation system using a vocal tract model with or without artificial maxillectomy defects. More specifically, we aimed to establish a speech simulation system for maxillectomy defect models that both surgeons and maxillofacial prosthodontists can use in guiding treatment planning. Artificially simulated maxillectomy defects were prepared according to Aramany's classification (Classes I-VI) in a three-dimensional vocal tract plaster model of a subject uttering the vowel /a/. Formant and nasalance acoustic data were analysed using Computerized Speech Lab and the Nasometer, respectively. Formants and nasalance of simulated /a/ sounds were successfully detected and analysed. Values of Formants 1 and 2 for the non-defect model were 675.43 and 976.64 Hz, respectively. Median values of Formants 1 and 2 for the defect models were 634.36 and 1026.84 Hz, respectively. Nasalance was 11% in the non-defect model, whereas median nasalance was 28% in the defect models. The results suggest that an articulation simulation system can be used to help surgeons and maxillofacial prosthodontists to plan post-surgical defects that will facilitate maxillofacial rehabilitation.
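
    Nasalance is conventionally reported as nasal acoustic energy as a percentage of total (nasal plus oral) energy. The sketch below computes it from RMS amplitudes of hypothetical microphone samples; the Nasometer's actual processing (band-pass filtering and frame averaging) is more involved:

```python
import math

def nasalance_percent(nasal_signal, oral_signal):
    """Nasalance: nasal RMS amplitude as a percentage of total (nasal + oral)."""
    nasal = math.sqrt(sum(x * x for x in nasal_signal) / len(nasal_signal))
    oral = math.sqrt(sum(x * x for x in oral_signal) / len(oral_signal))
    return 100.0 * nasal / (nasal + oral)

# Hypothetical microphone samples during a sustained /a/
nasal = [0.02, -0.03, 0.025, -0.02]
oral = [0.15, -0.18, 0.16, -0.17]
print(round(nasalance_percent(nasal, oral), 1))
```

    A defect that opens the oral cavity to the nasal cavity raises the nasal channel's share of the energy, which is why the defect models above show higher nasalance (median 28%) than the non-defect model (11%).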

  13. The Applicability of Selected Evaluation Models to Evolving Investigative Designs.

    ERIC Educational Resources Information Center

    Smith, Nick L.; Hauer, Diane M.

    1990-01-01

    Ten evaluation models are examined in terms of their applicability to investigative, emergent design programs: Stake's portrayal, Wolf's adversary, Patton's utilization, Guba's investigative journalism, Scriven's goal-free, Scriven's modus operandi, Eisner's connoisseurial, Stufflebeam's CIPP, Tyler's objective based, and Levin's cost…

  14. The Application of a Residential Treatment Evaluation Model.

    ERIC Educational Resources Information Center

    Nelson, Ronald H.; And Others

    This study applied a model for the evaluation of a children's residential treatment center. The conclusions are based on data collected for 22 children at four key points: a community baseline relating to families and prior agency contacts, a residential baseline dealing with the child's reported behavior during the first six weeks at the center,…

  15. Support for Career Development in Youth: Program Models and Evaluations

    ERIC Educational Resources Information Center

    Mekinda, Megan A.

    2012-01-01

    This article examines four influential programs--Citizen Schools, After School Matters, career academies, and Job Corps--to demonstrate the diversity of approaches to career programming for youth. It compares the specific program models and draws from the evaluation literature to discuss strengths and weaknesses of each. The article highlights…

  16. Evaluation of a stratiform cloud parameterization for general circulation models

    SciTech Connect

    Ghan, S.J.; Leung, L.R.; McCaa, J.

    1996-04-01

    To evaluate the relative importance of horizontal advection of cloud versus cloud formation within the grid cell of a single column model (SCM), we have performed a series of simulations with our SCM driven by a fixed vertical velocity and various rates of horizontal advection.

  17. Evaluating the Predictive Value of Growth Prediction Models

    ERIC Educational Resources Information Center

    Murphy, Daniel L.; Gaertner, Matthew N.

    2014-01-01

    This study evaluates four growth prediction models--projection, student growth percentile, trajectory, and transition table--commonly used to forecast (and give schools credit for) middle school students' future proficiency. Analyses focused on vertically scaled summative mathematics assessments, and two performance standards conditions (high…

  18. Assessment and Evaluation Modeling. Symposium 38. [AHRD Conference, 2001].

    ERIC Educational Resources Information Center

    2001

    This symposium on assessment and evaluation modeling consists of three presentations. "Training Assessment Among Kenyan Smallholder Entrepreneurs" (George G. Shibanda, Jemymah Ingado, Bernard Nassiuma) reports a study that assessed the extent to which the need for knowledge, information, and skills among small scale farmers can promote…

  19. Evaluation of Infrared Images by Using a Human Thermal Model

    DTIC Science & Technology

    2001-10-25

    thermal environmental history have been recorded. In this case, the thermal environmental history could be estimated from the behavior of a subject... environmental history and physiological condition history. An advantage of the evaluation of IR images using the thermal model is to provide

  20. Evaluation of active appearance models in varying background conditions

    NASA Astrophysics Data System (ADS)

    Kowalski, Marek; Naruniec, Jacek

    2013-10-01

    In this paper we present an evaluation of selected versions of Active Appearance Models (AAM) under varying background conditions. Algorithms were tested on a subset of the CMU PIE database and selected background images. Our experiments show that the accuracy of these methods is strongly correlated with the background used, with differences in success rate of up to 50%.

  1. Air Pollution Data for Model Evaluation and Application

    EPA Science Inventory

    One objective of designing an air pollution monitoring network is to obtain data for evaluating air quality models that are used in the air quality management process and scientific discovery. A common use is to relate emissions to air quality, including assessing ...

  2. Field Evaluation of an Avian Risk Assessment Model

    EPA Science Inventory

    We conducted two laboratory subacute dietary toxicity tests and one outdoor subacute dietary toxicity test to determine the effectiveness of the U.S. Environmental Protection Agency's deterministic risk assessment model for evaluating the potential of adverse effects to birds in ...

  3. Evaluations of the POP Model for Navy Forecasting Use

    DTIC Science & Technology

    2016-06-07

    the Tropical Atmosphere Ocean (TAO) array in the equatorial Pacific and from Ocean Weather Stations in the Pacific and Atlantic. Tokmakian performs...the possibility of evaluating the model in terms of acoustic quantities. RELATED PROJECTS A version of NOGAPS that runs on scalable architecture is

  4. An IPA-Embedded Model for Evaluating Creativity Curricula

    ERIC Educational Resources Information Center

    Chang, Chi-Cheng

    2014-01-01

    How to diagnose the effectiveness of creativity-related curricula is a crucial concern in the pursuit of educational excellence. This paper introduces an importance-performance analysis (IPA)-embedded model for curriculum evaluation, using the example of an IT project implementation course to assess the creativity performance deduced from student…

  5. Using a Project Portfolio: Empowerment Evaluation for Model Demonstration Projects.

    ERIC Educational Resources Information Center

    Baggett, David

    For model postsecondary demonstration projects serving individuals with disabilities, a portfolio of project activities may serve as a method for program evaluation, program replication, and program planning. Using a portfolio for collecting, describing, and documenting a project's successes, efforts, and failures enables project staff to take…

  6. Evaluating Item Fit for Multidimensional Item Response Models

    ERIC Educational Resources Information Center

    Zhang, Bo; Stone, Clement A.

    2008-01-01

    This research examines the utility of the S-X^2 statistic proposed by Orlando and Thissen (2000) in evaluating item fit for multidimensional item response models. Monte Carlo simulation was conducted to investigate both the Type I error and statistical power of this fit statistic in analyzing two kinds of multidimensional test…

  7. A model for compression after impact strength evaluation

    NASA Technical Reports Server (NTRS)

    Ilcewicz, Larry B.; Dost, Ernst F.; Coggeshall, Randy L.

    1989-01-01

    One key property commonly used for evaluating composite material performance is compression after impact strength (CAI). Standard CAI tests typically use a specific laminate stacking sequence, coupon geometry, and impact level. In order to understand what material factors affect CAI, evaluation of test results should include more than comparisons of the measured strength for different materials. This study considers the effects of characteristic impact damage state, specimen geometry, material toughness, ply group thickness, undamaged strength, and failure mode. The results of parametric studies, using an analysis model developed to predict CAI, are discussed. Experimental results used to verify the model are also presented. Finally, recommended pre- and post-test CAI evaluation schemes which help link material behavior to structural performance are summarized.

  8. Evaluating models of community psychology: social transformation in South Africa.

    PubMed

    Edwards, Steve

    2002-01-01

    Trickett (1996) described community psychology in terms of contexts of diversity within a diversity of contexts. As abstract representations of reality, various community psychological models provide further diverse contexts through which to view the diversity of community psychological reality. The Zululand Community Psychology Project is a South African initiative aimed at improving community life. This includes treating the violent sequelae of the unjust Apartheid system through improving relationships among communities divided in terms of historical, colonial, racial, ethnic, political, gender, and other boundaries, as well as promoting health and social change. The aim of this article is to evaluate the applicability of various models of community psychology used in this project. The initial quantitative investigation in the Zululand Community Psychology Project involved five coresearchers, who evaluated five community psychology models--the mental health, social action, organizational, ecological, and phenomenological models--in terms of their differential applicability in three partnership centers, representing the health, education, and business sectors of the local community. In all three contexts, the models were rank ordered by a representative of each center, an intern community psychologist, and his supervisor in terms of the models' respective applicability to the particular partnership center concerned. Results indicated significant agreement with regard to the differential applicability of the mental health, phenomenological, and organizational models in the health, education, and business centers respectively, with the social action model being most generally applicable across all centers. This led to a further qualitative individual and focus group investigation with eight university coresearchers into the experience of social transformation with special reference to social changes needed in the South African context. These social transformation

  9. The Quality, Implementation, and Evaluation Model: A Clinical Practice Model for Sustainable Interventions.

    PubMed

    Talsma, AkkeNeel; McLaughlin, Margaret; Bathish, Melissa; Sirihorachai, Rattima; Kuttner, Rafael

    2014-08-01

    Major efforts have been directed toward the implementation of sustainable quality improvement. To date, progress has been noted using various metrics and performance measures; however, successful implementation has proven challenging. The Quality, Implementation, and Evaluation (QIE) model, derived from Donabedian's structure component, presents a framework for implementation of specific activities. The QIE model consists of Policy, Patient Preparedness, Provider Competency, and Performance and Accountability, to guide specific practice initiatives. The implementation of alcohol-based pre-operative skin prep was evaluated in a sample of 17 hospitals; hospitals actively engaged in the components of the model showed significantly higher use of the alcohol-based skin preparation agent than hospitals that did not engage in QIE model activities. The QIE model presents a powerful and actionable implementation model for mid-level management and clinical leadership. Future studies will further evaluate the impact of the specific components of the QIE model.

  10. Evaluation of battery models for prediction of electric vehicle range

    NASA Technical Reports Server (NTRS)

    Frank, H. A.; Phillips, A. M.

    1977-01-01

    Three analytical models for predicting electric vehicle battery output and the corresponding electric vehicle range for various driving cycles were evaluated. The models were used to predict output and range, which were then compared with values determined experimentally by laboratory tests on batteries, using discharge cycles identical to those encountered by an actual electric vehicle on SAE cycles. Results indicate that the modified Hoxie model gave the best predictions, with an accuracy of about 97 to 98% in the best cases and 86% in the worst case. A computer program was written to perform the lengthy iterative calculations required. The program and the hardware used to automatically discharge the battery are described.

  11. Recursive Model Identification for the Evaluation of Baroreflex Sensitivity.

    PubMed

    Le Rolle, Virginie; Beuchée, Alain; Praud, Jean-Paul; Samson, Nathalie; Pladys, Patrick; Hernández, Alfredo I

    2016-12-01

    A method for the recursive identification of physiological models of the cardiovascular baroreflex is proposed and applied to the time-varying analysis of vagal and sympathetic activities. The proposed method was evaluated with data from five newborn lambs, acquired during injection of vasodilators and vasoconstrictors, and the results show a close match between experimental and simulated signals. The model-based estimates of vagal and sympathetic contributions were consistent with physiological knowledge, and the obtained estimators of vagal and sympathetic activities were compared to traditional markers associated with baroreflex sensitivity. High correlations were observed between traditional markers and model-based indices.

  12. Evaluation of the St. Lucia geothermal resource: macroeconomic models

    SciTech Connect

    Burris, A.E.; Trocki, L.K.; Yeamans, M.K.; Kolstad, C.D.

    1984-08-01

    A macroeconometric model describing the St. Lucian economy was developed using 1970 to 1982 economic data. Results of macroeconometric forecasts for the period 1983 through 1985 show an increase in gross domestic product (GDP) for 1983 and 1984 with a decline in 1985. The rate of population growth is expected to exceed GDP growth so that a small decline in per capita GDP will occur. We forecast that garment exports will increase, providing needed employment and foreign exchange. To obtain a longer-term but more general outlook on St. Lucia's economy, and to evaluate the benefit of geothermal energy development, we applied a nonlinear programming model. The model maximizes discounted cumulative consumption.

  13. Evaluating Model Fit for Growth Curve Models: Integration of Fit Indices from SEM and MLM Frameworks

    ERIC Educational Resources Information Center

    Wu, Wei; West, Stephen G.; Taylor, Aaron B.

    2009-01-01

    Evaluating overall model fit for growth curve models involves 3 challenging issues. (a) Three types of longitudinal data with different implications for model fit may be distinguished: balanced on time with complete data, balanced on time with data missing at random, and unbalanced on time. (b) Traditional work on fit from the structural equation…

  14. A comprehensive benchmarking system for evaluating global vegetation models

    NASA Astrophysics Data System (ADS)

    Kelley, D. I.; Prentice, I. C.; Harrison, S. P.; Wang, H.; Simard, M.; Fisher, J. B.; Willis, K. O.

    2013-05-01

    We present a benchmark system for global vegetation models. This system provides a quantitative evaluation of multiple simulated vegetation properties, including primary production; seasonal net ecosystem production; vegetation cover; composition and height; fire regime; and runoff. The benchmarks are derived from remotely sensed gridded datasets and site-based observations. The datasets allow comparisons of annual average conditions and seasonal and inter-annual variability, and they allow the impact of spatial and temporal biases in means and variability to be assessed separately. Specifically designed metrics quantify model performance for each process, and are compared to scores based on the temporal or spatial mean value of the observations and a "random" model produced by bootstrap resampling of the observations. The benchmark system is applied to three models: a simple light-use efficiency and water-balance model (the Simple Diagnostic Biosphere Model: SDBM), the Lund-Potsdam-Jena (LPJ) and Land Processes and eXchanges (LPX) dynamic global vegetation models (DGVMs). In general, the SDBM performs better than either of the DGVMs. It reproduces independent measurements of net primary production (NPP) but underestimates the amplitude of the observed CO2 seasonal cycle. The two DGVMs show little difference for most benchmarks (including the inter-annual variability in the growth rate and seasonal cycle of atmospheric CO2), but LPX represents burnt fraction demonstrably more accurately. Benchmarking also identified several weaknesses common to both DGVMs. The benchmarking system provides a quantitative approach for evaluating how adequately processes are represented in a model, identifying errors and biases, tracking improvements in performance through model development, and discriminating among models. Adoption of such a system would do much to improve confidence in terrestrial model predictions of climate change impacts and feedbacks.
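
    The scoring scheme described in this record, where a model's metric score is compared both to a mean-value model and to a "random" model produced by bootstrap resampling of the observations, can be sketched as follows. The metric used here (a normalized mean error) and all function names are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def nme(obs, sim):
    """Normalized mean error: mean |error|, scaled by the mean absolute
    deviation of the observations. A mean-value model scores 1 by construction."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.mean(np.abs(obs - sim)) / np.mean(np.abs(obs - obs.mean()))

def random_model_score(obs, n_boot=1000):
    """Average score of a 'random' model: the observations resampled
    with replacement, scored against themselves n_boot times."""
    obs = np.asarray(obs, float)
    scores = [nme(obs, rng.choice(obs, size=obs.size, replace=True))
              for _ in range(n_boot)]
    return float(np.mean(scores))

# A simulation adds value when its score beats both null benchmarks:
# the mean model (score 1.0) and the bootstrap random model.
```

    In this sketch, lower scores are better; a model whose NME exceeds the bootstrap score carries no more information about the observations than shuffled data.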

  15. A comprehensive benchmarking system for evaluating global vegetation models

    NASA Astrophysics Data System (ADS)

    Kelley, D. I.; Prentice, I. Colin; Harrison, S. P.; Wang, H.; Simard, M.; Fisher, J. B.; Willis, K. O.

    2012-11-01

    We present a benchmark system for global vegetation models. This system provides a quantitative evaluation of multiple simulated vegetation properties, including primary production; seasonal net ecosystem production; vegetation cover, composition and height; fire regime; and runoff. The benchmarks are derived from remotely sensed gridded datasets and site-based observations. The datasets allow comparisons of annual average conditions and seasonal and inter-annual variability, and they allow the impact of spatial and temporal biases in means and variability to be assessed separately. Specifically designed metrics quantify model performance for each process, and are compared to scores based on the temporal or spatial mean value of the observations and a "random" model produced by bootstrap resampling of the observations. The benchmark system is applied to three models: a simple light-use efficiency and water-balance model (the Simple Diagnostic Biosphere Model: SDBM), and the Lund-Potsdam-Jena (LPJ) and Land Processes and eXchanges (LPX) dynamic global vegetation models (DGVMs). SDBM reproduces the observed CO2 seasonal cycle, but its simulated net primary production (NPP) is higher than independent measurements. The two DGVMs show little difference for most benchmarks (including the inter-annual variability in the growth rate and seasonal cycle of atmospheric CO2), but LPX represents burnt fraction demonstrably more accurately. Benchmarking also identified several weaknesses common to both DGVMs. The benchmarking system provides a quantitative approach for evaluating how adequately processes are represented in a model, identifying errors and biases, tracking improvements in performance through model development, and discriminating among models. Adoption of such a system would do much to improve confidence in terrestrial model predictions of climate change impacts and feedbacks.

  16. Evaluation of mycobacterial virulence using rabbit skin liquefaction model.

    PubMed

    Zhang, Guoping; Zhu, Bingdong; Shi, Wanliang; Wang, Mingzhu; Da, Zejiao; Zhang, Ying

    2010-01-01

    Liquefaction is an important pathological process that can subsequently lead to cavitation, in which large numbers of bacilli can be coughed up, in turn spreading tuberculosis in humans. Current animal models to study the liquefaction process and to evaluate the virulence of mycobacteria are tedious. In this study, we evaluated a rabbit skin model as a rapid model for liquefaction and virulence assessment using M. bovis BCG, M. tuberculosis avirulent strain H37Ra, M. smegmatis, and H37Ra strains complemented with selected genes from virulent M. tuberculosis strain H37Rv. We found that with prime and/or boosting immunization, all of these live bacteria at sufficiently high numbers could induce liquefaction, and boosting induced stronger liquefaction and more severe lesions in a shorter time compared with the prime injection. The skin lesions caused by high-dose live BCG (5×10^6) were the most severe, followed by live M. tuberculosis H37Ra, with M. smegmatis being the least pathogenic. It is of interest to note that none of the above heat-killed mycobacteria induced liquefaction. When H37Ra was complemented with certain wild-type genes of H37Rv, some of the complemented H37Ra strains produced more severe skin lesions than H37Ra. These results suggest that the rabbit skin liquefaction model can be a more visual, convenient, rapid, and useful model to evaluate the virulence of different mycobacteria and to study the mechanisms of liquefaction.

  17. Toward diagnostic model calibration and evaluation: Approximate Bayesian computation

    NASA Astrophysics Data System (ADS)

    Vrugt, Jasper A.; Sadegh, Mojtaba

    2013-07-01

    The ever-increasing pace of computational power, along with continued advances in measurement technologies and improvements in process understanding, has stimulated the development of increasingly complex hydrologic models that simulate soil moisture flow, groundwater recharge, surface runoff, root water uptake, and river discharge at different spatial and temporal scales. Reconciling these high-order system models with perpetually larger volumes of field data is becoming more and more difficult, particularly because classical likelihood-based fitting methods lack the power to detect and pinpoint deficiencies in the model structure. Gupta et al. (2008) have recently proposed steps (amongst others) toward the development of a more robust and powerful method of model evaluation. Their diagnostic approach uses signature behaviors and patterns observed in the input-output data to illuminate to what degree a representation of the real world has been adequately achieved and how the model should be improved for the purpose of learning and scientific discovery. In this paper, we introduce approximate Bayesian computation (ABC) as a vehicle for diagnostic model evaluation. This statistical methodology relaxes the need for an explicit likelihood function in favor of one or multiple summary statistics rooted in hydrologic theory that together have a clearer and more compelling diagnostic power than some average measure of the size of the error residuals. Two illustrative case studies are used to demonstrate that ABC is relatively easy to implement, and readily employs signature-based indices to analyze and pinpoint which part of the model is malfunctioning and in need of further improvement.

  18. Evaluating a cognitive model of ALDH2 and drinking behavior

    PubMed Central

    Hendershot, Christian S.; Witkiewitz, Katie; George, William H.; Wall, Tamara L.; Otto, Jacqueline M.; Liang, Tiebing; Larimer, Mary E.

    2010-01-01

    Background Despite evidence for genetic influences on alcohol use and alcohol-related cognitions, genetic factors and endophenotypes are rarely incorporated in cognitive models of drinking behavior. This study evaluated a model of ALDH2 and drinking behavior stipulating cognitive factors and alcohol sensitivity as accounting for genetic influences on drinking outcomes. Methods Participants were Asian-American young adults (n = 171) who completed measures of alcohol cognitions (drinking motives, drinking refusal self-efficacy, and alcohol expectancies), alcohol sensitivity, drinking behavior, and alcohol-related problems as part of a prospective study. Structural equation modeling (SEM) evaluated a model of drinking behavior that stipulated indirect effects of ALDH2 on drinking outcomes through cognitive variables and alcohol sensitivity. Results The full model provided an adequate fit to the observed data, with the measurement model explaining 63% of the variance in baseline heavy drinking and 50% of the variance in alcohol-related problems at follow-up. Associations of ALDH2 with cognitive factors and alcohol sensitivity were significant, whereas the association of ALDH2 with drinking was not significant with these factors included in the model. Mediation tests indicated significant indirect effects of ALDH2 through drinking motives, drinking refusal self-efficacy, and alcohol sensitivity. Conclusions Results are consistent with the perspective that genetic influences on drinking behavior can be partly explained by learning mechanisms and implicate cognitive factors as important for characterizing associations of ALDH2 and drinking. PMID:21039630

  19. Evaluating climate models: Should we use weather or climate observations?

    SciTech Connect

    Oglesby, Robert J; Erickson III, David J

    2009-12-01

    Calling the numerical models that we use for simulations of climate change 'climate models' is a bit of a misnomer. These 'general circulation models' (GCMs, AKA global climate models) and their cousins the 'regional climate models' (RCMs) are actually physically-based weather simulators. That is, these models simulate, either globally or locally, daily weather patterns in response to some change in forcing or boundary condition. These simulated weather patterns are then aggregated into climate statistics, very much as we aggregate observations into 'real climate statistics'. Traditionally, the output of GCMs has been evaluated using climate statistics, as opposed to their ability to simulate realistic daily weather observations. At the coarse global scale this may be a reasonable approach; however, as RCMs downscale to increasingly high resolutions, the conjunction between weather and climate becomes more problematic. We present results from a series of present-day climate simulations using the WRF ARW for domains that cover North America, much of Latin America, and South Asia. The basic domains are at a 12 km resolution, but several inner domains at 4 km have also been simulated. These include regions of complex topography in Mexico, Colombia, Peru, and Sri Lanka, as well as a region of low topography and fairly homogeneous land surface type (the U.S. Great Plains). Model evaluations are performed using standard climate analyses (e.g., reanalyses; NCDC data) but also using time series of daily station observations. Preliminary results suggest little difference in the assessment of long-term mean quantities, but the variability on seasonal and interannual timescales is better described. Furthermore, the value added by using daily weather observations as an evaluation tool increases with the model resolution.

  20. Evaluating Climate Models: Should We Use Weather or Climate Observations?

    NASA Astrophysics Data System (ADS)

    Oglesby, R. J.; Rowe, C. M.; Maasch, K. A.; Erickson, D. J.; Hays, C.

    2009-12-01

    Calling the numerical models that we use for simulations of climate change 'climate models' is a bit of a misnomer. These 'general circulation models' (GCMs, AKA global climate models) and their cousins the 'regional climate models' (RCMs) are actually physically-based weather simulators. That is, these models simulate, either globally or locally, daily weather patterns in response to some change in forcing or boundary condition. These simulated weather patterns are then aggregated into climate statistics, very much as we aggregate observations into 'real climate statistics'. Traditionally, the output of GCMs has been evaluated using climate statistics, as opposed to their ability to simulate realistic daily weather observations. At the coarse global scale this may be a reasonable approach; however, as RCMs downscale to increasingly high resolutions, the conjunction between weather and climate becomes more problematic. We present results from a series of present-day climate simulations using the WRF ARW for domains that cover North America, much of Latin America, and South Asia. The basic domains are at a 12 km resolution, but several inner domains at 4 km have also been simulated. These include regions of complex topography in Mexico, Colombia, Peru, and Sri Lanka, as well as a region of low topography and fairly homogeneous land surface type (the U.S. Great Plains). Model evaluations are performed using standard climate analyses (e.g., reanalyses; NCDC data) but also using time series of daily station observations. Preliminary results suggest little difference in the assessment of long-term mean quantities, but the variability on seasonal and interannual timescales is better described. Furthermore, the value added by using daily weather observations as an evaluation tool increases with the model resolution.

  1. Distributed multi-criteria model evaluation and spatial association analysis

    NASA Astrophysics Data System (ADS)

    Scherer, Laura; Pfister, Stephan

    2015-04-01

    Model performance, if evaluated, is often communicated by a single indicator and at an aggregated level; however, this does not capture the trade-offs between different indicators or the inherent spatial heterogeneity of model efficiency. In this study, we simulated the water balance of the Mississippi watershed using the Soil and Water Assessment Tool (SWAT). The model was calibrated against monthly river discharge at 131 measurement stations. Its time series were bisected to allow for subsequent validation at the same gauges. Furthermore, the model was validated against evapotranspiration, which was available as a continuous raster based on remote sensing. The model performance was evaluated for each of the 451 sub-watersheds using four different criteria: 1) Nash-Sutcliffe efficiency (NSE), 2) percent bias (PBIAS), 3) root mean square error (RMSE) normalized to standard deviation (RSR), as well as 4) a combined indicator of the squared correlation coefficient and the linear regression slope (bR2). Conditions that might lead to poor model performance include aridity, very flat or steep relief, snowfall, and dams, as indicated by previous research. In an attempt to explain spatial differences in model efficiency, the goodness of the model was spatially compared to these four phenomena by means of a bivariate spatial association measure which combines Pearson's correlation coefficient and Moran's index for spatial autocorrelation. In order to assess the model performance of the Mississippi watershed as a whole, three different averages of the sub-watershed results were computed by 1) applying equal weights, 2) weighting by the mean observed river discharge, and 3) weighting by the upstream catchment area and the square root of the time series length. Ratings of model performance differed significantly in space and according to efficiency criterion. The model performed much better in the humid Eastern region than in the arid Western region, which was confirmed by the
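
    The four efficiency criteria named in this record are standard in hydrological model evaluation and can be sketched as below. The function names are illustrative, and the bR2 weighting convention (penalizing R^2 by the regression slope's deviation from 1) follows common practice rather than this specific study.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect; 0 matches the mean of the
    observations; negative values are worse than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias; with this sign convention, positive values indicate
    that the model underestimates the observations on average."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

def rsr(obs, sim):
    """RMSE normalized by the standard deviation of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.sqrt(np.mean((obs - sim) ** 2)) / obs.std()

def br2(obs, sim):
    """R^2 weighted by the slope b of the sim-vs-obs regression, so that
    a well-correlated but biased model cannot score a perfect 1."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    b = np.polyfit(obs, sim, 1)[0]
    r2 = np.corrcoef(obs, sim)[0, 1] ** 2
    return abs(b) * r2 if abs(b) <= 1 else r2 / abs(b)
```

    Each function takes paired arrays of observed and simulated monthly discharge; computing them per sub-watershed yields the kind of spatially distributed performance maps the study describes.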

  2. Evaluation of Generation Alternation Models in Evolutionary Robotics

    NASA Astrophysics Data System (ADS)

    Oiso, Masashi; Matsumura, Yoshiyuki; Yasuda, Toshiyuki; Ohkura, Kazuhiro

    For efficient implementation of Evolutionary Algorithms (EAs) in a desktop grid computing environment, we propose a new generation alternation model called Grid-Oriented-Deletion (GOD), based on comparison with conventional techniques. In previous research, generation alternation models have generally been evaluated using test functions; their exploration performance on real problems such as Evolutionary Robotics (ER) has not yet been made clear. We therefore investigate the relationship between the exploration performance of an EA on an ER problem and its generation alternation model. We applied four generation alternation models to Evolutionary Multi-Robotics (EMR), a package-pushing problem, to investigate their exploration performance. The results show that GOD is more effective than the other conventional models.

  3. Human Modeling Evaluations in Microgravity Workstation and Restraint Development

    NASA Technical Reports Server (NTRS)

    Whitmore, Mihriban; Chmielewski, Cynthia; Wheaton, Aneice; Hancock, Lorraine; Beierle, Jason; Bond, Robert L. (Technical Monitor)

    1999-01-01

    The International Space Station (ISS) will provide long-term missions that will enable astronauts to live, work, and conduct research in a microgravity environment. The dominant factor in space affecting the crew is "weightlessness," which creates a challenge for establishing workstation microgravity design requirements. The crewmembers will work at various workstations such as the Human Research Facility (HRF), Microgravity Sciences Glovebox (MSG), and Life Sciences Glovebox (LSG). Since the crew will spend a considerable amount of time at these workstations, it is critical that ergonomic design requirements be an integral part of the design and development effort. To achieve this goal, the Space Human Factors Laboratory in the Johnson Space Center Flight Crew Support Division has been tasked to conduct integrated evaluations of workstations and associated crew restraints. Thus, a two-phase approach was used: 1) ground and microgravity evaluations of the physical dimensions and layout of the workstation components, and 2) human modeling analyses of the user interface. Computer-based human modeling evaluations were an important part of the approach throughout the design and development process. Human modeling during the conceptual design phase included crew reach and accessibility of individual equipment, as well as crew restraint needs. During later design phases, human modeling has been used in conjunction with ground reviews and microgravity evaluations of the mock-ups in order to verify the human factors requirements. (Specific examples will be discussed.) This two-phase approach was the most efficient method to determine ergonomic design characteristics for workstations and restraints. The real-time evaluations provided hands-on implementation in a microgravity environment. On the other hand, only a limited number of participants could be tested. The human modeling evaluations provided a more detailed analysis of the setup. 
The issues identified

  4. Evaluation of Black Carbon Estimations in Global Aerosol Models

    SciTech Connect

    Koch, D.; Schulz, M.; Kinne, Stefan; McNaughton, C. S.; Spackman, J. R.; Balkanski, Y.; Bauer, S.; Berntsen, T.; Bond, Tami C.; Boucher, Olivier; Chin, M.; Clarke, A. D.; De Luca, N.; Dentener, F.; Diehl, T.; Dubovik, O.; Easter, Richard C.; Fahey, D. W.; Feichter, J.; Fillmore, D.; Freitag, S.; Ghan, Steven J.; Ginoux, P.; Gong, S.; Horowitz, L.; Iversen, T.; Kirkevag, A.; Klimont, Z.; Kondo, Yutaka; Krol, M.; Liu, Xiaohong; Miller, R.; Montanaro, V.; Moteki, N.; Myhre, G.; Penner, J.; Perlwitz, Ja; Pitari, G.; Reddy, S.; Sahu, L.; Sakamoto, H.; Schuster, G.; Schwarz, J. P.; Seland, O.; Stier, P.; Takegawa, Nobuyuki; Takemura, T.; Textor, C.; van Aardenne, John; Zhao, Y.

    2009-11-27

    We evaluate black carbon (BC) model predictions from the AeroCom model intercomparison project by considering the diversity among year 2000 model simulations and comparing model predictions with available measurements. These model-measurement intercomparisons include BC surface and aircraft concentrations, aerosol absorption optical depth (AAOD) from AERONET and OMI retrievals, and BC column estimations based on AERONET. In regions other than Asia, most models are biased high compared to surface concentration measurements. However, compared with (column) AAOD or BC burden retrievals, the models are generally biased low. The average ratio of model to retrieved AAOD is less than 0.7 in South American and 0.6 in African biomass burning regions; both of these regions lack surface concentration measurements. In Asia the average model-to-observed ratio is 0.6 for AAOD and 0.5 for BC surface concentrations. Compared with aircraft measurements over the Americas at latitudes between 0 and 50N, the average model is a factor of 10 larger than observed, and most models exceed the measured BC standard deviation in the mid to upper troposphere. At higher latitudes the average model-to-aircraft BC ratio is 0.6, and models underestimate the observed BC loading in the lower and middle troposphere associated with springtime Arctic haze. Low model bias for AAOD but overestimation of surface and upper-atmospheric BC concentrations at lower latitudes suggests that most models are underestimating BC absorption and should improve estimates for refractive index, particle size, and optical effects of BC coating. Retrieval uncertainties and/or differences with model diagnostic treatment may also contribute to the model-measurement disparity. The largest AeroCom model diversity occurred in northern Eurasia and the remote Arctic, regions influenced by anthropogenic sources. Changing emissions, aging, removal, or optical properties within a single model generated a smaller change in model predictions than the

  5. Criteria for the evaluation of studies in transgenic models.

    PubMed

    Popp, J A

    2001-01-01

    The generation, evaluation, and presentation of data from the ILSI Alternatives to Carcinogenicity Testing (ACT) program was standardized to ensure that the results of studies performed in multiple laboratories could be reliably compared. To this end, standardized experimental protocols, tissue collection procedures, histopathology nomenclature, diagnoses, and terminology were employed by study participants. In the experimental phase, this approach provided important cross-model consistency. To ensure comparability in the data evaluation phase of the project, interpretive criteria were defined to allow the characterization of study outcome as positive, negative, or equivocal with regard to carcinogenic response. These criteria helped to provide consistency across models because separate Assay Working Groups were established to evaluate the results of each model. To organize and compile the data from the ILSI ACT program, a database has been developed and data were entered in a standardized format to facilitate cross- and intramodel comparisons. In summary, the early development of standardized test protocols, evaluation procedures, and interpretive criteria has resulted in a data set in which users can have a high level of assurance that results in the database reflect consistently applied experimental and interpretive guidelines.

  6. Evaluation of reliability modeling tools for advanced fault tolerant systems

    NASA Technical Reports Server (NTRS)

    Baker, Robert; Scheper, Charlotte

    1986-01-01

    The Computer Aided Reliability Estimation (CARE III) and Automated Reliability Interactive Estimation System (ARIES 82) reliability tools were evaluated for application to advanced fault-tolerant aerospace systems. To determine reliability modeling requirements, the evaluation focused on the Draper Laboratories' Advanced Information Processing System (AIPS) architecture as an example architecture for fault-tolerant aerospace systems. Advantages and limitations were identified for each reliability evaluation tool. The CARE III program was designed primarily for analyzing ultrareliable flight control systems. The ARIES 82 program's primary use was to support university research and teaching. Neither CARE III nor ARIES 82 was suited for determining the reliability of complex nodal networks of the type used to interconnect processing sites in the AIPS architecture. It was concluded that ARIES was not suitable for modeling advanced fault-tolerant systems. It was further concluded that, subject to some limitations (the difficulty in modeling systems with unpowered spare modules, systems where equipment maintenance must be considered, systems where failure depends on the sequence in which faults occurred, and systems where multiple faults beyond double near-coincident faults must be considered), CARE III is best suited for evaluating the reliability of advanced fault-tolerant systems for air transport.

  7. Evaluation of Medical Education virtual Program: P3 model

    PubMed Central

    REZAEE, RITA; SHOKRPOUR, NASRIN; BOROUMAND, MARYAM

    2016-01-01

    Introduction: In e-learning, people get involved in a process and create the content (product) and make it available for virtual learners. The present study was carried out in order to evaluate the first virtual master program in medical education at Shiraz University of Medical Sciences according to the P3 Model. Methods: This is an evaluation research study with a single-group posttest design used to determine how effective this program was. All 60 students who had participated for more than one year in this virtual program and 21 experts, including teachers and directors, took part in this evaluation project. Based on the P3 e-learning model, an evaluation tool with a 5-point Likert rating scale was designed and applied to collect the descriptive data. Results: Students reported storyboard and course design as the most desirable element of the learning environment (2.30±0.76), but they declared technical support as the least desirable part (1.17±1.23). Conclusion: The presence of such a framework, applied through appropriate evaluation tools for e-learning in universities and higher education institutes that offer e-learning curricula in the country, may contribute to the efficient implementation of present and future e-learning curricula and guarantee their appropriate implementation. PMID:27795971

  8. Evaluation of Stratospheric Transport in New 3D Models Using the Global Modeling Initiative Grading Criteria

    NASA Technical Reports Server (NTRS)

    Strahan, Susan E.; Douglass, Anne R.; Einaudi, Franco (Technical Monitor)

    2001-01-01

    The Global Modeling Initiative (GMI) Team developed objective criteria for model evaluation in order to identify the best representation of the stratosphere. This work created a method to quantitatively and objectively discriminate between different models. In the original GMI study, 3 different meteorological data sets were used to run an offline chemistry and transport model (CTM). Observationally-based grading criteria were derived and applied to these simulations and various aspects of stratospheric transport were evaluated; grades were assigned. Here we report on the application of the GMI evaluation criteria to CTM simulations integrated with a new assimilated wind data set and a new general circulation model (GCM) wind data set. The Finite Volume Community Climate Model (FV-CCM) is a new GCM developed at Goddard which uses the NCAR CCM physics and the Lin and Rood advection scheme. The FV-Data Assimilation System (FV-DAS) is a new data assimilation system which uses the FV-CCM as its core model. One-year CTM simulations at 2.5 degrees longitude by 2 degrees latitude resolution were run for each wind data set. We present the evaluation of temperature and annual transport cycles in the lower and middle stratosphere in the two new CTM simulations. We include an evaluation of high latitude transport which was not part of the original GMI criteria. Grades for the new simulations will be compared with those assigned during the original GMI evaluations and areas of improvement will be identified.

  9. Evaluation of regional climate simulations for air quality modelling purposes

    NASA Astrophysics Data System (ADS)

    Menut, Laurent; Tripathi, Om P.; Colette, Augustin; Vautard, Robert; Flaounas, Emmanouil; Bessagnet, Bertrand

    2013-05-01

    In order to evaluate the future potential benefits of emission regulation on regional air quality, while taking into account the effects of climate change, off-line air quality projection simulations are driven using weather forcing taken from regional climate models. These regional models are themselves driven by simulations carried out using global climate models (GCM) and economic scenarios. Uncertainties and biases in climate models introduce an additional "climate modeling" source of uncertainty that is to be added to all other types of uncertainties in air quality modeling for policy evaluation. In this article we evaluate the changes in air quality-related weather variables induced by replacing reanalysis forcing with GCM forcing in regional climate simulations. As an example we use simulations carried out in the framework of the ERA-Interim programme and of the CMIP5 project using the Institut Pierre-Simon Laplace climate model (IPSLcm), driving regional simulations performed in the framework of the EURO-CORDEX programme. In summer, we found compensating deficiencies acting on photochemistry in GCM-driven weather: a positive bias in short-wave radiation, a negative bias in wind speed, too many stagnant episodes, and a negative temperature bias. In winter, air quality is mostly driven by dispersion, and we could not identify significant differences in either wind or planetary boundary layer height statistics between GCM-driven and reanalysis-driven regional simulations. However, precipitation appears largely overestimated in GCM-driven simulations, which could significantly affect the simulation of aerosol concentrations. The identification of these biases will help in interpreting the results of future air quality simulations using these data. Despite these biases, we conclude that the identified differences should not lead to major difficulties in using GCM-driven regional climate simulations for air quality projections.

  10. Evaluating the adaptive-filter model of the cerebellum.

    PubMed

    Dean, Paul; Porrill, John

    2011-07-15

    The adaptive-filter model of the cerebellar microcircuit is in widespread use, combining as it does an explanation of key microcircuit features with well-specified computational power. Here we consider two methods for its evaluation. One is to test its predictions concerning relations between cerebellar inputs and outputs. Where the relevant experimental data are available, e.g. for the floccular role in image stabilization, the predictions appear to be upheld. However, for the majority of cerebellar microzones these data have yet to be obtained. The second method is to test model predictions about details of the microcircuit. We focus on features apparently incompatible with the model, in particular non-linear patterns in Purkinje cell simple-spike firing. Analysis of these patterns suggests the following three conclusions. (i) It is important to establish whether they can be observed during task-related behaviour. (ii) Highly non-linear models based on these patterns are unlikely to be universal, because they would be incompatible with the (approximately) linear nature of floccular function. (iii) The control tasks for which these models are computationally suited need to be identified. At present, therefore, the adaptive filter remains a candidate model of at least some cerebellar microzones, and its evaluation suggests promising lines for future enquiry.

  11. Evaluation of COMPASS ionospheric model in GNSS positioning

    NASA Astrophysics Data System (ADS)

    Wu, Xiaoli; Hu, Xiaogong; Wang, Gang; Zhong, Huijuan; Tang, Chengpan

    2013-03-01

    As important products of the GNSS navigation message, ionospheric delay model parameters are broadcast for single-frequency users to improve their positioning accuracy. GPS provides daily Klobuchar ionospheric model parameters based on a geomagnetic reference frame, while the regional satellite navigation system of China's COMPASS broadcasts an eight-parameter ionospheric model, the COMPASS Ionospheric Model (CIM), which was generated by processing data from continuous monitoring stations, with parameters updated every 2 h. To evaluate its performance, CIM predictions are compared to ionospheric delay measurements, along with GPS positioning accuracy comparisons. Analysis of real observed data indicates that CIM provides higher correction precision in middle-latitude regions, but relatively lower correction precision for low-latitude regions where the ionosphere has much higher variability. CIM errors for some users show a common bias for incoming COMPASS signals from different satellites, and hence ionospheric model errors are to some extent translated into the receivers' clock error estimation. In addition, the CIM from the China regional monitoring network is further evaluated for global ionospheric corrections. Results show that in Northern Hemisphere areas including Asia, Europe and North America, the three-dimensional positioning accuracy using the CIM for ionospheric delay corrections is improved by 7.8%-35.3% when compared to GPS single-frequency positioning using the Klobuchar model for ionospheric delay corrections. However, the positioning accuracy in the Southern Hemisphere is degraded, apparently due to the lack of monitoring stations there.

  12. Evaluation of the RIO-IFDM-street canyon model chain

    NASA Astrophysics Data System (ADS)

    Lefebvre, W.; Van Poppel, M.; Maiheu, B.; Janssen, S.; Dons, E.

    2013-10-01

    Integration of all relevant spatial scales in concentration modeling is important for assessing the European limit values for NO2. The local NO2 concentrations are influenced by the regional background, the local emissions and the street canyon effects. Therefore, it is important to consistently combine all these contributions in the model setup which is used for such an assessment. In this paper, we present the results of an integrated model chain, consisting of an advanced measurement interpolation model, a bi-Gaussian plume model and a canyon model, to simulate the street-level concentrations over the city of Antwerp, Belgium. The results of this model chain are evaluated against independent weekly averaged NO2 measurements at 49 locations in the city of Antwerp, during both a late autumn and a late spring week. It is shown that the model performed well, explaining between 62% and 87% of the spatial variance, with an RMSE between 5 and 6 μg m-3 and small biases. In addition to this overall validation, the performance of different components in the model chain is shown, in order to provide information on the importance of the different constituents.
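    The validation statistics quoted (explained spatial variance, RMSE, bias) can be computed directly from paired values; the weekly NO2 concentrations below are invented for illustration, not the Antwerp measurements:

```python
import numpy as np

# Hypothetical weekly-mean NO2 (ug/m3) at a handful of sites; illustrative only.
obs = np.array([38.0, 52.0, 41.0, 60.0, 35.0, 47.0])
mod = np.array([35.0, 55.0, 44.0, 57.0, 33.0, 50.0])

bias = np.mean(mod - obs)                     # mean model-minus-observed
rmse = np.sqrt(np.mean((mod - obs) ** 2))     # root-mean-square error
r2 = np.corrcoef(obs, mod)[0, 1] ** 2         # fraction of spatial variance explained
```

    The squared Pearson correlation across sites is one common reading of "spatial variance explained"; studies sometimes use the regression R2 instead, which differs when the fit is biased.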

  13. A Spectral Evaluation of Model Performance in Mediterranean Oak Woodlands

    NASA Astrophysics Data System (ADS)

    Vargas, R.; Baldocchi, D. D.; Abramowitz, G.; Carrara, A.; Correia, A.; Kobayashi, H.; Papale, D.; Pearson, D.; Pereira, J.; Piao, S.; Rambal, S.; Sonnentag, O.

    2009-12-01

    Ecosystem processes are influenced by climatic trends at multiple temporal scales, including diel patterns and other mid-term climatic modes such as interannual and seasonal variability. Because interactions between biophysical components of ecosystem processes are complex, it is important to test how models perform in the frequency domain (e.g. hours, days, weeks, months, years) and the time domain (i.e. day of the year), in addition to traditional tests of annual or monthly sums. Here we present a spectral evaluation, using wavelet time series analysis, of model performance in seven Mediterranean Oak Woodlands that encompass three deciduous and four evergreen sites. We tested the performance of five models (CABLE, ORCHIDEE, BEPS, Biome-BGC, and JULES) on measured variables of gross primary production (GPP) and evapotranspiration (ET). In general, model performance fails at intermediate periods (e.g. weeks to months), likely because these models do not represent the water-pulse dynamics that influence GPP and ET in these Mediterranean systems. To improve a model's performance it is critical to first identify where and when the model fails. Only by identifying where a model fails can we improve its performance, use it as a prognostic tool, and generate further hypotheses that can be tested by new experiments and measurements.
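    The study evaluates models per time scale using wavelets; a simpler FFT band-power comparison conveys the same idea. In the synthetic sketch below (all signals and periods are assumptions for illustration), the "model" reproduces the annual cycle but misses a monthly pulse, so it scores well in the annual band and poorly in the intermediate band:

```python
import numpy as np

t = np.arange(3 * 365)                        # three years of daily values
obs = np.sin(2 * np.pi * t / 365) + 0.3 * np.sin(2 * np.pi * t / 30)
mod = np.sin(2 * np.pi * t / 365)             # model misses the 30-day pulse

def band_power(x, lo_days, hi_days):
    """Total spectral power between two periods (in days)."""
    f = np.fft.rfftfreq(x.size, d=1.0)        # frequencies in cycles per day
    p = np.abs(np.fft.rfft(x)) ** 2
    sel = (f >= 1.0 / hi_days) & (f <= 1.0 / lo_days)
    return p[sel].sum()

annual_ratio = band_power(mod, 300, 430) / band_power(obs, 300, 430)
monthly_ratio = band_power(mod, 20, 45) / band_power(obs, 20, 45)
```

    A ratio near one means the model captures the variance at that scale; near zero flags the kind of intermediate-period failure the abstract reports.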

  14. Modeling the dynamics of evaluation: a multilevel neural network implementation of the iterative reprocessing model.

    PubMed

    Ehret, Phillip J; Monroe, Brian M; Read, Stephen J

    2015-05-01

    We present a neural network implementation of central components of the iterative reprocessing (IR) model. The IR model argues that the evaluation of social stimuli (attitudes, stereotypes) is the result of the IR of stimuli in a hierarchy of neural systems: The evaluation of social stimuli develops and changes over processing. The network has a multilevel, bidirectional feedback evaluation system that integrates initial perceptual processing and later developing semantic processing. The network processes stimuli (e.g., an individual's appearance) over repeated iterations, with increasingly higher levels of semantic processing over time. As a result, the network's evaluations of stimuli evolve. We discuss the implications of the network for a number of different issues involved in attitudes and social evaluation. The success of the network supports the IR model framework and provides new insights into attitude theory.

  15. SHEEP AS AN EXPERIMENTAL MODEL FOR BIOMATERIAL IMPLANT EVALUATION

    PubMed Central

    SARTORETTO, SUELEN CRISTINA; UZEDA, MARCELO JOSÉ; MIGUEL, FÚLVIO BORGES; NASCIMENTO, JHONATHAN RAPHAELL; ASCOLI, FABIO; CALASANS-MAIA, MÔNICA DIUANA

    2016-01-01

    ABSTRACT Objective: Based on a literature review and on our own experience, this study proposes sheep as an experimental model to evaluate the bioactive capacity of bone substitute biomaterials, dental implant systems and orthopedic devices. The literature review covered relevant databases available on the Internet from 1990 to date, and was supplemented by our own experience. Methods: For their resemblance in size and weight to humans, sheep are quite suitable for use as an experimental model. However, information about their utility as an experimental model is limited. The different stages involved in sheep experiments were discussed, including care during breeding and maintenance of the animals, obtaining specimens for laboratory processing, and highlighting that euthanasia of the animals at the end of the study is unnecessary, in accordance with the guidelines of the 3Rs Program. Results: All experiments were completed without any complications regarding the animals and allowed us to evaluate hypotheses and explain their mechanisms. Conclusion: The sheep is an excellent animal model for evaluation of biomaterials for bone regeneration and dental implant osseointegration. From an ethical point of view, one sheep allows for up to 12 implants per animal, permitting the animals to be kept alive at the end of the experiments. Level of Evidence II, Retrospective Study. PMID:28149193

  16. Incorporating principal component analysis into air quality model evaluation

    NASA Astrophysics Data System (ADS)

    Eder, Brian; Bash, Jesse; Foley, Kristen; Pleim, Jon

    2014-01-01

    The efficacy of standard air quality model evaluation techniques is becoming compromised as the simulation periods continue to lengthen in response to ever increasing computing capacity. Accordingly, the purpose of this paper is to demonstrate a statistical approach called Principal Component Analysis (PCA) with the intent of motivating its use by the evaluation community. One of the main objectives of PCA is to identify, through data reduction, the recurring and independent modes of variation (or signals) within a very large dataset, thereby summarizing the essential information of that dataset so that meaningful and descriptive conclusions can be made. In this demonstration, PCA is applied to a simple evaluation metric - the model bias associated with EPA's Community Multi-scale Air Quality (CMAQ) model when compared to weekly observations of sulfate (SO42-) and ammonium (NH4+) ambient air concentrations measured by the Clean Air Status and Trends Network (CASTNet). The advantages of using this technique are demonstrated as it identifies strong and systematic patterns of CMAQ model bias across a myriad of spatial and temporal scales that are neither constrained to geopolitical boundaries nor monthly/seasonal time periods (a limitation of many current studies). The technique also identifies locations (station-grid cell pairs) that are used as indicators for a more thorough diagnostic evaluation, thereby hastening and facilitating understanding of the probable mechanisms responsible for the unique behavior among bias regimes. A sampling of results indicates that biases are still prevalent in both SO42- and NH4+ simulations and can be attributed to either: 1) cloud processes in the meteorological model utilized by CMAQ, which is found to overestimate convective clouds and precipitation while underestimating larger-scale resolved clouds that are less likely to precipitate, and 2) biases associated with Midwest NH3 emissions, which may be partially ameliorated.
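    PCA of a model-bias matrix, as described here, reduces to a singular value decomposition of the centered time-by-station array; the leading mode then gives the dominant shared bias signal and its share of the variance. The data below are synthetic stand-ins for a CMAQ-CASTNet bias field:

```python
import numpy as np

rng = np.random.default_rng(42)
weeks, stations = 104, 30
seasonal = np.sin(2 * np.pi * np.arange(weeks) / 52)   # shared seasonal bias
loadings = rng.uniform(0.5, 1.5, stations)             # per-station amplitude
bias = np.outer(seasonal, loadings) + 0.1 * rng.standard_normal((weeks, stations))

anom = bias - bias.mean(axis=0)                        # center each station
u, s, vt = np.linalg.svd(anom, full_matrices=False)    # PCA via SVD
explained = s ** 2 / np.sum(s ** 2)                    # variance fractions
pc1_score = u[:, 0] * s[0]                             # leading temporal mode
```

    With a strong shared signal, the first component should dominate `explained`, and `vt[0]` indicates which stations load on that bias regime.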

  17. The Acoustic Model Evaluation Committee (AMEC) Reports. Volume 1. Model Evaluation Methodology and Implementation

    DTIC Science & Technology

    1982-09-01

    Distribution limited to U.S. Government agencies only; other requests for this document must be referred to the Commanding Officer, Naval Ocean Science and Technology Laboratory, NSTL Station, MS 39529. The report references acoustic model evaluation exercises including JAGUAR, HEARING STAKE, JOAST, ATOE, and IOMEDEX.

  18. Postural effects on intracranial pressure: modeling and clinical evaluation.

    PubMed

    Qvarlander, Sara; Sundström, Nina; Malm, Jan; Eklund, Anders

    2013-11-01

    The physiological effect of posture on intracranial pressure (ICP) is not well described. This study defined and evaluated three mathematical models describing the postural effects on ICP, designed to predict ICP at different head-up tilt angles from the supine ICP value. Model I was based on a hydrostatic indifference point for the cerebrospinal fluid (CSF) system, i.e., the existence of a point in the system where pressure is independent of body position. Models II and III were based on Davson's equation for CSF absorption, which relates ICP to venous pressure, and postulated that gravitational effects within the venous system are transferred to the CSF system. Model II assumed a fully communicating venous system, and model III assumed that collapse of the jugular veins at higher tilt angles creates two separate hydrostatic compartments. Evaluation of the models was based on ICP measurements at seven tilt angles (0-71°) in 27 normal pressure hydrocephalus patients. ICP decreased with tilt angle (ANOVA: P < 0.01). The reduction was well predicted by model III (ANOVA lack-of-fit: P = 0.65), which showed excellent fit against measured ICP. Neither model I nor II adequately described the reduction in ICP (ANOVA lack-of-fit: P < 0.01). Postural changes in ICP could not be predicted based on the currently accepted theory of a hydrostatic indifference point for the CSF system, but a new model combining Davson's equation for CSF absorption and hydrostatic gradients in a collapsible venous system performed well and can be useful in future research on gravity and CSF physiology.
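    A rough sketch of the two-compartment idea behind model III: below a collapse angle the hydrostatic pressure drop grows with the full venous column, while above it the segment beyond the jugular collapse point stops contributing further. All lengths, the collapse angle, and the pressure-per-cm constant below are hypothetical placeholders, not the paper's fitted parameterization:

```python
import math

RHO_G = 0.078  # approx. mmHg per cm of fluid column (assumed value)

def icp_tilt(icp_supine_mmHg, alpha_deg, l_total_cm=30.0,
             l_below_collapse_cm=10.0, collapse_deg=20.0):
    """Predict ICP at head-up tilt angle alpha from the supine value."""
    a = math.radians(alpha_deg)
    if alpha_deg < collapse_deg:
        # Fully communicating venous system: one hydrostatic compartment.
        drop = RHO_G * l_total_cm * math.sin(a)
    else:
        # Jugular collapse: the upper segment's contribution is frozen
        # at the collapse angle, only the lower segment keeps growing.
        drop = RHO_G * (l_below_collapse_cm * math.sin(a)
                        + (l_total_cm - l_below_collapse_cm)
                        * math.sin(math.radians(collapse_deg)))
    return icp_supine_mmHg - drop

predictions = [icp_tilt(11.0, a) for a in (0, 20, 45, 71)]
```

    The piecewise form makes the predicted ICP fall more slowly past the collapse angle, which is the qualitative behavior the tilt-table data supported.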

  19. Evaluating Topic Model Interpretability from a Primary Care Physician Perspective

    PubMed Central

    Arnold, Corey W.; Oh, Andrea; Chen, Shawn; Speier, William

    2015-01-01

    Background and Objective Probabilistic topic models provide an unsupervised method for analyzing unstructured text. These models discover semantically coherent combinations of words (topics) that could be integrated in a clinical automatic summarization system for primary care physicians performing chart review. However, the human interpretability of topics discovered from clinical reports is unknown. Our objective is to assess the coherence of topics and their ability to represent the contents of clinical reports from a primary care physician’s point of view. Methods Three latent Dirichlet allocation models (50 topics, 100 topics, and 150 topics) were fit to a large collection of clinical reports. Topics were manually evaluated by primary care physicians and graduate students. Wilcoxon Signed-Rank Tests for Paired Samples were used to evaluate differences between different topic models, while differences in performance between students and primary care physicians (PCPs) were tested using Mann-Whitney U tests for each of the tasks. Results While the 150-topic model produced the best log likelihood, participants were most accurate at identifying words that did not belong in topics learned by the 100-topic model, suggesting that 100 topics provides better relative granularity of discovered semantic themes for the data set used in this study. Models were comparable in their ability to represent the contents of documents. Primary care physicians significantly outperformed students in both tasks. Conclusion This work establishes a baseline of interpretability for topic models trained with clinical reports, and provides insights on the appropriateness of using topic models for informatics applications. Our results indicate that PCPs find discovered topics more coherent and representative of clinical reports relative to students, warranting further research into their use for automatic summarization. PMID:26614020
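    Topic interpretability of the kind evaluated here is commonly scored with a word-intrusion task: raters try to spot a planted word that does not belong in a topic, and "model precision" is the fraction who succeed. A minimal sketch of the scoring step, with synthetic rater responses (not the study's data):

```python
# topic_id -> (planted intruder word, the word each rater picked)
responses = {
    0: ("banana", ["banana", "banana", "troponin", "banana"]),
    1: ("ecg",    ["knee", "ecg", "ecg", "ecg"]),
}

def model_precision(responses):
    """Fraction of raters who identified the intruder, per topic."""
    scores = {}
    for topic, (intruder, picks) in responses.items():
        scores[topic] = sum(p == intruder for p in picks) / len(picks)
    return scores

mp = model_precision(responses)
```

    Comparing these per-topic precisions between rater groups (e.g. PCPs vs. students) is the kind of contrast the Mann-Whitney U tests in the study address.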

  20. Methods for evaluating the predictive accuracy of structural dynamic models

    NASA Technical Reports Server (NTRS)

    Hasselman, Timothy K.; Chrostowski, Jon D.

    1991-01-01

    Modeling uncertainty is defined in terms of the difference between predicted and measured eigenvalues and eigenvectors. Data compiled from 22 sets of analysis/test results were used to create statistical databases for large truss-type space structures and both pretest and posttest models of conventional satellite-type space structures. Modeling uncertainty is propagated through the model to produce intervals of uncertainty on frequency response functions, both amplitude and phase. This methodology was used successfully to evaluate the predictive accuracy of several structures, including the NASA CSI Evolutionary Structure tested at Langley Research Center. Test measurements for this structure were within ± one-sigma intervals of predicted accuracy for the most part, demonstrating the validity of the methodology and computer code.

  1. obs4MIPS: Satellite Datasets for Model Evaluation

    NASA Astrophysics Data System (ADS)

    Ferraro, R.; Waliser, D. E.; Gleckler, P. J.

    2013-12-01

    This poster will review the current status of the obs4MIPs project, whose purpose is to provide a limited collection of well-established and documented datasets for comparison with Earth system models. These datasets have been reformatted to correspond with the CMIP5 model output requirements, and include technical documentation specifically targeted for their use in model output evaluation. There are currently over 50 datasets containing observations that directly correspond to CMIP5 model output variables. We will review the rationale and requirements for obs4MIPs contributions, and provide summary information on the current obs4MIPs holdings on the Earth System Grid Federation. We will also provide some usage statistics, an update on governance for the obs4MIPs project, and plans for supporting CMIP6.

  2. An epidemic model to evaluate the homogeneous mixing assumption

    NASA Astrophysics Data System (ADS)

    Turnes, P. P.; Monteiro, L. H. A.

    2014-11-01

    Many epidemic models are written in terms of ordinary differential equations (ODE). This approach relies on the homogeneous mixing assumption; that is, the topological structure of the contact network established by the individuals of the host population is not relevant to predict the spread of a pathogen in this population. Here, we propose an epidemic model based on ODE to study the propagation of contagious diseases conferring no immunity. The state variables of this model are the percentages of susceptible individuals, infectious individuals and empty space. We show that this dynamical system can experience transcritical and Hopf bifurcations. Then, we employ this model to evaluate the validity of the homogeneous mixing assumption by using real data related to the transmission of gonorrhea, hepatitis C virus, human immunodeficiency virus, and obesity.
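    A minimal sketch of an SIS-type ODE model with an explicit empty-space fraction, integrated by forward Euler. The structure (births into empty space, transmission, recovery without immunity, mortality) follows the abstract's description, but the parameter names and values are illustrative, not those of the paper:

```python
def step(s, i, e, beta=0.5, gamma=0.1, b=0.2, d=0.05, dt=0.01):
    """One Euler step; s, i, e are population fractions summing to one."""
    ds = b * e * s - beta * s * i + gamma * i - d * s   # births, infection, recovery, death
    di = beta * s * i - gamma * i - d * i               # infection minus recovery/death
    s, i = s + dt * ds, i + dt * di
    return s, i, 1.0 - s - i                            # empty space closes the budget

s, i, e = 0.8, 0.1, 0.1
for _ in range(100_000):                                # integrate to t = 1000
    s, i, e = step(s, i, e)
```

    Because the state variables are fractions of sites, comparing the equilibrium (or oscillatory) infectious fraction against incidence data is one way to probe the homogeneous mixing assumption the paper evaluates.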

  3. scoringRules - A software package for probabilistic model evaluation

    NASA Astrophysics Data System (ADS)

    Lerch, Sebastian; Jordan, Alexander; Krüger, Fabian

    2016-04-01

    Models in the geosciences are generally surrounded by uncertainty, and being able to quantify this uncertainty is key to good decision making. Accordingly, probabilistic forecasts in the form of predictive distributions have become popular over the last decades. With the proliferation of probabilistic models arises the need for decision-theoretically principled tools to evaluate the appropriateness of models and forecasts in a generalized way. Various scoring rules have been developed over the past decades to address this demand. Proper scoring rules are functions S(F,y) which evaluate the accuracy of a forecast distribution F, given that an outcome y was observed. As such, they allow comparison of alternative models, a crucial ability given the variety of theories, data sources and statistical specifications that are available in many situations. This poster presents the software package scoringRules for the statistical programming language R, which contains functions to compute popular scoring rules such as the continuous ranked probability score for a variety of distributions F that come up in applied work. Two main classes are parametric distributions like normal, t, or gamma distributions, and distributions that are not known analytically, but are indirectly described through a sample of simulation draws. For example, Bayesian forecasts produced via Markov Chain Monte Carlo take this form. Thereby, the scoringRules package provides a framework for generalized model evaluation that includes both Bayesian and classical parametric models. The scoringRules package aims to be a convenient dictionary-like reference for computing scoring rules. We offer state-of-the-art implementations of several known (but not routinely applied) formulas, and implement closed-form expressions that were previously unavailable. Whenever more than one implementation variant exists, we offer statistically principled default choices.
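    scoringRules is an R package, but the closed-form scores it implements are easy to sketch elsewhere. For example, the continuous ranked probability score of a normal forecast N(mu, sigma^2) has the well-known closed form CRPS = sigma * [z(2*Phi(z) - 1) + 2*phi(z) - 1/sqrt(pi)] with z = (y - mu)/sigma; below is a Python sketch of that formula, not the package itself:

```python
import math

def crps_normal(mu, sigma, y):
    """Closed-form CRPS for a N(mu, sigma^2) forecast and outcome y."""
    z = (y - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)          # phi(z)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))               # Phi(z)
    return sigma * (z * (2 * cdf - 1) + 2 * pdf - 1 / math.sqrt(math.pi))

score = crps_normal(0.0, 1.0, 0.0)   # standard normal forecast, outcome at the mean
```

    Being a proper score, it penalizes outcomes far from the forecast: `crps_normal(0, 1, 2)` exceeds the score at the mean.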

  4. The establishment of the evaluation model for pupil's lunch suppliers

    NASA Astrophysics Data System (ADS)

    Lo, Chih-Yao; Hou, Cheng-I.; Ma, Rosa

    2011-10-01

    The aim of this study is the establishment of an evaluation model for the government-controlled private suppliers of school lunches in the public middle and primary schools of Miao-Li County. After the literature search is finished and the opinions of anonymous experts are integrated by the Modified Delphi Method, grading forms from relevant schools inside and outside Miao-Li County will first be collected and a layered evaluation structure will be constructed. Then, data analysis will be performed on the retrieved questionnaires, which are designed in accordance with the Analytic Hierarchy Process (AHP). Finally, the evaluation form for the government-controlled private suppliers can be constructed and presented, in the hope of benefiting the personnel in charge of school meal purchasing.
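    The AHP step relied on here derives criterion weights from a pairwise comparison matrix via its principal eigenvector, with a consistency check on the dominant eigenvalue. A sketch with an illustrative 3x3 matrix (not the study's data):

```python
import numpy as np

# Pairwise comparison matrix: A[i, j] says how much more important
# criterion i is than criterion j (reciprocal by construction).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)                       # dominant eigenvalue lambda_max
w = np.abs(vecs[:, k].real)
w /= w.sum()                                   # normalized priority weights
ci = (vals.real[k] - len(A)) / (len(A) - 1)    # consistency index; CI/RI < 0.1 is acceptable
```

    The resulting `w` orders the criteria; a consistency index near zero confirms the expert judgments are close to transitive.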

  5. The Application of a Model for the Evaluation of Educational Products.

    ERIC Educational Resources Information Center

    Bertram, Charles L.; And Others

    Papers presented at a symposium on "The Application of a Model for the Evaluation of Educational Products" are provided. The papers are: "A Model for the Evaluation of Educational Products" by Charles L. Bertram; "The Application of an Evaluation Model to a Preschool Intervention Program" by Brainard W. Hines; "An Evaluation Model for a Regional…

  6. A practical longitudinal model for evaluating growth in Gelbvieh cattle.

    PubMed

    Robbins, K R; Misztal, I; Bertrand, J K

    2005-01-01

    Genetic evaluation of growth in Gelbvieh beef cattle was examined by multiple-trait (MTM) and random regression (RRM) analysis. The data set comprised 541,108 animals with 1,120,086 records. Approximately 15% of the animals in the data set had at least one record measured outside of the accepted MTM age ranges for weaning weight (Wwt) and yearling weight (Ywt). Fourteen percent of Wwt records and 19% of Ywt records were measured outside the accepted ranges for MTM analysis, and thus were excluded from MTM evaluations. Two RRM evaluations were performed using cubic Legendre polynomials (RRML) and linear splines (RRMS) with three knots at 1, 205, and 365 d of age. Data Set 1 (d1) utilized all available records, whereas Data Set 2 (d2) included only records measured within MTM ranges (1 d, 160 to 250 d, and 320 to 410 d). The RRML models did not reach convergence until diagonalization was imposed. After diagonalization, it was found that all longitudinal models required fewer iterations to converge than the MTM. Correlations between the MTM, RRML-d2, and RRMS-d2 evaluations were ≥0.99 for all three traits, indicating that these models were equivalent when predicting breeding values from data within the MTM age ranges. Correlations between MTM, RRML-d1, and RRMS-d1 were >0.99 for Bwt and >0.95 for Wwt and Ywt. The lower correlations for Wwt and Ywt indicate that the added information does affect breeding value prediction. The RRM has the capability to incorporate records measured at all ages into genetic evaluations at a computing cost similar to the MTM.
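    The RRML covariates are cubic Legendre polynomials evaluated at standardized age; a sketch of building that design matrix for a few example ages (the age range and the sample ages are illustrative, not the study's exact setup):

```python
import numpy as np

ages = np.array([1.0, 205.0, 365.0, 410.0])        # record ages in days
a_min, a_max = 1.0, 410.0
x = 2.0 * (ages - a_min) / (a_max - a_min) - 1.0   # standardize to [-1, 1]

# Legendre polynomials P0..P3 evaluated at each standardized age.
Phi = np.column_stack([
    np.ones_like(x),                # P0
    x,                              # P1
    0.5 * (3 * x**2 - 1),           # P2
    0.5 * (5 * x**3 - 3 * x),       # P3
])
```

    Each animal's random regression coefficients multiply a row of `Phi`, which is how a single model accommodates weights recorded at any age rather than only within fixed MTM windows.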

  7. Photovoltaic performance models: an evaluation with actual field data

    NASA Astrophysics Data System (ADS)

    TamizhMani, Govindasamy; Ishioye, John-Paul; Voropayev, Arseniy; Kang, Yi

    2008-08-01

    Prediction of energy production is crucial to the design and installation of building-integrated photovoltaic systems. This prediction should be attainable from commonly available parameters such as system size, orientation, and tilt angle. Several commercially available as well as freely downloadable software tools exist to predict energy production. Six software models were evaluated in this study: PV Watts, PVsyst, MAUI, Clean Power Estimator, Solar Advisor Model (SAM), and RETScreen. The evaluation was done by comparing the monthly, seasonal, and annual predictions with actual field data obtained over a one-year period on a large number of residential PV systems ranging between 2 and 3 kWdc. All the systems are located in Arizona, within the Phoenix metropolitan area (latitude 33° North, longitude 112° West), and are all connected to the electrical grid.
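    The kind of first-order estimate these tools produce from system size and insolation can be sketched as below. The derate factor and sun-hours figures are assumed round numbers for illustration, not values from the study or from any particular tool:

```python
def monthly_energy_kwh(size_kwdc, sun_hours_per_day, days=30, derate=0.77):
    """Rough PVWatts-style monthly AC energy estimate.

    size_kwdc: DC nameplate rating; sun_hours_per_day: plane-of-array
    insolation in kWh/m^2/day (numerically equal to peak-sun hours);
    derate: overall DC-to-AC loss factor (an assumed typical value).
    """
    return size_kwdc * sun_hours_per_day * days * derate

# A hypothetical 2.5 kWdc Phoenix-area system at 6.5 peak-sun hours:
print(round(monthly_energy_kwh(2.5, 6.5), 1))
```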

  8. Modelling the cooking doneness via integrating sensory evaluation and kinetics.

    PubMed

    Li, Jingpeng; Deng, Li; Jin, Zhengyu; Yan, Yong

    2017-02-01

    The aim of the current work was to develop a novel method to model and quantitatively determine cooking doneness by integrating sensory evaluation and kinetics, based on a redefined maturity value (M value). Well-done food was first selected by sensory evaluation from a series of samples with different M values, and average termination maturity values (AMT values) were obtained from the weighted M values of the selected doneness samples. The changes in M values were assumed to follow a first-order reaction kinetic model, and a specific zM value was set accordingly. The zM value was then obtained under this hypothesis, whose rationality was validated by rigorous data analysis. Results showed that maturity time values (MT values) exist and are stable for specific types of materials and a specific population. Quantitative determination of the degree of doneness has profound significance for industrial production.
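    A maturity value accumulated under first-order kinetics can be sketched as a thermal-history sum, by analogy with standard thermal cook values. The reference temperature and zM below are placeholders, not the paper's fitted values:

```python
def maturity_value(temps_c, dt_min, t_ref=100.0, z_m=10.0):
    """Maturity (M) value accumulated over a time-temperature history.

    M = sum over steps of dt * 10**((T - Tref) / zM): each interval dt
    (minutes) at temperature T counts as that many equivalent minutes
    at the reference temperature. Tref and zM are illustrative.
    """
    return sum(dt_min * 10 ** ((t - t_ref) / z_m) for t in temps_c)

# Five one-minute steps at the reference temperature accumulate M = 5.
print(maturity_value([100.0] * 5, 1.0))
```

    Doneness is then declared when the accumulated M reaches the sensory-derived termination value, which is what makes the threshold transferable across different heating histories.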

  9. Evaluation of Differentiation Strategy in Shipping Enterprises with Simulation Model

    NASA Astrophysics Data System (ADS)

    Vaxevanou, Anthi Z.; Ferfeli, Maria V.; Damianos, Sakas P.

    2009-08-01

    The present study investigates the circumstances that prevail in European shipping enterprises, with special reference to Greek ones, in order to explore the potential implementation of strategies for creating a unique competitive advantage [1]. The shipping sector is composed of enterprises that are mainly active in three areas: passenger, commercial, and naval. The main target is to create a dynamic simulation model which, with reference to the STAIR strategic model, will evaluate the strategic differentiation choice that some of the shipping enterprises have made.

  10. Acquiring, Representing, and Evaluating a Competence Model of Diagnostic Strategy.

    DTIC Science & Technology

    1985-08-01

    [Scanned DTIC report; the abstract did not survive digitization. Recoverable contents list sections on the constraints of the diagnostic procedure and on evaluating the model for sufficient performance and plausible constraints.]

  11. A Model Evaluation Data Set for the Tropical ARM Sites

    DOE Data Explorer

    Jakob, Christian

    2008-01-15

    This data set has been derived from various ARM and external data sources with the main aim of providing modelers easy access to quality controlled data for model evaluation. The data set contains highly aggregated (in time) data from a number of sources at the tropical ARM sites at Manus and Nauru. It spans the years of 1999 and 2000. The data set contains information on downward surface radiation; surface meteorology, including precipitation; atmospheric water vapor and cloud liquid water content; hydrometeor cover as a function of height; and cloud cover, cloud optical thickness and cloud top pressure information provided by the International Satellite Cloud Climatology Project (ISCCP).

  12. Evaluation of a puff dispersion model in complex terrain

    SciTech Connect

    Thuillier, R.H.

    1992-03-01

    California's Pacific Gas and Electric Company has many power plant operations situated in complex terrain, prominent examples being the Geysers geothermal plant in Lake and Sonoma Counties, and the Diablo Canyon nuclear plant in San Luis Obispo County. Procedures ranging from plant licensing to emergency response require a dispersion modeling capability in a complex terrain environment. This paper describes the performance evaluation of such a capability, the Pacific Gas and Electric Company Modeling System (PGEMS), a fast response Gaussian puff model with a three-dimensional wind field generator. Performance of the model was evaluated for ground level and short stack elevated release on the basis of a special intensive tracer experiment in the complex coastal terrain surrounding the Diablo Canyon Nuclear Power Plant in San Luis Obispo County, California. The model performed well under a variety of meteorological and release conditions within the test region of 20-kilometer radius surrounding the nuclear plant, and turned in a superior performance in the wake of the nuclear plant, using a new wake correction algorithm for ground level and roof-vent releases at that location.
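    A single Gaussian puff of the kind such a model superposes can be sketched as below. This is the textbook puff form without ground reflection or wake corrections, purely for illustration; it is not the PGEMS implementation:

```python
import math

def puff_concentration(q, x, y, z, sx, sy, sz):
    """Concentration from a single Gaussian puff (no reflection term).

    q: puff mass; (x, y, z): receptor offset from the puff centre;
    sx, sy, sz: dispersion sigmas grown from turbulence/stability.
    """
    norm = q / ((2 * math.pi) ** 1.5 * sx * sy * sz)
    return norm * math.exp(-0.5 * ((x / sx) ** 2
                                   + (y / sy) ** 2
                                   + (z / sz) ** 2))

# Concentration at the puff centre equals the normalisation factor.
print(puff_concentration(1.0, 0.0, 0.0, 0.0, 50.0, 50.0, 20.0))
```

    A puff model advects many such puffs along a (here three-dimensional) wind field and sums their contributions at each receptor, which is what lets it follow terrain-steered flow better than a straight-line plume.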

  13. A formative model for student nurse development and evaluation--Part 1--Developing the model.

    PubMed

    van der Merwe, A S; Roos, E C; Mulder, M; Joubert, A; Botha, D E; Coetzee, M H; Lombard, A; van Niekerk, A; Visser, L

    1996-12-01

    Preparing student nurses for the profession is a complex task for nurse educators, especially when dealing with the development of personal and interpersonal skills, qualities, and values held in high esteem by the nursing profession and the community it serves. The researchers developed a model for formative evaluation of students using the principles of inductive and deductive reasoning. This model was implemented in clinical practice situations and evaluated for its usefulness. The model appears to have enhanced the standards of nursing care: it had a positive effect on the behavior of students, who were better motivated, and it improved interpersonal relationships and communication between practising nurses and students. The fact that students repeatedly use the model as a norm for self-evaluation ensures that they are constantly reminded of the standards required of a professional nurse.

  14. An Evaluation of Artificial Neural Network Modeling for Manpower Analysis

    DTIC Science & Technology

    1993-09-01

    This thesis evaluates the capabilities of artificial neural networks in forecasting the take-rates of the Voluntary Separations Incentive/Special...Separations Benefit (VSI/SSB) programs for male, Marine Corps Enlisted Personnel in the grades of E-5 and E-6. The Artificial Neural Networks models are...results indicate that artificial neural networks provide forecasting results at least as good as, if not better than, those obtained using classical

  15. Opioid Abuse after Traumatic Brain Injury: Evaluation Using Rodent Models

    DTIC Science & Technology

    2015-09-01

    neurological changes that increase vulnerability for drug abuse and addiction. Consequently, we have been evaluating the effects of TBI on both the...rewarding effects of opioid drugs as well as the development of tolerance and physical dependence in well-established rat models of abuse-related drug ...brain injured rats have a greater sensitivity to the rewarding effects of oxycodone and will self-administer greater total doses of drug compared to

  16. PREMChlor: Probabilistic Remediation Evaluation Model for Chlorinated Solvents

    DTIC Science & Technology

    2010-03-01

    [Scanned report front matter; no abstract survives. The document identifies PREMChlor as ESTCP project ER-0704 (March 2010), authored by Hailian Liang, Ph.D., and Ronald Falta, Ph.D., of Clemson University, and describes the project as a joint effort between Clemson University, GSI Environmental Inc., and Purdue University.]

  17. Drug Evaluation in the Plasmodium Falciparum-Aotus Model

    DTIC Science & Technology

    1996-03-01

    liver and erythrocytic stages of P. falciparum. If successful, it will establish for the first time that DNA vaccines can protect non-human primates, a...of the Institute of Laboratory Resources, National Research Council (NIH Publication No. 86-23, Revised 1985). For the protection of human subjects...essential that new drugs be evaluated in the preclinical Aotus model for their potential usefulness against human infections. Initially, antimalarial

  18. Drug Evaluation in the Plasmodium Falciparum-Aotus Model

    DTIC Science & Technology

    1996-03-01

    With drug-resistant P. falciparum, chloroquine resistance was reversed by chlorpromazine and prochlorperazine. Both water-insoluble and soluble...Animals of the Institute of Laboratory Resources, National Research Council (NIH Publication No. 86-23, Revised 1985). For the protection of human sub...new drugs be evaluated in the preclinical Aotus model for their potential usefulness against human infections. Initially, antimalarial drug studies

  19. Modeling Healthcare Processes Using Commitments: An Empirical Evaluation

    PubMed Central

    2015-01-01

    The two primary objectives of this paper are: (a) to demonstrate how Comma, a business modeling methodology based on commitments, can be applied in healthcare process modeling, and (b) to evaluate the effectiveness of such an approach in producing healthcare process models. We apply the Comma approach on a breast cancer diagnosis process adapted from an HHS committee report, and present the results of an empirical study that compares Comma with a traditional approach based on the HL7 Messaging Standard (Traditional-HL7). Our empirical study involved 47 subjects, and two phases. In the first phase, we partitioned the subjects into two approximately equal groups. We gave each group the same requirements based on a process scenario for breast cancer diagnosis. Members of one group first applied Traditional-HL7 and then Comma whereas members of the second group first applied Comma and then Traditional-HL7, each on the above-mentioned requirements. Thus, each subject produced two models, each model being a set of UML Sequence Diagrams. In the second phase, we repartitioned the subjects into two groups with approximately equal distributions from both original groups. We developed exemplar Traditional-HL7 and Comma models; we gave one repartitioned group our Traditional-HL7 model and the other repartitioned group our Comma model. We provided the same changed set of requirements to all subjects and asked them to modify the provided exemplar model to satisfy the new requirements. We assessed solutions produced by subjects in both phases with respect to measures of flexibility, time, difficulty, objective quality, and subjective quality. Our study found that Comma is superior to Traditional-HL7 in flexibility and objective quality as validated via Student's t-test at the 10% level of significance. Comma is a promising new approach for modeling healthcare processes. Further gains could be made through improved tooling and enhanced training of modeling personnel.

  20. Modeling Healthcare Processes Using Commitments: An Empirical Evaluation.

    PubMed

    Telang, Pankaj R; Kalia, Anup K; Singh, Munindar P

    2015-01-01

    The two primary objectives of this paper are: (a) to demonstrate how Comma, a business modeling methodology based on commitments, can be applied in healthcare process modeling, and (b) to evaluate the effectiveness of such an approach in producing healthcare process models. We apply the Comma approach on a breast cancer diagnosis process adapted from an HHS committee report, and present the results of an empirical study that compares Comma with a traditional approach based on the HL7 Messaging Standard (Traditional-HL7). Our empirical study involved 47 subjects, and two phases. In the first phase, we partitioned the subjects into two approximately equal groups. We gave each group the same requirements based on a process scenario for breast cancer diagnosis. Members of one group first applied Traditional-HL7 and then Comma whereas members of the second group first applied Comma and then Traditional-HL7, each on the above-mentioned requirements. Thus, each subject produced two models, each model being a set of UML Sequence Diagrams. In the second phase, we repartitioned the subjects into two groups with approximately equal distributions from both original groups. We developed exemplar Traditional-HL7 and Comma models; we gave one repartitioned group our Traditional-HL7 model and the other repartitioned group our Comma model. We provided the same changed set of requirements to all subjects and asked them to modify the provided exemplar model to satisfy the new requirements. We assessed solutions produced by subjects in both phases with respect to measures of flexibility, time, difficulty, objective quality, and subjective quality. Our study found that Comma is superior to Traditional-HL7 in flexibility and objective quality as validated via Student's t-test at the 10% level of significance. Comma is a promising new approach for modeling healthcare processes. Further gains could be made through improved tooling and enhanced training of modeling personnel.
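    The group comparison described above rests on a two-sample t statistic. A minimal sketch of the pooled-variance form follows; the function and the quality scores are illustrative, not the study's data or analysis code:

```python
from statistics import mean, variance

def pooled_t_statistic(a, b):
    """Two-sample t statistic with pooled variance (equal-variance form).

    variance() from the statistics module is the sample (n-1) variance,
    which is what the pooled estimate requires.
    """
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

# Hypothetical objective-quality scores for two modeling approaches:
comma_scores = [8, 9, 7, 8, 9]
hl7_scores = [6, 7, 6, 8, 7]
print(round(pooled_t_statistic(comma_scores, hl7_scores), 3))
```

    The statistic is then compared against the t distribution with na + nb - 2 degrees of freedom at the chosen significance level (10% in the study).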

  1. Evaluating 1d Seismic Models of the Lunar Interior

    NASA Astrophysics Data System (ADS)

    Yao, Y.; Thorne, M. S.; Weber, R. C.; Schmerr, N. C.

    2012-12-01

    A four station seismic network was established on the Moon from 1969 to 1977 as part of the Apollo Lunar Surface Experiment Package (ALSEP). A total of nine 1D seismic velocity models were generated using a variety of different techniques. In spite of the fact that these models were generated from the same data set, significant differences exist between them. We evaluate these models by comparing predicted travel-times to published catalogs of lunar events. We generate synthetic waveform predictions for 1D lunar models using a modified version of the Green's Function of the Earth by Minor Integration (GEMINI) technique. Our results demonstrate that the mean square errors between predicted and measured P-wave travel times are smaller than those for S-wave travel times in all cases. Moreover, models fit travel times for artificial and meteoroid impacts better than for shallow and deep moonquakes. Overall, models presented by Nakamura [Nakamura, 1983] and Garcia et al. [Garcia et al., 2011] predicted the observed travel times better than all other models and were comparable in their explanation of travel-times. Nevertheless, significant waveform differences exist between these models. In particular, the seismic velocity structure of the lunar crust and regolith strongly affect the waveform characteristics predicted by these models. Further complexity is added by possible mantle discontinuity structure that exists in a subset of these models. We show synthetic waveform predictions for these models demonstrating the role that crustal structure has in generating long duration seismic coda inherent in the lunar waveforms.

  2. Evaluating Arctic warming mechanisms in CMIP5 models

    NASA Astrophysics Data System (ADS)

    Franzke, Christian L. E.; Lee, Sukyoung; Feldstein, Steven B.

    2016-07-01

    Arctic warming is one of the most striking signals of global warming. The Arctic is one of the fastest-warming regions on Earth and thus constitutes a good test bed to evaluate the ability of climate models to reproduce the physics and dynamics involved in Arctic warming. Different physical and dynamical mechanisms have been proposed to explain Arctic amplification. These mechanisms include the surface albedo feedback and poleward sensible and latent heat transport processes. During the winter season when Arctic amplification is most pronounced, the first mechanism relies on an enhancement in upward surface heat flux, while the second mechanism does not. In these mechanisms, it has been proposed that downward infrared radiation (IR) plays a role to a varying degree. Here, we show that the current generation of CMIP5 climate models all reproduce Arctic warming and there are high pattern correlations—typically greater than 0.9—between the surface air temperature (SAT) trend and the downward IR trend. However, we find that there are two groups of CMIP5 models: one with small pattern correlations between the Arctic SAT trend and the surface vertical heat flux trend (Group 1), and the other with large correlations (Group 2) between the same two variables. The Group 1 models exhibit higher pattern correlations between Arctic SAT and 500 hPa geopotential height trends, than do the Group 2 models. These findings suggest that Arctic warming in Group 1 models is more closely related to changes in the large-scale atmospheric circulation, whereas in Group 2, the albedo feedback effect plays a more important role. Interestingly, while Group 1 models have a warm or weak bias in their Arctic SAT, Group 2 models show large cold biases. This stark difference in model bias leads us to hypothesize that for a given model, the dominant Arctic warming mechanism and trend may be dependent on the bias of the model mean state.
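    The pattern correlation used to group the models can be sketched as a centred spatial correlation between two gridded trend fields. This version ignores area weighting (real evaluations weight grid points by latitude) and the trend values are made up:

```python
from statistics import mean

def pattern_correlation(field_a, field_b):
    """Centred pattern correlation between two flattened trend fields."""
    ma, mb = mean(field_a), mean(field_b)
    da = [a - ma for a in field_a]
    db = [b - mb for b in field_b]
    num = sum(x * y for x, y in zip(da, db))
    den = (sum(x * x for x in da) * sum(y * y for y in db)) ** 0.5
    return num / den

# Hypothetical SAT and downward-IR trends over four grid points;
# proportional fields correlate perfectly:
sat_trend = [0.2, 0.5, 1.1, 0.8]
ir_trend = [2.0 * t for t in sat_trend]
print(pattern_correlation(sat_trend, ir_trend))
```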

  3. Evaluation of the global aerosol microphysical ModelE2-TOMAS model against satellite and ground-based observations

    NASA Astrophysics Data System (ADS)

    Lee, Y. H.; Adams, P. J.; Shindell, D. T.

    2015-03-01

    The TwO-Moment Aerosol Sectional (TOMAS) microphysics model has been integrated into the state-of-the-art general circulation model, GISS ModelE2. This paper provides a detailed description of the ModelE2-TOMAS model and evaluates the model against various observations including aerosol precursor gas concentrations, aerosol mass and number concentrations, and aerosol optical depths. Additionally, global budgets in ModelE2-TOMAS are compared with those of other global aerosol models, and the ModelE2-TOMAS model is compared to the default aerosol model in ModelE2, which is a one-moment aerosol (OMA) model (i.e. no aerosol microphysics). Overall, the ModelE2-TOMAS predictions are within the range of other global aerosol model predictions, and the model has a reasonable agreement (mostly within a factor of 2) with observations of sulfur species and other aerosol components as well as aerosol optical depth. However, ModelE2-TOMAS (as well as ModelE2-OMA) cannot capture the observed vertical distribution of sulfur dioxide over the Pacific Ocean, possibly due to overly strong convective transport and overpredicted precipitation. The ModelE2-TOMAS model simulates observed aerosol number concentrations and cloud condensation nuclei concentrations roughly within a factor of 2. Anthropogenic aerosol burdens in ModelE2-OMA differ from ModelE2-TOMAS by a few percent to a factor of 2 regionally, mainly due to differences in aerosol processes including deposition, cloud processing, and emission parameterizations. We observed larger differences for naturally emitted aerosols such as sea salt and mineral dust, as those emission rates are quite different due to different upper size cutoff assumptions.

  4. ModelE2-TOMAS development and evaluation using aerosol optical depths, mass and number concentrations

    NASA Astrophysics Data System (ADS)

    Lee, Y. H.; Adams, P. J.; Shindell, D. T.

    2014-09-01

    The TwO-Moment Aerosol Sectional microphysics model (TOMAS) has been integrated into the state-of-the-art general circulation model, GISS ModelE2. TOMAS has the flexibility to select a size resolution as well as the lower size cutoff. A computationally efficient version of TOMAS is used here, which has 15 size bins covering 3 nm to 10 μm aerosol dry diameter. For each bin, it simulates the total aerosol number concentration and mass concentrations of sulphate, pure elementary carbon (hydrophobic), mixed elemental carbon (hydrophilic), hydrophobic organic matter, hydrophilic organic matter, sea salt, mineral dust, ammonium, and aerosol-associated water. This paper provides a detailed description of the ModelE2-TOMAS model and evaluates the model against various observations including aerosol precursor gas concentrations, aerosol mass and number concentrations, and aerosol optical depths. Additionally, global budgets in ModelE2-TOMAS are compared with those of other global aerosol models, and the TOMAS model is compared to the default aerosol model in ModelE2, which is a bulk aerosol model. Overall, the ModelE2-TOMAS predictions are within the range of other global aerosol model predictions, and the model has a reasonable agreement with observations of sulphur species and other aerosol components as well as aerosol optical depth. However, ModelE2-TOMAS (as well as the bulk aerosol model) cannot capture the observed vertical distribution of sulphur dioxide over the Pacific Ocean possibly due to overly strong convective transport. The TOMAS model successfully captures observed aerosol number concentrations and cloud condensation nuclei concentrations. Anthropogenic aerosol burdens in the bulk aerosol model running in the same host model as TOMAS (ModelE2) differ by a few percent to a factor of 2 regionally, mainly due to differences in aerosol processes including deposition, cloud processing, and emission parameterizations. Larger differences are found for naturally

  5. Evaluation of turbulence models on roughened turbine blades

    NASA Astrophysics Data System (ADS)

    Dutta, R.; Nicolle, J.; Giroux, A.-M.; Piomelli, U.

    2016-11-01

    The accuracy of turbulence models for the Reynolds-Averaged Navier-Stokes (RANS) equations in rough-wall flows is evaluated by comparing the model predictions with the data obtained from large-eddy simulations (LES). We have considered boundary layers in favourable and adverse pressure gradients mimicking those encountered in hydroturbines. We find that some features of the flow cannot be captured accurately by any model, due to the fundamental modelling assumptions. An example is the flow reversal that occurs in the roughness sublayer prior to separation, which cannot be predicted by the commonly used approaches, which bypass the roughness sublayer while modifying the boundary conditions. In mild pressure gradients most models are sufficiently accurate for engineering applications, but if strong favourable or adverse pressure gradients are applied (especially those leading to separation) the model performance rapidly degrades. A particularly difficult problem (both for rough- and smooth-wall cases) is the return to equilibrium after a strong perturbation, a known limitation of RANS models. Simulations of real configurations using commercial codes are also considered.

  6. An Evaluation of the Decision-Making Capacity Assessment Model

    PubMed Central

    Brémault-Phillips, Suzette C.; Parmar, Jasneet; Friesen, Steven; Rogers, Laura G.; Pike, Ashley; Sluggett, Bryan

    2016-01-01

    Background The Decision-Making Capacity Assessment (DMCA) Model includes a best-practice process and tools to assess DMCA, and implementation strategies at the organizational and assessor levels to support provision of DMCAs across the care continuum. A Developmental Evaluation of the DMCA Model was conducted. Methods A mixed methods approach was used. Survey (N = 126) and focus group (N = 49) data were collected from practitioners utilizing the Model. Results Strengths of the Model include its best-practice and implementation approach, applicability to independent practitioners and inter-professional teams, focus on training/mentoring to enhance knowledge/skills, and provision of tools/processes. Post-training, participants agreed that they followed the Model’s guiding principles (90%), used problem-solving (92%), understood discipline-specific roles (87%), were confident in their knowledge of DMCAs (75%) and pertinent legislation (72%), accessed consultative services (88%), and received management support (64%). Model implementation is impeded when role clarity, physician engagement, inter-professional buy-in, accountability, dedicated resources, information sharing systems, and remuneration are lacking. Dedicated resources, job descriptions inclusive of DMCAs, ongoing education/mentoring supports, access to consultative services, and appropriate remuneration would support implementation. Conclusions The DMCA Model offers practitioners, inter-professional teams, and organizations a best-practice and implementation approach to DMCAs. Addressing barriers and further contextualizing the Model would be warranted. PMID:27729947

  7. Evaluation of Plaid Models in Biclustering of Gene Expression Data.

    PubMed

    Alavi Majd, Hamid; Shahsavari, Soodeh; Baghestani, Ahmad Reza; Tabatabaei, Seyyed Mohammad; Khadem Bashi, Naghme; Rezaei Tavirani, Mostafa; Hamidpour, Mohsen

    2016-01-01

    Background. Biclustering algorithms have been proposed for the analysis of high-dimensional gene expression data. Among them, the plaid model is arguably one of the most flexible biclustering models to date. Objective. The main goal of this study is to provide an evaluation of plaid models. To that end, we investigate this model on both simulated data and real gene expression datasets. Methods. Two simulated matrices with different degrees of overlap and noise were generated, and the intrinsic structure of these data was compared with the biclustering results. We also searched for biologically significant discovered biclusters via GO analysis. Results. When there is no noise, the algorithm discovered almost all of the biclusters, but with moderate noise in the dataset it cannot perform very well in finding overlapping biclusters, and when the noise is large, the biclustering result is not reliable. Conclusion. The plaid model needs to be modified, because it cannot find good biclusters when there is moderate or large noise in the data. It is a statistical model and quite a flexible one. In summary, in order to reduce the errors, the model can be manipulated and the distribution of the error can be changed.

  8. Evaluation of a distributed catchment scale water balance model

    NASA Technical Reports Server (NTRS)

    Troch, Peter A.; Mancini, Marco; Paniconi, Claudio; Wood, Eric F.

    1993-01-01

    The validity of some of the simplifying assumptions in a conceptual water balance model is investigated by comparing simulation results from the conceptual model with simulation results from a three-dimensional physically based numerical model and with field observations. We examine, in particular, assumptions and simplifications related to water table dynamics, vertical soil moisture and pressure head distributions, and subsurface flow contributions to stream discharge. The conceptual model relies on a topographic index to predict saturation excess runoff and on Philip's infiltration equation to predict infiltration excess runoff. The numerical model solves the three-dimensional Richards equation describing flow in variably saturated porous media, and handles seepage face boundaries, infiltration excess and saturation excess runoff production, and soil driven and atmosphere driven surface fluxes. The study catchments (a 7.2 sq km catchment and a 0.64 sq km subcatchment) are located in the North Appalachian ridge and valley region of eastern Pennsylvania. Hydrologic data collected during the MACHYDRO 90 field experiment are used to calibrate the models and to evaluate simulation results. It is found that water table dynamics as predicted by the conceptual model are close to the observations in a shallow water well and therefore, that a linear relationship between a topographic index and the local water table depth is found to be a reasonable assumption for catchment scale modeling. However, the hydraulic equilibrium assumption is not valid for the upper 100 cm layer of the unsaturated zone and a conceptual model that incorporates a root zone is suggested. Furthermore, theoretical subsurface flow characteristics from the conceptual model are found to be different from field observations, numerical simulation results, and theoretical baseflow recession characteristics based on Boussinesq's groundwater equation.
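    The topographic index at the heart of the conceptual model's saturation-excess runoff prediction can be sketched as below. This is the standard TOPMODEL-style form; the inputs are illustrative, not the calibrated catchment model:

```python
import math

def topographic_index(upslope_area_m2, contour_len_m, slope_rad):
    """TOPMODEL-style topographic wetness index ln(a / tan(beta)).

    a is the specific upslope area (drained area per unit contour
    length) and beta is the local surface slope; larger values flag
    locations more prone to saturation-excess runoff.
    """
    a = upslope_area_m2 / contour_len_m
    return math.log(a / math.tan(slope_rad))

# A gentle valley-bottom cell drains far more area per contour metre
# than a steep hillslope cell, so its index is higher:
print(topographic_index(5000.0, 10.0, math.radians(2.0)))
print(topographic_index(200.0, 10.0, math.radians(15.0)))
```

    The conceptual model's assumed linear relationship between this index and local water table depth is exactly what the paper tests against the well observations and the 3-D Richards-equation simulations.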

  9. An Evaluation of the Preceptor Model versus the Formal Teaching Model.

    ERIC Educational Resources Information Center

    Shamian, Judith; Lemieux, Suzanne

    1984-01-01

    This study evaluated the effectiveness of two teaching methods to determine which is more effective in enhancing the knowledge base of participating nurses: the preceptor model embodies decentralized instruction by a member of the nursing staff, and the formal teaching model uses centralized teaching by the inservice education department. (JOW)

  10. Evaluating Small Sample Approaches for Model Test Statistics in Structural Equation Modeling

    ERIC Educational Resources Information Center

    Nevitt, Jonathan; Hancock, Gregory R.

    2004-01-01

    Through Monte Carlo simulation, small sample methods for evaluating overall data-model fit in structural equation modeling were explored. Type I error behavior and power were examined using maximum likelihood (ML), Satorra-Bentler scaled and adjusted (SB; Satorra & Bentler, 1988, 1994), residual-based (Browne, 1984), and asymptotically…

  11. A model for evaluating physico-chemical substance properties required by consequence analysis models.

    PubMed

    Nikmo, Juha; Kukkonen, Jaakko; Riikonen, Kari

    2002-04-26

    Modeling systems for analyzing the consequences of chemical emergencies require as input values a number of physico-chemical substance properties, commonly as a function of temperature at atmospheric pressure. This paper presents a mathematical model "CHEMIC", which can be used for evaluating such substance properties, assuming that six basic constant quantities are available (molecular weight, freezing or melting point, normal boiling point, critical temperature, critical pressure and critical volume). The model has been designed to yield reasonably accurate numerical predictions, while at the same time keeping the amount of input data to a minimum. The model is based on molecular theory or thermodynamics, together with empirical corrections. Mostly, model equations are based on the so-called law of corresponding states. The model evaluates substance properties as a function of temperature at atmospheric pressure. These include seven properties commonly required by consequence analysis and heavy gas dispersion modeling systems: vapor pressure, vapor and liquid densities, heat of vaporization, vapor and liquid viscosities and binary diffusion coefficient. The model predictions for vapor pressure, vapor and liquid densities and heat of vaporization have been evaluated by using the Clausius-Clapeyron equation. We have also compared the predictions of the CHEMIC model with those of the DATABANK database (developed by the AEA Technology, UK), which includes detailed semi-empirical correlations. The computer program CHEMIC could be easily introduced into consequence analysis modeling systems in order to extend their performance to address a wider selection of substances.
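    The Clausius-Clapeyron relation used above to evaluate the predictions can be sketched by anchoring the vapour-pressure curve at the normal boiling point and assuming a constant heat of vaporization. The numbers below are approximate textbook values for water, not CHEMIC's correlations:

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def vapor_pressure_pa(temp_k, t_boil_k, dh_vap_j_mol):
    """Clausius-Clapeyron vapour-pressure estimate.

    Integrated form anchored at the normal boiling point, where
    P = 101325 Pa by definition; dh_vap is taken as constant, which
    is the main source of error away from the anchor point.
    """
    exponent = -dh_vap_j_mol / R * (1.0 / temp_k - 1.0 / t_boil_k)
    return 101325.0 * math.exp(exponent)

# Water (Tb = 373.15 K, dHvap ~ 40660 J/mol) at 298.15 K; the constant-
# dHvap assumption overestimates the true ~3.17 kPa somewhat:
print(round(vapor_pressure_pa(298.15, 373.15, 40660.0)))
```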

  12. Evaluation of a chinook salmon (Oncorhynchus tshawytscha) bioenergetics model

    USGS Publications Warehouse

    Madenjian, Charles P.; O'Connor, Daniel V.; Chernyak, Sergei M.; Rediske, Richard R.; O'Keefe, James P.

    2004-01-01

    We evaluated the Wisconsin bioenergetics model for chinook salmon (Oncorhynchus tshawytscha) in both the laboratory and the field. Chinook salmon in laboratory tanks were fed alewife (Alosa pseudoharengus), the predominant food of chinook salmon in Lake Michigan. Food consumption and growth by chinook salmon during the experiment were measured. To estimate the efficiency with which chinook salmon retain polychlorinated biphenyls (PCBs) from their food in the laboratory, PCB concentrations of the alewife and of the chinook salmon at both the beginning and end of the experiment were determined. Based on our laboratory evaluation, the bioenergetics model furnished unbiased estimates of food consumption by chinook salmon. Additionally, from the laboratory experiment, we calculated that chinook salmon retained 75% of the PCBs contained within their food. In an earlier study, the assimilation rate of PCBs to chinook salmon from their food in Lake Michigan was estimated at 53%, thereby suggesting that the model was substantially overestimating food consumption by chinook salmon in Lake Michigan. However, we concluded that field performance of the model could not be accurately assessed because PCB assimilation efficiency is dependent on feeding rate, and feeding rate of chinook salmon was likely much lower in our laboratory tanks than in Lake Michigan.
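    The PCB mass-balance reasoning above can be sketched as a back-calculation of food consumption from PCB accumulation. The function and all numbers are hypothetical illustrations; only the 75% and 53% retention efficiencies come from the abstract.

```python
# Sketch: PCB mass-balance cross-check of a consumption estimate.
#   consumption = PCB_gained / (retention_efficiency * prey_PCB_conc)

def consumption_from_pcb(pcb_gained_ug, retention, prey_conc_ug_per_g):
    """Grams of prey consumed implied by the PCB budget (hypothetical units)."""
    return pcb_gained_ug / (retention * prey_conc_ug_per_g)

# The same PCB gain interpreted with the lab (75%) vs field (53%) efficiency:
lab_estimate = consumption_from_pcb(90.0, 0.75, 0.1)    # ~1200 g
field_estimate = consumption_from_pcb(90.0, 0.53, 0.1)  # ~1698 g
```

    Given a fixed PCB budget, a higher true retention efficiency implies less food consumed; this is the sense in which the lab-derived 75%, set against the field-derived 53%, suggested the model's field consumption estimates were too high.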

  13. The Third Phase of AQMEII: Evaluation Strategy and Multi-Model Performance Analysis

    EPA Science Inventory

    AQMEII (Air Quality Model Evaluation International Initiative) is an extraordinary effort promoting policy-relevant research on regional air quality model evaluation across the European and North American atmospheric modelling communities, providing the ideal platform for advanci...

  14. Looking beyond general metrics for model evaluation - lessons from an international model intercomparison study

    NASA Astrophysics Data System (ADS)

    Bouaziz, Laurène; de Boer-Euser, Tanja; Brauer, Claudia; Drogue, Gilles; Fenicia, Fabrizio; Grelier, Benjamin; de Niel, Jan; Nossent, Jiri; Pereira, Fernando; Savenije, Hubert; Thirel, Guillaume; Willems, Patrick

    2016-04-01

    International collaboration between institutes and universities is a promising way to reach consensus on hydrological model development. Education, experience and expert knowledge of the hydrological community have resulted in the development of a great variety of model concepts, calibration methods and analysis techniques. Although comparison studies are very valuable for international cooperation, they often do not lead to clear new insights regarding the relevance of the modelled processes. We hypothesise that this is partly caused by model complexity and by the comparison methods used, which focus on good overall performance instead of on specific events. We propose an approach that focuses on the evaluation of specific events. Eight international research groups calibrated their model for the Ourthe catchment in Belgium (1607 km2) and carried out a validation in time for the Ourthe (i.e. on two different periods, one of them in blind mode for the modellers) and a validation in space for nested and neighbouring catchments of the Meuse in completely blind mode. For each model, the same protocol was followed and an ensemble of best-performing parameter sets was selected. Signatures were first used to assess model performance in the different catchments during validation. Comparison of the models was then followed by evaluation of selected events, including low flows, high flows and the transition from low to high flows. While the models show rather similar performance based on general metrics (i.e. Nash-Sutcliffe Efficiency), clear differences can be observed for specific events. While most models are able to simulate high flows well, large differences are observed during low flows and in the ability to capture the first peaks after drier months. The transferability of model parameters to neighbouring and nested catchments is assessed as an additional measure in the model evaluation. The suggested approach helps to select, among competing
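    The contrast between a general metric and event-specific evaluation can be illustrated with the Nash-Sutcliffe Efficiency (NSE) computed over all time steps versus over a low-flow subset. The streamflow series below are hypothetical.

```python
# Sketch: overall NSE vs NSE on a low-flow subset. A model that nails
# the floods but is biased at low flows can score well overall and
# very poorly on the events of interest.

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 is perfect, < 0 is worse than the mean."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

obs = [1.0, 1.2, 0.9, 8.0, 12.0, 6.0, 1.1, 0.8]
sim = [1.5, 1.8, 1.4, 8.2, 11.5, 6.3, 1.6, 1.3]  # good floods, biased low flows

overall = nse(obs, sim)
low_idx = [i for i, o in enumerate(obs) if o < 2.0]
low = nse([obs[i] for i in low_idx], [sim[i] for i in low_idx])
```

    Here `overall` is close to 1 while `low` is strongly negative, mirroring the paper's point that similar general-metric scores can hide large differences during specific events.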

  15. Evaluation of cluster recovery for small area relative risk models.

    PubMed

    Rotejanaprasert, Chawarat

    2014-12-01

    The analysis of disease risk is often considered via relative risk. The comparison of relative-risk estimation methods with "true risk" scenarios has been considered on various occasions. However, there has been little examination of how well competing methods perform when the focus is clustering of risk. In this paper, a simulated evaluation of a range of potential spatial risk models is considered, together with a range of measures that can be used for (a) cluster goodness of fit and (b) cluster diagnostics. Results suggest that exceedance probability is a poor measure of hot-spot clustering because of model dependence, whereas residual-based methods are less model dependent and perform better. Local deviance information criterion measures perform well, but conditional predictive ordinate measures yield a high false-positive rate.
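    The exceedance-probability hot-spot measure discussed above can be sketched from posterior samples. The samples, lognormal shape, and cutoff below are hypothetical; the paper's point is that this probability inherits whatever model produced the posterior.

```python
# Sketch: posterior exceedance probability for one small area's
# relative risk theta, estimated from hypothetical MCMC samples.
# An area is flagged when Pr(theta > 1 | data) exceeds a cutoff.
import random

random.seed(1)
# hypothetical posterior draws of the relative risk for one area
samples = [random.lognormvariate(0.3, 0.25) for _ in range(5000)]

exceedance = sum(1 for t in samples if t > 1.0) / len(samples)
is_hotspot = exceedance > 0.95  # a commonly used cutoff
```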

  16. Performance Tuning and Evaluation of a Parallel Community Climate Model

    SciTech Connect

    Drake, J.B.; Worley, P.H.; Hammond, S.

    1999-11-13

    The Parallel Community Climate Model (PCCM) is a message-passing parallelization of version 2.1 of the Community Climate Model (CCM) developed by researchers at Argonne and Oak Ridge National Laboratories and at the National Center for Atmospheric Research in the early to mid 1990s. In preparation for use in the Department of Energy's Parallel Climate Model (PCM), PCCM has recently been updated with new physics routines from version 3.2 of the CCM, improvements to the parallel implementation, and ports to the SGI/Cray Research T3E and Origin 2000. We describe our experience in porting and tuning PCCM on these new platforms, evaluating the performance of different parallel algorithm options and comparing performance between the T3E and Origin 2000.

  17. A model for evaluation of satellite population management alternatives

    NASA Astrophysics Data System (ADS)

    Penny, R. E., Jr.; Jones, R. K.

    1983-12-01

    A Q-GERT model was developed to simulate the satellite environment, including the untracked man-made population, and to calculate a probability of collision for any satellite of interest. Provision for launches, explosions, collisions (including ASAT), retrieval, reposition, and decay was made. The model is structured to easily vary the rates at which these activities occur and to observe changes in the satellite population through which a satellite of interest must travel. Varying these rates and observing the resultant change in the probability of collision allows evaluation of satellite population management alternatives, such as reducing launch rates or increasing retrieval of spent, but still explosion-capable, satellites. The model is proposed for use by both the USAF SPACE COMMAND and NASA.

  18. Evaluations of Particle Scattering Models for Falling Snow

    NASA Astrophysics Data System (ADS)

    Duffy, G.; Nesbitt, S. W.; McFarquhar, G. M.

    2014-12-01

    Several millimeter-wavelength scattering models have been developed over the past decade that could potentially be more accurate than the standard "soft sphere" model, which is used in GPM algorithms to retrieve snowfall precipitation rates from dual-frequency radar measurements. Results from the GCPEx mission, a GPM Ground Validation experiment that flew HVPS and CIP particle imaging probes through snowstorms within fields of Ku/Ka-band reflectivity, provide the data necessary to evaluate simulations of non-Rayleigh reflectivity against measured values. This research uses T-matrix spheroid, RGA spheroid, and Mie sphere simulations, as well as variations on axial-ratio and diameter-density relationships, to quantify the merits and errors of different forward-simulation strategies.

  19. The fence experiment — a first evaluation of shelter models

    NASA Astrophysics Data System (ADS)

    Peña, Alfredo; Bechmann, Andreas; Conti, Davide; Angelou, Nikolas; Mann, Jakob

    2016-09-01

    We present a preliminary evaluation of shelter models of different degrees of complexity using full-scale lidar measurements of the shelter on a vertical plane behind and orthogonal to a fence. Model results accounting for the distribution of the relative wind direction within the observed direction interval are in better agreement with the observations than those that correspond to the simulation at the center of the direction interval, particularly in the far-wake region, for six vertical levels up to two fence heights. Generally, the CFD results are in better agreement with the observations than those from two engineering-like obstacle models, although the latter two follow the behavior of the observations well in the far-wake region.

  20. The algorithmic anatomy of model-based evaluation.

    PubMed

    Daw, Nathaniel D; Dayan, Peter

    2014-11-05

    Despite many debates in the first half of the twentieth century, it is now largely a truism that humans and other animals build models of their environments and use them for prediction and control. However, model-based (MB) reasoning presents severe computational challenges. Alternative, computationally simpler, model-free (MF) schemes have been suggested in the reinforcement learning literature, and have afforded influential accounts of behavioural and neural data. Here, we study the realization of MB calculations, and the ways that this might be woven together with MF values and evaluation methods. There are as yet mostly only hints in the literature as to the resulting tapestry, so we offer more preview than review.

  1. Performance Evaluation of 3d Modeling Software for Uav Photogrammetry

    NASA Astrophysics Data System (ADS)

    Yanagi, H.; Chikatsu, H.

    2016-06-01

    UAV (Unmanned Aerial Vehicle) photogrammetry, which combines UAVs with freely available internet-based 3D modeling software, is widely used as a low-cost and user-friendly photogrammetry technique in fields such as remote sensing and geosciences. In UAV photogrammetry, only the platform used in conventional aerial photogrammetry is changed; consequently, 3D modeling software contributes significantly to its expansion. However, the algorithms of such 3D modeling software are black boxes, and as a result only a few studies have been able to evaluate their accuracy using 3D coordinate check points. Motivated by this, Smart3DCapture and Pix4Dmapper were downloaded from the Internet and the commercial software PhotoScan was also employed; investigations were performed in this paper using check points and images obtained from a UAV.

  2. Animal Models for Evaluation of Bone Implants and Devices: Comparative Bone Structure and Common Model Uses.

    PubMed

    Wancket, L M

    2015-09-01

    Bone implants and devices are a rapidly growing field within biomedical research, and implants have the potential to significantly improve human and animal health. Animal models play a key role in initial product development and are important components of nonclinical data included in applications for regulatory approval. Pathologists are increasingly being asked to evaluate these models at the initial developmental and nonclinical biocompatibility testing stages, and it is important to understand the relative merits and deficiencies of various species when evaluating a new material or device. This article summarizes characteristics of the most commonly used species in studies of bone implant materials, including detailed information about the relevance of a particular model to human bone physiology and pathology. Species reviewed include mice, rats, rabbits, guinea pigs, dogs, sheep, goats, and nonhuman primates. Ultimately, a comprehensive understanding of the benefits and limitations of different model species will aid in rigorously evaluating a novel bone implant material or device.

  3. Evaluation of a laboratory model of human head impact biomechanics

    PubMed Central

    Hernandez, Fidel; Shull, Peter B.; Camarillo, David B.

    2015-01-01

    This work describes methodology for evaluating laboratory models of head impact biomechanics. Using this methodology, we investigated: how closely does twin-wire drop testing model head rotation in American football impacts? Head rotation is believed to cause mild traumatic brain injury (mTBI) but helmet safety standards only model head translations believed to cause severe TBI. It is unknown whether laboratory head impact models in safety standards, like twin-wire drop testing, reproduce six degree-of-freedom (6DOF) head impact biomechanics that may cause mTBI. We compared 6DOF measurements of 421 American football head impacts to twin-wire drop tests at impact sites and velocities weighted to represent typical field exposure. The highest rotational velocities produced by drop testing were the 74th percentile of non-injury field impacts. For a given translational acceleration level, drop testing underestimated field rotational acceleration by 46% and rotational velocity by 72%. Primary rotational acceleration frequencies were much larger in drop tests (~100 Hz) than field impacts (~10 Hz). Drop testing was physically unable to produce acceleration directions common in field impacts. Initial conditions of a single field impact were highly resolved in stereo high-speed video and reconstructed in a drop test. Reconstruction results reflected aggregate trends of lower amplitude rotational velocity and higher frequency rotational acceleration in drop testing, apparently due to twin-wire constraints and the absence of a neck. These results suggest twin-wire drop testing is limited in modeling head rotation during impact, and motivate continued evaluation of head impact models to ensure helmets are tested under conditions that may cause mTBI. PMID:26117075

  4. Evaluation of a laboratory model of human head impact biomechanics.

    PubMed

    Hernandez, Fidel; Shull, Peter B; Camarillo, David B

    2015-09-18

    This work describes methodology for evaluating laboratory models of head impact biomechanics. Using this methodology, we investigated: how closely does twin-wire drop testing model head rotation in American football impacts? Head rotation is believed to cause mild traumatic brain injury (mTBI) but helmet safety standards only model head translations believed to cause severe TBI. It is unknown whether laboratory head impact models in safety standards, like twin-wire drop testing, reproduce six degree-of-freedom (6DOF) head impact biomechanics that may cause mTBI. We compared 6DOF measurements of 421 American football head impacts to twin-wire drop tests at impact sites and velocities weighted to represent typical field exposure. The highest rotational velocities produced by drop testing were the 74th percentile of non-injury field impacts. For a given translational acceleration level, drop testing underestimated field rotational acceleration by 46% and rotational velocity by 72%. Primary rotational acceleration frequencies were much larger in drop tests (~100 Hz) than field impacts (~10 Hz). Drop testing was physically unable to produce acceleration directions common in field impacts. Initial conditions of a single field impact were highly resolved in stereo high-speed video and reconstructed in a drop test. Reconstruction results reflected aggregate trends of lower amplitude rotational velocity and higher frequency rotational acceleration in drop testing, apparently due to twin-wire constraints and the absence of a neck. These results suggest twin-wire drop testing is limited in modeling head rotation during impact, and motivate continued evaluation of head impact models to ensure helmets are tested under conditions that may cause mTBI.

  5. Statistical procedures for evaluating daily and monthly hydrologic model predictions

    USGS Publications Warehouse

    Coffey, M.E.; Workman, S.R.; Taraba, J.L.; Fogle, A.W.

    2004-01-01

    The overall study objective was to evaluate the applicability of different qualitative and quantitative methods for comparing daily and monthly SWAT computer model hydrologic streamflow predictions to observed data, and to recommend statistical methods for use in future model evaluations. Statistical methods were tested using daily streamflows and monthly equivalent runoff depths. The statistical techniques included linear regression, Nash-Sutcliffe efficiency, nonparametric tests, t-test, objective functions, autocorrelation, and cross-correlation. None of the methods specifically applied to the non-normal distribution and dependence between data points for the daily predicted and observed data. Of the tested methods, median objective functions, sign test, autocorrelation, and cross-correlation were most applicable for the daily data. The robust coefficient of determination (CD*) and robust modeling efficiency (EF*) objective functions were the preferred methods for daily model results due to the ease of comparing these values with a fixed ideal reference value of one. Predicted and observed monthly totals were more normally distributed, and there was less dependence between individual monthly totals than was observed for the corresponding predicted and observed daily values. More statistical methods were available for comparing SWAT model-predicted and observed monthly totals. The 1995 monthly SWAT model predictions and observed data had a regression R² of 0.70, a Nash-Sutcliffe efficiency of 0.41, and the t-test failed to reject the equal data means hypothesis. The Nash-Sutcliffe coefficient and the R² coefficient were the preferred methods for monthly results due to the ability to compare these coefficients to a set ideal value of one.
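    Among the nonparametric options mentioned, the sign test is easy to sketch for paired daily values: it needs no distributional assumptions, only the signs of the paired differences. The streamflow pairs below are hypothetical.

```python
# Sketch: exact two-sided sign test for paired predictions and
# observations, using the binomial distribution under p = 0.5.
from math import comb

def sign_test(obs, sim):
    """Two-sided exact binomial p-value for the paired sign test (ties dropped)."""
    diffs = [o - s for o, s in zip(obs, sim) if o != s]
    n = len(diffs)
    k = sum(1 for d in diffs if d > 0)          # count of positive differences
    tail = min(k, n - k)
    p = 2 * sum(comb(n, i) for i in range(tail + 1)) / 2 ** n
    return min(p, 1.0)

obs = [2.1, 3.4, 1.8, 4.0, 2.9, 3.1, 2.2, 3.8, 2.5, 3.0]
sim = [2.0, 3.1, 1.9, 3.7, 2.8, 2.9, 2.1, 3.5, 2.4, 2.8]
p_value = sign_test(obs, sim)   # 9 of 10 differences positive
```

    A small p-value here indicates a systematic over- or under-prediction, which is exactly the kind of bias the daily comparisons in the study were screening for.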

  6. Optical CD metrology model evaluation and refining for manufacturing

    NASA Astrophysics Data System (ADS)

    Wang, S.-B.; Huang, C. L.; Chiu, Y. H.; Tao, H. J.; Mii, Y. J.

    2009-03-01

    Optical critical dimension (OCD) metrology has been well accepted as a standard inline metrology tool in semiconductor manufacturing since the 65 nm technology node, owing to its non-destructive and versatile nature. Many geometry parameters can be obtained in a single measurement with good accuracy if the model is well established and calibrated by transmission electron microscopy (TEM). From a manufacturing viewpoint, however, there is no effective index of model quality on which model refining can be based. Moreover, when the device structure becomes more complicated, as in strained-silicon technology, more parameters must be determined in subsequent measurements, so the model requires more attention to ensure inline metrology reliability. GOF (goodness of fit), one model index given by a commercial OCD metrology tool, for example, is not sensitive enough, while the other two indexes, correlation and sensitivity coefficient, are evaluated under metrology tool noise only and are not directly related to inline production measurement uncertainty. In this article, we propose a sensitivity matrix for measurement uncertainty estimation, in which each entry is defined as the correlation coefficient between the corresponding two floating parameters and is obtained by linearization. The uncertainty is estimated in combination with production-line variation and is found, for the first time, to be much larger than that due to metrology tool noise alone, which indicates that model quality control is critical for nanometer-scale device production control. The uncertainty, compared against production requirements, also serves as an index for model refining, either by grid-size rescaling or by structure model modification. This method is verified by TEM measurement and, finally, a flow chart for model refining is proposed.
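    A minimal sketch of linearized variance propagation, in the spirit of the sensitivity-matrix idea above: a hypothetical 2x2 Jacobian relates parameter variation to signal variation, and parameter variances that include production-line variation inflate the propagated uncertainty well beyond the tool-noise-only case. The matrix and variances are illustrative, not from the paper.

```python
# Sketch: diagonal of J * diag(param_var) * J^T, i.e. the variance of
# each measured signal under independent parameter variations.
# J[i][j] = d(signal_i) / d(param_j); all numbers are hypothetical.

def propagate_variance(J, param_var):
    """Variance of each signal from independent parameter variances."""
    return [sum(Jij ** 2 * v for Jij, v in zip(row, param_var)) for row in J]

J = [[0.8, 0.1],   # signal 1 sensitivity to (CD, sidewall angle)
     [0.2, 0.9]]   # signal 2

noise_only = propagate_variance(J, [0.01, 0.01])          # tool noise alone
with_line_variation = propagate_variance(J, [0.25, 0.16]) # plus line variation
```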

  7. Evaluation of wave runup predictions from numerical and parametric models

    USGS Publications Warehouse

    Stockdon, Hilary F.; Thompson, David M.; Plant, Nathaniel G.; Long, Joseph W.

    2014-01-01

    Wave runup during storms is a primary driver of coastal evolution, including shoreline and dune erosion and barrier island overwash. Runup and its components, setup and swash, can be predicted from a parameterized model that was developed by comparing runup observations to offshore wave height, wave period, and local beach slope. Because observations during extreme storms are often unavailable, a numerical model is used to simulate the storm-driven runup to compare to the parameterized model and then develop an approach to improve the accuracy of the parameterization. Numerically simulated and parameterized runup were compared to observations to evaluate model accuracies. The analysis demonstrated that setup was accurately predicted by both the parameterized model and numerical simulations. Infragravity swash heights were most accurately predicted by the parameterized model. The numerical model suffered from bias and gain errors that depended on whether a one-dimensional or two-dimensional spatial domain was used. Nonetheless, all of the predictions were significantly correlated to the observations, implying that the systematic errors can be corrected. The numerical simulations did not resolve the incident-band swash motions, as expected, and the parameterized model performed best at predicting incident-band swash heights. An assimilated prediction using a weighted average of the parameterized model and the numerical simulations resulted in a reduction in prediction error variance. Finally, the numerical simulations were extended to include storm conditions that have not been previously observed. These results indicated that the parameterized predictions of setup may need modification for extreme conditions; numerical simulations can be used to extend the validity of the parameterized predictions of infragravity swash; and numerical simulations systematically underpredict incident swash, which is relatively unimportant under extreme conditions.
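    One simple form of the weighted-average assimilation described above is an inverse-variance combination of the two predictions; the combined estimate then has smaller error variance than either input. The `assimilate` function, runup values, and error variances below are hypothetical.

```python
# Sketch: inverse-variance weighted average of a parameterized-model
# prediction and a numerical-simulation prediction of wave runup.

def assimilate(x1, var1, x2, var2):
    """Combine two estimates with inverse-variance weights."""
    w1 = (1.0 / var1) / (1.0 / var1 + 1.0 / var2)
    combined = w1 * x1 + (1.0 - w1) * x2
    combined_var = 1.0 / (1.0 / var1 + 1.0 / var2)
    return combined, combined_var

runup_param, var_param = 2.4, 0.09   # m, m^2 (hypothetical)
runup_num, var_num = 2.9, 0.16

runup_hat, var_hat = assimilate(runup_param, var_param, runup_num, var_num)
```

    The combined variance is always below the smaller input variance, which is the "reduction in prediction error variance" the abstract reports.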

  8. Model evaluation using a community benchmarking system for land surface models

    NASA Astrophysics Data System (ADS)

    Mu, M.; Hoffman, F. M.; Lawrence, D. M.; Riley, W. J.; Keppel-Aleks, G.; Kluzek, E. B.; Koven, C. D.; Randerson, J. T.

    2014-12-01

    Evaluation of atmosphere, ocean, sea ice, and land surface models is an important step in identifying deficiencies in Earth system models and developing improved estimates of future change. For the land surface and carbon cycle, the design of an open-source system has been an important objective of the International Land Model Benchmarking (ILAMB) project. Here we evaluated CMIP5 and CLM models using a benchmarking system that enables users to specify models, data sets, and scoring systems so that results can be tailored to specific model intercomparison projects. Our scoring system used information from four different aspects of global datasets, including climatological mean spatial patterns, seasonal cycle dynamics, interannual variability, and long-term trends. Variable-to-variable comparisons enable investigation of the mechanistic underpinnings of model behavior, and allow for some control of biases in model drivers. Graphics modules allow users to evaluate model performance at local, regional, and global scales. Use of modular structures makes it relatively easy for users to add new variables, diagnostic metrics, benchmarking datasets, or model simulations. Diagnostic results are automatically organized into HTML files, so users can conveniently share results with colleagues. We used this system to evaluate atmospheric carbon dioxide, burned area, global biomass and soil carbon stocks, net ecosystem exchange, gross primary production, ecosystem respiration, terrestrial water storage, evapotranspiration, and surface radiation from CMIP5 historical and ESM historical simulations. We found that the multi-model mean often performed better than many of the individual models for most variables. We plan to publicly release a stable version of the software during fall of 2014 that has land surface, carbon cycle, hydrology, radiation and energy cycle components.

  9. Comparing the agreement among alternative models in evaluating HMO efficiency.

    PubMed Central

    Bryce, C L; Engberg, J B; Wholey, D R

    2000-01-01

    OBJECTIVE: To describe the efficiency of HMOs and to test the robustness of these findings across alternative models of efficiency. This study examines whether these models, when constructed in parallel to use the same information, provide researchers with the same insights and identify the same trends. DATA SOURCES: A data set containing 585 HMOs operating from 1985 through 1994. Variables include enrollment, utilization, and financial information compiled primarily from Health Care Investment Analysts, InterStudy HMO Census, and Group Health Association of America. STUDY DESIGN: We compute three estimates of efficiency for each HMO and compare the results in terms of individual performance and industry-wide trends. The estimates are then regressed against measures of case mix, quality, and other factors that may be related to the model estimates. PRINCIPAL FINDINGS: The three models identify similar trends for the HMO industry as a whole; however, they assess the relative technical efficiency of individual firms differently. Thus, these techniques are limited for either benchmarking or setting rates because the firms identified as efficient may be a consequence of model selection rather than actual performance. CONCLUSIONS: The estimation technique to evaluate efficient firms can affect the findings themselves. The implications are relevant not only for HMOs, but for efficiency analyses in general. Concurrence among techniques is no guarantee of accuracy, but it is reassuring; conversely, radically distinct inferences across models can be a warning to temper research conclusions. PMID:10857474
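    Agreement between the firm rankings induced by two efficiency models can be checked with a Spearman rank correlation, one simple concurrence measure of the kind the study relies on. The scores and model labels below are hypothetical.

```python
# Sketch: Spearman rank correlation (no ties) between the efficiency
# scores two hypothetical models assign to the same five HMOs.

def ranks(x):
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0] * len(x)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n ** 2 - 1))

model_a_scores = [0.91, 0.75, 0.88, 0.60, 0.83]  # efficiency per HMO
model_b_scores = [0.91, 0.78, 0.74, 0.65, 0.90]
rho = spearman(model_a_scores, model_b_scores)
```

    A rho well below 1 with similar industry-wide means is the pattern the study describes: shared trends, but different firms flagged as efficient.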

  10. Diagnostic Air Quality Model Evaluation of Source-Specific ...

    EPA Pesticide Factsheets

    Ambient measurements of 78 source-specific tracers of primary and secondary carbonaceous fine particulate matter collected at four midwestern United States locations over a full year (March 2004–February 2005) provided an unprecedented opportunity to diagnostically evaluate the results of a numerical air quality model. Previous analyses of these measurements demonstrated excellent mass closure for the variety of contributing sources. In this study, a carbon-apportionment version of the Community Multiscale Air Quality (CMAQ) model was used to track primary organic and elemental carbon emissions from 15 independent sources such as mobile sources and biomass burning in addition to four precursor-specific classes of secondary organic aerosol (SOA) originating from isoprene, terpenes, aromatics, and sesquiterpenes. Conversion of the source-resolved model output into organic tracer concentrations yielded a total of 2416 data pairs for comparison with observations. While emission source contributions to the total model bias varied by season and measurement location, the largest absolute bias of −0.55 μgC/m3 was attributed to insufficient isoprene SOA in the summertime CMAQ simulation. Biomass combustion was responsible for the second largest summertime model bias (−0.46 μgC/m3 on average). Several instances of compensating errors were also evident; model underpredictions in some sectors were masked by overpredictions in others. The National Exposure Research L

  11. [Evaluation of a face model for surgical education].

    PubMed

    Schneider, G; Voigt, S; Rettinger, G

    2011-09-01

    The complex anatomy of the human face requires a high degree of experience and skill in the surgical repair of facial soft-tissue defects. Training has traditionally consisted of literature study and supervision during surgery, depending on the surgical spectrum of the teaching hospital. A structured education that includes training of different surgical methods on a model, with a gradual increase in complexity, could considerably improve subsequent patient-related training. In a cooperative project, 3di GmbH and the Department of Otolaryngology at the Friedrich-Schiller-University Jena developed a face model for surgical education that allows the training of surgical interventions on the face. The model was used during the 6th and 8th Jena Workshops for Functional and Aesthetic Surgery as well as a surgical suturing workshop, and was tested and evaluated by the attendees. The attendees mostly rated the workability of the models and the possibility to practice on a realistic face model with artificial skin as very good and beneficial. This model allows repeatable and structured training of surgical standards and is very helpful in preparation for operating on facial defects in patients.

  12. Evaluation of an Urban Canopy Parameterization in a Mesoscale Model

    SciTech Connect

    Chin, H S; Leach, M J; Sugiyama, G A; Leone, Jr., J M; Walker, H; Nasstrom, J; Brown, M J

    2004-03-18

    A modified urban canopy parameterization (UCP) is developed and evaluated in a three-dimensional mesoscale model to assess the urban impact on surface and lower atmospheric properties. This parameterization accounts for the effects of building drag, turbulent production, radiation balance, anthropogenic heating, and building rooftop heating/cooling. USGS land-use data are also utilized to derive urban infrastructure and urban surface properties needed for driving the UCP. An intensive observational period with clear-sky, strong ambient wind and drainage flow, and the absence of land-lake breeze over the Salt Lake Valley, occurring on 25-26 October 2000, is selected for this study. A series of sensitivity experiments are performed to gain understanding of the urban impact in the mesoscale model. Results indicate that within the selected urban environment, urban surface characteristics and anthropogenic heating play little role in the formation of the modeled nocturnal urban boundary layer. The rooftop effect appears to be the main contributor to this urban boundary layer. Sensitivity experiments also show that for this weak urban heat island case, the model horizontal grid resolution is important in simulating the elevated inversion layer. The root mean square errors of the predicted wind and temperature with respect to surface station measurements exhibit substantially larger discrepancies at the urban locations than the rural counterparts. However, the close agreement of modeled tracer concentration with observations fairly justifies the modeled urban impact on the wind direction shift and wind drag effects.

  13. Performance evaluation of four directional emissivity analytical models with thermal SAIL model and airborne images.

    PubMed

    Ren, Huazhong; Liu, Rongyuan; Yan, Guangjian; Li, Zhao-Liang; Qin, Qiming; Liu, Qiang; Nerry, Françoise

    2015-04-06

    Land surface emissivity is a crucial parameter in surface status monitoring. This study evaluates four directional emissivity models, including two bidirectional reflectance distribution function (BRDF) models and two gap-frequency-based models. Results showed that the kernel-driven BRDF model could represent directional emissivity well, with an error of less than 0.002, and it was consequently used to retrieve emissivity with an accuracy of about 0.012 from an airborne multi-angular thermal infrared data set. Furthermore, we updated the cavity effect factor relating to multiple scattering inside the canopy, which improved the performance of the gap-frequency-based models.
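    The kernel-driven idea can be sketched as a linear fit of directional emissivity to an angular kernel. A single illustrative kernel stands in here for the BRDF kernel pair used in such models; the kernel shape and observations are hypothetical.

```python
# Sketch: fit e(theta) = f_iso + f_geo * K(theta) by ordinary least
# squares, with K = sin^2(theta) as an illustrative placeholder kernel.
import math

theta_deg = [0.0, 15.0, 30.0, 45.0, 60.0]
K = [math.sin(math.radians(t)) ** 2 for t in theta_deg]
e_obs = [0.980, 0.978, 0.972, 0.964, 0.956]   # hypothetical multi-angular data

n = len(K)
k_mean = sum(K) / n
e_mean = sum(e_obs) / n
f_geo = sum((k - k_mean) * (e - e_mean) for k, e in zip(K, e_obs)) \
        / sum((k - k_mean) ** 2 for k in K)
f_iso = e_mean - f_geo * k_mean               # isotropic (nadir-like) term

e_fit = [f_iso + f_geo * k for k in K]
rmse = math.sqrt(sum((f - o) ** 2 for f, o in zip(e_fit, e_obs)) / n)
```

    A residual on the order of the paper's 0.002 threshold is what qualifies such a linear kernel model as an adequate representation of the directional behavior.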

  14. Metal mixture modeling evaluation project: 2. Comparison of four modeling approaches.

    PubMed

    Farley, Kevin J; Meyer, Joseph S; Balistrieri, Laurie S; De Schamphelaere, Karel A C; Iwasaki, Yuichi; Janssen, Colin R; Kamo, Masashi; Lofts, Stephen; Mebane, Christopher A; Naito, Wataru; Ryan, Adam C; Santore, Robert C; Tipping, Edward

    2015-04-01

    As part of the Metal Mixture Modeling Evaluation (MMME) project, models were developed by the National Institute of Advanced Industrial Science and Technology (Japan), the US Geological Survey (USA), HDR|HydroQual (USA), and the Centre for Ecology and Hydrology (United Kingdom) to address the effects of metal mixtures on biological responses of aquatic organisms. A comparison of the 4 models, as they were presented at the MMME workshop in Brussels, Belgium (May 2012), is provided in the present study. Overall, the models were found to be similar in structure (free ion activities computed by the Windermere humic aqueous model [WHAM]; specific or nonspecific binding of metals/cations in or on the organism; specification of metal potency factors or toxicity response functions to relate metal accumulation to biological response). Major differences in modeling approaches are attributed to various modeling assumptions (e.g., single vs multiple types of binding sites on the organism) and specific calibration strategies that affected the selection of model parameters. The models provided a reasonable description of additive (or nearly additive) toxicity for a number of individual toxicity test results. Less-than-additive toxicity was more difficult to describe with the available models. Because of limitations in the available datasets and the strong interrelationships among the model parameters (binding constants, potency factors, toxicity response parameters), further evaluation of specific model assumptions and calibration strategies is needed.

  15. Gridded global surface ozone metrics for atmospheric chemistry model evaluation

    NASA Astrophysics Data System (ADS)

    Sofen, E. D.; Bowdalo, D.; Evans, M. J.; Apadula, F.; Bonasoni, P.; Cupeiro, M.; Ellul, R.; Galbally, I. E.; Girgzdiene, R.; Luppo, S.; Mimouni, M.; Nahas, A. C.; Saliba, M.; Tørseth, K.

    2016-02-01

The concentration of ozone at the Earth's surface is measured at many locations across the globe for the purposes of air quality monitoring and atmospheric chemistry research. We have brought together all publicly available surface ozone observations from online databases from the modern era to build a consistent data set for the evaluation of chemical transport and chemistry-climate (Earth System) models for projects such as the Chemistry-Climate Model Initiative and Aer-Chem-MIP. From a total data set of approximately 6600 sites and 500 million hourly observations from 1971-2015, approximately 2200 sites and 200 million hourly observations pass screening as high-quality sites in regionally representative locations that are appropriate for use in global model evaluation. Data volume is generally good from the start of air quality monitoring networks in 1990 through 2013. Ozone observations are biased heavily toward North America and Europe, with sparse coverage over the rest of the globe. This data set is made available for the purposes of model evaluation as a set of gridded metrics intended to describe the distribution of ozone concentrations on monthly and annual timescales. Metrics include the moments of the distribution, percentiles, the maximum daily 8-hour average (MDA8), the sum of means over 35 ppb (daily maximum 8-h; SOMO35), accumulated ozone exposure above a threshold of 40 ppbv (AOT40), and metrics related to air quality regulatory thresholds. Gridded data sets are stored as netCDF-4 files and are available to download from the British Atmospheric Data Centre (doi: 10.5285/08fbe63d-fa6d-4a7a-b952-5932e3ab0452). We provide recommendations to the ozone measurement community regarding improving metadata reporting, to simplify ongoing and future efforts in working with ozone data from disparate networks in a consistent manner.
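
    The exposure metrics named above follow widely used definitions. As an illustration (not the authors' processing code), a minimal sketch of MDA8 and AOT40 for a single day of hourly data; note that AOT40 is strictly accumulated over daylight hours only, which this sketch ignores:

```python
import numpy as np

def mda8(hourly_ppb):
    """Maximum daily 8-h average: the largest 8-hour running mean
    within one day's 24 hourly values (regulatory variants differ in
    how they treat windows that cross midnight)."""
    h = np.asarray(hourly_ppb, dtype=float)
    return max(h[i:i + 8].mean() for i in range(24 - 8 + 1))

def aot40(hourly_ppb):
    """AOT40: accumulated exposure over a 40 ppb threshold, summing
    only the exceedance (value - 40); the daylight-hours restriction
    is omitted here for brevity."""
    h = np.asarray(hourly_ppb, dtype=float)
    return float(np.clip(h - 40.0, 0.0, None).sum())

# One synthetic day: 12 h at 30 ppb, then 12 h at 50 ppb.
day = np.concatenate([np.full(12, 30.0), np.full(12, 50.0)])
print(mda8(day))   # an 8-h window fits entirely in the 50 ppb half -> 50.0
print(aot40(day))  # 12 hours x 10 ppb exceedance -> 120.0
```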

  16. Gridded global surface ozone metrics for atmospheric chemistry model evaluation

    NASA Astrophysics Data System (ADS)

    Sofen, E. D.; Bowdalo, D.; Evans, M. J.; Apadula, F.; Bonasoni, P.; Cupeiro, M.; Ellul, R.; Galbally, I. E.; Girgzdiene, R.; Luppo, S.; Mimouni, M.; Nahas, A. C.; Saliba, M.; Tørseth, K.; and all other contributors to the WMO GAW, EPA AQS, EPA CASTNET, CAPMoN, NAPS, AirBase, EMEP, and EANET ozone datasets

    2015-07-01

The concentration of ozone at the Earth's surface is measured at many locations across the globe for the purposes of air quality monitoring and atmospheric chemistry research. We have brought together all publicly available surface ozone observations from online databases from the modern era to build a consistent dataset for the evaluation of chemical transport and chemistry-climate (Earth System) models for projects such as the Chemistry-Climate Model Initiative and Aer-Chem-MIP. From a total dataset of approximately 6600 sites and 500 million hourly observations from 1971-2015, approximately 2200 sites and 200 million hourly observations pass screening as high-quality sites in regional background locations that are appropriate for use in global model evaluation. Data volume is generally good from the start of air quality monitoring networks in 1990 through 2013. Ozone observations are biased heavily toward North America and Europe, with sparse coverage over the rest of the globe. This dataset is made available for the purposes of model evaluation as a set of gridded metrics intended to describe the distribution of ozone concentrations on monthly and annual timescales. Metrics include the moments of the distribution, percentiles, the maximum daily eight-hour average (MDA8), SOMO35, AOT40, and metrics related to air quality regulatory thresholds. Gridded datasets are stored as netCDF-4 files and are available to download from the British Atmospheric Data Centre (doi:10.5285/08fbe63d-fa6d-4a7a-b952-5932e3ab0452). We provide recommendations to the ozone measurement community regarding improving metadata reporting, to simplify ongoing and future efforts in working with ozone data from disparate networks in a consistent manner.

  17. Evaluating the uncertainty of input quantities in measurement models

    NASA Astrophysics Data System (ADS)

    Possolo, Antonio; Elster, Clemens

    2014-06-01

    The Guide to the Expression of Uncertainty in Measurement (GUM) gives guidance about how values and uncertainties should be assigned to the input quantities that appear in measurement models. This contribution offers a concrete proposal for how that guidance may be updated in light of the advances in the evaluation and expression of measurement uncertainty that were made in the course of the twenty years that have elapsed since the publication of the GUM, and also considering situations that the GUM does not yet contemplate. Our motivation is the ongoing conversation about a new edition of the GUM. While generally we favour a Bayesian approach to uncertainty evaluation, we also recognize the value that other approaches may bring to the problems considered here, and focus on methods for uncertainty evaluation and propagation that are widely applicable, including to cases that the GUM has not yet addressed. In addition to Bayesian methods, we discuss maximum-likelihood estimation, robust statistical methods, and measurement models where values of nominal properties play the same role that input quantities play in traditional models. We illustrate these general-purpose techniques in concrete examples, employing data sets that are realistic but that also are of conveniently small sizes. The supplementary material available online lists the R computer code that we have used to produce these examples (stacks.iop.org/Met/51/3/339/mmedia). Although we strive to stay close to clause 4 of the GUM, which addresses the evaluation of uncertainty for input quantities, we depart from it as we review the classes of measurement models that we believe are generally useful in contemporary measurement science. We also considerably expand and update the treatment that the GUM gives to Type B evaluations of uncertainty: reviewing the state-of-the-art, disciplined approach to the elicitation of expert knowledge, and its encapsulation in probability distributions that are usable in
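
    As one concrete example of assigning distributions to input quantities and propagating them through a measurement model, a Monte Carlo sketch in the spirit of GUM Supplement 1 (this is not the article's R code; the model and the distributions are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Illustrative measurement model Y = X1 * X2, with input quantities
# assigned distributions as in Type B evaluations:
#   X1 ~ Normal(10, 0.1)        (e.g. from a calibration certificate)
#   X2 ~ Uniform(0.95, 1.05)    (rectangular, from limit information)
x1 = rng.normal(10.0, 0.1, N)
x2 = rng.uniform(0.95, 1.05, N)
y = x1 * x2

estimate = y.mean()                      # measurand estimate
std_uncertainty = y.std(ddof=1)          # standard uncertainty u(y)
lo, hi = np.percentile(y, [2.5, 97.5])   # 95 % coverage interval
print(f"y = {estimate:.3f}, u(y) = {std_uncertainty:.3f}, "
      f"95% interval [{lo:.3f}, {hi:.3f}]")
```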

  18. Evaluation of Data Used for Modelling the Stratosphere of Saturn

    NASA Astrophysics Data System (ADS)

    Armstrong, Eleanor Sophie; Irwin, Patrick G. J.; Moses, Julianne I.

    2015-11-01

Planetary atmospheres are modeled through the use of a photochemical and kinetic reaction scheme constructed from experimentally and theoretically determined rate coefficients, photoabsorption cross sections, and branching ratios for the molecules described within them. The KINETICS architecture has previously been developed to model planetary atmospheres and is applied here to Saturn’s stratosphere. We consider the pathways that comprise the reaction scheme of a current model, and update the reaction scheme according to the findings of a literature investigation. We evaluate contemporary photochemical literature, studying recent data sets of cross sections and branching ratios for a number of hydrocarbons used in the photochemical scheme of Model C of KINETICS. In particular, evaluation of new photodissociation branching ratios for CH4, C2H2, C2H4, C3H3, C3H5 and C4H2, and new cross-sectional data for C2H2, C2H4, C2H6, C3H3, C4H2, C6H2 and C8H2 are considered. By evaluating the techniques used and the data sets obtained, a new reaction scheme selection was drawn up. These data are then used within the preferred reaction scheme of the thesis and applied to the KINETICS atmospheric model to produce a model of the stratosphere of Saturn in a steady state. A total output of the preferred reaction scheme is presented, and the data are compared both with the previous reaction scheme and with data from the Cassini spacecraft in orbit around Saturn. One of the key findings of this work is that there is significant change in the model’s output as a result of temperature-dependent data determination. Although only shown within the changes to the photochemical portion of the preferred reaction scheme, it is suggested that an equally important temperature dependence will be exhibited in the kinetic section of the reaction scheme. The photochemical model output is shown to be highly dependent on the preferred reaction scheme used within it by this thesis. The importance of correct

  19. Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation

    NASA Astrophysics Data System (ADS)

    Tsai, Frank T.-C.; Elshall, Ahmed S.

    2013-09-01

Analysts are often faced with competing propositions for each uncertain model component. How can we judge whether we have selected the correct proposition(s) for an uncertain model component out of numerous possibilities? We introduce the hierarchical Bayesian model averaging (HBMA) method as a multimodel framework for uncertainty analysis. The HBMA allows for segregating, prioritizing, and evaluating different sources of uncertainty and their corresponding competing propositions through a hierarchy of BMA models that forms a BMA tree. We apply the HBMA to conduct uncertainty analysis on the reconstructed hydrostratigraphic architectures of the Baton Rouge aquifer-fault system, Louisiana. Due to uncertainty in model data, structure, and parameters, multiple possible hydrostratigraphic models are produced and calibrated as base models. The study considers four sources of uncertainty. With respect to data uncertainty, the study considers two calibration data sets. With respect to model structure, the study considers three different variogram models, two geological stationarity assumptions, and two fault conceptualizations. The base models are produced following a combinatorial design to allow for uncertainty segregation. Thus, these four uncertain model components with their corresponding competing model propositions result in 24 base models. The results show that the systematic dissection of the uncertain model components along with their corresponding competing propositions allows for detecting the robust model propositions and the major sources of uncertainty.
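
    The BMA machinery underlying such a hierarchy rests on posterior model probabilities computed from model likelihoods and priors. A minimal sketch (the log-likelihood values below are hypothetical, not from the Baton Rouge study):

```python
import numpy as np

def bma_weights(log_likelihoods, priors=None):
    """Posterior model probabilities, proportional to prior x likelihood;
    computed in log space to avoid underflow."""
    ll = np.asarray(log_likelihoods, dtype=float)
    pri = (np.full(ll.size, 1.0 / ll.size) if priors is None
           else np.asarray(priors, dtype=float))
    logw = ll + np.log(pri)
    logw -= logw.max()          # stabilize before exponentiating
    w = np.exp(logw)
    return w / w.sum()

# Three competing propositions for one uncertain component
# (hypothetical log-likelihoods, e.g. three variogram models):
w = bma_weights([-120.0, -121.0, -125.0])
print(np.round(w, 3))
```

A BMA prediction is then the weight-averaged prediction across the competing models, and the HBMA tree repeats this averaging level by level.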

  20. Metal Mixture Modeling Evaluation project: 2. Comparison of four modeling approaches

    USGS Publications Warehouse

    Farley, Kevin J.; Meyer, Joe; Balistrieri, Laurie S.; DeSchamphelaere, Karl; Iwasaki, Yuichi; Janssen, Colin; Kamo, Masashi; Lofts, Steve; Mebane, Christopher A.; Naito, Wataru; Ryan, Adam C.; Santore, Robert C.; Tipping, Edward

    2015-01-01

As part of the Metal Mixture Modeling Evaluation (MMME) project, models were developed by the National Institute of Advanced Industrial Science and Technology (Japan), the U.S. Geological Survey (USA), HDR|HydroQual, Inc. (USA), and the Centre for Ecology and Hydrology (UK) to address the effects of metal mixtures on biological responses of aquatic organisms. A comparison of the 4 models, as they were presented at the MMME Workshop in Brussels, Belgium (May 2012), is provided herein. Overall, the models were found to be similar in structure (free ion activities computed by WHAM; specific or non-specific binding of metals/cations in or on the organism; specification of metal potency factors and/or toxicity response functions to relate metal accumulation to biological response). Major differences in modeling approaches are attributed to various modeling assumptions (e.g., single versus multiple types of binding sites on the organism) and specific calibration strategies that affected the selection of model parameters. The models provided a reasonable description of additive (or nearly additive) toxicity for a number of individual toxicity test results. Less-than-additive toxicity was more difficult to describe with the available models. Because of limitations in the available datasets and the strong inter-relationships among the model parameters (log KM values, potency factors, toxicity response parameters), further evaluation of specific model assumptions and calibration strategies is needed.

  1. Evaluation of weather-based rice yield models in India

    NASA Astrophysics Data System (ADS)

    Sudharsan, D.; Adinarayana, J.; Reddy, D. Raji; Sreenivas, G.; Ninomiya, S.; Hirafuji, M.; Kiura, T.; Tanaka, K.; Desai, U. B.; Merchant, S. N.

    2013-01-01

The objective of this study was to compare two different rice simulation models—standalone (Decision Support System for Agrotechnology Transfer [DSSAT]) and web based (SImulation Model for RIce-Weather relations [SIMRIW])—with agrometeorological data and agronomic parameters for estimation of rice crop production in the southern semi-arid tropics of India. Studies were carried out on the BPT5204 rice variety to evaluate the two crop simulation models. Long-term experiments were conducted in a research farm of Acharya N G Ranga Agricultural University (ANGRAU), Hyderabad, India. Initially, the results were obtained using 4 years (1994-1997) of data with weather parameters from a local weather station to evaluate DSSAT simulated results against observed values. Linear regression models used for the purpose showed a close relationship between DSSAT and observed yield. Subsequently, yield comparisons were also carried out with SIMRIW and DSSAT, and validated with actual observed values. As the correlation coefficients of the SIMRIW simulations were within acceptable limits, further rice experiments in monsoon (Kharif) and post-monsoon (Rabi) agricultural seasons (2009, 2010 and 2011) were carried out with a location-specific distributed sensor network system. These proximal systems help simulate dry weight, leaf area index, and potential yield with the Java-based SIMRIW on a daily/weekly/monthly/seasonal basis. These dynamic parameters are useful to the farming community for necessary decision making in a ubiquitous manner. However, SIMRIW requires fine tuning for better results/decision making.

  2. A neural network model for credit risk evaluation.

    PubMed

    Khashman, Adnan

    2009-08-01

Credit scoring is one of the key analytical techniques in credit risk evaluation, which has been an active research area in financial risk management. This paper presents a credit risk evaluation system that uses a neural network model based on the back-propagation learning algorithm. We train and implement the neural network to decide whether to approve or reject a credit application, using seven learning schemes and real-world credit applications from the Australian credit approval datasets. A comparison of the system performance under the different learning schemes is provided; furthermore, we compare the performance of two neural networks, with one and two hidden layers, following the ideal learning scheme. Experimental results suggest that neural networks can be effectively used in automatic processing of credit applications.
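
    A minimal illustration of the kind of network described: a single-hidden-layer classifier trained with back-propagation on synthetic stand-in data. The paper uses the Australian credit approval data set and seven learning schemes; everything below (data, sizes, learning rate) is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for credit-application features and approve/reject
# labels (a linear rule plus seven features, purely for demonstration).
X = rng.normal(size=(200, 7))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float).reshape(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer; weights updated by plain back-propagation.
W1 = rng.normal(scale=0.5, size=(7, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.5

for _ in range(2000):
    h = sigmoid(X @ W1 + b1)              # hidden activations
    p = sigmoid(h @ W2 + b2)              # approval probability
    g_out = (p - y) / len(X)              # cross-entropy output gradient
    g_hid = (g_out @ W2.T) * h * (1 - h)  # backpropagated hidden gradient
    W2 -= lr * h.T @ g_out;  b2 -= lr * g_out.sum(axis=0)
    W1 -= lr * X.T @ g_hid;  b1 -= lr * g_hid.sum(axis=0)

accuracy = float(np.mean((p > 0.5) == y))
print(f"training accuracy: {accuracy:.2f}")
```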

  3. Reliability modeling and evaluation of HVDC power transmission systems

    SciTech Connect

    Dialynas, E.N.; Koskolos, N.C. (Dept. of Electrical and Computer Engineering)

    1994-04-01

    The objective of this paper is to present an improved computational method for evaluating the reliability indices of HVdc transmission systems. The developed models and computational techniques are described. These can be used to simulate the operational practices and characteristics of a system under study efficiently and realistically. This method is based on the failure modes and effects analysis and uses the event tree method and the minimal cut set approach to represent the system's operational behavior and deduce the appropriate system's failure modes. A set of five reliability indices is evaluated for each output node being analyzed together with the probability and frequency of encountering particular regions of system performance levels. The analysis of an assumed HVdc bipolar transmission system is also included.
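
    The minimal-cut-set approach can be sketched as follows: with independent component failures, the probability that at least one minimal cut set has all of its components failed is obtained by inclusion-exclusion over the cut sets. The bipole layout and numbers below are hypothetical, not taken from the paper:

```python
from itertools import combinations

def system_unavailability(cut_sets, q):
    """Probability that at least one minimal cut set has all of its
    components failed, by inclusion-exclusion over the cut sets
    (components assumed independent, failure probabilities in q)."""
    total = 0.0
    for r in range(1, len(cut_sets) + 1):
        for combo in combinations(cut_sets, r):
            union = set().union(*combo)   # components in the intersection event
            term = 1.0
            for c in union:
                term *= q[c]
            total += (-1) ** (r + 1) * term
    return total

# Hypothetical bipole: each pole fails if its converter or its line
# fails; the link is lost only when both poles are down, so the
# minimal cut sets pair one element from each pole.
q = {"conv1": 0.01, "line1": 0.02, "conv2": 0.01, "line2": 0.02}
cuts = [{"conv1", "conv2"}, {"conv1", "line2"},
        {"line1", "conv2"}, {"line1", "line2"}]
u = system_unavailability(cuts, q)
print(f"{u:.8f}")   # equals P(pole 1 down) * P(pole 2 down) = 0.0298**2
```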

  4. Preliminary evaluation of a lake whitefish (Coregonus clupeaformis) bioenergetics model

    USGS Publications Warehouse

    Madenjian, Charles P.; Pothoven, Steven A.; Schneeberger, Philip J.; O'Connor, Daniel V.; Brandt, Stephen B.

    2005-01-01

We conducted a preliminary evaluation of a lake whitefish (Coregonus clupeaformis) bioenergetics model by applying the model to size-at-age data for lake whitefish from northern Lake Michigan. We then compared estimates of gross growth efficiency (GGE) from our bioenergetics model with previously published estimates of GGE for bloater (C. hoyi) in Lake Michigan and for lake whitefish in Quebec. According to our model, the GGE of Lake Michigan lake whitefish decreased from 0.075 to 0.02 as age increased from 2 to 5 years. In contrast, the GGE of lake whitefish in Quebec inland waters decreased from 0.12 to 0.05 for the same ages. When our swimming-speed submodel was replaced with a submodel that had been used for lake trout (Salvelinus namaycush) in Lake Michigan and an observed predator energy density for Lake Michigan lake whitefish was employed, our model predicted that the GGE of Lake Michigan lake whitefish decreased from 0.12 to 0.04 as age increased from 2 to 5 years.
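
    Gross growth efficiency itself is simply growth divided by consumption over the same period. A trivial sketch with illustrative numbers (chosen to reproduce the 0.075 reported above for age-2 Lake Michigan lake whitefish, not taken from the study's data):

```python
def gross_growth_efficiency(growth_g, consumption_g):
    """GGE: somatic growth divided by food consumed over the same period."""
    return growth_g / consumption_g

# Illustrative numbers only: a fish gaining 30 g while eating 400 g of
# prey has GGE = 0.075, the age-2 value reported in the abstract.
gge = gross_growth_efficiency(30.0, 400.0)
print(gge)  # -> 0.075
```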

  5. Proper bibeta ROC model: algorithm, software, and performance evaluation

    NASA Astrophysics Data System (ADS)

    Chen, Weijie; Hu, Nan

    2016-03-01

Semi-parametric models are often used to fit data collected in receiver operating characteristic (ROC) experiments to obtain a smooth ROC curve and ROC parameters for statistical inference purposes. The proper bibeta model, as recently proposed by Mossman and Peng, enjoys several theoretical properties. In addition to having explicit density functions for the latent decision variable and an explicit functional form of the ROC curve, the two-parameter bibeta model also has simple closed-form expressions for the true-positive fraction (TPF), false-positive fraction (FPF), and the area under the ROC curve (AUC). In this work, we developed a computational algorithm and R package implementing this model for ROC curve fitting. Our algorithm can deal with any ordinal data (categorical or continuous). To improve the accuracy, efficiency, and reliability of our software, we adopted several strategies in our computational algorithm, including: (1) the LABROC4 categorization to obtain the true maximum likelihood estimation of the ROC parameters; (2) a principled approach to initializing parameters; (3) analytical first-order and second-order derivatives of the likelihood function; (4) an efficient optimization procedure (the L-BFGS algorithm in the R package "nlopt"); and (5) an analytical delta method to estimate the variance of the AUC. We evaluated the performance of our software with intensive simulation studies and compared with the conventional binormal and the proper binormal-likelihood-ratio models developed at the University of Chicago. Our simulation results indicate that our software is highly accurate, efficient, and reliable.
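
    The closed-form bibeta expressions are not reproduced here, but the nonparametric AUC that such semi-parametric fits are typically compared against is easy to state: the normalized Mann-Whitney statistic over all positive-negative score pairs. A small sketch with hypothetical ordinal ratings:

```python
import numpy as np

def empirical_auc(neg_scores, pos_scores):
    """Nonparametric AUC: the fraction of (positive, negative) pairs in
    which the positive case scores higher, counting ties as 1/2
    (the normalized Mann-Whitney U statistic)."""
    neg = np.asarray(neg_scores, dtype=float)[:, None]
    pos = np.asarray(pos_scores, dtype=float)[None, :]
    return float(np.mean((pos > neg) + 0.5 * (pos == neg)))

# Hypothetical ordinal confidence ratings for actually-negative and
# actually-positive cases:
auc = empirical_auc([1, 2, 2, 3], [2, 3, 4, 4])
print(auc)  # 12 wins + 3 ties/2 out of 16 pairs -> 0.84375
```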

  6. Laboratory evaluation of a walleye (Sander vitreus) bioenergetics model

    USGS Publications Warehouse

    Madenjian, C.P.; Wang, C.; O'Brien, T. P.; Holuszko, M.J.; Ogilvie, L.M.; Stickel, R.G.

    2010-01-01

Walleye (Sander vitreus) is an important game fish throughout much of North America. We evaluated the performance of the Wisconsin bioenergetics model for walleye in the laboratory. Walleyes were fed rainbow smelt (Osmerus mordax) in four laboratory tanks during a 126-day experiment. Based on a statistical comparison of bioenergetics model predictions of monthly consumption with the observed monthly consumption, we concluded that the bioenergetics model significantly underestimated food consumption by walleye in the laboratory. The degree of underestimation appeared to depend on the feeding rate. For the tank with the lowest feeding rate (1.4% of walleye body weight per day), the agreement between the bioenergetics model prediction of cumulative consumption over the entire 126-day experiment and the observed cumulative consumption was remarkably close, as the prediction was within 0.1% of the observed cumulative consumption. Feeding rates in the other three tanks ranged from 1.6% to 1.7% of walleye body weight per day, and bioenergetics model predictions of cumulative consumption over the 126-day experiment ranged between 11% and 15% less than the observed cumulative consumption. © 2008 Springer Science+Business Media B.V.

  7. Evaluation of detection model performance in power-law noise

    NASA Astrophysics Data System (ADS)

    Burgess, Arthur E.

    2001-06-01

Two-alternative forced-choice (2AFC) nodule detection performances of a number of model observers were evaluated for detection of simulated nodules in filtered power-law (1/f³) noise. The models included the ideal observer, the channelized Fisher-Hotelling (FH) model with two different basis function sets, the non-prewhitening matched filter with an eye filter (NPWE), and the Rose model with no DC response (RoseNDC). Detectability of the designer nodule signal was investigated. It has the equation s(ρ) = A·Rect(ρ/2)(1 − ρ²)^ν, where ρ is a normalized distance (r/R), R is the nodule radius, and A is signal amplitude. The nodule profile can be changed (designed) by changing the value of ν. For example, the result is a sharp-edged, flat-topped disc for ν equal to zero and the projection of a sphere for ν equal to 0.5. Human observer experiments were done with nodules based on ν equal to 0, 0.5, and 1.5. For the ν equal to 1.5 case, human results could be well fitted using a variety of models. The human CD diagram slopes were -0.12, +0.27, and +0.44 for ν equal to 0, 0.5, and 1.5, respectively.
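
    The designer nodule profile is straightforward to evaluate numerically. A sketch of s(ρ) for the ν values used in the experiments (the function name and the sample points are illustrative, not from the paper):

```python
import numpy as np

def designer_nodule(rho, A=1.0, nu=0.5):
    """s(rho) = A * Rect(rho/2) * (1 - rho**2)**nu, with rho = r/R.
    Rect(rho/2) is 1 for |rho| <= 1 and 0 otherwise."""
    rho = np.asarray(rho, dtype=float)
    inside = np.abs(rho) <= 1.0
    profile = A * (1.0 - np.clip(rho ** 2, 0.0, 1.0)) ** nu
    return np.where(inside, profile, 0.0)

rho = np.array([0.0, 0.6, 1.0, 1.5])
print(designer_nodule(rho, nu=0.0))  # sharp-edged, flat-topped disc
print(designer_nodule(rho, nu=0.5))  # projection of a sphere
```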

  8. Advancing Models and Evaluation of Cumulus, Climate and Aerosol Interactions

    SciTech Connect

    Gettelman, Andrew

    2015-10-27

This project was able to meet its goals, but faced some serious challenges due to personnel issues. Nonetheless, it was largely successful. The Project Objectives were as follows: 1. Develop a unified representation of stratiform and cumulus cloud microphysics for NCAR/DOE global community models. 2. Examine the effects of aerosols on clouds and their impact on precipitation in stratiform and cumulus clouds. We will also explore the effects of clouds and precipitation on aerosols. 3. Test these new formulations using advanced evaluation techniques and observations and release

  9. A Framework and Model for Evaluating Clinical Decision Support Architectures

    PubMed Central

    Wright, Adam; Sittig, Dean F.

    2008-01-01

In this paper, we develop a four-phase model for evaluating architectures for clinical decision support that focuses on: defining a set of desirable features for a decision support architecture; building a proof-of-concept prototype; demonstrating that the architecture is useful by showing that it can be integrated with existing decision support systems; and comparing its coverage to that of other architectures. We apply this framework to several well-known decision support architectures, including Arden Syntax, GLIF, SEBASTIAN, and SAGE. PMID:18462999

  10. Interfacial micromechanics in fibrous composites: design, evaluation, and models.

    PubMed

    Lei, Zhenkun; Li, Xuan; Qin, Fuyong; Qiu, Wei

    2014-01-01

Recent advances in interfacial micromechanics of fiber-reinforced composites using micro-Raman spectroscopy are reviewed. The mechanical problems faced in interface design for fibrous composites are elaborated from three optimization directions: material, interface, and computation. Reasons are given why interfacial evaluation methods find it difficult to guarantee integrity, repeatability, and consistency. Micro-Raman studies of fiber interface failure behavior and the main interface mechanical problems in fibrous composites are summarized, including interfacial stress transfer, strength criteria for interface debonding and failure, fiber bridging, frictional slip, slip transition, and friction reloading. Theoretical models of the above interface mechanical problems are given.

  11. Evaluation of a differentiation model of preschoolers' executive functions.

    PubMed

    Howard, Steven J; Okely, Anthony D; Ellis, Yvonne G

    2015-01-01

    Despite the prominent role of executive functions in children's emerging competencies, there remains debate regarding the structure and development of executive functions. In an attempt to reconcile these discrepancies, a differentiation model of executive function development was evaluated in the early years using 6-month age groupings. Specifically, 281 preschoolers completed measures of working memory, inhibition, and shifting. Results contradicted suggestions that executive functions follow a single trajectory of progressive separation in childhood, instead suggesting that these functions may undergo a period of integration in the preschool years. These results highlight potential problems with current practices and theorizing in executive function research.

  12. Evaluation and cross-validation of Environmental Models

    NASA Astrophysics Data System (ADS)

    Lemaire, Joseph

Before scientific models (statistical or empirical models based on experimental measurements; physical or mathematical models) can be proposed and selected as ISO Environmental Standards, a Commission of professional experts appointed by an established International Union or Association (e.g. IAGA for Geomagnetism and Aeronomy, . . . ) should have been able to study, document, evaluate, and validate the best alternative models available at a given epoch. Examples will be given indicating that different values for the Earth radius have been employed in different data processing laboratories, institutes, or agencies to process, analyse, or retrieve series of experimental observations. Furthermore, invariant magnetic coordinates like B and L, commonly used in the study of Earth's radiation belt fluxes and for their mapping, differ from one space mission data center to another, from team to team, and from country to country. Worse, users of empirical models generally fail to use the original magnetic model which had been employed to compile B and L, and thus to build these environmental models. These are just some flagrant examples of inconsistencies and misuses identified so far; there are probably more of them to be uncovered by careful, independent examination and benchmarking. The meter prototype, the standard unit of length, was determined on 20 May 1875, during the Diplomatic Conference of the Meter, and deposited at the BIPM (Bureau International des Poids et Mesures). By the same token, to coordinate and safeguard progress in the field of Space Weather, similar initiatives need to be undertaken to prevent the wild, uncontrolled dissemination of pseudo Environmental Models and Standards. Indeed, unless validation tests have been performed, there is no guarantee, a priori, that all models on the market place have been built consistently with the same units system, or that they are based on identical definitions for the coordinate systems, etc... Therefore

  13. An evaluation of sex-age-kill (SAK) model performance

    USGS Publications Warehouse

    Millspaugh, Joshua J.; Skalski, John R.; Townsend, Richard L.; Diefenbach, Duane R.; Boyce, Mark S.; Hansen, Lonnie P.; Kammermeyer, Kent

    2009-01-01

The sex-age-kill (SAK) model is widely used to estimate abundance of harvested large mammals, including white-tailed deer (Odocoileus virginianus). Despite a long history of use, few formal evaluations of SAK performance exist. We investigated how violations of the stable age distribution and stationary population assumption, changes to male or female harvest, stochastic effects (i.e., random fluctuations in recruitment and survival), and sampling efforts influenced SAK estimation. When the simulated population had a stable age distribution and λ > 1, the SAK model underestimated abundance. Conversely, when λ < 1, the SAK overestimated abundance. When changes to male harvest were introduced, SAK estimates were opposite the true population trend. In contrast, SAK estimates were robust to changes in female harvest rates. Stochastic effects caused SAK estimates to fluctuate about their equilibrium abundance, but the effect dampened as the size of the surveyed population increased. When we considered both stochastic effects and sampling error at a deer management unit scale, the resultant abundance estimates were within ±121.9% of the true population level 95% of the time. These combined results demonstrate extreme sensitivity to model violations and scale of analysis. Without changes to model formulation, the SAK model will be biased when λ ≠ 1. Furthermore, any factor that alters the male harvest rate, such as changes to regulations or changes in hunter attitudes, will bias population estimates. Sex-age-kill estimates may be precise at large spatial scales, such as the state level, but less so at the individual management unit level. Alternative models, such as statistical age-at-harvest models, which require similar data types, might allow for more robust, broad-scale demographic assessments.
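
    SAK formulations vary by agency, but one common textbook form expands the male harvest by an assumed male harvest rate and then applies sex and age ratios to reconstruct the rest of the population. The sketch below is illustrative only, with hypothetical inputs rather than values from the study:

```python
def sak_estimate(buck_harvest, buck_harvest_rate,
                 doe_buck_ratio, fawn_doe_ratio):
    """Illustrative SAK-style estimate: expand the male harvest by an
    assumed male harvest rate, then apply sex and age ratios.
    Agency implementations differ in detail."""
    bucks = buck_harvest / buck_harvest_rate
    does = bucks * doe_buck_ratio
    fawns = does * fawn_doe_ratio
    return bucks + does + fawns

# Hypothetical inputs: 2,000 bucks harvested at a 40% harvest rate,
# 2.0 does per buck, 1.2 fawns per doe.
total = sak_estimate(2000, 0.40, 2.0, 1.2)
print(total)  # 5000 bucks + 10000 does + 12000 fawns = 27000.0
```

This structure makes the sensitivity reported above easy to see: any error in the assumed male harvest rate scales the entire estimate.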

  14. Evaluation of internal noise methods for Hotelling observer models

    SciTech Connect

    Zhang Yani; Pham, Binh T.; Eckstein, Miguel P.

    2007-08-15

    The inclusion of internal noise in model observers is a common method to allow for quantitative comparisons between human and model observer performance in visual detection tasks. In this article, we studied two different strategies for inserting internal noise into Hotelling model observers. In the first strategy, internal noise was added to the output of individual channels: (a) Independent nonuniform channel noise, (b) independent uniform channel noise. In the second strategy, internal noise was added to the decision variable arising from the combination of channel responses. The standard deviation of the zero mean internal noise was either constant or proportional to: (a) the decision variable's standard deviation due to the external noise, (b) the decision variable's variance caused by the external noise, (c) the decision variable magnitude on a trial to trial basis. We tested three model observers: square window Hotelling observer (HO), channelized Hotelling observer (CHO), and Laguerre-Gauss Hotelling observer (LGHO) using a four alternative forced choice (4AFC) signal known exactly but variable task with a simulated signal embedded in real x-ray coronary angiogram backgrounds. The results showed that the internal noise method that led to the best prediction of human performance differed across the studied model observers. The CHO model best predicted human observer performance with the channel internal noise. The HO and LGHO best predicted human observer performance with the decision variable internal noise. The present results might guide researchers with the choice of methods to include internal noise into Hotelling model observers when evaluating and optimizing medical image quality.

  15. Postaudit evaluation of conceptual model uncertainty for a glacial aquifer groundwater flow and contaminant transport model

    NASA Astrophysics Data System (ADS)

    Lemke, Lawrence D.; Cypher, Joseph A.

    2010-06-01

    Numerical groundwater flow and contaminant transport modeling incorporating three alternative conceptual models was conducted in 2005 to assess remedial actions and predict contaminant concentrations in an unconfined glacial aquifer located in Milford, Michigan, USA. Three alternative conceptual models were constructed and independently calibrated to evaluate uncertainty in the geometry of an aquitard underlying the aquifer and the extent to which infiltration from two manmade surface water bodies influenced the groundwater flow field. Contaminant transport for benzene, cis-DCE, and MTBE was modeled for a 5-year period that included a 2-year history match from July 2003 to May 2005 and predictions for a 3-year period ending in July 2008. A postaudit of model performance indicates that predictions for pumping wells, which integrated the transport signal across multiple model layers, were reliable but unable to differentiate between alternative conceptual model responses. In contrast, predictions for individual monitoring wells with limited screened intervals were less consistent, but held promise for evaluating alternative hydrogeologic models. Results of this study suggest that model conceptualization can have important practical implications for the delineation of contaminant transport pathways using monitoring wells, but may exert less influence on integrated predictions for pumping wells screened over multiple numerical model layers.

  16. SPITFIRE within the MPI Earth system model: Model development and evaluation

    NASA Astrophysics Data System (ADS)

    Lasslop, Gitta; Thonicke, Kirsten; Kloster, Silvia

    2014-09-01

    Quantification of the role of fire within the Earth system requires an adequate representation of fire as a climate-controlled process within an Earth system model. To be able to address questions on the interaction between fire and the Earth system, we implemented the mechanistic fire model SPITFIRE in JSBACH, the land surface model of the MPI Earth system model. Here, we document the model implementation as well as model modifications. We evaluate our model results by comparing the simulation to the GFED version 3 satellite-based data set. In addition, we assess the sensitivity of the model to the meteorological forcing and to the spatial variability of a number of fire-relevant model parameters. A first comparison of model results with burned area observations showed a strong correlation of the residuals with wind speed. Further analysis revealed that the response of the fire spread to wind speed was too strong for application at the global scale. Therefore, we developed an improved parametrization to account for this effect. The evaluation of the improved model shows that the model is able to capture the global gradients and the seasonality of burned area. Some areas of model-data mismatch can be explained by differences in vegetation cover compared to observations. We achieve benchmarking scores comparable to other state-of-the-art fire models. The global total burned area is sensitive to the meteorological forcing. Adjustment of parameters leads to similar model results for both forcing data sets with respect to spatial and seasonal patterns. This article was corrected on 29 SEP 2014. See the end of the full text for details.

  17. Physical Model Assisted Probability of Detection in Nondestructive Evaluation

    NASA Astrophysics Data System (ADS)

    Li, M.; Meeker, W. Q.; Thompson, R. B.

    2011-06-01

    Nondestructive evaluation is used widely in many engineering and industrial areas to detect defects or flaws such as cracks inside parts or structures during manufacturing or for products in service. The standard statistical model is a simple empirical linear regression between the (possibly transformed) signal response variables and the (possibly transformed) explanatory variables. For some applications, such a simple empirical approach is inadequate. An important alternative approach is to use knowledge of the physics of the inspection process to provide information about the underlying relationship between the response and explanatory variables. Use of such knowledge can greatly increase the power and accuracy of the statistical analysis and enable, when needed, proper extrapolation outside the range of the observed explanatory variables. This paper describes a set of physical model-assisted analyses to study the capability of two different ultrasonic testing inspection methods to detect synthetic hard alpha inclusion and flat-bottom hole defects in a titanium forging disk.
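
    The empirical signal-response baseline described above is often written as ln(â) = β0 + β1·ln(a) + ε with a detection when â exceeds a decision threshold, giving a closed-form probability of detection. A minimal sketch follows; the parameter values are illustrative, not fitted to the paper's forging data.

```python
from math import erf, log, sqrt

def pod(a, beta0, beta1, sigma, a_th):
    """Probability of detection for flaw size a under the linear
    log-log signal-response model ln(ahat) = beta0 + beta1*ln(a) + N(0, sigma),
    declaring a detection when ahat exceeds the threshold a_th."""
    mu = beta0 + beta1 * log(a)            # mean log-response at size a
    z = (mu - log(a_th)) / sigma           # standardized exceedance margin
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF
```

    A physics-assisted analysis replaces or constrains (β0, β1) with model-based knowledge of the inspection, which is what permits extrapolation beyond the observed flaw sizes.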

  18. Statistical evaluation and modeling of Internet dial-up traffic

    NASA Astrophysics Data System (ADS)

    Faerber, Johannes; Bodamer, Stefan; Charzinski, Joachim

    1999-08-01

    With Internet access now a popular consumer application even for 'normal' residential users, some telephone exchanges are congested by customers using modem or ISDN dial-up connections to their Internet Service Providers. In order to estimate the number of additional lines and switching capacity required in an exchange or a trunk group, Internet access traffic must be characterized in terms of holding time and call interarrival time distributions. In this paper, we analyze log files tracing the usage of the central ISDN access line pool at the University of Stuttgart over a period of six months. Mathematical distributions are fitted to the measured data, and the fit quality is evaluated with respect to the blocking probability caused by the synthetic traffic in a multiple-server loss system. We show how the synthetic traffic model scales with the number of subscribers and how the model could be applied to compute economy-of-scale results for Internet access trunks or access servers.
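
    Blocking in a multiple-server loss system, as used above to judge fit quality, is classically given by the Erlang-B formula. A sketch using the standard numerically stable recurrence:

```python
def erlang_b(servers, offered_load):
    """Blocking probability of an M/G/n/n loss system (Erlang-B).

    Uses the recurrence E_k = A*E_{k-1} / (k + A*E_{k-1}), E_0 = 1,
    where A is the offered load in Erlangs and k counts servers."""
    b = 1.0
    for k in range(1, servers + 1):
        b = offered_load * b / (k + offered_load * b)
    return b
```

    For example, `erlang_b(30, 25.0)` gives the blocking probability when 25 Erlangs of dial-up traffic are offered to a pool of 30 lines; dimensioning proceeds by increasing the line count until the blocking target is met.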

  19. Fast Prediction and Evaluation of Gravitational Waveforms Using Surrogate Models

    NASA Astrophysics Data System (ADS)

    Field, Scott E.; Galley, Chad R.; Hesthaven, Jan S.; Kaye, Jason; Tiglio, Manuel

    2014-07-01

    We propose a solution to the problem of quickly and accurately predicting gravitational waveforms within any given physical model. The method is relevant for both real-time applications and more traditional scenarios where the generation of waveforms using standard methods can be prohibitively expensive. Our approach is based on three offline steps resulting in an accurate reduced order model in both parameter and physical dimensions that can be used as a surrogate for the true or fiducial waveform family. First, a set of m parameter values is determined using a greedy algorithm from which a reduced basis representation is constructed. Second, these m parameters induce the selection of m time values for interpolating a waveform time series using an empirical interpolant that is built for the fiducial waveform family. Third, a fit in the parameter dimension is performed for the waveform's value at each of these m times. The cost of predicting L waveform time samples for a generic parameter choice is of order O(mL + m c_fit) online operations, where c_fit denotes the fitting function operation count and, typically, m ≪ L. The result is a compact, computationally efficient, and accurate surrogate model that retains the original physics of the fiducial waveform family while also being fast to evaluate. We generate accurate surrogate models for effective-one-body waveforms of nonspinning binary black hole coalescences with durations as long as 10^5 M, mass ratios from 1 to 10, and for multiple spherical harmonic modes. We find that these surrogates are more than 3 orders of magnitude faster to evaluate as compared to the cost of generating effective-one-body waveforms in standard ways. Surrogate model building for other waveform families and models follows the same steps and has the same low computational online scaling cost. For expensive numerical simulations of binary black hole coalescences, we thus anticipate extremely large speedups in generating new waveforms with a
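
    The first offline step, greedy selection of a reduced basis, can be illustrated on a toy one-parameter waveform family. The sine family below merely stands in for the effective-one-body waveforms; the tolerance and sizes are illustrative.

```python
import numpy as np

def greedy_basis(training_set, tol=1e-8):
    """Greedy reduced-basis selection with Gram-Schmidt orthonormalization:
    repeatedly pick the training waveform with the largest residual norm,
    normalize it into the basis, and deflate all residuals."""
    residuals = training_set.astype(float).copy()
    basis = []
    while True:
        norms = np.linalg.norm(residuals, axis=1)
        i = int(np.argmax(norms))
        if norms[i] < tol:
            break                       # every waveform represented to tol
        e = residuals[i] / norms[i]
        basis.append(e)
        residuals -= np.outer(residuals @ e, e)  # remove new direction
    return np.array(basis)

t = np.linspace(0.0, 1.0, 256)
freqs = np.linspace(1.0, 3.0, 50)       # toy "parameter" dimension
train = np.array([np.sin(2 * np.pi * f * t) for f in freqs])
B = greedy_basis(train)
m = B.shape[0]                          # m basis elements cover all 50 waveforms
```

    The compression `m` ≪ (training set size) is what makes the online evaluation cost O(mL + m c_fit) small; the empirical interpolation and parametric fit steps then operate only on these m basis elements.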

  20. visCOS: An R-package to evaluate model performance of hydrological models

    NASA Astrophysics Data System (ADS)

    Klotz, Daniel; Herrnegger, Mathew; Wesemann, Johannes; Schulz, Karsten

    2016-04-01

    The evaluation of model performance is a central part of (hydrological) modelling. Much attention has been given to the development of evaluation criteria and diagnostic frameworks (Klemeš, 1986; Gupta et al., 2008; among many others). Nevertheless, many applications exist for which objective functions do not yet provide satisfying summaries. Thus, the necessity to visualize results arises in order to explore a wider range of model capacities, be it strengths or deficiencies. Visualizations are usually devised for specific projects, and these efforts are often not distributed to a broader community (e.g. via open source software packages). Hence, the opportunity to explicitly discuss a state-of-the-art presentation technique is often missed. We therefore present a comprehensive R-package for evaluating model performance by visualizing and exploring different aspects of hydrological time-series. The presented package comprises a set of useful plots and visualization methods, which complement existing packages, such as hydroGOF (Zambrano-Bigiarini et al., 2012). It is derived from practical applications of the hydrological models COSERO and COSEROreg (Kling et al., 2014). visCOS, providing an interface in R, represents an easy-to-use software package for visualizing and assessing model performance and can be used in the process of model calibration or model development. The package provides functions to load hydrological data into R, clean the data, and process, visualize, explore and finally save the results in a consistent way. Together with an interactive zoom function for the time series, an online calculation of the objective functions for variable time-windows is included. Common hydrological objective functions, such as the Nash-Sutcliffe Efficiency and the Kling-Gupta Efficiency, can also be evaluated and visualized in different ways for defined sub-periods like hydrological years or seasonal sections. Many hydrologists use long-term water-balances as a
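
    The two objective functions named above have standard closed forms; a minimal sketch (in Python rather than the package's R, using the 2009 form of KGE):

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe Efficiency: 1 is perfect; 0 means no better than
    predicting the observed mean."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(sim, obs):
    """Kling-Gupta Efficiency: combines correlation (r), variability
    ratio (alpha) and bias ratio (beta); 1 is perfect."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = sim.std() / obs.std()
    beta = sim.mean() / obs.mean()
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)
```

    Evaluating these per sub-period (hydrological years, seasons), as the package does, exposes compensation effects that a single whole-record score hides.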

  1. Mesoscale to microscale wind farm flow modeling and evaluation: Mesoscale to Microscale Wind Farm Models

    SciTech Connect

    Sanz Rodrigo, Javier; Chávez Arroyo, Roberto Aurelio; Moriarty, Patrick; Churchfield, Matthew; Kosović, Branko; Réthoré, Pierre-Elouan; Hansen, Kurt Schaldemose; Hahmann, Andrea; Mirocha, Jeffrey D.; Rife, Daran

    2016-08-31

    The increasing size of wind turbines, with rotors already spanning more than 150 m in diameter and hub heights above 100 m, requires proper modeling of the atmospheric boundary layer (ABL) from the surface to the free atmosphere. Furthermore, large wind farm arrays create their own boundary layer structure with unique physics. This poses significant challenges to traditional wind engineering models that rely on surface-layer theories and engineering wind farm models to simulate the flow in and around wind farms. However, adopting an ABL approach offers the opportunity to better integrate wind farm design tools and meteorological models. The challenge is how to build the bridge between the atmospheric and wind engineering model communities and how to establish a comprehensive evaluation process that identifies relevant physical phenomena for wind energy applications together with modeling and experimental requirements. A framework for model verification, validation, and uncertainty quantification is established to guide this process through a systematic evaluation of the modeling system at increasing levels of complexity. In terms of atmospheric physics, 'building the bridge' means developing models for the so-called 'terra incognita,' a term used to designate the turbulent scales that transition from mesoscale to microscale. This range of scales within atmospheric research deals with the transition from parameterized to resolved turbulence and the improvement of surface boundary-layer parameterizations. The coupling of meteorological and wind engineering flow models and the definition of a formal model evaluation methodology are a strong area of research for the next generation of wind conditions assessment and wind farm and wind turbine design tools. Some fundamental challenges are identified in order to guide future research in this area.

  2. Design, modeling, simulation and evaluation of a distributed energy system

    NASA Astrophysics Data System (ADS)

    Cultura, Ambrosio B., II

    This dissertation presents the design, modeling, simulation and evaluation of distributed energy resources (DER) consisting of photovoltaics (PV), wind turbines, batteries, a PEM fuel cell and supercapacitors. The distributed energy resources installed at UMass Lowell consist of the following: 2.5 kW PV, 44 kWhr lead-acid batteries, and 1500 W, 500 W & 300 W wind turbines, which were installed before the year 2000. Recently added to these are the following: a 10.56 kW PV array, a 2.4 kW wind turbine, 29 kWhr lead-acid batteries, a 1.2 kW PEM fuel cell and four 140 F supercapacitors. Each newly added energy resource has been designed, modeled, simulated and evaluated before its integration into the existing PV/wind grid-connected system. The mathematical and Simulink model of each system was derived and validated by comparing the simulated and experimental results. The simulated results of energy generated from the 10.56 kW PV system are in good agreement with the experimental results. A detailed electrical model of a 2.4 kW wind turbine system equipped with a permanent magnet generator, diode rectifier, boost converter and inverter is presented. The analysis of the results demonstrates the effectiveness of the constructed Simulink model, which can be used to predict the performance of the wind turbine. It was observed that a PEM fuel cell has a very fast response to load changes. Moreover, the model has validated the actual operation of the PEM fuel cell, showing that the simulated results in Matlab Simulink are consistent with the experimental results. The equivalent mathematical equation, derived from an electrical model of the supercapacitor, is used to simulate its voltage response. The model is completely capable of simulating its voltage behavior, and can predict the charge time and discharge time of voltages on the supercapacitor. The bi-directional dc-dc converter was designed in order to connect the 48 V battery bank storage to the 24 V battery bank storage. This connection was
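
    As one example of the equivalent-circuit modeling described above, the voltage of a supercapacitor discharging through a resistance follows from a single-branch RC model. This is a deliberate simplification (practical supercapacitor models often add parallel branches and leakage), and the component values are illustrative, not the dissertation's.

```python
import math

def cap_voltage(v0, r, c, t):
    """Terminal voltage of an ideal series-RC supercapacitor model
    discharging from v0 through resistance r: v(t) = v0 * exp(-t/(r*c))."""
    return v0 * math.exp(-t / (r * c))

# e.g. a 140 F capacitor at 2.7 V discharging through 0.1 ohm:
# after one time constant (r*c = 14 s) the voltage has fallen to v0/e
v_tau = cap_voltage(2.7, 0.1, 140.0, 14.0)
```

    Fitting r and c to a measured discharge curve is the usual way such an equivalent model is validated against experiment.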

  3. An 8-Stage Model for Evaluating the Tennis Serve

    PubMed Central

    Kovacs, Mark; Ellenbecker, Todd

    2011-01-01

    Background: The tennis serve is a complex stroke characterized by a series of segmental rotations involving the entire kinetic chain. Many overhead athletes use a basic 6-stage throwing model; however, the tennis serve differs in some respects. Evidence Acquisition: To support the present 8-stage descriptive model, data were gathered from the PubMed and SPORTDiscus databases using the keywords tennis and serve for publications between 1980 and 2010. Results: An 8-stage model of analysis for the tennis serve that includes 3 distinct phases—preparation, acceleration, and follow-through—provides a more tennis-specific analysis than that previously presented in the clinical tennis literature. When a serve is evaluated, the total-body perspective is just as important as the individual segments alone. Conclusion: The 8-stage model provides a more in-depth analysis that should be used with all tennis players to help better understand areas of weakness and potential injury, as well as components that can be improved for greater performance. PMID:23016050

  4. Probabilistic model for bridge structural evaluation using nondestructive inspection data

    NASA Astrophysics Data System (ADS)

    Carrion, Francisco; Lopez, Jose Alfredo; Balankin, Alexander

    2005-05-01

    A bridge management system developed for the Mexican toll highway network applies a probabilistic reliability model to estimate load capacity and structural residual life. Basic inputs for the system are the global inspection data (visual inspections and vibration testing) and information on environmental conditions (weather, traffic, loads, earthquakes), although the model can also take into account additional non-destructive testing or permanent monitoring data. The main outputs are the periodic maintenance, rehabilitation and replacement program, and the updated inspection program. Both programs are tailored to available funds and scheduled according to a priority assignment criterion. The probabilistic model, tailored to typical bridges, accounts for size, age, material and structure type. Special bridges in size or type may be included; in these cases, deterministic finite element models are also possible. A key feature is that structural qualification is given in terms of the probability of failure, calculated from fundamental degradation mechanisms and from actual direct observations and measurements, such as crack distribution and size, material properties, bridge dimensions, load deflections, and parameters for corrosion evaluation. Vibration measurements are used chiefly to infer structural resistance and to monitor long-term degradation.
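
    Expressing structural qualification as a probability of failure typically reduces, in first-order reliability form, to Pf = Φ(−β) for a reliability index β. A minimal sketch for independent normal resistance R and load S follows; this is the textbook first-order formulation, not the system's actual degradation model.

```python
from math import erf, sqrt

def reliability_index(mu_r, sigma_r, mu_s, sigma_s):
    """First-order reliability index for independent normal resistance R
    and load effect S: beta = (mu_R - mu_S) / sqrt(sigma_R^2 + sigma_S^2)."""
    return (mu_r - mu_s) / sqrt(sigma_r ** 2 + sigma_s ** 2)

def failure_probability(beta):
    """Probability of failure from the reliability index: Pf = Phi(-beta)."""
    return 0.5 * (1.0 + erf(-beta / sqrt(2.0)))
```

    Degradation mechanisms such as corrosion enter by shrinking the resistance mean (and inflating its variance) over time, so β, and hence the qualification, is recomputed as inspection data arrive.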

  5. An updated summary of MATHEW/ADPIC model evaluation studies

    SciTech Connect

    Foster, K.T.; Dickerson, M.H.

    1990-05-01

    This paper summarizes the major model evaluation studies conducted for the MATHEW/ADPIC atmospheric transport and diffusion models used by the US Department of Energy's Atmospheric Release Advisory Capability. These studies have taken place over the last 15 years and involve field tracer releases influenced by a variety of meteorological and topographical conditions. Neutrally buoyant tracers released both as surface and elevated point sources, as well as material dispersed by explosive, thermally buoyant release mechanisms, have been studied. Results from these studies show that the MATHEW/ADPIC models estimate the tracer air concentrations to within a factor of two of the measured values 20% to 50% of the time, and within a factor of five of the measurements 35% to 85% of the time, depending on the complexity of the meteorology and terrain, and the release height of the tracer. Comparisons of model estimates to peak downwind deposition and air concentration measurements from explosive releases are shown to be generally within a factor of two to three. 24 refs., 14 figs., 3 tabs.
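
    The "within a factor of two/five" statistics quoted above are the standard FAC2/FAC5 style metrics for dispersion models, computed as:

```python
def factor_of_n_fraction(pred, meas, n):
    """Fraction of prediction/measurement pairs agreeing within a factor
    of n (FAC2 for n=2, FAC5 for n=5); concentrations must be positive."""
    hits = sum(1 for p, m in zip(pred, meas) if m / n <= p <= m * n)
    return hits / len(pred)
```

    For example, `factor_of_n_fraction(model_conc, tracer_conc, 2)` on a field campaign's paired samples gives the factor-of-two score reported in the summary.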

  6. Sustainable deforestation evaluation model and system dynamics analysis.

    PubMed

    Feng, Huirong; Lim, C W; Chen, Liqun; Zhou, Xinnian; Zhou, Chengjun; Lin, Yi

    2014-01-01

    The current study used the improved fuzzy analytic hierarchy process to construct a sustainable deforestation development evaluation system and evaluation model, which refines a diversified system for evaluating the theory of sustainable deforestation development. Leveraging the visual image of the system dynamics causal and power flow diagram, we illustrated that sustainable forestry development is a complex system that encompasses the interaction and dynamic development of ecology, economy, and society, and reflects the time-dynamic effect of sustainable forestry development from the three combined effects. We compared experimental programs to prove the direct and indirect impacts of the ecological, economic, and social effects of the corresponding deforestation techniques, fully reflecting the importance of developing scientific and rational ecological harvesting and transportation technologies. Experimental and theoretical results illustrated that light cableway skidding is an ecoskidding method that is beneficial for the sustainable development of resources, the environment, the economy, and society, and forecast the broad potential applications of light cableway skidding in timber production technology. Furthermore, we discussed the sustainable development countermeasures of forest ecosystems from the aspects of causality, interaction, and harmony.
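
    For orientation, the crisp (non-fuzzy) analytic hierarchy process derives criterion weights from a pairwise-comparison matrix, for example by the geometric-mean method sketched below; the paper's improved fuzzy AHP generalizes this by replacing the crisp entries with fuzzy numbers. The matrix in the test is illustrative.

```python
import numpy as np

def ahp_weights(pairwise):
    """Criterion weights from a reciprocal pairwise-comparison matrix by
    the geometric-mean method: normalize the row geometric means."""
    a = np.asarray(pairwise, float)
    g = np.prod(a, axis=1) ** (1.0 / a.shape[1])  # row geometric means
    return g / g.sum()                            # normalize to sum to 1
```

    A consistency check (e.g. Saaty's consistency ratio) would normally accompany this step before the weights feed the evaluation model.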

  7. Sustainable Deforestation Evaluation Model and System Dynamics Analysis

    PubMed Central

    Feng, Huirong; Lim, C. W.; Chen, Liqun; Zhou, Xinnian; Zhou, Chengjun; Lin, Yi

    2014-01-01

    The current study used the improved fuzzy analytic hierarchy process to construct a sustainable deforestation development evaluation system and evaluation model, which refines a diversified system for evaluating the theory of sustainable deforestation development. Leveraging the visual image of the system dynamics causal and power flow diagram, we illustrated that sustainable forestry development is a complex system that encompasses the interaction and dynamic development of ecology, economy, and society, and reflects the time-dynamic effect of sustainable forestry development from the three combined effects. We compared experimental programs to prove the direct and indirect impacts of the ecological, economic, and social effects of the corresponding deforestation techniques, fully reflecting the importance of developing scientific and rational ecological harvesting and transportation technologies. Experimental and theoretical results illustrated that light cableway skidding is an ecoskidding method that is beneficial for the sustainable development of resources, the environment, the economy, and society, and forecast the broad potential applications of light cableway skidding in timber production technology. Furthermore, we discussed the sustainable development countermeasures of forest ecosystems from the aspects of causality, interaction, and harmony. PMID:25254225

  8. Evaluation of data driven models for river suspended sediment concentration modeling

    NASA Astrophysics Data System (ADS)

    Zounemat-Kermani, Mohammad; Kişi, Özgür; Adamowski, Jan; Ramezani-Charmahineh, Abdollah

    2016-04-01

    Using eight-year data series from hydrometric stations located in Arkansas, Delaware and Idaho (USA), the ability of artificial neural network (ANN) and support vector regression (SVR) models to forecast/estimate daily suspended sediment concentrations ([SS]d) was evaluated and compared to that of traditional multiple linear regression (MLR) and sediment rating curve (SRC) models. Three different ANN model algorithms were tested [gradient descent, conjugate gradient and Broyden-Fletcher-Goldfarb-Shanno (BFGS)], along with four different SVR model kernels [linear, polynomial, sigmoid and Radial Basis Function (RBF)]. The reliability of the applied models was evaluated based on the statistical performance criteria of root mean square error (RMSE), Pearson's correlation coefficient (PCC) and Nash-Sutcliffe model efficiency coefficient (NSE). Based on RMSE values, and averaged across the three hydrometric stations, the ANN and SVR models showed, respectively, 23% and 18% improvements in forecasting and 18% and 15% improvements in estimation over traditional models. The use of the BFGS training algorithm for ANN, and the RBF kernel function for SVR models are recommended as useful options for simulating hydrological phenomena.
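
    The sediment rating curve used as a traditional baseline above is the power law C = a·Q^b, usually fitted by linear regression on logarithms. A minimal sketch on noise-free synthetic data (the coefficients are illustrative):

```python
import numpy as np

def fit_rating_curve(q, c):
    """Fit the sediment rating curve C = a * Q**b by least squares
    on log-transformed discharge q and concentration c."""
    b, log_a = np.polyfit(np.log(q), np.log(c), 1)  # slope, intercept
    return np.exp(log_a), b

q = np.array([1.0, 2.0, 4.0, 8.0])
c = 3.0 * q ** 1.5
a, b = fit_rating_curve(q, c)   # recovers a = 3, b = 1.5 on exact data
```

    The data-driven ANN and SVR models compared in the study relax exactly this fixed power-law form, which is where their reported RMSE improvements come from.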

  9. Performance evaluation and modeling techniques for parallel processors

    SciTech Connect

    Dimpsey, R.T.

    1992-01-01

    This thesis addresses the issue of application performance under real operational conditions. A technique is introduced which accurately models the behavior of an application in real workloads. The methodology can evaluate the performance of the application as well as predict the effects on performance of certain system design changes. The constructed model is based on measurements obtained during normal machine operation and captures various performance issues including multiprogramming and system overheads, and contention for resources. Methodologies to measure multiprogramming overhead (MPO) are introduced and illustrated on an Alliant FX/8, an Alliant FX/80, and the Cedar parallel supercomputer. The measurements collected suggest that multiprogramming and system overheads can significantly impact application performance. The mean MPO incurred by Perfect Benchmarks executing in real workloads on an Alliant FX/80 is found to consume 16% of the processing power. For applications executing on Cedar, between 10% and 60% of the application completion time is attributable to overhead caused by multiprogramming. Measurements also identify a Cedar FORTRAN construct (SDOALL) which is susceptible to performance degradation due to multiprogramming. Using the MPO measurements, the application performance model discussed above is constructed for computationally bound, parallel jobs executing on an Alliant FX/80. It is shown that the model can predict application completion time under real workloads. This is illustrated with several examples from the Perfect Benchmark suite. It is also shown that the model can predict the performance impact of system design changes. For example, the completion times of applications under a new scheduling policy are predicted. The model-building methodology is then validated with a number of empirical experiments.

  10. Evaluation of atmospheric chemical models using aircraft data (Invited)

    NASA Astrophysics Data System (ADS)

    Freeman, S.; Grossberg, N.; Pierce, R.; Lee, P.; Ngan, F.; Yates, E. L.; Iraci, L. T.; Lefer, B. L.

    2013-12-01

    Air quality prediction is an important and growing field, as the adverse health effects of ozone (O3) are becoming more important to the general public. Two atmospheric chemical models, the Realtime Air Quality Modeling System (RAQMS) and the Community Multiscale Air Quality modeling system (CMAQ), are evaluated during NASA's Student Airborne Research Project (SARP) and the NASA Alpha Jet Atmospheric eXperiment (AJAX) flights. CO, O3, and NOx data simulated by the models are interpolated using inverse distance weighting in space and linear interpolation in time to both the SARP and AJAX flight tracks and compared to the CO, O3, and NOx observations at those points. Results for the seven flights included show moderate error in O3 during the flights, with RAQMS having a high O3 bias (+15.7 ppbv average) above 6 km and a low O3 bias (-17.5 ppbv average) below 4 km. CMAQ was found to have a low O3 bias (-13.0 ppbv average) everywhere. Additionally, little bias (-5.36% RAQMS, -11.8% CMAQ) in the CO data was observed, with the exception of a wildfire smoke plume that was flown through on one SARP flight, as CMAQ lacks any wildfire sources and the RAQMS resolution is too coarse to resolve narrow plumes. This indicates improvement in emissions inventories compared to previous studies. CMAQ additionally predicted a spurious NOx plume by incorrectly advecting it vertically from the surface, which caused NOx titration to occur, limiting the production of ozone. This study shows that these models perform reasonably well in most conditions; however, more work must be done to assimilate wildfires, improve emissions inventories, and improve meteorological forecasts for the models.
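
    The model-to-flight-track matching described above (inverse distance weighting in space, linear interpolation in time) can be sketched as follows; the function names and the grid/track layout are ours for illustration.

```python
import numpy as np

def idw(points, values, target, power=2.0, eps=1e-12):
    """Inverse-distance-weighted interpolation of model grid values
    (one value per grid point) to a single flight-track location."""
    d = np.linalg.norm(points - target, axis=1)
    if d.min() < eps:                       # exactly on a grid node
        return float(values[int(np.argmin(d))])
    w = 1.0 / d ** power
    return float(np.sum(w * values) / np.sum(w))

def lerp_time(v0, t0, v1, t1, t):
    """Linear interpolation between two model output times t0 <= t <= t1."""
    return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
```

    Applying `idw` at the two model output times bracketing an observation, then `lerp_time` between the results, yields the model value paired with each aircraft sample.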

  11. Evaluating wind extremes in CMIP5 climate models

    NASA Astrophysics Data System (ADS)

    Kumar, Devashish; Mishra, Vimal; Ganguly, Auroop R.

    2014-09-01

    Wind extremes have consequences for renewable energy sectors, critical infrastructures, coastal ecosystems, and the insurance industry. Considerable debate remains regarding the impacts of climate change on wind extremes. While climate models have occasionally shown increases in regional wind extremes, a decline in the magnitude of mean and extreme near-surface wind speeds has recently been reported over most regions of the Northern Hemisphere using observed data. Previous studies of wind extremes under climate change have focused on selected regions and employed outputs from regional climate models (RCMs). However, RCMs ultimately rely on the outputs of global circulation models (GCMs), and the value-addition of the former over the latter has been questioned. Regional model runs rarely employ the full suite of GCM ensembles, and hence may not be able to encapsulate the most likely projections or their variability. Here we evaluate the performance of the latest generation of GCMs, the Coupled Model Intercomparison Project phase 5 (CMIP5), in simulating extreme winds. We find that the multimodel ensemble (MME) mean captures the spatial variability of annual maximum wind speeds over most regions except mountainous terrain. However, the historical temporal trends in annual maximum wind speeds in the reanalysis data, ERA-Interim, are not well represented in the GCMs. The historical trends in extreme winds from GCMs are statistically not significant over most regions. The MME simulates the spatial patterns of extreme winds for 25-100-year return periods. The projected extreme winds from GCMs exhibit statistically less significant trends compared to the historical reference period.
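
    Return levels such as the 25-100-year extreme winds above are commonly obtained by fitting an extreme-value distribution to annual maxima. A simple Gumbel method-of-moments sketch follows; the study's actual estimator may differ, and the inputs are illustrative.

```python
import math

def gumbel_return_level(mean_amax, std_amax, T):
    """T-year return level from the mean and standard deviation of annual
    maximum wind speeds, using a Gumbel fit by the method of moments:
    scale = std*sqrt(6)/pi, loc = mean - gamma*scale (gamma: Euler constant),
    x_T = loc - scale * ln(-ln(1 - 1/T))."""
    scale = std_amax * math.sqrt(6.0) / math.pi
    loc = mean_amax - 0.5772156649 * scale
    return loc - scale * math.log(-math.log(1.0 - 1.0 / T))
```

    For example, `gumbel_return_level(20.0, 5.0, 100)` gives the 100-year wind speed implied by annual maxima with mean 20 m/s and standard deviation 5 m/s; comparing such return levels between GCMs and reanalysis is the kind of spatial-pattern evaluation the study performs.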

  12. Evaluating wind extremes in CMIP5 climate models

    NASA Astrophysics Data System (ADS)

    Kumar, Devashish; Mishra, Vimal; Ganguly, Auroop R.

    2015-07-01

    Wind extremes have consequences for renewable energy sectors, critical infrastructures, coastal ecosystems, and the insurance industry. Considerable debate remains regarding the impacts of climate change on wind extremes. While climate models have occasionally shown increases in regional wind extremes, a decline in the magnitude of mean and extreme near-surface wind speeds has recently been reported over most regions of the Northern Hemisphere using observed data. Previous studies of wind extremes under climate change have focused on selected regions and employed outputs from regional climate models (RCMs). However, RCMs ultimately rely on the outputs of global circulation models (GCMs), and the value-addition of the former over the latter has been questioned. Regional model runs rarely employ the full suite of GCM ensembles, and hence may not be able to encapsulate the most likely projections or their variability. Here we evaluate the performance of the latest generation of GCMs, the Coupled Model Intercomparison Project phase 5 (CMIP5), in simulating extreme winds. We find that the multimodel ensemble (MME) mean captures the spatial variability of annual maximum wind speeds over most regions except mountainous terrain. However, the historical temporal trends in annual maximum wind speeds in the reanalysis data, ERA-Interim, are not well represented in the GCMs. The historical trends in extreme winds from GCMs are statistically not significant over most regions. The MME simulates the spatial patterns of extreme winds for 25-100-year return periods. The projected extreme winds from GCMs exhibit statistically less significant trends compared to the historical reference period.

  13. Evaluation Digital Elevation Model Generated by Synthetic Aperture Radar Data

    NASA Astrophysics Data System (ADS)

    Makineci, H. B.; Karabörk, H.

    2016-06-01

    A digital elevation model (DEM), which represents the physical topography of the earth's surface, is a three-dimensional digital model obtained from surface elevations using an appropriately selected interpolation method. DEMs are used in many areas, such as management of natural resources, engineering and infrastructure projects, disaster and risk analysis, archaeology, security, aviation, forestry, energy, topographic mapping, landslide and flood analysis, and Geographic Information Systems (GIS). Digital elevation models, which are fundamental components of cartography, can be produced by many methods: traditionally by terrestrial survey or by digitizing existing maps, and today, with improving technology, by remote sensing and photogrammetric processing of stereo optical satellite images, radar images (radargrammetry, interferometry), and lidar data. Radar is now one of the most advanced remote sensing technologies and, in response to this progress, is used increasingly in various fields; determining the shape of the topography and creating digital elevation models are among the foremost of these applications. This work aims to evaluate the quality of a digital elevation model generated from a Sentinel-1A SAR image (C band, Interferometric Wide Swath imaging mode), provided by the European Space Agency (ESA), against DTED-2 (Digital Terrain Elevation Data), and to compare the two. The comparison uses an RMS (root mean square) statistical method to assess the precision of the data. The results show that the variance of the elevation points decreases markedly from mountainous areas to plains.
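
    The RMS precision check described above amounts to a root-mean-square error between the SAR-derived DEM and the DTED-2 reference elevations; a minimal sketch (NaN used here as an assumed no-data marker):

```python
import numpy as np

def dem_rmse(dem_test, dem_ref):
    """Root-mean-square error of a test DEM against reference elevations,
    ignoring no-data cells marked as NaN."""
    diff = np.asarray(dem_test, float) - np.asarray(dem_ref, float)
    return float(np.sqrt(np.nanmean(diff ** 2)))
```

    Computing this separately over mountainous and flat subsets of the scene is what exposes the terrain-dependent accuracy the results describe.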

  14. Evaluation of field development plans using 3-D reservoir modelling

    SciTech Connect

    Seifert, D.; Lewis, J.J.M.; Newbery, J.D.H.

    1997-08-01

    Three-dimensional reservoir modelling has become an accepted tool in reservoir description and is used for various purposes, such as reservoir performance prediction or the integration and visualisation of data. In this case study, a small Northern North Sea turbiditic reservoir was to be developed with a line drive strategy utilising a series of horizontal producer and injector pairs, oriented north-south. This development plan was to be evaluated, and the expected outcome of the wells was to be assessed and risked. Detailed analyses of core, well log and analogue data have led to the development of two geological "end member" scenarios. Both scenarios have been stochastically modelled using the Sequential Indicator Simulation method. The resulting equiprobable realisations have been subjected to detailed statistical well placement optimisation techniques. Based upon bivariate statistical evaluation of more than 1000 numerical well trajectories for each of the two scenarios, it was found that the wells' inclinations and lengths had a great impact on their success, whereas the azimuth was found to have only a minor impact. After integration of the above results, the actual well paths were redesigned to meet external drilling constraints, resulting in substantial reductions in drilling time and costs.

  15. Osteoporotic rat models for evaluation of osseointegration of bone implants.

    PubMed

    Alghamdi, Hamdan S; van den Beucken, Jeroen J J P; Jansen, John A

    2014-06-01

    Osseointegration of dental and orthopedic bone implants is the essential process that leads to mechanical fixation of implants and warrants implant functionality. In view of increasing numbers of osteoporotic patients, bone implant surface optimization strategies with instructive and drug-loading ability have been heavily explored. However, few animal models are available to study the effect of novel implant surface modifications under osteoporotic conditions. Since laboratory rats offer a number of practical advantages, including the reliability of several methods for rapid induction of osteoporotic conditions, the present work aimed to establish the femoral condyle in osteoporotic female and male rats as a suitable implantation model for studying osseointegration of bone implants. The method describes the procedures for induction (by hypogonadism) and assessment (by in vivo micro-computed tomography [micro-CT]) of osteoporotic conditions in both female and male rats. The implantation site architecture (femoral condyle bone properties and dimensions) was comparatively evaluated for female and male rats, and the implant installation procedures are described. Finally, the analytical techniques available to evaluate bone responses via mechanical tests, ex vivo micro-CT, and histological methods are provided.

  16. Development of Conceptual Benchmark Models to Evaluate Complex Hydrologic Model Calibration in Managed Basins Using Python

    NASA Astrophysics Data System (ADS)

    Hughes, J. D.; White, J.

    2013-12-01

    For many numerical hydrologic models it is a challenge to quantitatively demonstrate that complex models are preferable to simpler models. Typically, a decision is made to develop and calibrate a complex model at the beginning of a study. The value of selecting a complex model over simpler models is commonly inferred from use of a model with fewer simplifications of the governing equations because it can be time consuming to develop another numerical code with data processing and parameter estimation functionality. High-level programming languages like Python can greatly reduce the effort required to develop and calibrate simple models that can be used to quantitatively demonstrate the increased value of a complex model. We have developed and calibrated a spatially-distributed surface-water/groundwater flow model for managed basins in southeast Florida, USA, to (1) evaluate the effect of municipal groundwater pumpage on surface-water/groundwater exchange, (2) investigate how the study area will respond to sea-level rise, and (3) explore combinations of these forcing functions. To demonstrate the increased value of this complex model, we developed a two-parameter conceptual-benchmark-discharge model for each basin in the study area. The conceptual-benchmark-discharge model includes seasonal scaling and lag parameters and is driven by basin rainfall. The conceptual-benchmark-discharge models were developed in the Python programming language and used weekly rainfall data. Calibration was implemented with the Broyden-Fletcher-Goldfarb-Shanno method available in the Scientific Python (SciPy) library. Normalized benchmark efficiencies calculated using output from the complex model and the corresponding conceptual-benchmark-discharge model indicate that the complex model has more explanatory power than the simple model driven only by rainfall.
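    The two-parameter benchmark described above can be sketched as follows. The seasonal scaling and lag parameters are stood in for by a simple gain and a linear-store recession, and the rainfall and discharge series are synthetic; the SciPy BFGS call matches the calibration method named in the abstract:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
weeks = 260
rain = rng.gamma(2.0, 10.0, size=weeks)  # hypothetical weekly rainfall (mm)

def simulate(params, rain):
    """Two-parameter benchmark: gain `a` scales rainfall, and a recession
    coefficient (sigmoid of `k_raw`, kept in (0, 1)) lags it through a
    single linear store."""
    a, k_raw = params
    k = 1.0 / (1.0 + np.exp(-k_raw))
    q = np.empty_like(rain)
    state = 0.0
    for t, r in enumerate(rain):
        state = k * state + (1.0 - k) * a * r
        q[t] = state
    return q

# Synthetic "observed" discharge from known parameters plus noise.
obs = simulate((0.6, 1.2), rain) + rng.normal(0.0, 0.5, size=weeks)

def sse(params):
    """Sum-of-squares objective for calibration."""
    return np.sum((simulate(params, rain) - obs) ** 2)

result = minimize(sse, x0=np.array([1.0, 0.0]), method="BFGS")
print("calibrated parameters:", result.x)
```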

  17. Addressing Conceptual Model Uncertainty in the Evaluation of Model Prediction Errors

    NASA Astrophysics Data System (ADS)

    Carrera, J.; Pool, M.

    2014-12-01

    Model predictions are uncertain because of errors in model parameters, future forcing terms, and model concepts. The latter remain the largest and most difficult to assess source of uncertainty in long term model predictions. We first review existing methods to evaluate conceptual model uncertainty. We argue that they are highly sensitive to the ingenuity of the modeler, in the sense that they rely on the modeler's ability to propose alternative model concepts. Worse, we find that the standard practice of stochastic methods leads to poor, potentially biased and often too optimistic, estimation of actual model errors. This is bad news because stochastic methods are purported to properly represent uncertainty. We contend that the problem does not lie in the stochastic approach itself, but in the way it is applied. Specifically, stochastic inversion methodologies, which demand quantitative information, tend to ignore geological understanding, which is conceptually rich. We illustrate some of these problems with an application to the Mar del Plata aquifer, where extensive data are available for nearly a century. Geologically based models, where spatial variability is handled through zonation, yield calibration fits similar to geostatistically based models, but much better predictions. In fact, the appearance of the stochastic T fields is similar to the geologically based models only in areas with a high density of data. We take this finding to illustrate the ability of stochastic models to accommodate many data, but also, ironically, their inability to address conceptual model uncertainty. In fact, stochastic model realizations tend to be too close to the "most likely" one (i.e., they do not really realize the full conceptual uncertainty). The second part of the presentation is devoted to arguing that acknowledging model uncertainty may lead to qualitatively different decisions than just working with "most likely" model predictions. Therefore, efforts should concentrate on

  18. Foreign Exchange Value-at-Risk with Multiple Currency Exposure: A Multivariate and Copula Generalized Autoregressive Conditional Heteroskedasticity Approach

    DTIC Science & Technology

    2014-11-01

    …to a financial risk linked to exchange-rate fluctuations, and those responsible for internal management are therefore under pressure to find…

  19. An Evaluation of Evaluative Personality Terms: A Comparison of the Big Seven and Five-Factor Model in Predicting Psychopathology

    ERIC Educational Resources Information Center

    Durrett, Christine; Trull, Timothy J.

    2005-01-01

    Two personality models are compared regarding their relationship with personality disorder (PD) symptom counts and with lifetime Axis I diagnoses. These models share 5 similar domains, and the Big 7 model also includes 2 domains assessing self-evaluation: positive and negative valence. The Big 7 model accounted for more variance in PDs than the…

  20. Evaluation Between Existing and Improved CCF Modeling Using the NRC SPAR Models

    SciTech Connect

    James K. Knudsen

    2010-06-01

    Abstract: The NRC SPAR models currently employ the alpha factor common cause failure (CCF) methodology and model CCF for a group of redundant components as a single “rolled-up” basic event. These SPAR models will be updated to employ a more computationally intensive and accurate approach by expanding the CCF basic events for all active components to include all terms that appear in the Basic Parameter Model (BPM). A discussion is provided to detail the differences between the rolled-up common cause group (CCG) and expanded BPM adjustment concepts based on differences in core damage frequency and individual component importance measures. Lastly, a hypothetical condition is evaluated with a SPAR model to show the difference in results between the current adjustment method (rolled-up CCF events) and the newer method employing all of the expanded terms in the BPM. The event evaluation on the SPAR model employing the expanded terms will be solved using the graphical evaluation module (GEM) and the proposed method discussed in Reference 1.
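    The expansion of a rolled-up CCF event into per-combination basic events can be illustrated with the alpha factor method. The sketch below uses one standard form of the alpha-factor conversion (the non-staggered-testing formula) with hypothetical alpha factors; the actual SPAR model parameter values are not reproduced here:

```python
from math import comb

def alpha_factor_events(alphas, q_total):
    """Expand a CCF group into basic-event probabilities Q_k using a
    standard alpha-factor formula (non-staggered testing assumed);
    alphas[k-1] is the fraction of failure events involving exactly
    k of the m redundant components, q_total the total failure
    probability of one component."""
    m = len(alphas)
    alpha_t = sum(k * a for k, a in enumerate(alphas, start=1))
    return {
        k: (k / comb(m - 1, k - 1)) * (alphas[k - 1] / alpha_t) * q_total
        for k in range(1, m + 1)
    }

# Hypothetical 3-component group: alpha factors and total failure probability.
q = alpha_factor_events([0.95, 0.03, 0.02], q_total=1.0e-3)
for k, qk in q.items():
    print(f"Q_{k} = {qk:.3e}")
```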

  1. Use of Numerical Groundwater Modeling to Evaluate Uncertainty in Conceptual Models of Recharge and Hydrostratigraphy

    SciTech Connect

    Pohlmann, Karl; Ye, Ming; Pohll, Greg; Chapman, Jenny

    2007-01-19

    Numerical groundwater models are based on conceptualizations of hydrogeologic systems that are by necessity developed from limited information and therefore are simplifications of real conditions. Each aspect (e.g. recharge, hydrostratigraphy, boundary conditions) of the groundwater model is often based on a single conceptual model that is considered to be the best representation given the available data. However, the very nature of their construction means that each conceptual model is inherently uncertain and the available information may be insufficient to refute plausible alternatives, thereby raising the possibility that the flow model is underestimating overall uncertainty. In this study we use the Death Valley Regional Flow System model developed by the U.S. Geological Survey as a framework to predict regional groundwater flow southward into Yucca Flat on the Nevada Test Site. An important aspect of our work is to evaluate the uncertainty associated with multiple conceptual models of groundwater recharge and subsurface hydrostratigraphy and quantify the impacts of this uncertainty on model predictions. In our study, conceptual model uncertainty arises from two sources: (1) alternative interpretations of the hydrostratigraphy in the northern portion of Yucca Flat where, owing to sparse data, the hydrogeologic system can be conceptualized in different ways, and (2) uncertainty in groundwater recharge in the region as evidenced by the existence of several independent approaches for estimating this aspect of the hydrologic system. The composite prediction of groundwater flow is derived from the regional model that formally incorporates the uncertainty in these alternative input models using the maximum likelihood Bayesian model averaging method. An assessment of the joint predictive uncertainty of the input conceptual models is also produced. During this process, predictions of the alternative models are weighted by model probability, which is the degree of
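    Maximum likelihood Bayesian model averaging combines the alternative conceptual models' predictions using weights derived from a model-selection criterion. A minimal sketch with hypothetical criterion values and predictions (not the Death Valley model's actual numbers):

```python
import numpy as np

# Hypothetical alternative conceptual models: each has a model-selection
# criterion value (e.g. KIC, smaller is better), a prior probability, and
# a flow prediction with its own (within-model) standard deviation.
kic = np.array([102.3, 104.1, 108.7])   # assumed criterion values
prior = np.array([1 / 3, 1 / 3, 1 / 3])
pred = np.array([12.0, 15.0, 9.5])      # e.g. flux in kg/day
sd = np.array([2.0, 2.5, 3.0])

# Posterior model weights: prior * exp(-dKIC/2), normalized (MLBMA-style).
delta = kic - kic.min()
w = prior * np.exp(-0.5 * delta)
w /= w.sum()

mean = np.sum(w * pred)
# Total predictive variance = within-model + between-model components.
var = np.sum(w * (sd**2 + (pred - mean) ** 2))

print(f"weights: {np.round(w, 3)}, mean: {mean:.2f}, sd: {np.sqrt(var):.2f}")
```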

  2. Model evaluation of marine primary organic aerosol emission schemes

    NASA Astrophysics Data System (ADS)

    Gantt, B.; Johnson, M. S.; Meskhidze, N.; Sciare, J.; Ovadnevaite, J.; Ceburnis, D.; O'Dowd, C. D.

    2012-09-01

    In this study, several marine primary organic aerosol (POA) emission schemes have been evaluated using the GEOS-Chem chemical transport model in order to provide guidance for their implementation in air quality and climate models. These emission schemes, based on varying dependencies of chlorophyll a concentration ([chl a]) and 10 m wind speed (U10), have large differences in their magnitude, spatial distribution, and seasonality. Model comparison with weekly and monthly mean values of the organic aerosol mass concentration at two coastal sites shows that the source function exclusively related to [chl a] does a better job replicating surface observations. Sensitivity simulations in which the negative U10 and positive [chl a] dependence of the organic mass fraction of sea spray aerosol are enhanced show improved prediction of the seasonality of the marine POA concentrations. A top-down estimate of submicron marine POA emissions based on the parameterization that compares best to the observed weekly and monthly mean values of marine organic aerosol surface concentrations has a global average emission rate of 6.3 Tg yr-1. Evaluation of existing marine POA source functions against a case study during which marine POA contributed the major fraction of submicron aerosol mass shows that none of the existing parameterizations are able to reproduce the hourly-averaged observations. Our calculations suggest that in order to capture episodic events and short-term variability in submicron marine POA concentration over the ocean, new source functions need to be developed that are grounded in the physical processes unique to the organic fraction of sea spray aerosol.
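    The emission schemes compared above differ mainly in how the organic mass fraction of sea spray depends on [chl a] and U10. A hypothetical source function with illustrative coefficients (not any of the published parameterizations) shows the structure:

```python
import numpy as np

def organic_mass_fraction(chl, u10):
    """Hypothetical source function illustrating the dependencies
    discussed: organic mass fraction of sea spray rises with
    chlorophyll a (mg m^-3) and falls with 10 m wind speed (m/s).
    Coefficients are illustrative only."""
    return 1.0 / (1.0 + np.exp(-2.0 * (chl - 0.4))) / (1.0 + 0.05 * u10)

def marine_poa_flux(sea_spray_mass_flux, chl, u10):
    """Marine POA emission = sea spray mass flux x organic mass fraction."""
    return sea_spray_mass_flux * organic_mass_fraction(chl, u10)

# Bloom vs oligotrophic conditions at the same wind speed.
for chl in (0.1, 1.5):
    omf = organic_mass_fraction(chl, u10=8.0)
    print(f"[chl a] = {chl} mg m^-3 -> OMF = {omf:.2f}")
```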

  3. From site measurements to spatial modelling - multi-criteria model evaluation

    NASA Astrophysics Data System (ADS)

    Gottschalk, Pia; Roers, Michael; Wechsung, Frank

    2015-04-01

    Hydrological models are traditionally evaluated at gauge stations for river runoff, which is assumed to be the valid and global test for model performance. One model output is assumed to reflect the performance of all implemented processes and parameters. This neglects the complex interactions of landscape processes which are actually simulated by the model but not tested. The application of a spatial hydrological model, however, offers a vast potential of evaluation aspects, which shall be presented here using the example of the eco-hydrological model SWIM. We present current activities to evaluate SWIM at the lysimeter site Brandis, the eddy-covariance site Gebesee, and with spatial crop yields of Germany, to constrain model performance in addition to river runoff. The lysimeter site is used to evaluate actual evapotranspiration, total runoff below the soil profile, and crop yields. The eddy-covariance site Gebesee offers data to study crop growth via net-ecosystem carbon exchange and actual evapotranspiration. The performance of the vegetation module is tested via spatial crop yields at the county level of Germany. Crop yields are an indirect measure of crop growth, which is an important driver of the landscape water balance and therefore eventually determines river runoff as well. First results at the lysimeter site show that simulated soil water dynamics are less sensitive to soil type than measured soil water dynamics. First results from the simulation of actual evapotranspiration and carbon exchange at Gebesee show a satisfactory model performance, with, however, difficulties in capturing initial vegetation growth in spring. The latter hints at problems capturing winter growth conditions and their subsequent impacts on crop growth. This is also reflected in the performance of simulated crop yields for Germany, where the model reflects crop yields of silage maize much better than those of winter wheat. With the given approach we would like to highlight the advantages and

  4. Evaluation of Turbulence-Model Performance in Jet Flows

    NASA Technical Reports Server (NTRS)

    Woodruff, S. L.; Seiner, J. M.; Hussaini, M. Y.; Erlebacher, G.

    2001-01-01

    The importance of reducing jet noise in both commercial and military aircraft applications has made jet acoustics a significant area of research. A technique for jet noise prediction commonly employed in practice is the MGB approach, based on the Lighthill acoustic analogy. This technique requires as aerodynamic input mean flow quantities and turbulence quantities like the kinetic energy and the dissipation. The purpose of the present paper is to assess existing capabilities for predicting these aerodynamic inputs. Two modern Navier-Stokes flow solvers, coupled with several modern turbulence models, are evaluated by comparison with experiment for their ability to predict mean flow properties in a supersonic jet plume. Potential weaknesses are identified for further investigation. Another comparison with similar intent is discussed by Barber et al. The ultimate goal of this research is to develop a reliable flow solver applicable to the low-noise, propulsion-efficient, nozzle exhaust systems being developed in NASA focused programs. These programs address a broad range of complex nozzle geometries operating in high temperature, compressible, flows. Seiner et al. previously discussed the jet configuration examined here. This convergent-divergent nozzle with an exit diameter of 3.6 inches was designed for an exhaust Mach number of 2.0 and a total temperature of 1680 F. The acoustic and aerodynamic data reported by Seiner et al. covered a range of jet total temperatures from 104 F to 2200 F at the fully-expanded nozzle pressure ratio. The aerodynamic data included centerline mean velocity and total temperature profiles. Computations were performed independently with two computational fluid dynamics (CFD) codes, ISAAC and PAB3D. Turbulence models employed include the k-epsilon model, the Gatski-Speziale algebraic-stress model and the Girimaji model, with and without the Sarkar compressibility correction. Centerline values of mean velocity and mean temperature are

  5. Development and Evaluation of Land-Use Regression Models Using Modeled Air Quality Concentrations

    EPA Science Inventory

    Abstract Land-use regression (LUR) models have emerged as a preferred methodology for estimating individual exposure to ambient air pollution in epidemiologic studies in the absence of subject-specific measurements. Although there is a growing literature focused on LUR evaluation, fu...

  6. Evaluating biomarkers to model cancer risk post cosmic ray exposure.

    PubMed

    Sridharan, Deepa M; Asaithamby, Aroumougame; Blattnig, Steve R; Costes, Sylvain V; Doetsch, Paul W; Dynan, William S; Hahnfeldt, Philip; Hlatky, Lynn; Kidane, Yared; Kronenberg, Amy; Naidu, Mamta D; Peterson, Leif E; Plante, Ianik; Ponomarev, Artem L; Saha, Janapriya; Snijders, Antoine M; Srinivasan, Kalayarasan; Tang, Jonathan; Werner, Erica; Pluth, Janice M

    2016-06-01

    Robust predictive models are essential to manage the risk of radiation-induced carcinogenesis. Chronic exposure to cosmic rays in the context of the complex deep space environment may place astronauts at high cancer risk. To estimate this risk, it is critical to understand how radiation-induced cellular stress impacts cell fate decisions and how this in turn alters the risk of carcinogenesis. Exposure to the heavy ion component of cosmic rays triggers a multitude of cellular changes, depending on the rate of exposure, the type of damage incurred and individual susceptibility. Heterogeneity in dose, dose rate, radiation quality, energy and particle flux contribute to the complexity of risk assessment. To unravel the impact of each of these factors, it is critical to identify sensitive biomarkers that can serve as inputs for robust modeling of individual risk of cancer or other long-term health consequences of exposure. Limitations in sensitivity of biomarkers to dose and dose rate, and the complexity of longitudinal monitoring, are some of the factors that increase uncertainties in the output from risk prediction models. Here, we critically evaluate candidate early and late biomarkers of radiation exposure and discuss their usefulness in predicting cell fate decisions. Some of the biomarkers we have reviewed include complex clustered DNA damage, persistent DNA repair foci, reactive oxygen species, chromosome aberrations and inflammation. Other biomarkers discussed, often assayed at later time points post exposure, include mutations, chromosome aberrations, reactive oxygen species and telomere length changes. We discuss the relationship of biomarkers to different potential cell fates, including proliferation, apoptosis, senescence, and loss of stemness, which can propagate genomic instability and alter tissue composition and the underlying mRNA signatures that contribute to cell fate decisions. Our goal is to highlight factors that are important in choosing

  7. Evaluating biomarkers to model cancer risk post cosmic ray exposure

    NASA Astrophysics Data System (ADS)

    Sridharan, Deepa M.; Asaithamby, Aroumougame; Blattnig, Steve R.; Costes, Sylvain V.; Doetsch, Paul W.; Dynan, William S.; Hahnfeldt, Philip; Hlatky, Lynn; Kidane, Yared; Kronenberg, Amy; Naidu, Mamta D.; Peterson, Leif E.; Plante, Ianik; Ponomarev, Artem L.; Saha, Janapriya; Snijders, Antoine M.; Srinivasan, Kalayarasan; Tang, Jonathan; Werner, Erica; Pluth, Janice M.

    2016-06-01

    Robust predictive models are essential to manage the risk of radiation-induced carcinogenesis. Chronic exposure to cosmic rays in the context of the complex deep space environment may place astronauts at high cancer risk. To estimate this risk, it is critical to understand how radiation-induced cellular stress impacts cell fate decisions and how this in turn alters the risk of carcinogenesis. Exposure to the heavy ion component of cosmic rays triggers a multitude of cellular changes, depending on the rate of exposure, the type of damage incurred and individual susceptibility. Heterogeneity in dose, dose rate, radiation quality, energy and particle flux contribute to the complexity of risk assessment. To unravel the impact of each of these factors, it is critical to identify sensitive biomarkers that can serve as inputs for robust modeling of individual risk of cancer or other long-term health consequences of exposure. Limitations in sensitivity of biomarkers to dose and dose rate, and the complexity of longitudinal monitoring, are some of the factors that increase uncertainties in the output from risk prediction models. Here, we critically evaluate candidate early and late biomarkers of radiation exposure and discuss their usefulness in predicting cell fate decisions. Some of the biomarkers we have reviewed include complex clustered DNA damage, persistent DNA repair foci, reactive oxygen species, chromosome aberrations and inflammation. Other biomarkers discussed, often assayed at later time points post exposure, include mutations, chromosome aberrations, reactive oxygen species and telomere length changes. We discuss the relationship of biomarkers to different potential cell fates, including proliferation, apoptosis, senescence, and loss of stemness, which can propagate genomic instability and alter tissue composition and the underlying mRNA signatures that contribute to cell fate decisions. Our goal is to highlight factors that are important in choosing

  8. New Methods for Air Quality Model Evaluation with Satellite Data

    NASA Astrophysics Data System (ADS)

    Holloway, T.; Harkey, M.

    2015-12-01

    Despite major advances in the ability of satellites to detect gases and aerosols in the atmosphere, there remains significant, untapped potential to apply space-based data to air quality regulatory applications. Here, we showcase research findings geared toward increasing the relevance of satellite data to support operational air quality management, focused on model evaluation. Particular emphasis is given to nitrogen dioxide (NO2) and formaldehyde (HCHO) from the Ozone Monitoring Instrument aboard the NASA Aura satellite, and evaluation of simulations from the EPA Community Multiscale Air Quality (CMAQ) model. This work is part of the NASA Air Quality Applied Sciences Team (AQAST), and is motivated by ongoing dialog with state and federal air quality management agencies. We present the response of satellite-derived NO2 to meteorological conditions, satellite-derived HCHO:NO2 ratios as an indicator of ozone production regime, and the ability of models to capture these sensitivities over the continental U.S. In the case of NO2-weather sensitivities, we find boundary layer height, wind speed, temperature, and relative humidity to be the most important variables in determining near-surface NO2 variability. CMAQ agreed with relationships observed in satellite data, as well as in ground-based data, over most regions. However, we find that the southwest U.S. is a problem area for CMAQ, where modeled NO2 responses to insolation, boundary layer height, and other variables are at odds with the observations. Our analyses utilize software developed by our team, the Wisconsin Horizontal Interpolation Program for Satellites (WHIPS): a free, open-source program designed to make satellite-derived air quality data more usable. WHIPS interpolates level 2 satellite retrievals onto a user-defined fixed grid, in effect creating a custom-gridded level 3 satellite product.
Currently, WHIPS can process the following data products: OMI NO2 (NASA retrieval); OMI NO2 (KNMI retrieval); OMI
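    The core regridding step described above, interpolating scattered level 2 pixels onto a user-defined fixed grid, can be sketched as follows. The swath geometry and NO2 values are synthetic, and a real gridder (such as WHIPS) would also weight by pixel footprint and quality flags:

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(7)

# Hypothetical level 2 swath: scattered pixel centers with NO2 columns.
lon = rng.uniform(-100.0, -80.0, size=2000)
lat = rng.uniform(30.0, 45.0, size=2000)
no2 = 1e15 * (1.0 + 0.5 * np.sin(np.radians(lon)) + 0.1 * rng.normal(size=2000))

# User-defined fixed 0.5-degree grid, as in a custom level 3 product.
glon, glat = np.meshgrid(np.arange(-99.75, -80.0, 0.5),
                         np.arange(30.25, 45.0, 0.5))

# Interpolate swath pixels onto the grid (cells outside the swath stay NaN).
level3 = griddata((lon, lat), no2, (glon, glat), method="linear")

print("grid shape:", level3.shape, "finite cells:", np.isfinite(level3).sum())
```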

  9. Experimental performance evaluation of human balance control models.

    PubMed

    Huryn, Thomas P; Blouin, Jean-Sébastien; Croft, Elizabeth A; Koehle, Michael S; Van der Loos, H F Machiel

    2014-11-01

    Two factors commonly differentiate proposed balance control models for quiet human standing: 1) intermittent muscle activation and 2) prediction that overcomes sensorimotor time delays. In this experiment we assessed the viability and performance of intermittent activation and prediction in a balance control loop that included the neuromuscular dynamics of human calf muscles. Muscles were driven by functional electrical stimulation (FES). The performance of the different controllers was compared based on sway patterns and mechanical effort required to balance a human body load on a robotic balance simulator. All evaluated controllers balanced subjects with and without a neural block applied to their common peroneal and tibial nerves, showing that the models can produce stable balance in the absence of natural activation. Intermittent activation required less stimulation energy than continuous control but predisposed the system to increased sway. Relative to intermittent control, continuous control reproduced the sway size of natural standing better. Prediction was not necessary for stable balance control but did improve stability when control was intermittent, suggesting a possible benefit of a predictor for intermittent activation. Further application of intermittent activation and predictive control models may drive prolonged, stable FES-controlled standing that improves quality of life for people with balance impairments.
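    The predictive component evaluated above can be illustrated with a linearized inverted-pendulum stand-in for quiet standing: the controller sees only delayed state and forward-simulates through the sensorimotor delay before applying a PD law. Gains, delay, and plant constants below are illustrative, not fitted to the experiment:

```python
# theta_ddot = (g/L) * theta - u, with inertia absorbed into the gains.
g_over_l = 9.81
dt, delay, t_end = 0.001, 0.1, 10.0
d_steps = int(delay / dt)
kp, kd = 30.0, 6.0

theta, omega = 0.05, 0.0                  # initial lean (rad) and rate
state_hist = [(theta, omega)] * d_steps   # what the controller has seen
u_hist = [0.0] * d_steps                  # controls already issued
trace = []

for step in range(int(t_end / dt)):
    # Controller sees the state from `delay` seconds ago...
    th_p, om_p = state_hist[0]
    # ...and predicts the current state by replaying its own past controls.
    for u_past in u_hist:
        om_p += (g_over_l * th_p - u_past) * dt
        th_p += om_p * dt
    u = kp * th_p + kd * om_p             # PD law on the predicted state

    # True plant update (semi-implicit Euler).
    omega += (g_over_l * theta - u) * dt
    theta += omega * dt

    state_hist = state_hist[1:] + [(theta, omega)]
    u_hist = u_hist[1:] + [u]
    trace.append(theta)

print(f"max |theta| after 5 s: {max(abs(x) for x in trace[5000:]):.6f} rad")
```

    With an accurate internal model the prediction cancels the delay, so the loop behaves like delay-free PD control and the simulated sway decays; removing the replay loop (controlling on the raw delayed state) degrades stability, which mirrors the role prediction plays in the experiment.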

  10. Evaluation of Influenza Vaccination Efficacy: A Universal Epidemic Model

    PubMed Central

    Bazhan, Sergei I.

    2016-01-01

    By means of a designed epidemic model, we evaluated the influence of seasonal vaccination coverage, as well as a potential universal vaccine with differing efficacy, on the aftermath of seasonal and pandemic influenza. The results of the modeling enabled us to conclude that, to control a seasonal influenza epidemic with a reproduction coefficient R0 ≤ 1.5, a 35% vaccination coverage with the current seasonal influenza vaccine formulation is sufficient, provided that other epidemiological measures are regularly implemented. The higher R0 levels of pandemic strains will obviously require stronger intervention. In addition, seasonal influenza vaccines fail to confer protection against antigenically distinct pandemic influenza strains. Therefore, the necessity of a universal influenza vaccine is clear. The model predicts that a potential universal vaccine will be able to provide sufficiently reliable (90%) protection against pandemic influenza only if its efficacy is comparable with the effectiveness of modern vaccines against seasonal influenza strains (70%-80%), given that at least 40% of the population has been vaccinated in advance, ill individuals have been isolated (observed), and a quarantine has been introduced. If other antiepidemic measures are absent, a vaccination coverage of at least 80% is required. PMID:27668256
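    The coverage thresholds discussed above can be related to the standard homogeneous-mixing herd-immunity condition, in which vaccination must bring the effective reproduction number below one. A minimal sketch with illustrative R0 and efficacy values (the full epidemic model behind the abstract also accounts for isolation and quarantine):

```python
def critical_coverage(r0, efficacy):
    """Coverage p solving R0 * (1 - efficacy * p) = 1 under homogeneous
    mixing; a value above 1 means the vaccine alone cannot push the
    effective reproduction number below one."""
    return (1.0 - 1.0 / r0) / efficacy

# A seasonal strain (R0 = 1.5) versus a hypothetical pandemic strain
# (R0 = 2.5), each with a 75%-efficacious vaccine: illustrative values.
for r0 in (1.5, 2.5):
    p = critical_coverage(r0, efficacy=0.75)
    print(f"R0 = {r0}: critical coverage = {p:.0%}")
```

    With these illustrative numbers the pandemic-strain threshold lands at 80%, in line with the coverage the abstract cites when no other antiepidemic measures are in place.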

  11. Material Model Evaluation of a Composite Honeycomb Energy Absorber

    NASA Technical Reports Server (NTRS)

    Jackson, Karen E.; Annett, Martin S.; Fasanella, Edwin L.; Polanco, Michael A.

    2012-01-01

    A study was conducted to evaluate four different material models in predicting the dynamic crushing response of solid-element-based models of a composite honeycomb energy absorber, designated the Deployable Energy Absorber (DEA). Dynamic crush tests of three DEA components were simulated using the nonlinear, explicit transient dynamic code, LS-DYNA. In addition, a full-scale crash test of an MD-500 helicopter, retrofitted with DEA blocks, was simulated. The four material models used to represent the DEA included: *MAT_CRUSHABLE_FOAM (Mat 63), *MAT_HONEYCOMB (Mat 26), *MAT_SIMPLIFIED_RUBBER/FOAM (Mat 181), and *MAT_TRANSVERSELY_ANISOTROPIC_CRUSHABLE_FOAM (Mat 142). Test-analysis calibration metrics included simple percentage error comparisons of initial peak acceleration, sustained crush stress, and peak compaction acceleration of the DEA components. In addition, the Roadside Safety Verification and Validation Program (RSVVP) was used to assess similarities and differences between the experimental and analytical curves for the full-scale crash test.

  12. Evaluation of Atmospheric Loading and Improved Troposphere Modelling

    NASA Technical Reports Server (NTRS)

    Zelensky, Nikita P.; Chinn, Douglas S.; Lemoine, F. G.; Le Bail, Karine; Pavlis, Despina E.

    2012-01-01

    Forward modeling of non-tidal atmospheric loading displacements at geodetic tracking stations has not routinely been included in Doppler Orbitography and Radiopositioning Integrated by Satellite (DORIS) or Satellite Laser Ranging (SLR) station analyses, either for POD applications or for reference frame determination. The displacements, which are computed from 6-hourly models such as ECMWF, can amount to 3-10 mm in the east, north and up components depending on the tracking station locations. We evaluate the application of atmospheric loading in a number of ways using the NASA GSFC GEODYN software. First, we assess the impact on SLR- and DORIS-determined orbits such as Jason-2, where we evaluate the impact on the tracking data RMS of fit and how the total orbits are changed with the application of this correction. Preliminary results show an RMS radial change of 0.5 mm for Jason-2 over 54 cycles and a total change in the Z-centering of the orbit of 3 mm peak-to-peak over one year. We also evaluate the effects on other DORIS satellites such as Cryosat-2, Envisat and the SPOT satellites. In the second step, we produce two SINEX time series based on data from available DORIS satellites and assess the differences in WRMS, scale and Helmert translation parameters. Troposphere refraction is obviously an important correction for radiometric data types such as DORIS. We evaluate recent improvements in DORIS processing at GSFC, including the application of the Vienna Mapping Function (VMF1) grids with a priori hydrostatic (VZHDs) and wet (VZWDs) zenith delays. We reduce the gridded VZHDs to the station heights using pressure and temperature derived from GPT (strategy 1) and Saastamoinen. We discuss the validation of the VMF1 implementation and its application to the Jason-2 POD processing, compared to corrections using the Niell mapping function and the GMF. Using one year of data, we also assess the impact of the new troposphere corrections on the DORIS-only solutions, most

  13. Towards systematic evaluation of crop model outputs for global land-use models

    NASA Astrophysics Data System (ADS)

    Leclere, David; Azevedo, Ligia B.; Skalský, Rastislav; Balkovič, Juraj; Havlík, Petr

    2016-04-01

    Land provides vital socioeconomic resources to society, though at the cost of large environmental degradation. Global integrated models combining high-resolution global gridded crop models (GGCMs) and global economic models (GEMs) are increasingly being used to inform sustainable solutions for agricultural land use. However, little effort has yet been made to evaluate and compare the accuracy of GGCM outputs. In addition, GGCM datasets require a large number of parameters whose values, and their variability across space, are weakly constrained: increasing the accuracy of such datasets has a very high computing cost. Innovative evaluation methods are required both to lend credibility to the global integrated models and to allow efficient parameter specification of GGCMs. We propose an evaluation strategy for GGCM datasets from the perspective of their use in GEMs, illustrated with preliminary results from a novel dataset (the Hypercube) generated by the EPIC GGCM and used in the GLOBIOM land-use GEM to inform on present-day crop yield, water and nutrient input needs for 16 crops × 15 management intensities, at a spatial resolution of 5 arc-minutes. We adopt the following principle: evaluation should provide a transparent diagnosis of model adequacy for its intended use. We briefly describe how the Hypercube data is generated and how it articulates with GLOBIOM, in order to transparently identify the performances to be evaluated, as well as the main assumptions and data processing involved. Expected performances include adequately representing the sub-national heterogeneity in crop yield and input needs: i) in space, ii) across crop species, and iii) across management intensities. We will present and discuss measures of these expected performances and weigh the relative contributions of the crop model, input data and data-processing steps. We will also compare obtained yield gaps and main yield-limiting factors against the M3 dataset. Next steps include

  14. A participatory evaluation model for Healthier Communities: developing indicators for New Mexico.

    PubMed

    Wallerstein, N

    2000-01-01

    Participatory evaluation models that invite community coalitions to take an active role in developing evaluations of their programs are a natural fit with Healthy Communities initiatives. The author describes the development of a participatory evaluation model for New Mexico's Healthier Communities program. She describes evaluation principles, research questions, and baseline findings. The evaluation model shows the links between process, community-level system impacts, and population health changes.

  15. EMPIRE: Nuclear Reaction Model Code System for Data Evaluation

    NASA Astrophysics Data System (ADS)

    Herman, M.; Capote, R.; Carlson, B. V.; Obložinský, P.; Sin, M.; Trkov, A.; Wienke, H.; Zerkin, V.

    2007-12-01

    accompanying code EMPEND and completed with neutron resonances extracted from the existing evaluations. The package contains the full EXFOR (CSISRS) library of experimental reaction data, which are automatically retrieved during the calculations. Publication-quality graphs can be obtained using the powerful and flexible plotting package ZVView. The graphical user interface, written in Tcl/Tk, provides for easy operation of the system. This paper describes the capabilities of the code, outlines the physical models and indicates the parameter libraries used by EMPIRE to predict reaction cross sections and spectra, mainly for nucleon-induced reactions. Selected applications of EMPIRE are discussed, the most important being an extensive use of the code in evaluations of neutron reactions for the new US library ENDF/B-VII.0. Future extensions of the system are outlined, including a neutron resonance module as well as capabilities for generating covariances, using both KALMAN and Monte Carlo methods, which are still being advanced and refined.

  16. A Student Evaluation of Molecular Modeling in First Year College Chemistry.

    ERIC Educational Resources Information Center

    Ealy, Julie B.

    1999-01-01

    Evaluates first-year college students' perceptions of molecular modeling. Examines the effectiveness, integration with course content, interests, benefits, advantages, and disadvantages of molecular modeling. (Author/CCM)

  17. Interactive model evaluation tool based on IPython notebook

    NASA Astrophysics Data System (ADS)

    Balemans, Sophie; Van Hoey, Stijn; Nopens, Ingmar; Seuntjes, Piet

    2015-04-01

    In hydrological modelling, some kind of parameter optimization is usually performed. This can be the selection of a single best parameter set, a split into behavioural and non-behavioural parameter sets based on a selected threshold, or a posterior parameter distribution derived with a formal Bayesian approach. The selection of the criterion used to measure the goodness of fit (a likelihood or any objective function) is an essential step in all of these methodologies and will affect the final selected parameter subset. Moreover, the discriminative power of the objective function also depends on the time period used. In practice, the optimization process is an iterative procedure, so in the course of the modelling process an increasing number of simulations is performed. However, the information carried by these simulation outputs is not always fully exploited. In this respect, we developed and present an interactive environment that enables the user to intuitively evaluate model performance. The aim is to explore the parameter space graphically and to visualize the impact of the selected objective function on model behaviour. First, a set of model simulation results is loaded along with the corresponding parameter sets and a data set of the same variable as the model outcome (mostly discharge). The ranges of the loaded parameter sets define the parameter space. The user selects the two parameters to visualise, as well as an objective function and a time period of interest. Based on this information, a two-dimensional parameter response surface is created: a scatter plot of the parameter combinations with a color scale corresponding to the goodness of fit of each combination. Finally, a slider is available to change the color mapping of the points: it provides a threshold to exclude non-behavioural parameter sets, and the color scale is only attributed to the
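    The core of the tool described above, evaluating an objective function over an ensemble of parameter sets and splitting them into behavioural and non-behavioural groups by a threshold, can be sketched as follows. The toy two-parameter model, the RMSE objective and the threshold value are assumptions for illustration; the notebook itself would feed such results into a color-mapped scatter plot.

```python
import math
import random

def rmse(sim, obs):
    """Objective function: root-mean-square error between series."""
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs))

random.seed(1)
obs = [2.0, 3.0, 4.0, 5.0]  # "observed discharge" (synthetic; true model is y = t + 1)

# Ensemble of 200 parameter sets sampled over the parameter space [0, 2] x [0, 2]
ensemble = []
for _ in range(200):
    a, b = random.uniform(0, 2), random.uniform(0, 2)
    sim = [a * t + b for t in range(1, 5)]       # toy 2-parameter model
    ensemble.append(((a, b), rmse(sim, obs)))

# The "slider": parameter sets with objective value under the threshold
# are behavioural; only these would keep a color in the scatter plot.
threshold = 0.5
behavioural = [(params, err) for params, err in ensemble if err <= threshold]
```

    Lowering the threshold interactively shrinks the behavioural region around the best-fitting parameter combinations, which is exactly the visual effect the slider provides.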

  18. Evaluation of statistical models for forecast errors from the HBV model

    NASA Astrophysics Data System (ADS)

    Engeland, Kolbjørn; Renard, Benjamin; Steinsland, Ingelin; Kolberg, Sjur

    2010-04-01

    Summary: Three statistical models for the forecast errors for inflow into the Langvatn reservoir in Northern Norway have been constructed and tested according to the agreement between (i) the forecast distribution and the observations and (ii) the median values of the forecast distribution and the observations. For the first model, observed and forecasted inflows were transformed by the Box-Cox transformation before a first-order auto-regressive model was constructed for the forecast errors; the parameters were conditioned on weather classes. In the second model, the Normal Quantile Transformation (NQT) was applied to observed and forecasted inflows before a similar first-order auto-regressive model was constructed for the forecast errors. In the third model, positive and negative errors were modeled separately; the errors were first NQT-transformed before conditioning the mean error values on climate, forecasted inflow and yesterday's error. To test the three models we applied three criteria: we wanted (a) the forecast distribution to be reliable; (b) the forecast intervals to be narrow; (c) the median values of the forecast distribution to be close to the observed values. Models 1 and 2 gave almost identical results. The median values improved the forecast, with the Nash-Sutcliffe efficiency (Reff) increasing from 0.77 for the original forecast to 0.87 for the corrected forecasts. Models 1 and 2 over-estimated the forecast intervals but gave the narrowest intervals. Their main drawback was that their distributions were less reliable than Model 3's. For Model 3 the median values did not fit well, since the auto-correlation was not accounted for. Since Model 3 did not benefit from the potential variance reduction that lies in bias estimation and removal, it gave on average wider forecast intervals than the two other models. At the same time, Model 3 on average slightly under-estimated the forecast intervals, probably explained by the use of average measures to evaluate the fit.
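    The first model's correction step, transform inflows with Box-Cox, model the errors as a first-order auto-regressive process, then back-transform, can be sketched in a few lines. The lambda value, AR(1) coefficient and inflow numbers are illustrative assumptions (in the study, parameters were estimated and conditioned on weather classes).

```python
import math

LAM = 0.3  # Box-Cox lambda, assumed for illustration

def boxcox(x, lam=LAM):
    """Box-Cox transform; log for lambda = 0."""
    return (x ** lam - 1.0) / lam if lam != 0 else math.log(x)

def inv_boxcox(y, lam=LAM):
    """Inverse Box-Cox transform."""
    return (lam * y + 1.0) ** (1.0 / lam)

# Yesterday's observed and forecasted inflow, today's raw forecast (illustrative)
obs_prev, fc_prev, fc_today = 10.0, 12.0, 11.0

# AR(1) error model in transformed space: e_t = phi * e_{t-1} + noise
phi = 0.7  # illustrative auto-regressive coefficient
e_prev = boxcox(obs_prev) - boxcox(fc_prev)

# One-step-ahead corrected forecast: add the predicted error, back-transform
corrected = inv_boxcox(boxcox(fc_today) + phi * e_prev)
```

    Because yesterday's forecast was too high, the predicted error is negative and the corrected forecast is pulled below the raw one; repeating this each day is what raised the Nash-Sutcliffe efficiency in the study.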

  19. Model selection on solid ground: Rigorous comparison of nine ways to evaluate Bayesian model evidence

    NASA Astrophysics Data System (ADS)

    Schöniger, Anneli; Wöhling, Thomas; Samaniego, Luis; Nowak, Wolfgang

    2014-12-01

    Bayesian model selection or averaging objectively ranks a number of plausible, competing conceptual models based on Bayes' theorem. It implicitly performs an optimal trade-off between performance in fitting available data and minimum model complexity. The procedure requires determining Bayesian model evidence (BME), which is the likelihood of the observed data integrated over each model's parameter space. The computation of this integral is highly challenging because it is as high-dimensional as the number of model parameters. Three classes of techniques to compute BME are available, each with its own challenges and limitations: (1) Exact and fast analytical solutions are limited by strong assumptions. (2) Numerical evaluation quickly becomes unfeasible for expensive models. (3) Approximations known as information criteria (ICs) such as the AIC, BIC, or KIC (Akaike, Bayesian, or Kashyap information criterion, respectively) yield contradicting results with regard to model ranking. Our study features a theory-based intercomparison of these techniques. We further assess their accuracy in a simplistic synthetic example where for some scenarios an exact analytical solution exists. In more challenging scenarios, we use a brute-force Monte Carlo integration method as reference. We continue this analysis with a real-world application of hydrological model selection. This is a first-time benchmarking of the various methods for BME evaluation against true solutions. Results show that BME values from ICs are often heavily biased and that the choice of approximation method substantially influences the accuracy of model ranking. For reliable model selection, bias-free numerical methods should be preferred over ICs whenever computationally feasible.
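    The brute-force Monte Carlo reference method used above is conceptually simple: BME is the likelihood averaged over samples drawn from the parameter prior. A minimal sketch on a toy one-parameter model (the data, noise level and uniform prior are assumptions, not the study's setup):

```python
import math
import random

random.seed(0)

# Toy setup: data assumed generated as y = theta + Gaussian noise
data = [1.1, 0.9, 1.2]
sigma = 0.5  # known observation error (assumption)

def likelihood(theta):
    """p(data | theta) under independent Gaussian errors."""
    log_l = sum(-0.5 * ((y - theta) / sigma) ** 2
                - math.log(sigma * math.sqrt(2.0 * math.pi)) for y in data)
    return math.exp(log_l)

# Brute-force Monte Carlo integration: BME = E_prior[ p(data | theta) ],
# estimated by averaging the likelihood over prior draws.
# Prior: uniform on [-5, 5] (assumption).
n = 100_000
bme = sum(likelihood(random.uniform(-5.0, 5.0)) for _ in range(n)) / n
```

    For expensive models this average over many prior draws is exactly what becomes unfeasible, which is why the information-criterion approximations are tempting despite their bias.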

  20. A generalised model for traffic induced road dust emissions. Model description and evaluation

    NASA Astrophysics Data System (ADS)

    Berger, Janne; Denby, Bruce

    2011-07-01

    This paper concerns the development and evaluation of a new and generalised road dust emission model. Most of today's road dust emission models are based on local measurements and/or contain empirical emission factors that are specific to a given road environment. In this study, a more generalised road dust emission model is presented and evaluated. We have based the emissions on road, tyre and brake wear rates and used the mass balance concept to describe the build-up of road dust on the road surface and road shoulder. The model separates the emissions into a direct part and a resuspension part, and treats the road surface and road shoulder as two different sources. We tested the model under idealized conditions as well as on two datasets in and just outside of Oslo, Norway, during the studded tyre season. We found that the model reproduced the observed increase in road dust emissions directly after drying of the road surface. The time scale for the build-up of road dust on the road surface is less than an hour for medium to heavy traffic density. The model performs well for temperatures above 0 °C and less well during colder periods. Since the model does not yet include salting as an additional mass source, underestimations are evident during dry periods with temperatures around 0 °C, when salting occurs. The model overestimates the measured PM10 (particulate matter less than 10 μm in diameter) concentrations under heavy precipitation events, since the model does not take the amount of precipitation into account. There is a strong sensitivity of the modelled emissions to the road surface conditions, and the current parameterisations of the effects of precipitation, runoff and evaporation seem inadequate.
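    The mass balance concept behind the model can be illustrated with a minimal build-up equation: the dust load on the road surface grows with wear production and is depleted by resuspension proportional to the current load. The rate constants below are assumptions chosen only to show the behaviour, not values from the paper.

```python
# Illustrative mass balance for the dust load M (g/m^2) on the road surface:
#   dM/dt = wear production - resuspension
# with resuspension proportional to the load (traffic-dependent in the real model).
wear_rate = 0.5      # g/m^2/h from road, tyre and brake wear (assumed)
resusp_coeff = 0.8   # fraction of the load resuspended per hour (assumed)
dt = 0.1             # time step in hours

M = 0.0
history = []
for _ in range(200):  # simulate 20 hours with explicit Euler steps
    dM = wear_rate - resusp_coeff * M
    M += dM * dt
    history.append(M)

# At steady state production balances resuspension:
equilibrium = wear_rate / resusp_coeff
```

    With these (assumed) coefficients the load approaches its equilibrium with an e-folding time of 1/resusp_coeff hours, which mirrors the paper's finding that build-up takes less than an hour under medium to heavy traffic (where the resuspension coefficient is large).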

  1. Modelling phosphorus intake, digestion, retention and excretion in growing and finishing pig: model evaluation.

    PubMed

    Symeou, V; Leinonen, I; Kyriazakis, I

    2014-10-01

    A deterministic, dynamic model was developed to enable predictions of phosphorus (P) digested, retained and excreted for different pig genotypes and under different dietary conditions. Before confidence can be placed in the predictions of the model, its evaluation was required. A sensitivity analysis of model predictions to ±20% changes in the model parameters was undertaken using a basal UK industry standard diet and a pig genotype characterized by the British Society of Animal Science as being of 'intermediate growth'. Model outputs were most sensitive to the values of the efficiency of digestible P utilization for growth and the non-phytate P absorption coefficient from the small intestine into the bloodstream; all other model parameters influenced model outputs by <10%, with the majority of the parameters influencing outputs by <5%. Independent data sets of published experiments were used to evaluate model performance based on graphical comparisons and statistical analysis. The literature studies were selected on the basis of the following criteria: they were within the BW range of 20 to 120 kg; pigs grew in a thermo-neutral environment; and they provided information on P intake, retention and excretion. In general, the model satisfactorily predicted the quantitative pig responses, in terms of P digested, retained and excreted, to variation in dietary inorganic P supply, Ca and phytase supplementation. The model performed well with 'conventional', European feed ingredients and poorly with 'less conventional' ones, such as dried distillers grains with solubles and canola meal. Explanations for these inconsistencies in the predictions are offered in the paper and they are expected to lead to further model development and improvement. The latter would include the characterization of the origin of phytate in pig diets.
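    The ±20% one-at-a-time sensitivity analysis described above follows a generic pattern: perturb each parameter individually, re-run the model, and record the percentage change in the output. A sketch with a stand-in model (the parameter names, values and the toy output function are assumptions, not the pig P model):

```python
def model_output(params):
    """Stand-in for the model output (e.g. P retained); illustrative only."""
    return params["efficiency"] * params["intake"] * params["absorption"]

base = {"efficiency": 0.85, "intake": 10.0, "absorption": 0.6}
base_out = model_output(base)

# One-at-a-time sensitivity: perturb each parameter by -20% and +20%
# and record the relative change in the model output.
sensitivity = {}
for name in base:
    for factor in (0.8, 1.2):
        perturbed = dict(base, **{name: base[name] * factor})
        change_pct = 100.0 * (model_output(perturbed) - base_out) / base_out
        sensitivity[(name, factor)] = change_pct
```

    In this linear-multiplicative toy every parameter maps a ±20% input change to exactly ±20% in the output; in the real nonlinear model the responses differ per parameter, which is precisely what identifies the most influential ones.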

  2. Model selection on solid ground: Rigorous comparison of nine ways to evaluate Bayesian model evidence

    PubMed Central

    Schöniger, Anneli; Wöhling, Thomas; Samaniego, Luis; Nowak, Wolfgang

    2014-01-01

    Bayesian model selection or averaging objectively ranks a number of plausible, competing conceptual models based on Bayes' theorem. It implicitly performs an optimal trade-off between performance in fitting available data and minimum model complexity. The procedure requires determining Bayesian model evidence (BME), which is the likelihood of the observed data integrated over each model's parameter space. The computation of this integral is highly challenging because it is as high-dimensional as the number of model parameters. Three classes of techniques to compute BME are available, each with its own challenges and limitations: (1) Exact and fast analytical solutions are limited by strong assumptions. (2) Numerical evaluation quickly becomes unfeasible for expensive models. (3) Approximations known as information criteria (ICs) such as the AIC, BIC, or KIC (Akaike, Bayesian, or Kashyap information criterion, respectively) yield contradicting results with regard to model ranking. Our study features a theory-based intercomparison of these techniques. We further assess their accuracy in a simplistic synthetic example where for some scenarios an exact analytical solution exists. In more challenging scenarios, we use a brute-force Monte Carlo integration method as reference. We continue this analysis with a real-world application of hydrological model selection. This is a first-time benchmarking of the various methods for BME evaluation against true solutions. Results show that BME values from ICs are often heavily biased and that the choice of approximation method substantially influences the accuracy of model ranking. For reliable model selection, bias-free numerical methods should be preferred over ICs whenever computationally feasible. PMID:25745272

  3. Evaluation of an object-based data model implemented over a proprietary, legacy data model.

    PubMed Central

    Pollard, D. L.; Hales, J. W.

    1995-01-01

    Most computerized medical information today is contained in legacy systems. As vendors slowly move to open systems, legacy systems remain in use and contain valuable information. This paper evaluates the use of an object model imposed on an existing database to improve the ease with which data can be accessed. This study demonstrates that data elements can be retrieved without specific programming knowledge of the underlying data structure. It also suggests that underlying data structures can be changed without updating application code. Programs written using the object model were easier to write but ran more than an order of magnitude slower than traditionally coded programs. In this paper, the legacy information system is introduced, the methods used to implement and evaluate the object-based data model are explained, and the results and conclusions are presented. PMID:8563303

  4. Novel Planar Electromagnetic Sensors: Modeling and Performance Evaluation

    PubMed Central

    Mukhopadhyay, Subhas C.

    2005-01-01

    High-performance planar electromagnetic sensors, their modeling and a few applications are reported in this paper. Research employing planar-type electromagnetic sensors started quite a few years ago, with the initial emphasis on the inspection of defects on printed circuit boards. The use of the planar-type sensing system has been extended to the evaluation of near-surface material properties such as conductivity, permittivity and permeability, and it can also be used for the inspection of defects near the surface of materials. Recently the sensor has been used for the inspection of the quality of saxophone reeds and dairy products. The electromagnetic responses of planar interdigital sensors with pork meats have also been investigated.

  5. Model fit versus biological relevance: Evaluating photosynthesis-temperature models for three tropical seagrass species

    NASA Astrophysics Data System (ADS)

    Adams, Matthew P.; Collier, Catherine J.; Uthicke, Sven; Ow, Yan X.; Langlois, Lucas; O’Brien, Katherine R.

    2017-01-01

    When several models can describe a biological process, the equation that best fits the data is typically considered the best. However, models are most useful when they also possess biologically-meaningful parameters. In particular, model parameters should be stable, physically interpretable, and transferable to other contexts, e.g. for direct indication of system state, or usage in other model types. As an example of implementing these recommended requirements for model parameters, we evaluated twelve published empirical models for temperature-dependent tropical seagrass photosynthesis, based on two criteria: (1) goodness of fit, and (2) how easily biologically-meaningful parameters can be obtained. All models were formulated in terms of parameters characterising the thermal optimum (Topt) for maximum photosynthetic rate (Pmax). These parameters indicate the upper thermal limits of seagrass photosynthetic capacity, and hence can be used to assess the vulnerability of seagrass to temperature change. Our study exemplifies an approach to model selection which optimises the usefulness of empirical models for both modellers and ecologists alike.
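    Fitting an empirical photosynthesis-temperature model and reading off the biologically meaningful parameters (Topt, Pmax) can be sketched as below. The Gaussian-shaped candidate model, the synthetic data and the crude grid search are all assumptions for illustration; the paper compares twelve published model forms and would use a proper optimiser.

```python
import math

# Synthetic photosynthesis-temperature observations (illustrative)
temps = [15, 20, 25, 30, 35, 40]
rates = [2.1, 4.0, 6.2, 6.8, 5.1, 2.0]

def gaussian_pt(t, pmax, topt, width):
    """One candidate empirical form: P(T) = Pmax * exp(-((T - Topt)/width)^2).
    Its parameters are directly interpretable: Topt and Pmax."""
    return pmax * math.exp(-((t - topt) / width) ** 2)

def sse(pmax, topt, width):
    """Goodness of fit: sum of squared errors against the observations."""
    return sum((gaussian_pt(t, pmax, topt, width) - r) ** 2
               for t, r in zip(temps, rates))

# Crude grid search in place of a real nonlinear least-squares optimiser
best = min(
    ((pmax, topt, width)
     for pmax in [x / 10 for x in range(50, 81)]   # 5.0 .. 8.0
     for topt in range(20, 36)                     # 20 .. 35 deg C
     for width in range(5, 16)),                   # 5 .. 15 deg C
    key=lambda p: sse(*p),
)
pmax_fit, topt_fit, width_fit = best
```

    The fitted Topt and Pmax are exactly the kind of stable, transferable parameters the paper argues model selection should optimise for, rather than goodness of fit alone.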

  6. Model fit versus biological relevance: Evaluating photosynthesis-temperature models for three tropical seagrass species

    PubMed Central

    Adams, Matthew P.; Collier, Catherine J.; Uthicke, Sven; Ow, Yan X.; Langlois, Lucas; O’Brien, Katherine R.

    2017-01-01

    When several models can describe a biological process, the equation that best fits the data is typically considered the best. However, models are most useful when they also possess biologically-meaningful parameters. In particular, model parameters should be stable, physically interpretable, and transferable to other contexts, e.g. for direct indication of system state, or usage in other model types. As an example of implementing these recommended requirements for model parameters, we evaluated twelve published empirical models for temperature-dependent tropical seagrass photosynthesis, based on two criteria: (1) goodness of fit, and (2) how easily biologically-meaningful parameters can be obtained. All models were formulated in terms of parameters characterising the thermal optimum (Topt) for maximum photosynthetic rate (Pmax). These parameters indicate the upper thermal limits of seagrass photosynthetic capacity, and hence can be used to assess the vulnerability of seagrass to temperature change. Our study exemplifies an approach to model selection which optimises the usefulness of empirical models for both modellers and ecologists alike. PMID:28051123

  7. Evaluating Status Change of Soil Potassium from Path Model

    PubMed Central

    He, Wenming; Chen, Fang

    2013-01-01

    The purpose of this study is to determine the critical environmental parameters of soil K availability and to quantify their contributions using a proposed path model. In this study, plot experiments were designed with different treatments, and soil samples were collected and further analyzed in the laboratory to investigate the influence of soil properties on soil potassium forms (water-soluble K, exchangeable K, non-exchangeable K). Furthermore, path analysis based on the proposed path model was carried out to evaluate the relationship between potassium forms and soil properties. The research findings were as follows. Firstly, the key direct factors were soil S, the sodium-potassium ratio (Na/K), the chemical index of alteration (CIA), soil organic matter in soil solution (SOM), Na and total nitrogen in soil solution (TN), and the key indirect factors were carbonate (CO3), Mg, pH, Na, S, and SOM. Secondly, the path model can effectively determine the direction and magnitude of potassium status changes between exchangeable potassium (eK), non-exchangeable potassium (neK) and water-soluble potassium (wsK) under the influence of specific environmental parameters. In the reversible equilibrium state of , the K balance was inclined to move in the β and χ directions under treatments of potassium shortage. However, in the reversible equilibrium of , the K balance was inclined to move in the θ and λ directions under treatments of water shortage. The results showed that the proposed path model was able to quantitatively disclose the direction of K status changes and quantify its equilibrium threshold. It provides a theoretical and practical basis for scientific and effective fertilization in agricultural plant growth. PMID:24204659
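    Path analysis, as used above, boils down to standardized partial regression coefficients: each "direct factor" coefficient comes from regressing the standardized response on the standardized predictors. A minimal two-predictor sketch (the soil data below are fabricated for illustration; the real study used many more variables):

```python
import math

def standardize(xs):
    """Convert a series to zero mean and unit (population) standard deviation."""
    m = sum(xs) / len(xs)
    s = math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))
    return [(x - m) / s for x in xs]

# Toy data: exchangeable K driven by two soil properties (illustrative values)
som = [1.0, 2.0, 3.0, 4.0, 5.0]   # soil organic matter
ph  = [5.5, 6.0, 6.2, 6.8, 7.0]
ek  = [0.8, 1.5, 2.1, 2.9, 3.4]   # exchangeable K

z1, z2, zy = standardize(som), standardize(ph), standardize(ek)
n = len(zy)

# Pairwise correlations between the standardized variables
r12 = sum(a * b for a, b in zip(z1, z2)) / n
r1y = sum(a * b for a, b in zip(z1, zy)) / n
r2y = sum(a * b for a, b in zip(z2, zy)) / n

# Direct path coefficients from the 2x2 normal equations;
# the indirect effect of predictor 1 via predictor 2 is r12 * p2.
det = 1.0 - r12 ** 2
p1 = (r1y - r12 * r2y) / det
p2 = (r2y - r12 * r1y) / det
```

    The decomposition r1y = p1 + r12 * p2 (direct plus indirect effect) is the identity path analysis uses to split a total correlation into the direct and indirect contributions reported in the study.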

  8. Development and evaluation of a bioenergetics model for bull trout

    USGS Publications Warehouse

    Mesa, Matthew G.; Welland, Lisa K.; Christiansen, Helena E.; Sauter, Sally T.; Beauchamp, David A.

    2013-01-01

    We conducted laboratory experiments to parameterize a bioenergetics model for wild Bull Trout Salvelinus confluentus, estimating the effects of body mass (12–1,117 g) and temperature (3–20°C) on maximum consumption (C max) and standard metabolic rates. The temperature associated with the highest C max was 16°C, and C max showed the characteristic dome-shaped temperature-dependent response. Mass-dependent values of C max (N = 28) at 16°C ranged from 0.03 to 0.13 g·g−1·d−1. The standard metabolic rates of fish (N = 110) ranged from 0.0005 to 0.003 g·O2·g−1·d−1 and increased with increasing temperature but declined with increasing body mass. In two separate evaluation experiments, which were conducted at only one ration level (40% of estimated C max), the model predicted final weights that were, on average, within 1.2 ± 2.5% (mean ± SD) of observed values for fish ranging from 119 to 573 g and within 3.5 ± 4.9% of values for 31–65 g fish. Model-predicted consumption was within 5.5 ± 10.9% of observed values for larger fish and within 12.4 ± 16.0% for smaller fish. Our model should be useful to those dealing with issues currently faced by Bull Trout, such as climate change or alterations in prey availability.
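    A bioenergetics maximum-consumption term of the kind parameterized above is typically an allometric mass function multiplied by a dome-shaped temperature response. The sketch below uses a generic Gaussian dome peaking at 16°C; the coefficients are assumptions for illustration, not the fitted Bull Trout values.

```python
import math

def cmax(mass_g, temp_c, a=0.2, b=-0.27, t_opt=16.0, width=6.0):
    """Illustrative maximum consumption rate (g/g/d): an allometric mass
    term times a dome-shaped temperature response peaking at t_opt.
    All coefficients here are assumed, not the study's fitted values."""
    mass_term = a * mass_g ** b                      # declines with body mass
    temp_term = math.exp(-((temp_c - t_opt) / width) ** 2)  # dome shape
    return mass_term * temp_term

# Dome shape for a 100 g fish: consumption peaks at 16 C and
# falls off toward both the cold (3 C) and warm (20 C) ends.
rates = {t: cmax(100.0, t) for t in (3, 10, 16, 20)}
```

    With such a function fitted to the laboratory data, predicting growth under a climate-warming scenario amounts to evaluating it along a shifted temperature series, which is the kind of application the authors suggest.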

  9. Statistical models and computation to evaluate measurement uncertainty

    NASA Astrophysics Data System (ADS)

    Possolo, Antonio

    2014-08-01

    In the course of the twenty years since the publication of the Guide to the Expression of Uncertainty in Measurement (GUM), the recognition has been steadily growing of the value that statistical models and statistical computing bring to the evaluation of measurement uncertainty, and of how they enable its probabilistic interpretation. These models and computational methods can address all the problems originally discussed and illustrated in the GUM, and enable addressing other, more challenging problems, that measurement science is facing today and that it is expected to face in the years ahead. These problems that lie beyond the reach of the techniques in the GUM include (i) characterizing the uncertainty associated with the assignment of value to measurands of greater complexity than, or altogether different in nature from, the scalar or vectorial measurands entertained in the GUM: for example, sequences of nucleotides in DNA, calibration functions and optical and other spectra, spatial distribution of radioactivity over a geographical region, shape of polymeric scaffolds for bioengineering applications, etc; (ii) incorporating relevant information about the measurand that predates or is otherwise external to the measurement experiment; (iii) combining results from measurements of the same measurand that are mutually independent, obtained by different methods or produced by different laboratories. This review of several of these statistical models and computational methods illustrates some of the advances that they have enabled, and in the process invites a reflection on the interesting historical fact that these very same models and methods, by and large, were already available twenty years ago, when the GUM was first published—but then the dialogue between metrologists, statisticians and mathematicians was still in bud. It is in full bloom today, much to the benefit of all.
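    A concrete example of the statistical computing the review refers to is Monte Carlo propagation of measurement uncertainty: sample the input quantities from their assigned distributions, evaluate the measurement function, and summarize the output distribution. The measurand (a resistance from voltage and current) and all numerical values below are illustrative assumptions.

```python
import random
import statistics

random.seed(42)

# Measurand: resistance R = V / I, with V and I assigned Gaussian
# state-of-knowledge distributions (values are illustrative).
n = 100_000
samples = []
for _ in range(n):
    v = random.gauss(5.0, 0.02)     # volts: 5.0 V with u = 0.02 V
    i = random.gauss(0.10, 0.001)   # amperes: 0.10 A with u = 0.001 A
    samples.append(v / i)

r_est = statistics.mean(samples)     # estimate of the measurand
u_r = statistics.stdev(samples)      # standard uncertainty of R
samples.sort()
ci95 = (samples[int(0.025 * n)], samples[int(0.975 * n)])  # 95% coverage interval
```

    Unlike the GUM's first-order law of propagation of uncertainty, this approach handles nonlinear measurement functions and asymmetric output distributions directly, which is one reason it underpins the GUM's Monte Carlo supplement.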

  10. Storytelling Voice Conversion: Evaluation Experiment Using Gaussian Mixture Models

    NASA Astrophysics Data System (ADS)

    Přibil, Jiří; Přibilová, Anna; Ďuračková, Daniela

    2015-07-01

    In the development of voice conversion and personification for text-to-speech (TTS) systems, it is essential to have feedback on users' opinions of the resulting synthetic speech quality. Therefore, the main aim of the experiments described in this paper was to find out whether a classifier based on Gaussian mixture models (GMM) could be applied to the evaluation of different storytelling voices created by transformation of sentences generated by the Czech and Slovak TTS system. We suppose that it is possible to combine this GMM-based statistical evaluation with classical listening tests, or even to replace them. The results obtained in this way were in good correlation with the results of the conventional listening test, confirming the practical usability of the developed GMM classifier. With the help of the performed analysis, the optimal setting of the initial parameters and the structure of the input feature set for recognition of the storytelling voices were finally determined.
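    The core of a GMM classifier is scoring a feature sequence under each class's mixture model and picking the class with the highest likelihood. A minimal one-dimensional sketch with hand-set mixture parameters (in practice the components would be trained by EM on spectral features of each storytelling voice; all numbers below are assumptions):

```python
import math

def gmm_loglik(features, components):
    """Log-likelihood of a 1-D feature sequence under a Gaussian mixture.
    components: list of (weight, mean, std) triples."""
    total = 0.0
    for x in features:
        p = sum(w * math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
                for w, m, s in components)
        total += math.log(p)
    return total

# Two "voice" models with assumed parameters
voice_a = [(0.6, 0.0, 1.0), (0.4, 3.0, 1.0)]
voice_b = [(0.5, 6.0, 1.0), (0.5, 9.0, 1.0)]

# Classify an utterance's features by the higher-likelihood model
features = [0.2, 2.8, 0.5, 3.1]
winner = "A" if gmm_loglik(features, voice_a) > gmm_loglik(features, voice_b) else "B"
```

    In the paper's setting, each storytelling voice gets its own trained GMM, and agreement between this maximum-likelihood decision and listeners' judgments is what validated the classifier.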

  11. Evaluation of two pollutant dispersion models over continental scales

    NASA Astrophysics Data System (ADS)

    Rodriguez, D.; Walker, H.; Klepikova, N.; Kostrikov, A.; Zhuk, Y.

    Two long-range emergency response models, one based on the particle-in-cell method of pollutant representation (ADPIC/U.S.), the other based on the superposition of Gaussian puffs released periodically in time (EXPRESS/Russia), are evaluated using perfluorocarbon tracer data from the Across North America Tracer Experiment (ANATEX). The purpose of the study is to assess our current capabilities for simulating continental-scale dispersion processes and to use these assessments as a means to improve our modeling tools. The criteria for judging model performance are based on protocols devised by the Environmental Protection Agency and on other complementary tests. Most of these measures require the formation and analysis of surface concentration footprints (the surface manifestations of tracer clouds, which are sampled over 24-h intervals), whose dimensions, center-of-mass coordinates and integral characteristics provide a basis for comparing observed and calculated concentration distributions. Generally speaking, the plumes associated with the 20 releases of perfluorocarbon (10 each from sources at Glasgow, MT and St. Cloud, MN) in January 1987 are poorly resolved by the sampling network when the source-to-receptor distances are less than about 1000 km. Within this undersampled region, both models chronically overpredict the sampler concentrations. Given this tendency, the computed areas of the surface footprints and their integral concentrations are likewise excessive. When the actual plumes spread out sufficiently for reasonable resolution, the observed (O) and calculated (C) footprint areas are usually within a factor of two of one another, thereby suggesting that the models possess some skill in the prediction of long-range diffusion. Deviations in the O and C plume trajectories, as measured by the distances of separation between the plume centroids, are on the order of 125 km d-1 for both models. It appears that the inability of the models to simulate large
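    The "within a factor of two" comparison used above is a standard air-quality evaluation measure (often called FAC2): the fraction of observed/calculated pairs whose ratio lies between 0.5 and 2. A sketch with made-up footprint areas:

```python
def fac2_fraction(observed, calculated):
    """Fraction of (O, C) pairs with 0.5 <= C/O <= 2.0, a common
    factor-of-two performance measure for dispersion models."""
    pairs = [(o, c) for o, c in zip(observed, calculated) if o > 0 and c > 0]
    within = sum(1 for o, c in pairs if 0.5 <= c / o <= 2.0)
    return within / len(pairs)

# Illustrative observed vs. calculated footprint areas (km^2); values are made up
obs_areas = [100.0, 250.0, 400.0, 800.0]
calc_areas = [150.0, 260.0, 900.0, 700.0]
score = fac2_fraction(obs_areas, calc_areas)
```

    A score of 1.0 would mean every calculated footprint is within a factor of two of the observed one; the paper reports that this level of agreement holds once the plumes are well resolved by the sampler network.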

  12. A FRAMEWORK FOR EVALUATING REGIONAL-SCALE NUMERICAL PHOTOCHEMICAL MODELING SYSTEMS

    PubMed Central

    Dennis, Robin; Fox, Tyler; Fuentes, Montse; Gilliland, Alice; Hanna, Steven; Hogrefe, Christian; Irwin, John; Rao, S. Trivikrama; Scheffe, Richard; Schere, Kenneth; Steyn, Douw; Venkatram, Akula

    2011-01-01

    This paper discusses the need for critically evaluating regional-scale (~200-2000 km) three-dimensional numerical photochemical air quality modeling systems to establish a model’s credibility in simulating the spatio-temporal features embedded in the observations. Because of limitations of currently used approaches for evaluating regional air quality models, a framework for model evaluation is introduced here for determining the suitability of a modeling system for a given application, distinguishing the performance between different models through confidence-testing of model results, guiding model development, and analyzing the impacts of regulatory policy options. The framework identifies operational, diagnostic, dynamic, and probabilistic types of model evaluation. Operational evaluation techniques include statistical and graphical analyses aimed at determining whether model estimates are in agreement with the observations in an overall sense. Diagnostic evaluation focuses on process-oriented analyses to determine whether the individual processes and components of the model system are working correctly, both independently and in combination. Dynamic evaluation assesses the ability of the air quality model to simulate changes in air quality stemming from changes in source emissions and/or meteorology, the principal forces that drive the air quality model. Probabilistic evaluation attempts to assess the confidence that can be placed in model predictions using techniques such as ensemble modeling and Bayesian model averaging. The advantages of these types of model evaluation approaches are discussed in this paper. PMID:21461126
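    The operational evaluation component described above rests on familiar summary statistics comparing model estimates against observations. A sketch of a small subset (mean bias, RMSE, normalized mean bias); the pollutant values are fabricated for illustration:

```python
import math

def operational_stats(obs, mod):
    """A few common operational evaluation measures for paired
    observed (obs) and modeled (mod) values."""
    n = len(obs)
    bias = sum(m - o for o, m in zip(obs, mod)) / n            # mean bias
    rmse = math.sqrt(sum((m - o) ** 2 for o, m in zip(obs, mod)) / n)
    nmb = sum(m - o for o, m in zip(obs, mod)) / sum(obs)      # normalized mean bias
    return {"bias": bias, "rmse": rmse, "nmb": nmb}

# Illustrative paired values, e.g. hourly ozone concentrations (ppb)
obs = [30.0, 45.0, 60.0, 55.0]
mod = [35.0, 40.0, 70.0, 50.0]
stats = operational_stats(obs, mod)
```

    As the framework stresses, such aggregate scores only show agreement "in an overall sense"; diagnostic, dynamic and probabilistic evaluation are needed to show the model gets the right answer for the right reasons.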

  13. [Tridimensional evaluation model of health promotion in school -- a proposition].

    PubMed

    Kulmatycki, Lesław

    2005-01-01

    A good school health programme can be one of the most cost effective investments for simultaneously improving education and health. The general direction of WHO's European Network of Health Promoting Schools and Global Schools Health Initiative is guided by the holistic approach and the Ottawa Charter for Health Promotion (1986). A health promoting school strives to improve the health and well-being of school pupils as well as school personnel, families and community members; and works with community leaders to help them understand how the community contributes to health and education. Evaluation research is essential to describe the nature and effectiveness of school health promoting activity. The overall aim of this paper is to help school leaders and health promotion coordinators to measure their work effectively. The specific aim is to offer a practical three-dimensional evaluation model for health promoting schools. The material is presented in two sections. The first one is a 'theoretical base' for health promotion, which was identified from broad-based daily health promotion practical activities, strategies and intersectional interventions closely related to the philosophy of the holistic approach. The three dimensions refer to: 1. 'areas' -- according to the mandala of health. 2. 'actions' -- according to Ottawa Charter strategies which should be adapted to the local school networks. 3. 'data' -- according to different groups of evidence (process, changes and progress). The second one, as a result of the mentioned base, represents the three 'core elements': standards, criteria and indicators. In conclusion, this article provides a practical answer to the dilemma of the evaluation model in the network of the local school environment. This proposition is addressed to school staff and school health promotion providers to make their work as effective as possible in improving pupils' health. Health promoting school can be characterized as a school constantly

  14. A linear programming model for optimizing HDR brachytherapy dose distributions with respect to mean dose in the DVH-tail

    SciTech Connect

    Holm, Åsa; Larsson, Torbjörn; Tedgren, Åsa Carlsson

    2013-08-15

    Purpose: Recent research has shown that the optimization model hitherto used in high-dose-rate (HDR) brachytherapy corresponds weakly to the dosimetric indices used to evaluate the quality of a dose distribution. Although alternative models that explicitly include such dosimetric indices have been presented, the inclusion of the dosimetric indices explicitly yields intractable models. The purpose of this paper is to develop a model for optimizing dosimetric indices that is easier to solve than those proposed earlier. Methods: In this paper, the authors present an alternative approach for optimizing dose distributions for HDR brachytherapy where dosimetric indices are taken into account through surrogates based on the conditional value-at-risk concept. This yields a linear optimization model that is easy to solve, and has the advantage that the constraints are easy to interpret and modify to obtain satisfactory dose distributions. Results: The authors show by experimental comparisons, carried out retrospectively for a set of prostate cancer patients, that their proposed model corresponds well with constraining dosimetric indices. All modifications of the parameters in the authors' model yield the expected result. The dose distributions generated are also comparable to those generated by the standard model with respect to the dosimetric indices that are used for evaluating quality. Conclusions: The authors' new model is a viable surrogate for optimizing dosimetric indices and quickly and easily yields high-quality dose distributions.
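    The conditional value-at-risk surrogate mentioned above is attractive precisely because CVaR admits a linear-programming formulation (the Rockafellar-Uryasev construction). A minimal sketch, not the authors' brachytherapy model, showing how the CVaR of a loss sample reduces to a small LP (function name and sample losses are illustrative):

```python
import numpy as np
from scipy.optimize import linprog

def cvar_lp(losses, alpha=0.9):
    """CVaR of a loss sample via the Rockafellar-Uryasev linear program:
        min_t  t + 1/((1-alpha)*n) * sum_i max(losses_i - t, 0)
    with auxiliary variables u_i = max(losses_i - t, 0)."""
    L = np.asarray(losses, float)
    n = len(L)
    # Decision vector x = [t, u_1, ..., u_n]
    c = np.concatenate(([1.0], np.full(n, 1.0 / ((1 - alpha) * n))))
    # u_i >= L_i - t  rewritten as  -t - u_i <= -L_i
    A_ub = np.hstack((-np.ones((n, 1)), -np.eye(n)))
    b_ub = -L
    bounds = [(None, None)] + [(0, None)] * n  # t is free, u_i >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.fun

# With alpha = 0.75 and four samples, CVaR is the mean of the worst 25% of losses.
print(cvar_lp([1.0, 2.0, 3.0, 10.0], alpha=0.75))  # 10.0
```

    In the dose-planning setting, analogous linear constraints on the mean dose in a DVH tail take the place of hard (and non-convex) dosimetric-index constraints.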

  15. An evaluation of evaluative personality terms: a comparison of the big seven and five-factor model in predicting psychopathology.

    PubMed

    Durrett, Christine; Trull, Timothy J

    2005-09-01

    Two personality models are compared regarding their relationship with personality disorder (PD) symptom counts and with lifetime Axis I diagnoses. These models share 5 similar domains, and the Big 7 model also includes 2 domains assessing self-evaluation: positive and negative valence. The Big 7 model accounted for more variance in PDs than the 5-factor model, primarily because of the association of negative valence with most PDs. Although low-positive valence was associated with most Axis I diagnoses, the 5-factor model generally accounted for more variance in Axis I diagnoses than the Big 7 model. Some predicted associations between self-evaluation and psychopathology were not found, and unanticipated associations emerged. These findings are discussed regarding the utility of evaluative terms in clinical assessment.

  16. A merged model of quality improvement and evaluation: maximizing return on investment.

    PubMed

    Woodhouse, Lynn D; Toal, Russ; Nguyen, Trang; Keene, DeAnna; Gunn, Laura; Kellum, Andrea; Nelson, Gary; Charles, Simone; Tedders, Stuart; Williams, Natalie; Livingood, William C

    2013-11-01

    Quality improvement (QI) and evaluation are frequently considered to be alternative approaches for monitoring and assessing program implementation and impact. The emphasis on third-party evaluation, particularly associated with summative evaluation, and the grounding of evaluation in the social and behavioral sciences contrast with an emphasis on the integration of the QI process within programs or organizations and its origins in management science and industrial engineering. Working with a major philanthropic organization in Georgia, we illustrate how a QI model is integrated with evaluation for five asthma prevention and control sites serving poor and underserved communities in rural and urban Georgia. A primary foundation of this merged model of QI and evaluation is a refocusing of the evaluation from an intimidating report-card summative evaluation by external evaluators to an internally engaged program focus on developmental evaluation. The benefits of the merged model to both QI and evaluation are discussed. The use of evaluation-based logic models can help anchor a QI program in evidence-based practice and provide linkage between process and outputs with the longer term distal outcomes. Merging the QI approach with evaluation has major advantages, particularly related to enhancing the funder's return on investment. We illustrate how a Plan-Do-Study-Act model of QI can (a) be integrated with evaluation-based logic models, (b) help refocus emphasis from summative to developmental evaluation, (c) enhance program ownership and engagement in evaluation activities, and (d) increase the role of evaluators in providing technical assistance and support.

  17. Risk evaluation of uranium mining: A geochemical inverse modelling approach

    NASA Astrophysics Data System (ADS)

    Rillard, J.; Zuddas, P.; Scislewski, A.

    2011-12-01

    It is well known that uranium extraction operations can increase risks linked to radiation exposure. The toxicity of uranium and associated heavy metals is the main environmental concern regarding exploitation and processing of U-ore. In areas where U mining is planned, a careful assessment of toxic and radioactive element concentrations is recommended before the start of mining activities. A background evaluation of harmful elements is important in order to prevent and/or quantify future water contamination resulting from possible migration of toxic metals coming from ore and waste water interaction. Controlled leaching experiments were carried out to investigate processes of ore and waste (leached ore) degradation, using samples from the uranium exploitation site located in Caetité-Bahia, Brazil. In experiments in which the reaction of waste with water was tested, we found that the water had low pH and high levels of sulphates and aluminium. On the other hand, in experiments in which ore was tested, the water had a chemical composition comparable to natural water found in the region of Caetité. On the basis of our experiments, we suggest that waste resulting from sulphuric acid treatment can induce acidification and salinization of surface and ground water. For this reason proper storage of waste is imperative. As a tool to evaluate the risks, a geochemical inverse modelling approach was developed to estimate the water-mineral interaction involving the presence of toxic elements. We used a method described earlier by Scislewski and Zuddas (2010) (Geochim. Cosmochim. Acta 74, 6996-7007) in which the reactive surface area of mineral dissolution can be estimated. We found that the reactive surface area of rock parent minerals is not constant over time but varies by several orders of magnitude in only two months of interaction. 
We propose that parent mineral heterogeneity and particularly, neogenic phase formation may explain the observed variation of the

  18. Model-based evaluation of scientific impact indicators

    NASA Astrophysics Data System (ADS)

    Medo, Matúš; Cimini, Giulio

    2016-09-01

    Using bibliometric data artificially generated through a model of citation dynamics calibrated on empirical data, we compare several indicators for the scientific impact of individual researchers. The use of such a controlled setup has the advantage of avoiding the biases present in real databases, and it allows us to assess which aspects of the model dynamics and which traits of individual researchers a particular indicator actually reflects. We find that the simple average citation count of the authored papers performs well in capturing the intrinsic scientific ability of researchers, regardless of the length of their career. On the other hand, when productivity complements ability in the evaluation process, the notorious h and g indices reveal their potential, yet their normalized variants do not always yield a fair comparison between researchers at different career stages. Notably, the use of logarithmic units for citation counts allows us to build simple indicators with performance equal to that of h and g. Our analysis may provide useful hints for a proper use of bibliometric indicators. Additionally, our framework can be extended by including other aspects of the scientific production process and citation dynamics, with the potential to become a standard tool for the assessment of impact metrics.
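    Two of the indicators compared above are easy to state directly: the h index, and a mean impact measure in logarithmic citation units. A minimal sketch, with a hypothetical citation record (the log-unit indicator here is an illustrative choice, not necessarily the exact variant used in the paper):

```python
import math

def h_index(citations):
    """Largest h such that at least h papers have h or more citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank
    return h

def mean_log_citations(citations):
    """Average of log10(1 + c): a simple logarithmic-unit impact indicator."""
    return sum(math.log10(1 + c) for c in citations) / len(citations)

cites = [25, 8, 5, 3, 0]   # hypothetical citation counts of one author's papers
print(h_index(cites))      # 3: three papers have at least 3 citations each
```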

  19. Solid rocket booster performance evaluation model. Volume 1: Engineering description

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The space shuttle solid rocket booster performance evaluation model (SRB-II) is made up of analytical and functional simulation techniques linked together so that a single pass through the model will predict the performance of the propulsion elements of a space shuttle solid rocket booster. The available options allow the user to predict static test performance, predict nominal and off nominal flight performance, and reconstruct actual flight and static test performance. Options selected by the user are dependent on the data available. These can include data derived from theoretical analysis, small scale motor test data, large motor test data and motor configuration data. The user has several options for output format that include print, cards, tape and plots. Output includes all major performance parameters (Isp, thrust, flowrate, mass accounting and operating pressures) as a function of time as well as calculated single point performance data. The engineering description of SRB-II discusses the engineering and programming fundamentals used, the function of each module, and the limitations of each module.

  20. Reliability of Bolton analysis evaluation in tridimensional virtual models

    PubMed Central

    Brandão, Marianna Mendonca; Sobral, Marcio Costal; Vogel, Carlos Jorge

    2015-01-01

    Objective: The present study aimed at evaluating the reliability of Bolton analysis in tridimensional virtual models, comparing it with the manual method carried out with dental casts. Methods: The present investigation was performed using 56 pairs of dental casts produced from the dental arches of patients in perfect conditions and randomly selected from Universidade Federal da Bahia, School of Dentistry, Orthodontics Postgraduate Program. Manual measurements were obtained with the aid of a digital Cen-Tech 4″® caliper (Harbor Freight Tools, Calabasas, CA, USA). Subsequently, samples were digitized on a 3Shape® R-700T scanner (Copenhagen, Denmark) and digital measures were obtained with Ortho Analyzer software. Results: Data were subjected to statistical analysis, and results revealed no statistically significant differences between measurements, with p = 0.173 and p = 0.239 for total and anterior proportions, respectively. Conclusion: Based on these findings, it is possible to deduce that Bolton analysis performed on tridimensional virtual models is as reliable as measurements obtained from dental casts, with satisfactory agreement. PMID:26560824
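    The Bolton analysis evaluated above reduces to two simple ratios of summed mesiodistal tooth widths: the overall (12-tooth) ratio, with a norm near 91.3%, and the anterior (6-tooth) ratio, with a norm near 77.2%. A minimal sketch with hypothetical width sums (the function name and numbers are illustrative):

```python
def bolton_ratio(sum_mandibular, sum_maxillary):
    """Bolton ratio: summed mandibular over summed maxillary
    mesiodistal widths, expressed as a percentage."""
    return 100.0 * sum_mandibular / sum_maxillary

# Hypothetical sums of mesiodistal widths (mm)
total = bolton_ratio(91.0, 99.5)     # overall 12-tooth ratio; Bolton norm ~91.3%
anterior = bolton_ratio(35.4, 45.9)  # anterior 6-tooth ratio; Bolton norm ~77.2%
```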

  1. Evaluation of Medical Cystine Stone Prevention in an Animal Model

    NASA Astrophysics Data System (ADS)

    Sagi, Sreedhar; Wendt-Nordahl, Gunnar; Alken, Peter; Knoll, Thomas

    2007-04-01

    Medical treatment for cystinuria aims to decrease the concentration of cystine in the urine, increase its solubility, and therefore prevent stone formation. Ascorbic acid and captopril have been recommended as alternatives to thiol drugs, though conflicting data questioning their efficacy have also been widely reported. The aim of this study was to verify the effects of ascorbic acid and captopril on cystine stone formation in the cystinuria mouse model. A total of 28 male homozygous pebbles mice were used for characterizing the mice on a normal diet and on ascorbic acid- and captopril-supplemented diets. The baseline physiological parameters of the mice were determined initially. The normal diet was then replaced with the supplemented diet (ascorbic acid/captopril) for the next 48 weeks, and various biochemical parameters in urine and plasma were analyzed. All homozygous mice developed urinary cystine stones during the first year of life. No reduction in the urinary cystine concentration was seen with either of the supplemented diets. The stone mass varied widely in the study, and a beneficial effect of ascorbic acid in some of the animals was possible, though overall statistical significance was not seen. Conclusions: The cystinuria mouse model provides an ideal tool for evaluation of stone preventive measures in a standardized environment. This study confirms that ascorbic acid and captopril are not effective in cystinuria.

  2. An in vivo model for evaluation of the postantibiotic effect.

    PubMed

    Odenholt, I; Holm, S E; Cars, O

    1988-01-01

    A new experimental model to evaluate the postantibiotic effect (PAE) in vivo was developed using subcutaneously implanted tissue cages in rabbits with normal host defence mechanisms. The rabbits received benzylpenicillin i.v. in a dose giving a free penicillin concentration of 10 X MIC in the tissue cage fluid (TCF). A log-phase suspension of group A streptococci was injected into the tissue cages exposing them to penicillin in vivo. After 2 h bacterial samples were withdrawn, treated with penicillinase and transferred to 2 tissue cages in untreated rabbits. Simultaneously, unexposed streptococci were implanted in 2 other cages in the same animals. By repeated sampling of TCF, growth curves of the streptococci exposed to penicillin and the controls could be compared and a PAE of 1.6-2.4 h demonstrated. The PAE was of the same magnitude as that found in vitro. The model has several advantages for the demonstration of PAE in vivo: repeated samplings are easy to perform percutaneously, the effect of subinhibitory antibiotic concentrations are avoided, interindividual variations are eliminated since each animal is its own control, and the experiments can be performed in animals with undisturbed host defence mechanisms.

  3. The western Pacific monsoon in CMIP5 models: Model evaluation and projections

    NASA Astrophysics Data System (ADS)

    Brown, Josephine R.; Colman, Robert A.; Moise, Aurel F.; Smith, Ian N.

    2013-11-01

    The ability of 35 models from the Coupled Model Intercomparison Project Phase 5 (CMIP5) to simulate the western Pacific (WP) monsoon is evaluated over four representative regions around Timor, New Guinea, the Solomon Islands and Palau. Coupled model simulations are compared with atmosphere-only model simulations (with observed sea surface temperatures, SSTs) to determine the impact of SST biases on model performance. Overall, the CMIP5 models simulate the WP monsoon better than previous-generation Coupled Model Intercomparison Project Phase 3 (CMIP3) models, but some systematic biases remain. The atmosphere-only models are better able to simulate the seasonal cycle of zonal winds than the coupled models, but display comparable biases in the rainfall. The CMIP5 models are able to capture features of interannual variability in response to the El Niño-Southern Oscillation. In climate projections under the RCP8.5 scenario, monsoon rainfall is increased over most of the WP monsoon domain, while wind changes are small. Widespread rainfall increases at low latitudes in the summer hemisphere appear robust as a large majority of models agree on the sign of the change. There is less agreement on rainfall changes in winter. Interannual variability of monsoon wet season rainfall is increased in a warmer climate, particularly over Palau, Timor and the Solomon Islands. A subset of the models showing greatest skill in the current climate confirms the overall projections, although showing markedly smaller rainfall increases in the western equatorial Pacific. The changes found here may have large impacts on Pacific island countries influenced by the WP monsoon.

  4. Evaluation of Spatial Agreement of Distinct Landslide Prediction Models

    NASA Astrophysics Data System (ADS)

    Sterlacchini, Simone; Bordogna, Gloria; Frigerio, Ivan

    2013-04-01

    derived to test agreement among the maps. Nevertheless, no information was made available about the locations where the predictions of two or more maps agreed and where they did not. Thus we wanted to study whether the models also agreed spatially, predicting the same or similar values at the same locations. To this end we adopted a soft image fusion approach proposed in the literature, defined as a group decision making model for ranking spatial alternatives based on a soft fusion of coherent evaluations. In order to apply this approach, the prediction maps were categorized into 10 distinct classes by using an equal-area criterion to compare the predicted results. Thus we applied soft fusion of the prediction maps, regarded as evaluations of distinct human experts. The fusion process needs the definition of the concept of "fuzzy majority", provided by a linguistic quantifier, in order to determine the coherence of a majority of maps in each pixel of the territory. Based on this, the overall spatial coherence among the majority of the prediction maps was evaluated. The spatial coherence among a fuzzy majority is defined based on the Minkowski OWA operators. The result made it possible to spatially identify sectors of the study area in which the predictions agreed on the same or close classes of susceptibility, and sectors in which they were discordant, even by distant classes. We studied the spatial agreement among a "fuzzy majority" defined as "80% of the 13 coherent maps", thus requiring that at least 11 out of 13 agree, since from previous results we knew that two maps were in disagreement. So the fuzzy majority AtLeast80% was defined by a quantifier with a linearly increasing membership function on (0.8, 1). The coherence metric used was the Euclidean distance. We thus computed the soft fusion of AtLeast80% coherent maps for homogeneous groups of classes. We considered as homogeneous classes the highest two classes (9 and 10), the lowest two classes, and the central classes (4, 5 and 6). 
We then fused the maps
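    The quantifier-guided OWA aggregation described above can be sketched compactly: the "AtLeast80%" linguistic quantifier generates ordered weights, which are applied to the per-pixel class values sorted in descending order, so that the aggregate is high only when a large majority of maps agree. The class values below are hypothetical, and this is a plain OWA rather than the Minkowski variant the authors use:

```python
import numpy as np

def quantifier_at_least(p, a=0.8, b=1.0):
    """Linearly increasing quantifier Q(p): 0 below a, 1 above b."""
    return np.clip((p - a) / (b - a), 0.0, 1.0)

def owa_weights(n, a=0.8, b=1.0):
    """OWA weights w_i = Q(i/n) - Q((i-1)/n) derived from the quantifier."""
    i = np.arange(1, n + 1)
    return quantifier_at_least(i / n, a, b) - quantifier_at_least((i - 1) / n, a, b)

def owa(values, weights):
    """OWA aggregation: weights applied to values sorted in descending order."""
    return float(np.dot(np.sort(np.asarray(values, float))[::-1], weights))

# Hypothetical susceptibility classes (1-10) predicted by 13 maps at one pixel
classes = [9, 9, 10, 9, 8, 9, 9, 10, 9, 9, 8, 3, 9]
w = owa_weights(len(classes))  # mass falls on the lowest-ranked values
score = owa(classes, w)        # high only if at least ~80% of maps agree
```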

  5. Evaluation of Model Microphysics Within Precipitation Bands of Extratropical Cyclones

    NASA Technical Reports Server (NTRS)

    Colle, Brian A.; Molthan, Andrew; Yu, Ruyi; Stark, David; Yuter, Sandra; Nesbitt, Steven

    2013-01-01

    Recent studies evaluating the bulk microphysical schemes (BMPs) within cloud resolving models (CRMs) have indicated large uncertainties and errors in the amount and size distributions of snow and cloud ice aloft. The snow prediction is sensitive to the snow densities, habits, and degree of riming within the BMPs. Improving these BMPs is a crucial step toward improving both weather forecasting and climate predictions. Several microphysical schemes in the Weather Research and Forecasting (WRF) model down to 1.33-km grid spacing are evaluated using aircraft, radar, and ground in situ data from the Global Precipitation Mission Coldseason Precipitation Experiment (GCPEx) experiment, as well as a few years (15 winter storms) of surface measurements of riming, crystal habit, snow density, and radar measurements at Stony Brook, NY (SBNY on north shore of Long Island) during the 2009-2012 winter seasons. Surface microphysical measurements at SBNY were taken every 15 to 30 minutes using a stereo microscope and camera, and snow depth and snow density were also recorded. During these storms, a vertically-pointing Ku-band radar was used to observe the vertical evolution of reflectivity and Doppler vertical velocities. A Particle Size and Velocity (PARSIVEL) disdrometer was also used to measure the surface size distribution and fall speeds of snow at SBNY. For the 15 cases at SBNY, the WSM6, Morrison (MORR), Thompson (THOM2), and Stony Brook (SBU-YLIN) BMPs were validated. A non-spherical snow assumption (THOM2 and SBU-YLIN) simulated a more realistic distribution of reflectivity than spherical snow assumptions in the WSM6 and MORR schemes. The MORR, WSM6, and SBU-YLIN schemes are comparable to the observed velocity distribution in light and moderate riming periods. The THOM2 is 0.25 meters per second too slow with its velocity distribution in these periods. In heavier riming, the vertical Doppler velocities in the WSM6, THOM2, and MORR schemes were 0.25 meters per second too

  6. Evaluation of Model Microphysics within Precipitation Bands of Extratropical Cyclones

    NASA Technical Reports Server (NTRS)

    Colle, Brian A.; Yu, Ruyi; Molthan, Andrew L.; Nesbitt, Steven

    2013-01-01

    Recent studies evaluating the bulk microphysical schemes (BMPs) within cloud resolving models (CRMs) have indicated large uncertainties and errors in the amount and size distributions of snow and cloud ice aloft. The snow prediction is sensitive to the snow densities, habits, and degree of riming within the BMPs. Improving these BMPs is a crucial step toward improving both weather forecasting and climate predictions. Several microphysical schemes in the Weather Research and Forecasting (WRF) model down to 1.33-km grid spacing are evaluated using aircraft, radar, and ground in situ data from the Global Precipitation Mission Coldseason Precipitation Experiment (GCPEx) experiment, as well as a few years (15 winter storms) of surface measurements of riming, crystal habit, snow density, and radar measurements at Stony Brook, NY (SBNY on north shore of Long Island) during the 2009-2012 winter seasons. Surface microphysical measurements at SBNY were taken every 15 to 30 minutes using a stereo microscope and camera, and snow depth and snow density were also recorded. During these storms, a vertically-pointing Ku-band radar was used to observe the vertical evolution of reflectivity and Doppler vertical velocities. A Particle Size and Velocity (PARSIVEL) disdrometer was also used to measure the surface size distribution and fall speeds of snow at SBNY. For the 15 cases at SBNY, the WSM6, Morrison (MORR), Thompson (THOM2), and Stony Brook (SBU-YLIN) BMPs were validated. A non-spherical snow assumption (THOM2 and SBU-YLIN) simulated a more realistic distribution of reflectivity than spherical snow assumptions in the WSM6 and MORR schemes. The MORR, WSM6, and SBU-YLIN schemes are comparable to the observed velocity distribution in light and moderate riming periods. The THOM2 is approx 0.25 m/s too slow with its velocity distribution in these periods. 
In heavier riming, the vertical Doppler velocities in the WSM6, THOM2, and MORR schemes were approx 0.25 m/s too slow, while the

  7. Evaluation of Model Microphysics Within Precipitation Bands of Extratropical Cyclones

    NASA Astrophysics Data System (ADS)

    Colle, B.; Molthan, A.; Yu, R.; Stark, D.; Yuter, S. E.; Nesbitt, S. W.

    2013-12-01

    Recent studies evaluating the bulk microphysical schemes (BMPs) within cloud resolving models (CRMs) have indicated large uncertainties and errors in the amount and size distributions of snow and cloud ice aloft. The snow prediction is sensitive to the snow densities, habits, and degree of riming within the BMPs. Improving these BMPs is a crucial step toward improving both weather forecasting and climate predictions. Several microphysical schemes in the Weather Research and Forecasting (WRF) model down to 1.33-km grid spacing are evaluated using aircraft, radar, and ground in situ data from the Global Precipitation Mission Cold-season Precipitation Experiment (GCPEx) experiment, as well as a few years (15 winter storms) of surface measurements of riming, crystal habit, snow density, and radar measurements at Stony Brook, NY (SBNY on north shore of Long Island) during the 2009-2012 winter seasons. Surface microphysical measurements at SBNY were taken every 15 to 30 minutes using a stereo microscope and camera, and snow depth and snow density were also recorded. During these storms, a vertically-pointing Ku-band radar was used to observe the vertical evolution of reflectivity and Doppler vertical velocities. A Particle Size and Velocity (PARSIVEL) disdrometer was also used to measure the surface size distribution and fall speeds of snow at SBNY. For the 15 cases at SBNY, the WSM6, Morrison (MORR), Thompson (THOM2), and Stony Brook (SBU-YLIN) BMPs were validated. A non-spherical snow assumption (THOM2 and SBU-YLIN) simulated a more realistic distribution of reflectivity than spherical snow assumptions in the WSM6 and MORR schemes. The MORR, WSM6, and SBU-YLIN schemes are comparable to the observed velocity distribution in light and moderate riming periods. The THOM2 is ~0.25 m s-1 too slow with its velocity distribution in these periods. In heavier riming, the vertical Doppler velocities in the WSM6, THOM2, and MORR schemes were ~0.25 m s-1 too slow, while the SBU

  8. Collaborative evaluation and market research converge: an innovative model agricultural development program evaluation in Southern Sudan.

    PubMed

    O'Sullivan, John M; O'Sullivan, Rita

    2012-11-01

    In June and July 2006, a team of outside experts arrived in Yei, Southern Sudan through an AID project to provide support to a local agricultural development project. The team brought evaluation, agricultural marketing, and financial management expertise to the in-country partners looking at steps to rebuild the economy of the war-ravaged region. A partnership of local officials, agricultural development staff, and students worked with the outside team to craft a survey of agricultural traders working between northern Uganda and Southern Sudan, following the steps of a collaborative model. The goal was to create a market directory of use to producers, government officials and others interested in stimulating agricultural trade. The directory of agricultural producers and distributors served as an agricultural development and promotion tool, as did the collaborative process itself.

  9. Evaluation and Applications of the Prediction of Intensity Model Error (PRIME) Model

    NASA Astrophysics Data System (ADS)

    Bhatia, K. T.; Nolan, D. S.; Demaria, M.; Schumacher, A.

    2015-12-01

    Forecasters and end users of tropical cyclone (TC) intensity forecasts would greatly benefit from a reliable expectation of model error to counteract the lack of consistency in TC intensity forecast performance. As a first step towards producing error predictions to accompany each TC intensity forecast, Bhatia and Nolan (2013) studied the relationship between synoptic parameters, TC attributes, and forecast errors. In this study, we build on previous results of Bhatia and Nolan (2013) by testing the ability of the Prediction of Intensity Model Error (PRIME) model to forecast the absolute error and bias of four leading intensity models available for guidance in the Atlantic basin. PRIME forecasts are independently evaluated at each 12-hour interval from 12 to 120 hours during the 2007-2014 Atlantic hurricane seasons. The absolute error and bias predictions of PRIME are compared to their respective climatologies to determine their skill. In addition to these results, we will present the performance of the operational version of PRIME run during the 2015 hurricane season. PRIME verification results show that it can reliably anticipate situations where particular models excel, and therefore could lead to a more informed protocol for hurricane evacuations and storm preparations. These positive conclusions suggest that PRIME forecasts also have the potential to lower the error in the original intensity forecasts of each model. As a result, two techniques are proposed to develop a post-processing procedure for a multimodel ensemble based on PRIME. The first approach is to inverse-weight models using PRIME absolute error predictions (higher predicted absolute error corresponds to lower weights). The second multimodel ensemble applies PRIME bias predictions to each model's intensity forecast and the mean of the corrected models is evaluated. The forecasts of both of these experimental ensembles are compared to those of the equal-weight ICON ensemble, which currently
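    The first post-processing approach described above, inverse-weighting member models by their PRIME-predicted absolute errors, can be sketched in a few lines. The forecast and error values here are hypothetical, and the function name is illustrative:

```python
import numpy as np

def inverse_error_ensemble(forecasts, predicted_abs_errors):
    """Weighted ensemble mean with weights proportional to the inverse of
    each model's predicted absolute error (larger error -> smaller weight)."""
    f = np.asarray(forecasts, float)
    e = np.asarray(predicted_abs_errors, float)
    w = (1.0 / e) / np.sum(1.0 / e)   # normalized inverse-error weights
    return float(np.dot(w, f))

# Hypothetical 48-h intensity forecasts (kt) from four models, with
# PRIME-style predicted absolute errors for each
forecasts = [85.0, 95.0, 90.0, 100.0]
pred_err = [10.0, 5.0, 20.0, 10.0]
print(inverse_error_ensemble(forecasts, pred_err))  # ~93.3, pulled toward the low-error model
```

    The second approach, subtracting each model's predicted bias before averaging, is a one-line variant: average `f - predicted_bias` with equal weights.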

  10. Global Modeling of Tropospheric Chemistry with Assimilated Meteorology: Model Description and Evaluation

    NASA Technical Reports Server (NTRS)

    Bey, Isabelle; Jacob, Daniel J.; Yantosca, Robert M.; Logan, Jennifer A.; Field, Brendan D.; Fiore, Arlene M.; Li, Qin-Bin; Liu, Hong-Yu; Mickley, Loretta J.; Schultz, Martin G.

    2001-01-01

    We present a first description and evaluation of GEOS-CHEM, a global three-dimensional (3-D) model of tropospheric chemistry driven by assimilated meteorological observations from the Goddard Earth Observing System (GEOS) of the NASA Data Assimilation Office (DAO). The model is applied to a 1-year simulation of tropospheric ozone-NOx-hydrocarbon chemistry for 1994, and is evaluated with observations both for 1994 and for other years. It reproduces usually to within 10 ppb the concentrations of ozone observed from the worldwide ozonesonde data network. It simulates correctly the seasonal phases and amplitudes of ozone concentrations for different regions and altitudes, but tends to underestimate the seasonal amplitude at northern midlatitudes. Observed concentrations of NO and peroxyacetylnitrate (PAN) observed in aircraft campaigns are generally reproduced to within a factor of 2 and often much better. Concentrations of HNO3 in the remote troposphere are overestimated typically by a factor of 2-3, a common problem in global models that may reflect a combination of insufficient precipitation scavenging and gas-aerosol partitioning not resolved by the model. The model yields an atmospheric lifetime of methylchloroform (proxy for global OH) of 5.1 years, as compared to a best estimate from observations of 5.5 plus or minus 0.8 years, and simulates H2O2 concentrations observed from aircraft with significant regional disagreements but no global bias. The OH concentrations are approximately 20% higher than in our previous global 3-D model which included an UV-absorbing aerosol. Concentrations of CO tend to be underestimated by the model, often by 10-30 ppb, which could reflect a combination of excessive OH (a 20% decrease in model OH could be accommodated by the methylchloroform constraint) and an underestimate of CO sources (particularly biogenic). 
The model underestimates observed acetone concentrations over the South Pacific in fall by a factor of 3; a missing source

  11. Simulation Modeling and Performance Evaluation of Space Networks

    NASA Technical Reports Server (NTRS)

    Jennings, Esther H.; Segui, John

    2006-01-01

    In space exploration missions, the coordinated use of spacecraft as communication relays increases the efficiency of the endeavors. To conduct trade-off studies of the performance and resource usage of different communication protocols and network designs, JPL designed a comprehensive extendable tool, the Multi-mission Advanced Communications Hybrid Environment for Test and Evaluation (MACHETE). The design and development of MACHETE began in 2000 and is constantly evolving. Currently, MACHETE contains Consultative Committee for Space Data Systems (CCSDS) protocol standards such as Proximity-1, Advanced Orbiting Systems (AOS), Packet Telemetry/Telecommand, Space Communications Protocol Specification (SCPS), and the CCSDS File Delivery Protocol (CFDP). MACHETE uses the Aerospace Corporation's Satellite Orbital Analysis Program (SOAP) to generate the orbital geometry information and contact opportunities. MATLAB scripts provide the link characteristics. At the core of MACHETE is a discrete event simulator, QualNet. Delay Tolerant Networking (DTN) is an end-to-end architecture providing communication in and/or through highly stressed networking environments. Stressed networking environments include those with intermittent connectivity, large and/or variable delays, and high bit error rates. To provide its services, the DTN protocols reside at the application layer of the constituent internets, forming a store-and-forward overlay network. The key capabilities of the bundling protocols include custody-based reliability, the ability to cope with intermittent connectivity, the ability to take advantage of scheduled and opportunistic connectivity, and late binding of names to addresses. In this presentation, we report on the addition of the MACHETE models needed to support DTN, namely the Bundle Protocol (BP) model. To illustrate the use of MACHETE with the additional DTN model, we provide an example simulation to benchmark its performance. 
We demonstrate the use of the DTN protocol
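
The store-and-forward behavior described above — bundles held in custody through link outages and drained during scheduled contacts — can be sketched in a few lines. This is a hedged toy illustration, not MACHETE, QualNet, or the CCSDS BP implementation; all bundle names, contact windows, and the forwarding rate are hypothetical.

```python
# Minimal store-and-forward sketch of DTN-style bundle relaying under
# intermittent connectivity. Names, contact times, and rates are
# illustrative assumptions, not part of MACHETE or the BP standard.
from collections import deque

def relay_bundles(arrivals, contact_windows, rate):
    """arrivals: list of (time, bundle_id); contact_windows: list of
    (start, end) when the link is up; rate: bundles forwarded per time
    unit during a contact. Returns {bundle_id: delivery_time}."""
    queue = deque()
    delivered = {}
    events = sorted(arrivals)
    for start, end in sorted(contact_windows):
        # Take custody of every bundle that arrived before this contact opens.
        while events and events[0][0] <= start:
            queue.append(events.pop(0)[1])
        t = start
        while queue and t < end:
            delivered[queue.popleft()] = t   # forward in FIFO order
            t += 1.0 / rate
    return delivered

# Bundles b1/b2 arrive while the link is down; b1 goes out in the short
# first contact, b2 and the later b3 wait for the second contact.
deliveries = relay_bundles(
    arrivals=[(0.0, "b1"), (1.0, "b2"), (6.0, "b3")],
    contact_windows=[(5.0, 5.5), (10.0, 12.0)],
    rate=2.0)
```

The example shows the defining DTN property: end-to-end delivery succeeds even though no contemporaneous path ever exists.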

  12. Evaluation of probiotic treatment in a neonatal animal model.

    PubMed

    Lee, D J; Drongowski, R A; Coran, A G; Harmon, C M

    2000-01-01

    The clinical use of probiotic agents such as enteral Lactobacillus to enhance intestinal defense against potential luminal pathogens has been tested in vivo; however, an understanding of the mechanisms responsible for the observed protection is lacking. The purpose of this study was to evaluate the effects of Lactobacillus on bacterial translocation (BT) in a neonatal animal model. Newborn New Zealand white rabbit pups were enterally fed a 10% Formulac solution inoculated with or without a 10^8 suspension of ampicillin-resistant Escherichia coli K1 (E. coli K1A) and/or Lactobacillus casei GG (Lacto GG). Pups received either no bacteria (n = 10), Lacto GG (n = 8), E. coli K1A (n = 26), or a combination of Lacto GG and E. coli K1A (n = 33). On day 3, representative tissue specimens from the mesenteric lymph nodes (MLN), spleen (SPL), and liver (LIV) were aseptically harvested, in addition to a small-bowel (SB) sample that was rinsed to remove luminal contents. The specimens were then cultured in organism-specific media. Statistical analysis was by one-way ANOVA, with P values less than 0.05 considered significant. Neonatal rabbits receiving Lacto GG-supplemented formula exhibited a 25% decrease (P < 0.05) in small-bowel colonization by E. coli K1A. In addition, Lacto GG decreased the frequency of extraintestinal BT by 46% (P < 0.05), 61% (P < 0.05), and 23%, respectively, in the MLN, SPL, and LIV. We have shown that enterally administered Lacto GG decreases the frequency of E. coli K1A translocation in a neonatal rabbit model. These results may have significant implications for the treatment of BT and sepsis in the human neonate and provide a model for further studies.
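
The one-way ANOVA used above can be computed directly from group means and within-group deviations. The sketch below implements the F statistic from scratch; the two groups of values are hypothetical illustration data, not the study's translocation counts.

```python
# One-way ANOVA F statistic, computed from first principles:
# F = (between-group mean square) / (within-group mean square).
import numpy as np

def one_way_anova_f(groups):
    """Return the F statistic for k independent groups."""
    all_vals = np.concatenate(groups)
    grand = all_vals.mean()
    k, n = len(groups), all_vals.size
    ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(((np.asarray(g) - np.mean(g)) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical per-animal scores for a control and a treated group.
control = np.array([8.0, 9.0, 7.5, 8.5])
treated = np.array([4.0, 5.0, 4.5, 3.5])
f_stat = one_way_anova_f([control, treated])
```

A large F (compared against the F distribution with k-1 and n-k degrees of freedom) corresponds to the P < 0.05 criterion the study applied.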

  13. Analysis and evaluation of channel models: simulations of alamethicin.

    PubMed Central

    Tieleman, D Peter; Hess, Berk; Sansom, Mark S P

    2002-01-01

    Alamethicin is an antimicrobial peptide that forms stable channels with well-defined conductance levels. We have used extended molecular dynamics simulations of alamethicin bundles consisting of 4, 5, 6, 7, and 8 helices in a palmitoyl-oleoyl-phosphatidylcholine bilayer to evaluate and analyze channel models and to link the models to the experimentally measured conductance levels. Our results suggest that four helices do not form a stable water-filled channel and might not even form a stable intermediate. The lowest measurable conductance level is likely to correspond to the pentamer. At higher aggregation numbers the bundles become less symmetrical. Water properties inside the different-sized bundles are similar. The hexamer is the most stable model, with a stability comparable to simulations based on crystal structures. The simulation was extended from 4 to 20 ns, or several times the mean passage time of an ion. Essential dynamics analyses were used to test the hypothesis that correlated motions of the helical bundles account for high-frequency noise observed in open channel measurements. In a 20-ns simulation of a hexameric alamethicin bundle, the main motions are those of individual helices, not of the bundle as a whole. A detailed comparison of simulations using different methods to treat long-range electrostatic interactions (a twin range cutoff, particle mesh Ewald, and a twin range cutoff combined with a reaction field correction) shows that water orientation inside the alamethicin channels is sensitive to the algorithms used. In all cases, water ordering due to the protein structure is strong, although the exact profile changes somewhat. Adding an extra 4-nm layer of water only changes the water ordering slightly in the case of particle mesh Ewald, suggesting that periodicity artifacts for this system are not serious. PMID:12414676
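
"Essential dynamics" is, at its core, principal component analysis of the mean-free coordinate covariance matrix of a trajectory. The sketch below runs that analysis on a synthetic trajectory dominated by one collective mode; it is an illustration of the method only, not real alamethicin MD data.

```python
# Essential dynamics (PCA) sketch: diagonalize the covariance of a
# mean-free coordinate trajectory and ask how much variance the first
# mode captures. Trajectory here is synthetic and illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_coords = 200, 30          # e.g. 10 atoms x 3 Cartesian coords

# Build a trajectory dominated by one collective mode plus small noise.
mode = rng.standard_normal(n_coords)
mode /= np.linalg.norm(mode)
traj = np.outer(rng.standard_normal(n_frames) * 3.0, mode)
traj += 0.1 * rng.standard_normal((n_frames, n_coords))

x = traj - traj.mean(axis=0)          # remove the average structure
cov = x.T @ x / (n_frames - 1)
evals, evecs = np.linalg.eigh(cov)    # eigh returns ascending eigenvalues
evals = evals[::-1]                   # largest (most "essential") first
frac = evals[0] / evals.sum()         # variance captured by mode 1
```

In the paper's analysis the analogous question is whether the leading modes describe whole-bundle motion or independent helix motion.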

  14. MODELING AND BIOPHARMACEUTICAL EVALUATION OF SEMISOLID SYSTEMS WITH ROSEMARY EXTRACT.

    PubMed

    Ramanauskiene, Kristina; Zilius, Modestas; Kancauskas, Marius; Juskaite, Vaida; Cizinauskas, Vytis; Inkeniene, Asta; Petrikaite, Vilma; Rimdeika, Rytis; Briedis, Vitalis

    2016-01-01

    The scientific literature provides numerous studies supporting the antioxidant effects of rosemary, which protect the body's cells against reactive oxygen species and their negative impact. Ethanol rosemary extracts were produced by the maceration method. To assess the biological activity of the rosemary extracts, antioxidant and antimicrobial activity tests were performed. Antimicrobial activity tests revealed that G+ microorganisms are most sensitive to liquid rosemary extract, while G- microorganisms are most resistant to it. For the experiments, five types of semisolid systems were modeled: hydrogel, oleogel, absorption-hydrophobic ointment, oil-in-water-type cream, and water-in-oil-type cream, each containing rosemary extract as an active ingredient. Study results show that the liquid rosemary extract was distributed evenly in the aqueous phase of the water-in-oil-type system, forming stable emulsion systems. The research aim was to model semisolid preparations with liquid rosemary extract, determine the influence of excipients on their quality, and perform in vitro studies of the release of active ingredients and of antimicrobial activity. It was found that the oil-in-water-type gel-cream has antimicrobial activity against Staphylococcus epidermidis bacteria and the Candida albicans fungus, while the hydrogel affected only Candida albicans. According to the results of the biopharmaceutical study, the modeled semisolid systems with rosemary extract can be arranged in ascending order of the release of phenolic compounds from the forms: water-in-oil-type cream < absorption-hydrophobic ointment < Pionier PLW oleogel < oil-in-water-type eucerin cream < hydrogel < oil-in-water-type gel-cream. Study results showed that the oil-in-water-type gel-cream is the most suitable vehicle for liquid rosemary extract used as an active ingredient.

  15. EVALUATION OF AN IN VITRO TOXICOGENETIC MOUSE MODEL FOR HEPATOTOXICITY

    PubMed Central

    Martinez, Stephanie M.; Bradford, Blair U.; Soldatow, Valerie Y.; Kosyk, Oksana; Sandot, Amelia; Witek, Rafal; Kaiser, Robert; Stewart, Todd; Amaral, Kirsten; Freeman, Kimberly; Black, Chris; LeCluyse, Edward L.; Ferguson, Stephen S.; Rusyn, Ivan

    2010-01-01

    Numerous studies suggest that a genetically diverse mouse population may be useful as an animal model to understand and predict toxicity in humans. We hypothesized that cultures of hepatocytes obtained from a large panel of inbred mouse strains can produce data indicative of inter-individual differences in in vivo responses to hepatotoxicants. In order to test this hypothesis and establish whether in vitro studies using cultured hepatocytes from genetically distinct mouse strains are feasible, we aimed to determine whether viable cells may be isolated from different mouse inbred strains, evaluate the reproducibility of cell yield, viability and functionality over subsequent isolations, and assess the utility of the model for toxicity screening. Hepatocytes were isolated from 15 strains of mice (A/J, B6C3F1, BALB/cJ, C3H/HeJ, C57BL/6J, CAST/EiJ, DBA/2J, FVB/NJ, BALB/cByJ, AKR/J, MRL/MpJ, NOD/LtJ, NZW/LacJ, PWD/PhJ and WSB/EiJ, males) and cultured for up to 7 days in traditional two-dimensional culture. Cells from the B6C3F1, C57BL/6J, and NOD/LtJ strains were treated with acetaminophen, WY-14,643 or rifampin, and concentration-response effects on viability and function were established. Our data suggest that high yield and viability can be achieved across a panel of strains. Cell function and expression of key liver-specific genes of hepatocytes isolated from different strains and cultured under standardized conditions are comparable. Strain-specific responses to toxicant exposure have been observed in cultured hepatocytes, and these experiments open new opportunities for further development of in vitro models of hepatotoxicity in a genetically diverse population. PMID:20869979
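
Concentration-response relationships of the kind established here are commonly summarized with a Hill (sigmoidal) curve and an EC50. The sketch below fits such a curve by brute-force grid search; the concentrations, EC50, and slope are hypothetical toy values, not the study's acetaminophen data.

```python
# Fit a Hill concentration-response curve to toy viability data via a
# minimal grid search (no external fitting library required).
import numpy as np

def hill(conc, ec50, n):
    """Fraction of control viability remaining at a given concentration."""
    return 1.0 / (1.0 + (conc / ec50) ** n)

conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])   # mM, illustrative
viab = hill(conc, ec50=5.0, n=1.5)                   # noise-free toy data

# Brute-force search over (EC50, Hill slope); sum-of-squares objective.
ec50_grid = np.linspace(0.5, 20.0, 200)
n_grid = np.linspace(0.5, 4.0, 100)
best = min(((np.sum((hill(conc, e, h) - viab) ** 2), e, h)
            for e in ec50_grid for h in n_grid))
sse, ec50_fit, n_fit = best
```

Strain-specific toxicity would then show up as strain-to-strain shifts in the fitted EC50.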

  16. Model-based damage evaluation of layered CFRP structures

    NASA Astrophysics Data System (ADS)

    Munoz, Rafael; Bochud, Nicolas; Rus, Guillermo; Peralta, Laura; Melchor, Juan; Chiachío, Juan; Chiachío, Manuel; Bond, Leonard J.

    2015-03-01

    An ultrasonic evaluation technique for damage identification of layered CFRP structures is presented. This approach relies on a model-based estimation procedure that combines experimental data and simulation of ultrasonic damage-propagation interactions. The CFRP structure, a [0/90]4s lay-up, has been tested in an immersion through-transmission experiment, where a scan has been performed on a damaged specimen. Most ultrasonic techniques in industrial practice consider only a few features of the received signals, namely, time of flight, amplitude, attenuation, frequency contents, and so forth. In this case, once signals are captured, an algorithm is used to reconstruct the complete signal waveform and extract the unknown damage parameters by means of modeling procedures. A linear version of the data processing has been performed, where only the Young's modulus has been monitored; in a second, nonlinear version, the first-order nonlinear coefficient β was incorporated to test the possibility of detecting early damage. The aforementioned physical simulation models are solved by the Transfer Matrix formalism, which has been extended from the linear case to include nonlinear harmonic generation. The damage parameter search strategy is based on minimizing the mismatch between the captured and simulated signals in the time domain in an automated way using Genetic Algorithms. By processing all scanned locations, a C-scan of the parameter of each layer can be reconstructed, obtaining information describing the state of each layer and each interface. Damage can be located and quantified in terms of changes in the selected parameter over a measurable extent. In the case of the first-order nonlinear coefficient, evidence is provided of higher sensitivity to damage than imaging the linearly estimated Young's modulus.
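
The inversion idea — tune a model parameter until the simulated waveform matches the captured one — can be sketched with a toy forward model and a simple evolutionary search. The attenuation-like parameter and signal model below are illustrative stand-ins, not the paper's Transfer Matrix simulation or its Genetic Algorithm configuration.

```python
# Model-based inversion sketch: recover an unknown material parameter by
# minimizing the time-domain mismatch between a "captured" signal and a
# simulated one, using a toy (1+lambda) evolutionary search.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 400)

def simulate(param):
    """Toy forward model: param acts as an attenuation time constant."""
    return np.exp(-t / param) * np.sin(2 * np.pi * 3.0 * t)

captured = simulate(0.3)              # pretend measurement, true param 0.3

def mismatch(param):
    return np.sum((simulate(param) - captured) ** 2)

best = rng.uniform(0.1, 1.0)          # random initial guess
step = 0.2
for _ in range(200):
    # Mutate around the incumbent; keep the best candidate if it improves.
    trials = np.clip(best + step * rng.standard_normal(20), 0.1, 1.0)
    cand = min(trials, key=mismatch)
    if mismatch(cand) < mismatch(best):
        best = cand
    step *= 0.97                      # shrink the mutation scale
```

Running this at every scan location is what turns per-pixel parameter estimates into a C-scan of the selected parameter.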

  17. Evaluation of interstitial protein delivery in multicellular layers model.

    PubMed

    Kim, Soo-Yeon; Kim, Tae Hyung; Choi, Jong Hoon; Lee, Kang Choon; Park, Ki Dong; Lee, Seung-Jin; Kuh, Hyo-Jeong

    2012-03-01

    The limited efficacy of anticancer protein drugs is related to their poor distribution in tumor tissue. We examined interstitial delivery of four model proteins of different molecular size and bioaffinity in multicellular layers (MCL) of human cancer cells. Model proteins were tumor necrosis factor-related apoptosis-inducing ligand (TRAIL), cetuximab, RNase A, and IgG. MCLs were cultured in Transwell inserts, exposed to drugs, then cryo-sectioned for image acquisition using fluorescence microscopy (fluorescent dye-labeled TRAIL, RNase A, IgG) or immunohistochemistry (cetuximab). TRAIL and cetuximab showed partial penetration into MCLs, whereas RNase A and IgG showed insignificant penetration. At a 10-fold higher dose, a significant increase in penetration was observed for IgG only, while cetuximab showed an intense accumulation limited to the front layers. PEGylated TRAIL and RNase A formulated in a heparin-Pluronic (HP) nanogel showed significantly improved penetration, attributable to increased stability and extracellular matrix binding, respectively. IgG penetration was significantly enhanced with paclitaxel pretreatment as a penetration enhancer. The present study suggests that MCL culture may be useful for evaluating protein delivery in the tumor interstitium. Four model proteins showed limited interstitial penetration in MCL cultures. Bioaffinity, rather than molecular size, seems to have a positive effect on tissue penetration, although high binding affinity may lead to sequestration in the front cell layers. Polymer conjugation and nanoformulation, such as PEGylation and HP nanogel, or use of penetration enhancers are potential strategies to increase interstitial delivery of anticancer protein drugs.
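
The shallow penetration profiles described above are qualitatively captured by a diffusion-with-uptake model: diffusion carries the protein inward while binding/consumption removes it, giving a penetration depth of roughly sqrt(D/k). The finite-difference sketch below is a generic illustration of that mechanism with assumed parameters, not a model fitted to the MCL data.

```python
# 1-D reaction-diffusion sketch of interstitial penetration: diffusion
# into a tissue slab with first-order uptake (rate k). Parameters are
# illustrative assumptions.
import numpy as np

n, dx, dt = 100, 1.0, 0.2            # grid cells (~um), step sizes
D, k = 1.0, 0.05                     # diffusivity and uptake rate (assumed)
c = np.zeros(n)                      # concentration profile
for _ in range(20000):               # explicit time stepping to steady state
    c[0] = 1.0                       # constant source at the exposed face
    lap = np.zeros(n)
    lap[1:-1] = c[2:] - 2 * c[1:-1] + c[:-2]
    lap[-1] = c[-2] - c[-1]          # no-flux boundary at the far side
    c = c + dt * (D * lap / dx**2 - k * c)
c[0] = 1.0
```

At steady state the profile decays roughly exponentially with depth scale sqrt(D/k); stronger binding (larger k) gives shallower penetration, matching the front-layer sequestration observed for high-affinity proteins.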

  18. Using hybrid models to support the development of organizational evaluation capacity: a case narrative.

    PubMed

    Bourgeois, Isabelle; Hart, Rebecca E; Townsend, Shannon H; Gagné, Marc

    2011-08-01

    The ongoing need for public sector organizations to enhance their internal evaluation capacity is increasingly resulting in the use of hybrid evaluation project models, where internal evaluators work with external contracted evaluators to complete evaluative work. This paper first seeks to identify what is currently known about internal evaluation through a synthesis of the literature in this area. It then presents a case narrative illustrating how internal and external evaluation approaches may be used together to strengthen an evaluation project and to develop the evaluation capacity of the organization. Lessons learned include the need to integrate internal and external resources throughout the evaluation and to clarify expectations at the outset of the project.

  19. A multimedia fate and chemical transport modeling system for pesticides: II. Model evaluation

    NASA Astrophysics Data System (ADS)

    Li, Rong; Scholtz, M. Trevor; Yang, Fuquan; Sloan, James J.

    2011-07-01

    Pesticides have adverse health effects and can be transported over long distances to contaminate sensitive ecosystems. To address problems caused by environmental pesticides, we developed a multimedia multi-pollutant modeling system, and here we present an evaluation of the model by comparing modeled results against measurements. The modeled toxaphene air concentrations for two sites, in Louisiana (LA) and Michigan (MI), are in good agreement with measurements (average concentrations agree to within a factor of 2). Because the residue inventory showed no soil residues at these two sites, resulting in no emissions, the concentrations must be caused by transport; the good agreement between the modeled and measured concentrations suggests that the model simulates atmospheric transport accurately. Compared to the LA and MI sites, the measured air concentrations at two other sites having toxaphene soil residues leading to emissions, in Indiana and Arkansas, showed more pronounced seasonal variability (higher in warmer months); this pattern was also captured by the model. The model-predicted toxaphene concentration fraction on particles (0.5-5%) agrees well with measurement-based estimates (3% or 6%). There is also good agreement between modeled and measured dry (1:1) and wet (to better than a factor of 2) depositions in Lake Ontario. Additionally, this study identified erroneous soil residue data around a site in Texas in a published US toxaphene residue inventory, which led to very low modeled air concentrations at this site. Except for the erroneous soil residue data around this site, the good agreement between the modeled and observed results implies that both the US and Mexican toxaphene soil residue inventories are reasonably good. This agreement also suggests that the modeling system is capable of simulating the important physical and chemical processes in the multimedia compartments.
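
The "agree to within a factor of 2" criterion used above is a standard model-evaluation metric, often called FAC2: the fraction of model/observation pairs whose ratio lies in [0.5, 2]. The sketch below computes it on illustrative numbers, not the toxaphene data.

```python
# FAC2: fraction of modeled values within a factor of 2 of observations.
import numpy as np

def fac2(modeled, observed):
    ratio = np.asarray(modeled, float) / np.asarray(observed, float)
    return np.mean((ratio >= 0.5) & (ratio <= 2.0))

obs = np.array([1.0, 2.0, 4.0, 8.0])
mod = np.array([1.5, 1.2, 9.0, 7.0])   # third pair is off by more than 2x
score = fac2(mod, obs)
```

A FAC2 of 1.0 would mean every modeled value is within a factor of 2 of its observation; here one of four pairs fails the criterion.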

  20. MIRAGE: Model Description and Evaluation of Aerosols and Trace Gases

    SciTech Connect

    Easter, Richard C.; Ghan, Steven J.; Zhang, Yang; Saylor, Rick D.; Chapman, Elaine G.; Laulainen, Nels S.; Abdul-Razzak, Hayder; Leung, Lai-Yung R.; Bian, Xindi; Zaveri, Rahul A.

    2004-10-27

    The MIRAGE (Model for Integrated Research on Atmospheric Global Exchanges) modeling system, designed to study the impacts of anthropogenic aerosols on the global environment, is described. MIRAGE consists of a chemical transport model coupled on line with a global climate model. The chemical transport model simulates trace gases, aerosol number, and aerosol chemical component mass [sulfate, MSA, organic matter, black carbon (BC), sea salt, mineral dust] for four aerosol modes (Aitken, accumulation, coarse sea salt, coarse mineral dust) using the modal aerosol dynamics approach. Cloud-phase and interstitial aerosol are predicted separately. The climate model, based on the CCM2, has physically-based treatments of aerosol direct and indirect forcing. Stratiform cloud water and droplet number are simulated using a bulk microphysics parameterization that includes aerosol activation. Aerosol and trace gas species simulated by MIRAGE are presented and evaluated using surface and aircraft measurements. Surface-level SO2 in N. American and European source regions is higher than observed. SO2 above the boundary layer is in better agreement with observations, and surface-level SO2 at marine locations is somewhat lower than observed. Comparison with other models suggests insufficient SO2 dry deposition; increasing the deposition velocity improves simulated SO2. Surface-level sulfate in N. American and European source regions is in good agreement with observations, although the seasonal cycle in Europe is stronger than observed. Surface-level sulfate at high-latitude and marine locations, and sulfate above the boundary layer, are higher than observed. This is attributed primarily to insufficient wet removal; increasing the wet removal improves simulated sulfate at remote locations and aloft. Because of the high sulfate bias, radiative forcing estimates for anthropogenic sulfur in Ghan et al. [2001c] are probably too high. Surface-level DMS is ~40% higher than observed

  1. Wind-blown sand on beaches: an evaluation of models

    NASA Astrophysics Data System (ADS)

    Sherman, Douglas J.; Jackson, Derek W. T.; Namikas, Steven L.; Wang, Jinkang

    1998-03-01

    Five models for predicting rates of aeolian sand transport were evaluated using empirical data obtained from field experiments conducted in April 1994 at a beach on Inch Spit, Co. Kerry, Republic of Ireland. Measurements were made of vertical wind profiles (to derive shear velocity estimates), beach slope, and rates of sand transport. Sediment samples were taken to assess characteristics of grain size and surface moisture content. Estimates of threshold shear velocity were derived using grain size data. After parsing the field data on the basis of the quality of shear velocity estimation and the occurrence of blowing sand, 51 data sets describing rates of sand transport and environmental conditions were retained. Mean grain diameter was 0.17 mm. Surface slopes ranged from 0.02 on the foreshore to about 0.11 near the dune toe. Mean shear velocities ranged from 0.23 m s^-1 (just above the observed transport threshold) to 0.65 m s^-1. Rates of transport ranged from 0.02 kg m^-1 h^-1 to more than 80 kg m^-1 h^-1. These data were used as input to the models of Bagnold [Bagnold, R.A., 1936. The Movement of Desert Sand. Proc. R. Soc. London, A157, 594-620], Kawamura [Kawamura, R., 1951. Study of Sand Movement by Wind. Translated (1965) as University of California Hydraulics Engineering Laboratory Report HEL 2-8, Berkeley], Zingg [Zingg, A.W., 1953. Wind tunnel studies of the movement of sedimentary material. Proc. 5th Hydraulics Conf. Bull. 34, Iowa City, Inst. of Hydraulics, pp. 111-135], Kadib [Kadib, A.A., 1965. A function for sand movement by wind. University of California Hydraulics Engineering Laboratory Report HEL 2-8, Berkeley], and Lettau and Lettau [Lettau, K. and Lettau, H., 1977. Experimental and Micrometeorological Field Studies of Dune Migration. In: K. Lettau and H. Lettau (Eds.), Exploring the World's Driest Climate. University of Wisconsin-Madison, IES Report 101, pp. 110-147]. 
Correction factors to adjust predictions of the rate of transport to account
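
Two of the models compared above have compact textbook forms: Bagnold's cubic law and the Lettau-Lettau threshold form. The sketch below uses the commonly published coefficients (C ≈ 1.8 and C ≈ 4.2, with Bagnold's 0.25 mm reference grain size); treat these constants, and the example threshold shear velocity, as assumptions when applying the formulas elsewhere.

```python
# Aeolian sand transport rate sketches: Bagnold (1936) and
# Lettau & Lettau (1977), in their common textbook forms.
import numpy as np

RHO_AIR = 1.22      # air density, kg m^-3
G = 9.81            # gravitational acceleration, m s^-2
D_REF = 0.25e-3     # Bagnold's reference grain diameter, m

def bagnold_q(u_star, d, c=1.8):
    """Bagnold: q = C sqrt(d/D) (rho/g) u*^3  [kg m^-1 s^-1]."""
    return c * np.sqrt(d / D_REF) * (RHO_AIR / G) * u_star ** 3

def lettau_q(u_star, u_star_t, d, c=4.2):
    """Lettau-Lettau: q = C sqrt(d/D) (rho/g) u*^2 (u* - u*t)."""
    return c * np.sqrt(d / D_REF) * (RHO_AIR / G) * u_star ** 2 * (u_star - u_star_t)

d = 0.17e-3                       # mean grain diameter from the field data, m
q_b = bagnold_q(0.4, d)           # at u* = 0.4 m/s (within the observed range)
q_l = lettau_q(0.4, 0.22, d)      # illustrative threshold u*t = 0.22 m/s
```

The cubic dependence on shear velocity is why small errors in u* estimation (the "parsing" step above) matter so much for predicted rates.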

  2. Physically-based landslide susceptibility modelling: geotechnical testing and model evaluation issues

    NASA Astrophysics Data System (ADS)

    Marchesini, Ivan; Mergili, Martin; Schneider-Muntau, Barbara; Alvioli, Massimiliano; Rossi, Mauro; Guzzetti, Fausto

    2015-04-01

    We used the software r.slope.stability for physically-based landslide susceptibility modelling in the 90 km² Collazzone area, Central Italy, exploiting a comprehensive set of lithological, geotechnical, and landslide inventory data. The model results were evaluated against the inventory. r.slope.stability is a GIS-supported tool for modelling shallow and deep-seated slope stability and slope failure probability at comparatively broad scales. Developed as a raster module of the GRASS GIS software, r.slope.stability evaluates the slope stability for a large number of randomly selected ellipsoidal potential sliding surfaces. The bottom of the soil (for shallow slope stability) or the bedding planes of lithological layers (for deep-seated slope stability) are taken as potential sliding surfaces by truncating the ellipsoids, allowing for the analysis of relatively complex geological structures. To account for the uncertain geotechnical and geometric parameters, r.slope.stability computes the slope failure probability by testing multiple parameter combinations sampled deterministically or stochastically, and evaluating the ratio between the number of parameter combinations yielding a factor of safety below 1 and the total number of tested combinations. Any single raster cell may be intersected by multiple sliding surfaces, each associated with a slope failure probability. The most critical sliding surface is relevant for each pixel. Intensive use of r.slope.stability in the Collazzone area has opened up two questions elaborated in the present work: (i) To what extent does a larger number of geotechnical tests help to better constrain the geotechnical characteristics of the study area and, consequently, to improve the model results? The ranges of values of cohesion and angle of internal friction obtained through 13 direct shear tests correspond remarkably well to the range of values suggested by a geotechnical textbook. We elaborate how far an increased number of
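
The failure-probability idea above — sample uncertain parameters, compute a factor of safety per draw, report the fraction below 1 — can be sketched in a few lines. For transparency this uses the simple infinite-slope FS formula rather than r.slope.stability's ellipsoidal sliding surfaces, and all parameter ranges are illustrative assumptions.

```python
# Monte Carlo slope failure probability with the infinite-slope model:
# FS = (c + gamma*z*cos^2(beta)*tan(phi)) / (gamma*z*sin(beta)*cos(beta)).
import numpy as np

rng = np.random.default_rng(42)

def infinite_slope_fs(c, phi_deg, gamma=19e3, z=2.0, beta_deg=30.0):
    """Factor of safety; c in Pa, unit weight gamma in N/m^3, depth z in m."""
    beta = np.radians(beta_deg)
    phi = np.radians(phi_deg)
    resist = c + gamma * z * np.cos(beta) ** 2 * np.tan(phi)
    drive = gamma * z * np.sin(beta) * np.cos(beta)
    return resist / drive

n = 20000
cohesion = rng.uniform(2e3, 10e3, n)   # Pa, illustrative range
phi = rng.uniform(18.0, 32.0, n)       # degrees, illustrative range
fs = infinite_slope_fs(cohesion, phi)
p_fail = np.mean(fs < 1.0)             # fraction of draws with FS < 1
```

A sanity check built into the formula: a cohesionless slope with friction angle equal to the slope angle sits exactly at FS = 1.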

  3. Critical evaluation and modeling of algal harvesting using dissolved air flotation. DAF Algal Harvesting Modeling

    SciTech Connect

    Zhang, Xuezhi; Hewson, John C.; Amendola, Pasquale; Reynoso, Monica; Sommerfeld, Milton; Chen, Yongsheng; Hu, Qiang

    2014-07-14

    In our study, Chlorella zofingiensis harvesting by dissolved air flotation (DAF) was critically evaluated with regard to algal concentration, culture conditions, type and dosage of coagulants, and recycle ratio. Harvesting efficiency increased with coagulant dosage and leveled off at 81%, 86%, 91%, and 87% when chitosan, Al3+, Fe3+, and cetyl trimethylammonium bromide (CTAB) were used at dosages of 70, 180, 250, and 500 mg g-1, respectively. The DAF efficiency-coagulant dosage relationship changed with algal culture conditions. Evaluation of the influence of the initial algal concentration and recycle ratio revealed that, under conditions typical for algal harvesting, the number of bubbles may be insufficient. A DAF algal harvesting model was developed to explain this observation by introducing mass-based floc size distributions and a bubble limitation into the white water blanket model. Moreover, the model revealed the importance of coagulation to increase floc-bubble collision and attachment, and the preferential interaction of bubbles with larger flocs, which limited the availability of bubbles to the smaller sized flocs. The harvesting efficiencies predicted by the model agree reasonably with experimental data obtained at different Al3+ dosages, algal concentrations, and recycle ratios. Based on this modeling, critical parameters for efficient algal harvesting were identified.
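
The "rises with dosage, then levels off" behavior reported above can be described by a simple saturation curve. The sketch below uses the chitosan plateau (81%) as the asymptote; the exponential form and the half-saturation-like constant are illustrative assumptions, not the paper's white water blanket model or a fitted result.

```python
# Saturating efficiency-dosage sketch: E(dose) = E_max * (1 - exp(-dose/k)).
import numpy as np

def daf_efficiency(dose, e_max=0.81, k=20.0):
    """Harvesting efficiency as a saturating function of coagulant dosage.
    e_max from the reported chitosan plateau; k is an assumed scale."""
    return e_max * (1.0 - np.exp(-dose / k))

doses = np.array([0.0, 10.0, 30.0, 70.0, 150.0])   # mg coagulant per g algae
eff = daf_efficiency(doses)
```

By 70 mg g^-1 the curve has essentially reached its plateau, mirroring the reported leveling-off dosage for chitosan.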

  4. Critical evaluation and modeling of algal harvesting using dissolved air flotation. DAF Algal Harvesting Modeling

    DOE PAGES

    Zhang, Xuezhi; Hewson, John C.; Amendola, Pasquale; ...

    2014-07-14

    In our study, Chlorella zofingiensis harvesting by dissolved air flotation (DAF) was critically evaluated with regard to algal concentration, culture conditions, type and dosage of coagulants, and recycle ratio. Harvesting efficiency increased with coagulant dosage and leveled off at 81%, 86%, 91%, and 87% when chitosan, Al3+, Fe3+, and cetyl trimethylammonium bromide (CTAB) were used at dosages of 70, 180, 250, and 500 mg g-1, respectively. The DAF efficiency-coagulant dosage relationship changed with algal culture conditions. Evaluation of the influence of the initial algal concentration and recycle ratio revealed that, under conditions typical for algal harvesting, the number of bubbles may be insufficient. A DAF algal harvesting model was developed to explain this observation by introducing mass-based floc size distributions and a bubble limitation into the white water blanket model. Moreover, the model revealed the importance of coagulation to increase floc-bubble collision and attachment, and the preferential interaction of bubbles with larger flocs, which limited the availability of bubbles to the smaller sized flocs. The harvesting efficiencies predicted by the model agree reasonably with experimental data obtained at different Al3+ dosages, algal concentrations, and recycle ratios. Based on this modeling, critical parameters for efficient algal harvesting were identified.

  5. Evaluation model for the implementation results of mine law based on neural network

    NASA Astrophysics Data System (ADS)

    Gu, Tao; Li, Xu

    2010-04-01

    To evaluate the implementation results of mine safety production law, an evaluation model based on a neural network is presented. In this model, 63 indicators that can describe the mine law effectively are proposed. The evaluation system was developed using the model and the 63 indicators. The evaluation results show that the proposed method has high accuracy: the model can effectively estimate the score of a mine for its compliance with the safety law, and the estimated results are scientifically credible and impartial.
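
The basic scheme — 63 indicator values in, one compliance score out — can be sketched as a small feed-forward network. The architecture, weights, and input here are random placeholders, since the paper does not specify its network layout; a real system would train the weights on scored examples.

```python
# Minimal feed-forward scoring sketch: 63 indicators -> hidden layer ->
# a single score in (0, 1). Weights are random placeholders, untrained.
import numpy as np

rng = np.random.default_rng(7)
n_indicators = 63

w1 = rng.standard_normal((n_indicators, 16)) * 0.1
b1 = np.zeros(16)
w2 = rng.standard_normal((16, 1)) * 0.1
b2 = np.zeros(1)

def evaluate_mine(indicators):
    """Forward pass with tanh hidden units and a sigmoid output score."""
    h = np.tanh(indicators @ w1 + b1)
    return float(1.0 / (1.0 + np.exp(-(h @ w2 + b2))))

# Hypothetical normalized indicator values for one mine.
score = evaluate_mine(rng.uniform(0.0, 1.0, n_indicators))
```

The sigmoid output keeps the score bounded, which is convenient for mapping onto a rating scale.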

  6. Radar derived storm dynamics for cloud-resolving model evaluation and climate model parameterization development

    NASA Astrophysics Data System (ADS)

    Collis, S. M.; May, P. T.; Protat, A.; Fridlind, A. M.; Ackerman, A. S.; Williams, C. R.; Varble, A.; Zipser, E. J.

    2010-12-01

    The Tropical Warm Pool-International Cloud Experiment (TWP-ICE) was conducted in and around the US Department of Energy's Atmospheric Radiation Measurement (ARM) Climate Research Facility (ACRF) Darwin site during January and February 2006. The field program gathered observations that have been used for initializing and driving cloud-resolving models (CRMs, with periodic boundary conditions) and limited-area models (LAMs, with open boundary conditions) for submission to the model intercomparison study, which is organized by the ARM and GEWEX Cloud System Study (GCSS) programs. Measurements also included an extensive set of remotely sensed and in-situ quantities to evaluate model performance, assisting climate model parameterization development. For example, using a combination of operational Doppler radar and CPOL polarimetric research radar data, vector winds have been retrieved in storms for part of the model intercomparison period. This presentation will outline the retrieval technique, show preliminary verification of the retrieved updraft intensities, and showcase model-measurement comparisons with output from the DHARMA cloud-resolving model, focusing on vertical winds, a crucial aspect of simulated storm dynamics that exhibits a high degree of model-to-model variability. Initial comparisons show most model updraft speeds to be substantially higher than those retrieved from radar measurements. Investigations into the impact of sampling and scale differences and the cause of this discrepancy are ongoing, as is the extension of the comparisons to all CRM and LAM submissions. Details on the rollout of the American Recovery and Reinvestment Act-funded precipitation radar infrastructure for ACRF and plans for geophysical retrievals from this new instrumentation will also be presented.

  7. Modeling irrigation-based climate change adaptation in agriculture: Model development and evaluation in Northeast China

    NASA Astrophysics Data System (ADS)

    Okada, Masashi; Iizumi, Toshichika; Sakurai, Gen; Hanasaki, Naota; Sakai, Toru; Okamoto, Katsuo; Yokozawa, Masayuki

    2015-09-01

    Replacing a rainfed cropping system with an irrigated one is widely assumed to be an effective measure for climate change adaptation. However, many agricultural impact studies have not necessarily accounted for the space-time variations in water availability under changing climate and land use. Moreover, many hydrologic and agricultural assessments of climate change impacts are not fully integrated. To overcome this shortcoming, a tool that can simultaneously simulate the dynamic interactions between crop production and water resources in a watershed is essential. Here we propose the regional production and circulation coupled model (CROVER) by embedding the PRYSBI-2 (Process-based Regional Yield Simulator with Bayesian Inference version 2) large-area crop model into the global water resources model (called H08), and apply this model to the Songhua River watershed in Northeast China. The evaluation reveals that the model's performance in capturing the major characteristics of historical change in surface soil moisture, river discharge, actual crop evapotranspiration, and soybean yield relative to the reference data during the interval 1979-2010 is satisfactory. The simulation experiments using the model demonstrated that subregional irrigation management, such as designating the area to which irrigation is primarily applied, has measurable influences on regional crop production in a drought year. This finding suggests that reassessing climate change risk in agriculture with this type of modeling is crucial to avoid overestimating the potential of irrigation-based adaptation.

  8. Logic Models for Program Design, Implementation, and Evaluation: Workshop Toolkit. REL 2015-057

    ERIC Educational Resources Information Center

    Shakman, Karen; Rodriguez, Sheila M.

    2015-01-01

    The Logic Model Workshop Toolkit is designed to help practitioners learn the purpose of logic models, the different elements of a logic model, and the appropriate steps for developing and using a logic model for program evaluation. Topics covered in the sessions include an overview of logic models, the elements of a logic model, an introduction to…

  9. A Program Evaluation Model: Using Bloom's Taxonomy to Identify Outcome Indicators in Outcomes-Based Program Evaluations

    ERIC Educational Resources Information Center

    McNeil, Rita C.

    2011-01-01

    Outcomes-based program evaluation is a systematic approach to identifying outcome indicators and measuring results against those indicators. One dimension of program evaluation is assessing the level of learner acquisition to determine if learning objectives were achieved as intended. The purpose of the proposed model is to use Bloom's Taxonomy to…

  10. Field evaluation of an avian risk assessment model

    USGS Publications Warehouse

    Vyas, N.B.; Spann, J.W.; Hulse, C.S.; Borges, S.L.; Bennett, R.S.; Torrez, M.; Williams, B.I.; Leffel, R.

    2006-01-01

    We conducted two laboratory subacute dietary toxicity tests and one outdoor subacute dietary toxicity test to determine the effectiveness of the U.S. Environmental Protection Agency's deterministic risk assessment model for evaluating the potential for adverse effects to birds in the field. We tested technical-grade diazinon and its DZN 50W (50% diazinon active ingredient wettable powder) formulation on Canada goose (Branta canadensis) goslings. Brain acetylcholinesterase activity was measured, and the feathers, skin, feet, and gastrointestinal contents were analyzed for diazinon residues. The dose-response curves showed that diazinon was significantly more toxic to goslings in the outdoor test than in the laboratory tests. The deterministic risk assessment method identified the potential for risk to birds in general, but the factors associated with extrapolating from the laboratory to the field, and from the laboratory test species to other species, resulted in the underestimation of risk to the goslings. The present study indicates that laboratory-based risk quotients should be interpreted with caution.
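    The deterministic screening approach discussed above rests on a risk quotient, the ratio of estimated exposure to a toxicity endpoint, compared against a level of concern. A minimal sketch with illustrative numbers (none of these values come from the study, and levels of concern vary by assessment type):

```python
def risk_quotient(exposure_ppm: float, toxicity_endpoint_ppm: float) -> float:
    """Deterministic risk quotient: estimated dietary exposure divided by a
    toxicity endpoint such as an LC50. Higher RQ means greater presumed risk."""
    return exposure_ppm / toxicity_endpoint_ppm

# Illustrative numbers only: a 60 ppm dietary exposure against a 200 ppm LC50.
rq = risk_quotient(60.0, 200.0)

# Hypothetical screening threshold; actual EPA levels of concern differ by
# scenario (e.g., acute vs. chronic, listed vs. non-listed species).
LEVEL_OF_CONCERN = 0.5
print(f"RQ = {rq:.2f}; exceeds LOC: {rq > LEVEL_OF_CONCERN}")
```

    As the abstract notes, a quotient like this can understate field risk when laboratory endpoints do not transfer to outdoor conditions or to other species.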

  11. Modelling and Evaluation of Spectra in Beam Aided Spectroscopy

    SciTech Connect

    Hellermann, M. G. von; Delabie, E.; Jaspers, R.; Lotte, P.; Summers, H. P.

    2008-10-22

    The evaluation of active beam-induced spectra requires advanced modelling of both active and passive features. Three types of line shapes are addressed in this paper: first, thermal spectra representing Maxwellian distribution functions, described by Gaussian-like line shapes; second, broad-band fast-ion spectra with energies well above local ion temperatures; and finally, the narrow line shapes of the equi-spaced Motional Stark multiplet (MSE) of excited neutral beam particles travelling through the magnetic field confining the plasma. In each case additional line shape broadening caused by Gaussian-like instrument functions is taken into account. Further broadening effects are induced by collision-velocity-dependent effective atomic rates, where the observed spectral shape is the result of a convolution of the emission rate function and the velocity distribution function projected onto the direction of observation. In the case of Beam Emission Spectroscopy, which encompasses the Motional Stark features, line broadening is also caused by the finite angular spread of the injected neutrals and by ripple in the acceleration voltage associated with high-energy neutral beams.
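    The instrument-function broadening described above is a convolution of two line shapes; for two Gaussians the result is again Gaussian with variances adding. A numerical sketch (widths and grid are illustrative, in arbitrary units, not values from the paper):

```python
import numpy as np

def gaussian(x, mu, sigma):
    """Unit-area Gaussian line shape."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

x = np.linspace(-5.0, 5.0, 2001)
dx = x[1] - x[0]

line = gaussian(x, 0.0, 0.8)   # thermal (Maxwellian) Doppler line shape
instr = gaussian(x, 0.0, 0.5)  # Gaussian-like instrument function

# Observed spectrum: discrete approximation of the continuous convolution.
observed = np.convolve(line, instr, mode="same") * dx

# Recover the broadened width from the second moment (mean is zero here).
sigma_obs = np.sqrt(np.sum(observed * x**2 * dx))
# Analytic result for two Gaussians: sqrt(0.8**2 + 0.5**2) ~= 0.943.
print(round(sigma_obs, 3))
```

    The same machinery extends to the rate-weighted convolutions the abstract mentions, where the emission rate function replaces the simple instrument kernel.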

  12. Evaluation of the Actuator Line Model with coarse resolutions

    NASA Astrophysics Data System (ADS)

    Draper, M.; Usera, G.

    2015-06-01

    The aim of the present paper is to evaluate the Actuator Line Model (ALM) at spatial resolutions coarser than what is generally recommended, also using larger time steps. To accomplish this, the ALM has been implemented in the open source code caffa3d.MBRi and validated against experimental measurements from two wind tunnel campaigns (a stand-alone wind turbine and two wind turbines in line, cases A and B respectively), taking into account two spatial resolutions: R/8 and R/15 (R is the rotor radius). A sensitivity analysis in case A was performed in order to gain insight into the influence of the smearing factor (3D Gaussian distribution) and time step size on power and thrust, as well as on the wake, without applying a tip loss correction factor (TLCF), for one tip speed ratio (TSR). It is concluded that as the smearing factor increases or the time step size decreases, the computed power increases, but the velocity deficit is not affected as much. From this analysis, a smearing factor was obtained that precisely reproduces the power coefficient for that TSR without applying a TLCF. Results with this approach were compared with another simulation using a larger smearing factor and applying Prandtl's TLCF, for three values of TSR. It is found that applying the TLCF improves the power estimation and weakens the influence of the smearing factor. Finally, these two alternatives were tested in case B, confirming that conclusion.
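    Prandtl's tip loss correction factor, applied above to the actuator line forces, has a standard closed form; a minimal sketch (the blade count, radii, and inflow angle below are illustrative, not values from the study):

```python
import math

def prandtl_tip_loss(r: float, R: float, B: int, phi: float) -> float:
    """Prandtl tip-loss factor F at local radius r, for rotor radius R,
    B blades, and local inflow angle phi (radians). F -> 1 inboard and
    drops toward 0 at the tip, reducing the computed blade loading."""
    f = B * (R - r) / (2.0 * r * math.sin(phi))
    return (2.0 / math.pi) * math.acos(math.exp(-f))

# Illustrative three-bladed rotor, inflow angle 0.1 rad:
F_mid = prandtl_tip_loss(0.50, 1.0, 3, 0.1)  # mid-span: F close to 1
F_tip = prandtl_tip_loss(0.98, 1.0, 3, 0.1)  # near the tip: F well below 1
print(round(F_mid, 3), round(F_tip, 3))
```

    This radial falloff is why applying the TLCF can compensate for an overly large Gaussian smearing factor: both mechanisms act on the blade loading near the tip.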

  13. Evaluation of air pollution modelling tools as environmental engineering courseware.

    PubMed

    Souto González, J A; Bello Bugallo, P M; Casares Long, J J

    2004-01-01

    The study of phenomena related to the dispersion of pollutants usually takes advantage of mathematical models based on the description of the different processes involved. This educational approach is especially important in air pollution dispersion, where the processes follow non-linear behaviour, making it difficult to understand the relationships between inputs and outputs, and where the 3D context makes alphanumeric results hard to analyze. In this work, three different software tools, serving as computer solvers for typical air pollution dispersion phenomena, are presented. Each software tool, developed to run on PCs, follows an approach representing one of three generations of programming languages (Fortran 77, Visual Basic, and Java), applied over three different environments: MS-DOS, MS-Windows, and the World Wide Web. The software tools were tested by students of environmental engineering (undergraduate) and chemical engineering (postgraduate), in order to evaluate their ability to improve both theoretical and practical knowledge of the air pollution dispersion problem, and the impact of the different environments on the learning process in terms of content, ease of use, and visualization of results.

  14. Evaluation of chronic immune system stimulation models in growing pigs.

    PubMed

    Rakhshandeh, A; de Lange, C F M

    2012-02-01

    Two experiments (EXPs) were conducted to evaluate models of immune system stimulation (ISS) that can be used in nutrient metabolism studies in growing pigs. In EXP I, the pig's immune response to three non-pathogenic immunogens was evaluated, whereas in EXP II the pig's more general response to one of the immunogens was contrasted with observations on non-ISS pigs. In EXP I, nine growing barrows were fitted with a jugular catheter, and after recovery assigned to one of three treatments. Three immunogens were tested during a 10-day ISS period: (i) repeated injection of increasing amounts of Escherichia coli lipopolysaccharide (LPS); (ii) repeated subcutaneous injection of turpentine (TURP); and (iii) feeding grains naturally contaminated with mycotoxins (MYCO). In EXP II, 36 growing barrows were injected repeatedly with either saline (n = 12) or increasing amounts of LPS (n = 24) for 7 days (initial dose 60 μg/kg body weight). Treating pigs with TURP and LPS reduced feed intake (P < 0.02), whereas feed intake was not reduced in pigs on MYCO. Average daily gain (ADG; kg/day) of pigs on LPS (0.50) was higher than that of pigs on TURP (0.19), but lower than that of pigs on MYCO (0.61; P < 0.01). Body temperature was elevated in pigs on LPS and TURP, by 0.8°C and 0.7°C, respectively, relative to pre-ISS challenge values (39.3°C; P < 0.02), but remained unchanged in pigs on MYCO. Plasma concentrations of interleukin-1β were increased in pigs treated with LPS and TURP (56% and 55%, respectively, relative to 22.3 pg/ml for pre-ISS; P < 0.01), but not in MYCO-treated pigs. Plasma cortisol concentrations remained unchanged for pigs on MYCO and TURP, but were reduced in LPS-treated pigs (30% relative to 29.8 ng/ml for pre-ISS; P < 0.05). Red blood cell glutathione concentrations were lower in TURP-treated pigs (13% relative to 1.38 μM for pre-ISS; P < 0.05), but were unaffected in pigs on LPS and MYCO. In EXP I, TURP caused severe responses including skin ulceration and

  15. A Meta-Model for Evaluating Information Retrieval Serviceability.

    ERIC Educational Resources Information Center

    Hjerppe, Roland

    This document first outlines considerations relative to a systems approach to evaluation, and then argues for such an approach to the evaluation of information retrieval systems (ISR). The criterion of such evaluations should be the utility of the information retrieved to the user, and the ISR ought to be regarded as one of three interrelated…

  16. Using the CIPP Model to Evaluate Reading Instruction.

    ERIC Educational Resources Information Center

    Nicholson, Tom

    1989-01-01

    Presents an approach to evaluation of reading instruction called CIPP (context, input, process, product), including: methods for discovering the needs of each student, getting input from students and colleagues concerning possible action, implementing evaluation in the process of instruction, and then carrying out an evaluation of the final…

  17. Using Hierarchical Linear Modeling for Proformative Evaluation: A Case Example

    ERIC Educational Resources Information Center

    Coryn, Chris L. S.

    2007-01-01

    Proformative evaluation--first introduced in Scriven's (2006) "The great enigma: An evaluation design puzzle"--"is motivated, like formative evaluation, by the intention to improve something that is still developing, but unlike formative, the improvement is only possible by taking action, hence proactive instead of reactive, hence both, hence…

  18. Evaluation of Aerosol-Cloud Interactions in GISS ModelE Using ASR Observations

    NASA Astrophysics Data System (ADS)

    de Boer, G.; Menon, S.; Bauer, S. E.; Toto, T.; Bennartz, R.; Cribb, M.

    2011-12-01

    The impacts of aerosol particles on clouds continue to rank among the largest uncertainties in global climate simulation. In this work we assess the capability of the NASA GISS ModelE, coupled to MATRIX aerosol microphysics, to correctly represent warm-phase aerosol-cloud interactions. This evaluation is completed through the analysis of a nudged, multi-year global simulation using measurements from various US Department of Energy sponsored measurement campaigns and satellite-based observations. Campaign observations include the Aerosol Intensive Operations Period (Aerosol IOP) and Routine ARM Aerial Facility Clouds with Low Optical Water Depths (CLOWD) Optical Radiative Observations (RACORO) at the Southern Great Plains site in Oklahoma, the Marine Stratus Radiation, Aerosol, and Drizzle (MASRAD) campaign at Pt. Reyes, California, and the ARM mobile facility's 2008 deployment to China. This combination of datasets provides a variety of aerosol and atmospheric conditions under which to test ModelE parameterizations. In addition to these localized comparisons, we provide the results of global evaluations completed using measurements derived from satellite remote sensors. We will provide a basic overview of simulation performance, as well as a detailed analysis of parameterizations relevant to aerosol indirect effects.

  19. A critical evaluation of modeled solar irradiance over California for hydrologic and land surface modeling

    NASA Astrophysics Data System (ADS)

    Lapo, Karl E.; Hinkelman, Laura M.; Sumargo, Edwin; Hughes, Mimi; Lundquist, Jessica D.

    2017-01-01

    Studies of land surface processes in complex terrain often require estimates of meteorological variables, e.g., the incoming solar irradiance (Qsi), to force land surface models. However, estimates of Qsi are rarely evaluated within mountainous environments. We evaluated several methods of estimating Qsi: the CERES Synoptic Radiative Fluxes and Clouds (SYN) product, a regional reanalysis product derived from a long-term Weather Research and Forecasting (WRF) simulation, and the Mountain Microclimate Simulation Model (MTCLIM). These products are evaluated over the Central Valley and Sierra Nevada mountains in California, a region with meteorology strongly impacted by complex topography. We used a spatially dense network of Qsi observations (n = 70) to characterize the spatial characteristics of Qsi uncertainty. Observation sites were grouped into five subregions, and Qsi estimates were evaluated against observations in each subregion. Large monthly biases (up to 80 W m-2) outside the observational uncertainty were found for all estimates in all subregions examined, typically reaching a maximum in the spring. We found that MTCLIM and SYN generally perform the best across all subregions. Differences between Qsi estimates were largest over the Sierra Nevada, with seasonal differences exceeding 50 W m-2. Disagreements in Qsi were especially pronounced when averaging over high-elevation basins, with monthly differences up to 80 W m-2. Biases in estimated Qsi predominantly occurred during darker-than-normal conditions associated with precipitation (a proxy for cloud cover), while the presence of aerosols and water vapor was unable to explain the biases. Users of Qsi estimates in regions of complex topography, especially those estimating Qsi to force land surface models, need to be aware of this source of uncertainty.
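    The monthly bias statistics reported above are simple differences between estimated and observed means; a minimal sketch with made-up monthly values (not the study's data):

```python
import numpy as np

# Hypothetical monthly-mean incoming solar irradiance, W m^-2, for one site:
qsi_est = np.array([210.0, 250.0, 300.0, 340.0])  # model or satellite estimate
qsi_obs = np.array([200.0, 230.0, 260.0, 300.0])  # station observation

diff = qsi_est - qsi_obs
bias = np.mean(diff)                      # mean bias: positive = too bright
rmse = np.sqrt(np.mean(diff ** 2))       # root-mean-square error
print(f"bias = {bias:.1f} W/m^2, RMSE = {rmse:.1f} W/m^2")
```

    Aggregating such per-site statistics by subregion and month is what reveals the spatially and seasonally varying biases the abstract describes.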

  20. THE DEVELOPMENT AND TESTING OF AN EVALUATION MODEL FOR VOCATIONAL PILOT PROGRAMS. FINAL REPORT.

    ERIC Educational Resources Information Center

    TUCKMAN, BRUCE W.

    THE OBJECTIVES OF THE PROJECT WERE (1) TO DEVELOP AN EVALUATION MODEL IN THE FORM OF A HOW-TO-DO-IT MANUAL WHICH OUTLINES PROCEDURES FOR OBTAINING IMMEDIATE INFORMATION REGARDING THE DEGREE TO WHICH A PILOT PROGRAM ACHIEVES ITS STATED FINAL OBJECTIVES, (2) TO EVALUATE THIS MODEL BY USING IT TO EVALUATE TWO ONGOING PILOT PROGRAMS, AND (3) TO…