Sample records for "evaluation model based"

  1. Agent-based modeling as a tool for program design and evaluation.

    PubMed

    Lawlor, Jennifer A; McGirr, Sara

    2017-12-01

    Recently, systems thinking and systems science approaches have gained popularity in the field of evaluation; however, there has been relatively little exploration of how evaluators could use quantitative tools to assist in the implementation of systems approaches therein. The purpose of this paper is to explore potential uses of one such quantitative tool, agent-based modeling, in evaluation practice. To this end, we define agent-based modeling and offer potential uses for it in typical evaluation activities, including: engaging stakeholders, selecting an intervention, modeling program theory, setting performance targets, and interpreting evaluation results. We provide demonstrative examples from published agent-based modeling efforts both inside and outside the field of evaluation for each of the evaluative activities discussed. We further describe potential pitfalls of this tool and offer cautions for evaluators who may choose to implement it in their practice. Finally, the article concludes with a discussion of the future of agent-based modeling in evaluation practice and a call for more formal exploration of this tool as well as other approaches to simulation modeling in the field. Copyright © 2017 Elsevier Ltd. All rights reserved.
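The record above describes agent-based modeling only in the abstract. As a rough, hypothetical illustration of what such a model looks like in code, the sketch below simulates adoption of a program behaviour spreading through peer influence; the population size, peer count, and adoption probabilities are invented for illustration and come from neither the paper nor any real program.

```python
import random

random.seed(42)

# Toy agent-based model (illustrative assumptions throughout): each step,
# every non-adopting agent samples a few peers and adopts the program
# behaviour with a probability that rises with peer adoption.
N_AGENTS = 100
N_STEPS = 20
PEERS = 5          # peers sampled per agent per step (assumed)
BASE_RATE = 0.02   # spontaneous adoption probability (assumed)

adopted = [False] * N_AGENTS

for step in range(N_STEPS):
    for i in range(N_AGENTS):
        if adopted[i]:
            continue
        peers = random.sample(range(N_AGENTS), PEERS)
        peer_share = sum(adopted[j] for j in peers) / PEERS
        # adoption probability grows with the share of adopting peers
        if random.random() < BASE_RATE + 0.5 * peer_share:
            adopted[i] = True

n_adopted = sum(adopted)
print(n_adopted, "of", N_AGENTS, "agents adopted")
```

Running the loop with different `BASE_RATE` or `PEERS` values is the kind of what-if exploration the abstract suggests for setting performance targets.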

  2. Teacher Evaluation Models: Compliance or Growth Oriented?

    ERIC Educational Resources Information Center

    Clenchy, Kelly R.

    2017-01-01

    This research study reviewed literature specific to the evolution of teacher evaluation models and explored the effectiveness of standards-based evaluation models' potential to facilitate professional growth. The researcher employed descriptive phenomenology to conduct a study of teachers' perceptions of a standards-based evaluation model's…

  3. Impact on house staff evaluation scores when changing from a Dreyfus- to a Milestone-based evaluation model: one internal medicine residency program's findings.

    PubMed

    Friedman, Karen A; Balwan, Sandy; Cacace, Frank; Katona, Kyle; Sunday, Suzanne; Chaudhry, Saima

    2014-01-01

    As graduate medical education (GME) moves into the Next Accreditation System (NAS), programs must take a critical look at their current models of evaluation and assess how well they align with reporting outcomes. Our objective was to assess the impact on house staff evaluation scores when transitioning from a Dreyfus-based model of evaluation to a Milestone-based model of evaluation. Milestones are a key component of the NAS. We analyzed all end of rotation evaluations of house staff completed by faculty for academic years 2010-2011 (pre-Dreyfus model) and 2011-2012 (post-Milestone model) in one large university-based internal medicine residency training program. Main measures included change in PGY-level average score; slope, range, and separation of average scores across all six Accreditation Council for Graduate Medical Education (ACGME) competencies. Transitioning from a Dreyfus-based model to a Milestone-based model resulted in a larger separation in the scores between our three post-graduate year classes, a steeper progression of scores in the PGY-1 class, a wider use of the 5-point scale on our global end of rotation evaluation form, and a downward shift in the PGY-1 scores and an upward shift in the PGY-3 scores. For faculty trained in both models of assessment, the Milestone-based model had greater discriminatory ability as evidenced by the larger separation in the scores for all the classes, in particular the PGY-1 class.

  5. Field Evaluation of the Pedostructure-Based Model (Kamel®)

    USDA-ARS?s Scientific Manuscript database

    This study involves a field evaluation of the pedostructure-based model Kamel and comparisons between Kamel and the Hydrus-1D model for predicting profile soil moisture. This paper also presents a sensitivity analysis of Kamel with an evaluation field site used as the base scenario. The field site u...

  6. Evaluating Emulation-based Models of Distributed Computing Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, Stephen T.; Gabert, Kasimir G.; Tarman, Thomas D.

    Emulation-based models of distributed computing systems are collections of virtual machines, virtual networks, and other emulation components configured to stand in for operational systems when performing experimental science, training, analysis of design alternatives, test and evaluation, or idea generation. As with any tool, we should carefully evaluate whether our uses of emulation-based models are appropriate and justified. Otherwise, we run the risk of using a model incorrectly and creating meaningless results. The variety of uses of emulation-based models each have their own goals and deserve thoughtful evaluation. In this paper, we enumerate some of these uses and describe approaches that one can take to build an evidence-based case that a use of an emulation-based model is credible. Predictive uses of emulation-based models, where we expect a model to tell us something true about the real world, set the bar especially high, and the principal evaluation method, called validation, is commensurately rigorous. We spend the majority of our time describing and demonstrating the validation of a simple predictive model using a well-established methodology inherited from decades of development in the computational science and engineering community.

  7. Does an expert-based evaluation allow us to go beyond the Impact Factor? Experiences from building a ranking of national journals in Poland.

    PubMed

    Kulczycki, Emanuel; Rozkosz, Ewa A

    2017-01-01

    This article discusses the Polish Journal Ranking, which is used in the research evaluation system in Poland. In 2015, the ranking, which represents all disciplines, allocated 17,437 journals into three lists: A, B, and C. The B list constitutes a ranking of Polish journals that are indexed neither in the Web of Science nor the European Reference Index for the Humanities. This ranking was built by evaluating journals in three dimensions: formal, bibliometric, and expert-based. We have analysed data on 2035 Polish journals from the B list. Our study aims to determine how the expert-based evaluation influenced the results of the final evaluation. In our study, we used structural equation modelling, which is regression based, and we designed three pairs of theoretical models for three fields of science: (1) humanities, (2) social sciences, and (3) engineering, natural sciences, and medical sciences. Each pair consisted of the full model and the reduced model (i.e., the model without the expert-based evaluation). Our analysis revealed that the multidimensional evaluation of local journals should not rely only on bibliometric indicators, which are based on the Web of Science or Scopus. Moreover, we have shown that the expert-based evaluation plays a major role in all fields of science. We conclude with recommendations that the formal evaluation should be reduced to verifiable parameters and that the expert-based evaluation should be based on common guidelines for the experts.

  8. FRAMEWORK FOR EVALUATION OF PHYSIOLOGICALLY-BASED PHARMACOKINETIC MODELS FOR USE IN SAFETY OR RISK ASSESSMENT

    EPA Science Inventory

    ABSTRACT

    Proposed applications of increasingly sophisticated biologically-based computational models, such as physiologically-based pharmacokinetic (PBPK) models, raise the issue of how to evaluate whether the models are adequate for proposed uses including safety or risk ...

  9. A Course Evaluation System in an Open University.

    ERIC Educational Resources Information Center

    Chacon, Fabio J.

    A model is presented for evaluating instruction in a university based on the teaching-at-a-distance concept. Technically appropriate and operationally viable, this model is applied to the National Open University of Venezuela (UNA). The model is based on two principles of educational evaluation: (1) the concept of evaluation as a…

  10. Intelligent Evaluation Method of Tank Bottom Corrosion Status Based on Improved BP Artificial Neural Network

    NASA Astrophysics Data System (ADS)

    Qiu, Feng; Dai, Guang; Zhang, Ying

    According to the acoustic emission information and the appearance inspection information from online testing of tank bottoms, the external factors associated with tank bottom corrosion status are confirmed. Applying an artificial neural network intelligent evaluation method, three tank bottom corrosion status evaluation models, based on appearance inspection information, acoustic emission information, and online testing information respectively, are established. Compared with the results of acoustic emission online testing through the evaluation of a test sample, the accuracy of the evaluation model based on online testing information is 94%. The evaluation model can evaluate tank bottom corrosion accurately and realizes intelligent evaluation of acoustic emission online testing of tank bottoms.

  11. EVALUATION OF THE REAL-TIME AIR-QUALITY MODEL USING THE RAPS (REGIONAL AIR POLLUTION STUDY) DATA BASE. VOLUME 4. EVALUATION GUIDE

    EPA Science Inventory

    The theory and programming of statistical tests for evaluating the Real-Time Air-Quality Model (RAM) using the Regional Air Pollution Study (RAPS) data base are fully documented in four volumes. Moreover, the tests are generally applicable to other model evaluation problems. Volu...

  12. A Model Based on Crowdsourcing for Detecting Natural Hazards

    NASA Astrophysics Data System (ADS)

    Duan, J.; Ma, C.; Zhang, J.; Liu, S.; Liu, J.

    2015-12-01

    Remote sensing technology provides a new method for the detection, early warning, mitigation, and relief of natural hazards. Given the suddenness and the unpredictability of the location of natural hazards, as well as the practical demands of hazard response work, this article proposes an evaluation model for remote sensing detection of natural hazards based on crowdsourcing. Firstly, using a crowdsourcing model and with the help of the Internet and the power of hundreds of millions of Internet users, this evaluation model provides visual interpretation of high-resolution remote sensing images of the hazard area and collects massive amounts of valuable disaster data; secondly, this evaluation model adopts a dynamic voting consistency strategy to evaluate the disaster data provided by the crowdsourcing workers; thirdly, this evaluation model pre-estimates the disaster severity with a disaster pre-evaluation model based on regional buffers; lastly, the evaluation model activates the corresponding expert system according to the forecast results. The idea of this model breaks the boundaries between geographic information professionals and the public, allows public participation and citizen science to be realized, and improves the accuracy and timeliness of hazard assessment results.
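The record does not specify its "dynamic voting consistency" strategy. One plausible minimal reading, sketched below, is a majority vote that accepts a crowd label only once its share of votes clears a threshold; the threshold value, the label names, and the `consensus_label` helper are all assumptions for illustration, not the paper's actual rule.

```python
from collections import Counter

def consensus_label(votes, threshold=0.6):
    """Return the majority label if its vote share meets the threshold,
    else None (i.e., keep collecting votes).  Hypothetical sketch of a
    consistency check over crowdsourced labels."""
    if not votes:
        return None
    label, count = Counter(votes).most_common(1)[0]
    return label if count / len(votes) >= threshold else None

# Usage: crowd workers label a remote-sensing tile as damaged or intact.
print(consensus_label(["damaged", "damaged", "intact", "damaged"]))  # damaged
print(consensus_label(["damaged", "intact"]))                        # None
```

The None result models the "dynamic" part: a tile with no clear consensus stays open for more votes rather than being assigned a label.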

  13. Validating the ACE Model for Evaluating Student Performance Using a Teaching-Learning Process Based on Computational Modeling Systems

    ERIC Educational Resources Information Center

    Louzada, Alexandre Neves; Elia, Marcos da Fonseca; Sampaio, Fábio Ferrentini; Vidal, Andre Luiz Pestana

    2014-01-01

    The aim of this work is to adapt and test, in a Brazilian public school, the ACE model proposed by Borkulo for evaluating student performance as a teaching-learning process based on computational modeling systems. The ACE model is based on different types of reasoning involving three dimensions. In addition to adapting the model and introducing…

  14. Initial draft of CSE-UCLA evaluation model based on weighted product in order to optimize digital library services in computer college in Bali

    NASA Astrophysics Data System (ADS)

    Divayana, D. G. H.; Adiarta, A.; Abadi, I. B. G. S.

    2018-01-01

    The aim of this research was to create an initial design of the CSE-UCLA evaluation model modified with Weighted Product for evaluating digital library service at Computer College in Bali. The method used in this research was the developmental research method, following the Borg and Gall model design. The result obtained from the research conducted earlier this month was a rough sketch of the Weighted Product based CSE-UCLA evaluation model; the design was able to provide a general overview of the stages of the Weighted Product based CSE-UCLA evaluation model used to optimize digital library services at the Computer Colleges in Bali.
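The record names the Weighted Product method but does not show the computation. The standard WP score for alternative i is S_i = prod_j x_ij^w_j over criteria j; the sketch below uses that textbook formula with hypothetical service criteria, scores, and weights, since the actual criteria of the digital-library evaluation are not given in the record.

```python
def weighted_product_score(values, weights):
    """Weighted Product score: the product of each criterion value
    raised to its weight.  Textbook WP formula; the criteria and
    weights passed in below are illustrative assumptions."""
    assert len(values) == len(weights)
    score = 1.0
    for x, w in zip(values, weights):
        score *= x ** w
    return score

# Hypothetical digital-library services scored on three criteria
# (e.g. availability, usability, coverage), weights summing to 1.
weights = [0.5, 0.3, 0.2]
services = {"catalog": [4, 3, 5], "e-journals": [5, 2, 4]}
ranked = sorted(services,
                key=lambda s: weighted_product_score(services[s], weights),
                reverse=True)
print(ranked)
```

Because the weights are exponents, WP rewards balanced performance: a very low score on any heavily weighted criterion drags the whole product down.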

  15. Evaluation of six NEHRP B/C crustal amplification models proposed for use in western North America

    USGS Publications Warehouse

    Boore, David; Campbell, Kenneth W.

    2016-01-01

    We evaluate six crustal amplification models based on National Earthquake Hazards Reduction Program (NEHRP) B/C crustal profiles proposed for use in western North America (WNA) and often used in other active crustal regions where crustal properties are unknown. One of the models is based on an interpolation of generic rock velocity profiles previously proposed for WNA and central and eastern North America (CENA), in conjunction with material densities based on an updated velocity–density relationship. A second model is based on the velocity profile used to develop amplification factors for the Next Generation Attenuation (NGA)‐West2 project. A third model is based on a near‐surface velocity profile developed from the NGA‐West2 site database. A fourth model is based on velocity and density profiles originally proposed for use in CENA but recently used to represent crustal properties in California. We propose two alternatives to this latter model that more closely represent WNA crustal properties. We adopt a value of site attenuation (κ0) for each model that is either recommended by the author of the model or proposed by us. Stochastic simulation is used to evaluate the Fourier amplification factors and their impact on response spectra associated with each model. Based on this evaluation, we conclude that among the available models evaluated in this study the NEHRP B/C amplification model of Boore (2016) best represents median crustal amplification in WNA, although the amplification models based on the crustal profiles of Kamai et al. (2013, 2016, unpublished manuscript, see Data and Resources) and Yenier and Atkinson (2015), the latter adjusted to WNA crustal properties, can be used to represent epistemic uncertainty.

  16. Applying Constructivist and Objectivist Learning Theories in the Design of a Web-based Course: Implications for Practice.

    ERIC Educational Resources Information Center

    Moallem, Mahnaz

    2001-01-01

    Provides an overview of the process of designing and developing a Web-based course using instructional design principles and models, including constructivist and objectivist theories. Explains the process of implementing an instructional design model in designing a Web-based undergraduate course and evaluates the model based on course evaluations.…

  17. Did you have an impact? A theory-based method for planning and evaluating knowledge-transfer and exchange activities in occupational health and safety.

    PubMed

    Kramer, Desré M; Wells, Richard P; Carlan, Nicolette; Aversa, Theresa; Bigelow, Philip P; Dixon, Shane M; McMillan, Keith

    2013-01-01

    Few evaluation tools are available to assess knowledge-transfer and exchange interventions. The objective of this paper is to develop and demonstrate a theory-based knowledge-transfer and exchange method of evaluation (KEME) that synthesizes 3 theoretical frameworks: the promoting action on research implementation of health services (PARiHS) model, the transtheoretical model of change, and a model of knowledge use. It proposes a new term, keme, to mean a unit of evidence-based transferable knowledge. The usefulness of the evaluation method is demonstrated with 4 occupational health and safety knowledge transfer and exchange (KTE) implementation case studies that are based upon the analysis of over 50 pre-existing interviews. The usefulness of the evaluation model has enabled us to better understand stakeholder feedback, frame our interpretation, and perform a more comprehensive evaluation of the knowledge use outcomes of our KTE efforts.

  18. EVALUATION OF THE REAL-TIME AIR-QUALITY MODEL USING THE RAPS (REGIONAL AIR POLLUTION STUDY) DATA BASE. VOLUME 1. OVERVIEW

    EPA Science Inventory

    The theory and programming of statistical tests for evaluating the Real-Time Air-Quality Model (RAM) using the Regional Air Pollution Study (RAPS) data base are fully documented in four report volumes. Moreover, the tests are generally applicable to other model evaluation problem...

  19. A merged model of quality improvement and evaluation: maximizing return on investment.

    PubMed

    Woodhouse, Lynn D; Toal, Russ; Nguyen, Trang; Keene, DeAnna; Gunn, Laura; Kellum, Andrea; Nelson, Gary; Charles, Simone; Tedders, Stuart; Williams, Natalie; Livingood, William C

    2013-11-01

    Quality improvement (QI) and evaluation are frequently considered to be alternative approaches for monitoring and assessing program implementation and impact. The emphasis on third-party evaluation, particularly associated with summative evaluation, and the grounding of evaluation in the social and behavioral science contrast with an emphasis on the integration of QI process within programs or organizations and its origins in management science and industrial engineering. Working with a major philanthropic organization in Georgia, we illustrate how a QI model is integrated with evaluation for five asthma prevention and control sites serving poor and underserved communities in rural and urban Georgia. A primary foundation of this merged model of QI and evaluation is a refocusing of the evaluation from an intimidating report card summative evaluation by external evaluators to an internally engaged program focus on developmental evaluation. The benefits of the merged model to both QI and evaluation are discussed. The use of evaluation based logic models can help anchor a QI program in evidence-based practice and provide linkage between process and outputs with the longer term distal outcomes. Merging the QI approach with evaluation has major advantages, particularly related to enhancing the funder's return on investment. We illustrate how a Plan-Do-Study-Act model of QI can (a) be integrated with evaluation based logic models, (b) help refocus emphasis from summative to developmental evaluation, (c) enhance program ownership and engagement in evaluation activities, and (d) increase the role of evaluators in providing technical assistance and support.

  20. The Application of FIA-based Data to Wildlife Habitat Modeling: A Comparative Study

    Treesearch

    Thomas C., Jr. Edwards; Gretchen G. Moisen; Tracey S. Frescino; Randall J. Schultz

    2005-01-01

    We evaluated the capability of two types of models, one based on spatially explicit variables derived from FIA data and one using so-called traditional habitat evaluation methods, for predicting the presence of cavity-nesting bird habitat in Fishlake National Forest, Utah. Both models performed equally well, in measures of predictive accuracy, with the FIA-based model...

  1. Decision-relevant evaluation of climate models: A case study of chill hours in California

    NASA Astrophysics Data System (ADS)

    Jagannathan, K. A.; Jones, A. D.; Kerr, A. C.

    2017-12-01

    The past decade has seen a proliferation of different climate datasets, with over 60 climate models currently in use. Comparative evaluation and validation of models can assist practitioners in choosing the most appropriate models for adaptation planning. However, such assessments are usually conducted for 'climate metrics' such as seasonal temperature, while sectoral decisions are often based on 'decision-relevant outcome metrics' such as growing degree days or chill hours. Since climate models predict different metrics with varying skill, the goal of this research is to conduct a bottom-up evaluation of model skill for 'outcome-based' metrics. Using chill hours (the number of hours in winter months where the temperature is less than 45 °F) in Fresno, CA as a case, we assess how well different GCMs predict the historical mean and slope of chill hours, and whether and to what extent projections differ based on model selection. We then compare our results with other climate-based evaluations of the region to identify similarities and differences. For the model skill evaluation, historically observed chill hours were compared with simulations from 27 GCMs (and multiple ensembles). Model skill scores were generated based on a statistical hypothesis test of the comparative assessment. Future projections from RCP 8.5 runs were evaluated, and a simple bias correction was also conducted. Our analysis indicates that model skill in predicting the chill hour slope is dependent on skill in predicting mean chill hours, which results from the non-linear nature of the chill metric. However, there was no clear relationship between the models that performed well for the chill hour metric and those that performed well in other temperature-based evaluations (such as winter minimum temperature or diurnal temperature range). Further, contrary to conclusions from other studies, we also found that the multi-model mean or large ensemble mean results may not always be most appropriate for this outcome metric. Our assessment sheds light on key differences between global versus local skill, and broad versus specific skill, of climate models, highlighting that decision-relevant model evaluation may be crucial for providing practitioners with the best available climate information for their specific needs.
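The chill hours metric is defined concretely in the abstract (hours in winter months with temperature below 45 °F), so the computation itself is simple to sketch; the hourly readings below are invented for illustration, not Fresno observations.

```python
def chill_hours(hourly_temps_f, threshold=45.0):
    """Count hours with temperature below the chill threshold (45 °F,
    per the abstract's definition).  Applied to winter-month hours."""
    return sum(1 for t in hourly_temps_f if t < threshold)

# Hypothetical hourly readings (°F) for one winter day.
day = [40, 42, 44, 46, 50, 55, 58, 60, 57, 52, 47, 44, 43, 41]
print(chill_hours(day))  # 6
```

The thresholded count is what makes the metric non-linear in temperature, which is the abstract's explanation for why skill in the chill-hour slope depends on skill in the mean.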

  2. Model-based choices involve prospective neural activity

    PubMed Central

    Doll, Bradley B.; Duncan, Katherine D.; Simon, Dylan A.; Shohamy, Daphna; Daw, Nathaniel D.

    2015-01-01

    Decisions may arise via “model-free” repetition of previously reinforced actions, or by “model-based” evaluation, which is widely thought to follow from prospective anticipation of action consequences using a learned map or model. While choices and neural correlates of decision variables sometimes reflect knowledge of their consequences, it remains unclear whether this actually arises from prospective evaluation. Using functional MRI and a sequential reward-learning task in which paths contained decodable object categories, we found that humans’ model-based choices were associated with neural signatures of future paths observed at decision time, suggesting a prospective mechanism for choice. Prospection also covaried with the degree of model-based influences on neural correlates of decision variables, and was inversely related to prediction error signals thought to underlie model-free learning. These results dissociate separate mechanisms underlying model-based and model-free evaluation and support the hypothesis that model-based influences on choices and neural decision variables result from prospection. PMID:25799041

  3. Milestone-specific, Observed data points for evaluating levels of performance (MODEL) assessment strategy for anesthesiology residency programs.

    PubMed

    Nagy, Christopher J; Fitzgerald, Brian M; Kraus, Gregory P

    2014-01-01

    Anesthesiology residency programs will be expected to have Milestones-based evaluation systems in place by July 2014 as part of the Next Accreditation System. The San Antonio Uniformed Services Health Education Consortium (SAUSHEC) anesthesiology residency program developed and implemented a Milestones-based feedback and evaluation system a year ahead of schedule. It has been named the Milestone-specific, Observed Data points for Evaluating Levels of performance (MODEL) assessment strategy. The "MODEL Menu" and the "MODEL Blueprint" are tools that other anesthesiology residency programs can use in developing their own Milestones-based feedback and evaluation systems prior to ACGME-required implementation. Data from our early experience with the streamlined MODEL blueprint assessment strategy showed substantially improved faculty compliance with reporting requirements. The MODEL assessment strategy provides programs with a workable assessment method for residents, and important Milestones data points to programs for ACGME reporting.

  4. Modelling in Evaluating a Working Life Project in Higher Education

    ERIC Educational Resources Information Center

    Sarja, Anneli; Janhonen, Sirpa; Havukainen, Pirjo; Vesterinen, Anne

    2012-01-01

    This article describes an evaluation method based on collaboration between higher education, a care home, and a university in an R&D project. The aim of the project was to elaborate modelling as a tool of developmental evaluation for innovation and competence in project cooperation. The approach was based on activity theory. Modelling enabled a…

  5. EVALUATION OF THE REAL-TIME AIR-QUALITY MODEL USING THE RAPS (REGIONAL AIR POLLUTION STUDY) DATA BASE. VOLUME 3. PROGRAM USER'S GUIDE

    EPA Science Inventory

    The theory and programming of statistical tests for evaluating the Real-Time Air-Quality Model (RAM) using the Regional Air Pollution Study (RAPS) data base are fully documented in four volumes. Moreover, the tests are generally applicable to other model evaluation problems. Volu...

  6. Highway Air Pollution Dispersion Modeling : Preliminary Evaluation of Thirteen Models

    DOT National Transportation Integrated Search

    1978-06-01

    Thirteen highway air pollution dispersion models have been tested, using a portion of the Airedale air quality data base. The Transportation Air Pollution Studies (TAPS) System, a data base management system specifically designed for evaluating dispe...

  7. Highway Air Pollution Dispersion Modeling : Preliminary Evaluation of Thirteen Models

    DOT National Transportation Integrated Search

    1977-01-01

    Thirteen highway air pollution dispersion models have been tested, using a portion of the Airedale air quality data base. The Transportation Air Pollution Studies (TAPS) System, a data base management system specifically designed for evaluating dispe...

  8. Model-Based Economic Evaluation of Treatments for Depression: A Systematic Literature Review.

    PubMed

    Kolovos, Spyros; Bosmans, Judith E; Riper, Heleen; Chevreul, Karine; Coupé, Veerle M H; van Tulder, Maurits W

    2017-09-01

    An increasing number of model-based studies that evaluate the cost effectiveness of treatments for depression are being published. These studies have different characteristics and use different simulation methods. We aimed to systematically review model-based studies evaluating the cost effectiveness of treatments for depression and examine which modelling technique is most appropriate for simulating the natural course of depression. The literature search was conducted in the databases PubMed, EMBASE and PsycInfo between 1 January 2002 and 1 October 2016. Studies were eligible if they used a health economic model with quality-adjusted life-years or disability-adjusted life-years as an outcome measure. Data related to various methodological characteristics were extracted from the included studies. The available modelling techniques were evaluated based on 11 predefined criteria. This methodological review included 41 model-based studies, of which 21 used decision trees (DTs), 15 used cohort-based state-transition Markov models (CMMs), two used individual-based state-transition models (ISMs), and three used discrete-event simulation (DES) models. Just over half of the studies (54%) evaluated antidepressants compared with a control condition. The data sources, time horizons, cycle lengths, perspectives adopted and number of health states/events all varied widely between the included studies. DTs scored positively in four of the 11 criteria, CMMs in five, ISMs in six, and DES models in seven. There were substantial methodological differences between the studies. Since the individual history of each patient is important for the prognosis of depression, DES and ISM simulation methods may be more appropriate than the others for a pragmatic representation of the course of depression. However, direct comparisons between the available modelling techniques are necessary to yield firm conclusions.

  9. Dynamic Chest Image Analysis: Evaluation of Model-Based Pulmonary Perfusion Analysis With Pyramid Images

    DTIC Science & Technology

    2001-10-25

    Dynamic Chest Image Analysis aims to develop model-based computer analysis and visualization methods for showing focal and general abnormalities of lung ventilation and perfusion, based on a sequence of digital chest fluoroscopy frames collected with the Dynamic Pulmonary Imaging technique [18, 5, 17, 6]. We have proposed and evaluated a multiresolutional method with an explicit ventilation model based on pyramid images for ventilation analysis. We have further extended the method for ventilation analysis to pulmonary perfusion. This paper focuses on the clinical evaluation of our method for

  10. Dig into Learning: A Program Evaluation of an Agricultural Literacy Innovation

    ERIC Educational Resources Information Center

    Edwards, Erica Brown

    2016-01-01

    This study is a mixed-methods program evaluation of an agricultural literacy innovation in a local school district in rural eastern North Carolina. This evaluation describes the use of a theory-based framework, the Concerns-Based Adoption Model (CBAM), in accordance with Stufflebeam's Context, Input, Process, Product (CIPP) model by evaluating the…

  11. Evaluation of Turkish and Mathematics Curricula According to Value-Based Evaluation Model

    ERIC Educational Resources Information Center

    Duman, Serap Nur; Akbas, Oktay

    2017-01-01

    This study evaluated secondary school seventh-grade Turkish and mathematics programs using the Context-Input-Process-Product Evaluation Model based on student, teacher, and inspector views. The convergent parallel mixed method design was used in the study. Student values were identified using the scales for socio-level identification, traditional…

  12. Report of the Inter-Organizational Committee on Evaluation. Internal Evaluation Model.

    ERIC Educational Resources Information Center

    White, Roy; Murray, John

    Based upon the premise that school divisions in Manitoba, Canada, should evaluate and improve upon themselves, this evaluation model was developed. The participating personnel and the development of the evaluation model are described. The model has 11 parts: (1) needs assessment; (2) statement of objectives; (3) definition of objectives; (4)…

  13. An Evaluation of High School Curricula Using the Element-Based Curriculum Development Model

    ERIC Educational Resources Information Center

    Aslan, Dolgun; Günay, Rafet

    2016-01-01

    This study was conducted with the aim of evaluating the curricula that constitute the basis of education provision at high schools in Turkey from the perspective of the teachers involved. A descriptive survey model, a quantitative research method, was employed in this study. An item-based curriculum evaluation model was employed as part of the…

  14. Metrics for Performance Evaluation of Patient Exercises during Physical Therapy.

    PubMed

    Vakanski, Aleksandar; Ferguson, Jake M; Lee, Stephen

    2017-06-01

    The article proposes a set of metrics for evaluation of patient performance in physical therapy exercises. A taxonomy is employed that classifies the metrics into quantitative and qualitative categories, based on the level of abstraction of the captured motion sequences. Further, the quantitative metrics are classified into model-less and model-based metrics, according to whether the evaluation employs the raw measurements of patient-performed motions or is based on a mathematical model of the motions. The reviewed metrics include root-mean-square distance, Kullback-Leibler divergence, log-likelihood, heuristic consistency, the Fugl-Meyer Assessment, and similar measures. The metrics are evaluated for a set of five human motions captured with a Kinect sensor. The metrics can potentially be integrated into a system that employs machine learning for modelling and assessment of the consistency of patient performance in a home-based therapy setting. Automated performance evaluation can overcome the inherent subjectivity of human-performed therapy assessment, increase adherence to prescribed therapy plans, and reduce healthcare costs.
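The simplest model-less metric of the kind reviewed, the root-mean-square distance between a patient's motion and a reference motion, can be sketched as follows. The frame format (each frame a tuple of joint coordinates) is an assumption made for illustration, not the paper's data format.

```python
import math

def rms_distance(seq_a, seq_b):
    """Root-mean-square distance between two time-aligned, equal-length
    motion sequences, each a list of per-frame coordinate tuples."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be time-aligned and equal length")
    total, count = 0.0, 0
    for frame_a, frame_b in zip(seq_a, seq_b):
        for pa, pb in zip(frame_a, frame_b):
            total += (pa - pb) ** 2
            count += 1
    return math.sqrt(total / count)

# A lower RMS distance means the patient's motion tracks the reference
# (e.g. therapist-demonstrated) motion more closely.
reference = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]
patient   = [(0.1, 0.0), (0.9, 1.1), (2.0, 1.8)]
score = rms_distance(reference, patient)
```

In practice such a metric would be applied per joint after temporal alignment (e.g. dynamic time warping), which this sketch omits.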

  15. Evaluating University-Industry Collaboration: The European Foundation of Quality Management Excellence Model-Based Evaluation of University-Industry Collaboration

    ERIC Educational Resources Information Center

    Kauppila, Osmo; Mursula, Anu; Harkonen, Janne; Kujala, Jaakko

    2015-01-01

    The growth in university-industry collaboration has resulted in an increasing demand for methods to evaluate it. This paper presents one way to evaluate an organization's collaborative activities based on the European Foundation of Quality Management excellence model. Success factors of collaboration are derived from literature and compared…

  16. Predictive representations can link model-based reinforcement learning to model-free mechanisms.

    PubMed

    Russek, Evan M; Momennejad, Ida; Botvinick, Matthew M; Gershman, Samuel J; Daw, Nathaniel D

    2017-09-01

    Humans and animals are capable of evaluating actions by considering their long-run future rewards through a process described using model-based reinforcement learning (RL) algorithms. The mechanisms by which neural circuits perform the computations prescribed by model-based RL remain largely unknown; however, multiple lines of evidence suggest that neural circuits supporting model-based behavior are structurally homologous to and overlapping with those thought to carry out model-free temporal difference (TD) learning. Here, we lay out a family of approaches by which model-based computation may be built upon a core of TD learning. The foundation of this framework is the successor representation, a predictive state representation that, when combined with TD learning of value predictions, can produce a subset of the behaviors associated with model-based learning, while requiring less decision-time computation than dynamic programming. Using simulations, we delineate the precise behavioral capabilities enabled by evaluating actions using this approach, and compare them to those demonstrated by biological organisms. We then introduce two new algorithms that build upon the successor representation while progressively mitigating its limitations. Because this framework can account for the full range of observed putatively model-based behaviors while still utilizing a core TD framework, we suggest that it represents a neurally plausible family of mechanisms for model-based evaluation.
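The core idea, learning a successor representation (SR) with TD updates and then reading values off it, can be sketched in a tabular toy environment. The chain environment, reward placement, and learning parameters below are illustrative choices, not the paper's simulations.

```python
import random

# Tabular SR learning on a 4-state, right-moving chain: M[s][j] estimates
# the expected discounted future occupancy of state j starting from s,
# and values follow as V(s) = sum_j M[s][j] * R(j).
N, GAMMA, ALPHA = 4, 0.9, 0.1
M = [[0.0] * N for _ in range(N)]
reward = [0.0, 0.0, 0.0, 1.0]  # reward only in the final state

def step(s):
    return min(s + 1, N - 1)  # deterministic transition; last state self-loops

random.seed(0)
for _ in range(2000):
    s = random.randrange(N)          # random restarts for full coverage
    s_next = step(s)
    for j in range(N):               # TD update of one SR row
        onehot = 1.0 if j == s else 0.0
        target = onehot + GAMMA * M[s_next][j]
        M[s][j] += ALPHA * (target - M[s][j])

# Value computation is a cheap dot product, not a decision-time tree search.
values = [sum(M[s][j] * reward[j] for j in range(N)) for s in range(N)]
```

Changing `reward` re-values all states immediately without relearning `M`, which is the behavioral signature that lets the SR mimic a subset of model-based revaluation phenomena.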

  17. Predictive representations can link model-based reinforcement learning to model-free mechanisms

    PubMed Central

    Botvinick, Matthew M.

    2017-01-01

    Humans and animals are capable of evaluating actions by considering their long-run future rewards through a process described using model-based reinforcement learning (RL) algorithms. The mechanisms by which neural circuits perform the computations prescribed by model-based RL remain largely unknown; however, multiple lines of evidence suggest that neural circuits supporting model-based behavior are structurally homologous to and overlapping with those thought to carry out model-free temporal difference (TD) learning. Here, we lay out a family of approaches by which model-based computation may be built upon a core of TD learning. The foundation of this framework is the successor representation, a predictive state representation that, when combined with TD learning of value predictions, can produce a subset of the behaviors associated with model-based learning, while requiring less decision-time computation than dynamic programming. Using simulations, we delineate the precise behavioral capabilities enabled by evaluating actions using this approach, and compare them to those demonstrated by biological organisms. We then introduce two new algorithms that build upon the successor representation while progressively mitigating its limitations. Because this framework can account for the full range of observed putatively model-based behaviors while still utilizing a core TD framework, we suggest that it represents a neurally plausible family of mechanisms for model-based evaluation. PMID:28945743

  18. A Model for the Evaluation of Educational Products.

    ERIC Educational Resources Information Center

    Bertram, Charles L.

    A model for the evaluation of educational products based on experience with development of three such products is described. The purpose of the evaluation model is to indicate the flow of evaluation activity as products undergo development. Evaluation is given Stufflebeam's definition as the process of delineating, obtaining, and providing useful…

  19. Evaluating Computer-Based Assessment in a Risk-Based Model

    ERIC Educational Resources Information Center

    Zakrzewski, Stan; Steven, Christine; Ricketts, Chris

    2009-01-01

    There are three purposes for evaluation: evaluation for action to aid the decision making process, evaluation for understanding to further enhance enlightenment and evaluation for control to ensure compliance to standards. This article argues that the primary function of evaluation in the "Catherine Wheel" computer-based assessment (CBA)…

  20. An interdisciplinary framework for participatory modeling design and evaluation—What makes models effective participatory decision tools?

    NASA Astrophysics Data System (ADS)

    Falconi, Stefanie M.; Palmer, Richard N.

    2017-02-01

    Increased requirements for public involvement in water resources management (WRM) over the past century have stimulated the development of more collaborative decision-making methods. Participatory modeling (PM) uses computer models to inform and engage stakeholders in the planning process in order to influence collaborative decisions in WRM. Past evaluations of participatory models focused on process and final outcomes, yet were hindered by diversity of purpose and inconsistent documentation. This paper presents a two-stage framework for evaluating PM based on mechanisms for improving model effectiveness as participatory tools. Five dimensions characterize the "who, when, how, and why" of each participatory effort (stage 1). Models are evaluated as "boundary objects," a concept used to describe tools that bridge understanding and translate different bodies of knowledge to improve credibility, salience, and legitimacy (stage 2). This evaluation framework is applied to five existing case studies from the literature. Though the goals of participation can be diverse, the novel contribution of the proposed two-stage framework is the flexibility it has to evaluate a wide range of cases that differ in scope, modeling approach, and participatory context. Also, the evaluation criteria provide a structured vocabulary based on clear mechanisms that extend beyond previous process-based and outcome-based evaluations. Effective models are those that take advantage of mechanisms that facilitate dialogue and resolution and improve the accessibility and applicability of technical knowledge. Furthermore, the framework can help build more complete records and systematic documentation of evidence to help standardize the field of PM.

  1. Integrating distributional, spatial prioritization, and individual-based models to evaluate potential critical habitat networks: A case study using the Northern Spotted Owl

    EPA Science Inventory

    As part of the northern spotted owl recovery planning effort, we evaluated a series of alternative critical habitat scenarios using a species-distribution model (MaxEnt), a conservation-planning model (Zonation), and an individual-based population model (HexSim). With this suite ...

  2. Community-Based Rehabilitation (CBR): Problems and Possibilities.

    ERIC Educational Resources Information Center

    O'Toole, Brian

    1987-01-01

    The institution-based model for providing services to individuals with disabilities has limitations in both developing and developed countries. The community-based rehabilitation model was positively evaluated by the World Health Organization as an alternative approach, but the evaluation is questioned on methodological and philosophical grounds.…

  3. Formal implementation of a performance evaluation model for the face recognition system.

    PubMed

    Shin, Yong-Nyuo; Kim, Jason; Lee, Yong-Jun; Shin, Woochang; Choi, Jin-Young

    2008-01-01

    Due to its usability, practical applications, and lack of intrusiveness, face recognition technology, based on information derived from individuals' facial features, has been attracting considerable attention recently. Reported recognition rates of commercialized face recognition systems cannot be accepted as official recognition rates, as they are based on assumptions that are beneficial to the specific system and face database. Therefore, performance evaluation methods and tools are necessary to objectively measure the accuracy and performance of any face recognition system. In this paper, we propose and formalize a performance evaluation model for the biometric recognition system, implementing an evaluation tool for face recognition systems based on the proposed model. Furthermore, we performed evaluations objectively by providing guidelines for the design and implementation of a performance evaluation system, formalizing the performance test process.
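Objective biometric performance evaluation of this kind conventionally rests on error rates computed from match scores. A minimal sketch of the standard false accept rate (FAR) and false reject rate (FRR) computation follows; the score lists and threshold are illustrative and are not the paper's formal model.

```python
def far_frr(genuine_scores, impostor_scores, threshold):
    """Standard biometric error rates at a given match-score threshold,
    where a higher score means a stronger claimed identity match.
    FAR: fraction of impostor comparisons wrongly accepted.
    FRR: fraction of genuine comparisons wrongly rejected."""
    false_accepts = sum(1 for s in impostor_scores if s >= threshold)
    false_rejects = sum(1 for s in genuine_scores if s < threshold)
    return (false_accepts / len(impostor_scores),
            false_rejects / len(genuine_scores))

# Illustrative score samples from a hypothetical matcher.
far, frr = far_frr(genuine_scores=[0.9, 0.8, 0.4],
                   impostor_scores=[0.1, 0.3, 0.95],
                   threshold=0.5)
```

Sweeping the threshold and plotting FAR against FRR yields the system's operating curve, which is the kind of database-independent measurement a formal evaluation model aims to standardize.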

  4. An Inter-Personal Information Sharing Model Based on Personalized Recommendations

    NASA Astrophysics Data System (ADS)

    Kamei, Koji; Funakoshi, Kaname; Akahani, Jun-Ichi; Satoh, Tetsuji

    In this paper, we propose an inter-personal information sharing model among individuals based on personalized recommendations. In the proposed model, we define an information resource as shared between people when both of them consider it important, not merely when they both possess it. In other words, the model defines the importance of information resources based on personalized recommendations from identifiable acquaintances. The proposed method is based on a collaborative filtering system that focuses on evaluations from identifiable acquaintances. It utilizes both user evaluations of documents and their contents. In other words, each user profile is represented as a matrix of credibility scores for other users' evaluations in each domain of interest. We extended the content-based collaborative filtering method to distinguish the other users to whom documents should be recommended. We also applied a concept-based vector space model to represent the domains of interest, instead of the previous method, which represented them with a term-based vector space model. We introduce a personalized concept-base compiled from each user's information repository to improve information retrieval in the user's environment. Furthermore, the concept-spaces change from user to user since they reflect the personalities of the users. Because of different concept-spaces, the similarity between a document and a user's interests varies for each user. As a result, a user receives recommendations from other users who have different viewpoints, achieving inter-personal information sharing based on personalized recommendations. This paper also describes an experimental simulation of our information sharing model. In our laboratory, five participants accumulated a personal repository of e-mails and web pages from which they built their own concept-bases. We then estimated user profiles according to the personalized concept-bases and the sets of documents that others evaluated. We simulated inter-personal recommendation based on the user profiles and evaluated the performance of the recommendation method by comparing the recommended documents to the results of content-based collaborative filtering.
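The credibility-weighted notion of shared importance can be sketched with concept-space vectors and cosine similarity. The sparse-dict vector encoding and the blending formula below are illustrative assumptions for the sketch, not the authors' exact model.

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse concept-space vectors
    represented as {concept: weight} dicts."""
    dot = sum(u[k] * v.get(k, 0.0) for k in u)
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def shared_importance(doc_vec, my_interest, friend_interest, credibility):
    """Importance of a document for sharing: the owner's interest match,
    blended with an acquaintance's match scaled by the credibility weight
    the owner assigns to that acquaintance (an illustrative formula)."""
    return (cosine(doc_vec, my_interest)
            + credibility * cosine(doc_vec, friend_interest)) / (1.0 + credibility)

doc = {"asthma": 0.8, "pediatrics": 0.6}
score = shared_importance(doc,
                          my_interest={"asthma": 1.0},
                          friend_interest={"pediatrics": 1.0},
                          credibility=0.5)
```

Because each user's concept-base is personal, the same document vector would be projected into different concept spaces per user, which is what makes the resulting similarities, and hence the recommendations, user-dependent.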

  5. Evaluating Curriculum-Based Measurement from a Behavioral Assessment Perspective

    ERIC Educational Resources Information Center

    Ardoin, Scott P.; Roof, Claire M.; Klubnick, Cynthia; Carfolite, Jessica

    2008-01-01

    Curriculum-based measurement in Reading (CBM-R) is an assessment procedure used to evaluate students' relative performance compared to peers and to evaluate their growth in reading. Within the response to intervention (RtI) model, CBM-R data are plotted in time-series fashion as a means of modeling individual students' response to varying levels of…

  6. Research and Evaluation in Operational Competency-Based Teacher Education Programs.

    ERIC Educational Resources Information Center

    Dickson, George E., Ed.

    1975-01-01

    This is a collection of papers presented at a 1974 conference on research and evaluation in operational competency-based teacher education (CBTE) programs. Two conceptual models for research and evaluation of CBTE activities were presented at the conference and the presentations of these models are the first two chapters of this collection: "A…

  7. Rule based design of conceptual models for formative evaluation

    NASA Technical Reports Server (NTRS)

    Moore, Loretta A.; Chang, Kai; Hale, Joseph P.; Bester, Terri; Rix, Thomas; Wang, Yaowen

    1994-01-01

    A Human-Computer Interface (HCI) Prototyping Environment with embedded evaluation capability has been investigated. This environment will be valuable in developing and refining HCI standards and evaluating program/project interface development, especially Space Station Freedom on-board displays for payload operations. This environment, which allows for rapid prototyping and evaluation of graphical interfaces, includes the following four components: (1) a HCI development tool; (2) a low fidelity simulator development tool; (3) a dynamic, interactive interface between the HCI and the simulator; and (4) an embedded evaluator that evaluates the adequacy of a HCI based on a user's performance. The embedded evaluation tool collects data while the user is interacting with the system and evaluates the adequacy of an interface based on a user's performance. This paper describes the design of conceptual models for the embedded evaluation system using a rule-based approach.

  8. Rule based design of conceptual models for formative evaluation

    NASA Technical Reports Server (NTRS)

    Moore, Loretta A.; Chang, Kai; Hale, Joseph P.; Bester, Terri; Rix, Thomas; Wang, Yaowen

    1994-01-01

    A Human-Computer Interface (HCI) Prototyping Environment with embedded evaluation capability has been investigated. This environment will be valuable in developing and refining HCI standards and evaluating program/project interface development, especially Space Station Freedom on-board displays for payload operations. This environment, which allows for rapid prototyping and evaluation of graphical interfaces, includes the following four components: (1) a HCI development tool, (2) a low fidelity simulator development tool, (3) a dynamic, interactive interface between the HCI and the simulator, and (4) an embedded evaluator that evaluates the adequacy of a HCI based on a user's performance. The embedded evaluation tool collects data while the user is interacting with the system and evaluates the adequacy of an interface based on a user's performance. This paper describes the design of conceptual models for the embedded evaluation system using a rule-based approach.

  9. Models of morality

    PubMed Central

    Crockett, Molly J.

    2013-01-01

    Moral dilemmas engender conflicts between two traditions: consequentialism, which evaluates actions based on their outcomes, and deontology, which evaluates actions themselves. These strikingly resemble two distinct decision-making architectures: a model-based system that selects actions based on inferences about their consequences; and a model-free system that selects actions based on their reinforcement history. Here, I consider how these systems, along with a Pavlovian system that responds reflexively to rewards and punishments, can illuminate puzzles in moral psychology. PMID:23845564

  10. Maximizing the Impact of Program Evaluation: A Discrepancy-Based Process for Educational Program Evaluation.

    ERIC Educational Resources Information Center

    Cantor, Jeffrey A.

    This paper describes a formative/summative process for educational program evaluation, which is appropriate for higher education programs and is based on M. Provus' Discrepancy Evaluation Model and the principles of instructional design. The Discrepancy Based Methodology for Educational Program Evaluation facilitates systematic and detailed…

  11. Applying Model Analysis to a Resource-Based Analysis of the Force and Motion Conceptual Evaluation

    ERIC Educational Resources Information Center

    Smith, Trevor I.; Wittmann, Michael C.; Carter, Tom

    2014-01-01

    Previously, we analyzed the Force and Motion Conceptual Evaluation in terms of a resources-based model that allows for clustering of questions so as to provide useful information on how students correctly or incorrectly reason about physics. In this paper, we apply model analysis to show that the associated model plots provide more information…

  12. Comprehensive Aspectual UML approach to support AspectJ.

    PubMed

    Magableh, Aws; Shukur, Zarina; Ali, Noorazean Mohd

    2014-01-01

    Unified Modeling Language is the most popular and widely used Object-Oriented modelling language in the IT industry. This study focuses on investigating the ability to expand UML to some extent to model crosscutting concerns (Aspects) to support AspectJ. Through a comprehensive literature review, we identify and extensively examine all the available Aspect-Oriented UML modelling approaches and find that the existing Aspect-Oriented Design Modelling approaches using UML cannot be considered to provide a framework for a comprehensive Aspectual UML modelling approach and also that there is a lack of adequate Aspect-Oriented tool support. This study also proposes a set of Aspectual UML semantic rules and attempts to generate AspectJ pseudocode from UML diagrams. The proposed Aspectual UML modelling approach is formally evaluated using a focus group to test six hypotheses regarding performance; a "good design" criteria-based evaluation to assess the quality of the design; and an AspectJ-based evaluation as a reference measurement-based evaluation. The results of the focus group evaluation confirm all the hypotheses put forward regarding the proposed approach. The proposed approach provides a comprehensive set of Aspectual UML structural and behavioral diagrams, which are designed and implemented based on a comprehensive and detailed set of AspectJ programming constructs.

  13. Comprehensive Aspectual UML Approach to Support AspectJ

    PubMed Central

    Magableh, Aws; Shukur, Zarina; Mohd. Ali, Noorazean

    2014-01-01

    Unified Modeling Language is the most popular and widely used Object-Oriented modelling language in the IT industry. This study focuses on investigating the ability to expand UML to some extent to model crosscutting concerns (Aspects) to support AspectJ. Through a comprehensive literature review, we identify and extensively examine all the available Aspect-Oriented UML modelling approaches and find that the existing Aspect-Oriented Design Modelling approaches using UML cannot be considered to provide a framework for a comprehensive Aspectual UML modelling approach and also that there is a lack of adequate Aspect-Oriented tool support. This study also proposes a set of Aspectual UML semantic rules and attempts to generate AspectJ pseudocode from UML diagrams. The proposed Aspectual UML modelling approach is formally evaluated using a focus group to test six hypotheses regarding performance; a “good design” criteria-based evaluation to assess the quality of the design; and an AspectJ-based evaluation as a reference measurement-based evaluation. The results of the focus group evaluation confirm all the hypotheses put forward regarding the proposed approach. The proposed approach provides a comprehensive set of Aspectual UML structural and behavioral diagrams, which are designed and implemented based on a comprehensive and detailed set of AspectJ programming constructs. PMID:25136656

  14. Using satellite observations in performance evaluation for regulatory air quality modeling: Comparison with ground-level measurements

    NASA Astrophysics Data System (ADS)

    Odman, M. T.; Hu, Y.; Russell, A.; Chai, T.; Lee, P.; Shankar, U.; Boylan, J.

    2012-12-01

    Regulatory air quality modeling, such as State Implementation Plan (SIP) modeling, requires that model performance meets recommended criteria in the base-year simulations using period-specific, estimated emissions. The goal of the performance evaluation is to assure that the base-year modeling accurately captures the observed chemical reality of the lower troposphere. Any significant deficiencies found in the performance evaluation must be corrected before any base-case (with typical emissions) and future-year modeling is conducted. Corrections are usually made to model inputs such as emission-rate estimates or meteorology and/or to the air quality model itself, in modules that describe specific processes. Use of ground-level measurements that follow approved protocols is recommended for evaluating model performance. However, ground-level monitoring networks are spatially sparse, especially for particulate matter. Satellite retrievals of atmospheric chemical properties such as aerosol optical depth (AOD) provide spatial coverage that can compensate for the sparseness of ground-level measurements. Satellite retrievals can also help diagnose potential model or data problems in the upper troposphere. It is possible to achieve good model performance near the ground, but have, for example, erroneous sources or sinks in the upper troposphere that may result in misleading and unrealistic responses to emission reductions. Despite these advantages, satellite retrievals are rarely used in model performance evaluation, especially for regulatory modeling purposes, due to the high uncertainty in retrievals associated with various contaminations, for example by clouds. In this study, 2007 was selected as the base year for SIP modeling in the southeastern U.S. Performance of the Community Multiscale Air Quality (CMAQ) model, at a 12-km horizontal resolution, for this annual simulation is evaluated using both recommended ground-level measurements and non-traditional satellite retrievals. Evaluation results are assessed against recommended criteria and peer studies in the literature. Further analysis is conducted, based upon these assessments, to discover likely errors in model inputs and potential deficiencies in the model itself. Correlations as well as differences in input errors and model deficiencies revealed by ground-level measurements versus satellite observations are discussed. Additionally, sensitivity analyses are employed to investigate errors in emission-rate estimates using either ground-level measurements or satellite retrievals, and the results are compared against each other considering observational uncertainties. Recommendations are made for how to effectively utilize satellite retrievals in regulatory air quality modeling.

  15. An Evaluation Model for Competency Based Teacher Preparatory Programs.

    ERIC Educational Resources Information Center

    Denton, Jon J.

    This discussion describes an evaluation model designed to complement a curriculum development project, the primary goal of which is to structure a performance based program for preservice teachers. Data collected from the implementation of this four-phase model can be used to make decisions for developing and changing performance objectives and…

  16. Modeling subjective evaluation of soundscape quality in urban open spaces: An artificial neural network approach.

    PubMed

    Yu, Lei; Kang, Jian

    2009-09-01

    This research aims to explore the feasibility of using computer-based models to predict the soundscape quality evaluation of potential users in urban open spaces at the design stage. With the data from large scale field surveys in 19 urban open spaces across Europe and China, the importance of various physical, behavioral, social, demographical, and psychological factors for the soundscape evaluation has been statistically analyzed. Artificial neural network (ANN) models have then been explored at three levels. It has been shown that for both subjective sound level and acoustic comfort evaluation, a general model for all the case study sites is less feasible due to the complex physical and social environments in urban open spaces; models based on individual case study sites perform well but the application range is limited; and specific models for certain types of location/function would be reliable and practical. The performance of acoustic comfort models is considerably better than that of sound level models. Based on the ANN models, soundscape quality maps can be produced and this has been demonstrated with an example.

  17. A Multi Criteria Group Decision-Making Model for Teacher Evaluation in Higher Education Based on Cloud Model and Decision Tree

    ERIC Educational Resources Information Center

    Chang, Ting-Cheng; Wang, Hui

    2016-01-01

    This paper proposes a cloud multi-criteria group decision-making model for teacher evaluation in higher education which is involving subjectivity, imprecision and fuzziness. First, selecting the appropriate evaluation index depending on the evaluation objectives, indicating a clear structural relationship between the evaluation index and…

  18. Evaluation of Smoking Prevention Television Messages Based on the Elaboration Likelihood Model

    ERIC Educational Resources Information Center

    Flynn, Brian S.; Worden, John K.; Bunn, Janice Yanushka; Connolly, Scott W.; Dorwaldt, Anne L.

    2011-01-01

    Progress in reducing youth smoking may depend on developing improved methods to communicate with higher risk youth. This study explored the potential of smoking prevention messages based on the Elaboration Likelihood Model (ELM) to address these needs. Structured evaluations of 12 smoking prevention messages based on three strategies derived from…

  19. Presenting an Evaluation Model for the Cancer Registry Software.

    PubMed

    Moghaddasi, Hamid; Asadi, Farkhondeh; Rabiei, Reza; Rahimi, Farough; Shahbodaghi, Reihaneh

    2017-12-01

    As cancer incidence is steadily growing, cancer registries are of great importance as the core of cancer control programs, and many different software programs have been designed for this purpose. Therefore, establishing a comprehensive evaluation model is essential for evaluating and comparing a wide range of such software. In this study, the criteria for cancer registry software were determined by studying the documentation and two functional software products in this field. The evaluation tool was a checklist, and in order to validate the model, this checklist was presented to experts in the form of a questionnaire. To analyze the results of validation, an agreement coefficient of 75% was set as the threshold for applying changes. Finally, when the model was approved, the final version of the evaluation model for cancer registry software was presented. The evaluation model of this study comprises an evaluation tool and an evaluation method. The evaluation tool is a checklist including the general and specific criteria of cancer registry software along with their sub-criteria. Based on the findings, a criteria-based evaluation method was chosen as the evaluation method for this study. The model encompasses the various dimensions of cancer registry software and a proper method for evaluating it. The strong point of this evaluation model is the separation between general criteria and specific ones, while maintaining the comprehensiveness of the criteria. Since this model has been validated, it can be used as a standard to evaluate cancer registry software.

  20. Uncertainty Evaluation and Appropriate Distribution for the RDHM in the Rockies

    NASA Astrophysics Data System (ADS)

    Kim, J.; Bastidas, L. A.; Clark, E. P.

    2010-12-01

    The problems that hydrologic models have in properly reproducing the processes involved in mountainous areas, and in particular the Rocky Mountains, are widely acknowledged. Herein, we present an application of the National Weather Service RDHM distributed model over the Durango River basin in Colorado. We focus primarily on the assessment of the model prediction uncertainty associated with parameter estimation and on the comparison of model performance using parameters obtained with a priori estimation following the procedure of Koren et al., versus those obtained via inverse modeling using a variety of Markov chain Monte Carlo-based optimization algorithms. The model evaluation is based on traditional procedures as well as non-traditional ones based on the use of shape-matching functions, which are more appropriate for the evaluation of distributed information (e.g., the Hausdorff distance and the earth mover's distance). The variables used for the model performance evaluation are discharge (with internal nodes), snow cover, and snow water equivalent. An attempt to establish the proper degree of distribution for the Durango basin with the RDHM model is also presented.
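One of the shape-matching measures mentioned, the Hausdorff distance, compares two spatial patterns (e.g. a simulated versus an observed snow-cover extent) as point sets. The (x, y) cell-coordinate encoding below is an illustrative assumption for the sketch.

```python
import math

def hausdorff(set_a, set_b):
    """Symmetric Hausdorff distance between two non-empty point sets:
    the largest distance from any point in one set to its nearest
    neighbor in the other set.  O(n*m); fine for small grids."""
    def directed(xs, ys):
        return max(min(math.dist(p, q) for q in ys) for p in xs)
    return max(directed(set_a, set_b), directed(set_b, set_a))

# Two small illustrative patterns of "covered" grid cells.
observed  = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]
simulated = [(0.0, 0.0), (1.0, 0.0), (2.0, 1.0)]
mismatch = hausdorff(observed, simulated)
```

Unlike a cell-by-cell error sum, this distance penalizes how far the worst-matched feature is displaced, which is why such measures suit the evaluation of spatially distributed model output.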

  1. Method Development for Clinical Comprehensive Evaluation of Pediatric Drugs Based on Multi-Criteria Decision Analysis: Application to Inhaled Corticosteroids for Children with Asthma.

    PubMed

    Yu, Yuncui; Jia, Lulu; Meng, Yao; Hu, Lihua; Liu, Yiwei; Nie, Xiaolu; Zhang, Meng; Zhang, Xuan; Han, Sheng; Peng, Xiaoxia; Wang, Xiaoling

    2018-04-01

    Establishing a comprehensive clinical evaluation system is critical in enacting national drug policy and promoting rational drug use. In China, the 'Clinical Comprehensive Evaluation System for Pediatric Drugs' (CCES-P) project, which aims to compare drugs based on clinical efficacy and cost effectiveness to help decision makers, was recently proposed; therefore, a systematic and objective method is required to guide the process. An evidence-based multi-criteria decision analysis model involving the analytic hierarchy process (AHP) was developed, consisting of nine steps: (1) select the drugs to be reviewed; (2) establish the evaluation criterion system; (3) determine the criterion weights based on the AHP; (4) construct the body of evidence for each drug under evaluation; (5) select comparative measures and calculate the original utility score; (6) map scores onto a common utility scale and calculate the standardized utility score; (7) calculate the comprehensive utility score; (8) rank the drugs; and (9) perform a sensitivity analysis. The model was applied to the evaluation of three different inhaled corticosteroids (ICSs) used for asthma management in children (a total of 16 drugs with different dosage forms and strengths or different manufacturers). By applying the model, the 16 ICSs under review were successfully scored and evaluated. Budesonide suspension for inhalation (drug ID number: 7) ranked highest, with a comprehensive utility score of 80.23, followed by fluticasone propionate inhaled aerosol (drug ID number: 16), with a score of 79.59, and budesonide inhalation powder (drug ID number: 6), with a score of 78.98. In the sensitivity analysis, the rankings of the top five and lowest five drugs remained unchanged, suggesting that the model is generally robust. An evidence-based drug evaluation model based on the AHP was successfully developed. The model incorporates sufficient utility and flexibility to aid the decision-making process, and can be a useful tool for the CCES-P.
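    Step (3), deriving criterion weights with the AHP, is commonly done via the principal eigenvector of a pairwise comparison matrix. The geometric-mean approximation below is a minimal sketch; the matrix values and criterion names are illustrative, not taken from the study:

```python
import math

# AHP criterion weights via the geometric-mean approximation to the
# principal eigenvector, plus a consistency check.

def ahp_weights(M):
    gm = [math.prod(row) ** (1.0 / len(row)) for row in M]
    total = sum(gm)
    return [g / total for g in gm]

def consistency_ratio(M, w, random_index={3: 0.58, 4: 0.90, 5: 1.12}):
    n = len(M)
    # lambda_max estimated as the mean of (M w)_i / w_i
    lam = sum(sum(M[i][j] * w[j] for j in range(n)) / w[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    return ci / random_index[n]

# Illustrative pairwise comparisons for three hypothetical criteria
# (e.g., efficacy vs. safety vs. cost).
M = [
    [1.0, 2.0, 4.0],
    [0.5, 1.0, 2.0],
    [0.25, 0.5, 1.0],
]
w = ahp_weights(M)
print(w)                        # approx [0.571, 0.286, 0.143]
print(consistency_ratio(M, w))  # near 0 for a perfectly consistent matrix
```

    A consistency ratio below 0.1 is the conventional acceptance threshold for AHP judgments.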

  2. Evaluating energy saving system of data centers based on AHP and fuzzy comprehensive evaluation model

    NASA Astrophysics Data System (ADS)

    Jiang, Yingni

    2018-03-01

    Due to the high energy consumption of communication systems, energy saving in data centers must be enforced. However, the lack of evaluation mechanisms has held back energy-saving construction in data centers. In this paper, an energy saving evaluation index system for data centers was constructed after clarifying the influencing factors. Based on this index system, the analytic hierarchy process was used to determine the weights of the evaluation indexes. Subsequently, a three-grade fuzzy comprehensive evaluation model was constructed to evaluate the energy saving systems of data centers.
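    The combination described above reduces to composing an AHP weight vector with a fuzzy membership matrix and picking the grade with the largest membership. A minimal sketch, with hypothetical index names, weights and memberships:

```python
# Fuzzy comprehensive evaluation: compose index weights with a membership
# matrix R (rows = indexes, columns = grades) using the weighted-average
# operator, then apply the maximum-membership principle.
# Index names, weights and memberships are hypothetical.

def fuzzy_comprehensive(weights, R):
    grades = len(R[0])
    return [sum(w * row[j] for w, row in zip(weights, R)) for j in range(grades)]

weights = [0.5, 0.3, 0.2]   # cooling efficiency, IT load management, airflow
R = [
    [0.6, 0.3, 0.1],        # memberships in grades (good, fair, poor)
    [0.2, 0.5, 0.3],
    [0.3, 0.4, 0.3],
]
B = fuzzy_comprehensive(weights, R)
grade = ("good", "fair", "poor")[B.index(max(B))]
print(B)      # approx [0.42, 0.38, 0.20]
print(grade)  # good
```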

  3. Evaluation of Model Recognition for Grammar-Based Automatic 3d Building Model Reconstruction

    NASA Astrophysics Data System (ADS)

    Yu, Qian; Helmholz, Petra; Belton, David

    2016-06-01

    In recent years, 3D city models have been in high demand by many public and private organisations, and steady growth in both their quality and quantity is further increasing that demand. The quality evaluation of these 3D models is a relevant issue from both the scientific and practical points of view. In this paper, we present a method for the quality evaluation of 3D building models that are reconstructed automatically from terrestrial laser scanning (TLS) data based on an attributed building grammar. The entire evaluation is performed in all three dimensions in terms of completeness and correctness of the reconstruction. Six quality measures are introduced and applied to four datasets of reconstructed building models in order to describe the quality of the automatic reconstruction, and the measures themselves are assessed for validity from the evaluation point of view.
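    Completeness and correctness are conventionally defined from counts of matched, missed, and spurious reconstructed elements. A minimal sketch (the counts are illustrative, not from the four datasets):

```python
# Completeness (detection rate) and correctness (precision) for a
# reconstruction, from true-positive / false-negative / false-positive
# element counts; the counts here are illustrative.

def completeness(tp, fn):
    """Share of reference elements that were reconstructed: TP / (TP + FN)."""
    return tp / (tp + fn)

def correctness(tp, fp):
    """Share of reconstructed elements that exist in reality: TP / (TP + FP)."""
    return tp / (tp + fp)

tp, fn, fp = 80, 20, 10      # matched, missed, spurious elements
print(completeness(tp, fn))  # 0.8
print(correctness(tp, fp))   # approx 0.889
```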

  4. From creatures of habit to goal-directed learners: Tracking the developmental emergence of model-based reinforcement learning

    PubMed Central

    Decker, Johannes H.; Otto, A. Ross; Daw, Nathaniel D.; Hartley, Catherine A.

    2016-01-01

    Theoretical models distinguish two decision-making strategies that have been formalized in reinforcement-learning theory. A model-based strategy leverages a cognitive model of potential actions and their consequences to make goal-directed choices, whereas a model-free strategy evaluates actions based solely on their reward history. Research in adults has begun to elucidate the psychological mechanisms and neural substrates underlying these learning processes and factors that influence their relative recruitment. However, the developmental trajectory of these evaluative strategies has not been well characterized. In this study, children, adolescents, and adults performed a sequential reinforcement-learning task that enables estimation of model-based and model-free contributions to choice. Whereas a model-free strategy was evident in choice behavior across all age groups, evidence of a model-based strategy only emerged during adolescence and continued to increase into adulthood. These results suggest that recruitment of model-based valuation systems represents a critical cognitive component underlying the gradual maturation of goal-directed behavior. PMID:27084852

  5. Comparative evaluation of urban storm water quality models

    NASA Astrophysics Data System (ADS)

    Vaze, J.; Chiew, Francis H. S.

    2003-10-01

    The estimation of urban storm water pollutant loads is required for the development of mitigation and management strategies to minimize impacts to receiving environments. Event pollutant loads are typically estimated using either regression equations or "process-based" water quality models. The relative merit of using regression models compared to process-based models is not clear. A modeling study is carried out here to evaluate the comparative ability of the regression equations and process-based water quality models to estimate event diffuse pollutant loads from impervious surfaces. The results indicate that, once calibrated, both the regression equations and the process-based model can estimate event pollutant loads satisfactorily. In fact, the loads estimated using the regression equation as a function of rainfall intensity and runoff rate are better than the loads estimated using the process-based model. Therefore, if only estimates of event loads are required, regression models should be used because they are simpler and require less data compared to process-based models.
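    A regression model of the kind evaluated above can be as simple as an ordinary least-squares fit of event load against a single predictor. The paper's regressions use rainfall intensity and runoff rate; this single-predictor closed-form sketch and its data are purely illustrative:

```python
# Ordinary least-squares fit of event pollutant load against rainfall
# intensity; closed-form slope and intercept. Data are synthetic.

def fit_linear(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx, slope   # (intercept, slope)

intensity = [5.0, 10.0, 20.0, 40.0]    # mm/h (synthetic)
load      = [12.0, 21.0, 42.0, 80.0]   # kg per event (synthetic)
a, b = fit_linear(intensity, load)
predicted = a + b * 15.0               # load estimate for a 15 mm/h event
```

    Once calibrated on observed events, such a regression needs only rainfall data to predict a new event load, which is the simplicity advantage the study highlights.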

  6. An Approach to the Evaluation of Hypermedia.

    ERIC Educational Resources Information Center

    Knussen, Christina; And Others

    1991-01-01

    Discusses methods that may be applied to the evaluation of hypermedia, based on six models described by Lawton. Techniques described include observation, self-report measures, interviews, automated measures, psychometric tests, checklists and criterion-based techniques, process models, Experimentally Measuring Usability (EMU), and a naturalistic…

  7. A proposed model for economic evaluations of major depressive disorder.

    PubMed

    Haji Ali Afzali, Hossein; Karnon, Jonathan; Gray, Jodi

    2012-08-01

    In countries like the UK and Australia, the comparability of model-based analyses is an essential aspect of reimbursement decisions for new pharmaceuticals, medical services and technologies. Within disease areas, the use of models with alternative structures, types of modelling techniques and/or data sources for common parameters reduces the comparability of evaluations of alternative technologies for the same condition. The aim of this paper is to propose a decision analytic model to evaluate the long-term costs and benefits of alternative management options in patients with depression. The structure of the proposed model is based on the natural history of depression and includes clinical events that are important from both clinical and economic perspectives. Considering its greater flexibility with respect to handling time, discrete event simulation (DES) is an appropriate simulation platform for modelling studies of depression. We argue that the proposed model can be used as a reference model in model-based studies of depression, improving the quality and comparability of such studies.

  8. Objectively Determining the Educational Potential of Computer and Video-Based Courseware; or, Producing Reliable Evaluations Despite the Dog and Pony Show.

    ERIC Educational Resources Information Center

    Barrett, Andrew J.; And Others

    The Center for Interactive Technology, Applications, and Research at the College of Engineering of the University of South Florida (Tampa) has developed objective and descriptive evaluation models to assist in determining the educational potential of computer and video courseware. The computer-based courseware evaluation model and the video-based…

  9. Evaluation of an alcohol-based surgical hand disinfectant containing a synergistic combination of farnesol and benzethonium chloride for immediate and persistent activity against resident hand flora of volunteers and with a novel in vitro pig skin model.

    PubMed

    Shintre, Milind S; Gaonkar, Trupti A; Modak, Shanta M

    2007-02-01

    To evaluate the immediate, persistent and sustained in vivo activity of an alcohol-based surgical hand disinfectant, consisting of a zinc gel and a preservative system containing a synergistic combination of farnesol and benzethonium chloride (ZBF disinfectant), and to develop a pig skin model for in vitro evaluation of the immediate and persistent efficacy of alcohol-based surgical hand disinfectants against resident hand flora. The in vivo immediate, persistent, and sustained activity of ZBF disinfectant was evaluated using human volunteers and the "glove-juice" method described in the US Food and Drug Administration's Tentative Final Monograph (FDA-TFM) for Healthcare Antiseptic Products. A novel in vitro pig skin model was developed to compare the immediate and persistent activity of alcohol-based surgical hand disinfectants against resident flora using Staphylococcus epidermidis as the test organism. Four alcohol-based surgical hand disinfectants were evaluated using this model. The results for the ZBF disinfectant exceed the FDA-TFM criteria for immediate, persistent, and sustained activity required for surgical hand disinfectants. The reduction factors for the 4 hand disinfectants obtained using the pig skin model show good agreement with the log(10) reductions in concentrations of hand flora obtained using human volunteers to test for immediate and persistent activity. The ZBF disinfectant we evaluated met the FDA-TFM criteria for surgical hand disinfectants. The immediate and persistent efficacy of the surgical hand disinfectants evaluated with the novel pig skin model described in this study shows good agreement with the results obtained in vivo.

  10. Evaluating models of healthcare delivery using the Model of Care Evaluation Tool (MCET).

    PubMed

    Hudspeth, Randall S; Vogt, Marjorie; Wysocki, Ken; Pittman, Oralea; Smith, Susan; Cooke, Cindy; Dello Stritto, Rita; Hoyt, Karen Sue; Merritt, T Jeanne

    2016-08-01

    Our aim was to provide the outcome of a structured Model of Care (MoC) Evaluation Tool (MCET), developed by an FAANP Best-practices Workgroup, that can be used to guide the evaluation of existing MoCs being considered for use in clinical practice. Multiple MoCs are available, but deciding which model of health care delivery to use can be confusing. This five-component tool provides a structured assessment approach to model selection and has universal application. A literature review using CINAHL, PubMed, Ovid, and EBSCO was conducted. The MCET evaluation process includes five sequential components with a feedback loop from component 5 back to component 3 for reevaluation of any refinements. The components are as follows: (1) Background, (2) Selection of an MoC, (3) Implementation, (4) Evaluation, and (5) Sustainability and Future Refinement. This practical resource considers an evidence-based approach to use in determining the best model to implement based on need, stakeholder considerations, and feasibility. ©2015 American Association of Nurse Practitioners.

  11. Introducing Multisensor Satellite Radiance-Based Evaluation for Regional Earth System Modeling

    NASA Technical Reports Server (NTRS)

    Matsui, T.; Santanello, J.; Shi, J. J.; Tao, W.-K.; Wu, D.; Peters-Lidard, C.; Kemp, E.; Chin, M.; Starr, D.; Sekiguchi, M.; et al.

    2014-01-01

    Earth System modeling has become more complex, and its evaluation using satellite data has also become more difficult due to model and data diversity. Therefore, the fundamental methodology of using direct satellite measurements with instrument simulators should be addressed, especially for modeling community members lacking a solid background in radiative transfer and scattering theory. This manuscript introduces principles of multisatellite, multisensor radiance-based evaluation methods for a fully coupled regional Earth System model: the NASA-Unified Weather Research and Forecasting (NU-WRF) model. We use a NU-WRF case study simulation over West Africa as an example of evaluating aerosol-cloud-precipitation-land processes with various satellite observations. NU-WRF-simulated geophysical parameters are converted to the satellite-observable raw radiance and backscatter under nearly consistent physics assumptions via the multisensor satellite simulator, the Goddard Satellite Data Simulator Unit. We present varied examples of simple yet robust methods that characterize forecast errors and model physics biases through the spatial and statistical interpretation of various satellite raw signals: infrared brightness temperature (Tb) for surface skin temperature and cloud top temperature, microwave Tb for precipitation ice and surface flooding, and radar and lidar backscatter for simultaneous aerosol-cloud profiling. Because raw satellite signals integrate many sources of geophysical information, we demonstrate user-defined thresholds and a simple statistical process to facilitate evaluations, including infrared-microwave-based cloud types and lidar/radar-based profile classifications.

  12. A Java-based fMRI processing pipeline evaluation system for assessment of univariate general linear model and multivariate canonical variate analysis-based pipelines.

    PubMed

    Zhang, Jing; Liang, Lichen; Anderson, Jon R; Gatewood, Lael; Rottenberg, David A; Strother, Stephen C

    2008-01-01

    As functional magnetic resonance imaging (fMRI) becomes widely used, the demands for evaluation of fMRI processing pipelines and validation of fMRI analysis results are increasing rapidly. The current NPAIRS package, an IDL-based fMRI processing pipeline evaluation framework, lacks system interoperability and the ability to evaluate general linear model (GLM)-based pipelines using prediction metrics. Thus, it cannot fully evaluate fMRI analytical software modules such as FSL.FEAT and NPAIRS.GLM. In order to overcome these limitations, a Java-based fMRI processing pipeline evaluation system was developed. It integrated YALE (a machine learning environment) into Fiswidgets (an fMRI software environment) to obtain system interoperability and applied an algorithm to measure GLM prediction accuracy. The results demonstrated that the system can evaluate fMRI processing pipelines with univariate GLM and multivariate canonical variates analysis (CVA)-based models on real fMRI data based on prediction accuracy (classification accuracy) and statistical parametric image (SPI) reproducibility. In addition, a preliminary study was performed in which four fMRI processing pipelines with GLM and CVA modules, such as FSL.FEAT and NPAIRS.CVA, were evaluated with the system. The results indicated that (1) the system can compare different fMRI processing pipelines with heterogeneous models (NPAIRS.GLM, NPAIRS.CVA and FSL.FEAT) and rank their performance by automatic performance scoring, and (2) the ranking of pipeline performance is highly dependent on the preprocessing operations. These results suggest that the system will be of value for the comparison, validation, standardization and optimization of functional neuroimaging software packages and fMRI processing pipelines.

  13. Approaches for the Application of Physiologically Based ...

    EPA Pesticide Factsheets

    EPA released the final report, Approaches for the Application of Physiologically Based Pharmacokinetic (PBPK) Models and Supporting Data in Risk Assessment, as announced in a September 22, 2006 Federal Register Notice. This final report addresses the application and evaluation of PBPK models for risk assessment purposes. These models represent an important class of dosimetry models that are useful for predicting internal dose at target organs for risk assessment applications.

  14. Software for occupational health and safety risk analysis based on a fuzzy model.

    PubMed

    Stefanovic, Miladin; Tadic, Danijela; Djapan, Marko; Macuzic, Ivan

    2012-01-01

    Risk and safety management are very important issues in healthcare systems, which are complex systems with many entities, hazards and uncertainties. In such an environment, it is very hard to introduce a system for evaluating and simulating significant hazards. In this paper, we analyze different types of hazards in healthcare systems and introduce a new fuzzy model for evaluating and ranking them. Finally, we present a developed software solution, based on the suggested fuzzy model, for evaluating and monitoring risk.

  15. Multiple attribute decision making model and application to food safety risk evaluation.

    PubMed

    Ma, Lihua; Chen, Hong; Yan, Huizhe; Yang, Lifeng; Wu, Lifeng

    2017-01-01

    Supermarket food purchase decisions are characterized by network relationships. This paper analyzes the factors that influence supermarket food selection and proposes a supplier evaluation index system based on the whole process of food production. The authors established an interval-valued intuitionistic fuzzy set evaluation model based on the characteristics of the network relationships among decision makers, and validated it with a multiple attribute decision making case study. The proposed model thus provides a reliable, accurate method for multiple attribute decision making.

  16. Survival modeling for the estimation of transition probabilities in model-based economic evaluations in the absence of individual patient data: a tutorial.

    PubMed

    Diaby, Vakaramoko; Adunlin, Georges; Montero, Alberto J

    2014-02-01

    Survival modeling techniques are increasingly being used as part of decision modeling for health economic evaluations. As many models are available, it is imperative for interested readers to know about the steps in selecting and using the most suitable ones. The objective of this paper is to propose a tutorial for the application of appropriate survival modeling techniques to estimate transition probabilities, for use in model-based economic evaluations, in the absence of individual patient data (IPD). An illustration of the use of the tutorial is provided based on the final progression-free survival (PFS) analysis of the BOLERO-2 trial in metastatic breast cancer (mBC). An algorithm was adopted from Guyot and colleagues, and was then run in the statistical package R to reconstruct IPD, based on the final PFS analysis of the BOLERO-2 trial. It should be emphasized that the reconstructed IPD represent an approximation of the original data. Afterwards, we fitted parametric models to the reconstructed IPD in the statistical package Stata. Both statistical and graphical tests were conducted to verify the relative and absolute validity of the findings. Finally, the equations for transition probabilities were derived using the general equation for transition probabilities used in model-based economic evaluations, and the parameters were estimated from fitted distributions. The results of the application of the tutorial suggest that the log-logistic model best fits the reconstructed data from the latest published Kaplan-Meier (KM) curves of the BOLERO-2 trial. Results from the regression analyses were confirmed graphically. An equation for transition probabilities was obtained for each arm of the BOLERO-2 trial. In this paper, a tutorial was proposed and used to estimate the transition probabilities for model-based economic evaluation, based on the results of the final PFS analysis of the BOLERO-2 trial in mBC. 
The results of our study can serve as a basis for any model (Markov) that needs the parameterization of transition probabilities, and only has summary KM plots available.
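    The general equation for transition probabilities referred to above is tp(t) = 1 - S(t)/S(t - u) for cycle length u. A minimal sketch under one common log-logistic parameterization; the parameter values are illustrative, not the BOLERO-2 estimates:

```python
# Cycle-specific transition probability from a fitted survival curve:
# tp(t) = 1 - S(t) / S(t - u), with u the cycle length. One common
# log-logistic parameterization: S(t) = 1 / (1 + (lam * t) ** gamma).
# Parameter values are illustrative, not the BOLERO-2 estimates.

def loglogistic_survival(t, lam, gamma):
    return 1.0 / (1.0 + (lam * t) ** gamma)

def transition_prob(t, cycle, lam, gamma):
    s_now = loglogistic_survival(t, lam, gamma)
    s_prev = loglogistic_survival(t - cycle, lam, gamma) if t > cycle else 1.0
    return 1.0 - s_now / s_prev

# e.g., monthly model cycles with lam = 0.1 per month, gamma = 1.5
tps = [transition_prob(t, 1.0, 0.1, 1.5) for t in range(1, 25)]
```

    Because the log-logistic hazard is not constant, the transition probability varies by cycle, which is why time-dependent probabilities rather than a single fixed rate are fed into the Markov model.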

  17. Multi-objective optimization for generating a weighted multi-model ensemble

    NASA Astrophysics Data System (ADS)

    Lee, H.

    2017-12-01

    Many studies have demonstrated that multi-model ensembles generally show better skill than each ensemble member. When generating weighted multi-model ensembles, the first step is measuring the performance of individual model simulations using observations. There is a consensus on the assignment of weighting factors based on a single evaluation metric: the weighting factor for each model is proportional to its performance score or inversely proportional to its error. While this conventional approach can provide appropriate combinations of multiple models, it faces a major challenge when there are multiple metrics under consideration. With multiple evaluation metrics, a simple averaging of performance scores or model ranks does not address the trade-off between conflicting metrics, and so far there is no accepted best method for generating weighted multi-model ensembles from multiple performance metrics. The current study applies multi-objective optimization, a mathematical process that provides a set of optimal trade-off solutions based on a range of evaluation metrics, to combine multiple performance metrics for global climate models and their dynamically downscaled regional climate simulations over North America and to generate a weighted multi-model ensemble. NASA satellite data and the Regional Climate Model Evaluation System (RCMES) software toolkit are used for assessment of the climate simulations. Overall, the performance of each model differs markedly, with strong seasonal dependence. Because of the considerable variability across the climate simulations, it is important to evaluate models systematically and make future projections by assigning optimized weighting factors to the models with relatively good performance.
Our results indicate that the optimally weighted multi-model ensemble always shows better performance than an arithmetic ensemble mean and may provide reliable future projections.
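    The single-metric baseline described above assigns each model a weight inversely proportional to its error. The sketch below contrasts such an inverse-RMSE-weighted ensemble with the arithmetic mean; observations and model outputs are synthetic:

```python
# Inverse-error weighting for a two-model ensemble, compared against the
# plain arithmetic ensemble mean. All data are synthetic.

def rmse(pred, obs):
    return (sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)) ** 0.5

def inverse_error_weights(errors):
    inv = [1.0 / e for e in errors]
    total = sum(inv)
    return [i / total for i in inv]

obs     = [1.0, 2.0, 3.0, 4.0]
model_a = [1.1, 2.1, 2.9, 4.2]   # small errors
model_b = [2.0, 3.0, 4.0, 5.0]   # constant +1 bias
models  = [model_a, model_b]

weights = inverse_error_weights([rmse(m, obs) for m in models])
weighted   = [sum(w * m[k] for w, m in zip(weights, models)) for k in range(len(obs))]
arithmetic = [sum(m[k] for m in models) / len(models) for k in range(len(obs))]
print(rmse(weighted, obs), rmse(arithmetic, obs))  # weighted beats the plain mean
```

    Multi-objective optimization generalizes this idea: instead of one error measure, a set of Pareto-optimal weight vectors is sought across several conflicting metrics.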

  18. Test of Antifibrotic Drugs in a Cellular Model of Fibrosis Based on Muscle-Derived Fibroblasts from Duchenne Muscular Dystrophy Patients.

    PubMed

    Zanotti, Simona; Mora, Marina

    2018-01-01

    An in vitro model of muscle fibrosis, based on primary human fibroblasts isolated from muscle biopsies of patients affected by Duchenne muscular dystrophy (DMD) and cultured in monolayer and 3D conditions, is used to test the potential antifibrotic activity of pirfenidone (PFD). This in vitro model may also be useful for evaluating the toxicity and efficacy of other candidate molecules for the treatment of fibrosis. Drug toxicity is evaluated using a colorimetric assay based on the conversion of a tetrazolium salt (MTT) to insoluble formazan, while the effect of the drug on cell proliferation is measured with the bromodeoxyuridine incorporation assay. The efficacy of the drug is evaluated in fibroblast monolayers by quantitating the synthesis and deposition of intracellular collagen with a spectrophotometric picrosirius red-based assay, and by quantitating cell migration using a "scratch" assay. The efficacy of PFD as an antifibrotic drug is also evaluated in a 3D fibroblast model by measuring the diameters and number of nodules.

  19. Disability Policy Evaluation: Combining Logic Models and Systems Thinking.

    PubMed

    Claes, Claudia; Ferket, Neelke; Vandevelde, Stijn; Verlet, Dries; De Maeyer, Jessica

    2017-07-01

    Policy evaluation focuses on the assessment of policy-related personal, family, and societal changes or benefits that follow as a result of the interventions, services, and supports provided to those persons to whom the policy is directed. This article describes a systematic approach to policy evaluation based on an evaluation framework and an evaluation process that combine the use of logic models and systems thinking. The article also includes an example of how the framework and process have recently been used in policy development and evaluation in Flanders (Belgium), as well as four policy evaluation guidelines based on relevant published literature.

  20. Catchment area-based evaluation of the AMC-dependent SCS-CN-based rainfall-runoff models

    NASA Astrophysics Data System (ADS)

    Mishra, S. K.; Jain, M. K.; Pandey, R. P.; Singh, V. P.

    2005-09-01

    Using a large set of rainfall-runoff data from 234 watersheds in the USA, a catchment area-based evaluation of the modified version of the Mishra and Singh (2002a) model was performed. The model is based on the Soil Conservation Service Curve Number (SCS-CN) methodology and incorporates the antecedent moisture in computation of direct surface runoff. Comparison with the existing SCS-CN method showed that the modified version performed better than did the existing one on the data of all seven area-based groups of watersheds ranging from 0.01 to 310.3 km2.
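    The SCS-CN relation underlying both compared models is Q = (P - Ia)^2 / (P - Ia + S), with S = 25400/CN - 254 in mm and initial abstraction Ia = 0.2 S. The sketch below implements the standard method only; the modified antecedent-moisture version evaluated in the paper is not reproduced here:

```python
# Standard SCS-CN direct runoff (mm) for event rainfall P (mm) and curve
# number CN. The Mishra-Singh modified version additionally incorporates
# antecedent moisture, which this sketch omits.

def scs_cn_runoff(P, CN, lam=0.2):
    S = 25400.0 / CN - 254.0   # potential maximum retention (mm)
    Ia = lam * S               # initial abstraction
    if P <= Ia:
        return 0.0
    return (P - Ia) ** 2 / (P - Ia + S)

print(scs_cn_runoff(100.0, 80))  # approx 50.5 mm of direct runoff
print(scs_cn_runoff(10.0, 80))   # 0.0 - rainfall below initial abstraction
```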

  1. A Two-Stage Multi-Agent Based Assessment Approach to Enhance Students' Learning Motivation through Negotiated Skills Assessment

    ERIC Educational Resources Information Center

    Chadli, Abdelhafid; Bendella, Fatima; Tranvouez, Erwan

    2015-01-01

    In this paper we present an agent-based evaluation approach in the context of multi-agent simulation learning systems. Our evaluation model is based on a two-stage assessment approach: (1) a distributed skill evaluation combining agents and fuzzy set theory; and (2) a negotiation-based evaluation of students' performance during a training…

  2. Evaluation of Student Models on Current Socio-Scientific Topics Based on System Dynamics

    ERIC Educational Resources Information Center

    Nuhoglu, Hasret

    2014-01-01

    This study aims to 1) enable primary school students to develop models that will help them understand and analyze a system, through a learning process based on system dynamics approach, 2) examine and evaluate students' models related to socio-scientific issues using certain criteria. The research method used is a case study. The study sample…

  3. Function-based payment model for inpatient medical rehabilitation: an evaluation.

    PubMed

    Sutton, J P; DeJong, G; Wilkerson, D

    1996-07-01

    To describe the components of a function-based prospective payment model for inpatient medical rehabilitation that parallels diagnosis-related groups (DRGs), to evaluate this model in relation to stakeholder objectives, and to detail the components of a quality of care incentive program that, when combined with this payment model, creates an incentive for providers to maximize functional outcomes. This article describes a conceptual model, involving no data collection or data synthesis. The basic payment model described parallels DRGs. Information on the potential impact of this model on medical rehabilitation is gleaned from the literature evaluating the impact of DRGs. The conceptual model described is evaluated against the results of a Delphi survey of rehabilitation providers, consumers, policymakers, and researchers previously conducted by members of the research team. The major shortcoming of a function-based prospective payment model for inpatient medical rehabilitation is that it contains no inherent incentive to maximize functional outcomes. Linking reimbursement to outcomes, however, by withholding a fixed proportion of the standard FRG payment amount, placing that amount in a "quality of care" pool, and distributing that pool annually among providers whose predesignated, facility-level, case-mix-adjusted outcomes are attained, may be one strategy for maximizing outcome goals.

  4. Using Computer Simulations for Promoting Model-based Reasoning. Epistemological and Educational Dimensions

    NASA Astrophysics Data System (ADS)

    Develaki, Maria

    2017-11-01

    Scientific reasoning is particularly pertinent to science education since it is closely related to the content and methodologies of science and contributes to scientific literacy. Much of the research in science education investigates the appropriate framework and teaching methods and tools needed to promote students' ability to reason and evaluate in a scientific way. This paper aims (a) to contribute to an extended understanding of the nature and pedagogical importance of model-based reasoning and (b) to exemplify how using computer simulations can support students' model-based reasoning. We provide first a background for both scientific reasoning and computer simulations, based on the relevant philosophical views and the related educational discussion. This background suggests that the model-based framework provides an epistemologically valid and pedagogically appropriate basis for teaching scientific reasoning and for helping students develop sounder reasoning and decision-taking abilities and explains how using computer simulations can foster these abilities. We then provide some examples illustrating the use of computer simulations to support model-based reasoning and evaluation activities in the classroom. The examples reflect the procedure and criteria for evaluating models in science and demonstrate the educational advantages of their application in classroom reasoning activities.

  5. A sound quality model for objective synthesis evaluation of vehicle interior noise based on artificial neural network

    NASA Astrophysics Data System (ADS)

    Wang, Y. S.; Shen, G. Q.; Xing, Y. F.

    2014-03-01

    Based on the artificial neural network (ANN) technique, an objective sound quality evaluation (SQE) model for the synthesized annoyance of vehicle interior noises is presented in this paper. Following the standard GB/T18697, the interior noises of a sample vehicle under different working conditions are first measured and saved in a noise database. Mathematical models for the loudness, sharpness and roughness of the measured vehicle noises are established and implemented in Matlab. Sound qualities of the vehicle interior noises are also rated in jury tests following the anchored semantic differential (ASD) procedure. Using the objective and subjective evaluation results, an ANN-based model for synthetical annoyance evaluation of vehicle noises, called ANN-SAE, is then developed. Finally, the ANN-SAE model is validated by verification tests with the leave-one-out algorithm. The results suggest that the proposed ANN-SAE model is accurate and effective and can be used directly to estimate the sound quality of vehicle interior noises, which is very helpful for vehicle acoustical design and improvement. The ANN-SAE approach may be extended to other sound-related fields for product quality evaluation in SQE engineering.

  6. A condition metric for Eucalyptus woodland derived from expert evaluations.

    PubMed

    Sinclair, Steve J; Bruce, Matthew J; Griffioen, Peter; Dodd, Amanda; White, Matthew D

    2018-02-01

    The evaluation of ecosystem quality is important for land-management and land-use planning. Evaluation is unavoidably subjective, and robust metrics must be based on consensus and the structured use of observations. We devised a transparent and repeatable process for building and testing ecosystem metrics based on expert data. We gathered quantitative evaluation data on the quality of hypothetical grassy woodland sites from experts. We used these data to train a model (an ensemble of 30 bagged regression trees) capable of predicting the perceived quality of similar hypothetical woodlands based on a set of 13 site variables as inputs (e.g., cover of shrubs, richness of native forbs). These variables can be measured at any site and the model implemented in a spreadsheet as a metric of woodland quality. We also investigated the number of experts required to produce an opinion data set sufficient for the construction of a metric. The model produced evaluations similar to those provided by experts, as shown by assessing the model's quality scores of expert-evaluated test sites not used to train the model. We applied the metric to 13 woodland conservation reserves and asked managers of these sites to independently evaluate their quality. To assess metric performance, we compared the model's evaluation of site quality with the managers' evaluations through multidimensional scaling. The metric performed relatively well, plotting close to the center of the space defined by the evaluators. Given the method provides data-driven consensus and repeatability, which no single human evaluator can provide, we suggest it is a valuable tool for evaluating ecosystem quality in real-world contexts. We believe our approach is applicable to any ecosystem. © 2017 State of Victoria.
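A bagged regression ensemble of the kind described (30 trees trained on bootstrap resamples, predictions averaged) can be sketched as follows. The depth-1 "stumps", the two site variables, and the expert quality scores are all simplified stand-ins for the article's 13-variable, full-tree model.

```python
import random

def fit_stump(X, y):
    """Depth-1 regression tree: pick the (feature, threshold) split
    minimizing squared error; fall back to the mean if no split exists."""
    best = None
    for f in range(len(X[0])):
        for t in sorted(set(row[f] for row in X))[:-1]:
            left = [y[i] for i, row in enumerate(X) if row[f] <= t]
            right = [y[i] for i, row in enumerate(X) if row[f] > t]
            ml, mr = sum(left) / len(left), sum(right) / len(right)
            sse = (sum((v - ml) ** 2 for v in left)
                   + sum((v - mr) ** 2 for v in right))
            if best is None or sse < best[0]:
                best = (sse, f, t, ml, mr)
    if best is None:                      # degenerate bootstrap sample
        m = sum(y) / len(y)
        return lambda row: m
    _, f, t, ml, mr = best
    return lambda row: ml if row[f] <= t else mr

def bagged_ensemble(X, y, n_trees=30, seed=1):
    """Train n_trees stumps on bootstrap resamples; predict their average."""
    rng = random.Random(seed)
    trees = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]
        trees.append(fit_stump([X[i] for i in idx], [y[i] for i in idx]))
    return lambda row: sum(t(row) for t in trees) / len(trees)

# Hypothetical site variables: [shrub cover %, native forb richness].
X = [[5, 2], [10, 4], [20, 8], [30, 10], [40, 12], [50, 15]]
y = [20, 35, 55, 70, 80, 90]   # invented expert quality scores
predict = bagged_ensemble(X, y)
```

Averaging over bootstrap resamples smooths the expert opinion data, which is what lets the metric behave like a "consensus" evaluator rather than any single tree.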

  7. A Survey of Model Evaluation Approaches with a Tutorial on Hierarchical Bayesian Methods

    ERIC Educational Resources Information Center

    Shiffrin, Richard M.; Lee, Michael D.; Kim, Woojae; Wagenmakers, Eric-Jan

    2008-01-01

    This article reviews current methods for evaluating models in the cognitive sciences, including theoretically based approaches, such as Bayes factors and minimum description length measures; simulation approaches, including model mimicry evaluations; and practical approaches, such as validation and generalization measures. This article argues…
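As a concrete instance of the theoretically based measures this survey covers, the sketch below compares two models using the Bayesian information criterion, a common large-sample approximation related to the Bayes factor. The dataset and the two candidate models (constant vs. linear) are invented for illustration.

```python
import math

def rss(y, yhat):
    """Residual sum of squares."""
    return sum((a - b) ** 2 for a, b in zip(y, yhat))

def bic(y, yhat, k):
    """Gaussian BIC: n*ln(RSS/n) + k*ln(n); lower is better.
    The k*ln(n) term penalizes extra parameters."""
    n = len(y)
    return n * math.log(rss(y, yhat) / n) + k * math.log(n)

x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2.1, 3.9, 6.2, 7.8, 10.1, 12.2, 13.8, 16.1]  # roughly y = 2x

# Model 1: constant mean (k = 1 free parameter).
mean_y = sum(y) / len(y)
bic_const = bic(y, [mean_y] * len(y), k=1)

# Model 2: line y = a*x + b (k = 2), closed-form least squares.
mx = sum(x) / len(x)
a = sum((xi - mx) * (yi - mean_y) for xi, yi in zip(x, y)) / \
    sum((xi - mx) ** 2 for xi in x)
b = mean_y - a * mx
bic_line = bic(y, [a * xi + b for xi in x], k=2)
```

Here the linear model's far smaller residual outweighs its one-parameter penalty, so its BIC is lower; on data with no trend the penalty would instead favor the simpler model, which is the trade-off these criteria formalize.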

  8. Model-based surgical planning and simulation of cranial base surgery.

    PubMed

    Abe, M; Tabuchi, K; Goto, M; Uchino, A

    1998-11-01

    Plastic skull models of seven individual patients were fabricated by stereolithography from three-dimensional data based on computed tomography bone images. Skull models were utilized for neurosurgical planning and simulation in the seven patients with cranial base lesions that were difficult to remove. Surgical approaches and areas of craniotomy were evaluated using the fabricated skull models. In preoperative simulations, hand-made models of the tumors, major vessels and nerves were placed in the skull models. Step-by-step simulation of surgical procedures was performed using actual surgical tools. The advantages of using skull models to plan and simulate cranial base surgery include a better understanding of anatomic relationships, preoperative evaluation of the proposed procedure, increased understanding by the patient and family, and improved educational experiences for residents and other medical staff. The disadvantages of using skull models include the time and cost of making the models. The skull models provide a more realistic tool that is easier to handle than computer-graphic images. Surgical simulation using models facilitates difficult cranial base surgery and may help reduce surgical complications.

  9. Fuzzy Risk Evaluation in Failure Mode and Effects Analysis Using a D Numbers Based Multi-Sensor Information Fusion Method.

    PubMed

    Deng, Xinyang; Jiang, Wen

    2017-09-12

Failure mode and effect analysis (FMEA) is a useful tool to define, identify, and eliminate potential failures or errors so as to improve the reliability of systems, designs, and products. Risk evaluation is an important issue in FMEA to determine the risk priorities of failure modes. There are some shortcomings in the traditional risk priority number (RPN) approach for risk evaluation in FMEA, and fuzzy risk evaluation has become an important research direction that attracts increasing attention. In this paper, fuzzy risk evaluation in FMEA is studied from the perspective of multi-sensor information fusion. By considering the non-exclusiveness between the evaluations of fuzzy linguistic variables applied to failure modes, a novel model called D numbers is used to model the non-exclusive fuzzy evaluations. A D numbers based multi-sensor information fusion method is proposed to establish a new model for fuzzy risk evaluation in FMEA. An illustrative example is provided and examined using the proposed model and other existing methods to show the effectiveness of the proposed model.
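The traditional RPN approach this abstract criticizes multiplies three 1-10 ratings. The sketch below, with hypothetical failure modes, shows both the computation and one of its known shortcomings: distinct severity/occurrence/detection profiles can collapse to the same RPN, one motivation for fuzzy alternatives.

```python
# Traditional FMEA risk priority number: RPN = Severity * Occurrence
# * Detection, each rated on a 1-10 scale. The failure modes and
# ratings below are hypothetical.
failure_modes = {
    "seal leak":      (7, 4, 3),
    "sensor drift":   (4, 7, 3),
    "motor overheat": (9, 2, 5),
}

rpn = {name: s * o * d for name, (s, o, d) in failure_modes.items()}
ranking = sorted(rpn, key=rpn.get, reverse=True)

# A classic criticism: "seal leak" (7,4,3) and "sensor drift" (4,7,3)
# describe very different risks yet both score 84, so the plain RPN
# cannot distinguish them.
```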

  10. Fuzzy Risk Evaluation in Failure Mode and Effects Analysis Using a D Numbers Based Multi-Sensor Information Fusion Method

    PubMed Central

    Deng, Xinyang

    2017-01-01

Failure mode and effect analysis (FMEA) is a useful tool to define, identify, and eliminate potential failures or errors so as to improve the reliability of systems, designs, and products. Risk evaluation is an important issue in FMEA to determine the risk priorities of failure modes. There are some shortcomings in the traditional risk priority number (RPN) approach for risk evaluation in FMEA, and fuzzy risk evaluation has become an important research direction that attracts increasing attention. In this paper, fuzzy risk evaluation in FMEA is studied from the perspective of multi-sensor information fusion. By considering the non-exclusiveness between the evaluations of fuzzy linguistic variables applied to failure modes, a novel model called D numbers is used to model the non-exclusive fuzzy evaluations. A D numbers based multi-sensor information fusion method is proposed to establish a new model for fuzzy risk evaluation in FMEA. An illustrative example is provided and examined using the proposed model and other existing methods to show the effectiveness of the proposed model. PMID:28895905

  11. Improving the Impact and Implementation of Disaster Education: Programs for Children Through Theory-Based Evaluation.

    PubMed

    Johnson, Victoria A; Ronan, Kevin R; Johnston, David M; Peace, Robin

    2016-11-01

    A main weakness in the evaluation of disaster education programs for children is evaluators' propensity to judge program effectiveness based on changes in children's knowledge. Few studies have articulated an explicit program theory of how children's education would achieve desired outcomes and impacts related to disaster risk reduction in households and communities. This article describes the advantages of constructing program theory models for the purpose of evaluating disaster education programs for children. Following a review of some potential frameworks for program theory development, including the logic model, the program theory matrix, and the stage step model, the article provides working examples of these frameworks. The first example is the development of a program theory matrix used in an evaluation of ShakeOut, an earthquake drill practiced in two Washington State school districts. The model illustrates a theory of action; specifically, the effectiveness of school earthquake drills in preventing injuries and deaths during disasters. The second example is the development of a stage step model used for a process evaluation of What's the Plan Stan?, a voluntary teaching resource distributed to all New Zealand primary schools for curricular integration of disaster education. The model illustrates a theory of use; specifically, expanding the reach of disaster education for children through increased promotion of the resource. The process of developing the program theory models for the purpose of evaluation planning is discussed, as well as the advantages and shortcomings of the theory-based approaches. © 2015 Society for Risk Analysis.

  12. Evaluating AIDS Prevention: Contributions of Multiple Disciplines.

    ERIC Educational Resources Information Center

    Leviton, Laura C., Ed.; And Others

    1990-01-01

Seven essays on efforts to evaluate prevention programs aimed at acquired immune deficiency syndrome (AIDS) are presented. Topics include public health psychology, mathematical models of epidemiology, estimates of incubation periods, ethnographic evaluations of AIDS prevention programs, an AIDS education model, theory-based evaluation, and…

  13. An evaluation of Computational Fluid dynamics model for flood risk analysis

    NASA Astrophysics Data System (ADS)

    Di Francesco, Silvia; Biscarini, Chiara; Montesarchio, Valeria

    2014-05-01

This work presents an analysis of the hydrological-hydraulic engineering requisites for risk evaluation and efficient flood damage reduction plans. Most research efforts have been dedicated to the scientific and technical aspects of risk assessment, providing estimates of possible alternatives and of the associated risk. The contribution of scientists is crucial in the decision-making process for mitigation plans, because risk-damage analysis rests on evaluation of the flow field, of the hydraulic risk, and on economic and societal considerations. The present paper focuses on the first part of the process, the mathematical modelling of flood events, which is the basis for all further considerations. Evaluating the potentially catastrophic damage consequent to a flood event, and in particular to a dam failure, requires modelling the flood in sufficient detail to capture the spatial and temporal evolution of the event, as well as the velocity field. Thus, the selection of an appropriate mathematical model to correctly simulate flood routing is an essential step. In this work we present the application of two 3D computational fluid dynamics models to a synthetic and a real case study in order to evaluate the correct evolution of the flow field and the associated flood risk. The first model is based on the open-source CFD platform OpenFOAM: water flow is schematized with a classical continuum approach based on the Navier-Stokes equations, coupled with the volume of fluid (VOF) method to take into account the multiphase character of river bottom-water-air systems. The second model is based on the lattice Boltzmann method, an innovative numerical fluid dynamics scheme based on Boltzmann's kinetic equation that represents the flow dynamics at the macroscopic level by incorporating a microscopic kinetic approach, in which the fluid is seen as composed of particles that move and collide with one another. Simulation results from both models are promising and congruent with experimental results available in the literature, though the LBM model requires less computational effort than the NS one.
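The lattice Boltzmann structure this abstract refers to (particle populations that stream along lattice velocities and relax toward a local equilibrium) can be illustrated with a minimal one-dimensional D1Q2 diffusion scheme. This is a toy sketch of the stream-and-collide idea, not the authors' flood model.

```python
# Minimal 1-D lattice Boltzmann (D1Q2, BGK collision) for diffusion.
# Two populations per site: f_plus moves right, f_minus moves left.
# Toy illustration of the stream-and-collide structure only.

N = 50          # lattice sites (periodic)
TAU = 1.0       # BGK relaxation time
STEPS = 100

rho0 = [1.0 if 20 <= i < 30 else 0.0 for i in range(N)]  # initial blob
f_plus = [r / 2 for r in rho0]
f_minus = [r / 2 for r in rho0]

for _ in range(STEPS):
    rho = [p + m for p, m in zip(f_plus, f_minus)]
    # Collide: relax each population toward its equilibrium rho/2.
    f_plus = [p + (r / 2 - p) / TAU for p, r in zip(f_plus, rho)]
    f_minus = [m + (r / 2 - m) / TAU for m, r in zip(f_minus, rho)]
    # Stream: shift each population one site along its velocity.
    f_plus = [f_plus[-1]] + f_plus[:-1]
    f_minus = f_minus[1:] + [f_minus[0]]

rho = [p + m for p, m in zip(f_plus, f_minus)]  # final density field
```

Both steps are purely local (collision) or nearest-neighbor (streaming), which is the source of the method's low computational cost and easy parallelization compared with solving the Navier-Stokes equations directly.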

  14. A modular method for evaluating the performance of picture archiving and communication systems.

    PubMed

    Sanders, W H; Kant, L A; Kudrimoti, A

    1993-08-01

    Modeling can be used to predict the performance of picture archiving and communication system (PACS) configurations under various load conditions at an early design stage. This is important because choices made early in the design of a system can have a significant impact on the performance of the resulting implementation. Because PACS consist of many types of components, it is important to do such evaluations in a modular manner, so that alternative configurations and designs can be easily investigated. Stochastic activity networks (SANs) and reduced base model construction methods can aid in doing this. SANs are a model type particularly suited to the evaluation of systems in which several activities may be in progress concurrently, and each activity may affect the others through the results of its completion. Together with SANs, reduced base model construction methods provide a means to build highly modular models, in which models of particular components can be easily reused. In this article, we investigate the use of SANs and reduced base model construction techniques in evaluating PACS. Construction and solution of the models is done using UltraSAN, a graphic-oriented software tool for model specification, analysis, and simulation. The method is illustrated via the evaluation of a realistically sized PACS for a typical United States hospital of 300 to 400 beds, and the derivation of system response times and component utilizations.
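The response times and utilizations this article derives come from full stochastic activity network models. A much cruder back-of-envelope version for individual PACS components can be sketched with M/M/1 queueing formulas; the component names and rates below are hypothetical, not taken from the article.

```python
# Back-of-envelope component sizing with M/M/1 steady-state formulas:
# utilization rho = lam/mu, mean response time R = 1/(mu - lam).
# Arrival/service rates are hypothetical; the SAN models described in
# the article capture far richer concurrent behavior than this.

components = {
    # name: (arrival rate lam, service rate mu) in jobs per second
    "acquisition gateway": (0.5, 2.0),
    "archive server":      (1.0, 1.5),
    "display workstation": (0.8, 4.0),
}

def mm1(lam, mu):
    """M/M/1 steady state: (utilization, mean time in system)."""
    assert lam < mu, "queue is unstable if arrivals outpace service"
    return lam / mu, 1.0 / (mu - lam)

results = {name: mm1(lam, mu) for name, (lam, mu) in components.items()}
```

Even this simple model shows the design value of early evaluation: the most utilized component (here the archive server) dominates response time, so it is the one worth re-configuring first.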

  15. dETECT: A Model for the Evaluation of Instructional Units for Teaching Computing in Middle School

    ERIC Educational Resources Information Center

    von Wangenheim, Christiane G.; Petri, Giani; Zibertti, André W.; Borgatto, Adriano F.; Hauck, Jean C. R.; Pacheco, Fernando S.; Filho, Raul Missfeldt

    2017-01-01

    The objective of this article is to present the development and evaluation of dETECT (Evaluating TEaching CompuTing), a model for the evaluation of the quality of instructional units for teaching computing in middle school based on the students' perception collected through a measurement instrument. The dETECT model was systematically developed…

  16. Investigating Island Evolution: A Galapagos-Based Lesson Using the 5E Instructional Model.

    ERIC Educational Resources Information Center

    DeFina, Anthony V.

    2002-01-01

    Introduces an inquiry-based lesson plan on evolution and the Galapagos Islands. Uses the 5E instructional model which includes phases of engagement, exploration, explanation, elaboration, and evaluation. Includes information on species for exploration and elaboration purposes, and a general rubric for student evaluation. (YDS)

  17. EVALUATION OF BIOLOGICALLY BASED DOSE-RESPONSE MODELING FOR DEVELOPMENTAL TOXICITY: A WORKSHOP REPORT

    EPA Science Inventory

    Evaluation of biologically based dose-response modeling for developmental toxicity: a workshop report.

    Lau C, Andersen ME, Crawford-Brown DJ, Kavlock RJ, Kimmel CA, Knudsen TB, Muneoka K, Rogers JM, Setzer RW, Smith G, Tyl R.

    Reproductive Toxicology Division, NHEERL...

  18. Diagnosing Alzheimer's disease: a systematic review of economic evaluations.

    PubMed

    Handels, Ron L H; Wolfs, Claire A G; Aalten, Pauline; Joore, Manuela A; Verhey, Frans R J; Severens, Johan L

    2014-03-01

The objective of this study is to systematically review the literature on economic evaluations of interventions for the early diagnosis of Alzheimer's disease (AD) and related disorders and to describe their general and methodological characteristics. We focused on the diagnostic aspects of the decision models to assess the applicability of existing decision models for the evaluation of the recently revised diagnostic research criteria for AD. PubMed and the National Institute for Health Research Economic Evaluation database were searched for English-language publications related to economic evaluations of diagnostic technologies. Trial-based economic evaluations were assessed using the Consensus on Health Economic Criteria list. Modeling studies were assessed using the framework for quality assessment of decision-analytic models. The search retrieved 2109 items, from which eight decision-analytic modeling studies and one trial-based economic evaluation met all eligibility criteria. Diversity among the study objectives and characteristics was considerable and, despite considerable methodological quality, several flaws were identified. Recommendations were focused on diagnostic aspects and the applicability of existing models for the evaluation of recently revised diagnostic research criteria for AD. Copyright © 2014 The Alzheimer's Association. Published by Elsevier Inc. All rights reserved.

  19. Evaluating model accuracy for model-based reasoning

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Roden, Joseph

    1992-01-01

    Described here is an approach to automatically assessing the accuracy of various components of a model. In this approach, actual data from the operation of a target system is used to drive statistical measures to evaluate the prediction accuracy of various portions of the model. We describe how these statistical measures of model accuracy can be used in model-based reasoning for monitoring and design. We then describe the application of these techniques to the monitoring and design of the water recovery system of the Environmental Control and Life Support System (ECLSS) of Space Station Freedom.

  20. Community-Based Participatory Evaluation: The Healthy Start Approach

    PubMed Central

    Braithwaite, Ronald L.; McKenzie, Robetta D.; Pruitt, Vikki; Holden, Kisha B.; Aaron, Katrina; Hollimon, Chavone

    2013-01-01

    The use of community-based participatory research has gained momentum as a viable approach to academic and community engagement for research over the past 20 years. This article discusses an approach for extending the process with an emphasis on evaluation of a community partnership–driven initiative and thus advances the concept of conducting community-based participatory evaluation (CBPE) through a model used by the Healthy Start project of the Augusta Partnership for Children, Inc., in Augusta, Georgia. Application of the CBPE approach advances the importance of bilateral engagements with consumers and academic evaluators. The CBPE model shows promise as a reliable and credible evaluation approach for community-level assessment of health promotion programs. PMID:22461687

  1. Community-based participatory evaluation: the healthy start approach.

    PubMed

    Braithwaite, Ronald L; McKenzie, Robetta D; Pruitt, Vikki; Holden, Kisha B; Aaron, Katrina; Hollimon, Chavone

    2013-03-01

    The use of community-based participatory research has gained momentum as a viable approach to academic and community engagement for research over the past 20 years. This article discusses an approach for extending the process with an emphasis on evaluation of a community partnership-driven initiative and thus advances the concept of conducting community-based participatory evaluation (CBPE) through a model used by the Healthy Start project of the Augusta Partnership for Children, Inc., in Augusta, Georgia. Application of the CBPE approach advances the importance of bilateral engagements with consumers and academic evaluators. The CBPE model shows promise as a reliable and credible evaluation approach for community-level assessment of health promotion programs.

  2. Multi-Fidelity Framework for Modeling Combustion Instability

    DTIC Science & Technology

    2016-07-27

…generated from the reduced-domain dataset. Evaluations of the framework are performed based on simplified test problems for a model rocket combustor…

  3. Evaluation of NCAR CAM5 Simulated Marine Boundary Layer Cloud Properties Using a Combination of Satellite and Surface Observations

    NASA Astrophysics Data System (ADS)

    Zhang, Z.; Song, H.; Wang, M.; Ghan, S. J.; Dong, X.

    2016-12-01

The main objective of this study is to systematically evaluate the MBL cloud properties simulated in CAM5 family models using a combination of satellite-based CloudSat/MODIS observations and ground-based observations from the ARM Azores site, with a special focus on MBL cloud microphysics and the warm rain process. First, we will present a global evaluation based on satellite observations and retrievals. We will compare global cloud properties (e.g., cloud fraction, cloud vertical structure, cloud CER, COT, and LWP, as well as drizzle frequency and intensity diagnosed using the CAM5-COSP instrument simulators) simulated in the CAM5 models with the collocated CloudSat and MODIS observations. We will also present some preliminary results from a regional evaluation based mainly on ground observations from the ARM Azores site. We will compare MBL cloud properties simulated in CAM5 models over the ARM Azores site with collocated satellite (MODIS and CloudSat) and ground-based observations from the ARM site.

  4. The Evaluation of Land Ecological Safety of Chengchao Iron Mine Based on PSR and MEM

    NASA Astrophysics Data System (ADS)

    Jin, Xiangdong; Chen, Yong

    2018-01-01

Land ecological security is of vital importance to local security and to the sustainable development of mining activities. The study analyzed the potential causal chains between the land ecological security of the iron mine, the mining environment, mineral resources, and the social-economic background. On the basis of the Pressure-State-Response (PSR) model, the paper set up a matter-element evaluation model (MEM) of land ecological security and applied it to the Chengchao iron mine. The evaluation results show the model to be effective for land ecological security assessment.

  5. A systematic review of model-based economic evaluations of diagnostic and therapeutic strategies for lower extremity artery disease.

    PubMed

    Vaidya, Anil; Joore, Manuela A; ten Cate-Hoek, Arina J; Kleinegris, Marie-Claire; ten Cate, Hugo; Severens, Johan L

    2014-01-01

Lower extremity artery disease (LEAD) is a sign of widespread atherosclerosis also affecting coronary, cerebral and renal arteries and is associated with increased risk of cardiovascular events. Many economic evaluations have been published for LEAD due to its clinical, social and economic importance. The aim of this systematic review was to assess modelling methods used in published economic evaluations in the field of LEAD. Our review appraised and compared the general characteristics, model structure and methodological quality of published models. The electronic databases MEDLINE and EMBASE were searched until February 2013 via the OVID interface. The Cochrane database of systematic reviews, the Health Technology Assessment database hosted by the National Institute for Health Research, and the National Health Services Economic Evaluation Database (NHSEED) were also searched. The methodological quality of the included studies was assessed using the Philips checklist. Sixteen model-based economic evaluations were identified and included. Eleven models compared therapeutic health technologies; three models compared diagnostic tests; and two models compared a combination of diagnostic and therapeutic options for LEAD. The results of this systematic review revealed an acceptable to low methodological quality of the included studies. Methodological diversity and insufficient information posed a challenge for valid comparison of the included studies. In conclusion, there is a need for transparent, methodologically comparable and scientifically credible model-based economic evaluations in the field of LEAD. Future modelling studies should include clinically and economically important cardiovascular outcomes to reflect the wider impact of LEAD on individual patients and on society.

  6. Evaluation of image quality

    NASA Technical Reports Server (NTRS)

    Pavel, M.

    1993-01-01

    This presentation outlines in viewgraph format a general approach to the evaluation of display system quality for aviation applications. This approach is based on the assumption that it is possible to develop a model of the display which captures most of the significant properties of the display. The display characteristics should include spatial and temporal resolution, intensity quantizing effects, spatial sampling, delays, etc. The model must be sufficiently well specified to permit generation of stimuli that simulate the output of the display system. The first step in the evaluation of display quality is an analysis of the tasks to be performed using the display. Thus, for example, if a display is used by a pilot during a final approach, the aesthetic aspects of the display may be less relevant than its dynamic characteristics. The opposite task requirements may apply to imaging systems used for displaying navigation charts. Thus, display quality is defined with regard to one or more tasks. Given a set of relevant tasks, there are many ways to approach display evaluation. The range of evaluation approaches includes visual inspection, rapid evaluation, part-task simulation, and full mission simulation. The work described is focused on two complementary approaches to rapid evaluation. The first approach is based on a model of the human visual system. A model of the human visual system is used to predict the performance of the selected tasks. The model-based evaluation approach permits very rapid and inexpensive evaluation of various design decisions. The second rapid evaluation approach employs specifically designed critical tests that embody many important characteristics of actual tasks. These are used in situations where a validated model is not available. These rapid evaluation tests are being implemented in a workstation environment.

  7. From Creatures of Habit to Goal-Directed Learners: Tracking the Developmental Emergence of Model-Based Reinforcement Learning.

    PubMed

    Decker, Johannes H; Otto, A Ross; Daw, Nathaniel D; Hartley, Catherine A

    2016-06-01

    Theoretical models distinguish two decision-making strategies that have been formalized in reinforcement-learning theory. A model-based strategy leverages a cognitive model of potential actions and their consequences to make goal-directed choices, whereas a model-free strategy evaluates actions based solely on their reward history. Research in adults has begun to elucidate the psychological mechanisms and neural substrates underlying these learning processes and factors that influence their relative recruitment. However, the developmental trajectory of these evaluative strategies has not been well characterized. In this study, children, adolescents, and adults performed a sequential reinforcement-learning task that enabled estimation of model-based and model-free contributions to choice. Whereas a model-free strategy was apparent in choice behavior across all age groups, a model-based strategy was absent in children, became evident in adolescents, and strengthened in adults. These results suggest that recruitment of model-based valuation systems represents a critical cognitive component underlying the gradual maturation of goal-directed behavior. © The Author(s) 2016.
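The two strategies this abstract distinguishes can be contrasted on a toy problem: a model-free learner updates action values only from experienced reward, while a model-based learner evaluates actions by consulting a model of their outcomes. The task, rewards, and parameters below are invented and far simpler than the article's sequential two-stage paradigm.

```python
import random

# A tiny deterministic decision problem (invented for illustration):
# action "left" yields reward 0.2, action "right" yields 1.0.
ACTIONS = ["left", "right"]
REWARD = {"left": 0.2, "right": 1.0}

def model_free(n_trials=500, alpha=0.1, eps=0.1, seed=0):
    """Model-free strategy: incremental TD-style updates from
    experienced reward only, with epsilon-greedy exploration."""
    rng = random.Random(seed)
    q = {a: 0.0 for a in ACTIONS}
    for _ in range(n_trials):
        a = rng.choice(ACTIONS) if rng.random() < eps else max(q, key=q.get)
        q[a] += alpha * (REWARD[a] - q[a])
    return q

def model_based(model=REWARD):
    """Model-based strategy: plan directly over a learned model of
    action outcomes, no trial-and-error needed."""
    return {a: model[a] for a in ACTIONS}

q_free = model_free()
q_based = model_based()
```

The model-based evaluator identifies the better action immediately from its outcome model, whereas the model-free learner needs many experienced trials, which is one behavioral signature used to separate the two strategies.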

  8. Vestibular models for design and evaluation of flight simulator motion

    NASA Technical Reports Server (NTRS)

    Bussolari, S. R.; Sullivan, R. B.; Young, L. R.

    1986-01-01

    The use of spatial orientation models in the design and evaluation of control systems for motion-base flight simulators is investigated experimentally. The development of a high-fidelity motion drive controller using an optimal control approach based on human vestibular models is described. The formulation and implementation of the optimal washout system are discussed. The effectiveness of the motion washout system was evaluated by studying the response of six motion washout systems to the NASA/AMES Vertical Motion Simulator for a single dash-quick-stop maneuver. The effects of the motion washout system on pilot performance and simulator acceptability are examined. The data reveal that human spatial orientation models are useful for the design and evaluation of flight simulator motion fidelity.
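A core ingredient of motion washout (high-pass filtering, so that sustained accelerations are "washed out" and the platform returns toward neutral within its travel limits) can be sketched with a first-order discrete filter. The paper's optimal-control washout is considerably more sophisticated; the parameters here are arbitrary.

```python
def highpass_washout(signal, dt=0.01, tau=1.0):
    """First-order discrete high-pass filter: transmits acceleration
    onsets but washes out sustained input, so a motion platform can
    drift back toward its neutral position."""
    alpha = tau / (tau + dt)
    out = [signal[0]]
    for n in range(1, len(signal)):
        out.append(alpha * (out[-1] + signal[n] - signal[n - 1]))
    return out

# Sustained 1 m/s^2 acceleration step (hypothetical input): the onset
# is transmitted almost fully, then the command decays toward zero.
step = [0.0] * 10 + [1.0] * 500
response = highpass_washout(step)
```

The onset cue is what the pilot's vestibular system is most sensitive to, which is why such filters can preserve perceived motion fidelity while keeping the platform within its limited travel.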

  9. A Comprehensive Model for Developing and Evaluating Study Abroad Programs in Counselor Education

    ERIC Educational Resources Information Center

    Santos, Syntia Dinora

    2014-01-01

    This paper introduces a model to guide the process of designing and evaluating study abroad programs, addressing particular stages and influential factors. The main purpose of the model is to serve as a basic structure for those who want to develop their own program or evaluate previous cultural immersion experiences. The model is based on the…

  10. Industry Software Trustworthiness Criterion Research Based on Business Trustworthiness

    NASA Astrophysics Data System (ADS)

    Zhang, Jin; Liu, Jun-fei; Jiao, Hai-xing; Shen, Yi; Liu, Shu-yuan

To address the trustworthiness problem of industry software, an approach that constructs an industry software trustworthiness criterion around business requirements is proposed. Based on the triangle model of "trustworthy grade definition - trustworthy evidence model - trustworthy evaluation", business trustworthiness is embodied in the different aspects of the trustworthy triangle model for a specific industrial software system, the power producing management system (PPMS). Business trustworthiness is central to the constructed criterion. By fusing international standards and industry rules, the constructed criterion strengthens operability and reliability, and a quantitative evaluation method makes the evaluation results intuitive and comparable.

  11. Study on process evaluation model of students' learning in practical course

    NASA Astrophysics Data System (ADS)

    Huang, Jie; Liang, Pei; Shen, Wei-min; Ye, Youxiang

    2017-08-01

In practical course teaching based on the project object method, traditional evaluation methods such as class attendance, assignments, and exams fail to give undergraduate students incentives to learn innovatively and autonomously. In this paper, elements such as creative innovation, teamwork, and documentation and reporting were incorporated into process evaluation, and a process evaluation model was established. Educational practice shows that the model makes process evaluation of students' learning more comprehensive, accurate, and fair.

  12. TTI CM/AQ evaluation model user's guide and workshop training materials. Interim research report, September 1993-August 1996

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1995-08-01

The TTI CM/AQ Evaluation Model evaluates potential projects based on the following criteria: eligibility, travel impacts, emission impacts, and cost-effectiveness. To compare independent projects within a region during the decision process for CM/AQ funding, each project evaluated with this model is given an overall score based on the project's effects for the criteria listed above. Training workshops were held by TTI in the first quarter of 1995 to teach metropolitan planning organization, state department of transportation, and regional air quality organization staff how to use this model. Basics of sketch-planning applications were also taught. The DRCOG and TTI CM/AQ Evaluation Models represent significant steps toward the development of analytical methodologies for selecting projects for CM/AQ funding. Because the needs of nonattainment and attainment areas change over time, this model is particularly useful, as key evaluation criteria can be modified to reflect the changing needs of a metropolitan area.

  13. Learners' Epistemic Criteria for Good Scientific Models

    ERIC Educational Resources Information Center

    Pluta, William J.; Chinn, Clark A.; Duncan, Ravit Golan

    2011-01-01

    Epistemic criteria are the standards used to evaluate scientific products (e.g., models, evidence, arguments). In this study, we analyzed epistemic criteria for good models generated by 324 middle-school students. After evaluating a range of scientific models, but before extensive instruction or experience with model-based reasoning practices,…

  14. An Evaluation of Three Approximate Item Response Theory Models for Equating Test Scores.

    ERIC Educational Resources Information Center

    Marco, Gary L.; And Others

    Three item response models were evaluated for estimating item parameters and equating test scores. The models, which approximated the traditional three-parameter model, included: (1) the Rasch one-parameter model, operationalized in the BICAL computer program; (2) an approximate three-parameter logistic model based on coarse group data divided…
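
    The Rasch one-parameter model mentioned in the record has a standard closed form; a minimal sketch (ability and difficulty values are illustrative):

```python
import math

# The Rasch (one-parameter logistic) model in its standard form:
# probability of a correct response given ability theta and item difficulty b.
def rasch_p(theta, b):
    return 1.0 / (1.0 + math.exp(-(theta - b)))

print(rasch_p(0.0, 0.0))            # 0.5  (ability equals difficulty)
print(round(rasch_p(2.0, 0.0), 3))  # 0.881
```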

  15. ASSESSMENT OF TWO PHYSICALLY BASED WATERSHED MODELS BASED ON THEIR PERFORMANCES OF SIMULATING SEDIMENT MOVEMENT OVER SMALL WATERSHEDS

    EPA Science Inventory


    Abstract: Two physically based and deterministic models, CASC2-D and KINEROS, are evaluated and compared for their performance in modeling sediment movement on a small agricultural watershed over several events. Each model has a different conceptualization of a watershed. CASC...

  16. ASSESSMENT OF TWO PHYSICALLY-BASED WATERSHED MODELS BASED ON THEIR PERFORMANCES OF SIMULATING WATER AND SEDIMENT MOVEMENT

    EPA Science Inventory

    Two physically based watershed models, GSSHA and KINEROS-2, are evaluated and compared for their performance in modeling flow and sediment movement. Each model has a different watershed conceptualization. GSSHA divides the watershed into cells, and flow and sediments are routed t...

  17. Evaluation of animal models of neurobehavioral disorders

    PubMed Central

    van der Staay, F Josef; Arndt, Saskia S; Nordquist, Rebecca E

    2009-01-01

    Animal models play a central role in all areas of biomedical research. The process of animal model building, development and evaluation has rarely been addressed systematically, despite the long history of using animal models in the investigation of neuropsychiatric disorders and behavioral dysfunctions. An iterative, multi-stage trajectory for developing animal models and assessing their quality is proposed. The process starts with defining the purpose(s) of the model, preferentially based on hypotheses about brain-behavior relationships. Then, the model is developed and tested. The evaluation of the model takes scientific and ethical criteria into consideration. Model development requires a multidisciplinary approach. Preclinical and clinical experts should establish a set of scientific criteria, which a model must meet. The scientific evaluation consists of assessing the replicability/reliability, predictive, construct and external validity/generalizability, and relevance of the model. We emphasize the role of (systematic and extended) replications in the course of the validation process. One may apply a multiple-tiered 'replication battery' to estimate the reliability/replicability, validity, and generalizability of results. Compromised welfare is inherent in many deficiency models in animals. Unfortunately, 'animal welfare' is a vaguely defined concept, making it difficult to establish exact evaluation criteria. Weighing the animal's welfare and considering whether action is indicated to reduce discomfort must accompany the scientific evaluation at any stage of the model building and evaluation process. Animal model building should be discontinued if the model does not meet the preset scientific criteria, or when animal welfare is severely compromised. The application of the evaluation procedure is exemplified using the rat with neonatal hippocampal lesion as a proposed model of schizophrenia.
Just as animal models can be improved when guided by the procedure expounded in this paper, the development and evaluation procedure itself may be improved by carefully defining the purpose(s) of a model and by defining better evaluation criteria based on the proposed use of the model. PMID:19243583

  18. Improved prediction of tacrolimus concentrations early after kidney transplantation using theory-based pharmacokinetic modelling.

    PubMed

    Størset, Elisabet; Holford, Nick; Hennig, Stefanie; Bergmann, Troels K; Bergan, Stein; Bremer, Sara; Åsberg, Anders; Midtvedt, Karsten; Staatz, Christine E

    2014-09-01

    The aim was to develop a theory-based population pharmacokinetic model of tacrolimus in adult kidney transplant recipients and to externally evaluate this model and two previous empirical models. Data were obtained from 242 patients with 3100 tacrolimus whole blood concentrations. External evaluation was performed by examining model predictive performance using Bayesian forecasting. Pharmacokinetic disposition parameters were estimated based on tacrolimus plasma concentrations, predicted from whole blood concentrations, haematocrit and literature values for tacrolimus binding to red blood cells. Disposition parameters were allometrically scaled to fat free mass. Tacrolimus whole blood clearance/bioavailability standardized to a haematocrit of 45% and a fat free mass of 60 kg was estimated to be 16.1 l h⁻¹ [95% CI 12.6, 18.0 l h⁻¹]. Tacrolimus clearance was 30% higher (95% CI 13, 46%) and bioavailability 18% lower (95% CI 2, 29%) in CYP3A5 expressers compared with non-expressers. An Emax model described decreasing tacrolimus bioavailability with increasing prednisolone dose. The theory-based model was superior to the empirical models during external evaluation, displaying a median prediction error of −1.2% (95% CI −3.0, 0.1%). Based on simulation, Bayesian forecasting led to 65% (95% CI 62, 68%) of patients achieving a tacrolimus average steady-state concentration within a suggested acceptable range. A theory-based population pharmacokinetic model was superior to two empirical models for prediction of tacrolimus concentrations and seemed suitable for Bayesian prediction of tacrolimus doses early after kidney transplantation.
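
    The allometric scaling of clearance to fat free mass can be sketched as follows. The standardized clearance (16.1 l/h at 60 kg fat free mass) and the ~30% CYP3A5 effect are quoted from the abstract, but the conventional 0.75 exponent and the way they are combined here are assumptions for illustration only.

```python
# Sketch of allometric scaling of clearance; CL_STD (16.1 l/h at fat free
# mass 60 kg) and the ~30% CYP3A5 effect are from the abstract, while the
# 0.75 exponent and this combination are illustrative assumptions.
CL_STD = 16.1    # l/h, standardized value from the abstract
FFM_STD = 60.0   # kg, reference fat free mass

def clearance(ffm_kg, cyp3a5_expresser=False):
    cl = CL_STD * (ffm_kg / FFM_STD) ** 0.75
    if cyp3a5_expresser:
        cl *= 1.30   # ~30% higher clearance in CYP3A5 expressers
    return cl

print(round(clearance(60.0), 1))         # 16.1
print(round(clearance(60.0, True), 2))   # 20.93
```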

  19. Uncertainty evaluation of dead zone of diagnostic ultrasound equipment

    NASA Astrophysics Data System (ADS)

    Souza, R. M.; Alvarenga, A. V.; Braz, D. S.; Petrella, L. I.; Costa-Felix, R. P. B.

    2016-07-01

    This paper presents a model for evaluating the measurement uncertainty of a feature used in the assessment of ultrasound images: the dead zone. The dead zone was measured by two technicians of INMETRO's Laboratory of Ultrasound using a phantom, following the standard IEC/TS 61390. The uncertainty model was proposed based on the Guide to the Expression of Uncertainty in Measurement. For the tested equipment, results indicate a dead zone of 1.01 mm and, based on the proposed model, an expanded uncertainty of 0.17 mm. The proposed uncertainty model contributes a novel approach to the metrological evaluation of diagnostic ultrasound imaging.
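
    The GUM-based evaluation of a repeated measurement can be sketched as a Type A standard uncertainty expanded with a coverage factor; the readings below are hypothetical, not the paper's data.

```python
import math
import statistics

# GUM-style Type A evaluation: standard uncertainty of the mean of repeated
# readings, expanded with coverage factor k = 2 (~95% coverage). The
# readings are hypothetical, not the paper's measurement data.
def expanded_uncertainty(readings, k=2.0):
    s = statistics.stdev(readings)        # sample standard deviation
    u = s / math.sqrt(len(readings))      # standard uncertainty of the mean
    return statistics.mean(readings), k * u

readings = [1.00, 1.02, 0.99, 1.03, 1.01]   # dead-zone readings in mm
mean, U = expanded_uncertainty(readings)
print(round(mean, 3), round(U, 3))          # 1.01 0.014
```

    A full GUM budget would also combine Type B contributions (e.g. resolution, calibration) in quadrature before expanding.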

  20. Two Methods of Automatic Evaluation of Speech Signal Enhancement Recorded in the Open-Air MRI Environment

    NASA Astrophysics Data System (ADS)

    Přibil, Jiří; Přibilová, Anna; Frollo, Ivan

    2017-12-01

    The paper focuses on two methods for evaluating the success of speech signal enhancement recorded in the open-air magnetic resonance imager during phonation for 3D human vocal tract modeling. The first approach enables comparison based on statistical analysis by ANOVA and hypothesis tests. The second method is based on classification by Gaussian mixture models (GMM). The performed experiments confirmed that the proposed ANOVA and GMM classifiers for automatic evaluation of speech quality are functional and produce results fully comparable with the standard evaluation based on the listening test method.
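
    The GMM-classification idea can be sketched with a toy one-Gaussian-per-class classifier over a synthetic 1-D feature; the paper's classifier uses richer GMMs and real speech features, so everything below is an illustrative assumption.

```python
import numpy as np

# Toy stand-in for GMM-based quality classification: each class is modeled
# by a single Gaussian over a synthetic 1-D feature (the paper's classifier
# uses richer GMMs and real speech features; all data here are invented).
rng = np.random.default_rng(0)
noisy = rng.normal(-2.0, 1.0, 200)      # feature values of noisy recordings
enhanced = rng.normal(2.0, 1.0, 200)    # feature values of enhanced recordings

def fit(x):
    return x.mean(), x.std()

def log_gauss(x, mu, sigma):
    return -0.5 * np.log(2 * np.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

params = {"noisy": fit(noisy), "enhanced": fit(enhanced)}

def classify(x):
    # pick the class with the highest Gaussian log-likelihood
    return max(params, key=lambda c: log_gauss(x, *params[c]))

print(classify(1.8))   # enhanced
```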

  1. Theory-Based Stakeholder Evaluation

    ERIC Educational Resources Information Center

    Hansen, Morten Balle; Vedung, Evert

    2010-01-01

    This article introduces a new approach to program theory evaluation called theory-based stakeholder evaluation or the TSE model for short. Most theory-based approaches are program theory driven and some are stakeholder oriented as well. Practically, all of the latter fuse the program perceptions of the various stakeholder groups into one unitary…

  2. Betterment, undermining, support and distortion: A heuristic model for the analysis of pressure on evaluators.

    PubMed

    Pleger, Lyn; Sager, Fritz

    2016-09-18

    Evaluations can only serve as a neutral evidence base for policy decision-making as long as they have not been altered along non-scientific criteria. Studies show that evaluators are repeatedly put under pressure to deliver results in line with given expectations. The study of pressure and influence to misrepresent findings is hence an important research strand for the development of evaluation praxis. A conceptual challenge in the area of evaluation ethics research is the fact that pressure can be not only negative, but also positive. We develop a heuristic model of influence on evaluations that does justice to this ambivalence of influence: the BUSD-model (betterment, undermining, support, distortion). The model is based on the distinction of two dimensions, namely 'explicitness of pressure' and 'direction of influence'. We demonstrate how the model can be applied to understand pressure and offer a practical tool to distinguish positive from negative influence in the form of three so-called differentiators (awareness, accordance, intention). The differentiators comprise a practical component by assisting evaluators who are confronted with influence. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Argumentation in Science Education: A Model-based Framework

    NASA Astrophysics Data System (ADS)

    Böttcher, Florian; Meisert, Anke

    2011-02-01

    The goal of this article is threefold: First, the theoretical background for a model-based framework of argumentation to describe and evaluate argumentative processes in science education is presented. Based on the general model-based perspective in cognitive science and the philosophy of science, it is proposed to understand arguments as reasons for the appropriateness of a theoretical model which explains a certain phenomenon. Argumentation is considered to be the process of the critical evaluation of such a model, if necessary in relation to alternative models. Second, some methodological details are exemplified for the use of a model-based analysis in the concrete classroom context. Third, the application of the approach in comparison with other analytical models will be presented to demonstrate the explanatory power and depth of the model-based perspective. In particular, Toulmin's framework for structurally analysing arguments is contrasted with the approach presented here. It will be demonstrated how common methodological and theoretical problems in the context of Toulmin's framework can be overcome through a model-based perspective. Additionally, a second, more complex argumentative sequence will also be analysed according to the proposed analytical scheme to give a broader impression of its potential in practical use.

  4. The Development and Evaluation of Speaking Learning Model by Cooperative Approach

    ERIC Educational Resources Information Center

    Darmuki, Agus; Andayani; Nurkamto, Joko; Saddhono, Kundharu

    2018-01-01

    A cooperative approach-based Speaking Learning Model (SLM) has been developed to improve speaking skill of Higher Education students. This research aimed at evaluating the effectiveness of cooperative-based SLM viewed from the development of student's speaking ability and its effectiveness on speaking activity. This mixed method study combined…

  5. A Causal Modelling Approach to the Development of Theory-Based Behaviour Change Programmes for Trial Evaluation

    ERIC Educational Resources Information Center

    Hardeman, Wendy; Sutton, Stephen; Griffin, Simon; Johnston, Marie; White, Anthony; Wareham, Nicholas J.; Kinmonth, Ann Louise

    2005-01-01

    Theory-based intervention programmes to support health-related behaviour change aim to increase health impact and improve understanding of mechanisms of behaviour change. However, the science of intervention development remains at an early stage. We present a causal modelling approach to developing complex interventions for evaluation in…

  6. Occupational hazard evaluation model underground coal mine based on unascertained measurement theory

    NASA Astrophysics Data System (ADS)

    Deng, Quanlong; Jiang, Zhongan; Sun, Yaru; Peng, Ya

    2017-05-01

    In order to study how to comprehensively evaluate the influence of several occupational hazards on miners' physical and mental health, an occupational hazard evaluation indicator system was established, based on unascertained measurement theory, to support quantitative and qualitative analysis. Each indicator's weight was determined by information entropy, and the occupational hazard level was estimated using credible-degree recognition criteria. The evaluation model was programmed in Visual Basic and applied to a comprehensive occupational hazard evaluation of six posts in an underground coal mine; the occupational hazard degree was graded, and the evaluation results are consistent with the actual situation. The results show that dust and noise are the most significant occupational hazard factors in coal mines. Excavation face support workers are most affected, followed by heading machine drivers, coal cutter drivers, and coalface move support workers; the occupational hazard degree of these four types of workers is level II (mild). The occupational hazard degree of ventilation workers and safety inspection workers is level I. The evaluation model can evaluate underground coal mines objectively and accurately and can be employed in actual engineering.
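
    The entropy-based weighting step can be sketched as follows, assuming the common entropy-weight formulation; the indicator matrix is invented for illustration.

```python
import numpy as np

# Entropy-weight sketch (assuming the common entropy-weight formulation):
# rows are posts, columns are hazard indicators, values pre-normalized
# to (0, 1]; all numbers are invented for illustration.
X = np.array([[0.9, 0.7, 0.2],
              [0.4, 0.8, 0.6],
              [0.1, 0.3, 0.9]])

P = X / X.sum(axis=0)                      # column-wise proportions
k = 1.0 / np.log(X.shape[0])
E = -k * (P * np.log(P)).sum(axis=0)       # information entropy per indicator
w = (1 - E) / (1 - E).sum()                # entropy weights, sum to 1
print(np.round(w, 3))
```

    Indicators whose values vary more across posts carry lower entropy and therefore receive higher weight.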

  7. Climate change impacts on tree ranges: model intercomparison facilitates understanding and quantification of uncertainty.

    PubMed

    Cheaib, Alissar; Badeau, Vincent; Boe, Julien; Chuine, Isabelle; Delire, Christine; Dufrêne, Eric; François, Christophe; Gritti, Emmanuel S; Legay, Myriam; Pagé, Christian; Thuiller, Wilfried; Viovy, Nicolas; Leadley, Paul

    2012-06-01

    Model-based projections of shifts in tree species range due to climate change are becoming an important decision support tool for forest management. However, poorly evaluated sources of uncertainty require more scrutiny before relying heavily on models for decision-making. We evaluated uncertainty arising from differences in model formulations of tree response to climate change based on a rigorous intercomparison of projections of tree distributions in France. We compared eight models ranging from niche-based to process-based models. On average, models project large range contractions of temperate tree species in lowlands due to climate change. There was substantial disagreement between models for temperate broadleaf deciduous tree species, but differences in the capacity of models to account for rising CO₂ impacts explained much of the disagreement. There was good quantitative agreement among models concerning the range contractions for Scots pine. For the dominant Mediterranean tree species, Holm oak, all models foresee substantial range expansion. © 2012 Blackwell Publishing Ltd/CNRS.

  8. A new harvest operation cost model to evaluate forest harvest layout alternatives

    Treesearch

    Mark M. Clark; Russell D. Meller; Timothy P. McDonald; Chao Chi Ting

    1997-01-01

    The authors develop a new model for harvest operation costs that can be used to evaluate stands for potential harvest. The model is based on felling, extraction, and access costs, and is unique in its consideration of the interaction between harvest area shapes and access roads. The scientists illustrate the model and evaluate the impact of stand size, volume, and road...

  9. A Perspective on Computational Human Performance Models as Design Tools

    NASA Technical Reports Server (NTRS)

    Jones, Patricia M.

    2010-01-01

    The design of interactive systems, including levels of automation, displays, and controls, is usually based on design guidelines and iterative empirical prototyping. A complementary approach is to use computational human performance models to evaluate designs. An integrated strategy of model-based and empirical test and evaluation activities is particularly attractive as a methodology for verification and validation of human-rated systems for commercial space. This talk will review several computational human performance modeling approaches and their applicability to design of display and control requirements.

  10. The Discrepancy Evaluation Model: A Systematic Approach for the Evaluation of Career Planning and Placement Programs.

    ERIC Educational Resources Information Center

    Buttram, Joan L.; Covert, Robert W.

    The Discrepancy Evaluation Model (DEM), developed in 1966 by Malcolm Provus, provides information for program assessment and program improvement. Under the DEM, evaluation is defined as the comparison of an actual performance to a desired standard. The DEM embodies five stages of evaluation based upon a program's natural development: program…

  11. A systematic review of economic evaluations of population-based sodium reduction interventions.

    PubMed

    Hope, Silvia F; Webster, Jacqui; Trieu, Kathy; Pillay, Arti; Ieremia, Merina; Bell, Colin; Snowdon, Wendy; Neal, Bruce; Moodie, Marj

    2017-01-01

    To summarise evidence describing the cost-effectiveness of population-based interventions targeting sodium reduction. A systematic search of published and grey literature databases and websites was conducted using specified key words. Characteristics of identified economic evaluations were recorded, and included studies were appraised for reporting quality using the Consolidated Health Economic Evaluation Reporting Standards (CHEERS) checklist. Twenty studies met the study inclusion criteria and received a full paper review. Fourteen studies were identified as full economic evaluations in that they included both costs and benefits associated with an intervention measured against a comparator. Most studies were modelling exercises based on scenarios for achieving salt reduction and assumed effects on health outcomes. All 14 studies concluded that their specified intervention(s) targeting reductions in population sodium consumption were cost-effective, and in the majority of cases, were cost saving. Just over half the studies (8/14) were assessed as being of 'excellent' reporting quality, five studies fell into the 'very good' quality category and one into the 'good' category. All of the identified evaluations were based on modelling, whereby inputs for all the key parameters including the effect size were either drawn from published datasets, existing literature or based on expert advice. Despite a clear increase in evaluations of salt reduction programs in recent years, this review identified relatively few economic evaluations of population salt reduction interventions. None of the studies were based on actual implementation of intervention(s) and the associated collection of new empirical data. The studies universally showed that population-based salt reduction strategies are likely to be cost effective or cost saving. 
However, given the reliance on modelling, there is a need for the effectiveness of new interventions to be evaluated in the field using strong study designs and parallel economic evaluations.
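
    The core arithmetic behind such cost-effectiveness models is the incremental cost-effectiveness ratio (ICER); a minimal sketch with purely hypothetical salt-reduction numbers:

```python
# Incremental cost-effectiveness ratio (ICER), the core arithmetic of such
# evaluations; the salt-reduction numbers below are purely hypothetical.
def icer(d_cost, d_effect):
    """Incremental cost per unit of health gain (e.g. per DALY averted)."""
    if d_cost <= 0 and d_effect > 0:
        return "dominant (cost saving)"
    return d_cost / d_effect

# A program that costs 2.0M to run but saves 3.5M in treatment costs while
# averting 500 DALYs is cost saving ("dominant"), as many reviewed studies found.
print(icer(d_cost=2.0e6 - 3.5e6, d_effect=500))   # dominant (cost saving)
print(icer(d_cost=1.0e6, d_effect=500))           # 2000.0
```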

  12. Logistics Enterprise Evaluation Model Based On Fuzzy Clustering Analysis

    NASA Astrophysics Data System (ADS)

    Fu, Pei-hua; Yin, Hong-bo

    In this paper, we introduce an evaluation model for logistics enterprises based on a fuzzy clustering algorithm. First, we present an evaluation index system comprising basic information, management level, technical strength, transport capacity, informatization level, market competition, and customer service. We determined the index weights according to the grades and evaluated the integrated capability of the logistics enterprises using fuzzy cluster analysis. We describe the system evaluation module and the cluster analysis module in detail, including how each was implemented. Finally, we give the results of the system.
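
    The fuzzy cluster analysis step can be sketched with a compact fuzzy c-means implementation; the enterprise scores below are invented for illustration, not taken from the paper's index system.

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=100, seed=0):
    """Compact fuzzy c-means for 1-D data: returns cluster centers and the
    fuzzy membership matrix U (c x n, columns sum to 1)."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)
    for _ in range(iters):
        um = U ** m
        centers = um @ X / um.sum(axis=1)              # weighted cluster means
        d = np.abs(centers[:, None] - X[None, :]) + 1e-12
        inv = d ** (-2.0 / (m - 1))                    # standard FCM update
        U = inv / inv.sum(axis=0)
    return centers, U

# Hypothetical composite ability scores for six logistics enterprises.
scores = np.array([2.1, 2.4, 2.0, 8.8, 9.1, 8.5])
centers, U = fcm(scores)
labels = U.argmax(axis=0)                              # crisp assignment
print(np.sort(np.round(centers, 1)))
```

    The two recovered centers separate the weaker and stronger enterprises; in practice the clustered variable would be the weighted index score from the evaluation system.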

  13. A Regional Climate Model Evaluation System based on contemporary Satellite and other Observations for Assessing Regional Climate Model Fidelity

    NASA Astrophysics Data System (ADS)

    Waliser, D. E.; Kim, J.; Mattman, C.; Goodale, C.; Hart, A.; Zimdars, P.; Lean, P.

    2011-12-01

    Evaluation of climate models against observations is an essential part of assessing the impact of climate variations and change on regionally important sectors and improving climate models. Regional climate models (RCMs) are of particular interest. RCMs provide the fine-scale climate needed by the assessment community via downscaling of global climate model projections such as those contributing to the Coupled Model Intercomparison Project (CMIP) that form one aspect of the quantitative basis of the IPCC Assessment Reports. The lack of reliable fine-resolution observational data and formal tools and metrics has represented a challenge in evaluating RCMs. Recent satellite observations are particularly useful as they provide a wealth of information and constraints on many different processes within the climate system. Due to their large volume and the difficulties associated with accessing and using contemporary observations, however, these datasets have been generally underutilized in model evaluation studies. Recognizing this problem, NASA JPL and UCLA have developed the Regional Climate Model Evaluation System (RCMES) to help make satellite observations, in conjunction with in-situ and reanalysis datasets, more readily accessible to the regional modeling community. The system includes a central database (Regional Climate Model Evaluation Database: RCMED) to store multiple datasets in a common format and codes for calculating and plotting statistical metrics to assess model performance (Regional Climate Model Evaluation Tool: RCMET). This allows the time taken to compare model data with satellite observations to be reduced from weeks to days. RCMES is a component of the recent ExArch project, an international effort to facilitate the archiving of and access to massive amounts of data using cloud-based infrastructure, in this case as applied to the study of climate and climate change.
This presentation will describe RCMES and demonstrate its utility using examples from RCMs applied to the southwest US as well as to Africa based on output from the CORDEX activity. Application of RCMES to the evaluation of multi-RCM hindcast for CORDEX-Africa will be presented in a companion paper in A41.
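
    The statistical metrics such a tool computes when comparing model output against observations are of the familiar bias/RMSE/correlation kind; a minimal sketch with synthetic stand-in data (not RCMED values):

```python
import numpy as np

# The kind of point statistics a model-evaluation tool computes when
# comparing simulation output with observations; the arrays are synthetic
# stand-ins for co-located model and observed values.
obs = np.array([21.0, 23.5, 25.1, 24.2, 22.8])     # observed, e.g. deg C
model = np.array([20.2, 24.1, 26.0, 23.8, 23.5])   # simulated counterpart

bias = (model - obs).mean()                        # mean error
rmse = np.sqrt(((model - obs) ** 2).mean())        # root mean square error
corr = np.corrcoef(model, obs)[0, 1]               # pattern correlation
print(round(bias, 2), round(rmse, 2), round(corr, 2))   # 0.2 0.7 0.96
```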

  14. Capturing microscopic features of bone remodeling into a macroscopic model based on biological rationales of bone adaptation.

    PubMed

    Kim, Young Kwan; Kameo, Yoshitaka; Tanaka, Sakae; Adachi, Taiji

    2017-10-01

    To understand Wolff's law, bone adaptation by remodeling at the cellular and tissue levels has been discussed extensively through experimental and simulation studies. For the clinical application of a bone remodeling simulation, it is important to establish a macroscopic model that incorporates clarified microscopic mechanisms. In this study, we proposed novel macroscopic models based on the microscopic mechanism of osteocytic mechanosensing, in which the flow of fluid in the lacuno-canalicular porosity generated by fluid pressure gradients plays an important role, and theoretically evaluated the proposed models, taking biological rationales of bone adaptation into account. The proposed models were categorized into two groups according to whether the remodeling equilibrium state was defined globally or locally, i.e., the global or local uniformity models. Each remodeling stimulus in the proposed models was quantitatively evaluated through image-based finite element analyses of a swine cancellous bone, according to two introduced criteria associated with the trabecular volume and orientation at remodeling equilibrium based on biological rationales. The evaluation suggested that nonuniformity of the mean stress gradient in the local uniformity model, one of the proposed stimuli, has high validity. Furthermore, the adaptive potential of each stimulus was discussed based on the spatial distribution of a remodeling stimulus on the trabecular surface. The theoretical consideration of a remodeling stimulus based on biological rationales of bone adaptation would contribute to the establishment of a clinically applicable and reliable simulation model of bone remodeling.

  15. Comparison of land use regression models for NO2 based on routine and campaign monitoring data from an urban area of Japan.

    PubMed

    Kashima, Saori; Yorifuji, Takashi; Sawada, Norie; Nakaya, Tomoki; Eboshida, Akira

    2018-08-01

    Typically, land use regression (LUR) models have been developed using campaign monitoring data rather than routine monitoring data. However, the latter have advantages such as low cost and long-term coverage. Based on the idea that LUR models representing regional differences in air pollution and regional road structures are optimal, the objective of this study was to evaluate the validity of LUR models for nitrogen dioxide (NO₂) based on routine and campaign monitoring data obtained from an urban area. We selected the city of Suita in Osaka (Japan). We built a model based on routine monitoring data obtained from all sites (routine-LUR-All), and a model based on campaign monitoring data (campaign-LUR) within the city. Models based on routine monitoring data obtained from background sites (routine-LUR-BS) and based on data obtained from roadside sites (routine-LUR-RS) were also built. The routine LUR models were based on monitoring networks across two prefectures (i.e., Osaka and Hyogo prefectures). We calculated the predictability of each model. We then compared the predicted NO₂ concentrations from each model with measured annual average NO₂ concentrations from evaluation sites. The routine-LUR-All and routine-LUR-BS models both predicted NO₂ concentrations well: adjusted R² = 0.68 and 0.76, respectively, and root mean square error = 3.4 and 2.1 ppb, respectively. The predictions from the routine-LUR-All model were highly correlated with the measured NO₂ concentrations at evaluation sites. Although the predicted NO₂ concentrations from each model were correlated, the LUR models based on routine networks, and particularly those based on all monitoring sites, provided better visual representations of the local road conditions in the city. The present study demonstrated that LUR models based on routine data could estimate local traffic-related air pollution in an urban area.
The importance and usefulness of data from routine monitoring networks should be acknowledged. Copyright © 2018 Elsevier B.V. All rights reserved.
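
    An LUR model of this kind is, at its core, a least-squares regression of NO₂ on land-use predictors, evaluated by adjusted R² and RMSE; the sketch below uses synthetic data and invented predictor names, not the Suita measurements.

```python
import numpy as np

# Synthetic LUR-style sketch: OLS regression of NO2 on land-use predictors,
# reporting adjusted R^2 and RMSE. Predictor names and effect sizes are
# invented assumptions, not the study's variables.
rng = np.random.default_rng(1)
n = 60
road_density = rng.random(n)                 # e.g. road length within a buffer
urban_fraction = rng.random(n)               # e.g. built-up land-use share
no2 = 10 + 15 * road_density + 8 * urban_fraction + rng.normal(0, 2, n)

X = np.column_stack([np.ones(n), road_density, urban_fraction])
beta, *_ = np.linalg.lstsq(X, no2, rcond=None)
pred = X @ beta

ss_res = ((no2 - pred) ** 2).sum()
ss_tot = ((no2 - no2.mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - X.shape[1])
rmse = np.sqrt(ss_res / n)
print(round(adj_r2, 2), round(rmse, 2))
```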

  16. The Iterative Research Cycle: Process-Based Model Evaluation

    NASA Astrophysics Data System (ADS)

    Vrugt, J. A.

    2014-12-01

    Ever-increasing computational power, along with continued advances in measurement technologies and improvements in process understanding, has stimulated the development of increasingly complex physics-based models that simulate a myriad of processes at different spatial and temporal scales. Reconciling these high-order system models with perpetually larger volumes of field data is becoming more and more difficult, particularly because classical likelihood-based fitting methods lack the power to detect and pinpoint deficiencies in the model structure. In this talk I will give an overview of our latest research on process-based model calibration and evaluation. This approach, rooted in Bayesian theory, uses summary metrics of the calibration data rather than the data itself to help detect which component(s) of the model is (are) malfunctioning and in need of improvement. A few case studies involving hydrologic and geophysical models will be used to demonstrate the proposed methodology.
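
    Calibrating against summary metrics of the data rather than the data itself is the idea behind rejection-style approximate Bayesian computation; a minimal sketch, where the one-parameter model, flat prior and tolerance are illustrative assumptions rather than the talk's method:

```python
import numpy as np

# Rejection-ABC sketch: a one-parameter toy model is calibrated against a
# summary metric (the mean) of the data rather than the full series. The
# model, prior and tolerance are illustrative assumptions.
rng = np.random.default_rng(42)
observed = rng.normal(3.0, 1.0, 200)           # "field data"
obs_summary = observed.mean()                  # summary metric of the data

accepted = []
for _ in range(5000):
    theta = rng.uniform(0.0, 6.0)              # draw from a flat prior
    sim = rng.normal(theta, 1.0, 200)          # run the forward model
    if abs(sim.mean() - obs_summary) < 0.1:    # keep runs matching the summary
        accepted.append(theta)

posterior_mean = float(np.mean(accepted))
print(round(posterior_mean, 2))
```

    With several summary metrics, each tied to a different process, a mismatch on one metric points to the model component most in need of improvement.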

  17. Model-based economic evaluations in smoking cessation and their transferability to new contexts: a systematic review.

    PubMed

    Berg, Marrit L; Cheung, Kei Long; Hiligsmann, Mickaël; Evers, Silvia; de Kinderen, Reina J A; Kulchaitanaroaj, Puttarin; Pokhrel, Subhash

    2017-06-01

    To identify different types of models used in economic evaluations of smoking cessation, analyse the quality of the included models by examining their attributes, and ascertain their transferability to a new context. A systematic review of the literature on the economic evaluation of smoking cessation interventions published between 1996 and April 2015, identified via Medline, EMBASE, the National Health Service (NHS) Economic Evaluation Database (NHS EED) and Health Technology Assessment (HTA). The quality of the included studies and their transferability scores were assessed using checklists based on the European Network of Health Economic Evaluation Databases (EURONHEED) criteria. Studies were excluded if they were not in smoking cessation, not original research, not a model-based economic evaluation, did not consider an adult population or were not from a high-income country. Among the 64 economic evaluations included in the review, the state-transition Markov model was the most frequently used method (n = 30/64), with quality-adjusted life years (QALYs) being the most frequently used outcome measure over a lifetime horizon. A small number of the included studies (13 of 64) were eligible for the EURONHEED transferability checklist. The overall transferability scores ranged from 0.50 to 0.97, with an average score of 0.75. The average score per section was 0.69 (range = 0.35-0.92). The relative transferability of the studies could not be established due to a limitation present in the EURONHEED method. All existing economic evaluations in smoking cessation lack one or more key study attributes necessary to be fully transferable to a new context. © 2017 The Authors. Addiction published by John Wiley & Sons Ltd on behalf of Society for the Study of Addiction.

  18. IRLT: Integrating Reputation and Local Trust for Trustworthy Service Recommendation in Service-Oriented Social Networks

    PubMed Central

    Liu, Zhiquan; Ma, Jianfeng; Jiang, Zhongyuan; Miao, Yinbin; Gao, Cong

    2016-01-01

    With the prevalence of Social Networks (SNs) and services, plenty of trust models for Trustworthy Service Recommendation (TSR) in Service-oriented SNs (S-SNs) have been proposed. The reputation-based schemes usually do not incorporate user preferences and are vulnerable to unfair rating attacks. Meanwhile, the local trust-based schemes generally have low reliability or even fail to work when the trust path is too long or does not exist. Thus it is beneficial to integrate them for TSR in S-SNs. This work improves the state-of-the-art Combining Global and Local Trust (CGLT) scheme and proposes a novel Integrating Reputation and Local Trust (IRLT) model which mainly includes four modules, namely the Service Recommendation Interface (SRI) module, Local Trust-based Trust Evaluation (LTTE) module, Reputation-based Trust Evaluation (RTE) module and Aggregation Trust Evaluation (ATE) module. In addition, a synthetic S-SN based on the well-known Advogato dataset is deployed, and the Discounted Cumulative Gain (DCG) metric is employed to compare the service recommendation performance of our IRLT model with that of the CGLT model. The results illustrate that our IRLT model is slightly superior to the CGLT model in an honest environment and significantly outperforms the CGLT model in terms of robustness against unfair rating attacks. PMID:26963089
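
    The DCG metric used to measure recommendation performance can be sketched directly from its standard definition; the relevance grades of the ranked services below are hypothetical.

```python
import math

# Discounted Cumulative Gain in its standard form; the relevance grades of
# a hypothetical ranked list of recommended services are invented.
def dcg(relevances):
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

ranked = [3, 2, 3, 0, 1]                  # graded relevance in ranked order
ideal = sorted(ranked, reverse=True)      # best possible ordering
ndcg = dcg(ranked) / dcg(ideal)           # normalized DCG in [0, 1]
print(round(dcg(ranked), 3), round(ndcg, 3))   # 6.149 0.972
```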

  19. IRLT: Integrating Reputation and Local Trust for Trustworthy Service Recommendation in Service-Oriented Social Networks.

    PubMed

    Liu, Zhiquan; Ma, Jianfeng; Jiang, Zhongyuan; Miao, Yinbin; Gao, Cong

    2016-01-01

    With the prevalence of Social Networks (SNs) and services, plenty of trust models for Trustworthy Service Recommendation (TSR) in Service-oriented SNs (S-SNs) have been proposed. The reputation-based schemes usually do not incorporate user preferences and are vulnerable to unfair rating attacks. Meanwhile, the local trust-based schemes generally have low reliability, or even fail to work, when the trust path is too long or does not exist. Thus it is beneficial to integrate them for TSR in S-SNs. This work improves the state-of-the-art Combining Global and Local Trust (CGLT) scheme and proposes a novel Integrating Reputation and Local Trust (IRLT) model which mainly includes four modules, namely the Service Recommendation Interface (SRI) module, Local Trust-based Trust Evaluation (LTTE) module, Reputation-based Trust Evaluation (RTE) module and Aggregation Trust Evaluation (ATE) module. In addition, a synthetic S-SN based on the well-known Advogato dataset is deployed and the Discounted Cumulative Gain (DCG) metric is employed to measure the service recommendation performance of our IRLT model compared with that of the CGLT model. The results illustrate that our IRLT model is slightly superior to the CGLT model in an honest environment and significantly outperforms the CGLT model in terms of robustness against unfair rating attacks.
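The Discounted Cumulative Gain metric used above to score ranked recommendations is standard and easy to sketch; the following is a generic implementation of the textbook definition, not the paper's code:

```python
import math

def dcg(relevances):
    """Discounted cumulative gain of a ranked list of relevance scores:
    the gain of the item at (1-based) rank r is discounted by log2(r + 1)."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(relevances):
    """DCG normalised by the DCG of the ideal (descending) ordering."""
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

print(ndcg([3, 2, 1]))          # 1.0: already ideally ranked
print(ndcg([3, 1, 2]) < 1.0)    # True: items 2 and 3 are swapped
```

A higher nDCG for IRLT than for CGLT on the same recommendation lists would correspond to the performance gap the abstract reports.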

  20. Argumentation in Science Education: A Model-Based Framework

    ERIC Educational Resources Information Center

    Bottcher, Florian; Meisert, Anke

    2011-01-01

    The goal of this article is threefold: First, the theoretical background for a model-based framework of argumentation to describe and evaluate argumentative processes in science education is presented. Based on the general model-based perspective in cognitive science and the philosophy of science, it is proposed to understand arguments as reasons…

  1. [Simulation and data analysis of stereological modeling based on virtual slices].

    PubMed

    Wang, Hao; Shen, Hong; Bai, Xiao-yan

    2008-05-01

    To establish a computer-assisted stereological model for simulating the process of slice sectioning, and to evaluate the relationship between the section surface and the estimated three-dimensional structure. The model was designed mathematically as Win32 software based on MFC, using Microsoft Visual Studio as the IDE, for simulating the infinite process of sectioning and for analysis of the data derived from the model. The linearity of the model's fit was evaluated by comparison with the traditional formula. The Win32 software based on this algorithm allowed random sectioning of particles distributed randomly in an ideal virtual cube. The stereological parameters scored very highly (>94.5% and 92%) in homogeneity and independence tests. The density, shape and size data of the sections were tested and found to conform to normal distributions. The output of the model and that of the image analysis system showed statistical correlation and consistency. The algorithm described can be used for evaluating the stereological parameters of the structure of tissue slices.

  2. Climate Model Diagnostic Analyzer Web Service System

    NASA Astrophysics Data System (ADS)

    Lee, S.; Pan, L.; Zhai, C.; Tang, B.; Kubar, T. L.; Li, J.; Zhang, J.; Wang, W.

    2015-12-01

    Both the National Research Council Decadal Survey and the latest Intergovernmental Panel on Climate Change Assessment Report stressed the need for comprehensive and innovative evaluation of climate models, with the synergistic use of global satellite observations, in order to improve our weather and climate simulation and prediction capabilities. The abundance of satellite observations for fundamental climate parameters and the availability of coordinated model outputs from CMIP5 for the same parameters offer a great opportunity to understand and diagnose model biases in climate models. In addition, the Obs4MIPs efforts have created several key global observational datasets that are readily usable for model evaluations. However, a model diagnostic evaluation process requires physics-based multi-variable comparisons that typically involve large-volume and heterogeneous datasets, making them both computationally- and data-intensive. In response, we have developed a novel methodology to diagnose model biases in contemporary climate models and implemented the methodology as a web-service-based, cloud-enabled, provenance-supported climate-model evaluation system. The evaluation system is named Climate Model Diagnostic Analyzer (CMDA), which is the product of the research and technology development investments of several current and past NASA ROSES programs. The current technologies and infrastructure of CMDA are designed and selected to address several technical challenges that the Earth science modeling and model analysis community faces in evaluating and diagnosing climate models. In particular, we have three key technology components: (1) diagnostic analysis methodology; (2) web-service based, cloud-enabled technology; (3) provenance-supported technology. The diagnostic analysis methodology includes random forest feature importance ranking, conditional probability distribution function, conditional sampling, and time-lagged correlation map. 
We have implemented the new methodology as web services and incorporated the system into the Cloud. We have also developed a provenance management system for CMDA where CMDA service semantics modeling, service search and recommendation, and service execution history management are designed and implemented.
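One element of the diagnostic methodology, random forest feature importance ranking, can be illustrated with scikit-learn on synthetic data; the predictors and target below are invented placeholders, not CMDA's actual satellite or CMIP5 fields:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
# Synthetic stand-ins for four candidate climate predictors of one target field;
# by construction only features 0 and 2 carry signal, feature 0 dominating.
X = rng.normal(size=(400, 4))
y = 3.0 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=400)

forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
ranking = np.argsort(forest.feature_importances_)[::-1]
print(ranking[0])  # index of the most informative predictor
```

In a diagnostic setting, this kind of ranking indicates which observed variables most strongly constrain the model quantity under study.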

  3. TElehealth in CHronic disease: mixed-methods study to develop the TECH conceptual model for intervention design and evaluation.

    PubMed

    Salisbury, Chris; Thomas, Clare; O'Cathain, Alicia; Rogers, Anne; Pope, Catherine; Yardley, Lucy; Hollinghurst, Sandra; Fahey, Tom; Lewis, Glyn; Large, Shirley; Edwards, Louisa; Rowsell, Alison; Segar, Julia; Brownsell, Simon; Montgomery, Alan A

    2015-02-06

    To develop a conceptual model for effective use of telehealth in the management of chronic health conditions, and to use this to develop and evaluate an intervention for people with two exemplar conditions: raised cardiovascular disease risk and depression. The model was based on several strands of evidence: a metareview and realist synthesis of quantitative and qualitative evidence on telehealth for chronic conditions; a qualitative study of patients' and health professionals' experience of telehealth; a quantitative survey of patients' interest in using telehealth; and review of existing models of chronic condition management and evidence-based treatment guidelines. Based on these evidence strands, a model was developed and then refined at a stakeholder workshop. Then a telehealth intervention ('Healthlines') was designed by incorporating strategies to address each of the model components. The model also provided a framework for evaluation of this intervention within parallel randomised controlled trials in the two exemplar conditions, and the accompanying process evaluations and economic evaluations. Primary care. The TElehealth in CHronic Disease (TECH) model proposes that attention to four components will offer interventions the best chance of success: (1) engagement of patients and health professionals, (2) effective chronic disease management (including subcomponents of self-management, optimisation of treatment, care coordination), (3) partnership between providers and (4) patient, social and health system context. Key intended outcomes are improved health, access to care, patient experience and cost-effective care. A conceptual model has been developed based on multiple sources of evidence which articulates how telehealth may best provide benefits for patients with chronic health conditions. It can be used to structure the design and evaluation of telehealth programmes which aim to be acceptable to patients and providers, and cost-effective. 
Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  4. Evaluation of parameters of color profile models of LCD and LED screens

    NASA Astrophysics Data System (ADS)

    Zharinov, I. O.; Zharinov, O. O.

    2017-12-01

    The purpose of the research relates to the problem of parametric identification of the color profile model of LCD (liquid crystal display) and LED (light emitting diode) screens. The color profile model of a screen is based on the Grassmann’s Law of additive color mixture. Mathematically the problem is to evaluate unknown parameters (numerical coefficients) of the matrix transformation between different color spaces. Several methods of evaluation of these screen profile coefficients were developed. These methods are based either on processing of some colorimetric measurements or on processing of technical documentation data.
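The parametric identification described above amounts to estimating the coefficients of a linear transform between colour spaces from colorimetric measurements. A minimal least-squares sketch follows; the colour patches are illustrative, and the standard sRGB-to-XYZ matrix is used only to fabricate consistent data, not as the paper's result:

```python
import numpy as np

# Hypothetical measurements: each row pairs the screen's (R, G, B) drive values
# with the CIE XYZ tristimulus values measured for the same patch.
rgb = np.array([[1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0],
                [1.0, 1.0, 1.0],
                [0.5, 0.5, 0.0]])
true_M = np.array([[0.4124, 0.3576, 0.1805],   # sRGB-to-XYZ, for data generation only
                   [0.2126, 0.7152, 0.0722],
                   [0.0193, 0.1192, 0.9505]])
xyz = rgb @ true_M.T

# Least-squares estimate of the profile matrix: solve xyz ~ rgb @ M.T
A, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
M_est = A.T
print(np.allclose(M_est, true_M, atol=1e-6))  # True: exact recovery from clean data
```

With real, noisy colorimeter readings the recovered matrix would only approximate the display's true primaries, which is why the abstract mentions several estimation methods.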

  5. A Model for Administrative Evaluation by Subordinates.

    ERIC Educational Resources Information Center

    Budig, Jeanne E.

    Under the administrator evaluation program adopted at Vincennes University, all faculty and professional staff are invited to evaluate each administrator above them in the chain of command. Originally based on the Purdue University "cafeteria" system, this evaluation model has been used biannually for 10 years. In an effort to simplify the system,…

  6. An Evaluation of the Private High School Curriculum in Turkey

    ERIC Educational Resources Information Center

    Aslan, Dolgun

    2016-01-01

    This study aims at evaluating curricula of private high schools in line with opinions of teachers working at the related high schools, and identifying any related problems. Screening model is used as a quantitative research method in the study. The "element-based curriculum evaluation model" is taken as basis for evaluation of the…

  7. Reliability evaluation of microgrid considering incentive-based demand response

    NASA Astrophysics Data System (ADS)

    Huang, Ting-Cheng; Zhang, Yong-Jun

    2017-07-01

    Incentive-based demand response (IBDR) can guide customers to adjust their electricity usage and actively curtail load. Meanwhile, distributed generation (DG) and energy storage systems (ESS) can provide time for the implementation of IBDR. This paper focuses on the reliability evaluation of a microgrid considering IBDR. Firstly, the mechanism of IBDR and its impact on power supply reliability are analysed. Secondly, the IBDR dispatch model considering customers' comprehensive assessment and the customer response model are developed. Thirdly, a reliability evaluation method considering IBDR, based on Monte Carlo simulation, is proposed. Finally, the validity of the above models and method is studied through numerical tests on a modified RBTS Bus6 test system. Simulation results demonstrate that IBDR can improve the reliability of the microgrid.
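The Monte Carlo ingredient of such a reliability evaluation can be sketched for a single repairable component; this is a generic sequential simulation with illustrative failure and repair rates, not the paper's RBTS Bus6 model:

```python
import random

def simulated_availability(fail_rate, repair_rate, years=2000.0, seed=1):
    """Sequential Monte Carlo of a two-state (up/down) repairable component.
    Dwell times in each state are exponential with the given annual rates;
    returns the simulated fraction of time the component is available."""
    rng = random.Random(seed)
    t = downtime = 0.0
    up = True
    while t < years:
        dwell = rng.expovariate(fail_rate if up else repair_rate)
        if not up:
            downtime += min(dwell, years - t)
        t += dwell
        up = not up
    return 1.0 - downtime / years

# Should approach the analytical availability mu / (lambda + mu) = 100/101
print(simulated_availability(fail_rate=1.0, repair_rate=100.0))
```

A microgrid study would run many such components jointly, overlaying DG, ESS and IBDR dispatch on the sampled outage sequences to obtain system-level reliability indices.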

  8. Brief Lags in Interrupted Sequential Performance: Evaluating a Model and Model Evaluation Method

    DTIC Science & Technology

    2015-01-05

    rehearsal mechanism in the model. To evaluate the model we developed a simple new goodness-of-fit test based on analysis of variance that offers an...repeated step). Sequential constraints are common in medicine, equipment maintenance, computer programming and technical support, data analysis...legal analysis, accounting, and many other home and workplace environments. Sequential constraints also play a role in such basic cognitive processes

  9. Performance evaluation of four directional emissivity analytical models with thermal SAIL model and airborne images.

    PubMed

    Ren, Huazhong; Liu, Rongyuan; Yan, Guangjian; Li, Zhao-Liang; Qin, Qiming; Liu, Qiang; Nerry, Françoise

    2015-04-06

    Land surface emissivity is a crucial parameter in surface status monitoring. This study aims at the evaluation of four directional emissivity models, including two bi-directional reflectance distribution function (BRDF) models and two gap-frequency-based models. Results showed that the kernel-driven BRDF model could represent directional emissivity well, with an error of less than 0.002, and it was consequently used to retrieve emissivity with an accuracy of about 0.012 from an airborne multi-angular thermal infrared data set. Furthermore, we updated the cavity effect factor relating to multiple scattering inside the canopy, which improved the performance of the gap-frequency-based models.

  10. Sensitivity analysis, calibration, and testing of a distributed hydrological model using error‐based weighting and one objective function

    USGS Publications Warehouse

    Foglia, L.; Hill, Mary C.; Mehl, Steffen W.; Burlando, P.

    2009-01-01

    We evaluate the utility of three interrelated means of using data to calibrate the fully distributed rainfall‐runoff model TOPKAPI as applied to the Maggia Valley drainage area in Switzerland. The use of error‐based weighting of observation and prior information data, local sensitivity analysis, and single‐objective function nonlinear regression provides quantitative evaluation of sensitivity of the 35 model parameters to the data, identification of data types most important to the calibration, and identification of correlations among parameters that contribute to nonuniqueness. Sensitivity analysis required only 71 model runs, and regression required about 50 model runs. The approach presented appears to be ideal for evaluation of models with long run times or as a preliminary step to more computationally demanding methods. The statistics used include composite scaled sensitivities, parameter correlation coefficients, leverage, Cook's D, and DFBETAS. Tests suggest predictive ability of the calibrated model typical of hydrologic models.
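Composite scaled sensitivities, one of the statistics listed, summarise how much information the observations carry about each parameter. A generic sketch on a toy exponential recession model (not TOPKAPI; the formula follows the common definition, with the Jacobian taken by finite differences):

```python
import numpy as np

def composite_scaled_sensitivities(model, params, weights, eps=1e-6):
    """CSS_j = sqrt( mean_i ( (dy_i/db_j) * b_j * sqrt(w_i) )^2 ),
    with the Jacobian approximated by forward finite differences
    (one extra model run per parameter)."""
    y0 = model(params)
    css = []
    for j, b in enumerate(params):
        step = eps * max(abs(b), 1.0)
        p = params.copy()
        p[j] += step
        dy = (model(p) - y0) / step
        scaled = dy * b * np.sqrt(weights)
        css.append(float(np.sqrt(np.mean(scaled ** 2))))
    return np.array(css)

# Toy two-parameter model: a gain (params[0]) and a decay rate (params[1])
t = np.linspace(0.0, 1.0, 20)
model = lambda b: b[0] * np.exp(-b[1] * t)
css = composite_scaled_sensitivities(model, np.array([2.0, 0.1]), np.ones_like(t))
print(css[0] > css[1])  # the gain parameter carries most of the information
```

The same idea, applied to all 35 TOPKAPI parameters, identifies which ones the calibration data can actually constrain.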

  11. Modeling of dispersion near roadways based on the vehicle-induced turbulence concept

    NASA Astrophysics Data System (ADS)

    Sahlodin, Ali M.; Sotudeh-Gharebagh, Rahmat; Zhu, Yifang

    A mathematical model is developed for dispersion near roadways by incorporating vehicle-induced turbulence (VIT) into Gaussian dispersion modeling using computational fluid dynamics (CFD). The model is based on the Gaussian plume equation, in which the roadway is regarded as a series of point sources. The Gaussian dispersion parameters are modified by simulating the roadway with CFD in order to evaluate turbulent kinetic energy (TKE) as a measure of VIT. The model was evaluated against experimental carbon monoxide concentrations downwind of two major freeways reported in the literature. Good agreement was achieved between the model results and the literature data. A significant difference was observed between the model results with and without VIT; the difference is largest for data very close to the freeways. This model, after evaluation with additional data, may be used as a framework for predicting dispersion and deposition from any roadway for different traffic (vehicle type and speed) conditions.
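The "roadway as a series of point sources" construction can be sketched directly from the Gaussian plume equation. The dispersion coefficients below are illustrative power-law fits, not the CFD-modified values the paper derives, and all parameter values are invented for the example:

```python
import math

def point_source_conc(q, u, x, y, z, h=0.5):
    """Gaussian plume concentration from one point source with ground reflection.
    q: emission rate (g/s); u: wind speed (m/s); (x, y, z): receptor position
    downwind/crosswind/vertical (m); h: effective source height (m)."""
    if x <= 0:
        return 0.0
    sigma_y = 0.32 * x ** 0.78   # illustrative horizontal dispersion coefficient
    sigma_z = 0.24 * x ** 0.71   # illustrative vertical dispersion coefficient
    cross = math.exp(-y ** 2 / (2 * sigma_y ** 2))
    vert = (math.exp(-(z - h) ** 2 / (2 * sigma_z ** 2))
            + math.exp(-(z + h) ** 2 / (2 * sigma_z ** 2)))
    return q / (2 * math.pi * u * sigma_y * sigma_z) * cross * vert

def roadway_conc(q_per_m, u, x_receptor, z=1.5, road_half_len=500.0, dy=5.0):
    """Approximate a roadway as point sources spaced dy metres along its length."""
    total = 0.0
    n = int(2 * road_half_len / dy)
    for i in range(n):
        y_src = -road_half_len + (i + 0.5) * dy
        total += point_source_conc(q_per_m * dy, u, x_receptor, y_src, z)
    return total

# Concentration decays with downwind distance from the road
print(roadway_conc(0.01, 2.0, 50.0) > roadway_conc(0.01, 2.0, 200.0))
```

Incorporating VIT would amount to enlarging sigma_y and sigma_z near the road according to the CFD-derived turbulent kinetic energy.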

  12. Operating Comfort Prediction Model of Human-Machine Interface Layout for Cabin Based on GEP.

    PubMed

    Deng, Li; Wang, Guohua; Chen, Bo

    2015-01-01

    To address the evaluation and decision-making problem of human-machine interface layout design for cabins, an operating comfort prediction model based on GEP (Gene Expression Programming) is proposed, using operating comfort to evaluate layout schemes. The operating posture of the upper limb is described through joint angles, which are taken as independent variables to establish a comfort model of operating posture. Factor analysis is adopted to reduce the variable dimension: the model's input variables are reduced from 16 joint angles to 4 comfort impact factors, and the output variable is the operating comfort score. A Chinese virtual human body model is built with CATIA software and used to simulate and evaluate operators' comfort. With 22 groups of evaluation data as training and validation samples, the GEP algorithm is used to obtain the best-fitting function between the joint angles and operating comfort; operating comfort can then be predicted quantitatively. The prediction results for the human-machine interface layout of a driller control room show that the GEP-based operating comfort prediction model is fast and efficient, predicts well, and can improve design efficiency.

  13. Operating Comfort Prediction Model of Human-Machine Interface Layout for Cabin Based on GEP

    PubMed Central

    Wang, Guohua; Chen, Bo

    2015-01-01

    To address the evaluation and decision-making problem of human-machine interface layout design for cabins, an operating comfort prediction model based on GEP (Gene Expression Programming) is proposed, using operating comfort to evaluate layout schemes. The operating posture of the upper limb is described through joint angles, which are taken as independent variables to establish a comfort model of operating posture. Factor analysis is adopted to reduce the variable dimension: the model's input variables are reduced from 16 joint angles to 4 comfort impact factors, and the output variable is the operating comfort score. A Chinese virtual human body model is built with CATIA software and used to simulate and evaluate operators' comfort. With 22 groups of evaluation data as training and validation samples, the GEP algorithm is used to obtain the best-fitting function between the joint angles and operating comfort; operating comfort can then be predicted quantitatively. The prediction results for the human-machine interface layout of a driller control room show that the GEP-based operating comfort prediction model is fast and efficient, predicts well, and can improve design efficiency. PMID:26448740

  14. SECURITY MODELING FOR MARITIME PORT DEFENSE RESOURCE ALLOCATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harris, S.; Dunn, D.

    2010-09-07

    Redeployment of existing law enforcement resources and optimal use of geographic terrain are examined for countering the threat of a maritime-based small-vessel radiological or nuclear attack. The evaluation was based on modeling conducted by the Savannah River National Laboratory that involved the development of options for defensive resource allocation that can reduce the risk of a maritime-based radiological or nuclear threat. A diverse range of potential attack scenarios has been assessed. As a result of identifying vulnerable pathways, effective countermeasures can be deployed using current resources. The modeling involved the use of the Automated Vulnerability Evaluation for Risks of Terrorism (AVERT®) software to conduct computer-based simulation modeling. The models provided estimates for the probability of encountering an adversary based on allocated resources, including response boats, patrol boats and helicopters, over various environmental conditions including day, night, rough seas and various traffic flow rates.

  15. Using a data base management system for modelling SSME test history data

    NASA Technical Reports Server (NTRS)

    Abernethy, K.

    1985-01-01

    The usefulness of a data base management system (DBMS) for modelling historical test data for the complete series of static test firings for the Space Shuttle Main Engine (SSME) was assessed. From an analysis of user data base query requirements, it became clear that a relational DBMS which included a relationally complete query language would permit a model satisfying the query requirements. Representative models and sample queries are discussed. A list of environment-specific evaluation criteria for the desired DBMS was constructed; these criteria include requirements in the areas of user-interface complexity, program independence, flexibility, modifiability, and output capability. The evaluation process included the construction of several prototype data bases for user assessment. The systems studied, representing the three major DBMS conceptual models, were: MIRADS, a hierarchical system; DMS-1100, a CODASYL-based network system; ORACLE, a relational system; and DATATRIEVE, a relational-type system.

  16. A systematic and critical review of model-based economic evaluations of pharmacotherapeutics in patients with bipolar disorder.

    PubMed

    Mohiuddin, Syed

    2014-08-01

    Bipolar disorder (BD) is a chronic and relapsing mental illness with a considerable health-related and economic burden. The primary goal of pharmacotherapeutics for BD is to improve patients' well-being. The use of decision-analytic models is key in assessing the added value of the pharmacotherapeutics aimed at treating the illness, but concerns have been expressed about the appropriateness of different modelling techniques and about the transparency in the reporting of economic evaluations. This paper aimed to identify and critically appraise published model-based economic evaluations of pharmacotherapeutics in BD patients. A systematic review combining common terms for BD and economic evaluation was conducted in MEDLINE, EMBASE, PSYCINFO and ECONLIT. Studies identified were summarised and critically appraised in terms of the use of modelling technique, model structure and data sources. Considering the prognosis and management of BD, the possible benefits and limitations of each modelling technique are discussed. Fourteen studies were identified using model-based economic evaluations of pharmacotherapeutics in BD patients. Of these 14 studies, nine used Markov, three used discrete-event simulation (DES) and two used decision-tree models. Most of the studies (n = 11) did not include the rationale for the choice of modelling technique undertaken. Half of the studies did not include the risk of mortality. Surprisingly, no study considered the risk of having a mixed bipolar episode. This review identified various modelling issues that could potentially reduce the comparability of one pharmacotherapeutic intervention with another. Better use and reporting of the modelling techniques in future studies are essential. DES modelling appears to be a flexible and comprehensive technique for evaluating the comparability of BD treatment options because of its greater flexibility in depicting disease progression over time. 
However, depending on the research question, modelling techniques other than DES might also be appropriate in some cases.
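The Markov approach used by most of the reviewed studies can be sketched as a cohort simulation. The states, transition probabilities, costs and utilities below are invented for illustration and are not taken from any reviewed model:

```python
import numpy as np

# Hypothetical three-state annual-cycle Markov model: stable, acute episode, dead
P = np.array([[0.85, 0.12, 0.03],    # illustrative transition probabilities
              [0.50, 0.45, 0.05],
              [0.00, 0.00, 1.00]])
utility = np.array([0.85, 0.45, 0.0])    # QALY weight per state per cycle
cost = np.array([1200.0, 5200.0, 0.0])   # cost per state per cycle

cohort = np.array([1.0, 0.0, 0.0])       # the whole cohort starts in the stable state
qalys = spend = 0.0
for cycle in range(1, 21):               # 20 one-year cycles, 3.5% discounting
    cohort = cohort @ P
    d = 1.035 ** -cycle
    qalys += d * (cohort @ utility)
    spend += d * (cohort @ cost)
print(round(qalys, 2), "discounted QALYs;", round(spend), "discounted cost per patient")
```

Running such a model once per treatment arm and comparing incremental costs to incremental QALYs yields the cost-effectiveness ratios these evaluations report; a DES model would instead track individual patients and event times, which is what gives it the flexibility the review highlights.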

  17. AQMEII: A New International Initiative on Air Quality Model Evaluation

    EPA Science Inventory

    We provide a conceptual view of the process of evaluating regional-scale three-dimensional numerical photochemical air quality modeling systems, based on an examination of existing approaches to the evaluation of such systems as they are currently used in a variety of applications.

  18. Modeling the effect of land use change on hydrology of a forested watershed in coastal South Carolina.

    Treesearch

    Zhaohua Dai; Devendra M. Amatya; Ge Sun; Changsheng Li; Carl C. Trettin; Harbin Li

    2009-01-01

    Since hydrology is one of the main factors controlling wetland functions, hydrologic models are useful for evaluating the effects of land use change on wetland ecosystems. We evaluated two process-based hydrologic models with...

  19. [A new model for the evaluation of measurements of the neurocranium].

    PubMed

    Seidler, H; Wilfing, H; Weber, G; Traindl-Prohazka, M; zur Nedden, D; Platzer, W

    1993-12-01

    A simple and user-friendly model for trigonometric description of the neurocranium based on newly defined points of measurement is presented. This model not only provides individual description, but also allows for an evaluation of developmental and phylogenetic aspects.

  20. Research on efficiency evaluation model of integrated energy system based on hybrid multi-attribute decision-making.

    PubMed

    Li, Yan

    2017-05-25

    Efficiency evaluation of an integrated energy system involves many influencing factors whose attribute values are heterogeneous and non-deterministic; they usually cannot be given as specific numerical values or accurate probability distributions, which biases the final evaluation result. According to the characteristics of the integrated energy system, a hybrid multi-attribute decision-making model is constructed that takes the decision maker's risk preference into account. When evaluating the efficiency of an integrated energy system, some evaluation indexes take linguistic values, or the evaluation experts' assessments are inconsistent; this introduces ambiguity into the decision information, usually in the form of uncertain linguistic values and numerical interval values. An interval-valued multiple-attribute decision-making method and a fuzzy linguistic multiple-attribute decision-making model are therefore proposed. Finally, the mathematical model for efficiency evaluation of an integrated energy system is constructed.

  1. An Object-Based Approach to Evaluation of Climate Variability Projections and Predictions

    NASA Astrophysics Data System (ADS)

    Ammann, C. M.; Brown, B.; Kalb, C. P.; Bullock, R.

    2017-12-01

    Evaluations of the performance of earth system model predictions and projections are of critical importance to enhance usefulness of these products. Such evaluations need to address specific concerns depending on the system and decisions of interest; hence, evaluation tools must be tailored to inform about specific issues. Traditional approaches that summarize grid-based comparisons of analyses and models, or between current and future climate, often do not reveal important information about the models' performance (e.g., spatial or temporal displacements; the reason behind a poor score) and are unable to accommodate these specific information needs. For example, summary statistics such as the correlation coefficient or the mean-squared error provide minimal information to developers, users, and decision makers regarding what is "right" and "wrong" with a model. New spatial and temporal-spatial object-based tools from the field of weather forecast verification (where comparisons typically focus on much finer temporal and spatial scales) have been adapted to more completely answer some of the important earth system model evaluation questions. In particular, the Method for Object-based Diagnostic Evaluation (MODE) tool and its temporal (three-dimensional) extension (MODE-TD) have been adapted for these evaluations. More specifically, these tools can be used to address spatial and temporal displacements in projections of El Nino-related precipitation and/or temperature anomalies, ITCZ-associated precipitation areas, atmospheric rivers, seasonal sea-ice extent, and other features of interest. Examples of several applications of these tools in a climate context will be presented, using output of the CESM large ensemble. In general, these tools provide diagnostic information about model performance - accounting for spatial, temporal, and intensity differences - that cannot be achieved using traditional (scalar) model comparison approaches. 
Thus, they can provide more meaningful information that can be used in decision-making and planning. Future extensions and applications of these tools in a climate context will be considered.

  2. Surrogate Based Uni/Multi-Objective Optimization and Distribution Estimation Methods

    NASA Astrophysics Data System (ADS)

    Gong, W.; Duan, Q.; Huo, X.

    2017-12-01

    Parameter calibration has been demonstrated as an effective way to improve the performance of dynamic models, such as hydrological models, land surface models, and weather and climate models. Traditional optimization algorithms usually cost a huge number of model evaluations, making dynamic model calibration very difficult, or even computationally prohibitive. With the help of a series of recently developed adaptive surrogate-modelling based optimization methods - the uni-objective optimization method ASMO, the multi-objective optimization method MO-ASMO, and the probability distribution estimation method ASMO-PODE - the number of model evaluations can be significantly reduced to several hundreds, making it possible to calibrate very expensive dynamic models, such as regional high-resolution land surface models, weather forecast models such as WRF, and intermediate-complexity earth system models such as LOVECLIM. This presentation provides a brief introduction to the common framework of the adaptive surrogate-based optimization algorithms ASMO, MO-ASMO and ASMO-PODE, a case study of Common Land Model (CoLM) calibration in the Heihe river basin in Northwest China, and an outlook on potential applications of surrogate-based optimization methods.
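The core loop of adaptive surrogate-based optimization - fit a cheap surrogate to a few expensive evaluations, optimize the surrogate, evaluate the true model only at the surrogate's optimum, refit - can be sketched in one dimension. This is an illustration of the general idea with a polynomial surrogate, not the actual ASMO algorithm, and the "expensive" model is an invented stand-in:

```python
import numpy as np

def expensive_model(x):
    """Stand-in for one costly dynamic-model evaluation (illustrative)."""
    return (x - 0.3) ** 2 + 0.05 * np.sin(3 * x)

xs = list(np.linspace(0.0, 1.0, 5))            # small initial design
ys = [expensive_model(x) for x in xs]
grid = np.linspace(0.0, 1.0, 201)
for _ in range(10):                            # adaptive refinement iterations
    coeffs = np.polyfit(xs, ys, deg=4)         # cheap polynomial surrogate
    x_new = float(grid[np.argmin(np.polyval(coeffs, grid))])
    xs.append(x_new)                           # run the true model only here
    ys.append(float(expensive_model(x_new)))
best = xs[int(np.argmin(ys))]
print(abs(best - 0.3) < 0.15)                  # near the true minimum after ~15 runs
```

The point of the design is visible in the run count: roughly 15 true-model evaluations rather than the hundreds or thousands a direct search would need, which is what makes calibrating expensive models feasible.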

  3. Interpreting experimental data on egg production--applications of dynamic differential equations.

    PubMed

    France, J; Lopez, S; Kebreab, E; Dijkstra, J

    2013-09-01

    This contribution focuses on applying mathematical models based on systems of ordinary first-order differential equations to synthesize and interpret data from egg production experiments. Models based on linear systems of differential equations are contrasted with those based on nonlinear systems. Regression equations arising from analytical solutions to linear compartmental schemes are considered as candidate functions for describing egg production curves, together with aspects of parameter estimation. Extant candidate functions are reviewed, a role for growth functions such as the Gompertz equation suggested, and a function based on a simple new model outlined. Structurally, the new model comprises a single pool with an inflow and an outflow. Compartmental simulation models based on nonlinear systems of differential equations, and thus requiring numerical solution, are next discussed, and aspects of parameter estimation considered. This type of model is illustrated in relation to development and evaluation of a dynamic model of calcium and phosphorus flows in layers. The model consists of 8 state variables representing calcium and phosphorus pools in the crop, stomachs, plasma, and bone. The flow equations are described by Michaelis-Menten or mass action forms. Experiments that measure Ca and P uptake in layers fed different calcium concentrations during shell-forming days are used to evaluate the model. In addition to providing a useful management tool, such a simulation model also provides a means to evaluate feeding strategies aimed at reducing excretion of potential pollutants in poultry manure to the environment.
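The "single pool with an inflow and an outflow" structure described above is a linear first-order ODE, which can be solved both numerically and in closed form. The sketch below uses invented parameter values and a simple exponentially decaying inflow; it illustrates the model class, not the authors' fitted function:

```python
import math

def simulate_pool(i0, c, k, t_end, dt=0.0005):
    """Forward-Euler integration of the one-pool model
    dQ/dt = i0*exp(-c*t) - k*Q, with Q(0) = 0 (inflow decays at rate c,
    outflow is first-order with rate constant k)."""
    q = t = 0.0
    while t < t_end:
        q += (i0 * math.exp(-c * t) - k * q) * dt
        t += dt
    return q

def analytic_pool(i0, c, k, t):
    """Closed-form solution of the same linear ODE (valid for k != c)."""
    return i0 / (k - c) * (math.exp(-c * t) - math.exp(-k * t))

numeric = simulate_pool(1.0, 0.05, 0.3, 10.0)
exact = analytic_pool(1.0, 0.05, 0.3, 10.0)
print(abs(numeric - exact) < 1e-2)  # the Euler solution tracks the closed form
```

The closed-form branch corresponds to the regression-equation route the abstract describes for linear systems; the numerical branch is what the nonlinear calcium-phosphorus model requires, since its Michaelis-Menten flows have no analytical solution.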

  4. A human factors systems approach to understanding team-based primary care: a qualitative analysis

    PubMed Central

    Mundt, Marlon P.; Swedlund, Matthew P.

    2016-01-01

    Background. Research shows that high-functioning teams improve patient outcomes in primary care. However, there is no consensus on a conceptual model of team-based primary care that can be used to guide measurement and performance evaluation of teams. Objective. To qualitatively understand whether the Systems Engineering Initiative for Patient Safety (SEIPS) model could serve as a framework for creating and evaluating team-based primary care. Methods. We evaluated qualitative interview data from 19 clinicians and staff members from 6 primary care clinics associated with a large Midwestern university. All health care clinicians and staff in the study clinics completed a survey of their communication connections to team members. Social network analysis identified key informants for interviews by selecting the respondents with the highest frequency of communication ties as reported by their teammates. Semi-structured interviews focused on communication patterns, team climate and teamwork. Results. Themes derived from the interviews lent support to the SEIPS model components, such as the work system (Team, Tools and Technology, Physical Environment, Tasks and Organization), team processes and team outcomes. Conclusions. Our qualitative data support the SEIPS model as a promising conceptual framework for creating and evaluating primary care teams. Future studies of team-based care may benefit from using the SEIPS model to shift clinical practice to high functioning team-based primary care. PMID:27578837

  5. Modifying climate change habitat models using tree species-specific assessments of model uncertainty and life history-factors

    Treesearch

    Stephen N. Matthews; Louis R. Iverson; Anantha M. Prasad; Matthew P. Peters; Paul G. Rodewald

    2011-01-01

    Species distribution models (SDMs) to evaluate trees' potential responses to climate change are essential for developing appropriate forest management strategies. However, there is a great need to better understand these models' limitations and evaluate their uncertainties. We have previously developed statistical models of suitable habitat, based on both...

  6. Development of a Logic Model to Guide Evaluations of the ASCA National Model for School Counseling Programs

    ERIC Educational Resources Information Center

    Martin, Ian; Carey, John

    2014-01-01

    A logic model was developed based on an analysis of the 2012 American School Counselor Association (ASCA) National Model in order to provide direction for program evaluation initiatives. The logic model identified three outcomes (increased student achievement/gap reduction, increased school counseling program resources, and systemic change and…

  7. An evaluation method of power quality about electrified railways connected to power grid based on PSCAD/EMTDC

    NASA Astrophysics Data System (ADS)

    Liang, Weibin; Ouyang, Sen; Huang, Xiang; Su, Weijian

    2017-05-01

    The existing process for modelling the power quality of electrified railways connected to the power grid is complicated, and the simulation scenarios are incomplete, so this paper puts forward a novel evaluation method of power quality based on PSCAD/EMTDC. Firstly, a model of the power quality of electrified railways connected to the power grid is established based on testing reports or measured data. The equivalent model of an electrified locomotive contains power and harmonic characteristics, represented by a load and a harmonic source, respectively. Secondly, to make the evaluation more complete, an analysis scheme is put forward that combines three dimensions of the electrified locomotive: type, working condition, and quantity. Finally, the Shenmao Railway is taken as an example to evaluate power quality under different scenarios, and the results show that electrified railways connected to the power grid have a significant effect on power quality.

  8. Using a Systematic Conceptual Model for a Process Evaluation of a Middle School Obesity Risk-Reduction Nutrition Curriculum Intervention: "Choice, Control & Change"

    ERIC Educational Resources Information Center

    Lee, Heewon; Contento, Isobel R.; Koch, Pamela

    2013-01-01

    Objective: To use and review a conceptual model of process evaluation and to examine the implementation of a nutrition education curriculum, "Choice, Control & Change", designed to promote dietary and physical activity behaviors that reduce obesity risk. Design: A process evaluation study based on a systematic conceptual model. Setting: Five…

  9. Requirements Modeling with the Aspect-oriented User Requirements Notation (AoURN): A Case Study

    NASA Astrophysics Data System (ADS)

    Mussbacher, Gunter; Amyot, Daniel; Araújo, João; Moreira, Ana

    The User Requirements Notation (URN) is a recent ITU-T standard that supports requirements engineering activities. The Aspect-oriented URN (AoURN) adds aspect-oriented concepts to URN, creating a unified framework that allows for scenario-based, goal-oriented, and aspect-oriented modeling. AoURN is applied to the car crash crisis management system (CCCMS), modeling its functional and non-functional requirements (NFRs). AoURN generally models all use cases, NFRs, and stakeholders as individual concerns and provides general guidelines for concern identification. AoURN handles interactions between concerns, capturing their dependencies and conflicts as well as the resolutions. We present a qualitative comparison of aspect-oriented techniques for scenario-based and goal-oriented requirements engineering. An evaluation carried out based on the metrics adapted from literature and a task-based evaluation suggest that AoURN models are more scalable than URN models and exhibit better modularity, reusability, and maintainability.

  10. Model-based segmentation in orbital volume measurement with cone beam computed tomography and evaluation against current concepts.

    PubMed

    Wagner, Maximilian E H; Gellrich, Nils-Claudius; Friese, Karl-Ingo; Becker, Matthias; Wolter, Franz-Erich; Lichtenstein, Juergen T; Stoetzer, Marcus; Rana, Majeed; Essig, Harald

    2016-01-01

    Objective determination of the orbital volume is important in the diagnostic process and in evaluating the efficacy of medical and/or surgical treatment of orbital diseases. Tools designed to measure orbital volume with computed tomography (CT) often cannot be used with cone beam CT (CBCT) because of inferior tissue representation, although CBCT has the benefit of greater availability and lower patient radiation exposure. Therefore, a model-based segmentation technique is presented as a new method for measuring orbital volume and compared to alternative techniques. Both eyes from thirty subjects with no known orbital pathology who had undergone CBCT as a part of routine care were evaluated (n = 60 eyes). Orbital volume was measured with manual, atlas-based, and model-based segmentation methods. Volume measurements, volume determination time, and usability were compared between the three methods. Differences in means were tested for statistical significance using two-tailed Student's t tests. Neither atlas-based (26.63 ± 3.15 mm³) nor model-based (26.87 ± 2.99 mm³) measurements were significantly different from manual volume measurements (26.65 ± 4.0 mm³). However, the time required to determine orbital volume was significantly longer for manual measurements (10.24 ± 1.21 min) than for atlas-based (6.96 ± 2.62 min, p < 0.001) or model-based (5.73 ± 1.12 min, p < 0.001) measurements. All three orbital volume measurement methods examined can accurately measure orbital volume, although atlas-based and model-based methods seem to be more user-friendly and less time-consuming. The new model-based technique achieves fully automated segmentation results, whereas all atlas-based segmentations at least required manipulations to the anterior closing. Additionally, model-based segmentation can provide reliable orbital volume measurements when CT image quality is poor.
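
The significance testing described above (two-tailed Student's t tests on the mean volumes) can be sketched with synthetic data. The sample values below are random draws mimicking the reported means and standard deviations, not the study's measurements.

```python
import numpy as np

def two_sample_t(a, b):
    """Pooled two-sample Student's t statistic (equal variances assumed)."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(sp2 * (1 / na + 1 / nb))

rng = np.random.default_rng(42)
manual = rng.normal(26.65, 4.00, 60)   # hypothetical orbital volumes
model = rng.normal(26.87, 2.99, 60)    # hypothetical model-based volumes

t_stat = two_sample_t(manual, model)
# |t| below the ~1.98 critical value (df = 118, alpha = 0.05) would indicate
# no significant difference, as the paper reports for the volume comparison.
no_difference = abs(t_stat) < 1.98
```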

  11. Risk evaluation of highway engineering project based on the fuzzy-AHP

    NASA Astrophysics Data System (ADS)

    Yang, Qian; Wei, Yajun

    2011-10-01

    Engineering projects are social activities that integrate technology, economy, management, and organization. Uncertainty exists in every aspect of an engineering project, so risk management urgently needs strengthening. Based on an analysis of the characteristics of highway engineering and a study of the basic theory of risk evaluation, the paper builds an index system for highway project risk evaluation. Combining fuzzy mathematics with the analytic hierarchy process (AHP), a comprehensive fuzzy-AHP appraisal model is then set up for the risk evaluation of expressway concession projects. The validity and practicability of the model were verified by applying it to an actual project.
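
The AHP half of a fuzzy-AHP scheme can be sketched as follows. The 3×3 pairwise comparison matrix (three hypothetical risk criteria on Saaty's 1-9 scale) is illustrative, not taken from the paper; the weights are the principal eigenvector and the consistency ratio uses Saaty's random index for n = 3.

```python
import numpy as np

# Hypothetical pairwise comparisons: construction vs. financial vs. operational risk.
A = np.array([[1.0,   3.0, 5.0],
              [1/3.0, 1.0, 2.0],
              [1/5.0, 1/2.0, 1.0]])

# Principal-eigenvector prioritisation, the standard AHP weighting step.
eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()                      # normalised criterion weights

# Consistency ratio CR = ((lambda_max - n)/(n - 1)) / RI, with RI = 0.58 for n = 3;
# CR < 0.1 is the usual acceptance threshold for the judgement matrix.
n = A.shape[0]
CI = (eigvals[k].real - n) / (n - 1)
CR = CI / 0.58
```

In a full fuzzy-AHP evaluation these weights would then be combined with fuzzy membership degrees over the risk grades.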

  12. Evaluation of three energy balance-based evaporation models for estimating monthly evaporation for five lakes using derived heat storage changes from a hysteresis model

    NASA Astrophysics Data System (ADS)

    Duan, Zheng; Bastiaanssen, W. G. M.

    2017-02-01

    The heat storage change (Qt) can be a significant component of the energy balance in lakes, and accounting for Qt is important for reasonable estimation of evaporation at monthly and finer timescales when energy balance-based evaporation models are used. However, Qt has often been neglected due to the lack of required water temperature data. A simple hysteresis model (Qt = a*Rn + b + c*dRn/dt) has been demonstrated to reasonably estimate Qt for lakes and reservoirs from the readily available net all-wave radiation (Rn) and three locally calibrated coefficients (a-c). As a follow-up study, we evaluated whether this hysteresis model enables energy balance-based evaporation models to yield good evaporation estimates. Representative monthly evaporation data were compiled from published literature and used as ground truth to evaluate three energy balance-based evaporation models for five lakes. The three models, of differing complexity, are De Bruin-Keijman (DK), Penman, and a new model referred to as Duan-Bastiaanssen (DB). All three models require Qt as input. Each model was run in three scenarios differing in the input Qt (S1: measured Qt; S2: Qt modelled by the hysteresis model; S3: Qt neglected) to evaluate the impact of Qt on the modelled evaporation. The modelled Qt agreed well with the measured counterparts for all five lakes, confirming that the hysteresis model with locally calibrated coefficients can predict Qt with good accuracy for the same lake. Using modelled Qt as input, all three evaporation models yielded monthly evaporation estimates comparable to those obtained with measured Qt, and significantly better than those obtained when Qt was neglected. The DK model, which requires the minimum data, generally performed best, followed by the Penman and DB models. This study demonstrates that, once the three coefficients are locally calibrated using historical data, the simple hysteresis model can provide reasonable Qt to force energy balance-based evaporation models and improve evaporation modelling at monthly timescales for conditions and long-term periods when measured Qt is not available. We call on the scientific community to further test and refine the hysteresis model on more lakes in different geographic locations and environments.
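
The hysteresis model and its local calibration can be sketched as follows. The radiation series and the coefficients used to generate synthetic Qt are assumed values for illustration only; dRn/dt is approximated by finite differences, and the three coefficients are recovered by ordinary least squares.

```python
import numpy as np

# Synthetic monthly net all-wave radiation Rn over two annual cycles (assumed values).
t = np.arange(24, dtype=float)
Rn = 120.0 + 80.0 * np.sin(2 * np.pi * t / 12)

# "True" coefficients used only to generate synthetic heat storage data.
a_true, b_true, c_true = 0.6, -10.0, 25.0
dRn_dt = np.gradient(Rn, t)                    # finite-difference dRn/dt
Qt = a_true * Rn + b_true + c_true * dRn_dt

# Calibrate a, b, c by least squares on the hysteresis form Qt = a*Rn + b + c*dRn/dt.
X = np.column_stack([Rn, np.ones_like(Rn), dRn_dt])
coef, *_ = np.linalg.lstsq(X, Qt, rcond=None)
a_fit, b_fit, c_fit = coef
```

With real data the fitted coefficients would carry the local calibration; here the fit recovers the generating values exactly because the synthetic Qt contains no noise.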

  13. Forecasting plant phenology: evaluating the phenological models for Betula pendula and Padus racemosa spring phases, Latvia.

    PubMed

    Kalvāns, Andis; Bitāne, Māra; Kalvāne, Gunta

    2015-02-01

    A historical phenological record and meteorological data of the period 1960-2009 are used to analyse the ability of seven phenological models to predict leaf unfolding and beginning of flowering for two tree species-silver birch Betula pendula and bird cherry Padus racemosa-in Latvia. Model stability is estimated performing multiple model fitting runs using half of the data for model training and the other half for evaluation. Correlation coefficient, mean absolute error and mean squared error are used to evaluate model performance. UniChill (a model using sigmoidal development rate and temperature relationship and taking into account the necessity for dormancy release) and DDcos (a simple degree-day model considering the diurnal temperature fluctuations) are found to be the best models for describing the considered spring phases. A strong collinearity between base temperature and required heat sum is found for several model fitting runs of the simple degree-day based models. Large variation of the model parameters between different model fitting runs in case of more complex models indicates similar collinearity and over-parameterization of these models. It is suggested that model performance can be improved by incorporating the resolved daily temperature fluctuations of the DDcos model into the framework of the more complex models (e.g. UniChill). The average base temperature, as found by DDcos model, for B. pendula leaf unfolding is 5.6 °C and for the start of the flowering 6.7 °C; for P. racemosa, the respective base temperatures are 3.2 °C and 3.4 °C.
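
A plain degree-day accumulation of the kind underlying the DDcos model can be sketched as follows. The temperature series, heat sum, and start day are hypothetical; unlike DDcos, this sketch uses daily means and ignores diurnal fluctuations. Only the base temperature of 5.6 °C for Betula pendula leaf unfolding is taken from the abstract.

```python
import numpy as np

def degree_day_prediction(daily_mean_temp, t_base, heat_sum, start_doy=32):
    """Accumulate max(T - t_base, 0) from `start_doy` (day of year) and
    return the day of year on which `heat_sum` is reached, or None."""
    gdd = 0.0
    for doy in range(start_doy, len(daily_mean_temp) + 1):
        gdd += max(daily_mean_temp[doy - 1] - t_base, 0.0)
        if gdd >= heat_sum:
            return doy
    return None

# Synthetic spring warming curve: mean temperature rising 0.15 °C per day.
doys = np.arange(1, 181)
temps = -5.0 + 0.15 * doys

# Base temperature for B. pendula leaf unfolding (5.6 °C, from the study);
# the required heat sum of 60 degree-days is an assumed value.
leaf_unfolding = degree_day_prediction(temps, t_base=5.6, heat_sum=60.0)
```

The collinearity noted in the abstract is visible in this form: raising t_base while lowering heat_sum can produce nearly the same predicted date.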

  14. Performance assessment of geospatial simulation models of land-use change--a landscape metric-based approach.

    PubMed

    Sakieh, Yousef; Salmanmahiny, Abdolrassoul

    2016-03-01

    Performance evaluation is a critical step when developing land-use and cover change (LUCC) models. The present study proposes a spatially explicit model performance evaluation method, adopting a landscape metric-based approach. To quantify GEOMOD model performance, a set of composition- and configuration-based landscape metrics including number of patches, edge density, mean Euclidean nearest neighbor distance, largest patch index, class area, landscape shape index, and splitting index were employed. The model takes advantage of three decision rules including neighborhood effect, persistence of change direction, and urbanization suitability values. According to the results, while class area, largest patch index, and splitting indices demonstrated insignificant differences between spatial pattern of ground truth and simulated layers, there was a considerable inconsistency between simulation results and real dataset in terms of the remaining metrics. Specifically, simulation outputs were simplistic and the model tended to underestimate number of developed patches by producing a more compact landscape. Landscape-metric-based performance evaluation produces more detailed information (compared to conventional indices such as the Kappa index and overall accuracy) on the model's behavior in replicating spatial heterogeneity features of a landscape such as frequency, fragmentation, isolation, and density. Finally, as the main characteristic of the proposed method, landscape metrics employ the maximum potential of observed and simulated layers for a performance evaluation procedure, provide a basis for more robust interpretation of a calibration process, and also deepen modeler insight into the main strengths and pitfalls of a specific land-use change model when simulating a spatiotemporal phenomenon.
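
Two of the configuration metrics named above, number of patches (NP) and largest patch index (LPI), can be sketched on a toy binary "developed" map. The grid, 4-neighbour connectivity, and percentage-of-landscape definition of LPI are standard choices, not details taken from the paper.

```python
# 1 = developed cell, 0 = other; patches are 4-connected components.
def patch_metrics(grid):
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    sizes = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and not seen[r][c]:
                stack, size = [(r, c)], 0          # flood-fill one patch
                seen[r][c] = True
                while stack:
                    i, j = stack.pop()
                    size += 1
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < rows and 0 <= nj < cols and \
                           grid[ni][nj] == 1 and not seen[ni][nj]:
                            seen[ni][nj] = True
                            stack.append((ni, nj))
                sizes.append(size)
    np_metric = len(sizes)                          # number of patches
    lpi = 100.0 * max(sizes) / (rows * cols) if sizes else 0.0
    return np_metric, lpi

observed = [[1, 1, 0, 0],
            [1, 0, 0, 1],
            [0, 0, 1, 1],
            [0, 0, 1, 0]]
num_patches, largest_patch_index = patch_metrics(observed)
```

Comparing such metrics between an observed and a simulated map is what reveals the underestimation of patch counts described in the abstract.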

  15. Information and complexity measures for hydrologic model evaluation

    USDA-ARS?s Scientific Manuscript database

    Hydrological models are commonly evaluated through the residual-based performance measures such as the root-mean square error or efficiency criteria. Such measures, however, do not evaluate the degree of similarity of patterns in simulated and measured time series. The objective of this study was to...

  16. A standard telemental health evaluation model: the time is now.

    PubMed

    Kramer, Greg M; Shore, Jay H; Mishkind, Matt C; Friedl, Karl E; Poropatich, Ronald K; Gahm, Gregory A

    2012-05-01

    The telehealth field has advanced historic promises to improve access, cost, and quality of care. However, the extent to which it is delivering on its promises is unclear, as the scientific evidence needed to justify success is still emerging. Many have identified the need to advance the scientific knowledge base to better quantify success. One method for advancing that knowledge base is a standard telemental health evaluation model. Telemental health is defined here as the provision of mental health services using live, interactive video-teleconferencing technology. Evaluation in the telemental health field largely consists of descriptive and small pilot studies, is often defined by the individual goals of the specific programs, and is typically focused on only one outcome. The field should adopt new evaluation methods that consider the co-adaptive interaction between users (patients and providers), healthcare costs and savings, and the rapid evolution in communication technologies. Acceptance of a standard evaluation model will improve perceptions of telemental health as an established field, promote development of a sounder empirical base, promote interagency collaboration, and provide a framework for more multidisciplinary research that integrates measuring the impact of the technology and the overall healthcare aspect. We suggest that consideration of a standard model is timely given telemental health's current stage of scientific progress. We broadly recommend elements that such a standard evaluation model might include and suggest a way forward for adopting it.

  17. An Evaluation of the Preceptor Model versus the Formal Teaching Model.

    ERIC Educational Resources Information Center

    Shamian, Judith; Lemieux, Suzanne

    1984-01-01

    This study evaluated the effectiveness of two teaching methods to determine which is more effective in enhancing the knowledge base of participating nurses: the preceptor model embodies decentralized instruction by a member of the nursing staff, and the formal teaching model uses centralized teaching by the inservice education department. (JOW)

  18. Surface Modeling, Solid Modeling and Finite Element Modeling. Analysis Capabilities of Computer-Assisted Design and Manufacturing Systems.

    ERIC Educational Resources Information Center

    Nee, John G.; Kare, Audhut P.

    1987-01-01

    Explores several concepts in computer assisted design/computer assisted manufacturing (CAD/CAM). Defines, evaluates, reviews and compares advanced computer-aided geometric modeling and analysis techniques. Presents the results of a survey to establish the capabilities of minicomputer based-systems with the CAD/CAM packages evaluated. (CW)

  19. Evaluating Pillar Industry’s Transformation Capability: A Case Study of Two Chinese Steel-Based Cities

    PubMed Central

    Li, Zhidong; Marinova, Dora; Guo, Xiumei; Gao, Yuan

    2015-01-01

    Many steel-based cities in China were established between the 1950s and 1960s. After more than half a century of development and boom, these cities are starting to decline and industrial transformation is urgently needed. This paper focuses on evaluating the transformation capability of resource-based cities building an evaluation model. Using Text Mining and the Document Explorer technique as a way of extracting text features, the 200 most frequently used words are derived from 100 publications related to steel- and other resource-based cities. The Expert Evaluation Method (EEM) and Analytic Hierarchy Process (AHP) techniques are then applied to select 53 indicators, determine their weights and establish an index system for evaluating the transformation capability of the pillar industry of China’s steel-based cities. Using real data and expert reviews, the improved Fuzzy Relation Matrix (FRM) method is applied to two case studies in China, namely Panzhihua and Daye, and the evaluation model is developed using Fuzzy Comprehensive Evaluation (FCE). The cities’ abilities to carry out industrial transformation are evaluated with concerns expressed for the case of Daye. The findings have policy implications for the potential and required industrial transformation in the two selected cities and other resource-based towns. PMID:26422266
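
The Fuzzy Comprehensive Evaluation step named above can be sketched under assumed data: four indicators with AHP-style weights and an expert membership matrix over three capability grades. All numbers below are hypothetical; only the W·R weighted-average operator and the max-membership rule are standard FCE machinery.

```python
import numpy as np

weights = np.array([0.40, 0.25, 0.20, 0.15])    # indicator weights (hypothetical)
R = np.array([[0.5, 0.3, 0.2],                  # membership degrees per indicator
              [0.2, 0.5, 0.3],                  # over grades: strong / medium / weak
              [0.3, 0.4, 0.3],
              [0.1, 0.3, 0.6]])

# Weighted-average fuzzy operator B = W . R, then the maximum-membership rule.
B = weights @ R
grade = ["strong", "medium", "weak"][int(np.argmax(B))]
```

In the paper the weights would come from the EEM/AHP index system and the membership degrees from real data and expert reviews.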

  1. Discussion on accuracy degree evaluation of accident velocity reconstruction model

    NASA Astrophysics Data System (ADS)

    Zou, Tiefang; Dai, Yingbiao; Cai, Ming; Liu, Jike

    In order to investigate the applicability of accident velocity reconstruction models in different cases, a method for evaluating the accuracy degree of such models is given. Based on the theoretical and calculated pre-crash velocities, an accuracy-degree evaluation formula is obtained. Using a numerical simulation case, the accuracy degrees and applicability of two accident velocity reconstruction models are analyzed; the results show that the method is feasible in practice.

  2. Evaluating a Control System Architecture Based on a Formally Derived AOCS Model

    NASA Astrophysics Data System (ADS)

    Ilic, Dubravka; Latvala, Timo; Varpaaniemi, Kimmo; Vaisanen, Pauli; Troubitsyna, Elena; Laibinis, Linas

    2010-08-01

    Attitude & Orbit Control System (AOCS) refers to a wider class of control systems which are used to determine and control the attitude of the spacecraft while in orbit, based on the information obtained from various sensors. In this paper, we propose an approach to evaluate a typical (yet somewhat simplified) AOCS architecture using formal development - based on the Event-B method. As a starting point, an Ada specification of the AOCS is translated into a formal specification and further refined to incorporate all the details of its original source code specification. This way we are able not only to evaluate the Ada specification by expressing and verifying specific system properties in our formal models, but also to determine how well the chosen modelling framework copes with the level of detail required for an actual implementation and code generation from the derived models.

  3. Performance comparison of LUR and OK in PM2.5 concentration mapping: a multidimensional perspective

    PubMed Central

    Zou, Bin; Luo, Yanqing; Wan, Neng; Zheng, Zhong; Sternberg, Troy; Liao, Yilan

    2015-01-01

    Methods of Land Use Regression (LUR) modeling and Ordinary Kriging (OK) interpolation have been widely used to offset the shortcomings of PM2.5 data observed at sparse monitoring sites. However, traditional point-based performance evaluation strategy for these methods remains stagnant, which could cause unreasonable mapping results. To address this challenge, this study employs ‘information entropy’, an area-based statistic, along with traditional point-based statistics (e.g. error rate, RMSE) to evaluate the performance of LUR model and OK interpolation in mapping PM2.5 concentrations in Houston from a multidimensional perspective. The point-based validation reveals significant differences between LUR and OK at different test sites despite the similar end-result accuracy (e.g. error rate 6.13% vs. 7.01%). Meanwhile, the area-based validation demonstrates that the PM2.5 concentrations simulated by the LUR model exhibits more detailed variations than those interpolated by the OK method (i.e. information entropy, 7.79 vs. 3.63). Results suggest that LUR modeling could better refine the spatial distribution scenario of PM2.5 concentrations compared to OK interpolation. The significance of this study primarily lies in promoting the integration of point- and area-based statistics for model performance evaluation in air pollution mapping. PMID:25731103
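
The area-based statistic described above, information entropy, can be sketched on toy surfaces. The two random fields below are stand-ins for a variable LUR-like map and a much smoother OK-like map; the bin edges are fixed so both maps are compared on the same scale. None of the values correspond to the Houston data.

```python
import numpy as np

def information_entropy(surface, edges):
    """Shannon entropy (bits) of a mapped surface's value distribution.

    Higher entropy indicates more detailed spatial variation, which is the
    sense in which the LUR map out-scored the OK map in the paper.
    """
    counts, _ = np.histogram(surface, bins=edges)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
lur_like = rng.normal(10.0, 3.0, (50, 50))   # variable surface (toy stand-in)
ok_like = rng.normal(10.0, 0.3, (50, 50))    # smooth surface (toy stand-in)

edges = np.linspace(0.0, 20.0, 41)
h_lur = information_entropy(lur_like, edges)
h_ok = information_entropy(ok_like, edges)
```

The variable surface spreads its values over many bins and therefore carries more entropy, mirroring the 7.79 vs. 3.63 contrast reported in the abstract.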

  4. Parameterising User Uptake in Economic Evaluations: The role of discrete choice experiments.

    PubMed

    Terris-Prestholt, Fern; Quaife, Matthew; Vickerman, Peter

    2016-02-01

    Model-based economic evaluations of new interventions have shown that user behaviour (uptake) is a critical driver of overall impact achieved. However, early economic evaluations, prior to introduction, often rely on assumed levels of uptake based on expert opinion or uptake of similar interventions. In addition to the likely uncertainty surrounding these uptake assumptions, they also do not allow for uptake to be a function of product, intervention, or user characteristics. This letter proposes using uptake projections from discrete choice experiments (DCE) to better parameterize uptake and substitution in cost-effectiveness models. A simple impact model is developed and illustrated using an example from the HIV prevention field in South Africa. Comparison between the conventional approach and the DCE-based approach shows that, in our example, DCE-based impact predictions varied by up to 50% from conventional estimates and provided far more nuanced projections. In the absence of observed uptake data and to model the effect of variations in intervention characteristics, DCE-based uptake predictions are likely to greatly improve models parameterizing uptake solely based on expert opinion. This is particularly important for global and national level decision making around introducing new and probably more expensive interventions, particularly where resources are most constrained. © 2016 The Authors. Health Economics published by John Wiley & Sons Ltd.

  5. Reliability and performance evaluation of systems containing embedded rule-based expert systems

    NASA Technical Reports Server (NTRS)

    Beaton, Robert M.; Adams, Milton B.; Harrison, James V. A.

    1989-01-01

    A method for evaluating the reliability of real-time systems containing embedded rule-based expert systems is proposed and investigated. It is a three-stage technique that addresses the impact of knowledge-base uncertainties on the performance of expert systems. In the first stage, a Markov reliability model of the system is developed which identifies the key performance parameters of the expert system. In the second stage, the evaluation method is used to determine the values of the expert system's key performance parameters. The performance parameters can be evaluated directly by using a probabilistic model of uncertainties in the knowledge-base or by using sensitivity analyses. In the third and final stage, the performance parameters of the expert system are combined with performance parameters for other system components and subsystems to evaluate the reliability and performance of the complete system. The evaluation method is demonstrated in the context of a simple expert system used to supervise the performance of an FDI algorithm associated with an aircraft longitudinal flight-control system.
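
A Markov reliability model of the kind used in the first stage can be sketched with a small discrete-time chain. The three states and all transition probabilities below are hypothetical; in the paper they would be derived from the expert system's key performance parameters.

```python
import numpy as np

# States: 0 = fully operational, 1 = degraded (e.g. expert-system
# misdiagnosis), 2 = failed (absorbing). Rows are current state.
P = np.array([[0.990, 0.008, 0.002],
              [0.000, 0.950, 0.050],
              [0.000, 0.000, 1.000]])

state = np.array([1.0, 0.0, 0.0])      # start fully operational
for _ in range(100):                   # propagate 100 time steps
    state = state @ P

reliability = state[0] + state[1]      # probability the system has not failed
```

Sensitivity of `reliability` to individual entries of P is what the second stage's sensitivity analyses would quantify.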

  6. A Model-Based Evaluation of a Cultural Mediator Outreach Program for HIV+ Ethiopian Immigrants in Israel.

    ERIC Educational Resources Information Center

    Kaplan, Edward H.; Soskolne, Varda; Adler, Bella; Leventhal, Alex; Shtarkshall, Ronny A.

    2002-01-01

    Conducted a model-based evaluation of a program designed to reduce HIV transmission from HIV-infected Ethiopian immigrants in Israel. Focused on pregnancy rate reduction as a measure of sexual exposure. Results for 145 female and 176 male clients in the intervention suggest reduction in unprotected sexual exposures among program participants. (SLD)

  7. An evaluation of the hemiplegic subject based on the Bobath approach. Part I: The model.

    PubMed

    Guarna, F; Corriveau, H; Chamberland, J; Arsenault, A B; Dutil, E; Drouin, G

    1988-01-01

    An evaluation based on the Bobath approach to treatment has been developed. A model substantiating this evaluation is presented. In this model, the three stages of motor recovery described by Bobath have been extended to six in order to better follow the progression of the patient. Six parameters have also been identified; these are the elements to be quantified so that the progress of the patient through the stages of motor recovery can be followed. Four of these parameters are borrowed from the Bobath approach: postural reaction, muscle tone, reflex activity and active movement. Two have been added: sensorium and pain. An accompanying paper presents the evaluation protocol along with the operational definition of each of these parameters.

  8. Core Professionalism Education in Surgery: A Systematic Review.

    PubMed

    Sarıoğlu Büke, Akile; Karabilgin Öztürkçü, Özlem Sürel; Yılmaz, Yusuf; Sayek, İskender

    2018-03-15

    Professionalism education is one of the major elements of surgical residency education. The aim of this systematic review was to evaluate studies of core professionalism education programs in surgery. The review analyzed core professionalism programs for surgical residency education published in English that described at least three of the following features: program development model/instructional design method, aims and competencies, teaching methods, assessment methods, and program evaluation model or method. A total of 27083 articles were retrieved using EBSCOHOST, PubMed, Science Direct, Web of Science, and manual search; eight articles met the selection criteria. The instructional design method was presented in only one article, which described the Analysis, Design, Development, Implementation, and Evaluation model. Six articles were based on the Accreditation Council for Graduate Medical Education criteria, although there was significant variability in content. The most common teaching method was role modeling with scenario- and case-based learning. A wide range of assessment methods for evaluating professionalism education were reported. The Kirkpatrick model was reported in one article as a method for program evaluation. It is suggested that a core surgical professionalism education program should have a well-defined developmental/instructional design model, aims and competencies, content, teaching methods, assessment methods, and program evaluation methods/models, and that the content should be comparable across programs.

  9. Scoring annual earthquake predictions in China

    NASA Astrophysics Data System (ADS)

    Zhuang, Jiancang; Jiang, Changsheng

    2012-02-01

    The Annual Consultation Meeting on Earthquake Tendency in China is held by the China Earthquake Administration (CEA) in order to provide one-year earthquake predictions over most of China. In these predictions, regions of concern are denoted together with the corresponding magnitude range of the largest earthquake expected during the next year. Evaluating the performance of these earthquake predictions is rather difficult, especially for regions that are of no concern, because the predictions are made on arbitrary regions with flexible magnitude ranges. In the present study, the gambling score is used to evaluate the performance of these predictions. Based on a reference model, this scoring method rewards successful predictions and penalizes failures according to the risk (probability of failure) that the predictors have taken. Using as the reference model a Poisson model that is spatially inhomogeneous and temporally stationary, with the Gutenberg-Richter law for earthquake magnitudes, we evaluate the CEA predictions based on 1) a partial score that evaluates whether issuing the alarmed regions is based on information that differs from the reference model (knowledge of the average seismicity level) and 2) a complete score that evaluates whether the overall performance of the prediction is better than the reference model. The predictions made by the Annual Consultation Meetings on Earthquake Tendency from 1990 to 2003 are found to include significant precursory information, but the overall performance is close to that of the reference model.
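
The gambling-score idea can be sketched for a single region-year. The Poisson rate below is an assumed value, not one estimated from the Chinese catalog; the reward rule, (1 - p)/p for a success and -1 for a failure, is the standard fair-bet form where p is the reference-model probability of the predicted event.

```python
import math

def gambling_score(prob_ref, success):
    """Score one alarm against the reference model: the predictor risks one
    point and is rewarded (1 - p)/p on success, where p is the reference
    probability of the event; a failed alarm loses the point."""
    p = prob_ref
    return (1.0 - p) / p if success else -1.0

# Reference probability of at least one target earthquake in a region-year,
# from a stationary Poisson model (the rate of 0.05 events/yr is assumed).
rate = 0.05
p_ref = 1.0 - math.exp(-rate)

score_hit = gambling_score(p_ref, success=True)    # large reward for a rare event
score_miss = gambling_score(p_ref, success=False)  # fixed one-point loss
```

Under this rule a predictor who merely reproduces the reference model earns zero on average, which is what makes the score a test for genuine precursory information.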

  10. Damage evaluation by a guided wave-hidden Markov model based method

    NASA Astrophysics Data System (ADS)

    Mei, Hanfei; Yuan, Shenfang; Qiu, Lei; Zhang, Jinjin

    2016-02-01

    Guided wave based structural health monitoring has shown great potential in aerospace applications. However, one of the key challenges in practical engineering applications is the accurate interpretation of guided wave signals under time-varying environmental and operational conditions. This paper presents a guided wave-hidden Markov model (HMM) based method to improve the damage evaluation reliability of real aircraft structures under time-varying conditions. In the proposed approach, an HMM-based unweighted moving average trend estimation method, which can capture the trend of damage propagation from the posterior probability obtained by HMM modeling, is used to achieve a probabilistic evaluation of the structural damage. To validate the developed method, experiments are performed on a hole-edge crack specimen under fatigue loading and on a real aircraft wing spar under changing structural boundary conditions. Experimental results show the advantage of the proposed method.
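    The trend-estimation step lends itself to a compact sketch. Assuming the HMM has already produced a per-cycle posterior probability of the damaged state, an unweighted moving average over that sequence yields the damage-propagation trend (the window length and input are illustrative, not taken from the paper):

    ```python
    # Unweighted moving-average trend over HMM posterior damage probabilities.
    # The posterior sequence would come from HMM inference; here it is an input.

    def moving_average_trend(posteriors, window=5):
        trend = []
        for i in range(len(posteriors)):
            start = max(0, i - window + 1)
            segment = posteriors[start:i + 1]
            trend.append(sum(segment) / len(segment))  # trailing-window average
        return trend
    ```

    Averaging over a trailing window suppresses the cycle-to-cycle fluctuations caused by time-varying conditions while preserving the monotone growth associated with actual damage propagation.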

  11. An Evaluation Research Model for System-Wide Textbook Selection.

    ERIC Educational Resources Information Center

    Talmage, Harriet; Walberg, Herbert T.

    One component of an evaluation research model for system-wide selection of curriculum materials is reported: implementation of an evaluation design for obtaining data that permits professional and lay persons to base curriculum materials decisions on a "best fit" principle. The design includes teacher characteristics, learning environment…

  12. Risk Evaluation of Railway Coal Transportation Network Based on Multi Level Grey Evaluation Model

    NASA Astrophysics Data System (ADS)

    Niu, Wei; Wang, Xifu

    2018-01-01

    The railway transport mode is currently the most important mode of coal transportation, and China's railway coal transportation network has become increasingly complete, but there are still issues such as insufficient capacity and some lines operating close to saturation. In this paper, the theory and methods of risk assessment, the analytic hierarchy process, and a multi-level grey evaluation model are applied to the risk evaluation of the coal railway transportation network in China. An example analysis of the Shanxi railway coal transportation network is used to suggest improvements to the network's internal structure and its competitiveness in the market.
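    The synthesis step of a multi-level grey evaluation can be sketched as a weighted combination of grey-class membership degrees, with the weights supplied by the analytic hierarchy process. All numbers below are invented for illustration; they are not the paper's data.

    ```python
    import numpy as np

    def grey_evaluate(weights, membership):
        """weights: (n,) AHP criterion weights summing to 1;
        membership: (n, k) grey-class membership degrees for each criterion.
        Returns a normalised (k,) composite evaluation vector."""
        b = np.asarray(weights, float) @ np.asarray(membership, float)
        return b / b.sum()

    # Two criteria, three risk classes (low / medium / high); values are made up.
    result = grey_evaluate([0.6, 0.4],
                           [[0.2, 0.5, 0.3],
                            [0.1, 0.3, 0.6]])
    ```

    The class with the largest component of the composite vector would be taken as the overall risk level; in a multi-level model the same synthesis is applied bottom-up through the criterion hierarchy.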

  13. A modeling framework for evaluating streambank stabilization practices for reach-scale sediment reduction

    USDA-ARS?s Scientific Manuscript database

    Streambank stabilization techniques are often implemented to reduce sediment loads from unstable streambanks. Process-based models can predict sediment yields with stabilization scenarios prior to implementation. However, a framework does not exist on how to effectively utilize these models to evalu...

  14. An open-source Java-based Toolbox for environmental model evaluation: The MOUSE Software Application

    USDA-ARS?s Scientific Manuscript database

    A consequence of environmental model complexity is that the task of understanding how environmental models work and identifying their sensitivities/uncertainties, etc. becomes progressively more difficult. Comprehensive numerical and visual evaluation tools have been developed such as the Monte Carl...

  15. Integrate Data into Scientific Workflows for Terrestrial Biosphere Model Evaluation through Brokers

    NASA Astrophysics Data System (ADS)

    Wei, Y.; Cook, R. B.; Du, F.; Dasgupta, A.; Poco, J.; Huntzinger, D. N.; Schwalm, C. R.; Boldrini, E.; Santoro, M.; Pearlman, J.; Pearlman, F.; Nativi, S.; Khalsa, S.

    2013-12-01

    Terrestrial biosphere models (TBMs) have become integral tools for extrapolating local observations and process-level understanding of land-atmosphere carbon exchange to larger regions. Model-model and model-observation intercomparisons are critical to understand the uncertainties within model outputs, to improve model skill, and to improve our understanding of land-atmosphere carbon exchange. The DataONE Exploration, Visualization, and Analysis (EVA) working group is evaluating TBMs using scientific workflows in UV-CDAT/VisTrails. This workflow-based approach promotes collaboration and improved tracking of evaluation provenance. But challenges still remain. The multi-scale and multi-discipline nature of TBMs makes it necessary to include diverse and distributed data resources in model evaluation. These include, among others, remote sensing data from NASA, flux tower observations from various organizations including DOE, and inventory data from US Forest Service. A key challenge is to make heterogeneous data from different organizations and disciplines discoverable and readily integrated for use in scientific workflows. This presentation introduces the brokering approach taken by the DataONE EVA to fill the gap between TBMs' evaluation scientific workflows and cross-organization and cross-discipline data resources. The DataONE EVA started the development of an Integrated Model Intercomparison Framework (IMIF) that leverages standards-based discovery and access brokers to dynamically discover, access, and transform (e.g. subset and resampling) diverse data products from DataONE, Earth System Grid (ESG), and other data repositories into a format that can be readily used by scientific workflows in UV-CDAT/VisTrails. The discovery and access brokers serve as an independent middleware that bridge existing data repositories and TBMs evaluation scientific workflows but introduce little overhead to either component. 
In the initial work, an OpenSearch-based discovery broker is leveraged to provide a consistent mechanism for data discovery. Standards-based data services, including Open Geospatial Consortium (OGC) Web Coverage Service (WCS) and THREDDS are leveraged to provide on-demand data access and transformations through the data access broker. To ease the adoption of broker services, a package of broker client VisTrails modules have been developed to be easily plugged into scientific workflows. The initial IMIF has been successfully tested in selected model evaluation scenarios involved in the NASA-funded Multi-scale Synthesis and Terrestrial Model Intercomparison Project (MsTMIP).

  16. Evaluation of Troxler model 3411 nuclear gage.

    DOT National Transportation Integrated Search

    1978-01-01

    The performance of the Troxler Electronics Laboratory Model 3411 nuclear gage was evaluated through laboratory tests on the Department's density and moisture standards and field tests on various soils, base courses, and bituminous concrete overlays t...

  17. Quality evaluation on an e-learning system in continuing professional education of nurses.

    PubMed

    Lin, I-Chun; Chien, Yu-Mei; Chang, I-Chiu

    2006-01-01

    Maintaining high quality in Web-based learning is a powerful means of increasing the overall efficiency and effectiveness of distance learning. Many studies have evaluated Web-based learning, but few have evaluated it from the information systems (IS) perspective. This study applied the widely used IS Success model to measure the quality of a Web-based learning system, using a Web-based questionnaire for data collection. One hundred fifty-four nurses participated in the survey. Based on confirmatory factor analysis, the variables of the research model were a good fit for measuring the quality of a Web-based learning system. As Web-based education continues to grow worldwide, the results of this study may assist system adopters (hospital executives), learners (nurses), and system designers in making reasonable and informed judgments about the quality of Web-based learning systems in continuing professional education.

  18. Comparison of Marker-Based Genomic Estimated Breeding Values and Phenotypic Evaluation for Selection of Bacterial Spot Resistance in Tomato.

    PubMed

    Liabeuf, Debora; Sim, Sung-Chur; Francis, David M

    2018-03-01

    Bacterial spot affects tomato crops (Solanum lycopersicum) grown under humid conditions. Major genes and quantitative trait loci (QTL) for resistance have been described, and multiple loci from diverse sources need to be combined to improve disease control. We investigated genomic selection (GS) prediction models for resistance to Xanthomonas euvesicatoria and experimentally evaluated the accuracy of these models. The training population consisted of 109 families combining resistance from four sources and directionally selected from a population of 1,100 individuals. The families were evaluated on a plot basis in replicated inoculated trials and genotyped with single nucleotide polymorphisms (SNPs). We compared the prediction ability of models developed with 14 to 387 SNPs. Genomic estimated breeding values (GEBV) were derived using Bayesian least absolute shrinkage and selection operator regression (BL) and ridge regression (RR). Evaluations were based on leave-one-out cross validation and on empirical observations in replicated field trials using the next generation of inbred progeny and a hybrid population resulting from selections in the training population. Prediction ability was evaluated based on correlations between GEBV and phenotypes (r_g), the percentage of coselection between genomic and phenotypic selection, and the relative efficiency of selection (r_g/r_p). Results were similar with BL and RR models. Models using only markers previously identified as significantly associated with resistance but weighted by GEBV, as well as mixed models treating resistance-associated markers as fixed effects and genome-wide markers as random effects, offered greater accuracy and a high percentage of coselection. The accuracy of these models in predicting the performance of progeny and hybrids exceeded the accuracy of phenotypic selection.
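    The ridge regression (RR) flavour of GEBV prediction, evaluated by leave-one-out cross validation, can be sketched on synthetic data. The marker matrix, effect sizes, and shrinkage parameter below are invented; the study's SNP data are not reproduced.

    ```python
    import numpy as np

    def ridge_solve(X, y, lam):
        # Closed-form ridge estimate: beta = (X'X + lam*I)^{-1} X'y
        p = X.shape[1]
        return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

    def loo_prediction_ability(X, y, lam=1.0):
        """Leave-one-out GEBVs, summarised as the correlation r_g with phenotypes."""
        n = len(y)
        preds = np.empty(n)
        for i in range(n):
            keep = np.arange(n) != i        # drop family i from training
            beta = ridge_solve(X[keep], y[keep], lam)
            preds[i] = X[i] @ beta          # predict the held-out family
        return np.corrcoef(preds, y)[0, 1]

    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 8))            # 40 families x 8 markers (synthetic)
    y = X @ np.array([1.0, 0, 0.5, 0, 0, 0, 0, 0]) + rng.normal(0, 0.1, 40)
    r_g = loo_prediction_ability(X, y, lam=0.1)
    ```

    Shrinking all marker effects toward zero (the `lam` term) is what lets the model use many more markers than phenotyped families without overfitting.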

  19. Evaluation of integrated assessment model hindcast experiments: a case study of the GCAM 3.0 land use module

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snyder, Abigail C.; Link, Robert P.; Calvin, Katherine V.

    Hindcasting experiments (conducting a model forecast for a time period in which observational data are available) are being undertaken increasingly often by the integrated assessment model (IAM) community, across many scales of models. When they are undertaken, the results are often evaluated using global aggregates or otherwise highly aggregated skill scores that mask deficiencies. We select a set of deviation-based measures that can be applied on different spatial scales (regional versus global) to make evaluating the large number of variable–region combinations in IAMs more tractable. We also identify performance benchmarks for these measures, based on the statistics of the observational dataset, that allow a model to be evaluated in absolute terms rather than relative to the performance of other models at similar tasks. An ideal evaluation method for hindcast experiments in IAMs would feature both absolute measures for evaluation of a single experiment for a single model and relative measures to compare the results of multiple experiments for a single model or the same experiment repeated across multiple models, such as in community intercomparison studies. The performance benchmarks highlight the use of this scheme for model evaluation in absolute terms, providing information about the reasons a model may perform poorly on a given measure and therefore identifying opportunities for improvement. To demonstrate the use of and types of results possible with the evaluation method, the measures are applied to the results of a past hindcast experiment focusing on land allocation in the Global Change Assessment Model (GCAM) version 3.0. The question of how to more holistically evaluate models as complex as IAMs is an area for future research. We find quantitative evidence that global aggregates alone are not sufficient for evaluating IAMs that require global supply to equal global demand at each time period, such as GCAM. 
The results of this work indicate it is unlikely that a single evaluation measure for all variables in an IAM exists, and therefore sector-by-sector evaluation may be necessary.

  20. Evaluation of integrated assessment model hindcast experiments: a case study of the GCAM 3.0 land use module

    DOE PAGES

    Snyder, Abigail C.; Link, Robert P.; Calvin, Katherine V.

    2017-11-29

    Hindcasting experiments (conducting a model forecast for a time period in which observational data are available) are being undertaken increasingly often by the integrated assessment model (IAM) community, across many scales of models. When they are undertaken, the results are often evaluated using global aggregates or otherwise highly aggregated skill scores that mask deficiencies. We select a set of deviation-based measures that can be applied on different spatial scales (regional versus global) to make evaluating the large number of variable–region combinations in IAMs more tractable. We also identify performance benchmarks for these measures, based on the statistics of the observational dataset, that allow a model to be evaluated in absolute terms rather than relative to the performance of other models at similar tasks. An ideal evaluation method for hindcast experiments in IAMs would feature both absolute measures for evaluation of a single experiment for a single model and relative measures to compare the results of multiple experiments for a single model or the same experiment repeated across multiple models, such as in community intercomparison studies. The performance benchmarks highlight the use of this scheme for model evaluation in absolute terms, providing information about the reasons a model may perform poorly on a given measure and therefore identifying opportunities for improvement. To demonstrate the use of and types of results possible with the evaluation method, the measures are applied to the results of a past hindcast experiment focusing on land allocation in the Global Change Assessment Model (GCAM) version 3.0. The question of how to more holistically evaluate models as complex as IAMs is an area for future research. We find quantitative evidence that global aggregates alone are not sufficient for evaluating IAMs that require global supply to equal global demand at each time period, such as GCAM. 
The results of this work indicate it is unlikely that a single evaluation measure for all variables in an IAM exists, and therefore sector-by-sector evaluation may be necessary.

  1. Evaluation of integrated assessment model hindcast experiments: a case study of the GCAM 3.0 land use module

    NASA Astrophysics Data System (ADS)

    Snyder, Abigail C.; Link, Robert P.; Calvin, Katherine V.

    2017-11-01

    Hindcasting experiments (conducting a model forecast for a time period in which observational data are available) are being undertaken increasingly often by the integrated assessment model (IAM) community, across many scales of models. When they are undertaken, the results are often evaluated using global aggregates or otherwise highly aggregated skill scores that mask deficiencies. We select a set of deviation-based measures that can be applied on different spatial scales (regional versus global) to make evaluating the large number of variable-region combinations in IAMs more tractable. We also identify performance benchmarks for these measures, based on the statistics of the observational dataset, that allow a model to be evaluated in absolute terms rather than relative to the performance of other models at similar tasks. An ideal evaluation method for hindcast experiments in IAMs would feature both absolute measures for evaluation of a single experiment for a single model and relative measures to compare the results of multiple experiments for a single model or the same experiment repeated across multiple models, such as in community intercomparison studies. The performance benchmarks highlight the use of this scheme for model evaluation in absolute terms, providing information about the reasons a model may perform poorly on a given measure and therefore identifying opportunities for improvement. To demonstrate the use of and types of results possible with the evaluation method, the measures are applied to the results of a past hindcast experiment focusing on land allocation in the Global Change Assessment Model (GCAM) version 3.0. The question of how to more holistically evaluate models as complex as IAMs is an area for future research. We find quantitative evidence that global aggregates alone are not sufficient for evaluating IAMs that require global supply to equal global demand at each time period, such as GCAM. 
The results of this work indicate it is unlikely that a single evaluation measure for all variables in an IAM exists, and therefore sector-by-sector evaluation may be necessary.
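    One way to read the "deviation-based measure with an observation-derived benchmark" idea is the sketch below: a model passes on a given variable–region combination if its RMS deviation from observations is smaller than the observations' own variability. The specific measure and threshold are ours, chosen for illustration only.

    ```python
    import numpy as np

    def rms_deviation(model, obs):
        """Root-mean-square deviation between a model series and observations."""
        model, obs = np.asarray(model, float), np.asarray(obs, float)
        return np.sqrt(np.mean((model - obs) ** 2))

    def passes_benchmark(model, obs):
        """Benchmark drawn from the observations themselves: the model should
        deviate from the data by less than the data's standard deviation."""
        return rms_deviation(model, obs) < np.std(obs)
    ```

    Because the benchmark comes from the observational dataset rather than from other models, the score is absolute: a single model in a single hindcast experiment can pass or fail without reference to peers, which is exactly the property the abstract argues for.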

  2. Studying the teaching of kindness: A conceptual model for evaluating kindness education programs in schools.

    PubMed

    Kaplan, Deanna M; deBlois, Madeleine; Dominguez, Violeta; Walsh, Michele E

    2016-10-01

    Recent research suggests that school-based kindness education programs may benefit the learning and social-emotional development of youth and may improve school climate and school safety outcomes. However, how and to what extent kindness education programming influences positive outcomes in schools is poorly understood, and such programs are difficult to evaluate in the absence of a conceptual model for studying their effectiveness. In partnership with Kind Campus, a widely adopted school-based kindness education program that uses a bottom-up program framework, a methodology called concept mapping was used to develop a conceptual model for evaluating school-based kindness education programs from the input of 123 middle school students and approximately 150 educators, school professionals, and academic scholars. From the basis of this model, recommendations for processes and outcomes that would be useful to assess in evaluations of kindness education programs are made, and areas where additional instrument development may be necessary are highlighted. The utility of the concept mapping method as an initial step in evaluating other grassroots or non-traditional educational programming is also discussed. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Using the Many-Facet Rasch Model to Evaluate Standard-Setting Judgments: Setting Performance Standards for Advanced Placement® Examinations

    ERIC Educational Resources Information Center

    Kaliski, Pamela; Wind, Stefanie A.; Engelhard, George, Jr.; Morgan, Deanna; Plake, Barbara; Reshetar, Rosemary

    2012-01-01

    The Many-Facet Rasch (MFR) Model is traditionally used to evaluate the quality of ratings on constructed response assessments; however, it can also be used to evaluate the quality of judgments from panel-based standard setting procedures. The current study illustrates the use of the MFR Model by examining the quality of ratings obtained from a…

  4. An inverse problem strategy based on forward model evaluations: Gradient-based optimization without adjoint solves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aguilo Valentin, Miguel Alejandro

    2016-07-01

    This study presents a new nonlinear programming formulation for the solution of inverse problems. First, a general inverse problem formulation based on the compliance error functional is presented. The proposed error functional enables the computation of the Lagrange multipliers, and thus the first order derivative information, at the expense of just one model evaluation. Therefore, the calculation of the Lagrange multipliers does not require the solution of the computationally intensive adjoint problem. This leads to significant speedups for large-scale, gradient-based inverse problems.

  5. Equivalent model construction for a non-linear dynamic system based on an element-wise stiffness evaluation procedure and reduced analysis of the equivalent system

    NASA Astrophysics Data System (ADS)

    Kim, Euiyoung; Cho, Maenghyo

    2017-11-01

    In most non-linear analyses, the construction of a system matrix uses a large amount of computation time, comparable to the computation time required by the solving process. If the process for computing non-linear internal force matrices is substituted with an effective equivalent model that enables the bypass of numerical integrations and assembly processes used in matrix construction, efficiency can be greatly enhanced. A stiffness evaluation procedure (STEP) establishes non-linear internal force models using polynomial formulations of displacements. To efficiently identify an equivalent model, the method has evolved such that it is based on a reduced-order system. The reduction process, however, makes the equivalent model difficult to parameterize, which significantly affects the efficiency of the optimization process. In this paper, therefore, a new STEP, E-STEP, is proposed. Based on the element-wise nature of the finite element model, the stiffness evaluation is carried out element-by-element in the full domain. Since the unit of computation for the stiffness evaluation is restricted by element size, and since the computation is independent, the equivalent model can be constructed efficiently in parallel, even in the full domain. Due to the element-wise nature of the construction procedure, the equivalent E-STEP model is easily characterized by design parameters. Various reduced-order modeling techniques can be applied to the equivalent system in a manner similar to how they are applied in the original system. The reduced-order model based on E-STEP is successfully demonstrated for the dynamic analyses of non-linear structural finite element systems under varying design parameters.
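    The core of a stiffness evaluation procedure, identifying polynomial internal-force coefficients from sampled displacement/force pairs, can be illustrated for a single degree of freedom. The data and coefficients below are invented; a real E-STEP would perform this fit element-by-element over the finite element mesh.

    ```python
    import numpy as np

    # "Measured" internal forces for prescribed displacements of one DOF,
    # generated here from an assumed linear + cubic stiffness law.
    u = np.linspace(-1.0, 1.0, 21)
    f = 3.0 * u + 0.8 * u**3

    # Identify the polynomial stiffness coefficients by least squares.
    A = np.column_stack([u, u**3])
    k1, k3 = np.linalg.lstsq(A, f, rcond=None)[0]
    # With k1 and k3 identified, evaluating internal forces reduces to a
    # polynomial evaluation: no numerical integration or assembly is needed.
    ```

    This is what makes the equivalent model cheap: once the coefficients are known, the expensive matrix-construction step is bypassed in every subsequent non-linear iteration.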

  6. [Economic Evaluation of Integrated Care Systems - Scientific Standard Specifications, Challenges, Best Practice Model].

    PubMed

    Pimperl, A; Schreyögg, J; Rothgang, H; Busse, R; Glaeske, G; Hildebrandt, H

    2015-12-01

    Transparency of the economic performance of integrated care systems (IV) is a basic requirement for the acceptance and further development of integrated care. Diverse evaluation methods are used but are seldom openly discussed because of the proprietary nature of the different business models. The aim of this article is to develop a generic model for measuring the economic performance of IV interventions. A catalogue of five quality criteria is used to discuss different evaluation methods (uncontrolled before-after studies, control-group-based approaches, regression models). On this basis a best practice model is proposed. A regression model based on the German morbidity-based risk structure equalisation scheme (MorbiRSA) has some benefits in comparison to the other methods mentioned. In particular, it requires fewer resources to implement and offers advantages concerning the reliability and transparency of the method (important for acceptance). Its validity is also sound. Although RCTs and, to a lesser extent, complex difference-in-differences matching approaches can lead to higher validity of the results, their feasibility in real-life settings is limited for economic and practical reasons. That is why central criticisms of a MorbiRSA-based model were addressed, and adaptations were proposed and incorporated in a best practice model: the population-oriented, morbidity-adjusted margin improvement model (P-DBV(MRSA)). The P-DBV(MRSA) approach may be used as a standardised best practice model for the economic evaluation of IV. In parallel to the proposed approach for measuring economic performance, a balanced, quality-oriented performance measurement system should be introduced. This should prevent incentivising IV players to undertake short-term cost cutting at the expense of quality. © Georg Thieme Verlag KG Stuttgart · New York.

  7. Evaluating simplistic methods to understand current distributions and forecast distribution changes under climate change scenarios: An example with coypu (Myocastor coypus)

    USGS Publications Warehouse

    Jarnevich, Catherine S.; Young, Nicholas E; Sheffels, Trevor R.; Carter, Jacoby; Systma, Mark D.; Talbert, Colin

    2017-01-01

    Invasive species provide a unique opportunity to evaluate factors controlling biogeographic distributions; we can consider introduction success as an experiment testing the suitability of environmental conditions. Predicting potential distributions of spreading species is not easy, and forecasting potential distributions under changing climate is even more difficult. Using the globally invasive coypu (Myocastor coypus [Molina, 1782]), we evaluate and compare the utility of a simplistic ecophysiology-based model and a correlative model to predict current and future distribution. The ecophysiological model was based on winter temperature relationships with coypu (nutria) survival. We developed correlative statistical models using the Software for Assisted Habitat Modeling and biologically relevant climate data with a global extent. We applied the ecophysiology-based model to several global circulation model (GCM) predictions for mid-century. We used global coypu introduction data to evaluate these models and to explore a hypothesized physiological limitation, finding general agreement with the known coypu distribution locally and globally and support for an upper thermal tolerance threshold. The GCM-driven results showed variability in predicted coypu distribution among GCMs but generally agreed on an increasing suitable area in the USA. Our methods highlighted the dynamic nature of the edges of the coypu distribution due to climate non-equilibrium, and the uncertainty associated with forecasting future distributions. Areas deemed suitable habitat, especially those on the edge of the current known range, could be used for early detection of the spread of coypu populations for management purposes. Combining approaches can be beneficial for predicting potential distributions of invasive species now and in the future and for exploring hypotheses about the factors controlling distributions.

  8. Comparisons of Four Methods for Estimating a Dynamic Factor Model

    ERIC Educational Resources Information Center

    Zhang, Zhiyong; Hamaker, Ellen L.; Nesselroade, John R.

    2008-01-01

    Four methods for estimating a dynamic factor model, the direct autoregressive factor score (DAFS) model, are evaluated and compared. The first method estimates the DAFS model using a Kalman filter algorithm based on its state space model representation. The second one employs the maximum likelihood estimation method based on the construction of a…

  9. Normal and hemiparetic walking

    NASA Astrophysics Data System (ADS)

    Pfeiffer, Friedrich; König, Eberhard

    2013-01-01

    The idea of model-based control of rehabilitation for hemiparetic patients requires efficient models of human walking, both healthy and hemiparetic. Such models are presented in this paper. They include 42 degrees of freedom and, in particular, allow the evaluation of kinetic quantities with the goal of deriving measures for the severity of hemiparesis. Where feasible, the simulations have been compared successfully with measurements, thus improving the confidence level for an application in clinical practice. The paper is mainly based on the dissertation [19].

  10. Development and validation of a Markov microsimulation model for the economic evaluation of treatments in osteoporosis.

    PubMed

    Hiligsmann, Mickaël; Ethgen, Olivier; Bruyère, Olivier; Richy, Florent; Gathon, Henry-Jean; Reginster, Jean-Yves

    2009-01-01

    Markov models are increasingly used in economic evaluations of treatments for osteoporosis. Most of the existing evaluations are cohort-based Markov models missing comprehensive memory management and versatility. In this article, we describe and validate an original Markov microsimulation model to accurately assess the cost-effectiveness of prevention and treatment of osteoporosis. We developed a Markov microsimulation model with a lifetime horizon and a direct health-care cost perspective. The patient history was recorded and was used in calculations of transition probabilities, utilities, and costs. To test the internal consistency of the model, we carried out an example calculation for alendronate therapy. Then, external consistency was investigated by comparing absolute lifetime risk of fracture estimates with epidemiologic data. For women at age 70 years, with a twofold increase in the fracture risk of the average population, the costs per quality-adjusted life-year gained for alendronate therapy versus no treatment were estimated at €9,105 and €15,325, respectively, under full and realistic adherence assumptions. All the sensitivity analyses in terms of model parameters and modeling assumptions were coherent with expected conclusions, and absolute lifetime risk of fracture estimates were within the range of previous estimates, which confirmed both the internal and external consistency of the model. Microsimulation models present some major advantages over cohort-based models, increasing the reliability of the results, and are largely compatible with the existing state-of-the-art, evidence-based literature. The developed model appears to be a valid model for use in economic evaluations in osteoporosis.
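    The defining feature of a microsimulation, as opposed to a cohort-based Markov model, is that each patient is simulated individually, so transition probabilities can depend on that patient's recorded history. The sketch below keeps a single history flag (a prior fracture doubling subsequent risk); all rates are invented and much simpler than the published model.

    ```python
    import random

    def simulate_patient(cycles=30, p_fracture=0.02, history_multiplier=2.0,
                         rng=None):
        """One simulated patient over annual cycles; returns the fracture count."""
        rng = rng or random.Random()
        fractures, had_fracture = 0, False
        for _ in range(cycles):
            p = p_fracture * (history_multiplier if had_fracture else 1.0)
            if rng.random() < p:        # fracture event this cycle
                fractures += 1
                had_fracture = True     # recorded history raises future risk
        return fractures

    # Average over many simulated patients to estimate the cohort-level outcome.
    mean_fractures = sum(simulate_patient(rng=random.Random(i))
                         for i in range(10_000)) / 10_000
    ```

    A cohort model would need a separate Markov state for every relevant history combination; the microsimulation simply carries the history along with each simulated patient.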

  11. The Implementation and Evaluation of a Project-Oriented Problem-Based Learning Module in a First Year Engineering Programme

    ERIC Educational Resources Information Center

    McLoone, Seamus C.; Lawlor, Bob J.; Meehan, Andrew R.

    2016-01-01

    This paper describes how a circuits-based project-oriented problem-based learning educational model was integrated into the first year of a Bachelor of Engineering in Electronic Engineering programme at Maynooth University, Ireland. While many variations of problem based learning exist, the presented model is closely aligned with the model used in…

  12. Evaluation of the Professional Development Program on Web Based Content Development

    ERIC Educational Resources Information Center

    Yurdakul, Bünyamin; Uslu, Öner; Çakar, Esra; Yildiz, Derya G.

    2014-01-01

    The aim of this study is to evaluate the professional development program on web based content development (WBCD) designed by the Ministry of National Education (MoNE). Based on the theoretical CIPP model by Stufflebeam and Guskey's levels of evaluation, the study was carried out as a case study. The study group consisted of the courses that…

  13. Monte-Carlo-based uncertainty propagation with hierarchical models—a case study in dynamic torque

    NASA Astrophysics Data System (ADS)

    Klaus, Leonard; Eichstädt, Sascha

    2018-04-01

    For a dynamic calibration, a torque transducer is described by a mechanical model, and the corresponding model parameters are to be identified from measurement data. A measuring device for the primary calibration of dynamic torque, and a corresponding model-based calibration approach, have recently been developed at PTB. The complete mechanical model of the calibration set-up is very complex, and involves several calibration steps—making a straightforward implementation of a Monte Carlo uncertainty evaluation tedious. With this in mind, we here propose to separate the complete model into sub-models, with each sub-model being treated with individual experiments and analysis. The uncertainty evaluation for the overall model then has to combine the information from the sub-models in line with Supplement 2 of the Guide to the Expression of Uncertainty in Measurement. In this contribution, we demonstrate how to carry this out using the Monte Carlo method. The uncertainty evaluation involves various input quantities of different origin and the solution of a numerical optimisation problem.
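    The combination step can be sketched with the Monte Carlo method: each sub-model contributes samples of its parameters, and the overall model is evaluated sample-by-sample, in the spirit of GUM Supplement 2. The sub-model outputs chosen here (a torsional stiffness and a moment of inertia feeding a resonance-frequency model) and all numbers are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N = 200_000  # number of Monte Carlo trials

    # Sub-model 1: torsional stiffness identified in its own calibration step.
    k = rng.normal(1.80, 0.02, N)    # N*m/rad, standard uncertainty 0.02
    # Sub-model 2: moment of inertia from a separate experiment.
    J = rng.normal(0.50, 0.01, N)    # kg*m^2, standard uncertainty 0.01

    # Overall model combining the sub-models: angular resonance frequency.
    omega = np.sqrt(k / J)
    estimate = omega.mean()
    uncertainty = omega.std(ddof=1)  # standard uncertainty of the combined result
    ```

    Treating each sub-model separately keeps the individual experiments and analyses tractable; the Monte Carlo draw-through then propagates all the sub-model uncertainties, including any non-linearity of the overall model, without linearisation.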

  14. Addressing the translational dilemma: dynamic knowledge representation of inflammation using agent-based modeling.

    PubMed

    An, Gary; Christley, Scott

    2012-01-01

    Given the panoply of system-level diseases that result from disordered inflammation, such as sepsis, atherosclerosis, cancer, and autoimmune disorders, understanding and characterizing the inflammatory response is a key target of biomedical research. Untangling the complex behavioral configurations associated with a process as ubiquitous as inflammation represents a prototype of the translational dilemma: the ability to translate mechanistic knowledge into effective therapeutics. A critical failure point in the current research environment is a throughput bottleneck at the level of evaluating hypotheses of mechanistic causality; these hypotheses represent the key step toward the application of knowledge for therapy development and design. Addressing the translational dilemma will require utilizing the ever-increasing power of computers and computational modeling to increase the efficiency of the scientific method in the identification and evaluation of hypotheses of mechanistic causality. More specifically, development needs to focus on facilitating the ability of non-computer trained biomedical researchers to utilize and instantiate their knowledge in dynamic computational models. This is termed "dynamic knowledge representation." Agent-based modeling is an object-oriented, discrete-event, rule-based simulation method that is well suited for biomedical dynamic knowledge representation. Agent-based modeling has been used in the study of inflammation at multiple scales. The ability of agent-based modeling to encompass multiple scales of biological processes as well as spatial considerations, coupled with an intuitive modeling paradigm, suggests that this modeling framework is well suited for addressing the translational dilemma.
This review describes agent-based modeling, gives examples of its applications in the study of inflammation, and introduces a proposed general expansion of the use of modeling and simulation to augment the generation and evaluation of knowledge by the biomedical research community at large.
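    As a deliberately toy illustration of the object-oriented, rule-based, discrete-event style described above, the following sketch pits replicating "pathogen" agents against "immune-cell" agents with one behavioral rule each; all classes, rates, and counts are invented for illustration and bear no relation to any published inflammation model:

```python
import random

random.seed(42)  # deterministic run for reproducibility

class Pathogen:
    """A minimal agent with no internal state."""

class Macrophage:
    """An immune-cell agent with a single behavioral rule."""
    def act(self, pathogens):
        # Rule: each tick, clear one pathogen if any remain.
        if pathogens:
            pathogens.pop()

def step(pathogens, macrophages, growth=0.3):
    # Discrete-event tick: each pathogen replicates with probability
    # `growth`, then every macrophage gets to act.
    for _ in range(len(pathogens)):
        if random.random() < growth:
            pathogens.append(Pathogen())
    for m in macrophages:
        m.act(pathogens)

pathogens = [Pathogen() for _ in range(20)]
macrophages = [Macrophage() for _ in range(10)]
for tick in range(20):
    step(pathogens, macrophages)
print("pathogens remaining:", len(pathogens))
```

    The point of dynamic knowledge representation is that each rule above is a directly readable hypothesis of mechanistic causality that a domain expert can inspect and modify.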

  15. Evaluation of liquefaction potential of soil based on standard penetration test using multi-gene genetic programming model

    NASA Astrophysics Data System (ADS)

    Muduli, Pradyut; Das, Sarat

    2014-06-01

    This paper discusses the evaluation of liquefaction potential of soil based on a standard penetration test (SPT) dataset using an evolutionary artificial intelligence technique, multi-gene genetic programming (MGGP). The liquefaction classification accuracy (94.19%) of the developed liquefaction index (LI) model is found to be better than that of the available artificial neural network (ANN) model (88.37%) and at par with the available support vector machine (SVM) model (94.19%) on the basis of the testing data. Further, an empirical equation is presented using MGGP to approximate the unknown limit state function representing the cyclic resistance ratio (CRR) of soil based on the developed LI model. Using an independent database of 227 cases, the overall rates of successful prediction of occurrence of liquefaction and non-liquefaction are found to be 87, 86, and 84% by the developed MGGP-based model, the available ANN model and the statistical model, respectively, on the basis of the calculated factor of safety (Fs) against liquefaction occurrence.
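    The final classification step rests on the factor of safety against liquefaction, Fs = CRR/CSR, with Fs below 1 indicating predicted liquefaction. A minimal sketch with hypothetical CRR/CSR values (the paper's MGGP equation for CRR is not reproduced here):

```python
def factor_of_safety(crr, csr):
    """Factor of safety against liquefaction: Fs = CRR / CSR."""
    return crr / csr

def classify(fs, threshold=1.0):
    """Fs < 1 -> liquefaction predicted; Fs >= 1 -> non-liquefaction."""
    return "liquefaction" if fs < threshold else "non-liquefaction"

# Illustrative cases (hypothetical CRR/CSR values, not from the paper's database)
cases = [(0.22, 0.30), (0.35, 0.25)]
for crr, csr in cases:
    fs = factor_of_safety(crr, csr)
    print(f"CRR={crr}, CSR={csr} -> Fs={fs:.2f}: {classify(fs)}")
```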

  16. Intelligent Physiologic Modeling: An Application of Knowledge Based Systems Technology to Medical Education

    PubMed Central

    Kunstaetter, Robert

    1986-01-01

    This presentation describes the design and implementation of a knowledge based physiologic modeling system (KBPMS) and a preliminary evaluation of its use as a learning resource within the context of an experimental medical curriculum -- the Harvard New Pathway. KBPMS possesses combined numeric and qualitative simulation capabilities and can provide explanations of its knowledge and behaviour. It has been implemented on a microcomputer with a user interface incorporating interactive graphics. The preliminary evaluation of KBPMS is based on anecdotal data which suggests that the system might have pedagogic potential. Much work remains to be done in enhancing and further evaluating KBPMS.

  17. Results from the VALUE perfect predictor experiment: process-based evaluation

    NASA Astrophysics Data System (ADS)

    Maraun, Douglas; Soares, Pedro; Hertig, Elke; Brands, Swen; Huth, Radan; Cardoso, Rita; Kotlarski, Sven; Casado, Maria; Pongracz, Rita; Bartholy, Judit

    2016-04-01

    Until recently, the evaluation of downscaled climate model simulations has typically been limited to surface climatologies, including long term means, spatial variability and extremes. But these aspects are often, at least partly, tuned in regional climate models to match observed climate. The tuning issue is of course particularly relevant for bias corrected regional climate models. In general, a good performance of a model for these aspects in present climate does therefore not imply a good performance in simulating climate change. It is now widely accepted that, to increase our confidence in climate change simulations, it is necessary to evaluate how climate models simulate relevant underlying processes. In other words, it is important to assess whether downscaling does the right thing for the right reason. Therefore, VALUE has carried out a broad process-based evaluation study based on its perfect predictor experiment simulations: the downscaling methods are driven by ERA-Interim data over the period 1979-2008, reference observations are given by a network of 85 meteorological stations covering all European climates. More than 30 methods participated in the evaluation. In order to compare statistical and dynamical methods, only variables provided by both types of approaches could be considered. This limited the analysis to conditioning local surface variables on variables from driving processes that are simulated by ERA-Interim. We considered the following types of processes: at the continental scale, we evaluated the performance of downscaling methods for positive and negative North Atlantic Oscillation, Atlantic ridge and blocking situations. At synoptic scales, we considered Lamb weather types for selected European regions such as Scandinavia, the United Kingdom, the Iberian Peninsula or the Alps. At regional scales we considered phenomena such as the Mistral, the Bora or the Iberian coastal jet. 
Such process-based evaluation helps to attribute biases in surface variables to underlying processes and ultimately to improve climate models.

  18. Clinical evaluation of BrainTree, a motor imagery hybrid BCI speller

    NASA Astrophysics Data System (ADS)

    Perdikis, S.; Leeb, R.; Williamson, J.; Ramsay, A.; Tavella, M.; Desideri, L.; Hoogerwerf, E.-J.; Al-Khodairy, A.; Murray-Smith, R.; Millán, J. d. R.

    2014-06-01

    Objective. While brain-computer interfaces (BCIs) for communication have reached considerable technical maturity, there is still a great need for state-of-the-art evaluation by the end-users outside laboratory environments. To achieve this primary objective, it is necessary to augment a BCI with a series of components that allow end-users to type text effectively. Approach. This work presents the clinical evaluation of a motor imagery (MI) BCI text-speller, called BrainTree, by six severely disabled end-users and ten able-bodied users. Additionally, we define a generic model of code-based BCI applications, which serves as an analytical tool for evaluation and design. Main results. We show that all users achieved remarkable usability and efficiency outcomes in spelling. Furthermore, our model-based analysis highlights the added value of human-computer interaction techniques and hybrid BCI error-handling mechanisms, and reveals the effects of BCI performance on usability and efficiency in code-based applications. Significance. This study demonstrates the usability potential of code-based MI spellers, with BrainTree being the first to be evaluated by a substantial number of end-users, establishing them as a viable, competitive alternative to other popular BCI spellers. Another major outcome of our model-based analysis is the derivation of an 80% minimum command accuracy requirement for successful code-based application control, revising upwards previous estimates attempted in the literature.
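    The intuition behind a minimum command accuracy can be shown with a simplified power-law calculation: in a code-based speller, selecting one symbol requires a sequence of consecutive correct commands, so symbol-level success decays as a power of command accuracy. This sketch ignores the error-handling mechanisms the paper's full analysis accounts for:

```python
def symbol_success_prob(cmd_accuracy: float, depth: int) -> float:
    """Probability of selecting a symbol that requires `depth` consecutive
    correct commands, assuming independent command errors and no
    error-handling (a deliberate simplification of the paper's model)."""
    return cmd_accuracy ** depth

# How quickly symbol-level success erodes with command accuracy:
for acc in (0.70, 0.80, 0.90):
    probs = [round(symbol_success_prob(acc, d), 3) for d in (3, 4, 5)]
    print(f"command accuracy {acc:.0%}: symbol success at depth 3/4/5 = {probs}")
```

    At 70% command accuracy, a depth-4 selection succeeds only about a quarter of the time, which hints at why performance collapses below some accuracy threshold once correction overhead is included.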

  19. Clinical evaluation of BrainTree, a motor imagery hybrid BCI speller.

    PubMed

    Perdikis, S; Leeb, R; Williamson, J; Ramsay, A; Tavella, M; Desideri, L; Hoogerwerf, E-J; Al-Khodairy, A; Murray-Smith, R; Millán, J D R

    2014-06-01

    While brain-computer interfaces (BCIs) for communication have reached considerable technical maturity, there is still a great need for state-of-the-art evaluation by the end-users outside laboratory environments. To achieve this primary objective, it is necessary to augment a BCI with a series of components that allow end-users to type text effectively. This work presents the clinical evaluation of a motor imagery (MI) BCI text-speller, called BrainTree, by six severely disabled end-users and ten able-bodied users. Additionally, we define a generic model of code-based BCI applications, which serves as an analytical tool for evaluation and design. We show that all users achieved remarkable usability and efficiency outcomes in spelling. Furthermore, our model-based analysis highlights the added value of human-computer interaction techniques and hybrid BCI error-handling mechanisms, and reveals the effects of BCI performance on usability and efficiency in code-based applications. This study demonstrates the usability potential of code-based MI spellers, with BrainTree being the first to be evaluated by a substantial number of end-users, establishing them as a viable, competitive alternative to other popular BCI spellers. Another major outcome of our model-based analysis is the derivation of an 80% minimum command accuracy requirement for successful code-based application control, revising upwards previous estimates attempted in the literature.

  20. Evaluation of a watershed model for estimating daily flow using limited flow measurements

    USDA-ARS?s Scientific Manuscript database

    The Soil and Water Assessment Tool (SWAT) model was evaluated for estimation of continuous daily flow based on limited flow measurements in the Upper Oyster Creek (UOC) watershed. SWAT was calibrated against limited measured flow data and then validated. The Nash-Sutcliffe model Efficiency (NSE) and...
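    The Nash-Sutcliffe model efficiency mentioned above is defined as NSE = 1 - SSE/SST, where SSE is the sum of squared observed-minus-simulated differences and SST is the total variance of the observations about their mean. A minimal implementation with made-up flow values:

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe model efficiency: NSE = 1 - SSE / SST.
    NSE = 1 means a perfect fit; NSE <= 0 means the model is no better
    than simply predicting the mean of the observations."""
    obs_mean = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    sst = sum((o - obs_mean) ** 2 for o in observed)
    return 1.0 - sse / sst

# Hypothetical daily flows (m^3/s), not data from the UOC watershed study
obs = [2.1, 3.4, 5.0, 4.2, 3.3]
sim = [2.0, 3.6, 4.8, 4.5, 3.1]
print(round(nash_sutcliffe(obs, sim), 3))  # → 0.953
```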

  1. Local Difference Measures between Complex Networks for Dynamical System Model Evaluation

    PubMed Central

    Lange, Stefan; Donges, Jonathan F.; Volkholz, Jan; Kurths, Jürgen

    2015-01-01

    A faithful modeling of real-world dynamical systems necessitates model evaluation. A recent promising methodological approach to this problem has been based on complex networks, which in turn have proven useful for the characterization of dynamical systems. In this context, we introduce three local network difference measures and demonstrate their capabilities in the field of climate modeling, where these measures facilitate a spatially explicit model evaluation. Building on a recent study by Feldhoff et al. [1] we comparatively analyze statistical and dynamical regional climate simulations of the South American monsoon system. Three types of climate networks representing different aspects of rainfall dynamics are constructed from the modeled precipitation space-time series. Specifically, we define simple graphs based on positive as well as negative rank correlations between rainfall anomaly time series at different locations, and such based on spatial synchronizations of extreme rain events. An evaluation against respective networks built from daily satellite data provided by the Tropical Rainfall Measuring Mission 3B42 V7 reveals far greater differences in model performance between network types for a fixed but arbitrary climate model than between climate models for a fixed but arbitrary network type. We identify two sources of uncertainty in this respect. Firstly, climate variability limits fidelity, particularly in the case of the extreme event network; and secondly, larger geographical link lengths render link misplacements more likely, most notably in the case of the anticorrelation network; both contributions are quantified using suitable ensembles of surrogate networks. Our model evaluation approach is applicable to any multidimensional dynamical system and especially our simple graph difference measures are highly versatile as the graphs to be compared may be constructed in whatever way required. 
Generalizations to directed as well as edge- and node-weighted graphs are discussed. PMID:25856374
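    One way to make the idea of a local (per-node) network difference concrete is to compare, for each node, the fraction of its possible links that disagree between two simple graphs on the same node set. This is an illustrative measure, not necessarily one of the paper's three:

```python
import numpy as np

def local_edge_difference(A, B):
    """Per-node fraction of links that differ between two simple graphs
    given as symmetric adjacency matrices A and B on the same node set.
    Illustrative local difference measure, not the paper's exact definition."""
    A = np.asarray(A, dtype=bool)
    B = np.asarray(B, dtype=bool)
    n = A.shape[0]
    # XOR marks edges present in exactly one of the two graphs;
    # divide by (n - 1) possible neighbours per node.
    return (A ^ B).sum(axis=1) / (n - 1)

# Two toy 3-node graphs differing at every node, most strongly at node 2
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]])
B = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
print(local_edge_difference(A, B))  # → [0.5 0.5 1. ]
```

    Because the measure is computed node by node, it supports exactly the kind of spatially explicit model evaluation the abstract describes when nodes correspond to geographical locations.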

  2. Local difference measures between complex networks for dynamical system model evaluation.

    PubMed

    Lange, Stefan; Donges, Jonathan F; Volkholz, Jan; Kurths, Jürgen

    2015-01-01

    A faithful modeling of real-world dynamical systems necessitates model evaluation. A recent promising methodological approach to this problem has been based on complex networks, which in turn have proven useful for the characterization of dynamical systems. In this context, we introduce three local network difference measures and demonstrate their capabilities in the field of climate modeling, where these measures facilitate a spatially explicit model evaluation. Building on a recent study by Feldhoff et al. [8] we comparatively analyze statistical and dynamical regional climate simulations of the South American monsoon system [corrected]. Three types of climate networks representing different aspects of rainfall dynamics are constructed from the modeled precipitation space-time series. Specifically, we define simple graphs based on positive as well as negative rank correlations between rainfall anomaly time series at different locations, and such based on spatial synchronizations of extreme rain events. An evaluation against respective networks built from daily satellite data provided by the Tropical Rainfall Measuring Mission 3B42 V7 reveals far greater differences in model performance between network types for a fixed but arbitrary climate model than between climate models for a fixed but arbitrary network type. We identify two sources of uncertainty in this respect. Firstly, climate variability limits fidelity, particularly in the case of the extreme event network; and secondly, larger geographical link lengths render link misplacements more likely, most notably in the case of the anticorrelation network; both contributions are quantified using suitable ensembles of surrogate networks. Our model evaluation approach is applicable to any multidimensional dynamical system and especially our simple graph difference measures are highly versatile as the graphs to be compared may be constructed in whatever way required. 
Generalizations to directed as well as edge- and node-weighted graphs are discussed.

  3. Study on an Air Quality Evaluation Model for Beijing City Under Haze-Fog Pollution Based on New Ambient Air Quality Standards

    PubMed Central

    Li, Li; Liu, Dong-Jun

    2014-01-01

    Since 2012, China has been facing haze-fog weather conditions, and haze-fog pollution and PM2.5 have become hot topics. It is therefore necessary to evaluate and analyze the ecological status of the air environment of China, which is of great significance for environmental protection measures. In this study the current situation of haze-fog pollution in China was analyzed first, and the new Ambient Air Quality Standards were introduced. For the issue of air quality evaluation, a comprehensive evaluation model based on an entropy weighting method and the nearest neighbor method was developed. The entropy weighting method was used to determine the weights of indicators, and the nearest neighbor method was utilized to evaluate the air quality levels. The comprehensive evaluation model was then applied to the practical evaluation problem of air quality in Beijing to analyze the haze-fog pollution. Two simulation experiments were implemented in this study. One experiment included the indicator of PM2.5 and was carried out based on the new Ambient Air Quality Standards (GB 3095-2012); the other experiment excluded PM2.5 and was carried out based on the old Ambient Air Quality Standards (GB 3095-1996). Their results were compared, and the simulation results showed that PM2.5 was an important indicator for air quality and the evaluation results under the new Air Quality Standards were more scientific than those under the old ones. The haze-fog pollution situation in Beijing City was also analyzed based on these results, and the corresponding management measures were suggested. PMID:25170682
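    The entropy weighting step can be sketched as follows. The pollutant concentrations are hypothetical, and the normalisation choices are one common variant of the method rather than the paper's exact formulation:

```python
import numpy as np

def entropy_weights(X):
    """Entropy weighting: indicators whose values vary more across samples
    carry more information, receive lower entropy, and hence larger weights."""
    X = np.asarray(X, dtype=float)
    P = X / X.sum(axis=0)              # column-normalised proportions
    m = X.shape[0]                     # number of samples
    with np.errstate(divide="ignore", invalid="ignore"):
        logP = np.where(P > 0, np.log(P), 0.0)  # treat 0*log(0) as 0
    E = -(P * logP).sum(axis=0) / np.log(m)     # entropy per indicator, in [0, 1]
    d = 1.0 - E                        # degree of diversification
    return d / d.sum()                 # weights summing to 1

# Hypothetical daily concentrations: rows = days, columns = [PM2.5, SO2, NO2]
X = [[120, 30, 55],
     [ 80, 28, 50],
     [200, 32, 60]]
w = entropy_weights(X)
print(w.round(3), w.sum())
```

    With these toy numbers PM2.5 varies far more across days than SO2 or NO2, so it dominates the weight vector, which is exactly the behavior that makes the method attractive for haze-fog indicators.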

  4. Integrating model behavior, optimization, and sensitivity/uncertainty analysis: overview and application of the MOUSE software toolbox

    USDA-ARS?s Scientific Manuscript database

    This paper provides an overview of the Model Optimization, Uncertainty, and SEnsitivity Analysis (MOUSE) software application, an open-source, Java-based toolbox of visual and numerical analysis components for the evaluation of environmental models. MOUSE is based on the OPTAS model calibration syst...

  5. DeltaSA tool for source apportionment benchmarking, description and sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Pernigotti, D.; Belis, C. A.

    2018-05-01

    DeltaSA is an R-package and a Java on-line tool developed at the EC-Joint Research Centre to assist and benchmark source apportionment applications. Its key functionalities support two critical tasks in this kind of study: the assignment of a factor to a source in factor analytical models (source identification) and the model performance evaluation. The source identification is based on the similarity between a given factor and source chemical profiles from public databases. The model performance evaluation is based on statistical indicators used to compare model output with reference values generated in intercomparison exercises. The reference values are calculated as the ensemble average of the results reported by participants that have passed a set of testing criteria based on chemical profiles and time series similarity. In this study, a sensitivity analysis of the model performance criteria is accomplished using the results of a synthetic dataset where "a priori" references are available. The consensus-modulated standard deviation p_unc provides the best choice for the model performance evaluation when a conservative approach is adopted.
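    Comparing a model output against an ensemble reference value with an associated uncertainty is commonly done with a standard-score style indicator; the sketch below uses generic names and a conventional |z| <= 2 limit, not DeltaSA's exact criteria:

```python
def z_score(model_value, reference, u_ref):
    """Standard-score style indicator: how many reference uncertainties
    the model output lies from the reference value."""
    return (model_value - reference) / u_ref

def passes(model_value, reference, u_ref, limit=2.0):
    """Generic acceptance criterion |z| <= limit; limit=2 is one
    conventional choice, not necessarily the tool's."""
    return abs(z_score(model_value, reference, u_ref)) <= limit

# Hypothetical source contributions (ug/m^3) vs. an ensemble reference
print(passes(10.8, 10.0, 0.5))  # z = 1.6  -> within tolerance
print(passes(12.0, 10.0, 0.5))  # z = 4.0  -> flagged
```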

  6. Planting Healthy Roots: Using Documentary Film to Evaluate and Disseminate Community-Based Participatory Research.

    PubMed

    Brandt, Heather M; Freedman, Darcy A; Friedman, Daniela B; Choi, Seul Ki; Seel, Jessica S; Guest, M Aaron; Khang, Leepao

    2016-01-01

    Documentary filmmaking approaches incorporating community engagement and awareness raising strategies may be a promising approach to evaluate community-based participatory research. The study purpose was 2-fold: (1) to evaluate a documentary film featuring the formation and implementation of a farmers' market and (2) to assess whether the film affected awareness regarding food access issues in a food-desert community with high rates of obesity. The coalition model of filmmaking, a model consistent with a community-based participatory research (CBPR) approach, and personal stories, community profiles, and expert interviews were used to develop a documentary film (Planting Healthy Roots). The evaluation demonstrated high levels of approval and satisfaction with the film and CBPR essence of the film. The documentary film aligned with a CBPR approach to document, evaluate, and disseminate research processes and outcomes.

  7. Data envelopment analysis in service quality evaluation: an empirical study

    NASA Astrophysics Data System (ADS)

    Najafi, Seyedvahid; Saati, Saber; Tavana, Madjid

    2015-09-01

    Service quality is often conceptualized as the comparison between service expectations and actual performance perceptions. It enhances customer satisfaction, decreases customer defection, and promotes customer loyalty. Substantial literature has examined the concept of service quality, its dimensions, and measurement methods. We introduce the perceived service quality index (PSQI) as a single measure for evaluating the multiple-item service quality construct based on the SERVQUAL model. A slack-based measure (SBM) of efficiency with constant inputs is used to calculate the PSQI. In addition, a non-linear programming model based on the SBM is proposed to delineate an improvement guideline and improve service quality. An empirical study is conducted to assess the applicability of the method proposed in this study. A large number of studies have used DEA as a benchmarking tool to measure service quality. These models do not propose a coherent performance evaluation construct and consequently fail to deliver guidelines for improving service quality. The DEA models proposed in this study are designed to evaluate and improve service quality within a comprehensive framework and without any dependency on external data.

  8. A Cyclic-Plasticity-Based Mechanistic Approach for Fatigue Evaluation of 316 Stainless Steel Under Arbitrary Loading

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barua, Bipul; Mohanty, Subhasish; Listwan, Joseph T.

    In this paper, a cyclic-plasticity-based, fully mechanistic fatigue modeling approach is presented. It is based on the time-dependent stress-strain evolution of the material over its entire fatigue life, rather than only on the end-of-life information typically used in empirical S~N-curve-based fatigue evaluation approaches. Previously, we presented related material models based on constant amplitude fatigue tests for 316 SS base metal, 508 LAS base metal, and 316 SS-316 SS weld, which are used in nuclear reactor components such as pressure vessels, nozzles, and surge line pipes. However, we found that models based on constant amplitude fatigue data have limitations in capturing the stress-strain evolution under arbitrary fatigue loading. To address this limitation, in this paper we present a more advanced approach that can be used to model the cyclic stress-strain evolution and fatigue life not only under constant amplitude but also under arbitrary (random/variable) fatigue loading. The related material model and analytical model results are presented for 316 SS base metal. Two methodologies (based either on time/cycle or on accumulated plastic strain energy) to track the material parameters at a given time/cycle are discussed, and associated analytical model results are presented. From the material model and analytical cyclic plasticity model results, it is found that the proposed cyclic plasticity model can predict all the important stages of material behavior during the entire fatigue life of the specimens with more than 90% accuracy.
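    The accumulated-plastic-strain-energy bookkeeping mentioned as the second methodology can be illustrated by computing the area enclosed by a closed stress-strain hysteresis loop; the loop geometry below is an idealised rectangle and the shoelace-formula implementation is a sketch, not the paper's actual procedure:

```python
def plastic_strain_energy(stress, strain):
    """Energy dissipated per cycle as the area enclosed by a closed
    stress-strain hysteresis loop, computed with the shoelace formula.
    With stress in MPa and strain dimensionless, the result is in MJ/m^3.
    One plausible way to accumulate the tracking variable; the paper's
    exact bookkeeping may differ."""
    n = len(stress)
    area = 0.0
    for i in range(n):
        j = (i + 1) % n
        area += strain[i] * stress[j] - strain[j] * stress[i]
    return abs(area) / 2.0

# Idealised rectangular loop: +/-200 MPa at +/-0.1 % plastic strain
stress = [200, 200, -200, -200]          # MPa
strain = [-0.001, 0.001, 0.001, -0.001]  # dimensionless
print(plastic_strain_energy(stress, strain))  # loop area, MJ/m^3
```

    Summing this per-cycle energy over the loading history gives a monotone progress variable against which the evolving material parameters can be tabulated, regardless of whether the loading is constant amplitude or random.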

  9. A Cyclic-Plasticity-Based Mechanistic Approach for Fatigue Evaluation of 316 Stainless Steel Under Arbitrary Loading

    DOE PAGES

    Barua, Bipul; Mohanty, Subhasish; Listwan, Joseph T.; ...

    2017-12-05

    In this paper, a cyclic-plasticity-based, fully mechanistic fatigue modeling approach is presented. It is based on the time-dependent stress-strain evolution of the material over its entire fatigue life, rather than only on the end-of-life information typically used in empirical S~N-curve-based fatigue evaluation approaches. Previously, we presented related material models based on constant amplitude fatigue tests for 316 SS base metal, 508 LAS base metal, and 316 SS-316 SS weld, which are used in nuclear reactor components such as pressure vessels, nozzles, and surge line pipes. However, we found that models based on constant amplitude fatigue data have limitations in capturing the stress-strain evolution under arbitrary fatigue loading. To address this limitation, in this paper we present a more advanced approach that can be used to model the cyclic stress-strain evolution and fatigue life not only under constant amplitude but also under arbitrary (random/variable) fatigue loading. The related material model and analytical model results are presented for 316 SS base metal. Two methodologies (based either on time/cycle or on accumulated plastic strain energy) to track the material parameters at a given time/cycle are discussed, and associated analytical model results are presented. From the material model and analytical cyclic plasticity model results, it is found that the proposed cyclic plasticity model can predict all the important stages of material behavior during the entire fatigue life of the specimens with more than 90% accuracy.

  10. ThinTool: a spreadsheet model to evaluate fuel reduction thinning cost, net energy output, and nutrient impacts

    Treesearch

    Sang-Kyun Han; Han-Sup Han; William J. Elliot; Edward M. Bilek

    2017-01-01

    We developed a spreadsheet-based model, named ThinTool, to evaluate the cost of mechanical fuel reduction thinning including biomass removal, to predict net energy output, and to assess nutrient impacts from thinning treatments in northern California and southern Oregon. A combination of literature reviews, field-based studies, and contractor surveys was used to...

  11. Effects of streamflow diversion on a fish population: combining empirical data and individual-based models in a site-specific evaluation

    Treesearch

    Bret C. Harvey; Jason L. White; Rodney J. Nakamoto; Steven F. Railsback

    2014-01-01

    Resource managers commonly face the need to evaluate the ecological consequences of specific water diversions of small streams. We addressed this need by conducting 4 years of biophysical monitoring of stream reaches above and below a diversion and applying two individual-based models of salmonid fish that simulated different levels of behavioral complexity. The...

  12. Performance Evaluation of Bucket based Excavating, Loading and Transport (BELT) Equipment - An OEE Approach

    NASA Astrophysics Data System (ADS)

    Mohammadi, Mousa; Rai, Piyush; Gupta, Suprakash

    2017-03-01

    Overall Equipment Effectiveness (OEE) has been used for over two decades as a measure of performance in manufacturing industries. Unfortunately, the application of OEE in the mining and excavation industry has not been widely adopted. In this paper an effort has been made to identify the OEE for performance evaluation of Bucket based Excavating, Loading and Transport (BELT) equipment. The conceptual model of OEE, as used in the manufacturing industries, has been revised to adapt it to BELT equipment. The revised and adapted model considers the operational time, speed and bucket capacity utilization losses as the key OEE components for evaluating the performance of BELT equipment. To illustrate the efficacy of the devised model on a real-time basis, a case study was undertaken on the biggest single-bucket excavating equipment, the dragline, in a large surface coal mine. One year of data was collected in order to evaluate the proposed OEE model.
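    The revised OEE structure multiplies the three utilisation ratios, mirroring the availability, performance and quality product used in manufacturing. A sketch with hypothetical shift figures (not data from the dragline case study):

```python
def belt_oee(operational_time_ratio, speed_ratio, bucket_fill_ratio):
    """OEE adapted to bucket-based excavating/loading/transport equipment:
    product of operational-time, speed and bucket-capacity utilisation,
    mirroring the availability x performance x quality structure."""
    return operational_time_ratio * speed_ratio * bucket_fill_ratio

# Hypothetical dragline shift: 85 % operational time, 90 % of rated
# cycle speed, 92 % average bucket fill
oee = belt_oee(0.85, 0.90, 0.92)
print(f"OEE = {oee:.1%}")
```

    Because the three losses multiply, an equipment unit that looks acceptable on each component separately can still score an OEE around 70 %, which is the diagnostic value of the aggregate measure.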

  13. Summary report on the evaluation of a 1977--1985 edited sorption data base for isotherm modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Polzer, W.L.; Beckman, R.J.; Fuentes, H.R.

    1993-09-01

    Sorption databases collected by Los Alamos National Laboratory (LANL) from 1977 to 1985 for the Yucca Mountain Project (YMP) have been inventoried and fitted with isotherm expressions. Effects of variables (e.g., particle size) on the isotherm were also evaluated. The sorption data are from laboratory batch measurements which were not designed specifically for isotherm modeling; however, a limited number of data sets permitted such modeling. The analysis of those isotherm data can aid in the design of future sorption experiments and can provide expressions to be used in radionuclide transport modeling. Over 1200 experimental observations were inventoried for their adequacy to be modeled by isotherms and to evaluate the effects of variables on isotherms. About 15% of the observations provided suitable data sets for modeling. The data sets were obtained under conditions that include ambient temperature and two atmospheres, air and CO2.
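    Fitting batch sorption data with an isotherm expression can be sketched with a Freundlich fit via log-log linear regression; the data below are synthetic and the choice of isotherm is illustrative (the report's actual expressions may differ):

```python
import numpy as np

def fit_freundlich(c_eq, q_sorbed):
    """Fit the Freundlich isotherm q = K * c^(1/n) by linear regression on
    log q = log K + (1/n) log c, a common way to fit batch sorption data."""
    slope, intercept = np.polyfit(np.log(c_eq), np.log(q_sorbed), 1)
    return np.exp(intercept), 1.0 / slope  # K, n

# Synthetic batch data generated from K = 2.0, n = 2.5 (i.e. q = 2 * c^0.4)
c = np.array([0.5, 1.0, 2.0, 5.0, 10.0])  # equilibrium concentrations
q = 2.0 * c ** 0.4                        # sorbed amounts
K, n = fit_freundlich(c, q)
print(round(K, 3), round(n, 3))  # → 2.0 2.5
```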

  14. A human factors systems approach to understanding team-based primary care: a qualitative analysis.

    PubMed

    Mundt, Marlon P; Swedlund, Matthew P

    2016-12-01

    Research shows that high-functioning teams improve patient outcomes in primary care. However, there is no consensus on a conceptual model of team-based primary care that can be used to guide measurement and performance evaluation of teams. To qualitatively understand whether the Systems Engineering Initiative for Patient Safety (SEIPS) model could serve as a framework for creating and evaluating team-based primary care. We evaluated qualitative interview data from 19 clinicians and staff members from 6 primary care clinics associated with a large Midwestern university. All health care clinicians and staff in the study clinics completed a survey of their communication connections to team members. Social network analysis identified key informants for interviews by selecting the respondents with the highest frequency of communication ties as reported by their teammates. Semi-structured interviews focused on communication patterns, team climate and teamwork. Themes derived from the interviews lent support to the SEIPS model components, such as the work system (Team, Tools and Technology, Physical Environment, Tasks and Organization), team processes and team outcomes. Our qualitative data support the SEIPS model as a promising conceptual framework for creating and evaluating primary care teams. Future studies of team-based care may benefit from using the SEIPS model to shift clinical practice to high functioning team-based primary care. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  15. A Model-Based Approach to Support Validation of Medical Cyber-Physical Systems.

    PubMed

    Silva, Lenardo C; Almeida, Hyggo O; Perkusich, Angelo; Perkusich, Mirko

    2015-10-30

    Medical Cyber-Physical Systems (MCPS) are context-aware, life-critical systems with patient safety as the main concern, demanding rigorous processes for validation to guarantee user requirement compliance and specification-oriented correctness. In this article, we propose a model-based approach for early validation of MCPS, focusing on promoting reusability and productivity. It enables system developers to build MCPS formal models based on a library of patient and medical device models, and simulate the MCPS to identify undesirable behaviors at design time. Our approach has been applied to three different clinical scenarios to evaluate its reusability potential for different contexts. We have also validated our approach through an empirical evaluation with developers to assess productivity and reusability. Finally, our models have been formally verified considering functional and safety requirements and model coverage.
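    Identifying undesirable behaviors at design time by exhaustively exercising a small formal model can be illustrated with a toy pump-and-patient model; every detail below (the state variable, the safety threshold, the schedule length) is invented for illustration and is far simpler than the formal models the article verifies:

```python
from itertools import product

# Toy model: an infusion pump is ON (1) or OFF (0) each tick; the
# patient's drug level rises while ON and decays while OFF. We simulate
# every short pump schedule and flag runs that reach an unsafe level --
# a miniature of "identify undesirable behaviors at design time".
UNSAFE_LEVEL = 3

def run_is_safe(schedule):
    level = 0
    for pump_on in schedule:
        level = level + 1 if pump_on else max(level - 1, 0)
        if level >= UNSAFE_LEVEL:
            return False  # undesirable behavior found in this run
    return True

unsafe = [s for s in product([0, 1], repeat=4) if not run_is_safe(s)]
print(len(unsafe), "of", 2 ** 4, "schedules reach the unsafe level")
```

    Exhaustive enumeration is only feasible for tiny state spaces like this one; real MCPS validation relies on formal verification tools, but the design-time intent is the same.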

  16. A Model-Based Approach to Support Validation of Medical Cyber-Physical Systems

    PubMed Central

    Silva, Lenardo C.; Almeida, Hyggo O.; Perkusich, Angelo; Perkusich, Mirko

    2015-01-01

    Medical Cyber-Physical Systems (MCPS) are context-aware, life-critical systems with patient safety as the main concern, demanding rigorous processes for validation to guarantee user requirement compliance and specification-oriented correctness. In this article, we propose a model-based approach for early validation of MCPS, focusing on promoting reusability and productivity. It enables system developers to build MCPS formal models based on a library of patient and medical device models, and simulate the MCPS to identify undesirable behaviors at design time. Our approach has been applied to three different clinical scenarios to evaluate its reusability potential for different contexts. We have also validated our approach through an empirical evaluation with developers to assess productivity and reusability. Finally, our models have been formally verified considering functional and safety requirements and model coverage. PMID:26528982

  17. A systematic review of economic evaluations of population-based sodium reduction interventions

    PubMed Central

    Hope, Silvia F.; Webster, Jacqui; Trieu, Kathy; Pillay, Arti; Ieremia, Merina; Bell, Colin; Snowdon, Wendy; Neal, Bruce; Moodie, Marj

    2017-01-01

    Objective To summarise evidence describing the cost-effectiveness of population-based interventions targeting sodium reduction. Methods A systematic search of published and grey literature databases and websites was conducted using specified key words. Characteristics of identified economic evaluations were recorded, and included studies were appraised for reporting quality using the Consolidated Health Economic Evaluation Reporting Standards (CHEERS) checklist. Results Twenty studies met the study inclusion criteria and received a full paper review. Fourteen studies were identified as full economic evaluations in that they included both costs and benefits associated with an intervention measured against a comparator. Most studies were modelling exercises based on scenarios for achieving salt reduction and assumed effects on health outcomes. All 14 studies concluded that their specified intervention(s) targeting reductions in population sodium consumption were cost-effective, and in the majority of cases, were cost saving. Just over half the studies (8/14) were assessed as being of ‘excellent’ reporting quality, five studies fell into the ‘very good’ quality category and one into the ‘good’ category. All of the identified evaluations were based on modelling, whereby inputs for all the key parameters including the effect size were either drawn from published datasets, existing literature or based on expert advice. Conclusion Despite a clear increase in evaluations of salt reduction programs in recent years, this review identified relatively few economic evaluations of population salt reduction interventions. None of the studies were based on actual implementation of intervention(s) and the associated collection of new empirical data. The studies universally showed that population-based salt reduction strategies are likely to be cost effective or cost saving. 
However, given the reliance on modelling, there is a need for the effectiveness of new interventions to be evaluated in the field using strong study designs and parallel economic evaluations. PMID:28355231

  18. Wind Energy Conversion System Analysis Model (WECSAM) computer program documentation

    NASA Astrophysics Data System (ADS)

    Downey, W. T.; Hendrick, P. L.

    1982-07-01

    Described is a computer-based wind energy conversion system analysis model (WECSAM) developed to predict the technical and economic performance of wind energy conversion systems (WECS). The model is written in CDC FORTRAN V. The version described accesses a data base containing wind resource data, application loads, WECS performance characteristics, utility rates, state taxes, and state subsidies for a six state region (Minnesota, Michigan, Wisconsin, Illinois, Ohio, and Indiana). The model is designed for analysis at the county level. The computer model includes a technical performance module and an economic evaluation module. The modules can be run separately or together. The model can be run for any single user-selected county within the region or looped automatically through all counties within the region. In addition, the model has a restart capability that allows the user to modify any data-base value written to a scratch file prior to the technical or economic evaluation.

  19. Critical-Inquiry-Based-Learning: Model of Learning to Promote Critical Thinking Ability of Pre-service Teachers

    NASA Astrophysics Data System (ADS)

    Prayogi, S.; Yuanita, L.; Wasis

    2018-01-01

    This study aimed to develop the Critical-Inquiry-Based-Learning (CIBL) model to promote the critical thinking (CT) ability of preservice teachers. The CIBL model was developed to meet the criteria of validity, practicality, and effectiveness. Validation of the model involved four expert validators through a focus group discussion (FGD) mechanism. The CIBL model was declared valid for promoting CT ability, with a validity level (Va) of 4.20 and a reliability (r) of 90.1% (very reliable). The practicality of the model was evaluated during an implementation involving 17 preservice teachers; the model was declared practical, as measured by learning feasibility (LF) with very good criteria (LF score = 4.75). The effectiveness of the model was evaluated from the improvement in CT ability after implementation. CT ability was measured using a scoring technique adapted from the Ennis-Weir Critical Thinking Essay Test. The average CT score was -1.53 on the pretest (uncritical criteria) and 8.76 on the posttest (critical criteria), with an N-gain score of 0.76 (high criteria). Based on these results, it can be concluded that the developed CIBL model is feasible for promoting the CT ability of preservice teachers.
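
    The N-gain reported above is, presumably, Hake's normalized gain: the fraction of the achievable improvement actually realized. A minimal sketch (the function name and the maximum-score parameter are assumptions; the abstract does not state the scale used for the Ennis-Weir test):

```python
def normalized_gain(pre, post, max_score):
    """Hake's normalized gain <g> = (post - pre) / (max_score - pre):
    the fraction of the possible improvement actually achieved.
    Conventionally, g >= 0.7 is 'high' and 0.3 <= g < 0.7 'medium'."""
    return (post - pre) / (max_score - pre)
```

    For example, `normalized_gain(2.0, 8.0, 10.0)` returns 0.75, a 'high' gain under the usual thresholds.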

  20. The performance evaluation model of mining project founded on the weight optimization entropy value method

    NASA Astrophysics Data System (ADS)

    Mao, Chao; Chen, Shou

    2017-01-01

    Because the traditional entropy value method yields low accuracy when evaluating the performance of mining projects, a performance evaluation model for mining projects founded on an improved entropy value method is proposed. First, a new weight-assignment model is established, combining compatibility-matrix analysis from the analytic hierarchy process (AHP) with the entropy value method: once the compatibility-matrix analysis meets the consistency requirement, any difference between the subjective and objective weights is resolved by moderately adjusting their proportions. On this basis, a fuzzy evaluation matrix is then constructed for performance evaluation. Simulation experiments show that, compared with the traditional entropy value method and compatibility-matrix analysis alone, the proposed model achieves higher evaluation accuracy.
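
    As a sketch of the weighting scheme this abstract describes, here is a minimal illustration of the standard entropy value method with a simple linear blending of subjective (AHP) and objective (entropy) weights. The function names and the blending step are assumptions for illustration, not the authors' algorithm:

```python
import numpy as np

def entropy_weights(X):
    """Objective criterion weights via the entropy value method.
    X is an (alternatives x criteria) matrix of non-negative scores;
    criteria whose values vary more across alternatives (lower
    entropy) receive larger weights."""
    X = np.asarray(X, dtype=float)
    m, _ = X.shape
    P = X / X.sum(axis=0)                      # column-wise distributions
    logP = np.where(P > 0, np.log(np.where(P > 0, P, 1.0)), 0.0)
    e = -(P * logP).sum(axis=0) / np.log(m)    # entropy scaled to [0, 1]
    d = 1.0 - e                                # degree of diversification
    return d / d.sum()

def combine_weights(subjective, objective, alpha=0.5):
    """Blend subjective (AHP) and objective (entropy) weights; alpha
    plays the role of the 'moderately adjusted' proportions."""
    w = alpha * np.asarray(subjective) + (1.0 - alpha) * np.asarray(objective)
    return w / w.sum()
```

    A criterion that is constant across all alternatives has maximum entropy and therefore receives zero objective weight.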

  1. Sensor fusion display evaluation using information integration models in enhanced/synthetic vision applications

    NASA Technical Reports Server (NTRS)

    Foyle, David C.

    1993-01-01

    Based on existing integration models in the psychological literature, an evaluation framework is developed to assess sensor fusion displays as might be implemented in an enhanced/synthetic vision system. The framework for evaluating the operator's ability to use such systems is normative: the pilot's performance with the sensor fusion image is compared to model predictions based on the pilot's performance when viewing the original component sensor images prior to fusion. This makes it possible to determine whether a sensor fusion system leads to: poorer performance than one of the original sensor displays (clearly an undesirable system, in which the fused sensor system causes some distortion or interference); better performance than with either single-sensor system alone, but at a sub-optimal level compared to model predictions; optimal performance compared to model predictions; or super-optimal performance, which may occur if the operator is able to use some highly diagnostic 'emergent features' in the sensor fusion display that were unavailable in the original sensor displays.

  2. A simulation-based approach for evaluating logging residue handling systems.

    Treesearch

    B. Bruce Bare; Benjamin A. Jayne; Brian F. Anholt

    1976-01-01

    Describes a computer simulation model for evaluating logging residue handling systems. The flow of resources is traced through a prespecified combination of operations including yarding, chipping, sorting, loading, transporting, and unloading. The model was used to evaluate the feasibility of converting logging residues to chips that could be used, for example, to...

  3. Growth Models and Teacher Evaluation: What Teachers Need to Know and Do

    ERIC Educational Resources Information Center

    Katz, Daniel S.

    2016-01-01

    Including growth models based on student test scores in teacher evaluations effectively holds teachers individually accountable for students improving their test scores. While an attractive policy for state administrators and advocates of education reform, value-added measures have been fraught with problems, and their use in teacher evaluation is…

  4. Using the Many-Faceted Rasch Model to Evaluate Standard Setting Judgments: An Illustration with the Advanced Placement Environmental Science Exam

    ERIC Educational Resources Information Center

    Kaliski, Pamela K.; Wind, Stefanie A.; Engelhard, George, Jr.; Morgan, Deanna L.; Plake, Barbara S.; Reshetar, Rosemary A.

    2013-01-01

    The many-faceted Rasch (MFR) model has been used to evaluate the quality of ratings on constructed response assessments; however, it can also be used to evaluate the quality of judgments from panel-based standard setting procedures. The current study illustrates the use of the MFR model for examining the quality of ratings obtained from a standard…

  5. Software Quality Evaluation Models Applicable in Health Information and Communications Technologies. A Review of the Literature.

    PubMed

    Villamor Ordozgoiti, Alberto; Delgado Hito, Pilar; Guix Comellas, Eva María; Fernandez Sanchez, Carlos Manuel; Garcia Hernandez, Milagros; Lluch Canut, Teresa

    2016-01-01

    The use of Information and Communications Technologies (ICT) in healthcare has increased the need to consider quality criteria through standardised processes. The aim of this study was to analyse the software quality evaluation models applicable to healthcare from the perspective of ICT purchasers. Through a systematic literature review with the keywords software, product, quality, evaluation and health, we selected and analysed 20 original research papers published from 2005 to 2016 in health science and technology databases. The results showed four main topics: non-ISO models, software quality evaluation models based on ISO/IEC standards, studies analysing software quality evaluation models, and studies analysing ISO standards for software quality evaluation. The models provide cost-efficiency criteria for specific software and improve use outcomes. The ISO/IEC 25000 standard is shown to be the most suitable for evaluating the quality of ICTs for healthcare use from the perspective of institutional acquisition.

  6. Quantitative methods to direct exploration based on hydrogeologic information

    USGS Publications Warehouse

    Graettinger, A.J.; Lee, J.; Reeves, H.W.; Dethan, D.

    2006-01-01

    Quantitatively Directed Exploration (QDE) approaches based on information such as model sensitivity, input data covariance and model output covariance are presented. Seven approaches for directing exploration are developed, applied, and evaluated on a synthetic hydrogeologic site. The QDE approaches evaluate input information uncertainty, subsurface model sensitivity and, most importantly, output covariance to identify the next location to sample. Spatial input parameter values and covariances are calculated with the multivariate conditional probability calculation from a limited number of samples. A variogram structure is used during data extrapolation to describe the spatial continuity, or correlation, of subsurface information. Model sensitivity can be determined by perturbing input data and evaluating output response or, as in this work, sensitivities can be programmed directly into an analysis model. Output covariance is calculated by the First-Order Second Moment (FOSM) method, which combines the covariance of input information with model sensitivity. A groundwater flow example, modeled in MODFLOW-2000, is chosen to demonstrate the seven QDE approaches. MODFLOW-2000 is used to obtain the piezometric head and the model sensitivity simultaneously. The seven QDE approaches are evaluated based on the accuracy of the modeled piezometric head after information from a QDE sample is added. For the synthetic site used in this study, the QDE approach that identifies the location of hydraulic conductivity that contributes the most to the overall piezometric head variance proved to be the best method to quantitatively direct exploration. ?? IWA Publishing 2006.
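
    The FOSM step described above combines the sensitivity matrix J with the input covariance C_in as C_out = J C_in J^T. A minimal numeric sketch (the sensitivity and covariance values are illustrative only, not taken from the study):

```python
import numpy as np

def fosm_output_covariance(J, C_in):
    """First-Order Second Moment (FOSM) propagation: approximate the
    output covariance from model sensitivities and input covariance."""
    J = np.atleast_2d(np.asarray(J, dtype=float))
    return J @ np.asarray(C_in, dtype=float) @ J.T

# Hypothetical example: piezometric-head sensitivities at two locations
# with respect to two hydraulic-conductivity parameters.
J = np.array([[0.8, 0.1],
              [0.3, 0.6]])
C_in = np.diag([0.04, 0.09])        # input parameter variances
C_out = fosm_output_covariance(J, C_in)
head_variance = np.diag(C_out)      # per-location head variance
```

    Under the variance-based QDE criteria, such per-output variances (and the parameter contributions to them) are the quantities that guide where to sample next.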

  7. Development of the IMB Model and an Evidence-Based Diabetes Self-management Mobile Application.

    PubMed

    Jeon, Eunjoo; Park, Hyeoun-Ae

    2018-04-01

    This study developed a diabetes self-management mobile application based on the information-motivation-behavioral skills (IMB) model, evidence extracted from clinical practice guidelines, and requirements identified through focus group interviews (FGIs) with diabetes patients. We developed the diabetes self-management (DSM) app in accordance with the four stages of the system development life cycle. The functional and knowledge requirements of the users were extracted through FGIs with 19 diabetes patients. A system diagram, data models, a database, an algorithm, screens, and menus were designed. An Android app and a server with an SSL protocol were developed. The DSM app algorithm and heuristics, as well as the usability of the DSM app, were evaluated, and the DSM app was then modified based on the heuristic and usability evaluations. A total of 11 requirement themes were identified through the FGIs. Sixteen functions and 49 knowledge rules were extracted. The system diagram consisted of a client part and a server part; 78 data models, a database with 10 tables, an algorithm, a menu structure with 6 main menus, and 40 user screens were developed. The DSM app required Android version 4.4 or higher for Bluetooth connectivity. The proficiency and efficiency scores of the algorithm were 90.96% and 92.39%, respectively. Fifteen issues were revealed through the heuristic evaluation, and the app was modified to address three of them; it was also modified to address five comments received by the researchers through the usability evaluation. The DSM app was developed based on behavioral change theory through the IMB model. It was designed to be evidence-based, user-centered, and effective. It remains necessary to fully evaluate the effect of the DSM app on the DSM behavior changes of diabetes patients.

  8. Development of the IMB Model and an Evidence-Based Diabetes Self-management Mobile Application

    PubMed Central

    Jeon, Eunjoo

    2018-01-01

    Objectives This study developed a diabetes self-management mobile application based on the information-motivation-behavioral skills (IMB) model, evidence extracted from clinical practice guidelines, and requirements identified through focus group interviews (FGIs) with diabetes patients. Methods We developed the diabetes self-management (DSM) app in accordance with the four stages of the system development life cycle. The functional and knowledge requirements of the users were extracted through FGIs with 19 diabetes patients. A system diagram, data models, a database, an algorithm, screens, and menus were designed. An Android app and a server with an SSL protocol were developed. The DSM app algorithm and heuristics, as well as the usability of the DSM app, were evaluated, and the DSM app was then modified based on the heuristic and usability evaluations. Results A total of 11 requirement themes were identified through the FGIs. Sixteen functions and 49 knowledge rules were extracted. The system diagram consisted of a client part and a server part; 78 data models, a database with 10 tables, an algorithm, a menu structure with 6 main menus, and 40 user screens were developed. The DSM app required Android version 4.4 or higher for Bluetooth connectivity. The proficiency and efficiency scores of the algorithm were 90.96% and 92.39%, respectively. Fifteen issues were revealed through the heuristic evaluation, and the app was modified to address three of them; it was also modified to address five comments received by the researchers through the usability evaluation. Conclusions The DSM app was developed based on behavioral change theory through the IMB model. It was designed to be evidence-based, user-centered, and effective. It remains necessary to fully evaluate the effect of the DSM app on the DSM behavior changes of diabetes patients. PMID:29770246

  9. Efficient physics-based tracking of heart surface motion for beating heart surgery robotic systems.

    PubMed

    Bogatyrenko, Evgeniya; Pompey, Pascal; Hanebeck, Uwe D

    2011-05-01

    Tracking of beating heart motion in a robotic surgery system is required for complex cardiovascular interventions. A heart surface motion tracking method is developed, including a stochastic physics-based heart surface model and an efficient reconstruction algorithm. The algorithm uses the constraints provided by the model that exploits the physical characteristics of the heart. The main advantage of the model is that it is more realistic than most standard heart models. Additionally, no explicit matching between the measurements and the model is required. The application of meshless methods significantly reduces the complexity of physics-based tracking. Based on the stochastic physical model of the heart surface, this approach considers the motion of the intervention area and is robust to occlusions and reflections. The tracking algorithm is evaluated in simulations and experiments on an artificial heart. Providing higher accuracy than the standard model-based methods, it successfully copes with occlusions and provides high performance even when all measurements are not available. Combining the physical and stochastic description of the heart surface motion ensures physically correct and accurate prediction. Automatic initialization of the physics-based cardiac motion tracking enables system evaluation in a clinical environment.

  10. Modeling IoT-Based Solutions Using Human-Centric Wireless Sensor Networks

    PubMed Central

    Monares, Álvaro; Ochoa, Sergio F.; Santos, Rodrigo; Orozco, Javier; Meseguer, Roc

    2014-01-01

    The Internet of Things (IoT) has inspired solutions that are already available for addressing problems in various application scenarios, such as healthcare, security, emergency support and tourism. However, there is no clear approach to modeling these systems and envisioning their capabilities at the design time. Therefore, the process of designing these systems is ad hoc and its real impact is evaluated once the solution is already implemented, which is risky and expensive. This paper proposes a modeling approach that uses human-centric wireless sensor networks to specify and evaluate models of IoT-based systems at the time of design, avoiding the need to spend time and effort on early implementations of immature designs. It allows designers to focus on the system design, leaving the implementation decisions for a next phase. The article illustrates the usefulness of this proposal through a running example, showing the design of an IoT-based solution to support the first responses during medium-sized or large urban incidents. The case study used in the proposal evaluation is based on a real train crash. The proposed modeling approach can be used to design IoT-based systems for other application scenarios, e.g., to support security operatives or monitor chronic patients in their homes. PMID:25157549

  11. Modeling IoT-based solutions using human-centric wireless sensor networks.

    PubMed

    Monares, Álvaro; Ochoa, Sergio F; Santos, Rodrigo; Orozco, Javier; Meseguer, Roc

    2014-08-25

    The Internet of Things (IoT) has inspired solutions that are already available for addressing problems in various application scenarios, such as healthcare, security, emergency support and tourism. However, there is no clear approach to modeling these systems and envisioning their capabilities at the design time. Therefore, the process of designing these systems is ad hoc and its real impact is evaluated once the solution is already implemented, which is risky and expensive. This paper proposes a modeling approach that uses human-centric wireless sensor networks to specify and evaluate models of IoT-based systems at the time of design, avoiding the need to spend time and effort on early implementations of immature designs. It allows designers to focus on the system design, leaving the implementation decisions for a next phase. The article illustrates the usefulness of this proposal through a running example, showing the design of an IoT-based solution to support the first responses during medium-sized or large urban incidents. The case study used in the proposal evaluation is based on a real train crash. The proposed modeling approach can be used to design IoT-based systems for other application scenarios, e.g., to support security operatives or monitor chronic patients in their homes.

  12. Cyberpsychology: a human-interaction perspective based on cognitive modeling.

    PubMed

    Emond, Bruno; West, Robert L

    2003-10-01

    This paper argues for the relevance of cognitive modeling and cognitive architectures to cyberpsychology. From a human-computer interaction point of view, cognitive modeling can have benefits both for theory and model building, and for the design and evaluation of sociotechnical systems usability. Cognitive modeling research applied to human-computer interaction has two complementary objectives: (1) to develop theories and computational models of human interactive behavior with information and collaborative technologies, and (2) to use the computational models as building blocks for the design, implementation, and evaluation of interactive technologies. From the perspective of building theories and models, cognitive modeling offers the possibility to anchor cyberpsychology theories and models in cognitive architectures. From the perspective of the design and evaluation of sociotechnical systems, cognitive models can provide the basis for simulated users, which can play an important role in usability testing. As an example of the application of cognitive modeling to technology design, the paper presents a simulation of interactive behavior with five different adaptive menu algorithms: random, fixed, stacked, frequency-based, and activation-based. Results of the simulation indicate that fixed menu positions seem to offer the best support for classification-like tasks such as filing e-mails. This research is part of the Human-Computer Interaction and Broadband Visual Communication research programs at the National Research Council of Canada, in collaboration with the Carleton Cognitive Modeling Lab at Carleton University.

  13. Developing a good practice model to evaluate the effectiveness of comprehensive primary health care in local communities

    PubMed Central

    2014-01-01

    Background This paper describes the development of a model of Comprehensive Primary Health Care (CPHC) applicable to the Australian context. CPHC holds promise as an effective model of health system organization able to improve population health and increase health equity. However, there is little literature that describes and evaluates CPHC as a whole, with most evaluation focusing on specific programs. The lack of a consensus on what constitutes CPHC, and the complex and context-sensitive nature of CPHC are all barriers to evaluation. Methods The research was undertaken in partnership with six Australian primary health care services: four state government funded and managed services, one sexual health non-government organization, and one Aboriginal community controlled health service. A draft model was crafted combining program logic and theory-based approaches, drawing on relevant literature, 68 interviews with primary health care service staff, and researcher experience. The model was then refined through an iterative process involving two to three workshops at each of the six participating primary health care services, engaging health service staff, regional health executives and central health department staff. Results The resultant Southgate Model of CPHC in Australia model articulates the theory of change of how and why CPHC service components and activities, based on the theory, evidence and values which underpin a CPHC approach, are likely to lead to individual and population health outcomes and increased health equity. The model captures the importance of context, the mechanisms of CPHC, and the space for action services have to work within. The process of development engendered and supported collaborative relationships between researchers and stakeholders and the product provided a description of CPHC as a whole and a framework for evaluation. 
The model was endorsed at a research symposium involving investigators, service staff, and key stakeholders. Conclusions The development of a theory-based program logic model provided a framework for evaluation that allows the tracking of progress towards desired outcomes and exploration of the particular aspects of context and mechanisms that produce outcomes. This is important because there are no existing models which enable the evaluation of CPHC services in their entirety. PMID:24885812

  14. Learning predictive models that use pattern discovery--a bootstrap evaluative approach applied in organ functioning sequences.

    PubMed

    Toma, Tudor; Bosman, Robert-Jan; Siebes, Arno; Peek, Niels; Abu-Hanna, Ameen

    2010-08-01

    An important problem in the Intensive Care is how to predict on a given day of stay the eventual hospital mortality for a specific patient. A recent approach to solve this problem suggested the use of frequent temporal sequences (FTSs) as predictors. Methods following this approach were evaluated in the past by inducing a model from a training set and validating the prognostic performance on an independent test set. Although this evaluative approach addresses the validity of the specific models induced in an experiment, it falls short of evaluating the inductive method itself. To achieve this, one must account for the inherent sources of variation in the experimental design. The main aim of this work is to demonstrate a procedure based on bootstrapping, specifically the .632 bootstrap procedure, for evaluating inductive methods that discover patterns, such as FTSs. A second aim is to apply this approach to find out whether a recently suggested inductive method that discovers FTSs of organ functioning status is superior over a traditional method that does not use temporal sequences when compared on each successive day of stay at the Intensive Care Unit. The use of bootstrapping with logistic regression using pre-specified covariates is known in the statistical literature. Using inductive methods of prognostic models based on temporal sequence discovery within the bootstrap procedure is however novel at least in predictive models in the Intensive Care. Our results of applying the bootstrap-based evaluative procedure demonstrate the superiority of the FTS-based inductive method over the traditional method in terms of discrimination as well as accuracy. In addition we illustrate the insights gained by the analyst into the discovered FTSs from the bootstrap samples. Copyright 2010 Elsevier Inc. All rights reserved.
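
    The .632 bootstrap referred to above blends the optimistic apparent (training) error with the pessimistic out-of-bag error as 0.368 * err_train + 0.632 * err_oob. A minimal sketch of the estimator for a generic inductive method (the fit/predict interface and the 0/1 error loss are assumptions for illustration, not the authors' implementation):

```python
import numpy as np

def bootstrap_632(X, y, fit, predict, n_boot=50, rng=None):
    """The .632 bootstrap estimate of prediction error.
    fit(X, y) induces a model; predict(model, X) returns labels.
    Each resample induces the model on a bootstrap sample and tests
    it on the left-out (out-of-bag) cases."""
    rng = np.random.default_rng(rng)
    n = len(y)
    model = fit(X, y)
    err_train = float(np.mean(predict(model, X) != y))
    oob_errs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)          # sample with replacement
        oob = np.setdiff1d(np.arange(n), idx)     # cases never drawn
        if oob.size == 0:
            continue
        m = fit(X[idx], y[idx])
        oob_errs.append(float(np.mean(predict(m, X[oob]) != y[oob])))
    err_oob = float(np.mean(oob_errs))
    return 0.368 * err_train + 0.632 * err_oob
```

    Re-running the induction inside every resample, rather than only once, is what lets the procedure evaluate the inductive method itself rather than a single induced model.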

  15. Coupling of the simultaneous heat and water model with a distributed hydrological model and evaluation of the combined model in a cold region watershed

    USDA-ARS?s Scientific Manuscript database

    To represent the effects of frozen soil on hydrology in cold regions, a new physically based distributed hydrological model has been developed by coupling the simultaneous heat and water model (SHAW) with the geomorphology based distributed hydrological model (GBHM), under the framework of the water...

  16. A Bifactor Multidimensional Item Response Theory Model for Differential Item Functioning Analysis on Testlet-Based Items

    ERIC Educational Resources Information Center

    Fukuhara, Hirotaka; Kamata, Akihito

    2011-01-01

    A differential item functioning (DIF) detection method for testlet-based data was proposed and evaluated in this study. The proposed DIF model is an extension of a bifactor multidimensional item response theory (MIRT) model for testlets. Unlike traditional item response theory (IRT) DIF models, the proposed model takes testlet effects into…

  17. Better and Worse: A Dual-Process Model of the Relationship between Core Self-evaluation and Work-Family Conflict.

    PubMed

    Yu, Kun

    2016-01-01

    Based on both resource allocation theory (Becker, 1965; Bergeron, 2007) and role theory (Katz and Kahn, 1978), the current study aims to uncover the relationship between core self-evaluation (CSE) and three dimensions of work interference with family (WIF). A dual-process model was proposed, in which both work stress and career resilience mediate the CSE-WIF relationship. The mediation model was tested with a sample of employees from various organizations (N = 561). The results first showed that CSE was negatively related to time-based and strain-based WIF and positively related to behavior-based WIF via the mediation of work stress. Moreover, CSE was positively associated with behavior-based and strain-based WIF via the mediation of career resilience, suggesting that CSE may also have its "dark-side."

  18. Better and Worse: A Dual-Process Model of the Relationship between Core Self-evaluation and Work-Family Conflict

    PubMed Central

    Yu, Kun

    2016-01-01

    Based on both resource allocation theory (Becker, 1965; Bergeron, 2007) and role theory (Katz and Kahn, 1978), the current study aims to uncover the relationship between core self-evaluation (CSE) and three dimensions of work interference with family (WIF). A dual-process model was proposed, in which both work stress and career resilience mediate the CSE-WIF relationship. The mediation model was tested with a sample of employees from various organizations (N = 561). The results first showed that CSE was negatively related to time-based and strain-based WIF and positively related to behavior-based WIF via the mediation of work stress. Moreover, CSE was positively associated with behavior-based and strain-based WIF via the mediation of career resilience, suggesting that CSE may also have its “dark-side.” PMID:27790177

  19. Scaling up the evaluation of psychotherapy: evaluating motivational interviewing fidelity via statistical text classification

    PubMed Central

    2014-01-01

    Background Behavioral interventions such as psychotherapy are leading, evidence-based practices for a variety of problems (e.g., substance abuse), but the evaluation of provider fidelity to behavioral interventions is limited by the need for human judgment. The current study evaluated the accuracy of statistical text classification in replicating human-based judgments of provider fidelity in one specific psychotherapy—motivational interviewing (MI). Method Participants (n = 148) came from five previously conducted randomized trials and were either primary care patients at a safety-net hospital or university students. To be eligible for the original studies, participants met criteria for either problematic drug or alcohol use. All participants received a type of brief motivational interview, an evidence-based intervention for alcohol and substance use disorders. The Motivational Interviewing Skills Code is a standard measure of MI provider fidelity based on human ratings that was used to evaluate all therapy sessions. A text classification approach called a labeled topic model was used to learn associations between human-based fidelity ratings and MI session transcripts. It was then used to generate codes for new sessions. The primary comparison was the accuracy of model-based codes with human-based codes. Results Receiver operating characteristic (ROC) analyses of model-based codes showed reasonably strong sensitivity and specificity with those from human raters (range of area under ROC curve (AUC) scores: 0.62–0.81; average AUC: 0.72). Agreement with human raters was evaluated based on talk turns as well as code tallies for an entire session. Generated codes had higher reliability with human codes for session tallies and also varied strongly by individual code. Conclusion To scale up the evaluation of behavioral interventions, technological solutions will be required. The current study demonstrated preliminary, encouraging findings regarding the utility of statistical text classification in bridging this methodological gap. PMID:24758152
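
A way to make the reported agreement concrete: the ROC AUC used above can be computed with a simple rank-based (Mann-Whitney) estimator. This is an illustrative sketch with made-up scores and labels, not the study's data or code.

```python
def auc(scores, labels):
    # Rank-based (Mann-Whitney) AUC: the probability that a randomly chosen
    # positive example is scored higher than a randomly chosen negative one,
    # counting ties as half a win.
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical model scores for talk turns vs. binary human codes.
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2]
labels = [1, 1, 0, 1, 0, 0, 0]
print(auc(scores, labels))  # 11/12, about 0.917
```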

  20. Anthropometric measures in cardiovascular disease prediction: comparison of laboratory-based versus non-laboratory-based model.

    PubMed

    Dhana, Klodian; Ikram, M Arfan; Hofman, Albert; Franco, Oscar H; Kavousi, Maryam

    2015-03-01

    Body mass index (BMI) has been used to simplify cardiovascular risk prediction models by substituting total cholesterol and high-density lipoprotein cholesterol. In the elderly, the ability of BMI as a predictor of cardiovascular disease (CVD) declines. We aimed to find the most predictive anthropometric measure for CVD risk to construct a non-laboratory-based model and to compare it with the model including laboratory measurements. The study included 2675 women and 1902 men aged 55-79 years from the prospective population-based Rotterdam Study. We used Cox proportional hazard regression analysis to evaluate the association of BMI, waist circumference, waist-to-hip ratio and a body shape index (ABSI) with CVD, including coronary heart disease and stroke. The performance of the laboratory-based and non-laboratory-based models was evaluated by studying the discrimination, calibration, correlation and risk agreement. Among men, ABSI was the most informative measure associated with CVD, therefore ABSI was used to construct the non-laboratory-based model. Discrimination of the non-laboratory-based model was not different from that of the laboratory-based model (c-statistic: 0.680 vs 0.683, p=0.71); both models were well calibrated (15.3% observed CVD risk vs 16.9% and 17.0% predicted CVD risks by the non-laboratory-based and laboratory-based models, respectively), and the Spearman rank correlation and the agreement between the non-laboratory-based and laboratory-based models were 0.89 and 91.7%, respectively. Among women, none of the anthropometric measures were independently associated with CVD. Among the middle-aged and elderly, where the ability of BMI to predict CVD declines, the non-laboratory-based model based on ABSI could predict CVD risk as accurately as the laboratory-based model among men. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
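
The c-statistic compared above is Harrell's concordance index. A minimal sketch of how it is computed for censored survival data follows, with hypothetical follow-up times, event indicators, and risk scores (not the Rotterdam Study data).

```python
def c_statistic(times, events, risks):
    """Harrell's c: among comparable pairs (the subject with the shorter
    time actually had an event), the fraction in which the subject with the
    higher predicted risk failed first; ties in risk count as half."""
    conc = ties = usable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i] == 1:
                usable += 1
                if risks[i] > risks[j]:
                    conc += 1
                elif risks[i] == risks[j]:
                    ties += 1
    return (conc + 0.5 * ties) / usable

# Hypothetical: subject 3 is censored (event = 0), so pairs where that
# subject fails "first" are not usable.
print(c_statistic([2, 4, 6, 8], [1, 1, 0, 1], [0.9, 0.7, 0.8, 0.2]))  # 0.8
```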

  1. Estimation of Carbon Flux of Forest Ecosystem over Qilian Mountains by BIOME-BGC Model

    NASA Astrophysics Data System (ADS)

    Yan, Min; Tian, Xin; Li, Zengyuan; Chen, Erxue; Li, Chunmei

    2014-11-01

    The gross primary production (GPP) and net ecosystem exchange (NEE) are important indicators of carbon fluxes. This study aims at evaluating forest GPP and NEE over the Qilian Mountains at large scale using meteorological, remotely sensed and other ancillary data. To realize this, the widely used ecological-process-based model Biome-BGC and the remote-sensing-based MODIS GPP algorithm were selected to simulate the forest carbon fluxes. The two models were combined by calibrating Biome-BGC against the optimized MODIS GPP algorithm. The simulated GPP and NEE values were evaluated against the eddy covariance observed GPPs and NEEs, and good agreement was achieved, with R2 = 0.76 and 0.67, respectively.
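
The R2 agreement figures quoted above can be reproduced in principle with the coefficient of determination. This is a sketch of one common definition, with invented observed/simulated flux values rather than the study's tower data.

```python
def r_squared(obs, sim):
    # Coefficient of determination: 1 - SS_residual / SS_total,
    # measuring how well simulated values track the observations.
    mean = sum(obs) / len(obs)
    ss_res = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_tot = sum((o - mean) ** 2 for o in obs)
    return 1 - ss_res / ss_tot

# Hypothetical observed vs. simulated GPP values.
print(r_squared([1, 2, 3, 4], [1.1, 1.9, 3.2, 3.8]))  # 0.98
```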

  3. Three-dimensional prostate tumor model based on a hyaluronic acid-alginate hydrogel for evaluation of anti-cancer drug efficacy.

    PubMed

    Tang, Yadong; Huang, Boxin; Dong, Yuqin; Wang, Wenlong; Zheng, Xi; Zhou, Wei; Zhang, Kun; Du, Zhiyun

    2017-10-01

    In vitro cell-based assays are widely applied to evaluate anti-cancer drug efficacy. However, the conventional approaches are mostly based on two-dimensional (2D) culture systems, making it difficult to recapitulate the in vivo tumor scenario because of spatial limitations. Here, we develop an in vitro three-dimensional (3D) prostate tumor model based on a hyaluronic acid (HA)-alginate hybrid hydrogel to bridge the gap between in vitro and in vivo anticancer drug evaluations. In situ encapsulation of prostate cancer (PCa) cells was achieved by mixing HA and alginate aqueous solutions in the presence of cells and then crosslinking with calcium ions. Unlike in 2D culture, cells were found to aggregate into spheroids in the 3D matrix. The expression of epithelial-to-mesenchymal transition (EMT) biomarkers was largely enhanced, indicating increased invasion and metastasis potential in the hydrogel matrix. A significant up-regulation of proangiogenic growth factors (IL-8, VEGF) and matrix metalloproteinases (MMPs) was observed in 3D-cultured PCa cells. The results of anti-cancer drug evaluation suggested a higher drug tolerance within the 3D tumor model compared to conventional 2D-cultured cells. Finally, we found that the drug effect within the in vitro 3D cancer model based on the HA-alginate matrix exhibited better predictability for in vivo drug efficacy.

  4. Evaluation of a lake whitefish bioenergetics model

    USGS Publications Warehouse

    Madenjian, Charles P.; O'Connor, Daniel V.; Pothoven, Steven A.; Schneeberger, Philip J.; Rediske, Richard R.; O'Keefe, James P.; Bergstedt, Roger A.; Argyle, Ray L.; Brandt, Stephen B.

    2006-01-01

    We evaluated the Wisconsin bioenergetics model for lake whitefish Coregonus clupeaformis in the laboratory and in the field. For the laboratory evaluation, lake whitefish were fed rainbow smelt Osmerus mordax in four laboratory tanks during a 133-d experiment. Based on a comparison of bioenergetics model predictions of lake whitefish food consumption and growth with observed consumption and growth, we concluded that the bioenergetics model furnished significantly biased estimates of both food consumption and growth. On average, the model overestimated consumption by 61% and underestimated growth by 16%. The source of the bias was probably an overestimation of the respiration rate. We therefore adjusted the respiration component of the bioenergetics model to obtain a good fit of the model to the observed consumption and growth in our laboratory tanks. Based on the adjusted model, predictions of food consumption over the 133-d period fell within 5% of observed consumption in three of the four tanks and within 9% of observed consumption in the remaining tank. We used polychlorinated biphenyls (PCBs) as a tracer to evaluate model performance in the field. Based on our laboratory experiment, the efficiency with which lake whitefish retained PCBs from their food (ρ) was estimated at 0.45. We applied the bioenergetics model to Lake Michigan lake whitefish and then used PCB determinations of both lake whitefish and their prey from Lake Michigan to estimate ρ in the field. Application of the original model to Lake Michigan lake whitefish yielded a field ρ estimate of 0.28, implying that the original formulation of the model overestimated consumption in Lake Michigan by 61%. Application of the bioenergetics model with the adjusted respiration component resulted in a field ρ estimate of 0.56, implying that this revised model underestimated consumption by 20%.

  5. Evaluation of Deep Learning Based Stereo Matching Methods: from Ground to Aerial Images

    NASA Astrophysics Data System (ADS)

    Liu, J.; Ji, S.; Zhang, C.; Qin, Z.

    2018-05-01

    Dense stereo matching has been extensively studied in photogrammetry and computer vision. In this paper we evaluate the application of deep learning based stereo methods, which emerged around 2016 and spread rapidly, to aerial stereo pairs rather than the ground images commonly used in the computer vision community. Two popular methods are evaluated. One learns the matching cost with a convolutional neural network (known as MC-CNN); the other produces a disparity map in an end-to-end manner by utilizing both geometry and context (known as GC-Net). First, we evaluate the performance of the deep learning based methods on aerial stereo images by direct model reuse. The models pre-trained separately on the KITTI 2012, KITTI 2015 and Driving datasets are directly applied to three aerial datasets. We also give the results of direct training on the target aerial datasets. Second, the deep learning based methods are compared to the classic stereo matching method Semi-Global Matching (SGM) and a photogrammetric software package, SURE, on the same aerial datasets. Third, a transfer learning strategy is introduced to aerial image matching, based on the assumption that a few target samples are available for model fine-tuning. Experiments showed that the conventional methods and the deep learning based methods performed similarly, and that the latter have greater potential to be explored.

  6. AgroEcoSystem-Watershed (AgES-W) model evaluation for streamflow and nitrogen/sediment dynamics on a midwest agricultural watershed

    USDA-ARS?s Scientific Manuscript database

    AgroEcoSystem-Watershed (AgES-W) is a modular, Java-based spatially distributed model which implements hydrologic/water quality simulation components under the Object Modeling System Version 3 (OMS3). The AgES-W model was previously evaluated for streamflow and recently has been enhanced with the ad...

  7. Core Professionalism Education in Surgery: A Systematic Review

    PubMed Central

    Sarıoğlu Büke, Akile; Karabilgin Öztürkçü, Özlem Sürel; Yılmaz, Yusuf; Sayek, İskender

    2018-01-01

    Background: Professionalism education is one of the major elements of surgical residency education. Aims: To evaluate the studies on core professionalism education programs in surgical professionalism education. Study Design: Systematic review. Methods: This systematic literature review was performed to analyze core professionalism programs for surgical residency education published in English with at least three of the following features: program developmental model/instructional design method, aims and competencies, methods of teaching, methods of assessment, and program evaluation model or method. A total of 27083 articles were retrieved using EBSCOHOST, PubMed, Science Direct, Web of Science, and manual search. Results: Eight articles met the selection criteria. The instructional design method was presented in only one article, which described the Analysis, Design, Development, Implementation, and Evaluation model. Six articles were based on the Accreditation Council for Graduate Medical Education criterion, although there was significant variability in content. The most common teaching method was role modeling with scenario- and case-based learning. A wide range of assessment methods for evaluating professionalism education were reported. The Kirkpatrick model was reported in one article as a method for program evaluation. Conclusion: It is suggested that for a core surgical professionalism education program, developmental/instructional design model, aims and competencies, content, teaching methods, assessment methods, and program evaluation methods/models should be well defined, and the content should be comparable. PMID:29553464

  8. Critical Evaluation of Prediction Models for Phosphorus Partition between CaO-based Slags and Iron-based Melts during Dephosphorization Processes

    NASA Astrophysics Data System (ADS)

    Yang, Xue-Min; Li, Jin-Yan; Chai, Guo-Ming; Duan, Dong-Ping; Zhang, Jian

    2016-08-01

    According to the experimental results of hot metal dephosphorization by CaO-based slags at a commercial-scale hot metal pretreatment station, 16 models of the equilibrium quotient k_P or phosphorus partition L_P between CaO-based slags and iron-based melts collected from the literature have been evaluated. The collected 16 models for predicting the equilibrium quotient k_P can be transferred to predict the phosphorus partition L_P. The results predicted by the 16 models cannot themselves serve as criteria for evaluating k_P or L_P, owing to the various forms or definitions of k_P or L_P. Thus, the measured phosphorus content [pct P] in the hot metal bath at the end point of the dephosphorization pretreatment process is applied as the fixed criterion for evaluating the collected 16 models. The collected 16 models can be described in the form of linear functions y = c0 + c1 x, in which the independent variable x represents the chemical composition of the slags, the intercept c0, including the constant term, captures the temperature effect and other unmentioned or acquiescent thermodynamic factors, and the slope c1 is regressed from the experimental results of k_P or L_P. Thus, a general approach to developing a thermodynamic model for predicting the equilibrium quotient k_P, the phosphorus partition L_P, or [pct P] in iron-based melts during the dephosphorization process is proposed by revising the constant term in the intercept c0 for the summarized 15 models other than Suito's model (M3). The better models, with an ideal revising possibility or flexibility among the collected 16 models, have been selected and recommended. Compared with the predictions of the revised 15 models and Suito's model (M3), the developed IMCT-L_P model coupled with the dephosphorization mechanism proposed by the present authors can accurately predict the phosphorus partition L_P, with the lowest mean deviation δ_{L_P} of log L_P of 2.33, as well as [pct P] in the hot metal bath, with the smallest mean deviation δ_{[pct P]} of [pct P] of 12.31.
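
The linear form y = c0 + c1 x used above to describe the collected models can be fitted by ordinary least squares. This is a generic sketch with hypothetical slag-composition (x) and partition (y) values, not data from the paper.

```python
def fit_line(xs, ys):
    # Ordinary least squares for y = c0 + c1*x: slope from the
    # covariance/variance ratio, intercept from the means.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    c1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    c0 = my - c1 * mx
    return c0, c1

# Hypothetical x (slag composition index) and y (log partition) values.
print(fit_line([1, 2, 3], [3, 5, 7]))  # (1.0, 2.0)
```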

  9. Application of Support Vector Machine to Forex Monitoring

    NASA Astrophysics Data System (ADS)

    Kamruzzaman, Joarder; Sarker, Ruhul A.

    Previous studies have demonstrated the superior performance of artificial neural network (ANN) based forex forecasting models over traditional regression models. This paper applies support vector machines to build a forecasting model from historical data using six simple technical indicators, and presents a comparison with an ANN-based model trained by the scaled conjugate gradient (SCG) learning algorithm. The models are evaluated and compared on the basis of five commonly used performance metrics that measure closeness of prediction as well as correctness in directional change. Forecasting results for six different currencies against the Australian dollar reveal superior performance of the SVM model with a simple linear kernel over the ANN-SCG model in terms of all the evaluation metrics. The effect of SVM parameter selection on prediction performance is also investigated and analyzed.
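
The abstract does not give the exact formula for its "correctness in directional change" metric, so the following is a sketch of one standard definition (directional accuracy), with invented price series.

```python
def directional_accuracy(actual, predicted):
    """Fraction of periods where the predicted change has the same sign
    as the actual change (a common 'directional change' metric)."""
    hits = total = 0
    for t in range(1, len(actual)):
        da = actual[t] - actual[t - 1]
        dp = predicted[t] - predicted[t - 1]
        total += 1
        hits += (da * dp) > 0
    return hits / total

# Hypothetical actual vs. forecast exchange rates: 2 of 3 moves predicted.
print(directional_accuracy([1.0, 1.2, 1.1, 1.3], [1.0, 1.3, 1.35, 1.4]))
```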

  10. Construction risk assessment of deep foundation pit in metro station based on G-COWA method

    NASA Astrophysics Data System (ADS)

    You, Weibao; Wang, Jianbo; Zhang, Wei; Liu, Fangmeng; Yang, Diying

    2018-05-01

    In order to gain an accurate understanding of the construction safety of deep foundation pits in metro stations and to reduce the probability and loss of risk occurrence, a risk assessment method based on G-COWA is proposed. Firstly, drawing on specific engineering examples and the construction characteristics of deep foundation pits, an evaluation index system based on the five factors of “human, management, technology, material and environment” is established. Secondly, the C-OWA operator is introduced to weight the evaluation indices while weakening the negative influence of subjective expert preference. Grey cluster analysis and the fuzzy comprehensive evaluation method are combined to construct a construction risk assessment model for deep foundation pits that can effectively handle the uncertainties involved. Finally, the model is applied to the actual deep foundation pit project of Qingdao Metro North Station; its construction risk rating is determined to be “medium”, and the model is shown to be feasible and reasonable. Corresponding control measures are then put forward and useful references provided.
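
The C-OWA operator is not defined in the abstract. In many risk-assessment papers it aggregates expert scores with binomial-coefficient weights after sorting, which damps extreme opinions; the sketch below follows that common definition and uses invented expert scores.

```python
from math import comb

def c_owa(scores):
    # C-OWA aggregation as commonly defined: sort expert scores in
    # descending order and weight the j-th value by C(n-1, j) / 2^(n-1),
    # so middle-ranked opinions get the largest weights.
    xs = sorted(scores, reverse=True)
    n = len(xs)
    return sum(comb(n - 1, j) * x for j, x in enumerate(xs)) / 2 ** (n - 1)

# Four hypothetical expert scores for one risk index.
print(c_owa([9, 7, 8, 6]))  # 7.5
```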

  11. Challenges Associated With Applying Physiologically Based Pharmacokinetic Modeling for Public Health Decision-Making

    EPA Science Inventory

    The development and application of physiologically based pharmacokinetic (PBPK) models in chemical toxicology have grown steadily since their emergence in the 1980s. However, critical evaluation of PBPK models to support public health decision-making across federal agencies has t...

  12. Development, Implementation and Evaluation of a Physics-Based Windblown Dust Emission Model

    EPA Science Inventory

    A physics-based windblown dust emission parametrization scheme is developed and implemented in the CMAQ modeling system. A distinct feature of the present model includes the incorporation of a newly developed, dynamic relation for the surface roughness length, which is important ...

  13. Roughness modelling based on human auditory perception for sound quality evaluation of vehicle interior noise

    NASA Astrophysics Data System (ADS)

    Wang, Y. S.; Shen, G. Q.; Guo, H.; Tang, X. L.; Hamade, T.

    2013-08-01

    In this paper, a roughness model based on human auditory perception (HAP), known as HAP-RM, is developed for the sound quality evaluation (SQE) of vehicle noise. First, the interior noise signals are measured for a sample vehicle and prepared for roughness modelling. The HAP-RM model is based on the process of sound transfer and perception in the human auditory system, combining the structural filtering function and the nonlinear perception characteristics of the ear. The HAP-RM model is applied to the measured vehicle interior noise signals by considering the factors that affect hearing, such as the modulation and carrier frequencies, time and frequency masking, and the correlations of the critical bands. The HAP-RM model is validated by jury tests. An anchor-scaled scoring method (ASM) is used for subjective evaluations in the jury tests. The verification results show that the newly developed model can accurately calculate vehicle noise roughness below 0.6 asper. Further investigation shows that the total roughness of the vehicle interior noise can mainly be attributed to frequency components below 12 Bark. The time masking effects in the modelling procedure enable the application of the HAP-RM model to stationary and nonstationary vehicle noise signals and to the SQE of other sound-related signals in engineering problems.

  14. Local spatio-temporal analysis in vision systems

    NASA Astrophysics Data System (ADS)

    Geisler, Wilson S.; Bovik, Alan; Cormack, Lawrence; Ghosh, Joydeep; Gildeen, David

    1994-07-01

    The aims of this project are the following: (1) develop a physiologically and psychophysically based model of low-level human visual processing (a key component of which are local frequency coding mechanisms); (2) develop image models and image-processing methods based upon local frequency coding; (3) develop algorithms for performing certain complex visual tasks based upon local frequency representations, (4) develop models of human performance in certain complex tasks based upon our understanding of low-level processing; and (5) develop a computational testbed for implementing, evaluating and visualizing the proposed models and algorithms, using a massively parallel computer. Progress has been substantial on all aims. The highlights include the following: (1) completion of a number of psychophysical and physiological experiments revealing new, systematic and exciting properties of the primate (human and monkey) visual system; (2) further development of image models that can accurately represent the local frequency structure in complex images; (3) near completion in the construction of the Texas Active Vision Testbed; (4) development and testing of several new computer vision algorithms dealing with shape-from-texture, shape-from-stereo, and depth-from-focus; (5) implementation and evaluation of several new models of human visual performance; and (6) evaluation, purchase and installation of a MasPar parallel computer.

  15. ANALYSIS OF MERCURY IN VERMONT AND NEW HAMPSHIRE LAKES: EVALUATION OF THE REGIONAL MERCURY CYCLING MODEL

    EPA Science Inventory

    An evaluation of the Regional Mercury Cycling Model (R-MCM, a steady-state fate and transport model used to simulate mercury concentrations in lakes) is presented based on its application to a series of 91 lakes in Vermont and New Hampshire. Visual and statistical analyses are pr...

  16. SEASONAL NH3 EMISSIONS FOR THE CONTINENTAL UNITED STATES: INVERSE MODEL ESTIMATION AND EVALUATION

    EPA Science Inventory

    An inverse modeling study has been conducted here to evaluate a prior estimate of seasonal ammonia (NH3) emissions. The prior estimates were based on a previous inverse modeling study and two other bottom-up inventory studies. The results suggest that the prior estim...

  17. Integrating species distributional, conservation planning, and individual based population models: A case study in critical habitat evaluation for the Northern Spotted Owl

    EPA Science Inventory

    Background / Question / Methods As part of the ongoing northern spotted owl recovery planning effort, we evaluated a series of alternative potential critical habitat scenarios using a species-distribution model (MaxEnt), a conservation-planning model (Zonation), and an individua...

  18. DYNAMIC EVALUATION OF REGIONAL AIR QUALITY MODELS: ASSESSING CHANGES TO O3 STEMMING FROM CHANGES IN EMISSIONS AND METEOROLOGY

    EPA Science Inventory

    Regional-scale air quality models are used to estimate the response of air pollutants to potential emission control strategies as part of the decision-making process. Traditionally, the model predicted pollutant concentrations are evaluated for the “base case” to assess a model’s...

  19. Simulating forage crop production in a northern climate with the Integrated Farm System Model

    USDA-ARS?s Scientific Manuscript database

    Whole-farm simulation models are useful tools for evaluating the effect of management practices and climate variability on the agro-environmental and economic performance of farms. A few process-based farm-scale models have been developed, but none have been evaluated in a northern region with a sho...

  20. Evaluation of iodide deficiency in the lactating rat and pup using a biologically based dose-response model

    EPA Science Inventory

    A biologically-based dose response (BBDR) model for the hypothalamic-pituitary-thyroid (HPT) axis in the lactating rat and nursing pup was developed to describe the perturbations caused by iodide deficiency on the HPT axis. Model calibrations, carried out by adjusting key model p...

  1. Modeling Research Project Risks with Fuzzy Maps

    ERIC Educational Resources Information Center

    Bodea, Constanta Nicoleta; Dascalu, Mariana Iuliana

    2009-01-01

    The authors propose a risks evaluation model for research projects. The model is based on fuzzy inference. The knowledge base for fuzzy process is built with a causal and cognitive map of risks. The map was especially developed for research projects, taken into account their typical lifecycle. The model was applied to an e-testing research…

  2. Evaluation of iodide deficiency in the lactating rat and pup using a biologically based dose response (BBDR) model

    EPA Science Inventory

    A biologically-based dose response (BBDR) model for the hypothalamic-pituitary-thyroid (HPT) axis in the lactating rat and nursing pup was developed to describe the perturbations caused by iodide deficiency on the HPT axis. Model calibrations, carried out by adjusting key model...

  3. On prognostic models, artificial intelligence and censored observations.

    PubMed

    Anand, S S; Hamilton, P W; Hughes, J G; Bell, D A

    2001-03-01

    The development of prognostic models for assisting medical practitioners with decision making is not a trivial task. Models need to possess a number of desirable characteristics and few, if any, current modelling approaches based on statistical or artificial intelligence can produce models that display all these characteristics. The inability of modelling techniques to provide truly useful models has led to interest in these models being purely academic in nature. This in turn has resulted in only a very small percentage of models that have been developed being deployed in practice. On the other hand, new modelling paradigms are being proposed continuously within the machine learning and statistical community and claims, often based on inadequate evaluation, being made on their superiority over traditional modelling methods. We believe that for new modelling approaches to deliver true net benefits over traditional techniques, an evaluation centric approach to their development is essential. In this paper we present such an evaluation centric approach to developing extensions to the basic k-nearest neighbour (k-NN) paradigm. We use standard statistical techniques to enhance the distance metric used and a framework based on evidence theory to obtain a prediction for the target example from the outcome of the retrieved exemplars. We refer to this new k-NN algorithm as Censored k-NN (Ck-NN). This reflects the enhancements made to k-NN that are aimed at providing a means for handling censored observations within k-NN.
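
For readers unfamiliar with the baseline the paper extends: plain k-NN predicts the label of a query from its k closest training examples. This sketch shows only that baseline (majority vote over Euclidean neighbours) with made-up data, not the Censored k-NN enhancements described above.

```python
def knn_predict(train, query, k=3):
    """Plain k-NN majority vote (the baseline that Ck-NN extends).
    train is a list of (feature_vector, label) pairs."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    # Take the k training examples closest to the query...
    nearest = sorted(train, key=lambda ex: dist(ex[0], query))[:k]
    # ...and return the most common label among them.
    labels = [lab for _, lab in nearest]
    return max(set(labels), key=labels.count)

# Hypothetical two-class toy data.
train = [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
         ((5, 5), "b"), ((5, 6), "b")]
print(knn_predict(train, (0.5, 0.5)))  # "a"
```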

  4. Evaluating 1-, 2- and 3- Parameter Logistic Models Using Model-Based and Empirically-Based Simulations under Homogeneous and Heterogeneous Set Conditions

    ERIC Educational Resources Information Center

    Rizavi, Saba; Way, Walter D.; Lu, Ying; Pitoniak, Mary; Steffen, Manfred

    2004-01-01

    The purpose of this study was to use realistically simulated data to evaluate various CAT designs for use with the verbal reasoning measure of the Medical College Admissions Test (MCAT). Factors such as item pool depth, content constraints, and item formats often cause repeated adaptive administrations of an item at ability levels that are not…

  5. Un modelo para el control de calidad academica de los textos de instruccion a distancia (A Model for Controlling the Academic Quality of Distance Education Texts).

    ERIC Educational Resources Information Center

    Bolanos-Mora, Guiselle; And Others

    1992-01-01

    In response to the need for a system of control over the academic quality of distance education texts, this article proposes a methodological model based on criteria that evaluate written materials based on their instructional quality, design, and production. A discussion and figures evaluate educational aspects of content, communication,…

  6. Experimental Evaluation of the Effects of a Research-Based Preschool Mathematics Curriculum

    ERIC Educational Resources Information Center

    Clements, Douglas H.; Sarama, Julie

    2008-01-01

    A randomized-trials design was used to evaluate the effectiveness of a preschool mathematics program based on a comprehensive model of research-based curricula development. Thirty-six preschool classrooms were assigned to experimental (Building Blocks), comparison (a different preschool mathematics curriculum), or control conditions. Children were…

  7. Competency-Based Evaluation in Higher Education--Design and Use of Competence Rubrics by University Educators

    ERIC Educational Resources Information Center

    Velasco-Martínez, Leticia-Concepción; Tójar-Hurtado, Juan-Carlos

    2018-01-01

    Competency-based learning requires making changes in the higher education model in response to current socio-educational demands. Rubrics are an innovative educational tool for competence evaluation, for both students and educators. Ever since arriving at the university systems, the application of rubrics in evaluation programs has grown…

  8. Development and Evaluation of Computer-Based Laboratory Practical Learning Tool

    ERIC Educational Resources Information Center

    Gandole, Y. B.

    2006-01-01

    Effective evaluation of educational software is a key issue for successful introduction of advanced tools in the curriculum. This paper details to developing and evaluating a tool for computer assisted learning of science laboratory courses. The process was based on the generic instructional system design model. Various categories of educational…

  9. Teaching Single-Case Evaluation to Graduate Social Work Students: A Replication

    ERIC Educational Resources Information Center

    Wong, Stephen E.; O'Driscoll, Janice

    2017-01-01

    A course teaching graduate social work students to use an evidence-based model and to evaluate their own practice was replicated and evaluated. Students conducted a project in which they reviewed published research to achieve a clinical goal, applied quantitative measures for ongoing assessment, implemented evidence-based interventions, and…

  10. Integrating WEPP into the WEPS infrastructure

    USDA-ARS?s Scientific Manuscript database

    The Wind Erosion Prediction System (WEPS) and the Water Erosion Prediction Project (WEPP) share a common modeling philosophy, that of moving away from primarily empirically based models based on indices or "average conditions", and toward a more process based approach which can be evaluated using ac...

  11. An OpenACC-Based Unified Programming Model for Multi-accelerator Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Jungwon; Lee, Seyong; Vetter, Jeffrey S

    2015-01-01

    This paper proposes a novel SPMD programming model of OpenACC. Our model integrates the different granularities of parallelism from vector-level parallelism to node-level parallelism into a single, unified model based on OpenACC. It allows programmers to write programs for multiple accelerators using a uniform programming model whether they are in shared or distributed memory systems. We implement a prototype of our model and evaluate its performance with a GPU-based supercomputer using three benchmark applications.

  12. Developing evaluation instrument based on CIPP models on the implementation of portfolio assessment

    NASA Astrophysics Data System (ADS)

    Kurnia, Feni; Rosana, Dadan; Supahar

    2017-08-01

    This study aimed to develop an evaluation instrument constructed on the CIPP model for the implementation of portfolio assessment in science learning. The study used the research and development (R&D) method, adapting the 4-D model to the development of a non-test instrument, with the evaluation instrument constructed on the CIPP model. CIPP is an abbreviation of Context, Input, Process, and Product. The data collection techniques were interviews, questionnaires, and observations. The data collection instruments were: 1) interview guidelines for the analysis of the problems and needs, 2) a questionnaire to gauge the level of accomplishment of the portfolio assessment instrument, and 3) observation sheets for teachers and students to probe responses to the portfolio assessment instrument. The data were quantitative, obtained from several validators. The validators consisted of two lecturers as evaluation experts, two practitioners (science teachers), and three colleagues. This paper presents the content validity results obtained from the validators and the analysis of the data using Aiken's V formula. The results of this study show that the evaluation instrument based on the CIPP model is suitable for evaluating the implementation of portfolio assessment instruments. Based on the judgments of the experts, practitioners, and colleagues, the Aiken's V coefficient was between 0.86 and 1.00, which means that it is valid and can be used in the limited trial and operational field trial.
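
Aiken's V, used above to judge content validity, is V = Σ(r_i − lo) / (n·(hi − lo)) for n raters on a lo..hi rating scale. A minimal sketch with hypothetical ratings from seven validators (not the study's actual ratings):

```python
def aikens_v(ratings, lo=1, hi=5):
    """Aiken's V content-validity coefficient: sum of (rating - scale minimum)
    divided by n * (scale range). Ranges from 0 to 1."""
    n = len(ratings)
    return sum(r - lo for r in ratings) / (n * (hi - lo))

# Hypothetical 1-5 ratings from seven validators for one item.
print(aikens_v([5, 5, 4, 5, 4, 5, 4]))  # 25/28, about 0.89
```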

  13. Estimation of Transmittance of Solar Radiation in the Visible Domain Based on Remote Sensing: Evaluation of Models Using In Situ Data

    NASA Astrophysics Data System (ADS)

    Zoffoli, M. Laura; Lee, Zhongping; Ondrusek, Michael; Lin, Junfang; Kovach, Charles; Wei, Jianwei; Lewis, Marlon

    2017-11-01

    The transmittance of solar radiation in the oceanic water column plays an important role in heat transfer and photosynthesis, with implications for the global carbon cycle, global circulation, and climate. Globally, the transmittance of solar radiation in the visible domain (~400-700 nm) (TRVIS) through the water column, which determines the vertical distribution of visible light, has to be derived from remote sensing products. Existing models are centered on chlorophyll-a (Chl) concentration or on Inherent Optical Properties (IOPs), as both can be derived from ocean color measurements. We present evaluations of both schemes with field data from clear oceanic waters and from coastal waters. Five models were evaluated: (1) Morel and Antoine (1994) (MA94), (2) Ohlmann and Siegel (2000) (OS00), (3) Murtugudde et al. (2002) (MU02), (4) Manizza et al. (2005) (MA05), and (5) Lee et al. (2005) (IOPs05), where the first four are Chl-based and the last is IOPs-based, with all inputs derived from remote sensing reflectance. The best performing model is IOPs05, with an Unbiased Absolute Percent Difference (UAPD) of ~23%, while the Chl-based models show higher uncertainties (UAPD for MA94: ~54%, OS00: ~133%, MU02: ~56%, and MA05: ~39%). The IOPs-based model was insensitive to the type of water, allowing it to be applied in most marine environments, whereas some of the Chl-based models (MU02 and MA05) show much higher sensitivities in turbid coastal (higher Chl) waters. These results highlight the applicability of IOPs products for such applications.
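    The UAPD statistic used for the comparison is, in one common form (assumed here; the paper may differ in detail), the mean symmetric percent difference between paired model estimates and observations:

    ```python
    def uapd(model, obs):
        """Unbiased absolute percent difference (%): the symmetric
        denominator (m + o)/2 avoids inflating errors when observed
        values are small (assumed form: mean of 200*|m - o|/(m + o))."""
        pairs = list(zip(model, obs))
        return sum(200 * abs(m - o) / (m + o) for m, o in pairs) / len(pairs)

    # Hypothetical TRVIS estimates vs. in situ values:
    print(round(uapd([0.42, 0.55, 0.61], [0.40, 0.50, 0.70]), 1))  # -> 9.4
    ```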

  14. Automated glioblastoma segmentation based on a multiparametric structured unsupervised classification.

    PubMed

    Juan-Albarracín, Javier; Fuster-Garcia, Elies; Manjón, José V; Robles, Montserrat; Aparici, F; Martí-Bonmatí, L; García-Gómez, Juan M

    2015-01-01

    Automatic brain tumour segmentation has become a key component of future brain tumour treatment. Currently, most brain tumour segmentation approaches take a supervised learning standpoint, which requires a labelled training dataset from which to infer the models of the classes. The performance of these models is directly determined by the size and quality of the training corpus, whose assembly is a tedious and time-consuming task. Unsupervised approaches avoid these limitations but often do not reach results comparable to those of supervised methods. We therefore propose an automated unsupervised method for brain tumour segmentation based on anatomical Magnetic Resonance (MR) images. Four unsupervised classification algorithms, grouped by their structured or non-structured condition, were evaluated within our pipeline. As non-structured algorithms we evaluated K-means, Fuzzy K-means, and the Gaussian Mixture Model (GMM), whereas as a structured classification algorithm we evaluated the Gaussian Hidden Markov Random Field (GHMRF). An automated postprocess based on a statistical approach supported by tissue probability maps is proposed to automatically identify the tumour classes after segmentation. We evaluated our brain tumour segmentation method with the public BRAin Tumor Segmentation (BRATS) 2013 Test and Leaderboard datasets. Our GMM-based approach improves on the results obtained by most of the supervised methods evaluated on the Leaderboard set and reaches second position in the ranking. Our GHMRF-based variant achieves first position in the Test ranking among unsupervised approaches and seventh position in the overall Test ranking, which confirms the method as a viable alternative for brain tumour segmentation.
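    As an illustration of the non-structured clustering step, here is a toy 1-D K-means over voxel intensities. The paper's actual pipeline operates on multiparametric MR features and adds GMM/GHMRF spatial structure and tissue-probability postprocessing, none of which this sketch attempts; the intensity values are hypothetical:

    ```python
    import random

    def kmeans_1d(values, k, iters=50, seed=0):
        """Toy 1-D K-means: cluster intensities into k classes by
        alternating nearest-centroid assignment and centroid update.
        Illustrative only."""
        rng = random.Random(seed)
        centroids = rng.sample(values, k)
        for _ in range(iters):
            labels = [min(range(k), key=lambda j: abs(v - centroids[j]))
                      for v in values]
            for j in range(k):
                members = [v for v, lab in zip(values, labels) if lab == j]
                if members:  # keep old centroid if cluster emptied
                    centroids[j] = sum(members) / len(members)
        return centroids, labels

    # Hypothetical intensities drawn from three tissue classes:
    intensities = [0.1, 0.12, 0.09, 0.5, 0.52, 0.48, 0.9, 0.88, 0.91]
    centroids, labels = kmeans_1d(intensities, k=3)
    print(sorted(round(c, 3) for c in centroids))
    ```

    A GMM replaces the hard nearest-centroid assignment with soft posterior probabilities, and a GHMRF further couples neighbouring voxels' labels, which is what makes the structured variants more robust to noise.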

  15. Partnerships Enhancing Practice: A Preliminary Model of Technology-Based Peer-to-Peer Evaluations of Teaching in Higher Education

    ERIC Educational Resources Information Center

    Servilio, Kathryn L.; Hollingshead, Aleksandra; Hott, Brittany L.

    2017-01-01

    In higher education, current teaching evaluation models typically involve senior faculty evaluating junior faculty. However, there is evidence that peer-to-peer junior faculty observations and feedback may be just as effective. This descriptive case study utilized an inductive analysis to examine experiences of six special education early career…

  16. Analysis on mechanics response of long-life asphalt pavement at moist hot heavy loading area

    NASA Astrophysics Data System (ADS)

    Xu, Xinquan; Li, Hao; Wu, Chuanhai; Li, Shanqiang

    2018-04-01

    Based on a durability test road of semi-rigid-base asphalt pavement on the Guangdong Yunluo expressway, this study compares the mechanical responses of three structures: a modified semi-rigid base, a roller-compacted concrete (RCC) base, and an inverted semi-rigid base. A four-unit, five-parameter model was used to evaluate the rut depth of each asphalt pavement structure, and a commonly used fatigue life prediction model was used to evaluate the fatigue performance of the three structures. Theoretical calculations and four years of tracking observations of the test road show that the modified semi-rigid base asphalt pavement has the smallest rut depth, the best road performance, and the optimal fatigue performance.

  17. A Field-Based Curriculum Model for Earth Science Teacher-Preparation Programs.

    ERIC Educational Resources Information Center

    Dubois, David D.

    1979-01-01

    This study proposed a model set of cognitive-behavioral objectives for field-based teacher education programs for earth science teachers. It describes field experience integration into teacher education programs. The model is also applicable for evaluation of earth science teacher education programs. (RE)

  18. Evaluation of Low-Voltage Distribution Network Index Based on Improved Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Fan, Hanlu; Gao, Suzhou; Fan, Wenjie; Zhong, Yinfeng; Zhu, Lei

    2018-01-01

    In order to evaluate the development level of the low-voltage distribution network objectively and scientifically, a hierarchical analysis method is utilized to construct an evaluation index model of the low-voltage distribution network. Based on principal component analysis and the characteristic logarithmic distribution of the index data, a logarithmic centralization method is adopted to improve the principal component analysis algorithm. The algorithm can decorrelate and reduce the dimensions of the evaluation model, and the resulting comprehensive score has a better degree of dispersion. Because the comprehensive scores of the distribution districts are concentrated, a clustering method is adopted to analyse them, realizing a stratified evaluation of the districts. An example is given to verify the objectivity and scientificity of the evaluation method.

  19. Anisotropic constitutive modeling for nickel-base single crystal superalloys. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Sheh, Michael Y.

    1988-01-01

    An anisotropic constitutive model was developed, based on crystallographic slip theory, for nickel-base single crystal superalloys. The constitutive equations utilize drag stress and back stress state variables to model local inelastic flow. Specially designed experiments were conducted to evaluate the existence of back stress in the single crystal superalloy Rene N4 at 982 C. The results suggest that: (1) the back stress is orientation dependent; and (2) the back stress state variable is required for the current model to predict material anelastic recovery behavior. The model was evaluated for its capability to predict single crystal material behavior, including orientation-dependent stress-strain response, tension/compression asymmetry, strain rate sensitivity, anelastic recovery behavior, cyclic hardening and softening, stress relaxation, creep, and associated crystal lattice rotation. Limitations and future development needs are discussed.

  20. TEAM-HF Cost-Effectiveness Model: A Web-Based Program Designed to Evaluate the Cost-Effectiveness of Disease Management Programs in Heart Failure

    PubMed Central

    Reed, Shelby D.; Neilson, Matthew P.; Gardner, Matthew; Li, Yanhong; Briggs, Andrew H.; Polsky, Daniel E.; Graham, Felicia L.; Bowers, Margaret T.; Paul, Sara C.; Granger, Bradi B.; Schulman, Kevin A.; Whellan, David J.; Riegel, Barbara; Levy, Wayne C.

    2015-01-01

    Background Heart failure disease management programs can influence medical resource use and quality-adjusted survival. Because projecting long-term costs and survival is challenging, a consistent and valid approach to extrapolating short-term outcomes would be valuable. Methods We developed the Tools for Economic Analysis of Patient Management Interventions in Heart Failure (TEAM-HF) Cost-Effectiveness Model, a Web-based simulation tool designed to integrate data on demographic, clinical, and laboratory characteristics, use of evidence-based medications, and costs to generate predicted outcomes. Survival projections are based on a modified Seattle Heart Failure Model (SHFM). Projections of resource use and quality of life are modeled using relationships with time-varying SHFM scores. The model can be used to evaluate parallel-group and single-cohort designs and hypothetical programs. Simulations consist of 10,000 pairs of virtual cohorts used to generate estimates of resource use, costs, survival, and incremental cost-effectiveness ratios from user inputs. Results The model demonstrated acceptable internal and external validity in replicating resource use, costs, and survival estimates from 3 clinical trials. Simulations to evaluate the cost-effectiveness of heart failure disease management programs across 3 scenarios demonstrate how the model can be used to design a program in which short-term improvements in functioning and use of evidence-based treatments are sufficient to demonstrate good long-term value to the health care system. Conclusion The TEAM-HF Cost-Effectiveness Model provides researchers and providers with a tool for conducting long-term cost-effectiveness analyses of disease management programs in heart failure. PMID:26542504

  1. Data Base Descriptors for Electro-Optical Sensor Simulation. Final Report, May 1977 through June 1978.

    ERIC Educational Resources Information Center

    Zimmerlin, Timothy A.; And Others

    An effort to construct a model of the thermal properties of materials based on theoretical thermo-electromagnetic models, to construct a data base of the dense cultural hospital scene according to Defense Mapping Agency Aerospace Center (DMAAC) specifications, and to design and implement a program to evaluate the tonal model and generate imagery…

  2. The Nature of Study Programmes in Vocational Education: Evaluation of the Model for Comprehensive Competence-Based Vocational Education in the Netherlands

    ERIC Educational Resources Information Center

    Sturing, Lidwien; Biemans, Harm J. A.; Mulder, Martin; de Bruijn, Elly

    2011-01-01

    In a previous series of studies, a model of comprehensive competence-based vocational education (CCBE model) was developed, consisting of eight principles of competence-based vocational education (CBE) that were elaborated for four implementation levels (Wesselink et al. "European journal of vocational training" 40:38-51 2007a). The…

  3. Evaluation of brightness temperature from a forward model of ground-based microwave radiometer

    NASA Astrophysics Data System (ADS)

    Rambabu, S.; Pillai, J. S.; Agarwal, A.; Pandithurai, G.

    2014-06-01

    Ground-based microwave radiometers have received great attention in recent years due to their capability to profile temperature and humidity at high temporal and vertical resolution in the lower troposphere. Retrieving these parameters from measurements of radiometric brightness temperature (T_B) involves an inversion algorithm, which uses background information from a forward model. In the present study, the development and evaluation of this forward model for a ground-based microwave radiometer, being developed by the Society for Applied Microwave Electronics Engineering and Research (SAMEER) of India, is presented. Initially, the absorption coefficients and weighting functions at different frequencies were analysed to select the channels. The range of variation of T_B for these selected channels over the year 2011, for the two stations Mumbai and Delhi, is then discussed. Finally, forward-model-simulated T_B values are compared with radiometer-measured values at Mahabaleshwar (73.66°E, 17.93°N) to evaluate the model. There is good agreement between model simulations and radiometer observations, which suggests that these forward model simulations can be used as background for inversion models retrieving the temperature and humidity profiles.
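    The forward model's core operation can be caricatured as weighting the atmospheric temperature profile by each channel's weighting function. A highly simplified discrete sketch (absorption, surface, and cosmic-background terms omitted; the profile and weights below are hypothetical):

    ```python
    def brightness_temp(layer_temps, weights, dz):
        """Simplified discrete forward model: brightness temperature
        as the weighting-function-weighted average of layer
        temperatures. layer_temps: temperature (K) per layer,
        weights: channel weighting function per layer, dz: layer
        thickness (m)."""
        total = sum(w * dz for w in weights)
        return sum(t * w * dz for t, w in zip(layer_temps, weights)) / total

    # A surface-peaked channel over a cooling-with-height profile:
    temps = [290.0, 280.0, 270.0, 255.0]   # K, bottom-up
    w = [0.50, 0.30, 0.15, 0.05]           # weighting function
    print(brightness_temp(temps, w, dz=1000.0))  # -> 282.25
    ```

    A channel whose weighting function peaks higher in the column would return a colder T_B for the same profile, which is what lets a multi-channel inversion recover the profile shape.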

  4. History, Epidemic Evolution, and Model Burn-In for a Network of Annual Invasion: Soybean Rust.

    PubMed

    Sanatkar, M R; Scoglio, C; Natarajan, B; Isard, S A; Garrett, K A

    2015-07-01

    Ecological history may be an important driver of epidemics and disease emergence. We evaluated the role of history and two related concepts, the evolution of epidemics and the burn-in period required for fitting a model to epidemic observations, for the U.S. soybean rust epidemic (caused by Phakopsora pachyrhizi). This disease allows evaluation of replicate epidemics because the pathogen reinvades the United States each year. We used a new maximum likelihood estimation approach for fitting the network model based on observed U.S. epidemics. We evaluated the model burn-in period by comparing model fit based on each combination of other years of observation. When the miss error rates were weighted by 0.9 and false alarm error rates by 0.1, the mean error rate did decline, for most years, as more years were used to construct models. Models based on observations in years closer in time to the season being estimated gave lower miss error rates for later epidemic years. The weighted mean error rate was lower in backcasting than in forecasting, reflecting how the epidemic had evolved. Ongoing epidemic evolution, and potential model failure, can occur because of changes in climate, host resistance and spatial patterns, or pathogen evolution.

  5. Evaluating hydrological model performance using information theory-based metrics

    USDA-ARS?s Scientific Manuscript database

    Accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to use information theory-based metrics to see whether they can be used as a complementary tool for hydrologic m...

  6. A holistic model for evaluating the impact of individual technology-enhanced learning resources.

    PubMed

    Pickering, James D; Joynes, Viktoria C T

    2016-12-01

    The use of technology within education has now crossed the Rubicon; student expectations, the increasing availability of both hardware and software, and the push to fully blended learning environments mean that educational institutions cannot afford to turn their backs on technology-enhanced learning (TEL). The ability to meaningfully evaluate the impact of TEL resources nevertheless remains problematic. This paper aims to establish a robust means of evaluating individual resources and meaningfully measuring their impact upon learning within the context of the program in which they are used. Based upon the experience of developing and evaluating a range of mobile and desktop-based TEL resources, this paper outlines a new four-stage evaluation process, taking into account learner satisfaction, learner gain, and the impact of a resource on both the individual and the institution in which it has been adopted. A new multi-level model of TEL resource evaluation is proposed, comprising a preliminary evaluation of need, learner satisfaction and gain, learner impact, and institutional impact. Each of these levels is discussed in detail and in relation to existing TEL evaluation frameworks. This paper details a holistic, meaningful evaluation model for individual TEL resources within the specific context in which they are used. It is proposed that this model be adopted to ensure that TEL resources are evaluated in a more meaningful and robust manner than is currently the case.

  7. Evaluation of Savannah River Plant emergency response models using standard and nonstandard meteorological data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoel, D.D.

    1984-01-01

    Two computer codes have been developed for operational use in performing real time evaluations of atmospheric releases from the Savannah River Plant (SRP) in South Carolina. These codes, based on mathematical models, are part of the SRP WIND (Weather Information and Display) automated emergency response system. The accuracy of ground level concentrations from a Gaussian puff-plume model and a two-dimensional sequential puff model is being evaluated with data from a series of short range diffusion experiments using sulfur hexafluoride as a tracer. The models use meteorological data collected from 7 towers on SRP and at the 300 m WJBF-TV tower about 15 km northwest of SRP. The winds and the stability, which is based on turbulence measurements, are measured at the 60 m stack heights. These results are compared to downwind concentrations using only standard meteorological data, i.e., adjusted 10 m winds and stability determined by the Pasquill-Turner stability classification method. Scattergrams and simple statistics were used for model evaluations. Results indicate predictions within accepted limits for the puff-plume code and a bias in the sequential puff model predictions using the meteorologist-adjusted nonstandard data. 5 references, 4 figures, 2 tables.
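    For orientation, the textbook Gaussian plume expression for ground-level concentration from an elevated release (with full ground reflection) captures the kind of calculation a puff-plume code performs. This is the standard formula, not the SRP WIND implementation, and the numbers below are hypothetical:

    ```python
    import math

    def plume_ground_conc(q, u, sigma_y, sigma_z, y, h):
        """Textbook Gaussian plume ground-level concentration with
        total ground reflection: C = q/(pi*u*sy*sz) * exp(-y^2/2sy^2)
        * exp(-h^2/2sz^2). q: emission rate (g/s), u: wind speed at
        stack height (m/s), sigma_y/sigma_z: dispersion parameters
        (m) at the receptor's downwind distance, y: crosswind offset
        (m), h: effective stack height (m). Returns g/m^3."""
        return (q / (math.pi * u * sigma_y * sigma_z)
                * math.exp(-y**2 / (2 * sigma_y**2))
                * math.exp(-h**2 / (2 * sigma_z**2)))

    # Hypothetical release: 10 g/s, 4 m/s wind, 60 m stack, centerline:
    print(plume_ground_conc(10, 4.0, 80.0, 40.0, 0.0, 60.0))
    ```

    The stability class (e.g. from the Pasquill-Turner scheme mentioned above) enters through the choice of sigma_y and sigma_z at each downwind distance.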

  8. Description and initial evaluation of an educational and psychosocial support model for adults with congenitally malformed hearts.

    PubMed

    Rönning, Helén; Nielsen, Niels Erik; Swahn, Eva; Strömberg, Anna

    2011-05-01

    Various programmes for adults with congenitally malformed hearts have been developed, but detailed descriptions of content, rationale and goals are often missing. The aim of this study was to describe and make an initial evaluation of a follow-up model for adults with congenitally malformed hearts, focusing on education and psychosocial support by a multidisciplinary team (EPS). The model is described in steps and evaluated with regard to perceptions of knowledge, anxiety and satisfaction. The EPS model included a polyclinic visit to the physician/nurse (medical consultation, computer-based and individual face-to-face education, as well as psychosocial support) and a 1-month telephone follow-up. Fifty-five adults (mean age 34, 29 women) with the nine most common forms of congenitally malformed hearts participated in the EPS model as well as the 3-month follow-up. Knowledge about their congenital heart malformation had increased in 40% of the participants at the 3-month follow-up. This study describes and evaluates a model that combines a multidisciplinary approach and computer-based education for follow-up of adults with congenitally malformed hearts. The EPS model was found to increase self-estimated knowledge, but further evaluations are needed to confirm patient-centred outcomes over time. The model is now ready to be implemented for adults with congenitally malformed hearts. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  9. Valence and arousal-based affective evaluations of foods.

    PubMed

    Woodward, Halley E; Treat, Teresa A; Cameron, C Daryl; Yegorova, Vitaliya

    2017-01-01

    We investigated the nutrient-specific and individual-specific validity of dual-process models of valenced and arousal-based affective evaluations of foods across the disordered eating spectrum. 283 undergraduate women provided implicit and explicit valence and arousal-based evaluations of 120 food photos with known nutritional information on structurally similar indirect and direct affect misattribution procedures (AMP; Payne et al., 2005, 2008), and completed questionnaires assessing body mass index (BMI), hunger, restriction, and binge eating. Nomothetically, added fat and added sugar enhance evaluations of foods. Idiographically, hunger and binge eating enhance activation, whereas BMI and restriction enhance pleasantness. Added fat is salient for women who are heavier, hungrier, or who restrict; added sugar is influential for less hungry women. Restriction relates only to valence, whereas binge eating relates only to arousal. Findings are similar across implicit and explicit affective evaluations, albeit stronger for explicit, providing modest support for dual-process models of affective evaluation of foods. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. On modelling the pressure-strain correlations in wall bounded flows

    NASA Technical Reports Server (NTRS)

    Peltier, L. J.; Biringen, S.

    1990-01-01

    Turbulence models for the pressure-strain term of the Reynolds-stress equations in the vicinity of a moving wall are evaluated for a high Reynolds number flow using decaying grid turbulence as a model problem. The data of Thomas and Hancock are used as a base for evaluating the different turbulence models. In particular, the Rotta model for return-to-isotropy is evaluated both through its inclusion in the Reynolds-stress equation model and in comparison to a nonlinear model advanced by Sarkar and Speziale. Further, models for the wall correction to the transfer term advanced by Launder et al., Shir, and Shih and Lumley are compared. Initial results using the decaying grid turbulence experiment as a base suggest that the coefficients proposed for these models are high, perhaps by as much as an order of magnitude. The Shih and Lumley model, which satisfies realizability constraints, in particular seems to hold promise in adequately modeling the Reynolds stress components of this flow. Extensions of this work include testing the homogeneous transfer model of Shih and Lumley and testing the wall transfer models, using both their proposed coefficients and the coefficients chosen in this work, in a flow with a mean shear component.

  11. The creation and evaluation of a model predicting the probability of conception in seasonal-calving, pasture-based dairy cows.

    PubMed

    Fenlon, Caroline; O'Grady, Luke; Doherty, Michael L; Dunnion, John; Shalloo, Laurence; Butler, Stephen T

    2017-07-01

    Reproductive performance in pasture-based production systems has a fundamentally important effect on economic efficiency. The individual factors affecting the probability of submission and conception are multifaceted and have been extensively researched. The present study analyzed some of these factors in relation to service-level probability of conception in seasonal-calving pasture-based dairy cows to develop a predictive model of conception. Data relating to 2,966 services from 737 cows on 2 research farms were used for model development, and data from 9 commercial dairy farms, comprising 4,212 services from 1,471 cows, were used for model testing. The data spanned a 15-yr period and originated from seasonal-calving pasture-based dairy herds in Ireland. The calving season for the study herds extended from January to June, with peak calving in February and March. A base mixed-effects logistic regression model was created using a stepwise model-building strategy and incorporated parity, days in milk, interservice interval, calving difficulty, and predicted transmitting abilities for calving interval and milk production traits. To attempt to further improve the predictive capability of the model, the addition of effects that were not statistically significant was considered, resulting in a final model composed of the base model with the inclusion of body condition score at service. The models' predictions were evaluated using discrimination to measure their ability to correctly classify positive and negative cases; precision, recall, F-score, and area under the receiver operating characteristic curve (AUC) were calculated. Calibration tests measured the accuracy of the predicted probabilities, including tests of overall goodness-of-fit, bias, and calibration error. Both models performed better than using the population average probability of conception. Neither model showed a high level of discrimination (base model AUC 0.61, final model AUC 0.62), possibly because of the narrow central range of conception rates in the study herds. The final model was found to reliably predict the probability of conception without bias when evaluated against the full external data set, with a mean absolute calibration error of 2.4%. The chosen model could be used to support a farmer's decision-making and in stochastic simulation of fertility in seasonal-calving pasture-based dairy cows. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
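    The discrimination and calibration statistics reported above can be computed directly. A sketch with hypothetical service records: the rank-based AUC below is the standard definition, and the binned calibration error is one assumed form of the paper's statistic:

    ```python
    def auc(probs, outcomes):
        """Rank-based AUC: probability that a random positive case is
        ranked above a random negative case (ties count half)."""
        pos = [p for p, y in zip(probs, outcomes) if y == 1]
        neg = [p for p, y in zip(probs, outcomes) if y == 0]
        wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
        return wins / (len(pos) * len(neg))

    def mean_abs_calibration_error(probs, outcomes, n_bins=10):
        """Mean absolute gap between predicted and observed event
        rates across probability bins (one assumed form of the
        calibration-error statistic)."""
        bins = [[] for _ in range(n_bins)]
        for p, y in zip(probs, outcomes):
            bins[min(int(p * n_bins), n_bins - 1)].append((p, y))
        gaps = [abs(sum(p for p, _ in b) / len(b) - sum(y for _, y in b) / len(b))
                for b in bins if b]
        return sum(gaps) / len(gaps)

    # Hypothetical conception predictions for 8 services:
    probs = [0.2, 0.3, 0.35, 0.4, 0.45, 0.5, 0.55, 0.6]
    outcomes = [0, 0, 1, 0, 1, 0, 1, 1]
    print(auc(probs, outcomes))  # -> 0.8125
    print(round(mean_abs_calibration_error(probs, outcomes), 3))
    ```

    An AUC near 0.6, as reported, means the model ranks a conceiving service above a non-conceiving one only slightly more often than chance, even while its predicted probabilities are well calibrated on average.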

  12. Model-Based and Model-Free Pavlovian Reward Learning: Revaluation, Revision and Revelation

    PubMed Central

    Dayan, Peter; Berridge, Kent C.

    2014-01-01

    Evidence supports at least two methods for learning about reward and punishment and making predictions for guiding actions. One method, called model-free, progressively acquires cached estimates of the long-run values of circumstances and actions from retrospective experience. The other method, called model-based, uses representations of the environment, expectations and prospective calculations to make cognitive predictions of future value. Extensive attention has been paid to both methods in computational analyses of instrumental learning. By contrast, although a full computational analysis has been lacking, Pavlovian learning and prediction has typically been presumed to be solely model-free. Here, we revise that presumption and review compelling evidence from Pavlovian revaluation experiments showing that Pavlovian predictions can involve their own form of model-based evaluation. In model-based Pavlovian evaluation, prevailing states of the body and brain influence value computations, and thereby produce powerful incentive motivations that can sometimes be quite new. We consider the consequences of this revised Pavlovian view for the computational landscape of prediction, response and choice. We also revisit differences between Pavlovian and instrumental learning in the control of incentive motivation. PMID:24647659

  13. Model-based and model-free Pavlovian reward learning: revaluation, revision, and revelation.

    PubMed

    Dayan, Peter; Berridge, Kent C

    2014-06-01

    Evidence supports at least two methods for learning about reward and punishment and making predictions for guiding actions. One method, called model-free, progressively acquires cached estimates of the long-run values of circumstances and actions from retrospective experience. The other method, called model-based, uses representations of the environment, expectations, and prospective calculations to make cognitive predictions of future value. Extensive attention has been paid to both methods in computational analyses of instrumental learning. By contrast, although a full computational analysis has been lacking, Pavlovian learning and prediction has typically been presumed to be solely model-free. Here, we revise that presumption and review compelling evidence from Pavlovian revaluation experiments showing that Pavlovian predictions can involve their own form of model-based evaluation. In model-based Pavlovian evaluation, prevailing states of the body and brain influence value computations, and thereby produce powerful incentive motivations that can sometimes be quite new. We consider the consequences of this revised Pavlovian view for the computational landscape of prediction, response, and choice. We also revisit differences between Pavlovian and instrumental learning in the control of incentive motivation.

  14. Development of a Clinical Forecasting Model to Predict Comorbid Depression Among Diabetes Patients and an Application in Depression Screening Policy Making.

    PubMed

    Jin, Haomiao; Wu, Shinyi; Di Capua, Paul

    2015-09-03

    Depression is a common but often undiagnosed comorbid condition of people with diabetes. Mass screening can detect undiagnosed depression but may require significant resources and time. The objectives of this study were 1) to develop a clinical forecasting model that predicts comorbid depression among patients with diabetes and 2) to evaluate a model-based screening policy that saves resources and time by screening only patients considered depressed by the clinical forecasting model. We trained and validated 4 machine learning models using data from 2 safety-net clinical trials and chose the one with the best overall predictive ability as the final model. We compared the model-based policy with alternative policies, including mass screening and partial screening based on depression history or diabetes severity. Logistic regression had the best overall predictive ability of the 4 models evaluated and was chosen as the final forecasting model. Compared with mass screening, the model-based policy can save approximately 50% to 60% of provider resources and time but will miss identifying about 30% of patients with depression. A partial-screening policy based on depression history alone identified only a low rate of depression. Two other heuristic-based partial screening policies identified depression at rates similar to those of the model-based policy but cost more in resources and time. The depression prediction model developed in this study has compelling predictive ability. By adopting the model-based depression screening policy, health care providers can use their resources and time better and increase their efficiency in managing their patients with depression.

  15. The Air Quality Model Evaluation International Initiative ...

    EPA Pesticide Factsheets

    This presentation provides an overview of the Air Quality Model Evaluation International Initiative (AQMEII). It contains a synopsis of the three phases of AQMEII, including objectives, logistics, and timelines. It also provides a number of examples of analyses conducted through AQMEII with a particular focus on past and future analyses of deposition. The National Exposure Research Laboratory (NERL) Computational Exposure Division (CED) develops and evaluates data, decision-support tools, and models to be applied to media-specific or receptor-specific problem areas. CED uses modeling-based approaches to characterize exposures, evaluate fate and transport, and support environmental diagnostics/forensics with input from multiple data sources. It also develops media- and receptor-specific models, process models, and decision support tools for use both within and outside of EPA.

  16. [Application of entropy-weight TOPSIS model in synthetical quality evaluation of Angelica sinensis growing in Gansu Province].

    PubMed

    Gu, Zhi-rong; Wang, Ya-li; Sun, Yu-jing; Ding, Jun-xia

    2014-09-01

    To investigate the establishment and application of an entropy-weight TOPSIS model for the synthetical quality evaluation of traditional Chinese medicine, with Angelica sinensis growing in Gansu Province as an example. The contents of ferulic acid, 3-butylphthalide, Z-butylidenephthalide, Z-ligustilide, linoleic acid, volatile oil, and ethanol-soluble extractive were used as the evaluation index set. The weight of each evaluation index was determined by the information entropy method. The entropy-weight TOPSIS model was established to synthetically evaluate the quality of Angelica sinensis growing in Gansu Province by Euclidean closeness degree. The results based on the established model were in line with the daodi meaning and the knowledge of clinical experience. The established model is simple in calculation, objective, and reliable, and can be applied to the synthetical quality evaluation of traditional Chinese medicine.
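    The two building blocks, entropy weighting and TOPSIS closeness, are simple enough to sketch. The index values below are hypothetical, all indices are treated as benefit-type, and this is an illustration of the general technique, not the paper's computation:

    ```python
    import math

    def entropy_weights(matrix):
        """Objective index weights from information entropy: indices
        whose values vary more across samples carry more information
        and receive larger weights."""
        m, n = len(matrix), len(matrix[0])
        weights = []
        for j in range(n):
            col = [row[j] for row in matrix]
            total = sum(col)
            p = [x / total for x in col]
            e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m)
            weights.append(1 - e)
        s = sum(weights)
        return [w / s for w in weights]

    def topsis_closeness(matrix, weights):
        """Euclidean closeness of each sample to the ideal solution
        (all indices assumed benefit-type)."""
        n = len(matrix[0])
        norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n)]
        v = [[weights[j] * row[j] / norms[j] for j in range(n)] for row in matrix]
        best = [max(col) for col in zip(*v)]
        worst = [min(col) for col in zip(*v)]
        scores = []
        for row in v:
            d_best = math.sqrt(sum((a - b) ** 2 for a, b in zip(row, best)))
            d_worst = math.sqrt(sum((a - b) ** 2 for a, b in zip(row, worst)))
            scores.append(d_worst / (d_best + d_worst))
        return scores

    # Hypothetical samples x indices (e.g. ferulic acid %, volatile
    # oil %, ethanol-soluble extractive %):
    data = [[0.08, 0.45, 48.0],
            [0.11, 0.52, 51.0],
            [0.06, 0.40, 46.0]]
    w = entropy_weights(data)
    print([round(s, 3) for s in topsis_closeness(data, w)])
    ```

    A sample that is best on every index gets closeness 1 and one that is worst on every index gets 0, so ranking by closeness orders samples by overall quality.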

  17. Performance evaluation of an agent-based occupancy simulation model

    DOE PAGES

    Luo, Xuan; Lam, Khee Poh; Chen, Yixing; ...

    2017-01-17

    Occupancy is an important factor driving building performance. Static and homogeneous occupant schedules, commonly used in building performance simulation, contribute to issues such as performance gaps between simulated and measured energy use in buildings. Stochastic occupancy models have been recently developed and applied to better represent spatial and temporal diversity of occupants in buildings. However, there is very limited evaluation of the usability and accuracy of these models. This study used measured occupancy data from a real office building to evaluate the performance of an agent-based occupancy simulation model: the Occupancy Simulator. The occupancy patterns of various occupant types were first derived from the measured occupant schedule data using statistical analysis. Then the performance of the simulation model was evaluated and verified based on (1) whether the distribution of observed occupancy behavior patterns follows the theoretical ones included in the Occupancy Simulator, and (2) whether the simulator can reproduce a variety of occupancy patterns accurately. Results demonstrated the feasibility of applying the Occupancy Simulator to simulate a range of occupancy presence and movement behaviors for regular types of occupants in office buildings, and to generate stochastic occupant schedules at the room and individual occupant levels for building performance simulation. For future work, model validation is recommended, which includes collecting and using detailed interval occupancy data of all spaces in an office building to validate the simulated occupant schedules from the Occupancy Simulator.
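
    Stochastic occupancy models of this kind are often built on Markov chains. A minimal sketch, assuming illustrative transition probabilities rather than the Occupancy Simulator's calibrated parameters:

```python
import random

# Two-state (0 = absent, 1 = present) Markov chain stepped in
# 15-minute intervals over one day. The switch probabilities are
# illustrative assumptions only.
random.seed(7)
P = {0: {1: 0.10}, 1: {0: 0.05}}   # P[state][other] = switch probability

def simulate_day(steps=96, state=0):
    schedule = []
    for _ in range(steps):
        if random.random() < P[state][1 - state]:
            state = 1 - state
        schedule.append(state)
    return schedule

day = simulate_day()
print(sum(day) / len(day))          # fraction of intervals occupied
```

    Repeated runs yield different, yet statistically consistent, schedules, which is exactly the diversity that static schedules lack.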

  19. Comparative Logic Modeling for Policy Analysis: The Case of HIV Testing Policy Change at the Department of Veterans Affairs

    PubMed Central

    Langer, Erika M; Gifford, Allen L; Chan, Kee

    2011-01-01

    Objective Logic models have been used to evaluate policy programs, plan projects, and allocate resources. Logic Modeling for policy analysis has been used rarely in health services research but can be helpful in evaluating the content and rationale of health policies. Comparative Logic Modeling is used here on human immunodeficiency virus (HIV) policy statements from the Department of Veterans Affairs (VA) and Centers for Disease Control and Prevention (CDC). We created visual representations of proposed HIV screening policy components in order to evaluate their structural logic and research-based justifications. Data Sources and Study Design We performed content analysis of VA and CDC HIV testing policy documents in a retrospective case study. Data Collection Using comparative Logic Modeling, we examined the content and primary sources of policy statements by the VA and CDC. We then quantified evidence-based causal inferences within each statement. Principal Findings VA HIV testing policy structure largely replicated that of the CDC guidelines. Despite similar design choices, chosen research citations did not overlap. The agencies used evidence to emphasize different components of the policies. Conclusion Comparative Logic Modeling can be used by health services researchers and policy analysts more generally to evaluate structural differences in health policies and to analyze research-based rationales used by policy makers. PMID:21689094

  20. A Tentative Study on the Evaluation of Community Health Service Quality*

    NASA Astrophysics Data System (ADS)

    Ma, Zhi-qiang; Zhu, Yong-yue

    Community health service is a key point of health reform in China. Drawing on pertinent studies, this paper constructed an indicator system for community health service quality evaluation from five perspectives: tangible image, reliability, responsiveness, assurance, and empathy, according to the service quality evaluation scale designed by Parasuraman, Zeithaml, and Berry. A multilevel fuzzy comprehensive evaluation model based on fuzzy mathematics was constructed to evaluate community health service. The applicability and operability of the evaluation indicator system and evaluation model were verified by empirical analysis.
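
    The aggregation step of such a fuzzy comprehensive evaluation can be sketched as follows; the membership matrix and weights are hypothetical survey-derived values, not the paper's data.

```python
import numpy as np

# Hypothetical membership matrix R: each row gives the degree to which
# one first-level indicator (e.g. reliability) belongs to the grades
# (excellent, good, fair, poor), estimated e.g. from survey ratings.
R = np.array([[0.3, 0.4, 0.2, 0.1],
              [0.2, 0.5, 0.2, 0.1],
              [0.4, 0.3, 0.2, 0.1],
              [0.1, 0.4, 0.3, 0.2],
              [0.3, 0.3, 0.3, 0.1]])
w = np.array([0.25, 0.20, 0.20, 0.15, 0.20])  # indicator weights, sum to 1

# Weighted-average fuzzy operator M(*, +): B = w . R, then normalize.
B = w @ R
B = B / B.sum()
grades = ["excellent", "good", "fair", "poor"]
print(grades[int(np.argmax(B))])  # prints "good" for these numbers
```

    In a multilevel model, the result vector B of each first-level group becomes one row of the membership matrix at the next level up.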

  1. Fuzzy Evaluating Customer Satisfaction of Jet Fuel Companies

    NASA Astrophysics Data System (ADS)

    Cheng, Haiying; Fang, Guoyi

    Based on the market characteristics of jet fuel companies, the paper proposes an evaluation index system for jet fuel company customer satisfaction with five dimensions: time, business, security, fee, and service. A multi-level fuzzy evaluation model combining the analytic hierarchy process (AHP) with fuzzy evaluation is then given. Finally, a customer satisfaction evaluation of one jet fuel company is studied as a case; the evaluation results reflect the perceptions of the company's customers, showing that the fuzzy evaluation model is effective and efficient.
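
    The AHP half of such a combined model derives the dimension weights from a pairwise comparison matrix. A sketch using the geometric-mean approximation of the principal eigenvector, with a hypothetical comparison matrix for the five dimensions:

```python
import numpy as np

# Hypothetical pairwise comparisons of the five dimensions
# (time, business, security, fee, service) on Saaty's 1-9 scale.
A = np.array([[1,   2,   1/2, 3,   2],
              [1/2, 1,   1/3, 2,   1],
              [2,   3,   1,   4,   3],
              [1/3, 1/2, 1/4, 1,   1/2],
              [1/2, 1,   1/3, 2,   1]])

# Geometric-mean approximation of the principal eigenvector.
g = A.prod(axis=1) ** (1.0 / A.shape[0])
w = g / g.sum()

# Consistency check: CR = CI / RI, conventionally acceptable below 0.1.
lam = (A @ w / w).mean()                   # estimate of lambda_max
ci = (lam - A.shape[0]) / (A.shape[0] - 1)
ri = 1.12                                  # random index for n = 5
print(w.round(3), ci / ri < 0.1)
```

    The resulting weights then multiply the fuzzy membership matrix exactly as in a single-level fuzzy evaluation.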

  2. Distributive Education Competency-Based Curriculum Models by Occupational Clusters. Final Report.

    ERIC Educational Resources Information Center

    Davis, Rodney E.; Husted, Stewart W.

    To meet the needs of distributive education teachers and students, a project was initiated to develop competency-based curriculum models for marketing and distributive education clusters. The models which were developed incorporate competencies, materials and resources, teaching methodologies/learning activities, and evaluative criteria for the…

  3. DEVELOPMENT OF A DIETARY EXPOSURE POTENTIAL MODEL FOR EVALUATING DIETARY EXPOSURE TO CHEMICAL RESIDUES IN FOOD

    EPA Science Inventory

    The Dietary Exposure Potential Model (DEPM) is a computer-based model developed for estimating dietary exposure to chemical residues in food. The DEPM is based on food consumption data from the 1987-1988 Nationwide Food Consumption Survey (NFCS) administered by the United States ...

  4. Creating Needs-Based Tiered Models for Assisted Living Reimbursement

    ERIC Educational Resources Information Center

    Howell-White, Sandra; Gaboda, Dorothy; Rosato, Nancy Scotto; Lucas, Judith A.

    2006-01-01

    Purpose: This research provides state policy makers and others interested in developing needs-based reimbursement models for Medicaid-funded assisted living with an evaluation of different methodologies that affect the structure and outcomes of these models. Design and Methods: We used assessment data from Medicaid-enrolled assisted living…

  5. Predicting Plywood Properties with Wood-based Composite Models

    Treesearch

    Christopher Adam Senalik; Robert J. Ross

    2015-01-01

    Previous research revealed that stress wave nondestructive testing techniques could be used to evaluate the tensile and flexural properties of wood-based composite materials. Regression models were developed that related stress wave transmission characteristics (velocity and attenuation) to modulus of elasticity and strength. The developed regression models accounted...

  6. A Memory-Based Model of Hick's Law

    ERIC Educational Resources Information Center

    Schneider, Darryl W.; Anderson, John R.

    2011-01-01

    We propose and evaluate a memory-based model of Hick's law, the approximately linear increase in choice reaction time with the logarithm of set size (the number of stimulus-response alternatives). According to the model, Hick's law reflects a combination of associative interference during retrieval from declarative memory and occasional savings…

  7. Development and evaluation of a physics-based windblown dust emission scheme implemented in the CMAQ modeling system

    EPA Science Inventory

    A new windblown dust emission treatment was incorporated in the Community Multiscale Air Quality (CMAQ) modeling system. This new model treatment has been built upon previously developed physics-based parameterization schemes from the literature. A distinct and novel feature of t...

  8. Multi-criteria comparative evaluation of spallation reaction models

    NASA Astrophysics Data System (ADS)

    Andrianov, Andrey; Andrianova, Olga; Konobeev, Alexandr; Korovin, Yury; Kuptsov, Ilya

    2017-09-01

    This paper presents an approach to a comparative evaluation of the predictive ability of spallation reaction models based on widely used, well-proven multiple-criteria decision analysis methods (MAVT/MAUT, AHP, TOPSIS, PROMETHEE) and the results of such a comparison for 17 spallation reaction models in the presence of the interaction of high-energy protons with natPb.

  9. Evaluation of average daily gain predictions by the integrated farm system model for forage-finished beef steers

    USDA-ARS?s Scientific Manuscript database

    Representing the performance of cattle finished on an all forage diet in process-based whole farm system models has presented a challenge. To address this challenge, a study was done to evaluate average daily gain (ADG) predictions of the Integrated Farm System Model (IFSM) for steers consuming all-...

  10. Video Modeling and Observational Learning to Teach Gaming Access to Students with ASD

    ERIC Educational Resources Information Center

    Spriggs, Amy D.; Gast, David L.; Knight, Victoria F.

    2016-01-01

    The purpose of this study was to evaluate both video modeling and observational learning to teach age-appropriate recreation and leisure skills (i.e., accessing video games) to students with autism spectrum disorder. Effects of video modeling were evaluated via a multiple probe design across participants and criteria for mastery were based on…

  11. Predicting Internalizing and Externalizing Symptoms in Children with ASD: Evaluation of a Contextual Model of Parental Factors

    ERIC Educational Resources Information Center

    McRae, Elizabeth M.; Stoppelbein, Laura; O'Kelley, Sarah E.; Fite, Paula; Greening, Leilani

    2018-01-01

    Parental adjustment, parenting behaviors, and child routines have been linked to internalizing and externalizing child behavior. The purpose of the present study was to evaluate a comprehensive model examining relations among these variables in children with ASD and their parents. Based on Sameroff's Transactional Model of Development (Sameroff…

  12. EVALUATING REGIONAL PREDICTIVE CAPACITY OF A PROCESS-BASED MERCURY EXPOSURE MODEL, REGIONAL-MERCURY CYCLING MODEL (R-MCM), APPLIED TO 91 VERMONT AND NEW HAMPSHIRE LAKES AND PONDS, USA

    EPA Science Inventory

    Regulatory agencies must develop fish consumption advisories for many lakes and rivers with limited resources. Process-based mathematical models are potentially valuable tools for developing regional fish advisories. The Regional Mercury Cycling model (R-MCM) was specifically d...

  13. A component-based, integrated spatially distributed hydrologic/water quality model: AgroEcoSystem-Watershed (AgES-W) overview and application

    USDA-ARS?s Scientific Manuscript database

    AgroEcoSystem-Watershed (AgES-W) is a modular, Java-based spatially distributed model which implements hydrologic/water quality simulation components. The AgES-W model was previously evaluated for streamflow and recently has been enhanced with the addition of nitrogen (N) and sediment modeling compo...

  14. Automated time activity classification based on global positioning system (GPS) tracking data

    PubMed Central

    2011-01-01

    Background Air pollution epidemiological studies are increasingly using global positioning system (GPS) to collect time-location data because they offer continuous tracking, high temporal resolution, and minimum reporting burden for participants. However, substantial uncertainties in the processing and classifying of raw GPS data create challenges for reliably characterizing time activity patterns. We developed and evaluated models to classify people's major time activity patterns from continuous GPS tracking data. Methods We developed and evaluated two automated models to classify major time activity patterns (i.e., indoor, outdoor static, outdoor walking, and in-vehicle travel) based on GPS time activity data collected under free living conditions for 47 participants (N = 131 person-days) from the Harbor Communities Time Location Study (HCTLS) in 2008 and supplemental GPS data collected from three UC-Irvine research staff (N = 21 person-days) in 2010. Time activity patterns used for model development were manually classified by research staff using information from participant GPS recordings, activity logs, and follow-up interviews. We evaluated two models: (a) a rule-based model that developed user-defined rules based on time, speed, and spatial location, and (b) a random forest decision tree model. Results Indoor, outdoor static, outdoor walking and in-vehicle travel activities accounted for 82.7%, 6.1%, 3.2% and 7.2% of manually-classified time activities in the HCTLS dataset, respectively. The rule-based model classified indoor and in-vehicle travel periods reasonably well (Indoor: sensitivity > 91%, specificity > 80%, and precision > 96%; in-vehicle travel: sensitivity > 71%, specificity > 99%, and precision > 88%), but the performance was moderate for outdoor static and outdoor walking predictions. No striking differences in performance were observed between the rule-based and the random forest models. 
The random forest model was fast and easy to execute, but was likely less robust than the rule-based model under the condition of biased or poor quality training data. Conclusions Our models can successfully identify indoor and in-vehicle travel points from the raw GPS data, but challenges remain in developing models to distinguish outdoor static points and walking. Accurate training data are essential in developing reliable models in classifying time-activity patterns. PMID:22082316
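
    The rule-based model (a) can be illustrated with a toy classifier; the speed and satellite-count cutoffs below are assumptions for illustration, not the rules developed in the study.

```python
# Thresholds on speed and fix quality decide the activity class.
# All cutoffs are hypothetical.
def classify_point(speed_kmh, satellites_used):
    if satellites_used < 4:          # weak fix often means indoors
        return "indoor"
    if speed_kmh > 15:               # sustained high speed: vehicle
        return "in-vehicle"
    if speed_kmh > 2:                # typical walking pace
        return "outdoor walking"
    return "outdoor static"

track = [(0.5, 3), (1.0, 8), (4.5, 9), (40.0, 10)]
print([classify_point(s, n) for s, n in track])
# -> ['indoor', 'outdoor static', 'outdoor walking', 'in-vehicle']
```

    A production rule set would also smooth over short GPS dropouts and use spatial context (e.g. proximity to the home address), which single-point rules like these cannot capture.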

  15. Automated time activity classification based on global positioning system (GPS) tracking data.

    PubMed

    Wu, Jun; Jiang, Chengsheng; Houston, Douglas; Baker, Dean; Delfino, Ralph

    2011-11-14

    Air pollution epidemiological studies are increasingly using global positioning system (GPS) to collect time-location data because they offer continuous tracking, high temporal resolution, and minimum reporting burden for participants. However, substantial uncertainties in the processing and classifying of raw GPS data create challenges for reliably characterizing time activity patterns. We developed and evaluated models to classify people's major time activity patterns from continuous GPS tracking data. We developed and evaluated two automated models to classify major time activity patterns (i.e., indoor, outdoor static, outdoor walking, and in-vehicle travel) based on GPS time activity data collected under free living conditions for 47 participants (N = 131 person-days) from the Harbor Communities Time Location Study (HCTLS) in 2008 and supplemental GPS data collected from three UC-Irvine research staff (N = 21 person-days) in 2010. Time activity patterns used for model development were manually classified by research staff using information from participant GPS recordings, activity logs, and follow-up interviews. We evaluated two models: (a) a rule-based model that developed user-defined rules based on time, speed, and spatial location, and (b) a random forest decision tree model. Indoor, outdoor static, outdoor walking and in-vehicle travel activities accounted for 82.7%, 6.1%, 3.2% and 7.2% of manually-classified time activities in the HCTLS dataset, respectively. The rule-based model classified indoor and in-vehicle travel periods reasonably well (Indoor: sensitivity > 91%, specificity > 80%, and precision > 96%; in-vehicle travel: sensitivity > 71%, specificity > 99%, and precision > 88%), but the performance was moderate for outdoor static and outdoor walking predictions. No striking differences in performance were observed between the rule-based and the random forest models. 
The random forest model was fast and easy to execute, but was likely less robust than the rule-based model under the condition of biased or poor quality training data. Our models can successfully identify indoor and in-vehicle travel points from the raw GPS data, but challenges remain in developing models to distinguish outdoor static points and walking. Accurate training data are essential in developing reliable models in classifying time-activity patterns.

  16. Likelihood-Based Random-Effect Meta-Analysis of Binary Events.

    PubMed

    Amatya, Anup; Bhaumik, Dulal K; Normand, Sharon-Lise; Greenhouse, Joel; Kaizar, Eloise; Neelon, Brian; Gibbons, Robert D

    2015-01-01

    Meta-analysis has been used extensively for evaluation of efficacy and safety of medical interventions. Its advantages and utilities are well known. However, recent studies have raised questions about the accuracy of the commonly used moment-based meta-analytic methods in general and for rare binary outcomes in particular. The issue is further complicated for studies with heterogeneous effect sizes. Likelihood-based mixed-effects modeling provides an alternative to moment-based methods such as inverse-variance weighted fixed- and random-effects estimators. In this article, we compare and contrast different mixed-effect modeling strategies in the context of meta-analysis. Their performance in estimation and testing of overall effect and heterogeneity are evaluated when combining results from studies with a binary outcome. Models that allow heterogeneity in both baseline rate and treatment effect across studies have low type I and type II error rates, and their estimates are the least biased among the models considered.
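
    The moment-based comparator the article questions, inverse-variance random-effects pooling with the DerSimonian-Laird heterogeneity estimate, can be sketched as follows. The 2x2 counts are hypothetical, and a 0.5 continuity correction is applied for zero cells.

```python
import math

# Each study: (events, total) in treatment arm, then control arm.
studies = [(2, 50, 5, 50), (1, 40, 4, 40), (3, 60, 7, 60)]

y, v = [], []
for a, n1, c, n2 in studies:
    b, d = n1 - a, n2 - c
    a, b, c, d = (x + 0.5 for x in (a, b, c, d))   # continuity correction
    y.append(math.log(a * d / (b * c)))            # log odds ratio
    v.append(1/a + 1/b + 1/c + 1/d)                # its large-sample variance

w = [1/vi for vi in v]                             # fixed-effect weights
ybar = sum(wi*yi for wi, yi in zip(w, y)) / sum(w)
Q = sum(wi*(yi - ybar)**2 for wi, yi in zip(w, y)) # Cochran's Q
k = len(y)
tau2 = max(0.0, (Q - (k-1)) / (sum(w) - sum(wi**2 for wi in w)/sum(w)))

wr = [1/(vi + tau2) for vi in v]                   # random-effects weights
pooled = sum(wi*yi for wi, yi in zip(wr, y)) / sum(wr)
se = math.sqrt(1/sum(wr))
print(f"pooled log OR = {pooled:.3f} +/- {1.96*se:.3f}")
```

    With rare events like these, the normal approximation behind each per-study variance is exactly what the likelihood-based mixed-effects models avoid by modeling the binomial counts directly.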

  17. Simulink-Based Simulation Architecture for Evaluating Controls for Aerospace Vehicles (SAREC-ASV)

    NASA Technical Reports Server (NTRS)

    Christhilf, David M.; Bacon, Barton J.

    2006-01-01

    The Simulation Architecture for Evaluating Controls for Aerospace Vehicles (SAREC-ASV) is a Simulink-based approach to providing an engineering quality desktop simulation capability for finding trim solutions, extracting linear models for vehicle analysis and control law development, and generating open-loop and closed-loop time history responses for control system evaluation. It represents a useful level of maturity rather than a finished product. The layout is hierarchical and supports concurrent component development and validation, with support from the Concurrent Versions System (CVS) software management tool. Real Time Workshop (RTW) is used to generate pre-compiled code for substantial component modules, and templates permit switching seamlessly between original Simulink and code compiled for various platforms. Two previous limitations are addressed. Turn around time for incorporating tabular model components was improved through auto-generation of required Simulink diagrams based on data received in XML format. The layout was modified to exploit a Simulink "compile once, evaluate multiple times" capability for zero elapsed time for use in trimming and linearizing. Trim is achieved through a Graphical User Interface (GUI) with a narrow, script definable interface to the vehicle model which facilitates incorporating new models.

  18. Evaluation of portfolio credit risk based on survival analysis for progressive censored data

    NASA Astrophysics Data System (ADS)

    Jaber, Jamil J.; Ismail, Noriszura; Ramli, Siti Norafidah Mohd

    2017-04-01

    In credit risk management, the Basel committee provides financial institutions a choice of three approaches for calculating required capital: the standardized approach, the Internal Ratings-Based (IRB) approach, and the Advanced IRB approach. The IRB approach is usually preferred over the standardized approach because of its higher accuracy and lower capital charges. This paper uses several parametric models (exponential, log-normal, gamma, Weibull, log-logistic, Gompertz) to evaluate the credit risk of the corporate portfolio in the Jordanian banks, based on a monthly sample collected from January 2010 to December 2015. The best model is selected using several goodness-of-fit criteria (MSE, AIC, BIC). The results indicate that the Gompertz distribution is the best parametric model for the data.
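
    The model-selection step can be sketched for the simplest candidate, the exponential distribution, whose MLE has a closed form; censoring is ignored here for brevity, whereas the study handles progressively censored data, and the times-to-default are hypothetical months.

```python
import math

t = [3.0, 7.5, 12.0, 1.2, 25.0, 8.8, 14.3, 2.1]   # hypothetical months
n = len(t)

lam = n / sum(t)                                  # exponential MLE
loglik = n * math.log(lam) - lam * sum(t)         # log-likelihood at the MLE
k = 1                                             # one free parameter
aic = 2*k - 2*loglik
bic = k*math.log(n) - 2*loglik
print(f"lambda={lam:.4f}  AIC={aic:.2f}  BIC={bic:.2f}")
```

    The same AIC/BIC comparison would be repeated with the Weibull, Gompertz, and remaining candidates (fitted numerically), keeping the model with the lowest criterion values.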

  19. Evaluation of Automated Model Calibration Techniques for Residential Building Energy Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, J.; Polly, B.; Collis, J.

    2013-09-01

    This simulation study adapts and applies the general framework described in BESTEST-EX (Judkoff et al 2010) for self-testing residential building energy model calibration methods. BEopt/DOE-2.2 is used to evaluate four mathematical calibration methods in the context of monthly, daily, and hourly synthetic utility data for a 1960's-era existing home in a cooling-dominated climate. The home's model inputs are assigned probability distributions representing uncertainty ranges, random selections are made from the uncertainty ranges to define 'explicit' input values, and synthetic utility billing data are generated using the explicit input values. The four calibration methods evaluated in this study are: an ASHRAE 1051-RP-based approach (Reddy and Maor 2006), a simplified simulated annealing optimization approach, a regression metamodeling optimization approach, and a simple output ratio calibration approach. The calibration methods are evaluated for monthly, daily, and hourly cases; various retrofit measures are applied to the calibrated models and the methods are evaluated based on the accuracy of predicted savings, computational cost, repeatability, automation, and ease of implementation.
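
    The simplest of the four methods, output ratio calibration, amounts to scaling the simulated output so its total matches the billing data. A sketch with hypothetical monthly kWh values, not BEopt/DOE-2.2 output:

```python
simulated = [820, 760, 690, 540, 480, 620, 910, 950, 700, 560, 640, 800]
measured  = [900, 810, 720, 600, 520, 680, 980, 1020, 760, 610, 700, 860]

# One global ratio forces the annual totals to agree; monthly shape
# errors in the model are left untouched.
ratio = sum(measured) / sum(simulated)
calibrated = [m * ratio for m in simulated]
print(round(ratio, 3), round(sum(calibrated), 1))
```

    The optimization-based methods instead search the input uncertainty ranges (e.g. via simulated annealing) to minimize the month-by-month residuals, at far higher computational cost.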

  1. HESS Opinions: The need for process-based evaluation of large-domain hyper-resolution models

    NASA Astrophysics Data System (ADS)

    Melsen, Lieke A.; Teuling, Adriaan J.; Torfs, Paul J. J. F.; Uijlenhoet, Remko; Mizukami, Naoki; Clark, Martyn P.

    2016-03-01

    A meta-analysis on 192 peer-reviewed articles reporting on applications of the variable infiltration capacity (VIC) model in a distributed way reveals that the spatial resolution at which the model is applied has increased over the years, while the calibration and validation time interval has remained unchanged. We argue that the calibration and validation time interval should keep pace with the increase in spatial resolution in order to resolve the processes that are relevant at the applied spatial resolution. We identified six time concepts in hydrological models, which all impact the model results and conclusions. Process-based model evaluation is particularly relevant when models are applied at hyper-resolution, where stakeholders expect credible results both at a high spatial and temporal resolution.

  2. HESS Opinions: The need for process-based evaluation of large-domain hyper-resolution models

    NASA Astrophysics Data System (ADS)

    Melsen, L. A.; Teuling, A. J.; Torfs, P. J. J. F.; Uijlenhoet, R.; Mizukami, N.; Clark, M. P.

    2015-12-01

    A meta-analysis on 192 peer-reviewed articles reporting applications of the Variable Infiltration Capacity (VIC) model in a distributed way reveals that the spatial resolution at which the model is applied has increased over the years, while the calibration and validation time interval has remained unchanged. We argue that the calibration and validation time interval should keep pace with the increase in spatial resolution in order to resolve the processes that are relevant at the applied spatial resolution. We identified six time concepts in hydrological models, which all impact the model results and conclusions. Process-based model evaluation is particularly relevant when models are applied at hyper-resolution, where stakeholders expect credible results both at a high spatial and temporal resolution.

  3. Colorimetric characterization models based on colorimetric characteristics evaluation for active matrix organic light emitting diode panels.

    PubMed

    Gong, Rui; Xu, Haisong; Tong, Qingfen

    2012-10-20

    The colorimetric characterization of active matrix organic light emitting diode (AMOLED) panels suffers from their poor channel independence. Based on an evaluation of the colorimetric characteristics of channel independence and chromaticity constancy, an accurate colorimetric characterization method, namely the polynomial compensation model (PC model), which accounts for channel interactions, was proposed for AMOLED panels. In this model, polynomial expressions are employed to relate the prediction errors of XYZ tristimulus values to the digital inputs, compensating for the XYZ prediction errors of the conventional piecewise linear interpolation assuming variable chromaticity coordinates (PLVC) model. The experimental results indicated that the proposed PC model outperformed other typical characterization models for the two tested AMOLED smart-phone displays, as well as for a professional liquid crystal display monitor.
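
    The compensation idea can be sketched in a single channel: fit a polynomial to the residuals of a baseline prediction and add it back. The display response below is a synthetic gamma curve; the actual PC model operates on XYZ tristimulus values with cross-channel terms.

```python
import numpy as np

digital = np.linspace(0, 255, 9)
measured = (digital / 255) ** 2.2 * 100          # display-like response
baseline = digital / 255 * 100                   # naive linear prediction

# Fit a cubic to the baseline's prediction errors, then compensate.
residual = measured - baseline
coeffs = np.polyfit(digital, residual, deg=3)
compensated = baseline + np.polyval(coeffs, digital)

print(np.abs(measured - baseline).max().round(2),
      np.abs(measured - compensated).max().round(2))
```

    The second printed error is much smaller than the first, which is the whole point: the polynomial absorbs the systematic error the baseline model cannot represent.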

  4. An Investigation of Research-Based Teaching Practices through the Teacher Evaluations in Indiana Public Schools

    ERIC Educational Resources Information Center

    Sargent, Michael Steven

    2014-01-01

    The purpose of this study was to identify if a relationship existed between the implementation of professional evaluation processes and the use of research-based teaching practices, factoring in both perceptions of principals and practicing teachers. The variables of professional development on the evaluation model and the principal's years of…

  5. In Pursuit of Social Betterment: A Proposal to Evaluate the Da Vinci Learning Model

    ERIC Educational Resources Information Center

    Henry, Gary T.

    2005-01-01

    The author presents a proposal that is roughly based on a contingency-based theory of evaluation developed in his book, "Evaluation: An Integrated Framework for Understanding, Guiding, and Improving Policies and Programs" (Mark, Henry, and Julnes, 2000). He and his coauthors stated in this book that social betterment was the ultimate goal of…

  6. Postural effects on intracranial pressure: modeling and clinical evaluation.

    PubMed

    Qvarlander, Sara; Sundström, Nina; Malm, Jan; Eklund, Anders

    2013-11-01

    The physiological effect of posture on intracranial pressure (ICP) is not well described. This study defined and evaluated three mathematical models describing the postural effects on ICP, designed to predict ICP at different head-up tilt angles from the supine ICP value. Model I was based on a hydrostatic indifference point for the cerebrospinal fluid (CSF) system, i.e., the existence of a point in the system where pressure is independent of body position. Models II and III were based on Davson's equation for CSF absorption, which relates ICP to venous pressure, and postulated that gravitational effects within the venous system are transferred to the CSF system. Model II assumed a fully communicating venous system, and model III assumed that collapse of the jugular veins at higher tilt angles creates two separate hydrostatic compartments. Evaluation of the models was based on ICP measurements at seven tilt angles (0-71°) in 27 normal pressure hydrocephalus patients. ICP decreased with tilt angle (ANOVA: P < 0.01). The reduction was well predicted by model III (ANOVA lack-of-fit: P = 0.65), which showed excellent fit against measured ICP. Neither model I nor II adequately described the reduction in ICP (ANOVA lack-of-fit: P < 0.01). Postural changes in ICP could not be predicted based on the currently accepted theory of a hydrostatic indifference point for the CSF system, but a new model combining Davson's equation for CSF absorption and hydrostatic gradients in a collapsible venous system performed well and can be useful in future research on gravity and CSF physiology.

  7. Childhood Obesity Research Demonstration Project: Cross-Site Evaluation Methods

    PubMed Central

    Lee, Rebecca E.; Mehta, Paras; Thompson, Debbe; Bhargava, Alok; Carlson, Coleen; Kao, Dennis; Layne, Charles S.; Ledoux, Tracey; O'Connor, Teresia; Rifai, Hanadi; Gulley, Lauren; Hallett, Allen M.; Kudia, Ousswa; Joseph, Sitara; Modelska, Maria; Ortega, Dana; Parker, Nathan; Stevens, Andria

    2015-01-01

    Introduction: The Childhood Obesity Research Demonstration (CORD) project links public health and primary care interventions in three projects described in detail in accompanying articles in this issue of Childhood Obesity. This article describes a comprehensive evaluation plan to determine the extent to which the CORD model is associated with changes in behavior, body weight, BMI, quality of life, and healthcare satisfaction in children 2–12 years of age. Design/Methods: The CORD Evaluation Center (EC-CORD) will analyze the pooled data from three independent demonstration projects that each integrate public health and primary care childhood obesity interventions. An extensive set of common measures at the family, facility, and community levels were defined by consensus among the CORD projects and EC-CORD. Process evaluation will assess reach, dose delivered, and fidelity of intervention components. Impact evaluation will use a mixed linear models approach to account for heterogeneity among project-site populations and interventions. Sustainability evaluation will assess the potential for replicability, continuation of benefits beyond the funding period, institutionalization of the intervention activities, and community capacity to support ongoing program delivery. Finally, cost analyses will assess how much benefit can potentially be gained per dollar invested in programs based on the CORD model. Conclusions: The keys to combining and analyzing data across multiple projects include the CORD model framework and common measures for the behavioral and health outcomes along with important covariates at the individual, setting, and community levels. The overall objective of the comprehensive evaluation is to develop evidence-based recommendations for replicating and disseminating community-wide, integrated public health and primary care programs based on the CORD model. PMID:25679060

  8. Effects of sample survey design on the accuracy of classification tree models in species distribution models

    USGS Publications Warehouse

    Edwards, T.C.; Cutler, D.R.; Zimmermann, N.E.; Geiser, L.; Moisen, Gretchen G.

    2006-01-01

We evaluated the effects of probabilistic (hereafter DESIGN) and non-probabilistic (PURPOSIVE) sample surveys on resultant classification tree models for predicting the presence of four lichen species in the Pacific Northwest, USA. Models derived from both survey forms were assessed using an independent data set (EVALUATION). Measures of accuracy as gauged by resubstitution rates were similar for each lichen species irrespective of the underlying sample survey form. Cross-validation estimates of prediction accuracies were lower than resubstitution accuracies for all species and both design types, and in all cases were closer to the true prediction accuracies based on the EVALUATION data set. We argue that greater emphasis should be placed on calculating and reporting cross-validation accuracy rates rather than simple resubstitution accuracy rates. Evaluation of the DESIGN and PURPOSIVE tree models on the EVALUATION data set shows significantly lower prediction accuracy for the PURPOSIVE tree models relative to the DESIGN models, indicating that non-probabilistic sample surveys may generate models with limited predictive capability. These differences were consistent across all four lichen species, with 11 of the 12 possible species and sample survey type comparisons having significantly lower accuracy rates. Some differences in accuracy were as large as 50%. The classification tree structures also differed considerably both among and within the modelled species, depending on the sample survey form. Overlap in the predictor variables selected by the DESIGN and PURPOSIVE tree models ranged from only 20% to 38%, indicating the classification trees fit the two evaluated survey forms on different sets of predictor variables. The magnitude of these differences in predictor variables throws doubt on ecological interpretation derived from prediction models based on non-probabilistic sample surveys. © 2006 Elsevier B.V. All rights reserved.
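
The optimism of resubstitution relative to cross-validation that this abstract reports is easy to reproduce with a toy classifier. The sketch below uses invented 2-D "survey" data and a plain 1-nearest-neighbour rule rather than classification trees (an assumption for brevity); it illustrates the point starkly, since a 1-NN classifier always scores 100% on resubstitution while leave-one-out cross-validation gives the honest estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical presence/absence survey: two overlapping classes in 2-D.
X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(1.5, 1, (30, 2))])
y = np.array([0] * 30 + [1] * 30)

def one_nn(train_X, train_y, query, exclude=None):
    """Label of the nearest training point (optionally excluding one index)."""
    d = np.linalg.norm(train_X - query, axis=1)
    if exclude is not None:
        d[exclude] = np.inf
    return train_y[np.argmin(d)]

# Resubstitution: classify the training points themselves.
resub = np.mean([one_nn(X, y, X[i]) == y[i] for i in range(len(y))])

# Leave-one-out cross-validation: a point may not vote for itself.
loo = np.mean([one_nn(X, y, X[i], exclude=i) == y[i] for i in range(len(y))])

print(f"resubstitution accuracy: {resub:.2f}")   # always 1.00 for 1-NN
print(f"leave-one-out accuracy:  {loo:.2f}")
```

The gap between the two numbers is exactly why the authors recommend reporting cross-validated rather than resubstitution accuracy.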

  9. Evaluation of Cost Leadership Strategy in Shipping Enterprises with Simulation Model

    NASA Astrophysics Data System (ADS)

    Ferfeli, Maria V.; Vaxevanou, Anthi Z.; Damianos, Sakas P.

    2009-08-01

The present study attempts to evaluate the cost leadership strategy that prevails in certain shipping enterprises and to create simulation models based on the strategic model STAIR, an alternative method for evaluating strategic applications. The aim is to determine whether the cost leadership strategy creates competitive advantage [1]; this is achieved via a technical simulation that captures the interactions between the operations of an enterprise and its decision-making strategy under conditions of uncertainty, with a reduction of the undertaken risk.

  10. Tools for Economic Analysis of Patient Management Interventions in Heart Failure Cost-Effectiveness Model: A Web-based program designed to evaluate the cost-effectiveness of disease management programs in heart failure.

    PubMed

    Reed, Shelby D; Neilson, Matthew P; Gardner, Matthew; Li, Yanhong; Briggs, Andrew H; Polsky, Daniel E; Graham, Felicia L; Bowers, Margaret T; Paul, Sara C; Granger, Bradi B; Schulman, Kevin A; Whellan, David J; Riegel, Barbara; Levy, Wayne C

    2015-11-01

    Heart failure disease management programs can influence medical resource use and quality-adjusted survival. Because projecting long-term costs and survival is challenging, a consistent and valid approach to extrapolating short-term outcomes would be valuable. We developed the Tools for Economic Analysis of Patient Management Interventions in Heart Failure Cost-Effectiveness Model, a Web-based simulation tool designed to integrate data on demographic, clinical, and laboratory characteristics; use of evidence-based medications; and costs to generate predicted outcomes. Survival projections are based on a modified Seattle Heart Failure Model. Projections of resource use and quality of life are modeled using relationships with time-varying Seattle Heart Failure Model scores. The model can be used to evaluate parallel-group and single-cohort study designs and hypothetical programs. Simulations consist of 10,000 pairs of virtual cohorts used to generate estimates of resource use, costs, survival, and incremental cost-effectiveness ratios from user inputs. The model demonstrated acceptable internal and external validity in replicating resource use, costs, and survival estimates from 3 clinical trials. Simulations to evaluate the cost-effectiveness of heart failure disease management programs across 3 scenarios demonstrate how the model can be used to design a program in which short-term improvements in functioning and use of evidence-based treatments are sufficient to demonstrate good long-term value to the health care system. The Tools for Economic Analysis of Patient Management Interventions in Heart Failure Cost-Effectiveness Model provides researchers and providers with a tool for conducting long-term cost-effectiveness analyses of disease management programs in heart failure. Copyright © 2015 Elsevier Inc. All rights reserved.
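The model's headline output, an incremental cost-effectiveness ratio (ICER), is simply the difference in mean cost between paired cohorts divided by the difference in mean quality-adjusted survival. A minimal sketch with invented per-patient numbers (not outputs of the Seattle Heart Failure Model or the TEAM-HF tool itself):

```python
def icer(cohort_a, cohort_b):
    """Incremental cost-effectiveness ratio of cohort_b vs cohort_a ($/QALY)."""
    mean = lambda xs: sum(xs) / len(xs)
    d_cost = mean([c for c, _ in cohort_b]) - mean([c for c, _ in cohort_a])
    d_qaly = mean([q for _, q in cohort_b]) - mean([q for _, q in cohort_a])
    return d_cost / d_qaly

# Hypothetical (cost $, QALYs) per virtual patient in each cohort.
usual_care = [(10_000, 1.00), (11_000, 1.10), (9_000, 0.90)]
dm_program = [(12_000, 1.20), (13_000, 1.30), (11_000, 1.10)]

print(icer(usual_care, dm_program))  # ~10,000 $/QALY: $2,000 more per 0.2 QALY gained
```

In the actual tool this calculation is repeated over 10,000 pairs of simulated cohorts to characterize uncertainty in the ratio.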

  11. Utilization of two web-based continuing education courses evaluated by Markov chain model.

    PubMed

    Tian, Hao; Lin, Jin-Mann S; Reeves, William C

    2012-01-01

    To evaluate the web structure of two web-based continuing education courses, identify problems and assess the effects of web site modifications. Markov chain models were built from 2008 web usage data to evaluate the courses' web structure and navigation patterns. The web site was then modified to resolve identified design issues and the improvement in user activity over the subsequent 12 months was quantitatively evaluated. Web navigation paths were collected between 2008 and 2010. The probability of navigating from one web page to another was analyzed. The continuing education courses' sequential structure design was clearly reflected in the resulting actual web usage models, and none of the skip transitions provided was heavily used. The web navigation patterns of the two different continuing education courses were similar. Two possible design flaws were identified and fixed in only one of the two courses. Over the following 12 months, the drop-out rate in the modified course significantly decreased from 41% to 35%, but remained unchanged in the unmodified course. The web improvement effects were further verified via a second-order Markov chain model. The results imply that differences in web content have less impact than web structure design on how learners navigate through continuing education courses. Evaluation of user navigation can help identify web design flaws and guide modifications. This study showed that Markov chain models provide a valuable tool to evaluate web-based education courses. Both the results and techniques in this study would be very useful for public health education and research specialists.
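A first-order Markov chain of the kind used in this study is built by counting page-to-page transitions in the navigation logs and normalising each row into probabilities. A minimal sketch with hypothetical page names and sessions:

```python
from collections import defaultdict

def transition_probs(paths):
    """Estimate first-order Markov transition probabilities from navigation paths."""
    counts = defaultdict(lambda: defaultdict(int))
    for path in paths:
        for src, dst in zip(path, path[1:]):  # consecutive page pairs
            counts[src][dst] += 1
    return {src: {dst: n / sum(dsts.values()) for dst, n in dsts.items()}
            for src, dsts in counts.items()}

# Hypothetical course navigation logs, one list per learner session.
paths = [["home", "module1", "module2", "quiz"],
         ["home", "module1", "exit"],          # drop-out after module1
         ["home", "module2", "quiz"]]

probs = transition_probs(paths)
print(probs["home"])     # module1 twice, module2 once -> 2/3 and 1/3
print(probs["module1"])  # half continue, half drop out
```

A second-order model, as used in the paper's verification step, would key the counts on the previous two pages instead of one.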

  12. Utilization of two web-based continuing education courses evaluated by Markov chain model

    PubMed Central

    Lin, Jin-Mann S; Reeves, William C

    2011-01-01

    Objectives To evaluate the web structure of two web-based continuing education courses, identify problems and assess the effects of web site modifications. Design Markov chain models were built from 2008 web usage data to evaluate the courses' web structure and navigation patterns. The web site was then modified to resolve identified design issues and the improvement in user activity over the subsequent 12 months was quantitatively evaluated. Measurements Web navigation paths were collected between 2008 and 2010. The probability of navigating from one web page to another was analyzed. Results The continuing education courses' sequential structure design was clearly reflected in the resulting actual web usage models, and none of the skip transitions provided was heavily used. The web navigation patterns of the two different continuing education courses were similar. Two possible design flaws were identified and fixed in only one of the two courses. Over the following 12 months, the drop-out rate in the modified course significantly decreased from 41% to 35%, but remained unchanged in the unmodified course. The web improvement effects were further verified via a second-order Markov chain model. Conclusions The results imply that differences in web content have less impact than web structure design on how learners navigate through continuing education courses. Evaluation of user navigation can help identify web design flaws and guide modifications. This study showed that Markov chain models provide a valuable tool to evaluate web-based education courses. Both the results and techniques in this study would be very useful for public health education and research specialists. PMID:21976027

  13. Program evaluation of an Integrated Basic Science Medical Curriculum in Shiraz Medical School, Using CIPP Evaluation Model

    PubMed Central

    ROOHOLAMINI, AZADEH; AMINI, MITRA; BAZRAFKAN, LEILA; DEHGHANI, MOHAMMAD REZA; ESMAEILZADEH, ZOHREH; NABEIEI, PARISA; REZAEE, RITA; KOJURI, JAVAD

    2017-01-01

Introduction: In recent years, curriculum reform and integration have been carried out in many medical schools; the integrated curriculum is a popular concept all over the world. In Shiraz Medical School, the reform was initiated by establishing the horizontal basic science integration model and Early Clinical Exposure (ECE) for undergraduate medical education. The purpose of this study was to provide the required data for the program evaluation of this curriculum for undergraduate medical students, using the CIPP program evaluation model. Methods: This is an analytic, descriptive, triangulation mixed-method study which was carried out in Shiraz Medical School in 2012, based on the views of professors of basic sciences courses and first- and second-year medical students. The study evaluated the quality of the relationship between basic sciences and clinical courses and the method of presenting such courses based on the Context, Input, Process and Product (CIPP) model. The tools for collecting data, both quantitative and qualitative, were questionnaires, content analysis of portfolios, semi-structured interviews and brainstorming sessions. For quantitative data analysis, SPSS software, version 14, was used. Results: In the context evaluation with a modified DREEM questionnaire, 77.75% of the students believed that this educational system encourages them to participate actively in classes. The course schedule and class atmosphere were reported as suitable by 87.81% and 83.86% of students. In the input domain, measured by a researcher-made questionnaire, the facilities for education were acceptable except for a shortage of cadavers. In the process evaluation, the quality of the integrated modules’ presentation and Early Clinical Exposure (ECE) was good from the students’ viewpoint.
In the product evaluation, students’ brainstorming sessions, students’ portfolios and semi-structured interviews with faculty members showed some positive aspects of integration and some areas that need improvement. Conclusion: The main advantage of assessing an educational program based on the CIPP evaluation model is that the context, input, process and product of the program are viewed and evaluated systematically. This helps the educational authorities to make proper decisions, based on the weaknesses and strengths of the program, on its continuation, cessation or revision. Based on the results of this study, the integrated basic sciences course for undergraduate medical students in Shiraz Medical School is at a desirable level. However, attempts to improve or reform some sections and continual evaluation of the program and its accreditation seem to be necessary. PMID:28761888

  14. Stress evaluation of metallic material under steady state based on nonlinear critically refracted longitudinal wave

    NASA Astrophysics Data System (ADS)

    Mao, Hanling; Zhang, Yuhua; Mao, Hanying; Li, Xinxin; Huang, Zhenfeng

    2018-06-01

This paper presents a study applying nonlinear ultrasonic waves to evaluate the stress state of metallic materials under steady state. The pre-stress loading method is applied to guarantee components with steady stress. Three kinds of nonlinear ultrasonic experiments based on the critically refracted longitudinal (LCR) wave are conducted on components in which the wave propagates along the x, x1 and x2 directions. Experimental results indicate that the second- and third-order relative nonlinear coefficients monotonically increase with stress, and the normalized relationship is consistent with simplified dislocation models, which supports the soundness of the experimental results. A combined ultrasonic nonlinear parameter is proposed, and three stress evaluation models in the x direction are established based on the three ultrasonic nonlinear parameters, for which the estimation error is below 5%. Two stress detection models in the x1 and x2 directions are then built based on the combined ultrasonic nonlinear parameter, and the stress synthesis method is applied to calculate the magnitude and direction of the principal stress. The results show the prediction error is within 5% and the angle deviation is within 1.5°. Therefore the nonlinear ultrasonic technique based on the LCR wave can be applied to nondestructively evaluate both the magnitude and direction of stress in metallic materials under steady state.
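In measurements of this kind, the relative second-order nonlinear coefficient is commonly taken as proportional to A2/A1² (second-harmonic amplitude over squared fundamental amplitude), and the third-order coefficient analogously to A3/A1³. The sketch below uses invented amplitude readings and drops the proportionality constants involving wavenumber and propagation distance, so it shows only the monotonic trend with stress that the abstract describes:

```python
def relative_nonlinear_coefficients(a1, a2, a3):
    """Relative 2nd- and 3rd-order nonlinear coefficients (constants dropped)."""
    beta2 = a2 / a1 ** 2
    beta3 = a3 / a1 ** 3
    return beta2, beta3

# Hypothetical harmonic amplitudes (fundamental, 2nd, 3rd) at increasing
# applied stress: both relative coefficients should grow monotonically.
readings = [(2.00, 0.080, 0.0040),
            (1.98, 0.095, 0.0052),
            (1.96, 0.110, 0.0065)]

betas = [relative_nonlinear_coefficients(*r) for r in readings]
for b2, b3 in betas:
    print(f"beta2={b2:.4f}  beta3={b3:.6f}")
```

Fitting such coefficient-versus-stress curves is what yields the stress evaluation models with the reported ~5% estimation error.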

  15. Modeling companion diagnostics in economic evaluations of targeted oncology therapies: systematic review and methodological checklist.

    PubMed

    Doble, Brett; Tan, Marcus; Harris, Anthony; Lorgelly, Paula

    2015-02-01

    The successful use of a targeted therapy is intrinsically linked to the ability of a companion diagnostic to correctly identify patients most likely to benefit from treatment. The aim of this study was to review the characteristics of companion diagnostics that are of importance for inclusion in an economic evaluation. Approaches for including these characteristics in model-based economic evaluations are compared with the intent to describe best practice methods. Five databases and government agency websites were searched to identify model-based economic evaluations comparing a companion diagnostic and subsequent treatment strategy to another alternative treatment strategy with model parameters for the sensitivity and specificity of the companion diagnostic (primary synthesis). Economic evaluations that limited model parameters for the companion diagnostic to only its cost were also identified (secondary synthesis). Quality was assessed using the Quality of Health Economic Studies instrument. 30 studies were included in the review (primary synthesis n = 12; secondary synthesis n = 18). Incremental cost-effectiveness ratios may be lower when the only parameter for the companion diagnostic included in a model is the cost of testing. Incorporating the test's accuracy in addition to its cost may be a more appropriate methodological approach. Altering the prevalence of the genetic biomarker, specific population tested, type of test, test accuracy and timing/sequence of multiple tests can all impact overall model results. The impact of altering a test's threshold for positivity is unknown as it was not addressed in any of the included studies. Additional quality criteria as outlined in our methodological checklist should be considered due to the shortcomings of standard quality assessment tools in differentiating studies that incorporate important test-related characteristics and those that do not. 
There is a need to refine methods for incorporating the characteristics of companion diagnostics into model-based economic evaluations to ensure consistent and transparent reimbursement decisions are made.

  16. Pharmaceutical treatments to prevent recurrence of endometriosis following surgery: a model-based economic evaluation

    PubMed Central

    Sanghera, Sabina; Barton, Pelham; Bhattacharya, Siladitya; Horne, Andrew W; Roberts, Tracy Elizabeth

    2016-01-01

    Objective Conduct an economic evaluation based on best currently available evidence comparing alternative treatments levonorgestrel-releasing intrauterine system, depot-medroxyprogesterone acetate, combined oral contraceptive pill (COCP) and ‘no treatment’ to prevent recurrence of endometriosis after conservative surgery in primary care, and to inform the design of a planned trial-based economic evaluation. Methods We developed a state transition (Markov) model with a 36-month follow-up. The model structure was informed by a pragmatic review and clinical experts. The economic evaluation adopted a UK National Health Service perspective and was based on an outcome of incremental cost per quality-adjusted life year (QALY). As available data were limited, intentionally wide distributions were assigned around model inputs, and the average costs and outcome of the probabilistic sensitivity analyses were reported. Results On average, all strategies were more expensive and generated fewer QALYs compared to no treatment. However, uncertainty attributing to the transition probabilities affected the results. Inputs relating to effectiveness, changes in treatment and the time at which the change is made were the main causes of uncertainty, illustrating areas where robust and specific data collection is required. Conclusions There is currently no evidence to support any treatment being recommended to prevent the recurrence of endometriosis following conservative surgery. The study highlights the importance of developing decision models at the outset of a trial to identify data requirements to conduct a robust post-trial analysis. PMID:27084280

  17. 75 FR 2523 - Office of Innovation and Improvement; Overview Information; Arts in Education Model Development...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-15

    ... that is based on rigorous scientifically based research methods to assess the effectiveness of a...) Relies on measurements or observational methods that provide reliable and valid data across evaluators... of innovative, cohesive models that are based on research and have demonstrated that they effectively...

  18. Evaluation of the base/subgrade soil under repeated loading : phase I--laboratory testing and numerical modeling of geogrid reinforced bases in flexible pavement.

    DOT National Transportation Integrated Search

    2009-10-01

This report documents the results of a study that was conducted to characterize the behavior of geogrid reinforced base course materials. The research was conducted through experimental testing and numerical modeling programs. The experimental...

  19. Implementation and Evaluation of Multiple Adaptive Control Technologies for a Generic Transport Aircraft Simulation

    NASA Technical Reports Server (NTRS)

    Campbell, Stefan F.; Kaneshige, John T.; Nguyen, Nhan T.; Krishakumar, Kalmanje S.

    2010-01-01

Presented here is the evaluation of multiple adaptive control technologies for a generic transport aircraft simulation. For this study, seven model reference adaptive control (MRAC) based technologies were considered. Each technology was integrated into an identical dynamic-inversion control architecture and tuned using a methodology based on metrics and specific design requirements. Simulation tests were then performed to evaluate each technology's sensitivity to time-delay, flight condition, model uncertainty, and artificially induced cross-coupling. The resulting robustness and performance characteristics were used to identify potential strengths, weaknesses, and integration challenges of the individual adaptive control technologies.

  20. Land Ecological Security Evaluation of Underground Iron Mine Based on PSR Model

    NASA Astrophysics Data System (ADS)

    Xiao, Xiao; Chen, Yong; Ruan, Jinghua; Hong, Qiang; Gan, Yong

    2018-01-01

Iron ore mining provides an important strategic resource to the national economy while also causing serious ecological problems for the environment. This study sums up the characteristics of the ecological environment problems of underground iron mines. Considering the mining process of an underground iron mine, we analyze the connections between mining production, resources, environment and economic background. The paper proposes a land ecological security evaluation system and method for underground iron mines based on the Pressure-State-Response (PSR) model. Our application to the Chengchao iron mine demonstrates its efficiency and offers promising guidance for land ecological security evaluation.

  1. Evaluation of CNN as anthropomorphic model observer

    NASA Astrophysics Data System (ADS)

    Massanes, Francesc; Brankov, Jovan G.

    2017-03-01

Model observers (MO) are widely used in medical imaging to act as surrogates of human observers in task-based image quality evaluation, frequently towards optimization of reconstruction algorithms. In this paper, we explore the use of convolutional neural networks (CNN) as MO. We compare the CNN MO to alternative MO currently being proposed and used, such as the relevance vector machine based MO and the channelized Hotelling observer (CHO). As the success of CNN and other deep learning approaches is rooted in the availability of large data sets, which is rarely the case in task-performance evaluation of medical imaging systems, we evaluate CNN performance on both large and small training data sets.

  2. Planting Healthy Roots: Using Documentary Film to Evaluate and Disseminate Community-Based Participatory Research

    PubMed Central

    Brandt, Heather M.; Freedman, Darcy A.; Friedman, Daniela B.; Choi, Seul Ki; Seel, Jessica S.; Guest, M. Aaron; Khang, Leepao

    2016-01-01

    The study purpose was twofold: (1) to evaluate a documentary film featuring the formation and implementation of a farmers’ market and (2) to assess whether the film affected awareness regarding food access issues in a food desert community with high rates of obesity. The coalition model of filmmaking, a model consistent with a community-based participatory research (CBPR) approach, and personal stories, community profiles, and expert interviews were used to develop a documentary film (Planting Healthy Roots). Evaluation demonstrated high levels of approval and satisfaction with the film and CBPR essence of the film. The documentary film aligned with a CBPR approach to document, evaluate, and disseminate research processes and outcomes. PMID:27536929

  3. Projection pursuit water quality evaluation model based on chicken swarm algorithm

    NASA Astrophysics Data System (ADS)

    Hu, Zhe

    2018-03-01

In view of the uncertainty and ambiguity of each index in water quality evaluation, and in order to resolve the incompatibility of evaluation results across individual water quality indexes, a projection pursuit model based on the chicken swarm algorithm is proposed. A projection index function that reflects the water quality condition is constructed; the chicken swarm algorithm (CSA) is introduced to optimize this function and seek its best projection direction, and the corresponding best projection values are used to realize the water quality evaluation. Comparison between this method and others shows that it is reasonable and feasible as a decision-making basis for water pollution control in the basin.
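Projection pursuit seeks a unit direction a that maximises some projection index of the projected samples z = Xa. The sketch below uses invented standardized data, a simple dispersion index, and plain random search over the unit sphere as a stand-in for the chicken swarm optimiser (the index function and optimiser are assumptions, not the paper's exact formulation):

```python
import numpy as np

rng = np.random.default_rng(42)

def projection_index(z):
    """A simple projection index: dispersion of the projected values."""
    return z.std()

# Hypothetical water-quality samples (rows) x quality indexes (columns).
X = rng.normal(size=(50, 4))

# Random search over unit directions (stand-in for the chicken swarm algorithm).
best_dir, best_score = None, -np.inf
for _ in range(2000):
    a = rng.normal(size=4)
    a /= np.linalg.norm(a)            # constrain the direction to the unit sphere
    score = projection_index(X @ a)
    if score > best_score:
        best_dir, best_score = a, score

z = X @ best_dir                      # projection values used to grade water quality
print("best direction:", np.round(best_dir, 3), " index:", round(float(best_score), 3))
```

Swapping in CSA (or any other metaheuristic) only changes how candidate directions are generated; the objective being maximised stays the same.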

  4. StirMark Benchmark: audio watermarking attacks based on lossy compression

    NASA Astrophysics Data System (ADS)

    Steinebach, Martin; Lang, Andreas; Dittmann, Jana

    2002-04-01

StirMark Benchmark is a well-known evaluation tool for watermarking robustness. Additional attacks are added to it continuously. To enable application based evaluation, in our paper we address attacks against audio watermarks based on lossy audio compression algorithms to be included in the test environment. We discuss the effect of different lossy compression algorithms like MPEG-2 audio Layer 3, Ogg or VQF on a selection of audio test data. Our focus is on changes regarding the basic characteristics of the audio data like spectrum or average power and on removal of embedded watermarks. Furthermore we compare results of different watermarking algorithms and show that lossy compression is still a challenge for most of them. There are two strategies for adding evaluation of robustness against lossy compression to StirMark Benchmark: (a) use of existing free compression algorithms or (b) implementation of a generic lossy compression simulation. We discuss how such a model can be implemented based on the results of our tests. This method is less complex, as no real psychoacoustic model has to be applied. Our model can be used for audio watermarking evaluation in numerous application fields. As an example, we describe its importance for e-commerce applications with watermarking security.

  5. Validation of the NASA Dryden X-31 simulation and evaluation of mechanization techniques

    NASA Technical Reports Server (NTRS)

    Dickes, Edward; Kay, Jacob; Ralston, John

    1994-01-01

This paper discusses the evaluation of the original Dryden X-31 aerodynamic math model, the processes involved in the justification and creation of the modified database, and compares time-history results of the model response with flight test data.

  6. Development of Monitoring and Diagnostic Methods for Robots Used In Remediation of Waste Sites - Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, M.

    2000-04-01

This project is the first application of model-based diagnostics to hydraulic robot systems. A greater understanding of fault detection for hydraulic robots has been gained, and a new theoretical fault detection model has been developed and evaluated.

  7. The evaluation system of city's smart growth success rates

    NASA Astrophysics Data System (ADS)

    Huang, Yifan

    2018-04-01

"Smart growth" pursues the best integrated performance of the three E's of sustainability: Economically prosperous, socially Equitable, and Environmentally Sustainable (3E). Firstly, based on the ten principles of smart growth and the definition of the three E's, we establish a smart growth evaluation system (SGI) and a sustainable development evaluation system (SDI). By using the Z-score method and principal component analysis, we evaluate and quantify the indexes synthetically. We then define the success of smart growth as the ratio of the SDI to the SGI composite score growth rate (SSG). We select two cities, Canberra and Durres, as the objects of study for our model. Based on the development plans and key data of these two cities, we compute the success of their smart growth. Finally, we adjust some of the growth indicators for both cities, observe the results before and after adjustment, and verify the accuracy of the model.
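The scoring pipeline described here can be sketched in two steps: standardise each indicator to a z-score before aggregating it into a composite, then take the ratio of the SDI growth rate to the SGI growth rate as the success measure. The indicator values below are invented, and the PCA-based weighting the authors apply is omitted:

```python
def zscores(xs):
    """Standardise indicator values to z-scores (population standard deviation)."""
    m = sum(xs) / len(xs)
    sd = (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5
    return [(x - m) / sd for x in xs]

def growth_rate(before, after):
    """Relative change of a composite score between base and target years."""
    return (after - before) / before

# Hypothetical values of one indicator across cities, standardised before aggregation.
print([round(z, 2) for z in zscores([0.3, 0.5, 0.7])])  # [-1.22, 0.0, 1.22]

# SSG for one city: ratio of the SDI growth rate to the SGI growth rate.
sgi_growth = growth_rate(0.40, 0.50)   # smart-growth composite grew 25%
sdi_growth = growth_rate(0.30, 0.36)   # sustainability composite grew 20%
print(round(sdi_growth / sgi_growth, 2))  # 0.8
```

An SSG below 1 in this toy reading means sustainability outcomes are lagging the smart-growth effort, which is the kind of signal the authors use to adjust growth indicators.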

  8. Evaluation of training programs and entry-level qualifications for nuclear-power-plant control-room personnel based on the systems approach to training

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haas, P M; Selby, D L; Hanley, M J

    1983-09-01

This report summarizes results of research sponsored by the US Nuclear Regulatory Commission (NRC) Office of Nuclear Regulatory Research to initiate the use of the Systems Approach to Training in the evaluation of training programs and entry-level qualifications for nuclear power plant (NPP) personnel. Variables (performance shaping factors) of potential importance to personnel selection and training are identified, and research to more rigorously define an operationally useful taxonomy of those variables is recommended. A high-level model of the Systems Approach to Training for use in the nuclear industry, which could serve as a model for NRC evaluation of industry programs, is presented. The model is consistent with current publicly stated NRC policy, with the approach being followed by the Institute for Nuclear Power Operations, and with current training technology. Checklists to be used by NRC evaluators to assess training programs for NPP control-room personnel are proposed which are based on this model.

  9. Proposed evaluation framework for assessing operator performance with multisensor displays

    NASA Technical Reports Server (NTRS)

    Foyle, David C.

    1992-01-01

    Despite aggressive work on the development of sensor fusion algorithms and techniques, no formal evaluation procedures have been proposed. Based on existing integration models in the literature, an evaluation framework is developed to assess an operator's ability to use multisensor, or sensor fusion, displays. The proposed evaluation framework for evaluating the operator's ability to use such systems is a normative approach: The operator's performance with the sensor fusion display can be compared to the models' predictions based on the operator's performance when viewing the original sensor displays prior to fusion. This allows for the determination as to when a sensor fusion system leads to: 1) poorer performance than one of the original sensor displays (clearly an undesirable system in which the fused sensor system causes some distortion or interference); 2) better performance than with either single sensor system alone, but at a sub-optimal (compared to the model predictions) level; 3) optimal performance (compared to model predictions); or, 4) super-optimal performance, which may occur if the operator were able to use some highly diagnostic 'emergent features' in the sensor fusion display, which were unavailable in the original sensor displays. An experiment demonstrating the usefulness of the proposed evaluation framework is discussed.
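The normative framework described here needs a model prediction of fused-display performance derived from the single-sensor displays. One common baseline (an assumption in this sketch, not necessarily the integration model the paper proposes) is probability summation under independence, P_fused = 1 − (1 − P1)(1 − P2); observed performance is then classified against that prediction:

```python
def predicted_fused(p1, p2):
    """Independent-cue (probability summation) prediction for a fused display."""
    return 1 - (1 - p1) * (1 - p2)

def classify(observed, p1, p2, tol=0.02):
    """Place observed fused-display performance relative to the model prediction.

    The tolerance band `tol` is a hypothetical choice for this illustration.
    """
    if observed < max(p1, p2):
        return "worse than best single sensor"   # fusion distorts or interferes
    pred = predicted_fused(p1, p2)
    if observed < pred - tol:
        return "sub-optimal"
    if observed <= pred + tol:
        return "optimal"
    return "super-optimal"                       # emergent features exploited

p1, p2 = 0.70, 0.60                 # hypothetical single-sensor detection rates
print(predicted_fused(p1, p2))      # ~0.88
print(classify(0.80, p1, p2))       # sub-optimal
print(classify(0.95, p1, p2))       # super-optimal
```

The four return values map directly onto the four outcome categories enumerated in the abstract.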

  10. Ottawa Model of Implementation Leadership and Implementation Leadership Scale: mapping concepts for developing and evaluating theory-based leadership interventions.

    PubMed

    Gifford, Wendy; Graham, Ian D; Ehrhart, Mark G; Davies, Barbara L; Aarons, Gregory A

    2017-01-01

Leadership in health care is instrumental to creating a supportive organizational environment and positive staff attitudes for implementing evidence-based practices to improve patient care and outcomes. The purpose of this study is to demonstrate the alignment of the Ottawa Model of Implementation Leadership (O-MILe), a theoretical model for developing implementation leadership, with the Implementation Leadership Scale (ILS), an empirically validated tool for measuring implementation leadership. A secondary objective is to describe the methodological process for aligning concepts of a theoretical model with an independently established measurement tool for evaluating theory-based interventions. Modified template analysis was conducted to deductively map items of the ILS onto concepts of the O-MILe. An iterative process was used in which the model and scale developers (n=5) appraised the relevance, conceptual clarity, and fit of each ILS item with the O-MILe concepts through individual feedback and group discussions until consensus was reached. All 12 items of the ILS correspond to at least one O-MILe concept, demonstrating compatibility of the ILS as a measurement tool for the O-MILe theoretical constructs. The O-MILe provides a theoretical basis for developing implementation leadership, and the ILS is a compatible tool for measuring leadership based on the O-MILe. Used together, the O-MILe and ILS provide an evidence- and theory-based approach for developing and measuring leadership for implementing evidence-based practices in health care. Template analysis offers a convenient approach for determining the compatibility of independently developed evaluation tools to test theoretical models.

  11. Ottawa Model of Implementation Leadership and Implementation Leadership Scale: mapping concepts for developing and evaluating theory-based leadership interventions

    PubMed Central

    Gifford, Wendy; Graham, Ian D; Ehrhart, Mark G; Davies, Barbara L; Aarons, Gregory A

    2017-01-01

    Purpose: Leadership in health care is instrumental to creating a supportive organizational environment and positive staff attitudes for implementing evidence-based practices to improve patient care and outcomes. The purpose of this study is to demonstrate the alignment of the Ottawa Model of Implementation Leadership (O-MILe), a theoretical model for developing implementation leadership, with the Implementation Leadership Scale (ILS), an empirically validated tool for measuring implementation leadership. A secondary objective is to describe the methodological process for aligning concepts of a theoretical model with an independently established measurement tool for evaluating theory-based interventions. Methods: Modified template analysis was conducted to deductively map items of the ILS onto concepts of the O-MILe. An iterative process was used in which the model and scale developers (n=5) appraised the relevance, conceptual clarity, and fit of each ILS item with the O-MILe concepts through individual feedback and group discussions until consensus was reached. Results: All 12 items of the ILS correspond to at least one O-MILe concept, demonstrating compatibility of the ILS as a measurement tool for the O-MILe theoretical constructs. Conclusion: The O-MILe provides a theoretical basis for developing implementation leadership, and the ILS is a compatible tool for measuring leadership based on the O-MILe. Used together, the O-MILe and ILS provide an evidence- and theory-based approach for developing and measuring leadership for implementing evidence-based practices in health care. Template analysis offers a convenient approach for determining the compatibility of independently developed evaluation tools to test theoretical models. PMID:29355212

  12. A Comparison of Dose-Response Models for the Parotid Gland in a Large Group of Head-and-Neck Cancer Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Houweling, Antonetta C., E-mail: A.Houweling@umcutrecht.n; Philippens, Marielle E.P.; Dijkema, Tim

    2010-03-15

    Purpose: The dose-response relationship of the parotid gland has been described most frequently using the Lyman-Kutcher-Burman model. However, various other normal tissue complication probability (NTCP) models exist. We evaluated in a large group of patients the value of six NTCP models that describe the parotid gland dose response 1 year after radiotherapy. Methods and Materials: A total of 347 patients with head-and-neck tumors were included in this prospective parotid gland dose-response study. The patients were treated with either conventional radiotherapy or intensity-modulated radiotherapy. Dose-volume histograms for the parotid glands were derived from three-dimensional dose calculations using computed tomography scans. Stimulated salivary flow rates were measured before and 1 year after radiotherapy. A threshold of 25% of the pretreatment flow rate was used to define a complication. The evaluated models included the Lyman-Kutcher-Burman model, the mean dose model, the relative seriality model, the critical volume model, the parallel functional subunit model, and the dose-threshold model. The goodness of fit (GOF) was determined by the deviance and a Monte Carlo hypothesis test. Ranking of the models was based on Akaike's information criterion (AIC). Results: None of the models was rejected based on the evaluation of the GOF. The mean dose model was ranked as the best model based on the AIC. The TD50 in these models was approximately 39 Gy. Conclusions: The mean dose model was preferred for describing the dose-response relationship of the parotid gland.
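
    The mean dose model favored by the study can be written as a probit (LKB-type) function of the mean parotid dose. Below is a minimal sketch using the reported TD50 of about 39 Gy; the slope parameter m is an illustrative assumption, not a value from the paper.

```python
import math

def ntcp_mean_dose(mean_dose_gy, td50=39.0, m=0.45):
    """Probit NTCP as a function of mean parotid dose.

    td50: dose giving a 50% complication probability (~39 Gy per the study).
    m: slope parameter, chosen here purely for illustration.
    """
    t = (mean_dose_gy - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# NTCP passes through 50% exactly at TD50 and rises with mean dose
print(ntcp_mean_dose(39.0))                       # 0.5
print(ntcp_mean_dose(20.0) < ntcp_mean_dose(50.0))  # True
```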

  13. Marker-based or model-based RSA for evaluation of hip resurfacing arthroplasty? A clinical validation and 5-year follow-up.

    PubMed

    Lorenzen, Nina Dyrberg; Stilling, Maiken; Jakobsen, Stig Storgaard; Gustafson, Klas; Søballe, Kjeld; Baad-Hansen, Thomas

    2013-11-01

    The stability of implants is vital to ensure long-term survival. RSA determines micro-motions of implants as a predictor of early implant failure. RSA can be performed as a marker- or model-based analysis. So far, CAD and RE model-based RSA have not been validated for use in hip resurfacing arthroplasty (HRA). A phantom study determined the precision of marker-based and CAD and RE model-based RSA on a HRA implant. In a clinical study, 19 patients were followed with stereoradiographs until 5 years after surgery. Analysis of double-examination migration results determined the clinical precision of marker-based and CAD model-based RSA, and at the 5-year follow-up, results of the total translation (TT) and the total rotation (TR) for marker- and CAD model-based RSA were compared. The phantom study showed that marker-based RSA analysis was more precise (SDdiff) than model-based RSA analysis in TT (CAD: p < 0.001; RE: p = 0.04) and TR (CAD: p = 0.01; RE: p < 0.001). The clinical precision (double examination in 8 patients), compared by SDdiff, was better for TT using the marker-based RSA analysis (p = 0.002), but showed no difference between the marker- and CAD model-based RSA analyses regarding TR (p = 0.91). Comparing the mean signed values for TT and TR at the 5-year follow-up in 13 patients, TT was lower (p = 0.03) and TR higher (p = 0.04) in marker-based RSA compared to CAD model-based RSA. The precision of marker-based RSA was significantly better than that of model-based RSA. However, problems with occluded markers led to the exclusion of many patients, which was not a problem with model-based RSA. The HRA implants were stable at the 5-year follow-up. The detection limit was 0.2 mm TT and 1° TR for marker-based RSA and 0.5 mm TT and 1° TR for CAD model-based RSA.

  14. A logic model framework for evaluation and planning in a primary care practice-based research network (PBRN)

    PubMed Central

    Hayes, Holly; Parchman, Michael L.; Howard, Ray

    2012-01-01

    Evaluating effective growth and development of a Practice-Based Research Network (PBRN) can be challenging. The purpose of this article is to describe the development of a logic model and how the framework has been used for planning and evaluation in a primary care PBRN. An evaluation team was formed consisting of the PBRN directors, staff and its board members. After the mission and the target audience were determined, facilitated meetings and discussions were held with stakeholders to identify the assumptions, inputs, activities, outputs, outcomes and outcome indicators. The long-term outcomes outlined in the final logic model are two-fold: 1.) Improved health outcomes of patients served by PBRN community clinicians; and 2.) Community clinicians are recognized leaders of quality research projects. The Logic Model proved useful in identifying stakeholder interests and dissemination activities as an area that required more attention in the PBRN. The logic model approach is a useful planning tool and project management resource that increases the probability that the PBRN mission will be successfully implemented. PMID:21900441

  15. Poisson regression models outperform the geometrical model in estimating the peak-to-trough ratio of seasonal variation: a simulation study.

    PubMed

    Christensen, A L; Lundbye-Christensen, S; Dethlefsen, C

    2011-12-01

    Several statistical methods of assessing seasonal variation are available. Brookhart and Rothman [3] proposed a second-order moment-based estimator based on the geometrical model derived by Edwards [1], and reported that this estimator is superior in estimating the peak-to-trough ratio of seasonal variation compared with Edwards' estimator with respect to bias and mean squared error. Alternatively, seasonal variation may be modelled using a Poisson regression model, which provides flexibility in modelling the pattern of seasonal variation and adjustments for covariates. Based on a Monte Carlo simulation study, three estimators, one based on the geometrical model and two based on log-linear Poisson regression models, were evaluated with respect to bias and standard deviation (SD). We evaluated the estimators on data simulated according to schemes varying in seasonal variation and presence of a secular trend. All methods and analyses in this paper are available in the R package Peak2Trough [13]. Applying a Poisson regression model resulted in lower absolute bias and SD for data simulated according to the corresponding model assumptions. Poisson regression models also had lower bias and SD than the geometrical model for data simulated to deviate from the corresponding model assumptions. This simulation study encourages the use of Poisson regression models in estimating the peak-to-trough ratio of seasonal variation as opposed to the geometrical model. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
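
    One attraction of the log-linear Poisson parameterization is that the peak-to-trough ratio has a closed form: with log λ(t) = β0 + β1 cos(ωt) + β2 sin(ωt), the ratio is exp(2·√(β1² + β2²)). A small sketch with made-up coefficients (not fitted values), checked numerically against the implied seasonal curve:

```python
import math

# Log-linear seasonal model: log lambda(t) = b0 + b1*cos(wt) + b2*sin(wt)
b0, b1, b2 = 2.0, 0.30, 0.40   # illustrative coefficients only
amp = math.sqrt(b1 ** 2 + b2 ** 2)
ptr_formula = math.exp(2 * amp)  # closed-form peak-to-trough ratio

# Numerical check over one 365-day period
rates = [math.exp(b0 + b1 * math.cos(2 * math.pi * t / 365)
                     + b2 * math.sin(2 * math.pi * t / 365))
         for t in range(365)]
ptr_numeric = max(rates) / min(rates)
print(round(ptr_formula, 3), round(ptr_numeric, 3))  # both ~ e ~ 2.718
```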

  16. Evaluation of Supply Chain Efficiency Based on a Novel Network of Data Envelopment Analysis Model

    NASA Astrophysics Data System (ADS)

    Fu, Li Fang; Meng, Jun; Liu, Ying

    2015-12-01

    Performance evaluation of a supply chain (SC) is a vital topic in SC management and an inherently complex problem, involving multilayered internal linkages and the activities of multiple entities. Recently, various Network Data Envelopment Analysis (NDEA) models, which opened the “black box” of conventional DEA, were developed and applied to evaluate complex SCs with a multilayer network structure. However, most of them are input- or output-oriented models that cannot account for nonproportional changes of inputs and outputs simultaneously. This paper extends the slack-based measure (SBM) model to a nonradial, nonoriented network model, named U-NSBM, that accommodates undesirable outputs in the SC. A numerical example is presented to demonstrate the applicability of the model in quantifying the efficiency and ranking the supply chain performance. By comparing with the CCR and U-SBM models, it is shown that the proposed model has higher distinguishing ability and gives feasible solutions in the presence of undesirable outputs. Meanwhile, it provides more insights for decision makers about the source of inefficiency as well as guidance to improve the SC performance.

  17. A System Computational Model of Implicit Emotional Learning

    PubMed Central

    Puviani, Luca; Rama, Sidita

    2016-01-01

    Nowadays, the experimental study of emotional learning is commonly based on classical conditioning paradigms and models, which have been thoroughly investigated in the last century. Unfortunately, models based on classical conditioning are unable to explain or predict important psychophysiological phenomena, such as the failure of the extinction of emotional responses in certain circumstances (for instance, those observed in evaluative conditioning, in post-traumatic stress disorders and in panic attacks). In this manuscript, starting from the experimental results available from the literature, a computational model of implicit emotional learning based on both prediction-error computation and statistical inference is developed. The model quantitatively predicts (a) the occurrence of evaluative conditioning, (b) the dynamics and the resistance-to-extinction of the traumatic emotional responses, (c) the mathematical relation between classical conditioning and unconditioned stimulus revaluation. Moreover, we discuss how the derived computational model can lead to the development of new animal models for resistant-to-extinction emotional reactions and novel methodologies of emotions modulation. PMID:27378898

  18. A System Computational Model of Implicit Emotional Learning.

    PubMed

    Puviani, Luca; Rama, Sidita

    2016-01-01

    Nowadays, the experimental study of emotional learning is commonly based on classical conditioning paradigms and models, which have been thoroughly investigated in the last century. Unfortunately, models based on classical conditioning are unable to explain or predict important psychophysiological phenomena, such as the failure of the extinction of emotional responses in certain circumstances (for instance, those observed in evaluative conditioning, in post-traumatic stress disorders and in panic attacks). In this manuscript, starting from the experimental results available from the literature, a computational model of implicit emotional learning based on both prediction-error computation and statistical inference is developed. The model quantitatively predicts (a) the occurrence of evaluative conditioning, (b) the dynamics and the resistance-to-extinction of the traumatic emotional responses, (c) the mathematical relation between classical conditioning and unconditioned stimulus revaluation. Moreover, we discuss how the derived computational model can lead to the development of new animal models for resistant-to-extinction emotional reactions and novel methodologies of emotions modulation.
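
    The prediction-error ingredient such models build on can be illustrated with the classic delta rule (Rescorla-Wagner). This is a generic sketch of acquisition and extinction only, not the authors' model, and the learning rate is an arbitrary illustrative choice:

```python
# Generic delta-rule (Rescorla-Wagner style) update: a standard
# prediction-error mechanism, NOT the specific model of the paper.
def delta_rule(v, outcome, alpha=0.2):
    """Move associative strength v toward the observed outcome."""
    prediction_error = outcome - v
    return v + alpha * prediction_error

v = 0.0
# Acquisition: repeated pairings with outcome present (1.0)
for _ in range(30):
    v = delta_rule(v, 1.0)
acquired = v            # approaches 1
# Extinction: outcome now absent (0.0)
for _ in range(30):
    v = delta_rule(v, 0.0)
extinguished = v        # decays back toward 0
print(round(acquired, 3), round(extinguished, 3))
```

    In this plain delta-rule world extinction always succeeds; the paper's point is precisely that extra machinery (statistical inference over latent causes) is needed to capture resistance-to-extinction.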

  19. Strategic directions for agent-based modeling: avoiding the YAAWN syndrome.

    PubMed

    O'Sullivan, David; Evans, Tom; Manson, Steven; Metcalf, Sara; Ligmann-Zielinska, Arika; Bone, Chris

    In this short communication, we examine how agent-based modeling has become common in land change science and is increasingly used to develop case studies for particular times and places. There is a danger that the research community is missing a prime opportunity to learn broader lessons from the use of agent-based modeling (ABM), or at the very least not sharing these lessons more widely. How do we find an appropriate balance between empirically rich, realistic models and simpler theoretically grounded models? What are appropriate and effective approaches to model evaluation in light of uncertainties not only in model parameters but also in model structure? How can we best explore hybrid model structures that enable us to better understand the dynamics of the systems under study, recognizing that no single approach is best suited to this task? Under what circumstances - in terms of model complexity, model evaluation, and model structure - can ABMs be used most effectively to lead to new insight for stakeholders? We explore these questions in the hope of helping the growing community of land change scientists using models in their research to move from 'yet another model' to doing better science with models.

  20. Efficacy of a surfactant-based wound dressing on biofilm control.

    PubMed

    Percival, Steven L; Mayer, Dieter; Salisbury, Anne-Marie

    2017-09-01

    The aim of this study was to evaluate the efficacy of both a nonantimicrobial and an antimicrobial (1% silver sulfadiazine, SSD) surfactant-based wound dressing in the control of Pseudomonas aeruginosa, Enterococcus sp, Staphylococcus epidermidis, Staphylococcus aureus, and methicillin-resistant S. aureus (MRSA) biofilms. Anti-biofilm efficacy was evaluated in several adapted American Society for Testing and Materials (ASTM) standard biofilm models and other bespoke biofilm models. The ASTM standard models employed included the minimum biofilm eradication concentration (MBEC) biofilm model (ASTM E2799) and the Centers for Disease Control (CDC) biofilm reactor model (ASTM 2871). The bespoke biofilm models included the filter biofilm model and the chamberslide biofilm model. Results showed complete kill of microorganisms within a biofilm using the antimicrobial surfactant-based wound dressing. Interestingly, the nonantimicrobial surfactant-based dressing could disrupt existing biofilms by causing biofilm detachment. Prior to biofilm detachment, we demonstrated, using confocal laser scanning microscopy (CLSM), the dispersive effect of the nonantimicrobial surfactant-based wound dressing on the biofilm within 10 minutes of treatment. Furthermore, the nonantimicrobial surfactant-based wound dressing caused an increase in microbial flocculation/aggregation, important for microbial concentration. In conclusion, this nonantimicrobial surfactant-based wound dressing leads to the effective detachment and dispersion of in vitro biofilms. The use of surfactant-based wound dressings in a clinical setting may help to disrupt existing biofilm from wound tissue and may increase the action of antimicrobial treatment. © 2017 by the Wound Healing Society.

  1. Model Selection in Historical Research Using Approximate Bayesian Computation

    PubMed Central

    Rubio-Campillo, Xavier

    2016-01-01

    Formal Models and History: Computational models are increasingly being used to study historical dynamics. This new trend, which could be named Model-Based History, makes use of recently published datasets and innovative quantitative methods to improve our understanding of past societies based on their written sources. The extensive use of formal models allows historians to re-evaluate hypotheses formulated decades ago and still subject to debate due to the lack of an adequate quantitative framework. The initiative has the potential to transform the discipline if it solves the challenges posed by the study of historical dynamics. These difficulties are based on the complexities of modelling social interaction, and the methodological issues raised by the evaluation of formal models against data with low sample size, high variance and strong fragmentation. Case Study: This work examines an alternate approach to this evaluation based on a Bayesian-inspired model selection method. The validity of the classical Lanchester's laws of combat is examined against a dataset comprising over a thousand battles spanning 300 years. Four variations of the basic equations are discussed, including the three most common formulations (linear, squared, and logarithmic) and a new variant introducing fatigue. Approximate Bayesian Computation is then used to infer both parameter values and model selection via Bayes Factors. Impact: Results indicate decisive evidence favouring the new fatigue model. The interpretation of both parameter estimations and model selection provides new insights into the factors guiding the evolution of warfare. At a methodological level, the case study shows how model selection methods can be used to guide historical research through the comparison between existing hypotheses and empirical evidence. PMID:26730953
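
    The rejection-sampling flavor of Approximate Bayesian Computation can be sketched in a few lines: simulate from each candidate model, accept runs whose summary statistic lands near the observed one, and approximate the Bayes factor by the ratio of acceptance counts. The toy below uses two fixed-mean Gaussian "models" standing in for the combat-law variants; all numbers are invented for illustration.

```python
import random
import statistics

random.seed(42)
# "Observed" data generated under model 1 (mean 1.0)
observed = [random.gauss(1.0, 1.0) for _ in range(50)]
obs_mean = statistics.mean(observed)

def simulate(mu):
    """One simulated dataset's summary statistic under a candidate model."""
    return statistics.mean(random.gauss(mu, 1.0) for _ in range(50))

def accepted(mu, n=2000, eps=0.1):
    """Rejection ABC: count simulations landing within eps of the data."""
    return sum(abs(simulate(mu) - obs_mean) < eps for _ in range(n))

a0 = accepted(0.0)   # model 0: mean 0 (mismatched)
a1 = accepted(1.0)   # model 1: mean 1 (matches the data)
bayes_factor = (a1 + 1) / (a0 + 1)  # +1 smoothing against zero counts
print(a0, a1, round(bayes_factor, 1))
```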

  2. Comparison of Pre-Service Physics Teachers' Conceptual Understanding of Dynamics in Model-Based Scientific Inquiry and Scientific Inquiry Environments

    ERIC Educational Resources Information Center

    Arslan Buyruk, Arzu; Ogan Bekiroglu, Feral

    2018-01-01

    The focus of this study was to evaluate the impact of model-based inquiry on pre-service physics teachers' conceptual understanding of dynamics. Theoretical framework of this research was based on models-of-data theory. True-experimental design using quantitative and qualitative research methods was carried out for this research. Participants of…

  3. Index-based groundwater vulnerability mapping models using hydrogeological settings: A critical evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, Prashant, E-mail: prashantkumar@csio.res.in; Academy of Scientific and Innovative Research—CSIO, Chandigarh 160030; Bansod, Baban K.S.

    2015-02-15

    Groundwater vulnerability maps are useful for decision making in land use planning and water resource management. This paper reviews the various groundwater vulnerability assessment models developed across the world. Each model has been evaluated in terms of its pros and cons and the environmental conditions of its application. The paper further discusses the validation techniques used for the generated vulnerability maps by various models. Implicit challenges associated with the development of the groundwater vulnerability assessment models have also been identified with scientific considerations to the parameter relations and their selections. - Highlights: • Various index-based groundwater vulnerability assessment models havemore » been discussed. • A comparative analysis of the models and its applicability in different hydrogeological settings has been discussed. • Research problems of underlying vulnerability assessment models are also reported in this review paper.« less

  4. Formalizing the Role of Agent-Based Modeling in Causal Inference and Epidemiology

    PubMed Central

    Marshall, Brandon D. L.; Galea, Sandro

    2015-01-01

    Calls for the adoption of complex systems approaches, including agent-based modeling, in the field of epidemiology have largely centered on the potential for such methods to examine complex disease etiologies, which are characterized by feedback behavior, interference, threshold dynamics, and multiple interacting causal effects. However, considerable theoretical and practical issues impede the capacity of agent-based methods to examine and evaluate causal effects and thus illuminate new areas for intervention. We build on this work by describing how agent-based models can be used to simulate counterfactual outcomes in the presence of complexity. We show that these models are of particular utility when the hypothesized causal mechanisms exhibit a high degree of interdependence between multiple causal effects and when interference (i.e., one person's exposure affects the outcome of others) is present and of intrinsic scientific interest. Although not without challenges, agent-based modeling (and complex systems methods broadly) represent a promising novel approach to identify and evaluate complex causal effects, and they are thus well suited to complement other modern epidemiologic methods of etiologic inquiry. PMID:25480821
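
    The counterfactual-simulation idea can be sketched with a toy contagion process: run the same seeded agent-based model twice, once as observed and once under a hypothetical intervention, and compare outcomes. Interference is built in because one agent's immunity changes the exposure of its contacts. Everything below (network, parameters) is invented for illustration and is not the authors' model.

```python
import random

def run_epidemic(vaccinated_fraction, seed=7, n=200, days=40, p_transmit=0.1):
    """Toy agent-based contagion on a fixed random contact network."""
    rng = random.Random(seed)
    contacts = [rng.sample(range(n), 5) for _ in range(n)]  # 5 contacts each
    immune = set(rng.sample(range(n), int(vaccinated_fraction * n)))
    index_case = next(i for i in range(n) if i not in immune)
    infected = {index_case}
    for _ in range(days):
        newly = {j for i in infected for j in contacts[i]
                 if j not in infected and j not in immune
                 and rng.random() < p_transmit}
        infected |= newly
    return len(infected)

factual = run_epidemic(0.0)         # the "observed" world
counterfactual = run_epidemic(0.5)  # same seed, half the agents immunized
print(factual, counterfactual)
```

    The difference between the two runs is the simulated causal effect of the intervention under this (toy) data-generating process.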

  5. Evaluation of outbreak detection performance using multi-stream syndromic surveillance for influenza-like illness in rural Hubei Province, China: a temporal simulation model based on healthcare-seeking behaviors.

    PubMed

    Fan, Yunzhou; Wang, Ying; Jiang, Hongbo; Yang, Wenwen; Yu, Miao; Yan, Weirong; Diwan, Vinod K; Xu, Biao; Dong, Hengjin; Palm, Lars; Nie, Shaofa

    2014-01-01

    Syndromic surveillance promotes the early detection of disease outbreaks. Although syndromic surveillance has increased in developing countries, performance on outbreak detection, particularly in cases of multi-stream surveillance, has scarcely been evaluated in rural areas. This study introduces a temporal simulation model based on healthcare-seeking behaviors to evaluate the performance of multi-stream syndromic surveillance for influenza-like illness. Data were obtained in six towns of rural Hubei Province, China, from April 2012 to June 2013. A Susceptible-Exposed-Infectious-Recovered model generated 27 scenarios of simulated influenza A (H1N1) outbreaks, which were converted into corresponding simulated syndromic datasets through the healthcare-behaviors model. We then superimposed converted syndromic datasets onto the baselines obtained to create the testing datasets. Outbreak detection performance of single-stream surveillance (clinic visits, over-the-counter drug purchase frequency, and school absenteeism) and of multi-stream surveillance of their combinations was evaluated using receiver operating characteristic curves and activity monitoring operation curves. In the six towns examined, clinic visit surveillance and school absenteeism surveillance exhibited better outbreak detection performance than over-the-counter drug purchase frequency surveillance; the performance of multi-stream surveillance was preferable to single-stream surveillance, particularly at low specificity (Sp <90%). The temporal simulation model based on healthcare-seeking behaviors offers an accessible method for evaluating the performance of multi-stream surveillance.
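
    The outbreak-generating step can be illustrated with a minimal discrete-time SEIR model; the parameter values below are illustrative assumptions, not those of the study.

```python
# Minimal discrete-time SEIR model of the kind used to generate
# simulated outbreaks (all parameters are illustrative only).
def seir(n=10000, e0=5, days=120, beta=0.5, sigma=1 / 3, gamma=1 / 4):
    s, e, i, r = n - e0, float(e0), 0.0, 0.0
    incidence = []                      # daily new infectious cases
    for _ in range(days):
        new_e = beta * s * i / n        # S -> E (transmission)
        new_i = sigma * e               # E -> I (end of latency)
        new_r = gamma * i               # I -> R (recovery)
        s, e, i, r = s - new_e, e + new_e - new_i, i + new_i - new_r, r + new_r
        incidence.append(new_i)
    return incidence, (s, e, i, r)

incidence, final = seir()
peak_day = incidence.index(max(incidence))
print(round(max(incidence), 1), peak_day)
```

    The incidence series is what would then be pushed through a healthcare-seeking-behavior model to produce simulated clinic-visit, drug-purchase, and absenteeism streams.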

  6. Electromagnetic interference modeling and suppression techniques in variable-frequency drive systems

    NASA Astrophysics Data System (ADS)

    Yang, Le; Wang, Shuo; Feng, Jianghua

    2017-11-01

    Electromagnetic interference (EMI) causes electromechanical damage to the motors and degrades the reliability of variable-frequency drive (VFD) systems. Unlike fundamental frequency components in motor drive systems, high-frequency EMI noise, which couples through the parasitic parameters of the whole system, is difficult to analyze and reduce. In this article, EMI modeling techniques for different function units in a VFD system, including induction motors, motor bearings, and rectifier-inverters, are reviewed and evaluated in terms of applied frequency range, model parameterization, and model accuracy. The EMI models for the motors are categorized based on modeling techniques and model topologies. Motor bearing and shaft models are also reviewed, and techniques that are used to eliminate bearing current are evaluated. Modeling techniques for conventional rectifier-inverter systems are also summarized. EMI noise suppression techniques, including passive filter, Wheatstone bridge balance, active filter, and optimized modulation, are reviewed and compared based on the VFD system models.

  7. Model selection for the North American Breeding Bird Survey: A comparison of methods

    USGS Publications Warehouse

    Link, William; Sauer, John; Niven, Daniel

    2017-01-01

    The North American Breeding Bird Survey (BBS) provides data for >420 bird species at multiple geographic scales over 5 decades. Modern computational methods have facilitated the fitting of complex hierarchical models to these data. It is easy to propose and fit new models, but little attention has been given to model selection. Here, we discuss and illustrate model selection using leave-one-out cross validation, and the Bayesian Predictive Information Criterion (BPIC). Cross-validation is enormously computationally intensive; we thus evaluate the performance of the Watanabe-Akaike Information Criterion (WAIC) as a computationally efficient approximation to the BPIC. Our evaluation is based on analyses of 4 models as applied to 20 species covered by the BBS. Model selection based on BPIC provided no strong evidence of one model being consistently superior to the others; for 14/20 species, none of the models emerged as superior. For the remaining 6 species, a first-difference model of population trajectory was always among the best fitting. Our results show that WAIC is not reliable as a surrogate for BPIC. Development of appropriate model sets and their evaluation using BPIC is an important innovation for the analysis of BBS data.
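
    WAIC, evaluated in the study as a cheap approximation to the BPIC, is computed directly from the matrix of pointwise log-likelihoods across posterior draws: the log pointwise predictive density minus the summed posterior variances, times -2. A self-contained sketch with a tiny invented posterior sample:

```python
import math
import statistics

def waic(loglik):
    """WAIC from a matrix loglik[s][i] = log p(y_i | theta_s),
    with s indexing posterior draws and i indexing observations."""
    n_obs = len(loglik[0])
    lppd, p_waic = 0.0, 0.0
    for i in range(n_obs):
        col = [row[i] for row in loglik]
        # log of the posterior-mean likelihood for observation i
        lppd += math.log(statistics.fmean(math.exp(v) for v in col))
        # effective number of parameters: posterior variance of log-lik
        p_waic += statistics.variance(col)
    return -2 * (lppd - p_waic)

# Tiny illustrative posterior: 4 draws, 3 observations (made-up numbers)
loglik = [[-1.0, -0.9, -1.2],
          [-1.1, -1.0, -1.3],
          [-0.9, -0.8, -1.1],
          [-1.0, -1.0, -1.2]]
print(round(waic(loglik), 3))
```

    The study's caution applies here too: WAIC is an approximation, and its ranking need not agree with the cross-validation-based BPIC.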

  8. A resilience-based model for performance evaluation of information systems: the case of a gas company

    NASA Astrophysics Data System (ADS)

    Azadeh, A.; Salehi, V.; Salehi, R.

    2017-10-01

    Information systems (IS) are strongly influenced by changes in new technology and should react swiftly in response to external conditions. Resilience engineering is a new method that can enable these systems to absorb changes. In this study, a new framework is presented for performance evaluation of IS that includes DeLone and McLean's factors of success in addition to resilience. Hence, this study is an attempt to evaluate the impact of resilience on IS by the proposed model in the Iranian Gas Engineering and Development Company via the data obtained from questionnaires and a Fuzzy Data Envelopment Analysis (FDEA) approach. First, the FDEA model with α-cut = 0.05 was identified as the most suitable model for this application by running both the Banker, Charnes and Cooper (BCC) and the Charnes, Cooper and Rhodes (CCR) models of DEA and FDEA and selecting the appropriate model based on maximum mean efficiency. Then, the factors were ranked based on the results of a sensitivity analysis, which showed that resilience had a significantly higher impact on the proposed model relative to other factors. The results of this study were then verified by conducting the related ANOVA test. This is the first study that examines the impact of resilience on IS by statistical and mathematical approaches.

  9. Evaluation of optimal reservoir prospectivity using acoustic-impedance model inversion: A case study of an offshore field, western Niger Delta, Nigeria

    NASA Astrophysics Data System (ADS)

    Oyeyemi, Kehinde D.; Olowokere, Mary T.; Aizebeokhai, Ahzegbobor P.

    2017-12-01

    The evaluation of economic potential of any hydrocarbon field involves the understanding of the reservoir lithofacies and porosity variations. This in turn contributes immensely to subsequent reservoir management and field development. In this study, integrated 3D seismic data and well log data were employed to assess the quality and prospectivity of the delineated reservoirs (H1-H5) within the OPO field, western Niger Delta using a model-based seismic inversion technique. The model inversion results revealed four distinct sedimentary packages based on the subsurface acoustic impedance properties and shale contents. Low acoustic impedance model values were associated with the delineated hydrocarbon bearing units, denoting their high porosity and good quality. Application of model-based inverted velocity, density and acoustic impedance properties on the generated time slices of reservoirs also revealed a regional fault and prospects within the field.
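
    The quantities behind impedance inversion are simple: acoustic impedance is density times velocity, and the normal-incidence reflection coefficient at an interface is (Z2 - Z1)/(Z2 + Z1). A sketch with illustrative values (not field data), showing the impedance drop expected at a porous hydrocarbon-bearing unit:

```python
# Acoustic impedance and normal-incidence reflectivity -- the basic
# physics behind impedance inversion (values are illustrative only).
def impedance(density_kg_m3, velocity_m_s):
    return density_kg_m3 * velocity_m_s

def reflection_coefficient(z1, z2):
    return (z2 - z1) / (z2 + z1)

shale = impedance(2400.0, 3000.0)     # overlying shale: 7.2e6 kg/(m^2*s)
gas_sand = impedance(2100.0, 2500.0)  # porous gas sand: lower impedance
rc = reflection_coefficient(shale, gas_sand)
print(round(rc, 4))  # negative: impedance drops into the reservoir
```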

  10. Simplified ISCCP cloud regimes for evaluating cloudiness in CMIP5 models

    NASA Astrophysics Data System (ADS)

    Jin, Daeho; Oreopoulos, Lazaros; Lee, Dongmin

    2017-01-01

    We take advantage of ISCCP simulator data available for many models that participated in CMIP5, in order to introduce a framework for comparing model cloud output with corresponding ISCCP observations based on the cloud regime (CR) concept. Simplified global CRs are employed, derived from the co-variations of three variables, namely cloud optical thickness, cloud top pressure and cloud fraction (τ, p_c, CF). Following evaluation criteria established in a companion paper of ours (Jin et al. 2016), we assess model cloud simulation performance based on how well the simplified CRs are simulated in terms of similarity of centroids, global values and map correlations of relative-frequency-of-occurrence, and long-term total cloud amounts. Mirroring prior results, modeled clouds tend to be too optically thick and not as extensive as in observations. CRs with high-altitude clouds from storm activity are not as well simulated here compared to the previous study, but other regimes containing near-overcast low clouds show improvement. Models that have performed well in the companion paper against CRs defined by joint τ–p_c histograms distinguish themselves again here, but improvements for previously underperforming models are also seen. Averaging across models does not yield a drastically better picture, except for cloud geographical locations. Cloud evaluation with simplified regimes seems thus more forgiving than that using histogram-based CRs while still strict enough to reveal model weaknesses.
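
    Cloud regimes of this kind are typically obtained by clustering the co-varying properties. A minimal Lloyd's k-means over invented (τ, p_c, CF) triples sketches the idea; this is not the actual ISCCP clustering or its data.

```python
# Minimal Lloyd's k-means: cloud regimes as centroids of co-varying
# cloud properties. Data points and initialization are invented.
def kmeans(points, centroids, iters=20):
    for _ in range(iters):
        groups = [[] for _ in centroids]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            groups[d.index(min(d))].append(p)      # assign to nearest centroid
        centroids = [tuple(sum(x) / len(g) for x in zip(*g)) if g else c
                     for g, c in zip(groups, centroids)]  # recompute means
    return centroids

# (optical thickness, cloud-top pressure in hPa, cloud fraction)
points = [(2, 850, 0.90), (3, 800, 0.95), (2.5, 820, 0.85),  # low stratiform
          (20, 300, 0.60), (25, 250, 0.70), (22, 280, 0.65)]  # deep/high
regimes = kmeans(points, centroids=[points[0], points[3]])
print(regimes)
```

    In practice the variables would be normalized before clustering, since pressure in hPa dominates the raw Euclidean distance.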

  11. Selection of Polynomial Chaos Bases via Bayesian Model Uncertainty Methods with Applications to Sparse Approximation of PDEs with Stochastic Inputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karagiannis, Georgios; Lin, Guang

    2014-02-15

    Generalized polynomial chaos (gPC) expansions allow the representation of the solution of a stochastic system as a series of polynomial terms. The number of gPC terms increases dramatically with the dimension of the random input variables. When the number of gPC terms is larger than that of the available samples, a scenario that often occurs if the evaluations of the system are expensive, the evaluation of the gPC expansion can be inaccurate due to over-fitting. We propose a fully Bayesian approach that allows for global recovery of the stochastic solution, in both the spatial and random domains, by coupling Bayesian model uncertainty and regularization regression methods. It allows the evaluation of the PC coefficients on a grid of spatial points via (1) Bayesian model averaging or (2) the median probability model, and their construction as functions on the spatial domain via spline interpolation. The former accounts for model uncertainty and provides Bayes-optimal predictions, while the latter additionally provides a sparse representation of the solution by evaluating the expansion on a subset of dominating gPC bases. Moreover, the method quantifies the importance of the gPC bases through inclusion probabilities. We design an MCMC sampler that evaluates all the unknown quantities without the need of ad-hoc techniques. The proposed method is suitable for, but not restricted to, problems whose stochastic solution is sparse at the stochastic level with respect to the gPC bases while the deterministic solver involved is expensive. We demonstrate the good performance of the proposed method and make comparisons with others on elliptic stochastic partial differential equations with 1D, 14D, and 40D random inputs.

  12. Developing Statistical Physics Course Handout on Distribution Function Materials Based on Science, Technology, Engineering, and Mathematics

    NASA Astrophysics Data System (ADS)

    Riandry, M. A.; Ismet, I.; Akhsan, H.

    2017-09-01

    This study aims to produce a valid and practical statistical physics course handout on distribution function materials based on STEM. The Rowntree development model is used to produce the handout. The model consists of three stages: planning, development and evaluation. In this study, the evaluation stage used Tessmer formative evaluation, which consists of five stages: self-evaluation, expert review, one-to-one evaluation, small group evaluation and field test. However, the handout is limited to being tested on validity and practicality aspects, so the field test stage was not implemented. The data collection technique used walkthroughs and questionnaires. Subjects of this study are students of the 6th and 8th semesters of academic year 2016/2017 of the Physics Education Study Program of Sriwijaya University. The average result of the expert review is 87.31% (very valid category). The one-to-one evaluation obtained an average result of 89.42%, and the small group evaluation 85.92%. From the one-to-one and small group evaluation stages, the average student response to the handout is 87.67% (very practical category). Based on the results of the study, it can be concluded that the handout is valid and practical.

  13. Evaluation of Second-Level Inference in fMRI Analysis

    PubMed Central

    Roels, Sanne P.; Loeys, Tom; Moerkerke, Beatrijs

    2016-01-01

    We investigate the impact of decisions in the second-level (i.e., over subjects) inferential process in functional magnetic resonance imaging on (1) the balance between false positives and false negatives and on (2) data-analytical stability, both proxies for the reproducibility of results. Second-level analysis based on a mass univariate approach typically consists of three phases. First, one proceeds via a general linear model for a test image that consists of pooled information from different subjects; we evaluate models that take first-level (within-subject) variability into account and models that do not. Second, one proceeds via inference based on parametric assumptions or via permutation-based inference. Third, we evaluate three commonly used procedures to address the multiple testing problem: familywise error rate correction, False Discovery Rate (FDR) correction, and a two-step procedure with minimal cluster size. Based on a simulation study and real data, we find that the two-step procedure with minimal cluster size yields the most stable results, followed by familywise error rate correction. FDR correction yields the most variable results, for both permutation-based and parametric inference. Modeling the subject-specific variability yields a better balance between false positives and false negatives when using parametric inference. PMID:26819578
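    Of the three multiple-testing corrections compared, the FDR step is easy to state concretely; a plain-Python sketch of the Benjamini-Hochberg procedure, applied to a hypothetical list of per-voxel p-values, might look like:

```python
def benjamini_hochberg(pvals, q=0.05):
    # Benjamini-Hochberg step-up procedure: sort the m p-values, find the
    # largest rank k with p_(k) <= k*q/m, and reject the k smallest.
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * q / m:
            k_max = rank
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            reject[i] = True
    return reject
```

    With `pvals = [0.01, 0.5, 0.03, 0.02]` and `q = 0.05`, the three small p-values are rejected and the fourth is not, controlling the expected proportion of false discoveries rather than the familywise error rate.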

  14. Statistical modeling for visualization evaluation through data fusion.

    PubMed

    Chen, Xiaoyu; Jin, Ran

    2017-11-01

    There is high demand for data visualization that provides insights to users in various applications. However, a consistent, online visualization evaluation method to quantify mental workload or user preference is lacking, which leads to an inefficient visualization and user interface design process. Recently, the advancement of interactive and sensing technologies has made electroencephalogram (EEG) signals, eye movements and visualization logs available for user-centered evaluation. This paper proposes a data fusion model and an application procedure for quantitative and online visualization evaluation. Fifteen participants joined the study, based on three different visualization designs. The results provide a regularized regression model which can accurately predict the user's evaluation of task complexity, and indicate the significance of all three types of sensing data for visualization evaluation. This model can be widely applied to data visualization evaluation, and to other user-centered design evaluations and data analysis in human factors and ergonomics. Copyright © 2016 Elsevier Ltd. All rights reserved.
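    The paper's regularized regression fuses several sensing streams; the toy below shows only the core shrinkage idea on a single standardized predictor (the multi-feature, cross-validated model the authors fit is more involved, and the feature names are invented):

```python
def zscore(values):
    # put a sensing feature (an EEG index, a fixation count, a log metric)
    # on a common scale before fusing modalities
    m = sum(values) / len(values)
    s = (sum((v - m) ** 2 for v in values) / len(values)) ** 0.5
    return [(v - m) / s for v in values]

def ridge_slope(x, y, lam):
    # closed-form ridge estimate for a single centred predictor:
    # beta = sum(x*y) / (sum(x^2) + lambda); lam > 0 shrinks beta toward 0
    sxy = sum(a * b for a, b in zip(x, y))
    sxx = sum(a * a for a in x)
    return sxy / (sxx + lam)
```

    Standardizing each modality first keeps the single penalty `lam` from favouring whichever sensor happens to have the largest numeric range.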

  15. Evaluation of coral reef carbonate production models at a global scale

    NASA Astrophysics Data System (ADS)

    Jones, N. S.; Ridgwell, A.; Hendy, E. J.

    2014-09-01

    Calcification by coral reef communities is estimated to account for half of all carbonate produced in shallow water environments and more than 25% of the total carbonate buried in marine sediments globally. Production of calcium carbonate by coral reefs is therefore an important component of the global carbon cycle. It is also threatened by future global warming and other global change pressures. Numerical models of reefal carbonate production are essential for understanding how carbonate deposition responds to environmental conditions including future atmospheric CO2 concentrations, but these models must first be evaluated in terms of their skill in recreating present day calcification rates. Here we evaluate four published model descriptions of reef carbonate production in terms of their predictive power, at both local and global scales, by comparing carbonate budget outputs with independent estimates. We also compile available global data on reef calcification to produce an observation-based dataset for the model evaluation. The four calcification models are based on functions sensitive to combinations of light availability, aragonite saturation (Ωa) and temperature and were implemented within a specifically-developed global framework, the Global Reef Accretion Model (GRAM). None of the four models correlated with independent rate estimates of whole reef calcification. The temperature-only based approach was the only model output to significantly correlate with coral-calcification rate observations. The absence of any predictive power for whole reef systems, even when consistent at the scale of individual corals, points to the overriding importance of coral cover estimates in the calculations. Our work highlights the need for an ecosystem modeling approach, accounting for population dynamics in terms of mortality and recruitment and hence coral cover, in estimating global reef carbonate budgets. In addition, validation of reef carbonate budgets is severely hampered by limited and inconsistent methodology in reef-scale observations.
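    As an illustration of the temperature-only style of calcification function evaluated here, a hypothetical Gaussian thermal-performance curve can be written as follows (all parameter values are invented for the sketch, not GRAM's):

```python
import math

def calcification_rate(t_celsius, g_max=10.0, t_opt=27.0, sigma=3.0):
    # hypothetical Gaussian thermal response: the rate peaks at t_opt and
    # falls off symmetrically on either side; units are arbitrary here
    return g_max * math.exp(-((t_celsius - t_opt) ** 2) / (2 * sigma ** 2))
```

    The paper's point is that even a well-behaved rate function like this only predicts whole-reef budgets once it is multiplied by an accurate estimate of living coral cover.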

  16. Strategic directions for agent-based modeling: avoiding the YAAWN syndrome

    PubMed Central

    O’Sullivan, David; Evans, Tom; Manson, Steven; Metcalf, Sara; Ligmann-Zielinska, Arika; Bone, Chris

    2015-01-01

    In this short communication, we examine how agent-based modeling has become common in land change science and is increasingly used to develop case studies for particular times and places. There is a danger that the research community is missing a prime opportunity to learn broader lessons from the use of agent-based modeling (ABM), or at the very least not sharing these lessons more widely. How do we find an appropriate balance between empirically rich, realistic models and simpler theoretically grounded models? What are appropriate and effective approaches to model evaluation in light of uncertainties not only in model parameters but also in model structure? How can we best explore hybrid model structures that enable us to better understand the dynamics of the systems under study, recognizing that no single approach is best suited to this task? Under what circumstances – in terms of model complexity, model evaluation, and model structure – can ABMs be used most effectively to lead to new insight for stakeholders? We explore these questions in the hope of helping the growing community of land change scientists using models in their research to move from ‘yet another model’ to doing better science with models. PMID:27158257

  17. Conceptual astronomy: A novel model for teaching postsecondary science courses

    NASA Astrophysics Data System (ADS)

    Zeilik, Michael; Schau, Candace; Mattern, Nancy; Hall, Shannon; Teague, Kathleen W.; Bisard, Walter

    1997-10-01

    An innovative, conceptually based instructional model for teaching large undergraduate astronomy courses was designed, implemented, and evaluated in the Fall 1995 semester. This model was based on cognitive and educational theories of knowledge and, we believe, is applicable to other large postsecondary science courses. Major components were: (a) identification of the basic important concepts and their interrelationships that are necessary for connected understanding of astronomy in novice students; (b) use of these concepts and their interrelationships throughout the design, implementation, and evaluation stages of the model; (c) identification of students' prior knowledge and misconceptions; and (d) implementation of varied instructional strategies targeted toward encouraging conceptual understanding in students (i.e., instructional concept maps, cooperative small group work, homework assignments stressing concept application, and a conceptually based student assessment system). Evaluation included the development and use of three measures of conceptual understanding and one of attitudes toward studying astronomy. Over the semester, students showed very large increases in their understanding as assessed by a conceptually based multiple-choice measure of misconceptions, a select-and-fill-in concept map measure, and a relatedness-ratings measure. Attitudes, which were slightly positive before the course, changed slightly in a less favorable direction.

  18. Researches of fruit quality prediction model based on near infrared spectrum

    NASA Astrophysics Data System (ADS)

    Shen, Yulin; Li, Lian

    2018-04-01

    With the improvement in standards for food quality and safety, people pay more attention to the internal quality of fruits, so the measurement of fruit internal quality is increasingly important. In general, nondestructive soluble solids content (SSC) and total acid content (TAC) analysis of fruits is vital and effective for quality measurement in global fresh produce markets, so in this paper we aim at establishing a novel fruit internal quality prediction model based on SSC and TAC for near infrared spectra. Firstly, fruit quality prediction models based on PCA + BP neural network, PCA + GRNN network, PCA + BP AdaBoost strong classifier, PCA + ELM and PCA + LS_SVM classifier are designed and implemented. Then, in the NSCT domain, the median filter and the Savitzky-Golay filter are used to preprocess the spectral signal, and the Kennard-Stone algorithm is used to automatically select the training and test samples. Thirdly, we obtain the optimal models by comparing the 15 kinds of prediction model under a multi-classifier competition mechanism; specifically, non-parametric estimation is introduced to measure the effectiveness of the proposed models, with the reliability and variance of the non-parametric estimate of each prediction model used to evaluate its predictions, while the estimated value and confidence interval serve as a reference. The experimental results demonstrate that this model can better achieve the optimal evaluation of the internal quality of fruit. Finally, we employ cat swarm optimization to optimize the two best models obtained from the non-parametric estimation; empirical testing indicates that the proposed method provides more accurate and effective results than other forecasting methods.
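    The PCA step shared by all five model variants can be sketched in miniature; the version below finds the first principal component of 2-D data by power iteration on the sample covariance matrix (real spectra have hundreds of wavelengths, handled the same way in higher dimension):

```python
def principal_component_2d(points, iters=200):
    # first principal component of 2-D data: build the 2x2 sample
    # covariance matrix, then power-iterate to its dominant eigenvector
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    cxx = sum((p[0] - mx) ** 2 for p in points) / n
    cyy = sum((p[1] - my) ** 2 for p in points) / n
    cxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    v = (1.0, 0.0)
    for _ in range(iters):
        w = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = (w[0] / norm, w[1] / norm)
    return v
```

    Projecting each spectrum onto the leading components gives the low-dimensional inputs that the BP, GRNN, ELM and LS_SVM stages then model.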

  19. A Social Systems Approach to Evaluation Research.

    ERIC Educational Resources Information Center

    Olien, C. N.; And Others

    An information-control systems model for evaluation of adult education programs is offered and illustrated. The model is based upon identifying principal subsystems, such as source, channel and audience, which are involved in initiation, production, delivery and reception of educational messages. These subsystems are seen as separate but…

  20. Evaluating Uncertainty in Integrated Environmental Models: A Review of Concepts and Tools

    EPA Science Inventory

    This paper reviews concepts for evaluating integrated environmental models and discusses a list of relevant software-based tools. A simplified taxonomy for sources of uncertainty and a glossary of key terms with standard definitions are provided in the context of integrated appro...

  1. MODEL UNCERTAINTY ANALYSIS, FIELD DATA COLLECTION AND ANALYSIS OF CONTAMINATED VAPOR INTRUSION INTO BUILDINGS

    EPA Science Inventory

    To address uncertainty associated with the evaluation of vapor intrusion problems we are working on a three part strategy that includes: evaluation of uncertainty in model-based assessments; collection of field data and assessment of sites using EPA and state protocols.

  2. Development of a distributed biosphere hydrological model and its evaluation with the Southern Great Plains Experiments (SGP97 and SGP99)

    USDA-ARS?s Scientific Manuscript database

    A distributed biosphere hydrological model, the so called water and energy budget-based distributed hydrological model (WEB-DHM), has been developed by fully coupling a biosphere scheme (SiB2) with a geomorphology-based hydrological model (GBHM). SiB2 describes the transfer of turbulent fluxes (ener...

  3. Longitudinal Evaluation of a Scale-up Model for Teaching Mathematics with Trajectories and Technologies: Persistence of Effects in the Third Year

    ERIC Educational Resources Information Center

    Clements, Douglas H.; Sarama, Julie; Wolfe, Christopher B.; Spitler, Mary Elaine

    2013-01-01

    Using a cluster randomized trial design, we evaluated the persistence of effects of a research-based model for scaling up educational interventions. The model was implemented in 42 schools in two city districts serving low-resource communities, randomly assigned to three conditions. In pre-kindergarten, the two experimental interventions were…

  4. Automated Glioblastoma Segmentation Based on a Multiparametric Structured Unsupervised Classification

    PubMed Central

    Juan-Albarracín, Javier; Fuster-Garcia, Elies; Manjón, José V.; Robles, Montserrat; Aparici, F.; Martí-Bonmatí, L.; García-Gómez, Juan M.

    2015-01-01

    Automatic brain tumour segmentation has become a key component for the future of brain tumour treatment. Currently, most brain tumour segmentation approaches arise from the supervised learning standpoint, which requires a labelled training dataset from which to infer the models of the classes. The performance of these models is directly determined by the size and quality of the training corpus, whose retrieval becomes a tedious and time-consuming task. On the other hand, unsupervised approaches avoid these limitations but often do not reach results comparable to those of supervised methods. In this sense, we propose an automated unsupervised method for brain tumour segmentation based on anatomical Magnetic Resonance (MR) images. Four unsupervised classification algorithms, grouped by their structured or non-structured condition, were evaluated within our pipeline. As non-structured algorithms, we evaluated K-means, Fuzzy K-means and the Gaussian Mixture Model (GMM), whereas as a structured classification algorithm we evaluated the Gaussian Hidden Markov Random Field (GHMRF). An automated postprocess based on a statistical approach supported by tissue probability maps is proposed to automatically identify the tumour classes after segmentation. We evaluated our brain tumour segmentation method with the public BRAin Tumor Segmentation (BRATS) 2013 Test and Leaderboard datasets. Our approach based on the GMM improves on the results obtained by most of the supervised methods evaluated with the Leaderboard set and reaches the second position in the ranking. Our variant based on the GHMRF achieves the first position in the Test ranking of the unsupervised approaches and the seventh position in the general Test ranking, which confirms the method as a viable alternative for brain tumour segmentation. PMID:25978453
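    A toy version of the GMM stage, reduced to a single intensity feature and fitted by expectation-maximisation (the real pipeline is multivariate and adds the structured postprocess), could look like:

```python
import math

def em_gmm_1d(data, k=2, iters=50):
    # fit a 1-D Gaussian mixture by EM: alternate soft assignment of
    # points to components (E-step) with parameter re-estimation (M-step)
    lo, hi = min(data), max(data)
    means = [lo + (i + 0.5) * (hi - lo) / k for i in range(k)]
    var = [(hi - lo) ** 2 / 12.0] * k
    weights = [1.0 / k] * k
    for _ in range(iters):
        resp = []
        for x in data:
            dens = [weights[j] / math.sqrt(2 * math.pi * var[j])
                    * math.exp(-(x - means[j]) ** 2 / (2 * var[j]))
                    for j in range(k)]
            s = sum(dens)
            resp.append([d / s for d in dens])
        for j in range(k):
            nj = sum(r[j] for r in resp)
            means[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            var[j] = max(sum(r[j] * (x - means[j]) ** 2
                             for r, x in zip(resp, data)) / nj, 1e-6)
            weights[j] = nj / len(data)
    return means, var, weights
```

    On two well-separated intensity clusters the estimated means land on the cluster centres; in the paper's setting each recovered component plays the role of a candidate tissue class.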

  5. A novel individual-cell-based mathematical model based on multicellular tumour spheroids for evaluating doxorubicin-related delivery in avascular regions.

    PubMed

    Liu, Jiali; Yan, Fangrong; Chen, Hongzhu; Wang, Wenjie; Liu, Wenyue; Hao, Kun; Wang, Guangji; Zhou, Fang; Zhang, Jingwei

    2017-09-01

    Effective drug delivery in the avascular regions of tumours, which is crucial for the promising antitumour activity of doxorubicin-related therapy, is governed by two inseparable processes: intercellular diffusion and intracellular retention. To accurately evaluate doxorubicin-related delivery in the avascular regions, these two processes should be assessed together. Here we describe a new approach to such an assessment. An individual-cell-based mathematical model based on multicellular tumour spheroids was developed that describes the different intercellular diffusion and intracellular retention kinetics of doxorubicin in each cell layer. The different effects of a P-glycoprotein inhibitor (LY335979) and a hypoxia inhibitor (YC-1) were quantitatively evaluated and compared, in vitro (tumour spheroids) and in vivo (HepG2 tumours in mice). This approach was further tested by evaluating in these models, an experimental doxorubicin derivative, INNO 206, which is in Phase II clinical trials. Inhomogeneous, hypoxia-induced, P-glycoprotein expression compromised active transport of doxorubicin in the central area, that is, far from the vasculature. LY335979 inhibited efflux due to P-glycoprotein but limited levels of doxorubicin outside the inner cells, whereas YC-1 co-administration specifically increased doxorubicin accumulation in the inner cells without affecting the extracellular levels. INNO 206 exhibited a more effective distribution profile than doxorubicin. The individual-cell-based mathematical model accurately evaluated and predicted doxorubicin-related delivery and regulation in the avascular regions of tumours. The described framework provides a mechanistic basis for the proper development of doxorubicin-related drug co-administration profiles and nanoparticle development and could avoid unnecessary clinical trials. © 2017 The British Pharmacological Society.

  6. Neural-genetic synthesis for state-space controllers based on linear quadratic regulator design for eigenstructure assignment.

    PubMed

    da Fonseca Neto, João Viana; Abreu, Ivanildo Silva; da Silva, Fábio Nogueira

    2010-04-01

    Toward the synthesis of state-space controllers, a neural-genetic model based on the linear quadratic regulator design for the eigenstructure assignment of multivariable dynamic systems is presented. The neural-genetic model represents a fusion of a genetic algorithm and a recurrent neural network (RNN) to perform the selection of the weighting matrices and the algebraic Riccati equation solution, respectively. A fourth-order electric circuit model is used to evaluate the convergence of the computational intelligence paradigms and the control design method performance. The genetic search convergence evaluation is performed in terms of the fitness function statistics and the RNN convergence, which is evaluated by landscapes of the energy and norm, as a function of the parameter deviations. The control problem solution is evaluated in the time and frequency domains by the impulse response, singular values, and modal analysis.
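    The LQR core of the synthesis, stripped of the neural and genetic layers, reduces to solving an algebraic Riccati equation; for a scalar discrete-time system x_{t+1} = a x_t + b u_t with stage cost q x² + r u², the fixed-point iteration below suffices (a sketch, not the paper's RNN solver):

```python
def dare_scalar(a, b, q, r, iters=200):
    # solve the scalar discrete-time algebraic Riccati equation
    #   P = q + a^2 P - (a b P)^2 / (r + b^2 P)
    # by fixed-point iteration; return (P, K) with LQR gain u = -K x
    p = q
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    k = b * p * a / (r + b * b * p)
    return p, k
```

    For a = b = q = r = 1 the solution is the golden ratio, P = (1 + √5)/2, with gain K = 1/P, which makes the iteration easy to check by hand.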

  7. Evaluation of Disaster Preparedness Based on Simulation Exercises: A Comparison of Two Models.

    PubMed

    Rüter, Andres; Kurland, Lisa; Gryth, Dan; Murphy, Jason; Rådestad, Monica; Djalali, Ahmadreza

    2016-08-01

    The objective of this study was to highlight 2 models, the Hospital Incident Command System (HICS) and the Disaster Management Indicator model (DiMI), for evaluating the in-hospital management of a disaster situation through simulation exercises. Two disaster exercises, A and B, with similar scenarios were performed. Both exercises were evaluated with regard to actions, processes, and structures. After the exercises, the results were calculated and compared. In exercise A the HICS model indicated that 32% of the required positions for the immediate phase were taken under consideration with an average performance of 70%. For exercise B, the corresponding scores were 42% and 68%, respectively. According to the DiMI model, the results for exercise A were a score of 68% for management processes and 63% for management structure (staff skills). In B the results were 77% and 86%, respectively. Both models demonstrated acceptable results in relation to previous studies. More research in this area is needed to validate which of these methods best evaluates disaster preparedness based on simulation exercises or whether the methods are complementary and should therefore be used together. (Disaster Med Public Health Preparedness. 2016;10:544-548).

  8. A photosynthesis-based two-leaf canopy stomatal ...

    EPA Pesticide Factsheets

    A coupled photosynthesis-stomatal conductance model with single-layer sunlit and shaded leaf canopy scaling is implemented and evaluated in a diagnostic box model with the Pleim-Xiu land surface model (PX LSM) and ozone deposition model components taken directly from the meteorology and air quality modeling system—WRF/CMAQ (Weather Research and Forecast model and Community Multiscale Air Quality model). The photosynthesis-based model for PX LSM (PX PSN) is evaluated at a FLUXNET site for implementation against different parameterizations and the current PX LSM approach with a simple Jarvis function (PX Jarvis). Latent heat flux (LH) from PX PSN is further evaluated at five FLUXNET sites with different vegetation types and landscape characteristics. Simulated ozone deposition and flux from PX PSN are evaluated at one of the sites with ozone flux measurements. Overall, the PX PSN simulates LH as well as the PX Jarvis approach. The PX PSN, however, shows distinct advantages over the PX Jarvis approach for grassland that likely result from its treatment of C3 and C4 plants for CO2 assimilation. Simulations using Moderate Resolution Imaging Spectroradiometer (MODIS) leaf area index (LAI) rather than LAI measured at each site assess how the model would perform with grid averaged data used in WRF/CMAQ. MODIS LAI estimates degrade model performance at all sites but one site having exceptionally old and tall trees. Ozone deposition velocity and ozone flux along with LH
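    The "simple Jarvis function" used by the baseline PX LSM is a multiplicative form: a maximum conductance scaled by independent environmental stress factors. The sketch below uses invented stress functions and constants, not the PX Jarvis parameterization:

```python
def jarvis_conductance(par, temp_c, vpd_kpa, g_max=0.01):
    # Jarvis-style multiplicative model: each factor maps an environmental
    # driver to [0, 1] and scales the maximum stomatal conductance g_max
    f_par = par / (par + 100.0)                               # saturating light response
    f_temp = max(0.0, 1.0 - ((temp_c - 25.0) / 15.0) ** 2)    # optimum near 25 C
    f_vpd = max(0.0, 1.0 - 0.2 * vpd_kpa)                     # closure under dry air
    return g_max * f_par * f_temp * f_vpd
```

    The photosynthesis-based PX PSN replaces this empirical product with a conductance coupled to modeled CO2 assimilation, which is what lets it distinguish C3 from C4 vegetation.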

  9. Evaluation of the Community Multi-scale Air Quality (CMAQ) ...

    EPA Pesticide Factsheets

    The Community Multiscale Air Quality (CMAQ) model is a state-of-the-science air quality model that simulates the emission, transport and fate of numerous air pollutants, including ozone and particulate matter. The Computational Exposure Division (CED) of the U.S. Environmental Protection Agency develops the CMAQ model and periodically releases new versions of the model that include bug fixes and various other improvements to the modeling system. In the fall of 2015, CMAQ version 5.1 was released. This new version of CMAQ will contain important bug fixes to several issues that were identified in CMAQv5.0.2 and additionally include updates to other portions of the code. Several annual, and numerous episodic, CMAQv5.1 simulations were performed to assess the impact of these improvements on the model results. These results will be presented, along with a base evaluation of the performance of the CMAQv5.1 modeling system against available surface and upper-air measurements available during the time period simulated. The National Exposure Research Laboratory (NERL) Computational Exposure Division (CED) develops and evaluates data, decision-support tools, and models to be applied to media-specific or receptor-specific problem areas. CED uses modeling-based approaches to characterize exposures, evaluate fate and transport, and support environmental diagnostics/forensics with input from multiple data sources. It also develops media- and receptor-specific models, proces

  10. Lumped Parameter Models for Predicting Nitrogen Transport in Lower Coastal Plain Watersheds

    Treesearch

    Devendra M. Amatya; George M. Chescheir; Glen P. Fernandez; R. Wayne Skaggs; F. Birgand; J.W. Gilliam

    2003-01-01

    In recent years, physically based, comprehensive, distributed watershed-scale hydrologic/water quality models have been developed and applied to evaluate cumulative effects of land and water management practices on receiving waters. Although these complex physically based models are capable of simulating the impacts of these changes in large watersheds, they are often...

  11. Intelligent evaluation of color sensory quality of black tea by visible-near infrared spectroscopy technology: A comparison of spectra and color data information

    NASA Astrophysics Data System (ADS)

    Ouyang, Qin; Liu, Yan; Chen, Quansheng; Zhang, Zhengzhu; Zhao, Jiewen; Guo, Zhiming; Gu, Hang

    2017-06-01

    Instrumental testing of black tea samples, in place of human panel tests, has recently attracted considerable attention. This study investigated the feasibility of estimating the color sensory quality of black tea samples using the VIS-NIR spectroscopy technique, comparing the performance of models based on spectral and color information. In model calibration, variables were first selected by a genetic algorithm (GA); then nonlinear back propagation-artificial neural network (BPANN) models were established based on the optimal variables. In comparison with the other models, GA-BPANN models based on spectral information showed the best performance, with a correlation coefficient of 0.8935 and a root mean square error of 0.392 in the prediction set. In addition, models based on spectral information performed better than those based on color parameters. Therefore, the VIS-NIR spectroscopy technique is a promising tool for rapid and accurate evaluation of the sensory quality of black tea samples.
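    The GA variable-selection stage can be sketched with a bitmask population; here the fitness is a stand-in score (summed per-variable usefulness minus a size penalty) rather than the cross-validated BPANN error used in practice, and all names and numbers are illustrative:

```python
import random

def ga_select(scores, penalty=0.1, pop_size=20, gens=30, seed=0):
    # toy GA for wavelength/variable selection: evolve boolean masks with
    # elitism, one-point crossover and single-bit mutation
    rng = random.Random(seed)
    n = len(scores)

    def fitness(mask):
        return sum(s for s, m in zip(scores, mask) if m) - penalty * sum(mask)

    pop = [[rng.random() < 0.5 for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:pop_size // 2]            # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]          # one-point crossover
            i = rng.randrange(n)
            child[i] = not child[i]            # single-bit mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness), fitness
```

    With two informative variables and four near-useless ones, the search settles on a mask that keeps the informative pair and drops the rest, which is the behaviour the penalty term is there to encourage.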

  12. Intelligent evaluation of color sensory quality of black tea by visible-near infrared spectroscopy technology: A comparison of spectra and color data information.

    PubMed

    Ouyang, Qin; Liu, Yan; Chen, Quansheng; Zhang, Zhengzhu; Zhao, Jiewen; Guo, Zhiming; Gu, Hang

    2017-06-05

    Instrumental testing of black tea samples, in place of human panel tests, has recently attracted considerable attention. This study investigated the feasibility of estimating the color sensory quality of black tea samples using the VIS-NIR spectroscopy technique, comparing the performance of models based on spectral and color information. In model calibration, variables were first selected by a genetic algorithm (GA); then nonlinear back propagation-artificial neural network (BPANN) models were established based on the optimal variables. In comparison with the other models, GA-BPANN models based on spectral information showed the best performance, with a correlation coefficient of 0.8935 and a root mean square error of 0.392 in the prediction set. In addition, models based on spectral information performed better than those based on color parameters. Therefore, the VIS-NIR spectroscopy technique is a promising tool for rapid and accurate evaluation of the sensory quality of black tea samples. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. [Parameter of evidence-based medicine in health care economics].

    PubMed

    Wasem, J; Siebert, U

    1999-08-01

    In view of the scarcity of resources, economic evaluations in health care, in which not only the effects but also the costs of a medical intervention are examined and an incremental cost-outcome ratio is built, are an important supplement to the program of evidence-based medicine. Outcomes of a medical intervention can be measured by clinical effectiveness, quality-adjusted life years, and monetary valuation of benefits. As far as costs are concerned, direct medical costs, direct non-medical costs and indirect costs have to be considered in an economic evaluation. Data can be drawn from primary studies or secondary analyses; meta-analysis may be appropriate for synthesizing data. For the calculation of incremental cost-benefit ratios, decision-analytic models (decision trees, Markov models) are often necessary. Methodological and ethical limits on applying the results of economic evaluations to resource allocation decisions in health care have to be regarded: economic evaluations and the calculation of cost-outcome ratios should only support decision making, not replace it.
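    The two central calculations, the incremental cost-outcome ratio and the expected value at a chance node of a decision tree, are simple arithmetic; the figures in the sketch below are invented (costs in euros, outcomes in QALYs):

```python
def icer(cost_new, effect_new, cost_old, effect_old):
    # incremental cost-effectiveness ratio: extra cost per extra unit of
    # outcome (e.g. euros per quality-adjusted life year gained)
    return (cost_new - cost_old) / (effect_new - effect_old)

def expected_value(branches):
    # expected cost (or outcome) at a chance node of a decision tree:
    # branches is a list of (probability, value) pairs
    return sum(p * v for p, v in branches)
```

    For example, a therapy costing 12 000 with 6.0 QALYs against a comparator costing 10 000 with 5.5 QALYs has an ICER of 4 000 per QALY gained; the decision maker, not the calculation, judges whether that is acceptable.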

  14. A prognostic pollen emissions model for climate models (PECM1.0)

    NASA Astrophysics Data System (ADS)

    Wozniak, Matthew C.; Steiner, Allison L.

    2017-11-01

    We develop a prognostic model called Pollen Emissions for Climate Models (PECM) for use within regional and global climate models to simulate pollen counts over the seasonal cycle based on geography, vegetation type, and meteorological parameters. Using modern surface pollen count data, empirical relationships between prior-year annual average temperature and pollen season start dates and end dates are developed for deciduous broadleaf trees (Acer, Alnus, Betula, Fraxinus, Morus, Platanus, Populus, Quercus, Ulmus), evergreen needleleaf trees (Cupressaceae, Pinaceae), grasses (Poaceae; C3, C4), and ragweed (Ambrosia). This regression model explains as much as 57 % of the variance in pollen phenological dates, and it is used to create a climate-flexible phenology that can be used to study the response of wind-driven pollen emissions to climate change. The emissions model is evaluated in the Regional Climate Model version 4 (RegCM4) over the continental United States by prescribing an emission potential from PECM and transporting pollen as aerosol tracers. We evaluate two different pollen emissions scenarios in the model using (1) a taxa-specific land cover database, phenology, and emission potential, and (2) a plant functional type (PFT) land cover, phenology, and emission potential. The simulated surface pollen concentrations for both simulations are evaluated against observed surface pollen counts in five climatic subregions. Given prescribed pollen emissions, the RegCM4 simulates observed concentrations within an order of magnitude, although the performance of the simulations in any subregion is strongly related to the land cover representation and the number of observation sites used to create the empirical phenological relationship. The taxa-based model provides a better representation of the phenology of tree-based pollen counts than the PFT-based model; however, we note that the PFT-based version provides a useful and climate-flexible emissions model for the general representation of the pollen phenology over the United States.
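    The empirical phenology relationship is an ordinary least-squares fit; a minimal sketch regressing a hypothetical season start date (day of year) on prior-year mean temperature is:

```python
def ols_fit(x, y):
    # ordinary least-squares slope and intercept for a simple empirical
    # phenology model: season start date vs. prior-year mean temperature
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx
```

    A negative slope here would encode the familiar pattern that warmer prior years pull the pollen season earlier; PECM fits one such relationship per taxon (or PFT) and lets the climate model supply the temperature.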

  15. Model-based economic evaluation in Alzheimer's disease: a review of the methods available to model Alzheimer's disease progression.

    PubMed

    Green, Colin; Shearer, James; Ritchie, Craig W; Zajicek, John P

    2011-01-01

    To consider the methods available to model Alzheimer's disease (AD) progression over time to inform on the structure and development of model-based evaluations, and the future direction of modelling methods in AD. A systematic search of the health care literature was undertaken to identify methods to model disease progression in AD. Modelling methods are presented in a descriptive review. The literature search identified 42 studies presenting methods or applications of methods to model AD progression over time. The review identified 10 general modelling frameworks available to empirically model the progression of AD as part of a model-based evaluation. Seven of these general models are statistical models predicting progression of AD using a measure of cognitive function. The main concerns with models are on model structure, around the limited characterization of disease progression, and on the use of a limited number of health states to capture events related to disease progression over time. None of the available models have been able to present a comprehensive model of the natural history of AD. Although helpful, there are serious limitations in the methods available to model progression of AD over time. Advances are needed to better model the progression of AD and the effects of the disease on peoples' lives. Recent evidence supports the need for a multivariable approach to the modelling of AD progression, and indicates that a latent variable analytic approach to characterising AD progression is a promising avenue for advances in the statistical development of modelling methods. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
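    One of the simpler frameworks the review discusses, a Markov cohort model over a limited number of health states, can be sketched in a few lines; the three states (mild, severe, dead) and transition probabilities below are invented for illustration:

```python
def markov_cohort(transition, start, cycles):
    # propagate a cohort distribution over health states through a
    # discrete-time Markov model: transition[i][j] = P(state i -> state j)
    dist = list(start)
    n = len(dist)
    for _ in range(cycles):
        dist = [sum(dist[i] * transition[i][j] for i in range(n))
                for j in range(n)]
    return dist
```

    The review's criticism applies directly to this structure: with only a handful of states, disease progression is characterized coarsely, which motivates the multivariable and latent-variable approaches it recommends.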

  16. Commodity-based Approach for Evaluating the Value of Freight Moving on Texas’ Roadway Network

    DOT National Transportation Integrated Search

    2017-12-10

    The researchers took a commodity-based approach to evaluate the value of a list of selected commodities moved on the Texas freight network. This approach takes advantage of commodity-specific data sources and modeling processes. It provides a unique ...

  17. Activity-based costing: a practical model for cost calculation in radiotherapy.

    PubMed

    Lievens, Yolande; van den Bogaert, Walter; Kesteloot, Katrien

    2003-10-01

    The activity-based costing method was used to compute radiotherapy costs. This report describes the model developed, the calculated costs, and possible applications for the Leuven radiotherapy department. Activity-based costing is an advanced cost calculation technique that allocates resource costs to products based on activity consumption. In the Leuven model, a complex allocation principle with a large diversity of cost drivers was avoided by introducing an extra allocation step between activity groups and activities. A straightforward principle of time consumption, weighted by factors of treatment complexity, was used. The model was developed in an iterative way, progressively defining the constituting components (costs, activities, products, and cost drivers). Radiotherapy costs are predominantly determined by personnel and equipment costs. Treatment-related activities consume the greatest proportion of the resource costs, with treatment delivery the most important component. As a result, products with a prolonged total or daily treatment time are the most costly. The model was also used to illustrate the impact of changes in resource costs and in practice patterns. The presented activity-based costing model is a practical tool to evaluate the actual cost structure of a radiotherapy department and to evaluate possible resource or practice changes.
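
    The allocation principle described in the abstract — resource costs flow to activities, then to products in proportion to time consumed, weighted by treatment complexity — can be sketched as follows. The activity names, numbers, and the `allocate_costs` helper are illustrative inventions, not the Leuven department's actual figures or model:

```python
# Toy activity-based costing allocation: each activity's cost pool is split
# across treatments in proportion to complexity-weighted hours consumed.
# All names and numbers below are hypothetical.

def allocate_costs(activity_costs, usage):
    """activity_costs: {activity: total cost of that activity's pool}.
    usage: {treatment: {activity: (hours, complexity_weight)}}.
    Returns {treatment: allocated cost}."""
    # total weighted hours consumed per activity, across all treatments
    totals = {}
    for acts in usage.values():
        for a, (h, w) in acts.items():
            totals[a] = totals.get(a, 0.0) + h * w
    costs = {}
    for t, acts in usage.items():
        c = 0.0
        for a, (h, w) in acts.items():
            # treatment's share of the activity pool = its weighted hours
            # divided by everyone's weighted hours on that activity
            c += activity_costs[a] * (h * w) / totals[a]
        costs[t] = c
    return costs

example = allocate_costs(
    {"simulation": 1000.0, "delivery": 3000.0},
    {"standard": {"simulation": (1.0, 1.0), "delivery": (10.0, 1.0)},
     "complex":  {"simulation": (1.0, 2.0), "delivery": (10.0, 1.5)}},
)
```

    Because the weights multiply the hours, the "complex" treatment draws a larger share of every pool, reproducing the abstract's observation that prolonged, complex treatments are the most costly.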

  18. Evaluation of soy-based surface active copolymers as surfactant ingredients in model shampoo formulations.

    PubMed

    Popadyuk, A; Kalita, H; Chisholm, B J; Voronov, A

    2014-12-01

    A new non-toxic soybean oil-based polymeric surfactant (SBPS) for personal-care products was developed and extensively characterized, including an evaluation of the polymeric surfactant performance in model shampoo formulations. To experimentally assure applicability of the soy-based macromolecules in shampoos, either in combination with common anionic surfactants (in this study, sodium lauryl sulfate, SLS) or as a single surface-active ingredient, the testing of SBPS physicochemical properties, performance and visual assessment of SBPS-based model shampoos was carried out. The results obtained, including foaming and cleaning ability of model formulations, were compared to those with only SLS as a surfactant as well as to SLS-free shampoos. Overall, the results show that the presence of SBPS improves cleaning, foaming, and conditioning of model formulations. SBPS-based formulations meet major requirements of multifunctional shampoos - mild detergency, foaming, good conditioning, and aesthetic appeal, which are comparable to commercially available shampoos. In addition, examination of SBPS/SLS mixtures in model shampoos showed that the presence of the SBPS enables the concentration of SLS to be significantly reduced without sacrificing shampoo performance. © 2014 Society of Cosmetic Scientists and the Société Française de Cosmétologie.

  19. Evaluating the performance of a fault detection and diagnostic system for vapor compression equipment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Breuker, M.S.; Braun, J.E.

    This paper presents a detailed evaluation of the performance of a statistical, rule-based fault detection and diagnostic (FDD) technique presented by Rossi and Braun (1997). Steady-state and transient tests were performed on a simple rooftop air conditioner over a range of conditions and fault levels. The steady-state data without faults were used to train models that predict outputs for normal operation. The transient data with faults were used to evaluate FDD performance. The effect of a number of design variables on FDD sensitivity for different faults was evaluated and two prototype systems were specified for more complete evaluation. Good performance was achieved in detecting and diagnosing five faults using only six temperatures (2 input and 4 output) and linear models. The performance improved by about a factor of two when ten measurements (three input and seven output) and higher order models were used. This approach for evaluating and optimizing the performance of the statistical, rule-based FDD technique could be used as a design and evaluation tool when applying this FDD method to other packaged air-conditioning systems. Furthermore, the approach could also be modified to evaluate the performance of other FDD methods.
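
    The core mechanism — models trained on fault-free data predict output temperatures, and a fault is flagged when the residual exceeds a threshold — can be sketched in miniature. This is a one-input, one-output toy with made-up temperatures and an arbitrary threshold, not Rossi and Braun's actual technique or its statistical thresholds:

```python
# Residual-based fault detection sketch: fit a linear model on fault-free
# data, then flag measurements whose residual exceeds a threshold.
# Temperatures and threshold below are hypothetical.

def fit_line(xs, ys):
    """Least-squares fit y = a + b*x for one input, one output."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def detect_fault(model, x, y, threshold):
    """True if the observed output y deviates from the model's
    prediction at driving condition x by more than the threshold."""
    a, b = model
    return abs(y - (a + b * x)) > threshold

# Train on normal operation: supply temperature vs. ambient temperature.
train_x = [25.0, 28.0, 31.0, 34.0]
train_y = [12.0, 13.5, 15.0, 16.5]
model = fit_line(train_x, train_y)

# A measurement far off the normal-operation model is flagged.
fault = detect_fault(model, 30.0, 19.0, threshold=1.0)
```

    The paper's design variables (number of measurements, model order) correspond here to how many such input/output pairs are modeled and how rich each model is.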

  20. Evaluate Yourself. Evaluation: Research-Based Decision Making Series, Number 9304.

    ERIC Educational Resources Information Center

    Fetterman, David M.

    This document considers both self-examination and external evaluation of gifted and talented education programs. Principles of the self-examination process are offered, noting similarities to external evaluation models. Principles of self-evaluation efforts include the importance of maintaining a nonjudgmental orientation, soliciting views from…

  1. Mosquito population dynamics from cellular automata-based simulation

    NASA Astrophysics Data System (ADS)

    Syafarina, Inna; Sadikin, Rifki; Nuraini, Nuning

    2016-02-01

    In this paper we present an innovative model for simulating mosquito-vector population dynamics. The simulation consists of two stages: demography and dispersal dynamics. For the demography simulation, we follow an existing model of the mosquito life cycle. For dispersal of the vector, we use a cellular automata-based model: in the simulation, each individual vector is able to move to another grid cell based on a random walk. Our model is also capable of representing an immunity factor for each grid cell. We simulate the model to evaluate its correctness. Based on the simulations, we conclude that our model is correct; however, it needs to be improved by finding realistic parameters that match real data.
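
    The dispersal stage can be sketched as a random walk on a grid. This toy omits the demography stage and the per-cell immunity factor; the grid size, neighbourhood, and boundary rule are assumptions, not the paper's actual parameters:

```python
import random

# Cellular-automata-style dispersal sketch: each individual sits in a grid
# cell and takes one random-walk step per time step (4-neighbourhood),
# clamped to the grid boundary.

def step(positions, size, rng):
    """Advance every individual by one random-walk step on an n x n grid."""
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    out = []
    for (r, c) in positions:
        dr, dc = rng.choice(moves)
        out.append((min(max(r + dr, 0), size - 1),
                    min(max(c + dc, 0), size - 1)))
    return out

rng = random.Random(0)          # seeded for reproducibility
pop = [(5, 5)] * 100            # 100 individuals released at the centre
for _ in range(10):
    pop = step(pop, size=11, rng=rng)
```

    After a few steps the population spreads out from the release cell; a full model would interleave these dispersal steps with the demographic (life-cycle) update.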

  2. The AgESGUI geospatial simulation system for environmental model application and evaluation

    USDA-ARS?s Scientific Manuscript database

    Practical decision making in spatially-distributed environmental assessment and management is increasingly being based on environmental process-based models linked to geographical information systems (GIS). Furthermore, powerful computers and Internet-accessible assessment tools are providing much g...

  3. Educational Television: Brazil.

    ERIC Educational Resources Information Center

    Bretz, R.; Shinar, D.

    Based on evaluation of nine Brazilian educational television centers, an Instructional Television Training Model (ITV) was developed to aid in determining and designing training requirements for instructional television systems. Analysis based on this model would include these tasks: (1) determine instructional purpose of the television…

  4. Evaluating performances of simplified physically based landslide susceptibility models.

    NASA Astrophysics Data System (ADS)

    Capparelli, Giovanna; Formetta, Giuseppe; Versace, Pasquale

    2015-04-01

    Rainfall-induced shallow landslides cause significant damage, involving loss of life and property. Predicting locations susceptible to shallow landslides is a complex task that involves many disciplines: hydrology, geotechnical science, geomorphology, and statistics. Usually, two main approaches are used to accomplish this task: statistical or physically based models. This paper presents a package of GIS-based models for landslide susceptibility analysis. It was integrated into the NewAge-JGrass hydrological model using the Object Modeling System (OMS) modeling framework. The package includes three simplified physically based models for landslide susceptibility analysis (M1, M2, and M3) and a component for model verification. It computes eight goodness-of-fit (GOF) indices by comparing model results and measurement data pixel by pixel. Moreover, the package's integration into NewAge-JGrass allows the use of other components, such as geographic information system tools to manage input-output processes and automatic calibration algorithms to estimate model parameters. The system offers the possibility to investigate and fairly compare the quality and robustness of models and model parameters, according to a procedure that includes: i) model parameter estimation by optimizing each GOF index separately, ii) model evaluation in the ROC plane using each optimal parameter set, and iii) GOF robustness evaluation by assessing sensitivity to input parameter variation. This procedure was repeated for all three models. The system was applied to a case study in Calabria (Italy) along the Salerno-Reggio Calabria highway, between Cosenza and the Altilia municipality. The analysis showed that, among all the optimized indices and all three models, Average Index (AI) optimization coupled with model M3 is the best modeling solution for our test case. This research was funded by PON Project No. 01_01503 "Integrated Systems for Hydrogeological Risk Monitoring, Early Warning and Mitigation Along the Main Lifelines", CUP B31H11000370005, in the framework of the National Operational Program for "Research and Competitiveness" 2007-2013.
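
    The pixel-by-pixel comparison behind such GOF indices can be sketched as follows. The three indices shown (true positive rate and false positive rate, which locate a model in the ROC plane, plus accuracy) are standard confusion-matrix measures; the paper's eight GOF indices and its Average Index may be defined differently:

```python
# Compare a binary susceptibility map (pred) with observed landslide pixels
# (obs), pixel by pixel, via the confusion matrix. Maps are flattened to
# 0/1 lists; the example values are made up.

def confusion(pred, obs):
    """Return (tp, tn, fp, fn) counts from paired 0/1 pixel values."""
    tp = sum(p and o for p, o in zip(pred, obs))
    tn = sum((not p) and (not o) for p, o in zip(pred, obs))
    fp = sum(p and (not o) for p, o in zip(pred, obs))
    fn = sum((not p) and o for p, o in zip(pred, obs))
    return tp, tn, fp, fn

def gof_indices(pred, obs):
    tp, tn, fp, fn = confusion(pred, obs)
    return {
        "TPR": tp / (tp + fn),              # sensitivity: ROC-plane y-axis
        "FPR": fp / (fp + tn),              # ROC-plane x-axis
        "ACC": (tp + tn) / (tp + tn + fp + fn),
    }

pred = [1, 1, 0, 0, 1, 0, 0, 0]   # model: susceptible pixels
obs  = [1, 0, 0, 0, 1, 1, 0, 0]   # observed landslide pixels
idx = gof_indices(pred, obs)
```

    Calibrating a model by optimizing one such index, as in step i) of the paper's procedure, means searching parameter space for the value that maximizes (or minimizes) the chosen entry of this dictionary.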

  5. Word sense disambiguation in the clinical domain: a comparison of knowledge-rich and knowledge-poor unsupervised methods

    PubMed Central

    Chasin, Rachel; Rumshisky, Anna; Uzuner, Ozlem; Szolovits, Peter

    2014-01-01

    Objective To evaluate state-of-the-art unsupervised methods on the word sense disambiguation (WSD) task in the clinical domain. In particular, to compare graph-based approaches relying on a clinical knowledge base with bottom-up topic-modeling-based approaches. We investigate several enhancements to the topic-modeling techniques that use domain-specific knowledge sources. Materials and methods The graph-based methods use variations of PageRank and distance-based similarity metrics, operating over the Unified Medical Language System (UMLS). Topic-modeling methods use unlabeled data from the Multiparameter Intelligent Monitoring in Intensive Care (MIMIC II) database to derive models for each ambiguous word. We investigate the impact of using different linguistic features for topic models, including UMLS-based and syntactic features. We use a sense-tagged clinical dataset from the Mayo Clinic for evaluation. Results The topic-modeling methods achieve 66.9% accuracy on a subset of the Mayo Clinic's data, while the graph-based methods only reach the 40–50% range, with a most-frequent-sense baseline of 56.5%. Features derived from the UMLS semantic type and concept hierarchies do not produce a gain over bag-of-words features in the topic models, but identifying phrases from UMLS and using syntax does help. Discussion Although topic models outperform graph-based methods, semantic features derived from the UMLS prove too noisy to improve performance beyond bag-of-words. Conclusions Topic modeling for WSD provides superior results in the clinical domain; however, integration of knowledge remains to be effectively exploited. PMID:24441986

  6. ECO-DRIVING MODELING ENVIRONMENT

    DOT National Transportation Integrated Search

    2015-11-01

    This research project aims to examine the eco-driving modeling capabilities of different traffic modeling tools available and to develop a driver-simulator-based eco-driving modeling tool to evaluate driver behavior and to reliably estimate or measur...

  7. Semi-Markov adjunction to the Computer-Aided Markov Evaluator (CAME)

    NASA Technical Reports Server (NTRS)

    Rosch, Gene; Hutchins, Monica A.; Leong, Frank J.; Babcock, Philip S., IV

    1988-01-01

    The rule-based Computer-Aided Markov Evaluator (CAME) program was expanded in its ability to incorporate the effect of fault-handling processes into the construction of a reliability model. The fault-handling processes are modeled as semi-Markov events, and CAME constructs an appropriate semi-Markov model. To solve the model, the program outputs it in a form which can be directly solved with the Semi-Markov Unreliability Range Evaluator (SURE) program. As a means of evaluating the alterations made to the CAME program, it is used to model the reliability of portions of the Integrated Airframe/Propulsion Control System Architecture (IAPSA 2) reference configuration. The reliability predictions are compared with a previous analysis. The results bear out the feasibility of utilizing CAME to generate appropriate semi-Markov models of fault-handling processes.
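
    The underlying idea of a Markov reliability model — propagate state probabilities until mass accumulates in an absorbing failure state — can be conveyed with a toy discrete-time version. CAME and SURE handle continuous-time and semi-Markov models with fault-handling transitions; this sketch, with invented states and transition probabilities, only illustrates the probability bookkeeping:

```python
# Toy discrete-time Markov reliability model with an absorbing failure
# state. The states and per-step transition probabilities are hypothetical.

def unreliability(transition, horizon, start=0, failed=-1):
    """transition: row-stochastic matrix (list of rows). Returns the
    probability of being in the absorbing `failed` state after `horizon`
    steps, starting from state `start`."""
    n = len(transition)
    p = [0.0] * n
    p[start] = 1.0
    for _ in range(horizon):
        q = [0.0] * n
        for i in range(n):
            for j in range(n):
                q[j] += p[i] * transition[i][j]
        p = q
    return p[failed]

# states: 0 = healthy, 1 = degraded (fault detected, reconfiguring),
#         2 = failed (absorbing)
T = [[0.98, 0.015, 0.005],
     [0.00, 0.95,  0.05 ],
     [0.00, 0.00,  1.00 ]]
u = unreliability(T, horizon=100, failed=2)
```

    Semi-Markov models generalize this by letting the time spent in a state (e.g. a fault-handling delay) follow an arbitrary distribution rather than the geometric/exponential one implied here.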

  8. Evaluation of statistically downscaled GCM output as input for hydrological and stream temperature simulation in the Apalachicola–Chattahoochee–Flint River Basin (1961–99)

    USGS Publications Warehouse

    Hay, Lauren E.; LaFontaine, Jacob H.; Markstrom, Steven

    2014-01-01

    The accuracy of statistically downscaled general circulation model (GCM) simulations of daily surface climate for historical conditions (1961–99) and the implications when they are used to drive hydrologic and stream temperature models were assessed for the Apalachicola–Chattahoochee–Flint River basin (ACFB). The ACFB is a 50 000 km2 basin located in the southeastern United States. Three GCMs were statistically downscaled, using an asynchronous regional regression model (ARRM), to ⅛° grids of daily precipitation and minimum and maximum air temperature. These ARRM-based climate datasets were used as input to the Precipitation-Runoff Modeling System (PRMS), a deterministic, distributed-parameter, physical-process watershed model used to simulate and evaluate the effects of various combinations of climate and land use on watershed response. The ACFB was divided into 258 hydrologic response units (HRUs) in which the components of flow (groundwater, subsurface, and surface) are computed in response to climate, land surface, and subsurface characteristics of the basin. Daily simulations of flow components from PRMS were used with the climate to simulate in-stream water temperatures using the Stream Network Temperature (SNTemp) model, a mechanistic, one-dimensional heat transport model for branched stream networks. The climate, hydrology, and stream temperature for historical conditions were evaluated by comparing model outputs produced from historical climate forcings developed from gridded station data (GSD) versus those produced from the three statistically downscaled GCMs using the ARRM methodology. The PRMS and SNTemp models were forced with the GSD and the outputs produced were treated as “truth.” This allowed for a spatial comparison by HRU of the GSD-based output with ARRM-based output.
    Distributional similarities between GSD- and ARRM-based model outputs were compared using the two-sample Kolmogorov–Smirnov (KS) test in combination with descriptive metrics such as the mean and variance and an evaluation of rare and sustained events. In general, precipitation and streamflow quantities were negatively biased in the downscaled GCM outputs, and results indicate that the downscaled GCM simulations consistently underestimate the largest precipitation events relative to the GSD. The KS test results indicate that ARRM-based air temperatures are similar to GSD at the daily time step for the majority of the ACFB, with perhaps subweekly averaging for stream temperature. Depending on GCM and spatial location, ARRM-based precipitation and streamflow require averaging of up to 30 days to become similar to the GSD-based output. Evaluation of the model skill for historical conditions suggests some guidelines for use of future projections; while it seems correct to place greater confidence in evaluation metrics which perform well historically, this does not necessarily mean those metrics will accurately reflect model outputs for future climatic conditions. Results from this study indicate no “best” overall model, but the breadth of analysis can be used to give the product users an indication of the applicability of the results to address their particular problem. Since results for historical conditions indicate that model outputs can have significant biases associated with them, the range in future projections examined in terms of change relative to historical conditions for each individual GCM may be more appropriate.
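
    The two-sample KS statistic used for these distributional comparisons is simply the maximum vertical distance between the two empirical CDFs. A minimal stdlib implementation (the significance level used to accept or reject similarity is omitted here):

```python
from bisect import bisect_right

# Two-sample Kolmogorov-Smirnov statistic: max |ECDF_a(x) - ECDF_b(x)|
# over the pooled sample points.

def ks_statistic(a, b):
    a, b = sorted(a), sorted(b)

    def ecdf(s, x):
        # fraction of the sorted sample s that is <= x
        return bisect_right(s, x) / len(s)

    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in set(a) | set(b))

same = ks_statistic([1, 2, 3, 4], [1, 2, 3, 4])        # identical samples
shift = ks_statistic([1, 2, 3, 4], [11, 12, 13, 14])   # disjoint samples
```

    Applying this to daily GSD-based versus ARRM-based series, and then to the same series averaged over progressively longer windows, reproduces the study's procedure of finding the averaging period at which the distributions become similar.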

  9. Assessing Graduate Attributes: Building a Criteria-Based Competency Model

    ERIC Educational Resources Information Center

    Ipperciel, Donald; ElAtia, Samira

    2014-01-01

    Graduate attributes (GAs) have become a necessary framework of reference for the 21st century competency-based model of higher education. However, the issue of evaluating and assessing GAs still remains unchartered territory. In this article, we present a criteria-based method of assessment that allows for an institution-wide comparison of the…

  10. A Multilevel Latent Growth Curve Approach to Predicting Student Proficiency

    ERIC Educational Resources Information Center

    Choi, Kilchan; Goldschmidt, Pete

    2012-01-01

    Value-added models and growth-based accountability aim to evaluate school's performance based on student growth in learning. The current focus is on linking the results from value-added models to the ones from growth-based accountability systems including Adequate Yearly Progress decisions mandated by No Child Left Behind. We present a new…

  11. Fine-Scale Exposure to Allergenic Pollen in the Urban Environment: Evaluation of Land Use Regression Approach.

    PubMed

    Hjort, Jan; Hugg, Timo T; Antikainen, Harri; Rusanen, Jarmo; Sofiev, Mikhail; Kukkonen, Jaakko; Jaakkola, Maritta S; Jaakkola, Jouni J K

    2016-05-01

    Despite the recent developments in physically and chemically based analysis of atmospheric particles, no models exist for resolving the spatial variability of pollen concentration at urban scale. We developed a land use regression (LUR) approach for predicting spatial fine-scale allergenic pollen concentrations in the Helsinki metropolitan area, Finland, and evaluated the performance of the models against available empirical data. We used grass pollen data monitored at 16 sites in an urban area during the peak pollen season and geospatial environmental data. The main statistical method was generalized linear model (GLM). GLM-based LURs explained 79% of the spatial variation in the grass pollen data based on all samples, and 47% of the variation when samples from two sites with very high concentrations were excluded. In model evaluation, prediction errors ranged from 6% to 26% of the observed range of grass pollen concentrations. Our findings support the use of geospatial data-based statistical models to predict the spatial variation of allergenic grass pollen concentrations at intra-urban scales. A remote sensing-based vegetation index was the strongest predictor of pollen concentrations for exposure assessments at local scales. The LUR approach provides new opportunities to estimate the relations between environmental determinants and allergenic pollen concentration in human-modified environments at fine spatial scales. This approach could potentially be applied to estimate retrospectively pollen concentrations to be used for long-term exposure assessments. Hjort J, Hugg TT, Antikainen H, Rusanen J, Sofiev M, Kukkonen J, Jaakkola MS, Jaakkola JJ. 2016. Fine-scale exposure to allergenic pollen in the urban environment: evaluation of land use regression approach. Environ Health Perspect 124:619-626; http://dx.doi.org/10.1289/ehp.1509761.

  12. An introductory pharmacy practice experience based on a medication therapy management service model.

    PubMed

    Agness, Chanel F; Huynh, Donna; Brandt, Nicole

    2011-06-10

    To implement and evaluate an introductory pharmacy practice experience (IPPE) based on the medication therapy management (MTM) service model. Patient Care 2 is an IPPE that introduces third-year pharmacy students to the MTM service model. Students interacted with older adults to identify medication-related problems and develop recommendations using core MTM elements. Course outcome evaluations were based on the number of documented medication-related problems, recommendations, and student reviews. Fifty-seven older adults participated in the course. Students identified 52 medication-related problems and 66 medical problems, and documented 233 recommendations relating to health maintenance and wellness, pharmacotherapy, referrals, and education. Students reported having adequate experience performing core MTM elements. Patient Care 2 may serve as an experiential learning model for pharmacy schools to teach the core elements of MTM and provide patient care services to the community.

  13. CrowdMapping: A Crowdsourcing-Based Terminology Mapping Method for Medical Data Standardization.

    PubMed

    Mao, Huajian; Chi, Chenyang; Huang, Boyu; Meng, Haibin; Yu, Jinghui; Zhao, Dongsheng

    2017-01-01

    Standardized terminology is a prerequisite of data exchange in the analysis of clinical processes. However, data from different electronic health record systems are based on idiosyncratic terminology systems, especially when the data come from different hospitals and healthcare organizations. Terminology standardization is therefore necessary for medical data analysis. We propose a crowdsourcing-based terminology mapping method, CrowdMapping, to standardize the terminology in medical data. CrowdMapping uses a confidential model to determine how terminologies are mapped to a standard system, like ICD-10. The model uses mappings from different health care organizations and evaluates the diversity of the mappings to determine a more sophisticated mapping rule. Further, the CrowdMapping model enables users to rate the mapping result and interact with the model evaluation. CrowdMapping is a work-in-progress system; we present initial results of mapping terminologies.
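
    The crowdsourcing idea can be illustrated with a simple vote-based aggregation: mappings proposed by different organizations are pooled, and each local term is assigned the candidate code with the most support, with the vote share kept as a confidence score. This is a hypothetical sketch — CrowdMapping's actual model and its user-rating loop are more elaborate, and the terms and ICD-10 codes below are examples only:

```python
from collections import Counter

# Pool per-organization term mappings and pick, for each local term, the
# standard code with the most votes; keep the vote share as confidence.

def aggregate(mappings):
    """mappings: list of {local_term: standard_code} dicts, one per
    organization. Returns {local_term: (winning_code, confidence)}."""
    votes = {}
    for m in mappings:
        for term, code in m.items():
            votes.setdefault(term, Counter())[code] += 1
    result = {}
    for term, counter in votes.items():
        code, n = counter.most_common(1)[0]
        result[term] = (code, n / sum(counter.values()))
    return result

consensus = aggregate([
    {"heart attack": "I21", "high blood pressure": "I10"},
    {"heart attack": "I21", "high blood pressure": "I15"},
    {"heart attack": "I21"},
])
```

    Low-confidence entries (diverse mappings across organizations) are exactly the ones a system like CrowdMapping would route to users for rating and review.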

  14. Recent Achievements of the Collaboratory for the Study of Earthquake Predictability

    NASA Astrophysics Data System (ADS)

    Jordan, T. H.; Liukis, M.; Werner, M. J.; Schorlemmer, D.; Yu, J.; Maechling, P. J.; Jackson, D. D.; Rhoades, D. A.; Zechar, J. D.; Marzocchi, W.

    2016-12-01

    The Collaboratory for the Study of Earthquake Predictability (CSEP) supports a global program to conduct prospective earthquake forecasting experiments. CSEP testing centers are now operational in California, New Zealand, Japan, China, and Europe with 442 models under evaluation. The California testing center, started by SCEC on Sept 1, 2007, currently hosts 30-minute, 1-day, 3-month, 1-year and 5-year forecasts, both alarm-based and probabilistic, for California, the Western Pacific, and worldwide. Our tests are now based on the hypocentral locations and magnitudes of cataloged earthquakes, but we plan to test focal mechanisms, seismic hazard models, ground motion forecasts, and finite rupture forecasts as well. We have increased computational efficiency for high-resolution global experiments, such as the evaluation of the Global Earthquake Activity Rate (GEAR) model, introduced Bayesian ensemble models, and implemented support for non-Poissonian simulation-based forecast models. We are currently developing formats and procedures to evaluate externally hosted forecasts and predictions. CSEP supports the USGS program in operational earthquake forecasting and a DHS project to register and test external forecast procedures from experts outside seismology. We found that earthquakes as small as magnitude 2.5 provide important information on subsequent earthquakes larger than magnitude 5. A retrospective experiment for the 2010-2012 Canterbury earthquake sequence showed that some physics-based and hybrid models outperform catalog-based (e.g., ETAS) models. This experiment also demonstrates the ability of the CSEP infrastructure to support retrospective forecast testing. Current CSEP development activities include adoption of the Comprehensive Earthquake Catalog (ComCat) as an authorized data source, retrospective testing of simulation-based forecasts, and support for additive ensemble methods. 
We describe the open-source CSEP software that is available to researchers as they develop their forecast models. We also discuss how CSEP procedures are being adapted to intensity and ground motion prediction experiments as well as hazard model testing.

  15. Evaluation of climatic changes in South-Asia

    NASA Astrophysics Data System (ADS)

    Kjellstrom, Erik; Rana, Arun; Grigory, Nikulin; Renate, Wilcke; Hansson, Ulf; Kolax, Michael

    2016-04-01

    The literature provides ample evidence of climate change and its impacts on various sectors worldwide. In light of new advancements in climate modeling, the availability of several climate downscaling approaches, and more robust bias correction methods of varying complexity and strength, we performed a systematic evaluation of climate change impacts over the South-Asia region. We used different Regional Climate Models (RCMs) from the CORDEX domain, Global Climate Models (GCMs), and gridded observations for the study area to evaluate the models in a historical/control period (1980-2010) and changes in a future period (2010-2099). First, the GCMs and RCMs are evaluated against gridded observational datasets for the area, using precipitation and temperature as indicative variables. The observational datasets are also evaluated against a reliable reference set of observations, as noted in the literature. Bias, correlation, and changes (among other statistical measures) are calculated for the entire region and for both variables. The region was then sub-divided into smaller domains based on homogeneous precipitation zones to evaluate average changes over the time period. Spatial and temporal changes for the region are finally calculated to evaluate future changes. Future changes are calculated for two Representative Concentration Pathways (RCPs), the middle emission scenario (RCP4.5) and the high emission scenario (RCP8.5), and for both climatic variables, precipitation and temperature. Lastly, an evaluation of extremes is performed for the whole region based on precipitation- and temperature-based indices in the future dataset. Results indicate that the whole study region is under extreme stress in future climate scenarios for both climatic variables. Precipitation variability depends on location, leading to droughts in some areas and floods in others in the future, while temperature shows a consistent increase throughout the region regardless of location.

  16. Study on the evaluation method for fault displacement based on characterized source model

    NASA Astrophysics Data System (ADS)

    Tonagi, M.; Takahama, T.; Matsumoto, Y.; Inoue, N.; Irikura, K.; Dalguer, L. A.

    2016-12-01

    IAEA Specific Safety Guide (SSG) 9 states that probabilistic methods for evaluating fault displacement should be used if no sufficient basis is provided to decide conclusively, using the deterministic methodology, that the fault is not capable. In addition, the International Seismic Safety Centre compiles an ANNEX to SSG-9 on realizing seismic hazard assessment for nuclear facilities and shows the utility of deterministic and probabilistic evaluation methods for fault displacement. In Japan, important nuclear facilities are required to be established on ground where fault displacement will not arise when earthquakes occur in the future. Under these conditions, and based on these requirements, we need to develop evaluation methods for fault displacement to enhance safety in nuclear facilities. We are studying deterministic and probabilistic methods through tentative analyses using observed records, such as surface fault displacements and near-fault strong ground motions, of inland crustal earthquakes in which fault displacements arose. In this study, we introduce the concept of the evaluation methods for fault displacement. We then show parts of the tentative analysis results for the deterministic method as follows: (1) For the 1999 Chi-Chi earthquake, referring to the slip distribution estimated by waveform inversion, we construct a characterized source model (Miyake et al., 2003, BSSA) that can explain the observed near-fault broadband strong ground motions. (2) Referring to the characterized source model constructed in (1), we study an evaluation method for surface fault displacement using a hybrid method that combines the particle method and the distinct element method. Finally, we suggest a deterministic method to evaluate fault displacement based on a characterized source model. This research was part of the 2015 research project `Development of evaluating method for fault displacement` by the Secretariat of the Nuclear Regulation Authority (S/NRA), Japan.

  17. A New Multi-Criteria Evaluation Model Based on the Combination of Non-Additive Fuzzy AHP, Choquet Integral and Sugeno λ-Measure

    NASA Astrophysics Data System (ADS)

    Nadi, S.; Samiei, M.; Salari, H. R.; Karami, N.

    2017-09-01

    This paper proposes a new model for multi-criteria evaluation under uncertain conditions. In this model we consider the interaction between criteria, one of the most challenging issues especially in the presence of uncertainty. In this case the usual pairwise comparisons and weighted sums cannot be used to calculate the importance of criteria and to aggregate them. Our model is based on the combination of non-additive fuzzy linguistic preference relation AHP (FLPRAHP), the Choquet integral, and the Sugeno λ-measure. The proposed model captures fuzzy preferences of users and fuzzy values of criteria, and uses the Sugeno λ-measure to determine the importance of criteria and their interaction. Then, integrating the Choquet integral and FLPRAHP, all the interactions between criteria are taken into account with the least number of comparisons, and a final score for each alternative is determined. In this way we model a comprehensive set of interactions between criteria, leading to a more reliable result. An illustrative example presents the effectiveness and capability of the proposed model to evaluate different alternatives in a multi-criteria decision problem.
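
    The aggregation core of such a model can be sketched directly: given a fuzzy density g_i for each criterion, the Sugeno λ-measure fixes λ by requiring the measure of the full criterion set to equal 1 (∏(1 + λ·g_i) = 1 + λ), and the discrete Choquet integral then aggregates criterion scores against that measure. The densities and scores below are made-up illustrations, and the fuzzy-AHP step that would produce the densities is omitted:

```python
# Sugeno λ-measure + discrete Choquet integral sketch.

def solve_lambda(densities, tol=1e-12):
    """Solve prod(1 + lam*g_i) = 1 + lam for the nontrivial root lam
    (in (-1, 0) if densities sum above 1, in (0, inf) if below)."""
    def f(lam):
        p = 1.0
        for g in densities:
            p *= 1.0 + lam * g
        return p - (1.0 + lam)

    s = sum(densities)
    if abs(s - 1.0) < 1e-12:
        return 0.0                       # additive case
    lo, hi = (-1.0 + 1e-9, -1e-9) if s > 1 else (1e-9, 1.0)
    if s < 1:
        while f(hi) < 0:                 # grow bracket until sign change
            hi *= 2
    while hi - lo > tol:                 # bisection
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

def measure(subset, dens, lam):
    """Sugeno λ-measure of a set of criterion indices."""
    if lam == 0.0:
        return sum(dens[i] for i in subset)
    p = 1.0
    for i in subset:
        p *= 1.0 + lam * dens[i]
    return (p - 1.0) / lam

def choquet(scores, dens, lam):
    """Discrete Choquet integral of criterion scores w.r.t. the λ-measure:
    sort scores descending and weight each drop by the measure increment."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    total, prev_m, top = 0.0, 0.0, []
    for i in order:
        top.append(i)
        m = measure(top, dens, lam)
        total += scores[i] * (m - prev_m)
        prev_m = m
    return total

dens = [0.2, 0.3]           # hypothetical criterion densities (sum < 1)
lam = solve_lambda(dens)    # positive lam: criteria reinforce each other
score = choquet([0.9, 0.4], dens, lam)
```

    Because the densities sum to less than 1 here, λ is positive and the measure is superadditive, so criteria satisfied together count for more than the sum of their individual weights — the interaction effect the paper's model is built to capture.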

  18. Climate Model Diagnostic Analyzer

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon; Pan, Lei; Zhai, Chengxing; Tang, Benyang; Kubar, Terry; Zhang, Zia; Wang, Wei

    2015-01-01

    The comprehensive and innovative evaluation of climate models with newly available global observations is critically needed for the improvement of climate model current-state representation and future-state predictability. A climate model diagnostic evaluation process requires physics-based multi-variable analyses that typically involve large-volume and heterogeneous datasets, making them both computation- and data-intensive. Given the exploratory nature of climate data analyses and the explosive growth of datasets and service tools, scientists are struggling to keep track of their datasets, tools, and execution/study history, let alone sharing them with others. In response, we have developed a cloud-enabled, provenance-supported, web-service system called Climate Model Diagnostic Analyzer (CMDA). CMDA enables the physics-based, multivariable model performance evaluations and diagnoses through the comprehensive and synergistic use of multiple observational data, reanalysis data, and model outputs. At the same time, CMDA provides a crowd-sourcing space where scientists can organize their work efficiently and share their work with others. CMDA is empowered by many current state-of-the-art software packages in web service, provenance, and semantic search.

  19. Using a shared governance structure to evaluate the implementation of a new model of care: the shared experience of a performance improvement committee.

    PubMed

    Myers, Mary; Parchen, Debra; Geraci, Marilla; Brenholtz, Roger; Knisely-Carrigan, Denise; Hastings, Clare

    2013-10-01

    Sustaining change in the behaviors and habits of experienced practicing nurses can be frustrating and daunting, even when changes are based on evidence. Partnering with an active shared governance structure to communicate change and elicit feedback is an established method to foster partnership, equity, accountability, and ownership. Few recent exemplars in the literature link shared governance, change management, and evidence-based practice to transitions in care models. This article describes an innovative staff-driven approach used by nurses in a shared governance performance improvement committee to use evidence-based practice in determining the best methods to evaluate the implementation of a new model of care.

  20. Using a Shared Governance Structure to Evaluate the Implementation of a New Model of Care: The Shared Experience of a Performance Improvement Committee

    PubMed Central

    Myers, Mary; Parchen, Debra; Geraci, Marilla; Brenholtz, Roger; Knisely-Carrigan, Denise; Hastings, Clare

    2013-01-01

    Sustaining change in the behaviors and habits of experienced practicing nurses can be frustrating and daunting, even when changes are based on evidence. Partnering with an active shared governance structure to communicate change and elicit feedback is an established method to foster partnership, equity, accountability and ownership. Few recent exemplars in the literature link shared governance, change management and evidence-based practice to transitions in care models. This article describes an innovative staff-driven approach used by nurses in a shared governance performance improvement committee to use evidence based practice in determining the best methods to evaluate the implementation of a new model of care. PMID:24061583

  1. The algorithmic anatomy of model-based evaluation

    PubMed Central

    Daw, Nathaniel D.; Dayan, Peter

    2014-01-01

    Despite many debates in the first half of the twentieth century, it is now largely a truism that humans and other animals build models of their environments and use them for prediction and control. However, model-based (MB) reasoning presents severe computational challenges. Alternative, computationally simpler, model-free (MF) schemes have been suggested in the reinforcement learning literature, and have afforded influential accounts of behavioural and neural data. Here, we study the realization of MB calculations, and the ways that this might be woven together with MF values and evaluation methods. There are as yet mostly only hints in the literature as to the resulting tapestry, so we offer more preview than review. PMID:25267820

  2. Development of Weeds Density Evaluation System Based on RGB Sensor

    NASA Astrophysics Data System (ADS)

    Solahudin, M.; Slamet, W.; Wahyu, W.

    2018-05-01

    Weeds are plant competitors that can reduce yields through competition for sunlight, water, and soil nutrients. For chemical-based weed control, site-specific weed management, which accommodates the spatial and temporal variability of weed infestation when determining the appropriate herbicide dose via Variable Rate Technology (VRT), is preferable to the traditional approach of single-dose herbicide application. In such applications, determining the level of weed density is an important task, and several methods have been studied for evaluating the density of weed infestation. The objective of this study is to develop a system that evaluates weed density based on RGB (Red, Green, and Blue) sensors. An RGB sensor was used to acquire the RGB values of the field surface, and an artificial neural network (ANN) model was then used to determine the weed density. The ANN model was trained with 280 training samples (70%), 60 validation samples (15%), and 60 test samples (15%). In field tests, the proposed method evaluated weed density with an accuracy of 83.75%.
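
    The trained ANN itself is not available from the abstract. As a simple stand-in for mapping RGB readings to a weed-density estimate, a common hand-crafted vegetation feature is the excess-green index, which can be thresholded into coarse density classes; the thresholds below are illustrative assumptions, not the paper's calibration.

    ```python
    def excess_green(r, g, b):
        """Excess-green index ExG = 2g - r - b on chromaticity-normalised RGB
        (assumes r + g + b > 0)."""
        s = float(r + g + b)
        return 2.0 * (g / s) - (r / s) - (b / s)

    def weed_density_class(exg, low=0.05, high=0.20):
        """Map ExG to a coarse density label; thresholds are illustrative only."""
        if exg < low:
            return "low"
        return "medium" if exg < high else "high"
    ```

    A strongly green reading such as (R, G, B) = (50, 200, 50) yields a high ExG and a "high" label, while a grey reading like (100, 100, 100) yields ExG near zero. A trained ANN replaces this fixed thresholding with a learned mapping.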

  3. Evidence used in model-based economic evaluations for evaluating pharmacogenetic and pharmacogenomic tests: a systematic review protocol

    PubMed Central

    Peters, Jaime L; Cooper, Chris; Buchanan, James

    2015-01-01

    Introduction Decision models can be used to conduct economic evaluations of new pharmacogenetic and pharmacogenomic tests to ensure they offer value for money to healthcare systems. These models require a great deal of evidence, yet research suggests the evidence used is diverse and of uncertain quality. By conducting a systematic review, we aim to investigate the test-related evidence used to inform decision models developed for the economic evaluation of genetic tests. Methods and analysis We will search electronic databases including MEDLINE, EMBASE and NHS EEDs to identify model-based economic evaluations of pharmacogenetic and pharmacogenomic tests. The search will not be limited by language or date. Title and abstract screening will be conducted independently by 2 reviewers, with screening of full texts and data extraction conducted by 1 reviewer, and checked by another. Characteristics of the decision problem, the decision model and the test evidence used to inform the model will be extracted. Specifically, we will identify the reported evidence sources for the test-related evidence used, describe the study design and how the evidence was identified. A checklist developed specifically for decision analytic models will be used to critically appraise the models described in these studies. Variations in the test evidence used in the decision models will be explored across the included studies, and we will identify gaps in the evidence in terms of both quantity and quality. Dissemination The findings of this work will be disseminated via a peer-reviewed journal publication and at national and international conferences. PMID:26560056

  4. Independent Evaluation of Heavy-Truck Safety Applications Based on Vehicle-to-Vehicle and Vehicle-to-Infrastructure Communications Used in the Safety Pilot Model Deployment

    DOT National Transportation Integrated Search

    2016-01-01

    This report presents the methodology and results of the independent evaluation of heavy trucks (HTs) in the Safety Pilot Model Deployment (SPMD); part of the United States Department of Transportation's Intelligent Transportation Systems research p...

  5. Independent evaluation of light-vehicle safety applications based on vehicle-to-vehicle communications used in the 2012-2013 safety pilot model deployment

    DOT National Transportation Integrated Search

    2015-12-01

    This report presents the methodology and results of the independent evaluation of safety applications for passenger vehicles in the 2012-2013 Safety Pilot Model Deployment, part of the United States Department of Transportation's Intelligent Transp...

  6. Teaching, Learning and Evaluation Techniques in the Engineering Courses.

    ERIC Educational Resources Information Center

    Vermaas, Luiz Lenarth G.; Crepaldi, Paulo Cesar; Fowler, Fabio Roberto

    This article presents some techniques of professional formation from the Petra Model that can be applied in engineering programs. It describes the model's philosophy and its teaching methods for listening, making abstracts, studying, researching, team working, and problem solving. Some questions regarding planning and evaluation, based on the model, are also,…

  7. Edge gradients evaluation for 2D hybrid finite volume method model

    USDA-ARS?s Scientific Manuscript database

    In this study, a two-dimensional depth-integrated hydrodynamic model was developed using FVM on a hybrid unstructured collocated mesh system. To alleviate the negative effects of mesh irregularity and non-uniformity, a conservative evaluation method for edge gradients based on the second-order Tayl...

  8. EVALUATION AND SENSITIVITY ANALYSES RESULTS OF THE MESOPUFF II MODEL WITH CAPTEX MEASUREMENTS

    EPA Science Inventory

    The MESOPUFF II regional Lagrangian puff model has been evaluated and tested against measurements from the Cross-Appalachian Tracer Experiment (CAPTEX) data base in an effort to assess its ability to simulate the transport and dispersion of a nonreactive, nondepositing tracer plu...

  9. Evaluation and intercomparison of five major dry deposition ...

    EPA Pesticide Factsheets

    Dry deposition of various pollutants needs to be quantified in air quality monitoring networks as well as in chemical transport models. The inferential method is the most commonly used approach in which the dry deposition velocity (Vd) is empirically parameterized as a function of meteorological and biological conditions and pollutant species’ chemical properties. Earlier model intercomparison studies suggested that existing dry deposition algorithms produce quite different Vd values, e.g., up to a factor of 2 for monthly to annual average values for ozone, and sulfur and nitrogen species (Flechard et al., 2011; Schwede et al., 2011; Wu et al., 2011). To further evaluate model discrepancies using available flux data, this study compared the five dry deposition algorithms commonly used in North America and evaluated the models using five-year Vd(O3) and Vd(SO2) data generated from concentration gradient measurements above a temperate mixed forest in Canada. The five algorithms include: (1) the one used in the Canadian Air and Precipitation Monitoring Network (CAPMoN) and several Canadian air quality models based on Zhang et al. (2003), (2) the one used in the US Clean Air Status and Trends Network (CASTNET) based on Meyers et al. (1998), (3) the one used in the Community Multiscale Air Quality (CMAQ) model described in Pleim and Ran (2011), (4) the Noah land surface model coupled with a photosynthesis-based Gas Exchange Model (Noah-GEM) described in Wu et a

  10. Use of Knowledge Base Systems (EMDS) in Strategic and Tactical Forest Planning

    NASA Astrophysics Data System (ADS)

    Jensen, M. E.; Reynolds, K.; Stockmann, K.

    2008-12-01

    The USDA Forest Service 2008 Planning Rule requires Forest plans to provide a strategic vision for maintaining the sustainability of ecological, economic, and social systems across USFS lands through the identification of desired conditions and objectives. In this paper we show how knowledge-based systems can be efficiently used to evaluate disparate natural resource information to assess desired conditions and related objectives in Forest planning. We use the Ecosystem Management Decision Support (EMDS) system (http://www.institute.redlands.edu/emds/), which facilitates development of both logic-based models for evaluating ecosystem sustainability (desired conditions) and decision models to identify priority areas for integrated landscape restoration (objectives). The study area for our analysis spans 1,057 subwatersheds within western Montana and northern Idaho. Results of our study suggest that knowledge-based systems such as EMDS are well suited to both strategic and tactical planning and that the following points merit consideration in future National Forest (and other land management) planning efforts: 1) Logic models provide a consistent, transparent, and reproducible method for evaluating broad propositions about ecosystem sustainability such as: are watershed integrity, ecosystem and species diversity, social opportunities, and economic integrity in good shape across a planning area? The ability to evaluate such propositions in a formal logic framework also allows users the opportunity to evaluate statistical changes in outcomes over time, which could be very useful for regional and national reporting purposes and for addressing litigation; 2) The use of logic and decision models in strategic and tactical Forest planning provides a repository for expert knowledge (corporate memory) that is critical to the evaluation and management of ecosystem sustainability over time. 
This is especially true for the USFS and other federal resource agencies, which are likely to experience rapid turnover in tenured resource specialist positions within the next five years due to retirements; 3) Use of logic model output in decision models is an efficient method for synthesizing the typically large amounts of information needed to support integrated landscape restoration. Moreover, use of logic and decision models to design customized scenarios for integrated landscape restoration, as we have demonstrated with EMDS, offers substantial improvements to traditional GIS-based procedures such as suitability analysis. To our knowledge, this study represents the first attempt to link evaluations of desired conditions for ecosystem sustainability in strategic planning to tactical planning regarding the location of subwatersheds that best meet the objectives of integrated landscape restoration. The basic knowledge-based approach implemented in EMDS, with its logic (NetWeaver) and decision (Criterion Decision Plus) engines, is well suited both to multi-scale strategic planning and to multi-resource tactical planning.

  11. A constructive Indian country response to the evidence-based program mandate.

    PubMed

    Walker, R Dale; Bigelow, Douglas A

    2011-01-01

    Over the last 20 years, governmental mandates for preferentially funding evidence-based "model" practices and programs have become doctrine in some legislative bodies, federal agencies, and state agencies. It was assumed that what works in small-sample, controlled settings would work in all community settings, substantially improving safety, effectiveness, and value for money. The evidence-based "model" programs mandate has imposed immutable "core components," fidelity testing, alien programming and program developers, loss of familiar programs, and resource capacity requirements upon tribes, while infringing upon their tribal sovereignty and consultation rights. The tribal response in one state (Oregon) went through three phases: shock and rejection; proposing an alternative approach using criteria of cultural appropriateness, aspiring to evaluability; and adopting logic modeling. The state heard and accepted the argument that the tribal way of knowing is different and valid. Currently, a state-authorized tribal logic model and a review panel process are used to approve tribal best practices for state funding. This constructive response to the evidence-based program mandate elevates tribal practices in the funding and regulatory world and facilitates continuing quality improvement and evaluation, while ensuring that practices and programs remain based on local community context and culture. This article provides details of a model that could well serve tribes facing evidence-based model program mandates throughout the country.

  12. Evaluation of Liquid Fuel Spray Models for Hybrid RANS/LES and DLES Prediction of Turbulent Reactive Flows

    NASA Astrophysics Data System (ADS)

    Afshar, Ali

    An evaluation of Lagrangian-based, discrete-phase models for multi-component liquid sprays encountered in the combustors of gas turbine engines is considered. In particular, the spray modeling capabilities of the commercial software ANSYS Fluent were evaluated. Spray modeling was performed for various cold-flow validation cases, including a liquid jet in a cross-flow, an airblast atomizer, and a high-shear fuel nozzle. Droplet properties, including velocity and diameter, were investigated and compared with previous experimental and numerical results. Different primary and secondary breakup models were evaluated in this thesis; the secondary breakup models investigated include the Taylor analogy breakup (TAB) model, the wave model, the Kelvin-Helmholtz Rayleigh-Taylor (KHRT) model, and the stochastic secondary droplet (SSD) approach. The modeling of fuel sprays requires proper treatment of turbulence, so Reynolds-averaged Navier-Stokes (RANS), large eddy simulation (LES), hybrid RANS/LES, and dynamic LES (DLES) approaches were also considered for the turbulent flows involving sprays. The spray and turbulence models were evaluated using the available benchmark experimental data.

  13. Modeling of the rough spherical nanoparticles manipulation on a substrate based on the AFM nanorobot

    NASA Astrophysics Data System (ADS)

    Zakeri, M.; Faraji, J.

    2014-12-01

    In this paper, the dynamic behavior of rough spherical micro/nanoparticles during pulling/pushing on a flat substrate is investigated and analyzed. First, two hexagonal roughness models (George and Cooper) were studied and used to determine the adhesion force for manipulation of a rough particle on a flat substrate; these two models were then modified using the Rabinovich theory. The contact adhesion force between a rough particle and a flat substrate was determined, and the depth of penetration was obtained from the Johnson-Kendall-Roberts contact mechanics theory and the Schwartz method, according to the Cooper and George roughness models. The new contact theory was then used to derive a dynamic model for rough micro/nanoparticle manipulation on a flat substrate. Finally, the dynamic behavior of particles was simulated during pushing of rough spherical gold particles with radii of 50, 150, 400, 600, and 1,000 nm. Results from simulations of particles with several degrees of roughness on a flat substrate indicated that, compared with smooth particles, inherent roughness can reduce the critical force needed for sliding and rolling. For a fixed roughness radius, increasing the roughness height produced a greater reduction in the critical sliding and rolling forces, and the critical force likewise decreased as the roughness radius increased. Of the two models, the George roughness model predicted a larger adhesion force than the Cooper roughness model, and as a result the critical force predicted by the George model was closer to the critical force of a smooth particle.
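
    For reference, the Johnson-Kendall-Roberts theory cited above gives a simple closed form for the pull-off (adhesion) force of a smooth sphere on a flat substrate; the rough-particle corrections of the George, Cooper, and Rabinovich models are not reproduced here, and the work-of-adhesion value in the example is an assumed figure.

    ```python
    import math

    def jkr_pull_off_force(radius, work_of_adhesion):
        """Classical JKR pull-off force for a smooth sphere on a flat:
        F = (3/2) * pi * W * R   (R in m, W in J/m^2, F in N)."""
        return 1.5 * math.pi * work_of_adhesion * radius

    # Example: a 50 nm particle with an assumed W = 0.1 J/m^2
    f = jkr_pull_off_force(50e-9, 0.1)   # on the order of tens of nanonewtons
    ```

    The force scales linearly with particle radius, which is why the critical manipulation forces in the abstract grow with the 50-1,000 nm radii studied.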

  14. Node-Splitting Generalized Linear Mixed Models for Evaluation of Inconsistency in Network Meta-Analysis.

    PubMed

    Yu-Kang, Tu

    2016-12-01

    Network meta-analysis for multiple treatment comparisons has been a major development in evidence synthesis methodology. The validity of a network meta-analysis, however, can be threatened by inconsistency in evidence within the network. One particular issue of inconsistency is how to directly evaluate the inconsistency between direct and indirect evidence with regard to the effects difference between two treatments. A Bayesian node-splitting model was first proposed and a similar frequentist side-splitting model has been put forward recently. Yet, assigning the inconsistency parameter to one or the other of the two treatments or splitting the parameter symmetrically between the two treatments can yield different results when multi-arm trials are involved in the evaluation. We aimed to show that a side-splitting model can be viewed as a special case of design-by-treatment interaction model, and different parameterizations correspond to different design-by-treatment interactions. We demonstrated how to evaluate the side-splitting model using the arm-based generalized linear mixed model, and an example data set was used to compare results from the arm-based models with those from the contrast-based models. The three parameterizations of side-splitting make slightly different assumptions: the symmetrical method assumes that both treatments in a treatment contrast contribute to inconsistency between direct and indirect evidence, whereas the other two parameterizations assume that only one of the two treatments contributes to this inconsistency. With this understanding in mind, meta-analysts can then make a choice about how to implement the side-splitting method for their analysis. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  15. Using a logic model to relate the strategic to the tactical in program planning and evaluation: an illustration based on social norms interventions.

    PubMed

    Keller, Adrienne; Bauerle, Jennifer A

    2009-01-01

    Logic models are a ubiquitous tool for specifying the tactics--including implementation and evaluation--of interventions in the public health, health and social behaviors arenas. Similarly, social norms interventions are a common strategy, particularly in college settings, to address hazardous drinking and other dangerous or asocial behaviors. This paper illustrates an extension of logic models to include strategic as well as tactical components, using a specific example developed for social norms interventions. Placing the evaluation of projects within the context of this kind of logic model addresses issues related to the lack of a research design to evaluate effectiveness.

  16. Evaluation of strategies for nature-based solutions to drought: a decision support model at the national scale

    NASA Astrophysics Data System (ADS)

    Simpson, Mike; Ives, Matthew; Hall, Jim

    2016-04-01

    There is an increasing body of evidence in support of the use of nature based solutions as a strategy to mitigate drought. Restored or constructed wetlands, grasslands and in some cases forests have been used with success in numerous case studies. Such solutions remain underused in the UK, where they are not considered as part of long-term plans for supply by water companies. An important step is the translation of knowledge on the benefits of nature based solutions at the upland/catchment scale into a model of the impact of these solutions on national water resource planning in terms of financial costs, carbon benefits and robustness to drought. Our project, 'A National Scale Model of Green Infrastructure for Water Resources', addresses this issue through development of a model that can show the costs and benefits associated with a broad roll-out of nature based solutions for water supply. We have developed generalised models of both the hydrological effects of various classes and implementations of nature-based approaches and their economic impacts in terms of construction costs, running costs, time to maturity, land use and carbon benefits. Our next step will be to compare this work with our recent evaluation of conventional water infrastructure, allowing a case to be made in financial terms and in terms of security of water supply. By demonstrating the benefits of nature based solutions under multiple possible climate and population scenarios we aim to demonstrate the potential value of using nature based solutions as a component of future long-term water resource plans. Strategies for decision making regarding the selection of nature based and conventional approaches, developed through discussion with government and industry, will be applied to the final model. Our focus is on keeping our work relevant to the requirements of decision-makers involved in conventional water planning. 
We propose to present the outcomes of our model for the evaluation of nature-based solutions at catchment scale and ongoing results of our national-scale model.

  17. Methodology Evaluation Framework for Component-Based System Development.

    ERIC Educational Resources Information Center

    Dahanayake, Ajantha; Sol, Henk; Stojanovic, Zoran

    2003-01-01

    Explains component-based development (CBD) for distributed information systems and presents an evaluation framework, which highlights the extent to which a methodology is component oriented. Compares prominent CBD methods, discusses ways of modeling, and suggests that this is a first step towards a components-oriented systems development…

  18. Decomposition of the Mean Squared Error and NSE Performance Criteria: Implications for Improving Hydrological Modelling

    NASA Technical Reports Server (NTRS)

    Gupta, Hoshin V.; Kling, Harald; Yilmaz, Koray K.; Martinez-Baquero, Guillermo F.

    2009-01-01

    The mean squared error (MSE) and the related normalization, the Nash-Sutcliffe efficiency (NSE), are the two criteria most widely used for calibration and evaluation of hydrological models with observed data. Here, we present a diagnostically interesting decomposition of NSE (and hence MSE), which facilitates analysis of the relative importance of its different components in the context of hydrological modelling, and show how model calibration problems can arise due to interactions among these components. The analysis is illustrated by calibrating a simple conceptual precipitation-runoff model to daily data for a number of Austrian basins having a broad range of hydro-meteorological characteristics. Evaluation of the results clearly demonstrates the problems that can be associated with any calibration based on the NSE (or MSE) criterion. While we propose and test an alternative criterion that can help to reduce model calibration problems, the primary purpose of this study is not to present an improved measure of model performance. Instead, we seek to show that there are systematic problems inherent with any optimization based on formulations related to the MSE. The analysis and results have implications to the manner in which we calibrate and evaluate environmental models; we discuss these and suggest possible ways forward that may move us towards an improved and diagnostically meaningful approach to model performance evaluation and identification.
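
    The decomposition discussed above is NSE = 2*alpha*r - alpha^2 - beta_n^2, where r is the linear correlation between simulation and observation, alpha the ratio of simulated to observed standard deviation, and beta_n the mean bias normalized by the observed standard deviation. A short numerical sketch (synthetic data, not the Austrian basins):

    ```python
    import numpy as np

    def nse(sim, obs):
        """Nash-Sutcliffe efficiency: 1 - MSE / variance of the observations."""
        sim, obs = np.asarray(sim, float), np.asarray(obs, float)
        return 1.0 - np.mean((sim - obs) ** 2) / np.mean((obs - obs.mean()) ** 2)

    def nse_components(sim, obs):
        """Components of the decomposition NSE = 2*alpha*r - alpha**2 - beta_n**2."""
        sim, obs = np.asarray(sim, float), np.asarray(obs, float)
        r = np.corrcoef(sim, obs)[0, 1]                  # linear correlation
        alpha = sim.std() / obs.std()                    # variability ratio
        beta_n = (sim.mean() - obs.mean()) / obs.std()   # normalized bias
        return r, alpha, beta_n
    ```

    The identity holds exactly when population (ddof=0) statistics are used throughout; in calibration, the three components reveal whether a poor NSE stems from timing errors (r), damped variability (alpha), or systematic bias (beta_n).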

  19. Addressing issues associated with evaluating prediction models for survival endpoints based on the concordance statistic.

    PubMed

    Wang, Ming; Long, Qi

    2016-09-01

    Prediction models for disease risk and prognosis play an important role in biomedical research, and evaluating their predictive accuracy in the presence of censored data is of substantial interest. The standard concordance (c) statistic has been extended to provide a summary measure of predictive accuracy for survival models. Motivated by a prostate cancer study, we address several issues associated with evaluating survival prediction models based on c-statistic with a focus on estimators using the technique of inverse probability of censoring weighting (IPCW). Compared to the existing work, we provide complete results on the asymptotic properties of the IPCW estimators under the assumption of coarsening at random (CAR), and propose a sensitivity analysis under the mechanism of noncoarsening at random (NCAR). In addition, we extend the IPCW approach as well as the sensitivity analysis to high-dimensional settings. The predictive accuracy of prediction models for cancer recurrence after prostatectomy is assessed by applying the proposed approaches. We find that the estimated predictive accuracy for the models in consideration is sensitive to NCAR assumption, and thus identify the best predictive model. Finally, we further evaluate the performance of the proposed methods in both settings of low-dimensional and high-dimensional data under CAR and NCAR through simulations. © 2016, The International Biometric Society.
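
    A minimal sketch of an IPCW-weighted concordance statistic in the spirit described above (Uno-type weighting). The censoring survival function G is passed in as a known function here, whereas in practice it is estimated, for example by Kaplan-Meier on the censoring times; the asymptotics and sensitivity analyses of the paper are not reproduced.

    ```python
    def ipcw_cindex(time, event, risk, cens_surv):
        """IPCW concordance statistic (Uno-type weighting).

        cens_surv(t) is the censoring survival function G(t), assumed known
        here.  Comparable pairs (i, j) have an observed event at
        time[i] < time[j] and receive weight 1 / G(time[i])**2.
        """
        num = den = 0.0
        n = len(time)
        for i in range(n):
            if not event[i]:
                continue                       # only observed events anchor a pair
            w = 1.0 / cens_surv(time[i]) ** 2
            for j in range(n):
                if time[i] < time[j]:
                    den += w
                    if risk[i] > risk[j]:      # concordant: higher risk fails first
                        num += w
                    elif risk[i] == risk[j]:   # tie in predicted risk
                        num += 0.5 * w
        return num / den
    ```

    With no censoring (G identically 1) this reduces to Harrell's c-statistic; a perfectly concordant risk ordering gives c = 1 and a perfectly reversed one gives c = 0.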

  20. Comparison of chiller models for use in model-based fault detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sreedharan, Priya; Haves, Philip

    Selecting the model is an important and essential step in model-based fault detection and diagnosis (FDD). Factors considered in evaluating a model include accuracy, training data requirements, calibration effort, generality, and computational requirements. The objective of this study was to evaluate different modeling approaches for their applicability to model-based FDD of vapor compression chillers. Three different models were studied: the Gordon and Ng Universal Chiller model (2nd generation) and a modified version of the ASHRAE Primary Toolkit model, which are both based on first principles, and the DOE-2 chiller model, as implemented in CoolTools{trademark}, which is empirical. The models were compared in terms of their ability to reproduce the observed performance of an older centrifugal chiller operating in a commercial office building and a newer centrifugal chiller in a laboratory. All three models displayed similar levels of accuracy. Of the first-principles models, the Gordon-Ng model has the advantage of being linear in the parameters, which allows more robust parameter estimation methods to be used and facilitates estimation of the uncertainty in the parameter values. The ASHRAE Toolkit model may have advantages when refrigerant temperature measurements are also available. The DOE-2 model can be expected to have advantages when very limited data are available to calibrate the model, as long as one of the previously identified models in the CoolTools library matches the performance of the chiller in question.
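
    The practical consequence of a model being "linear in the parameters" is that it can be calibrated by ordinary least squares, with standard closed-form uncertainty estimates. The sketch below uses synthetic stand-in regressors (not the actual Gordon-Ng variable transformations, which are built from evaporator and condenser temperatures and load):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 50

    # Hypothetical regressors standing in for the transformed chiller variables
    # (assumed data, for illustration only).
    X = np.column_stack([
        np.ones(n),
        rng.uniform(5.0, 10.0, n),     # e.g. an evaporator-side term
        rng.uniform(25.0, 35.0, n),    # e.g. a condenser-side term
    ])
    true_theta = np.array([0.8, -0.05, 0.02])
    y = X @ true_theta + rng.normal(0.0, 1e-3, n)   # near-noiseless response

    # Linear-in-parameters => ordinary least squares recovers the coefficients
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)
    ```

    An empirical model like DOE-2, by contrast, relies on pre-fitted performance curves, which is why it needs little calibration data but must match the chiller in question.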

  1. Utilizing Adjoint-Based Error Estimates for Surrogate Models to Accurately Predict Probabilities of Events

    DOE PAGES

    Butler, Troy; Wildey, Timothy

    2018-01-01

    In this study, we develop a procedure to utilize error estimates for samples of a surrogate model to compute robust upper and lower bounds on estimates of probabilities of events. We show that these error estimates can also be used in an adaptive algorithm to simultaneously reduce the computational cost and increase the accuracy in estimating probabilities of events using computationally expensive high-fidelity models. Specifically, we introduce the notion of reliability of a sample of a surrogate model, and we prove that utilizing the surrogate model for the reliable samples and the high-fidelity model for the unreliable samples gives precisely the same estimate of the probability of the output event as would be obtained by evaluation of the original model for each sample. The adaptive algorithm uses the additional evaluations of the high-fidelity model for the unreliable samples to locally improve the surrogate model near the limit state, which significantly reduces the number of high-fidelity model evaluations as the limit state is resolved. Numerical results based on a recently developed adjoint-based approach for estimating the error in samples of a surrogate are provided to demonstrate (1) the robustness of the bounds on the probability of an event, and (2) that the adaptive enhancement algorithm provides a more accurate estimate of the probability of the QoI event than standard response surface approximation methods at a lower computational cost.
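
    The reliability idea can be illustrated on a toy problem: a sample is "reliable" when the surrogate's error bound cannot flip the sign of q(x) - threshold, so using the surrogate there (and the high-fidelity model elsewhere) reproduces the all-high-fidelity event probability exactly. All functions below are toy stand-ins, not the paper's adjoint-based error estimates.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def q_hifi(x):
        """'High-fidelity' quantity of interest (toy stand-in)."""
        return np.sin(3 * x) + 0.1 * x

    def q_surr(x):
        """Cheap surrogate of q_hifi."""
        return np.sin(3 * x)

    def err_bound(x):
        """Conservative pointwise bound: |q_hifi - q_surr| = 0.1*|x| <= bound."""
        return 0.1 * np.abs(x) + 1e-12

    threshold = 0.5                                    # event: q(x) > threshold
    x = rng.uniform(-1.0, 1.0, 1000)

    qs = q_surr(x)
    reliable = np.abs(qs - threshold) > err_bound(x)   # bound cannot flip the sign
    q_used = np.where(reliable, qs, q_hifi(x))         # hi-fi only where needed

    p_mixed = np.mean(q_used > threshold)              # mixed-model estimate
    p_hifi = np.mean(q_hifi(x) > threshold)            # all high-fidelity
    ```

    Here p_mixed equals p_hifi exactly while the expensive model is called only on the small unreliable fraction near the limit state; the paper's adaptive algorithm additionally refines the surrogate in exactly that region.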

  2. Utilizing Adjoint-Based Error Estimates for Surrogate Models to Accurately Predict Probabilities of Events

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butler, Troy; Wildey, Timothy

    In this study, we develop a procedure to utilize error estimates for samples of a surrogate model to compute robust upper and lower bounds on estimates of probabilities of events. We show that these error estimates can also be used in an adaptive algorithm to simultaneously reduce the computational cost and increase the accuracy in estimating probabilities of events using computationally expensive high-fidelity models. Specifically, we introduce the notion of reliability of a sample of a surrogate model, and we prove that utilizing the surrogate model for the reliable samples and the high-fidelity model for the unreliable samples gives precisely the same estimate of the probability of the output event as would be obtained by evaluation of the original model for each sample. The adaptive algorithm uses the additional evaluations of the high-fidelity model for the unreliable samples to locally improve the surrogate model near the limit state, which significantly reduces the number of high-fidelity model evaluations as the limit state is resolved. Numerical results based on a recently developed adjoint-based approach for estimating the error in samples of a surrogate are provided to demonstrate (1) the robustness of the bounds on the probability of an event, and (2) that the adaptive enhancement algorithm provides a more accurate estimate of the probability of the QoI event than standard response surface approximation methods at a lower computational cost.

  3. Evaluation of driver fatigue on two channels of EEG data.

    PubMed

    Li, Wei; He, Qi-chang; Fan, Xiu-min; Fei, Zhi-min

    2012-01-11

    Electroencephalogram (EEG) data is an effective indicator for evaluating driver fatigue. In the current paper, 16 channels of EEG data are collected and transformed into three bands (θ, α, and β). First, 12 types of energy parameters are computed based on the EEG data. Then, Grey Relational Analysis (GRA) is introduced to identify the optimal indicator of driver fatigue, after which the number of significant electrodes is reduced using Kernel Principal Component Analysis (KPCA). Finally, the evaluation model for driver fatigue is established with a regression equation based on the EEG data from two significant electrodes (Fp1 and O1). The experimental results verify that the model is effective in evaluating driver fatigue. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
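    Grey Relational Analysis, used above to pick the optimal fatigue indicator, ranks candidate series by their grey relational grade against a reference. The sketch below uses hypothetical fatigue and energy-parameter values, with the customary distinguishing coefficient ζ = 0.5; it illustrates the technique, not the study's data:

```python
def grey_relational_grades(reference, candidates, zeta=0.5):
    """Grey relational grade of each candidate series vs. a reference series
    (higher grade = stronger relation). Series are min-max normalized first."""
    def norm(s):
        lo, hi = min(s), max(s)
        return [(v - lo) / (hi - lo) for v in s]
    ref = norm(reference)
    deltas = [[abs(r - c) for r, c in zip(ref, norm(cand))] for cand in candidates]
    d_min = min(min(row) for row in deltas)
    d_max = max(max(row) for row in deltas)
    return [
        sum((d_min + zeta * d_max) / (d + zeta * d_max) for d in row) / len(row)
        for row in deltas
    ]

# Hypothetical fatigue reference vs. three candidate energy parameters.
fatigue = [1, 2, 3, 4, 5]
params = [[2, 4, 6, 8, 10],     # perfectly correlated with the reference
          [1, 3, 2, 5, 4],      # loosely related
          [5, 4, 3, 2, 1]]      # anti-correlated
grades = grey_relational_grades(fatigue, params)
```

    The perfectly correlated candidate receives the maximum grade of 1, so it would be selected as the fatigue indicator.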

  4. Model‐based economic evaluations in smoking cessation and their transferability to new contexts: a systematic review

    PubMed Central

    Berg, Marrit L.; Cheung, Kei Long; Hiligsmann, Mickaël; Evers, Silvia; de Kinderen, Reina J. A.; Kulchaitanaroaj, Puttarin

    2017-01-01

    Aims To identify different types of models used in economic evaluations of smoking cessation, analyse the quality of the included models by examining their attributes and ascertain their transferability to a new context. Methods A systematic review of the literature on the economic evaluation of smoking cessation interventions published between 1996 and April 2015, identified via Medline, EMBASE, the National Health Service (NHS) Economic Evaluation Database (NHS EED) and Health Technology Assessment (HTA). The checklist‐based quality and transferability scores of the included studies were based on the European Network of Health Economic Evaluation Databases (EURONHEED) criteria. Studies that were not in smoking cessation, not original research, not a model‐based economic evaluation, that did not consider an adult population or were not from a high‐income country were excluded. Findings Among the 64 economic evaluations included in the review, the state‐transition Markov model was the most frequently used method (n = 30/64), with quality adjusted life years (QALY) being the most frequently used outcome measure in a life‐time horizon. A small number of the included studies (13 of 64) were eligible for the EURONHEED transferability checklist. The overall transferability scores ranged from 0.50 to 0.97, with an average score of 0.75. The average score per section was 0.69 (range = 0.35–0.92). The relative transferability of the studies could not be established due to a limitation present in the EURONHEED method. Conclusion All existing economic evaluations in smoking cessation lack one or more key study attributes necessary to be fully transferable to a new context. PMID:28060453

  5. Agent-Based Computational Modeling to Examine How Individual Cell Morphology Affects Dosimetry

    EPA Science Inventory

    Cell-based models utilizing high-content screening (HCS) data have applications for predictive toxicology. Evaluating concentration-dependent effects on cell fate and state response is a fundamental utilization of HCS data. Although HCS assays may capture quantitative readouts at ...

  6. DEVELOPMENT OF A RATIONALLY BASED DESIGN PROTOCOL FOR THE ULTRAVIOLET LIGHT DISINFECTION PROCESS

    EPA Science Inventory

    A protocol is demonstrated for the design and evaluation of ultraviolet (UV) disinfection systems based on a mathematical model. The disinfection model incorporates the system's physical dimensions, the residence time distribution of the reactor and dispersion characteristics, th...

  7. Prospective Evaluation of a Model-Based Dosing Regimen for Amikacin in Preterm and Term Neonates in Clinical Practice

    PubMed Central

    De Cock, R. F. W.; Allegaert, K.; Vanhaesebrouck, S.; Danhof, M.; Knibbe, C. A. J.

    2015-01-01

    Based on a previously derived population pharmacokinetic model, a novel neonatal amikacin dosing regimen was developed. The aim of the current study was to prospectively evaluate this dosing regimen. First, early (before and after second dose) therapeutic drug monitoring (TDM) observations were evaluated for achieving target trough (<3 mg/liter) and peak (>24 mg/liter) levels. Second, all observed TDM concentrations were compared with model-predicted concentrations, whereby the results of a normalized prediction distribution error (NPDE) were considered. Subsequently, Monte Carlo simulations were performed. Finally, remaining causes limiting amikacin predictability (i.e., prescription errors and disease characteristics of outliers) were explored. In 579 neonates (median birth body weight, 2,285 [range, 420 to 4,850] g; postnatal age 2 days [range, 1 to 30 days]; gestational age, 34 weeks [range, 24 to 41 weeks]), 90.5% of the observed early peak levels reached 24 mg/liter, and 60.2% of the trough levels were <3 mg/liter (93.4% ≤5 mg/liter). Observations were accurately predicted by the model without bias, which was confirmed by the NPDE. Monte Carlo simulations showed that peak concentrations of >24 mg/liter were reached at steady state in almost all patients. Trough values of <3 mg/liter at steady state were documented in 78% to 100% and 45% to 96% of simulated cases with and without ibuprofen coadministration, respectively; suboptimal trough levels were found in patients with postnatal age <14 days and current weight of >2,000 g. Prospective evaluation of a model-based neonatal amikacin dosing regimen resulted in optimized peak and trough concentrations in almost all patients. Slightly adapted dosing for patient subgroups with suboptimal trough levels was proposed. This model-based approach improves neonatal dosing individualization. PMID:26248375

  8. A Data Model for Teleconsultation in Managing High-Risk Pregnancies: Design and Preliminary Evaluation

    PubMed Central

    Deldar, Kolsoum

    2017-01-01

    Background Teleconsultation is a guarantor for virtual supervision of clinical professors on clinical decisions made by medical residents in teaching hospitals. Type, format, volume, and quality of exchanged information have a great influence on the quality of remote clinical decisions or tele-decisions. Thus, it is necessary to develop a reliable and standard model for these clinical relationships. Objective The goal of this study was to design and evaluate a data model for teleconsultation in the management of high-risk pregnancies. Methods This study was implemented in three phases. In the first phase, a systematic review, a qualitative study, and a Delphi approach were done in selected teaching hospitals. Systematic extraction and localization of diagnostic items to develop the tele-decision clinical archetypes were performed as the second phase. Finally, the developed model was evaluated using predefined consultation scenarios. Results Our review study has shown that present medical consultations have no specific structure or template for patient information exchange. Furthermore, there are many challenges in the remote medical decision-making process, and some of them are related to the lack of the mentioned structure. The evaluation phase of our research has shown that data quality (P<.001), adequacy (P<.001), organization (P<.001), confidence (P<.001), and convenience (P<.001) had more scores in archetype-based consultation scenarios compared with routine-based ones. Conclusions Our archetype-based model could acquire better and higher scores in the data quality, adequacy, organization, confidence, and convenience dimensions than ones with routine scenarios. It is probable that the suggested archetype-based teleconsultation model may improve the quality of physician-physician remote medical consultations. PMID:29242181

  9. Efficient Monte Carlo sampling of inverse problems using a neural network-based forward—applied to GPR crosshole traveltime inversion

    NASA Astrophysics Data System (ADS)

    Hansen, T. M.; Cordua, K. S.

    2017-12-01

    Probabilistically formulated inverse problems can be solved using Monte Carlo-based sampling methods. In principle, both advanced prior information, based on, for example, complex geostatistical models, and non-linear forward models can be considered using such methods. However, Monte Carlo methods may be associated with huge computational costs that, in practice, limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical forward response of some earth model has to be evaluated. Here, it is suggested to replace a numerically complex evaluation of the forward problem with a trained neural network that can be evaluated very fast. This introduces a modeling error that is quantified probabilistically such that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first arrival traveltime inversion of crosshole ground penetrating radar data. An accurate forward model, based on 2-D full-waveform modeling followed by automatic traveltime picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the accurate and computationally expensive forward model, and also considerably faster and more accurate (i.e. with better resolution) than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of non-linear and non-Gaussian inverse problems that have to be solved using Monte Carlo sampling techniques.
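    Folding a quantified surrogate modeling error into the likelihood, as the abstract proposes, can be sketched with a plain Metropolis sampler in which the modeling-error variance is simply added to the data-noise variance. The linear "traveltime" forward, its cheap surrogate and all numbers below are invented for illustration:

```python
import math
import random

def metropolis(fast_forward, data, s_noise, s_model, n=20_000, step=0.1):
    """Metropolis sampling (flat prior assumed) where the surrogate's quantified
    modeling error (std s_model) is added to the data-noise variance."""
    var = s_noise ** 2 + s_model ** 2          # total probabilistic error budget
    def log_like(m):
        return -sum((d - fast_forward(m)) ** 2 for d in data) / (2.0 * var)
    m, ll, chain = 0.0, None, []
    ll = log_like(m)
    for _ in range(n):
        m_new = m + random.gauss(0.0, step)
        ll_new = log_like(m_new)
        if math.log(random.random()) < ll_new - ll:
            m, ll = m_new, ll_new
        chain.append(m)
    return chain

# Toy problem: the "accurate" forward is t(m) = 2m; the cheap surrogate 1.9m
# carries a modeling error that s_model accounts for. All values are invented.
random.seed(1)
truth = 1.0
data = [2.0 * truth + random.gauss(0.0, 0.1) for _ in range(25)]
chain = metropolis(lambda m: 1.9 * m, data, s_noise=0.1, s_model=0.1)
post_mean = sum(chain[5_000:]) / len(chain[5_000:])
```

    Because the surrogate is biased, the posterior centres slightly away from the true parameter, but the inflated variance keeps the truth within the credible range.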

  10. A new UK fission yield evaluation UKFY3.7

    NASA Astrophysics Data System (ADS)

    Mills, Robert William

    2017-09-01

    The JEFF neutron induced and spontaneous fission product yield evaluation is currently unchanged from JEFF-3.1.1, also known by its UK designation UKFY3.6A. It is based upon experimental data combined with empirically fitted mass, charge and isomeric state models which are then adjusted within the experimental and model uncertainties to conform to the physical constraints of the fission process. A new evaluation has been prepared for JEFF, called UKFY3.7, that incorporates new experimental data and replaces the current empirical models (multi-Gaussian fits of the mass distribution and the Wahl Zp model for the charge distribution, combined with parameter extrapolation) with predictions from GEF. The GEF model has the advantage that one set of parameters allows the prediction of many different fissioning nuclides at different excitation energies, unlike previous models where each fissioning nuclide at a specific excitation energy had to be fitted individually to the relevant experimental data. The new UKFY3.7 evaluation, submitted for testing as part of JEFF-3.3, is described alongside initial results of testing. In addition, initial ideas for future developments allowing the inclusion of new measurement types and a change from any neutron spectrum type to true neutron energy dependence are discussed. Also, a method is proposed to propagate uncertainties of fission product yields based upon the experimental data that underlie the fission yield evaluation. The covariance terms are determined from the evaluated cumulative and independent yields combined with the experimental uncertainties on the cumulative yield measurements.

  11. Using a Systematic Conceptual Model for a Process Evaluation of a Middle School Obesity Risk-Reduction Nutrition Curriculum Intervention: Choice, Control & Change

    PubMed Central

    Lee, Heewon; Contento, Isobel R.; Koch, Pamela

    2012-01-01

    Objective To use and review a conceptual model of process evaluation and to examine the implementation of a nutrition education curriculum, Choice, Control & Change, designed to promote dietary and physical activity behaviors that reduce obesity risk. Design A process evaluation study based on a systematic conceptual model. Setting Five middle schools in New York City. Participants 562 students in 20 classes and their science teachers (n=8). Main Outcome Measures Based on the model, teacher professional development, teacher implementation, and student reception were evaluated. Also measured were teacher characteristics, teachers’ curriculum evaluation, and satisfaction with teaching the curriculum. Analysis Descriptive statistics and Spearman’s Rho Correlation for quantitative analysis and content analysis for qualitative data were used. Results Mean score of the teacher professional development evaluation was 4.75 on a 5-point scale. Average teacher implementation rate was 73%, and student reception rate was 69%. Ongoing teacher support was highly valued by teachers. Teachers’ satisfaction with teaching the curriculum was highly correlated with students’ satisfaction (p <.05). Teachers’ perception of amount of student work was negatively correlated with implementation and with student satisfaction (p<.05). Conclusions and implications Use of a systematic conceptual model and comprehensive process measures improves understanding of the implementation process and helps educators to better implement interventions as designed. PMID:23321021

  12. Using a systematic conceptual model for a process evaluation of a middle school obesity risk-reduction nutrition curriculum intervention: choice, control & change.

    PubMed

    Lee, Heewon; Contento, Isobel R; Koch, Pamela

    2013-03-01

    To use and review a conceptual model of process evaluation and to examine the implementation of a nutrition education curriculum, Choice, Control & Change, designed to promote dietary and physical activity behaviors that reduce obesity risk. A process evaluation study based on a systematic conceptual model. Five middle schools in New York City. Five hundred sixty-two students in 20 classes and their science teachers (n = 8). Based on the model, teacher professional development, teacher implementation, and student reception were evaluated. Also measured were teacher characteristics, teachers' curriculum evaluation, and satisfaction with teaching the curriculum. Descriptive statistics and Spearman ρ correlation for quantitative analysis and content analysis for qualitative data were used. Mean score of the teacher professional development evaluation was 4.75 on a 5-point scale. Average teacher implementation rate was 73%, and the student reception rate was 69%. Ongoing teacher support was highly valued by teachers. Teacher satisfaction with teaching the curriculum was highly correlated with student satisfaction (P < .05). Teacher perception of amount of student work was negatively correlated with implementation and with student satisfaction (P < .05). Use of a systematic conceptual model and comprehensive process measures improves understanding of the implementation process and helps educators to better implement interventions as designed. Copyright © 2013 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.

  13. Evaluation model of wind energy resources and utilization efficiency of wind farm

    NASA Astrophysics Data System (ADS)

    Ma, Jie

    2018-04-01

    Due to the large amount of abandoned wind in wind farms, the establishment of a wind farm evaluation model is particularly important for the future development of wind farms. In this essay, considering the wind farm's wind energy situation, a Wind Energy Resource Model (WERM) and a Wind Energy Utilization Efficiency Model (WEUEM) are established to conduct a comprehensive assessment of the wind farm. The Wind Energy Resource Model (WERM) contains the average wind speed, average wind power density and turbulence intensity, which together assess the wind energy resource. Based on our model, combined with the actual measurement data of a wind farm, we calculate the indicators, and the results are in line with the actual situation. The future development of the wind farm can be planned based on this result. Thus, the proposed approach to establishing a wind farm assessment model has application value.
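    The three WERM inputs named above have standard definitions that can be computed directly from a wind-speed time series; the sample speeds and air density below are made-up illustrative values:

```python
import math

def wind_energy_indicators(speeds, rho=1.225):
    """Three WERM inputs from a time series of wind speeds (m/s):
    mean speed, mean wind power density (W/m^2, 0.5*rho*mean(v^3)),
    and turbulence intensity (std/mean)."""
    n = len(speeds)
    mean_v = sum(speeds) / n
    power_density = 0.5 * rho * sum(v ** 3 for v in speeds) / n
    std_v = math.sqrt(sum((v - mean_v) ** 2 for v in speeds) / n)
    return mean_v, power_density, std_v / mean_v

# Hypothetical 5-sample record of wind speeds at hub height.
mean_v, power_dens, turb = wind_energy_indicators([6.0, 8.0, 10.0, 8.0, 6.0])
```

    Note that power density averages v cubed, so it is dominated by the fastest intervals rather than by the mean speed.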

  14. Research on the recycling industry development model for typical exterior plastic components of end-of-life passenger vehicle based on the SWOT method.

    PubMed

    Zhang, Hongshen; Chen, Ming

    2013-11-01

    In-depth studies on the recycling of typical automotive exterior plastic parts are significant and beneficial for environmental protection, energy conservation, and the sustainable development of China. In the current study, several methods were used to analyze the recycling industry model for typical exterior parts of passenger vehicles in China. The strengths, weaknesses, opportunities, and challenges of the current recycling industry for typical exterior parts of passenger vehicles were analyzed comprehensively based on the SWOT method. The internal factor evaluation matrix and external factor evaluation matrix were used to evaluate the internal and external factors of the recycling industry. The industry was found to respond well to all the factors and to face good development opportunities. Then, a cross-linked strategy analysis for the typical exterior parts of the passenger car industry of China was conducted based on the SWOT analysis strategies and the established SWOT matrix. Finally, based on the aforementioned research, a recycling industry model led by automobile manufacturers was proposed. Copyright © 2013 Elsevier Ltd. All rights reserved.
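    An internal or external factor evaluation (IFE/EFE) matrix total, as used above, is a weighted sum of factor ratings, with a score above 2.5 conventionally read as responding well to the factors. The weights and ratings below are hypothetical, purely to show the arithmetic:

```python
def factor_evaluation_score(factors):
    """IFE/EFE matrix total: sum of weight * rating over all factors,
    where the weights must sum to 1 and ratings run from 1 to 4."""
    total_w = sum(w for w, _ in factors)
    assert abs(total_w - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * r for w, r in factors)

# Hypothetical internal factors as (weight, rating) pairs.
ife = factor_evaluation_score([(0.3, 4), (0.2, 3), (0.3, 2), (0.2, 1)])
```

    A total of 2.6 here would be read as a (slightly) above-average internal response.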

  15. Project-Based Learning Using Discussion and Lesson-Learned Methods via Social Media Model for Enhancing Problem Solving Skills

    ERIC Educational Resources Information Center

    Jewpanich, Chaiwat; Piriyasurawong, Pallop

    2015-01-01

    This research aims to 1) develop the project-based learning using discussion and lesson-learned methods via social media model (PBL-DLL SoMe Model) used for enhancing the problem solving skills of undergraduate education students, and 2) evaluate the PBL-DLL SoMe Model used for enhancing the problem solving skills of undergraduate education students.…

  16. Objective Quantification of Pre-and Postphonosurgery Vocal Fold Vibratory Characteristics Using High-Speed Videoendoscopy and a Harmonic Waveform Model

    ERIC Educational Resources Information Center

    Ikuma, Takeshi; Kunduk, Melda; McWhorter, Andrew J.

    2014-01-01

    Purpose: The model-based quantitative analysis of high-speed videoendoscopy (HSV) data at a low frame rate of 2,000 frames per second was assessed for its clinical adequacy. Stepwise regression was employed to evaluate the HSV parameters using harmonic models and their relationships to the Voice Handicap Index (VHI). Also, the model-based HSV…

  17. PPSITE - A New Method of Site Evaluation for Longleaf Pine: Model Development and User's Guide

    Treesearch

    Constance A. Harrington

    1990-01-01

    A model was developed to predict site index (base age 50 years) for longleaf pine (Pinus palustris Mill.). The model, named PPSITE, was based on soil characteristics, site location on the landscape, and land history. The model was constrained so that the relationship between site index and each soil-site variable was consistent with what was known...

  18. Evolution of Natural Attenuation Evaluation Protocols

    EPA Science Inventory

    Traditionally the evaluation of the efficacy of natural attenuation was based on changes in contaminant concentrations and mass reduction. Statistical tools and models such as Bioscreen provided evaluation protocols which now are being approached via other vehicles including m...

  19. Evaluative methodology for comprehensive water quality management planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dyer, H. L.

    Computer-based evaluative methodologies have been developed to provide for the analysis of coupled phenomena associated with natural resource comprehensive planning requirements. Provisions for planner/computer interaction have been included. Each of the simulation models developed is described in terms of its coded procedures. An application of the models for water quality management planning is presented; and the data requirements for each of the models are noted.

  20. Traffic protection in MPLS networks using an off-line flow optimization model

    NASA Astrophysics Data System (ADS)

    Krzesinski, Anthony E.; Muller, Karen E.

    2002-07-01

    MPLS-based recovery is intended to effect rapid and complete restoration of traffic affected by a fault in an MPLS network. Two MPLS-based recovery models have been proposed: IP re-routing which establishes recovery paths on demand, and protection switching which works with pre-established recovery paths. IP re-routing is robust and frugal since no resources are pre-committed but is inherently slower than protection switching which is intended to offer high reliability to premium services where fault recovery takes place at the 100 ms time scale. We present a model of protection switching in MPLS networks. A variant of the flow deviation method is used to find and capacitate a set of optimal label switched paths. The traffic is routed over a set of working LSPs. Global repair is implemented by reserving a set of pre-established recovery LSPs. An analytic model is used to evaluate the MPLS-based recovery mechanisms in response to bi-directional link failures. A simulation model is used to evaluate the MPLS recovery cycle in terms of the time needed to restore the traffic after a uni-directional link failure. The models are applied to evaluate the effectiveness of protection switching in networks consisting of between 20 and 100 nodes.

  1. Process service quality evaluation based on Dempster-Shafer theory and support vector machine.

    PubMed

    Pei, Feng-Que; Li, Dong-Bo; Tong, Yi-Fei; He, Fei

    2017-01-01

    Human involvement influences traditional service quality evaluations, leading to low accuracy, poor reliability and weak predictability. This paper proposes a method, called SVMs-DS, that employs a support vector machine (SVM) and Dempster-Shafer evidence theory to evaluate the service quality of a production process while handling a large number of input features from a small sampling data set. Features that can affect production quality are extracted by a large number of sensors. Preprocessing steps such as feature simplification and normalization are reduced. Based on three individual SVM models, basic probability assignments (BPAs) are constructed, which support the evaluation in both a qualitative and a quantitative way. The process service quality evaluation results are validated by the Dempster rules; the decision threshold to resolve conflicting results is generated from the three SVM models. A case study is presented to demonstrate the effectiveness of the SVMs-DS method.
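    Dempster's rule, which fuses the three SVM-derived BPAs above, can be sketched as follows; the focal elements and mass values are invented stand-ins for real SVM outputs, and the paper's own decision-threshold logic is not reproduced:

```python
def dempster_combine(m1, m2):
    """Combine two basic probability assignments (BPAs) over frozenset focal
    elements using Dempster's rule; conflict mass is renormalised away."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    scale = 1.0 - conflict
    return {s: v / scale for s, v in combined.items()}

# Hypothetical BPAs from three SVM outputs over {good, poor} service quality.
G, P, GP = frozenset({"good"}), frozenset({"poor"}), frozenset({"good", "poor"})
svm1 = {G: 0.6, P: 0.2, GP: 0.2}
svm2 = {G: 0.7, P: 0.1, GP: 0.2}
svm3 = {G: 0.5, P: 0.3, GP: 0.2}
fused = dempster_combine(dempster_combine(svm1, svm2), svm3)
```

    Because all three classifiers lean towards "good", the fused mass on "good" exceeds any single classifier's, which is the reinforcement effect the evidence combination provides.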

  2. Piloted evaluation of an integrated propulsion and flight control simulator

    NASA Technical Reports Server (NTRS)

    Bright, Michelle M.; Simon, Donald L.

    1992-01-01

    This paper describes a piloted evaluation of the integrated flight and propulsion control simulator at NASA Lewis Research Center. The purpose of this evaluation is to demonstrate the suitability and effectiveness of this fixed-base simulator for advanced integrated propulsion and airframe control design. The evaluation covers control effector gains and deadbands, control effectiveness and control authority, and heads-up display functionality. For this evaluation the flight simulator is configured for transition flight using an advanced Short Take-Off and Vertical Landing fighter aircraft model, a simplified high-bypass turbofan engine model, a fighter cockpit, displays, and pilot effectors. The paper describes the piloted tasks used for rating displays and control effector gains. Pilot comments and simulation results confirm that the display symbology and control gains are very adequate for the transition flight task. Additionally, it is demonstrated that this small-scale, fixed-base flight simulator facility can adequately perform a real-time, piloted control evaluation.

  3. A dynamic vulnerability evaluation model to smart grid for the emergency response

    NASA Astrophysics Data System (ADS)

    Yu, Zhen; Wu, Xiaowei; Fang, Diange

    2018-01-01

    A smart grid shows significant vulnerability to natural disasters and external damage. According to the influence characteristics of important facilities subjected to typical natural disasters and external damage, this paper builds a vulnerability evaluation index system for important facilities in the smart grid based on eight typical natural disasters, comprising three levels of static and dynamic indicators, forty indicators in total. A smart grid vulnerability evaluation method is then proposed based on the index system, including determining the value range of each index, classifying the evaluation grade standard, and giving the evaluation process and integrated index calculation rules. Using the proposed evaluation model, the most vulnerable parts of the smart grid can be identified, which helps in adopting targeted emergency response measures, developing emergency plans and increasing the grid's capacity for disaster prevention and mitigation, thereby guaranteeing its safe and stable operation.

  4. Emerging In Vitro Liver Technologies for Drug Metabolism and Inter-Organ Interactions

    PubMed Central

    Bale, Shyam Sundhar; Moore, Laura

    2016-01-01

    In vitro liver models provide essential information for evaluating drug metabolism, metabolite formation, and hepatotoxicity. Interfacing liver models with other organ models could provide insights into the desirable as well as unintended systemic side effects of therapeutic agents and their metabolites. Such information is invaluable for drug screening processes particularly in the context of secondary organ toxicity. While interfacing of liver models with other organ models has been achieved, platforms that effectively provide human-relevant precise information are needed. In this concise review, we discuss the current state-of-the-art of liver-based multiorgan cell culture platforms primarily from a drug and metabolite perspective, and highlight the importance of media-to-cell ratio in interfacing liver models with other organ models. In addition, we briefly discuss issues related to development of optimal liver models that include recent advances in hepatic cell lines, stem cells, and challenges associated with primary hepatocyte-based liver models. Liver-based multiorgan models that achieve physiologically relevant coupling of different organ models can have a broad impact in evaluating drug efficacy and toxicity, as well as mechanistic investigation of human-relevant disease conditions. PMID:27049038

  5. Evaluation methodology for query-based scene understanding systems

    NASA Astrophysics Data System (ADS)

    Huster, Todd P.; Ross, Timothy D.; Culbertson, Jared L.

    2015-05-01

    In this paper, we are proposing a method for the principled evaluation of scene understanding systems in a query-based framework. We can think of a query-based scene understanding system as a generalization of typical sensor exploitation systems where instead of performing a narrowly defined task (e.g., detect, track, classify, etc.), the system can perform general user-defined tasks specified in a query language. Examples of this type of system have been developed as part of DARPA's Mathematics of Sensing, Exploitation, and Execution (MSEE) program. There is a body of literature on the evaluation of typical sensor exploitation systems, but the open-ended nature of the query interface introduces new aspects to the evaluation problem that have not been widely considered before. In this paper, we state the evaluation problem and propose an approach to efficiently learn about the quality of the system under test. We consider the objective of the evaluation to be to build a performance model of the system under test, and we rely on the principles of Bayesian experiment design to help construct and select optimal queries for learning about the parameters of that model.

  6. A probabilistic model framework for evaluating year-to-year variation in crop productivity

    NASA Astrophysics Data System (ADS)

    Yokozawa, M.; Iizumi, T.; Tao, F.

    2008-12-01

    Most models describing the relation between crop productivity and weather conditions have so far focused on mean changes in crop yield. For maintaining a stable food supply against abnormal weather as well as climate change, evaluating the year-to-year variations in crop productivity rather than the mean changes is more essential. We here propose a new probabilistic model framework based on Bayesian inference and Monte Carlo simulation. As an example, we first introduce a model of paddy rice production in Japan, called PRYSBI (Process-based Regional rice Yield Simulator with Bayesian Inference; Iizumi et al., 2008). The model structure is the same as that of SIMRIW, which was developed and is used widely in Japan. The model includes three sub-models describing phenological development, biomass accumulation and maturing of the rice crop. These processes are formulated to capture the response of the rice plant to weather conditions. This model was originally developed to predict rice growth and yield at the paddy-plot scale. We applied it to evaluate large-scale rice production while keeping the same model structure, treating the parameters as stochastic variables. In order to let the model match actual yields at the larger scale, model parameters were determined based on agricultural statistical data of each prefecture of Japan together with weather data averaged over the region. The posterior probability distribution functions (PDFs) of the parameters included in the model were obtained using Bayesian inference. The MCMC (Markov Chain Monte Carlo) algorithm was used to numerically solve the Bayesian theorem. For evaluating the year-to-year changes in rice growth/yield under this framework, we first iterate simulations with sets of parameter values sampled from the estimated posterior PDF of each parameter and then take the ensemble mean weighted with the posterior PDFs. We will also present another example for maize productivity in China. The framework proposed here also provides information on the uncertainties, possibilities and limitations of future improvements in crop models.
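    The ensemble step described above, running the crop simulator once per posterior parameter draw and averaging, can be sketched as follows; the linear toy "simulator", the posterior draws and all numbers are invented for illustration:

```python
import random

def posterior_predictive_yield(simulator, param_draws, weather):
    """Ensemble prediction: run the crop simulator once per posterior draw and
    return the ensemble mean and variance of the simulated yield, so that
    parameter uncertainty propagates into yield uncertainty."""
    runs = [simulator(p, weather) for p in param_draws]
    mean = sum(runs) / len(runs)
    var = sum((r - mean) ** 2 for r in runs) / len(runs)
    return mean, var

# Toy simulator: yield (t/ha) responds linearly to growing-season temperature.
random.seed(2)
draws = [random.gauss(0.5, 0.05) for _ in range(2_000)]  # posterior of sensitivity
sim = lambda beta, t: 4.0 + beta * (t - 20.0)            # hypothetical simulator
mean_yield, var_yield = posterior_predictive_yield(sim, draws, weather=22.0)
```

    The ensemble variance, not just the mean, is the quantity of interest for the year-to-year variability question the abstract raises.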

  7. Evaluation of different models to segregate Pelibuey and Katahdin ewes into resistant or susceptible to gastrointestinal nematodes.

    PubMed

    Palomo-Couoh, Jovanny Gaspar; Aguilar-Caballero, Armando Jacinto; Torres-Acosta, Juan Felipe de Jesús; Magaña-Monforte, Juan Gabriel

    2016-12-01

    This study evaluated four models based on the number of eggs per gram of faeces (EPG) for segregating Pelibuey or Katahdin ewes during the lactation period into resistant or susceptible to gastrointestinal nematodes (GIN) in tropical Mexico. Nine hundred and thirty EPG counts of Pelibuey ewes and 710 of Katahdin ewes were obtained during 10 weeks of lactation. Ewes were segregated into resistant, intermediate and susceptible using their individual EPG every week. The data of every ewe were then used to provide a reference classification, which included all the EPG values of each animal. Four models were then evaluated against this reference. Model 1 was based on the 10-week mean EPG count ± 2 SE. Models 2, 3 and 4 were based on the mean EPG count of 10, 5 and 2 weeks of lactation, respectively. The cutoff points for the segregation of ewes in those three models were the quartiles ≤Q1 (low elimination) and ≥Q3 (high elimination). In all the models evaluated, the ewes classified as resistant had lower EPG than intermediate and susceptible ewes (P < 0.001), while ewes classified as susceptible had higher EPG than intermediate and resistant ewes (P < 0.001). According to Youden's J test, the models showed concordance with the reference group (>70 %). Model 3 tended to show higher sensitivity and specificity against the reference data, but no difference was found with the other models. The present study showed that the phenotypic marker EPG may serve to identify and segregate populations of adult ewes during the lactation period. All models used served to segregate Pelibuey and Katahdin ewes into resistant, intermediate and susceptible. Model 3 (mean of 5 weeks) could be used because it required less sampling effort without losing sensitivity or specificity in the segregation of animals; model 4 (mean of 2 weeks) was, however, even less labour-intensive.
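    The quartile cutoffs used by models 2-4 can be sketched directly; the ewes and mean EPG values below are hypothetical, and the simple index-based quartile here is a crude stand-in for whatever quantile definition the study used:

```python
def segregate_by_quartiles(mean_epgs):
    """Label ewes by mean faecal egg count: <= Q1 resistant, >= Q3 susceptible,
    otherwise intermediate (a crude nearest-rank quartile for illustration)."""
    ranked = sorted(mean_epgs.values())
    q1 = ranked[int(0.25 * (len(ranked) - 1))]
    q3 = ranked[int(0.75 * (len(ranked) - 1))]
    return {
        ewe: ("resistant" if epg <= q1 else
              "susceptible" if epg >= q3 else "intermediate")
        for ewe, epg in mean_epgs.items()
    }

# Hypothetical mean EPG over the chosen lactation window, per ewe.
labels = segregate_by_quartiles(
    {"e1": 50, "e2": 120, "e3": 300, "e4": 800, "e5": 1500,
     "e6": 90, "e7": 400, "e8": 2000})
```

    Using a shorter averaging window (models 3 and 4) changes only the input means, not this segregation step.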

  8. An improved Rosetta pedotransfer function and evaluation in earth system models

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Schaap, M. G.

    2017-12-01

    Soil hydraulic parameters are often difficult and expensive to measure, making pedotransfer functions (PTFs) an attractive alternative for predicting them. Rosetta (Schaap et al., 2001; denoted here as Rosetta1) is a widely used set of PTFs based on artificial neural network (ANN) analysis coupled with bootstrap re-sampling, allowing the estimation of van Genuchten water retention parameters (van Genuchten, 1980; abbreviated here as VG) and saturated hydraulic conductivity (Ks), as well as their uncertainties. We present an improved hierarchical set of pedotransfer functions (Rosetta3) that unifies the VG water retention and Ks submodels into one, thus allowing the estimation of univariate and bivariate probability distributions of the estimated parameters. Results show that the estimation bias of moisture content was reduced significantly. Rosetta1 and Rosetta3 were implemented in the Python programming language, and the source code is available online. Because they rest on different soil water retention equations, diverse PTFs are used across disciplines of earth system modeling: PTFs based on Campbell [1974] or Clapp and Hornberger [1978] are frequently used in land surface models and general circulation models, while PTFs based on van Genuchten [1980] are more widely used in hydrology and soil science. We use an independent global-scale soil database to evaluate the performance of these diverse PTFs. The PTFs are evaluated against different soil and environmental characteristics, such as soil textural data, soil organic carbon, soil pH, precipitation and soil temperature. This analysis provides more quantitative estimation-error information for PTF predictions across disciplines of earth system modeling.
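    The van Genuchten (1980) retention equation whose parameters Rosetta estimates can be written out directly. A minimal sketch follows; the loam-like parameter values used below are illustrative, not Rosetta output:

```python
def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
    """Volumetric water content at pressure head h (van Genuchten, 1980).

    h       : pressure head (cm); negative in unsaturated soil
    theta_r : residual water content (cm^3/cm^3)
    theta_s : saturated water content (cm^3/cm^3)
    alpha   : inverse of a characteristic air-entry pressure (1/cm)
    n       : pore-size distribution parameter (n > 1); m = 1 - 1/n
    """
    if h >= 0:
        return theta_s  # at or above saturation
    m = 1.0 - 1.0 / n
    se = (1.0 + (alpha * abs(h)) ** n) ** (-m)  # effective saturation
    return theta_r + (theta_s - theta_r) * se
```

    The retention curve runs from theta_s at saturation down toward theta_r as the soil dries, which is the shape the ANN-estimated parameters encode.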

  9. Environmental risk assessment of selected organic chemicals based on TOC test and QSAR estimation models.

    PubMed

    Chi, Yulang; Zhang, Huanteng; Huang, Qiansheng; Lin, Yi; Ye, Guozhu; Zhu, Huimin; Dong, Sijun

    2018-02-01

    The environmental risks of organic chemicals are largely determined by their persistence, bioaccumulation, and toxicity (PBT) and by their physicochemical properties. Major regulations in different countries and regions classify chemicals according to their bioconcentration factor (BCF) and octanol-water partition coefficient (Kow), which frequently shows a substantial correlation with the sediment sorption coefficient (Koc). Half-life, or degradability, is crucial for evaluating the persistence of chemicals. Quantitative structure-activity relationship (QSAR) estimation models are indispensable for predicting environmental fate and health effects in the absence of field- or laboratory-based data. In this study, 39 chemicals of high concern were chosen for half-life testing based on total organic carbon (TOC) degradation, and two widely accepted and heavily used QSAR estimation models (EPI Suite and PBT Profiler) were adopted for environmental risk evaluation. The experimental and estimated data, as well as the results from the two models, were compared in terms of water solubility, Kow, Koc, BCF and half-life. Environmental risk assessment of the selected compounds was achieved by combining the experimental data with the estimation models. Both EPI Suite and PBT Profiler were fairly accurate in estimating the physicochemical properties and the degradation half-lives for water, soil, and sediment. However, the experimental and estimated half-lives were still not fully consistent. This points to deficiencies of the prediction models and to the necessity of combining experimental data with predicted results when evaluating the environmental fate and risks of pollutants. Copyright © 2016. Published by Elsevier B.V.
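    As an illustration of the kind of linear QSAR such tools embody, one classic published regression for fish bioconcentration (Veith et al., 1979) relates log BCF to log Kow. This is only a sketch: EPI Suite's actual BCF method is more elaborate and domain-limited, and the coefficients below are from that older literature equation, not from the study above.

```python
def log_bcf_from_log_kow(log_kow):
    """Illustrative linear QSAR for fish bioconcentration (Veith et al., 1979):
    log BCF = 0.85 * log Kow - 0.70."""
    return 0.85 * log_kow - 0.70

def bcf(log_kow):
    """Bioconcentration factor on the linear scale."""
    return 10 ** log_bcf_from_log_kow(log_kow)
```

    The one-line form makes the general point of the abstract concrete: a single measured property (Kow) drives the screening-level estimate, which is why experimental verification of the estimates remains necessary.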

  10. Development of Conceptual Models for Internet Search: A Case Study.

    ERIC Educational Resources Information Center

    Uden, Lorna; Tearne, Stephen; Alderson, Albert

    This paper describes the creation and evaluation of a World Wide Web-based courseware module, using conceptual models based on constructivism, that teaches novices how to use the Internet for searching. Questionnaires and interviews were used to understand the difficulties of a group of novices. The conceptual model of the experts for the task was…

  11. Modeling the Monthly Water Balance of a First Order Coastal Forested Watershed

    Treesearch

    S. V. Harder; Devendra M. Amatya; T. J. Callahan; Carl C. Trettin

    2006-01-01

    A study has been conducted to evaluate a spreadsheet-based conceptual Thornthwaite monthly water balance model and the process-based DRAINMOD model for their reliability in predicting monthly water budgets of a poorly drained, first order forested watershed at the Santee Experimental Forest located along the Lower Coastal Plain of South Carolina. Measured precipitation...
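    A Thornthwaite-type monthly water balance starts from potential evapotranspiration. A minimal sketch of the Thornthwaite (1948) PET formula follows, omitting the day-length/month-length correction factor that a full implementation (such as the spreadsheet model above) would apply:

```python
def thornthwaite_pet(monthly_temps_c):
    """Monthly potential evapotranspiration (mm) via Thornthwaite (1948),
    without the day-length correction.

    monthly_temps_c : 12 mean monthly air temperatures (deg C)
    """
    # Annual heat index from the positive monthly temperatures
    heat_index = sum((t / 5.0) ** 1.514 for t in monthly_temps_c if t > 0)
    # Empirical exponent as a cubic function of the heat index
    a = (6.75e-7 * heat_index ** 3 - 7.71e-5 * heat_index ** 2
         + 1.792e-2 * heat_index + 0.49239)
    return [16.0 * (10.0 * t / heat_index) ** a if t > 0 else 0.0
            for t in monthly_temps_c]
```

    Monthly PET is then combined with measured precipitation and a soil-moisture store to close the water budget, which is the structure the spreadsheet model evaluates against DRAINMOD.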

  12. CAT Model with Personalized Algorithm for Evaluation of Estimated Student Knowledge

    ERIC Educational Resources Information Center

    Andjelic, Svetlana; Cekerevac, Zoran

    2014-01-01

    This article presents an original model of computer adaptive testing and grade formation, based on scientifically recognized theories. The basis of the model is a personalized algorithm for selecting questions depending on the accuracy of the answer to the previous question. The test is divided into three basic levels of difficulty, and the…
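    The selection rule described — move to a harder question after a correct answer, an easier one otherwise — can be sketched as follows. The three-level bank format and function names are assumptions for illustration, not the authors' implementation:

```python
import random

LEVELS = ["easy", "medium", "hard"]

def next_difficulty(current, answered_correctly):
    """Step up one level after a correct answer, down one after an incorrect one."""
    i = LEVELS.index(current)
    i = min(i + 1, len(LEVELS) - 1) if answered_correctly else max(i - 1, 0)
    return LEVELS[i]

def select_question(question_bank, difficulty, asked):
    """Pick an unasked question at the target difficulty.

    question_bank : dict mapping difficulty level -> list of question ids
    asked         : set of question ids already presented
    """
    pool = [q for q in question_bank[difficulty] if q not in asked]
    return random.choice(pool) if pool else None
```

    A real CAT model would also weight grades by the difficulty of the questions answered, but the stepping rule above captures the personalization the abstract describes.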

  13. EVALUATING THE REGIONAL PREDICTIVE CAPACITY OF A PROCESS-BASED MERCURY EXPOSURE MODEL (R-MCM) FOR LAKES ACROSS VERMONT AND NEW HAMPSHIRE, USA

    EPA Science Inventory

    Regulatory agencies are confronted with the daunting task of developing fish consumption advisories for a large number of lakes and rivers with limited resources. A feasible mechanism for developing region-wide fish advisories is to use a process-based mathematical model. One model of...

  14. Evaluating crown fire rate of spread predictions from physics-based models

    Treesearch

    C. M. Hoffman; J. Ziegler; J. Canfield; R. R. Linn; W. Mell; C. H. Sieg; F. Pimont

    2015-01-01

    Modeling the behavior of crown fires is challenging due to the complex set of coupled processes that drive the characteristics of a spreading wildfire and the large range of spatial and temporal scales over which these processes occur. Detailed physics-based modeling approaches such as FIRETEC and the Wildland Urban Interface Fire Dynamics Simulator (WFDS) simulate...

  15. Selection of Variables in Cluster Analysis: An Empirical Comparison of Eight Procedures

    ERIC Educational Resources Information Center

    Steinley, Douglas; Brusco, Michael J.

    2008-01-01

    Eight different variable selection techniques for model-based and non-model-based clustering are evaluated across a wide range of cluster structures. It is shown that several methods have difficulties when non-informative variables (i.e., random noise) are included in the model. Furthermore, the distribution of the random noise greatly impacts the…

  16. Development of a mission-based funding model for undergraduate medical education: incorporation of quality.

    PubMed

    Stagnaro-Green, Alex; Roe, David; Soto-Greene, Maria; Joffe, Russell

    2008-01-01

    Increasing financial pressures, along with a desire to realign resources with institutional priorities, have resulted in the adoption of mission-based funding (MBF) at many medical schools. The lack of a quality component, and the time and expense of developing and implementing mission-based funding, are major deficiencies in the models reported to date. In academic year 2002-2003, New Jersey Medical School developed a departmentally based model that included both quantity and quality in the education metric. Eighty percent of the undergraduate medical education allocation ($7.35 million) was based on the quantity of undergraduate medical education taught by each department, and 20% ($1.89 million) was allocated based on the quality of the education delivered. Quality determinations were made by the educational leadership based on student evaluations and departmental compliance with educational administrative requirements. Evolution of the model has included the development of a faculty oversight committee and the integration of peer evaluation into the determination of educational quality. Six departments showed a documented increase in quality over time, and one department had a transient decrease in quality. The MBF model has been well accepted by chairs, educational leaders, and faculty and has been instrumental in enhancing the stature of education at our institution.
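    The quantity/quality split described above can be sketched as a simple allocation function. The department names, teaching-hour inputs, and proportional normalization below are illustrative assumptions, not the school's actual metric:

```python
def allocate_mbf(total_pool, teaching_quantity, quality_scores,
                 quality_fraction=0.20):
    """Split an education funding pool across departments.

    (1 - quality_fraction) of the pool is distributed in proportion to each
    department's share of teaching; quality_fraction is distributed in
    proportion to each department's quality score.
    """
    quantity_pool = total_pool * (1 - quality_fraction)
    quality_pool = total_pool * quality_fraction
    total_quantity = sum(teaching_quantity.values())
    total_quality = sum(quality_scores.values())
    return {
        dept: quantity_pool * teaching_quantity[dept] / total_quantity
              + quality_pool * quality_scores[dept] / total_quality
        for dept in teaching_quantity
    }
```

    With a $9.24 million pool and quality_fraction=0.20 this reproduces the $7.35M/$1.89M split reported in the abstract, however the quality pool is actually scored.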

  17. Automated Text Analysis Based on Skip-Gram Model for Food Evaluation in Predicting Consumer Acceptance

    PubMed Central

    Kim, Augustine Yongwhi; Choi, Hoduk

    2018-01-01

    The purpose of this paper is to evaluate food taste, smell, and characteristics from consumers' online reviews. Several studies of food sensory evaluation have addressed consumer acceptance, but they require a lexicon of taste-descriptive words and are not suitable for analyzing large numbers of evaluators to predict consumer acceptance. In this paper, an automated text-analysis method for food evaluation is presented and used to analyze and compare two recently introduced types of jjampong ramen (mixed seafood noodles). To avoid building a sensory word lexicon, consumers' reviews are collected from social networking services (SNS). By training a word-embedding model on the acquired reviews, the words in the large body of review text are converted into vectors. Based on these word vectors, inference is performed to evaluate the taste and smell of the two jjampong ramen types. Finally, the reliability and merits of the proposed food evaluation method are confirmed by comparison with the results of an actual consumer preference taste evaluation. PMID:29606960

  18. Automated Text Analysis Based on Skip-Gram Model for Food Evaluation in Predicting Consumer Acceptance.

    PubMed

    Kim, Augustine Yongwhi; Ha, Jin Gwan; Choi, Hoduk; Moon, Hyeonjoon

    2018-01-01

    The purpose of this paper is to evaluate food taste, smell, and characteristics from consumers' online reviews. Several studies of food sensory evaluation have addressed consumer acceptance, but they require a lexicon of taste-descriptive words and are not suitable for analyzing large numbers of evaluators to predict consumer acceptance. In this paper, an automated text-analysis method for food evaluation is presented and used to analyze and compare two recently introduced types of jjampong ramen (mixed seafood noodles). To avoid building a sensory word lexicon, consumers' reviews are collected from social networking services (SNS). By training a word-embedding model on the acquired reviews, the words in the large body of review text are converted into vectors. Based on these word vectors, inference is performed to evaluate the taste and smell of the two jjampong ramen types. Finally, the reliability and merits of the proposed food evaluation method are confirmed by comparison with the results of an actual consumer preference taste evaluation.
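    The inference step — comparing review words to sensory descriptors in the embedding space — reduces to cosine similarity between word vectors. A sketch follows, with tiny hand-made vectors standing in for trained skip-gram embeddings; the words and values are purely illustrative:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 3-d vectors standing in for skip-gram embeddings trained on review text
embeddings = {
    "spicy":   [0.9, 0.1, 0.0],
    "sweet":   [0.1, 0.9, 0.0],
    "product": [0.7, 0.3, 0.1],
}

def nearest_descriptor(word, descriptors):
    """Return the sensory descriptor whose vector is most similar to `word`."""
    target = embeddings[word]
    return max(descriptors, key=lambda d: cosine(embeddings[d], target))
```

    In the paper's setting the vectors come from a skip-gram model trained on SNS reviews, so "nearest descriptor" queries of this kind substitute for a hand-built sensory lexicon.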

  19. A Robot-Driven Computational Model for Estimating Passive Ankle Torque With Subject-Specific Adaptation.

    PubMed

    Zhang, Mingming; Meng, Wei; Davies, T Claire; Zhang, Yanxin; Xie, Sheng Q

    2016-04-01

    Robot-assisted ankle assessment could potentially be conducted using sensor-based and model-based methods. Existing ankle rehabilitation robots usually use torque meters and multiaxis load cells to measure joint dynamics. These measurements are accurate, but the contributions of muscles and ligaments are not taken into account. Some computational ankle models have been developed to evaluate ligament strain and joint torque, but these models do not include muscles and are thus not suitable for an overall ankle assessment in robot-assisted therapy. This study proposed a computational ankle model for use in robot-assisted therapy with three rotational degrees of freedom, 12 muscles, and seven ligaments. The model is driven by the robot, takes three independent position variables as inputs, and outputs an overall ankle assessment. Subject-specific adaptations by geometric and strength scaling were also made to allow for a universal model. The model was evaluated using published results and experimental data from 11 participants. Results show a high accuracy in the evaluation of ligament neutral length and passive joint torque. The subject-specific adaptation performance is high, with each normalized root-mean-square deviation value less than 10%. This model could be used for ankle assessment, especially for evaluating passive ankle torque, in a specific individual. A characteristic unique to this model is its use of three independent position variables that can be measured in real time as inputs, which makes it advantageous over other models when combined with robot-assisted therapy.

  20. Turbulence Model Comparisons and Reynolds Number Effects Over a High-Speed Aircraft at Transonic Speeds

    NASA Technical Reports Server (NTRS)

    Rivers, Melissa B.; Wahls, Richard A.

    1999-01-01

    This paper gives the results of a grid study, a turbulence model study, and a Reynolds number effect study for transonic flows over a high-speed aircraft using the thin-layer, upwind, Navier-Stokes CFL3D code. The four turbulence models evaluated are the algebraic Baldwin-Lomax model with the Degani-Schiff modifications, the one-equation Baldwin-Barth model, the one-equation Spalart-Allmaras model, and Menter's two-equation Shear-Stress-Transport (SST) model. The flow conditions, which correspond to tests performed in the NASA Langley National Transonic Facility (NTF), are a Mach number of 0.90 and a Reynolds number of 30 million based on chord for a range of angles of attack (1 degree to 10 degrees). For the Reynolds number effect study, Reynolds numbers of 10 and 80 million based on chord were also evaluated. Computed forces and surface pressures compare reasonably well with the experimental data for all four turbulence models. The Baldwin-Lomax model with the Degani-Schiff modifications and the one-equation Baldwin-Barth model show the best overall agreement with experiment. The Reynolds number effects are evaluated using the Baldwin-Lomax (with Degani-Schiff modifications) and Baldwin-Barth turbulence models. Five angles of attack were evaluated in the Reynolds number effect study at three different Reynolds numbers. More work is needed to determine the ability of CFL3D to accurately predict Reynolds number effects.
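    "Reynolds number based on chord" is simply Re = V·c/ν, with the mean aerodynamic chord as the reference length. A trivial sketch (the sample values below are illustrative, not NTF test conditions):

```python
def reynolds_number(velocity, chord, kinematic_viscosity):
    """Chord-based Reynolds number: Re = V * c / nu.

    velocity            : freestream speed (m/s)
    chord               : reference chord length (m)
    kinematic_viscosity : nu (m^2/s)
    """
    return velocity * chord / kinematic_viscosity
```

    Cryogenic facilities such as the NTF reach chord Reynolds numbers of 30 to 80 million at model scale by driving the kinematic viscosity down, rather than by raising velocity or chord.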
