Sample records for assessment model based

  1. Introducing the fit-criteria assessment plot - A visualisation tool to assist class enumeration in group-based trajectory modelling.

    PubMed

    Klijn, Sven L; Weijenberg, Matty P; Lemmens, Paul; van den Brandt, Piet A; Lima Passos, Valéria

    2017-10-01

    Background and objective: Group-based trajectory modelling is a model-based clustering technique applied for the identification of latent patterns of temporal change. Despite its manifold applications in the clinical and health sciences, potential problems of the model selection procedure are often overlooked. The choice of the number of latent trajectories (class enumeration), for instance, is to a large degree based on statistical criteria that are not fail-safe. Moreover, the process as a whole is not transparent. To facilitate class enumeration, we introduce a graphical summary display of several fit and model adequacy criteria: the fit-criteria assessment plot. Methods: An R script that accepts universal data input is presented. The programme condenses relevant group-based trajectory modelling output information on model fit indices into automated graphical displays. Examples based on real and simulated data are provided to illustrate, assess and validate the fit-criteria assessment plot's utility. Results: The fit-criteria assessment plot provides an overview of fit criteria on a single page, placing users in an informed position to make a decision. The fit-criteria assessment plot does not automatically select the most appropriate model but eases the model assessment procedure. Conclusions: The fit-criteria assessment plot is an exploratory visualisation tool that can be employed to assist decisions in the initial and decisive phases of group-based trajectory modelling analysis. Considering group-based trajectory modelling's widespread resonance in the medical and epidemiological sciences, a more comprehensive, easily interpretable and transparent display of the iterative process of class enumeration may foster its adequate use.
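    The class-enumeration loop this abstract describes — fit candidate models with an increasing number of latent classes, then compare fit criteria side by side — can be sketched in Python. The original tool is an R script for group-based trajectory models; as a hypothetical stand-in, this sketch fits Gaussian mixtures with scikit-learn and tabulates BIC and AIC per candidate class count (all data and names below are invented for illustration):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Simulated data with two latent groups (hypothetical, for illustration only)
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(0.0, 1.0, (100, 3)),
                    rng.normal(4.0, 1.0, (100, 3))])

# Fit one model per candidate number of classes and record fit criteria
fit_table = {}
for k in range(1, 5):
    gm = GaussianMixture(n_components=k, random_state=0).fit(X)
    fit_table[k] = {"BIC": gm.bic(X), "AIC": gm.aic(X)}

# Lower BIC is better; a fit-criteria assessment plot would display the whole table
best_k = min(fit_table, key=lambda k: fit_table[k]["BIC"])
```

    The point of the tool described in the record is that the analyst inspects all criteria graphically rather than reducing `fit_table` to a single automatic winner.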

  2. Massive integration of diverse protein quality assessment methods to improve template based modeling in CASP11

    PubMed Central

    Cao, Renzhi; Bhattacharya, Debswapna; Adhikari, Badri; Li, Jilong; Cheng, Jianlin

    2015-01-01

    Model evaluation and selection is an important step and a big challenge in template-based protein structure prediction. Individual model quality assessment methods designed for recognizing some specific properties of protein structures often fail to consistently select good models from a model pool because of their limitations. Therefore, combining multiple complementary quality assessment methods is useful for improving model ranking and consequently tertiary structure prediction. Here, we report the performance and analysis of our human tertiary structure predictor (MULTICOM) based on the massive integration of 14 diverse complementary quality assessment methods that was successfully benchmarked in the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11). The predictions of MULTICOM for 39 template-based domains were rigorously assessed by six scoring metrics covering global topology of Cα trace, local all-atom fitness, side chain quality, and physical reasonableness of the model. The results show that the massive integration of complementary, diverse single-model and multi-model quality assessment methods can effectively leverage the strength of single-model methods in distinguishing quality variation among similar good models and the advantage of multi-model quality assessment methods of identifying reasonable average-quality models. The overall excellent performance of the MULTICOM predictor demonstrates that integrating a large number of model quality assessment methods in conjunction with model clustering is a useful approach to improve the accuracy, diversity, and consequently robustness of template-based protein structure prediction. PMID:26369671
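    The consensus step this abstract describes — combining complementary quality scores so that no single method's scale dominates the ranking — can be illustrated with a simple z-score average. All model names and scores below are hypothetical; MULTICOM's actual integration of 14 methods is more elaborate than this sketch:

```python
from statistics import mean, stdev

# Hypothetical scores (higher = better) from three QA methods for four models
scores = {
    "model_a": [0.62, 0.55, 0.70],
    "model_b": [0.81, 0.77, 0.85],
    "model_c": [0.58, 0.60, 0.52],
    "model_d": [0.75, 0.80, 0.78],
}

n_methods = len(next(iter(scores.values())))
consensus = {m: 0.0 for m in scores}

# Z-normalise each method's scores across models, then average per model
for j in range(n_methods):
    col = [scores[m][j] for m in scores]
    mu, sd = mean(col), stdev(col)
    for m in scores:
        consensus[m] += (scores[m][j] - mu) / sd / n_methods

# Rank models by consensus score, best first
ranking = sorted(consensus, key=consensus.get, reverse=True)
```

    Normalising per method before averaging is what lets methods with different score ranges contribute comparably, which is the rationale for integrating diverse QA methods.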

  3. WORKSHOP ON APPLICATION OF STATISTICAL METHODS TO BIOLOGICALLY-BASED PHARMACOKINETIC MODELING FOR RISK ASSESSMENT

    EPA Science Inventory

    Biologically-based pharmacokinetic models are being increasingly used in the risk assessment of environmental chemicals. These models are based on biological, mathematical, statistical and engineering principles. Their potential uses in risk assessment include extrapolation betwe...

  4. Assessment of Energy Efficient and Model Based Control

    DTIC Science & Technology

    2017-06-15

    ARL-TR-8042 ● JUNE 2017. US Army Research Laboratory. Assessment of Energy-Efficient and Model-Based Control, by Craig Lennon.

  5. Massive integration of diverse protein quality assessment methods to improve template based modeling in CASP11.

    PubMed

    Cao, Renzhi; Bhattacharya, Debswapna; Adhikari, Badri; Li, Jilong; Cheng, Jianlin

    2016-09-01

    Model evaluation and selection is an important step and a big challenge in template-based protein structure prediction. Individual model quality assessment methods designed for recognizing some specific properties of protein structures often fail to consistently select good models from a model pool because of their limitations. Therefore, combining multiple complementary quality assessment methods is useful for improving model ranking and consequently tertiary structure prediction. Here, we report the performance and analysis of our human tertiary structure predictor (MULTICOM) based on the massive integration of 14 diverse complementary quality assessment methods that was successfully benchmarked in the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11). The predictions of MULTICOM for 39 template-based domains were rigorously assessed by six scoring metrics covering global topology of Cα trace, local all-atom fitness, side chain quality, and physical reasonableness of the model. The results show that the massive integration of complementary, diverse single-model and multi-model quality assessment methods can effectively leverage the strength of single-model methods in distinguishing quality variation among similar good models and the advantage of multi-model quality assessment methods of identifying reasonable average-quality models. The overall excellent performance of the MULTICOM predictor demonstrates that integrating a large number of model quality assessment methods in conjunction with model clustering is a useful approach to improve the accuracy, diversity, and consequently robustness of template-based protein structure prediction. Proteins 2016; 84(Suppl 1):247-259. © 2015 Wiley Periodicals, Inc.

  6. Integration of an Evidence Base into a Probabilistic Risk Assessment Model. The Integrated Medical Model Database: An Organized Evidence Base for Assessing In-Flight Crew Health Risk and System Design

    NASA Technical Reports Server (NTRS)

    Saile, Lynn; Lopez, Vilma; Bickham, Grandin; FreiredeCarvalho, Mary; Kerstman, Eric; Byrne, Vicky; Butler, Douglas; Myers, Jerry; Walton, Marlei

    2011-01-01

    This slide presentation reviews the Integrated Medical Model (IMM) database, an organized evidence base for assessing in-flight crew health risk. The database is a relational database accessible to many users. It quantifies the model inputs through a ranking based on the highest value of the data, the Level of Evidence (LOE), together with a Quality of Evidence (QOE) score that assesses the evidence base for each medical condition. The IMM evidence base has already provided invaluable information for designers and for other uses.

  7. Improving mathematical problem solving ability through problem-based learning and authentic assessment for the students of Bali State Polytechnic

    NASA Astrophysics Data System (ADS)

    Darma, I. K.

    2018-01-01

    This research is aimed at determining: 1) the differences in mathematical problem solving ability between students facilitated with a problem-based learning model and those with a conventional learning model, 2) the differences in mathematical problem solving ability between students facilitated with an authentic assessment model and those with a conventional assessment model, and 3) the interaction effect between learning model and assessment model on mathematical problem solving. The research was conducted in Bali State Polytechnic, using a 2x2 factorial experimental design. The samples of this research were 110 students. The data were collected using a theoretically and empirically validated test. Instruments were validated using Aiken's content validity approach and item analysis, and the data were then analyzed using ANOVA. The result of the analysis shows that the students facilitated with the problem-based learning and authentic assessment models obtained the highest average score compared to the other students, both in concept understanding and mathematical problem solving. The result of the hypothesis test shows that, significantly: 1) there is a difference in mathematical problem solving ability between the students facilitated with the problem-based learning model and the conventional learning model, 2) there is a difference in mathematical problem solving ability between the students facilitated with the authentic assessment model and the conventional assessment model, and 3) there is an interaction effect between learning model and assessment model on mathematical problem solving. In order to improve the effectiveness of mathematics learning, combining the problem-based learning model with the authentic assessment model can be considered as one of the learning models in class.
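    The 2x2 factorial analysis this abstract reports rests on cell-mean contrasts: one main effect per factor and an interaction term. The sketch below shows those contrasts on simulated stand-in data (the group sizes, means, and spreads are invented, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(7)
# Simulated scores per cell of the 2x2 design (learning model x assessment model)
cells = {
    ("pbl", "authentic"):     rng.normal(85, 3, 30),
    ("pbl", "conventional"):  rng.normal(78, 3, 30),
    ("conv", "authentic"):    rng.normal(74, 3, 30),
    ("conv", "conventional"): rng.normal(72, 3, 30),
}
m = {k: v.mean() for k, v in cells.items()}

# Main effect of learning model: average of PBL cells vs conventional cells
main_learning = (m[("pbl", "authentic")] + m[("pbl", "conventional")]) / 2 \
              - (m[("conv", "authentic")] + m[("conv", "conventional")]) / 2

# Interaction: does the assessment-model effect differ between learning models?
interaction = (m[("pbl", "authentic")] - m[("pbl", "conventional")]) \
            - (m[("conv", "authentic")] - m[("conv", "conventional")])
```

    A two-way ANOVA, as used in the study, then tests whether these contrasts exceed what sampling noise alone would produce.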

  8. Leveraging Strengths Assessment and Intervention Model (LeStAIM): A Theoretical Strength-Based Assessment Framework

    ERIC Educational Resources Information Center

    Laija-Rodriguez, Wilda; Grites, Karen; Bouman, Doug; Pohlman, Craig; Goldman, Richard L.

    2013-01-01

    Current assessments in the schools are based on a deficit model (Epstein, 1998). "The National Association of School Psychologists (NASP) Model for Comprehensive and Integrated School Psychological Services" (2010), federal initiatives and mandates, and experts in the field of assessment have highlighted the need for the comprehensive…

  9. Approaches for the Application of Physiologically Based ...

    EPA Pesticide Factsheets

    EPA released the final report, Approaches for the Application of Physiologically Based Pharmacokinetic (PBPK) Models and Supporting Data in Risk Assessment, as announced in a September 22, 2006 Federal Register Notice. The final report addresses the application and evaluation of PBPK models for risk assessment purposes. These models represent an important class of dosimetry models that are useful for predicting internal dose at target organs for risk assessment applications.

  10. The implementation of assessment model based on character building to improve students’ discipline and achievement

    NASA Astrophysics Data System (ADS)

    Rusijono; Khotimah, K.

    2018-01-01

    The purpose of this research was to investigate the effect of implementing an assessment model based on character building on students' discipline and achievement. The assessment model based on character building includes three components: the students' behaviour, their efforts, and their achievement. The model was implemented in the philosophy of science and educational assessment courses in the Graduate Program of the Educational Technology Department, Faculty of Education, Universitas Negeri Surabaya. This research used a pretest-posttest control group design. The data collection methods were observation and testing: observation was used to collect data on students' discipline during the instructional process, while testing was used to collect data on students' achievement. A t-test was applied to analyse the data. The results showed that the assessment model based on character building improved students' discipline and achievement.

  11. Dependability modeling and assessment in UML-based software development.

    PubMed

    Bernardi, Simona; Merseguer, José; Petriu, Dorina C

    2012-01-01

    Assessment of software nonfunctional properties (NFP) is an important problem in software development. In the context of model-driven development, an emerging approach for the analysis of different NFPs consists of the following steps: (a) to extend the software models with annotations describing the NFP of interest; (b) to transform automatically the annotated software model to the formalism chosen for NFP analysis; (c) to analyze the formal model using existing solvers; (d) to assess the software based on the results and give feedback to designers. Such a modeling→analysis→assessment approach can be applied to any software modeling language, be it general purpose or domain specific. In this paper, we focus on UML-based development and on the dependability NFP, which encompasses reliability, availability, safety, integrity, and maintainability. The paper presents the profile used to extend UML with dependability information, the model transformation to generate a DSPN formal model, and the assessment of the system properties based on the DSPN results.

  12. Dependability Modeling and Assessment in UML-Based Software Development

    PubMed Central

    Bernardi, Simona; Merseguer, José; Petriu, Dorina C.

    2012-01-01

    Assessment of software nonfunctional properties (NFP) is an important problem in software development. In the context of model-driven development, an emerging approach for the analysis of different NFPs consists of the following steps: (a) to extend the software models with annotations describing the NFP of interest; (b) to transform automatically the annotated software model to the formalism chosen for NFP analysis; (c) to analyze the formal model using existing solvers; (d) to assess the software based on the results and give feedback to designers. Such a modeling→analysis→assessment approach can be applied to any software modeling language, be it general purpose or domain specific. In this paper, we focus on UML-based development and on the dependability NFP, which encompasses reliability, availability, safety, integrity, and maintainability. The paper presents the profile used to extend UML with dependability information, the model transformation to generate a DSPN formal model, and the assessment of the system properties based on the DSPN results. PMID:22988428

  13. Some Statistics for Assessing Person-Fit Based on Continuous-Response Models

    ERIC Educational Resources Information Center

    Ferrando, Pere Joan

    2010-01-01

    This article proposes several statistics for assessing individual fit based on two unidimensional models for continuous responses: linear factor analysis and Samejima's continuous response model. Both models are approached using a common framework based on underlying response variables and are formulated at the individual level as fixed regression…

  14. Review Article: A comparison of flood and earthquake vulnerability assessment indicators

    NASA Astrophysics Data System (ADS)

    de Ruiter, Marleen C.; Ward, Philip J.; Daniell, James E.; Aerts, Jeroen C. J. H.

    2017-07-01

    In a cross-disciplinary study, we carried out an extensive literature review to increase understanding of the vulnerability indicators used in the disciplines of earthquake and flood vulnerability assessment. We provide insights into potential improvements in both fields by identifying and comparing quantitative vulnerability indicators grouped into physical and social categories. Next, a selection of index- and curve-based vulnerability models that use these indicators are described, comparing several characteristics such as temporal and spatial aspects. Earthquake vulnerability methods traditionally have a strong focus on object-based physical attributes used in vulnerability curve-based models, while flood vulnerability studies focus more on indicators applied to aggregated land-use classes in curve-based models. In assessing the differences and similarities between indicators used in earthquake and flood vulnerability models, we only include models that separately assess either of the two hazard types. Flood vulnerability studies could be improved using approaches from earthquake studies, such as developing object-based physical vulnerability curve assessments and incorporating time-of-day-based building occupation patterns. Likewise, earthquake assessments could learn from flood studies by refining their selection of social vulnerability indicators. Based on the lessons obtained in this study, we recommend that future studies explore risk assessment methodologies across different hazard types.

  15. Proceedings of the Conference on Toxicology: Applications of Advances in Toxicology to Risk Assessment. Held at Wright-Patterson AFB, Ohio on 19-21 May 1992

    DTIC Science & Technology

    1993-01-01

    animals in toxicology research, the application of pharmacokinetics and physiologically based pharmacokinetic models in chemical risk assessment, selected...metaplasia Neurotoxicity Nonmutagenic carcinogens Ozone P450 PBPK modeling Perfluorohexane Peroxisome proliferators Pharmacokinetics Pharmacokinetic models...Physiological modeling Physiologically based pharmacokinetic modeling Polycyclic organic matter Quantitative risk assessment RAIRM model Rats

  16. Alternative model for administration and analysis of research-based assessments

    NASA Astrophysics Data System (ADS)

    Wilcox, Bethany R.; Zwickl, Benjamin M.; Hobbs, Robert D.; Aiken, John M.; Welch, Nathan M.; Lewandowski, H. J.

    2016-06-01

    Research-based assessments represent a valuable tool for both instructors and researchers interested in improving undergraduate physics education. However, the historical model for disseminating and propagating conceptual and attitudinal assessments developed by the physics education research (PER) community has not resulted in widespread adoption of these assessments within the broader community of physics instructors. Within this historical model, assessment developers create high quality, validated assessments, make them available for a wide range of instructors to use, and provide minimal (if any) support to assist with administration or analysis of the results. Here, we present and discuss an alternative model for assessment dissemination, which is characterized by centralized data collection and analysis. This model provides a greater degree of support for both researchers and instructors in order to more explicitly support adoption of research-based assessments. Specifically, we describe our experiences developing a centralized, automated system for an attitudinal assessment we previously created to examine students' epistemologies and expectations about experimental physics. This system provides a proof of concept that we use to discuss the advantages associated with centralized administration and data collection for research-based assessments in PER. We also discuss the challenges that we encountered while developing, maintaining, and automating this system. Ultimately, we argue that centralized administration and data collection for standardized assessments is a viable and potentially advantageous alternative to the default model characterized by decentralized administration and analysis. Moreover, with the help of online administration and automation, this model can support the long-term sustainability of centralized assessment systems.

  17. USE OF BIOLOGICALLY BASED COMPUTATIONAL MODELING IN MODE OF ACTION-BASED RISK ASSESSMENT – AN EXAMPLE OF CHLOROFORM

    EPA Science Inventory

    The objective of current work is to develop a new cancer dose-response assessment for chloroform using a physiologically based pharmacokinetic/pharmacodynamic (PBPK/PD) model. The PBPK/PD model is based on a mode of action in which the cytolethality of chloroform occurs when the ...

  18. Authentic assessment based showcase portfolio on learning of mathematical problem solving in senior high school

    NASA Astrophysics Data System (ADS)

    Sukmawati, Zuhairoh, Faihatuz

    2017-05-01

    The purpose of this research was to develop an authentic assessment model based on a showcase portfolio for the learning of mathematical problem solving. This research used the research and development (R&D) method, which consists of four stages of development: Phase I, conducting a preliminary study; Phase II, determining the purpose of development and preparing the initial model; Phase III, trial testing of the instrument for the initial draft model and the initial product. The respondents of this research were students of SMAN 8 and SMAN 20 Makassar. Data were collected through observation, interviews, documentation, a student questionnaire, and tests of mathematical problem-solving ability, and were analysed with descriptive and inferential statistics. The results of this research are an authentic assessment model design based on a showcase portfolio, which involves: 1) steps for implementing the showcase-based authentic assessment, with assessment rubrics for the cognitive, affective, and skill aspects; and 2) the finding that the students' average problem-solving ability, scored using the showcase-portfolio-based authentic assessment, was in the high category, and the students' response was in the good category.

  19. Wearable-Sensor-Based Classification Models of Faller Status in Older Adults.

    PubMed

    Howcroft, Jennifer; Lemaire, Edward D; Kofman, Jonathan

    2016-01-01

    Wearable sensors have potential for quantitative, gait-based, point-of-care fall risk assessment that can be easily and quickly implemented in clinical-care and older-adult living environments. This investigation generated models for wearable-sensor-based fall-risk classification in older adults and identified the optimal sensor type, location, combination, and modelling method, for walking with and without a cognitive load task. A convenience sample of 100 older individuals (75.5 ± 6.7 years; 76 non-fallers, 24 fallers based on 6-month retrospective fall occurrence) walked 7.62 m under single-task and dual-task conditions while wearing pressure-sensing insoles and tri-axial accelerometers at the head, pelvis, and left and right shanks. Participants also completed the Activities-specific Balance Confidence scale, the Community Health Activities Model Program for Seniors questionnaire, and the six-minute walk test, and ranked their fear of falling. Fall risk classification models were assessed for all sensor combinations and three model types: multi-layer perceptron neural network, naïve Bayesian, and support vector machine. The best performing model was a multi-layer perceptron neural network with input parameters from pressure-sensing insoles and head, pelvis, and left shank accelerometers (accuracy = 84%, F1 score = 0.600, MCC score = 0.521). Head-sensor-based models performed best among the single-sensor models for single-task gait assessment. Single-task gait assessment models outperformed models based on dual-task walking or clinical assessment data. Support vector machines and neural networks were the best modelling techniques for fall risk classification. Fall risk classification models developed for point-of-care environments should be developed using support vector machines and neural networks, with a multi-sensor single-task gait assessment.
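    The classification pipeline this record describes can be sketched with scikit-learn. The features, class separation, and hyperparameters below are synthetic placeholders; real gait features extracted from the insoles and accelerometers would replace them:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)
# Synthetic gait features: 76 non-fallers, 24 fallers (mirroring the study's sample sizes)
X = np.vstack([rng.normal(0.0, 1.0, (76, 6)),   # non-fallers
               rng.normal(2.0, 1.0, (24, 6))])  # fallers, with shifted feature means
y = np.array([0] * 76 + [1] * 24)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, stratify=y, test_size=0.25, random_state=0)

# Multi-layer perceptron, the best-performing model type reported in the study
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

    Stratified splitting matters here because fallers are a minority class; without it, a test fold could contain too few fallers to assess sensitivity.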

  20. Proceedings of the 2006 Toxicology and Risk Assessment Conference: Applying Mode of Action in Risk Assessment

    DTIC Science & Technology

    2006-07-01

    physiologically-based pharmacokinetic modeling of interactions and multiple route exposure assessment; and integrating relative potency factors with response...defaults, while at the other end is the use of extensive chemical-specific data in physiologically based pharmacokinetic (PBPK) modeling or even...for internal dosimetry as well as an in-depth perspective on the use and limitations of physiologically based pharmacokinetic (PBPK) models in

  1. Model-Based Approaches for Teaching and Practicing Personality Assessment.

    PubMed

    Blais, Mark A; Hopwood, Christopher J

    2017-01-01

    Psychological assessment is a complex professional skill. Competence in assessment requires an extensive knowledge of personality, neuropsychology, social behavior, and psychopathology, a background in psychometrics, familiarity with a range of multimethod tools, cognitive flexibility, skepticism, and interpersonal sensitivity. This complexity makes assessment a challenge to teach and learn, particularly as the investment of resources and time in assessment has waned in psychological training programs over the last few decades. In this article, we describe 3 conceptual models that can assist teaching and learning psychological assessments. The transtheoretical model of personality provides a personality systems-based framework for understanding how multimethod assessment data relate to major personality systems and can be combined to describe and explain complex human behavior. The quantitative psychopathology-personality trait model is an empirical model based on the hierarchical organization of individual differences. Application of this model can help students understand diagnostic comorbidity and symptom heterogeneity, focus on more meaningful high-order domains, and identify the most effective assessment tools for addressing a given question. The interpersonal situation model is rooted in interpersonal theory and can help students connect test data to here-and-now interactions with patients. We conclude by demonstrating the utility of these models using a case example.

  2. The Mental Health Recovery Measure Can Be Used to Assess Aspects of Both Customer-Based and Service-Based Recovery in the Context of Severe Mental Illness.

    PubMed

    Oliveira-Maia, Albino J; Mendonça, Carina; Pessoa, Maria J; Camacho, Marta; Gago, Joaquim

    2016-01-01

    Within clinical psychiatry, recovery from severe mental illness (SMI) has classically been defined according to symptoms and function (service-based recovery). However, service-users have argued that recovery should be defined as the process of overcoming mental illness, regaining self-control and establishing a meaningful life (customer-based recovery). Here, we aimed to compare customer-based and service-based recovery and clarify their differential relationship with other constructs, namely needs and quality of life. The study was conducted in 101 patients suffering from SMI, recruited from a rural community mental health setting in Portugal. Customer-based recovery and function-related service-based recovery were assessed, respectively, using a shortened version of the Mental Health Recovery Measure (MHRM-20) and the Global Assessment of Functioning score. The Camberwell Assessment of Need scale was used to objectively assess needs, while subjective quality of life was measured with the TL-30s scale. Using multiple linear regression models, we found that the Global Assessment of Functioning score was incrementally predictive of the MHRM-20 score, when added to a model including only clinical and demographic factors, and that this model was further incremented by the score for quality of life. However, in an alternate model using the Global Assessment of Functioning score as the dependent variable, while the MHRM-20 score contributed significantly to the model when added to clinical and demographic factors, the model was not incremented by the score for quality of life. These results suggest that, while a more global concept of recovery from SMI may be assessed using measures for service-based and customer-based recovery, the latter, namely the MHRM-20, also provides information about subjective well-being. Pending confirmation of these findings in other populations, this instrument could thus be useful for comprehensive assessment of recovery and subjective well-being in patients suffering from SMI.
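    The incremental (hierarchical) regression logic in this abstract — does adding a predictor improve a model that already contains demographic factors? — can be sketched with ordinary least squares. The variable names and effect sizes below are invented for illustration; they are not the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 101  # same sample size as the study, but simulated data

age = rng.normal(45, 10, n)    # stand-in demographic factor (step 1)
gaf = rng.normal(60, 12, n)    # stand-in functioning score (added in step 2)
# Simulated recovery score, partly explained by functioning beyond demographics
mhrm = 0.1 * age + 0.6 * gaf + rng.normal(0, 5, n)

def r_squared(X, y):
    """In-sample R^2 of an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

r2_base = r_squared(age.reshape(-1, 1), mhrm)           # step 1: demographics only
r2_full = r_squared(np.column_stack([age, gaf]), mhrm)  # step 2: add functioning
```

    "Incrementally predictive" in the abstract corresponds to `r2_full` exceeding `r2_base` by more than chance, which would be tested with an F-test on the R-squared change.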

  3. The Mental Health Recovery Measure Can Be Used to Assess Aspects of Both Customer-Based and Service-Based Recovery in the Context of Severe Mental Illness

    PubMed Central

    Oliveira-Maia, Albino J.; Mendonça, Carina; Pessoa, Maria J.; Camacho, Marta; Gago, Joaquim

    2016-01-01

    Within clinical psychiatry, recovery from severe mental illness (SMI) has classically been defined according to symptoms and function (service-based recovery). However, service-users have argued that recovery should be defined as the process of overcoming mental illness, regaining self-control and establishing a meaningful life (customer-based recovery). Here, we aimed to compare customer-based and service-based recovery and clarify their differential relationship with other constructs, namely needs and quality of life. The study was conducted in 101 patients suffering from SMI, recruited from a rural community mental health setting in Portugal. Customer-based recovery and function-related service-based recovery were assessed, respectively, using a shortened version of the Mental Health Recovery Measure (MHRM-20) and the Global Assessment of Functioning score. The Camberwell Assessment of Need scale was used to objectively assess needs, while subjective quality of life was measured with the TL-30s scale. Using multiple linear regression models, we found that the Global Assessment of Functioning score was incrementally predictive of the MHRM-20 score, when added to a model including only clinical and demographic factors, and that this model was further incremented by the score for quality of life. However, in an alternate model using the Global Assessment of Functioning score as the dependent variable, while the MHRM-20 score contributed significantly to the model when added to clinical and demographic factors, the model was not incremented by the score for quality of life. These results suggest that, while a more global concept of recovery from SMI may be assessed using measures for service-based and customer-based recovery, the latter, namely the MHRM-20, also provides information about subjective well-being. Pending confirmation of these findings in other populations, this instrument could thus be useful for comprehensive assessment of recovery and subjective well-being in patients suffering from SMI. PMID:27857698

  4. Specifying and Refining a Measurement Model for a Simulation-Based Assessment. CSE Report 619.

    ERIC Educational Resources Information Center

    Levy, Roy; Mislevy, Robert J.

    2004-01-01

    The challenges of modeling students' performance in simulation-based assessments include accounting for multiple aspects of knowledge and skill that arise in different situations and the conditional dependencies among multiple aspects of performance in a complex assessment. This paper describes a Bayesian approach to modeling and estimating…

  5. [Joint application of mathematical models in assessing the residual risk of hepatitis C virus transmitted through blood transfusion].

    PubMed

    Wang, Xun; Jia, Yao; Xie, Yun-zheng; Li, Xiu-mei; Liu, Xiao-ying; Wu, Xiao-fei

    2011-09-01

    A practicable and effective way to assess the residual risk of transfusion-transmitted disease is to establish mathematical models. Based on the characteristics of repeat donors, who donate blood on a regular basis, a model of sero-conversion during the interval between donations was established to assess the incidence among repeat donors. Based on the characteristics of prevalence in the population, a model in which prevalence increases with donor age was established to assess the incidence among first-time donors. And based on the impact of the window period on the blood screening program, a model relating residual risk to incidence and the length of the window period was established to assess the residual risk of blood transfusion. In this paper, these three mathematical models were jointly applied to assess the residual risk of hepatitis C virus (HCV) transmitted through blood transfusion in Shanghai, based on data from the routine blood collection and screening program. All anti-HCV unqualified blood donations were confirmed before assessment. Results showed that the residual risk of HCV transmitted through blood transfusion during Jan. 1st, 2007 to Dec. 31st, 2008 in Shanghai was 1:101 000. The results indicate that residual risk assessment with mathematical models is valuable, and that the residual risk of transfusion-transmitted HCV in Shanghai was at a safe level.
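    The incidence/window-period model this abstract refers to is, at its core, a simple product: residual risk ≈ incidence among donors × the fraction of the year during which a new infection is undetectable by screening. The figures below are invented placeholders, not the Shanghai data:

```python
# Hypothetical inputs (NOT the study's data)
seroconversions = 12      # observed among repeat donors
person_years = 80_000     # total follow-up time between donations
window_days = 59          # assumed anti-HCV screening window period

incidence = seroconversions / person_years        # infections per person-year
residual_risk = incidence * (window_days / 365)   # chance a donation falls in the window
one_in_n = round(1 / residual_risk)               # express as "1 in N donations"
```

    The repeat-donor and first-time-donor models in the paper exist to supply the `incidence` term for each donor population; the final risk figure is then expressed in the same 1:N form as the study's 1:101 000 result.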

  6. Assessing Graduate Attributes: Building a Criteria-Based Competency Model

    ERIC Educational Resources Information Center

    Ipperciel, Donald; ElAtia, Samira

    2014-01-01

    Graduate attributes (GAs) have become a necessary framework of reference for the 21st century competency-based model of higher education. However, the issue of evaluating and assessing GAs still remains unchartered territory. In this article, we present a criteria-based method of assessment that allows for an institution-wide comparison of the…

  7. Assessment of Matrix Multiplication Learning with a Rule-Based Analytical Model--"A Bayesian Network Representation"

    ERIC Educational Resources Information Center

    Zhang, Zhidong

    2016-01-01

    This study explored an alternative assessment procedure to examine learning trajectories of matrix multiplication. It took rule-based analytical and cognitive task analysis methods specifically to break down operation rules for a given matrix multiplication. Based on the analysis results, a hierarchical Bayesian network, an assessment model,…

  8. A systematic literature review of open source software quality assessment models.

    PubMed

    Adewumi, Adewole; Misra, Sanjay; Omoregbe, Nicholas; Crawford, Broderick; Soto, Ricardo

    2016-01-01

    Many open source software (OSS) quality assessment models have been proposed in the literature, yet there is little or no adoption of these models in practice. To guide the formulation of newer models that practitioners will accept, the existing models need to be clearly discriminated by their specific properties. The aim of this study is therefore to perform a systematic literature review of the properties of existing OSS quality assessment models, classifying them by their quality characteristics, their assessment methodology and their domain of application, so as to guide the formulation and development of newer models. Searches in IEEE Xplore, ACM, Science Direct, Springer and Google Search were performed to retrieve all relevant primary studies. Journal and conference papers published between 2003 and 2015 were considered, since the first known OSS quality model emerged in 2003. A total of 19 OSS quality assessment model papers were selected, using assessment criteria we developed to evaluate the quality of the existing studies. The models are classified into five categories based on the quality characteristics they possess, namely: single-attribute, rounded-category, community-only-attribute, non-community-attribute and non-quality-in-use models. Our study shows that software selection based on hierarchical structures is the most popular selection method in the existing OSS quality assessment models. Furthermore, we found that the largest share (47%) of the existing models do not specify any domain of application. In conclusion, our study is a valuable contribution to the community: it helps quality assessment model developers formulate newer models, and helps practitioners (software evaluators) select suitable OSS from among alternatives.

  9. Milestone-specific, Observed data points for evaluating levels of performance (MODEL) assessment strategy for anesthesiology residency programs.

    PubMed

    Nagy, Christopher J; Fitzgerald, Brian M; Kraus, Gregory P

    2014-01-01

    Anesthesiology residency programs will be expected to have Milestones-based evaluation systems in place by July 2014 as part of the Next Accreditation System. The San Antonio Uniformed Services Health Education Consortium (SAUSHEC) anesthesiology residency program developed and implemented a Milestones-based feedback and evaluation system a year ahead of schedule. It has been named the Milestone-specific, Observed Data points for Evaluating Levels of performance (MODEL) assessment strategy. The "MODEL Menu" and the "MODEL Blueprint" are tools that other anesthesiology residency programs can use in developing their own Milestones-based feedback and evaluation systems prior to ACGME-required implementation. Data from our early experience with the streamlined MODEL blueprint assessment strategy showed substantially improved faculty compliance with reporting requirements. The MODEL assessment strategy provides programs with a workable assessment method for residents, and important Milestones data points to programs for ACGME reporting.

  10. Operations Assessment of Launch Vehicle Architectures using Activity Based Cost Models

    NASA Technical Reports Server (NTRS)

    Ruiz-Torres, Alex J.; McCleskey, Carey

    2000-01-01

    The growing emphasis on affordability for space transportation systems requires the assessment of new space vehicles across all life-cycle activities, from design and development through manufacturing and operations. This paper addresses the operational assessment of launch vehicles, focusing on modeling the ground support requirements of a vehicle architecture and estimating the resulting costs and flight rate, and proposes the use of Activity Based Costing (ABC) for this assessment. The model uses expert knowledge to determine the activities, the activity times and the activity costs based on vehicle design characteristics. The approach offers several advantages over current approaches to vehicle architecture assessment, including easier validation and helping vehicle designers understand the cost and cycle-time drivers.
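    The core of an activity-based cost model is a sum, over ground-processing activities, of activity time multiplied by an activity cost rate; flight rate then follows from total cycle time. A hedged sketch, with entirely invented activity names, hours and rates:

```python
# Hypothetical ground-processing activities: (hours per flight, cost per hour).
# These names and numbers are illustrative, not from the paper.
activities = {
    "tile_inspection":    (120.0, 250.0),
    "engine_checkout":    ( 80.0, 400.0),
    "propellant_loading": ( 16.0, 300.0),
}

def per_flight_cost(activities):
    """ABC-style cost roll-up: sum of hours x rate over all activities."""
    return sum(hours * rate for hours, rate in activities.values())

def flights_per_year(activities, available_hours_per_year=4000.0):
    """Crude flight-rate estimate assuming activities run serially."""
    cycle = sum(hours for hours, _ in activities.values())
    return available_hours_per_year / cycle

print(per_flight_cost(activities))               # → 66800.0
print(round(flights_per_year(activities), 1))    # → 18.5
```

    In a real model the hours and rates would themselves be functions of vehicle design characteristics, which is where the expert knowledge enters.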

  11. Strategies to Enhance Online Learning Teams. Team Assessment and Diagnostics Instrument and Agent-based Modeling

    DTIC Science & Technology

    2010-08-12

    Strategies to Enhance Online Learning Teams: Team Assessment and Diagnostics Instrument and Agent-based Modeling. Tristan E. Johnson, Ph.D. (Report date: August 2010.)

  12. Developing Computer Model-Based Assessment of Chemical Reasoning: A Feasibility Study

    ERIC Educational Resources Information Center

    Liu, Xiufeng; Waight, Noemi; Gregorius, Roberto; Smith, Erica; Park, Mihwa

    2012-01-01

    This paper reports a feasibility study on developing computer model-based assessments of chemical reasoning at the high school level. Computer models are flash and NetLogo environments to make simultaneously available three domains in chemistry: macroscopic, submicroscopic, and symbolic. Students interact with computer models to answer assessment…

  13. The Research and Evaluation of Road Environment in the Block of City Based on 3-D Streetscape Data

    NASA Astrophysics Data System (ADS)

    Guan, L.; Ding, Y.; Ge, J.; Yang, H.; Feng, X.; Chen, P.

    2018-04-01

    This paper focuses on the street environment of the block unit in cities. After clarifying the acquisition mode and characteristics of 3D streetscape data, it designs an assessment model for the regional block unit based on such data. 3D streetscape data, acquired by oblique photogrammetry and mobile survey equipment, greatly improve the efficiency and accuracy of urban regional assessment and expand its scope. Combining this new regional assessment model with a street environment assessment model of the current situation, the paper analyses street form and assesses the current street environment in a typical area of Beijing. Through the block-unit street environment assessment, we found that in a megacity the block-unit assessment model based on 3D streetscape data greatly improves assessment efficiency and accuracy. At the same time, problems with motor vehicle lanes, insufficient green shade, damaged railings and lost street space remain serious in Beijing, so improving the street environment of block units remains a heavy task. The research results will provide data support for fine-grained urban management and urban design, and a solid foundation for improving the city's image.

  14. Development and Exemplification of a Model for Teacher Assessment in Primary Science

    ERIC Educational Resources Information Center

    Davies, D. J.; Earle, S.; McMahon, K.; Howe, A.; Collier, C.

    2017-01-01

    The Teacher Assessment in Primary Science project is funded by the Primary Science Teaching Trust and based at Bath Spa University. The study aims to develop a whole-school model of valid, reliable and manageable teacher assessment to inform practice and make a positive impact on primary-aged children's learning in science. The model is based on a…

  15. The Effect of Learning Based on Technology Model and Assessment Technique toward Thermodynamic Learning Achievement

    NASA Astrophysics Data System (ADS)

    Makahinda, T.

    2018-02-01

    The purpose of this research is to determine the effect of a technology-based learning model and assessment technique on thermodynamics achievement, controlling for student intelligence. This is an experimental study. The sample was taken through cluster random sampling, with a total of 80 student respondents. The results show that the thermodynamics achievement of students taught with the environmental-utilization learning model is higher than that of students taught with animated simulation, after controlling for student intelligence. There is also an interaction effect between the technology-based learning model and the assessment technique on students' thermodynamics achievement, after controlling for student intelligence. Based on these findings, lectures should use the environmental learning model for thermodynamics together with project assessment techniques.

  16. Flight simulator fidelity assessment in a rotorcraft lateral translation maneuver

    NASA Technical Reports Server (NTRS)

    Hess, R. A.; Malsbury, T.; Atencio, A., Jr.

    1992-01-01

    A model-based methodology for assessing flight simulator fidelity in closed-loop fashion is exercised in analyzing a rotorcraft low-altitude maneuver for which flight test and simulation results were available. The addition of a handling qualities sensitivity function to a previously developed model-based assessment criteria allows an analytical comparison of both performance and handling qualities between simulation and flight test. Model predictions regarding the existence of simulator fidelity problems are corroborated by experiment. The modeling approach is used to assess analytically the effects of modifying simulator characteristics on simulator fidelity.

  17. Development of a lumbar EMG-based coactivation index for the assessment of complex dynamic tasks.

    PubMed

    Le, Peter; Aurand, Alexander; Walter, Benjamin A; Best, Thomas M; Khan, Safdar N; Mendel, Ehud; Marras, William S

    2018-03-01

    The objective of this study was to develop and test an EMG-based coactivation index and compare it to a coactivation index defined by a biologically assisted lumbar spine model, in order to differentiate between tasks. The purpose was to provide a universal approach to assessing coactivation of a multi-muscle system when a computational model is not accessible. The EMG-based index utilised anthropometry-defined muscle characteristics driven by torso kinematics and EMG. Muscles were classified as agonists or antagonists based upon their 'simulated' moments relative to the total 'simulated' moment. Tasks spanning the range of the index, including lifting, pushing and Valsalva, were used for testing. Results showed that the EMG-based index was comparable to the index defined by the biologically assisted model (r² = 0.78). Overall, the EMG-based index provides a universal, usable method to assess the neuromuscular effort associated with coactivation in complex dynamic tasks when a biomechanical model is not available. Practitioner Summary: A universal coactivation index for the lumbar spine was developed to assess complex dynamic tasks. The method was validated against a model-based index for use when a high-end computational model is not available. Its simplicity allows fewer inputs and makes it usable for task ergonomics and rehabilitation assessment.
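    The agonist/antagonist classification described above can be sketched as follows: muscles whose simulated moment opposes the net moment count as antagonists, and the index is antagonist effort relative to total effort. This is an assumed simplification of the paper's index, with invented example moments, not the authors' actual formulation.

```python
def coactivation_index(muscle_moments):
    """muscle_moments: signed 'simulated' moments (N*m), one per muscle.
    Muscles whose moment opposes the net moment are treated as antagonists;
    the index is antagonist magnitude over total muscle effort."""
    net = sum(muscle_moments)
    agonist = sum(m for m in muscle_moments if m * net > 0)
    antagonist = sum(abs(m) for m in muscle_moments if m * net < 0)
    total = agonist + antagonist
    return antagonist / total if total else 0.0

# Example: extensors dominate (+), flexors coactivate (-). Invented values.
print(round(coactivation_index([40.0, 25.0, -10.0, -7.0]), 3))  # → 0.207
```

    A pure agonist pattern (no opposing moments) yields an index of 0, and near-balanced opposing moments push the index toward 0.5.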

  18. Alternative Model for Administration and Analysis of Research-Based Assessments

    ERIC Educational Resources Information Center

    Wilcox, Bethany R.; Zwickl, Benjamin M.; Hobbs, Robert D.; Aiken, John M.; Welch, Nathan M.; Lewandowski, H. J.

    2016-01-01

    Research-based assessments represent a valuable tool for both instructors and researchers interested in improving undergraduate physics education. However, the historical model for disseminating and propagating conceptual and attitudinal assessments developed by the physics education research (PER) community has not resulted in widespread adoption…

  19. A user credit assessment model based on clustering ensemble for broadband network new media service supervision

    NASA Astrophysics Data System (ADS)

    Liu, Fang; Cao, San-xing; Lu, Rui

    2012-04-01

    This paper proposes a user credit assessment model based on clustering ensemble, aiming to solve the problem of users illegally spreading pirated and pornographic media content on self-service broadband network new media platforms. The idea is to assess new media users' credit by establishing an indicator system based on user credit behaviours; illegal users can then be identified from the assessment results, curbing the transmission of bad video and audio on the network. The proposed clustering ensemble model integrates the advantages of swarm intelligence clustering, which is well suited to user credit behaviour analysis, with K-means clustering, which eliminates the scattered users left over by swarm intelligence clustering, thereby classifying all users' credit automatically. Verification experiments based on a standard credit application dataset from the UCI machine learning repository, together with a comparative experiment against a single swarm intelligence clustering model, indicate that the clustering ensemble model has stronger creditworthiness discrimination, especially in finding the user clusters with the best and worst credit, which will help operators apply incentive or punitive measures accurately. Moreover, compared with a logistic regression model under the same conditions, the clustering ensemble model is robust and has better prediction accuracy.
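    The two-stage idea (a first clustering pass seeds groups, then K-means absorbs the scattered users) can be sketched in miniature. The swarm-intelligence stage is stood in for by hand-picked seed centroids, since the paper's algorithm details are not given here, and the credit scores are invented.

```python
def kmeans(points, centroids, iters=20):
    """Plain 1-D K-means: repeatedly assign points to the nearest centroid,
    then move each centroid to the mean of its members."""
    labels = []
    for _ in range(iters):
        labels = [min(range(len(centroids)), key=lambda k: abs(p - centroids[k]))
                  for p in points]
        for k in range(len(centroids)):
            members = [p for p, lbl in zip(points, labels) if lbl == k]
            if members:
                centroids[k] = sum(members) / len(members)
    return labels, centroids

# Stage 1 (stand-in for swarm-intelligence clustering): seed centroids from
# crude credit-score groups. Stage 2: K-means absorbs the straggler at 0.4.
scores = [0.1, 0.15, 0.2, 0.55, 0.6, 0.62, 0.9, 0.95, 0.4]
labels, cents = kmeans(scores, centroids=[0.1, 0.6, 0.9])
print(labels)  # → [0, 0, 0, 1, 1, 1, 2, 2, 1]
```

    In the model described above, the resulting clusters would then be ranked by their centroid credit level to pick out the best- and worst-credit user groups.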

  20. Qualitative Assessment of Inquiry-Based Teaching Methods

    ERIC Educational Resources Information Center

    Briggs, Michael; Long, George; Owens, Katrina

    2011-01-01

    A new approach to teaching method assessment using student focused qualitative studies and the theoretical framework of mental models is proposed. The methodology is considered specifically for the advantages it offers when applied to the assessment of inquiry-based teaching methods. The theoretical foundation of mental models is discussed, and…

  1. Approaches for the Application of Physiologically Based ...

    EPA Pesticide Factsheets

    This draft report of Approaches for the Application of Physiologically Based Pharmacokinetic (PBPK) Models and Supporting Data in Risk Assessment addresses the application and evaluation of PBPK models for risk assessment purposes. These models represent an important class of dosimetry models that are useful for predicting internal dose at target organs for risk assessment applications. Topics covered include: the types of data required for use of PBPK models in risk assessment, evaluation of PBPK models for use in risk assessment, and the application of these models to address uncertainties resulting from extrapolations (e.g. interspecies extrapolation) often used in risk assessment. In addition, appendices are provided that include a compilation of chemical partition coefficients and rate constants, algorithms for estimating chemical-specific parameters, and a list of publications relating to PBPK modeling. This report is primarily meant to serve as a learning tool for EPA scientists and risk assessors who may be less familiar with the field. In addition, this report can be informative to PBPK modelers within and outside the Agency, as it provides an assessment of the types of data and models that the EPA requires for consideration of a model for use in risk assessment.
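    At their simplest, the dosimetry models the report describes are systems of mass-balance ODEs over perfusion-limited compartments. A minimal sketch of one such compartment pair (blood plus a single well-stirred tissue, IV dose, explicit Euler integration) is below; every parameter value is an illustrative assumption, not taken from the report.

```python
# Minimal perfusion-limited PBPK sketch: blood + one well-stirred tissue
# compartment, intravenous dose. All parameter values are invented.
def simulate(dose_mg, hours, dt=0.001,
             v_blood=5.0, v_tissue=30.0,  # compartment volumes, L
             q=60.0,                      # tissue blood flow, L/h
             p=4.0,                       # tissue:blood partition coefficient
             cl=10.0):                    # clearance from blood, L/h
    c_b = dose_mg / v_blood  # blood concentration, mg/L
    c_t = 0.0                # tissue concentration, mg/L
    for _ in range(int(hours / dt)):
        flux = q * (c_b - c_t / p)              # exchange into tissue, mg/h
        c_b += dt * (-flux - cl * c_b) / v_blood
        c_t += dt * flux / v_tissue
    return c_b, c_t

c_blood, c_tissue = simulate(dose_mg=100.0, hours=2.0)
print(round(c_blood, 3), round(c_tissue, 3))
```

    The internal-dose metrics used in risk assessment (e.g. area under the tissue concentration curve) would be accumulated inside the same loop; real PBPK models simply add more compartments and flow terms to this mass balance.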

  2. Workplace-Based Assessment: Effects of Rater Expertise

    ERIC Educational Resources Information Center

    Govaerts, M. J. B.; Schuwirth, L. W. T.; Van der Vleuten, C. P. M.; Muijtjens, A. M. M.

    2011-01-01

    Traditional psychometric approaches towards assessment tend to focus exclusively on quantitative properties of assessment outcomes. This may limit more meaningful educational approaches towards workplace-based assessment (WBA). Cognition-based models of WBA argue that assessment outcomes are determined by cognitive processes by raters which are…

  3. Assessing Online Textual Feedback to Support Student Intrinsic Motivation Using a Collaborative Text-Based Dialogue System: A Qualitative Study

    ERIC Educational Resources Information Center

    Shroff, Ronnie H.; Deneen, Christopher

    2011-01-01

    This paper assesses textual feedback to support student intrinsic motivation using a collaborative text-based dialogue system. A research model is presented based on research into intrinsic motivation, and the specific construct of feedback provides a framework for the model. A qualitative research methodology is used to validate the model.…

  4. Teacher Conceptions and Approaches Associated with an Immersive Instructional Implementation of Computer-Based Models and Assessment in a Secondary Chemistry Classroom

    ERIC Educational Resources Information Center

    Waight, Noemi; Liu, Xiufeng; Gregorius, Roberto Ma.; Smith, Erica; Park, Mihwa

    2014-01-01

    This paper reports on a case study of an immersive and integrated multi-instructional approach (namely computer-based model introduction and connection with content; facilitation of individual student exploration guided by exploratory worksheet; use of associated differentiated labs and use of model-based assessments) in the implementation of…

  5. Assessment of Programming Language Learning Based on Peer Code Review Model: Implementation and Experience Report

    ERIC Educational Resources Information Center

    Wang, Yanqing; Li, Hang; Feng, Yuqiang; Jiang, Yu; Liu, Ying

    2012-01-01

    The traditional assessment approach, in which one single written examination counts toward a student's total score, no longer meets new demands of programming language education. Based on a peer code review process model, we developed an online assessment system called "EduPCR" and used a novel approach to assess the learning of computer…

  6. Approaches for Increasing Acceptance of Physiologically Based Pharmacokinetic Models in Public Health Risk Assessment

    EPA Science Inventory

    Physiologically based pharmacokinetic (PBPK) models have great potential for application in regulatory and non-regulatory public health risk assessment. The development and application of PBPK models in chemical toxicology has grown steadily since their emergence in the 1980s. Ho...

  7. Risk assessment of storm surge disaster based on numerical models and remote sensing

    NASA Astrophysics Data System (ADS)

    Liu, Qingrong; Ruan, Chengqing; Zhong, Shan; Li, Jian; Yin, Zhonghui; Lian, Xihu

    2018-06-01

    Storm surge is one of the most serious ocean disasters in the world. Risk assessment of storm surge disaster for coastal areas has important implications for planning economic development and reducing disaster losses. Based on risk assessment theory, this paper uses coastal hydrological observations, a numerical storm surge model and multi-source remote sensing data to propose methods for evaluating storm surge hazard and vulnerability, and builds a storm surge risk assessment model. Storm surges of different recurrence periods are simulated with the numerical model, and the flooded areas and flooding depths are calculated to assess the hazard of storm surge; remote sensing data and GIS technology are used to extract key coastal objects and classify coastal land use for the vulnerability assessment of storm surge disaster. The storm surge risk assessment model is applied to a typical coastal city, and the result shows the reliability and validity of the model. The construction and application of the storm surge risk assessment model provide a reference for city development planning and strengthen disaster prevention and mitigation.
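    Risk models of this kind typically combine a hazard grade (from modeled flood depth) with a vulnerability grade (from land-use class) cell by cell. A hedged sketch of that overlay, with invented depth thresholds and class weights:

```python
# Hedged sketch: combine hazard (from modeled flood depth) and vulnerability
# (from land-use class) into a risk grade per grid cell. Thresholds and
# weights are illustrative assumptions, not the paper's values.
def hazard_grade(depth_m):
    if depth_m >= 2.0:
        return 3  # high
    if depth_m >= 0.5:
        return 2  # medium
    if depth_m > 0.0:
        return 1  # low
    return 0      # not flooded

VULNERABILITY = {"residential": 3, "industrial": 2, "farmland": 1, "wetland": 0}

def risk_grade(depth_m, land_use):
    return hazard_grade(depth_m) * VULNERABILITY.get(land_use, 0)

cells = [(2.5, "residential"), (0.8, "industrial"), (0.2, "farmland"), (0.0, "wetland")]
print([risk_grade(d, lu) for d, lu in cells])  # → [9, 4, 1, 0]
```

    In practice the depth field comes from the numerical surge model and the land-use field from remote sensing classification, with the overlay done in GIS rather than per-cell Python.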

  8. Physics-Based Fragment Acceleration Modeling for Pressurized Tank Burst Risk Assessments

    NASA Technical Reports Server (NTRS)

    Manning, Ted A.; Lawrence, Scott L.

    2014-01-01

    As part of comprehensive efforts to develop physics-based risk assessment techniques for space systems at NASA, coupled computational fluid and rigid body dynamic simulations were carried out to investigate the flow mechanisms that accelerate tank fragments in bursting pressurized vessels. Simulations of several configurations were compared to analyses based on the industry-standard Baker explosion model, and were used to formulate an improved version of the model. The standard model, which neglects an external fluid, was found to agree best with simulation results only in configurations where the internal-to-external pressure ratio is very high and fragment curvature is small. The improved model introduces terms that accommodate an external fluid and better account for variations based on circumferential fragment count. Physics-based analysis was critical in increasing the model's range of applicability. The improved tank burst model can be used to produce more accurate risk assessments of space vehicle failure modes that involve high-speed debris, such as exploding propellant tanks and bursting rocket engines.

  9. State-of-the-Art Review on Physiologically Based Pharmacokinetic Modeling in Pediatric Drug Development.

    PubMed

    Yellepeddi, Venkata; Rower, Joseph; Liu, Xiaoxi; Kumar, Shaun; Rashid, Jahidur; Sherwin, Catherine M T

    2018-05-18

    Physiologically based pharmacokinetic modeling and simulation is an important tool for predicting the pharmacokinetics, pharmacodynamics, and safety of drugs in pediatrics. Physiologically based pharmacokinetic modeling is applied in pediatric drug development for first-time-in-pediatric dose selection, simulation-based trial design, correlation with target organ toxicities, risk assessment by investigating possible drug-drug interactions, real-time assessment of pharmacokinetic-safety relationships, and assessment of non-systemic biodistribution targets. This review summarizes the details of a physiologically based pharmacokinetic modeling approach in pediatric drug research, emphasizing reports on pediatric physiologically based pharmacokinetic models of individual drugs. We also compare and contrast the strategies employed by various researchers in pediatric physiologically based pharmacokinetic modeling and provide a comprehensive overview of physiologically based pharmacokinetic modeling strategies and approaches in pediatrics. We discuss the impact of physiologically based pharmacokinetic models on regulatory reviews and product labels in the field of pediatric pharmacotherapy. Additionally, we examine in detail the current limitations and future directions of physiologically based pharmacokinetic modeling in pediatrics with regard to the ability to predict plasma concentrations and pharmacokinetic parameters. Despite the skepticism and concern in the pediatric community about the reliability of physiologically based pharmacokinetic models, there is substantial evidence that pediatric physiologically based pharmacokinetic models have been used successfully to predict differences in pharmacokinetics between adults and children for several drugs. 
It is obvious that the use of physiologically based pharmacokinetic modeling to support various stages of pediatric drug development is highly attractive and will rapidly increase, provided the robustness and reliability of these techniques are well established.

  10. An Entropy-Based Measure for Assessing Fuzziness in Logistic Regression

    ERIC Educational Resources Information Center

    Weiss, Brandi A.; Dardick, William

    2016-01-01

    This article introduces an entropy-based measure of data-model fit that can be used to assess the quality of logistic regression models. Entropy has previously been used in mixture-modeling to quantify how well individuals are classified into latent classes. The current study proposes the use of entropy for logistic regression models to quantify…
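    For a binary-outcome logistic regression, an entropy-based fuzziness measure averages the Shannon entropy of each predicted probability: 0 when every prediction is decisive, 1 when every prediction sits at 0.5. The exact form below is an assumption (the abstract is only excerpted), but it illustrates the idea.

```python
import math

def mean_binary_entropy(probs):
    """Average Shannon entropy (bits) of predicted probabilities from a
    logistic regression: 0 = fully decisive, 1 = maximally fuzzy."""
    def h(p):
        if p in (0.0, 1.0):
            return 0.0  # a certain prediction carries no entropy
        return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
    return sum(h(p) for p in probs) / len(probs)

print(mean_binary_entropy([0.5, 0.5]))    # → 1.0 (maximal fuzziness)
print(mean_binary_entropy([0.01, 0.99]))  # ≈ 0.081 (near-decisive model)
```

    Two models with identical classification accuracy can differ sharply on this measure if one hedges all its predictions near 0.5.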

  11. The System of Systems Architecture Feasibility Assessment Model

    DTIC Science & Technology

    2016-06-01

    OF SYSTEMS ARCHITECTURE FEASIBILITY ASSESSMENT MODEL by Stephen E. Gillespie June 2016 Dissertation Supervisor Eugene Paulo THIS PAGE...Dissertation 4. TITLE AND SUBTITLE THE SYSTEM OF SYSTEMS ARCHITECTURE FEASIBILITY ASSESSMENT MODEL 5. FUNDING NUMBERS 6. AUTHOR(S) Stephen E...SoS architecture feasibility assessment model (SoS-AFAM). Together, these extend current model- based systems engineering (MBSE) and SoS engineering

  12. Uncertainty and Variability in Physiologically-Based ...

    EPA Pesticide Factsheets

    EPA announced the availability of the final report, Uncertainty and Variability in Physiologically-Based Pharmacokinetic (PBPK) Models: Key Issues and Case Studies. This report summarizes some of the recent progress in characterizing uncertainty and variability in physiologically-based pharmacokinetic models and their predictions for use in risk assessment.

  13. Specifying and Refining a Measurement Model for a Computer-Based Interactive Assessment

    ERIC Educational Resources Information Center

    Levy, Roy; Mislevy, Robert J.

    2004-01-01

    The challenges of modeling students' performance in computer-based interactive assessments include accounting for multiple aspects of knowledge and skill that arise in different situations and the conditional dependencies among multiple aspects of performance. This article describes a Bayesian approach to modeling and estimating cognitive models…

  14. Atmospheric Ozone 1985. Assessment of our understanding of the processes controlling its present distribution and change, volume 3

    NASA Technical Reports Server (NTRS)

    1985-01-01

    Topics addressed include: assessment models; model predictions of ozone changes; ozone and temperature trends; trace gas effects on climate; kinetics and photochemical data base; spectroscopic data base (infrared to microwave); instrument intercomparisons and assessments; and monthly mean distribution of ozone and temperature.

  15. Validity of Basic Electronic 1 Module Integrated Character Value Based on Conceptual Change Teaching Model to Increase Students Physics Competency in STKIP PGRI West Sumatera

    NASA Astrophysics Data System (ADS)

    Hidayati, A.; Rahmi, A.; Yohandri; Ratnawulan

    2018-04-01

    The importance of teaching materials that match students' characteristics was the main reason for developing the basic electronics I module with integrated character values based on the conceptual change teaching model. Module development in this research followed Plomp's procedure, which includes preliminary research, a prototyping phase and an assessment phase. In the first year of the research, the module was validated. Content validity reflects the module's conformity with development theory and the demands of the learning model's characteristics. Construct validity reflects the linkage and consistency of each module component with the characteristics of the learning model integrating character values, as obtained through validator assessment. The average validation score assessed by the validators falls into the 'very valid' category. Based on the validators' assessment, the basic electronics I module with integrated character values based on the conceptual change teaching model was then revised.

  16. A virtual maintenance-based approach for satellite assembling and troubleshooting assessment

    NASA Astrophysics Data System (ADS)

    Geng, Jie; Li, Ying; Wang, Ranran; Wang, Zili; Lv, Chuan; Zhou, Dong

    2017-09-01

    In this study, a Virtual Maintenance (VM)-based approach for satellite troubleshooting assessment is proposed. Focusing on various elements of satellite assembly troubleshooting, such as accessibility, ergonomics, wiring and extent of damage, a systematic, quantitative and objective assessment model is established to reduce subjectivity in satellite assembly and troubleshooting assessment. Based on the established assessment model and a satellite virtual prototype, an application process suitable for a virtual environment is then presented. Finally, following this process, all the elements of satellite troubleshooting are analysed and assessed; the corresponding improvements, which transform the conventional procedure into virtual simulation and assessment, are suggested, and flaws in assembly and troubleshooting are revealed. Assembly and troubleshooting schemes can thus be improved early in satellite design with the help of a virtual prototype, and replacing repeated practical operations with virtual rehearsal benefits companies by effectively reducing risk and cost.

  17. Video-based intervention for children with autism: towards improved assessment of pre-requisite imitation skills.

    PubMed

    Rayner, Christopher

    2015-04-01

    To explore the relationship between responses to imitation assessment and video-based intervention (VBI) in children with autism. Interview- and observation-based imitation assessments were conducted for five boys with autism prior to VBI across three studies. In two of the three studies, the boys' imitative responses to videos with an animated model and a human model were also compared. Participants who were assessed to have strong imitation skills were also those who responded more positively to VBI. No clear differences were reported in the boys' responses to the equivalent videos with the animated model and the human model. The level of imitation skills required for successful VBI is relative to the target behaviour. Revision of existing imitation assessment measures, as well as development and validation of more comprehensive measures is warranted for use in conjunction with VBI.

  18. Usefulness of an ability-based health model in work ability assessments provided by psychiatrists and psychology specialists writing social security certificates.

    PubMed

    Solli, Hans Magnus; Barbosa da Silva, António; Egeland, Jens

    2015-01-01

    To investigate whether adding descriptions of the health factors "ability," "environment" and "intentions/goals" to the officially sanctioned biomedical disability model (BDM) would improve assessments of work ability for social security purposes. The study was based on a theoretical design consisting of textual analysis and interpretation. Two further work ability models were defined: the mixed health model (MHM), which describes health factors without assessing a person's abilities in context, and the ability-based health model (AHM), which assesses abilities in a concrete context of environment and intention. Eighty-six social security certificates, written by psychiatrists and psychology specialists in a Norwegian hospital-based mental health clinic, were analysed in relation to the three work ability/disability models. In certificates based on the BDM, a general pattern was found of "gradual work training". The MHM added health factors, but without linking them together in a concrete way. With the AHM, work ability was assessed in terms of a concrete unified evaluation of the claimant's abilities, environments and intentions/goals. Applying the AHM in work ability assessments, in comparison with the BDM and the MHM, is useful because this foregrounds claimants' abilities in a context of concrete goals and work-related opportunities, as a unity. Implications for Rehabilitation A concept of health should include ability, environment and intentions/goals as components. When all three of these components are described in concrete terms in a work ability assessment, an integrated picture of the individual's abilities in the context of his/her particular intentions/goals and work opportunities comes to the fore. This kind of assessment makes it possible to meet the individual's needs for individual follow-up in a work environment.

  19. An Australasian model license reassessment procedure for identifying potentially unsafe drivers.

    PubMed

    Fildes, Brian N; Charlton, Judith; Pronk, Nicola; Langford, Jim; Oxley, Jennie; Koppel, Sjaanie

    2008-08-01

    Most licensing jurisdictions in Australia currently employ age-based assessment programs as a means to manage older driver safety, yet available evidence suggests that these programs have no safety benefits. This paper describes a community referral-based model license reassessment procedure for identifying and assessing potentially unsafe drivers. While the model was primarily developed for assessing older driver fitness to drive, it could be applicable to other forms of driver impairment associated with increased crash risk. It includes a three-tier process of assessment, involving the use of validated and relevant assessment instruments. A case is argued that this process is more systematic, transparent and effective for managing older driver safety, and thus more likely to be widely acceptable to the target community and licensing authorities, than age-based practices.

  20. Design of a component-based integrated environmental modeling framework

    EPA Science Inventory

    Integrated environmental modeling (IEM) includes interdependent science-based components (e.g., models, databases, viewers, assessment protocols) that comprise an appropriate software modeling system. The science-based components are responsible for consuming and producing inform...

  1. Acoustic Tomographic Estimate of Ocean Advective Heat Flux: A Numerical Assessment in the Norwegian Sea

    DTIC Science & Technology

    1990-06-01

    ...of transceivers used and the characteristics of the sound channel. In the assessment we use the General Digital Environmental Model (GDEM), a climatological data base, to simulate an ocean area 550 x 550...

  2. Index-based groundwater vulnerability mapping models using hydrogeological settings: A critical evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, Prashant; Bansod, Baban K.S.

    2015-02-15

    Groundwater vulnerability maps are useful for decision making in land use planning and water resource management. This paper reviews the various groundwater vulnerability assessment models developed across the world. Each model has been evaluated in terms of its pros and cons and the environmental conditions of its application. The paper further discusses the validation techniques used for the vulnerability maps generated by the various models. Implicit challenges associated with the development of groundwater vulnerability assessment models are also identified, with scientific consideration of the parameter relations and their selection. Highlights: • Various index-based groundwater vulnerability assessment models have been discussed. • A comparative analysis of the models and their applicability in different hydrogeological settings has been provided. • Research problems of underlying vulnerability assessment models are also reported in this review paper.

  3. Meaning-Based Scoring: A Systemic Functional Linguistics Model for Automated Test Tasks

    ERIC Educational Resources Information Center

    Gleason, Jesse

    2014-01-01

    Communicative approaches to language teaching that emphasize the importance of speaking (e.g., task-based language teaching) require innovative and evidence-based means of assessing oral language. Nonetheless, research has yet to produce an adequate assessment model for oral language (Chun 2006; Downey et al. 2008). Limited by automatic speech…

  4. Elements of Network-Based Assessment

    ERIC Educational Resources Information Center

    Gibson, David

    2007-01-01

    Elements of network-based assessment systems are envisioned based on recent advances in knowledge and practice in learning theory, assessment design and delivery, and semantic web interoperability. The architecture takes advantage of the meditating role of technology as well as recent models of assessment systems. This overview of the elements…

  5. PHYSIOLOGICALLY BASED PHARMACOKINETIC (PBPK) MODELING AND MODE OF ACTION IN DOSE-RESPONSE ASSESSMENT

    EPA Science Inventory

    PHYSIOLOGICALLY BASED PHARMACOKINETIC (PBPK) MODELING AND MODE OF ACTION IN DOSE-RESPONSE ASSESSMENT. Barton HA. Experimental Toxicology Division, National Health and Environmental Effects Laboratory, ORD, U.S. EPA
    Dose-response analysis requires quantitatively linking infor...

  6. The AgESGUI geospatial simulation system for environmental model application and evaluation

    USDA-ARS?s Scientific Manuscript database

    Practical decision making in spatially-distributed environmental assessment and management is increasingly being based on environmental process-based models linked to geographical information systems (GIS). Furthermore, powerful computers and Internet-accessible assessment tools are providing much g...

  7. Teleassessment: A Model for Team Developmental Assessment of High-Risk Infants Using a Televideo Network.

    ERIC Educational Resources Information Center

    Smith, Douglas L.

    1997-01-01

    Describes a model for team developmental assessment of high-risk infants using a fiber-optic "distance learning" televideo network in south-central New York. An arena-style transdisciplinary play-based assessment model was adapted for use across the televideo connection, and close simulation of conventional assessment procedures was…

  8. Conditional Toxicity Value (CTV) Predictor: An In Silico Approach for Generating Quantitative Risk Estimates for Chemicals.

    PubMed

    Wignall, Jessica A; Muratov, Eugene; Sedykh, Alexander; Guyton, Kathryn Z; Tropsha, Alexander; Rusyn, Ivan; Chiu, Weihsueh A

    2018-05-01

    Human health assessments synthesize human, animal, and mechanistic data to produce toxicity values that are key inputs to risk-based decision making. Traditional assessments are data-, time-, and resource-intensive, and they cannot be developed for most environmental chemicals owing to a lack of appropriate data. As recommended by the National Research Council, we propose a solution for predicting toxicity values for data-poor chemicals through development of quantitative structure-activity relationship (QSAR) models. We used a comprehensive database of chemicals with existing regulatory toxicity values from U.S. federal and state agencies to develop QSAR models. We compared QSAR-based model predictions to those based on high-throughput screening (HTS) assays. QSAR models for noncancer threshold-based values and cancer slope factors had cross-validation-based Q² of 0.25-0.45, mean model errors of 0.70-1.11 log10 units, and applicability domains covering >80% of environmental chemicals. Toxicity values predicted from QSAR models developed in this study were more accurate and precise than those based on HTS assays or mean-based predictions. A publicly accessible web interface to make predictions for any chemical of interest is available at http://toxvalue.org. An in silico tool that can predict toxicity values with an uncertainty of an order of magnitude or less can be used to quickly and quantitatively assess risks of environmental chemicals when traditional toxicity data or human health assessments are unavailable. This tool can fill a critical gap in the risk assessment and management of data-poor chemicals. https://doi.org/10.1289/EHP2998.
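
The cross-validated Q² reported for these models has a standard closed form: Q² = 1 - PRESS / SS_tot, where PRESS sums the squared errors of leave-one-out predictions. The sketch below is a minimal, hypothetical illustration using a one-descriptor linear model, not the study's actual QSAR pipeline:

```python
def loo_q2(x, y):
    """Leave-one-out cross-validated Q^2 for a simple linear model y ~ a + b*x.
    Values near 1 indicate strong predictive ability; values <= 0 indicate
    the model predicts no better than the mean of y."""
    n = len(x)
    press = 0.0
    for i in range(n):
        # Refit on all points except i, then predict the held-out point
        xs = [x[j] for j in range(n) if j != i]
        ys = [y[j] for j in range(n) if j != i]
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        sxx = sum((v - mx) ** 2 for v in xs)
        sxy = sum((xs[k] - mx) * (ys[k] - my) for k in range(len(xs)))
        b = sxy / sxx
        a = my - b * mx
        press += (y[i] - (a + b * x[i])) ** 2
    ybar = sum(y) / n
    ss_tot = sum((v - ybar) ** 2 for v in y)
    return 1.0 - press / ss_tot
```

A strongly linear descriptor-response relationship yields Q² near 1, while an uninformative descriptor yields a low or negative Q².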

  9. Incorporating biologically based models into assessments of risk from chemical contaminants

    NASA Technical Reports Server (NTRS)

    Bull, R. J.; Conolly, R. B.; De Marini, D. M.; MacPhail, R. C.; Ohanian, E. V.; Swenberg, J. A.

    1993-01-01

    The general approach to assessment of risk from chemical contaminants in drinking water involves three steps: hazard identification, exposure assessment, and dose-response assessment. Traditionally, the risks to humans associated with different levels of a chemical have been derived from the toxic responses observed in animals. It is becoming increasingly clear, however, that further information is needed if risks to humans are to be assessed accurately. Biologically based models help clarify the dose-response relationship and reduce uncertainty.

  10. A fuzzy-logic based decision-making approach for identification of groundwater quality based on groundwater quality indices.

    PubMed

    Vadiati, M; Asghari-Moghaddam, A; Nakhaei, M; Adamowski, J; Akbarzadeh, A H

    2016-12-15

    Due to inherent uncertainties in measurement and analysis, groundwater quality assessment is a difficult task. Artificial intelligence techniques, specifically fuzzy inference systems, have proven useful in evaluating groundwater quality in uncertain and complex hydrogeological systems. In the present study, a Mamdani fuzzy-logic-based decision-making approach was developed to assess groundwater quality based on relevant indices. To develop a set of new hybrid fuzzy indices for groundwater quality assessment, a Mamdani fuzzy inference model was built around widely accepted groundwater quality indices: the Groundwater Quality Index (GQI), the Water Quality Index (WQI), and the Ground Water Quality Index (GWQI). To keep the hybrid fuzzy indices generalizable, the well-known acceptability ranges of these indices were used as fuzzy model output ranges, rather than relying on expert knowledge for the fuzzification of output parameters. The proposed approach was evaluated for its ability to assess the drinking water quality of 49 samples collected seasonally from groundwater resources in Iran's Sarab Plain during 2013-2014. Input membership functions were defined as "desirable", "acceptable" and "unacceptable" based on expert knowledge and the standard and permissible limits prescribed by the World Health Organization. Output data were categorized into multiple categories based on the GQI (5 categories), WQI (5 categories), and GWQI (3 categories). Given the potential of fuzzy models to minimize uncertainties, the hybrid fuzzy-based indices produced significantly more accurate assessments of groundwater quality than the traditional indices. The developed models' accuracy was assessed, and a comparison of the performance indices demonstrated the Fuzzy Groundwater Quality Index model to be more accurate than both the Fuzzy Water Quality Index and Fuzzy Ground Water Quality Index models. This suggests that the new hybrid fuzzy indices developed in this research are reliable and flexible when used in groundwater quality assessment for drinking purposes. Copyright © 2016 Elsevier Ltd. All rights reserved.
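
A Mamdani system of the kind described fires each rule with min, aggregates rule outputs with max, and defuzzifies by centroid. The toy sketch below shows only these mechanics; the two inputs, their membership ranges, and the rules are invented for illustration and are not the study's WHO-based limits or indices:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def mamdani_gwq(nitrate, hardness):
    """Toy two-input Mamdani inference: fuzzify, fire rules with min,
    aggregate with max, defuzzify by centroid over a 0-100 quality scale."""
    # Fuzzification (illustrative ranges, not regulatory limits)
    n_low = tri(nitrate, -1, 0, 25)
    n_high = tri(nitrate, 20, 50, 100)
    h_low = tri(hardness, -1, 0, 200)
    h_high = tri(hardness, 150, 500, 1000)
    # Rules: both inputs low -> good quality; either input high -> poor quality
    fire_good = min(n_low, h_low)
    fire_poor = max(n_high, h_high)
    # Centroid defuzzification over a discretized output universe
    num = den = 0.0
    for z in range(101):
        mu = max(min(fire_good, tri(z, 50, 100, 151)),
                 min(fire_poor, tri(z, -1, 0, 50)))
        num += z * mu
        den += mu
    return num / den if den else 50.0
```

Clean inputs land high on the quality scale and contaminated inputs land low, which is the behaviour the hybrid fuzzy indices exploit.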

  11. Simulation Modeling of Resilience Assessment in Indonesian Fertiliser Industry Supply Networks

    NASA Astrophysics Data System (ADS)

    Utami, I. D.; Holt, R. J.; McKay, A.

    2018-01-01

    Supply network resilience is a significant aspect of the performance of the Indonesian fertiliser industry. Decision makers use risk assessment and port management reports to evaluate the availability of infrastructure. An opportunity was identified to incorporate both types of data into an approach for the measurement of resilience. A framework covering both social and technical factors, based on a synthesis of the literature and interviews with industry practitioners, is introduced. A simulation model was then built to allow managers to explore implications for resilience and predict levels of risk in different scenarios. Results of interviews with respondents from the Indonesian fertiliser industry indicated that the simulation model could be valuable in the assessment. This paper provides details of the simulation model, which decision makers can use to explore levels of risk in supply networks. For practitioners, the model could be used by government to assess the current condition of supply networks in Indonesian industries. For academia, the approach provides a new application of agent-based models in research on supply network resilience and presents a real example of how agent-based modeling could be used to support the assessment approach.

  12. Modeling-Oriented Assessment in K-12 Science Education: A synthesis of research from 1980 to 2013 and new directions

    NASA Astrophysics Data System (ADS)

    Namdar, Bahadir; Shen, Ji

    2015-05-01

    Scientific modeling has been advocated as one of the core practices in recent science education policy initiatives. In modeling-based instruction (MBI), students use, construct, and revise models to gain scientific knowledge and inquiry skills. Oftentimes, the benefits of MBI have been documented using assessments targeting students' conceptual understanding or affective domains. Fewer studies have used assessments directly built on the ideas of modeling. The purpose of this study is to synthesize and examine modeling-oriented assessments (MOA) in the last three decades and propose new directions for research in this area. The study uses a collection of 30 empirical research articles that report MOA from an initial library of 153 articles focusing on MBI in K-12 science education from 1980 to 2013. The findings include the variety of themes within each of the three MOA dimensions (modeling products, modeling practices, and meta-modeling knowledge) and the areas of MOA still in need of much work. Based on the review, three guiding principles are proposed for future work in MOA: (a) framing MOA in an ecology of assessment, (b) providing authentic modeling contexts for assessment, and (c) spelling out the connections between MOA items and the essential aspects of modeling to be assessed.

  13. Statistical Methods for Assessments in Simulations and Serious Games. Research Report. ETS RR-14-12

    ERIC Educational Resources Information Center

    Fu, Jianbin; Zapata, Diego; Mavronikolas, Elia

    2014-01-01

    Simulation or game-based assessments produce outcome data and process data. In this article, some statistical models that can potentially be used to analyze data from simulation or game-based assessments are introduced. Specifically, cognitive diagnostic models that can be used to estimate latent skills from outcome data so as to scale these…

  14. Using Hi-FAME (High Feedback-Assessment-Multimedia-Environment) Instructional Model in WBI: A Case Study for Biology Teacher Education.

    ERIC Educational Resources Information Center

    Wang, Tzu-Hua; Wang, Wei-Lung; Wang, Kuo-Hua; Huang, Shih-Chieh

    The study attempted to adapt two web tools, FFS system (Frontpage Feedback System) and WATA system (Web-based Assessment and Test Analysis System), to construct a Hi-FAME (High Feedback-Assessment-Multimedia-Environment) Model in WBI (Web-based Instruction) to facilitate pre-service teacher training. Participants were 30 junior pre-service…

  15. A Risk-Analysis Approach to Implementing Web-Based Assessment

    ERIC Educational Resources Information Center

    Ricketts, Chris; Zakrzewski, Stan

    2005-01-01

    Computer-Based Assessment is a risky business. This paper proposes the use of a model for web-based assessment systems that identifies pedagogic, operational, technical (non web-based), web-based and financial risks. The strategies and procedures for risk elimination or reduction arise from risk analysis and management and are the means by which…

  16. Functional Behavioral Assessment: A School Based Model.

    ERIC Educational Resources Information Center

    Asmus, Jennifer M.; Vollmer, Timothy R.; Borrero, John C.

    2002-01-01

    This article begins by discussing requirements for functional behavioral assessment under the Individuals with Disabilities Education Act and then describes a comprehensive model for the application of behavior analysis in the schools. The model includes descriptive assessment, functional analysis, and intervention and involves the participation…

  17. Assessment of Template-Based Modeling of Protein Structure in CASP11

    PubMed Central

    Modi, Vivek; Xu, Qifang; Adhikari, Sam; Dunbrack, Roland L.

    2016-01-01

    We present the assessment of predictions submitted in the template-based modeling (TBM) category of CASP11 (Critical Assessment of Protein Structure Prediction). Model quality was judged on the basis of global and local measures of accuracy on all atoms including side chains. The top groups on 39 human-server targets based on model 1 predictions were LEER, Zhang, LEE, MULTICOM, and Zhang-Server. The top groups on 81 targets by server groups based on model 1 predictions were Zhang-Server, nns, BAKER-ROSETTASERVER, QUARK, and myprotein-me. In CASP11, the best models for most targets were equal to or better than the best template available in the Protein Data Bank, even for targets with poor templates. The overall performance in CASP11 is similar to the performance of predictors in CASP10 with slightly better performance on the hardest targets. For most targets, assessment measures exhibited bimodal probability density distributions. Multi-dimensional scaling of an RMSD matrix for each target typically revealed a single cluster with models similar to the target structure, with a mode in the GDT-TS density between 40 and 90, and a wide distribution of models highly divergent from each other and from the experimental structure, with density mode at a GDT-TS value of ~20. The models in this peak in the density were either compact models with entirely the wrong fold, or highly non-compact models. The results argue for a density-driven approach in future CASP TBM assessments that accounts for the bimodal nature of these distributions instead of Z-scores, which assume a unimodal, Gaussian distribution. PMID:27081927
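
For reference, GDT-TS (the score whose bimodal density the assessors describe) averages the percentage of residues within 1, 2, 4, and 8 Å of the target. The real score maximizes each percentage over many trial superpositions; the simplified sketch below assumes a single superposition has already been done and takes per-residue distances as given:

```python
def gdt_ts(distances):
    """Simplified GDT-TS from per-residue model-to-target distances (in Å),
    for one fixed superposition: the mean of the percentages of residues
    within 1, 2, 4, and 8 Å cutoffs. 100 is a perfect match."""
    n = len(distances)
    def pct(cutoff):
        return 100.0 * sum(d <= cutoff for d in distances) / n
    return sum(pct(c) for c in (1.0, 2.0, 4.0, 8.0)) / 4.0
```

On this scale, the ~20 density mode mentioned above corresponds to models where only a small fraction of residues fall inside even the 8 Å cutoff.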

  18. BASIN-SCALE ASSESSMENTS FOR SUSTAINABLE ECOSYSTEMS (BASE)

    EPA Science Inventory

    The need for multi-media, multi-stressor, and multi-response models for ecological assessment is widely acknowledged. Assessments at this level of complexity have not been conducted, and therefore pilot assessments are required to identify the critical concepts, models, data, and...

  19. Probabilistic modeling of percutaneous absorption for risk-based exposure assessments and transdermal drug delivery.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ho, Clifford Kuofei

    Chemical transport through human skin can play a significant role in human exposure to toxic chemicals in the workplace, as well as to chemical/biological warfare agents in the battlefield. The viability of transdermal drug delivery also relies on chemical transport processes through the skin. Models of percutaneous absorption are needed for risk-based exposure assessments and drug-delivery analyses, but previous mechanistic models have been largely deterministic. A probabilistic, transient, three-phase model of percutaneous absorption of chemicals has been developed to assess the relative importance of uncertain parameters and processes that may be important to risk-based assessments. Penetration routes through the skin that were modeled include the following: (1) intercellular diffusion through the multiphase stratum corneum; (2) aqueous-phase diffusion through sweat ducts; and (3) oil-phase diffusion through hair follicles. Uncertainty distributions were developed for the model parameters, and a Monte Carlo analysis was performed to simulate probability distributions of mass fluxes through each of the routes. Sensitivity analyses using stepwise linear regression were also performed to identify model parameters that were most important to the simulated mass fluxes at different times. This probabilistic analysis of percutaneous absorption (PAPA) method has been developed to improve risk-based exposure assessments and transdermal drug-delivery analyses, where parameters and processes can be highly uncertain.
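
The Monte Carlo plus sensitivity-analysis workflow described here can be sketched on a much simpler surrogate: a steady-state Fickian flux J = K·D/h·C sampled under assumed parameter distributions, with a rank correlation standing in for the report's stepwise linear regression. All distributions and parameter values below are illustrative assumptions, not the report's:

```python
import math
import random

def simulate_flux(n=5000, seed=1):
    """Monte Carlo sampling of a steady-state Fickian skin flux J = K*D/h*C.
    Parameter distributions are illustrative stand-ins only."""
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        K = rng.lognormvariate(0.0, 0.5)   # partition coefficient (hypothetical)
        D = rng.lognormvariate(-2.0, 0.3)  # effective diffusivity (hypothetical)
        h = rng.uniform(1e-3, 2e-3)        # diffusion path length (hypothetical)
        C = rng.uniform(0.5, 2.0)          # applied concentration (hypothetical)
        rows.append({"K": K, "D": D, "h": h, "C": C, "J": K * D / h * C})
    return rows

def _ranks(values):
    order = sorted(range(len(values)), key=values.__getitem__)
    r = [0.0] * len(values)
    for rank, i in enumerate(order):
        r[i] = float(rank)
    return r

def _pearson(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def sensitivity(rows):
    """Spearman-style rank correlation of each input with the flux --
    a simple surrogate for stepwise linear regression sensitivity analysis."""
    jr = _ranks([r["J"] for r in rows])
    return {p: _pearson(_ranks([r[p] for r in rows]), jr)
            for p in ("K", "D", "h", "C")}
```

Inputs that multiply the flux correlate positively with it, the path length h correlates negatively, and the input with the widest (log-scale) spread dominates the sensitivity ranking.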

  20. Comparison between two statistically based methods, and two physically based models developed to compute daily mean streamflow at ungaged locations in the Cedar River Basin, Iowa

    USGS Publications Warehouse

    Linhart, S. Mike; Nania, Jon F.; Christiansen, Daniel E.; Hutchinson, Kasey J.; Sanders, Curtis L.; Archfield, Stacey A.

    2013-01-01

    A variety of individuals, from water resource managers to recreational users, need streamflow information for planning and decision making at locations where there are no streamgages. To address this problem, two statistically based methods, the Flow Duration Curve Transfer method and the Flow Anywhere method, were developed for statewide application, whereas the two physically based models, the Precipitation-Runoff Modeling System and the Soil and Water Assessment Tool, were developed only for application in the Cedar River Basin. Observed and estimated streamflows from the two methods and two models were compared for goodness of fit at 13 streamgages modeled in the Cedar River Basin by using Nash-Sutcliffe efficiency and percent-bias values. Based on median and mean Nash-Sutcliffe values for the 13 streamgages, the Precipitation-Runoff Modeling System and Soil and Water Assessment Tool models appear to have performed similarly to each other and better than the Flow Duration Curve Transfer and Flow Anywhere methods. Based on median and mean percent-bias values, the Soil and Water Assessment Tool model appears to have generally overestimated daily mean streamflows, whereas the Precipitation-Runoff Modeling System model and the statistical methods appear to have underestimated them. The Flow Duration Curve Transfer method produced the lowest median and mean percent-bias values and appears to perform better than the other models in that respect.
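
The two goodness-of-fit measures used in this comparison have simple closed forms; a minimal sketch follows (note that the sign convention for percent bias varies between publications, so the one here is an assumption):

```python
def nash_sutcliffe(obs, sim):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
    1 is a perfect fit; 0 means the simulation is no better than
    always predicting the observed mean; negative is worse than that."""
    m = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - m) ** 2 for o in obs)
    return 1.0 - num / den

def percent_bias(obs, sim):
    """PBIAS = 100 * sum(sim - obs) / sum(obs). With this convention,
    positive values indicate overestimation and negative values
    underestimation of the observed streamflows."""
    return 100.0 * sum(s - o for o, s in zip(obs, sim)) / sum(obs)
```

A simulation that uniformly overestimates observations by 10% yields a PBIAS of +10 and an NSE somewhat below 1, which matches how the two statistics separate systematic bias from overall fit.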

  1. INCORPORATING NONCHEMICAL STRESSORS INTO CUMULATIVE RISK ASSESSMENTS

    EPA Science Inventory

    The risk assessment paradigm has begun to shift from assessing single chemicals using "reasonable worst case" assumptions for individuals to considering multiple chemicals and community-based models. Inherent in community-based risk assessment is examination of all stressors a...

  2. Base Flow Model Validation

    NASA Technical Reports Server (NTRS)

    Sinha, Neeraj; Brinckman, Kevin; Jansen, Bernard; Seiner, John

    2011-01-01

    A method was developed of obtaining propulsive base flow data in both hot and cold jet environments, at Mach numbers and altitude of relevance to NASA launcher designs. The base flow data was used to perform computational fluid dynamics (CFD) turbulence model assessments of base flow predictive capabilities in order to provide increased confidence in base thermal and pressure load predictions obtained from computational modeling efforts. Predictive CFD analyses were used in the design of the experiments, available propulsive models were used to reduce program costs and increase success, and a wind tunnel facility was used. The data obtained allowed assessment of CFD/turbulence models in a complex flow environment, working within a building-block procedure to validation, where cold, non-reacting test data was first used for validation, followed by more complex reacting base flow validation.

  3. Alternative Strategies in Assessing Special Education Needs

    ERIC Educational Resources Information Center

    Dykeman, Bruce F.

    2006-01-01

    The conventional use of standardized testing within a discrepancy analysis model is reviewed. The Response-to-Intervention (RTI) process is explained, along with descriptions of assessment procedures within RTI: functional assessment, authentic assessment, curriculum-based measurement, and play-based assessment. Psychometric issues relevant to RTI…

  4. Stress testing hydrologic models using bottom-up climate change assessment

    NASA Astrophysics Data System (ADS)

    Stephens, C.; Johnson, F.; Marshall, L. A.

    2017-12-01

    Bottom-up climate change assessment is a promising approach for understanding the vulnerability of a system to potential future changes. The technique has been utilised successfully in risk-based assessments of future flood severity and infrastructure vulnerability. We find that it is also an ideal tool for assessing hydrologic model performance in a changing climate. In this study, we applied bottom-up climate change to compare the performance of two different hydrologic models (an event-based and a continuous model) under increasingly severe climate change scenarios. This allowed us to diagnose likely sources of future prediction error in the two models. The climate change scenarios were based on projections for southern Australia, which indicate drier average conditions with increased extreme rainfall intensities. We found that the key weakness in using the event-based model to simulate drier future scenarios was the model's inability to dynamically account for changing antecedent conditions. This led to increased variability in model performance relative to the continuous model, which automatically accounts for the wetness of a catchment through dynamic simulation of water storages. When considering more intense future rainfall events, representation of antecedent conditions became less important than assumptions around (non)linearity in catchment response. The linear continuous model we applied may underestimate flood risk in a future climate with greater extreme rainfall intensity. In contrast with the recommendations of previous studies, this indicates that continuous simulation is not necessarily the key to robust flood modelling under climate change. By applying bottom-up climate change assessment, we were able to understand systematic changes in relative model performance under changing conditions and deduce likely sources of prediction error in the two models.

  5. Characterizing Uncertainty and Variability in PBPK Models: State of the Science and Needs for Research and Implementation

    EPA Science Inventory

    Mode-of-action based risk and safety assessments can rely upon tissue dosimetry estimates in animals and humans obtained from physiologically-based pharmacokinetic (PBPK) modeling. However, risk assessment also increasingly requires characterization of uncertainty and variabilit...

  6. Assessment "as" Learning: Enhancing Discourse, Understanding, and Achievement in Innovative Science Curricula

    ERIC Educational Resources Information Center

    Hickey, Daniel T.; Taasoobshirazi, Gita; Cross, Dionne

    2012-01-01

    An assessment-oriented design-based research model was applied to existing inquiry-oriented multimedia programs in astronomy, biology, and ecology. Building on emerging situative theories of assessment, the model extends prevailing views of formative assessment "for" learning by embedding "discursive" formative assessment more directly into the…

  7. A Qualitative Approach to Portfolios: The Early Assessment for Exceptional Potential Model.

    ERIC Educational Resources Information Center

    Shaklee, Beverly D.; Viechnicki, Karen J.

    1995-01-01

    The Early Assessment for Exceptional Potential portfolio assessment model assesses children as exceptional learners, users, generators, and pursuers of knowledge. It is based on use of authentic learning opportunities; interaction of assessment, curriculum, and instruction; multiple criteria derived from multiple sources; and systematic teacher…

  8. Development of good modelling practice for physiologically based pharmacokinetic models for use in risk assessment: The first steps

    EPA Science Inventory

    The increasing use of tissue dosimetry estimated using pharmacokinetic models in chemical risk assessments in multiple countries necessitates the need to develop internationally recognized good modelling practices. These practices would facilitate sharing of models and model eva...

  9. An Introduction to the Partial Credit Model for Developing Nursing Assessments.

    ERIC Educational Resources Information Center

    Fox, Christine

    1999-01-01

    Demonstrates how the partial credit model, a variation of the Rasch Measurement Model, can be used to develop performance-based assessments for nursing education. Applies the model using the Practical Knowledge Inventory for Nurses. (SK)
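
The partial credit model referenced here has a compact closed form: the probability of scoring in category k is proportional to exp of the sum of (theta - delta_j) over the steps up to k. A minimal sketch, with made-up ability and step-difficulty values for illustration:

```python
import math

def pcm_probs(theta, deltas):
    """Partial credit model: probabilities of scoring in categories 0..m
    on an item with step difficulties deltas[0..m-1], for ability theta.
    P(k) is proportional to exp(sum_{j<=k} (theta - delta_j)), with the
    empty sum (category 0) contributing exp(0)."""
    logits = [0.0]
    for d in deltas:
        logits.append(logits[-1] + (theta - d))
    weights = [math.exp(l) for l in logits]
    total = sum(weights)
    return [w / total for w in weights]
```

A nurse whose ability exceeds every step difficulty is most likely to earn the top category, while an ability exactly at a single step difficulty splits the two adjacent categories evenly.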

  10. EPA CENTER FOR EXPOSURE ASSESSMENT MODELING (CEAM)

    EPA Science Inventory

    The EPA Center for Exposure Assessment Modeling (CEAM) supports the Agency and professional community in environmental, risk-based decision-making by expanding their applications expertise for quantitatively assessing pollutant exposure via aquatic, terrestrial, and multimedia pa...

  11. A review of air exchange rate models for air pollution exposure assessments.

    PubMed

    Breen, Michael S; Schultz, Bradley D; Sohn, Michael D; Long, Thomas; Langstaff, John; Williams, Ronald; Isaacs, Kristin; Meng, Qing Yu; Stallings, Casson; Smith, Luther

    2014-11-01

    A critical aspect of air pollution exposure assessments is estimation of the air exchange rate (AER) for various buildings where people spend their time. The AER, which is the rate of exchange of indoor air with outdoor air, is an important determinant for entry of outdoor air pollutants and for removal of indoor-emitted air pollutants. This paper presents an overview and critical analysis of the scientific literature on empirical and physically based AER models for residential and commercial buildings; the models highlighted here are feasible for exposure assessments as extensive inputs are not required. Models are included for the three types of airflows that can occur across building envelopes: leakage, natural ventilation, and mechanical ventilation. Guidance is provided to select the preferable AER model based on available data, desired temporal resolution, types of airflows, and types of buildings included in the exposure assessment. For exposure assessments with some limited building leakage or AER measurements, strategies are described to reduce AER model uncertainty. This review will facilitate the selection of AER models in support of air pollution exposure assessments.
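
One standard empirical route to the AER that reviews like this one build on is tracer-gas decay in a single well-mixed zone, where C(t) = C_out + (C0 - C_out)·exp(-AER·t). The sketch below inverts that mass balance; it is a generic textbook relation, not a specific model from this review:

```python
import math

def aer_from_decay(c0, ct, c_out, hours):
    """Air exchange rate (1/h) from tracer-gas (e.g. CO2) decay, assuming a
    single well-mixed zone: C(t) = C_out + (C0 - C_out) * exp(-AER * t).
    c0 and ct are indoor concentrations at the start and after `hours`,
    c_out is the (constant) outdoor concentration."""
    return math.log((c0 - c_out) / (ct - c_out)) / hours
```

For example, an indoor CO2 level decaying from 1400 ppm toward a 400 ppm outdoor background reveals the zone's AER directly from two timed measurements.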

  12. Laboratory-based versus non-laboratory-based method for assessment of cardiovascular disease risk: the NHANES I Follow-up Study cohort

    PubMed Central

    Gaziano, Thomas A; Young, Cynthia R; Fitzmaurice, Garrett; Atwood, Sidney; Gaziano, J Michael

    2008-01-01

    Summary Background Around 80% of all cardiovascular deaths occur in developing countries. Assessment of those patients at high risk is an important strategy for prevention. Since developing countries have limited resources for prevention strategies that require laboratory testing, we assessed whether a risk prediction method that did not require any laboratory tests could be as accurate as one requiring laboratory information. Methods The National Health and Nutrition Examination Survey (NHANES) was a prospective cohort study of 14 407 US participants aged 25–74 years at the time they were first examined (between 1971 and 1975). Our follow-up study population included participants with complete information on these surveys who did not report a history of cardiovascular disease (myocardial infarction, heart failure, stroke, angina) or cancer, yielding an analysis dataset of N=6186. We compared how well either method could predict first-time fatal and non-fatal cardiovascular disease events in this cohort. For the laboratory-based model, which required blood testing, we used standard risk factors to assess risk of cardiovascular disease: age, systolic blood pressure, smoking status, total cholesterol, reported diabetes status, and current treatment for hypertension. For the non-laboratory-based model, we substituted body-mass index for cholesterol. Findings In the cohort of 6186, there were 1529 first-time cardiovascular events and 578 (38%) deaths due to cardiovascular disease over 21 years. In women, the laboratory-based model was useful for predicting events, with a c statistic of 0·829. The c statistic of the non-laboratory-based model was 0·831. In men, the results were similar (0·784 for the laboratory-based model and 0·783 for the non-laboratory-based model). Results were similar between the laboratory-based and non-laboratory-based models in both men and women when restricted to fatal events only.
Interpretation A method that uses non-laboratory-based risk factors predicted cardiovascular events as accurately as one that relied on laboratory-based values. This approach could simplify risk assessment in situations where laboratory testing is inconvenient or unavailable. PMID:18342687
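
    The comparison above hinges on the c statistic (area under the ROC curve). A minimal sketch of such a comparison, using synthetic data and made-up coefficients rather than the NHANES cohort:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Synthetic risk factors (illustrative distributions, not NHANES data).
age = rng.uniform(25, 74, n)
sbp = rng.normal(130, 20, n)                           # systolic blood pressure
smoker = rng.integers(0, 2, n).astype(float)
chol = rng.normal(5.2, 1.0, n)                         # total cholesterol
bmi = 22 + 1.5 * (chol - 5.2) + rng.normal(0, 2.5, n)  # BMI, correlated with cholesterol

# Simulate events from a logistic model with made-up coefficients.
logit = -9.0 + 0.06 * age + 0.02 * sbp + 0.5 * smoker + 0.3 * chol
event = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

def c_statistic(score, event):
    """Mann-Whitney estimate of the c statistic (ROC area): the probability
    that a randomly chosen case outscores a randomly chosen non-case."""
    pos, neg = score[event], score[~event]
    gt = (pos[:, None] > neg[None, :]).sum()
    eq = (pos[:, None] == neg[None, :]).sum()
    return (gt + 0.5 * eq) / (len(pos) * len(neg))

lab_score = 0.06 * age + 0.02 * sbp + 0.5 * smoker + 0.3 * chol    # uses cholesterol
nonlab_score = 0.06 * age + 0.02 * sbp + 0.5 * smoker + 0.2 * bmi  # BMI substituted

c_lab = c_statistic(lab_score, event)
c_nonlab = c_statistic(nonlab_score, event)
print(f"laboratory-based c = {c_lab:.3f}, non-laboratory-based c = {c_nonlab:.3f}")
```

    Because the two scores share most of their predictors, the c statistics come out close, mirroring the paper's finding that body-mass index can stand in for cholesterol with little loss of discrimination.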

  13. Innovative Models of Dental Care Delivery and Coverage: Patient-Centric Dental Benefits Based on Digital Oral Health Risk Assessment.

    PubMed

    Martin, John; Mills, Shannon; Foley, Mary E

    2018-04-01

    Innovative models of dental care delivery and coverage are emerging across oral health care systems causing changes to treatment and benefit plans. A novel addition to these models is digital risk assessment, which offers a promising new approach that incorporates the use of a cloud-based technology platform to assess an individual patient's risk for oral disease. Risk assessment changes treatment by including risk as a modifier of treatment and as a determinant of preventive services. Benefit plans are being developed to use risk assessment to predetermine preventive benefits for patients identified at elevated risk for oral disease. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Multilayered Word Structure Model for Assessing Spelling of Finnish Children in Shallow Orthography

    ERIC Educational Resources Information Center

    Kulju, Pirjo; Mäkinen, Marita

    2017-01-01

    This study explores Finnish children's word-level spelling by applying a linguistically based multilayered word structure model for assessing spelling performance. The model contributes to the analytical qualitative assessment approach in order to identify children's spelling performance for enhancing writing skills. The children (N = 105)…

  15. Sensors vs. experts - a performance comparison of sensor-based fall risk assessment vs. conventional assessment in a sample of geriatric patients.

    PubMed

    Marschollek, Michael; Rehwald, Anja; Wolf, Klaus-Hendrik; Gietzelt, Matthias; Nemitz, Gerhard; zu Schwabedissen, Hubertus Meyer; Schulze, Mareike

    2011-06-28

    Fall events contribute significantly to mortality, morbidity and costs in our ageing population. In order to identify persons at risk and to target preventive measures, many scores and assessment tools have been developed. These often require expertise and are costly to implement. Recent research investigates the use of wearable inertial sensors to provide objective data on motion features which can be used to assess individual fall risk automatically. So far it is unknown how well this new method performs in comparison with conventional fall risk assessment tools. The aim of our research is to compare the predictive performance of our new sensor-based method with conventional and established methods, based on prospective data. In a first study phase, 119 inpatients of a geriatric clinic took part in motion measurements using a wireless triaxial accelerometer during a Timed Up&Go (TUG) test and a 20 m walk. Furthermore, the St. Thomas Risk Assessment Tool in Falling Elderly Inpatients (STRATIFY) was performed, and the multidisciplinary geriatric care team estimated the patients' fall risk. In a second follow-up phase of the study, 46 of the participants were interviewed after one year, including a fall and activity assessment. The predictive performances of the TUG, the STRATIFY and team scores are compared. Furthermore, two automatically induced logistic regression models based on conventional clinical and assessment data (CONV) as well as sensor data (SENSOR) are matched. Among the risk assessment scores, the geriatric team score (sensitivity 56%, specificity 80%) outperforms STRATIFY and TUG. The induced logistic regression models CONV and SENSOR achieve similar performance values (sensitivity 68%/58%, specificity 74%/78%, AUC 0.74/0.72, +LR 2.64/2.61). Both models are able to identify more persons at risk than the simple scores. 
Sensor-based objective measurements of motion parameters in geriatric patients can be used to assess individual fall risk, and our prediction model's performance matches that of a model based on conventional clinical and assessment data. Sensor-based measurements using a small wearable device may contribute significant information to conventional methods and are feasible in an unsupervised setting. More prospective research is needed to assess the cost-benefit relation of our approach.
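
    The sensitivity, specificity and positive likelihood ratio quoted above can all be computed from a 2x2 screening table. A small sketch with illustrative counts (not the study's data):

```python
def screening_stats(tp, fn, fp, tn):
    sensitivity = tp / (tp + fn)                   # flagged fallers / all fallers
    specificity = tn / (tn + fp)                   # cleared non-fallers / all non-fallers
    positive_lr = sensitivity / (1 - specificity)  # +LR: odds multiplier for a positive test
    return sensitivity, specificity, positive_lr

# Hypothetical counts: 19 of 28 fallers flagged, 14 of 18 non-fallers cleared.
sens, spec, pos_lr = screening_stats(tp=19, fn=9, fp=4, tn=14)
print(f"sensitivity={sens:.0%}  specificity={spec:.0%}  +LR={pos_lr:.2f}")
```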

  16. Sensors vs. experts - A performance comparison of sensor-based fall risk assessment vs. conventional assessment in a sample of geriatric patients

    PubMed Central

    2011-01-01

    Background Fall events contribute significantly to mortality, morbidity and costs in our ageing population. In order to identify persons at risk and to target preventive measures, many scores and assessment tools have been developed. These often require expertise and are costly to implement. Recent research investigates the use of wearable inertial sensors to provide objective data on motion features which can be used to assess individual fall risk automatically. So far it is unknown how well this new method performs in comparison with conventional fall risk assessment tools. The aim of our research is to compare the predictive performance of our new sensor-based method with conventional and established methods, based on prospective data. Methods In a first study phase, 119 inpatients of a geriatric clinic took part in motion measurements using a wireless triaxial accelerometer during a Timed Up&Go (TUG) test and a 20 m walk. Furthermore, the St. Thomas Risk Assessment Tool in Falling Elderly Inpatients (STRATIFY) was performed, and the multidisciplinary geriatric care team estimated the patients' fall risk. In a second follow-up phase of the study, 46 of the participants were interviewed after one year, including a fall and activity assessment. The predictive performances of the TUG, the STRATIFY and team scores are compared. Furthermore, two automatically induced logistic regression models based on conventional clinical and assessment data (CONV) as well as sensor data (SENSOR) are matched. Results Among the risk assessment scores, the geriatric team score (sensitivity 56%, specificity 80%) outperforms STRATIFY and TUG. The induced logistic regression models CONV and SENSOR achieve similar performance values (sensitivity 68%/58%, specificity 74%/78%, AUC 0.74/0.72, +LR 2.64/2.61). Both models are able to identify more persons at risk than the simple scores. 
Conclusions Sensor-based objective measurements of motion parameters in geriatric patients can be used to assess individual fall risk, and our prediction model's performance matches that of a model based on conventional clinical and assessment data. Sensor-based measurements using a small wearable device may contribute significant information to conventional methods and are feasible in an unsupervised setting. More prospective research is needed to assess the cost-benefit relation of our approach. PMID:21711504

  17. The benefit of 3D laser scanning technology in the generation and calibration of FEM models for health assessment of concrete structures.

    PubMed

    Yang, Hao; Xu, Xiangyang; Neumann, Ingo

    2014-11-19

    Terrestrial laser scanning (TLS) is a new technique for quickly acquiring three-dimensional information. In this paper we investigate the health assessment of concrete structures with a Finite Element Method (FEM) model based on TLS. The work focuses on the benefits of 3D TLS in the generation and calibration of FEM models, in order to build a convenient, efficient and intelligent model that can be widely used for the detection and assessment of bridges, buildings, subways and other structures. After comparing the finite element simulation with surface-based measurement data from TLS, the FEM model is determined to be acceptable, with an error of less than 5%. The benefit of TLS lies mainly in the possibility of a surface-based validation of results predicted by the FEM model.
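
    The acceptance criterion described above (FEM predictions within 5% of TLS measurements) amounts to a simple surface-based validation check. The displacement values below are illustrative, not the paper's data:

```python
import numpy as np

# Illustrative surface displacements (mm); not the paper's measurements.
fem_displacement = np.array([1.20, 2.35, 3.10, 4.02, 5.15])   # FEM-predicted
tls_displacement = np.array([1.24, 2.30, 3.18, 3.95, 5.05])   # TLS-measured

rel_error = np.abs(fem_displacement - tls_displacement) / np.abs(tls_displacement)
acceptable = bool(rel_error.max() < 0.05)    # the paper's 5% criterion
print(f"max relative error: {rel_error.max():.1%}, acceptable: {acceptable}")
```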

  18. Development of a hydrologic connectivity dataset for SWAT assessments in the U.S.

    USDA-ARS?s Scientific Manuscript database

    Model-based water quality assessments are an important informer of conservation and environmental policy in the US. The recently completed national-scale Conservation Effects Assessment Project (CEAP) is being repeated using newer data, greater resolution, and enhanced models. National assessment...

  19. Impact on house staff evaluation scores when changing from a Dreyfus- to a Milestone-based evaluation model: one internal medicine residency program's findings.

    PubMed

    Friedman, Karen A; Balwan, Sandy; Cacace, Frank; Katona, Kyle; Sunday, Suzanne; Chaudhry, Saima

    2014-01-01

    As graduate medical education (GME) moves into the Next Accreditation System (NAS), programs must take a critical look at their current models of evaluation and assess how well they align with reporting outcomes. Our objective was to assess the impact on house staff evaluation scores when transitioning from a Dreyfus-based model of evaluation to a Milestone-based model of evaluation. Milestones are a key component of the NAS. We analyzed all end of rotation evaluations of house staff completed by faculty for academic years 2010-2011 (pre-Dreyfus model) and 2011-2012 (post-Milestone model) in one large university-based internal medicine residency training program. Main measures included change in PGY-level average score; slope, range, and separation of average scores across all six Accreditation Council for Graduate Medical Education (ACGME) competencies. Transitioning from a Dreyfus-based model to a Milestone-based model resulted in a larger separation in the scores between our three post-graduate year classes, a steeper progression of scores in the PGY-1 class, a wider use of the 5-point scale on our global end of rotation evaluation form, and a downward shift in the PGY-1 scores and an upward shift in the PGY-3 scores. For faculty trained in both models of assessment, the Milestone-based model had greater discriminatory ability as evidenced by the larger separation in the scores for all the classes, in particular the PGY-1 class.

  20. A Consensus Model: Shifting assessment practices in dietetics tertiary education.

    PubMed

    Bacon, Rachel; Kellett, Jane; Dart, Janeane; Knight-Agarwal, Cathy; Mete, Rebecca; Ash, Susan; Palermo, Claire

    2018-02-21

    The aim of this research was to evaluate a Consensus Model for competency-based assessment. An evaluative case study was used to allow a holistic examination of a constructivist-interpretivist programmatic model of assessment. Using a modified Delphi process, the competence of all 29 students enrolled in their final year of a Master of Nutrition and Dietetics course was assessed by a panel (with expertise in competency-based assessment; industry and academic representation) from a course e-portfolio (that included the judgements of student performance made by worksite educators) and a panel interview. Data were triangulated with assessments from a capstone internship. Qualitative descriptive studies with worksite educators (focus groups n = 4, n = 5, n = 8) and students (personal interviews n = 29) explored stakeholder experiences analysed using thematic analysis. Panel consensus was achieved for all cases by the third round and corroborated by internship outcomes. For 34% of students this differed from the 'interpretations' of their performance made by their worksite educator/s. Emerging qualitative themes from stakeholder data found that the model: (i) supported sustainable assessment practices; (ii) shifted the power relationship between students and worksite educators and (iii) provided a fair method to assess competence. To maximise benefits, more refinement, resources and training are required. This research questions competency-based assessment practices based on discrete placement units and supports a constructivist-interpretivist programmatic approach where evidence across a whole course of study is considered by a panel of assessors. © 2018 Dietitians Association of Australia.

  1. Introduction: Hazard mapping

    USGS Publications Warehouse

    Baum, Rex L.; Miyagi, Toyohiko; Lee, Saro; Trofymchuk, Oleksandr M

    2014-01-01

    Twenty papers were accepted into the session on landslide hazard mapping for oral presentation. The papers presented susceptibility and hazard analysis based on approaches ranging from field-based assessments to statistically based models to assessments that combined hydromechanical and probabilistic components. Many of the studies have taken advantage of increasing availability of remotely sensed data and nearly all relied on Geographic Information Systems to organize and analyze spatial data. The studies used a range of methods for assessing performance and validating hazard and susceptibility models. A few of the studies presented in this session also included some element of landslide risk assessment. This collection of papers clearly demonstrates that a wide range of approaches can lead to useful assessments of landslide susceptibility and hazard.

  2. FRAMEWORK FOR EVALUATION OF PHYSIOLOGICALLY-BASED PHARMACOKINETIC MODELS FOR USE IN SAFETY OR RISK ASSESSMENT

    EPA Science Inventory

    ABSTRACT

    Proposed applications of increasingly sophisticated biologically-based computational models, such as physiologically-based pharmacokinetic (PBPK) models, raise the issue of how to evaluate whether the models are adequate for proposed uses including safety or risk ...

  3. Approaches for the Application of Physiologically Based Pharmacokinetic (PBPK) Models and Supporting Data in Risk Assessment (Final Report)

    EPA Science Inventory

    EPA released the final report, Approaches for the Application of Physiologically Based Pharmacokinetic (PBPK) Models and Supporting Data in Risk Assessment as announced in a September 22 2006 Federal Register Notice.This final report addresses the application and evaluati...

  4. Cotton growth modeling and assessment using UAS visual-band imagery

    USDA-ARS?s Scientific Manuscript database

    This paper explores the potential of using unmanned aircraft system (UAS)-based visible-band images to assess cotton growth. By applying the structure-from-motion algorithm, cotton plant height (ph) and canopy cover (cc) were retrieved from the point cloud-based digital surface models (DSMs) and ort...

  5. Automated workflows for modelling chemical fate, kinetics and toxicity.

    PubMed

    Sala Benito, J V; Paini, Alicia; Richarz, Andrea-Nicole; Meinl, Thorsten; Berthold, Michael R; Cronin, Mark T D; Worth, Andrew P

    2017-12-01

    Automation is universal in today's society, from operating equipment such as machinery, in factory processes, to self-parking automobile systems. While these examples show the efficiency and effectiveness of automated mechanical processes, automated procedures that support the chemical risk assessment process are still in their infancy. Future human safety assessments will rely increasingly on the use of automated models, such as physiologically based kinetic (PBK) and dynamic models and the virtual cell based assay (VCBA). These biologically-based models will be coupled with chemistry-based prediction models that also automate the generation of key input parameters such as physicochemical properties. The development of automated software tools is an important step in harmonising and expediting the chemical safety assessment process. In this study, we illustrate how the KNIME Analytics Platform can be used to provide a user-friendly graphical interface for these biokinetic models, such as PBK models and the VCBA, which simulate the fate of chemicals within the body and in in vitro test systems, respectively. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  6. Metrological traceability in education: A practical online system for measuring and managing middle school mathematics instruction

    NASA Astrophysics Data System (ADS)

    Torres Irribarra, D.; Freund, R.; Fisher, W.; Wilson, M.

    2015-02-01

    Computer-based, online assessments modelled, designed, and evaluated for adaptively administered invariant measurement are uniquely suited to defining and maintaining traceability to standardized units in education. An assessment of this kind is embedded in the Assessing Data Modeling and Statistical Reasoning (ADM) middle school mathematics curriculum. Diagnostic information about middle school students' learning of statistics and modeling is provided via computer-based formative assessments for seven constructs that comprise a learning progression for statistics and modeling from late elementary through the middle school grades. The seven constructs are: Data Display, Meta-Representational Competence, Conceptions of Statistics, Chance, Modeling Variability, Theory of Measurement, and Informal Inference. The end product is a web-delivered system built with Ruby on Rails for use by curriculum development teams working with classroom teachers in designing, developing, and delivering formative assessments. The online accessible system allows teachers to accurately diagnose students' unique comprehension and learning needs in a common language of real-time assessment, logging, analysis, feedback, and reporting.

  7. Developing School Heads as Instructional Leaders in School-Based Assessment: Challenges and Next Steps

    ERIC Educational Resources Information Center

    Lingam, Govinda Ishwar; Lingam, Narsamma

    2016-01-01

    The study explored challenges faced by school leaders in the Pacific nation of Solomon Islands in school-based assessment, and the adequacy of an assessment course to prepare them. A questionnaire including both open and closed-ended questions elicited relevant data from the school leaders. Modelling best practices in school-based assessment was…

  8. Assessing College Students' Understanding of Acid Base Chemistry Concepts

    ERIC Educational Resources Information Center

    Wan, Yanjun Jean

    2014-01-01

    Typically most college curricula include three acid base models: Arrhenius', Bronsted-Lowry's, and Lewis'. Although Lewis' acid base model is generally thought to be the most sophisticated among these three models, and can be further applied in reaction mechanisms, most general chemistry curricula either do not include Lewis' acid base model, or…

  9. A simulations approach for meta-analysis of genetic association studies based on additive genetic model.

    PubMed

    John, Majnu; Lencz, Todd; Malhotra, Anil K; Correll, Christoph U; Zhang, Jian-Ping

    2018-06-01

    Meta-analysis of genetic association studies is being increasingly used to assess phenotypic differences between genotype groups. When the underlying genetic model is assumed to be dominant or recessive, assessing the phenotype differences based on summary statistics, reported for individual studies in a meta-analysis, is a valid strategy. However, when the genetic model is additive, a similar strategy based on summary statistics will lead to biased results. This fact about the additive model is one of the things that we establish in this paper, using simulations. The main goal of this paper is to present an alternate strategy for the additive model based on simulating data for the individual studies. We show that the alternate strategy is far superior to the strategy based on summary statistics.
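
    The proposed strategy for the additive model can be sketched as follows: simulate individual-level data for each study from its allele frequency and per-allele effect, re-estimate the effect by regressing phenotype on allele count (0/1/2), then pool the estimates by inverse-variance weighting. All numbers here are illustrative assumptions, not drawn from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

studies = [  # (sample size, risk-allele frequency, true per-allele effect)
    (300, 0.30, 0.25),
    (500, 0.25, 0.25),
    (200, 0.40, 0.25),
]

estimates, variances = [], []
for n, freq, beta in studies:
    g = rng.binomial(2, freq, n)          # genotype = allele count under HWE
    y = beta * g + rng.normal(0, 1, n)    # phenotype under the additive model
    gc = g - g.mean()
    b_hat = (gc @ y) / (gc @ gc)          # OLS slope (regression with intercept)
    resid = y - y.mean() - b_hat * gc
    var_b = (resid @ resid) / (n - 2) / (gc @ gc)
    estimates.append(b_hat)
    variances.append(var_b)

# Inverse-variance (fixed-effect) pooling of the per-study estimates.
w = 1 / np.array(variances)
pooled = float(np.sum(w * estimates) / w.sum())
se = float(np.sqrt(1 / w.sum()))
print(f"pooled per-allele effect = {pooled:.3f} (SE {se:.3f})")
```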

  10. Reciprocal Peer Assessment as a Learning Tool for Secondary School Students in Modeling-Based Learning

    ERIC Educational Resources Information Center

    Tsivitanidou, Olia E.; Constantinou, Costas P.; Labudde, Peter; Rönnebeck, Silke; Ropohl, Mathias

    2018-01-01

    The aim of this study was to investigate how reciprocal peer assessment in modeling-based learning can serve as a learning tool for secondary school learners in a physics course. The participants were 22 upper secondary school students from a gymnasium in Switzerland. They were asked to model additive and subtractive color mixing in groups of two,…

  11. Quality assessment of protein model-structures based on structural and functional similarities.

    PubMed

    Konopka, Bogumil M; Nebel, Jean-Christophe; Kotulska, Malgorzata

    2012-09-21

    Experimental determination of protein 3D structures is expensive, time consuming and sometimes impossible. The gap between the number of protein structures deposited in the Worldwide Protein Data Bank and the number of sequenced proteins is constantly broadening. Computational modeling is deemed to be one of the ways to deal with the problem. Although protein 3D structure prediction is a difficult task, many tools are available. These tools can model a structure from a sequence or from partial structural information, e.g. contact maps. Consequently, biologists have the ability to automatically generate a putative 3D structure model of any protein. However, the main issue becomes evaluation of the model quality, which is one of the most important challenges of structural biology. GOBA (Gene Ontology-Based Assessment) is a novel Protein Model Quality Assessment Program. It estimates the compatibility between a model-structure and its expected function. GOBA is based on the assumption that a high quality model is expected to be structurally similar to proteins functionally similar to the prediction target. Whereas DALI is used to measure structure similarity, protein functional similarity is quantified using standardized and hierarchical description of proteins provided by Gene Ontology combined with Wang's algorithm for calculating semantic similarity. Two approaches are proposed to express the quality of protein model-structures. One is a single-model quality assessment method; the other is its modification, which provides a relative measure of model quality. Exhaustive evaluation is performed on data sets of model-structures submitted to the CASP8 and CASP9 contests. The validation shows that the method is able to discriminate between good and bad model-structures. The best of tested GOBA scores achieved mean Pearson correlations of 0.74 and 0.8 with the observed quality of models in our CASP8- and CASP9-based validation sets. 
GOBA also obtained the best result for two targets of CASP8, and one of CASP9, compared to the contest participants. Consequently, GOBA offers a novel single model quality assessment program that addresses the practical needs of biologists. In conjunction with other Model Quality Assessment Programs (MQAPs), it would prove useful for the evaluation of single protein models.
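
    The validation metric reported above is a Pearson correlation between the quality score assigned to each model-structure and its observed quality. A minimal sketch with illustrative scores (the observed values would come from, e.g., GDT_TS against the native structure):

```python
import numpy as np

# Illustrative predicted vs observed quality for six model-structures.
predicted = np.array([0.61, 0.45, 0.78, 0.30, 0.52, 0.70])
observed = np.array([0.58, 0.40, 0.81, 0.35, 0.49, 0.66])

r = float(np.corrcoef(predicted, observed)[0, 1])
print(f"Pearson r = {r:.2f}")
```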

  12. Assessment of credit risk based on fuzzy relations

    NASA Astrophysics Data System (ADS)

    Tsabadze, Teimuraz

    2017-06-01

    The purpose of this paper is to develop a new approach for assessing the credit risk of corporate borrowers. Existing models for borrowers' risk assessment fall into two groups: statistical and theoretical. When assessing the credit risk of corporate borrowers, a statistical model is unacceptable due to the lack of a sufficiently large history of defaults. At the same time, some theoretical models cannot be used due to the lack of a stock exchange. In such cases, when a particular borrower is studied and no statistical base exists, the decision-making process is always of an expert nature. The paper describes a new approach that may be used in group decision-making. An example of the application of the proposed approach is given.

  13. A Structural Equation Model at the Individual and Group Level for Assessing Faking-Related Change

    ERIC Educational Resources Information Center

    Ferrando, Pere Joan; Anguiano-Carrasco, Cristina

    2011-01-01

    This article proposes a comprehensive approach based on structural equation modeling for assessing the amount of trait-level change derived from faking-motivating situations. The model is intended for a mixed 2-wave 2-group design, and assesses change at both the group and the individual level. Theoretically the model adopts an integrative…

  14. An assessment of the near-surface accuracy of the international geomagnetic reference field 1980 model of the main geomagnetic field

    USGS Publications Warehouse

    Peddie, N.W.; Zunde, A.K.

    1985-01-01

    The new International Geomagnetic Reference Field (IGRF) model of the main geomagnetic field for 1980 is based heavily on measurements from the MAGSAT satellite survey. Assessment of the accuracy of the new model, as a description of the main field near the Earth's surface, is important because the accuracy of models derived from satellite data can be adversely affected by the magnetic field of electric currents in the ionosphere and the auroral zones. Until now, statements about its accuracy have been based on the 6 published assessments of the 2 proposed models from which it was derived. However, those assessments were either regional in scope or were based mainly on preliminary or extrapolated data. Here we assess the near-surface accuracy of the new model by comparing it with values for 1980 derived from annual means from 69 magnetic observatories, and by comparing it with WC80, a model derived from near-surface data. The comparison with observatory-derived data shows that the new model describes the field at the 69 observatories about as accurately as would a model derived solely from near-surface data. The comparison with WC80 shows that the 2 models agree closely in their description of D and I near the surface. These comparisons support the proposition that the new IGRF 1980 main-field model is a generally accurate description of the main field near the Earth's surface in 1980. ?? 1985.

  15. How TK-TD and population models for aquatic macrophytes could support the risk assessment for plant protection products.

    PubMed

    Hommen, Udo; Schmitt, Walter; Heine, Simon; Brock, Theo Cm; Duquesne, Sabine; Manson, Phil; Meregalli, Giovanna; Ochoa-Acuña, Hugo; van Vliet, Peter; Arts, Gertie

    2016-01-01

    This case study of the Society of Environmental Toxicology and Chemistry (SETAC) workshop MODELINK demonstrates the potential use of mechanistic effects models for macrophytes to extrapolate from effects of a plant protection product observed in laboratory tests to effects resulting from dynamic exposure on macrophyte populations in edge-of-field water bodies. A standard European Union (EU) risk assessment for an example herbicide based on macrophyte laboratory tests indicated risks for several exposure scenarios. Three of these scenarios are further analyzed using effect models for 2 aquatic macrophytes, the free-floating standard test species Lemna sp., and the sediment-rooted submerged additional standard test species Myriophyllum spicatum. Both models include a toxicokinetic (TK) part, describing uptake and elimination of the toxicant, a toxicodynamic (TD) part, describing the internal concentration-response function for growth inhibition, and a description of biomass growth as a function of environmental factors to allow simulating seasonal dynamics. The TK-TD models are calibrated and tested using laboratory tests, whereas the growth models were assumed to be fit for purpose based on comparisons of predictions with typical growth patterns observed in the field. For the risk assessment, biomass dynamics are predicted for the control situation and for several exposure levels. Based on specific protection goals for macrophytes, preliminary example decision criteria are suggested for evaluating the model outputs. The models refined the risk indicated by lower tier testing for 2 exposure scenarios, while confirming the risk associated for the third. Uncertainties related to the experimental and the modeling approaches and their application in the risk assessment are discussed. 
Based on this case study and the assumption that the models prove suitable for risk assessment once fully evaluated, we recommend that 1) ecological scenarios be developed that are also linked to the exposure scenarios, and 2) quantitative protection goals be set to facilitate the interpretation of model results for risk assessment. © 2015 SETAC.
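
    A TK-TD model of the kind described can be sketched as a one-compartment toxicokinetic equation feeding a concentration-response growth-inhibition term. The parameters below are illustrative assumptions, not the workshop's calibrated Lemna or Myriophyllum models:

```python
import numpy as np

# Illustrative parameters (assumptions, not calibrated values).
k_in, k_out = 0.5, 0.2        # uptake / elimination rate constants (1/day)
ec50, slope = 10.0, 2.0       # internal EC50 and log-logistic slope
r_max = 0.15                  # unstressed relative growth rate (1/day)

dt, days = 0.1, 60
t = np.arange(0, days, dt)
c_ext = np.where(t < 20, 5.0, 0.0)   # 20-day exposure pulse, then clean water

c_int = np.zeros_like(t)             # internal concentration (TK state)
biomass = np.ones_like(t)            # macrophyte biomass (relative units)
for i in range(1, len(t)):
    # TK: dC_int/dt = k_in * C_ext - k_out * C_int (Euler step)
    c_int[i] = c_int[i-1] + dt * (k_in * c_ext[i-1] - k_out * c_int[i-1])
    # TD: fraction of unstressed growth retained at this internal concentration
    growth_frac = 1.0 / (1.0 + (c_int[i] / ec50) ** slope)
    biomass[i] = biomass[i-1] * np.exp(r_max * growth_frac * dt)

control = float(np.exp(r_max * days))
print(f"peak internal concentration: {c_int.max():.2f}")
print(f"biomass after {days} d: {biomass[-1]:.1f} (control: {control:.1f})")
```

    Comparing the exposed trajectory with the control run is exactly the kind of model output the decision criteria discussed above would be applied to.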

  16. Proposals for enhanced health risk assessment and stratification in an integrated care scenario

    PubMed Central

    Dueñas-Espín, Ivan; Vela, Emili; Pauws, Steffen; Bescos, Cristina; Cano, Isaac; Cleries, Montserrat; Contel, Joan Carles; de Manuel Keenoy, Esteban; Garcia-Aymerich, Judith; Gomez-Cabrero, David; Kaye, Rachelle; Lahr, Maarten M H; Lluch-Ariet, Magí; Moharra, Montserrat; Monterde, David; Mora, Joana; Nalin, Marco; Pavlickova, Andrea; Piera, Jordi; Ponce, Sara; Santaeugenia, Sebastià; Schonenberg, Helen; Störk, Stefan; Tegner, Jesper; Velickovski, Filip; Westerteicher, Christoph; Roca, Josep

    2016-01-01

    Objectives Population-based health risk assessment and stratification are considered highly relevant for large-scale implementation of integrated care by facilitating services design and case identification. The principal objective of the study was to analyse five health-risk assessment strategies and health indicators used in the five regions participating in the Advancing Care Coordination and Telehealth Deployment (ACT) programme (http://www.act-programme.eu). The second purpose was to elaborate on strategies toward enhanced health risk predictive modelling in the clinical scenario. Settings The five ACT regions: Scotland (UK), Basque Country (ES), Catalonia (ES), Lombardy (I) and Groningen (NL). Participants Responsible teams for regional data management in the five ACT regions. Primary and secondary outcome measures We characterised and compared risk assessment strategies among ACT regions by analysing operational health risk predictive modelling tools for population-based stratification, as well as available health indicators at regional level. The analysis of the risk assessment tool deployed in Catalonia in 2015 (GMAs, Adjusted Morbidity Groups) was used as a basis to propose how population-based analytics could contribute to clinical risk prediction. Results There was consensus on the need for a population health approach to generate health risk predictive modelling. However, this strategy was fully in place only in two ACT regions: Basque Country and Catalonia. We found marked differences among regions in health risk predictive modelling tools and health indicators, and identified key factors constraining their comparability. The research proposes means to overcome current limitations and the use of population-based health risk prediction for enhanced clinical risk assessment. 
Conclusions The results indicate the need for further efforts to improve both comparability and flexibility of current population-based health risk predictive modelling approaches. Applicability and impact of the proposals for enhanced clinical risk assessment require prospective evaluation. PMID:27084274

  17. Modeling Diagnostic Assessments with Bayesian Networks

    ERIC Educational Resources Information Center

    Almond, Russell G.; DiBello, Louis V.; Moulder, Brad; Zapata-Rivera, Juan-Diego

    2007-01-01

    This paper defines Bayesian network models and examines their applications to IRT-based cognitive diagnostic modeling. These models are especially suited to building inference engines designed to be synchronous with the finer grained student models that arise in skills diagnostic assessment. Aspects of the theory and use of Bayesian network models…

  18. Do repeated assessments of performance status improve predictions for risk of death among patients with cancer? A population-based cohort study.

    PubMed

    Su, Jiandong; Barbera, Lisa; Sutradhar, Rinku

    2015-06-01

    Prior work has utilized longitudinal information on performance status to demonstrate its association with risk of death among cancer patients; however, no study has assessed whether such longitudinal information improves the predictions for risk of death. Our aim was to examine whether the use of repeated performance status assessments improves predictions for risk of death compared to using only the performance status assessment at the time of cancer diagnosis. This was a population-based longitudinal study of adult outpatients who had a cancer diagnosis and at least one assessment of performance status. To account for each patient's changing performance status over time, we implemented a Cox model with a time-varying covariate for performance status. This model was compared to a Cox model using only a time-fixed (baseline) covariate for performance status. The regression coefficients of each model were derived from a randomly selected 60% of patients, and the predictive ability of each model was then assessed via concordance probabilities when applied to the remaining 40% of patients. Our study consisted of 15,487 cancer patients with over 53,000 performance status assessments. The use of repeated performance status assessments improved predictions for risk of death compared to using only the performance status assessment taken at diagnosis. When studying the hazard of death among patients with cancer, researchers should, where available, incorporate changing information on performance status scores instead of simply baseline information on performance status. © The Author(s) 2015.
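
    As an aside on method, the comparison above hinges on the concordance probability (C-index). Below is a minimal, pure-Python sketch of Harrell's concordance index (an illustration, not the authors' code); it scores how often the model ranks the earlier-dying patient as higher risk.

```python
def concordance_index(times, events, risk_scores):
    """Harrell's C-index: among comparable pairs (the earlier time must be
    an observed death, not a censoring), the fraction where the higher-risk
    patient dies first. Ties in risk score count as half-concordant."""
    concordant, ties, comparable = 0, 0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # pair (i, j) is comparable when i's death is observed before j's time
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    ties += 1
    return (concordant + 0.5 * ties) / comparable

# Perfect risk ranking (last patient censored) gives C = 1.0; random ranking hovers near 0.5.
c = concordance_index([2, 4, 6, 8], [1, 1, 1, 0], [3.1, 2.5, 1.7, 0.2])
```

    A time-varying Cox model can raise this statistic on held-out patients because updated performance status re-ranks risk as a patient's condition changes.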

  19. MQAPRank: improved global protein model quality assessment by learning-to-rank.

    PubMed

    Jing, Xiaoyang; Dong, Qiwen

    2017-05-25

    Protein structure prediction has made considerable progress over the last few decades, and a growing number of models can now be predicted for a given sequence. Consequently, assessing the relative quality of predicted protein models is one of the key components of successful protein structure prediction. Over the past years, a number of methods have been developed to address this issue; they can be roughly divided into three categories: single methods, quasi-single methods, and clustering (or consensus) methods. Although these methods have achieved success at different levels, accurate protein model quality assessment is still an open problem. Here, we present MQAPRank, a global protein model quality assessment program based on learning-to-rank. MQAPRank first sorts the decoy models using a single method based on a learning-to-rank algorithm, to indicate their relative qualities for the target protein. It then takes the first five models as references and predicts the qualities of the other models using the average GDT_TS scores between the reference models and the other models. Benchmarked on the CASP11 and 3DRobot datasets, MQAPRank achieved better performance than other leading protein model quality assessment methods. Recently, MQAPRank participated in CASP12 under the group name FDUBio and achieved state-of-the-art performance. MQAPRank provides a convenient and powerful tool for protein model quality assessment, and is useful for protein structure prediction and model quality assessment purposes.
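
    The two-step scheme described above (rank the decoys, then score every model against the top five as references) can be sketched as follows; the function name, the `n_refs` parameter, and the precomputed pairwise GDT_TS matrix are illustrative assumptions, not the published implementation.

```python
def reference_based_scores(rank_scores, pairwise_gdt, n_refs=5):
    """Step 1: sort decoys by a single-model (learning-to-rank) score and
    take the top n_refs as references. Step 2: predict each model's quality
    as its average GDT_TS similarity to the reference models."""
    order = sorted(range(len(rank_scores)), key=lambda i: rank_scores[i], reverse=True)
    refs = order[:n_refs]
    predicted = []
    for i in range(len(rank_scores)):
        others = [r for r in refs if r != i]  # a reference is not scored against itself
        predicted.append(sum(pairwise_gdt[i][r] for r in others) / len(others))
    return predicted
```

    The appeal of this design is that the expensive consensus comparison is needed only against five references rather than against every pair of decoys.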

  20. A GIS-based approach for comparative analysis of potential fire risk assessment

    NASA Astrophysics Data System (ADS)

    Sun, Ying; Hu, Lieqiu; Liu, Huiping

    2007-06-01

    Urban fires are one of the most important sources of property loss and human casualties, so it is necessary to assess potential fire risk with urban community safety in mind. Two evaluation models are proposed, both integrated with GIS. One is a single-factor model concerning the accessibility of fire passages; the other is a grey clustering approach based on a multifactor system. In the latter model, fourteen factors are introduced, divided into four categories: security management, evacuation facilities, construction resistance, and fire-fighting capability. A case study on the campus of Beijing Normal University is presented to illustrate the potential risk assessment models in detail. A comparative analysis of the two models is carried out to validate their accuracy; the results are approximately consistent with each other. Moreover, modeling with GIS improves the efficiency of potential risk assessment.

  1. Gaze-Based Assistive Technology - Usefulness in Clinical Assessments.

    PubMed

    Wandin, Helena

    2017-01-01

    Gaze-based assistive technology was used in informal clinical assessments. Excerpts of medical journals were analyzed by directed content analysis using a model of communicative competence. The results of this pilot study indicate that gaze-based assistive technology is a useful tool in communication assessments that can generate clinically relevant information.

  2. An Entropy-Based Measure for Assessing Fuzziness in Logistic Regression

    PubMed Central

    Weiss, Brandi A.; Dardick, William

    2015-01-01

    This article introduces an entropy-based measure of data–model fit that can be used to assess the quality of logistic regression models. Entropy has previously been used in mixture-modeling to quantify how well individuals are classified into latent classes. The current study proposes the use of entropy for logistic regression models to quantify the quality of classification and separation of group membership. Entropy complements preexisting measures of data–model fit and provides unique information not contained in other measures. Hypothetical data scenarios, an applied example, and Monte Carlo simulation results are used to demonstrate the application of entropy in logistic regression. Entropy should be used in conjunction with other measures of data–model fit to assess how well logistic regression models classify cases into observed categories. PMID:29795897
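
    The entropy measure is built from the fitted probabilities of the logistic model. The sketch below uses one common normalization (an assumption here, not necessarily the authors' exact rescaling): the average per-case binary entropy is subtracted from 1, so perfectly separated cases score 1 and maximally fuzzy ones score 0.

```python
import math

def entropy_fit(fitted_probs):
    """1 minus the average binary entropy (in bits) of the fitted
    probabilities: 1.0 for perfect separation (every p near 0 or 1),
    0.0 when every case is maximally ambiguous (p = 0.5)."""
    total = 0.0
    for p in fitted_probs:
        for q in (p, 1.0 - p):
            if q > 0.0:  # the limit of q*log2(q) as q -> 0 is 0
                total -= q * math.log2(q)
    return 1.0 - total / len(fitted_probs)
```

    Like the entropy statistic in mixture modeling, this value says nothing about calibration or accuracy on its own, which is why the authors recommend pairing it with other data-model fit measures.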

  4. Early Grade Writing Assessment: An Instrument Model

    ERIC Educational Resources Information Center

    Jiménez, Juan E.

    2017-01-01

    The United Nations Educational, Scientific, and Cultural Organization promoted the creation of a model instrument for individual assessment of students' foundational writing skills in the Spanish language that was based on a literature review and existing writing tools and assessments. The purpose of the "Early Grade Writing Assessment"…

  5. Item Response Theory for Peer Assessment

    ERIC Educational Resources Information Center

    Uto, Masaki; Ueno, Maomi

    2016-01-01

    As an assessment method based on a constructivist approach, peer assessment has become popular in recent years. However, in peer assessment, a problem remains that reliability depends on the rater characteristics. For this reason, some item response models that incorporate rater parameters have been proposed. Those models are expected to improve…

  6. Best Practices in Physics-Based Fault Rupture Models for Seismic Hazard Assessment of Nuclear Installations

    NASA Astrophysics Data System (ADS)

    Dalguer, Luis A.; Fukushima, Yoshimitsu; Irikura, Kojiro; Wu, Changjiang

    2017-09-01

    Inspired by the first workshop on Best Practices in Physics-Based Fault Rupture Models for Seismic Hazard Assessment of Nuclear Installations (BestPSHANI) conducted by the International Atomic Energy Agency (IAEA) on 18-20 November, 2015 in Vienna (http://www-pub.iaea.org/iaeameetings/50896/BestPSHANI), this PAGEOPH topical volume collects several extended articles from this workshop as well as several new contributions. A total of 17 papers have been selected on topics ranging from the seismological aspects of earthquake cycle simulations for source-scaling evaluation, seismic source characterization, source inversion and ground motion modeling (based on finite fault rupture using dynamic, kinematic, stochastic and empirical Green's functions approaches) to the engineering application of simulated ground motion for the analysis of seismic response of structures. These contributions include applications to real earthquakes and descriptions of current practice to assess seismic hazard in terms of nuclear safety in low-seismicity areas, as well as proposals for physics-based hazard assessment for critical structures near large earthquakes. Collectively, the papers of this volume highlight the usefulness of physics-based models to evaluate and understand the physical causes of observed and empirical data, as well as to predict ground motion beyond the range of recorded data. Particular importance is given to the validation and verification of the models by comparing synthetic results with observed data and empirical models.

  7. Source-term development for a contaminant plume for use by multimedia risk assessment models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whelan, Gene; McDonald, John P.; Taira, Randal Y.

    1999-12-01

    Multimedia modelers from the U.S. Environmental Protection Agency (EPA) and the U.S. Department of Energy (DOE) are collaborating to conduct a comprehensive and quantitative benchmarking analysis of four intermedia models: DOE's Multimedia Environmental Pollutant Assessment System (MEPAS), EPA's MMSOILS, EPA's PRESTO, and DOE's RESidual RADioactivity (RESRAD). These models represent typical analytically, semi-analytically, and empirically based tools that are utilized in human risk and endangerment assessments at installations containing radioactive and/or hazardous contaminants. Although the benchmarking exercise traditionally emphasizes the application and comparison of these models, the establishment of a Conceptual Site Model (CSM) should be viewed with equal importance. This paper reviews an approach for developing a CSM of an existing, real-world, Sr-90 plume at DOE's Hanford installation in Richland, Washington, for use in a multimedia-based benchmarking exercise between MEPAS, MMSOILS, PRESTO, and RESRAD. In an unconventional move for analytically based modeling, the benchmarking exercise will begin with the plume as the source of contamination. The source and release mechanism are developed and described within the context of performing a preliminary risk assessment utilizing these analytical models. By beginning with the plume as the source term, this paper reviews a typical process and procedure an analyst would follow in developing a CSM for use in a preliminary assessment using this class of analytical tool.

  8. A manufacturing quality assessment model based-on two stages interval type-2 fuzzy logic

    NASA Astrophysics Data System (ADS)

    Purnomo, Muhammad Ridwan Andi; Helmi Shintya Dewi, Intan

    2016-01-01

    This paper presents the development of an assessment model for manufacturing quality using Interval Type-2 Fuzzy Logic (IT2-FL). The proposed model is developed based on one of the building blocks of sustainable supply chain management (SSCM), the benefits of SCM, and focuses on quality. The model can be used to predict the quality level of the production chain in a company, and the quality of production in turn affects the quality of the product. In practice, the quality of production is unique to every type of production system; hence, expert opinion plays a major role in developing the assessment model. The model becomes more complicated when the data contain ambiguity and uncertainty, and in this study IT2-FL is used to model that ambiguity and uncertainty. A case study taken from a company in Yogyakarta shows that the proposed manufacturing quality assessment model works well in determining the quality level of production.

  9. Creating More Trauma-Informed Services for Children Using Assessment-Focused Tools

    ERIC Educational Resources Information Center

    Igelman, Robyn; Taylor, Nicole; Gilbert, Alicia; Ryan, Barbara; Steinberg, Alan; Wilson, Charles; Mann, Gail

    2007-01-01

    This article promotes integrating assessment and evidence-based practice in the treatment of traumatized children through a review of two newly developed trauma assessment tools: (1) the Child Welfare Trauma Referral Tool (CWT), and (2) Assessment-Based Treatment for Traumatized Children: A Trauma Assessment Pathway Model (TAP). These tools use…

  10. Protein single-model quality assessment by feature-based probability density functions.

    PubMed

    Cao, Renzhi; Cheng, Jianlin

    2016-04-04

    Protein quality assessment (QA) plays an important role in protein structure prediction. We developed a novel single-model quality assessment method, Qprob. Qprob calculates the absolute error of each protein feature value against the true quality scores (i.e. GDT-TS scores) of protein structural models, and uses these errors to estimate a probability density distribution for quality assessment. Qprob was blindly tested in the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11) as the MULTICOM-NOVEL server. The official CASP results show that Qprob ranks as one of the top single-model QA methods. In addition, Qprob contributes to our protein tertiary structure predictor MULTICOM, which is officially ranked 3rd out of 143 predictors. This performance shows that Qprob is good at assessing the quality of models of hard targets, and demonstrates that this new probability-density-based method is effective for protein single-model quality assessment and useful for protein structure prediction. The Qprob web server, where the software is freely available, is at: http://calla.rnet.missouri.edu/qprob/.

  11. ASSESSMENT OF TWO PHYSICALLY BASED WATERSHED MODELS BASED ON THEIR PERFORMANCES OF SIMULATING SEDIMENT MOVEMENT OVER SMALL WATERSHEDS

    EPA Science Inventory


    Abstract: Two physically based and deterministic models, CASC2-D and KINEROS are evaluated and compared for their performances on modeling sediment movement on a small agricultural watershed over several events. Each model has different conceptualization of a watershed. CASC...

  12. ASSESSMENT OF TWO PHYSICALLY-BASED WATERSHED MODELS BASED ON THEIR PERFORMANCES OF SIMULATING WATER AND SEDIMENT MOVEMENT

    EPA Science Inventory

    Two physically based watershed models, GSSHA and KINEROS-2 are evaluated and compared for their performances on modeling flow and sediment movement. Each model has a different watershed conceptualization. GSSHA divides the watershed into cells, and flow and sediments are routed t...

  13. Implementation of the nursing process in a health area: models and assessment structures used

    PubMed Central

    Huitzi-Egilegor, Joseba Xabier; Elorza-Puyadena, Maria Isabel; Urkia-Etxabe, Jose Maria; Asurabarrena-Iraola, Carmen

    2014-01-01

    OBJECTIVE: to analyze what nursing models and nursing assessment structures have been used in the implementation of the nursing process at the public and private centers in the health area Gipuzkoa (Basque Country). METHOD: a retrospective study was undertaken, based on the analysis of the nursing records used at the 158 centers studied. RESULTS: the Henderson model, Carpenito's bifocal structure, Gordon's assessment structure and the Resident Assessment Instrument Nursing Home 2.0 have been used as nursing models and assessment structures to implement the nursing process. At some centers, the selected model or assessment structure has varied over time. CONCLUSION: Henderson's model has been the most used to implement the nursing process. Furthermore, the trend is observed to complement or replace Henderson's model by nursing assessment structures. PMID:25493672

  14. Parameters for Pesticide QSAR and PBPK/PD Models to inform Human Risk Assessments

    EPA Science Inventory

    Physiologically-based pharmacokinetic and pharmacodynamic (PBPK/PD) modeling has emerged as an important computational approach supporting quantitative risk assessment of agrochemicals. However, before complete regulatory acceptance of this tool, an assessment of assets and liabi...

  15. Evaluating Curriculum-Based Measurement from a Behavioral Assessment Perspective

    ERIC Educational Resources Information Center

    Ardoin, Scott P.; Roof, Claire M.; Klubnick, Cynthia; Carfolite, Jessica

    2008-01-01

    Curriculum-based measurement Reading (CBM-R) is an assessment procedure used to evaluate students' relative performance compared to peers and to evaluate their growth in reading. Within the response to intervention (RtI) model, CBM-R data are plotted in time series fashion as a means modeling individual students' response to varying levels of…

  16. Biotechnology on the Battlefield: An Application of Agent-based Modelling for Emerging Technology Assessment

    DTIC Science & Technology

    2015-03-01

    Biotechnology on the Battlefield: An Application of Agent-based Modelling for Emerging Technology Assessment... wounds might be treatable using advanced biotechnologies to control haemorrhaging and reduce blood-loss until medical evacuation can be completed. This...

  17. Defining Strategic and Excellence Bases for the Development of Portuguese Higher Education

    ERIC Educational Resources Information Center

    Rosa, Maria Joao; Saraiva, Pedro M.; Diz, Henrique

    2005-01-01

    A self-assessment model was developed for the Portuguese higher education institutions (HEIs) which was based on an empirical study aiming at better understanding their strategic and quality management and innovation practices and tools and on the study of several quality assessment models developed both for HEIs and business organisations. From…

  18. Steel Alloy Hot Roll Simulations and Through-Thickness Variation Using Dislocation Density-Based Modeling

    NASA Astrophysics Data System (ADS)

    Jansen Van Rensburg, G. J.; Kok, S.; Wilke, D. N.

    2017-10-01

    Different roll pass reduction schedules have different effects on the through-thickness properties of hot-rolled metal slabs. In order to assess or improve a reduction schedule using the finite element method, a material model is required that captures the relevant deformation mechanisms and physics. The model should also report relevant field quantities to assess variations in material state through the thickness of a simulated rolled metal slab. In this paper, a dislocation density-based material model with recrystallization is presented and calibrated on the material response of a high-strength low-alloy steel. The model has the ability to replicate and predict material response to a fair degree thanks to the physically motivated mechanisms it is built on. An example study is also presented to illustrate the possible effect different reduction schedules could have on the through-thickness material state and the ability to assess these effects based on finite element simulations.

  19. V&V framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hills, Richard G.; Maniaci, David Charles; Naughton, Jonathan W.

    2015-09-01

    A Verification and Validation (V&V) framework is presented for the development and execution of coordinated modeling and experimental programs to assess the predictive capability of computational models of complex systems through focused, well-structured, and formal processes. The elements of the framework are based on established V&V methodology developed by various organizations including the Department of Energy, the National Aeronautics and Space Administration, the American Institute of Aeronautics and Astronautics, and the American Society of Mechanical Engineers. Four main topics are addressed: 1) program planning based on expert elicitation of the modeling physics requirements, 2) experimental design for model assessment, 3) uncertainty quantification for experimental observations and computational model simulations, and 4) assessment of the model predictive capability. The audience for this document includes program planners, modelers, experimentalists, V&V specialists, and customers of the modeling results.

  20. Computer-Based Resource Accounting Model for Automobile Technology Impact Assessment

    DOT National Transportation Integrated Search

    1976-10-01

    A computer-implemented resource accounting model has been developed for assessing resource impacts of future automobile technology options. The resources tracked are materials, energy, capital, and labor. The model has been used in support of the Int...

  1. Study on quantitative risk assessment model of the third party damage for natural gas pipelines based on fuzzy comprehensive assessment

    NASA Astrophysics Data System (ADS)

    Qiu, Zeyang; Liang, Wei; Wang, Xue; Lin, Yang; Zhang, Meng

    2017-05-01

    As an important part of the national energy supply system, natural gas transmission pipelines can cause serious environmental pollution and loss of life and property in the event of an accident. Third party damage is one of the most significant causes of natural gas pipeline system accidents, so it is very important to establish an effective quantitative risk assessment model of third party damage to reduce the number of gas pipeline operation accidents. Because third party damage accidents exhibit characteristics such as diversity, complexity, and uncertainty, this paper establishes a quantitative risk assessment model of third party damage based on the Analytic Hierarchy Process (AHP) and Fuzzy Comprehensive Evaluation (FCE). Firstly, the risk sources of third party damage are identified exactly; then the weights of the factors are determined via an improved AHP; finally, the importance of each factor is calculated by the fuzzy comprehensive evaluation model. The results show that the quantitative risk assessment model is suitable for third party damage to natural gas pipelines, and improvement measures can be put forward to avoid accidents based on the importance of each factor.
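
    The AHP-plus-FCE pipeline outlined above can be sketched in a few lines; the normalized-column approximation of the AHP weights and the weighted-average fuzzy operator are common textbook choices, assumed here rather than taken from the paper.

```python
def ahp_weights(pairwise):
    """Approximate AHP priority weights for n factors from an n-by-n
    pairwise comparison matrix: normalize each column, then average rows."""
    n = len(pairwise)
    col_sums = [sum(pairwise[i][j] for i in range(n)) for j in range(n)]
    return [sum(pairwise[i][j] / col_sums[j] for j in range(n)) / n
            for i in range(n)]

def fuzzy_comprehensive_evaluation(weights, membership):
    """B = W . R with the weighted-average operator, where membership[i][g]
    is the degree to which risk factor i belongs to risk grade g."""
    n_grades = len(membership[0])
    return [sum(w * row[g] for w, row in zip(weights, membership))
            for g in range(n_grades)]

# Two hypothetical factors (the first judged twice as important) and three risk grades.
w = ahp_weights([[1, 2], [0.5, 1]])
risk = fuzzy_comprehensive_evaluation(w, [[0.7, 0.2, 0.1],
                                          [0.1, 0.3, 0.6]])
```

    The grade with the largest membership in the result vector indicates the dominant risk level of the pipeline segment under evaluation.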

  2. Impact on house staff evaluation scores when changing from a Dreyfus- to a Milestone-based evaluation model: one internal medicine residency program's findings.

    PubMed

    Friedman, Karen A; Balwan, Sandy; Cacace, Frank; Katona, Kyle; Sunday, Suzanne; Chaudhry, Saima

    2014-01-01

    Purpose As graduate medical education (GME) moves into the Next Accreditation System (NAS), programs must take a critical look at their current models of evaluation and assess how well they align with reporting outcomes. Our objective was to assess the impact on house staff evaluation scores when transitioning from a Dreyfus-based model of evaluation to a Milestone-based model of evaluation. Milestones are a key component of the NAS. Method We analyzed all end-of-rotation evaluations of house staff completed by faculty for academic years 2010-2011 (pre-Dreyfus model) and 2011-2012 (post-Milestone model) in one large university-based internal medicine residency training program. Main measures included change in PGY-level average score and the slope, range, and separation of average scores across all six Accreditation Council for Graduate Medical Education (ACGME) competencies. Results Transitioning from a Dreyfus-based model to a Milestone-based model resulted in a larger separation in scores between our three post-graduate year classes, a steeper progression of scores in the PGY-1 class, wider use of the 5-point scale on our global end-of-rotation evaluation form, a downward shift in PGY-1 scores, and an upward shift in PGY-3 scores. Conclusions For faculty trained in both models of assessment, the Milestone-based model had greater discriminatory ability, as evidenced by the larger separation in scores for all classes, in particular the PGY-1 class.

  3. Climate-based archetypes for the environmental fate assessment of chemicals.

    PubMed

    Ciuffo, Biagio; Sala, Serenella

    2013-11-15

    Emissions of chemicals have been on the rise for years, and their impacts are greatly influenced by spatial differentiation. Chemicals are usually emitted locally but their impact can be felt both locally and globally, due to their chemical properties and persistence. The variability of environmental parameters in the emission compartment may affect the chemicals' fate and the exposure at different orders of magnitude. The assessment of the environmental fate of chemicals and the inherent spatial differentiation requires the use of multimedia models at various levels of complexity (from a simple box model to complex computational and high-spatial-resolution models). The objective of these models is to support ecological and human health risk assessment, by reducing the uncertainty of chemical impact assessments. The parameterisation of spatially resolved multimedia models is usually based on scenarios of evaluative environments, or on geographical resolutions related to administrative boundaries (e.g. countries/continents) or landscape areas (e.g. watersheds, eco-regions). The choice of the most appropriate scale and scenario is important from a management perspective, as a balance should be reached between a simplified approach and computationally intensive multimedia models. In this paper, which aims to go beyond the more traditional approach based on scale/resolution (cell, country, and basin), we propose and assess climate-based archetypes for the impact assessment of chemicals released in air. We define the archetypes based on the main drivers of spatial variability, which we systematically identify by adopting global sensitivity analysis techniques. A case study that uses the high resolution multimedia model MAPPE (Multimedia Assessment of Pollutant Pathways in the Environment) is presented. 
Results of the analysis showed that suitable archetypes should be both climate- and chemical-specific, as different chemicals (or groups of them) have different traits that influence their spatial variability. This hypothesis was tested by comparing the variability of the output of MAPPE for four different climatic zones on four different continents for four different chemicals (which represent different combinations of physical and chemical properties). Results showed the high suitability of climate-based archetypes in assessing the impacts of chemicals released in air. However, further research work is still necessary to test these findings. Copyright © 2013 Elsevier Ltd. All rights reserved.

  4. Development and exemplification of a model for Teacher Assessment in Primary Science

    NASA Astrophysics Data System (ADS)

    Davies, D. J.; Earle, S.; McMahon, K.; Howe, A.; Collier, C.

    2017-09-01

    The Teacher Assessment in Primary Science project is funded by the Primary Science Teaching Trust and based at Bath Spa University. The study aims to develop a whole-school model of valid, reliable and manageable teacher assessment to inform practice and make a positive impact on primary-aged children's learning in science. The model is based on a data-flow 'pyramid' (analogous to the flow of energy through an ecosystem), whereby the rich formative assessment evidence gathered in the classroom is summarised for monitoring, reporting and evaluation purposes [Nuffield Foundation. (2012). Developing policy, principles and practice in primary school science assessment. London: Nuffield Foundation]. Using a design-based research (DBR) methodology, the authors worked in collaboration with teachers from project schools and other expert groups to refine, elaborate, validate and operationalise the data-flow 'pyramid' model, resulting in the development of a whole-school self-evaluation tool. In this paper, we argue that a DBR approach to theory-building and school improvement drawing upon teacher expertise has led to the identification, adaptation and successful scaling up of a promising approach to school self-evaluation in relation to assessment in science.

  5. Using concept maps to describe undergraduate students’ mental model in microbiology course

    NASA Astrophysics Data System (ADS)

    Hamdiyati, Y.; Sudargo, F.; Redjeki, S.; Fitriani, A.

    2018-05-01

    The purpose of this research was to describe students' mental models in a mental-model-based microbiology course, using concept maps as the assessment tool. Respondents were 5th-semester undergraduate students in Biology Education at Universitas Pendidikan Indonesia. Concept maps served as the mental-modelling instrument, and data were collected on the Bacteria sub-subject. A concept map rubric was subsequently developed with a maximum score of 4. Quantitative data were converted into qualitative mental model levels, namely: emergent = score 1, transitional = score 2, close to extended = score 3, and extended = score 4. The results showed that the mental model level on the Bacteria sub-subject before the implementation of the mental-model-based microbiology course was at the transitional level. After implementation, mental models were at the transitional, close-to-extended, and extended levels. This indicates an increase in the level of students' mental models after the implementation of the mental-model-based microbiology course using concept maps as the assessment tool.

  6. Case Study of a Computer Based Examination System

    ERIC Educational Resources Information Center

    Fluck, Andrew; Pullen, Darren; Harper, Colleen

    2009-01-01

    Electronic supported assessment or e-Assessment is a field of growing importance, but it has yet to make a significant impact in the Australian higher education sector (Byrnes & Ellis, 2006). Current computer based assessment models focus on the assessment of knowledge rather than deeper understandings, using multiple choice type questions,…

  7. AUTOMATED GEOSPATIAL WATERSHED ASSESSMENT: A GIS-BASED HYDROLOGIC MODELING TOOL

    EPA Science Inventory

    Planning and assessment in land and water resource management are evolving toward complex, spatially explicit regional assessments. These problems have to be addressed with distributed models that can compute runoff and erosion at different spatial and temporal scales. The extens...

  8. Combining correlative and mechanistic habitat suitability models to improve ecological compensation.

    PubMed

    Meineri, Eric; Deville, Anne-Sophie; Grémillet, David; Gauthier-Clerc, Michel; Béchet, Arnaud

    2015-02-01

    Only a few studies have shown positive impacts of ecological compensation on species dynamics affected by human activities. We argue that this is due to inappropriate methods used to forecast required compensation in environmental impact assessments. These assessments are mostly descriptive and only valid at limited spatial and temporal scales. However, habitat suitability models developed to predict the impacts of environmental changes on potential species' distributions should provide rigorous science-based tools for compensation planning. Here we describe the two main classes of predictive models: correlative models and individual-based mechanistic models. We show how these models can be used alone or synoptically to improve compensation planning. While correlative models are easier to implement, they tend to ignore underlying ecological processes and lack accuracy. On the contrary, individual-based mechanistic models can integrate biological interactions, dispersal ability and adaptation. Moreover, among mechanistic models, those considering animal energy balance are particularly efficient at predicting the impact of foraging habitat loss. However, mechanistic models require more field data compared to correlative models. Hence we present two approaches which combine both methods for compensation planning, especially in relation to the spatial scale considered. We show how the availability of biological databases and software enabling fast and accurate population projections could be advantageously used to assess ecological compensation requirement efficiently in environmental impact assessments. © 2014 The Authors. Biological Reviews © 2014 Cambridge Philosophical Society.

  9. A Two-Stage Multi-Agent Based Assessment Approach to Enhance Students' Learning Motivation through Negotiated Skills Assessment

    ERIC Educational Resources Information Center

    Chadli, Abdelhafid; Bendella, Fatima; Tranvouez, Erwan

    2015-01-01

    In this paper we present an Agent-based evaluation approach in a context of Multi-agent simulation learning systems. Our evaluation model is based on a two stage assessment approach: (1) a Distributed skill evaluation combining agents and fuzzy sets theory; and (2) a Negotiation based evaluation of students' performance during a training…

  10. The AgMIP Coordinated Global and Regional Assessments (CGRA) of Climate Change Impacts on Agriculture and Food Security

    NASA Technical Reports Server (NTRS)

    Ruane, Alex; Rosenzweig, Cynthia; Elliott, Joshua; Antle, John

    2015-01-01

    The Agricultural Model Intercomparison and Improvement Project (AgMIP) has been working since 2010 to construct a protocol-based framework enabling regional assessments (led by regional experts and modelers) that can provide consistent inputs to global economic and integrated assessment models. These global models can then relay important global-level information that drives regional decision-making and outcomes throughout an interconnected agricultural system. AgMIP's community of nearly 800 climate, crop, livestock, economics, and IT experts has improved the state-of-the-art through model intercomparisons, validation exercises, regional integrated assessments, and the launch of AgMIP programs on all six arable continents. AgMIP is now launching Coordinated Global and Regional Assessments (CGRA) of climate change impacts on agriculture and food security to link global and regional crop and economic models using a protocol-based framework. The CGRA protocols are being developed to utilize historical observations, climate projections, and RCPs/SSPs from CMIP5 (and potentially CMIP6), and will examine stakeholder-driven agricultural development and adaptation scenarios to provide cutting-edge assessments of climate change's impact on agriculture and food security. These protocols will build on the foundation of established protocols from AgMIP's 30+ activities, and will emphasize the use of multiple models, scenarios, and scales to enable an accurate assessment of related uncertainties. The CGRA is also designed to provide the outputs necessary to feed into integrated assessment models (IAMs), nutrition and food security assessments, nitrogen and carbon cycle models, and additional impact-sector assessments (e.g., water resources, land-use, biomes, urban areas). 
This presentation will describe the current status of CGRA planning and initial prototype experiments to demonstrate key aspects of the protocols before wider implementation ahead of the IPCC Sixth Assessment Report.

  11. The AgMIP Coordinated Global and Regional Assessments (CGRA) of Climate Change Impacts on Agriculture and Food Security

    NASA Astrophysics Data System (ADS)

    Ruane, A. C.; Rosenzweig, C.; Antle, J. M.; Elliott, J. W.

    2015-12-01

    The Agricultural Model Intercomparison and Improvement Project (AgMIP) has been working since 2010 to construct a protocol-based framework enabling regional assessments (led by regional experts and modelers) that can provide consistent inputs to global economic and integrated assessment models. These global models can then relay important global-level information that drives regional decision-making and outcomes throughout an interconnected agricultural system. AgMIP's community of nearly 800 climate, crop, livestock, economics, and IT experts has improved the state-of-the-art through model intercomparisons, validation exercises, regional integrated assessments, and the launch of AgMIP programs on all six arable continents. AgMIP is now launching Coordinated Global and Regional Assessments (CGRA) of climate change impacts on agriculture and food security to link global and regional crop and economic models using a protocol-based framework. The CGRA protocols are being developed to utilize historical observations, climate projections, and RCPs/SSPs from CMIP5 (and potentially CMIP6), and will examine stakeholder-driven agricultural development and adaptation scenarios to provide cutting-edge assessments of climate change's impact on agriculture and food security. These protocols will build on the foundation of established protocols from AgMIP's 30+ activities, and will emphasize the use of multiple models, scenarios, and scales to enable an accurate assessment of related uncertainties. The CGRA is also designed to provide the outputs necessary to feed into integrated assessment models (IAMs), nutrition and food security assessments, nitrogen and carbon cycle models, and additional impact-sector assessments (e.g., water resources, land-use, biomes, urban areas). 
This presentation will describe the current status of CGRA planning and initial prototype experiments to demonstrate key aspects of the protocols before wider implementation ahead of the IPCC Sixth Assessment Report.

  12. The Compass Rose Effectiveness Model

    ERIC Educational Resources Information Center

    Spiers, Cynthia E.; Kiel, Dorothy; Hohenrink, Brad

    2008-01-01

    The effectiveness model focuses the institution on mission achievement through assessment and improvement planning. Eleven mission criteria, measured by key performance indicators, are aligned with the accountability interest of internal and external stakeholders. A Web-based performance assessment application supports the model, documenting the…

  13. Uncertainties in Integrated Climate Change Impact Assessments by Sub-setting GCMs Based on Annual as well as Crop Growing Period under Rice Based Farming System of Indo-Gangetic Plains of India

    NASA Astrophysics Data System (ADS)

    Pillai, S. N.; Singh, H.; Panwar, A. S.; Meena, M. S.; Singh, S. V.; Singh, B.; Paudel, G. P.; Baigorria, G. A.; Ruane, A. C.; McDermid, S.; Boote, K. J.; Porter, C.; Valdivia, R. O.

    2016-12-01

    Integrated assessment of climate change impacts on agricultural productivity is a challenge to the scientific community because of uncertainties in the input data, particularly the climate, soil, crop calibration and socio-economic datasets. The uncertainty arising from the selection of GCMs, however, is the major source, owing to the complex underlying processes involved in the initial and boundary conditions used to resolve air-sea interactions. Under the Agricultural Model Intercomparison and Improvement Project (AgMIP), the Indo-Gangetic Plains Regional Research Team investigated the uncertainties caused by GCM selection through sub-setting based on the annual climate as well as the crop-growing period of rice-wheat systems within the AgMIP Integrated Assessment methodology. The AgMIP Phase II protocols were used to link climate, crop and economic models for two study sites, Meerut and Karnal, to analyse the sensitivity of current production systems to climate change. Climate change projections were made using 29 CMIP5 GCMs under RCP4.5 and RCP8.5 for the mid-century period (2040-2069). Two crop models (APSIM and DSSAT) were used, and the TOA-MD economic model was used for the integrated assessment. Based on Representative Agricultural Pathways (RAPs), parameters that cannot be obtained through modeling were derived from the literature and from interactions with stakeholders, and incorporated into the TOA-MD model for the integrated assessment.
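
As an illustration of the sub-setting idea, GCMs can be grouped by their projected temperature and precipitation changes relative to the ensemble median. The sketch below is a simplified, hypothetical version of this step; the actual AgMIP protocol uses percentile bands and the climate of the crop-growing period, and the GCM names and deltas here are invented:

```python
import statistics

def classify_gcms(deltas):
    """Group GCM projections into representative classes by change in
    temperature (dT, deg C) and precipitation (dP, %) relative to the
    ensemble median. A simplistic median split, not the real protocol."""
    med_t = statistics.median(dt for dt, _ in deltas.values())
    med_p = statistics.median(dp for _, dp in deltas.values())
    classes = {}
    for name, (dt, dp) in deltas.items():
        if dt == med_t and dp == med_p:
            classes[name] = "middle"  # near the ensemble centre
        else:
            classes[name] = ("hot" if dt > med_t else "cool") + "/" + \
                            ("wet" if dp > med_p else "dry")
    return classes

# Toy ensemble of three hypothetical GCM deltas for 2040-2069.
subset = classify_gcms({"GCM-A": (1.0, -5.0),
                        "GCM-B": (3.0, 10.0),
                        "GCM-C": (2.0, 2.0)})
```

Running crop models only for one GCM per class is what makes sub-setting attractive: the spread of the full ensemble is approximated at a fraction of the computational cost.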

  14. Spatially explicit risk assessment of an estuarine fish in Barataria Bay, Louisiana, following the Deepwater Horizon Oil spill: evaluating tradeoffs in model complexity and parsimony

    EPA Science Inventory

    As ecological risk assessments (ERA) move beyond organism-based determinations towards probabilistic population-level assessments, model complexity must be evaluated against the goals of the assessment, the information available to parameterize components with minimal dependence ...

  15. Development and Validation of a Consumer Quality Assessment Instrument for Dentistry.

    ERIC Educational Resources Information Center

    Johnson, Jeffrey D.; And Others

    1990-01-01

    This paper reviews the literature on consumer involvement in dental quality assessment, argues for inclusion of this information in quality assessment measures, outlines a conceptual model for measuring dental consumer quality assessment, and presents data relating to the development and validation of an instrument based on the conceptual model.…

  16. QTIMaps: A Model to Enable Web Maps in Assessment

    ERIC Educational Resources Information Center

    Navarrete, Toni; Santos, Patricia; Hernandez-Leo, Davinia; Blat, Josep

    2011-01-01

    Test-based e-Assessment approaches are mostly focused on the assessment of knowledge and not on that of other skills, which could be supported by multimedia interactive services. This paper presents the QTIMaps model, which combines the IMS QTI standard with web maps services enabling the computational assessment of geographical skills. We…

  17. GLIMPSE: An integrated assessment model-based tool for coordinated energy and environmental planning

    EPA Science Inventory

    Dan Loughlin will describe the GCAM-USA integrated assessment model and how that model is being improved and integrated into the GLIMPSE decision support system. He will also demonstrate the application of the model to evaluate the emissions and health implications of hypothetica...

  18. Capturing ecology in modeling approaches applied to environmental risk assessment of endocrine active chemicals in fish.

    PubMed

    Mintram, Kate S; Brown, A Ross; Maynard, Samuel K; Thorbek, Pernille; Tyler, Charles R

    2018-02-01

    Endocrine active chemicals (EACs) are widespread in freshwater environments and both laboratory and field based studies have shown reproductive effects in fish at environmentally relevant exposures. Environmental risk assessment (ERA) seeks to protect wildlife populations and prospective assessments rely on extrapolation from individual-level effects established for laboratory fish species to populations of wild fish using arbitrary safety factors. Population susceptibility to chemical effects, however, depends on exposure risk, physiological susceptibility, and population resilience, each of which can differ widely between fish species. Population models have significant potential to address these shortfalls and to include individual variability relating to life-history traits, demographic and density-dependent vital rates, and behaviors which arise from inter-organism and organism-environment interactions. Confidence in population models has recently resulted in the EU Commission stating that results derived from reliable models may be considered when assessing the relevance of adverse effects of EACs at the population level. This review critically assesses the potential risks posed by EACs for fish populations, considers the ecological factors influencing these risks and explores the benefits and challenges of applying population modeling (including individual-based modeling) in ERA for EACs in fish. We conclude that population modeling offers a way forward for incorporating greater environmental relevance in assessing the risks of EACs for fishes and for identifying key risk factors through sensitivity analysis. Individual-based models (IBMs) allow for the incorporation of physiological and behavioral endpoints relevant to EAC exposure effects, thus capturing both direct and indirect population-level effects.

  19. Sidney Blatt's Contributions to Personality Assessment.

    PubMed

    Auerbach, John S

    2016-01-01

    Over a long, distinguished career, Sidney Blatt contributed to theory and research in personality development, personality assessment, and psychotherapy. Best known for his 2-configurations model of personality and author or co-author of more than 250 articles and 18 books and monographs, Blatt was also a master clinician, a psychoanalyst who was awarded the 1989 Bruno J. Klopfer Award by the Society for Personality Assessment (SPA) for his contributions to both self-report and performance-based assessment. He was also the president of SPA from 1984 to 1986. This special series contains papers by writers who participated in all aspects of Blatt's contributions to personality assessment, both self-report and performance-based. Topics covered include Blatt's 2-configurations model of personality, development, and psychopathology; boundary disturbance and psychosis in performance-based assessment; the interaction of gender and personality on narrative assessments; and the Object Relations Inventory and differentiation relatedness, especially as these relate to therapeutic outcome.

  20. Racism and Psychological and Emotional Injury: Recognizing and Assessing Race-Based Traumatic Stress

    ERIC Educational Resources Information Center

    Carter, Robert T.

    2007-01-01

    The purpose of this article is to discuss the psychological and emotional effects of racism on people of Color. Psychological models and research on racism, discrimination, stress, and trauma will be integrated to promote a model to be used to understand, recognize, and assess race-based traumatic stress to aid counseling and psychological…

  1. Assessment of Differential Item Functioning in Testlet-Based Items Using the Rasch Testlet Model

    ERIC Educational Resources Information Center

    Wang, Wen-Chung; Wilson, Mark

    2005-01-01

    This study presents a procedure for detecting differential item functioning (DIF) for dichotomous and polytomous items in testlet-based tests, whereby DIF is taken into account by adding DIF parameters into the Rasch testlet model. Simulations were conducted to assess recovery of the DIF and other parameters. Two independent variables, test type…
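
The procedure's central idea, adding a DIF parameter to the Rasch testlet model, can be sketched as an additive shift in item difficulty for the focal group. The function name and all values below are illustrative, not the study's actual parameterization:

```python
import math

def rasch_testlet_prob(theta, b, gamma_testlet=0.0, dif=0.0, focal=False):
    """Probability of a correct response under a dichotomous Rasch testlet
    model. theta: person ability; b: item difficulty; gamma_testlet:
    person-specific testlet effect; dif: extra difficulty experienced by
    the focal group (the DIF parameter added to the model)."""
    logit = theta + gamma_testlet - b - (dif if focal else 0.0)
    return 1.0 / (1.0 + math.exp(-logit))

# With dif > 0 the focal group has a lower success probability at equal ability.
p_ref = rasch_testlet_prob(theta=0.5, b=0.0)
p_focal = rasch_testlet_prob(theta=0.5, b=0.0, dif=0.4, focal=True)
```

Estimating `dif` jointly with the other parameters, and testing whether it differs from zero, is what detects DIF while the testlet effect absorbs local dependence among items sharing a stimulus.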

  2. Relationships between migration rates and landscape resistance assessed using individual-based simulations

    Treesearch

    E. L. Landguth; S. A. Cushman; M. A. Murphy; G. Luikart

    2010-01-01

    Linking landscape effects on gene flow to processes such as dispersal and mating is essential to provide a conceptual foundation for landscape genetics. It is particularly important to determine how classical population genetic models relate to recent individual-based landscape genetic models when assessing individual movement and its influence on population genetic...

  3. Improving vulnerability models: lessons learned from a comparison between flood and earthquake assessments

    NASA Astrophysics Data System (ADS)

    de Ruiter, Marleen; Ward, Philip; Daniell, James; Aerts, Jeroen

    2017-04-01

    In a cross-discipline study, an extensive literature review was conducted to increase the understanding of vulnerability indicators used in both earthquake and flood vulnerability assessments, and to provide insights into potential improvements of both. The study identifies and compares indicators used to quantitatively assess earthquake and flood vulnerability, and discusses their respective differences and similarities. Indicators are categorized into physical and social categories, and further subdivided, where possible, into measurable and comparable indicators. Physical vulnerability indicators are differentiated by exposed asset, such as buildings and infrastructure; social indicators are grouped into subcategories such as demographics, economics and awareness. Next, two vulnerability model types that use these indicators are described: index-based and curve-based vulnerability models. A selection of these models (e.g. HAZUS) is described and compared on several characteristics, such as temporal and spatial aspects. It appears that earthquake vulnerability methods are traditionally strongly oriented toward physical attributes at the object scale and are used in vulnerability curve models, whereas flood vulnerability studies focus more on indicators applied at aggregated land-use scales. Flood risk studies could be improved using approaches from earthquake studies, such as incorporating more detailed lifeline and building indicators, and developing object-based vulnerability curve assessments of physical vulnerability, for example by defining building-material-based flood vulnerability curves. Related to this is the incorporation of time-of-day-based building occupation patterns (at 2 a.m. most people will be at home, while at 2 p.m. most will be at the office). 
Earthquake assessments could learn from flood studies when it comes to the refined selection of social vulnerability indicators. Based on the lessons obtained in this study, we recommend that future studies further explore cross-hazard approaches.
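
A building-material-based flood vulnerability curve of the kind suggested above can be sketched as a piecewise-linear depth-damage function. The curve points below are invented for illustration, not calibrated values:

```python
def damage_fraction(depth_m, curve):
    """Piecewise-linear depth-damage curve. curve is a sorted list of
    (flood depth in m, damage fraction in [0, 1]) points; depths between
    points are linearly interpolated."""
    if depth_m <= curve[0][0]:
        return curve[0][1]
    if depth_m >= curve[-1][0]:
        return curve[-1][1]
    for (d0, f0), (d1, f1) in zip(curve, curve[1:]):
        if d0 <= depth_m <= d1:
            return f0 + (f1 - f0) * (depth_m - d0) / (d1 - d0)

# Hypothetical curves for two building materials (values are made up):
# a timber building is assumed to damage faster than a masonry one.
masonry = [(0.0, 0.0), (1.0, 0.2), (3.0, 0.6), (6.0, 1.0)]
timber  = [(0.0, 0.0), (1.0, 0.4), (3.0, 0.9), (6.0, 1.0)]
```

Object-scale curves like these are the flood analogue of the fragility curves long used in earthquake engineering; the assessment then multiplies the damage fraction by each building's replacement value.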

  4. Evaluation model of wind energy resources and utilization efficiency of wind farm

    NASA Astrophysics Data System (ADS)

    Ma, Jie

    2018-04-01

    Because large amounts of wind power are curtailed (abandoned) at wind farms, establishing a wind farm evaluation model is particularly important for their future development. In this paper, considering a wind farm's wind energy conditions, a Wind Energy Resource Model (WERM) and a Wind Energy Utilization Efficiency Model (WEUEM) are established to conduct a comprehensive assessment of the wind farm. The WERM combines average wind speed, average wind power density and turbulence intensity to assess wind energy resources. The models were applied to actual measurement data from a wind farm; the calculated indicators are consistent with the observed situation, and the results can inform planning of the wind farm's future development. The proposed approach to establishing a wind farm assessment model therefore has practical application value.
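
The three WERM indicators can be sketched directly from a wind speed series. The function name and the constant air density below are assumptions for illustration, not the paper's implementation:

```python
import statistics

AIR_DENSITY = 1.225  # kg/m^3, standard value at sea level and 15 deg C

def wind_energy_indicators(speeds_ms):
    """Compute the three resource indicators from a series of wind
    speed measurements in m/s."""
    mean_speed = statistics.fmean(speeds_ms)
    # Wind power density (W/m^2) uses the mean of v^3, not the cube of the
    # mean, because power scales with the cube of instantaneous speed.
    power_density = 0.5 * AIR_DENSITY * statistics.fmean(v ** 3 for v in speeds_ms)
    # Turbulence intensity: standard deviation relative to the mean speed.
    turbulence_intensity = statistics.pstdev(speeds_ms) / mean_speed
    return mean_speed, power_density, turbulence_intensity

mean_v, pd, ti = wind_energy_indicators([5.0, 6.0, 7.0, 8.0])
```

Averaging v^3 rather than cubing the average is the key detail: for the toy series above it yields about 183 W/m^2, noticeably more than the roughly 168 W/m^2 the cubed mean would give.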

  5. Advancing Consumer Product Composition and Chemical ...

    EPA Pesticide Factsheets

    This presentation describes EPA efforts to collect, model, and measure publicly available consumer product data for use in exposure assessment. The development of the ORD Chemicals and Products database will be described, as will machine-learning-based models for predicting chemical function. Finally, the talk describes new mass-spectrometry-based methods for measuring chemicals in formulations and articles. This presentation is an invited talk to the ICCA-LRI workshop "Fit-For-Purpose Exposure Assessments For Risk-Based Decision Making". The talk will share EPA efforts to characterize the components of consumer products for use in exposure assessment with the international exposure science community.

  6. APPROACHES FOR INCORPORATING NON-CHEMICAL STRESSORS INTO CUMULATIVE RISK ASSESSMENTS

    EPA Science Inventory

    Over the past twenty years, the risk assessment paradigm has gradually shifted from an individual chemical approach to a community-based model. Inherent in community-based risk assessment is consideration of the totality of stressors affecting a defined population including both ...

  7. Modeling Instruction in AP Physics C: Mechanics and Electricity and Magnetism

    NASA Astrophysics Data System (ADS)

    Belcher, Nathan Tillman

    This action research study used data from multiple assessments in Mechanics and Electricity and Magnetism to determine the viability of Modeling Instruction as a pedagogy for students in AP Physics C: Mechanics and Electricity and Magnetism. Modeling Instruction is a guided-inquiry approach to teaching science in which students progress through the Modeling Cycle to develop a fully-constructed model for a scientific concept. AP Physics C: Mechanics and Electricity and Magnetism are calculus-based physics courses, approximately equivalent to first-year calculus-based physics courses at the collegiate level. Using a one-group pretest-posttest design, students were assessed in Mechanics using the Force Concept Inventory, Mechanics Baseline Test, and 2015 AP Physics C: Mechanics Practice Exam. With the same design, students were assessed in Electricity and Magnetism on the Brief Electricity and Magnetism Assessment, Electricity and Magnetism Conceptual Assessment, and 2015 AP Physics C: Electricity and Magnetism Practice Exam. In a one-shot case study design, student scores were collected from the 2017 AP Physics C: Mechanics and Electricity and Magnetism Exams. Students performed moderately well on the assessments in Mechanics and Electricity and Magnetism, demonstrating that Modeling Instruction is a viable pedagogy in AP Physics C: Electricity and Magnetism.

  8. Lecturers, Students and Community Members Sharing the Responsibility of Assessing Project-Based Poster Presentations

    ERIC Educational Resources Information Center

    Beylefeld, A. A.; Joubert, G.; Jama, M. P.; de Klerk, B.

    2003-01-01

    Active participation in the process of learning rather than transmission of information is prominent in modern higher education contexts. In alignment with this trend, traditional modes of assessment, based on the "transmission model", are increasingly replaced or supplemented by more authentic forms of assessment. Authentic assessment "measures"…

  9. Quality assessment of protein model-structures based on structural and functional similarities

    PubMed Central

    2012-01-01

    Background Experimental determination of protein 3D structures is expensive, time consuming and sometimes impossible. The gap between the number of protein structures deposited in the World Wide Protein Data Bank and the number of sequenced proteins constantly broadens, and computational modeling is deemed to be one of the ways to deal with this problem. Although protein 3D structure prediction is a difficult task, many tools are available that can model a structure from a sequence or from partial structural information, e.g. contact maps. Consequently, biologists can automatically generate a putative 3D structure model of any protein. The main issue, however, becomes the evaluation of model quality, which is one of the most important challenges of structural biology. Results GOBA (Gene Ontology-Based Assessment) is a novel Protein Model Quality Assessment Program. It estimates the compatibility between a model-structure and its expected function. GOBA is based on the assumption that a high-quality model is expected to be structurally similar to proteins functionally similar to the prediction target. Whereas DALI is used to measure structural similarity, protein functional similarity is quantified using the standardized and hierarchical description of proteins provided by the Gene Ontology, combined with Wang's algorithm for calculating semantic similarity. Two approaches are proposed to express the quality of protein model-structures: one is a single-model quality assessment method, and the other is a modification of it that provides a relative measure of model quality. An exhaustive evaluation is performed on data sets of model-structures submitted to the CASP8 and CASP9 contests. Conclusions The validation shows that the method is able to discriminate between good and bad model-structures. The best of the tested GOBA scores achieved mean Pearson correlations of 0.74 and 0.8 to the observed quality of models in our CASP8- and CASP9-based validation sets. 
GOBA also obtained the best result for two targets of CASP8, and one of CASP9, compared to the contest participants. Consequently, GOBA offers a novel single model quality assessment program that addresses the practical needs of biologists. In conjunction with other Model Quality Assessment Programs (MQAPs), it would prove useful for the evaluation of single protein models. PMID:22998498
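
The reported evaluation statistic, a Pearson correlation between predicted and observed model quality, can be sketched as follows; the quality scores paired with GDT_TS values below are hypothetical numbers, not GOBA output:

```python
import math

def pearson(xs, ys):
    """Pearson correlation between predicted quality scores and observed
    quality (e.g. GDT_TS) over a set of model-structures."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy example: three predicted quality scores vs. observed GDT_TS.
r = pearson([0.6, 0.7, 0.8], [55.0, 62.0, 70.0])
```

A correlation near 1 means the assessment program ranks models almost exactly as the ground-truth quality measure does, which is the property CASP evaluations reward.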

  10. Fire risk in San Diego County, California: A weighted Bayesian model approach

    USGS Publications Warehouse

    Kolden, Crystal A.; Weigel, Timothy J.

    2007-01-01

    Fire risk models are widely utilized to mitigate wildfire hazards, but models are often based on expert opinions of less understood fire-ignition and spread processes. In this study, we used an empirically derived weights-of-evidence model to assess what factors produce fire ignitions east of San Diego, California. We created and validated a dynamic model of fire-ignition risk based on land characteristics and existing fire-ignition history data, and predicted ignition risk for a future urbanization scenario. We then combined our empirical ignition-risk model with a fuzzy fire behavior-risk model developed by wildfire experts to create a hybrid model of overall fire risk. We found that roads influence fire ignitions and that future growth will increase risk in new rural development areas. We conclude that empirically derived risk models and hybrid models offer an alternative method to assess current and future fire risk based on management actions.
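
The weights-of-evidence calculation underlying such models can be sketched for a single binary evidence layer. The road-proximity layer and all counts below are hypothetical, chosen only to show the arithmetic:

```python
import math

def weights_of_evidence(n_fire_near, n_fire_total, n_cells_near, n_cells_total):
    """Weights of evidence for a binary predictor B (e.g. 'cell is near a
    road', a hypothetical evidence layer) and event D (fire ignition).
    W+ > 0 means ignitions are over-represented where B is present."""
    p_b_given_d = n_fire_near / n_fire_total          # P(B | fire)
    p_b_given_not_d = (n_cells_near - n_fire_near) / (n_cells_total - n_fire_total)
    w_plus = math.log(p_b_given_d / p_b_given_not_d)
    w_minus = math.log((1 - p_b_given_d) / (1 - p_b_given_not_d))
    return w_plus, w_minus

# Toy counts: 80 of 100 ignitions fell within the 2000 of 10000 cells near roads.
wp, wm = weights_of_evidence(80, 100, 2000, 10000)
```

Summing the relevant weight from each evidence layer onto the prior log-odds of ignition, cell by cell, is what turns these per-layer weights into a spatial risk surface.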

  11. Using a data base management system for modelling SSME test history data

    NASA Technical Reports Server (NTRS)

    Abernethy, K.

    1985-01-01

    The usefulness of a data base management system (DBMS) for modelling historical test data for the complete series of static test firings for the Space Shuttle Main Engine (SSME) was assessed. From an analysis of user data base query requirements, it became clear that a relational DBMS that included a relationally complete query language would permit a model satisfying the query requirements. Representative models and sample queries are discussed. A list of environment-particular evaluation criteria for the desired DBMS was constructed; these criteria include requirements in the areas of user-interface complexity, program independence, flexibility, modifiability, and output capability. The evaluation process included the construction of several prototype data bases for user assessment. The systems studied, representing the three major DBMS conceptual models, were: MIRADS, a hierarchical system; DMS-1100, a CODASYL-based network system; ORACLE, a relational system; and DATATRIEVE, a relational-type system.
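
A minimal relational model of test-firing history and a representative query can be sketched with SQLite; the schema, column names, and data below are invented for illustration and are not the actual SSME records:

```python
import sqlite3

# In-memory relational model of static test firings (hypothetical schema).
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE test_firing (
    test_id       INTEGER PRIMARY KEY,
    engine_serial TEXT,
    duration_s    REAL,
    outcome       TEXT)""")
con.executemany(
    "INSERT INTO test_firing VALUES (?, ?, ?, ?)",
    [(1, "2004", 520.0, "nominal"),
     (2, "2004", 30.5, "cutoff"),
     (3, "2011", 520.0, "nominal")])

# A representative ad hoc query: total accumulated firing time per engine.
rows = con.execute(
    "SELECT engine_serial, SUM(duration_s) FROM test_firing "
    "GROUP BY engine_serial ORDER BY engine_serial").fetchall()
```

The point of the study's conclusion is visible here: an aggregate like this is a one-line declarative query in a relationally complete language, whereas a hierarchical or network DBMS would require navigating record sets in application code.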

  12. A 2-D process-based model for suspended sediment dynamics: A first step towards ecological modeling

    USGS Publications Warehouse

    Achete, F. M.; van der Wegen, M.; Roelvink, D.; Jaffe, B.

    2015-01-01

    In estuaries suspended sediment concentration (SSC) is one of the most important contributors to turbidity, which influences habitat conditions and ecological functions of the system. Sediment dynamics differ depending on sediment supply and hydrodynamic forcing conditions that vary over space and over time. A robust sediment transport model is a first step in developing a chain of models enabling simulations of contaminants, phytoplankton and habitat conditions. This work aims to determine turbidity levels in the complex-geometry delta of the San Francisco estuary using a process-based approach (Delft3D Flexible Mesh software). Our approach includes a detailed calibration against measured SSC levels, a sensitivity analysis on model parameters and the determination of a yearly sediment budget as well as an assessment of model results in terms of turbidity levels for a single year, water year (WY) 2011. Model results show that our process-based approach is a valuable tool in assessing sediment dynamics and their related ecological parameters over a range of spatial and temporal scales. The model may act as the base model for a chain of ecological models assessing the impact of climate change and management scenarios. Here we present a modeling approach that, with limited data, produces reliable predictions and can be useful for estuaries without a large amount of processes data.

  13. A 2-D process-based model for suspended sediment dynamics: a first step towards ecological modeling

    NASA Astrophysics Data System (ADS)

    Achete, F. M.; van der Wegen, M.; Roelvink, D.; Jaffe, B.

    2015-06-01

    In estuaries suspended sediment concentration (SSC) is one of the most important contributors to turbidity, which influences habitat conditions and ecological functions of the system. Sediment dynamics differ depending on sediment supply and hydrodynamic forcing conditions that vary over space and over time. A robust sediment transport model is a first step in developing a chain of models enabling simulations of contaminants, phytoplankton and habitat conditions. This work aims to determine turbidity levels in the complex-geometry delta of the San Francisco estuary using a process-based approach (Delft3D Flexible Mesh software). Our approach includes a detailed calibration against measured SSC levels, a sensitivity analysis on model parameters and the determination of a yearly sediment budget as well as an assessment of model results in terms of turbidity levels for a single year, water year (WY) 2011. Model results show that our process-based approach is a valuable tool in assessing sediment dynamics and their related ecological parameters over a range of spatial and temporal scales. The model may act as the base model for a chain of ecological models assessing the impact of climate change and management scenarios. Here we present a modeling approach that, with limited data, produces reliable predictions and can be useful for estuaries without a large amount of processes data.

  14. Cost-effectiveness of a National Telemedicine Diabetic Retinopathy Screening Program in Singapore.

    PubMed

    Nguyen, Hai V; Tan, Gavin Siew Wei; Tapp, Robyn Jennifer; Mital, Shweta; Ting, Daniel Shu Wei; Wong, Hon Tym; Tan, Colin S; Laude, Augustinus; Tai, E Shyong; Tan, Ngiap Chuan; Finkelstein, Eric A; Wong, Tien Yin; Lamoureux, Ecosse L

    2016-12-01

    To determine the incremental cost-effectiveness of a new telemedicine technician-based assessment relative to an existing model of family physician (FP)-based assessment of diabetic retinopathy (DR) in Singapore from the health system and societal perspectives. Model-based cost-effectiveness analysis of the Singapore Integrated Diabetic Retinopathy Program (SiDRP). A hypothetical cohort of patients aged 55 years with type 2 diabetes previously not screened for DR. The SiDRP is a new telemedicine-based DR screening program using trained technicians to assess retinal photographs. We compared the cost-effectiveness of SiDRP with the existing model in which FPs assess photographs. We developed a hybrid decision tree/Markov model to simulate the costs, effectiveness, and incremental cost-effectiveness ratio (ICER) of SiDRP relative to FP-based DR screening over a lifetime horizon. We estimated the costs from the health system and societal perspectives. Effectiveness was measured in terms of quality-adjusted life-years (QALYs). Robustness of results was assessed using deterministic and probabilistic sensitivity analyses. The main outcome measure was the ICER. From the societal perspective, which takes into account all costs and effects, the telemedicine-based DR screening model had significantly lower costs (total cost savings of S$173 per person) while generating similar QALYs compared with the physician-based model (i.e., 13.1 QALYs). From the health system perspective, which includes only direct medical costs, the cost savings are S$144 per person. By extrapolating these data to the approximately 170 000 patients with diabetes currently being screened yearly for DR in Singapore's primary care polyclinics, the present value of future cost savings associated with the telemedicine-based model is estimated to be S$29.4 million over a lifetime horizon. 
While generating similar health outcomes, the telemedicine-based DR screening using technicians in the primary care setting saves costs for Singapore compared with the FP model. Our data provide a strong economic rationale to expand the telemedicine-based DR screening program in Singapore and elsewhere. Copyright © 2016 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
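
The core comparison reduces to an incremental cost-effectiveness ratio. The sketch below echoes the abstract's S$173 per-person saving against an assumed (invented) baseline cost, and treats a cheaper, at-least-as-effective strategy as dominant, in which case no ratio is reported:

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio (cost per QALY gained).
    Returns None when the new strategy dominates (costs no more and is
    at least as effective), since no ratio is meaningful then."""
    d_cost = cost_new - cost_old
    d_qaly = qaly_new - qaly_old
    if d_cost <= 0 and d_qaly >= 0:
        return None  # dominant strategy
    return d_cost / d_qaly

# Telemedicine vs. FP screening: S$173 saved, ~13.1 QALYs either way.
# The S$10,000 baseline cost is an assumed figure for illustration.
result = icer(cost_new=10_000 - 173, qaly_new=13.1,
              cost_old=10_000, qaly_old=13.1)
```

Because the telemedicine arm is cheaper with equal QALYs, it dominates, which is exactly why the abstract reports cost savings rather than an ICER threshold comparison.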

  15. Metrics for Performance Evaluation of Patient Exercises during Physical Therapy.

    PubMed

    Vakanski, Aleksandar; Ferguson, Jake M; Lee, Stephen

    2017-06-01

    The article proposes a set of metrics for evaluation of patient performance in physical therapy exercises. A taxonomy is employed that classifies the metrics into quantitative and qualitative categories, based on the level of abstraction of the captured motion sequences. Further, the quantitative metrics are classified into model-less and model-based metrics, according to whether the evaluation employs the raw measurements of patient-performed motions or is based on a mathematical model of the motions. The reviewed metrics include root-mean-square distance, Kullback-Leibler divergence, log-likelihood, heuristic consistency, Fugl-Meyer Assessment, and similar measures. The metrics are evaluated for a set of five human motions captured with a Kinect sensor. The metrics can potentially be integrated into a system that employs machine learning for modelling and assessment of the consistency of patient performance in a home-based therapy setting. Automated performance evaluation can overcome the inherent subjectivity of human-performed therapy assessment, increase adherence to prescribed therapy plans, and reduce healthcare costs.
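
    As a rough illustration of the model-less quantitative metrics named above, the sketch below computes a root-mean-square distance and a histogram-based Kullback-Leibler divergence between two motion sequences; the function names and binning choices are our own, not the article's:

```python
import numpy as np
from scipy.stats import entropy

def rms_distance(ref, test):
    """Root-mean-square distance between two equal-length motion sequences."""
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    return float(np.sqrt(np.mean((ref - test) ** 2)))

def kl_divergence(p, q, bins=20):
    """KL divergence between histogram densities of two joint-angle series."""
    lo = min(np.min(p), np.min(q))
    hi = max(np.max(p), np.max(q))
    hp, _ = np.histogram(p, bins=bins, range=(lo, hi), density=True)
    hq, _ = np.histogram(q, bins=bins, range=(lo, hi), density=True)
    eps = 1e-10  # keep empty bins from producing log(0)
    return float(entropy(hp + eps, hq + eps))
```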

  16. United3D: a protein model quality assessment program that uses two consensus based methods.

    PubMed

    Terashi, Genki; Oosawa, Makoto; Nakamura, Yuuki; Kanou, Kazuhiko; Takeda-Shitaka, Mayuko

    2012-01-01

    In protein structure prediction, such as template-based modeling and free modeling (ab initio modeling), the step that assesses the quality of protein models is very important. We have developed a model quality assessment (QA) program, United3D, that uses an optimized clustering method and a simple Cα atom contact-based potential. United3D automatically estimates quality scores (Qscore) of predicted protein models that are highly correlated with the actual quality (GDT_TS). The performance of United3D was tested in the ninth Critical Assessment of protein Structure Prediction (CASP9) experiment. In CASP9, United3D showed the lowest average loss of GDT_TS (5.3) among the QA methods that participated. This result indicates that United3D was the best of the QA methods tested in CASP9 at identifying high-quality models among those predicted by CASP9 servers on 116 targets. United3D also produced a high average Pearson correlation coefficient (0.93) and an acceptable Kendall rank correlation coefficient (0.68) between the Qscore and GDT_TS. This performance was competitive with the other top-ranked QA methods tested in CASP9. These results indicate that United3D is a useful tool for selecting high-quality models from the many candidate structures provided by various modeling methods. United3D will improve the accuracy of protein structure prediction.
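
    The evaluation criteria mentioned above (Pearson and Kendall correlations between predicted and actual quality, and GDT_TS "loss" of the top-ranked model) are easy to reproduce; the scores below are invented for illustration and are not CASP9 data:

```python
import numpy as np
from scipy.stats import pearsonr, kendalltau

# Hypothetical quality estimates for six candidate models of one target
qscore = np.array([0.62, 0.48, 0.71, 0.55, 0.80, 0.43])   # predicted quality
gdt_ts = np.array([58.0, 45.5, 66.2, 51.0, 74.8, 40.1])   # actual quality

pearson_r, _ = pearsonr(qscore, gdt_ts)
tau, _ = kendalltau(qscore, gdt_ts)

# Loss: GDT_TS of the truly best model minus GDT_TS of the model
# the QA method ranked first (0 when the QA method picks the best one).
loss = gdt_ts.max() - gdt_ts[np.argmax(qscore)]
```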

  17. Stochastic output error vibration-based damage detection and assessment in structures under earthquake excitation

    NASA Astrophysics Data System (ADS)

    Sakellariou, J. S.; Fassois, S. D.

    2006-11-01

    A stochastic output error (OE) vibration-based methodology for damage detection and assessment (localization and quantification) in structures under earthquake excitation is introduced. The methodology is intended for assessing the state of a structure following potential damage occurrence by exploiting vibration signal measurements produced by low-level earthquake excitations. It is based upon (a) stochastic OE model identification, (b) statistical hypothesis testing procedures for damage detection, and (c) a geometric method (GM) for damage assessment. The methodology's advantages include the effective use of the non-stationary and limited duration earthquake excitation, the handling of stochastic uncertainties, the tackling of the damage localization and quantification subproblems, the use of "small" size, simple and partial (in both the spatial and frequency bandwidth senses) identified OE-type models, and the use of a minimal number of measured vibration signals. Its feasibility and effectiveness are assessed via Monte Carlo experiments employing a simple simulation model of a six-storey building. It is demonstrated that damage levels corresponding to 5% and 20% reductions in a storey's stiffness may be properly detected and assessed using noise-corrupted vibration signals.

  18. Decision-relevant evaluation of climate models: A case study of chill hours in California

    NASA Astrophysics Data System (ADS)

    Jagannathan, K. A.; Jones, A. D.; Kerr, A. C.

    2017-12-01

    The past decade has seen a proliferation of climate datasets, with over 60 climate models currently in use. Comparative evaluation and validation of models can help practitioners choose the most appropriate models for adaptation planning. However, such assessments are usually conducted for `climate metrics' such as seasonal temperature, while sectoral decisions are often based on `decision-relevant outcome metrics' such as growing degree days or chill hours. Since climate models predict different metrics with varying skill, the goal of this research is to conduct a bottom-up evaluation of model skill for `outcome-based' metrics. Using chill hours (the number of hours in winter months with temperatures below 45°F) in Fresno, CA as a case, we assess how well different GCMs predict the historical mean and slope of chill hours, and whether and to what extent projections differ based on model selection. We then compare our results with other climate-based evaluations of the region to identify similarities and differences. For the model skill evaluation, historically observed chill hours were compared with simulations from 27 GCMs (and multiple ensembles). Model skill scores were generated based on a statistical hypothesis test of the comparative assessment. Future projections from RCP 8.5 runs were evaluated, and a simple bias correction was also conducted. Our analysis indicates that model skill in predicting chill hour slope depends on skill in predicting mean chill hours, a consequence of the non-linear nature of the chill metric. However, there was no clear relationship between the models that performed well for the chill hour metric and those that performed well in other temperature-based evaluations (such as winter minimum temperature or diurnal temperature range). Further, contrary to conclusions from other studies, we also found that the multi-model mean or large-ensemble mean may not always be the most appropriate for this outcome metric. Our assessment sheds light on key differences between global versus local skill, and broad versus specific skill of climate models, highlighting that decision-relevant model evaluation may be crucial for providing practitioners with the best available climate information for their specific needs.
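
    The chill-hours outcome metric itself is straightforward to compute from an hourly temperature series. A minimal sketch, with synthetic data standing in for Fresno observations:

```python
import numpy as np
import pandas as pd

def chill_hours(hourly_temp_f: pd.Series, threshold_f: float = 45.0) -> int:
    """Count winter-month (Dec-Feb) hours with temperature below the threshold."""
    winter = hourly_temp_f[hourly_temp_f.index.month.isin([12, 1, 2])]
    return int((winter < threshold_f).sum())

# Synthetic hourly winter series for illustration (not observed Fresno data)
idx = pd.date_range("2000-12-01", periods=24 * 90, freq="H")
rng = np.random.default_rng(0)
temps = pd.Series(40 + 15 * rng.random(len(idx)), index=idx)
n_chill = chill_hours(temps)
```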

  19. Assessing Students' Understanding of Human Behavior: A Multidisciplinary Outcomes Based Approach for the Design and Assessment of an Academic Program Goal.

    ERIC Educational Resources Information Center

    Keith, Bruce; Meese, Michael J.; Efflandt, Scott; Malinowski, Jon C.; LeBoeuf, Joseph; Gallagher, Martha; Hurley, John; Green, Charles

    2002-01-01

    Presents a strategy for the curricular design and assessment of one multidisciplinary program goal: understanding human behavior. Discusses how to assess a desired outcome based on four specific areas: (1) organizational context; (2) articulation of a learning model; (3) program design and implementation; and (4) outcomes assessment. (Author/KDR)

  20. Modeling Joint Exposures and Health Outcomes for Cumulative Risk Assessment: The Case of Radon and Smoking

    PubMed Central

    Chahine, Teresa; Schultz, Bradley D.; Zartarian, Valerie G.; Xue, Jianping; Subramanian, SV; Levy, Jonathan I.

    2011-01-01

    Community-based cumulative risk assessment requires characterization of exposures to multiple chemical and non-chemical stressors, with consideration of how the non-chemical stressors may influence risks from chemical stressors. Residential radon provides an interesting case example, given its large attributable risk, effect modification due to smoking, and significant variability in radon concentrations and smoking patterns. In spite of this fact, no study to date has estimated geographic and sociodemographic patterns of both radon and smoking in a manner that would allow for inclusion of radon in community-based cumulative risk assessment. In this study, we apply multi-level regression models to explain variability in radon based on housing characteristics and geological variables, and construct a regression model predicting housing characteristics using U.S. Census data. Multi-level regression models of smoking based on predictors common to the housing model allow us to link the exposures. We estimate county-average lifetime lung cancer risks from radon ranging from 0.15 to 1.8 in 100, with high-risk clusters in areas and for subpopulations with high predicted radon and smoking rates. Our findings demonstrate the viability of screening-level assessment to characterize patterns of lung cancer risk from radon, with an approach that can be generalized to multiple chemical and non-chemical stressors. PMID:22016710

  1. USEPA SHEDS MODEL: METHODOLOGY FOR EXPOSURE ASSESSMENT FOR WOOD PRESERVATIVES

    EPA Science Inventory

    A physically-based, Monte Carlo probabilistic model (SHEDS-Wood: Stochastic Human Exposure and Dose Simulation model for wood preservatives) has been applied to assess the exposure and dose of children to arsenic (As) and chromium (Cr) from contact with chromated copper arsenat...

  2. Consensus-based training and assessment model for general surgery.

    PubMed

    Szasz, P; Louridas, M; de Montbrun, S; Harris, K A; Grantcharov, T P

    2016-05-01

    Surgical education is becoming competency-based with the implementation of in-training milestones. Training guidelines should reflect these changes and determine the specific procedures for such milestone assessments. This study aimed to develop a consensus view regarding operative procedures and tasks considered appropriate for junior and senior trainees, and the procedures that can be used as technical milestone assessments for trainee progression in general surgery. A Delphi process was followed where questionnaires were distributed to all 17 Canadian general surgery programme directors. Items were ranked on a 5-point Likert scale, with consensus defined as Cronbach's α of at least 0·70. Items rated 4 or above on the 5-point Likert scale by 80 per cent of the programme directors were included in the models. Two Delphi rounds were completed, with 14 programme directors taking part in round one and 11 in round two. The overall consensus was high (Cronbach's α = 0·98). The training model included 101 unique procedures and tasks, 24 specific to junior trainees, 68 specific to senior trainees, and nine appropriate to all. The assessment model included four procedures. A system of operative procedures and tasks for junior- and senior-level trainees has been developed along with an assessment model for trainee progression. These can be used as milestones in competency-based assessments. © 2016 BJS Society Ltd Published by John Wiley & Sons Ltd.
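
    Consensus across the Delphi rounds above was quantified with Cronbach's α. A standard computation on a raters-by-items matrix can be sketched as follows (the function name and example matrix are ours, not the study's data):

```python
import numpy as np

def cronbach_alpha(ratings) -> float:
    """Cronbach's alpha for a raters-by-items matrix of Likert ratings."""
    ratings = np.asarray(ratings, float)
    k = ratings.shape[1]                      # number of items rated
    item_var = ratings.var(axis=0, ddof=1)    # variance of each item across raters
    total_var = ratings.sum(axis=1).var(ddof=1)  # variance of raters' total scores
    return k / (k - 1) * (1 - item_var.sum() / total_var)

# Example: three hypothetical programme directors rating four procedures
alpha = cronbach_alpha([[4, 5, 4, 4],
                        [5, 5, 4, 5],
                        [4, 4, 3, 4]])
```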

  3. A Maturity Model for Assessing the Use of ICT in School Education

    ERIC Educational Resources Information Center

    Solar, Mauricio; Sabattin, Jorge; Parada, Victor

    2013-01-01

    This article describes an ICT-based and capability-driven model for assessing ICT in education capabilities and maturity of schools. The proposed model, called ICTE-MM (ICT in School Education Maturity Model), has three elements supporting educational processes: information criteria, ICT resources, and leverage domains. Changing the traditional…

  4. Assessment of health surveys: fitting a multidimensional graded response model.

    PubMed

    Depaoli, Sarah; Tiemensma, Jitske; Felt, John M

    The multidimensional graded response model, an item response theory (IRT) model, can be used to improve the assessment of surveys, even when sample sizes are restricted. Typically, health-based survey development utilizes classical statistical techniques (e.g. reliability and factor analysis). In a review of four prominent journals within the field of Health Psychology, we found that IRT-based models were used in less than 10% of the studies examining scale development or assessment. However, implementing IRT-based methods can provide more details about individual survey items, which is useful when determining the final item content of surveys. An example using a quality of life survey for Cushing's syndrome (CushingQoL) highlights the main components for implementing the multidimensional graded response model. Patients with Cushing's syndrome (n = 397) completed the CushingQoL. Results from the multidimensional graded response model supported a 2-subscale scoring process for the survey. All items were deemed as worthy contributors to the survey. The graded response model can accommodate unidimensional or multidimensional scales, be used with relatively lower sample sizes, and is implemented in free software (example code provided in online Appendix). Use of this model can help to improve the quality of health-based scales being developed within the Health Sciences.
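
    In the graded response model referenced above, each Likert category's probability is the difference of adjacent 2PL boundary curves. A minimal unidimensional sketch; the parameter values are illustrative, not CushingQoL estimates:

```python
import numpy as np

def grm_category_probs(theta, a, b):
    """Category probabilities under Samejima's graded response model.

    theta : latent trait level of the respondent
    a     : item discrimination
    b     : increasing category thresholds (length = n_categories - 1)
    """
    b = np.asarray(b, float)
    # P(X >= k): a 2PL boundary curve for each threshold
    p_star = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    upper = np.concatenate(([1.0], p_star))
    lower = np.concatenate((p_star, [0.0]))
    return upper - lower  # adjacent differences give category probabilities

# Illustrative 4-category item with symmetric thresholds
probs = grm_category_probs(theta=0.0, a=1.2, b=[-1.0, 0.0, 1.0])
```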

  5. Psychosocial Modeling of Insider Threat Risk Based on Behavioral and Word Use Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greitzer, Frank L.; Kangas, Lars J.; Noonan, Christine F.

    In many insider crimes, managers and other coworkers observed that the offenders had exhibited signs of stress, disgruntlement, or other issues, but no alarms were raised. Barriers to using such psychosocial indicators include the inability to recognize the signs and the failure to record the behaviors so that they can be assessed. A psychosocial model was developed to assess an employee's behavior associated with an increased risk of insider abuse. The model is based on case studies and research literature on factors/correlates associated with precursor behavioral manifestations of individuals committing insider crimes. A complementary Personality Factor modeling approach was developed based on analysis to derive relevant personality characteristics from word use. Several implementations of the psychosocial model were evaluated by comparing their agreement with judgments of human resources and management professionals; the personality factor modeling approach was examined using email samples. If implemented in an operational setting, these models should be part of a set of management tools for employee assessment to identify employees who pose a greater insider threat.

  6. Development of the quality assessment model of EHR software in family medicine practices: research based on user satisfaction.

    PubMed

    Kralj, Damir; Kern, Josipa; Tonkovic, Stanko; Koncar, Miroslav

    2015-09-09

    Family medicine practices (FMPs) form the basis of the Croatian health care system. Use of electronic health record (EHR) software is mandatory and plays an important role in running these practices, but important functional features remain uneven and largely left to the will of the software developers. The objective of this study was to develop a novel and comprehensive model for functional evaluation of EHR software in FMPs, based on current world standards, models and projects, as well as on actual user satisfaction and requirements. Based on previous theoretical and experimental research in this area, we built an initial framework model consisting of six basic categories as the base for an online survey questionnaire. Family doctors assessed perceived software quality using a five-point Likert-type scale. Using exploratory factor analysis and appropriate statistical methods on the collected data, the final optimal structure of the novel model was formed. Special attention was focused on the validity and quality of the novel model. The online survey collected a total of 384 cases. The obtained results indicate both the quality of the assessed software and the quality in use of the novel model. The intense ergonomic orientation of the novel measurement model was particularly emphasised. The resulting novel model is multiply validated, comprehensive and universal. It could be used to assess the user-perceived quality of almost all forms of ambulatory EHR software and is therefore useful to all stakeholders in this area of health care informatisation.

  7. Linking a modified EPIC-based growth model (UPGM) with a component-based watershed model (AGES-W)

    USDA-ARS?s Scientific Manuscript database

    Agricultural models and decision support systems (DSS) for assessing water use and management are increasingly being applied to diverse geographic regions at different scales. This requires models that can simulate different crops, however, very few plant growth models are available that “easily” ...

  8. Bayesian-network-based safety risk assessment for steel construction projects.

    PubMed

    Leu, Sou-Sen; Chang, Ching-Miao

    2013-05-01

    There are four primary accident types at steel building construction (SC) projects: falls (tumbles), object falls, object collapse, and electrocution. Several systematic safety risk assessment approaches, such as fault tree analysis (FTA) and failure mode and effect criticality analysis (FMECA), have been used to evaluate safety risks at SC projects. However, these traditional methods ineffectively address dependencies among safety factors at various levels that fail to provide early warnings to prevent occupational accidents. To overcome the limitations of traditional approaches, this study addresses the development of a safety risk-assessment model for SC projects by establishing the Bayesian networks (BN) based on fault tree (FT) transformation. The BN-based safety risk-assessment model was validated against the safety inspection records of six SC building projects and nine projects in which site accidents occurred. The ranks of posterior probabilities from the BN model were highly consistent with the accidents that occurred at each project site. The model accurately provides site safety-management abilities by calculating the probabilities of safety risks and further analyzing the causes of accidents based on their relationships in BNs. In practice, based on the analysis of accident risks and significant safety factors, proper preventive safety management strategies can be established to reduce the occurrence of accidents on SC sites. Copyright © 2013 Elsevier Ltd. All rights reserved.
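
    The fault-tree-to-BN transformation above can be illustrated with a toy two-cause OR gate for a fall accident; the priors below are invented for illustration and are not from the study:

```python
from itertools import product

# Hypothetical priors for two basic events feeding an OR gate (illustration only)
p_guardrail_missing = 0.10
p_unsafe_behaviour = 0.20

def p_fall(g, u):
    """CPT: a deterministic OR gate, as produced by fault-tree transformation."""
    return 1.0 if (g or u) else 0.0

# Enumerate the joint distribution and condition on the accident having occurred
joint = {}
for g, u in product([0, 1], repeat=2):
    pg = p_guardrail_missing if g else 1 - p_guardrail_missing
    pu = p_unsafe_behaviour if u else 1 - p_unsafe_behaviour
    joint[(g, u)] = pg * pu * p_fall(g, u)

p_accident = sum(joint.values())
# Posterior probability that the guardrail was missing, given a fall occurred
posterior_guardrail = sum(v for (g, u), v in joint.items() if g) / p_accident
```

    Ranking such posteriors across causes mirrors how the BN model analyzes the causes of accidents after conditioning on inspection findings.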

  9. A Study of Wind Turbine Comprehensive Operational Assessment Model Based on EM-PCA Algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Minqiang; Xu, Bin; Zhan, Yangyan; Ren, Danyuan; Liu, Dexing

    2018-01-01

    To assess wind turbine performance accurately and provide a theoretical basis for wind farm management, a hybrid assessment model based on the Entropy Method and Principal Component Analysis (EM-PCA) was established, which takes most factors of operational performance into consideration and reaches a comprehensive result. To verify the model, six wind turbines were chosen as the research objects; the ranking obtained by the proposed method was 4#>6#>1#>5#>2#>3#, in complete conformity with the theoretical ranking, indicating that the reliability and effectiveness of the EM-PCA method are high. The method can guide state comparison among different units and support wind farm operational assessment.
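
    The Entropy Method half of EM-PCA derives criterion weights from the information content of a decision matrix. A common formulation, sketched with made-up turbine indicators (the paper's actual indicator set is not reproduced here):

```python
import numpy as np

def entropy_weights(X):
    """Entropy Method weights for an alternatives-by-criteria decision matrix."""
    X = np.asarray(X, float)
    P = X / X.sum(axis=0)                      # column-normalised proportions
    n = X.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.where(P > 0, np.log(P), 0.0)
    e = -(P * logs).sum(axis=0) / np.log(n)    # entropy of each criterion
    d = 1 - e                                  # degree of diversification
    return d / d.sum()                         # criteria that vary more weigh more

# Hypothetical indicators for three turbines: availability, output (kWh), fault count
X = np.array([[0.97, 1800, 3],
              [0.95, 1650, 5],
              [0.99, 1900, 2]])
w = entropy_weights(X)
```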

  10. Practice management based on risk assessment.

    PubMed

    Sandberg, Hans

    2004-01-01

    The management of a dental practice is most often focused on what clinicians do (production of items), and not so much on what is achieved in terms of oral health. The main reason is probably that production is easier to measure than health outcomes. This paper presents a model based on individual risk assessment that aims to achieve a financially sound economy and good oral health. The close-to-the-clinic management tool, the HIDEP Model (Health Improvement in a DEntal Practice), was pioneered in Sweden at the end of the 1980s. Fifteen years of experience with the different elements of the model is presented, including: the basis of examination and risk assessment; motivation; task delegation and leadership issues; health-finance evaluations; and quality development within a dental clinic. DentiGroupXL, a software program designed to support work based on the model, is also described.

  11. Validation of a New Conceptual Model of School Connectedness and Its Assessment Measure

    ERIC Educational Resources Information Center

    Hirao, Katsura

    2011-01-01

    A self-report assessment scale of school connectedness was validated in this study based on the data from middle-school children in a northeastern state of the United States (n = 145). The scale was based on the School Bonding Model (Morita, 1991), which was derived reductively from the social control (bond) theory (Hirschi, 1969). This validation…

  12. Dynamic drought risk assessment using crop model and remote sensing techniques

    NASA Astrophysics Data System (ADS)

    Sun, H.; Su, Z.; Lv, J.; Li, L.; Wang, Y.

    2017-02-01

    Drought risk assessment is of great significance for reducing agricultural drought losses and ensuring food security. The conventional drought risk assessment method evaluates a specific region's exposure to the hazard and its vulnerability to extended periods of water shortage, which is a static evaluation. Dynamic Drought Risk Assessment (DDRA) estimates drought risk according to crop growth and water stress conditions in real time. In this study, a DDRA method using a crop model and remote sensing techniques is proposed. The crop model employed is the DeNitrification and DeComposition (DNDC) model. Drought risk was quantified by the yield losses predicted by the crop model in a scenario-based method. The crop model was re-calibrated to improve its performance using the Leaf Area Index (LAI) retrieved from MODerate Resolution Imaging Spectroradiometer (MODIS) data, and the in-situ station-based crop model was extended to assess regional drought risk by integrating crop planting maps. The crop planting area was extracted from MODIS data with the extended CPPI method. The study was implemented and validated on the maize crop in Liaoning province, China.
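
    The scenario-based quantification of drought risk amounts to a probability-weighted expected yield loss relative to a no-stress reference run. A toy sketch; the probabilities and yields are invented and merely stand in for DNDC simulation output:

```python
# Hypothetical drought scenarios: (probability, simulated maize yield in t/ha).
# The yields stand in for crop-model runs under different water-stress levels.
scenarios = [(0.60, 8.0),   # normal season
             (0.25, 6.5),   # moderate drought
             (0.15, 4.0)]   # severe drought

baseline_yield = 8.0  # no-stress reference simulation

# Probability-weighted expected yield loss, and a normalised risk indicator
expected_loss = sum(p * max(baseline_yield - y, 0.0) for p, y in scenarios)
loss_ratio = expected_loss / baseline_yield
```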

  13. The NASA Space Radiobiology Risk Assessment Project

    NASA Astrophysics Data System (ADS)

    Cucinotta, Francis A.; Huff, Janice; Ponomarev, Artem; Patel, Zarana; Kim, Myung-Hee

    The current first phase (2006-2011) has three major goals: 1) optimizing the conventional cancer risk models currently used, based on the double-detriment life table and radiation quality functions; 2) integrating biophysical models of acute radiation syndromes; and 3) developing new systems radiation biology models of cancer processes. The first phase also includes continued uncertainty assessment of space radiation environmental models and transport codes, and of relative biological effectiveness (RBE) factors, based on flight data and NSRL results, respectively. The second phase (2012-2016) will: 1) develop biophysical models of central nervous system (CNS) risks; 2) achieve comprehensive systems biology models of cancer processes using data from proton and heavy ion studies performed at NSRL; and 3) begin to identify computational models of biological countermeasures. Goals for the third phase (2017-2021) include: 1) the development of a systems biology model of cancer risks for operational use at NASA; 2) development of models of degenerative risks; 3) quantitative models of countermeasure impacts on cancer risks; and 4) individual-based risk assessments. Finally, we will support a decision point on continuing NSRL research in support of NASA's exploration goals beyond 2021, and create an archive of NSRL research results for continued analysis. Details on near-term goals, plans for a Web-based data resource of NSRL results, and a space radiation Wikipedia are described.

  14. Performance and Cognitive Assessment in 3-D Modeling

    ERIC Educational Resources Information Center

    Fahrer, Nolan E.; Ernst, Jeremy V.; Branoff, Theodore J.; Clark, Aaron C.

    2011-01-01

    The purpose of this study was to investigate identifiable differences between performance and cognitive assessment scores in a 3-D modeling unit of an engineering drafting course curriculum. The study aimed to provide further investigation of the need of skill-based assessments in engineering/technical graphics courses to potentially increase…

  15. Atmospheric Effects of Subsonic Aircraft: Interim Assessment Report of the Advanced Subsonic Technology Program

    NASA Technical Reports Server (NTRS)

    Friedl, Randall R. (Editor)

    1997-01-01

    This first interim assessment of the subsonic assessment (SASS) project attempts to summarize concisely the status of our knowledge concerning the impacts of present and future subsonic aircraft fleets. It also highlights the major areas of scientific uncertainty, through review of existing databases and model-based sensitivity studies. In view of the need for substantial improvements in both model formulations and experimental databases, this interim assessment cannot provide confident numerical predictions of aviation impacts. However, a number of quantitative estimates are presented, which provide some guidance to policy makers.

  16. Environmental exposure modeling and monitoring of human pharmaceutical concentrations in the environment

    USGS Publications Warehouse

    Versteeg, D.J.; Alder, A. C.; Cunningham, V. L.; Kolpin, D.W.; Murray-Smith, R.; Ternes, T.

    2005-01-01

    Human pharmaceuticals are receiving increased attention as environmental contaminants. This is due to their biological activity and the number of monitoring programs focusing on analysis of these compounds in various environmental media and compartments. Risk assessments are needed to understand the implications of reported concentrations; a fundamental part of the risk assessment is an assessment of environmental exposures. The purpose of this chapter is to provide guidance on the use of predictive tools (e.g., models) and monitoring data in exposure assessments for pharmaceuticals in the environment. Methods to predict environmental concentrations from equations based on first principles are presented. These equations form the basis of existing GIS (geographic information systems)-based systems for understanding the spatial distribution of pharmaceuticals in the environment. The pharmaceutical assessment and transport (PhATE), georeferenced regional exposure assessment tool for European rivers (GREAT-ER), and geographical information system (GIS)-ROUT models are reviewed and recommendations are provided concerning the design and execution of monitoring studies. Model predictions and monitoring data are compared to evaluate the relative utility of each approach in environmental exposure assessments. In summary, both models and monitoring data can be used to define representative exposure concentrations of pharmaceuticals in the environment in support of environmental risk assessments.

  17. Teaching and assessing procedural skills using simulation: metrics and methodology.

    PubMed

    Lammers, Richard L; Davenport, Moira; Korley, Frederick; Griswold-Theodorson, Sharon; Fitch, Michael T; Narang, Aneesh T; Evans, Leigh V; Gross, Amy; Rodriguez, Elliot; Dodge, Kelly L; Hamann, Cara J; Robey, Walter C

    2008-11-01

    Simulation allows educators to develop learner-focused training and outcomes-based assessments. However, the effectiveness and validity of simulation-based training in emergency medicine (EM) requires further investigation. Teaching and testing technical skills require methods and assessment instruments that are somewhat different than those used for cognitive or team skills. Drawing from work published by other medical disciplines as well as educational, behavioral, and human factors research, the authors developed six research themes: measurement of procedural skills; development of performance standards; assessment and validation of training methods, simulator models, and assessment tools; optimization of training methods; transfer of skills learned on simulator models to patients; and prevention of skill decay over time. The article reviews relevant and established educational research methodologies and identifies gaps in our knowledge of how physicians learn procedures. The authors present questions requiring further research that, once answered, will advance understanding of simulation-based procedural training and assessment in EM.

  18. Proposals for enhanced health risk assessment and stratification in an integrated care scenario.

    PubMed

    Dueñas-Espín, Ivan; Vela, Emili; Pauws, Steffen; Bescos, Cristina; Cano, Isaac; Cleries, Montserrat; Contel, Joan Carles; de Manuel Keenoy, Esteban; Garcia-Aymerich, Judith; Gomez-Cabrero, David; Kaye, Rachelle; Lahr, Maarten M H; Lluch-Ariet, Magí; Moharra, Montserrat; Monterde, David; Mora, Joana; Nalin, Marco; Pavlickova, Andrea; Piera, Jordi; Ponce, Sara; Santaeugenia, Sebastià; Schonenberg, Helen; Störk, Stefan; Tegner, Jesper; Velickovski, Filip; Westerteicher, Christoph; Roca, Josep

    2016-04-15

    Population-based health risk assessment and stratification are considered highly relevant for large-scale implementation of integrated care by facilitating services design and case identification. The principal objective of the study was to analyse five health-risk assessment strategies and health indicators used in the five regions participating in the Advancing Care Coordination and Telehealth Deployment (ACT) programme (http://www.act-programme.eu). The second purpose was to elaborate on strategies toward enhanced health risk predictive modelling in the clinical scenario. The five ACT regions: Scotland (UK), Basque Country (ES), Catalonia (ES), Lombardy (I) and Groningen (NL). Responsible teams for regional data management in the five ACT regions. We characterised and compared risk assessment strategies among ACT regions by analysing operational health risk predictive modelling tools for population-based stratification, as well as available health indicators at regional level. The analysis of the risk assessment tool deployed in Catalonia in 2015 (GMAs, Adjusted Morbidity Groups) was used as a basis to propose how population-based analytics could contribute to clinical risk prediction. There was consensus on the need for a population health approach to generate health risk predictive modelling. However, this strategy was fully in place only in two ACT regions: Basque Country and Catalonia. We found marked differences among regions in health risk predictive modelling tools and health indicators, and identified key factors constraining their comparability. The research proposes means to overcome current limitations and the use of population-based health risk prediction for enhanced clinical risk assessment. The results indicate the need for further efforts to improve both comparability and flexibility of current population-based health risk predictive modelling approaches. 
Applicability and impact of the proposals for enhanced clinical risk assessment require prospective evaluation. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  19. Bridging the etiologic and prognostic outlooks in individualized assessment of absolute risk of an illness: application in lung cancer.

    PubMed

    Karp, Igor; Sylvestre, Marie-Pierre; Abrahamowicz, Michal; Leffondré, Karen; Siemiatycki, Jack

    2016-11-01

    Assessment of individual risk of illness is an important activity in preventive medicine. Development of risk-assessment models has heretofore relied predominantly on studies involving follow-up of cohort-type populations, while case-control studies have generally been considered unfit for this purpose. Our objective was to present a method for individualized assessment of absolute risk of an illness (as illustrated by lung cancer) based on data from a 'non-nested' case-control study. We used data from a case-control study conducted in Montreal, Canada, in 1996-2001. Individuals diagnosed with lung cancer (n = 920) and age- and sex-matched lung-cancer-free subjects (n = 1288) completed questionnaires documenting lifetime cigarette-smoking history and occupational, medical, and family history. Unweighted and weighted logistic models were fitted. Model overfitting was assessed using bootstrap-based cross-validation and 'shrinkage.' The discriminating ability was assessed by the c-statistic, and the risk-stratifying performance was assessed by examining the variability in risk estimates over hypothetical risk profiles. In the logistic models, the logarithm of the incidence density of lung cancer was expressed as a function of age, sex, cigarette-smoking history, history of respiratory conditions and exposure to occupational carcinogens, and family history of lung cancer. The models entailed a minimal degree of overfitting ('shrinkage' factor: 0.97 for both unweighted and weighted models) and moderately high discriminating ability (c-statistic: 0.82 for the unweighted model and 0.66 for the weighted model). The method's risk-stratifying performance was quite high. The presented method allows for individualized assessment of risk of lung cancer and can be used for the development of risk-assessment models for other illnesses.
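The two model-checking quantities used above can be illustrated in a few lines. The sketch below uses synthetic data (not the Montreal study's): it computes a c-statistic by pairwise comparison of case and control scores, and estimates a 'shrinkage' (calibration-slope) factor by refitting the linear predictor with Newton-Raphson; a slope near 1 indicates minimal overfitting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration: simulate a linear predictor (log-odds)
# and binary outcomes drawn from the corresponding logistic model.
n = 2000
lp = rng.normal(0.0, 1.5, n)                       # linear predictor
y = rng.random(n) < 1.0 / (1.0 + np.exp(-lp))      # simulated case status

def c_statistic(score, outcome):
    """Probability that a random case outranks a random non-case
    (ties count 1/2); identical to the ROC AUC."""
    cases, controls = score[outcome], score[~outcome]
    wins = (cases[:, None] > controls[None, :]).sum()
    ties = (cases[:, None] == controls[None, :]).sum()
    return (wins + 0.5 * ties) / (len(cases) * len(controls))

def calibration_slope(lp_new, y_new, iters=25):
    """'Shrinkage' factor: slope b in logit P(y=1) = a + b*lp,
    fitted by Newton-Raphson; b near 1 means little overfitting."""
    a, b = 0.0, 1.0
    X = np.column_stack([np.ones_like(lp_new), lp_new])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(a + b * lp_new)))
        grad = X.T @ (y_new - p)            # score vector
        W = p * (1.0 - p)                   # IRLS weights
        H = X.T @ (X * W[:, None])          # observed information
        a, b = np.array([a, b]) + np.linalg.solve(H, grad)
    return b

print(round(c_statistic(lp, y), 2))                       # discrimination
print(round(calibration_slope(lp, y.astype(float)), 2))   # near 1 by construction
```

In the study itself the slope is estimated on bootstrap-resampled data rather than the training data; the statistic is the same.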

  19. State of the Science: Biologically Based Modeling in Risk Assessment [Editorial]

    EPA Science Inventory

    The health risk assessment from exposure to a particular agent is preferred when the assessment is based on a relevant measure of internal dose (e.g., maximal concentration of an active metabolite in target tissue) rather than simply the administered dose or exposure concentratio...

  1. Toward Dynamic Ocean Management: Fisheries assessment and climate projections informed by community developed habitat models based on dynamic coastal oceanography

    NASA Astrophysics Data System (ADS)

    Kohut, J. T.; Manderson, J.; Palamara, L. J.; Saba, V. S.; Saba, G.; Hare, J. A.; Curchitser, E. N.; Moore, P.; Seibel, B.; DiDomenico, G.

    2016-12-01

    Through a multidisciplinary study group of experts in marine ecology, physical oceanography and stock assessment from the fishing industry, government and academia, we developed a method to explicitly account for shifting habitat distributions in fish population assessments. We used data from field surveys throughout the Northwest Atlantic Ocean to develop a parametric thermal niche model for an important short-lived pelagic forage fish, Atlantic Butterfish. This niche model was coupled to a hindcast of daily bottom water temperature derived from a regional numerical ocean model in order to project daily thermal habitat suitability over the last 40 years. This ecological hindcast was used to estimate the proportion of thermal habitat suitability available on the U.S. Northeast Shelf that was sampled on fishery-independent surveys, accounting for the relative motions of thermal habitat and the trajectory of sampling on the survey. The method and habitat-based estimates of availability were integrated into the catchability estimate used to scale population size in the butterfish stock assessment model accepted by the reviewers of the 59th NEFSC stock assessment review, as well as the Mid-Atlantic Council's Scientific and Statistical Committee. The contribution of the availability estimate (along with an estimate of detectability) allowed for the development of fishery reference points, a change in stock status from unknown to known, and the establishment of a directed fishery with an allocation of 20,000 metric tons of quota. This presentation will describe how a community-based workgroup utilized ocean observing technologies combined with ocean models to better understand the physical ocean that structures marine ecosystems. Using these approaches we will discuss opportunities to inform ecological hindcasts and climate projections with mechanistic models that link species-specific physiology to climate-based thermal scenarios.
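The availability-to-catchability calculation described above can be sketched as follows. The Gaussian niche shape, the temperature field, and the detectability value are illustrative assumptions for this sketch, not the study's published estimates.

```python
import numpy as np

rng = np.random.default_rng(1)

def thermal_suitability(temp_c, optimum=13.0, breadth=3.0):
    """Assumed parametric (Gaussian) thermal niche: suitability in [0, 1]."""
    return np.exp(-0.5 * ((temp_c - optimum) / breadth) ** 2)

# Hindcast bottom temperatures on a model grid (synthetic stand-in
# for the regional ocean model output).
grid_temp = rng.normal(11.0, 4.0, size=5000)
suit = thermal_suitability(grid_temp)

# Suppose the survey trajectory covered 60% of grid cells that year:
# availability = share of shelf-wide suitability inside the surveyed cells.
surveyed = rng.random(grid_temp.size) < 0.6
availability = suit[surveyed].sum() / suit.sum()

# Availability then scales catchability q in the assessment,
# alongside a detectability estimate (value assumed here).
detectability = 0.4
q = availability * detectability
print(round(availability, 2), round(q, 2))
```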

  3. Simulating Runoff from a Grid Based Mercury Model: Flow Comparisons

    EPA Science Inventory

    Several mercury cycling models, including general mass balance approaches, mixed-batch reactors in streams or lakes, or regional process-based models, exist to assess the ecological exposure risks associated with anthropogenically increased atmospheric mercury (Hg) deposition, so...

  4. Summarization as the base for text assessment

    NASA Astrophysics Data System (ADS)

    Karanikolas, Nikitas N.

    2015-02-01

    We present a model that applies shallow text summarization as a cheap (in the resources needed) process for Automatic (machine-based) free-text answer Assessment (AA). The evaluation of the proposed method supports the inference that Conventional Assessment (CA, human assessment of free-text answers) has no obvious mechanical replacement; however, this remains a research challenge.
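A minimal sketch of the summarization-then-matching idea (the scoring scheme here is an assumption for illustration, not the paper's method): reduce both a model answer and a student answer to their most frequent content terms, then score the answer by cosine similarity between the two term vectors.

```python
from collections import Counter
import math

# Tiny stop-word list; a real system would use a fuller one.
STOP = {"the", "a", "an", "of", "to", "and", "is", "in", "that"}

def summary_terms(text, top_k=10):
    """Shallow summary: keep the top-k most frequent content terms."""
    words = [w.strip(".,").lower() for w in text.split()]
    counts = Counter(w for w in words if w and w not in STOP)
    return Counter(dict(counts.most_common(top_k)))

def cosine(c1, c2):
    """Cosine similarity between two term-frequency Counters."""
    dot = sum(c1[w] * c2[w] for w in c1)
    n1 = math.sqrt(sum(v * v for v in c1.values()))
    n2 = math.sqrt(sum(v * v for v in c2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

model = "Photosynthesis converts light energy to chemical energy in chloroplasts."
answer = "Plants use chloroplasts to turn light energy into chemical energy."
score = cosine(summary_terms(model), summary_terms(answer))
print(round(score, 2))
```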

  5. A Bayesian hierarchical latent trait model for estimating rater bias and reliability in large-scale performance assessment

    PubMed Central

    2018-01-01

    We propose a novel approach to modelling rater effects in scoring-based assessment. The approach is based on a Bayesian hierarchical model and simulations from the posterior distribution. We apply it to large-scale essay assessment data covering a period of 5 years. Empirical results suggest that the model provides a good fit both for the total scores and when applied to individual rubrics. We estimate the median impact of rater effects on the final grade to be ±2 points on a 50-point scale, while 10% of essays would receive a score differing by at least 5 points from their actual quality. Most of the impact is due to rater unreliability, not rater bias. PMID:29614129

  6. IVUS-Based Computational Modeling and Planar Biaxial Artery Material Properties for Human Coronary Plaque Vulnerability Assessment

    PubMed Central

    Liu, Haofei; Cai, Mingchao; Yang, Chun; Zheng, Jie; Bach, Richard; Kural, Mehmet H.; Billiar, Kristen L.; Muccigrosso, David; Lu, Dongsi; Tang, Dalin

    2012-01-01

    Image-based computational modeling has been introduced for vulnerable atherosclerotic plaques to identify critical mechanical conditions which may be used for better plaque assessment and rupture predictions. In vivo patient-specific coronary plaque models are lagging due to limitations on non-invasive image resolution, flow data, and vessel material properties. A framework is proposed to combine intravascular ultrasound (IVUS) imaging, biaxial mechanical testing and computational modeling with fluid-structure interactions and anisotropic material properties to acquire better and more complete plaque data and make more accurate plaque vulnerability assessment and predictions. Impact of pre-shrink-stretch process, vessel curvature and high blood pressure on stress, strain, flow velocity and flow maximum principal shear stress was investigated. PMID:22428362

  7. Recent advances in mathematical modeling of developmental abnormalities using mechanistic information.

    PubMed

    Kavlock, R J

    1997-01-01

    During the last several years, significant changes in the risk assessment process for developmental toxicity of environmental contaminants have begun to emerge. The first of these changes is the development and beginning use of statistically based dose-response models [the benchmark dose (BMD) approach] that better utilize data derived from existing testing approaches. Accompanying this change is the greater emphasis placed on understanding and using mechanistic information to yield more accurate, reliable, and less uncertain risk assessments. The next stage in the evolution of risk assessment will be the use of biologically based dose-response (BBDR) models that begin to build into the statistically based models factors related to the underlying kinetic, biochemical, and/or physiologic processes perturbed by a toxicant. Such models are now emerging from several research laboratories. The introduction of quantitative models and the incorporation of biologic information into them has pointed to the need for even more sophisticated modifications for which we offer the term embryologically based dose-response (EBDR) models. Because these models would be based upon the understanding of normal morphogenesis, they represent a quantum leap in our thinking, but their complexity presents daunting challenges both to the developmental biologist and the developmental toxicologist. Implementation of these models will require extensive communication between developmental toxicologists, molecular embryologists, and biomathematicians. The remarkable progress in the understanding of mammalian embryonic development at the molecular level that has occurred over the last decade combined with advances in computing power and computational models should eventually enable these as yet hypothetical models to be brought into use.

  8. Assessing work disability for social security benefits: international models for the direct assessment of work capacity.

    PubMed

    Geiger, Ben Baumberg; Garthwaite, Kayleigh; Warren, Jon; Bambra, Clare

    2017-08-25

    It has been argued that social security disability assessments should directly assess claimants' work capacity, rather than relying on proxies such as functioning. However, there is little academic discussion of how such assessments could be conducted. The article presents an account of different models of direct disability assessment based on case studies of the Netherlands, Germany, Denmark, Norway, the United States of America, Canada, Australia, and New Zealand, utilising over 150 documents and 40 expert interviews. Three models of direct work disability assessment can be observed: (i) structured assessment, which measures the functional demands of jobs across the national economy and compares these to claimants' functional capacities; (ii) demonstrated assessment, which looks at claimants' actual experiences in the labour market and infers a lack of work capacity from the failure of a concerted rehabilitation attempt; and (iii) expert assessment, based on the judgement of skilled professionals. Direct disability assessment within social security is not just theoretically desirable, but can be implemented in practice. We have shown that there are three distinct ways that this can be done, each with different strengths and weaknesses. Further research is needed to clarify the costs, validity/legitimacy, and consequences of these different models. Implications for rehabilitation: It has recently been argued that social security disability assessments should directly assess work capacity rather than simply assessing functioning, but we have little understanding of how this can be done in practice. Based on case studies of nine countries, we show that direct disability assessment can be implemented, and argue that there are three different ways of doing it. 
These are "demonstrated assessment" (using claimants' experiences in the labour market), "structured assessment" (matching functional requirements to workplace demands), and "expert assessment" (the judgement of skilled professionals). While it is possible to implement a direct assessment of work capacity for social security benefits, further research is necessary to understand how best to maximise validity, legitimacy, and cost-effectiveness.

  9. Risk Assessment

    EPA Pesticide Factsheets

    How the EPA conducts risk assessment to protect human health and the environment. Several assessments are included with the guidelines, models, databases, state-based RSL Tables, local contacts and framework documents used to perform these assessments.

  10. Space station ECLSS integration analysis: Simplified General Cluster Systems Model, ECLS System Assessment Program enhancements

    NASA Technical Reports Server (NTRS)

    Ferguson, R. E.

    1985-01-01

    This report documents the database verification of the ECLS Systems Assessment Program (ESAP) and describes the changes made to enhance the flexibility of the water recovery subsystem simulations. All changes made to database values are described, as are the software enhancements performed. The refined model documented herein constitutes the submittal of the Simplified General Cluster Systems Model. A source listing of the current version of ESAP is provided in Appendix A.

  11. A Corrosion Risk Assessment Model for Underground Piping

    NASA Technical Reports Server (NTRS)

    Datta, Koushik; Fraser, Douglas R.

    2009-01-01

    The Pressure Systems Manager at NASA Ames Research Center (ARC) has embarked on a project to collect data and develop risk assessment models to support risk-informed decision making regarding future inspections of underground pipes at ARC. This paper shows progress in one area of this project: a corrosion risk assessment model for the underground high-pressure air distribution piping system at ARC. It consists of a Corrosion Model of pipe segments, a Pipe Wrap Protection Model, and a Pipe Stress Model for a pipe segment. A Monte Carlo simulation of the combined models provides a distribution of the failure probabilities. Sensitivity study results show that the model uncertainty, or lack of knowledge, is the dominant contributor to the calculated unreliability of the underground piping system. As a result, the Pressure Systems Manager may consider investing resources specifically focused on reducing these uncertainties. Future work includes completing the data collection effort for the existing ground-based pressure systems and applying the risk models to risk-based inspection strategies for the underground pipes at ARC.
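The combined-model Monte Carlo step can be sketched as below. All distributions and parameter values are illustrative assumptions, not ARC data; the point is how sampled corrosion, wrap-protection, and stress-capacity models combine into a failure-probability estimate for a pipe segment.

```python
import numpy as np

rng = np.random.default_rng(42)

n_trials = 100_000
service_years = 30.0

# Corrosion Model (assumed distributions): initial wall and corrosion rate.
wall_thickness = rng.normal(7.0, 0.3, n_trials)              # mm
corrosion_rate = rng.lognormal(np.log(0.08), 0.5, n_trials)  # mm/year

# Pipe Wrap Protection Model: an intact wrap slows corrosion.
wrap_intact = rng.random(n_trials) < 0.9
effective_rate = np.where(wrap_intact, 0.2 * corrosion_rate, corrosion_rate)

# Pipe Stress Model: minimum wall thickness required to carry the pressure.
required = rng.normal(3.0, 0.2, n_trials)                    # mm

# Failure = remaining wall after corrosion falls below the required wall.
remaining = wall_thickness - effective_rate * service_years
p_fail = np.mean(remaining < required)
print(f"estimated failure probability: {p_fail:.4f}")
```

A sensitivity study like the one in the paper would rerun this with each input distribution tightened in turn to see which uncertainty dominates.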

  12. Modeling-Oriented Assessment in K-12 Science Education: A Synthesis of Research from 1980 to 2013 and New Directions

    ERIC Educational Resources Information Center

    Namdar, Bahadir; Shen, Ji

    2015-01-01

    Scientific modeling has been advocated as one of the core practices in recent science education policy initiatives. In modeling-based instruction (MBI), students use, construct, and revise models to gain scientific knowledge and inquiry skills. Oftentimes, the benefits of MBI have been documented using assessments targeting students' conceptual…

  13. Assessing the Wildlife Habitat Value of New England Salt Marshes: I. Model and Application

    EPA Science Inventory

    We developed an assessment model to quantify the wildlife habitat value of New England salt marshes based on marsh characteristics and the presence of habitat types that influence habitat use by terrestrial wildlife. Applying the model to 12 salt marshes located in Narragansett B...

  14. Dynamic Bayesian Network Modeling of Game Based Diagnostic Assessments. CRESST Report 837

    ERIC Educational Resources Information Center

    Levy, Roy

    2014-01-01

    Digital games offer an appealing environment for assessing student proficiencies, including skills and misconceptions in a diagnostic setting. This paper proposes a dynamic Bayesian network modeling approach for observations of student performance from an educational video game. A Bayesian approach to model construction, calibration, and use in…

  15. Program Assessment: Getting to a Practical How-To Model

    ERIC Educational Resources Information Center

    Gardiner, Lorraine R.; Corbitt, Gail; Adams, Steven J.

    2010-01-01

    The Association to Advance Collegiate Schools of Business (AACSB) International's assurance of learning (AoL) standards require that schools develop a sophisticated continuous-improvement process. The authors review various assessment models and develop a practical, 6-step AoL model based on the literature and the authors' AoL-implementation…

  16. Using models in Integrated Ecosystem Assessment of coastal areas

    NASA Astrophysics Data System (ADS)

    Solidoro, Cosimo; Bandelj, Vinko; Cossarini, Gianpiero; Melaku Canu, Donata; Libralato, Simone

    2014-05-01

    Numerical models can greatly contribute to integrated ecological assessment of coastal and marine systems. Indeed, models can: i) assist in the identification of efficient sampling strategies; ii) provide spatial interpolation and temporal extrapolation of experimental data, based on the knowledge of process dynamics and causal relationships coded within the model; iii) provide estimates of hardly measurable indicators. Furthermore, models can provide indications of the potential effects of implementing alternative management policies. Finally, by providing a synthetic representation of an ideal system based on its essential dynamics, models return a picture of the ideal behaviour of a system in the absence of external perturbation, alteration and noise, which can help in the identification of a reference behaviour. As an important example, model-based reanalyses of biogeochemical and ecological properties are an urgent need for the estimation of environmental status and the assessment of the efficacy of conservation and environmental policies, also with reference to the enforcement of the European MSFD. However, the use of numerical models, and particularly of ecological models, in environmental management is still far from the rule, possibly because of a failure to realise the benefits that a full integration of modelling and monitoring systems might provide, a lack of trust in modelling results, or the many problems that still exist in the development, validation and implementation of models. For instance, assessing the validity of model results is a complex process that requires the definition of appropriate indicators, metrics and methodologies, and faces the scarcity of real-time in-situ biogeochemical data. Furthermore, biogeochemical models typically consider dozens of variables that are heavily undersampled. 
Here we show how the integration of mathematical models and monitoring data can support integrated ecosystem assessment of a waterbody by reviewing applications from a complex coastal ecosystem, the Lagoon of Venice, and explore potential applications to other coastal and open-sea systems, up to the scale of the Mediterranean Sea.

  17. Biodiversity in environmental assessment-current practice and tools for prediction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gontier, Mikael; Balfors, Berit; Moertberg, Ulla

    Habitat loss and fragmentation are major threats to biodiversity. Environmental impact assessment and strategic environmental assessment are essential instruments used in physical planning to address such problems. Yet there are no well-developed methods for quantifying and predicting impacts of fragmentation on biodiversity. In this study, a literature review was conducted on GIS-based ecological models that have potential as prediction tools for biodiversity assessment. Further, a review of environmental impact statements for road and railway projects from four European countries was performed, to study how impact prediction concerning biodiversity issues was addressed. The results of the study showed the existing gap between research in GIS-based ecological modelling and current practice in biodiversity assessment within environmental assessment.

  18. 3MRA UNCERTAINTY AND SENSITIVITY ANALYSIS

    EPA Science Inventory

    This presentation discusses the Multimedia, Multipathway, Multireceptor Risk Assessment (3MRA) modeling system. The outline of the presentation is: modeling system overview - 3MRA versions; 3MRA version 1.0; national-scale assessment dimensionality; SuperMUSE: windows-based super...

  19. QUANTITATIVE PROCEDURES FOR NEUROTOXICOLOGY RISK ASSESSMENT

    EPA Science Inventory

    In this project, previously published information on biologically based dose-response model for brain development was used to quantitatively evaluate critical neurodevelopmental processes, and to assess potential chemical impacts on early brain development. This model has been ex...

  20. Comparative and Predictive Multimedia Assessments Using Monte Carlo Uncertainty Analyses

    NASA Astrophysics Data System (ADS)

    Whelan, G.

    2002-05-01

    Multiple-pathway frameworks (sometimes referred to as multimedia models) provide a platform for combining medium-specific environmental models and databases so that they can be utilized in a more holistic assessment of contaminant fate and transport in the environment. These frameworks provide a relatively seamless transfer of information from one model to the next and from databases to models. Within these frameworks, multiple models are linked, resulting in models that consume information from upstream models and produce information to be consumed by downstream models. The Framework for Risk Analysis in Multimedia Environmental Systems (FRAMES) is an example, which allows users to link their models to other models and databases. FRAMES is an icon-driven, site-layout platform with an open, object-oriented architecture that interacts with environmental databases; helps the user construct a real-world-based Conceptual Site Model; allows the user to choose the most appropriate models to solve simulation requirements; solves the standard risk paradigm of release, transport and fate, and exposure/risk assessment for people and ecology; and presents graphical packages for analyzing results. FRAMES is specifically designed to allow users to link their own models into a system that contains models developed by others. This paper will present the use of FRAMES to evaluate potential human health exposures, using real site data and realistic assumptions from sources, through the vadose and saturated zones, to exposure and risk assessment at three real-world sites, using the Multimedia Environmental Pollutant Assessment System (MEPAS), a multimedia model contained within FRAMES. These real-world examples use predictive and comparative approaches coupled with a Monte Carlo analysis. 
A predictive analysis is one in which models are calibrated to monitored site data prior to the assessment; a comparative analysis is one in which models are not calibrated but are based solely on literature values or judgement, and is usually used to compare alternatives. In many cases a combination is employed, where the model is calibrated to a portion of the data (e.g., to determine hydrodynamics) and then used to compare alternatives. Three subsurface-based multimedia examples are presented, increasing in complexity: the first presents a predictive, deterministic assessment; the second a predictive and comparative Monte Carlo analysis; and the third a comparative, multi-dimensional Monte Carlo analysis. Endpoints are typically presented in terms of concentration, hazard, risk, and dose; because the vadose zone model typically represents a connection between a source and the aquifer, it does not generally represent the final medium in a multimedia risk assessment.

  1. [Health assessment and economic assessment in health: introduction to the debate on the points of intersection].

    PubMed

    Sancho, Leyla Gomes; Dain, Sulamis

    2012-03-01

    The study aims to infer the existence of a continuum between Health Assessment and Economic Assessment in Health, by highlighting points of intersection of these forms of appraisal. To achieve this, a review of the theoretical foundations, methods and approaches of both forms of assessment was conducted. It was based on the theoretical model of health evaluation as reported by Hartz et al and economic assessment in health approaches reported by Brouwer et al. It was seen that there is a continuum between the theoretical model of evaluative research and the extrawelfarist approach for economic assessment in health, and between the normative theoretical model for health assessment and the welfarist approaches for economic assessment in health. However, in practice the assessment is still conducted using the normative theoretical model and with a welfarist approach.

  2. Measuring nursing competencies in the operating theatre: instrument development and psychometric analysis using Item Response Theory.

    PubMed

    Nicholson, Patricia; Griffin, Patrick; Gillis, Shelley; Wu, Margaret; Dunning, Trisha

    2013-09-01

    The process of identifying the underlying competencies that contribute to effective nursing performance has been debated, with a lack of consensus on an approved measurement instrument for assessing clinical performance. Although a number of methodologies are noted in the development of competency-based assessment measures, these studies are not without criticism. The primary aim of the study was to develop and validate a Performance Based Scoring Rubric, which included both analytical and holistic scales. The aim included examining the validity and reliability of the rubric, which was designed to measure clinical competencies in the operating theatre. The fieldwork observations of 32 nurse educators and preceptors assessing the performance of 95 instrument nurses in the operating theatre were used in the calibration of the rubric. The Rasch model, a particular Item Response Theory model, was used to calibrate each item in the rubric in an attempt to improve the measurement properties of the scale; this is done by establishing the 'fit' of the data to the conditions demanded by the Rasch model. Acceptable reliability estimates, specifically a high Cronbach's alpha reliability coefficient (0.940), as well as empirical support for construct and criterion validity for the rubric, were achieved. Calibration of the Performance Based Scoring Rubric using the Rasch model revealed that the fit statistics for most items were acceptable. The use of the Rasch model offers a number of features for developing and refining healthcare competency-based assessments, improving confidence in measuring clinical performance. The Rasch model was shown to be useful in developing and validating a competency-based assessment for measuring the competence of the instrument nurse in the operating theatre, with implications for use in other areas of nursing practice.
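Of the reliability evidence reported above, Cronbach's alpha is straightforward to reproduce. The sketch below applies the standard formula, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score), to synthetic rubric scores (not the study's data).

```python
import numpy as np

rng = np.random.default_rng(7)

def cronbach_alpha(scores):
    """Internal-consistency reliability for an (n_persons, k_items) score matrix."""
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of the total score
    return k / (k - 1) * (1.0 - item_var / total_var)

# Simulate 95 nurses scored on a 10-item rubric that shares one ability
# factor, so items are strongly correlated and alpha comes out high.
n, k = 95, 10
ability = rng.normal(0, 1, (n, 1))
scores = ability + rng.normal(0, 0.5, (n, k))

print(round(cronbach_alpha(scores), 3))
```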

  3. REDUCING UNCERTAINTY IN AIR TOXICS RISK ASSESSMENT: A MECHANISTIC EXPOSURE-DOSE-RESPONSE (EDR) MODEL FOR ASSESSING THE ACUTE NEUROTOXICITY OF VOLATILE ORGANIC COMPOUNDS (VOCS) BASED UPON A RECEPTOR-MEDIATED MODE OF ACTION

    EPA Science Inventory

    SUMMARY: The major accomplishment of NTD’s air toxics program is the development of an exposure-dose- response model for acute exposure to volatile organic compounds (VOCs), based on momentary brain concentration as the dose metric associated with acute neurological impairments...

  4. Status of Standards Implementation in Anchorage Secondary Schools: A Concerns Based Acceptance Model (CBAM) Review, 2001-2002.

    ERIC Educational Resources Information Center

    Fenton, Ray

    The Concerns Based Acceptance Model (CBAM) has been a key element in developing and assessing the implementation of science and mathematics programs over the past 20 years. CBAM provides an organized approach to assessing where people stand as they learn about, and accept, changes in organizations. This study examined the status of the adoption of…

  5. Assessment of Professional Development for Teachers in the Vocational Education and Training Sector: An Examination of the Concerns Based Adoption Model

    ERIC Educational Resources Information Center

    Saunders, Rebecca

    2012-01-01

    The purpose of this article is to describe the use of the Concerns Based Adoption Model (Hall & Hord, 2006) as a conceptual lens and practical methodology for professional development program assessment in the vocational education and training (VET) sector. In this sequential mixed-methods study, findings from the first two phases (two of…

  6. Assessing age- and site index-independent diameter growth models of individual-tree Southern Appalachian hardwoods

    Treesearch

    Henry W. Mcnab; Thomas F. Lloyd

    1999-01-01

    Models of forest vegetation dynamics based on characteristics of individual trees are better suited to predicting growth of multiple species and age classes than stand-based models. The objective of this study was to assess age- and site index-independent relationships between periodic diameter increment and tree and site effects for 11 major hardwood tree species....

  7. Assessment of School-Based Management. [Volume I: Findings and Conclusions.] Studies of Education Reform.

    ERIC Educational Resources Information Center

    Wohlstetter, Priscilla; Mohrman, Susan Albers

    This document presents findings of the Assessment of School-Based Management Study, which identified the conditions in schools that promote high performance through school-based management (SBM). The study's conceptual framework was based on Edward E. Lawler's (1986) model. The high-involvement framework posits that four resources must spread…

  8. Creating an anthropomorphic digital MR phantom—an extensible tool for comparing and evaluating quantitative imaging algorithms

    NASA Astrophysics Data System (ADS)

    Bosca, Ryan J.; Jackson, Edward F.

    2016-01-01

    Assessing and mitigating the various sources of bias and variance associated with image quantification algorithms is essential to the use of such algorithms in clinical research and practice. Assessment is usually accomplished with grid-based digital reference objects (DRO) or, more recently, digital anthropomorphic phantoms based on normal human anatomy. Publicly available digital anthropomorphic phantoms can provide a basis for generating realistic model-based DROs that incorporate the heterogeneity commonly found in pathology. Using a publicly available vascular input function (VIF) and digital anthropomorphic phantom of a normal human brain, a methodology was developed to generate a DRO based on the general kinetic model (GKM) that represented realistic and heterogeneously enhancing pathology. GKM parameters were estimated from a deidentified clinical dynamic contrast-enhanced (DCE) MRI exam. This clinical imaging volume was co-registered with a discrete tissue model, and model parameters estimated from clinical images were used to synthesize a DCE-MRI exam that consisted of normal brain tissues and a heterogeneously enhancing brain tumor. An example application of spatial smoothing was used to illustrate potential applications in assessing quantitative imaging algorithms. A voxel-wise Bland-Altman analysis demonstrated negligible differences between the parameters estimated with and without spatial smoothing (using a small radius Gaussian kernel). In this work, we reported an extensible methodology for generating model-based anthropomorphic DROs containing normal and pathological tissue that can be used to assess quantitative imaging algorithms.
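    The voxel-wise Bland-Altman comparison described in this abstract can be sketched in a few lines. The `bland_altman` helper and the synthetic parameter maps below are illustrative assumptions, not the authors' code; real Ktrans maps from a GKM fit would take their place.

```python
import numpy as np

def bland_altman(a, b):
    """Voxel-wise Bland-Altman statistics for two parameter maps:
    mean bias and 95% limits of agreement (bias +/- 1.96 * SD of the
    voxel-wise differences)."""
    diff = np.ravel(a) - np.ravel(b)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Synthetic stand-ins for a GKM parameter (e.g. Ktrans) estimated
# without and with a small-radius Gaussian smoothing step
rng = np.random.default_rng(0)
ktrans = rng.uniform(0.05, 0.25, size=(32, 32))
ktrans_smoothed = ktrans + rng.normal(0.0, 1e-3, size=(32, 32))

bias, limits = bland_altman(ktrans, ktrans_smoothed)
# a negligible bias with narrow limits of agreement indicates the
# smoothing step does not systematically shift the estimates
```

    A "negligible difference" conclusion like the one reported corresponds to a bias near zero with tight limits of agreement.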

  9. The Relational-Behavior Model: A Pilot Assessment Study for At-Risk College Populations

    ERIC Educational Resources Information Center

    Chandler, Donald S., Jr.; Perkins, Michele D.

    2007-01-01

    This pilot study examined the relational-behavior model (RBM) as an HIV/AIDS assessment tool for at-risk college populations. Based on this theory, a survey was constructed to assess the six areas associated with HIV/AIDS prevention: personal awareness, knowledge deficiency, relational skills, HIV/STD stigmatization, community awareness, and…

  10. The Integrated Medical Model - A Risk Assessment and Decision Support Tool for Human Space Flight Missions

    NASA Technical Reports Server (NTRS)

    Kerstman, Eric; Minard, Charles G.; Saile, Lynn; FreiredeCarvalho, Mary; Myers, Jerry; Walton, Marlei; Butler, Douglas; Lopez, Vilma

    2010-01-01

    The Integrated Medical Model (IMM) is a decision support tool that is useful to space flight mission planners and medical system designers in assessing risks and optimizing medical systems. The IMM employs an evidence-based, probabilistic risk assessment (PRA) approach within the operational constraints of space flight.

  11. Assessment and Innovation: One Darn Thing Leads to Another

    ERIC Educational Resources Information Center

    Rutz, Carol; Lauer-Glebov, Jacqulyn

    2005-01-01

    Using recent experience at Carleton College in Minnesota as a case history, the authors offer a model for assessment that provides more flexibility than the well-known assessment feedback loop, which assumes a linear progression within a hierarchical administrative structure. The proposed model is based on a double helix, with values and feedback…

  12. Preparing the Dutch delta for future droughts: model based support in the national Delta Programme

    NASA Astrophysics Data System (ADS)

    ter Maat, Judith; Haasnoot, Marjolijn; van der Vat, Marnix; Hunink, Joachim; Prinsen, Geert; Visser, Martijn

    2014-05-01

    Keywords: uncertainty, policymaking, adaptive policies, fresh water management, droughts, Netherlands, Dutch Deltaprogramme, physically-based complex model, theory-motivated meta-model. To prepare the Dutch delta for future droughts and water scarcity, a nation-wide four-year project, the Delta Programme, was established to assess the impacts of climate scenarios and socio-economic developments and to explore policy options. The results should contribute to a national adaptive plan that can adapt to uncertain future conditions, if necessary. For this purpose, we followed a model-based, step-wise approach in which both physically-based complex models and theory-motivated meta-models were used. The first step (2010-2011) was to produce a quantitative problem description. This involved a sensitivity analysis of the water system for drought situations under current and future conditions. The comprehensive Dutch national hydrological instrument was used for this purpose and further developed. In the second step (2011-2012), our main focus was on making an inventory of potential actions together with stakeholders. We assessed the efficacy and sell-by date of actions, and reassessed vulnerabilities and opportunities for the future water supply system if actions were (or were not) taken. A rapid-assessment meta-model was built from the complex model, and the effects of all potential measures were included in this tool. In the third step (2012-2013), with support of the rapid-assessment model, we assessed the efficacy of policy actions over time for an ensemble of possible futures, including sea level rise and climate and land use change. The last step (2013-2014) involves the selection of preferred actions from a set of promising actions that meet the defined objectives. These actions are all modelled and evaluated using the complex model. The outcome of the process will be an adaptive management plan.
The adaptive plan describes a set of preferred policy pathways - sequences of policy actions - to achieve targets under changing conditions. The plan commits to short-term actions and identifies signpost indicators and trigger values to assess whether the next actions of the identified policy pathways need to be implemented or whether reassessment of the plan is needed. For example, river discharges could be measured to monitor changes in low discharges as a result of climate change, and to assess whether policy options such as diverting more water to the main fresh water lake (IJsselmeer) need to be implemented sooner, later or not at all. The adaptive plan of the Delta Programme will be presented in 2014. First lessons from this part of the Delta Programme can already be drawn. Both the complex model and the meta-model had their own purpose in each phase. The meta-model was particularly useful for identifying promising policy options and for consultation with stakeholders, owing to its instant response. The complex model offered many more opportunities to assess the impacts of regional policy actions, and was supported by regional stakeholders, who recognised their areas better in this model. Different sector impact assessment modules are also included in the workflow of the complex model. However, the complex model has a long runtime (about three days for a 1-year simulation, or more than 100 days for a 35-year time series simulation), which makes it less suitable for supporting the dynamic policy process on demand and interactively.

  13. Instantiating the art of war for effects-based operations

    NASA Astrophysics Data System (ADS)

    Burns, Carla L.

    2002-07-01

    Effects-Based Operations (EBO) is a mindset, a philosophy and an approach for planning, executing and assessing military operations for the effects they produce rather than the targets or even objectives they deal with. An EBO approach strives to provide economy of force, dynamic tasking, and reduced collateral damage. The notion of EBO is not new; military commanders have certainly had desired effects in mind when conducting operations. However, to date EBO has been an art of war that lacks automated techniques and tools enabling effects-based analysis and assessment. Modeling and simulation is at the heart of this challenge. The Air Force Research Laboratory (AFRL) EBO Program is developing modeling techniques and corresponding tool capabilities that can be brought to bear against the challenges presented by effects-based analysis and assessment. Effects-based course-of-action development, center of gravity/target system analysis, and wargaming capabilities are being developed and integrated to help give commanders the information and decision support required to achieve desired national security objectives. This paper presents an introduction to effects-based operations, discusses the benefits of an EBO approach, and focuses on modeling and analysis for effects-based strategy development. An overview of modeling and simulation challenges for EBO is presented, setting the stage for the detailed technical papers in the subject session.

  14. Sensitivity to Uncertainty in Asteroid Impact Risk Assessment

    NASA Astrophysics Data System (ADS)

    Mathias, D.; Wheeler, L.; Prabhu, D. K.; Aftosmis, M.; Dotson, J.; Robertson, D. K.

    2015-12-01

    The Engineering Risk Assessment (ERA) team at NASA Ames Research Center is developing a physics-based impact risk model for probabilistically assessing threats from potential asteroid impacts on Earth. The model integrates probabilistic sampling of asteroid parameter ranges with physics-based analyses of entry, breakup, and impact to estimate damage areas and casualties from various impact scenarios. Assessing these threats is a highly coupled, dynamic problem involving significant uncertainties in the range of expected asteroid characteristics, how those characteristics may affect the level of damage, and the fidelity of various modeling approaches and assumptions. The presented model is used to explore the sensitivity of impact risk estimates to these uncertainties in order to gain insight into what additional data or modeling refinements are most important for producing effective, meaningful risk assessments. In the extreme cases of very small or very large impacts, the results are generally insensitive to many of the characterization and modeling assumptions. However, the nature of the sensitivity can change across moderate-sized impacts. Results will focus on the value of additional information in this critical, mid-size range, and how this additional data can support more robust mitigation decisions.

  15. Multi-model ensembles for assessment of flood losses and associated uncertainty

    NASA Astrophysics Data System (ADS)

    Figueiredo, Rui; Schröter, Kai; Weiss-Motz, Alexander; Martina, Mario L. V.; Kreibich, Heidi

    2018-05-01

    Flood loss modelling is a crucial part of risk assessments. However, it is subject to large uncertainty that is often neglected. Most models available in the literature are deterministic, providing only single point estimates of flood loss, and large disparities tend to exist among them. Adopting any one such model in a risk assessment context is likely to lead to inaccurate loss estimates and sub-optimal decision-making. In this paper, we propose the use of multi-model ensembles to address these issues. This approach, which has been applied successfully in other scientific fields, is based on the combination of different model outputs with the aim of improving the skill and usefulness of predictions. We first propose a model rating framework to support ensemble construction, based on a probability tree of model properties, which establishes relative degrees of belief between candidate models. Using 20 flood loss models in two test cases, we then construct numerous multi-model ensembles, based both on the rating framework and on a stochastic method, differing in terms of participating members, ensemble size and model weights. We evaluate the performance of ensemble means, as well as their probabilistic skill and reliability. Our results demonstrate that well-designed multi-model ensembles represent a pragmatic approach to consistently obtain more accurate flood loss estimates and reliable probability distributions of model uncertainty.
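    The core ensemble idea described above, combining member outputs with weights that express relative degrees of belief between models, can be sketched as follows. The loss figures and weights are invented for illustration and do not come from the paper's 20 models or test cases.

```python
import numpy as np

def ensemble_mean(losses, weights=None):
    """Weighted ensemble mean of flood-loss predictions.

    losses: array-like of shape (n_models, n_events);
    weights: one relative degree of belief per model (normalised
    internally). Equal weights are used when none are given."""
    losses = np.asarray(losses, dtype=float)
    if weights is None:
        weights = np.ones(losses.shape[0])
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()          # degrees of belief must sum to one
    return w @ losses        # weighted column-wise mean

# Three hypothetical loss models evaluated on two flood events
losses = [[1.0, 4.0],
          [2.0, 6.0],
          [3.0, 8.0]]
print(ensemble_mean(losses))             # equal weights
print(ensemble_mean(losses, [2, 1, 1]))  # first model trusted more
```

    Beyond the mean, the spread of the weighted members gives the probability distribution of model uncertainty that the authors evaluate for reliability.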

  16. Interactive Rapid Dose Assessment Model (IRDAM): reactor-accident assessment methods. Vol. 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poeton, R.W.; Moeller, M.P.; Laughlin, G.J.

    1983-05-01

    As part of the continuing emphasis on emergency preparedness, the US Nuclear Regulatory Commission (NRC) sponsored the development of a rapid dose assessment system by Pacific Northwest Laboratory (PNL). This system, the Interactive Rapid Dose Assessment Model (IRDAM), is a micro-computer based program for rapidly assessing the radiological impact of accidents at nuclear power plants. This document describes the technical bases for IRDAM including methods, models and assumptions used in calculations. IRDAM calculates whole body (5-cm depth) and infant thyroid doses at six fixed downwind distances between 500 and 20,000 meters. Radionuclides considered primarily consist of noble gases and radioiodines. In order to provide a rapid assessment capability consistent with the capacity of the Osborne-1 computer, certain simplifying approximations and assumptions are made. These are described, along with default values (assumptions used in the absence of specific input), in the text of this document. Two companion volumes to this one provide additional information on IRDAM. The User's Guide (NUREG/CR-3012, Volume 1) describes the setup and operation of equipment necessary to run IRDAM. Scenarios for Comparing Dose Assessment Models (NUREG/CR-3012, Volume 3) provides the results of calculations made by IRDAM and other models for specific accident scenarios.

  17. Grading for Understanding--Standards-Based Grading

    ERIC Educational Resources Information Center

    Zimmerman, Todd

    2017-01-01

    Standards-based grading (SBG), sometimes called learning objectives-based assessment (LOBA), is an assessment model that relies on students demonstrating mastery of learning objectives (sometimes referred to as standards). The goal of this grading system is to focus students on mastering learning objectives rather than on accumulating points. I…

  18. Comparing GIS-based habitat models for applications in EIA and SEA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gontier, Mikael, E-mail: gontier@kth.se; Moertberg, Ulla, E-mail: mortberg@kth.se; Balfors, Berit, E-mail: balfors@kth.se

    Land use changes, urbanisation and infrastructure developments in particular, cause fragmentation of natural habitats and threaten biodiversity. Tools and measures must be adapted to assess and remedy the potential effects on biodiversity caused by human activities and developments. Within physical planning, environmental impact assessment (EIA) and strategic environmental assessment (SEA) play important roles in the prediction and assessment of biodiversity-related impacts from planned developments. However, adapted prediction tools to forecast and quantify potential impacts on biodiversity components are lacking. This study tested and compared four different GIS-based habitat models and assessed their relevance for applications in environmental assessment. The models were implemented in the Stockholm region in central Sweden and applied to data on the crested tit (Parus cristatus), a sedentary bird species of coniferous forest. All four models performed well and allowed the distribution of suitable habitats for the crested tit in the Stockholm region to be predicted. The models were also used to predict and quantify habitat loss for two regional development scenarios. The study highlighted the importance of model selection in impact prediction. Criteria that are relevant for the choice of model for predicting impacts on biodiversity were identified and discussed. Finally, the importance of environmental assessment for the preservation of biodiversity within the general frame of biodiversity conservation is emphasised.

  19. Modelling NO2 concentrations at the street level in the GAINS integrated assessment model: projections under current legislation

    NASA Astrophysics Data System (ADS)

    Kiesewetter, G.; Borken-Kleefeld, J.; Schöpp, W.; Heyes, C.; Thunis, P.; Bessagnet, B.; Gsella, A.; Amann, M.

    2013-08-01

    NO2 concentrations at the street level are a major concern for urban air quality in Europe and have been regulated under the EU Thematic Strategy on Air Pollution. Despite the legal requirements, limit values are exceeded at many monitoring stations with little or no improvement during recent years. In order to assess the effects of future emission control regulations on roadside NO2 concentrations, a downscaling module has been implemented in the GAINS integrated assessment model. The module follows a hybrid approach based on atmospheric dispersion calculations and observations from the AirBase European air quality database that are used to estimate site-specific parameters. Pollutant concentrations at every monitoring site with sufficient data coverage are disaggregated into contributions from regional background, urban increment, and local roadside increment. The future evolution of each contribution is assessed with a model of the appropriate scale: a 28 × 28 km grid based on the EMEP Model for the regional background, a 7 × 7 km urban increment based on the CHIMERE Chemistry Transport Model, and a chemical box model for the roadside increment. Thus, different emission scenarios and control options for long-range transport, regional and local emissions can be analysed. Observed concentrations and historical trends are well captured, in particular the differing NO2 and total NOx = NO + NO2 trends. Altogether, more than 1950 air quality monitoring stations in the EU are covered by the model, including more than 400 traffic stations and 70% of the critical stations. Together with its well-established bottom-up emission and dispersion calculation scheme, GAINS is thus able to bridge the scales from European-wide policies to impacts in street canyons. As an application of the model, we assess the evolution of attainment of NO2 limit values under current legislation until 2030. 
Strong improvements are expected with the introduction of the Euro 6 emission standard for light duty vehicles; however, for some major European cities, further measures may be required, in particular if aiming to achieve compliance at an earlier time.
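    The disaggregation idea in this record, a traffic-site concentration split into regional background, urban increment and roadside increment, each projected with a model of the matching scale, can be caricatured additively. This is a toy sketch only: the real module also handles the nonlinear NO2/NOx chemistry in the roadside box model, and every number below is invented.

```python
def project_no2(background, urban_inc, roadside_inc,
                f_regional, f_urban, f_local):
    """Toy additive projection of a roadside NO2 concentration
    (ug/m3): each contribution is scaled by the emission-change
    factor of its own spatial scale. Real hybrid downscaling also
    accounts for NO2/NOx chemistry, omitted here."""
    return (background * f_regional
            + urban_inc * f_urban
            + roadside_inc * f_local)

# Hypothetical site: 18 ug/m3 regional background, 12 urban increment,
# 25 roadside increment; local traffic NOx assumed to fall 40%
print(project_no2(18.0, 12.0, 25.0, 0.9, 0.8, 0.6))
```

    The decomposition explains why Euro 6 (a local traffic measure) improves roadside attainment strongly while leaving the regional background term to other policies.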

  20. Applicability of western chemical dietary exposure models to the Chinese population.

    PubMed

    Zhao, Shizhen; Price, Oliver; Liu, Zhengtao; Jones, Kevin C; Sweetman, Andrew J

    2015-07-01

    A range of exposure models, which have been developed in Europe and North America, are playing an increasingly important role in priority setting and the risk assessment of chemicals. However, the applicability of these tools, which are based on Western dietary exposure pathways, for estimating chemical exposure of the Chinese population to support the development of risk-based environmental and exposure assessment, is unclear. Three frequently used modelling tools, EUSES, RAIDAR and ACC-HUMANsteady, have been evaluated in terms of human dietary exposure estimation by application to a range of chemicals with different physicochemical properties under both model default and Chinese dietary scenarios. Hence, the modelling approaches were assessed by considering dietary pattern differences only. The predicted dietary exposure pathways were compared under both scenarios using a range of hypothetical and current emerging contaminants. Although the differences across models are greater than those between dietary scenarios, model predictions indicated that dietary preference can have a significant impact on human exposure, with the relatively high consumption of vegetables and cereals resulting in higher exposure via plant-based foodstuffs under Chinese consumption patterns compared to Western diets. The selected models demonstrated a good ability to identify key dietary exposure pathways, which can be used for screening purposes and an evaluative risk assessment. However, some model adaptations will be required to cover a number of important Chinese exposure pathways, such as freshwater farmed fish, grains and pork. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. Validation of a physically based catchment model for application in post-closure radiological safety assessments of deep geological repositories for solid radioactive wastes.

    PubMed

    Thorne, M C; Degnan, P; Ewen, J; Parkin, G

    2000-12-01

    The physically based river catchment modelling system SHETRAN incorporates components representing water flow, sediment transport and radionuclide transport both in solution and bound to sediments. The system has been applied to simulate hypothetical future catchments in the context of post-closure radiological safety assessments of a potential site for a deep geological disposal facility for intermediate and certain low-level radioactive wastes at Sellafield, west Cumbria. In order to have confidence in the application of SHETRAN for this purpose, various blind validation studies have been undertaken. In earlier studies, the validation was undertaken against uncertainty bounds in model output predictions set by the modelling team on the basis of how well they expected the model to perform. However, validation can also be carried out with bounds set on the basis of how well the model is required to perform in order to constitute a useful assessment tool. Herein, such an assessment-based validation exercise is reported. This exercise related to a field plot experiment conducted at Calder Hollow, west Cumbria, in which the migration of strontium and lanthanum in subsurface Quaternary deposits was studied on a length scale of a few metres. Blind predictions of tracer migration were compared with experimental results using bounds set by a small group of assessment experts independent of the modelling team. Overall, the SHETRAN system performed well, failing only two out of seven of the imposed tests. Furthermore, of the five tests that were not failed, three were positively passed even when a pessimistic view was taken as to how measurement errors should be taken into account. It is concluded that the SHETRAN system, which is still being developed further, is a powerful tool for application in post-closure radiological safety assessments.

  2. Assessment of a Business-to-Consumer (B2C) model for Telemonitoring patients with Chronic Heart Failure (CHF).

    PubMed

    Grustam, Andrija S; Vrijhoef, Hubertus J M; Koymans, Ron; Hukal, Philipp; Severens, Johan L

    2017-10-11

    The purpose of this study is to assess the Business-to-Consumer (B2C) model for telemonitoring patients with Chronic Heart Failure (CHF) by analysing the value it creates, both for organizations or ventures that provide telemonitoring services based on it, and for society. The business model assessment was based on the following categories: caveats, venture type, six-factor alignment, strategic market assessment, financial viability, valuation analysis, sustainability, societal impact, and technology assessment. The venture valuation was performed for three jurisdictions (countries) - Singapore, the Netherlands and the United States - in order to show the opportunities in a small, medium-sized, and large country (i.e. population). The business model assessment revealed that B2C telemonitoring is viable and profitable in the Innovating in Healthcare Framework. Analysis of the ecosystem revealed an average-to-excellent fit with the six factors. The structure and financing fit was average, public policy and technology alignment was good, while consumer alignment and accountability fit was deemed excellent. The financial prognosis revealed that the venture is viable and profitable in Singapore and the Netherlands but not in the United States due to relatively high salary inputs. The B2C model in telemonitoring CHF potentially creates value for patients, shareholders of the service provider, and society. However, the validity of the results could be improved, for instance by using a peer-reviewed framework, a systematic literature search, case-based cost/efficiency inputs, and varied scenario inputs.

  3. Predictive models to assess risk of type 2 diabetes, hypertension and comorbidity: machine-learning algorithms and validation using national health data from Kuwait--a cohort study.

    PubMed

    Farran, Bassam; Channanath, Arshad Mohamed; Behbehani, Kazem; Thanaraj, Thangavel Alphonse

    2013-05-14

    We build classification models and risk assessment tools for diabetes, hypertension and comorbidity using machine-learning algorithms on data from Kuwait. We model the increased proneness of diabetic patients to develop hypertension and vice versa. We ascertain the importance of ethnicity (natives vs expatriate migrants) and of using regional data in risk assessment. The design was a retrospective cohort study. Four machine-learning techniques were used: logistic regression, k-nearest neighbours (k-NN), multifactor dimensionality reduction and support vector machines. The study uses five-fold cross-validation to obtain generalisation accuracies and errors. The setting was the Kuwait Health Network (KHN), which integrates data from primary health centres and hospitals in Kuwait. Participants were 270 172 hospital visitors (of whom 89 858 are diabetic, 58 745 hypertensive and 30 522 comorbid) comprising Kuwaiti natives, Asian and Arab expatriates. Outcome measures were incident type 2 diabetes, hypertension and comorbidity. Classification accuracies of >85% (for diabetes) and >90% (for hypertension) are achieved using only simple non-laboratory-based parameters. Risk assessment tools based on k-NN classification models are able to assign 'high' risk to 75% of diabetic patients and to 94% of hypertensive patients. Only 5% of diabetic patients are assigned 'low' risk. Asian-specific models and assessments perform even better. Pathological conditions of diabetes in the general or hypertensive population, and those of hypertension, are modelled. Two-stage aggregate classification models and risk assessment tools, built by combining both component models on diabetes (or on hypertension), perform better than the individual models. Data on diabetes, hypertension and comorbidity from the cosmopolitan State of Kuwait are available for the first time. This enabled us to apply four different case-control models to assess risks. These tools aid in the preliminary non-intrusive assessment of the population. 
Ethnicity is significant in the predictive models. Risk assessments need to be developed using regional data, as we demonstrate by testing the applicability of the American Diabetes Association online calculator on data from Kuwait.
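    A k-NN risk classifier of the kind this abstract describes can be sketched minimally. The features, labels and k value below are invented for illustration and are unrelated to the KHN data; a production tool would use the study's non-laboratory parameters and cross-validated choice of k.

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Minimal k-nearest-neighbours classifier (Euclidean distance,
    majority vote), the model family behind the abstract's
    'high'/'low' risk assignments."""
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)   # distance to all samples
        nearest = y_train[np.argsort(d)[:k]]      # labels of k closest
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[np.argmax(counts)])     # majority vote
    return np.array(preds)

# Hypothetical non-laboratory features [age, BMI]; label 1 = high risk
X = np.array([[25, 22], [30, 24], [28, 23],
              [55, 31], [60, 33], [58, 30]], dtype=float)
y = np.array([0, 0, 0, 1, 1, 1])

print(knn_predict(X, y, np.array([[27.0, 23.0], [57.0, 32.0]])))  # -> [0 1]
```

    In practice features on different scales should be normalised before computing distances, and k chosen by the kind of five-fold cross-validation the study uses.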

  4. The Communication, Awareness, Relationships and Empowerment (C.A.R.E.) Model: An Effective Tool for Engaging Urban Communities in Community-Based Participatory Research.

    PubMed

    Ceasar, Joniqua; Peters-Lawrence, Marlene H; Mitchell, Valerie; Powell-Wiley, Tiffany M

    2017-11-21

    Little is known about recruitment methods for racial/ethnic minority populations from resource-limited areas for community-based health and needs assessments, particularly assessments that incorporate mobile health (mHealth) technology for characterizing physical activity and dietary intake. We examined whether the Communication, Awareness, Relationships and Empowerment (C.A.R.E.) model could reduce challenges recruiting and retaining participants from faith-based organizations in predominantly African American Washington, D.C. communities for a community-based assessment. Employing C.A.R.E. model elements, our diverse research team developed partnerships with churches, health organizations, academic institutions and governmental agencies. Through these partnerships, we cultivated a visible presence at community events, provided cardiovascular health education and remained accessible throughout the research process. Additionally, these relationships led to the creation of a community advisory board (CAB), which influenced the study's design, implementation, and dissemination. Over thirteen months, 159 individuals were recruited for the study, 99 completed the initial assessment, and 81 used mHealth technology to self-monitor physical activity over 30 days. The culturally and historically sensitive C.A.R.E. model strategically engaged CAB members and study participants. It was essential for success in recruitment and retention of an at-risk, African American population and may be an effective model for researchers hoping to engage racial/ethnic minority populations living in urban communities.

  5. Statistical analysis of earthquakes after the 1999 MW 7.7 Chi-Chi, Taiwan, earthquake based on a modified Reasenberg-Jones model

    NASA Astrophysics Data System (ADS)

    Chen, Yuh-Ing; Huang, Chi-Shen; Liu, Jann-Yenq

    2015-12-01

    We investigated the temporal-spatial hazard of the earthquakes after the 1999 September 21 MW = 7.7 Chi-Chi shock in a continental region of Taiwan. The Reasenberg-Jones (RJ) model (Reasenberg and Jones, 1989, 1994), which combines the frequency-magnitude distribution (Gutenberg and Richter, 1944) with a time-decaying occurrence rate (Utsu et al., 1995), is conventionally employed for assessing the earthquake hazard after a large shock. However, we found that the b values in the frequency-magnitude distribution of the earthquakes in the study region dramatically decreased from background values after the Chi-Chi shock, and then gradually increased. This observation of a time-dependent frequency-magnitude distribution motivated us to propose a modified RJ model (MRJ) for assessing the earthquake hazard. To see how the models perform in assessing short-term earthquake hazard, the RJ and MRJ models were separately used to sequentially forecast earthquakes in the study region. To depict the potential rupture area for future earthquakes, we further constructed relative hazard (RH) maps based on the two models. Receiver Operating Characteristic (ROC) curves (Swets, 1988) finally demonstrated that the RH map based on the MRJ model was, in general, superior to the one based on the original RJ model for exploring the spatial hazard of earthquakes in a short time after the Chi-Chi shock.
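    The standard RJ combination of the Gutenberg-Richter distribution with Omori-type decay gives an aftershock rate of the form lambda(t, M) = 10^(a + b(Mm - M)) / (t + c)^p, where Mm is the mainshock magnitude and t is the time since it. A sketch of that baseline model follows; the parameter values are generic illustrative defaults, not values fitted to the Chi-Chi sequence, and the time-dependent b(t) that defines the authors' MRJ variant is not included.

```python
def rj_rate(t, mag, mainshock_mag, a=-1.67, b=0.91, c=0.05, p=1.08):
    """Reasenberg-Jones aftershock rate: expected number per day of
    aftershocks with magnitude >= mag, t days after a mainshock of
    magnitude mainshock_mag. Combines the Gutenberg-Richter
    frequency-magnitude law (a, b) with Omori-type temporal decay
    (c, p). Defaults are illustrative, not fitted values."""
    return 10.0 ** (a + b * (mainshock_mag - mag)) / (t + c) ** p

# expected rate of M>=4 aftershocks, 1 day vs 10 days after an Mw 7.7 shock
print(rj_rate(1.0, 4.0, 7.7), rj_rate(10.0, 4.0, 7.7))
```

    The MRJ modification amounts to letting b vary with time since the mainshock, which the authors found necessary because b dropped sharply after the Chi-Chi shock before recovering.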

  6. Application of process mining to assess the data quality of routinely collected time-based performance data sourced from electronic health records by validating process conformance.

    PubMed

    Perimal-Lewis, Lua; Teubner, David; Hakendorf, Paul; Horwood, Chris

    2016-12-01

    Effective and accurate use of routinely collected health data to produce Key Performance Indicator reporting is dependent on the underlying data quality. In this research, Process Mining methodology and tools were leveraged to assess the data quality of time-based Emergency Department data sourced from electronic health records. This research was done working closely with the domain experts to validate the process models. The hospital patient journey model was used to assess flow abnormalities which resulted from incorrect timestamp data used in time-based performance metrics. The research demonstrated process mining as a feasible methodology to assess data quality of time-based hospital performance metrics. The insight gained from this research enabled appropriate corrective actions to be put in place to address the data quality issues. © The Author(s) 2015.
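    The conformance-validation idea in this record, checking that recorded timestamps follow the expected patient-journey order, can be illustrated with a toy check. The event names and timestamps are invented; real process mining tooling discovers and checks models over full event logs rather than a fixed event list.

```python
from datetime import datetime

# Assumed, simplified ED journey; a real model is mined from event logs
EXPECTED_ORDER = ["arrival", "triage", "seen_by_doctor", "departure"]

def conforms(journey):
    """Return True if the journey's recorded timestamps respect the
    expected process order -- a tiny stand-in for process-mining
    conformance checking of time-based performance data."""
    times = [journey[e] for e in EXPECTED_ORDER if e in journey]
    return all(t1 <= t2 for t1, t2 in zip(times, times[1:]))

ok = {"arrival": datetime(2015, 3, 1, 9, 0),
      "triage": datetime(2015, 3, 1, 9, 10),
      "departure": datetime(2015, 3, 1, 12, 0)}
bad = {"arrival": datetime(2015, 3, 1, 9, 0),
       "triage": datetime(2015, 3, 1, 8, 50),   # triage before arrival
       "departure": datetime(2015, 3, 1, 12, 0)}

print(conforms(ok), conforms(bad))  # -> True False
```

    Journeys flagged as non-conformant point at exactly the incorrect timestamps that would otherwise corrupt time-based KPI reporting.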

  7. An Overview of GIS-Based Modeling and Assessment of Mining-Induced Hazards: Soil, Water, and Forest

    PubMed Central

    Kim, Sung-Min; Yi, Huiuk; Choi, Yosoon

    2017-01-01

    In this study, current geographic information system (GIS)-based methods and their application to the modeling and assessment of mining-induced hazards were reviewed. Various types of mining-induced hazard, including soil contamination, soil erosion, water pollution, and deforestation, were considered in the discussion of the strength and role of GIS as a viable problem-solving tool for mining-induced hazards. The various types of mining-induced hazard were classified into two or three subtopics according to the steps involved in the reclamation procedure, or the elements of the hazard of interest. Because GIS is well suited to handling geospatial data related to mining-induced hazards, GIS-based modeling and assessment of mining-induced hazards could be applied more widely within the mining industry. PMID:29186922

  8. An Overview of GIS-Based Modeling and Assessment of Mining-Induced Hazards: Soil, Water, and Forest.

    PubMed

    Suh, Jangwon; Kim, Sung-Min; Yi, Huiuk; Choi, Yosoon

    2017-11-27

    In this study, current geographic information system (GIS)-based methods and their application to the modeling and assessment of mining-induced hazards were reviewed. Various types of mining-induced hazard, including soil contamination, soil erosion, water pollution, and deforestation, were considered in the discussion of the strength and role of GIS as a viable problem-solving tool for mining-induced hazards. The various types of mining-induced hazard were classified into two or three subtopics according to the steps involved in the reclamation procedure, or the elements of the hazard of interest. Because GIS is well suited to handling geospatial data related to mining-induced hazards, GIS-based modeling and assessment of mining-induced hazards could be applied more widely within the mining industry.

  9. Toward refined environmental scenarios for ecological risk assessment of down-the-drain chemicals in freshwater environments.

    PubMed

    Franco, Antonio; Price, Oliver R; Marshall, Stuart; Jolliet, Olivier; Van den Brink, Paul J; Rico, Andreu; Focks, Andreas; De Laender, Frederik; Ashauer, Roman

    2017-03-01

    Current regulatory practice for chemical risk assessment suffers from a lack of realism in conventional frameworks. Despite significant advances in exposure and ecological effect modeling, the implementation of novel approaches as high-tier options for prospective regulatory risk assessment remains limited, particularly for general chemicals such as down-the-drain ingredients. While reviewing the current state of the art in environmental exposure and ecological effect modeling, we propose a scenario-based framework that enables a better integration of exposure and effect assessments in a tiered approach. Global- to catchment-scale spatially explicit exposure models can be used to identify areas of higher exposure and to generate ecologically relevant exposure information for input into effect models. Numerous examples of mechanistic ecological effect models demonstrate that it is technically feasible to extrapolate from individual-level effects to effects at higher levels of biological organization, and from laboratory to environmental conditions. However, the data required to parameterize effect models that can embrace the complexity of ecosystems are extensive and require a targeted approach. Experimental efforts should, therefore, focus on vulnerable species and/or traits and on ecological conditions of relevance. We outline key research needs to address the challenges that currently hinder the practical application of advanced model-based approaches to risk assessment of down-the-drain chemicals. Integr Environ Assess Manag 2017;13:233-248. © 2016 SETAC.

  10. Accuracy of Digital Impressions and Fitness of Single Crowns Based on Digital Impressions

    PubMed Central

    Yang, Xin; Lv, Pin; Liu, Yihong; Si, Wenjie; Feng, Hailan

    2015-01-01

    In this study, the accuracy (precision and trueness) of digital impressions and the fit of single crowns manufactured from digital impressions were evaluated. Epoxy resin dentition models of teeth #14-17 were made, with full-crown preparations of extracted natural teeth embedded at position #16. (1) To assess precision, deviations among repeated scan models made by the intraoral scanners TRIOS and MHT and the model scanners D700 and inEos were calculated through a best-fit algorithm and three-dimensional (3D) comparison; root mean square (RMS) values and color-coded difference images are reported. (2) To assess trueness, micro computed tomography (micro-CT) was used to obtain the reference model (REF), and deviations between REF and the repeated scan models from (1) were calculated. (3) To assess fit, single crowns were manufactured based on the TRIOS, MHT, D700 and inEos scan models, and the adhesive gaps were evaluated under a stereomicroscope after cross-sectioning. Digital impressions showed lower precision but better trueness. Except for MHT, the mean RMS values for precision were lower than 10 μm. Digital impressions also showed better internal fit. The fit of single crowns based on digital impressions met the clinical standard, so digital impressions could be an alternative method for manufacturing single crowns. PMID:28793417
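
The precision comparison above reduces repeated scans to RMS deviations. A minimal sketch of the RMS computation, assuming the models are already best-fit aligned and sampled at corresponding points (real comparisons use nearest-surface distances after ICP-style alignment):

```python
import math

def rms_deviation(test_pts, ref_pts):
    """Root-mean-square of point-to-point distances between a scan
    model and a reference model, both given as lists of (x, y, z)
    coordinates in the same units (e.g. micrometres). Assumes the
    models are already best-fit aligned with one-to-one points."""
    sq = [sum((a - b) ** 2 for a, b in zip(p, q))
          for p, q in zip(test_pts, ref_pts)]
    return math.sqrt(sum(sq) / len(sq))

# two points, one displaced by 2 units along z
print(rms_deviation([(0, 0, 0), (0, 0, 2)], [(0, 0, 0), (0, 0, 0)]))
```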

  11. Protein model quality assessment prediction by combining fragment comparisons and a consensus Cα contact potential

    PubMed Central

    Zhou, Hongyi; Skolnick, Jeffrey

    2009-01-01

    In this work, we develop a fully automated method for the quality assessment prediction of protein structural models generated by structure prediction approaches such as fold recognition servers or ab initio methods. The approach is based on fragment comparisons and a consensus Cα contact potential derived from the set of models to be assessed, and was tested on CASP7 server models. The average Pearson linear correlation coefficient between predicted quality and model GDT-score per target is 0.83 over the 98 targets, which is better than that of the other quality assessment methods that participated in CASP7. Our method also outperforms the other methods by about 3%, as assessed by the total GDT-score of the selected top models. PMID:18004783

  12. Protocol for Reliability Assessment of Structural Health Monitoring Systems Incorporating Model-assisted Probability of Detection (MAPOD) Approach

    DTIC Science & Technology

    2011-09-01

    ...a quality evaluation with limited data, a model-based assessment must be... ...that affect system performance, a multistage approach to system validation, a modeling and experimental methodology for efficiently addressing a wide range...

  13. Use of conventional fishery models to assess entrainment and impingement of three Lake Michigan fish species

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jensen, A.L.; Spigarelli, J.A.; Thommes, M.M.

    1982-01-01

    Two conventional fishery stock assessment models, the surplus-production model and the dynamic-pool model, were applied to assess the impacts of water withdrawals by electricity-generating plants, industries, and municipalities on the standing stocks and yields of alewife Alosa pseudoharengus, rainbow smelt Osmerus mordax, and yellow perch Perca flavescens in Lake Michigan. Impingement and entrainment estimates were based on data collected at 15 power plants. The surplus-production model was fitted to the three populations with catch and effort data from the commercial fisheries. Dynamic-pool model parameters were estimated from published data. The numbers entrained and impinged are large, but the proportions of the standing stocks impinged and the proportions of the eggs and larvae entrained are small. The reductions in biomass of the stocks and in maximum sustainable yields are larger than the proportions impinged. The reductions in biomass, based on 1975 data and an assumed full water withdrawal, are 2.86% for alewife, 0.76% for rainbow smelt, and 0.28% for yellow perch. Fishery models are an economical means of impact assessment in situations where catch and effort data are available for estimation of model parameters.
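
The surplus-production model referenced above is typically the Schaefer form, whose equilibrium yield at a given fishing effort has a closed expression. A sketch with illustrative parameter values (not the Lake Michigan estimates):

```python
def schaefer_equilibrium_yield(E, r=0.5, K=100000.0, q=1e-4):
    """Equilibrium yield of the Schaefer surplus-production model
        dB/dt = r*B*(1 - B/K) - q*E*B,
    which gives Y(E) = q*E*K*(1 - q*E/r). Here r is intrinsic growth
    rate, K carrying capacity, q catchability, E fishing effort.
    All parameter values are illustrative defaults."""
    return q * E * K * (1.0 - q * E / r)

# maximum sustainable yield occurs at E_msy = r / (2*q), where Y = r*K/4
E_msy = 0.5 / (2 * 1e-4)
print(schaefer_equilibrium_yield(E_msy))  # 12500.0
```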

  14. A multicriteria decision making model for assessment and selection of an ERP in a logistics context

    NASA Astrophysics Data System (ADS)

    Pereira, Teresa; Ferreira, Fernanda A.

    2017-07-01

    The aim of this work is to apply a decision-support methodology based on a multicriteria decision analysis (MCDA) model that allows a group decision maker (GDM) to assess and select an Enterprise Resource Planning (ERP) system in a Portuguese logistics company. A decision support system (DSS) implementing an MCDA approach, the Multicriteria Methodology for the Assessment and Selection of Information Systems / Information Technologies (MMASSI/IT), is used for its features and the ease with which the model can be changed and adapted to a given scope. Using this DSS, the information system best suited to the decision context was obtained, and the result was evaluated through a sensitivity and robustness analysis.

  15. A no-reference video quality assessment metric based on ROI

    NASA Astrophysics Data System (ADS)

    Jia, Lixiu; Zhong, Xuefei; Tu, Yan; Niu, Wenjuan

    2015-01-01

    A no-reference video quality assessment metric based on the region of interest (ROI) is proposed in this paper. In the metric, objective video quality is evaluated by integrating the quality of two compression artifacts, blurring distortion and blocking distortion. A Gaussian kernel function was used to extract human density maps for the H.264-coded videos from subjective eye-tracking data. An objective bottom-up ROI extraction model was built from the magnitude discrepancy of the discrete wavelet transform between two consecutive frames, a center-weighted color opponent model, a luminance contrast model, and a frequency saliency model based on spectral residual. Only the objective saliency maps were then used to compute the objective blurring and blocking quality. The results indicate that the objective ROI extraction metric has a higher area under the curve (AUC) value. Compared with conventional video quality assessment metrics that measure every frame, the proposed metric not only decreases computational complexity but also improves the correlation between subjective mean opinion scores (MOS) and objective scores.

  16. Assessment and Requirements of Nuclear Reaction Databases for GCR Transport in the Atmosphere and Structures

    NASA Technical Reports Server (NTRS)

    Cucinotta, F. A.; Wilson, J. W.; Shinn, J. L.; Tripathi, R. K.

    1998-01-01

    The transport properties of galactic cosmic rays (GCR) in the atmosphere, material structures, and the human body (self-shielding) are of interest in risk assessment for supersonic and subsonic aircraft and for space travel in low-Earth orbit and on interplanetary missions. Nuclear reactions, such as knockout and fragmentation, produce large modifications of the particle types and energies of the galactic cosmic rays penetrating materials. We assess the current nuclear reaction models and improvements in these models for developing the required transport-code databases. A new fragmentation database (QMSFRG) based on microscopic models is compared to the NUCFRG2 model, and implications for shield assessment are drawn using the HZETRN radiation transport code. For deep penetration problems, the build-up of light particles, such as nucleons, light clusters and mesons from nuclear reactions, in conjunction with the absorption of the heavy ions, leads to the dominance of the charge Z = 0, 1, and 2 hadrons in the exposures at large penetration depths. Light particles are produced through nuclear or cluster knockout and in evaporation events with characteristically distinct spectra, which play unique roles in the build-up of secondary radiations in shielding. We describe models of light particle production in nucleon- and heavy-ion-induced reactions and assess the importance of light particle multiplicity and spectral parameters in these exposures.

  17. Benchmark dose analysis via nonparametric regression modeling

    PubMed Central

    Piegorsch, Walter W.; Xiong, Hui; Bhattacharya, Rabi N.; Lin, Lizhen

    2013-01-01

    Estimation of benchmark doses (BMDs) in quantitative risk assessment traditionally is based upon parametric dose-response modeling. It is a well-known concern, however, that if the chosen parametric model is uncertain and/or misspecified, inaccurate and possibly unsafe low-dose inferences can result. We describe a nonparametric approach for estimating BMDs with quantal-response data based on an isotonic regression method, and also study use of corresponding, nonparametric, bootstrap-based confidence limits for the BMD. We explore the confidence limits’ small-sample properties via a simulation study, and illustrate the calculations with an example from cancer risk assessment. It is seen that this nonparametric approach can provide a useful alternative for BMD estimation when faced with the problem of parametric model uncertainty. PMID:23683057
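
The isotonic-regression BMD idea above can be sketched with the pool-adjacent-violators (PAV) algorithm and a linear read-off of the dose at the benchmark response. The paper's estimator and bootstrap confidence limits are more involved, and the data here are hypothetical:

```python
def isotonic_fit(doses, responses):
    """Pool-adjacent-violators: least-squares nondecreasing fit of
    response proportions against dose (equal weights per dose group)."""
    blocks = [[r, 1] for r in responses]      # [mean, weight] per block
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] > blocks[i + 1][0]:   # violation: pool the blocks
            w = blocks[i][1] + blocks[i + 1][1]
            m = (blocks[i][0] * blocks[i][1]
                 + blocks[i + 1][0] * blocks[i + 1][1]) / w
            blocks[i] = [m, w]
            del blocks[i + 1]
            i = max(i - 1, 0)                 # re-check backwards
        else:
            i += 1
    fitted = []
    for m, w in blocks:
        fitted.extend([m] * w)
    return fitted

def bmd(doses, fitted, bmr):
    """Smallest dose (linear interpolation) where the isotonic fit
    exceeds the benchmark response bmr above background."""
    target = fitted[0] + bmr
    for (d0, f0), (d1, f1) in zip(zip(doses, fitted),
                                  zip(doses[1:], fitted[1:])):
        if f1 >= target > f0:
            return d0 + (target - f0) / (f1 - f0) * (d1 - d0)
    return None

fit = isotonic_fit([0, 1, 2, 4], [0.05, 0.02, 0.20, 0.40])
print(fit, bmd([0, 1, 2, 4], fit, 0.1))
```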

  18. Curriculum-Based Measurement: Developing a Computer-Based Assessment Instrument for Monitoring Student Reading Progress on Multiple Indicators

    ERIC Educational Resources Information Center

    Forster, Natalie; Souvignier, Elmar

    2011-01-01

    The purpose of this study was to examine the technical adequacy of a computer-based assessment instrument which is based on hierarchical models of text comprehension for monitoring student reading progress following the Curriculum-Based Measurement (CBM) approach. At intervals of two weeks, 120 third-grade students finished eight CBM tests. To…

  19. DYNAMIC EVALUATION OF REGIONAL AIR QUALITY MODELS: ASSESSING CHANGES TO O 3 STEMMING FROM CHANGES IN EMISSIONS AND METEOROLOGY

    EPA Science Inventory

    Regional-scale air quality models are used to estimate the response of air pollutants to potential emission control strategies as part of the decision-making process. Traditionally, the model predicted pollutant concentrations are evaluated for the “base case” to assess a model’s...

  20. Assessing Argumentative Representation with Bayesian Network Models in Debatable Social Issues

    ERIC Educational Resources Information Center

    Zhang, Zhidong; Lu, Jingyan

    2014-01-01

    This study seeks to obtain argumentation models, which represent argumentative processes and an assessment structure, in secondary school debatable issues in the social sciences. The argumentation model was developed based on mixed methods, a combination of both theory-driven and data-driven methods. The coding system provided a combining point by…

  1. COMPARISON OF THE USE OF A PHYSIOLOGICALLY-BASED PHARMACOKINETIC MODEL AND A CLASSICAL PHARMACOKINETIC MODEL FOR DIOXIN EXPOSURE ASSESSMENTS

    EPA Science Inventory

    In epidemiological studies, exposure assessments to TCDD, known as a possible human carcinogen, assume mono or biphasic elimination rates. Recent data suggests a dose dependent elimination rate for TCDD. A PBPK model, which uses a body burden dependent elimination rate, was dev...

  2. Assessment of the scale effect on statistical downscaling quality at a station scale using a weather generator-based model

    USDA-ARS?s Scientific Manuscript database

    The resolution of General Circulation Models (GCMs) is too coarse to assess the fine scale or site-specific impacts of climate change. Downscaling approaches including dynamical and statistical downscaling have been developed to meet this requirement. As the resolution of climate model increases, it...

  3. The Design of an Instructional Model Based on Connectivism and Constructivism to Create Innovation in Real World Experience

    ERIC Educational Resources Information Center

    Jirasatjanukul, Kanokrat; Jeerungsuwan, Namon

    2018-01-01

    The objectives of the research were to (1) design an instructional model based on Connectivism and Constructivism to create innovation in real world experience, (2) assess the model designed--the designed instructional model. The research involved 2 stages: (1) the instructional model design and (2) the instructional model rating. The sample…

  4. Evaluating Computer-Based Assessment in a Risk-Based Model

    ERIC Educational Resources Information Center

    Zakrzewski, Stan; Steven, Christine; Ricketts, Chris

    2009-01-01

    There are three purposes for evaluation: evaluation for action to aid the decision making process, evaluation for understanding to further enhance enlightenment and evaluation for control to ensure compliance to standards. This article argues that the primary function of evaluation in the "Catherine Wheel" computer-based assessment (CBA)…

  5. Development of an integrated generic model for multi-scale assessment of the impacts of agro-ecosystems on major ecosystem services in West Africa.

    PubMed

    Belem, Mahamadou; Saqalli, Mehdi

    2017-11-01

    This paper presents an integrated model assessing the impacts of climate change, agro-ecosystem and demographic transition patterns on major ecosystem services in West Africa, alongside a partial overview of economic aspects (poverty reduction, food self-sufficiency and income generation). The model is an agent-based model coupled with a soil model and a multi-scale spatial model. The resulting Model for West-Africa Agro-Ecosystem Integrated Assessment (MOWASIA) is ecologically generic, meaning it is designed for all sudano-sahelian environments, and may be used as an experimentation facility for testing different scenarios combining ecological and socioeconomic dimensions. A case study in Burkina Faso is examined to assess the environmental and economic performance of semi-continuous and continuous farming systems. Results show that the semi-continuous system, using organic fertilizer and fallowing practices, contributes more to environmental preservation and food security than the economically stronger continuous system. In addition, this study showed that farmer heterogeneity could play an important role in agricultural policy planning and assessment, and that MOWASIA is an effective tool for designing agro-ecosystems and analysing their impacts. Copyright © 2017. Published by Elsevier Ltd.

  6. Monitoring scale scores over time via quality control charts, model-based approaches, and time series techniques.

    PubMed

    Lee, Yi-Hsuan; von Davier, Alina A

    2013-07-01

    Maintaining a stable score scale over time is critical for all standardized educational assessments. Traditional quality control tools and approaches for assessing scale drift either require special equating designs, or may be too time-consuming to be considered on a regular basis with an operational test that has a short time window between an administration and its score reporting. Thus, the traditional methods are not sufficient to catch unusual testing outcomes in a timely manner. This paper presents a new approach for score monitoring and assessment of scale drift. It involves quality control charts, model-based approaches, and time series techniques to accommodate the following needs of monitoring scale scores: continuous monitoring, adjustment of customary variations, identification of abrupt shifts, and assessment of autocorrelation. Performance of the methodologies is evaluated using manipulated data based on real responses from 71 administrations of a large-scale high-stakes language assessment.
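
One of the quality-control-chart components above can be sketched as a Shewhart individuals chart: control limits are computed from an in-control baseline of administration means, then new administrations falling outside them are flagged. The data are hypothetical, and this is only one of the monitoring tools the paper combines with model-based and time-series methods:

```python
def shewhart_flags(baseline, new_means, k=3.0):
    """Flag indices of new administration mean scores that fall outside
    mean +/- k*sd control limits computed from an in-control baseline."""
    n = len(baseline)
    mean = sum(baseline) / n
    sd = (sum((s - mean) ** 2 for s in baseline) / (n - 1)) ** 0.5
    lo, hi = mean - k * sd, mean + k * sd
    return [i for i, s in enumerate(new_means) if s < lo or s > hi]

history = [100, 101, 99, 100, 102, 98, 100, 101, 99, 100]
print(shewhart_flags(history, [100.5, 107.0, 99.2]))  # [1]
```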

  7. The development of performance-based practical assessment model at civil engineering workshop in state polytechnic

    NASA Astrophysics Data System (ADS)

    Kristinayanti, W. S.; Mas Pertiwi, I. G. A. I.; Evin Yudhi, S.; Lokantara, W. D.

    2018-01-01

    Assessment is an important element in education that should evaluate students' competence not only in cognitive terms but also in psychomotor terms, in a comprehensive way. The Civil Engineering Department at Bali State Polytechnic, as a vocational education institution, emphasizes not only the theoretical foundation of the study but also its application through practicums in workshop-based learning. Aware of the need for performance-based assessment of these students, which is essential for their all-round performance, we developed a performance-based practicum assessment model to assess student ability in workshop-based learning. This research was conducted in three stages: 1) learning needs analysis, 2) instrument development, and 3) instrument testing. The study uses a rubric set-up to test students' competence in the workshop and to test validity. We obtained 34 valid statements out of 35, and a Cronbach's alpha of 0.977. In the expert test we obtained a CVI of 0.75, which means that the drafted assessment is empirically valid within the trial group.
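
The reported Cronbach's alpha of 0.977 is the standard internal-consistency statistic. A minimal computation from item-by-respondent scores (hypothetical data, not the study's rubric responses):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns, where
    items[i][j] is respondent j's score on item i:
        alpha = k/(k-1) * (1 - sum(item variances)/variance(totals))."""
    k, n = len(items), len(items[0])

    def var(xs):                       # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = sum(var(it) for it in items)
    totals = [sum(items[i][j] for i in range(k)) for j in range(n)]
    return k / (k - 1) * (1 - item_vars / var(totals))

# perfectly consistent items give alpha = 1.0
print(cronbach_alpha([[1, 2, 3], [1, 2, 3]]))
```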

  8. Manipulating the Geometric Computer-aided Design of the Operational Requirements-based Casualty Assessment Model within BRL-CAD

    DTIC Science & Technology

    2018-03-30

    ARL-TR-8336 ● MAR 2018 ● US Army Research Laboratory. Manipulating the Geometric Computer-aided Design of the Operational Requirements-based Casualty Assessment Model within BRL-CAD.

  9. Taming Log Files from Game/Simulation-Based Assessments: Data Models and Data Analysis Tools. Research Report. ETS RR-16-10

    ERIC Educational Resources Information Center

    Hao, Jiangang; Smith, Lawrence; Mislevy, Robert; von Davier, Alina; Bauer, Malcolm

    2016-01-01

    Extracting information efficiently from game/simulation-based assessment (G/SBA) logs requires two things: a well-structured log file and a set of analysis methods. In this report, we propose a generic data model specified as an extensible markup language (XML) schema for the log files of G/SBAs. We also propose a set of analysis methods for…

  10. GEO Collisional Risk Assessment Based on Analysis of NASA-WISE Data and Modeling

    DTIC Science & Technology

    2015-10-18

    GEO Collisional Risk Assessment Based on Analysis of NASA-WISE Data and Modeling. Jeremy Murray Krezan(1), Samantha Howard(1), Phan D. Dao(1), Derek...Surka(2); (1) AFRL Space Vehicles Directorate, (2) Applied Technology Associates Incorporated. From December 2009 through 2011 the NASA Wide-Field Infrared...of known debris. The NASA-WISE GEO belt debris population adds potentially thousands of previously uncataloged objects. This paper describes

  11. Constructing an Integrated and Evidenced-Based Model for Residential Services

    ERIC Educational Resources Information Center

    Metzger, Jed

    2006-01-01

    There is paucity in both the literature and in the practice of integrated, evidence-based models of residential care for youth. This article describes the assessment and the process that led to the redesign of services at a residential center. The article describes how evidence-based models for each of the four major disciplines (residential…

  12. Linear stiff string vibrations in musical acoustics: Assessment and comparison of models.

    PubMed

    Ducceschi, Michele; Bilbao, Stefan

    2016-10-01

    Strings are amongst the most common elements found in musical instruments, and an appropriate physical description of string dynamics is essential to modelling, analysis, and simulation. For linear vibration in a single polarisation, the most common model is based on the Euler-Bernoulli beam equation under tension. In spite of its simple form, such a model gives unbounded phase and group velocities at large wavenumbers, and such behaviour may be interpreted as unphysical. The Timoshenko model has, therefore, been employed in more recent works to overcome this shortcoming. This paper presents a third model based on the shear beam equations. The three models are assessed and compared with regard to perceptual considerations in musical acoustics.
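
The unbounded phase velocity noted above follows directly from the Euler-Bernoulli stiff-string dispersion relation omega^2 = (T/rhoA)k^2 + (EI/rhoA)k^4. A sketch with illustrative, roughly string-like parameter values (not taken from the paper):

```python
import math

def phase_velocity(k, T=60.0, rho_A=0.0063, EI=1.78e-4):
    """Phase velocity c(k) = omega/k of the Euler-Bernoulli stiff
    string rhoA*u_tt = T*u_xx - EI*u_xxxx, whose dispersion relation
    omega^2 = (T/rhoA)*k^2 + (EI/rhoA)*k^4 gives
    c(k) = sqrt(T/rhoA + (EI/rhoA)*k^2). Parameter values are
    illustrative placeholders."""
    return math.sqrt(T / rho_A + (EI / rho_A) * k ** 2)

# c(k) grows without bound with wavenumber -- the "unphysical" behaviour
print(phase_velocity(10.0), phase_velocity(1000.0))
```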

  13. Toxicokinetics/toxicodynamics links bioavailability for assessing arsenic uptake and toxicity in three aquaculture species.

    PubMed

    Chen, Wei-Yu; Liao, Chung-Min

    2012-11-01

    The purpose of this study was to link toxicokinetics/toxicodynamics (TK/TD) and bioavailability-based metal uptake kinetics to assess arsenic (As) uptake and bioaccumulation in three common farmed species: tilapia (Oreochromis mossambicus), milkfish (Chanos chanos), and freshwater clam (Corbicula fluminea). We developed a mechanistic framework linking a damage assessment model (DAM) and a bioavailability-based Michaelis-Menten model to describe TK/TD and As uptake mechanisms. The proposed model was verified against published acute toxicity data. The estimated TK/TD parameters were used to simulate the relationship between bioavailable As uptake and susceptibility probability. As toxicity was also evaluated with a constructed elimination-recovery scheme. Absorption rate constants were estimated to be 0.025, 0.016, and 0.175 mL g(-1) h(-1), and As uptake rate constant estimates were 22.875, 63.125, and 788.318 ng g(-1) h(-1) for tilapia, milkfish, and freshwater clam, respectively. We show that a potential trade-off exists between the capacities for As elimination and damage recovery among the three farmed species. Moreover, the susceptibility probability can be estimated from the elimination-recovery relations. This study suggests that bioavailability-based uptake kinetics and a TK/TD-based DAM can be integrated to assess metal uptake and toxicity in aquatic organisms, which is useful for quantitatively assessing the complex environmental behavior of metal uptake and for risk assessment of metals in aquaculture systems.
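
The bioavailability-based Michaelis-Menten uptake term saturates with dissolved concentration. A minimal sketch with placeholder parameters (not the fitted species-specific constants from the study):

```python
def mm_uptake_rate(C, k_max, K_m):
    """Michaelis-Menten metal uptake: influx saturates with dissolved
    concentration C as v = k_max * C / (K_m + C), where k_max is the
    maximum uptake rate and K_m the half-saturation concentration.
    Both parameters here are generic placeholders."""
    return k_max * C / (K_m + C)

# at C = K_m the uptake rate is exactly half of k_max
print(mm_uptake_rate(5.0, 10.0, 5.0))  # 5.0
```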

  14. Assessment of cardiovascular risk based on a data-driven knowledge discovery approach.

    PubMed

    Mendes, D; Paredes, S; Rocha, T; Carvalho, P; Henriques, J; Cabiddu, R; Morais, J

    2015-01-01

    The cardioRisk project addresses the development of personalized risk assessment tools for patients admitted to hospital with acute myocardial infarction. Although models are available that assess the short-term risk of death or new events for such patients, these models were established in circumstances that do not take present clinical interventions into account and, in some cases, rely on risk factors that are not easily available in clinical practice. Integrating the existing risk tools (applied in clinicians' daily practice) with data-driven knowledge discovery mechanisms based on data routinely collected during hospitalizations would be a breakthrough in overcoming some of these difficulties, and the development of simple, interpretable models based on recent datasets will facilitate and build confidence in this integration process. In this work, a simple and interpretable model based on a real dataset is proposed. It consists of a decision tree structure that uses a reduced set of six binary risk factors. Validation is performed using a recent dataset provided by the Portuguese Society of Cardiology (11113 patients), which originally comprised 77 risk factors. A sensitivity, specificity, and accuracy of 80.42%, 77.25%, and 78.80%, respectively, were achieved, showing the effectiveness of the approach.
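
The sensitivity, specificity, and accuracy reported for the decision-tree model are confusion-matrix ratios. A minimal computation on hypothetical binary outcomes:

```python
def diagnostic_metrics(y_true, y_pred):
    """Sensitivity, specificity and accuracy for binary outcomes
    (1 = event, 0 = no event), as used to validate a classifier
    such as the decision-tree risk model."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn)       # true-positive rate
    specificity = tn / (tn + fp)       # true-negative rate
    accuracy = (tp + tn) / len(y_true)
    return sensitivity, specificity, accuracy

print(diagnostic_metrics([1, 1, 0, 0], [1, 0, 0, 1]))  # (0.5, 0.5, 0.5)
```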

  15. PHOTOTOXIC POLYCYCLIC AROMATIC HYDROCARBONS IN SEDIMENTS: A MODEL-BASED APPROACH FOR ASSESSING RISK

    EPA Science Inventory

    Over the past five years we have developed a number of models which will be combined in an integrated framework with chemical-monitoring information to assess the potential for widespread risk of phototoxic PAHs in sediments.

  16. A TEST OF WATERSHED CLASSIFICATION SYSTEMS FOR ECOLOGICAL RISK ASSESSMENT

    EPA Science Inventory

    To facilitate extrapolation among watersheds, ecological risk assessments should be based on a model of underlying factors influencing watershed response, particularly vulnerability. We propose a conceptual model of landscape vulnerability to serve as a basis for watershed classi...

  17. Using integrated environmental modeling to automate a process-based Quantitative Microbial Risk Assessment

    EPA Science Inventory

    Integrated Environmental Modeling (IEM) organizes multidisciplinary knowledge that explains and predicts environmental-system response to stressors. A Quantitative Microbial Risk Assessment (QMRA) is an approach integrating a range of disparate data (fate/transport, exposure, an...

  18. Using Integrated Environmental Modeling to Automate a Process-Based Quantitative Microbial Risk Assessment (presentation)

    EPA Science Inventory

    Integrated Environmental Modeling (IEM) organizes multidisciplinary knowledge that explains and predicts environmental-system response to stressors. A Quantitative Microbial Risk Assessment (QMRA) is an approach integrating a range of disparate data (fate/transport, exposure, and...

  19. A CONSISTENT APPROACH FOR THE APPLICATION OF PHARMACOKINETIC MODELING IN CANCER RISK ASSESSMENT

    EPA Science Inventory

    Physiologically based pharmacokinetic (PBPK) modeling provides important capabilities for improving the reliability of the extrapolations across dose, species, and exposure route that are generally required in chemical risk assessment regardless of the toxic endpoint being consid...

  20. Parameters for Pyrethroid Insecticide QSAR and PBPK/PD Models for Human Risk Assessment

    EPA Science Inventory

    This pyrethroid insecticide parameter review is an extension of our interest in developing quantitative structure–activity relationship–physiologically based pharmacokinetic/pharmacodynamic (QSAR-PBPK/PD) models for assessing health risks, which interest started with the organoph...

  1. Predicting paddlefish roe yields using an extension of the Beverton–Holt equilibrium yield-per-recruit model

    USGS Publications Warehouse

    Colvin, M.E.; Bettoli, Phillip William; Scholten, G.D.

    2013-01-01

    Equilibrium yield models predict the total biomass removed from an exploited stock; however, traditional yield models must be modified to simulate roe yields because a linear relationship between age (or length) and mature ovary weight does not typically exist. We extended the traditional Beverton-Holt equilibrium yield model to predict roe yields of Paddlefish Polyodon spathula in Kentucky Lake, Tennessee-Kentucky, as a function of varying conditional fishing mortality rates (10-70%), conditional natural mortality rates (cm; 9% and 18%), and four minimum size limits ranging from 864 to 1,016 mm eye-to-fork length. These results were then compared to a biomass-based yield assessment. Analysis of roe yields indicated the potential for growth overfishing at lower exploitation rates and smaller minimum length limits than were suggested by the biomass-based assessment. Patterns of biomass and roe yields in relation to exploitation rates were similar regardless of the simulated value of cm, thus indicating that the results were insensitive to changes in cm. Our results also suggested that higher minimum length limits would increase roe yield and reduce the potential for growth overfishing and recruitment overfishing at the simulated cm values. Biomass-based equilibrium yield assessments are commonly used to assess the effects of harvest on other caviar-based fisheries; however, our analysis demonstrates that such assessments likely underestimate the probability and severity of growth overfishing when roe is targeted. Therefore, equilibrium roe yield-per-recruit models should also be considered to guide the management process for caviar-producing fish species.
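
A Beverton-Holt-style yield-per-recruit calculation can be sketched in discrete age classes. This simplified stand-in (hypothetical weights-at-age and mortality values) illustrates the biomass-based computation, not the roe-yield extension the paper develops:

```python
import math

def yield_per_recruit(F, M, weights, t_c):
    """Discrete age-structured yield per recruit: one recruit enters
    age class 0, suffers natural mortality M throughout, and is also
    fished at instantaneous rate F from age class t_c onward.
    weights[a] is mean weight at age a (hypothetical values)."""
    ypr, survivors = 0.0, 1.0
    for age, w in enumerate(weights):
        Z = (F + M) if age >= t_c else M          # fishing starts at t_c
        caught = survivors * (F / Z) * (1 - math.exp(-Z)) if age >= t_c else 0.0
        ypr += caught * w                         # yield from this age class
        survivors *= math.exp(-Z)                 # survival to next age
    return ypr

w = [0.5, 1.0, 1.5, 2.0, 2.2]                     # hypothetical kg-at-age
print(yield_per_recruit(0.3, 0.2, w, 2))
```

Scanning F and t_c over grids of values is how such models map out growth overfishing, analogous to the mortality-rate and size-limit scenarios in the study.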

  2. Demonstration of the Web-based Interspecies Correlation Estimation (Web-ICE) modeling application

    EPA Science Inventory

    The Web-based Interspecies Correlation Estimation (Web-ICE) modeling application is available to the risk assessment community through a user-friendly internet platform (http://epa.gov/ceampubl/fchain/webice/). ICE models are log-linear least square regressions that predict acute...
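The ICE idea described above, a log-linear least-squares regression that predicts acute toxicity for one species from a tested surrogate species, can be sketched minimally as follows. The fitting routine and toy data here are illustrative only and are not Web-ICE's actual models or coefficients.

```python
import math

def fit_loglinear(x, y):
    """Ordinary least squares for log10(y) = a + b * log10(x)."""
    lx = [math.log10(v) for v in x]
    ly = [math.log10(v) for v in y]
    n = len(lx)
    mx = sum(lx) / n
    my = sum(ly) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(lx, ly))
         / sum((xi - mx) ** 2 for xi in lx))
    a = my - b * mx
    return a, b

def predict(a, b, surrogate_tox):
    """Back-transform the log-linear prediction to the original scale."""
    return 10 ** (a + b * math.log10(surrogate_tox))

# Toy calibration: predicted-species LC50s exactly 10x the surrogate's
a, b = fit_loglinear([1.0, 10.0, 100.0], [10.0, 100.0, 1000.0])
```

With this toy data the fit recovers intercept 1 and slope 1 on the log scale, so `predict(a, b, 5.0)` returns 50.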

  3. Assessing the applicability of template-based protein docking in the twilight zone.

    PubMed

    Negroni, Jacopo; Mosca, Roberto; Aloy, Patrick

    2014-09-02

    The structural modeling of protein interactions in the absence of close homologous templates is a challenging task. Recently, template-based docking methods have emerged to exploit local structural similarities and help ab-initio protocols provide reliable 3D models for protein interactions. In this work, we critically assess the performance of template-based docking in the twilight zone. Our results show that, while it is possible to find templates for nearly all known interactions, the quality of the obtained models is rather limited. We can increase the precision of the models at the expense of coverage, but this drastically reduces the potential applicability of the method, as illustrated by the whole-interactome modeling of nine organisms. Template-based docking is likely to play an important role in the structural characterization of the interaction space, but we still need to improve the repertoire of structural templates onto which we can reliably model protein complexes. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Updating the Duplex Design for Test-Based Accountability in the Twenty-First Century

    ERIC Educational Resources Information Center

    Bejar, Isaac I.; Graf, E. Aurora

    2010-01-01

    The duplex design by Bock and Mislevy for school-based testing is revisited and evaluated as a potential platform in test-based accountability assessments today. We conclude that the model could be useful in meeting the many competing demands of today's test-based accountability assessments, although many research questions will need to be…

  5. Revisiting the Affect Regulation Model of Binge Eating: A Meta-Analysis of Studies Using Ecological Momentary Assessment

    ERIC Educational Resources Information Center

    Haedt-Matt, Alissa A.; Keel, Pamela K.

    2011-01-01

    The affect regulation model of binge eating, which posits that patients binge eat to reduce negative affect (NA), has received support from cross-sectional and laboratory-based studies. Ecological momentary assessment (EMA) involves momentary ratings and repeated assessments over time and is ideally suited to identify temporal antecedents and…

  6. The aggregate timberland assessment system—ATLAS: a comprehensive timber projection model.

    Treesearch

    J.R. Mills; J.C. Kincaid

    1992-01-01

    The aggregate timberland assessment system is a time-based deterministic timber projection model. It was developed by the USDA Forest Service to address broad policy questions related to future timber supplies for the 1989 Renewable Resources Planning Act timber assessment. An open framework design allows for customizing inputs to account for regional and subregional...

  7. Validation of a Cognitive Diagnostic Model across Multiple Forms of a Reading Comprehension Assessment

    ERIC Educational Resources Information Center

    Clark, Amy K.

    2013-01-01

    The present study sought to fit a cognitive diagnostic model (CDM) across multiple forms of a passage-based reading comprehension assessment using the attribute hierarchy method. Previous research on CDMs for reading comprehension assessments served as a basis for the attributes in the hierarchy. The two attribute hierarchies were fit to data from…

  8. [Training of health personnel in the framework of humanitarian action. Choosing an assessment model].

    PubMed

    Marchand, C; Gagnayre, R; d'Ivernois, J F

    1996-01-01

    There are very few examples of health training assessment in developing countries. Such an undertaking faces a number of difficulties concerning the problems inherent to assessment, the particular and unstable nature of the environment, and the problems associated with humanitarian action and development aid. It is difficult to choose between a formal and a natural approach. Indeed, a dual approach, combining quantitative and qualitative data seems best suited to a variety of cultural contexts of variable stability. Faced with these difficulties, a criteria-based, formative, quality-oriented assessment aimed at improving teaching and learning methods should be able to satisfy the needs of training professionals. We propose a training assessment guide based on an assessment model which aims to improve training techniques using comprehensive, descriptive and prescriptive approaches.

  9. A Standardized Generalized Dimensionality Discrepancy Measure and a Standardized Model-Based Covariance for Dimensionality Assessment for Multidimensional Models

    ERIC Educational Resources Information Center

    Levy, Roy; Xu, Yuning; Yel, Nedim; Svetina, Dubravka

    2015-01-01

    The standardized generalized dimensionality discrepancy measure and the standardized model-based covariance are introduced as tools to critique dimensionality assumptions in multidimensional item response models. These tools are grounded in a covariance theory perspective and associated connections between dimensionality and local independence.…

  10. Integrated Technology Assessment Center (ITAC) Update

    NASA Technical Reports Server (NTRS)

    Taylor, J. L.; Neely, M. A.; Curran, F. M.; Christensen, E. R.; Escher, D.; Lovell, N.; Morris, Charles (Technical Monitor)

    2002-01-01

    The Integrated Technology Assessment Center (ITAC) has developed a flexible systems analysis framework to identify long-term technology needs, quantify payoffs for technology investments, and assess the progress of ASTP-sponsored technology programs in the hypersonics area. For this, ITAC has assembled an experienced team representing a broad sector of the aerospace community and developed a systematic assessment process complete with supporting tools. Concepts for transportation systems are selected based on relevance to the ASTP, and integrated concept models (ICM) of these concepts are developed. Key technologies of interest are identified, and projections are made of their characteristics with respect to their impacts on key aspects of the specific concepts of interest. Both the models and technology projections are then fed into ITAC's probabilistic systems analysis framework in ModelCenter. This framework permits rapid sensitivity analysis, single-point design assessment, and a full probabilistic assessment of each concept with respect to both embedded and enhancing technologies. Probabilistic outputs are weighed against metrics of interest to the ASTP using a multivariate decision-making process to provide inputs for technology prioritization within the ASTP. The ITAC program is currently finishing the assessment of a two-stage-to-orbit (TSTO), rocket-based combined cycle (RBCC) concept and a TSTO turbine-based combined cycle (TBCC) concept developed by the team with inputs from NASA. A baseline all-rocket TSTO concept is also being developed for comparison. Boeing has recently submitted a performance model for its Flexible Aerospace System Solution for Tomorrow (FASST) concept, and the ISAT program will provide inputs for a single-stage-to-orbit (SSTO) TBCC-based concept in the near term. Both of these latter concepts will be analyzed within the ITAC framework over the summer. This paper provides a status update of the ITAC program.

  11. A combined triggering-propagation modeling approach for the assessment of rainfall induced debris flow susceptibility

    NASA Astrophysics Data System (ADS)

    Stancanelli, Laura Maria; Peres, David Johnny; Cancelliere, Antonino; Foti, Enrico

    2017-07-01

    Rainfall-induced shallow slides can evolve into debris flows that move rapidly downstream with devastating consequences. Mapping debris flow susceptibility is an important aid for risk mitigation. We propose a novel practical approach to derive debris flow inundation maps useful for susceptibility assessment, based on the integrated use of DEM-based, spatially distributed hydrological and slope stability models with debris flow propagation models. More specifically, the TRIGRS infiltration and infinite-slope stability model and the FLO-2D model for the simulation of the related debris flow propagation and deposition are combined. An empirical instability-to-debris-flow triggering threshold, calibrated on the basis of observed events, links the two models and determines the amount of unstable mass that develops into a debris flow. The proposed methodology is calibrated against data from the debris flow event that occurred on 1 October 2009 in the Peloritani mountains area (Italy). Model performance, assessed by receiver operating characteristic (ROC) indexes, shows fairly good reproduction of the observed event. A comparison with the traditional debris flow modeling procedure, in which sediment and water hydrographs are input as lumped at selected points on top of the streams, is also performed in order to quantify the limitations of that commonly applied approach. Results show that the proposed method, besides being more process-consistent than the traditional hydrograph-based approach, can potentially provide a more accurate simulation of debris flow phenomena, in terms of spatial patterns of erosion and deposition as well as the quantification of mobilized volumes and depths, avoiding overestimation of the debris flow triggering volume and, thus, of maximum inundation flow depths.
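The ROC-style comparison of simulated versus observed inundation reduces to a cell-by-cell confusion matrix over two binary maps. A minimal sketch (toy maps, not the study's grids):

```python
def roc_indexes(observed, predicted):
    """True-positive and false-positive rates from paired binary (0/1)
    cell maps, e.g. observed vs. simulated debris flow inundation."""
    tp = sum(1 for o, p in zip(observed, predicted) if o and p)
    fp = sum(1 for o, p in zip(observed, predicted) if not o and p)
    fn = sum(1 for o, p in zip(observed, predicted) if o and not p)
    tn = sum(1 for o, p in zip(observed, predicted) if not o and not p)
    tpr = tp / (tp + fn) if tp + fn else 0.0   # hit rate
    fpr = fp / (fp + tn) if fp + tn else 0.0   # false alarm rate
    return tpr, fpr

# Toy 4-cell maps: one hit, one miss, one false alarm, one correct negative
tpr, fpr = roc_indexes([1, 1, 0, 0], [1, 0, 1, 0])
```

A good susceptibility map pushes the (fpr, tpr) point toward the upper-left corner of ROC space.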

  12. Clinical risk assessment of patients with chronic kidney disease by using clinical data and multivariate models.

    PubMed

    Chen, Zewei; Zhang, Xin; Zhang, Zhuoyong

    2016-12-01

    Timely risk assessment of chronic kidney disease (CKD) and proper community-based CKD monitoring are important to protect patients with potential risk from further kidney injury. Because many symptoms are associated with the progressive development of CKD, evaluating CKD risk from a set of clinical symptom data coupled with multivariate models is a viable method for CKD prevention and would be useful for community-based CKD monitoring. Three commonly used multivariate models, i.e., K-nearest neighbor (KNN), support vector machine (SVM), and soft independent modeling of class analogy (SIMCA), were used to evaluate the risk of 386 patients based on a series of clinical data taken from the UCI machine learning repository. Composite data sets, in which proportional disturbances were added to simulate measurement deviations caused by environmental and instrument noise, were also used to evaluate the feasibility and robustness of these models in CKD risk assessment. For the original data set, the three multivariate models differentiated patients with CKD from those without it with overall accuracies over 93%. KNN and SVM performed better than SIMCA in this study. For the composite data set, the SVM model tolerated noise disturbance best and is thus more robust than the other two models. Using a clinical symptom data set coupled with multivariate models proved to be a feasible approach for assessing patients with potential CKD risk, and the SVM model can serve as a useful and robust tool for this purpose.
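Of the three classifiers compared, KNN is simple enough to sketch from scratch. The toy version below (made-up two-feature points and labels, not the UCI CKD data, and presumably the study used standard library implementations) shows the majority-vote idea:

```python
from collections import Counter

def knn_predict(train_x, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest training
    points (squared Euclidean distance)."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)), label)
        for x, label in zip(train_x, train_y)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy training set: two clusters standing in for CKD / non-CKD symptom vectors
tx = [(0.0, 0.0), (0.0, 1.0), (5.0, 5.0), (6.0, 5.0)]
ty = ['non-CKD', 'non-CKD', 'CKD', 'CKD']
```

A query near the (5, 5) cluster, such as `(5.0, 6.0)`, is voted `'CKD'`; one near the origin is voted `'non-CKD'`.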

  13. Integrating pixel- and polygon-based approaches to wildfire risk assessment: Application to a high-value watershed on the Pike and San Isabel National Forests, Colorado, USA

    Treesearch

    Matthew P. Thompson; Julie W. Gilbertson-Day; Joe H. Scott

    2015-01-01

    We develop a novel risk assessment approach that integrates complementary, yet distinct, spatial modeling approaches currently used in wildfire risk assessment. Motivation for this work stems largely from limitations of existing stochastic wildfire simulation systems, which can generate pixel-based outputs of fire behavior as well as polygon-based outputs of simulated...

  14. The Zero Suicide Model: Applying Evidence-Based Suicide Prevention Practices to Clinical Care

    PubMed Central

    Brodsky, Beth S.; Spruch-Feiner, Aliza; Stanley, Barbara

    2018-01-01

    Suicide is reaching epidemic proportions, with over 44,000 deaths by suicide in the US, and 800,000 worldwide, in 2015. This is despite the research and development of evidence-based interventions that target suicidal behavior directly. Suicide prevention efforts need a comprehensive approach, and research must lead to effective implementation across public and mental health systems. A 10-year systematic review of evidence-based findings in suicide prevention summarized the areas necessary for translating research into practice. These include risk assessment, means restriction, evidence-based treatments, population screening combined with chain of care, monitoring, and follow-up. In this article, we review how suicide prevention research informs implementation in clinical settings where those most at risk present for care. Evidence-based and best practices address the fluctuating nature of suicide risk, which requires ongoing risk assessment, direct intervention and monitoring. In the US, the National Action Alliance for Suicide Prevention has put forth the Zero Suicide (ZS) Model, a framework to coordinate a multilevel approach to implementing evidence-based practices. We present the Assess, Intervene and Monitor for Suicide Prevention model (AIM-SP) as a guide for implementation of ZS evidence-based and best practices in clinical settings. Ten basic steps of the clinical management model are described and illustrated through a case vignette. These steps are designed to be easily incorporated into standard clinical practice to enhance suicide risk assessment, provide brief interventions to increase safety and teach coping strategies, and improve ongoing contact and monitoring of high-risk individuals during transitions in care and high-risk periods. PMID:29527178

  15. Chapter 8: US geological survey Circum-Arctic Resource Appraisal (CARA): Introduction and summary of organization and methods

    USGS Publications Warehouse

    Charpentier, R.R.; Gautier, D.L.

    2011-01-01

    The USGS has assessed undiscovered petroleum resources in the Arctic through geological mapping, basin analysis and quantitative assessment. The new map compilation provided the base from which geologists subdivided the Arctic for burial history modelling and quantitative assessment. The CARA was a probabilistic, geologically based study that used existing USGS methodology, modified somewhat for the circumstances of the Arctic. The assessment relied heavily on analogue modelling, with numerical input as lognormal distributions of sizes and numbers of undiscovered accumulations. Probabilistic results for individual assessment units were statistically aggregated taking geological dependencies into account. Fourteen papers in this Geological Society volume present summaries of various aspects of the CARA. © 2011 The Geological Society of London.

  16. Conceptual Models and Guidelines for Clinical Assessment of Financial Capacity

    PubMed Central

    Marson, Daniel

    2016-01-01

    The ability to manage financial affairs is a life skill of critical importance, and neuropsychologists are increasingly asked to assess financial capacity across a variety of settings. Sound clinical assessment of financial capacity requires knowledge and appreciation of applicable clinical conceptual models and principles. However, the literature has presented relatively little conceptual guidance for clinicians concerning financial capacity and its assessment. This article seeks to address this gap. It presents six clinical models of financial capacity: (1) the early gerontological IADL model of Lawton, (2) the clinical skills model and (3) the related cognitive psychological model developed by Marson and colleagues, (4) a financial decision-making model adapting the earlier decisional capacity work of Appelbaum and Grisso, (5) a person-centered model of financial decision-making developed by Lichtenberg and colleagues, and (6) a recent model of financial capacity in the real world developed through the Institute of Medicine. The presentation of the models is accompanied by discussion of the conceptual and practical perspectives they represent for clinical assessment. Based on the models, the article concludes by presenting a series of conceptually oriented guidelines for clinical assessment of financial capacity. In summary, sound assessment of financial capacity requires knowledge and appreciation of clinical conceptual models and principles; awareness of such models, principles, and guidelines will strengthen and advance clinical assessment of financial capacity. PMID:27506235

  17. Alternative Assessment Methods Based on Categorizations, Supporting Technologies, and a Model for Betterment

    ERIC Educational Resources Information Center

    Ben-Jacob, Marion G.; Ben-Jacob, Tyler E.

    2014-01-01

    This paper explores alternative assessment methods from the perspective of categorizations. It addresses the technologies that support assessment. It discusses initial, formative, and summative assessment, as well as objective and subjective assessment, and formal and informal assessment. It approaches each category of assessment from the…

  18. Ecological risk assessment conceptual model formulation for nonindigenous species.

    PubMed

    Landis, Wayne G

    2004-08-01

    This article addresses the application of ecological risk assessment at the regional scale to the prediction of impacts due to invasive or nonindigenous species (NIS). The first section describes risk assessment, the decision-making process, and introduces regional risk assessment. A general conceptual model for the risk assessment of NIS is then presented based upon the regional risk assessment approach. Two diverse examples of the application of this approach are presented. The first example is based upon the dynamics of introduced plasmids into bacteria populations. The second example is the application risk assessment approach to the invasion of a coastal marine site of Cherry Point, Washington, USA by the European green crab. The lessons learned from the two examples demonstrate that assessment of the risks of invasion of NIS will have to incorporate not only the characteristics of the invasive species, but also the other stresses and impacts affecting the region of interest.

  19. The Genetics Panel of the NAS BEAR I Committee (1956): epistolary evidence suggests self-interest may have prompted an exaggeration of radiation risks that led to the adoption of the LNT cancer risk assessment model.

    PubMed

    Calabrese, Edward J

    2014-09-01

    This paper extends a series of historical papers which demonstrated that the linear-no-threshold (LNT) model for cancer risk assessment was founded on ideological-based scientific deceptions by key radiation genetics leaders. Based on an assessment of recently uncovered personal correspondence, it is shown that some members of the United States (US) National Academy of Sciences (NAS) Biological Effects of Atomic Radiation I (BEAR I) Genetics Panel were motivated by self-interest to exaggerate risks to promote their science and personal/professional agenda. Such activities have profound implications for public policy and may have had a significant impact on the adoption of the LNT model for cancer risk assessment.

  20. NATIONAL-SCALE ASSESSMENT OF AIR TOXICS RISKS ...

    EPA Pesticide Factsheets

    The national-scale assessment of air toxics risks is a modeling assessment which combines emission inventory development, atmospheric fate and transport modeling, exposure modeling, and risk assessment to characterize the risk associated with inhaling air toxics from outdoor sources. This national-scale effort will be initiated for the base year 1996 and repeated every three years thereafter to track trends and inform program development. It will provide a broad-scale understanding of inhalation risks for a subset of atmospherically emitted air toxics, to inform further data-gathering efforts and priority-setting for the EPA's Air Toxics Programs.

  1. Information security system quality assessment through the intelligent tools

    NASA Astrophysics Data System (ADS)

    Trapeznikov, E. V.

    2018-04-01

    Technological development has demonstrated the need for comprehensive information security analysis of automated systems, and analysis of the subject area indicates the relevance of this study. The research objective is to develop a methodology for assessing the quality of an information security system using intelligent tools. The basis of the methodology is a model that assesses information security in the information system by means of a neural network. The paper presents the security assessment model and its algorithm, and the practical implementation of the methodology is represented in the form of a software flow diagram. The conclusions note the practical significance of the model being developed.

  2. Use of Decision Models in the Development of Evidence-Based Clinical Preventive Services Recommendations: Methods of the U.S. Preventive Services Task Force.

    PubMed

    Owens, Douglas K; Whitlock, Evelyn P; Henderson, Jillian; Pignone, Michael P; Krist, Alex H; Bibbins-Domingo, Kirsten; Curry, Susan J; Davidson, Karina W; Ebell, Mark; Gillman, Matthew W; Grossman, David C; Kemper, Alex R; Kurth, Ann E; Maciosek, Michael; Siu, Albert L; LeFevre, Michael L

    2016-10-04

    The U.S. Preventive Services Task Force (USPSTF) develops evidence-based recommendations about preventive care based on comprehensive systematic reviews of the best available evidence. Decision models provide a complementary, quantitative approach to support the USPSTF as it deliberates about the evidence and develops recommendations for clinical and policy use. This article describes the rationale for using modeling, an approach to selecting topics for modeling, and how modeling may inform recommendations about clinical preventive services. Decision modeling is useful when clinical questions remain about how to target an empirically established clinical preventive service at the individual or program level or when complex determinations of magnitude of net benefit, overall or among important subpopulations, are required. Before deciding whether to use decision modeling, the USPSTF assesses whether the benefits and harms of the preventive service have been established empirically, assesses whether there are key issues about applicability or implementation that modeling could address, and then defines the decision problem and key questions to address through modeling. Decision analyses conducted for the USPSTF are expected to follow best practices for modeling. For chosen topics, the USPSTF assesses the strengths and limitations of the systematically reviewed evidence and the modeling analyses and integrates the results of each to make preventive service recommendations.

  3. Physiologically-based pharmacokinetic (PBPK) modeling to explore potential metabolic pathways of bromochloromethane in rats.

    EPA Science Inventory

    Bromochloromethane (BCM) is a volatile organic compound and a by-product of disinfection of water by chlorination. Physiologically based pharmacokinetic (PBPK) models are used in risk assessment applications and a PBPK model for BCM, Updated with F-344 specific input parameters,...

  4. Using integrated environmental modeling to automate a process-based Quantitative Microbial Risk Assessment

    USDA-ARS?s Scientific Manuscript database

    Integrated Environmental Modeling (IEM) organizes multidisciplinary knowledge that explains and predicts environmental-system response to stressors. A Quantitative Microbial Risk Assessment (QMRA) is an approach integrating a range of disparate data (fate/transport, exposure, and human health effect...

  5. Fish Consumption Advisories: Toward a Unified, Scientifically Credible Approach

    EPA Science Inventory

    A model is proposed for fish consumption advisories based on consensus-derived risk assessment values for common contaminants in fish and the latest risk assessment methods. he model accounts in part for the expected toxicity to mixtures of chemicals, the underlying uncertainties...

  6. Integrated performance and reliability specification for digital avionics systems

    NASA Technical Reports Server (NTRS)

    Brehm, Eric W.; Goettge, Robert T.

    1995-01-01

    This paper describes an automated tool for performance and reliability assessment of digital avionics systems, called the Automated Design Tool Set (ADTS). ADTS is based on an integrated approach to design assessment that unifies traditional performance and reliability views of system designs, and that addresses interdependencies between performance and reliability behavior via the exchange of parameters and results between mathematical models of each type. A multi-layer tool set architecture has been developed for ADTS that separates the concerns of system specification, model generation, and model solution. Performance and reliability models are generated automatically as a function of candidate system designs, and model results are expressed within the system specification. The layered approach helps deal with the inherent complexity of the design assessment process and preserves long-term flexibility to accommodate a wide range of models and solution techniques within the tool set structure. ADTS research and development to date has focused on development of a language for specification of system designs as a basis for performance and reliability evaluation. A model generation and solution framework has also been developed for ADTS that will ultimately encompass an integrated set of analytic and simulation-based techniques for performance, reliability, and combined design assessment.

  7. Spaceflight tracking and data network operational reliability assessment for Skylab

    NASA Technical Reports Server (NTRS)

    Seneca, V. I.; Mlynarczyk, R. H.

    1974-01-01

    Data on the spaceflight communications equipment status during the Skylab mission were subjected to an operational reliability assessment. Reliability models were revised to reflect pertinent equipment changes accomplished prior to the beginning of the Skylab missions, and appropriate adjustments were made to fit the data to the models. Availabilities are based on failure events resulting in a station's inability to support a function or functions, and MTBFs are based on all events, including 'can support' and 'cannot support'. Data were received from eleven land-based stations and one ship.
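The availability and MTBF figures such an assessment reports follow from two standard reliability formulas, sketched below with made-up numbers (not the Skylab station data):

```python
def availability(mtbf_hours, mttr_hours):
    """Steady-state availability: fraction of time the station can support,
    from mean time between failures and mean time to repair."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def mtbf(total_operating_hours, failure_events):
    """Mean time between failures over an observation window, counting
    all recorded failure events."""
    if failure_events == 0:
        return float('inf')
    return total_operating_hours / failure_events

# Toy figures: 990 h up, 10 h down -> 99% available; 4 failures in 1000 h
a = availability(990.0, 10.0)
m = mtbf(1000.0, 4)
```

The distinction in the abstract maps onto these two inputs: availability counts only 'cannot support' outages as downtime, while the MTBF denominator counts every failure event.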

  8. Advanced reliability modeling of fault-tolerant computer-based systems

    NASA Technical Reports Server (NTRS)

    Bavuso, S. J.

    1982-01-01

    Two methodologies for the reliability assessment of fault-tolerant, digital computer-based systems are discussed. The computer-aided reliability estimation 3 (CARE 3) and gate logic software simulation (GLOSS) assessment technologies were developed to mitigate a serious weakness in the design and evaluation process of ultrareliable digital systems. The weak link is the unavailability of a sufficiently powerful modeling technique for comparing the stochastic attributes of one system against others. Some of the more interesting attributes are reliability, system survival, safety, and mission success.

  9. Grid Transmission Expansion Planning Model Based on Grid Vulnerability

    NASA Astrophysics Data System (ADS)

    Tang, Quan; Wang, Xi; Li, Ting; Zhang, Quanming; Zhang, Hongli; Li, Huaqiang

    2018-03-01

    Based on grid vulnerability and uniformity theory, a global network structure and state vulnerability factor model is proposed to measure different grid models, and a multi-objective power grid planning model is established that considers global power network vulnerability, economy, and grid security constraints. An improved chaos crossover and mutation genetic algorithm is used to find the optimal plan. In multi-objective optimization, the objectives are not uniform in dimension and weights are not easily assigned, so a principal component analysis (PCA) method is used for comprehensive assessment of the population in every generation, making the assessment results more objective and credible. The feasibility and effectiveness of the proposed model are validated by simulation results for the Garver 6-bus and Garver 18-bus systems.

  10. A Resource-Based Modelling Framework to Assess Habitat Suitability for Steppe Birds in Semiarid Mediterranean Agricultural Systems

    PubMed Central

    Cardador, Laura; De Cáceres, Miquel; Bota, Gerard; Giralt, David; Casas, Fabián; Arroyo, Beatriz; Mougeot, François; Cantero-Martínez, Carlos; Moncunill, Judit; Butler, Simon J.; Brotons, Lluís

    2014-01-01

    European agriculture is undergoing widespread changes that are likely to have profound impacts on farmland biodiversity. The development of tools that allow an assessment of the potential biodiversity effects of different land-use alternatives before changes occur is fundamental to guiding management decisions. In this study, we develop a resource-based model framework to estimate habitat suitability for target species, according to simple information on species’ key resource requirements (diet, foraging habitat and nesting site), and examine whether it can be used to link land-use and local species’ distribution. We take as a study case four steppe bird species in a lowland area of the north-eastern Iberian Peninsula. We also compare the performance of our resource-based approach to that obtained through habitat-based models relating species’ occurrence and land-cover variables. Further, we use our resource-based approach to predict the effects that changes in farming systems can have on farmland bird habitat suitability and compare these predictions with those obtained using the habitat-based models. Habitat suitability estimates generated by our resource-based models performed similarly to habitat-based models (and better for one study species) when predicting current species distribution. Moderate prediction success was achieved for three out of four species considered by resource-based models and for two of four by habitat-based models. Although there is potential for improving the performance of resource-based models, they provide a structure for using available knowledge of the functional links between agricultural practices, provision of key resources and the response of organisms to predict potential effects of changing land-uses in a variety of contexts, or the impacts of changes such as altered management practices that are not easily incorporated into habitat-based models. PMID:24667825

  11. Scientific white paper on concentration-QTc modeling.

    PubMed

    Garnett, Christine; Bonate, Peter L; Dang, Qianyu; Ferber, Georg; Huang, Dalong; Liu, Jiang; Mehrotra, Devan; Riley, Steve; Sager, Philip; Tornoe, Christoffer; Wang, Yaning

    2018-06-01

    The International Council for Harmonisation revised the E14 guideline through the questions and answers process to allow concentration-QTc (C-QTc) modeling to be used as the primary analysis for assessing the QTc interval prolongation risk of new drugs. A well-designed and conducted QTc assessment based on C-QTc modeling in early phase 1 studies can be an alternative approach to a thorough QT study for some drugs to reliably exclude clinically relevant QTc effects. This white paper provides recommendations on how to plan and conduct a definitive QTc assessment of a drug using C-QTc modeling in early phase clinical pharmacology and thorough QT studies. Topics included are: important study design features in a phase 1 study; modeling objectives and approach; exploratory plots; the pre-specified linear mixed effects model; general principles for model development and evaluation; and expectations for modeling analysis plans and reports. The recommendations are based on current best modeling practices, scientific literature and personal experiences of the authors. These recommendations are expected to evolve as their implementation during drug development provides additional data and with advances in analytical methodology.
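The pre-specified C-QTc analysis in the white paper is a linear mixed effects model; the sketch below illustrates only the core slope-based projection with a simplified fixed-effects fit on toy data (not a regulatory-grade analysis, and the concentrations and QTc changes are invented):

```python
def fit_cqtc(conc, dqtc):
    """Ordinary least squares for dQTc = intercept + slope * concentration.
    (A real C-QTc analysis would use a linear *mixed* effects model with
    subject-level random effects; this is a simplified illustration.)"""
    n = len(conc)
    mc = sum(conc) / n
    mq = sum(dqtc) / n
    slope = (sum((c - mc) * (q - mq) for c, q in zip(conc, dqtc))
             / sum((c - mc) ** 2 for c in conc))
    intercept = mq - slope * mc
    return intercept, slope

def predicted_dqtc_at_cmax(intercept, slope, cmax):
    """Project the QTc effect at a reference concentration, e.g. Cmax."""
    return intercept + slope * cmax

# Toy data: dQTc rises 2 ms per unit concentration from a 1 ms baseline
intercept, slope = fit_cqtc([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```

In practice the regulatory question is whether the upper confidence bound of the projected effect at clinically relevant exposures excludes a ~10 ms QTc prolongation, which requires the full mixed-model machinery rather than this point estimate.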

  12. A Model for Program-Wide Assessment of the Effectiveness of Writing Instruction in Science Laboratory Courses

    ERIC Educational Resources Information Center

    Saitta, Erin K.; Zemliansky, Pavel; Turner, Anna

    2015-01-01

    The authors present a model for program-wide assessment of the effectiveness of writing instruction in a chemistry laboratory course. This model, which involves collaboration between faculty from chemistry, the Writing Across the Curriculum (WAC) program, and the Faculty Center for Teaching and Learning, is based on several theories and…

  13. Development and assessment of a physics-based simulation model to investigate residential PM2.5 infiltration across the US housing stock

    EPA Science Inventory

    The Lawrence Berkeley National Laboratory Population Impact Assessment Modeling Framework (PIAMF) was expanded to enable determination of indoor PM2.5 concentrations and exposures in a set of 50,000 homes representing the US housing stock. A mass-balance model is used to calculat...

  14. OTLA: A New Model for Online Teaching, Learning and Assessment in Higher Education

    ERIC Educational Resources Information Center

    Ghilay, Yaron; Ghilay, Ruth

    2013-01-01

    The study examined a new asynchronous model for online teaching, learning and assessment, called OTLA. It is designed for higher-education institutions and is based on an LMS (Learning Management System) as well as other relevant IT tools. The new model includes six basic digital components: text, hypertext, text reading, lectures (voice/video),…

  15. Models and parameters for environmental radiological assessments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, C W

    1984-01-01

    This book presents a unified compilation of models and parameters appropriate for assessing the impact of radioactive discharges to the environment. Models examined include those developed for the prediction of atmospheric and hydrologic transport and deposition, for terrestrial and aquatic food-chain bioaccumulation, and for internal and external dosimetry. Chapters have been entered separately into the data base. (ACR)

  16. Using lab notebooks to examine students' engagement in modeling in an upper-division electronics lab course

    NASA Astrophysics Data System (ADS)

    Stanley, Jacob T.; Su, Weifeng; Lewandowski, H. J.

    2017-12-01

    We demonstrate how students' use of modeling can be examined and assessed using student notebooks collected from an upper-division electronics lab course. The use of models is a ubiquitous practice in undergraduate physics education, but the process of constructing, testing, and refining these models is much less common. We focus our attention on a lab course that has been transformed to engage students in this modeling process during lab activities. The design of the lab activities was guided by a framework that captures the different components of model-based reasoning, called the Modeling Framework for Experimental Physics. We demonstrate how this framework can be used to assess students' written work and to identify how students' model-based reasoning differed from activity to activity. Broadly speaking, we were able to identify the different steps of students' model-based reasoning and assess the completeness of their reasoning. Varying degrees of scaffolding present across the activities had an impact on how thoroughly students would engage in the full modeling process, with more scaffolded activities resulting in more thorough engagement with the process. Finally, we identified that the step in the process with which students had the most difficulty was the comparison between their interpreted data and their model prediction. Students did not use sufficiently sophisticated criteria in evaluating such comparisons, which had the effect of halting the modeling process. This may indicate that in order to engage students further in using model-based reasoning during lab activities, the instructor needs to provide further scaffolding for how students make these types of experimental comparisons. This is an important design consideration for other such courses attempting to incorporate modeling as a learning goal.

  17. SU-F-J-178: A Computer Simulation Model Observer for Task-Based Image Quality Assessment in Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dolly, S; Mutic, S; Anastasio, M

    Purpose: Traditionally, image quality in radiation therapy is assessed subjectively or by utilizing physically-based metrics. Some model observers exist for task-based medical image quality assessment, but almost exclusively for diagnostic imaging tasks. As opposed to disease diagnosis, the task for image observers in radiation therapy is to utilize the available images to design and deliver a radiation dose which maximizes patient disease control while minimizing normal tissue damage. The purpose of this study was to design and implement a new computer simulation model observer to enable task-based image quality assessment in radiation therapy. Methods: A modular computer simulation framework was developed to resemble the radiotherapy observer by simulating an end-to-end radiation therapy treatment. Given images and the ground-truth organ boundaries from a numerical phantom as inputs, the framework simulates an external beam radiation therapy treatment and quantifies patient treatment outcomes using the previously defined therapeutic operating characteristic (TOC) curve. As a preliminary demonstration, TOC curves were calculated for various CT acquisition and reconstruction parameters, with the goal of assessing and optimizing simulation CT image quality for radiation therapy. Sources of randomness and bias within the system were analyzed. Results: The relationship between CT imaging dose and patient treatment outcome was objectively quantified in terms of a single value, the area under the TOC (AUTOC) curve. The AUTOC decreases more rapidly for low-dose imaging protocols. AUTOC variation introduced by the dose optimization algorithm was approximately 0.02%, at the 95% confidence interval. Conclusion: A model observer has been developed and implemented to assess image quality based on radiation therapy treatment efficacy. It enables objective determination of appropriate imaging parameter values (e.g. imaging dose). Framework flexibility allows for incorporation of additional modules to include any aspect of the treatment process, and therefore has great potential for both assessment and optimization within radiation therapy.
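    The TOC curve construction can be sketched numerically: sweep the prescription dose, trace tumor control probability (TCP) against normal-tissue complication probability (NTCP), and integrate. The sigmoid dose-response parameters below are invented for illustration and are not from the study:

```python
import numpy as np

def sigmoid_response(dose, d50, gamma):
    # Standard sigmoid dose-response: d50 is the 50% response dose and
    # gamma controls steepness (illustrative parameterization).
    return 1.0 / (1.0 + np.exp(-4.0 * gamma * (dose - d50) / d50))

doses = np.linspace(20, 100, 200)                 # candidate prescription doses (Gy)
tcp = sigmoid_response(doses, d50=60, gamma=2.0)  # tumor control probability
ntcp = sigmoid_response(doses, d50=80, gamma=3.0) # normal-tissue complication prob.

# Area under the TOC curve (TCP vs. NTCP) by the trapezoid rule; values
# near 1 mean high tumor control is reachable at low complication rates.
autoc = float(np.sum(0.5 * (tcp[1:] + tcp[:-1]) * np.diff(ntcp)))
```

Comparing `autoc` across imaging protocols is what turns image quality into the single treatment-efficacy figure of merit the abstract describes.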

  18. Self-awareness assessment during cognitive rehabilitation in children with acquired brain injury: a feasibility study and proposed model of child anosognosia.

    PubMed

    Krasny-Pacini, Agata; Limond, Jennifer; Evans, Jonathan; Hiebel, Jean; Bendjelida, Karim; Chevignard, Mathilde

    2015-01-01

    To compare three ways of assessing self-awareness in children with traumatic brain injury (TBI) and to propose a model of child anosognosia. Five single cases of children with severe TBI, aged 8-14, undergoing metacognitive training. Awareness was assessed using three different measures: two measures of metacognitive knowledge/intellectual awareness (a questionnaire and illustrated stories where child characters have everyday problems related to their executive dysfunction) and one measure of on-line/emergent awareness (post-task appraisal of task difficulty). All three measures showed good feasibility. Analysis of awareness deficit scores indicated large variability (1-100%). Three children showed dissociated scores. Based on these results, we propose a model of child self-awareness and anosognosia and a framework for awareness assessment for rehabilitation purposes. The model emphasizes (1) the role of on-line error detection in the construction of autobiographical memories that allow a child to build self-knowledge of his/her strengths and difficulties; (2) the multiple components of awareness that need to be assessed separately; (3) the implications for rehabilitation: errorless versus error-based learning, rehabilitation approaches based on metacognition, the rationale for rehabilitation intervention based on the child's age and the impaired awareness component, and ethical and developmental considerations regarding confrontational methods. Self-awareness has multiple components that need to be assessed separately in order to better adapt cognitive rehabilitation. Questionnaires and discrepancy scores alone are not sufficient to assess awareness, because they do not include on-line error detection, which can be massively impaired in children, especially those with impaired executive functions. On-line error detection is important to promote, and error-based learning is useful to allow a child to build self-knowledge of his/her strengths and difficulties, in the absence of severe episodic memory problems. Metacognitive training may not be appropriate for younger children whose self-awareness is age-appropriately immature, nor for patients whose anosognosia results from the brain injury itself.

  19. Teacher Education, Motivation, Compensation, Workplace Support, and Links to Quality of Center-Based Child Care and Teachers' Intention to Stay in the Early Childhood Profession

    ERIC Educational Resources Information Center

    Torquati, Julia C.; Raikes, Helen; Huddleston-Casas, Catherine A.

    2007-01-01

    The purposes of this study were to present a conceptual model for selection into the early childhood profession and to test the model using contemporaneous assessments. A stratified random sample of center-based child care providers in 4 Midwestern states (n=964) participated in a telephone interview, and 223 were also assessed with the Early…

  20. A multicriteria decision making approach based on fuzzy theory and credibility mechanism for logistics center location selection.

    PubMed

    Wang, Bowen; Xiong, Haitao; Jiang, Chengrui

    2014-01-01

    As a hot topic in supply chain management, fuzzy methods have been widely used in logistics center location selection to improve its reliability and suitability with respect to the impacts of both qualitative and quantitative factors. However, they do not consider the consistency and historical assessment accuracy of experts in previous decisions. This paper therefore proposes a multicriteria decision making model based on the credibility of decision makers, introducing a priority mechanism for consistency and historical assessment accuracy into the fuzzy multicriteria decision making approach. In this way, only decision makers who pass the credibility check are qualified to perform the further assessment. Finally, a practical example is analyzed to illustrate how to use the model. The result shows that the fuzzy multicriteria decision making model based on a credibility mechanism can improve the reliability and suitability of site selection for the logistics center.
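    The credibility-gating idea can be sketched as follows (an illustrative simplification, not the paper's exact algorithm: triangular fuzzy ratings are defuzzified by their centroid, and only decision makers above a hypothetical credibility threshold contribute):

```python
# Illustrative sketch of credibility-gated fuzzy scoring (hypothetical
# threshold, weights and ratings).
def centroid(tfn):
    # Defuzzify a triangular fuzzy number (l, m, u) by its centroid.
    l, m, u = tfn
    return (l + m + u) / 3.0

def select_site(ratings, credibility, weights, threshold=0.6):
    # Only decision makers passing the credibility check may contribute.
    qualified = [dm for dm, c in credibility.items() if c >= threshold]
    best, best_score = None, float("-inf")
    for site, by_dm in ratings.items():
        score = sum(sum(w * centroid(t) for w, t in zip(weights, by_dm[dm]))
                    for dm in qualified) / len(qualified)
        if score > best_score:
            best, best_score = site, score
    return best

credibility = {"A": 0.9, "B": 0.4, "C": 0.8}           # B fails the check
weights = [0.6, 0.4]                                   # criteria weights
ratings = {                                            # per-site, per-DM fuzzy ratings
    "S1": {"A": [(6, 7, 8), (7, 8, 9)], "B": [(1, 2, 3), (1, 1, 2)],
           "C": [(7, 8, 9), (6, 7, 8)]},
    "S2": {"A": [(3, 4, 5), (4, 5, 6)], "B": [(8, 9, 9), (9, 9, 9)],
           "C": [(2, 3, 4), (3, 4, 5)]},
}
```

Note that decision maker B's favorable view of site S2 is ignored because B fails the credibility check, so S1 is selected on the qualified assessments alone.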

  2. Assessing Preservice Teachers' Presentation Capabilities: Contrasting the Modes of Communication with the Constructed Impression

    ERIC Educational Resources Information Center

    Bower, Matt G.; Moloney, Robyn A.; Cavanagh, Michael S.; Sweller, Naomi

    2013-01-01

    A research-based understanding of how to develop and assess classroom presentation skills is vital for the effective development of pre-service teacher communication capabilities. This paper identifies and compares two different models of assessing pre-service teachers' presentation performance--one based on the Modes of Communication (voice,…

  3. Ethical and Legal Issues Associated with Using Response-to-Intervention to Assess Learning Disabilities

    ERIC Educational Resources Information Center

    Burns, Matthew K.; Jacob, Susan; Wagner, Angela R.

    2008-01-01

    The Individuals with Disabilities Education Improvement Act of 2004 allows schools to use a child's response to research-based intervention (RTI) as a part of procedures to identify students with learning disabilities. This paper considers whether RTI-based assessment models meet ethical and legal standards for acceptable assessment practices.…

  4. A Curriculum-Based Vocational Assessment Procedure: Addressing the School-to-Work Transition Needs of Secondary Schools.

    ERIC Educational Resources Information Center

    Porter, Mahlone E.; Stodden, Robert A.

    1986-01-01

    Curriculum-based vocational assessment procedures as implemented in the United States Department of Defense Dependents Schools in Germany are assessing a match of handicapped students' interests and strengths in terms of career and vocational instructional options. The model is described, with emphasis on project planning and design and…

  5. Habitat classification modeling with incomplete data: Pushing the habitat envelope

    USGS Publications Warehouse

    Zarnetske, P.L.; Edwards, T.C.; Moisen, Gretchen G.

    2007-01-01

    Habitat classification models (HCMs) are invaluable tools for species conservation, land-use planning, reserve design, and metapopulation assessments, particularly at broad spatial scales. However, species occurrence data are often lacking and typically limited to presence points at broad scales. This lack of absence data precludes the use of many statistical techniques for HCMs. One option is to generate pseudo-absence points so that the many available statistical modeling tools can be used. Traditional techniques generate pseudo-absence points at random across broadly defined species ranges, often failing to include biological knowledge concerning the species-habitat relationship. We incorporated biological knowledge of the species-habitat relationship into pseudo-absence points by creating habitat envelopes that constrain the region from which points were randomly selected. We define a habitat envelope as an ecological representation of a species', or species feature's (e.g., nest), observed distribution (i.e., realized niche) based on a single attribute, or the spatial intersection of multiple attributes. We created HCMs for Northern Goshawk (Accipiter gentilis atricapillus) nest habitat during the breeding season across Utah forests with extant nest presence points and ecologically based pseudo-absence points using logistic regression. Predictor variables were derived from 30-m USDA Landfire and 250-m Forest Inventory and Analysis (FIA) map products. These habitat-envelope-based models were then compared to null envelope models, which use traditional practices for generating pseudo-absences. Models were assessed for fit and predictive capability using metrics such as kappa, threshold-independent receiver operating characteristic (ROC) plots, adjusted deviance (D²adj), and cross-validation, and were also assessed for ecological relevance. For all cases, habitat-envelope-based models outperformed null envelope models and were more ecologically relevant, suggesting that incorporating biological knowledge into pseudo-absence point generation is a powerful tool for species habitat assessments. Furthermore, given some a priori knowledge of the species-habitat relationship, ecologically based pseudo-absence points can be applied to any species, ecosystem, data resolution, and spatial extent. © 2007 by the Ecological Society of America.
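    The envelope-constrained pseudo-absence idea can be sketched with a toy single-attribute example (hypothetical canopy-cover data; a hand-rolled logistic fit stands in for the GLM machinery of the study):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical single-attribute example: nests observed only at high canopy cover.
presence = rng.normal(70, 8, 120)                 # canopy cover (%) at nest sites
envelope_min = presence.min()                     # habitat envelope lower bound

# Ecologically informed pseudo-absences: sample only OUTSIDE the envelope,
# instead of at random across the whole study region.
pseudo_absence = rng.uniform(0, envelope_min, 120)

x = np.r_[presence, pseudo_absence]
y = np.r_[np.ones(120), np.zeros(120)]

# Tiny logistic regression fit by gradient descent on the log-loss.
w, b = 0.0, 0.0
for _ in range(5000):
    p = 1 / (1 + np.exp(-(w * x + b)))
    w -= 0.0005 * np.mean((p - y) * x)
    b -= 0.05 * np.mean(p - y)

def suitability(canopy):
    # Fitted habitat suitability for a given canopy cover value.
    return 1 / (1 + np.exp(-(w * canopy + b)))
```

Because the pseudo-absences are drawn only from outside the envelope, the fitted suitability rises with canopy cover rather than being diluted by random background points that may actually be suitable habitat.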

  6. Wheat stress indicator model, Crop Condition Assessment Division (CCAD) data base interface driver, user's manual

    NASA Technical Reports Server (NTRS)

    Hansen, R. F. (Principal Investigator)

    1981-01-01

    The use of the wheat stress indicator model CCAD data base interface driver is described. The purpose of this system is to interface the wheat stress indicator model with the CCAD operational data base. The interface driver routine decides what meteorological stations should be processed and calls the proper subroutines to process the stations.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mou, J.I.; King, C.

    The focus of this study is to develop a sensor-fused process modeling and control methodology to model, assess, and then enhance the performance of a hexapod machine for precision product realization. Deterministic modeling techniques were used to derive models for machine performance assessment and enhancement. Sensor fusion methodology was adopted to identify the parameters of the derived models. Empirical models and computational algorithms were also derived and implemented to model, assess, and then enhance the machine performance. The developed sensor fusion algorithms can be implemented on a PC-based open architecture controller to receive information from various sensors, assess the status of the process, determine the proper action, and deliver the command to actuators for task execution. This will enhance a hexapod machine's capability to produce workpieces within the imposed dimensional tolerances.

  8. Evidence-Based Assessment of Attention-Deficit/Hyperactivity Disorder: Using Multiple Sources of Information

    ERIC Educational Resources Information Center

    Frazier, Thomas W.; Youngstrom, Eric A.

    2006-01-01

    In this article, the authors illustrate a step-by-step process of acquiring and integrating information according to the recommendations of evidence-based practices. A case example models the process, leading to specific recommendations regarding instruments and strategies for evidence-based assessment (EBA) of attention-deficit/hyperactivity…

  9. Models for Theory-Based M.A. and Ph.D. Programs.

    ERIC Educational Resources Information Center

    Botan, Carl; Vasquez, Gabriel

    1999-01-01

    Presents work accomplished at the 1998 National Communication Association Summer Conference. Outlines reasons for theory-based education in public relations. Presents an integrated model of student outcomes, curriculum, pedagogy, and assessment for theory-based master's and doctoral programs, including assumptions made and rationale for such…

  10. Model of environmental life cycle assessment for coal mining operations.

    PubMed

    Burchart-Korol, Dorota; Fugiel, Agata; Czaplicka-Kolarz, Krystyna; Turek, Marian

    2016-08-15

    This paper presents a novel approach to environmental assessment of coal mining operations, which enables assessment of the factors that are both directly and indirectly affecting the environment and are associated with the production of raw materials and energy used in processes. The primary novelty of the paper is the development of a computational environmental life cycle assessment (LCA) model for coal mining operations and the application of the model for coal mining operations in Poland. The LCA model enables the assessment of environmental indicators for all identified unit processes in hard coal mines with the life cycle approach. The proposed model enables the assessment of greenhouse gas emissions (GHGs) based on the IPCC method and the assessment of damage categories, such as human health, ecosystems and resources, based on the ReCiPe method. The model enables the assessment of GHGs for hard coal mining operations in three time frames: 20, 100 and 500 years. The model was used to evaluate the coal mines in Poland. It was demonstrated that the largest environmental impacts in damage categories were associated with the use of fossil fuels, methane emissions and the use of electricity, processing of wastes, heat, and steel supports. It was concluded that an environmental assessment of coal mining operations, apart from direct influence from processing waste, methane emissions and drainage water, should include the use of electricity, heat and steel, particularly for steel supports. Because the model allows the comparison of environmental impact assessment for various unit processes, it can be used for all hard coal mines, not only in Poland but also in the world. This development is an important step forward in the study of the impacts of fossil fuels on the environment with the potential to mitigate the impact of the coal industry on the environment. Copyright © 2016 Elsevier B.V. All rights reserved.
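    The three-time-frame GHG accounting can be made concrete with a small sketch (the GWP factors are the IPCC AR4 values for methane; the emission figures are hypothetical, not the Polish mine data):

```python
# Convert a mine's direct emissions to CO2-equivalents over the three
# time horizons. GWP factors: IPCC AR4 methane values (72, 25, 7.6 for
# 20/100/500 years); emission totals below are made up for illustration.
GWP_CH4 = {20: 72, 100: 25, 500: 7.6}

def co2_equivalent(co2_t, ch4_t, horizon):
    """Total CO2-eq (tonnes) for a given time horizon in years."""
    return co2_t + ch4_t * GWP_CH4[horizon]

emissions = {"co2_t": 120_000, "ch4_t": 4_000}   # hypothetical annual totals
by_horizon = {h: co2_equivalent(**emissions, horizon=h) for h in (20, 100, 500)}
# Methane dominates the 20-year total but matters far less at 500 years.
```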

  11. An example of population-level risk assessments for small mammals using individual-based population models.

    PubMed

    Schmitt, Walter; Auteri, Domenica; Bastiansen, Finn; Ebeling, Markus; Liu, Chun; Luttik, Robert; Mastitsky, Sergey; Nacci, Diane; Topping, Chris; Wang, Magnus

    2016-01-01

    This article presents a case study demonstrating the application of 3 individual-based, spatially explicit population models (IBMs, also known as agent-based models) in ecological risk assessments to predict long-term effects of a pesticide on populations of small mammals. The 3 IBMs each used a hypothetical fungicide (FungicideX) in different scenarios: spraying in cereals (common vole, Microtus arvalis), spraying in orchards (field vole, Microtus agrestis), and cereal seed treatment (wood mouse, Apodemus sylvaticus). Each scenario used existing model landscapes, which differed greatly in size and structural complexity. The toxicological profile of FungicideX was defined so that the deterministic long-term first-tier risk assessment would result in high risk to small mammals, thus providing the opportunity to use the IBMs for risk assessment refinement (i.e., higher-tier risk assessment). Despite differing internal model designs and scenarios, results indicated in all 3 cases low population sensitivity unless FungicideX was applied at very high (×10) rates. Recovery from local population impacts was generally fast. Only when patch extinctions occurred in simulations of intentionally high acute toxic effects were recovery periods, then determined by recolonization, of any concern. Conclusions include recommendations for the most important input considerations, including the selection of exposure levels, duration of simulations, statistically robust numbers of replicates, and endpoints to report. However, further investigation and agreement are needed to develop recommendations for landscape attributes such as size, structure, and crop rotation to define appropriate regulatory risk assessment scenarios. Overall, the application of IBMs provides multiple advantages to higher-tier ecological risk assessments for small mammals, including consistent and transparent direct links to specific protection goals, and the consideration of more realistic scenarios. © 2015 SETAC.

  12. Helicopter simulation validation using flight data

    NASA Technical Reports Server (NTRS)

    Key, D. L.; Hansen, R. S.; Cleveland, W. B.; Abbott, W. Y.

    1982-01-01

    A joint NASA/Army effort to perform a systematic ground-based piloted simulation validation assessment is described. The best available mathematical model for the subject helicopter (UH-60A Black Hawk) was programmed for real-time operation. Flight data were obtained to validate the math model, and to develop models for the pilot control strategy while performing mission-type tasks. The validated math model is to be combined with motion and visual systems to perform ground based simulation. Comparisons of the control strategy obtained in flight with that obtained on the simulator are to be used as the basis for assessing the fidelity of the results obtained in the simulator.

  13. AHP-based spatial analysis of water quality impact assessment due to change in vehicular traffic caused by highway broadening in Sikkim Himalaya

    NASA Astrophysics Data System (ADS)

    Banerjee, Polash; Ghose, Mrinal Kanti; Pradhan, Ratika

    2018-05-01

    Spatial analysis of water quality impact assessment of highway projects in mountainous areas remains largely unexplored. A methodology is presented here for Spatial Water Quality Impact Assessment (SWQIA) due to highway-broadening-induced vehicular traffic change in the East district of Sikkim. The pollution load of the highway runoff was estimated using an Average Annual Daily Traffic-based empirical model in combination with a mass balance model to predict pollution in the rivers within the study area. Spatial interpolation and overlay analysis were used for impact mapping. An Analytic Hierarchy Process-based Water Quality Status Index was used to prepare a composite impact map. Model validation criteria, cross-validation criteria, and spatially explicit sensitivity analysis show that the SWQIA model is robust. The study shows that vehicular traffic is a significant contributor to water pollution in the study area. The model caters specifically to impact analysis of the project concerned and can serve as a decision-support aid for the project stakeholders. The applicability of the SWQIA model needs to be explored and validated in the context of a larger set of water quality parameters and project scenarios at a greater spatial scale.
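    The AHP weighting step behind such a composite index can be sketched as follows (a hypothetical 3×3 pairwise comparison matrix for illustration, not the study's actual judgments):

```python
import numpy as np

# Hypothetical pairwise judgments among three water-quality parameters,
# e.g. organic load vs. turbidity vs. heavy metals, on Saaty's 1-9 scale.
A = np.array([[1,   3,   5],
              [1/3, 1,   3],
              [1/5, 1/3, 1]])

# Priority vector via the geometric-mean (row) method, normalized to sum 1.
w = np.prod(A, axis=1) ** (1 / A.shape[0])
w /= w.sum()

# Consistency check: lambda_max from A w ~ lambda w; the random index (RI)
# for n = 3 is 0.58, and CR < 0.1 indicates acceptably consistent judgments.
lam = (A @ w / w).mean()
CI = (lam - A.shape[0]) / (A.shape[0] - 1)
CR = CI / 0.58
```

The resulting weights would then multiply the interpolated per-parameter impact surfaces to produce the composite impact map.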

  14. Field-based landslide susceptibility assessment in a data-scarce environment: the populated areas of the Rwenzori Mountains

    NASA Astrophysics Data System (ADS)

    Jacobs, Liesbet; Dewitte, Olivier; Poesen, Jean; Sekajugo, John; Nobile, Adriano; Rossi, Mauro; Thiery, Wim; Kervyn, Matthieu

    2018-01-01

    The inhabited zone of the Ugandan Rwenzori Mountains is affected by landslides, frequently causing loss of life, damage to infrastructure and loss of livelihood. This area of ca. 1230 km² is characterized by contrasting geomorphologic, climatic and lithological patterns, resulting in different landslide types. In this study, the spatial pattern of landslide susceptibility is investigated based on an extensive field inventory constructed for five representative areas within the region (153 km²) and containing over 450 landslides. To achieve a reliable susceptibility assessment, the effects of (1) using different topographic data sources and spatial resolutions and (2) changing the scale of assessment by comparing local and regional susceptibility models on the susceptibility model performances are investigated using a pixel-based logistic regression approach. Topographic data are extracted from different digital elevation models (DEMs) based on radar interferometry (SRTM and TanDEM-X) and optical stereophotogrammetry (ASTER DEM). Susceptibility models using the radar-based DEMs tend to outperform the ones using the ASTER DEM. The model spatial resolution is varied between 10, 20, 30 and 90 m. The optimal resolution depends on the location of the investigated area within the region, but the lowest model resolution (90 m) rarely yields the best model performances, while the highest model resolution (10 m) never results in significant increases in performance compared to the 20 m resolution. Models built for the local case studies generally have similar or better performances than the regional model and better reflect site-specific controlling factors. At the regional level the effect of distinguishing landslide types between shallow and deep-seated landslides is investigated. The separation of landslide types allows us to improve model performances for the prediction of deep-seated landslides and to better understand factors influencing the occurrence of shallow landslides such as tangent curvature and total rainfall. Finally, the landslide susceptibility assessment is overlaid with a population density map in order to identify potential landslide risk hotspots, which could direct research and policy action towards reduced landslide risk in this under-researched, landslide-prone region.

  15. APOLLO: a quality assessment service for single and multiple protein models.

    PubMed

    Wang, Zheng; Eickholt, Jesse; Cheng, Jianlin

    2011-06-15

    We built a web server named APOLLO, which can evaluate the absolute global and local qualities of a single protein model using machine learning methods, or the global and local qualities of a pool of models using a pair-wise comparison approach. Based on our evaluations on 107 CASP9 (Critical Assessment of Techniques for Protein Structure Prediction) targets, the predicted quality scores generated by our machine learning and pair-wise methods have an average per-target correlation of 0.671 and 0.917, respectively, with the true model quality scores. Based on our test on 92 CASP9 targets, our predicted absolute local qualities have an average difference of 2.60 Å from the actual distances to the native structure. The server is available at http://sysbio.rnet.missouri.edu/apollo/. Single and pair-wise global quality assessment software is also available at the site.
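    The pair-wise idea is simple to sketch: without a native structure, score each model in a pool by its average similarity to all the others, so consensus models rank high and outliers rank low. The toy coordinates and crude distance-based similarity below are stand-ins for real structures and GDT-style scores:

```python
import numpy as np

def similarity(a, b, tol=2.0):
    # Fraction of corresponding positions within tol angstroms -- a crude
    # stand-in for a GDT-like structural similarity score.
    return float(np.mean(np.linalg.norm(a - b, axis=1) < tol))

def pairwise_quality(models):
    # Quality of each model = mean similarity to every other model in the pool.
    n = len(models)
    return [sum(similarity(models[i], models[j])
                for j in range(n) if j != i) / (n - 1)
            for i in range(n)]

rng = np.random.default_rng(7)
base = rng.normal(0, 5, (20, 3))                  # shared "consensus" fold
pool = [base + rng.normal(0, 0.3, base.shape),    # two mutually consistent models
        base + rng.normal(0, 0.3, base.shape),
        base + 8.0]                               # one outlier model
scores = pairwise_quality(pool)                   # outlier receives the lowest score
```

The high per-target correlation the abstract reports for the pair-wise method reflects exactly this consensus effect: when most models in a pool agree, agreement with the pool tracks agreement with the (unseen) native structure.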

  16. Application of multimedia models for screening assessment of long-range transport potential and overall persistence.

    PubMed

    Klasmeier, Jörg; Matthies, Michael; Macleod, Matthew; Fenner, Kathrin; Scheringer, Martin; Stroebe, Maximilian; Le Gall, Anne Christine; Mckone, Thomas; Van De Meent, Dik; Wania, Frank

    2006-01-01

    We propose a multimedia model-based methodology to evaluate whether a chemical substance qualifies as POP-like based on overall persistence (Pov) and potential for long-range transport (LRTP). It relies upon screening chemicals against the Pov and LRTP characteristics of selected reference chemicals with well-established environmental fates. Results indicate that chemicals of high and low concern in terms of persistence and long-range transport can be consistently identified by eight contemporary multimedia models using the proposed methodology. Model results for three hypothetical chemicals illustrate that the model-based classification of chemicals according to Pov and LRTP is not always consistent with the single-media half-life approach proposed by the UNEP Stockholm Convention and that the models provide additional insight into the likely long-term hazards associated with chemicals in the environment. We suggest this model-based classification method be adopted as a complement to screening against defined half-life criteria at the initial stages of tiered assessments designed to identify POP-like chemicals and to prioritize further environmental fate studies for new and existing chemicals.

  17. Connecting Lines of Research on Task Model Variables, Automatic Item Generation, and Learning Progressions in Game-Based Assessment

    ERIC Educational Resources Information Center

    Graf, Edith Aurora

    2014-01-01

    In "How Task Features Impact Evidence from Assessments Embedded in Simulations and Games," Almond, Kim, Velasquez, and Shute have prepared a thought-provoking piece contrasting the roles of task model variables in a traditional assessment of mathematics word problems to their roles in "Newton's Playground," a game designed…

  18. EVALUATING VIRULENCE OF WATERBORNE AND CLINICAL AEROMONAS ISOLATES USING GENE EXPRESSION AND MORTALITY IN NEONATAL MICE FOLLOWED BY ASSESSING CELL CULTURE'S ABILITY TO PREDICT VIRULENCE BASED ON TRANSCRIPTIONAL RESPONSE

    EPA Science Inventory

    The virulence of multiple Aeromonas spp. was assessed using two models, a neonatal mouse assay and a mouse intestinal cell culture. Transcriptional responses to both infection models were assessed using microarrays. After artificial infection with a variety of Aeromonas spp., ...

  19. An Automated System for Skeletal Maturity Assessment by Extreme Learning Machines

    PubMed Central

    Mansourvar, Marjan; Shamshirband, Shahaboddin; Raj, Ram Gopal; Gunalan, Roshan; Mazinani, Iman

    2015-01-01

    Assessing skeletal age is a subjective and tedious examination process. Hence, automated assessment methods have been developed to replace manual evaluation in medical applications. In this study, a new fully automated method based on content-based image retrieval and using extreme learning machines (ELM) is designed and adapted to assess skeletal maturity. The main novelty of this approach is that it overcomes the segmentation problem suffered by existing systems. The estimation results of ELM models are compared with those of genetic programming (GP) and artificial neural network (ANN) models. The experimental results show improved assessment accuracy over GP and ANN, while good generalization capability is achieved with the ELM approach. Moreover, the results indicate that the ELM model developed can be used confidently in further work on formulating novel models of skeletal age assessment strategies. According to the experimental results, the presented method has the capacity to learn many hundreds of times faster than traditional learning methods and has sufficient overall performance in many respects. It has conclusively been found that applying ELM is particularly promising as an alternative method for evaluating skeletal age. PMID:26402795
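    For readers unfamiliar with extreme learning machines, the core trick (random, fixed hidden-layer weights plus a closed-form least-squares fit of the output weights) can be sketched in a few lines. This is a generic regression toy, not the paper's image-retrieval pipeline:

```python
import numpy as np

def elm_train(X, y, n_hidden=40, seed=0):
    """Fit a basic ELM regressor: hidden weights are random and never updated;
    only the linear output weights are estimated, via the pseudoinverse."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights
    b = rng.normal(size=n_hidden)                # random biases
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                 # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: regress a smooth 1-D function.
X = np.linspace(-1, 1, 200).reshape(-1, 1)
y = np.sin(3 * X[:, 0])
W, b, beta = elm_train(X, y)
mse = float(np.mean((elm_predict(X, W, b, beta) - y) ** 2))
```

    Because training reduces to one matrix pseudoinverse, this is the source of the "hundreds of times faster" claim relative to iteratively trained networks.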

  20. [Physically-based model of pesticide application for risk assessment of agricultural workers].

    PubMed

    Rubino, F M; Mandic-Rajcevic, S; Vianello, G; Brambilla, G; Colosio, C

    2012-01-01

    Due to their unavoidable toxicity to non-target organisms, including man, the use of Plant Protection Products requires a thorough risk assessment to rationally advise farmers on safe use procedures and protective equipment. Most information on active substances and formulations, such as dermal absorption rates and exposure limits, is available in the large body of regulatory data. Physically-based computational models can be used to forecast risk in real-life conditions (preventive assessment by 'exposure profiles'), to drive the cost-effective use of products and equipment and to understand the sources of unexpected exposure.

  1. Assessment of vulnerability zones for ground water pollution using GIS-DRASTIC-EC model: A field-based approach

    NASA Astrophysics Data System (ADS)

    Anantha Rao, D.; Naik, Pradeep K.; Jain, Sunil K.; Vinod Kumar, K.; Dhanamjaya Rao, E. N.

    2018-06-01

    Assessment of groundwater vulnerability to pollution is an essential pre-requisite for better planning of an area. We present the groundwater vulnerability assessment in parts of the Yamuna Nagar District, Haryana State, India, in an area of about 800 km2 considered to be a freshwater zone in the foothills of the Siwalik Hill Ranges. Such areas in the Lower Himalayas form good groundwater recharge zones and should always be free from contamination. The administration, however, has been trying to promote industrialization along these foothill zones without actually assessing the environmental consequences such activities may invite in the future. The GIS-DRASTIC model has been used with field-based data inputs for the vulnerability assessment, but we find that including electrical conductivity (EC) as a model parameter makes it more robust; we therefore rename it the GIS-DRASTIC-EC model. The model identifies three vulnerability zones (low, moderate and high) with areal extents of 5%, 80% and 15%, respectively. On the basis of major chemical parameters alone, the groundwater in the foothill zones apparently looks safe, but analysis with the GIS-DRASTIC-EC model gives a better perspective of groundwater quality in terms of identifying the vulnerable areas.
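    DRASTIC-type indices are weighted sums of rated hydrogeologic parameters. A rough sketch of how an EC-extended index might be computed follows; the seven standard DRASTIC weights are well established, but the EC weight, the site ratings and the class breaks below are purely illustrative, not the paper's values:

```python
# Standard DRASTIC weights for the seven classic parameters; the EC weight
# here is a hypothetical addition for illustration only.
DRASTIC_WEIGHTS = {
    "depth_to_water": 5, "net_recharge": 4, "aquifer_media": 3,
    "soil_media": 2, "topography": 1, "vadose_zone": 5,
    "hydraulic_conductivity": 3,
    "electrical_conductivity": 4,   # invented weight for the added EC layer
}

def drastic_ec_index(ratings):
    """Vulnerability index = sum of weight * rating (ratings typically 1-10)."""
    return sum(DRASTIC_WEIGHTS[p] * r for p, r in ratings.items())

def vulnerability_class(index, low=100, high=140):
    # Illustrative class breaks; the paper derives its own zones from the data.
    if index < low:
        return "low"
    if index < high:
        return "moderate"
    return "high"

site = {"depth_to_water": 7, "net_recharge": 6, "aquifer_media": 5,
        "soil_media": 4, "topography": 9, "vadose_zone": 6,
        "hydraulic_conductivity": 4, "electrical_conductivity": 5}
idx = drastic_ec_index(site)
```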

  2. Creating Needs-Based Tiered Models for Assisted Living Reimbursement

    ERIC Educational Resources Information Center

    Howell-White, Sandra; Gaboda, Dorothy; Rosato, Nancy Scotto; Lucas, Judith A.

    2006-01-01

    Purpose: This research provides state policy makers and others interested in developing needs-based reimbursement models for Medicaid-funded assisted living with an evaluation of different methodologies that affect the structure and outcomes of these models. Design and Methods: We used assessment data from Medicaid-enrolled assisted living…

  3. Reasoning with Causal Cycles

    ERIC Educational Resources Information Center

    Rehder, Bob

    2017-01-01

    This article assesses how people reason with categories whose features are related in causal cycles. Whereas models based on causal graphical models (CGMs) have enjoyed success modeling category-based judgments as well as a number of other cognitive phenomena, CGMs are only able to represent causal structures that are acyclic. A number of new…

  4. GIS-BASED HYDROLOGIC MODELING: THE AUTOMATED GEOSPATIAL WATERSHED ASSESSMENT TOOL

    EPA Science Inventory

    Planning and assessment in land and water resource management are evolving from simple, local scale problems toward complex, spatially explicit regional ones. Such problems have to be addressed with distributed models that can compute runoff and erosion at different spatial a...

  5. Hydrological modeling of upper Indus Basin and assessment of deltaic ecology

    USDA-ARS?s Scientific Manuscript database

    Managing water resources is mostly required at watershed scale where the complex hydrology processes and interactions linking land surface, climatic factors and human activities can be studied. Geographical Information System based watershed model; Soil and Water Assessment Tool (SWAT) is applied f...

  6. Clinical Assessment of Family Caregivers in Dementia.

    ERIC Educational Resources Information Center

    Rankin, Eric D.; And Others

    1992-01-01

    Evaluated development of integrated family assessment inventory based on Double ABCX and Circumplex models of family functioning and its clinical utility with 121 primary family caregivers from cognitive disorders program. Proposed model predicted significant proportion of variance associated with caregiver stress and strain. Several aspects of…

  7. MODEL DEVELOPMENT AND APPLICATION FOR ASSESSING HUMAN EXPOSURE AND DOSE TO TOXIC CHEMICALS AND POLLUTANTS

    EPA Science Inventory

    This project aims to strengthen the general scientific foundation of EPA's exposure and risk assessment processes by developing state-of-the-art exposure to dose computational models. This research will produce physiologically-based pharmacokinetic (PBPK) and pharmacodynamic (PD)...

  8. Using Remote Sensing and Radar Meteorological Data to Support Watershed Assessments Comprising Integrated Environmental Modeling

    EPA Science Inventory

    Meteorological (MET) data required by watershed assessments comprising Integrated Environmental Modeling (IEM) traditionally have been provided by land-based weather (gauge) stations, although these data may not be the most appropriate for adequate spatial and temporal resolution...

  9. Development and testing of a physically based model of streambank erosion for coupling with a basin-scale hydrologic model SWAT

    USDA-ARS?s Scientific Manuscript database

    A comprehensive stream bank erosion model based on excess shear stress has been developed and incorporated in the hydrological model Soil and Water Assessment Tool (SWAT). It takes into account processes such as weathering, vegetative cover, and channel meanders to adjust critical and effective str...

  10. Response to Intervention in Canada: Definitions, the Evidence Base, and Future Directions

    ERIC Educational Resources Information Center

    McIntosh, Kent; MacKay, Leslie D.; Andreou, Theresa; Brown, Jacqueline A.; Mathews, Susanna; Gietz, Carmen; Bennett, Joanna L.

    2011-01-01

    Based on challenges with the traditional model of school psychology, response to intervention (RTI) has been advanced as a model of special education eligibility decision making and service delivery that may address the drawbacks of the traditional models of assessment and result in improved outcomes for students. In this article, the RTI model is…

  11. Dynamic Assessment of Water Quality Based on a Variable Fuzzy Pattern Recognition Model

    PubMed Central

    Xu, Shiguo; Wang, Tianxiang; Hu, Suduan

    2015-01-01

    Water quality assessment is an important foundation of water resource protection and is affected by many indicators. The dynamic and fuzzy changes of water quality lead to problems for proper assessment. This paper explores a method which is in accordance with the water quality changes. The proposed method is based on the variable fuzzy pattern recognition (VFPR) model and combines the analytic hierarchy process (AHP) model with the entropy weight (EW) method. The proposed method was applied to dynamically assess the water quality of Biliuhe Reservoir (Dalian, China). The results show that the water quality level is between levels 2 and 3 and worse in August or September, caused by the increasing water temperature and rainfall. Weights and methods are compared and random errors of the values of indicators are analyzed. It is concluded that the proposed method has advantages of dynamism, fuzzification and stability by considering the interval influence of multiple indicators and using the average level characteristic values of four models as results. PMID:25689998
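    The entropy weight (EW) step mentioned above is a standard calculation: indicators whose values vary more across samples carry more information and receive larger weights. A minimal sketch with toy data, not the Biliuhe indicators:

```python
import numpy as np

def entropy_weights(X):
    """Entropy-weight method for an (m samples, n indicators) matrix of
    positive values: more dispersed indicators get larger weights."""
    m, n = X.shape
    P = X / X.sum(axis=0)                       # column-normalised proportions
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)
    e = -plogp.sum(axis=0) / np.log(m)          # entropy of each indicator
    d = 1.0 - e                                 # degree of diversification
    return d / d.sum()                          # normalised weights

# Three samples of three indicators: varied, nearly constant, constant.
X = np.array([[0.2, 5.0, 1.0],
              [0.8, 5.1, 1.0],
              [0.5, 4.9, 1.0]])
w = entropy_weights(X)
```

    The constant third indicator is maximally entropic and receives (essentially) zero weight, while the highly variable first indicator dominates.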

  12. Dynamic assessment of water quality based on a variable fuzzy pattern recognition model.

    PubMed

    Xu, Shiguo; Wang, Tianxiang; Hu, Suduan

    2015-02-16

    Water quality assessment is an important foundation of water resource protection and is affected by many indicators. The dynamic and fuzzy changes of water quality lead to problems for proper assessment. This paper explores a method which is in accordance with the water quality changes. The proposed method is based on the variable fuzzy pattern recognition (VFPR) model and combines the analytic hierarchy process (AHP) model with the entropy weight (EW) method. The proposed method was applied to dynamically assess the water quality of Biliuhe Reservoir (Dalian, China). The results show that the water quality level is between levels 2 and 3 and worse in August or September, caused by the increasing water temperature and rainfall. Weights and methods are compared and random errors of the values of indicators are analyzed. It is concluded that the proposed method has advantages of dynamism, fuzzification and stability by considering the interval influence of multiple indicators and using the average level characteristic values of four models as results.

  13. Assessing the Performance of a Computer-Based Policy Model of HIV and AIDS

    PubMed Central

    Rydzak, Chara E.; Cotich, Kara L.; Sax, Paul E.; Hsu, Heather E.; Wang, Bingxia; Losina, Elena; Freedberg, Kenneth A.; Weinstein, Milton C.; Goldie, Sue J.

    2010-01-01

    Background Model-based analyses, conducted within a decision analytic framework, provide a systematic way to combine information about the natural history of disease and effectiveness of clinical management strategies with demographic and epidemiological characteristics of the population. Challenges with disease-specific modeling include the need to identify influential assumptions and to assess the face validity and internal consistency of the model. Methods and Findings We describe a series of exercises involved in adapting a computer-based simulation model of HIV disease to the Women's Interagency HIV Study (WIHS) cohort and assess model performance as we re-parameterized the model to address policy questions in the U.S. relevant to HIV-infected women using data from the WIHS. Empiric calibration targets included 24-month survival curves stratified by treatment status and CD4 cell count. The most influential assumptions in untreated women included chronic HIV-associated mortality following an opportunistic infection, and in treated women, the ‘clinical effectiveness’ of HAART and the ability of HAART to prevent HIV complications independent of virologic suppression. Good-fitting parameter sets required reductions in the clinical effectiveness of 1st and 2nd line HAART and improvements in 3rd and 4th line regimens. Projected rates of treatment regimen switching using the calibrated cohort-specific model closely approximated independent analyses published using data from the WIHS. Conclusions The model demonstrated good internal consistency and face validity, and supported cohort heterogeneities that have been reported in the literature. Iterative assessment of model performance can provide information about the relative influence of uncertain assumptions and provide insight into heterogeneities within and between cohorts. Description of calibration exercises can enhance the transparency of disease-specific models. PMID:20844741

  14. Assessing the performance of a computer-based policy model of HIV and AIDS.

    PubMed

    Rydzak, Chara E; Cotich, Kara L; Sax, Paul E; Hsu, Heather E; Wang, Bingxia; Losina, Elena; Freedberg, Kenneth A; Weinstein, Milton C; Goldie, Sue J

    2010-09-09

    Model-based analyses, conducted within a decision analytic framework, provide a systematic way to combine information about the natural history of disease and effectiveness of clinical management strategies with demographic and epidemiological characteristics of the population. Challenges with disease-specific modeling include the need to identify influential assumptions and to assess the face validity and internal consistency of the model. We describe a series of exercises involved in adapting a computer-based simulation model of HIV disease to the Women's Interagency HIV Study (WIHS) cohort and assess model performance as we re-parameterized the model to address policy questions in the U.S. relevant to HIV-infected women using data from the WIHS. Empiric calibration targets included 24-month survival curves stratified by treatment status and CD4 cell count. The most influential assumptions in untreated women included chronic HIV-associated mortality following an opportunistic infection, and in treated women, the 'clinical effectiveness' of HAART and the ability of HAART to prevent HIV complications independent of virologic suppression. Good-fitting parameter sets required reductions in the clinical effectiveness of 1st and 2nd line HAART and improvements in 3rd and 4th line regimens. Projected rates of treatment regimen switching using the calibrated cohort-specific model closely approximated independent analyses published using data from the WIHS. The model demonstrated good internal consistency and face validity, and supported cohort heterogeneities that have been reported in the literature. Iterative assessment of model performance can provide information about the relative influence of uncertain assumptions and provide insight into heterogeneities within and between cohorts. Description of calibration exercises can enhance the transparency of disease-specific models.

  15. Model averaging techniques for quantifying conceptual model uncertainty.

    PubMed

    Singh, Abhishek; Mishra, Srikanta; Ruskauff, Greg

    2010-01-01

    In recent years a growing understanding has emerged regarding the need to expand the modeling paradigm to include conceptual model uncertainty for groundwater models. Conceptual model uncertainty is typically addressed by formulating alternative model conceptualizations and assessing their relative likelihoods using statistical model averaging approaches. Several model averaging techniques and likelihood measures have been proposed in the recent literature for this purpose with two broad categories--Monte Carlo-based techniques such as Generalized Likelihood Uncertainty Estimation or GLUE (Beven and Binley 1992) and criterion-based techniques that use metrics such as the Bayesian and Kashyap Information Criteria (e.g., the Maximum Likelihood Bayesian Model Averaging or MLBMA approach proposed by Neuman 2003) and Akaike Information Criterion-based model averaging (AICMA) (Poeter and Anderson 2005). These different techniques can often lead to significantly different relative model weights and ranks because of differences in the underlying statistical assumptions about the nature of model uncertainty. This paper provides a comparative assessment of the four model averaging techniques (GLUE, MLBMA with KIC, MLBMA with BIC, and AIC-based model averaging) mentioned above for the purpose of quantifying the impacts of model uncertainty on groundwater model predictions. Pros and cons of each model averaging technique are examined from a practitioner's perspective using two groundwater modeling case studies. Recommendations are provided regarding the use of these techniques in groundwater modeling practice.
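    As a concrete illustration of criterion-based averaging, information-criterion scores are conventionally converted into relative model weights (Akaike weights, in the Burnham and Anderson sense); a minimal sketch using hypothetical AIC values for three candidate conceptual models:

```python
import math

def akaike_weights(aic_values):
    """Convert AIC scores to relative model weights:
    w_i = exp(-0.5 * (AIC_i - AIC_min)) / sum_j exp(-0.5 * (AIC_j - AIC_min))."""
    a_min = min(aic_values)
    rel = [math.exp(-0.5 * (a - a_min)) for a in aic_values]
    total = sum(rel)
    return [r / total for r in rel]

# Hypothetical AIC scores for three alternative conceptualizations.
weights = akaike_weights([210.0, 212.0, 220.0])
```

    The same machinery applies to BIC- or KIC-based averaging by substituting the criterion; the differing penalty terms are one reason the techniques can rank models differently.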

  16. The Communication, Awareness, Relationships and Empowerment (C.A.R.E.) Model: An Effective Tool for Engaging Urban Communities in Community-Based Participatory Research

    PubMed Central

    Ceasar, Joniqua; Peters-Lawrence, Marlene H.; Mitchell, Valerie; Powell-Wiley, Tiffany M.

    2017-01-01

    Little is known about recruitment methods for racial/ethnic minority populations from resource-limited areas for community-based health and needs assessments, particularly assessments that incorporate mobile health (mHealth) technology for characterizing physical activity and dietary intake. We examined whether the Communication, Awareness, Relationships and Empowerment (C.A.R.E.) model could reduce challenges recruiting and retaining participants from faith-based organizations in predominantly African American Washington, D.C. communities for a community-based assessment. Employing C.A.R.E. model elements, our diverse research team developed partnerships with churches, health organizations, academic institutions and governmental agencies. Through these partnerships, we cultivated a visible presence at community events, provided cardiovascular health education and remained accessible throughout the research process. Additionally, these relationships led to the creation of a community advisory board (CAB), which influenced the study’s design, implementation, and dissemination. Over thirteen months, 159 individuals were recruited for the study, 99 completed the initial assessment, and 81 used mHealth technology to self-monitor physical activity over 30 days. The culturally and historically sensitive C.A.R.E. model strategically engaged CAB members and study participants. It was essential for success in recruitment and retention of an at-risk, African American population and may be an effective model for researchers hoping to engage racial/ethnic minority populations living in urban communities. PMID:29160826

  17. Graphic comparison of reserve-growth models for conventional oil and accumulation

    USGS Publications Warehouse

    Klett, T.R.

    2003-01-01

    The U.S. Geological Survey (USGS) periodically assesses crude oil, natural gas, and natural gas liquids resources of the world. The assessment procedure requires estimated recoverable oil and natural gas volumes (field size, cumulative production plus remaining reserves) in discovered fields. Because initial reserves are typically conservative, subsequent estimates increase through time as these fields are developed and produced. The USGS assessment of petroleum resources makes estimates, or forecasts, of the potential additions to reserves in discovered oil and gas fields resulting from field development, and it also estimates the potential fully developed sizes of undiscovered fields. The term "reserve growth" refers to the commonly observed upward adjustment of reserve estimates. Because such additions are related to increases in the total size of a field, the USGS uses field sizes to model reserve growth. Future reserve growth in existing fields is a major component of remaining U.S. oil and natural gas resources and has therefore become a necessary element of U.S. petroleum resource assessments. Past and currently proposed reserve-growth models compared herein aid in the selection of a suitable set of forecast functions to provide an estimate of potential additions to reserves from reserve growth in the ongoing National Oil and Gas Assessment Project (NOGA). Reserve growth is modeled by construction of a curve that represents annual fractional changes of recoverable oil and natural gas volumes (for fields and reservoirs), which provides growth factors. Growth factors are used to calculate forecast functions, which are sets of field- or reservoir-size multipliers. Comparisons of forecast functions were made based on datasets used to construct the models, field type, modeling method, and length of forecast span. Comparisons were also made between forecast functions based on field-level and reservoir-level growth, and between forecast functions based on older and newer data. The reserve-growth model used in the 1995 USGS National Assessment and the model currently used in the NOGA project provide forecast functions that yield similar estimates of potential additions to reserves. Both models are based on the Oil and Gas Integrated Field File from the Energy Information Administration (EIA), but different vintages of data (from 1977 through 1991 and 1977 through 1996, respectively). The model based on newer data can be used in place of the previous model, providing similar estimates of potential additions to reserves. Forecast functions for oil fields vary little from those for gas fields in these models; therefore, a single function may be used for both oil and gas fields, like that used in the USGS World Petroleum Assessment 2000. Forecast functions based on the field-level reserve growth model derived from the NRG Associates databases (from 1982 through 1998) differ from those derived from EIA databases (from 1977 through 1996). However, the difference may not be enough to preclude the use of the forecast functions derived from NRG data in place of the forecast functions derived from EIA data. Should the model derived from NRG data be used, separate forecast functions for oil fields and gas fields must be employed. The forecast function for oil fields from the model derived from NRG data varies significantly from that for gas fields, and a single function for both oil and gas fields may not be appropriate.

  18. Predicting in-patient falls in a geriatric clinic: a clinical study combining assessment data and simple sensory gait measurements.

    PubMed

    Marschollek, M; Nemitz, G; Gietzelt, M; Wolf, K H; Meyer Zu Schwabedissen, H; Haux, R

    2009-08-01

    Falls are among the predominant causes of morbidity and mortality in elderly persons and occur most often in geriatric clinics. Despite several studies that have identified parameters associated with elderly patients' fall risk, prediction models -- e.g., based on geriatric assessment data -- are currently not used on a regular basis. Furthermore, technical aids to objectively assess mobility-associated parameters are currently not used. Our aims were to assess group differences in clinical as well as common geriatric assessment data and sensory gait measurements between fallers and non-fallers in a geriatric sample, and to derive and compare two prediction models based on assessment data alone (model #1) and added sensory measurement data (model #2). For a sample of n=110 geriatric in-patients (81 women, 29 men) the following fall risk-associated assessments were performed: Timed 'Up & Go' (TUG) test, STRATIFY score and Barthel index. During the TUG test the subjects wore a triaxial accelerometer, and sensory gait parameters were extracted from the data recorded. Group differences between fallers (n=26) and non-fallers (n=84) were compared using Student's t-test. Two classification tree prediction models were computed and compared. Significant differences between the two groups were found for the following parameters: time to complete the TUG test, transfer item (Barthel), recent falls (STRATIFY), pelvic sway while walking and step length. Prediction model #1 (using common assessment data only) showed a sensitivity of 38.5% and a specificity of 97.6%; prediction model #2 (assessment data plus sensory gait parameters) performed with 57.7% and 100%, respectively. Significant differences between fallers and non-fallers among geriatric in-patients can be detected for several assessment subscores as well as parameters recorded by simple accelerometric measurements during a common mobility test. Existing geriatric assessment data may be used for falls prediction on a regular basis. Adding sensory data markedly improves the specificity of our test.
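    The reported sensitivities and specificities follow directly from the confusion-matrix definitions. The counts below are reconstructed from the reported percentages for model #1 (26 fallers, 84 non-fallers), so they are an inference, not data taken from the paper:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Model #1: 10 of 26 fallers correctly flagged, 82 of 84 non-fallers cleared
# (these counts reproduce the reported 38.5% sensitivity, 97.6% specificity).
se, sp = sens_spec(tp=10, fn=16, tn=82, fp=2)
```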

  19. Full uncertainty quantification of N2O and NO emissions using the biogeochemical model LandscapeDNDC on site and regional scale

    NASA Astrophysics Data System (ADS)

    Haas, Edwin; Santabarbara, Ignacio; Kiese, Ralf; Butterbach-Bahl, Klaus

    2017-04-01

    Numerical simulation models are increasingly used to estimate greenhouse gas emissions at site to regional / national scale and are outlined as the most advanced methodology (Tier 3) in the framework of UNFCCC reporting. Process-based models incorporate the major processes of the carbon and nitrogen cycle of terrestrial ecosystems and are thus thought to be widely applicable at various conditions and spatial scales. Process-based modelling requires high spatial resolution input data on soil properties, climate drivers and management information. The acceptance of model-based inventory calculations depends on the assessment of the inventory's uncertainty (model, input data and parameter induced uncertainties). In this study we fully quantify the uncertainty in modelling soil N2O and NO emissions from arable, grassland and forest soils using the biogeochemical model LandscapeDNDC. We address model induced uncertainty (MU) by contrasting two different soil biogeochemistry modules within LandscapeDNDC. The parameter induced uncertainty (PU) was assessed by using joint parameter distributions for key parameters describing microbial C and N turnover processes as obtained by different Bayesian calibration studies for each model configuration. Input data induced uncertainty (DU) was addressed by Bayesian calibration of soil properties, climate drivers and agricultural management practices data. For the MU, DU and PU we performed several hundred simulations each to contribute to the individual uncertainty assessment. For the overall uncertainty quantification we assessed the model prediction probability across sampled sets of input data and parameter distributions. Statistical analysis of the simulation results has been used to quantify the overall full uncertainty of the modelling approach. With this study we can contrast the variation in model results to the different sources of uncertainty for each ecosystem. Further, we have been able to perform a full uncertainty analysis for modelling N2O and NO emissions from arable, grassland and forest soils, necessary for the comprehensibility of modelling results. We have applied the methodology to assess the overall modelling uncertainty of a regional N2O and NO emissions inventory for the state of Saxony, Germany.
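    The sampling-based uncertainty assessment described above can be illustrated schematically: draw parameter and input values from their (calibrated) distributions, run the model once per draw, and summarise the spread of the outputs. The stand-in flux model and all distributions below are invented for illustration; they are not LandscapeDNDC:

```python
import numpy as np

rng = np.random.default_rng(42)

def emission_model(rate_constant, substrate, temperature_factor):
    """Stand-in for a biogeochemical flux calculation (purely illustrative)."""
    return rate_constant * substrate * temperature_factor

n = 5000
k = rng.lognormal(mean=np.log(0.02), sigma=0.3, size=n)  # parameter uncertainty (PU)
s = rng.normal(loc=120.0, scale=15.0, size=n)            # input-data uncertainty (DU)
t = rng.uniform(0.8, 1.2, size=n)                        # driver uncertainty
flux = emission_model(k, s, t)                           # one model run per sample
lo, hi = np.percentile(flux, [2.5, 97.5])                # 95% uncertainty band
```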

  20. Bayesian Revision of Residual Detection Power

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard

    2013-01-01

    This paper addresses some issues with quality assessment and quality assurance in response surface modeling experiments executed in wind tunnels. The role of data volume on quality assurance for response surface models is reviewed. Specific wind tunnel response surface modeling experiments are considered for which apparent discrepancies exist between fit quality expectations based on implemented quality assurance tactics, and the actual fit quality achieved in those experiments. These discrepancies are resolved by using Bayesian inference to account for certain imperfections in the assessment methodology. Estimates of the fraction of out-of-tolerance model predictions based on traditional frequentist methods are revised to account for uncertainty in the residual assessment process. The number of sites in the design space for which residuals are out of tolerance is seen to exceed the number of sites where the model actually fails to fit the data. A method is presented to estimate how much of the design space is inadequately modeled by low-order polynomial approximations to the true but unknown underlying response function.
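    The core correction can be illustrated with a simple inversion: if the residual check flags sites with known sensitivity and a nonzero false-positive rate, the observed flag rate overstates the true failure fraction, and Bayes' rule recovers it. The rates below are hypothetical, not the paper's numbers:

```python
def revised_failure_fraction(p_obs, sensitivity, specificity):
    """Invert p_obs = se * p + (1 - sp) * (1 - p) to recover the true
    fraction p of design-space sites where the model actually fails."""
    fpr = 1.0 - specificity
    return (p_obs - fpr) / (sensitivity - fpr)

# Hypothetical: 12% of residuals flagged out-of-tolerance by an assessment
# procedure with 90% sensitivity and 95% specificity.
p_true = revised_failure_fraction(0.12, 0.90, 0.95)   # about 0.082
```

    The revised fraction is smaller than the raw flag rate, matching the paper's observation that flagged sites outnumber sites where the model truly fails.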

  1. A Modeling Approach for Burn Scar Assessment Using Natural Features and Elastic Property

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsap, L V; Zhang, Y; Goldgof, D B

    2004-04-02

    A modeling approach is presented for quantitative burn scar assessment. Emphases are given to: (1) constructing a finite element model from natural image features with an adaptive mesh, and (2) quantifying the Young's modulus of scars using the finite element model and the regularization method. A set of natural point features is extracted from the images of burn patients. A Delaunay triangle mesh is then generated that adapts to the point features. A 3D finite element model is built on top of the mesh with the aid of range images providing the depth information. The Young's modulus of scars is quantified with a simplified regularization functional, assuming that knowledge of the scar's geometry is available. The consistency between the Relative Elasticity Index and the physician's rating based on the Vancouver Scale (a relative scale used to rate burn scars) indicates that the proposed modeling approach has high potential for image-based quantitative burn scar assessment.

  2. A probabilistic model for accidental cargo oil outflow from product tankers in a ship-ship collision.

    PubMed

    Goerlandt, Floris; Montewka, Jakub

    2014-02-15

    In risk assessment of maritime transportation, estimation of accidental oil outflow from tankers is important for assessing environmental impacts. However, there typically is limited data concerning the specific structural design and tank arrangement of ships operating in a given area. Moreover, there is uncertainty about the accident scenarios potentially emerging from ship encounters. This paper proposes a Bayesian network (BN) model for reasoning under uncertainty for the assessment of accidental cargo oil outflow in a ship-ship collision where a product tanker is struck. The BN combines a model linking impact scenarios to damage extent with a model for estimating the tank layouts based on limited information regarding the ship. The methodology for constructing the model is presented and output for two accident scenarios is shown. The discussion elaborates on the issue of model validation, both in terms of the BN and in light of the adopted uncertainty/bias-based risk perspective. Copyright © 2013 The Authors. Published by Elsevier Ltd. All rights reserved.
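    A discrete Bayesian network of this kind reduces to chained conditional probability tables. A toy two-stage version, far simpler than the paper's model and with every probability invented, shows the marginalisation step:

```python
# Toy BN: collision scenario -> hull damage extent -> cargo outflow category.
# All probabilities are invented for illustration.
p_damage = {  # P(damage | scenario)
    "low_energy":  {"none": 0.7, "hull_breach": 0.25, "tank_rupture": 0.05},
    "high_energy": {"none": 0.2, "hull_breach": 0.45, "tank_rupture": 0.35},
}
p_outflow = {  # P(outflow | damage)
    "none":         {"no_spill": 1.0, "minor": 0.0, "major": 0.0},
    "hull_breach":  {"no_spill": 0.5, "minor": 0.4, "major": 0.1},
    "tank_rupture": {"no_spill": 0.1, "minor": 0.4, "major": 0.5},
}

def outflow_distribution(scenario):
    """Marginalise over damage: P(outflow) = sum_d P(outflow|d) * P(d|scenario)."""
    dist = {"no_spill": 0.0, "minor": 0.0, "major": 0.0}
    for damage, pd in p_damage[scenario].items():
        for outflow, po in p_outflow[damage].items():
            dist[outflow] += pd * po
    return dist

high = outflow_distribution("high_energy")
```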

  3. Conceptual Framework for Trait-Based Ecological Risk Assessment for Wildlife Populations Exposed to Pesticides

    EPA Science Inventory

    Between screening level risk assessments and complex ecological models, a need exists for practical identification of risk based on general information about species, chemicals, and exposure scenarios. Several studies have identified demographic, biological, and toxicological fa...

  4. Multiple flood vulnerability assessment approach based on fuzzy comprehensive evaluation method and coordinated development degree model.

    PubMed

    Yang, Weichao; Xu, Kui; Lian, Jijian; Bin, Lingling; Ma, Chao

    2018-05-01

    Flood is a serious challenge that increasingly affects residents as well as policymakers. Flood vulnerability assessment is becoming increasingly relevant worldwide. The purpose of this study is to develop an approach to reveal the relationship between exposure, sensitivity and adaptive capacity for better flood vulnerability assessment, based on the fuzzy comprehensive evaluation method (FCEM) and coordinated development degree model (CDDM). The approach is organized into three parts: establishment of an index system; assessment of exposure, sensitivity and adaptive capacity; and multiple flood vulnerability assessment. A hydrodynamic model and statistical data are employed for the establishment of the index system; FCEM is used to evaluate exposure, sensitivity and adaptive capacity; and CDDM is applied to express the relationship of the three components of vulnerability. Six multiple flood vulnerability types and four levels are proposed to assess flood vulnerability from multiple perspectives. The approach is then applied to assess the spatial distribution of flood vulnerability in Hainan's eastern area, China. Based on the results of multiple flood vulnerability, a decision-making process for rational allocation of limited resources is proposed and applied to the study area. The study shows that multiple flood vulnerability assessment can evaluate vulnerability more completely, and gives decision makers more comprehensive information on which to base decisions. In summary, this study provides a new way for flood vulnerability assessment and disaster prevention decision-making. Copyright © 2018 Elsevier Ltd. All rights reserved.
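
    The two building blocks can be sketched in a few lines: a weighted-mean fuzzy comprehensive evaluation, and one common form of the coupling coordination degree. All weights, membership grades and scores below are invented; the paper's index system and coefficients differ.

```python
import math

def fcem_score(weights, membership):
    """Fuzzy comprehensive evaluation via the weighted-mean operator
    (other fuzzy aggregation operators are possible)."""
    return sum(w * m for w, m in zip(weights, membership))

def coordination_degree(scores):
    """One common form of the coupling coordination degree D = sqrt(C * T):
    C measures how balanced the subsystem scores are (1 when all equal),
    T is their mean development level."""
    n = len(scores)
    mean = sum(scores) / n
    c = (math.prod(scores) / mean ** n) ** (1.0 / n)  # coupling degree
    t = mean                                          # development level
    return math.sqrt(c * t)

exposure    = fcem_score([0.5, 0.3, 0.2], [0.8, 0.6, 0.4])  # hypothetical
sensitivity = 0.55
adaptive    = 0.50
print(round(coordination_degree([exposure, sensitivity, adaptive]), 3))
```

    Classifying D and the relative ordering of the three components yields vulnerability types and levels in the spirit of the paper's six types and four levels.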

  5. Assessment of quantitative structure-activity relationship of toxicity prediction models for Korean chemical substance control legislation

    PubMed Central

    Kim, Kwang-Yon; Shin, Seong Eun; No, Kyoung Tai

    2015-01-01

    Objectives For successful adoption of legislation controlling registration and assessment of chemical substances, it is important to obtain sufficient toxicological experimental evidence and other related information. It is also essential to obtain a sufficient number of predicted risk and toxicity results. In particular, methods used in predicting toxicities of chemical substances during acquisition of required data ultimately become an economic means of dealing with new substances in the future. Although the need for such methods is gradually increasing, the required information about their reliability and applicability range has not been systematically provided. Methods There are various representative environmental and human toxicity models based on quantitative structure-activity relationships (QSAR). Here, we selected 10 representative QSAR-based prediction models, and the associated information, that can make predictions about substances that are expected to be regulated. We used models that predict and confirm the usability of the information expected to be collected and submitted according to the legislation. After collecting and evaluating each predictive model and the relevant data, we prepared methods for quantifying the scientific validity and reliability, which are essential conditions for using predictive models. Results We calculated predicted values for the models. Furthermore, we deduced and compared the adequacies of the models using the Alternative non-testing method assessed for Registration, Evaluation, Authorization, and Restriction of Chemicals scoring system, and deduced the applicability domains for each model. Additionally, we calculated and compared inclusion rates of substances expected to be regulated, to confirm the applicability. Conclusions We evaluated and compared the data, adequacy, and applicability of our selected QSAR-based toxicity prediction models, and included them in a database.
Based on this data, we aimed to construct a system that can be used with predicted toxicity results. Furthermore, by presenting the suitability of individual predicted results, we aimed to provide a foundation that could be used in actual assessments and regulations. PMID:26206368

  6. High-resolution modeling assessment of tidal stream resource in Western Passage of Maine, USA

    NASA Astrophysics Data System (ADS)

    Yang, Zhaoqing; Wang, Taiping; Feng, Xi; Xue, Huijie; Kilcher, Levi

    2017-04-01

    Although significant efforts have been taken to assess the maximum potential of tidal stream energy at system-wide scale, accurate assessment of tidal stream energy resource at project design scale requires detailed hydrodynamic simulations using high-resolution three-dimensional (3-D) numerical models. Extended model validation against high-quality measured data is essential to minimize the uncertainties of the resource assessment. Western Passage in the State of Maine has been identified as one of the top-ranking sites for tidal stream energy development in U.S. coastal waters, based on a number of criteria including tidal power density, market value and transmission distance. This study presents an on-going modeling effort for simulating the tidal hydrodynamics in Western Passage using the 3-D unstructured-grid Finite Volume Community Ocean Model (FVCOM). The model domain covers a large region including the entire Bay of Fundy, with grid resolution varying from 20 m in the Western Passage to approximately 1000 m along the open boundary near the mouth of the Bay of Fundy. Preliminary model validation was conducted using existing NOAA measurements within the model domain. Spatial distributions of tidal power density were calculated, and extractable tidal energy was estimated using a tidal turbine module embedded in FVCOM under different tidal farm scenarios. Additional field measurements to characterize the resource and support model validation are discussed. This study provides an example of high-resolution resource assessment based on the guidance recommended by the International Electrotechnical Commission Technical Specification.
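
    The core quantity behind tidal power density maps is simple: the kinetic power flux per unit swept area, P = ½ρ|u|³. A minimal sketch (the current speed used here is hypothetical; the paper derives speeds from the validated 3-D FVCOM flow field):

```python
RHO_SEAWATER = 1025.0  # kg/m^3, a typical seawater density

def tidal_power_density(speed_m_s):
    """Kinetic power density of a tidal stream, P = 0.5 * rho * |u|^3, in W/m^2."""
    return 0.5 * RHO_SEAWATER * speed_m_s ** 3

# Hypothetical 2.5 m/s peak spring-tide current:
print(round(tidal_power_density(2.5) / 1000.0, 2), "kW/m^2")
```

    The cubic dependence on speed is why small errors in the modeled currents translate into large errors in the resource estimate, and why extended model validation matters.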

  7. 75 FR 2523 - Office of Innovation and Improvement; Overview Information; Arts in Education Model Development...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-15

    ... that is based on rigorous scientifically based research methods to assess the effectiveness of a...) Relies on measurements or observational methods that provide reliable and valid data across evaluators... of innovative, cohesive models that are based on research and have demonstrated that they effectively...

  8. Perspectives to performance of environment and health assessments and models--from outputs to outcomes?

    PubMed

    Pohjola, Mikko V; Pohjola, Pasi; Tainio, Marko; Tuomisto, Jouni T

    2013-06-26

    The calls for knowledge-based policy and policy-relevant research invoke a need to evaluate and manage environment and health assessments and models according to their societal outcomes. This review explores how well the existing approaches to assessment and model performance serve this need. The perspectives to assessment and model performance in the scientific literature can be called: (1) quality assurance/control, (2) uncertainty analysis, (3) technical assessment of models, (4) effectiveness and (5) other perspectives, according to what is primarily seen to constitute the goodness of assessments and models. The categorization is not strict and methods, tools and frameworks in different perspectives may overlap. However, altogether it seems that most approaches to assessment and model performance are relatively narrow in their scope. The focus in most approaches is on the outputs and making of assessments and models. Practical application of the outputs and the consequential outcomes are often left unaddressed. It appears that more comprehensive approaches that combine the essential characteristics of different perspectives are needed. This necessitates a better account of the mechanisms of collective knowledge creation and the relations between knowledge and practical action. Some new approaches to assessment, modeling and their evaluation and management span the chain from knowledge creation to societal outcomes, but the complexity of evaluating societal outcomes remains a challenge.

  9. Patient or physician preferences for decision analysis: the prenatal genetic testing decision.

    PubMed

    Heckerling, P S; Verp, M S; Albert, N

    1999-01-01

    The choice between amniocentesis and chorionic villus sampling for prenatal genetic testing involves tradeoffs of the benefits and risks of the tests. Decision analysis is a method of explicitly weighing such tradeoffs. The authors examined the relationship between prenatal test choices made by patients and the choices prescribed by decision-analytic models based on their preferences, and separate models based on the preferences of their physicians. Preferences were assessed using written scenarios describing prenatal testing outcomes, and were recorded on linear rating scales. After adjustment for sociodemographic and obstetric confounders, test choice was significantly associated with the choice of decision models based on patient preferences (odds ratio 4.44; CI, 2.53 to 7.78), but not with the choice of models based on the preferences of the physicians (odds ratio 1.60; CI, 0.79 to 3.26). Agreement between decision analyses based on patient preferences and on physician preferences was little better than chance (kappa = 0.085+/-0.063). These results were robust both to changes in the decision-analytic probabilities and to changes in the model structure itself to simulate non-expected utility decision rules. The authors conclude that patient but not physician preferences, incorporated in decision models, correspond to the choice of amniocentesis or chorionic villus sampling made by the patient. Nevertheless, because patient preferences were assessed after referral for genetic testing, prospective preference-assessment studies will be necessary to confirm this association.
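
    The decision-analytic prescription reduces to an expected-utility comparison over each test's outcome tree. A minimal sketch with invented probabilities and utilities (the study elicited patient- and physician-specific utilities on linear rating scales, over richer outcome trees):

```python
# All probabilities and utilities below are invented for illustration.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs; probabilities sum to 1."""
    return sum(p * u for p, u in outcomes)

# Hypothetical two-branch trees: procedure-related loss vs. uneventful testing
amnio = [(0.005, 0.0), (0.995, 0.90)]
cvs   = [(0.010, 0.0), (0.990, 0.95)]  # earlier diagnosis valued higher here

choice = "amniocentesis" if expected_utility(amnio) > expected_utility(cvs) else "CVS"
print(choice)
```

    Swapping in a different person's utilities can flip the prescribed choice, which is exactly why patient-preference and physician-preference models can disagree.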

  10. Consensus modeling to develop the farmers' market readiness assessment and decision instrument.

    PubMed

    Lee, Eunlye; Dalton, Jarrod; Ngendahimana, David; Bebo, Pat; Davis, Ashley; Remley, Daniel; Smathers, Carol; Freedman, Darcy A

    2017-09-01

    Nutrition-related policy, system, and environmental (PSE) interventions such as farmers' markets have been recommended as effective strategies for promoting healthy diet for chronic disease prevention. Tools are needed to assess community readiness and capacity factors influencing successful farmers' market implementation among diverse practitioners in different community contexts. We describe a multiphase consensus modeling approach used to develop a diagnostic tool for assessing readiness and capacity to implement farmers' market interventions among public health and community nutrition practitioners working with low-income populations in diverse contexts. Modeling methods included the following: phase 1, qualitative study with community stakeholders to explore facilitators and barriers influencing successful implementation of farmers' market interventions in low-income communities; phase 2, development of indicators based on operationalization of qualitative findings; phase 3, assessment of relevance and importance of indicators and themes through consensus conference with expert panel; phase 4, refinement of indicators based on consensus conference; and phase 5, pilot test of the assessment tool. Findings illuminate a range of implementation factors influencing farmers' market PSE interventions and offer guidance for tailoring intervention delivery based on levels of community, practitioner, and organizational readiness and capacity.

  11. Improving the Validity of Activity of Daily Living Dependency Risk Assessment

    PubMed Central

    Clark, Daniel O.; Stump, Timothy E.; Tu, Wanzhu; Miller, Douglas K.

    2015-01-01

    Objectives Efforts to prevent activity of daily living (ADL) dependency may be improved through models that assess older adults’ dependency risk. We evaluated whether cognition and gait speed measures improve the predictive validity of interview-based models. Method Participants were 8,095 self-respondents in the 2006 Health and Retirement Survey who were aged 65 years or over and independent in five ADLs. Incident ADL dependency was determined from the 2008 interview. Models were developed using random 2/3rd cohorts and validated in the remaining 1/3rd. Results Compared to a c-statistic of 0.79 in the best interview model, the model including cognitive measures had c-statistics of 0.82 and 0.80 while the best fitting gait speed model had c-statistics of 0.83 and 0.79 in the development and validation cohorts, respectively. Conclusion Two relatively brief models, one that requires an in-person assessment and one that does not, had excellent validity for predicting incident ADL dependency but did not significantly improve the predictive validity of the best fitting interview-based models. PMID:24652867
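
    The c-statistics quoted above can be read as the probability that a randomly chosen case (incident ADL dependency) receives a higher predicted risk than a randomly chosen non-case. A direct pairwise computation, with made-up predictions:

```python
def c_statistic(risks, events):
    """Concordance over all case/non-case pairs; ties count 0.5."""
    pairs = concordant = 0.0
    for r_case, e_case in zip(risks, events):
        if e_case != 1:
            continue
        for r_ctrl, e_ctrl in zip(risks, events):
            if e_ctrl != 0:
                continue
            pairs += 1
            if r_case > r_ctrl:
                concordant += 1
            elif r_case == r_ctrl:
                concordant += 0.5
    return concordant / pairs

risks  = [0.9, 0.8, 0.4, 0.3, 0.2]  # hypothetical predicted risks
events = [1,   0,   1,   0,   0]    # 1 = incident ADL dependency
print(round(c_statistic(risks, events), 3))
```

    On this scale, the reported values around 0.79-0.83 indicate strong discrimination, which is why adding cognition or gait speed moved the c-statistic only marginally.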

  12. A web GIS based integrated flood assessment modeling tool for coastal urban watersheds

    NASA Astrophysics Data System (ADS)

    Kulkarni, A. T.; Mohanty, J.; Eldho, T. I.; Rao, E. P.; Mohan, B. K.

    2014-03-01

    Urban flooding has become an increasingly important issue in many parts of the world. In this study, an integrated flood assessment model (IFAM) is presented for coastal urban flood simulation. A web-based GIS framework has been adopted to organize the spatial datasets for the study area considered and to run the model within this framework. The integrated flood model consists of a mass-balance-based 1-D overland flow model, a 1-D finite element channel flow model based on the diffusion wave approximation, and a quasi-2-D raster flood inundation model based on the continuity equation. The model code is written in MATLAB and the application is integrated within a web GIS server product, viz. Web Gram Server™ (WGS), developed at IIT Bombay using Java, JSP and JQuery technologies. Its user interface is developed using OpenLayers and the attribute data are stored in the MySQL open source DBMS. The model is integrated within WGS and is called via JavaScript. The application has been demonstrated for two coastal urban watersheds of Navi Mumbai, India. Simulated flood extents for the extreme rainfall event of 26 July 2005 in the two urban watersheds of Navi Mumbai city are presented and discussed. The study demonstrates the effectiveness of the flood simulation tool in a web GIS environment to facilitate data access and visualization of GIS datasets and simulation results.

  13. A pilot study to assess feasibility of value based pricing in Cyprus through pharmacoeconomic modelling and assessment of its operational framework: sorafenib for second line renal cell cancer.

    PubMed

    Petrou, Panagiotis; Talias, Michael A

    2014-01-01

    The continuing increase of pharmaceutical expenditure calls for new approaches to the pricing and reimbursement of pharmaceuticals. Value-based pricing of pharmaceuticals is emerging as a useful tool and possesses theoretical attributes that can help health systems cope with rising pharmaceutical expenditure. The aims were to assess the feasibility of introducing a value-based pricing scheme for pharmaceuticals in Cyprus and to explore its operational framework. A probabilistic Markov chain Monte Carlo model was created to simulate progression of advanced renal cell cancer for comparison of sorafenib to standard best supportive care. A literature review was performed and efficacy data were transferred from a published landmark trial, while official price lists and clinical guidelines from the Cyprus Ministry of Health were utilised for cost calculation. Based on a proposed willingness-to-pay threshold, the maximum price of sorafenib for the indication of second-line renal cell cancer was assessed. Sorafenib's value-based price was found to be significantly lower than its current reference price. The feasibility of value-based pricing is documented, and pharmacoeconomic modelling can lead to robust results. Integration of value and affordability in the price are its main advantages, which have to be weighed against the lack of documentation for several theoretical parameters that influence the outcome. Smaller countries such as Cyprus may experience adversities in establishing and sustaining the essential structures for this scheme.
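
    The deterministic core of such a model is a Markov cohort simulation over health states. A minimal sketch with invented states, transition probabilities and utilities (the actual model was probabilistic and calibrated to trial data for sorafenib versus best supportive care):

```python
# Monthly transition probabilities between [stable, progressed, dead] (invented):
P = [
    [0.90, 0.07, 0.03],
    [0.00, 0.85, 0.15],
    [0.00, 0.00, 1.00],
]
UTILITIES = [0.75, 0.50, 0.0]  # quality-of-life weight per state (invented)

def cohort_qalys(months=36):
    """Run the cohort forward and accumulate quality-adjusted life years."""
    cohort = [1.0, 0.0, 0.0]  # everyone starts progression-free
    qalys = 0.0
    for _ in range(months):
        qalys += sum(c * u for c, u in zip(cohort, UTILITIES)) / 12.0
        cohort = [sum(cohort[i] * P[i][j] for i in range(3)) for j in range(3)]
    return qalys

print(round(cohort_qalys(), 3))
```

    A value-based price then caps the drug's cost so that the incremental cost per QALY gained over best supportive care stays at or below the willingness-to-pay threshold.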

  14. Using toxicokinetic-toxicodynamic modeling as an acute risk assessment refinement approach in vertebrate ecological risk assessment.

    PubMed

    Ducrot, Virginie; Ashauer, Roman; Bednarska, Agnieszka J; Hinarejos, Silvia; Thorbek, Pernille; Weyman, Gabriel

    2016-01-01

    Recent guidance identified toxicokinetic-toxicodynamic (TK-TD) modeling as a relevant approach for risk assessment refinement. Yet, its added value compared to other refinement options is not detailed, and how to conduct the modeling appropriately is not explained. This case study addresses these issues through 2 examples of individual-level risk assessment for 2 hypothetical plant protection products: 1) evaluating the risk for small granivorous birds and small omnivorous mammals of a single application, as a seed treatment in winter cereals, and 2) evaluating the risk for fish after a pulsed treatment in the edge-of-field zone. Using acute test data, we conducted the first tier risk assessment as defined in the European Food Safety Authority (EFSA) guidance. When first tier risk assessment highlighted a concern, refinement options were discussed. Cases where the use of models should be preferred over other existing refinement approaches were highlighted. We then practically conducted the risk assessment refinement by using 2 different models as examples. In example 1, a TK model accounting for toxicokinetics and relevant feeding patterns in the skylark and in the wood mouse was used to predict internal doses of the hypothetical active ingredient in individuals, based on relevant feeding patterns in an in-crop situation, and identify the residue levels leading to mortality. In example 2, a TK-TD model accounting for toxicokinetics, toxicodynamics, and relevant exposure patterns in the fathead minnow was used to predict the time-course of fish survival for relevant FOCUS SW exposure scenarios and identify which scenarios might lead to mortality. Models were calibrated using available standard data and implemented to simulate the time-course of internal dose of active ingredient or survival for different exposure scenarios. Simulation results were discussed and used to derive the risk assessment refinement endpoints used for decision-making.
Finally, we compared the "classical" risk assessment approach with the model-based approach. These comparisons showed that TK and TK-TD models can bring more realism to the risk assessment through the possibility to study realistic exposure scenarios and to simulate relevant mechanisms of effects (including delayed toxicity and recovery). Noticeably, using TK-TD models is currently the most relevant way to directly connect realistic exposure patterns to effects. We conclude with recommendations on how to properly use TK and TK-TD model in acute risk assessment for vertebrates. © 2015 SETAC.
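
    The TK-TD idea in example 2 can be sketched as a one-compartment toxicokinetic model driving a hazard-based toxicodynamic (GUTS-style stochastic death) model. All rate constants, the threshold and the exposure pulse below are invented:

```python
import math

# Invented parameters: uptake/elimination rates, killing rate, threshold.
K_IN, K_OUT = 0.5, 0.2       # 1/day
K_KILL, THRESHOLD = 0.3, 1.0

def survival(exposure, days=10.0, dt=0.01):
    """Euler integration of dCi/dt = K_IN*Cw(t) - K_OUT*Ci and
    S(t) = exp(-cumulative hazard), hazard = K_KILL * max(0, Ci - THRESHOLD)."""
    ci = cum_hazard = t = 0.0
    while t < days:
        ci += (K_IN * exposure(t) - K_OUT * ci) * dt
        cum_hazard += K_KILL * max(0.0, ci - THRESHOLD) * dt
        t += dt
    return math.exp(-cum_hazard)

# A 2-day exposure pulse of external concentration 2.0, then clean water:
pulse = lambda t: 2.0 if t < 2.0 else 0.0
print(round(survival(pulse), 3))
```

    Because the internal dose lags the external pulse and recovery follows elimination, this structure is what lets realistic exposure patterns (e.g. FOCUS SW time series) be connected directly to predicted effects.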

  15. Site descriptive modeling as a part of site characterization in Sweden - Concluding the surface based investigations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andersson, Johan; Winberg, Anders; Skagius, Kristina

    The Swedish Nuclear Fuel and Waste Management Co., SKB, is currently finalizing its surface based site investigations for the final repository for spent nuclear fuel in the municipalities of Östhammar (the Forsmark area) and Oskarshamn (the Simpevarp/Laxemar area). The investigation data are assessed into a Site Descriptive Model, constituting a synthesis of geology, rock mechanics, thermal properties, hydrogeology, hydro-geochemistry, transport properties and a surface system description. Site data constitute a wide range of different measurement results. These data both need to be checked for consistency and to be interpreted into a format more amenable for three-dimensional modeling. The three-dimensional modeling (i.e. estimating the distribution of parameter values in space) is made in a sequence where the geometrical framework is taken from the geological models and in turn used by the rock mechanics, thermal and hydrogeological modeling. These disciplines in turn are partly interrelated, and also provide feedback to the geological modeling, especially if the geological description appears unreasonable when assessed together with the other data. Procedures for assessing the uncertainties and the confidence in the modeling have been developed during the course of the site modeling. These assessments also provide key input to the completion of the site investigation program. (authors)

  16. Assessment and prediction of air quality using fuzzy logic and autoregressive models

    NASA Astrophysics Data System (ADS)

    Carbajal-Hernández, José Juan; Sánchez-Fernández, Luis P.; Carrasco-Ochoa, Jesús A.; Martínez-Trinidad, José Fco.

    2012-12-01

    In recent years, artificial intelligence methods have been used for the treatment of environmental problems. This work presents two models for the assessment and prediction of air quality. First, we develop a new computational model for air quality assessment in order to evaluate toxic compounds that can harm sensitive people in urban areas, affecting their normal activities. In this model we propose a Sigma operator to statistically assess air quality parameters using their historical data and to determine their negative impact on air quality based on toxicity limits, frequency averages and deviations of toxicological tests. We also introduce a fuzzy inference system to perform parameter classification using a reasoning process, integrating the parameters into an air quality index that describes the pollution level in five stages: excellent, good, regular, bad and danger. The second model predicts air quality concentrations using an autoregressive model, providing a predicted air quality index based on the fuzzy inference system previously developed. Using data from the Mexico City Atmospheric Monitoring System, we perform a comparison among air quality indices developed by environmental agencies and similar models. Our results show that our models are an appropriate tool for assessing site pollution and for providing guidance to improve contingency actions in urban areas.
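
    Two of the described ingredients in miniature: triangular fuzzy membership functions for a normalized pollutant concentration, and max-membership classification into the five stages. The breakpoints below are invented; the paper derives them from toxicity limits and the statistics of the Sigma operator.

```python
def triangular(x, a, b, c):
    """Membership of x in a triangular fuzzy set with support (a, c) and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

STAGES = ["excellent", "good", "regular", "bad", "danger"]
# (a, b, c) breakpoints per stage for a concentration normalized to [0, 1]:
SETS = [(-0.25, 0.0, 0.25), (0.0, 0.25, 0.5), (0.25, 0.5, 0.75),
        (0.5, 0.75, 1.0), (0.75, 1.0, 1.25)]

def classify(x):
    """Assign the stage with the highest membership grade."""
    grades = [triangular(x, *s) for s in SETS]
    return STAGES[grades.index(max(grades))]

print(classify(0.62))
```

    A full fuzzy inference system would combine such grades across several pollutants with rules before defuzzifying into a single index.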

  17. Quality assessment of protein model-structures using evolutionary conservation.

    PubMed

    Kalman, Matan; Ben-Tal, Nir

    2010-05-15

    Programs that evaluate the quality of a protein structural model are important both for validating the structure determination procedure and for guiding the model-building process. Such programs are based on properties of native structures that are generally not expected for faulty models. One such property, which is rarely used for automatic structure quality assessment, is the tendency for conserved residues to be located at the structural core and for variable residues to be located at the surface. We present ConQuass, a novel quality assessment program based on the consistency between the model structure and the protein's conservation pattern. We show that it can identify problematic structural models, and that the scores it assigns to the server models in CASP8 correlate with the similarity of the models to the native structure. We also show that when the conservation information is reliable, the method's performance is comparable and complementary to that of the other single-structure quality assessment methods that participated in CASP8 and that do not use additional structural information from homologs. A perl implementation of the method, as well as the various perl and R scripts used for the analysis are available at http://bental.tau.ac.il/ConQuass/. nirb@tauex.tau.ac.il Supplementary data are available at Bioinformatics online.
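
    The intuition behind the score can be illustrated with a simple consistency check: in a plausible model, conserved residues tend to be buried (low solvent accessibility), so conservation and accessibility should be negatively correlated. The values below are invented, and ConQuass's actual scoring is more involved than a plain correlation.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

conservation  = [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]  # high = evolutionarily conserved
accessibility = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]  # high = solvent exposed

r = pearson(conservation, accessibility)
print(round(r, 2))  # strongly negative for a conservation-consistent model
```

    A faulty model that places variable residues in the core would weaken or flip this relationship, which is the signal the method exploits.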

  18. Microsurgery Workout: A Novel Simulation Training Curriculum Based on Nonliving Models.

    PubMed

    Rodriguez, Jose R; Yañez, Ricardo; Cifuentes, Ignacio; Varas, Julian; Dagnino, Bruno

    2016-10-01

    Currently, there are no valid training programs based solely on nonliving models. The authors aimed to develop and validate a microsurgery training program based on nonliving models and assess the transfer of skills to a live rat model. Postgraduate year-3 general surgery residents were assessed in a 17-session program, performing arterial and venous end-to-end anastomosis on ex vivo chicken models. Procedures were recorded and rated by two blinded experts using validated global and specific scales (objective structured assessment of technical skills) and a validated checklist. Operating times and patency rates were assessed. Hand-motion analysis was used to measure economy of movements. After training, residents performed an arterial and venous end-to-end anastomosis on live rats. Results were compared to six experienced surgeons in the same models. Values of p < 0.05 were considered statistically significant. Learning curves were achieved. Ten residents improved their median global and specific objective structured assessment of technical skills scores for artery [10 (range, 8 to 10) versus 28 (range, 27 to 29), p < 0.05; and 8 (range, 7 to 9) versus 28 (range, 27 to 28), p < 0.05] and vein [8 (range, 8 to 11) versus 28 (range, 27 to 28), p < 0.05; and 8 (range, 7 to 9) versus 28 (range, 27 to 29), p < 0.05]. Checklist scores also improved for both procedures (p < 0.05). Trainees were slower and less efficient than experienced surgeons (p < 0.05). In the living rat, patency rates at 30 minutes were 100 percent and 50 percent for artery and vein, respectively. Significant acquisition of microsurgical skills was achieved by trainees to a level similar to that of experienced surgeons. Acquired skills were transferred to a more complex live model.

  19. Development and implementation of a Bayesian-based aquifer vulnerability assessment in Florida

    USGS Publications Warehouse

    Arthur, J.D.; Wood, H.A.R.; Baker, A.E.; Cichon, J.R.; Raines, G.L.

    2007-01-01

    The Florida Aquifer Vulnerability Assessment (FAVA) was designed to provide a tool for environmental, regulatory, resource management, and planning professionals to facilitate protection of groundwater resources from surface sources of contamination. The FAVA project implements weights-of-evidence (WofE), a data-driven, Bayesian-probabilistic model to generate a series of maps reflecting relative aquifer vulnerability of Florida's principal aquifer systems. The vulnerability assessment process, from project design to map implementation is described herein in reference to the Floridan aquifer system (FAS). The WofE model calculates weighted relationships between hydrogeologic data layers that influence aquifer vulnerability and ambient groundwater parameters in wells that reflect relative degrees of vulnerability. Statewide model input data layers (evidential themes) include soil hydraulic conductivity, density of karst features, thickness of aquifer confinement, and hydraulic head difference between the FAS and the watertable. Wells with median dissolved nitrogen concentrations exceeding statistically established thresholds serve as training points in the WofE model. The resulting vulnerability map (response theme) reflects classified posterior probabilities based on spatial relationships between the evidential themes and training points. The response theme is subjected to extensive sensitivity and validation testing. Among the model validation techniques is calculation of a response theme based on a different water-quality indicator of relative recharge or vulnerability: dissolved oxygen. Successful implementation of the FAVA maps was facilitated by the overall project design, which included a needs assessment and iterative technical advisory committee input and review. Ongoing programs to protect Florida's springsheds have led to development of larger-scale WofE-based vulnerability assessments. 
Additional applications of the maps include land-use planning amendments and prioritization of land purchases to protect groundwater resources. © International Association for Mathematical Geology 2007.
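
    The WofE calculation for a single evidential theme reduces to a pair of log-likelihood ratios (the positive and negative weights) computed from cell counts. A sketch with invented counts; the FAVA model combines such weights across several themes into posterior probabilities:

```python
import math

def woe(n_cells, n_theme, n_train, n_both):
    """Positive/negative weights for one binary evidential theme.
    n_cells: total unit cells; n_theme: cells covered by the theme;
    n_train: training-point cells; n_both: training points on the theme."""
    p_e_d  = n_both / n_train                          # P(theme | training point)
    p_e_nd = (n_theme - n_both) / (n_cells - n_train)  # P(theme | background)
    w_plus  = math.log(p_e_d / p_e_nd)
    w_minus = math.log((1 - p_e_d) / (1 - p_e_nd))
    return w_plus, w_minus

# Hypothetical counts for a theme such as "high karst density":
wp, wm = woe(n_cells=10000, n_theme=2000, n_train=100, n_both=60)
print(round(wp, 2), round(wm, 2), round(wp - wm, 2))  # contrast = W+ - W-
```

    A large positive contrast (W+ minus W-) indicates the theme is strongly associated with the training wells, i.e. it carries evidence of vulnerability.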

  20. Assessment and Improvement of GOCE based Global Geopotential Models Using Wavelet Decomposition

    NASA Astrophysics Data System (ADS)

    Erol, Serdar; Erol, Bihter; Serkan Isik, Mustafa

    2016-07-01

    The contribution of recent Earth gravity field satellite missions, specifically the GOCE mission, has led to significant improvements in the quality of gravity field models in both accuracy and resolution. However, the performance and quality of each released model vary not only with spatial location on the Earth but also across the bands of the spectral expansion. Assessing global model performance through validation with in-situ data in different territories is therefore essential to clarify their local performance. In addition, spectral evaluation and quality assessment of the signal in each part of the spherical harmonic expansion spectrum are needed to quantify the commission error content of a model and to determine the optimal degree at which it yields the best results. These analyses also provide a perspective on, and comparison of, the global behavior of the models, and an opportunity to report their sequential improvement with mission developments and hence the contribution of new mission data. This study reviews spectral assessment results, against terrestrial data in Turkey, for the recently released GOCE-based global geopotential models DIR-R5 and TIM-R5, enhanced using EGM2008 as the reference model. Beyond reporting the GOCE mission's contribution to the models in Turkish territory, the aim is a possible improvement in the spectral quality of those bands of these models that are highly contaminated by noise, via wavelet decomposition. The analyses seek an optimal amount of improvement that conserves the useful component of the GOCE signal as much as possible while fusing the filtered GOCE-based models with EGM2008 in the appropriate spectral bands.
The investigation also contains an assessment of the coherence and correlation between Earth gravity field parameters (free-air gravity anomalies and geoid undulations) derived from the validated geopotential models and terrestrial data (GPS/levelling, terrestrial gravity observations, DTM, etc.), as well as the WGM2012 products. In conclusion, the numerical results clarify the performance of the assessed models in Turkish territory and verify the potential of wavelet decomposition for improving the geopotential models.

  1. Diagnostic indicators for integrated assessment models of climate policy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kriegler, Elmar; Petermann, Nils; Krey, Volker

    2015-01-01

    Integrated assessments of how climate policy interacts with energy-economic systems can be performed by a variety of models with different functional structures. This article proposes a diagnostic scheme that can be applied to a wide range of integrated assessment models to classify differences among models based on their carbon price responses. Model diagnostics can uncover patterns and provide insights into why, under a given scenario, certain types of models behave in observed ways. Such insights are informative since model behavior can have a significant impact on projections of climate change mitigation costs and other policy-relevant information. The authors propose diagnostic indicators to characterize model responses to carbon price signals and test these in a diagnostic study with 11 global models. Indicators describe the magnitude of emission abatement and the associated costs relative to a harmonized baseline, the relative changes in carbon intensity and energy intensity and the extent of transformation in the energy system. This study shows a correlation among indicators suggesting that models can be classified into groups based on common patterns of behavior in response to carbon pricing. Such a classification can help to more easily explain variations among policy-relevant model results.
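
    The simplest of the proposed indicator types can be sketched directly: relative abatement, the fractional reduction in cumulative emissions under a carbon-price scenario relative to the harmonized baseline. The emission trajectories below are illustrative, not from any of the 11 models:

```python
def relative_abatement(baseline_emissions, policy_emissions):
    """1 - (cumulative policy emissions / cumulative baseline emissions)."""
    return 1.0 - sum(policy_emissions) / sum(baseline_emissions)

baseline = [40, 42, 44, 46]  # GtCO2/yr over four periods, hypothetical
policy   = [38, 35, 30, 24]  # same periods under a carbon price, hypothetical

print(round(relative_abatement(baseline, policy), 3))
```

    Computed for every model under a harmonized scenario, such indicators become the coordinates on which models cluster into behavioral groups.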

  2. Environmental performances of coproducts. Application of Claiming-Based Allocation models to straw and vetiver biorefineries in an Indian context.

    PubMed

    Gnansounou, Edgard; Raman, Jegannathan Kenthorai

    2018-04-24

    Among the renewables, biofuels based on non-food crops and wastelands are essential for the transport sector to achieve a country's climate mitigation targets. With the growing interest in biorefineries, setting policy requirements for other coproducts along with biofuels is necessary to improve the product portfolio of a biorefinery, increase consumers' perception of bioproducts and push the technology forward. In this context, Claiming-Based Allocation models were used in a comparative life cycle assessment of multiple products from a wheat straw biorefinery and a vetiver biorefinery. The vetiver biorefinery shows promising greenhouse gas emission savings (181-213%) compared with the common crop-based lignocellulose (wheat straw) biorefinery. The Claiming-Based Allocation models help to identify the affordable allocation limit (0-80%) among the coproducts in order to achieve the individual prospective policy targets. Such models show promising application in multiproduct life cycle assessment studies where appropriate allocation is challenging for meeting the individual products' emission policy targets. Copyright © 2018 Elsevier Ltd. All rights reserved.

  3. Metrological analysis of a virtual flowmeter-based transducer for cryogenic helium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arpaia, P., E-mail: pasquale.arpaia@unina.it; Technology Department, European Organization for Nuclear Research; Girone, M., E-mail: mario.girone@cern.ch

    2015-12-15

    The metrological performance of a virtual flowmeter-based transducer for monitoring helium under cryogenic conditions is assessed. To this end, an uncertainty model of the transducer is presented, based mainly on a valve model exploiting a finite-element approach and on a virtual flowmeter model using the Sereg-Schlumberger method. The models are validated experimentally on a case study for helium monitoring in cryogenic systems at the European Organization for Nuclear Research (CERN). The impact of uncertainty sources on the transducer metrological performance is assessed by a sensitivity analysis, based on statistical experiment design and analysis of variance. In this way, the uncertainty sources most influencing the metrological performance of the transducer are singled out over the input range as a whole, at varying operating and setting conditions. This analysis turns out to be important for CERN cryogenics operation because the metrological design of the transducer is validated, and its components and working conditions with critical specifications for future improvements are identified.

  4. Empirical flow parameters - a tool for hydraulic model validity assessment : [summary].

    DOT National Transportation Integrated Search

    2013-10-01

    Hydraulic modeling assembles models based on generalizations of parameter values from textbooks, professional literature, computer program documentation, and engineering experience. Actual measurements adjacent to the model location are seldom availa...

  5. Program Development and External Assessment.

    ERIC Educational Resources Information Center

    Minnis, D. L.

    Although the development of new models is essential to the improvement of teacher preparation programs, the California system of accreditation of teacher education programs seems to hinder innovative program development and evaluation. External assessment in California is based on a discrepancy model which works best when applied to static…

  6. MODEL UNCERTAINTY ANALYSIS, FIELD DATA COLLECTION AND ANALYSIS OF CONTAMINATED VAPOR INTRUSION INTO BUILDINGS

    EPA Science Inventory

    To address uncertainty associated with the evaluation of vapor intrusion problems we are working on a three part strategy that includes: evaluation of uncertainty in model-based assessments; collection of field data and assessment of sites using EPA and state protocols.

  7. Notional Scoring for Technical Review Weighting As Applied to Simulation Credibility Assessment

    NASA Technical Reports Server (NTRS)

    Hale, Joseph Peter; Hartway, Bobby; Thomas, Danny

    2008-01-01

    NASA's Modeling and Simulation Standard requires a credibility assessment for critical engineering data produced by models and simulations. Credibility assessment is thus a "qualifying factor" in reporting results from simulation-based analysis. The degree to which assessors should be independent of the simulation developers, users and decision makers is a recurring question. This paper provides alternative "weighting algorithms" for calculating the value added by independence of the levels of technical review defined for the NASA Modeling and Simulation Standard.

  8. An introduction to the partial credit model for developing nursing assessments.

    PubMed

    Fox, C

    1999-11-01

    The partial credit model, which is a special case of the Rasch measurement model, was presented as a useful way to develop and refine complex nursing assessments. The advantages of the Rasch model over the classical psychometric model were presented including the lack of bias in the measurement process, the ability to highlight those items in need of refinement, the provision of information on congruence between the data and the model, and feedback on the usefulness of the response categories. The partial credit model was introduced as a way to develop complex nursing assessments such as performance-based assessments, because of the model's ability to accommodate a variety of scoring procedures. Finally, an application of the partial credit model was illustrated using the Practical Knowledge Inventory for Nurses, a paper-and-pencil instrument that measures on-the-job decision-making for nurses.
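
    The category probabilities of the partial credit model described above have a standard closed form; the ability and step-difficulty values below are illustrative, not taken from the Practical Knowledge Inventory for Nurses.

```python
import math

# Partial credit model (Rasch family): probability of scoring category x on an
# item with ordered "step difficulties" delta_1..delta_m, for a person with
# ability theta. Category 0 corresponds to the empty sum.

def pcm_probabilities(theta, deltas):
    """Return [P(X=0), ..., P(X=m)] for step difficulties `deltas`."""
    psi = [0.0]                          # cumulative sums of (theta - delta_j)
    for d in deltas:
        psi.append(psi[-1] + (theta - d))
    exps = [math.exp(p) for p in psi]
    total = sum(exps)
    return [e / total for e in exps]

# A 4-category item (3 steps) and a person of moderate ability:
probs = pcm_probabilities(theta=0.5, deltas=[-1.0, 0.0, 1.5])
print([round(p, 3) for p in probs])
```

    Because each item carries its own step difficulties, the model accommodates the variety of scoring procedures mentioned in the abstract.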

  9. Automated Assessment of Child Vocalization Development Using LENA.

    PubMed

    Richards, Jeffrey A; Xu, Dongxin; Gilkerson, Jill; Yapanel, Umit; Gray, Sharmistha; Paul, Terrance

    2017-07-12

    To produce a novel, efficient measure of children's expressive vocal development on the basis of automatic vocalization assessment (AVA), child vocalizations were automatically identified and extracted from audio recordings using Language Environment Analysis (LENA) System technology. Assessment was based on full-day audio recordings collected in a child's unrestricted, natural language environment. AVA estimates were derived using automatic speech recognition modeling techniques to categorize and quantify the sounds in child vocalizations (e.g., protophones and phonemes). These were expressed as phone and biphone frequencies, reduced to principal components, and inputted to age-based multiple linear regression models to predict independently collected criterion-expressive language scores. From these models, we generated vocal development AVA estimates as age-standardized scores and development age estimates. AVA estimates demonstrated strong statistical reliability and validity when compared with standard criterion expressive language assessments. Automated analysis of child vocalizations extracted from full-day recordings in natural settings offers a novel and efficient means to assess children's expressive vocal development. More research remains to identify specific mechanisms of operation.
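
    The modeling pipeline described (sound-category frequencies reduced to principal components, then fed to a linear regression against criterion scores) can be sketched roughly as below; the data are random placeholders, not LENA recordings, and the component count is an arbitrary assumption.

```python
import numpy as np

# Illustrative AVA-style pipeline: phone/biphone frequencies -> PCA -> linear
# regression predicting a criterion expressive-language score.

rng = np.random.default_rng(0)
freqs = rng.random((50, 20))          # 50 children x 20 phone/biphone frequencies
scores = rng.random(50) * 100         # criterion expressive-language scores

# PCA via SVD on the centered frequency matrix; keep 5 components.
centered = freqs - freqs.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
components = centered @ vt[:5].T

# Ordinary least squares: score ~ intercept + principal components.
X = np.column_stack([np.ones(len(scores)), components])
beta, *_ = np.linalg.lstsq(X, scores, rcond=None)
predicted = X @ beta                  # one predicted score per child
print(predicted.shape)
```

    In the study, such predictions are age-standardized to yield the AVA scores and development-age estimates.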

  10. Developing teachers' models for assessing students' competence in mathematical modelling through lesson study

    NASA Astrophysics Data System (ADS)

    Aydogan Yenmez, Arzu; Erbas, Ayhan Kursat; Cakiroglu, Erdinc; Alacaci, Cengiz; Cetinkaya, Bulent

    2017-08-01

    Applications and modelling have gained a prominent role in mathematics education reform documents and curricula. Thus, there is a growing need for studies focusing on the effective use of mathematical modelling in classrooms. Assessment is an integral part of using modelling activities in classrooms, since it allows teachers to identify and manage problems that arise in various stages of the modelling process. However, teachers' difficulties in assessing student modelling work are a challenge to be considered when implementing modelling in the classroom. Thus, the purpose of this study was to investigate how teachers' knowledge on generating assessment criteria for assessing student competence in mathematical modelling evolved through a professional development programme, which is based on a lesson study approach and modelling perspective. The data was collected with four teachers from two public high schools over a five-month period. The professional development programme included a cyclical process, with each cycle consisting of an introductory meeting, the implementation of a model-eliciting activity with students, and a follow-up meeting. The results showed that the professional development programme contributed to teachers' knowledge for generating assessment criteria on the products, and the observable actions that affect the modelling cycle.

  11. A Decision Analytic Approach to Exposure-Based Chemical Prioritization

    PubMed Central

    Mitchell, Jade; Pabon, Nicolas; Collier, Zachary A.; Egeghy, Peter P.; Cohen-Hubal, Elaine; Linkov, Igor; Vallero, Daniel A.

    2013-01-01

    The manufacture of novel synthetic chemicals has increased in volume and variety, but often the environmental and health risks are not fully understood in terms of toxicity and, in particular, exposure. While efforts to assess risks have generally been effective when sufficient data are available, the hazard and exposure data necessary to assess risks adequately are unavailable for the vast majority of chemicals in commerce. The US Environmental Protection Agency has initiated the ExpoCast Program to develop tools for rapid chemical evaluation based on potential for exposure. In this context, a model is presented in which chemicals are evaluated based on inherent chemical properties and behaviorally-based usage characteristics over the chemical’s life cycle. These criteria are assessed and integrated within a decision analytic framework, facilitating rapid assessment and prioritization for future targeted testing and systems modeling. A case study outlines the prioritization process using 51 chemicals. The results show a preliminary relative ranking of chemicals based on exposure potential. The strength of this approach is the ability to integrate relevant statistical and mechanistic data with expert judgment, allowing for an initial tier assessment that can further inform targeted testing and risk management strategies. PMID:23940664
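
    At its simplest, a decision analytic prioritization of this kind reduces to a weighted aggregation of normalized criterion scores; the chemicals, criteria, and weights below are invented placeholders, not ExpoCast data.

```python
# Toy weighted-sum prioritization: each chemical is scored on several
# exposure-relevant criteria (0-1 scale) and ranked by a weighted aggregate.

chemicals = {
    # (production volume, persistence, use proximity) -- illustrative criteria
    "chem_A": (0.9, 0.4, 0.8),
    "chem_B": (0.2, 0.9, 0.3),
    "chem_C": (0.6, 0.6, 0.6),
}
weights = (0.5, 0.2, 0.3)  # expert-judgment weights summing to 1

def priority(scores, weights):
    return sum(s * w for s, w in zip(scores, weights))

ranked = sorted(chemicals, key=lambda c: priority(chemicals[c], weights), reverse=True)
print(ranked)
```

    Real frameworks of this type elicit the weights from experts and propagate uncertainty in the criterion scores, but the aggregation step follows this pattern.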

  12. Modelling NO2 concentrations at the street level in the GAINS integrated assessment model: projections under current legislation

    NASA Astrophysics Data System (ADS)

    Kiesewetter, G.; Borken-Kleefeld, J.; Schöpp, W.; Heyes, C.; Thunis, P.; Bessagnet, B.; Terrenoire, E.; Gsella, A.; Amann, M.

    2014-01-01

    NO2 concentrations at the street level are a major concern for urban air quality in Europe and have been regulated under the EU Thematic Strategy on Air Pollution. Despite the legal requirements, limit values are exceeded at many monitoring stations with little or no improvement in recent years. In order to assess the effects of future emission control regulations on roadside NO2 concentrations, a downscaling module has been implemented in the GAINS integrated assessment model. The module follows a hybrid approach based on atmospheric dispersion calculations and observations from the AirBase European air quality database that are used to estimate site-specific parameters. Pollutant concentrations at every monitoring site with sufficient data coverage are disaggregated into contributions from regional background, urban increment, and local roadside increment. The future evolution of each contribution is assessed with a model of the appropriate scale: 28 × 28 km grid based on the EMEP Model for the regional background, 7 × 7 km urban increment based on the CHIMERE Chemistry Transport Model, and a chemical box model for the roadside increment. Thus, different emission scenarios and control options for long-range transport as well as regional and local emissions can be analysed. Observed concentrations and historical trends are well captured, in particular the differing NO2 and total NOx = NO + NO2 trends. Altogether, more than 1950 air quality monitoring stations in the EU are covered by the model, including more than 400 traffic stations and 70% of the critical stations. Together with its well-established bottom-up emission and dispersion calculation scheme, GAINS is thus able to bridge the scales from European-wide policies to impacts in street canyons. As an application of the model, we assess the evolution of attainment of NO2 limit values under current legislation until 2030. 
Strong improvements are expected with the introduction of the Euro 6 emission standard for light duty vehicles; however, for some major European cities, further measures may be required, in particular if aiming to achieve compliance at an earlier time.

  13. Assessing Motivation to Improve Learning: Practical Applications of Keller's MVP Model and ARCS-V Design Process

    ERIC Educational Resources Information Center

    Angelo, Thomas A.

    2017-01-01

    This chapter applies John Keller's MVP model and, specifically, adapts the ARCS-V components of that model--defined and described in Chapter 1 of this issue of "New Directions for Teaching and Learning"--as a frame for exploring practical, research-based assessment and feedback strategies and tools teachers can use to help students…

  14. Appraisal of jump distributions in ensemble-based sampling algorithms

    NASA Astrophysics Data System (ADS)

    Dejanic, Sanda; Scheidegger, Andreas; Rieckermann, Jörg; Albert, Carlo

    2017-04-01

    Sampling Bayesian posteriors of model parameters is often required for making model-based probabilistic predictions. For complex environmental models, standard Monte Carlo Markov Chain (MCMC) methods are often infeasible because they require too many sequential model runs. Therefore, we focused on ensemble methods that use many Markov chains in parallel, since they can be run on modern cluster architectures. Little is known about how to choose the best performing sampler, for a given application. A poor choice can lead to an inappropriate representation of posterior knowledge. We assessed two different jump moves, the stretch and the differential evolution move, underlying, respectively, the software packages EMCEE and DREAM, which are popular in different scientific communities. For the assessment, we used analytical posteriors with features as they often occur in real posteriors, namely high dimensionality, strong non-linear correlations or multimodality. For posteriors with non-linear features, standard convergence diagnostics based on sample means can be insufficient. Therefore, we resorted to an entropy-based convergence measure. We assessed the samplers by means of their convergence speed, robustness and effective sample sizes. For posteriors with strongly non-linear features, we found that the stretch move outperforms the differential evolution move, w.r.t. all three aspects.
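
    The stretch move assessed here (the jump underlying EMCEE) is simple to state: a walker is moved along the line joining it to a randomly chosen other walker, by a factor drawn from g(z) ∝ 1/√z on [1/a, a], with acceptance probability min(1, z^(d-1) exp(Δ log p)). A minimal sketch on a standard normal target (a stand-in for a real posterior):

```python
import math, random

def log_post(x):
    """Standard normal log-density, up to a constant (placeholder target)."""
    return -0.5 * sum(xi * xi for xi in x)

def stretch_move(walkers, k, a=2.0):
    """One stretch-move update of walker k, Goodman-Weare style."""
    dim = len(walkers[k])
    j = random.choice([i for i in range(len(walkers)) if i != k])
    z = ((a - 1.0) * random.random() + 1.0) ** 2 / a   # z ~ g(z) proportional to 1/sqrt(z)
    proposal = [walkers[j][d] + z * (walkers[k][d] - walkers[j][d]) for d in range(dim)]
    log_accept = (dim - 1) * math.log(z) + log_post(proposal) - log_post(walkers[k])
    if math.log(random.random()) < log_accept:
        walkers[k] = proposal
    return walkers[k]

random.seed(1)
walkers = [[random.gauss(0, 1) for _ in range(3)] for _ in range(10)]
for step in range(100):                 # run the whole ensemble for 100 sweeps
    for k in range(len(walkers)):
        stretch_move(walkers, k)
print(len(walkers), len(walkers[0]))    # ensemble shape is preserved
```

    Production samplers additionally split the ensemble into complementary halves so the chains can be updated in parallel; this sketch omits that detail.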

  15. TSARINA: A computer model for assessing conventional and chemical attacks on air bases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Emerson, D.E.; Wegner, L.H.

    This Note describes the latest version of the TSARINA (TSAR INputs using AIDA) airbase damage assessment computer program that has been developed to estimate the on-base concentration of toxic agents that would be deposited by a chemical attack and to assess losses to various on-base resources from conventional attacks, as well as the physical damage to runways, taxiways, buildings, and other facilities. Although the model may be used as a general-purpose, complex-target damage assessment model, its primary role is intended to be in support of the TSAR (Theater Simulation of Airbase Resources) aircraft sortie generation simulation program. When used with TSAR, multiple trials of a multibase airbase-attack campaign can be assessed with TSARINA, and the impact of those attacks on sortie generation can be derived using the TSAR simulation model. TSARINA, as currently configured, permits damage assessments of attacks on an airbase (or other) complex that is composed of up to 1000 individual targets (buildings, taxiways, etc.) and 2500 packets of resources. TSARINA determines the actual impact points (pattern centroids for CBUs and container burst points for chemical weapons) by Monte Carlo procedures, i.e., by random selections from the appropriate error distributions. Uncertainties in wind velocity and heading are also considered for chemical weapons. Point-impact weapons that impact within a specified distance of each target type are classed as hits, and estimates of the damage to the structures and to the various classes of support resources are assessed using cookie-cutter weapon-effects approximations.
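
    The Monte Carlo impact-point and cookie-cutter logic described above can be sketched in a few lines; the circular-normal delivery error, the geometry, and all numbers here are invented placeholders, not TSARINA's actual error distributions.

```python
import math, random

# Toy damage-assessment sketch: draw impact points from a delivery-error
# distribution around the aim point, and class impacts within a lethal
# radius of a target as hits (cookie-cutter approximation).

def hit_fraction(aim, target, error_sd, lethal_radius, trials=10000, seed=42):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        ix = aim[0] + rng.gauss(0, error_sd)   # impact point, x (m)
        iy = aim[1] + rng.gauss(0, error_sd)   # impact point, y (m)
        if math.hypot(ix - target[0], iy - target[1]) <= lethal_radius:
            hits += 1
    return hits / trials

p = hit_fraction(aim=(0, 0), target=(10, 0), error_sd=30, lethal_radius=25)
print(round(p, 2))
```

    Repeating such trials over all weapons and targets, and aggregating losses by resource class, is the multiple-trial campaign assessment the Note describes.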

  16. Diagnosing Alzheimer's disease: a systematic review of economic evaluations.

    PubMed

    Handels, Ron L H; Wolfs, Claire A G; Aalten, Pauline; Joore, Manuela A; Verhey, Frans R J; Severens, Johan L

    2014-03-01

    The objective of this study is to systematically review the literature on economic evaluations of interventions for the early diagnosis of Alzheimer's disease (AD) and related disorders and to describe their general and methodological characteristics. We focused on the diagnostic aspects of the decision models to assess the applicability of existing decision models for the evaluation of the recently revised diagnostic research criteria for AD. PubMed and the National Institute for Health Research Economic Evaluation database were searched for English-language publications related to economic evaluations of diagnostic technologies. Trial-based economic evaluations were assessed using the Consensus on Health Economic Criteria list. Modeling studies were assessed using the framework for quality assessment of decision-analytic models. The search retrieved 2109 items, from which eight decision-analytic modeling studies and one trial-based economic evaluation met all eligibility criteria. Diversity among the study objectives and characteristics was considerable and, despite generally sound methodological quality, several flaws were identified. Recommendations were focused on diagnostic aspects and the applicability of existing models for the evaluation of recently revised diagnostic research criteria for AD. Copyright © 2014 The Alzheimer's Association. Published by Elsevier Inc. All rights reserved.

  17. Modeling and assessment of civil aircraft evacuation based on finer-grid

    NASA Astrophysics Data System (ADS)

    Fang, Zhi-Ming; Lv, Wei; Jiang, Li-Xue; Xu, Qing-Feng; Song, Wei-Guo

    2016-04-01

    Studying the civil aircraft emergency evacuation process with computer models is an effective approach. In this study, the evacuation of an Airbus A380 is simulated using a Finer-Grid Civil Aircraft Evacuation (FGCAE) model. The model considers the effect of the seat area and other factors on the escape process and pedestrians' "hesitation" before leaving exits, and defines an optimized rule of exit choice. Simulations reproduce typical characteristics of aircraft evacuation, such as movement synchronization between adjacent pedestrians and route choice, and indicate that evacuation efficiency is determined by pedestrians' "preference" and "hesitation". Based on the model, an assessment procedure for aircraft evacuation safety is presented. The assessment, and a comparison with the actual evacuation test, demonstrates that the available-exit setting of "one exit from each exit pair" used in the practical demonstration test is not the worst scenario. The worst scenario, in which all exits at one end of the cabin are unavailable, should receive more attention and could even be adopted in the certification test. The model and method presented in this study could be useful for assessing, validating and improving the evacuation performance of aircraft.

  18. Comparison of numerical weather prediction based deterministic and probabilistic wind resource assessment methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jie; Draxl, Caroline; Hopson, Thomas

    Numerical weather prediction (NWP) models have been widely used for wind resource assessment. Model runs with higher spatial resolution are generally more accurate, yet extremely computationally expensive. An alternative approach is to use data generated by a low-resolution NWP model in conjunction with statistical methods. In order to analyze the accuracy and computational efficiency of different types of NWP-based wind resource assessment methods, this paper performs a comparison of three deterministic and probabilistic NWP-based wind resource assessment methodologies: (i) a coarse-resolution (0.5 degrees x 0.67 degrees) global reanalysis data set, the Modern-Era Retrospective Analysis for Research and Applications (MERRA); (ii) an analog ensemble methodology based on the MERRA, which provides both deterministic and probabilistic predictions; and (iii) a fine-resolution (2-km) NWP data set, the Wind Integration National Dataset (WIND) Toolkit, based on the Weather Research and Forecasting model. Results show that: (i) as expected, the analog ensemble and WIND Toolkit perform significantly better than MERRA, confirming their ability to downscale coarse estimates; (ii) the analog ensemble provides the best estimate of the multi-year wind distribution at seven of the nine sites, while the WIND Toolkit is the best at one site; (iii) the WIND Toolkit is more accurate in estimating the distribution of hourly wind speed differences, which characterizes the wind variability, at five of the available sites, with the analog ensemble being best at the remaining four locations; and (iv) the analog ensemble computational cost is negligible, whereas the WIND Toolkit requires large computational resources.
    Future efforts could focus on the combination of the analog ensemble with intermediate-resolution (e.g., 10-15 km) NWP estimates, to considerably reduce the computational burden, while providing accurate deterministic estimates and reliable probabilistic assessments.
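
    The core of the analog ensemble idea compared above is easy to sketch: for a given coarse-model forecast, find the k most similar past coarse-model values and use their matched observations as an ensemble. The historical pairs below are synthetic placeholders, not MERRA data.

```python
# Minimal analog-ensemble sketch: nearest past model states supply the
# ensemble of observed values, giving deterministic (mean) and probabilistic
# (member spread) estimates from a cheap coarse model.

hist_model = [3.1, 5.0, 7.2, 4.4, 6.1, 5.5, 8.0, 2.9]   # past coarse-model wind speeds (m/s)
hist_obs   = [3.8, 5.6, 7.9, 5.1, 6.6, 6.2, 8.8, 3.5]   # matched observations (m/s)

def analog_ensemble(forecast, k=3):
    """Return the k analog observations and their mean for one forecast value."""
    order = sorted(range(len(hist_model)), key=lambda i: abs(hist_model[i] - forecast))
    members = [hist_obs[i] for i in order[:k]]
    return members, sum(members) / k

members, mean = analog_ensemble(5.2)
print(sorted(members), round(mean, 2))
```

    Real implementations match on multi-variable predictor windows rather than a single value, but the search-and-substitute structure is the same, which is why the method's cost is negligible compared with rerunning a fine-resolution model.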

  19. Computational assessment of model-based wave separation using a database of virtual subjects.

    PubMed

    Hametner, Bernhard; Schneider, Magdalena; Parragh, Stephanie; Wassertheurer, Siegfried

    2017-11-07

    The quantification of arterial wave reflection is an important area of interest in arterial pulse wave analysis. It can be achieved by wave separation analysis (WSA) if both the aortic pressure waveform and the aortic flow waveform are known. For better applicability, several mathematical models have been established to estimate aortic flow solely based on pressure waveforms. The aim of this study is to investigate and verify the model-based wave separation of the ARCSolver method on virtual pulse wave measurements. The study is based on an open-access virtual database generated via simulations. Seven cardiac and arterial parameters were varied within physiological healthy ranges, leading to a total of 3325 virtual healthy subjects. For assessing the model-based ARCSolver method computationally, this method was used to perform WSA based on the aortic root pressure waveforms of the virtual patients. As a reference, the values of WSA using both the pressure and flow waveforms provided by the virtual database were taken. The investigated parameters showed a good overall agreement between the model-based method and the reference. Mean differences and standard deviations were -0.05±0.02 AU for characteristic impedance, -3.93±1.79 mmHg for forward pressure amplitude, 1.37±1.56 mmHg for backward pressure amplitude and 12.42±4.88% for reflection magnitude. The results indicate that the mathematical blood flow model of the ARCSolver method is a feasible surrogate for a measured flow waveform and provides a reasonable way to assess arterial wave reflection non-invasively in healthy subjects. Copyright © 2017 Elsevier Ltd. All rights reserved.
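
    The wave separation analysis referred to above follows a standard decomposition: with pressure P, flow Q, and characteristic impedance Zc, the forward and backward components are Pf = (P + Zc·Q)/2 and Pb = (P − Zc·Q)/2. The waveforms and Zc value below are crude synthetic stand-ins, not ARCSolver outputs.

```python
# Standard wave separation analysis (WSA) on sampled waveforms.

def wave_separation(p, q, zc):
    pf = [(pi + zc * qi) / 2.0 for pi, qi in zip(p, q)]   # forward pressure
    pb = [(pi - zc * qi) / 2.0 for pi, qi in zip(p, q)]   # backward pressure
    return pf, pb

def reflection_magnitude(pf, pb):
    """Ratio of backward to forward pressure amplitudes (peak-to-peak)."""
    amp = lambda w: max(w) - min(w)
    return amp(pb) / amp(pf)

pressure = [80, 95, 120, 110, 90, 82]   # mmHg, one coarse beat
flow     = [0, 180, 400, 250, 60, 0]    # mL/s
zc       = 0.05                          # mmHg*s/mL (illustrative)
pf, pb = wave_separation(pressure, flow, zc)
print(round(reflection_magnitude(pf, pb), 2))
```

    Model-based variants such as the one assessed in the study replace the measured flow Q with a waveform estimated from the pressure alone, then apply the same decomposition.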

  20. Probabilistic assessment methodology for continuous-type petroleum accumulations

    USGS Publications Warehouse

    Crovelli, R.A.

    2003-01-01

    The analytic resource assessment method, called ACCESS (Analytic Cell-based Continuous Energy Spreadsheet System), was developed to calculate estimates of petroleum resources for the geologic assessment model, called FORSPAN, in continuous-type petroleum accumulations. The ACCESS method is based upon mathematical equations derived from probability theory in the form of a computer spreadsheet system. © 2003 Elsevier B.V. All rights reserved.

  1. Predictive models to assess risk of type 2 diabetes, hypertension and comorbidity: machine-learning algorithms and validation using national health data from Kuwait—a cohort study

    PubMed Central

    Farran, Bassam; Channanath, Arshad Mohamed; Behbehani, Kazem; Thanaraj, Thangavel Alphonse

    2013-01-01

    Objective We build classification models and risk assessment tools for diabetes, hypertension and comorbidity using machine-learning algorithms on data from Kuwait. We model the increased proneness in diabetic patients to develop hypertension and vice versa. We ascertain the importance of ethnicity (and natives vs expatriate migrants) and of using regional data in risk assessment. Design Retrospective cohort study. Four machine-learning techniques were used: logistic regression, k-nearest neighbours (k-NN), multifactor dimensionality reduction and support vector machines. The study uses fivefold cross validation to obtain generalisation accuracies and errors. Setting Kuwait Health Network (KHN) that integrates data from primary health centres and hospitals in Kuwait. Participants 270 172 hospital visitors (of which, 89 858 are diabetic, 58 745 hypertensive and 30 522 comorbid) comprising Kuwaiti natives, Asian and Arab expatriates. Outcome measures Incident type 2 diabetes, hypertension and comorbidity. Results Classification accuracies of >85% (for diabetes) and >90% (for hypertension) are achieved using only simple non-laboratory-based parameters. Risk assessment tools based on k-NN classification models are able to assign ‘high’ risk to 75% of diabetic patients and to 94% of hypertensive patients. Only 5% of diabetic patients are assigned ‘low’ risk. Asian-specific models and assessments perform even better. Pathological conditions of diabetes in the general population or in the hypertensive population, and those of hypertension, are modelled. Two-stage aggregate classification models and risk assessment tools, built by combining both the component models on diabetes (or on hypertension), perform better than individual models. Conclusions Data on diabetes, hypertension and comorbidity from the cosmopolitan State of Kuwait are available for the first time. This enabled us to apply four different case–control models to assess risks.
    These tools aid in the preliminary non-intrusive assessment of the population. Ethnicity is seen to be significant in the predictive models. Risk assessments need to be developed using regional data, as we demonstrate by applying the American Diabetes Association online calculator to data from Kuwait. PMID:23676796
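
    A k-NN risk tool of the kind described assigns risk from the class make-up of a subject's nearest neighbours in the space of simple parameters. The tiny training set (age, BMI) and labels below are invented, not KHN records.

```python
import math

# Toy k-NN risk sketch: label 1 = case, 0 = control; features are
# non-laboratory parameters, here just (age, BMI).

train = [((25, 22.0), 0), ((60, 31.0), 1), ((45, 29.5), 1),
         ((30, 24.0), 0), ((55, 33.0), 1), ((35, 23.5), 0)]

def knn_risk(subject, k=3):
    """Fraction of the k nearest training subjects that are cases."""
    nearest = sorted(train, key=lambda t: math.dist(subject, t[0]))[:k]
    return sum(label for _, label in nearest) / k

risk = knn_risk((52, 30.0))
band = "high" if risk >= 0.5 else "low"
print(risk, band)
```

    Real models would standardize features and include more parameters, but the risk band assignment ('high' vs 'low') follows this neighbour-voting pattern.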

  2. GLIMPSE: An integrated assessment model-based tool for ...

    EPA Pesticide Factsheets

    Dan Loughlin will describe the GCAM-USA integrated assessment model and how that model is being improved and integrated into the GLIMPSE decision support system. He will also demonstrate the application of the model to evaluate the emissions and health implications of hypothetical state-level renewable electricity standards. The presentation introduces the GLIMPSE project to state and regional environmental modelers and analysts, and was presented as part of the State Energy and Air Quality Group Webinar Series, which is organized by NESCAUM.

  3. RESIDUAL RISK ASSESSMENTS - FINAL RESIDUAL RISK ASSESSMENT FOR SECONDARY LEAD SMELTERS

    EPA Science Inventory

    This source category, previously subjected to a technology-based standard, will be examined to determine if health or ecological risks are significant enough to warrant further regulation for secondary lead smelters. These assessments utilize existing models and databases to examin...

  4. An information entropy model on clinical assessment of patients based on the holographic field of meridian

    NASA Astrophysics Data System (ADS)

    Wu, Jingjing; Wu, Xinming; Li, Pengfei; Li, Nan; Mao, Xiaomei; Chai, Lihe

    2017-04-01

    The meridian system is not only the basis of traditional Chinese medicine (TCM) methods (e.g. acupuncture, massage), but also the core of TCM's basic theory. This paper introduces a new informational perspective for understanding the reality and the holographic field of the meridian. Based on the maximum information entropy principle (MIEP), a dynamic equation for the holographic field is deduced, which reflects the evolutionary characteristics of the meridian. Using a self-organizing artificial neural network as the algorithm, the evolutionary dynamic equation of the holographic field can be solved to assess the properties of meridians and clinically diagnose the health characteristics of patients. Finally, through cases from clinical patients (e.g. a 30-year-old male patient, an apoplectic patient, an epilepsy patient), we use this model to assess the evolutionary properties of meridians. This model not only has significant implications for revealing the essence of the meridian in TCM, but may also play a guiding role in the clinical assessment of patients based on the holographic field of meridians.

  5. Model-based pH monitor for sensor assessment.

    PubMed

    van Schagen, Kim; Rietveld, Luuk; Veersma, Alex; Babuska, Robert

    2009-01-01

    Owing to the nature of the treatment processes, monitoring the processes based on individual online measurements is difficult or even impossible. However, the measurements (online and laboratory) can be combined with a priori process knowledge, using mathematical models, to objectively monitor the treatment processes and measurement devices. The pH measurement is commonly used at different stages of a drinking water treatment plant, although it is an unreliable instrument requiring significant maintenance. It is shown that, using a grey-box model, it is possible to assess the measurement devices effectively, even if detailed information on the specific processes is unknown.

  6. Competency in health care management: a training model in epidemiologic methods for assessing and improving the quality of clinical practice through evidence-based decision making.

    PubMed

    Hudak, R P; Jacoby, I; Meyer, G S; Potter, A L; Hooper, T I; Krakauer, H

    1997-01-01

    This article describes a training model that focuses on health care management by applying epidemiologic methods to assess and improve the quality of clinical practice. The model's uniqueness is its focus on integrating clinical evidence-based decision making with fundamental principles of resource management to achieve attainable, cost-effective, high-quality health outcomes. The target students are current and prospective clinical and administrative executives who must optimize decision making at the clinical and managerial levels of health care organizations.

  7. An Experimental Comparison of Similarity Assessment Measures for 3D Models on Constrained Surface Deformation

    NASA Astrophysics Data System (ADS)

    Quan, Lulin; Yang, Zhixin

    2010-05-01

    To address issues in the area of design customization, this paper specifies the application of constrained surface deformation and reports an experimental performance comparison of three prevalent similarity assessment algorithms in the constrained surface deformation domain. Constrained surface deformation is a promising method that supports various downstream applications of customized design. Similarity assessment is regarded as the key technology for inspecting the success of a new design: it measures the difference between the deformed new design and the initial sample model and indicates whether that difference is within the allowed limit. According to our theoretical analysis and pre-experiments, three similarity assessment algorithms are suitable for this domain: the shape histogram based method, the skeleton based method, and the U-system moment based method. We analyze their basic functions and implementation methodologies in detail and conduct a series of experiments in various situations to test their accuracy and efficiency using precision-recall diagrams. A shoe model is chosen as the industrial example for the experiments. The shape histogram based method achieved the best performance in the comparison. Based on this result, we propose a novel approach that integrates surface constraints and the shape histogram description with an adaptive weighting method, emphasizing the role of the constraints during assessment. Initial, limited experimental results indicate that our algorithm outperforms the other three. A clear direction for future development is drawn at the end of the paper.
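The shape-histogram comparison described above can be sketched in a few lines; the descriptor and distance measure below (centroid-distance binning, L1 comparison) are illustrative stand-ins, since the abstract does not specify the exact histogram construction used in the paper:

```python
import math

def shape_histogram(points, bins=8):
    """Normalised histogram of point distances from the model centroid,
    a simple rotation-invariant shape signature for a 3D point set."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    cz = sum(p[2] for p in points) / len(points)
    d = [math.dist(p, (cx, cy, cz)) for p in points]
    dmax = max(d) or 1.0
    hist = [0] * bins
    for r in d:
        hist[min(int(r / dmax * bins), bins - 1)] += 1
    return [h / len(points) for h in hist]

def histogram_distance(a, b):
    """L1 distance between two shape histograms: 0 for identical
    signatures, larger values for greater deformation."""
    return sum(abs(x - y) for x, y in zip(a, b))

# Toy example: a unit cube's corners vs. a stretched (deformed) variant.
sample = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1),
          (1, 1, 0), (1, 0, 1), (0, 1, 1), (1, 1, 1)]
deformed = [(x * 1.5, y, z) for x, y, z in sample]
d = histogram_distance(shape_histogram(sample), shape_histogram(deformed))
```

A real comparison would use dense surface samples and a richer descriptor (e.g. pairwise-distance D2 histograms), but the assess-by-signature-distance pattern is the same.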

  8. Developing evaluation instrument based on CIPP models on the implementation of portfolio assessment

    NASA Astrophysics Data System (ADS)

    Kurnia, Feni; Rosana, Dadan; Supahar

    2017-08-01

    This study aimed to develop an evaluation instrument constructed by the CIPP model on the implementation of portfolio assessment in science learning. This study used the research and development (R & D) method, adapting the 4-D model for the development of a non-test instrument, with the evaluation instrument constructed by the CIPP model. CIPP is the abbreviation of Context, Input, Process, and Product. The techniques of data collection were interviews, questionnaires, and observations. The data collection instruments were: 1) interview guidelines for the analysis of the problems and the needs, 2) a questionnaire to see the level of accomplishment of the portfolio assessment instrument, and 3) observation sheets for teachers and students to dig up responses to the portfolio assessment instrument. The data obtained were quantitative data from several validators. The validators consisted of two lecturers as evaluation experts, two practitioners (science teachers), and three colleagues. This paper shows the results of content validity obtained from the validators and the analysis of the data using Aiken's V formula. The results of this study show that the evaluation instrument based on the CIPP model is proper to evaluate the implementation of portfolio assessment instruments. Based on the judgments of the experts, practitioners, and colleagues, the Aiken's V coefficient was between 0.86-1.00, which means that it is valid and can be used in the limited trial and the operational field trial.
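The reported Aiken's V coefficients follow from a short formula; the rating scale and example scores below are assumptions for illustration, not the study's data:

```python
def aikens_v(ratings, lo=1, hi=5):
    """Aiken's V content-validity coefficient for one item.

    ratings: scores given by the validators on a lo..hi scale.
    V = sum(r - lo) / (n * (c - 1)), where c is the number of rating
    categories; values near 1 indicate strong agreement that the
    item is valid.
    """
    n = len(ratings)
    c = hi - lo + 1                      # number of rating categories
    s = sum(r - lo for r in ratings)
    return s / (n * (c - 1))

# Hypothetical scores from seven validators (two experts, two teachers,
# three colleagues) for a single evaluation item:
v = aikens_v([5, 4, 4, 5, 4, 4, 5])
```

With a validity threshold in hand (the study treats 0.86-1.00 as valid), each item's V can be checked directly against it.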

  9. A Regional Climate Model Evaluation System based on contemporary Satellite and other Observations for Assessing Regional Climate Model Fidelity

    NASA Astrophysics Data System (ADS)

    Waliser, D. E.; Kim, J.; Mattman, C.; Goodale, C.; Hart, A.; Zimdars, P.; Lean, P.

    2011-12-01

    Evaluation of climate models against observations is an essential part of assessing the impact of climate variations and change on regionally important sectors and improving climate models. Regional climate models (RCMs) are of particular concern. RCMs provide the fine-scale climate needed by the assessment community via downscaling of global climate model projections, such as those contributing to the Coupled Model Intercomparison Project (CMIP) that form one aspect of the quantitative basis of the IPCC Assessment Reports. The lack of reliable fine-resolution observational data and of formal tools and metrics has represented a challenge in evaluating RCMs. Recent satellite observations are particularly useful as they provide a wealth of information and constraints on many different processes within the climate system. Due to their large volume and the difficulties associated with accessing and using contemporary observations, however, these datasets have been generally underutilized in model evaluation studies. Recognizing this problem, NASA JPL and UCLA have developed the Regional Climate Model Evaluation System (RCMES) to help make satellite observations, in conjunction with in-situ and reanalysis datasets, more readily accessible to the regional modeling community. The system includes a central database (Regional Climate Model Evaluation Database: RCMED) to store multiple datasets in a common format and codes for calculating and plotting statistical metrics to assess model performance (Regional Climate Model Evaluation Tool: RCMET). This allows the time taken to compare model data with satellite observations to be reduced from weeks to days. RCMES is a component of the recent ExArch project, an international effort to facilitate the archiving of and access to massive amounts of data using cloud-based infrastructure, in this case as applied to the study of climate and climate change.
This presentation will describe RCMES and demonstrate its utility using examples from RCMs applied to the southwest US as well as to Africa based on output from the CORDEX activity. Application of RCMES to the evaluation of multi-RCM hindcast for CORDEX-Africa will be presented in a companion paper in A41.

  10. Coupling System Dynamics and Physically-based Models for Participatory Water Management - A Methodological Framework, with Two Case Studies: Water Quality in Quebec, and Soil Salinity in Pakistan

    NASA Astrophysics Data System (ADS)

    Boisvert-Chouinard, J.; Halbe, J.; Baig, A. I.; Adamowski, J. F.

    2014-12-01

    The principles of Integrated Water Resource Management outline the importance of stakeholder participation in water management processes, but in practice there is a lack of meaningful engagement in water planning and implementation, and participation is often limited to public consultation and education. When models are used to support water planning, stakeholders are usually not involved in their development and use, and the models commonly fail to represent important feedbacks between socio-economic and physical processes. This paper presents the development of holistic models of the Du Chêne basin in Quebec and the Rechna Doab basin in Pakistan that simulate the socio-economic and physical processes related to, respectively, water quality management and soil salinity management. Each model consists of two sub-components: a System Dynamics (SD) model and a physically based model. The SD component was developed in collaboration with key stakeholders in the basins. The Du Chêne SD model was coupled with a Soil and Water Assessment Tool (SWAT) model, while the Rechna Doab SD model was coupled with SahysMod, a soil salinity model. The coupled models were used to assess the environmental and socio-economic impacts of different management scenarios proposed by stakeholders. Results indicate that coupled SD and physically based models can be used as effective tools for participatory water planning and implementation. The participatory modeling process provides a structure for meaningful stakeholder engagement, and the models themselves can be used to transparently and coherently assess and compare different management options.

  11. Using Rasch Measurement to Develop a Computer Modeling-Based Instrument to Assess Students' Conceptual Understanding of Matter

    ERIC Educational Resources Information Center

    Wei, Silin; Liu, Xiufeng; Wang, Zuhao; Wang, Xingqiao

    2012-01-01

    Research suggests that difficulty in making connections among three levels of chemical representations--macroscopic, submicroscopic, and symbolic--is a primary reason for student alternative conceptions of chemistry concepts, and computer modeling is promising to help students make the connections. However, no computer modeling-based assessment…

  12. Characterizing uncertainty and variability in physiologically based pharmacokinetic models: state of the science and needs for research and implementation.

    PubMed

    Barton, Hugh A; Chiu, Weihsueh A; Setzer, R Woodrow; Andersen, Melvin E; Bailer, A John; Bois, Frédéric Y; Dewoskin, Robert S; Hays, Sean; Johanson, Gunnar; Jones, Nancy; Loizou, George; Macphail, Robert C; Portier, Christopher J; Spendiff, Martin; Tan, Yu-Mei

    2007-10-01

    Physiologically based pharmacokinetic (PBPK) models are used in mode-of-action based risk and safety assessments to estimate internal dosimetry in animals and humans. When used in risk assessment, these models can provide a basis for extrapolating between species, doses, and exposure routes or for justifying nondefault values for uncertainty factors. Characterization of uncertainty and variability is increasingly recognized as important for risk assessment; this represents a continuing challenge for both PBPK modelers and users. Current practices show significant progress in specifying deterministic biological models and nondeterministic (often statistical) models, estimating parameters using diverse data sets from multiple sources, using them to make predictions, and characterizing uncertainty and variability of model parameters and predictions. The International Workshop on Uncertainty and Variability in PBPK Models, held 31 Oct-2 Nov 2006, identified the state-of-the-science, needed changes in practice and implementation, and research priorities. For the short term, these include (1) multidisciplinary teams to integrate deterministic and nondeterministic/statistical models; (2) broader use of sensitivity analyses, including for structural and global (rather than local) parameter changes; and (3) enhanced transparency and reproducibility through improved documentation of model structure(s), parameter values, sensitivity and other analyses, and supporting, discrepant, or excluded data. 
Longer-term needs include (1) theoretical and practical methodological improvements for nondeterministic/statistical modeling; (2) better methods for evaluating alternative model structures; (3) peer-reviewed databases of parameters and covariates, and their distributions; (4) expanded coverage of PBPK models across chemicals with different properties; and (5) training and reference materials, such as case studies, bibliographies/glossaries, model repositories, and enhanced software. The multidisciplinary dialogue initiated by this Workshop will foster the collaboration, research, data collection, and training necessary to make characterizing uncertainty and variability a standard practice in PBPK modeling and risk assessment.

  13. Bladder Cancer Treatment Response Assessment in CT using Radiomics with Deep-Learning.

    PubMed

    Cha, Kenny H; Hadjiiski, Lubomir; Chan, Heang-Ping; Weizer, Alon Z; Alva, Ajjai; Cohan, Richard H; Caoili, Elaine M; Paramagul, Chintana; Samala, Ravi K

    2017-08-18

    Cross-sectional X-ray imaging has become the standard for staging most solid organ malignancies. However, for some malignancies such as urinary bladder cancer, the ability to accurately assess local extent of the disease and understand response to systemic chemotherapy is limited with current imaging approaches. In this study, we explored the feasibility that radiomics-based predictive models using pre- and post-treatment computed tomography (CT) images might be able to distinguish between bladder cancers with and without complete chemotherapy responses. We assessed three unique radiomics-based predictive models, each of which employed different fundamental design principles ranging from a pattern recognition method via deep-learning convolution neural network (DL-CNN), to a more deterministic radiomics feature-based approach and then a bridging method between the two, utilizing a system which extracts radiomics features from the image patterns. Our study indicates that the computerized assessment using radiomics information from the pre- and post-treatment CT of bladder cancer patients has the potential to assist in assessment of treatment response.

  14. Fast Geometric Consensus Approach for Protein Model Quality Assessment

    PubMed Central

    Adamczak, Rafal; Pillardy, Jaroslaw; Vallat, Brinda K.

    2011-01-01

    Model quality assessment (MQA) is an integral part of protein structure prediction methods that typically generate multiple candidate models. The challenge lies in ranking and selecting the best models using a variety of physical, knowledge-based, and geometric consensus (GC)-based scoring functions. In particular, 3D-Jury and related GC methods assume that well-predicted (sub-)structures are more likely to occur frequently in a population of candidate models, compared to incorrectly folded fragments. While this approach is very successful in the context of diversified sets of models, identifying similar substructures is computationally expensive since all pairs of models need to be superimposed using MaxSub or related heuristics for structure-to-structure alignment. Here, we consider a fast alternative, in which structural similarity is assessed using 1D profiles, e.g., consisting of relative solvent accessibilities and secondary structures of equivalent amino acid residues in the respective models. We show that the new approach, dubbed 1D-Jury, allows N models to be implicitly compared and ranked in O(N) time, as opposed to the quadratic complexity of 3D-Jury and related clustering-based methods. In addition, 1D-Jury avoids computationally expensive 3D superposition of pairs of models. At the same time, structural similarity scores based on 1D profiles are shown to correlate strongly with those obtained using MaxSub. In terms of the ability to select the best models as top candidates, 1D-Jury performs on par with other GC methods. Other potential applications of the new approach, including fast clustering of large numbers of intermediate structures generated by folding simulations, are discussed as well. PMID:21244273
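A minimal sketch of the 1D-profile consensus idea (not the authors' implementation: the profile alphabet and scoring below are simplified to per-position agreement counts, which is what makes the implicit O(N) comparison possible):

```python
from collections import Counter

def jury_scores(profiles):
    """1D-Jury-style consensus scoring (a sketch).

    profiles: equal-length 1D profile strings, e.g. predicted secondary
    structure per residue ('H' helix, 'E' strand, 'C' coil).
    The score of model i is its total number of (position, state)
    agreements with the whole population, accumulated from per-position
    state counts: one pass to count, one pass to score, i.e. O(N*L)
    work instead of the O(N^2) all-against-all 3D superpositions of
    3D-Jury.
    """
    length = len(profiles[0])
    counts = [Counter(p[i] for p in profiles) for i in range(length)]
    return [sum(counts[i][p[i]] for i in range(length)) for p in profiles]

# Four toy candidate models; the majority-like profile should win.
models = ["HHHCCEEE", "HHHCCEEE", "HHCCCEEE", "CCCCCCCC"]
scores = jury_scores(models)
best = models[scores.index(max(scores))]
```

Real 1D profiles would also carry relative solvent accessibility per residue, but the counting trick is unchanged.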

  15. Mixture of autoregressive modeling orders and its implication on single trial EEG classification

    PubMed Central

    Atyabi, Adham; Shic, Frederick; Naples, Adam

    2016-01-01

    Autoregressive (AR) models are among the most commonly utilized feature types in electroencephalogram (EEG) studies because they offer better resolution and smoother spectra and are applicable to short segments of data. Identifying the correct AR modeling order is an open challenge: lower model orders represent the signal poorly, while higher orders increase noise. Conventional methods for estimating the modeling order include the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC) and the Final Prediction Error (FPE). This article assesses the hypothesis that an appropriate mixture of multiple AR orders is likely to represent the true signal better than any single order. Better spectral representation of underlying EEG patterns can increase the utility of AR features in Brain Computer Interface (BCI) systems by making such systems respond more quickly and correctly to the operator's thoughts. Two mechanisms, evolutionary-based fusion and ensemble-based mixture, are utilized for identifying such an appropriate mixture of modeling orders. The classification performance of the resultant AR mixtures is assessed against several conventional approaches utilized by the community: 1) a well-known set of commonly used orders suggested by the literature, 2) conventional order estimation approaches (e.g., AIC, BIC and FPE), and 3) a blind mixture of AR features originating from a range of well-known orders. Five datasets from BCI competition III that contain 2, 3 and 4 motor imagery tasks are considered for the assessment. The results indicate the superiority of the ensemble-based modeling order mixture and evolutionary-based order fusion methods within all datasets. PMID:28740331
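A sketch of conventional single-order selection by AIC, one of the baselines the article compares against; the Levinson-Durbin recursion and the toy AR(2) signal below are standard textbook choices, not the article's datasets:

```python
import math
import random

def ar_aic_orders(x, max_order=10):
    """AIC score for each candidate AR order.

    The Levinson-Durbin recursion yields the prediction-error variance
    at every order from the autocorrelation sequence; AIC penalises
    each added coefficient: AIC(p) = N * ln(err_p) + 2p.
    """
    n = len(x)
    mean = sum(x) / n
    x = [v - mean for v in x]
    # Biased autocorrelation estimates r[0..max_order]
    r = [sum(x[t] * x[t + k] for t in range(n - k)) / n
         for k in range(max_order + 1)]
    aic = {}
    err = r[0]        # order-0 prediction error (signal variance)
    a = []            # AR coefficients of the current order
    for p in range(1, max_order + 1):
        # Reflection coefficient and order-update step
        k = (r[p] - sum(a[j] * r[p - 1 - j] for j in range(p - 1))) / err
        a = [a[j] - k * a[p - 2 - j] for j in range(p - 1)] + [k]
        err *= (1 - k * k)
        aic[p] = n * math.log(err) + 2 * p
    return aic

# Toy AR(2) signal: x_t = 0.75 x_{t-1} - 0.5 x_{t-2} + noise
random.seed(0)
x = [0.0, 0.0]
for _ in range(500):
    x.append(0.75 * x[-1] - 0.5 * x[-2] + random.gauss(0, 1))
aic = ar_aic_orders(x)
best_order = min(aic, key=aic.get)
```

BIC and FPE differ only in the penalty term; the mixture approaches of the article combine features from several such orders rather than committing to `best_order`.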

  16. Analyzing the Impact of a Data Analysis Process to Improve Instruction Using a Collaborative Model

    ERIC Educational Resources Information Center

    Good, Rebecca B.

    2006-01-01

    The Data Collaborative Model (DCM) assembles assessment literacy, reflective practices, and professional development into a four-component process. The sub-components include assessing students, reflecting over data, professional dialogue, professional development for the teachers, interventions for students based on data results, and re-assessing…

  17. Development of cropland management dataset to support U.S. SWAT assessments

    USDA-ARS?s Scientific Manuscript database

    The Soil and Water Assessment Tool (SWAT) is a widely used hydrologic/water quality simulation model in the U.S. Process-based models like SWAT require a great deal of data to accurately represent the natural world, including topography, landuse, soils, weather, and management. With the exception ...

  18. Feelings of Loss in Response to Divorce: Assessment and Intervention.

    ERIC Educational Resources Information Center

    Huber, Charles H.

    1983-01-01

    Presents a cognitively based model, founded on rational emotive therapy, as a basis for assessment and intervention strategies for assisting individuals to cope with feelings of loss in response to divorce. The model is seen as a four-pane window through which persons might see their divorce. (Author/JAC)

  19. Bayesian models based on test statistics for multiple hypothesis testing problems.

    PubMed

    Ji, Yuan; Lu, Yiling; Mills, Gordon B

    2008-04-01

    We propose a Bayesian method for the problem of multiple hypothesis testing that is routinely encountered in bioinformatics research, such as differential gene expression analysis. Our algorithm is based on modeling the distributions of test statistics under both null and alternative hypotheses. We substantially reduce the complexity of the process of defining posterior model probabilities by modeling the test statistics directly instead of modeling the full data. Computationally, we apply a Bayesian FDR approach to control the number of rejections of null hypotheses. To check whether our model assumptions for the test statistics are valid for various bioinformatics experiments, we also propose a simple graphical model-assessment tool. Using extensive simulations, we demonstrate the performance of our models and the utility of the model-assessment tool. Finally, we apply the proposed methodology to an siRNA screening experiment and a gene expression experiment.
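The idea of modeling test statistics directly and controlling a Bayesian FDR can be illustrated with a toy two-component normal mixture; unlike in the paper, the mixture parameters below are fixed by assumption rather than estimated from data, and only a positive alternative component is modeled:

```python
import math

def posterior_null_probs(z, pi0=0.8, alt_mean=2.5, alt_sd=1.0):
    """Posterior probability that each test statistic is null, under a
    two-component mixture: z ~ pi0 * N(0,1) + (1-pi0) * N(alt_mean, alt_sd)."""
    def norm_pdf(x, mu, sd):
        return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))
    out = []
    for zi in z:
        f0 = pi0 * norm_pdf(zi, 0.0, 1.0)                   # null component
        f1 = (1.0 - pi0) * norm_pdf(zi, alt_mean, alt_sd)   # alternative component
        out.append(f0 / (f0 + f1))
    return out

def bayesian_fdr_reject(z, level=0.10):
    """Reject hypotheses in order of increasing posterior null probability
    while the mean posterior null probability among the rejections (the
    estimated FDR) stays below `level`."""
    p = posterior_null_probs(z)
    order = sorted(range(len(z)), key=p.__getitem__)
    rejected, total = [], 0.0
    for i in order:
        if (total + p[i]) / (len(rejected) + 1) > level:
            break
        total += p[i]
        rejected.append(i)
    return rejected

z_scores = [0.1, 3.0, 2.8, -0.5, 4.0]
hits = bayesian_fdr_reject(z_scores)   # indices flagged as non-null
```

The paper's graphical model-assessment tool addresses exactly the risk taken here: whether the assumed component distributions actually fit the observed statistics.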

  20. Data-Driven Risk Assessment from Small Scale Epidemics: Estimation and Model Choice for Spatio-Temporal Data with Application to a Classical Swine Fever Outbreak

    PubMed Central

    Gamado, Kokouvi; Marion, Glenn; Porphyre, Thibaud

    2017-01-01

    Livestock epidemics have the potential to give rise to significant economic, welfare, and social costs. Incursions of emerging and re-emerging pathogens may lead to small and repeated outbreaks. Analysis of the resulting data is statistically challenging but can inform disease preparedness reducing potential future losses. We present a framework for spatial risk assessment of disease incursions based on data from small localized historic outbreaks. We focus on between-farm spread of livestock pathogens and illustrate our methods by application to data on the small outbreak of Classical Swine Fever (CSF) that occurred in 2000 in East Anglia, UK. We apply models based on continuous time semi-Markov processes, using data-augmentation Markov Chain Monte Carlo techniques within a Bayesian framework to infer disease dynamics and detection from incompletely observed outbreaks. The spatial transmission kernel describing pathogen spread between farms, and the distribution of times between infection and detection, is estimated alongside unobserved exposure times. Our results demonstrate inference is reliable even for relatively small outbreaks when the data-generating model is known. However, associated risk assessments depend strongly on the form of the fitted transmission kernel. Therefore, for real applications, methods are needed to select the most appropriate model in light of the data. We assess standard Deviance Information Criteria (DIC) model selection tools and recently introduced latent residual methods of model assessment, in selecting the functional form of the spatial transmission kernel. These methods are applied to the CSF data, and tested in simulated scenarios which represent field data, but assume the data generation mechanism is known. Analysis of simulated scenarios shows that latent residual methods enable reliable selection of the transmission kernel even for small outbreaks whereas the DIC is less reliable. 
Moreover, compared with DIC, model choice based on latent residual assessment correlated better with predicted risk. PMID:28293559
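The central role of the spatial transmission kernel can be illustrated with a minimal sketch; the exponential kernel form and all parameter values below are illustrative assumptions, not the kernels fitted to the CSF data:

```python
import math

def exp_kernel(d, alpha=2.0):
    """Exponentially bounded spatial kernel: relative transmission
    intensity between two farms a distance d apart. alpha sets the
    characteristic spread distance; the functional form is exactly the
    kind of modeling choice the kernel-selection methods must decide."""
    return math.exp(-d / alpha)

def farm_hazard(target, infected, alpha=2.0, beta=0.01):
    """Total infection hazard on one susceptible farm: a kernel-weighted
    sum over all currently infected farms, scaled by a baseline rate beta."""
    return beta * sum(exp_kernel(math.dist(target, f), alpha) for f in infected)

# Two infected farms; a nearby susceptible farm faces a higher hazard
# than a distant one (coordinates in km, purely illustrative).
infected = [(1.0, 0.0), (3.0, 4.0)]
hazard_near = farm_hazard((0.0, 0.0), infected)
hazard_far = farm_hazard((20.0, 20.0), infected)
```

Risk assessments driven by such hazards inherit the kernel's tail behaviour, which is why the paper finds model choice (e.g. exponential vs. fat-tailed forms) matters so much for small outbreaks.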

  1. Integrated Modeling for Watershed Ecosystem Services Assessment and Forecasting

    EPA Science Inventory

    Regional scale watershed management decisions must be informed by the science-based relationship between anthropogenic activities on the landscape and the change in ecosystem structure, function, and services that occur as a result. We applied process-based models that represent...

  2. Rule Extraction Based on Extreme Learning Machine and an Improved Ant-Miner Algorithm for Transient Stability Assessment.

    PubMed

    Li, Yang; Li, Guoqing; Wang, Zhenhao

    2015-01-01

    In order to overcome the poor understandability of pattern recognition-based transient stability assessment (PRTSA) methods, a new rule extraction method based on an extreme learning machine (ELM) and an improved Ant-miner (IAM) algorithm is presented in this paper. First, the basic principles of the ELM and the Ant-miner algorithm are introduced. Then, based on the selected optimal feature subset, an example sample set is generated by the trained ELM-based PRTSA model. Finally, a set of classification rules is obtained by the IAM algorithm to replace the original ELM network. The novelty of this proposal is that transient stability rules are extracted, using the IAM algorithm, from an example sample set generated by the trained ELM-based transient stability assessment model. The effectiveness of the proposed method is shown by application results on the New England 39-bus power system and a practical power system, the southern power system of Hebei province.

  3. Review of early assessment models of innovative medical technologies.

    PubMed

    Fasterholdt, Iben; Krahn, Murray; Kidholm, Kristian; Yderstræde, Knud Bonnet; Pedersen, Kjeld Møller

    2017-08-01

    Hospitals increasingly make decisions regarding the early development of and investment in technologies, but a formal evaluation model for assisting hospitals early on in assessing the potential of innovative medical technologies is lacking. This article provides an overview of models for early assessment in different health organisations and discusses which models hold most promise for hospital decision makers. A scoping review of published studies between 1996 and 2015 was performed using nine databases. The following information was collected: decision context, decision problem, and a description of the early assessment model. 2362 articles were identified, and 12 studies fulfilled the inclusion criteria. An additional 12 studies were identified and included in the review by searching reference lists. The majority of the 24 early assessment studies were variants of traditional cost-effectiveness analysis. Around one fourth of the studies presented an evaluation model with a broader focus than cost-effectiveness. Uncertainty was mostly handled by simple sensitivity or scenario analysis. This review shows that evaluation models using known methods of assessing cost-effectiveness are most prevalent in early assessment but seem ill-suited for early assessment in hospitals. Four models provided some usable elements for the development of a hospital-based model. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.

  4. Assessing tomorrow's learners: in competency-based education only a radically different holistic method of assessment will work. Six things we could forget.

    PubMed

    Schuwirth, Lambert; Ash, Julie

    2013-07-01

    In this paper we are challenging six traditional notions about assessment that are unhelpful when designing 'assessment for learning'-programmes for competency-based education. We are arguing for the following: Reductionism is not the only way to assure rigour in high-stakes assessment; holistic judgements can be equally rigorous. Combining results of assessment parts only because they are of the same format (like different stations in an OSCE) is often not defensible; instead there must be a logically justifiable combination. Numbers describe the quality of the assessment. Therefore, manipulating the numbers is usually not the best way to improve its quality. Not every assessment moment needs to be a decision moment, disconnecting both makes combining summative and formative functions of assessment easier. Standardisation is not the only route to equity. Especially with diverse student groups tailoring is more equitable than standardisation. The most important element to standardise is the quality of the process and not the process itself. Finally, most assessment is too much focussed on detecting deficiencies and not on valuing individual student differences. In competency-based education--especially with a focus on learner orientation--this 'deficiency-model' is not as well aligned as a 'differences-model'.

  5. Simulation model for assessing the efficiency of a combined power installation based on a geothermal heat pump and a vacuum solar collector

    NASA Astrophysics Data System (ADS)

    Vaysman, Ya I.; Surkov, AA; Surkova, Yu I.; Kychkin, AV

    2017-06-01

    The article is devoted to the use of renewable energy sources (RES) and the assessment of the feasibility of their use in the climatic conditions of the Western Urals. A simulation model that calculates the efficiency of a combined power installation (CPI) was developed. The CPI consists of a geothermal heat pump (GHP) and a vacuum solar collector (VCS) and is based on the research model. This model allows solving a wide range of problems in the field of energy and resource efficiency and can be applied to other objects using RES. Based on the research, recommendations for optimizing the management and application of the CPI are given. The optimization system will give a positive effect in the energy and resource consumption of low-rise residential building projects.

  6. Energy Facility Siting by Means of Environmental Modelling with LANDSAT, Thematic Mapper and Geographic Information System (GIS) Data

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Currently based on ground and aerial surveys, the land cover data base of the Pennsylvania Power and Light Company is routinely used for modelling the effects of alternative generating plant and transmission line sites on the local and regional environment. The development of a satellite-based geographic information system would facilitate both the preparation of environmental impact statements by power companies and the assessment of the data by the Nuclear Regulatory Commission. A cooperative project is planned to demonstrate the methodology for integrating satellite data into an existing geographic information system, and to further evaluate the ability of satellite data to model environmental conditions that would be applied in the preparation and assessment of environmental impact statements.

  7. The Role of Simulation in Microsurgical Training.

    PubMed

    Evgeniou, Evgenios; Walker, Harriet; Gujral, Sameer

    Simulation has been established as an integral part of microsurgical training. The aim of this study was to assess and categorize the various simulation models in relation to the complexity of the microsurgical skill being taught and analyze the assessment methods commonly employed in microsurgical simulation training. Numerous courses have been established using simulation models. These models can be categorized, according to the level of complexity of the skill being taught, into basic, intermediate, and advanced. Microsurgical simulation training should be assessed using validated assessment methods. Assessment methods vary significantly from subjective expert opinions to self-assessment questionnaires and validated global rating scales. The appropriate assessment method should carefully be chosen based on the simulation modality. Simulation models should be validated, and a model with appropriate fidelity should be chosen according to the microsurgical skill being taught. Assessment should move from traditional simple subjective evaluations of trainee performance to validated tools. Future studies should assess the transferability of skills gained during simulation training to the real-life setting. Copyright © 2018 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  8. A Model to Assess the Behavioral Impacts of Consultative Knowledge Based Systems.

    ERIC Educational Resources Information Center

    Mak, Brenda; Lyytinen, Kalle

    1997-01-01

    This research models the behavioral impacts of consultative knowledge based systems (KBS). A study of graduate students explored to what extent their decisions were affected by user participation in updating the knowledge base; ambiguity of the decision setting; routinization of usage; and source credibility of the expertise embedded in the…

  9. Model-based RSA of a femoral hip stem using surface and geometrical shape models.

    PubMed

    Kaptein, Bart L; Valstar, Edward R; Spoor, Cees W; Stoel, Berend C; Rozing, Piet M

    2006-07-01

    Roentgen stereophotogrammetry (RSA) is a highly accurate three-dimensional measuring technique for assessing micromotion of orthopaedic implants. A drawback is that markers have to be attached to the implant. Model-based techniques have been developed to avoid the need for specially marked implants. We compared two model-based RSA methods with standard marker-based RSA techniques. The first model-based RSA method used surface models, and the second used elementary geometrical shape (EGS) models. We used a commercially available stem to perform experiments with a phantom as well as a reanalysis of patient RSA radiographs. The data from the phantom experiment indicated that the accuracy and precision of the elementary geometrical shape model-based RSA method are equal to those of marker-based RSA. For model-based RSA using surface models, the accuracy is equal to that of marker-based RSA, but its precision is worse. We found no difference in accuracy or precision between the two model-based RSA techniques in the clinical data. For this particular hip stem, EGS model-based RSA is a good alternative to marker-based RSA.

  10. Non-linear assessment and deficiency of linear relationship for healthcare industry

    NASA Astrophysics Data System (ADS)

    Nordin, N.; Abdullah, M. M. A. B.; Razak, R. C.

    2017-09-01

    This paper presents the development of a non-linear service satisfaction model that assumes patients are not necessarily satisfied or dissatisfied with good or poor service delivery. Accordingly, compliment and complaint assessments are considered simultaneously. Non-linear service satisfaction instruments called Kano-Q and Kano-SS are developed based on the Kano model and the Theory of Quality Attributes (TQA) to map unexpected, hidden and unspoken patient satisfaction and dissatisfaction into service quality attributes. A new Kano-Q and Kano-SS algorithm for quality attribute assessment is developed based on satisfaction impact theories and found to fit the reliability and validity tests. The results were also validated against the standard Kano model procedure before the Kano model and Quality Function Deployment (QFD) were integrated for patient attribute and service attribute prioritization. An algorithm for the Kano-QFD matrix operation is developed to compose the prioritized complaint and compliment indexes. Finally, the prioritized service attributes are mapped to service delivery categories to determine the service delivery that most needs to be improved first by the healthcare service provider.

  11. CULTURAL ADAPTATIONS OF EVIDENCE-BASED HOME-VISITATION MODELS IN TRIBAL COMMUNITIES.

    PubMed

    Hiratsuka, Vanessa Y; Parker, Myra E; Sanchez, Jenae; Riley, Rebecca; Heath, Debra; Chomo, Julianna C; Beltangady, Moushumi; Sarche, Michelle

    2018-05-01

    The Tribal Maternal, Infant, and Early Childhood Home Visiting (Tribal MIECHV) Program provides federal grants to tribes, tribal consortia, tribal organizations, and urban Indian organizations to implement evidence-based home-visiting services for American Indian and Alaska Native (AI/AN) families. To date, only one evidence-based home-visiting program has been developed for use in AI/AN communities. The purpose of this article is to describe the steps that four Tribal MIECHV Programs took to assess community needs, select a home-visiting model, and culturally adapt the model for use in AI/AN communities. In these four unique Tribal MIECHV Program settings, each program employed a rigorous needs-assessment process and developed cultural modifications in accordance with community strengths and needs. Adaptations occurred in consultation with model developers, with consideration of the conceptual rationale for the program, while grounding new content in indigenous cultures. Research is needed to improve measurement of home-visiting outcomes in tribal and urban AI/AN settings, develop culturally grounded home-visiting interventions, and assess the effectiveness of home visiting in AI/AN communities. © 2018 Michigan Association for Infant Mental Health.

  12. Devaluation and sequential decisions: linking goal-directed and model-based behavior

    PubMed Central

    Friedel, Eva; Koch, Stefan P.; Wendt, Jean; Heinz, Andreas; Deserno, Lorenz; Schlagenhauf, Florian

    2014-01-01

    In experimental psychology, different experiments have been developed to assess goal-directed as compared to habitual control over instrumental decisions. Similar to animal studies, selective devaluation procedures have been used. More recently, sequential decision-making tasks have been designed to assess the degree of goal-directed vs. habitual choice behavior in terms of an influential computational theory of model-based compared to model-free behavioral control. As recently suggested, the different measurements are thought to reflect the same construct. Yet, there has been no attempt to directly assess the construct validity of these different measurements. In the present study, we used a devaluation paradigm and a sequential decision-making task to address this question of construct validity in a sample of 18 healthy male human participants. Correlational analysis revealed a positive association between model-based choices during sequential decisions and goal-directed behavior after devaluation, suggesting a single framework underlying both operationalizations and speaking in favor of construct validity of both measurement approaches. Up to now, this has been merely assumed but never directly tested in humans. PMID:25136310

  13. Uncertainty estimates of purity measurements based on current information: toward a "live validation" of purity methods.

    PubMed

    Apostol, Izydor; Kelner, Drew; Jiang, Xinzhao Grace; Huang, Gang; Wypych, Jette; Zhang, Xin; Gastwirt, Jessica; Chen, Kenneth; Fodor, Szilan; Hapuarachchi, Suminda; Meriage, Dave; Ye, Frank; Poppe, Leszek; Szpankowski, Wojciech

    2012-12-01

    Our objective was to predict precision and other performance characteristics of chromatographic purity methods, which represent the most widely used form of analysis in the biopharmaceutical industry. We conducted a comprehensive survey of purity methods and show that all performance characteristics fall within narrow measurement ranges. This observation was used to develop a model called Uncertainty Based on Current Information (UBCI), which expresses these performance characteristics as a function of the signal and noise levels, hardware specifications, and software settings. We applied the UBCI model to assess the uncertainty of purity measurements, and compared the results to those from conventional qualification. We demonstrated that the UBCI model is suitable for dynamically assessing method performance characteristics, based on information extracted from individual chromatograms. The model provides an opportunity for streamlining qualification and validation studies by implementing a "live validation" of test results, utilizing UBCI as a concurrent assessment of measurement uncertainty. Therefore, UBCI can potentially mitigate the challenges associated with laborious conventional method validation and facilitate the introduction of more advanced analytical technologies during the method lifecycle.

  14. A model of scientific attitudes assessment by observation in physics learning based scientific approach: case study of dynamic fluid topic in high school

    NASA Astrophysics Data System (ADS)

    Yusliana Ekawati, Elvin

    2017-01-01

    This study aimed to produce a model for assessing scientific attitudes by observation in physics learning based on a scientific approach (a case study of the dynamic fluid topic in high school). The development of the instruments adapted the Plomp model; the procedure included initial investigation, design, construction, testing, evaluation and revision. Testing was done in Surakarta, and the data obtained were analysed using Aiken's formula to determine the content validity of the instrument, Cronbach's alpha to determine its reliability, and confirmatory factor analysis in the LISREL 8.50 program to establish construct validity. The results of this research were a conceptual model, instruments and guidelines for assessing scientific attitudes by observation. The constructs assessed include curiosity, objectivity, suspended judgment, open-mindedness, honesty and perseverance. The construct validity of the instruments was satisfactory (factor loadings > 0.3), and the reliability of the model was good (Cronbach's alpha 0.899 > 0.7). The test showed that the theoretical model fits the empirical data: p-value 0.315 (≥ 0.05), RMSEA 0.027 (≤ 0.08).
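
    The two classical statistics used here, Aiken's V for content validity and Cronbach's alpha for reliability, are straightforward to compute. A minimal sketch (with illustrative data shapes, not the study's data):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                              # number of items
    item_vars = scores.var(axis=0, ddof=1).sum()     # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)       # variance of total score
    return k / (k - 1) * (1 - item_vars / total_var)

def aiken_v(ratings, lo, hi):
    """Aiken's V for one item: ratings from n judges on a lo..hi scale."""
    ratings = np.asarray(ratings, dtype=float)
    s = (ratings - lo).sum()
    return s / (len(ratings) * (hi - lo))
```

Values of V near 1 indicate strong judge agreement that the item is relevant; alpha above 0.7 is the conventional reliability threshold cited in the abstract.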

  15. Prospector II: Towards a knowledge base for mineral deposits

    USGS Publications Warehouse

    McCammon, R.B.

    1994-01-01

    What began in the mid-seventies as a research effort in designing an expert system to aid geologists in exploring for hidden mineral deposits has in the late eighties become a full-sized knowledge-based system to aid geologists in conducting regional mineral resource assessments. Prospector II, the successor to Prospector, is interactive-graphics oriented, flexible in its representation of mineral deposit models, and suited to regional mineral resource assessment. In Prospector II, the geologist enters the findings for an area, selects the deposit models or examples of mineral deposits for consideration, and the program compares the findings with the models or the examples selected, noting the similarities, differences, and missing information. The models or the examples selected are ranked according to scores that are based on the comparisons with the findings. Findings can be reassessed and the process repeated if necessary. The results provide the geologist with a rationale for identifying those mineral deposit types that the geology of an area permits. In the future, Prospector II can assist in the creation of new models used in regional mineral resource assessment and in striving toward an ultimate classification of mineral deposits. © 1994 International Association for Mathematical Geology.
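
    The comparison of an area's findings against deposit models can be sketched as simple set bookkeeping. The attribute names and the additive score below are purely illustrative, not the actual Prospector II scoring scheme:

```python
def score_model(findings, model):
    """Compare an area's findings (attribute -> observed value) with a deposit
    model's expected attributes, returning (score, similar, different, missing).
    A toy version of the bookkeeping described above."""
    similar = {k for k, v in findings.items() if model.get(k) == v}
    different = {k for k, v in findings.items() if k in model and model[k] != v}
    missing = set(model) - set(findings)          # info the geologist lacks
    score = len(similar) - len(different)         # illustrative ranking score
    return score, similar, different, missing
```

Ranking candidate deposit models by such scores, then prompting the geologist to supply the `missing` attributes and re-run, mirrors the iterative reassessment loop the abstract describes.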

  16. Is the perception of clean, humid air indeed affected by cooling the respiratory tract?

    NASA Astrophysics Data System (ADS)

    Burek, Rudolf; Polednik, Bernard; Guz, Łukasz

    2017-07-01

    The study aims at determining exposure-response relationships after short exposure to clean air and long exposure to air polluted by people. The impact of the water vapor content of indoor air on its acceptability (ACC) was assessed by the occupants after a short exposure to clean air and an hour-long exposure to increasingly polluted air. The study presents a critical analysis of the stimulation of olfactory sensations by air enthalpy suggested in previous models and proposes a new model based on the Weber-Fechner law. Our assumption was that water vapor is the stimulus of olfactory sensations. The model was calibrated and verified in field conditions, in a mechanically ventilated and air-conditioned auditorium. Measurements of air temperature, relative humidity, velocity and CO2 content were carried out; the acceptability of air quality was assessed by 162 untrained students. The subjective assessments and the measurements of the environmental qualities allowed for determining the Weber coefficients and the threshold concentrations of water vapor, as well as for establishing the limitations of the model at short and long exposure to polluted indoor air. The results are in agreement with previous studies. The standard error equals 0.07 for immediate assessments and 0.17 for assessments after adaptation. Based on the model, one can predict the ACC assessments of trained and untrained participants.
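
    The Weber-Fechner relationship underlying the proposed model can be sketched as follows; the function is a hypothetical illustration in which the coefficient k, the threshold c0 and the maximum acceptability are placeholders, not the calibrated values from the study:

```python
import math

def acceptability(c, c0, k, acc_max=1.0):
    """Weber-Fechner-style acceptability: the olfactory response grows with
    the logarithm of the stimulus (water-vapour content c) above a perception
    threshold c0, so acceptability falls off logarithmically."""
    if c <= c0:
        return acc_max                       # below threshold: fully acceptable
    return acc_max - k * math.log(c / c0)    # logarithmic decline above it
```

A characteristic Weber-Fechner signature: each doubling of the water-vapour content above threshold lowers ACC by the same fixed decrement, k·ln 2.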

  17. The assessment of knowledge and learning in competence spaces: The gain-loss model for dependent skills.

    PubMed

    Anselmi, Pasquale; Stefanutti, Luca; de Chiusole, Debora; Robusto, Egidio

    2017-11-01

    The gain-loss model (GaLoM) is a formal model for assessing knowledge and learning. In its original formulation, the GaLoM assumes independence among the skills. Such an assumption is not reasonable in several domains, in which some preliminary knowledge is the foundation for other knowledge. This paper presents an extension of the GaLoM to the case in which the skills are not independent, and the dependence relation among them is described by a well-graded competence space. The probability of mastering skill s at the pretest is conditional on the presence of all skills on which s depends. The probabilities of gaining or losing skill s when moving from pretest to posttest are conditional on the mastery of s at the pretest, and on the presence at the posttest of all skills on which s depends. Two formulations of the model are presented, in which the learning path is allowed to change from pretest to posttest or not. A simulation study shows that models based on the true competence space obtain a better fit than models based on false competence spaces, and are also characterized by a higher assessment accuracy. An empirical application shows that models based on pedagogically sound assumptions about the dependencies among the skills obtain a better fit than models assuming independence among the skills. © 2017 The British Psychological Society.
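
    A minimal sketch can make the dependence constraint concrete. The prerequisite map, probabilities and update rule below are illustrative simplifications of the GaLoM transition, not the authors' estimation procedure:

```python
import random

# Illustrative prerequisite map: each skill -> the set of skills it depends on
# (a stand-in for a well-graded competence space, not the paper's data).
PREREQS = {"a": set(), "b": {"a"}, "c": {"a", "b"}}

def step(state, gain, loss, prereqs=PREREQS, rng=random):
    """One pretest -> posttest transition in the spirit of the dependent-skills
    GaLoM: a skill can be gained only when every skill it depends on is
    mastered, and any mastered skill may be lost."""
    new_state = set()
    for s in prereqs:
        if s in state:
            if rng.random() >= loss[s]:    # skill retained with prob. 1 - loss
                new_state.add(s)
        elif prereqs[s] <= state:          # prerequisites met -> gain possible
            if rng.random() < gain[s]:
                new_state.add(s)
    return new_state
```

With independence (an empty prerequisite map) this reduces to the original GaLoM; the `prereqs[s] <= state` guard is exactly what the extension adds.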

  18. Modeling startle eyeblink electromyogram to assess fear learning.

    PubMed

    Khemka, Saurabh; Tzovara, Athina; Gerster, Samuel; Quednow, Boris B; Bach, Dominik R

    2017-02-01

    Pavlovian fear conditioning is widely used as a laboratory model of associative learning in human and nonhuman species. In this model, an organism is trained to predict an aversive unconditioned stimulus from initially neutral events (conditioned stimuli, CS). In humans, fear memory is typically measured via conditioned autonomic responses or fear-potentiated startle. For the latter, various analysis approaches have been developed, but a systematic comparison of competing methodologies is lacking. Here, we investigate the suitability of a model-based approach to startle eyeblink analysis for assessment of fear memory, and compare this to extant analysis strategies. First, we build a psychophysiological model (PsPM) on a generic startle response. Then, we optimize and validate this PsPM on three independent fear-conditioning data sets. We demonstrate that our model can robustly distinguish aversive (CS+) from nonaversive stimuli (CS-, i.e., has high predictive validity). Importantly, our model-based approach captures fear-potentiated startle during fear retention as well as fear acquisition. Our results establish a PsPM-based approach to assessment of fear-potentiated startle, and qualify previous peak-scoring methods. Our proposed model represents a generic startle response and can potentially be used beyond fear conditioning, for example, to quantify affective startle modulation or prepulse inhibition of the acoustic startle response. © 2016 The Authors. Psychophysiology published by Wiley Periodicals, Inc. on behalf of Society for Psychophysiological Research.

  19. Utility of Social Modeling for Proliferation Assessment - Preliminary Assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coles, Garill A.; Gastelum, Zoe N.; Brothers, Alan J.

    2009-06-01

    This Preliminary Assessment draft report will present the results of a literature search and preliminary assessment of the body of research, analysis methods, models and data deemed to be relevant to the Utility of Social Modeling for Proliferation Assessment research. This report will provide: 1) a description of the problem space and the kinds of information pertinent to the problem space, 2) a discussion of key relevant or representative literature, 3) a discussion of models and modeling approaches judged to be potentially useful to the research, and 4) the next steps of this research that will be pursued based on this preliminary assessment. This draft report represents a technical deliverable for the NA-22 Simulations, Algorithms, and Modeling (SAM) program. Specifically, this draft report is the Task 1 deliverable for project PL09-UtilSocial-PD06, Utility of Social Modeling for Proliferation Assessment. This project investigates non-traditional use of social and cultural information to improve nuclear proliferation assessment, including nonproliferation assessments, proliferation resistance assessments, safeguards assessments and other related studies. These assessments often use and create technical information about the State's posture towards proliferation, the vulnerability of a nuclear energy system to an undesired event, and the effectiveness of safeguards. This project will find and fuse social and technical information by explicitly considering the role of cultural, social and behavioral factors relevant to proliferation. The aim of this research is to describe and demonstrate if and how social science modeling has utility in proliferation assessment.

  20. Multisensor satellite data for water quality analysis and water pollution risk assessment: decision making under deep uncertainty with fuzzy algorithm in framework of multimodel approach

    NASA Astrophysics Data System (ADS)

    Kostyuchenko, Yuriy V.; Sztoyka, Yulia; Kopachevsky, Ivan; Artemenko, Igor; Yuschenko, Maxim

    2017-10-01

    A multi-model approach to remote sensing data processing and interpretation is described. The problem of satellite data utilization in a multi-model approach to socio-ecological risk assessment is formally defined, and a method for utilizing observation, measurement and modeling data within this framework is described. A methodology and models for risk assessment within a decision support approach are defined and described. A method for water quality assessment using satellite observation data, based on analysis of the spectral reflectance of aquifers, is described. Spectral signatures of freshwater bodies and offshore waters are analyzed, and correlations between spectral reflectance, pollution and selected water quality parameters are analyzed and quantified. Data from the MODIS, MISR, AIRS and Landsat sensors received in 2002-2014 were utilized, verified by in-field spectrometry and laboratory measurements. A fuzzy-logic-based approach for decision support on water quality degradation risk is discussed: the decision on water quality category is made by a fuzzy algorithm using a limited set of uncertain parameters. Data from satellite observations, field measurements and modeling are utilized within the proposed approach. It is shown that this algorithm allows estimation of water quality degradation rates and pollution risks. Problems of constructing spatial and temporal distributions of the calculated parameters, as well as the problem of data regularization, are discussed. Using the proposed approach, maps of surface water pollution risk from point and diffuse sources are calculated and discussed.
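
    The fuzzy decision step can be illustrated with a toy rule base. The membership breakpoints, input variables and rules below are invented for illustration and are not the parameters used in the study:

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function: 0 outside (a, c), peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def pollution_risk(reflectance, chlorophyll):
    """Toy Mamdani-style inference: risk = max over rules of the min over
    each rule's antecedent memberships (illustrative variables and rules)."""
    high_refl = tri(reflectance, 0.2, 0.5, 0.8)    # bright, turbid water
    high_chl = tri(chlorophyll, 10, 30, 50)        # algal load, mg/m^3
    low_refl = tri(reflectance, -0.1, 0.1, 0.3)    # dark, clear water
    rules = [
        min(high_refl, high_chl),   # rule 1: polluted spectral signature
        0.3 * low_refl,             # rule 2: weak background risk
    ]
    return max(rules)
```

The appeal for decision support under deep uncertainty is that each input needs only a rough membership grade, not a precise value, before the rules combine them into a risk category.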

  1. Risk assessment of tropical cyclone rainfall flooding in the Delaware River Basin

    NASA Astrophysics Data System (ADS)

    Lu, P.; Lin, N.; Smith, J. A.; Emanuel, K.

    2016-12-01

    Rainfall-induced inland flooding is a leading cause of death, injury, and property damage from tropical cyclones (TCs). In the context of climate change, it has been shown that extreme precipitation from TCs is likely to increase during the 21st century. Assessing the long-term risk of inland flooding associated with landfalling TCs is therefore an important task. Standard risk assessment techniques, which are based on observations from rain gauges and stream gauges, are not broadly applicable to TC-induced flooding, since TCs are rare, extreme events with very limited historical observations at any specific location. Also, rain gauges and stream gauges can hardly capture the complex spatial variation of TC rainfall and flooding. Furthermore, the utility of historically based assessments is compromised by climate change. Regional dynamical downscaling models can resolve many features of TC precipitation. In terms of risk assessment, however, it is computationally demanding to run such models to obtain a long-term climatology of TC-induced flooding. Here we apply a computationally efficient climatological-hydrological method to assess the risk of inland flooding associated with landfalling TCs. It includes: 1) a deterministic TC climatology modeling method to generate large numbers of synthetic TCs with physically correlated characteristics (i.e., track, intensity, size) under observed and projected climates; 2) a simple physics-based tropical cyclone rainfall model which is able to simulate rainfall fields associated with each synthetic storm; 3) a hydrologic modeling system that takes in rainfall fields to simulate flood peaks over an entire drainage basin. We will present results of this method applied to the Delaware River Basin in the mid-Atlantic US.

  2. Waste-to-energy: A review of life cycle assessment and its extension methods.

    PubMed

    Zhou, Zhaozhi; Tang, Yuanjun; Chi, Yong; Ni, Mingjiang; Buekens, Alfons

    2018-01-01

    This article proposes a comprehensive review of evaluation tools based on life cycle thinking, as applied to waste-to-energy. Habitually, life cycle assessment is adopted to assess the environmental burdens associated with waste-to-energy initiatives. Based on this framework, several extension methods have been developed to focus on specific aspects: exergetic life cycle assessment for reducing resource depletion, life cycle costing for evaluating the economic burden, and social life cycle assessment for recording social impacts. Additionally, the environment-energy-economy model integrates the life cycle assessment and life cycle costing methods and simultaneously judges these three features for sustainable waste-to-energy conversion. Life cycle assessment of waste-to-energy is well developed, with concrete data inventories and sensitivity analyses, although data and model uncertainty are unavoidable. Compared with life cycle assessment, only a few evaluations of waste-to-energy techniques use the extension methods, whose methodology and application need further development. Finally, this article succinctly summarises some recommendations for further research.

  3. Development of the AFRL Aircrew Performance and Protection Data Bank

    DTIC Science & Technology

    2007-12-01

    Growth model and statistical model of hypobaric chamber simulations. It offers a quick and readily accessible online DCS risk assessment tool for...are used for the DCS prediction instead of the original model. ADRAC is based on more than 20 years of hypobaric chamber studies using human...prediction based on the combined Bubble Growth model and statistical model of hypobaric chamber simulations was integrated into the Data Bank. It

  4. The Ames two-dimensional stratosphere-mesospheric model. [chemistry and transport of SST pollution

    NASA Technical Reports Server (NTRS)

    Whitten, R. C.; Borucki, W. J.; Watson, V. R.; Capone, L. A.; Maples, A. L.; Riegel, C. A.

    1974-01-01

    A two-dimensional model of the stratosphere and mesosphere has recently been developed at Ames Research Center. The model contains chemistry based on 18 species that are solved for at each step and a seasonally-varying transport model based on both winds and eddy transport. The model is described and a preliminary assessment of the impact of supersonic aircraft flights on the ozone layer is given.

  5. Agent Model Development for Assessing Climate-Induced Geopolitical Instability.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boslough, Mark B.; Backus, George A.

    2005-12-01

    We present the initial stages of development of new agent-based computational methods to generate and test hypotheses about linkages between environmental change and international instability. This report summarizes the first year's effort of an originally proposed three-year Laboratory Directed Research and Development (LDRD) project. The preliminary work focused on a set of simple agent-based models and benefited from lessons learned in previous related projects and case studies of human response to climate change and environmental scarcity. Our approach was to define a qualitative model using extremely simple cellular agent models akin to Lovelock's Daisyworld and Schelling's segregation model. Such models do not require significant computing resources, and users can modify behavior rules to gain insights. One of the difficulties in agent-based modeling is finding the right balance between model simplicity and real-world representation. Our approach was to keep agent behaviors as simple as possible during the development stage (described herein) and to ground them with a realistic geospatial Earth system model in subsequent years. This work is directed toward incorporating projected climate data--including various CO2 scenarios from the Intergovernmental Panel on Climate Change (IPCC) Third Assessment Report--and ultimately toward coupling a useful agent-based model to a general circulation model.
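
    As a flavour of the "extremely simple cellular agent models" mentioned above, here is a one-dimensional Schelling-style sketch; the neighbourhood, threshold and move rule are illustrative simplifications, not the report's models:

```python
import random

def schelling_step(grid, threshold=0.3, rng=random):
    """One sweep of a 1-D Schelling segregation model: an agent moves to a
    random empty cell when the fraction of like neighbours falls below
    `threshold`. Cells hold a group label or None (empty)."""
    n = len(grid)
    for i in range(n):
        agent = grid[i]
        if agent is None:
            continue
        neigh = [grid[j] for j in (i - 1, i + 1)
                 if 0 <= j < n and grid[j] is not None]
        if neigh and sum(a == agent for a in neigh) / len(neigh) < threshold:
            empties = [j for j in range(n) if grid[j] is None]
            if empties:                       # relocate the unhappy agent
                j = rng.choice(empties)
                grid[j], grid[i] = agent, None
    return grid
```

Even this toy version shows the hallmark of such models: mild individual preferences produce strongly segregated global patterns after repeated sweeps, with no computing resources to speak of.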

  6. Three-dimensional assessment of scoliosis based on ultrasound data

    NASA Astrophysics Data System (ADS)

    Zhang, Junhua; Li, Hongjian; Yu, Bo

    2015-12-01

    In this study, an approach was proposed to assess the 3D scoliotic deformity based on ultrasound data. The 3D spine model was reconstructed by using a freehand 3D ultrasound imaging system. The geometric torsion was then calculated from the reconstructed spine model. A thoracic spine phantom set at a given pose was used in the experiment. The geometric torsion of the spine phantom calculated from the freehand ultrasound imaging system was 0.041 mm⁻¹, which was close to that calculated from the biplanar radiographs (0.025 mm⁻¹). Therefore, ultrasound is a promising technique for the 3D assessment of scoliosis.
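
    Geometric torsion measures how sharply a 3-D curve twists out of its osculating plane. For a discretised spine midline it can be estimated from finite-difference derivatives using the standard formula τ = ((r'×r'')·r''')/|r'×r''|²; the sampling and differencing choices in this sketch are illustrative, not the paper's method:

```python
import numpy as np

def torsion(points):
    """Geometric torsion along a discretised 3-D curve (e.g. a reconstructed
    spine midline), via finite-difference derivatives of the point sequence."""
    p = np.asarray(points, dtype=float)        # shape (n, 3)
    d1 = np.gradient(p, axis=0)                # r'
    d2 = np.gradient(d1, axis=0)               # r''
    d3 = np.gradient(d2, axis=0)               # r'''
    cross = np.cross(d1, d2)
    denom = (cross ** 2).sum(axis=1)           # |r' x r''|^2
    with np.errstate(divide="ignore", invalid="ignore"):
        tau = np.einsum("ij,ij->i", cross, d3) / denom
    return np.nan_to_num(tau)                  # 0 where the curve is straight
```

A planar curve has zero torsion everywhere, which makes a convenient sanity check for any such implementation.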

  7. Recommending Education Materials for Diabetic Questions Using Information Retrieval Approaches

    PubMed Central

    Wang, Yanshan; Shen, Feichen; Liu, Sijia; Rastegar-Mojarad, Majid; Wang, Liwei

    2017-01-01

    Background Self-management is crucial to diabetes care, and providing expert-vetted content to answer patients' questions is crucial in facilitating patient self-management. Objective The aim is to investigate the use of information retrieval techniques in recommending patient education materials for patients' diabetes-related questions. Methods We compared two retrieval algorithms, one based on Latent Dirichlet Allocation topic modeling (topic modeling-based model) and one based on semantic group (semantic group-based model), with a baseline retrieval model, the vector space model (VSM), in recommending diabetic patient education materials for diabetes questions posted on the TuDiabetes forum. The evaluation was based on a gold-standard dataset consisting of 50 randomly selected diabetes questions for which the relevance of the diabetic education materials to the questions was manually assigned by two experts. Performance was assessed using the precision of top-ranked documents. Results We retrieved 7510 diabetes questions from the forum and 144 diabetic patient educational materials from the patient education database at Mayo Clinic. The rate at which words in each corpus mapped to the Unified Medical Language System (UMLS) differed significantly (P<.001). The topic modeling-based model outperformed the other retrieval algorithms: for the top-retrieved document, the precision of the topic modeling-based, semantic group-based, and VSM models was 67.0%, 62.8%, and 54.3%, respectively. Conclusions This study demonstrated that topic modeling can mitigate the vocabulary difference and achieved the best performance in recommending education materials for answering patients' questions. One direction for future work is to assess the generalizability of our findings and to extend our study to other disease areas, other patient education material resources, and online forums. PMID:29038097
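
    The baseline VSM ranking used for comparison can be sketched in a few lines. This is a generic term-frequency cosine retrieval, not the exact preprocessing pipeline of the study:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors (dicts)."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank(query, docs):
    """Rank documents (name -> text) against a query with a term-frequency
    vector space model; returns names, best match first."""
    q = Counter(query.lower().split())
    vecs = {name: Counter(text.lower().split()) for name, text in docs.items()}
    return sorted(vecs, key=lambda name: cosine(q, vecs[name]), reverse=True)
```

The failure mode this exposes is exactly the vocabulary gap the abstract mentions: "dose" does not match "dosing" under raw term overlap, which is what topic modeling mitigates.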

  8. Power inequalities in the assessment of nursing competency within the workplace: implications for nursing management.

    PubMed

    Cusack, Lynette; Smith, Morgan

    2010-09-01

    This article explores the power implications of implementing competency-based assessments within the nursing work environment from a manager's perspective. It discusses how the implementation of competency-based assessments for continuing education may affect workplace culture, in particular, the use of power, within the nursing team. The term "managers" for the purpose of this article is defined as "nurses in senior administrative and educational positions within a health care facility." This article adds to the discourse on competency-based models by emphasizing the effect of the nursing work environment on the competency-based assessment process. It concludes by identifying strategies that can be used by nursing management when designing and implementing an effective and fair competency-based assessment for the nursing workplace. Copyright 2010, SLACK Incorporated.

  9. Study protocol for a comparative effectiveness trial of two models of perinatal integrated psychosocial assessment: the PIPA project.

    PubMed

    Reilly, Nicole; Black, Emma; Chambers, Georgina M; Schmied, Virginia; Matthey, Stephen; Farrell, Josephine; Kingston, Dawn; Bisits, Andrew; Austin, Marie-Paule

    2017-07-20

    Studies examining psychosocial and depression assessment programs in maternity settings have not adequately considered the context in which psychosocial assessment occurs or how broader components of integrated care, including clinician decision-making aids, may optimise program delivery and its cost-effectiveness. There is also limited evidence relating to the diagnostic accuracy of symptom-based screening measures used in this context. The Perinatal Integrated Psychosocial Assessment (PIPA) Project was developed to address these knowledge gaps. The primary aims of the PIPA Project are to examine the clinical- and cost-effectiveness of two alternative models of integrated psychosocial care during pregnancy: 'care as usual' (the SAFE START model) and an alternative model (the PIPA model). The acceptability and perceived benefit of each model of care from the perspective of both pregnant women and their healthcare providers will also be assessed. Our secondary aim is to examine the psychometric properties of a number of symptom-based screening tools for depression and anxiety when used in pregnancy. This is a comparative-effectiveness study comparing 'care as usual' to an alternative model sequentially over two 12-month periods. Data will be collected from women at Time 1 (initial antenatal psychosocial assessment), Time 2 (2 weeks after Time 1) and from clinicians at Time 3 for each condition. Primary aims will be evaluated using a between-groups design, and the secondary aim using a within-group design. The PIPA Project will provide evidence relating to the clinical- and cost-effectiveness of psychosocial assessment integrated with electronic clinician decision-making prompts, and referral options that are tailored to the woman's psychosocial risk, in the maternity care setting. It will also address research recommendations from the Australian (2011) and NICE (2015) Clinical Practice Guidelines. ACTRN12617000932369.

  10. Observational attachment theory-based parenting measures predict children's attachment narratives independently from social learning theory-based measures.

    PubMed

    Matias, Carla; O'Connor, Thomas G; Futh, Annabel; Scott, Stephen

    2014-01-01

    Conceptually and methodologically distinct models exist for assessing the quality of parent-child relationships, but few studies contrast competing models or assess their overlap in predicting developmental outcomes. Using observational methodology, the current study examined the distinctiveness of attachment theory-based and social learning theory-based measures of parenting in predicting two key measures of child adjustment: security of attachment narratives and social acceptance in peer nominations. A total of 113 5-6-year-old children from ethnically diverse families participated. Parent-child relationships were rated using standard paradigms. Measures derived from attachment theory included sensitive responding and mutuality; measures derived from social learning theory included positive attending, directives, and criticism. Child outcomes were independently rated attachment narrative representations and peer nominations. Results indicated that attachment theory-based and social learning theory-based measures were modestly correlated; nonetheless, parent-child mutuality predicted secure child attachment narratives independently of social learning theory-based measures; in contrast, criticism predicted peer-nominated fighting independently of attachment theory-based measures. In young children, there is some evidence that attachment theory-based measures may be particularly predictive of attachment narratives; however, no single model of measuring parent-child relationships is likely to best predict multiple developmental outcomes. Assessment in research and applied settings may benefit from integration of different theoretical and methodological paradigms.

  11. Subject-Specific Fully-Coupled and One-Way Fluid-Structure Interaction Models for Modeling of Carotid Atherosclerotic Plaques in Humans

    PubMed Central

    Tao, Xiaojuan; Gao, Peiyi; Jing, Lina; Lin, Yan; Sui, Binbin

    2015-01-01

    Background Hemodynamics play an important role in the development and progression of carotid atherosclerosis, and may be important in the assessment of plaque vulnerability. The aim of this study was to develop a system to assess the hemodynamics of carotid atherosclerotic plaques using subject-specific fluid-structure interaction (FSI) models based on magnetic resonance imaging (MRI). Material/Methods Models of carotid bifurcations (n=86 with plaques from 52 patients, n=14 normal carotids from 12 participants) were obtained at the Department of Radiology, Beijing Tian Tan Hospital between 2010 and 2013. The maximum von Mises stress, minimum pressure, and flow velocity values were assessed at the most stenotic site in patients, or at the carotid bifurcations in healthy volunteers. Results of one-way FSI were compared with fully coupled FSI for the plaques of 19 randomly selected models. Results The maximum von Mises stress and the minimum pressure and velocity were significantly increased in the stenosis group compared with controls based on one-way FSI (all P<0.05). The maximum von Mises stress and the minimum pressure were significantly higher and the velocity was significantly lower based on fully coupled FSI compared with one-way FSI (all P<0.05). Although there were differences in numerical values, both methods were equivalent. The maximum von Mises stress of vulnerable plaques was significantly higher than that of stable plaques (P<0.001). The maximum von Mises stress of the group with fibrous cap defect was significantly higher than that of the group without fibrous cap defect (P=0.001). Conclusions The hemodynamics of atherosclerotic plaques can be assessed noninvasively using subject-specific models of FSI based on MRI. PMID:26510514

  12. Assessing physician leadership styles: application of the situational leadership model to transitions in patient acuity.

    PubMed

    Skog, Alexander; Peyre, Sarah E; Pozner, Charles N; Thorndike, Mary; Hicks, Gloria; Dellaripa, Paul F

    2012-01-01

    The situational leadership model suggests that an effective leader adapts leadership style depending on the followers' level of competency. We assessed the applicability and reliability of the situational leadership model when observing residents in simulated hospital floor-based scenarios. Resident teams engaged in clinical simulated scenarios. Video recordings were divided into clips based on Emergency Severity Index v4 acuity scores. Situational leadership styles were identified in clips by two physicians. Interrater reliability was determined through descriptive statistical data analysis. There were 114 participants recorded in 20 sessions, and 109 clips were reviewed and scored. There was a high level of interrater reliability (weighted kappa r = .81) supporting situational leadership model's applicability to medical teams. A suggestive correlation was found between frequency of changes in leadership style and the ability to effectively lead a medical team. The situational leadership model represents a unique tool to assess medical leadership performance in the context of acuity changes.
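The interrater reliability reported above is a weighted kappa. A minimal sketch of how that statistic is computed from two raters' ordinal scores, assuming linear disagreement weights (the record does not state which weighting scheme the authors used):

```python
import numpy as np

def weighted_kappa(rater_a, rater_b, n_categories, weights="linear"):
    """Cohen's weighted kappa for two raters over ordinal categories 0..n-1."""
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    # observed agreement matrix, as proportions of all rated clips
    obs = np.zeros((n_categories, n_categories))
    for i, j in zip(a, b):
        obs[i, j] += 1
    obs /= len(a)
    # expected matrix under independence, from each rater's marginals
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))
    # disagreement weights: 0 on the diagonal, growing with category distance
    idx = np.arange(n_categories)
    w = np.abs(idx[:, None] - idx[None, :]).astype(float)
    if weights == "quadratic":
        w = w ** 2
    return 1.0 - (w * obs).sum() / (w * exp).sum()
```

Perfect agreement yields kappa = 1; values around .81, as in the study, indicate strong agreement between the two physician raters.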

  13. Representing Micro-Macro Linkages by Actor-Based Dynamic Network Models

    PubMed Central

    Snijders, Tom A.B.; Steglich, Christian E.G.

    2014-01-01

Stochastic actor-based models for network dynamics have the primary aim of statistical inference about processes of network change, but they may also be regarded as a kind of agent-based model. Like many other agent-based models, they are based on local rules for actor behavior. Unlike many other agent-based models, by including elements of generalized linear statistical models they aim to be realistic, detailed representations of network dynamics in empirical data sets. Statistical parallels to micro-macro considerations can be found in the estimation of parameters determining local actor behavior from empirical data, and in the assessment of goodness of fit from the correspondence with network-level descriptives. This article studies several network-level consequences of dynamic actor-based models applied to represent cross-sectional network data. Two examples illustrate how network-level characteristics can be obtained as emergent features implied by micro-specifications of actor-based models. PMID:25960578

  14. Using the Knowledge, Process, Practice (KPP) model for driving the design and development of online postgraduate medical education.

    PubMed

    Shaw, Tim; Barnet, Stewart; Mcgregor, Deborah; Avery, Jennifer

    2015-01-01

Online learning is a primary delivery method for continuing health education programs. It is critical that program curricula have objectives linked to educational models that support learning: using a proven educational modelling process ensures that curricular objectives are met and that a solid basis for learning and assessment is achieved. Our aim was to develop an educational design model that produces an educationally sound program development plan for use by anyone involved in online course development. We describe the development of a generic educational model designed for continuing health education programs. The Knowledge, Process, Practice (KPP) model is founded on recognised educational theory and online education practice. This paper presents a step-by-step guide to using the model for program development that builds in reliable learning and evaluation. The model supports a three-step approach, KPP, based on learning outcomes and supported by appropriate assessment activities. It provides a program structure for online or blended learning that is explicit, educationally defensible, and supports multiple assessment points for health professionals. The KPP model is based on best-practice educational design, using a structure that can be adapted for a variety of online or flexibly delivered postgraduate medical education programs.

  15. Operator Performance Measures for Assessing Voice Communication Effectiveness

    DTIC Science & Technology

    1989-07-01

…performance and workload assessment techniques have been based. Broadbent (1958) described a limited capacity filter model of human information processing. [Table-of-contents fragments from the report: 3.1 Auditory Information Processing (3.1.1 Auditory Attention; 3.1.2 Auditory Memory); 3.2 Models of Information Processing (3.2.1 Capacity Theories); other topics include Learning, Attention, Language Specialization, Decision Making, and Problem Solving.]

  16. The Relative Importance of the Vadose Zone in Multimedia Risk Assessment Modeling Applied at a National Scale: An Analysis of Benzene Using 3MRA

    NASA Astrophysics Data System (ADS)

    Babendreier, J. E.

    2002-05-01

    Evaluating uncertainty and parameter sensitivity in environmental models can be a difficult task, even for low-order, single-media constructs driven by a unique set of site-specific data. The challenge of examining ever more complex, integrated, higher-order models is a formidable one, particularly in regulatory settings applied on a national scale. Quantitative assessment of uncertainty and sensitivity within integrated, multimedia models that simulate hundreds of sites, spanning multiple geographical and ecological regions, will ultimately require a systematic, comparative approach coupled with sufficient computational power. The Multimedia, Multipathway, and Multireceptor Risk Assessment Model (3MRA) is an important code being developed by the United States Environmental Protection Agency for use in site-scale risk assessment (e.g. hazardous waste management facilities). The model currently entails over 700 variables, 185 of which are explicitly stochastic. The 3MRA can start with a chemical concentration in a waste management unit (WMU). It estimates the release and transport of the chemical throughout the environment, and predicts associated exposure and risk. The 3MRA simulates multimedia (air, water, soil, sediments), pollutant fate and transport, multipathway exposure routes (food ingestion, water ingestion, soil ingestion, air inhalation, etc.), multireceptor exposures (resident, gardener, farmer, fisher, ecological habitats and populations), and resulting risk (human cancer and non-cancer effects, ecological population and community effects). The 3MRA collates the output for an overall national risk assessment, offering a probabilistic strategy as a basis for regulatory decisions. To facilitate model execution of 3MRA for purposes of conducting uncertainty and sensitivity analysis, a PC-based supercomputer cluster was constructed. 
The design of SuperMUSE, a 125 GHz Windows-based supercomputer for Model Uncertainty and Sensitivity Evaluation, is described, along with the conceptual layout of an accompanying Java-based parallelization software toolset. Preliminary work is also reported for a benzene-disposal scenario that describes the relative importance of the vadose zone in driving risk levels for ecological receptors and human health. Incorporating landfills, waste piles, aerated tanks, surface impoundments, and land application units, the site-based data used in the analysis included 201 national facilities representing 419 site-WMU combinations.

  17. Life cycle assessment of vehicle lightweighting: a physics-based model of mass-induced fuel consumption.

    PubMed

    Kim, Hyung Chul; Wallington, Timothy J

    2013-12-17

    Lightweighting is a key strategy used to improve vehicle fuel economy. Replacing conventional materials (e.g., steel) with lighter alternatives (e.g., aluminum, magnesium, and composites) decreases energy consumption and greenhouse gas (GHG) emissions during vehicle use, but often increases energy consumption and GHG emissions during materials and vehicle production. Assessing the life-cycle benefits of mass reduction requires a quantitative description of the mass-induced fuel consumption during vehicle use. A new physics-based method for estimating mass-induced fuel consumption (MIF) is proposed. We illustrate the utility of this method by using publicly available data to calculate MIF values in the range of 0.2-0.5 L/(100 km 100 kg) based on 106 records of fuel economy tests by the U.S. Environmental Protection Agency for 2013 model year vehicles. Lightweighting is shown to have the most benefit when applied to vehicles with high fuel consumption and high power. Use of the physics-based model presented here would place future life cycle assessment studies of vehicle lightweighting on a firmer scientific foundation.
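Given the MIF unit of L/(100 km 100 kg), the fuel saved by a mass reduction over a vehicle's lifetime follows from simple arithmetic. A small sketch; the 0.35 value is an assumed midpoint of the reported 0.2-0.5 range, and the 200,000 km lifetime is illustrative, not a figure from the study:

```python
def fuel_saved(mif_l_per_100km_100kg, mass_reduction_kg, distance_km):
    """Fuel saved (litres) from a mass reduction, given a mass-induced
    fuel consumption (MIF) value in L/(100 km * 100 kg)."""
    # litres saved per 100 km driven
    per_100km = mif_l_per_100km_100kg * (mass_reduction_kg / 100.0)
    # scale by total distance, in units of 100 km
    return per_100km * (distance_km / 100.0)

# 100 kg removed, MIF = 0.35, driven 200,000 km: about 700 litres saved
saving = fuel_saved(0.35, 100, 200_000)
```

The same arithmetic explains the paper's conclusion: vehicles with high MIF (high fuel consumption, high power) gain the most from each kilogram removed.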

  18. Life cycle assessment based environmental impact estimation model for pre-stressed concrete beam bridge in the early design phase

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kyong Ju, E-mail: kjkim@cau.ac.kr; Yun, Won Gun, E-mail: ogun78@naver.com; Cho, Namho, E-mail: nhc51@cau.ac.kr

The recent rise in global concern over environmental issues such as global warming and air pollution is accentuating the need for environmental assessments in the construction industry. Promptly evaluating the environmental loads of the various design alternatives during the early stages of a construction project, and adopting the most environmentally sustainable candidate, is therefore of great importance. Yet research on early evaluation of a construction project's environmental load to aid the decision-making process is hitherto lacking. In light of this gap, this study proposes a model for estimating the environmental load of the pre-stressed concrete (PSC) beam bridge, the most common bridge structure, employing only the most basic information accessible during the early design phases of a project. First, a life cycle assessment (LCA) was conducted on data from 99 bridges by integrating the bills of quantities (BOQ) with a life cycle inventory (LCI) database. The processed data were then used to construct a case-based reasoning (CBR) model for estimating the environmental load. The accuracy of the estimation model was validated using five test cases; the model's mean absolute error rate (MAER) for the total environmental load was 7.09%. These results were superior to those obtained from a multiple-regression-based model and a slab-area base-unit analysis model. Application of this model during the early stages of a project is therefore expected to support environmentally friendly design and construction by facilitating swift evaluation of the environmental load from multiple standpoints. - Highlights: • This study develops a model for assessing environmental impacts based on LCA. • Bills of quantities from completed PSC beam designs were linked with the LCI DB. • Previous cases were used to estimate the environmental load of a new case via a CBR model. • The CBR model produces more accurate estimations (MAER 7.09%) than conventional models. • The study supports the decision-making process in the early stage of a new construction project.
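The core of a CBR estimator of this kind is retrieval of similar past cases and a similarity-weighted prediction, validated with the MAER metric the record cites. A minimal sketch, assuming Euclidean distance over BOQ-derived feature vectors and inverse-distance weighting; the study's actual retrieval and adaptation steps are not specified here:

```python
import math

def estimate_load(query, cases, k=3):
    """Case-based reasoning sketch: predict the environmental load of a new
    bridge design as the distance-weighted mean of its k most similar cases.
    `cases` is a list of (feature_vector, environmental_load) pairs."""
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    nearest = sorted(cases, key=lambda c: dist(query, c[0]))[:k]
    # closer cases get larger weights; epsilon guards exact matches
    weights = [1.0 / (dist(query, f) + 1e-9) for f, _ in nearest]
    return sum(w * load for w, (_, load) in zip(weights, nearest)) / sum(weights)

def maer(estimates, actuals):
    """Mean absolute error rate (%), the validation metric used above."""
    return 100.0 * sum(abs(e - a) / a
                       for e, a in zip(estimates, actuals)) / len(estimates)
```

With this definition, an MAER of 7.09% means the model's total-load estimates deviated from the LCA-computed values by about 7% on average across the five test cases.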

  19. Physiologically-based kinetic modelling in risk assessment

    EPA Science Inventory

    The European Union Reference Laboratory for Alternatives to Animal Testing (EURL ECVAM) hosted a two-day workshop with an aim to discuss the role and application of Physiologically Based Kinetic (PBK) models in regulatory decision making. The EURL ECVAM strategy document on Toxic...

  20. Assessment of groundwater quality: a fusion of geochemical and geophysical information via Bayesian neural networks.

    PubMed

    Maiti, Saumen; Erram, V C; Gupta, Gautam; Tiwari, Ram Krishna; Kulkarni, U D; Sangpal, R R

    2013-04-01

Deplorable quality of groundwater arising from saltwater intrusion, natural leaching, and anthropogenic activities is a major concern for society. Assessment of groundwater quality is, therefore, a primary objective of scientific research. Here, we propose an artificial neural network-based method set in a Bayesian neural network (BNN) framework and employ it to assess groundwater quality. The approach is based on analyzing 36 water samples and inverting up to 85 Schlumberger vertical electrical sounding data. We constructed an a priori model by suitably parameterizing geochemical and geophysical data collected from the western part of India. The posterior model (post-inversion) was estimated using the BNN learning procedure and a global hybrid Monte Carlo/Markov chain Monte Carlo optimization scheme. By suitable parameterization of geochemical and geophysical parameters, we simulated 1,500 training samples, of which 50% were used for training and the remaining 50% for validation and testing. We show that the trained model classifies validation and test samples with 85% and 80% accuracy, respectively. Based on cross-correlation analysis and the Gibbs diagram of geochemical attributes, the groundwater of the study area was classified into the following three categories: "Very good", "Good", and "Unsuitable". The BNN model-based results suggest that groundwater quality falls mostly in the range of "Good" to "Very good", except for some places near the Arabian Sea. The new modeling results, supported by uncertainty and statistical analyses, provide useful constraints that could be utilized in monitoring and assessment of groundwater quality.

  1. Improving On-Task Behavior Using a Functional Assessment-Based Intervention in an Inclusive High School Setting

    ERIC Educational Resources Information Center

    Majeika, Caitlyn E.; Walder, Jessica P.; Hubbard, Jessica P.; Steeb, Kelly M.; Ferris, Geoffrey J.; Oakes, Wendy P.; Lane, Kathleen Lynne

    2011-01-01

A comprehensive, integrated, three-tiered model (CI3T) of prevention is a framework for proactively addressing students' academic, behavioral, and social skills. At the tertiary (Tier 3) level of prevention, functional assessment-based interventions (FABIs) may be used to identify, develop, and implement supports based on the function, or purpose, of…

  2. The Impact of Computational Experiment and Formative Assessment in Inquiry-Based Teaching and Learning Approach in STEM Education

    ERIC Educational Resources Information Center

    Psycharis, Sarantos

    2016-01-01

In this study, an instructional design model based on the computational experiment approach was employed to explore the effects of formative assessment strategies and scientific abilities rubrics on students' engagement in the development of an inquiry-based pedagogical scenario. In the study, rubrics were used during the…

  3. Scan-To Output Validation: Towards a Standardized Geometric Quality Assessment of Building Information Models Based on Point Clouds

    NASA Astrophysics Data System (ADS)

    Bonduel, M.; Bassier, M.; Vergauwen, M.; Pauwels, P.; Klein, R.

    2017-11-01

    The use of Building Information Modeling (BIM) for existing buildings based on point clouds is increasing. Standardized geometric quality assessment of the BIMs is needed to make them more reliable and thus reusable for future users. First, available literature on the subject is studied. Next, an initial proposal for a standardized geometric quality assessment is presented. Finally, this method is tested and evaluated with a case study. The number of specifications on BIM relating to existing buildings is limited. The Levels of Accuracy (LOA) specification of the USIBD provides definitions and suggestions regarding geometric model accuracy, but lacks a standardized assessment method. A deviation analysis is found to be dependent on (1) the used mathematical model, (2) the density of the point clouds and (3) the order of comparison. Results of the analysis can be graphical and numerical. An analysis on macro (building) and micro (BIM object) scale is necessary. On macro scale, the complete model is compared to the original point cloud and vice versa to get an overview of the general model quality. The graphical results show occluded zones and non-modeled objects respectively. Colored point clouds are derived from this analysis and integrated in the BIM. On micro scale, the relevant surface parts are extracted per BIM object and compared to the complete point cloud. Occluded zones are extracted based on a maximum deviation. What remains is classified according to the LOA specification. The numerical results are integrated in the BIM with the use of object parameters.
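The macro- and micro-scale comparisons described above reduce to computing, for sampled points on one dataset, the distance to the nearest point of the other, then flagging large deviations as occlusions or non-modeled geometry. An illustrative sketch with a brute-force nearest-neighbour search; the 5 cm occlusion cutoff is an assumption, not a value from the paper or the USIBD LOA specification:

```python
import numpy as np

def deviation_analysis(model_pts, cloud_pts, occlusion_cutoff=0.05):
    """Per-point deviation of BIM surface samples against a point cloud:
    nearest-neighbour distances (metres), with points beyond the cutoff
    flagged as occluded or non-modeled. Brute force; real pipelines would
    use a k-d tree for large clouds."""
    # pairwise distances via broadcasting: (M, 1, 3) - (1, N, 3) -> (M, N)
    d = np.linalg.norm(model_pts[:, None, :] - cloud_pts[None, :, :], axis=2)
    nearest = d.min(axis=1)
    occluded = nearest > occlusion_cutoff
    return nearest, occluded
```

Running the comparison in both directions, as the paper does, distinguishes occluded zones (model points with no nearby cloud points) from non-modeled objects (cloud points with no nearby model surface).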

  4. A Robot-Driven Computational Model for Estimating Passive Ankle Torque With Subject-Specific Adaptation.

    PubMed

    Zhang, Mingming; Meng, Wei; Davies, T Claire; Zhang, Yanxin; Xie, Sheng Q

    2016-04-01

    Robot-assisted ankle assessment could potentially be conducted using sensor-based and model-based methods. Existing ankle rehabilitation robots usually use torquemeters and multiaxis load cells for measuring joint dynamics. These measurements are accurate, but the contribution as a result of muscles and ligaments is not taken into account. Some computational ankle models have been developed to evaluate ligament strain and joint torque. These models do not include muscles and, thus, are not suitable for an overall ankle assessment in robot-assisted therapy. This study proposed a computational ankle model for use in robot-assisted therapy with three rotational degrees of freedom, 12 muscles, and seven ligaments. This model is driven by robotics, uses three independent position variables as inputs, and outputs an overall ankle assessment. Subject-specific adaptations by geometric and strength scaling were also made to allow for a universal model. This model was evaluated using published results and experimental data from 11 participants. Results show a high accuracy in the evaluation of ligament neutral length and passive joint torque. The subject-specific adaptation performance is high, with each normalized root-mean-square deviation value less than 10%. This model could be used for ankle assessment, especially in evaluating passive ankle torque, for a specific individual. The characteristic that is unique to this model is the use of three independent position variables that can be measured in real time as inputs, which makes it advantageous over other models when combined with robot-assisted therapy.
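The adaptation performance above is reported as a normalized root-mean-square deviation (NRMSD) under 10%. A one-function sketch; normalising by the range of the observed values is an assumption, since other conventions (e.g. dividing by the mean) exist and the record does not say which was used:

```python
import math

def nrmsd(predicted, observed):
    """Normalized root-mean-square deviation between model predictions
    and measured values, expressed as a fraction of the observed range."""
    rmsd = math.sqrt(sum((p - o) ** 2
                         for p, o in zip(predicted, observed)) / len(observed))
    return rmsd / (max(observed) - min(observed))
```

A value below 0.10 (10%), as reported for each participant, indicates the geometrically and strength-scaled model tracks the measured passive torques closely.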

  5. Applicability and feasibility of systematic review for performing evidence-based risk assessment in food and feed safety.

    PubMed

    Aiassa, E; Higgins, J P T; Frampton, G K; Greiner, M; Afonso, A; Amzal, B; Deeks, J; Dorne, J-L; Glanville, J; Lövei, G L; Nienstedt, K; O'connor, A M; Pullin, A S; Rajić, A; Verloo, D

    2015-01-01

Food and feed safety risk assessment uses multi-parameter models to evaluate the likelihood of adverse events associated with exposure to hazards in human health, plant health, animal health, animal welfare, and the environment. Systematic review and meta-analysis are established methods for answering questions in health care, and can be implemented to minimize biases in food and feed safety risk assessment. However, no methodological frameworks exist for refining risk assessment multi-parameter models into questions suitable for systematic review, and use of meta-analysis to estimate all parameters required by a risk model may not always be feasible. This paper describes novel approaches for determining question suitability and for prioritizing questions for systematic review in this area. Risk assessment questions that aim to estimate a parameter are likely to be suitable for systematic review. Such questions can be structured by their "key elements" [e.g., for intervention questions, the population(s), intervention(s), comparator(s), and outcome(s)]. Prioritization of questions to be addressed by systematic review relies on the likely impact and related uncertainty of individual parameters in the risk model. This approach to planning and prioritizing systematic review has useful implications for producing evidence-based food and feed safety risk assessment.

  6. Academic performance and perception of learning following a peer coaching teaching and assessment strategy.

    PubMed

    Moore, Catherine; Westwater-Wood, Sarah; Kerry, Roger

    2016-03-01

Peer coaching has been associated with positive effects on learning, and these associations have been explored specifically in complex healthcare professions. A social theory of learning has been proposed as a key component of the utility of peer coaching, and within the peer coaching model, assessment has been considered an important driver; empirical support for these dimensions of the model is lacking. Our aim was to quantify assessment achievements and explore emergent attitudes and beliefs about learning related to a specific peer coaching model with integrated assessment. A longitudinal study based in a UK Higher Education Institute recorded assessment achievements and surveyed attitudes and beliefs in consecutive Year 1 undergraduate (physiotherapy) students (n = 560) between 2002 and 2012. A 6% improvement in academic achievement was demonstrated following the introduction of the peer coaching learning model, with a further 5% improvement following the implementation of an integrated assessment; overall, this corresponded to an average increase of one marking band. Students valued the strategy, and themes relating to the importance of social learning emerged from the survey data. Peer coaching is an evidence-based teaching and learning strategy which can facilitate learning in complex subject areas. The strategy is underpinned by social learning theory, which is supported by emergent student-reported attitudes.

  7. A multi-scale, multi-disciplinary approach for assessing the technological, economic and environmental performance of bio-based chemicals.

    PubMed

    Herrgård, Markus; Sukumara, Sumesh; Campodonico, Miguel; Zhuang, Kai

    2015-12-01

    In recent years, bio-based chemicals have gained interest as a renewable alternative to petrochemicals. However, there is a significant need to assess the technological, biological, economic and environmental feasibility of bio-based chemicals, particularly during the early research phase. Recently, the Multi-scale framework for Sustainable Industrial Chemicals (MuSIC) was introduced to address this issue by integrating modelling approaches at different scales ranging from cellular to ecological scales. This framework can be further extended by incorporating modelling of the petrochemical value chain and the de novo prediction of metabolic pathways connecting existing host metabolism to desirable chemical products. This multi-scale, multi-disciplinary framework for quantitative assessment of bio-based chemicals will play a vital role in supporting engineering, strategy and policy decisions as we progress towards a sustainable chemical industry. © 2015 Authors; published by Portland Press Limited.

  8. Effect of quality chronic disease management for alcohol and drug dependence on addiction outcomes.

    PubMed

    Kim, Theresa W; Saitz, Richard; Cheng, Debbie M; Winter, Michael R; Witas, Julie; Samet, Jeffrey H

    2012-12-01

    We examined the effect of the quality of primary care-based chronic disease management (CDM) for alcohol and/or other drug (AOD) dependence on addiction outcomes. We assessed quality using (1) a visit frequency based measure and (2) a self-reported assessment measuring alignment with the chronic care model. The visit frequency based measure had no significant association with addiction outcomes. The self-reported measure of care-when care was at a CDM clinic-was associated with lower drug addiction severity. The self-reported assessment of care from any healthcare source (CDM clinic or elsewhere) was associated with lower alcohol addiction severity and abstinence. These findings suggest that high quality CDM for AOD dependence may improve addiction outcomes. Quality measures based upon alignment with the chronic care model may better capture features of effective CDM care than a visit frequency measure. Copyright © 2012 Elsevier Inc. All rights reserved.

  9. Results of research on development of an intellectual information system of bankruptcy risk assessment of the enterprise

    NASA Astrophysics Data System (ADS)

    Telipenko, E.; Chernysheva, T.; Zakharova, A.; Dumchev, A.

    2015-10-01

The article presents the results of research on developing the knowledge base for an intellectual information system for assessing the bankruptcy risk of an enterprise. The process of developing the knowledge base is analysed; the main stages, the problems encountered, and their solutions are described. The article introduces a connectionist model for bankruptcy risk assessment based on the analysis of industrial enterprises' financial accounts. The basis of this connectionist model is a three-layer perceptron trained with the back-propagation-of-error algorithm. The knowledge base of the intellectual information system consists of the processed information and the processing method, represented as the connectionist model. The article presents the structure of the intellectual information system, the knowledge base, and the information-processing algorithm for neural network training. The paper reports mean values of 10 indexes for industrial enterprises, with which it is possible to carry out a financial analysis of an industrial enterprise and correctly identify its current situation for well-timed managerial decisions. Results are given of neural network testing on data from both bankrupt and financially strong enterprises that were not included in the training and test sets.
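A three-layer perceptron trained with back-propagation of error, as the record describes, can be sketched in a few lines. Layer sizes and the toy data here are illustrative assumptions (10 inputs matching the 10 financial indexes, one risk output); the study's actual architecture and training data are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)

# three layers: 10 inputs (financial indexes) -> 6 hidden -> 1 risk output
n_in, n_hid, n_out = 10, 6, 1
W1 = rng.normal(0, 0.5, (n_in, n_hid))
W2 = rng.normal(0, 0.5, (n_hid, n_out))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(X, y, lr=0.5):
    """One full-batch step of back-propagation of error on squared loss."""
    global W1, W2
    h = sigmoid(X @ W1)                    # hidden-layer activations
    out = sigmoid(h @ W2)                  # network output (risk score)
    err = out - y                          # output error
    d_out = err * out * (1 - out)          # delta at the output layer
    d_hid = (d_out @ W2.T) * h * (1 - h)   # error propagated back to hidden layer
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_hid
    return float(np.mean(err ** 2))

# toy training set: two synthetic "enterprises", one sound (0) and one at risk (1)
X = rng.random((2, n_in))
y = np.array([[0.0], [1.0]])
for _ in range(3000):
    mse = train_step(X, y)
```

After training, the mean squared error drops close to zero on this toy set; the study instead trained on processed indexes of real enterprises and held out bankrupt and financially strong firms for testing.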

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Longgao; Yang, Xiaoyan; School of Environmental Science and Spatial Informatics, China University of Mining and Technology, Xuzhou 221116

The implementation of land use planning (LUP) has a large impact on environmental quality, yet there is no widely accepted, consolidated approach to assessing the environmental impact of LUP using Strategic Environmental Assessment (SEA). In this paper, we developed a state-impact-state (SIS) model for use in LUP environmental impact assessment (LUPEA). Using the Matter-element (ME) and Extenics methods, a methodology based on the SIS model was established and applied to the LUPEA of Zoucheng County, China. The results show that: (1) the methodology provides an intuitive and easily understood logical model for both theoretical analysis and application of LUPEA; (2) spatial multi-temporal assessment from the base year, through the near-future year, to the planning target year suggests a positive impact on environmental quality in the whole county, despite some environmental degradation in certain towns; (3) beyond the spatial assessment, other outputs were obtained, including the environmental elements influenced by land use and their weights, the identification of key indicators in LUPEA, and appropriate environmental mitigation measures; and (4) the methodology can be used for multi-temporal assessment of LUP environmental impact at the county or town level in other areas. - Highlights: • A State-Impact-State model for Land Use Planning Environmental Assessment (LUPEA). • Matter-element (ME) and Extenics methods were embedded in the LUPEA. • The model was applied to the LUPEA of Zoucheng County. • The assessment shows improving environmental quality since 2000 in Zoucheng County. • The method provides a useful tool for LUPEA at the county level.

  11. A theoretical framework to describe communication processes during medical disability assessment interviews

    PubMed Central

    van Rijssen, H Jolanda; Schellart, Antonius JM; Anema, Johannes R; van der Beek, Allard J

    2009-01-01

    Background Research in different fields of medicine suggests that communication is important in physician-patient encounters and influences satisfaction with these encounters. It is argued that this also applies to the non-curative tasks that physicians perform, such as sickness certification and medical disability assessments. However, there is no conceptualised theoretical framework that can be used to describe intentions with regard to communication behaviour, communication behaviour itself, and satisfaction with communication behaviour in a medical disability assessment context. Objective The objective of this paper is to describe the conceptualisation of a model for the communication behaviour of physicians performing medical disability assessments in a social insurance context and of their claimants, in face-to-face encounters during medical disability assessment interviews and the preparation thereof. Conceptualisation The behavioural model, based on the Theory of Planned Behaviour (TPB), is conceptualised for the communication behaviour of social insurance physicians and claimants separately, but also combined during the assessment interview. Other important concepts in the model are the evaluation of communication behaviour (satisfaction), intentions, attitudes, skills, and barriers for communication. Conclusion The conceptualisation of the TPB-based behavioural model will help to provide insight into the communication behaviour of social insurance physicians and claimants during disability assessment interviews. After empirical testing of the relationships in the model, it can be used in other studies to obtain more insight into communication behaviour in non-curative medicine, and it could help social insurance physicians to adapt their communication behaviour to their task when performing disability assessments. PMID:19807905

  12. Objective Quantification of Pre-and Postphonosurgery Vocal Fold Vibratory Characteristics Using High-Speed Videoendoscopy and a Harmonic Waveform Model

    ERIC Educational Resources Information Center

    Ikuma, Takeshi; Kunduk, Melda; McWhorter, Andrew J.

    2014-01-01

    Purpose: The model-based quantitative analysis of high-speed videoendoscopy (HSV) data at a low frame rate of 2,000 frames per second was assessed for its clinical adequacy. Stepwise regression was employed to evaluate the HSV parameters using harmonic models and their relationships to the Voice Handicap Index (VHI). Also, the model-based HSV…

  13. New Dental Accreditation Standard on Critical Thinking: A Call for Learning Models, Outcomes, Assessments.

    PubMed

    Johnsen, David C; Williams, John N; Baughman, Pauletta Gay; Roesch, Darren M; Feldman, Cecile A

    2015-10-01

    This opinion article applauds the recent introduction of a new dental accreditation standard addressing critical thinking and problem-solving, but expresses a need for additional means for dental schools to demonstrate they are meeting the new standard because articulated outcomes, learning models, and assessments of competence are still being developed. Validated, research-based learning models are needed to define reference points against which schools can design and assess the education they provide to their students. This article presents one possible learning model for this purpose and calls for national experts from within and outside dental education to develop models that will help schools define outcomes and assess performance in educating their students to become practitioners who are effective critical thinkers and problem-solvers.

  14. Personality Assessment in the Schools: Issues and Procedures for School Psychologists.

    ERIC Educational Resources Information Center

    Knoff, Howard M.

    1983-01-01

    A conceptual model for school-based personality assessment, methods to integrate behavioral and projective assessment procedures, and issues surrounding the use of projective tests are presented. Ways to maximize the personality assessment process for use in placement and programing decisions are suggested. (Author/DWH)

  15. Structural Validation of the Holistic Wellness Assessment

    ERIC Educational Resources Information Center

    Brown, Charlene; Applegate, E. Brooks; Yildiz, Mustafa

    2015-01-01

    The Holistic Wellness Assessment (HWA) is a relatively new assessment instrument based on an emergent transdisciplinary model of wellness. This study validated the factor structure identified via exploratory factor analysis (EFA), assessed test-retest reliability, and investigated concurrent validity of the HWA in three separate samples. The…

  16. Application of a Physiologically Based Pharmacokinetic Model to Assess Propofol Hepatic and Renal Glucuronidation in Isolation: Utility of In Vitro and In Vivo Data

    PubMed Central

    Gill, Katherine L.; Gertz, Michael; Houston, J. Brian

    2013-01-01

    A physiologically based pharmacokinetic (PBPK) modeling approach was used to assess the prediction accuracy of propofol hepatic and extrahepatic metabolic clearance and to address previously reported underprediction of in vivo clearance based on static in vitro–in vivo extrapolation methods. The predictive capacity of propofol intrinsic clearance data (CLint) obtained in human hepatocytes and liver and kidney microsomes was assessed using the PBPK model developed in MATLAB software. Microsomal data obtained by both substrate depletion and metabolite formation methods and in the presence of 2% bovine serum albumin were considered in the analysis. Incorporation of hepatic and renal in vitro metabolic clearance in the PBPK model resulted in underprediction of propofol clearance regardless of the source of in vitro data; the predicted value did not exceed 35% of the observed clearance. Subsequently, propofol clinical data from three dose levels in intact patients and anhepatic subjects were used for the optimization of hepatic and renal CLint in a simultaneous fitting routine. Optimization process highlighted that renal glucuronidation clearance was underpredicted to a greater extent than liver clearance, requiring empirical scaling factors of 17 and 9, respectively. The use of optimized clearance parameters predicted hepatic and renal extraction ratios within 20% of the observed values, reported in an additional independent clinical study. This study highlights the complexity involved in assessing the contribution of extrahepatic clearance mechanisms and illustrates the application of PBPK modeling, in conjunction with clinical data, to assess prediction of clearance from in vitro data for each tissue individually. PMID:23303442
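The abstract above describes scaling in vitro intrinsic clearance (CLint) to whole-organ clearance and correcting the underprediction with empirical scaling factors (9 for liver, 17 for kidney). A minimal sketch of that idea, using the classic well-stirred organ model rather than the paper's full PBPK model, with illustrative numbers that are not the study's values:

```python
# Hedged sketch: scaling in vitro intrinsic clearance (CLint) to organ
# clearance with the well-stirred model, plus an empirical scaling factor
# (SF) of the kind the study fitted. All numeric inputs are illustrative.

def well_stirred_clearance(q_organ, fu, cl_int, sf=1.0):
    """Organ clearance (L/h) from organ blood flow, unbound fraction and CLint."""
    cl_int_scaled = sf * cl_int
    return q_organ * fu * cl_int_scaled / (q_organ + fu * cl_int_scaled)

Q_LIVER = 90.0   # hepatic blood flow, L/h (typical adult value)
FU = 0.02        # fraction unbound in blood (illustrative)
CL_INT = 3000.0  # in vitro intrinsic clearance, L/h (illustrative)

unscaled = well_stirred_clearance(Q_LIVER, FU, CL_INT)
scaled = well_stirred_clearance(Q_LIVER, FU, CL_INT, sf=9.0)
print(f"hepatic CL without scaling: {unscaled:.1f} L/h")
print(f"hepatic CL with SF = 9:     {scaled:.1f} L/h")
```

Because clearance is flow-limited, even a large scaling factor cannot push the predicted organ clearance above organ blood flow, which is one reason fitting the factors against clinical data (as the study does) is informative.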

  17. Hurricane Sandy Economic Impacts Assessment: A Computable General Equilibrium Approach and Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boero, Riccardo; Edwards, Brian Keith

    Economists use computable general equilibrium (CGE) models to assess how economies react and self-organize after changes in policies, technology, and other exogenous shocks. CGE models are equation-based, empirically calibrated, and inspired by Neoclassical economic theory. The focus of this work was to validate the National Infrastructure Simulation and Analysis Center (NISAC) CGE model and apply it to the problem of assessing the economic impacts of severe events. We used the 2012 Hurricane Sandy event as our validation case. In particular, this work first introduces the model and then describes the validation approach and the empirical data available for studying the event of focus. Shocks to the model are then formalized and applied. Finally, model results and limitations are presented and discussed, pointing out both the model degree of accuracy and the assessed total damage caused by Hurricane Sandy.

  18. Foraging Behaviour in Magellanic Woodpeckers Is Consistent with a Multi-Scale Assessment of Tree Quality

    PubMed Central

    Vergara, Pablo M.; Soto, Gerardo E.; Rodewald, Amanda D.; Meneses, Luis O.; Pérez-Hernández, Christian G.

    2016-01-01

    Theoretical models predict that animals should make foraging decisions after assessing the quality of available habitat, but most models fail to consider the spatio-temporal scales at which animals perceive habitat availability. We tested three foraging strategies that explain how Magellanic woodpeckers (Campephilus magellanicus) assess the relative quality of trees: 1) Woodpeckers with local knowledge select trees based on the available trees in the immediate vicinity. 2) Woodpeckers lacking local knowledge select trees based on their availability at previously visited locations. 3) Woodpeckers using information from long-term memory select trees based on knowledge about trees available within the entire landscape. We observed foraging woodpeckers and used a Brownian Bridge Movement Model to identify trees available to woodpeckers along foraging routes. Woodpeckers selected trees with a later decay stage than available trees. Selection models indicated that preferences of Magellanic woodpeckers were based on clusters of trees near the most recently visited trees, thus suggesting that woodpeckers use visual cues from neighboring trees. In a second analysis, Cox’s proportional hazards models showed that woodpeckers used information consolidated across broader spatial scales to adjust tree residence times. Specifically, woodpeckers spent more time at trees with larger diameters and in a more advanced stage of decay than trees available along their routes. These results suggest that Magellanic woodpeckers make foraging decisions based on the relative quality of trees that they perceive and memorize information at different spatio-temporal scales. PMID:27416115
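The Brownian Bridge Movement Model used above estimates where the animal could have been between two telemetry fixes. A minimal sketch of the underlying bridge: the expected position at an intermediate time lies on the line between the fixes, with positional variance largest mid-bridge. Coordinates, times, and the motion-variance parameter below are invented for illustration:

```python
# Hedged sketch of a single Brownian bridge between two telemetry fixes.
# At time t in (0, T) the position is normally distributed around the
# linear interpolation of the fixes, with variance sigma2_m * t*(T-t)/T.

def bridge_mean_var(z0, zT, T, t, sigma2_m):
    """Mean position and positional variance at time t in (0, T)."""
    a = t / T
    mean = (z0[0] + a * (zT[0] - z0[0]), z0[1] + a * (zT[1] - z0[1]))
    var = sigma2_m * t * (T - t) / T
    return mean, var

fix_a, fix_b = (0.0, 0.0), (100.0, 40.0)  # two fixes, metres (hypothetical)
T = 600.0                                  # seconds between fixes
for t in (150.0, 300.0, 450.0):
    mean, var = bridge_mean_var(fix_a, fix_b, T, t, sigma2_m=0.5)
    print(f"t={t:.0f}s: mean={mean}, variance={var:.1f} m^2")
```

Summing such bridge densities over a whole foraging route yields the utilization distribution from which "available" trees can be identified, as in the study.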

  19. Foraging Behaviour in Magellanic Woodpeckers Is Consistent with a Multi-Scale Assessment of Tree Quality.

    PubMed

    Vergara, Pablo M; Soto, Gerardo E; Moreira-Arce, Darío; Rodewald, Amanda D; Meneses, Luis O; Pérez-Hernández, Christian G

    2016-01-01

    Theoretical models predict that animals should make foraging decisions after assessing the quality of available habitat, but most models fail to consider the spatio-temporal scales at which animals perceive habitat availability. We tested three foraging strategies that explain how Magellanic woodpeckers (Campephilus magellanicus) assess the relative quality of trees: 1) Woodpeckers with local knowledge select trees based on the available trees in the immediate vicinity. 2) Woodpeckers lacking local knowledge select trees based on their availability at previously visited locations. 3) Woodpeckers using information from long-term memory select trees based on knowledge about trees available within the entire landscape. We observed foraging woodpeckers and used a Brownian Bridge Movement Model to identify trees available to woodpeckers along foraging routes. Woodpeckers selected trees with a later decay stage than available trees. Selection models indicated that preferences of Magellanic woodpeckers were based on clusters of trees near the most recently visited trees, thus suggesting that woodpeckers use visual cues from neighboring trees. In a second analysis, Cox's proportional hazards models showed that woodpeckers used information consolidated across broader spatial scales to adjust tree residence times. Specifically, woodpeckers spent more time at trees with larger diameters and in a more advanced stage of decay than trees available along their routes. These results suggest that Magellanic woodpeckers make foraging decisions based on the relative quality of trees that they perceive and memorize information at different spatio-temporal scales.

  20. Preliminary assessment of a hysteroscopic fallopian tube heat and biomaterial technology for permanent female sterilization

    NASA Astrophysics Data System (ADS)

    Divakar, Prajan; Trembly, B. Stuart; Moodie, Karen L.; Hoopes, P. Jack; Wegst, Ulrike G. K.

    2017-02-01

    Recent failures in hysteroscopic female sterilization procedures have brought into question the implantation of nonresorbable metal devices into the fallopian tubes due to long-term risks such as migration, fragmentation, and tubal perforation. The goal of this study is to assess whether a porous, biodegradable implant can be deposited into the fallopian tube lumen with or without a local mild heat treatment to generate a safe and permanent fallopian tube occlusion/sterilization event. The technologies investigated included freeze-cast collagen-based scaffolds and magnetic nanoparticle (MNP) based scaffolds. In vitro assessment of iron oxide MNP-based scaffolds was performed to determine the absorption rate density (ARD); subsequent computational modeling quantified the thermal in vivo steady state temperature as a function of tubal radius for treatment planning. For collagen-based scaffolds, in vivo testing was performed to study the biocompatibility in a mouse flank model, followed by implantation into an in vivo anestrus feline uterine horn (animal model for the fallopian tube). Biological responses were studied histopathologically. Uterine horn patency was assessed via radiographic imaging. Preliminary studies suggest the MNP-impregnated scaffold and a safe, noninvasive AMF excitation field have potential to generate a sufficient focal fallopian tube thermal dose to create a fibrotic healing event and ultimately, permanent tubal occlusion.

  1. Conceptual Models and Guidelines for Clinical Assessment of Financial Capacity.

    PubMed

    Marson, Daniel

    2016-09-01

    The ability to manage financial affairs is a life skill of critical importance, and neuropsychologists are increasingly asked to assess financial capacity across a variety of settings. Sound clinical assessment of financial capacity requires knowledge and appreciation of applicable clinical conceptual models and principles. However, the literature has presented relatively little conceptual guidance for clinicians concerning financial capacity and its assessment. This article seeks to address this gap. The article presents six clinical models of financial capacity: (1) the early gerontological IADL model of Lawton, (2) the clinical skills model and (3) related cognitive psychological model developed by Marson and colleagues, (4) a financial decision-making model adapting earlier decisional capacity work of Appelbaum and Grisso, (5) a person-centered model of financial decision-making developed by Lichtenberg and colleagues, and (6) a recent model of financial capacity in the real world developed through the Institute of Medicine. Accompanying the presentation of the models is a discussion of the conceptual and practical perspectives they represent for clinical assessment. Based on the models, the article concludes by presenting a series of conceptually oriented guidelines for clinical assessment of financial capacity. In summary, sound assessment of financial capacity requires knowledge and appreciation of clinical conceptual models and principles; awareness of such models, principles, and guidelines will strengthen and advance clinical assessment of financial capacity.

  2. Economic Assessment: A Model for Assessing Ability to Pay.

    ERIC Educational Resources Information Center

    Andre, Patricia; And Others

    1978-01-01

    Accurate assessment of the client's ability to pay is the cornerstone to fee collections in any service organization. York County Counseling Services implemented a new method of fee assessment and collection based on the principles of providing a service worth paying for, accurate assessment of ability to pay, and a budget-payment system. (Author)

  3. Stepwise Analysis of Differential Item Functioning Based on Multiple-Group Partial Credit Model.

    ERIC Educational Resources Information Center

    Muraki, Eiji

    1999-01-01

    Extended an Item Response Theory (IRT) method for detection of differential item functioning to the partial credit model and applied the method to simulated data using a stepwise procedure. Then applied the stepwise DIF analysis based on the multiple-group partial credit model to writing trend data from the National Assessment of Educational…

  4. Coupling physically based and data-driven models for assessing freshwater inflow into the Small Aral Sea

    NASA Astrophysics Data System (ADS)

    Ayzel, Georgy; Izhitskiy, Alexander

    2018-06-01

    The desiccation of the Aral Sea and the related changes in hydroclimatic conditions at the regional level have been a hot topic for the past decades. The key problem of scientific research projects devoted to investigating the modern hydrological regime of the Aral Sea basin is its discontinuous nature: only a limited number of papers take the complex runoff formation system into account in its entirety. Addressing this challenge, we have developed a continuous prediction system for assessing freshwater inflow into the Small Aral Sea based on coupling a stack of hydrological and data-driven models. Results show good prediction skill and confirm the possibility of developing a valuable water assessment tool that harnesses the power of both classical physically based and modern machine learning models for territories with a complex water management system and severe water-related data scarcity. The source code and data of the proposed system are available on GitHub (https://github.com/SMASHIproject/IWRM2018).

  5. Construction risk assessment of deep foundation pit in metro station based on G-COWA method

    NASA Astrophysics Data System (ADS)

    You, Weibao; Wang, Jianbo; Zhang, Wei; Liu, Fangmeng; Yang, Diying

    2018-05-01

    In order to gain an accurate understanding of the construction safety of deep foundation pits in metro stations and to reduce the probability and loss of risk occurrence, a risk assessment method based on G-COWA is proposed. Firstly, drawing on specific engineering examples and the construction characteristics of deep foundation pits, an evaluation index system is established based on the five factors of “human, management, technology, material and environment”. Secondly, the C-OWA operator is introduced to weight the evaluation indices and weaken the negative influence of subjective expert preference. Grey cluster analysis and the fuzzy comprehensive evaluation method are combined to construct a construction risk assessment model for deep foundation pits that can effectively handle the uncertainties involved. Finally, the model is applied to the actual deep foundation pit project of Qingdao Metro North Station; its construction risk rating is determined to be “medium”, and the model is shown to be feasible and reasonable. Corresponding control measures are then put forward and a useful reference is provided.

  6. The Development of Statistics Textbook Supported with ICT and Portfolio-Based Assessment

    NASA Astrophysics Data System (ADS)

    Hendikawati, Putriaji; Yuni Arini, Florentina

    2016-02-01

    This research was development research that aimed to develop and produce a Statistics textbook model that supported with information and communication technology (ICT) and Portfolio-Based Assessment. This book was designed for students of mathematics at the college to improve students’ ability in mathematical connection and communication. There were three stages in this research i.e. define, design, and develop. The textbooks consisted of 10 chapters which each chapter contains introduction, core materials and include examples and exercises. The textbook developed phase begins with the early stages of designed the book (draft 1) which then validated by experts. Revision of draft 1 produced draft 2 which then limited test for readability test book. Furthermore, revision of draft 2 produced textbook draft 3 which simulated on a small sample to produce a valid model textbook. The data were analysed with descriptive statistics. The analysis showed that the Statistics textbook model that supported with ICT and Portfolio-Based Assessment valid and fill up the criteria of practicality.

  7. A pilot study to assess feasibility of value based pricing in Cyprus through pharmacoeconomic modelling and assessment of its operational framework: sorafenib for second line renal cell cancer

    PubMed Central

    2014-01-01

    Background The continuing increase in pharmaceutical expenditure calls for new approaches to the pricing and reimbursement of pharmaceuticals. Value-based pricing of pharmaceuticals is emerging as a useful tool and possesses theoretical attributes that could help health systems cope with rising pharmaceutical expenditure. Aim To assess the feasibility of introducing a value-based pricing scheme for pharmaceuticals in Cyprus and to explore the integrative framework. Methods A probabilistic Markov chain Monte Carlo model was created to simulate the progression of advanced renal cell cancer, comparing sorafenib to standard best supportive care. A literature review was performed and efficacy data were transferred from a published landmark trial, while official price lists and clinical guidelines from the Cyprus Ministry of Health were used for cost calculation. Based on a proposed willingness-to-pay threshold, the maximum price of sorafenib for the indication of second-line renal cell cancer was assessed. Results The value-based price of sorafenib was found to be significantly lower than its current reference price. Conclusion The feasibility of value-based pricing is documented, and pharmacoeconomic modelling can lead to robust results. Integration of value and affordability into the price are its main advantages, which have to be weighed against the lack of documentation for several theoretical parameters that influence the outcome. Smaller countries such as Cyprus may experience adversities in establishing and sustaining the essential structures for this scheme. PMID:24910539
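The disease-progression model behind such a pricing analysis can be illustrated with a deterministic cohort Markov trace (the paper itself used a probabilistic Markov chain Monte Carlo model). All state names, transition probabilities, costs, and utilities below are invented for illustration, not the Cyprus study's inputs:

```python
# Hedged sketch of a three-state cohort Markov model of the kind used in
# oncology cost-effectiveness work. States: progression-free (PF),
# progressed (P), dead (D). Monthly cycles; costs and QALYs accumulate
# as the cohort distribution evolves.

CYCLES = 36  # monthly cycles
# per-cycle transition matrix, rows: from PF, P, D (illustrative values)
P_TREAT = [[0.90, 0.07, 0.03],
           [0.00, 0.88, 0.12],
           [0.00, 0.00, 1.00]]
COST = {"PF": 4000.0, "P": 1500.0, "D": 0.0}   # per-cycle costs, invented
UTIL = {"PF": 0.80, "P": 0.55, "D": 0.0}       # per-cycle utilities, invented

def run_trace(P, cost, util, cycles=CYCLES):
    state = [1.0, 0.0, 0.0]  # whole cohort starts progression-free
    total_cost = total_qaly = 0.0
    for _ in range(cycles):
        state = [sum(state[i] * P[i][j] for i in range(3)) for j in range(3)]
        total_cost += sum(s * c for s, c in zip(state, cost.values()))
        total_qaly += sum(s * u for s, u in zip(state, util.values())) / 12
    return total_cost, total_qaly

cost, qaly = run_trace(P_TREAT, COST, UTIL)
print(f"expected cost: {cost:.0f}, expected QALYs: {qaly:.2f}")
```

Running the same trace for a comparator arm and inverting the incremental cost-per-QALY formula against a willingness-to-pay threshold is, in outline, how a value-based maximum price is derived.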

  8. Modeling of 2D diffusion processes based on microscopy data: parameter estimation and practical identifiability analysis.

    PubMed

    Hock, Sabrina; Hasenauer, Jan; Theis, Fabian J

    2013-01-01

    Diffusion is a key component of many biological processes such as chemotaxis, developmental differentiation and tissue morphogenesis. The spatial gradients caused by diffusion can now be assessed in vitro and in vivo using microscopy-based imaging techniques. The resulting time series of two-dimensional, high-resolution images, in combination with mechanistic models, enable the quantitative analysis of the underlying mechanisms. However, such a model-based analysis is still challenging due to measurement noise and sparse observations, which result in uncertainties in the model parameters. We introduce a likelihood function for image-based measurements with log-normally distributed noise. Based upon this likelihood function we formulate the maximum likelihood estimation problem, which is solved using PDE-constrained optimization methods. To assess the uncertainty and practical identifiability of the parameters we introduce profile likelihoods for diffusion processes. As a proof of concept, we model certain aspects of the guidance of dendritic cells towards lymphatic vessels, an example of haptotaxis. Using a realistic set of artificial measurement data, we estimate the five kinetic parameters of this model and compute profile likelihoods. Our novel approach for the estimation of model parameters from image data, as well as the proposed identifiability analysis approach, is widely applicable to diffusion processes. The profile-likelihood-based method provides more rigorous uncertainty bounds than local approximation methods.
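The log-normal noise model mentioned above can be made concrete with a small sketch: each observation y is modelled as log(y) ~ Normal(log(mu), sigma²), where mu is the model prediction at that pixel. The pixel values and predictions below are invented:

```python
# Hedged sketch of a negative log-likelihood for log-normally distributed
# measurement noise, of the sort built for image-based diffusion data.
from math import log, pi, sqrt

def neg_log_likelihood(observations, predictions, sigma):
    """Negative log-likelihood of log-normally distributed observations."""
    nll = 0.0
    for y, mu in zip(observations, predictions):
        resid = log(y) - log(mu)
        nll += log(y * sigma * sqrt(2 * pi)) + resid ** 2 / (2 * sigma ** 2)
    return nll

obs = [1.2, 0.9, 1.5, 1.1]   # hypothetical pixel intensities
pred = [1.0, 1.0, 1.4, 1.2]  # model predictions at the same pixels
for s in (0.1, 0.2, 0.4):
    print(f"sigma={s:.1f}: NLL={neg_log_likelihood(obs, pred, s):.3f}")
```

A profile likelihood, in this spirit, scans one parameter over a grid (as the loop does for sigma) while re-optimizing all remaining parameters at each grid point, and reads confidence bounds off the resulting NLL curve.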

  9. Predicting the effect of cytochrome P450 inhibitors on substrate drugs: analysis of physiologically based pharmacokinetic modeling submissions to the US Food and Drug Administration.

    PubMed

    Wagner, Christian; Pan, Yuzhuo; Hsu, Vicky; Grillo, Joseph A; Zhang, Lei; Reynolds, Kellie S; Sinha, Vikram; Zhao, Ping

    2015-01-01

    The US Food and Drug Administration (FDA) has seen a recent increase in the application of physiologically based pharmacokinetic (PBPK) modeling towards assessing the potential of drug-drug interactions (DDI) in clinically relevant scenarios. To continue our assessment of such approaches, we evaluated the predictive performance of PBPK modeling in predicting cytochrome P450 (CYP)-mediated DDI. This evaluation was based on 15 substrate PBPK models submitted by nine sponsors between 2009 and 2013. For these 15 models, a total of 26 DDI studies (cases) with various CYP inhibitors were available. Sponsors developed the PBPK models, reportedly without considering clinical DDI data. Inhibitor models were either developed by sponsors or provided by PBPK software developers and applied with minimal or no modification. The metric for assessing the predictive performance of the sponsors' PBPK approach was the R_predicted/observed value (R_predicted/observed = [predicted mean exposure ratio]/[observed mean exposure ratio], with the exposure ratio defined as [C_max (maximum plasma concentration) or AUC (area under the plasma concentration-time curve) in the presence of CYP inhibition]/[C_max or AUC in the absence of CYP inhibition]). In 81 % (21/26) and 77 % (20/26) of cases, respectively, the R_predicted/observed values for AUC and C_max ratios were within a pre-defined threshold of 1.25-fold of the observed data. For all cases, the R_predicted/observed values for AUC and C_max were within a 2-fold range. These results suggest that, based on the submissions to the FDA to date, there is a high degree of concordance between PBPK-predicted and observed effects of CYP inhibition, especially CYP3A-based, on the exposure of drug substrates.
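The performance metric defined in the abstract above is a simple ratio of ratios, sketched below with invented AUC values (the thresholds, 1.25-fold and 2-fold, are from the abstract):

```python
# Hedged sketch of the R_predicted/observed metric: the PBPK-predicted
# exposure ratio (AUC or Cmax with vs. without the CYP inhibitor) divided
# by the observed exposure ratio. All exposure values are invented.

def exposure_ratio(with_inhibitor, without_inhibitor):
    return with_inhibitor / without_inhibitor

def r_pred_obs(pred_ratio, obs_ratio):
    return pred_ratio / obs_ratio

pred = exposure_ratio(with_inhibitor=520.0, without_inhibitor=100.0)  # predicted AUCs
obs = exposure_ratio(with_inhibitor=450.0, without_inhibitor=100.0)   # observed AUCs
r = r_pred_obs(pred, obs)
within_1_25_fold = 1 / 1.25 <= r <= 1.25  # the pre-defined acceptance window
print(f"R_predicted/observed = {r:.2f}; within 1.25-fold: {within_1_25_fold}")
```

Counting how many of the 26 cases fall inside the 1.25-fold and 2-fold windows gives the 81 %, 77 %, and 100 % concordance figures reported above.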

  10. A Multidirectional Model for Assessing Learning Disabled Students' Intelligence: An Information-Processing Framework.

    ERIC Educational Resources Information Center

    Swanson, H. Lee

    1982-01-01

    An information-processing approach to the assessment of learning disabled students' intellectual performance is presented. The model is based on the assumption that intelligent behavior comprises a variety of problem-solving strategies. An account of child problem solving is explained and illustrated with a "thinking aloud" protocol.…

  11. Can You Build It? Using Manipulatives to Assess Student Understanding of Food-Web Concepts

    ERIC Educational Resources Information Center

    Grumbine, Richard

    2012-01-01

    This article outlines an exercise that assesses student knowledge of food-web and energy-flow concepts. Students work in teams and use manipulatives to build food-web models based on criteria assigned by the instructor. The models are then peer reviewed according to guidelines supplied by the instructor.

  12. GPS-based Microenvironment Tracker (MicroTrac) Model to Estimate Time-Location of Individuals for Air Pollution Exposure Assessments: Model Evaluation in Central North Carolina

    EPA Science Inventory

    A critical aspect of air pollution exposure assessment is the estimation of the time spent by individuals in various microenvironments (ME). Accounting for the time spent in different ME with different pollutant concentrations can reduce exposure misclassifications, while failure...

  13. Constrained range expansion and climate change assessments

    Treesearch

    Yohay Carmel; Curtis H. Flather

    2006-01-01

    Modeling the future distribution of keystone species has proved to be an important approach to assessing the potential ecological consequences of climate change (Loehle and LeBlanc 1996; Hansen et al. 2001). Predictions of range shifts are typically based on empirical models derived from simple correlative relationships between climatic characteristics of occupied and...

  14. East Tennessee State University's "Make a Difference" Project: Using a Team-Based Consultative Model To Conduct Functional Behavioral Assessments.

    ERIC Educational Resources Information Center

    Vaughn, Kelley; Hales, Cindy; Bush, Marta; Fox, James

    1998-01-01

    Describes implementation of functional behavioral assessment (FBA) through collaboration between a university (East Tennessee State University) and the local school system. Discusses related issues such as factors in team training, team size, FBA adaptations, and replicability of the FBA team model. (Author/DB)

  15. Assessing the New Competencies for Resident Education: A Model from an Emergency Medicine Program.

    ERIC Educational Resources Information Center

    Reisdorff, Earl J.; Hayes, Oliver W.; Carlson, Dale J.; Walker, Gregory L.

    2001-01-01

    Based on the experience of Michigan State University's emergency medicine residency program, proposes a practical method for modifying an existing student evaluation format. The model provides a template other programs could use in assessing residents' acquisition of the knowledge, skills, and attitudes reflected in the six general competencies…

  16. A landscape based, systems dynamic model for assessing impacts of urban development on water quality for sustainable seagrass growth in Tampa Bay, Florida

    EPA Science Inventory

    We present an integrated assessment model to predict potential unintended consequences of urban development on the sustainability of seagrasses and preservation of ecosystem services, such as catchable fish, in Tampa Bay. Ecosystem services are those ecological functions and pro...

  17. GPCC - A weather generator-based statistical downscaling tool for site-specific assessment of climate change impacts

    USDA-ARS?s Scientific Manuscript database

    Resolution of climate model outputs are too coarse to be used as direct inputs to impact models for assessing climate change impacts on agricultural production, water resources, and eco-system services at local or site-specific scales. Statistical downscaling approaches are usually used to bridge th...

  18. Modeling current climate conditions for forest pest risk assessment

    Treesearch

    Frank H. Koch; John W. Coulston

    2010-01-01

    Current information on broad-scale climatic conditions is essential for assessing potential distribution of forest pests. At present, sophisticated spatial interpolation approaches such as the Parameter-elevation Regressions on Independent Slopes Model (PRISM) are used to create high-resolution climatic data sets. Unfortunately, these data sets are based on 30-year...

  19. Fish Assemblage Indicators for the National Rivers and Streams Assessment: Performance of model-based vs. traditionally constructed multimetric indices

    EPA Science Inventory

    The development of multimetric indices (MMIs) for use in assessing the ecological condition of rivers and streams has advanced in recent years with the use of various types of modeling approaches to factor out the influence of natural variability and improve the performance. Ass...

  20. New ShakeMaps for Georgia Resulting from Collaboration with EMME

    NASA Astrophysics Data System (ADS)

    Kvavadze, N.; Tsereteli, N. S.; Varazanashvili, O.; Alania, V.

    2015-12-01

    Correct assessment of probabilistic seismic hazard and risk maps is the first step in advance planning and action to reduce seismic risk. Seismic hazard maps for Georgia were calculated based on a modern approach that was developed in the frame of the EMME (Earthquake Model of the Middle East region) project. EMME was one of GEM's successful endeavors at the regional level. With EMME and GEM assistance, regional models were analyzed to identify the information and additional work needed for the preparation of national hazard models. A probabilistic seismic hazard (PSH) map provides the critical basis for improved building codes and construction. The most serious deficiency in PSH assessment for the territory of Georgia is the lack of high-quality ground motion data. Due to this, an initial hybrid empirical ground motion model was developed for PGA and SA at selected periods. These coefficients for ground motion models have been applied in probabilistic seismic hazard assessment. The obtained seismic hazard maps show evidence that there were gaps in seismic hazard assessment and that the present normative seismic hazard map needed a careful recalculation.
