Sample records for model development validation

  1. Validity of Basic Electronic 1 Module Integrated Character Value Based on Conceptual Change Teaching Model to Increase Students Physics Competency in STKIP PGRI West Sumatera

    NASA Astrophysics Data System (ADS)

    Hidayati, A.; Rahmi, A.; Yohandri; Ratnawulan

    2018-04-01

    The importance of teaching materials that match the characteristics of students was the main reason for developing a basic electronics I module integrating character values based on the conceptual change teaching model. The module development in this research followed Plomp's development procedure, which comprises preliminary research, a prototyping phase and an assessment phase. In the first year of the research, the module was validated. Content validity was judged by the module's conformity with development theory and with the demands of the learning model's characteristics. Construct validity was judged by the linkage and consistency of each module component with the characteristics of the character-value-integrated learning model, as determined through validator assessment. The average validation score given by the validators falls into the very valid category. Based on the validator assessment, the module was then revised.

  2. Developing Enhanced Blood–Brain Barrier Permeability Models: Integrating External Bio-Assay Data in QSAR Modeling

    PubMed Central

    Wang, Wenyi; Kim, Marlene T.; Sedykh, Alexander

    2015-01-01

    Purpose Experimental Blood–Brain Barrier (BBB) permeability models for drug molecules are expensive and time-consuming. As alternative methods, several traditional Quantitative Structure-Activity Relationship (QSAR) models have been developed previously. In this study, we aimed to improve the predictivity of traditional QSAR BBB permeability models by employing relevant public bio-assay data in the modeling process. Methods We compiled a BBB permeability database consisting of 439 unique compounds from various resources. The database was split into a modeling set of 341 compounds and a validation set of 98 compounds. A consensus QSAR modeling workflow was employed on the modeling set to develop various QSAR models. A five-fold cross-validation approach was used to validate the developed models, and the resulting models were used to predict the external validation set compounds. Furthermore, we used previously published membrane transporter models to generate relevant transporter profiles for target compounds. The transporter profiles were used as additional biological descriptors to develop hybrid QSAR BBB models. Results The consensus QSAR models have R2=0.638 for five-fold cross-validation and R2=0.504 for external validation. The consensus model developed by pooling chemical and transporter descriptors showed better predictivity (R2=0.646 for five-fold cross-validation and R2=0.526 for external validation). Moreover, several external bio-assays that correlate with BBB permeability were identified using our automatic profiling tool. Conclusions The BBB permeability models developed in this study can be useful for early evaluation of new compounds (e.g., new drug candidates). The combination of chemical and biological descriptors shows a promising direction to improve the current traditional QSAR models. PMID:25862462
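
    A minimal sketch of the modeling-set/validation-set split and five-fold cross-validation described above, using scikit-learn; a random forest stands in for the paper's consensus QSAR workflow, and the descriptor matrix is synthetic:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import KFold, cross_val_score, train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(439, 50))                       # chemical descriptors (synthetic)
    y = X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=439)  # permeability (synthetic)

    # Mirrors the paper's 341/98 modeling/external split
    X_model, X_ext, y_model, y_ext = train_test_split(X, y, test_size=98 / 439,
                                                      random_state=0)

    qsar = RandomForestRegressor(n_estimators=500, random_state=0)

    # Five-fold cross-validation on the modeling set
    cv_r2 = cross_val_score(qsar, X_model, y_model,
                            cv=KFold(n_splits=5, shuffle=True, random_state=0),
                            scoring="r2").mean()

    # External validation on the held-out compounds
    ext_r2 = qsar.fit(X_model, y_model).score(X_ext, y_ext)
    print(f"5-fold CV R2: {cv_r2:.3f}  external R2: {ext_r2:.3f}")
    ```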

  3. Global Precipitation Measurement (GPM) Ground Validation (GV) Science Implementation Plan

    NASA Technical Reports Server (NTRS)

    Petersen, Walter A.; Hou, Arthur Y.

    2008-01-01

    For pre-launch algorithm development and post-launch product evaluation, Global Precipitation Measurement (GPM) Ground Validation (GV) goes beyond direct comparisons of surface rain rates between ground and satellite measurements to provide the means for improving retrieval algorithms and model applications. Three approaches to GPM GV include direct statistical validation (at the surface), precipitation physics validation (in a vertical column), and integrated science validation (4-dimensional). These three approaches support five themes: core satellite error characterization; constellation satellite validation; development of physical models of snow, cloud water, and mixed phase; development of cloud-resolving models (CRM) and land-surface models to bridge observations and algorithms; and development of coupled CRM-land surface modeling for basin-scale water budget studies and natural hazard prediction. This presentation describes the implementation of these approaches.

  4. Development of a Conservative Model Validation Approach for Reliable Analysis

    DTIC Science & Technology

    2015-01-01

    CIE 2015 August 2-5, 2015, Boston, Massachusetts, USA [DRAFT] DETC2015-46982 DEVELOPMENT OF A CONSERVATIVE MODEL VALIDATION APPROACH FOR RELIABLE...obtain a conservative simulation model for reliable design even with limited experimental data. Very little research has taken into account the...3, the proposed conservative model validation is briefly compared to the conventional model validation approach. Section 4 describes how to account

  5. A new framework to enhance the interpretation of external validation studies of clinical prediction models.

    PubMed

    Debray, Thomas P A; Vergouwe, Yvonne; Koffijberg, Hendrik; Nieboer, Daan; Steyerberg, Ewout W; Moons, Karel G M

    2015-03-01

    It is widely acknowledged that the performance of diagnostic and prognostic prediction models should be assessed in external validation studies with independent data from "different but related" samples compared with the development sample. We developed a framework of methodological steps and statistical methods for analyzing and enhancing the interpretation of results from external validation studies of prediction models. We propose to quantify the degree of relatedness between development and validation samples on a scale ranging from reproducibility to transportability by evaluating their corresponding case-mix differences. We subsequently assess the models' performance in the validation sample and interpret the performance in view of the case-mix differences. Finally, we may adjust the model to the validation setting. We illustrate this three-step framework with a prediction model for diagnosing deep venous thrombosis using three validation samples with varying case mix. While one external validation sample merely assessed the model's reproducibility, two other samples rather assessed model transportability. The performance in all validation samples was adequate, and the model did not require extensive updating to correct for miscalibration or poor fit to the validation settings. The proposed framework enhances the interpretation of findings at external validation of prediction models. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
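
    One way to quantify relatedness between development and validation samples, in the spirit of the framework above, is a "membership model" that tries to distinguish the two samples from their predictor values; the predictor matrices below are synthetic stand-ins. A sketch:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    X_dev = rng.normal(size=(300, 5))            # development-sample predictors (synthetic)
    X_val = rng.normal(loc=0.4, size=(200, 5))   # validation sample with shifted case-mix

    X = np.vstack([X_dev, X_val])
    member = np.r_[np.zeros(len(X_dev)), np.ones(len(X_val))]

    # Membership model: how well can the two samples be told apart?
    mm = LogisticRegression(max_iter=1000).fit(X, member)
    c_member = roc_auc_score(member, mm.predict_proba(X)[:, 1])
    # c near 0.5: closely related samples (a reproducibility assessment);
    # c near 1: different case-mix (a transportability assessment).
    print(f"membership c-statistic: {c_member:.2f}")
    ```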

  6. Solar Sail Models and Test Measurements Correspondence for Validation Requirements Definition

    NASA Technical Reports Server (NTRS)

    Ewing, Anthony; Adams, Charles

    2004-01-01

    Solar sails are being developed as a mission-enabling technology in support of future NASA science missions. Current efforts have advanced solar sail technology sufficiently to justify a flight validation program. A primary objective of this activity is to test and validate solar sail models that are currently under development so that they may be used with confidence in future science mission development (e.g., scalable to larger sails). Both system and model validation requirements must be defined early in the program to guide design cycles and to ensure that relevant and sufficient test data will be obtained to conduct model validation to the level required. A process of model identification, model input/output documentation, model sensitivity analyses, and test measurement correspondence is required so that decisions can be made to satisfy validation requirements within program constraints.

  7. A diagnostic model for the detection of sensitization to wheat allergens was developed and validated in bakery workers.

    PubMed

    Suarthana, Eva; Vergouwe, Yvonne; Moons, Karel G; de Monchy, Jan; Grobbee, Diederick; Heederik, Dick; Meijer, Evert

    2010-09-01

    To develop and validate a prediction model to detect sensitization to wheat allergens in bakery workers. The prediction model was developed in 867 Dutch bakery workers (development set, prevalence of sensitization 13%) and included questionnaire items (candidate predictors). First, principal component analysis was used to reduce the number of candidate predictors. Then, multivariable logistic regression analysis was used to develop the model. Internal validity and the extent of optimism were assessed with bootstrapping. External validation was studied in 390 independent Dutch bakery workers (validation set, prevalence of sensitization 20%). The prediction model contained the predictors nasoconjunctival symptoms, asthma symptoms, shortness of breath and wheeze, work-related upper and lower respiratory symptoms, and traditional bakery. The model showed good discrimination, with an area under the receiver operating characteristic (ROC) curve of 0.76 (0.75 after internal validation). Application of the model in the validation set gave reasonable discrimination (ROC area = 0.69) and good calibration after a small adjustment of the model intercept. A simple model with questionnaire items only can be used to stratify bakers according to their risk of sensitization to wheat allergens. Its use may increase the cost-effectiveness of (subsequent) medical surveillance.
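
    A sketch of the bootstrap optimism correction used for internal validation above (Harrell's procedure); the predictor matrix and outcome are synthetic, with a prevalence near the 13% reported:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(400, 6))                          # questionnaire predictors
    y = (rng.random(400) < 1 / (1 + np.exp(-(X[:, 0] - 2)))).astype(int)

    def optimism_corrected_auc(X, y, n_boot=200, seed=1):
        """Harrell's bootstrap optimism correction for the apparent AUC."""
        rng = np.random.default_rng(seed)
        apparent = roc_auc_score(
            y, LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1])
        optimism = []
        for _ in range(n_boot):
            idx = rng.integers(0, len(y), len(y))          # bootstrap resample
            m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
            auc_boot = roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])
            auc_orig = roc_auc_score(y, m.predict_proba(X)[:, 1])
            optimism.append(auc_boot - auc_orig)           # average over-estimation
        return apparent - np.mean(optimism)

    print(f"optimism-corrected AUC: {optimism_corrected_auc(X, y):.3f}")
    ```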

  8. Prognostic models for complete recovery in ischemic stroke: a systematic review and meta-analysis.

    PubMed

    Jampathong, Nampet; Laopaiboon, Malinee; Rattanakanokchai, Siwanon; Pattanittum, Porjai

    2018-03-09

    Prognostic models have been increasingly developed to predict complete recovery in ischemic stroke. However, questions arise about the performance characteristics of these models. The aim of this study was to systematically review and synthesize the performance of existing prognostic models for complete recovery in ischemic stroke. We searched journal publications indexed in PUBMED, SCOPUS, CENTRAL, ISI Web of Science and OVID MEDLINE from inception until 4 December 2017, for studies designed to develop and/or validate prognostic models for predicting complete recovery in ischemic stroke patients. Two reviewers independently examined titles and abstracts, assessed whether each study met the pre-defined inclusion criteria, and independently extracted information about model development and performance. We evaluated validation of the models by the median area under the receiver operating characteristic curve (AUC, or c-statistic) and by calibration performance. We used a random-effects meta-analysis to pool AUC values. We included 10 studies with 23 models developed from elderly patients with moderately severe ischemic stroke, mainly in three high-income countries. Sample sizes for each study ranged from 75 to 4441. Logistic regression was the only analytical strategy used to develop the models. The number of predictors varied from one to 11. Internal validation was performed in 12 models, with a median AUC of 0.80 (95% CI 0.73 to 0.84). One model reported good calibration. Nine models reported external validation, with a median AUC of 0.80 (95% CI 0.76 to 0.82). Four models showed good discrimination and calibration on external validation. The pooled AUC of the two external validations of the same developed model was 0.78 (95% CI 0.71 to 0.85). The performance of the 23 models found in the systematic review varied from fair to good in terms of internal and external validation. Further models should be developed, with internal and external validation, in low- and middle-income countries.
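
    Random-effects pooling of AUC estimates can be sketched as follows; the DerSimonian-Laird estimator is one common choice (the review does not specify its estimator), and the per-study AUCs and standard errors are illustrative numbers only:

    ```python
    import numpy as np

    def pool_auc_dl(aucs, ses):
        """DerSimonian-Laird random-effects pooling of per-study AUCs."""
        aucs, ses = np.asarray(aucs, float), np.asarray(ses, float)
        w = 1.0 / ses**2                                   # fixed-effect weights
        auc_fe = (w * aucs).sum() / w.sum()
        q = (w * (aucs - auc_fe) ** 2).sum()               # Cochran's Q
        c = w.sum() - (w**2).sum() / w.sum()
        tau2 = max(0.0, (q - (len(aucs) - 1)) / c)         # between-study variance
        w_re = 1.0 / (ses**2 + tau2)                       # random-effects weights
        auc_re = (w_re * aucs).sum() / w_re.sum()
        se = np.sqrt(1.0 / w_re.sum())
        return auc_re, (auc_re - 1.96 * se, auc_re + 1.96 * se)

    # Illustrative numbers only
    print(pool_auc_dl(aucs=[0.74, 0.82, 0.79], ses=[0.04, 0.03, 0.05]))
    ```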

  9. Early Prediction of Intensive Care Unit-Acquired Weakness: A Multicenter External Validation Study.

    PubMed

    Witteveen, Esther; Wieske, Luuk; Sommers, Juultje; Spijkstra, Jan-Jaap; de Waard, Monique C; Endeman, Henrik; Rijkenberg, Saskia; de Ruijter, Wouter; Sleeswijk, Mengalvio; Verhamme, Camiel; Schultz, Marcus J; van Schaik, Ivo N; Horn, Janneke

    2018-01-01

    An early diagnosis of intensive care unit-acquired weakness (ICU-AW) is often not possible due to impaired consciousness. To avoid a diagnostic delay, we previously developed a prediction model, based on single-center data from 212 patients (development cohort), to predict ICU-AW at 2 days after ICU admission. The objective of this study was to investigate the external validity of the original prediction model in a new, multicenter cohort and, if necessary, to update the model. Newly admitted ICU patients who were mechanically ventilated at 48 hours after ICU admission were included. Predictors were prospectively recorded, and the outcome ICU-AW was defined by an average Medical Research Council score <4. In the validation cohort, consisting of 349 patients, we analyzed performance of the original prediction model by assessing calibration and discrimination. Additionally, we updated the model in this validation cohort. Finally, we evaluated a new prediction model based on all patients of the development and validation cohorts. Of 349 analyzed patients in the validation cohort, 190 (54%) developed ICU-AW. Both calibration and discrimination of the original model were poor in the validation cohort. The area under the receiver operating characteristic curve (AUC-ROC) was 0.60 (95% confidence interval [CI]: 0.54-0.66). Model updating methods improved calibration but not discrimination. The new prediction model, based on all patients of the development and validation cohorts (536 patients in total), had fair discrimination, AUC-ROC: 0.70 (95% CI: 0.66-0.75). The previously developed prediction model for ICU-AW showed poor performance in a new independent multicenter validation cohort. Model updating methods improved calibration but not discrimination. The newly derived prediction model showed fair discrimination. This indicates that early prediction of ICU-AW is still challenging and needs further attention.
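
    Model-updating methods of the kind mentioned here typically start with recalibration. A minimal sketch with statsmodels, using a synthetic linear predictor and outcome in place of the study's validation data:

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    lp = rng.normal(size=349)                      # original model's linear predictor
    y = (rng.random(349) < 1 / (1 + np.exp(-(0.5 * lp - 0.2)))).astype(int)

    # Recalibration-in-the-large: re-estimate the intercept, slope fixed at 1
    upd_int = sm.GLM(y, np.ones((len(y), 1)),
                     family=sm.families.Binomial(), offset=lp).fit()

    # Logistic recalibration: re-estimate intercept and calibration slope
    upd_slope = sm.GLM(y, sm.add_constant(lp), family=sm.families.Binomial()).fit()
    print(upd_slope.params)   # a slope well below 1 signals overfitting/miscalibration
    ```

    Note that both updates adjust calibration only; neither can improve discrimination, consistent with the study's findings.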

  10. Analysis of model development strategies: predicting ventral hernia recurrence.

    PubMed

    Holihan, Julie L; Li, Linda T; Askenasy, Erik P; Greenberg, Jacob A; Keith, Jerrod N; Martindale, Robert G; Roth, J Scott; Liang, Mike K

    2016-11-01

    There have been many attempts to identify variables associated with ventral hernia recurrence; however, it is unclear which statistical modeling approach results in models with greatest internal and external validity. We aim to assess the predictive accuracy of models developed using five common variable selection strategies to determine variables associated with hernia recurrence. Two multicenter ventral hernia databases were used. Database 1 was randomly split into "development" and "internal validation" cohorts. Database 2 was designated "external validation". The dependent variable for model development was hernia recurrence. Five variable selection strategies were used: (1) "clinical"-variables considered clinically relevant, (2) "selective stepwise"-all variables with a P value <0.20 were assessed in a step-backward model, (3) "liberal stepwise"-all variables were included and step-backward regression was performed, (4) "restrictive internal resampling," and (5) "liberal internal resampling." Variables were included with P < 0.05 for the Restrictive model and P < 0.10 for the Liberal model. A time-to-event analysis using Cox regression was performed using these strategies. The predictive accuracy of the developed models was tested on the internal and external validation cohorts using Harrell's C-statistic where C > 0.70 was considered "reasonable". The recurrence rate was 32.9% (n = 173/526; median/range follow-up, 20/1-58 mo) for the development cohort, 36.0% (n = 95/264, median/range follow-up 20/1-61 mo) for the internal validation cohort, and 12.7% (n = 155/1224, median/range follow-up 9/1-50 mo) for the external validation cohort. Internal validation demonstrated reasonable predictive accuracy (C-statistics = 0.772, 0.760, 0.767, 0.757, 0.763), while on external validation, predictive accuracy dipped precipitously (C-statistic = 0.561, 0.557, 0.562, 0.553, 0.560). Predictive accuracy was equally adequate on internal validation among models; however, on external validation, all five models failed to demonstrate utility. Future studies should report multiple variable selection techniques and demonstrate predictive accuracy on external data sets for model validation. Copyright © 2016 Elsevier Inc. All rights reserved.
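
    A sketch of the "liberal stepwise" strategy (3) for a time-to-event model, using the lifelines library; the data frame and column names in the commented usage are hypothetical, and the retention threshold mirrors the P < 0.10 rule described above:

    ```python
    from lifelines import CoxPHFitter

    def backward_eliminate(df, duration_col, event_col, candidates, p_keep=0.10):
        """'Liberal stepwise': fit all candidates, then repeatedly drop the least
        significant predictor until every remaining p-value is below p_keep."""
        kept = list(candidates)
        while kept:
            cph = CoxPHFitter().fit(df[kept + [duration_col, event_col]],
                                    duration_col=duration_col, event_col=event_col)
            if cph.summary["p"].max() < p_keep:
                return cph, kept
            kept.remove(cph.summary["p"].idxmax())
        return None, []

    # Hypothetical usage on a development cohort:
    # model, kept = backward_eliminate(dev_df, "months", "recurrence", candidate_cols)
    # print(model.concordance_index_)   # Harrell's C on the development data
    ```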

  11. Development of Learning Models Based on Problem Solving and Meaningful Learning Standards by Expert Validity for Animal Development Course

    NASA Astrophysics Data System (ADS)

    Lufri, L.; Fitri, R.; Yogica, R.

    2018-04-01

    The purpose of this study is to produce a learning model based on problem solving and meaningful learning standards, validated by expert assessment, for the course of Animal Development. This is development research producing a learning model that consists of two sub-products: the syntax of the learning model and student worksheets. All of these products were standardized through expert validation. The research data are the validity levels of the sub-products, obtained using questionnaires filled in by validators from various fields of expertise (subject matter, learning strategy, and language). Data were analysed using descriptive statistics. The results show that the problem solving and meaningful learning model has been produced. The sub-products declared appropriate by the experts include the syntax of the learning model and the student worksheet.

  12. Assessing Discriminative Performance at External Validation of Clinical Prediction Models

    PubMed Central

    Nieboer, Daan; van der Ploeg, Tjeerd; Steyerberg, Ewout W.

    2016-01-01

    Introduction External validation studies are essential to study the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures for judging changes in the c-statistic from the development to the external validation setting. Methods We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated it in a validation set. We concentrated on two scenarios: 1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set compared to the development set, and 2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development set. Furthermore, we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. Results The permutation test indicated that the validation and development sets were homogeneous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. Conclusion The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation population. To correctly interpret the c-statistic found at external validation it is crucial to disentangle case-mix differences from incorrect regression coefficients. PMID:26881753

  13. Assessing Discriminative Performance at External Validation of Clinical Prediction Models.

    PubMed

    Nieboer, Daan; van der Ploeg, Tjeerd; Steyerberg, Ewout W

    2016-01-01

    External validation studies are essential to study the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures for judging changes in the c-statistic from the development to the external validation setting. We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated it in a validation set. We concentrated on two scenarios: 1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set compared to the development set, and 2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development set. Furthermore, we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. The permutation test indicated that the validation and development sets were homogeneous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation population. To correctly interpret the c-statistic found at external validation it is crucial to disentangle case-mix differences from incorrect regression coefficients.
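
    Two quantities discussed in this record (and its PubMed Central duplicate above) are easy to compute directly: the c-statistic at validation, and the spread of the linear predictor, whose standard deviation serves as a simple case-mix benchmark. A sketch with synthetic linear predictors:

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)

    # Synthetic linear predictors: a less heterogeneous case-mix in validation
    lp_dev = rng.normal(scale=1.5, size=1000)
    lp_val = rng.normal(scale=0.8, size=1000)
    y_val = (rng.random(1000) < 1 / (1 + np.exp(-lp_val))).astype(int)

    c_val = roc_auc_score(y_val, lp_val)        # c-statistic at external validation
    print(f"c at validation: {c_val:.2f}")
    print(f"SD of linear predictor, dev vs val: {lp_dev.std():.2f} vs {lp_val.std():.2f}")
    # A smaller SD in the validation set flags a narrower case-mix, which can lower
    # the c-statistic even when the regression coefficients are correct.
    ```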

  14. Applicability of Monte Carlo cross validation technique for model development and validation using generalised least squares regression

    NASA Astrophysics Data System (ADS)

    Haddad, Khaled; Rahman, Ataur; A Zaman, Mohammad; Shrestha, Surendra

    2013-03-01

    In regional hydrologic regression analysis, model selection and validation are regarded as important steps. Here, model selection is usually based on some measure of goodness-of-fit between the model prediction and observed data. In Regional Flood Frequency Analysis (RFFA), leave-one-out (LOO) validation or a fixed-percentage leave-out validation (e.g., 10%) is commonly adopted to assess the predictive ability of regression-based prediction equations. This paper develops a Monte Carlo Cross Validation (MCCV) technique (which has been widely adopted in chemometrics and econometrics) in RFFA using Generalised Least Squares Regression (GLSR) and compares it with the most commonly adopted LOO validation approach. The study uses simulated and regional flood data from the state of New South Wales in Australia. It is found that when developing hydrologic regression models, application of the MCCV is likely to result in a more parsimonious model than the LOO. It has also been found that the MCCV can provide a more realistic estimate of a model's predictive ability when compared with the LOO.
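
    MCCV draws many random train/test splits instead of leaving each observation out once; scikit-learn's ShuffleSplit implements exactly this resampling. A sketch on synthetic data, with ordinary least squares standing in for the paper's GLS regression:

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import LeaveOneOut, ShuffleSplit, cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 4))                       # catchment descriptors (synthetic)
    y = X @ np.array([0.8, 0.3, 0.0, 0.0]) + rng.normal(scale=0.5, size=60)

    ols = LinearRegression()                           # OLS stands in for GLSR here

    mccv = ShuffleSplit(n_splits=200, test_size=0.2, random_state=1)
    mccv_mse = -cross_val_score(ols, X, y, cv=mccv,
                                scoring="neg_mean_squared_error").mean()
    loo_mse = -cross_val_score(ols, X, y, cv=LeaveOneOut(),
                               scoring="neg_mean_squared_error").mean()
    print(f"MCCV MSE: {mccv_mse:.3f}   LOO MSE: {loo_mse:.3f}")
    ```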

  15. Innovative learning model for improving students’ argumentation skill and concept understanding on science

    NASA Astrophysics Data System (ADS)

    Nafsiati Astuti, Rini

    2018-04-01

    Argumentation skill is the ability to compose and maintain arguments consisting of claims, supporting evidence, and strengthened reasons. Argumentation is an important skill students need to face the challenges of globalization in the 21st century. It is not an ability that develops by itself along with human physical development; it must be cultivated by giving stimuli that require a person to argue. Therefore, teachers should develop students' argumentation skills in science learning in the classroom. The purpose of this study is to obtain an innovative learning model that is valid in terms of content and construct for improving the argumentation skills and concept understanding of junior high school students. Content validity and construct validity were assessed through a Focus Group Discussion (FGD), using the content and construct validation sheets, the model book, a learning video, and a set of learning aids for one meeting. Assessment results from three experts showed that the developed learning model was valid. The validity shows that the developed learning model meets the content requirements (student needs, state of the art, and a strong theoretical and empirical foundation) and construct validity (coherence between the syntax stages and the components of the learning model), so that it can be applied in classroom activities.

  16. Independent external validation of predictive models for urinary dysfunction following external beam radiotherapy of the prostate: Issues in model development and reporting.

    PubMed

    Yahya, Noorazrul; Ebert, Martin A; Bulsara, Max; Kennedy, Angel; Joseph, David J; Denham, James W

    2016-08-01

    Most predictive models are not sufficiently validated for prospective use. We performed independent external validation of published predictive models for urinary dysfunction following radiotherapy of the prostate. Multivariable models developed to predict atomised and generalised urinary symptoms, both acute and late, were considered for validation using a dataset representing 754 participants from the TROG 03.04-RADAR trial. Endpoints and features were harmonised to match the predictive models. Overall performance, calibration and discrimination were assessed. 14 models from four publications were validated. The discrimination of the predictive models in an independent external validation cohort, measured using the area under the receiver operating characteristic (ROC) curve, ranged from 0.473 to 0.695, generally lower than in internal validation. Four models had a ROC area > 0.6. Shrinkage was required for all predictive models' coefficients, ranging from -0.309 (predicted probability inverse to the observed proportion) to 0.823. Predictive models that included baseline symptoms as a feature produced the highest discrimination. Two models produced a predicted probability of 0 or 1 for all patients. Predictive models vary in performance and transferability, illustrating the need for improvements in model development and reporting. Several models showed reasonable potential, but efforts should be increased to improve performance. Baseline symptoms should always be considered as potential features for predictive models. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  17. Risk prediction models of breast cancer: a systematic review of model performances.

    PubMed

    Anothaisintawee, Thunyarat; Teerawattananon, Yot; Wiratkapun, Chollathip; Kasamesup, Vijj; Thakkinstian, Ammarin

    2012-05-01

    A growing number of risk prediction models have been developed to estimate the risk of breast cancer in individual women. However, the performance of those models is questionable. We therefore conducted a study to systematically review previous risk prediction models. The results of this review help to identify the most reliable model and indicate the strengths and weaknesses of each model to guide future model development. We searched MEDLINE (PubMed) from 1949 and EMBASE (Ovid) from 1974 until October 2010. Observational studies that constructed models using regression methods were selected. Information about model development and performance was extracted. Twenty-five out of 453 studies were eligible. Of these, 18 developed prediction models and 7 validated existing prediction models. Up to 13 variables were included in the models, and sample sizes for each study ranged from 550 to 2,404,636. Internal validation was performed in four models, while five models had external validation. The Gail and the Rosner and Colditz models were the influential models that were subsequently modified by other scholars. Calibration performance of most models was fair to good (expected/observed ratio: 0.87-1.12), but discriminatory accuracy was poor to fair both in internal validation (concordance statistic: 0.53-0.66) and in external validation (concordance statistic: 0.56-0.63). Most models yielded relatively poor discrimination in both internal and external validation. This poor discriminatory accuracy of existing models might be because of a lack of knowledge about risk factors, heterogeneous subtypes of breast cancer, and different distributions of risk factors across populations. In addition, the concordance statistic itself is insensitive for measuring improvements in discrimination. Therefore, a newer method such as the net reclassification index should be considered to evaluate the improvement in performance of a newly developed model.
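
    The net reclassification index suggested in the closing sentence can be computed as follows for a single risk threshold; the 10% default threshold and the function name are illustrative only:

    ```python
    import numpy as np

    def two_category_nri(y, p_old, p_new, threshold=0.1):
        """Net reclassification improvement of a new model over an old one
        at a single risk threshold (illustrative two-category form)."""
        y, p_old, p_new = map(np.asarray, (y, p_old, p_new))
        up = (p_new >= threshold) & (p_old < threshold)      # moved to high risk
        down = (p_new < threshold) & (p_old >= threshold)    # moved to low risk
        events, nonevents = y == 1, y == 0
        nri_events = up[events].mean() - down[events].mean()
        nri_nonevents = down[nonevents].mean() - up[nonevents].mean()
        return nri_events + nri_nonevents
    ```

    A positive value means the new model moves events up and non-events down in risk more often than the reverse, which the concordance statistic alone may not detect.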

  18. NRL Hyperspectral Imagery Trafficability Tool (HITT): Software and Spectral-Geotechnical Look-up Tables for Estimation and Mapping of Soil Bearing Strength from Hyperspectral Imagery

    DTIC Science & Technology

    2012-09-28

    spectral-geotechnical libraries and models developed during remote sensing and calibration/validation campaigns conducted by NRL and collaborating...geotechnical libraries and models developed during remote sensing and calibration/validation campaigns conducted by NRL and collaborating institutions in four...2010; Bachmann, Fry, et al, 2012a). The NRL HITT tool is a model for how we develop and validate software, and the future development of tools by

  19. CFD Modeling Needs and What Makes a Good Supersonic Combustion Validation Experiment

    NASA Technical Reports Server (NTRS)

    Gaffney, Richard L., Jr.; Cutler, Andrew D.

    2005-01-01

    If a CFD code/model developer is asked what experimental data he wants for validating his code or numerical model, his answer will be: "Everything, everywhere, at all times." Since this is not possible, practical, or even reasonable, the developer must understand what can be measured within the limits imposed by the test article, the test location, the test environment and the available diagnostic equipment. At the same time, it is important for the experimentalist/diagnostician to understand what the CFD developer needs (as opposed to wants) in order to conduct a useful CFD validation experiment. If these needs are not known, it is possible to neglect easily measured quantities at locations needed by the developer, rendering the data set useless for validation purposes. It is also important for the experimentalist/diagnostician to understand what the developer is trying to validate so that the experiment can be designed to isolate (as much as possible) the effects of the particular physical phenomenon associated with the model to be validated. The probability of a successful validation experiment can be greatly increased if the two groups work together, each understanding the needs and limitations of the other.

  20. Development, Testing, and Validation of a Model-Based Tool to Predict Operator Responses in Unexpected Workload Transitions

    NASA Technical Reports Server (NTRS)

    Sebok, Angelia; Wickens, Christopher; Sargent, Robert

    2015-01-01

    One human factors challenge is predicting operator performance in novel situations. Approaches such as drawing on relevant previous experience, and developing computational models to predict operator performance in complex situations, offer potential methods to address this challenge. Two concerns with modeling operator performance are that models need to be realistic and that they need to be tested empirically and validated. In addition, many existing human performance modeling tools are complex and require that an analyst gain significant experience before developing models for meaningful data collection. This paper describes an effort to address these challenges by developing an easy-to-use model-based tool, using models developed from a review of the existing human performance literature and from targeted experimental studies, and performing an empirical validation of key model predictions.

  1. Development and validation of a cost-utility model for Type 1 diabetes mellitus.

    PubMed

    Wolowacz, S; Pearson, I; Shannon, P; Chubb, B; Gundgaard, J; Davies, M; Briggs, A

    2015-08-01

    To develop a health economic model to evaluate the cost-effectiveness of new interventions for Type 1 diabetes mellitus by their effects on long-term complications (measured through mean HbA1c) while capturing the impact of treatment on hypoglycaemic events. Through a systematic review, we identified complications associated with Type 1 diabetes mellitus and data describing the long-term incidence of these complications. An individual patient simulation model was developed and included the following complications: cardiovascular disease, peripheral neuropathy, microalbuminuria, end-stage renal disease, proliferative retinopathy, ketoacidosis, cataract, hypoglycaemia and adverse birth outcomes. Risk equations were developed from published cumulative incidence data and hazard ratios for the effect of HbA1c, age and duration of diabetes. We validated the model by comparing model predictions with observed outcomes from studies used to build the model (internal validation) and from other published data (external validation). We performed illustrative analyses for typical patient cohorts and a hypothetical intervention. Model predictions were within 2% of expected values in the internal validation and within 8% of observed values in the external validation (percentages represent absolute differences in the cumulative incidence). The model utilized high-quality, recent data specific to people with Type 1 diabetes mellitus. In the model validation, results deviated less than 8% from expected values. © 2014 Research Triangle Institute d/b/a RTI Health Solutions. Diabetic Medicine © 2014 Diabetes UK.
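
    An individual patient simulation of this kind can be pictured as an annual loop that converts an HbA1c-dependent hazard into a complication event; the baseline hazard, hazard ratio, and function below are invented for illustration, not the model's published risk equations:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Invented numbers: annual baseline hazard for one complication, and the
    # hazard ratio per 1% HbA1c above 7%
    BASE_HAZARD = 0.005
    HR_PER_HBA1C = 1.3

    def simulate_patient(hba1c, years=40):
        """Return the year the complication occurs, or None if it never does."""
        hazard = BASE_HAZARD * HR_PER_HBA1C ** (hba1c - 7.0)
        annual_prob = 1 - np.exp(-hazard)        # hazard -> annual event probability
        for year in range(years):
            if rng.random() < annual_prob:
                return year
        return None

    events = [simulate_patient(hba1c=8.5) for _ in range(10_000)]
    print("40-year cumulative incidence:", np.mean([e is not None for e in events]))
    ```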

  2. Aeroservoelastic Model Validation and Test Data Analysis of the F/A-18 Active Aeroelastic Wing

    NASA Technical Reports Server (NTRS)

    Brenner, Martin J.; Prazenica, Richard J.

    2003-01-01

    Model validation and flight test data analysis require careful consideration of the effects of uncertainty, noise, and nonlinearity. Uncertainty prevails in the data analysis techniques and results in a composite model uncertainty from unmodeled dynamics, assumptions and mechanics of the estimation procedures, noise, and nonlinearity. A fundamental requirement for reliable and robust model development is an attempt to account for each of these sources of error, in particular, for model validation, robust stability prediction, and flight control system development. This paper is concerned with data processing procedures for uncertainty reduction in model validation for stability estimation and nonlinear identification. F/A-18 Active Aeroelastic Wing (AAW) aircraft data is used to demonstrate signal representation effects on uncertain model development, stability estimation, and nonlinear identification. Data is decomposed using adaptive orthonormal best-basis and wavelet-basis signal decompositions for signal denoising into linear and nonlinear identification algorithms. Nonlinear identification from a wavelet-based Volterra kernel procedure is used to extract nonlinear dynamics from aeroelastic responses, and to assist model development and uncertainty reduction for model validation and stability prediction by removing a class of nonlinearity from the uncertainty.
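
    A generic soft-threshold wavelet denoising step of the kind applied before identification can be sketched with PyWavelets; the paper's adaptive best-basis and Volterra-kernel machinery is not reproduced here, and the signal is synthetic:

    ```python
    import numpy as np
    import pywt

    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 1024)
    signal = np.sin(8 * np.pi * t) + 0.3 * rng.normal(size=t.size)   # noisy response

    def wavelet_denoise(x, wavelet="db4", level=4):
        """Soft-threshold wavelet denoising with the universal threshold."""
        coeffs = pywt.wavedec(x, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745     # noise from finest scale
        thresh = sigma * np.sqrt(2 * np.log(x.size))
        coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                                for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[: x.size]

    clean = wavelet_denoise(signal)
    ```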

  3. A Conceptual Model of Career Development to Enhance Academic Motivation

    ERIC Educational Resources Information Center

    Collins, Nancy Creighton

    2010-01-01

    The purpose of this study was to develop, refine, and validate a conceptual model of career development to enhance the academic motivation of community college students. To achieve this end, a straw model was built from the theoretical and empirical research literature. The model was then refined and validated through three rounds of a Delphi…

  4. Community-Based Participatory Research Conceptual Model: Community Partner Consultation and Face Validity.

    PubMed

    Belone, Lorenda; Lucero, Julie E; Duran, Bonnie; Tafoya, Greg; Baker, Elizabeth A; Chan, Domin; Chang, Charlotte; Greene-Moton, Ella; Kelley, Michele A; Wallerstein, Nina

    2016-01-01

    A national community-based participatory research (CBPR) team developed a conceptual model of CBPR partnerships to understand the contribution of partnership processes to improved community capacity and health outcomes. With the model primarily developed through academic literature and expert consensus building, we sought community input to assess face validity and acceptability. Our research team conducted semi-structured focus groups with six partnerships nationwide. Participants validated and expanded on existing model constructs and identified new constructs based on "real-world" praxis, resulting in a revised model. Four cross-cutting constructs were identified: trust development, capacity, mutual learning, and power dynamics. By empirically testing the model, we found community face validity and capacity to adapt the model to diverse contexts. We recommend partnerships use and adapt the CBPR model and its constructs, for collective reflection and evaluation, to enhance their partnering practices and achieve their health and research goals. © The Author(s) 2014.

  5. A Public-Private Partnership Develops and Externally Validates a 30-Day Hospital Readmission Risk Prediction Model

    PubMed Central

    Choudhry, Shahid A.; Li, Jing; Davis, Darcy; Erdmann, Cole; Sikka, Rishi; Sutariya, Bharat

    2013-01-01

    Introduction: Preventing the occurrence of hospital readmissions is needed to improve quality of care and foster population health across the care continuum. Hospitals are being held accountable for improving transitions of care to avert unnecessary readmissions. Advocate Health Care in Chicago and Cerner (ACC) collaborated to develop all-cause, 30-day hospital readmission risk prediction models to identify patients that need interventional resources. Ideally, prediction models should encompass several qualities: they should have high predictive ability; use reliable and clinically relevant data; use vigorous performance metrics to assess the models; be validated in populations where they are applied; and be scalable in heterogeneous populations. However, a systematic review of prediction models for hospital readmission risk determined that most performed poorly (average C-statistic of 0.66) and efforts to improve their performance are needed for widespread usage. Methods: The ACC team incorporated electronic health record data, utilized a mixed-method approach to evaluate risk factors, and externally validated their prediction models for generalizability. Inclusion and exclusion criteria were applied on the patient cohort and then split for derivation and internal validation. Stepwise logistic regression was performed to develop two predictive models: one for admission and one for discharge. The prediction models were assessed for discrimination ability, calibration, overall performance, and then externally validated. Results: The ACC Admission and Discharge Models demonstrated modest discrimination ability during derivation, internal and external validation post-recalibration (C-statistic of 0.76 and 0.78, respectively), and reasonable model fit during external validation for utility in heterogeneous populations. Conclusions: The ACC Admission and Discharge Models embody the design qualities of ideal prediction models. The ACC plans to continue its partnership to further improve and develop valuable clinical models. PMID:24224068
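
    The derivation/validation pattern described (discrimination via the C-statistic, calibration via observed-vs-predicted risk) can be sketched with scikit-learn; the cohorts below are synthetic stand-ins for EHR data:

    ```python
    import numpy as np
    from sklearn.calibration import calibration_curve
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(3000, 8))                         # synthetic EHR-style features
    y = (rng.random(3000) < 1 / (1 + np.exp(-(X[:, 0] - 1.5)))).astype(int)
    X_dev, y_dev, X_ext, y_ext = X[:2000], y[:2000], X[2000:], y[2000:]

    model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
    p_ext = model.predict_proba(X_ext)[:, 1]

    print("external C-statistic:", round(roc_auc_score(y_ext, p_ext), 3))
    # Calibration: observed vs predicted readmission risk by risk decile
    obs, pred = calibration_curve(y_ext, p_ext, n_bins=10, strategy="quantile")
    ```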

  6. DEVELOPMENT OF GUIDELINES FOR CALIBRATING, VALIDATING, AND EVALUATING HYDROLOGIC AND WATER QUALITY MODELS: ASABE ENGINEERING PRACTICE 621

    USDA-ARS?s Scientific Manuscript database

    Information to support application of hydrologic and water quality (H/WQ) models abounds, yet modelers commonly use arbitrary, ad hoc methods to conduct, document, and report model calibration, validation, and evaluation. Consistent methods are needed to improve model calibration, validation, and e...

  7. Incremental checking of Master Data Management model based on contextual graphs

    NASA Astrophysics Data System (ADS)

    Lamolle, Myriam; Menet, Ludovic; Le Duc, Chan

    2015-10-01

    The validation of models is a crucial step in distributed heterogeneous systems. In this paper, an incremental validation method is proposed in the scope of a Model Driven Engineering (MDE) approach, which is used to develop a Master Data Management (MDM) field represented by XML Schema models. The MDE approach presented in this paper is based on the definition of an abstraction layer using UML class diagrams. The validation method aims to minimise model errors and to optimise the process of model checking. Therefore, the notion of validation contexts is introduced, allowing the verification of data model views. Description logics specify the constraints that the models have to check. An experimentation of the approach is presented through an application developed in the ArgoUML IDE.
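
    For the XML Schema side of such an MDM model, plain (non-incremental) validation looks like the following sketch with lxml; the file names are placeholders, and the paper's contextual-graph and description-logic machinery is not shown:

    ```python
    from lxml import etree

    # Placeholder file names; not the paper's incremental, context-driven checker.
    schema = etree.XMLSchema(etree.parse("master_data_model.xsd"))
    doc = etree.parse("customer_view.xml")

    if not schema.validate(doc):
        for err in schema.error_log:
            print(err.line, err.message)
    ```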

  8. Model Validation | Center for Cancer Research

    Cancer.gov

    Research Investigation and Animal Model Validation: This activity is also under development and thus far has included increasing pathology resources, delivering pathology services, as well as using imaging and surgical methods to develop and refine animal models in collaboration with other CCR investigators.

  9. Simulation model calibration and validation : phase II : development of implementation handbook and short course.

    DOT National Transportation Integrated Search

    2006-01-01

    A previous study developed a procedure for microscopic simulation model calibration and validation and evaluated the procedure via two relatively simple case studies using three microscopic simulation models. Results showed that default parameters we...

  10. Development and validation of risk models to predict outcomes following in-hospital cardiac arrest attended by a hospital-based resuscitation team.

    PubMed

    Harrison, David A; Patel, Krishna; Nixon, Edel; Soar, Jasmeet; Smith, Gary B; Gwinnutt, Carl; Nolan, Jerry P; Rowan, Kathryn M

    2014-08-01

    The National Cardiac Arrest Audit (NCAA) is the UK national clinical audit for in-hospital cardiac arrest. To make fair comparisons among health care providers, clinical indicators require case mix adjustment using a validated risk model. The aim of this study was to develop and validate risk models to predict outcomes following in-hospital cardiac arrest attended by a hospital-based resuscitation team in UK hospitals. Risk models for two outcomes, return of spontaneous circulation (ROSC) for greater than 20 min and survival to hospital discharge, were developed and validated using data for in-hospital cardiac arrests between April 2011 and March 2013. For each outcome, a full model was fitted and then simplified by testing for non-linearity, combining categories and stepwise reduction. Finally, interactions between predictors were considered. Models were assessed for discrimination, calibration and accuracy. 22,479 in-hospital cardiac arrests in 143 hospitals were included (14,688 development, 7791 validation). The final risk model for ROSC > 20 min included: age (non-linear), sex, prior length of stay in hospital, reason for attendance, location of arrest, presenting rhythm, and interactions between presenting rhythm and location of arrest. The model for hospital survival included the same predictors, excluding sex. Both models had acceptable performance across the range of measures, although discrimination for hospital mortality exceeded that for ROSC > 20 min (c index 0.81 versus 0.72). Validated risk models for ROSC > 20 min and hospital survival following in-hospital cardiac arrest have been developed. These models will strengthen comparative reporting in NCAA and support local quality improvement. Copyright © 2014 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.

  11. Development and validation of risk models to predict outcomes following in-hospital cardiac arrest attended by a hospital-based resuscitation team☆

    PubMed Central

    Harrison, David A.; Patel, Krishna; Nixon, Edel; Soar, Jasmeet; Smith, Gary B.; Gwinnutt, Carl; Nolan, Jerry P.; Rowan, Kathryn M.

    2014-01-01

    Aim The National Cardiac Arrest Audit (NCAA) is the UK national clinical audit for in-hospital cardiac arrest. To make fair comparisons among health care providers, clinical indicators require case mix adjustment using a validated risk model. The aim of this study was to develop and validate risk models to predict outcomes following in-hospital cardiac arrest attended by a hospital-based resuscitation team in UK hospitals. Methods Risk models for two outcomes—return of spontaneous circulation (ROSC) for greater than 20 min and survival to hospital discharge—were developed and validated using data for in-hospital cardiac arrests between April 2011 and March 2013. For each outcome, a full model was fitted and then simplified by testing for non-linearity, combining categories and stepwise reduction. Finally, interactions between predictors were considered. Models were assessed for discrimination, calibration and accuracy. Results 22,479 in-hospital cardiac arrests in 143 hospitals were included (14,688 development, 7791 validation). The final risk model for ROSC > 20 min included: age (non-linear), sex, prior length of stay in hospital, reason for attendance, location of arrest, presenting rhythm, and interactions between presenting rhythm and location of arrest. The model for hospital survival included the same predictors, excluding sex. Both models had acceptable performance across the range of measures, although discrimination for hospital mortality exceeded that for ROSC > 20 min (c index 0.81 versus 0.72). Conclusions Validated risk models for ROSC > 20 min and hospital survival following in-hospital cardiac arrest have been developed. These models will strengthen comparative reporting in NCAA and support local quality improvement. PMID:24830872
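
    The final model form described in this record (and its PubMed duplicate above), non-linear age via splines plus a rhythm-by-location interaction, can be expressed as a statsmodels formula; the data frame and column names below are invented, synthetic stand-ins for the NCAA data:

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)
    n = 500
    df = pd.DataFrame({
        "age": rng.uniform(20, 95, n),
        "sex": rng.choice(["M", "F"], n),
        "prior_los": rng.exponential(3, n),
        "presenting_rhythm": rng.choice(["shockable", "non-shockable"], n),
        "arrest_location": rng.choice(["ward", "icu", "ed"], n),
        "rosc_gt_20min": rng.integers(0, 2, n),
    })

    # bs() gives the non-linear age effect; '*' expands to main effects + interaction
    fit = smf.logit("rosc_gt_20min ~ bs(age, df=4) + sex + prior_los "
                    "+ presenting_rhythm * arrest_location", data=df).fit(disp=0)
    print(fit.params.head())
    ```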

  12. Web Based Semi-automatic Scientific Validation of Models of the Corona and Inner Heliosphere

    NASA Astrophysics Data System (ADS)

    MacNeice, P. J.; Chulaki, A.; Taktakishvili, A.; Kuznetsova, M. M.

    2013-12-01

    Validation is a critical step in preparing models of the corona and inner heliosphere for future roles supporting either or both the scientific research community and the operational space weather forecasting community. Validation of forecasting quality tends to focus on a short list of key features in the model solutions, with an unchanging order of priority. Scientific validation exposes a much larger range of physical processes and features, and as the models evolve to better represent features of interest, the research community tends to shift its focus to other areas which are less well understood and modeled. Given the more comprehensive and dynamic nature of scientific validation, and the limited resources available to the community to pursue it, it is imperative that the community establish a semi-automated process which engages the model developers directly in an ongoing and evolving validation process. In this presentation we describe the ongoing design and development of a web-based facility enabling this type of validation for models of the corona and inner heliosphere, the growing list of model results being generated, and the strategies we have been developing to account for model results that incorporate adaptively refined numerical grids.

  13. Load Model Verification, Validation and Calibration Framework by Statistical Analysis on Field Data

    NASA Astrophysics Data System (ADS)

    Jiao, Xiangqing; Liao, Yuan; Nguyen, Thai

    2017-11-01

    Accurate load models are critical for power system analysis and operation. A large amount of research work has been done on load modeling. Most of the existing research focuses on developing load models, while little has been done on developing formal load model verification and validation (V&V) methodologies or procedures. Most existing load model validation is based on qualitative rather than quantitative analysis. In addition, not all aspects of the model V&V problem have been addressed by the existing approaches. To complement the existing methods, this paper proposes a novel load model verification and validation framework that can systematically and more comprehensively examine a load model's effectiveness and accuracy. Statistical analysis, instead of visual checks, quantifies the load model's accuracy and provides a confidence level for the developed load model for model users. The analysis results can also be used to calibrate load models. The proposed framework can be used as guidance for utility engineers and researchers to systematically examine load models. The proposed method is demonstrated through analysis of field measurements collected from a utility system.
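
    Replacing a visual check with a statistic and a confidence level can be as simple as bootstrapping the residuals between field measurements and simulated response; a sketch on synthetic data (the paper's specific statistics are not reproduced):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    measured = rng.normal(100, 5, size=200)            # field measurements (synthetic)
    simulated = measured + rng.normal(1.0, 2.0, 200)   # load-model output (synthetic)

    def rmse_with_ci(meas, sim, n_boot=2000, alpha=0.05):
        """RMSE of model vs measurement with a bootstrap confidence interval."""
        resid = meas - sim
        rmse = np.sqrt(np.mean(resid**2))
        boots = [np.sqrt(np.mean(rng.choice(resid, resid.size) ** 2))
                 for _ in range(n_boot)]
        lo, hi = np.quantile(boots, [alpha / 2, 1 - alpha / 2])
        return rmse, (lo, hi)

    print(rmse_with_ci(measured, simulated))
    ```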

  14. A Comprehensive Validation Methodology for Sparse Experimental Data

    NASA Technical Reports Server (NTRS)

    Norman, Ryan B.; Blattnig, Steve R.

    2010-01-01

    A comprehensive program of verification and validation has been undertaken to assess the applicability of models to space radiation shielding applications and to track progress as models are developed over time. The models are placed under configuration control, and automated validation tests are used so that comparisons can readily be made as models are improved. Though direct comparisons between theoretical results and experimental data are desired for validation purposes, such comparisons are not always possible due to lack of data. In this work, two uncertainty metrics are introduced that are suitable for validating theoretical models against sparse experimental databases. The nuclear physics models, NUCFRG2 and QMSFRG, are compared to an experimental database consisting of over 3600 experimental cross sections to demonstrate the applicability of the metrics. A cumulative uncertainty metric is applied to the question of overall model accuracy, while a metric based on the median uncertainty is used to analyze the models from the perspective of model development by analyzing subsets of the model parameter space.
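
    The two kinds of metric are illustrated below in schematic form; these definitions (mean and median relative difference over the experimental database) are simplified stand-ins for the paper's metrics, and the numbers are illustrative only:

    ```python
    import numpy as np

    def uncertainty_metrics(model_vals, exp_vals):
        """Cumulative (mean) and median relative uncertainty of model vs experiment.
        Simplified illustrative definitions, not the paper's exact metrics."""
        model_vals, exp_vals = np.asarray(model_vals), np.asarray(exp_vals)
        rel = np.abs(model_vals - exp_vals) / np.abs(exp_vals)
        return rel.mean(), np.median(rel)   # overall accuracy vs robust, outlier-resistant view

    # e.g. over a database of measured cross sections (illustrative numbers):
    print(uncertainty_metrics([105.0, 98.0, 87.0], [100.0, 95.0, 90.0]))
    ```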

  15. LIVVkit 2: An extensible land ice verification and validation toolkit for comparing observations and models

    NASA Astrophysics Data System (ADS)

    Kennedy, J. H.; Bennett, A. R.; Evans, K. J.; Fyke, J. G.; Vargo, L.; Price, S. F.; Hoffman, M. J.

    2016-12-01

    Accurate representation of ice sheets and glaciers is essential for robust predictions of arctic climate within Earth System models. Verification and Validation (V&V) is a set of techniques used to quantify the correctness and accuracy of a model, which builds developer/modeler confidence and can be used to enhance the credibility of the model. Fundamentally, V&V is a continuous process because each model change requires a new round of V&V testing. The Community Ice Sheet Model (CISM) development community is actively developing LIVVkit, the Land Ice Verification and Validation toolkit, which is designed to integrate easily into an ice-sheet model's development workflow (on both personal and high-performance computers) to provide continuous V&V testing. LIVVkit is a robust and extensible python package for V&V, which has components for both software V&V (construction and use) and model V&V (mathematics and physics). The model verification component is used, for example, to verify model results against community intercomparisons such as ISMIP-HOM. The model validation component is used, for example, to generate a series of diagnostic plots showing the differences between model results and observations for variables such as thickness, surface elevation, basal topography, surface velocity, surface mass balance, etc. Because many different ice-sheet models are under active development, new validation datasets are becoming available, and new methods of analysing these models are actively being researched, LIVVkit includes a framework that lets ice-sheet modelers easily extend the model V&V analyses. This allows modelers and developers to develop evaluations of parameters, implement changes, and quickly see how those changes affect the ice-sheet model and the earth system model (when coupled). Furthermore, LIVVkit outputs a portable hierarchical website, allowing evaluations to be easily shared, published, and analysed throughout the arctic and Earth system communities.

  16. Sonic Boom Modeling Technical Challenge

    NASA Technical Reports Server (NTRS)

    Sullivan, Brenda M.

    2007-01-01

    This viewgraph presentation reviews the technical challenges in modeling sonic booms. The goal of this program is to develop knowledge, capabilities and technologies to enable overland supersonic flight. The specific objectives of the modeling are: (1) Develop and validate sonic boom propagation model through realistic atmospheres, including effects of turbulence (2) Develop methods enabling prediction of response of and acoustic transmission into structures impacted by sonic booms (3) Develop and validate psychoacoustic model of human response to sonic booms under both indoor and outdoor listening conditions, using simulators.

  17. HBOI Underwater Imaging and Communication Research - Phase 1

    DTIC Science & Technology

    2012-04-19

    validation of one-way pulse stretching radiative transfer code The objective was to develop and validate time-resolved radiative transfer models that...and validation of one-way pulse stretching radiative transfer code The models were subjected to a series of validation experiments over 12.5 meter...about the theoretical basis of the model together with validation results can be found in Dalgleish et al. (2010). Forward scattering Mueller

  18. Turbine Engine Mathematical Model Validation

    DTIC Science & Technology

    1976-12-01

    AEDC-TR-76-90 TURBINE ENGINE MATHEMATICAL MODEL VALIDATION ENGINE TEST FACILITY ARNOLD ENGINEERING DEVELOPMENT CENTER AIR FORCE...YJ101-GE-100 engine turbine engines mathematical models computations mathematical...report presents and discusses the results of an investigation to develop a rationale and technique for the validation of turbine engine steady-state

  19. Individualized prediction of perineural invasion in colorectal cancer: development and validation of a radiomics prediction model.

    PubMed

    Huang, Yanqi; He, Lan; Dong, Di; Yang, Caiyun; Liang, Cuishan; Chen, Xin; Ma, Zelan; Huang, Xiaomei; Yao, Su; Liang, Changhong; Tian, Jie; Liu, Zaiyi

    2018-02-01

    To develop and validate a radiomics prediction model for individualized prediction of perineural invasion (PNI) in colorectal cancer (CRC). After computed tomography (CT) radiomics feature extraction, a radiomics signature was constructed in a derivation cohort (346 CRC patients). A prediction model was developed to integrate the radiomics signature and clinical candidate predictors [age, sex, tumor location, and carcinoembryonic antigen (CEA) level]. Apparent prediction performance was assessed. After internal validation, independent temporal validation (separate from the cohort used to build the model) was then conducted in 217 CRC patients. The final model was converted to an easy-to-use nomogram. The developed radiomics nomogram, which integrated the radiomics signature and CEA level, showed good calibration and discrimination performance [Harrell's concordance index (c-index): 0.817; 95% confidence interval (95% CI): 0.811-0.823]. Application of the nomogram in the validation cohort gave comparable calibration and discrimination (c-index: 0.803; 95% CI: 0.794-0.812). Integrating the radiomics signature and CEA level into a radiomics prediction model enables easy and effective risk assessment of PNI in CRC. This stratification of patients according to their PNI status may provide a basis for individualized auxiliary treatment.
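
    A common way to build such a signature-plus-clinical model is a two-step fit; the abstract does not state the exact estimator, so the L1-penalized choice below is an assumption, and all arrays are synthetic:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression, LogisticRegressionCV
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X_rad = rng.normal(size=(346, 100))       # CT radiomics features (synthetic)
    cea = rng.lognormal(size=346)             # CEA level (synthetic)
    y = rng.integers(0, 2, 346)               # PNI status (synthetic)

    # Step 1: radiomics signature via L1-penalized logistic regression
    signature = make_pipeline(
        StandardScaler(),
        LogisticRegressionCV(penalty="l1", solver="liblinear", cv=5, max_iter=5000),
    ).fit(X_rad, y)
    rad_score = signature.decision_function(X_rad)   # per-patient signature score

    # Step 2: final model combining the signature score with the CEA level
    final = LogisticRegression().fit(np.column_stack([rad_score, cea]), y)
    ```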

  20. Risk prediction models for graft failure in kidney transplantation: a systematic review.

    PubMed

    Kaboré, Rémi; Haller, Maria C; Harambat, Jérôme; Heinze, Georg; Leffondré, Karen

    2017-04-01

    Risk prediction models are useful for identifying kidney recipients at high risk of graft failure, thus optimizing clinical care. Our objective was to systematically review the models that have been recently developed and validated to predict graft failure in kidney transplantation recipients. We used PubMed and Scopus to search for English, German and French language articles published in 2005-15. We selected studies that developed and validated a new risk prediction model for graft failure after kidney transplantation, or validated an existing model with or without updating the model. Data on recipient characteristics and predictors, as well as modelling and validation methods were extracted. In total, 39 articles met the inclusion criteria. Of these, 34 developed and validated a new risk prediction model and 5 validated an existing one with or without updating the model. The most frequently predicted outcome was graft failure, defined as dialysis, re-transplantation or death with functioning graft. Most studies used the Cox model. There was substantial variability in predictors used. In total, 25 studies used predictors measured at transplantation only, and 14 studies used predictors also measured after transplantation. Discrimination performance was reported in 87% of studies, while calibration was reported in 56%. Performance indicators were estimated using both internal and external validation in 13 studies, and using external validation only in 6 studies. Several prediction models for kidney graft failure in adults have been published. Our study highlights the need to better account for competing risks when applicable in such studies, and to adequately account for post-transplant measures of predictors in studies aiming at improving monitoring of kidney transplant recipients. © The Author 2017. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.

  1. Impact of model development, calibration and validation decisions on hydrological simulations in West Lake Erie Basin

    USDA-ARS?s Scientific Manuscript database

    Watershed simulation models are used extensively to investigate hydrologic processes, landuse and climate change impacts, pollutant load assessments and best management practices (BMPs). Developing, calibrating and validating these models require a number of critical decisions that will influence t...

  2. Economic analysis of model validation for a challenge problem

    DOE PAGES

    Paez, Paul J.; Paez, Thomas L.; Hasselman, Timothy K.

    2016-02-19

    It is now commonplace for engineers to build mathematical models of the systems they are designing, building, or testing. And, it is nearly universally accepted that phenomenological models of physical systems must be validated prior to use for prediction in consequential scenarios. Yet, there are certain situations in which testing only, or no testing and no modeling, may be economically viable alternatives to modeling and its associated testing. This paper develops an economic framework within which benefit–cost can be evaluated for modeling and model validation relative to other options. The development is presented in terms of a challenge problem. As a result, we provide a numerical example that quantifies when modeling, calibration, and validation yield higher benefit–cost than a testing-only or a no-modeling-and-no-testing option.

  3. Assessment of the Value, Impact, and Validity of the Jobs and Economic Development Impacts (JEDI) Suite of Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Billman, L.; Keyser, D.

    The Jobs and Economic Development Impacts (JEDI) models, developed by the National Renewable Energy Laboratory (NREL) for the U.S. Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy (EERE), use input-output methodology to estimate gross (not net) jobs and economic impacts of building and operating selected types of renewable electricity generation and fuel plants. This analysis provides the DOE with an assessment of the value, impact, and validity of the JEDI suite of models. While the models produce estimates of jobs, earnings, and economic output, this analysis focuses only on jobs estimates. This validation report includes an introduction to the JEDI models, an analysis of the value and impact of the JEDI models, and an analysis of the validity of job estimates generated by the JEDI models through comparison to other modeled estimates and to empirical, observed jobs data as reported or estimated for a commercial project, a state, or a region.

  4. The Challenge of Grounding Planning in Simulation with an Interactive Model Development Environment

    NASA Technical Reports Server (NTRS)

    Clement, Bradley J.; Frank, Jeremy D.; Chachere, John M.; Smith, Tristan B.; Swanson, Keith J.

    2011-01-01

    A principal obstacle to fielding automated planning systems is the difficulty of modeling. Physical systems are conventionally modeled based on specification documents and the modeler's understanding of the system. Thus, the model is developed in a way that is disconnected from the system's actual behavior and is vulnerable to manual error. Another obstacle to fielding planners is testing and validation. For a space mission, generated plans must often be validated by translating them into command sequences that are run in a simulation testbed. Testing in this way is complex and onerous because of the large number of possible plans and states of the spacecraft. However, if used as a source of domain knowledge, the simulator can ease validation. This paper poses a challenge: to ground planning models in the system physics represented by simulation. A proposed interactive model development environment illustrates the integration of planning and simulation to meet the challenge. This integration reveals research paths for automated model construction and validation.

  5. LIVVkit: An extensible, python-based, land ice verification and validation toolkit for ice sheet models

    NASA Astrophysics Data System (ADS)

    Kennedy, Joseph H.; Bennett, Andrew R.; Evans, Katherine J.; Price, Stephen; Hoffman, Matthew; Lipscomb, William H.; Fyke, Jeremy; Vargo, Lauren; Boghozian, Adrianna; Norman, Matthew; Worley, Patrick H.

    2017-06-01

    To address the pressing need to better understand the behavior and complex interaction of ice sheets within the global Earth system, significant development of continental-scale, dynamical ice sheet models is underway. Concurrent to the development of the Community Ice Sheet Model (CISM), the corresponding verification and validation (V&V) process is being coordinated through a new, robust, Python-based extensible software package, the Land Ice Verification and Validation toolkit (LIVVkit). Incorporated into the typical ice sheet model development cycle, it provides robust and automated numerical verification, software verification, performance validation, and physical validation analyses on a variety of platforms, from personal laptops to the largest supercomputers. LIVVkit operates on sets of regression test and reference data sets, and provides comparisons for a suite of community prioritized tests, including configuration and parameter variations, bit-for-bit evaluation, and plots of model variables to indicate where differences occur. LIVVkit also provides an easily extensible framework to incorporate and analyze results of new intercomparison projects, new observation data, and new computing platforms. LIVVkit is designed for quick adaptation to additional ice sheet models via abstraction of model specific code, functions, and configurations into an ice sheet model description bundle outside the main LIVVkit structure. Ultimately, through shareable and accessible analysis output, LIVVkit is intended to help developers build confidence in their models and enhance the credibility of ice sheet models overall.

  6. Improving the Validity of Activity of Daily Living Dependency Risk Assessment

    PubMed Central

    Clark, Daniel O.; Stump, Timothy E.; Tu, Wanzhu; Miller, Douglas K.

    2015-01-01

    Objectives Efforts to prevent activity of daily living (ADL) dependency may be improved through models that assess older adults’ dependency risk. We evaluated whether cognition and gait speed measures improve the predictive validity of interview-based models. Method Participants were 8,095 self-respondents in the 2006 Health and Retirement Survey who were aged 65 years or over and independent in five ADLs. Incident ADL dependency was determined from the 2008 interview. Models were developed using random 2/3rd cohorts and validated in the remaining 1/3rd. Results Compared to a c-statistic of 0.79 in the best interview model, the model including cognitive measures had c-statistics of 0.82 and 0.80 while the best fitting gait speed model had c-statistics of 0.83 and 0.79 in the development and validation cohorts, respectively. Conclusion Two relatively brief models, one that requires an in-person assessment and one that does not, had excellent validity for predicting incident ADL dependency but did not significantly improve the predictive validity of the best fitting interview-based models. PMID:24652867

  7. On the validation of a code and a turbulence model appropriate to circulation control airfoils

    NASA Technical Reports Server (NTRS)

    Viegas, J. R.; Rubesin, M. W.; Maccormack, R. W.

    1988-01-01

    A computer code for calculating flow about a circulation control airfoil within a wind tunnel test section has been developed. This code is being validated for eventual use as an aid to design such airfoils. The concept of code validation being used is explained. The initial stages of the process have been accomplished. The present code has been applied to a low-subsonic, 2-D flow about a circulation control airfoil for which extensive data exist. Two basic turbulence models and variants thereof have been successfully introduced into the algorithm, the Baldwin-Lomax algebraic and the Jones-Launder two-equation models of turbulence. The variants include adding a history of the jet development for the algebraic model and adding streamwise curvature effects for both models. Numerical difficulties and difficulties in the validation process are discussed. Turbulence model and code improvements to proceed with the validation process are also discussed.

  8. Developing rural palliative care: validating a conceptual model.

    PubMed

    Kelley, Mary Lou; Williams, Allison; DeMiglio, Lily; Mettam, Hilary

    2011-01-01

    The purpose of this research was to validate a conceptual model for developing palliative care in rural communities. This model articulates how local rural healthcare providers develop palliative care services according to four sequential phases. The model has roots in concepts of community capacity development, evolves from collaborative, generalist rural practice, and utilizes existing health services infrastructure. It addresses how rural providers manage challenges, specifically those related to: lack of resources, minimal community understanding of palliative care, health professionals' resistance, the bureaucracy of the health system, and the obstacles of providing services in rural environments. Seven semi-structured focus groups were conducted with interdisciplinary health providers in 7 rural communities in two Canadian provinces. Using a constant comparative analysis approach, focus group data were analyzed by examining participants' statements in relation to the model and comparing emerging themes in the development of rural palliative care to the elements of the model. The data validated the conceptual model as the model was able to theoretically predict and explain the experiences of the 7 rural communities that participated in the study. New emerging themes from the data elaborated existing elements in the model and informed the requirement for minor revisions. The model was validated and slightly revised, as suggested by the data. The model was confirmed as being a useful theoretical tool for conceptualizing the development of rural palliative care that is applicable in diverse rural communities.

  9. Development of estrogen receptor beta binding prediction model using large sets of chemicals.

    PubMed

    Sakkiah, Sugunadevi; Selvaraj, Chandrabose; Gong, Ping; Zhang, Chaoyang; Tong, Weida; Hong, Huixiao

    2017-11-03

    We developed an ERβ binding prediction model that, together with our previously developed ERα binding model, facilitates identification of chemicals that specifically bind ERβ or ERα. Decision Forest was used to train the ERβ binding prediction model based on a large set of compounds obtained from EADB. Model performance was estimated through 1000 iterations of 5-fold cross-validation. Prediction confidence was analyzed using predictions from the cross-validations. Informative chemical features for ERβ binding were identified through analysis of the frequency data of chemical descriptors used in the models in the 5-fold cross-validations. 1000 permutations were conducted to assess chance correlation. The average accuracy of the 5-fold cross-validations was 93.14%, with a standard deviation of 0.64%. Prediction confidence analysis indicated that the higher the prediction confidence, the more accurate the predictions. Permutation testing revealed that the prediction model is unlikely to have been generated by chance. Eighteen informative descriptors were identified as important to ERβ binding prediction. Application of the prediction model to data from the ToxCast project yielded a very high sensitivity of 90-92%. Our results demonstrate that ERβ binding of chemicals can be accurately predicted using the developed model. Coupled with our previously developed ERα prediction model, this model can be expected to facilitate drug development through identification of chemicals that specifically bind ERβ or ERα.
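    As a hedged illustration of the validation strategy above, the sketch below substitutes scikit-learn's random forest for Decision Forest, estimates accuracy by repeated 5-fold cross-validation, and checks chance correlation with a small permutation test (far fewer iterations than the study's 1000, to keep the example quick).

```python
# Stand-in sketch: random forest instead of Decision Forest, with repeated
# 5-fold cross-validation and a label-permutation test for chance correlation.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import (RepeatedStratifiedKFold, cross_val_score,
                                     permutation_test_score)

X, y = make_classification(n_samples=400, n_features=30, n_informative=10,
                           random_state=0)           # synthetic binding data
clf = RandomForestClassifier(n_estimators=100, random_state=0)

cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=5, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"repeated 5-fold CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")

score, perm_scores, pvalue = permutation_test_score(
    clf, X, y, cv=5, n_permutations=20, random_state=0)
print(f"true {score:.3f} vs permuted mean {perm_scores.mean():.3f} (p = {pvalue:.3f})")
```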

  10. Developing and validating risk prediction models in an individual participant data meta-analysis

    PubMed Central

    2014-01-01

    Background Risk prediction models estimate the risk of developing future outcomes for individuals based on one or more underlying characteristics (predictors). We review how researchers develop and validate risk prediction models within an individual participant data (IPD) meta-analysis, in order to assess the feasibility and conduct of the approach. Methods A qualitative review of the aims, methodology, and reporting in 15 articles that developed a risk prediction model using IPD from multiple studies. Results The IPD approach offers many opportunities but methodological challenges exist, including: unavailability of requested IPD, missing patient data and predictors, and between-study heterogeneity in methods of measurement, outcome definitions and predictor effects. Most articles develop their model using IPD from all available studies and perform only an internal validation (on the same set of data). Ten of the 15 articles did not allow for any study differences in baseline risk (intercepts), potentially limiting their model’s applicability and performance in some populations. Only two articles used external validation (on different data), including a novel method which develops the model on all but one of the IPD studies, tests performance in the excluded study, and repeats by rotating the omitted study. Conclusions An IPD meta-analysis offers unique opportunities for risk prediction research. Researchers can make more of this by allowing separate model intercept terms for each study (population) to improve generalisability, and by using ‘internal-external cross-validation’ to simultaneously develop and validate their model. Methodological challenges can be reduced by prospectively planned collaborations that share IPD for risk prediction. PMID:24397587
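    The 'internal-external cross-validation' highlighted in the conclusions can be sketched as follows: develop the model on all but one of the IPD studies, test it on the omitted study, and rotate. The studies and effects below are simulated for illustration.

```python
# Sketch of internal-external cross-validation across simulated IPD studies:
# each study is held out once while the model is fit on the others.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
studies = []
for _ in range(5):                               # five hypothetical IPD studies
    x = rng.normal(size=(400, 3))
    intercept = rng.normal(-1.0, 0.3)            # study-specific baseline risk
    logit = intercept + x @ np.array([0.8, -0.5, 0.3])
    y = rng.binomial(1, 1 / (1 + np.exp(-logit)))
    studies.append((x, y))

for held_out in range(len(studies)):
    X_tr = np.vstack([x for i, (x, _) in enumerate(studies) if i != held_out])
    y_tr = np.concatenate([y for i, (_, y) in enumerate(studies) if i != held_out])
    X_te, y_te = studies[held_out]
    model = LogisticRegression().fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"held-out study {held_out}: AUC = {auc:.3f}")
```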

  11. External validation of preexisting first trimester preeclampsia prediction models.

    PubMed

    Allen, Rebecca E; Zamora, Javier; Arroyo-Manzano, David; Velauthar, Luxmilar; Allotey, John; Thangaratinam, Shakila; Aquilina, Joseph

    2017-10-01

    To validate the increasing number of prognostic models being developed for preeclampsia using our own prospective study. A systematic review of the literature that assessed biomarkers, uterine artery Doppler, and maternal characteristics in the first trimester for the prediction of preeclampsia was performed, and models were selected based on predefined criteria. Validation was performed by applying the regression coefficients published in the different derivation studies to our cohort. We assessed the models' discrimination ability and calibration. Twenty models were identified for validation. The discrimination ability observed in the derivation studies (area under the curve, AUC) ranged from 0.70 to 0.96; when these models were validated against the validation cohort, the AUCs varied considerably, ranging from 0.504 to 0.833. Comparing the AUCs obtained in the derivation studies to those in the validation cohort, we found statistically significant differences for several studies. There is currently no definitive prediction model with adequate discrimination ability for preeclampsia that performs as well when applied to a different population and can differentiate well between the highest- and lowest-risk groups within the tested population. The pre-existing large number of models limits the value of further model development; future research should be focused on further attempts to validate existing models and on assessing whether their implementation improves patient care. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.
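    A minimal sketch of the external-validation procedure described above: apply published regression coefficients to a new cohort without refitting, then assess discrimination (AUC) and calibration-in-the-large. The coefficients and data below are invented, not drawn from any of the twenty models.

```python
# External validation without refitting: score a local cohort with
# published logistic-regression coefficients, then compare discrimination
# and the mean predicted risk against the observed event rate.
import numpy as np
from sklearn.metrics import roc_auc_score

published_intercept = -3.2                       # invented 'published' model
published_coefs = np.array([0.9, 0.02, 0.5])     # biomarker, age, Doppler index

rng = np.random.default_rng(3)
n = 1000
X = np.column_stack([rng.normal(size=n),         # biomarker (standardized)
                     rng.uniform(20, 45, n),     # maternal age (years)
                     rng.normal(size=n)])        # uterine artery Doppler index
true_logit = -3.0 + X @ np.array([0.7, 0.03, 0.4])
y = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))   # preeclampsia (0/1)

pred = 1 / (1 + np.exp(-(published_intercept + X @ published_coefs)))
print("validation AUC:", round(roc_auc_score(y, pred), 3))
print("mean predicted risk:", round(pred.mean(), 3), "| observed rate:", round(y.mean(), 3))
```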

  12. Predictive modeling of infrared radiative heating in tomato dry-peeling process: Part II. Model validation and sensitivity analysis

    USDA-ARS?s Scientific Manuscript database

    A predictive mathematical model was developed to simulate heat transfer in a tomato undergoing double sided infrared (IR) heating in a dry-peeling process. The aims of this study were to validate the developed model using experimental data and to investigate different engineering parameters that mos...

  13. Three phase heat and mass transfer model for unsaturated soil freezing process: Part 2 - model validation

    NASA Astrophysics Data System (ADS)

    Zhang, Yaning; Xu, Fei; Li, Bingxi; Kim, Yong-Song; Zhao, Wenke; Xie, Gongnan; Fu, Zhongbin

    2018-04-01

    This study aims to validate the three-phase heat and mass transfer model developed in the first part (Three phase heat and mass transfer model for unsaturated soil freezing process: Part 1 - model development). Experimental results from previous studies and experiments were used for the validation. The results showed that the correlation coefficients for the simulated and experimental water contents at different soil depths were between 0.83 and 0.92. The correlation coefficients for the simulated and experimental liquid water contents at different soil temperatures were between 0.95 and 0.99. With these high accuracies, the developed model can be used reliably to predict water contents at different soil depths and temperatures.

  14. Issues and approach to develop validated analysis tools for hypersonic flows: One perspective

    NASA Technical Reports Server (NTRS)

    Deiwert, George S.

    1993-01-01

    Critical issues concerning the modeling of low density hypervelocity flows where thermochemical nonequilibrium effects are pronounced are discussed. Emphasis is on the development of validated analysis tools, and the activity in the NASA Ames Research Center's Aerothermodynamics Branch is described. Inherent in the process is a strong synergism between ground test and real gas computational fluid dynamics (CFD). Approaches to develop and/or enhance phenomenological models and incorporate them into computational flowfield simulation codes are discussed. These models were partially validated with experimental data for flows where the gas temperature is raised (compressive flows). Expanding flows, where temperatures drop, however, exhibit somewhat different behavior. Experimental data for these expanding flow conditions is sparse and reliance must be made on intuition and guidance from computational chemistry to model transport processes under these conditions. Ground based experimental studies used to provide necessary data for model development and validation are described. Included are the performance characteristics of high enthalpy flow facilities, such as shock tubes and ballistic ranges.

  15. Issues and approach to develop validated analysis tools for hypersonic flows: One perspective

    NASA Technical Reports Server (NTRS)

    Deiwert, George S.

    1992-01-01

    Critical issues concerning the modeling of low-density hypervelocity flows where thermochemical nonequilibrium effects are pronounced are discussed. Emphasis is on the development of validated analysis tools. A description of the activity in the Ames Research Center's Aerothermodynamics Branch is also given. Inherent in the process is a strong synergism between ground test and real-gas computational fluid dynamics (CFD). Approaches to develop and/or enhance phenomenological models and incorporate them into computational flow-field simulation codes are discussed. These models have been partially validated with experimental data for flows where the gas temperature is raised (compressive flows). Expanding flows, where temperatures drop, however, exhibit somewhat different behavior. Experimental data for these expanding flow conditions are sparse; reliance must be made on intuition and guidance from computational chemistry to model transport processes under these conditions. Ground-based experimental studies used to provide necessary data for model development and validation are described. Included are the performance characteristics of high-enthalpy flow facilities, such as shock tubes and ballistic ranges.

  16. Risk prediction model: Statistical and artificial neural network approach

    NASA Astrophysics Data System (ADS)

    Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim

    2017-04-01

    Prediction models are increasingly gaining popularity and have been used in numerous areas of study to complement and support clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behavior, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand the suitable approach to, and the development and validation process of, risk prediction models. A qualitative review of the aims, methods, and significant main outcomes of nineteen published articles that developed risk prediction models in numerous fields was done. This paper also reviews how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From the review, some methodological recommendations for developing and validating prediction models are highlighted. According to the studies reviewed, artificial neural network approaches to developing prediction models were more accurate than statistical approaches. However, currently only limited published literature discusses which approach is more accurate for risk prediction model development.

  17. Item Development and Validity Testing for a Self- and Proxy Report: The Safe Driving Behavior Measure

    PubMed Central

    Classen, Sherrilene; Winter, Sandra M.; Velozo, Craig A.; Bédard, Michel; Lanford, Desiree N.; Brumback, Babette; Lutz, Barbara J.

    2010-01-01

    OBJECTIVE We report on item development and validity testing of a self-report older adult safe driving behaviors measure (SDBM). METHOD On the basis of theoretical frameworks (Precede–Proceed Model of Health Promotion, Haddon’s matrix, and Michon’s model), existing driving measures, and previous research and guided by measurement theory, we developed items capturing safe driving behavior. Item development was further informed by focus groups. We established face validity using peer reviewers and content validity using expert raters. RESULTS Peer review indicated acceptable face validity. Initial expert rater review yielded a scale content validity index (CVI) rating of 0.78, with 44 of 60 items rated ≥0.75. Sixteen unacceptable items (≤0.5) required major revision or deletion. The next CVI scale average was 0.84, indicating acceptable content validity. CONCLUSION The SDBM has relevance as a self-report to rate older drivers. Future pilot testing of the SDBM comparing results with on-road testing will define criterion validity. PMID:20437917

  18. Advanced Concept Modeling

    NASA Technical Reports Server (NTRS)

    Chaput, Armand; Johns, Zachary; Hodges, Todd; Selfridge, Justin; Bevirt, Joeben; Ahuja, Vivek

    2015-01-01

    Advanced Concepts Modeling software validation, analysis, and design. This was a National Institute of Aerospace contract with many components. Efforts ranged from software development and validation for structures and aerodynamics, through flight control development and aeropropulsive analysis, to UAV piloting services.

  19. Development and validation of P-MODTRAN7 and P-MCScene, 1D and 3D polarimetric radiative transfer models

    NASA Astrophysics Data System (ADS)

    Hawes, Frederick T.; Berk, Alexander; Richtsmeier, Steven C.

    2016-05-01

    A validated, polarimetric 3-dimensional simulation capability, P-MCScene, is being developed by generalizing Spectral Sciences' Monte Carlo-based synthetic scene simulation model, MCScene, to include calculation of all 4 Stokes components. P-MCScene polarimetric optical databases will be generated by a new version (MODTRAN7) of the government-standard MODTRAN radiative transfer algorithm. The conversion of MODTRAN6 to a polarimetric model is being accomplished by (1) introducing polarimetric data, by (2) vectorizing the MODTRAN radiation calculations and by (3) integrating the newly revised and validated vector discrete ordinate model VDISORT3. Early results, presented here, demonstrate a clear pathway to the long-term goal of fully validated polarimetric models.

  20. ASTP ranging system mathematical model

    NASA Technical Reports Server (NTRS)

    Ellis, M. R.; Robinson, L. H.

    1973-01-01

    A mathematical model of the VHF ranging system is presented to analyze its performance for the Apollo-Soyuz Test Project (ASTP). The system was adapted for use in the ASTP. The ranging system mathematical model is presented in block diagram form, and a brief description of the overall model is also included. A procedure for implementing the math model is presented, along with a discussion of the validation of the math model and the overall summary and conclusions of the study effort. Detailed appendices cover the five study tasks: early/late gate model development, unlock probability development, system error model development, probability of acquisition model development, and math model validation testing.

  1. Development and external validation of new ultrasound-based mathematical models for preoperative prediction of high-risk endometrial cancer.

    PubMed

    Van Holsbeke, C; Ameye, L; Testa, A C; Mascilini, F; Lindqvist, P; Fischerova, D; Frühauf, F; Fransis, S; de Jonge, E; Timmerman, D; Epstein, E

    2014-05-01

    To develop and validate strategies, using new ultrasound-based mathematical models, for the prediction of high-risk endometrial cancer and compare them with strategies using previously developed models or the use of preoperative grading only. Women with endometrial cancer were prospectively examined using two-dimensional (2D) and three-dimensional (3D) gray-scale and color Doppler ultrasound imaging. More than 25 ultrasound, demographic and histological variables were analyzed. Two logistic regression models were developed: one 'objective' model using mainly objective variables; and one 'subjective' model including subjective variables (i.e. subjective impression of myometrial and cervical invasion, preoperative grade and demographic variables). The following strategies were validated: a one-step strategy using only preoperative grading and two-step strategies using preoperative grading as the first step and one of the new models, subjective assessment or previously developed models as a second step. One-hundred and twenty-five patients were included in the development set and 211 were included in the validation set. The 'objective' model retained preoperative grade and minimal tumor-free myometrium as variables. The 'subjective' model retained preoperative grade and subjective assessment of myometrial invasion. On external validation, the performance of the new models was similar to that on the development set. Sensitivity for the two-step strategy with the 'objective' model was 78% (95% CI, 69-84%) at a cut-off of 0.50, 82% (95% CI, 74-88%) for the strategy with the 'subjective' model and 83% (95% CI, 75-88%) for that with subjective assessment. Specificity was 68% (95% CI, 58-77%), 72% (95% CI, 62-80%) and 71% (95% CI, 61-79%) respectively. The two-step strategies detected up to twice as many high-risk cases as preoperative grading only. The new models had a significantly higher sensitivity than did previously developed models, at the same specificity. Two-step strategies with 'new' ultrasound-based models predict high-risk endometrial cancers with good accuracy and do this better than do previously developed models. Copyright © 2013 ISUOG. Published by John Wiley & Sons Ltd.

  2. Validation by simulation of a clinical trial model using the standardized mean and variance criteria.

    PubMed

    Abbas, Ismail; Rovira, Joan; Casanovas, Josep

    2006-12-01

    To develop and validate a model of a clinical trial that evaluates the changes in cholesterol level as a surrogate marker for lipodystrophy in HIV subjects under alternative antiretroviral regimes, i.e., treatment with protease inhibitors vs. a combination of nevirapine and other antiretroviral drugs. Five simulation models were developed based on different assumptions about treatment variability and the pattern of cholesterol reduction over time. The last recorded cholesterol level, the difference from the baseline, the average difference from the baseline, and level evolution are the considered endpoints. Specific validation criteria, based on a standardized distance in means and variances within plus or minus 10%, were used to compare the real and the simulated data. The validity criterion was met by all models for the endpoints considered individually. However, only two models met the validity criterion when all endpoints were considered. The model based on the assumption that within-subject variability of cholesterol levels changes over time is the one that minimizes the validity criterion, with a standardized distance equal to or less than plus or minus 1%. Simulation is a useful technique for calibration, estimation, and evaluation of models, which allows us to relax the often overly restrictive assumptions regarding parameters required by analytical approaches. The validity criterion can also be used to select the preferred model for design optimization, until additional data are obtained allowing an external validation of the model.
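    Assuming the validity criterion takes the form described (means and variances of the simulated endpoint within a plus or minus 10% standardized distance of the real data), a minimal check might look like the following; the function and data are illustrative only.

```python
# Assumed form of the simulation validity criterion: relative distances of
# mean and variance between real and simulated endpoints within +/-10%.
import numpy as np

def within_tolerance(real, simulated, tol=0.10):
    real, simulated = np.asarray(real), np.asarray(simulated)
    mean_dist = abs(simulated.mean() - real.mean()) / abs(real.mean())
    var_dist = abs(simulated.var(ddof=1) - real.var(ddof=1)) / real.var(ddof=1)
    return (mean_dist <= tol and var_dist <= tol), mean_dist, var_dist

rng = np.random.default_rng(5)
real = rng.normal(195, 30, 200)    # observed cholesterol levels (mg/dL)
sim = rng.normal(198, 32, 200)     # one model's simulated levels
ok, md, vd = within_tolerance(real, sim)
print(f"valid: {ok} (mean distance {md:.3f}, variance distance {vd:.3f})")
```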

  3. Critical-Inquiry-Based-Learning: Model of Learning to Promote Critical Thinking Ability of Pre-service Teachers

    NASA Astrophysics Data System (ADS)

    Prayogi, S.; Yuanita, L.; Wasis

    2018-01-01

    This study aimed to develop the Critical-Inquiry-Based-Learning (CIBL) learning model to promote the critical thinking (CT) ability of preservice teachers. The CIBL learning model was developed to meet the criteria of validity, practicality, and effectiveness. Validation of the model involved 4 expert validators through a focus group discussion (FGD) mechanism. The CIBL learning model was declared valid to promote CT ability, with a validity level (Va) of 4.20 and reliability (r) of 90.1% (very reliable). The practicality of the model was evaluated when it was implemented with 17 preservice teachers. The CIBL learning model was declared practical, as measured by learning feasibility (LF) with very good criteria (LF score = 4.75). The effectiveness of the model was evaluated from the improvement in CT ability after its implementation. CT ability was evaluated using a scoring technique adapted from the Ennis-Weir Critical Thinking Essay Test. The average CT ability score on the pretest was -1.53 (uncritical criterion), whereas on the posttest it was 8.76 (critical criterion), with an N-gain score of 0.76 (high criterion). Based on the results of this study, it can be concluded that the developed CIBL learning model is feasible for promoting the CT ability of preservice teachers.
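    For reference, the N-gain statistic reported above is conventionally computed with the Hake normalized-gain formula, sketched below. The pre/post/maximum scores in the example are hypothetical: reproducing the paper's 0.76 would require the maximum score of its adapted rubric.

```python
# Hake's normalized gain: g = (post - pre) / (max - pre).
def n_gain(pre: float, post: float, max_score: float) -> float:
    return (post - pre) / (max_score - pre)

# Hypothetical scores; g >= 0.7 is conventionally the 'high' category.
print(n_gain(pre=2.0, post=8.0, max_score=10.0))  # -> 0.75
```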

  4. Validating archetypes for the Multiple Sclerosis Functional Composite.

    PubMed

    Braun, Michael; Brandt, Alexander Ulrich; Schulz, Stefan; Boeker, Martin

    2014-08-03

    Numerous information models for electronic health records, such as openEHR archetypes, are available. The quality of such clinical models is important to guarantee standardised semantics and to facilitate their interoperability. However, validation aspects are not yet regarded sufficiently. The objective of this report is to investigate the feasibility of archetype development and its community-based validation process, presuming that this review process is a practical way to ensure high-quality information models amending the formal reference model definitions. A standard archetype development approach was applied on a case set of three clinical tests for multiple sclerosis assessment: after an analysis of the tests, the obtained data elements were organised and structured. The appropriate archetype class was selected and the data elements were implemented in an iterative refinement process. Clinical and information modelling experts validated the models in a structured review process. Four new archetypes were developed and publicly deployed in the openEHR Clinical Knowledge Manager, an online platform provided by the openEHR Foundation. Afterwards, these four archetypes were validated by domain experts in a team review. The review was a formalised process, organised in the Clinical Knowledge Manager. Both the development and review processes turned out to be time-consuming tasks, mostly due to difficult selection processes between alternative modelling approaches. The archetype review was a straightforward team process with the goal of validating archetypes pragmatically. The quality of medical information models is crucial to guarantee standardised semantic representation in order to improve interoperability. The validation process is a practical way to better harmonise models that diverge due to necessary flexibility left open by the underlying formal reference model definitions. This case study provides evidence that both community- and tool-enabled review processes, structured in the Clinical Knowledge Manager, ensure archetype quality. It offers a pragmatic but feasible way to reduce variation in the representation of clinical information models towards a more unified and interoperable model.

  5. Validating archetypes for the Multiple Sclerosis Functional Composite

    PubMed Central

    2014-01-01

    Background Numerous information models for electronic health records, such as openEHR archetypes, are available. The quality of such clinical models is important to guarantee standardised semantics and to facilitate their interoperability. However, validation aspects are not yet regarded sufficiently. The objective of this report is to investigate the feasibility of archetype development and its community-based validation process, presuming that this review process is a practical way to ensure high-quality information models amending the formal reference model definitions. Methods A standard archetype development approach was applied on a case set of three clinical tests for multiple sclerosis assessment: after an analysis of the tests, the obtained data elements were organised and structured. The appropriate archetype class was selected and the data elements were implemented in an iterative refinement process. Clinical and information modelling experts validated the models in a structured review process. Results Four new archetypes were developed and publicly deployed in the openEHR Clinical Knowledge Manager, an online platform provided by the openEHR Foundation. Afterwards, these four archetypes were validated by domain experts in a team review. The review was a formalised process, organised in the Clinical Knowledge Manager. Both the development and review processes turned out to be time-consuming tasks, mostly due to difficult selection processes between alternative modelling approaches. The archetype review was a straightforward team process with the goal of validating archetypes pragmatically. Conclusions The quality of medical information models is crucial to guarantee standardised semantic representation in order to improve interoperability. The validation process is a practical way to better harmonise models that diverge due to necessary flexibility left open by the underlying formal reference model definitions. This case study provides evidence that both community- and tool-enabled review processes, structured in the Clinical Knowledge Manager, ensure archetype quality. It offers a pragmatic but feasible way to reduce variation in the representation of clinical information models towards a more unified and interoperable model. PMID:25087081

  6. Development of Chemistry Game Card as an Instructional Media in the Subject of Naming Chemical Compound in Grade X

    NASA Astrophysics Data System (ADS)

    Bayharti; Iswendi, I.; Arifin, M. N.

    2018-04-01

    The purpose of this research was to produce a chemistry game card as an instructional medium for the subject of naming chemical compounds and to determine the degree of validity and practicality of the instructional media produced. This research was of the Research and Development (R&D) type, producing a product. The development model used was the 4-D model, which comprises four stages: (1) define, (2) design, (3) develop, and (4) disseminate. This research was restricted to the development stage. The chemistry game card developed was validated by seven validators, and its practicality was tested with class X6 students of SMAN 5 Padang. The instrument of this research was a questionnaire consisting of a validity sheet and a practicality sheet. Data were collected by distributing the questionnaires to the validators, chemistry teachers, and students, and were analyzed using Cohen's kappa formula. Based on the data analysis, the validity of the chemistry game card was 0.87 (highly valid category) and its practicality was 0.91 (highly practical category).
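    A minimal sketch of the agreement statistic used above: Cohen's kappa for two raters' category judgments, computed here with scikit-learn (the study's exact kappa variant and rating data are not reproduced).

```python
# Cohen's kappa for inter-rater agreement on hypothetical validity ratings.
from sklearn.metrics import cohen_kappa_score

rater_a = ["valid", "valid", "highly_valid", "valid", "highly_valid", "valid"]
rater_b = ["valid", "highly_valid", "highly_valid", "valid", "highly_valid", "valid"]
print(round(cohen_kappa_score(rater_a, rater_b), 3))
```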

  7. A risk-model for hospital mortality among patients with severe sepsis or septic shock based on German national administrative claims data

    PubMed Central

    Fleischmann-Struzek, Carolin; Rüddel, Hendrik; Reinhart, Konrad; Thomas-Rüddel, Daniel O.

    2018-01-01

    Background Sepsis is a major cause of preventable deaths in hospitals. Feasible and valid methods for comparing quality of sepsis care between hospitals are needed. The aim of this study was to develop a risk-adjustment model suitable for comparing sepsis-related mortality between German hospitals. Methods We developed a risk-model using national German claims data. Since these data are available with a time-lag of 1.5 years only, the stability of the model across time was investigated. The model was derived from inpatient cases with severe sepsis or septic shock treated in 2013 using logistic regression with backward selection and generalized estimating equations to correct for clustering. It was validated among cases treated in 2015. Finally, the model development was repeated in 2015. To investigate secular changes, the risk-adjusted trajectory of mortality across the years 2010–2015 was analyzed. Results The 2013 derivation sample consisted of 113,750 cases; the 2015 validation sample consisted of 134,851 cases. The model developed in 2013 showed good validity regarding discrimination (AUC = 0.74), calibration (observed mortality in 1st and 10th risk-decile: 11%-78%), and fit (R2 = 0.16). Validity remained stable when the model was applied to 2015 (AUC = 0.74, 1st and 10th risk-decile: 10%-77%, R2 = 0.17). There was no indication of overfitting of the model. The final model developed in year 2015 contained 40 risk-factors. Between 2010 and 2015 hospital mortality in sepsis decreased from 48% to 42%. Adjusted for risk-factors the trajectory of decrease was still significant. Conclusions The risk-model shows good predictive validity and stability across time. The model is suitable to be used as an external algorithm for comparing risk-adjusted sepsis mortality among German hospitals or regions based on administrative claims data, but secular changes need to be taken into account when interpreting risk-adjusted mortality. PMID:29558486

  8. A risk-model for hospital mortality among patients with severe sepsis or septic shock based on German national administrative claims data.

    PubMed

    Schwarzkopf, Daniel; Fleischmann-Struzek, Carolin; Rüddel, Hendrik; Reinhart, Konrad; Thomas-Rüddel, Daniel O

    2018-01-01

    Sepsis is a major cause of preventable deaths in hospitals. Feasible and valid methods for comparing quality of sepsis care between hospitals are needed. The aim of this study was to develop a risk-adjustment model suitable for comparing sepsis-related mortality between German hospitals. We developed a risk-model using national German claims data. Since these data are available with a time-lag of 1.5 years only, the stability of the model across time was investigated. The model was derived from inpatient cases with severe sepsis or septic shock treated in 2013 using logistic regression with backward selection and generalized estimating equations to correct for clustering. It was validated among cases treated in 2015. Finally, the model development was repeated in 2015. To investigate secular changes, the risk-adjusted trajectory of mortality across the years 2010-2015 was analyzed. The 2013 derivation sample consisted of 113,750 cases; the 2015 validation sample consisted of 134,851 cases. The model developed in 2013 showed good validity regarding discrimination (AUC = 0.74), calibration (observed mortality in 1st and 10th risk-decile: 11%-78%), and fit (R2 = 0.16). Validity remained stable when the model was applied to 2015 (AUC = 0.74, 1st and 10th risk-decile: 10%-77%, R2 = 0.17). There was no indication of overfitting of the model. The final model developed in year 2015 contained 40 risk-factors. Between 2010 and 2015 hospital mortality in sepsis decreased from 48% to 42%. Adjusted for risk-factors the trajectory of decrease was still significant. The risk-model shows good predictive validity and stability across time. The model is suitable to be used as an external algorithm for comparing risk-adjusted sepsis mortality among German hospitals or regions based on administrative claims data, but secular changes need to be taken into account when interpreting risk-adjusted mortality.
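    The decile-based calibration check reported in both versions of this record can be sketched as follows: group cases by predicted-risk decile and compare mean predicted risk with observed mortality in the lowest and highest deciles. The data below are simulated, not the German claims data.

```python
# Decile calibration check: observed event rates in the extreme deciles of
# predicted risk should track the mean predicted risks in those deciles.
import numpy as np

rng = np.random.default_rng(11)
n = 50_000
pred = rng.beta(2, 3, n)            # predicted risk of hospital death
y = rng.binomial(1, pred)           # simulated outcomes consistent with predictions

edges = np.quantile(pred, np.linspace(0, 1, 11))
decile = np.clip(np.digitize(pred, edges[1:-1]), 0, 9)
for d in (0, 9):
    mask = decile == d
    print(f"decile {d + 1}: mean predicted {pred[mask].mean():.2f}, "
          f"observed {y[mask].mean():.2f}")
```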

  9. Prediction of functional aerobic capacity without exercise testing

    NASA Technical Reports Server (NTRS)

    Jackson, A. S.; Blair, S. N.; Mahar, M. T.; Wier, L. T.; Ross, R. M.; Stuteville, J. E.

    1990-01-01

    The purpose of this study was to develop functional aerobic capacity prediction models that do not require exercise tests (N-Ex) and to compare their accuracy with Astrand single-stage submaximal prediction methods. The data of 2,009 subjects (9.7% female) were randomly divided into validation (N = 1,543) and cross-validation (N = 466) samples. The validation sample was used to develop two N-Ex models to estimate VO2peak. Gender, age, body composition, and self-reported activity were used to develop the two N-Ex prediction models. One model estimated percent fat from skinfolds (N-Ex %fat) and the other used body mass index (N-Ex BMI) to represent body composition. The multiple correlations for the developed models were R = 0.81 (SE = 5.3 ml.kg-1.min-1) and R = 0.78 (SE = 5.6 ml.kg-1.min-1). This accuracy was confirmed when the models were applied to the cross-validation sample. The N-Ex models were more accurate than VO2peak estimates obtained from the Astrand prediction models, whose SEs ranged from 5.5-9.7 ml.kg-1.min-1. The N-Ex models were also cross-validated on 59 men on hypertensive medication and 71 men who were found to have a positive exercise ECG; the SEs of the N-Ex models ranged from 4.6-5.4 ml.kg-1.min-1 with these subjects.
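    A hedged sketch of an N-Ex-style model: linear regression of VO2peak on age, body composition, and self-reported activity, with accuracy checked on a held-out cross-validation sample. The coefficients and data are simulated, not the study's.

```python
# Non-exercise (N-Ex) style prediction: linear regression of VO2peak on
# age, percent body fat, and a self-report activity code, with the
# cross-validation standard error taken from a held-out sample.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(9)
n = 2009                                     # sample size from the abstract
age = rng.uniform(20, 70, n)
pct_fat = rng.uniform(8, 40, n)
activity = rng.integers(0, 8, n)             # self-report activity code
vo2 = 60 - 0.25 * age - 0.45 * pct_fat + 1.5 * activity + rng.normal(0, 5, n)

X = np.column_stack([age, pct_fat, activity])
X_dev, X_val, y_dev, y_val = train_test_split(X, vo2, test_size=466, random_state=0)
m = LinearRegression().fit(X_dev, y_dev)
resid = y_val - m.predict(X_val)
print("multiple R (development):", round(np.corrcoef(m.predict(X_dev), y_dev)[0, 1], 2))
print("cross-validation SE:", round(resid.std(ddof=1), 2), "ml.kg-1.min-1")
```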

  10. LIVVkit: An extensible, python-based, land ice verification and validation toolkit for ice sheet models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kennedy, Joseph H.; Bennett, Andrew R.; Evans, Katherine J.

    To address the pressing need to better understand the behavior and complex interaction of ice sheets within the global Earth system, significant development of continental-scale, dynamical ice sheet models is underway. Concurrent to the development of the Community Ice Sheet Model (CISM), the corresponding verification and validation (V&V) process is being coordinated through a new, robust, Python-based extensible software package, the Land Ice Verification and Validation toolkit (LIVVkit). Incorporated into the typical ice sheet model development cycle, it provides robust and automated numerical verification, software verification, performance validation, and physical validation analyses on a variety of platforms, from personal laptops to the largest supercomputers. LIVVkit operates on sets of regression test and reference data sets, and provides comparisons for a suite of community prioritized tests, including configuration and parameter variations, bit-for-bit evaluation, and plots of model variables to indicate where differences occur. LIVVkit also provides an easily extensible framework to incorporate and analyze results of new intercomparison projects, new observation data, and new computing platforms. LIVVkit is designed for quick adaptation to additional ice sheet models via abstraction of model specific code, functions, and configurations into an ice sheet model description bundle outside the main LIVVkit structure. Furthermore, through shareable and accessible analysis output, LIVVkit is intended to help developers build confidence in their models and enhance the credibility of ice sheet models overall.

  11. LIVVkit: An extensible, python-based, land ice verification and validation toolkit for ice sheet models

    DOE PAGES

    Kennedy, Joseph H.; Bennett, Andrew R.; Evans, Katherine J.; ...

    2017-03-23

    To address the pressing need to better understand the behavior and complex interaction of ice sheets within the global Earth system, significant development of continental-scale, dynamical ice sheet models is underway. Concurrent to the development of the Community Ice Sheet Model (CISM), the corresponding verification and validation (V&V) process is being coordinated through a new, robust, Python-based extensible software package, the Land Ice Verification and Validation toolkit (LIVVkit). Incorporated into the typical ice sheet model development cycle, it provides robust and automated numerical verification, software verification, performance validation, and physical validation analyses on a variety of platforms, from personal laptops to the largest supercomputers. LIVVkit operates on sets of regression test and reference data sets, and provides comparisons for a suite of community prioritized tests, including configuration and parameter variations, bit-for-bit evaluation, and plots of model variables to indicate where differences occur. LIVVkit also provides an easily extensible framework to incorporate and analyze results of new intercomparison projects, new observation data, and new computing platforms. LIVVkit is designed for quick adaptation to additional ice sheet models via abstraction of model specific code, functions, and configurations into an ice sheet model description bundle outside the main LIVVkit structure. Furthermore, through shareable and accessible analysis output, LIVVkit is intended to help developers build confidence in their models and enhance the credibility of ice sheet models overall.

  12. Models and applications for space weather forecasting and analysis at the Community Coordinated Modeling Center.

    NASA Astrophysics Data System (ADS)

    Kuznetsova, Maria

    The Community Coordinated Modeling Center (CCMC, http://ccmc.gsfc.nasa.gov) was established at the dawn of the new millennium as a long-term, flexible solution to the problem of transitioning progress in space environment modeling to operational space weather forecasting. CCMC hosts an expanding collection of state-of-the-art space weather models developed by the international space science community. Over the years the CCMC has acquired unique experience in preparing complex models and model chains for operational environments and in developing and maintaining custom displays and powerful web-based systems and tools ready to be used by researchers, space weather service providers, and decision makers. In support of the space weather needs of NASA users, CCMC is developing highly tailored applications and services that target specific orbits or locations in space, and is partnering with NASA mission specialists on linking CCMC space environment modeling with impacts on biological and technological systems in space. Confidence assessment of model predictions is an essential element of space environment modeling. CCMC facilitates interaction between model owners and users in defining physical parameters and metrics formats relevant to specific applications, and leads community efforts to quantify models' ability to simulate and predict space environment events. Interactive on-line model validation systems developed at CCMC make validation a seamless part of the model development cycle. The talk will showcase innovative solutions for space weather research, validation, anomaly analysis, and forecasting, and review ongoing community-wide model validation initiatives enabled by CCMC applications.

  13. The MicroArray Quality Control (MAQC)-II study of common practices for the development and validation of microarray-based predictive models

    EPA Science Inventory

    The second phase of the MicroArray Quality Control (MAQC-II) project evaluated common practices for developing and validating microarray-based models aimed at predicting toxicological and clinical endpoints. Thirty-six teams developed classifiers for 13 endpoints - some easy, som...

  14. Prospective validation of pathologic complete response models in rectal cancer: Transferability and reproducibility.

    PubMed

    van Soest, Johan; Meldolesi, Elisa; van Stiphout, Ruud; Gatta, Roberto; Damiani, Andrea; Valentini, Vincenzo; Lambin, Philippe; Dekker, Andre

    2017-09-01

    Multiple models have been developed to predict pathologic complete response (pCR) in locally advanced rectal cancer patients. Unfortunately, validation of these models normally omits the implications of cohort differences on prediction model performance. In this work, we perform a prospective validation of three pCR models, including information on whether this validation targets the transferability or the reproducibility (cohort differences) of the given models. We applied a novel methodology, the cohort differences model, to predict whether a patient belongs to the training or to the validation cohort. If the cohort differences model performs well, it would suggest a large difference in cohort characteristics, meaning we would validate the transferability of the model rather than its reproducibility. We tested our method in a prospective validation of three existing models for pCR prediction in 154 patients. Our results showed a large difference between training and validation cohorts for one of the three tested models [area under the receiver operating curve (AUC) of the cohort differences model: 0.85], signaling that the validation leans towards transferability. Two of the three models had a lower AUC at validation (0.66 and 0.58); one model showed a higher AUC in the validation cohort (0.70). We have successfully applied a new methodology in the validation of three prediction models, which allows us to indicate whether a validation targeted transferability (large differences between training and validation cohorts) or reproducibility (small cohort differences). © 2017 American Association of Physicists in Medicine.
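    The cohort differences model described above can be sketched as a membership classifier: predict whether a case comes from the training or the validation cohort, and read an AUC well above 0.5 as a signal that the validation tests transferability rather than reproducibility. The data below are simulated.

```python
# Cohort differences model: classify training- vs validation-cohort
# membership; a high cross-validated AUC indicates large cohort differences.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(21)
train_cohort = rng.normal(0.0, 1.0, size=(300, 4))   # e.g. age, stage, dose, lab value
valid_cohort = rng.normal(0.6, 1.0, size=(154, 4))   # shifted characteristics
X = np.vstack([train_cohort, valid_cohort])
membership = np.r_[np.zeros(300), np.ones(154)]      # 1 = validation cohort

prob = cross_val_predict(LogisticRegression(), X, membership,
                         cv=5, method="predict_proba")[:, 1]
print("cohort differences AUC:", round(roc_auc_score(membership, prob), 2))
```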

  15. The development and testing of a skin tear risk assessment tool.

    PubMed

    Newall, Nelly; Lewin, Gill F; Bulsara, Max K; Carville, Keryln J; Leslie, Gavin D; Roberts, Pam A

    2017-02-01

    The aim of the present study is to develop a reliable and valid skin tear risk assessment tool. The six characteristics identified in a previous case control study as constituting the best risk model for skin tear development were used to construct a risk assessment tool. The ability of the tool to predict skin tear development was then tested in a prospective study. Between August 2012 and September 2013, 1466 tertiary hospital patients were assessed at admission and followed up for 10 days to see if they developed a skin tear. The predictive validity of the tool was assessed using receiver operating characteristic (ROC) analysis. When the tool was found not to have performed as well as hoped, secondary analyses were performed to determine whether a potentially better performing risk model could be identified. The tool was found to have high sensitivity but low specificity and therefore have inadequate predictive validity. Secondary analysis of the combined data from this and the previous case control study identified an alternative better performing risk model. The tool developed and tested in this study was found to have inadequate predictive validity. The predictive validity of an alternative, more parsimonious model now needs to be tested. © 2015 Medicalhelplines.com Inc and John Wiley & Sons Ltd.

  16. Refinement, Validation and Benchmarking of a Model for E-Government Service Quality

    NASA Astrophysics Data System (ADS)

    Magoutas, Babis; Mentzas, Gregoris

    This paper presents the refinement and validation of a model for Quality of e-Government Services (QeGS). We built upon our previous work, in which a conceptualized model was identified, and focus here on the confirmatory phase of the model development process in order to arrive at a valid and reliable QeGS model. The validated model, which was benchmarked with very positive results against similar models found in the literature, can be used for measuring QeGS in a reliable and valid manner. This will form the basis for a continuous quality improvement process, unleashing the full potential of e-government services for both citizens and public administrations.

  17. Validation of urban freeway models.

    DOT National Transportation Integrated Search

    2015-01-01

    This report describes the methodology, data, conclusions, and enhanced models regarding the validation of two sets of models developed in the Strategic Highway Research Program 2 (SHRP 2) Reliability Project L03, Analytical Procedures for Determining...

  18. A Model-Based Approach to Support Validation of Medical Cyber-Physical Systems.

    PubMed

    Silva, Lenardo C; Almeida, Hyggo O; Perkusich, Angelo; Perkusich, Mirko

    2015-10-30

    Medical Cyber-Physical Systems (MCPS) are context-aware, life-critical systems with patient safety as the main concern, demanding rigorous processes for validation to guarantee user requirement compliance and specification-oriented correctness. In this article, we propose a model-based approach for early validation of MCPS, focusing on promoting reusability and productivity. It enables system developers to build MCPS formal models based on a library of patient and medical device models, and simulate the MCPS to identify undesirable behaviors at design time. Our approach has been applied to three different clinical scenarios to evaluate its reusability potential for different contexts. We have also validated our approach through an empirical evaluation with developers to assess productivity and reusability. Finally, our models have been formally verified considering functional and safety requirements and model coverage.

  20. A calibration hierarchy for risk models was defined: from utopia to empirical data.

    PubMed

    Van Calster, Ben; Nieboer, Daan; Vergouwe, Yvonne; De Cock, Bavo; Pencina, Michael J; Steyerberg, Ewout W

    2016-06-01

    Calibrated risk models are vital for valid decision support. We define four levels of calibration and describe implications for model development and external validation of predictions. We present results based on simulated data sets. A common definition of calibration is "having an event rate of R% among patients with a predicted risk of R%," which we refer to as "moderate calibration." Weaker forms of calibration only require the average predicted risk (mean calibration) or the average prediction effects (weak calibration) to be correct. "Strong calibration" requires that the event rate equals the predicted risk for every covariate pattern. This implies that the model is fully correct for the validation setting. We argue that this is unrealistic: the model type may be incorrect, the linear predictor is only asymptotically unbiased, and all nonlinear and interaction effects would have to be correctly modeled. In addition, we prove that moderate calibration guarantees nonharmful decision making. Finally, results indicate that a flexible assessment of calibration in small validation data sets is problematic. Strong calibration is desirable for individualized decision support but unrealistic and counterproductive by stimulating the development of overly complex models. Model development and external validation should focus on moderate calibration. Copyright © 2016 Elsevier Inc. All rights reserved.
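
    The weaker calibration levels can be estimated from outcomes and predicted risks alone. A minimal sketch on simulated data, using statsmodels (illustrative, not the authors' code): the calibration slope (weak calibration) comes from regressing outcomes on the logit of the predicted risks, and calibration-in-the-large (mean calibration) from an intercept-only model with that logit as an offset.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    p_pred = rng.uniform(0.05, 0.95, 500)   # predicted risks from some model
    y = rng.binomial(1, p_pred)             # simulated outcomes
    logit_p = np.log(p_pred / (1 - p_pred))

    # Weak calibration: slope near 1 (and intercept near 0) means the average
    # prediction effects are correct.
    weak = sm.GLM(y, sm.add_constant(logit_p), family=sm.families.Binomial()).fit()
    print("calibration intercept and slope:", weak.params)

    # Mean calibration (calibration-in-the-large): intercept with slope fixed at 1.
    mean_cal = sm.GLM(y, np.ones((len(y), 1)), offset=logit_p,
                      family=sm.families.Binomial()).fit()
    print("calibration-in-the-large:", mean_cal.params[0])
    ```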

  1. Combined Use of Tissue Morphology, Neural Network Analysis of Chromatin Texture and Clinical Variables to Predict Prostate Cancer Aggressiveness from Biopsy Water

    DTIC Science & Technology

    2000-10-01

    Purpose: To combine clinical, serum, pathologic and computer-derived information into an artificial neural network to develop/validate a model to ... Development of an artificial neural network (year 02). Prospective validation of this model (projected year 03). All models will be tested and ...

  2. A process improvement model for software verification and validation

    NASA Technical Reports Server (NTRS)

    Callahan, John; Sabolish, George

    1994-01-01

    We describe ongoing work at the NASA Independent Verification and Validation (IV&V) Facility to establish a process improvement model for software verification and validation (V&V) organizations. This model, similar to those used by some software development organizations, uses measurement-based techniques to identify problem areas and introduce incremental improvements. We seek to replicate this model for organizations involved in V&V on large-scale software development projects such as EOS and space station. At the IV&V Facility, a university research group and V&V contractors are working together to collect metrics across projects in order to determine the effectiveness of V&V and improve its application. Since V&V processes are intimately tied to development processes, this paper also examines the repercussions for development organizations in large-scale efforts.

  4. Challenges of NDE Simulation Tool

    NASA Technical Reports Server (NTRS)

    Leckey, Cara A. C.; Juarez, Peter D.; Seebo, Jeffrey P.; Frank, Ashley L.

    2015-01-01

    Realistic nondestructive evaluation (NDE) simulation tools enable inspection optimization and predictions of inspectability for new aerospace materials and designs. NDE simulation tools may someday aid in the design and certification of advanced aerospace components, potentially shortening the time from material development to implementation by industry and government. Furthermore, modeling and simulation are expected to play a significant future role in validating the capabilities and limitations of guided-wave-based structural health monitoring (SHM) systems. The current state-of-the-art in ultrasonic NDE/SHM simulation cannot rapidly simulate damage detection techniques for large scale, complex geometry composite components/vehicles with realistic damage types. This paper discusses some of the challenges of model development and validation for composites, such as the level of realism and scale of simulation needed for NASA's applications. Ongoing model development work is described along with examples of model validation studies. The paper will also discuss examples of the use of simulation tools at NASA to develop new damage characterization methods, and associated challenges of validating those methods.

  5. Validation of the thermal challenge problem using Bayesian Belief Networks.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McFarland, John; Swiler, Laura Painton

    The thermal challenge problem has been developed at Sandia National Laboratories as a testbed for demonstrating various types of validation approaches and prediction methods. This report discusses one particular methodology to assess the validity of a computational model given experimental data. This methodology is based on Bayesian Belief Networks (BBNs) and can incorporate uncertainty in experimental measurements, in physical quantities, and model uncertainties. The approach uses the prior and posterior distributions of model output to compute a validation metric based on Bayesian hypothesis testing (a Bayes' factor). This report discusses various aspects of the BBN, specifically in the context of the thermal challenge problem. A BBN is developed for a given set of experimental data in a particular experimental configuration. The development of the BBN and the method for "solving" the BBN to develop the posterior distribution of model output through Markov chain Monte Carlo sampling is discussed in detail. The use of the BBN to compute a Bayes' factor is demonstrated.

  6. Construct validity of the Moral Development Scale for Professionals (MDSP).

    PubMed

    Söderhamn, Olle; Bjørnestad, John Olav; Skisland, Anne; Cliffordson, Christina

    2011-01-01

    The aim of this study was to investigate the construct validity of the Moral Development Scale for Professionals (MDSP) using structural equation modeling. The instrument is a 12-item self-report instrument, developed in the Scandinavian cultural context and based on Kohlberg's theory. A hypothesized simplex structure model underlying the MDSP was tested through structural equation modeling. Validity was also tested as the proportion of respondents older than 20 years that reached the highest moral level, which according to the theory should be small. A convenience sample of 339 nursing students with a mean age of 25.3 years participated. Results confirmed the simplex model structure, indicating that MDSP reflects a moral construct empirically organized from low to high. A minority of respondents >20 years of age (13.5%) scored more than 80% on the highest moral level. The findings support the construct validity of the MDSP and the stages and levels in Kohlberg's theory.

  7. Predictive QSAR modeling workflow, model applicability domains, and virtual screening.

    PubMed

    Tropsha, Alexander; Golbraikh, Alexander

    2007-01-01

    Quantitative Structure Activity Relationship (QSAR) modeling has traditionally been applied as an evaluative approach, i.e., with the focus on developing retrospective and explanatory models of existing data. Model extrapolation was considered, if at all, only in a hypothetical sense, in terms of potential modifications of known biologically active chemicals that could improve compounds' activity. This critical review re-examines the strategy and the output of modern QSAR modeling approaches. We provide examples and arguments suggesting that current methodologies may afford robust and validated models capable of accurate prediction of compound properties for molecules not included in the training sets. We discuss a data-analytical modeling workflow developed in our laboratory that incorporates modules for combinatorial QSAR model development (i.e., using all possible binary combinations of available descriptor sets and statistical data modeling techniques), rigorous model validation, and virtual screening of available chemical databases to identify novel biologically active compounds. Our approach places particular emphasis on model validation as well as the need to define model applicability domains in the chemistry space. We present examples of studies where the application of rigorously validated QSAR models to virtual screening identified computational hits that were confirmed by subsequent experimental investigations. The emerging focus of QSAR modeling on target property forecasting brings it forward as a predictive, as opposed to evaluative, modeling approach.

  8. Developing and Validating a Survival Prediction Model for NSCLC Patients Through Distributed Learning Across 3 Countries.

    PubMed

    Jochems, Arthur; Deist, Timo M; El Naqa, Issam; Kessler, Marc; Mayo, Chuck; Reeves, Jackson; Jolly, Shruti; Matuszak, Martha; Ten Haken, Randall; van Soest, Johan; Oberije, Cary; Faivre-Finn, Corinne; Price, Gareth; de Ruysscher, Dirk; Lambin, Philippe; Dekker, Andre

    2017-10-01

    Tools for survival prediction for non-small cell lung cancer (NSCLC) patients treated with chemoradiation or radiation therapy are of limited quality. In this work, we developed a predictive model of survival at 2 years. The model is based on a large volume of historical patient data and serves as a proof of concept to demonstrate the distributed learning approach. Clinical data from 698 lung cancer patients, treated with curative intent with chemoradiation or radiation therapy alone, were collected and stored at 2 different cancer institutes (559 patients at the Maastro clinic [the Netherlands] and 139 at the University of Michigan [United States]). The model was further validated on 196 patients originating from The Christie (United Kingdom). A Bayesian network model was adapted for distributed learning (the animation can be viewed at https://www.youtube.com/watch?v=ZDJFOxpwqEA). Two-year posttreatment survival was chosen as the endpoint. The Maastro clinic cohort data are publicly available at https://www.cancerdata.org/publication/developing-and-validating-survival-prediction-model-nsclc-patients-through-distributed, and the developed models can be found at www.predictcancer.org. Variables included in the final model were T and N category, age, performance status, and total tumor dose. The model has an area under the curve (AUC) of 0.66 on the external validation set and an AUC of 0.62 on a 5-fold cross validation. A model based on the T and N category performed with an AUC of 0.47 on the validation set, significantly worse than our model (P<.001). Learning the model in a centralized or distributed fashion yields a minor difference in the probabilities of the conditional probability tables (0.6%); the discriminative performance of the models on the validation set is similar (P=.26). Distributed learning from federated databases allows learning of predictive models on data originating from multiple institutions while avoiding many of the data-sharing barriers. We believe that distributed learning is the future of sharing data in health care. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.

  9. Clinical Prediction Models for Patients With Nontraumatic Knee Pain in Primary Care: A Systematic Review and Internal Validation Study.

    PubMed

    Panken, Guus; Verhagen, Arianne P; Terwee, Caroline B; Heymans, Martijn W

    2017-08-01

    Study Design Systematic review and validation study. Background Many prognostic models of knee pain outcomes have been developed for use in primary care. Variability among published studies with regard to patient population, outcome measures, and relevant prognostic factors hampers the generalizability and implementation of these models. Objectives To summarize existing prognostic models in patients with knee pain in a primary care setting and to develop and internally validate new summary prognostic models. Methods After a sensitive search strategy, 2 reviewers independently selected prognostic models for patients with nontraumatic knee pain and assessed the methodological quality of the included studies. All predictors of the included studies were evaluated, summarized, and classified. The predictors assessed in multiple studies of sufficient quality are presented in this review. Using data from the Musculoskeletal System Study (BAS) cohort of patients with a new episode of knee pain, recruited consecutively by Dutch general medical practitioners (n = 372), we used predictors with a strong level of evidence to develop new prognostic models for each outcome measure and internally validated these models. Results Sixteen studies were eligible for inclusion. We considered 11 studies to be of sufficient quality. None of these studies validated their models. Five predictors with strong evidence were related to function and 6 to recovery, and were used to compose 2 prognostic models for patients with knee pain at 1 year. Running these new models in another data set showed explained variances (R2) of 0.36 (function) and 0.33 (recovery). The area under the curve of the recovery model was 0.79. After internal validation, the adjusted R2 values of the models were 0.30 (function) and 0.20 (recovery), and the area under the curve was 0.73. Conclusion We developed 2 valid prognostic models for function and recovery for patients with nontraumatic knee pain, based on predictors with strong evidence. A longer duration of complaints predicted poorer function but did not adequately predict chance of recovery. Level of Evidence Prognosis, levels 1a and 1b. J Orthop Sports Phys Ther 2017;47(8):518-529. Epub 16 Jun 2017. doi:10.2519/jospt.2017.7142.

  10. Implications of Simulation Conceptual Model Development for Simulation Management and Uncertainty Assessment

    NASA Technical Reports Server (NTRS)

    Pace, Dale K.

    2000-01-01

    A simulation conceptual model is a simulation developer's way of translating modeling requirements (i.e., what is to be represented by the simulation or its modification) into a detailed design framework (i.e., how it is to be done), from which the software, hardware, networks (in the case of distributed simulation), and systems/equipment that will make up the simulation can be built or modified. A conceptual model is the collection of information which describes a simulation developer's concept about the simulation and its pieces. That information consists of assumptions, algorithms, characteristics, relationships, and data. Taken together, these describe how the simulation developer understands what is to be represented by the simulation (entities, actions, tasks, processes, interactions, etc.) and how that representation will satisfy the requirements to which the simulation responds. Thus the conceptual model is the basis for judgment about simulation fidelity and validity for any condition that is not specifically tested. The more perspicuous and precise the conceptual model, the more likely it is that the simulation development will both fully satisfy requirements and allow demonstration that the requirements are satisfied (i.e., validation). Methods used in simulation conceptual model development have significant implications for simulation management and for assessment of simulation uncertainty. This paper suggests how to develop and document a simulation conceptual model so that the simulation fidelity and validity can be most effectively determined. These ideas for conceptual model development apply to all simulation varieties. The paper relates these ideas to uncertainty assessments as they relate to simulation fidelity and validity. The paper also explores implications for simulation management from conceptual model development methods, especially relative to reuse of simulation components.

  11. Acute Kidney Injury Risk Prediction in Patients Undergoing Coronary Angiography in a National Veterans Health Administration Cohort With External Validation.

    PubMed

    Brown, Jeremiah R; MacKenzie, Todd A; Maddox, Thomas M; Fly, James; Tsai, Thomas T; Plomondon, Mary E; Nielson, Christopher D; Siew, Edward D; Resnic, Frederic S; Baker, Clifton R; Rumsfeld, John S; Matheny, Michael E

    2015-12-11

    Acute kidney injury (AKI) occurs frequently after cardiac catheterization and percutaneous coronary intervention. Although a clinical risk model exists for percutaneous coronary intervention, no models exist for both procedures, nor do existing models account for risk factors prior to the index admission. We aimed to develop such a model for use in prospective automated surveillance programs in the Veterans Health Administration. We collected data on all patients undergoing cardiac catheterization or percutaneous coronary intervention in the Veterans Health Administration from January 01, 2009 to September 30, 2013, excluding patients with chronic dialysis, end-stage renal disease, renal transplant, and missing pre- and postprocedural creatinine measurement. We used 4 AKI definitions in model development and included risk factors from up to 1 year prior to the procedure and at presentation. We developed our prediction models for postprocedural AKI using the least absolute shrinkage and selection operator (LASSO) and internally validated using bootstrapping. We developed models using 115 633 angiogram procedures and externally validated using 27 905 procedures from a New England cohort. Models had cross-validated C-statistics of 0.74 (95% CI: 0.74-0.75) for AKI, 0.83 (95% CI: 0.82-0.84) for AKIN2, 0.74 (95% CI: 0.74-0.75) for contrast-induced nephropathy, and 0.89 (95% CI: 0.87-0.90) for dialysis. We developed a robust, externally validated clinical prediction model for AKI following cardiac catheterization or percutaneous coronary intervention to automatically identify high-risk patients before and immediately after a procedure in the Veterans Health Administration. Work is ongoing to incorporate these models into routine clinical practice. © 2015 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
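
    The general recipe (L1-penalized, LASSO-style selection followed by bootstrap internal validation) can be sketched briefly. The code below uses synthetic data and a simplified bootstrap; it is illustrative, not the VA implementation.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(42)
    X = rng.normal(size=(2000, 20))                    # candidate risk factors
    logit = X[:, 0] + 0.5 * X[:, 1]                    # two truly predictive factors
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))  # simulated outcomes

    # L1-penalized logistic regression stands in for the LASSO selection step.
    lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
    print("retained predictors:", np.flatnonzero(lasso.coef_))

    # Simplified bootstrap internal validation: refit on resamples and evaluate
    # each refit on the original data, then average.
    aucs = []
    for _ in range(100):
        idx = rng.integers(0, len(y), len(y))
        m = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
        m.fit(X[idx], y[idx])
        aucs.append(roc_auc_score(y, m.predict_proba(X)[:, 1]))
    print(f"bootstrap AUC: {np.mean(aucs):.3f}")
    ```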

  12. On various metrics used for validation of predictive QSAR models with applications in virtual screening and focused library design.

    PubMed

    Roy, Kunal; Mitra, Indrani

    2011-07-01

    Quantitative structure-activity relationships (QSARs) have important applications in drug discovery research, environmental fate modeling, property prediction, etc. Validation has been recognized as a very important step for QSAR model development. As one of the important objectives of QSAR modeling is to predict activity/property/toxicity of new chemicals falling within the domain of applicability of the developed models and QSARs are being used for regulatory decisions, checking reliability of the models and confidence of their predictions is a very important aspect, which can be judged during the validation process. One prime application of a statistically significant QSAR model is virtual screening for molecules with improved potency based on the pharmacophoric features and the descriptors appearing in the QSAR model. Validated QSAR models may also be utilized for design of focused libraries which may be subsequently screened for the selection of hits. The present review focuses on various metrics used for validation of predictive QSAR models together with an overview of the application of QSAR models in the fields of virtual screening and focused library design for diverse series of compounds with citation of some recent examples.

  13. Development of Mole Concept Module Based on Structured Inquiry with Interconnection of Macro, Submicro, and Symbolic Representation for Grade X of Senior High School

    NASA Astrophysics Data System (ADS)

    Sagita, R.; Azra, F.; Azhar, M.

    2018-04-01

    This research produced a mole concept module based on structured inquiry with interconnection of macro, submicro, and symbolic representations, and determined the validity and practicality of the module. The research type was Research and Development (R&D). The development model was the 4-D model, which consists of four steps: define, design, develop, and disseminate. The research was limited to the develop step. The research instrument was a questionnaire consisting of validity and practicality sheets. The module was validated by 5 validators. The practicality of the module was tested by 2 chemistry teachers and 28 students of grade XI MIA 5 at SMAN 4 of Padang. Validity and practicality data were analysed using Cohen's kappa formula. The average kappa moment from the 5 validators was 0.95, in the highest validity category. The average kappa moments from the teachers and students were 0.89 and 0.91, respectively, in the high practicality category. The results showed that the mole concept module based on structured inquiry with interconnection of macro, submicro, and symbolic representation was valid and practical for use in chemistry learning.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burge, S.W.

    Erosion has been identified as one of the significant design issues in fluid beds. A cooperative R&D venture of industry, research, and government organizations was recently formed to meet the industry need for a better understanding of erosion in fluid beds. Research focused on bed hydrodynamics, which are considered to be the primary erosion mechanism. As part of this work, ANL developed an analytical model (FLUFIX) for bed hydrodynamics. Partial validation was performed using data from experiments sponsored by the research consortium. Development of a three-dimensional fluid bed hydrodynamic model was part of Asea-Babcock's in-kind contribution to the R&D venture. This model, FORCE2, was developed by Babcock & Wilcox's Research and Development Division and was based on an existing B&W program and on the gas-solids modeling technology developed by ANL and others. FORCE2 contains many of the features needed to model plant-size beds and therefore can be used along with the erosion technology to assess metal wastage in industrial equipment. As part of the development efforts, FORCE2 was partially validated using ANL's two-dimensional model, FLUFIX, and experimental data. Time constraints as well as the lack of good hydrodynamic data, particularly at the plant scale, prohibited a complete validation of FORCE2. This report describes this initial validation of FORCE2.

  15. Prediction models for successful external cephalic version: a systematic review.

    PubMed

    Velzel, Joost; de Hundt, Marcella; Mulder, Frederique M; Molkenboer, Jan F M; Van der Post, Joris A M; Mol, Ben W; Kok, Marjolein

    2015-12-01

    To provide an overview of existing prediction models for successful ECV, and to assess their quality, development and performance. We searched MEDLINE, EMBASE and the Cochrane Library to identify all articles reporting on prediction models for successful ECV published from inception to January 2015. We extracted information on study design, sample size, model-building strategies and validation. We evaluated the phases of model development and summarized their performance in terms of discrimination, calibration and clinical usefulness. We collected different predictor variables together with their defined significance, in order to identify important predictor variables for successful ECV. We identified eight articles reporting on seven prediction models. All models were subjected to internal validation. Only one model was also validated in an external cohort. Two prediction models had a low overall risk of bias, of which only one showed promising predictive performance at internal validation. This model also completed the phase of external validation. For none of the models was their impact on clinical practice evaluated. The most important predictor variables for successful ECV described in the selected articles were parity, placental location, breech engagement and the fetal head being palpable. One model was assessed for discrimination and calibration in both internal (AUC 0.71) and external validation (AUC 0.64), while two other models were assessed with discrimination and calibration, respectively. We found one prediction model for breech presentation that was validated in an external cohort and had acceptable predictive performance. This model should be used to counsel women considering ECV. Copyright © 2015. Published by Elsevier Ireland Ltd.

  16. Theoretical relationship between vibration transmissibility and driving-point response functions of the human body.

    PubMed

    Dong, Ren G; Welcome, Daniel E; McDowell, Thomas W; Wu, John Z

    2013-11-25

    The relationship between the vibration transmissibility and driving-point response functions (DPRFs) of the human body is important for understanding vibration exposures of the system and for developing valid models. This study identified their theoretical relationship and demonstrated that the sum of the DPRFs can be expressed as a linear combination of the transmissibility functions of the individual mass elements distributed throughout the system. The relationship is verified using several human vibration models. This study also clarified the requirements for reliably quantifying transmissibility values used as references for calibrating the system models. As an example application, this study used the developed theory to perform a preliminary analysis of the method for calibrating models using both vibration transmissibility and DPRFs. The results of the analysis show that the combined method can theoretically result in a unique and valid solution of the model parameters, at least for linear systems. However, the validation of the method itself does not guarantee the validation of the calibrated model, because the validation of the calibration also depends on the model structure and the reliability and appropriate representation of the reference functions. The basic theory developed in this study is also applicable to the vibration analyses of other structures.
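
    The identity at the heart of the result can be written compactly. The notation below is a hedged paraphrase (not necessarily the paper's exact formulation), with F the driving force, A_0 the input acceleration, and A_i the acceleration of mass element m_i:

    ```latex
    % Driving-point apparent mass as a mass-weighted sum of element
    % transmissibilities (hypothetical notation):
    M(\omega) = \frac{F(\omega)}{A_0(\omega)} = \sum_i m_i \, T_i(\omega),
    \qquad T_i(\omega) = \frac{A_i(\omega)}{A_0(\omega)} .
    ```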

  17. Applying the Mixed Methods Instrument Development and Construct Validation Process: the Transformative Experience Questionnaire

    ERIC Educational Resources Information Center

    Koskey, Kristin L. K.; Sondergeld, Toni A.; Stewart, Victoria C.; Pugh, Kevin J.

    2018-01-01

    Onwuegbuzie and colleagues proposed the Instrument Development and Construct Validation (IDCV) process as a mixed methods framework for creating and validating measures. Examples applying IDCV are lacking. We provide an illustrative case integrating the Rasch model and cognitive interviews applied to the development of the Transformative…

  18. Friendship Quality Scale: Conceptualization, Development and Validation

    ERIC Educational Resources Information Center

    Thien, Lei Mee; Razak, Nordin Abd; Jamil, Hazri

    2012-01-01

    The purpose of this study is twofold: (1) to initialize a new conceptualization of a positive-feature-based Friendship Quality (FQUA) scale on the basis of four dimensions: Closeness, Help, Acceptance, and Safety; and (2) to develop and validate the FQUA scale in the form of a reflective measurement model. The scale development and validation procedures…

  19. Development and validation of a new population-based simulation model of osteoarthritis in New Zealand.

    PubMed

    Wilson, R; Abbott, J H

    2018-04-01

    To describe the construction and preliminary validation of a new population-based microsimulation model developed to analyse the health and economic burden and cost-effectiveness of treatments for knee osteoarthritis (OA) in New Zealand (NZ). We developed the New Zealand Management of Osteoarthritis (NZ-MOA) model, a discrete-time state-transition microsimulation model of the natural history of radiographic knee OA. In this article, we report on the model structure, derivation of input data, validation of baseline model parameters against external data sources, and validation of model outputs by comparison of the predicted population health loss with previous estimates. The NZ-MOA model simulates both the structural progression of radiographic knee OA and the stochastic development of multiple disease symptoms. Input parameters were sourced from NZ population-based data where possible, and from international sources where NZ-specific data were not available. The predicted distributions of structural OA severity and health utility detriments associated with OA were externally validated against other sources of evidence, and uncertainty resulting from key input parameters was quantified. The resulting lifetime and current population health-loss burden was consistent with estimates of previous studies. The new NZ-MOA model provides reliable estimates of the health loss associated with knee OA in the NZ population. The model structure is suitable for analysis of the effects of a range of potential treatments, and will be used in future work to evaluate the cost-effectiveness of recommended interventions within the NZ healthcare system. Copyright © 2018 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.

  20. Development and Validation of Methodology to Model Flow in Ventilation Systems Commonly Found in Nuclear Facilities. Phase I

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strons, Philip; Bailey, James L.; Davis, John

    2016-03-01

    In this work, we apply CFD to model airflow and particulate transport. The modeling results are then compared to field validation studies to both inform and validate the modeling assumptions. Based on the results of field tests, modeling assumptions and boundary conditions are refined and the process is repeated until the results are found to be reliable with a high level of confidence.

  1. Validation of tsunami inundation model TUNA-RP using OAR-PMEL-135 benchmark problem set

    NASA Astrophysics Data System (ADS)

    Koh, H. L.; Teh, S. Y.; Tan, W. K.; Kh'ng, X. Y.

    2017-05-01

    A standard set of benchmark problems, known as OAR-PMEL-135, is developed by the US National Tsunami Hazard Mitigation Program for tsunami inundation model validation. Any tsunami inundation model must be tested for its accuracy and capability using this standard set of benchmark problems before it can be gainfully used for inundation simulation. The authors have previously developed an in-house tsunami inundation model known as TUNA-RP. This inundation model solves the two-dimensional nonlinear shallow water equations coupled with a wet-dry moving boundary algorithm. This paper presents the validation of TUNA-RP against the solutions provided in the OAR-PMEL-135 benchmark problem set. This benchmark validation testing shows that TUNA-RP can indeed perform inundation simulation with accuracy consistent with that in the tested benchmark problem set.
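
    For reference, a standard conservative form of the two-dimensional nonlinear shallow water equations solved by such models is given below (friction and Coriolis terms omitted; TUNA-RP's exact formulation may differ), with water depth h, depth-averaged velocities (u, v), gravity g, and bed elevation z_b:

    ```latex
    \begin{aligned}
    &\partial_t h + \partial_x (hu) + \partial_y (hv) = 0,\\
    &\partial_t (hu) + \partial_x\big(hu^2 + \tfrac{1}{2} g h^2\big) + \partial_y (huv) = -\,g h\, \partial_x z_b,\\
    &\partial_t (hv) + \partial_x (huv) + \partial_y\big(hv^2 + \tfrac{1}{2} g h^2\big) = -\,g h\, \partial_y z_b.
    \end{aligned}
    ```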

  2. Towards practical application of sensors for monitoring animal health; design and validation of a model to detect ketosis.

    PubMed

    Steensels, Machteld; Maltz, Ephraim; Bahr, Claudia; Berckmans, Daniel; Antler, Aharon; Halachmi, Ilan

    2017-05-01

    The objective of this study was to design and validate a mathematical model to detect post-calving ketosis. The validation was conducted on four commercial dairy farms in Israel, on a total of 706 multiparous Holstein dairy cows: 203 cows clinically diagnosed with ketosis and 503 healthy cows. A binary logistic regression model was developed, in which the dependent variable is categorical (healthy/diseased) and a set of explanatory variables were measured with existing commercial sensors: rumination duration, activity and milk yield of each individual cow. In a first (within-farm) validation step, the model was calibrated and validated on the database of each farm separately: two-thirds of the sick cows and an equal number of healthy cows were randomly selected for model validation, and the remaining one-third of the cows, which did not participate in the model validation, were used for model calibration. To overcome the random selection effect, this procedure was repeated 100 times. In a second (between-farm) validation step, the model was calibrated on one farm and validated on another farm. Within-farm accuracy, ranging from 74 to 79%, was higher than between-farm accuracy, ranging from 49 to 72%, on all farms. The within-farm sensitivities ranged from 78 to 90%, and specificities ranged from 71 to 74%. The between-farm sensitivities ranged from 65 to 95%. The developed model could be improved in future research by adding other explanatory variables or by exploring other models to achieve greater sensitivity and specificity.
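
    The model form itself is compact; a minimal sketch with hypothetical sensor values (not the authors' data or coefficients):

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical per-cow sensor summaries around calving:
    # columns = rumination duration (min/day), activity (arbitrary units),
    # milk yield (kg/day).
    X = np.array([[480, 28, 38],
                  [390, 22, 31],
                  [510, 30, 41],
                  [350, 19, 27],
                  [470, 27, 36],
                  [330, 18, 25]], dtype=float)
    y = np.array([0, 1, 0, 1, 0, 1])  # 1 = clinically diagnosed with ketosis

    clf = LogisticRegression().fit(X, y)
    new_cow = np.array([[400.0, 21.0, 30.0]])
    print("P(ketosis):", clf.predict_proba(new_cow)[0, 1])
    ```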

  3. External validation of a prediction model for surgical site infection after thoracolumbar spine surgery in a Western European cohort.

    PubMed

    Janssen, Daniël M C; van Kuijk, Sander M J; d'Aumerie, Boudewijn B; Willems, Paul C

    2018-05-16

    A prediction model for surgical site infection (SSI) after spine surgery was developed in 2014 by Lee et al. This model was developed to compute an individual estimate of the probability of SSI after spine surgery based on the patient's comorbidity profile and the invasiveness of surgery. Before any prediction model can be validly implemented in daily medical practice, it should be externally validated to assess how it performs in patients sampled independently from the derivation cohort. We included 898 consecutive patients who underwent instrumented thoracolumbar spine surgery. Overall performance was quantified using Nagelkerke's R2 statistic, discriminative ability was quantified as the area under the receiver operating characteristic curve (AUC), and prediction accuracy was judged from the slope of the calibration plot. Sixty patients developed an SSI. The overall performance of the prediction model in our population was poor: Nagelkerke's R2 was 0.01. The AUC was 0.61 (95% confidence interval (CI) 0.54-0.68). The estimated slope of the calibration plot was 0.52. The previously published prediction model showed poor performance in our academic external validation cohort. To predict SSI after instrumented thoracolumbar spine surgery in the present population, a better-fitting prediction model should be developed.
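
    All three reported statistics can be computed from outcomes and predicted probabilities alone. A sketch on simulated data follows (hypothetical risks, not the study's data); Nagelkerke's R2 and the AUC are shown, and the calibration slope is obtained by regressing outcomes on the logit of the predictions.

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    def nagelkerke_r2(y, p):
        """Nagelkerke's R2 from binary outcomes y and predicted probabilities p."""
        n = len(y)
        p = np.clip(p, 1e-12, 1 - 1e-12)
        ll_model = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
        p0 = np.mean(y)  # null model predicts the overall event rate
        ll_null = n * (p0 * np.log(p0) + (1 - p0) * np.log(1 - p0))
        cox_snell = 1.0 - np.exp(2.0 * (ll_null - ll_model) / n)
        return cox_snell / (1.0 - np.exp(2.0 * ll_null / n))

    rng = np.random.default_rng(1)
    p_pred = rng.uniform(0.01, 0.30, 898)  # hypothetical predicted SSI risks
    y = rng.binomial(1, 0.07, 898)         # outcomes unrelated to the predictions

    print("Nagelkerke R2:", nagelkerke_r2(y, p_pred))  # near 0 for a poor model
    print("AUC:", roc_auc_score(y, p_pred))            # near 0.5 for a poor model
    ```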

  4. X-56A MUTT: Aeroservoelastic Modeling

    NASA Technical Reports Server (NTRS)

    Ouellette, Jeffrey A.

    2015-01-01

    For the NASA X-56A program, Armstrong Flight Research Center has been developing a set of linear state-space models that integrate the flight dynamics and structural dynamics. These high-order models are needed for control design, control evaluation, and test input design. The current focus has been on developing stiff-wing models to validate the current modeling approach. Extending the modeling approach to the flexible wings requires only a change in the structural model. Individual subsystem models (actuators, inertial properties, etc.) have been validated by component-level ground tests. Closed-loop simulation of maneuvers designed to validate the flight dynamics of these models correlates very well with flight test data. The open-loop structural dynamics are also shown to correlate well with the flight test data.

  5. Development and Validation of a Teacher Success Questionnaire Using the Rasch Model

    ERIC Educational Resources Information Center

    Tabatabaee-Yazdi, Mona; Motallebzadeh, Khalil; Ashraf, Hamid; Baghaei, Purya

    2018-01-01

    Increased attention to teacher accountability in recent times has led policy makers and teachers to take significant care in evaluating teachers' success. To this aim, a 40-item Teacher Success questionnaire was developed and validated through application of the Rasch model. The Rasch model is used to decide whether the scores of an instrument…

  6. The PKRC's Value as a Professional Development Model Validated

    ERIC Educational Resources Information Center

    Larson, Dale

    2013-01-01

    After a brief review of the 4-H professional development standards, a new model for determining the value of continuing professional development is introduced and applied to the 4-H standards. The validity of the 4-H standards is affirmed. 4-H Extension professionals are encouraged to celebrate the strength of their standards and to engage the…

  7. A Validity Agenda for Growth Models: One Size Doesn't Fit All!

    ERIC Educational Resources Information Center

    Patelis, Thanos

    2012-01-01

    This is a keynote presentation given at AERA on developing a validity agenda for growth models in a large-scale (e.g., state) setting. The emphasis of this presentation was to indicate that growth models, and the validity agenda designed to provide evidence supporting the claims to be made, need to be personalized to meet the local or…

  8. Independent data validation of an in vitro method for ...

    EPA Pesticide Factsheets

    In vitro bioaccessibility assays (IVBA) estimate arsenic (As) relative bioavailability (RBA) in contaminated soils to improve the accuracy of site-specific human exposure assessments and risk calculations. For an IVBA assay to gain acceptance for use in risk assessment, it must be shown to reliably predict in vivo RBA that is determined in an established animal model. Previous studies correlating soil As IVBA with RBA have been limited by the use of few soil types as the source of As. Furthermore, the predictive value of As IVBA assays has not been validated using an independent set of As-contaminated soils. Therefore, the current study was undertaken to develop a robust linear model to predict As RBA in mice using an IVBA assay and to independently validate the predictive capability of this assay using a unique set of As-contaminated soils. Thirty-six As-contaminated soils varying in soil type, As contaminant source, and As concentration were included in this study, with 27 soils used for initial model development and nine soils used for independent model validation. The initial model reliably predicted As RBA values in the independent data set, with a mean As RBA prediction error of 5.3% (range 2.4 to 8.4%). Following validation, all 36 soils were used for final model development, resulting in a linear model with the equation: RBA = 0.59 * IVBA + 9.8 and R2 of 0.78. The in vivo-in vitro correlation and independent data validation presented here provide

  9. Validation of Model Forecasts of the Ambient Solar Wind

    NASA Technical Reports Server (NTRS)

    Macneice, P. J.; Hesse, M.; Kuznetsova, M. M.; Rastaetter, L.; Taktakishvili, A.

    2009-01-01

    Independent and automated validation is a vital step in the progression of models from the research community into operational forecasting use. In this paper we describe a program in development at the CCMC to provide just such a comprehensive validation for models of the ambient solar wind in the inner heliosphere. We have built upon previous efforts published in the community, sharpened their definitions, and completed a baseline study. We also provide first results from this program: the comparative performance of the MHD models available at the CCMC against that of the Wang-Sheeley-Arge (WSA) model. An important goal of this effort is to provide a consistent validation to all available models. Clearly exposing the relative strengths and weaknesses of the different models will enable forecasters to craft more reliable ensemble forecasting strategies. Models of the ambient solar wind are developing rapidly as a result of improvements in data supply, numerical techniques, and computing resources. It is anticipated that in the next five to ten years, the MHD-based models will supplant semi-empirical potential-based models such as the WSA model as the best available forecast models. We anticipate that this validation effort will track this evolution and so assist policy makers in gauging the value of past and future investment in modeling support.

  10. Development and validation of a 10-year-old child ligamentous cervical spine finite element model.

    PubMed

    Dong, Liqiang; Li, Guangyao; Mao, Haojie; Marek, Stanley; Yang, King H

    2013-12-01

    Although a number of finite element (FE) adult cervical spine models have been developed to understand the injury mechanisms of the neck in automotive related crash scenarios, there have been fewer efforts to develop a child neck model. In this study, a 10-year-old ligamentous cervical spine FE model was developed for application in the improvement of pediatric safety related to motor vehicle crashes. The model geometry was obtained from medical scans and meshed using a multi-block approach. Appropriate properties based on review of literature in conjunction with scaling were assigned to different parts of the model. Child tensile force-deformation data in three segments, Occipital-C2 (C0-C2), C4-C5 and C6-C7, were used to validate the cervical spine model and predict failure forces and displacements. Design of computer experiments was performed to determine failure properties for intervertebral discs and ligaments needed to set up the FE model. The model-predicted ultimate displacements and forces were within the experimental range. The cervical spine FE model was validated in flexion and extension against the child experimental data in three segments, C0-C2, C4-C5 and C6-C7. Other model predictions were found to be consistent with the experimental responses scaled from adult data. The whole cervical spine model was also validated in tension, flexion and extension against the child experimental data. This study provided methods for developing a child ligamentous cervical spine FE model and to predict soft tissue failures in tension.

  11. Physiological time model of Scirpophaga incertulas (Lepidoptera: Pyralidae) in rice in Guangdong Province, People's Republic of China.

    PubMed

    Stevenson, Douglass E; Feng, Ge; Zhang, Runjie; Harris, Marvin K

    2005-08-01

    Scirpophaga incertulas (Walker) (Lepidoptera: Pyralidae) is autochthonous and monophagous on rice, Oryza spp., which favors the development of a physiological time model using degree-days (degrees C) to establish a well-defined window during which adults will be present in fields. Model development of S. incertulas adult flight phenology used climatic data and historical field observations of S. incertulas from 1962 through 1988. Analysis of variance was used to evaluate 5,203 prospective models with starting dates ranging from 1 January (day 1) to 30 April (day 121) and base temperatures ranging from -3 through 18.5 degrees C. From six candidate models, which shared the lowest standard deviation of prediction error, a model with a base temperature of 10 degrees C starting on 19 January was selected for validation. Validation with linear regression evaluated the differences between predicted and observed events and showed the model consistently predicted phenological events of 10 to 90% cumulative flight activity within a 3.5-d prediction interval regarded as acceptable for pest management decision making. The degree-day phenology model developed here is expected to find field application in Guangdong Province. Expansion to other areas of rice production will require field validation. We expect the degree-day characterization of the activity period will remain essentially intact, but the start day may vary based on climate and geographic location. The development and validation of this phenology model for S. incertulas, using procedures originally developed for the pecan nut casebearer, Acrobasis nuxvorella Neunzig, shows the fungibility of this approach for developing prediction models for other insects.
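
    A degree-day accumulation of the kind used here is simple to compute. The sketch below uses the simple average method with the selected parameters (base temperature 10 degrees C, accumulation starting at the 19 January biofix) on hypothetical daily temperatures; the published model may use a different accumulation method.

    ```python
    def degree_days(daily_min_max, base=10.0):
        """Accumulate degree-days (deg C) with the simple average method:
        sum over days of max(0, (Tmin + Tmax) / 2 - base)."""
        return sum(max(0.0, (tmin + tmax) / 2.0 - base)
                   for tmin, tmax in daily_min_max)

    # Hypothetical daily (Tmin, Tmax) readings starting 19 January.
    readings = [(8.0, 16.0), (9.0, 18.0), (11.0, 21.0), (12.0, 24.0)]
    print(f"accumulated degree-days: {degree_days(readings):.1f}")
    ```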

  12. Validation and Use of a Predictive Modeling Tool: Employing Scientific Findings to Improve Responsible Conduct of Research Education.

    PubMed

    Mulhearn, Tyler J; Watts, Logan L; Todd, E Michelle; Medeiros, Kelsey E; Connelly, Shane; Mumford, Michael D

    2017-01-01

    Although recent evidence suggests ethics education can be effective, the nature of specific training programs, and their effectiveness, varies considerably. Building on a recent path modeling effort, the present study developed and validated a predictive modeling tool for responsible conduct of research education. The predictive modeling tool allows users to enter ratings for a given ethics training program and receive instantaneous evaluative information for course refinement. Validation work suggests the tool's predicted outcomes correlate strongly (r = 0.46) with objective course outcomes. Implications for training program development and refinement are discussed.

  13. Rational selection of training and test sets for the development of validated QSAR models

    NASA Astrophysics Data System (ADS)

    Golbraikh, Alexander; Shen, Min; Xiao, Zhiyan; Xiao, Yun-De; Lee, Kuo-Hsiung; Tropsha, Alexander

    2003-02-01

    Quantitative Structure-Activity Relationship (QSAR) models are used increasingly to screen chemical databases and/or virtual chemical libraries for potentially bioactive molecules. These developments emphasize the importance of rigorous model validation to ensure that the models have acceptable predictive power. Using the k nearest neighbors (kNN) variable selection QSAR method for the analysis of several datasets, we have demonstrated recently that the widely accepted leave-one-out (LOO) cross-validated R2 (q2) is an inadequate characteristic to assess the predictive ability of the models [Golbraikh, A., Tropsha, A. Beware of q2! J. Mol. Graphics Mod. 20, 269-276 (2002)]. Herein, we provide additional evidence that there exists no correlation between the values of q2 for the training set and the accuracy of prediction (R2) for the test set, and argue that this observation is a general property of any QSAR model developed with LOO cross-validation. We suggest that external validation using rationally selected training and test sets provides a means to establish a reliable QSAR model. We propose several approaches to the division of experimental datasets into training and test sets and apply them in QSAR studies of 48 functionalized amino acid anticonvulsants and a series of 157 epipodophyllotoxin derivatives with antitumor activity. We formulate a set of general criteria for the evaluation of the predictive power of QSAR models.
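
    The q2-versus-external-R2 contrast is easy to reproduce. The sketch below uses synthetic data and a generic ridge regressor standing in for the kNN QSAR method; it computes the LOO cross-validated q2 on a training set and R2 on an external test set.

    ```python
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.metrics import r2_score
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    rng = np.random.default_rng(7)
    X = rng.normal(size=(60, 10))            # descriptors, training set
    y = X[:, 0] + 0.1 * rng.normal(size=60)  # activity

    model = Ridge(alpha=1.0)
    # q2: leave-one-out cross-validated R2 on the training set.
    y_loo = cross_val_predict(model, X, y, cv=LeaveOneOut())
    q2 = r2_score(y, y_loo)

    # External R2: predictions for a test set never used in training.
    X_test = rng.normal(size=(30, 10))
    y_test = X_test[:, 0] + 0.1 * rng.normal(size=30)
    r2_ext = r2_score(y_test, model.fit(X, y).predict(X_test))
    print(f"q2 (LOO) = {q2:.2f}, external R2 = {r2_ext:.2f}")
    ```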

  14. Object-oriented simulation model of a parabolic trough solar collector: Static and dynamic validation

    NASA Astrophysics Data System (ADS)

    Ubieta, Eduardo; Hoyo, Itzal del; Valenzuela, Loreto; Lopez-Martín, Rafael; Peña, Víctor de la; López, Susana

    2017-06-01

    A simulation model of a parabolic-trough solar collector developed in the Modelica® language is calibrated and validated. The calibration is performed in order to approximate the behavior of the solar collector model to that of a real one, given the uncertainty in some of the system parameters; measured data are used during the calibration process. Afterwards, the calibrated model is validated: the results obtained from the model are compared to those obtained during real operation of a collector at the Plataforma Solar de Almería (PSA).

  15. Clinical Nomograms to Predict Stone-Free Rates after Shock-Wave Lithotripsy: Development and Internal-Validation

    PubMed Central

    Kim, Jung Kwon; Ha, Seung Beom; Jeon, Chan Hoo; Oh, Jong Jin; Cho, Sung Yong; Oh, Seung-June; Kim, Hyeon Hoe; Jeong, Chang Wook

    2016-01-01

    Purpose Shock-wave lithotripsy (SWL) is accepted as the first-line treatment modality for uncomplicated upper urinary tract stones; however, validated prediction models with regard to stone-free rates (SFRs) are still needed. We aimed to develop nomograms predicting SFRs after the first and within the third session of SWL. Computed tomography (CT) information was also modeled for constructing nomograms. Materials and Methods From March 2006 to December 2013, 3028 patients were treated with SWL for ureter and renal stones at our three tertiary institutions. Four cohorts were constructed: Total-development, Total-validation, CT-development, and CT-validation cohorts. The nomograms were developed using multivariate logistic regression models with selected significant variables in a univariate logistic regression model. A C-index was used to assess the discrimination accuracy of nomograms and calibration plots were used to analyze the consistency of prediction. Results The SFR, after the first and within the third session, was 48.3% and 68.8%, respectively. Significant variables were sex, stone location, stone number, and maximal stone diameter in the Total-development cohort, and mean Hounsfield unit (HU) and grade of hydronephrosis (HN) were additional parameters in the CT-development cohort. The C-indices were 0.712 and 0.723 for after the first and within the third session of SWL in the Total-development cohort, and 0.755 and 0.756, in the CT-development cohort, respectively. The calibration plots showed good correspondences. Conclusions We constructed and validated nomograms to predict SFR after SWL. To the best of our knowledge, these are the first graphical nomograms to be modeled with CT information. These may be useful for patient counseling and treatment decision-making. PMID:26890006

  16. High-resolution computational algorithms for simulating offshore wind turbines and farms: Model development and validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calderer, Antoni; Yang, Xiaolei; Angelidis, Dionysios

    2015-10-30

    The present project involves the development of modeling and analysis design tools for assessing offshore wind turbine technologies. The computational tools developed herein are able to resolve the effects of the coupled interaction of atmospheric turbulence and ocean waves on aerodynamic performance and structural stability and reliability of offshore wind turbines and farms. Laboratory scale experiments have been carried out to derive data sets for validating the computational models.

  17. Development and External Validation of a Melanoma Risk Prediction Model Based on Self-assessed Risk Factors.

    PubMed

    Vuong, Kylie; Armstrong, Bruce K; Weiderpass, Elisabete; Lund, Eiliv; Adami, Hans-Olov; Veierod, Marit B; Barrett, Jennifer H; Davies, John R; Bishop, D Timothy; Whiteman, David C; Olsen, Catherine M; Hopper, John L; Mann, Graham J; Cust, Anne E; McGeechan, Kevin

    2016-08-01

    Identifying individuals at high risk of melanoma can optimize primary and secondary prevention strategies. To develop and externally validate a risk prediction model for incident first-primary cutaneous melanoma using self-assessed risk factors. We used unconditional logistic regression to develop a multivariable risk prediction model. Relative risk estimates from the model were combined with Australian melanoma incidence and competing mortality rates to obtain absolute risk estimates. A risk prediction model was developed using the Australian Melanoma Family Study (629 cases and 535 controls) and externally validated using 4 independent population-based studies: the Western Australia Melanoma Study (511 case-control pairs), Leeds Melanoma Case-Control Study (960 cases and 513 controls), Epigene-QSkin Study (44 544 participants, of whom 766 had melanoma), and Swedish Women's Lifestyle and Health Cohort Study (49 259 women, of whom 273 had melanoma). We validated model performance internally and externally by assessing discrimination using the area under the receiver operating characteristic curve (AUC). Additionally, using the Swedish Women's Lifestyle and Health Cohort Study, we assessed model calibration and clinical usefulness. The risk prediction model included hair color, nevus density, first-degree family history of melanoma, previous nonmelanoma skin cancer, and lifetime sunbed use. On internal validation, the AUC was 0.70 (95% CI, 0.67-0.73). On external validation, the AUC was 0.66 (95% CI, 0.63-0.69) in the Western Australia Melanoma Study, 0.67 (95% CI, 0.65-0.70) in the Leeds Melanoma Case-Control Study, 0.64 (95% CI, 0.62-0.66) in the Epigene-QSkin Study, and 0.63 (95% CI, 0.60-0.67) in the Swedish Women's Lifestyle and Health Cohort Study. Model calibration showed close agreement between predicted and observed numbers of incident melanomas across all deciles of predicted risk. In the external validation setting, there was higher net benefit when using the risk prediction model to classify individuals as high risk compared with classifying all individuals as high risk. The melanoma risk prediction model performs well and may be useful in prevention interventions reliant on a risk assessment using self-assessed risk factors.

  18. Development and validation of the new ICNARC model for prediction of acute hospital mortality in adult critical care.

    PubMed

    Ferrando-Vivas, Paloma; Jones, Andrew; Rowan, Kathryn M; Harrison, David A

    2017-04-01

    To develop and validate an improved risk model to predict acute hospital mortality for admissions to adult critical care units in the UK. 155,239 admissions to 232 adult critical care units in England, Wales and Northern Ireland between January and December 2012 were used to develop a risk model from a set of 38 candidate predictors. The model was validated using 90,017 admissions between January and September 2013. The final model incorporated 15 physiological predictors (modelled with continuous nonlinear models), age, dependency prior to hospital admission, chronic liver disease, metastatic disease, haematological malignancy, CPR prior to admission, location prior to admission/urgency of admission, primary reason for admission and interaction terms. The model was well calibrated and outperformed the current ICNARC model on measures of discrimination (area under the receiver operating characteristic curve 0.885 versus 0.869) and model fit (Brier score 0.108 versus 0.115). On average, the new model reclassified patients into more appropriate risk categories (net reclassification improvement 19.9; P<0.0001). The model performed well across patient subgroups and in specialist critical care units. The risk model developed in this study showed excellent discrimination and calibration when validated on data from a different time period and across different types of critical care unit. This in turn allows improved accuracy of comparisons between UK critical care providers. Copyright © 2016. Published by Elsevier Inc.
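
    Both headline metrics are direct functions of predicted risks and observed outcomes. A sketch on simulated data (hypothetical risks, not ICNARC code) comparing two models on AUC and Brier score:

    ```python
    import numpy as np
    from sklearn.metrics import brier_score_loss, roc_auc_score

    rng = np.random.default_rng(3)
    true_risk = rng.uniform(0.01, 0.60, 5000)  # hypothetical mortality risks
    y = rng.binomial(1, true_risk)

    # A sharper "new" model and a noisier "old" model around the same truth.
    p_new = np.clip(true_risk + rng.normal(0, 0.05, 5000), 0.001, 0.999)
    p_old = np.clip(true_risk + rng.normal(0, 0.15, 5000), 0.001, 0.999)

    for name, p in [("new model", p_new), ("old model", p_old)]:
        print(name,
              "AUC:", round(roc_auc_score(y, p), 3),
              "Brier:", round(brier_score_loss(y, p), 3))
    ```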

  19. Applying Business Process Reengineering to the Marine Corps Information Assurance Certification and Accreditation Process

    DTIC Science & Technology

    2009-09-01

    the most efficient model is developed and validated by applying it to the current IA C&A process flow at the TSO-KC. Finally... models are explored using the Knowledge Value Added (KVA) methodology, and the most efficient model is developed and validated by applying it to the ... models requires only one available actor from its respective group, rather than all actors in the group, to

  20. In-line pressure-flow module for in vitro modelling of haemodynamics and biosensor validation

    NASA Technical Reports Server (NTRS)

    Koenig, S. C.; Schaub, J. D.; Ewert, D. L.; Swope, R. D.; Convertino, V. A. (Principal Investigator)

    1997-01-01

    An in-line pressure-flow module for in vitro modelling of haemodynamics and biosensor validation has been developed. Studies show that good accuracy can be achieved in the measurement of pressure and of flow, in steady and pulsatile flow systems. The module can be used for development, testing and evaluation of cardiovascular-mechanical-electrical analogue models, cardiovascular prosthetics (i.e. valves, vascular grafts) and pressure and flow biosensors.

  1. Rotary Engine Friction Test Rig Development Report

    DTIC Science & Technology

    2011-12-01

    fundamental research is needed to understand the friction characteristics of the rotary engine that lead to accelerated wear and tear on the seals...that includes a turbocharger. Once the original GT-Suite model is validated, the turbocharger model will be more accurate. This validation will...prepare for turbocharger and fuel-injector testing, which will lead to further development and calibration of the model. Further details are beyond the

  2. Wall-to-wall Landsat TM classifications for Georgia in support of SAFIS using FIA plots for training and verification

    Treesearch

    William H. Cooke; Andrew J. Hartsell

    2000-01-01

    Wall-to-wall Landsat TM classification efforts in Georgia require field validation. Validation using FIA data was tested by developing a new crown modeling procedure. A methodology is under development at the Southern Research Station to model crown diameter using Forest Health Monitoring data. These models are used to simulate the proportion of tree crowns that...

  3. Validation of urban freeway models. [supporting datasets

    DOT National Transportation Integrated Search

    2015-01-01

    The goal of the SHRP 2 Project L33 Validation of Urban Freeway Models was to assess and enhance the predictive travel time reliability models developed in the SHRP 2 Project L03, Analytic Procedures for Determining the Impacts of Reliability Mitigati...

  4. Validation, Proof-of-Concept, and Postaudit of the Groundwater Flow and Transport Model of the Project Shoal Area

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahmed Hassan

    2004-09-01

    The groundwater flow and radionuclide transport model characterizing the Shoal underground nuclear test has been accepted by the State of Nevada Division of Environmental Protection. According to the Federal Facility Agreement and Consent Order (FFACO) between DOE and the State of Nevada, the next steps in the closure process for the site are then model validation (or postaudit), the proof-of-concept, and the long-term monitoring stage. This report addresses the development of the validation strategy for the Shoal model, needed for preparing the subsurface Corrective Action Decision Document-Corrective Action Plan, and the development of the proof-of-concept tools needed during the five-year monitoring/validation period. The approach builds on a previous model, but is adapted and modified to the site-specific conditions and challenges of the Shoal site.

  5. Verification and Validation: High Charge and Energy (HZE) Transport Codes and Future Development

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Tripathi, Ram K.; Mertens, Christopher J.; Blattnig, Steve R.; Clowdsley, Martha S.; Cucinotta, Francis A.; Tweed, John; Heinbockel, John H.; Walker, Steven A.; Nealy, John E.

    2005-01-01

    In the present paper, we give the formalism for further developing a fully three-dimensional HZETRN code using marching procedures, but development of a new Green's function code is also discussed. The final Green's function code is capable of validation not only in the space environment but also in ground-based laboratories with directed beams of ions of specific energy, characterized with detailed diagnostic particle spectrometer devices. Special emphasis is given to verification of the computational procedures and validation of the resultant computational model using laboratory and spaceflight measurements. Due to historical requirements, two parallel development paths for computational model implementation using marching procedures and Green's function techniques are followed. A new version of the HZETRN code capable of simulating HZE ions with either laboratory or space boundary conditions is under development. Validation of computational models at this time is particularly important for President Bush's Initiative to develop infrastructure for human exploration, with the first target demonstration of the Crew Exploration Vehicle (CEV) in low Earth orbit in 2008.

  6. Improved Conceptual Models Methodology (ICoMM) for Validation of Non-Observable Systems

    DTIC Science & Technology

    2015-12-01

    IMPROVED CONCEPTUAL MODELS METHODOLOGY (ICoMM) FOR VALIDATION OF NON-OBSERVABLE SYSTEMS, dissertation by Sang M. Sok, December 2015. ...importance of the CoM. The improved conceptual model methodology (ICoMM) is developed in support of improving the structure of the CoM for both face and

  7. Development and Validation of a Consumer Quality Assessment Instrument for Dentistry.

    ERIC Educational Resources Information Center

    Johnson, Jeffrey D.; And Others

    1990-01-01

    This paper reviews the literature on consumer involvement in dental quality assessment, argues for inclusion of this information in quality assessment measures, outlines a conceptual model for measuring dental consumer quality assessment, and presents data relating to the development and validation of an instrument based on the conceptual model.…

  8. Development of a Home Food Safety Questionnaire Based on the PRECEDE Model: Targeting Iranian Women.

    PubMed

    Esfarjani, Fatemeh; Hosseini, Hedayat; Mohammadi-Nasrabadi, Fatemeh; Abadi, Alireza; Roustaee, Roshanak; Alikhanian, Haleh; Khalafi, Marjan; Kiaee, Mohammad Farhad; Khaksar, Ramin

    2016-12-01

    Food safety is an essential public health issue for all countries. This study was the first attempt to design and develop a home food safety questionnaire (HFSQ), in the conceptual framework of the PRECEDE (predisposing, reinforcing, and enabling constructs in educational diagnosis and evaluation) model, and to assess its validity and reliability. The HFSQ was developed by reviewing electronic databases and 12 focus group discussions with 96 women volunteers. Ten panel members reviewed the questionnaire, and the content validity ratio and content validity index were computed. Twenty women completed the HFSQ, and face validity was assessed. Women who were responsible for food handling in their households (n = 320) were selected randomly from 10 health centers and completed the HFSQ based on the PRECEDE model. To examine the construct validity, a principal components factor analysis with varimax rotation was used. Internal consistency was determined with Cronbach's α. Reproducibility was checked by Kendall's τ after 4 weeks with 30 women. The developed HFSQ was considered acceptable with a content validity index of 0.88. Face validity revealed that 95% of the participants understood the questions and found them easy to answer, and 90% confirmed the appearance of the HFSQ and declared the layout acceptable. Principal component factor analysis revealed that the HFSQ could explain 33.7, 55.3, 34.8, and 60.0% of the total variance of the predisposing, reinforcing, practice, and enabling components, respectively. Cronbach's α was acceptable at 0.73. For Kendall's τc, r = 0.89, with a 95% confidence interval of 0.85 to 0.93. The HFSQ developed based on the PRECEDE model met the standards of acceptable reliability and validity, which can be generalized to a wider population. These results can provide information for the development of effective communication strategies to promote home food safety.

  9. Dynamic Simulation of Human Gait Model With Predictive Capability.

    PubMed

    Sun, Jinming; Wu, Shaoli; Voglewede, Philip A

    2018-03-01

    In this paper, it is proposed that the central nervous system (CNS) controls human gait using a predictive control approach in conjunction with classical feedback control, rather than exclusively by classical feedback control acting on past error. To validate this proposition, a dynamic model of human gait is developed using a novel predictive approach to investigate the principles of the CNS. The model developed includes two parts: a plant model that represents the dynamics of human gait and a controller that represents the CNS. The plant model is a seven-segment, six-joint model that has nine degrees-of-freedom (DOF). The plant model is validated using data collected from able-bodied human subjects. The proposed controller utilizes model predictive control (MPC). MPC uses an internal model to predict the output in advance, compare the predicted output to the reference, and optimize the control input so that the predicted error is minimal. To decrease the complexity of the model, two joints are controlled using a proportional-derivative (PD) controller. The developed predictive human gait model is validated by simulating able-bodied human gait. The simulation results show that the developed model is able to produce kinematic output close to experimental data.
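
    The loop described above (an internal model predicts the output over a horizon, the prediction is compared with a reference, and the input minimizing predicted error is applied) is the core of model predictive control. A toy sketch for a generic linear plant, not the authors' nine-DOF gait model:

        import numpy as np
        from scipy.optimize import minimize

        # Toy discrete-time plant: x[k+1] = A x[k] + B u[k], output y = C x
        A = np.array([[1.0, 0.1], [0.0, 1.0]])
        B = np.array([0.005, 0.1])
        C = np.array([1.0, 0.0])

        def predict(x, u_seq):
            # Internal model: roll the plant forward over the horizon
            ys = []
            for u in u_seq:
                x = A @ x + B * u
                ys.append(C @ x)
            return np.array(ys)

        def mpc_step(x, ref, horizon=10):
            # Optimize the input sequence so the predicted tracking error is minimal
            cost = lambda u: np.sum((predict(x, u) - ref) ** 2) + 1e-3 * np.sum(u ** 2)
            u_opt = minimize(cost, np.zeros(horizon)).x
            return u_opt[0]  # receding horizon: apply only the first input

        x, ref = np.zeros(2), np.ones(10)  # step reference, e.g. a desired joint angle
        for _ in range(30):
            x = A @ x + B * mpc_step(x, ref)
        print("output after 30 steps:", C @ x)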

  10. Development and validation of an in-line NIR spectroscopic method for continuous blend potency determination in the feed frame of a tablet press.

    PubMed

    De Leersnyder, Fien; Peeters, Elisabeth; Djalabi, Hasna; Vanhoorne, Valérie; Van Snick, Bernd; Hong, Ke; Hammond, Stephen; Liu, Angela Yang; Ziemons, Eric; Vervaet, Chris; De Beer, Thomas

    2018-03-20

    A calibration model for in-line API quantification based on near infrared (NIR) spectra collection during tableting in the tablet press feed frame was developed and validated. First, the measurement set-up was optimised and the effect of filling degree of the feed frame on the NIR spectra was investigated. Secondly, a predictive API quantification model was developed and validated by calculating the accuracy profile based on the analysis results of validation experiments. Furthermore, based on the data of the accuracy profile, the measurement uncertainty was determined. Finally, the robustness of the API quantification model was evaluated. An NIR probe (SentroPAT FO) was implemented into the feed frame of a rotary tablet press (Modul™ P) to monitor physical mixtures of a model API (sodium saccharine) and excipients with two different API target concentrations: 5 and 20% (w/w). Cutting notches into the paddle wheel fingers avoided disturbances of the NIR signal caused by the rotating paddle wheel fingers and hence allowed better and more complete feed frame monitoring. The effect of the design of the notched paddle wheel fingers was also investigated, showing that straight paddle wheel fingers caused less variation in NIR signal than curved paddle wheel fingers. The filling degree of the feed frame was reflected in the raw NIR spectra. Several different calibration models for the prediction of the API content were developed, based on the use of single spectra or averaged spectra, and using partial least squares (PLS) regression or ratio models. These predictive models were then evaluated and validated by processing physical mixtures with different API concentrations not used in the calibration models (validation set). The β-expectation tolerance intervals were calculated for each model and for each of the validated API concentration levels (β was set at 95%). PLS models showed the best predictive performance. For each examined saccharine concentration range (i.e., between 4.5 and 6.5% and between 15 and 25%), at least 95% of future measurements will not deviate more than 15% from the true value. Copyright © 2018 Elsevier B.V. All rights reserved.
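
    The PLS calibration step named above regresses API concentration on the spectra. A brief sketch with scikit-learn, using synthetic spectra as a stand-in for the NIR measurements:

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(1)
        n_samples, n_wavelengths = 60, 200
        api = rng.uniform(4.5, 25.0, n_samples)        # % w/w API content
        component = rng.normal(size=n_wavelengths)     # pure-component "spectrum"
        X = np.outer(api, component) + rng.normal(0, 0.5, (n_samples, n_wavelengths))

        pls = PLSRegression(n_components=3)
        pred = cross_val_predict(pls, X, api, cv=5).ravel()
        rmsecv = np.sqrt(np.mean((pred - api) ** 2))   # cross-validated error
        print(f"RMSECV: {rmsecv:.2f} % w/w")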

  11. Construct validity of the Moral Development Scale for Professionals (MDSP)

    PubMed Central

    Söderhamn, Olle; Bjørnestad, John Olav; Skisland, Anne; Cliffordson, Christina

    2011-01-01

    The aim of this study was to investigate the construct validity of the Moral Development Scale for Professionals (MDSP) using structural equation modeling. The instrument is a 12-item self-report instrument, developed in the Scandinavian cultural context and based on Kohlberg’s theory. A hypothesized simplex structure model underlying the MDSP was tested through structural equation modeling. Validity was also tested as the proportion of respondents older than 20 years that reached the highest moral level, which according to the theory should be small. A convenience sample of 339 nursing students with a mean age of 25.3 years participated. Results confirmed the simplex model structure, indicating that MDSP reflects a moral construct empirically organized from low to high. A minority of respondents >20 years of age (13.5%) scored more than 80% on the highest moral level. The findings support the construct validity of the MDSP and the stages and levels in Kohlberg’s theory. PMID:21655343

  12. Development and validation of a prediction model for functional decline in older medical inpatients.

    PubMed

    Takada, Toshihiko; Fukuma, Shingo; Yamamoto, Yosuke; Tsugihashi, Yukio; Nagano, Hiroyuki; Hayashi, Michio; Miyashita, Jun; Azuma, Teruhisa; Fukuhara, Shunichi

    2018-05-17

    To prevent functional decline in older inpatients, identification of high-risk patients is crucial. The aim of this study was to develop and validate a prediction model to assess the risk of functional decline in older medical inpatients. In this retrospective cohort study, patients ≥65 years admitted acutely to medical wards were included. The healthcare database of 246 acute care hospitals (n = 229,913) was used for derivation, and two acute care hospitals (n = 1767 and 5443, respectively) were used for validation. Data were collected using a national administrative claims and discharge database. Functional decline was defined as a decline in the Katz score at discharge compared with admission. About 6% of patients in the derivation cohort and 9% and 2% in each validation cohort developed functional decline. A model with 7 items (age, body mass index, living in a nursing home, ambulance use, need for assistance in walking, dementia, and bedsores) was developed. On internal validation, it demonstrated a c-statistic of 0.77 (95% confidence interval (CI) = 0.767-0.771) and good fit on the calibration plot. On external validation, the c-statistics were 0.79 (95% CI = 0.77-0.81) and 0.75 (95% CI = 0.73-0.77) for each cohort, respectively. Calibration plots showed good fit in one cohort and overestimation in the other. A prediction model for functional decline in older medical inpatients was derived and validated. It is expected that use of the model would lead to early identification of high-risk patients and the introduction of early interventions. Copyright © 2018 Elsevier B.V. All rights reserved.
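
    The decile-based calibration check used in studies like this one groups patients by predicted risk and compares the mean predicted risk with the observed event rate in each group. A minimal sketch on simulated data (assuming pandas; the numbers are illustrative only):

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(2)
        p = rng.beta(1, 15, 5000)     # predicted risks, ~6% on average
        y = rng.binomial(1, p)        # outcomes drawn from those risks

        df = pd.DataFrame({"pred": p, "obs": y})
        df["decile"] = pd.qcut(df["pred"], 10, labels=False)
        calib = df.groupby("decile").agg(mean_pred=("pred", "mean"),
                                         obs_rate=("obs", "mean"))
        print(calib)  # close per-decile agreement indicates good calibration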

  13. Load Composition Model Workflow (BPA TIP-371 Deliverable 1A)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chassin, David P.; Cezar, Gustavo V.

    This project is funded under Bonneville Power Administration (BPA) Strategic Partnership Project (SPP) 17-005 between BPA and SLAC National Accelerator Laboratory. The project is a BPA Technology Improvement Project (TIP) that builds on and validates the Composite Load Model developed by the Western Electricity Coordinating Council's (WECC) Load Modeling Task Force (LMTF). The composite load model is used by the WECC Modeling and Validation Work Group to study the stability and security of the western electricity interconnection. The work includes development of load composition data sets, collection of load disturbance data, and model development and validation. This work supports reliable and economic operation of the power system. This report was produced for Deliverable 1A of the BPA TIP-371 project entitled "TIP 371: Advancing the Load Composition Model". The deliverable documents the proposed workflow for the Composite Load Model, which provides the basis for the instrumentation, data acquisition, analysis and data dissemination activities addressed by later phases of the project.

  14. Validation of an Evaluation Model for Learning Management Systems

    ERIC Educational Resources Information Center

    Kim, S. W.; Lee, M. G.

    2008-01-01

    This study aims to validate a model for evaluating learning management systems (LMS) used in e-learning fields. A survey of 163 e-learning experts, regarding 81 validation items developed through literature review, was used to ascertain the importance of the criteria. A concise list of explanatory constructs, including two principle factors, was…

  15. Improved modeling of GaN HEMTs for predicting thermal and trapping-induced-kink effects

    NASA Astrophysics Data System (ADS)

    Jarndal, Anwar; Ghannouchi, Fadhel M.

    2016-09-01

    In this paper, an improved modeling approach has been developed and validated for GaN high electron mobility transistors (HEMTs). The proposed analytical model accurately simulates the drain current and its inherent trapping and thermal effects. A genetic-algorithm-based procedure is developed to automatically find the fitting parameters of the model. The developed modeling technique is implemented on a packaged GaN-on-Si HEMT and validated by DC and small-/large-signal RF measurements. The model is also employed for designing and realizing a switch-mode inverse class-F power amplifier. The amplifier simulations showed very good agreement with RF large-signal measurements.
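
    A genetic-algorithm parameter search of the kind mentioned above evolves a population of candidate parameter sets against a fitting-error objective. A generic sketch with a toy objective, not the paper's drain-current model:

        import numpy as np

        rng = np.random.default_rng(3)

        def fit_error(params):
            # Toy objective: squared distance from a "true" parameter set
            return float(np.sum((params - np.array([1.5, -0.3, 2.0])) ** 2))

        def tournament(population, scores):
            # Pick two random individuals, keep the fitter one
            i, j = rng.integers(0, len(population), 2)
            return population[i] if scores[i] < scores[j] else population[j]

        def genetic_fit(n_params=3, pop_size=40, generations=100, mutation=0.1):
            population = rng.uniform(-5, 5, (pop_size, n_params))
            for _ in range(generations):
                scores = np.array([fit_error(ind) for ind in population])
                children = []
                for _ in range(pop_size):
                    a = tournament(population, scores)
                    b = tournament(population, scores)
                    w = rng.uniform(0, 1, n_params)          # blend crossover
                    child = w * a + (1 - w) * b
                    children.append(child + rng.normal(0, mutation, n_params))
                population = np.array(children)
            scores = np.array([fit_error(ind) for ind in population])
            return population[np.argmin(scores)]

        print("best-fit parameters:", genetic_fit())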

  16. The Development and Validation of a New Land Surface Model for Regional and Global Climate Modeling

    NASA Astrophysics Data System (ADS)

    Lynch-Stieglitz, Marc

    1995-11-01

    A new land-surface scheme intended for use in mesoscale and global climate models has been developed and validated. The ground scheme consists of 6 soil layers. Diffusion and a modified tipping bucket model govern heat and water flow respectively. A 3 layer snow model has been incorporated into a modified BEST vegetation scheme. TOPMODEL equations and Digital Elevation Model data are used to generate baseflow which supports lowland saturated zones. Soil moisture heterogeneity represented by saturated lowlands subsequently impacts watershed evapotranspiration, the partitioning of surface fluxes, and the development of the storm hydrograph. Five years of meteorological and hydrological data from the Sleepers river watershed located in the eastern highlands of Vermont where winter snow cover is significant were then used to drive and validate the new scheme. Site validation data were sufficient to evaluate model performance with regard to various aspects of the watershed water balance, including snowpack growth/ablation, the spring snowmelt hydrograph, storm hydrographs, and the seasonal development of watershed evapotranspiration and soil moisture. By including topographic effects, not only are the main spring hydrographs and individual storm hydrographs adequately resolved, but the mechanisms generating runoff are consistent with current views of hydrologic processes. The seasonal movement of the mean water table depth and the saturated area of the watershed are consistent with site data and the overall model hydroclimatology, including the surface fluxes, seems reasonable.

  17. Development and validation of Prediction models for Risks of complications in Early-onset Pre-eclampsia (PREP): a prospective cohort study.

    PubMed

    Thangaratinam, Shakila; Allotey, John; Marlin, Nadine; Mol, Ben W; Von Dadelszen, Peter; Ganzevoort, Wessel; Akkermans, Joost; Ahmed, Asif; Daniels, Jane; Deeks, Jon; Ismail, Khaled; Barnard, Ann Marie; Dodds, Julie; Kerry, Sally; Moons, Carl; Riley, Richard D; Khan, Khalid S

    2017-04-01

    The prognosis of early-onset pre-eclampsia (before 34 weeks' gestation) is variable. Accurate prediction of complications is required to plan appropriate management in high-risk women. To develop and validate prediction models for outcomes in early-onset pre-eclampsia. Prospective cohort for model development, with validation in two external data sets. Model development: 53 obstetric units in the UK. Model transportability: PIERS (Pre-eclampsia Integrated Estimate of RiSk for mothers) and PETRA (Pre-Eclampsia TRial Amsterdam) studies. Pregnant women with early-onset pre-eclampsia. Nine hundred and forty-six women in the model development data set and 850 women (634 in PIERS, 216 in PETRA) in the transportability (external validation) data sets. The predictors were identified from systematic reviews of tests to predict complications in pre-eclampsia and were prioritised by Delphi survey. The primary outcome was the composite of adverse maternal outcomes established using Delphi surveys. The secondary outcome was the composite of fetal and neonatal complications. We developed two prediction models: a logistic regression model (PREP-L) to assess the overall risk of any maternal outcome until postnatal discharge and a survival analysis model (PREP-S) to obtain individual risk estimates at daily intervals from diagnosis until 34 weeks. Shrinkage was used to adjust for overoptimism of predictor effects. For internal validation (of the full models in the development data) and external validation (of the reduced models in the transportability data), we computed the ability of the models to discriminate between those with and without poor outcomes (c-statistic), and the agreement between predicted and observed risk (calibration slope). The PREP-L model included maternal age, gestational age at diagnosis, medical history, systolic blood pressure, urine protein-to-creatinine ratio, platelet count, serum urea concentration, oxygen saturation, baseline treatment with antihypertensive drugs and administration of magnesium sulphate. The PREP-S model additionally included exaggerated tendon reflexes and serum alanine aminotransaminase and creatinine concentration. Both models showed good discrimination for maternal complications, with an optimism-adjusted c-statistic of 0.82 [95% confidence interval (CI) 0.80 to 0.84] for PREP-L and 0.75 (95% CI 0.73 to 0.78) for the PREP-S model in the internal validation. External validation of the reduced PREP-L model showed good performance with a c-statistic of 0.81 (95% CI 0.77 to 0.85) in PIERS and 0.75 (95% CI 0.64 to 0.86) in PETRA cohorts for maternal complications, and calibrated well with slopes of 0.93 (95% CI 0.72 to 1.10) and 0.90 (95% CI 0.48 to 1.32), respectively. In the PIERS data set, the reduced PREP-S model had a c-statistic of 0.71 (95% CI 0.67 to 0.75) and a calibration slope of 0.67 (95% CI 0.56 to 0.79). Low gestational age at diagnosis, high urine protein-to-creatinine ratio, increased serum urea concentration, treatment with antihypertensive drugs, magnesium sulphate, abnormal uterine artery Doppler scan findings and estimated fetal weight below the 10th centile were associated with fetal complications. The PREP-L model provided individualised risk estimates in early-onset pre-eclampsia to plan management of high- or low-risk individuals. The PREP-S model has the potential to be used as a triage tool for risk assessment. The impacts of the model use on outcomes need further evaluation. Current Controlled Trials ISRCTN40384046. Funded by the National Institute for Health Research Health Technology Assessment programme.

  18. Computer-aided design of liposomal drugs: In silico prediction and experimental validation of drug candidates for liposomal remote loading.

    PubMed

    Cern, Ahuva; Barenholz, Yechezkel; Tropsha, Alexander; Goldblum, Amiram

    2014-01-10

    Previously we have developed and statistically validated Quantitative Structure Property Relationship (QSPR) models that correlate drugs' structural, physical and chemical properties as well as experimental conditions with the relative efficiency of remote loading of drugs into liposomes (Cern et al., J. Control. Release 160 (2012) 147-157). Herein, these models have been used to virtually screen a large drug database to identify novel candidate molecules for liposomal drug delivery. Computational hits were considered for experimental validation based on their predicted remote loading efficiency as well as additional considerations such as availability, recommended dose and relevance to the disease. Three compounds were selected for experimental testing which were confirmed to be correctly classified by our previously reported QSPR models developed with Iterative Stochastic Elimination (ISE) and k-Nearest Neighbors (kNN) approaches. In addition, 10 new molecules with known liposome remote loading efficiency that were not used by us in QSPR model development were identified in the published literature and employed as an additional model validation set. The external accuracy of the models was found to be as high as 82% or 92%, depending on the model. This study presents the first successful application of QSPR models for the computer-model-driven design of liposomal drugs. © 2013.

  19. Computer-aided design of liposomal drugs: in silico prediction and experimental validation of drug candidates for liposomal remote loading

    PubMed Central

    Cern, Ahuva; Barenholz, Yechezkel; Tropsha, Alexander; Goldblum, Amiram

    2014-01-01

    Previously we have developed and statistically validated Quantitative Structure Property Relationship (QSPR) models that correlate drugs’ structural, physical and chemical properties as well as experimental conditions with the relative efficiency of remote loading of drugs into liposomes (Cern et al., Journal of Controlled Release, 160 (2012) 147-157). Herein, these models have been used to virtually screen a large drug database to identify novel candidate molecules for liposomal drug delivery. Computational hits were considered for experimental validation based on their predicted remote loading efficiency as well as additional considerations such as availability, recommended dose and relevance to the disease. Three compounds were selected for experimental testing which were confirmed to be correctly classified by our previously reported QSPR models developed with Iterative Stochastic Elimination (ISE) and k-nearest neighbors (kNN) approaches. In addition, 10 new molecules with known liposome remote loading efficiency that were not used in QSPR model development were identified in the published literature and employed as an additional model validation set. The external accuracy of the models was found to be as high as 82% or 92%, depending on the model. This study presents the first successful application of QSPR models for the computer-model-driven design of liposomal drugs. PMID:24184343

  20. Development of Decision Support Formulas for the Prediction of Bladder Outlet Obstruction and Prostatic Surgery in Patients With Lower Urinary Tract Symptom/Benign Prostatic Hyperplasia: Part II, External Validation and Usability Testing of a Smartphone App.

    PubMed

    Choo, Min Soo; Jeong, Seong Jin; Cho, Sung Yong; Yoo, Changwon; Jeong, Chang Wook; Ku, Ja Hyeon; Oh, Seung-June

    2017-04-01

    We aimed to externally validate the prediction model we developed for having bladder outlet obstruction (BOO) and requiring prostatic surgery using 2 independent data sets from tertiary referral centers, and also aimed to validate a mobile app for using this model through usability testing. Formulas and nomograms predicting whether a subject has BOO and needs prostatic surgery were validated with an external validation cohort from Seoul National University Bundang Hospital and Seoul Metropolitan Government-Seoul National University Boramae Medical Center between January 2004 and April 2015. A smartphone-based app was developed, and 8 young urologists were enrolled for usability testing to identify any human factor issues of the app. A total of 642 patients were included in the external validation cohort. No significant differences were found in the baseline characteristics of major parameters between the original (n=1,179) and the external validation cohort, except for the maximal flow rate. Predictions of requiring prostatic surgery in the validation cohort showed a sensitivity of 80.6%, a specificity of 73.2%, a positive predictive value of 49.7%, a negative predictive value of 92.0%, and an area under the receiver operating characteristic curve of 0.84. The calibration plot indicated good correspondence between predicted and observed outcomes. The decision curve also showed a high net benefit. Similar evaluation results using the external validation cohort were seen in the predictions of having BOO. External validation of this newly developed prediction model demonstrated a moderate level of discrimination, adequate calibration, and high net benefit gains for predicting both having BOO and requiring prostatic surgery. The smartphone app implementing the prediction model was also user-friendly, with no major human factor issues.
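
    The sensitivity, specificity, and predictive values reported above all derive from the 2x2 confusion matrix of predictions against outcomes. A minimal sketch (illustrative counts, not the study's actual table):

        def diagnostic_metrics(tp, fp, tn, fn):
            # Standard 2x2 metrics for a binary prediction rule
            return {
                "sensitivity": tp / (tp + fn),
                "specificity": tn / (tn + fp),
                "ppv": tp / (tp + fp),
                "npv": tn / (tn + fn),
            }

        print(diagnostic_metrics(tp=100, fp=101, tn=277, fn=24))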

  1. Exploring Techniques for Improving Retrievals of Bio-optical Properties of Coastal Waters

    DTIC Science & Technology

    2011-09-30

    BRDF model was developed for coastal waters, and validated on the data of the two LISCO instruments, and its comparison with MODIS satellite imagery...in field conditions to validate radiative transfer modeling and assess possibilities for the separation of organic and inorganic particulate...to retrieve water components and compared with NOMAD and field CCNY data. Simulated datasets were also used to develop a BRDF model for coastal

  2. Validity of empirical models of exposure in asphalt paving

    PubMed Central

    Burstyn, I; Boffetta, P; Burr, G; Cenni, A; Knecht, U; Sciarra, G; Kromhout, H

    2002-01-01

    Aims: To investigate the validity of empirical models of exposure to bitumen fume and benzo(a)pyrene, developed for a historical cohort study of asphalt paving in Western Europe. Methods: Validity was evaluated using data from the USA, Italy, and Germany not used to develop the original models. Correlation between observed and predicted exposures was examined. Bias and precision were estimated. Results: Models were imprecise. Furthermore, predicted bitumen fume exposures tended to be lower (-70%) than concentrations found during paving in the USA. This apparent bias might be attributed to differences between Western European and USA paving practices. Evaluation of the validity of the benzo(a)pyrene exposure model revealed a similar-to-expected effect of re-paving and a larger-than-expected effect of tar use. Overall, benzo(a)pyrene models underestimated exposures by 51%. Conclusions: Possible bias as a result of underestimation of the impact of coal tar on benzo(a)pyrene exposure levels must be explored in sensitivity analysis of the exposure–response relation. Validation of the models, albeit limited, increased our confidence in their applicability to exposure assessment in the historical cohort study of cancer risk among asphalt workers. PMID:12205236

  3. The Role of Structural Models in the Solar Sail Flight Validation Process

    NASA Technical Reports Server (NTRS)

    Johnston, John D.

    2004-01-01

    NASA is currently soliciting proposals via the New Millennium Program ST-9 opportunity for a potential Solar Sail Flight Validation (SSFV) experiment to develop and operate in space a deployable solar sail that can be steered and provides measurable acceleration. The approach planned for this experiment is to test and validate models and processes for solar sail design, fabrication, deployment, and flight. These models and processes would then be used to design, fabricate, and operate scalable solar sails for future space science missions. There are six validation objectives planned for the ST9 SSFV experiment: 1) Validate solar sail design tools and fabrication methods; 2) Validate controlled deployment; 3) Validate in space structural characteristics (focus of poster); 4) Validate solar sail attitude control; 5) Validate solar sail thrust performance; 6) Characterize the sail's electromagnetic interaction with the space environment. This poster presents a top-level assessment of the role of structural models in the validation process for in-space structural characteristics.

  4. Prediction Models for 30-Day Mortality and Complications After Total Knee and Hip Arthroplasties for Veteran Health Administration Patients With Osteoarthritis.

    PubMed

    Harris, Alex Hs; Kuo, Alfred C; Bowe, Thomas; Gupta, Shalini; Nordin, David; Giori, Nicholas J

    2018-05-01

    Statistical models to preoperatively predict patients' risk of death and major complications after total joint arthroplasty (TJA) could improve the quality of preoperative management and informed consent. Although risk models for TJA exist, they have limitations including poor transparency and/or unknown or poor performance. Thus, it is currently impossible to know how well currently available models predict short-term complications after TJA, or if newly developed models are more accurate. We sought to develop and conduct cross-validation of predictive risk models, and report details and performance metrics as benchmarks. Over 90 preoperative variables were used as candidate predictors of death and major complications within 30 days for Veterans Health Administration patients with osteoarthritis who underwent TJA. Data were split into 3 samples: for selection of model tuning parameters, model development, and cross-validation. C-indexes (discrimination) and calibration plots were produced. A total of 70,569 patients diagnosed with osteoarthritis who received primary TJA were included. C-statistics and bootstrapped confidence intervals for the cross-validation of the boosted regression models were highest for cardiac complications (0.75; 0.71-0.79) and 30-day mortality (0.73; 0.66-0.79) and lowest for deep vein thrombosis (0.59; 0.55-0.64) and return to the operating room (0.60; 0.57-0.63). Moderately accurate predictive models of 30-day mortality and cardiac complications after TJA in Veterans Health Administration patients were developed and internally cross-validated. By reporting model coefficients and performance metrics, other model developers can test these models on new samples and have a procedure- and indication-specific benchmark to surpass. Published by Elsevier Inc.

  5. Systematic review of prediction models for delirium in the older adult inpatient.

    PubMed

    Lindroth, Heidi; Bratzke, Lisa; Purvis, Suzanne; Brown, Roger; Coburn, Mark; Mrkobrada, Marko; Chan, Matthew T V; Davis, Daniel H J; Pandharipande, Pratik; Carlsson, Cynthia M; Sanders, Robert D

    2018-04-28

    To identify existing prognostic delirium prediction models and evaluate their validity and statistical methodology in the older adult (≥60 years) acute hospital population. Systematic review. PubMed, CINAHL, PsycINFO, SocINFO, Cochrane, Web of Science and Embase were searched from 1 January 1990 to 31 December 2016. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses and CHARMS Statement guided protocol development. Inclusion criteria were age >60 years, inpatient setting, and development/validation of a prognostic delirium prediction model; exclusion criteria were alcohol-related delirium and sample size ≤50. The primary performance measures were calibration and discrimination statistics. Two authors independently conducted the search and extracted data. The synthesis of data was done by the first author. Disagreement was resolved by the mentoring author. The initial search resulted in 7,502 studies. Following full-text review of 192 studies, 33 were excluded based on age criteria (<60 years) and 27 met the defined criteria. Twenty-three delirium prediction models were identified, 14 were externally validated and 3 were internally validated. The following populations were represented: 11 medical, 3 medical/surgical and 13 surgical. The assessment of delirium was often non-systematic, resulting in varied incidence. Fourteen models were externally validated with an area under the receiver operating characteristic curve ranging from 0.52 to 0.94. Limitations in design, data collection methods and model metric reporting statistics were identified. Delirium prediction models for older adults show variable and typically inadequate predictive capabilities. Our review highlights the need for development of robust models to predict delirium in older inpatients. We provide recommendations for the development of such models. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  6. Habitat models to predict wetland bird occupancy influenced by scale, anthropogenic disturbance, and imperfect detection

    USGS Publications Warehouse

    Glisson, Wesley J.; Conway, Courtney J.; Nadeau, Christopher P.; Borgmann, Kathi L.

    2017-01-01

    Understanding species–habitat relationships for endangered species is critical for their conservation. However, many studies have limited value for conservation because they fail to account for habitat associations at multiple spatial scales, anthropogenic variables, and imperfect detection. We addressed these three limitations by developing models for an endangered wetland bird, Yuma Ridgway's rail (Rallus obsoletus yumanensis), that examined how the spatial scale of environmental variables, inclusion of anthropogenic disturbance variables, and accounting for imperfect detection in validation data influenced model performance. These models identified associations between environmental variables and occupancy. We used bird survey and spatial environmental data at 2473 locations throughout the species' U.S. range to create and validate occupancy models and produce predictive maps of occupancy. We compared habitat-based models at three spatial scales (100, 224, and 500 m radii buffers) with and without anthropogenic disturbance variables using validation data adjusted for imperfect detection and an unadjusted validation dataset that ignored imperfect detection. The inclusion of anthropogenic disturbance variables improved the performance of habitat models at all three spatial scales, and the 224-m-scale model performed best. All models exhibited greater predictive ability when imperfect detection was incorporated into validation data. Yuma Ridgway's rail occupancy was negatively associated with ephemeral and slow-moving riverine features and high-intensity anthropogenic development, and positively associated with emergent vegetation, agriculture, and low-intensity development. Our modeling approach accounts for common limitations in modeling species–habitat relationships and creating predictive maps of occupancy probability and, therefore, provides a useful framework for other species.

  7. Modeling Liver-Related Adverse Effects of Drugs Using kNN QSAR Method

    PubMed Central

    Rodgers, Amie D.; Zhu, Hao; Fourches, Dennis; Rusyn, Ivan; Tropsha, Alexander

    2010-01-01

    Adverse effects of drugs (AEDs) continue to be a major cause of drug withdrawals both in development and post-marketing. While liver-related AEDs are a major concern for drug safety, there are few in silico models for predicting human liver toxicity for drug candidates. We have applied the Quantitative Structure Activity Relationship (QSAR) approach to model liver AEDs. In this study, we aimed to construct a QSAR model capable of binary classification (active vs. inactive) of drugs for liver AEDs based on chemical structure. To build QSAR models, we have employed an FDA spontaneous reporting database of human liver AEDs (elevations in activity of serum liver enzymes), which contains data on approximately 500 approved drugs. Approximately 200 compounds with wide clinical data coverage, structural similarity and balanced (40/60) active/inactive ratio were selected for modeling and divided into multiple training/test and external validation sets. QSAR models were developed using the k nearest neighbor method and validated using external datasets. Models with high sensitivity (>73%) and specificity (>94%) for prediction of liver AEDs in external validation sets were developed. To test applicability of the models, three chemical databases (World Drug Index, Prestwick Chemical Library, and Biowisdom Liver Intelligence Module) were screened in silico and the validity of predictions was determined, where possible, by comparing model-based classification with assertions in publicly available literature. Validated QSAR models of liver AEDs based on the data from the FDA spontaneous reporting system can be employed as sensitive and specific predictors of AEDs in pre-clinical screening of drug candidates for potential hepatotoxicity in humans. PMID:20192250
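
    The kNN classification used here labels a compound by majority vote of its nearest neighbors in descriptor space. A compact sketch with scikit-learn, using random numbers as a stand-in for real chemical descriptors:

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import recall_score

        rng = np.random.default_rng(4)
        X = rng.normal(size=(200, 50))             # 50 descriptors per compound
        y = (X[:, 0] + X[:, 1] > 0).astype(int)    # 1 = "active" for liver AEDs

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
        pred = knn.predict(X_te)
        print("sensitivity:", recall_score(y_te, pred))
        print("specificity:", recall_score(y_te, pred, pos_label=0))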

  8. A design of experiments approach to validation sampling for logistic regression modeling with error-prone medical records.

    PubMed

    Ouyang, Liwen; Apley, Daniel W; Mehrotra, Sanjay

    2016-04-01

    Electronic medical record (EMR) databases offer significant potential for developing clinical hypotheses and identifying disease risk associations by fitting statistical models that capture the relationship between a binary response variable and a set of predictor variables that represent clinical, phenotypical, and demographic data for the patient. However, EMR response data may be error prone for a variety of reasons. Performing a manual chart review to validate data accuracy is time consuming, which limits the number of chart reviews in a large database. The authors' objective is to develop a new design-of-experiments-based systematic chart validation and review (DSCVR) approach that is more powerful than the random validation sampling used in existing approaches. The DSCVR approach judiciously and efficiently selects the cases to validate (i.e., validate whether the response values are correct for those cases) for maximum information content, based only on their predictor variable values. The final predictive model will be fit using only the validation sample, ignoring the remainder of the unvalidated and unreliable error-prone data. A Fisher information based D-optimality criterion is used, and an algorithm for optimizing it is developed. The authors' method is tested in a simulation comparison that is based on a sudden cardiac arrest case study with 23 041 patients' records. This DSCVR approach, using the Fisher information based D-optimality criterion, results in a fitted model with much better predictive performance, as measured by the receiver operating characteristic curve and the accuracy in predicting whether a patient will experience the event, than a model fitted using a random validation sample. The simulation comparisons demonstrate that this DSCVR approach can produce predictive models that are significantly better than those produced from random validation sampling, especially when the event rate is low. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
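
    The D-optimality idea behind DSCVR is to validate the records whose predictor vectors make the information matrix as "large" (in determinant) as possible. A greedy, unweighted sketch of that selection step; the paper's criterion uses the Fisher information of a logistic model, which would additionally weight each record by p(1-p):

        import numpy as np

        rng = np.random.default_rng(5)
        X = rng.normal(size=(500, 4))   # predictor values of unvalidated records

        def greedy_d_optimal(X, n_select):
            # Pick rows one at a time, each maximizing det of the information matrix
            chosen, available = [], set(range(len(X)))
            info = 1e-6 * np.eye(X.shape[1])   # small ridge keeps det nonzero early
            for _ in range(n_select):
                best = max(available,
                           key=lambda i: np.linalg.det(info + np.outer(X[i], X[i])))
                chosen.append(best)
                available.remove(best)
                info += np.outer(X[best], X[best])
            return chosen

        print("records to chart-review first:", greedy_d_optimal(X, 10))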

  9. QSAR modeling: where have you been? Where are you going to?

    PubMed

    Cherkasov, Artem; Muratov, Eugene N; Fourches, Denis; Varnek, Alexandre; Baskin, Igor I; Cronin, Mark; Dearden, John; Gramatica, Paola; Martin, Yvonne C; Todeschini, Roberto; Consonni, Viviana; Kuz'min, Victor E; Cramer, Richard; Benigni, Romualdo; Yang, Chihae; Rathman, James; Terfloth, Lothar; Gasteiger, Johann; Richard, Ann; Tropsha, Alexander

    2014-06-26

    Quantitative structure-activity relationship modeling is one of the major computational tools employed in medicinal chemistry. However, throughout its entire history it has drawn both praise and criticism concerning its reliability, limitations, successes, and failures. In this paper, we discuss (i) the development and evolution of QSAR; (ii) the current trends, unsolved problems, and pressing challenges; and (iii) several novel and emerging applications of QSAR modeling. Throughout this discussion, we provide guidelines for QSAR development, validation, and application, which are summarized in best practices for building rigorously validated and externally predictive QSAR models. We hope that this Perspective will help communications between computational and experimental chemists toward collaborative development and use of QSAR models. We also believe that the guidelines presented here will help journal editors and reviewers apply more stringent scientific standards to manuscripts reporting new QSAR studies, as well as encourage the use of high quality, validated QSARs for regulatory decision making.

  10. QSAR Modeling: Where have you been? Where are you going to?

    PubMed Central

    Cherkasov, Artem; Muratov, Eugene N.; Fourches, Denis; Varnek, Alexandre; Baskin, Igor I.; Cronin, Mark; Dearden, John; Gramatica, Paola; Martin, Yvonne C.; Todeschini, Roberto; Consonni, Viviana; Kuz'min, Victor E.; Cramer, Richard; Benigni, Romualdo; Yang, Chihae; Rathman, James; Terfloth, Lothar; Gasteiger, Johann; Richard, Ann; Tropsha, Alexander

    2014-01-01

    Quantitative Structure-Activity Relationship modeling is one of the major computational tools employed in medicinal chemistry. However, throughout its entire history it has drawn both praise and criticism concerning its reliability, limitations, successes, and failures. In this paper, we discuss: (i) the development and evolution of QSAR; (ii) the current trends, unsolved problems, and pressing challenges; and (iii) several novel and emerging applications of QSAR modeling. Throughout this discussion, we provide guidelines for QSAR development, validation, and application, which are summarized in best practices for building rigorously validated and externally predictive QSAR models. We hope that this Perspective will help communications between computational and experimental chemists towards collaborative development and use of QSAR models. We also believe that the guidelines presented here will help journal editors and reviewers apply more stringent scientific standards to manuscripts reporting new QSAR studies, as well as encourage the use of high quality, validated QSARs for regulatory decision making. PMID:24351051

  11. Identification of reduced-order thermal therapy models using thermal MR images: theory and validation.

    PubMed

    Niu, Ran; Skliar, Mikhail

    2012-07-01

    In this paper, we develop and validate a method to identify computationally efficient site- and patient-specific models of ultrasound thermal therapies from MR thermal images. The models of the specific absorption rate of the transduced energy and the temperature response of the therapy target are identified in the reduced basis of proper orthogonal decomposition of thermal images, acquired in response to a mild thermal test excitation. The method permits dynamic reidentification of the treatment models during the therapy by recursively utilizing newly acquired images. Such adaptation is particularly important during high-temperature therapies, which are known to substantially and rapidly change tissue properties and blood perfusion. The developed theory was validated for the case of focused ultrasound heating of a tissue phantom. The experimental and computational results indicate that the developed approach produces accurate low-dimensional treatment models despite temporal and spatial noise in MR images and a slow image acquisition rate.
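
    The reduced basis referred to above comes from a proper orthogonal decomposition, computed in practice as an SVD of stacked snapshots. A short sketch with random matrices standing in for the MR thermal images:

        import numpy as np

        rng = np.random.default_rng(6)
        # Each column is one vectorized thermal image (64x64 pixels, 50 frames)
        snapshots = rng.normal(size=(64 * 64, 50))

        U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
        energy = np.cumsum(s ** 2) / np.sum(s ** 2)
        r = int(np.searchsorted(energy, 0.99)) + 1  # modes capturing 99% of energy
        basis = U[:, :r]                            # reduced POD basis

        # Project an image onto the basis and measure reconstruction error
        img = snapshots[:, 0]
        coeffs = basis.T @ img
        err = np.linalg.norm(basis @ coeffs - img) / np.linalg.norm(img)
        print("modes:", r, "relative reconstruction error:", err)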

  12. Designing an Agent-Based Model Using Group Model Building: Application to Food Insecurity Patterns in a U.S. Midwestern Metropolitan City.

    PubMed

    Koh, Keumseok; Reno, Rebecca; Hyder, Ayaz

    2018-04-01

    Recent advances in computing resources have increased interest in systems modeling and population health. While group model building (GMB) has been effectively applied in developing system dynamics (SD) models, few studies have used GMB for developing an agent-based model (ABM). This article explores the use of a GMB approach to develop an ABM focused on food insecurity. In our GMB workshops, we modified a set of the standard GMB scripts to develop and validate an ABM in collaboration with local experts and stakeholders. Based on this experience, we learned that GMB is a useful collaborative modeling platform for modelers and community experts to address local population health issues. We also provide suggestions for increasing the use of the GMB approach to develop rigorous, useful, and validated ABMs.

  13. Validating Human Performance Models of the Future Orion Crew Exploration Vehicle

    NASA Technical Reports Server (NTRS)

    Wong, Douglas T.; Walters, Brett; Fairey, Lisa

    2010-01-01

    NASA's Orion Crew Exploration Vehicle (CEV) will provide transportation for crew and cargo to and from destinations in support of the Constellation Architecture Design Reference Missions. Discrete Event Simulation (DES) is one of the design methods NASA employs for crew performance of the CEV. During the early development of the CEV, NASA and its prime Orion contractor Lockheed Martin (LM) strived to seek an effective low-cost method for developing and validating human performance DES models. This paper focuses on the method developed while creating a DES model for the CEV Rendezvous, Proximity Operations, and Docking (RPOD) task to the International Space Station. Our approach to validation was to attack the problem from several fronts. First, we began the development of the model early in the CEV design stage. Second, we adhered strictly to M&S development standards. Third, we involved the stakeholders, NASA astronauts, subject matter experts, and NASA's modeling and simulation development community throughout. Fourth, we applied standard and easy-to-conduct methods to ensure the model's accuracy. Lastly, we reviewed the data from an earlier human-in-the-loop RPOD simulation that had different objectives, which provided us an additional means to estimate the model's confidence level. The results revealed that a majority of the DES model was a reasonable representation of the current CEV design.

  14. Theoretical and Observational Determination of Global and Regional Radiation Budget, Forcing and Feedbacks at the Top-of-Atmosphere and Surface

    NASA Technical Reports Server (NTRS)

    Loeb, Norman G.

    2004-01-01

    Report consists of: 1. List of accomplishments 2. List of publications 3. Abstracts of published or submitted papers, and 4. Subject invention disclosure. The accomplishments of the grant listed are: 1. Improved the third-order turbulence closure in cloud resolving models to remove the liquid water oscillation. 2. Used the University of California-Los Angeles (UCLA) large-eddy simulation (LES) model to provide data for radiation transfer testing. 3. Revised shortwave k-distribution models based on HITRAN 2000. 4. Developed a gamma-weighted two-stream radiative transfer model for radiation budget estimate applications. 5. Estimated the effect of spherical geometry on the earth radiation budget. 6. Estimated top-of-atmosphere irradiance over snow and sea ice surfaces. 7. Estimated the aerosol direct radiative effect at the top of the atmosphere. 8. Estimated the top-of-atmosphere reflectance of the clear-sky molecular atmosphere over ocean. 9. Developed and validated a new set of Angular Distribution Models for the CERES TRMM satellite instrument (tropical). 10. Developed and validated a new set of Angular Distribution Models for the CERES Terra satellite instrument (global). 11. Quantified the top-of-atmosphere direct radiative effect of aerosols over global oceans from merged CERES and MODIS observations. 12. Clarified the definition of the TOA flux reference level for radiation budget studies. 13. Developed a new algorithm for unfiltering CERES measured radiances. 14. Used multiangle POLDER measurements to produce narrowband angular distribution models and examine the effect of scene identification errors on TOA albedo estimates. 15. Developed and validated a novel algorithm called the Multidirectional Reflectance Matching (MRM) model for inferring TOA albedos from ice clouds using multi-angle satellite measurements. 16. Developed and validated a novel algorithm called the Multidirectional Polarized Reflectance Matching (MPRM) model for inferring particle shapes from ice clouds using multi-angle polarized satellite measurements. 17. Developed 4 advanced light scattering models, including the three-dimensional (3D) uniaxial perfectly matched layer (UPML) finite-difference time-domain (FDTD) model. 18. Developed sunglint in situ measurements and studied reflectance distribution in the sunglint area. 19. Led a balloon-borne radiometer TOA albedo validation effort. 20. Developed a CERES surface UVB, UVA, and UV index product.

  15. Model performance evaluation (validation and calibration) in model-based studies of therapeutic interventions for cardiovascular diseases : a review and suggested reporting framework.

    PubMed

    Haji Ali Afzali, Hossein; Gray, Jodi; Karnon, Jonathan

    2013-04-01

    Decision analytic models play an increasingly important role in the economic evaluation of health technologies. Given uncertainties around the assumptions used to develop such models, several guidelines have been published to identify and assess 'best practice' in the model development process, including general modelling approach (e.g., time horizon), model structure, input data and model performance evaluation. This paper focuses on model performance evaluation. In the absence of a sufficient level of detail around model performance evaluation, concerns regarding the accuracy of model outputs, and hence the credibility of such models, are frequently raised. Following presentation of its components, a review of the application and reporting of model performance evaluation is presented. Taking cardiovascular disease as an illustrative example, the review investigates the use of face validity, internal validity, external validity, and cross model validity. As a part of the performance evaluation process, model calibration is also discussed and its use in applied studies investigated. The review found that the application and reporting of model performance evaluation across 81 studies of treatment for cardiovascular disease was variable. Cross-model validation was reported in 55 % of the reviewed studies, though the level of detail provided varied considerably. We found that very few studies documented other types of validity, and only 6 % of the reviewed articles reported a calibration process. Considering the above findings, we propose a comprehensive model performance evaluation framework (checklist), informed by a review of best-practice guidelines. This framework provides a basis for more accurate and consistent documentation of model performance evaluation. This will improve the peer review process and the comparability of modelling studies. Recognising the fundamental role of decision analytic models in informing public funding decisions, the proposed framework should usefully inform guidelines for preparing submissions to reimbursement bodies.

  16. Quality evaluation of health information system's architectures developed using the HIS-DF methodology.

    PubMed

    López, Diego M; Blobel, Bernd; Gonzalez, Carolina

    2010-01-01

    Requirement analysis, design, implementation, evaluation, use, and maintenance of semantically interoperable Health Information Systems (HIS) have to be based on eHealth standards. HIS-DF is a comprehensive approach for HIS architectural development based on standard information models and vocabulary. The empirical validity of HIS-DF has not been demonstrated so far. Through an empirical experiment, the paper demonstrates that, using HIS-DF and HL7 information models, the semantic quality of a HIS architecture can be improved compared with architectures developed using the traditional RUP process. Semantic quality of the architecture has been measured in terms of the model's completeness and validity metrics. The experimental results demonstrated an increased completeness of 14.38% and an increased validity of 16.63% when using the HIS-DF and HL7 information models in a sample HIS development project. Quality assurance of the system architecture in earlier stages of HIS development suggests an increased quality of the final HIS, which in turn implies an indirect benefit to patient care.

  17. The Facial Expression Coding System (FACES): Development, Validation, and Utility

    ERIC Educational Resources Information Center

    Kring, Ann M.; Sloan, Denise M.

    2007-01-01

    This article presents information on the development and validation of the Facial Expression Coding System (FACES; A. M. Kring & D. Sloan, 1991). Grounded in a dimensional model of emotion, FACES provides information on the valence (positive, negative) of facial expressive behavior. In 5 studies, reliability and validity data from 13 diverse…

  18. Development and Validation of the Sorokin Psychosocial Love Inventory for Divorced Individuals

    ERIC Educational Resources Information Center

    D'Ambrosio, Joseph G.; Faul, Anna C.

    2013-01-01

    Objective: This study describes the development and validation of the Sorokin Psychosocial Love Inventory (SPSLI) measuring love actions toward a former spouse. Method: Classical measurement theory and confirmatory factor analysis (CFA) were utilized with an a priori theory and factor model to validate the SPSLI. Results: A 15-item scale…

  19. The Construct of the Learning Organization: Dimensions, Measurement, and Validation

    ERIC Educational Resources Information Center

    Yang, Baiyin; Watkins, Karen E.; Marsick, Victoria J.

    2004-01-01

    This research describes efforts to develop and validate a multidimensional measure of the learning organization. An instrument was developed based on a critical review of both the conceptualization and practice of this construct. Supporting validity evidence for the instrument was obtained from several sources, including best model-data fit among…

  20. Temporal and external validation of a prediction model for adverse outcomes among inpatients with diabetes.

    PubMed

    Adderley, N J; Mallett, S; Marshall, T; Ghosh, S; Rayman, G; Bellary, S; Coleman, J; Akiboye, F; Toulis, K A; Nirantharakumar, K

    2018-06-01

    To temporally and externally validate our previously developed prediction model, which used data from University Hospitals Birmingham to identify inpatients with diabetes at high risk of adverse outcome (mortality or excessive length of stay), in order to demonstrate its applicability to other hospital populations within the UK. Temporal validation was performed using data from University Hospitals Birmingham and external validation was performed using data from both the Heart of England NHS Foundation Trust and Ipswich Hospital. All adult inpatients with diabetes were included. Variables included in the model were age, gender, ethnicity, admission type, intensive therapy unit admission, insulin therapy, albumin, sodium, potassium, haemoglobin, C-reactive protein, estimated GFR and neutrophil count. Adverse outcome was defined as excessive length of stay or death. Model discrimination in the temporal and external validation datasets was good. In temporal validation using data from University Hospitals Birmingham, the area under the curve was 0.797 (95% CI 0.785-0.810), sensitivity was 70% (95% CI 67-72) and specificity was 75% (95% CI 74-76). In external validation using data from Heart of England NHS Foundation Trust, the area under the curve was 0.758 (95% CI 0.747-0.768), sensitivity was 73% (95% CI 71-74) and specificity was 66% (95% CI 65-67). In external validation using data from Ipswich, the area under the curve was 0.736 (95% CI 0.711-0.761), sensitivity was 63% (95% CI 59-68) and specificity was 69% (95% CI 67-72). These results were similar to those for the internally validated model derived from University Hospitals Birmingham. The prediction model to identify patients with diabetes at high risk of developing an adverse event while in hospital performed well in temporal and external validation. The externally validated prediction model is a novel tool that can be used to improve care pathways for inpatients with diabetes. Further research to assess clinical utility is needed. © 2018 Diabetes UK.
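    As a rough illustration of the discrimination metrics reported above (area under the curve, sensitivity, specificity), the sketch below computes them for a synthetic validation set; the logistic model, the variables, and the 0.5 threshold are assumptions for the example, not the authors' actual pipeline.

    ```python
    # Hedged sketch: discrimination metrics on an external validation set.
    # All data are synthetic; this is not the published model.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import confusion_matrix, roc_auc_score

    rng = np.random.default_rng(0)
    X_dev, y_dev = rng.normal(size=(500, 5)), rng.integers(0, 2, 500)  # development data
    X_val, y_val = rng.normal(size=(200, 5)), rng.integers(0, 2, 200)  # validation data

    model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
    p_val = model.predict_proba(X_val)[:, 1]

    auc = roc_auc_score(y_val, p_val)                       # discrimination
    tn, fp, fn, tp = confusion_matrix(y_val, p_val >= 0.5).ravel()
    print(f"AUC={auc:.3f}  sens={tp / (tp + fn):.0%}  spec={tn / (tn + fp):.0%}")
    ```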

  1. Calibration and validation of coarse-grained models of atomic systems: application to semiconductor manufacturing

    NASA Astrophysics Data System (ADS)

    Farrell, Kathryn; Oden, J. Tinsley

    2014-07-01

    Coarse-grained models of atomic systems, created by aggregating groups of atoms into molecules to reduce the number of degrees of freedom, have been used for decades in important scientific and technological applications. In recent years, interest has arisen in developing a more rigorous theory for coarse graining and in assessing the predictivity of coarse-grained models. In this work, Bayesian methods for the calibration and validation of coarse-grained models of atomistic systems in thermodynamic equilibrium are developed. For specificity, only configurational models of systems in canonical ensembles are considered. Among the major challenges in validating coarse-grained models are (1) the development of validation processes that lead to information essential in establishing confidence in the model's ability to predict key quantities of interest and (2), above all, the determination of the coarse-grained model itself: the characterization of the molecular architecture and the choice of interaction potentials, and thus parameters, which best fit the available data. The all-atom model is treated as the "ground truth," and it provides the basis with respect to which properties of the coarse-grained model are compared. This base all-atom model is characterized by an appropriate statistical mechanics framework, in this work canonical ensembles involving only configurational energies. The all-atom model thus supplies data for Bayesian calibration and validation methods for the molecular model. To address the first challenge, we develop priors based on the maximum entropy principle and likelihood functions based on Gaussian approximations of the uncertainties in the parameter-to-observation error. To address the second challenge, we introduce the notion of model plausibility as a means for model selection. This methodology provides a powerful approach toward constructing coarse-grained models which are most plausible for given all-atom data. We demonstrate the theory and methods through applications to representative atomic structures, and we discuss extensions of the validation process to molecular models of polymer structures encountered in certain semiconductor nanomanufacturing processes. The powerful method of model plausibility as a means for selecting interaction potentials for coarse-grained models is discussed in connection with a coarse-grained hexane molecule. A discussion of how all-atom information is used to construct priors is contained in an appendix.
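    As a minimal illustration of the Bayesian ingredients described above, the grid-based sketch below combines a uniform (maximum-entropy) prior, a Gaussian likelihood for the parameter-to-observation error, and evidence-based model plausibility to choose between two candidate models; the "all-atom" data and both candidate models are synthetic stand-ins, not the paper's potentials.

    ```python
    # Hedged sketch: Bayesian calibration and model plausibility on a grid.
    import numpy as np

    rng = np.random.default_rng(1)
    sigma = 0.3
    data = 2.0 + sigma * rng.normal(size=20)        # surrogate "all-atom" observations

    def log_like(theta, model):
        # Gaussian approximation of the parameter-to-observation error
        r = data - model(theta)
        return -0.5 * np.sum(r**2) / sigma**2 - len(data) * np.log(sigma * np.sqrt(2 * np.pi))

    thetas = np.linspace(0.0, 4.0, 2001)            # support of the uniform prior
    prior = np.full_like(thetas, 1.0 / (thetas[-1] - thetas[0]))
    dtheta = thetas[1] - thetas[0]

    candidates = {"linear": lambda t: t, "quadratic": lambda t: 0.5 * t**2}
    for name, m in candidates.items():
        like = np.exp([log_like(t, m) for t in thetas])
        evidence = np.sum(like * prior) * dtheta    # marginal likelihood of the model
        print(f"{name}: evidence = {evidence:.3g}") # higher evidence = more plausible
    ```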

  2. Development and Validation of a Disease Severity Scoring Model for Pediatric Sepsis.

    PubMed

    Hu, Li; Zhu, Yimin; Chen, Mengshi; Li, Xun; Lu, Xiulan; Liang, Ying; Tan, Hongzhuan

    2016-07-01

    Multiple severity scoring systems have been devised and evaluated in adult sepsis, but a simplified scoring model for pediatric sepsis had not yet been developed. This study aimed to develop and validate a new scoring model to stratify the severity of pediatric sepsis, thus assisting the treatment of sepsis in children. Data from 634 consecutive patients who presented with sepsis at the Children's Hospital of Hunan Province, China, in 2011-2013 were analyzed, with 476 patients placed in the training group and 158 patients in the validation group. Stepwise discriminant analysis was used to develop an accurate discriminant model. A simplified scoring model was then generated using weightings defined by the discriminant coefficients. The discriminant ability of the model was tested by receiver operating characteristic (ROC) curves. The discriminant analysis showed that prothrombin time, D-dimer, total bilirubin, serum total protein, uric acid, PaO2/FiO2 ratio, and myoglobin were associated with severity of sepsis. These seven variables were assigned values of 4, 3, 3, 4, 3, 3, and 3, respectively, based on the standardized discriminant coefficients. Patients with higher scores had a higher risk of severe sepsis. The areas under the ROC curve (AROC) in the validation group were 0.836 for the accurate discriminant model and 0.825 for the simplified scoring model. The proposed disease severity scoring model for pediatric sepsis showed adequate discriminatory capacity and sufficient accuracy, which has important clinical significance in evaluating the severity of pediatric sepsis and predicting its progress.
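    The sketch below shows, with synthetic patients, how such a simplified additive score works: the reported integer weights (4, 3, 3, 4, 3, 3, 3) are applied to binary indicators of abnormal variables, and discrimination is summarized by the area under the ROC curve; the indicator data and outcomes are invented.

    ```python
    # Hedged sketch: an integer-weighted score and its AROC on synthetic data.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    # PT, D-dimer, bilirubin, total protein, uric acid, PaO2/FiO2, myoglobin
    weights = np.array([4, 3, 3, 4, 3, 3, 3])

    rng = np.random.default_rng(2)
    abnormal = rng.integers(0, 2, size=(158, 7))   # 1 = variable abnormal (synthetic)
    severe = rng.integers(0, 2, size=158)          # 1 = severe sepsis (synthetic)

    score = abnormal @ weights                     # simplified additive score
    print("AROC of simplified score:", roc_auc_score(severe, score))
    ```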

  3. [Nursing on the Web: the creation and validation process of a web site on coronary artery disease].

    PubMed

    Marques, Isaac Rosa; Marin, Heimar de Fátima

    2002-01-01

    The World Wide Web is an important source for health information research. A challenge for the Brazilian Nursing Informatics area is to use its potential to promote health education. This paper presents the development and validation model used for an educational Web site on coronary heart disease, named CardioSite. Its creation followed a method with phases of conceptual modeling, development, implementation, and evaluation. In the evaluation phase, validation was performed through an online panel of informatics and health experts. The results demonstrated that the information was reliable and valid. Given that no national official systems are available for this purpose, this model proved effective in assessing the quality of the Web site's content.

  4. Development and Validation of Personality Disorder Spectra Scales for the MMPI-2-RF.

    PubMed

    Sellbom, Martin; Waugh, Mark H; Hopwood, Christopher J

    2018-01-01

    The purpose of this study was to develop and validate a set of MMPI-2-RF (Ben-Porath & Tellegen, 2008/2011) personality disorder (PD) spectra scales. These scales could serve the purpose of assisting with DSM-5 PD diagnosis and help link categorical and dimensional conceptions of personality pathology within the MMPI-2-RF. We developed and provided initial validity results for scales corresponding to the 10 PD constructs listed in the DSM-5 using data from student, community, clinical, and correctional samples. Initial validation efforts indicated good support for criterion validity with an external PD measure as well as with dimensional personality traits included in the DSM-5 alternative model for PDs. Construct validity results using psychosocial history and therapists' ratings in a large clinical sample were generally supportive as well. Overall, these brief scales provide clinicians using MMPI-2-RF data with estimates of DSM-5 PD constructs that can support cross-model connections between categorical and dimensional assessment approaches.

  5. Validity of Students Worksheet Based Problem-Based Learning for 9th Grade Junior High School in living organism Inheritance and Food Biotechnology.

    NASA Astrophysics Data System (ADS)

    Jefriadi, J.; Ahda, Y.; Sumarmin, R.

    2018-04-01

    Preliminary research showed that the student worksheets used by teachers have several disadvantages: the worksheets send learners straight into an investigation without first directing them to a problem or providing stimulation, they do not provide concrete images, and their presentation of activities does not follow any of the learning models recommended by the curriculum. To address these problems, a student worksheet based on problem-based learning was developed. This is development research using the Plomp model, whose phases are preliminary research, development, and assessment. The instruments used in data collection included observation/interview sheets, a self-evaluation instrument, and validity instruments. Expert validation of the student worksheet gave a valid result, with an average value of 80.1%. The validity of the problem-based-learning student worksheet for 9th grade junior high school on living organism inheritance and food biotechnology falls into the valid category.

  6. Validity of High School Physic Module With Character Values Using Process Skill Approach In STKIP PGRI West Sumatera

    NASA Astrophysics Data System (ADS)

    Anaperta, M.; Helendra, H.; Zulva, R.

    2018-04-01

    This study aims to describe the validity of a physics module with character values using a process skills approach for dynamic electricity material in SMA/MA and SMK physics. The type of research is development research. The module development follows the model proposed by Plomp, which consists of (1) the preliminary research phase, (2) the prototyping phase, and (3) the assessment phase. The work reported here covers the initial investigation and design phases. Validity data were collected through observation and questionnaires. In the initial investigation phase, curriculum analysis, student analysis, and concept analysis were conducted. In the design and realization phase, the module was designed for SMA/MA and SMK subjects on dynamic electricity material. This was followed by formative evaluation, comprising self-evaluation and prototyping (expert reviews, one-to-one, and small group evaluation), at which stage the validity assessment was performed. The research data were obtained through the module validation sheet, which yielded a valid module.

  7. Why don't you exercise? Development of the Amotivation Toward Exercise Scale among older inactive individuals.

    PubMed

    Vlachopoulos, Symeon P; Gigoudi, Maria A

    2008-07-01

    This article reports on the development and initial validation of the Amotivation Toward Exercise Scale (ATES), which reflects a taxonomy of older adults' reasons to refrain from exercise. Drawing on work by Pelletier, Dion, Tuson, and Green-Demers (1999) and Legault, Green-Demers, and Pelletier (2006), the scale's dimensions were outcome beliefs, capacity beliefs, effort beliefs, and value amotivation beliefs toward exercise. The results supported a 4-factor correlated model that fit the data better than a unidimensional model, a 4-factor uncorrelated model, or a hierarchical model, with strong internal reliability for all the subscales. Evidence also emerged for the discriminant validity of the subscale scores. Furthermore, the predictive validity of the subscale scores was supported, and satisfactory measurement invariance was demonstrated across the calibration and validation samples, supporting the generalizability of the scale's measurement properties.

  8. Development and Validity Testing of Belief Measurement Model in Buddhism for Junior High School Students at Chiang Rai Buddhist Scripture School: An Application for Multitrait-Multimethod Analysis

    ERIC Educational Resources Information Center

    Chaidi, Thirachai; Damrongpanich, Sunthorapot

    2016-01-01

    The purposes of this study were to develop a model to measure the belief in Buddhism of junior high school students at Chiang Rai Buddhist Scripture School, and to determine construct validity of the model for measuring the belief in Buddhism by using Multitrait-Multimethod analysis. The samples were 590 junior high school students at Buddhist…

  9. Tools for Evaluating Fault Detection and Diagnostic Methods for HVAC Secondary Systems

    NASA Astrophysics Data System (ADS)

    Pourarian, Shokouh

    Although modern buildings are using increasingly sophisticated energy management and control systems with tremendous control and monitoring capabilities, building systems routinely fail to perform as designed. More advanced building control, operation, and automated fault detection and diagnosis (AFDD) technologies are needed to achieve the goal of net-zero energy commercial buildings. Much effort has been devoted to developing such technologies for primary heating, ventilating, and air conditioning (HVAC) systems and some secondary systems. However, secondary systems such as fan coil units and dual duct systems, although widely used in commercial, industrial, and multifamily residential buildings, have received very little attention. This research study aims at developing tools that provide simulation capabilities to develop and evaluate advanced control, operation, and AFDD technologies for these less studied secondary systems. In this study, HVACSIM+ is selected as the simulation environment. Besides developing dynamic models for the above-mentioned secondary systems, two other issues related to the HVACSIM+ environment are also investigated. One issue is the nonlinear equation solver used in HVACSIM+ (Powell's hybrid method in subroutine SNSQ). Several previous research projects (ASHRAE RP-825 and RP-1312) found that SNSQ is especially unstable at the beginning of a simulation and is sometimes unable to converge to a solution. Another issue is related to the zone model in the HVACSIM+ library of components. Dynamic simulation of secondary HVAC systems unavoidably requires a zone model that interacts systematically and dynamically with the building's surroundings; the accuracy and reliability of the building zone model therefore affects the operational data that the developed dynamic tool generates to predict secondary system function. The available model does not simulate the impact of direct solar radiation that enters a zone through glazing, and the zone model study is conducted in this direction to modify the existing model. In this research project, the following tasks are completed and summarized in this report: (1) develop dynamic simulation models in the HVACSIM+ environment for common fan coil unit and dual duct system configurations, able to produce both fault-free and faulty operational data under a wide variety of faults and severity levels for advanced control, operation, and AFDD technology development and evaluation; (2) develop a model structure (grouping of blocks and superblocks, treatment of state variables, initial and boundary conditions, and selection of the equation solver) that can simulate a dual duct system efficiently with satisfactory stability; (3) design and conduct a comprehensive and systematic validation procedure using collected experimental data to validate the developed simulation models under both fault-free and faulty operational conditions; (4) conduct a numerical study comparing two solution techniques, Powell's hybrid (PH) and Levenberg-Marquardt (LM), in terms of robustness and accuracy; and (5) modify the thermal state of the existing building zone model in the HVACSIM+ library of components so that heat transmitted through glazing is considered as a heat source in transient building zone load prediction. In this report, the literature on existing HVAC dynamic modeling environments and models, HVAC model validation methodologies, and fault modeling and validation methodologies is reviewed. The overall methodologies used for fault-free and fault model development and validation are introduced. Detailed model development and validation results for the two secondary systems, i.e., the fan coil unit and the dual duct system, are summarized. Experimental data, mostly from the Iowa Energy Center Energy Resource Station, are used to validate the models developed in this project. Satisfactory model performance in both fault-free and fault simulation studies is observed for all studied systems.
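    As a toy illustration of task (4), the sketch below contrasts the two solution techniques via scipy.optimize.root: Powell's hybrid method ("hybr", the algorithm behind SNSQ) and Levenberg-Marquardt ("lm"); the two-equation system is an invented stand-in for a component network's residual equations.

    ```python
    # Hedged sketch: Powell's hybrid vs. Levenberg-Marquardt on a small system.
    import numpy as np
    from scipy.optimize import root

    def residuals(x):
        # toy stand-in for mass/energy balance residuals of a component network
        return [x[0]**2 + x[1]**2 - 4.0,
                x[0] * x[1] - 1.0]

    for method in ("hybr", "lm"):
        sol = root(residuals, x0=[1.0, 1.0], method=method)
        print(method, sol.success, sol.x, "max |residual|:", np.max(np.abs(sol.fun)))
    ```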

  10. Improving Risk Adjustment for Mortality After Pediatric Cardiac Surgery: The UK PRAiS2 Model.

    PubMed

    Rogers, Libby; Brown, Katherine L; Franklin, Rodney C; Ambler, Gareth; Anderson, David; Barron, David J; Crowe, Sonya; English, Kate; Stickley, John; Tibby, Shane; Tsang, Victor; Utley, Martin; Witter, Thomas; Pagel, Christina

    2017-07-01

    Partial Risk Adjustment in Surgery (PRAiS), a risk model for 30-day mortality after children's heart surgery, has been used by the UK National Congenital Heart Disease Audit to report expected risk-adjusted survival since 2013. This study aimed to improve the model by incorporating additional comorbidity and diagnostic information. The model development dataset was all procedures performed between 2009 and 2014 in all UK and Ireland congenital cardiac centers. The outcome measure was death within each 30-day surgical episode. Model development followed an iterative process of clinical discussion and development and assessment of models using logistic regression under 25 × 5 cross-validation. Performance was measured using Akaike information criterion, the area under the receiver-operating characteristic curve (AUC), and calibration. The final model was assessed in an external 2014 to 2015 validation dataset. The development dataset comprised 21,838 30-day surgical episodes, with 539 deaths (mortality, 2.5%). The validation dataset comprised 4,207 episodes, with 97 deaths (mortality, 2.3%). The updated risk model included 15 procedural, 11 diagnostic, and 4 comorbidity groupings, and nonlinear functions of age and weight. Performance under cross-validation was: median AUC of 0.83 (range, 0.82 to 0.83), median calibration slope and intercept of 0.92 (range, 0.64 to 1.25) and -0.23 (range, -1.08 to 0.85) respectively. In the validation dataset, the AUC was 0.86 (95% confidence interval [CI], 0.82 to 0.89), and the calibration slope and intercept were 1.01 (95% CI, 0.83 to 1.18) and 0.11 (95% CI, -0.45 to 0.67), respectively, showing excellent performance. A more sophisticated PRAiS2 risk model for UK use was developed with additional comorbidity and diagnostic information, alongside age and weight as nonlinear variables. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
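    As a short illustration of the calibration slope and intercept assessment used above: regress observed outcomes on the logit of the predicted risks, where a slope near 1 and an intercept near 0 indicate good calibration. The predicted risks below are synthetic and, by construction, well calibrated; statsmodels is an assumed dependency.

    ```python
    # Hedged sketch: calibration slope/intercept via logistic recalibration.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    p_hat = rng.uniform(0.005, 0.2, size=4000)     # predicted 30-day mortality risks
    y = rng.binomial(1, p_hat)                     # outcomes drawn from those risks

    logit_p = np.log(p_hat / (1 - p_hat))          # linear predictor
    fit = sm.Logit(y, sm.add_constant(logit_p)).fit(disp=0)
    intercept, slope = fit.params                  # ideal: intercept ~ 0, slope ~ 1
    print(f"calibration intercept = {intercept:.2f}, slope = {slope:.2f}")
    ```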

  11. CFD Code Development for Combustor Flows

    NASA Technical Reports Server (NTRS)

    Norris, Andrew

    2003-01-01

    During the lifetime of this grant, work was performed in the areas of model development, code development, code validation, and code application. Model development included the PDF combustion module, chemical kinetics based on thermodynamics, neural network storage of chemical kinetics, ILDM chemical kinetics, and assumed-PDF work. Many of these models were then implemented in the code, and many improvements were made to the code itself, including the addition of new chemistry integrators, property evaluation schemes, new chemistry models, and turbulence-chemistry interaction methodology. Validation of all new models and code improvements was also performed, and the code was applied to the ZCET program and the NPSS GEW combustor program. Several important items remain under development, including NOx post-processing, assumed-PDF model development, and chemical kinetics development. It is expected that this work will continue under the new grant.

  12. Development and validation of a LC-MS/MS assay for quantitation of plasma citrulline for application to animal models of the acute radiation syndrome across multiple species.

    PubMed

    Jones, Jace W; Tudor, Gregory; Bennett, Alexander; Farese, Ann M; Moroni, Maria; Booth, Catherine; MacVittie, Thomas J; Kane, Maureen A

    2014-07-01

    The potential risk of a radiological catastrophe highlights the need for identifying and validating potential biomarkers that accurately predict radiation-induced organ damage. A key target organ that is acutely sensitive to the effects of irradiation is the gastrointestinal (GI) tract; the resulting injury is referred to as the GI acute radiation syndrome (GI-ARS). Recently, citrulline has been identified as a potential circulating biomarker for radiation-induced GI damage. Prior to biologically validating citrulline as a biomarker for radiation-induced GI injury, there is the important task of developing and validating a quantitation assay for citrulline detection in the radiation animal models used for biomarker validation. Herein, we describe the analytical development and validation of a liquid chromatography tandem mass spectrometry assay for citrulline detection that incorporates stable isotope-labeled internal standards. Analytical validation for specificity, linearity, lower limit of quantitation, accuracy, intra- and interday precision, extraction recovery, matrix effects, and stability was performed under sample collection and storage conditions according to the Guidance for Industry, Bioanalytical Method Validation, issued by the US Food and Drug Administration. In addition, the method was biologically validated using plasma from well-characterized mouse, minipig, and nonhuman primate GI-ARS models. The results demonstrated that circulating citrulline can be confidently quantified from plasma. Additionally, circulating citrulline displayed a time-dependent response for radiological doses covering GI-ARS across multiple species.

  13. Validation of psychoanalytic theories: towards a conceptualization of references.

    PubMed

    Zachrisson, Anders; Zachrisson, Henrik Daae

    2005-10-01

    The authors discuss criteria for the validation of psychoanalytic theories and develop a heuristic and normative model of the references needed for this. Their core question in this paper is: can psychoanalytic theories be validated exclusively from within psychoanalytic theory (internal validation), or are references to sources of knowledge other than psychoanalysis also necessary (external validation)? They discuss aspects of the classic truth criteria correspondence and coherence, both from the point of view of contemporary psychoanalysis and of contemporary philosophy of science. The authors present arguments for both external and internal validation. Internal validation has to deal with the problems of subjectivity of observations and circularity of reasoning, external validation with the problem of relevance. They recommend a critical attitude towards psychoanalytic theories, which, by carefully scrutinizing weak points and invalidating observations in the theories, reduces the risk of wishful thinking. The authors conclude by sketching a heuristic model of validation. This model combines correspondence and coherence with internal and external validation into a four-leaf model for references for the process of validating psychoanalytic theories.

  14. Joint use of over- and under-sampling techniques and cross-validation for the development and assessment of prediction models.

    PubMed

    Blagus, Rok; Lusa, Lara

    2015-11-04

    Prediction models are used in clinical research to develop rules that can be used to accurately predict the outcome of patients based on some of their characteristics. They represent a valuable tool in the decision making process of clinicians and health policy makers, as they enable them to estimate the probability that patients have or will develop a disease, will respond to a treatment, or that their disease will recur. The interest devoted to prediction models in the biomedical community has been growing in the last few years. Often the data used to develop the prediction models are class-imbalanced, as only a few patients experience the event (and therefore belong to the minority class). Prediction models developed using class-imbalanced data tend to achieve sub-optimal predictive accuracy in the minority class. This problem can be diminished by using sampling techniques aimed at balancing the class distribution. These techniques include under- and oversampling, where a fraction of the majority class samples are retained in the analysis or new samples from the minority class are generated. The correct assessment of how the prediction model is likely to perform on independent data is of crucial importance; in the absence of an independent data set, cross-validation is normally used. While the importance of correct cross-validation is well documented in the biomedical literature, the challenges posed by the joint use of sampling techniques and cross-validation have not been addressed. We show that care must be taken to ensure that cross-validation is performed correctly on sampled data, and that the risk of overestimating the predictive accuracy is greater when oversampling techniques are used. Examples based on the re-analysis of real datasets and simulation studies are provided. We identify some results from the biomedical literature where incorrect cross-validation was performed and where we expect that the performance of oversampling techniques was heavily overestimated.
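    The sketch below illustrates the paper's central caution under stated assumptions: oversampling must be applied inside each cross-validation fold, to the training part only, never to the whole dataset before splitting; simple random duplication of minority samples stands in for more elaborate oversampling techniques.

    ```python
    # Hedged sketch: correct cross-validation with fold-internal oversampling.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import StratifiedKFold

    rng = np.random.default_rng(4)
    X = rng.normal(size=(300, 10))
    y = (rng.uniform(size=300) < 0.1).astype(int)          # ~10% minority class

    def oversample(X, y):
        # duplicate random minority samples until the classes are balanced
        minority = np.flatnonzero(y == 1)
        extra = rng.choice(minority, size=(y == 0).sum() - minority.size, replace=True)
        keep = np.concatenate([np.arange(len(y)), extra])
        return X[keep], y[keep]

    aucs = []
    for train, test in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
        X_tr, y_tr = oversample(X[train], y[train])        # balance the training fold only
        model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        aucs.append(roc_auc_score(y[test], model.predict_proba(X[test])[:, 1]))
    print("cross-validated AUC:", np.mean(aucs))
    ```

    Oversampling before the split would let duplicated minority samples appear in both the training and test folds, which is exactly the overestimation the authors warn against.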

  15. External validation of a Cox prognostic model: principles and methods

    PubMed Central

    2013-01-01

    Background A prognostic model should not enter clinical practice unless it has been demonstrated that it performs a useful role. External validation denotes evaluation of model performance in a sample independent of that used to develop the model. Unlike for logistic regression models, external validation of Cox models is sparsely treated in the literature. Successful validation of a model means achieving satisfactory discrimination and calibration (prediction accuracy) in the validation sample. Validating Cox models is not straightforward because event probabilities are estimated relative to an unspecified baseline function. Methods We describe statistical approaches to external validation of a published Cox model according to the level of published information, specifically (1) the prognostic index only, (2) the prognostic index together with Kaplan-Meier curves for risk groups, and (3) the first two plus the baseline survival curve (the estimated survival function at the mean prognostic index across the sample). The most challenging task, requiring level 3 information, is assessing calibration, for which we suggest a method of approximating the baseline survival function. Results We apply the methods to two comparable datasets in primary breast cancer, treating one as derivation and the other as validation sample. Results are presented for discrimination and calibration. We demonstrate plots of survival probabilities that can assist model evaluation. Conclusions Our validation methods are applicable to a wide range of prognostic studies and provide researchers with a toolkit for external validation of a published Cox model. PMID:23496923
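    As a minimal sketch of the workflow (using the lifelines package as an assumed tool, with synthetic cohorts): fit a Cox model on a derivation sample, then assess discrimination on an independent validation sample with Harrell's C-index.

    ```python
    # Hedged sketch: external validation of a Cox model with lifelines.
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter
    from lifelines.utils import concordance_index

    rng = np.random.default_rng(5)

    def make_cohort(n):
        x = rng.normal(size=n)
        time = rng.exponential(scale=np.exp(-0.7 * x))  # higher x -> earlier events
        event = (rng.uniform(size=n) < 0.7).astype(int) # ~30% censoring (synthetic)
        return pd.DataFrame({"x": x, "time": time, "event": event})

    derivation, validation = make_cohort(400), make_cohort(200)

    cph = CoxPHFitter().fit(derivation, duration_col="time", event_col="event")
    prognostic_index = cph.predict_partial_hazard(validation)

    # Discrimination: a higher hazard should pair with a shorter survival time
    c = concordance_index(validation["time"], -prognostic_index, validation["event"])
    print("validation C-index:", round(c, 3))
    ```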

  16. Derivation and validation of a novel risk score for safe discharge after acute lower gastrointestinal bleeding: a modelling study.

    PubMed

    Oakland, Kathryn; Jairath, Vipul; Uberoi, Raman; Guy, Richard; Ayaru, Lakshmana; Mortensen, Neil; Murphy, Mike F; Collins, Gary S

    2017-09-01

    Acute lower gastrointestinal bleeding is a common reason for emergency hospital admission, and identification of patients at low risk of harm, who are therefore suitable for outpatient investigation, is a clinical and research priority. We aimed to develop and externally validate a simple risk score to identify patients with lower gastrointestinal bleeding who could safely avoid hospital admission. We undertook model development with data from the National Comparative Audit of Lower Gastrointestinal Bleeding from 143 hospitals in the UK in 2015. Multivariable logistic regression modelling was used to identify predictors of safe discharge, defined as the absence of rebleeding, blood transfusion, therapeutic intervention, 28 day readmission, or death. The model was converted into a simplified risk scoring system and was externally validated in 288 patients admitted with lower gastrointestinal bleeding (184 safely discharged) from two UK hospitals (Charing Cross Hospital, London, and Hammersmith Hospital, London) that had not contributed data to the development cohort. We calculated C statistics for the new model and did a comparative assessment with six previously developed risk scores. Of 2336 prospectively identified admissions in the development cohort, 1599 (68%) were safely discharged. Age, sex, previous admission for lower gastrointestinal bleeding, rectal examination findings, heart rate, systolic blood pressure, and haemoglobin concentration strongly discriminated safe discharge in the development cohort (C statistic 0.84, 95% CI 0.82-0.86) and in the validation cohort (0.79, 0.73-0.84). Calibration plots showed the new risk score to have good calibration in the validation cohort. The score was better than the Rockall, Blatchford, Strate, BLEED, AIMS65, and NOBLADS scores in predicting safe discharge. A score of 8 or less predicts a 95% probability of safe discharge. We developed and validated a novel clinical prediction model with good discriminative performance to identify patients with lower gastrointestinal bleeding who are suitable for safe outpatient management, which has important economic and resource implications. Bowel Disease Research Foundation and National Health Service Blood and Transplant. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Development of an anaerobic threshold (HRLT, HRVT) estimation equation using the heart rate threshold (HRT) during the treadmill incremental exercise test

    PubMed Central

    Ham, Joo-ho; Park, Hun-Young; Kim, Youn-ho; Bae, Sang-kon; Ko, Byung-hoon

    2017-01-01

    [Purpose] The purpose of this study was to develop a regression model to estimate the heart rate at the lactate threshold (HRLT) and the heart rate at the ventilatory threshold (HRVT) using the heart rate threshold (HRT), and to test the validity of the regression model. [Methods] We performed a graded exercise test with a treadmill in 220 normal individuals (men: 112, women: 108) aged 20–59 years. HRT, HRLT, and HRVT were measured in all subjects. A regression model was developed to estimate HRLT and HRVT using HRT with 70% of the data (men: 79, women: 76) through randomization (7:3), with the Bernoulli trial. The validity of the regression model developed with the remaining 30% of the data (men: 33, women: 32) was also examined. [Results] Based on the regression coefficient, we found that the independent variable HRT was a significant variable in all regression models. The adjusted R2 of the developed regression models averaged about 70%, and the standard error of estimation of the validity test results was 11 bpm, which is similar to that of the developed model. [Conclusion] These results suggest that HRT is a useful parameter for predicting HRLT and HRVT. PMID:29036765

  18. Development of an anaerobic threshold (HRLT, HRVT) estimation equation using the heart rate threshold (HRT) during the treadmill incremental exercise test.

    PubMed

    Ham, Joo-Ho; Park, Hun-Young; Kim, Youn-Ho; Bae, Sang-Kon; Ko, Byung-Hoon; Nam, Sang-Seok

    2017-09-30

    The purpose of this study was to develop a regression model to estimate the heart rate at the lactate threshold (HRLT) and the heart rate at the ventilatory threshold (HRVT) using the heart rate threshold (HRT), and to test the validity of the regression model. We performed a graded exercise test with a treadmill in 220 normal individuals (men: 112, women: 108) aged 20-59 years. HRT, HRLT, and HRVT were measured in all subjects. A regression model was developed to estimate HRLT and HRVT using HRT with 70% of the data (men: 79, women: 76) through randomization (7:3), with the Bernoulli trial. The validity of the regression model developed with the remaining 30% of the data (men: 33, women: 32) was also examined. Based on the regression coefficient, we found that the independent variable HRT was a significant variable in all regression models. The adjusted R2 of the developed regression models averaged about 70%, and the standard error of estimation of the validity test results was 11 bpm, which is similar to that of the developed model. These results suggest that HRT is a useful parameter for predicting HRLT and HRVT. ©2017 The Korean Society for Exercise Nutrition
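    As a small sketch of the approach shared by records 17 and 18, under stated assumptions: a simple linear regression estimating HRLT from HRT, fit on roughly 70% of subjects and checked on the held-out 30% via R2 and the standard error of estimate (SEE); all heart-rate values below are synthetic.

    ```python
    # Hedged sketch: develop on ~70%, validate on ~30%, report R^2 and SEE.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(6)
    hrt = rng.uniform(120, 180, size=220)                 # heart rate threshold (bpm)
    hrlt = 0.9 * hrt + 15 + rng.normal(0, 11, size=220)   # HR at lactate threshold

    x_dev, x_val, y_dev, y_val = train_test_split(
        hrt.reshape(-1, 1), hrlt, test_size=0.3, random_state=0)
    model = LinearRegression().fit(x_dev, y_dev)

    resid = y_val - model.predict(x_val)
    see = np.sqrt(np.sum(resid**2) / (len(y_val) - 2))    # 2 estimated parameters
    print(f"validation R^2 = {model.score(x_val, y_val):.2f}, SEE = {see:.1f} bpm")
    ```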

  19. Green Pea and Garlic Puree Model Food Development for Thermal Pasteurization Process Quality Evaluation.

    PubMed

    Bornhorst, Ellen R; Tang, Juming; Sablani, Shyam S; Barbosa-Cánovas, Gustavo V; Liu, Fang

    2017-07-01

    Development and selection of model foods is a critical part of microwave thermal process development, simulation validation, and optimization. Previously developed model foods for pasteurization process evaluation utilized Maillard reaction products as the time-temperature integrators, which resulted in similar temperature sensitivity among the models. The aim of this research was to develop additional model foods based on different time-temperature integrators, determine their dielectric properties and color change kinetics, and validate the optimal model food in hot water and microwave-assisted pasteurization processes. Color, quantified using the a* value, was selected as the time-temperature indicator for green pea and garlic puree model foods. Results showed 915 MHz microwaves had a greater penetration depth into the green pea model food than into the garlic. a* value reaction rates for the green pea model were approximately 4 times slower than in the garlic model food; slower reaction rates were preferred for the application of the model food in this study, that is, quality evaluation for a target process of 90 °C for 10 min at the cold spot. Pasteurization validation used the green pea model food, and results showed quantifiable color differences among the unheated control, hot water pasteurization, and the microwave-assisted thermal pasteurization system. Both model foods developed in this research could be utilized for quality assessment and optimization of various thermal pasteurization processes. © 2017 Institute of Food Technologists®.
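    The sketch below illustrates the rate comparison under an assumed first-order colour-change model: estimate rate constants for the a* value from ln-linearized data, so the preferred slower-reacting model food shows the smaller k; the heating times and a* magnitudes are invented, tuned only to reproduce the roughly 4-fold ratio.

    ```python
    # Hedged sketch: first-order a* colour-change kinetics for two model foods.
    import numpy as np

    t = np.array([0.0, 5.0, 10.0, 15.0, 20.0])          # heating time (min)

    def rate_constant(a_star):
        # first-order kinetics: ln(a*/a*0) = -k t, so k is minus the slope
        slope, _ = np.polyfit(t, np.log(a_star / a_star[0]), 1)
        return -slope

    garlic = 20.0 * np.exp(-0.12 * t)                   # faster colour change
    green_pea = 20.0 * np.exp(-0.03 * t)                # ~4x slower, as reported
    print("rate ratio:", rate_constant(garlic) / rate_constant(green_pea))
    ```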

  20. Identification of patients at high risk for Clostridium difficile infection: development and validation of a risk prediction model in hospitalized patients treated with antibiotics.

    PubMed

    van Werkhoven, C H; van der Tempel, J; Jajou, R; Thijsen, S F T; Diepersloot, R J A; Bonten, M J M; Postma, D F; Oosterheert, J J

    2015-08-01

    To develop and validate a prediction model for Clostridium difficile infection (CDI) in hospitalized patients treated with systemic antibiotics, we performed a case-cohort study in a tertiary care hospital (derivation) and a secondary care hospital (validation). Cases had a positive Clostridium difficile test and were treated with systemic antibiotics before suspicion of CDI. Controls were randomly selected from hospitalized patients treated with systemic antibiotics. Potential predictors were selected from the literature. Logistic regression was used to derive the model. Discrimination and calibration of the model were tested in internal and external validation. A total of 180 cases and 330 controls were included for derivation. Age >65 years, recent hospitalization, CDI history, malignancy, chronic renal failure, use of immunosuppressants, receipt of antibiotics before admission, nonsurgical admission, admission to the intensive care unit, gastric tube feeding, treatment with cephalosporins, and presence of an underlying infection were independent predictors of CDI. The area under the receiver operating characteristic curve of the model in the derivation cohort was 0.84 (95% confidence interval 0.80-0.87), and was reduced to 0.81 after internal validation. In external validation, consisting of 97 cases and 417 controls, the model area under the curve was 0.81 (95% confidence interval 0.77-0.85) and model calibration was adequate (Brier score 0.004). A simplified risk score was derived. Using a cutoff of 7 points, the positive predictive value, sensitivity and specificity were 1.0%, 72% and 73%, respectively. In conclusion, a risk prediction model was developed and validated, with good discrimination and calibration, that can be used to target preventive interventions in patients at increased risk of CDI. Copyright © 2015 European Society of Clinical Microbiology and Infectious Diseases. Published by Elsevier Ltd. All rights reserved.
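    As a small sketch of the two reported checks, with synthetic scores and outcomes: a Brier score for calibration and the sensitivity/specificity of a simplified score at the 7-point cutoff; none of the numbers below are from the study.

    ```python
    # Hedged sketch: Brier score plus cutoff-based sensitivity/specificity.
    import numpy as np
    from sklearn.metrics import brier_score_loss

    rng = np.random.default_rng(7)
    risk_score = rng.integers(0, 15, size=514)          # simplified score (points)
    p_hat = 1 / (1 + np.exp(-(risk_score - 10)))        # predicted CDI probability
    cdi = rng.binomial(1, p_hat)                        # observed CDI (synthetic)

    print("Brier score:", brier_score_loss(cdi, p_hat))

    flagged = risk_score >= 7                           # cutoff of 7 points
    sens = (flagged & (cdi == 1)).sum() / (cdi == 1).sum()
    spec = (~flagged & (cdi == 0)).sum() / (cdi == 0).sum()
    print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")
    ```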

  1. Fostering creativity in product and service development: validation in the domain of information technology.

    PubMed

    Zeng, Liang; Proctor, Robert W; Salvendy, Gavriel

    2011-06-01

    This research is intended to empirically validate a general model of creative product and service development proposed in the literature. A current research gap inspired construction of a conceptual model to capture fundamental phases and pertinent facilitating metacognitive strategies in the creative design process. The model also depicts the mechanism by which design creativity affects consumer behavior. The validity and assets of this model have not yet been investigated. Four laboratory studies were conducted to demonstrate the value of the proposed cognitive phases and associated metacognitive strategies in the conceptual model. Realistic product and service design problems were used in creativity assessment to ensure ecological validity. Design creativity was enhanced by explicit problem analysis, whereby one formulates problems from different perspectives and at different levels of abstraction. Remote association in conceptual combination spawned more design creativity than did near association. Abstraction led to greater creativity in conducting conceptual expansion than did specificity, which induced mental fixation. Domain-specific knowledge and experience enhanced design creativity, indicating that design can be of a domain-specific nature. Design creativity added integrated value to products and services and positively influenced customer behavior. The validity and value of the proposed conceptual model is supported by empirical findings. The conceptual model of creative design could underpin future theory development. Propositions advanced in this article should provide insights and approaches to facilitate organizations pursuing product and service creativity to gain competitive advantage.

  2. Simulator validation results and proposed reporting format from flight testing a software model of a complex, high-performance airplane.

    DOT National Transportation Integrated Search

    2008-01-01

    Computer simulations are often used in aviation studies. These simulation tools may require complex, high-fidelity aircraft models. Since many of the flight models used are third-party developed products, independent validation is desired prior to im...

  3. Crazy like a fox. Validity and ethics of animal models of human psychiatric disease.

    PubMed

    Rollin, Michael D H; Rollin, Bernard E

    2014-04-01

    Animal models of human disease play a central role in modern biomedical science. Developing animal models for human mental illness presents unique practical and philosophical challenges. In this article we argue that (1) existing animal models of psychiatric disease are not valid, (2) attempts to model syndromes are undermined by current nosology, (3) models of symptoms are rife with circular logic and anthropomorphism, (4) any model must make unjustified assumptions about subjective experience, and (5) any model deemed valid would be inherently unethical, for if an animal adequately models human subjective experience, then there is no morally relevant difference between that animal and a human.

  4. Analysis of various quality attributes of sunflower and soybean plants by near infra-red reflectance spectroscopy: Development and validation of calibration models

    USDA-ARS?s Scientific Manuscript database

    Sunflower and soybean are summer annuals that can be grown as an alternative to corn and may be particularly useful in organic production systems. Rapid and low cost methods of analyzing plant quality would be helpful for crop management. We developed and validated calibration models for Near-infrar...

  5. Development and validation of multivariable predictive model for thromboembolic events in lymphoma patients.

    PubMed

    Antic, Darko; Milic, Natasa; Nikolovski, Srdjan; Todorovic, Milena; Bila, Jelena; Djurdjevic, Predrag; Andjelic, Bosko; Djurasinovic, Vladislava; Sretenovic, Aleksandra; Vukovic, Vojin; Jelicic, Jelena; Hayman, Suzanne; Mihaljevic, Biljana

    2016-10-01

    Lymphoma patients are at increased risk of thromboembolic events, but thromboprophylaxis in these patients is largely underused. We sought to develop and validate a simple model, based on individual clinical and laboratory patient characteristics, that would designate lymphoma patients at risk for a thromboembolic event. The study population included 1,820 lymphoma patients who were treated in the Lymphoma Departments at the Clinics of Hematology, Clinical Center of Serbia and Clinical Center Kragujevac. The model was developed using data from a derivation cohort (n = 1,236) and further assessed in a validation cohort (n = 584). Sixty-five patients (5.3%) in the derivation cohort and 34 (5.8%) patients in the validation cohort developed thromboembolic events. The variables independently associated with risk for thromboembolism were: previous venous and/or arterial events, mediastinal involvement, BMI > 30 kg/m², reduced mobility, extranodal localization, development of neutropenia, and hemoglobin level < 100 g/L. Based on the risk model score, the population was divided into the following risk categories: low (score 0-1), intermediate (score 2-3), and high (score >3). For patients classified at risk (intermediate and high-risk scores), the model produced a negative predictive value of 98.5%, a positive predictive value of 25.1%, a sensitivity of 75.4%, and a specificity of 87.5%. A high-risk score had a positive predictive value of 65.2%. The diagnostic performance measures retained similar values in the validation cohort. The developed prognostic Thrombosis Lymphoma (ThroLy) score is more specific for lymphoma patients than any other available score targeting thrombosis in cancer patients. Am. J. Hematol. 91:1014-1019, 2016. © 2016 Wiley Periodicals, Inc.

  6. A Framework for Text Mining in Scientometric Study: A Case Study in Biomedicine Publications

    NASA Astrophysics Data System (ADS)

    Silalahi, V. M. M.; Hardiyati, R.; Nadhiroh, I. M.; Handayani, T.; Rahmaida, R.; Amelia, M.

    2018-04-01

    Data on Indonesian research publications in the domain of biomedicine were collected for text mining in a scientometric study. The goal is to build a predictive model that classifies research publications by their potency for downstreaming. The model is based on drug development processes adapted from the literature. We describe the effort to build the conceptual model and to develop a corpus of research publications in Indonesian biomedicine, and then investigate the problems associated with building the corpus and validating the model. Based on this experience, a framework is proposed to manage scientometric studies based on text mining. Our method shows the effectiveness of conducting a text-mining-based scientometric study that yields a valid classification model. The validity of the model rests mainly on iterative and close interaction with domain experts, from identifying the issues and building a conceptual model through to labelling, validation, and interpretation of results.
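    As a minimal sketch of the kind of publication classifier such a framework targets: TF-IDF features feeding a linear model that predicts potency for downstreaming. The four labelled abstracts are invented placeholders, not the study's corpus or labels.

    ```python
    # Hedged sketch: a tiny text-classification pipeline for publications.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    abstracts = [
        "preclinical efficacy of a candidate compound in animal models",
        "phase I clinical trial of a novel drug formulation",
        "epidemiological survey of disease prevalence in a province",
        "molecular docking study without in vivo validation",
    ]
    labels = [1, 1, 0, 0]   # 1 = potential for downstreaming (invented labels)

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    clf.fit(abstracts, labels)
    print(clf.predict(["in vivo efficacy of a drug candidate"]))
    ```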

  7. A clinical reasoning model focused on clients' behaviour change with reference to physiotherapists: its multiphase development and validation.

    PubMed

    Elvén, Maria; Hochwälder, Jacek; Dean, Elizabeth; Söderlund, Anne

    2015-05-01

    A biopsychosocial approach and behaviour change strategies have long been proposed to serve as a basis for addressing current multifaceted health problems. This emphasis has implications for clinical reasoning of health professionals. This study's aim was to develop and validate a conceptual model to guide physiotherapists' clinical reasoning focused on clients' behaviour change. Phase 1 consisted of the exploration of existing research and the research team's experiences and knowledge. Phases 2a and 2b consisted of validation and refinement of the model based on input from physiotherapy students in two focus groups (n = 5 per group) and from experts in behavioural medicine (n = 9). Phase 1 generated theoretical and evidence bases for the first version of a model. Phases 2a and 2b established the validity and value of the model. The final model described clinical reasoning focused on clients' behaviour change as a cognitive, reflective, collaborative and iterative process with multiple interrelated levels that included input from the client and physiotherapist, a functional behavioural analysis of the activity-related target behaviour and the selection of strategies for behaviour change. This unique model, theory- and evidence-informed, has been developed to help physiotherapists to apply clinical reasoning systematically in the process of behaviour change with their clients.

  8. Use of a Computer-Mediated Delphi Process to Validate a Mass Casualty Conceptual Model

    PubMed Central

    Culley, Joan M.

    2012-01-01

    Since the original work on the Delphi technique, multiple versions have been developed and used in research and industry; however, very little empirical research has been conducted that evaluates the efficacy of using online computer, Internet, and e-mail applications to facilitate a Delphi method that can be used to validate theoretical models. The purpose of this research was to develop computer, Internet, and e-mail applications to facilitate a modified Delphi technique through which experts provide validation for a proposed conceptual model that describes the information needs for a mass-casualty continuum of care. Extant literature and existing theoretical models provided the basis for model development. Two rounds of the Delphi process were needed to satisfy the criteria for consensus and/or stability related to the constructs, relationships, and indicators in the model. The majority of experts rated the online processes favorably (mean of 6.1 on a seven-point scale). Using online Internet and computer applications to facilitate a modified Delphi process offers much promise for future research involving model building or validation. The online Delphi process provided an effective methodology for identifying and describing the complex series of events and contextual factors that influence the way we respond to disasters. PMID:21076283

  9. Use of a computer-mediated Delphi process to validate a mass casualty conceptual model.

    PubMed

    Culley, Joan M

    2011-05-01

    Since the original work on the Delphi technique, multiple versions have been developed and used in research and industry; however, very little empirical research has been conducted that evaluates the efficacy of using online computer, Internet, and e-mail applications to facilitate a Delphi method that can be used to validate theoretical models. The purpose of this research was to develop computer, Internet, and e-mail applications to facilitate a modified Delphi technique through which experts provide validation for a proposed conceptual model that describes the information needs for a mass-casualty continuum of care. Extant literature and existing theoretical models provided the basis for model development. Two rounds of the Delphi process were needed to satisfy the criteria for consensus and/or stability related to the constructs, relationships, and indicators in the model. The majority of experts rated the online processes favorably (mean of 6.1 on a seven-point scale). Using online Internet and computer applications to facilitate a modified Delphi process offers much promise for future research involving model building or validation. The online Delphi process provided an effective methodology for identifying and describing the complex series of events and contextual factors that influence the way we respond to disasters.

  10. Validation of Western North America Models based on finite-frequency and ray theory imaging methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larmat, Carene; Maceira, Monica; Porritt, Robert W.

    2015-02-02

    We validate seismic models developed for western North America, with a focus on the effect of imaging methods on data fit. We use the DNA09 models, for which our collaborators provide models built with both the body-wave finite-frequency (FF) approach and the ray theory (RT) approach, where the data selection, processing, and reference models are the same.

  11. Identifying novel interventional strategies for psychiatric disorders: integrating genomics, 'enviromics' and gene-environment interactions in valid preclinical models.

    PubMed

    McOmish, Caitlin E; Burrows, Emma L; Hannan, Anthony J

    2014-10-01

    Psychiatric disorders affect a substantial proportion of the population worldwide. This high prevalence, combined with the chronicity of the disorders and the major social and economic impacts, creates a significant burden. As a result, an important priority is the development of novel and effective interventional strategies for reducing incidence rates and improving outcomes. This review explores the progress that has been made to date in establishing valid animal models of psychiatric disorders, while beginning to unravel the complex factors that may be contributing to the limitations of current methodological approaches. We propose some approaches for optimizing the validity of animal models and developing effective interventions. We use schizophrenia and autism spectrum disorders as examples of disorders for which development of valid preclinical models, and fully effective therapeutics, have proven particularly challenging. However, the conclusions have relevance to various other psychiatric conditions, including depression, anxiety and bipolar disorders. We address the key aspects of construct, face and predictive validity in animal models, incorporating genetic and environmental factors. Our understanding of psychiatric disorders is accelerating exponentially, revealing extraordinary levels of genetic complexity, heterogeneity and pleiotropy. The environmental factors contributing to individual, and multiple, disorders also exhibit breathtaking complexity, requiring systematic analysis to experimentally explore the environmental mediators and modulators which constitute the 'envirome' of each psychiatric disorder. Ultimately, genetic and environmental factors need to be integrated via animal models incorporating the spatiotemporal complexity of gene-environment interactions and experience-dependent plasticity, thus better recapitulating the dynamic nature of brain development, function and dysfunction. © 2014 The British Pharmacological Society.

  12. Factor Structure and Convergent Validity of the Stress Index for Parents of Adolescents (SIPA) in Adolescents With ADHD.

    PubMed

    Eadeh, Hana-May; Langberg, Joshua M; Molitor, Stephen J; Behrhorst, Katie; Smith, Zoe R; Evans, Steven W

    2018-02-01

    Parenting stress is common in families with an adolescent with attention-deficit/hyperactivity disorder (ADHD). The Stress Index for Parents of Adolescents (SIPA) was developed to assess parenting stress but has not been validated outside of the original development work. This study examined the factor structure and sources of convergent validity of the SIPA in a sample of adolescents diagnosed with ADHD (mean age = 12.3 years, N = 327) and their caregivers. Three first-order models, two bifactor models, and one higher order model were evaluated; none met overall model fit criteria, but the first-order nine-factor model displayed the best fit. Convergent validity was also assessed, and the SIPA adolescent domain was moderately correlated with measures of family impairment and conflict after accounting for ADHD symptom severity. Implications of these findings for use of the SIPA in ADHD samples are discussed along with directions for future research focused on parent stress and ADHD.

  13. CADASTER QSPR Models for Predictions of Melting and Boiling Points of Perfluorinated Chemicals.

    PubMed

    Bhhatarai, Barun; Teetz, Wolfram; Liu, Tao; Öberg, Tomas; Jeliazkova, Nina; Kochev, Nikolay; Pukalov, Ognyan; Tetko, Igor V; Kovarich, Simona; Papa, Ester; Gramatica, Paola

    2011-03-14

    Quantitative structure-property relationship (QSPR) studies of melting point (MP) and boiling point (BP) for per- and polyfluorinated chemicals (PFCs) are presented. The training and prediction chemicals used for developing and validating the models were selected from the Syracuse PhysProp database and the literature. The available experimental data sets were split in two different ways: (a) random selection on response value, and (b) structural similarity verified by self-organizing map (SOM), in order to propose reliable predictive models, developed only on the training sets and externally verified on the prediction sets. Individual models based on linear and non-linear approaches, developed by different CADASTER partners using 0D-2D Dragon descriptors, E-state descriptors, and fragment-based descriptors, as well as a consensus model and its predictions, are presented. In addition, the predictive performance of the developed models was verified on a blind external validation set (EV-set) prepared using the PERFORCE database, comprising 15 MP and 25 BP data points, respectively. This database contains only long-chain perfluoroalkylated chemicals, which are particularly monitored by regulatory agencies such as US-EPA and EU-REACH. QSPR models with internal and external validation on two different external prediction/validation sets, together with an applicability-domain study highlighting the robustness and high accuracy of the models, are discussed. Finally, MPs for an additional 303 PFCs and BPs for 271 PFCs, for which experimental measurements are unknown, were predicted. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
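    As a compact sketch of the train/predict-set workflow described above: fit a regression model on a training split of descriptor data and report external R2 on the held-out prediction set; the random numbers below stand in for Dragon/E-state descriptors and measured melting points.

    ```python
    # Hedged sketch: external validation of a QSPR-style regression model.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import r2_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(8)
    descriptors = rng.normal(size=(120, 6))             # surrogate descriptor matrix
    mp = descriptors @ rng.normal(size=6) + rng.normal(0, 0.5, size=120)

    X_tr, X_ev, y_tr, y_ev = train_test_split(descriptors, mp, test_size=0.25,
                                              random_state=0)
    qspr = LinearRegression().fit(X_tr, y_tr)           # model built on training set only
    print("external R^2:", r2_score(y_ev, qspr.predict(X_ev)))
    ```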

  14. Helicopter simulation validation using flight data

    NASA Technical Reports Server (NTRS)

    Key, D. L.; Hansen, R. S.; Cleveland, W. B.; Abbott, W. Y.

    1982-01-01

A joint NASA/Army effort to perform a systematic ground-based piloted simulation validation assessment is described. The best available mathematical model for the subject helicopter (UH-60A Black Hawk) was programmed for real-time operation. Flight data were obtained to validate the math model and to develop models of the pilot control strategy while performing mission-type tasks. The validated math model is to be combined with motion and visual systems to perform ground-based simulation. Comparisons of the control strategy obtained in flight with that obtained in the simulator are to be used as the basis for assessing the fidelity of the simulator results.

  15. Does my patient have chronic Chagas disease? Development and temporal validation of a diagnostic risk score.

    PubMed

    Brasil, Pedro Emmanuel Alvarenga Americano do; Xavier, Sergio Salles; Holanda, Marcelo Teixeira; Hasslocher-Moreno, Alejandro Marcel; Braga, José Ueleres

    2016-01-01

With the globalization of Chagas disease, inexperienced health care providers may have difficulties in identifying which patients should be examined for this condition. This study aimed to develop and validate a diagnostic clinical prediction model for chronic Chagas disease. This diagnostic cohort study included consecutive volunteers suspected to have chronic Chagas disease. The clinical information was blindly compared to serological test results, and a logistic regression model was fit and validated. The development cohort included 602 patients, and the validation cohort included 138 patients. The Chagas disease prevalence was 19.9%. Sex, age, referral from a blood bank, history of living in a rural area, recognizing the kissing bug, systemic hypertension, number of siblings with Chagas disease, number of relatives with a history of stroke, ECG with low voltage, anterosuperior divisional block, pathologic Q wave, right bundle branch block, and any kind of extrasystole were included in the final model. Calibration and discrimination in the development and validation cohorts (ROC AUC 0.904 and 0.912, respectively) were good. Sensitivity and specificity analyses showed that specificity is at least 95% for predicted risks above 43%, while sensitivity is at least 95% for predicted risks below 7%. Net benefit decision curves favor the model across all thresholds. A nomogram and an online calculator (available at http://shiny.ipec.fiocruz.br:3838/pedrobrasil/chronic_chagas_disease_prediction/) were developed to aid in individual risk estimation.
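
    The nomogram and online calculator are front ends to the fitted logistic model: each predictor contributes to a linear score that the logistic link maps to a probability. A minimal sketch of that mapping follows; the intercept, coefficient values, and predictor subset are hypothetical placeholders, since the abstract reports which variables entered the final model but not their weights.

        import math

        # Hypothetical coefficients for illustration only; the published
        # model's weights are not given in the abstract.
        INTERCEPT = -3.0
        COEFS = {
            "age_years": 0.04,
            "referral_from_blood_bank": 1.2,
            "lived_in_rural_area": 0.9,
            "recognizes_kissing_bug": 0.8,
            "right_bundle_branch_block": 1.5,
        }

        def chagas_risk(patient):
            """Map predictor values to a predicted probability via the logistic link."""
            z = INTERCEPT + sum(COEFS[k] * patient.get(k, 0) for k in COEFS)
            return 1.0 / (1.0 + math.exp(-z))

        # Per the abstract, predicted risks above 0.43 imply >= 95% specificity.
        print(chagas_risk({"age_years": 55, "lived_in_rural_area": 1,
                           "recognizes_kissing_bug": 1}))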

  16. Developing evaluation instrument based on CIPP models on the implementation of portfolio assessment

    NASA Astrophysics Data System (ADS)

    Kurnia, Feni; Rosana, Dadan; Supahar

    2017-08-01

This study aimed to develop an evaluation instrument constructed on the CIPP model for the implementation of portfolio assessment in science learning. The study used the research and development (R&D) method, adapting the 4-D model to the development of a non-test instrument; the evaluation instrument was constructed on the CIPP model, an abbreviation of Context, Input, Process, and Product. The techniques of data collection were interviews, questionnaires, and observations. The data collection instruments were: 1) interview guidelines for the analysis of the problems and the needs, 2) a questionnaire to gauge the level of accomplishment of the portfolio assessment instrument, and 3) observation sheets to capture teacher and student responses to the portfolio assessment instrument. The data obtained were quantitative ratings from several validators: two lecturers as evaluation experts, two practitioners (science teachers), and three colleagues. This paper reports the content validity obtained from the validators and the analysis of the data using Aiken's V formula. The results show that the evaluation instrument based on the CIPP model is appropriate for evaluating the implementation of portfolio assessment instruments. Based on the judgments of the experts, practitioners, and colleagues, the Aiken's V coefficient was between 0.86 and 1.00, which means that the instrument is valid and can be used in the limited trial and the operational field trial.
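
    For reference, Aiken's V for a single item is V = Σ(r_i − l) / (n(c − 1)), where r_i are the judges' ratings, l is the lowest possible rating, n is the number of judges, and c is the number of rating categories. A minimal sketch, assuming a 5-point rating scale (the abstract does not state the scale bounds):

        def aikens_v(ratings, lowest=1, highest=5):
            """Aiken's V content-validity coefficient for one item.

            ratings: iterable of integer ratings, one per judge.
            """
            n = len(ratings)
            c = highest - lowest + 1              # number of rating categories
            s = sum(r - lowest for r in ratings)  # summed distance above the floor
            return s / (n * (c - 1))

        # Seven raters (two experts, two practitioners, three colleagues), as in the study.
        print(aikens_v([4, 5, 5, 4, 5, 5, 4]))  # ~0.89, inside the reported 0.86-1.00 range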

  17. Validation Test Report For The CRWMS Analysis and Logistics Visually Interactive Model Calvin Version 3.0, 10074-Vtr-3.0-00

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    S. Gillespie

    2000-07-27

This report describes the tests performed to validate the CRWMS ''Analysis and Logistics Visually Interactive'' Model (CALVIN) Version 3.0 (V3.0) computer code (STN: 10074-3.0-00). To validate the code, a series of test cases was developed in the CALVIN V3.0 Validation Test Plan (CRWMS M&O 1999a) that exercises the principal calculation models and options of CALVIN V3.0. Twenty-five test cases were developed: 18 logistics test cases and 7 cost test cases. These cases test the features of CALVIN in a sequential manner, so that the validation of each test case is used to demonstrate the accuracy of the input to subsequent calculations. Where necessary, the test cases utilize reduced-size data tables to make the hand calculations used to verify the results more tractable, while still adequately testing the code's capabilities. Acceptance criteria were established for the logistics and cost test cases in the Validation Test Plan (CRWMS M&O 1999a). The logistics test cases were developed to test the following CALVIN calculation models: spent nuclear fuel (SNF) and reactivity calculations; options for altering reactor life; adjustment of commercial SNF (CSNF) acceptance rates for fiscal year calculations and mid-year acceptance start; fuel selection, transportation cask loading, and shipping to the Monitored Geologic Repository (MGR); transportation cask shipping to and storage at an Interim Storage Facility (ISF); reactor pool allocation options; and disposal options at the MGR. Two types of cost test cases were developed: cases to validate the detailed transportation costs, and cases to validate the costs associated with the Civilian Radioactive Waste Management System (CRWMS) Management and Operating Contractor (M&O) and Regional Servicing Contractors (RSCs). For each test case, values calculated using Microsoft Excel 97 worksheets were compared to CALVIN V3.0 scenarios with the same input data and assumptions. All of the test case results agree with the CALVIN V3.0 results within the bounds of the acceptance criteria. Therefore, it is concluded that the CALVIN V3.0 calculation models and options tested in this report are validated.

  18. Development and validation of a mass casualty conceptual model.

    PubMed

    Culley, Joan M; Effken, Judith A

    2010-03-01

    To develop and validate a conceptual model that provides a framework for the development and evaluation of information systems for mass casualty events. The model was designed based on extant literature and existing theoretical models. A purposeful sample of 18 experts validated the model. Open-ended questions, as well as a 7-point Likert scale, were used to measure expert consensus on the importance of each construct and its relationship in the model and the usefulness of the model to future research. Computer-mediated applications were used to facilitate a modified Delphi technique through which a panel of experts provided validation for the conceptual model. Rounds of questions continued until consensus was reached, as measured by an interquartile range (no more than 1 scale point for each item); stability (change in the distribution of responses less than 15% between rounds); and percent agreement (70% or greater) for indicator questions. Two rounds of the Delphi process were needed to satisfy the criteria for consensus or stability related to the constructs, relationships, and indicators in the model. The panel reached consensus or sufficient stability to retain all 10 constructs, 9 relationships, and 39 of 44 indicators. Experts viewed the model as useful (mean of 5.3 on a 7-point scale). Validation of the model provides the first step in understanding the context in which mass casualty events take place and identifying variables that impact outcomes of care. This study provides a foundation for understanding the complexity of mass casualty care, the roles that nurses play in mass casualty events, and factors that must be considered in designing and evaluating information-communication systems to support effective triage under these conditions.
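
    The three stopping criteria quoted above are straightforward to compute for each item between Delphi rounds. A minimal sketch, assuming 7-point Likert ratings and treating "percent agreement" as the share of panellists rating an item at or above an agreement cutoff; the cutoff value itself is an assumption, since the abstract does not define it.

        import statistics
        from collections import Counter

        def distribution_shift(prev, curr):
            """Total change in the response distribution between two rounds (0-1)."""
            pa, pb = Counter(prev), Counter(curr)
            keys = set(pa) | set(pb)
            return 0.5 * sum(abs(pa[k] / len(prev) - pb[k] / len(curr)) for k in keys)

        def consensus_reached(prev, curr, agree_cutoff=6):
            """Apply the three criteria from the study to one item's ratings."""
            q1, _, q3 = statistics.quantiles(curr, n=4)
            iqr_ok = (q3 - q1) <= 1.0                       # IQR of at most 1 scale point
            stable = distribution_shift(prev, curr) < 0.15  # <15% change between rounds
            agreement = sum(r >= agree_cutoff for r in curr) / len(curr) >= 0.70
            return iqr_ok and stable and agreement

        print(consensus_reached([6, 6, 6, 7, 6, 6, 5, 7], [6, 6, 6, 7, 6, 6, 6, 7]))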

  19. The OncoSim model: development and use for better decision-making in Canadian cancer control.

    PubMed

    Gauvreau, C L; Fitzgerald, N R; Memon, S; Flanagan, W M; Nadeau, C; Asakawa, K; Garner, R; Miller, A B; Evans, W K; Popadiuk, C M; Wolfson, M; Coldman, A J

    2017-12-01

    The Canadian Partnership Against Cancer was created in 2007 by the federal government to accelerate cancer control across Canada. Its OncoSim microsimulation model platform, which consists of a suite of specific cancer models, was conceived as a tool to augment conventional resources for population-level policy- and decision-making. The Canadian Partnership Against Cancer manages the OncoSim program, with funding from Health Canada and model development by Statistics Canada. Microsimulation modelling allows for the detailed capture of population heterogeneity and health and demographic history over time. Extensive data from multiple Canadian sources were used as inputs or to validate the model. OncoSim has been validated through expert consultation; assessments of face validity, internal validity, and external validity; and model fit against observed data. The platform comprises three in-depth cancer models (lung, colorectal, cervical), with another in-depth model (breast) and a generalized model (25 cancers) being in development. Unique among models of its class, OncoSim is available online for public sector use free of charge. Users can customize input values and output display, and extensive user support is provided. OncoSim has been used to support decision-making at the national and jurisdictional levels. Although simulation studies are generally not included in hierarchies of evidence, they are integral to informing cancer control policy when clinical studies are not feasible. OncoSim can evaluate complex intervention scenarios for multiple cancers. Canadian decision-makers thus have a powerful tool to assess the costs, benefits, cost-effectiveness, and budgetary effects of cancer control interventions when faced with difficult choices for improvements in population health and resource allocation.

  20. Developing a Psychometric Instrument to Measure Physical Education Teachers' Job Demands and Resources

    ERIC Educational Resources Information Center

    Zhang, Tan; Chen, Ang

    2017-01-01

    Based on the job demands-resources model, the study developed and validated an instrument that measures physical education teachers' job demands-resources perception. Expert review established content validity with the average item rating of 3.6/5.0. Construct validity and reliability were determined with a teacher sample (n = 397). Exploratory…

  1. Developing workshop module of realistic mathematics education: Follow-up workshop

    NASA Astrophysics Data System (ADS)

    Palupi, E. L. W.; Khabibah, S.

    2018-01-01

Realistic Mathematics Education (RME) is a learning approach that fits the aim of the curriculum. The success of RME in teaching mathematics concepts, triggering students' interest in mathematics, and teaching high-order thinking skills leads teachers to start learning RME. Hence, RME workshops are often offered and conducted. This study applied the development model proposed by Plomp. Based on the study by the RME team, there are three kinds of RME workshop: the start-up workshop, the follow-up workshop, and the quality boost. However, no standardized or validated module is used in those workshops. This study aims to develop a module for the RME follow-up workshop that is valid and can be used. Plomp's development model includes materials analysis, design, realization, implementation, and evaluation. Based on the validation, the developed module is valid, and the field test shows that the module can be used effectively.

  2. Modelling dimercaptosuccinic acid (DMSA) plasma kinetics in humans.

    PubMed

    van Eijkeren, Jan C H; Olie, J Daniël N; Bradberry, Sally M; Vale, J Allister; de Vries, Irma; Meulenbelt, Jan; Hunault, Claudine C

    2016-11-01

No kinetic models presently exist which simulate the effect of chelation therapy on blood lead concentrations in lead poisoning. Our aim was to develop a kinetic model describing the kinetics of dimercaptosuccinic acid (DMSA; succimer), a commonly used chelating agent, which could then be used in developing a lead chelation model. This was a kinetic modelling study. We used a two-compartment model, with a non-systemic gastrointestinal compartment (gut lumen) and the whole body as one systemic compartment. The only data available from the literature were used to calibrate the unknown model parameters. The calibrated model was then validated by comparing its predictions with measured data from three different experimental human studies. The model predicted the total DMSA plasma and urine concentrations measured in three healthy volunteers after ingestion of DMSA 10 mg/kg. It was then validated using data from three other published studies; it predicted concentrations within a factor of two, representing inter-human variability. A simple kinetic model simulating the kinetics of DMSA in humans has thus been developed and validated. The value of this model lies in its future potential to predict blood lead concentrations in lead-poisoned patients treated with DMSA.
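
    The structure described, a gut-lumen compartment draining into a single systemic compartment, corresponds to a pair of linear ordinary differential equations. A minimal sketch follows; the absorption and elimination rate constants and the volume of distribution are hypothetical placeholders, not the calibrated values from the study.

        import numpy as np
        from scipy.integrate import solve_ivp

        def dmsa_rhs(t, y, ka, ke):
            """Two-compartment model: gut lumen (non-systemic) and whole body (systemic)."""
            gut, body = y
            return [-ka * gut, ka * gut - ke * body]

        ka, ke, vd = 0.5, 0.2, 15.0   # 1/h, 1/h, litres -- illustrative values only
        dose_mg = 10.0 * 70.0         # 10 mg/kg in a 70 kg subject
        sol = solve_ivp(dmsa_rhs, (0.0, 24.0), [dose_mg, 0.0], args=(ka, ke),
                        t_eval=np.linspace(0.0, 24.0, 25))
        plasma_conc = sol.y[1] / vd   # mg/L in the systemic compartment
        print(plasma_conc.max())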

  3. Development and validation of green eating behaviors, stage of change, decisional balance, and self-efficacy scales in college students.

    PubMed

    Weller, Kathryn E; Greene, Geoffrey W; Redding, Colleen A; Paiva, Andrea L; Lofgren, Ingrid; Nash, Jessica T; Kobayashi, Hisanori

    2014-01-01

To develop and validate an instrument to assess environmentally conscious eating (Green Eating [GE]) behavior (BEH) and GE Transtheoretical Model constructs including Stage of Change (SOC), Decisional Balance (DB), and Self-efficacy (SE). Cross-sectional instrument development survey. Convenience sample (n = 954) of 18- to 24-year-old college students from a northeastern university. The sample was randomly split into two halves, N1 and N2. N1 was used for exploratory factor analyses using principal components analysis; N2 was used for confirmatory analyses (structural modeling) and reliability analyses (coefficient α). The full sample was used for measurement invariance (multi-group confirmatory analyses), convergent validity (BEH), and known-groups validation (DB and SE) by SOC using analysis of variance. Reliable (α > .7), psychometrically sound, and stable measures included 2 correlated 5-item DB subscales (Pros and Cons), 2 correlated SE subscales (school [5 items] and home [3 items]), and a single 6-item BEH scale. Most students (66%) were in the Precontemplation and Contemplation SOC. The BEH, DB, and SE scales differed significantly by SOC (P < .001) with moderate to large effect sizes, as predicted by the Transtheoretical Model, which supports the validity of these measures. The successful development and preliminary validation of this 25-item GE instrument provides a basis for assessment as well as for the development of tailored interventions for college students. Copyright © 2014 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.
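
    The known-groups validation step, scale scores differing across Stages of Change, amounts to a one-way analysis of variance with SOC as the grouping factor. A minimal sketch with invented group scores (the values shown are illustrative, not the study's data):

        from scipy import stats

        # Hypothetical DB Pros scores grouped by Stage of Change -- illustrative only.
        precontemplation = [2.1, 2.4, 2.0, 2.6, 2.3]
        contemplation = [3.0, 3.2, 2.8, 3.4, 3.1]
        action = [3.9, 4.1, 3.8, 4.3, 4.0]

        f_stat, p_value = stats.f_oneway(precontemplation, contemplation, action)
        print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < .001 would mirror the paper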

  4. Mining for Data

    NASA Technical Reports Server (NTRS)

    1998-01-01

AbTech Corporation used an F-18 HARV (High Alpha Research Vehicle) simulation developed by NASA to create an interactive computer-based prototype of the MQ (ModelQuest) SV (System Validator) tool. Dryden Flight Research Center provided support to develop, test, and rapidly reprogram the validation function. AbTech's ModelQuest Enterprise is highly automated and outperforms other modeling techniques in quickly discovering meaningful relationships, patterns, and trends in databases. Its applications serve technical and business professionals in finance, marketing, banking, retail, healthcare, and aerospace.

  5. Risk models for mortality following elective open and endovascular abdominal aortic aneurysm repair: a single institution experience.

    PubMed

    Choke, E; Lee, K; McCarthy, M; Nasim, A; Naylor, A R; Bown, M; Sayers, R

    2012-12-01

To develop and validate an "in-house" risk model for predicting perioperative mortality following elective AAA repair and to compare it with other models. Multivariate logistic regression analysis was used to identify risk factors for perioperative mortality from one tertiary institution's prospectively maintained database. Consecutive elective open (564) and endovascular (589) AAA repairs (2000-2010) were split randomly into development (810) and validation (343) data sets. The resultant model was compared to the Glasgow Aneurysm Score (GAS), the Modified Customised Probability Index (m-CPI), the CPI, the Vascular Governance North West (VGNW) model, and the Medicare model. Variables associated with perioperative mortality included increasing age (P = 0.034), myocardial infarction within the last 10 years (P = 0.0008), raised serum creatinine (P = 0.005), and open surgery (P = 0.0001). The areas under the receiver operating characteristic curve (AUC) for the predicted probability of 30-day mortality in the development and validation data sets were 0.79 and 0.82, respectively. AUCs for the GAS, m-CPI, and CPI were poor (0.63, 0.58, and 0.58, respectively), whilst the VGNW and Medicare models were fair (0.73 and 0.79, respectively). In this study, an "in-house" developed and validated risk model had the most accurate discriminative value in predicting perioperative mortality after elective AAA repair. For purposes of comparative audit with case-mix adjustments, national models such as the VGNW or Medicare models should be used. Copyright © 2012 European Society for Vascular Surgery. Published by Elsevier Ltd. All rights reserved.
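
    Comparing the in-house model against GAS, m-CPI, CPI, VGNW, and Medicare reduces to computing the AUC of each model's predicted risks against observed mortality on the same validation set. A minimal sketch using scikit-learn; the outcome labels and risk arrays are illustrative stand-ins for the institutional data:

        from sklearn.metrics import roc_auc_score

        # 1 = perioperative death, 0 = survived (illustrative labels only)
        died = [0, 0, 1, 0, 1, 0, 0, 1, 0, 0]

        # Predicted risks from two competing models on the same patients
        in_house_risk = [0.02, 0.05, 0.60, 0.10, 0.45, 0.03, 0.08, 0.70, 0.04, 0.06]
        gas_risk = [0.10, 0.20, 0.30, 0.25, 0.15, 0.05, 0.35, 0.40, 0.30, 0.20]

        print("in-house AUC:", roc_auc_score(died, in_house_risk))
        print("GAS AUC:     ", roc_auc_score(died, gas_risk))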

  6. Validation of a Multimarker Model for Assessing Risk of Type 2 Diabetes from a Five-Year Prospective Study of 6784 Danish People (Inter99)

    PubMed Central

    Urdea, Mickey; Kolberg, Janice; Wilber, Judith; Gerwien, Robert; Moler, Edward; Rowe, Michael; Jorgensen, Paul; Hansen, Torben; Pedersen, Oluf; Jørgensen, Torben; Borch-Johnsen, Knut

    2009-01-01

    Background Improved identification of subjects at high risk for development of type 2 diabetes would allow preventive interventions to be targeted toward individuals most likely to benefit. In previous research, predictive biomarkers were identified and used to develop multivariate models to assess an individual's risk of developing diabetes. Here we describe the training and validation of the PreDx™ Diabetes Risk Score (DRS) model in a clinical laboratory setting using baseline serum samples from subjects in the Inter99 cohort, a population-based primary prevention study of cardiovascular disease. Methods Among 6784 subjects free of diabetes at baseline, 215 subjects progressed to diabetes (converters) during five years of follow-up. A nested case-control study was performed using serum samples from 202 converters and 597 randomly selected nonconverters. Samples were randomly assigned to equally sized training and validation sets. Seven biomarkers were measured using assays developed for use in a clinical reference laboratory. Results The PreDx DRS model performed better on the training set (area under the curve [AUC] = 0.837) than fasting plasma glucose alone (AUC = 0.779). When applied to the sequestered validation set, the PreDx DRS showed the same performance (AUC = 0.838), thus validating the model. This model had a better AUC than any other single measure from a fasting sample. Moreover, the model provided further risk stratification among high-risk subpopulations with impaired fasting glucose or metabolic syndrome. Conclusions The PreDx DRS provides the absolute risk of diabetes conversion in five years for subjects identified to be “at risk” using the clinical factors. PMID:20144324

  7. Validation of a multimarker model for assessing risk of type 2 diabetes from a five-year prospective study of 6784 Danish people (Inter99).

    PubMed

    Urdea, Mickey; Kolberg, Janice; Wilber, Judith; Gerwien, Robert; Moler, Edward; Rowe, Michael; Jorgensen, Paul; Hansen, Torben; Pedersen, Oluf; Jørgensen, Torben; Borch-Johnsen, Knut

    2009-07-01

    Improved identification of subjects at high risk for development of type 2 diabetes would allow preventive interventions to be targeted toward individuals most likely to benefit. In previous research, predictive biomarkers were identified and used to develop multivariate models to assess an individual's risk of developing diabetes. Here we describe the training and validation of the PreDx Diabetes Risk Score (DRS) model in a clinical laboratory setting using baseline serum samples from subjects in the Inter99 cohort, a population-based primary prevention study of cardiovascular disease. Among 6784 subjects free of diabetes at baseline, 215 subjects progressed to diabetes (converters) during five years of follow-up. A nested case-control study was performed using serum samples from 202 converters and 597 randomly selected nonconverters. Samples were randomly assigned to equally sized training and validation sets. Seven biomarkers were measured using assays developed for use in a clinical reference laboratory. The PreDx DRS model performed better on the training set (area under the curve [AUC] = 0.837) than fasting plasma glucose alone (AUC = 0.779). When applied to the sequestered validation set, the PreDx DRS showed the same performance (AUC = 0.838), thus validating the model. This model had a better AUC than any other single measure from a fasting sample. Moreover, the model provided further risk stratification among high-risk subpopulations with impaired fasting glucose or metabolic syndrome. The PreDx DRS provides the absolute risk of diabetes conversion in five years for subjects identified to be "at risk" using the clinical factors. Copyright 2009 Diabetes Technology Society.

  8. AdViSHE: A Validation-Assessment Tool of Health-Economic Models for Decision Makers and Model Users.

    PubMed

    Vemer, P; Corro Ramos, I; van Voorn, G A K; Al, M J; Feenstra, T L

    2016-04-01

A trade-off exists between building confidence in health-economic (HE) decision models and the use of scarce resources. We aimed to create a practical tool providing model users with a structured view into the validation status of HE decision models, to address this trade-off. A Delphi panel was organized and concluded with a workshop during an international conference. The proposed tool was constructed iteratively based on comments from, and the discussion amongst, the panellists. During the Delphi process, comments were solicited on the importance and feasibility of possible validation techniques for modellers, their relevance for decision makers, and the overall structure and formulation of the tool. The panel consisted of 47 experts in HE modelling and HE decision making from various professional and international backgrounds. In addition, 50 discussants actively engaged in the discussion at the conference workshop and returned 19 questionnaires with additional comments. The final version consists of 13 items covering all relevant aspects of HE decision models: the conceptual model, the input data, the implemented software program, and the model outcomes. Assessment of the Validation Status of Health-Economic decision models (AdViSHE) is a validation-assessment tool in which model developers report in a systematic way both on the validation efforts performed and on their outcomes. Subsequently, model users can establish whether confidence in the model is justified or whether additional validation efforts should be undertaken. In this way, AdViSHE enhances the transparency of the validation status of HE models and supports efficient model validation.

  9. An integrated assessment instrument: Developing and validating instrument for facilitating critical thinking abilities and science process skills on electrolyte and nonelectrolyte solution matter

    NASA Astrophysics Data System (ADS)

    Astuti, Sri Rejeki Dwi; Suyanta, LFX, Endang Widjajanti; Rohaeti, Eli

    2017-05-01

The demands placed on assessment in the learning process have been affected by policy changes. Nowadays, assessment emphasizes not only knowledge but also skills and attitudes. In practice, however, there are many obstacles to measuring them. This paper describes the development of an integrated assessment instrument and the verification of its validity, including content validity and construct validity. The instrument development followed the test development model of McIntire, and data on the development process were acquired at each development step. The initial product was reviewed by three peer reviewers and six expert judges (two subject matter experts, two evaluation experts, and two chemistry teachers) to establish content validity. The research involved 376 first-grade students of two senior high schools in Bantul Regency to establish construct validity. Content validity was analyzed using Aiken's formula, and construct validity was assessed by exploratory factor analysis using SPSS ver. 16.0. The results show that all constructs in the integrated assessment instrument are valid in terms of both content and construct validity. Therefore, the integrated assessment instrument is suitable for measuring the critical thinking abilities and science process skills of senior high school students on electrolyte solution matter.

  10. Development of the evaluation instrument use CIPP on the implementation of project assessment topic optik

    NASA Astrophysics Data System (ADS)

    Asfaroh, Jati Aurum; Rosana, Dadan; Supahar

    2017-08-01

This research aims to develop a valid and reliable evaluation instrument based on the CIPP model and to determine its feasibility and practicality. The instrument evaluates the implementation of project assessment on the topic of optics, measuring the problem-solving skills of junior high school class VIII students in the Yogyakarta region. This research followed the 4-D development model. The subjects of the product trials were class VIII students at SMP N 1 Galur and SMP N 1 Sleman. Data collection used non-test techniques, including interviews, questionnaires, and observations. Validity was analyzed using Aiken's V, and reliability using the intraclass correlation coefficient (ICC). Seven raters were involved: two expert lecturers (expert judgment), two practitioners (science teachers), and three colleagues. The result of this research is a CIPP-model evaluation instrument for evaluating the implementation of the project assessment instruments. The instrument's Aiken's V values range between 0.86 and 1.00, which means it is valid, and its reliability value of 0.836 falls into the good category, so it is fit for use as an evaluation instrument.

  11. Estimating the urban bias of surface shelter temperatures using upper-air and satellite data. Part 1: Development of models predicting surface shelter temperatures

    NASA Technical Reports Server (NTRS)

    Epperson, David L.; Davis, Jerry M.; Bloomfield, Peter; Karl, Thomas R.; Mcnab, Alan L.; Gallo, Kevin P.

    1995-01-01

Multiple regression techniques were used to predict surface shelter temperatures based on the time period 1986-89 using upper-air data from the European Centre for Medium-Range Weather Forecasts (ECMWF) to represent the background climate and site-specific data to represent the local landscape. Global monthly mean temperature models were developed using data from over 5000 stations available in the Global Historical Climate Network (GHCN). Monthly maximum, mean, and minimum temperature models for the United States were also developed using data from over 1000 stations available in the U.S. Cooperative (COOP) Network, and comparative monthly mean temperature models were developed using over 1150 U.S. stations in the GHCN. Three-, six-, and full-variable models were developed for comparative purposes. Inferences about the variables selected for the various models were easier for the GHCN models, which displayed month-to-month consistency in which variables were selected, than for the COOP models, which were assigned a different list of variables for nearly every month. These and other results suggest that global calibration is preferred because data from the global spectrum of physical processes that control surface temperatures are incorporated in a global model. All of the models developed in this study validated relatively well, especially the global models. Recalibration of the models with validation data resulted in only slightly poorer regression statistics, indicating that the calibration list of variables was valid. Predictions using data from the validation dataset in the calibrated equation were better for the GHCN models, and the globally calibrated GHCN models generally provided better U.S. predictions than the U.S.-calibrated COOP models. Overall, the GHCN and COOP models explained approximately 64%-95% of the total variance of surface shelter temperatures, depending on the month and the number of model variables. In addition, root-mean-square errors (RMSEs) were over 3 °C for GHCN models and over 2 °C for COOP models for winter months, and near 2 °C for GHCN models and near 1.5 °C for COOP models for summer months.

  12. QSAR Modeling of Rat Acute Toxicity by Oral Exposure

    PubMed Central

    Zhu, Hao; Martin, Todd M.; Ye, Lin; Sedykh, Alexander; Young, Douglas M.; Tropsha, Alexander

    2009-01-01

    Few Quantitative Structure-Activity Relationship (QSAR) studies have successfully modeled large, diverse rodent toxicity endpoints. In this study, a comprehensive dataset of 7,385 compounds with their most conservative lethal dose (LD50) values has been compiled. A combinatorial QSAR approach has been employed to develop robust and predictive models of acute toxicity in rats caused by oral exposure to chemicals. To enable fair comparison between the predictive power of models generated in this study versus a commercial toxicity predictor, TOPKAT (Toxicity Prediction by Komputer Assisted Technology), a modeling subset of the entire dataset was selected that included all 3,472 compounds used in the TOPKAT’s training set. The remaining 3,913 compounds, which were not present in the TOPKAT training set, were used as the external validation set. QSAR models of five different types were developed for the modeling set. The prediction accuracy for the external validation set was estimated by determination coefficient R2 of linear regression between actual and predicted LD50 values. The use of the applicability domain threshold implemented in most models generally improved the external prediction accuracy but expectedly led to the decrease in chemical space coverage; depending on the applicability domain threshold, R2 ranged from 0.24 to 0.70. Ultimately, several consensus models were developed by averaging the predicted LD50 for every compound using all 5 models. The consensus models afforded higher prediction accuracy for the external validation dataset with the higher coverage as compared to individual constituent models. The validated consensus LD50 models developed in this study can be used as reliable computational predictors of in vivo acute toxicity. PMID:19845371

  13. Quantitative structure-activity relationship modeling of rat acute toxicity by oral exposure.

    PubMed

    Zhu, Hao; Martin, Todd M; Ye, Lin; Sedykh, Alexander; Young, Douglas M; Tropsha, Alexander

    2009-12-01

    Few quantitative structure-activity relationship (QSAR) studies have successfully modeled large, diverse rodent toxicity end points. In this study, a comprehensive data set of 7385 compounds with their most conservative lethal dose (LD(50)) values has been compiled. A combinatorial QSAR approach has been employed to develop robust and predictive models of acute toxicity in rats caused by oral exposure to chemicals. To enable fair comparison between the predictive power of models generated in this study versus a commercial toxicity predictor, TOPKAT (Toxicity Prediction by Komputer Assisted Technology), a modeling subset of the entire data set was selected that included all 3472 compounds used in TOPKAT's training set. The remaining 3913 compounds, which were not present in the TOPKAT training set, were used as the external validation set. QSAR models of five different types were developed for the modeling set. The prediction accuracy for the external validation set was estimated by determination coefficient R(2) of linear regression between actual and predicted LD(50) values. The use of the applicability domain threshold implemented in most models generally improved the external prediction accuracy but expectedly led to the decrease in chemical space coverage; depending on the applicability domain threshold, R(2) ranged from 0.24 to 0.70. Ultimately, several consensus models were developed by averaging the predicted LD(50) for every compound using all five models. The consensus models afforded higher prediction accuracy for the external validation data set with the higher coverage as compared to individual constituent models. The validated consensus LD(50) models developed in this study can be used as reliable computational predictors of in vivo acute toxicity.
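
    The consensus step, averaging the predicted LD(50) over the five individual models while respecting each model's applicability domain, can be written compactly. A minimal sketch; the prediction matrix and the domain mask (NaN entries) are invented placeholders:

        import numpy as np

        # Rows: 5 individual QSAR models; columns: compounds. NaN marks a compound
        # outside a model's applicability domain (all values are illustrative).
        log_ld50_preds = np.array([
            [2.1, 3.0, np.nan, 1.8],
            [2.3, 2.8, 2.5, np.nan],
            [2.0, 3.1, 2.7, 1.9],
            [2.2, np.nan, 2.6, 2.0],
            [2.4, 2.9, 2.8, 1.7],
        ])

        # Consensus prediction: mean over the models that cover each compound,
        # so coverage stays high even when single models abstain.
        consensus = np.nanmean(log_ld50_preds, axis=0)
        coverage = np.sum(~np.isnan(log_ld50_preds), axis=0)  # models per compound
        print(consensus, coverage)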

  14. Validation of the Continuum of Care Conceptual Model for Athletic Therapy

    PubMed Central

    Lafave, Mark R.; Butterwick, Dale; Eubank, Breda

    2015-01-01

Utilization of conceptual models in field-based emergency care currently borrows from existing standards of the medical and paramedical professions. The purpose of this study was to develop and validate a comprehensive conceptual model that could account for injuries ranging from nonurgent to catastrophic events, including events that do not follow traditional medical or prehospital care protocols. The conceptual model should represent the continuum of care from the time of initial injury through to an athlete's return to participation in their sport. Finally, the conceptual model should accommodate both novices and experts in the AT profession. This paper chronicles the content validation steps of the Continuum of Care Conceptual Model for Athletic Therapy (CCCM-AT). The stages of model development were domain and item generation, content expert validation using a three-stage modified Ebel procedure, and pilot testing. Only the final stage of the modified Ebel procedure reached the a priori 80% consensus threshold on the three domains of interest: (1) heading descriptors; (2) the order of the model; (3) the conceptual model as a whole. Future research is required to test the use of the CCCM-AT in order to understand its efficacy in teaching and practice within the AT discipline.

  15. A multi-site cognitive task analysis for biomedical query mediation.

    PubMed

    Hruby, Gregory W; Rasmussen, Luke V; Hanauer, David; Patel, Vimla L; Cimino, James J; Weng, Chunhua

    2016-09-01

    To apply cognitive task analyses of the Biomedical query mediation (BQM) processes for EHR data retrieval at multiple sites towards the development of a generic BQM process model. We conducted semi-structured interviews with eleven data analysts from five academic institutions and one government agency, and performed cognitive task analyses on their BQM processes. A coding schema was developed through iterative refinement and used to annotate the interview transcripts. The annotated dataset was used to reconstruct and verify each BQM process and to develop a harmonized BQM process model. A survey was conducted to evaluate the face and content validity of this harmonized model. The harmonized process model is hierarchical, encompassing tasks, activities, and steps. The face validity evaluation concluded the model to be representative of the BQM process. In the content validity evaluation, out of the 27 tasks for BQM, 19 meet the threshold for semi-valid, including 3 fully valid: "Identify potential index phenotype," "If needed, request EHR database access rights," and "Perform query and present output to medical researcher", and 8 are invalid. We aligned the goals of the tasks within the BQM model with the five components of the reference interview. The similarity between the process of BQM and the reference interview is promising and suggests the BQM tasks are powerful for eliciting implicit information needs. We contribute a BQM process model based on a multi-site study. This model promises to inform the standardization of the BQM process towards improved communication efficiency and accuracy. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  16. A Multi-Site Cognitive Task Analysis for Biomedical Query Mediation

    PubMed Central

    Hruby, Gregory W.; Rasmussen, Luke V.; Hanauer, David; Patel, Vimla; Cimino, James J.; Weng, Chunhua

    2016-01-01

    Objective To apply cognitive task analyses of the Biomedical query mediation (BQM) processes for EHR data retrieval at multiple sites towards the development of a generic BQM process model. Materials and Methods We conducted semi-structured interviews with eleven data analysts from five academic institutions and one government agency, and performed cognitive task analyses on their BQM processes. A coding schema was developed through iterative refinement and used to annotate the interview transcripts. The annotated dataset was used to reconstruct and verify each BQM process and to develop a harmonized BQM process model. A survey was conducted to evaluate the face and content validity of this harmonized model. Results The harmonized process model is hierarchical, encompassing tasks, activities, and steps. The face validity evaluation concluded the model to be representative of the BQM process. In the content validity evaluation, out of the 27 tasks for BQM, 19 meet the threshold for semi-valid, including 3 fully valid: “Identify potential index phenotype,” “If needed, request EHR database access rights,” and “Perform query and present output to medical researcher”, and 8 are invalid. Discussion We aligned the goals of the tasks within the BQM model with the five components of the reference interview. The similarity between the process of BQM and the reference interview is promising and suggests the BQM tasks are powerful for eliciting implicit information needs. Conclusions We contribute a BQM process model based on a multi-site study. This model promises to inform the standardization of the BQM process towards improved communication efficiency and accuracy. PMID:27435950

  17. Lessons Learned from a Cross-Model Validation between a Discrete Event Simulation Model and a Cohort State-Transition Model for Personalized Breast Cancer Treatment.

    PubMed

    Jahn, Beate; Rochau, Ursula; Kurzthaler, Christina; Paulden, Mike; Kluibenschädl, Martina; Arvandi, Marjan; Kühne, Felicitas; Goehler, Alexander; Krahn, Murray D; Siebert, Uwe

    2016-04-01

    Breast cancer is the most common malignancy among women in developed countries. We developed a model (the Oncotyrol breast cancer outcomes model) to evaluate the cost-effectiveness of a 21-gene assay when used in combination with Adjuvant! Online to support personalized decisions about the use of adjuvant chemotherapy. The goal of this study was to perform a cross-model validation. The Oncotyrol model evaluates the 21-gene assay by simulating a hypothetical cohort of 50-year-old women over a lifetime horizon using discrete event simulation. Primary model outcomes were life-years, quality-adjusted life-years (QALYs), costs, and incremental cost-effectiveness ratios (ICERs). We followed the International Society for Pharmacoeconomics and Outcomes Research-Society for Medical Decision Making (ISPOR-SMDM) best practice recommendations for validation and compared modeling results of the Oncotyrol model with the state-transition model developed by the Toronto Health Economics and Technology Assessment (THETA) Collaborative. Both models were populated with Canadian THETA model parameters, and outputs were compared. The differences between the models varied among the different validation end points. The smallest relative differences were in costs, and the greatest were in QALYs. All relative differences were less than 1.2%. The cost-effectiveness plane showed that small differences in the model structure can lead to different sets of nondominated test-treatment strategies with different efficiency frontiers. We faced several challenges: distinguishing between differences in outcomes due to different modeling techniques and initial coding errors, defining meaningful differences, and selecting measures and statistics for comparison (means, distributions, multivariate outcomes). Cross-model validation was crucial to identify and correct coding errors and to explain differences in model outcomes. In our comparison, small differences in either QALYs or costs led to changes in ICERs because of changes in the set of dominated and nondominated strategies. © The Author(s) 2015.
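
    The sensitivity of ICERs to small QALY or cost differences, noted above, follows from how the efficiency frontier is constructed: strategies are ordered by cost, dominated options are removed, and ICERs are taken between adjacent survivors, so a tiny shift in either outcome can change which strategies survive. A minimal sketch of that standard logic (strategy names and values are invented, not taken from either model):

        def efficiency_frontier(strategies):
            """strategies: list of (name, cost, qalys) tuples.
            Returns nondominated strategies with their ICERs versus the
            previous strategy on the frontier (None for the cheapest)."""
            ordered = sorted(strategies, key=lambda s: (s[1], -s[2]))
            frontier = []
            for name, cost, qalys in ordered:
                if frontier and qalys <= frontier[-1][2]:
                    continue  # strongly dominated: costlier, no more effective
                frontier.append((name, cost, qalys))
            changed = True
            while changed:  # remove extended dominance: ICERs must increase
                changed = False
                for i in range(1, len(frontier) - 1):
                    lo, mid, hi = frontier[i - 1], frontier[i], frontier[i + 1]
                    if ((mid[1] - lo[1]) / (mid[2] - lo[2])
                            > (hi[1] - mid[1]) / (hi[2] - mid[2])):
                        del frontier[i]
                        changed = True
                        break
            icers = [None] + [(b[1] - a[1]) / (b[2] - a[2])
                              for a, b in zip(frontier, frontier[1:])]
            return list(zip(frontier, icers))

        print(efficiency_frontier([("no test", 10000, 10.0),
                                   ("assay A", 15000, 10.4),
                                   ("assay B", 30000, 10.5)]))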

  18. Development and validation of risk models and molecular diagnostics to permit personalized management of cancer.

    PubMed

    Pu, Xia; Ye, Yuanqing; Wu, Xifeng

    2014-01-01

Despite the advances made in cancer management over the past few decades, cancer diagnosis and prognosis remain poor, highlighting the need for individualized strategies. Toward this goal, risk prediction models and molecular diagnostic tools have been developed, tailoring each step of risk assessment from diagnosis to treatment and clinical outcomes based on the individual's clinical, epidemiological, and molecular profiles. These approaches hold increasing promise for delivering a new paradigm to maximize the efficiency of cancer surveillance and the efficacy of treatment. However, they require stringent study design, methodology development, comprehensive assessment of biomarkers and risk factors, and extensive validation to ensure their overall usefulness for clinical translation. In the current study, the authors conducted a systematic review using breast cancer as an example and provide general guidelines for risk prediction models and molecular diagnostic tools, including their development, assessment, and validation. © 2013 American Cancer Society.

  19. Using Modeling and Simulation to Predict Operator Performance and Automation-Induced Complacency With Robotic Automation: A Case Study and Empirical Validation.

    PubMed

    Wickens, Christopher D; Sebok, Angelia; Li, Huiyang; Sarter, Nadine; Gacy, Andrew M

    2015-09-01

    The aim of this study was to develop and validate a computational model of the automation complacency effect, as operators work on a robotic arm task, supported by three different degrees of automation. Some computational models of complacency in human-automation interaction exist, but those are formed and validated within the context of fairly simplified monitoring failures. This research extends model validation to a much more complex task, so that system designers can establish, without need for human-in-the-loop (HITL) experimentation, merits and shortcomings of different automation degrees. We developed a realistic simulation of a space-based robotic arm task that could be carried out with three different levels of trajectory visualization and execution automation support. Using this simulation, we performed HITL testing. Complacency was induced via several trials of correctly performing automation and then was assessed on trials when automation failed. Following a cognitive task analysis of the robotic arm operation, we developed a multicomponent model of the robotic operator and his or her reliance on automation, based in part on visual scanning. The comparison of model predictions with empirical results revealed that the model accurately predicted routine performance and predicted the responses to these failures after complacency developed. However, the scanning models do not account for the entire attention allocation effects of complacency. Complacency modeling can provide a useful tool for predicting the effects of different types of imperfect automation. The results from this research suggest that focus should be given to supporting situation awareness in automation development. © 2015, Human Factors and Ergonomics Society.

  20. Perception of competence in middle school physical education: instrument development and validation.

    PubMed

    Scrabis-Fletcher, Kristin; Silverman, Stephen

    2010-03-01

Perception of Competence (POC) has been studied extensively in physical activity (PA) research, with similar instruments adapted for physical education (PE) research. Such instruments do not account for the unique PE learning environment. Therefore, an instrument was developed and the scores validated to measure POC in middle school PE. A multiphase design was used consisting of an intensive theoretical review, elicitation study, prepilot study, pilot study, content validation study, and final validation study (N = 1281). Data analysis included a multistep iterative process to identify the best model fit. A three-factor model for POC was tested and resulted in root mean square error of approximation = .09, root mean square residual = .07, goodness-of-fit index = .90, and adjusted goodness-of-fit index = .86, values in the acceptable range (Hu & Bentler, 1999). A two-factor model was also tested and resulted in a good fit (two-factor fit index values = .05, .03, .98, .97, respectively). The results of this study suggest that an instrument using a three- or two-factor model provides reliable and valid scores of POC measurement in middle school PE.

  1. Development and external validation of a prediction rule for an unfavorable course of late-life depression: A multicenter cohort study.

    PubMed

    Maarsingh, O R; Heymans, M W; Verhaak, P F; Penninx, B W J H; Comijs, H C

    2018-08-01

    Given the poor prognosis of late-life depression, it is crucial to identify those at risk. Our objective was to construct and validate a prediction rule for an unfavourable course of late-life depression. For development and internal validation of the model, we used The Netherlands Study of Depression in Older Persons (NESDO) data. We included participants with a major depressive disorder (MDD) at baseline (n = 270; 60-90 years), assessed with the Composite International Diagnostic Interview (CIDI). For external validation of the model, we used The Netherlands Study of Depression and Anxiety (NESDA) data (n = 197; 50-66 years). The outcome was MDD after 2 years of follow-up, assessed with the CIDI. Candidate predictors concerned sociodemographics, psychopathology, physical symptoms, medication, psychological determinants, and healthcare setting. Model performance was assessed by calculating calibration and discrimination. 111 subjects (41.1%) had MDD after 2 years of follow-up. Independent predictors of MDD after 2 years were (older) age, (early) onset of depression, severity of depression, anxiety symptoms, comorbid anxiety disorder, fatigue, and loneliness. The final model showed good calibration and reasonable discrimination (AUC of 0.75; 0.70 after external validation). The strongest individual predictor was severity of depression (AUC of 0.69; 0.68 after external validation). The model was developed and validated in The Netherlands, which could affect the cross-country generalizability. Based on rather simple clinical indicators, it is possible to predict the 2-year course of MDD. The prediction rule can be used for monitoring MDD patients and identifying those at risk of an unfavourable outcome. Copyright © 2018 Elsevier B.V. All rights reserved.

  2. Quantitative validation of carbon-fiber laminate low velocity impact simulations

    DOE PAGES

    English, Shawn A.; Briggs, Timothy M.; Nelson, Stacy M.

    2015-09-26

Simulations of low velocity impact with a flat cylindrical indenter upon a carbon fiber fabric reinforced polymer laminate are rigorously validated. Comparison of the impact energy absorption between the model and experiment is used as the validation metric. Additionally, non-destructive evaluation, including ultrasonic scans and three-dimensional computed tomography, provides qualitative validation of the models. The simulations include delamination, matrix cracks, and fiber breaks. An orthotropic damage and failure constitutive model, capable of predicting progressive damage and failure, is developed in conjunction with the simulations and described. An ensemble of simulations incorporating model parameter uncertainties is used to predict a response distribution, which is then compared to experimental output using appropriate statistical methods. Lastly, the model form errors are exposed and corrected for use in an additional blind validation analysis. The result is a quantifiable confidence in material characterization and model physics when simulating low velocity impact in structures of interest.
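
    The ensemble step, propagating model parameter uncertainties to a distribution of predicted responses and locating the experiment within it, can be sketched with plain Monte Carlo sampling. The response function and parameter distributions below are hypothetical stand-ins for the finite-element impact simulations:

        import numpy as np

        rng = np.random.default_rng(0)

        def absorbed_energy(stiffness, strength):
            """Hypothetical surrogate for one simulation's impact energy absorption (J)."""
            return 0.8 * strength / stiffness + 0.05 * strength

        # Sample material parameters from their (assumed) uncertainty distributions.
        stiffness = rng.normal(60.0, 5.0, size=2000)   # GPa
        strength = rng.normal(600.0, 40.0, size=2000)  # MPa

        ensemble = absorbed_energy(stiffness, strength)
        measured = 38.0                                # J, illustrative test value
        percentile = (ensemble < measured).mean() * 100.0
        print(f"measured value sits at the {percentile:.1f}th percentile of the ensemble")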

  3. Development and validation of a blade-element mathematical model for the AH-64A Apache helicopter

    NASA Technical Reports Server (NTRS)

    Mansur, M. Hossein

    1995-01-01

    A high-fidelity blade-element mathematical model for the AH-64A Apache Advanced Attack Helicopter has been developed by the Aeroflightdynamics Directorate of the U.S. Army's Aviation and Troop Command (ATCOM) at Ames Research Center. The model is based on the McDonnell Douglas Helicopter Systems' (MDHS) Fly Real Time (FLYRT) model of the AH-64A (acquired under contract) which was modified in-house and augmented with a blade-element-type main-rotor module. This report describes, in detail, the development of the rotor module, and presents some results of an extensive validation effort.

  4. A Clinical Tool for the Prediction of Venous Thromboembolism in Pediatric Trauma Patients.

    PubMed

    Connelly, Christopher R; Laird, Amy; Barton, Jeffrey S; Fischer, Peter E; Krishnaswami, Sanjay; Schreiber, Martin A; Zonies, David H; Watters, Jennifer M

    2016-01-01

    Although rare, the incidence of venous thromboembolism (VTE) in pediatric trauma patients is increasing, and the consequences of VTE in children are significant. Studies have demonstrated increasing VTE risk in older pediatric trauma patients and improved VTE rates with institutional interventions. While national evidence-based guidelines for VTE screening and prevention are in place for adults, none exist for pediatric patients, to our knowledge. To develop a risk prediction calculator for VTE in children admitted to the hospital after traumatic injury to assist efforts in developing screening and prophylaxis guidelines for this population. Retrospective review of 536,423 pediatric patients 0 to 17 years old using the National Trauma Data Bank from January 1, 2007, to December 31, 2012. Five mixed-effects logistic regression models of varying complexity were fit on a training data set. Model validity was determined by comparison of the area under the receiver operating characteristic curve (AUROC) for the training and validation data sets from the original model fit. A clinical tool to predict the risk of VTE based on individual patient clinical characteristics was developed from the optimal model. Diagnosis of VTE during hospital admission. Venous thromboembolism was diagnosed in 1141 of 536,423 children (overall rate, 0.2%). The AUROCs in the training data set were high (range, 0.873-0.946) for each model, with minimal AUROC attenuation in the validation data set. A prediction tool was developed from a model that achieved a balance of high performance (AUROCs, 0.945 and 0.932 in the training and validation data sets, respectively; P = .048) and parsimony. Points are assigned to each variable considered (Glasgow Coma Scale score, age, sex, intensive care unit admission, intubation, transfusion of blood products, central venous catheter placement, presence of pelvic or lower extremity fractures, and major surgery), and the points total is converted to a VTE risk score. The predicted risk of VTE ranged from 0.0% to 14.4%. We developed a simple clinical tool to predict the risk of developing VTE in pediatric trauma patients. It is based on a model created using a large national database and was internally validated. The clinical tool requires external validation but provides an initial step toward the development of the specific VTE protocols for pediatric trauma patients.
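
    The clinical tool described, points assigned per risk factor, summed, and converted to a predicted risk, can be expressed as a simple lookup. The point weights and the interior of the risk mapping below are invented for illustration (only the 0.0%-14.4% range of predicted risk comes from the abstract); the published calculator's actual values are not given here.

        # Hypothetical point assignments -- the real tool's weights are not in the abstract.
        POINTS = {
            "gcs_le_8": 3,
            "age_ge_13": 2,
            "icu_admission": 2,
            "intubation": 2,
            "blood_transfusion": 1,
            "central_venous_catheter": 2,
            "pelvic_or_lower_extremity_fracture": 1,
            "major_surgery": 1,
        }

        # Hypothetical mapping from total points to predicted VTE risk (%),
        # spanning the 0.0-14.4% range reported in the abstract.
        RISK_BY_SCORE = {0: 0.0, 2: 0.1, 4: 0.5, 6: 1.5, 8: 4.0, 10: 9.0, 12: 14.4}

        def vte_risk(factors):
            total = sum(POINTS[f] for f in factors)
            # Use the closest tabulated score at or below the total.
            key = max(k for k in RISK_BY_SCORE if k <= total)
            return total, RISK_BY_SCORE[key]

        print(vte_risk(["icu_admission", "intubation", "central_venous_catheter"]))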

  5. The Development and Validation of an End-User Satisfaction Measure in a Student Laptop Environment

    ERIC Educational Resources Information Center

    Kim, Sung; Meng, Juan; Kalinowski, Jon; Shin, Dooyoung

    2014-01-01

    The purpose of this paper is to present the development and validation of a measurement model for student user satisfaction in a laptop environment. Using a "quasi Delphi" method in addition to contributions from prior research we used EFA and CFA (LISREL) to identify a five factor (14 item) measurement model that best fit the data. The…

  6. Software risk management through independent verification and validation

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Zhou, Tong C.; Wood, Ralph

    1995-01-01

    Software project managers need tools to estimate and track project goals in a continuous fashion before, during, and after development of a system. In addition, they need an ability to compare the current project status with past project profiles to validate management intuition, identify problems, and then direct appropriate resources to the sources of problems. This paper describes a measurement-based approach to calculating the risk inherent in meeting project goals that leverages past project metrics and existing estimation and tracking models. We introduce the IV&V Goal/Questions/Metrics model, explain its use in the software development life cycle, and describe our attempts to validate the model through the reverse engineering of existing projects.

  7. Differential Validation of a Path Analytic Model of University Dropout.

    ERIC Educational Resources Information Center

    Winteler, Adolf

    Tinto's conceptual schema of college dropout forms the theoretical framework for the development of a model of university student dropout intention. This study validated Tinto's model in two different departments within a single university. Analyses were conducted on a sample of 684 college freshmen in the Education and Economics Department. A…

  8. Novel prediction model of renal function after nephrectomy from automated renal volumetry with preoperative multidetector computed tomography (MDCT).

    PubMed

    Isotani, Shuji; Shimoyama, Hirofumi; Yokota, Isao; Noma, Yasuhiro; Kitamura, Kousuke; China, Toshiyuki; Saito, Keisuke; Hisasue, Shin-ichi; Ide, Hisamitsu; Muto, Satoru; Yamaguchi, Raizo; Ukimura, Osamu; Gill, Inderbir S; Horie, Shigeo

    2015-10-01

A predictive model of postoperative renal function may affect the planning of nephrectomy. The aims were to develop a novel predictive model combining clinical indices with computer volumetry to measure the preserved renal cortex volume (RCV) using multidetector computed tomography (MDCT), and to prospectively validate the model's performance. A total of 60 patients undergoing radical nephrectomy from 2011 to 2013 participated, including a development cohort of 39 patients and an external validation cohort of 21 patients. RCV was calculated by voxel count using software (Vincent, FUJIFILM). Renal function before and after radical nephrectomy was assessed via the estimated glomerular filtration rate (eGFR). Factors affecting postoperative eGFR were examined by regression analysis to develop a novel model for predicting postoperative eGFR, using a backward elimination method. The predictive model was externally validated, and its performance was compared with that of previously reported models. The postoperative eGFR value was associated with age, preoperative eGFR, preserved renal parenchymal volume (RPV), preserved RCV, % of RPV alteration, and % of RCV alteration (p < 0.01). The variables significantly correlated with %eGFR alteration were %RCV preservation (r = 0.58, p < 0.01) and %RPV preservation (r = 0.54, p < 0.01). We developed our regression model as follows: postoperative eGFR = 57.87 - 0.55(age) - 15.01(body surface area) + 0.30(preoperative eGFR) + 52.92(%RCV preservation). A strong correlation was seen between postoperative eGFR and the estimates from the model (r = 0.83; p < 0.001). In the external validation cohort (n = 21), our model outperformed previously reported models. Combining MDCT renal volumetry and clinical indices might yield an important tool for predicting postoperative renal function.
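
    The reported regression equation can be applied directly. A minimal sketch; note the assumption that %RCV preservation enters as a fraction between 0 and 1 (with the 52.92 coefficient, a 0-100 percentage would dominate the prediction):

        def predicted_postop_egfr(age_years, bsa_m2, preop_egfr, rcv_preserved_fraction):
            """Regression model as reported in the abstract (Isotani et al.)."""
            return (57.87
                    - 0.55 * age_years
                    - 15.01 * bsa_m2
                    + 0.30 * preop_egfr
                    + 52.92 * rcv_preserved_fraction)

        # Example: 65-year-old, BSA 1.7 m2, preoperative eGFR 75, 60% of cortex preserved.
        print(predicted_postop_egfr(65, 1.7, 75, 0.60))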

  9. Validation Testing of a Peridynamic Impact Damage Model Using NASA's Micro-Particle Gun

    NASA Technical Reports Server (NTRS)

    Baber, Forrest E.; Zelinski, Brian J.; Guven, Ibrahim; Gray, Perry

    2017-01-01

    Through a collaborative effort between the Virginia Commonwealth University and Raytheon, a peridynamic model for sand impact damage has been developed [1-3]. Model development has focused on simulating impacts of sand particles on ZnS traveling at velocities consistent with aircraft take-off and landing speeds. The model reproduces common features of impact damage including pit and radial cracks, and, under some conditions, lateral cracks. This study focuses on a preliminary validation exercise in which simulation results from the peridynamic model are compared to a limited experimental data set generated by NASA's recently developed micro-particle gun (MPG). The MPG facility measures the dimensions and incoming and rebound velocities of the impact particles. It also links each particle to a specific impact site and its associated damage. In this validation exercise parameters of the peridynamic model are adjusted to fit the experimentally observed pit diameter, average length of radial cracks and rebound velocities for 4 impacts of 300 µm glass beads on ZnS. Results indicate that a reasonable fit of these impact characteristics can be obtained by suitable adjustment of the peridynamic input parameters, demonstrating that the MPG can be used effectively as a validation tool for impact modeling and that the peridynamic sand impact model described herein possesses not only a qualitative but also a quantitative ability to simulate sand impact events.

  10. Quasiglobal reaction model for ethylene combustion

    NASA Technical Reports Server (NTRS)

    Singh, D. J.; Jachimowski, Casimir J.

    1994-01-01

    The objective of this study is to develop a reduced mechanism for ethylene oxidation. The authors are interested in a model with a minimum number of species and reactions that still models the chemistry with reasonable accuracy for the expected combustor conditions. The model will be validated by comparing the results to those calculated with a detailed kinetic model that has been validated against the experimental data.

  11. Validation metrics for turbulent plasma transport

    DOE PAGES

    Holland, C.

    2016-06-22

    Developing accurate models of plasma dynamics is essential for confident predictive modeling of current and future fusion devices. In modern computer science and engineering, formal verification and validation processes are used to assess model accuracy and establish confidence in the predictive capabilities of a given model. This paper provides an overview of the key guiding principles and best practices for the development of validation metrics, illustrated using examples from investigations of turbulent transport in magnetically confined plasmas. Particular emphasis is given to the importance of uncertainty quantification and its inclusion within the metrics, and the need for utilizing synthetic diagnostics to enable quantitatively meaningful comparisons between simulation and experiment. As a starting point, the structure of commonly used global transport model metrics and their limitations is reviewed. An alternate approach is then presented, which focuses upon comparisons of predicted local fluxes, fluctuations, and equilibrium gradients against observation. Furthermore, the utility of metrics based upon these comparisons is demonstrated by applying them to gyrokinetic predictions of turbulent transport in a variety of discharges performed on the DIII-D tokamak, as part of a multi-year transport model validation activity.
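
    The flux-comparison metrics discussed above can be made concrete with a simple uncertainty-normalized discrepancy. The sketch below is illustrative only and is not Holland's exact formulation; it normalizes the simulation-observation mismatch by the combined uncertainties, so values well above one flag meaningful disagreement.

    ```python
    import numpy as np

    # Illustrative uncertainty-aware validation metric: the discrepancy between
    # a predicted and measured quantity, normalized by the combined simulation
    # and experimental uncertainties.
    def normalized_discrepancy(sim, sim_err, obs, obs_err):
        sim, sim_err = np.asarray(sim), np.asarray(sim_err)
        obs, obs_err = np.asarray(obs), np.asarray(obs_err)
        return np.abs(sim - obs) / np.sqrt(sim_err**2 + obs_err**2)

    # Example: predicted vs. measured heat fluxes (arbitrary units) at three radii.
    d = normalized_discrepancy([1.0, 2.1, 3.4], [0.2, 0.3, 0.4],
                               [1.2, 2.0, 4.5], [0.3, 0.2, 0.5])
    print(d)  # values >> 1 flag channels where model and data disagree
    ```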

  12. Validation metrics for turbulent plasma transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holland, C.

    Developing accurate models of plasma dynamics is essential for confident predictive modeling of current and future fusion devices. In modern computer science and engineering, formal verification and validation processes are used to assess model accuracy and establish confidence in the predictive capabilities of a given model. This paper provides an overview of the key guiding principles and best practices for the development of validation metrics, illustrated using examples from investigations of turbulent transport in magnetically confined plasmas. Particular emphasis is given to the importance of uncertainty quantification and its inclusion within the metrics, and the need for utilizing synthetic diagnostics to enable quantitatively meaningful comparisons between simulation and experiment. As a starting point, the structure of commonly used global transport model metrics and their limitations is reviewed. An alternate approach is then presented, which focuses upon comparisons of predicted local fluxes, fluctuations, and equilibrium gradients against observation. Furthermore, the utility of metrics based upon these comparisons is demonstrated by applying them to gyrokinetic predictions of turbulent transport in a variety of discharges performed on the DIII-D tokamak, as part of a multi-year transport model validation activity.

  13. An Integrated Approach Linking Process to Structural Modeling With Microstructural Characterization for Injection-Molded Long-Fiber Thermoplastics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Ba Nghiep; Bapanapalli, Satish K.; Smith, Mark T.

    2008-09-01

    The objective of our work is to enable the optimum design of lightweight automotive structural components using injection-molded long fiber thermoplastics (LFTs). To this end, an integrated approach that links process modeling to structural analysis with experimental microstructural characterization and validation is developed. First, process models for LFTs are developed and implemented into processing codes (e.g. ORIENT, Moldflow) to predict the microstructure of the as-formed composite (i.e. fiber length and orientation distributions). In parallel, characterization and testing methods are developed to obtain necessary microstructural data to validate process modeling predictions. Second, the predicted LFT composite microstructure is imported into a structural finite element analysis by ABAQUS to determine the response of the as-formed composite to given boundary conditions. At this stage, constitutive models accounting for the composite microstructure are developed to predict various types of behaviors (i.e. thermoelastic, viscoelastic, elastic-plastic, damage, fatigue, and impact) of LFTs. Experimental methods are also developed to determine material parameters and to validate constitutive models. Such a process-linked-structural modeling approach allows an LFT composite structure to be designed with confidence through numerical simulations. Some recent results of our collaborative research will be illustrated to show the usefulness and applications of this integrated approach.

  14. Modeling complex treatment strategies: construction and validation of a discrete event simulation model for glaucoma.

    PubMed

    van Gestel, Aukje; Severens, Johan L; Webers, Carroll A B; Beckers, Henny J M; Jansonius, Nomdo M; Schouten, Jan S A G

    2010-01-01

    Discrete event simulation (DES) modeling has several advantages over simpler modeling techniques in health economics, such as increased flexibility and the ability to model complex systems. Nevertheless, these benefits may come at the cost of reduced transparency, which may compromise the model's face validity and credibility. We aimed to produce a transparent report on the construction and validation of a DES model using a recently developed model of ocular hypertension and glaucoma. Current evidence of associations between prognostic factors and disease progression in ocular hypertension and glaucoma was translated into DES model elements. The model was extended to simulate treatment decisions and effects. Utility and costs were linked to disease status and treatment, and clinical and health economic outcomes were defined. The model was validated at several levels. The soundness of design and the plausibility of input estimates were evaluated in interdisciplinary meetings (face validity). Individual patients were traced throughout the simulation under a multitude of model settings to debug the model, and the model was run with a variety of extreme scenarios to compare the outcomes with prior expectations (internal validity). Finally, several intermediate (clinical) outcomes of the model were compared with those observed in experimental or observational studies (external validity) and the feasibility of evaluating hypothetical treatment strategies was tested. The model performed well in all validity tests. Analyses of hypothetical treatment strategies took about 30 minutes per cohort and led to plausible health-economic outcomes. DES models add value in modeling complex treatment strategies such as those in glaucoma. Achieving transparency in model structure and outcomes may require some effort in reporting and validating the model, but it is feasible.
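
    For readers unfamiliar with the technique, the sketch below shows the bare mechanics of a discrete event simulation: an event queue ordered by simulation time drives per-patient state transitions. The states, rates, and horizon are invented for illustration and are not taken from the glaucoma model.

    ```python
    import heapq, random

    # Minimal DES skeleton (hypothetical states and rates, not the authors' model).
    # Each patient carries a disease state; timed events are drawn from a queue.
    random.seed(1)

    def simulate_patient(horizon_years=10.0, annual_progression_rate=0.1):
        state, events = "ocular_hypertension", []
        heapq.heappush(events, (random.expovariate(annual_progression_rate), "progress"))
        while events:
            t, kind = heapq.heappop(events)       # next event in time order
            if t > horizon_years:
                break
            if kind == "progress" and state == "ocular_hypertension":
                state = "glaucoma"
        return state

    cohort = [simulate_patient() for _ in range(1000)]
    print("converted to glaucoma:", cohort.count("glaucoma") / len(cohort))
    ```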

  15. Recovery Act. Development and Validation of an Advanced Stimulation Prediction Model for Enhanced Geothermal System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gutierrez, Marte

    The research project aims to develop and validate an advanced computer model that can be used in the planning and design of stimulation techniques to create engineered reservoirs for Enhanced Geothermal Systems. The specific objectives of the proposal are to: 1) Develop a true three-dimensional hydro-thermal fracturing simulator that is particularly suited for EGS reservoir creation. 2) Perform laboratory scale model tests of hydraulic fracturing and proppant flow/transport using a polyaxial loading device, and use the laboratory results to test and validate the 3D simulator. 3) Perform discrete element/particulate modeling of proppant transport in hydraulic fractures, and use the results to improve understanding of proppant flow and transport. 4) Test and validate the 3D hydro-thermal fracturing simulator against case histories of EGS energy production. 5) Develop a plan to commercialize the 3D fracturing and proppant flow/transport simulator. The project is expected to yield several specific results and benefits. Major technical products from the proposal include: 1) A true-3D hydro-thermal fracturing computer code that is particularly suited to EGS, 2) Documented results of scale model tests on hydro-thermal fracturing and fracture propping in an analogue crystalline rock, 3) Documented procedures and results of discrete element/particulate modeling of flow and transport of proppants for EGS applications, and 4) Database of monitoring data, with focus on Acoustic Emissions (AE) from lab scale modeling and field case histories of EGS reservoir creation.

  16. Recovery Act. Development and Validation of an Advanced Stimulation Prediction Model for Enhanced Geothermal Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gutierrez, Marte

    2013-12-31

    This research project aims to develop and validate an advanced computer model that can be used in the planning and design of stimulation techniques to create engineered reservoirs for Enhanced Geothermal Systems. The specific objectives of the proposal are to: develop a true three-dimensional hydro-thermal fracturing simulator that is particularly suited for EGS reservoir creation; perform laboratory scale model tests of hydraulic fracturing and proppant flow/transport using a polyaxial loading device, and use the laboratory results to test and validate the 3D simulator; perform discrete element/particulate modeling of proppant transport in hydraulic fractures, and use the results to improve understanding of proppant flow and transport; test and validate the 3D hydro-thermal fracturing simulator against case histories of EGS energy production; and develop a plan to commercialize the 3D fracturing and proppant flow/transport simulator. The project is expected to yield several specific results and benefits. Major technical products from the proposal include: a true-3D hydro-thermal fracturing computer code that is particularly suited to EGS; documented results of scale model tests on hydro-thermal fracturing and fracture propping in an analogue crystalline rock; documented procedures and results of discrete element/particulate modeling of flow and transport of proppants for EGS applications; and a database of monitoring data, with focus on Acoustic Emissions (AE) from lab scale modeling and field case histories of EGS reservoir creation.

  17. Identification of Reduced-Order Thermal Therapy Models Using Thermal MR Images: Theory and Validation

    PubMed Central

    2013-01-01

    In this paper, we develop and validate a method to identify computationally efficient site- and patient-specific models of ultrasound thermal therapies from MR thermal images. The models of the specific absorption rate of the transduced energy and the temperature response of the therapy target are identified in the reduced basis of proper orthogonal decomposition of thermal images, acquired in response to a mild thermal test excitation. The method permits dynamic reidentification of the treatment models during the therapy by recursively utilizing newly acquired images. Such adaptation is particularly important during high-temperature therapies, which are known to substantially and rapidly change tissue properties and blood perfusion. The developed theory was validated for the case of focused ultrasound heating of a tissue phantom. The experimental and computational results indicate that the developed approach produces accurate low-dimensional treatment models despite temporal and spatial noises in MR images and slow image acquisition rate. PMID:22531754

  18. Prediction of risk of recurrence of venous thromboembolism following treatment for a first unprovoked venous thromboembolism: systematic review, prognostic model and clinical decision rule, and economic evaluation.

    PubMed

    Ensor, Joie; Riley, Richard D; Jowett, Sue; Monahan, Mark; Snell, Kym Ie; Bayliss, Susan; Moore, David; Fitzmaurice, David

    2016-02-01

    Unprovoked first venous thromboembolism (VTE) is defined as VTE in the absence of a temporary provoking factor such as surgery, immobility and other temporary factors. Recurrent VTE in unprovoked patients is highly prevalent, but easily preventable with oral anticoagulant (OAC) therapy. The unprovoked population is highly heterogeneous in terms of risk of recurrent VTE. The first aim of the project is to review existing prognostic models which stratify individuals by their recurrence risk, therefore potentially allowing tailored treatment strategies. The second aim is to enhance the existing research in this field, by developing and externally validating a new prognostic model for individual risk prediction, using a pooled database containing individual patient data (IPD) from several studies. The final aim is to assess the economic cost-effectiveness of the proposed prognostic model if it is used as a decision rule for resuming OAC therapy, compared with current standard treatment strategies. Standard systematic review methodology was used to identify relevant prognostic model development, validation and cost-effectiveness studies. Bibliographic databases (including MEDLINE, EMBASE and The Cochrane Library) were searched using terms relating to the clinical area and prognosis. Reviewing was undertaken by two reviewers independently using pre-defined criteria. Included full-text articles were data extracted and quality assessed. Critical appraisal of included full texts was undertaken and comparisons made of model performance. A prognostic model was developed using IPD from the pooled database of seven trials. A novel internal-external cross-validation (IECV) approach was used to develop and validate a prognostic model, with external validation undertaken in each of the trials iteratively. Given good performance in the IECV approach, a final model was developed using all trials data. A Markov patient-level simulation was used to consider the economic cost-effectiveness of using a decision rule (based on the prognostic model) to decide on resumption of OAC therapy (or not). Three full-text articles were identified by the systematic review. Critical appraisal identified methodological and applicability issues; in particular, all three existing models did not have external validation. To address this, new prognostic models were sought with external validation. Two potential models were considered: one for use at cessation of therapy (pre D-dimer), and one for use after cessation of therapy (post D-dimer). Model performance measured in the external validation trials showed strong calibration performance for both models. The post D-dimer model performed substantially better in terms of discrimination (c = 0.69), better separating high- and low-risk patients. The economic evaluation identified that a decision rule based on the final post D-dimer model may be cost-effective for patients with predicted risk of recurrence of over 8% annually; this suggests continued therapy for patients with predicted risks ≥ 8% and cessation of therapy otherwise. The post D-dimer model performed strongly and could be useful to predict individuals' risk of recurrence at any time up to 2-3 years, thereby aiding patient counselling and treatment decisions. A decision rule using this model may be cost-effective for informing clinical judgement and patient opinion in treatment decisions. 
Further research may investigate new predictors to enhance model performance, and should aim at further external validation to confirm performance in new, non-trial populations. Finally, it is essential that further research is conducted to develop a model predicting bleeding risk on therapy, to manage the balance between the risks of recurrence and bleeding. This study is registered as PROSPERO CRD42013003494. Funding was provided by the National Institute for Health Research Health Technology Assessment programme.
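
    The internal-external cross-validation (IECV) used here generalizes readily: each trial is held out in turn as an external validation set while the model is refitted on the remaining trials. The sketch below illustrates the loop with synthetic data and a generic logistic model rather than the authors' actual predictors.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    # IECV sketch across 7 pooled trials (all data synthetic placeholders).
    rng = np.random.default_rng(0)
    trials = {t: (rng.normal(size=(200, 4)), rng.integers(0, 2, 200)) for t in range(7)}

    for held_out in trials:
        X_train = np.vstack([X for t, (X, y) in trials.items() if t != held_out])
        y_train = np.concatenate([y for t, (X, y) in trials.items() if t != held_out])
        X_val, y_val = trials[held_out]
        model = LogisticRegression().fit(X_train, y_train)   # fit on remaining trials
        auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
        print(f"trial {held_out} held out: c-statistic = {auc:.2f}")
    ```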

  19. The Measure of Adolescent Heterosocial Competence: Development and Initial Validation

    ERIC Educational Resources Information Center

    Grover, Rachel L.; Nangle, Douglas W.; Zeff, Karen R.

    2005-01-01

    We developed and began construct validation of the Measure of Adolescent Heterosocial Competence (MAHC), a self-report instrument assessing the ability to negotiate effectively a range of challenging other-sex social interactions. Development followed the Goldfried and D'Zurilla (1969) behavioral-analytic model for assessing competence.…

  20. DEVELOPMENT AND VALIDATION OF A MULTIFIELD MODEL OF CHURN-TURBULENT GAS/LIQUID FLOWS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elena A. Tselishcheva; Steven P. Antal; Michael Z. Podowski

    The accuracy of numerical predictions for gas/liquid two-phase flows using Computational Multiphase Fluid Dynamics (CMFD) methods strongly depends on the formulation of models governing the interaction between the continuous liquid field and bubbles of different sizes. The purpose of this paper is to develop, test and validate a multifield model of adiabatic gas/liquid flows at intermediate gas concentrations (e.g., churn-turbulent flow regime), in which multiple-size bubbles are divided into a specified number of groups, each representing a prescribed range of sizes. The proposed modeling concept uses transport equations for the continuous liquid field and for each bubble field. The overall model has been implemented in the NPHASE-CMFD computer code. The results of NPHASE-CMFD simulations have been validated against the experimental data from the TOPFLOW test facility. Also, a parametric analysis on the effect of various modeling assumptions has been performed.

  1. Validation of the Activities of Community Transportation model for individuals with cognitive impairments.

    PubMed

    Sohlberg, McKay Moore; Fickas, Stephen; Lemoncello, Rik; Hung, Pei-Fang

    2009-01-01

    To develop a theoretical, functional model of community navigation for individuals with cognitive impairments: the Activities of Community Transportation (ACTs). Iterative design using qualitative methods (i.e. document review, focus groups and observations). Four agencies providing travel training to adults with cognitive impairments in the USA participated in the validation study. A thorough document review and series of focus groups led to the development of a comprehensive model (ACTs Wheels) delineating the requisite steps and skills for community navigation. The model was validated and updated based on observations of 395 actual trips by travellers with navigational challenges from the four participating agencies. Results revealed that the 'ACTs Wheels' models were complete and comprehensive. The 'ACTs Wheels' represent a comprehensive model of the steps needed to navigate to destinations using paratransit and fixed-route public transportation systems for travellers with cognitive impairments. Suggestions are made for future investigations of community transportation for this population.

  2. Using Bogner and Wiseman's Model of Ecological Values to Measure the Impact of an Earth Education Programme on Children's Environmental Perceptions

    ERIC Educational Resources Information Center

    Johnson, Bruce; Manoli, Constantinos C.

    2008-01-01

    Investigating the effects of educational programmes on children's environmental perceptions has been hampered by the lack of good theoretical models and valid instruments. In the present study, Bogner and Wiseman's Model of Ecological Values provided a well-developed theoretical model. A validated instrument based on Bogner's Environmental…

  3. Development and Validation of Scores from an Instrument Measuring Student Test-Taking Motivation

    ERIC Educational Resources Information Center

    Eklof, Hanna

    2006-01-01

    Using the expectancy-value model of achievement motivation as a basis, this study's purpose is to develop, apply, and validate scores from a self-report instrument measuring student test-taking motivation. Sampled evidence of construct validity for the present sample indicates that a number of the items in the instrument could be used as an…

  4. Development and Validation of Triarchic Construct Scales from the Psychopathic Personality Inventory

    PubMed Central

    Hall, Jason R.; Drislane, Laura E.; Patrick, Christopher J.; Morano, Mario; Lilienfeld, Scott O.; Poythress, Norman G.

    2014-01-01

    The Triarchic model of psychopathy describes this complex condition in terms of distinct phenotypic components of boldness, meanness, and disinhibition. Brief self-report scales designed specifically to index these psychopathy facets have thus far demonstrated promising construct validity. The present study sought to develop and validate scales for assessing facets of the Triarchic model using items from a well-validated existing measure of psychopathy—the Psychopathic Personality Inventory (PPI). A consensus rating approach was used to identify PPI items relevant to each Triarchic facet, and the convergent and discriminant validity of the resulting PPI-based Triarchic scales were evaluated in relation to multiple criterion variables (i.e., other psychopathy inventories, antisocial personality disorder features, personality traits, psychosocial functioning) in offender and non-offender samples. The PPI-based Triarchic scales showed good internal consistency and related to criterion variables in ways consistent with predictions based on the Triarchic model. Findings are discussed in terms of implications for conceptualization and assessment of psychopathy. PMID:24447280

  5. Development and validation of Triarchic construct scales from the psychopathic personality inventory.

    PubMed

    Hall, Jason R; Drislane, Laura E; Patrick, Christopher J; Morano, Mario; Lilienfeld, Scott O; Poythress, Norman G

    2014-06-01

    The Triarchic model of psychopathy describes this complex condition in terms of distinct phenotypic components of boldness, meanness, and disinhibition. Brief self-report scales designed specifically to index these psychopathy facets have thus far demonstrated promising construct validity. The present study sought to develop and validate scales for assessing facets of the Triarchic model using items from a well-validated existing measure of psychopathy-the Psychopathic Personality Inventory (PPI). A consensus-rating approach was used to identify PPI items relevant to each Triarchic facet, and the convergent and discriminant validity of the resulting PPI-based Triarchic scales were evaluated in relation to multiple criterion variables (i.e., other psychopathy inventories, antisocial personality disorder features, personality traits, psychosocial functioning) in offender and nonoffender samples. The PPI-based Triarchic scales showed good internal consistency and related to criterion variables in ways consistent with predictions based on the Triarchic model. Findings are discussed in terms of implications for conceptualization and assessment of psychopathy.

  6. QSAR-Based Models for Designing Quinazoline/Imidazothiazoles/Pyrazolopyrimidines Based Inhibitors against Wild and Mutant EGFR

    PubMed Central

    Chauhan, Jagat Singh; Dhanda, Sandeep Kumar; Singla, Deepak; Agarwal, Subhash M.; Raghava, Gajendra P. S.

    2014-01-01

    Overexpression of EGFR is responsible for a number of cancers, including lung cancer, as it activates various downstream signaling pathways; controlling EGFR function is therefore important for treating cancer patients. It is well established that inhibiting ATP binding within the EGFR kinase domain regulates its function. The existing quinazoline-derivative-based drugs used for treating lung cancer inhibit the wild-type EGFR. In this study, we made a systematic attempt to develop QSAR models for designing quinazoline derivatives that could inhibit wild-type EGFR, and imidazothiazole/pyrazolopyrimidine derivatives against mutant EGFR. Three types of prediction methods were developed to design inhibitors against EGFR (wild, mutant and both). First, we developed models for predicting inhibitors against wild-type EGFR by training and testing on a dataset containing 128 quinazoline-based inhibitors. This dataset was divided into two subsets, wild_train and wild_valid, containing 103 and 25 inhibitors respectively. The models were trained and tested on the wild_train dataset, while performance was evaluated on wild_valid, the validation dataset. We achieved a maximum correlation between predicted and experimentally determined inhibition (IC50) of 0.90 on the validation dataset. Secondly, we developed models for predicting inhibitors against mutant EGFR (L858R) on the mutant_train and mutant_valid datasets and achieved maximum correlations of 0.834 to 0.850 on these datasets. Finally, an integrated hybrid model was developed on a dataset containing both wild and mutant inhibitors, achieving maximum correlations of 0.761 to 0.850 on the different datasets. In order to promote open-source drug discovery, we developed a webserver for designing inhibitors against wild and mutant EGFR, along with standalone (http://osddlinux.osdd.net/) and Galaxy (http://osddlinux.osdd.net:8001) versions of the software. We hope our webserver (http://crdd.osdd.net/oscadd/ntegfr/) will play a vital role in designing new anticancer drugs. PMID:24992720
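
    The evaluation protocol for the wild-type model (103 training and 25 validation inhibitors, scored by the correlation between predicted and experimental inhibition) can be sketched as follows; the descriptors, activities, and regressor below are synthetic stand-ins, not the authors' QSAR workflow.

    ```python
    import numpy as np
    from scipy.stats import pearsonr
    from sklearn.ensemble import RandomForestRegressor

    # Train/validate split mirroring the 128 = 103 + 25 design (synthetic data).
    rng = np.random.default_rng(42)
    X, y = rng.normal(size=(128, 50)), rng.normal(loc=6.0, size=128)

    X_train, y_train = X[:103], y[:103]     # wild_train analogue
    X_valid, y_valid = X[103:], y[103:]     # wild_valid analogue

    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
    r, _ = pearsonr(y_valid, model.predict(X_valid))
    # The paper reports r = 0.90 on its real validation set; random data gives r ~ 0.
    print(f"correlation on validation set: {r:.2f}")
    ```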

  7. The first Latin-American risk stratification system for cardiac surgery: can be used as a graphic pocket-card score.

    PubMed

    Carosella, Victorio C; Navia, Jose L; Al-Ruzzeh, Sharif; Grancelli, Hugo; Rodriguez, Walter; Cardenas, Cesar; Bilbao, Jorge; Nojek, Carlos

    2009-08-01

    This study aims to develop the first Latin-American risk model that can be used as a simple, pocket-card graphic score at the bedside. The risk model was developed on 2903 patients who underwent cardiac surgery at the Spanish Hospital of Buenos Aires, Argentina, between June 1994 and December 1999. Internal validation was performed on 708 patients treated between January 2000 and June 2001 at the same center. External validation was performed on 1087 patients treated between February 2000 and January 2007 at three other centers in Argentina. In the development dataset, the area under the receiver operating characteristic (ROC) curve was 0.73 and the Hosmer-Lemeshow (HL) test gave P=0.88. In the internal validation, the area under the ROC curve was 0.77; in the external validation it was 0.81, but imperfect calibration was detected because the observed in-hospital mortality (3.96%) was significantly lower than in the development dataset (8.20%) (P<0.0001). Recalibration was done in 2007, showing an excellent level of agreement between the observed and predicted mortality rates on all patients (P=0.92). This is the first risk model for cardiac surgery developed in a Latin-American population with both internal and external validation. A simple graphic pocket-card score allows easy bedside application with acceptable statistical precision.

  8. Modeling river total bed material load discharge using artificial intelligence approaches (based on conceptual inputs)

    NASA Astrophysics Data System (ADS)

    Roushangar, Kiyoumars; Mehrabani, Fatemeh Vojoudi; Shiri, Jalal

    2014-06-01

    This study presents Artificial Intelligence (AI)-based modeling of total bed material load, aimed at improving on the predictive accuracy of traditional models. Gene expression programming (GEP) and adaptive neuro-fuzzy inference system (ANFIS)-based models were developed and validated for the estimations. Sediment data from the Qotur River (Northwestern Iran) were used for development and validation of the applied techniques. In order to assess the applied techniques against traditional models, stream power-based and shear stress-based physical models were also applied to the studied case. The obtained results reveal that the developed AI-based models, using a minimum number of dominant factors, gave more accurate results than the other applied models. It was also revealed that the k-fold test is a practical but computationally costly technique for completely scanning the applied data and avoiding over-fitting.
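
    The k-fold test mentioned above amounts to refitting and scoring the model on complementary folds so that every observation is used for validation exactly once. A minimal sketch with a generic regressor and synthetic data (not the GEP/ANFIS models or the Qotur River records):

    ```python
    import numpy as np
    from sklearn.model_selection import KFold
    from sklearn.ensemble import GradientBoostingRegressor

    # Synthetic stand-ins for hydraulic predictors and bed material load.
    rng = np.random.default_rng(9)
    X, y = rng.normal(size=(300, 4)), rng.normal(size=300)

    for k, (tr, te) in enumerate(KFold(n_splits=5, shuffle=True, random_state=0).split(X)):
        model = GradientBoostingRegressor().fit(X[tr], y[tr])
        rmse = np.sqrt(np.mean((model.predict(X[te]) - y[te]) ** 2))
        print(f"fold {k}: RMSE = {rmse:.2f}")   # every sample validated exactly once
    ```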

  9. Development and validation of a risk assessment tool for gastric cancer in a general Japanese population.

    PubMed

    Iida, Masahiro; Ikeda, Fumie; Hata, Jun; Hirakawa, Yoichiro; Ohara, Tomoyuki; Mukai, Naoko; Yoshida, Daigo; Yonemoto, Koji; Esaki, Motohiro; Kitazono, Takanari; Kiyohara, Yutaka; Ninomiya, Toshiharu

    2018-05-01

    There have been very few reports of risk score models for the development of gastric cancer. The aim of this study was to develop and validate a risk assessment tool for discerning future gastric cancer risk in Japanese. A total of 2444 subjects aged 40 years or over were followed up for 14 years from 1988 (derivation cohort), and 3204 subjects of the same age group were followed up for 5 years from 2002 (validation cohort). The weighting (risk score) of each risk factor for predicting future gastric cancer in the risk assessment tool was determined based on the coefficients of a Cox proportional hazards model in the derivation cohort. The goodness of fit of the established risk assessment tool was assessed using the c-statistic and the Hosmer-Lemeshow test in the validation cohort. During the follow-up, gastric cancer developed in 90 subjects in the derivation cohort and 35 subjects in the validation cohort. In the derivation cohort, the risk prediction model for gastric cancer was established using significant risk factors: age, sex, the combination of Helicobacter pylori antibody and pepsinogen status, hemoglobin A1c level, and smoking status. The incidence of gastric cancer increased significantly as the sum of risk scores increased (P trend < 0.001). The risk assessment tool was validated internally and showed good discrimination (c-statistic = 0.76) and calibration (Hosmer-Lemeshow test P = 0.43) in the validation cohort. We developed a risk assessment tool for gastric cancer that provides a useful guide for stratifying an individual's risk of future gastric cancer.
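
    The abstract describes converting Cox model coefficients into additive risk scores. A common recipe, shown below with invented weights (the paper's actual point assignments are not reproduced here), scales the log hazard ratios and rounds them so a total score can be summed by hand:

    ```python
    # Hypothetical point-score construction from Cox coefficients (illustrative
    # weights only; not the published gastric cancer risk assessment tool).
    cox_coefficients = {            # log hazard ratios (made up for illustration)
        "age_per_decade": 0.50,
        "male_sex": 0.40,
        "Hp_antibody_pepsinogen_positive": 1.10,
        "HbA1c_high": 0.35,
        "current_smoker": 0.45,
    }
    scale = 10 / max(cox_coefficients.values())   # anchor the largest weight at 10 points
    points = {k: round(v * scale) for k, v in cox_coefficients.items()}

    patient = {"age_per_decade": 1, "male_sex": 1,
               "Hp_antibody_pepsinogen_positive": 0,
               "HbA1c_high": 1, "current_smoker": 0}
    score = sum(points[k] * patient[k] for k in points)
    print(points, "total score:", score)
    ```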

  10. Validation Metrics for Improving Our Understanding of Turbulent Transport - Moving Beyond Proof by Pretty Picture and Loud Assertion

    NASA Astrophysics Data System (ADS)

    Holland, C.

    2013-10-01

    Developing validated models of plasma dynamics is essential for confident predictive modeling of current and future fusion devices. This tutorial will present an overview of the key guiding principles and practices for state-of-the-art validation studies, illustrated using examples from investigations of turbulent transport in magnetically confined plasmas. The primary focus of the talk will be the development of quantitative validation metrics, which are essential for moving beyond qualitative and subjective assessments of model performance and fidelity. Particular emphasis and discussion are given to (i) the need for utilizing synthetic diagnostics to enable quantitatively meaningful comparisons between simulation and experiment, and (ii) the importance of robust uncertainty quantification and its inclusion within the metrics. To illustrate these concepts, we first review the structure and key insights gained from commonly used 'global' transport model metrics (e.g. predictions of incremental stored energy or radially-averaged temperature), as well as their limitations. Building upon these results, a new form of turbulent transport metrics is then proposed, which focuses upon comparisons of predicted local gradients and fluctuation characteristics against observation. We demonstrate the utility of these metrics by applying them to simulations and modeling of a newly developed 'validation database' derived from the results of a systematic, multi-year turbulent transport validation campaign on the DIII-D tokamak, in which comprehensive profile and fluctuation measurements have been obtained from a wide variety of heating and confinement scenarios. Finally, we discuss extensions of these metrics and their underlying design concepts to other areas of plasma confinement research, including both magnetohydrodynamic stability and integrated scenario modeling. Supported by the US DOE under DE-FG02-07ER54917 and DE-FC02-08ER54977.

  11. Locating the Seventh Cervical Spinous Process: Development and Validation of a Multivariate Model Using Palpation and Personal Information.

    PubMed

    Ferreira, Ana Paula A; Póvoa, Luciana C; Zanier, José F C; Ferreira, Arthur S

    2017-02-01

    The aim of this study was to develop and validate a multivariate prediction model, guided by palpation and personal information, for locating the seventh cervical spinous process (C7SP). A single-blinded, cross-sectional study at a primary to tertiary health care center was conducted for model development and temporal validation. One-hundred sixty participants were prospectively included for the model development (n = 80) and time-split validation (n = 80) stages. The C7SP was located using the thorax-rib static method (TRSM). Participants underwent chest radiography for assessment of the inner body structure located with TRSM and using radio-opaque markers placed over the skin. Age, sex, height, body mass, body mass index, and vertex-marker distance (D_V-M) were used to predict the distance from the C7SP to the vertex (D_V-C7). Multivariate linear regression modeling, limits of agreement plot, histogram of residues, receiver operating characteristic (ROC) curves, and confusion tables were analyzed. The multivariate linear prediction model for D_V-C7 (in centimeters) was D_V-C7 = 0.986 D_V-M + 0.018 (mass) + 0.014 (age) - 1.008. ROC curves had better discrimination for D_V-C7 (area under the curve = 0.661; 95% confidence interval = 0.541-0.782; P = .015) than for D_V-M (area under the curve = 0.480; 95% confidence interval = 0.345-0.614; P = .761), with respective cutoff points at 23.40 cm (sensitivity = 41%, specificity = 63%) and 24.75 cm (sensitivity = 69%, specificity = 52%). The C7SP was correctly located more often when using predicted D_V-C7 in the validation sample than when using the TRSM in the development sample: n = 53 (66%) vs n = 32 (40%), P < .001. Better accuracy was obtained when locating the C7SP by use of a multivariate model that incorporates palpation and personal information. Copyright © 2016. Published by Elsevier Inc.
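
    The prediction model is simple enough to evaluate in a few lines of code; the sketch below uses the published coefficients with an invented example case.

    ```python
    def predict_dv_c7(dv_m_cm: float, mass_kg: float, age_years: float) -> float:
        """Distance from vertex to C7 spinous process (cm), using the published
        regression: D_V-C7 = 0.986*D_V-M + 0.018*(mass) + 0.014*(age) - 1.008."""
        return 0.986 * dv_m_cm + 0.018 * mass_kg + 0.014 * age_years - 1.008

    # Hypothetical example: marker 24.0 cm below the vertex, 70 kg, 40 years old.
    print(round(predict_dv_c7(24.0, 70.0, 40.0), 2))  # -> 24.48 cm
    ```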

  12. Developing and validating a model to predict the success of an IHCS implementation: the Readiness for Implementation Model.

    PubMed

    Wen, Kuang-Yi; Gustafson, David H; Hawkins, Robert P; Brennan, Patricia F; Dinauer, Susan; Johnson, Pauley R; Siegler, Tracy

    2010-01-01

    To develop and validate the Readiness for Implementation Model (RIM). This model predicts a healthcare organization's potential for success in implementing an interactive health communication system (IHCS). The model consists of seven weighted factors, with each factor containing five to seven elements. Two decision-analytic approaches, self-explicated and conjoint analysis, were used to measure the weights of the RIM with a sample of 410 experts. The RIM model with weights was then validated in a prospective study of 25 IHCS implementation cases. Orthogonal main effects design was used to develop 700 conjoint-analysis profiles, which varied on seven factors. Each of the 410 experts rated the importance and desirability of the factors and their levels, as well as a set of 10 different profiles. For the prospective 25-case validation, three time-repeated measures of the RIM scores were collected for comparison with the implementation outcomes. Two of the seven factors, 'organizational motivation' and 'meeting user needs,' were found to be most important in predicting implementation readiness. No statistically significant difference was found in the predictive validity of the two approaches (self-explicated and conjoint analysis). The RIM was a better predictor for the 1-year implementation outcome than the half-year outcome. The expert sample, the order of the survey tasks, the additive model, and basing the RIM cut-off score on experience are possible limitations of the study. The RIM needs to be empirically evaluated in institutions adopting IHCS and sustaining the system in the long term.

  13. Predicting hospital stay, mortality and readmission in people admitted for hypoglycaemia: prognostic models derivation and validation.

    PubMed

    Zaccardi, Francesco; Webb, David R; Davies, Melanie J; Dhalwani, Nafeesa N; Gray, Laura J; Chatterjee, Sudesna; Housley, Gemma; Shaw, Dominick; Hatton, James W; Khunti, Kamlesh

    2017-06-01

    Hospital admissions for hypoglycaemia represent a significant burden on individuals with diabetes and have a substantial economic impact on healthcare systems. To date, no prognostic models have been developed to predict outcomes following admission for hypoglycaemia. We aimed to develop and validate prediction models to estimate risk of inpatient death, 24 h discharge and one month readmission in people admitted to hospital for hypoglycaemia. We used the Hospital Episode Statistics database, which includes data on all hospital admissions to National Health Service hospital trusts in England, to extract admissions for hypoglycaemia between 2010 and 2014. We developed, internally and temporally validated, and compared two prognostic risk models for each outcome. The first model included age, sex, ethnicity, region, social deprivation and Charlson score ('base' model). In the second model, we added to the 'base' model the 20 most common medical conditions and applied a stepwise backward selection of variables ('disease' model). We used C-index and calibration plots to assess model performance and developed a calculator to estimate probabilities of outcomes according to individual characteristics. In derivation samples, 296 out of 11,136 admissions resulted in inpatient death, 1789/33,825 in one month readmission and 8396/33,803 in 24 h discharge. Corresponding values for validation samples were: 296/10,976, 1207/22,112 and 5363/22,107. The two models had similar discrimination. In derivation samples, C-indices for the base and disease models, respectively, were: 0.77 (95% CI 0.75, 0.80) and 0.78 (0.75, 0.80) for death, 0.57 (0.56, 0.59) and 0.57 (0.56, 0.58) for one month readmission, and 0.68 (0.67, 0.69) and 0.69 (0.68, 0.69) for 24 h discharge. Corresponding values in validation samples were: 0.74 (0.71, 0.76) and 0.74 (0.72, 0.77), 0.55 (0.54, 0.57) and 0.55 (0.53, 0.56), and 0.66 (0.65, 0.67) and 0.67 (0.66, 0.68). In both derivation and validation samples, calibration plots showed good agreement for the three outcomes. We developed a calculator of probabilities for inpatient death and 24 h discharge given the low performance of one month readmission models. This simple and pragmatic tool to predict in-hospital death and 24 h discharge has the potential to reduce mortality and improve discharge in people admitted for hypoglycaemia.
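
    For binary outcomes such as inpatient death, the C-index reported above coincides with the area under the ROC curve, so it can be computed directly from predicted probabilities. A minimal sketch with synthetic data:

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    # Synthetic outcomes and risk predictions (not Hospital Episode Statistics data).
    rng = np.random.default_rng(7)
    y_true = rng.integers(0, 2, 1000)                        # 0 = survived, 1 = died
    risk = np.clip(0.3 * y_true + rng.normal(0.3, 0.2, 1000), 0, 1)

    # For a binary outcome, the C-index equals the ROC AUC.
    print(f"c-index: {roc_auc_score(y_true, risk):.2f}")
    ```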

  14. Ventilation tube insertion simulation: a literature review and validity assessment of five training models.

    PubMed

    Mahalingam, S; Awad, Z; Tolley, N S; Khemani, S

    2016-08-01

    The objective of this study was to identify and investigate the face and content validity of ventilation tube insertion (VTI) training models described in the literature. A review of the literature was carried out to identify articles describing VTI simulators. Feasible models were replicated and assessed by a group of experts at a postgraduate simulation centre. Experts were defined as surgeons who had performed at least 100 VTIs on patients. Seventeen experts participated, ensuring sufficient statistical power for the analysis. A standardised 18-item Likert-scale questionnaire was used. This addressed face validity (realism), global and task-specific content (suitability of the model for teaching) and curriculum recommendation. The search revealed eleven models, of which only five had associated validity data. Five models were found to be feasible to replicate. None of the tested models achieved face or global content validity. Only one model achieved task-specific validity, and hence there was no agreement on curriculum recommendation. The quality of simulation models is moderate and there is room for improvement. There is a need for new models to be developed, or existing ones to be refined, in order to construct a more realistic training platform for VTI simulation. © 2015 John Wiley & Sons Ltd.

  15. Development of syntax of intuition-based learning model in solving mathematics problems

    NASA Astrophysics Data System (ADS)

    Yeni Heryaningsih, Nok; Khusna, Hikmatul

    2018-01-01

    The aim of the research was to produce a syntax for an Intuition Based Learning (IBL) model for solving mathematics problems that is valid, practical and effective in improving students' mathematics achievement. The subjects of the research were two classes of grade XI students at SMAN 2 Sragen, Central Java. The research was of the Research and Development (R&D) type. The development process adopted the Plomp and Borg & Gall development models, comprising a preliminary investigation step, a design step, a realization step, and an evaluation and revision step. The development steps were as follows: (1) collecting information and studying theories in the preliminary investigation step, covering intuition, learning model development, student conditions, and topic analysis; (2) designing a syntax that could bring up intuition in solving mathematics problems (comprising several phases that elicit intuition: a Preparation phase, an Incubation phase, an Illumination phase and a Verification phase), and then designing the research instruments; (3) realizing the designed syntax of the Intuition Based Learning model as the first draft; (4) having the first draft validated by the validators; (5) testing the syntax of the IBL model in classrooms to establish its effectiveness; and (6) conducting a Focus Group Discussion (FGD) to evaluate the results of the classroom testing, and then revising the IBL syntax. The research produced a syntax of the IBL model for solving mathematics problems that is valid, practical and effective. The syntax of the IBL model in the classroom was: (1) opening with apperception, motivation, and building students' positive perceptions; (2) the teacher explains the material in general terms; (3) group discussion of the material; (4) the teacher gives students mathematics problems; (5) individual exercises to solve mathematics problems through steps that can bring up students' intuition: Preparation, Incubation, Illumination, and Verification; and (6) closure with a review of what students have learned, or the assignment of homework.

  16. The Development of Statistics Textbook Supported with ICT and Portfolio-Based Assessment

    NASA Astrophysics Data System (ADS)

    Hendikawati, Putriaji; Yuni Arini, Florentina

    2016-02-01

    This was development research that aimed to develop and produce a Statistics textbook model supported with information and communication technology (ICT) and Portfolio-Based Assessment. The book was designed for mathematics students at college level, to improve students' ability in mathematical connection and communication. There were three stages in this research, i.e. define, design, and develop. The textbook consisted of 10 chapters, each containing an introduction, core material, examples and exercises. The development phase began with the initial design of the book (draft 1), which was then validated by experts. Revision of draft 1 produced draft 2, which then underwent a limited readability test. Revision of draft 2 produced draft 3, which was trialled on a small sample to produce a valid model textbook. The data were analysed with descriptive statistics. The analysis showed that the Statistics textbook model supported with ICT and Portfolio-Based Assessment was valid and fulfilled the criteria of practicality.

  17. Derivation and validation of the prediabetes self-assessment screening score after acute pancreatitis (PERSEUS).

    PubMed

    Soo, Danielle H E; Pendharkar, Sayali A; Jivanji, Chirag J; Gillies, Nicola A; Windsor, John A; Petrov, Maxim S

    2017-10-01

    Approximately 40% of patients develop abnormal glucose metabolism after a single episode of acute pancreatitis. This study aimed to develop and validate a prediabetes self-assessment screening score for patients after acute pancreatitis. Data from non-overlapping training (n=82) and validation (n=80) cohorts were analysed. Univariate logistic and linear regression identified variables associated with prediabetes after acute pancreatitis. Multivariate logistic regression developed the score, ranging from 0 to 215. The area under the receiver-operating characteristic curve (AUROC), Hosmer-Lemeshow χ2 statistic, and calibration plots were used to assess model discrimination and calibration. The developed score was validated using data from the validation cohort. The score had an AUROC of 0.88 (95% CI, 0.80-0.97) and a Hosmer-Lemeshow χ2 statistic of 5.75 (p=0.676). Patients with a score of ≥75 had a 94.1% probability of having prediabetes, and were 29 times more likely to have prediabetes than those with a score of <75. The AUROC in the validation cohort was 0.81 (95% CI, 0.70-0.92) and the Hosmer-Lemeshow χ2 statistic was 5.50 (p=0.599). The score showed good calibration in both cohorts. The developed and validated score, called PERSEUS, is the first instrument to identify individuals who are at high risk of developing abnormal glucose metabolism following an episode of acute pancreatitis. Copyright © 2017 Editrice Gastroenterologica Italiana S.r.l. Published by Elsevier Ltd. All rights reserved.
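
    The Hosmer-Lemeshow statistic quoted above can be computed with the standard groups-of-risk construction. The sketch below is a compact generic implementation (deciles by default, df = groups - 2) applied to synthetic data, not the PERSEUS cohorts.

    ```python
    import numpy as np
    from scipy.stats import chi2

    def hosmer_lemeshow(y_true, y_prob, groups=10):
        """Hosmer-Lemeshow goodness-of-fit statistic and p-value."""
        order = np.argsort(y_prob)
        y_true, y_prob = np.asarray(y_true)[order], np.asarray(y_prob)[order]
        hl = 0.0
        for chunk_y, chunk_p in zip(np.array_split(y_true, groups),
                                    np.array_split(y_prob, groups)):
            obs, exp, n = chunk_y.sum(), chunk_p.sum(), len(chunk_y)
            hl += (obs - exp) ** 2 / (exp * (1 - exp / n) + 1e-12)
        return hl, chi2.sf(hl, groups - 2)

    rng = np.random.default_rng(3)
    p = rng.uniform(0.05, 0.95, 500)
    y = rng.binomial(1, p)           # outcomes drawn from the model itself
    print(hosmer_lemeshow(y, p))     # large p-value => no evidence of miscalibration
    ```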

  18. External validation of the diffuse intrinsic pontine glioma survival prediction model: a collaborative report from the International DIPG Registry and the SIOPE DIPG Registry.

    PubMed

    Veldhuijzen van Zanten, Sophie E M; Lane, Adam; Heymans, Martijn W; Baugh, Joshua; Chaney, Brooklyn; Hoffman, Lindsey M; Doughman, Renee; Jansen, Marc H A; Sanchez, Esther; Vandertop, William P; Kaspers, Gertjan J L; van Vuurden, Dannis G; Fouladi, Maryam; Jones, Blaise V; Leach, James

    2017-08-01

    We aimed to perform external validation of the recently developed survival prediction model for diffuse intrinsic pontine glioma (DIPG), and discuss its utility. The DIPG survival prediction model was developed in a cohort of patients from the Netherlands, United Kingdom and Germany, registered in the SIOPE DIPG Registry, and includes age <3 years, longer symptom duration and receipt of chemotherapy as favorable predictors, and presence of ring-enhancement on MRI as unfavorable predictor. Model performance was evaluated by analyzing the discrimination and calibration abilities. External validation was performed using an unselected cohort from the International DIPG Registry, including patients from United States, Canada, Australia and New Zealand. Basic comparison with the results of the original study was performed using descriptive statistics, and univariate- and multivariable regression analyses in the validation cohort. External validation was assessed following a variety of analyses described previously. Baseline patient characteristics and results from the regression analyses were largely comparable. Kaplan-Meier curves of the validation cohort reproduced separated groups of standard (n = 39), intermediate (n = 125), and high-risk (n = 78) patients. This discriminative ability was confirmed by similar values for the hazard ratios across these risk groups. The calibration curve in the validation cohort showed a symmetric underestimation of the predicted survival probabilities. In this external validation study, we demonstrate that the DIPG survival prediction model has acceptable cross-cohort calibration and is able to discriminate patients with short, average, and increased survival. We discuss how this clinico-radiological model may serve a useful role in current clinical practice.

  19. Validity and Practicality of Acid-Base Module Based on Guided Discovery Learning for Senior High School

    NASA Astrophysics Data System (ADS)

    Yerimadesi; Bayharti; Jannah, S. M.; Lufri; Festiyed; Kiram, Y.

    2018-04-01

    This Research and Development (R&D) study aims to produce a guided discovery learning based module on the topic of acid-base and to determine its validity and practicality in learning. Module development used the Four-D (4-D) model (define, design, develop and disseminate). This research was performed up to the development stage. The research instruments were validity and practicality questionnaires. The module was validated by five experts (three chemistry lecturers of Universitas Negeri Padang and two chemistry teachers of SMAN 9 Padang). The practicality test was done by two chemistry teachers and 30 students of SMAN 9 Padang. Cohen's kappa was used to analyze validity and practicality. The average moment kappa was 0.86 for validity, and those for practicality were 0.85 by teachers and 0.76 by students, all in the high category. It can be concluded that validity and practicality were established for high school chemistry learning.

  20. Development and validation of age-dependent FE human models of a mid-sized male thorax.

    PubMed

    El-Jawahri, Raed E; Laituri, Tony R; Ruan, Jesse S; Rouhana, Stephen W; Barbat, Saeed D

    2010-11-01

    The increasing number of people over 65 years old (YO) is an important research topic in the area of impact biomechanics, and finite element (FE) modeling can provide valuable support for related research. There were three objectives of this study: (1) Estimation of the representative age of the previously-documented Ford Human Body Model (FHBM) -- an FE model which approximates the geometry and mass of a mid-sized male, (2) Development of FE models representing two additional ages, and (3) Validation of the resulting three models to the extent possible with respect to available physical tests. Specifically, the geometry of the model was compared to published data relating rib angles to age, and the mechanical properties of different simulated tissues were compared to a number of published aging functions. The FHBM was determined to represent a 53-59 YO mid-sized male. The aforementioned aging functions were used to develop FE models representing two additional ages: 35 and 75 YO. The rib model was validated against human rib specimens and whole rib tests, under different loading conditions, with and without modeled fracture. In addition, the resulting three age-dependent models were validated by simulating cadaveric tests of blunt and sled impacts. The responses of the models, in general, were within the cadaveric response corridors. When compared to peak responses from individual cadavers similar in size and age to the age-dependent models, some responses were within one standard deviation of the test data. All the other responses, but one, were within two standard deviations.

  1. Validation of the Work-Life Balance Culture Scale (WLBCS).

    PubMed

    Nitzsche, Anika; Jung, Julia; Kowalski, Christoph; Pfaff, Holger

    2014-01-01

    The purpose of this paper is to describe the theoretical development and initial validation of the newly developed Work-Life Balance Culture Scale (WLBCS), an instrument for measuring an organizational culture that promotes the work-life balance of employees. In Study 1 (N=498), the scale was developed and its factorial validity tested through exploratory factor analyses. In Study 2 (N=513), confirmatory factor analysis (CFA) was performed to examine model fit and retest the dimensional structure of the instrument. To assess construct validity, a priori hypotheses were formulated and subsequently tested using correlation analyses. Exploratory and confirmatory factor analyses revealed a one-factor model. Results of the bivariate correlation analyses may be interpreted as preliminary evidence of the scale's construct validity. The five-item WLBCS is a new and efficient instrument with good overall quality. Its conciseness makes it particularly suitable for use in employee surveys to gain initial insight into a company's perceived work-life balance culture.
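
    The exploratory step of the analysis (testing whether the five items load on a single factor) can be sketched as follows with synthetic Likert-type responses; a confirmatory factor analysis would need a dedicated SEM package (e.g. semopy) and is omitted here.

    ```python
    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    # Synthetic Likert responses for five items driven by one latent factor,
    # mimicking the Study 1 sample size (N=498); not the WLBCS data.
    rng = np.random.default_rng(11)
    latent = rng.normal(size=(498, 1))
    items = np.clip(np.rint(3 + latent + rng.normal(scale=0.8, size=(498, 5))), 1, 5)

    fa = FactorAnalysis(n_components=1).fit(items)
    print("loadings:", np.round(fa.components_.ravel(), 2))  # all items load on one factor
    ```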

  2. A Measure for Evaluating the Effectiveness of Teen Pregnancy Prevention Programs.

    ERIC Educational Resources Information Center

    Somers, Cheryl L.; Johnson, Stephanie A.; Sawilowksy, Shlomo S.

    2002-01-01

    The Teen Attitude Pregnancy Scale (TAPS) was developed to measure teen attitudes and intentions regarding teenage pregnancy. The model demonstrated good internal consistency and concurrent validity for the samples in this study. Analysis revealed evidence of validity for this model. (JDM)

  3. Development and Validation of Linear Alternator Models for the Advanced Stirling Convertor

    NASA Technical Reports Server (NTRS)

    Metscher, Jonathan F.; Lewandowski, Edward

    2014-01-01

    Two models of the linear alternator of the Advanced Stirling Convertor (ASC) have been developed using the Sage 1-D modeling software package. The first model relates the piston motion to electric current by means of a motor constant. The second uses electromagnetic model components to model the magnetic circuit of the alternator. The models are tuned and validated using test data and compared against each other. Results show both models can be tuned to achieve results within 7% of ASC test data under normal operating conditions. Using Sage enables a complete ASC model to be created and simulations to be completed quickly compared to more complex multi-dimensional models. These models allow for better insight into overall Stirling convertor performance, aid with Stirling power system modeling, and in the future support NASA mission planning for Stirling-based power systems.
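
    The first model's motor-constant idea is easy to illustrate: the alternator EMF is proportional to piston velocity, e(t) = K·dx/dt. The sketch below uses invented parameter values, not ASC data, and a purely resistive load.

    ```python
    import numpy as np

    # Motor-constant alternator sketch (all values assumed for illustration).
    K = 50.0                      # motor constant, V per (m/s)
    R_coil, R_load = 1.0, 20.0    # ohms

    t = np.linspace(0, 0.1, 1000)
    x = 5e-3 * np.sin(2 * np.pi * 80 * t)    # 5 mm stroke amplitude at 80 Hz
    v = np.gradient(x, t)                    # piston velocity
    emf = K * v                              # e(t) = K * dx/dt
    current = emf / (R_coil + R_load)        # simple resistive circuit
    print(f"peak current: {current.max():.2f} A")
    ```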

  4. Development and Validation of Linear Alternator Models for the Advanced Stirling Convertor

    NASA Technical Reports Server (NTRS)

    Metscher, Jonathan F.; Lewandowski, Edward J.

    2015-01-01

    Two models of the linear alternator of the Advanced Stirling Convertor (ASC) have been developed using the Sage 1-D modeling software package. The first model relates the piston motion to electric current by means of a motor constant. The second uses electromagnetic model components to model the magnetic circuit of the alternator. The models are tuned and validated using test data and also compared against each other. Results show both models can be tuned to achieve results within 7% of ASC test data under normal operating conditions. Using Sage enables a complete ASC model to be developed and simulations to be completed quickly compared with more complex multi-dimensional models. These models allow for better insight into overall Stirling convertor performance, aid with Stirling power system modeling, and will in the future support NASA mission planning for Stirling-based power systems.

  5. Nomogram predicting response after chemoradiotherapy in rectal cancer using sequential PET-CT imaging: a multicentric prospective study with external validation.

    PubMed

    van Stiphout, Ruud G P M; Valentini, Vincenzo; Buijsen, Jeroen; Lammering, Guido; Meldolesi, Elisa; van Soest, Johan; Leccisotti, Lucia; Giordano, Alessandro; Gambacorta, Maria A; Dekker, Andre; Lambin, Philippe

    2014-11-01

    To develop and externally validate a predictive model for pathologic complete response (pCR) for locally advanced rectal cancer (LARC) based on clinical features and early sequential (18)F-FDG PET-CT imaging. Prospective data (among others, the THUNDER trial) were used to train (N=112, MAASTRO Clinic) and validate (N=78, Università Cattolica del S. Cuore) the model for pCR (ypT0N0). All patients received long-course chemoradiotherapy (CRT) and surgery. Clinical parameters were age, gender, clinical tumour (cT) stage and clinical nodal (cN) stage. PET parameters were SUVmax, SUVmean, metabolic tumour volume (MTV) and maximal tumour diameter, for which response indices between the pre-treatment and intermediate scan were calculated. Using multivariate logistic regression, three probability groups for pCR were defined. The pCR rates were 21.4% (training) and 23.1% (validation). The selected predictive features for pCR were cT-stage, cN-stage, response index of SUVmean and maximal tumour diameter during treatment. The models' performances (AUC) were 0.78 (training) and 0.70 (validation). The high probability group for pCR resulted in 100% correct predictions for training and 67% for validation. The model is available on the website www.predictcancer.org. The developed predictive model for pCR is accurate and externally validated. This model may assist in treatment decisions during CRT to select complete responders for a wait-and-see policy, good responders for an extra RT boost and bad responders for additional chemotherapy. Copyright © 2014 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
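
    For illustration, the response-index construction described above can be written out directly; the logistic coefficients below are hypothetical placeholders, not the published nomogram:

```python
import math

# Response index: fractional change in a PET parameter between the
# pre-treatment and intermediate scan, fed into a logistic model for pCR.
# All coefficients are hypothetical placeholders, not the published nomogram.
def response_index(pre: float, mid: float) -> float:
    return (pre - mid) / pre

def pcr_probability(ct_stage: int, cn_stage: int,
                    ri_suvmean: float, max_diam_mm: float) -> float:
    b0, b_ct, b_cn, b_ri, b_d = -2.0, -0.4, -0.5, 2.5, -0.02  # hypothetical
    z = (b0 + b_ct * ct_stage + b_cn * cn_stage
         + b_ri * ri_suvmean + b_d * max_diam_mm)
    return 1.0 / (1.0 + math.exp(-z))

ri = response_index(pre=12.0, mid=6.0)   # SUVmean halved during CRT
print(f"RI = {ri:.2f}, P(pCR) = {pcr_probability(3, 1, ri, 40.0):.2f}")
```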

  6. Further Development of the PCRTM Model and RT Model Inter Comparison

    NASA Technical Reports Server (NTRS)

    Yang, Qiguang; Liu, Xu; Wu, Wan; Kizer, Susan

    2015-01-01

    New results for the development of the PCRTM model will be presented. The new results were used for an IASI retrieval validation intercomparison, and better results were obtained compared with other fast radiative transfer models.

  7. Machine Learning Algorithms Outperform Conventional Regression Models in Predicting Development of Hepatocellular Carcinoma

    PubMed Central

    Singal, Amit G.; Mukherjee, Ashin; Elmunzer, B. Joseph; Higgins, Peter DR; Lok, Anna S.; Zhu, Ji; Marrero, Jorge A; Waljee, Akbar K

    2015-01-01

    Background Predictive models for hepatocellular carcinoma (HCC) have been limited by modest accuracy and lack of validation. Machine learning algorithms offer a novel methodology, which may improve HCC risk prognostication among patients with cirrhosis. Our study's aim was to develop and compare predictive models for HCC development among cirrhotic patients, using conventional regression analysis and machine learning algorithms. Methods We enrolled 442 patients with Child A or B cirrhosis at the University of Michigan between January 2004 and September 2006 (UM cohort) and prospectively followed them until HCC development, liver transplantation, death, or study termination. Regression analysis and machine learning algorithms were used to construct predictive models for HCC development, which were tested on an independent validation cohort from the Hepatitis C Antiviral Long-term Treatment against Cirrhosis (HALT-C) Trial. Both models were also compared to the previously published HALT-C model. Discrimination was assessed using receiver operating characteristic curve analysis and diagnostic accuracy was assessed with net reclassification improvement and integrated discrimination improvement statistics. Results After a median follow-up of 3.5 years, 41 patients developed HCC. The UM regression model had a c-statistic of 0.61 (95%CI 0.56-0.67), whereas the machine learning algorithm had a c-statistic of 0.64 (95%CI 0.60–0.69) in the validation cohort. The machine learning algorithm had significantly better diagnostic accuracy as assessed by net reclassification improvement (p<0.001) and integrated discrimination improvement (p=0.04). The HALT-C model had a c-statistic of 0.60 (95%CI 0.50-0.70) in the validation cohort and was outperformed by the machine learning algorithm (p=0.047). Conclusion Machine learning algorithms improve the accuracy of risk stratifying patients with cirrhosis and can be used to accurately identify patients at high-risk for developing HCC. PMID:24169273
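
    As an illustration of the comparison performed here, the sketch below fits a conventional logistic regression and a machine learning model on synthetic data and compares their c-statistics on a held-out validation set (all data are stand-ins for the study cohorts):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic predictors and outcome; a mild nonlinearity favours the
# machine learning model, mirroring the kind of gap reported above.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(size=1000) > 1.0).astype(int)
X_dev, X_val, y_dev, y_val = train_test_split(X, y, random_state=0)

for name, model in [("regression", LogisticRegression(max_iter=1000)),
                    ("random forest", RandomForestClassifier(random_state=0))]:
    model.fit(X_dev, y_dev)
    auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
    print(f"{name}: c-statistic = {auc:.2f}")
```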

  8. Interpreting Variance Components as Evidence for Reliability and Validity.

    ERIC Educational Resources Information Center

    Kane, Michael T.

    The reliability and validity of measurement is analyzed by a sampling model based on generalizability theory. A model for the relationship between a measurement procedure and an attribute is developed from an analysis of how measurements are used and interpreted in science. The model provides a basis for analyzing the concept of an error of…

  9. Development and validation of a computational model of the knee joint for the evaluation of surgical treatments for osteoarthritis

    PubMed Central

    Mootanah, R.; Imhauser, C.W.; Reisse, F.; Carpanen, D.; Walker, R.W.; Koff, M.F.; Lenhoff, M.W.; Rozbruch, S.R.; Fragomen, A.T.; Dewan, Z.; Kirane, Y.M.; Cheah, Pamela A.; Dowell, J.K.; Hillstrom, H.J.

    2014-01-01

    A three-dimensional (3D) knee joint computational model was developed and validated to predict knee joint contact forces and pressures for different degrees of malalignment. A 3D computational knee model was created from high-resolution radiological images to emulate passive sagittal rotation (full-extension to 65°-flexion) and weight acceptance. A cadaveric knee mounted on a six-degree-of-freedom robot was subjected to matching boundary and loading conditions. A ligament-tuning process minimised kinematic differences between the robotically loaded cadaver specimen and the finite element (FE) model. The model was validated against measured intra-articular forces and pressures. Percent full scale errors between FE-predicted and in vitro-measured values in the medial and lateral compartments were 6.67% and 5.94%, respectively, for normalised peak pressure values, and 7.56% and 4.48%, respectively, for normalised force values. The knee model can accurately predict normalised intra-articular pressure and forces for different loading conditions and could be further developed for subject-specific surgical planning. PMID:24786914

  10. Development and validation of a computational model of the knee joint for the evaluation of surgical treatments for osteoarthritis.

    PubMed

    Mootanah, R; Imhauser, C W; Reisse, F; Carpanen, D; Walker, R W; Koff, M F; Lenhoff, M W; Rozbruch, S R; Fragomen, A T; Dewan, Z; Kirane, Y M; Cheah, K; Dowell, J K; Hillstrom, H J

    2014-01-01

    A three-dimensional (3D) knee joint computational model was developed and validated to predict knee joint contact forces and pressures for different degrees of malalignment. A 3D computational knee model was created from high-resolution radiological images to emulate passive sagittal rotation (full-extension to 65°-flexion) and weight acceptance. A cadaveric knee mounted on a six-degree-of-freedom robot was subjected to matching boundary and loading conditions. A ligament-tuning process minimised kinematic differences between the robotically loaded cadaver specimen and the finite element (FE) model. The model was validated against measured intra-articular forces and pressures. Percent full scale errors between FE-predicted and in vitro-measured values in the medial and lateral compartments were 6.67% and 5.94%, respectively, for normalised peak pressure values, and 7.56% and 4.48%, respectively, for normalised force values. The knee model can accurately predict normalised intra-articular pressure and forces for different loading conditions and could be further developed for subject-specific surgical planning.

  11. Assessing Requirements Quality through Requirements Coverage

    NASA Technical Reports Server (NTRS)

    Rajan, Ajitha; Heimdahl, Mats; Woodham, Kurt

    2008-01-01

    In model-based development, the development effort is centered around a formal description of the proposed software system: the model. This model is derived from some high-level requirements describing the expected behavior of the software. For validation and verification purposes, this model can then be subjected to various types of analysis, for example, completeness and consistency analysis [6], model checking [3], theorem proving [1], and test-case generation [4, 7]. This development paradigm is making rapid inroads in certain industries, e.g., automotive, avionics, space applications, and medical technology. This shift towards model-based development naturally leads to changes in the verification and validation (V&V) process. The model validation problem, that is, determining that the model accurately captures the customer's high-level requirements, has received little attention, and the sufficiency of the validation activities has been largely determined through ad-hoc methods. Since the model serves as the central artifact, its correctness with respect to the user's needs is absolutely crucial. In our investigation, we attempt to answer the following two questions with respect to validation: (1) Are the requirements sufficiently defined for the system? and (2) How well does the model implement the behaviors specified by the requirements? The second question can be addressed using formal verification. Nevertheless, the size and complexity of many industrial systems make formal verification infeasible even if we have a formal model and formalized requirements. Thus, presently, there is no objective way of answering these two questions. To this end, we propose an approach based on testing that, when given a set of formal requirements, explores the relationship between requirements-based structural test-adequacy coverage and model-based structural test-adequacy coverage. The proposed technique uses requirements coverage metrics defined in [9] on formal high-level software requirements and existing model coverage metrics such as the Modified Condition and Decision Coverage (MC/DC) used when testing highly critical software in the avionics industry [8]. Our work is related to Chockler et al. [2], but we base our work on traditional testing techniques as opposed to verification techniques.
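
    As a concrete illustration of the MC/DC criterion mentioned above, the sketch below checks, for a single toy decision, whether a test set contains an independence pair for each condition (a pair of tests differing only in that condition and producing different outcomes):

```python
from itertools import product

# MC/DC for one decision: each condition must be shown to independently
# affect the outcome. The decision and the test set are toy examples.
def decision(a, b, c):
    return (a and b) or c

conditions = ["a", "b", "c"]
tests = list(product([False, True], repeat=3))   # exhaustive, for illustration

for i, name in enumerate(conditions):
    pair_found = any(
        t1[i] != t2[i]                                        # flip condition i
        and all(t1[j] == t2[j] for j in range(3) if j != i)   # hold others fixed
        and decision(*t1) != decision(*t2)                    # outcome changes
        for t1 in tests for t2 in tests
    )
    print(f"condition {name}: independence pair "
          f"{'found' if pair_found else 'missing'}")
```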

  12. Developing and testing a measurement tool for assessing predictors of breakfast consumption based on a health promotion model.

    PubMed

    Dehdari, Tahereh; Rahimi, Tahereh; Aryaeian, Naheed; Gohari, Mahmood Reza; Esfeh, Jabiz Modaresi

    2014-01-01

    To develop an instrument for measuring Health Promotion Model constructs in terms of breakfast consumption, and to identify the constructs that were predictors of breakfast consumption among Iranian female students. A questionnaire on Health Promotion Model variables was developed and potential predictors of breakfast consumption were assessed using this tool. One hundred female students, mean age 13 years (SD ± 1.2 years). Two middle schools from moderate-income areas in Qom, Iran. Health Promotion Model variables were assessed using a 58-item questionnaire. Breakfast consumption was also measured. Internal consistency (Cronbach alpha), content validity index, content validity ratio, multiple linear regression using the stepwise method, and Pearson correlation. Content validity index and content validity ratio scores of the developed scale items were 0.89 and 0.93, respectively. Internal consistencies (range, .74-.91) of subscales were acceptable. Prior related behaviors, perceived barriers, self-efficacy, and competing demands and preferences were the 4 constructs that together predicted 63% of the variance in weekly breakfast frequency among subjects. The instrument developed in this study may be a useful tool for researchers to explore factors affecting breakfast consumption among students. Students with a high level of self-efficacy, more prior related behavior, fewer perceived barriers, and fewer competing demands were most likely to regularly consume breakfast. Copyright © 2014 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.
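
    The two content validity indices reported above follow standard formulas; a minimal sketch, assuming Lawshe's content validity ratio and an item-level content validity index on a 4-point relevance scale:

```python
# Lawshe's content validity ratio (CVR) and an item-level content validity
# index (CVI). The expert ratings below are made-up illustrative data.
def content_validity_ratio(n_essential: int, n_experts: int) -> float:
    """Lawshe's CVR: (ne - N/2) / (N/2)."""
    half = n_experts / 2
    return (n_essential - half) / half

def item_cvi(ratings: list[int]) -> float:
    """Proportion of experts rating the item 3 or 4 on a 4-point scale."""
    return sum(r >= 3 for r in ratings) / len(ratings)

print(content_validity_ratio(n_essential=9, n_experts=10))   # 0.8
print(item_cvi([4, 3, 4, 4, 2, 4, 3, 4, 4, 3]))              # 0.9
```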

  13. Reinforced Carbon-Carbon Subcomponent Flat Plate Impact Testing for Space Shuttle Orbiter Return to Flight

    NASA Technical Reports Server (NTRS)

    Melis, Matthew E.; Brand, Jeremy H.; Pereira, J. Michael; Revilock, Duane M.

    2007-01-01

    Following the tragedy of the Space Shuttle Columbia on February 1, 2003, a major effort commenced to develop a better understanding of debris impacts and their effect on the Space Shuttle subsystems. An initiative to develop and validate physics-based computer models to predict damage from such impacts was a fundamental component of this effort. To develop the models it was necessary to physically characterize Reinforced Carbon-Carbon (RCC) and various debris materials which could potentially shed on ascent and impact the Orbiter RCC leading edges. The validated models enabled the launch system community to use the impact analysis software LS DYNA to predict damage by potential and actual impact events on the Orbiter leading edge and nose cap thermal protection systems. Validation of the material models was done through a three-level approach: fundamental tests to obtain independent static and dynamic material model properties of materials of interest, sub-component impact tests to provide highly controlled impact test data for the correlation and validation of the models, and full-scale impact tests to establish the final level of confidence for the analysis methodology. This paper discusses the second level subcomponent test program in detail and its application to the LS DYNA model validation process. The level two testing consisted of over one hundred impact tests in the NASA Glenn Research Center Ballistic Impact Lab on 6 by 6 in. and 6 by 12 in. flat plates of RCC and evaluated three types of debris projectiles: BX 265 External Tank foam, ice, and PDL 1034 External Tank foam. These impact tests helped determine the level of damage generated in the RCC flat plates by each projectile. The information obtained from this testing validated the LS DYNA damage prediction models and provided a certain level of confidence to begin performing analysis for full-size RCC test articles for returning NASA to flight with STS 114 and beyond.

  14. Active Aeroelastic Wing Aerodynamic Model Development and Validation for a Modified F/A-18A Airplane

    NASA Technical Reports Server (NTRS)

    Cumming, Stephen B.; Diebler, Corey G.

    2005-01-01

    A new aerodynamic model has been developed and validated for a modified F/A-18A airplane used for the Active Aeroelastic Wing (AAW) research program. The goal of the program was to demonstrate the advantages of using the inherent flexibility of an aircraft to enhance its performance. The research airplane was an F/A-18A with wings modified to reduce stiffness and a new control system to increase control authority. There have been two flight phases. Data gathered from the first flight phase were used to create the new aerodynamic model. A maximum-likelihood output-error parameter estimation technique was used to obtain stability and control derivatives. The derivatives were incorporated into the National Aeronautics and Space Administration F-18 simulation, validated, and used to develop new AAW control laws. The second phase of flights was used to evaluate the handling qualities of the AAW airplane and the control law design process, and to further test the accuracy of the new model. The flight test envelope covered Mach numbers between 0.85 and 1.30 and dynamic pressures from 600 to 1250 pound-force per square foot. The results presented in this report demonstrate that a thorough parameter identification analysis can be used to improve upon models that were developed using other means. This report describes the parameter estimation technique used, details the validation techniques, discusses differences between previously existing F/A-18 models, and presents results from the second phase of research flights.

  15. Predicting chemically-induced skin reactions. Part I: QSAR models of skin sensitization and their application to identify potentially hazardous compounds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alves, Vinicius M.; Muratov, Eugene

    Repetitive exposure to a chemical agent can induce an immune reaction in inherently susceptible individuals that leads to skin sensitization. Although many chemicals have been reported as skin sensitizers, there have been very few rigorously validated QSAR models with defined applicability domains (AD) that were developed using a large group of chemically diverse compounds. In this study, we have aimed to compile, curate, and integrate the largest publicly available dataset related to chemically-induced skin sensitization, use this data to generate rigorously validated QSAR models for skin sensitization, and employ these models as a virtual screening tool for identifying putative sensitizers among environmental chemicals. We followed best practices for model building and validation implemented with our predictive QSAR workflow using the Random Forest modeling technique in combination with SiRMS and Dragon descriptors. The Correct Classification Rate (CCR) for QSAR models discriminating sensitizers from non-sensitizers was 71–88% when evaluated on several external validation sets, within a broad AD, with positive (for sensitizers) and negative (for non-sensitizers) predicted rates of 85% and 79%, respectively. When compared to the skin sensitization module included in the OECD QSAR Toolbox as well as to the skin sensitization model in publicly available VEGA software, our models showed a significantly higher prediction accuracy for the same sets of external compounds as evaluated by Positive Predicted Rate, Negative Predicted Rate, and CCR. These models were applied to identify putative chemical hazards in the Scorecard database of possible skin or sense organ toxicants as primary candidates for experimental validation. Highlights: • The largest publicly available skin sensitization dataset was compiled. • Predictive QSAR models were developed for skin sensitization. • The developed models have higher prediction accuracy than the OECD QSAR Toolbox. • Putative chemical hazards in the Scorecard database were identified using our models.
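
    For illustration, the sketch below mirrors one step of such a workflow under stated assumptions: a Random Forest classifier scored by Correct Classification Rate (CCR, the mean of sensitivity and specificity) under cross-validation, with random numbers standing in for SiRMS/Dragon descriptors:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score  # equals CCR for 2 classes
from sklearn.model_selection import cross_val_predict

# Synthetic descriptor matrix and labels standing in for curated QSAR data.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 100))                  # stand-in descriptor matrix
y = (X[:, :5].sum(axis=1) > 0).astype(int)       # 1 = sensitizer (synthetic)

model = RandomForestClassifier(n_estimators=500, random_state=1)
y_pred = cross_val_predict(model, X, y, cv=5)    # 5-fold cross-validation
print(f"CCR = {balanced_accuracy_score(y, y_pred):.2f}")
```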

  16. Development and Validation of Two Scales to Measure Elaboration and Behaviors Associated with Stewardship in Children

    ERIC Educational Resources Information Center

    Vezeau, Susan Lynn; Powell, Robert B.; Stern, Marc J.; Moore, D. DeWayne; Wright, Brett A.

    2017-01-01

    This investigation examines the development of two scales that measure elaboration and behaviors associated with stewardship in children. The scales were developed using confirmatory factor analysis to investigate their construct validity, reliability, and psychometric properties. Results suggest that a second-order factor model structure provides…

  17. Modeling and validating HL7 FHIR profiles using semantic web Shape Expressions (ShEx).

    PubMed

    Solbrig, Harold R; Prud'hommeaux, Eric; Grieve, Grahame; McKenzie, Lloyd; Mandel, Joshua C; Sharma, Deepak K; Jiang, Guoqian

    2017-03-01

    HL7 Fast Healthcare Interoperability Resources (FHIR) is an emerging open standard for the exchange of electronic healthcare information. FHIR resources are defined in a specialized modeling language. FHIR instances can currently be represented in either XML or JSON. The FHIR and Semantic Web communities are developing a third FHIR instance representation format in Resource Description Framework (RDF). Shape Expressions (ShEx), a formal RDF data constraint language, is a candidate for describing and validating the FHIR RDF representation. Create a FHIR to ShEx model transformation and assess its ability to describe and validate FHIR RDF data. We created the methods and tools that generate the ShEx schemas modeling the FHIR to RDF specification being developed by HL7 ITS/W3C RDF Task Force, and evaluated the applicability of ShEx in the description and validation of FHIR to RDF transformations. The ShEx models contributed significantly to workgroup consensus. Algorithmic transformations from the FHIR model to ShEx schemas and FHIR example data to RDF transformations were incorporated into the FHIR build process. ShEx schemas representing 109 FHIR resources were used to validate 511 FHIR RDF data examples from the Standards for Trial Use (STU 3) Ballot version. We were able to uncover unresolved issues in the FHIR to RDF specification and detect 10 types of errors and root causes in the actual implementation. The FHIR ShEx representations have been included in the official FHIR web pages for the STU 3 Ballot version since September 2016. ShEx can be used to define and validate the syntax of a FHIR resource, which is complementary to the use of RDF Schema (RDFS) and Web Ontology Language (OWL) for semantic validation. ShEx proved useful for describing a standard model of FHIR RDF data. The combination of a formal model and a succinct format enabled comprehensive review and automated validation. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Empirical models for predicting volumes of sediment deposited by debris flows and sediment-laden floods in the transverse ranges of southern California

    USGS Publications Warehouse

    Gartner, Joseph E.; Cannon, Susan H.; Santi, Paul M

    2014-01-01

    Debris flows and sediment-laden floods in the Transverse Ranges of southern California pose severe hazards to nearby communities and infrastructure. Frequent wildfires denude hillslopes and increase the likelihood of these hazardous events. Debris-retention basins protect communities and infrastructure from the impacts of debris flows and sediment-laden floods and also provide critical data for volumes of sediment deposited at watershed outlets. In this study, we supplement existing data for the volumes of sediment deposited at watershed outlets with newly acquired data to develop new empirical models for predicting volumes of sediment produced by watersheds located in the Transverse Ranges of southern California. The sediment volume data represent a broad sample of conditions found in Ventura, Los Angeles and San Bernardino Counties, California. The measured volumes of sediment, watershed morphology, distributions of burn severity within each watershed, the time since the most recent fire, triggering storm rainfall conditions, and engineering soil properties were analyzed using multiple linear regressions to develop two models. A “long-term model” was developed for predicting volumes of sediment deposited by both debris flows and floods at various times since the most recent fire from a database of volumes of sediment deposited by a combination of debris flows and sediment-laden floods with no time limit since the most recent fire (n = 344). A subset of this database was used to develop an “emergency assessment model” for predicting volumes of sediment deposited by debris flows within two years of a fire (n = 92). Prior to developing the models, 32 volumes of sediment, and related parameters for watershed morphology, burn severity and rainfall conditions were retained to independently validate the long-term model. Ten of these volumes of sediment were deposited by debris flows within two years of a fire and were used to validate the emergency assessment model. The models were validated by comparing predicted and measured volumes of sediment. These validations were also performed for previously developed models and identify that the models developed here best predict volumes of sediment for burned watersheds in comparison to previously developed models.
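
    A minimal sketch of the regression form typical of such models, regressing log sediment volume on watershed and storm predictors; the predictor set, coefficients, and data are hypothetical illustrations, not the published models:

```python
import numpy as np

# Multiple linear regression of log sediment volume on watershed and storm
# predictors. Everything here is synthetic illustration, not the study data.
rng = np.random.default_rng(2)
n = 300
area_km2 = rng.uniform(0.1, 25.0, n)      # watershed area
burn_frac = rng.uniform(0.0, 1.0, n)      # fraction burned at high severity
rain_mm_h = rng.uniform(5.0, 60.0, n)     # peak storm rainfall intensity
log_v = (2.0 + 0.8 * np.log10(area_km2) + 1.2 * burn_frac
         + 0.02 * rain_mm_h + rng.normal(0.0, 0.3, n))   # synthetic response

X = np.column_stack([np.ones(n), np.log10(area_km2), burn_frac, rain_mm_h])
beta, *_ = np.linalg.lstsq(X, log_v, rcond=None)
print("fitted coefficients:", np.round(beta, 3))
```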

  19. The Model Human Processor and the Older Adult: Parameter Estimation and Validation Within a Mobile Phone Task

    PubMed Central

    Jastrzembski, Tiffany S.; Charness, Neil

    2009-01-01

    The authors estimate weighted mean values for nine information processing parameters for older adults using the Card, Moran, and Newell (1983) Model Human Processor model. The authors validate a subset of these parameters by modeling two mobile phone tasks using two different phones and comparing model predictions to a sample of younger (N = 20; Mage = 20) and older (N = 20; Mage = 69) adults. Older adult models fit keystroke-level performance at the aggregate grain of analysis extremely well (R = 0.99) and produced equivalent fits to previously validated younger adult models. Critical path analyses highlighted points of poor design as a function of cognitive workload, hardware/software design, and user characteristics. The findings demonstrate that estimated older adult information processing parameters are valid for modeling purposes, can help designers understand age-related performance using existing interfaces, and may support the development of age-sensitive technologies. PMID:18194048

  20. The Model Human Processor and the older adult: parameter estimation and validation within a mobile phone task.

    PubMed

    Jastrzembski, Tiffany S; Charness, Neil

    2007-12-01

    The authors estimate weighted mean values for nine information processing parameters for older adults using the Card, Moran, and Newell (1983) Model Human Processor model. The authors validate a subset of these parameters by modeling two mobile phone tasks using two different phones and comparing model predictions to a sample of younger (N = 20; Mage = 20) and older (N = 20; Mage = 69) adults. Older adult models fit keystroke-level performance at the aggregate grain of analysis extremely well (R = 0.99) and produced equivalent fits to previously validated younger adult models. Critical path analyses highlighted points of poor design as a function of cognitive workload, hardware/software design, and user characteristics. The findings demonstrate that estimated older adult information processing parameters are valid for modeling purposes, can help designers understand age-related performance using existing interfaces, and may support the development of age-sensitive technologies.
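
    The keystroke-level predictions validated in these two records reduce to summing operator times along a task sequence. A minimal sketch with hypothetical operator values, not the papers' estimated parameters:

```python
# Keystroke-level model: task time as a sum of operator times.
# Operator values are illustrative placeholders, not the estimated parameters.
YOUNGER = {"keystroke": 0.17, "mental": 1.35}   # seconds, hypothetical
OLDER   = {"keystroke": 0.26, "mental": 1.80}   # seconds, hypothetical

task = ["mental"] + ["keystroke"] * 10          # dial a 10-digit number

for label, params in (("younger", YOUNGER), ("older", OLDER)):
    total = sum(params[op] for op in task)
    print(f"{label} adult predicted task time: {total:.2f} s")
```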

  1. NC truck network model development research.

    DOT National Transportation Integrated Search

    2008-09-01

    This research develops a validated prototype truck traffic network model for North Carolina. The model includes all counties and metropolitan areas of North Carolina and major economic areas throughout the U.S. Geographic boundaries, population a...

  2. Development and Validation of a Polarimetric-MCScene 3D Atmospheric Radiation Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berk, Alexander; Hawes, Frederick; Fox, Marsha

    2016-03-15

    Polarimetric measurements can substantially enhance the ability of both spectrally resolved and single band imagery to detect the proliferation of weapons of mass destruction, providing data for locating and identifying facilities, materials, and processes of undeclared and proliferant nuclear weapons programs worldwide. Unfortunately, models do not exist that efficiently and accurately predict spectral polarized signatures for the materials of interest embedded in complex 3D environments. Having such a model would enable one to test hypotheses and optimize both the enhancement of scene contrast and the signal processing for spectral signature extraction. The Phase I set the groundwork for development of fully validated polarimetric spectral signature and scene simulation models. This has been accomplished: 1. by (a) identifying and downloading state-of-the-art surface and atmospheric polarimetric data sources, (b) implementing tools for generating custom polarimetric data, and (c) identifying and requesting US Government funded field measurement data for use in validation; 2. by formulating an approach for upgrading the radiometric spectral signature model MODTRAN to generate polarimetric intensities through (a) ingestion of the polarimetric data, (b) polarimetric vectorization of existing MODTRAN modules, and (c) integration of a newly developed algorithm for computing polarimetric multiple scattering contributions; 3. by generating an initial polarimetric model that demonstrates calculation of polarimetric solar and lunar single scatter intensities arising from the interaction of incoming irradiances with molecules and aerosols; 4. by developing a design and implementation plan to (a) automate polarimetric scene construction and (b) efficiently sample polarimetric scattering and reflection events, for use in a to-be-developed polarimetric version of the existing first-principles synthetic scene simulation model, MCScene; and 5. by planning a validation field measurement program in collaboration with the Remote Sensing and Exploitation group at Sandia National Laboratories (SNL), in which data from their ongoing polarimetric field and laboratory measurement program will be shared and, to the extent allowed, tailored for model validation in exchange for model predictions under conditions and for geometries outside of their measurement domain.

  3. Cross-validation of an employee safety climate model in Malaysia.

    PubMed

    Bahari, Siti Fatimah; Clarke, Sharon

    2013-06-01

    Whilst substantial research has investigated the nature of safety climate, and its importance as a leading indicator of organisational safety, much of this research has been conducted with Western industrial samples. The current study focuses on the cross-validation of a safety climate model in the non-Western industrial context of Malaysian manufacturing. The first-order factorial validity of Cheyne et al.'s (1998) [Cheyne, A., Cox, S., Oliver, A., Tomas, J.M., 1998. Modelling safety climate in the prediction of levels of safety activity. Work and Stress, 12(3), 255-271] model was tested, using confirmatory factor analysis, in a Malaysian sample. Results showed that the model fit indices were below accepted levels, indicating that the original Cheyne et al. (1998) safety climate model was not supported. An alternative three-factor model was developed using exploratory factor analysis. Although these findings are not consistent with previously reported cross-validation studies, we argue that previous studies have focused on validation across Western samples, and that the current study demonstrates the need to take account of cultural factors in the development of safety climate models intended for use in non-Western contexts. The results have important implications for the transferability of existing safety climate models across cultures (for example, in global organisations) and highlight the need for future research to examine cross-cultural issues in relation to safety climate. Copyright © 2013 National Safety Council and Elsevier Ltd. All rights reserved.

  4. Derivation and validation of in-hospital mortality prediction models in ischaemic stroke patients using administrative data.

    PubMed

    Lee, Jason; Morishima, Toshitaka; Kunisawa, Susumu; Sasaki, Noriko; Otsubo, Tetsuya; Ikai, Hiroshi; Imanaka, Yuichi

    2013-01-01

    Stroke and other cerebrovascular diseases are a major cause of death and disability. Predicting in-hospital mortality in ischaemic stroke patients can help to identify high-risk patients and guide treatment approaches. Chart reviews provide important clinical information for mortality prediction, but are laborious and limited in sample size. Administrative data allow for large-scale multi-institutional analyses but lack the necessary clinical information for outcome research. However, administrative claims data in Japan have seen the recent inclusion of patient consciousness and disability information, which may allow more accurate mortality prediction using administrative data alone. The aim of this study was to derive and validate models to predict in-hospital mortality in patients admitted for ischaemic stroke using administrative data. The sample consisted of 21,445 patients from 176 Japanese hospitals, who were randomly divided into derivation and validation subgroups. Multivariable logistic regression models were developed using 7- and 30-day and overall in-hospital mortality as dependent variables. Independent variables included patient age, sex, comorbidities upon admission, Japan Coma Scale (JCS) score, Barthel Index score, modified Rankin Scale (mRS) score, and admissions after hours and on weekends/public holidays. Models were developed in the derivation subgroup, and coefficients from these models were applied to the validation subgroup. Predictive ability was analysed using C-statistics; calibration was evaluated with Hosmer-Lemeshow χ² tests. All three models showed predictive abilities similar to or surpassing those of chart review-based models. The C-statistics were highest in the 7-day in-hospital mortality prediction model, at 0.906 and 0.901 in the derivation and validation subgroups, respectively. For the 30-day in-hospital mortality prediction models, the C-statistics for the derivation and validation subgroups were 0.893 and 0.872, respectively; in overall in-hospital mortality prediction these values were 0.883 and 0.876. In this study, we have derived and validated in-hospital mortality prediction models for three different time spans using a large population of ischaemic stroke patients in a multi-institutional analysis. The recent inclusion of JCS, Barthel Index, and mRS scores in Japanese administrative data has allowed the prediction of in-hospital mortality with accuracy comparable to that of chart review analyses. The models developed using administrative data had consistently high predictive abilities for all models in both the derivation and validation subgroups. These results have implications in the role of administrative data in future mortality prediction analyses. Copyright © 2013 S. Karger AG, Basel.
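
    Calibration by the Hosmer-Lemeshow χ² test, as used above, can be computed directly from predicted risks and outcomes; a minimal sketch on synthetic data, assuming the common decile grouping with g − 2 degrees of freedom:

```python
import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(y, p, groups=10):
    """Decile-based HL statistic: sum of (O - E)^2 / (n * pbar * (1 - pbar))."""
    order = np.argsort(p)
    y, p = y[order], p[order]
    stat = 0.0
    for idx in np.array_split(np.arange(len(y)), groups):
        obs, exp, n = y[idx].sum(), p[idx].sum(), len(idx)
        stat += (obs - exp) ** 2 / (exp * (1 - exp / n) + 1e-12)
    return stat, chi2.sf(stat, groups - 2)

# Synthetic predicted risks and outcomes drawn at exactly those risks,
# so the model is well calibrated by construction (expect a large p-value).
rng = np.random.default_rng(3)
p = rng.uniform(0.01, 0.5, 2000)
y = (rng.uniform(size=2000) < p).astype(int)
stat, pval = hosmer_lemeshow(y, p)
print(f"HL chi2 = {stat:.1f}, p = {pval:.2f}")
```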

  5. Development and validation of a nursing professionalism evaluation model in a career ladder system.

    PubMed

    Kim, Yeon Hee; Jung, Young Sun; Min, Ja; Song, Eun Young; Ok, Jung Hui; Lim, Changwon; Kim, Kyunghee; Kim, Ji-Su

    2017-01-01

    The clinical ladder system categorizes degrees of nursing professionalism and the associated rewards, and is an important human resource tool for nursing management. We developed a model to evaluate nursing professionalism, which determines clinical ladder system levels, and verified its validity. Data were collected using a clinical competence tool developed in this study, together with existing methods such as the nursing professionalism evaluation tool, peer reviews, and face-to-face interviews, to evaluate promotions and verify the presented content in a medical institution. Reliability and convergent and discriminant validity of the clinical competence evaluation tool were verified using SmartPLS software. The validity of the model for evaluating overall nursing professionalism was also analyzed. Clinical competence was determined by five dimensions of nursing practice: scientific, technical, ethical, aesthetic, and existential. The structural model explained 66% of the variance. Clinical competence scales, peer reviews, and face-to-face interviews directly determined nursing professionalism levels. The evaluation system can be used for evaluating nurses' professionalism in actual medical institutions from a nursing practice perspective. A conceptual framework for establishing a human resources management system for nurses and a tool for evaluating nursing professionalism at medical institutions is provided.

  6. Kohlberg's Moral Development Model: Cohort Influences on Validity.

    ERIC Educational Resources Information Center

    Bechtel, Ashleah

    An overview of Kohlberg's theory of moral development is presented; three interviews regarding the theory are reported, and the author's own moral development is compared to the model; finally, a critique of the theory is addressed along with recommendations for future enhancement. Lawrence Kohlberg's model of moral development, also referred to…

  7. Dynamic modelling and experimental validation of three wheeled tilting vehicles

    NASA Astrophysics Data System (ADS)

    Amati, Nicola; Festini, Andrea; Pelizza, Luigi; Tonoli, Andrea

    2011-06-01

    The present paper describes the study of the stability in the straight running of a three-wheeled tilting vehicle for urban and sub-urban mobility. The analysis was carried out by developing a multibody model in the Matlab/Simulink SimMechanics environment. An Adams-Motorcycle model and an equivalent analytical model were developed for cross-validation and for highlighting the similarities with the lateral dynamics of motorcycles. Field tests were carried out to validate the model and identify some critical parameters, such as the damping on the steering system. The stability analysis demonstrates that the lateral dynamic motions are characterised by vibration modes that are similar to those of a motorcycle. Additionally, it shows that the wobble mode is significantly affected by the castor trail, whereas it is only slightly affected by the dynamics of the front suspension. For the present case study, the frame compliance also has no influence on the weave and wobble.
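
    The straight-running stability analysis described here amounts to inspecting the eigenvalues of a linearised state matrix as speed varies. A minimal sketch with a toy 4-state matrix whose entries are hypothetical, not the paper's identified vehicle parameters:

```python
import numpy as np

# Straight-running stability: eigenvalues of a linearised state matrix as a
# function of forward speed. The matrix below is a toy parameterisation.
def state_matrix(v):
    """Hypothetical 4-state lateral-dynamics matrix at speed v (m/s)."""
    return np.array([[0.0,  1.0,      0.0,   0.0],
                     [9.0, -0.5 * v, -2.0,  -0.1 * v],
                     [0.0,  0.0,      0.0,   1.0],
                     [4.0,  0.2 * v, -30.0, -1.5]])

for v in (5.0, 15.0, 30.0):
    eig = np.linalg.eigvals(state_matrix(v))
    stable = bool(np.all(eig.real < 0))   # all modes damped?
    print(f"v = {v:4.1f} m/s  max Re(lambda) = {eig.real.max():6.2f}  "
          f"{'stable' if stable else 'unstable'}")
```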

  8. Jason Jonkman | NREL

    Science.gov Websites

    Jason Jonkman of NREL guides projects aimed at developing, verifying, validating, and applying simulation models for land-based and offshore wind turbines. He is the principal investigator for a DOE-funded project to improve the modeling of offshore floating wind system dynamics.

  9. Avoiding and identifying errors in health technology assessment models: qualitative study and methodological review.

    PubMed

    Chilcott, J; Tappenden, P; Rawdin, A; Johnson, M; Kaltenthaler, E; Paisley, S; Papaioannou, D; Shippam, A

    2010-05-01

    Health policy decisions must be relevant, evidence-based and transparent. Decision-analytic modelling supports this process but its role is reliant on its credibility. Errors in mathematical decision models or simulation exercises are unavoidable but little attention has been paid to processes in model development. Numerous error avoidance/identification strategies could be adopted but it is difficult to evaluate the merits of strategies for improving the credibility of models without first developing an understanding of error types and causes. The study aims to describe the current comprehension of errors in the HTA modelling community and generate a taxonomy of model errors. Four primary objectives are to: (1) describe the current understanding of errors in HTA modelling; (2) understand current processes applied by the technology assessment community for avoiding errors in development, debugging and critically appraising models for errors; (3) use HTA modellers' perceptions of model errors with the wider non-HTA literature to develop a taxonomy of model errors; and (4) explore potential methods and procedures to reduce the occurrence of errors in models. It also describes the model development process as perceived by practitioners working within the HTA community. A methodological review was undertaken using an iterative search methodology. Exploratory searches informed the scope of interviews; later searches focused on issues arising from the interviews. Searches were undertaken in February 2008 and January 2009. In-depth qualitative interviews were performed with 12 HTA modellers from academic and commercial modelling sectors. All qualitative data were analysed using the Framework approach. Descriptive and explanatory accounts were used to interrogate the data within and across themes and subthemes: organisation, roles and communication; the model development process; definition of error; types of model error; strategies for avoiding errors; strategies for identifying errors; and barriers and facilitators. There was no common language in the discussion of modelling errors and there was inconsistency in the perceived boundaries of what constitutes an error. Asked about the definition of model error, there was a tendency for interviewees to exclude matters of judgement from being errors and focus on 'slips' and 'lapses', but discussion of slips and lapses comprised less than 20% of the discussion on types of errors. Interviewees devoted 70% of the discussion to softer elements of the process of defining the decision question and conceptual modelling, mostly the realms of judgement, skills, experience and training. The original focus concerned model errors, but it may be more useful to refer to modelling risks. Several interviewees discussed concepts of validation and verification, with notable consistency in interpretation: verification meaning the process of ensuring that the computer model correctly implemented the intended model, whereas validation means the process of ensuring that a model is fit for purpose. Methodological literature on verification and validation of models makes reference to the Hermeneutic philosophical position, highlighting that the concept of model validation should not be externalized from the decision-makers and the decision-making process. 
Interviewees demonstrated examples of all major error types identified in the literature: errors in the description of the decision problem, in model structure, in use of evidence, in implementation of the model, in operation of the model, and in presentation and understanding of results. The HTA error classifications were compared against existing classifications of model errors in the literature. A range of techniques and processes are currently used to avoid errors in HTA models: engaging with clinical experts, clients and decision-makers to ensure mutual understanding, producing written documentation of the proposed model, explicit conceptual modelling, stepping through skeleton models with experts, ensuring transparency in reporting, adopting standard housekeeping techniques, and ensuring that those parties involved in the model development process have sufficient and relevant training. Clarity and mutual understanding were identified as key issues. However, their current implementation is not framed within an overall strategy for structuring complex problems. Some of the questioning may have biased interviewees' responses, but as all interviewees were represented in the analysis, no rebalancing of the report was deemed necessary. A potential weakness of the literature review was its focus on spreadsheet and program development rather than specifically on model development. It should also be noted that the identified literature concerning programming errors was very narrow despite broad searches being undertaken. Published definitions of overall model validity, comprising conceptual model validation, verification of the computer model, and operational validity of the use of the model in addressing the real-world problem, are consistent with the views expressed by the HTA community and are therefore recommended as the basis for further discussions of model credibility. Such discussions should focus on risks, including errors of implementation, errors in matters of judgement and violations. Discussions of modelling risks should reflect the potentially complex network of cognitive breakdowns that lead to errors in models, and existing research on the cognitive basis of human error should be included in an examination of modelling errors. There is a need to develop a better understanding of the skills requirements for the development, operation and use of HTA models. Interaction between modeller and client in developing mutual understanding of a model establishes that model's significance and its warranty. This highlights that model credibility is the central concern of decision-makers using models, so it is crucial that the concept of model validation should not be externalized from the decision-makers and the decision-making process. Recommendations for future research would be studies of verification and validation; the model development process; and identification of modifications to the modelling process with the aim of preventing the occurrence of errors and improving the identification of errors in models.

  10. A score to estimate the likelihood of detecting advanced colorectal neoplasia at colonoscopy

    PubMed Central

    Kaminski, Michal F; Polkowski, Marcin; Kraszewska, Ewa; Rupinski, Maciej; Butruk, Eugeniusz; Regula, Jaroslaw

    2014-01-01

    Objective This study aimed to develop and validate a model to estimate the likelihood of detecting advanced colorectal neoplasia in Caucasian patients. Design We performed a cross-sectional analysis of database records for 40-year-old to 66-year-old patients who entered a national primary colonoscopy-based screening programme for colorectal cancer in 73 centres in Poland in the year 2007. We used multivariate logistic regression to investigate the associations between clinical variables and the presence of advanced neoplasia in a randomly selected test set, and confirmed the associations in a validation set. We used model coefficients to develop a risk score for detection of advanced colorectal neoplasia. Results Advanced colorectal neoplasia was detected in 2544 of the 35 918 included participants (7.1%). In the test set, a logistic-regression model showed that independent risk factors for advanced colorectal neoplasia were: age, sex, family history of colorectal cancer, cigarette smoking (p<0.001 for these four factors), and Body Mass Index (p=0.033). In the validation set, the model was well calibrated (ratio of expected to observed risk of advanced neoplasia: 1.00 (95% CI 0.95 to 1.06)) and had moderate discriminatory power (c-statistic 0.62). We developed a score that estimated the likelihood of detecting advanced neoplasia in the validation set, from 1.32% for patients scoring 0, to 19.12% for patients scoring 7–8. Conclusions Developed and internally validated score consisting of simple clinical factors successfully estimates the likelihood of detecting advanced colorectal neoplasia in asymptomatic Caucasian patients. Once externally validated, it may be useful for counselling or designing primary prevention studies. PMID:24385598

  11. A score to estimate the likelihood of detecting advanced colorectal neoplasia at colonoscopy.

    PubMed

    Kaminski, Michal F; Polkowski, Marcin; Kraszewska, Ewa; Rupinski, Maciej; Butruk, Eugeniusz; Regula, Jaroslaw

    2014-07-01

    This study aimed to develop and validate a model to estimate the likelihood of detecting advanced colorectal neoplasia in Caucasian patients. We performed a cross-sectional analysis of database records for 40-year-old to 66-year-old patients who entered a national primary colonoscopy-based screening programme for colorectal cancer in 73 centres in Poland in the year 2007. We used multivariate logistic regression to investigate the associations between clinical variables and the presence of advanced neoplasia in a randomly selected test set, and confirmed the associations in a validation set. We used model coefficients to develop a risk score for detection of advanced colorectal neoplasia. Advanced colorectal neoplasia was detected in 2544 of the 35,918 included participants (7.1%). In the test set, a logistic-regression model showed that independent risk factors for advanced colorectal neoplasia were: age, sex, family history of colorectal cancer, cigarette smoking (p<0.001 for these four factors), and Body Mass Index (p=0.033). In the validation set, the model was well calibrated (ratio of expected to observed risk of advanced neoplasia: 1.00 (95% CI 0.95 to 1.06)) and had moderate discriminatory power (c-statistic 0.62). We developed a score that estimated the likelihood of detecting advanced neoplasia in the validation set, from 1.32% for patients scoring 0, to 19.12% for patients scoring 7-8. Developed and internally validated score consisting of simple clinical factors successfully estimates the likelihood of detecting advanced colorectal neoplasia in asymptomatic Caucasian patients. Once externally validated, it may be useful for counselling or designing primary prevention studies. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
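
    Scores of this kind map a handful of clinical factors to points and then to a predicted probability. A minimal sketch, with hypothetical point weights and calibration coefficients rather than the published scoring system:

```python
import math

# An additive clinical risk score mapped to a probability via a logistic
# calibration. Points and coefficients are hypothetical placeholders.
POINTS = {"age_60_66": 2, "male": 1, "family_history": 1,
          "current_smoker": 2, "bmi_30_plus": 1}              # hypothetical

def score(patient: dict) -> int:
    return sum(pts for key, pts in POINTS.items() if patient.get(key))

def predicted_risk(total_points: int,
                   b0: float = -4.3, b_per_point: float = 0.45) -> float:
    """Map score to probability (calibration values hypothetical)."""
    z = b0 + b_per_point * total_points
    return 1.0 / (1.0 + math.exp(-z))

patient = {"age_60_66": True, "male": True, "current_smoker": True}
s = score(patient)
print(f"score = {s}, estimated risk = {predicted_risk(s):.1%}")
```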

  12. TEAMS Model Analyzer

    NASA Technical Reports Server (NTRS)

    Tijidjian, Raffi P.

    2010-01-01

    The TEAMS model analyzer is a supporting tool developed to work with models created with TEAMS (Testability, Engineering, and Maintenance System), which was developed by QSI. In an effort to reduce the time spent in the manual process that each TEAMS modeler must perform in the preparation of reporting for model reviews, a new tool has been developed as an aid to models developed in TEAMS. The software allows for the viewing, reporting, and checking of TEAMS models that are checked into the TEAMS model database. The software allows the user to selectively view models in a hierarchical tree outline that displays the components, failure modes, and ports. The reporting features allow the user to quickly gather statistics about the model, and generate an input/output report pertaining to all of the components. Rules can be automatically validated against the model, with a report generated containing any resulting inconsistencies. In addition to reducing manual effort, this software also provides an automated process framework for the Verification and Validation (V&V) effort that will follow development of these models. The aid of such an automated tool would have a significant impact on the V&V process.

  13. A new method for assessing content validity in model-based creation and iteration of eHealth interventions.

    PubMed

    Kassam-Adams, Nancy; Marsac, Meghan L; Kohser, Kristen L; Kenardy, Justin A; March, Sonja; Winston, Flaura K

    2015-04-15

    The advent of eHealth interventions to address psychological concerns and health behaviors has created new opportunities, including the ability to optimize the effectiveness of intervention activities and then deliver these activities consistently to a large number of individuals in need. Given that eHealth interventions grounded in a well-delineated theoretical model for change are more likely to be effective and that eHealth interventions can be costly to develop, assuring the match of final intervention content and activities to the underlying model is a key step. We propose to apply the concept of "content validity" as a crucial checkpoint to evaluate the extent to which proposed intervention activities in an eHealth intervention program are valid (eg, relevant and likely to be effective) for the specific mechanism of change that each is intended to target and the intended target population for the intervention. The aims of this paper are to define content validity as it applies to model-based eHealth intervention development, to present a feasible method for assessing content validity in this context, and to describe the implementation of this new method during the development of a Web-based intervention for children. We designed a practical 5-step method for assessing content validity in eHealth interventions that includes defining key intervention targets, delineating intervention activity-target pairings, identifying experts and using a survey tool to gather expert ratings of the relevance of each activity to its intended target, its likely effectiveness in achieving the intended target, and its appropriateness with a specific intended audience, and then using quantitative and qualitative results to identify intervention activities that may need modification. We applied this method during our development of the Coping Coach Web-based intervention for school-age children. In the evaluation of Coping Coach content validity, 15 experts from five countries rated each of 15 intervention activity-target pairings. Based on quantitative indices, content validity was excellent for relevance and good for likely effectiveness and age-appropriateness. Two intervention activities had item-level indicators that suggested the need for further review and potential revision by the development team. This project demonstrated that assessment of content validity can be straightforward and feasible to implement and that results of this assessment provide useful information for ongoing development and iterations of new eHealth interventions, complementing other sources of information (eg, user feedback, effectiveness evaluations). This approach can be utilized at one or more points during the development process to guide ongoing optimization of eHealth interventions.

  14. Development and Validation of Accident Models for FeCrAl Cladding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gamble, Kyle Allan Lawrence; Hales, Jason Dean

    2016-08-01

    The purpose of this milestone report is to present the work completed with regard to material model development for FeCrAl cladding and to highlight the results of applying these models to Loss of Coolant Accident (LOCA) and Station Blackout (SBO) scenarios. With the limited experimental data available (essentially only the data used to create the models), true validation is not possible. In the absence of an alternative, qualitative comparisons during postulated accident scenarios between FeCrAl- and Zircaloy-4-cladded rods have been completed, demonstrating the superior performance of FeCrAl.

  15. Validation of a Best-Fit Pharmacokinetic Model for Scopolamine Disposition after Intranasal Administration

    NASA Technical Reports Server (NTRS)

    Wu, L.; Chow, D. S-L.; Tam, V.; Putcha, L.

    2015-01-01

    An intranasal gel formulation of scopolamine (INSCOP) was developed for the treatment of motion sickness. Bioavailability and pharmacokinetics (PK) were determined per Investigational New Drug (IND) evaluation guidance from the Food and Drug Administration. Earlier, we reported the development of a PK model that can predict the relationship between plasma, saliva, and urinary scopolamine (SCOP) concentrations using data collected from an IND clinical trial with INSCOP. This data analysis project is designed to validate the reported best-fit PK model for SCOP by comparing observed and model-predicted SCOP concentration-time profiles after administration of INSCOP.
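
    A minimal sketch of the kind of model being validated, assuming a one-compartment, first-order absorption (Bateman) form with illustrative parameter values rather than the fitted INSCOP estimates:

```python
import numpy as np

# One-compartment, first-order absorption PK model (Bateman function).
# Parameter values are illustrative, not the INSCOP model's estimates.
def bateman(t, dose_ug, f=0.8, ka=1.5, ke=0.3, v_l=300.0):
    """Plasma concentration (ug/L) at times t (h) after an oral-type dose."""
    return (f * dose_ug * ka) / (v_l * (ka - ke)) * \
           (np.exp(-ke * t) - np.exp(-ka * t))

t = np.linspace(0.0, 12.0, 49)                 # hours post-dose
c_pred = bateman(t, dose_ug=400.0)
print(f"Cmax ~ {c_pred.max():.2f} ug/L at t = {t[c_pred.argmax()]:.2f} h")
```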

  16. Object-Oriented Modeling of an Energy Harvesting System Based on Thermoelectric Generators

    NASA Astrophysics Data System (ADS)

    Nesarajah, Marco; Frey, Georg

    This paper deals with the modeling of an energy harvesting system based on thermoelectric generators (TEGs), and the validation of the model by means of a test bench. TEGs can improve the overall energy efficiency of energy systems, e.g. combustion engines or heating systems, by using the remaining waste heat to generate electrical power. Previously, a component-oriented model of the TEG itself was developed in the Modelica® language. With this model, any TEG can be described and simulated given the material properties and the physical dimensions. This model has now been extended with the surrounding components into a complete model of a thermoelectric energy harvesting system. In addition to the TEG, the model contains the cooling system, the heat source, and the power electronics. To validate the simulation model, a test bench was built and installed on an oil-fired household heating system. The paper reports the measurement results and discusses the validity of the developed simulation models. Furthermore, the efficiency of the proposed energy harvesting system is derived, and possible improvements based on design variations tested in the simulation model are proposed.
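
    The TEG's electrical behaviour can be sketched as a Seebeck voltage source behind an internal resistance; the values below are illustrative, not the test-bench hardware:

```python
# Steady-state electrical output of a TEG modelled as a Seebeck voltage
# source with internal resistance. All numerical values are illustrative.
def teg_power(alpha_v_per_k, delta_t_k, r_internal, r_load):
    """Power delivered to the load resistance, in watts."""
    v_open = alpha_v_per_k * delta_t_k        # open-circuit Seebeck voltage
    i = v_open / (r_internal + r_load)        # series circuit current
    return i ** 2 * r_load

# Power is maximised when the load matches the internal resistance:
for r_load in (1.0, 2.0, 4.0):
    p = teg_power(alpha_v_per_k=0.05, delta_t_k=100.0,
                  r_internal=2.0, r_load=r_load)
    print(f"R_load = {r_load} ohm -> P = {p:.2f} W")
```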

  17. Base Flow Model Validation

    NASA Technical Reports Server (NTRS)

    Sinha, Neeraj; Brinckman, Kevin; Jansen, Bernard; Seiner, John

    2011-01-01

    A method was developed for obtaining propulsive base flow data in both hot and cold jet environments, at Mach numbers and altitudes of relevance to NASA launcher designs. The base flow data were used to perform computational fluid dynamics (CFD) turbulence model assessments of base flow predictive capabilities in order to provide increased confidence in base thermal and pressure load predictions obtained from computational modeling efforts. Predictive CFD analyses were used in the design of the experiments, while available propulsive models and an existing wind tunnel facility were used to reduce program costs and increase the likelihood of success. The data obtained allowed assessment of CFD/turbulence models in a complex flow environment, following a building-block approach to validation in which cold, non-reacting test data were used first, followed by more complex reacting base flow cases.

  18. Mathematical modeling in Realistic Mathematics Education

    NASA Astrophysics Data System (ADS)

    Riyanto, B.; Zulkardi; Putri, R. I. I.; Darmawijoyo

    2017-12-01

    The purpose of this paper is to produce mathematical modelling tasks in Realistic Mathematics Education for junior high school. This study used development research consisting of 3 stages, namely analysis, design, and evaluation. The success criterion of this study was a local instruction theory for school mathematical modelling learning that is valid and practical for students. The data were analyzed using descriptive methods as follows: (1) walk-through, an analysis based on expert comments in the expert review, to obtain a Hypothetical Learning Trajectory for valid mathematical modelling learning; (2) analysis of the results of one-to-one and small-group reviews to gauge practicality. Based on the expert validation and the students' opinions and answers, the mathematical modelling problems obtained in Realistic Mathematics Education were valid and practical.

  19. Finite Element Model of the Knee for Investigation of Injury Mechanisms: Development and Validation

    PubMed Central

    Kiapour, Ali; Kiapour, Ata M.; Kaul, Vikas; Quatman, Carmen E.; Wordeman, Samuel C.; Hewett, Timothy E.; Demetropoulos, Constantine K.; Goel, Vijay K.

    2014-01-01

    Multiple computational models have been developed to study knee biomechanics. However, the majority of these models are mainly validated against a limited range of loading conditions and/or do not include sufficient details of the critical anatomical structures within the joint. Due to the multifactorial dynamic nature of knee injuries, anatomic finite element (FE) models validated against multiple factors under a broad range of loading conditions are necessary. This study presents a validated FE model of the lower extremity with an anatomically accurate representation of the knee joint. The model was validated against tibiofemoral kinematics, ligament strains/forces, and articular cartilage pressure data measured directly from static, quasi-static, and dynamic cadaveric experiments. Strong correlations were observed between model predictions and experimental data (r > 0.8 and p < 0.0005 for all comparisons). FE predictions showed low deviations (root-mean-square (RMS) error) from average experimental data under all modes of static and quasi-static loading, falling within 2.5 deg of tibiofemoral rotation, 1% of anterior cruciate ligament (ACL) and medial collateral ligament (MCL) strains, 17 N of ACL load, and 1 mm of tibiofemoral center of pressure. Similarly, the FE model was able to accurately predict tibiofemoral kinematics and ACL and MCL strains during simulated bipedal landings (dynamic loading). In addition to minimal deviation from direct cadaveric measurements, all model predictions fell within 95% confidence intervals of the average experimental data. Agreement between model predictions and experimental data demonstrates the ability of the developed model to predict the kinematics of the human knee joint as well as the complex, nonuniform stress and strain fields that occur in biological soft tissue. Such a model will facilitate in-depth understanding of a multitude of potential knee injury mechanisms, with special emphasis on ACL injury.
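
    The two headline validation statistics in this abstract, correlation and RMS error between prediction and experiment, reduce to a few lines of NumPy. A minimal sketch with placeholder arrays (not the study's data):

    ```python
    # Minimal sketch of the two validation statistics reported above: Pearson
    # correlation and root-mean-square (RMS) error between model predictions
    # and experimental measurements. Arrays here are placeholders.
    import numpy as np

    experiment = np.array([2.1, 3.4, 5.0, 6.2, 7.9])  # e.g., tibiofemoral rotation (deg)
    prediction = np.array([2.3, 3.1, 5.2, 6.0, 8.3])  # FE model output

    r = np.corrcoef(experiment, prediction)[0, 1]            # correlation (r > 0.8 reported)
    rms = np.sqrt(np.mean((prediction - experiment) ** 2))   # RMS deviation
    print(f"r = {r:.3f}, RMS error = {rms:.3f}")
    ```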

  20. Evaluating the dynamic response of in-flight thrust calculation techniques during throttle transients

    NASA Technical Reports Server (NTRS)

    Ray, Ronald J.

    1994-01-01

    New flight test maneuvers and analysis techniques for evaluating the dynamic response of in-flight thrust models during throttle transients have been developed and validated. The approach is based on the aircraft and engine performance relationship between thrust and drag. Two flight test maneuvers, a throttle step and a throttle frequency sweep, were developed and used in the study. Graphical analysis techniques, including a frequency-domain analysis method, were also developed and evaluated; they provide both quantitative and qualitative results. Four thrust calculation methods were used to demonstrate and validate the test technique. Flight test applications on two high-performance aircraft confirmed that the test methods are valid and accurate. These maneuvers and analysis techniques were easy to implement and use. Flight test results indicate the analysis techniques can identify the combined effects of model error and instrumentation response limitations on the calculated thrust value. The methods developed in this report provide an accurate approach for evaluating, validating, or comparing thrust calculation methods for dynamic flight applications.
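
    One common way to implement the frequency-domain analysis this abstract alludes to is to estimate an empirical frequency response between throttle input and calculated thrust from the sweep data. A hedged Python sketch using SciPy's cross-spectral estimate on synthetic signals; the sample rate, sweep band, and lag model are assumptions, not values from the report.

    ```python
    # Hedged sketch: estimating the frequency response of a thrust signal to a
    # throttle frequency sweep via cross-spectral density, H(f) = Pxy / Pxx.
    # The signals below are synthetic stand-ins for flight data.
    import numpy as np
    from scipy import signal

    fs = 50.0                                 # sample rate (Hz), assumed
    t = np.arange(0, 60, 1 / fs)
    throttle = signal.chirp(t, f0=0.05, f1=2.0, t1=60)   # frequency sweep input
    thrust = np.convolve(throttle, np.exp(-np.arange(50) / 10),
                         mode="same")                     # lagged response, assumed

    f, Pxy = signal.csd(throttle, thrust, fs=fs, nperseg=1024)
    _, Pxx = signal.welch(throttle, fs=fs, nperseg=1024)
    H = Pxy / Pxx                             # empirical frequency response
    print(f"gain at {f[5]:.2f} Hz: {abs(H[5]):.2f}, phase: {np.angle(H[5]):.2f} rad")
    ```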

  1. A Formal Approach to Empirical Dynamic Model Optimization and Validation

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Morelli, Eugene A.; Kenny, Sean P.; Giesy, Daniel P.

    2014-01-01

    A framework was developed for the optimization and validation of empirical dynamic models subject to an arbitrary set of validation criteria. The validation requirements imposed upon the model, which may involve several sets of input-output data and arbitrary specifications in the time and frequency domains, are used to determine whether model predictions are within admissible error limits. The parameters of the empirical model are estimated by finding the parameter realization for which the smallest of the margins of requirement compliance is as large as possible. The uncertainty in the value of this estimate is characterized by studying the set of model parameters yielding predictions that comply with all the requirements. Strategies are presented for bounding this set, studying its dependence on the admissible prediction error set by the analyst, and evaluating the sensitivity of the model predictions to parameter variations. This information is instrumental in characterizing uncertainty models used for evaluating the dynamic model at operating conditions differing from those used for its identification and validation. A practical example based on the short-period dynamics of the F-16 is used for illustration.
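
    The estimation rule described here, picking the parameter realization that maximizes the smallest margin of requirement compliance, is a max-min problem. A toy Python sketch of the idea follows; the damped-oscillation model, error limit, and data are invented for illustration, not the paper's F-16 example.

    ```python
    # Illustrative sketch of the estimation idea described above: choose model
    # parameters that maximize the smallest margin of requirement compliance.
    import numpy as np
    from scipy.optimize import minimize

    t = np.linspace(0, 5, 50)
    y_data = np.exp(-0.8 * t) * np.cos(2.0 * t)       # synthetic "measured" response

    def predict(theta, t):
        damping, freq = theta
        return np.exp(-damping * t) * np.cos(freq * t)  # empirical dynamic model

    allowable = 0.05                                   # admissible error limit, assumed

    def neg_min_margin(theta):
        err = np.abs(predict(theta, t) - y_data)
        return -np.min(allowable - err)                # maximize the smallest margin

    result = minimize(neg_min_margin, x0=[0.5, 1.5], method="Nelder-Mead")
    print("estimate:", result.x, "worst-case margin:", -result.fun)
    ```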

  2. Time Domain Tool Validation Using ARES I-X Flight Data

    NASA Technical Reports Server (NTRS)

    Hough, Steven; Compton, James; Hannan, Mike; Brandon, Jay

    2011-01-01

    The ARES I-X vehicle was launched from NASA's Kennedy Space Center (KSC) on October 28, 2009 at approximately 11:30 EDT. ARES I-X was the first test flight for NASA's ARES I launch vehicle, and it was the first non-Shuttle launch vehicle designed and flown by NASA since Saturn. The ARES I-X had a 4-segment solid rocket booster (SRB) first stage and a dummy upper stage (US) to emulate the properties of the ARES I US. During ARES I-X pre-flight modeling and analysis, six independent time-domain simulation tools were developed and cross-validated. Each tool represents an independent implementation of a common set of models and parameters in a different simulation framework and architecture. Post-flight data and reconstructed models provide the means to validate a subset of the simulations against actual flight data and to assess the accuracy of pre-flight dispersion analysis. Post-flight data consist of telemetered Operational Flight Instrumentation (OFI) data, primarily focused on flight computer outputs and sensor measurements, as well as Best Estimated Trajectory (BET) data that estimates vehicle state information from all available measurement sources. While pre-flight models were found to provide a reasonable prediction of the vehicle flight, reconstructed models were generated to better represent and simulate the ARES I-X flight. Post-flight reconstructed models include: the SRB propulsion model, thrust vector bias models, mass properties, base aerodynamics, and the Meteorological Estimated Trajectory (wind and atmospheric data). The result of the effort is a set of independently developed, high-fidelity, time-domain simulation tools that have been cross-validated and validated against flight data. This paper presents the process and results of high-fidelity aerospace modeling, simulation, analysis, and tool validation in the time domain.

  3. The Johns Hopkins model of psychological first aid (RAPID-PFA): curriculum development and content validation.

    PubMed

    Everly, George S; Barnett, Daniel J; Links, Jonathan M

    2012-01-01

    There appears to be virtually universal endorsement of the need for and value of acute "psychological first aid" (PFA) in the wake of trauma and disasters. In this paper, we describe the development of the curriculum for The Johns Hopkins RAPID-PFA model of psychological first aid. We employed an adaptation of the basic framework for the development of a clinical science as recommended by Millon, which entailed historical review, theoretical development, and content validation. The process of content validation of the RAPID-PFA curriculum entailed the assessment of attitudes (confidence in the application of PFA interventions, preparedness in the application of PFA), knowledge related to the application of immediate mental health interventions, and behavior (the ability to recognize clinical markers in the field, as assessed via a videotape recognition exercise). Results of the content validation phase suggest the six-hour RAPID-PFA curriculum, initially based upon structural modeling analysis, can improve confidence in and preparedness for the application of PFA interventions, knowledge related to immediate mental health interventions, and the ability to recognize clinical markers in the field.

  4. Development and validation of instrument for ergonomic evaluation of tablet arm chairs

    PubMed Central

    Tirloni, Adriana Seára; dos Reis, Diogo Cunha; Bornia, Antonio Cezar; de Andrade, Dalton Francisco; Borgatto, Adriano Ferreti; Moro, Antônio Renato Pereira

    2016-01-01

    The purpose of this study was to develop and validate an evaluation instrument for tablet arm chairs based on ergonomic requirements, focused on user perceptions and using Item Response Theory (IRT). This exploratory study involved 1,633 participants (university students and professors) in four steps: a pilot study (n=26), semantic validation (n=430), content validation (n=11) and construct validation (n=1,166). Samejima's graded response model was applied to validate the instrument. The results showed that all the steps (theoretical and practical) of the instrument's development and validation processes were successful and that the group of remaining items (n=45) had a high consistency (0.95). This instrument can be used in the furniture industry by engineers and product designers and in the purchasing process of tablet arm chairs for schools, universities and auditoriums. PMID:28337099
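
    The consistency figure cited (0.95) is of the kind produced by Cronbach's alpha. A minimal Python sketch of that computation on simulated responses; the 45 items match the abstract, while the 200 respondents and the latent-factor data model are assumptions for illustration.

    ```python
    # Minimal sketch: Cronbach's alpha for a set of questionnaire items
    # (dummy data; 45 items as in the study).
    import numpy as np

    rng = np.random.default_rng(1)
    base = rng.normal(size=(200, 1))                  # shared latent factor
    items = base + 0.5 * rng.normal(size=(200, 45))   # 200 respondents x 45 items

    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    alpha = k / (k - 1) * (1 - item_vars / total_var)  # Cronbach's alpha
    print(f"alpha = {alpha:.2f}")
    ```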

  5. Advanced Reactors-Intermediate Heat Exchanger (IHX) Coupling: Theoretical Modeling and Experimental Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Utgikar, Vivek; Sun, Xiaodong; Christensen, Richard

    2016-12-29

    The overall goal of the research project was to model the behavior of the advanced reactor-intermediate heat exchanger (IHX) system and to develop advanced control techniques for off-normal conditions. The specific objectives defined for the project were: 1. To develop the steady-state thermal hydraulic design of the intermediate heat exchanger (IHX); 2. To develop mathematical models to describe the advanced nuclear reactor-IHX-chemical process/power generation coupling during normal and off-normal operations, and to simulate the models using multiphysics software; 3. To develop control strategies using genetic algorithm or neural network techniques and couple these techniques with the multiphysics software; 4. To validate the models experimentally. The project objectives were accomplished by defining and executing four different tasks corresponding to these specific objectives. The first task involved selection of IHX candidates and development of steady-state designs for them. The second task involved modeling of the transient and off-normal operation of the reactor-IHX system. The subsequent task dealt with the development of control strategies and involved algorithm development and simulation. The last task involved experimental validation of the thermal-hydraulic performance of the two prototype heat exchangers designed and fabricated for the project, at steady-state and transient conditions, to simulate the coupling of the reactor-IHX-process plant system. The experimental work utilized two test facilities at The Ohio State University (OSU): the existing High-Temperature Helium Test Facility (HTHF) and a newly developed high-temperature molten salt facility.

  6. Cross-validation to select Bayesian hierarchical models in phylogenetics.

    PubMed

    Duchêne, Sebastián; Duchêne, David A; Di Giallonardo, Francesca; Eden, John-Sebastian; Geoghegan, Jemma L; Holt, Kathryn E; Ho, Simon Y W; Holmes, Edward C

    2016-05-26

    Recent developments in Bayesian phylogenetic models have increased the range of inferences that can be drawn from molecular sequence data. Accordingly, model selection has become an important component of phylogenetic analysis. Methods of model selection generally consider the likelihood of the data under the model in question. In the context of Bayesian phylogenetics, the most common approach involves estimating the marginal likelihood, which is typically done by integrating the likelihood across model parameters, weighted by the prior. Although this method is accurate, it is sensitive to the presence of improper priors. We explored an alternative approach based on cross-validation that is widely used in evolutionary analysis. This involves comparing models according to their predictive performance. We analysed simulated data and a range of viral and bacterial data sets using a cross-validation approach to compare a variety of molecular clock and demographic models. Our results show that cross-validation can be effective in distinguishing between strict- and relaxed-clock models and in identifying demographic models that allow growth in population size over time. In most of our empirical data analyses, the model selected using cross-validation was able to match that selected using marginal-likelihood estimation. The accuracy of cross-validation appears to improve with longer sequence data, particularly when distinguishing between relaxed-clock models. Cross-validation is a useful method for Bayesian phylogenetic model selection. This method can be readily implemented even when considering complex models where selecting an appropriate prior for all parameters may be difficult.

  7. Model Development and Experimental Validation of the Fusible Heat Sink Design for Exploration Vehicles

    NASA Technical Reports Server (NTRS)

    Cognata, Thomas J.; Leimkuehler, Thomas O.; Sheth, Rubik B.; Le, Hung

    2012-01-01

    The Fusible Heat Sink is a novel vehicle heat rejection technology which combines a flow-through radiator with a phase change material. The combined technologies create a multi-function device able to shield crew members against Solar Particle Events (SPE), reduce radiator extent by permitting sizing to the average vehicle heat load rather than to the peak vehicle heat load, and substantially absorb heat load excursions from the average while constantly maintaining thermal control system setpoints. This multi-function technology provides great flexibility for mission planning, making it possible to operate a vehicle in hot or cold environments and under high or low heat load conditions for extended periods of time. This paper describes the model development and experimental validation of the Fusible Heat Sink technology. The model developed was intended to meet the radiation and heat rejection requirements of a nominal MMSEV mission. Development parameters and results, including sizing and model performance, will be discussed. From this flight-sized model, a scaled test article was designed, modeled, and fabricated for experimental validation of the technology at Johnson Space Center thermal vacuum chamber facilities. Testing showed performance comparable to the model at nominal loads and the capability to maintain heat loads substantially greater than nominal for extended periods of time.

  8. Prediction of early death among patients enrolled in phase I trials: development and validation of a new model based on platelet count and albumin.

    PubMed

    Ploquin, A; Olmos, D; Lacombe, D; A'Hern, R; Duhamel, A; Twelves, C; Marsoni, S; Morales-Barrera, R; Soria, J-C; Verweij, J; Voest, E E; Schöffski, P; Schellens, J H; Kramar, A; Kristeleit, R S; Arkenau, H-T; Kaye, S B; Penel, N

    2012-09-25

    Selecting patients with 'sufficient life expectancy' for Phase I oncology trials remains challenging. The Royal Marsden Hospital Score (RMS) previously identified high-risk patients as those with ≥2 of the following: albumin <35 g/l; LDH > upper limit of normal; >2 metastatic sites. This study developed an alternative prognostic model and compared its performance with that of the RMS. The primary end point was the 90-day mortality rate. The new model was developed from the same database as the RMS, but it used Chi-squared Automatic Interaction Detection (CHAID). The ROC characteristics of both methods were then validated in an independent database of 324 patients enrolled in European Organisation for Research and Treatment of Cancer Phase I trials of cytotoxic agents between 2000 and 2009. The CHAID method identified high-risk patients as those with albumin <33 g/l, or albumin ≥33 g/l with platelet count ≥400,000 mm⁻³. In the validation data set, the rates of correctly classified patients were 0.79 vs 0.67 for the CHAID model and RMS, respectively. The negative predictive values (NPV) were similar for the CHAID model and RMS. The CHAID model and RMS provided a similarly high NPV, but the CHAID model gave better accuracy in the validation set. Both the CHAID model and RMS may improve the screening process in phase I trials.
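
    The published decision rule is simple enough to state directly in code. Below is a hedged Python sketch that applies the quoted albumin/platelet thresholds to toy patient data and computes the negative predictive value; the arrays are invented and the function name is ours, not the authors'.

    ```python
    # Hedged sketch (not the published model): apply the CHAID-derived rule
    # quoted above and compute the negative predictive value (NPV) on toy data.
    import numpy as np

    def chaid_high_risk(albumin_g_per_l, platelets_per_mm3):
        """High risk if albumin < 33 g/l, or albumin >= 33 g/l with
        platelet count >= 400,000 mm^-3."""
        return (albumin_g_per_l < 33) | (
            (albumin_g_per_l >= 33) & (platelets_per_mm3 >= 400_000)
        )

    albumin = np.array([28, 36, 40, 31, 38])
    platelets = np.array([250_000, 450_000, 180_000, 300_000, 410_000])
    died_90d = np.array([1, 1, 0, 0, 0], dtype=bool)  # hypothetical outcomes

    high = chaid_high_risk(albumin, platelets)
    npv = np.mean(~died_90d[~high])                   # survivors among low-risk
    print(f"high-risk flags: {high}, NPV = {npv:.2f}")
    ```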

  9. Development of a Bayesian model to estimate health care outcomes in the severely wounded

    PubMed Central

    Stojadinovic, Alexander; Eberhardt, John; Brown, Trevor S; Hawksworth, Jason S; Gage, Frederick; Tadaki, Douglas K; Forsberg, Jonathan A; Davis, Thomas A; Potter, Benjamin K; Dunne, James R; Elster, E A

    2010-01-01

    Background: Graphical probabilistic models have the ability to provide insights as to how clinical factors are conditionally related. These models can be used to help us understand factors influencing health care outcomes and resource utilization, and to estimate morbidity and clinical outcomes in trauma patient populations. Study design: Thirty-two combat casualties with severe extremity injuries enrolled in a prospective observational study were analyzed using a step-wise machine-learned Bayesian belief network (BBN) and step-wise logistic regression (LR). Models were evaluated using 10-fold cross-validation to calculate the area under the curve (AUC) of receiver operating characteristic (ROC) curves. Results: Our BBN showed important associations between various factors in our data set that could not be developed using standard regression methods. Cross-validated ROC curve analysis showed that our BBN model was a robust representation of our data domain and that LR models trained on these findings were also robust: hospital-acquired infection (AUC: LR, 0.81; BBN, 0.79), intensive care unit length of stay (AUC: LR, 0.97; BBN, 0.81), and wound healing (AUC: LR, 0.91; BBN, 0.72) showed strong AUCs. Conclusions: A BBN model can effectively represent clinical outcomes and biomarkers in patients hospitalized after severe wounding, as confirmed by 10-fold cross-validation and further supported by logistic regression modeling. The method warrants further development and independent validation in other, more diverse patient populations.
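
    The evaluation design, 10-fold cross-validated AUC for a logistic regression, maps directly onto scikit-learn. An illustrative sketch on synthetic data standing in for the clinical features; the sample size and feature count here are arbitrary, not the study's 32-patient cohort.

    ```python
    # Illustrative sketch of the logistic-regression arm of the evaluation:
    # 10-fold cross-validated AUC with scikit-learn on synthetic data.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import StratifiedKFold, cross_val_score

    X, y = make_classification(n_samples=200, n_features=10, random_state=0)

    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    aucs = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                           cv=cv, scoring="roc_auc")
    print(f"mean AUC = {aucs.mean():.2f} +/- {aucs.std():.2f}")
    ```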

  10. The NETL MFiX Suite of multiphase flow models: A brief review and recent applications of MFiX-TFM to fossil energy technologies

    DOE PAGES

    Li, Tingwen; Rogers, William A.; Syamlal, Madhava; ...

    2016-07-29

    Here, the MFiX suite of multiphase computational fluid dynamics (CFD) codes is being developed at the U.S. Department of Energy's National Energy Technology Laboratory (NETL). It includes several different approaches to multiphase simulation: MFiX-TFM, a two-fluid (Eulerian–Eulerian) model; MFiX-DEM, an Eulerian fluid model with a Lagrangian discrete element model for the solids phase; and MFiX-PIC, an Eulerian fluid model with Lagrangian particle 'parcels' representing particle groups. These models are undergoing continuous development and application, with verification, validation, and uncertainty quantification (VV&UQ) as integrated activities. After a brief summary of recent progress in VV&UQ, this article highlights two recent accomplishments in the application of MFiX-TFM to fossil energy technology development. First, a recent application of MFiX to the pilot-scale KBR TRIG™ Transport Gasifier located at DOE's National Carbon Capture Center (NCCC) is described. Gasifier performance over a range of operating conditions was modeled and compared to NCCC operational data to validate the ability of the model to predict parametric behavior. Second, comparison of code predictions at a detailed fundamental scale is presented for solid sorbents for the post-combustion capture of CO2 from flue gas. Specifically designed NETL experiments are being used to validate hydrodynamics and chemical kinetics for the sorbent-based carbon capture process.

  11. Are Model Transferability And Complexity Antithetical? Insights From Validation of a Variable-Complexity Empirical Snow Model in Space and Time

    NASA Astrophysics Data System (ADS)

    Lute, A. C.; Luce, Charles H.

    2017-11-01

    The related challenges of predictions in ungauged basins and predictions in ungauged climates point to the need to develop environmental models that are transferable across both space and time. Hydrologic modeling has historically focused on modeling one or only a few basins using highly parameterized conceptual or physically based models. However, model parameters and structures have been shown to change significantly when calibrated to new basins or time periods, suggesting that model complexity and model transferability may be antithetical. Empirical space-for-time models provide a framework within which to assess model transferability and any tradeoff with model complexity. Using 497 SNOTEL sites in the western U.S., we develop space-for-time models of April 1 SWE and snow residence time based on mean winter temperature and cumulative winter precipitation. The transferability of the models to new conditions (in both space and time) is assessed using non-random cross-validation tests, with consideration of the influence of model complexity on transferability. As others have noted, the algorithmic empirical models transfer best when minimal extrapolation in input variables is required. Temporal split-sample validations use pseudoreplicated samples, resulting in the selection of overly complex models, which has implications for the design of hydrologic model validation tests. Finally, we show that low- to moderate-complexity models transfer most successfully to new conditions in space and time, providing empirical confirmation of the parsimony principle.
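
    A non-random, space-for-time style validation can be emulated by holding out whole climate groups rather than random rows. The Python sketch below illustrates this with scikit-learn's GroupKFold on synthetic stand-ins for the SNOTEL variables; the regression form, the group definition, and all numbers are assumptions, not the paper's method.

    ```python
    # Hedged sketch of non-random (space-for-time style) validation: regress
    # April 1 SWE on winter temperature and precipitation, cross-validating
    # by leaving out whole climate groups instead of random rows.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import GroupKFold, cross_val_score

    rng = np.random.default_rng(2)
    n = 497
    temp = rng.normal(-3, 3, n)               # mean winter temperature (C), synthetic
    precip = rng.gamma(4, 100, n)             # cumulative winter precip (mm), synthetic
    swe = 0.6 * precip - 40 * np.maximum(temp, 0) + rng.normal(0, 50, n)

    X = np.column_stack([temp, precip])
    climate_group = np.digitize(temp, bins=[-4, -2, 0, 2])  # crude climate classes

    scores = cross_val_score(LinearRegression(), X, swe,
                             cv=GroupKFold(n_splits=5), groups=climate_group,
                             scoring="r2")
    print("R^2 by held-out climate group:", np.round(scores, 2))
    ```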

  12. Shielded-Twisted-Pair Cable Model for Chafe Fault Detection via Time-Domain Reflectometry

    NASA Technical Reports Server (NTRS)

    Schuet, Stefan R.; Timucin, Dogan A.; Wheeler, Kevin R.

    2012-01-01

    This report details the development, verification, and validation of an innovative physics-based model of electrical signal propagation through shielded-twisted-pair cable, which is commonly found on aircraft and offers an ideal proving ground for detection of small holes in a shield well before catastrophic damage occurs. The accuracy of this model is verified through numerical electromagnetic simulations using a commercially available software tool. The model is shown to be representative of more realistic (analytically intractable) cable configurations as well. A probabilistic framework is developed for validating the model accuracy with reflectometry data obtained from real aircraft-grade cables chafed in the laboratory.
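
    The underlying physics such a model captures can be summarized with the standard transmission-line relations: a chafe-induced impedance change produces a reflection whose amplitude and round-trip delay locate the fault. A minimal Python sketch with assumed values; the impedances, velocity factor, and fault distance are illustrative, not from the report.

    ```python
    # Minimal physics sketch related to the model above: an impedance change
    # caused by chafing produces a reflection governed by standard
    # transmission-line relations. Values are illustrative only.
    import numpy as np

    z0 = 120.0          # nominal characteristic impedance (ohm), assumed
    z_fault = 132.0     # locally raised impedance at a small shield hole, assumed
    v_prop = 0.7 * 3e8  # propagation velocity (m/s), assumed 0.7c

    gamma = (z_fault - z0) / (z_fault + z0)   # reflection coefficient
    distance = 4.2                            # fault location along cable (m)
    delay = 2 * distance / v_prop             # round-trip time of the echo

    print(f"reflection coefficient = {gamma:.3f}, echo delay = {delay*1e9:.1f} ns")
    ```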

  13. Developing and Validating the Socio-Technical Model in Ontology Engineering

    NASA Astrophysics Data System (ADS)

    Silalahi, Mesnan; Indra Sensuse, Dana; Giri Sucahyo, Yudho; Fadhilah Akmaliah, Izzah; Rahayu, Puji; Cahyaningsih, Elin

    2018-03-01

    This paper describes results from an attempt to develop a model for an ontology engineering methodology and a way to validate the model. The approach to methodology in ontology engineering is taken from the point of view of socio-technical systems theory. Qualitative research synthesis, using meta-ethnography, is employed to build the model. In order to ensure the objectivity of the measurement, an inter-rater reliability method was applied using multi-rater Fleiss' kappa. The results show the accordance of the research output with the diamond model in socio-technical systems theory, as evidenced by the interdependency of the four socio-technical variables, namely people, technology, structure, and task.
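
    Multi-rater Fleiss' kappa of the kind used for the objectivity check is available in statsmodels. A small illustrative sketch on dummy ratings; the counts of items, raters, and categories are assumptions, not the study's design.

    ```python
    # Sketch of the inter-rater agreement check mentioned above, using the
    # Fleiss' kappa implementation in statsmodels on dummy ratings.
    import numpy as np
    from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

    # Hypothetical data: 12 items (rows) rated by 6 raters into 3 categories.
    rng = np.random.default_rng(3)
    ratings = rng.integers(0, 3, size=(12, 6))

    table, _ = aggregate_raters(ratings)       # items x categories count table
    kappa = fleiss_kappa(table, method="fleiss")
    print(f"Fleiss' kappa = {kappa:.2f}")
    ```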

  14. Validation and calibration of structural models that combine information from multiple sources.

    PubMed

    Dahabreh, Issa J; Wong, John B; Trikalinos, Thomas A

    2017-02-01

    Mathematical models that attempt to capture structural relationships between their components and combine information from multiple sources are increasingly used in medicine. Areas covered: We provide an overview of methods for model validation and calibration and survey studies comparing alternative approaches. Expert commentary: Model validation entails a confrontation of models with data, background knowledge, and other models, and can inform judgments about model credibility. Calibration involves selecting parameter values to improve the agreement of model outputs with data. When the goal of modeling is quantitative inference on the effects of interventions or forecasting, calibration can be viewed as estimation. This view clarifies issues related to parameter identifiability and facilitates formal model validation and the examination of consistency among different sources of information. In contrast, when the goal of modeling is the generation of qualitative insights about the modeled phenomenon, calibration is a rather informal process for selecting inputs that result in model behavior that roughly reproduces select aspects of the modeled phenomenon and cannot be equated to an estimation procedure. Current empirical research on validation and calibration methods consists primarily of methodological appraisals or case-studies of alternative techniques and cannot address the numerous complex and multifaceted methodological decisions that modelers must make. Further research is needed on different approaches for developing and validating complex models that combine evidence from multiple sources.

  15. Development of Detonation Modeling Capabilities for Rocket Test Facilities: Hydrogen-Oxygen-Nitrogen Mixtures

    NASA Technical Reports Server (NTRS)

    Allgood, Daniel C.

    2016-01-01

    The objective of the presented work was to develop validated computational fluid dynamics (CFD) based methodologies for predicting propellant detonations and their associated blast environments. Applications of interest were scenarios relevant to rocket propulsion test and launch facilities. All model development was conducted within the framework of the Loci/CHEM CFD tool due to its reliability and robustness in predicting high-speed combusting flow-fields associated with rocket engines and plumes. During the course of the project, verification and validation studies were completed for hydrogen-fueled detonation phenomena such as shock-induced combustion, confined detonation waves, vapor cloud explosions, and deflagration-to-detonation transition (DDT) processes. The DDT validation cases included predicting flame acceleration mechanisms associated with turbulent flame-jets and flow obstacles. Excellent agreement between test data and model predictions was observed. The proposed CFD methodology was then successfully applied to model a detonation event that occurred during liquid oxygen/gaseous hydrogen rocket diffuser testing at NASA Stennis Space Center.

  16. Development and external validation of a risk-prediction model to predict 5-year overall survival in advanced larynx cancer.

    PubMed

    Petersen, Japke F; Stuiver, Martijn M; Timmermans, Adriana J; Chen, Amy; Zhang, Hongzhen; O'Neill, James P; Deady, Sandra; Vander Poorten, Vincent; Meulemans, Jeroen; Wennerberg, Johan; Skroder, Carl; Day, Andrew T; Koch, Wayne; van den Brekel, Michiel W M

    2018-05-01

    TNM classification inadequately estimates patient-specific overall survival (OS). We aimed to improve this by developing a risk-prediction model for patients with advanced larynx cancer. Cohort study. We developed a risk prediction model to estimate the 5-year OS rate based on a cohort of 3,442 patients with T3T4N0N+M0 larynx cancer. The model was internally validated using bootstrapping samples and externally validated on patient data from five external centers (n = 770). The main outcome was performance of the model as tested by discrimination, calibration, and the ability to distinguish risk groups based on tertiles from the derivation dataset. The model's performance was compared to that of a model based on T and N classification only. We included age, gender, T and N classification, and subsite as prognostic variables in the standard model. After external validation, the standard model had a significantly better fit than a model based on T and N classification alone (C statistic, 0.59 vs. 0.55, P < .001). The model was able to distinguish well among three risk groups based on tertiles of the risk score. Adding treatment modality to the model did not decrease the predictive power. As a post hoc analysis, we tested the added value of comorbidity as scored by the American Society of Anesthesiologists score in a subsample, which increased the C statistic to 0.68. A risk prediction model for patients with advanced larynx cancer, consisting of readily available clinical variables, gives more accurate estimates of the 5-year survival rate than a model based on T and N classification alone. Level of evidence: 2c. Laryngoscope, 128:1140-1145, 2018. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.
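
    The C statistic reported here is the survival-analysis concordance index. A brief Python sketch of its computation with lifelines on synthetic data; the risk-score model and censoring rate are invented for illustration.

    ```python
    # Illustrative computation of the C statistic (concordance index) used to
    # compare the models above, via lifelines on synthetic survival data.
    import numpy as np
    from lifelines.utils import concordance_index

    rng = np.random.default_rng(4)
    risk_score = rng.normal(size=300)                      # model's predicted risk
    time = rng.exponential(60 / np.exp(0.5 * risk_score))  # higher risk -> earlier event
    event = rng.random(300) < 0.7                          # True = death observed

    # Higher risk should mean shorter survival, so pass the negated score.
    c = concordance_index(time, -risk_score, event)
    print(f"C statistic = {c:.2f}")
    ```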

  17. Risk model of prolonged intensive care unit stay in Chinese patients undergoing heart valve surgery.

    PubMed

    Wang, Chong; Zhang, Guan-xin; Zhang, Hao; Lu, Fang-lin; Li, Bai-ling; Xu, Ji-bin; Han, Lin; Xu, Zhi-yun

    2012-11-01

    The aim of this study was to develop a preoperative risk prediction model and a scorecard for prolonged intensive care unit length of stay (PrlICULOS) in adult patients undergoing heart valve surgery. This is a retrospective observational study of collected data on 3,925 consecutive patients older than 18 years who had undergone heart valve surgery between January 2000 and December 2010. Data were randomly split into a development dataset (n=2401) and a validation dataset (n=1524). A multivariate logistic regression analysis was undertaken using the development dataset to identify independent risk factors for PrlICULOS. Performance of the model was then assessed by observed and expected rates of PrlICULOS on the development and validation datasets. Model calibration and discriminatory ability were analysed by the Hosmer-Lemeshow goodness-of-fit statistic and the area under the receiver operating characteristic (ROC) curve, respectively. There were 491 patients that required PrlICULOS (12.5%). Preoperative independent predictors of PrlICULOS are shown with odds ratios as follows: (1) age, 1.4; (2) chronic obstructive pulmonary disease (COPD), 1.8; (3) atrial fibrillation, 1.4; (4) left bundle branch block, 2.7; (5) ejection fraction, 1.4; (6) left ventricle weight, 1.5; (7) New York Heart Association class III-IV, 1.8; (8) critical preoperative state, 2.0; (9) perivalvular leakage, 6.4; (10) tricuspid valve replacement, 3.8; (11) concurrent CABG, 2.8; and (12) concurrent other cardiac surgery, 1.8. The Hosmer-Lemeshow goodness-of-fit statistic was not statistically significant in either the development or the validation dataset (P=0.365 vs P=0.310). The area under the ROC curve for the prediction of PrlICULOS in the development and validation datasets was 0.717 and 0.700, respectively. We developed and validated a local risk prediction model for PrlICULOS after adult heart valve surgery. This model can be used to calculate patient-specific risk with an equivalent predicted risk at our centre in future clinical practice. Copyright © 2012 Australian and New Zealand Society of Cardiac and Thoracic Surgeons (ANZSCTS) and the Cardiac Society of Australia and New Zealand (CSANZ). Published by Elsevier B.V. All rights reserved.
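
    The Hosmer-Lemeshow statistic used for calibration has no single canonical library call, but it is short to implement: bin patients by deciles of predicted risk and compare observed with expected event counts. A hedged Python sketch on placeholder data:

    ```python
    # Sketch of the Hosmer-Lemeshow goodness-of-fit statistic: group patients
    # into deciles of predicted risk and compare observed vs. expected events.
    import numpy as np
    from scipy.stats import chi2

    def hosmer_lemeshow(y_true, p_pred, g=10):
        order = np.argsort(p_pred)
        groups = np.array_split(order, g)             # deciles of predicted risk
        stat = 0.0
        for idx in groups:
            obs = y_true[idx].sum()
            exp = p_pred[idx].sum()
            n = len(idx)
            pbar = exp / n
            stat += (obs - exp) ** 2 / (n * pbar * (1 - pbar))
        return stat, chi2.sf(stat, df=g - 2)          # p-value with g-2 df

    rng = np.random.default_rng(5)
    p = rng.uniform(0.02, 0.5, 1000)                  # predicted PrlICULOS risk, dummy
    y = (rng.random(1000) < p).astype(int)            # outcomes consistent with p

    stat, pval = hosmer_lemeshow(y, p)
    print(f"H-L statistic = {stat:.2f}, P = {pval:.3f}")  # non-significant = good fit
    ```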

  18. Flexible Programmes in Higher Professional Education: Expert Validation of a Flexible Educational Model

    ERIC Educational Resources Information Center

    Schellekens, Ad; Paas, Fred; Verbraeck, Alexander; van Merrienboer, Jeroen J. G.

    2010-01-01

    In a preceding case study, a process-focused demand-driven approach for organising flexible educational programmes in higher professional education (HPE) was developed. Operations management and instructional design contributed to designing a flexible educational model by means of discrete-event simulation. Educational experts validated the model…

  19. From Positive Youth Development to Youth's Engagement: The Dream Teens

    ERIC Educational Resources Information Center

    Gaspar de Matos, Margarida; Simões, Celeste

    2016-01-01

    In addition to the empirical validation of "health and happiness" determinants, theoretical models suggesting where to ground actions are necessary. In the beginning of the twentieth century, intervention models focused on evaluation and empirical validation were only concerned about overt behaviours (verbal and non-verbal) and covert…

  20. Validation of numerical model for cook stove using Reynolds averaged Navier-Stokes based solver

    NASA Astrophysics Data System (ADS)

    Islam, Md. Moinul; Hasan, Md. Abdullah Al; Rahman, Md. Mominur; Rahaman, Md. Mashiur

    2017-12-01

    Biomass-fired cook stoves have, for many years, been the main cooking appliance for the rural people of developing countries. Several research efforts have been carried out to find efficient stove designs. In the present study, a numerical model of an improved household cook stove is developed to analyze the heat transfer and flow behavior of the gas during operation. The numerical model is validated against experimental results. Computations are carried out using a non-premixed combustion model. The Reynolds-averaged Navier-Stokes (RANS) equations, along with the κ-ε model, govern the turbulent flow within the computational domain. The computational results are in good agreement with the experiment. The developed numerical model can be used to predict the effect of different biomasses on the efficiency of the cook stove.

  1. Predicting Overall Survival After Stereotactic Ablative Radiation Therapy in Early-Stage Lung Cancer: Development and External Validation of the Amsterdam Prognostic Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Louie, Alexander V. (Department of Radiation Oncology, London Regional Cancer Program, University of Western Ontario, London, Ontario; Department of Epidemiology, Harvard School of Public Health, Harvard University, Boston, Massachusetts)

    Purpose: A prognostic model for 5-year overall survival (OS), consisting of recursive partitioning analysis (RPA) and a nomogram, was developed for patients with early-stage non-small cell lung cancer (ES-NSCLC) treated with stereotactic ablative radiation therapy (SABR). Methods and Materials: A primary dataset of 703 ES-NSCLC SABR patients was randomly divided into a training (67%) and an internal validation (33%) dataset. In the former group, 21 unique parameters consisting of patient, treatment, and tumor factors were entered into an RPA model to predict OS. Univariate and multivariate models were constructed for RPA-selected factors to evaluate their relationship with OS. A nomogram for OS was constructed based on factors significant in multivariate modeling and validated with calibration plots. Both the RPA and the nomogram were externally validated in independent surgical (n=193) and SABR (n=543) datasets. Results: RPA identified 2 distinct risk classes based on tumor diameter, age, World Health Organization performance status (PS), and Charlson comorbidity index. This RPA had moderate discrimination in SABR datasets (c-index range: 0.52-0.60) but was of limited value in the surgical validation cohort. The nomogram predicting OS included smoking history in addition to RPA-identified factors. In contrast to the RPA, the nomogram performed well in internal validation (r² = 0.97) and in the external SABR (r² = 0.79) and surgical cohorts (r² = 0.91). Conclusions: The Amsterdam prognostic model is the first externally validated prognostication tool for OS in ES-NSCLC treated with SABR that is available to individualize patient decision making. The nomogram retained strong performance across surgical and SABR external validation datasets. RPA performance was poor in surgical patients, suggesting that 2 distinct patient populations are being treated with these 2 effective modalities.
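
    As a rough stand-in for the recursive partitioning step (not the authors' actual RPA procedure), a depth-limited decision tree over the four identified factors shows the flavor of the analysis. All data below are synthetic.

    ```python
    # Hedged sketch: a shallow decision tree as a stand-in for recursive
    # partitioning analysis (RPA), splitting patients into risk classes from
    # tumor diameter, age, performance status, and comorbidity. Synthetic data.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(6)
    n = 703
    X = np.column_stack([
        rng.uniform(1, 6, n),    # tumor diameter (cm)
        rng.normal(74, 8, n),    # age (years)
        rng.integers(0, 3, n),   # WHO performance status
        rng.integers(0, 5, n),   # Charlson comorbidity index
    ])
    logit = -2 + 0.3 * X[:, 0] + 0.04 * (X[:, 1] - 74) + 0.3 * X[:, 2] + 0.2 * X[:, 3]
    died_5y = rng.random(n) < 1 / (1 + np.exp(-logit))

    rpa = DecisionTreeClassifier(max_depth=2, min_samples_leaf=50).fit(X, died_5y)
    print(export_text(rpa, feature_names=["diameter", "age", "ps", "charlson"]))
    ```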

  2. [Development and testing of a preparedness and response capacity questionnaire in public health emergency for Chinese provincial and municipal governments].

    PubMed

    Hu, Guo-Qing; Rao, Ke-Qin; Sun, Zhen-Qiu

    2008-12-01

    To develop a preparedness and response capacity questionnaire for public health emergencies for Chinese local governments. Literature review, conceptual modelling, stakeholder analysis, focus groups, interviews, and the Delphi technique were employed together to develop the questionnaire. Classical test theory and a case study were used to assess reliability and validity. (1) A two-dimensional conceptual model was built, and a preparedness and response capacity questionnaire for public health emergencies with 10 dimensions and 204 items was developed. (2) Reliability and validity results. Internal consistency: except for dimensions 3 and 8, the Cronbach's alpha coefficients of the dimensions were higher than 0.60; the alpha coefficients of dimensions 3 and 8 were 0.59 and 0.39, respectively. Content validity: the questionnaire was endorsed by the respondents. Construct validity: the Spearman correlation coefficients among the 10 dimensions fluctuated around 0.50, ranging from 0.26 to 0.75 (P<0.05). Discrimination validity: comparisons of the 10 dimensions among 4 provinces did not show statistical significance using one-way analysis of variance (P>0.05). Criterion-related validity: the case study showed significant differences in the 10 dimensions in Beijing between February 2003 (before the SARS event) and November 2005 (after the SARS event). The preparedness and response capacity questionnaire for public health emergencies is a reliable and valid tool, which can be used in all provinces and municipalities in China.

  3. Working Memory Structure in 10- and 15-Year-Old Children with Mild to Borderline Intellectual Disabilities

    ERIC Educational Resources Information Center

    van der Molen, Mariet J.

    2010-01-01

    The validity of Baddeley's working memory model within the typically developing population was tested. However, it is not clear if this model also holds in children and adolescents with mild to borderline intellectual disabilities (ID; IQ score 55-85). The main purpose of this study was therefore to explore the model's validity in this…

  4. Use of Maple Seedling Canopy Reflectance Dataset for Validation of SART/LEAFMOD Radiative Transfer Model

    NASA Technical Reports Server (NTRS)

    Bond, Barbara J.; Peterson, David L.

    1999-01-01

    This project was a collaborative effort by researchers at ARC, OSU and the University of Arizona. The goal was to use a dataset obtained from a previous study to "empirically validate a new canopy radiative-transfer model (SART) which incorporates a recently-developed leaf-level model (LEAFMOD)". The document includes a short research summary.

  5. Online or on Campus: A Student Tertiary Education Cost Model Comparing the Two, with a Quality Proviso

    ERIC Educational Resources Information Center

    Ioakimidis, Marilou

    2007-01-01

    This paper presents the development and validation of a two-level hierarchical cost model for tertiary education, which enables prospective students to compare the total cost of attending a traditional Baccalaureate degree education with that of the same programme taken through distance e-learning. The model was validated by a sample of Greek…

  6. Cross-Cultural Validation of the Preventive Health Model for Colorectal Cancer Screening: An Australian Study

    ERIC Educational Resources Information Center

    Flight, Ingrid H.; Wilson, Carlene J.; McGillivray, Jane; Myers, Ronald E.

    2010-01-01

    We investigated whether the five-factor structure of the Preventive Health Model for colorectal cancer screening, developed in the United States, has validity in Australia. We also tested extending the model with the addition of the factor Self-Efficacy to Screen using Fecal Occult Blood Test (SESFOBT). Randomly selected men and women aged between…

  7. Maximizing the Information and Validity of a Linear Composite in the Factor Analysis Model for Continuous Item Responses

    ERIC Educational Resources Information Center

    Ferrando, Pere J.

    2008-01-01

    This paper develops results and procedures for obtaining linear composites of factor scores that maximize: (a) test information, and (b) validity with respect to external variables in the multiple factor analysis (FA) model. I treat FA as a multidimensional item response theory model, and use Ackerman's multidimensional information approach based…

  8. Validation metrics for turbulent plasma transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holland, C.

    Developing accurate models of plasma dynamics is essential for confident predictive modeling of current and future fusion devices. In modern computer science and engineering, formal verification and validation processes are used to assess model accuracy and establish confidence in the predictive capabilities of a given model. This paper provides an overview of the key guiding principles and best practices for the development of validation metrics, illustrated using examples from investigations of turbulent transport in magnetically confined plasmas. Particular emphasis is given to the importance of uncertainty quantification and its inclusion within the metrics, and to the need for utilizing synthetic diagnostics to enable quantitatively meaningful comparisons between simulation and experiment. As a starting point, the structure of commonly used global transport model metrics and their limitations are reviewed. An alternative approach is then presented, which focuses upon comparisons of predicted local fluxes, fluctuations, and equilibrium gradients against observation. The utility of metrics based upon these comparisons is demonstrated by applying them to gyrokinetic predictions of turbulent transport in a variety of discharges performed on the DIII-D tokamak [J. L. Luxon, Nucl. Fusion 42, 614 (2002)], as part of a multi-year transport model validation activity.

  9. Development and Validation of Decision Forest Model for Estrogen Receptor Binding Prediction of Chemicals Using Large Data Sets.

    PubMed

    Ng, Hui Wen; Doughty, Stephen W; Luo, Heng; Ye, Hao; Ge, Weigong; Tong, Weida; Hong, Huixiao

    2015-12-21

    Some chemicals in the environment possess the potential to interact with the endocrine system in the human body. Multiple receptors are involved in the endocrine system; estrogen receptor α (ERα) plays very important roles in endocrine activity and is the most studied receptor. Understanding and predicting the estrogenic activity of chemicals facilitates the evaluation of their endocrine activity. Hence, we have developed a decision forest classification model to predict chemical binding to ERα using a large training data set of 3,308 chemicals obtained from the U.S. Food and Drug Administration's Estrogenic Activity Database. We tested the model using cross-validation and external data sets of 1,641 chemicals obtained from the U.S. Environmental Protection Agency's ToxCast project. The model showed good performance in both internal (92% accuracy) and external validation (∼70-89% relative balanced accuracies), where the latter involved validation of the model across different ER pathway-related assays in ToxCast. The important features that contribute to the prediction ability of the model were identified through informative descriptor analysis and were related to current knowledge of ER binding. Prediction confidence analysis revealed that the model had both high prediction confidence and accuracy for most predicted chemicals. The results demonstrated that the model constructed based on the large training data set is more accurate and robust for predicting ER binding of chemicals than published models developed using much smaller data sets. The model could be useful for the evaluation of the ERα-mediated endocrine activity potential of environmental chemicals.

  10. Development of speed models for improving travel forecasting and highway performance evaluation : [technical summary].

    DOT National Transportation Integrated Search

    2013-12-01

    Travel forecasting models predict travel demand based on the present transportation system and its use. Transportation modelers must develop, validate, and calibrate models to ensure that predicted travel demand is as close to reality as possible. Mo...

  11. Development and validation of clinical prediction models for mortality, functional outcome and cognitive impairment after stroke: a study protocol

    PubMed Central

    Fahey, Marion; Rudd, Anthony; Béjot, Yannick; Wolfe, Charles; Douiri, Abdel

    2017-01-01

    Introduction: Stroke is a leading cause of adult disability and death worldwide. The neurological impairments associated with stroke prevent patients from performing basic daily activities and have enormous impact on families and caregivers. Practical and accurate tools to assist in predicting outcome after stroke at the patient level can provide significant aid for patient management. Furthermore, prediction models of this kind can be useful for clinical research, health economics, policymaking and clinical decision support. Methods: 2,869 patients with first-ever stroke from the South London Stroke Register (SLSR) (1995–2004) will be included in the development cohort. We will use information captured after baseline to construct multilevel models and a Cox proportional hazards model to predict cognitive impairment, functional outcome and mortality up to 5 years after stroke. Repeated random subsampling validation (Monte Carlo cross-validation) will be evaluated in model development. Data from participants recruited to the stroke register (2005–2014) will be used for temporal validation of the models. Data from participants recruited to the Dijon Stroke Register (1985–2015) will be used for external validation. Discrimination, calibration and clinical utility of the models will be presented. Ethics: Patients, or their relatives for patients who cannot consent, gave written informed consent to participate in stroke-related studies within the SLSR. The SLSR design was approved by the ethics committees of Guy's and St Thomas' NHS Foundation Trust, Kings College Hospital, Queens Square and Westminster Hospitals (London). The Dijon Stroke Registry was approved by the Comité National des Registres and the InVS and has the authorisation of the Commission Nationale de l'Informatique et des Libertés.
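
    The planned repeated random subsampling (Monte Carlo cross-validation) of a Cox model can be sketched with lifelines. The following Python example uses a synthetic DataFrame as a stand-in for the register data; the covariates, split sizes, and repetition count are assumptions, not the protocol's specification.

    ```python
    # Illustrative sketch of repeated random subsampling (Monte Carlo
    # cross-validation) for a Cox proportional hazards model with lifelines.
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter
    from lifelines.utils import concordance_index

    rng = np.random.default_rng(7)
    n = 500
    df = pd.DataFrame({
        "age": rng.normal(70, 10, n),
        "severity": rng.integers(0, 4, n),
    })
    hazard = np.exp(0.03 * (df["age"] - 70) + 0.4 * df["severity"])
    df["time"] = rng.exponential(60 / hazard)
    df["event"] = rng.random(n) < 0.8

    scores = []
    for rep in range(20):                              # 20 random 70/30 splits
        test_idx = rng.choice(n, size=n // 3, replace=False)
        train, test = df.drop(index=test_idx), df.loc[test_idx]
        cph = CoxPHFitter().fit(train, duration_col="time", event_col="event")
        risk = cph.predict_partial_hazard(test)
        scores.append(concordance_index(test["time"], -risk, test["event"]))

    print(f"mean C index over splits = {np.mean(scores):.2f}")
    ```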

  12. Developing a model for hospital inherent safety assessment: Conceptualization and validation.

    PubMed

    Yari, Saeed; Akbari, Hesam; Gholami Fesharaki, Mohammad; Khosravizadeh, Omid; Ghasemi, Mohammad; Barsam, Yalda; Akbari, Hamed

    2018-01-01

    Paying attention to the safety of hospitals, as the most crucial institutions for providing medical and health services, wherein a bundle of facilities, equipment, and human resources exists, is of significant importance. The present research aims at developing a model for assessing hospital safety based on the principles of inherent safety design. Face validity (30 experts), content validity (20 experts), construct validity (268 samples), convergent validity, and divergent validity were employed to validate the prepared questionnaire; item analysis, Cronbach's alpha, the ICC test (to measure test reliability), and the composite reliability coefficient were used to measure primary reliability. The relationships between variables and factors were confirmed at the 0.05 significance level by conducting confirmatory factor analysis (CFA) and structural equation modeling (SEM) with the use of Smart-PLS. R-squared and factor loading values, which were higher than 0.67 and 0.300 respectively, indicated a strong fit. Moderation (0.970), simplification (0.959), substitution (0.943), and minimization (0.5008) had the greatest weights in determining the inherent safety of a hospital, respectively. Moderation, simplification, and substitution carry more weight in inherent safety than the other dimensions, while minimization carries the least weight, which may be due to its definition as minimizing risk.

  13. Estimating the weight of Douglas-fir tree boles and logs with an iterative computer model.

    Treesearch

    Dale R. Waddell; Dale L. Weyermann; Michael B. Lambert

    1987-01-01

    A computer model that estimates the green weights of standing trees was developed and validated for old-growth Douglas-fir. The model calculates the green weight for the entire bole, for the bole to any merchantable top, and for any log length within the bole. The model was validated by estimating the bias and accuracy of an independent subsample selected from the...

  14. An Assessment of the Validity of the ECERS-R with Implications for Measures of Child Care Quality and Relations to Child Development

    ERIC Educational Resources Information Center

    Gordon, Rachel A.; Fujimoto, Ken; Kaestner, Robert; Korenman, Sanders; Abner, Kristin

    2013-01-01

    The Early Childhood Environment Rating Scale-Revised (ECERS-R) is widely used to associate child care quality with child development, but its validity for this purpose is not well established. We examined the validity of the ECERS-R using the multidimensional Rasch partial credit model (PCM), factor analyses, and regression analyses with data from…

  15. A Quantitative Structure Activity Relationship for acute oral toxicity of pesticides on rats: Validation, domain of application and prediction.

    PubMed

    Hamadache, Mabrouk; Benkortbi, Othmane; Hanini, Salah; Amrane, Abdeltif; Khaouane, Latifa; Si Moussa, Cherif

    2016-02-13

    Quantitative Structure Activity Relationship (QSAR) models are expected to play an important role in the risk assessment of chemicals for humans and the environment. In this study, we developed a validated QSAR model to predict the acute oral toxicity of 329 pesticides to rats, because few QSAR models have been devoted to predicting the median lethal dose (LD50) of pesticides in rats. This QSAR model is based on 17 molecular descriptors and is robust, externally predictive, and characterized by a good applicability domain. The best results were obtained with a 17/9/1 artificial neural network model trained with the quasi-Newton backpropagation (BFGS) algorithm. The prediction accuracy for the external validation set was estimated by Q²ext and the root mean square error (RMSE), which are equal to 0.948 and 0.201, respectively. 98.6% of the external validation set is correctly predicted, and the present model proved to be superior to previously published models. Accordingly, the model developed in this study provides excellent predictions and can be used to predict the acute oral toxicity of pesticides, particularly those that have not been tested, as well as new pesticides. Copyright © 2015 Elsevier B.V. All rights reserved.
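
    The modeling recipe, a 17/9/1 feed-forward network trained with a quasi-Newton solver and judged by external Q² and RMSE, can be approximated in scikit-learn, whose lbfgs solver stands in for BFGS backpropagation. An illustrative sketch on random placeholder descriptors, not the pesticide data:

    ```python
    # Hedged sketch of the recipe above: a 17-input, 9-hidden-unit, 1-output
    # network with a quasi-Newton solver, scored by external Q2 and RMSE.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(8)
    X = rng.normal(size=(329, 17))                    # 17 molecular descriptors, dummy
    y = X @ rng.normal(size=17) + 0.3 * rng.normal(size=329)  # surrogate log(LD50)

    X_tr, X_ext, y_tr, y_ext = train_test_split(X, y, test_size=0.2, random_state=0)

    net = MLPRegressor(hidden_layer_sizes=(9,), solver="lbfgs", max_iter=2000,
                       random_state=0).fit(X_tr, y_tr)

    pred = net.predict(X_ext)
    rmse = np.sqrt(np.mean((pred - y_ext) ** 2))
    q2_ext = 1 - np.sum((y_ext - pred) ** 2) / np.sum((y_ext - y_ext.mean()) ** 2)
    print(f"Q2_ext = {q2_ext:.3f}, RMSE = {rmse:.3f}")
    ```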

  16. Predicting human skin absorption of chemicals: development of a novel quantitative structure activity relationship.

    PubMed

    Luo, Wen; Medrek, Sarah; Misra, Jatin; Nohynek, Gerhard J

    2007-02-01

    The objective of this study was to construct and validate a quantitative structure-activity relationship model for skin absorption. Such models are valuable tools for screening and prioritization in safety and efficacy evaluation and in risk assessment of drugs and chemicals. A database of 340 chemicals with percutaneous absorption data was assembled. Two models were derived from the training set consisting of 306 chemicals (90/10 random split). In addition to the experimental Kow values, over 300 2D and 3D atomic and molecular descriptors were analyzed using MDL's QsarIS computer program. Subsequently, the models were validated using both internal (leave-one-out) and external validation (test set) procedures. Using stepwise regression analysis, three molecular descriptors were determined to have a significant statistical correlation with Kp (R² = 0.8225): log Kow, X0 (a quantification of both molecular size and the degree of skeletal branching), and SsssCH (a count of aromatic carbon groups). In conclusion, two models to estimate skin absorption were developed. When compared to other skin absorption QSAR models in the literature, our model incorporated more chemicals and explored a larger number of descriptors. Additionally, our models are reasonably predictive and have met both internal and external statistical validation.

  17. OECD-NEA Expert Group on Multi-Physics Experimental Data, Benchmarks and Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valentine, Timothy; Rohatgi, Upendra S.

    High-fidelity, multi-physics modeling and simulation (M&S) tools are being developed and utilized for a variety of applications in nuclear science and technology and show great promise in their ability to reproduce observed phenomena for many applications. Even with the increasing fidelity and sophistication of coupled multi-physics M&S tools, the underpinning models and data still need to be validated against experiments, which may require a more complex array of validation data because of the great breadth of the time, energy, and spatial domains of the physical phenomena being simulated. The Expert Group on Multi-Physics Experimental Data, Benchmarks and Validation (MPEBV) of the Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD) was formed to address the challenges of validating such tools. The work of the MPEBV expert group is shared among three task forces to fulfill its mandate, and specific exercises are being developed to demonstrate validation principles for common industrial challenges. This paper describes the overall mission of the group, the specific objectives of the task forces, the linkages among the task forces, and the development of a validation exercise that focuses on a specific reactor challenge problem.

  18. FY2017 Pilot Project Plan for the Nuclear Energy Knowledge and Validation Center Initiative

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, Weiju

    To prepare for technical development of computational code validation under the Nuclear Energy Knowledge and Validation Center (NEKVAC) initiative, several meetings were held by a group of experts of the Idaho National Laboratory (INL) and the Oak Ridge National Laboratory (ORNL) to develop requirements of, and formulate a structure for, a transient fuel database through leveraging existing resources. It was concluded in discussions of these meetings that a pilot project is needed to address the most fundamental issues that can generate immediate stimulus to near-future validation developments as well as long-lasting benefits to NEKVAC operation. The present project is proposed based on the consensus of these discussions. Analysis of common scenarios in code validation indicates that the incapability of acquiring satisfactory validation data is often a showstopper that must first be tackled before any confident validation developments can be carried out. Validation data are usually found scattered across different places, often with the interrelationships among the data poorly documented; the data may be incomplete, with information for some parameters missing, nonexistent, or unrealistic to generate experimentally. Furthermore, with very different technical backgrounds, the modeler, the experimentalist, and the knowledgebase developer that must be involved in validation data development often cannot communicate effectively without a data package template that is representative of the data structure for the information domain of interest to the desired code validation. This pilot project is proposed to use the legendary TREAT Experiments Database to provide core elements for creating an ideal validation data package. Data gaps and missing data interrelationships will be identified from these core elements. All the identified missing elements will then be filled in with experimental data if available from other existing sources, or with dummy data if nonexistent. The resulting hybrid validation data package (composed of experimental and dummy data) will provide a clear and complete instance delineating the structure of the desired validation data and enabling effective communication among the modeler, the experimentalist, and the knowledgebase developer. With a good common understanding of the desired data structure by the three parties of subject matter experts, further existing-data hunting can be conducted effectively, new experimental data generation can be pursued realistically, the knowledgebase schema can be designed practically, and code validation can be planned confidently.

  19. Recent Advances in Simulation of Eddy Current Testing of Tubes and Experimental Validations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reboud, C.; Premel, D.; Lesselier, D.

    2007-03-21

    Eddy current testing (ECT) is widely used in the iron and steel industry for the inspection of tubes during manufacturing. A collaboration between CEA and the Vallourec Research Center led to the development of new numerical functionalities dedicated to the simulation of ECT of non-magnetic tubes by external probes. The achievement of experimental validations led us to the integration of these models into the CIVA platform. The modeling approach and validation results are discussed here. A new numerical scheme is also proposed in order to improve the accuracy of the model.

  20. Recent Advances in Simulation of Eddy Current Testing of Tubes and Experimental Validations

    NASA Astrophysics Data System (ADS)

    Reboud, C.; Prémel, D.; Lesselier, D.; Bisiaux, B.

    2007-03-01

    Eddy current testing (ECT) is widely used in the iron and steel industry for the inspection of tubes during manufacturing. A collaboration between CEA and the Vallourec Research Center led to the development of new numerical functionalities dedicated to the simulation of ECT of non-magnetic tubes by external probes. The achievement of experimental validations led us to the integration of these models into the CIVA platform. The modeling approach and validation results are discussed here. A new numerical scheme is also proposed in order to improve the accuracy of the model.

  1. Development, calibration, and validation of performance prediction models for the Texas M-E flexible pavement design system.

    DOT National Transportation Integrated Search

    2010-08-01

    This study was intended to recommend future directions for the development of TxDOT's Mechanistic-Empirical (TexME) design system. For stress predictions, a multi-layer linear elastic system was evaluated and its validity was verified by compar...

  2. Validation of design procedure and performance modeling of a heat and fluid transport field experiment in the unsaturated zone

    NASA Astrophysics Data System (ADS)

    Nir, A.; Doughty, C.; Tsang, C. F.

    Validation methods that were developed in the context of the deterministic concepts of past generations often cannot be directly applied to environmental problems, which may be characterized by limited reproducibility of results and highly complex models. Instead, validation is interpreted here as a series of activities, including both theoretical and experimental tests, designed to enhance our confidence in the capability of a proposed model to describe some aspect of reality. We examine the validation process applied to a project concerned with heat and fluid transport in porous media, in which mathematical modeling, simulation, and results of field experiments are evaluated in order to determine the feasibility of a system for seasonal thermal energy storage in shallow unsaturated soils. Technical details of the field experiments are not included, but appear in previous publications. Validation activities are divided into three stages. The first stage, carried out prior to the field experiments, is concerned with modeling the relevant physical processes, optimization of the heat-exchanger configuration and the shape of the storage volume, and multi-year simulation. Subjects requiring further theoretical and experimental study are identified at this stage. The second stage encompasses the planning and evaluation of the initial field experiment. Simulations are made to determine the experimental time scale and optimal sensor locations. Soil thermal parameters and temperature boundary conditions are estimated using an inverse method. Then results of the experiment are compared with model predictions using different parameter values and modeling approximations. In the third stage, results of an experiment performed under different boundary conditions are compared to predictions made by the models developed in the second stage. Various aspects of this theoretical and experimental field study are described as examples of the verification and validation procedure. There is no attempt to validate a specific model, but several models of increasing complexity are compared with experimental results. The outcome is interpreted as a demonstration of the paradigm proposed by van der Heijde [26], that different constituencies have different objectives for the validation process and that their acceptance criteria therefore differ as well.
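
    The inverse estimation step mentioned above can be illustrated with a toy Python example: assuming a 1-D periodic conduction solution for soil temperature, the thermal diffusivity is recovered from noisy sensor readings by nonlinear least squares. This is a schematic sketch under stated assumptions, not the study's actual inverse procedure.

      import numpy as np
      from scipy.optimize import curve_fit

      OMEGA = 2 * np.pi / 86400.0                    # diurnal forcing frequency, rad/s

      def soil_temp(zt, alpha, t_mean, amp):
          """1-D periodic conduction solution; alpha is thermal diffusivity (m^2/s)."""
          z, t = zt
          d = np.sqrt(2 * alpha / OMEGA)             # damping depth
          return t_mean + amp * np.exp(-z / d) * np.sin(OMEGA * t - z / d)

      rng = np.random.default_rng(2)
      z = np.repeat([0.05, 0.2, 0.5], 48)                    # sensor depths, m
      t = np.tile(np.linspace(0, 2 * 86400, 48), 3)          # two days of readings, s
      obs = soil_temp((z, t), 5e-7, 18.0, 6.0) + rng.normal(0, 0.2, z.size)

      (alpha, t_mean, amp), _ = curve_fit(soil_temp, (z, t), obs, p0=[1e-7, 15.0, 5.0],
                                          bounds=([1e-9, 0.0, 0.0], [1e-5, 40.0, 20.0]))
      print(f"estimated diffusivity: {alpha:.2e} m^2/s")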

  3. Development and testing of the CALDs and CLES+T scales for international nursing students' clinical learning environments.

    PubMed

    Mikkonen, Kristina; Elo, Satu; Miettunen, Jouko; Saarikoski, Mikko; Kääriäinen, Maria

    2017-08-01

    The purpose of this study was to develop and test the psychometric properties of the new Cultural and Linguistic Diversity scale, which is designed to be used with the newly validated Clinical Learning Environment, Supervision and Nurse Teacher scale for assessing international nursing students' clinical learning environments. In various developed countries, clinical placements are known to present challenges in the professional development of international nursing students. A cross-sectional survey. Data were collected from eight Finnish universities of applied sciences offering nursing degree courses taught in English during 2015-2016. All the relevant students (N = 664) were invited and 50% chose to participate. Of the total data submitted by the participants, 28% were used for scale validation. The construct validity of the two scales was tested by exploratory factor analysis, while their validity with respect to convergence and discriminability was assessed using Spearman's correlation. Construct validation of the Clinical Learning Environment, Supervision and Nurse Teacher scale yielded an eight-factor model with 34 items, while validation of the Cultural and Linguistic Diversity scale yielded a five-factor model with 21 items. A new scale was developed to improve evidence-based mentorship of international nursing students in clinical learning environments. The instrument will be useful to educators seeking to identify factors that affect the learning of international students. © 2017 John Wiley & Sons Ltd.
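
    The construct-validation step described above rests on exploratory factor analysis. The sketch below runs a varimax-rotated factor analysis on simulated item responses and reports which items load on which factor; it is an illustrative stand-in, since the study's actual survey data and software are not reproduced here.

      import numpy as np
      from sklearn.decomposition import FactorAnalysis

      rng = np.random.default_rng(3)
      n_students, n_items, n_factors = 332, 21, 5    # sizes loosely echo the abstract
      latent = rng.normal(size=(n_students, n_factors))
      loadings = rng.normal(scale=0.8, size=(n_factors, n_items))
      items = latent @ loadings + rng.normal(scale=0.5, size=(n_students, n_items))

      efa = FactorAnalysis(n_components=n_factors, rotation="varimax").fit(items)
      # assign each item to the factor with the largest absolute rotated loading
      assignment = np.abs(efa.components_).argmax(axis=0)
      for f in range(n_factors):
          print(f"factor {f + 1}: items {np.where(assignment == f)[0] + 1}")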

  4. Creation of a novel simulator for minimally invasive neurosurgery: fusion of 3D printing and special effects.

    PubMed

    Weinstock, Peter; Rehder, Roberta; Prabhu, Sanjay P; Forbes, Peter W; Roussin, Christopher J; Cohen, Alan R

    2017-07-01

    OBJECTIVE Recent advances in optics and miniaturization have enabled the development of a growing number of minimally invasive procedures, yet innovative training methods for the use of these techniques remain lacking. Conventional teaching models, including cadavers and physical trainers as well as virtual reality platforms, are often expensive and ineffective. Newly developed 3D printing technologies can recreate patient-specific anatomy, but the stiffness of the materials limits fidelity to real-life surgical situations. Hollywood special effects techniques can create ultrarealistic features, including lifelike tactile properties, to enhance accuracy and effectiveness of the surgical models. The authors created a highly realistic model of a pediatric patient with hydrocephalus via a unique combination of 3D printing and special effects techniques and validated the use of this model in training neurosurgery fellows and residents to perform endoscopic third ventriculostomy (ETV), an effective minimally invasive method increasingly used in treating hydrocephalus. METHODS A full-scale reproduction of the head of a 14-year-old adolescent patient with hydrocephalus, including external physical details and internal neuroanatomy, was developed via a unique collaboration of neurosurgeons, simulation engineers, and a group of special effects experts. The model contains "plug-and-play" replaceable components for repetitive practice. The appearance of the training model (face validity) and the reproducibility of the ETV training procedure (content validity) were assessed by neurosurgery fellows and residents of different experience levels based on a 14-item Likert-like questionnaire. The usefulness of the training model for evaluating the performance of the trainees at different levels of experience (construct validity) was measured by blinded observers using the Objective Structured Assessment of Technical Skills (OSATS) scale for the performance of ETV. RESULTS A combination of 3D printing technology and casting processes led to the creation of realistic surgical models that include high-fidelity reproductions of the anatomical features of hydrocephalus and allow for the performance of ETV for training purposes. The models reproduced the pulsations of the basilar artery, ventricles, and cerebrospinal fluid (CSF), thus simulating the experience of performing ETV on an actual patient. The results of the 14-item questionnaire showed limited variability among participants' scores, and the neurosurgery fellows and residents gave the models consistently high ratings for face and content validity. The mean score for the content validity questions (4.88) was higher than the mean score for face validity (4.69) (p = 0.03). On construct validity scores, the blinded observers rated performance of fellows significantly higher than that of residents, indicating that the model provided a means to distinguish between novice and expert surgical skills. CONCLUSIONS A plug-and-play lifelike ETV training model was developed through a combination of 3D printing and special effects techniques, providing both anatomical and haptic accuracy. Such simulators offer opportunities to accelerate the development of expertise with respect to new and novel procedures as well as iterate new surgical approaches and innovations, thus allowing novice neurosurgeons to gain valuable experience in surgical techniques without exposing patients to risk of harm.
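
    One analysis step above, comparing mean content-validity scores with mean face-validity scores across raters, can be sketched in a few lines of Python. The abstract does not state which statistical test produced p = 0.03, so a paired Wilcoxon signed-rank test, a common choice for Likert-type ratings, is assumed here, and the panel data are simulated.

      import numpy as np
      from scipy.stats import wilcoxon

      rng = np.random.default_rng(4)
      n_raters = 16                                  # hypothetical panel size
      face = np.clip(rng.normal(4.69, 0.25, n_raters), 1, 5)     # per-rater mean ratings
      content = np.clip(rng.normal(4.88, 0.20, n_raters), 1, 5)

      stat, p = wilcoxon(content, face)              # paired, nonparametric
      print(f"face = {face.mean():.2f}, content = {content.mean():.2f}, p = {p:.3f}")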

  5. Development and validation of light-duty vehicle modal emissions and fuel consumption values for traffic models.

    DOT National Transportation Integrated Search

    1999-03-01

    A methodology for developing modal vehicle emissions and fuel consumption models has been developed by Oak Ridge National Laboratory (ORNL), sponsored by the Federal Highway Administration. These models, in the form of look-up tables for fuel consump...

  6. Southern Regional Center for Lightweight Innovative Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Paul T.

    The Southern Regional Center for Lightweight Innovative Design (SRCLID) has developed an experimentally validated cradle-to-grave modeling and simulation effort to optimize automotive components in order to decrease weight and cost, yet increase performance and safety in crash scenarios. In summary, the three major objectives of this project are accomplished: To develop experimentally validated cradle-to-grave modeling and simulation tools to optimize automotive and truck components for lightweighting materials (aluminum, steel, and Mg alloys and polymer-based composites) with consideration of uncertainty to decrease weight and cost, yet increase the performance and safety in impact scenarios; To develop multiscale computational models that quantify microstructure-property relations by evaluating various length scales, from the atomic through component levels, for each step of the manufacturing process for vehicles; and To develop an integrated K-12 educational program to educate students on lightweighting designs and impact scenarios. In this final report, we divided the content into two parts: the first part contains the development of building blocks for the project, including materials and process models, process-structure-property (PSP) relationships, and experimental validation capabilities; the second part presents the demonstration task for Mg front-end work associated with USAMP projects.

  7. Development and Validation of High Precision Thermal, Mechanical, and Optical Models for the Space Interferometry Mission

    NASA Technical Reports Server (NTRS)

    Lindensmith, Chris A.; Briggs, H. Clark; Beregovski, Yuri; Feria, V. Alfonso; Goullioud, Renaud; Gursel, Yekta; Hahn, Inseob; Kinsella, Gary; Orzewalla, Matthew; Phillips, Charles

    2006-01-01

    SIM PlanetQuest (SIM) is a large optical interferometer for making microarcsecond measurements of the positions of stars and for detecting Earth-sized planets around nearby stars. To achieve this precision, SIM requires stability of optical components to tens of picometers per hour. The combination of SIM's large size (9 meter baseline) and the high stability requirement makes it difficult and costly to measure all aspects of system performance on the ground. To reduce risk and cost, and to allow for a design with fewer intermediate testing stages, the SIM project is developing an integrated thermal, mechanical and optical modeling process that will allow predictions of the system performance to be made at the required high precision. This modeling process uses commercial, off-the-shelf tools and has been validated against experimental results at the precision of the SIM performance requirements. This paper presents a description of the model development, some of the models, and their validation in the Thermo-Opto-Mechanical (TOM3) testbed, which includes full-scale brassboard optical components and the metrology to test them at the SIM performance requirement levels.

  8. Development and validation of rear impact computer simulation model of an adult manual transit wheelchair with a seated occupant.

    PubMed

    Salipur, Zdravko; Bertocci, Gina

    2010-01-01

    It has been shown that ANSI WC19 transit wheelchairs that are crashworthy in frontal impact exhibit catastrophic failures in rear impact and may not be able to provide stable seating support and thus occupant protection for the wheelchair occupant. Thus far only limited sled test and computer simulation data have been available to study rear impact wheelchair safety. Computer modeling can be used as an economic and comprehensive tool to gain critical knowledge regarding wheelchair integrity and occupant safety. This study describes the development and validation of a computer model simulating an adult wheelchair-seated occupant subjected to a rear impact event. The model was developed in MADYMO and validated rigorously using the results of three similar sled tests conducted to specifications provided in the draft ISO/TC 173 standard. Outcomes from the model can provide critical wheelchair loading information to wheelchair and tiedown manufacturers, resulting in safer wheelchair designs for rear impact conditions. © 2009 IPEM. Published by Elsevier Ltd. All rights reserved.

  9. Methodology for Computational Fluid Dynamic Validation for Medical Use: Application to Intracranial Aneurysm.

    PubMed

    Paliwal, Nikhil; Damiano, Robert J; Varble, Nicole A; Tutino, Vincent M; Dou, Zhongwang; Siddiqui, Adnan H; Meng, Hui

    2017-12-01

    Computational fluid dynamics (CFD) is a promising tool to aid in clinical diagnoses of cardiovascular diseases. However, it uses assumptions that simplify the complexities of the real cardiovascular flow. Due to the high stakes in the clinical setting, it is critical to calculate the effect of these assumptions on the CFD simulation results. However, existing CFD validation approaches do not quantify the error in the simulation results due to the CFD solver's modeling assumptions. Instead, they directly compare CFD simulation results against validation data. Thus, to quantify the accuracy of a CFD solver, we developed a validation methodology that calculates the CFD model error (arising from modeling assumptions). Our methodology identifies independent error sources in CFD and validation experiments, and calculates the model error by parsing out other sources of error inherent in simulation and experiments. To demonstrate the method, we simulated the flow field of a patient-specific intracranial aneurysm (IA) in the commercial CFD software STAR-CCM+. Particle image velocimetry (PIV) provided validation datasets for the flow field on two orthogonal planes. The average model error in the STAR-CCM+ solver was 5.63 ± 5.49% along the intersecting validation line of the orthogonal planes. Furthermore, we demonstrated that our validation method is superior to existing validation approaches by applying three representative existing validation techniques to our CFD and experimental dataset, and comparing the validation results. Our validation methodology offers a streamlined workflow to extract the "true" accuracy of a CFD solver.
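
    The idea of parsing out error sources can be sketched schematically: given CFD and PIV velocities sampled along the same validation line, the total discrepancy is computed and assumed experimental and numerical uncertainties are removed in quadrature to bound the model error. This is a simplified illustration of the concept, not the paper's exact formulation, and all numbers below are synthetic.

      import numpy as np

      rng = np.random.default_rng(5)
      n = 50
      u_cfd = np.sin(np.linspace(0, np.pi, n))           # CFD velocity profile (synthetic)
      u_piv = 1.05 * u_cfd + rng.normal(0, 0.01, n)      # PIV data with bias and noise

      total = np.sqrt(np.mean((u_cfd - u_piv) ** 2))     # total RMS discrepancy
      u_exp = 0.01                                       # assumed PIV measurement uncertainty
      u_num = 0.005                                      # assumed discretization error
      model_error = np.sqrt(max(total**2 - u_exp**2 - u_num**2, 0.0))
      print(f"total discrepancy = {total:.4f}, estimated model error = {model_error:.4f}")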

  10. Acute Brain Dysfunction: Development and Validation of a Daily Prediction Model.

    PubMed

    Marra, Annachiara; Pandharipande, Pratik P; Shotwell, Matthew S; Chandrasekhar, Rameela; Girard, Timothy D; Shintani, Ayumi K; Peelen, Linda M; Moons, Karl G M; Dittus, Robert S; Ely, E Wesley; Vasilevskis, Eduard E

    2018-03-24

    The goal of this study was to develop and validate a dynamic risk model to predict daily changes in acute brain dysfunction (ie, delirium and coma), discharge, and mortality in ICU patients. Using data from a multicenter prospective ICU cohort, a daily acute brain dysfunction-prediction model (ABD-pm) was developed by using multinomial logistic regression that estimated 15 transition probabilities (from one of three brain function states [normal, delirious, or comatose] to one of five possible outcomes [normal, delirious, comatose, ICU discharge, or died]) using baseline and daily risk factors. Model discrimination was assessed by using predictive characteristics such as negative predictive value (NPV). Calibration was assessed by plotting empirical vs model-estimated probabilities. Internal validation was performed by using a bootstrap procedure. Data were analyzed from 810 patients (6,711 daily transitions). The ABD-pm included individual risk factors: mental status, age, preexisting cognitive impairment, baseline and daily severity of illness, and daily administration of sedatives. The model yielded very high NPVs for "next day" delirium (NPV: 0.823), coma (NPV: 0.892), normal cognitive state (NPV: 0.875), ICU discharge (NPV: 0.905), and mortality (NPV: 0.981). The model demonstrated outstanding calibration when predicting the total number of patients expected to be in any given state across predicted risk. We developed and internally validated a dynamic risk model that predicts the daily risk for one of three cognitive states, ICU discharge, or mortality. The ABD-pm may be useful for predicting the proportion of patients for each outcome state across entire ICU populations to guide quality, safety, and care delivery activities. Copyright © 2018 American College of Chest Physicians. Published by Elsevier Inc. All rights reserved.
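
    The transition structure described above, from one of three brain-function states to one of five next-day outcomes, maps naturally onto a multinomial logistic regression. The Python sketch below fits such a model on simulated daily transitions and computes a negative predictive value for one outcome; it is a schematic stand-in for the ABD-pm, whose real covariates and coefficients are not reproduced here.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(6)
      n = 6711                                       # daily transitions, as in the study
      state = rng.integers(0, 3, n)                  # 0 normal, 1 delirious, 2 comatose
      age = rng.normal(60, 15, n)
      sedative = rng.integers(0, 2, n)               # daily sedative administration
      X = np.column_stack([state, age, sedative])
      # next-day outcome: 0 normal, 1 delirious, 2 comatose, 3 discharged, 4 died
      outcome = rng.choice(5, size=n, p=[0.35, 0.25, 0.15, 0.20, 0.05])

      clf = LogisticRegression(max_iter=1000).fit(X, outcome)
      probs = clf.predict_proba([[1, 70.0, 1]])[0]   # delirious 70-year-old on sedatives
      print(dict(zip(["normal", "delirious", "comatose", "discharge", "death"],
                     probs.round(3))))

      pred = clf.predict(X)
      npv_death = np.mean(outcome[pred != 4] != 4)   # NPV for "next-day death"
      print(f"NPV (death): {npv_death:.3f}")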

  11. Building and validating a prediction model for paediatric type 1 diabetes risk using next generation targeted sequencing of class II HLA genes.

    PubMed

    Zhao, Lue Ping; Carlsson, Annelie; Larsson, Helena Elding; Forsander, Gun; Ivarsson, Sten A; Kockum, Ingrid; Ludvigsson, Johnny; Marcus, Claude; Persson, Martina; Samuelsson, Ulf; Örtqvist, Eva; Pyo, Chul-Woo; Bolouri, Hamid; Zhao, Michael; Nelson, Wyatt C; Geraghty, Daniel E; Lernmark, Åke

    2017-11-01

    It is of interest to predict possible lifetime risk of type 1 diabetes (T1D) in young children for recruiting high-risk subjects into longitudinal studies of effective prevention strategies. Utilizing a case-control study in Sweden, we applied a recently developed next generation targeted sequencing technology to genotype class II genes and applied an object-oriented regression to build and validate a prediction model for T1D. In the training set, estimated risk scores were significantly different between patients and controls (P = 8.12 × 10⁻⁹²), and the area under the curve (AUC) from the receiver operating characteristic (ROC) analysis was 0.917. Using the validation data set, we validated the result with an AUC of 0.886. Combining both training and validation data resulted in a predictive model with an AUC of 0.903. Further, we performed a "biological validation" by correlating risk scores with 6 islet autoantibodies, and found that the risk score was significantly correlated with IA-2A (Z-score = 3.628, P < 0.001). When applying this prediction model to the Swedish population, where the lifetime T1D risk ranges from 0.5% to 2%, we anticipate identifying approximately 20 000 high-risk subjects after testing all newborns, and this calculation would identify approximately 80% of all patients expected to develop T1D in their lifetime. Through both empirical and biological validation, we have established a prediction model for estimating lifetime T1D risk, using class II HLA. This prediction model should prove useful for future investigations to identify high-risk subjects for prevention research in high-risk populations. Copyright © 2017 John Wiley & Sons, Ltd.
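
    A compact sketch of this kind of genotype-based risk scoring: fit a logistic model on allele-count features, then report training and validation AUCs. The object-oriented regression and the actual class II HLA variants are specific to the paper, so plain logistic regression on random stand-in genotypes is assumed here purely for illustration.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(7)
      n, p = 2000, 30
      G = rng.integers(0, 3, size=(n, p)).astype(float)   # 0/1/2 allele-count coding
      lin = G @ rng.normal(0, 0.4, p)
      y = rng.binomial(1, 1 / (1 + np.exp(-(lin - lin.mean()))))  # case/control labels

      G_tr, G_va, y_tr, y_va = train_test_split(G, y, test_size=0.3, random_state=0)
      score = LogisticRegression(max_iter=2000).fit(G_tr, y_tr)
      print("training AUC:  ", round(roc_auc_score(y_tr, score.predict_proba(G_tr)[:, 1]), 3))
      print("validation AUC:", round(roc_auc_score(y_va, score.predict_proba(G_va)[:, 1]), 3))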

  12. Validating Pseudo-dynamic Source Models against Observed Ground Motion Data at the SCEC Broadband Platform, Ver 16.5

    NASA Astrophysics Data System (ADS)

    Song, S. G.

    2016-12-01

    Simulation-based ground motion prediction approaches have several benefits over empirical ground motion prediction equations (GMPEs). For instance, full 3-component waveforms can be produced, and site-specific hazard analysis is also possible. However, it is important to validate these approaches against observed ground motion data to confirm their efficiency and validity before practical use. There have been community efforts for these purposes, supported by the Broadband Platform (BBP) project at the Southern California Earthquake Center (SCEC). In simulation-based ground motion prediction, preparing a plausible range of scenario rupture models is a critical element. I developed a pseudo-dynamic source model for Mw 6.5-7.0 by analyzing a number of dynamic rupture models, based on 1-point and 2-point statistics of earthquake source parameters (Song et al. 2014; Song 2016). In this study, the developed pseudo-dynamic source models were tested against observed ground motion data at the SCEC BBP, Ver 16.5. The validation was performed in two stages. In the first stage, simulated ground motions were validated against observed ground motion data for past events such as the 1992 Landers and 1994 Northridge, California, earthquakes. In the second stage, they were validated against the latest version of the empirical GMPEs, i.e., NGA-West2. The validation results show that the simulated ground motions produce ground motion intensities compatible with observed ground motion data at both stages. The compatibility of the pseudo-dynamic source models with the omega-square spectral decay and the standard deviation of the simulated ground motion intensities are also discussed in the study.

  13. Developing a short measure of organizational justice: a multisample health professionals study.

    PubMed

    Elovainio, Marko; Heponiemi, Tarja; Kuusio, Hannamaria; Sinervo, Timo; Hintsa, Taina; Aalto, Anna-Mari

    2010-11-01

    To develop and test the validity of a short version of the original questionnaire measuring organizational justice. The study samples comprised working physicians (n = 2792) and registered nurses (n = 2137) from the Finnish Health Professionals study. Structural equation modelling was applied to test structural validity using the justice scales. Furthermore, criterion validity was explored with well-being (sleeping problems) and health indicators (psychological distress/self-rated health). The short version of the organizational justice questionnaire (eight items) provides satisfactory psychometric properties (internal consistency and a good model fit to the data). All scales were associated with an increased risk of sleeping problems and psychological distress, indicating satisfactory criterion validity. This short version of the organizational justice questionnaire provides a useful tool for epidemiological studies focused on the adverse health effects of the work environment.
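
    Internal consistency of the kind reported above is usually summarized with Cronbach's alpha, which is simple enough to sketch directly. The snippet below computes alpha for an eight-item scale from simulated Likert responses; the formula is standard, while the data are placeholders.

      import numpy as np

      def cronbach_alpha(items):
          """items: respondents x items matrix of scale scores."""
          k = items.shape[1]
          item_var_sum = items.var(axis=0, ddof=1).sum()
          total_var = items.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1 - item_var_sum / total_var)

      rng = np.random.default_rng(8)
      trait = rng.normal(size=(500, 1))              # latent justice perception
      responses = np.clip(np.rint(3 + trait + rng.normal(0, 0.8, (500, 8))), 1, 5)
      print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")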

  14. Development and validation of a predictive model for excessive postpartum blood loss: A retrospective, cohort study.

    PubMed

    Rubio-Álvarez, Ana; Molina-Alarcón, Milagros; Arias-Arias, Ángel; Hernández-Martínez, Antonio

    2018-03-01

    postpartum haemorrhage is one of the leading causes of maternal morbidity and mortality worldwide. Despite the use of uterotonic agents as a preventive measure, it remains a challenge to identify those women who are at increased risk of postpartum bleeding. to develop and to validate a predictive model to assess the risk of excessive bleeding in women with vaginal birth. retrospective cohort study. "Mancha-Centro Hospital" (Spain). the elaboration of the predictive model was based on a derivation cohort consisting of 2336 women between 2009 and 2011. For validation purposes, a prospective cohort of 953 women between 2013 and 2014 was employed. Women with antenatal fetal demise, multiple pregnancies and gestations under 35 weeks were excluded. METHODS: we used a multivariate analysis with binary logistic regression, Ridge Regression and areas under the Receiver Operating Characteristic curves to determine the predictive ability of the proposed model. there were 197 (8.43%) women with excessive bleeding in the derivation cohort and 63 (6.61%) women in the validation cohort. Predictive factors in the final model were: maternal age, primiparity, duration of the first and second stages of labour, neonatal birth weight and antepartum haemoglobin levels. Accordingly, the predictive ability of this model in the derivation cohort was 0.90 (95% CI: 0.85-0.93), while it remained 0.83 (95% CI: 0.74-0.92) in the validation cohort. this predictive model proved to have excellent predictive ability in the derivation cohort, and its validation in a later population likewise showed good predictive ability. This model can be employed to identify women with a higher risk of postpartum haemorrhage. Copyright © 2017 Elsevier Ltd. All rights reserved.
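
    The modelling approach named above, a ridge-penalized logistic regression scored by the area under the ROC curve, can be sketched briefly in Python. The six reported predictors are simulated with loosely plausible distributions (all values below are assumptions, not the study's data), so the apparent AUC on this random data will hover near 0.5.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(9)
      n = 2336                                       # derivation-cohort size from the abstract
      X = np.column_stack([
          rng.normal(30, 5, n),                      # maternal age (years)
          rng.integers(0, 2, n),                     # primiparity (0/1)
          rng.normal(6, 3, n),                       # first-stage duration (h)
          rng.normal(1, 0.5, n),                     # second-stage duration (h)
          rng.normal(3300, 450, n),                  # neonatal birth weight (g)
          rng.normal(12, 1.2, n),                    # antepartum haemoglobin (g/dL)
      ])
      y = rng.binomial(1, 0.084, n)                  # excessive bleeding at the reported rate

      model = make_pipeline(StandardScaler(),
                            LogisticRegression(penalty="l2", C=1.0, max_iter=1000)).fit(X, y)
      print("apparent AUC:", round(roc_auc_score(y, model.predict_proba(X)[:, 1]), 3))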

  15. Prototype of NASA's Global Precipitation Measurement Mission Ground Validation System

    NASA Technical Reports Server (NTRS)

    Schwaller, M. R.; Morris, K. R.; Petersen, W. A.

    2007-01-01

    NASA is developing a Ground Validation System (GVS) as one of its contributions to the Global Precipitation Measurement (GPM) mission. The GPM GVS provides an independent means for evaluation, diagnosis, and ultimately improvement of GPM spaceborne measurements and precipitation products. NASA's GPM GVS consists of three elements: field campaigns/physical validation, direct network validation, and modeling and simulation. The GVS prototype of direct network validation compares Tropical Rainfall Measuring Mission (TRMM) satellite-borne radar data to similar measurements from the U.S. national network of operational weather radars. A prototype field campaign has also been conducted; modeling and simulation prototypes are under consideration.

  16. Multiscale and Multiphysics Modeling of Additive Manufacturing of Advanced Materials

    NASA Technical Reports Server (NTRS)

    Liou, Frank; Newkirk, Joseph; Fan, Zhiqiang; Sparks, Todd; Chen, Xueyang; Fletcher, Kenneth; Zhang, Jingwei; Zhang, Yunlu; Kumar, Kannan Suresh; Karnati, Sreekar

    2015-01-01

    The objective of this proposed project is to research and develop a prediction tool for advanced additive manufacturing (AAM) processes for advanced materials, and to develop experimental methods to provide fundamental properties and establish validation data. Aircraft structures and engines demand materials that are stronger, useable at much higher temperatures, provide less acoustic transmission, and enable more aeroelastic tailoring than those currently used. Significant improvements in properties can only be achieved by processing the materials under nonequilibrium conditions, such as AAM processes. AAM processes encompass a class of processes that use a focused heat source to create a melt pool on a substrate. Examples include Electron Beam Freeform Fabrication and Direct Metal Deposition. These types of additive processes enable fabrication of parts directly from CAD drawings. To achieve the desired material properties and geometries of the final structure, assessing the impact of process parameters and predicting optimized conditions with numerical modeling as an effective prediction tool is necessary. The processing targets are multiple and span different spatial scales, and the associated physical phenomena are multiphysics and multiscale in nature. In this project, the research work modeled AAM processes with a multiscale and multiphysics approach. A macroscale model was developed to investigate the residual stresses and distortion in AAM processes. A sequentially coupled, thermomechanical, finite element model was developed and validated experimentally. The results showed the temperature distribution, residual stress, and deformation within the formed deposits and substrates. A mesoscale model was developed to include heat transfer, phase change with a mushy zone, incompressible free surface flow, solute redistribution, and surface tension. Because of the excessive computing time needed, a parallel computing approach was also tested. In addition, after investigating various methods, a Smoothed Particle Hydrodynamics (SPH) model was developed to model the wire feeding process. Its computational efficiency and simple architecture make it more robust and flexible than other models. More research on material properties may be needed to realistically model the AAM processes. A microscale model was developed to investigate heterogeneous nucleation, dendritic grain growth, epitaxial growth of columnar grains, columnar-to-equiaxed transition, grain transport in melt, and other properties. The orientations of the columnar grains were almost perpendicular to the direction of laser motion. Compared to similar studies in the literature, the multiple-grain morphology modeling results are of the same order of magnitude as the optical morphologies observed in experiments. Experimental work was conducted to validate the different models. An infrared camera was incorporated as a process monitoring and validation tool to identify the solidus and mushy zones during deposition. The images were successfully processed to identify these regions. This research project has investigated the multiscale and multiphysics nature of the complex AAM processes, leading to an advanced understanding of these processes. The project has also developed several modeling tools and experimental validation tools that will be very critical to the future of AAM process qualification and certification.

  17. Model Validation Status Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    E.L. Hardin

    The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information, which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M&O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural and engineered barriers, plus the TSPA model itself. A description of the model areas is provided in Section 3, and the documents reviewed are described in Section 4. The responsible manager for the Model Validation Status Review was the Chief Science Officer (CSO) for Bechtel-SAIC Co. (BSC). The team lead was assigned by the CSO. A total of 32 technical specialists were engaged to evaluate model validation status in the 21 model areas. The technical specialists were generally independent of the work reviewed, meeting the technical qualifications discussed in Section 5.

  18. Recent modelling advances for ultrasonic TOFD inspections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Darmon, Michel; Ferrand, Adrien; Dorval, Vincent

    The ultrasonic TOFD (Time of Flight Diffraction) technique is commonly used to detect and characterize disoriented cracks using their edge diffraction echoes. An overview of the models integrated in the CIVA software platform and devoted to TOFD simulation is presented. CIVA allows prediction of diffraction echoes from complex 3D flaws using a PTD (Physical Theory of Diffraction) based model. Other dedicated developments have been added to simulate lateral waves in 3D on planar entry surfaces and in 2D on irregular surfaces by a ray approach. Calibration echoes from Side Drilled Holes (SDHs), specimen echoes and shadowing effects from flaws can also be modelled. Some examples of theoretical validation of the models are presented. In addition, experimental validations have been performed both on planar blocks containing calibration holes and various notches, and on a specimen with an irregular entry surface; these allow conclusions to be drawn on the validity of all the developed models.

  19. A diagnostic model for chronic hypersensitivity pneumonitis

    PubMed Central

    Johannson, Kerri A; Elicker, Brett M; Vittinghoff, Eric; Assayag, Deborah; de Boer, Kaïssa; Golden, Jeffrey A; Jones, Kirk D; King, Talmadge E; Koth, Laura L; Lee, Joyce S; Ley, Brett; Wolters, Paul J; Collard, Harold R

    2017-01-01

    The objective of this study was to develop a diagnostic model that allows for a highly specific diagnosis of chronic hypersensitivity pneumonitis using clinical and radiological variables alone. Chronic hypersensitivity pneumonitis and other interstitial lung disease cases were retrospectively identified from a longitudinal database. High-resolution CT scans were blindly scored for radiographic features (eg, ground-glass opacity, mosaic perfusion) as well as the radiologist's diagnostic impression. Candidate models were developed and then evaluated using clinical and radiographic variables, and assessed by the cross-validated C-statistic. Forty-four chronic hypersensitivity pneumonitis and eighty other interstitial lung disease cases were identified. Two models were selected based on their statistical performance, clinical applicability and face validity. Key model variables included age, down feather and/or bird exposure, radiographic presence of ground-glass opacity and mosaic perfusion, and moderate or high confidence in the radiographic impression of chronic hypersensitivity pneumonitis. Models were internally validated with good performance, and cut-off values were established that resulted in high specificity for a diagnosis of chronic hypersensitivity pneumonitis. PMID:27245779

  20. Validation of Solar Sail Simulations for the NASA Solar Sail Demonstration Project

    NASA Technical Reports Server (NTRS)

    Braafladt, Alexander C.; Artusio-Glimpse, Alexandra B.; Heaton, Andrew F.

    2014-01-01

    NASA's Solar Sail Demonstration project partner L'Garde is currently assembling a flight-like sail assembly for a series of ground demonstration tests beginning in 2015. For future missions of this sail that might validate solar sail technology, it is necessary to have an accurate sail thrust model. One of the primary requirements of a proposed potential technology validation mission will be to demonstrate solar sail thrust over a set time period, which for this project is nominally 30 days. This requirement would be met by comparing a L'Garde-developed trajectory simulation to the as-flown trajectory. The current sail simulation baseline for L'Garde is a Systems Tool Kit (STK) plug-in that includes a custom-designed model of the L'Garde sail. The STK simulation has been verified for a flat plate model by comparing it to the NASA-developed Solar Sail Spaceflight Simulation Software (S5). S5 matched STK with a high degree of accuracy, and the results of the validation indicate that the L'Garde STK model is accurate enough to meet the potential future mission requirements. Additionally, since the L'Garde sail deviates considerably from a flat plate, a force model for a non-flat sail provided by L'Garde was also tested and compared to a flat plate model in S5. This result will be used in the future as a basis of comparison to the non-flat sail model being developed for STK.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rainer, Leo I.; Hoeschele, Marc A.; Apte, Michael G.

    This report addresses the results of detailed monitoring completed under Program Element 6 of Lawrence Berkeley National Laboratory's High Performance Commercial Building Systems (HPCBS) PIER program. The purpose of the Energy Simulations and Projected State-Wide Energy Savings project is to develop reasonable energy performance and cost models for high performance relocatable classrooms (RCs) across California climates. A key objective of the energy monitoring was to validate DOE2 simulations for comparison to initial DOE2 performance projections. The validated DOE2 model was then used to develop statewide savings projections by modeling base case and high performance RC operation in the 16 California climate zones. The primary objective of this phase of work was to utilize detailed field monitoring data to modify DOE2 inputs and generate performance projections based on a validated simulation model. Additional objectives include the following: (1) Obtain comparative performance data on base case and high performance HVAC systems to determine how they are operated, how they perform, and how the occupants respond to the advanced systems. This was accomplished by installing both HVAC systems side-by-side (i.e., one per module of a standard two module, 24 ft by 40 ft RC) on the study RCs and switching HVAC operating modes on a weekly basis. (2) Develop projected statewide energy and demand impacts based on the validated DOE2 model. (3) Develop cost effectiveness projections for the high performance HVAC system in the 16 California climate zones.

  2. Sorbent, Sublimation, and Icing Modeling Methods: Experimental Validation and Application to an Integrated MTSA Subassembly Thermal Model

    NASA Technical Reports Server (NTRS)

    Bower, Chad; Padilla, Sebastian; Iacomini, Christie; Paul, Heather L.

    2010-01-01

    This paper details the validation of modeling methods for the three core components of a Metabolic heat regenerated Temperature Swing Adsorption (MTSA) subassembly, developed for use in a Portable Life Support System (PLSS). The first core component in the subassembly is a sorbent bed, used to capture and reject metabolically produced carbon dioxide (CO2). The sorbent bed performance can be augmented with a temperature swing driven by a liquid CO2 (LCO2) sublimation heat exchanger (SHX) for cooling the sorbent bed, and a condensing, icing heat exchanger (CIHX) for warming the sorbent bed. As part of the overall MTSA effort, scaled design validation test articles for each of these three components have been independently tested in laboratory conditions. Previously described modeling methodologies developed for implementation in Thermal Desktop and SINDA/FLUINT are reviewed and updated, their application in test article models is outlined, and the results of those model correlations are relayed. An assessment is given of the applicability of each modeling methodology to the challenge of simulating the response of the test articles, and of their extensibility to a full-scale integrated subassembly model. The independently verified and validated modeling methods are applied to the development of a MTSA subassembly prototype model, and predictions of the subassembly performance are given. These models and modeling methodologies capture simulation of several challenging and novel physical phenomena in the Thermal Desktop and SINDA/FLUINT software suite. Novel methodologies include CO2 adsorption front tracking and associated thermal response in the sorbent bed, heat transfer associated with sublimation of entrained solid CO2 in the SHX, and water mass transfer in the form of ice as low as 210 K in the CIHX.

  3. Bayesian cross-validation for model evaluation and selection, with application to the North American Breeding Bird Survey

    USGS Publications Warehouse

    Link, William; Sauer, John R.

    2016-01-01

    The analysis of ecological data has changed in two important ways over the last 15 years. The development and easy availability of Bayesian computational methods has allowed and encouraged the fitting of complex hierarchical models. At the same time, there has been increasing emphasis on acknowledging and accounting for model uncertainty. Unfortunately, the ability to fit complex models has outstripped the development of tools for model selection and model evaluation: familiar model selection tools such as Akaike's information criterion and the deviance information criterion are widely known to be inadequate for hierarchical models. In addition, little attention has been paid to the evaluation of model adequacy in the context of hierarchical modeling, i.e., to the evaluation of fit for a single model. In this paper, we describe Bayesian cross-validation, which provides tools for model selection and evaluation. We describe the Bayesian predictive information criterion (BPIC) and a Bayesian approximation to the BPIC known as the Watanabe-Akaike information criterion (WAIC). We illustrate the use of these tools for model selection, and the use of Bayesian cross-validation as a tool for model evaluation, using three large data sets from the North American Breeding Bird Survey.
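
    WAIC is straightforward to compute once a fitted hierarchical model yields pointwise log-likelihoods at posterior draws. The sketch below applies the standard formula (log pointwise predictive density minus an effective-parameter penalty) to simulated draws; in practice the log_lik matrix would come from an MCMC fit rather than a random number generator.

      import numpy as np
      from scipy.special import logsumexp

      rng = np.random.default_rng(10)
      S, n = 4000, 200                               # posterior draws x observations
      log_lik = rng.normal(-1.2, 0.3, size=(S, n))   # stand-in for MCMC output

      lppd = np.sum(logsumexp(log_lik, axis=0) - np.log(S))   # log pointwise pred. density
      p_waic = np.sum(np.var(log_lik, axis=0, ddof=1))        # effective number of parameters
      waic = -2 * (lppd - p_waic)
      print(f"lppd = {lppd:.1f}, p_waic = {p_waic:.1f}, WAIC = {waic:.1f}")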

  4. A Diagnostic Model for Impending Death in Cancer Patients: Preliminary Report

    PubMed Central

    Hui, David; Hess, Kenneth; dos Santos, Renata; Chisholm, Gary; Bruera, Eduardo

    2015-01-01

    Background We recently identified several highly specific bedside physical signs associated with impending death within 3 days among patients with advanced cancer. In this study, we developed and assessed a diagnostic model for impending death based on these physical signs. Methods We systematically documented 62 physical signs every 12 hours from admission to death or discharge in 357 patients with advanced cancer admitted to acute palliative care units (APCUs) at two tertiary care cancer centers. We used recursive partitioning analysis (RPA) to develop a prediction model for impending death in 3 days using admission data. We validated the model with 5 iterations of 10-fold cross-validation, and also applied the model to APCU days 2/3/4/5/6. Results Among 322/357 (90%) patients with complete data for all signs, the 3-day mortality was 24% on admission. The final model was based on 2 variables (palliative performance scale [PPS] and drooping of nasolabial fold) and had 4 terminal leaves: PPS ≤20% with drooping of nasolabial fold present, PPS ≤20% with drooping of nasolabial fold absent, PPS 30-60%, and PPS ≥70%, with 3-day mortality of 94%, 42%, 16% and 3%, respectively. The diagnostic accuracy was 81% for the original tree, 80% for cross-validation, and 79%-84% for subsequent APCU days. Conclusion(s) We developed a diagnostic model for impending death within 3 days based on 2 objective bedside physical signs. This model was applicable to both APCU admission and subsequent days. Upon further external validation, this model may help clinicians to formulate the diagnosis of impending death. PMID:26218612
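
    A recursive-partitioning model of this shape is easy to sketch: a shallow decision tree on the two predictors, scored with five repeats of 10-fold cross-validation as in the study. The patient data below are simulated so that the leaf mortalities roughly echo the reported ones; this is an illustrative stand-in, not the published tree.

      import numpy as np
      from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
      from sklearn.tree import DecisionTreeClassifier

      rng = np.random.default_rng(11)
      n = 322
      pps = rng.choice([10, 20, 30, 40, 50, 60, 70, 80], n)  # palliative performance scale
      droop = rng.integers(0, 2, n)                          # nasolabial fold drooping (0/1)
      p_death = np.where(pps <= 20, np.where(droop == 1, 0.94, 0.42),
                         np.where(pps <= 60, 0.16, 0.03))
      died_3d = rng.binomial(1, p_death)                     # 3-day mortality outcome

      X = np.column_stack([pps, droop])
      tree = DecisionTreeClassifier(max_depth=2, random_state=0)
      cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=5, random_state=0)
      acc = cross_val_score(tree, X, died_3d, cv=cv, scoring="accuracy")
      print(f"cross-validated accuracy: {acc.mean():.2f}")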

  5. Model development and experimental validation of capnophilic lactic fermentation and hydrogen synthesis by Thermotoga neapolitana.

    PubMed

    Pradhan, Nirakar; Dipasquale, Laura; d'Ippolito, Giuliana; Fontana, Angelo; Panico, Antonio; Pirozzi, Francesco; Lens, Piet N L; Esposito, Giovanni

    2016-08-01

    The aim of the present study was to develop a kinetic model for a recently proposed unique and novel metabolic process called the capnophilic (CO2-requiring) lactic fermentation (CLF) pathway in Thermotoga neapolitana. The model was based on Monod kinetics, and the mathematical expressions were developed to enable the simulation of biomass growth, substrate consumption and product formation. The calibrated kinetic parameters, namely the maximum specific uptake rate (k), semi-saturation constant (kS), biomass yield coefficient (Y) and endogenous decay rate (kd), were 1.30 h⁻¹, 1.42 g/L, 0.1195 and 0.0205 h⁻¹, respectively. A high correlation (>0.98) was obtained between the experimental data and model predictions for both the model validation and cross-validation processes. An increase in lactate production in the range of 40-80% was obtained through the CLF pathway compared to the classic dark fermentation model. The proposed kinetic model is the first mechanistically based model for the CLF pathway. This model provides useful information to improve our understanding of how acetate and CO2 are recycled by Thermotoga neapolitana to produce lactate without compromising the overall hydrogen yield. Copyright © 2016 Elsevier Ltd. All rights reserved.
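
    A Monod-type model with the calibrated constants above can be integrated in a few lines of SciPy. The sketch below tracks only biomass and substrate (the full CLF model also follows lactate, acetate, H2 and CO2), and the coupling of growth to uptake via the yield coefficient is an assumed, conventional form rather than the paper's exact equations.

      import numpy as np
      from scipy.integrate import solve_ivp

      K, KS, Y, KD = 1.30, 1.42, 0.1195, 0.0205      # h^-1, g/L, -, h^-1 (from the abstract)

      def rates(t, y):
          biomass, substrate = y                     # g/L each
          s = max(substrate, 0.0)                    # guard against tiny negative overshoot
          uptake = K * s / (KS + s)                  # Monod specific uptake rate
          d_biomass = (Y * uptake - KD) * biomass    # growth from uptake, minus decay
          d_substrate = -uptake * biomass
          return [d_biomass, d_substrate]

      sol = solve_ivp(rates, (0.0, 48.0), [0.05, 10.0], dense_output=True)
      for ti in np.linspace(0, 48, 7):
          x, s = sol.sol(ti)
          print(f"t = {ti:4.0f} h   biomass = {x:6.3f} g/L   substrate = {s:6.3f} g/L")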

  6. Prognostics of Power Electronics, Methods and Validation Experiments

    NASA Technical Reports Server (NTRS)

    Kulkarni, Chetan S.; Celaya, Jose R.; Biswas, Gautam; Goebel, Kai

    2012-01-01

    Failure of electronic devices is a concern for future electric aircraft that will see an increase of electronics to drive and control safety-critical equipment throughout the aircraft. As a result, investigation of precursors to failure in electronics and prediction of the remaining life of electronic components is of key importance. DC-DC power converters are power electronics systems employed typically as sourcing elements for avionics equipment. Current research efforts in prognostics for these power systems focus on the identification of failure mechanisms and the development of accelerated aging methodologies and systems to accelerate the aging process of test devices, while continuously measuring key electrical and thermal parameters. Preliminary model-based prognostics algorithms have been developed making use of empirical degradation models and physics-inspired degradation models, with focus on key components like electrolytic capacitors and power MOSFETs (metal-oxide-semiconductor field-effect transistors). This paper presents current results on the development of validation methods for prognostics algorithms for power electrolytic capacitors, particularly the use of accelerated aging systems for algorithm validation. Validation of prognostics algorithms presents difficulties in practice due to the lack of run-to-failure experiments in deployed systems. By using accelerated experiments, we circumvent this problem in order to define initial validation activities.
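
    The flavor of an empirical degradation model can be shown with a short sketch: fit an exponential capacitance-loss trend to accelerated-aging measurements, then extrapolate to a failure threshold to estimate remaining useful life. The model form, threshold (20% capacitance loss) and data are assumptions for illustration, not the paper's algorithm.

      import numpy as np
      from scipy.optimize import curve_fit

      def cap_trend(t, c0, lam):
          return c0 * np.exp(-lam * t)               # empirical degradation trend

      rng = np.random.default_rng(12)
      t_obs = np.linspace(0, 120, 25)                # aging time, h
      cap = cap_trend(t_obs, 1.0, 0.0015) + rng.normal(0, 0.004, t_obs.size)

      (c0, lam), _ = curve_fit(cap_trend, t_obs, cap, p0=[1.0, 1e-3])
      t_fail = np.log(c0 / 0.8) / lam                # time at 80% of nominal capacitance
      print(f"estimated end of life: {t_fail:.0f} h, RUL from now: {t_fail - t_obs[-1]:.0f} h")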

  7. Model-based clinical dose optimization for phenobarbital in neonates: An illustration of the importance of data sharing and external validation.

    PubMed

    Völler, Swantje; Flint, Robert B; Stolk, Leo M; Degraeuwe, Pieter L J; Simons, Sinno H P; Pokorna, Paula; Burger, David M; de Groot, Ronald; Tibboel, Dick; Knibbe, Catherijne A J

    2017-11-15

    Particularly in the pediatric clinical pharmacology field, data sharing offers the possibility of making the most of all available data. In this study, we utilize previously collected therapeutic drug monitoring (TDM) data of term and preterm newborns to develop a population pharmacokinetic model for phenobarbital. We externally validate the model using prospective phenobarbital data from an ongoing pharmacokinetic study in preterm neonates. TDM data from 53 neonates (gestational age (GA): 37 (24-42) weeks; bodyweight: 2.7 (0.45-4.5) kg; postnatal age (PNA): 4.5 (0-22) days) contained information on dosage histories, concentrations and covariates (including birth weight, actual weight, PNA, postmenstrual age, GA, sex, liver and kidney function, and APGAR score). Model development was carried out using NONMEM® 7.3. After assessment of model fit, the model was validated using data from 17 neonates included in the DINO (Drug dosage Improvement in NeOnates) study. Modelling of 229 plasma concentrations, ranging from 3.2 to 75.2 mg/L, resulted in a one-compartment model for phenobarbital. Clearance (CL) and volume of distribution (Vd) for a child with a birthweight of 2.6 kg at PNA day 4.5 were 0.0091 L/h (9%) and 2.38 L (5%), respectively. Birthweight and PNA were the best predictors for CL maturation, increasing CL by 36.7% per kg of birthweight and 5.3% per postnatal day, respectively. The best predictor for the increase in Vd was actual bodyweight (0.31 L/kg). External validation showed that the model can adequately predict the pharmacokinetics in a prospective study. Data sharing can help to successfully develop and validate population pharmacokinetic models in neonates. From the results it seems that both PNA and bodyweight are required to guide dosing of phenobarbital in term and preterm neonates. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
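
    The reported one-compartment structure and covariate effects can be turned into a small simulation sketch. The linear covariate forms below (percentage increases applied multiplicatively to CL, a per-kg slope on Vd) are assumptions made for illustration; the actual NONMEM model may parameterize these relationships differently.

      import numpy as np

      def phenobarbital_profile(dose_mg, bw_kg, pna_days, wt_kg, t_h):
          """Concentration-time curve after a single IV loading dose (one compartment)."""
          cl = 0.0091 * (1 + 0.367 * (bw_kg - 2.6)) * (1 + 0.053 * (pna_days - 4.5))  # L/h
          vd = 2.38 + 0.31 * (wt_kg - 2.6)           # 2.38 L at reference, assumed linear slope
          ke = cl / vd                               # first-order elimination rate constant
          return dose_mg / vd * np.exp(-ke * t_h)

      t = np.arange(0, 169, 24.0)                    # one week, daily samples (h)
      conc = phenobarbital_profile(dose_mg=52.0, bw_kg=2.6, pna_days=4.5, wt_kg=2.7, t_h=t)
      for ti, ci in zip(t, conc):
          print(f"t = {ti:5.0f} h   C = {ci:5.1f} mg/L")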

  8. Development and validation of deterioration models for concrete bridge decks - phase 2 : mechanics-based degradation models.

    DOT National Transportation Integrated Search

    2013-06-01

    This report summarizes a research project aimed at developing degradation models for bridge decks in the state of Michigan based on durability mechanics. A probabilistic framework to implement local-level mechanistic-based models for predicting the c...

  9. "La Clave Profesional": Validation of a Vocational Guidance Instrument

    ERIC Educational Resources Information Center

    Mudarra, Maria J.; Lázaro Martínez, Ángel

    2014-01-01

    Introduction: The current study demonstrates the empirical and cultural validity of "La Clave Profesional" (the Spanish adaptation of the Career Key, Jones's test based on Holland's RIASEC model). The process of providing validity evidence also includes a reflection on personal and career development and examines the relationships between RIASEC…

  10. From control to causation: Validating a 'complex systems model' of running-related injury development and prevention.

    PubMed

    Hulme, A; Salmon, P M; Nielsen, R O; Read, G J M; Finch, C F

    2017-11-01

    There is a need for an ecological and complex systems approach to better understand the development and prevention of running-related injury (RRI). In a previous article, we proposed a prototype model of the Australian recreational distance running system based on the Systems Theoretic Accident Model and Processes (STAMP) method. That model included the influence of political, organisational, managerial, and sociocultural determinants alongside individual-level factors in relation to RRI development. The purpose of this study was to validate that prototype model by drawing on the expertise of both systems thinking and distance running experts. This study used a modified Delphi technique involving a series of online surveys (December 2016-March 2017). The initial survey was divided into four sections containing a total of seven questions pertaining to different features of the prototype model. Consensus in opinion about the validity of the prototype model was reached when the number of experts who agreed or disagreed with a survey statement was ≥75% of the total number of respondents. A total of two Delphi rounds was needed to validate the prototype model. Out of a total of 51 experts who were initially contacted, 50.9% (n = 26) completed the first round of the Delphi, and 92.3% (n = 24) of those in the first round participated in the second. Most of the 24 full participants considered themselves to be a running expert (66.7%), and approximately a third indicated their expertise as a systems thinker (33.3%). After the second round, 91.7% of the experts agreed that the prototype model was a valid description of the Australian distance running system. This is the first study to formally examine the development and prevention of RRI from an ecological and complex systems perspective. The validated model of the Australian distance running system facilitates theoretical advancement in terms of identifying practical system-wide opportunities for the implementation of sustainable RRI prevention interventions. This 'big picture' perspective represents the first step required when thinking about the range of contributory causal factors that affect other system elements, as well as runners' behaviours in relation to RRI risk. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Development of a Detailed Volumetric Finite Element Model of the Spine to Simulate Surgical Correction of Spinal Deformities

    PubMed Central

    Driscoll, Mark; Mac-Thiong, Jean-Marc; Labelle, Hubert; Parent, Stefan

    2013-01-01

    A large spectrum of medical devices exists to correct deformities associated with spinal disorders. The development of a detailed volumetric finite element model of the osteoligamentous spine would serve as a valuable tool to assess, compare, and optimize spinal devices. Thus the purpose of the study was to develop and initiate validation of a detailed osteoligamentous finite element model of the spine with simulated correction from spinal instrumentation. A finite element model of the spine from T1 to L5 was developed using properties and geometry from the published literature and patient data. Spinal instrumentation, consisting of segmental translation of a scoliotic spine, was emulated. Postoperative patient data and relevant published data on intervertebral disc stress, screw/vertebra pullout forces, and spinal profiles were used to evaluate the model's validity. Intervertebral disc and vertebral reaction stresses agreed with published in vivo, ex vivo, and in silico values. Screw/vertebra reaction forces agreed with accepted pullout threshold values. Cobb angle measurements of spinal deformity following simulated surgical instrumentation corroborated patient data. This computational biomechanical analysis validated a detailed volumetric spine model. Future studies seek to exploit the model to explore the performance of corrective spinal devices. PMID:23991426

  12. The Development and Validation of a Mental Toughness Scale for Adolescents

    ERIC Educational Resources Information Center

    McGeown, Sarah; St. Clair-Thompson, Helen; Putwain, David W.

    2018-01-01

    The present study examined the validity of a newly developed instrument, the Mental Toughness Scale for Adolescents, which examines the attributes of challenge, commitment, confidence (abilities and interpersonal), and control (life and emotion). The six-factor model was supported using exploratory factor analysis (n = 373) and confirmatory factor…

  13. Design, Development, and Validation of Learning Objects

    ERIC Educational Resources Information Center

    Nugent, Gwen; Soh, Leen-Kiat; Samal, Ashok

    2006-01-01

    A learning object is a small, stand-alone, mediated content resource that can be reused in multiple instructional contexts. In this article, we describe our approach to design, develop, and validate Shareable Content Object Reference Model (SCORM) compliant learning objects for undergraduate computer science education. We discuss the advantages of…

  14. Calibration and Validation of a Finite Element Model of the THOR-K Anthropomorphic Test Device for Aerospace Safety Applications

    NASA Technical Reports Server (NTRS)

    Putnam, J. B.; Unataroiu, C. D.; Somers, J. T.

    2014-01-01

    The THOR anthropomorphic test device (ATD) has been developed and continuously improved by the National Highway Traffic Safety Administration to provide automotive manufacturers an advanced tool that can be used to assess the injury risk of vehicle occupants in crash tests. Recently, a series of modifications was completed to improve the biofidelity of the THOR ATD [1]. The updated THOR Modification Kit (THOR-K) ATD was employed at Wright-Patterson Air Force Base in 22 impact tests in three configurations: vertical, lateral, and spinal [2]. Although a computational finite element (FE) model of the THOR had been previously developed [3], updates to the model were needed to incorporate the recent changes in the modification kit. The main goal of this study was to develop and validate an FE model of the THOR-K ATD. The CAD drawings of the THOR-K ATD were reviewed and FE models were developed for the updated parts. For example, the head-skin geometry was found to change significantly, so its model was re-meshed (Fig. 1a). A protocol was developed to calibrate each component identified as key to the kinematic and kinetic response of the THOR-K head/neck ATD FE model (Fig. 1b). The available ATD tests were divided into two groups: a) calibration tests, where the unknown material parameters of deformable parts (e.g., head skin, pelvis foam) were optimized to match the data, and b) validation tests, where the model response was only compared with test data by calculating a score using the CORrelation and Analysis (CORA) rating system. Finally, the whole ATD model was validated under horizontal-, vertical-, and lateral-loading conditions against data recorded in the Wright-Patterson tests [2]. Overall, the final THOR-K ATD model developed in this study is shown to respond similarly to the ATD in all validation tests. This good performance indicates that the optimization performed during calibration using the CORA score as the objective function is not test specific. Therefore, confidence is provided in the ATD model for use in predicting response in test conditions not performed in this study, such as those observed in spacecraft landing. Comparison studies with ATD and human models may also be performed to contribute to future changes in the THOR ATD design in an effort to improve its biofidelity, which has traditionally been based on post-mortem human subject testing and designer experience.

  15. Real-Time Onboard Global Nonlinear Aerodynamic Modeling from Flight Data

    NASA Technical Reports Server (NTRS)

    Brandon, Jay M.; Morelli, Eugene A.

    2014-01-01

    Flight test and modeling techniques were developed to accurately identify global nonlinear aerodynamic models onboard an aircraft. The techniques were developed and demonstrated during piloted flight testing of an Aermacchi MB-326M Impala jet aircraft. Advanced piloting techniques and nonlinear modeling techniques based on fuzzy logic and multivariate orthogonal function methods were implemented with efficient onboard calculations and flight operations to achieve real-time maneuver monitoring and analysis, and near-real-time global nonlinear aerodynamic modeling and prediction validation testing in flight. Results demonstrated that global nonlinear aerodynamic models for a large portion of the flight envelope were identified rapidly and accurately using piloted flight test maneuvers during a single flight, with the final identified and validated models available before the aircraft landed.
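    The abstract does not detail the estimation algorithm, so the sketch below illustrates the general idea of recursive in-flight parameter identification using ordinary recursive least squares on an invented linear pitching-moment model; the flight work itself used fuzzy logic and multivariate orthogonal function methods.

```python
import numpy as np

class RecursiveLeastSquares:
    """Generic RLS estimator: updates parameter estimates as each new
    measurement arrives, suitable for real-time identification."""
    def __init__(self, n_params, forgetting=1.0):
        self.theta = np.zeros(n_params)      # parameter estimates
        self.P = np.eye(n_params) * 1e3      # estimate covariance
        self.lam = forgetting

    def update(self, x, y):
        x = np.asarray(x, dtype=float)
        gain = self.P @ x / (self.lam + x @ self.P @ x)
        self.theta = self.theta + gain * (y - x @ self.theta)
        self.P = (self.P - np.outer(gain, x @ self.P)) / self.lam
        return self.theta

# Invented model: Cm = 0.05 - 0.8*alpha - 4.0*q, recovered from noisy samples
rng = np.random.default_rng(2)
rls = RecursiveLeastSquares(3)
for _ in range(500):
    alpha, q = rng.uniform(-0.2, 0.4), rng.uniform(-0.1, 0.1)
    cm = 0.05 - 0.8 * alpha - 4.0 * q + rng.normal(0, 0.005)
    theta = rls.update([1.0, alpha, q], cm)
print(np.round(theta, 3))  # approaches [0.05, -0.8, -4.0]
```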

  16. MIZMAS: Modeling the Evolution of Ice Thickness and Floe Size Distributions in the Marginal Ice Zone of the Chukchi and Beaufort Seas

    DTIC Science & Technology

    2014-09-30

    ...through downscaling future projection simulations. APPROACH: To address the scientific objectives, we plan to develop, implement, and validate a...ITD and FSD at the same time. The development of MIZMAS will be based on systematic model parameterization, calibration, and validation, and data

  17. Developing Learning Model P3E to Improve Students’ Critical Thinking Skills of Islamic Senior High School

    NASA Astrophysics Data System (ADS)

    Bahtiar; Rahayu, Y. S.; Wasis

    2018-01-01

    This research aims to produce the P3E learning model to improve students' critical thinking skills. The developed model is named P3E, consisting of 4 (four) stages, namely organization, inquiry, presentation, and evaluation. This development research follows the development stages of Kemp. The design of the wide-scale try-out used a pretest-posttest group design. The wide-scale try-out was conducted in grade X of the 2016/2017 academic year. The analysis of the results of this development research includes three aspects, namely the validity, practicality, and effectiveness of the developed model. The research results showed that: (1) the P3E learning model was valid according to experts, with an average value of 3.7; (2) the syntax of the developed learning model was implemented to 98.09% and 94.39% in the two schools, based on the observers' assessments, showing that the developed model is practical to implement; and (3) the developed model is effective for improving students' critical thinking skills, although the n-gain of the students' critical thinking skills was 0.54, in the moderate category. Based on these results, it can be concluded that the developed P3E learning model is suitable for improving students' critical thinking skills.

  18. Validity of worksheet-based guided inquiry and mind mapping for training students’ creative thinking skills

    NASA Astrophysics Data System (ADS)

    Susanti, L. B.; Poedjiastoeti, S.; Taufikurohmah, T.

    2018-04-01

    The purpose of this study is to explain the validity of the guided inquiry and mind mapping-based worksheet developed in this study. The worksheet implemented the phases of the guided inquiry teaching model in order to train students' creative thinking skills. The creative thinking skills trained in this study included fluency, flexibility, originality, and elaboration. The types of validity used in this study included content and construct validity. This study is development research using the Research and Development (R&D) method. The data were collected using review and validation sheets, with a chemistry lecturer and a teacher as data sources, and were then analyzed descriptively. The results showed that the worksheet is very valid and can be used as a learning medium, with validity percentages ranging from 82.5% to 92.5%.

  19. Development, Validation and Parametric study of a 3-Year-Old Child Head Finite Element Model

    NASA Astrophysics Data System (ADS)

    Cui, Shihai; Chen, Yue; Li, Haiyan; Ruan, ShiJie

    2015-12-01

    Traumatic brain injury caused by falls and traffic accidents is a major cause of death and disability in children. Recently, finite element (FE) head models have been developed to investigate brain injury mechanisms and biomechanical responses. Based on CT data of a healthy 3-year-old child head, an FE head model with detailed anatomical structure was developed. The deep brain structures, such as white matter, gray matter, the cerebral ventricles, and the hippocampus, were created for the first time in this FE model. The FE model was validated by reconstructing child and adult cadaver experiments and comparing the simulation results with the cadaver test data. In addition, the effects of skull stiffness on the dynamic responses of the child head were further investigated. All the simulation results confirmed the good biofidelity of the FE model.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    English, Shawn A.; Briggs, Timothy M.; Nelson, Stacy M.

    Simulations of low velocity impact with a flat cylindrical indenter upon a carbon fiber fabric reinforced polymer laminate are rigorously validated. Comparison of the impact energy absorption between the model and experiment is used as the validation metric. Additionally, non-destructive evaluation, including ultrasonic scans and three-dimensional computed tomography, provides qualitative validation of the models. The simulations include delamination, matrix cracks and fiber breaks. An orthotropic damage and failure constitutive model, capable of predicting progressive damage and failure, is developed in conjunction with the simulations and described. An ensemble of simulations incorporating model parameter uncertainties is used to predict a response distribution, which is then compared to experimental output using appropriate statistical methods. Lastly, the model form errors are exposed and corrected for use in an additional blind validation analysis. The result is a quantifiable confidence in material characterization and model physics when simulating low velocity impact in structures of interest.

  1. Design and Development of Computer-Based E-Learning Teaching Material for Improving Mathematical Understanding Ability and Spatial Sense of Junior High School Students

    NASA Astrophysics Data System (ADS)

    Nurjanah; Dahlan, J. A.; Wibisono, Y.

    2017-02-01

    This paper aims to design and develop computer-based e-learning teaching material for improving the mathematical understanding ability and spatial sense of junior high school students. The particular aims are (1) producing the teaching material design, evaluation model, and instrument to measure the mathematical understanding ability and spatial sense of junior high school students; (2) conducting trials of the computer-based e-learning teaching material model, assessment, and instrument for developing mathematical understanding ability and spatial sense; (3) completing the computer-based e-learning teaching material models and assessments; and (4) delivering the resulting research product, computer-based e-learning teaching materials in the form of an interactive learning disc. The research method used in this study is developmental research, conducted through thought experiments and instruction experiments. The results showed that the teaching materials could be used very well, based on validation by 5 multimedia experts. The face and content validity judgments of the 5 validators were consistent for each test item of mathematical understanding ability and spatial sense. The reliability coefficients of the mathematical understanding ability and spatial sense tests are 0.929 and 0.939, respectively; this reliability is very high, while the validity of both tests meets high and very high criteria.

  2. Watershed Regressions for Pesticides (WARP) for Predicting Annual Maximum and Annual Maximum Moving-Average Concentrations of Atrazine in Streams

    USGS Publications Warehouse

    Stone, Wesley W.; Gilliom, Robert J.; Crawford, Charles G.

    2008-01-01

    Regression models were developed for predicting annual maximum and selected annual maximum moving-average concentrations of atrazine in streams using the Watershed Regressions for Pesticides (WARP) methodology developed by the National Water-Quality Assessment Program (NAWQA) of the U.S. Geological Survey (USGS). The current effort builds on the original WARP models, which were based on the annual mean and selected percentiles of the annual frequency distribution of atrazine concentrations. Estimates of annual maximum and annual maximum moving-average concentrations for selected durations are needed to characterize the levels of atrazine and other pesticides for comparison to specific water-quality benchmarks for evaluation of potential concerns regarding human health or aquatic life. Separate regression models were derived for the annual maximum and annual maximum 21-day, 60-day, and 90-day moving-average concentrations. Development of the regression models used the same explanatory variables, transformations, model development data, model validation data, and regression methods as those used in the original development of WARP. The models accounted for 72 to 75 percent of the variability in the concentration statistics among the 112 sampling sites used for model development. Predicted concentration statistics from the four models were within a factor of 10 of the observed concentration statistics for most of the model development and validation sites. Overall, performance of the models for the development and validation sites supports the application of the WARP models for predicting annual maximum and selected annual maximum moving-average atrazine concentration in streams and provides a framework to interpret the predictions in terms of uncertainty. For streams with inadequate direct measurements of atrazine concentrations, the WARP model predictions for the annual maximum and the annual maximum moving-average atrazine concentrations can be used to characterize the probable levels of atrazine for comparison to specific water-quality benchmarks. Sites with a high probability of exceeding a benchmark for human health or aquatic life can be prioritized for monitoring.
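    The WARP approach of regressing a log-transformed concentration statistic on watershed characteristics, then checking how many sites fall within a factor of 10, can be sketched as follows; the explanatory variables and coefficients here are invented stand-ins, not the published WARP regressors.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Invented stand-ins for watershed characteristics at 112 development sites
X = rng.normal(size=(112, 4))
log10_conc = X @ np.array([0.8, 0.4, -0.3, 0.2]) + rng.normal(0, 0.4, 112)

# WARP-style model: linear regression on the log-transformed statistic
model = LinearRegression().fit(X, log10_conc)
resid = model.predict(X) - log10_conc

# A prediction within a factor of 10 has an absolute log10 residual below 1
print(f"R^2 = {model.score(X, log10_conc):.2f}, "
      f"{np.mean(np.abs(resid) < 1):.0%} of sites within a factor of 10")
```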

  3. Development of Macroscale Models of UO2 Fuel Sintering and Densification using Multiscale Modeling and Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greenquist, Ian; Tonks, Michael

    2016-10-01

    Light water reactor fuel pellets are fabricated using sintering to final densities of 95% or greater. During reactor operation, the porosity remaining in the fuel after fabrication decreases further due to irradiation-assisted densification. While empirical models have been developed to describe this densification process, a mechanistic model is needed as part of the ongoing work by the NEAMS program to develop a more predictive fuel performance code. In this work we will develop a phase field model of sintering of UO2 in the MARMOT code, and validate it by comparing to published sintering data. We will then add the capability to capture irradiation effects into the model, and use it to develop a mechanistic model of densification that will go into the BISON code and add another essential piece to the microstructure-based materials models. The final step will be to add the effects of applied fields, to model field-assisted sintering of UO2. The results of the phase field model will be validated by comparing to data from field-assisted sintering. Tasks over three years: (1) develop a sintering model for UO2 in MARMOT; (2) expand the model to account for irradiation effects; (3) develop a mechanistic macroscale model of densification for BISON.

  4. 75 FR 75532 - Shipping Coordinating Committee; Notice of Committee Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-03

    ...; --Validation of model training courses; --Unlawful practices associated with certificates of competency... Recommendations for entering enclosed spaces aboard ships; --Development of model procedures for executing shipboard emergency measures; --Development of training standards for recovery systems; --Development of...

  5. Development and validation of a set of six adaptable prognosis prediction (SAP) models based on time-series real-world big data analysis for patients with cancer receiving chemotherapy: A multicenter case crossover study

    PubMed Central

    Kanai, Masashi; Okamoto, Kazuya; Yamamoto, Yosuke; Yoshioka, Akira; Hiramoto, Shuji; Nozaki, Akira; Nishikawa, Yoshitaka; Yamaguchi, Daisuke; Tomono, Teruko; Nakatsui, Masahiko; Baba, Mika; Morita, Tatsuya; Matsumoto, Shigemi; Kuroda, Tomohiro; Okuno, Yasushi; Muto, Manabu

    2017-01-01

    Background We aimed to develop an adaptable prognosis prediction model that could be applied at any time point during the treatment course for patients with cancer receiving chemotherapy, by applying time-series real-world big data. Methods Between April 2004 and September 2014, 4,997 patients with cancer who had received systemic chemotherapy were registered in a prospective cohort database at the Kyoto University Hospital. Of these, 2,693 patients with a death record were eligible for inclusion and divided into training (n = 1,341) and test (n = 1,352) cohorts. In total, 3,471,521 laboratory data at 115,738 time points, representing 40 laboratory items [e.g., white blood cell counts and albumin (Alb) levels] that were monitored for 1 year before the death event were applied for constructing prognosis prediction models. All possible prediction models comprising three different items from the 40 laboratory items (40 choose 3 = 9,880) were generated in the training cohort, and the model selection was performed in the test cohort. The fitness of the selected models was externally validated in the validation cohort from three independent settings. Results A prognosis prediction model utilizing Alb, lactate dehydrogenase, and neutrophils was selected based on a strong ability to predict death events within 1–6 months, and a set of six prediction models corresponding to 1, 2, 3, 4, 5, and 6 months was developed. The area under the curve (AUC) ranged from 0.852 for the 1-month model to 0.713 for the 6-month model. External validation supported the performance of these models. Conclusion By applying time-series real-world big data, we successfully developed a set of six adaptable prognosis prediction models for patients with cancer receiving chemotherapy. PMID:28837592
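    The exhaustive screening step (fitting every three-item model and selecting on test-cohort performance) can be sketched as below. The data are random placeholders and the logistic-regression model class is an assumption; the abstract does not state the underlying model form.

```python
import itertools
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Placeholder data: 40 laboratory items and a binary death-within-window label
X_train, X_test = rng.normal(size=(300, 40)), rng.normal(size=(300, 40))
y_train, y_test = rng.integers(0, 2, 300), rng.integers(0, 2, 300)

# Brute force over all 40-choose-3 = 9,880 three-item models, keeping the
# combination that scores the best AUC on the held-out test cohort
best_auc, best_items = 0.0, None
for items in itertools.combinations(range(40), 3):
    clf = LogisticRegression().fit(X_train[:, items], y_train)
    auc = roc_auc_score(y_test, clf.predict_proba(X_test[:, items])[:, 1])
    if auc > best_auc:
        best_auc, best_items = auc, items

print(best_items, round(best_auc, 3))
```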

  6. Analysis and prediction of agricultural pest dynamics with Tiko'n, a generic tool to develop agroecological food web models

    NASA Astrophysics Data System (ADS)

    Malard, J. J.; Rojas, M.; Adamowski, J. F.; Anandaraja, N.; Tuy, H.; Melgar-Quiñonez, H.

    2016-12-01

    While several well-validated crop growth models are currently widely used, very few crop pest models of the same caliber have been developed or applied, and pest models that take trophic interactions into account are even rarer. This may be due to several factors, including 1) the difficulty of representing complex agroecological food webs in a quantifiable model, and 2) the general belief that pesticides effectively remove insect pests from immediate concern. However, pests currently claim a substantial amount of harvests every year (and account for additional control costs), and the impact of insects and of their trophic interactions on agricultural crops cannot be ignored, especially in the context of changing climates and increasing pressures on crops across the globe. Unfortunately, most integrated pest management frameworks rely on very simple models (if at all), and most examples of successful agroecological management remain more anecdotal than scientifically replicable. In light of this, there is a need for validated and robust agroecological food web models that allow users to predict the response of these webs to changes in management, crops or climate, both in order to predict future pest problems under a changing climate as well as to develop effective integrated management plans. Here we present Tiko'n, a Python-based software whose API allows users to rapidly build and validate trophic web agroecological models that predict pest dynamics in the field. The programme uses a Bayesian inference approach to calibrate the models according to field data, allowing for the reuse of literature data from various sources and reducing the need for extensive field data collection. We apply the model to the coconut black-headed caterpillar (Opisina arenosella) and associated parasitoid data from Sri Lanka, showing how the modeling framework can be used to rapidly develop, calibrate and validate models that elucidate how the internal structures of food webs determine their behaviour and allow users to evaluate different integrated management options.

  7. Personalized prediction of chronic wound healing: an exponential mixed effects model using stereophotogrammetric measurement.

    PubMed

    Xu, Yifan; Sun, Jiayang; Carter, Rebecca R; Bogie, Kath M

    2014-05-01

    Stereophotogrammetric digital imaging enables rapid and accurate detailed 3D wound monitoring. This rich data source was used to develop a statistically validated model to provide personalized predictive healing information for chronic wounds. A total of 147 valid wound images were obtained from a sample of 13 category III/IV pressure ulcers from 10 individuals with spinal cord injury. Statistical comparison of several models indicated that the best fit for the clinical data was a personalized mixed-effects exponential (pMEE) model, with initial wound size and time as predictors and observed wound size as the response variable. Random effects capture personalized differences. Other models are only valid when wound size constantly decreases; this is often not achieved for clinical wounds, and our model accommodates this reality. Two criteria to determine effective healing time outcomes are proposed: the r-fold wound size reduction time, t(r-fold), is defined as the time when wound size reduces to 1/r of initial size; t(δ) is defined as the time when the rate of wound healing/size change reduces to a predetermined threshold δ < 0. Healing rate differs from patient to patient. Model development and validation indicate that accurate monitoring of wound geometry can adaptively predict healing progression and that larger wounds heal more rapidly. Accuracy of the prediction curve in the current model improves with each additional evaluation. Routine assessment of wounds using detailed stereophotogrammetric imaging can provide personalized predictions of wound healing time. Application of a valid model will help the clinical team to determine wound management care pathways. Published by Elsevier Ltd.
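    For the fixed-effect part of such an exponential model, both proposed healing-time criteria have closed forms. The sketch below assumes S(t) = S0·exp(-k·t) with invented parameter values; the published pMEE additionally carries per-patient random effects.

```python
import numpy as np

def wound_size(t, s0, k):
    """Fixed-effect exponential healing curve S(t) = S0 * exp(-k * t)."""
    return s0 * np.exp(-k * t)

def t_r_fold(r, k):
    """Time for the wound to shrink to 1/r of its initial size:
    S0/r = S0*exp(-k*t)  =>  t = ln(r)/k."""
    return np.log(r) / k

def t_delta(delta, s0, k):
    """Time when the healing rate dS/dt = -k*S0*exp(-k*t) slows to the
    negative threshold delta:  t = ln(k*S0/(-delta))/k."""
    return np.log(k * s0 / -delta) / k

s0, k = 10.0, 0.05            # cm^2 and 1/day, illustrative values only
print(round(t_r_fold(2, k), 1))        # ~13.9 days to halve the wound area
print(round(t_delta(-0.1, s0, k), 1))  # ~32.2 days until the rate slows to -0.1 cm^2/day
```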

  8. Validation of PV-RPM Code in the System Advisor Model.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klise, Geoffrey Taylor; Lavrova, Olga; Freeman, Janine

    2017-04-01

    This paper describes efforts made by Sandia National Laboratories (SNL) and the National Renewable Energy Laboratory (NREL) to validate the SNL-developed PV Reliability Performance Model (PV-RPM) algorithm as implemented in the NREL System Advisor Model (SAM). The PV-RPM model is a library of functions that estimates component failure and repair in a photovoltaic system over a desired simulation period. The failure and repair distributions in this paper are probabilistic representations of component failure and repair based on data collected by SNL for a PV power plant operating in Arizona. The validation effort focuses on whether the failure and repair distributions used in the SAM implementation result in estimated failures that match the expected failures developed in the proof-of-concept implementation. Results indicate that the SAM implementation of PV-RPM provides the same results as the proof-of-concept implementation, indicating the algorithms were reproduced successfully.
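    A failure-and-repair simulation of the PV-RPM kind can be sketched as a simple renewal process; the exponential/lognormal distribution choices and parameters below are illustrative, not the distributions calibrated from the Arizona plant data.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_downtime(n_components, years, mttf_years, mttr_days):
    """Toy renewal simulation: draw a time to failure and a repair time for
    each component repeatedly, accumulating downtime over the period."""
    downtime_years = 0.0
    for _ in range(n_components):
        t = rng.exponential(mttf_years)                  # first failure
        while t < years:
            repair = rng.lognormal(np.log(mttr_days), 0.5) / 365.0
            downtime_years += repair
            t += repair + rng.exponential(mttf_years)    # next failure
    return downtime_years

# 500 components over a 25-year plant life, 10-year MTTF, ~1-week repairs
print(f"{simulate_downtime(500, 25, mttf_years=10, mttr_days=7):.1f} "
      "component-years of downtime")
```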

  9. Data Association Algorithms for Tracking Satellites

    DTIC Science & Technology

    2013-03-27

    ...simulation development. This work includes the addition of higher-fidelity models in CU-TurboProp and validation of the new tools. The description provided here includes the mathematical background and description of the models implemented, as well as a...ode45(), used in Ananke, and (3) provide the necessary inputs to the bidirectional reflectance distribution function (BRDF) model provided by Pacific

  10. Validation of BEHAVE fire behavior predictions in oak savannas using five fuel models

    Treesearch

    Keith Grabner; John Dwyer; Bruce Cutter

    1997-01-01

    Prescribed fire is a valuable tool in the restoration and management of oak savannas. BEHAVE, a fire behavior prediction system developed by the United States Forest Service, can be a useful tool when managing oak savannas with prescribed fire. BEHAVE predictions of fire rate-of-spread and flame length were validated using four standardized fuel models: Fuel Model 1 (...

  11. INACTIVATION OF CRYPTOSPORIDIUM OOCYSTS IN A PILOT-SCALE OZONE BUBBLE-DIFFUSER CONTACTOR - II: MODEL VALIDATION AND APPLICATION

    EPA Science Inventory

    The ADR model developed in Part I of this study was successfully validated with experimental data obtained for the inactivation of C. parvum and C. muris oocysts with a pilot-scale ozone-bubble diffuser contactor operated with treated Ohio River water. Kinetic parameters, required...

  12. Validation of Short-Term Noise Assessment Procedures: FY16 Summary of Procedures, Progress, and Preliminary Results

    DTIC Science & Technology

    Validation project. This report describes the procedure used to generate the noise models' output dataset, and then it compares that dataset to the...benchmark, the Engineer Research and Development Center's Long-Range Sound Propagation dataset. It was found that the models consistently underpredict the

  13. Designing Mouse Behavioral Tasks Relevant to Autistic-Like Behaviors

    ERIC Educational Resources Information Center

    Crawley, Jacqueline N.

    2004-01-01

    The importance of genetic factors in autism has prompted the development of mutant mouse models to advance our understanding of biological mechanisms underlying autistic behaviors. Mouse models of human neuropsychiatric diseases are designed to optimize (1) face validity, i.e., resemblance to the human symptoms; (2) construct validity, i.e.,…

  14. Evaluation of Weighted Scale Reliability and Criterion Validity: A Latent Variable Modeling Approach

    ERIC Educational Resources Information Center

    Raykov, Tenko

    2007-01-01

    A method is outlined for evaluating the reliability and criterion validity of weighted scales based on sets of unidimensional measures. The approach is developed within the framework of latent variable modeling methodology and is useful for point and interval estimation of these measurement quality coefficients in counseling and education…

  15. Validation and Application of Pharmacokinetic Models for Interspecies Extrapolations in Toxicity Risk Assessments of Volatile Organics

    DTIC Science & Technology

    1989-07-21

    formulation of physiologically-based pharmacokinetic models. Adult male Sprague-Dawley rats and male beagle dogs will be administered equal doses...experiments in the dog. Physiologically-based pharmacokinetic models will be developed and validated for oral and inhalation exposures to halocarbons...of conducting experiments in dogs. The original physiologic model for the rat will be scaled up to predict halocarbon pharmacokinetics in the dog. The

  16. Parameter Selection Methods in Inverse Problem Formulation

    DTIC Science & Technology

    2010-11-03

    clinical data and used for prediction, and a model for the reaction of the cardiovascular system to an ergometric workload. Key Words: Parameter selection...a) a recently developed in-host model for HIV dynamics which has been successfully validated with clinical data and used for prediction [4, 8]; b) a global

  17. The HTA core model: a novel method for producing and reporting health technology assessments.

    PubMed

    Lampe, Kristian; Mäkelä, Marjukka; Garrido, Marcial Velasco; Anttila, Heidi; Autti-Rämö, Ilona; Hicks, Nicholas J; Hofmann, Björn; Koivisto, Juha; Kunz, Regina; Kärki, Pia; Malmivaara, Antti; Meiesaar, Kersti; Reiman-Möttönen, Päivi; Norderhaug, Inger; Pasternack, Iris; Ruano-Ravina, Alberto; Räsänen, Pirjo; Saalasti-Koskinen, Ulla; Saarni, Samuli I; Walin, Laura; Kristensen, Finn Børlum

    2009-12-01

    The aim of this study was to develop and test a generic framework to enable international collaboration for producing and sharing results of health technology assessments (HTAs). Ten international teams constructed the HTA Core Model, dividing information contained in a comprehensive HTA into standardized pieces, the assessment elements. Each element contains a generic issue that is translated into practical research questions while performing an assessment. Elements were described in detail in element cards. Two pilot assessments, designated as Core HTAs were also produced. The Model and Core HTAs were both validated. Guidance on the use of the HTA Core Model was compiled into a Handbook. The HTA Core Model considers health technologies through nine domains. Two applications of the Model were developed, one for medical and surgical interventions and another for diagnostic technologies. Two Core HTAs were produced in parallel with developing the model, providing the first real-life testing of the Model and input for further development. The results of formal validation and public feedback were primarily positive. Development needs were also identified and considered. An online Handbook is available. The HTA Core Model is a novel approach to HTA. It enables effective international production and sharing of HTA results in a structured format. The face validity of the Model was confirmed during the project, but further testing and refining are needed to ensure optimal usefulness and user-friendliness. Core HTAs are intended to serve as a basis for local HTA reports. Core HTAs do not contain recommendations on technology use.

  18. Forecast of dengue incidence using temperature and rainfall.

    PubMed

    Hii, Yien Ling; Zhu, Huaiping; Ng, Nawi; Ng, Lee Ching; Rocklöv, Joacim

    2012-01-01

    An accurate early warning system to predict impending epidemics enhances the effectiveness of preventive measures against dengue fever. The aim of this study was to develop and validate a forecasting model that could predict dengue cases and provide timely early warning in Singapore. We developed a time series Poisson multivariate regression model using weekly mean temperature and cumulative rainfall over the period 2000-2010. Weather data were modeled using piecewise linear spline functions. We analyzed various lag times between dengue and weather variables to identify the optimal dengue forecasting period. Autoregression, seasonality and trend were considered in the model. We validated the model by forecasting dengue cases for week 1 of 2011 up to week 16 of 2012 using weather data alone. Model selection and validation were based on Akaike's Information Criterion, standardized Root Mean Square Error, and residual diagnostics. A Receiver Operating Characteristic curve was used to analyze the sensitivity of the forecast of epidemics. The optimal period for dengue forecasting was 16 weeks. Our model forecasted correctly with errors of 0.3 and 0.32 of the standard deviation of reported cases during the model training and validation periods, respectively. It was sensitive enough to distinguish between outbreak and non-outbreak periods with 96% accuracy (CI: 93-98%) in 2004-2010 and 98% (CI: 95-100%) in 2011. The model predicted the 2011 outbreak accurately, with less than a 3% possibility of false alarm. We have developed a weather-based dengue forecasting model that allows warning 16 weeks in advance of dengue epidemics with high sensitivity and specificity. We demonstrate that models using temperature and rainfall can be simple, precise, and low-cost tools for dengue forecasting, which could be used to enhance decision making on the timing and scale of vector control operations and the utilization of limited resources.
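    A stripped-down version of a lagged weather-to-cases Poisson regression is sketched below with synthetic weekly data; the published model additionally used piecewise linear splines, autoregression, seasonality and trend terms.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
n_weeks = 520  # synthetic stand-in for roughly a decade of weekly records

df = pd.DataFrame({
    "temp": 27 + 2 * rng.normal(size=n_weeks),   # weekly mean temperature
    "rain": rng.gamma(2.0, 30.0, size=n_weeks),  # weekly cumulative rainfall
})
df["cases"] = rng.poisson(20, size=n_weeks)      # weekly dengue case counts

# Weather enters at a 16-week lag, the optimal forecast horizon reported
lag = 16
X = sm.add_constant(df[["temp", "rain"]].shift(lag).dropna())
y = df["cases"].iloc[lag:]

model = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print(model.summary().tables[1])  # coefficients of lagged temp and rain
```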

  19. Piloted Evaluation of a UH-60 Mixer Equivalent Turbulence Simulation Model

    NASA Technical Reports Server (NTRS)

    Lusardi, Jeff A.; Blanken, Chris L.; Tischeler, Mark B.

    2002-01-01

    A simulation study of a recently developed hover/low speed Mixer Equivalent Turbulence Simulation (METS) model for the UH-60 Black Hawk helicopter was conducted in the NASA Ames Research Center Vertical Motion Simulator (VMS). The experiment was a continuation of previous work to develop a simple, but validated, turbulence model for hovering rotorcraft. To validate the METS model, two experienced test pilots replicated precision hover tasks that had been conducted in an instrumented UH-60 helicopter in turbulence. Objective simulation data were collected for comparison with flight test data, and subjective data were collected that included handling qualities ratings and pilot comments for increasing levels of turbulence. Analyses of the simulation results show good analytic agreement between the METS model and flight test data, with favorable pilot perception of the simulated turbulence. Precision hover tasks were also repeated using the more complex rotating-frame SORBET (Simulation Of Rotor Blade Element Turbulence) model to generate turbulence. Comparisons of the empirically derived METS model with the theoretical SORBET model show good agreement providing validation of the more complex blade element method of simulating turbulence.

  20. Simulation, Model Verification and Controls Development of Brayton Cycle PM Alternator: Testing and Simulation of 2 KW PM Generator with Diode Bridge Output

    NASA Technical Reports Server (NTRS)

    Stankovic, Ana V.

    2003-01-01

    Professor Stankovic will be developing and refining Simulink-based models of the PM alternator and comparing the simulation results with experimental measurements taken from the unit. Her first task is to validate the models using the experimental data. Her next task is to develop alternative control techniques for the application of the Brayton Cycle PM alternator in a nuclear electric propulsion vehicle. The control techniques will first be simulated using the validated models and then tried experimentally with hardware available at NASA. Testing and simulation of a 2 kW PM synchronous generator with diode bridge output is described. The parameters of a synchronous PM generator have been measured and used in simulation. Test procedures have been developed to verify the PM generator model with diode bridge output. Experimental and simulation results are in excellent agreement.

  1. Finite Element Vibration Modeling and Experimental Validation for an Aircraft Engine Casing

    NASA Astrophysics Data System (ADS)

    Rabbitt, Christopher

    This thesis presents a procedure for the development and validation of a theoretical vibration model, applies this procedure to a pair of aircraft engine casings, and compares select parameters from experimental testing of those casings to those from a theoretical model using the Modal Assurance Criterion (MAC) and linear regression coefficients. A novel method of determining the optimal MAC between axisymmetric results is developed and employed. It is concluded that the dynamic finite element models developed as part of this research are fully capable of modelling the modal parameters within the frequency range of interest. Confidence intervals calculated in this research for correlation coefficients provide important information regarding the reliability of predictions, and it is recommended that these intervals be calculated for all comparable coefficients. The procedure outlined for aligning mode shapes around an axis of symmetry proved useful, and the results are promising for the development of further optimization techniques.
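    For reference, the MAC compared here is a normalized correlation between two mode-shape vectors; a minimal sketch follows (the example vectors are invented, and the thesis's axisymmetric alignment step is not reproduced).

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal Assurance Criterion: 1 means identical shapes up to scaling,
    0 means no correspondence between the two mode shapes."""
    num = np.abs(phi_a.conj() @ phi_b) ** 2
    den = (phi_a.conj() @ phi_a).real * (phi_b.conj() @ phi_b).real
    return num / den

# Compare an FE-predicted mode shape against a noisy "measured" one
phi_fe = np.array([0.0, 0.31, 0.59, 0.81, 0.95, 1.0])
phi_exp = phi_fe + np.random.default_rng(3).normal(0, 0.05, 6)
print(f"MAC = {mac(phi_fe, phi_exp):.3f}")  # close to 1 for well-matched modes
```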

  2. Development and Validation of a Musculoskeletal Model of the Fully Articulated Thoracolumbar Spine and Rib Cage

    PubMed Central

    Bruno, Alexander G.; Bouxsein, Mary L.; Anderson, Dennis E.

    2015-01-01

    We developed and validated a fully articulated model of the thoracolumbar spine in OpenSim that includes the individual vertebrae, ribs, and sternum. To ensure trunk muscles in the model accurately represent muscles in vivo, we used a novel approach to adjust muscle cross-sectional area (CSA) and position using computed tomography (CT) scans of the trunk sampled from a community-based cohort. Model predictions of vertebral compressive loading and trunk muscle tension were highly correlated with previous in vivo measures of intradiscal pressure (IDP), vertebral loading from telemeterized implants, and trunk muscle myoelectric activity recorded by electromyography (EMG). PMID:25901907

  3. A validated finite element model of a soft artificial muscle motor

    NASA Astrophysics Data System (ADS)

    Tse, Tony Chun H.; O'Brien, Benjamin; McKay, Thomas; Anderson, Iain A.

    2011-04-01

    The Biomimetics Laboratory has developed a soft artificial muscle motor based on Dielectric Elastomers. The motor, 'Flexidrive', is light-weight and has low system complexity. It works by gripping and turning a shaft with a soft gear, like we would with our fingers. The motor's performance depends on many factors, such as actuation waveform, electrode patterning, geometries and contact tribology between the shaft and gear. We have developed a finite element model (FEM) of the motor as a study and design tool. Contact interaction was integrated with previous material and electromechanical coupling models in ABAQUS. The model was experimentally validated through a shape and blocked force analysis.

  4. SAMICS Validation. SAMICS Support Study, Phase 3

    NASA Technical Reports Server (NTRS)

    1979-01-01

    SAMICS provides a consistent basis for estimating array costs and for comparing production technology costs. A review and a validation of the SAMICS model are reported. The review had the following purposes: (1) to test the computational validity of the computer model by comparison with preliminary hand calculations based on conventional cost estimating techniques; (2) to review and improve the accuracy of the cost relationships being used by the model; and (3) to provide an independent verification to users of the model's value in decision making for allocation of research and development funds and for investment in manufacturing capacity. It is concluded that the SAMICS model is a flexible, accurate, and useful tool for managerial decision making.

  5. Global Aerodynamic Modeling for Stall/Upset Recovery Training Using Efficient Piloted Flight Test Techniques

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Cunningham, Kevin; Hill, Melissa A.

    2013-01-01

    Flight test and modeling techniques were developed for efficiently identifying global aerodynamic models that can be used to accurately simulate stall, upset, and recovery on large transport airplanes. The techniques were developed and validated in a high-fidelity fixed-base flight simulator using a wind-tunnel aerodynamic database, realistic sensor characteristics, and a realistic flight deck representative of a large transport aircraft. Results demonstrated that aerodynamic models for stall, upset, and recovery can be identified rapidly and accurately using relatively simple piloted flight test maneuvers. Stall maneuver predictions and comparisons of identified aerodynamic models with data from the underlying simulation aerodynamic database were used to validate the techniques.

  6. A microRNA-based prediction model for lymph node metastasis in hepatocellular carcinoma.

    PubMed

    Zhang, Li; Xiang, Zuo-Lin; Zeng, Zhao-Chong; Fan, Jia; Tang, Zhao-You; Zhao, Xiao-Mei

    2016-01-19

    We developed an efficient microRNA (miRNA) model that could predict the risk of lymph node metastasis (LNM) in hepatocellular carcinoma (HCC). We first evaluated a training cohort of 192 HCC patients after hepatectomy and found five LNM-associated predictive factors: vascular invasion, Barcelona Clinic Liver Cancer stage, miR-145, miR-31, and miR-92a. The five statistically independent factors were used to develop a predictive model. The predictive value of the miRNA-based model was confirmed in a validation cohort of 209 consecutive HCC patients. The prediction model scored LNM risk from 0 to 8, and a cutoff value of 4 was used to distinguish high-risk and low-risk groups. The model sensitivity and specificity were 69.6% and 80.2%, respectively, over 5 years in the validation cohort, and the area under the curve (AUC) for the miRNA-based prognostic model was 0.860. The 5-year positive and negative predictive values of the model in the validation cohort were 30.3% and 95.5%, respectively. Cox regression analysis revealed that the LNM hazard ratio of the high-risk versus low-risk groups was 11.751 (95% CI, 5.110-27.021; P < 0.001) in the validation cohort. In conclusion, the miRNA-based model is reliable and accurate for the early prediction of LNM in patients with HCC.
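    The scoring logic implied by the abstract (a 0-8 score dichotomized at 4) is easy to express; the sketch below uses an invented cohort, and whether the high-risk rule is "score >= 4" or "score > 4" is an assumption.

```python
import numpy as np

def high_risk(scores, cutoff=4):
    """Dichotomize the 0-8 LNM risk score at the reported cutoff of 4
    (assumed here to mean score >= cutoff)."""
    return np.asarray(scores) >= cutoff

def sensitivity_specificity(predicted, actual):
    predicted = np.asarray(predicted, dtype=bool)
    actual = np.asarray(actual, dtype=bool)
    tp = np.sum(predicted & actual)    # true positives
    tn = np.sum(~predicted & ~actual)  # true negatives
    return tp / actual.sum(), tn / (~actual).sum()

# Invented cohort: risk scores and observed LNM status
scores = np.array([1, 3, 5, 7, 2, 6, 4, 0])
lnm = np.array([0, 0, 1, 1, 0, 1, 0, 0], dtype=bool)
sens, spec = sensitivity_specificity(high_risk(scores), lnm)
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")
```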

  7. Development of the Galaxy Chronic Obstructive Pulmonary Disease (COPD) Model Using Data from ECLIPSE: Internal Validation of a Linked-Equations Cohort Model.

    PubMed

    Briggs, Andrew H; Baker, Timothy; Risebrough, Nancy A; Chambers, Mike; Gonzalez-McQuire, Sebastian; Ismaila, Afisi S; Exuzides, Alex; Colby, Chris; Tabberer, Maggie; Muellerova, Hana; Locantore, Nicholas; Rutten van Mölken, Maureen P M H; Lomas, David A

    2017-05-01

    The recent joint International Society for Pharmacoeconomics and Outcomes Research / Society for Medical Decision Making Modeling Good Research Practices Task Force emphasized the importance of conceptualizing and validating models. We report a new model of chronic obstructive pulmonary disease (COPD) (part of the Galaxy project) founded on a conceptual model, implemented using a novel linked-equation approach, and internally validated. An expert panel developed a conceptual model including causal relationships between disease attributes, progression, and final outcomes. Risk equations describing these relationships were estimated using data from the Evaluation of COPD Longitudinally to Identify Predictive Surrogate Endpoints (ECLIPSE) study, with costs estimated from the TOwards a Revolution in COPD Health (TORCH) study. Implementation as a linked-equation model enabled direct estimation of health service costs and quality-adjusted life years (QALYs) for COPD patients over their lifetimes. Internal validation compared 3 years of predicted cohort experience with ECLIPSE results. At 3 years, the Galaxy COPD model predictions of annual exacerbation rate and annual decline in forced expiratory volume in 1 second fell within the ECLIPSE data confidence limits, although 3-year overall survival was outside the observed confidence limits. Projections of the risk equations over time permitted extrapolation to patient lifetimes. Averaging the predicted cost/QALY outcomes for the different patients within the ECLIPSE cohort gives an estimated lifetime cost of £25,214 (undiscounted) / £20,318 (discounted) and lifetime QALYs of 6.45 (undiscounted) / 5.24 (discounted) per ECLIPSE patient. A new form of model for COPD was conceptualized, implemented, and internally validated, based on a series of linked equations using epidemiological data (ECLIPSE) and cost data (TORCH). This Galaxy model predicts COPD outcomes from treatment effects on disease attributes such as lung function, exacerbations, symptoms, or exercise capacity; further external validation is required.

  8. Consensus QSAR model for identifying novel H5N1 inhibitors.

    PubMed

    Sharma, Nitin; Yap, Chun Wei

    2012-08-01

    Due to the importance of neuraminidase in the pathogenesis of influenza virus infection, it has been regarded as the most important drug target for the treatment of influenza. Resistance to currently available drugs and new findings related to the structure of the protein require novel neuraminidase 1 (N1) inhibitors. In this study, a consensus QSAR model with a defined applicability domain (AD) was developed using published N1 inhibitors. The consensus model was validated using an external validation set. The model achieved high sensitivity, specificity, and overall accuracy, along with a low false positive rate (FPR) and false discovery rate (FDR). The performance of the model on the external validation set and training set was comparable, so it is unlikely to be overfitted. The low FPR and low FDR will increase its accuracy in screening large chemical libraries. Screening of the ZINC library identified 64,772 compounds as probable N1 inhibitors, while 173,674 compounds fell outside the AD of the consensus model. The advantage of the current model is that it was developed using a large and diverse dataset and has a defined AD, which prevents its use on compounds that it is not capable of predicting. The consensus model developed in this study is made available via the free software PaDEL-DDPredictor.
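    Consensus prediction with an applicability-domain gate can be sketched as below. The base learners, the distance-based AD definition, and its cutoff are all assumptions for illustration; the abstract does not specify how the paper defines its AD.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
X_train, y_train = rng.normal(size=(500, 10)), rng.integers(0, 2, 500)

# Consensus: average the predicted probabilities of several base models
models = [RandomForestClassifier(random_state=0).fit(X_train, y_train),
          LogisticRegression().fit(X_train, y_train)]

def predict_with_ad(x, k=5, ad_cutoff=4.0):
    """Return the consensus probability, or None when the query falls outside
    a simple applicability domain (mean distance to the k nearest training
    compounds above a cutoff), so compounds the model cannot reliably
    predict are refused rather than mispredicted."""
    dists = np.sort(np.linalg.norm(X_train - x, axis=1))[:k]
    if dists.mean() > ad_cutoff:
        return None
    return float(np.mean([m.predict_proba(x[None, :])[0, 1] for m in models]))

print(predict_with_ad(rng.normal(size=10)))       # inside AD: a probability
print(predict_with_ad(10 + rng.normal(size=10)))  # far outside AD: None
```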

  9. Validating spatiotemporal predictions of an important pest of small grains.

    PubMed

    Merrill, Scott C; Holtzer, Thomas O; Peairs, Frank B; Lester, Philip J

    2015-01-01

    Arthropod pests are typically managed using tactics applied uniformly to the whole field. Precision pest management applies tactics under the assumption that within-field pest pressure differences exist. This approach allows for more precise and judicious use of scouting resources and management tactics. For example, a portion of a field delineated as attractive to pests may be selected to receive extra monitoring attention. Likely because of the high variability in pest dynamics, little attention has been given to developing precision pest prediction models. Here, multimodel synthesis was used to develop a spatiotemporal model predicting the density of a key pest of wheat, the Russian wheat aphid, Diuraphis noxia (Kurdjumov). Spatially implicit and spatially explicit models were synthesized to generate spatiotemporal pest pressure predictions. Cross-validation and field validation were used to confirm model efficacy. A strong within-field signal depicting aphid density was confirmed with low prediction errors. Results show that the within-field model predictions will provide higher-quality information than would be provided by traditional field scouting. With improvements to the broad-scale model component, the model synthesis approach and resulting tool could improve pest management strategy and provide a template for the development of spatially explicit pest pressure models. © 2014 Society of Chemical Industry.

  10. Modeling the Relationship between Safety Climate and Safety Performance in a Developing Construction Industry: A Cross-Cultural Validation Study

    PubMed Central

    Zahoor, Hafiz; Chan, Albert P. C.; Utama, Wahyudi P.; Gao, Ran; Zafar, Irfan

    2017-01-01

    This study attempts to validate a safety performance (SP) measurement model in the cross-cultural setting of a developing country. In addition, it highlights the variations in investigating the relationship between safety climate (SC) factors and SP indicators. The data were collected from forty under-construction multi-storey building projects in Pakistan. Based on the results of exploratory factor analysis, a SP measurement model was hypothesized. It was tested and validated by conducting confirmatory factor analysis on calibration and validation sub-samples respectively. The study confirmed the significant positive impact of SC on safety compliance and safety participation, and negative impact on number of self-reported accidents/injuries. However, number of near-misses could not be retained in the final SP model because it attained a lower standardized path coefficient value. Moreover, instead of safety participation, safety compliance established a stronger impact on SP. The study uncovered safety enforcement and promotion as a novel SC factor, whereas safety rules and work practices was identified as the most neglected factor. The study contributed to the body of knowledge by unveiling the deviations in existing dimensions of SC and SP. The refined model is expected to concisely measure the SP in the Pakistani construction industry, however, caution must be exercised while generalizing the study results to other developing countries. PMID:28350366

  11. Modeling the Relationship between Safety Climate and Safety Performance in a Developing Construction Industry: A Cross-Cultural Validation Study.

    PubMed

    Zahoor, Hafiz; Chan, Albert P C; Utama, Wahyudi P; Gao, Ran; Zafar, Irfan

    2017-03-28

    This study attempts to validate a safety performance (SP) measurement model in the cross-cultural setting of a developing country. In addition, it highlights the variations in investigating the relationship between safety climate (SC) factors and SP indicators. The data were collected from forty under-construction multi-storey building projects in Pakistan. Based on the results of exploratory factor analysis, a SP measurement model was hypothesized. It was tested and validated by conducting confirmatory factor analysis on calibration and validation sub-samples respectively. The study confirmed the significant positive impact of SC on safety compliance and safety participation, and negative impact on number of self-reported accidents/injuries. However, number of near-misses could not be retained in the final SP model because it attained a lower standardized path coefficient value. Moreover, instead of safety participation, safety compliance established a stronger impact on SP. The study uncovered safety enforcement and promotion as a novel SC factor, whereas safety rules and work practices was identified as the most neglected factor. The study contributed to the body of knowledge by unveiling the deviations in existing dimensions of SC and SP. The refined model is expected to concisely measure the SP in the Pakistani construction industry; however, caution must be exercised while generalizing the study results to other developing countries.

  12. Real-time quality monitoring in debutanizer column with regression tree and ANFIS

    NASA Astrophysics Data System (ADS)

    Siddharth, Kumar; Pathak, Amey; Pani, Ajaya Kumar

    2018-05-01

    A debutanizer column is an integral part of any petroleum refinery. Online composition monitoring of debutanizer column outlet streams is highly desirable in order to maximize the production of liquefied petroleum gas. In this article, data-driven models of the debutanizer column are developed for real-time composition monitoring. The dataset used has seven process variables as inputs, and the output is the butane concentration in the debutanizer column bottom product. The input-output dataset is divided equally into a training (calibration) set and a validation (testing) set. The training set data were used to develop fuzzy inference, adaptive neuro-fuzzy (ANFIS), and regression tree models of the debutanizer column. The accuracy of the developed models was evaluated by simulating the models with the validation dataset. It is observed that the ANFIS model has better estimation accuracy than the other models developed in this work and many data-driven models proposed so far in the literature for the debutanizer column.
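    Of the model families compared, the regression tree is the quickest to sketch; below is a minimal soft-sensor setup with synthetic stand-in data (seven inputs, butane concentration output, and an equal calibration/validation split as in the study).

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(11)

# Synthetic stand-ins: seven process variables and bottoms butane content
X = rng.normal(size=(1000, 7))
y = 0.5 * X[:, 0] - 0.3 * X[:, 3] + 0.1 * rng.normal(size=1000)

# Equal split into calibration (training) and validation (testing) sets
X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=0.5,
                                              random_state=0)

tree = DecisionTreeRegressor(max_depth=5).fit(X_cal, y_cal)
rmse = mean_squared_error(y_val, tree.predict(X_val)) ** 0.5
print(f"validation RMSE: {rmse:.3f}")
```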

  13. Development and validation of the short-form Adolescent Health Promotion Scale.

    PubMed

    Chen, Mei-Yen; Lai, Li-Ju; Chen, Hsiu-Chih; Gaete, Jorge

    2014-10-26

    Health-promoting lifestyle choices of adolescents are closely related to current and subsequent health status. However, parsimonious yet reliable and valid screening tools are scarce. The original 40-item adolescent health promotion (AHP) scale was developed by our research team and has been applied to measure adolescent health-promoting behaviors worldwide. The aim of our study was to examine the psychometric properties of a newly developed short-form version of the AHP (AHP-SF), including tests of its reliability and validity. The study was conducted in nine middle and high schools in southern Taiwan. Participants were 814 adolescents randomly divided into two subgroups of equal size and homogeneous baseline characteristics. The first subsample (calibration sample) was used to modify and shorten the factorial model, while the second subsample (validation sample) was utilized to validate the result obtained from the first one. The psychometric testing of the AHP-SF included internal reliability (McDonald's omega and Cronbach's alpha), convergent validity, discriminant validity, and construct validity with confirmatory factor analysis (CFA). The results of the CFA supported a six-factor model, and 21 items were retained in the AHP-SF with acceptable model fit. For the discriminant validity test, results indicated that adolescents with lower AHP-SF scores were more likely to be overweight or obese, skip breakfast, and spend more time watching TV and playing computer games. The AHP-SF also showed excellent internal consistency, with a McDonald's omega of 0.904 (Cronbach's alpha 0.905) in the calibration group. The current findings suggest that the AHP-SF is a valid and reliable instrument for the evaluation of adolescent health-promoting behaviors. Primary health care providers and clinicians can use the AHP-SF to assess these behaviors and evaluate the outcome of health promotion programs in the adolescent population.
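    One of the reliability statistics reported, Cronbach's alpha, is simple to compute from an item-score matrix; a minimal sketch with simulated Likert responses follows.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Simulated 5-point Likert responses driven by a shared latent trait
rng = np.random.default_rng(9)
latent = rng.normal(size=200)
responses = np.clip(np.round(3 + latent[:, None]
                             + rng.normal(0, 0.8, (200, 6))), 1, 5)
print(f"alpha = {cronbach_alpha(responses):.2f}")
```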

  14. Preliminary Assessment of Turbomachinery Codes

    NASA Technical Reports Server (NTRS)

    Mazumder, Quamrul H.

    2007-01-01

    This report assesses different CFD codes developed and currently being used at Glenn Research Center to predict turbomachinery fluid flow and heat transfer behavior. The following codes are considered: APNASA, TURBO, GlennHT, H3D, and SWIFT. Each code is described separately in the following section along with its current modeling capabilities, level of validation, pre/post processing, and future development and validation requirements. This report addresses only previously published validations of the codes; however, the codes have been further developed to extend their capabilities.

  15. Modeling Piezoelectric Stack Actuators for Control of Micromanipulation

    NASA Technical Reports Server (NTRS)

    Goldfarb, Michael; Celanovic, Nikola

    1997-01-01

    A nonlinear lumped-parameter model of a piezoelectric stack actuator has been developed to describe actuator behavior for purposes of control system analysis and design, and, in particular, for microrobotic applications requiring accurate position and/or force control. In formulating this model, the authors propose a generalized Maxwell resistive capacitor as a lumped-parameter causal representation of rate-independent hysteresis. Model formulation is validated by comparing results of numerical simulations to experimental data. Validation is followed by a discussion of model implications for purposes of actuator control.
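
    The rate-independent hysteresis element can be sketched as a bank of elasto-slip (Maxwell) elements, in the spirit of the generalized Maxwell resistive capacitor proposed above; the parameters below are illustrative toy values, not identified actuator constants:

        # Rate-independent hysteresis from N elasto-slip (Maxwell) elements:
        # each element is a spring whose force saturates at a breakaway level.
        import numpy as np

        k = np.array([1.0, 2.0, 4.0, 8.0])       # element stiffnesses
        f_max = np.array([0.5, 1.0, 1.5, 2.0])   # breakaway (saturation) levels
        state = np.zeros_like(k)                 # force carried by each element

        def step(dx):
            """Advance all elements by a displacement increment dx."""
            global state
            state = np.clip(state + k * dx, -f_max, f_max)
            return state.sum()

        x = np.concatenate([np.linspace(0, 1, 100), np.linspace(1, -1, 200)])
        force = [step(dx) for dx in np.diff(x, prepend=0.0)]  # traces a loop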

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Kandler A; Santhanagopalan, Shriram; Yang, Chuanbo

    Computer models are helping to accelerate the design and validation of next generation batteries and provide valuable insights not possible through experimental testing alone. Validated 3-D physics-based models exist for predicting electrochemical performance, thermal and mechanical response of cells and packs under normal and abuse scenarios. The talk describes present efforts to make the models better suited for engineering design, including improving their computation speed, developing faster processes for model parameter identification including under aging, and predicting the performance of a proposed electrode material recipe a priori using microstructure models.

  17. Development and validation of Dutch version of Lasater Clinical Judgment Rubric in hospital practice: An instrument design study.

    PubMed

    Vreugdenhil, Jettie; Spek, Bea

    2018-03-01

    Clinical reasoning in patient care is a skill that cannot be observed directly. So far, no reliable, valid instrument exists for the assessment of nursing students' clinical reasoning skills in hospital practice. Lasater's clinical judgment rubric (LCJR), based on Tanner's model "Thinking like a nurse", has been tested mainly in academic simulation settings. The aim is to develop a Dutch version of the LCJR (D-LCJR) and to test its psychometric properties when used in a hospital traineeship context. A mixed-methods approach was used to develop and validate the instrument. Ten dedicated educational units in a university hospital. A well-mixed group of 52 nursing students, nurse coaches and nurse educators. A Delphi panel developed the D-LCJR. Students' clinical reasoning skills were assessed "live" by nurse coaches, nurse educators and students who rated themselves. The psychometric properties tested during the assessment process are reliability, reproducibility, content validity and construct validity, the latter by testing two hypotheses: 1) a positive correlation between assessed and self-reported sum scores (convergent validity) and 2) a linear relation between experience and sum score (clinical validity). The obtained D-LCJR was found to be internally consistent, with a Cronbach's alpha of 0.93. The rubric is also reproducible, with intraclass correlations between 0.69 and 0.78. Experts judged it to be content valid. Both hypotheses were confirmed with statistically significant results, supporting construct validity. The translated and modified LCJR is a promising tool for the evaluation of nursing students' development in clinical reasoning in hospital traineeships, by students, nurse coaches and nurse educators. More evidence on construct validity is necessary, in particular for students at the end of their hospital traineeship. Based on our research, the D-LCJR applied in hospital traineeships is a usable and reliable tool. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Discrete and continuous dynamics modeling of a mass moving on a flexible structure

    NASA Technical Reports Server (NTRS)

    Herman, Deborah Ann

    1992-01-01

    A general discrete methodology for modeling the dynamics of a mass that moves on the surface of a flexible structure is developed. This problem was motivated by the Space Station/Mobile Transporter system. A model reduction approach is developed to make the methodology applicable to large structural systems. To validate the discrete methodology, continuous formulations are also developed. Three different systems are examined: (1) simply-supported beam, (2) free-free beam, and (3) free-free beam with two points of contact between the mass and the flexible beam. In addition to validating the methodology, parametric studies were performed to examine how the system's physical properties affect its dynamics.
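
    A minimal sketch of the simply-supported case, using an assumed-modes expansion and approximating the moving mass as a moving force (its weight), which neglects the inertial coupling treated by the full methodology; all parameter values are illustrative:

        # Moving force on a simply supported beam via modal superposition.
        import numpy as np
        from scipy.integrate import solve_ivp

        L, EI, rhoA = 10.0, 1.0e6, 100.0      # length, stiffness, mass/length
        M, g, v, N = 50.0, 9.81, 2.0, 5       # mass, gravity, speed, modes
        n = np.arange(1, N + 1)
        wn = (n * np.pi / L) ** 2 * np.sqrt(EI / rhoA)  # natural frequencies
        mn = rhoA * L / 2.0                             # modal mass

        def rhs(t, z):
            q, qd = z[:N], z[N:]
            xf = v * t                                  # load position
            phi = np.sin(n * np.pi * xf / L) if 0.0 <= xf <= L else np.zeros(N)
            qdd = -M * g * phi / mn - wn ** 2 * q
            return np.concatenate([qd, qdd])

        sol = solve_ivp(rhs, (0.0, L / v), np.zeros(2 * N), max_step=5e-3)
        w_mid = sol.y[:N].T @ np.sin(n * np.pi / 2)     # midspan deflection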

  19. A Bayesian framework for adaptive selection, calibration, and validation of coarse-grained models of atomistic systems

    NASA Astrophysics Data System (ADS)

    Farrell, Kathryn; Oden, J. Tinsley; Faghihi, Danial

    2015-08-01

    A general adaptive modeling algorithm for selection and validation of coarse-grained models of atomistic systems is presented. A Bayesian framework is developed to address uncertainties in parameters, data, and model selection. Algorithms for computing output sensitivities to parameter variances, model evidence and posterior model plausibilities for given data, and for computing what are referred to as Occam Categories in reference to a rough measure of model simplicity, make up components of the overall approach. Computational results are provided for representative applications.
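
    For a fixed set of candidate models, the model-selection core of such a framework reduces to Bayes' rule over model evidences; a minimal sketch with made-up evidence values:

        # Posterior model plausibilities from (log) model evidences.
        import numpy as np

        log_evidence = np.array([-120.4, -118.9, -119.6])  # illustrative
        log_prior = np.log(np.full(3, 1.0 / 3.0))          # uniform prior

        log_post = log_evidence + log_prior
        log_post -= log_post.max()               # stabilize the exponentials
        plausibility = np.exp(log_post) / np.exp(log_post).sum()
        print(plausibility)       # plausibility of each coarse-grained model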

  20. A comprehensive validation toolbox for regional ocean models - Outline, implementation and application to the Baltic Sea

    NASA Astrophysics Data System (ADS)

    Jandt, Simon; Laagemaa, Priidik; Janssen, Frank

    2014-05-01

    The systematic and objective comparison between output from a numerical ocean model and a set of observations, called validation in the context of this presentation, is a beneficial activity at several stages, starting from early steps in model development and ending at the quality control of model-based products delivered to customers. Even though the importance of this kind of validation work is widely acknowledged, it is often not among the most popular tasks in ocean modelling. In order to ease the validation work, a comprehensive toolbox has been developed in the framework of the MyOcean-2 project. The objective of this toolbox is to carry out validation integrating different data sources, e.g. time-series at stations, vertical profiles, surface fields or along-track satellite data, with one single program call. The validation toolbox, implemented in MATLAB, features all parts of the validation process - ranging from read-in procedures for datasets to the graphical and numerical output of statistical metrics of the comparison. The basic idea is to have only one well-defined validation schedule for all applications, in which all parts of the validation process are executed. Each part, e.g. the read-in procedures, forms a module in which all available functions of that particular part are collected. The interface between the functions, the module and the validation schedule is highly standardized. Functions of a module are set up for certain validation tasks; new functions can be implemented into the appropriate module without affecting the functionality of the toolbox. The functions are assigned for each validation task in user-specific settings, which are externally stored in so-called namelists and gather all information on the used datasets as well as paths and metadata. In the framework of the MyOcean-2 project the toolbox is frequently used to validate the forecast products of the Baltic Sea Marine Forecasting Centre, whereby the performance of any new product version is compared with that of the previous version. Although the toolbox has so far been tested mainly for the Baltic Sea, it can easily be adapted to different datasets and parameters, regardless of the geographic region. In this presentation the usability of the toolbox is demonstrated along with several results of the validation process.
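
    The toolbox itself is MATLAB, but the kind of statistical metrics such a comparison module produces can be sketched compactly; this Python fragment is an illustration, not part of the toolbox:

        # Basic model-vs-observation validation metrics for one variable.
        import numpy as np

        def validation_metrics(model, obs):
            model, obs = np.asarray(model, float), np.asarray(obs, float)
            err = model - obs
            return {
                "bias": err.mean(),
                "rmse": np.sqrt((err ** 2).mean()),
                "corr": np.corrcoef(model, obs)[0, 1],
                "std_ratio": model.std(ddof=1) / obs.std(ddof=1),
            }

        t = np.linspace(0, 30, 720)             # synthetic 30-day series
        obs = 10 + 2 * np.sin(t)
        mod = 10.3 + 1.8 * np.sin(t + 0.1)      # synthetic "forecast"
        print(validation_metrics(mod, obs))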

  1. Development of machine learning models for diagnosis of glaucoma.

    PubMed

    Kim, Seong Jae; Cho, Kyong Jin; Oh, Sejong

    2017-01-01

    The study aimed to develop machine learning models with strong predictive power and interpretability for the diagnosis of glaucoma based on retinal nerve fiber layer (RNFL) thickness and visual field (VF). We collected various candidate features from examinations of RNFL thickness and VF, and also developed synthesized features from the original features. We then selected the features best suited for classification (diagnosis) through feature evaluation. We used 100 cases of data as a test dataset and 399 cases as a training and validation dataset. To develop the glaucoma prediction model, we considered four machine learning algorithms: C5.0, random forest (RF), support vector machine (SVM), and k-nearest neighbor (KNN). We repeatedly composed a learning model using the training dataset and evaluated it using the validation dataset, finally selecting the learning model that produced the highest validation accuracy. We analyzed the quality of the models using several measures. The random forest model showed the best performance, while the C5.0, SVM, and KNN models showed similar accuracy. For the random forest model, the classification accuracy was 0.98, sensitivity 0.983, specificity 0.975, and AUC 0.979. The developed prediction models show high accuracy, sensitivity, specificity, and AUC in classifying between glaucomatous and healthy eyes, and can be used to predict glaucoma for unseen examination records. Clinicians may reference the prediction results to make better decisions, and multiple learning models may be combined to increase prediction accuracy. The C5.0 model includes decision rules for prediction and can be used to explain the reasons for specific predictions.
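
    A hedged sketch of the best-performing configuration (random forest with a held-out test set); the features are synthetic stand-ins for the RNFL/VF measurements, so the printed metrics will not match the study's:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import confusion_matrix, roc_auc_score

        rng = np.random.default_rng(1)
        X = rng.normal(size=(499, 12))               # placeholder features
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.8, 499) > 0).astype(int)
        X_tr, y_tr, X_te, y_te = X[:399], y[:399], X[399:], y[399:]

        rf = RandomForestClassifier(n_estimators=500, random_state=0)
        rf.fit(X_tr, y_tr)
        tn, fp, fn, tp = confusion_matrix(y_te, rf.predict(X_te)).ravel()
        print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
        print("AUC:", roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]))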

  2. Comparison of machine-learning algorithms to build a predictive model for detecting undiagnosed diabetes - ELSA-Brasil: accuracy study.

    PubMed

    Olivera, André Rodrigues; Roesler, Valter; Iochpe, Cirano; Schmidt, Maria Inês; Vigo, Álvaro; Barreto, Sandhi Maria; Duncan, Bruce Bartholow

    2017-01-01

    Type 2 diabetes is a chronic disease associated with a wide range of serious health complications that have a major impact on overall health. The aims here were to develop and validate predictive models for detecting undiagnosed diabetes using data from the Longitudinal Study of Adult Health (ELSA-Brasil) and to compare the performance of different machine-learning algorithms in this task. Comparison of machine-learning algorithms to develop predictive models using data from ELSA-Brasil. After selecting a subset of 27 candidate variables from the literature, models were built and validated in four sequential steps: (i) parameter tuning with tenfold cross-validation, repeated three times; (ii) automatic variable selection using forward selection, a wrapper strategy with four different machine-learning algorithms and tenfold cross-validation (repeated three times), to evaluate each subset of variables; (iii) error estimation of model parameters with tenfold cross-validation, repeated ten times; and (iv) generalization testing on an independent dataset. The models were created with the following machine-learning algorithms: logistic regression, artificial neural network, naïve Bayes, K-nearest neighbor and random forest. The best models were created using artificial neural networks and logistic regression. These achieved mean areas under the curve of, respectively, 75.24% and 74.98% in the error estimation step and 74.17% and 74.41% in the generalization testing step. Most of the predictive models produced similar results, and demonstrated the feasibility of identifying individuals with the highest probability of having undiagnosed diabetes through easily obtained clinical data.
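
    Step (iii), error estimation with repeated tenfold cross-validation, maps directly onto standard tooling; a minimal sketch with synthetic data in place of the ELSA-Brasil variables:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

        rng = np.random.default_rng(2)
        X = rng.normal(size=(1000, 27))          # 27 candidate variables
        y = (X[:, 0] - X[:, 1] + rng.normal(size=1000) > 0).astype(int)

        cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
        auc = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                              scoring="roc_auc", cv=cv)
        print(f"mean AUC {auc.mean():.3f} +/- {auc.std():.3f}")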

  3. Cumulative impact of developments on the surrounding roadways' traffic.

    DOT National Transportation Integrated Search

    2011-10-01

    "In order to recommend a procedure for cumulative impact study, four different travel : demand models were developed, calibrated, and validated. The base year for the models was 2005. : Two study areas were used, and the models were run for three per...

  4. Quantification of Neutral Wind Variability in the Upper Thermosphere

    NASA Technical Reports Server (NTRS)

    Richards, Philip G.

    2000-01-01

    The overall objectives of this grant were to: 1) quantify thermospheric neutral wind behavior in the ionosphere by developing an improved empirical wind model; 2) validate the procedure for obtaining winds from the height of the peak density; and 3) improve the model capabilities and make updated versions of the model available to other scientists. The approach is to use neutral winds derived from ionosonde measurements of the height of the peak electron density (h(sub m)F(sub 2)). One of the proposed first-year tasks was to perform validation studies on the method. Substantial progress has been made on both the empirical model and the validation study. Funding from this grant has also enabled a number of fruitful collaborations with other researchers, one of the stated aims of the proposal. Graduate student Mayra Martinez has developed the mathematical formulation for the empirical wind model as part of her dissertation. As proposed, the authors continued validation studies of the technique for determining winds from h(sub m)F(sub 2), and in December 1996 submitted a paper to the Journal of Geophysical Research entitled "Thermospheric neutral winds at southern mid-latitudes: comparison of optical and ionosonde h(sub m)F(sub 2) methods." A second paper, entitled "Ionospheric behavior at a southern mid-latitude in March 1995," came out of the March 1995 data set and was published in the Journal of Geophysical Research. A new algorithm was developed, and the ionosphere has also been modeled.

  5. Non-clinical studies required for new drug development - Part I: early in silico and in vitro studies, new target discovery and validation, proof of principles and robustness of animal studies.

    PubMed

    Andrade, E L; Bento, A F; Cavalli, J; Oliveira, S K; Freitas, C S; Marcon, R; Schwanke, R C; Siqueira, J M; Calixto, J B

    2016-10-24

    This review presents a historical overview of drug discovery and the non-clinical stages of the drug development process, from initial target identification and validation, through in silico assays and high throughput screening (HTS), identification of lead molecules and their optimization, the selection of a candidate substance for clinical development, and the use of animal models during the early studies of proof-of-concept (or principle). This report also discusses the importance of selecting validated and predictive animal models, as well as the correct use of animal tests with respect to experimental design, execution and interpretation, all of which affect the reproducibility, quality and reliability of the non-clinical studies needed to translate to and support clinical studies. Collectively, improving these aspects will certainly contribute to the robustness of both scientific publications and the translation of new substances to clinical development.

  6. Capitalizing on Citizen Science Data for Validating Models and Generating Hypotheses Describing Meteorological Drivers of Mosquito-Borne Disease Risk

    NASA Astrophysics Data System (ADS)

    Boger, R. A.; Low, R.; Paull, S.; Anyamba, A.; Soebiyanto, R. P.

    2017-12-01

    Temperature and precipitation are important drivers of mosquito population dynamics, and a growing set of models have been proposed to characterize these relationships. Validation of these models, and development of broader theories across mosquito species and regions could nonetheless be improved by comparing observations from a global dataset of mosquito larvae with satellite-based measurements of meteorological variables. Citizen science data can be particularly useful for two such aspects of research into the meteorological drivers of mosquito populations: i) Broad-scale validation of mosquito distribution models and ii) Generation of quantitative hypotheses regarding changes to mosquito abundance and phenology across scales. The recently released GLOBE Observer Mosquito Habitat Mapper (GO-MHM) app engages citizen scientists in identifying vector taxa, mapping breeding sites and decommissioning non-natural habitats, and provides a potentially useful new tool for validating mosquito ubiquity projections based on the analysis of remotely sensed environmental data. Our early work with GO-MHM data focuses on two objectives: validating citizen science reports of Aedes aegypti distribution through comparison with accepted scientific data sources, and exploring the relationship between extreme temperature and precipitation events and subsequent observations of mosquito larvae. Ultimately the goal is to develop testable hypotheses regarding the shape and character of this relationship between mosquito species and regions.

  7. Simplified predictive models for CO2 sequestration performance assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mishra, Srikanta; Ganesh, Priya; Schuetter, Jared

    CO2 sequestration in deep saline formations is increasingly being considered as a viable strategy for the mitigation of greenhouse gas emissions from anthropogenic sources. In this context, detailed numerical simulation based models are routinely used to understand key processes and parameters affecting pressure propagation and buoyant plume migration following CO2 injection into the subsurface. As these models are data and computation intensive, the development of computationally-efficient alternatives to conventional numerical simulators has become an active area of research. Such simplified models can be valuable assets during preliminary CO2 injection project screening, serve as a key element of probabilistic system assessment modeling tools, and assist regulators in quickly evaluating geological storage projects. We present three strategies for the development and validation of simplified modeling approaches for CO2 sequestration in deep saline formations: (1) simplified physics-based modeling, (2) statistical-learning based modeling, and (3) reduced-order method based modeling. In the first category, a set of full-physics compositional simulations is used to develop correlations for dimensionless injectivity as a function of the slope of the CO2 fractional-flow curve, variance of layer permeability values, and the nature of vertical permeability arrangement. The same variables, along with a modified gravity number, can be used to develop a correlation for the total storage efficiency within the CO2 plume footprint. Furthermore, the dimensionless average pressure buildup after the onset of boundary effects can be correlated to dimensionless time, CO2 plume footprint, and storativity contrast between the reservoir and caprock. In the second category, statistical “proxy models” are developed using the simulation domain described previously with two approaches: (a) classical Box-Behnken experimental design with a quadratic response surface, and (b) maximin Latin Hypercube sampling (LHS) based design with a multidimensional kriging metamodel fit. For roughly the same number of simulations, the LHS-based metamodel yields a more robust predictive model, as verified by a k-fold cross-validation approach (with data split into training and test sets) as well as by validation with an independent dataset. In the third category, a reduced-order modeling procedure is utilized that combines proper orthogonal decomposition (POD) for reducing problem dimensionality with trajectory-piecewise linearization (TPWL) in order to represent system response at new control settings from a limited number of training runs. Significant savings in computational time are observed with reasonable accuracy from the POD-TPWL reduced-order model for both vertical and horizontal well problems, which could be important in the context of history matching, uncertainty quantification and optimization problems. The simplified physics and statistical learning based models are also validated using an uncertainty analysis framework. Reference cumulative distribution functions of key model outcomes (i.e., plume radius and reservoir pressure buildup) generated using a 97-run full-physics simulation are successfully validated against the CDF from 10,000 sample probabilistic simulations using the simplified models.
    The main contribution of this research project is the development and validation of a portfolio of simplified modeling approaches that will enable rapid feasibility and risk assessment for CO2 sequestration in deep saline formations.
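
    The second strategy (statistical learning) can be sketched with a maximin-style Latin Hypercube design feeding a kriging (Gaussian-process) metamodel; the response function below is a cheap stand-in for a full-physics simulator run:

        import numpy as np
        from scipy.stats import qmc
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        # Optimized Latin Hypercube design (needs a recent SciPy for the
        # `optimization` argument); 3 input parameters, 60 training runs.
        sampler = qmc.LatinHypercube(d=3, optimization="random-cd", seed=0)
        X = sampler.random(n=60)
        y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.5 * X[:, 2]  # proxy output

        gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3),
                                      normalize_y=True).fit(X, y)
        X_new = sampler.random(n=10)            # new parameter combinations
        pred, sd = gp.predict(X_new, return_std=True)  # fast proxy predictions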

  8. A New Z Score Curve of the Coronary Arterial Internal Diameter Using the Lambda-Mu-Sigma Method in a Pediatric Population.

    PubMed

    Kobayashi, Tohru; Fuse, Shigeto; Sakamoto, Naoko; Mikami, Masashi; Ogawa, Shunichi; Hamaoka, Kenji; Arakaki, Yoshio; Nakamura, Tsuneyuki; Nagasawa, Hiroyuki; Kato, Taichi; Jibiki, Toshiaki; Iwashima, Satoru; Yamakawa, Masaru; Ohkubo, Takashi; Shimoyama, Shinya; Aso, Kentaro; Sato, Seiichi; Saji, Tsutomu

    2016-08-01

    Several coronary artery Z score models have been developed. However, a Z score model derived by the lambda-mu-sigma (LMS) method has not been established. Echocardiographic measurements of the proximal right coronary artery, left main coronary artery, proximal left anterior descending coronary artery, and proximal left circumflex artery were prospectively collected in 3,851 healthy children ≤18 years of age and divided into developmental and validation data sets. In the developmental data set, smooth curves were fitted for each coronary artery using linear, logarithmic, square-root, and LMS methods for both sexes. The relative goodness of fit of these models was compared using the Bayesian information criterion. The best-fitting model was tested for reproducibility using the validation data set. The goodness of fit of the selected model was visually compared with that of the previously reported regression models using a Q-Q plot. Because the internal diameter of each coronary artery differed between sexes, sex-specific Z score models were developed. The LMS model with body surface area as the independent variable showed the best goodness of fit; therefore, the internal diameter of each coronary artery was transformed into a sex-specific Z score on the basis of body surface area using the LMS method. In the validation data set, a Q-Q plot of each model indicated that the distribution of Z scores in the LMS models was closer to the normal distribution compared with previously reported regression models. Finally, the final models for each coronary artery in both sexes were developed using the developmental and validation data sets. A Microsoft Excel-based Z score calculator was also created, which is freely available online (http://raise.umin.jp/zsp/calculator/). Novel LMS models with which to estimate the sex-specific Z score of each internal coronary artery diameter were generated and validated using a large pediatric population. Copyright © 2016 American Society of Echocardiography. Published by Elsevier Inc. All rights reserved.
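
    The LMS transform itself (Cole's method) maps a measured diameter X to a Z score from the fitted lambda (L), mu (M) and sigma (S) at the child's body surface area; the parameter values below are placeholders, not the published coefficients:

        import numpy as np

        def lms_zscore(x, L, M, S):
            """Z score under the LMS model; handles the lambda -> 0 limit."""
            x = np.asarray(x, dtype=float)
            if abs(L) < 1e-8:
                return np.log(x / M) / S
            return ((x / M) ** L - 1.0) / (L * S)

        print(lms_zscore(3.1, L=0.4, M=2.8, S=0.12))  # hypothetical diameter, mm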

  9. Initial Development and Validation of the BullyHARM: The Bullying, Harassment, and Aggression Receipt Measure.

    PubMed

    Hall, William J

    2016-11-01

    This article describes the development and preliminary validation of the Bullying, Harassment, and Aggression Receipt Measure (BullyHARM). The development of the BullyHARM involved a number of steps and methods, including a literature review, expert review, cognitive testing, readability testing, data collection from a large sample, reliability testing, and confirmatory factor analysis. A sample of 275 middle school students was used to examine the psychometric properties and factor structure of the BullyHARM, which consists of 22 items and 6 subscales: physical bullying, verbal bullying, social/relational bullying, cyber-bullying, property bullying, and sexual bullying. First-order and second-order factor models were evaluated. Results demonstrate that the first-order factor model had superior fit. Results of reliability testing indicate that the BullyHARM scale and subscales have very good internal consistency reliability. Findings indicate that the BullyHARM has good properties regarding content validation and respondent-related validation and is a promising instrument for measuring bullying victimization in school.

  10. Initial Development and Validation of the BullyHARM: The Bullying, Harassment, and Aggression Receipt Measure

    PubMed Central

    Hall, William J.

    2017-01-01

    This article describes the development and preliminary validation of the Bullying, Harassment, and Aggression Receipt Measure (BullyHARM). The development of the BullyHARM involved a number of steps and methods, including a literature review, expert review, cognitive testing, readability testing, data collection from a large sample, reliability testing, and confirmatory factor analysis. A sample of 275 middle school students was used to examine the psychometric properties and factor structure of the BullyHARM, which consists of 22 items and 6 subscales: physical bullying, verbal bullying, social/relational bullying, cyber-bullying, property bullying, and sexual bullying. First-order and second-order factor models were evaluated. Results demonstrate that the first-order factor model had superior fit. Results of reliability testing indicate that the BullyHARM scale and subscales have very good internal consistency reliability. Findings indicate that the BullyHARM has good properties regarding content validation and respondent-related validation and is a promising instrument for measuring bullying victimization in school. PMID:28194041

  11. Conducting field studies for testing pesticide leaching models

    USGS Publications Warehouse

    Smith, Charles N.; Parrish, Rudolph S.; Brown, David S.

    1990-01-01

    A variety of predictive models are being applied to evaluate the transport and transformation of pesticides in the environment. These include well known models such as the Pesticide Root Zone Model (PRZM), the Risk of Unsaturated-Saturated Transport and Transformation Interactions for Chemical Concentrations Model (RUSTIC) and the Groundwater Loading Effects of Agricultural Management Systems Model (GLEAMS). The potentially large impact of using these models as tools for developing pesticide management strategies and regulatory decisions necessitates the development of sound model validation protocols. This paper offers guidance on many of the theoretical and practical problems encountered in the design and implementation of field-scale model validation studies. Recommendations are provided for site selection and characterization, test compound selection, data needs, measurement techniques, statistical design considerations and sampling techniques. A strategy is provided for quantitatively testing models using field measurements.

  12. Groundwater Model Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahmed E. Hassan

    2006-01-24

    Models have an inherent uncertainty. The difficulty in fully characterizing the subsurface environment makes uncertainty an integral component of groundwater flow and transport models, which dictates the need for continuous monitoring and improvement. Building and sustaining confidence in closure decisions and monitoring networks based on models of subsurface conditions require developing confidence in the models through an iterative process. The definition of model validation is postulated as a confidence building and long-term iterative process (Hassan, 2004a). Model validation should be viewed as a process, not an end result. Following Hassan (2004b), an approach is proposed for the validation process of stochastic groundwater models. The approach is briefly summarized herein, and detailed analyses of acceptance criteria for stochastic realizations and of using validation data to reduce input parameter uncertainty are presented and applied to two case studies. During the validation process for stochastic models, a question arises as to the sufficiency of the number of acceptable model realizations (in terms of conformity with validation data). Using a hierarchical approach to make this determination is proposed. This approach is based on computing five measures or metrics and following a decision tree to determine if a sufficient number of realizations attain satisfactory scores regarding how they represent the field data used for calibration (old) and used for validation (new). The first two of these measures are applied to hypothetical scenarios using the first case study and assuming field data consistent with the model or significantly different from the model results. In both cases it is shown how the two measures would lead to the appropriate decision about the model performance. Standard statistical tests are used to evaluate these measures, with the results indicating they are appropriate measures for evaluating model realizations. The use of validation data to constrain model input parameters is shown for the second case study using a Bayesian approach known as Markov Chain Monte Carlo. The approach shows great potential to be helpful in the validation process and in incorporating prior knowledge with new field data to derive posterior distributions for both model input and output.
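
    The Markov Chain Monte Carlo step, using validation data to update a model input parameter, can be sketched with a basic Metropolis sampler; the likelihood, prior, and data below are synthetic placeholders rather than the case-study model:

        import numpy as np

        rng = np.random.default_rng(3)
        data = rng.normal(1.5, 0.3, size=20)    # "validation" observations

        def log_post(theta):
            log_prior = -0.5 * (theta / 2.0) ** 2               # N(0, 2^2)
            log_like = -0.5 * np.sum((data - theta) ** 2) / 0.3 ** 2
            return log_prior + log_like

        theta, chain = 0.0, []
        for _ in range(20000):
            prop = theta + 0.2 * rng.normal()   # random-walk proposal
            if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
                theta = prop
            chain.append(theta)
        posterior = np.array(chain[5000:])      # discard burn-in samples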

  13. Development and Validation of New Discriminative Dissolution Method for Carvedilol Tablets

    PubMed Central

    Raju, V.; Murthy, K. V. R.

    2011-01-01

    The objective of the present study was to develop and validate a discriminative dissolution method for the evaluation of carvedilol tablets. Different conditions, such as type of dissolution medium, volume of dissolution medium and rotation speed of the paddle, were evaluated. The best in vitro dissolution profile was obtained using Apparatus II (paddle), 50 rpm, and 900 ml of pH 6.8 phosphate buffer as the dissolution medium. Drug release was evaluated by a high-performance liquid chromatographic method. The dissolution method was validated according to current ICH and FDA guidelines; parameters such as specificity, accuracy, precision and stability were evaluated, and the results were within the acceptable range. The dissolution profiles of three different products were compared using ANOVA-based, model-dependent and model-independent methods; the results showed a significant difference between the products. The dissolution test developed and validated was adequate, given its high discriminative capacity in differentiating the release characteristics of the products tested, and could be applied for the development and quality control of carvedilol tablets. PMID:22923865

  14. Development of an online, publicly accessible naive Bayesian decision support tool for mammographic mass lesions based on the American College of Radiology (ACR) BI-RADS lexicon.

    PubMed

    Benndorf, Matthias; Kotter, Elmar; Langer, Mathias; Herda, Christoph; Wu, Yirong; Burnside, Elizabeth S

    2015-06-01

    To develop and validate a decision support tool for mammographic mass lesions based on a standardized descriptor terminology (BI-RADS lexicon) to reduce variability of practice. We used separate training data (1,276 lesions, 138 malignant) and validation data (1,177 lesions, 175 malignant). We created naïve Bayes (NB) classifiers from the training data with tenfold cross-validation. Our "inclusive model" comprised BI-RADS categories, BI-RADS descriptors, and age as predictive variables; our "descriptor model" comprised BI-RADS descriptors and age. The resulting NB classifiers were applied to the validation data. We evaluated and compared classifier performance with ROC-analysis. In the training data, the inclusive model yields an AUC of 0.959; the descriptor model yields an AUC of 0.910 (P < 0.001). The inclusive model is superior to the clinical performance (BI-RADS categories alone, P < 0.001); the descriptor model performs similarly. When applied to the validation data, the inclusive model yields an AUC of 0.935; the descriptor model yields an AUC of 0.876 (P < 0.001). Again, the inclusive model is superior to the clinical performance (P < 0.001); the descriptor model performs similarly. We consider our classifier a step towards a more uniform interpretation of combinations of BI-RADS descriptors. We provide our classifier at www.ebm-radiology.com/nbmm/index.html . • We provide a decision support tool for mammographic masses at www.ebm-radiology.com/nbmm/index.html . • Our tool may reduce variability of practice in BI-RADS category assignment. • A formal analysis of BI-RADS descriptors may enhance radiologists' diagnostic performance.
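
    A hedged sketch of a naive Bayes classifier over categorical, BI-RADS-style descriptor codes (random stand-ins here, not the study data), evaluated by AUC on a separate validation set:

        import numpy as np
        from sklearn.naive_bayes import CategoricalNB
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(4)
        X_tr = rng.integers(0, 4, size=(1276, 6))   # coded descriptors + age bin
        y_tr = (X_tr.sum(axis=1) + rng.integers(0, 4, 1276) > 11).astype(int)
        X_va = rng.integers(0, 4, size=(1177, 6))
        y_va = (X_va.sum(axis=1) + rng.integers(0, 4, 1177) > 11).astype(int)

        nb = CategoricalNB().fit(X_tr, y_tr)
        print("validation AUC:",
              roc_auc_score(y_va, nb.predict_proba(X_va)[:, 1]))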

  15. Development and field validation of a regional, management-scale habitat model: A koala Phascolarctos cinereus case study.

    PubMed

    Law, Bradley; Caccamo, Gabriele; Roe, Paul; Truskinger, Anthony; Brassil, Traecey; Gonsalves, Leroy; McConville, Anna; Stanton, Matthew

    2017-09-01

    Species distribution models have great potential to efficiently guide management for threatened species, especially for those that are rare or cryptic. We used MaxEnt to develop a regional-scale model for the koala Phascolarctos cinereus at a resolution (250 m) that could be used to guide management. To ensure the model was fit for purpose, we placed emphasis on validating the model using independently-collected field data. We reduced substantial spatial clustering of records in coastal urban areas using a 2-km spatial filter and by modeling separately two subregions separated by the 500-m elevational contour. A bias file was prepared that accounted for variable survey effort. Frequency of wildfire, soil type, floristics and elevation had the highest relative contribution to the model, while a number of other variables made minor contributions. The model was effective in discriminating different habitat suitability classes when compared with koala records not used in modeling. We validated the MaxEnt model at 65 ground-truth sites using independent data on koala occupancy (acoustic sampling) and habitat quality (browse tree availability). Koala bellows ( n  = 276) were analyzed in an occupancy modeling framework, while site habitat quality was indexed based on browse trees. Field validation demonstrated a linear increase in koala occupancy with higher modeled habitat suitability at ground-truth sites. Similarly, a site habitat quality index at ground-truth sites was correlated positively with modeled habitat suitability. The MaxEnt model provided a better fit to estimated koala occupancy than the site-based habitat quality index, probably because many variables were considered simultaneously by the model rather than just browse species. The positive relationship of the model with both site occupancy and habitat quality indicates that the model is fit for application at relevant management scales. Field-validated models of similar resolution would assist in guiding management of conservation-dependent species.
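
    The field-validation step, relating modeled habitat suitability at ground-truth sites to koala detections, can be illustrated with a simple logistic fit standing in for the full occupancy model; the data below are simulated:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(7)
        suitability = rng.uniform(0, 1, size=65)   # MaxEnt output at 65 sites
        occupied = (rng.uniform(size=65) < 0.15 + 0.7 * suitability).astype(int)

        fit = LogisticRegression().fit(suitability.reshape(-1, 1), occupied)
        print("slope:", fit.coef_[0, 0])  # positive slope supports validity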

  16. Response surface models for effects of temperature and previous growth sodium chloride on growth kinetics of Salmonella typhimurium on cooked chicken breast.

    PubMed

    Oscar, T P

    1999-12-01

    Response surface models were developed and validated for effects of temperature (10 to 40 degrees C) and previous growth NaCl (0.5 to 4.5%) on lag time (lambda) and specific growth rate (mu) of Salmonella Typhimurium on cooked chicken breast. Growth curves for model development (n = 55) and model validation (n = 16) were fit to a two-phase linear growth model to obtain lambda and mu of Salmonella Typhimurium on cooked chicken breast. Response surface models for natural logarithm transformations of lambda and mu as a function of temperature and previous growth NaCl were obtained by regression analysis. Both lambda and mu of Salmonella Typhimurium were affected (P < 0.0001) by temperature but not by previous growth NaCl. Models were validated against data not used in their development. Mean absolute relative error of predictions (model accuracy) was 26.6% for lambda and 15.4% for mu. Median relative error of predictions (model bias) was 0.9% for lambda and 5.2% for mu. Results indicated that the models developed provided reliable predictions of lambda and mu of Salmonella Typhimurium on cooked chicken breast within the matrix of conditions modeled. In addition, results indicated that previous growth NaCl (0.5 to 4.5%) was not a major factor affecting subsequent growth kinetics of Salmonella Typhimurium on cooked chicken breast. Thus, inclusion of previous growth NaCl in predictive models may not significantly improve our ability to predict growth of Salmonella spp. on food subjected to temperature abuse.
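
    The two-phase linear growth model referred to above is simple to fit directly; a sketch with simulated growth-curve data (the breakpoint is the lag time lambda, the post-lag slope the specific growth rate mu):

        import numpy as np
        from scipy.optimize import curve_fit

        def two_phase(t, logN0, lam, mu):
            return np.where(t <= lam, logN0, logN0 + mu * (t - lam))

        t = np.linspace(0, 24, 25)                     # hours
        rng = np.random.default_rng(5)
        logN = two_phase(t, 3.0, 5.0, 0.35) + rng.normal(0, 0.05, t.size)

        (logN0, lam, mu), _ = curve_fit(two_phase, t, logN, p0=[3.0, 4.0, 0.3])
        print(f"lag time {lam:.2f} h, growth rate {mu:.3f} log10 CFU/h")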

  17. Validation of Risk Assessment Models of Venous Thromboembolism in Hospitalized Medical Patients.

    PubMed

    Greene, M Todd; Spyropoulos, Alex C; Chopra, Vineet; Grant, Paul J; Kaatz, Scott; Bernstein, Steven J; Flanders, Scott A

    2016-09-01

    Patients hospitalized for acute medical illness are at increased risk for venous thromboembolism. Although risk assessment is recommended and several at-admission risk assessment models have been developed, these have not been adequately derived or externally validated. Therefore, an optimal approach to evaluate venous thromboembolism risk in medical patients is not known. We conducted an external validation study of existing venous thromboembolism risk assessment models using data collected on 63,548 hospitalized medical patients as part of the Michigan Hospital Medicine Safety (HMS) Consortium. For each patient, cumulative venous thromboembolism risk scores and risk categories were calculated. Cox regression models were used to quantify the association between venous thromboembolism events and assigned risk categories. Model discrimination was assessed using Harrell's C-index. Venous thromboembolism incidence in hospitalized medical patients is low (1%). Although existing risk assessment models demonstrate good calibration (hazard ratios for "at-risk" range 2.97-3.59), model discrimination is generally poor for all risk assessment models (C-index range 0.58-0.64). The performance of several existing risk assessment models for predicting venous thromboembolism among acutely ill, hospitalized medical patients at admission is limited. Given the low venous thromboembolism incidence in this nonsurgical patient population, careful consideration of how best to utilize existing venous thromboembolism risk assessment models is necessary, and further development and validation of novel venous thromboembolism risk assessment models for this patient population may be warranted. Published by Elsevier Inc.

  18. Predicting long-term survival after coronary artery bypass graft surgery.

    PubMed

    Karim, Md N; Reid, Christopher M; Huq, Molla; Brilleman, Samuel L; Cochrane, Andrew; Tran, Lavinia; Billah, Baki

    2018-02-01

    To develop a model for predicting long-term survival following coronary artery bypass graft surgery. This study included 46 573 patients from the Australian and New Zealand Society of Cardiac and Thoracic Surgeons (ANZCTS) registry, who underwent isolated coronary artery bypass graft surgery between 2001 and 2014. Data were randomly split into development (23 282) and validation (23 291) samples. Cox regression models were fitted separately, using the important preoperative variables, for 4 'time intervals' (31-90 days, 91-365 days, 1-3 years and >3 years), with optimal predictors selected using the bootstrap bagging technique. Model performance was assessed both in validation data and in combined data (development and validation samples). Coefficients of all 4 final models were estimated on the combined data adjusting for hospital-level clustering. The Kaplan-Meier mortality rates estimated in the sample were 1.7% at 90 days, 2.8% at 1 year, 4.4% at 2 years and 6.1% at 3 years. Age, peripheral vascular disease, respiratory disease, reduced ejection fraction, renal dysfunction, arrhythmia, diabetes, hypercholesterolaemia, cerebrovascular disease, hypertension, congestive heart failure, steroid use and smoking were included in all 4 models. However, their magnitude of effect varied across the time intervals. Harrell's C-statistic was 0.83, 0.78, 0.75 and 0.74 for the 31-90 days, 91-365 days, 1-3 years and >3 years models, respectively. The models showed excellent discrimination and calibration in the validation data. Models were developed for predicting long-term survival at 4 time intervals after isolated coronary artery bypass graft surgery. These models can be used in conjunction with the existing 30-day mortality prediction model. © The Author 2017. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.

  19. Development of an Advanced Computational Model for OMCVD of Indium Nitride

    NASA Technical Reports Server (NTRS)

    Cardelino, Carlos A.; Moore, Craig E.; Cardelino, Beatriz H.; Zhou, Ning; Lowry, Sam; Krishnan, Anantha; Frazier, Donald O.; Bachmann, Klaus J.

    1999-01-01

    An advanced computational model is being developed to predict the formation of indium nitride (InN) film from the reaction of trimethylindium (In(CH3)3) with ammonia (NH3). The components are introduced into the reactor in the gas phase within a background of molecular nitrogen (N2). Organometallic chemical vapor deposition occurs on a heated sapphire surface. The model simulates heat and mass transport with gas and surface chemistry under steady state and pulsed conditions. The development and validation of an accurate model for the interactions between the diffusion of gas phase species and surface kinetics is essential to enable the regulation of the process in order to produce a low defect material. The validation of the model will be performed in concert with a NASA-North Carolina State University project.

  20. Flight Testing an Iced Business Jet for Flight Simulation Model Validation

    NASA Technical Reports Server (NTRS)

    Ratvasky, Thomas P.; Barnhart, Billy P.; Lee, Sam; Cooper, Jon

    2007-01-01

    A flight test of a business jet aircraft with various ice accretions was performed to obtain data to validate flight simulation models developed through wind tunnel tests. Three types of ice accretions were tested: pre-activation roughness, runback shapes that form downstream of the thermal wing ice protection system, and a wing ice protection system failure shape. The high fidelity flight simulation models of this business jet aircraft were validated using a software tool called "Overdrive." Through comparisons of flight-extracted aerodynamic forces and moments to simulation-predicted forces and moments, the simulation models were successfully validated. Only minor adjustments in the simulation database were required to obtain an adequate match, signifying that the process used to develop the simulation models was successful. The simulation models were implemented in the NASA Ice Contamination Effects Flight Training Device (ICEFTD) to enable company pilots to evaluate flight characteristics of the simulation models. By and large, the pilots confirmed good similarities in the flight characteristics when compared to the real airplane. However, pilots noted pitch-up tendencies at stall with the flaps extended that were not representative of the airplane, and identified some differences in pilot forces. The elevator hinge moment model and the implementation of the control forces on the ICEFTD were identified as a driver of the pitch-up and control force issues, and will be an area for future work.

  1. Model-based verification and validation of the SMAP uplink processes

    NASA Astrophysics Data System (ADS)

    Khan, M. O.; Dubos, G. F.; Tirona, J.; Standley, S.

    Model-Based Systems Engineering (MBSE) is being used increasingly within the spacecraft design community because of its benefits when compared to document-based approaches. As the complexity of projects expands dramatically with continually increasing computational power and technology infusion, the time and effort needed for verification and validation (V&V) increases geometrically. Using simulation to perform design validation with system-level models earlier in the life cycle stands to bridge the gap between design of the system (based on system-level requirements) and verifying those requirements/validating the system as a whole. This case study stands as an example of how a project can validate a system-level design earlier in the project life cycle than traditional V&V processes by using simulation on a system model. Specifically, this paper describes how simulation was added to a system model of the Soil Moisture Active-Passive (SMAP) mission's uplink process. Also discussed are the advantages and disadvantages of the methods employed and the lessons learned, which are intended to benefit future model-based and simulation-based development efforts.

  2. Developing and Validating Path-Dependent Uncertainty Estimates for use with the Regional Seismic Travel Time (RSTT) Model

    NASA Astrophysics Data System (ADS)

    Begnaud, M. L.; Anderson, D. N.; Phillips, W. S.; Myers, S. C.; Ballard, S.

    2016-12-01

    The Regional Seismic Travel Time (RSTT) tomography model has been developed to improve travel time predictions for regional phases (Pn, Sn, Pg, Lg) in order to increase seismic location accuracy, especially for explosion monitoring. The RSTT model is specifically designed to exploit regional phases for location, especially when combined with teleseismic arrivals. The latest RSTT model (version 201404um) has been released (http://www.sandia.gov/rstt). Travel time uncertainty estimates for RSTT are determined using one-dimensional (1D), distance-dependent error models, which have the benefit of being very fast to use in standard location algorithms but do not account for path-dependent variations in error or for structural inadequacy of the RSTT model (e.g., model error). Although global in extent, the RSTT tomography model is only defined in areas where data exist. A simple 1D error model does not accurately model areas where RSTT has not been calibrated. We are developing and validating a new error model for RSTT phase arrivals by mathematically deriving this multivariate model directly from a unified model of RSTT embedded into a statistical random effects model that captures distance, path and model error effects. An initial method developed is a two-dimensional path-distributed method using residuals. The goals for any RSTT uncertainty method are that it be both readily usable by the standard RSTT user and able to improve travel time uncertainty estimates for location. We have successfully tested the new error model for Pn phases and will demonstrate the method and the validation of the error model for Sn, Pg, and Lg phases.

  3. The Model Analyst’s Toolkit: Scientific Model Development, Analysis, and Validation

    DTIC Science & Technology

    2013-11-20

    Granger causality F-test validation 3.1.2. Dynamic time warping for uneven temporal relationships Many causal relationships are imperfectly...mapping for dynamic feedback models Granger causality and DTW can identify causal relationships and consider complex temporal factors. However, many ...variant of the tf-idf algorithm (Manning, Raghavan, Schutze et al., 2008), typically used in search engines, to “score” features. The (-log tf) in

  4. Verification, Validation and Sensitivity Studies in Computational Biomechanics

    PubMed Central

    Anderson, Andrew E.; Ellis, Benjamin J.; Weiss, Jeffrey A.

    2012-01-01

    Computational techniques and software for the analysis of problems in mechanics have naturally moved from their origins in the traditional engineering disciplines to the study of cell, tissue and organ biomechanics. Increasingly complex models have been developed to describe and predict the mechanical behavior of such biological systems. While the availability of advanced computational tools has led to exciting research advances in the field, the utility of these models is often the subject of criticism due to inadequate model verification and validation. The objective of this review is to present the concepts of verification, validation and sensitivity studies with regard to the construction, analysis and interpretation of models in computational biomechanics. Specific examples from the field are discussed. It is hoped that this review will serve as a guide to the use of verification and validation principles in the field of computational biomechanics, thereby improving the peer acceptance of studies that use computational modeling techniques. PMID:17558646

  5. A novel integrated framework and improved methodology of computer-aided drug design.

    PubMed

    Chen, Calvin Yu-Chian

    2013-01-01

    Computer-aided drug design (CADD) is a critical initiating step of drug development, but no single model capable of covering all design aspects has been established. Hence, we developed a drug design modeling framework that integrates multiple approaches, including machine-learning-based quantitative structure-activity relationship (QSAR) analysis, 3D-QSAR, Bayesian networks, pharmacophore modeling, and a structure-based docking algorithm. Restrictions for each model were defined for improved individual and overall accuracy. An integration method was applied to join the results from each model to minimize bias and errors. In addition, the integrated model adopts both static and dynamic analysis to validate the intermolecular stabilities of the receptor-ligand conformation. The proposed protocol was applied to identifying HER2 inhibitors from traditional Chinese medicine (TCM) as an example for validating our new protocol. Eight potent leads were identified from six TCM sources. A joint validation system comprising comparative molecular field analysis, comparative molecular similarity indices analysis, and molecular dynamics simulation further characterized the candidates into three potential binding conformations and validated the binding stability of each protein-ligand complex. Ligand-pathway analysis was also performed to predict ligand entry into and exit from the binding site. In summary, we propose a novel systematic CADD methodology for the identification, analysis, and characterization of drug-like candidates.

  6. Distance Education in Taiwan: A Model Validated.

    ERIC Educational Resources Information Center

    Shih, Mei-Yau; Zvacek, Susan M.

    The Triad Perspective Model of Distance Education (TPMDE) guides researchers in developing research questions, gathering data, and producing a comprehensive description of a distance education program. It was developed around three theoretical perspectives: (1) curriculum development theory (Tyler's four questions, 1949); (2) systems theory…

  7. Development and validation of a comprehensive model for MAP of fruits based on enzyme kinetics theory and Arrhenius relation.

    PubMed

    Mangaraj, S; K Goswami, T; Mahajan, P V

    2015-07-01

    MAP is a dynamic system in which respiration of the packaged product and gas permeation through the packaging film take place simultaneously. The desired level of O2 and CO2 in a package is achieved by matching the film permeation rates for O2 and CO2 with the respiration rate of the packaged product. A mathematical model for MAP of fresh fruits was developed by applying an enzyme-kinetics-based respiration equation coupled with an Arrhenius-type temperature model. The model was solved numerically using a MATLAB programme. The model was used to determine the time to reach the equilibrium concentration inside the MA package and the level of O2 and CO2 concentration at the equilibrium state. The developed model for prediction of equilibrium O2 and CO2 concentration was validated using experimental data for MA packaging of apple, guava and litchi.
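
    Conceptually, the model balances Michaelis-Menten respiration (with an Arrhenius temperature correction) against film permeation; the sketch below integrates such a balance with illustrative, not fitted, parameter values:

        import numpy as np
        from scipy.integrate import solve_ivp

        R_gas, T, T_ref = 8.314, 283.0, 293.0    # J/mol/K, storage/ref temps
        Vm = 2.0e-4 * np.exp(-60000.0 / R_gas * (1.0 / T - 1.0 / T_ref))
        Km, W = 5.0, 0.5                         # Michaelis constant (%O2), kg
        P_O2, P_CO2, A, V = 1.0e-5, 4.0e-5, 0.05, 2.0e-3  # permeances, area, m3

        def rhs(t, y):
            o2, co2 = y
            r = Vm * o2 / (Km + o2)              # respiration rate per kg
            do2 = (P_O2 * A * (20.9 - o2) - r * W) / V
            dco2 = (P_CO2 * A * (0.03 - co2) + r * W) / V  # RQ taken as ~1
            return [do2, dco2]

        sol = solve_ivp(rhs, (0.0, 3.6e5), [20.9, 0.03], max_step=600.0)
        print("equilibrium O2, CO2 (%):", sol.y[0, -1], sol.y[1, -1])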

  8. Generic Raman-based calibration models enabling real-time monitoring of cell culture bioreactors.

    PubMed

    Mehdizadeh, Hamidreza; Lauri, David; Karry, Krizia M; Moshgbar, Mojgan; Procopio-Melino, Renee; Drapeau, Denis

    2015-01-01

    Raman-based multivariate calibration models have been developed for real-time in situ monitoring of multiple process parameters within cell culture bioreactors. Developed models are generic, in the sense that they are applicable to various products, media, and cell lines based on Chinese Hamster Ovarian (CHO) host cells, and are scalable to large pilot and manufacturing scales. Several batches using different CHO-based cell lines and corresponding proprietary media and process conditions have been used to generate calibration datasets, and models have been validated using independent datasets from separate batch runs. All models have been validated to be generic and capable of predicting process parameters with acceptable accuracy. The developed models allow monitoring multiple key bioprocess metabolic variables, and hence can be utilized as an important enabling tool for Quality by Design approaches which are strongly supported by the U.S. Food and Drug Administration. © 2015 American Institute of Chemical Engineers.
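
    The abstract does not name the regression method, but Raman calibration models of this kind are commonly partial-least-squares regressions from spectra to a process parameter; a hedged sketch with synthetic spectra:

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(6)
        spectra = rng.normal(size=(120, 800))    # 120 spectra x 800 Raman shifts
        conc = (2.0 * spectra[:, 100] + spectra[:, 350]
                + rng.normal(0, 0.1, size=120))  # synthetic analyte signal

        pls = PLSRegression(n_components=5).fit(spectra[:90], conc[:90])
        pred = pls.predict(spectra[90:]).ravel() # predictions for held-out runs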

  9. Mashup Model and Verification Using Mashup Processing Network

    NASA Astrophysics Data System (ADS)

    Zahoor, Ehtesham; Perrin, Olivier; Godart, Claude

    Mashups are defined as lightweight Web applications that aggregate data from different Web services, are built using ad-hoc composition, and are not concerned with long-term stability and robustness. In this paper we present a pattern-based approach, called the Mashup Processing Network (MPN). The idea is based on Event Processing Networks and is intended to facilitate the creation, modeling and verification of mashups. MPN provides a view of how different actors interact in mashup development, namely the producer, the consumer, the mashup processing agent and the communication channels. It also supports modeling transformations and validations of data and offers validation of both functional and non-functional requirements, such as reliable messaging and security, which are key issues within the enterprise context. We have enriched the model with a set of processing operations and categorized them into data composition, transformation and validation categories. These processing operations can be seen as a set of patterns for facilitating the mashup development process. MPN also paves the way for realizing a Mashup Oriented Architecture, where mashups along with services are used as building blocks for application development.

  10. Development and validation of the Bullying and Cyberbullying Scale for Adolescents: A multi-dimensional measurement model.

    PubMed

    Thomas, Hannah J; Scott, James G; Coates, Jason M; Connor, Jason P

    2018-05-03

    Intervention on adolescent bullying is reliant on valid and reliable measurement of victimization and perpetration experiences across different behavioural expressions. This study developed and validated a survey tool that integrates measurement of both traditional and cyber bullying to test a theoretically driven multi-dimensional model. Adolescents from 10 mainstream secondary schools completed a baseline and follow-up survey (N = 1,217; M age = 14 years; 66.2% male). The Bullying and Cyberbullying Scale for Adolescents (BCS-A) developed for this study comprised parallel victimization and perpetration subscales, each with 20 items. Additional measures of bullying (Olweus Global Bullying and the Forms of Bullying Scale [FBS]), as well as measures of internalizing and externalizing problems, school connectedness, social support, and personality, were used to further assess validity. Factor structure was determined, and then the suitability of items was assessed according to the following criteria: (1) factor interpretability, (2) item correlations, (3) model parsimony, and (4) measurement equivalence across victimization and perpetration experiences. The final models comprised four factors: physical, verbal, relational, and cyber. The final scale was revised to two 13-item subscales. The BCS-A demonstrated acceptable concurrent and convergent validity (internalizing and externalizing problems, school connectedness, social support, and personality), as well as predictive validity over 6 months. The BCS-A has sound psychometric properties. This tool establishes measurement equivalence across types of involvement and behavioural forms common among adolescents. An improved measurement method could add greater rigour to the evaluation of intervention programmes and also enable interventions to be tailored to subscale profiles. © 2018 The British Psychological Society.

  11. Development of a Turbofan Engine Simulation in a Graphical Simulation Environment

    NASA Technical Reports Server (NTRS)

    Parker, Khary I.; Guo, Ten-Heui

    2003-01-01

    This paper presents the development of a generic component-level model of a turbofan engine simulation with a digital controller in an advanced graphical simulation environment. The goal of this effort is to develop and demonstrate a flexible simulation platform for future research in propulsion system control and diagnostic technology. A previously validated FORTRAN-based model of a modern, high-performance, military-type turbofan engine is being used to validate the platform development. The implementation process required the development of various innovative procedures, which are discussed in the paper. Open-loop and closed-loop comparisons are made between the two simulations. Future enhancements to be made to the modular engine simulation are summarized.

  12. Design, Development and Validation of a Model of Problem Solving for Egyptian Science Classes

    ERIC Educational Resources Information Center

    Shahat, Mohamed A.; Ohle, Annika; Treagust, David F.; Fischer, Hans E.

    2013-01-01

    Educators and policymakers envision the future of education in Egypt as enabling learners to acquire scientific inquiry and problem-solving skills. In this article, we describe the validation of a model for problem solving and the design of instruments for evaluating new teaching methods in Egyptian science classes. The instruments were based on…

  13. Comparative Validity of the Shedler and Westen Assessment Procedure-200

    ERIC Educational Resources Information Center

    Mullins-Sweatt, Stephanie N.; Widiger, Thomas A.

    2008-01-01

    A predominant dimensional model of general personality structure is the five-factor model (FFM). Quite a number of alternative instruments have been developed to assess the domains of the FFM. The current study compares the validity of 2 alternative versions of the Shedler and Westen Assessment Procedure (SWAP-200) FFM scales, 1 that was developed…

  14. Challenges of forest landscape modeling - simulating large landscapes and validating results

    Treesearch

    Hong S. He; Jian Yang; Stephen R. Shifley; Frank R. Thompson

    2011-01-01

    Over the last 20 years, we have seen rapid development in the field of forest landscape models (FLMs), fueled by both technological and theoretical advances. Two fundamental challenges have persisted since the inception of FLMs: (1) balancing realistic simulation of ecological processes at broad spatial and temporal scales with computing capacity, and (2) validating...

  15. Model Development and Experimental Validation of the Fusible Heat Sink Design for Exploration Vehicles

    NASA Technical Reports Server (NTRS)

    Cognata, Thomas J.; Leimkuehler, Thomas; Sheth, Rubik; Le, Hung

    2013-01-01

    The Fusible Heat Sink is a novel vehicle heat rejection technology which combines a flow-through radiator with a phase change material. The combined technologies create a multi-function device able to shield crew members against Solar Particle Events (SPE), reduce radiator extent by permitting sizing to the average vehicle heat load rather than to the peak vehicle heat load, and substantially absorb heat load excursions from the average while constantly maintaining thermal control system setpoints. This multi-function technology provides great flexibility for mission planning, making it possible to operate a vehicle in hot or cold environments and under high or low heat load conditions for extended periods of time. This paper describes the modeling and experimental validation of the Fusible Heat Sink technology. The model developed was intended to meet the radiation and heat rejection requirements of a nominal MMSEV mission. Development parameters and results, including sizing and model performance, will be discussed. From this flight-sized model, a scaled test-article design was modeled, designed, and fabricated for experimental validation of the technology at Johnson Space Center thermal vacuum chamber facilities. Testing showed performance comparable to the model at nominal loads and the capability to maintain heat loads substantially greater than nominal for extended periods of time.

  16. Towards Automatic Validation and Healing of Citygml Models for Geometric and Semantic Consistency

    NASA Astrophysics Data System (ADS)

    Alam, N.; Wagner, D.; Wewetzer, M.; von Falkenhausen, J.; Coors, V.; Pries, M.

    2013-09-01

    A steadily growing number of application fields for large 3D city models have emerged in recent years. As in many other domains, data quality is recognized as a key factor for successful business, and quality management is mandatory in the production chain nowadays. Automated domain-specific tools are widely used for validation of business-critical data, but the common standards defining correct geometric modeling are still not precise enough to provide a sound basis for data validation of 3D city models. Although the workflow for 3D city models is well established from data acquisition to processing, analysis and visualization, quality management is not yet a standard part of this workflow. Processing datasets with an unclear specification leads to erroneous results and application defects; we show that this problem persists even if the data are standard compliant. Validation results of real-world city models are presented to demonstrate the potential of the approach. A tool to repair the errors detected during the validation process is under development; first results are presented and discussed. The goal is to heal defects of the models automatically and export a corrected CityGML model.

  17. Validated Risk Score for Predicting 6-Month Mortality in Infective Endocarditis.

    PubMed

    Park, Lawrence P; Chu, Vivian H; Peterson, Gail; Skoutelis, Athanasios; Lejko-Zupa, Tatjana; Bouza, Emilio; Tattevin, Pierre; Habib, Gilbert; Tan, Ren; Gonzalez, Javier; Altclas, Javier; Edathodu, Jameela; Fortes, Claudio Querido; Siciliano, Rinaldo Focaccia; Pachirat, Orathai; Kanj, Souha; Wang, Andrew

    2016-04-18

    Host factors and complications have been associated with higher mortality in infective endocarditis (IE). We sought to develop and validate a model of clinical characteristics to predict 6-month mortality in IE. Using a large multinational prospective registry of definite IE (International Collaboration on Endocarditis [ICE]-Prospective Cohort Study [PCS], 2000-2006, n=4049), a model to predict 6-month survival was developed by Cox proportional hazards modeling with inverse probability weighting for surgery treatment and was internally validated by the bootstrapping method. This model was externally validated in an independent prospective registry (ICE-PLUS, 2008-2012, n=1197). The 6-month mortality was 971 of 4049 (24.0%) in the ICE-PCS cohort and 342 of 1197 (28.6%) in the ICE-PLUS cohort. Surgery during the index hospitalization was performed in 48.1% and 54.0% of the cohorts, respectively. In the derivation model, variables related to host factors (age, dialysis), IE characteristics (prosthetic or nosocomial IE, causative organism, left-sided valve vegetation), and IE complications (severe heart failure, stroke, paravalvular complication, and persistent bacteremia) were independently associated with 6-month mortality, and surgery was associated with a lower risk of mortality (Harrell's C statistic 0.715). In the validation model, these variables had similar hazard ratios (Harrell's C statistic 0.682), with a similar, independent benefit of surgery (hazard ratio 0.74, 95% CI 0.62-0.89). A simplified risk model was developed by weight adjustment of these variables. Six-month mortality after IE is ≈25% and is predicted by host factors, IE characteristics, and IE complications. Surgery during the index hospitalization is associated with lower mortality but is performed less frequently in the highest risk patients. A simplified risk model may be used to identify specific risk subgroups in IE. © 2016 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
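
    A minimal sketch of the survival modelling reported above, using the lifelines implementation of Cox proportional hazards; the covariates, follow-up times, and outcomes are simulated stand-ins for the ICE registry data, and the inverse-probability weighting for surgery is omitted for brevity.

        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter

        rng = np.random.default_rng(1)
        n = 1000
        df = pd.DataFrame({
            "age": rng.normal(60, 12, n),                   # host factor (stand-in)
            "dialysis": rng.integers(0, 2, n),              # host factor (stand-in)
            "surgery": rng.integers(0, 2, n),               # index-hospitalization surgery
            "time": rng.exponential(120, n).clip(max=183),  # days, 6-month horizon
            "death": rng.integers(0, 2, n),                 # 1 = died during follow-up
        })

        cph = CoxPHFitter()
        cph.fit(df, duration_col="time", event_col="death")
        print(cph.summary[["coef", "exp(coef)"]])           # hazard ratios per covariate
        print("Harrell's C:", cph.concordance_index_)       # discrimination, cf. 0.715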

  18. Population Screening for Hereditary Haemochromatosis in Australia: Construction and Validation of a State-Transition Cost-Effectiveness Model.

    PubMed

    de Graaff, Barbara; Si, Lei; Neil, Amanda L; Yee, Kwang Chien; Sanderson, Kristy; Gurrin, Lyle C; Palmer, Andrew J

    2017-03-01

    HFE-associated haemochromatosis, the most common monogenic disorder amongst populations of northern European ancestry, is characterised by iron overload. Excess iron is stored in parenchymal tissues, leading to morbidity and mortality. Population screening programmes are likely to improve early diagnosis, thereby decreasing associated disease. Our aim was to develop and validate a health economics model of screening using utilities and costs from a haemochromatosis cohort. A state-transition model was developed with Markov states based on disease severity. Australian males (aged 30 years) and females (aged 45 years) of northern European ancestry were the target populations. The screening strategy was the status quo approach in Australia; the model was run over a lifetime horizon. Costs were estimated from the government perspective and reported in 2015 Australian dollars ($A); costs and quality-adjusted life-years (QALYs) were discounted at 5% annually. Model validity was assessed using goodness-of-fit analyses. Second-order Monte Carlo simulation was used to account for uncertainty in multiple parameters. For validity, the model reproduced mortality, life expectancy (LE) and prevalence rates in line with published data. LE for C282Y homozygote males and females was 49.9 and 40.2 years, respectively, slightly lower than population rates. Mean (95% confidence interval) QALYs were 15.7 (7.7-23.7) for males and 14.4 (6.7-22.1) for females. Mean discounted lifetime costs for C282Y homozygotes were $A22,737 (3670-85,793) for males and $A13,840 (1335-67,377) for females. Sensitivity analyses revealed that discount rates and prevalence had the greatest impacts on outcomes. We have developed a transparent, validated health economics model of C282Y homozygote haemochromatosis. The model will be useful to decision makers to identify cost-effective screening strategies.
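
    A minimal sketch of a state-transition (Markov) cohort model with the 5% annual discounting of costs and QALYs described above; the states, transition probabilities, costs, and utilities are invented for illustration and are not the published parameters.

        import numpy as np

        states = ["well", "iron_overload", "morbidity", "dead"]
        P = np.array([                 # annual transition matrix (rows sum to 1)
            [0.97, 0.02, 0.00, 0.01],
            [0.00, 0.93, 0.05, 0.02],
            [0.00, 0.00, 0.95, 0.05],
            [0.00, 0.00, 0.00, 1.00],
        ])
        cost = np.array([100.0, 800.0, 5000.0, 0.0])    # $A per state per year
        utility = np.array([1.00, 0.90, 0.70, 0.00])    # QALY weight per state

        cohort = np.array([1.0, 0.0, 0.0, 0.0])         # everyone starts "well"
        disc = 0.05                                     # 5% annual discount rate
        total_cost = total_qaly = 0.0
        for year in range(70):                          # lifetime horizon
            w = 1.0 / (1.0 + disc) ** year              # discount weight for this cycle
            total_cost += w * (cohort @ cost)
            total_qaly += w * (cohort @ utility)
            cohort = cohort @ P                         # advance the cohort one year

        print(f"discounted lifetime cost = $A{total_cost:,.0f}, QALYs = {total_qaly:.1f}")

    Second-order uncertainty, as in the record, would be handled by re-running this cohort loop many times with transition probabilities, costs, and utilities drawn from their respective distributions.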

  19. Modelling human skull growth: a validated computational model

    PubMed Central

    Marghoub, Arsalan; Johnson, David; Khonsari, Roman H.; Fagan, Michael J.; Moazen, Mehran

    2017-01-01

    During the first year of life, the brain grows rapidly and the neurocranium increases to about 65% of its adult size. Our understanding of the relationship between the biomechanical forces, especially from the growing brain, the craniofacial soft tissue structures and the individual bone plates of the skull vault is still limited. This basic knowledge could help in the future planning of craniofacial surgical operations. The aim of this study was to develop a validated computational model of skull growth, based on the finite-element (FE) method, to help understand the biomechanics of skull growth. To do this, a two-step validation study was carried out. First, an in vitro physical three-dimensional printed model and an in silico FE model were created from the same micro-CT scan of an infant skull and loaded with forces from the growing brain from zero to two months of age. The results from the in vitro model validated the FE model before it was further developed to expand from 0 to 12 months of age. This second FE model was compared directly with in vivo clinical CT scans of infants without craniofacial conditions (n = 56). The various models were compared in terms of predicted skull width, length and circumference, while the overall shape was quantified using three-dimensional distance plots. Statistical analysis yielded no significant differences between the male skull models. All size measurements from the FE model versus the in vitro physical model were within 5%, with one exception showing a 7.6% difference. The FE model and in vivo data also correlated well, with the largest percentage difference in size being 8.3%. Overall, the FE model results matched well with both the in vitro and in vivo data. With further development and model refinement, this modelling method could be used to assist in preoperative planning of craniofacial surgery procedures and could help to reduce reoperation rates. PMID:28566514

  20. Modelling human skull growth: a validated computational model.

    PubMed

    Libby, Joseph; Marghoub, Arsalan; Johnson, David; Khonsari, Roman H; Fagan, Michael J; Moazen, Mehran

    2017-05-01

    During the first year of life, the brain grows rapidly and the neurocranium increases to about 65% of its adult size. Our understanding of the relationship between the biomechanical forces, especially from the growing brain, the craniofacial soft tissue structures and the individual bone plates of the skull vault is still limited. This basic knowledge could help in the future planning of craniofacial surgical operations. The aim of this study was to develop a validated computational model of skull growth, based on the finite-element (FE) method, to help understand the biomechanics of skull growth. To do this, a two-step validation study was carried out. First, an in vitro physical three-dimensional printed model and an in silico FE model were created from the same micro-CT scan of an infant skull and loaded with forces from the growing brain from zero to two months of age. The results from the in vitro model validated the FE model before it was further developed to expand from 0 to 12 months of age. This second FE model was compared directly with in vivo clinical CT scans of infants without craniofacial conditions (n = 56). The various models were compared in terms of predicted skull width, length and circumference, while the overall shape was quantified using three-dimensional distance plots. Statistical analysis yielded no significant differences between the male skull models. All size measurements from the FE model versus the in vitro physical model were within 5%, with one exception showing a 7.6% difference. The FE model and in vivo data also correlated well, with the largest percentage difference in size being 8.3%. Overall, the FE model results matched well with both the in vitro and in vivo data. With further development and model refinement, this modelling method could be used to assist in preoperative planning of craniofacial surgery procedures and could help to reduce reoperation rates. © 2017 The Author(s).

  1. Models to predict length of stay in the Intensive Care Unit after coronary artery bypass grafting: a systematic review.

    PubMed

    Atashi, Alireza; Verburg, Ilona W; Karim, Hesam; Miri, Mirmohammad; Abu-Hanna, Ameen; de Jonge, Evert; de Keizer, Nicolette F; Eslami, Saeid

    2018-06-01

    Intensive Care Unit (ICU) length of stay (LoS) prediction models are used to compare institutions and surgeons on their performance, and are useful as efficiency indicators for quality control. There is little consensus about which prediction methods are most suitable to predict ICU length of stay. The aim of this study is to systematically review models for predicting ICU LoS after coronary artery bypass grafting and to assess the reporting and methodological quality of these models for application in benchmarking. A general search was conducted in Medline and Embase up to 31-12-2016. Three authors classified the papers for inclusion by reading their title, abstract and full text. All original papers describing the development and/or validation of a prediction model for LoS in the ICU after CABG surgery were included. We used a checklist developed for critical appraisal and data extraction for systematic reviews of prediction modeling and extended it to cover the handling of specific patient subgroups. We also defined other items and scores to assess the methodological and reporting quality of the models. Of 5181 uniquely identified articles, fifteen studies were included, of which twelve concerned the development of new models and three the validation of existing models. All studies used linear or logistic regression as the method for model development, and reported various performance measures based on the difference between predicted and observed ICU LoS. Most used a prospective (46.6%) or retrospective (40%) study design. We found heterogeneity in patient inclusion/exclusion criteria, sample size, reported accuracy rates, and methods of candidate predictor selection. Most studies (60%) did not mention the handling of missing values, and none compared the model outcome measure between survivors and non-survivors. For model development and validation studies respectively, the maximum reporting (methodological) scores were 66/78 and 62/62 (14/22 and 12/22). There are relatively few models for predicting ICU length of stay after CABG. Several aspects of methodological and reporting quality of studies in this field should be improved. There is a need for standardizing outcome and risk factor definitions in order to develop and validate a multi-institutional and international risk scoring system.

  2. A diagnostic model for impending death in cancer patients: Preliminary report.

    PubMed

    Hui, David; Hess, Kenneth; dos Santos, Renata; Chisholm, Gary; Bruera, Eduardo

    2015-11-01

    Several highly specific bedside physical signs associated with impending death within 3 days were recently identified in patients with advanced cancer. A diagnostic model for impending death based on these physical signs was developed and assessed. Sixty-two physical signs were systematically documented every 12 hours from admission to death or discharge for 357 patients with advanced cancer who were admitted to acute palliative care units (APCUs) at 2 tertiary care cancer centers. Recursive partitioning analysis was used to develop a prediction model for impending death within 3 days using admission data. The model was validated with 5 iterations of 10-fold cross-validation and was also applied to APCU days 2 to 6. For the 322 of 357 patients (90%) with complete data for all signs, the 3-day mortality rate was 24% on admission. The final model was based on 2 variables (Palliative Performance Scale [PPS] and drooping of nasolabial folds) and had 4 terminal leaves: PPS score ≤ 20% with drooping of nasolabial folds present, PPS score ≤ 20% with drooping of nasolabial folds absent, PPS score of 30% to 60%, and PPS score ≥ 70%. The corresponding 3-day mortality rates were 94%, 42%, 16%, and 3%, respectively. The diagnostic accuracy was 81% for the original tree, 80% for cross-validation, and 79% to 84% for subsequent APCU days. Based on 2 objective bedside physical signs, a diagnostic model was developed for impending death within 3 days. This model was applicable to both APCU admission and subsequent days. Upon further external validation, this model may help clinicians to formulate the diagnosis of impending death. © 2015 American Cancer Society.
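
    A minimal sketch of a two-variable recursive-partitioning model of the kind described, with scikit-learn's CART implementation standing in for the authors' software; the cohort is simulated using the leaf-level mortality rates quoted in the abstract.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier, export_text
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(2)
        n = 322
        pps = rng.choice([10, 20, 30, 40, 50, 60, 70, 80], size=n)  # Palliative Performance Scale
        drooping = rng.integers(0, 2, n)                            # nasolabial fold drooping
        p_death = np.where(pps <= 20,
                           np.where(drooping == 1, 0.94, 0.42),     # rates from the abstract
                           np.where(pps <= 60, 0.16, 0.03))
        death_3d = rng.random(n) < p_death                          # simulated 3-day mortality

        X = np.column_stack([pps, drooping])
        tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, death_3d)
        print(export_text(tree, feature_names=["PPS", "drooping"]))
        acc = cross_val_score(tree, X, death_3d, cv=10).mean()      # cf. 10-fold CV above
        print(f"cross-validated accuracy = {acc:.2f}")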

  3. A unified bond theory, probabilistic meso-scale modeling, and experimental validation of deformed steel rebar in normal strength concrete

    NASA Astrophysics Data System (ADS)

    Wu, Chenglin

    Bond between deformed rebar and concrete is affected by the rebar deformation pattern, concrete properties, concrete confinement, and rebar-concrete interfacial properties. Two distinct groups of bond models were traditionally developed based on the dominant effects of concrete splitting and near-interface shear-off failures. Their accuracy depended heavily upon the test datasets selected in analysis and calibration. In this study, a unified bond model is proposed and developed based on an analogy to the indentation problem around the rib front of deformed rebar. This mechanics-based model can take into account the combined effect of concrete splitting and interface shear-off failures, resulting in average bond strengths for all practical scenarios. To understand the fracture process associated with bond failure, a probabilistic meso-scale model of concrete is proposed, and its sensitivity to interface and confinement strengths is investigated. Both the mechanical and finite element models are validated with the available test datasets and are superior to existing models in the prediction of average bond strength (< 6% error) and crack spacing (< 6% error). The validated bond model is applied to derive various interrelations among concrete crushing, concrete splitting, interfacial behavior, and the rib spacing-to-height ratio of deformed rebar. It can accurately predict the transition of failure modes from concrete splitting to rebar pullout, and the effect of rebar surface characteristics as the rib spacing-to-height ratio increases. Based on the unified theory, a global bond model is proposed and developed by introducing bond-slip laws, and validated against tests of concrete beams with spliced reinforcement, achieving a load capacity prediction error of less than 26%. The optimal rebar parameters and concrete cover for structural designs can be derived from this study.

  4. Barotropic Tidal Predictions and Validation in a Relocatable Modeling Environment. Revised

    NASA Technical Reports Server (NTRS)

    Mehra, Avichal; Passi, Ranjit; Kantha, Lakshmi; Payne, Steven; Brahmachari, Shuvobroto

    1998-01-01

    Under funding from the Office of Naval Research (ONR), the Mississippi State University Center for Air Sea Technology (CAST) has been working on developing a Relocatable Modeling Environment (RME) to provide a uniform and unbiased infrastructure for efficiently configuring numerical models in any geographic or oceanic region. Under Naval Oceanographic Office (NAVOCEANO) funding, the model was implemented and tested for NAVOCEANO use. With our current emphasis on ocean tidal modeling, CAST has adopted Colorado University's numerical ocean model, known as the CURReNTSS (Colorado University Rapidly Relocatable Nestable Storm Surge) Model, as the model of choice. During the RME development process, CURReNTSS has been relocated to several coastal oceanic regions, providing excellent results that demonstrate its veracity. This report documents the model validation results and provides a brief description of the Graphical User Interface.

  5. Using Model Replication to Improve the Reliability of Agent-Based Models

    NASA Astrophysics Data System (ADS)

    Zhong, Wei; Kim, Yushim

    The basic presupposition of model replication activities for a computational model such as an agent-based model (ABM) is that, as a robust and reliable tool, it must be replicable in other computing settings. This assumption has recently gained attention in the artificial society and simulation community due to the challenges of model verification and validation. Illustrating the replication of an ABM representing fraudulent behavior in a public service delivery system, originally developed in the Java-based MASON toolkit and re-implemented in NetLogo by a different author, this paper exemplifies how model replication exercises provide unique opportunities for the model verification and validation process. At the same time, replication helps accumulate best practices and patterns of model replication and contributes to the agenda of developing a standard methodological protocol for agent-based social simulation.

  6. Development and validation of deterioration models for concrete bridge decks - phase 1 : artificial intelligence models and bridge management system.

    DOT National Transportation Integrated Search

    2013-06-01

    This research documents the development and evaluation of artificial neural network (ANN) models to predict the condition ratings of concrete highway bridge decks in Michigan. Historical condition assessments chronicled in the national bridge invento...

  7. Development and validation of a prediction model for measurement variability of lung nodule volumetry in patients with pulmonary metastases.

    PubMed

    Hwang, Eui Jin; Goo, Jin Mo; Kim, Jihye; Park, Sang Joon; Ahn, Soyeon; Park, Chang Min; Shin, Yeong-Gil

    2017-08-01

    To develop a prediction model for the variability range of lung nodule volumetry and validate the model in detecting nodule growth. For model development, 50 patients with metastatic nodules were prospectively included. Two consecutive CT scans were performed to assess volumetry for 1,586 nodules. Nodule volume, surface voxel proportion (SVP), attachment proportion (AP) and absolute percentage error (APE) were calculated for each nodule, and quantile regression analyses were performed to model the 95th percentile of APE. For validation, 41 patients who underwent metastasectomy were included. After volumetry of the resected nodules, sensitivity and specificity for the diagnosis of metastatic nodules were compared between two different thresholds of nodule growth determination: a uniform 25% volume change threshold and an individualized threshold calculated from the model (estimated 95th percentile APE). SVP and AP were included in the final model: estimated 95th percentile APE = 37.82 · SVP + 48.60 · AP − 10.87. In the validation session, the individualized threshold showed significantly higher sensitivity for the diagnosis of metastatic nodules than the uniform 25% threshold (75.0% vs. 66.0%, P = 0.004). CONCLUSION: Estimated 95th percentile APE as an individualized threshold of nodule growth showed greater sensitivity in diagnosing metastatic nodules than a global 25% threshold. • The 95th percentile APE of a particular nodule can be predicted. • Estimated 95th percentile APE can be utilized as an individualized threshold. • More sensitive diagnosis of metastasis can be made with an individualized threshold. • Tailored nodule management can be provided during nodule growth follow-up.
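
    A minimal sketch of modelling the 95th percentile of APE with quantile regression, using statsmodels; the data frame is simulated, with svp and ap standing in for surface voxel proportion and attachment proportion.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(3)
        n = 1586                                       # cf. the 1,586 nodules above
        df = pd.DataFrame({"svp": rng.uniform(0, 1, n),
                           "ap": rng.uniform(0, 0.5, n)})
        # skewed measurement error so the upper quantile is informative
        df["ape"] = 5 + 30 * df["svp"] + 40 * df["ap"] + rng.exponential(5, n)

        model = smf.quantreg("ape ~ svp + ap", df)
        res = model.fit(q=0.95)                        # regression on the 95th percentile
        print(res.params)                              # cf. 37.82*SVP + 48.60*AP - 10.87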

  8. Validation of Agricultural Mechanics Curriculum Manual.

    ERIC Educational Resources Information Center

    Hatcher, Elizabeth; And Others

    This study was concerned with the validation of the Oklahoma Curriculum and Instructional Materials Center's agricultural mechanics curriculum manual and the development of a model whereby future manuals can be validated. Five units in the manual were randomly selected from a list of units to be taught during the second semester of the 1977-78…

  9. Development and Validation of Osteoporosis Risk-Assessment Model for Korean Men

    PubMed Central

    Oh, Sun Min; Song, Bo Mi; Nam, Byung-Ho; Rhee, Yumie; Moon, Seong-Hwan; Kim, Deog Young; Kang, Dae Ryong

    2016-01-01

    Purpose The aim of the present study was to develop an osteoporosis risk-assessment model to identify high-risk individuals among Korean men. Materials and Methods The study used data from 1340 and 1110 men ≥50 years of age who participated in the 2009 and 2010 Korean National Health and Nutrition Examination Survey, respectively, for development and validation of an osteoporosis risk-assessment model. Osteoporosis was defined as a T-score ≤ −2.5 at either the femoral neck or lumbar spine. Performance of the candidate models and the Osteoporosis Self-assessment Tool for Asians (OSTA) was compared on sensitivity, specificity, and area under the receiver operating characteristic curve (AUC). A net reclassification improvement was further calculated to compare the developed Korean Osteoporosis Risk-Assessment Model for Men (KORAM-M) with OSTA. Results In the development dataset, the prevalence of osteoporosis was 8.1%. KORAM-M, consisting of age and body weight, had a sensitivity of 90.8%, a specificity of 42.4%, and an AUC of 0.666 with a cut-off score of −9. In the validation dataset, similar results were shown: sensitivity 87.9%, specificity 39.7%, and AUC 0.638. Additionally, risk categorization with KORAM-M showed improved reclassification over OSTA of up to 22.8%. Conclusion KORAM-M can be used simply as a pre-screening tool to identify candidates for dual-energy X-ray absorptiometry tests. PMID:26632400
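
    A minimal sketch of evaluating a simple age/weight risk score against a binary osteoporosis label; the scoring rule below is an OSTA-style index, 0.2 x (weight - age), not the published KORAM-M coefficients, and both the data and the cut-off behaviour are illustrative.

        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(4)
        n = 1340                                       # cf. the development dataset size
        age = rng.normal(65, 9, n)
        weight = rng.normal(65, 10, n)
        risk = 1 / (1 + np.exp(-(0.08 * (age - 65) - 0.08 * (weight - 65) - 2.4)))
        osteo = rng.random(n) < risk                   # simulated DXA-defined osteoporosis

        score = 0.2 * (weight - age)                   # OSTA-style index (lower = riskier)
        flagged = score <= -9                          # cf. the cut-off score of -9 above
        sens = (flagged & osteo).sum() / osteo.sum()
        spec = (~flagged & ~osteo).sum() / (~osteo).sum()
        print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
        print(f"AUC = {roc_auc_score(osteo, -score):.3f}")  # negated: lower score = higher risk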

  10. Developing and Testing a Model to Predict Outcomes of Organizational Change

    PubMed Central

    Gustafson, David H; Sainfort, François; Eichler, Mary; Adams, Laura; Bisognano, Maureen; Steudel, Harold

    2003-01-01

    Objective To test the effectiveness of a Bayesian model employing subjective probability estimates for predicting success and failure of health care improvement projects. Data Sources Experts' subjective assessment data for model development and independent retrospective data on 221 healthcare improvement projects in the United States, Canada, and the Netherlands collected between 1996 and 2000 for validation. Methods A panel of theoretical and practical experts and literature in organizational change were used to identify factors predicting the outcome of improvement efforts. A Bayesian model was developed to estimate probability of successful change using subjective estimates of likelihood ratios and prior odds elicited from the panel of experts. A subsequent retrospective empirical analysis of change efforts in 198 health care organizations was performed to validate the model. Logistic regression and ROC analysis were used to evaluate the model's performance using three alternative definitions of success. Data Collection For the model development, experts' subjective assessments were elicited using an integrative group process. For the validation study, a staff person intimately involved in each improvement project responded to a written survey asking questions about model factors and project outcomes. Results Logistic regression chi-square statistics and areas under the ROC curve demonstrated a high level of model performance in predicting success. Chi-square statistics were significant at the 0.001 level and areas under the ROC curve were greater than 0.84. Conclusions A subjective Bayesian model was effective in predicting the outcome of actual improvement projects. Additional prospective evaluations as well as testing the impact of this model as an intervention are warranted. PMID:12785571
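
    A minimal sketch of the subjective-Bayes calculation described above: prior odds of project success are multiplied by expert-elicited likelihood ratios for each observed factor. The factors and all numbers below are invented for illustration.

        # factor: (LR if factor present, LR if absent), elicited from an expert panel
        likelihood_ratios = {
            "senior_leader_support":  (2.5, 0.5),
            "dedicated_team_time":    (1.8, 0.6),
            "clear_aim_and_measures": (2.0, 0.7),
        }
        observed = {"senior_leader_support": True,
                    "dedicated_team_time": False,
                    "clear_aim_and_measures": True}

        posterior_odds = 1.0          # prior odds: success and failure equally likely
        for factor, present in observed.items():
            lr_present, lr_absent = likelihood_ratios[factor]
            posterior_odds *= lr_present if present else lr_absent

        p_success = posterior_odds / (1.0 + posterior_odds)
        print(f"predicted probability of success = {p_success:.2f}")

    Validating such a model, as in the record, amounts to scoring completed projects with the elicited ratios and checking discrimination (e.g., with logistic regression and ROC analysis) against their observed outcomes.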

  11. Modern modeling techniques had limited external validity in predicting mortality from traumatic brain injury.

    PubMed

    van der Ploeg, Tjeerd; Nieboer, Daan; Steyerberg, Ewout W

    2016-10-01

    Prediction of medical outcomes may potentially benefit from using modern statistical modeling techniques. We aimed to externally validate modeling strategies for prediction of 6-month mortality of patients suffering from traumatic brain injury (TBI) with predictor sets of increasing complexity. We analyzed individual patient data from 15 different studies including 11,026 TBI patients. We consecutively considered a core set of predictors (age, motor score, and pupillary reactivity), an extended set with computed tomography scan characteristics, and a further extension with two laboratory measurements (glucose and hemoglobin). With each of these sets, we predicted 6-month mortality using default settings with five statistical modeling techniques: logistic regression (LR), classification and regression trees, random forests (RFs), support vector machines (SVMs) and neural nets. For external validation, a model developed on one of the 15 datasets was applied to each of the 14 remaining sets. This process was repeated 15 times for a total of 630 validations. The area under the receiver operating characteristic curve (AUC) was used to assess the discriminative ability of the models. For the most complex predictor set, the LR models performed best (median validated AUC value, 0.757), followed by RF and support vector machine models (median validated AUC values, 0.735 and 0.732, respectively). With each predictor set, the classification and regression trees models showed poor performance (median validated AUC value, <0.7). The variability in performance across the studies was smallest for the RF- and LR-based models (interquartile range for validated AUC values from 0.07 to 0.10). In the area of predicting mortality from TBI, nonlinear and nonadditive effects are not pronounced enough to make modern prediction methods beneficial. Copyright © 2016 Elsevier Inc. All rights reserved.
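
    A minimal sketch of the cross-study external validation scheme above: develop on one study, validate on each of the remaining fourteen, and summarize the validated AUCs (the paper's 630 validations correspond to 15 x 14 train/test pairs over three predictor sets). Studies and features are simulated, and scikit-learn defaults stand in for the paper's techniques; only two of the five are shown.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(5)
        studies = []
        for _ in range(15):                              # 15 TBI studies
            n = int(rng.integers(300, 1200))
            X = rng.normal(size=(n, 3))                  # age, motor score, pupils (stand-ins)
            y = rng.random(n) < 1 / (1 + np.exp(-X @ np.array([0.8, -0.6, 0.5])))
            studies.append((X, y))

        for name, model in [("LR", LogisticRegression()),
                            ("RF", RandomForestClassifier(random_state=0))]:
            aucs = []
            for i, (X_dev, y_dev) in enumerate(studies):       # develop on study i ...
                model.fit(X_dev, y_dev)
                for j, (X_val, y_val) in enumerate(studies):   # ... validate on the others
                    if j != i:
                        p = model.predict_proba(X_val)[:, 1]
                        aucs.append(roc_auc_score(y_val, p))
            print(f"{name}: median validated AUC = {np.median(aucs):.3f} "
                  f"over {len(aucs)} validations")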

  12. Early Detection of Increased Intracranial Pressure Episodes in Traumatic Brain Injury: External Validation in an Adult and in a Pediatric Cohort.

    PubMed

    Güiza, Fabian; Depreitere, Bart; Piper, Ian; Citerio, Giuseppe; Jorens, Philippe G; Maas, Andrew; Schuhmann, Martin U; Lo, Tsz-Yan Milly; Donald, Rob; Jones, Patricia; Maier, Gottlieb; Van den Berghe, Greet; Meyfroidt, Geert

    2017-03-01

    A model for early detection of episodes of increased intracranial pressure in traumatic brain injury patients has been previously developed and validated based on retrospective adult patient data from the multicenter Brain-IT database. The purpose of the present study is to validate this early detection model in different cohorts of recently treated adult and pediatric traumatic brain injury patients. Prognostic modeling. Noninterventional, observational, retrospective study. The adult validation cohort comprised recent traumatic brain injury patients from San Gerardo Hospital in Monza (n = 50), Leuven University Hospital (n = 26), Antwerp University Hospital (n = 19), Tübingen University Hospital (n = 18), and Southern General Hospital in Glasgow (n = 8). The pediatric validation cohort comprised patients from neurosurgical and intensive care centers in Edinburgh and Newcastle (n = 79). No interventions were performed. The model's performance was evaluated with respect to discrimination, calibration, overall performance, and clinical usefulness. In the recent adult validation cohort, the model retained excellent performance as in the original study. In the pediatric validation cohort, the model retained good discrimination and a positive net benefit, albeit with a performance drop in the remaining criteria. The obtained external validation results confirm the robustness of the model to predict future increased intracranial pressure events 30 minutes in advance, in adult and pediatric traumatic brain injury patients. These results are a large step toward an early warning system for increased intracranial pressure that can be generally applied. Furthermore, the sparseness of this model that uses only two routinely monitored signals as inputs (intracranial pressure and mean arterial blood pressure) is an additional asset.

  13. Validation in the Absence of Observed Events

    DOE PAGES

    Lathrop, John; Ezell, Barry

    2015-07-22

    Here, our paper addresses the problem of validating models in the absence of observed events, in the area of Weapons of Mass Destruction terrorism risk assessment. We address that problem with a broadened definition of “validation,” based on “backing up” to the reason why modelers and decision makers seek validation, and from that basis we re-define validation as testing how well the model can advise decision makers in terrorism risk management decisions. We develop that into two conditions: validation must be based on cues available in the observable world, and it must focus on what can be done to affect that observable world, i.e. risk management. That in turn leads to two foci: (1) the risk-generating process, and (2) best use of available data. Based on our experience with nine WMD terrorism risk assessment models, we then describe three pitfalls in the best use of available data: SME confidence bias, lack of SME cross-referencing, and problematic initiation rates. Those two foci and three pitfalls provide a basis from which we define validation in this context in terms of four tests. Does the model: (1) capture initiation? (2) capture the sequence of events by which attack scenarios unfold? (3) consider unanticipated scenarios? (4) consider alternative causal chains? Finally, we corroborate our approach against three key validation tests from the DOD literature: Is the model a correct representation of the simuland? To what degree are the model results comparable to the real world? Over what range of inputs are the model results useful?

  14. Guiding Development Based Approach Practicum Vertebrates Taxonomy Scientific Study Program for Students of Biology Education

    NASA Astrophysics Data System (ADS)

    Arieska, M.; Syamsurizal, S.; Sumarmin, R.

    2018-04-01

    Students have difficulty identifying and describing vertebrate animals and are less skilled in the science process during practical work. Expertise in scientific skills can be increased through practical activities that use a practicum guide based on a scientific approach. This study aims to produce a valid vertebrate taxonomy practicum guide for biology education students at STKIP PGRI West Sumatra. The study uses the Plomp development model, which consists of three phases: the preliminary investigation, the prototyping stage, and the assessment stage. The data collection instrument used in this study is a practicum guide validation sheet. Data were analyzed descriptively based on data obtained from the field. The developed vertebrate taxonomy practicum guide obtained a validity value of 3.22, in the very valid category. This research and development effort has thus produced a very valid, scientific-approach-based practicum guide for vertebrate taxonomy.

  15. Developing a Psychometric Instrument to Measure Physical Education Teachers' Job Demands and Resources.

    PubMed

    Zhang, Tan; Chen, Ang

    2017-01-01

    Based on the job demands-resources model, the study developed and validated an instrument that measures physical education teachers' job demands-resources perception. Expert review established content validity with an average item rating of 3.6/5.0. Construct validity and reliability were determined with a teacher sample (n = 397). Exploratory factor analysis established a five-dimension construct structure matching the theoretical construct deliberated in the literature. The composite reliability scores for the five dimensions range from .68 to .83. Validity coefficients (intraclass correlation coefficients) are .69 for job resources items and .82 for job demands items. Inter-scale correlation coefficients range from -.32 to .47. Confirmatory factor analysis confirmed the construct validity with high dimensional factor loadings (ranging from .47 to .84 for the job resources scale and from .50 to .85 for the job demands scale) and adequate model fit indexes (root mean square error of approximation = .06). The instrument provides a tool to measure physical education teachers' perception of their working environment.

  16. Developing a Psychometric Instrument to Measure Physical Education Teachers’ Job Demands and Resources

    PubMed Central

    Zhang, Tan; Chen, Ang

    2017-01-01

    Based on the job demands–resources model, the study developed and validated an instrument that measures physical education teachers’ job demands–resources perception. Expert review established content validity with an average item rating of 3.6/5.0. Construct validity and reliability were determined with a teacher sample (n = 397). Exploratory factor analysis established a five-dimension construct structure matching the theoretical construct deliberated in the literature. The composite reliability scores for the five dimensions range from .68 to .83. Validity coefficients (intraclass correlation coefficients) are .69 for job resources items and .82 for job demands items. Inter-scale correlation coefficients range from −.32 to .47. Confirmatory factor analysis confirmed the construct validity with high dimensional factor loadings (ranging from .47 to .84 for the job resources scale and from .50 to .85 for the job demands scale) and adequate model fit indexes (root mean square error of approximation = .06). The instrument provides a tool to measure physical education teachers’ perception of their working environment. PMID:29200808

  17. Experimental psychiatric illness and drug abuse models: from human to animal, an overview.

    PubMed

    Edwards, Scott; Koob, George F

    2012-01-01

    Preclinical animal models have supported much of the recent rapid expansion of neuroscience research and have facilitated critical discoveries that undoubtedly benefit patients suffering from psychiatric disorders. This overview serves as an introduction for the following chapters describing both in vivo and in vitro preclinical models of psychiatric disease components and briefly describes models related to drug dependence and affective disorders. Although there are no perfect animal models of any psychiatric disorder, models do exist for many elements of each disease state or stage. In many cases, the development of certain models is essentially restricted to the human clinical laboratory domain for the purpose of maximizing validity, whereas the use of in vitro models may best represent an adjunctive, well-controlled means to model specific signaling mechanisms associated with psychiatric disease states. The data generated by preclinical models are only as valid as the model itself, and the development and refinement of animal models for human psychiatric disorders continues to be an important challenge. Collaborative relationships between basic neuroscience and clinical modeling could greatly benefit the development of new and better models, in addition to facilitating medications development.

  18. Experimental Validation of a Closed Brayton Cycle System Transient Simulation

    NASA Technical Reports Server (NTRS)

    Johnson, Paul K.; Hervol, David S.

    2006-01-01

    The Brayton Power Conversion Unit (BPCU) located at NASA Glenn Research Center (GRC) in Cleveland, Ohio, was used to validate the results of a computational code known as Closed Cycle System Simulation (CCSS). Conversion system thermal transient behavior was the focus of this validation. The BPCU was operated at various steady-state points and then subjected to transient changes involving shaft rotational speed and thermal energy input. These conditions were then duplicated in CCSS. Validation of the CCSS BPCU model provides confidence in developing future Brayton power system performance predictions, and helps to guide high power Brayton technology development.

  19. Developing calculus textbook model that supported with GeoGebra to enhancing students’ mathematical problem solving and mathematical representation

    NASA Astrophysics Data System (ADS)

    Dewi, N. R.; Arini, F. Y.

    2018-03-01

    The main purpose of this research is to develop and produce a calculus textbook model supported with GeoGebra. The book was designed to enhance students' mathematical problem solving and mathematical representation. There were three stages in this research, i.e. define, design, and develop. The textbook consists of six chapters, each of which contains an introduction and core material, including examples and exercises. The development phase began with an initial design of the book (draft 1), which was then validated by experts; revision of draft 1 produced draft 2. The data were analyzed with descriptive statistics. The analysis showed that the calculus textbook model supported with GeoGebra is valid and fulfils the criteria of practicality.

  20. Validating agent oriented methodology (AOM) for netlogo modelling and simulation

    NASA Astrophysics Data System (ADS)

    WaiShiang, Cheah; Nissom, Shane; YeeWai, Sim; Sharbini, Hamizan

    2017-10-01

    AOM (Agent Oriented Modeling) is a comprehensive and unified agent methodology for agent oriented software development. The AOM methodology was proposed to aid developers by introducing techniques, terminology, notation and guidelines for use during agent system development. Although the AOM methodology is claimed to be capable of developing complex real-world systems, its potential is yet to be realized and recognized by the mainstream software community, and the adoption of AOM is still in its infancy. Among the reasons is that there are few case studies or success stories for AOM. This paper presents two case studies on the adoption of AOM for individual-based modelling and simulation. They demonstrate how AOM is useful for epidemiological and ecological studies, and hence further validate AOM in a qualitative manner.

  1. Risk assessment model for development of advanced age-related macular degeneration.

    PubMed

    Klein, Michael L; Francis, Peter J; Ferris, Frederick L; Hamon, Sara C; Clemons, Traci E

    2011-12-01

    To design a risk assessment model for development of advanced age-related macular degeneration (AMD) incorporating phenotypic, demographic, environmental, and genetic risk factors. We evaluated longitudinal data from 2846 participants in the Age-Related Eye Disease Study. At baseline, these individuals had all levels of AMD, ranging from none to unilateral advanced AMD (neovascular or geographic atrophy). Follow-up averaged 9.3 years. We performed a Cox proportional hazards analysis with demographic, environmental, phenotypic, and genetic covariates and constructed a risk assessment model for development of advanced AMD. Performance of the model was evaluated using the C statistic and the Brier score and externally validated in participants in the Complications of Age-Related Macular Degeneration Prevention Trial. The final model included the following independent variables: age, smoking history, family history of AMD (first-degree member), phenotype based on a modified Age-Related Eye Disease Study simple scale score, and genetic variants CFH Y402H and ARMS2 A69S. The model did well on performance measures, with very good discrimination (C statistic = 0.872) and excellent calibration and overall performance (Brier score at 5 years = 0.08). Successful external validation was performed, and a risk assessment tool was designed for use with or without the genetic component. We constructed a risk assessment model for development of advanced AMD. The model performed well on measures of discrimination, calibration, and overall performance and was successfully externally validated. This risk assessment tool is available for online use.
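
    A minimal sketch of the two performance measures reported above; for a binary outcome the C statistic reduces to the ROC AUC, and the Brier score is the mean squared error of the predicted risks. The predictions and outcomes below are simulated.

        import numpy as np
        from sklearn.metrics import roc_auc_score, brier_score_loss

        rng = np.random.default_rng(6)
        n = 2846                                   # cf. the AREDS cohort size
        p_pred = rng.beta(2, 8, n)                 # model-predicted 5-year risk (stand-in)
        progressed = rng.random(n) < p_pred        # simulated progression to advanced AMD

        print(f"C statistic = {roc_auc_score(progressed, p_pred):.3f}")    # cf. 0.872
        print(f"Brier score = {brier_score_loss(progressed, p_pred):.3f}") # cf. 0.08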

  2. Development of a model for predicting reaction rate constants of organic chemicals with ozone at different temperatures.

    PubMed

    Li, Xuehua; Zhao, Wenxing; Li, Jing; Jiang, Jingqiu; Chen, Jianji; Chen, Jingwen

    2013-08-01

    To assess the persistence and fate of volatile organic compounds in the troposphere, the rate constants for their reaction with ozone (kO3) are needed. As kO3 values are only available for hundreds of compounds, and experimental determination of kO3 is costly and time-consuming, it is important to develop predictive models for kO3. In this study, a total of 379 log kO3 values at different temperatures were used to develop and validate a model for the prediction of kO3, based on quantum chemical descriptors, Dragon descriptors and structural fragments. Molecular descriptors were screened by stepwise multiple linear regression, and the model was constructed by partial least-squares regression. The cumulative cross-validation coefficient Q²cum of the model is 0.836, and the external validation coefficient Q²ext is 0.811, indicating that the model has high robustness and good predictive performance. The most significant descriptor explaining log kO3 is the BELm2 descriptor, which carries connectivity information weighted by atomic masses. kO3 increases with increasing BELm2 and decreases with increasing ionization potential. The applicability domain of the proposed model was visualized by the Williams plot. The developed model can be used to predict kO3 at different temperatures for a wide range of organic chemicals, including alkenes, cycloalkenes, haloalkenes, alkynes, oxygen-containing compounds, nitrogen-containing compounds (except primary amines) and aromatic compounds. Copyright © 2013 Elsevier Ltd. All rights reserved.
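
    A minimal sketch of the QSAR workflow above: fit partial least-squares on screened descriptors and report cross-validated and external Q² values; the descriptor matrix is a random stand-in for the quantum chemical and Dragon descriptors in the record.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict, train_test_split

        rng = np.random.default_rng(7)
        X = rng.normal(size=(379, 12))             # 379 data points, 12 screened descriptors
        log_ko3 = X @ rng.normal(size=12) + rng.normal(scale=0.5, size=379)

        X_tr, X_ext, y_tr, y_ext = train_test_split(X, log_ko3, test_size=0.25,
                                                    random_state=0)

        def q2(y_true, y_pred):
            """Predictive squared correlation: 1 - PRESS / total sum of squares."""
            return 1 - np.sum((y_true - y_pred) ** 2) / np.sum((y_true - y_true.mean()) ** 2)

        pls = PLSRegression(n_components=4)
        y_cv = cross_val_predict(pls, X_tr, y_tr, cv=5).ravel()
        pls.fit(X_tr, y_tr)
        print(f"Q2(cum) = {q2(y_tr, y_cv):.3f}")                         # internal robustness
        print(f"Q2(ext) = {q2(y_ext, pls.predict(X_ext).ravel()):.3f}")  # external predictivity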

  3. Diabetic retinopathy risk prediction for fundus examination using sparse learning: a cross-sectional study.

    PubMed

    Oh, Ein; Yoo, Tae Keun; Park, Eun-Cheol

    2013-09-13

    Blindness due to diabetic retinopathy (DR) is the major disability in diabetic patients. Although early management has shown to prevent vision loss, diabetic patients have a low rate of routine ophthalmologic examination. Hence, we developed and validated sparse learning models with the aim of identifying the risk of DR in diabetic patients. Health records from the Korea National Health and Nutrition Examination Surveys (KNHANES) V-1 were used. The prediction models for DR were constructed using data from 327 diabetic patients, and were validated internally on 163 patients in the KNHANES V-1. External validation was performed using 562 diabetic patients in the KNHANES V-2. The learning models, including ridge, elastic net, and LASSO, were compared to the traditional indicators of DR. Considering the Bayesian information criterion, LASSO predicted DR most efficiently. In the internal and external validation, LASSO was significantly superior to the traditional indicators by calculating the area under the curve (AUC) of the receiver operating characteristic. LASSO showed an AUC of 0.81 and an accuracy of 73.6% in the internal validation, and an AUC of 0.82 and an accuracy of 75.2% in the external validation. The sparse learning model using LASSO was effective in analyzing the epidemiological underlying patterns of DR. This is the first study to develop a machine learning model to predict DR risk using health records. LASSO can be an excellent choice when both discriminative power and variable selection are important in the analysis of high-dimensional electronic health records.
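
    A minimal sketch of L1-penalised (LASSO) logistic regression with development, internal, and external cohorts of the sizes quoted above; the features are simulated stand-ins for the KNHANES health-record variables, and the regularization strength is arbitrary.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(8)

        def make_cohort(n):
            X = rng.normal(size=(n, 30))           # high-dimensional record features
            y = rng.random(n) < 1 / (1 + np.exp(-(X[:, 0] + 0.8 * X[:, 1] - 1.5)))
            return X, y

        X_dev, y_dev = make_cohort(327)            # development set
        X_int, y_int = make_cohort(163)            # internal validation (KNHANES V-1)
        X_ext, y_ext = make_cohort(562)            # external validation (KNHANES V-2)

        lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
        lasso.fit(X_dev, y_dev)
        print("variables kept:", int((lasso.coef_ != 0).sum()))  # sparsity = variable selection
        for name, X, y in [("internal", X_int, y_int), ("external", X_ext, y_ext)]:
            print(f"{name} AUC = {roc_auc_score(y, lasso.predict_proba(X)[:, 1]):.2f}")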

  4. Computational Modeling of Space Physiology

    NASA Technical Reports Server (NTRS)

    Lewandowski, Beth E.; Griffin, Devon W.

    2016-01-01

    The Digital Astronaut Project (DAP), within NASA's Human Research Program, develops and implements computational modeling for use in the mitigation of human health and performance risks associated with long duration spaceflight. Over the past decade, DAP has developed models to provide insights into spaceflight-related changes to the central nervous system, cardiovascular system and the musculoskeletal system. Examples of the models and their applications include biomechanical models applied to advanced exercise device development, bone fracture risk quantification for mission planning, accident investigation, bone health standards development, and occupant protection. The International Space Station (ISS), in its role as a testing ground for long duration spaceflight, has been an important platform for obtaining human spaceflight data. DAP has used preflight, in-flight and post-flight data from short and long duration astronauts for computational model development and validation. Examples include preflight and post-flight bone mineral density data, muscle cross-sectional area, and muscle strength measurements. Results from computational modeling supplement space physiology research by informing experimental design. Using these computational models, DAP personnel can easily identify both important factors associated with a phenomenon and areas where data are lacking. This presentation will provide examples of DAP computational models, the data used in model development and validation, and applications of the models.

  5. Measuring nursing competencies in the operating theatre: instrument development and psychometric analysis using Item Response Theory.

    PubMed

    Nicholson, Patricia; Griffin, Patrick; Gillis, Shelley; Wu, Margaret; Dunning, Trisha

    2013-09-01

    The process of identifying the underlying competencies that contribute to effective nursing performance has been debated, with a lack of consensus on an accepted measurement instrument for assessing clinical performance. Although a number of methodologies are noted in the development of competency-based assessment measures, these studies are not without criticism. The primary aim of the study was to develop and validate a Performance Based Scoring Rubric, which included both analytical and holistic scales. This included examining the validity and reliability of the rubric, which was designed to measure clinical competencies in the operating theatre. The fieldwork observations of 32 nurse educators and preceptors assessing the performance of 95 instrument nurses in the operating theatre were used in the calibration of the rubric. The Rasch model, a particular model among item response models, was used to calibrate each item in the rubric in an attempt to improve the measurement properties of the scale. This is done by establishing the 'fit' of the data to the conditions demanded by the Rasch model. Acceptable reliability estimates, specifically a high Cronbach's alpha reliability coefficient (0.940), as well as empirical support for the construct and criterion validity of the rubric, were achieved. Calibration of the Performance Based Scoring Rubric using the Rasch model revealed that the fit statistics for most items were acceptable. The use of the Rasch model offers a number of advantages in developing and refining healthcare competency-based assessments, improving confidence in measuring clinical performance. The Rasch model was shown to be useful in developing and validating a competency-based assessment for measuring the competence of the instrument nurse in the operating theatre, with implications for use in other areas of nursing practice. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.
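
    A minimal sketch of the Rasch model underlying the calibration above: the probability of a score of 1 depends only on the difference between person ability and item difficulty. The abilities and difficulties below are illustrative, not the study's estimates.

        import numpy as np

        def rasch_p(theta, b):
            """P(X = 1 | person ability theta, item difficulty b)."""
            return 1.0 / (1.0 + np.exp(-(theta - b)))

        abilities = np.array([-1.0, 0.0, 1.5])        # three nurses
        difficulties = np.array([-0.5, 0.3, 1.2])     # three rubric items

        # model-implied probability matrix: rows = persons, columns = items
        expected = rasch_p(abilities[:, None], difficulties[None, :])
        print(np.round(expected, 2))

    Fit assessment compares these model-implied probabilities with observed responses; items with large standardized residuals "misfit" the Rasch model and become candidates for revision, which is the role the fit statistics play in the study above.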

  6. Predictions of the pathological response to neoadjuvant chemotherapy in patients with primary breast cancer using a data mining technique.

    PubMed

    Takada, M; Sugimoto, M; Ohno, S; Kuroi, K; Sato, N; Bando, H; Masuda, N; Iwata, H; Kondo, M; Sasano, H; Chow, L W C; Inamoto, T; Naito, Y; Tomita, M; Toi, M

    2012-07-01

    The nomogram, a standard technique that utilizes multiple characteristics to predict the efficacy of treatment and the likelihood of a specific status for an individual patient, has been used for prediction of the response to neoadjuvant chemotherapy (NAC) in breast cancer patients. The aim of this study was to develop a novel computational technique to predict the pathological complete response (pCR) to NAC in primary breast cancer patients. A mathematical model using alternating decision trees, a variant of the decision tree, was developed using 28 clinicopathological variables that were retrospectively collected from patients treated with NAC (n = 150), and validated using an independent dataset from a randomized controlled trial (n = 173). The model selected 15 variables to predict the pCR, yielding area under the receiver operating characteristic curve (AUC) values of 0.766 (95% confidence interval [CI] 0.671-0.861, P < 0.0001) in cross-validation on the training dataset and 0.787 (95% CI 0.716-0.858, P < 0.0001) on the validation dataset. Among the three subtypes of breast cancer, the luminal subgroup showed the best discrimination (AUC = 0.779, 95% CI 0.641-0.917, P = 0.0059). The developed model (AUC = 0.805, 95% CI 0.716-0.894, P < 0.0001) outperformed multivariate logistic regression (AUC = 0.754, 95% CI 0.651-0.858, P = 0.00019) on the validation datasets without missing values (n = 127). Several analyses, e.g. bootstrap analysis, revealed that the developed model was insensitive to missing values and also tolerant of distribution bias among the datasets. Our model based on clinicopathological variables showed high predictive ability for pCR. This model might improve the prediction of the response to NAC in primary breast cancer patients.
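
    Alternating decision trees are not available in scikit-learn; as a rough, openly substituted analogue, the sketch below boosts depth-1 decision stumps with AdaBoost (the library's default base learner), which shares the additive-rule structure of an ADT. The clinical variables and outcomes are simulated.

        import numpy as np
        from sklearn.ensemble import AdaBoostClassifier
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(9)

        def make_cohort(n):
            X = rng.normal(size=(n, 15))             # 15 selected variables (stand-ins)
            y = rng.random(n) < 1 / (1 + np.exp(-(X[:, 0] + X[:, 1])))
            return X, y

        X_train, y_train = make_cohort(150)          # cf. retrospective training set
        X_valid, y_valid = make_cohort(173)          # cf. independent trial dataset

        adt_like = AdaBoostClassifier(n_estimators=50, random_state=0)
        adt_like.fit(X_train, y_train)
        p = adt_like.predict_proba(X_valid)[:, 1]
        print(f"validation AUC = {roc_auc_score(y_valid, p):.2f}")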

  7. A Unified Model of Performance: Validation of its Predictions across Different Sleep/Wake Schedules

    PubMed Central

    Ramakrishnan, Sridhar; Wesensten, Nancy J.; Balkin, Thomas J.; Reifman, Jaques

    2016-01-01

    Study Objectives: Historically, mathematical models of human neurobehavioral performance developed on data from one sleep study were limited to predicting performance in similar studies, restricting their practical utility. We recently developed a unified model of performance (UMP) to predict the effects of the continuum of sleep loss—from chronic sleep restriction (CSR) to total sleep deprivation (TSD) challenges—and validated it using data from two studies of one laboratory. Here, we significantly extended this effort by validating the UMP predictions across a wide range of sleep/wake schedules from different studies and laboratories. Methods: We developed the UMP on psychomotor vigilance task (PVT) lapse data from one study encompassing four different CSR conditions (7 d of 3, 5, 7, and 9 h of sleep/night), and predicted performance in five other studies (from four laboratories), including different combinations of TSD (40 to 88 h), CSR (2 to 6 h of sleep/night), control (8 to 10 h of sleep/night), and nap (nocturnal and diurnal) schedules. Results: The UMP accurately predicted PVT performance trends across 14 different sleep/wake conditions, yielding average prediction errors between 7% and 36%, with the predictions lying within 2 standard errors of the measured data 87% of the time. In addition, the UMP accurately predicted performance impairment (average error of 15%) for schedules (TSD and naps) not used in model development. Conclusions: The unified model of performance can be used as a tool to help design sleep/wake schedules to optimize the extent and duration of neurobehavioral performance and to accelerate recovery after sleep loss. Citation: Ramakrishnan S, Wesensten NJ, Balkin TJ, Reifman J. A unified model of performance: validation of its predictions across different sleep/wake schedules. SLEEP 2016;39(1):249–262. PMID:26518594

  8. Design and validation of a model to predict early mortality in haemodialysis patients.

    PubMed

    Mauri, Joan M; Clèries, Montse; Vela, Emili

    2008-05-01

    Mortality and morbidity rates are higher in patients receiving haemodialysis therapy than in the general population. Detection of risk factors related to early death in these patients could aid clinical and administrative decision making. The aims of this study were (1) to identify risk factors (comorbidity and variables specific to haemodialysis) associated with death in the first year following the start of haemodialysis and (2) to design and validate a prognostic model to quantify the probability of death for each patient. An analysis was carried out on all patients starting haemodialysis treatment in Catalonia during the period 1997-2003 (n = 5738). The data source was the Renal Registry of Catalonia, a mandatory population registry. Patients were randomly divided into two samples: 60% (n = 3455) of the total were used to develop the prognostic model and the remaining 40% (n = 2283) to validate it. Logistic regression analysis was used to construct the model. One-year mortality in the total study population was 16.5%. The predictive model included the following variables: age, sex, primary renal disease, grade of functional autonomy, chronic obstructive pulmonary disease, malignant processes, chronic liver disease, cardiovascular disease, initial vascular access and malnutrition. The analyses showed adequate calibration for both the development sample and the validation sample (Hosmer-Lemeshow P = 0.97 and P = 0.49, respectively) as well as adequate discrimination (area under the ROC curve 0.78 in both cases). Risk factors implicated in mortality at one year following the start of haemodialysis have been determined and a prognostic model designed. The validated, easy-to-apply model quantifies individual patient risk attributable to various factors, some of them amenable to correction by directed interventions.
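
    The development/validation split and the two reported checks (Hosmer-Lemeshow calibration, ROC discrimination) can be sketched as follows. The registry data are not public, so a synthetic stand-in is used; the Hosmer-Lemeshow helper is one common decile-based formulation, not necessarily the authors' exact implementation.

    ```python
    import numpy as np
    from scipy.stats import chi2
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    def hosmer_lemeshow(y, p, g=10):
        """Hosmer-Lemeshow goodness-of-fit P value over g risk deciles."""
        edges = np.percentile(p, np.linspace(0, 100, g + 1))
        idx = np.digitize(p, edges[1:-1])  # group index 0..g-1
        stat = 0.0
        for k in range(g):
            m = idx == k
            if not m.any():
                continue
            obs, exp = y[m].sum(), p[m].sum()
            stat += (obs - exp) ** 2 / (exp * (1 - p[m].mean()) + 1e-12)
        return chi2.sf(stat, g - 2)

    # Synthetic stand-in for the registry (n = 5738, 1-year death outcome)
    rng = np.random.default_rng(1)
    X = rng.normal(size=(5738, 10))  # age, comorbidities, access type, ...
    p_true = 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1] - 2)))
    y = rng.binomial(1, p_true)

    # 60/40 development/validation split, as in the study
    X_dev, X_val, y_dev, y_val = train_test_split(
        X, y, test_size=0.4, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
    p_val = model.predict_proba(X_val)[:, 1]
    print("validation AUC:", roc_auc_score(y_val, p_val))
    print("Hosmer-Lemeshow P:", hosmer_lemeshow(y_val, p_val))
    ```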

  9. Computational solution verification and validation applied to a thermal model of a ruggedized instrumentation package

    DOE PAGES

    Scott, Sarah Nicole; Templeton, Jeremy Alan; Hough, Patricia Diane; ...

    2014-01-01

    This study details a methodology for quantification of errors and uncertainties of a finite element heat transfer model applied to a Ruggedized Instrumentation Package (RIP). The proposed verification and validation (V&V) process includes solution verification to examine errors associated with the code's solution techniques, and model validation to assess the model's predictive capability for quantities of interest. The model was subjected to mesh resolution and numerical parameter sensitivity studies to determine reasonable parameter values and to understand how they change the overall model response and performance criteria. To facilitate quantification of the uncertainty associated with the mesh, automatic meshing and mesh refining/coarsening algorithms were created and implemented on the complex geometry of the RIP. Automated software to vary model inputs was also developed to determine the solution's sensitivity to numerical and physical parameters. The model was compared with an experiment to demonstrate its accuracy and to determine the importance of both modelled and unmodelled physics in quantifying the results' uncertainty. An emphasis is placed on automating the V&V process to enable uncertainty quantification within tight development schedules.
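
    A standard ingredient of solution verification of this kind is a grid-convergence check over systematically refined meshes. The sketch below uses Richardson extrapolation and a grid convergence index (GCI) on hypothetical temperatures; it illustrates the generic technique, not the specific procedure used for the RIP model.

    ```python
    import numpy as np

    # Hypothetical peak temperatures (K) on fine, medium, coarse meshes,
    # with a constant refinement ratio r between successive meshes.
    f1, f2, f3 = 351.2, 352.8, 356.1
    r = 2.0

    p = np.log((f3 - f2) / (f2 - f1)) / np.log(r)  # observed order
    f_exact = f1 + (f1 - f2) / (r**p - 1)          # Richardson extrapolation
    gci_fine = 1.25 * abs((f2 - f1) / f1) / (r**p - 1)  # safety factor 1.25

    print(f"order p = {p:.2f}, extrapolated = {f_exact:.1f} K, "
          f"GCI = {100 * gci_fine:.2f}%")
    ```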

  10. Modelling seagrass growth and development to evaluate transplanting strategies for restoration.

    PubMed

    Renton, Michael; Airey, Michael; Cambridge, Marion L; Kendrick, Gary A

    2011-10-01

    Seagrasses are important marine plants that are under threat globally. Restoration by transplanting vegetative fragments or seedlings into areas where seagrasses have been lost is possible, but long-term trial data are limited. The goal of this study is to use available short-term data to predict long-term outcomes of transplanting seagrass. A functional-structural plant model of seagrass growth that integrates data collected from short-term trials and experiments is presented. The model was parameterized for the species Posidonia australis; a limited validation against independent data and a sensitivity analysis were conducted; and the model was then used to carry out a preliminary evaluation of different transplanting strategies. The limited validation was successful, and reasonable long-term outcomes could be predicted based only on short-term data. This approach to modelling seagrass growth and development enables long-term predictions of the outcomes of different transplanting strategies, even when empirical long-term data are difficult or impossible to collect. More validation is required to improve confidence in the model's predictions, and the inclusion of more mechanistic detail will extend the model's usefulness. Marine restoration represents a novel application of functional-structural plant modelling.
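
    To convey the flavour of projecting long-term outcomes from short-term rates, here is a deliberately minimal sketch: rhizome apices branch stochastically each year, so shoot counts can be projected beyond the trial horizon. The branching rate is a placeholder, not a measured P. australis parameter, and the authors' functional-structural model is far richer than this.

    ```python
    import random

    # Placeholder rate, not a measured parameter.
    BRANCH_PROB_PER_YEAR = 0.3

    def project_apices(n_transplants: int, years: int, seed: int = 42) -> int:
        """Project the number of rhizome apices after `years` of growth."""
        random.seed(seed)
        apices = n_transplants
        for _ in range(years):
            apices += sum(1 for _ in range(apices)
                          if random.random() < BRANCH_PROB_PER_YEAR)
        return apices

    for yrs in (1, 5, 10):
        print(yrs, "years:", project_apices(100, yrs))
    ```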

  11. Phase 1 Validation Testing and Simulation for the WEC-Sim Open Source Code

    NASA Astrophysics Data System (ADS)

    Ruehl, K.; Michelen, C.; Gunawan, B.; Bosma, B.; Simmons, A.; Lomonaco, P.

    2015-12-01

    WEC-Sim is an open source code for modeling the performance of wave energy converters (WECs) in operational waves, developed by Sandia and NREL and funded by the US DOE. The code is a time-domain modeling tool developed in MATLAB/SIMULINK using the multibody dynamics solver SimMechanics, and solves the WEC's governing equations of motion in 6 degrees of freedom using the Cummins time-domain impulse response formulation. The WEC-Sim code has undergone verification through code-to-code comparisons; however, validation of the code has been limited to publicly available experimental data sets. While these data sets provide preliminary code validation, the experimental tests were not explicitly designed for code validation and are therefore limited in their ability to validate the full functionality of the WEC-Sim code. Dedicated physical model tests for WEC-Sim validation have therefore been performed. This presentation provides an overview of the WEC-Sim validation experimental wave tank tests performed at Oregon State University's Directional Wave Basin at the Hinsdale Wave Research Laboratory. Phase 1 of experimental testing focused on device characterization and was completed in Fall 2015. Phase 2 focuses on WEC performance and is scheduled for Winter 2015/2016. These experimental tests were designed explicitly to validate the performance of the WEC-Sim code and its new feature additions. Upon completion, the WEC-Sim validation data set will be made publicly available to the wave energy community. For the physical model tests, a controllable model of a floating wave energy converter has been designed and constructed. The instrumentation includes state-of-the-art devices: pressure sensors, 6-DOF motion capture, multi-axial load cells, torque transducers, position transducers, and encoders. The model also incorporates a fully programmable power take-off (PTO) system which can be used to generate or absorb wave energy. Numerical simulations of the experiments using WEC-Sim will be presented. These simulations highlight the code features included in the latest release of WEC-Sim (v1.2), including wave directionality, nonlinear hydrostatics and hydrodynamics, user-defined wave elevation time series, state-space radiation, and WEC-Sim compatibility with BEMIO (an open source AQWA/WAMIT/NEMOH coefficient parser).
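
    For reference, the Cummins impulse-response formulation mentioned above is conventionally written as follows; this is the textbook form, and the grouping of external forces on the right-hand side is a generic choice rather than a WEC-Sim specification:

    ```latex
    (M + A_\infty)\,\ddot{X}(t)
      + \int_{0}^{t} K(t-\tau)\,\dot{X}(\tau)\,\mathrm{d}\tau
      + C\,X(t) = F_{\mathrm{ext}}(t)
    ```

    where M is the mass matrix, A_inf the added mass at infinite frequency, K the radiation impulse-response kernel, C the hydrostatic restoring matrix, X the 6-DOF displacement vector, and F_ext collects the wave excitation, power-take-off, and any other external forces.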

  12. Development and validation of a predictive model for 90-day readmission following elective spine surgery.

    PubMed

    Parker, Scott L; Sivaganesan, Ahilan; Chotai, Silky; McGirt, Matthew J; Asher, Anthony L; Devin, Clinton J

    2018-06-15

    OBJECTIVE Hospital readmissions lead to a significant increase in the total cost of care in patients undergoing elective spine surgery. Understanding factors associated with an increased risk of postoperative readmission could facilitate a reduction in such occurrences. The aims of this study were to develop and validate a predictive model for 90-day hospital readmission following elective spine surgery. METHODS All patients undergoing elective spine surgery for degenerative disease were enrolled in a prospective longitudinal registry. All 90-day readmissions were prospectively recorded. For predictive modeling, all covariates were selected by choosing those variables that were significantly associated with readmission and by incorporating other relevant variables based on clinical intuition and the Akaike information criterion. Eighty percent of the sample was randomly selected for model development and 20% for model validation. Multiple logistic regression analysis was performed with Bayesian model averaging (BMA) to model the odds of 90-day readmission. Goodness of fit was assessed via the C-statistic, that is, the area under the receiver operating characteristic curve (AUC), using the training data set. Discrimination (predictive performance) was assessed using the C-statistic, as applied to the 20% validation data set. RESULTS A total of 2803 consecutive patients were enrolled in the registry, and their data were analyzed for this study. Of this cohort, 227 (8.1%) patients were readmitted to the hospital (for any cause) within 90 days postoperatively. Variables significantly associated with an increased risk of readmission were as follows (OR [95% CI]): lumbar surgery 1.8 [1.1-2.8], government-issued insurance 2.0 [1.4-3.0], hypertension 2.1 [1.4-3.3], prior myocardial infarction 2.2 [1.2-3.8], diabetes 2.5 [1.7-3.7], and coagulation disorder 3.1 [1.6-5.8]. These variables, in addition to others determined a priori to be clinically relevant, comprised 32 inputs in the predictive model constructed using BMA. The AUC value for the training data set was 0.77 for model development and 0.76 for model validation. CONCLUSIONS Identification of high-risk patients is feasible with the novel predictive model presented herein. Appropriate allocation of resources to reduce the postoperative incidence of readmission may reduce the readmission rate and the associated health care costs.
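
    To make the reported odds ratios concrete, the sketch below assembles them into a log-odds score and converts it to a probability. This is illustrative only: the published model has 32 inputs and an intercept that the abstract does not give, so the `intercept` default here is a hypothetical placeholder pitched near the cohort's 8.1% baseline rate, and the function cannot reproduce the authors' predictions.

    ```python
    import numpy as np

    # Log odds ratios from the abstract (point estimates only).
    LOG_ODDS = {
        "lumbar_surgery": np.log(1.8),
        "government_insurance": np.log(2.0),
        "hypertension": np.log(2.1),
        "prior_mi": np.log(2.2),
        "diabetes": np.log(2.5),
        "coagulation_disorder": np.log(3.1),
    }

    def readmission_probability(risk_factors, intercept=-2.8):
        """90-day readmission probability from a set of risk-factor names.

        `intercept` is a hypothetical placeholder; the true value and the
        remaining 26 model inputs are not reported in the abstract.
        """
        z = intercept + sum(LOG_ODDS[f] for f in risk_factors)
        return 1.0 / (1.0 + np.exp(-z))

    print(f"{readmission_probability({'diabetes', 'hypertension'}):.1%}")
    ```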

  13. The specification-based validation of reliable multicast protocol: Problem Report. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Wu, Yunqing

    1995-01-01

    Reliable Multicast Protocol (RMP) is a communication protocol that provides an atomic, totally ordered, reliable multicast service on top of unreliable IP multicasting. In this report, we develop formal models for RMP using existing automated verification systems and perform validation on the formal RMP specifications. The validation analysis helped identify some minor specification and design problems. We also use the formal models of RMP to generate a test suite for conformance testing of the implementation. Throughout the process of RMP development, we follow an iterative, interactive approach that emphasizes concurrent and parallel progress of the implementation and verification processes. Through this approach, we incorporate formal techniques into our development process, promote a common understanding of the protocol, increase the reliability of our software, and maintain high fidelity between the specifications of RMP and its implementation.

  14. Validating soil denitrification models based on laboratory N2 and N2O fluxes and underlying processes derived by stable isotope approaches

    NASA Astrophysics Data System (ADS)

    Well, Reinhard; Böttcher, Jürgen; Butterbach-Bahl, Klaus; Dannenmann, Michael; Deppe, Marianna; Dittert, Klaus; Dörsch, Peter; Horn, Marcus; Ippisch, Olaf; Mikutta, Robert; Müller, Carsten; Müller, Christoph; Senbayram, Mehmet; Vogel, Hans-Jörg; Wrage-Mönnig, Nicole

    2016-04-01

    Robust denitrification data suitable for validating soil N2 fluxes in denitrification models are scarce due to methodological limitations and the extreme spatio-temporal heterogeneity of denitrification in soils. Numerical models have become essential tools to predict denitrification at different scales. Model performance can be tested against total gaseous flux (NO + N2O + N2), against individual denitrification products (e.g. N2O and/or NO), or against the effect of denitrification controls (e.g. C availability, respiration, diffusivity, anaerobic volume, etc.). While there are numerous examples of validating N2O fluxes, there are neither robust field data on N2 fluxes nor sufficiently resolved measurements of the control factors used as state variables in the models. To the best of our knowledge, only one validation of modelled soil N2 flux has been published to date, using a laboratory data set to validate an ecosystem model. Hence there is a need for validation data at both the mesocosm and the field scale, including validation of individual denitrification controls. Here we present the concept for collecting model validation data that is part of the DFG research unit "Denitrification in Agricultural Soils: Integrated Control and Modelling at Various Scales (DASIM)" starting this year. We will use novel approaches, including analysis of stable isotopes, microbial communities, pore structure and organic matter fractions, to provide denitrification data sets comprising as much detail on activity and regulation as possible, as a basis to validate existing denitrification models and to calibrate the new models that are applied and/or developed by DASIM subprojects. The basic idea is to simulate "field-like" conditions as far as possible in an automated mesocosm system without plants, in order to mimic processes in the soil compartments not significantly influenced by the rhizosphere (rhizosphere soils are studied by other DASIM projects). To allow model testing over a wide range of conditions, denitrification control factors will be varied in the initial settings (pore volume, plant residues, mineral N, pH) and also over time, where moisture, temperature, and mineral N will be manipulated according to typical time patterns in the field. This will be realized by including precipitation events, fertilization (via irrigation), drainage (via water potential) and temperature changes in the course of the incubations. Moreover, oxygen concentration will be varied to simulate anaerobic events. These data will be used to calibrate the newly developed DASIM models as well as existing denitrification models. One goal of DASIM is to create a public database as a joint basis for model testing by denitrification modellers. We therefore invite contributions of suitable data sets from the scientific community; requirements will be briefly outlined.

  15. Modeling the Earth System, volume 3

    NASA Technical Reports Server (NTRS)

    Ojima, Dennis (Editor)

    1992-01-01

    The topics covered fall under the following headings: critical gaps in the Earth system conceptual framework; development needs for simplified models; and validating Earth system models and their subcomponents.

  16. High-fidelity Simulation of Jet Noise from Rectangular Nozzles [Large Eddy Simulation (LES) Model for Noise Reduction in Advanced Jet Engines and Automobiles]

    NASA Technical Reports Server (NTRS)

    Sinha, Neeraj

    2014-01-01

    This Phase II project validated a state-of-the-art LES model, coupled with a Ffowcs Williams-Hawkings (FW-H) far-field acoustic solver, to support the development of advanced engine concepts. These concepts include innovative flow control strategies to attenuate jet noise emissions. The end-to-end LES/ FW-H noise prediction model was demonstrated and validated by applying it to rectangular nozzle designs with a high aspect ratio. The model also was validated against acoustic and flow-field data from a realistic jet-pylon experiment, thereby significantly advancing the state of the art for LES.
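
    For context, an FW-H solver propagates near-field LES data to the far field via the Ffowcs Williams-Hawkings equation, which in its permeable-surface form reads as follows (a standard statement of the equation, not a detail taken from this project):

    ```latex
    \left(\frac{1}{c_0^{2}}\frac{\partial^{2}}{\partial t^{2}} - \nabla^{2}\right) p'(\mathbf{x},t)
      = \frac{\partial^{2}}{\partial x_i\,\partial x_j}\!\left[T_{ij}\,H(f)\right]
      - \frac{\partial}{\partial x_i}\!\left[L_i\,\delta(f)\right]
      + \frac{\partial}{\partial t}\!\left[Q\,\delta(f)\right]
    ```

    where f = 0 defines the integration surface, H and delta are the Heaviside and Dirac functions, T_ij is the Lighthill stress tensor (quadrupole term), L_i the surface loading (dipole term), and Q the mass flux through the surface (monopole term).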

  17. QSAR study of curcumine derivatives as HIV-1 integrase inhibitors.

    PubMed

    Gupta, Pawan; Sharma, Anju; Garg, Prabha; Roy, Nilanjan

    2013-03-01

    A QSAR study was performed on curcumine derivatives as HIV-1 integrase inhibitors using multiple linear regression. A statistically significant model was developed, with a squared correlation coefficient (r²) of 0.891 and a cross-validated r² (r²cv) of 0.825. The developed model revealed that electronic properties, shape, size, geometry, substitution information and hydrophilicity were important atomic properties for determining the inhibitory activity of these molecules. The model was also tested successfully for external validation (r²pred = 0.849) and passed Tropsha's test for model predictability. Furthermore, a domain analysis was carried out to evaluate the prediction reliability for external set molecules. The model was statistically robust and had good predictive power, and can be utilized for screening of new molecules.
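
    The two headline statistics — a conventional r² and a leave-one-out cross-validated r² — follow the usual QSAR recipe. A minimal sketch on hypothetical descriptors (the paper's dataset is not reproduced here):

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    # Hypothetical descriptor matrix (rows = molecules, columns = atomic
    # properties such as electronic and hydrophilicity terms) and activities.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(30, 5))
    y = X @ np.array([0.8, -0.5, 0.3, 0.1, 0.6]) + rng.normal(scale=0.3, size=30)

    model = LinearRegression().fit(X, y)
    r2 = model.score(X, y)  # conventional squared correlation coefficient

    # Leave-one-out cross-validated r^2, the r^2cv quoted in the abstract
    y_loo = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
    r2_cv = 1 - np.sum((y - y_loo) ** 2) / np.sum((y - y.mean()) ** 2)
    print(f"r2 = {r2:.3f}, r2_cv = {r2_cv:.3f}")
    ```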

  18. Analysis, testing, and evaluation of faulted and unfaulted Wye, Delta, and open Delta connected electromechanical actuators

    NASA Technical Reports Server (NTRS)

    Nehl, T. W.; Demerdash, N. A.

    1983-01-01

    Mathematical models capable of simulating the transient, steady state, and faulted performance characteristics of various brushless dc machine-PSA (power switching assembly) configurations were developed. These systems are intended for possible future use as prime movers in EMAs (electromechanical actuators) for flight control applications. The machine-PSA configurations include wye, delta, and open-delta connected systems. The research performed under this contract was initially broken down into the following tasks: development of mathematical models for the various machine-PSA configurations; experimental validation of the models for failure modes; experimental validation of the models for shorted-turn failure modes; a tradeoff study; and documentation of results and methodology.

  19. Face, content, and construct validity of human placenta as a haptic training tool in neurointerventional surgery.

    PubMed

    Ribeiro de Oliveira, Marcelo Magaldi; Nicolato, Arthur; Santos, Marcilea; Godinho, Joao Victor; Brito, Rafael; Alvarenga, Alexandre; Martins, Ana Luiza Valle; Prosdocimi, André; Trivelato, Felipe Padovani; Sabbagh, Abdulrahman J; Reis, Augusto Barbosa; Maestro, Rolando Del

    2016-05-01

    OBJECT The development of neurointerventional treatments of central nervous system disorders has resulted in the need for adequate training environments for novice interventionalists. Virtual simulators offer anatomical definition but lack adequate tactile feedback. Animal models, which provide more lifelike training, require an appropriate infrastructure base. The authors describe a training model for neurointerventional procedures using the human placenta (HP), which affords haptic training with significantly fewer resource requirements, and discuss its validation. METHODS Twelve HPs were prepared for simulated endovascular procedures. Training exercises performed by interventional neuroradiologists and novice fellows were placental angiography, stent placement, aneurysm coiling, and intravascular liquid embolic agent injection. RESULTS The endovascular training exercises proposed can be easily reproduced in the HP. Face, content, and construct validity were assessed by 6 neurointerventional radiologists and 6 novice fellows in interventional radiology. CONCLUSIONS The use of HP provides an inexpensive training model for the training of neurointerventionalists. Preliminary validation results show that this simulation model has face and content validity and has demonstrated construct validity for the interventions assessed in this study.

  20. Development and validation of a predictive risk model for all-cause mortality in type 2 diabetes.

    PubMed

    Robinson, Tom E; Elley, C Raina; Kenealy, Tim; Drury, Paul L

    2015-06-01

    Type 2 diabetes is common and is associated with an approximately 80% increase in the rate of mortality. Management decisions may be assisted by an estimate of the patient's absolute risk of adverse outcomes, including death. This study aimed to derive a predictive risk model for all-cause mortality in type 2 diabetes. We used primary care data from a large national multi-ethnic cohort of patients with type 2 diabetes in New Zealand, linked to mortality records, to develop a predictive risk model for the 5-year risk of mortality. We then validated this model using information from a separate cohort of patients with type 2 diabetes. In total, 26,864 people were included in the development cohort, with a median follow-up time of 9.1 years. We developed three models, initially using demographic information and then adding progressively more clinical detail. The final model, which also included markers of renal disease, gave the best prediction of all-cause mortality, with a C-statistic of 0.80 in the development cohort and 0.79 in the validation cohort (7610 people), and was well calibrated. Ethnicity was a major factor, with hazard ratios of 1.37 for indigenous Maori, 0.41 for East Asian and 0.55 for Indo-Asian patients compared with European patients (P < 0.001). We have developed a model, using information usually available in primary care, that provides a good assessment of a patient's risk of death. Results are similar to models previously published from smaller cohorts in other countries and apply to a wider range of patient ethnic groups.
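
    The abstract reports hazard ratios and a C-statistic, which suggests a proportional-hazards fit. A minimal sketch with the lifelines library follows; every value is a toy stand-in (the NZ cohort is not public), and the three columns merely mirror the kinds of predictors named above — demographics plus a renal marker.

    ```python
    import pandas as pd
    from lifelines import CoxPHFitter

    # Toy stand-in rows: `duration` is years of follow-up, `death` the
    # event indicator, `egfr` a renal marker (mL/min/1.73 m^2).
    df = pd.DataFrame({
        "age":      [62, 71, 55, 68, 74, 59, 66, 80],
        "male":     [1, 0, 1, 1, 0, 0, 1, 0],
        "egfr":     [75, 48, 52, 90, 65, 82, 55, 30],
        "duration": [9.1, 4.2, 10.0, 7.5, 2.8, 9.9, 6.0, 1.5],
        "death":    [0, 1, 0, 0, 1, 0, 1, 1],
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="duration", event_col="death")
    cph.print_summary()            # hazard ratios per covariate
    print(cph.concordance_index_)  # C-statistic, the metric quoted above
    ```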
