van Soest, Johan; Meldolesi, Elisa; van Stiphout, Ruud; Gatta, Roberto; Damiani, Andrea; Valentini, Vincenzo; Lambin, Philippe; Dekker, Andre
2017-09-01
Multiple models have been developed to predict pathologic complete response (pCR) in locally advanced rectal cancer patients. Unfortunately, validation of these models normally omits the implications of cohort differences for prediction model performance. In this work, we perform a prospective validation of three pCR models, including an assessment of whether the validation targets transferability or reproducibility (cohort differences) of the given models. We applied a novel methodology, the cohort differences model, to predict whether a patient belongs to the training or to the validation cohort. If the cohort differences model performs well, it suggests a large difference in cohort characteristics, meaning we would validate the transferability of the model rather than its reproducibility. We tested our method in a prospective validation of three existing models for pCR prediction in 154 patients. Our results showed a large difference between training and validation cohorts for one of the three tested models [area under the receiver operating characteristic curve (AUC) of the cohort differences model: 0.85], signaling that the validation leans towards transferability. Two of the three models had a lower AUC in validation (0.66 and 0.58); one model showed a higher AUC in the validation cohort (0.70). We have successfully applied a new methodology in the validation of three prediction models, which allows us to indicate whether a validation targeted transferability (large differences between training and validation cohorts) or reproducibility (small cohort differences). © 2017 American Association of Physicists in Medicine.
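The cohort-differences idea above can be sketched as a classifier trained to tell the two cohorts apart: an AUC near 0.5 suggests similar cohorts (validating reproducibility), while a high AUC suggests large cohort differences (validating transferability). The synthetic cohorts and the plain logistic regression below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def fit_logistic(X, y, lr=0.1, n_iter=2000):
    """Minimal logistic regression by gradient descent (no regularization)."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

rng = np.random.default_rng(0)
# Two cohorts that differ in the distribution of their covariates:
X_train_cohort = rng.normal(0.0, 1.0, size=(150, 3))
X_valid_cohort = rng.normal(1.0, 1.0, size=(150, 3))
X = np.vstack([X_train_cohort, X_valid_cohort])
X = np.hstack([np.ones((len(X), 1)), X])          # intercept column
y = np.r_[np.zeros(150), np.ones(150)]            # 1 = validation cohort

w = fit_logistic(X, y)
scores = 1.0 / (1.0 + np.exp(-X @ w))
print(round(auc(y, scores), 2))                   # well above 0.5 here
```

With overlapping cohorts the AUC would instead hover near 0.5, indicating that a validation on such data probes reproducibility rather than transferability.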
Empirical validation of an agent-based model of wood markets in Switzerland
Hilty, Lorenz M.; Lemm, Renato; Thees, Oliver
2018-01-01
We present an agent-based model of wood markets and show our efforts to validate this model using empirical data from different sources, including interviews, workshops, experiments, and official statistics. Our own surveys closed gaps where data were not available. Our approach to model validation used a variety of techniques, including the replication of historical production amounts, prices, and survey results, as well as a historical case study of a large sawmill that entered the market and became insolvent only a few years later. Validating the model using this case provided additional insights, showing how the model can be used to simulate scenarios of resource availability and resource allocation. We conclude that the outcome of this rigorous validation qualifies the model to simulate scenarios concerning resource availability and allocation in our study region. PMID:29351300
Model Checking Verification and Validation at JPL and the NASA Fairmont IV&V Facility
NASA Technical Reports Server (NTRS)
Schneider, Frank; Easterbrook, Steve; Callahan, Jack; Montgomery, Todd
1999-01-01
We show how a technology transfer effort was carried out. The successful use of model checking on a pilot JPL flight project demonstrates the usefulness and efficacy of the approach. The pilot project was used to model a complex spacecraft controller. Software design and implementation validation were carried out successfully. To suggest future applications, we also show how the implementation validation step can be automated. The effort was followed by the formal introduction of the modeling technique as part of the JPL Quality Assurance process.
Statistical validation of normal tissue complication probability models.
Xu, Cheng-Jian; van der Schaaf, Arjen; Van't Veld, Aart A; Langendijk, Johannes A; Schilstra, Cornelis
2012-09-01
To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use. Copyright © 2012 Elsevier Inc. All rights reserved.
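The permutation-test idea described above can be sketched as follows: refit the model on outcome-permuted data to build a null distribution of the performance metric, then locate the observed performance within it. The ordinary least-squares R² below is a stand-in for the penalized LASSO NTCP models (an illustrative assumption, not the authors' pipeline).

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 80, 5
X = rng.normal(size=(n, p))
y = X[:, 0] * 1.5 + rng.normal(size=n)            # outcome truly depends on X[:, 0]

def fit_score(X, y):
    """R^2 of an ordinary least-squares fit (illustrative performance metric)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

observed = fit_score(X, y)
# Null distribution: the same fitting procedure on permuted outcomes.
null = np.array([fit_score(X, rng.permutation(y)) for _ in range(500)])
p_value = (1 + (null >= observed).sum()) / (1 + len(null))
print(round(p_value, 3))
```

A small p-value indicates the model's apparent performance exceeds what the fitting procedure achieves on noise, which is the statistical-significance check the abstract recommends before clinical use.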
Validity of the Eating Attitude Test among Exercisers.
Lane, Helen J; Lane, Andrew M; Matheson, Hilary
2004-12-01
Theory testing and construct measurement are inextricably linked. To date, no published research has examined the factorial validity of an existing eating attitude inventory for use with exercisers. The Eating Attitude Test (EAT) is a 26-item measure that yields a single index of disordered eating attitudes. The original factor analysis showed three interrelated factors: dieting behavior (13 items), oral control (7 items), and bulimia nervosa-food preoccupation (6 items). The primary purpose of the study was to examine the factorial validity of the EAT among a sample of exercisers. The second purpose was to investigate relationships between eating attitude scores and selected psychological constructs. In stage one, 598 regular exercisers completed the EAT. Confirmatory factor analysis (CFA) was used to test the single-factor model, a three-factor model, and a four-factor model that distinguished bulimia from food preoccupation. CFA of the single-factor model (RCFI = 0.66, RMSEA = 0.10) and the three-factor model (RCFI = 0.74, RMSEA = 0.09) showed poor model fit. There was marginal fit for the four-factor model (RCFI = 0.91, RMSEA = 0.06). Results indicated that five items showed poor factor loadings. After these five items were discarded, the three models were re-analyzed. CFA results indicated that the single-factor model (RCFI = 0.76, RMSEA = 0.10) and the three-factor model (RCFI = 0.82, RMSEA = 0.08) showed poor fit. CFA results for the four-factor model showed acceptable fit indices (RCFI = 0.98, RMSEA = 0.06). Stage two explored relationships between EAT scores, mood, self-esteem, and motivational indices toward exercise in terms of self-determination, enjoyment, and competence. Correlation results indicated that depressed mood scores positively correlated with bulimia and dieting scores. Further, dieting was inversely related to self-determination toward exercising.
Collectively, findings suggest that a 21-item four-factor model shows promising validity coefficients among exercise participants, and that future research is needed to investigate eating attitudes among samples of exercisers. Key points:
- Validity of psychometric measures should be thoroughly investigated. Researchers should not assume that a scale validated on one sample will show the same validity coefficients in a different population.
- The Eating Attitude Test is a commonly used scale. The present study shows a revised 21-item scale was suitable for exercisers.
- Researchers using the Eating Attitude Test should use subscales of dieting, oral control, food preoccupation, and bulimia.
- Future research should involve qualitative techniques and interview exercise participants to explore the nature of eating attitudes.
Wang, Wenyi; Kim, Marlene T.; Sedykh, Alexander
2015-01-01
Purpose: Experimental blood-brain barrier (BBB) permeability models for drug molecules are expensive and time-consuming. As alternative methods, several traditional quantitative structure-activity relationship (QSAR) models have been developed previously. In this study, we aimed to improve the predictivity of traditional QSAR BBB permeability models by employing relevant public bio-assay data in the modeling process. Methods: We compiled a BBB permeability database consisting of 439 unique compounds from various resources. The database was split into a modeling set of 341 compounds and a validation set of 98 compounds. A consensus QSAR modeling workflow was employed on the modeling set to develop various QSAR models. A five-fold cross-validation approach was used to validate the developed models, and the resulting models were used to predict the external validation set compounds. Furthermore, we used previously published membrane transporter models to generate relevant transporter profiles for target compounds. The transporter profiles were used as additional biological descriptors to develop hybrid QSAR BBB models. Results: The consensus QSAR models have R2 = 0.638 for five-fold cross-validation and R2 = 0.504 for external validation. The consensus model developed by pooling chemical and transporter descriptors showed better predictivity (R2 = 0.646 for five-fold cross-validation and R2 = 0.526 for external validation). Moreover, several external bio-assays that correlate with BBB permeability were identified using our automatic profiling tool. Conclusions: The BBB permeability models developed in this study can be useful for early evaluation of new compounds (e.g., new drug candidates). The combination of chemical and biological descriptors shows a promising direction for improving current traditional QSAR models. PMID:25862462
Identifying model error in metabolic flux analysis - a generalized least squares approach.
Sokolenko, Stanislav; Quattrociocchi, Marco; Aucoin, Marc G
2016-09-13
The estimation of intracellular flux through traditional metabolic flux analysis (MFA) using an overdetermined system of equations is a well established practice in metabolic engineering. Despite the continued evolution of the methodology since its introduction, there has been little focus on validation and identification of poor model fit outside of identifying "gross measurement error". The growing complexity of metabolic models, which are increasingly generated from genome-level data, has necessitated robust validation that can directly assess model fit. In this work, MFA calculation is framed as a generalized least squares (GLS) problem, highlighting the applicability of the common t-test for model validation. To differentiate between measurement and model error, we simulate ideal flux profiles directly from the model, perturb them with estimated measurement error, and compare their validation to real data. Application of this strategy to an established Chinese Hamster Ovary (CHO) cell model shows how fluxes validated by traditional means may be largely non-significant due to a lack of model fit. With further simulation, we explore how t-test significance relates to calculation error and show that fluxes found to be non-significant have 2-4 fold larger error (if measurement uncertainty is in the 5-10 % range). The proposed validation method goes beyond traditional detection of "gross measurement error" to identify lack of fit between model and data. Although the focus of this work is on t-test validation and traditional MFA, the presented framework is readily applicable to other regression analysis methods and MFA formulations.
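The GLS framing described above can be sketched on a toy system: with measurement covariance Sigma, the flux estimate is v_hat = (S^T Sigma^-1 S)^-1 S^T Sigma^-1 m, and each flux receives a t-statistic from its standard error. The small measurement map and error magnitudes below are made up for illustration; they are not the CHO cell model from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
# Linear map from fluxes to measured quantities (illustrative, 4 measurements, 2 fluxes):
S = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [1.0, -1.0]])
true_v = np.array([2.0, 0.5])
Sigma = np.diag([0.1, 0.1, 0.2, 0.2]) ** 2        # measurement covariance (variances)
m = S @ true_v + rng.multivariate_normal(np.zeros(4), Sigma)

# Generalized least squares estimate and its covariance:
W = np.linalg.inv(Sigma)
cov_v = np.linalg.inv(S.T @ W @ S)
v_hat = cov_v @ S.T @ W @ m
t_stats = v_hat / np.sqrt(np.diag(cov_v))         # t-statistic per flux
print(np.round(v_hat, 2), np.round(t_stats, 1))
```

In the paper's strategy, fluxes whose t-statistics are non-significant flag a lack of fit between model and data rather than mere "gross measurement error".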
Campbell, J Q; Coombs, D J; Rao, M; Rullkoetter, P J; Petrella, A J
2016-09-06
The purpose of this study was to seek broad verification and validation of human lumbar spine finite element models created using a previously published automated algorithm. The automated algorithm takes segmented CT scans of lumbar vertebrae, automatically identifies important landmarks and contact surfaces, and creates a finite element model. Mesh convergence was evaluated by examining changes in key output variables in response to mesh density. Semi-direct validation was performed by comparing experimental results for a single specimen to the automated finite element model results for that specimen with calibrated material properties from a prior study. Indirect validation was based on a comparison of results from automated finite element models of 18 individual specimens, all using one set of generalized material properties, to a range of data from the literature. A total of 216 simulations were run and compared to 186 experimental data ranges in all six primary bending modes up to 7.8 Nm with follower loads up to 1000 N. Mesh convergence results showed less than a 5% difference in key variables when the original mesh density was doubled. The semi-direct validation results showed that the automated method produced results comparable to manual finite element modeling methods. The indirect validation results showed a wide range of outcomes due to variations in the geometry alone. The studies showed that the automated models can be used to reliably evaluate lumbar spine biomechanics, specifically within our intended context of use: in pure bending modes, under relatively low non-injurious simulated in vivo loads, to predict torque rotation response, disc pressures, and facet forces. Copyright © 2016 Elsevier Ltd. All rights reserved.
DOT National Transportation Integrated Search
2006-01-01
A previous study developed a procedure for microscopic simulation model calibration and validation and evaluated the procedure via two relatively simple case studies using three microscopic simulation models. Results showed that default parameters we...
Early Prediction of Intensive Care Unit-Acquired Weakness: A Multicenter External Validation Study.
Witteveen, Esther; Wieske, Luuk; Sommers, Juultje; Spijkstra, Jan-Jaap; de Waard, Monique C; Endeman, Henrik; Rijkenberg, Saskia; de Ruijter, Wouter; Sleeswijk, Mengalvio; Verhamme, Camiel; Schultz, Marcus J; van Schaik, Ivo N; Horn, Janneke
2018-01-01
An early diagnosis of intensive care unit-acquired weakness (ICU-AW) is often not possible due to impaired consciousness. To avoid a diagnostic delay, we previously developed a prediction model, based on single-center data from 212 patients (development cohort), to predict ICU-AW at 2 days after ICU admission. The objective of this study was to investigate the external validity of the original prediction model in a new, multicenter cohort and, if necessary, to update the model. Newly admitted ICU patients who were mechanically ventilated at 48 hours after ICU admission were included. Predictors were prospectively recorded, and the outcome ICU-AW was defined by an average Medical Research Council score <4. In the validation cohort, consisting of 349 patients, we analyzed performance of the original prediction model by assessment of calibration and discrimination. Additionally, we updated the model in this validation cohort. Finally, we evaluated a new prediction model based on all patients of the development and validation cohort. Of 349 analyzed patients in the validation cohort, 190 (54%) developed ICU-AW. Both model calibration and discrimination of the original model were poor in the validation cohort. The area under the receiver operating characteristics curve (AUC-ROC) was 0.60 (95% confidence interval [CI]: 0.54-0.66). Model updating methods improved calibration but not discrimination. The new prediction model, based on all patients of the development and validation cohort (total of 536 patients) had a fair discrimination, AUC-ROC: 0.70 (95% CI: 0.66-0.75). The previously developed prediction model for ICU-AW showed poor performance in a new independent multicenter validation cohort. Model updating methods improved calibration but not discrimination. The newly derived prediction model showed fair discrimination. This indicates that early prediction of ICU-AW is still challenging and needs further attention.
Psychometric Properties and Validation of the Arabic Social Media Addiction Scale.
Al-Menayes, Jamal
2015-01-01
This study investigated the psychometric properties of the Arabic version of the SMAS. SMAS is a variant of IAT customized to measure addiction to social media instead of the Internet as a whole. Using a self-report instrument on a cross-sectional sample of undergraduate students, the results revealed the following. First, the exploratory factor analysis showed that a three-factor model fits the data well. Second, concurrent validity analysis showed the SMAS to be a valid measure of social media addiction. However, further studies and data should verify the hypothesized model. Finally, this study showed that the Arabic version of the SMAS is a valid and reliable instrument for use in measuring social media addiction in the Arab world.
NASA Astrophysics Data System (ADS)
Nafsiati Astuti, Rini
2018-04-01
Argumentation skill is the ability to compose and defend arguments consisting of claims, supporting evidence, and reasons. Argumentation is an important skill students need to face the challenges of globalization in the 21st century. It is not an ability that develops by itself alongside a person's physical development; it must be cultivated through deliberate stimuli that require a person to argue. Therefore, teachers should develop students' argumentation skills in classroom science learning. The purpose of this study is to obtain an innovative learning model that is valid in terms of content and construct for improving the argumentation skills and concept understanding of junior high school students. Content validity and construct validity were assessed through a Focus Group Discussion (FGD), using content and construct validation sheets, a model book, a learning video, and a set of learning aids for one meeting. Assessment results from three experts showed that the developed learning model was in the valid category. The validity assessment shows that the developed learning model meets the content requirements (student needs, state of the art, and a strong theoretical and empirical foundation) and construct validity, with a coherent connection between the syntax stages and components of the learning model, so that it can be applied in classroom activities.
Verification and Validation of EnergyPlus Phase Change Material Model for Opaque Wall Assemblies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tabares-Velasco, P. C.; Christensen, C.; Bianchi, M.
2012-08-01
Phase change materials (PCMs) represent a technology that may reduce peak loads and HVAC energy consumption in buildings. A few building energy simulation programs have the capability to simulate PCMs, but their accuracy has not been completely tested. This study shows the procedure used to verify and validate the PCM model in EnergyPlus using an approach similar to that dictated by ASHRAE Standard 140, which consists of analytical verification, comparative testing, and empirical validation. This process was valuable, as two bugs were identified and fixed in the PCM model, and version 7.1 of EnergyPlus will have a validated PCM model. Preliminary results using whole-building energy analysis show that careful analysis should be done when designing PCMs in homes, as their thermal performance depends on several variables such as PCM properties and location in the building envelope.
Kumar, Y Kiran; Mehta, Shashi Bhushan; Ramachandra, Manjunath
2017-01-01
The purpose of this work is to provide validation methods for evaluating the hemodynamic assessment of cerebral arteriovenous malformation (CAVM). This article emphasizes the importance of validating noninvasive measurements for CAVM patients, which are designed using lumped models for complex vessel structures. The validation of the hemodynamic assessment is based on invasive clinical measurements and cross-validation against the validated Philips proprietary software Qflow and 2D Perfusion. The modeling results are validated for 30 CAVM patients at 150 vessel locations. Mean flow, diameter, and pressure were compared between modeling results and clinical/cross-validation measurements using an independent two-tailed Student t test. Exponential regression analysis was used to assess the relationships among blood flow, vessel diameter, and pressure. Univariate analysis of the relationships between vessel diameter, vessel cross-sectional area, AVM volume, AVM pressure, and AVM flow was performed with linear or exponential regression. Modeling results were compared with clinical measurements from vessel locations in cerebral regions, and the model was cross-validated against the validated Philips proprietary software Qflow and 2D Perfusion. Our results show that the modeling results and clinical results match closely, with only small deviations. In this article, we have validated our modeling results against clinical measurements. A new approach for cross-validation is proposed, demonstrating the accuracy of our results against a validated product in a clinical environment.
Validation of recent geopotential models in Tierra Del Fuego
NASA Astrophysics Data System (ADS)
Gomez, Maria Eugenia; Perdomo, Raul; Del Cogliano, Daniel
2017-10-01
This work presents a validation study of global geopotential models (GGM) in the region of Fagnano Lake, located in the southern Andes. This is an excellent area for this type of validation because it is surrounded by the Andes Mountains, and there is no terrestrial gravity or GNSS/levelling data. However, there are mean lake level (MLL) observations, and its surface is assumed to be almost equipotential. Furthermore, in this article, we propose improved geoid solutions through the Residual Terrain Modelling (RTM) approach. Using a global geopotential model, the results achieved allow us to conclude that it is possible to use this technique to extend an existing geoid model to those regions that lack any information (neither gravimetric nor GNSS/levelling observations). As GGMs have evolved, our results have improved progressively. While the validation of EGM2008 with MLL data shows a standard deviation of 35 cm, GOCO05C shows a deviation of 13 cm, similar to the results obtained on land.
Combat Simulation Using Breach Computer Language
1979-09-01
BREACH is a simulation and weapon system analysis computer language. Two types of models were constructed: a stochastic duel and a dynamic engagement model. The duel model validates the BREACH approach by comparing results with mathematical solutions. The dynamic model shows the capability of BREACH to represent a dynamic assault.
On the validity of the cosmic no-hair conjecture in an anisotropic inflationary model
NASA Astrophysics Data System (ADS)
Do, Tuan Q.
2018-05-01
We present the main results of our recent investigations on the validity of the cosmic no-hair conjecture, proposed by Hawking and his colleagues long ago, in the framework of an anisotropic inflationary model proposed by Kanno, Soda, and Watanabe. We show that the cosmic no-hair conjecture appears to be generally violated in the Kanno-Soda-Watanabe model for both canonical and non-canonical scalar fields, due to the existence of a non-trivial coupling term between the scalar and electromagnetic fields. However, we also show that the validity of the cosmic no-hair conjecture is restored once an unusual scalar field called the phantom field, whose kinetic energy term is negative definite, is introduced into the Kanno-Soda-Watanabe model.
Cross-validation pitfalls when selecting and assessing regression and classification models.
Krstajic, Damjan; Buturovic, Ljubomir J; Leahy, David E; Thomas, Simon
2014-03-29
We address the problem of selecting and assessing classification and regression models using cross-validation. Current state-of-the-art methods can yield models with high variance, rendering them unsuitable for a number of practical applications including QSAR. In this paper we describe and evaluate best practices which improve reliability and increase confidence in selected models. A key operational component of the proposed methods is cloud computing which enables routine use of previously infeasible approaches. We describe in detail an algorithm for repeated grid-search V-fold cross-validation for parameter tuning in classification and regression, and we define a repeated nested cross-validation algorithm for model assessment. As regards variable selection and parameter tuning we define two algorithms (repeated grid-search cross-validation and double cross-validation), and provide arguments for using the repeated grid-search in the general case. We show results of our algorithms on seven QSAR datasets. The variation of the prediction performance, which is the result of choosing different splits of the dataset in V-fold cross-validation, needs to be taken into account when selecting and assessing classification and regression models. We demonstrate the importance of repeating cross-validation when selecting an optimal model, as well as the importance of repeating nested cross-validation when assessing a prediction error.
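The split-to-split variance the authors highlight can be seen in a minimal sketch of repeated V-fold cross-validation: repeating the CV with different random splits yields a spread of performance estimates rather than a single number. Ridge regression with a fixed penalty stands in here for the tuned QSAR models (an illustrative assumption, not the paper's algorithms).

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, V = 60, 4, 5
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, -1.0, 0.5, 0.0]) + rng.normal(scale=0.5, size=n)

def cv_mse(X, y, V, rng, alpha=1.0):
    """Mean squared error from one random V-fold split, ridge regression per fold."""
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, V)
    errs = []
    for k in range(V):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(V) if j != k])
        A = X[train].T @ X[train] + alpha * np.eye(X.shape[1])
        beta = np.linalg.solve(A, X[train].T @ y[train])
        errs.append(np.mean((y[test] - X[test] @ beta) ** 2))
    return float(np.mean(errs))

estimates = [cv_mse(X, y, V, rng) for _ in range(20)]   # 20 repeats, 20 different splits
print(round(min(estimates), 2), round(max(estimates), 2))
```

Averaging over repeats (and nesting a second CV loop for assessment, as the paper defines) reduces the dependence of the reported error on any one arbitrary split.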
NASA Astrophysics Data System (ADS)
Lufri, L.; Fitri, R.; Yogica, R.
2018-04-01
The purpose of this study is to produce a learning model based on problem solving and meaningful learning standards, validated by expert assessment, for a course on Animal Development. This development research produced a learning model consisting of two sub-products: the syntax of the learning model and student worksheets. All of these products were standardized through expert validation. The research data are the validity levels of all sub-products, obtained using questionnaires filled in by validators from various fields of expertise (the field of study, learning strategy, and Bahasa). Data were analysed using descriptive statistics. The results show that the problem-solving and meaningful learning model has been produced, and that the sub-products declared appropriate by the experts include the syntax of the learning model and the student worksheets.
Cuesta-Gragera, Ana; Navarro-Fontestad, Carmen; Mangas-Sanjuan, Victor; González-Álvarez, Isabel; García-Arieta, Alfredo; Trocóniz, Iñaki F; Casabó, Vicente G; Bermejo, Marival
2015-07-10
The objective of this paper is to apply a previously developed semi-physiologic pharmacokinetic model implemented in NONMEM to simulate bioequivalence (BE) trials of acetylsalicylic acid (ASA), in order to validate the model performance against ASA human experimental data. ASA is a drug with first-pass hepatic and intestinal metabolism following Michaelis-Menten kinetics that leads to the formation of two main metabolites in two generations (first- and second-generation metabolites). The first aim was to adapt the semi-physiological model for ASA in NONMEM, using ASA pharmacokinetic parameters from the literature and reflecting its sequential metabolism. The second aim was to validate this model by comparing the results obtained in NONMEM simulations with published experimental data at a dose of 1000 mg. The validated model was used to simulate bioequivalence trials at three dose schemes (100, 1000 and 3000 mg) and with six test formulations with decreasing in vivo dissolution rate constants versus the reference formulation (kD 8-0.25 h(-1)). Finally, the third aim was to determine which analyte (parent drug, first-generation or second-generation metabolite) was more sensitive to changes in formulation performance. The validation results showed that the concentration-time curves obtained with the simulations closely reproduced the published experimental data, confirming model performance. The parent drug (ASA) was the analyte most sensitive to the decrease in pharmaceutical quality, with the largest decrease in Cmax and AUC ratios between test and reference formulations. Copyright © 2015 Elsevier B.V. All rights reserved.
Yahya, Noorazrul; Ebert, Martin A; Bulsara, Max; Kennedy, Angel; Joseph, David J; Denham, James W
2016-08-01
Most predictive models are not sufficiently validated for prospective use. We performed independent external validation of published predictive models for urinary dysfunctions following radiotherapy of the prostate. Multivariable models developed to predict atomised and generalised urinary symptoms, both acute and late, were considered for validation using a dataset representing 754 participants from the TROG 03.04-RADAR trial. Endpoints and features were harmonised to match the predictive models. The overall performance, calibration and discrimination were assessed. 14 models from four publications were validated. The discrimination of the predictive models in an independent external validation cohort, measured using the area under the receiver operating characteristic (ROC) curve, ranged from 0.473 to 0.695, generally lower than in internal validation. 4 models had ROC >0.6. Shrinkage was required for all predictive models' coefficients ranging from -0.309 (prediction probability was inverse to observed proportion) to 0.823. Predictive models which include baseline symptoms as a feature produced the highest discrimination. Two models produced a predicted probability of 0 and 1 for all patients. Predictive models vary in performance and transferability illustrating the need for improvements in model development and reporting. Several models showed reasonable potential but efforts should be increased to improve performance. Baseline symptoms should always be considered as potential features for predictive models. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Testing and validating environmental models
Kirchner, J.W.; Hooper, R.P.; Kendall, C.; Neal, C.; Leavesley, G.
1996-01-01
Generally accepted standards for testing and validating ecosystem models would benefit both modellers and model users. Universally applicable test procedures are difficult to prescribe, given the diversity of modelling approaches and the many uses for models. However, the generally accepted scientific principles of documentation and disclosure provide a useful framework for devising general standards for model evaluation. Adequately documenting model tests requires explicit performance criteria, and explicit benchmarks against which model performance is compared. A model's validity, reliability, and accuracy can be most meaningfully judged by explicit comparison against the available alternatives. In contrast, current practice is often characterized by vague, subjective claims that model predictions show 'acceptable' agreement with data; such claims provide little basis for choosing among alternative models. Strict model tests (those that invalid models are unlikely to pass) are the only ones capable of convincing rational skeptics that a model is probably valid. However, 'false positive' rates as low as 10% can substantially erode the power of validation tests, making them insufficiently strict to convince rational skeptics. Validation tests are often undermined by excessive parameter calibration and overuse of ad hoc model features. Tests are often also divorced from the conditions under which a model will be used, particularly when it is designed to forecast beyond the range of historical experience. In such situations, data from laboratory and field manipulation experiments can provide particularly effective tests, because one can create experimental conditions quite different from historical data, and because experimental data can provide a more precisely defined 'target' for the model to hit. 
We present a simple demonstration showing that the two most common methods for comparing model predictions to environmental time series (plotting model time series against data time series, and plotting predicted versus observed values) have little diagnostic power. We propose that it may be more useful to statistically extract the relationships of primary interest from the time series, and test the model directly against them.
Zhang, Bo; Liu, Wei; Zhang, Zhiwei; Qu, Yanping; Chen, Zhen; Albert, Paul S
2017-08-01
Joint modeling and within-cluster resampling are two approaches used for analyzing correlated data with informative cluster sizes. Motivated by a developmental toxicity study, we examined the performance and validity of these two approaches in testing covariate effects in generalized linear mixed-effects models. We show that the joint modeling approach is robust to misspecification of the cluster size model in terms of Type I and Type II errors when the corresponding covariates are not included in the random effects structure; otherwise, statistical tests may be affected. We also evaluate the performance of the within-cluster resampling procedure and thoroughly investigate its validity in modeling correlated data with informative cluster sizes. We show that within-cluster resampling is a valid alternative to joint modeling for cluster-specific covariates, but it is invalid for time-dependent covariates. The two methods are applied to a developmental toxicity study that investigated the effect of exposure to diethylene glycol dimethyl ether.
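The within-cluster resampling procedure evaluated in this abstract can be sketched in a few lines: repeatedly draw one observation per cluster (which removes the dependence on informative cluster size), estimate on the resampled data, and average. The least-squares slope below is a toy stand-in for the generalized linear mixed-model fits used in the study; all names are illustrative.

```python
import random
from statistics import mean

def ols_slope(points):
    """Least-squares slope of y on x (toy stand-in for a full GLM fit)."""
    mx = mean(x for x, _ in points)
    my = mean(y for _, y in points)
    num = sum((x - mx) * (y - my) for x, y in points)
    den = sum((x - mx) ** 2 for x, _ in points)
    return num / den

def within_cluster_resampling(clusters, estimator, n_resamples=500, seed=1):
    """Draw one observation per cluster (removing the informative-size
    dependence), estimate on the resampled data, and average."""
    rng = random.Random(seed)
    return mean(estimator([rng.choice(c) for c in clusters])
                for _ in range(n_resamples))
```

Because each resample contains exactly one observation per cluster, every fitted estimate is based on independent data, which is what makes the averaged estimator valid for cluster-specific covariates.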
Cross-Validation of Survival Bump Hunting by Recursive Peeling Methods.
Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J Sunil
2014-08-01
We introduce a survival/risk bump hunting framework to build a bump hunting model with a possibly censored time-to-event type of response and to validate model estimates. First, we describe the use of adequate survival peeling criteria to build a survival/risk bump hunting model based on recursive peeling methods. Our method called "Patient Recursive Survival Peeling" is a rule-induction method that makes use of specific peeling criteria such as hazard ratio or log-rank statistics. Second, to validate our model estimates and improve survival prediction accuracy, we describe a resampling-based validation technique specifically designed for the joint task of decision rule making by recursive peeling (i.e. decision-box) and survival estimation. This alternative technique, called "combined" cross-validation is done by combining test samples over the cross-validation loops, a design allowing for bump hunting by recursive peeling in a survival setting. We provide empirical results showing the importance of cross-validation and replication.
Development of a Bayesian model to estimate health care outcomes in the severely wounded
Stojadinovic, Alexander; Eberhardt, John; Brown, Trevor S; Hawksworth, Jason S; Gage, Frederick; Tadaki, Douglas K; Forsberg, Jonathan A; Davis, Thomas A; Potter, Benjamin K; Dunne, James R; Elster, E A
2010-01-01
Background: Graphical probabilistic models have the ability to provide insights as to how clinical factors are conditionally related. These models can be used to help us understand factors influencing health care outcomes and resource utilization, and to estimate morbidity and clinical outcomes in trauma patient populations. Study design: Thirty-two combat casualties with severe extremity injuries enrolled in a prospective observational study were analyzed using a step-wise machine-learned Bayesian belief network (BBN) and step-wise logistic regression (LR). Models were evaluated using 10-fold cross-validation to calculate the area under the curve (AUC) of receiver operating characteristic (ROC) curves. Results: Our BBN showed important associations between various factors in our data set that could not be uncovered using standard regression methods. Cross-validated ROC curve analysis showed that our BBN model was a robust representation of our data domain and that LR models trained on these findings were also robust: hospital-acquired infection (AUC: LR, 0.81; BBN, 0.79), intensive care unit length of stay (AUC: LR, 0.97; BBN, 0.81), and wound healing (AUC: LR, 0.91; BBN, 0.72) all showed strong discrimination. Conclusions: A BBN model can effectively represent clinical outcomes and biomarkers in patients hospitalized after severe wounding, as confirmed by 10-fold cross-validation and further confirmed through logistic regression modeling. The method warrants further development and independent validation in other, more diverse patient populations. PMID:21197361
Suarthana, Eva; Vergouwe, Yvonne; Moons, Karel G; de Monchy, Jan; Grobbee, Diederick; Heederik, Dick; Meijer, Evert
2010-09-01
To develop and validate a prediction model to detect sensitization to wheat allergens in bakery workers. The prediction model was developed in 867 Dutch bakery workers (development set, prevalence of sensitization 13%) and included questionnaire items (candidate predictors). First, principal component analysis was used to reduce the number of candidate predictors. Then, multivariable logistic regression analysis was used to develop the model. Internal validity and the extent of optimism were assessed with bootstrapping. External validation was studied in 390 independent Dutch bakery workers (validation set, prevalence of sensitization 20%). The prediction model contained the predictors nasoconjunctival symptoms, asthma symptoms, shortness of breath and wheeze, work-related upper and lower respiratory symptoms, and traditional bakery. The model showed good discrimination, with an area under the receiver operating characteristic (ROC) curve of 0.76 (0.75 after internal validation). Application of the model in the validation set gave reasonable discrimination (ROC area=0.69) and good calibration after a small adjustment of the model intercept. A simple model with questionnaire items only can be used to stratify bakers according to their risk of sensitization to wheat allergens. Its use may increase the cost-effectiveness of (subsequent) medical surveillance.
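The internal-validation step here (a bootstrap assessment of optimism) follows a standard recipe that can be sketched generically. The threshold classifier and accuracy metric below are illustrative stand-ins, not the study's logistic model or ROC area; only the resampling logic reflects the method.

```python
import random
from statistics import mean

def optimism_corrected(data, fit, perf, n_boot=200, seed=0):
    """Bootstrap optimism correction: apparent performance minus the mean
    gap between bootstrap-sample and original-sample performance."""
    rng = random.Random(seed)
    apparent = perf(fit(data), data)
    gaps = []
    for _ in range(n_boot):
        boot = [rng.choice(data) for _ in data]
        m = fit(boot)
        gaps.append(perf(m, boot) - perf(m, data))
    return apparent - mean(gaps)

# Toy example: classify y from x with a mean threshold, scored by accuracy.
def fit(d):
    return mean(x for x, _ in d)

def perf(t, d):
    return mean(1.0 if (x > t) == (y == 1) else 0.0 for x, y in d)
```

The corrected value estimates how much of the apparent performance would survive in new data drawn from the same population, which is exactly what the "0.75 after internal validation" figure above reports.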
A computational continuum model of poroelastic beds
Zampogna, G. A.
2017-01-01
Despite the ubiquity of fluid flows interacting with porous and elastic materials, we lack a validated non-empirical macroscale method for characterizing the flow over and through a poroelastic medium. We propose a computational tool to describe such configurations by deriving and validating a continuum model for the poroelastic bed and its interface with the free fluid above. We show that, using a stress continuity condition and a slip velocity condition at the interface, the effective model captures the effects of small changes in the microstructure anisotropy correctly and predicts the overall behaviour in a physically consistent and controllable manner. Moreover, we show that the effective model is accurate by validating it against fully resolved microscopic simulations. The proposed computational tool can be used in investigations in a wide range of fields, including mechanical engineering, bio-engineering and geophysics. PMID:28413355
Lamain-de Ruiter, Marije; Kwee, Anneke; Naaktgeboren, Christiana A; de Groot, Inge; Evers, Inge M; Groenendaal, Floris; Hering, Yolanda R; Huisjes, Anjoke J M; Kirpestein, Cornel; Monincx, Wilma M; Siljee, Jacqueline E; Van 't Zelfde, Annewil; van Oirschot, Charlotte M; Vankan-Buitelaar, Simone A; Vonk, Mariska A A W; Wiegers, Therese A; Zwart, Joost J; Franx, Arie; Moons, Karel G M; Koster, Maria P H
2016-08-30
To perform an external validation and direct comparison of published prognostic models for early prediction of the risk of gestational diabetes mellitus, including predictors applicable in the first trimester of pregnancy. External validation of all published prognostic models in a large-scale, prospective, multicentre cohort study. 31 independent midwifery practices and six hospitals in the Netherlands. Women recruited in their first trimester (<14 weeks) of pregnancy between December 2012 and January 2014, at their initial prenatal visit. Women with pre-existing diabetes mellitus of any type were excluded. Discrimination of the prognostic models was assessed by the C statistic, and calibration assessed by calibration plots. 3723 women were included for analysis, of whom 181 (4.9%) developed gestational diabetes mellitus in pregnancy. 12 prognostic models for the disorder could be validated in the cohort. C statistics ranged from 0.67 to 0.78. Calibration plots showed that eight of the 12 models were well calibrated. The four models with the highest C statistics included almost all of the following predictors: maternal age, maternal body mass index, history of gestational diabetes mellitus, ethnicity, and family history of diabetes. Prognostic models had a similar performance in a subgroup of nulliparous women only. Decision curve analysis showed that the use of these four models always had a positive net benefit. In this external validation study, most of the published prognostic models for gestational diabetes mellitus show acceptable discrimination and calibration. The four models with the highest discriminative abilities in this study cohort, which also perform well in a subgroup of nulliparous women, are easy models to apply in clinical practice and therefore deserve further evaluation regarding their clinical impact. Published by the BMJ Publishing Group Limited.
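The decision curve analysis mentioned above compares, at each risk threshold, the model's net benefit with the "treat all" and "treat none" strategies. A minimal sketch of the standard net-benefit formula (not the authors' implementation):

```python
def net_benefit(y, p, pt):
    """Net benefit of treating everyone with predicted risk >= pt;
    'treat none' has net benefit 0 by definition."""
    n = len(y)
    tp = sum(1 for yi, pi in zip(y, p) if pi >= pt and yi == 1)
    fp = sum(1 for yi, pi in zip(y, p) if pi >= pt and yi == 0)
    return tp / n - (fp / n) * pt / (1 - pt)

def net_benefit_treat_all(y, pt):
    """Reference strategy: treat every patient regardless of predicted risk."""
    prev = sum(y) / len(y)
    return prev - (1 - prev) * pt / (1 - pt)
```

A model adds clinical value at threshold pt when its net benefit exceeds both zero and the treat-all benchmark, which is what "always had a positive net benefit" asserts for the four best models.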
Validation of Modelled Ice Dynamics of the Greenland Ice Sheet using Historical Forcing
NASA Astrophysics Data System (ADS)
Hoffman, M. J.; Price, S. F.; Howat, I. M.; Bonin, J. A.; Chambers, D. P.; Tezaur, I.; Kennedy, J. H.; Lenaerts, J.; Lipscomb, W. H.; Neumann, T.; Nowicki, S.; Perego, M.; Saba, J. L.; Salinger, A.; Guerber, J. R.
2015-12-01
Although ice sheet models are used for sea level rise projections, the degree to which these models have been validated by observations is fairly limited, due in part to the limited duration of the satellite observation era and the long adjustment time scales of ice sheets. Here we describe a validation framework for the Greenland Ice Sheet applied to the Community Ice Sheet Model by forcing the model annually with flux anomalies at the major outlet glaciers (Enderlin et al., 2014, observed from Landsat/ASTER/Operation IceBridge) and surface mass balance (van Angelen et al., 2013, calculated from RACMO2) for the period 1991-2012. The ice sheet model output is compared to ice surface elevation observations from ICESat and ice sheet mass change observations from GRACE. Early results show promise for assessing the performance of different model configurations. Additionally, we explore the effect of ice sheet model resolution on validation skill.
Validating Remotely Sensed Land Surface Evapotranspiration Based on Multi-scale Field Measurements
NASA Astrophysics Data System (ADS)
Jia, Z.; Liu, S.; Ziwei, X.; Liang, S.
2012-12-01
The land surface evapotranspiration plays an important role in the surface energy balance and the water cycle. There have been significant technical and theoretical advances in our knowledge of evapotranspiration over the past two decades. Acquisition of the temporally and spatially continuous distribution of evapotranspiration using remote sensing technology has attracted widespread attention from researchers and managers. However, remote sensing technology still has many uncertainties arising from the model mechanism, model inputs, parameterization schemes, and scaling issues in regional estimation. Obtaining remotely sensed evapotranspiration (RS_ET) with quantified certainty is necessary but difficult. As a result, it is indispensable to develop validation methods to quantitatively assess the accuracy and error sources of regional RS_ET estimations. This study proposes an innovative validation method based on multi-scale evapotranspiration acquired from field measurements, with the validation results including accuracy assessment, error source analysis, and uncertainty analysis of the validation process. It is a potentially useful approach to evaluate the accuracy and analyze the spatio-temporal properties of RS_ET at both the basin and local scales, and is appropriate for validating RS_ET at diverse resolutions and different time-scales. An independent RS_ET validation using this method over the Hai River Basin, China, in 2002-2009 is presented as a case study. Validation at the basin scale showed good agreement between the 1 km annual RS_ET and validation data such as the water-balance evapotranspiration, MODIS evapotranspiration products, precipitation, and land-use types. Validation at the local scale also gave good results for monthly and daily RS_ET at 30 m and 1 km resolutions, compared with the multi-scale evapotranspiration measurements from eddy covariance (EC) systems and large aperture scintillometers (LAS), respectively, using the footprint model over three typical landscapes.
Although some validation experiments demonstrated that the models yield accurate estimates at flux measurement sites, the question remains whether they perform well over the broader landscape. Moreover, a large number of RS_ET products have been released in recent years. Thus, we also address the cross-validation of RS_ET derived from multi-source models. "The Multi-scale Observation Experiment on Evapotranspiration over Heterogeneous Land Surfaces: Flux Observation Matrix" campaign was carried out in the middle reaches of the Heihe River Basin, China, in 2012. Flux measurements from an observation matrix composed of 22 EC systems and 4 LAS are used to investigate the cross-validation of multi-source models over different landscapes. In this case, six remote sensing models, including an empirical statistical model, one-source and two-source models, a Penman-Monteith-based model, a Priestley-Taylor-based model, and a complementary-relationship-based model, are used to perform an intercomparison. The results from both RS_ET validation cases show that the proposed validation methods are reasonable and feasible.
A Baseline Patient Model to Support Testing of Medical Cyber-Physical Systems.
Silva, Lenardo C; Perkusich, Mirko; Almeida, Hyggo O; Perkusich, Angelo; Lima, Mateus A M; Gorgônio, Kyller C
2015-01-01
Medical Cyber-Physical Systems (MCPS) are currently a trending topic of research. The main challenges are related to the integration and interoperability of connected medical devices, patient safety, physiologic closed-loop control, and the verification and validation of these systems. In this paper, we focus on patient safety and MCPS validation. We present a formal patient model to be used in health care systems validation without jeopardizing the patient's health. To determine the basic patient conditions, our model considers the four main vital signs: heart rate, respiratory rate, blood pressure and body temperature. To generate the vital signs we used regression models based on statistical analysis of a clinical database. Our solution should be used as a starting point for a behavioral patient model and adapted to specific clinical scenarios. We present the modeling process of the baseline patient model and show its evaluation. The conception process may be used to build different patient models. The results show the feasibility of the proposed model as an alternative to the immediate need for clinical trials to test these medical systems.
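The regression-based vital-sign generation described in this abstract can be caricatured as "baseline plus covariate effects plus noise". All coefficients below are hypothetical placeholders invented for illustration; the paper derives its regressions from a clinical database that is not reproduced here.

```python
import random

def baseline_patient(age, rng=None):
    """Toy regression-plus-noise generator for the four main vital signs.
    Every coefficient here is a made-up placeholder, not a fitted value."""
    rng = rng or random.Random(0)
    return {
        "heart_rate":  70.0 - 0.2 * age + rng.gauss(0.0, 3.0),   # beats/min
        "resp_rate":   12.0 + rng.gauss(0.0, 1.5),               # breaths/min
        "systolic_bp": 110.0 + 0.4 * age + rng.gauss(0.0, 5.0),  # mmHg
        "body_temp":   36.8 + rng.gauss(0.0, 0.2),               # deg C
    }
```

Such a generator lets an MCPS test harness exercise closed-loop behaviour over plausible patient states before any clinical trial, which is the role the baseline patient model plays in the paper.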
Cross-validation to select Bayesian hierarchical models in phylogenetics.
Duchêne, Sebastián; Duchêne, David A; Di Giallonardo, Francesca; Eden, John-Sebastian; Geoghegan, Jemma L; Holt, Kathryn E; Ho, Simon Y W; Holmes, Edward C
2016-05-26
Recent developments in Bayesian phylogenetic models have increased the range of inferences that can be drawn from molecular sequence data. Accordingly, model selection has become an important component of phylogenetic analysis. Methods of model selection generally consider the likelihood of the data under the model in question. In the context of Bayesian phylogenetics, the most common approach involves estimating the marginal likelihood, which is typically done by integrating the likelihood across model parameters, weighted by the prior. Although this method is accurate, it is sensitive to the presence of improper priors. We explored an alternative approach based on cross-validation that is widely used in evolutionary analysis. This involves comparing models according to their predictive performance. We analysed simulated data and a range of viral and bacterial data sets using a cross-validation approach to compare a variety of molecular clock and demographic models. Our results show that cross-validation can be effective in distinguishing between strict- and relaxed-clock models and in identifying demographic models that allow growth in population size over time. In most of our empirical data analyses, the model selected using cross-validation was able to match that selected using marginal-likelihood estimation. The accuracy of cross-validation appears to improve with longer sequence data, particularly when distinguishing between relaxed-clock models. Cross-validation is a useful method for Bayesian phylogenetic model selection. This method can be readily implemented even when considering complex models where selecting an appropriate prior for all parameters may be difficult.
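Comparing models by predictive performance, as the cross-validation approach here does, can be sketched generically: hold out each fold, fit on the rest, and score the held-out predictions. The contiguous fold scheme and the two toy models below are illustrative; the study itself compares molecular clock and demographic models in a Bayesian framework.

```python
from statistics import mean

def kfold_predictive_score(data, fit, score, k=5):
    """Mean held-out score across k contiguous folds."""
    n = len(data)
    folds = [data[i * n // k:(i + 1) * n // k] for i in range(k)]
    scores = []
    for i, held_out in enumerate(folds):
        train = [d for j, fold in enumerate(folds) if j != i for d in fold]
        scores.append(score(fit(train), held_out))
    return mean(scores)

# Two toy candidate models: proportional (y = s*x) versus constant.
def fit_slope(train):
    s = sum(x * y for x, y in train) / sum(x * x for x, _ in train)
    return lambda x: s * x

def fit_const(train):
    c = mean(y for _, y in train)
    return lambda x: c

def neg_mse(model, held_out):
    return -mean((model(x) - y) ** 2 for x, y in held_out)
```

The model with the higher mean held-out score is preferred, mirroring how the cross-validation choice matched marginal-likelihood selection in most of the empirical analyses above.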
Radiative transfer model validations during the First ISLSCP Field Experiment
NASA Technical Reports Server (NTRS)
Frouin, Robert; Breon, Francois-Marie; Gautier, Catherine
1990-01-01
Two simple radiative transfer models, the 5S model based on Tanre et al. (1985, 1986) and the wide-band model of Morcrette (1984), are validated by comparing their outputs with concomitant radiosonde, aerosol turbidity, and radiation measurements and sky photographs obtained during the First ISLSCP Field Experiment. Results showed that the 5S model overestimated the short-wave irradiance by 13.2 W/sq m, whereas the Morcrette model underestimated the long-wave irradiance by 7.4 W/sq m.
Hariharan, Prasanna; D’Souza, Gavin A.; Horner, Marc; Morrison, Tina M.; Malinauskas, Richard A.; Myers, Matthew R.
2017-01-01
A “credible” computational fluid dynamics (CFD) model has the potential to provide a meaningful evaluation of safety in medical devices. One major challenge in establishing “model credibility” is to determine the required degree of similarity between the model and experimental results for the model to be considered sufficiently validated. This study proposes a “threshold-based” validation approach that provides well-defined acceptance criteria, which are a function of how close the simulation and experimental results are to the safety threshold, for establishing model validity. The validation criteria developed following the threshold approach are not only a function of the Comparison Error, E (the difference between experiments and simulations), but also take into account the risk to patient safety arising from E. The method is applicable to scenarios in which a safety threshold can be clearly defined (e.g., the viscous shear-stress threshold for hemolysis in blood-contacting devices). The applicability of the new validation approach was tested on the FDA nozzle geometry. The context of use (COU) was to evaluate whether the instantaneous viscous shear stress in the nozzle geometry at Reynolds numbers (Re) of 3500 and 6500 was below the commonly accepted threshold for hemolysis. The CFD results (“S”) of velocity and viscous shear stress were compared with inter-laboratory experimental measurements (“D”). The uncertainties in the CFD and experimental results due to input parameter uncertainties were quantified following the ASME V&V 20 standard. The CFD models for both Re = 3500 and 6500 could not be sufficiently validated by performing a direct comparison between CFD and experimental results using the Student’s t-test.
However, following the threshold-based approach, a Student’s t-test comparing |S-D| and |Threshold-S| showed that, relative to the threshold, the CFD and experimental datasets for Re = 3500 were statistically similar and the model could be considered sufficiently validated for the COU. However, for Re = 6500, at certain locations where the shear stress is close to the hemolysis threshold, the CFD model could not be considered sufficiently validated for the COU. Our analysis showed that the model could be sufficiently validated either by reducing the uncertainties in the experiments, simulations, and the threshold, or by increasing the sample size of the experiments and simulations. The threshold approach can be applied to all types of computational models and provides an objective way of determining model credibility and evaluating medical devices. PMID:28594889
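The |S-D| versus |Threshold-S| comparison can be sketched with Welch's t statistic. This is a bare-bones illustration, not the study's procedure: the actual work propagates input-parameter uncertainties per ASME V&V 20 before making the comparison, and judging significance would additionally require the t distribution's critical value at the computed degrees of freedom.

```python
import math
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t statistic and degrees of freedom for mean(a) - mean(b)."""
    va = stdev(a) ** 2 / len(a)
    vb = stdev(b) ** 2 / len(b)
    t = (mean(a) - mean(b)) / math.sqrt(va + vb)
    dof = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, dof

def margin_vs_error(S, D, threshold):
    """Compare the safety margin |threshold - S| against the comparison
    error |S - D|; a clearly positive t favours 'validated for the COU'."""
    margin = [abs(threshold - s) for s in S]
    error = [abs(s - d) for s, d in zip(S, D)]
    return welch_t(margin, error)
```

The intuition matches the abstract: far below the threshold (Re = 3500) the margin dwarfs the comparison error, while near the threshold (Re = 6500) it does not.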
Piloted Evaluation of a UH-60 Mixer Equivalent Turbulence Simulation Model
NASA Technical Reports Server (NTRS)
Lusardi, Jeff A.; Blanken, Chris L.; Tischler, Mark B.
2002-01-01
A simulation study of a recently developed hover/low speed Mixer Equivalent Turbulence Simulation (METS) model for the UH-60 Black Hawk helicopter was conducted in the NASA Ames Research Center Vertical Motion Simulator (VMS). The experiment was a continuation of previous work to develop a simple, but validated, turbulence model for hovering rotorcraft. To validate the METS model, two experienced test pilots replicated precision hover tasks that had been conducted in an instrumented UH-60 helicopter in turbulence. Objective simulation data were collected for comparison with flight test data, and subjective data were collected that included handling qualities ratings and pilot comments for increasing levels of turbulence. Analyses of the simulation results show good analytic agreement between the METS model and flight test data, with favorable pilot perception of the simulated turbulence. Precision hover tasks were also repeated using the more complex rotating-frame SORBET (Simulation Of Rotor Blade Element Turbulence) model to generate turbulence. Comparisons of the empirically derived METS model with the theoretical SORBET model show good agreement, providing validation of the more complex blade-element method of simulating turbulence.
Schleier, Jerome J.; Peterson, Robert K.D.; Irvine, Kathryn M.; Marshall, Lucy M.; Weaver, David K.; Preftakes, Collin J.
2012-01-01
One of the more effective ways of managing high densities of adult mosquitoes that vector human and animal pathogens is ultra-low-volume (ULV) aerosol application of insecticides. The U.S. Environmental Protection Agency performs its human and ecological risk assessments using exposure assumptions and models that have not been validated for ULV insecticide applications. Currently, there is no validated model that can accurately predict deposition of insecticides applied using ULV technology for adult mosquito management. In addition, little is known about the deposition and drift of small droplets like those used under conditions encountered during ULV applications. The objective of this study was to perform field studies to measure environmental concentrations of insecticides and to develop a validated model to predict the deposition of ULV insecticides. The final regression model was selected by minimizing the Bayesian Information Criterion (BIC), and its prediction performance was evaluated using k-fold cross-validation. The coefficients for formulation density and for the interaction between density and droplet count median diameter (CMD) were the largest in the model. The results showed that as the density of the formulation decreases, deposition increases. The interaction of density and CMD showed that higher-density formulations and larger droplets resulted in greater deposition. These results are supported by the aerosol physics literature. A k-fold cross-validation demonstrated that the mean square error of the selected regression model is not biased, and the mean square error and mean square prediction error indicated good predictive ability.
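Selecting a regression by minimising BIC, as done here, reduces to a one-line score once each candidate's residual sum of squares is known. The candidate table below is invented for illustration (names and numbers are hypothetical); only the formula reflects the method.

```python
import math

def gaussian_bic(n, rss, k):
    """BIC for a least-squares regression: n*ln(RSS/n) + k*ln(n),
    where k counts the fitted parameters."""
    return n * math.log(rss / n) + k * math.log(n)

# Hypothetical candidates: (n observations, residual sum of squares, parameters)
candidates = {
    "density_only":     (120, 64.0, 2),
    "density_plus_cmd": (120, 52.0, 3),
    "density_x_cmd":    (120, 41.0, 4),
}
best = min(candidates, key=lambda m: gaussian_bic(*candidates[m]))
```

BIC rewards fit through the RSS term and penalises extra parameters through k*ln(n), so an interaction term only survives selection if it buys enough reduction in residual error.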
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bossavit, A.
The authors show how to pass from the local Bean's model, assumed to be valid as a behavior law for a homogeneous superconductor, to a model of similar form, valid on a larger space scale. The process, which can be iterated to higher and higher space scales, consists in solving for the fields e and j over a "periodicity cell" with periodic boundary conditions.
In-line pressure-flow module for in vitro modelling of haemodynamics and biosensor validation
NASA Technical Reports Server (NTRS)
Koenig, S. C.; Schaub, J. D.; Ewert, D. L.; Swope, R. D.; Convertino, V. A. (Principal Investigator)
1997-01-01
An in-line pressure-flow module for in vitro modelling of haemodynamics and biosensor validation has been developed. Studies show that good accuracy can be achieved in the measurement of pressure and of flow, in steady and pulsatile flow systems. The module can be used for the development, testing and evaluation of cardiovascular mechanical-electrical analogue models, cardiovascular prosthetics (i.e. valves, vascular grafts), and pressure and flow biosensors.
Schwarzkopf, Daniel; Fleischmann-Struzek, Carolin; Rüddel, Hendrik; Reinhart, Konrad; Thomas-Rüddel, Daniel O.
2018-01-01
Background: Sepsis is a major cause of preventable deaths in hospitals. Feasible and valid methods for comparing quality of sepsis care between hospitals are needed. The aim of this study was to develop a risk-adjustment model suitable for comparing sepsis-related mortality between German hospitals. Methods: We developed a risk-model using national German claims data. Since these data are available with a time-lag of 1.5 years only, the stability of the model across time was investigated. The model was derived from inpatient cases with severe sepsis or septic shock treated in 2013 using logistic regression with backward selection and generalized estimating equations to correct for clustering. It was validated among cases treated in 2015. Finally, the model development was repeated in 2015. To investigate secular changes, the risk-adjusted trajectory of mortality across the years 2010–2015 was analyzed. Results: The 2013 derivation sample consisted of 113,750 cases; the 2015 validation sample consisted of 134,851 cases. The model developed in 2013 showed good validity regarding discrimination (AUC = 0.74), calibration (observed mortality in 1st and 10th risk-decile: 11%-78%), and fit (R2 = 0.16). Validity remained stable when the model was applied to 2015 (AUC = 0.74, 1st and 10th risk-decile: 10%-77%, R2 = 0.17). There was no indication of overfitting of the model. The final model developed in year 2015 contained 40 risk-factors. Between 2010 and 2015 hospital mortality in sepsis decreased from 48% to 42%. Adjusted for risk-factors, the trajectory of decrease was still significant. Conclusions: The risk-model shows good predictive validity and stability across time. The model is suitable to be used as an external algorithm for comparing risk-adjusted sepsis mortality among German hospitals or regions based on administrative claims data, but secular changes need to be taken into account when interpreting risk-adjusted mortality. PMID:29558486
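The calibration figures quoted in this abstract (observed mortality in the 1st and 10th risk-decile) come from grouping patients by predicted risk and computing the observed event rate per group. A minimal sketch of that grouping, illustrative rather than the study's code:

```python
def decile_calibration(y, p, n_bins=10):
    """Observed event rate within equal-sized bins of predicted risk;
    the first and last entries are the 1st and 10th risk-deciles."""
    order = sorted(range(len(y)), key=lambda i: p[i])
    edges = [i * len(y) // n_bins for i in range(n_bins + 1)]
    bins = [order[edges[i]:edges[i + 1]] for i in range(n_bins)]
    return [sum(y[i] for i in b) / len(b) for b in bins]
```

A well-calibrated model shows observed rates rising monotonically across the deciles and agreeing with the mean predicted risk in each bin.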
A trace map comparison algorithm for the discrete fracture network models of rock masses
NASA Astrophysics Data System (ADS)
Han, Shuai; Wang, Gang; Li, Mingchao
2018-06-01
Discrete fracture networks (DFN) are widely used to build refined geological models. However, validating whether a refined model matches reality is a crucial problem, since it determines whether the model can be used for analysis. Current validation methods are either numerical or graphical. Graphical validation, which estimates the similarity between a simulated trace map and the real trace map by visual observation, is subjective. In this paper, an algorithm for the graphical validation of DFN is set up. Four main indicators, namely total gray, gray grade curve, characteristic direction and gray density distribution curve, are presented to assess the similarity between two trace maps. A modified Radon transform and a loop cosine similarity are presented, based on the Radon transform and cosine similarity respectively. The use of Bézier curves to reduce the edge effect is also described. Finally, a case study shows that the new algorithm can effectively distinguish which simulated trace map is more similar to the real trace map.
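At bottom, the paper's indicator comparisons reduce to similarity measures between curves extracted from the two trace maps. The sketch below is plain cosine similarity, not the paper's loop variant, and the indicator values are invented.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two indicator curves sampled at the same points."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hypothetical gray-grade curves from a real and a simulated trace map.
real_curve = [0.9, 0.7, 0.4, 0.2]
simulated = [0.8, 0.7, 0.5, 0.1]
print(round(cosine_similarity(real_curve, simulated), 3))
```

A value near 1 indicates the simulated map reproduces the shape of the real map's indicator curve; identical curves give exactly 1.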
Influence of Model's Race and Sex on Interviewees' Self-Disclosure.
ERIC Educational Resources Information Center
Casciani, Joseph M.
1978-01-01
Examined how a model's personal characteristics and disclosure characteristics affect White interviewees' self-disclosures. Validity of Jourard's Self-Disclosure Questionnaire was investigated. Results showed that the model's race did not affect his/her behavior or ratings of models. Self-Disclosure Questionnaire showed subjects would disclose as…
Validation of tsunami inundation model TUNA-RP using OAR-PMEL-135 benchmark problem set
NASA Astrophysics Data System (ADS)
Koh, H. L.; Teh, S. Y.; Tan, W. K.; Kh'ng, X. Y.
2017-05-01
A standard set of benchmark problems, known as OAR-PMEL-135, was developed by the US National Tsunami Hazard Mitigation Program for tsunami inundation model validation. Any tsunami inundation model must be tested for its accuracy and capability against this standard set of benchmark problems before it can be gainfully used for inundation simulation. The authors have previously developed an in-house tsunami inundation model known as TUNA-RP, which solves the two-dimensional nonlinear shallow water equations coupled with a wet-dry moving-boundary algorithm. This paper presents the validation of TUNA-RP against the solutions provided in the OAR-PMEL-135 benchmark problem set. This benchmark testing shows that TUNA-RP can indeed perform inundation simulation with accuracy consistent with that of the tested benchmark problems.
Guidelines for Use of the Approximate Beta-Poisson Dose-Response Model.
Xie, Gang; Roiko, Anne; Stratton, Helen; Lemckert, Charles; Dunn, Peter K; Mengersen, Kerrie
2017-07-01
For dose-response analysis in quantitative microbial risk assessment (QMRA), the exact beta-Poisson model is a two-parameter mechanistic dose-response model with parameters α > 0 and β > 0, which involves the Kummer confluent hypergeometric function. Evaluation of a hypergeometric function is a computational challenge. Denoting P_I(d) as the probability of infection at a given mean dose d, the widely used dose-response model P_I(d) = 1 − (1 + d/β)^(−α) is an approximate formula for the exact beta-Poisson model. Notwithstanding the required conditions α ≪ β and β ≫ 1, issues related to the validity and approximation accuracy of this approximate formula have remained largely ignored in practice, partly because these conditions are too general to provide clear guidance. Consequently, this study proposes a probability measure Pr(0 < r < 1 | α̂, β̂) as a validity measure (r is a random variable that follows a gamma distribution; α̂ and β̂ are the maximum likelihood estimates of α and β in the approximate model); and the constraint condition β̂ > (22α̂)^0.50 for 0.02 < α̂ < 2 as a rule of thumb to ensure an accurate approximation (e.g., Pr(0 < r < 1 | α̂, β̂) > 0.99). This validity measure and rule of thumb were validated by application to all the completed beta-Poisson models (related to 85 data sets) from the QMRA community portal (QMRA Wiki). The results showed that the higher the probability Pr(0 < r < 1 | α̂, β̂), the better the approximation. The results further showed that, among the total 85 models examined, 68 models were identified as valid approximate model applications, which all had a near perfect match to the corresponding exact beta-Poisson model dose-response curve. © 2016 Society for Risk Analysis.
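Both the approximate model and the proposed validity measure are cheap to compute. The sketch below implements the regularized lower incomplete gamma function by its power series and assumes r follows a gamma distribution with shape α̂ and rate β̂, so that Pr(0 < r < 1) is the gamma CDF evaluated at 1; that parameterization is an assumption to check against the paper before relying on it.

```python
import math

def p_infection(d, alpha, beta):
    """Approximate beta-Poisson dose-response: P_I(d) = 1 - (1 + d/beta)^(-alpha)."""
    return 1.0 - (1.0 + d / beta) ** (-alpha)

def reg_lower_gamma(a, x):
    """Regularized lower incomplete gamma P(a, x), computed via its power series."""
    if x <= 0:
        return 0.0
    term, total, n = 1.0 / a, 1.0 / a, 0
    while term > 1e-15 * total:
        n += 1
        term *= x / (a + n)
        total += term
    return total * math.exp(-x + a * math.log(x) - math.lgamma(a))

def validity_measure(alpha_hat, beta_hat):
    """Pr(0 < r < 1) for r ~ Gamma(shape alpha_hat, rate beta_hat) -- assumed parameterization."""
    return reg_lower_gamma(alpha_hat, beta_hat)

print(p_infection(10.0, 0.2, 5.0))   # rises monotonically with dose
print(validity_measure(0.5, 10.0))   # close to 1 when beta_hat is large relative to alpha_hat
```

Per the abstract's rule of thumb, values of this measure above 0.99 signal that the approximate formula closely tracks the exact beta-Poisson curve.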
Olondo, C; Legarda, F; Herranz, M; Idoeta, R
2017-04-01
This paper shows the procedure performed to validate the migration equation and the migration parameters' values presented in a previous paper (Legarda et al., 2011) regarding the migration of 137Cs in Spanish mainland soils. Model validation was carried out by checking experimentally obtained activity concentration values against those predicted by the model. The experimental data come from the measured vertical activity profiles of 8 new sampling points located in northern Spain. Before testing predicted values of the model, the uncertainty of those values was assessed with the appropriate uncertainty analysis. Once the uncertainty of the model was established, experimental and model-predicted activity concentration values were compared. Model validation was performed by analyzing the model's accuracy, both as a whole and at different depth intervals. As a result, the model has been validated as a tool to predict 137Cs behaviour in a Mediterranean environment. Copyright © 2017 Elsevier Ltd. All rights reserved.
Prediction models for successful external cephalic version: a systematic review.
Velzel, Joost; de Hundt, Marcella; Mulder, Frederique M; Molkenboer, Jan F M; Van der Post, Joris A M; Mol, Ben W; Kok, Marjolein
2015-12-01
To provide an overview of existing prediction models for successful ECV, and to assess their quality, development and performance. We searched MEDLINE, EMBASE and the Cochrane Library to identify all articles reporting on prediction models for successful ECV published from inception to January 2015. We extracted information on study design, sample size, model-building strategies and validation. We evaluated the phases of model development and summarized their performance in terms of discrimination, calibration and clinical usefulness. We collected different predictor variables together with their defined significance, in order to identify important predictor variables for successful ECV. We identified eight articles reporting on seven prediction models. All models were subjected to internal validation. Only one model was also validated in an external cohort. Two prediction models had a low overall risk of bias, of which only one showed promising predictive performance at internal validation. This model also completed the phase of external validation. For none of the models was the impact on clinical practice evaluated. The most important predictor variables for successful ECV described in the selected articles were parity, placental location, breech engagement and the fetal head being palpable. One model was assessed for both discrimination and calibration using internal (AUC 0.71) and external (AUC 0.64) validation, while two other models were assessed with discrimination and calibration, respectively. We found one prediction model for breech presentation that was validated in an external cohort and had acceptable predictive performance. This model should be used to counsel women considering ECV. Copyright © 2015. Published by Elsevier Ireland Ltd.
Robertson, Amy N.; Wendt, Fabian; Jonkman, Jason M.; ...
2017-10-01
This paper summarizes the findings from Phase II of the Offshore Code Comparison, Collaboration, Continued, with Correlation project. The project is run under the International Energy Agency Wind Research Task 30, and is focused on validating the tools used for modeling offshore wind systems through the comparison of simulated responses of select system designs to physical test data. Validation activities such as these lead to improvement of offshore wind modeling tools, which will enable the development of more innovative and cost-effective offshore wind designs. For Phase II of the project, numerical models of the DeepCwind floating semisubmersible wind system were validated using measurement data from a 1/50th-scale validation campaign performed at the Maritime Research Institute Netherlands offshore wave basin. Validation of the models was performed by comparing the calculated ultimate and fatigue loads for eight different wave-only and combined wind/wave test cases against the measured data, after calibration was performed using free-decay, wind-only, and wave-only tests. The results show a decent estimation of both the ultimate and fatigue loads for the simulated results, but with a fairly consistent underestimation in the tower and upwind mooring line loads that can be attributed to an underestimation of wave-excitation forces outside the linear wave-excitation region, and the presence of broadband frequency excitation in the experimental measurements from wind. Participant results showed varied agreement with the experimental measurements based on the modeling approach used. Modeling attributes that enabled better agreement included: the use of a dynamic mooring model; wave stretching, or some other hydrodynamic modeling approach that excites frequencies outside the linear wave region; nonlinear wave kinematics models; and unsteady aerodynamics models. Also, it was observed that a Morison-only hydrodynamic modeling approach could create excessive pitch excitation and resulting tower loads in some frequency bands.
NASA Astrophysics Data System (ADS)
Steger, Stefan; Brenning, Alexander; Bell, Rainer; Petschko, Helene; Glade, Thomas
2016-06-01
Empirical models are frequently applied to produce landslide susceptibility maps for large areas. Subsequent quantitative validation results are routinely used as the primary criteria to infer the validity and applicability of the final maps or to select one of several models. This study hypothesizes that such direct deductions can be misleading. The main objective was to explore discrepancies between the predictive performance of a landslide susceptibility model and the geomorphic plausibility of subsequent landslide susceptibility maps, with a particular emphasis on the influence of incomplete landslide inventories on modelling and validation results. The study was conducted within the Flysch Zone of Lower Austria (1,354 km²), which is known to be highly susceptible to landslides of the slide-type movement. Sixteen susceptibility models were generated by applying two statistical classifiers (logistic regression and generalized additive model) and two machine learning techniques (random forest and support vector machine) separately for two landslide inventories of differing completeness and two predictor sets. The results were validated quantitatively by estimating the area under the receiver operating characteristic curve (AUROC) with single holdout and spatial cross-validation techniques. The heuristic evaluation of the geomorphic plausibility of the final results was supported by findings of an exploratory data analysis, an estimation of odds ratios and an evaluation of the spatial structure of the final maps. The results showed that maps generated by different inventories, classifiers and predictors appeared different, while holdout validation revealed similarly high predictive performances. Spatial cross-validation proved useful to expose spatially varying inconsistencies of the modelling results while additionally providing evidence for slightly overfitted machine learning-based models.
However, the highest predictive performances were obtained for maps that explicitly expressed geomorphically implausible relationships, indicating that the predictive performance of a model can be misleading when a predictor systematically relates to a spatially consistent bias of the inventory. Furthermore, we observed that random forest-based maps displayed spatial artifacts. The most plausible susceptibility map of the study area showed smooth prediction surfaces while the underlying model revealed a high predictive capability and was generated with an accurate landslide inventory and predictors that did not directly describe a bias. However, none of the presented models was found to be completely unbiased. This study showed that high predictive performances cannot be equated with high plausibility and applicability of subsequent landslide susceptibility maps. We suggest that greater emphasis should be placed on identifying confounding factors and biases in landslide inventories. A joint discussion between modelers and decision makers of the spatial pattern of the final susceptibility maps in the field might increase their acceptance and applicability.
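Spatial cross-validation, which the abstract credits with exposing overfitting, differs from random holdout only in how the folds are formed: observations are grouped by location so that nearby, spatially correlated points never straddle the train/test split. A minimal sketch of grid-based spatial blocking follows; the coordinates and cell size are invented.

```python
def spatial_folds(coords, cell_size):
    """Assign each (x, y) point a fold ID based on the grid cell it falls in,
    so points in the same cell always end up in the same fold."""
    cells = sorted({(int(x // cell_size), int(y // cell_size)) for x, y in coords})
    cell_to_fold = {cell: i for i, cell in enumerate(cells)}
    return [cell_to_fold[(int(x // cell_size), int(y // cell_size))] for x, y in coords]

# Four points in two well-separated clusters -> two spatial folds.
coords = [(0.1, 0.2), (0.3, 0.1), (10.2, 10.1), (10.4, 10.3)]
print(spatial_folds(coords, cell_size=5.0))  # [0, 0, 1, 1]
```

Leaving out one such fold at a time and averaging the AUROC over folds yields a performance estimate that is far harder for a spatially biased inventory to inflate than a random holdout.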
NASA Astrophysics Data System (ADS)
Zhang, Yaning; Xu, Fei; Li, Bingxi; Kim, Yong-Song; Zhao, Wenke; Xie, Gongnan; Fu, Zhongbin
2018-04-01
This study aims to validate the three-phase heat and mass transfer model developed in the first part (Three phase heat and mass transfer model for unsaturated soil freezing process: Part 1 - model development). Experimental results from earlier studies and experiments were used for the validation. The results showed that the correlation coefficients for the simulated and experimental water contents at different soil depths were between 0.83 and 0.92. The correlation coefficients for the simulated and experimental liquid water contents at different soil temperatures were between 0.95 and 0.99. Given these high accuracies, the developed model can be used reliably to predict the water contents at different soil depths and temperatures.
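The agreement figures reported here are plain Pearson correlations between simulated and measured series. A stdlib-only sketch, with invented water-content values:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical simulated vs. measured water contents at several soil depths.
simulated = [0.21, 0.25, 0.30, 0.36, 0.40]
measured = [0.20, 0.26, 0.29, 0.35, 0.42]
print(round(pearson_r(simulated, measured), 2))
```

Coefficients in the 0.83-0.99 range, as reported, indicate the simulated profiles track the measured ones closely but not perfectly.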
Validating EHR clinical models using ontology patterns.
Martínez-Costa, Catalina; Schulz, Stefan
2017-12-01
Clinical models are artefacts that specify how information is structured in electronic health records (EHRs). However, the makeup of clinical models is not guided by any formal constraint beyond a semantically vague information model. We address this gap by advocating ontology design patterns as a mechanism that makes the semantics of clinical models explicit. This paper demonstrates how ontology design patterns can validate existing clinical models using SHACL. Based on the Clinical Information Modelling Initiative (CIMI), we show how ontology patterns detect both modeling and terminology binding errors in CIMI models. SHACL, a W3C constraint language for the validation of RDF graphs, builds on the concept of "Shape", a description of data in terms of expected cardinalities, datatypes and other restrictions. SHACL, as opposed to OWL, subscribes to the Closed World Assumption (CWA) and is therefore more suitable for the validation of clinical models. We have demonstrated the feasibility of the approach by manually describing the correspondences between six CIMI clinical models represented in RDF and two SHACL ontology design patterns. Using a Java-based SHACL implementation, we found at least eleven modeling and binding errors within these CIMI models. This demonstrates the usefulness of ontology design patterns not only as a modeling tool but also as a tool for validation. Copyright © 2017 Elsevier Inc. All rights reserved.
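SHACL shapes validate data against expected cardinalities and datatypes under a closed-world reading: whatever the shape does not permit is a violation. The idea can be illustrated without an RDF stack; the sketch below checks a dict-based record against a hand-written "shape" and is only an analogy to SHACL's Shape concept, not its actual semantics.

```python
def validate_shape(record, shape):
    """Closed-world check: each property must satisfy its (min, max, type) constraint,
    and properties absent from the shape are themselves violations."""
    violations = []
    for prop, (min_c, max_c, typ) in shape.items():
        values = record.get(prop, [])
        if not (min_c <= len(values) <= max_c):
            violations.append(f"{prop}: cardinality {len(values)} outside [{min_c}, {max_c}]")
        if any(not isinstance(v, typ) for v in values):
            violations.append(f"{prop}: wrong datatype")
    for prop in record:
        if prop not in shape:
            violations.append(f"{prop}: not allowed by shape (closed world)")
    return violations

# Hypothetical blood-pressure shape: exactly one numeric systolic and diastolic value.
shape = {"systolic": (1, 1, (int, float)), "diastolic": (1, 1, (int, float))}
good = {"systolic": [120], "diastolic": [80]}
bad = {"systolic": [120, 130], "note": ["free text"]}
print(validate_shape(good, shape))  # []
print(validate_shape(bad, shape))
```

Real SHACL validation of CIMI models would instead run a SHACL engine over RDF graphs, but the closed-world flavor of the check, which OWL's open-world reasoning cannot provide, is the same.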
1997-09-01
Illinois Institute of Technology Research Institute (IITRI) calibrated seven parametric models including SPQR/20, the forerunner of CHECKPOINT. The...a semicolon); thus, SPQR/20 was calibrated using SLOC sizing data (IITRI, 1989: 3-4). The results showed only slight overall improvements in accuracy...even when validating the calibrated models with the same data sets. The IITRI study demonstrated SPQR/20 to be one of two models that were most
Evaluation of a Computational Model of Situational Awareness
NASA Technical Reports Server (NTRS)
Burdick, Mark D.; Shively, R. Jay; Rutkewski, Michael (Technical Monitor)
2000-01-01
Although the use of the psychological construct of situational awareness (SA) assists researchers in creating a flight environment that is safer and more predictable, its true potential remains untapped until a valid means of predicting SA a priori becomes available. Previous work proposed a computational model of SA (CSA) that sought to fill that void. The current line of research is aimed at validating that model. The results show that the model accurately predicted SA in a piloted simulation.
1979-04-25
Airport (Bedford, MA) and Ft. Devens, MA. (2) validation of the models for building reflections based on elevation field measurements at JFK airport and...angles. 2-60 III. BUILDING REFLECTIONS A. Van Measurements at John F. Kennedy (JFK) International Airport, New York Figure 3-1 shows a map of JFK airport with
Combined expectancies: electrophysiological evidence for the adjustment of expectancy effects
Mattler, Uwe; van der Lugt, Arie; Münte, Thomas F
2006-01-01
Background: When subjects use cues to prepare for a likely stimulus or a likely response, reaction times are facilitated by valid cues but prolonged by invalid cues. In studies on combined expectancy effects, two cues can independently give information regarding two dimensions of the forthcoming task. In certain situations, cueing effects on one dimension are reduced when the cue on the other dimension is invalid. According to the Adjusted Expectancy Model, cues affect different processing levels and a mechanism is presumed which is sensitive to the validity of early level cues and leads to online adjustment of expectancy effects at later levels. To examine the predictions of this model, cueing of stimulus modality was combined with response cueing. Results: Behavioral measures showed the interaction of cueing effects. Electrophysiological measures of the lateralized readiness potential (LRP) and the N200 amplitude confirmed the predictions of the model. The LRP showed larger effects of response cues on response activation when modality cues were valid rather than invalid. N200 amplitude was largest with valid modality cues and invalid response cues, medium with invalid modality cues, and smallest with two valid cues. Conclusion: Findings support the view that the validity of early level expectancies modulates the effects of late level expectancies, which included response activation and response conflict in the present study. PMID:16674805
Crins, Martine H. P.; Roorda, Leo D.; Smits, Niels; de Vet, Henrica C. W.; Westhovens, Rene; Cella, David; Cook, Karon F.; Revicki, Dennis; van Leeuwen, Jaap; Boers, Maarten; Dekker, Joost; Terwee, Caroline B.
2015-01-01
The Dutch-Flemish PROMIS Group translated the adult PROMIS Pain Interference item bank into Dutch-Flemish. The aims of the current study were to calibrate the parameters of these items using an item response theory (IRT) model, to evaluate the cross-cultural validity of the Dutch-Flemish translations compared to the original English items, and to evaluate their reliability and construct validity. The 40 items in the bank were completed by 1085 Dutch chronic pain patients. Before calibrating the items, IRT model assumptions were evaluated using confirmatory factor analysis (CFA). Items were calibrated using the graded response model (GRM), an IRT model appropriate for items with more than two response options. To evaluate cross-cultural validity, differential item functioning (DIF) for language (Dutch vs. English) was examined. Reliability was evaluated based on standard errors and Cronbach’s alpha. To evaluate construct validity correlations with scores on legacy instruments (e.g., the Disabilities of the Arm, Shoulder and Hand Questionnaire) were calculated. Unidimensionality of the Dutch-Flemish PROMIS Pain Interference item bank was supported by CFA tests of model fit (CFI = 0.986, TLI = 0.986). Furthermore, the data fit the GRM and showed good coverage across the pain interference continuum (threshold-parameters range: -3.04 to 3.44). The Dutch-Flemish PROMIS Pain Interference item bank has good cross-cultural validity (only two out of 40 items showing DIF), good reliability (Cronbach’s alpha = 0.98), and good construct validity (Pearson correlations between 0.62 and 0.75). A computer adaptive test (CAT) and Dutch-Flemish PROMIS short forms of the Dutch-Flemish PROMIS Pain Interference item bank can now be developed. PMID:26214178
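Under the graded response model (GRM) used for calibration, each item has a discrimination parameter a and ordered thresholds b_k; the probability of responding in category k at trait level θ is the difference between adjacent boundary curves. A stdlib-only sketch with invented item parameters, not the published Dutch-Flemish calibrations:

```python
import math

def grm_category_probs(theta, a, thresholds):
    """Graded response model: P(X = k | theta) as differences of boundary curves
    P*(X >= k) = 1 / (1 + exp(-a * (theta - b_k)))."""
    boundary = [1.0] + [1.0 / (1.0 + math.exp(-a * (theta - b))) for b in thresholds] + [0.0]
    return [boundary[k] - boundary[k + 1] for k in range(len(boundary) - 1)]

# Hypothetical pain-interference item: a = 2.0, thresholds spread over the trait range.
probs = grm_category_probs(theta=0.5, a=2.0, thresholds=[-1.5, 0.0, 1.5])
print([round(p, 3) for p in probs])
print(sum(probs))  # the four category probabilities sum to 1 (up to rounding)
```

Item calibration estimates a and the b_k from response data; a CAT then picks, at each step, the item whose curves are most informative at the respondent's current θ estimate.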
Zhang, Jinshui; Yuan, Zhoumiqi; Shuai, Guanyuan; Pan, Yaozhong; Zhu, Xiufang
2017-04-26
This paper developed an approach, the window-based validation set for support vector data description (WVS-SVDD), to determine optimal parameters for the support vector data description (SVDD) model to map specific land cover by integrating training and window-based validation sets. Compared to the conventional approach, where the validation set included target and outlier pixels selected visually and randomly, the validation set derived from WVS-SVDD constructed a tightened hypersphere because of the compact constraint imposed by the outlier pixels located neighboring the target class in the spectral feature space. The overall accuracies achieved for wheat and bare land were as high as 89.25% and 83.65%, respectively. However, the target class was underestimated because the validation set covered only a small fraction of the heterogeneous spectra of the target class. Different window sizes were then tested to acquire more wheat pixels for the validation set. The results showed that classification accuracy increased with increasing window size and the overall accuracies were higher than 88% at all window size scales. Moreover, WVS-SVDD showed much less sensitivity to untrained classes than the multi-class support vector machine (SVM) method. Therefore, the developed method showed its merits in using the optimal parameters, tradeoff coefficient (C) and kernel width (s), in mapping homogeneous specific land cover.
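SVDD encloses the target class in a minimal hypersphere in feature space and rejects pixels that fall outside it. The sketch below replaces the kernelized optimization with a crude centroid-plus-radius sphere to show only the decision rule; it is not the WVS-SVDD method, and the two-band spectra are invented.

```python
import math

def fit_sphere(target_vectors):
    """Center at the centroid; radius = largest training distance (a naive SVDD stand-in)."""
    dims = len(target_vectors[0])
    center = [sum(v[i] for v in target_vectors) / len(target_vectors) for i in range(dims)]
    radius = max(math.dist(v, center) for v in target_vectors)
    return center, radius

def is_target(pixel, center, radius):
    """Accept a pixel as the target class if it falls inside the hypersphere."""
    return math.dist(pixel, center) <= radius

# Hypothetical 2-band spectra for the target class (e.g. wheat).
train = [(0.30, 0.55), (0.32, 0.60), (0.28, 0.58), (0.31, 0.57)]
center, radius = fit_sphere(train)
print(is_target((0.30, 0.57), center, radius))  # inside -> True
print(is_target((0.80, 0.10), center, radius))  # far outside -> False
```

The paper's point is that C and s control how tight the real SVDD sphere is, and that outlier pixels bordering the target class in spectral space are what force it to tighten.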
Velpuri, N.M.; Senay, G.B.; Asante, K.O.
2011-01-01
Managing limited surface water resources is a great challenge in areas where ground-based data are either limited or unavailable. Direct or indirect measurements of surface water resources through remote sensing offer several advantages for monitoring in ungauged basins. A physically based hydrologic technique to monitor lake water levels in ungauged basins using multi-source satellite data, such as satellite-based rainfall estimates, modelled runoff, evapotranspiration, a digital elevation model, and other data, is presented. This approach is applied to model Lake Turkana water levels from 1998 to 2009. Modelling results showed that the model can reasonably capture all the patterns and seasonal variations of the lake water level fluctuations. A composite lake level product of TOPEX/Poseidon, Jason-1, and ENVISAT satellite altimetry data is used for model calibration (1998-2000) and model validation (2001-2009). Validation results showed that model-based lake levels are in good agreement with observed satellite altimetry data. Compared to satellite altimetry data, the Pearson's correlation coefficient was found to be 0.81 during the validation period. The model efficiency, estimated using the Nash-Sutcliffe coefficient of efficiency (NSCE), was found to be 0.93, 0.55 and 0.66 for the calibration, validation and combined periods, respectively. Further, the model-based estimates showed a root mean square error of 0.62 m and mean absolute error of 0.46 m with a positive mean bias error of 0.36 m for the validation period (2001-2009). These error estimates were found to be less than 15% of the natural variability of the lake, thus giving high confidence in the modelled lake level estimates.
The approach presented in this paper can be used to (a) simulate patterns of lake water level variations in data-scarce regions, (b) operationally monitor lake water levels in ungauged basins, (c) derive historical lake level information using satellite rainfall and evapotranspiration data, and (d) augment the information provided by satellite altimetry systems on changes in lake water levels. © Author(s) 2011.
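At heart, such a physically based lake-level model is a water balance: each time step, the level changes by inflow plus over-lake rainfall minus evaporation and outflow, divided by the lake area. The sketch below is heavily simplified: all fluxes and the fixed area are invented, and a real model would vary the area with the level.

```python
def simulate_lake_level(level0, area_m2, steps):
    """March a lake level (m) forward from per-step fluxes given in cubic metres.
    steps is a list of (inflow, rain_on_lake, evaporation, outflow) tuples."""
    levels = [level0]
    for inflow, rain, evap, outflow in steps:
        dh = (inflow + rain - evap - outflow) / area_m2
        levels.append(levels[-1] + dh)
    return levels

# Hypothetical monthly fluxes (m^3) for a 7,000 km^2 lake, three months.
area = 7_000e6
steps = [(2.0e9, 0.5e9, 1.5e9, 0.0), (1.0e9, 0.2e9, 1.6e9, 0.0), (3.0e9, 0.8e9, 1.4e9, 0.0)]
print(simulate_lake_level(360.0, area, steps))
```

Driving the fluxes with satellite rainfall, modelled runoff, and evapotranspiration, as the abstract describes, turns this balance into an altimetry-independent level estimate that the altimetry record can then validate.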
Gomes, Anna; van der Wijk, Lars; Proost, Johannes H; Sinha, Bhanu; Touw, Daan J
2017-01-01
Gentamicin shows large variations in half-life and volume of distribution (Vd) within and between individuals. Thus, monitoring and accurately predicting serum levels are required to optimize effectiveness and minimize toxicity. Currently, two population pharmacokinetic models are applied for predicting gentamicin doses in adults. For endocarditis patients the optimal model is unknown. We aimed at: 1) creating an optimal model for endocarditis patients; and 2) assessing whether the endocarditis and existing models can accurately predict serum levels. We performed a retrospective observational two-cohort study: one cohort to parameterize the endocarditis model by iterative two-stage Bayesian analysis, and a second cohort to validate and compare all three models. The Akaike Information Criterion and the weighted sum of squares of the residuals divided by the degrees of freedom were used to select the endocarditis model. Median Prediction Error (MDPE) and Median Absolute Prediction Error (MDAPE) were used to test all models with the validation dataset. We built the endocarditis model based on data from the modeling cohort (65 patients) with a fixed 0.277 L/h/70kg metabolic clearance, 0.698 (±0.358) renal clearance as fraction of creatinine clearance, and Vd 0.312 (±0.076) L/kg corrected lean body mass. External validation with data from 14 validation cohort patients showed a similar predictive power of the endocarditis model (MDPE -1.77%, MDAPE 4.68%) as compared to the intensive-care (MDPE -1.33%, MDAPE 4.37%) and standard (MDPE -0.90%, MDAPE 4.82%) models. All models acceptably predicted pharmacokinetic parameters for gentamicin in endocarditis patients. However, these patients appear to have an increased Vd, similar to intensive care patients. Vd mainly determines the height of peak serum levels, which in turn correlate with bactericidal activity. 
In order to maintain simplicity, we advise using the existing intensive-care model in clinical practice to avoid potential underdosing of gentamicin in endocarditis patients.
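The two statistics used to compare the models are simple to reproduce: each prediction error is expressed as a percentage of the observed level, then the median of the signed values (bias) and of the absolute values (precision) is taken. A sketch with invented serum levels:

```python
from statistics import median

def mdpe_mdape(predicted, observed):
    """Median Prediction Error (bias) and Median Absolute Prediction Error (precision),
    both as percentages of the observed values."""
    pe = [100.0 * (p - o) / o for p, o in zip(predicted, observed)]
    return median(pe), median(abs(e) for e in pe)

# Hypothetical gentamicin serum levels (mg/L).
predicted = [4.2, 7.9, 6.1, 3.8]
observed = [4.0, 8.0, 6.5, 4.0]
mdpe, mdape = mdpe_mdape(predicted, observed)
print(round(mdpe, 3), round(mdape, 3))
```

MDPE near zero means the model is unbiased on average; MDAPE of a few percent, as all three models achieved, means individual predictions rarely stray far from the measured level.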
van der Wijk, Lars; Proost, Johannes H.; Sinha, Bhanu; Touw, Daan J.
2017-01-01
Gentamicin shows large variations in half-life and volume of distribution (Vd) within and between individuals. Thus, monitoring and accurately predicting serum levels are required to optimize effectiveness and minimize toxicity. Currently, two population pharmacokinetic models are applied for predicting gentamicin doses in adults. For endocarditis patients the optimal model is unknown. We aimed at: 1) creating an optimal model for endocarditis patients; and 2) assessing whether the endocarditis and existing models can accurately predict serum levels. We performed a retrospective observational two-cohort study: one cohort to parameterize the endocarditis model by iterative two-stage Bayesian analysis, and a second cohort to validate and compare all three models. The Akaike Information Criterion and the weighted sum of squares of the residuals divided by the degrees of freedom were used to select the endocarditis model. Median Prediction Error (MDPE) and Median Absolute Prediction Error (MDAPE) were used to test all models with the validation dataset. We built the endocarditis model based on data from the modeling cohort (65 patients) with a fixed 0.277 L/h/70kg metabolic clearance, 0.698 (±0.358) renal clearance as fraction of creatinine clearance, and Vd 0.312 (±0.076) L/kg corrected lean body mass. External validation with data from 14 validation cohort patients showed a similar predictive power of the endocarditis model (MDPE -1.77%, MDAPE 4.68%) as compared to the intensive-care (MDPE -1.33%, MDAPE 4.37%) and standard (MDPE -0.90%, MDAPE 4.82%) models. All models acceptably predicted pharmacokinetic parameters for gentamicin in endocarditis patients. However, these patients appear to have an increased Vd, similar to intensive care patients. Vd mainly determines the height of peak serum levels, which in turn correlate with bactericidal activity. 
To maintain simplicity, we advise using the existing intensive-care model in clinical practice to avoid potential underdosing of gentamicin in endocarditis patients. PMID:28475651
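The bias and precision metrics reported above (MDPE, MDAPE) are medians of the percentage prediction errors between predicted and observed serum levels. A minimal sketch, using hypothetical serum-level pairs (the function and data are illustrative, not the study's):

```python
from statistics import median

def mdpe_mdape(observed, predicted):
    """Median (signed) and median absolute prediction error, in percent.
    PE_i = 100 * (pred_i - obs_i) / obs_i, the usual definition for
    pharmacokinetic model validation."""
    pe = [100.0 * (p - o) / o for o, p in zip(observed, predicted)]
    return median(pe), median(abs(e) for e in pe)

# Hypothetical gentamicin serum-level pairs (mg/L); illustrative only.
obs = [8.0, 2.0, 10.0, 1.5]
pred = [7.8, 2.1, 9.5, 1.5]
bias, precision = mdpe_mdape(obs, pred)
```

MDPE captures systematic over- or under-prediction (bias), while MDAPE captures the typical size of the error (precision), which is why both are reported per model.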
Corron, Louise; Marchal, François; Condemi, Silvana; Chaumoître, Kathia; Adalian, Pascal
2017-01-01
Juvenile age estimation methods used in forensic anthropology generally lack methodological consistency and/or statistical validity. Considering this, a standard approach using nonparametric Multivariate Adaptive Regression Splines (MARS) models was tested to predict age from iliac biometric variables of male and female juveniles from Marseilles, France, aged 0-12 years. Models using unidimensional (length and width) and bidimensional iliac data (module and surface) were constructed on a training sample of 176 individuals and validated on an independent test sample of 68 individuals. Results show that MARS prediction models using iliac width, module and area give overall better and statistically valid age estimates. These models capture localized nonlinearities of the relationship between age and osteometric variables. By constructing valid prediction intervals whose size increases with age, MARS models take into account the normal increase of individual variability. MARS models can qualify as a practical and standardized approach for juvenile age estimation. © 2016 American Academy of Forensic Sciences.
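A MARS model is a sum of hinge-function basis terms, which is what lets it capture localized nonlinearities around knots. A toy sketch of that idea; the coefficient values, knot location, and variable name are invented for illustration and are not taken from the paper:

```python
def hinge(x, knot):
    """Right hinge basis: max(0, x - knot)."""
    return max(0.0, x - knot)

def mirror_hinge(x, knot):
    """Left hinge basis: max(0, knot - x)."""
    return max(0.0, knot - x)

# A toy MARS-style predictor: age = b0 + b1*h(x - k) + b2*h(k - x).
# The slope changes at the knot, giving a piecewise-linear fit.
def predict_age(width_mm, b0=6.0, b1=0.15, b2=-0.08, knot=90.0):
    return (b0 + b1 * hinge(width_mm, knot)
               + b2 * mirror_hinge(width_mm, knot))

young = predict_age(80.0)   # below the knot
older = predict_age(100.0)  # above the knot
```

In a real MARS fit the knots and coefficients are chosen by forward selection and backward pruning over the training data rather than fixed in advance.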
Finding Furfural Hydrogenation Catalysts via Predictive Modelling
Strassberger, Zea; Mooijman, Maurice; Ruijter, Eelco; Alberts, Albert H; Maldonado, Ana G; Orru, Romano V A; Rothenberg, Gadi
2010-01-01
We combine multicomponent reactions, catalytic performance studies and predictive modelling to find transfer hydrogenation catalysts. An initial set of 18 ruthenium-carbene complexes was synthesized and screened in the transfer hydrogenation of furfural to furfurol with isopropyl alcohol. The complexes gave varied yields, from 62% up to >99.9%, with no obvious structure/activity correlations. Control experiments proved that the carbene ligand remains coordinated to the ruthenium centre throughout the reaction. Deuterium-labelling studies showed a secondary isotope effect (kH:kD=1.5). Further mechanistic studies showed that this transfer hydrogenation follows the so-called monohydride pathway. Using these data, we built a predictive model for 13 of the catalysts, based on 2D and 3D molecular descriptors. We tested and validated the model using the remaining five catalysts (cross-validation, R2=0.913). Then, with this model, the conversion and selectivity were predicted for four completely new ruthenium-carbene complexes. These four catalysts were then synthesized and tested. The results were within 3% of the model’s predictions, demonstrating the validity and value of predictive modelling in catalyst optimization. PMID:23193388
Pandey, Daya Shankar; Pan, Indranil; Das, Saptarshi; Leahy, James J; Kwapinski, Witold
2015-03-01
A multi-gene genetic programming technique is proposed as a new method to predict syngas yield production and the lower heating value for municipal solid waste gasification in a fluidized bed gasifier. The study shows that the predicted outputs of the municipal solid waste gasification process are in good agreement with the experimental dataset and also generalise well to validation (untrained) data. Published experimental datasets are used for model training and validation purposes. The results show the effectiveness of the genetic programming technique for solving complex nonlinear regression problems. The multi-gene genetic programming model is also compared with a single-gene genetic programming model to show the relative merits and demerits of the technique. This study demonstrates that the genetic programming based data-driven modelling strategy can be a good candidate for developing models for other types of fuels as well. Copyright © 2014 Elsevier Ltd. All rights reserved.
Towards Automatic Validation and Healing of CityGML Models for Geometric and Semantic Consistency
NASA Astrophysics Data System (ADS)
Alam, N.; Wagner, D.; Wewetzer, M.; von Falkenhausen, J.; Coors, V.; Pries, M.
2013-09-01
A steadily growing number of application fields for large 3D city models have emerged in recent years. As in many other domains, data quality is recognized as a key factor for successful business. Quality management is mandatory in the production chain nowadays. Automated domain-specific tools are widely used for validation of business-critical data, but common standards defining correct geometric modeling are still not precise enough to provide a sound basis for data validation of 3D city models. Although the workflow for 3D city models is well-established from data acquisition to processing, analysis and visualization, quality management is not yet a standard during this workflow. Processing data sets with unclear specification leads to erroneous results and application defects. We show that this problem persists even if data are standard compliant. Validation results of real-world city models are presented to demonstrate the potential of the approach. A tool to repair the errors detected during the validation process is under development; first results are presented and discussed. The goal is to heal defects of the models automatically and export a corrected CityGML model.
Choo, Min Soo; Jeong, Seong Jin; Cho, Sung Yong; Yoo, Changwon; Jeong, Chang Wook; Ku, Ja Hyeon; Oh, Seung-June
2017-04-01
We aimed to externally validate the prediction model we developed for having bladder outlet obstruction (BOO) and requiring prostatic surgery using 2 independent data sets from tertiary referral centers, and also aimed to validate a mobile app for using this model through usability testing. Formulas and nomograms predicting whether a subject has BOO and needs prostatic surgery were validated with an external validation cohort from Seoul National University Bundang Hospital and Seoul Metropolitan Government-Seoul National University Boramae Medical Center between January 2004 and April 2015. A smartphone-based app was developed, and 8 young urologists were enrolled for usability testing to identify any human factor issues of the app. A total of 642 patients were included in the external validation cohort. No significant differences were found in the baseline characteristics of major parameters between the original (n=1,179) and the external validation cohort, except for the maximal flow rate. Predictions of requiring prostatic surgery in the validation cohort showed a sensitivity of 80.6%, a specificity of 73.2%, a positive predictive value of 49.7%, a negative predictive value of 92.0%, and an area under the receiver operating characteristic curve of 0.84. The calibration plot indicated that the predictions have good correspondence. The decision curve analysis also showed a high net benefit. Similar evaluation results using the external validation cohort were seen in the predictions of having BOO. Overall results of the usability test demonstrated that the app was user-friendly with no major human factor issues. External validation of this newly developed prediction model demonstrated a moderate level of discrimination, adequate calibration, and high net benefit gains for predicting both having BOO and requiring prostatic surgery. In addition, the smartphone app implementing the prediction model was user-friendly with no major human factor issues.
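The sensitivity, specificity, positive and negative predictive values reported above all derive from one 2x2 confusion matrix. A minimal sketch with illustrative counts (not the study's data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard 2x2 confusion-matrix metrics used when reporting
    prediction-model validation results."""
    return {
        "sensitivity": tp / (tp + fn),  # true positive rate
        "specificity": tn / (tn + fp),  # true negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Illustrative counts only.
m = diagnostic_metrics(tp=80, fp=20, tn=60, fn=40)
```

Note that sensitivity and specificity are properties of the test alone, while PPV and NPV also depend on the prevalence of the outcome in the validation cohort, which is why a low PPV (49.7% here) can coexist with good discrimination.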
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahmed E. Hassan
2006-01-24
Models have an inherent uncertainty. The difficulty in fully characterizing the subsurface environment makes uncertainty an integral component of groundwater flow and transport models, which dictates the need for continuous monitoring and improvement. Building and sustaining confidence in closure decisions and monitoring networks based on models of subsurface conditions require developing confidence in the models through an iterative process. The definition of model validation is postulated as a confidence-building and long-term iterative process (Hassan, 2004a). Model validation should be viewed as a process, not an end result. Following Hassan (2004b), an approach is proposed for the validation process of stochastic groundwater models. The approach is briefly summarized herein, and detailed analyses of acceptance criteria for stochastic realizations and of using validation data to reduce input parameter uncertainty are presented and applied to two case studies. During the validation process for stochastic models, a question arises as to the sufficiency of the number of acceptable model realizations (in terms of conformity with validation data). Using a hierarchical approach to make this determination is proposed. This approach is based on computing five measures or metrics and following a decision tree to determine if a sufficient number of realizations attain satisfactory scores regarding how they represent the field data used for calibration (old) and used for validation (new). The first two of these measures are applied to hypothetical scenarios using the first case study and assuming field data consistent with the model or significantly different from the model results. In both cases it is shown how the two measures would lead to the appropriate decision about the model performance. Standard statistical tests are used to evaluate these measures, with the results indicating they are appropriate measures for evaluating model realizations.
The use of validation data to constrain model input parameters is shown for the second case study using a Bayesian approach known as Markov Chain Monte Carlo. The approach shows great potential to be helpful in the validation process and in incorporating prior knowledge with new field data to derive posterior distributions for both model input and output.
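Markov Chain Monte Carlo, as used here, draws samples from a posterior that combines a prior on a model input parameter with the likelihood of the validation data. A minimal random-walk Metropolis sketch on a toy one-parameter problem; the prior, likelihood, and tuning values are all assumptions for illustration:

```python
import math
import random

def metropolis(log_post, x0, n=5000, step=0.5, seed=1):
    """Minimal random-walk Metropolis sampler: propose a Gaussian step,
    accept with probability min(1, posterior ratio)."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n):
        cand = x + rng.gauss(0.0, step)
        lp_cand = log_post(cand)
        if math.log(rng.random()) < lp_cand - lp:
            x, lp = cand, lp_cand
        samples.append(x)
    return samples

# Toy posterior: N(0, 1) prior on the parameter times an N(1, 0.5^2)
# likelihood from "validation data"; analytic posterior mean is 0.8.
def log_post(x):
    return -0.5 * x ** 2 - 0.5 * ((x - 1.0) / 0.5) ** 2

draws = metropolis(log_post, 0.0)
mean = sum(draws) / len(draws)
```

The posterior concentrates between the prior and the data, which is the mechanism by which validation data reduce input parameter uncertainty in the approach described above.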
Campos, Juliana Alvares Duarte Bonini; Spexoto, Maria Cláudia Bernardes; da Silva, Wanderson Roberto; Serrano, Sergio Vicente; Marôco, João
2018-01-01
Objective To evaluate the psychometric properties of the seven theoretical models proposed in the literature for the European Organization for Research and Treatment of Cancer Quality of Life Questionnaire Core 30 (EORTC QLQ-C30), when applied to a sample of Brazilian cancer patients. Methods Content and construct validity (factorial, convergent, discriminant) were estimated. Confirmatory factor analysis was performed. Convergent validity was analyzed using the average variance extracted. Discriminant validity was analyzed using correlational analysis. Internal consistency and composite reliability were used to assess the reliability of the instrument. Results A total of 1,020 cancer patients participated. The mean age was 53.3±13.0 years, and 62% were female. All models showed adequate factorial validity for the study sample. Convergent and discriminant validities and the reliability were compromised in all of the models for all of the single items referring to symptoms, as well as for the “physical function” and “cognitive function” factors. Conclusion All theoretical models assessed in this study presented adequate factorial validity when applied to Brazilian cancer patients. The choice of the best model for use in research and/or clinical protocols should be centered on the purpose and underlying theory of each model. PMID:29694609
Bianchi, Lorenzo; Schiavina, Riccardo; Borghesi, Marco; Bianchi, Federico Mineo; Briganti, Alberto; Carini, Marco; Terrone, Carlo; Mottrie, Alex; Gacci, Mauro; Gontero, Paolo; Imbimbo, Ciro; Marchioro, Giansilvio; Milanese, Giulio; Mirone, Vincenzo; Montorsi, Francesco; Morgia, Giuseppe; Novara, Giacomo; Porreca, Angelo; Volpe, Alessandro; Brunocilla, Eugenio
2018-04-06
To assess the predictive accuracy and the clinical value of a recent nomogram predicting cancer-specific mortality-free survival after surgery in pN1 prostate cancer patients through an external validation. We evaluated 518 prostate cancer patients treated with radical prostatectomy and pelvic lymph node dissection with evidence of nodal metastases at final pathology, at 10 tertiary centers. External validation was carried out using regression coefficients of the previously published nomogram. The performance characteristics of the model were assessed by quantifying predictive accuracy, according to the area under the curve in the receiver operating characteristic curve and model calibration. Furthermore, we systematically analyzed the specificity, sensitivity, positive predictive value and negative predictive value for each nomogram-derived probability cut-off. Finally, we implemented decision curve analysis, in order to quantify the nomogram's clinical value in routine practice. External validation showed inferior predictive accuracy compared with the internal validation (65.8% vs 83.3%, respectively). The discrimination (area under the curve) of the multivariable model was 66.7% (95% CI 60.1-73.0%) by testing with receiver operating characteristic curve analysis. The calibration plot showed an overestimation throughout the range of predicted cancer-specific mortality-free survival probabilities. However, in decision curve analysis, the nomogram's use showed a net benefit when compared with the scenarios of treating all patients or none. In an external setting, the nomogram showed inferior predictive accuracy and suboptimal calibration characteristics as compared with those reported in the original population. However, decision curve analysis showed a clinical net benefit, suggesting a clinical implication to correctly manage pN1 prostate cancer patients after surgery. © 2018 The Japanese Urological Association.
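Decision curve analysis compares a model's net benefit, NB = TP/N - (FP/N) * pt/(1 - pt) at threshold probability pt, against the treat-all and treat-none strategies. A sketch with hypothetical predicted probabilities and outcomes (not the study's data):

```python
def net_benefit(probs, outcomes, pt):
    """Decision-curve net benefit at threshold probability pt.
    Patients with predicted probability >= pt are 'treated'."""
    n = len(probs)
    tp = sum(1 for p, y in zip(probs, outcomes) if p >= pt and y == 1)
    fp = sum(1 for p, y in zip(probs, outcomes) if p >= pt and y == 0)
    return tp / n - (fp / n) * pt / (1.0 - pt)

# Hypothetical predicted probabilities and observed events.
probs = [0.9, 0.8, 0.6, 0.4, 0.3, 0.2]
events = [1, 1, 0, 1, 0, 0]
nb_model = net_benefit(probs, events, pt=0.5)
nb_treat_all = net_benefit([1.0] * 6, events, pt=0.5)  # treat everyone
```

A model is clinically useful over the range of thresholds where its net benefit exceeds both treat-all and treat-none (the latter is zero by definition), which is the comparison the abstract reports.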
Mechanisms of complex network growth: Synthesis of the preferential attachment and fitness models
NASA Astrophysics Data System (ADS)
Golosovsky, Michael
2018-06-01
We analyze growth mechanisms of complex networks and focus on their validation by measurements. To this end we consider the equation ΔK = A(t)(K + K0)Δt, where K is the node's degree, ΔK is its increment, A(t) is the aging constant, and K0 is the initial attractivity. This equation has been commonly used to validate the preferential attachment mechanism. We show that this equation is undiscriminating and holds for the fitness model [Caldarelli et al., Phys. Rev. Lett. 89, 258702 (2002), 10.1103/PhysRevLett.89.258702] as well. In other words, the accepted method of validating the microscopic mechanism of network growth does not discriminate between "rich-gets-richer" and "good-gets-richer" scenarios. This means that the growth mechanism of many natural complex networks can be based on the fitness model rather than on preferential attachment, as has been believed so far. The fitness model yields the long-sought explanation for the initial attractivity K0, an elusive parameter which was left unexplained within the framework of the preferential attachment model. We show that the initial attractivity is determined by the width of the fitness distribution. We also present a network growth model based on recursive search with memory and show that this model contains both the preferential attachment and the fitness models as extreme cases.
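The growth equation can be explored by simulation: attach each new edge to an existing node with probability proportional to (K_i + K0), the discrete analogue of ΔK = A(t)(K + K0)Δt. A minimal sketch; the network size, K0 value, and seed are illustrative choices, not from the paper:

```python
import random

def grow_network(n_nodes, k0=1.0, m=1, seed=7):
    """Degree-driven growth: each new node attaches m edges to existing
    nodes chosen with probability proportional to (K_i + K0)."""
    rng = random.Random(seed)
    degree = [1, 1]  # start from a single edge between nodes 0 and 1
    for _ in range(2, n_nodes):
        targets = rng.choices(range(len(degree)),
                              weights=[k + k0 for k in degree], k=m)
        for t in targets:
            degree[t] += 1
        degree.append(m)  # the new node enters with degree m
    return degree

deg = grow_network(500)
```

The point of the abstract is that measuring ΔK against (K + K0) on the resulting network cannot distinguish this mechanism from a fitness-driven one, since both produce the same aggregate attachment statistics.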
EMRinger: side chain–directed model and map validation for 3D cryo-electron microscopy
Barad, Benjamin A.; Echols, Nathaniel; Wang, Ray Yu-Ruei; ...
2015-08-17
Advances in high-resolution cryo-electron microscopy (cryo-EM) require the development of validation metrics to independently assess map quality and model geometry. We present EMRinger, a tool that assesses the precise fitting of an atomic model into the map during refinement and shows how radiation damage alters scattering from negatively charged amino acids. EMRinger (https://github.com/fraser-lab/EMRinger) will be useful for monitoring progress in resolving and modeling high-resolution features in cryo-EM.
NASA Astrophysics Data System (ADS)
Pourghasemi, Hamid Reza; Rossi, Mauro
2017-10-01
Landslides are identified as one of the most important natural hazards in many areas throughout the world. The essential purpose of this study is to compare general linear model (GLM), general additive model (GAM), multivariate adaptive regression spline (MARS), and modified analytical hierarchy process (M-AHP) models and to assess their performance for landslide susceptibility modeling in the west of Mazandaran Province, Iran. First, landslides were identified by interpreting aerial photographs and through extensive field work. In total, 153 landslides were identified in the study area. Among these, 105 landslides were randomly selected as training data (i.e., used in model training) and the remaining 48 (30%) cases were used for validation (i.e., used in model validation). Afterward, based on a literature review of 220 scientific papers (published between 2005 and 2012), eleven conditioning factors including lithology, land use, distance from rivers, distance from roads, distance from faults, slope angle, slope aspect, altitude, topographic wetness index (TWI), plan curvature, and profile curvature were selected. The Certainty Factor (CF) model was used for managing uncertainty in rule-based systems and for evaluating the correlation between the dependent (landslides) and independent variables. Finally, the landslide susceptibility zonation was produced using the GLM, GAM, MARS, and M-AHP models. For evaluation of the models, the area under the curve (AUC) method was used and both success and prediction rate curves were calculated. The evaluation showed AUC values for GLM, GAM, and MARS of 90.50, 88.90, and 82.10% for training data and 77.52, 70.49, and 78.17% for validation data, respectively. Furthermore, the landslide susceptibility map produced using M-AHP showed a training AUC of 77.82% and a validation AUC of 82.77%.
Based on the overall assessments, the proposed approaches showed reasonable results for landslide susceptibility mapping in the study area. Moreover, the results showed that the M-AHP model performed slightly better than the MARS, GLM, and GAM models in prediction. These algorithms can be very useful for landslide susceptibility and hazard mapping and land use planning at the regional scale.
NASA Astrophysics Data System (ADS)
Cortesi, N.; Trigo, R.; Gonzalez-Hidalgo, J. C.; Ramos, A. M.
2012-06-01
Precipitation over the Iberian Peninsula (IP) is highly variable and shows large spatial contrasts between wet mountainous regions, to the north, and dry regions in the inland plains and southern areas. In this work, a high-density monthly precipitation dataset for the IP was coupled with a set of 26 atmospheric circulation weather types (Trigo and DaCamara, 2000) to reconstruct Iberian monthly precipitation from October to May at a very high resolution of 3030 precipitation series (overall mean density of one station per 200 km2). A stepwise linear regression model with forward selection was used to develop monthly reconstructed precipitation series calibrated and validated over the 1948-2003 period. Validation was conducted by means of a leave-one-out cross-validation over the calibration period. The results show a good model performance for the selected months, with a mean coefficient of variation (CV) around 0.6 for the validation period, being particularly robust over the western and central sectors of the IP, while the predicted values in the Mediterranean and northern coastal areas are less accurate. For three long stations (Lisbon, Madrid and Valencia) we show the comparison between model and original data as an example of how these models can be used to obtain monthly precipitation fields since the 1850s over most of the IP for this very high density network.
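Leave-one-out cross-validation refits the model once per observation, each time predicting only the held-out value, so every prediction is out-of-sample. A minimal sketch with a simple one-predictor linear model and toy data (the regression here stands in for the paper's stepwise multi-predictor model):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b  # intercept, slope

def loocv_predictions(xs, ys):
    """Refit the model n times, predicting the single held-out point."""
    preds = []
    for i in range(len(xs)):
        xt, yt = xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:]
        a, b = fit_line(xt, yt)
        preds.append(a + b * xs[i])
    return preds

# Toy weather-type-frequency / precipitation pairs; illustrative only.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]
preds = loocv_predictions(xs, ys)
```

Comparing these held-out predictions against the observations gives an honest skill estimate over the calibration period without needing a separate validation sample.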
Prognostic models for complete recovery in ischemic stroke: a systematic review and meta-analysis.
Jampathong, Nampet; Laopaiboon, Malinee; Rattanakanokchai, Siwanon; Pattanittum, Porjai
2018-03-09
Prognostic models have been increasingly developed to predict complete recovery in ischemic stroke. However, questions arise about the performance characteristics of these models. The aim of this study was to systematically review and synthesize performance of existing prognostic models for complete recovery in ischemic stroke. We searched journal publications indexed in PUBMED, SCOPUS, CENTRAL, ISI Web of Science and OVID MEDLINE from inception until 4 December, 2017, for studies designed to develop and/or validate prognostic models for predicting complete recovery in ischemic stroke patients. Two reviewers independently examined titles and abstracts, and assessed whether each study met the pre-defined inclusion criteria and also independently extracted information about model development and performance. We evaluated validation of the models by medians of the area under the receiver operating characteristic curve (AUC) or c-statistic and calibration performance. We used a random-effects meta-analysis to pool AUC values. We included 10 studies with 23 models developed from elderly patients with a moderately severe ischemic stroke, mainly in three high income countries. Sample sizes for each study ranged from 75 to 4441. Logistic regression was the only analytical strategy used to develop the models. The number of various predictors varied from one to 11. Internal validation was performed in 12 models with a median AUC of 0.80 (95% CI 0.73 to 0.84). One model reported good calibration. Nine models reported external validation with a median AUC of 0.80 (95% CI 0.76 to 0.82). Four models showed good discrimination and calibration on external validation. The pooled AUC of the two validation models of the same developed model was 0.78 (95% CI 0.71 to 0.85). The performance of the 23 models found in the systematic review varied from fair to good in terms of internal and external validation. 
Further models should be developed, with internal and external validation, in low- and middle-income countries.
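Pooling validated AUC values under a random-effects model, as in the meta-analysis above, is commonly done with the DerSimonian-Laird estimator of between-study variance. A sketch with hypothetical study-level estimates and variances (not the review's data):

```python
def pool_random_effects(estimates, variances):
    """DerSimonian-Laird random-effects pooling of study-level
    estimates (e.g. validated AUC values with their variances)."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, estimates)) / sum(w)
    # Cochran's Q and the DL between-study variance tau^2.
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, estimates))
    df = len(estimates) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # Re-weight with tau^2 added to each study's variance.
    w_re = [1.0 / (v + tau2) for v in variances]
    return sum(wi * e for wi, e in zip(w_re, estimates)) / sum(w_re)

# Two hypothetical external-validation AUCs with their variances.
pooled = pool_random_effects([0.75, 0.81], [0.001, 0.002])
```

The random-effects weights shrink toward equality as tau^2 grows, so heterogeneous validations pull the pooled AUC toward a simple average rather than the precision-weighted one.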
Development and Validation of a Disease Severity Scoring Model for Pediatric Sepsis.
Hu, Li; Zhu, Yimin; Chen, Mengshi; Li, Xun; Lu, Xiulan; Liang, Ying; Tan, Hongzhuan
2016-07-01
Multiple severity scoring systems have been devised and evaluated in adult sepsis, but a simplified scoring model for pediatric sepsis has not yet been developed. This study aimed to develop and validate a new scoring model to stratify the severity of pediatric sepsis, thus assisting the treatment of sepsis in children. Data from 634 consecutive patients who presented with sepsis at the Children's Hospital of Hunan Province in China in 2011-2013 were analyzed, with 476 patients placed in the training group and 158 patients in the validation group. Stepwise discriminant analysis was used to develop the accurate discriminant model. A simplified scoring model was generated using weightings defined by the discriminant coefficients. The discriminant ability of the model was tested by receiver operating characteristic (ROC) curves. The discriminant analysis showed that prothrombin time, D-dimer, total bilirubin, serum total protein, uric acid, PaO2/FiO2 ratio, and myoglobin were associated with severity of sepsis. These seven variables were assigned values of 4, 3, 3, 4, 3, 3, and 3, respectively, based on the standardized discriminant coefficients. Patients with higher scores had a higher risk of severe sepsis. The areas under the ROC curve (AROC) were 0.836 for the accurate discriminant model and 0.825 for the simplified scoring model in the validation group. The proposed disease severity scoring model for pediatric sepsis showed adequate discriminatory capacity and sufficient accuracy, which has important clinical significance in evaluating the severity of pediatric sepsis and predicting its progress.
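A simplified additive score of this kind sums integer points over whichever markers are abnormal. A sketch using the point values quoted in the abstract; the marker keys are paraphrased and the abnormality cut-offs are omitted, so flagging a marker abnormal is left to the caller:

```python
# Points per marker, as assigned in the abstract from the standardized
# discriminant coefficients (keys paraphrased; thresholds not shown).
POINTS = {
    "prothrombin_time": 4, "d_dimer": 3, "total_bilirubin": 3,
    "total_protein": 4, "uric_acid": 3, "pao2_fio2": 3, "myoglobin": 3,
}

def severity_score(abnormal_markers):
    """Sum the points of every marker flagged abnormal; higher scores
    indicate higher risk of severe sepsis."""
    return sum(POINTS[m] for m in abnormal_markers)

score = severity_score(["prothrombin_time", "d_dimer", "pao2_fio2"])
max_score = sum(POINTS.values())
```

Integer weights sacrifice a little discrimination relative to the full discriminant model (AROC 0.825 vs 0.836 here) in exchange for bedside usability.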
van der Ploeg, Tjeerd; Nieboer, Daan; Steyerberg, Ewout W
2016-10-01
Prediction of medical outcomes may potentially benefit from using modern statistical modeling techniques. We aimed to externally validate modeling strategies for prediction of 6-month mortality of patients suffering from traumatic brain injury (TBI) with predictor sets of increasing complexity. We analyzed individual patient data from 15 different studies including 11,026 TBI patients. We consecutively considered a core set of predictors (age, motor score, and pupillary reactivity), an extended set with computed tomography scan characteristics, and a further extension with two laboratory measurements (glucose and hemoglobin). With each of these sets, we predicted 6-month mortality using default settings with five statistical modeling techniques: logistic regression (LR), classification and regression trees, random forests (RFs), support vector machines (SVM) and neural nets. For external validation, a model developed on one of the 15 data sets was applied to each of the 14 remaining sets. This process was repeated 15 times for a total of 630 validations. The area under the receiver operating characteristic curve (AUC) was used to assess the discriminative ability of the models. For the most complex predictor set, the LR models performed best (median validated AUC value, 0.757), followed by RF and support vector machine models (median validated AUC value, 0.735 and 0.732, respectively). With each predictor set, the classification and regression trees models showed poor performance (median validated AUC value, <0.7). The variability in performance across the studies was smallest for the RF- and LR-based models (interquartile range of validated AUC values from 0.07 to 0.10). In the area of predicting mortality from TBI, nonlinear and nonadditive effects are not pronounced enough to make modern prediction methods beneficial. Copyright © 2016 Elsevier Inc. All rights reserved.
Ribeiro de Oliveira, Marcelo Magaldi; Nicolato, Arthur; Santos, Marcilea; Godinho, Joao Victor; Brito, Rafael; Alvarenga, Alexandre; Martins, Ana Luiza Valle; Prosdocimi, André; Trivelato, Felipe Padovani; Sabbagh, Abdulrahman J; Reis, Augusto Barbosa; Maestro, Rolando Del
2016-05-01
OBJECT The development of neurointerventional treatments of central nervous system disorders has resulted in the need for adequate training environments for novice interventionalists. Virtual simulators offer anatomical definition but lack adequate tactile feedback. Animal models, which provide more lifelike training, require an appropriate infrastructure base. The authors describe a training model for neurointerventional procedures using the human placenta (HP), which affords haptic training with significantly fewer resource requirements, and discuss its validation. METHODS Twelve HPs were prepared for simulated endovascular procedures. Training exercises performed by interventional neuroradiologists and novice fellows were placental angiography, stent placement, aneurysm coiling, and intravascular liquid embolic agent injection. RESULTS The endovascular training exercises proposed can be easily reproduced in the HP. Face, content, and construct validity were assessed by 6 neurointerventional radiologists and 6 novice fellows in interventional radiology. CONCLUSIONS The use of HP provides an inexpensive training model for the training of neurointerventionalists. Preliminary validation results show that this simulation model has face and content validity and has demonstrated construct validity for the interventions assessed in this study.
Infinite hidden conditional random fields for human behavior analysis.
Bousmalis, Konstantinos; Zafeiriou, Stefanos; Morency, Louis-Philippe; Pantic, Maja
2013-01-01
Hidden conditional random fields (HCRFs) are discriminative latent variable models that have been shown to successfully learn the hidden structure of a given classification problem (provided an appropriate validation of the number of hidden states). In this brief, we present the infinite HCRF (iHCRF), which is a nonparametric model based on hierarchical Dirichlet processes and is capable of automatically learning the optimal number of hidden states for a classification task. We show how we learn the model hyperparameters with an effective Markov-chain Monte Carlo sampling technique, and we explain the process that underlies our iHCRF model with the Restaurant Franchise Rating Agencies analogy. We show that the iHCRF is able to converge to a correct number of represented hidden states, and outperforms the best finite HCRFs, chosen via cross-validation, for the difficult tasks of recognizing instances of agreement, disagreement, and pain. Moreover, the iHCRF manages to achieve this performance in significantly less total training, validation, and testing time.
Blanchard, P; Wong, AJ; Gunn, GB; Garden, AS; Mohamed, ASR; Rosenthal, DI; Crutison, J; Wu, R; Zhang, X; Zhu, XR; Mohan, R; Amin, MV; Fuller, CD; Frank, SJ
2017-01-01
Objective To externally validate head and neck cancer (HNC) photon-derived normal tissue complication probability (NTCP) models in patients treated with proton beam therapy (PBT). Methods This prospective cohort consisted of HNC patients treated with PBT at a single institution. NTCP models were selected based on the availability of data for validation and evaluated using the leave-one-out cross-validated area under the curve (AUC) for the receiver operating characteristics curve. Results 192 patients were included. The most prevalent tumor site was oropharynx (n=86, 45%), followed by sinonasal (n=28), nasopharyngeal (n=27) or parotid (n=27) tumors. Apart from the prediction of acute mucositis (reduction of AUC of 0.17), the models overall performed well. The validation (PBT) AUC and the published AUC were respectively 0.90 versus 0.88 for feeding tube 6 months post-PBT; 0.70 versus 0.80 for physician rated dysphagia 6 months post-PBT; 0.70 versus 0.80 for dry mouth 6 months post-PBT; and 0.73 versus 0.85 for hypothyroidism 12 months post-PBT. Conclusion While the drop in NTCP model performance was expected in PBT patients, the models showed robustness and remained valid. Further work is warranted, but these results support the validity of the model-based approach for treatment selection for HNC patients. PMID:27641784
Partial validation of the Dutch model for emission and transport of nutrients (STONE).
Overbeek, G B; Tiktak, A; Beusen, A H; van Puijenbroek, P J
2001-11-17
The Netherlands has to cope with large losses of N and P to groundwater and surface water. Agriculture is the dominant source of these nutrients, particularly with reference to nutrient excretion due to intensive animal husbandry in combination with fertilizer use. The Dutch government has recently launched a stricter eutrophication abatement policy to comply with the EC nitrate directive. The Dutch consensus model for N and P emission to groundwater and surface water (STONE) has been developed to evaluate the environmental benefits of abatement plans. Due to the possibly severe socioeconomic consequences of eutrophication abatement plans, it is of utmost importance that the model is thoroughly validated. Because STONE is applied on a nationwide scale, the model validation has also been carried out on this scale. For this purpose the model outputs were compared with lumped results from monitoring networks in the upper groundwater and in surface waters. About 13,000 recent point source observations of nitrate in the upper groundwater were available, along with several hundreds of observations showing N and P in local surface water systems. Comparison of observations from the different spatial scales available showed the issue of scale to be important. Scale issues will be addressed in the next stages of the validation study.
Kimura, Koji; Yoshida, Atsushi; Takayanagi, Risa; Yamada, Yasuhiko
2018-05-23
Adalimumab (ADA) is used as a therapeutic agent for Crohn's disease (CD). Although the dosage regimen has been established through clinical trial experience, it has not been analyzed theoretically. In the present study, we analyzed the sequential changes of the Crohn's disease activity index (CDAI) after repeated administrations of ADA using a pharmacokinetic and pharmacodynamic model. In addition, we analyzed the validity of the dosage regimen, and the potential efficacy gained by increasing the dose and reducing the interval of administration. The sequential changes in CDAI values obtained with our model were in good agreement with observed CDAI values, which we considered to show the validity of our analysis. Our results showed the importance of the loading dose of ADA for obtaining remission in an early stage of active CD. In addition, we showed that patients who have an incomplete response to ADA can obtain similar efficacy from increasing the dose and reducing the dose interval. In conclusion, our results showed that the present model may be applied to predict the CDAI values of ADA for CD. They indicated the validity of the dosage regimen, as well as the efficacy of increasing the dose and reducing the dose interval. This article is protected by copyright. All rights reserved.
Modeling and validating the cost and clinical pathway of colorectal cancer.
Joranger, Paal; Nesbakken, Arild; Hoff, Geir; Sorbye, Halfdan; Oshaug, Arne; Aas, Eline
2015-02-01
Cancer is a major cause of morbidity and mortality, and colorectal cancer (CRC) is the third most common cancer in the world. The estimated costs of CRC treatment vary considerably, and if CRC costs in a model are based on empirically estimated total costs of stage I, II, III, or IV treatments, then they lack some flexibility to capture future changes in CRC treatment. The purpose was 1) to describe how to model CRC costs and survival and 2) to validate the model in a transparent and reproducible way. We applied a semi-Markov model with 70 health states and tracked age and time since specific health states (using tunnels and 3-dimensional data matrix). The model parameters are based on an observational study at Oslo University Hospital (2049 CRC patients), the National Patient Register, literature, and expert opinion. The target population was patients diagnosed with CRC. The model followed the patients diagnosed with CRC from the age of 70 until death or 100 years. The study focused on the perspective of health care payers. The model was validated for face validity, internal and external validity, and cross-validity. The validation showed a satisfactory match with other models and empirical estimates for both cost and survival time, without any preceding calibration of the model. The model can be used to 1) address a range of CRC-related themes (general model) like survival and evaluation of the cost of treatment and prevention measures; 2) make predictions from intermediate to final outcomes; 3) estimate changes in resource use and costs due to changing guidelines; and 4) adjust for future changes in treatment and trends over time. The model is adaptable to other populations. © The Author(s) 2014.
NASA Astrophysics Data System (ADS)
Julianto, E. A.; Suntoro, W. A.; Dewi, W. S.; Partoyo
2018-03-01
Climate change has been reported to exacerbate land resource degradation, including soil fertility decline. The appropriate use of valid soil fertility evaluation models could reduce the risk of climate change effects on plant cultivation. This study aims to assess the validity of soil fertility evaluation models using a graphical approach. The models evaluated were the Indonesian Soil Research Center (PPT) version, the FAO Unesco version, and the Kyuma version. Each model was then correlated with rice production (dry grain weight/GKP). The goodness of fit of each model can be tested to evaluate its quality and validity, along with the regression coefficient (R2). This research used the Eviews 9 programme with a graphical approach. The results produced three curves: actual, fitted, and residual. If the actual and fitted curves are widely apart or irregular, the quality of the model is not good, or there are many other factors still not included in the model (large residual), and vice versa. Indeed, if the actual and fitted curves show exactly the same shape, all factors have already been included in the model. Modification of the standard soil fertility evaluation models can improve the quality and validity of a model.
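The actual/fitted/residual comparison described here can be reproduced for any simple linear model; the following is a generic least-squares sketch (not the Eviews output, and the data are the user's to supply):

```python
def fit_line(x, y):
    """Ordinary least squares for y = a + b*x; returns fitted values,
    residuals, and the coefficient of determination R2."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    fitted = [a + b * xi for xi in x]
    resid = [yi - fi for yi, fi in zip(y, fitted)]
    ss_res = sum(r * r for r in resid)
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return fitted, resid, 1 - ss_res / ss_tot
```

Plotting `fitted` against `y` and inspecting `resid` is exactly the graphical check the abstract describes: large, patterned residuals signal missing factors, while overlapping actual and fitted curves (R2 near 1) signal a complete model.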
Developing evaluation instrument based on CIPP models on the implementation of portfolio assessment
NASA Astrophysics Data System (ADS)
Kurnia, Feni; Rosana, Dadan; Supahar
2017-08-01
This study aimed to develop an evaluation instrument constructed by the CIPP model on the implementation of portfolio assessment in science learning. This study used the research and development (R & D) method, adapting 4-D with the development of a non-test instrument, the evaluation instrument constructed by the CIPP model. CIPP is the abbreviation of Context, Input, Process, and Product. The techniques of data collection were interviews, questionnaires, and observations. Data collection instruments were: 1) interview guidelines for the analysis of the problems and the needs, 2) a questionnaire to see the level of accomplishment of the portfolio assessment instrument, and 3) observation sheets for teacher and student to dig up responses to the portfolio assessment instrument. The data obtained were quantitative data from several validators. The validators consisted of two lecturers as evaluation experts, two practitioners (science teachers), and three colleagues. This paper shows the results of content validity obtained from the validators and the result of analyzing the data using Aiken's V formula. The results of this study show that the evaluation instrument based on the CIPP model is proper to evaluate the implementation of portfolio assessment instruments. Based on the judgments of the experts, practitioners, and colleagues, the Aiken's V coefficient was between 0.86-1.00, which means that it is valid and can be used in the limited trial and operational field trial.
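Aiken's V, used above for the content-validity analysis, has a simple closed form: V = Σ(r_i − lo) / (n·(c − 1)), where n is the number of raters and c the number of rating categories. A minimal sketch (a 1-5 rating scale is assumed here; the abstract does not state the scale used):

```python
def aikens_v(ratings, lo=1, hi=5):
    """Aiken's V content-validity coefficient for one item.
    ratings: one score per rater on a lo..hi scale."""
    n = len(ratings)
    c = hi - lo + 1                       # number of rating categories
    s = sum(r - lo for r in ratings)      # summed distance above the lowest category
    return s / (n * (c - 1))
```

For example, with seven raters (matching the study's two experts, two teachers, and three colleagues) an item scored [5, 5, 5, 5, 5, 4, 4] gives V = 26/28 ≈ 0.93, inside the reported 0.86-1.00 band.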
Bray, Benjamin D; Campbell, James; Cloud, Geoffrey C; Hoffman, Alex; James, Martin; Tyrrell, Pippa J; Wolfe, Charles D A; Rudd, Anthony G
2014-11-01
Case mix adjustment is required to allow valid comparison of outcomes across care providers. However, there is a lack of externally validated models suitable for use in unselected stroke admissions. We therefore aimed to develop and externally validate prediction models to enable comparison of 30-day post-stroke mortality outcomes using routine clinical data. Models were derived (n=9000 patients) and internally validated (n=18 169 patients) using data from the Sentinel Stroke National Audit Program, the national register of acute stroke in England and Wales. External validation (n=1470 patients) was performed in the South London Stroke Register, a population-based longitudinal study. Models were fitted using general estimating equations. Discrimination and calibration were assessed using receiver operating characteristic curve analysis and correlation plots. Two final models were derived. Model A included age (<60, 60-69, 70-79, 80-89, and ≥90 years), National Institutes of Health Stroke Severity Score (NIHSS) on admission, presence of atrial fibrillation on admission, and stroke type (ischemic versus primary intracerebral hemorrhage). Model B was similar but included only the consciousness component of the NIHSS in place of the full NIHSS. Both models showed excellent discrimination and calibration in internal and external validation. The c-statistics in external validation were 0.87 (95% confidence interval, 0.84-0.89) and 0.86 (95% confidence interval, 0.83-0.89) for models A and B, respectively. We have derived and externally validated 2 models to predict mortality in unselected patients with acute stroke using commonly collected clinical variables. In settings where the ability to record the full NIHSS on admission is limited, the level of consciousness component of the NIHSS provides a good approximation of the full NIHSS for mortality prediction. © 2014 American Heart Association, Inc.
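The c-statistic reported above is the probability that a randomly chosen patient who died received a higher predicted risk than a randomly chosen survivor. A brute-force pairwise sketch (adequate for illustration; not the authors' code, which used ROC analysis on registry data):

```python
def c_statistic(scores, outcomes):
    """Concordance statistic: P(score of a case > score of a control),
    counting ties as one half. outcomes: 1 = event, 0 = no event."""
    cases = [s for s, y in zip(scores, outcomes) if y == 1]
    controls = [s for s, y in zip(scores, outcomes) if y == 0]
    concordant = sum((a > b) + 0.5 * (a == b) for a in cases for b in controls)
    return concordant / (len(cases) * len(controls))
```

A value of 0.5 is chance-level discrimination; the 0.86-0.87 reported in external validation means roughly 87% of case/control pairs are ranked correctly by the model.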
Hu, Alan Shiun Yew; Donohue, Peter O'; Gunnarsson, Ronny K; de Costa, Alan
2018-03-14
Valid and user-friendly prediction models for conversion to open cholecystectomy allow for proper planning prior to surgery. The Cairns Prediction Model (CPM) has been in use clinically in the original study site for the past three years, but has not been tested at other sites. A retrospective, single-centred study collected ultrasonic measurements and clinical variables, along with conversion status, from consecutive patients who underwent laparoscopic cholecystectomy from 2013 to 2016 in The Townsville Hospital, North Queensland, Australia. An area under the curve (AUC) was calculated to externally validate the CPM. Conversion was necessary in 43 (4.2%) of 1035 patients. External validation showed an area under the curve of 0.87 (95% CI 0.82-0.93, p = 1.1 × 10^-14). In comparison with most previously published models, which have an AUC of approximately 0.80 or less, the CPM has the highest AUC of all published prediction models for both internal and external validation. Crown Copyright © 2018. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Eschenbach, Wolfram; Budziak, Dörte; Elbracht, Jörg; Höper, Heinrich; Krienen, Lisa; Kunkel, Ralf; Meyer, Knut; Well, Reinhard; Wendland, Frank
2018-06-01
Valid models for estimating nitrate emissions from agriculture to groundwater are an indispensable forecasting tool. A major challenge for model validation is the spatial and temporal inconsistency between data from groundwater monitoring points and modelled nitrate inputs into groundwater, and the fact that many existing groundwater monitoring wells cannot be used for validation. With the help of the N2/Ar-method, groundwater monitoring wells in areas with reduced groundwater can now be used for model validation. For this purpose, 484 groundwater monitoring wells were sampled in Lower Saxony. For the first time, modelled potential nitrate concentrations in groundwater recharge (from the DENUZ model) were compared with nitrate input concentrations, which were calculated using the N2/Ar method. The results show a good agreement between both methods for glacial outwash plains and moraine deposits. Although the nitrate degradation processes in groundwater and soil merge seamlessly in areas with a shallow groundwater table, the DENUZ model only calculates denitrification in the soil zone. The DENUZ model thus predicts 27% higher nitrate emissions into the groundwater than the N2/Ar method in such areas. To account for high temporal and spatial variability of nitrate emissions into groundwater, a large number of groundwater monitoring points must be investigated for model validation.
Mind the Noise When Identifying Computational Models of Cognition from Brain Activity.
Kolossa, Antonio; Kopp, Bruno
2016-01-01
The aim of this study was to analyze how measurement error affects the validity of modeling studies in computational neuroscience. A synthetic validity test was created using simulated P300 event-related potentials as an example. The model space comprised four computational models of single-trial P300 amplitude fluctuations which differed in terms of complexity and dependency. The single-trial fluctuation of simulated P300 amplitudes was computed on the basis of one of the models, at various levels of measurement error and at various numbers of data points. Bayesian model selection was performed based on exceedance probabilities. At very low numbers of data points, the least complex model generally outperformed the data-generating model. Invalid model identification also occurred at low levels of data quality and under low numbers of data points if the winning model's predictors were closely correlated with the predictors from the data-generating model. Given sufficient data quality and numbers of data points, the data-generating model could be correctly identified, even against models which were very similar to the data-generating model. Thus, a number of variables affects the validity of computational modeling studies, and data quality and numbers of data points are among the main factors relevant to the issue. Further, the nature of the model space (i.e., model complexity, model dependency) should not be neglected. This study provided quantitative results which show the importance of ensuring the validity of computational modeling via adequately prepared studies. The accomplishment of synthetic validity tests is recommended for future applications. Beyond that, we propose to render the demonstration of sufficient validity via adequate simulations mandatory to computational modeling studies.
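The complexity/fit trade-off that drives invalid model identification at low data quality can be illustrated with BIC on a toy regression. This is only a rough stand-in for the study's Bayesian model selection (which used exceedance probabilities); the data-generating model, noise level, and number of data points below are invented for illustration:

```python
import math
import random

def toy_data(n=100, noise=0.2, seed=0):
    """Toy single-trial data: a linear signal plus Gaussian measurement error."""
    rng = random.Random(seed)
    x = [i / 10 for i in range(n)]
    y = [xi + rng.gauss(0.0, noise) for xi in x]
    return x, y

def bic_linear_vs_null(x, y):
    """BIC (up to shared constants) of an intercept-only model vs. a
    one-predictor linear model, assuming Gaussian errors. Lower is better."""
    n = len(y)
    my = sum(y) / n
    ss_null = sum((yi - my) ** 2 for yi in y)
    mx = sum(x) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    ss_lin = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    # BIC = n*log(SSR/n) + k*log(n); the extra parameter penalises complexity
    return (n * math.log(ss_null / n) + 1 * math.log(n),
            n * math.log(ss_lin / n) + 2 * math.log(n))
```

With ample data and modest noise the data-generating (linear) model wins; shrinking n or inflating the noise lets the penalty term favour the simpler null model, mirroring the abstract's finding that the least complex model wins at very low numbers of data points.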
NASA Astrophysics Data System (ADS)
Basith, Abdul; Prakoso, Yudhono; Kongko, Widjo
2017-07-01
A tsunami model using high-resolution geometric data is indispensable in tsunami mitigation efforts, especially in tsunami-prone areas, as such data are among the factors that determine the accuracy of numerical tsunami modeling. Sadeng Port is a new infrastructure on the southern coast of Java which could potentially be hit by a massive tsunami originating from the seismic gap. This paper discusses validation and error estimation of a tsunami model created using high-resolution geometric data at Sadeng Port. Model validation uses the wave height of the 2006 Pangandaran tsunami recorded by the Sadeng tide gauge. The model will be used for numerical tsunami modeling involving earthquake-tsunami parameters derived from the seismic gap. The validation results using a Student's t-test show that the tsunami heights from the model and the observations at the Sadeng tide gauge are statistically equal at the 95% confidence level; the RMSE and NRMSE are 0.428 m and 22.12%, while the difference in tsunami wave travel time is 12 minutes.
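The RMSE and NRMSE quoted above are standard model-validation statistics. A minimal sketch, assuming NRMSE is normalised by the observed range (the abstract does not state which normalisation was used):

```python
import math

def rmse(obs, model):
    """Root-mean-square error between observed and modelled values."""
    return math.sqrt(sum((o - m) ** 2 for o, m in zip(obs, model)) / len(obs))

def nrmse(obs, model):
    """RMSE normalised by the observed range, expressed in percent."""
    return 100.0 * rmse(obs, model) / (max(obs) - min(obs))
```

Applied to modelled versus gauge-recorded wave heights, these two numbers summarise the absolute and relative misfit in one pass.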
Effect of nonlinearity in hybrid kinetic Monte Carlo-continuum models.
Balter, Ariel; Lin, Guang; Tartakovsky, Alexandre M
2012-01-01
Recently there has been interest in developing efficient ways to model heterogeneous surface reactions with hybrid computational models that couple a kinetic Monte Carlo (KMC) model for a surface to a finite-difference model for bulk diffusion in a continuous domain. We consider two representative problems that validate a hybrid method and show that this method captures the combined effects of nonlinearity and stochasticity. We first validate a simple deposition-dissolution model with a linear rate showing that the KMC-continuum hybrid agrees with both a fully deterministic model and its analytical solution. We then study a deposition-dissolution model including competitive adsorption, which leads to a nonlinear rate, and show that in this case the KMC-continuum hybrid and fully deterministic simulations do not agree. However, we are able to identify the difference as a natural result of the stochasticity coming from the KMC surface process. Because KMC captures inherent fluctuations, we consider it to be more realistic than a purely deterministic model. Therefore, we consider the KMC-continuum hybrid to be more representative of a real system.
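A deposition-dissolution surface with linear rates, as in the first validation problem, can be simulated with a standard Gillespie-type KMC loop. This is a generic sketch with invented rate constants, not the authors' hybrid code; its stationary mean occupancy, k_dep/(k_dep + k_dis), is exactly the kind of analytical solution such a hybrid is checked against:

```python
import random

def kmc_deposition_dissolution(n_sites=100, k_dep=1.0, k_dis=1.0,
                               n_steps=100_000, seed=1):
    """Gillespie-type KMC for a lattice with linear per-site rates:
    deposition onto empty sites, dissolution from occupied ones.
    Returns the time-averaged fractional occupancy (analytic stationary
    mean: k_dep / (k_dep + k_dis))."""
    rng = random.Random(seed)
    n = 0                # currently occupied sites
    t = 0.0
    occupancy_time = 0.0
    for _ in range(n_steps):
        r_dep = k_dep * (n_sites - n)    # total deposition propensity
        r_dis = k_dis * n                # total dissolution propensity
        total = r_dep + r_dis
        dt = rng.expovariate(total)      # exponential waiting time to next event
        occupancy_time += n * dt
        t += dt
        if rng.random() < r_dep / total:
            n += 1
        else:
            n -= 1
    return occupancy_time / t / n_sites
```

The fluctuations around the deterministic mean are the "inherent fluctuations" the abstract refers to: the KMC trajectory never settles onto the continuum value, it fluctuates around it.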
Effect of Nonlinearity in Hybrid Kinetic Monte Carlo-Continuum Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balter, Ariel I.; Lin, Guang; Tartakovsky, Alexandre M.
2012-04-23
Recently there has been interest in developing efficient ways to model heterogeneous surface reactions with hybrid computational models that couple a KMC model for a surface to a finite difference model for bulk diffusion in a continuous domain. We consider two representative problems that validate a hybrid method and also show that this method captures the combined effects of nonlinearity and stochasticity. We first validate a simple deposition/dissolution model with a linear rate showing that the KMC-continuum hybrid agrees with both a fully deterministic model and its analytical solution. We then study a deposition/dissolution model including competitive adsorption, which leads to a nonlinear rate, and show that, in this case, the KMC-continuum hybrid and fully deterministic simulations do not agree. However, we are able to identify the difference as a natural result of the stochasticity coming from the KMC surface process. Because KMC captures inherent fluctuations, we consider it to be more realistic than a purely deterministic model. Therefore, we consider the KMC-continuum hybrid to be more representative of a real system.
Experimental Errors in QSAR Modeling Sets: What We Can Do and What We Cannot Do.
Zhao, Linlin; Wang, Wenyi; Sedykh, Alexander; Zhu, Hao
2017-06-30
Numerous chemical data sets have become available for quantitative structure-activity relationship (QSAR) modeling studies. However, the quality of different data sources may be different based on the nature of experimental protocols. Therefore, potential experimental errors in the modeling sets may lead to the development of poor QSAR models and further affect the predictions of new compounds. In this study, we explored the relationship between the ratio of questionable data in the modeling sets, which was obtained by simulating experimental errors, and the QSAR modeling performance. To this end, we used eight data sets (four continuous endpoints and four categorical endpoints) that have been extensively curated both in-house and by our collaborators to create over 1800 various QSAR models. Each data set was duplicated to create several new modeling sets with different ratios of simulated experimental errors (i.e., randomizing the activities of part of the compounds) in the modeling process. A fivefold cross-validation process was used to evaluate the modeling performance, which deteriorates when the ratio of experimental errors increases. All of the resulting models were also used to predict external sets of new compounds, which were excluded at the beginning of the modeling process. The modeling results showed that the compounds with relatively large prediction errors in cross-validation processes are likely to be those with simulated experimental errors. However, after removing a certain number of compounds with large prediction errors in the cross-validation process, the external predictions of new compounds did not show improvement. Our conclusion is that the QSAR predictions, especially consensus predictions, can identify compounds with potential experimental errors. But removing those compounds by the cross-validation procedure is not a reasonable means to improve model predictivity due to overfitting.
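The error-simulation protocol (randomizing the activities of a fraction of compounds, then measuring cross-validated performance) can be sketched with a toy one-descriptor classifier. The data set, descriptor, and nearest-centroid model below are illustrative stand-ins for the QSAR models used in the study:

```python
import random

def make_data(n=200, seed=0):
    """Toy modeling set: one descriptor, two activity classes (0/1)."""
    rng = random.Random(seed)
    xs, ys = [], []
    for i in range(n):
        y = i % 2
        xs.append(rng.gauss(3.0 * y, 1.0))   # class means 0 and 3, unit spread
        ys.append(y)
    return xs, ys

def randomize_labels(ys, ratio, seed=1):
    """Simulate experimental errors: re-label a fraction of compounds at random."""
    rng = random.Random(seed)
    out = list(ys)
    for i in rng.sample(range(len(ys)), int(ratio * len(ys))):
        out[i] = rng.randrange(2)
    return out

def cv_accuracy(xs, ys, k=5):
    """k-fold cross-validated accuracy of a nearest-centroid classifier."""
    n = len(xs)
    correct = 0
    for fold in range(k):
        test = set(range(fold, n, k))
        centroid = {}
        for c in (0, 1):
            vals = [xs[i] for i in range(n) if i not in test and ys[i] == c]
            centroid[c] = sum(vals) / len(vals)
        for i in test:
            pred = min((0, 1), key=lambda c: abs(xs[i] - centroid[c]))
            correct += pred == ys[i]
    return correct / n
```

As in the study, raising the ratio of randomized activities degrades cross-validated performance, and the compounds with the largest prediction errors tend to be the re-labeled ones.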
Experimental Errors in QSAR Modeling Sets: What We Can Do and What We Cannot Do
2017-01-01
Numerous chemical data sets have become available for quantitative structure–activity relationship (QSAR) modeling studies. However, the quality of different data sources may be different based on the nature of experimental protocols. Therefore, potential experimental errors in the modeling sets may lead to the development of poor QSAR models and further affect the predictions of new compounds. In this study, we explored the relationship between the ratio of questionable data in the modeling sets, which was obtained by simulating experimental errors, and the QSAR modeling performance. To this end, we used eight data sets (four continuous endpoints and four categorical endpoints) that have been extensively curated both in-house and by our collaborators to create over 1800 various QSAR models. Each data set was duplicated to create several new modeling sets with different ratios of simulated experimental errors (i.e., randomizing the activities of part of the compounds) in the modeling process. A fivefold cross-validation process was used to evaluate the modeling performance, which deteriorates when the ratio of experimental errors increases. All of the resulting models were also used to predict external sets of new compounds, which were excluded at the beginning of the modeling process. The modeling results showed that the compounds with relatively large prediction errors in cross-validation processes are likely to be those with simulated experimental errors. However, after removing a certain number of compounds with large prediction errors in the cross-validation process, the external predictions of new compounds did not show improvement. Our conclusion is that the QSAR predictions, especially consensus predictions, can identify compounds with potential experimental errors. But removing those compounds by the cross-validation procedure is not a reasonable means to improve model predictivity due to overfitting. PMID:28691113
Prediction of resource volumes at untested locations using simple local prediction models
Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.
2006-01-01
This paper shows how local spatial nonparametric prediction models can be applied to estimate volumes of recoverable gas resources at individual undrilled sites and at multiple sites on a regional scale, and to compute confidence bounds for regional volumes based on the distribution of those estimates. An approach that combines cross-validation, the jackknife, and bootstrap procedures is used to accomplish this task. Simulation experiments show that cross-validation can be applied beneficially to select an appropriate prediction model. The cross-validation procedure worked well for a wide range of different states of nature and levels of information. Jackknife procedures are used to compute individual prediction estimation errors at undrilled locations. The jackknife replicates also are used with a bootstrap resampling procedure to compute confidence bounds for the total volume. The method was applied to data (partitioned into a training set and target set) from the Devonian Antrim Shale continuous-type gas play in the Michigan Basin in Otsego County, Michigan. The analysis showed that the model estimate of total recoverable volumes at prediction sites is within 4 percent of the total observed volume. The model predictions also provide frequency distributions of the cell volumes at the production unit scale. Such distributions are the basis for subsequent economic analyses. © Springer Science+Business Media, LLC 2007.
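The bootstrap step for regional confidence bounds can be sketched as a percentile interval on resampled site-level estimates. The study resamples jackknife replicates; plain resampling of site-level volumes is used here for brevity, and all numbers are illustrative:

```python
import random

def bootstrap_total_ci(site_volumes, n_boot=2000, alpha=0.10, seed=0):
    """Percentile-bootstrap confidence bounds for the regional total volume,
    resampling site-level volume estimates with replacement."""
    rng = random.Random(seed)
    n = len(site_volumes)
    totals = sorted(
        sum(rng.choice(site_volumes) for _ in range(n))   # one bootstrap total
        for _ in range(n_boot)
    )
    lo = totals[int(n_boot * alpha / 2)]
    hi = totals[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi
```

The resulting (lo, hi) pair brackets the regional total with roughly 1 − alpha coverage, which is the kind of confidence bound the abstract describes for total recoverable volume.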
Validity of "Hi_Science" as instructional media based-android refer to experiential learning model
NASA Astrophysics Data System (ADS)
Qamariah, Jumadi, Senam, Wilujeng, Insih
2017-08-01
Hi_Science is Android-based instructional media for learning science on the material of environmental pollution and global warming. This study aimed: (a) to show the display of Hi_Science that will be applied in junior high school, and (b) to describe the validity of Hi_Science. Hi_Science was created by combining an innovative learning model with current technology: the media is Android-based and collaborated with the experiential learning model as an innovative learning model. Hi_Science adapted the student worksheet by Taufiq (2015), which was rated very good by two expert lecturers and two science teachers (Taufiq, 2015). This worksheet was refined and redeveloped in Android as instructional media that students can use for learning science not only in the classroom, but also at home. Therefore, the worksheet turned instructional media had to be validated again. Hi_Science has been validated by two experts, based on assessment of material aspects and media aspects. Data collection was done with a media assessment instrument. The results showed that the material aspects obtained an average value of 4.72 with a percentage of agreement of 96.47%, which means that Hi_Science is in the excellent, or very valid, category on the material aspects. The media aspects obtained an average value of 4.53 with a percentage of agreement of 98.70%, which means that Hi_Science is in the excellent, or very valid, category on the media aspects. It was concluded that Hi_Science can be applied as instructional media in junior high school.
Stevenson, Douglass E; Feng, Ge; Zhang, Runjie; Harris, Marvin K
2005-08-01
Scirpophaga incertulas (Walker) (Lepidoptera: Pyralidae) is autochthonous and monophagous on rice, Oryza spp., which favors the development of a physiological time model using degree-days (degrees C) to establish a well-defined window during which adults will be present in fields. Model development of S. incertulas adult flight phenology used climatic data and historical field observations of S. incertulas from 1962 through 1988. Analysis of variance was used to evaluate 5,203 prospective models with starting dates ranging from 1 January (day 1) to 30 April (day 121) and base temperatures ranging from -3 through 18.5 degrees C. From six candidate models, which shared the lowest standard deviation of prediction error, a model with a base temperature of 10 degrees C starting on 19 January was selected for validation. Validation with linear regression evaluated the differences between predicted and observed events and showed the model consistently predicted phenological events of 10 to 90% cumulative flight activity within a 3.5-d prediction interval regarded as acceptable for pest management decision making. The degree-day phenology model developed here is expected to find field application in Guangdong Province. Expansion to other areas of rice production will require field validation. We expect the degree-day characterization of the activity period will remain essentially intact, but the start day may vary based on climate and geographic location. The development and validation of the phenology model of S. incertulas using procedures originally developed for pecan nut casebearer, Acrobasis nuxvorella Neunzig, shows the fungibility of this approach to developing prediction models for other insects.
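A degree-day model of this kind accumulates daily heat units above the base temperature from the start date onward, then predicts a phenological event when the running total crosses a threshold. A minimal sketch (the 10 °C base follows the abstract; the event threshold and temperatures below are invented examples):

```python
def degree_days(daily_mean_temps, base=10.0):
    """Accumulated degree-days (deg C * day) above the base temperature."""
    return sum(max(0.0, t - base) for t in daily_mean_temps)

def predict_event_day(daily_mean_temps, threshold, base=10.0):
    """First day (1-indexed from the start date) on which accumulated
    degree-days reach the event threshold; None if never reached."""
    total = 0.0
    for day, t in enumerate(daily_mean_temps, start=1):
        total += max(0.0, t - base)
        if total >= threshold:
            return day
    return None
```

Feeding in daily mean temperatures from 19 January with a calibrated threshold per flight-activity percentile is the structure the selected model uses; only the base temperature and start day need re-fitting for a new region.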
Ochoa-Meza, Gerardo; Sierra, Juan Carlos; Pérez-Rodrigo, Carmen; Aranceta Bartrina, Javier; Esparza-Del Villar, Óscar A
2014-11-24
To test the goodness of fit of a Motivation-Ability-Opportunity model (MAO-model) to evaluate the observed variance in Mexican schoolchildren's preferences to eat fruit and daily fruit intake; also to evaluate the factorial invariance across gender and type of population (urban and semi-urban) in which children reside. A model with seven constructs was designed from a validated questionnaire to assess preferences, cognitive abilities, attitude, modelling, perceived barriers, accessibility at school, accessibility at home, and fruit intake frequency. The instrument was administered to a representative sample of 1434 schoolchildren of 5th and 6th grade of primary school in a cross-sectional and ex post facto study conducted in 2013 in six cities of the State of Chihuahua, Mexico. The goodness-of-fit indices were adequate for the MAO-model, which explained 39% of the variance in preference to eat fruit. The structure of the model showed very good factor structure stability, and the dimensions of the scale were equivalent in the different samples analyzed. The model, analyzed with structural equation modeling, is a parsimonious model that can be used to explain the variation in fruit intake of 10 to 12 year old Mexican schoolchildren. The structure of the model was strictly invariant in the different samples analyzed and showed evidence of cross-validation. Finally, implications of modifying the model to fit data from school settings and guidelines for future research are discussed. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, X; Wang, J; Hu, W
Purpose: The Varian RapidPlan™ is a commercial knowledge-based optimization process which uses a set of clinically used treatment plans to train a model that can predict individualized dose-volume objectives. The purpose of this study is to evaluate the performance of RapidPlan in generating intensity modulated radiation therapy (IMRT) plans for cervical cancer. Methods: In total, 70 IMRT plans for cervical cancer with varying clinical and physiological indications were enrolled in this study. These patients were all previously treated in our institution, at one of the two prescription levels usually used there: 45 Gy/25 fractions and 50.4 Gy/28 fractions. 50 of these plans were selected to train the RapidPlan model for predicting dose-volume constraints. After model training, the model was validated with 10 plans from the training pool (internal validation) and 20 additional new plans (external validation). All plans used for validation were re-optimized with the original beam configuration, and the priorities generated by RapidPlan were manually adjusted to ensure that the re-optimized DVH lay within the range of the model prediction. Quantitative DVH analysis was performed to compare the RapidPlan-generated and the original manually optimized plans. Results: For all validation cases, RapidPlan-based plans showed similar or superior results compared to the manually optimized ones. RapidPlan increased D98% and homogeneity in both validations. For organs at risk, RapidPlan decreased the mean dose of the bladder by 1.25 Gy/1.13 Gy (internal/external validation) on average, with p=0.12/p<0.01. The mean doses of the rectum and bowel were also decreased by an average of 2.64 Gy/0.83 Gy and 0.66 Gy/1.05 Gy, with p<0.01/p<0.01 and p=0.04/p<0.01 for the internal/external validation, respectively. Conclusion: The RapidPlan-based cervical cancer plans show the ability to systematically improve IMRT plan quality, suggesting that RapidPlan has great potential to make the treatment planning process more efficient.
Holgado-Tello, Fco P; Chacón-Moscoso, Salvador; Sanduvete-Chaves, Susana; Pérez-Gil, José A
2016-01-01
The Campbellian tradition provides a conceptual framework to assess threats to validity. On the other hand, different models of causal analysis have been developed to control estimation biases in different research designs. However, the link between design features, measurement issues, and concrete impact estimation analyses is weak. In order to provide an empirical solution to this problem, we use Structural Equation Modeling (SEM) as a first approximation to operationalize the analytical implications of threats to validity in quasi-experimental designs. Based on the analogies established between the Classical Test Theory (CTT) and causal analysis, we describe an empirical study based on SEM in which range restriction and statistical power have been simulated in two different models: (1) A multistate model in the control condition (pre-test); and (2) A single-trait-multistate model in the control condition (post-test), adding a new mediator latent exogenous (independent) variable that represents a threat to validity. Results show, empirically, how the differences between both the models could be partially or totally attributed to these threats. Therefore, SEM provides a useful tool to analyze the influence of potential threats to validity.
Holgado-Tello, Fco. P.; Chacón-Moscoso, Salvador; Sanduvete-Chaves, Susana; Pérez-Gil, José A.
2016-01-01
The Campbellian tradition provides a conceptual framework to assess threats to validity. On the other hand, different models of causal analysis have been developed to control estimation biases in different research designs. However, the link between design features, measurement issues, and concrete impact estimation analyses is weak. In order to provide an empirical solution to this problem, we use Structural Equation Modeling (SEM) as a first approximation to operationalize the analytical implications of threats to validity in quasi-experimental designs. Based on the analogies established between the Classical Test Theory (CTT) and causal analysis, we describe an empirical study based on SEM in which range restriction and statistical power have been simulated in two different models: (1) A multistate model in the control condition (pre-test); and (2) A single-trait-multistate model in the control condition (post-test), adding a new mediator latent exogenous (independent) variable that represents a threat to validity. Results show, empirically, how the differences between both the models could be partially or totally attributed to these threats. Therefore, SEM provides a useful tool to analyze the influence of potential threats to validity. PMID:27378991
Huang, Yanqi; He, Lan; Dong, Di; Yang, Caiyun; Liang, Cuishan; Chen, Xin; Ma, Zelan; Huang, Xiaomei; Yao, Su; Liang, Changhong; Tian, Jie; Liu, Zaiyi
2018-02-01
To develop and validate a radiomics prediction model for individualized prediction of perineural invasion (PNI) in colorectal cancer (CRC). After computed tomography (CT) radiomics features extraction, a radiomics signature was constructed in derivation cohort (346 CRC patients). A prediction model was developed to integrate the radiomics signature and clinical candidate predictors [age, sex, tumor location, and carcinoembryonic antigen (CEA) level]. Apparent prediction performance was assessed. After internal validation, independent temporal validation (separate from the cohort used to build the model) was then conducted in 217 CRC patients. The final model was converted to an easy-to-use nomogram. The developed radiomics nomogram that integrated the radiomics signature and CEA level showed good calibration and discrimination performance [Harrell's concordance index (c-index): 0.817; 95% confidence interval (95% CI): 0.811-0.823]. Application of the nomogram in validation cohort gave a comparable calibration and discrimination (c-index: 0.803; 95% CI: 0.794-0.812). Integrating the radiomics signature and CEA level into a radiomics prediction model enables easy and effective risk assessment of PNI in CRC. This stratification of patients according to their PNI status may provide a basis for individualized auxiliary treatment.
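A nomogram of this kind is essentially a graphical rendering of a monotone transform of a linear predictor. A hedged logistic sketch of how a radiomics signature and CEA level could be combined into a risk; all coefficients below are invented for illustration and are not the published model:

```python
import math

def pni_risk(radiomics_score, cea, b0=-2.0, b1=1.5, b2=0.02):
    """Logistic combination of a radiomics signature and CEA level into a
    perineural-invasion risk. b0, b1, b2 are illustrative placeholders,
    not coefficients from the paper."""
    linear_predictor = b0 + b1 * radiomics_score + b2 * cea
    return 1.0 / (1.0 + math.exp(-linear_predictor))
```

The nomogram simply lets a clinician read the same linear predictor off paper scales: each predictor contributes points, and the point total maps to a probability.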
Fouad, Marwa A; Tolba, Enas H; El-Shal, Manal A; El Kerdawy, Ahmed M
2018-05-11
The continual emergence of new β-lactam antibiotics creates a need for analytical methods that accelerate and facilitate their analysis. A face-centered central composite experimental design was adopted, using different levels of phosphate buffer pH and acetonitrile percentage at zero time and after 15 min in a gradient program, to obtain the optimum chromatographic conditions for the elution of 31 β-lactam antibiotics. Retention factors were used as the target property to build two QSRR models, utilizing the conventional forward selection algorithm and the advanced nature-inspired firefly algorithm for descriptor selection, each coupled with multiple linear regression. The obtained models showed high performance in both internal and external validation, indicating their robustness and predictive ability. The Williams-Hotelling test and Student's t-test showed no statistically significant difference between the models' results. Y-randomization showed that the obtained models reflect genuine correlation between the selected molecular descriptors and the analytes' chromatographic retention. These results indicate that the generated FS-MLR and FFA-MLR models are of comparable quality at both the training and validation levels. They also gave comparable information about the molecular features that influence the retention behavior of β-lactams under the current chromatographic conditions. We conclude that, in some cases, a simple conventional feature selection algorithm can generate models as robust and predictive as those generated using advanced ones. Copyright © 2018 Elsevier B.V. All rights reserved.
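The Y-randomization check used above can be sketched generically: refit the model after permuting the response and confirm that the scrambled fits are clearly worse than the real one. A minimal illustration with a one-descriptor linear fit (the actual QSRR models used many descriptors with multiple linear regression; names here are illustrative):

```python
# Hedged sketch of Y-randomization (y-scrambling), assuming a
# one-descriptor linear fit for brevity; the study's models used
# many descriptors with multiple linear regression.
import random

def r_squared(x, y):
    """Squared Pearson correlation between descriptor x and response y."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

def y_randomization(x, y, trials=200, seed=0):
    """Refit after permuting the response; a genuine correlation should
    clearly exceed the best chance correlation among the permutations."""
    rng = random.Random(seed)
    r2_true = r_squared(x, y)
    ys = list(y)
    scrambled = []
    for _ in range(trials):
        rng.shuffle(ys)
        scrambled.append(r_squared(x, ys))
    return r2_true, max(scrambled)
```

If the true R² is not clearly above the best scrambled R², the model likely fits chance correlation.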
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dyachkov, Sergey, E-mail: serj.dyachkov@gmail.com; Moscow Institute of Physics and Technology, 9 Institutskiy per., Dolgoprudny, Moscow Region 141700; Levashov, Pavel, E-mail: pasha@ihed.ras.ru
We determine the region of applicability of the finite-temperature Thomas-Fermi model and its thermal part with respect to quantum and exchange corrections. Very high computational accuracy has been achieved by using a special approach for the solution of the boundary problem and for numerical integration. We show that the thermal part of the model can be applied at lower temperatures than the full model. We also offer simple approximations of the boundaries of validity for practical applications.
A Framework for Text Mining in Scientometric Study: A Case Study in Biomedicine Publications
NASA Astrophysics Data System (ADS)
Silalahi, V. M. M.; Hardiyati, R.; Nadhiroh, I. M.; Handayani, T.; Rahmaida, R.; Amelia, M.
2018-04-01
Data on Indonesian research publications in the domain of biomedicine have been collected for text mining in a scientometric study. The goal is to build a predictive model that classifies research publications by their potential for downstreaming. The model is based on drug development processes adapted from the literature. We describe the effort to build the conceptual model and to develop a corpus of research publications in Indonesian biomedicine, and then investigate the problems associated with building the corpus and validating the model. Based on this experience, a framework is proposed for managing text-mining-based scientometric studies. Our method shows the effectiveness of conducting a scientometric study based on text mining in order to obtain a valid classification model. The validity of the model rests mainly on iterative, close interaction with domain experts, from identifying the issues and building a conceptual model through labelling, validation, and interpretation of results.
Innstrand, Siw Tone; Christensen, Marit; Undebakke, Kirsti Godal; Svarva, Kyrre
2015-12-01
The aim of the present paper is to present and validate the Knowledge-Intensive Work Environment Survey Target (KIWEST), a questionnaire developed for assessing psychosocial factors among people in knowledge-intensive work environments. The construct validity and reliability of the measurement model were tested on a representative sample of 3066 academic and administrative staff working at one of the largest universities in Norway. Confirmatory factor analysis provided initial support for the convergent validity and internal consistency of the 30-construct KIWEST measurement model. However, discriminant validity tests indicated that some of the constructs may overlap to some degree. Overall, the KIWEST measure showed promising psychometric properties as a psychosocial work environment measure. © 2015 the Nordic Societies of Public Health.
Hydrological Modeling of the Jiaoyi Watershed (China) Using HSPF Model
Yan, Chang-An; Zhang, Wanchang; Zhang, Zhijie
2014-01-01
A watershed hydrological model, Hydrological Simulation Program-Fortran (HSPF), was applied to simulate the spatial and temporal variation of hydrological processes in the Jiaoyi watershed of the Huaihe River Basin, one of the most water-scarce and polluted areas in China. The model was calibrated on data from 2001-2004 and validated on data from 2005-2006. Calibration and validation results showed that the model simulated mean monthly and daily runoff accurately, with close agreement between simulated and observed hydrographs and strong evaluation indicators such as the Nash-Sutcliffe efficiency (NSE), coefficient of determination (R²), and relative error (RE). The similar performance in the calibration and validation periods showed that the calibrated parameters were representative of the Jiaoyi watershed. Additionally, the simulation in rainy months was more accurate than in dry months. HSPF was also capable of estimating the water balance components reasonably and realistically across the whole watershed. The calibrated model can be used to explore the effects of climate change scenarios and various watershed management practices on water resources and the water environment in the basin. PMID:25013863
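The Nash-Sutcliffe efficiency cited above compares model error against the variance of the observations: NSE = 1 is a perfect fit, and NSE = 0 means the model is no better than predicting the observed mean. A minimal sketch:

```python
# Minimal sketch of the Nash-Sutcliffe efficiency (NSE) used to
# evaluate hydrological model fit; not the study's own code.
def nse(observed, simulated):
    """NSE = 1 - (sum of squared errors) / (variance of observations)."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    var = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / var

print(nse([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8]))  # close to 1 for a good fit
```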
Anderson, P. S. L.; Rayfield, E. J.
2012-01-01
Computational models such as finite-element analysis offer biologists a means of exploring the structural mechanics of biological systems that cannot be directly observed. Validated against experimental data, a model can be manipulated to perform virtual experiments, testing variables that are hard to control in physical experiments. The relationship between tooth form and the ability to break down prey is key to understanding the evolution of dentition. Recent experimental work has quantified how tooth shape promotes fracture in biological materials. We present a validated finite-element model derived from physical compression experiments. The model shows close agreement with strain patterns observed in photoelastic test materials and reaction forces measured during these experiments. We use the model to measure strain energy within the test material when different tooth shapes are used. Results show that notched blades deform materials for less strain energy cost than straight blades, giving insights into the energetic relationship between tooth form and prey materials. We identify a hypothetical ‘optimal’ blade angle that minimizes strain energy costs and test alternative prey materials via virtual experiments. Using experimental data and computational models offers an integrative approach to understand the mechanics of tooth morphology. PMID:22399789
Banerjee, Polash; Ghose, Mrinal Kanti; Pradhan, Ratika
2018-05-01
Spatial analysis of the water quality impacts of highway projects in mountainous areas remains largely unexplored. A methodology is presented here for Spatial Water Quality Impact Assessment (SWQIA) of highway-broadening-induced vehicular traffic change in the East district of Sikkim. The pollution load of the highway runoff was estimated using an average annual daily traffic-based empirical model in combination with a mass balance model to predict pollution in the rivers within the study area. Spatial interpolation and overlay analysis were used for impact mapping, and an Analytic Hierarchy Process-based Water Quality Status Index was used to prepare a composite impact map. Model validation criteria, cross-validation criteria, and spatially explicit sensitivity analysis show that the SWQIA model is robust. The study shows that vehicular traffic is a significant contributor to water pollution in the study area. The model caters specifically to impact analysis of the project concerned and can serve as an aid to decision support for the project stakeholders. The applicability of the SWQIA model needs to be explored and validated for a larger set of water quality parameters and project scenarios at a greater spatial scale.
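The Analytic Hierarchy Process behind the composite index derives criterion weights from a pairwise comparison matrix, conventionally as its principal eigenvector. A generic power-iteration sketch (illustrative only; the paper's comparison matrix and weights are not reproduced here):

```python
# Generic AHP weight derivation by power iteration; the comparison
# matrix entries pairwise[i][j] encode how much more important
# criterion i is than criterion j.
def ahp_weights(pairwise, iters=100):
    """Approximate the principal eigenvector of a pairwise comparison
    matrix, renormalized to sum to 1 at each step."""
    n = len(pairwise)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(pairwise[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(v)
        w = [x / total for x in v]
    return w
```

For a perfectly consistent matrix with entries w_i/w_j, the iteration recovers the underlying weights.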
Bajoub, Aadil; Medina-Rodríguez, Santiago; Ajal, El Amine; Cuadros-Rodríguez, Luis; Monasterio, Romina Paula; Vercammen, Joeri; Fernández-Gutiérrez, Alberto; Carrasco-Pancorbo, Alegría
2018-04-01
Selected ion flow tube mass spectrometry (SIFT-MS) in combination with chemometrics was used to authenticate the geographical origin of Mediterranean virgin olive oils (VOOs) produced under geographical origin labels. In particular, 130 oil samples from six different Mediterranean regions (Kalamata (Greece); Toscana (Italy); Meknès and Tyout (Morocco); and Priego de Córdoba and Baena (Spain)) were considered. The headspace volatile fingerprints were measured by SIFT-MS in full scan with H3O+, NO+ and O2+ as precursor ions, and the results were subjected to chemometric treatment. Principal Component Analysis (PCA) was used for preliminary multivariate data analysis, and Partial Least Squares-Discriminant Analysis (PLS-DA) was applied to build different models (considering the three reagent ions) to classify samples according to country of origin and region (within the same country). The multi-class PLS-DA models showed very good performance in terms of fitting accuracy (98.90-100%) and prediction accuracy (96.70-100% for cross validation and 97.30-100% for external validation (test set)). Considering the two-class PLS-DA models, the one for the Spanish samples showed 100% sensitivity, specificity and accuracy in calibration, cross validation and external validation; the model for Moroccan oils also showed very satisfactory results (with perfect scores for almost every parameter in all cases). Copyright © 2017 Elsevier Ltd. All rights reserved.
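The sensitivity, specificity, and accuracy figures quoted for the two-class models follow directly from the confusion matrix. A minimal sketch of that computation (generic; not the chemometric software used in the study):

```python
# Generic confusion-matrix metrics for a two-class classifier.
def classification_metrics(y_true, y_pred, positive=1):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "accuracy": (tp + tn) / len(y_true),
    }
```

A model scoring 100% on all three, as reported for the Spanish samples, has an empty off-diagonal in its confusion matrix.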
Eslami, Mohammad H; Rybin, Denis V; Doros, Gheorghe; Siracuse, Jeffrey J; Farber, Alik
2018-01-01
The purpose of this study is to externally validate a recently reported Vascular Study Group of New England (VSGNE) risk predictive model of postoperative mortality after elective abdominal aortic aneurysm (AAA) repair and to compare its predictive ability across different patient risk categories and against established risk predictive models using the Vascular Quality Initiative (VQI) AAA sample. The VQI AAA database (2010-2015) was queried for patients who underwent elective AAA repair. The VSGNE cases were excluded from the VQI sample. The external validation of a recently published VSGNE AAA risk predictive model, which includes only preoperative variables (age, gender, history of coronary artery disease, chronic obstructive pulmonary disease, cerebrovascular disease, creatinine levels, and aneurysm size) and planned type of repair, was performed using the VQI elective AAA repair sample. The predictive value of the model was assessed via the C-statistic. The Hosmer-Lemeshow method was used to assess calibration and goodness of fit. This model was then compared with the Medicare model, the Vascular Governance Northwest model, and the Glasgow Aneurysm Score for predicting mortality in the VQI sample. The Vuong test was performed to compare model fit between the models. Model discrimination was assessed in different risk-group VQI quintiles. Data from 4431 cases from the VSGNE sample, with an overall mortality rate of 1.4%, were used to develop the model. The internally validated VSGNE model showed very high discriminating ability in predicting mortality (C = 0.822) and good model fit (Hosmer-Lemeshow P = .309) in the VSGNE elective AAA repair sample. External validation on 16,989 VQI cases with an overall 0.9% mortality rate showed very robust prediction of mortality (C = 0.802). Vuong tests yielded a significant fit difference favoring the VSGNE model over the Medicare model (C = 0.780), the Vascular Governance Northwest model (C = 0.774), and the Glasgow Aneurysm Score (C = 0.639).
Across the 5 risk quintiles, the VSGNE model predicted observed mortality with great accuracy. This simple VSGNE AAA risk predictive model showed very high discriminative ability in predicting mortality after elective AAA repair in a large external independent sample of AAA cases performed by a diverse array of physicians nationwide. The risk score based on this simple VSGNE model can stratify patients according to their risk of mortality after elective AAA repair more reliably than other established models. Copyright © 2017 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
Development and Validation of the Faceted Inventory of the Five-Factor Model (FI-FFM).
Watson, David; Nus, Ericka; Wu, Kevin D
2017-06-01
The Faceted Inventory of the Five-Factor Model (FI-FFM) is a comprehensive hierarchical measure of personality. The FI-FFM was created across five phases of scale development. It includes five facets apiece for neuroticism, extraversion, and conscientiousness; four facets within agreeableness; and three facets for openness. We present reliability and validity data obtained from three samples. The FI-FFM scales are internally consistent and highly stable over 2 weeks (retest rs ranged from .64 to .82, median r = .77). They show strong convergent and discriminant validity vis-à-vis the NEO, the Big Five Inventory, and the Personality Inventory for DSM-5. Moreover, self-ratings on the scales show moderate to strong agreement with corresponding ratings made by informants (rs ranged from .26 to .66, median r = .42). Finally, in joint analyses with the NEO Personality Inventory-3, the FI-FFM neuroticism facet scales display significant incremental validity in predicting indicators of internalizing psychopathology.
Dong, Ren G; Welcome, Daniel E; McDowell, Thomas W; Wu, John Z
2013-11-25
The relationship between vibration transmissibility and the driving-point response functions (DPRFs) of the human body is important for understanding vibration exposures of the system and for developing valid models. This study identified their theoretical relationship and demonstrated that the sum of the DPRFs can be expressed as a linear combination of the transmissibility functions of the individual mass elements distributed throughout the system. The relationship is verified using several human vibration models. This study also clarified the requirements for reliably quantifying the transmissibility values used as references for calibrating system models. As an example application, the developed theory was used to perform a preliminary analysis of a method for calibrating models using both vibration transmissibility and DPRFs. The analysis shows that the combined method can, in theory, yield a unique and valid solution for the model parameters, at least for linear systems. However, validity of the method does not by itself guarantee validity of the calibrated model, which also depends on the model structure and on the reliability and appropriate representation of the reference functions. The basic theory developed in this study is also applicable to the vibration analyses of other structures.
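The central relationship can be sketched in generic form (notation assumed here, not taken from the paper): for a linear lumped-parameter model of the body with mass elements m_i, let T_i(ω) be the transmissibility of element i, i.e. its acceleration a_i(ω) divided by the driving-point acceleration a_0(ω). The driving-point apparent mass, one common DPRF, is then a mass-weighted linear combination of the element transmissibilities:

```latex
% Apparent mass at the driving point (generic lumped-parameter form):
%   F(\omega) = \sum_i m_i \, a_i(\omega), \qquad
%   T_i(\omega) = a_i(\omega) / a_0(\omega)
M(\omega) \;=\; \frac{F(\omega)}{a_0(\omega)}
          \;=\; \sum_i m_i \, \frac{a_i(\omega)}{a_0(\omega)}
          \;=\; \sum_i m_i \, T_i(\omega)
```

This makes concrete why transmissibility measurements constrain, but do not uniquely determine, the distribution of the m_i unless the model structure is also fixed.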
Monte Carlo simulation of Ray-Scan 64 PET system and performance evaluation using GATE toolkit
Li, Suying; Zhang, Qiushi; Vuletic, Ivan; Xie, Zhaoheng; Yang, Kun; Ren, Qiushi
2017-02-01
In this study, we aimed to develop a GATE model for simulation of the Ray-Scan 64 PET scanner and to model its performance characteristics. A detailed implementation of the system geometry and physical processes was included in the simulation model. We then modeled the performance characteristics of the Ray-Scan 64 PET system for the first time, based on National Electrical Manufacturers Association (NEMA) NU-2 2007 protocols, and validated the model against experimental measurements, including spatial resolution, sensitivity, counting rates and noise equivalent count rate (NECR). Moreover, an accurate dead-time module was investigated to simulate the counting-rate performance. Overall results showed reasonable agreement between simulation and experimental data. The validation results demonstrate the reliability and feasibility of the GATE model for evaluating the major performance characteristics of the Ray-Scan 64 PET system, providing a useful tool for a wide range of research applications.
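Noise equivalent count rate, one of the quantities validated above, is conventionally computed from the trues (T), scatter (S), and randoms (R) rates as NECR = T²/(T + S + kR), with k depending on the randoms-correction method. A minimal sketch of this standard formula (not code from the study):

```python
# Standard NECR formula used in NEMA-style PET performance evaluation.
def necr(trues, scatter, randoms, k=1):
    """Noise equivalent count rate: T^2 / (T + S + k*R).
    k = 1 for smoothed/estimated randoms, k = 2 for direct
    delayed-window randoms subtraction."""
    return trues ** 2 / (trues + scatter + k * randoms)

print(necr(1000.0, 400.0, 600.0))  # → 500.0
```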
Improved modeling of GaN HEMTs for predicting thermal and trapping-induced-kink effects
Jarndal, Anwar; Ghannouchi, Fadhel M.
2016-09-01
In this paper, an improved modeling approach is developed and validated for GaN high electron mobility transistors (HEMTs). The proposed analytical model accurately simulates the drain current and its inherent trapping and thermal effects. A genetic-algorithm-based procedure is developed to automatically find the fitting parameters of the model. The developed modeling technique is implemented on a packaged GaN-on-Si HEMT and validated by DC and small-/large-signal RF measurements. The model is also employed for designing and realizing a switch-mode inverse class-F power amplifier. The amplifier simulations showed very good agreement with RF large-signal measurements.
Impact of Cross-Axis Structural Dynamics on Validation of Linear Models for Space Launch System
NASA Technical Reports Server (NTRS)
Pei, Jing; Derry, Stephen D.; Zhou, Zhiqiang; Newsom, Jerry R.
2014-01-01
A feasibility study was performed to examine the advisability of incorporating a set of Programmed Test Inputs (PTIs) during Space Launch System (SLS) vehicle flight. The intent of these inputs is to validate the preflight models for control system stability margins, aerodynamics, and structural dynamics. During October 2009, the Ares I-X program successfully carried out a series of PTI maneuvers, which provided a significant amount of valuable data for post-flight analysis. The resulting data comparisons showed excellent agreement with the preflight linear models across the frequency spectrum of interest. However, unlike Ares I-X, the structural dynamics associated with the SLS boost-phase configuration are far more complex and highly coupled in all three axes, which presents a challenge when applying a similar system identification technique to SLS. Preliminary simulation results show noticeable mismatches between PTI validation and analytical linear models in the frequency range of the structural dynamics. An alternate approach was examined which demonstrates the potential for better overall characterization of the system frequency response as well as robustness of the control design.
Guo, Y.; Parsons, T.; King, R.
This report summarizes the theory, verification, and validation of a new sizing tool for wind turbine drivetrain components, the Drivetrain Systems Engineering (DriveSE) tool. DriveSE calculates the dimensions and mass properties of the hub, main shaft, main bearing(s), gearbox, bedplate, transformer if up-tower, and yaw system. The level of fidelity for each component varies depending on whether semiempirical parametric or physics-based models are used. The physics-based models have internal iteration schemes based on system constraints and design criteria. Every model is validated against available industry data or finite-element analysis. The verification and validation results show that the models reasonably capture primary drivers for the sizing and design of major drivetrain components.
CheckMyMetal: a macromolecular metal-binding validation tool
Porebski, Przemyslaw J.
2017-01-01
Metals are essential in many biological processes, and metal ions are modeled in roughly 40% of the macromolecular structures in the Protein Data Bank (PDB). However, a significant fraction of these structures contain poorly modeled metal-binding sites. CheckMyMetal (CMM) is an easy-to-use metal-binding site validation server for macromolecules that is freely available at http://csgid.org/csgid/metal_sites. The CMM server can detect incorrect metal assignments as well as geometrical and other irregularities in the metal-binding sites. Guidelines for metal-site modeling and validation in macromolecules are illustrated by several practical examples grouped by the type of metal. These examples show CMM users (and crystallographers in general) problems they may encounter during the modeling of a specific metal ion. PMID:28291757
A physics based method for combining multiple anatomy models with application to medical simulation.
Zhu, Yanong; Magee, Derek; Ratnalingam, Rishya; Kessel, David
2009-01-01
We present a physics based approach to the construction of anatomy models by combining components from different sources; different image modalities, protocols, and patients. Given an initial anatomy, a mass-spring model is generated which mimics the physical properties of the solid anatomy components. This helps maintain valid spatial relationships between the components, as well as the validity of their shapes. Combination can be either replacing/modifying an existing component, or inserting a new component. The external forces that deform the model components to fit the new shape are estimated from Gradient Vector Flow and Distance Transform maps. We demonstrate the applicability and validity of the described approach in the area of medical simulation, by showing the processes of non-rigid surface alignment, component replacement, and component insertion.
Gupta, Meenal; Moily, Nagaraj S; Kaur, Harpreet; Jajodia, Ajay; Jain, Sanjeev; Kukreti, Ritushree
2013-08-01
Atypical antipsychotic (AAP) drugs are the preferred treatment for schizophrenia patients. Patients who do not show a favorable response to AAP monotherapy are subjected to prolonged trial-and-error treatment with AAP multitherapy, typical antipsychotics, or a combination of both. Prior identification of patients' likely response to drugs can therefore be an important step in providing efficacious and safe treatment. We thus attempted to elucidate a genetic signature that could predict patients' response to AAP monotherapy. Our logistic regression analyses indicated that 76% of patients carrying a combination of four SNPs will not show a favorable response to AAP therapy. The robustness of this prediction model was assessed using repeated 10-fold cross validation, and the results across the repeated cross-validations (mean accuracy = 71.91%; 95% CI = 71.47-72.35) suggest high accuracy and reliability of the prediction model. Further validation of these results in large sample sets is likely to establish their clinical applicability. Copyright © 2013 Elsevier Inc. All rights reserved.
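Repeated 10-fold cross-validation, used above to assess robustness, reshuffles the data before each repeat and averages accuracy over all folds. A generic index-splitting sketch (illustrative; function names are assumptions, not the study's code):

```python
# Generic repeated k-fold cross-validation index splitting.
import random

def repeated_kfold(n, k=10, repeats=10, seed=0):
    """Yield (train, test) index lists; the data order is reshuffled
    before each repeat so the folds differ across repeats."""
    rng = random.Random(seed)
    for _ in range(repeats):
        idx = list(range(n))
        rng.shuffle(idx)
        folds = [idx[i::k] for i in range(k)]
        for i in range(k):
            test = folds[i]
            train = [j for f in folds[:i] + folds[i + 1:] for j in f]
            yield train, test
```

Averaging a model's accuracy over all k × repeats test folds gives the mean accuracy and an interval like the one reported above.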
Validating Dimensions of Psychosis Symptomatology: Neural Correlates and 20-year Outcomes
Kotov, Roman; Foti, Dan; Li, Kaiqiao; Bromet, Evelyn J.; Hajcak, Greg; Ruggero, Camilo J.
2016-01-01
Heterogeneity of psychosis presents significant challenges for classification. Between two and 12 symptom dimensions have been proposed, and consensus is lacking. The present study sought to identify uniquely informative models by comparing the validity of these alternatives. An epidemiologic cohort of 628 first-admission inpatients with psychosis was interviewed 6 times over two decades and completed an electrophysiological assessment of error processing at year 20. We first analyzed a comprehensive set of 49 symptoms rated by interviewers at baseline, progressively extracting from one to 12 factors. Next, we compared the ability of resulting factor solutions to (a) account for concurrent neural dysfunction and (b) predict 20-year role, social, residential, and global functioning, and life satisfaction. A four-factor model showed incremental validity with all outcomes, and more complex models did not improve explanatory power. The four dimensions—reality distortion, disorganization, inexpressivity, and apathy/asociality—were replicable in 5 follow-ups, internally consistent, stable across assessments, and showed strong discriminant validity. These results reaffirm the value of separating disorganization and reality distortion, are consistent with recent findings distinguishing inexpressivity and apathy/asociality, and suggest that these four dimensions are fundamental to understanding neural abnormalities and long-term outcomes in psychosis. PMID:27819471
Hu, Ming-Hsia; Yeh, Chih-Jun; Chen, Tou-Rong; Wang, Ching-Yi
2014-01-01
A valid, time-efficient, and easy-to-use instrument is important for busy clinical settings, large-scale surveys, and community screening. The purpose of this study was to validate the mobility hierarchical disability categorization model (an abbreviated model) by investigating its concurrent validity with the multidimensional hierarchical disability categorization model (a comprehensive model) and triangulating both models with physical performance measures in older adults. A total of 604 community-dwelling adults aged at least 60 years volunteered to participate. Self-reported function in the mobility, instrumental activities of daily living (IADL), and activities of daily living (ADL) domains was recorded, and disability status was then determined based on both the multidimensional and the mobility hierarchical categorization models. Physical performance measures, consisting of grip strength and usual and fastest gait speeds (UGS, FGS), were collected on the same day. The two categorization models showed high correlation (γs = 0.92, p < 0.001) and agreement (kappa = 0.61, p < 0.0001). Physical performance measures demonstrated significantly different group means among the disability subgroups under both categorization models. Multiple regression analysis indicated that the two models individually explain a similar amount of variance in all physical performance measures, after adjustment for age, sex, and number of comorbidities. Our results show that the mobility hierarchical disability categorization model is a valid and time-efficient tool for large surveys or screening.
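Agreement between the two categorization models is summarized above with kappa, which corrects observed agreement for the agreement expected by chance. A minimal sketch of Cohen's kappa for nominal categories (generic; the study may have used a weighted variant):

```python
# Generic Cohen's kappa for two sets of nominal ratings of the
# same subjects (assumes at least one disagreement category exists,
# i.e. chance agreement < 1).
def cohens_kappa(rater_a, rater_b):
    """Kappa = (observed agreement - chance agreement) / (1 - chance)."""
    n = len(rater_a)
    observed = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    categories = set(rater_a) | set(rater_b)
    chance = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
                 for c in categories)
    return (observed - chance) / (1 - chance)
```

Kappa is 1 for perfect agreement and 0 when agreement is no better than chance; 0.61 is conventionally read as substantial agreement.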
Daniels, Janet L.; Smith, G. Louis; Priestley, Kory J.; Thomas, Susan
2014-01-01
The validation of in-orbit instrument performance requires stability in both the instrument and the calibration source. This paper describes a method of validation using lunar observations scanning near full moon by the Clouds and the Earth's Radiant Energy System (CERES) instruments. Unlike internal calibrations, the Moon offers an external source whose signal variance is predictable and non-degrading. From 2006 to present, in-orbit observations have been standardized and compiled for Flight Models-1 and -2 aboard the Terra satellite, for Flight Models-3 and -4 aboard the Aqua satellite, and, beginning 2012, for Flight Model-5 aboard Suomi-NPP. Instrument performance parameters that can be gleaned include detector gain, pointing accuracy, and validation of the static detector point response function. Lunar observations are used to examine the stability of all three detectors on each of these instruments from 2006 to present. This validation method has yielded results showing trends per CERES data channel of 1.2% per decade or less.
Development and Validation of Triarchic Construct Scales from the Psychopathic Personality Inventory
Hall, Jason R.; Drislane, Laura E.; Patrick, Christopher J.; Morano, Mario; Lilienfeld, Scott O.; Poythress, Norman G.
2014-01-01
The Triarchic model of psychopathy describes this complex condition in terms of distinct phenotypic components of boldness, meanness, and disinhibition. Brief self-report scales designed specifically to index these psychopathy facets have thus far demonstrated promising construct validity. The present study sought to develop and validate scales for assessing facets of the Triarchic model using items from a well-validated existing measure of psychopathy—the Psychopathic Personality Inventory (PPI). A consensus rating approach was used to identify PPI items relevant to each Triarchic facet, and the convergent and discriminant validity of the resulting PPI-based Triarchic scales were evaluated in relation to multiple criterion variables (i.e., other psychopathy inventories, antisocial personality disorder features, personality traits, psychosocial functioning) in offender and non-offender samples. The PPI-based Triarchic scales showed good internal consistency and related to criterion variables in ways consistent with predictions based on the Triarchic model. Findings are discussed in terms of implications for conceptualization and assessment of psychopathy. PMID:24447280
Hall, Jason R; Drislane, Laura E; Patrick, Christopher J; Morano, Mario; Lilienfeld, Scott O; Poythress, Norman G
2014-06-01
The Triarchic model of psychopathy describes this complex condition in terms of distinct phenotypic components of boldness, meanness, and disinhibition. Brief self-report scales designed specifically to index these psychopathy facets have thus far demonstrated promising construct validity. The present study sought to develop and validate scales for assessing facets of the Triarchic model using items from a well-validated existing measure of psychopathy-the Psychopathic Personality Inventory (PPI). A consensus-rating approach was used to identify PPI items relevant to each Triarchic facet, and the convergent and discriminant validity of the resulting PPI-based Triarchic scales were evaluated in relation to multiple criterion variables (i.e., other psychopathy inventories, antisocial personality disorder features, personality traits, psychosocial functioning) in offender and nonoffender samples. The PPI-based Triarchic scales showed good internal consistency and related to criterion variables in ways consistent with predictions based on the Triarchic model. Findings are discussed in terms of implications for conceptualization and assessment of psychopathy.
A geomorphic approach to 100-year floodplain mapping for the Conterminous United States
Jafarzadegan, Keighobad; Merwade, Venkatesh; Saksena, Siddharth
2018-06-01
Floodplain mapping using hydrodynamic models is difficult in data-scarce regions, and applying hydrodynamic models over a large stream network can be computationally challenging. Some of these limitations can be overcome by developing computationally efficient statistical methods to identify floodplains in large and ungauged watersheds using publicly available data. This paper proposes a geomorphic model to generate probabilistic 100-year floodplain maps for the Conterminous United States (CONUS). The proposed model first categorizes the watersheds in the CONUS into three classes based on the height of the water surface corresponding to the 100-year flood above the streambed. Next, the probability that any watershed in the CONUS belongs to one of these three classes is computed through supervised classification using watershed characteristics related to topography, hydrography, land use and climate. The result of this classification is then fed into a probabilistic threshold binary classifier (PTBC) to generate the probabilistic 100-year floodplain maps. The supervised classification algorithm is trained using the 100-year Flood Insurance Rate Maps (FIRMs) from the U.S. Federal Emergency Management Agency (FEMA). FEMA FIRMs are also used to validate the performance of the proposed model in areas not included in the training. Additionally, HEC-RAS-generated flood inundation extents are used to validate the model performance at fifteen sites that lack FEMA maps. Validation results show that the probabilistic 100-year floodplain maps generated by the proposed model match well with both the FEMA and the HEC-RAS maps; on average, the error of predicted flood extents is around 14% across the CONUS. The accuracy of the validation results shows the reliability of the geomorphic model as an alternative approach for fast and cost-effective delineation of 100-year floodplains for the CONUS.
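The final probabilistic map can be thought of as weighting a binary threshold decision by the class membership probabilities. A hypothetical sketch of that combination step (the function name, inputs, and the exact form of the threshold test are assumptions for illustration, not the paper's implementation):

```python
# Hypothetical combination step: class-membership probabilities
# weight a per-class binary height-threshold decision.
def floodplain_probability(height_above_stream, class_probs, flood_heights):
    """P(floodplain) as a probability-weighted binary classification:
    each watershed class contributes its membership probability when
    the location sits below that class's 100-year flood-surface height."""
    return sum(p for p, h in zip(class_probs, flood_heights)
               if height_above_stream <= h)

print(floodplain_probability(3.0, [0.25, 0.5, 0.25], [2.0, 4.0, 6.0]))  # → 0.75
```

Locations below every class threshold get probability 1; locations above every threshold get 0.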
Development of machine learning models for diagnosis of glaucoma.
Kim, Seong Jae; Cho, Kyong Jin; Oh, Sejong
2017-01-01
The study aimed to develop machine learning models with strong predictive power and interpretability for the diagnosis of glaucoma, based on retinal nerve fiber layer (RNFL) thickness and visual field (VF). We collected various candidate features from examinations of RNFL thickness and VF, and also developed synthesized features from the original features. We then selected the features best suited for classification (diagnosis) through feature evaluation. We used 100 cases as a test dataset and 399 cases as a training and validation dataset. To develop the glaucoma prediction model, we considered four machine learning algorithms: C5.0, random forest (RF), support vector machine (SVM), and k-nearest neighbor (KNN). We repeatedly composed a learning model using the training dataset, evaluated it using the validation dataset, and finally selected the learning model that produced the highest validation accuracy. We analyzed the quality of the models using several measures. The random forest model shows the best performance, while the C5.0, SVM, and KNN models show similar accuracy. In the random forest model, the classification accuracy is 0.98, sensitivity is 0.983, specificity is 0.975, and AUC is 0.979. The developed prediction models show high accuracy, sensitivity, specificity, and AUC in classifying between glaucomatous and healthy eyes, and can be used to predict glaucoma from unseen examination records. Clinicians may reference the prediction results to make better decisions. Multiple learning models may be combined to increase prediction accuracy. The C5.0 model includes decision rules for prediction, which can be used to explain the reasons for specific predictions.
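The evaluation measures the study reports (accuracy, sensitivity, specificity) all derive from a confusion matrix; a minimal sketch on toy labels, not the study's glaucoma dataset:

```python
def confusion(y_true, y_pred, positive=1):
    # Count true/false positives and negatives for a binary classifier.
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    return tp, tn, fp, fn

def metrics(y_true, y_pred):
    tp, tn, fp, fn = confusion(y_true, y_pred)
    return {"accuracy": (tp + tn) / len(y_true),
            "sensitivity": tp / (tp + fn),   # true positive rate
            "specificity": tn / (tn + fp)}   # true negative rate

y_true = [1, 1, 1, 1, 0, 0, 0, 0]  # 1 = glaucoma, 0 = healthy (toy data)
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]
print(metrics(y_true, y_pred))
```

AUC additionally requires a continuous risk score rather than hard labels, which is why it is reported separately from accuracy in the abstract.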
Validity and reliability of the Multidimensional Body Image Scale in Malaysian university students.
Gan, W Y; Mohd, Nasir M T; Siti, Aishah H; Zalilah, M S
2012-12-01
This study aimed to evaluate the validity and reliability of the Multidimensional Body Image Scale (MBIS), a seven-factor, 62-item scale developed for Malaysian female adolescents. The scale was evaluated among male and female Malaysian university students. A total of 671 university students (52.2% women and 47.8% men) completed a self-administered questionnaire comprising the MBIS, the Eating Attitude Test-26, and the Rosenberg Self-Esteem Scale. Their height and weight were measured. Results of confirmatory factor analysis showed that the 62-item MBIS had poor fit to the data, chi2/df = 4.126, p < 0.001, CFI = 0.808, SRMR = 0.070, RMSEA = 0.068 (90% CI = 0.067, 0.070). After re-specification of the model, model fit improved with 46 items remaining, chi2/df = 3.346, p < 0.001, CFI = 0.903, SRMR = 0.053, RMSEA = 0.059 (90% CI = 0.057, 0.061), and the model showed good fit to the data for men and women separately. The 46-item MBIS had good internal consistency in both men (Cronbach's alpha = 0.88) and women (Cronbach's alpha = 0.92). In terms of construct validity, it showed positive correlations with disordered eating and body weight status, but a negative correlation with self-esteem. The scale also discriminated well between participants with and without disordered eating. The MBIS-46 demonstrated good reliability and validity for the evaluation of body image among university students. Further studies need to be conducted to confirm the validation results of the 46-item MBIS.
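Cronbach's alpha, the internal-consistency statistic reported for the 46-item MBIS, is the ratio of shared to total score variance; a sketch on a tiny toy item-response matrix (rows = respondents, columns = items), not the study's data:

```python
from statistics import pvariance

def cronbach_alpha(rows):
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    k = len(rows[0])                      # number of items
    items = list(zip(*rows))              # column-wise item scores
    item_var = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(r) for r in rows])
    return k / (k - 1) * (1 - item_var / total_var)

data = [[3, 4, 3], [2, 2, 3], [5, 5, 4], [4, 4, 4]]  # toy 5-point responses
alpha = cronbach_alpha(data)
print(round(alpha, 3))
```

Population variance is used throughout for consistency; with sample variances the item/total ratio, and hence alpha, is unchanged because the n/(n-1) factors cancel.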
Preventing patient absenteeism: validation of a predictive overbooking model.
Reid, Mark W; Cohen, Samuel; Wang, Hank; Kaung, Aung; Patel, Anish; Tashjian, Vartan; Williams, Demetrius L; Martinez, Bibiana; Spiegel, Brennan M R
2015-12-01
To develop a model that identifies patients at high risk for missing scheduled appointments ("no-shows" and cancellations) and to project the impact of predictive overbooking in a gastrointestinal endoscopy clinic, an exemplar resource-intensive environment with a high no-show rate. We retrospectively developed an algorithm that uses electronic health record (EHR) data to identify patients who do not show up to their appointments. Next, we prospectively validated the algorithm at a Veterans Administration healthcare network clinic. We constructed a multivariable logistic regression model that assigned a no-show risk score optimized by receiver operating characteristic curve analysis. Based on these scores, we created a calendar of projected open slots to offer to patients and compared the daily performance of predictive overbooking with fixed overbooking and typical "1 patient, 1 slot" scheduling. Data from 1392 patients identified several predictors of no-show, including previous absenteeism, comorbid disease burden, and current diagnoses of mood and substance use disorders. The model correctly classified most patients during the development (area under the curve [AUC] = 0.80) and validation phases (AUC = 0.75). Prospective testing in 1197 patients found that predictive overbooking averaged 0.51 unused appointments per day versus 6.18 for typical booking (difference = -5.67; 95% CI, -6.48 to -4.87; P < .0001). Predictive overbooking could have increased service utilization from 62% to 97% of capacity, with only rare clinic overflows. Information from EHRs can accurately predict whether patients will no-show. This method can be used to overbook appointments, thereby maximizing service utilization while staying within clinic capacity.
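A hedged sketch of a no-show risk score in the spirit of the model above: a logistic regression over the predictors the study names. The coefficients here are invented for illustration; the published model's weights differ.

```python
import math

# Illustrative (not published) logistic regression coefficients.
COEFS = {"intercept": -1.5, "prior_no_shows": 0.6,
         "comorbidity_count": 0.15, "mood_or_substance_dx": 0.8}

def no_show_risk(prior_no_shows, comorbidity_count, mood_or_substance_dx):
    # Linear predictor, then the logistic link to get a probability.
    z = (COEFS["intercept"]
         + COEFS["prior_no_shows"] * prior_no_shows
         + COEFS["comorbidity_count"] * comorbidity_count
         + COEFS["mood_or_substance_dx"] * mood_or_substance_dx)
    return 1 / (1 + math.exp(-z))

# A slot might be offered for double-booking when risk crosses a cutoff
# chosen by ROC analysis, as in the study.
risk = no_show_risk(prior_no_shows=3, comorbidity_count=2, mood_or_substance_dx=1)
print(round(risk, 3), risk > 0.5)
```

The overbooking calendar then simply double-books slots whose scheduled patients carry a predicted risk above the chosen cutoff.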
Luo, Chuan; Li, Zhaofu; Li, Hengpeng; Chen, Xiaomin
2015-09-02
The application of hydrological and water quality models is an efficient approach to better understand the processes of environmental deterioration. This study evaluated the ability of the Annualized Agricultural Non-Point Source (AnnAGNPS) model to predict runoff, total nitrogen (TN) and total phosphorus (TP) loading in a typical small watershed of a hilly region near Taihu Lake, China. Runoff was calibrated and validated at both annual and monthly scales, and parameter sensitivity analysis was performed for TN and TP before the two water quality components were calibrated. The results showed that the model satisfactorily simulated runoff at annual and monthly scales, during both the calibration and validation processes. Additionally, parameter sensitivity analysis showed that TN output was most sensitive to the parameters Fertilizer rate, Fertilizer organic, Canopy cover and Fertilizer inorganic. TP output was most sensitive to the parameters Residue mass ratio, Fertilizer rate, Fertilizer inorganic and Canopy cover. Calibration was then performed on these sensitive parameters. TN loading produced satisfactory results for both the calibration and validation processes, whereas the performance for TP loading was slightly poorer. The simulation results showed that AnnAGNPS has the potential to be used as a valuable tool for the planning and management of watersheds.
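The abstract reports "satisfactory" runoff calibration without naming a statistic; the Nash-Sutcliffe efficiency (NSE) is the usual goodness-of-fit measure for hydrologic models such as AnnAGNPS, sketched here on toy monthly runoff values (not the study's data):

```python
def nse(observed, simulated):
    # 1 - (residual sum of squares / variance of observations about their mean).
    # 1.0 is a perfect match; values above ~0.5 are often called satisfactory.
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1 - ss_res / ss_tot

obs = [12.0, 30.0, 55.0, 40.0, 18.0, 9.0]   # toy observed monthly runoff
sim = [10.0, 33.0, 50.0, 43.0, 20.0, 8.0]   # toy simulated monthly runoff
print(round(nse(obs, sim), 3))
```

An NSE of 0 means the model is no better than predicting the observed mean, which is why calibration and validation are judged against it at both annual and monthly scales.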
Zumpano, Camila Eugênia; Mendonça, Tânia Maria da Silva; Silva, Carlos Henrique Martins da; Correia, Helena; Arnold, Benjamin; Pinto, Rogério de Melo Costa
2017-01-23
This study aimed to perform the cross-cultural adaptation and validation of the Patient-Reported Outcomes Measurement Information System (PROMIS) Global Health scale in the Portuguese language. The ten Global Health items were cross-culturally adapted by the method proposed in the Functional Assessment of Chronic Illness Therapy (FACIT). The instrument's final version in Portuguese was self-administered by 1,010 participants in Brazil. The scale's precision was verified by floor and ceiling effects analysis, internal consistency reliability, and test-retest reliability. Exploratory and confirmatory factor analyses were used to assess the construct's validity and the instrument's dimensionality. The items were calibrated using Samejima's Graded Response Model. Four global items required adjustments after the pretest. Analysis of the psychometric properties showed that the Global Health scale has good reliability, with a Cronbach's alpha of 0.83 and an intra-class correlation of 0.89. Exploratory and confirmatory factor analyses showed good fit to the previously established two-dimensional model. The Global Physical Health and Global Mental Health scales showed good latent trait coverage under the Graded Response Model. The PROMIS Global Health items showed equivalence in Portuguese compared to the original version and satisfactory psychometric properties for application in clinical practice and research in the Brazilian population.
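Samejima's graded response model, used above to calibrate the PROMIS items, models P(response ≥ k | theta) as a 2PL logistic curve for each threshold, with category probabilities as differences of adjacent curves. The discrimination and threshold parameters below are illustrative, not the paper's calibrated values.

```python
import math

def grm_category_probs(theta, a, thresholds):
    """Category probabilities for one item with discrimination a and
    ordered thresholds b_1 < ... < b_{m-1}, at latent trait level theta."""
    def p_at_least(k):            # P(X >= k); by convention P(X >= 0) = 1
        if k == 0:
            return 1.0
        return 1 / (1 + math.exp(-a * (theta - thresholds[k - 1])))
    m = len(thresholds) + 1       # number of response categories
    star = [p_at_least(k) for k in range(m)] + [0.0]
    return [star[k] - star[k + 1] for k in range(m)]

# A 4-category item at average trait level (theta = 0), illustrative parameters.
probs = grm_category_probs(theta=0.0, a=1.5, thresholds=[-1.0, 0.5, 1.5])
print([round(p, 3) for p in probs], round(sum(probs), 6))
```

Because the thresholds are ordered, the cumulative curves never cross, so every category probability is positive and the set sums to one.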
Computational Fluid Dynamics Best Practice Guidelines in the Analysis of Storage Dry Cask
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zigh, A.; Solis, J.
2008-07-01
Computational fluid dynamics (CFD) methods are used to evaluate the thermal performance of a dry cask under long-term storage conditions in accordance with NUREG-1536 [NUREG-1536, 1997]. A three-dimensional CFD model was developed and validated using data for a ventilated storage cask (VSC-17) collected by Idaho National Laboratory (INL). The developed Fluent CFD model was validated to minimize the modeling and application uncertainties. To address modeling uncertainties, the paper focused on turbulence modeling of buoyancy-driven air flow. Similarly, for the application uncertainties, the pressure boundary conditions used to model the air inlet and outlet vents were investigated and validated. Different turbulence models were used to reduce the modeling uncertainty in the CFD simulation of the air flow through the annular gap between the overpack and the multi-assembly sealed basket (MSB). Among the chosen turbulence models, the validation showed that the low-Reynolds k-{epsilon} and the transitional k-{omega} turbulence models predicted the measured temperatures closely. To assess the impact of the pressure boundary conditions used at the air inlet and outlet channels on the application uncertainties, a sensitivity analysis of operating density was undertaken. For convergence purposes, all available commercial CFD codes include the operating density in the pressure gradient term of the momentum equation. The validation showed that the correct operating density corresponds to the density evaluated at the air inlet conditions of pressure and temperature. Next, the validated CFD method was used to predict the thermal performance of an existing dry cask storage system. The evaluation uses two distinct models: a three-dimensional and an axisymmetric representation of the cask. In the 3-D model, porous media was used to model only the volume occupied by the rodded region that is surrounded by the BWR channel box. In the axisymmetric model, porous media was used to model the entire region that encompasses the fuel assemblies as well as the gaps in between. Consequently, a larger volume is represented by porous media in the second model; hence, a higher frictional flow resistance is introduced in the momentum equations. The conservatism and the safety margins of these models were compared to assess the applicability and the realism of the two models. The three-dimensional model included fewer geometry simplifications and is recommended, as it predicted less conservative fuel cladding temperature values while still assuring the existence of adequate safety margins. (authors)
Kang, Kyoung-Tak; Kim, Sung-Hwan; Son, Juhyun; Lee, Young Han; Koh, Yong-Gon
2017-01-01
Computational models have been identified as efficient techniques in the clinical decision-making process. However, in most previous studies computational models were validated against published data, and the kinematic validation of such models remains a challenge. Recently, studies using medical imaging have provided a more accurate visualization of knee joint kinematics. The purpose of the present study was to perform kinematic validation of a subject-specific computational knee joint model by comparison with the subject's medical imaging under identical laxity conditions. The laxity test was applied to the anterior-posterior drawer under 90° flexion and the varus-valgus under 20° flexion with a series of stress radiographs, a Telos device, and computed tomography. The loading condition in the computational subject-specific knee joint model was identical to the laxity test condition in the medical images. Our computational model showed knee laxity kinematic trends that were consistent with the computed tomography images, except for negligible differences attributable to the indirect application of the subject's in vivo material properties. Medical imaging based on computed tomography with the laxity test allowed us to measure not only the precise translation but also the rotation of the knee joint. This methodology will be beneficial in the validation of laxity tests for subject- or patient-specific computational models.
NASA Astrophysics Data System (ADS)
Andromeda, A.; Lufri; Festiyed; Ellizar, E.; Iryani, I.; Guspatni, G.; Fitri, L.
2018-04-01
This Research & Development study aims to produce a valid and practical experiment-integrated, guided inquiry based module on the topic of colloidal chemistry. The 4D instructional design model was selected for this study. A limited trial of the product was conducted at SMAN 7 Padang. The instruments used were validity and practicality questionnaires, and the validity and practicality data were analyzed using the Kappa moment. Analysis of the data shows that the Kappa moment for validity was 0.88, indicating a very high degree of validity. The Kappa moments for practicality from students and teachers were 0.89 and 0.95 respectively, indicating a high degree of practicality. Analysis of the modules filled in by students shows that 91.37% of students could correctly answer the critical thinking, exercise, prelab, postlab and worksheet questions asked in the module. These findings indicate that the integrated guided inquiry based module on the topic of colloidal chemistry was valid and practical for chemistry learning in senior high school.
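The "Kappa moment" used above to judge validity and practicality is an agreement statistic in the Cohen's kappa family (chance-corrected agreement between raters); the exact variant is not spelled out in the abstract, so a plain Cohen's kappa for two raters is sketched here on toy ratings, not the study's questionnaire data:

```python
def cohens_kappa(r1, r2):
    # kappa = (observed agreement - chance agreement) / (1 - chance agreement)
    n = len(r1)
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    cats = set(r1) | set(r2)
    expected = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)
    return (observed - expected) / (1 - expected)

rater1 = ["valid", "valid", "valid", "invalid", "valid", "invalid"]
rater2 = ["valid", "valid", "invalid", "invalid", "valid", "invalid"]
print(round(cohens_kappa(rater1, rater2), 3))
```

Values near 1 mean agreement far above chance, which is the sense in which 0.88-0.95 is read as "very high" in the abstract.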
Hu, Guo-Qing; Rao, Ke-Qin; Sun, Zhen-Qiu
2008-12-01
To develop a public health emergency capacity questionnaire for Chinese local governments. Literature review, conceptual modelling, stakeholder analysis, focus groups, interviews, and the Delphi technique were employed together to develop the questionnaire. Classical test theory and a case study were used to assess reliability and validity. (1) A two-dimensional conceptual model was built, and a preparedness and response capacity questionnaire in public health emergency, with 10 dimensions and 204 items, was developed. (2) Reliability and validity results. Internal consistency: except for dimensions 3 and 8, the Cronbach's alpha coefficients of the dimensions were higher than 0.60; the alpha coefficients of dimensions 3 and 8 were 0.59 and 0.39, respectively. Content validity: the questionnaire was recognized by the respondents. Construct validity: the Spearman correlation coefficients among the 10 dimensions fluctuated around 0.50, ranging from 0.26 to 0.75 (P < 0.05). Discrimination validity: comparisons of the 10 dimensions among 4 provinces did not show statistical significance using one-way analysis of variance (P > 0.05). Criterion-related validity: the case study showed significant differences among the 10 dimensions in Beijing between February 2003 (before the SARS event) and November 2005 (after the SARS event). The preparedness and response capacity questionnaire in public health emergency is a reliable and valid tool, which can be used in all provinces and municipalities in China.
Hwang, Eui Jin; Goo, Jin Mo; Kim, Jihye; Park, Sang Joon; Ahn, Soyeon; Park, Chang Min; Shin, Yeong-Gil
2017-08-01
To develop a prediction model for the variability range of lung nodule volumetry and validate the model in detecting nodule growth. For model development, 50 patients with metastatic nodules were prospectively included. Two consecutive CT scans were performed to assess volumetry for 1,586 nodules. Nodule volume, surface voxel proportion (SVP), attachment proportion (AP) and absolute percentage error (APE) were calculated for each nodule, and quantile regression analyses were performed to model the 95th percentile of the APE. For validation, 41 patients who underwent metastasectomy were included. After volumetry of the resected nodules, sensitivity and specificity for the diagnosis of metastatic nodules were compared between two different thresholds of nodule growth determination: a uniform 25% volume change threshold and an individualized threshold calculated from the model (estimated 95th percentile APE). SVP and AP were included in the final model: estimated 95th percentile APE = 37.82 · SVP + 48.60 · AP - 10.87. In the validation session, the individualized threshold showed significantly higher sensitivity for the diagnosis of metastatic nodules than the uniform 25% threshold (75.0% vs. 66.0%, P = 0.004). CONCLUSION: The estimated 95th percentile APE, used as an individualized threshold of nodule growth, showed greater sensitivity in diagnosing metastatic nodules than a global 25% threshold. • The 95th percentile APE of a particular nodule can be predicted. • The estimated 95th percentile APE can be utilized as an individualized threshold. • More sensitive diagnosis of metastasis can be made with an individualized threshold. • Tailored nodule management can be provided during nodule growth follow-up.
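The final model above is stated explicitly (estimated 95th percentile APE = 37.82·SVP + 48.60·AP − 10.87), so a direct sketch of the individualized growth threshold is possible; the SVP/AP values below are illustrative inputs, not study data:

```python
def estimated_95th_percentile_ape(svp, ap):
    """Published model: SVP = surface voxel proportion,
    AP = attachment proportion (both in [0, 1]); returns a percent."""
    return 37.82 * svp + 48.60 * ap - 10.87

def nodule_grew(volume_change_percent, svp, ap):
    # Growth is called only when the measured change exceeds the nodule's
    # own measurement-variability threshold.
    return abs(volume_change_percent) > estimated_95th_percentile_ape(svp, ap)

threshold = estimated_95th_percentile_ape(svp=0.4, ap=0.1)
print(round(threshold, 3))           # individualized threshold, in percent
print(nodule_grew(20.0, 0.4, 0.1))   # 20% change exceeds ~9.1% -> growth
```

This illustrates why the individualized threshold is more sensitive: a 20% volume change counts as growth here, whereas the uniform 25% rule would miss it.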
Zuñiga, Cristal; Li, Chien-Ting; Huelsman, Tyler; Levering, Jennifer; Zielinski, Daniel C; McConnell, Brian O; Long, Christopher P; Knoshaug, Eric P; Guarnieri, Michael T; Antoniewicz, Maciek R; Betenbaugh, Michael J; Zengler, Karsten
2016-09-01
The green microalga Chlorella vulgaris has been widely recognized as a promising candidate for biofuel production due to its ability to store high lipid content and its natural metabolic versatility. Compartmentalized genome-scale metabolic models constructed from genome sequences enable quantitative insight into the transport and metabolism of compounds within a target organism. These metabolic models have long been utilized to generate optimized design strategies for an improved production process. Here, we describe the reconstruction, validation, and application of a genome-scale metabolic model for C. vulgaris UTEX 395, iCZ843. The reconstruction represents the most comprehensive model for any eukaryotic photosynthetic organism to date, based on the genome size and number of genes in the reconstruction. The highly curated model accurately predicts phenotypes under photoautotrophic, heterotrophic, and mixotrophic conditions. The model was validated against experimental data and lays the foundation for model-driven strain design and medium alteration to improve yield. Calculated flux distributions under different trophic conditions show that a number of key pathways are affected by nitrogen starvation conditions, including central carbon metabolism and amino acid, nucleotide, and pigment biosynthetic pathways. Furthermore, model prediction of growth rates under various medium compositions and subsequent experimental validation showed an increased growth rate with the addition of tryptophan and methionine. © 2016 American Society of Plant Biologists. All rights reserved.
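Phenotype prediction with a genome-scale model like iCZ843 is typically done by flux balance analysis: maximize biomass flux subject to steady state (S·v = 0) and flux bounds. The toy three-reaction chain below (uptake → conversion → biomass) is small enough to solve by inspection rather than with a linear-program solver, and is illustrative only, unrelated to the actual iCZ843 reconstruction.

```python
# Upper flux bounds for a linear toy pathway (illustrative values).
BOUNDS = {"uptake": 10.0, "conversion": 8.0, "biomass": 15.0}

def max_growth(bounds):
    # At steady state, a linear pathway with unit stoichiometry forces
    # v_uptake = v_conversion = v_biomass, so the optimal biomass flux
    # is simply the tightest upper bound along the chain.
    return min(bounds.values())

print(max_growth(BOUNDS))  # the conversion step limits growth here
```

In a real reconstruction the same principle holds at scale: adding a nutrient such as tryptophan relaxes a binding constraint, which is how the model predicts the growth-rate increase that the experiments then confirmed.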
Qin, Li-Tang; Liu, Shu-Shen; Liu, Hai-Ling
2010-02-01
A five-variable model (model M2) was developed for the bioconcentration factors (BCFs) of nonpolar organic compounds (NPOCs) by using the molecular electronegativity distance vector (MEDV) to characterize the structures of the NPOCs and variable selection and modeling based on prediction (VSMP) to select the optimum descriptors. The estimated correlation coefficient (r2) and the leave-one-out cross-validation correlation coefficient (q2) of model M2 were 0.9271 and 0.9171, respectively. The model was externally validated by splitting the whole data set into a representative training set of 85 chemicals and a validation set of 29 chemicals. The results show that the main structural factors influencing the BCFs of NPOCs are -cCc, cCcc, -Cl, and -Br (where "-" refers to a single bond and "c" refers to a conjugated bond). The quantitative structure-property relationship (QSPR) model can effectively predict the BCFs of NPOCs, and the predictions of the model can also extend the current BCF database of experimental values.
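The leave-one-out q2 reported for model M2 refits the model n times, each time predicting the held-out compound; a sketch for a one-descriptor linear model on toy (descriptor, log BCF) pairs, not the MEDV descriptors of the study:

```python
def loo_q2(x, y):
    """Leave-one-out cross-validated q^2 for simple linear regression:
    1 - PRESS / total sum of squares about the mean of y."""
    n = len(x)
    press, mean_y = 0.0, sum(y) / n
    for i in range(n):
        xt = [v for j, v in enumerate(x) if j != i]   # training descriptors
        yt = [v for j, v in enumerate(y) if j != i]   # training responses
        mx, my = sum(xt) / len(xt), sum(yt) / len(yt)
        slope = (sum((a - mx) * (b - my) for a, b in zip(xt, yt))
                 / sum((a - mx) ** 2 for a in xt))
        pred = my + slope * (x[i] - mx)               # predict held-out point
        press += (y[i] - pred) ** 2
    ss_tot = sum((v - mean_y) ** 2 for v in y)
    return 1 - press / ss_tot

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]   # toy descriptor values
y = [1.1, 2.0, 2.9, 4.2, 4.8, 6.1]   # toy log BCF values
q2 = loo_q2(x, y)
print(round(q2, 3))
```

Because each prediction excludes the compound being predicted, q2 is always at or below the fitted r2, which is why the pair 0.9271/0.9171 in the abstract signals a stable model.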
PACIC Instrument: disentangling dimensions using published validation models.
Iglesias, K; Burnand, B; Peytremann-Bridevaux, I
2014-06-01
To better understand the structure of the Patient Assessment of Chronic Illness Care (PACIC) instrument, and more specifically to test all published validation models using one single data set and appropriate statistical tools. Validation study using data from a cross-sectional survey of a population-based sample of non-institutionalized adults with diabetes residing in Switzerland (canton of Vaud), using the French version of the 20-item PACIC instrument (5-point response scale). We conducted validation analyses using confirmatory factor analysis (CFA). The original five-dimension model and other published models were tested with three types of CFA, based on: (i) a Pearson estimator of the variance-covariance matrix, (ii) a polychoric correlation matrix and (iii) likelihood estimation with a multinomial distribution for the manifest variables. All models were assessed using loadings and goodness-of-fit measures. The analytical sample included 406 patients. Mean age was 64.4 years and 59% were men. Medians of item responses varied between 1 and 4 (range 1-5), and the proportions of missing values ranged between 5.7% and 12.3%. Strong floor and ceiling effects were present. Even though loadings of the tested models were relatively high, the only model showing acceptable fit was the 11-item single-dimension model. PACIC was associated with the expected variables of the field. Our results showed that the model considering 11 items in a single dimension exhibited the best fit for our data. A single score, in complement to the consideration of single-item results, might be used instead of the five dimensions usually described. © The Author 2014. Published by Oxford University Press in association with the International Society for Quality in Health Care; all rights reserved.
Development and Validation of a Safety Climate Scale for Manufacturing Industry
Ghahramani, Abolfazl; Khalkhali, Hamid R.
2015-01-01
Background: This paper describes the development of a scale for measuring safety climate. Methods: This study was conducted in six manufacturing companies in Iran. The scale was developed through a literature review on safety climate and the construction of a question pool; the number of items was reduced to 71 after a screening process. Results: Content validity analysis showed that 59 items had an excellent item content validity index (≥ 0.78) and content validity ratio (> 0.38). Exploratory factor analysis resulted in eight safety climate dimensions. The reliability value for the final 45-item scale was 0.96. Confirmatory factor analysis showed that the safety climate model is satisfactory. Conclusion: This study produced a valid and reliable scale for measuring safety climate in manufacturing companies.
Bornhorst, Ellen R; Tang, Juming; Sablani, Shyam S; Barbosa-Cánovas, Gustavo V; Liu, Fang
2017-07-01
Development and selection of model foods is a critical part of microwave thermal process development, simulation validation, and optimization. Previously developed model foods for pasteurization process evaluation utilized Maillard reaction products as the time-temperature integrators, which resulted in similar temperature sensitivity among the models. The aim of this research was to develop additional model foods based on different time-temperature integrators, determine their dielectric properties and color change kinetics, and validate the optimal model food in hot water and microwave-assisted pasteurization processes. Color, quantified using the a* value, was selected as the time-temperature indicator for green pea and garlic puree model foods. Results showed that 915 MHz microwaves had a greater penetration depth into the green pea model food than into the garlic. a* value reaction rates for the green pea model were approximately 4 times slower than in the garlic model food; slower reaction rates were preferred for the application of the model food in this study, that is, quality evaluation for a target process of 90 °C for 10 min at the cold spot. Pasteurization validation used the green pea model food, and results showed quantifiable differences between the color of the unheated control, hot water pasteurization, and the microwave-assisted thermal pasteurization system. Both model foods developed in this research could be utilized for quality assessment and optimization of various thermal pasteurization processes. © 2017 Institute of Food Technologists®.
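Color-change kinetics of the kind described above are commonly modeled as first-order decay of the a* value toward an equilibrium color. The rate constants below are invented, preserving only the reported ~4:1 ratio (garlic roughly 4 times faster than green pea); they are not the study's fitted values.

```python
import math

def a_star(t_min, a0, a_inf, k):
    # First-order approach to the equilibrium color a_inf from initial a0.
    return a_inf + (a0 - a_inf) * math.exp(-k * t_min)

K_GARLIC, K_PEA = 0.20, 0.05   # per minute, illustrative 4:1 ratio
a0, a_inf = -12.0, 0.0         # green (negative a*) fading toward neutral

for t in (0, 10):  # compare the two model foods at 0 and 10 min of heating
    print(t, round(a_star(t, a0, a_inf, K_PEA), 2),
             round(a_star(t, a0, a_inf, K_GARLIC), 2))
```

A slower rate constant means the color change stays measurable over the full 90 °C / 10 min target process rather than saturating early, which is why the green pea model was preferred.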
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reynoso, F; Cho, S
Purpose: To develop and validate a Monte Carlo (MC) model of a Phillips RT-250 orthovoltage unit to test various beam spectrum modulation strategies for in vitro/in vivo studies. A model of this type would enable the production of unconventional beams from a typical orthovoltage unit for novel therapeutic applications such as gold nanoparticle-aided radiotherapy. Methods: The MCNP5 code system was used to create an MC model of the head of the RT-250 and a 30 × 30 × 30 cm{sup 3} water phantom. For the x-ray machine head, the current model includes the vacuum region, beryllium window, collimators, inherent filters and exterior steel housing. For increased computational efficiency, the primary x-ray spectrum from the target was calculated with a well-validated analytical software package. Calculated percentage-depth-dose (PDD) values and photon spectra were validated against experimental data from film and Compton-scatter spectrum measurements. Results: The model was validated for three common settings of the machine, namely 250 kVp (0.25 mm Cu), 125 kVp (2 mm Al), and 75 kVp (2 mm Al). The MC results for the PDD curves were compared with film measurements and showed good agreement at all depths, with a maximum difference of 4% around dmax and under 2.5% for all other depths. The primary photon spectra were also measured and compared with the MC results, showing reasonable agreement between the two and validating the input spectra and the final spectra as predicted by the current MC model. Conclusion: The current MC model accurately predicted the dosimetric and spectral characteristics of each beam from the RT-250 orthovoltage unit, demonstrating its applicability and reliability for beam spectrum modulation tasks. It accomplished this without the need to model the bremsstrahlung x-ray production from the target, while improving computational efficiency by at least two orders of magnitude. Supported by DOD/PCRP grant W81XWH-12-1-0198.
The Self-Description Inventory+, Part 1: Factor Structure and Convergent Validity Analyses
2013-07-01
measures 12 scales of personality. The current report examines the possibility of replacing the EQ with a Five Factor Model (FFM) measure of ... Checklist. Our results show that the SDI+ has scales that are intercorrelated in a manner consistent with the FFM (Experiment 1), a factor structure ... met the criteria showing it to be an FFM instrument, we will conduct concurrent validity research to determine if the SDI+ has greater predictive
Results and Validation of MODIS Aerosol Retrievals Over Land and Ocean
NASA Technical Reports Server (NTRS)
Remer, Lorraine; Einaudi, Franco (Technical Monitor)
2001-01-01
The MODerate Resolution Imaging Spectroradiometer (MODIS) instrument aboard the Terra spacecraft has been retrieving aerosol parameters since late February 2000. Initial qualitative checking of the products showed very promising results including matching of land and ocean retrievals at coastlines. Using AERONET ground-based radiometers as our primary validation tool, we have established quantitative validation as well. Our results show that for most aerosol types, the MODIS products fall within the pre-launch estimated uncertainties. Surface reflectance and aerosol model assumptions appear to be sufficiently accurate for the optical thickness retrieval. Dust provides a possible exception, which may be due to non-spherical effects. Over ocean the MODIS products include information on particle size, and these parameters are also validated with AERONET retrievals.
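The pre-launch uncertainty mentioned above is usually stated as an expected-error envelope of the form Δτ = ±(a + b·τ). A minimal sketch of scoring retrievals against such an envelope; the coefficients (a = 0.05, b = 0.15, the commonly quoted over-land values) and the AOD numbers are illustrative assumptions, not this study's data:

```python
# Fraction of satellite AOD retrievals within an expected-error envelope
# |tau_sat - tau_ground| <= a + b * tau_ground, relative to ground truth.
def within_envelope(aeronet, modis, a=0.05, b=0.15):
    hits = sum(abs(m - t) <= a + b * t for t, m in zip(aeronet, modis))
    return hits / len(aeronet)

aeronet_tau = [0.10, 0.25, 0.50, 0.80]  # illustrative AERONET optical thickness
modis_tau = [0.12, 0.22, 0.70, 0.85]    # illustrative MODIS retrievals

print(within_envelope(aeronet_tau, modis_tau))  # → 0.75
```

Validation studies of this kind typically report the product as meeting specification when a large majority of matched retrievals fall inside the envelope.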
Results and Validation of MODIS Aerosol Retrievals over Land and Ocean
NASA Technical Reports Server (NTRS)
Remer, L. A.; Kaufman, Y. J.; Tanre, D.; Ichoku, C.; Chu, D. A.; Mattoo, S.; Levy, R.; Martins, J. V.; Li, R.-R.; Einaudi, Franco (Technical Monitor)
2000-01-01
The MODerate Resolution Imaging Spectroradiometer (MODIS) instrument aboard the Terra spacecraft has been retrieving aerosol parameters since late February 2000. Initial qualitative checking of the products showed very promising results including matching of land and ocean retrievals at coastlines. Using AERONET ground-based radiometers as our primary validation tool, we have established quantitative validation as well. Our results show that for most aerosol types, the MODIS products fall within the pre-launch estimated uncertainties. Surface reflectance and aerosol model assumptions appear to be sufficiently accurate for the optical thickness retrieval. Dust provides a possible exception, which may be due to non-spherical effects. Over ocean the MODIS products include information on particle size, and these parameters are also validated with AERONET retrievals.
Veldhuijzen van Zanten, Sophie E M; Lane, Adam; Heymans, Martijn W; Baugh, Joshua; Chaney, Brooklyn; Hoffman, Lindsey M; Doughman, Renee; Jansen, Marc H A; Sanchez, Esther; Vandertop, William P; Kaspers, Gertjan J L; van Vuurden, Dannis G; Fouladi, Maryam; Jones, Blaise V; Leach, James
2017-08-01
We aimed to perform external validation of the recently developed survival prediction model for diffuse intrinsic pontine glioma (DIPG), and discuss its utility. The DIPG survival prediction model was developed in a cohort of patients from the Netherlands, United Kingdom and Germany, registered in the SIOPE DIPG Registry, and includes age <3 years, longer symptom duration and receipt of chemotherapy as favorable predictors, and presence of ring-enhancement on MRI as an unfavorable predictor. Model performance was evaluated by analyzing discrimination and calibration. External validation was performed using an unselected cohort from the International DIPG Registry, including patients from the United States, Canada, Australia and New Zealand. Basic comparison with the results of the original study was performed using descriptive statistics, and univariate and multivariable regression analyses in the validation cohort. External validation was assessed following a variety of analyses described previously. Baseline patient characteristics and results from the regression analyses were largely comparable. Kaplan-Meier curves of the validation cohort reproduced separated groups of standard (n = 39), intermediate (n = 125), and high-risk (n = 78) patients. This discriminative ability was confirmed by similar values for the hazard ratios across these risk groups. The calibration curve in the validation cohort showed a symmetric underestimation of the predicted survival probabilities. In this external validation study, we demonstrate that the DIPG survival prediction model has acceptable cross-cohort calibration and is able to discriminate patients with short, average, and increased survival. We discuss how this clinico-radiological model may serve a useful role in current clinical practice.
Developing and Validating the Socio-Technical Model in Ontology Engineering
NASA Astrophysics Data System (ADS)
Silalahi, Mesnan; Indra Sensuse, Dana; Giri Sucahyo, Yudho; Fadhilah Akmaliah, Izzah; Rahayu, Puji; Cahyaningsih, Elin
2018-03-01
This paper describes results from an attempt to develop a model in ontology engineering methodology and a way to validate the model. The approach to methodology in ontology engineering is taken from the point of view of socio-technical system theory. Qualitative research synthesis was used to build the model using meta-ethnography. In order to ensure the objectivity of the measurement, an inter-rater reliability method was applied using a multi-rater Fleiss Kappa. The results show the accordance of the research output with the diamond model of socio-technical system theory, evidenced by the interdependency of the four socio-technical variables, namely people, technology, structure and task.
Navidpour, Fariba; Dolatian, Mahrokh; Yaghmaei, Farideh; Majd, Hamid Alavi; Hashemi, Seyed Saeed
2015-04-23
Pregnant women tend to experience anxiety and stress when faced with the changes to their biology, environment and personal relationships. The identification of these factors and the prevention of their side effects are vital for both mother and fetus. The present study was conducted to validate and examine the factor structure of the Persian version of the Pregnancy's Worries and Stress Questionnaire (PWSQ). The 25-item PWSQ was first translated by specialists into Persian. The questionnaire's validity was determined using face, content, criterion and construct validity, and its reliability was examined using Cronbach's alpha. Confirmatory factor analysis was performed in AMOS and SPSS 21. Participants included healthy Iranian pregnant women (8-39 weeks) who were referred to selected hospitals for prenatal care. The hospitals included private, social security and university hospitals and were selected through the random cluster sampling method. The results of the validity and reliability assessments of the questionnaire were acceptable. The calculated Cronbach's alpha showed a high internal consistency of 0.89. The confirmatory factor analysis, using the χ2, CMIN/DF, IFI, CFI, NFI and NNFI indices, showed the 6-factor model to be the best-fitting model for explaining the data. The questionnaire was translated into Persian to examine stress and worry specific to Iranian pregnant women. The psychometric results showed that the questionnaire is suitable for identifying Iranian pregnant women with pregnancy-related stress.
[Elaboration and validation of a tool to measure psychological well-being: WBMMS].
Massé, R; Poulin, C; Dassa, C; Lambert, J; Bélair, S; Battaglini, M A
1998-01-01
Psychological well-being scales used in epidemiologic surveys usually show high construct validity. The content validation, however, is less convincing since these scales rest on lists of items that reflect the theoretical model of the authors. In this study we present results of the construct and criterion validation of a new Well-Being Manifestations Measure Scale (WBMMS) founded on an initial list of manifestations derived from an original content validation in a general population. It is concluded that national and public health epidemiologic surveys should include both measures of positive and negative mental health.
Flexible energy harvesting from hard piezoelectric beams
NASA Astrophysics Data System (ADS)
Delnavaz, Aidin; Voix, Jérémie
2016-11-01
This paper presents the design, multiphysics finite element modeling and experimental validation of a new miniaturized PZT generator that integrates a bulk piezoelectric ceramic onto a flexible platform for energy harvesting from the human body pressing force. In spite of its flexibility, the mechanical structure of the proposed device is simple to fabricate and efficient for energy conversion. The finite element model involves both the mechanical and piezoelectric parts of the device coupled with the electrical circuit model. The energy harvester prototype was fabricated and tested under a low-frequency periodic pressing force for 10 seconds. The experimental results show that several nanojoules of electrical energy were stored in a capacitor, which is quite significant given the size of the device. The finite element model is validated by the good agreement observed between experimental and simulation results. The validated model could be used to optimize the device for energy harvesting from earcanal deformations.
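The nanojoule figure above follows from the standard capacitor energy relation E = ½CV². A quick sketch with illustrative component values (the prototype's actual capacitance and voltage are not given in the abstract):

```python
# Energy stored on a harvesting capacitor, E = 1/2 * C * V^2,
# converted to nanojoules. Component values below are assumptions.
def stored_energy_nj(capacitance_f, voltage_v):
    return 0.5 * capacitance_f * voltage_v ** 2 * 1e9  # J -> nJ

# e.g. a 100 nF capacitor charged to 0.4 V stores about 8 nJ
print(round(stored_energy_nj(100e-9, 0.4), 3))
```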
Dynamic modelling and experimental validation of three wheeled tilting vehicles
NASA Astrophysics Data System (ADS)
Amati, Nicola; Festini, Andrea; Pelizza, Luigi; Tonoli, Andrea
2011-06-01
The present paper describes the study of the stability in straight running of a three-wheeled tilting vehicle for urban and sub-urban mobility. The analysis was carried out by developing a multibody model in the Matlab/Simulink SimMechanics environment. An Adams-Motorcycle model and an equivalent analytical model were developed for cross-validation and for highlighting the similarities with the lateral dynamics of motorcycles. Field tests were carried out to validate the model and identify some critical parameters, such as the damping of the steering system. The stability analysis demonstrates that the lateral dynamic motions are characterised by vibration modes similar to those of a motorcycle. Additionally, it shows that the wobble mode is significantly affected by the castor trail, whereas it is only slightly affected by the dynamics of the front suspension. For the present case study, the frame compliance also has no influence on the weave and wobble.
Velpuri, N.M.; Senay, G.B.; Asante, K.O.
2012-01-01
Lake Turkana is one of the largest desert lakes in the world and is characterized by high degrees of inter- and intra-annual fluctuation. The hydrology and water balance of this lake have not been well understood due to its remote location and the unavailability of reliable ground truth datasets. Managing surface water resources is a great challenge in areas where in-situ data are either limited or unavailable. In this study, multi-source satellite-driven data such as satellite-based rainfall estimates, modelled runoff, evapotranspiration, and a digital elevation dataset were used to model Lake Turkana water levels from 1998 to 2009. Due to the unavailability of reliable lake level data, an approach is presented to calibrate and validate the water balance model of Lake Turkana using a composite lake level product of TOPEX/Poseidon, Jason-1, and ENVISAT satellite altimetry data. Model validation results showed that the satellite-driven water balance model can satisfactorily capture the patterns and seasonal variations of the Lake Turkana water level fluctuations, with a Pearson's correlation coefficient of 0.90 and a Nash-Sutcliffe Coefficient of Efficiency (NSCE) of 0.80 during the validation period (2004-2009). Model error estimates were within 10% of the natural variability of the lake. Our analysis indicated that fluctuations in Lake Turkana water levels are mainly driven by lake inflows and over-the-lake evaporation. Over-the-lake rainfall contributes only up to 30% of the lake's evaporative demand. During the modelling period, Lake Turkana showed seasonal variations of 1-2 m, and the lake level fluctuated by up to 4 m between 1998 and 2009. This study demonstrated the usefulness of satellite altimetry data to calibrate and validate the satellite-driven hydrological model for Lake Turkana without using any in-situ data.
Furthermore, for Lake Turkana, we identified and outlined opportunities and challenges of using a calibrated satellite-driven water balance model for (i) quantitative assessment of the impact of basin developmental activities on lake levels and (ii) forecasting lake level changes and their impact on fisheries. From this study, we suggest that globally available satellite altimetry data provide a unique opportunity for the calibration and validation of hydrologic models in ungauged basins. © Author(s) 2012.
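The two validation statistics quoted above, Pearson's r and the Nash-Sutcliffe Coefficient of Efficiency, NSCE = 1 − Σ(O−M)² / Σ(O−Ō)², can be computed directly from paired observed and modelled lake levels. A minimal sketch with illustrative values (not the study's series):

```python
import numpy as np

def nash_sutcliffe(observed, modelled):
    """NSCE: 1 means a perfect match; 0 means no better than the mean."""
    observed, modelled = np.asarray(observed), np.asarray(modelled)
    return 1.0 - np.sum((observed - modelled) ** 2) / np.sum(
        (observed - observed.mean()) ** 2
    )

# Illustrative altimetry (observed) vs. water-balance (modelled) levels (m).
obs = [362.1, 362.6, 363.4, 363.0, 362.4, 362.9]
mod = [362.0, 362.8, 363.2, 363.1, 362.5, 362.7]

nsce = nash_sutcliffe(obs, mod)
r = np.corrcoef(obs, mod)[0, 1]
print(round(nsce, 2), round(r, 2))
```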
A FAST BAYESIAN METHOD FOR UPDATING AND FORECASTING HOURLY OZONE LEVELS
A Bayesian hierarchical space-time model is proposed by combining information from real-time ambient AIRNow air monitoring data, and output from a computer simulation model known as the Community Multi-scale Air Quality (Eta-CMAQ) forecast model. A model validation analysis shows...
Use of the Ames Check Standard Model for the Validation of Wall Interference Corrections
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Amaya, M.; Flach, R.
2018-01-01
The new check standard model of the NASA Ames 11-ft Transonic Wind Tunnel was chosen for a future validation of the facility's wall interference correction system. The chosen validation approach takes advantage of the fact that test conditions experienced by a large model in the slotted part of the tunnel's test section will change significantly if a subset of the slots is temporarily sealed. Therefore, the model's aerodynamic coefficients have to be recorded, corrected, and compared for two different test section configurations in order to perform the validation. Test section configurations with highly accurate Mach number and dynamic pressure calibrations were selected for the validation. First, the model is tested with all test section slots in open configuration while keeping the model's center of rotation on the tunnel centerline. In the next step, slots on the test section floor are sealed and the model is moved to a new center of rotation that is 33 inches below the tunnel centerline. Then, the original angle of attack sweeps are repeated. Afterwards, wall interference corrections are applied to both test data sets and response surface models of the resulting aerodynamic coefficients in interference-free flow are generated. Finally, the response surface models are used to predict the aerodynamic coefficients for a family of angles of attack while keeping dynamic pressure, Mach number, and Reynolds number constant. The validation is considered successful if the corrected aerodynamic coefficients obtained from the related response surface model pair show good agreement. Residual differences between the corrected coefficient sets will be analyzed as well because they are an indicator of the overall accuracy of the facility's wall interference correction process.
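The final step above, evaluating both response-surface models over a common angle-of-attack family at fixed flow conditions and comparing, can be sketched in one dimension. The quadratic-in-alpha fit and the coefficient values below are illustrative stand-ins for the facility's multi-variable response surfaces, not actual tunnel data:

```python
import numpy as np

# Illustrative corrected lift-coefficient data for two test-section
# configurations (all slots open vs. floor slots sealed).
alpha = np.array([-2.0, 0.0, 2.0, 4.0, 6.0])          # angle of attack, deg
cl_open = np.array([-0.10, 0.12, 0.34, 0.55, 0.74])
cl_sealed = np.array([-0.09, 0.12, 0.33, 0.56, 0.73])

# Quadratic response surfaces in alpha (a 1-D stand-in for the full models).
rsm_open = np.polynomial.Polynomial.fit(alpha, cl_open, 2)
rsm_sealed = np.polynomial.Polynomial.fit(alpha, cl_sealed, 2)

# Compare predictions on a common angle-of-attack sweep.
sweep = np.linspace(-2.0, 6.0, 9)
max_diff = np.max(np.abs(rsm_open(sweep) - rsm_sealed(sweep)))
print(f"max |delta CL| over sweep: {max_diff:.4f}")
```

Good agreement between the two predicted coefficient sets (a small `max_diff`) would count as a successful validation in the sense described above.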
Modeling apple surface temperature dynamics based on weather data.
Li, Lei; Peters, Troy; Zhang, Qin; Zhang, Jingjin; Huang, Danfeng
2014-10-27
The exposure of fruit surfaces to direct sunlight during the summer months can result in sunburn damage. Losses due to sunburn damage are a major economic problem when marketing fresh apples. The objective of this study was to develop and validate a model for simulating fruit surface temperature (FST) dynamics based on energy balance and measured weather data. A series of weather data (air temperature, humidity, solar radiation, and wind speed) was recorded for seven hours between 11:00-18:00 for two months at fifteen minute intervals. To validate the model, the FSTs of "Fuji" apples were monitored using an infrared camera in a natural orchard environment. The FST dynamics were measured using a series of thermal images. For the apples that were completely exposed to the sun, the RMSE of the model for estimating FST was less than 2.0 °C. A sensitivity analysis of the emissivity of the apple surface and the conductance of the fruit surface to water vapour showed that accurate estimations of the apple surface emissivity were important for the model. The validation results showed that the model was capable of accurately describing the thermal performances of apples under different solar radiation intensities. Thus, this model could be used to more accurately estimate the FST relative to estimates that only consider the air temperature. In addition, this model provides useful information for sunburn protection management.
Modeling Apple Surface Temperature Dynamics Based on Weather Data
Li, Lei; Peters, Troy; Zhang, Qin; Zhang, Jingjin; Huang, Danfeng
2014-01-01
The exposure of fruit surfaces to direct sunlight during the summer months can result in sunburn damage. Losses due to sunburn damage are a major economic problem when marketing fresh apples. The objective of this study was to develop and validate a model for simulating fruit surface temperature (FST) dynamics based on energy balance and measured weather data. A series of weather data (air temperature, humidity, solar radiation, and wind speed) was recorded for seven hours between 11:00–18:00 for two months at fifteen minute intervals. To validate the model, the FSTs of “Fuji” apples were monitored using an infrared camera in a natural orchard environment. The FST dynamics were measured using a series of thermal images. For the apples that were completely exposed to the sun, the RMSE of the model for estimating FST was less than 2.0 °C. A sensitivity analysis of the emissivity of the apple surface and the conductance of the fruit surface to water vapour showed that accurate estimations of the apple surface emissivity were important for the model. The validation results showed that the model was capable of accurately describing the thermal performances of apples under different solar radiation intensities. Thus, this model could be used to more accurately estimate the FST relative to estimates that only consider the air temperature. In addition, this model provides useful information for sunburn protection management. PMID:25350507
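The headline validation number in both records above is an RMSE between IR-measured and modelled fruit surface temperatures. A minimal sketch of that comparison with illustrative temperatures (not the study's measurements):

```python
import math

def rmse(measured, predicted):
    """Root-mean-square error between paired measurements and predictions."""
    return math.sqrt(
        sum((m - p) ** 2 for m, p in zip(measured, predicted)) / len(measured)
    )

# Illustrative IR-measured vs. modelled fruit surface temperatures (deg C).
measured = [35.2, 38.1, 41.0, 39.5]
predicted = [34.0, 39.0, 42.5, 38.8]

print(round(rmse(measured, predicted), 2))
```

The study's criterion (RMSE below 2.0 °C for fully sun-exposed apples) would be checked against exactly this kind of number.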
NASA Astrophysics Data System (ADS)
Alexander, M. Joan; Stephan, Claudia
2015-04-01
In climate models, gravity waves remain too poorly resolved to be directly modelled. Instead, simplified parameterizations are used to include gravity wave effects on model winds. A few climate models link some of the parameterized waves to convective sources, providing a mechanism for feedback between changes in convection and gravity wave-driven changes in circulation in the tropics and above high-latitude storms. These convective wave parameterizations are based on limited case studies with cloud-resolving models, but they are poorly constrained by observational validation, and their tuning parameters have large uncertainties. Our new work distills results from complex, full-physics cloud-resolving model studies to the essential variables for gravity wave generation. We use the Weather Research and Forecasting (WRF) model to study the relationships between precipitation, latent heating/cooling and other cloud properties and the spectrum of gravity wave momentum flux above midlatitude storm systems. Results show the gravity wave spectrum is surprisingly insensitive to the representation of microphysics in WRF. This is good news for the use of these models in gravity wave parameterization development, since microphysical properties are a key uncertainty. We further use the full-physics cloud-resolving model as a tool to directly link observed precipitation variability to gravity wave generation. We show that waves in an idealized model forced with radar-observed precipitation can quantitatively reproduce instantaneous satellite-observed features of the gravity wave field above storms, which is a powerful validation of our understanding of waves generated by convection. The idealized model directly links observations of surface precipitation to observed waves in the stratosphere, and the simplicity of the model permits deep/large-area domains for studies of wave-mean flow interactions.

This unique validated model tool permits quantitative studies of gravity wave driving of regional circulation and provides a new method for future development of realistic convective gravity wave parameterizations.
Validation of an immortalized human (hBMEC) in vitro blood-brain barrier model.
Eigenmann, Daniela Elisabeth; Jähne, Evelyn Andrea; Smieško, Martin; Hamburger, Matthias; Oufir, Mouhssin
2016-03-01
We recently established and optimized an immortalized human in vitro blood-brain barrier (BBB) model based on the hBMEC cell line. In the present work, we validated this mono-culture 24-well model with a representative series of drug substances which are known to cross or not to cross the BBB. For each individual compound, a quantitative UHPLC-MS/MS method in Ringer HEPES buffer was developed and validated according to current regulatory guidelines, with respect to selectivity, precision, and reliability. Various biological and analytical challenges were met during method validation, highlighting the importance of careful method development. The positive controls antipyrine, caffeine, diazepam, and propranolol showed mean endothelial permeability coefficients (Pe) in the range of 17-70 × 10⁻⁶ cm/s, indicating moderate to high BBB permeability when compared to the barrier integrity marker sodium fluorescein (mean Pe 3-5 × 10⁻⁶ cm/s). The negative controls atenolol, cimetidine, and vinblastine showed mean Pe values < 10 × 10⁻⁶ cm/s, suggesting low permeability. In silico calculations were in agreement with in vitro data. With the exception of quinidine (P-glycoprotein inhibitor and substrate), BBB permeability of all control compounds was correctly predicted by this new, easy, and fast to set up human in vitro BBB model. Addition of retinoic acid and puromycin did not increase transendothelial electrical resistance (TEER) values of the BBB model.
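The permeability coefficients quoted above come from transwell transport data. A minimal sketch of the generic textbook relation P = (dQ/dt)/(A·C0), the flux of compound into the receiver chamber divided by filter area and initial donor concentration; the study's exact scheme may additionally correct for blank filters, and the numbers below are illustrative assumptions:

```python
# Apparent permeability from a transwell assay, P = (dQ/dt) / (A * C0).
# Note 1 mL == 1 cm^3, so ug/mL is already ug/cm^3 and units cancel to cm/s.
def permeability_cm_per_s(flux_ug_per_s, area_cm2, donor_conc_ug_per_ml):
    return flux_ug_per_s / (area_cm2 * donor_conc_ug_per_ml)

# Illustrative numbers: a 0.33 cm^2 insert (typical for 24-well plates),
# donor at 10 ug/mL, receiver accumulating 6.6e-5 ug/s.
p = permeability_cm_per_s(6.6e-5, 0.33, 10.0)
print(f"{p:.1e} cm/s")
```

A value of 2 × 10⁻⁵ cm/s (20 × 10⁻⁶ cm/s) would fall in the abstract's "moderate to high permeability" range.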
NASA Astrophysics Data System (ADS)
Brown, Alexander; Eviston, Connor
2017-02-01
Multiple FEM models of complex eddy current coil geometries were created and validated to calculate the change of impedance due to the presence of a notch. Realistic simulations of eddy current inspections are required for model-assisted probability of detection (MAPOD) studies, inversion algorithms, experimental verification, and tailored probe design for NDE applications. An FEM solver was chosen to model complex real-world situations, including varying probe dimensions and orientations along with complex probe geometries. This will also enable the creation of a probe model library database with variable parameters. Verification and validation were performed using other commercially available eddy current modeling software as well as experimentally collected benchmark data. Data analysis and comparison showed that the created models were able to correctly model the probe and conductor interactions and accurately calculate the change in impedance of several experimental scenarios with acceptable error. The promising results of the models enabled the start of an eddy current probe model library to give experimenters easy access to powerful parameter-based eddy current models for alternate project applications.
Cross-validation of an employee safety climate model in Malaysia.
Bahari, Siti Fatimah; Clarke, Sharon
2013-06-01
Whilst substantial research has investigated the nature of safety climate, and its importance as a leading indicator of organisational safety, much of this research has been conducted with Western industrial samples. The current study focuses on the cross-validation of a safety climate model in the non-Western industrial context of Malaysian manufacturing. The first-order factorial validity of Cheyne et al.'s (1998) [Cheyne, A., Cox, S., Oliver, A., Tomas, J.M., 1998. Modelling safety climate in the prediction of levels of safety activity. Work and Stress, 12(3), 255-271] model was tested, using confirmatory factor analysis, in a Malaysian sample. Results showed that the model fit indices were below accepted levels, indicating that the original Cheyne et al. (1998) safety climate model was not supported. An alternative three-factor model was developed using exploratory factor analysis. Although these findings are not consistent with previously reported cross-validation studies, we argue that previous studies have focused on validation across Western samples, and that the current study demonstrates the need to take account of cultural factors in the development of safety climate models intended for use in non-Western contexts. The results have important implications for the transferability of existing safety climate models across cultures (for example, in global organisations) and highlight the need for future research to examine cross-cultural issues in relation to safety climate. Copyright © 2013 National Safety Council and Elsevier Ltd. All rights reserved.
Meertens, Linda J E; van Montfort, Pim; Scheepers, Hubertina C J; van Kuijk, Sander M J; Aardenburg, Robert; Langenveld, Josje; van Dooren, Ivo M A; Zwaan, Iris M; Spaanderman, Marc E A; Smits, Luc J M
2018-04-17
Prediction models may contribute to personalized risk-based management of women at high risk of spontaneous preterm delivery. Although prediction models are published frequently, often with promising results, external validation generally is lacking. We performed a systematic review of prediction models for the risk of spontaneous preterm birth based on routine clinical parameters. Additionally, we externally validated and evaluated the clinical potential of the models. Prediction models based on routinely collected maternal parameters obtainable during the first 16 weeks of gestation were eligible for selection. Risk of bias was assessed according to the CHARMS guidelines. We validated the selected models in a Dutch multicenter prospective cohort study comprising 2614 unselected pregnant women. Information on predictors was obtained by a web-based questionnaire. Predictive performance of the models was quantified by the area under the receiver operating characteristic curve (AUC) and calibration plots for the outcomes spontaneous preterm birth <37 weeks and <34 weeks of gestation. Clinical value was evaluated by means of decision curve analysis and by calculating classification accuracy for different risk thresholds. Four studies describing five prediction models fulfilled the eligibility criteria. Risk of bias assessment revealed a moderate to high risk of bias in three studies. The AUC of the models ranged from 0.54 to 0.67 and from 0.56 to 0.70 for the outcomes spontaneous preterm birth <37 weeks and <34 weeks of gestation, respectively. A subanalysis showed that the models discriminated poorly (AUC 0.51-0.56) for nulliparous women. Although we recalibrated the models, two models retained evidence of overfitting. The decision curve analysis showed low clinical benefit for the best performing models. This review revealed several reporting and methodological shortcomings of published prediction models for spontaneous preterm birth.
Our external validation study indicated that none of the models had the ability to predict spontaneous preterm birth adequately in our population. Further improvement of prediction models, using recent knowledge about both model development and potential risk factors, is necessary to provide an added value in personalized risk assessment of spontaneous preterm birth. © 2018 The Authors Acta Obstetricia et Gynecologica Scandinavica published by John Wiley & Sons Ltd on behalf of Nordic Federation of Societies of Obstetrics and Gynecology (NFOG).
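Decision curve analysis, used above to judge clinical value, compares models by their net benefit at a chosen risk threshold pt: NB(pt) = TP/N − (FP/N) · pt/(1−pt). A minimal sketch with made-up outcomes and predicted risks (not the cohort's data):

```python
def net_benefit(y_true, y_prob, threshold):
    """Net benefit of treating everyone whose predicted risk >= threshold:
    NB = TP/N - FP/N * (pt / (1 - pt))."""
    n = len(y_true)
    tp = sum(1 for y, p in zip(y_true, y_prob) if p >= threshold and y == 1)
    fp = sum(1 for y, p in zip(y_true, y_prob) if p >= threshold and y == 0)
    return tp / n - fp / n * (threshold / (1 - threshold))

# Illustrative outcomes (1 = spontaneous preterm birth) and predicted risks.
y = [0, 0, 1, 0, 1, 0, 0, 1, 0, 0]
p = [0.02, 0.10, 0.30, 0.05, 0.60, 0.08, 0.20, 0.15, 0.03, 0.40]

for pt in (0.05, 0.10, 0.20):
    print(pt, round(net_benefit(y, p, pt), 3))
```

A model shows clinical benefit at a threshold only if its net benefit exceeds that of the default strategies "treat all" and "treat none"; the review above found this margin to be low for even the best models.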
Validated simulator for space debris removal with nets and other flexible tethers applications
NASA Astrophysics Data System (ADS)
Gołębiowski, Wojciech; Michalczyk, Rafał; Dyrek, Michał; Battista, Umberto; Wormnes, Kjetil
2016-12-01
In the context of active debris removal technologies and preparation activities for the e.Deorbit mission, a simulator for the dynamics of net-shaped elastic bodies and their interactions with rigid bodies has been developed. Its main application is to aid net design and test scenarios for space debris deorbitation. The simulator can model all the phases of the debris capturing process: net launch, flight and wrapping around the target. It handles coupled simulation of rigid and flexible body dynamics. Flexible bodies were implemented using the Cosserat rod model, which allows the simulation of flexible threads or wires with elasticity and damping for stretching, bending and torsion. Threads may be combined into structures of any topology, so the software is able to simulate nets, pure tethers, tether bundles, cages, trusses, etc. Full contact dynamics was implemented. Programmatic interaction with the simulation is possible, e.g. for control implementation. The underlying model has been experimentally validated; because gravity significantly influences the process, the experiment had to be performed in microgravity conditions. The validation experiment, flown on a parabolic flight, was a downscaled Envisat capture process. The prepacked net was launched towards the satellite model; it expanded, hit the model and wrapped around it. The whole process was recorded with two fast stereographic camera sets for full 3D trajectory reconstruction. The trajectories were used to compare the net dynamics to the respective simulations and then to validate the simulation tool. The experiments were performed on board a Falcon-20 aircraft operated by the National Research Council in Ottawa, Canada. Validation results show that the model reflects the physics of the phenomenon accurately enough, so it may be used for scenario evaluation and mission design purposes. The functionalities of the simulator are described in detail in the paper, as well as its underlying model, sample cases and the methodology behind the validation.
Results are presented and typical use cases are discussed, showing that the software may be used to design throw nets for space debris capturing, but also to simulate the deorbitation process, a chaser control system, or general interactions between rigid and elastic bodies, all in a convenient and efficient way. The presented work was led by SKA Polska under an ESA contract, within the CleanSpace initiative.
A cross-validation package driving Netica with python
Fienen, Michael N.; Plant, Nathaniel G.
2014-01-01
Bayesian networks (BNs) are powerful tools for probabilistically simulating natural systems and emulating process models. Cross-validation is a technique to avoid the overfitting that results from overly complex BNs; overfitting reduces predictive skill. Cross-validation for BNs is known but rarely implemented, due partly to a lack of software tools designed to work with available BN packages. CVNetica is open-source, written in Python, and extends the Netica software package to perform cross-validation and to read, rebuild, and learn BNs from data. Insights gained from cross-validation, and their implications for prediction versus description, are illustrated with two examples: a data-driven oceanographic application and a model-emulation application. These examples show that overfitting occurs when BNs become more complex than the supporting data allow, and that overfitting incurs computational costs as well as a reduction in prediction skill. CVNetica evaluates overfitting using several complexity metrics (we used level of discretization) and its impact on performance metrics (we used skill).
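The overfitting effect that CVNetica is built to expose, held-out skill degrading as model complexity grows past what the data support, can be illustrated with a generic k-fold loop. Polynomial regression stands in for a BN here (CVNetica itself drives the commercial Netica package), and the data and degrees are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: a linear trend plus noise. A high-degree polynomial
# (our stand-in for an overly complex model) fits the noise.
x = np.linspace(0.0, 1.0, 40)
y = 2.0 * x + rng.normal(0.0, 0.3, x.size)

def kfold_mse(degree, k=5):
    """Mean held-out squared error of a polynomial model under k-fold CV.
    Folds are contiguous, so edge folds force extrapolation -- a stringent
    test that punishes overfit models severely."""
    idx = np.arange(x.size)
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        coef = np.polyfit(x[train], y[train], degree)
        pred = np.polyval(coef, x[fold])
        errs.append(np.mean((y[fold] - pred) ** 2))
    return float(np.mean(errs))

simple, complex_ = kfold_mse(1), kfold_mse(10)
print(f"held-out MSE, degree 1: {simple:.3f}; degree 10: {complex_:.3f}")
```

The held-out error of the over-complex model exceeds that of the simple one, which is exactly the signal cross-validation provides and the descriptive (training) fit hides.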
Validating and Optimizing the Effects of Model Progression in Simulation-Based Inquiry Learning
ERIC Educational Resources Information Center
Mulder, Yvonne G.; Lazonder, Ard W.; de Jong, Ton; Anjewierden, Anjo; Bollen, Lars
2012-01-01
Model progression denotes the organization of the inquiry learning process in successive phases of increasing complexity. This study investigated the effectiveness of model progression in general, and explored the added value of either broadening or narrowing students' possibilities to change model progression phases. Results showed that…
NASA Astrophysics Data System (ADS)
Nurjanah; Dahlan, J. A.; Wibisono, Y.
2017-02-01
This paper aims to design and develop computer-based e-learning teaching materials for improving the mathematical understanding ability and spatial sense of junior high school students. The particular aims are: (1) producing a teaching material design, an evaluation model, and instruments to measure the mathematical understanding ability and spatial sense of junior high school students; (2) conducting trials of the computer-based e-learning teaching material model, assessment, and instruments; (3) completing the computer-based e-learning teaching material models and assessment; and (4) producing the teaching materials themselves as the research product, delivered as an interactive learning disc. The method used in this study is developmental research, conducted through thought experiments and instruction experiments. The results showed that the teaching materials could be used very well, based on validation of the computer-based e-learning teaching materials by 5 multimedia experts. All 5 validators gave consistent judgements of the face and content validity of each test item for mathematical understanding ability and spatial sense. The reliability coefficients of the mathematical understanding ability and spatial sense tests are 0.929 and 0.939, respectively, which is very high, while the validity of both tests meets high and very high criteria.
NASA Technical Reports Server (NTRS)
Morris, A. Terry
1999-01-01
This paper examines various sources of error in MIT's improved top oil temperature rise over ambient temperature model and estimation process. The sources of error are the current parameter estimation technique, quantization noise, and post-processing of the transformer data. Results from this paper will show that an output error parameter estimation technique should be selected to replace the current least squares estimation technique. The output error technique obtained accurate predictions of transformer behavior, revealed the best error covariance, obtained consistent parameter estimates, and provided for valid and sensible parameters. This paper will also show that the output error technique should be used to minimize errors attributed to post-processing (decimation) of the transformer data. Models used in this paper are validated using data from a large transformer in service.
NASA Astrophysics Data System (ADS)
Lute, A. C.; Luce, Charles H.
2017-11-01
The related challenges of predictions in ungauged basins and predictions in ungauged climates point to the need to develop environmental models that are transferable across both space and time. Hydrologic modeling has historically focused on modeling one or only a few basins using highly parameterized conceptual or physically based models. However, model parameters and structures have been shown to change significantly when calibrated to new basins or time periods, suggesting that model complexity and model transferability may be antithetical. Empirical space-for-time models provide a framework within which to assess model transferability and any tradeoff with model complexity. Using 497 SNOTEL sites in the western U.S., we develop space-for-time models of April 1 SWE and snow residence time based on mean winter temperature and cumulative winter precipitation. The transferability of the models to new conditions (in both space and time) is assessed using non-random cross-validation tests, with consideration of the influence of model complexity on transferability. As others have noted, the algorithmic empirical models transfer best when minimal extrapolation in the input variables is required. Temporal split-sample validations use pseudoreplicated samples, resulting in the selection of overly complex models, which has implications for the design of hydrologic model validation tests. Finally, we show that low- to moderate-complexity models transfer most successfully to new conditions in space and time, providing empirical confirmation of the parsimony principle.
The Role of Integrated Modeling in the Design and Verification of the James Webb Space Telescope
NASA Technical Reports Server (NTRS)
Mosier, Gary E.; Howard, Joseph M.; Johnston, John D.; Parrish, Keith A.; Hyde, T. Tupper; McGinnis, Mark A.; Bluth, Marcel; Kim, Kevin; Ha, Kong Q.
2004-01-01
The James Webb Space Telescope (JWST) is a large, infrared-optimized space telescope scheduled for launch in 2011. System-level verification of critical optical performance requirements will rely on integrated modeling to a considerable degree; in turn, the accuracy requirements for the models are significant. The size of the lightweight observatory structure, coupled with the need to test at cryogenic temperatures, effectively precludes validation of the models and verification of optical performance with a single test in 1-g. Rather, a complex series of steps is planned by which the components of the end-to-end models are validated at various levels of subassembly, and the ultimate verification of optical performance is by analysis using the assembled models. This paper describes the critical optical performance requirements driving the integrated modeling activity, shows how the error budget is used to allocate and track contributions to total performance, and presents examples of integrated modeling methods and results that support the preliminary observatory design. Finally, the concepts for model validation and the role of integrated modeling in the ultimate verification of the observatory are described.
Fuchs, Eberhard
2005-03-01
Animal models are invaluable in preclinical research on human psychopathology. Valid animal models to study the pathophysiology of depression and specific biological and behavioral responses to antidepressant drug treatments are of prime interest. In order to improve our knowledge of the causal mechanisms of stress-related disorders such as depression, we need animal models that mirror the situation seen in patients. One promising model is the chronic psychosocial stress paradigm in male tree shrews. Coexistence of two males in visual and olfactory contact leads to a stable dominant/subordinate relationship, with the subordinates showing obvious changes in behavioral, neuroendocrine, and central nervous activity that are similar to the signs and symptoms observed during episodes of depression in patients. To discover whether this model, besides its "face validity" for depression, also has "predictive validity," we treated subordinate animals with the tricyclic antidepressant clomipramine and found a time-dependent recovery of both endocrine function and normal behavior. In contrast, the anxiolytic diazepam was ineffective. Chronic psychosocial stress in male tree shrews significantly decreased hippocampal volume and the proliferation rate of the granule precursor cells in the dentate gyrus. These stress-induced changes can be prevented by treating the animals with clomipramine, tianeptine, or the selective neurokinin receptor antagonist L-760,735. In addition to its apparent face and predictive validity, the tree shrew model also has a "molecular validity," because the degradation routes of psychotropic compounds and the gene sequences of receptors are very similar to those in humans. Although further research is required to validate this model fully, it provides an adequate and interesting non-rodent experimental paradigm for preclinical research on depression.
NASA Astrophysics Data System (ADS)
Risnawati; Khairinnisa, S.; Darwis, A. H.
2018-01-01
The purpose of this study was to develop a CORE model-based worksheet with recitation tasks that was valid and practical and could facilitate students' communication skills in a Linear Algebra course. This study was conducted in the mathematics education department of a public university in Riau, Indonesia. Participants were media and subject-matter experts, who acted as validators, as well as students from the mathematics education department. The objects of this study were the students' worksheet and the students' mathematical communication skills. The results showed that: (1) based on the experts' validation, the developed worksheet was valid and could be used with students in Linear Algebra courses; (2) based on the group trials, the practicality percentage was 92.14% in the small group and 90.19% in the large group, so the worksheet was very practical and could engage students in learning; and (3) based on the post-test, the average percentage of ideal attainment was 87.83%. In addition, the results showed that the worksheet was able to facilitate students' mathematical communication skills in the Linear Algebra course.
Using EEG and stimulus context to probe the modelling of auditory-visual speech.
Paris, Tim; Kim, Jeesun; Davis, Chris
2016-02-01
We investigated whether internal models of the relationship between lip movements and corresponding speech sounds [auditory-visual (AV) speech] can be updated through experience. AV associations were indexed by early and late event-related potentials (ERPs) and by oscillatory power and phase locking. Different AV experience was produced via a context manipulation: participants were presented with valid (the conventional pairing) and invalid AV speech items in either a 'reliable' context (80% AV-valid items) or an 'unreliable' context (80% AV-invalid items). The results showed that for the reliable context, there was N1 facilitation for AV compared with auditory-only speech. This N1 facilitation was not affected by AV validity. Later ERPs showed a difference in amplitude between valid and invalid AV speech, and there was significant enhancement of power for valid versus invalid AV speech. These response patterns did not change across the context manipulation, suggesting that the internal models of AV speech were not updated by experience. The results also showed that the facilitation of N1 responses did not vary as a function of the salience of visual speech (as previously reported); in post-hoc analyses, N1 facilitation appeared instead to vary with the relative time of the acoustic onset, suggesting that for AV events the N1 may be more sensitive to AV timing than to form. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.
Vergara-Romero, Manuel; Morales-Asencio, José Miguel; Morales-Fernández, Angelines; Canca-Sanchez, Jose Carlos; Rivas-Ruiz, Francisco; Reinaldo-Lapuerta, Jose Antonio
2017-06-07
Preoperative anxiety is a frequent and challenging problem with deleterious effects on the course of surgical procedures and on postoperative outcomes. To prevent and treat preoperative anxiety effectively, patients' anxiety levels need to be assessed with valid and reliable measurement instruments. One such tool is the Amsterdam Preoperative Anxiety and Information Scale (APAIS), for which a Spanish version had not yet been validated. The aim was to perform a Spanish cultural adaptation and empirical validation of the APAIS for assessing preoperative anxiety in the Spanish population. A two-step forward/back translation of the APAIS was performed to ensure a reliable Spanish cultural adaptation. The final Spanish version of the APAIS questionnaire was administered to 529 patients between the ages of 18 and 70 undergoing elective surgery at hospitals of the Agencia Sanitaria Costa del Sol (Spain). Cronbach's alpha, the homogeneity index, the intra-class correlation coefficient (ICC), and confirmatory factor analysis were used to assess internal consistency and criterion and construct validity. Confirmatory factor analysis showed that a one-factor model fitted better than a two-factor model, with good fit indices (root mean square error of approximation: 0.05; normed fit index: 0.99; goodness-of-fit statistic: 0.99). The questionnaire showed high internal consistency (Cronbach's alpha: 0.84) and a good correlation with the Goldberg Anxiety Scale (ICC: 0.62; 95% CI: 0.55 to 0.68). The Spanish version of the APAIS is a valid and reliable preoperative anxiety measurement tool and shows psychometric properties similar to those obtained in comparable previous studies.
Burns, G Leonard; Walsh, James A; Servera, Mateu; Lorenzo-Seva, Urbano; Cardo, Esther; Rodríguez-Fornells, Antoni
2013-01-01
Exploratory structural equation modeling (SEM) was applied to a multiple indicator (26 individual symptom ratings) by multitrait (ADHD-IN, ADHD-HI and ODD factors) by multiple source (mothers, fathers and teachers) model to test the invariance, convergent and discriminant validity of the Child and Adolescent Disruptive Behavior Inventory with 872 Thai adolescents and the ADHD Rating Scale-IV and ODD scale of the Disruptive Behavior Inventory with 1,749 Spanish children. Most of the individual ADHD/ODD symptoms showed convergent and discriminant validity with the loadings and thresholds being invariant over mothers, fathers and teachers in both samples (the three latent factor means were higher for parents than teachers). The ADHD-IN, ADHD-HI and ODD latent factors demonstrated convergent and discriminant validity between mothers and fathers within the two samples. Convergent and discriminant validity between parents and teachers for the three factors was either absent (Thai sample) or only partial (Spanish sample). The application of exploratory SEM to a multiple indicator by multitrait by multisource model should prove useful for the evaluation of the construct validity of the forthcoming DSM-V ADHD/ODD rating scales.
NASA Astrophysics Data System (ADS)
Darma, I. K.
2018-01-01
This research aimed to determine: (1) the difference in mathematical problem-solving ability between students facilitated with a problem-based learning model and those taught with a conventional learning model; (2) the difference in mathematical problem-solving ability between students assessed with an authentic assessment model and those assessed with a conventional assessment model; and (3) the interaction effect between the learning model and the assessment model on mathematical problem solving. The research was conducted at Bali State Polytechnic, using a 2x2 factorial experimental design with a sample of 110 students. The data were collected using a theoretically and empirically validated test; the instruments were validated using Aiken's content validity approach and item analysis, and the data were then analyzed using ANOVA. The analysis shows that students facilitated with the problem-based learning and authentic assessment models obtained the highest average scores, in both concept understanding and mathematical problem solving. The hypothesis tests show that, significantly: (1) there is a difference in mathematical problem-solving ability between students facilitated with the problem-based learning model and the conventional learning model; (2) there is a difference in mathematical problem-solving ability between students facilitated with the authentic assessment model and the conventional assessment model; and (3) there is an interaction effect between the learning model and the assessment model on mathematical problem solving. To improve the effectiveness of mathematics learning, combining the problem-based learning model with the authentic assessment model can be considered as one classroom approach.
Validating dimensions of psychosis symptomatology: Neural correlates and 20-year outcomes.
Kotov, Roman; Foti, Dan; Li, Kaiqiao; Bromet, Evelyn J; Hajcak, Greg; Ruggero, Camilo J
2016-11-01
Heterogeneity of psychosis presents significant challenges for classification. Between 2 and 12 symptom dimensions have been proposed, and consensus is lacking. The present study sought to identify uniquely informative models by comparing the validity of these alternatives. An epidemiologic cohort of 628 first-admission inpatients with psychosis was interviewed 6 times over 2 decades and completed an electrophysiological assessment of error processing at year 20. We first analyzed a comprehensive set of 49 symptoms rated by interviewers at baseline, progressively extracting from 1 to 12 factors. Next, we compared the ability of the resulting factor solutions to (a) account for concurrent neural dysfunction and (b) predict 20-year role, social, residential, and global functioning, and life satisfaction. A four-factor model showed incremental validity with all outcomes, and more complex models did not improve explanatory power. The four dimensions (reality distortion, disorganization, inexpressivity, and apathy/asociality) were replicable in 5 follow-ups, internally consistent, stable across assessments, and showed strong discriminant validity. These results reaffirm the value of separating disorganization and reality distortion, are consistent with recent findings distinguishing inexpressivity and apathy/asociality, and suggest that these 4 dimensions are fundamental to understanding neural abnormalities and long-term outcomes in psychosis. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Effects of human running cadence and experimental validation of the bouncing ball model
NASA Astrophysics Data System (ADS)
Bencsik, László; Zelei, Ambrus
2017-05-01
The biomechanical analysis of human running is a complex problem because of the large number of parameters and degrees of freedom. However, simplified models can be constructed, usually characterized by a few fundamental parameters such as step length, foot strike pattern and cadence. The bouncing ball model of human running is analysed theoretically and experimentally in this work. It is a minimally complex dynamic model for estimating the energy cost of running and the tendency of ground-foot impact intensity as a function of cadence. The model shows that cadence directly affects the energy efficiency of running and the intensity of ground-foot impact; in particular, higher cadence implies lower risk of injury and better energy efficiency. An experimental data collection from 121 amateur runners is presented. The experimental results validate the model and provide information about the walk-to-run transition speed and the typical development of cadence and grounded-phase ratio in different running speed ranges.
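The cadence effect can be illustrated with a toy ballistic calculation. This is a sketch under assumptions, not the authors' exact model: the flight fraction of 0.3 is an assumed parameter. For a purely ballistic "bouncing ball" step, the vertical landing speed, a proxy for ground-foot impact intensity, falls as cadence rises.

```python
G = 9.81  # gravitational acceleration, m/s^2

def landing_speed(cadence_hz, flight_fraction=0.3):
    """Vertical landing speed for a ballistic 'bouncing ball' step.

    cadence_hz: steps per second; flight_fraction: share of each step
    spent airborne (assumed value). During flight the body is a
    projectile, so the landing speed is g * t_flight / 2.
    """
    t_flight = flight_fraction / cadence_hz
    return G * t_flight / 2.0

# Higher cadence -> shorter flight time -> gentler ground-foot impact.
low = landing_speed(2.5)   # ~150 steps/min
high = landing_speed(3.0)  # ~180 steps/min
```

Since impact intensity scales with landing speed in this toy picture, the monotone decrease with cadence matches the qualitative trend the abstract reports.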
Janssen, Daniël M C; van Kuijk, Sander M J; d'Aumerie, Boudewijn B; Willems, Paul C
2018-05-16
A prediction model for surgical site infection (SSI) after spine surgery was developed in 2014 by Lee et al. This model computes an individual estimate of the probability of SSI after spine surgery based on the patient's comorbidity profile and the invasiveness of surgery. Before any prediction model can be validly implemented in daily medical practice, it should be externally validated to assess how it performs in patients sampled independently from the derivation cohort. We included 898 consecutive patients who underwent instrumented thoracolumbar spine surgery. Overall performance was quantified using Nagelkerke's R² statistic, discriminative ability was quantified as the area under the receiver operating characteristic curve (AUC), and prediction accuracy was judged from the slope of the calibration plot. Sixty patients developed an SSI. The overall performance of the prediction model in our population was poor: Nagelkerke's R² was 0.01, the AUC was 0.61 (95% confidence interval (CI) 0.54-0.68), and the estimated calibration slope was 0.52. The previously published prediction model thus showed poor performance in our academic external validation cohort. To predict SSI after instrumented thoracolumbar spine surgery in the present population, a better-fitting prediction model should be developed.
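For intuition about the discrimination metric used above, the AUC can be computed directly as a rank statistic: the probability that a randomly chosen positive case is scored above a randomly chosen negative case. A minimal Python sketch with illustrative labels and scores (not the study's data):

```python
def auc(labels, scores):
    """AUC as the Mann-Whitney probability that a random positive case
    receives a higher score than a random negative case (ties count 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative data: two infected (1) and two uninfected (0) patients.
a = auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])  # -> 0.75
```

Read this way, the reported AUC of 0.61 means the model ranks an infected patient above an uninfected one only 61% of the time, barely better than chance (0.5).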
Chatterjee, Abhijit; Bhattacharya, Swati
2015-09-21
Several studies in the past have generated Markov State Models (MSMs), i.e., kinetic models, of biomolecular systems by post-analyzing long standard molecular dynamics (MD) calculations at the temperature of interest and focusing on the maximally ergodic subset of states. Questions related to the goodness of these models, namely, the importance of missing states and kinetic pathways, and the time for which the kinetic model is valid, are generally left unanswered. We show that similar questions arise when we generate a room-temperature MSM (denoted MSM-A) for solvated alanine dipeptide using state-constrained MD calculations at higher temperatures and the Arrhenius relation; the main advantage of such a procedure is a speed-up of several thousand times over standard MD-based MSM building procedures. Bounds for rate constants calculated using probability theory from state-constrained MD at room temperature help validate MSM-A. However, bounds for pathways possibly missing in MSM-A show that alternate kinetic models exist that produce the same dynamical behaviour as MSM-A at short time scales but diverge later. Even in the worst-case scenario, MSM-A is found to be valid longer than the time required to generate it. The concepts introduced here can be straightforwardly extended to other MSM building techniques.
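The temperature-extrapolation step can be sketched as follows: fit ln k versus 1/T to high-temperature rate constants, then evaluate the fitted Arrhenius law at room temperature. A minimal Python sketch with synthetic rates; the prefactor and activation energy are assumed values for illustration, not the paper's.

```python
import math

KB = 8.617e-5  # Boltzmann constant, eV/K

def fit_arrhenius(temps, rates):
    """Least-squares fit of ln k = ln A - Ea/(kB*T) over (T, k) pairs."""
    xs = [1.0 / (KB * t) for t in temps]
    ys = [math.log(k) for k in rates]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return math.exp(ybar - slope * xbar), -slope  # (A, Ea)

def rate_at(a, ea, t):
    """Arrhenius rate constant at temperature t."""
    return a * math.exp(-ea / (KB * t))

# Synthetic high-temperature rates (A = 1e12 /s, Ea = 0.4 eV, assumed),
# extrapolated down to room temperature.
A_true, EA_true = 1e12, 0.4
temps = [400.0, 450.0, 500.0]
rates = [rate_at(A_true, EA_true, t) for t in temps]
a_fit, ea_fit = fit_arrhenius(temps, rates)
k_room = rate_at(a_fit, ea_fit, 300.0)
```

The speed-up comes from the barrier term: events that are rare at 300 K occur orders of magnitude more often at the elevated temperatures where the state-constrained MD is run.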
Validation of TGLF in C-Mod and DIII-D using machine learning and integrated modeling tools
NASA Astrophysics Data System (ADS)
Rodriguez-Fernandez, P.; White, Ae; Cao, Nm; Creely, Aj; Greenwald, Mj; Grierson, Ba; Howard, Nt; Meneghini, O.; Petty, Cc; Rice, Je; Sciortino, F.; Yuan, X.
2017-10-01
Predictive models for steady-state and perturbative transport are necessary to support burning plasma operations. A combination of machine learning algorithms and integrated modeling tools is used to validate TGLF in C-Mod and DIII-D. First, a new code suite, VITALS, is used to compare SAT1 and SAT0 models in C-Mod. VITALS exploits machine learning and optimization algorithms for the validation of transport codes. Unlike SAT0, the SAT1 saturation rule contains a model to capture cross-scale turbulence coupling. Results show that SAT1 agrees better with experiments, further confirming that multi-scale effects are needed to model heat transport in C-Mod L-modes. VITALS will next be used to analyze past data from DIII-D: L-mode "Shortfall" plasma and ECH swing experiments. A second code suite, PRIMA, allows for integrated modeling of the plasma response to Laser Blow-Off cold pulses. Preliminary results show that SAT1 qualitatively reproduces the propagation of cold pulses after LBO injections and SAT0 does not, indicating that cross-scale coupling effects play a role in the plasma response. PRIMA will be used to "predict-first" cold pulse experiments using the new LBO system at DIII-D, and analyze existing ECH heat pulse data. Work supported by DE-FC02-99ER54512, DE-FC02-04ER54698.
Bouarfa, Loubna; Atallah, Louis; Kwasnicki, Richard Mark; Pettitt, Claire; Frost, Gary; Yang, Guang-Zhong
2014-02-01
Accurate estimation of daily total energy expenditure (EE) is a prerequisite for assisted weight management and for assessing certain health conditions. The use of wearable sensors for predicting free-living EE is challenged by the need for consistent sensor placement, user compliance, and the estimation methods used. This paper examines whether a single ear-worn accelerometer can be used for EE estimation under free-living conditions. An EE prediction model was first derived and validated in a controlled setting using healthy subjects performing different physical activities. Ten different activities were assessed, showing a tenfold cross-validation error of 0.24. Furthermore, the EE prediction model shows a mean absolute deviation (MAD) below 1.2 metabolic equivalents of task. The same model was then applied to a free-living setting with a different population for further validation, and the results were compared against those derived from doubly labeled water. In free-living settings, the predicted daily EE has a correlation of 0.74, p = 0.008, and a MAD of 272 kcal/day. These results demonstrate that laboratory-derived prediction models can be used to predict EE under free-living conditions [corrected].
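The two free-living validation metrics, correlation and MAD against doubly labeled water, are simple to compute. A minimal sketch with hypothetical daily EE values (not the study's data):

```python
import math

def mad(pred, obs):
    """Mean absolute deviation between predicted and observed values."""
    return sum(abs(p - o) for p, o in zip(pred, obs)) / len(pred)

def pearson(xs, ys):
    """Pearson correlation coefficient between two samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical daily EE (kcal): model predictions vs doubly labeled water.
pred = [2100, 2400, 1900, 2800, 2300]
dlw = [2300, 2250, 2100, 2600, 2500]
error = mad(pred, dlw)   # average absolute error in kcal/day
r = pearson(pred, dlw)   # agreement between the two methods
```

Note that MAD and correlation capture different failure modes: a model can rank subjects correctly (high r) while being systematically biased (large MAD), which is why both are reported.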
Verifying and Validating Proposed Models for FSW Process Optimization
NASA Technical Reports Server (NTRS)
Schneider, Judith
2008-01-01
This slide presentation reviews Friction Stir Welding (FSW) and the attempts to model the process in order to optimize and improve the process. The studies are ongoing to validate and refine the model of metal flow in the FSW process. There are slides showing the conventional FSW process, a couple of weld tool designs and how the design interacts with the metal flow path. The two basic components of the weld tool are shown, along with geometries of the shoulder design. Modeling of the FSW process is reviewed. Other topics include (1) Microstructure features, (2) Flow Streamlines, (3) Steady-state Nature, and (4) Grain Refinement Mechanisms
Measuring striving for understanding and learning value of geometry: a validity study
NASA Astrophysics Data System (ADS)
Ubuz, Behiye; Aydınyer, Yurdagül
2017-11-01
The current study aimed to construct a questionnaire that measures students' personality traits related to striving for understanding and the learning value of geometry, and then to examine its psychometric properties. Through the use of multiple methods on two independent samples of 402 and 521 middle school students, two studies were performed to provide support for its validity. In Study 1, exploratory factor analysis indicated a two-factor model. In Study 2, confirmatory factor analysis indicated a better fit for the two-factor model compared with one- and three-factor models. Convergent and discriminant validity evidence provided insight into the distinctiveness of the two factors. Subgroup validity evidence revealed gender differences for the striving-for-understanding trait favouring girls, and grade-level differences for the learning-value trait favouring the sixth- and seventh-grade students. Predictive validity evidence demonstrated that the striving-for-understanding trait, but not the learning-value trait, was significantly correlated with prior mathematics achievement. In both studies, each factor and the entire questionnaire showed satisfactory reliability. In conclusion, the questionnaire is psychometrically sound.
Verification and validation of a Work Domain Analysis with turing machine task analysis.
Rechard, J; Bignon, A; Berruet, P; Morineau, T
2015-03-01
As the use of Work Domain Analysis as a methodological framework in cognitive engineering increases rapidly, verification and validation of the work domain models produced by this method are becoming a significant issue. In this article, we propose a method based on Turing machine formalism, named "Turing Machine Task Analysis", to verify and validate work domain models. Applying this method to two work domain analyses, one of car driving (an "intentional" domain) and the other of a ship water system (a "causal" domain), showed how improvements needed by these models can be highlighted. More precisely, the step-by-step analysis of a degraded task scenario in each work domain model pointed out unsatisfactory aspects of the first modelling, such as overspecification, underspecification, omission of work domain affordances, or unsuitable inclusion of objects in the work domain model. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.
NASA Astrophysics Data System (ADS)
Berthold, T.; Milbradt, P.; Berkhahn, V.
2018-04-01
This paper presents a model for the approximation of multiple, spatially distributed grain size distributions based on a feedforward neural network. Since a classical feedforward network is not guaranteed to produce valid cumulative distribution functions, a priori information is incorporated into the model by applying weight and architecture constraints. The model is derived in two steps. First, a model is presented that is able to produce a valid distribution function for a single sediment sample; although initially developed for sediment samples, this model is not limited to that application and can also be used to approximate any other multimodal continuous distribution function. In the second step, the network is extended to capture the spatial variation of the sediment samples, which were obtained from 48 locations in the investigation area. Results show that the model provides an adequate approximation of grain size distributions, satisfying the requirements of a cumulative distribution function.
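The weight constraint can be illustrated with a minimal single-hidden-layer sketch; the weights below are hand-picked for illustration, not fitted to sediment data. With nonnegative hidden and output weights, the network output is monotone in the input and bounded in (0, 1), as a cumulative distribution function requires.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def monotone_cdf(x, w, b, c):
    """Single-hidden-layer network constrained to a valid CDF shape.

    With all hidden weights w_j >= 0 and output weights c_j >= 0, the
    output is nondecreasing in x, and the outer sigmoid bounds it in (0, 1).
    """
    hidden = [sigmoid(wj * x + bj) for wj, bj in zip(w, b)]
    return sigmoid(sum(cj * h for cj, h in zip(c, hidden)) - 0.5 * sum(c))

# A bimodal-looking CDF built from two steep sigmoids at different positions.
w = [8.0, 8.0]    # nonnegative input weights (the constraint)
b = [-2.0, -10.0] # hidden biases place the two "modes"
c = [6.0, 6.0]    # nonnegative output weights
xs = [i * 0.05 for i in range(41)]  # grain size axis, arbitrary units
ys = [monotone_cdf(x, w, b, c) for x in xs]
```

In training, the nonnegativity constraint is typically enforced by reparameterizing each weight (e.g. as the square or softplus of a free parameter) so that gradient descent cannot violate it.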
Luo, Chuan; Li, Zhaofu; Li, Hengpeng; Chen, Xiaomin
2015-01-01
The application of hydrological and water quality models is an efficient approach to better understand the processes of environmental deterioration. This study evaluated the ability of the Annualized Agricultural Non-Point Source (AnnAGNPS) model to predict runoff, total nitrogen (TN) and total phosphorus (TP) loading in a typical small watershed of a hilly region near Taihu Lake, China. Runoff was calibrated and validated at both annual and monthly scales, and parameter sensitivity analysis was performed for TN and TP before the two water quality components were calibrated. The results showed that the model simulated runoff satisfactorily at annual and monthly scales, in both the calibration and validation periods. The sensitivity analysis showed that the parameters Fertilizer rate, Fertilizer organic, Canopy cover and Fertilizer inorganic were the most sensitive for TN output, while for TP the most sensitive parameters were Residue mass ratio, Fertilizer rate, Fertilizer inorganic and Canopy cover. Calibration based on these sensitive parameters produced satisfactory TN loading results in both the calibration and validation periods, whereas the performance for TP loading was somewhat poorer. Overall, the simulation results showed that AnnAGNPS has the potential to be a valuable tool for the planning and management of watersheds. PMID:26364642
Objective validation of central sensitization in the rat UVB and heat rekindling model
Weerasinghe, NS; Lumb, BM; Apps, R; Koutsikou, S; Murrell, JC
2014-01-01
Background The UVB and heat rekindling (UVB/HR) model shows potential as a translatable inflammatory pain model. However, the occurrence of central sensitization in this model, a fundamental mechanism underlying chronic pain, has been debated. Face, construct and predictive validity are key requisites of animal models; electromyogram (EMG) recordings were utilized to objectively demonstrate validity of the rat UVB/HR model. Methods The UVB/HR model was induced on the heel of the hind paw under anaesthesia. Mechanical withdrawal thresholds (MWTs) were obtained from biceps femoris EMG responses to a gradually increasing pinch at the mid hind paw region under alfaxalone anaesthesia, 96 h after UVB irradiation. MWT was compared between UVB/HR and SHAM-treated rats (anaesthetic only). Underlying central mechanisms in the model were pharmacologically validated by MWT measurement following intrathecal N-methyl-d-aspartate (NMDA) receptor antagonist, MK-801, or saline. Results Secondary hyperalgesia was confirmed by a significantly lower pre-drug MWT {mean [±standard error of the mean (SEM)]} in UVB/HR [56.3 (±2.1) g/mm2, n = 15] compared with SHAM-treated rats [69.3 (±2.9) g/mm2, n = 8], confirming face validity of the model. Predictive validity was demonstrated by the attenuation of secondary hyperalgesia by MK-801, where mean (±SEM) MWT was significantly higher [77.2 (±5.9) g/mm2 n = 7] in comparison with pre-drug [57.8 (±3.5) g/mm2 n = 7] and saline [57.0 (±3.2) g/mm2 n = 8] at peak drug effect. The occurrence of central sensitization confirmed construct validity of the UVB/HR model. Conclusions This study used objective outcome measures of secondary hyperalgesia to validate the rat UVB/HR model as a translational model of inflammatory pain. What's already known about this topic? Most current animal chronic pain models lack translatability to human subjects. 
Primary hyperalgesia is an established feature of the UVB/heat rekindling inflammatory pain model in rodents and humans, but the presence of secondary hyperalgesia, a hallmark feature of central sensitization and thus chronic pain, is contentious. What does this study add? Secondary hyperalgesia was demonstrated in the rat UVB/heat rekindling model using an objective outcome measure (electromyogram), overcoming the subjective limitations of previous behavioural studies. PMID:24590815
Multi-body modeling method for rollover using MADYMO
NASA Astrophysics Data System (ADS)
Liu, Changye; Lin, Zhigui; Lv, Juncheng; Luo, Qinyue; Qin, Zhenyao; Zhang, Pu; Chen, Tao
2017-04-01
Rollovers are complex road accidents causing a great number of fatalities. An FE model for rollover study costs too much computation time because of the long duration of the event. A new multi-body modeling method, which saves substantial time while retaining high fidelity, is proposed in this paper. The following work was carried out to validate the new method. First, a small van was tested following the FMVSS 208 protocol for the validation of the proposed modeling method. Second, a MADYMO model of this small van was reconstructed. The vehicle body was divided into two main parts, the deformable upper body and the rigid lower body, modeled in different ways based on an FE model. The specific modeling method is offered in this paper. Finally, the trajectories of the vehicle from test and simulation were compared, and the match was very good. The acceleration of the left B-pillar was also considered, and it fit the test result well over the duration of the event. The final deformation of the vehicle in test and simulation showed a similar trend. This validated model provides a reliable basis for further research into occupant injuries during rollovers.
NASA Astrophysics Data System (ADS)
Pradhan, Biswajeet
2013-02-01
The purpose of the present study is to compare the prediction performance of three different approaches, decision tree (DT), support vector machine (SVM) and adaptive neuro-fuzzy inference system (ANFIS), for landslide susceptibility mapping in the Penang Hill area, Malaysia. The necessary input parameters for the landslide susceptibility assessments were obtained from various sources. At first, landslide locations were identified from aerial photographs and field surveys, and a total of 113 landslide locations were mapped. The study area contains 340,608 pixels, of which 8403 pixels contain landslides. The landslide inventory was randomly partitioned into two subsets: (1) part 1, containing 50% (4000 landslide grid cells), was used in the training phase of the models; (2) part 2, the other 50% (4000 landslide grid cells), was used to validate the three models and confirm their accuracy. The digitally processed images of the input parameters were combined in GIS. Finally, landslide susceptibility maps were produced, and their performances were assessed and discussed. A total of fifteen landslide susceptibility maps were produced using the DT, SVM and ANFIS based models, and the resultant maps were validated using the landslide locations. The prediction performance of these maps was checked by receiver operating characteristics (ROC), using both success rate and prediction rate curves. The validation results showed that the area under the ROC curve for the fifteen models produced using DT, SVM and ANFIS varied from 0.8204 to 0.9421 for the success rate curves and from 0.7580 to 0.8307 for the prediction rate curves. Moreover, the prediction rate curves revealed that model 5 of DT has a slightly higher prediction performance (83.07%), whereas the success rate showed that model 5 of ANFIS has the best prediction capability (94.21%) among all models.
The results of this study showed that landslide susceptibility mapping in the Penang Hill area using the three approaches (i.e., DT, SVM and ANFIS) is viable. As far as the performance of the models is concerned, the results appeared to be quite satisfactory, i.e., the zones determined on the map being zones of relative susceptibility.
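The prediction-rate validation described above reduces to computing the area under the ROC curve from susceptibility scores at landslide and non-landslide pixels. As an illustration only (the scores below are invented, not the study's data), the rank-based Mann-Whitney formulation of AUC can be sketched as:

```python
# Minimal sketch (not the authors' code): AUC of a susceptibility map,
# computed as P(score at a random landslide pixel > score at a random
# stable pixel), counting ties as 1/2 (Mann-Whitney formulation).

def auc(pos_scores, neg_scores):
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical susceptibility scores, for illustration only.
landslide_px = [0.9, 0.8, 0.75, 0.6]   # pixels with observed landslides
stable_px = [0.7, 0.4, 0.3, 0.2]       # pixels without landslides
print(round(auc(landslide_px, stable_px), 4))  # → 0.9375
```

An AUC of 0.5 would mean the map ranks landslide pixels no better than chance; values approaching 1.0 correspond to the 0.82-0.94 range reported above.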
Panken, Guus; Verhagen, Arianne P; Terwee, Caroline B; Heymans, Martijn W
2017-08-01
Study Design Systematic review and validation study. Background Many prognostic models of knee pain outcomes have been developed for use in primary care. Variability among published studies with regard to patient population, outcome measures, and relevant prognostic factors hampers the generalizability and implementation of these models. Objectives To summarize existing prognostic models in patients with knee pain in a primary care setting and to develop and internally validate new summary prognostic models. Methods After a sensitive search strategy, 2 reviewers independently selected prognostic models for patients with nontraumatic knee pain and assessed the methodological quality of the included studies. All predictors of the included studies were evaluated, summarized, and classified. The predictors assessed in multiple studies of sufficient quality are presented in this review. Using data from the Musculoskeletal System Study (BAS) cohort of patients with a new episode of knee pain, recruited consecutively by Dutch general medical practitioners (n = 372), we used predictors with a strong level of evidence to develop new prognostic models for each outcome measure and internally validated these models. Results Sixteen studies were eligible for inclusion. We considered 11 studies to be of sufficient quality. None of these studies validated their models. Five predictors with strong evidence were related to function and 6 to recovery, and were used to compose 2 prognostic models for patients with knee pain at 1 year. Running these new models in another data set showed explained variances (R²) of 0.36 (function) and 0.33 (recovery). The area under the curve of the recovery model was 0.79. After internal validation, the adjusted R² values of the models were 0.30 (function) and 0.20 (recovery), and the area under the curve was 0.73.
Conclusion We developed 2 valid prognostic models for function and recovery for patients with nontraumatic knee pain, based on predictors with strong evidence. A longer duration of complaints predicted poorer function but did not adequately predict chance of recovery. Level of Evidence Prognosis, levels 1a and 1b. J Orthop Sports Phys Ther 2017;47(8):518-529. Epub 16 Jun 2017. doi:10.2519/jospt.2017.7142.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Yuanyuan; Diao, Ruisheng; Huang, Renke
Maintaining good quality of power plant stability models is of critical importance to ensure the secure and economic operation and planning of today's power grid, with its increasingly stochastic and dynamic behavior. According to North American Electric Reliability Corporation (NERC) standards, all generators in North America with capacities larger than 10 MVA are required to validate their models every five years. Validation is quite costly and can significantly affect the revenue of generator owners, because the traditional staged testing requires generators to be taken offline. Over the past few years, validating and calibrating parameters using online measurements, including phasor measurement units (PMUs) and digital fault recorders (DFRs), has been proven to be a cost-effective approach. In this paper, an innovative open-source tool suite is presented for validating power plant models using the PPMV tool, identifying bad parameters with trajectory sensitivity analysis, and finally calibrating parameters using an ensemble Kalman filter (EnKF) based algorithm. The architectural design and the detailed procedures to run the tool suite are presented, with results of tests on a realistic hydro power plant using PMU measurements for 12 different events. The calibrated parameters of the machine, exciter, governor and PSS models demonstrate much better performance than the original models for all the events and show the robustness of the proposed calibration algorithm.
Morettini, Micaela; Faelli, Emanuela; Perasso, Luisa; Fioretti, Sandro; Burattini, Laura; Ruggeri, Piero; Di Nardo, Francesco
2017-01-01
For the assessment of glucose tolerance from IVGTT data in the Zucker rat, the minimal model methodology is reliable but time-consuming and costly. This study aimed to validate, for the first time in the Zucker rat, simple surrogate indexes of insulin sensitivity and secretion against the glucose minimal model insulin sensitivity index (SI) and against the first-phase (Φ1) and second-phase (Φ2) β-cell responsiveness indexes provided by the C-peptide minimal model. Validation of the surrogate insulin sensitivity index (ISI) and of two sets of coupled insulin-based indexes for insulin secretion, differing in the cut-off point between phases (FPIR3-SPIR3, t = 3 min and FPIR5-SPIR5, t = 5 min), was carried out in a population of ten Zucker fatty rats (ZFR) and ten Zucker lean rats (ZLR). Considering the whole rat population (ZLR+ZFR), ISI showed a significant strong correlation with SI (Spearman's correlation coefficient, r = 0.88; P<0.001). Both FPIR3 and FPIR5 showed a significant (P<0.001) strong correlation with Φ1 (r = 0.76 and r = 0.75, respectively). Both SPIR3 and SPIR5 showed a significant (P<0.001) strong correlation with Φ2 (r = 0.85 and r = 0.83, respectively). ISI is able to detect (P<0.001) the well-recognized reduction in insulin sensitivity in ZFRs compared with ZLRs. The insulin-based indexes of insulin secretion are able to detect in ZFRs (P<0.001) the compensatory increase of first- and second-phase secretion associated with the insulin-resistant state. The ability of the surrogate indexes to describe glucose tolerance in the ZFRs was confirmed by the Disposition Index analysis. The model-based validation performed in the present study supports the use of low-cost, insulin-based indexes for the assessment of glucose tolerance in the Zucker rat, a reliable animal model of the human metabolic syndrome.
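The validation above rests on Spearman rank correlation between each surrogate index and its model-based reference. A minimal stdlib sketch of that computation (the index values below are invented for illustration, not the study's data):

```python
# Illustrative sketch (not the study's code): Spearman rank correlation,
# as used to validate a surrogate index (e.g., ISI) against a
# model-based reference (e.g., SI). Pure stdlib; data are invented.

def rankdata(xs):
    """Average 1-based ranks; tied values share the mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Pearson correlation computed on the ranks of x and y."""
    rx, ry = rankdata(x), rankdata(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

surrogate = [1.2, 0.8, 2.5, 0.5, 1.9]   # hypothetical ISI values
reference = [3.1, 2.0, 6.0, 1.1, 4.8]   # hypothetical SI values
print(round(spearman(surrogate, reference), 3))  # → 1.0 (same ordering)
```

Because Spearman's coefficient depends only on rank order, a surrogate index need not match the reference numerically, only order the animals the same way, which is what makes a cheap index a usable stand-in.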
Model identification using stochastic differential equation grey-box models in diabetes.
Duun-Henriksen, Anne Katrine; Schmidt, Signe; Røge, Rikke Meldgaard; Møller, Jonas Bech; Nørgaard, Kirsten; Jørgensen, John Bagterp; Madsen, Henrik
2013-03-01
The acceptance of virtual preclinical testing of control algorithms is growing and thus also the need for robust and reliable models. Models based on ordinary differential equations (ODEs) can rarely be validated with standard statistical tools. Stochastic differential equations (SDEs) offer the possibility of building models that can be validated statistically and that are capable of predicting not only a realistic trajectory, but also the uncertainty of the prediction. In an SDE, the prediction error is split into two noise terms. This separation ensures that the errors are uncorrelated and provides the possibility to pinpoint model deficiencies. An identifiable model of the glucoregulatory system in a type 1 diabetes mellitus (T1DM) patient is used as the basis for development of a stochastic-differential-equation-based grey-box model (SDE-GB). The parameters are estimated on clinical data from four T1DM patients. The optimal SDE-GB is determined from likelihood-ratio tests. Finally, parameter tracking is used to track the variation in the "time to peak of meal response" parameter. We found that the transformation of the ODE model into an SDE-GB resulted in a significant improvement in the prediction and uncorrelated errors. Tracking of the "peak time of meal absorption" parameter showed that the absorption rate varied according to meal type. This study shows the potential of using SDE-GBs in diabetes modeling. Improved model predictions were obtained due to the separation of the prediction error. SDE-GBs offer a solid framework for using statistical tools for model validation and model development. © 2013 Diabetes Technology Society.
de Vroege, Lars; Emons, Wilco H M; Sijtsma, Klaas; van der Feltz-Cornelis, Christina M
2018-01-01
The Bermond-Vorst Alexithymia Questionnaire (BVAQ) has been validated in student samples and small clinical samples, but not in the general population; thus, representative general-population norms are lacking. We examined the factor structure of the BVAQ in Longitudinal Internet Studies for the Social Sciences panel data from the Dutch general population (N = 974). Factor analyses revealed a first-order five-factor model and a second-order two-factor model. However, in the second-order model, the factor interpreted as analyzing ability loaded on both the affective factor and the cognitive factor. Further analyses showed that the first-order test scores are more reliable than the second-order test scores. External and construct validity were addressed by comparing BVAQ scores with those of a clinical sample of patients suffering from somatic symptom and related disorder (SSRD) (N = 235). BVAQ scores differed significantly between the general population and patients suffering from SSRD, suggesting acceptable construct validity. Age was positively associated with alexithymia, and males showed higher levels of alexithymia. The BVAQ is a reliable alternative instrument for measuring alexithymia.
Alves, Vinicius M.; Muratov, Eugene; Fourches, Denis; Strickland, Judy; Kleinstreuer, Nicole; Andrade, Carolina H.; Tropsha, Alexander
2015-01-01
Repetitive exposure to a chemical agent can induce an immune reaction in inherently susceptible individuals that leads to skin sensitization. Although many chemicals have been reported as skin sensitizers, there have been very few rigorously validated QSAR models with defined applicability domains (AD) developed using large groups of chemically diverse compounds. In this study, we aimed to compile, curate, and integrate the largest publicly available dataset related to chemically-induced skin sensitization, to use these data to generate rigorously validated QSAR models for skin sensitization, and to employ these models as a virtual screening tool for identifying putative sensitizers among environmental chemicals. We followed best practices for model building and validation, implemented with our predictive QSAR workflow using the random forest modeling technique in combination with SiRMS and Dragon descriptors. The Correct Classification Rate (CCR) of QSAR models discriminating sensitizers from non-sensitizers was 71–88% when evaluated on several external validation sets, within a broad AD, with positive (for sensitizers) and negative (for non-sensitizers) predicted rates of 85% and 79%, respectively. When compared to the skin sensitization module included in the OECD QSAR Toolbox as well as to the skin sensitization model in the publicly available VEGA software, our models showed a significantly higher prediction accuracy for the same sets of external compounds, as evaluated by Positive Predicted Rate, Negative Predicted Rate, and CCR. These models were applied to identify putative chemical hazards in the ScoreCard database of possible skin or sense organ toxicants as primary candidates for experimental validation. PMID:25560674
A Finite Element Model of a Midsize Male for Simulating Pedestrian Accidents.
Untaroiu, Costin D; Pak, Wansoo; Meng, Yunzhu; Schap, Jeremy; Koya, Bharath; Gayzik, Scott
2018-01-01
Pedestrians represent one of the most vulnerable road users and comprise nearly 22% of the road crash-related fatalities in the world. Therefore, protection of pedestrians in car-to-pedestrian collisions (CPC) has recently generated increased attention, with regulations involving three subsystem tests. The development of a finite element (FE) pedestrian model could provide a complementary component that characterizes the whole-body response of vehicle-pedestrian interactions and assesses pedestrian injuries. The main goal of this study was to develop and validate a simplified full-body FE model corresponding to a 50th percentile male pedestrian in a standing posture (M50-PS). The FE model mesh and defined material properties are based on a 50th percentile male occupant model. The lower limb-pelvis and lumbar spine regions of the human model were validated against postmortem human surrogate (PMHS) test data recorded in four-point lateral knee bending tests; pelvic, abdominal, shoulder and thoracic impact tests; and lumbar spine bending tests. Then, a pedestrian-to-vehicle impact simulation was performed using the whole pedestrian model, and the results were compared to corresponding PMHS tests. Overall, the simulation results showed that the lower leg response is mostly within the boundaries of the PMHS corridors. In addition, the model shows the capability to predict the most common lower extremity injuries observed in pedestrian accidents. Generally, the validated pedestrian model may be used by safety researchers in the design of the front ends of new vehicles in order to increase pedestrian protection.
NASA Astrophysics Data System (ADS)
Song, S. G.
2016-12-01
Simulation-based ground motion prediction approaches have several benefits over empirical ground motion prediction equations (GMPEs). For instance, full 3-component waveforms can be produced, and site-specific hazard analysis is also possible. However, it is important to validate them against observed ground motion data to confirm their efficiency and validity before practical use. There have been community efforts for these purposes, supported by the Broadband Platform (BBP) project at the Southern California Earthquake Center (SCEC). In simulation-based ground motion prediction approaches, preparing a plausible range of scenario rupture models is a critical element. I developed a pseudo-dynamic source model for Mw 6.5-7.0 by analyzing a number of dynamic rupture models, based on 1-point and 2-point statistics of earthquake source parameters (Song et al. 2014; Song 2016). In this study, the developed pseudo-dynamic source models were tested against observed ground motion data on the SCEC BBP, Ver 16.5. The validation was performed in two stages. In the first stage, simulated ground motions were validated against observed ground motion data for past events such as the 1992 Landers and 1994 Northridge, California, earthquakes. In the second stage, they were validated against the latest generation of empirical GMPEs, i.e., NGA-West2. The validation results show that the simulated ground motions produce ground motion intensities compatible with observed ground motion data at both stages. The compatibility of the pseudo-dynamic source models with the omega-square spectral decay and the standard deviation of the simulated ground motion intensities are also discussed in the study.
Quantitative validation of an air-coupled ultrasonic probe model by Interferometric laser tomography
NASA Astrophysics Data System (ADS)
Revel, G. M.; Pandarese, G.; Cavuto, A.
2012-06-01
The present paper describes the quantitative validation of a finite element (FE) model of the ultrasound beam generated by an air-coupled non-contact ultrasound transducer. The model boundary conditions are given by vibration velocities measured by laser vibrometry on the probe membrane. The proposed validation method is based on the comparison between the simulated 3D pressure field and the pressure data measured with the interferometric laser tomography technique. The model details and the experimental techniques are described in the paper. The analysis of the results shows the effectiveness of the proposed approach and the possibility to quantitatively assess and predict the generated acoustic pressure field, with maximum discrepancies on the order of 20% due to uncertainty effects. This step is important for determining the real applicability of air-coupled probes in complex problems and for simulating the whole inspection procedure, even while the component is still being designed, so as to virtually verify its inspectability.
Investigation of the Thermomechanical Response of Shape Memory Alloy Hybrid Composite Beams
NASA Technical Reports Server (NTRS)
Davis, Brian A.
2005-01-01
Previous work at NASA Langley Research Center (LaRC) involved fabrication and testing of composite beams with embedded, pre-strained shape memory alloy (SMA) ribbons. That study also provided comparison of experimental results with numerical predictions from a research code making use of a new thermoelastic model for shape memory alloy hybrid composite (SMAHC) structures. The previous work showed qualitative validation of the numerical model. However, deficiencies in the experimental-numerical correlation were noted and hypotheses for the discrepancies were given for further investigation. The goal of this work is to refine the experimental measurement and numerical modeling approaches in order to better understand the discrepancies, improve the correlation between prediction and measurement, and provide rigorous quantitative validation of the numerical model. Thermal buckling, post-buckling, and random responses to thermal and inertial (base acceleration) loads are studied. Excellent agreement is achieved between the predicted and measured results, thereby quantitatively validating the numerical tool.
Kim, Dong Hee; Im, Yeo Jin
2013-02-01
To develop and test the validity and reliability of the Korean version of the Family Management Measure (Korean FaMM) and to assess its applicability for families with children having chronic illnesses. The Korean FaMM was articulated through forward-backward translation methods. Internal consistency reliability and construct and criterion validity were calculated using PASW WIN (19.0) and AMOS (20.0). Survey data were collected from 341 mothers of children suffering from chronic disease enrolled in a university hospital in Seoul, South Korea. The Korean version of the FaMM showed reliable internal consistency, with Cronbach's alpha ranging from .69 to .91. Factor loadings of the 53 items on the six sub-scales ranged from 0.28 to 0.84. The six-subscale model of the Korean FaMM was validated by exploratory and confirmatory factor analysis (χ²<.001, RMR<.05, GFI, AGFI, NFI, NNFI>.08). Criterion validity compared to the Parental Stress Index (PSI) showed significant correlation. The findings of this study demonstrate that the Korean FaMM has satisfactory construct and criterion validity and reliability. It is useful for measuring Korean families' management styles with their children who have a chronic illness.
Rubio-Álvarez, Ana; Molina-Alarcón, Milagros; Arias-Arias, Ángel; Hernández-Martínez, Antonio
2018-03-01
Postpartum haemorrhage is one of the leading causes of maternal morbidity and mortality worldwide. Despite the use of uterotonic agents as a preventive measure, it remains a challenge to identify those women who are at increased risk of postpartum bleeding. Objective: to develop and validate a predictive model to assess the risk of excessive bleeding in women with vaginal birth. Design: retrospective cohort study. Setting: "Mancha-Centro Hospital" (Spain). Participants: the elaboration of the predictive model was based on a derivation cohort of 2336 women between 2009 and 2011. For validation purposes, a prospective cohort of 953 women between 2013 and 2014 was employed. Women with antenatal fetal demise, multiple pregnancies and gestations under 35 weeks were excluded. Methods: we used multivariate analysis with binary logistic regression, Ridge regression and areas under the Receiver Operating Characteristic curves to determine the predictive ability of the proposed model. Findings: there were 197 (8.43%) women with excessive bleeding in the derivation cohort and 63 (6.61%) women in the validation cohort. Predictive factors in the final model were: maternal age, primiparity, duration of the first and second stages of labour, neonatal birth weight and antepartum haemoglobin levels. The predictive ability of this model in the derivation cohort was 0.90 (95% CI: 0.85-0.93), while it remained 0.83 (95% CI: 0.74-0.92) in the validation cohort. Conclusions: this predictive model showed excellent predictive ability in the derivation cohort, and its validation in a later population equally showed good predictive ability. This model can be employed to identify women at higher risk of postpartum haemorrhage. Copyright © 2017 Elsevier Ltd. All rights reserved.
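A binary-logistic model like the one described turns each woman's predictor values into a risk probability via the logistic link. The sketch below illustrates the mechanics only; the coefficients are invented placeholders, not the fitted values reported by the authors:

```python
# Hedged sketch of how a fitted logistic model yields an individual
# risk score. All coefficients are HYPOTHETICAL, chosen only to show
# the direction of effects described in the abstract (e.g., higher
# haemoglobin lowers risk); they are NOT the study's fitted values.
import math

def haemorrhage_risk(age, primiparous, first_stage_h, second_stage_h,
                     birth_weight_kg, haemoglobin):
    # Linear predictor: intercept + sum of beta_i * x_i
    z = (-9.0
         + 0.04 * age
         + 0.60 * (1 if primiparous else 0)
         + 0.10 * first_stage_h
         + 0.50 * second_stage_h
         + 0.80 * birth_weight_kg
         - 0.20 * haemoglobin)
    return 1.0 / (1.0 + math.exp(-z))  # logistic link -> probability

risk = haemorrhage_risk(age=32, primiparous=True, first_stage_h=8,
                        second_stage_h=1.5, birth_weight_kg=3.6,
                        haemoglobin=12.0)
print(f"{risk:.3f}")
```

Applying such a function to every woman in the validation cohort and ranking predicted risks against observed bleeding is exactly what the reported AUC of 0.83 summarizes.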
Fluid-structure interaction with the entropic lattice Boltzmann method
NASA Astrophysics Data System (ADS)
Dorschner, B.; Chikatamarla, S. S.; Karlin, I. V.
2018-02-01
We propose a fluid-structure interaction (FSI) scheme using the entropic multi-relaxation time lattice Boltzmann (KBC) model for the fluid domain in combination with a nonlinear finite element solver for the structural part. We show the validity of the proposed scheme for various challenging setups by comparison to literature data. Beyond validation, we extend the KBC model to multiphase flows and couple it with a finite element method (FEM) solver. Robustness and viability of the entropic multi-relaxation time model for complex FSI applications is shown by simulations of droplet impact on elastic superhydrophobic surfaces.
Simulating the evolution of glyphosate resistance in grains farming in northern Australia.
Thornby, David F; Walker, Steve R
2009-09-01
The evolution of resistance to herbicides is a substantial problem in contemporary agriculture. Solutions to this problem generally consist of practices to control the resistant population once it evolves and/or preventative measures instituted before populations become resistant. Herbicide resistance evolves in populations over years or decades, so predicting the effectiveness of preventative strategies in particular relies on computational modelling approaches. While models of herbicide resistance already exist, none deals with the complex regional variability of the northern Australian sub-tropical grains farming region. For this reason, a new computer model was developed. The model consists of an age- and stage-structured population model of weeds, with an existing crop model used to simulate plant growth and competition, and extensions to the crop model added to simulate seed bank ecology and population genetics factors. Using awnless barnyard grass (Echinochloa colona) as a test case, the model was used to investigate the likely rate of evolution under conditions expected to produce high selection pressure. Simulating continuous summer fallows with glyphosate used as the only means of weed control resulted in predicted resistant weed populations after approximately 15 years. Validation of the model against the paddock history for the first real-world glyphosate-resistant awnless barnyard grass population showed that the model predicted resistance evolution to within a few years of the real situation. This validation work shows that empirical validation of herbicide resistance models is problematic. However, the model simulates the complexities of sub-tropical grains farming in Australia well, and can be used to investigate, generate and improve glyphosate resistance prevention strategies.
Navidpour, Fariba; Dolatian, Mahrokh; Yaghmaei, Farideh; Majd, Hamid Alavi; Hashemi, Seyed Saeed
2015-01-01
Background and Objectives: Pregnant women tend to experience anxiety and stress when faced with changes to their biology, environment and personal relationships. The identification of these factors and the prevention of their side effects are vital for both mother and fetus. The present study was conducted to validate and examine the factor structure of the Persian version of the Pregnancy's Worries and Stress Questionnaire (PWSQ). Materials and Methods: The 25-item PWSQ was first translated into Persian by specialists. The questionnaire's validity was determined using face, content, criterion and construct validity, and its reliability was examined using Cronbach's alpha. Confirmatory factor analysis was performed in AMOS and SPSS 21. Participants included healthy Iranian pregnant women (8-39 weeks) attending selected hospitals for prenatal care. Hospitals included private, social security and university hospitals and were selected through the random cluster sampling method. Findings: The results of the validity and reliability assessments of the questionnaire were acceptable. The calculated Cronbach's alpha of 0.89 showed high internal consistency. The confirmatory factor analysis, using the χ², CMIN/DF, IFI, CFI, NFI and NNFI indexes, showed the 6-factor model to be the best-fitting model for explaining the data. Conclusion: The questionnaire was translated into Persian to examine stress and worry specific to Iranian pregnant women. The psychometric results showed that the questionnaire is suitable for identifying Iranian pregnant women with pregnancy-related stress. PMID:26153186
Apostol, Izydor; Kelner, Drew; Jiang, Xinzhao Grace; Huang, Gang; Wypych, Jette; Zhang, Xin; Gastwirt, Jessica; Chen, Kenneth; Fodor, Szilan; Hapuarachchi, Suminda; Meriage, Dave; Ye, Frank; Poppe, Leszek; Szpankowski, Wojciech
2012-12-01
To predict precision and other performance characteristics of chromatographic purity methods, which represent the most widely used form of analysis in the biopharmaceutical industry, we conducted a comprehensive survey of purity methods and show that all performance characteristics fall within narrow measurement ranges. This observation was used to develop a model called Uncertainty Based on Current Information (UBCI), which expresses these performance characteristics as a function of the signal and noise levels, hardware specifications, and software settings. We applied the UBCI model to assess the uncertainty of purity measurements and compared the results to those from conventional qualification. We demonstrated that the UBCI model is suitable for dynamically assessing method performance characteristics based on information extracted from individual chromatograms. The model provides an opportunity for streamlining qualification and validation studies by implementing a "live validation" of test results, utilizing UBCI as a concurrent assessment of measurement uncertainty. Therefore, UBCI can potentially mitigate the challenges associated with laborious conventional method validation and facilitate the introduction of more advanced analytical technologies during the method lifecycle.
Validating a two-high-threshold measurement model for confidence rating data in recognition.
Bröder, Arndt; Kellen, David; Schütz, Julia; Rohrmeier, Constanze
2013-01-01
Signal Detection models as well as the Two-High-Threshold model (2HTM) have been used successfully as measurement models in recognition tasks to disentangle memory performance and response biases. A popular method in recognition memory is to elicit confidence judgements about the presumed old/new status of an item, allowing for the easy construction of ROCs. Since the 2HTM assumes fewer latent memory states than response options are available in confidence ratings, the 2HTM has to be extended by a mapping function which models individual rating scale usage. Unpublished data from 2 experiments in Bröder and Schütz (2009) validate the core memory parameters of the model, and 3 new experiments show that the response mapping parameters are selectively affected by manipulations intended to affect rating scale use, and this is independent of overall old/new bias. Comparisons with SDT show that both models behave similarly, a case that highlights the notion that both modelling approaches can be valuable (and complementary) elements in a researcher's toolbox.
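The ROC construction that underlies such confidence-rating analyses can be sketched simply: cumulate response frequencies from the most confident "old" rating downward, separately for studied items and lures, to obtain one (false-alarm, hit) point per criterion. The counts below are invented for illustration, not data from the experiments:

```python
# Hedged sketch (not the authors' code): building an empirical ROC
# from confidence-rating counts, as is standard in recognition memory.
# Each successively laxer criterion adds its responses to the running
# hit and false-alarm totals, yielding one ROC point per rating.

def roc_points(old_counts, new_counts):
    """Counts are ordered from most-confident-'old' to most-confident-'new'."""
    n_old, n_new = sum(old_counts), sum(new_counts)
    hits = fas = 0
    points = []
    for o, n in zip(old_counts, new_counts):
        hits += o   # responses to studied (old) items at this rating
        fas += n    # responses to lures (new items) at this rating
        points.append((fas / n_new, hits / n_old))
    return points

# Invented 6-point-scale data: "sure old" ... "sure new".
old_items = [40, 25, 10, 10, 10, 5]   # studied items
new_items = [5, 10, 10, 15, 25, 35]   # lures
for fa, hit in roc_points(old_items, new_items):
    print(f"({fa:.2f}, {hit:.2f})")
```

The shape of the resulting curve (curvilinear vs. linear with a y-intercept) is what distinguishes the SDT and 2HTM accounts, which is why the mapping from latent states to rating categories matters for the 2HTM.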
Analytical Performance Modeling and Validation of Intel’s Xeon Phi Architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chunduri, Sudheer; Balaprakash, Prasanna; Morozov, Vitali
Modeling the performance of scientific applications on emerging hardware plays a central role in achieving extreme-scale computing goals. Analytical models that capture the interaction between applications and hardware characteristics are attractive because even a reasonably accurate model can be useful for performance tuning before the hardware is made available. In this paper, we develop a hardware model for Intel’s second-generation Xeon Phi architecture code-named Knights Landing (KNL) for the SKOPE framework. We validate the KNL hardware model by projecting the performance of mini-benchmarks and application kernels. The results show that our KNL model can project the performance with prediction errors of 10% to 20%. The hardware model also provides informative recommendations for code transformations and tuning.
SPR Hydrostatic Column Model Verification and Validation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bettin, Giorgia; Lord, David; Rudeen, David Keith
2015-10-01
A Hydrostatic Column Model (HCM) was developed to help differentiate between normal "tight" well behavior and small-leak behavior under nitrogen for testing the pressure integrity of crude oil storage wells at the U.S. Strategic Petroleum Reserve. This effort was motivated by steady, yet distinct, pressure behavior of a series of Big Hill caverns that have been placed under nitrogen for an extended period of time. This report describes the HCM model, its functional requirements, the model structure and the verification and validation process. Different modes of operation are also described, which illustrate how the software can be used to model extended nitrogen monitoring and Mechanical Integrity Tests by predicting wellhead pressures along with nitrogen interface movements. Model verification has shown that the program runs correctly and that it is implemented as intended. The cavern BH101 long-term nitrogen test was used to validate the model, which showed very good agreement with measured data. This supports the claim that the model is, in fact, capturing the relevant physical phenomena and can be used to make accurate predictions of both wellhead pressure and interface movements.
Foust, Thomas D.; Ziegler, Jack L.; Pannala, Sreekanth; ...
2017-02-28
Here in this computational study, we model the mixing of biomass pyrolysis vapor with solid catalyst in circulating riser reactors with a focus on the determination of solid catalyst residence time distributions (RTDs). A comprehensive set of 2D and 3D simulations were conducted for a pilot-scale riser using the Eulerian-Eulerian two-fluid modeling framework with and without sub-grid-scale models for the gas-solids interaction. A validation test case was also simulated and compared to experiments, showing agreement in the pressure gradient and RTD mean and spread. For the simulation cases, it was found that for accurate RTD prediction, the Johnson and Jackson partial slip solids boundary condition was required for all models, and a sub-grid model is useful so that ultra-high-resolution grids, which are very computationally intensive, are not required. Finally, we discovered a 2/3 scaling relation for the RTD mean and spread when comparing resolved 2D simulations to validated unresolved 3D sub-grid-scale model simulations.
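The RTD mean and spread compared here are the first two moments of the E-curve. A minimal numpy sketch follows; the exponential check case is an ideal CSTR, an assumption chosen only because its moments are known in closed form, not a representation of the riser geometry.

```python
import numpy as np

def _integrate(y, t):
    """Trapezoidal integration (kept explicit for numpy-version portability)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

def rtd_moments(t, E):
    """Mean and variance (spread) of a residence time distribution E(t)."""
    E = E / _integrate(E, t)                 # normalize so the E-curve integrates to 1
    mean = _integrate(t * E, t)              # first moment: mean residence time
    var = _integrate((t - mean) ** 2 * E, t) # second central moment: spread
    return mean, var

# Check against an ideal CSTR: E(t) = exp(-t/tau)/tau has mean tau, variance tau**2
tau = 2.0
t = np.linspace(0.0, 40.0, 4001)
mean, var = rtd_moments(t, np.exp(-t / tau) / tau)
print(round(mean, 2), round(var, 2))  # ~2.0, ~4.0
```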
Construct Validity of the Autism Impact Measure (AIM).
Mazurek, Micah O; Carlson, Coleen; Baker-Ericzén, Mary; Butter, Eric; Norris, Megan; Kanne, Stephen
2018-01-17
The Autism Impact Measure (AIM) was designed to track incremental change in frequency and impact of core ASD symptoms. The current study examined the structural and convergent validity of the AIM in a large sample of children with ASD. The results of a series of exploratory and confirmatory factor analyses yielded a final model with five theoretically and empirically meaningful subdomains: Repetitive Behavior, Atypical Behavior, Communication, Social Reciprocity, and Peer Interaction. The final model showed very good fit both overall and for each of the five factors, indicating excellent structural validity. AIM subdomain scores were significantly correlated with measures of similar constructs across all five domains. The results provide further support for the psychometric properties of the AIM.
Thangaratinam, Shakila; Allotey, John; Marlin, Nadine; Mol, Ben W; Von Dadelszen, Peter; Ganzevoort, Wessel; Akkermans, Joost; Ahmed, Asif; Daniels, Jane; Deeks, Jon; Ismail, Khaled; Barnard, Ann Marie; Dodds, Julie; Kerry, Sally; Moons, Carl; Riley, Richard D; Khan, Khalid S
2017-04-01
The prognosis of early-onset pre-eclampsia (before 34 weeks' gestation) is variable. Accurate prediction of complications is required to plan appropriate management in high-risk women. To develop and validate prediction models for outcomes in early-onset pre-eclampsia. Prospective cohort for model development, with validation in two external data sets. Model development: 53 obstetric units in the UK. Model transportability: PIERS (Pre-eclampsia Integrated Estimate of RiSk for mothers) and PETRA (Pre-Eclampsia TRial Amsterdam) studies. Pregnant women with early-onset pre-eclampsia. Nine hundred and forty-six women in the model development data set and 850 women (634 in PIERS, 216 in PETRA) in the transportability (external validation) data sets. The predictors were identified from systematic reviews of tests to predict complications in pre-eclampsia and were prioritised by Delphi survey. The primary outcome was the composite of adverse maternal outcomes established using Delphi surveys. The secondary outcome was the composite of fetal and neonatal complications. We developed two prediction models: a logistic regression model (PREP-L) to assess the overall risk of any maternal outcome until postnatal discharge and a survival analysis model (PREP-S) to obtain individual risk estimates at daily intervals from diagnosis until 34 weeks. Shrinkage was used to adjust for overoptimism of predictor effects. For internal validation (of the full models in the development data) and external validation (of the reduced models in the transportability data), we computed the ability of the models to discriminate between those with and without poor outcomes (c-statistic), and the agreement between predicted and observed risk (calibration slope).
The PREP-L model included maternal age, gestational age at diagnosis, medical history, systolic blood pressure, urine protein-to-creatinine ratio, platelet count, serum urea concentration, oxygen saturation, baseline treatment with antihypertensive drugs and administration of magnesium sulphate. The PREP-S model additionally included exaggerated tendon reflexes and serum alanine aminotransaminase and creatinine concentration. Both models showed good discrimination for maternal complications, with an optimism-adjusted c-statistic of 0.82 [95% confidence interval (CI) 0.80 to 0.84] for PREP-L and 0.75 (95% CI 0.73 to 0.78) for the PREP-S model in the internal validation. External validation of the reduced PREP-L model showed good performance, with a c-statistic of 0.81 (95% CI 0.77 to 0.85) in the PIERS and 0.75 (95% CI 0.64 to 0.86) in the PETRA cohorts for maternal complications, and the model calibrated well, with slopes of 0.93 (95% CI 0.72 to 1.10) and 0.90 (95% CI 0.48 to 1.32), respectively. In the PIERS data set, the reduced PREP-S model had a c-statistic of 0.71 (95% CI 0.67 to 0.75) and a calibration slope of 0.67 (95% CI 0.56 to 0.79). Low gestational age at diagnosis, high urine protein-to-creatinine ratio, increased serum urea concentration, treatment with antihypertensive drugs, magnesium sulphate, abnormal uterine artery Doppler scan findings and estimated fetal weight below the 10th centile were associated with fetal complications. The PREP-L model provided individualised risk estimates in early-onset pre-eclampsia to plan management of high- or low-risk individuals. The PREP-S model has the potential to be used as a triage tool for risk assessment. The impact of using the models on outcomes needs further evaluation. Current Controlled Trials ISRCTN40384046. The National Institute for Health Research Health Technology Assessment programme.
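The c-statistic used throughout this validation is the probability that a randomly chosen woman who develops the outcome receives a higher predicted risk than one who does not. A minimal pairwise sketch, using made-up predictions rather than the PREP cohort data:

```python
def c_statistic(y, p):
    """Concordance: among all (event, non-event) pairs, the fraction where
    the event received the higher predicted risk (ties count as 0.5)."""
    events = [pi for yi, pi in zip(y, p) if yi == 1]
    nonevents = [pi for yi, pi in zip(y, p) if yi == 0]
    concordant = sum(
        1.0 if pe > pn else 0.5 if pe == pn else 0.0
        for pe in events for pn in nonevents
    )
    return concordant / (len(events) * len(nonevents))

# Illustrative predictions: 3 of the 4 event/non-event pairs rank correctly
print(c_statistic([0, 0, 1, 1], [0.10, 0.40, 0.35, 0.80]))  # 0.75
```

A value of 0.5 means the model ranks no better than chance; the 0.8-ish values reported above indicate strong discrimination.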
Guo, Lisha; Vanrolleghem, Peter A
2014-02-01
The Activated Sludge Model for Greenhouse Gases No. 1 was calibrated with data from a wastewater treatment plant (WWTP) without control systems and validated with data from three similar plants equipped with control systems. A special feature of the calibration/validation approach adopted in this paper is that the data are obtained from simulations with a mathematical model that is widely accepted to describe effluent quality and operating costs of actual WWTPs, the Benchmark Simulation Model No. 2 (BSM2). The calibration also aimed at fitting the model to typical observed nitrous oxide (N₂O) emission data, i.e., a yearly average of 0.5% of the influent total nitrogen load emitted as N₂O-N. Model validation was performed by challenging the model in configurations with different control strategies. The kinetic term describing the dissolved oxygen effect on the denitrification by ammonia-oxidizing bacteria (AOB) was modified into a Haldane term. Both the original and Haldane-modified models passed calibration and validation. Even though their yearly averaged values were similar, the two models presented different dynamic N₂O emissions under cold temperature conditions and control. Therefore, data collected in such situations can potentially permit model discrimination. Observed seasonal trends in N₂O emissions are simulated well with both the original and Haldane-modified models. A mechanistic explanation based on the temperature-dependent interaction between heterotrophic and autotrophic N₂O pathways was provided. Finally, while adding the AOB denitrification pathway to a model with only heterotrophic N₂O production showed little impact on effluent quality and operating cost criteria, it clearly affected the predicted N₂O emissions.
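The Haldane modification replaces a monotonic Monod saturation term with one that is inhibited at high concentrations. A generic sketch of the two kinetic forms; the half-saturation and inhibition constants below are illustrative values, not the calibrated model parameters:

```python
def monod(S, K):
    """Monod saturation: increases monotonically toward 1 with concentration S."""
    return S / (K + S)

def haldane(S, K, Ki):
    """Haldane: rises, peaks near sqrt(K*Ki), then declines (inhibition at high S)."""
    return S / (K + S + S ** 2 / Ki)

# Illustrative constants (mg/L): at high dissolved oxygen the Monod term
# saturates, while the Haldane term is strongly suppressed.
K, Ki = 0.2, 1.0
print(round(monod(10.0, K), 3))        # 0.980
print(round(haldane(10.0, K, Ki), 3))  # 0.091
```

This qualitative difference at oxygen extremes is why the two model variants only diverge under particular operating conditions, as the abstract notes.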
Kang, Xiaofeng; Dennison Himmelfarb, Cheryl R; Li, Zheng; Zhang, Jian; Lv, Rong; Guo, Jinyu
2015-01-01
The Self-care of Heart Failure Index (SCHFI) is an empirically tested instrument for measuring the self-care of patients with heart failure. The aim of this study was to develop a simplified Chinese version of the SCHFI and provide evidence for its construct validity. A total of 182 Chinese patients with heart failure were surveyed. A 2-step structural equation modeling procedure was applied to test construct validity. Factor analysis showed 3 factors explaining 43% of the variance. The structural equation model confirmed that self-care maintenance, self-care management, and self-care confidence are indeed indicators of self-care, and that self-care confidence was a positive and equally strong predictor of self-care maintenance and self-care management. Moreover, self-care scores were correlated with the Partners in Health Scale, indicating satisfactory concurrent validity. The Chinese version of the SCHFI is a theory-based instrument for assessing self-care of Chinese patients with heart failure.
NASA Astrophysics Data System (ADS)
Sagita, R.; Azra, F.; Azhar, M.
2018-04-01
This research developed a module on the mole concept based on structured inquiry with interconnection of macro, submicro, and symbolic representations, and determined the validity and practicality of the module. The research type was Research and Development (R&D), following the 4-D model, which consists of four steps: define, design, develop, and disseminate; the research was limited to the develop step. The research instrument was a questionnaire consisting of validity and practicality sheets. The module was validated by 5 validators, and its practicality was tested by 2 chemistry teachers and 28 grade XI MIA 5 students at SMAN 4 Padang. Validity and practicality data were analysed using Cohen's kappa formula. The average kappa moment from the 5 validators was 0.95, in the highest validity category, and the average kappa moments from the teachers and students were 0.89 and 0.91, respectively, in the high practicality category. The results show that the module on the mole concept based on structured inquiry with interconnection of macro, submicro, and symbolic representations is valid and practical for use in chemistry learning.
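The kappa analysis referred to here can be sketched as a standard Cohen's kappa computation, agreement between raters corrected for chance. The rating data below are hypothetical, not the validators' actual scores:

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n   # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in c1) / n ** 2   # expected (chance) agreement
    return (po - pe) / (1 - pe)

# Hypothetical ratings from two validators on a 4-point scale
a = [4, 4, 3, 4, 4, 3, 4, 4]
b = [4, 4, 3, 4, 3, 3, 4, 4]
print(round(cohens_kappa(a, b), 2))  # 0.71
```

Kappa is 1 for perfect agreement and 0 when agreement is no better than chance, which is what the category cutoffs (e.g., "highest validity") are applied to.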
Dynamic Simulation of Human Gait Model With Predictive Capability.
Sun, Jinming; Wu, Shaoli; Voglewede, Philip A
2018-03-01
In this paper, it is proposed that the central nervous system (CNS) controls human gait using a predictive control approach in conjunction with classical feedback control, rather than exclusively classical feedback control acting on past error. To validate this proposition, a dynamic model of human gait is developed using a novel predictive approach to investigate the principles of the CNS. The model developed includes two parts: a plant model that represents the dynamics of human gait and a controller that represents the CNS. The plant model is a seven-segment, six-joint model that has nine degrees-of-freedom (DOF). The plant model is validated using data collected from able-bodied human subjects. The proposed controller utilizes model predictive control (MPC). MPC uses an internal model to predict the output in advance, compare the predicted output to the reference, and optimize the control input so that the predicted error is minimal. To decrease the complexity of the model, two joints are controlled using a proportional-derivative (PD) controller. The developed predictive human gait model is validated by simulating able-bodied human gait. The simulation results show that the developed model reproduces kinematic output close to the experimental data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valencia, Antoni; Prous, Josep; Mora, Oscar
As indicated in ICH M7 draft guidance, in silico predictive tools including statistically-based QSARs and expert analysis may be used as a computational assessment for bacterial mutagenicity for the qualification of impurities in pharmaceuticals. To address this need, we developed and validated a QSAR model to predict Salmonella t. mutagenicity (Ames assay outcome) of pharmaceutical impurities using Prous Institute's Symmetry℠, a new in silico solution for drug discovery and toxicity screening, and the Mold2 molecular descriptor package (FDA/NCTR). Data was sourced from public benchmark databases with known Ames assay mutagenicity outcomes for 7300 chemicals (57% mutagens). Of these data, 90% was used to train the model and the remaining 10% was set aside as a holdout set for validation. The model's applicability to drug impurities was tested using a FDA/CDER database of 951 structures, of which 94% were found within the model's applicability domain. The predictive performance of the model is acceptable for supporting regulatory decision-making with 84 ± 1% sensitivity, 81 ± 1% specificity, 83 ± 1% concordance and 79 ± 1% negative predictivity based on internal cross-validation, while the holdout dataset yielded 83% sensitivity, 77% specificity, 80% concordance and 78% negative predictivity. Given the importance of having confidence in negative predictions, an additional external validation of the model was also carried out, using marketed drugs known to be Ames-negative, and obtained 98% coverage and 81% specificity. Additionally, Ames mutagenicity data from FDA/CFSAN was used to create another data set of 1535 chemicals for external validation of the model, yielding 98% coverage, 73% sensitivity, 86% specificity, 81% concordance and 84% negative predictivity. - Highlights: • A new in silico QSAR model to predict Ames mutagenicity is described. • The model is extensively validated with chemicals from the FDA and the public domain.
• Validation tests show desirable high sensitivity and high negative predictivity. • The model predicted 14 reportedly difficult to predict drug impurities with accuracy. • The model is suitable to support risk evaluation of potentially mutagenic compounds.
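The performance characteristics reported above all derive from a single confusion matrix. A minimal sketch with hypothetical counts, chosen only to illustrate the calculation (they are not the paper's data):

```python
def qsar_metrics(tp, fp, tn, fn):
    """Binary classification metrics as commonly reported for Ames QSAR models."""
    sensitivity = tp / (tp + fn)                    # mutagens correctly flagged
    specificity = tn / (tn + fp)                    # non-mutagens correctly cleared
    concordance = (tp + tn) / (tp + fp + tn + fn)   # overall agreement
    npv = tn / (tn + fn)                            # negative predictivity
    return sensitivity, specificity, concordance, npv

# Hypothetical holdout counts for illustration
sens, spec, conc, npv = qsar_metrics(tp=83, fp=23, tn=77, fn=17)
print(sens, spec, conc, round(npv, 2))  # 0.83 0.77 0.8 0.82
```

High negative predictivity matters most in this setting, since an impurity predicted Ames-negative may be carried forward without further testing.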
Wan, Eric Yuk Fai; Fong, Daniel Yee Tak; Fung, Colman Siu Cheung; Yu, Esther Yee Tak; Chin, Weng Yee; Chan, Anca Ka Chun; Lam, Cindy Lo Kuen
2017-06-01
This study aimed to develop and validate an all-cause mortality risk prediction model for Chinese primary care patients with type 2 diabetes mellitus (T2DM) in Hong Kong. A population-based retrospective cohort study was conducted on 132,462 Chinese patients who had received public primary care services during 2010. Each gender sample was randomly split on a 2:1 basis into derivation and validation cohorts and was followed up for a median period of 5 years. Gender-specific mortality risk prediction models showing the interaction effect between predictors and age were derived using Cox proportional hazards regression with a forward stepwise approach. Developed models were compared with pre-existing models by Harrell's C-statistic and calibration plot using the validation cohort. Common predictors of increased mortality risk in both genders included: age; smoking habit; diabetes duration; use of anti-hypertensive agents, insulin and lipid-lowering drugs; body mass index; hemoglobin A1c; systolic blood pressure (BP); total cholesterol to high-density lipoprotein-cholesterol ratio; urine albumin to creatinine ratio (urine ACR); and estimated glomerular filtration rate (eGFR). The prediction models showed better discrimination, with Harrell's C-statistics of 0.768 (males) and 0.782 (females), and better calibration in the plots than previously established models. Our newly developed gender-specific models provide a more accurate predicted 5-year mortality risk for Chinese diabetic patients than other established models. Copyright © 2017 Elsevier Inc. All rights reserved.
Ferrando-Vivas, Paloma; Jones, Andrew; Rowan, Kathryn M; Harrison, David A
2017-04-01
To develop and validate an improved risk model to predict acute hospital mortality for admissions to adult critical care units in the UK. 155,239 admissions to 232 adult critical care units in England, Wales and Northern Ireland between January and December 2012 were used to develop a risk model from a set of 38 candidate predictors. The model was validated using 90,017 admissions between January and September 2013. The final model incorporated 15 physiological predictors (modelled with continuous nonlinear models), age, dependency prior to hospital admission, chronic liver disease, metastatic disease, haematological malignancy, CPR prior to admission, location prior to admission/urgency of admission, primary reason for admission and interaction terms. The model was well calibrated and outperformed the current ICNARC model on measures of discrimination (area under the receiver operating characteristic curve 0.885 versus 0.869) and model fit (Brier score 0.108 versus 0.115). On average, the new model reclassified patients into more appropriate risk categories (net reclassification improvement 19.9; P<0.0001). The model performed well across patient subgroups and in specialist critical care units. The risk model developed in this study showed excellent discrimination and calibration when validated on a different time period and across different types of critical care unit. This in turn allows improved accuracy of comparisons between UK critical care providers. Copyright © 2016. Published by Elsevier Inc.
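The category-based net reclassification improvement can be sketched as follows: upward risk-category moves count in favour of the new model for patients who died, and downward moves count in favour for survivors. The patients and risk categories below are hypothetical (and the study's reported 19.9 suggests a percentage scale).

```python
def nri(old_cat, new_cat, event):
    """Net reclassification improvement for categorized risks."""
    up_e = down_e = up_ne = down_ne = 0
    n_e = sum(event)
    n_ne = len(event) - n_e
    for o, n, e in zip(old_cat, new_cat, event):
        if e:  # event: moving up is an improvement
            up_e += n > o
            down_e += n < o
        else:  # non-event: moving down is an improvement
            up_ne += n > o
            down_ne += n < o
    return (up_e - down_e) / n_e + (down_ne - up_ne) / n_ne

# Hypothetical reclassification: events tend to move up, non-events down
old = [0, 1, 1, 2, 0, 1, 2, 2]
new = [1, 1, 2, 2, 0, 0, 1, 2]
ev  = [1, 1, 1, 1, 0, 0, 0, 0]
print(nri(old, new, ev))  # 1.0
```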
Cross-validation and Peeling Strategies for Survival Bump Hunting using Recursive Peeling Methods
Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J. Sunil
2015-01-01
We introduce a framework to build a survival/risk bump hunting model with a censored time-to-event response. Our Survival Bump Hunting (SBH) method is based on a recursive peeling procedure that uses a specific survival peeling criterion derived from non/semi-parametric statistics such as the hazard ratio, the log-rank test or the Nelson-Aalen estimator. To optimize the tuning parameter of the model and validate it, we introduce an objective function based on survival or prediction-error statistics, such as the log-rank test and the concordance error rate. We also describe two alternative cross-validation techniques adapted to the joint task of decision-rule making by recursive peeling and survival estimation. Numerical analyses show the importance of replicated cross-validation and the differences between criteria and techniques in both low and high-dimensional settings. Although several non-parametric survival models exist, none addresses the problem of directly identifying local extrema. We show how SBH efficiently estimates extreme survival/risk subgroups unlike other models. This provides an insight into the behavior of commonly used models and suggests alternatives to be adopted in practice. Finally, our SBH framework was applied to a clinical dataset. In it, we identified subsets of patients characterized by clinical and demographic covariates with a distinct extreme survival outcome, for which tailored medical interventions could be made. An R package PRIMsrc (Patient Rule Induction Method in Survival, Regression and Classification settings) is available on CRAN (Comprehensive R Archive Network) and GitHub. PMID:27034730
External validation of EPIWIN biodegradation models.
Posthumus, R; Traas, T P; Peijnenburg, W J G M; Hulzebos, E M
2005-01-01
The BIOWIN biodegradation models were evaluated for their suitability for regulatory purposes. BIOWIN includes the linear and non-linear BIODEG and MITI models for estimating the probability of rapid aerobic biodegradation and an expert survey model for primary and ultimate biodegradation estimation. Experimental biodegradation data for 110 newly notified substances were compared with the estimations of the different models. The models were applied separately and in combinations to determine which model(s) showed the best performance. The results of this study were compared with the results of other validation studies and other biodegradation models. The BIOWIN models predict not-readily biodegradable substances with high accuracy, in contrast to readily biodegradable ones. In view of the high environmental concern over persistent chemicals, and given the large number of not-readily biodegradable chemicals compared to readily biodegradable ones, a model is preferred that gives a minimum of false positives without a correspondingly high percentage of false negatives. A combination of the BIOWIN models (BIOWIN2 or BIOWIN6) showed the highest predictive value for not-ready biodegradability. However, the highest score for overall predictivity with the lowest percentage of false predictions was achieved by applying BIOWIN3 (pass level 2.75) and BIOWIN6.
Independent validation of Swarm Level 2 magnetic field products and `Quick Look' for Level 1b data
NASA Astrophysics Data System (ADS)
Beggan, Ciarán D.; Macmillan, Susan; Hamilton, Brian; Thomson, Alan W. P.
2013-11-01
Magnetic field models are produced on behalf of the European Space Agency (ESA) by an independent scientific consortium known as the Swarm Satellite Constellation Application and Research Facility (SCARF), through the Level 2 Processor (L2PS). The consortium primarily produces magnetic field models for the core, lithosphere, ionosphere and magnetosphere. Typically, for each magnetic product, two magnetic field models are produced in separate chains using complementary data selection and processing techniques. Hence, the magnetic field models from the complementary processing chains will be similar but not identical. The final step in the overall L2PS therefore involves inspection and validation of the magnetic field models against each other and against data from (semi-) independent sources (e.g. ground observatories). We describe the validation steps for each magnetic field product and the comparison against independent datasets, and we show examples of the output of the validation. In addition, the L2PS also produces a daily set of `Quick Look' output graphics and statistics to monitor the overall quality of Level 1b data issued by ESA. We describe the outputs of the `Quick Look' chain.
A hierarchical (multicomponent) model of in-group identification: examining in Russian samples.
Lovakov, Andrey V; Agadullina, Elena R; Osin, Evgeny N
2015-06-03
The aim of this study was to examine the validity and reliability of Leach et al.'s (2008) model of in-group identification in two studies using Russian samples (overall N = 621). In Study 1, a series of multi-group confirmatory factor analyses revealed that the hierarchical model of in-group identification, which included two second-order factors, self-definition (individual self-stereotyping, and in-group homogeneity) and self-investment (satisfaction, solidarity, and centrality), fitted the data well for all four group identities (ethnic, religious, university, and gender) (CFI > .93, TLI > .92, RMSEA < .06, SRMR < .06) and demonstrated a better fit compared to the alternative models. In Study 2, the construct validity and reliability of the Russian version of the in-group identification measure were examined. Results show that these measures have adequate psychometric properties. In short, our results show that Leach et al.'s model is reproduced in Russian culture. The Russian version of this measure can be recommended for use in future in-group research in Russian-speaking samples.
Li, Zhenghua; Cheng, Fansheng; Xia, Zhining
2011-01-01
The chemical structures of 114 polycyclic aromatic sulfur heterocycles (PASHs) have been studied by molecular electronegativity-distance vector (MEDV). The linear relationships between gas chromatographic retention index and the MEDV have been established by a multiple linear regression (MLR) model. The results of variable selection by stepwise multiple regression (SMR) and the predictive abilities of the optimization model, appraised by leave-one-out cross-validation, showed that the optimization model, with a correlation coefficient (R) of 0.9947 and a cross-validated correlation coefficient (Rcv) of 0.9940, possessed the best statistical quality. Furthermore, when the 114 PASHs compounds were divided into calibration and test sets in the ratio of 2:1, the statistical analysis showed that our models possess almost equal statistical quality, very similar regression coefficients and good robustness. The quantitative structure-retention relationship (QSRR) model established may provide a convenient and powerful method for predicting the gas chromatographic retention of PASHs.
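Leave-one-out cross-validation refits the regression once per compound and predicts the held-out observation; Rcv is then the correlation between these held-out predictions and the observed values. A numpy sketch on synthetic descriptor data (not the MEDV descriptors):

```python
import numpy as np

def loo_rcv(X, y):
    """Leave-one-out cross-validated correlation coefficient for MLR:
    each sample is predicted by a model fitted to the remaining samples."""
    n = len(y)
    preds = np.empty(n)
    A = np.column_stack([np.ones(n), X])  # intercept + descriptors
    for i in range(n):
        mask = np.arange(n) != i
        coef, *_ = np.linalg.lstsq(A[mask], y[mask], rcond=None)
        preds[i] = A[i] @ coef
    return np.corrcoef(preds, y)[0, 1]

# Synthetic check: data from a nearly exact linear model gives Rcv close to 1
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 3))
y = X @ np.array([1.5, -2.0, 0.7]) + 0.01 * rng.normal(size=30)
rcv = loo_rcv(X, y)
print(round(rcv, 3))  # close to 1.0
```

An Rcv close to the fitted R, as reported here (0.9940 vs 0.9947), indicates the model is not merely overfitting the calibration data.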
Church, Sheri A; Livingstone, Kevin; Lai, Zhao; Kozik, Alexander; Knapp, Steven J; Michelmore, Richard W; Rieseberg, Loren H
2007-02-01
Using likelihood-based variable selection models, we determined whether positive selection was acting on 523 EST sequence pairs from two lineages of sunflower and lettuce. Variable rate models are generally not used for comparisons of sequence pairs due to the limited information and the inaccuracy of estimates of specific substitution rates. However, previous studies have shown that the likelihood ratio test (LRT) is reliable for detecting positive selection, even with low numbers of sequences. These analyses identified 56 genes that show a signature of selection, of which 75% were not identified by simpler models that average selection across codons. Subsequent mapping studies in sunflower showed that four of the five positively selected genes identified by these methods mapped to domestication QTLs. We discuss the validity and limitations of using variable rate models for comparisons of sequence pairs, as well as the limitations of using ESTs for identification of positively selected genes.
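The LRT referred to here compares the maximized log-likelihoods of nested codon models, with twice the log-likelihood difference referred to a chi-square distribution. A minimal scipy sketch; the log-likelihood values and degrees of freedom below are illustrative, not from the study:

```python
from scipy.stats import chi2

def lrt_pvalue(loglik_null, loglik_alt, df):
    """Likelihood ratio test: 2 * (lnL_alt - lnL_null) vs chi-square(df)."""
    stat = 2.0 * (loglik_alt - loglik_null)
    return chi2.sf(max(stat, 0.0), df)

# Illustrative log-likelihoods: null = no positive selection,
# alternative = a class of sites with dN/dS > 1 (df = extra parameters)
p = lrt_pvalue(loglik_null=-2451.3, loglik_alt=-2446.9, df=2)
print(round(p, 3))  # 0.012
```

A small p-value rejects the null model, supporting a signature of positive selection for that gene pair.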
DOT National Transportation Integrated Search
1972-07-01
The TSC electromagnetic scattering model has been used to predict the course deviation indications (CDI) at the planned Dallas Fort Worth Regional Airport. The results show that the CDI due to scattering from the modeled airport structures are within...
Validation of Atmospheric Forcing Data for PIPS 3
2001-09-30
members shortly. RESULTS Surface Temperature: Figure 1 shows a comparison of surface air temperatures from the NOGAPS model, the IABP and the NCEP...with some 8,000 daily velocity observations from the IABP buoys shows that the sea-ice model performs better when driven with NOGAPS surface stresses...forcing variables, surface radiative fluxes, surface winds, and precipitation estimates to be used in the development and operation of the PIPS 3.0 model
Ma, Baoshun; Ruwet, Vincent; Corieri, Patricia; Theunissen, Raf; Riethmuller, Michel; Darquenne, Chantal
2009-01-01
Accurate modeling of air flow and aerosol transport in the alveolated airways is essential for quantitative predictions of pulmonary aerosol deposition. However, experimental validation of such modeling studies has been scarce. The objective of this study is to validate CFD predictions of flow field and particle trajectory with experiments within a scaled-up model of alveolated airways. Steady flow (Re = 0.13) of silicone oil was captured by particle image velocimetry (PIV), and the trajectories of 0.5 mm and 1.2 mm spherical iron beads (representing 0.7 to 14.6 μm aerosol in vivo) were obtained by particle tracking velocimetry (PTV). At twelve selected cross sections, the velocity profiles obtained by CFD matched well with those by PIV (within 1.7% on average). The CFD predicted trajectories also matched well with PTV experiments. These results showed that air flow and aerosol transport in models of human alveolated airways can be simulated by CFD techniques with reasonable accuracy. PMID:20161301
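Scaled-up experiments like this rely on matching the Reynolds number between the bench model and the real airway, so that a viscous silicone oil at centimeter scale reproduces the low-Re air flow in vivo. A sketch with assumed, purely illustrative fluid properties and dimensions (the study's actual value was Re = 0.13):

```python
def reynolds(rho, v, L, mu):
    """Reynolds number: ratio of inertial to viscous forces (rho*v*L/mu)."""
    return rho * v * L / mu

# Illustrative numbers only (not the study's dimensions or properties):
# slow air flow in a sub-millimeter airway vs. silicone oil in a scaled model.
re_airway = reynolds(rho=1.2, v=0.001, L=0.0002, mu=1.8e-5)
re_model = reynolds(rho=970.0, v=0.0005, L=0.02, mu=0.73)
print(round(re_airway, 3), round(re_model, 3))  # 0.013 0.013
```

Because both flows sit at the same low Reynolds number, velocity fields measured in the scaled model can be mapped back to the in vivo geometry.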
Design and landing dynamic analysis of reusable landing leg for a near-space manned capsule
NASA Astrophysics Data System (ADS)
Yue, Shuai; Nie, Hong; Zhang, Ming; Wei, Xiaohui; Gan, Shengyong
2018-06-01
To improve the landing performance of a near-space manned capsule under various landing conditions, a novel landing system is designed that employs double chamber and single chamber dampers in the primary and auxiliary struts, respectively. A dynamic model of the landing system is established, and the damper parameters are determined by employing the design method. A single-leg drop test with different initial pitch angles is then conducted to compare and validate the simulation model. Based on the validated simulation model, seven critical landing conditions regarding nine crucial landing responses are found by combining the radial basis function (RBF) surrogate model and adaptive simulated annealing (ASA) optimization method. Subsequently, the adaptability of the landing system under critical landing conditions is analyzed. The results show that the simulation results match the test results well, which validates the accuracy of the dynamic model. In addition, all of the crucial responses under their corresponding critical landing conditions satisfy the design specifications, demonstrating the feasibility of the landing system.
Climate Change Impacts for Conterminous USA: An Integrated Assessment Part 2. Models and Validation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomson, Allison M.; Rosenberg, Norman J.; Izaurralde, R Cesar C.
As CO2 and other greenhouse gases accumulate in the atmosphere and contribute to rising global temperatures, it is important to examine how a changing climate may affect natural and managed ecosystems. In this series of papers, we study the impacts of climate change on agriculture, water resources and natural ecosystems in the conterminous United States using a suite of climate change predictions from General Circulation Models (GCMs) as described in Part 1. Here we describe the agriculture model EPIC and the HUMUS water model and validate them with historical crop yields and streamflow data. We compare EPIC simulated grain and forage crop yields with historical crop yields from the US Department of Agriculture and find an acceptable level of agreement for this study. The validation of HUMUS simulated streamflow with estimates of natural streamflow from the US Geological Survey shows that the model is able to reproduce significant relationships and capture major trends.
Less is more? Assessing the validity of the ICD-11 model of PTSD across multiple trauma samples
Hansen, Maj; Hyland, Philip; Armour, Cherie; Shevlin, Mark; Elklit, Ask
2015-01-01
Background In the 5th edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), the symptom profile of posttraumatic stress disorder (PTSD) was expanded to include 20 symptoms. An alternative model of PTSD is outlined in the proposed 11th edition of the International Classification of Diseases (ICD-11) that includes just six symptoms. Objectives and method The objectives of the current study are: 1) to independently investigate the fit of the ICD-11 model of PTSD, and three DSM-5-based models of PTSD, across seven different trauma samples (N=3,746) using confirmatory factor analysis; 2) to assess the concurrent validity of the ICD-11 model of PTSD; and 3) to determine if there are significant differences in diagnostic rates between the ICD-11 guidelines and the DSM-5 criteria. Results The ICD-11 model of PTSD was found to provide excellent model fit in six of the seven trauma samples, and tests of factorial invariance showed that the model performs equally well for males and females. DSM-5 models provided poor fit of the data. Concurrent validity was established as the ICD-11 PTSD factors were all moderately to strongly correlated with scores of depression, anxiety, dissociation, and aggression. Levels of association were similar for ICD-11 and DSM-5 suggesting that explanatory power is not affected due to the limited number of items included in the ICD-11 model. Diagnostic rates were significantly lower according to ICD-11 guidelines compared to the DSM-5 criteria. Conclusions The proposed factor structure of the ICD-11 model of PTSD appears valid across multiple trauma types, possesses good concurrent validity, and is more stringent in terms of diagnosis compared to the DSM-5 criteria. PMID:26450830
Focks, Andreas; Belgers, Dick; Boerwinkel, Marie-Claire; Buijse, Laura; Roessink, Ivo; Van den Brink, Paul J
2018-05-01
Exposure patterns in ecotoxicological experiments often do not match the exposure profiles for which a risk assessment needs to be performed. This limitation can be overcome by using toxicokinetic-toxicodynamic (TKTD) models for the prediction of effects under time-variable exposure. For the use of TKTD models in the environmental risk assessment of chemicals, it is required to calibrate and validate the model for specific compound-species combinations. In this study, the survival of macroinvertebrates after exposure to neonicotinoid insecticides was modelled using TKTD models from the General Unified Threshold models of Survival (GUTS) framework. The models were calibrated on existing survival data from acute or chronic tests under a static exposure regime. Validation experiments were performed for two sets of species-compound combinations: one set focussed on the sensitivity of multiple species to a single compound, imidacloprid, and the other set on the effects of multiple compounds, i.e., the three neonicotinoids imidacloprid, thiacloprid and thiamethoxam, on the survival of a single species, the mayfly Cloeon dipterum. The calibrated models were used to predict survival over time, including uncertainty ranges, for the different time-variable exposure profiles used in the validation experiments. From the comparison between observed and predicted survival, it appeared that the accuracy of the model predictions was acceptable for four of the five tested species in the multiple-species dataset. For compounds such as neonicotinoids, which are known to have the potential to show increased toxicity under prolonged exposure, the calibration and validation of TKTD models for survival should ideally consider calibration data from both acute and chronic tests.
Razzaq, Misbah; Ahmad, Jamil
2015-01-01
Internet worms are analogous to biological viruses since they can infect a host and have the ability to propagate through a chosen medium. To prevent the spread of a worm or to grasp how to regulate a prevailing worm, compartmental models are commonly used as a means to examine and understand the patterns and mechanisms of a worm spread. However, one of the greatest challenges is to produce methods to verify and validate the behavioural properties of a compartmental model. In this study we therefore suggest a framework based on Petri Nets and Model Checking through which we can meticulously examine and validate these models. We investigate the Susceptible-Exposed-Infectious-Recovered (SEIR) model and propose a new model, Susceptible-Exposed-Infectious-Recovered-Delayed-Quarantined (Susceptible/Recovered) (SEIDQR(S/I)), along with a hybrid quarantine strategy, which is then constructed and analysed using Stochastic Petri Nets and Continuous Time Markov Chains. The analysis shows that the hybrid quarantine strategy is extremely effective in reducing the risk of propagating the worm. Through Model Checking, we gained insight into the functionality of compartmental models. The Model Checking results agree well with the simulation results, which fully supports the proposed framework. PMID:26713449
Roozenbeek, Bob; Lingsma, Hester F.; Lecky, Fiona E.; Lu, Juan; Weir, James; Butcher, Isabella; McHugh, Gillian S.; Murray, Gordon D.; Perel, Pablo; Maas, Andrew I.R.; Steyerberg, Ewout W.
2012-01-01
Objective The International Mission on Prognosis and Analysis of Clinical Trials (IMPACT) and Corticoid Randomisation After Significant Head injury (CRASH) prognostic models predict outcome after traumatic brain injury (TBI) but have not been compared in large datasets. The objective of this study is to externally validate and compare the IMPACT and CRASH prognostic models for prediction of outcome after moderate or severe TBI. Design External validation study. Patients We considered 5 new datasets with a total of 9036 patients, comprising three randomized trials and two observational series, containing prospectively collected individual TBI patient data. Measurements Outcomes were mortality and unfavourable outcome, based on the Glasgow Outcome Score (GOS) at six months after injury. To assess performance, we studied the discrimination of the models (by AUCs) and calibration (by comparison of the mean observed to predicted outcomes, and by calibration slopes). Main Results The highest discrimination was found in the TARN trauma registry (AUCs between 0.83 and 0.87), and the lowest discrimination in the Pharmos trial (AUCs between 0.65 and 0.71). Although differences in predictor effects between development and validation populations were found (calibration slopes varying between 0.58 and 1.53), the differences in discrimination were largely explained by differences in case-mix in the validation studies. Calibration was good: the fraction of observed outcomes generally agreed well with the mean predicted outcome. No meaningful differences were noted in performance between the IMPACT and CRASH models. More complex models discriminated slightly better than simpler variants. Conclusions Since both the IMPACT and the CRASH prognostic models show good generalizability to more recent data, they are valid instruments to quantify prognosis in TBI. PMID:22511138
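The discrimination metric used throughout these validation studies, the area under the ROC curve (AUC), has a simple rank interpretation: the probability that a randomly chosen patient with the outcome receives a higher predicted risk than one without. A minimal sketch with made-up labels and scores (not data from the study):

```python
def auc(labels, scores):
    """AUC via the Mann-Whitney U statistic: probability that a random positive
    scores higher than a random negative (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# 3 of 4 positive/negative pairs are correctly ordered
print(auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # → 0.75
```

An AUC of 0.5 corresponds to chance-level discrimination and 1.0 to perfect separation, which is why values in the 0.83-0.87 range indicate strong discrimination while 0.65-0.71 is only modest.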
Takada, M; Sugimoto, M; Ohno, S; Kuroi, K; Sato, N; Bando, H; Masuda, N; Iwata, H; Kondo, M; Sasano, H; Chow, L W C; Inamoto, T; Naito, Y; Tomita, M; Toi, M
2012-07-01
Nomogram, a standard technique that utilizes multiple characteristics to predict efficacy of treatment and likelihood of a specific status of an individual patient, has been used for prediction of response to neoadjuvant chemotherapy (NAC) in breast cancer patients. The aim of this study was to develop a novel computational technique to predict the pathological complete response (pCR) to NAC in primary breast cancer patients. A mathematical model using alternating decision trees, a variant of the decision tree, was developed using 28 clinicopathological variables that were retrospectively collected from patients treated with NAC (n = 150), and validated using an independent dataset from a randomized controlled trial (n = 173). The model selected 15 variables to predict the pCR, yielding area under the receiver operating characteristic curve (AUC) values of 0.766 [95% confidence interval (CI) 0.671-0.861, P value < 0.0001] in cross-validation using the training dataset and 0.787 (95% CI 0.716-0.858, P value < 0.0001) in the validation dataset. Among the three subtypes of breast cancer, the luminal subgroup showed the best discrimination (AUC = 0.779, 95% CI 0.641-0.917, P value = 0.0059). The developed model (AUC = 0.805, 95% CI 0.716-0.894, P value < 0.0001) outperformed multivariate logistic regression (AUC = 0.754, 95% CI 0.651-0.858, P value = 0.00019) on the validation dataset without missing values (n = 127). Several analyses, e.g. bootstrap analysis, revealed that the developed model was insensitive to missing values and also tolerant to distribution bias among the datasets. Our model based on clinicopathological variables showed high predictive ability for pCR. This model might improve the prediction of the response to NAC in primary breast cancer patients.
Perandini, Simone; Soardi, G A; Larici, A R; Del Ciello, A; Rizzardi, G; Solazzo, A; Mancino, L; Zeraj, F; Bernhart, M; Signorini, M; Motton, M; Montemezzi, S
2017-05-01
To achieve multicentre external validation of the Herder and Bayesian Inference Malignancy Calculator (BIMC) models. Two hundred and fifty-nine solitary pulmonary nodules (SPNs) collected from four major hospitals which underwent 18-FDG-PET characterization were included in this multicentre retrospective study. The Herder model was tested on all available lesions (Group A). A subgroup of 180 SPNs (Group B) was used to provide an unbiased comparison between the Herder and BIMC models. Receiver operating characteristic (ROC) area under the curve (AUC) analysis was performed to assess diagnostic accuracy. Decision analysis was performed by adopting the risk threshold stated in British Thoracic Society (BTS) guidelines. The unbiased comparison performed in Group B showed a ROC AUC for the Herder model of 0.807 (95% CI 0.742-0.862) and for the BIMC model of 0.822 (95% CI 0.758-0.875). Both the Herder and the BIMC models were proven to accurately predict the risk of malignancy when tested on a large multicentre external case series. The BIMC model seems advantageous on the basis of a more favourable decision analysis. • The Herder model showed a ROC AUC of 0.807 on 180 SPNs. • The BIMC model showed a ROC AUC of 0.822 on 180 SPNs. • Decision analysis is more favourable to the BIMC model.
NASA Astrophysics Data System (ADS)
Kennedy, J. H.; Bennett, A. R.; Evans, K. J.; Fyke, J. G.; Vargo, L.; Price, S. F.; Hoffman, M. J.
2016-12-01
Accurate representation of ice sheets and glaciers is essential for robust predictions of arctic climate within Earth System models. Verification and Validation (V&V) is a set of techniques used to quantify the correctness and accuracy of a model, which builds developer/modeler confidence, and can be used to enhance the credibility of the model. Fundamentally, V&V is a continuous process because each model change requires a new round of V&V testing. The Community Ice Sheet Model (CISM) development community is actively developing LIVVkit, the Land Ice Verification and Validation toolkit, which is designed to easily integrate into an ice-sheet model's development workflow (on both personal and high-performance computers) to provide continuous V&V testing. LIVVkit is a robust and extensible Python package for V&V, which has components for both software V&V (construction and use) and model V&V (mathematics and physics). The model verification component is used, for example, to verify model results against community intercomparisons such as ISMIP-HOM. The model validation component is used, for example, to generate a series of diagnostic plots showing the differences between model results and observations for variables such as thickness, surface elevation, basal topography, surface velocity, surface mass balance, etc. Because many different ice-sheet models are under active development, new validation datasets are becoming available, and new methods of analysing these models are actively being researched, LIVVkit includes a framework that lets ice-sheet modelers easily extend the model V&V analyses. This allows modelers and developers to develop evaluations of parameters, implement changes, and quickly see how those changes affect the ice-sheet model and the earth system model (when coupled). Furthermore, LIVVkit outputs a portable hierarchical website allowing evaluations to be easily shared, published, and analysed throughout the arctic and Earth system communities.
Using remote sensing for validation of a large scale hydrologic and hydrodynamic model in the Amazon
NASA Astrophysics Data System (ADS)
Paiva, R. C.; Bonnet, M.; Buarque, D. C.; Collischonn, W.; Frappart, F.; Mendes, C. B.
2011-12-01
We present the validation of the large-scale, catchment-based hydrological MGB-IPH model in the Amazon River basin. In this model, physically-based equations are used to simulate the hydrological processes, such as the Penman-Monteith method to estimate evapotranspiration, or the Moore and Clarke infiltration model. A new feature recently introduced in the model is a 1D hydrodynamic module for river routing. It uses the full Saint-Venant equations and a simple floodplain storage model. River and floodplain geometry parameters are extracted from the SRTM DEM using specially developed GIS algorithms that provide catchment discretization, estimation of river cross-section geometry and water storage volume variations in the floodplains. The model was forced using satellite-derived daily rainfall TRMM 3B42, calibrated against discharge data and first validated using daily discharges and water levels from 111 and 69 stream gauges, respectively. Then, we performed a validation against remote sensing derived hydrological products, including (i) monthly Terrestrial Water Storage (TWS) anomalies derived from GRACE, (ii) river water levels derived from ENVISAT satellite altimetry data (212 virtual stations from Santos da Silva et al., 2010) and (iii) a multi-satellite monthly global inundation extent dataset at ~25 x 25 km spatial resolution (Papa et al., 2010). Validation against river discharges shows good performance of the MGB-IPH model. For 70% of the stream gauges, the Nash-Sutcliffe efficiency index (ENS) is higher than 0.6, and at Óbidos, close to the Amazon river outlet, ENS equals 0.9 and the model bias equals -4.6%. The largest errors are located in drainage areas outside Brazil, and we speculate that this is due to the poor quality of rainfall datasets in these poorly monitored and/or mountainous areas. Validation against water levels shows that the model performs well in the major tributaries. For 60% of virtual stations, ENS is higher than 0.6.
Similarly, the largest errors are also located in drainage areas outside Brazil, mostly the Japurá River, and in the lower Amazon River. In the latter, correlation with observations is high but the model underestimates the amplitude of water levels. We also found a large bias between model and ENVISAT water levels, ranging from -3 to -15 m. The model provided TWS in good accordance with GRACE estimates. The ENS value for TWS over the whole Amazon equals 0.93. We also analyzed results in 21 sub-regions of 4 x 4°. ENS is smaller than 0.8 in only 5 areas, and these are found mostly in the northwest part of the Amazon, possibly due to the same errors reported in the discharge results. Flood extent validation is under development, but a previous analysis in the Brazilian part of the Solimões River basin suggests good model performance. The authors are grateful for the financial and operational support from the Brazilian agencies FINEP, CNPq and ANA and from the French observatories HYBAM and SOERE RBV.
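The Nash-Sutcliffe efficiency index (ENS) quoted in the record above compares the squared error of the simulation against the variance of the observations: ENS = 1 for a perfect match and ENS <= 0 when the model is no better than the observed mean. A minimal sketch with invented discharge values:

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 = perfect, <= 0 = no better than the mean."""
    obs = np.asarray(observed, dtype=float)
    sim = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Invented daily discharges (m3/s); perfect agreement yields ENS = 1
obs = [10.0, 12.0, 9.0, 14.0]
print(nash_sutcliffe(obs, obs))  # → 1.0
```

Because the denominator is the variance around the observed mean, ENS penalizes a model relative to the trivial "always predict the mean" baseline, which is why thresholds like ENS > 0.6 are used as acceptance criteria.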
Vuong, Kylie; Armstrong, Bruce K; Weiderpass, Elisabete; Lund, Eiliv; Adami, Hans-Olov; Veierod, Marit B; Barrett, Jennifer H; Davies, John R; Bishop, D Timothy; Whiteman, David C; Olsen, Catherine M; Hopper, John L; Mann, Graham J; Cust, Anne E; McGeechan, Kevin
2016-08-01
Identifying individuals at high risk of melanoma can optimize primary and secondary prevention strategies. To develop and externally validate a risk prediction model for incident first-primary cutaneous melanoma using self-assessed risk factors. We used unconditional logistic regression to develop a multivariable risk prediction model. Relative risk estimates from the model were combined with Australian melanoma incidence and competing mortality rates to obtain absolute risk estimates. A risk prediction model was developed using the Australian Melanoma Family Study (629 cases and 535 controls) and externally validated using 4 independent population-based studies: the Western Australia Melanoma Study (511 case-control pairs), the Leeds Melanoma Case-Control Study (960 cases and 513 controls), the Epigene-QSkin Study (44,544 participants, of whom 766 had melanoma), and the Swedish Women's Lifestyle and Health Cohort Study (49,259 women, of whom 273 had melanoma). We validated model performance internally and externally by assessing discrimination using the area under the receiver operating curve (AUC). Additionally, using the Swedish Women's Lifestyle and Health Cohort Study, we assessed model calibration and clinical usefulness. The risk prediction model included hair color, nevus density, first-degree family history of melanoma, previous nonmelanoma skin cancer, and lifetime sunbed use. On internal validation, the AUC was 0.70 (95% CI, 0.67-0.73). On external validation, the AUC was 0.66 (95% CI, 0.63-0.69) in the Western Australia Melanoma Study, 0.67 (95% CI, 0.65-0.70) in the Leeds Melanoma Case-Control Study, 0.64 (95% CI, 0.62-0.66) in the Epigene-QSkin Study, and 0.63 (95% CI, 0.60-0.67) in the Swedish Women's Lifestyle and Health Cohort Study. Model calibration showed close agreement between predicted and observed numbers of incident melanomas across all deciles of predicted risk.
In the external validation setting, there was higher net benefit when using the risk prediction model to classify individuals as high risk compared with classifying all individuals as high risk. The melanoma risk prediction model performs well and may be useful in prevention interventions reliant on a risk assessment using self-assessed risk factors.
Validation of individual and aggregate global flood hazard models for two major floods in Africa.
NASA Astrophysics Data System (ADS)
Trigg, M.; Bernhofen, M.; Whyman, C.
2017-12-01
A recent intercomparison of global flood hazard models undertaken by the Global Flood Partnership shows that there is an urgent requirement to undertake more validation of the models against flood observations. As part of the intercomparison, the aggregated model dataset resulting from the project was provided as open access data. We compare the individual and aggregated flood extent output from the six global models and test these against two major floods on the African continent within the last decade, namely severe flooding on the Niger River in Nigeria in 2012, and on the Zambezi River in Mozambique in 2007. We test whether aggregating different numbers and combinations of models improves model fit to the observations compared with the individual model outputs. We present results that illustrate some of the challenges of comparing imperfect models with imperfect observations, and also of defining the probability of a real event in order to test standard model output probabilities. Finally, we propose a collective set of open access validation flood events, with associated observational data and descriptions, that provide a standard set of tests across different climates and hydraulic conditions.
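One common way to score a modeled flood extent against an observed one, and to aggregate several binary model outputs before scoring, is an intersection-over-union fit statistic combined with agreement voting. This sketch is a hedged illustration on made-up grids; the function names and the specific metric are assumptions, not necessarily those used in this study.

```python
import numpy as np

def flood_fit(modelled, observed):
    """Fit statistic F = |M ∩ O| / |M ∪ O| for boolean flood-extent grids."""
    m, o = np.asarray(modelled, bool), np.asarray(observed, bool)
    return (m & o).sum() / (m | o).sum()

def aggregate(models, threshold):
    """A cell counts as flooded if at least `threshold` models agree."""
    stack = np.asarray(models, bool)
    return stack.sum(axis=0) >= threshold

# Tiny invented 2x3 grids: observed extent and two model outputs
obs = np.array([[1, 1, 0], [0, 1, 0]], bool)
m1 = np.array([[1, 1, 0], [0, 0, 0]], bool)
m2 = np.array([[1, 0, 0], [0, 1, 1]], bool)
agg = aggregate([m1, m2], 2)       # cells where both models agree
print(flood_fit(agg, obs))         # intersection-over-union with observations
```

Varying `threshold` from 1 (any model) to the number of models (all models) is one simple way to test whether aggregation improves fit relative to individual outputs.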
Scopolamine provocation-based pharmacological MRI model for testing procognitive agents.
Hegedűs, Nikolett; Laszy, Judit; Gyertyán, István; Kocsis, Pál; Gajári, Dávid; Dávid, Szabolcs; Deli, Levente; Pozsgay, Zsófia; Tihanyi, Károly
2015-04-01
There is a huge unmet need to understand and treat pathological cognitive impairment. The development of disease-modifying cognitive enhancers is hindered by the lack of a correct pathomechanism and of suitable animal models. Most animal models used to study cognition and pathology do not fulfil the predictive validity, face validity or construct validity criteria, and their outcome measures also differ greatly from those of human trials. Fortunately, some pharmacological agents such as scopolamine evoke similar effects on cognition and cerebral circulation in rodents and humans, and functional MRI enables us to compare cognitive agents directly in different species. In this paper we report the validation of a scopolamine-based rodent pharmacological MRI provocation model. The effects of putative procognitive agents (donepezil, vinpocetine, piracetam, and the alpha-7 selective cholinergic compounds EVP-6124 and PNU-120596) were compared on the blood-oxygen-level dependent responses and also linked to rodent cognitive models. These drugs revealed a significant effect on the scopolamine-induced blood-oxygen-level dependent change, except for piracetam. In the water labyrinth test, only PNU-120596 did not show a significant effect. This provocation model is suitable for testing procognitive compounds. These functional MR imaging experiments can be paralleled with human studies, which may help reduce the number of failed cognitive clinical trials. © The Author(s) 2015.
Robustness and Uncertainty: Applications for Policy in Climate and Hydrological Modeling
NASA Astrophysics Data System (ADS)
Fields, A. L., III
2015-12-01
Policymakers must often decide how to proceed when presented with conflicting simulation data from hydrological, climatological, and geological models. While laboratory sciences often appeal to the reproducibility of results to argue for the validity of their conclusions, simulations cannot use this strategy for a number of pragmatic and methodological reasons. However, robustness of predictions and causal structures can serve the same function for simulations as reproducibility does for laboratory experiments and field observations in either adjudicating between conflicting results or showing that there is insufficient justification to externally validate the results. Additionally, an interpretation of the argument from robustness is presented that involves appealing to the convergence of many well-built and diverse models rather than the more common version which involves appealing to the probability that one of a set of models is likely to be true. This interpretation strengthens the case for taking robustness as an additional requirement for the validation of simulation results and ultimately supports the idea that computer simulations can provide information about the world that is just as trustworthy as data from more traditional laboratory studies and field observations. Understanding the importance of robust results for the validation of simulation data is especially important for policymakers making decisions on the basis of potentially conflicting models. Applications will span climate, hydrological, and hydroclimatological models.
Development and Validation of Linear Alternator Models for the Advanced Stirling Convertor
NASA Technical Reports Server (NTRS)
Metscher, Jonathan F.; Lewandowski, Edward
2014-01-01
Two models of the linear alternator of the Advanced Stirling Convertor (ASC) have been developed using the Sage 1-D modeling software package. The first model relates the piston motion to electric current by means of a motor constant. The second uses electromagnetic model components to model the magnetic circuit of the alternator. The models are tuned and validated using test data and compared against each other. Results show both models can be tuned to achieve results within 7% of ASC test data under normal operating conditions. Using Sage enables a complete ASC model to be developed and simulations completed quickly compared to more complex multi-dimensional models. These models allow for better insight into overall Stirling convertor performance, aid with Stirling power system modeling, and in the future will support NASA mission planning for Stirling-based power systems.
Development and Validation of Linear Alternator Models for the Advanced Stirling Convertor
NASA Technical Reports Server (NTRS)
Metscher, Jonathan F.; Lewandowski, Edward J.
2015-01-01
Two models of the linear alternator of the Advanced Stirling Convertor (ASC) have been developed using the Sage 1-D modeling software package. The first model relates the piston motion to electric current by means of a motor constant. The second uses electromagnetic model components to model the magnetic circuit of the alternator. The models are tuned and validated using test data and also compared against each other. Results show both models can be tuned to achieve results within 7% of ASC test data under normal operating conditions. Using Sage enables a complete ASC model to be developed and simulations completed quickly compared to more complex multi-dimensional models. These models allow for better insight into overall Stirling convertor performance, aid with Stirling power system modeling, and in the future will support NASA mission planning for Stirling-based power systems.
Hofman, Jelle; Samson, Roeland
2014-09-01
Biomagnetic monitoring of tree-leaf-deposited particles has proven to be a good indicator of the ambient particulate concentration. The objective of this study is to apply this method to validate a local-scale air quality model (ENVI-met), using 96 tree crown sampling locations in a typical urban street canyon. To the best of our knowledge, the application of biomagnetic monitoring for the validation of pollutant dispersion modeling is hereby presented for the first time. Quantitative ENVI-met validation showed significant correlations between modeled and measured results throughout the entire in-leaf period. ENVI-met performed much better at the first half of the street canyon close to the ring road (r=0.58-0.79, RMSE=44-49%), compared to the second part (r=0.58-0.64, RMSE=74-102%). The spatial model behavior was evaluated by testing the effects of height, azimuthal position, tree position and distance from the main pollution source on the obtained model results and magnetic measurements. Our results demonstrate that biomagnetic monitoring seems to be a valuable method to evaluate the performance of air quality models. Due to the high spatial and temporal resolution of this technique, biomagnetic monitoring can be applied anywhere in the city (where urban green is present) to evaluate model performance at different spatial scales. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Yu, Hesheng; Thé, Jesse
2016-11-01
The prediction of the dispersion of air pollutants in urban areas is of great importance to public health, homeland security, and environmental protection. Computational Fluid Dynamics (CFD) has emerged as an effective tool for pollutant dispersion modelling. This paper reports and quantitatively validates the shear stress transport (SST) k-ω turbulence closure model and its transitional variant for pollutant dispersion under a complex urban environment for the first time. Sensitivity analysis is performed to establish recommendations for the proper use of turbulence models in urban settings. The current SST k-ω simulation is validated rigorously against extensive experimental data using the hit rate for velocity components, and the "factor of two" of observations (FAC2) and fractional bias (FB) for the concentration field. The simulation results show that the current SST k-ω model can predict the flow field well, with an overall hit rate of 0.870, and concentration dispersion with FAC2 = 0.721 and FB = 0.045. The flow simulation of the current SST k-ω model is slightly inferior to that of a detached eddy simulation (DES), but better than that of the standard k-ε model. However, the current study performs best among these three model approaches when validated against measurements of pollutant dispersion in the atmosphere. This work aims to provide recommendations for the proper use of CFD to predict pollutant dispersion in urban environments.
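The FAC2 and FB metrics reported above have standard definitions: FAC2 is the fraction of predictions within a factor of two of the paired observations, and FB is the difference of the means normalized by their average. A minimal sketch with invented concentration values (not the study's data):

```python
import numpy as np

def fac2(observed, predicted):
    """Fraction of predictions within a factor of two of the observations."""
    o, p = np.asarray(observed, float), np.asarray(predicted, float)
    ratio = p / o
    return np.mean((ratio >= 0.5) & (ratio <= 2.0))

def fractional_bias(observed, predicted):
    """FB = (mean_o - mean_p) / (0.5 * (mean_o + mean_p)); 0 means no bias."""
    o, p = np.asarray(observed, float), np.asarray(predicted, float)
    return (o.mean() - p.mean()) / (0.5 * (o.mean() + p.mean()))

# Invented paired concentrations at four receptors
obs = [1.0, 2.0, 4.0, 8.0]
pred = [1.1, 1.9, 4.5, 7.0]
print(fac2(obs, pred))                       # → 1.0
print(round(fractional_bias(obs, pred), 3))  # → 0.034
```

By these conventions, a perfect model has FAC2 = 1 and FB = 0; FB > 0 indicates underprediction of the mean and FB < 0 overprediction, so the reported FAC2 = 0.721 and FB = 0.045 describe a model with most predictions within a factor of two and a small mean underprediction.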
Electromagnetic Compatibility Testing Studies
NASA Technical Reports Server (NTRS)
Trost, Thomas F.; Mitra, Atindra K.
1996-01-01
This report discusses results on analytical models, and on the measurement and simulation of statistical properties, from a study of microwave reverberation (mode-stirred) chambers performed at Texas Tech University. Two analytical models of power transfer vs. frequency in a chamber, one for antenna-to-antenna transfer and the other for antenna-to-D-dot-sensor transfer, were experimentally validated in our chamber. Two examples are presented of the measurement and calculation of chamber Q, one for each of the models. Measurements of EM power density validate a theoretical probability distribution on and away from the chamber walls, and also yield a distribution with larger standard deviation at frequencies below the range of validity of the theory. Measurements of EM power density at pairs of points validate a theoretical spatial correlation function on the chamber walls, and also yield a correlation function with larger correlation length, R(sub corr), at frequencies below the range of validity of the theory. A numerical simulation, employing a rectangular cavity with a moving wall, shows agreement with the measurements. We determined that the lowest frequency at which the theoretical spatial correlation function is valid in our chamber is considerably higher than the lowest frequency recommended by current guidelines for utilizing reverberation chambers in EMC testing. Two suggestions are made for future studies related to EMC testing.
Sharma, Mukesh C; Sharma, S
2016-12-01
A series of 2-dihydro-4-quinazoline derivatives, potent and highly selective inhibitors of inducible nitric oxide synthase, was subjected to quantitative structure-activity relationship (QSAR) analysis. Statistically significant equations with a high correlation coefficient (r 2 = 0.8219) were developed. The k-nearest neighbor model showed good cross-validated and external validation correlation coefficients of 0.7866 and 0.7133, respectively. Among the selected electrostatic field descriptors, the presence of blue balls around R1 and R4 in the quinazolinamine moiety indicated that electronegative groups are favorable for nitric oxide synthase activity. The QSAR models may reveal the structural requirements of inducible nitric oxide synthase inhibitors and help in the design of new compounds.
Akram, Waqas; Hussein, Maryam S E; Ahmad, Sohail; Mamat, Mohd N; Ismail, Nahlah E
2015-10-01
There is no instrument that collectively assesses the knowledge, attitude and perceived practice of asthma among community pharmacists. Therefore, this study aimed to validate an instrument measuring the knowledge, attitude and perceived practice of asthma among community pharmacists by producing empirical evidence of the validity and reliability of its items using the Rasch model (Bond & Fox software®) for dichotomous and polytomous data. This baseline study recruited 33 community pharmacists from Penang, Malaysia. The results showed that all PTMEA Corr values were positive, indicating that each item was able to distinguish between respondents of differing ability. Based on the MNSQ infit and outfit range (0.60-1.40), 2 of the 55 items were suggested for removal from the instrument. The findings indicated that the instrument fitted the Rasch measurement model and showed acceptable reliability values of 0.88, 0.83 and 0.79 for knowledge, attitude and perceived practice, respectively.
Jahn, Beate; Rochau, Ursula; Kurzthaler, Christina; Paulden, Mike; Kluibenschädl, Martina; Arvandi, Marjan; Kühne, Felicitas; Goehler, Alexander; Krahn, Murray D; Siebert, Uwe
2016-04-01
Breast cancer is the most common malignancy among women in developed countries. We developed a model (the Oncotyrol breast cancer outcomes model) to evaluate the cost-effectiveness of a 21-gene assay when used in combination with Adjuvant! Online to support personalized decisions about the use of adjuvant chemotherapy. The goal of this study was to perform a cross-model validation. The Oncotyrol model evaluates the 21-gene assay by simulating a hypothetical cohort of 50-year-old women over a lifetime horizon using discrete event simulation. Primary model outcomes were life-years, quality-adjusted life-years (QALYs), costs, and incremental cost-effectiveness ratios (ICERs). We followed the International Society for Pharmacoeconomics and Outcomes Research-Society for Medical Decision Making (ISPOR-SMDM) best practice recommendations for validation and compared modeling results of the Oncotyrol model with the state-transition model developed by the Toronto Health Economics and Technology Assessment (THETA) Collaborative. Both models were populated with Canadian THETA model parameters, and outputs were compared. The differences between the models varied among the different validation end points. The smallest relative differences were in costs, and the greatest were in QALYs. All relative differences were less than 1.2%. The cost-effectiveness plane showed that small differences in the model structure can lead to different sets of nondominated test-treatment strategies with different efficiency frontiers. We faced several challenges: distinguishing between differences in outcomes due to different modeling techniques and initial coding errors, defining meaningful differences, and selecting measures and statistics for comparison (means, distributions, multivariate outcomes). Cross-model validation was crucial to identify and correct coding errors and to explain differences in model outcomes. 
In our comparison, small differences in either QALYs or costs led to changes in ICERs because of changes in the set of dominated and nondominated strategies. © The Author(s) 2015.
Development and validation of a prediction model for functional decline in older medical inpatients.
Takada, Toshihiko; Fukuma, Shingo; Yamamoto, Yosuke; Tsugihashi, Yukio; Nagano, Hiroyuki; Hayashi, Michio; Miyashita, Jun; Azuma, Teruhisa; Fukuhara, Shunichi
2018-05-17
To prevent functional decline in older inpatients, identification of high-risk patients is crucial. The aim of this study was to develop and validate a prediction model to assess the risk of functional decline in older medical inpatients. In this retrospective cohort study, patients ≥65 years admitted acutely to medical wards were included. A healthcare database of 246 acute care hospitals (n = 229,913) was used for derivation, and two acute care hospitals (n = 1767 and 5443, respectively) were used for validation. Data were collected using a national administrative claims and discharge database. Functional decline was defined as a decline in the Katz score at discharge compared with admission. About 6% of patients in the derivation cohort, and 9% and 2% in the respective validation cohorts, developed functional decline. A model with 7 items (age, body mass index, living in a nursing home, ambulance use, need for assistance in walking, dementia, and bedsores) was developed. On internal validation, it demonstrated a c-statistic of 0.77 (95% confidence interval (CI) = 0.767-0.771) and good fit on the calibration plot. On external validation, the c-statistics were 0.79 (95% CI = 0.77-0.81) and 0.75 (95% CI = 0.73-0.77) for the two cohorts, respectively. Calibration plots showed good fit in one cohort and overestimation in the other. A prediction model for functional decline in older medical inpatients was derived and validated. Use of the model is expected to lead to early identification of high-risk patients and the introduction of early interventions. Copyright © 2018 Elsevier B.V. All rights reserved.
Maarsingh, O R; Heymans, M W; Verhaak, P F; Penninx, B W J H; Comijs, H C
2018-08-01
Given the poor prognosis of late-life depression, it is crucial to identify those at risk. Our objective was to construct and validate a prediction rule for an unfavourable course of late-life depression. For development and internal validation of the model, we used The Netherlands Study of Depression in Older Persons (NESDO) data. We included participants with a major depressive disorder (MDD) at baseline (n = 270; 60-90 years), assessed with the Composite International Diagnostic Interview (CIDI). For external validation of the model, we used The Netherlands Study of Depression and Anxiety (NESDA) data (n = 197; 50-66 years). The outcome was MDD after 2 years of follow-up, assessed with the CIDI. Candidate predictors concerned sociodemographics, psychopathology, physical symptoms, medication, psychological determinants, and healthcare setting. Model performance was assessed by calculating calibration and discrimination. 111 subjects (41.1%) had MDD after 2 years of follow-up. Independent predictors of MDD after 2 years were (older) age, (early) onset of depression, severity of depression, anxiety symptoms, comorbid anxiety disorder, fatigue, and loneliness. The final model showed good calibration and reasonable discrimination (AUC of 0.75; 0.70 after external validation). The strongest individual predictor was severity of depression (AUC of 0.69; 0.68 after external validation). The model was developed and validated in The Netherlands, which could affect the cross-country generalizability. Based on rather simple clinical indicators, it is possible to predict the 2-year course of MDD. The prediction rule can be used for monitoring MDD patients and identifying those at risk of an unfavourable outcome. Copyright © 2018 Elsevier B.V. All rights reserved.
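The AUC (c-statistic) reported in prediction-model studies like the two above can be computed without explicit ROC integration via its rank-based (Mann-Whitney) form. A minimal sketch with invented labels and scores, assuming no tied scores:

```python
import numpy as np

def auc(y_true, scores):
    # Rank-based AUC, equivalent to the c-statistic (ties are not handled here)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = int(y_true.sum())
    n_neg = len(y_true) - n_pos
    # Mann-Whitney U statistic for the positive class, normalised to [0, 1]
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

y = np.array([0, 0, 1, 1])           # 1 = unfavourable outcome (illustrative)
s = np.array([0.1, 0.4, 0.35, 0.8])  # hypothetical predicted risks
print(auc(y, s))                     # 0.75
```

An AUC of 0.5 corresponds to chance discrimination; values around 0.7-0.8, as in these abstracts, indicate reasonable to good discrimination.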
Statistical Selection of Biological Models for Genome-Wide Association Analyses.
Bi, Wenjian; Kang, Guolian; Pounds, Stanley B
2018-05-24
Genome-wide association studies have discovered many biologically important associations of genes with phenotypes. Typically, genome-wide association analyses formally test the association of each genetic feature (SNP, CNV, etc.) with the phenotype of interest and summarize the results with multiplicity-adjusted p-values. However, very small p-values only provide evidence against the null hypothesis of no association without indicating which biological model best explains the observed data. Correctly identifying a specific biological model may improve the scientific interpretation and can be used to more effectively select and design a follow-up validation study. Thus, statistical methodology to identify the correct biological model for a particular genotype-phenotype association can be very useful to investigators. Here, we propose a general statistical method to summarize how accurately each of five biological models (null, additive, dominant, recessive, co-dominant) represents the data observed for each variant in a genome-wide association study. We show that the new method stringently controls the false discovery rate and asymptotically selects the correct biological model. Simulations of two-stage discovery-validation studies show that the new method has these properties and that its validation power is similar to or exceeds that of simple methods that use the same statistical model for all SNPs. Example analyses of three data sets also highlight these advantages of the new method. An R package is freely available at www.stjuderesearch.org/site/depts/biostats/maew. Copyright © 2018. Published by Elsevier Inc.
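The additive, dominant, and recessive models named above correspond to different numeric recodings of the minor-allele count. The paper's method additionally covers null and co-dominant models and controls the false discovery rate; the sketch below only illustrates the standard codings and a naive per-variant model choice by BIC, with all data invented:

```python
import numpy as np

# Genotype recodings for common biological models (g = minor-allele count, 0/1/2)
CODINGS = {
    "additive":  lambda g: g.astype(float),          # effect proportional to allele count
    "dominant":  lambda g: (g >= 1).astype(float),   # one copy is enough
    "recessive": lambda g: (g == 2).astype(float),   # two copies required
}

def best_model(g, y):
    # Fit y ~ intercept + coded(g) by least squares for each coding; pick lowest BIC.
    n = len(y)
    bic = {}
    for name, code in CODINGS.items():
        X = np.column_stack([np.ones(n), code(g)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = np.sum((y - X @ beta) ** 2)
        bic[name] = n * np.log(rss / n) + 2 * np.log(n)  # BIC up to an additive constant
    return min(bic, key=bic.get)

# Synthetic phenotype with a purely recessive genetic effect
rng = np.random.default_rng(1)
g = rng.integers(0, 3, size=200)
y = 2.0 * (g == 2) + rng.normal(scale=0.1, size=200)
print(best_model(g, y))  # "recessive"
```

Because all three candidate models have the same number of parameters here, the choice reduces to the residual sum of squares; the BIC penalty matters once models of different complexity (e.g. co-dominant) are added.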
Spacecraft Internal Acoustic Environment Modeling
NASA Technical Reports Server (NTRS)
Chu, Shao-Sheng R.; Allen, Christopher S.
2010-01-01
Acoustic modeling can be used to identify key noise sources, determine/analyze sub-allocated requirements, keep track of the accumulation of minor noise sources, and to predict vehicle noise levels at various stages in vehicle development, first with estimates of noise sources, later with experimental data. This paper describes the implementation of acoustic modeling for design purposes by incrementally increasing model fidelity and validating the accuracy of the model while predicting the noise of sources under various conditions. During FY 07, a simple-geometry Statistical Energy Analysis (SEA) model was developed and validated using a physical mockup and acoustic measurements. A process for modeling the effects of absorptive wall treatments and the resulting reverberation environment were developed. During FY 08, a model with more complex and representative geometry of the Orion Crew Module (CM) interior was built, and noise predictions based on input noise sources were made. A corresponding physical mockup was also built. Measurements were made inside this mockup, and comparisons were made with the model and showed excellent agreement. During FY 09, the fidelity of the mockup and corresponding model were increased incrementally by including a simple ventilation system. The airborne noise contribution of the fans was measured using a sound intensity technique, since the sound power levels were not known beforehand. This is opposed to earlier studies where Reference Sound Sources (RSS) with known sound power level were used. Comparisons of the modeling result with the measurements in the mockup showed excellent results. During FY 10, the fidelity of the mockup and the model were further increased by including an ECLSS (Environmental Control and Life Support System) wall, associated closeout panels, and the gap between ECLSS wall and mockup wall. The effect of sealing the gap and adding sound absorptive treatment to ECLSS wall were also modeled and validated.
Ahmad, Sohail; Ismail, Ahmad Izuanuddin; Khan, Tahir Mehmood; Akram, Waqas; Mohd Zim, Mohd Arif; Ismail, Nahlah Elkudssiah
2017-04-01
The stigmatisation degree, self-esteem and knowledge either directly or indirectly influence the control and self-management of asthma. To date, there is no valid and reliable instrument that can assess these key issues collectively. The main aim of this study was to test the reliability and validity of the newly devised and translated "Stigmatisation Degree, Self-Esteem and Knowledge Questionnaire" among adult asthma patients using the Rasch measurement model. This cross-sectional study recruited thirty adult asthma patients from two respiratory specialist clinics in Selangor, Malaysia. The newly devised self-administered questionnaire was adapted from relevant publications and translated into the Malay language using international standard translation guidelines. Content and face validation was done. The data were extracted and analysed for real item reliability and construct validation using the Rasch model. The translated "Stigmatisation Degree, Self-Esteem and Knowledge Questionnaire" showed high real item reliability values of 0.90, 0.86 and 0.89 for stigmatisation degree, self-esteem, and knowledge of asthma, respectively. Furthermore, all values of point measure correlation (PTMEA Corr) analysis were within the acceptable specified range of the Rasch model. Infit/outfit mean square values and Z standard (ZSTD) values of each item verified the construct validity and suggested retaining all the items in the questionnaire. The reliability analyses and output tables of item measures for construct validation proved the translated Malaysian version of "Stigmatisation Degree, Self-Esteem and Knowledge Questionnaire" as a valid and highly reliable questionnaire.
De Leersnyder, Fien; Peeters, Elisabeth; Djalabi, Hasna; Vanhoorne, Valérie; Van Snick, Bernd; Hong, Ke; Hammond, Stephen; Liu, Angela Yang; Ziemons, Eric; Vervaet, Chris; De Beer, Thomas
2018-03-20
A calibration model for in-line API quantification, based on near-infrared (NIR) spectra collected in the tablet press feed frame during tableting, was developed and validated. First, the measurement set-up was optimised and the effect of the filling degree of the feed frame on the NIR spectra was investigated. Secondly, a predictive API quantification model was developed and validated by calculating the accuracy profile based on the analysis results of validation experiments. Furthermore, based on the data of the accuracy profile, the measurement uncertainty was determined. Finally, the robustness of the API quantification model was evaluated. An NIR probe (SentroPAT FO) was implemented in the feed frame of a rotary tablet press (Modul™ P) to monitor physical mixtures of a model API (sodium saccharine) and excipients at two API target concentrations: 5 and 20% (w/w). Cutting notches into the paddle wheel fingers avoided disturbances of the NIR signal caused by the rotating fingers and hence allowed better and more complete feed frame monitoring. The design of the notched paddle wheel fingers was also investigated, showing that straight fingers caused less variation in the NIR signal than curved fingers. The filling degree of the feed frame was reflected in the raw NIR spectra. Several calibration models for the prediction of the API content were developed, based on single or averaged spectra and using partial least squares (PLS) regression or ratio models. These predictive models were then evaluated and validated by processing physical mixtures with API concentrations not used in the calibration models (validation set). The β-expectation tolerance intervals were calculated for each model and for each of the validated API concentration levels (β was set at 95%). PLS models showed the best predictive performance.
For each examined saccharine concentration range (i.e., between 4.5 and 6.5% and between 15 and 25%), at least 95% of future measurements will not deviate more than 15% from the true value. Copyright © 2018 Elsevier B.V. All rights reserved.
Jung, Seung-Hyun; Cho, Sung-Min; Yim, Seon-Hee; Kim, So-Hee; Park, Hyeon-Chun; Cho, Mi-La; Shim, Seung-Cheol; Kim, Tae-Hwan; Park, Sung-Hwan; Chung, Yeun-Jun
2016-12-01
To develop a genotype-based ankylosing spondylitis (AS) risk prediction model that is more sensitive and specific than HLA-B27 typing. To develop the AS genetic risk scoring (AS-GRS) model, 648 individuals (285 cases and 363 controls) were examined for 5 copy number variants (CNV), 7 single-nucleotide polymorphisms (SNP), and an HLA-B27 marker by TaqMan assays. The AS-GRS model was developed using logistic regression and validated with a larger independent set (576 cases and 680 controls). Through logistic regression, we built the AS-GRS model consisting of 5 genetic components: HLA-B27, 3 CNV (1q32.2, 13q13.1, and 16p13.3), and 1 SNP (rs10865331). All significant associations of genetic factors in the model were replicated in the independent validation set. The discriminative ability of the AS-GRS model measured by the area under the curve was excellent: 0.976 (95% CI 0.96-0.99) in the model construction set and 0.951 (95% CI 0.94-0.96) in the validation set. The AS-GRS model showed higher specificity and accuracy than the HLA-B27-only model when the sensitivity was set to over 94%. When we categorized the individuals into quartiles based on the AS-GRS scores, OR of the 4 groups (low, intermediate-1, intermediate-2, and high risk) showed an increasing trend with the AS-GRS scores (r 2 = 0.950) and the highest risk group showed a 494× higher risk of AS than the lowest risk group (95% CI 237.3-1029.1). Our AS-GRS could be used to identify individuals at high risk for AS before major symptoms appear, which may improve the prognosis for them through early treatment.
Simulating the Cyclone Induced Turbulent Mixing in the Bay of Bengal using COAWST Model
NASA Astrophysics Data System (ADS)
Prakash, K. R.; Nigam, T.; Pant, V.
2017-12-01
Mixing in the upper oceanic layers (up to a few tens of meters from the surface) is an important process for understanding the evolution of sea surface properties. Enhanced mixing due to strong wind forcing at the surface deepens the mixed layer, which affects the air-sea exchange of heat and momentum fluxes and modulates sea surface temperature (SST). In the present study, we used the Coupled-Ocean-Atmosphere-Wave-Sediment Transport (COAWST) model to demonstrate and quantify enhanced cyclone-induced turbulent mixing in the case of a severe cyclonic storm. The COAWST model was configured over the Bay of Bengal (BoB) and used to simulate the atmospheric and oceanic conditions prevailing during tropical cyclone (TC) Phailin, which occurred over the BoB during 10-15 October 2013. The simulated cyclone track was validated against the IMD best track, and the simulated SST against daily AVHRR SST data. The validation shows that the simulated track and intensity, SST, and salinity were in good agreement with observations, and that the cyclone-induced cooling of the sea surface was well captured by the model. Model simulations show a considerable deepening (by 10-15 m) of the mixed layer and shoaling of the thermocline during TC Phailin. Power spectrum analysis was performed on the zonal and meridional baroclinic current components, which shows the strongest energy at 14 m depth. Model results were analyzed to investigate the non-uniform energy distribution in the water column from the surface to the thermocline depth. Rotary spectra analysis highlights the downward direction of turbulent mixing during the TC Phailin period. Model simulations were used to quantify and interpret the near-inertial mixing generated by cyclone-induced strong wind stress and near-inertial energy. These near-inertial oscillations are responsible for enhancing mixing within the strong post-monsoon (October-November) stratification in the BoB.
Multisample cross-validation of a model of childhood posttraumatic stress disorder symptomatology.
Anthony, Jason L; Lonigan, Christopher J; Vernberg, Eric M; La Greca, Annette M; Silverman, Wendy K; Prinstein, Mitchell J
2005-12-01
This study is the latest advancement of our research aimed at best characterizing children's posttraumatic stress reactions. In a previous study, we compared existing nosologic and empirical models of PTSD dimensionality and determined the superior model was a hierarchical one with three symptom clusters (Intrusion/Active Avoidance, Numbing/Passive Avoidance, and Arousal; Anthony, Lonigan, & Hecht, 1999). In this study, we cross-validate this model in two populations. Participants were 396 fifth graders who were exposed to either Hurricane Andrew or Hurricane Hugo. Multisample confirmatory factor analysis demonstrated the model's factorial invariance across populations who experienced traumatic events that differed in severity. These results show the model's robustness to characterize children's posttraumatic stress reactions. Implications for diagnosis, classification criteria, and an empirically supported theory of PTSD are discussed.
NASA Astrophysics Data System (ADS)
Morrissey, Liam S.; Nakhla, Sam
2018-07-01
The effect of porosity on elastic modulus in low-porosity materials is investigated. First, several models used to predict the reduction in elastic modulus due to porosity are compared with a compilation of experimental data to determine their ranges of validity and accuracy. The overlapping solid spheres model is found to agree most closely with the experimental data and to be valid between 3 and 10 pct porosity. Next, a finite element model (FEM) is developed with the objective of demonstrating that a macroscale plate with a center hole can be used to model the effect of microscale porosity on elastic modulus. The FEM agrees best with the overlapping solid spheres model and shows higher accuracy against the experimental data than the overlapping solid spheres model itself.
Shen, Minxue; Cui, Yuanwu; Hu, Ming; Xu, Linyong
2017-01-13
The study aimed to validate a scale assessing the severity of the "Yin deficiency, intestine heat" pattern of functional constipation based on modern test theory. Pooled longitudinal data from 237 patients with the "Yin deficiency, intestine heat" pattern of constipation, drawn from a prospective cohort study, were used to validate the scale. Exploratory factor analysis was used to examine the common factors of the items. A multidimensional item response model was used to assess the scale in the presence of multidimensionality. Cronbach's alpha ranged from 0.79 to 0.89, and the split-half reliability ranged from 0.67 to 0.79 across measurements. Exploratory factor analysis identified two common factors, and all items had cross-factor loadings. The bidimensional model had better goodness of fit than the unidimensional model. The multidimensional item response model showed that all items had moderate to high discrimination parameters. The parameters indicated that the first latent trait signified intestine heat, while the second trait characterized Yin deficiency. The information function showed that the items had the highest discrimination power among patients with moderate to high disease severity. Multidimensional item response theory provides a useful and rational approach for validating scales that assess the severity of patterns in traditional Chinese medicine.
The Safety Culture Enactment Questionnaire (SCEQ): Theoretical model and empirical validation.
de Castro, Borja López; Gracia, Francisco J; Tomás, Inés; Peiró, José M
2017-06-01
This paper presents the Safety Culture Enactment Questionnaire (SCEQ), designed to assess the degree to which safety is an enacted value in the day-to-day running of nuclear power plants (NPPs). The SCEQ is based on a theoretical safety culture model that is manifested in three fundamental components of the functioning and operation of any organization: strategic decisions, human resources practices, and daily activities and behaviors. The extent to which the importance of safety is enacted in each of these three components provides information about the pervasiveness of the safety culture in the NPP. To validate the SCEQ and the model on which it is based, two separate studies were carried out with data collection in 2008 and 2014, respectively. In Study 1, the SCEQ was administered to the employees of two Spanish NPPs (N=533) belonging to the same company. Participants in Study 2 included 598 employees from the same NPPs, who completed the SCEQ and other questionnaires measuring different safety outcomes (safety climate, safety satisfaction, job satisfaction and risky behaviors). Study 1 comprised item formulation and examination of the factorial structure and reliability of the SCEQ. Study 2 tested internal consistency and provided evidence of factorial validity, validity based on relationships with other variables, and discriminant validity between the SCEQ and safety climate. Exploratory Factor Analysis (EFA) carried out in Study 1 revealed a three-factor solution corresponding to the three components of the theoretical model. Reliability analyses showed strong internal consistency for the three scales of the SCEQ, and each of the 21 items on the questionnaire contributed to the homogeneity of its theoretically developed scale. Confirmatory Factor Analysis (CFA) carried out in Study 2 supported the internal structure of the SCEQ; internal consistency of the scales was also supported. 
Furthermore, the three scales of the SCEQ showed the expected correlation patterns with the measured safety outcomes. Finally, results provided evidence of discriminant validity between the SCEQ and safety climate. We conclude that the SCEQ is a valid, reliable instrument supported by a theoretical framework, and it is useful to measure the enactment of safety culture in NPPs. Copyright © 2017 Elsevier Ltd. All rights reserved.
Validating the simulation of large-scale parallel applications using statistical characteristics
Zhang, Deli; Wilke, Jeremiah; Hendry, Gilbert; ...
2016-03-01
Simulation is a widely adopted method to analyze and predict the performance of large-scale parallel applications. Validating the hardware model is highly important for complex simulations with a large number of parameters. Common practice involves calculating the percent error between the projected and the real execution time of a benchmark program. However, in a high-dimensional parameter space, this coarse-grained approach often suffers from parameter insensitivity, which may not be known a priori. Moreover, the traditional approach cannot be applied to the validation of software models, such as application skeletons used in online simulations. In this work, we present a methodology and a toolset for validating both hardware and software models by quantitatively comparing fine-grained statistical characteristics obtained from execution traces. Although statistical information has been used in tasks like performance optimization, this is the first attempt to apply it to simulation validation. Lastly, our experimental results show that the proposed evaluation approach offers significant improvement in fidelity when compared to evaluation using total execution time, and the proposed metrics serve as reliable criteria that progress toward automating the simulation tuning process.
Beyond Corroboration: Strengthening Model Validation by Looking for Unexpected Patterns
Chérel, Guillaume; Cottineau, Clémentine; Reuillon, Romain
2015-01-01
Models of emergent phenomena are designed to provide an explanation to global-scale phenomena from local-scale processes. Model validation is commonly done by verifying that the model is able to reproduce the patterns to be explained. We argue that robust validation must not only be based on corroboration, but also on attempting to falsify the model, i.e. making sure that the model behaves soundly for any reasonable input and parameter values. We propose an open-ended evolutionary method based on Novelty Search to look for the diverse patterns a model can produce. The Pattern Space Exploration method was tested on a model of collective motion and compared to three common a priori sampling experiment designs. The method successfully discovered all known qualitatively different kinds of collective motion, and performed much better than the a priori sampling methods. The method was then applied to a case study of city system dynamics to explore the model’s predicted values of city hierarchisation and population growth. This case study showed that the method can provide insights on potential predictive scenarios as well as falsifiers of the model when the simulated dynamics are highly unrealistic. PMID:26368917
A Case Study on a Combination NDVI Forecasting Model Based on the Entropy Weight Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Shengzhi; Ming, Bo; Huang, Qiang
It is critically important to accurately predict NDVI (Normalized Difference Vegetation Index), which helps guide regional ecological remediation and environmental management. In this study, a combination forecasting model (CFM) was proposed to improve the performance of NDVI predictions in the Yellow River Basin (YRB) based on three individual forecasting models, i.e., the Multiple Linear Regression (MLR), Artificial Neural Network (ANN), and Support Vector Machine (SVM) models. The entropy weight method was employed to determine the weight coefficient for each individual model depending on its predictive performance. Results showed that: (1) ANN exhibits the highest fitting capability among the four forecasting models in the calibration period, whilst its generalization ability becomes weak in the validation period; MLR performs poorly in both calibration and validation periods; the predicted results of CFM in the calibration period have the highest stability; (2) CFM generally outperforms all individual models in the validation period, and can improve the reliability and stability of predicted results by combining the strengths of the individual models while reducing their weaknesses; (3) the performance of all forecasting models is better in dense vegetation areas than in sparse vegetation areas.
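The entropy weight method referred to above assigns larger weights to models whose performance varies more informatively across samples. A minimal sketch under the common formulation (the performance matrix is invented, not taken from the study):

```python
import numpy as np

def entropy_weights(perf):
    # perf: (n_samples, n_models) matrix of positive performance scores.
    # Columns with more variation carry more information and get larger weights.
    n = perf.shape[0]
    p = perf / perf.sum(axis=0)                    # column-wise proportions
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(p > 0, p * np.log(p), 0.0)
    e = -plogp.sum(axis=0) / np.log(n)             # normalised entropy per model, in [0, 1]
    d = 1.0 - e                                    # degree of diversification
    return d / d.sum()                             # weights summing to 1

# Hypothetical per-sample scores for three models (columns); the last is constant
perf = np.array([[0.90, 0.60, 0.80],
                 [0.80, 0.70, 0.80],
                 [0.95, 0.50, 0.80]])
w = entropy_weights(perf)
print(w)  # sums to 1; the constant third column receives zero weight
```

The combined forecast is then the weighted sum of the individual model outputs using these weights.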
Boerboom, T B B; Dolmans, D H J M; Jaarsma, A D C; Muijtjens, A M M; Van Beukelen, P; Scherpbier, A J J A
2011-01-01
Feedback to aid teachers in improving their teaching requires validated evaluation instruments. When implementing an evaluation instrument in a different context, it is important to collect validity evidence from multiple sources. We examined the validity and reliability of the Maastricht Clinical Teaching Questionnaire (MCTQ) as an instrument to evaluate individual clinical teachers during short clinical rotations in veterinary education. We examined four sources of validity evidence: (1) Content was examined based on theory of effective learning. (2) Response process was explored in a pilot study. (3) Internal structure was assessed by confirmatory factor analysis using 1086 student evaluations and reliability was examined utilizing generalizability analysis. (4) Relations with other relevant variables were examined by comparing factor scores with other outcomes. Content validity was supported by theory underlying the cognitive apprenticeship model on which the instrument is based. The pilot study resulted in an additional question about supervision time. A five-factor model showed a good fit with the data. Acceptable reliability was achievable with 10-12 questionnaires per teacher. Correlations between the factors and overall teacher judgement were strong. The MCTQ appears to be a valid and reliable instrument to evaluate clinical teachers' performance during short rotations.
The effect of leverage and/or influential on structure-activity relationships.
Bolboacă, Sorana D; Jäntschi, Lorentz
2013-05-01
In the spirit of reporting valid and reliable Quantitative Structure-Activity Relationship (QSAR) models, the aim of our research was to assess how the leverage (analysis with Hat matrix, h(i)) and the influential (analysis with Cook's distance, D(i)) of QSAR models may reflect the models reliability and their characteristics. The datasets included in this research were collected from previously published papers. Seven datasets which accomplished the imposed inclusion criteria were analyzed. Three models were obtained for each dataset (full-model, h(i)-model and D(i)-model) and several statistical validation criteria were applied to the models. In 5 out of 7 sets the correlation coefficient increased when compounds with either h(i) or D(i) higher than the threshold were removed. Withdrawn compounds varied from 2 to 4 for h(i)-models and from 1 to 13 for D(i)-models. Validation statistics showed that D(i)-models possess systematically better agreement than both full-models and h(i)-models. Removal of influential compounds from training set significantly improves the model and is recommended to be conducted in the process of quantitative structure-activity relationships developing. Cook's distance approach should be combined with hat matrix analysis in order to identify the compounds candidates for removal.
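For readers unfamiliar with the two diagnostics, a minimal numpy sketch of leverage (the diagonal of the hat matrix) and Cook's distance on a synthetic dataset, using commonly quoted cutoffs (the 3p/n and 4/n thresholds are conventional choices, not necessarily the ones used in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 30, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
X[0, 1] = 6.0                      # one deliberately extreme "compound"
beta = np.array([1.0, 2.0, -1.0])
y = X @ beta + rng.normal(0, 0.5, n)

H = X @ np.linalg.inv(X.T @ X) @ X.T       # hat matrix
h = np.diag(H)                             # leverages h_i
resid = y - H @ y
s2 = resid @ resid / (n - p)               # residual variance estimate
# Cook's distance D_i combines residual size and leverage.
D = resid**2 / (p * s2) * h / (1 - h) ** 2

lev_flag = h > 3 * p / n          # common leverage threshold
cook_flag = D > 4 / n             # common Cook's-distance threshold
print(np.where(lev_flag)[0], np.where(cook_flag)[0])
```

A useful sanity check is that the leverages always sum to the number of model parameters p (the trace of the hat matrix).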
Simulating the evolution of glyphosate resistance in grains farming in northern Australia
Thornby, David F.; Walker, Steve R.
2009-01-01
Background and Aims The evolution of resistance to herbicides is a substantial problem in contemporary agriculture. Solutions to this problem generally consist of the use of practices to control the resistant population once it evolves, and/or to institute preventative measures before populations become resistant. Herbicide resistance evolves in populations over years or decades, so predicting the effectiveness of preventative strategies in particular relies on computational modelling approaches. While models of herbicide resistance already exist, none deals with the complex regional variability in the northern Australian sub-tropical grains farming region. For this reason, a new computer model was developed. Methods The model consists of an age- and stage-structured population model of weeds, with an existing crop model used to simulate plant growth and competition, and extensions to the crop model added to simulate seed bank ecology and population genetics factors. Using awnless barnyard grass (Echinochloa colona) as a test case, the model was used to investigate the likely rate of evolution under conditions expected to produce high selection pressure. Key Results Simulating continuous summer fallows with glyphosate used as the only means of weed control resulted in predicted resistant weed populations after approx. 15 years. Validation of the model against the paddock history for the first real-world glyphosate-resistant awnless barnyard grass population shows that the model predicted resistance evolution to within a few years of the real situation. Conclusions This validation work shows that empirical validation of herbicide resistance models is problematic. However, the model simulates the complexities of sub-tropical grains farming in Australia well, and can be used to investigate, generate and improve glyphosate resistance prevention strategies. PMID:19567415
NASA Astrophysics Data System (ADS)
Salon, Stefano; Cossarini, Gianpiero; Bolzon, Giorgio; Teruzzi, Anna
2017-04-01
The Mediterranean Monitoring and Forecasting Centre (Med-MFC) is one of the regional production centres of the EU Copernicus Marine Environment Monitoring Service (CMEMS). Med-MFC manages a suite of numerical model systems for the operational delivery of the CMEMS products, providing continuous monitoring and forecasting of the Mediterranean marine environment. The CMEMS products of fundamental biogeochemical variables (chlorophyll, nitrate, phosphate, oxygen, phytoplankton biomass, primary productivity, pH, pCO2) are organised as gridded datasets and are available at the marine.copernicus.eu web portal. Quantitative estimates of CMEMS product accuracy are prerequisites to release reliable information to intermediate users, end users and other downstream services. In particular, validation activities aim to deliver accuracy information on the model products and to serve as long-term monitoring of the performance of the modelling systems. The quality assessment of model output is implemented using a multiple-stage approach, broadly inspired by the classic "GODAE 4 Classes" metrics and criteria (consistency, quality, performance and benefit). Firstly, pre-operational runs qualify the operational model system against historical data, also providing a verification of the improvements of the new model system release with respect to the previous version. Then, the near real time (NRT) validation aims at delivering a sustained on-line skill assessment of the model analysis and forecast, relying on the relevant observations available in NRT (e.g. in situ, Bio-Argo and satellite observations). NRT validation is run on a weekly basis and the results are published on the MEDEAF web portal (www.medeaf.inogs.it). On a quarterly basis, the integration of the NRT validation activities delivers a comprehensive view of the accuracy of the model forecast through the official CMEMS validation webpage. Multi-year production (e.g. reanalysis runs) follows a similar procedure, and the validation is achieved using the same metrics on available historical observations (e.g. the World Ocean Atlas 2013 dataset). Results of the validation activities show that the comparison of the different variables of the CMEMS products with experimental data is feasible at different levels (i.e. both as skill assessment of the short-term forecast and as model consistency across different system versions) and at different spatial and temporal scales. In particular, the accuracy of some variables (chlorophyll, nitrate, oxygen) can be provided at weekly scale and sub-mesoscale, others (carbonate system, phosphate) at quarterly/annual and sub-basin scale, and others (phytoplankton biomass, primary production) only at the level of consistency of model functioning (e.g. literature- or climatology-based). Although a wide literature on model validation has been produced so far, maintaining a validation framework in the biogeochemical operational context that fulfils the GODAE criteria is still a challenge. Recent results of the validation activities and new potential validation frameworks at the Med-MFC will be presented in our contribution.
Testing the Predictive Validity of the Hendrich II Fall Risk Model.
Jung, Hyesil; Park, Hyeoun-Ae
2018-03-01
Cumulative data on patient fall risk have been compiled in electronic medical records systems, and it is possible to test the validity of fall-risk assessment tools using these data between the times of admission and occurrence of a fall. The Hendrich II Fall Risk Model scores assessed at three time points of hospital stays were extracted and used for testing the predictive validity: (a) the score upon admission, (b) the maximum fall-risk score between admission and falling or discharge, and (c) the score immediately before falling or discharge. Predictive validity was examined using seven predictive indicators. In addition, logistic regression analysis was used to identify factors that significantly affect the occurrence of a fall. Among the different time points, the maximum fall-risk score assessed between admission and falling or discharge showed the best predictive performance. Confusion or disorientation and a poor ability to rise from a sitting position were significant risk factors for a fall.
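The abstract does not list its seven indicators; the sketch below computes six standard predictive-validity indicators from dichotomised risk scores, assuming the conventional Hendrich II high-risk cutoff of a score of 5 or more (the data are invented):

```python
def predictive_indicators(scores, fell, cutoff=5):
    """Dichotomise risk scores at `cutoff` (Hendrich II conventionally
    treats a score >= 5 as 'high risk') and compute standard indicators."""
    tp = sum(s >= cutoff and f for s, f in zip(scores, fell))
    fp = sum(s >= cutoff and not f for s, f in zip(scores, fell))
    fn = sum(s < cutoff and f for s, f in zip(scores, fell))
    tn = sum(s < cutoff and not f for s, f in zip(scores, fell))
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "ppv": tp / (tp + fp),            # positive predictive value
        "npv": tn / (tn + fn),            # negative predictive value
        "accuracy": (tp + tn) / len(scores),
        "youden_j": sens + spec - 1,
    }

# Hypothetical maximum-score assessments for ten patients (1 = fell):
scores = [2, 7, 5, 1, 9, 4, 6, 3, 8, 2]
fell   = [0, 1, 0, 0, 1, 0, 1, 0, 0, 0]
print(predictive_indicators(scores, fell))
```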
Eaton, Jennifer L; Mohr, David C; Hodgson, Michael J; McPhaul, Kathleen M
2018-02-01
To describe development and validation of the work-related well-being (WRWB) index. Principal components analysis was performed using Federal Employee Viewpoint Survey (FEVS) data (N = 392,752) to extract variables representing worker well-being constructs. Confirmatory factor analysis was performed to verify factor structure. To validate the WRWB index, we used multiple regression analysis to examine relationships with burnout associated outcomes. Principal Components Analysis identified three positive psychology constructs: "Work Positivity", "Co-worker Relationships", and "Work Mastery". An 11 item index explaining 63.5% of variance was achieved. The structural equation model provided a very good fit to the data. Higher WRWB scores were positively associated with all three employee experience measures examined in regression models. The new WRWB index shows promise as a valid and widely accessible instrument to assess worker well-being.
Developing workshop module of realistic mathematics education: Follow-up workshop
NASA Astrophysics Data System (ADS)
Palupi, E. L. W.; Khabibah, S.
2018-01-01
Realistic Mathematics Education (RME) is a learning approach which fits the aim of the curriculum. The success of RME in teaching mathematics concepts, triggering students’ interest in mathematics and teaching high order thinking skills to the students will make teachers start to learn RME. Hence, RME workshops are often offered and conducted. This study applied the development model proposed by Plomp. Based on the study by the RME team, there are three kinds of RME workshop: start-up workshop, follow-up workshop, and quality boost. However, there is no standardized or validated module used in these workshops. This study aims to develop a module for the RME follow-up workshop which is valid and can be used. Plomp’s development model includes materials analysis, design, realization, implementation, and evaluation. Based on the validation, the developed module is valid, and the field test shows that the module can be used effectively.
Oh, Ein; Yoo, Tae Keun; Park, Eun-Cheol
2013-09-13
Blindness due to diabetic retinopathy (DR) is the major disability in diabetic patients. Although early management has shown to prevent vision loss, diabetic patients have a low rate of routine ophthalmologic examination. Hence, we developed and validated sparse learning models with the aim of identifying the risk of DR in diabetic patients. Health records from the Korea National Health and Nutrition Examination Surveys (KNHANES) V-1 were used. The prediction models for DR were constructed using data from 327 diabetic patients, and were validated internally on 163 patients in the KNHANES V-1. External validation was performed using 562 diabetic patients in the KNHANES V-2. The learning models, including ridge, elastic net, and LASSO, were compared to the traditional indicators of DR. Considering the Bayesian information criterion, LASSO predicted DR most efficiently. In the internal and external validation, LASSO was significantly superior to the traditional indicators by calculating the area under the curve (AUC) of the receiver operating characteristic. LASSO showed an AUC of 0.81 and an accuracy of 73.6% in the internal validation, and an AUC of 0.82 and an accuracy of 75.2% in the external validation. The sparse learning model using LASSO was effective in analyzing the epidemiological underlying patterns of DR. This is the first study to develop a machine learning model to predict DR risk using health records. LASSO can be an excellent choice when both discriminative power and variable selection are important in the analysis of high-dimensional electronic health records.
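A self-contained sketch of the LASSO idea on synthetic data: L1-penalised logistic regression fitted by proximal gradient descent (the soft-thresholding step is what produces sparsity), scored with the rank-based AUC. The data, penalty, and step size are illustrative, not from KNHANES:

```python
import numpy as np

def lasso_logistic(X, y, lam=0.1, lr=0.1, iters=2000):
    """L1-penalised logistic regression via proximal gradient descent."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ w))
        w = w - lr * (X.T @ (p - y) / n)                  # gradient step
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # prox step
    return w

def auc(scores, y):
    # Area under the ROC curve via the rank (Mann-Whitney) formulation.
    ranks = np.empty(len(scores))
    ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)
    pos = y == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

rng = np.random.default_rng(0)
n = 400
X = rng.normal(size=(n, 10))
true_w = np.array([2.0, -1.5, 1.0] + [0.0] * 7)   # only 3 real risk factors
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-X @ true_w))).astype(float)

w = lasso_logistic(X, y, lam=0.05)
print((np.abs(w) > 1e-6).sum(), round(auc(X @ w, y), 2))
```

The penalty drives the coefficients of uninformative predictors toward exactly zero, which is why LASSO serves both discrimination and variable selection, as the abstract notes.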
Limited sampling strategy models for estimating the AUC of gliclazide in Chinese healthy volunteers.
Huang, Ji-Han; Wang, Kun; Huang, Xiao-Hui; He, Ying-Chun; Li, Lu-Jin; Sheng, Yu-Cheng; Yang, Juan; Zheng, Qing-Shan
2013-06-01
The aim of this work is to reduce the cost of required sampling for the estimation of the area under the gliclazide plasma concentration versus time curve within 60 h (AUC0-60t). The limited sampling strategy (LSS) models were established and validated by multiple regression using four or fewer gliclazide concentration values. Absolute prediction error (APE), root mean square error (RMSE) and visual prediction check were used as criteria. The results of Jack-Knife validation showed that 10 (25.0 %) of the 40 LSS based on the regression analysis were not within an APE of 15 % using one concentration-time point. 90.2, 91.5 and 92.4 % of the 40 LSS models were capable of prediction using 2, 3 and 4 points, respectively. Limited sampling strategies were developed and validated for estimating AUC0-60t of gliclazide. This study indicates that the implementation of an 80 mg dosage regimen enabled accurate predictions of AUC0-60t by the LSS model. This study shows that 12, 6, 4 and 2 h after administration are the key sampling times. The combination of (12, 2 h), (12, 8, 2 h) or (12, 8, 4, 2 h) can be chosen as sampling hours for predicting AUC0-60t in practical application according to requirements.
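A toy reconstruction of the limited-sampling idea: simulate one-compartment concentration curves, take the full trapezoidal AUC0-60 as ground truth, and regress it on just the 2 h and 12 h samples (all pharmacokinetic parameters are invented; the paper's actual LSS models are multiple regressions of this general form):

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 60, 121)                  # dense time grid (h)
curves = []
for _ in range(40):                          # 40 simulated subjects
    ka = rng.uniform(0.5, 1.5)               # absorption rate constant
    ke = rng.uniform(0.05, 0.15)             # elimination rate constant
    scale = rng.uniform(5, 10)
    curves.append(scale * (np.exp(-ke * t) - np.exp(-ka * t)))
C = np.array(curves)

# "True" AUC0-60 by the trapezoidal rule over the dense grid.
auc_full = ((C[:, 1:] + C[:, :-1]) * np.diff(t) / 2).sum(axis=1)

# Limited sampling strategy: predict AUC0-60 from the 2 h and 12 h samples.
idx = [np.searchsorted(t, 2.0), np.searchsorted(t, 12.0)]
design = np.column_stack([np.ones(len(C)), C[:, idx]])
coef, *_ = np.linalg.lstsq(design, auc_full, rcond=None)
pred = design @ coef
ape = np.abs(pred - auc_full) / auc_full * 100   # absolute prediction error, %
print(round(float(ape.mean()), 1))
```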
Design and validation of a model to predict early mortality in haemodialysis patients.
Mauri, Joan M; Clèries, Montse; Vela, Emili
2008-05-01
Mortality and morbidity rates are higher in patients receiving haemodialysis therapy than in the general population. Detection of risk factors related to early death in these patients could be of aid for clinical and administrative decision making. The aims of this study were (1) to identify risk factors (comorbidity and variables specific to haemodialysis) associated with death in the first year following the start of haemodialysis and (2) to design and validate a prognostic model to quantify the probability of death for each patient. An analysis was carried out on all patients starting haemodialysis treatment in Catalonia during the period 1997-2003 (n = 5738). The data source was the Renal Registry of Catalonia, a mandatory population registry. Patients were randomly divided into two samples: 60% (n = 3455) of the total were used to develop the prognostic model and the remaining 40% (n = 2283) to validate the model. Logistic regression analysis was used to construct the model. One-year mortality in the total study population was 16.5%. The predictive model included the following variables: age, sex, primary renal disease, grade of functional autonomy, chronic obstructive pulmonary disease, malignant processes, chronic liver disease, cardiovascular disease, initial vascular access and malnutrition. The analyses showed adequate calibration for both the sample to develop the model and the validation sample (Hosmer-Lemeshow statistic 0.97 and P = 0.49, respectively) as well as adequate discrimination (ROC curve 0.78 in both cases). Risk factors implicated in mortality at one year following the start of haemodialysis have been determined and a prognostic model designed. The validated, easy-to-apply model quantifies individual patient risk attributable to various factors, some of them amenable to correction by directed interventions.
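Calibration here was assessed with the Hosmer-Lemeshow statistic; a minimal sketch of how it is computed by risk-decile binning (the simulated risks below are hypothetical and drawn so that the "model" is perfectly calibrated):

```python
import numpy as np

def hosmer_lemeshow(p, y, groups=10):
    """Hosmer-Lemeshow chi-square: bin patients by predicted risk and
    compare observed vs expected events per bin. Small values (relative
    to a chi-square with groups-2 df) indicate good calibration."""
    order = np.argsort(p)
    chi2 = 0.0
    for b in np.array_split(order, groups):
        obs, exp, n = y[b].sum(), p[b].sum(), len(b)
        chi2 += (obs - exp) ** 2 / (exp * (1 - exp / n))
    return chi2

rng = np.random.default_rng(3)
risk = rng.uniform(0.02, 0.6, size=1000)              # hypothetical 1-year risks
died = (rng.uniform(size=1000) < risk).astype(float)  # outcomes drawn at those risks
stat = hosmer_lemeshow(risk, died)
print(round(float(stat), 1))
```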
Systematic review of prediction models for delirium in the older adult inpatient.
Lindroth, Heidi; Bratzke, Lisa; Purvis, Suzanne; Brown, Roger; Coburn, Mark; Mrkobrada, Marko; Chan, Matthew T V; Davis, Daniel H J; Pandharipande, Pratik; Carlsson, Cynthia M; Sanders, Robert D
2018-04-28
To identify existing prognostic delirium prediction models and evaluate their validity and statistical methodology in the older adult (≥60 years) acute hospital population. Systematic review. PubMed, CINAHL, PsychINFO, SocINFO, Cochrane, Web of Science and Embase were searched from 1 January 1990 to 31 December 2016. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses and CHARMS Statement guided protocol development. Inclusion criteria: age >60 years, inpatient, developed/validated a prognostic delirium prediction model. Exclusion criteria: alcohol-related delirium, sample size ≤50. The primary performance measures were calibration and discrimination statistics. Two authors independently conducted the search and extracted data. The synthesis of data was done by the first author. Disagreement was resolved by the mentoring author. The initial search resulted in 7,502 studies. Following full-text review of 192 studies, 33 were excluded based on age criteria (<60 years) and 27 met the defined criteria. Twenty-three delirium prediction models were identified, 14 were externally validated and 3 were internally validated. The following populations were represented: 11 medical, 3 medical/surgical and 13 surgical. The assessment of delirium was often non-systematic, resulting in varied incidence. Fourteen models were externally validated with an area under the receiver operating curve range from 0.52 to 0.94. Limitations in design, data collection methods and model metric reporting statistics were identified. Delirium prediction models for older adults show variable and typically inadequate predictive capabilities. Our review highlights the need for development of robust models to predict delirium in older inpatients. We provide recommendations for the development of such models. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Stochastic Petri Net extension of a yeast cell cycle model.
Mura, Ivan; Csikász-Nagy, Attila
2008-10-21
This paper presents the definition, solution and validation of a stochastic model of the budding yeast cell cycle, based on Stochastic Petri Nets (SPN). A specific family of SPNs is selected for building a stochastic version of a well-established deterministic model. We describe the procedure followed in defining the SPN model from the deterministic ODE model, a procedure that can be largely automated. The validation of the SPN model is conducted with respect to both the results provided by the deterministic one and the experimental results available from literature. The SPN model catches the behavior of the wild type budding yeast cells and a variety of mutants. We show that the stochastic model matches some characteristics of budding yeast cells that cannot be found with the deterministic model. The SPN model fine-tunes the simulation results, enriching the breadth and the quality of its outcome.
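SPNs with exponentially distributed firing times are continuous-time Markov chains, so they can be simulated exactly with Gillespie's direct method; a minimal two-transition example (a generic synthesis/degradation net, not the cell-cycle model itself):

```python
import numpy as np

rng = np.random.default_rng(4)

# Minimal stochastic Petri net: one place (molecule count x) and two
# timed transitions, synthesis (rate k1) and degradation (rate k2*x),
# fired with Gillespie's direct method.
k1, k2 = 10.0, 0.1
x, t, t_end = 0, 0.0, 200.0
samples = []
while t < t_end:
    rates = np.array([k1, k2 * x])
    total = rates.sum()
    t += rng.exponential(1.0 / total)       # waiting time to the next firing
    if rng.uniform() < rates[0] / total:
        x += 1                              # synthesis transition fires
    else:
        x -= 1                              # degradation transition fires
    samples.append(x)

# Discard the transient; the stationary mean is k1/k2 = 100.
mean_level = float(np.mean(samples[len(samples) // 2:]))
print(round(mean_level, 1))
```

Unlike an ODE solution, repeated runs fluctuate around the deterministic level, which is the kind of extra behaviour the stochastic model can capture.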
Development and validation of instrument for ergonomic evaluation of tablet arm chairs
Tirloni, Adriana Seára; dos Reis, Diogo Cunha; Bornia, Antonio Cezar; de Andrade, Dalton Francisco; Borgatto, Adriano Ferreti; Moro, Antônio Renato Pereira
2016-01-01
The purpose of this study was to develop and validate an evaluation instrument for tablet arm chairs based on ergonomic requirements, focused on user perceptions and using Item Response Theory (IRT). This exploratory study involved 1,633 participants (university students and professors) in four steps: a pilot study (n=26), semantic validation (n=430), content validation (n=11) and construct validation (n=1,166). Samejima's graded response model was applied to validate the instrument. The results showed that all the steps (theoretical and practical) of the instrument's development and validation processes were successful and that the group of remaining items (n=45) had a high consistency (0.95). This instrument can be used in the furniture industry by engineers and product designers and in the purchasing process of tablet arm chairs for schools, universities and auditoriums. PMID:28337099
Walz, Yvonne; Wegmann, Martin; Dech, Stefan; Vounatsou, Penelope; Poda, Jean-Noël; N'Goran, Eliézer K.; Utzinger, Jürg; Raso, Giovanna
2015-01-01
Background Schistosomiasis is the most widespread water-based disease in sub-Saharan Africa. Transmission is governed by the spatial distribution of specific freshwater snails that act as intermediate hosts and human water contact patterns. Remote sensing data have been utilized for spatially explicit risk profiling of schistosomiasis. We investigated the potential of remote sensing to characterize habitat conditions of parasite and intermediate host snails and discuss the relevance for public health. Methodology We employed high-resolution remote sensing data, environmental field measurements, and ecological data to model environmental suitability for schistosomiasis-related parasite and snail species. The model was developed for Burkina Faso using a habitat suitability index (HSI). The plausibility of remote sensing habitat variables was validated using field measurements. The established model was transferred to different ecological settings in Côte d’Ivoire and validated against readily available survey data from school-aged children. Principal Findings Environmental suitability for schistosomiasis transmission was spatially delineated and quantified by seven habitat variables derived from remote sensing data. The strengths and weaknesses highlighted by the plausibility analysis showed that temporal dynamic water and vegetation measures were particularly useful to model parasite and snail habitat suitability, whereas the measurement of water surface temperature and topographic variables did not perform appropriately. The transferability of the model showed significant relations between the HSI and infection prevalence in study sites of Côte d’Ivoire. Conclusions/Significance A predictive map of environmental suitability for schistosomiasis transmission can support measures to gain and sustain control. This is particularly relevant as emphasis is shifting from morbidity control to interrupting transmission. 
Further validation of our mechanistic model needs to be complemented by field data of parasite- and snail-related fitness. Our model provides a useful tool to monitor the development of new hotspots of potential schistosomiasis transmission based on regularly updated remote sensing data. PMID:26587839
Testing the Validity of a Cognitive Behavioral Model for Gambling Behavior.
Raylu, Namrata; Oei, Tian Po S; Loo, Jasmine M Y; Tsai, Jung-Shun
2016-06-01
Currently, cognitive behavioral therapies appear to be one of the most studied treatments for gambling problems and studies show it is effective in treating gambling problems. However, cognitive behavior models have not been widely tested using statistical means. Thus, the aim of this study was to test the validity of the pathways postulated in the cognitive behavioral theory of gambling behavior using structural equation modeling (AMOS 20). Several questionnaires assessing a range of gambling specific variables (e.g., gambling urges, cognitions and behaviors) and gambling correlates (e.g., psychological states, and coping styles) were distributed to 969 participants from the community. Results showed that negative psychological states (i.e., depression, anxiety and stress) only directly predicted gambling behavior, whereas gambling urges predicted gambling behavior directly as well as indirectly via gambling cognitions. Avoidance coping predicted gambling behavior only indirectly via gambling cognitions. Negative psychological states were significantly related to gambling cognitions as well as avoidance coping. In addition, significant gender differences were also found. The results provided confirmation for the validity of the pathways postulated in the cognitive behavioral theory of gambling behavior. It also highlighted the importance of gender differences in conceptualizing gambling behavior.
Validation of the Brazilian version of the 'Spanish Burnout Inventory' in teachers.
Gil-Monte, Pedro R; Carlotto, Mary Sandra; Câmara, Sheila Gonçalves
2010-02-01
To assess factorial validity and internal consistency of the Brazilian version of the 'Spanish Burnout Inventory' (SBI). The translation process of the SBI into Brazilian Portuguese included translation, back translation, and semantic equivalence. A confirmatory factor analysis was carried out using a four-factor model, which was similar to the original SBI. The sample consisted of 714 teachers working in schools in the metropolitan area of the city of Porto Alegre, Southern Brazil, in 2008. The instrument comprises 20 items and four subscales: Enthusiasm towards job (5 items), Psychological exhaustion (4 items), Indolence (6 items), and Guilt (5 items). The model was analyzed using LISREL 8. Goodness-of-Fit statistics showed that the hypothesized model had adequate fit: chi2(164) = 605.86 (p<0.000); Goodness-of-Fit Index = 0.92; Adjusted Goodness-of-Fit Index = 0.90; Root Mean Square Error of Approximation = 0.062; Nonnormed Fit Index = 0.91; Comparative Fit Index = 0.92; and Parsimony Normed Fit Index = 0.77. Cronbach's alpha measures for all subscales were higher than 0.70. The study showed that the SBI has adequate factorial validity and internal consistency to assess burnout in Brazilian teachers.
Confirmatory factorial analysis of the children's attraction to physical activity scale (CAPA).
Seabra, A C; Maia, J A; Parker, M; Seabra, A; Brustad, R; Fonseca, A M
2015-03-27
Attraction to physical activity (PA) is an important contributor to children's intrinsic motivation to engage in games and sports. Previous studies have supported the utility of the children's attraction to PA scale (CAPA) (Brustad, 1996) but the validity of this measure for use in Portugal has not been established. The purpose of this study was to cross-validate the shorter version of the CAPA scale in the Portuguese cultural context. A sample of 342 children (8-10 years of age) was used. Confirmatory factor analyses using EQS software (version 6.1) tested three competing measurement models: a single-factor model, a five-factor model, and a second-order factor model. The single-factor model and the second-order model showed a poor fit to the data. It was found that a five-factor model similar to the original one revealed good fit to the data (S-B χ²(67) = 94.27, p = 0.02; NNFI = 0.93; CFI = 0.95; RMSEA = 0.04; 90% CI = 0.02-0.05). The results indicated that the CAPA scale is valid and appropriate for use in the Portuguese cultural context. The availability of a valid scale to evaluate attraction to PA at schools should provide improved opportunities for better assessment and understanding of children's involvement in PA.
Simulated training in colonoscopic stenting of colonic strictures: validation of a cadaver model.
Iordache, F; Bucobo, J C; Devlin, D; You, K; Bergamaschi, R
2015-07-01
There are currently no available simulation models for training in colonoscopic stent deployment. The aim of this study was to validate a cadaver model for simulation training in colonoscopy with stent deployment for colonic strictures. This was a prospective study enrolling surgeons at a single institution. Participants performed colonoscopic stenting on a cadaver model. Their performance was assessed by two independent observers. Measurements were performed for quantitative analysis (time to identify stenosis, time for deployment, accuracy) and a weighted score was devised for assessment. The Mann-Whitney U-test and Student's t-test were used for nonparametric and parametric data, respectively. Cohen's kappa coefficient was used for reliability. Twenty participants performed a colonoscopy with deployment of a self-expandable metallic stent in two cadavers (groups A and B) with 20 strictures overall. The median time was 206 s. The model was able to differentiate between experts and novices (P = 0.013). The results showed a good consensus estimate of reliability, with kappa = 0.571 (P < 0.0001). The cadaver model described in this study has content, construct and concurrent validity for simulation training in colonoscopic deployment of self-expandable stents for colonic strictures. Further studies are needed to evaluate the predictive validity of this model in terms of skill transfer to clinical practice. Colorectal Disease © 2014 The Association of Coloproctology of Great Britain and Ireland.
Overgaauw, Sandy; Rieffe, Carolien; Broekhof, Evelien; Crone, Eveline A.; Güroğlu, Berna
2017-01-01
Empathy plays a crucial role in healthy social functioning and in maintaining positive social relationships. In this study, 1250 children and adolescents (10–15 year olds) completed the newly developed Empathy Questionnaire for Children and Adolescents (EmQue-CA) that was tested on reliability, construct validity, convergent validity, and concurrent validity. The EmQue-CA aims to assess empathy using the following scales: affective empathy, cognitive empathy, and intention to comfort. A Principal Components Analysis, which was directly tested with a Confirmatory Factor Analysis, confirmed the proposed three-factor model resulting in 14 final items. Reliability analyses demonstrated high internal consistency of the scales. Furthermore, the scales showed high convergent validity, as they were positively correlated with related scales of the Interpersonal Reactivity Index (Davis, 1983). With regard to concurrent validity, higher empathy was related to more attention to others’ emotions, higher friendship quality, less focus on own affective state, and lower levels of bullying behavior. Taken together, we show that the EmQue-CA is a reliable and valid instrument to measure empathy in typically developing children and adolescents aged 10 and older. PMID:28611713
Testability of evolutionary game dynamics based on experimental economics data
NASA Astrophysics Data System (ADS)
Wang, Yijia; Chen, Xiaojie; Wang, Zhijian
2017-11-01
Understanding the dynamic processes of a real game system requires an appropriate dynamics model, and rigorously testing a dynamics model is nontrivial. In our methodological research, we develop an approach to testing the validity of game dynamics models that considers the dynamic patterns of angular momentum and speed as measurement variables. Using Rock-Paper-Scissors (RPS) games as an example, we illustrate the geometric patterns in the experiment data. We then derive the related theoretical patterns from a series of typical dynamics models. By testing the goodness-of-fit between the experimental and theoretical patterns, we show that the validity of these models can be evaluated quantitatively. Our approach establishes a link between dynamics models and experimental systems, which is, to the best of our knowledge, the most effective and rigorous strategy for ascertaining the testability of evolutionary game dynamics models.
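As a sketch of the measurement variables the authors describe, the following simulates standard replicator dynamics (one plausible candidate among the "typical dynamics models"; the paper tests several) on the zero-sum RPS payoff matrix and computes the angular momentum of the trajectory about the simplex centroid:

```python
import numpy as np

# Standard zero-sum Rock-Paper-Scissors payoff matrix (win +1, loss -1, tie 0).
A = np.array([[0., -1., 1.],
              [1., 0., -1.],
              [-1., 1., 0.]])

def replicator_step(x, dt=0.01):
    """One Euler step of the replicator dynamics dx_i = x_i((Ax)_i - x.Ax)."""
    f = A @ x
    return x + dt * x * (f - x @ f)

x = np.array([0.5, 0.3, 0.2])   # interior starting point off the centroid
traj = [x]
for _ in range(3000):
    x = replicator_step(x)
    traj.append(x)
traj = np.array(traj)

# Project deviations from the centroid onto 2-D coordinates in the simplex
# plane, then measure angular momentum L = u*dw - w*du along the trajectory.
p = traj - 1.0 / 3.0
e1 = np.array([1., -1., 0.]) / np.sqrt(2)
e2 = np.array([1., 1., -2.]) / np.sqrt(6)
u, w = p @ e1, p @ e2
L = u[:-1] * np.diff(w) - w[:-1] * np.diff(u)
print("mean angular momentum per step:", L.mean())
```

For the zero-sum RPS replicator dynamics the interior orbits cycle around the centroid, so the per-step angular momentum keeps a constant sign, which is the kind of geometric pattern the paper compares against experimental data.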
Yilmaz, Meryem; Sayin, Yazile Yazici
2014-07-01
To examine the translation and adaptation process from English to Turkish and the validity and reliability of the Champion's Health Belief Model Scales for Mammography Screening. Its aims are (1) to provide data about and (2) to assess Turkish women's attitudes and behaviours towards mammography. The proportion of women who undergo mammography is low in Turkey. The Champion's Health Belief Model Scales for Mammography Screening-Turkish version can be helpful to determine Turkish women's health beliefs, particularly about mammography. A cross-sectional design with a classical measurement method was used to collect survey data from Turkish women. The Champion's Health Belief Model Scales for Mammography Screening was translated from English to Turkish and then back-translated into English. Later, the meaning and clarity of the scale items were evaluated by a bilingual group representing the culture of the target population. Finally, the tool was evaluated by two bilingual professional researchers in terms of content validity, translation validity, and psychometric estimates of validity and reliability. The analysis included a total of 209 Turkish women. The validity of the scale was confirmed by confirmatory factor analysis and criterion-related validity testing. The Champion's Health Belief Model Scales for Mammography Screening aligned to four factors that were coherent and relatively independent of each other. There was a statistically significant relationship among all of the subscale items: a positive and high correlation of the total item test score and a high Cronbach's α. The scale has strong stability over time: the Champion's Health Belief Model Scales for Mammography Screening demonstrated acceptable preliminary values of reliability and validity. The Champion's Health Belief Model Scales for Mammography Screening is both a reliable and valid instrument that can be useful in measuring the health beliefs of Turkish women.
It can be used to provide data about healthcare practices required for mammography screening and breast cancer prevention. This scale will show nurses that nursing intervention planning is essential for increasing Turkish women's participation in mammography screening. © 2013 John Wiley & Sons Ltd.
Estimation and Validation of Oceanic Mass Circulation from the GRACE Mission
NASA Technical Reports Server (NTRS)
Boy, J.-P.; Rowlands, D. D.; Sabaka, T. J.; Luthcke, S. B.; Lemoine, F. G.
2011-01-01
Since the launch of the Gravity Recovery And Climate Experiment (GRACE) in March 2002, the Earth's surface mass variations have been monitored with unprecedented accuracy and resolution. Compared to the classical spherical harmonic solutions, global high-resolution mascon solutions allow the retrieval of mass variations with higher spatial and temporal sampling (2 degrees and 10 days). We present here the validation of the GRACE global mascon solutions by comparing mass estimates to a set of about 100 ocean bottom pressure (OBP) records, and show that the forward modelling of continental hydrology prior to the inversion of the K-band range rate data allows better estimates of ocean mass variations. We also validate our GRACE results against OBP variations modelled by different state-of-the-art ocean general circulation models, including ECCO (Estimating the Circulation and Climate of the Ocean) and operational and reanalysis products from the MERCATOR project.
Up-scaling of multi-variable flood loss models from objects to land use units at the meso-scale
NASA Astrophysics Data System (ADS)
Kreibich, Heidi; Schröter, Kai; Merz, Bruno
2016-05-01
Flood risk management increasingly relies on risk analyses, including loss modelling. Most of the flood loss models usually applied in standard practice have in common that complex damaging processes are described by simple approaches like stage-damage functions. Novel multi-variable models significantly improve loss estimation on the micro-scale and may also be advantageous for large-scale applications. However, more input parameters also introduce additional uncertainty, even more so in up-scaling procedures for meso-scale applications, where the parameters need to be estimated on a regional area-wide basis. To gain more knowledge about the challenges associated with the up-scaling of multi-variable flood loss models, the following approach is applied: single- and multi-variable micro-scale flood loss models are up-scaled and applied on the meso-scale, namely on the basis of ATKIS land-use units. Application and validation are undertaken in 19 municipalities, which were affected by the 2002 flood of the River Mulde in Saxony, Germany, by comparison with official loss data provided by the Saxon Relief Bank (SAB). In the meso-scale, case-study-based model validation, most multi-variable models show smaller errors than the uni-variable stage-damage functions. The results show the suitability of the up-scaling approach and, in accordance with micro-scale validation studies, that multi-variable models are an improvement in flood loss modelling also on the meso-scale. However, uncertainties remain high, stressing the importance of uncertainty quantification. Thus, the development of probabilistic loss models, like BT-FLEMO used in this study, which inherently provide uncertainty information, is the way forward.
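The advantage of multi-variable models over stage-damage functions can be illustrated with a toy least-squares example on synthetic data (invented variables, not the FLEMO/BT-FLEMO models themselves): a nested model using water depth plus one extra predictor can only reduce the in-sample error relative to a depth-only function.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Synthetic building-level data: water depth (m) and building value (kEUR).
depth = rng.uniform(0.1, 3.0, n)
value = rng.uniform(50, 500, n)
# True loss depends on both variables plus noise (purely illustrative).
loss = 20 * depth + 0.1 * value + rng.normal(0, 5, n)

def rmse(y, yhat):
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

# Uni-variable "stage-damage function": loss ~ depth.
X1 = np.column_stack([np.ones(n), depth])
b1, *_ = np.linalg.lstsq(X1, loss, rcond=None)
# Multi-variable model: loss ~ depth + building value.
X2 = np.column_stack([np.ones(n), depth, value])
b2, *_ = np.linalg.lstsq(X2, loss, rcond=None)

print("uni-variable RMSE:", rmse(loss, X1 @ b1))
print("multi-variable RMSE:", rmse(loss, X2 @ b2))
```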
Innovative use of self-organising maps (SOMs) in model validation.
NASA Astrophysics Data System (ADS)
Jolly, Ben; McDonald, Adrian; Coggins, Jack
2016-04-01
We present an innovative combination of techniques for validation of numerical weather prediction (NWP) output against both observations and reanalyses using two classification schemes, demonstrated by a validation of the operational NWP 'AMPS' (the Antarctic Mesoscale Prediction System). Historically, model validation techniques have centred on case studies or statistics at various time scales (yearly/seasonal/monthly). Within the past decade the latter technique has been expanded by the addition of classification schemes in place of time scales, allowing more precise analysis. Classifications are typically generated for either the model or the observations, then used to create composites for both which are compared. Our method creates and trains a single self-organising map (SOM) on both the model output and observations, which is then used to classify both datasets using the same class definitions. In addition to the standard statistics on class composites, we compare the classifications themselves between the model and the observations. To add further context to the area studied, we use the same techniques to compare the SOM classifications with regimes developed for another study to great effect. The AMPS validation study compares model output against surface observations from SNOWWEB and existing University of Wisconsin-Madison Antarctic Automatic Weather Stations (AWS) during two months over the austral summer of 2014-15. Twelve SOM classes were defined in a '4 x 3' pattern, trained on both model output and observations of 2 m wind components, then used to classify both training datasets. Simple statistics (correlation, bias and normalised root-mean-square-difference) computed for SOM class composites showed that AMPS performed well during extreme weather events, but less well during lighter winds and poorly during the more changeable conditions between either extreme. 
Comparison of the classification time-series showed that, while correlations were lower during lighter wind periods, AMPS actually forecast the existence of those periods well, suggesting that the correlations may be unfairly low. Further investigation showed poor temporal alignment during more changeable conditions, highlighting problems AMPS has around the exact timing of events. There was also a tendency for AMPS to over-predict certain wind flow patterns at the expense of others. In order to gain a larger scale perspective, we compared our mesoscale SOM classification time-series with synoptic scale regimes developed by another study using ERA-Interim reanalysis output and k-means clustering. There was good alignment between the regimes and the observations classifications (observations/regimes), highlighting the effect of synoptic scale forcing on the area. However, comparing the alignment between observations/regimes and AMPS/regimes showed that AMPS may have problems accurately resolving the strength and location of cyclones in the Ross Sea to the north of the target area.
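A hand-rolled sketch of the core technique, a '4 x 3' SOM trained on 2 m wind components, is shown below. The data, regime means, and decay schedules are invented for illustration and are not the AMPS/SNOWWEB data or the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 2 m wind components (u, v) with three regimes standing in
# for observed wind patterns (values invented for illustration only).
data = np.vstack([
    rng.normal([8.0, 0.0], 1.0, (200, 2)),    # strong-wind events
    rng.normal([2.0, 1.0], 0.8, (200, 2)),    # light winds
    rng.normal([4.0, -4.0], 1.0, (200, 2)),   # intermediate conditions
])

rows, cols = 4, 3   # the '4 x 3' class layout used in the study
grid = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)
codebook = rng.normal(0.0, 1.0, (rows * cols, 2))

def quantization_error(codebook, data):
    d2 = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return float(np.sqrt(d2.min(axis=1)).mean())

qe_before = quantization_error(codebook, data)

# Online SOM training with linearly decaying learning rate and neighbourhood.
epochs = 20
for e in range(epochs):
    lr = 0.5 * (1 - e / epochs)
    sigma = 2.0 * (1 - e / epochs) + 0.5
    for x in rng.permutation(data):
        bmu = int(np.argmin(((codebook - x) ** 2).sum(axis=1)))
        h = np.exp(-((grid - grid[bmu]) ** 2).sum(axis=1) / (2 * sigma ** 2))
        codebook += lr * h[:, None] * (x - codebook)

qe_after = quantization_error(codebook, data)
# Classify every sample by its best-matching unit, as done for both
# model output and observations in the study.
classes = np.argmin(((data[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2), axis=1)
print(qe_before, "->", qe_after)
```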
Infant Exposure to Dioxin-Like Compounds in Breast Milk
A simple, one-compartment, first-order pharmacokinetic model is used to predict the infant body burden of dioxin-like compounds that results from breast-feeding. Validation testing of the model showed a good match between predictions and measurements of dioxin toxic equivalents ...
Optimal test selection for prediction uncertainty reduction
Mullins, Joshua; Mahadevan, Sankaran; Urbina, Angel
2016-12-02
Economic factors and experimental limitations often lead to sparse and/or imprecise data used for the calibration and validation of computational models. This paper addresses resource allocation for calibration and validation experiments, in order to maximize their effectiveness within given resource constraints. When observation data are used for model calibration, the quality of the inferred parameter descriptions is directly affected by the quality and quantity of the data. This paper characterizes parameter uncertainty within a probabilistic framework, which enables the uncertainty to be systematically reduced with additional data. The validation assessment is also uncertain in the presence of sparse and imprecise data; therefore, this paper proposes an approach for quantifying the resulting validation uncertainty. Since calibration and validation uncertainty affect the prediction of interest, the proposed framework explores the decision of cost versus importance of data in terms of the impact on the prediction uncertainty. Often, calibration and validation tests may be performed for different input scenarios, and this paper shows how the calibration and validation results from different conditions may be integrated into the prediction. Then, a constrained discrete optimization formulation that selects the number of tests of each type (calibration or validation at given input conditions) is proposed. Furthermore, the proposed test selection methodology is demonstrated on a microelectromechanical system (MEMS) example.
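A toy version of the constrained discrete optimization step might look as follows; the per-test costs and the surrogate uncertainty function are invented for illustration, standing in for the paper's probabilistic calibration/validation machinery:

```python
import itertools

BUDGET = 10.0
COST = {"cal": 1.0, "val": 2.5}   # hypothetical per-test costs

def prediction_uncertainty(n_cal, n_val):
    """Surrogate objective: parameter uncertainty shrinks with calibration
    tests, validation uncertainty shrinks with validation tests."""
    return 1.0 / (1 + n_cal) + 0.5 / (1 + n_val)

# Exhaustively enumerate feasible test counts and keep the best mix.
best = None
for n_cal, n_val in itertools.product(range(11), range(5)):
    cost = n_cal * COST["cal"] + n_val * COST["val"]
    if cost > BUDGET:
        continue
    u = prediction_uncertainty(n_cal, n_val)
    if best is None or u < best[0]:
        best = (u, n_cal, n_val, cost)

print(best)  # (uncertainty, n_cal, n_val, cost) -> 5 calibration, 2 validation
```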
Capelli, Claudio; Biglino, Giovanni; Petrini, Lorenza; Migliavacca, Francesco; Cosentino, Daria; Bonhoeffer, Philipp; Taylor, Andrew M; Schievano, Silvia
2012-12-01
Finite element (FE) modelling can be a very resourceful tool in the field of cardiovascular devices. To ensure result reliability, FE models must be validated experimentally against physical data. Their clinical application (e.g., patients' suitability, morphological evaluation) also requires fast simulation process and access to results, while engineering applications need highly accurate results. This study shows how FE models with different mesh discretisations can suit clinical and engineering requirements for studying a novel device designed for percutaneous valve implantation. Following sensitivity analysis and experimental characterisation of the materials, the stent-graft was first studied in a simplified geometry (i.e., compliant cylinder) and validated against in vitro data, and then in a patient-specific implantation site (i.e., distensible right ventricular outflow tract). Different meshing strategies using solid, beam and shell elements were tested. Results showed excellent agreement between computational and experimental data in the simplified implantation site. Beam elements were found to be convenient for clinical applications, providing reliable results in less than one hour in a patient-specific anatomical model. Solid elements remain the FE choice for engineering applications, albeit more computationally expensive (>100 times). This work also showed how information on device mechanical behaviour differs when acquired in a simplified model as opposed to a patient-specific model.
AlMenhali, Entesar Ali; Khalid, Khalizani; Iyanna, Shilpa
2018-01-01
The Environmental Attitudes Inventory (EAI) was developed to evaluate the multidimensional nature of environmental attitudes; however, it is based on a dataset from outside the Arab context. This study reinvestigated the construct validity of the EAI with a new dataset and confirmed the feasibility of applying it in the Arab context. One hundred and forty-eight subjects in Study 1 and 130 in Study 2 provided valid responses. An exploratory factor analysis (EFA) was used to extract a new factor structure in Study 1, and confirmatory factor analysis (CFA) was performed in Study 2. Both studies generated a seven-factor model, and the model fit was discussed for both the studies. Study 2 exhibited satisfactory model fit indices compared to Study 1. Factor loading values of a few items in Study 1 affected the reliability values and average variance extracted values, which demonstrated low discriminant validity. Based on the results of the EFA and CFA, this study showed sufficient model fit and suggested the feasibility of applying the EAI in the Arab context with a good construct validity and internal consistency.
NASA Astrophysics Data System (ADS)
Singh, Sarabjeet; Howard, Carl Q.; Hansen, Colin H.; Köpke, Uwe G.
2018-03-01
In this paper, numerically modelled vibration response of a rolling element bearing with a localised outer raceway line spall is presented. The results were obtained from a finite element (FE) model of the defective bearing solved using an explicit dynamics FE software package, LS-DYNA. Time domain vibration signals of the bearing obtained directly from the FE modelling were processed further to estimate time-frequency and frequency domain results, such as spectrogram and power spectrum, using standard signal processing techniques pertinent to the vibration-based monitoring of rolling element bearings. A logical approach to analyses of the numerically modelled results was developed with an aim to presenting the analytical validation of the modelled results. While the time and frequency domain analyses of the results show that the FE model generates accurate bearing kinematics and defect frequencies, the time-frequency analysis highlights the simulation of distinct low- and high-frequency characteristic vibration signals associated with the unloading and reloading of the rolling elements as they move in and out of the defect, respectively. Favourable agreement of the numerical and analytical results demonstrates the validation of the results from the explicit FE modelling of the bearing.
Psychometric Properties of the “Sport Motivation Scale (SMS)” Adapted to Physical Education
Granero-Gallegos, Antonio; Baena-Extremera, Antonio; Gómez-López, Manuel; Sánchez-Fuentes, José Antonio; Abraldes, J. Arturo
2014-01-01
The aim of this study was to investigate the factor structure of a Spanish version of the Sport Motivation Scale adapted to physical education. A second aim was to test which one of three hypothesized models (three-, five- and seven-factor) provided the best model fit. 758 Spanish high school students completed the Sport Motivation Scale adapted for Physical Education and also completed the Learning and Performance Orientation in Physical Education Classes Questionnaire. We examined the factor structure of each model using confirmatory factor analysis and also assessed internal consistency and convergent validity. The results showed that all three models in Spanish produced good fit indices, but we suggest using the seven-factor model (χ2/gl = 2.73; ECVI = 1.38), as it produces better values when adapted to physical education than the five-factor model (χ2/gl = 2.82; ECVI = 1.44) and the three-factor model (χ2/gl = 3.02; ECVI = 1.53). Key Points Physical education research conducted in Spain has used the version of the SMS designed to assess motivation in sport, but reliability and validity results in physical education have not been reported. Results of the present study lend support to the factorial validity and internal reliability of three alternative factor structures (3, 5, and 7 factors) of the SMS adapted to Physical Education in Spanish. Although all three models in Spanish produce good fit indices, we suggest using the seven-factor model. PMID:25435772
A Parametric Computational Model of the Action Potential of Pacemaker Cells.
Ai, Weiwei; Patel, Nitish D; Roop, Partha S; Malik, Avinash; Andalam, Sidharta; Yip, Eugene; Allen, Nathan; Trew, Mark L
2018-01-01
A flexible, efficient, and verifiable pacemaker cell model is essential to the design of real-time virtual hearts that can be used for closed-loop validation of cardiac devices. A new parametric model of pacemaker action potential is developed to address this need. The action potential phases are modeled using hybrid automaton with one piecewise-linear continuous variable. The model can capture rate-dependent dynamics, such as action potential duration restitution, conduction velocity restitution, and overdrive suppression by incorporating nonlinear update functions. Simulated dynamics of the model compared well with previous models and clinical data. The results show that the parametric model can reproduce the electrophysiological dynamics of a variety of pacemaker cells, such as sinoatrial node, atrioventricular node, and the His-Purkinje system, under varying cardiac conditions. This is an important contribution toward closed-loop validation of cardiac devices using real-time heart models.
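A minimal sketch of the modelling idea, a hybrid automaton whose single piecewise-linear continuous variable cycles through rest, upstroke, and repolarization phases, is given below. All rates and thresholds are invented; the published model additionally captures rate-dependent dynamics and overdrive suppression through nonlinear update functions.

```python
# Minimal pacemaker hybrid automaton: one piecewise-linear continuous
# variable v cycles through three phases. Rates and thresholds are
# invented for illustration, not taken from the paper.
DT = 0.1  # ms

# Each phase: (dv/dt in units/ms, threshold that triggers the transition).
PHASES = {
    "rest":       (+0.01, 1.0),   # slow diastolic depolarization
    "upstroke":   (+1.00, 3.0),   # fast depolarization
    "repolarize": (-0.03, 0.0),   # return to the resting level
}
NEXT = {"rest": "upstroke", "upstroke": "repolarize", "repolarize": "rest"}

def simulate(t_end_ms):
    """Integrate the automaton and count spontaneous action potentials."""
    v, phase, beats = 0.0, "rest", 0
    for _ in range(int(t_end_ms / DT)):
        rate, threshold = PHASES[phase]
        v += rate * DT
        crossed = v >= threshold if rate > 0 else v <= threshold
        if crossed:
            if phase == "rest":
                beats += 1          # threshold reached: the cell fires
            phase = NEXT[phase]
    return beats

print(simulate(1000))  # cycle length ~202 ms -> 5 beats in 1 s
```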
Predicting the ungauged basin: model validation and realism assessment
NASA Astrophysics Data System (ADS)
van Emmerik, Tim; Mulder, Gert; Eilander, Dirk; Piet, Marijn; Savenije, Hubert
2016-04-01
The hydrological decade on Predictions in Ungauged Basins (PUB) [1] led to many new insights in model development, calibration strategies, data acquisition and uncertainty analysis. Due to a limited amount of published studies on genuinely ungauged basins, model validation and realism assessment of model outcome has not been discussed to a great extent. With this study [2] we aim to contribute to the discussion on how one can determine the value and validity of a hydrological model developed for an ungauged basin. As in many cases no local, or even regional, data are available, alternative methods should be applied. Using a PUB case study in a genuinely ungauged basin in southern Cambodia, we give several examples of how one can use different types of soft data to improve model design, calibrate and validate the model, and assess the realism of the model output. A rainfall-runoff model was coupled to an irrigation reservoir, allowing the use of additional and unconventional data. The model was mainly forced with remote sensing data, and local knowledge was used to constrain the parameters. Model realism assessment was done using data from surveys. This resulted in a successful reconstruction of the reservoir dynamics, and revealed the different hydrological characteristics of the two topographical classes. We do not present a generic approach that can be transferred to other ungauged catchments, but we aim to show how clever model design and alternative data acquisition can result in a valuable hydrological model for ungauged catchments. [1] Sivapalan, M., Takeuchi, K., Franks, S., Gupta, V., Karambiri, H., Lakshmi, V., et al. (2003). IAHS decade on predictions in ungauged basins (PUB), 2003-2012: shaping an exciting future for the hydrological sciences. Hydrol. Sci. J. 48, 857-880. doi: 10.1623/hysj.48.6.857.51421 [2] van Emmerik, T., Mulder, G., Eilander, D., Piet, M. and Savenije, H. (2015). Predicting the ungauged basin: model validation and realism assessment. 
Front. Earth Sci. 3:62. doi: 10.3389/feart.2015.00062
Leach, Colin Wayne; van Zomeren, Martijn; Zebel, Sven; Vliek, Michael L W; Pennekamp, Sjoerd F; Doosje, Bertjan; Ouwerkerk, Jaap W; Spears, Russell
2008-07-01
Recent research shows individuals' identification with in-groups to be psychologically important and socially consequential. However, there is little agreement about how identification should be conceptualized or measured. On the basis of previous work, the authors identified 5 specific components of in-group identification and offered a hierarchical 2-dimensional model within which these components are organized. Studies 1 and 2 used confirmatory factor analysis to validate the proposed model of self-definition (individual self-stereotyping, in-group homogeneity) and self-investment (solidarity, satisfaction, and centrality) dimensions, across 3 different group identities. Studies 3 and 4 demonstrated the construct validity of the 5 components by examining their (concurrent) correlations with established measures of in-group identification. Studies 5-7 demonstrated the predictive and discriminant validity of the 5 components by examining their (prospective) prediction of individuals' orientation to, and emotions about, real intergroup relations. Together, these studies illustrate the conceptual and empirical value of a hierarchical multicomponent model of in-group identification.
Ares I Scale Model Acoustic Test Liftoff Acoustic Results and Comparisons
NASA Technical Reports Server (NTRS)
Counter, Doug; Houston, Janice
2011-01-01
Conclusions: Ares I-X flight data validated the ASMAT LOA results. Ares I Liftoff acoustic environments were verified with scale model test results. Results showed that data book environments were under-conservative for Frustum (Zone 5). Recommendations: Data book environments can be updated with scale model test and flight data. Subscale acoustic model testing useful for future vehicle environment assessments.
Predicting brain acceleration during heading of soccer ball
NASA Astrophysics Data System (ADS)
Taha, Zahari; Hasnun Arif Hassan, Mohd; Azri Aris, Mohd; Anuar, Zulfika
2013-12-01
There has been a long debate whether purposeful heading could cause harm to the brain. Studies have shown that repetitive heading could lead to degeneration of brain cells, similar to that found in patients with mild traumatic brain injury. A two-degree-of-freedom linear mathematical model was developed to study the impact of a soccer ball on the brain during ball-to-head impact in soccer. From the model, the acceleration of the brain upon impact can be obtained. The model is a mass-spring-damper system, in which the skull is modelled as a mass and the neck is modelled as a spring-damper system. The brain is a mass with suspension characteristics that are also defined by a spring and a damper. The model was validated by experiment, in which a ball was dropped from different heights onto an instrumented dummy skull. The validation shows that the results obtained from the model are in good agreement with the brain acceleration measured from the experiment. These findings show that a simple linear mathematical model can be useful in giving a preliminary insight into what the human brain endures during a ball-to-head impact.
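A sketch of such a two-degree-of-freedom mass-spring-damper model is given below, with the skull restrained by a neck spring-damper and the brain coupled to the skull by its own spring-damper. All parameter values and the rectangular impact pulse are invented for illustration, not taken from the paper.

```python
import numpy as np

# Two-DOF head model: skull (m1) restrained by a neck spring-damper,
# brain (m2) coupled to the skull by a second spring-damper.
# All parameter values below are invented for illustration.
m1, m2 = 4.5, 1.4          # kg (skull, brain)
k1, c1 = 2.0e4, 50.0       # neck stiffness (N/m) and damping (N s/m)
k2, c2 = 1.0e5, 100.0      # skull-brain coupling

def simulate(f_peak=1000.0, f_dur=0.01, t_end=0.5, dt=1e-5):
    """Semi-implicit Euler integration; returns time and brain acceleration."""
    x1 = v1 = x2 = v2 = 0.0
    ts, accs = [], []
    for i in range(int(t_end / dt)):
        t = i * dt
        f = f_peak if t < f_dur else 0.0   # rectangular impact pulse (N)
        a1 = (f - k1 * x1 - c1 * v1 - k2 * (x1 - x2) - c2 * (v1 - v2)) / m1
        a2 = (k2 * (x1 - x2) + c2 * (v1 - v2)) / m2
        v1 += a1 * dt
        x1 += v1 * dt
        v2 += a2 * dt
        x2 += v2 * dt
        ts.append(t)
        accs.append(a2)
    return np.array(ts), np.array(accs)

ts, accs = simulate()
peak = np.abs(accs).max()
print("peak brain acceleration (m/s^2):", round(float(peak), 1))
```

The peak occurs during or shortly after the impulse, and the damped response decays afterwards, which is the qualitative behaviour the authors validate against drop-test measurements.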
Mansberger, Steven L; Sheppler, Christina R; McClure, Tina M; Vanalstine, Cory L; Swanson, Ingrid L; Stoumbos, Zoey; Lambert, William E
2013-09-01
To report the psychometrics of the Glaucoma Treatment Compliance Assessment Tool (GTCAT), a new questionnaire designed to assess adherence with glaucoma therapy. We developed the questionnaire according to the constructs of the Health Belief Model. We evaluated the questionnaire using data from a cross-sectional study with focus groups (n = 20) and a prospective observational case series (n = 58). Principal components analysis provided assessment of construct validity. We repeated the questionnaire after 3 months for test-retest reliability. We evaluated predictive validity using an electronic dosing monitor as an objective measure of adherence. Focus group participants provided 931 statements related to adherence, of which 88.7% (826/931) could be categorized into the constructs of the Health Belief Model. Perceived barriers accounted for 31% (288/931) of statements, cues-to-action 14% (131/931), susceptibility 12% (116/931), benefits 12% (115/931), severity 10% (91/931), and self-efficacy 9% (85/931). The principal components analysis explained 77% of the variance with five components representing Health Belief Model constructs. Reliability analyses showed acceptable Cronbach's alphas (>.70) for four of the seven components (severity, susceptibility, barriers [eye drop administration], and barriers [discomfort]). Predictive validity was high, with several Health Belief Model questions significantly associated (P < .05) with adherence and a correlation coefficient (R²) of .40. Test-retest reliability was 90%. The GTCAT shows excellent repeatability, content, construct, and predictive validity for glaucoma adherence. A multisite trial is needed to determine whether the results can be generalized and whether the questionnaire accurately measures the effect of interventions to increase adherence.
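The Cronbach's alpha criterion (>.70) used above can be computed directly from an item-score matrix. A generic sketch with synthetic data, not the GTCAT responses:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(0, 1, (100, 1))
consistent = latent + rng.normal(0, 0.3, (100, 4))   # items share a common factor
noise = rng.normal(0, 1, (100, 4))                   # unrelated items

alpha_high = cronbach_alpha(consistent)
alpha_low = cronbach_alpha(noise)
print(round(alpha_high, 2))  # high, comfortably above the .70 criterion
print(round(alpha_low, 2))   # near zero
```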
Kumar, B V S Suneel; Lakshmi, Narasu; Kumar, M Ravi; Rambabu, Gundla; Manjashetty, Thimmappa H; Arunasree, Kalle M; Sriram, Dharmarajan; Ramkumar, Kavya; Neamati, Nouri; Dayam, Raveendra; Sarma, J A R P
2014-01-01
Fibroblast growth factor receptor 1 (FGFR1), a tyrosine kinase receptor, plays important roles in angiogenesis, embryonic development, cell proliferation, cell differentiation, and wound healing. The FGFR isoforms are considered potential targets and are under intense research for the design of anticancer agents. Fibroblast growth factors (FGFs) and their receptors (FGFRs) play a vital role in one of the critical pathways controlling angiogenesis. In the current study, quantitative pharmacophore models were generated and validated using known FGFR1 inhibitors. The pharmacophore models were generated using a set of 28 compounds (training set). The top pharmacophore model was selected and validated using a set of 126 compounds (test set) and also by external validation. The validated pharmacophore was used as a virtual screening query to screen a database of 400,000 virtual molecules, and the pharmacophore model retrieved 2800 hits. The retrieved hits were subsequently filtered based on the fit value. The selected hits were subjected to docking studies to observe the binding modes of the retrieved hits and also to reduce the number of false positives. One of the potential hits (a thiazole-2-amine derivative) was selected based on the pharmacophore fit value, dock score, and synthetic feasibility. A few analogues of the thiazole-2-amine derivative were synthesized. These compounds were screened for FGFR1 activity and in anti-proliferative studies. The top active compound showed 56.87% inhibition of FGFR1 activity at 50 µM and also showed good cellular activity. Further optimization of the thiazole-2-amine derivatives is in progress.
Copenhagen Psychosocial Questionnaire - A validation study using the Job Demand-Resources model.
Berthelsen, Hanne; Hakanen, Jari J; Westerlund, Hugo
2018-01-01
This study aims at investigating the nomological validity of the Copenhagen Psychosocial Questionnaire (COPSOQ II) by using an extension of the Job Demands-Resources (JD-R) model with aspects of work ability (WA) as outcome. The study design is cross-sectional. All staff working at public dental organizations in four regions of Sweden were invited to complete an electronic questionnaire (75% response rate, n = 1345). The questionnaire was based on COPSOQ II scales, the Utrecht Work Engagement scale, and the one-item Work Ability Score in combination with a proprietary item. The data was analysed by Structural Equation Modelling. This study contributed to the literature by showing that: A) the scale characteristics were satisfactory and the construct validity of the COPSOQ instrument could be integrated in the JD-R framework; B) job resources arising from leadership may be a driver of the two processes included in the JD-R model; and C) both the health impairment and motivational processes were associated with WA, and the results suggested that leadership may impact WA, in particular by securing task resources. In conclusion, the nomological validity of COPSOQ was supported as the JD-R model can be operationalized by the instrument. This may be helpful for the transferral of complex survey results and work life theories to practitioners in the field.
Development and validation of a mortality risk model for pediatric sepsis.
Chen, Mengshi; Lu, Xiulan; Hu, Li; Liu, Pingping; Zhao, Wenjiao; Yan, Haipeng; Tang, Liang; Zhu, Yimin; Xiao, Zhenghui; Chen, Lizhang; Tan, Hongzhuan
2017-05-01
Pediatric sepsis is a burdensome public health problem. Assessing the mortality risk of pediatric sepsis patients, offering effective treatment guidance, and improving prognosis to reduce mortality rates are crucial. We extracted data derived from electronic medical records of pediatric sepsis patients that were collected during the first 24 hours after admission to the pediatric intensive care unit (PICU) of the Hunan Children's Hospital from January 2012 to June 2014. A total of 788 children were randomly divided into a training group (592, 75%) and a validation group (196, 25%). The risk factors for mortality among these patients were identified by conducting multivariate logistic regression in the training group. Based on the established logistic regression equation, the logit probabilities for all patients (in both groups) were calculated to verify the model's internal and external validity. According to the training group, 6 variables (brain natriuretic peptide, albumin, total bilirubin, D-dimer, lactate levels, and mechanical ventilation within 24 hours) were included in the final logistic regression model. The areas under the curves of the model were 0.854 (0.826, 0.881) and 0.844 (0.816, 0.873) in the training and validation groups, respectively. The Mortality Risk Model for Pediatric Sepsis established in this study showed acceptable accuracy for predicting mortality risk in pediatric sepsis patients.
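The workflow described (75/25 split, multivariate logistic regression, AUC on both groups) can be sketched with scikit-learn. This is a minimal sketch on synthetic data: the six predictor columns are illustrative stand-ins for the reported variables (brain natriuretic peptide, albumin, etc.), not the study's records.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 788                                   # cohort size reported in the abstract
X = rng.normal(size=(n, 6))               # stand-ins for the six predictors
logit = X @ np.array([0.8, -0.6, 0.5, 0.7, 0.9, 0.4]) - 1.0
y = rng.random(n) < 1 / (1 + np.exp(-logit))   # synthetic mortality outcome

# 75/25 split into training and validation groups, as in the study
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)

# AUC computed on both groups to check internal and external validity
auc_tr = roc_auc_score(y_tr, model.predict_proba(X_tr)[:, 1])
auc_va = roc_auc_score(y_va, model.predict_proba(X_va)[:, 1])
print(f"training AUC = {auc_tr:.3f}, validation AUC = {auc_va:.3f}")
```

A validation AUC close to the training AUC, as reported in the study (0.854 vs. 0.844), suggests the model is not overfitted to the training group.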
Oliveira, Thaís D; Costa, Danielle de S; Albuquerque, Maicon R; Malloy-Diniz, Leandro F; Miranda, Débora M; de Paula, Jonas J
2018-06-11
The Parenting Styles and Dimensions Questionnaire (PSDQ) is used worldwide to assess three styles (authoritative, authoritarian, and permissive) and seven dimensions of parenting. In this study, we adapted the short version of the PSDQ for use in Brazil and investigated its validity and reliability. Participants were 451 mothers of children aged 3 to 18 years, though sample size varied with analyses. The translation and adaptation of the PSDQ followed a rigorous methodological approach. Then, we investigated the content, criterion, and construct validity of the adapted instrument. The scale content validity index (S-CVI) was considered adequate (0.97). There was evidence of internal validity, with the PSDQ dimensions showing strong correlations with their higher-order parenting styles. Confirmatory factor analysis endorsed the three-factor, second-order solution (i.e., three styles consisting of seven dimensions). The PSDQ showed convergent validity with the validated Brazilian version of the Parenting Styles Inventory (Inventário de Estilos Parentais - IEP), as well as external validity, as it was associated with several instruments measuring sociodemographic and behavioral/emotional-problem variables. The PSDQ is an effective and reliable psychometric instrument to assess childrearing strategies according to Baumrind's model of parenting styles.
Validation of the Rational and Experiential Multimodal Inventory in the Italian Context.
Monacis, Lucia; de Palo, Valeria; Di Nuovo, Santo; Sinatra, Maria
2016-08-01
The unfavorable relations of the Rational and Experiential Inventory Experiential scale with objective criterion measures and its limited content validity led Norris and Epstein to propose a more content-valid measure of the experiential thinking style, the Rational and Experiential Multimodal Inventory (REIm), designed to assess the several facets of a broader experiential system consisting of interrelated components. This study aimed to provide the Italian validation of the inventory by examining its psychometric features and factor structure (Study 1, N = 545), and its convergent and discriminant validity (Study 2, N = 257). Study 1 supported the 2- and 4-factor solutions, and multi-group analyses confirmed measurement invariance across age and gender for both models. Study 2 provided evidence for convergent validity, supporting the theoretical associations between REIm scores and similar and related measures, and for discriminant validity, showing associations between the two thinking styles and a different but conceptually related construct, i.e., identity formation. No associations between REIm scores and social desirability were found. The Italian version of the REIm showed satisfactory psychometric properties, thus confirming its validity. © The Author(s) 2016.
OWL-based reasoning methods for validating archetypes.
Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás
2013-04-01
Some modern Electronic Healthcare Record (EHR) architectures and standards are based on the dual model-based architecture, which defines two conceptual levels: reference model and archetype model. Such architectures represent EHR domain knowledge by means of archetypes, which are considered by many researchers to play a fundamental role for the achievement of semantic interoperability in healthcare. Consequently, formal methods for validating archetypes are necessary. In recent years, there has been an increasing interest in exploring how semantic web technologies in general, and ontologies in particular, can facilitate the representation and management of archetypes, including binding to terminologies, but no solution based on such technologies has been provided to date to validate archetypes. Our approach represents archetypes by means of OWL ontologies. This makes it possible to combine the two levels of the dual model-based architecture in one modeling framework, which can also integrate terminologies available in OWL format. The validation method consists of reasoning on those ontologies to find modeling errors in archetypes: incorrect restrictions over the reference model, non-conformant archetype specializations, and inconsistent terminological bindings. The archetypes available in the repositories supported by the openEHR Foundation and the NHS Connecting for Health Program, the two largest publicly available repositories, have been analyzed with our validation method. For this purpose, we have implemented a software tool called Archeck. Our results show that around one fifth of archetype specializations contain modeling errors, the most common mistakes being related to coded terms and terminological bindings. The analysis of each repository reveals that different patterns of errors are found in the two repositories. This result reinforces the need for serious efforts to improve archetype design processes. Copyright © 2012 Elsevier Inc. All rights reserved.
Validation of the Family Inpatient Communication Survey.
Torke, Alexia M; Monahan, Patrick; Callahan, Christopher M; Helft, Paul R; Sachs, Greg A; Wocial, Lucia D; Slaven, James E; Montz, Kianna; Inger, Lev; Burke, Emily S
2017-01-01
Although many family members who make surrogate decisions report problems with communication, there is no validated instrument to accurately measure surrogate/clinician communication for older adults in the acute hospital setting. The objective of this study was to validate a survey of surrogate-rated communication quality in the hospital that would be useful to clinicians, researchers, and health systems. After expert review and cognitive interviewing (n = 10 surrogates), we enrolled 350 surrogates (250 development sample and 100 validation sample) of hospitalized adults aged 65 years and older from three hospitals in one metropolitan area. The communication survey and a measure of decision quality were administered between hospital days 3 and 10. Mental health and satisfaction measures were administered six to eight weeks later. Factor analysis showed support for both one-factor (Total Communication) and two-factor models (Information and Emotional Support). Item reduction led to a final 30-item scale. For the validation sample, internal reliability (Cronbach's alpha) was 0.96 (total), 0.94 (Information), and 0.90 (Emotional Support). Confirmatory factor analysis fit statistics were adequate (one-factor model, comparative fit index = 0.981, root mean square error of approximation = 0.062, weighted root mean square residual = 1.011; two-factor model, comparative fit index = 0.984, root mean square error of approximation = 0.055, weighted root mean square residual = 0.930). Total score and subscales showed significant associations with the Decision Conflict Scale (Pearson correlation -0.43, P < 0.001 for total score). Emotional Support was associated with improved mental health outcomes at six to eight weeks, such as anxiety (-0.19, P < 0.001), and Information was associated with satisfaction with the hospital stay (0.49, P < 0.001). The survey shows high reliability and validity in measuring communication experiences for hospital surrogates.
The scale has promise for measurement of communication quality and is predictive of important outcomes, such as surrogate satisfaction and well-being. Copyright © 2016 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.
Automatic welding detection by an intelligent tool pipe inspection
NASA Astrophysics Data System (ADS)
Arizmendi, C. J.; Garcia, W. L.; Quintero, M. A.
2015-07-01
This work provides a model based on machine learning techniques for weld recognition, using signals obtained through an in-line inspection tool ("smart pig") in oil and gas pipelines. The model uses a signal noise reduction phase by means of pre-processing algorithms and attribute-selection techniques. The noise reduction techniques were selected after a literature review and testing with survey data. Subsequently, the model was trained using recognition and classification algorithms, specifically artificial neural networks and support vector machines. Finally, the trained model was validated with different data sets and the performance was measured with cross-validation and ROC analysis. The results show that it is possible to identify welds automatically with an efficiency between 90 and 98 percent.
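The evaluation step (support vector machine classifier, cross-validation, ROC analysis) can be illustrated with a short scikit-learn sketch. The features here are synthetic stand-ins for the pre-processed inspection signals, not the actual pipeline data.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# synthetic features: weld locations (y = 1) shifted relative to plain pipe wall
n = 400
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, 5)) + y[:, None] * 1.5

# cross-validated ROC AUC for an RBF-kernel SVM, one of the classifiers tested
svm = SVC(kernel="rbf")
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(svm, X, y, cv=cv, scoring="roc_auc")
print(f"mean cross-validated AUC = {scores.mean():.3f}")
```

Stratified folds keep the weld/non-weld ratio constant across splits, so the per-fold AUC estimates are comparable.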
Yang-Baxter deformations of W2,4 × T1,1 and the associated T-dual models
NASA Astrophysics Data System (ADS)
Sakamoto, Jun-ichi; Yoshida, Kentaroh
2017-08-01
Recently, for principal chiral models and symmetric coset sigma models, Hoare and Tseytlin proposed an interesting conjecture that the Yang-Baxter deformations with the homogeneous classical Yang-Baxter equation are equivalent to non-abelian T-dualities with topological terms. It is significant to examine this conjecture for non-symmetric (i.e., non-integrable) cases. Such an example is the W2,4 × T1,1 background. In this note, we study Yang-Baxter deformations of type IIB string theory defined on W2,4 × T1,1 and the associated T-dual models, and show that this conjecture is valid even for this case. Our result indicates that the conjecture would be valid beyond integrability.
Modality, probability, and mental models.
Hinterecker, Thomas; Knauff, Markus; Johnson-Laird, P N
2016-10-01
We report 3 experiments investigating novel sorts of inference, such as: A or B or both; therefore, possibly (A and B). The contents were sensible assertions, for example, "Space tourism will achieve widespread popularity in the next 50 years or advances in material science will lead to the development of antigravity materials in the next 50 years, or both." Most participants accepted the inferences as valid, though they are invalid in modal logic and in probabilistic logic too. The theory of mental models, however, predicts that individuals should accept them. In contrast, inferences of the sort "A or B but not both; therefore, A or B or both" are both logically valid and probabilistically valid. Yet, as the model theory also predicts, most reasoners rejected them. The participants' estimates of probabilities showed that their inferences tended not to be based on probabilistic validity, but they did rate acceptable conclusions as more probable than unacceptable conclusions. We discuss the implications of the results for current theories of reasoning. PsycINFO Database Record (c) 2016 APA, all rights reserved
Auria, Richard; Boileau, Céline; Davidson, Sylvain; Casalot, Laurence; Christen, Pierre; Liebgott, Pierre Pol; Combet-Blanc, Yannick
2016-01-01
Thermotoga maritima is a hyperthermophilic bacterium known to produce hydrogen from a large variety of substrates. The aim of the present study is to propose a mathematical model incorporating kinetics of growth, consumption of substrates, product formation, and inhibition by hydrogen in order to predict hydrogen production under defined culture conditions. Our mathematical model, incorporating data concerning growth, substrates, and products, was developed to predict hydrogen production from batch fermentations of the hyperthermophilic bacterium T. maritima. It includes the inhibition by hydrogen and the liquid-to-gas mass transfer of H2, CO2, and H2S. Most kinetic parameters of the model were obtained from batch experiments without any fitting. The mathematical model is adequate for glucose, yeast extract, and thiosulfate concentrations ranging from 2.5 to 20 mmol/L, 0.2 to 0.5 g/L, and 0.01 to 0.06 mmol/L, respectively, corresponding to one of these compounds being the growth-limiting factor of T. maritima. When glucose, yeast extract, and thiosulfate concentrations are all higher than these ranges, the model overestimates all the variables. Within the window of the model's validity, predictions show that the combination of both variables (increase in limiting-factor concentration and in inlet gas stream) leads up to a twofold increase of the maximum H2-specific productivity with the lowest inhibition. A mathematical model predicting H2 production in T. maritima was successfully designed and confirmed in this study. However, it also shows the limits of validity of such mathematical models: their applicability must take into account the range of validity in which the parameters were established.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alves, Vinicius M. (Laboratory for Molecular Modeling, Division of Chemical Biology and Medicinal Chemistry, Eshelman School of Pharmacy, University of North Carolina, Chapel Hill, NC 27599); Muratov, Eugene
Repetitive exposure to a chemical agent can induce an immune reaction in inherently susceptible individuals that leads to skin sensitization. Although many chemicals have been reported as skin sensitizers, there have been very few rigorously validated QSAR models with defined applicability domains (AD) that were developed using a large group of chemically diverse compounds. In this study, we have aimed to compile, curate, and integrate the largest publicly available dataset related to chemically-induced skin sensitization, use this data to generate rigorously validated QSAR models for skin sensitization, and employ these models as a virtual screening tool for identifying putative sensitizers among environmental chemicals. We followed best practices for model building and validation implemented with our predictive QSAR workflow using the Random Forest modeling technique in combination with SiRMS and Dragon descriptors. The Correct Classification Rate (CCR) for QSAR models discriminating sensitizers from non-sensitizers was 71-88% when evaluated on several external validation sets, within a broad AD, with positive (for sensitizers) and negative (for non-sensitizers) predicted rates of 85% and 79%, respectively. When compared to the skin sensitization module included in the OECD QSAR Toolbox as well as to the skin sensitization model in publicly available VEGA software, our models showed a significantly higher prediction accuracy for the same sets of external compounds as evaluated by Positive Predicted Rate, Negative Predicted Rate, and CCR. These models were applied to identify putative chemical hazards in the Scorecard database of possible skin or sense organ toxicants as primary candidates for experimental validation. - Highlights: • We compiled the largest publicly available skin sensitization dataset. • Predictive QSAR models were developed for skin sensitization. • The developed models have higher prediction accuracy than the OECD QSAR Toolbox. • Putative chemical hazards in the Scorecard database were found using our models.
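The reported metric, the Correct Classification Rate (CCR), is the mean of sensitivity and specificity (balanced accuracy for two classes). A minimal sketch of a Random Forest sensitizer/non-sensitizer model with CCR on an external set, using synthetic descriptors in place of the SiRMS/Dragon features:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 600
y = rng.integers(0, 2, n)                  # 1 = sensitizer, 0 = non-sensitizer
X = rng.normal(size=(n, 20)) + y[:, None]  # stand-ins for molecular descriptors

X_tr, X_ext, y_tr, y_ext = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
y_hat = rf.predict(X_ext)

# CCR = mean of sensitivity and specificity, i.e. balanced accuracy
tn, fp, fn, tp = confusion_matrix(y_ext, y_hat).ravel()
ccr = 0.5 * (tp / (tp + fn) + tn / (tn + fp))
assert abs(ccr - balanced_accuracy_score(y_ext, y_hat)) < 1e-9
print(f"external-set CCR = {ccr:.3f}")
```

Unlike plain accuracy, CCR is insensitive to class imbalance, which matters when sensitizers are a minority of the screened chemicals.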
O'Hare, L; Santin, O; Winter, K; McGuinness, C
2016-09-01
There is a growing impetus across the research, policy and practice communities for children and young people to participate in decisions that affect their lives. At the same time, there is a dearth of general instruments that measure children and young people's views on their participation in decision-making. This paper presents the reliability and validity of the Child and Adolescent Participation in Decision-Making Questionnaire (CAP-DMQ), specifically in a population of looked-after children, for whom a lack of participation in decision-making is an acute issue. The participants were 151 looked-after children and adolescents aged 10 to 23 years who completed the 10-item CAP-DMQ. Of the participants, 113 were in receipt of an advocacy service aimed at increasing participation in decision-making, with the remaining participants not having received this service. The results showed that the CAP-DMQ had good reliability (Cronbach's alpha = 0.94) and showed promising uni-dimensional construct validity in an exploratory factor analysis. The items in the CAP-DMQ also demonstrated good content validity by overlapping with prominent models of child and adolescent participation (Lundy 2007) and decision-making (Halpern 2014). A regression analysis showed that age and gender were not significant predictors of CAP-DMQ scores but receipt of advocacy was (effect size d = 0.88), thus showing appropriate discriminant criterion validity. Overall, the CAP-DMQ showed good reliability and validity. The measure therefore has excellent promise for theoretical investigation of child and adolescent participation in decision-making and equally shows empirical promise for evaluating services that aim to increase the participation of children and adolescents in decision-making. © 2016 John Wiley & Sons Ltd.
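The reliability figure quoted here (Cronbach's alpha = 0.94) comes from a standard formula: alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A minimal sketch on synthetic questionnaire data (one latent factor plus item noise; the sample size mirrors the study's 151 respondents):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# synthetic 10-item scale: a single latent trait drives all items,
# so internal consistency (alpha) comes out high
rng = np.random.default_rng(3)
latent = rng.normal(size=(151, 1))
items = latent + 0.5 * rng.normal(size=(151, 10))
print(f"alpha = {cronbach_alpha(items):.2f}")
```

Values above roughly 0.9, as reported for the CAP-DMQ, indicate that the items are measuring a common construct consistently.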
An improved procedure for the validation of satellite-based precipitation estimates
NASA Astrophysics Data System (ADS)
Tang, Ling; Tian, Yudong; Yan, Fang; Habib, Emad
2015-09-01
The objective of this study is to propose and test a new procedure to improve the validation of remote-sensing, high-resolution precipitation estimates. Our recent studies show that many conventional validation measures do not capture the unique error characteristics in precipitation estimates well enough to inform both data producers and users. The proposed validation procedure has two steps: 1) an error decomposition approach separates the total retrieval error into three independent components: hit error, false precipitation, and missed precipitation; and 2) the hit error is further analyzed with a multiplicative error model, in which the error features are captured by three model parameters. In this way, the multiplicative error model separates systematic and random errors, leading to more accurate quantification of the uncertainties. The procedure is used to quantitatively evaluate the two recent versions (Version 6 and 7) of TRMM's Multi-satellite Precipitation Analysis (TMPA) real-time and research product suite (3B42 and 3B42RT) over seven years (2005-2011) over the continental United States (CONUS). The gauge-based National Centers for Environmental Prediction (NCEP) Climate Prediction Center (CPC) near-real-time daily precipitation analysis is used as the reference. In addition, the radar-based NCEP Stage IV precipitation data are also model-fitted to verify the effectiveness of the multiplicative error model. The results show that the winter total bias is dominated by missed precipitation over the west coastal areas and the Rocky Mountains, and by false precipitation over large areas of the Midwest. The summer total bias comes largely from the hit bias in the central US. Meanwhile, the new version (V7) tends to produce more rainfall at higher rain rates, which moderates the significant underestimation exhibited in the previous V6 products. 
Moreover, the error analysis from the multiplicative error model provides a clear and concise picture of the systematic and random errors, with both versions of 3B42RT having higher errors, in varying degrees, than their research (post-real-time) counterparts. The new V7 algorithm shows obvious improvements in reducing random errors in both winter and summer seasons compared to its predecessor V6. Stage IV, as expected, surpasses the satellite-based datasets in all the metrics over CONUS. Based on the results, we recommend the new procedure be adopted for routine validation of satellite-based precipitation datasets, and we expect the procedure to work effectively for the higher resolution data to be produced in the Global Precipitation Measurement (GPM) era.
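The first step of the procedure, splitting the total retrieval error into hit, missed, and false components, can be sketched in a few lines of NumPy. The arrays and rain/no-rain threshold here are synthetic stand-ins, not the TMPA or CPC data:

```python
import numpy as np

def decompose_error(est: np.ndarray, ref: np.ndarray, thresh: float = 0.0):
    """Split the total error (est - ref) into hit, missed and false components."""
    hit = (est > thresh) & (ref > thresh)
    missed = (est <= thresh) & (ref > thresh)
    false = (est > thresh) & (ref <= thresh)
    e_hit = np.where(hit, est - ref, 0.0)
    e_missed = np.where(missed, -ref, 0.0)   # rain that was missed entirely
    e_false = np.where(false, est, 0.0)      # rain reported where none fell
    return e_hit, e_missed, e_false

rng = np.random.default_rng(4)
ref = np.maximum(rng.normal(1.0, 2.0, 1000), 0.0)        # reference rain (mm)
est = np.maximum(ref + rng.normal(0.0, 1.0, 1000), 0.0)  # satellite estimate
e_hit, e_missed, e_false = decompose_error(est, ref)
# the three components sum exactly to the total error at every grid point
assert np.allclose(e_hit + e_missed + e_false, est - ref)
```

The second step would then fit the hit error alone with a multiplicative model (e.g. regressing log(est) on log(ref) over hit pixels), so that systematic and random parts can be read off separately.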
NASA Astrophysics Data System (ADS)
Most, S.; Nowak, W.; Bijeljic, B.
2014-12-01
Transport processes in porous media are frequently simulated as particle movement. This process can be formulated as a stochastic process of particle position increments. At the pore scale, the geometry and micro-heterogeneities prohibit the commonly made assumption of independent and normally distributed increments to represent dispersion. Many recent particle methods seek to loosen this assumption. Recent experimental data suggest that we have not yet reached the end of the need to generalize, because particle increments show statistical dependency beyond linear correlation and over many time steps. The goal of this work is to better understand the validity regions of commonly made assumptions. We investigate after what transport distances we can observe: (1) a statistical dependence between increments that can be modelled as an order-k Markov process reducing to order 1, which would be the Markovian distance for the process, where the validity of yet-unexplored non-Gaussian-but-Markovian random walks would start; (2) a bivariate statistical dependence that simplifies to a multi-Gaussian dependence based on simple linear correlation (validity of correlated PTRW); and (3) complete absence of statistical dependence (validity of classical PTRW/CTRW). The approach is to derive a statistical model for pore-scale transport from a powerful experimental data set via copula analysis. The model is formulated as a non-Gaussian, mutually dependent Markov process of higher order, which allows us to investigate the validity ranges of simpler models.
Decoding Spontaneous Emotional States in the Human Brain
Kragel, Philip A.; Knodt, Annchen R.; Hariri, Ahmad R.; LaBar, Kevin S.
2016-01-01
Pattern classification of human brain activity provides unique insight into the neural underpinnings of diverse mental states. These multivariate tools have recently been used within the field of affective neuroscience to classify distributed patterns of brain activation evoked during emotion induction procedures. Here we assess whether neural models developed to discriminate among distinct emotion categories exhibit predictive validity in the absence of exteroceptive emotional stimulation. In two experiments, we show that spontaneous fluctuations in human resting-state brain activity can be decoded into categories of experience delineating unique emotional states that exhibit spatiotemporal coherence, covary with individual differences in mood and personality traits, and predict on-line, self-reported feelings. These findings validate objective, brain-based models of emotion and show how emotional states dynamically emerge from the activity of separable neural systems. PMID:27627738
SCS-CN based time-distributed sediment yield model
NASA Astrophysics Data System (ADS)
Tyagi, J. V.; Mishra, S. K.; Singh, Ranvir; Singh, V. P.
2008-05-01
A sediment yield model is developed to estimate the temporal rates of sediment yield from rainfall events on natural watersheds. The model utilizes the SCS-CN based infiltration model for computation of the rainfall-excess rate, and the SCS-CN-inspired proportionality concept for computation of sediment excess. For computation of sedimentographs, the sediment excess is routed to the watershed outlet using a single linear reservoir technique. Analytical development of the model shows that the ratio of the potential maximum erosion (A) to the potential maximum retention (S) of the SCS-CN method is constant for a watershed. The model is calibrated and validated on a number of events using the data of seven watersheds from India and the USA. Representative values of the A/S ratio computed for the watersheds from calibration are used for the validation of the model. The encouraging results of the proposed simple four-parameter model exhibit its potential for field application.
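The rainfall-excess computation that this model builds on is the standard SCS-CN relation: S = 25400/CN - 254 (S in mm), initial abstraction Ia = 0.2 S, and runoff Q = (P - Ia)^2 / (P - Ia + S) for P > Ia. A minimal sketch, with an illustrative curve number and storm depth:

```python
def scs_cn_runoff(p_mm: float, cn: float, ia_ratio: float = 0.2) -> float:
    """Rainfall-excess depth Q (mm) from the standard SCS-CN relation."""
    s = 25400.0 / cn - 254.0      # potential maximum retention (mm)
    ia = ia_ratio * s             # initial abstraction
    if p_mm <= ia:
        return 0.0                # all rainfall absorbed before runoff starts
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# e.g. a 100 mm storm on a watershed with CN = 80
print(f"{scs_cn_runoff(100.0, 80.0):.1f} mm")   # → 50.5 mm
```

The paper's contribution sits on top of this relation: a proportionality concept turns the rainfall excess into sediment excess, which is then routed through a single linear reservoir to produce the sedimentograph.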
Iwamoto, Masami; Nakahira, Yuko; Kimpara, Hideyuki
2015-01-01
Active safety devices such as automatic emergency brake (AEB) and precrash seat belt have the potential to accomplish further reduction in the number of the fatalities due to automotive accidents. However, their effectiveness should be investigated by more accurate estimations of their interaction with human bodies. Computational human body models are suitable for investigation, especially considering muscular tone effects on occupant motions and injury outcomes. However, the conventional modeling approaches such as multibody models and detailed finite element (FE) models have advantages and disadvantages in computational costs and injury predictions considering muscular tone effects. The objective of this study is to develop and validate a human body FE model with whole body muscles, which can be used for the detailed investigation of interaction between human bodies and vehicular structures including some safety devices precrash and during a crash with relatively low computational costs. In this study, we developed a human body FE model called THUMS (Total HUman Model for Safety) with a body size of 50th percentile adult male (AM50) and a sitting posture. The model has anatomical structures of bones, ligaments, muscles, brain, and internal organs. The total number of elements is 281,260, which would realize relatively low computational costs. Deformable material models were assigned to all body parts. The muscle-tendon complexes were modeled by truss elements with Hill-type muscle material and seat belt elements with tension-only material. The THUMS was validated against 35 series of cadaver or volunteer test data on frontal, lateral, and rear impacts. Model validations for 15 series of cadaver test data associated with frontal impacts are presented in this article. The THUMS with a vehicle sled model was applied to investigate effects of muscle activations on occupant kinematics and injury outcomes in specific frontal impact situations with AEB. 
In the validations using 5 series of cadaver test data, force-time curves predicted by the THUMS were quantitatively evaluated using correlation and analysis (CORA), which showed good or acceptable agreement with cadaver test data in most cases. The investigation of muscular effects showed that muscle activation levels and timing had significant effects on occupant kinematics and injury outcomes. Although further studies on accident injury reconstruction are needed, the THUMS has the potential for predictions of occupant kinematics and injury outcomes considering muscular tone effects with relatively low computational costs.
Cerin, Ester; Conway, Terry L; Saelens, Brian E; Frank, Lawrence D; Sallis, James F
2009-01-01
Background The Neighborhood Environment Walkability Scale (NEWS) and its abbreviated form (NEWS-A) assess perceived environmental attributes believed to influence physical activity. A multilevel confirmatory factor analysis (MCFA) conducted on a sample from Seattle, WA showed that, at the respondent level, the factor-analyzable items of the NEWS and NEWS-A measured 11 and 10 constructs of perceived neighborhood environment, respectively. At the census blockgroup (used by the US Census Bureau as a subunit of census tracts) level, the MCFA yielded five factors for both NEWS and NEWS-A. The aim of this study was to cross-validate the individual- and blockgroup-level measurement models of the NEWS and NEWS-A in a geographical location and population different from those used in the original validation study. Methods A sample of 912 adults was recruited from 16 selected neighborhoods (116 census blockgroups) in the Baltimore, MD region. Neighborhoods were stratified according to their socio-economic status and transport-related walkability level measured using Geographic Information Systems. Participants self-completed the NEWS. MCFA was used to cross-validate the individual- and blockgroup-level measurement models of the NEWS and NEWS-A. Results The data provided sufficient support for the factorial validity of the original individual-level measurement models, which consisted of 11 (NEWS) and 10 (NEWS-A) correlated factors. The original blockgroup-level measurement model of the NEWS and NEWS-A showed poor fit to the data and required substantial modifications. These included the combining of aspects of building aesthetics with safety from crime into one factor; the separation of natural aesthetics and building aesthetics into two factors; and for the NEWS-A, the separation of presence of sidewalks/walking routes from other infrastructure for walking. 
Conclusion This study provided support for the generalizability of the individual-level measurement models of the NEWS and NEWS-A to different urban geographical locations in the USA. It is recommended that the NEWS and NEWS-A be scored according to their individual-level measurement models, which are relatively stable and correspond to constructs commonly used in the urban planning and transportation fields. However, prior to using these instruments in international and multi-cultural studies, further validation work across diverse non-English speaking countries and populations is needed. PMID:19508724
Cerin, Ester; Conway, Terry L; Saelens, Brian E; Frank, Lawrence D; Sallis, James F
2009-06-09
Yokokura, Ana Valéria Carvalho Pires; Silva, Antônio Augusto Moura da; Fernandes, Juliana de Kássia Braga; Del-Ben, Cristina Marta; Figueiredo, Felipe Pinheiro de; Barbieri, Marco Antonio; Bettiol, Heloisa
2017-12-18
This study aimed to assess the dimensional structure, reliability, convergent validity, discriminant validity, and scalability of the Perceived Stress Scale (PSS). The sample consisted of 1,447 pregnant women in São Luís (Maranhão State) and 1,400 in Ribeirão Preto (São Paulo State), Brazil. The 14- and 10-item versions of the scale were assessed with confirmatory factor analysis, using the weighted least squares means and variance (WLSMV) estimator. In both cities, the two-factor models (a positive factor, measuring resilience to stressful situations, and a negative factor, measuring stressful situations) showed better fit than the single-factor models. The two-factor models for the complete (PSS14) and reduced scale (PSS10) showed good internal consistency (Cronbach's alpha ≥ 0.70). All the factor loadings were ≥ 0.50, except for items 8 and 12 of the negative dimension and item 13 of the positive dimension. The correlations between both dimensions of stress and psychological violence showed the expected magnitude (0.46-0.59), providing evidence of adequate convergent construct validity. The correlations between the scales' positive and negative dimensions were around 0.74-0.78, less than 0.85, which suggests adequate discriminant validity. Average variance extracted and scalability were slightly higher for PSS10 than for PSS14. The results were consistent in both cities. In conclusion, the single-factor solution is not recommended for assessing stress in pregnant women. The reduced, 10-item two-factor scale appears to be more appropriate for measuring perceived stress in pregnant women.
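The internal-consistency criterion used above (Cronbach's alpha ≥ 0.70) is straightforward to compute. A minimal sketch, with simulated item scores standing in for the study's data:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of scale totals
    return (k / (k - 1.0)) * (1.0 - item_var / total_var)

# Simulated 10-item scale: items share one latent factor plus noise
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
items = latent + 0.8 * rng.normal(size=(200, 10))
alpha = cronbach_alpha(items)
```

Because the simulated items load heavily on a single latent factor, the resulting alpha comfortably clears the 0.70 threshold reported for both PSS versions.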
Lee, Jason; Morishima, Toshitaka; Kunisawa, Susumu; Sasaki, Noriko; Otsubo, Tetsuya; Ikai, Hiroshi; Imanaka, Yuichi
2013-01-01
Stroke and other cerebrovascular diseases are a major cause of death and disability. Predicting in-hospital mortality in ischaemic stroke patients can help to identify high-risk patients and guide treatment approaches. Chart reviews provide important clinical information for mortality prediction, but are laborious and limited in sample size. Administrative data allow for large-scale multi-institutional analyses but lack the necessary clinical information for outcome research. However, administrative claims data in Japan have recently come to include patient consciousness and disability information, which may allow more accurate mortality prediction using administrative data alone. The aim of this study was to derive and validate models to predict in-hospital mortality in patients admitted for ischaemic stroke using administrative data. The sample consisted of 21,445 patients from 176 Japanese hospitals, who were randomly divided into derivation and validation subgroups. Multivariable logistic regression models were developed using 7- and 30-day and overall in-hospital mortality as dependent variables. Independent variables included patient age, sex, comorbidities upon admission, Japan Coma Scale (JCS) score, Barthel Index score, modified Rankin Scale (mRS) score, and admissions after hours and on weekends/public holidays. Models were developed in the derivation subgroup, and coefficients from these models were applied to the validation subgroup. Predictive ability was analysed using C-statistics; calibration was evaluated with Hosmer-Lemeshow χ(2) tests. All three models showed predictive abilities similar to or surpassing those of chart review-based models. The C-statistics were highest in the 7-day in-hospital mortality prediction model, at 0.906 and 0.901 in the derivation and validation subgroups, respectively.
For the 30-day in-hospital mortality prediction models, the C-statistics for the derivation and validation subgroups were 0.893 and 0.872, respectively; in overall in-hospital mortality prediction these values were 0.883 and 0.876. In this study, we have derived and validated in-hospital mortality prediction models for three different time spans using a large population of ischaemic stroke patients in a multi-institutional analysis. The recent inclusion of JCS, Barthel Index, and mRS scores in Japanese administrative data has allowed the prediction of in-hospital mortality with accuracy comparable to that of chart review analyses. The models developed using administrative data had consistently high predictive abilities for all models in both the derivation and validation subgroups. These results have implications in the role of administrative data in future mortality prediction analyses. Copyright © 2013 S. Karger AG, Basel.
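The derive-then-validate workflow above (fit a logistic model on one subgroup, apply its coefficients to the other, score with the C-statistic) can be sketched with fully synthetic data; the three predictors are hypothetical numeric stand-ins, not the JCS, Barthel Index, or mRS scores:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic cohort: 3 illustrative predictors and a binary mortality outcome
n = 2000
X = rng.normal(size=(n, 3))
logits = 1.5 * X[:, 0] + 1.0 * X[:, 1] - 0.5 * X[:, 2] - 1.0
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(float)

# Random split into derivation and validation subgroups
idx = rng.permutation(n)
derive, valid = idx[: n // 2], idx[n // 2:]

# Fit logistic regression by gradient descent on the derivation subgroup
Xd = np.c_[np.ones(len(derive)), X[derive]]
w = np.zeros(Xd.shape[1])
for _ in range(2000):
    p = 1 / (1 + np.exp(-Xd @ w))
    w -= 0.1 * Xd.T @ (p - y[derive]) / len(derive)

def c_statistic(scores, labels):
    """C-statistic (AUC) via the rank-sum (Mann-Whitney) formulation."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n1, n0 = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

# Apply derivation coefficients to the held-out validation subgroup
Xv = np.c_[np.ones(len(valid)), X[valid]]
auc_valid = c_statistic(Xv @ w, y[valid])
```

The key step mirrors the abstract: the coefficients are frozen after fitting on the derivation subgroup and only *applied* to the validation subgroup, so `auc_valid` is an out-of-sample estimate of discrimination.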
Hua, Xijin; Wang, Ling; Al-Hajjar, Mazen; Jin, Zhongmin; Wilcox, Ruth K; Fisher, John
2014-07-01
Finite element models are becoming increasingly useful tools to conduct parametric analysis, design optimisation and pre-clinical testing for hip joint replacements. However, verification of the finite element model is critically important. The purposes of this study were to develop a three-dimensional anatomic finite element model of a modular metal-on-polyethylene total hip replacement for predicting its contact mechanics, and to conduct experimental validation of a simple finite element model simplified from the anatomic model. An anatomic modular metal-on-polyethylene total hip replacement model (anatomic model) was first developed and then simplified, with reasonable accuracy, to a simple modular total hip replacement model (simplified model) for validation. The contact areas on the articulating surface of three polyethylene liners of modular metal-on-polyethylene total hip replacement bearings with different clearances were measured experimentally in the Leeds ProSim hip joint simulator under a series of loading conditions and different cup inclination angles. The contact areas predicted from the simplified model were then compared with those measured experimentally under the same conditions. The results showed that the simplification made to the anatomic model did not substantially change the predicted contact mechanics of the modular metal-on-polyethylene total hip replacement (less than 12% for contact stresses and contact areas). Good agreement of contact areas between the finite element predictions from the simplified model and the experimental measurements was obtained, with a maximum difference of 14% across all conditions considered. This indicated that the simplification and assumptions made in the anatomic model were reasonable and that the finite element predictions from the simplified model were valid. © IMechE 2014.
Steenson, Sharalyn; Özcebe, Hilal; Arslan, Umut; Konşuk Ünlü, Hande; Araz, Özgür M; Yardim, Mahmut; Üner, Sarp; Bilir, Nazmi; Huang, Terry T-K
2018-01-01
Childhood obesity rates have been rising rapidly in developing countries. A better understanding of the risk factors and social context is necessary to inform public health interventions and policies. This paper describes the validation of several measurement scales for use in Turkey, which relate to child and parent perceptions of physical activity (PA) and enablers and barriers of physical activity in the home environment. The aim of this study was to assess the validity and reliability of several measurement scales in Turkey using a population sample across three socio-economic strata in the Turkish capital, Ankara. Surveys were conducted in Grade 4 children (mean age = 9.7 years for boys; 9.9 years for girls), and their parents, across 6 randomly selected schools, stratified by SES (n = 641 students, 483 parents). Construct validity of the scales was evaluated through exploratory and confirmatory factor analysis. Internal consistency of scales and test-retest reliability were assessed by Cronbach's alpha and intra-class correlation. The scales as a whole were found to have acceptable-to-good model fit statistics (PA Barriers: RMSEA = 0.076, SRMR = 0.0577, AGFI = 0.901; PA Outcome Expectancies: RMSEA = 0.054, SRMR = 0.0545, AGFI = 0.916, and PA Home Environment: RMSEA = 0.038, SRMR = 0.0233, AGFI = 0.976). The PA Barriers subscales showed good internal consistency and poor to fair test-retest reliability (personal α = 0.79, ICC = 0.29, environmental α = 0.73, ICC = 0.59). The PA Outcome Expectancies subscales showed good internal consistency and test-retest reliability (negative α = 0.77, ICC = 0.56; positive α = 0.74, ICC = 0.49). Only the PA Home Environment subscale on support for PA was validated in the final confirmatory model; it showed moderate internal consistency and test-retest reliability (α = 0.61, ICC = 0.48). This study is the first to validate measures of perceptions of physical activity and the physical activity home environment in Turkey. 
Our results support the originally hypothesized two-factor structures for Physical Activity Barriers and Physical Activity Outcome Expectancies. However, we found the one-factor rather than two-factor structure for Physical Activity Home Environment had the best model fit. This study provides general support for the use of these scales in Turkey in terms of validity, but test-retest reliability warrants further research.
Chen, Po-Yi; Yang, Chien-Ming; Morin, Charles M
2015-05-01
The purpose of this study is to examine the factor structure of the Insomnia Severity Index (ISI) across samples recruited from different countries. We sought to identify the most appropriate factor model for the ISI and further examined its measurement invariance across these samples. Our analyses included one data set collected from a Taiwanese sample and two data sets obtained from samples in Hong Kong and Canada. The data set collected in Taiwan was analyzed with ordinal exploratory factor analysis (EFA) to obtain the appropriate factor model for the ISI. After that, we conducted a series of confirmatory factor analyses (CFAs), a special case of structural equation modeling (SEM) concerning the parameters of the measurement model, on the data collected in Canada and Hong Kong. The purposes of these CFAs were to cross-validate the result obtained from the EFA and to further examine the cross-cultural measurement invariance of the ISI. The three-factor model outperforms other models in terms of global fit indices in Taiwan's population. Its external validity is also supported by the confirmatory factor analyses. Furthermore, the measurement invariance analyses show that the strong invariance property between the samples from different cultures holds, providing evidence that ISI results obtained in different cultures are comparable. The factorial validity of the ISI is stable in different populations. More importantly, its invariance property across cultures suggests that the ISI is a valid measure of the insomnia severity construct across countries. Copyright © 2014 Elsevier B.V. All rights reserved.
The validation of a generalized Hooke's law for coronary arteries.
Wang, Chong; Zhang, Wei; Kassab, Ghassan S
2008-01-01
The exponential form of constitutive model is widely used in biomechanical studies of blood vessels. There are two main issues, however, with this model: 1) the curve fits of experimental data are not always satisfactory, and 2) the material parameters may be oversensitive. A new type of strain measure in a generalized Hooke's law for blood vessels was recently proposed by our group to address these issues. The new model has one nonlinear parameter and six linear parameters. In this study, the stress-strain equation is validated by fitting the model to experimental data of porcine coronary arteries. Material constants of left anterior descending artery and right coronary artery for the Hooke's law were computed with a separable nonlinear least-squares method with an excellent goodness of fit. A parameter sensitivity analysis shows that the stability of material constants is improved compared with the exponential model and a biphasic model. A boundary value problem was solved to demonstrate that the model prediction can match the measured arterial deformation under experimental loading conditions. The validated constitutive relation will serve as a basis for the solution of various boundary value problems of cardiovascular biomechanics.
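The separable least-squares idea used above (linear material constants eliminated analytically for each trial value of the single nonlinear parameter, leaving a one-dimensional search) can be sketched on a toy model. The exponential-plus-linear form and all numeric values below are illustrative, not the paper's constitutive law:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy separable model: y = c1 * (exp(k*x) - 1) + c2 * x
# k is the single nonlinear parameter; c1, c2 are the linear parameters.
x = np.linspace(0.0, 1.0, 50)
k_true, c_true = 2.0, np.array([0.7, 1.3])

def basis(k, x):
    return np.column_stack([np.expm1(k * x), x])

y = basis(k_true, x) @ c_true + 0.01 * rng.normal(size=x.size)

def projected_residual(k):
    """For fixed k, the optimal linear coefficients are an ordinary
    least-squares solve; return the resulting residual and coefficients."""
    A = basis(k, x)
    c, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.sum((A @ c - y) ** 2), c

# One-dimensional grid search over the nonlinear parameter only
ks = np.linspace(0.5, 4.0, 351)
errs = [projected_residual(k)[0] for k in ks]
k_hat = ks[int(np.argmin(errs))]
_, c_hat = projected_residual(k_hat)
```

Because the linear coefficients are always conditionally optimal, the search space collapses from three parameters to one, which is what makes the separable formulation more stable than fitting all parameters jointly.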
NASA Astrophysics Data System (ADS)
Susanti, L. B.; Poedjiastoeti, S.; Taufikurohmah, T.
2018-04-01
The purpose of this study is to explain the validity of the guided inquiry and mind mapping-based worksheet developed in this study. The worksheet implements the phases of the guided inquiry teaching model in order to train students' creative thinking skills. The creative thinking skills trained in this study include fluency, flexibility, originality and elaboration. The types of validity assessed in this study include content and construct validity. This is development research, conducted with the Research and Development (R&D) method. The data were collected using review and validation sheets; the sources of the data were a chemistry lecturer and a chemistry teacher. The data were then analyzed descriptively. The results showed that the worksheet is very valid and can be used as a learning medium, with validity percentages ranging from 82.5% to 92.5%.
Finite Element Model of the Knee for Investigation of Injury Mechanisms: Development and Validation
Kiapour, Ali; Kiapour, Ata M.; Kaul, Vikas; Quatman, Carmen E.; Wordeman, Samuel C.; Hewett, Timothy E.; Demetropoulos, Constantine K.; Goel, Vijay K.
2014-01-01
Multiple computational models have been developed to study knee biomechanics. However, the majority of these models are mainly validated against a limited range of loading conditions and/or do not include sufficient details of the critical anatomical structures within the joint. Due to the multifactorial dynamic nature of knee injuries, anatomic finite element (FE) models validated against multiple factors under a broad range of loading conditions are necessary. This study presents a validated FE model of the lower extremity with an anatomically accurate representation of the knee joint. The model was validated against tibiofemoral kinematics, ligaments strain/force, and articular cartilage pressure data measured directly from static, quasi-static, and dynamic cadaveric experiments. Strong correlations were observed between model predictions and experimental data (r > 0.8 and p < 0.0005 for all comparisons). FE predictions showed low deviations (root-mean-square (RMS) error) from average experimental data under all modes of static and quasi-static loading, falling within 2.5 deg of tibiofemoral rotation, 1% of anterior cruciate ligament (ACL) and medial collateral ligament (MCL) strains, 17 N of ACL load, and 1 mm of tibiofemoral center of pressure. Similarly, the FE model was able to accurately predict tibiofemoral kinematics and ACL and MCL strains during simulated bipedal landings (dynamic loading). In addition to minimal deviation from direct cadaveric measurements, all model predictions fell within 95% confidence intervals of the average experimental data. Agreement between model predictions and experimental data demonstrates the ability of the developed model to predict the kinematics of the human knee joint as well as the complex, nonuniform stress and strain fields that occur in biological soft tissue. Such a model will facilitate the in-depth understanding of a multitude of potential knee injury mechanisms with special emphasis on ACL injury. PMID:24763546
HLPI-Ensemble: Prediction of human lncRNA-protein interactions based on ensemble strategy.
Hu, Huan; Zhang, Li; Ai, Haixin; Zhang, Hui; Fan, Yetian; Zhao, Qi; Liu, Hongsheng
2018-03-27
LncRNAs play important roles in many biological processes and in disease progression by binding to related proteins. However, the experimental methods for studying lncRNA-protein interactions are time-consuming and expensive. Although there are a few models designed to predict ncRNA-protein interactions, they all have some common drawbacks that limit their predictive performance. In this study, we present a model called HLPI-Ensemble designed specifically for human lncRNA-protein interactions. HLPI-Ensemble adopts an ensemble strategy based on three mainstream machine learning algorithms, Support Vector Machines (SVM), Random Forests (RF) and Extreme Gradient Boosting (XGB), to generate HLPI-SVM Ensemble, HLPI-RF Ensemble and HLPI-XGB Ensemble, respectively. The results of 10-fold cross-validation show that HLPI-SVM Ensemble, HLPI-RF Ensemble and HLPI-XGB Ensemble achieved AUCs of 0.95, 0.96 and 0.96, respectively, in the test dataset. Furthermore, we compared the performance of the HLPI-Ensemble models with previous models on an external validation dataset. The results show that the false positives (FPs) of the HLPI-Ensemble models are much lower than those of the previous models, and other evaluation indicators of the HLPI-Ensemble models are also higher than those of the previous models. These results further show that the HLPI-Ensemble models are superior to previous models in predicting human lncRNA-protein interactions. HLPI-Ensemble is publicly available at: http://ccsipb.lnu.edu.cn/hlpiensemble/ .
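The ensemble-plus-cross-validation pattern described above can be sketched in miniature. Here three single-feature threshold classifiers (simple stand-ins for the SVM/RF/XGB base learners, which would need a full ML library) are combined by majority vote and scored with 10-fold cross-validation on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic binary "interaction" dataset: 3 informative features
n = 600
X = rng.normal(size=(n, 3))
y = (X.sum(axis=1) + 0.8 * rng.normal(size=n) > 0).astype(int)

def stump_fit(xcol, labels):
    """Pick the threshold on one feature that maximises training accuracy."""
    thresholds = np.quantile(xcol, np.linspace(0.05, 0.95, 19))
    accs = [((xcol > t).astype(int) == labels).mean() for t in thresholds]
    return thresholds[int(np.argmax(accs))]

def kfold_accuracy(k=10):
    idx = rng.permutation(n)
    folds = np.array_split(idx, k)
    single_acc, ensemble_acc = [], []
    for f in folds:
        train = np.setdiff1d(idx, f)
        ts = [stump_fit(X[train, j], y[train]) for j in range(3)]
        votes = np.stack([(X[f, j] > ts[j]).astype(int) for j in range(3)])
        ensemble_acc.append(((votes.sum(axis=0) >= 2) == y[f]).mean())
        single_acc.append((votes[0] == y[f]).mean())
    return float(np.mean(single_acc)), float(np.mean(ensemble_acc))

acc_single, acc_ensemble = kfold_accuracy()
```

On this data the majority-vote ensemble typically edges out any single base classifier, which is the motivation for the ensemble strategy in the abstract.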
Engineering applications of strong ground motion simulation
NASA Astrophysics Data System (ADS)
Somerville, Paul
1993-02-01
The formulation, validation and application of a procedure for simulating strong ground motions for use in engineering practice are described. The procedure uses empirical source functions (derived from near-source strong motion recordings of small earthquakes) to provide a realistic representation of effects such as source radiation that are difficult to model at high frequencies due to their partly stochastic behavior. Wave propagation effects are modeled using simplified Green's functions that are designed to transfer empirical source functions from their recording sites to those required for use in simulations at a specific site. The procedure has been validated against strong motion recordings of both crustal and subduction earthquakes. For the validation process we choose earthquakes whose source models (including a spatially heterogeneous distribution of the slip of the fault) are independently known and which have abundant strong motion recordings. A quantitative measurement of the fit between the simulated and recorded motion in this validation process is used to estimate the modeling and random uncertainty associated with the simulation procedure. This modeling and random uncertainty is one part of the overall uncertainty in estimates of ground motions of future earthquakes at a specific site derived using the simulation procedure. The other contribution to uncertainty is that due to uncertainty in the source parameters of future earthquakes that affect the site, which is estimated from a suite of simulations generated by varying the source parameters over their ranges of uncertainty. In this paper, we describe the validation of the simulation procedure for crustal earthquakes against strong motion recordings of the 1989 Loma Prieta, California, earthquake, and for subduction earthquakes against the 1985 Michoacán, Mexico, and Valparaiso, Chile, earthquakes. 
We then show examples of the application of the simulation procedure to the estimation of design response spectra for crustal earthquakes at a power plant site in California and for subduction earthquakes in the Seattle-Portland region. We also demonstrate the use of simulation methods for modeling the attenuation of strong ground motion, and show evidence of the effect of critical reflections from the lower crust in causing the observed flattening of the attenuation of strong ground motion from the 1988 Saguenay, Quebec, and 1989 Loma Prieta earthquakes.
A model for flexi-bar to evaluate intervertebral disc and muscle forces in exercises.
Abdollahi, Masoud; Nikkhoo, Mohammad; Ashouri, Sajad; Asghari, Mohsen; Parnianpour, Mohamad; Khalaf, Kinda
2016-10-01
This study developed and validated a lumped parameter model for the FLEXI-BAR, a popular training instrument that provides vibration stimulation. The model, which can be used in conjunction with musculoskeletal-modeling software for quantitative biomechanical analyses, consists of 3 rigid segments, 2 torsional springs, and 2 torsional dashpots. Two different sets of experiments were conducted to determine the model's key parameters, including the stiffness of the springs and the damping ratio of the dashpots. In the first set of experiments, the free vibration of the FLEXI-BAR with an initial displacement at its end was considered, while in the second set, forced oscillations of the bar were studied. The properties of the mechanical elements in the lumped parameter model were derived using a non-linear optimization algorithm that minimized the difference between the model's predictions and the experimental data. The results showed that the model is valid (8% error) and can be used for simulating exercises with the FLEXI-BAR for excitations in the range of the natural frequency. The model was then validated in combination with the AnyBody musculoskeletal modeling software, where lumbar disc, spinal muscle, and hand muscle forces were determined during different FLEXI-BAR exercise simulations. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
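The free-vibration route to identifying spring and dashpot properties can be sketched for a single torsional spring-dashpot stage via the logarithmic decrement; the inertia, stiffness, and damping values below are hypothetical (the actual FLEXI-BAR model has three segments and uses a nonlinear optimizer):

```python
import numpy as np

# Single-DOF free vibration: theta(t) = exp(-z*wn*t) * cos(wd*t).
# Identify damping ratio and natural frequency from the simulated decay,
# then recover stiffness k and damping c for a known inertia J.
J = 0.02                          # inertia (kg m^2), hypothetical
z_true, wn_true = 0.05, 30.0      # damping ratio, natural freq (rad/s)
t = np.linspace(0.0, 2.0, 4000)
wd = wn_true * np.sqrt(1 - z_true ** 2)
theta = np.exp(-z_true * wn_true * t) * np.cos(wd * t)

# Successive positive peaks give the damped period and log decrement
peaks = np.where((theta[1:-1] > theta[:-2]) & (theta[1:-1] > theta[2:]))[0] + 1
T = np.mean(np.diff(t[peaks]))                              # damped period
delta = np.mean(np.log(theta[peaks[:-1]] / theta[peaks[1:]]))
z_hat = delta / np.sqrt(4 * np.pi ** 2 + delta ** 2)
wn_hat = 2 * np.pi / (T * np.sqrt(1 - z_hat ** 2))

k_hat = J * wn_hat ** 2           # identified torsional stiffness
c_hat = 2 * z_hat * wn_hat * J    # identified torsional damping
```

With clean data this closed-form identification recovers the true parameters; the paper's optimization-based approach generalizes the same idea to multiple segments and noisy measurements.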
The early maximum likelihood estimation model of audiovisual integration in speech perception.
Andersen, Tobias S
2015-05-01
Speech perception is facilitated by seeing the articulatory mouth movements of the talker. This is due to perceptual audiovisual integration, which also causes the McGurk-MacDonald illusion, and for which a comprehensive computational account is still lacking. Decades of research have largely focused on the fuzzy logical model of perception (FLMP), which provides excellent fits to experimental observations but also has been criticized for being too flexible, post hoc and difficult to interpret. The current study introduces the early maximum likelihood estimation (MLE) model of audiovisual integration to speech perception along with three model variations. In early MLE, integration is based on a continuous internal representation before categorization, which can make the model more parsimonious by imposing constraints that reflect experimental designs. The study also shows that cross-validation can evaluate models of audiovisual integration based on typical data sets taking both goodness-of-fit and model flexibility into account. All models were tested on a published data set previously used for testing the FLMP. Cross-validation favored the early MLE while more conventional error measures favored more complex models. This difference between conventional error measures and cross-validation was found to be indicative of over-fitting in more complex models such as the FLMP.
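The contrast drawn above between conventional goodness-of-fit measures and cross-validation can be reproduced on a toy regression problem; the polynomial models and synthetic data are only stand-ins for the perception models in the study:

```python
import numpy as np

rng = np.random.default_rng(3)

# Data generated by a simple linear law plus noise
x = np.linspace(0.0, 1.0, 40)
y = 1.0 + 2.0 * x + 0.3 * rng.normal(size=x.size)

def fit_error(degree):
    """In-sample mean squared error (conventional goodness of fit)."""
    coef = np.polyfit(x, y, degree)
    return float(np.mean((np.polyval(coef, x) - y) ** 2))

def cv_error(degree, k=5):
    """Mean held-out squared error over k cross-validation folds."""
    idx = rng.permutation(x.size)
    errs = []
    for f in np.array_split(idx, k):
        train = np.setdiff1d(idx, f)
        coef = np.polyfit(x[train], y[train], degree)
        errs.append(np.mean((np.polyval(coef, x[f]) - y[f]) ** 2))
    return float(np.mean(errs))

fit_simple, fit_flex = fit_error(1), fit_error(9)
cv_simple, cv_flex = cv_error(1), cv_error(9)
```

The flexible (degree-9) model always wins on in-sample fit, yet cross-validation penalizes its extra flexibility and favors the parsimonious model, mirroring the FLMP-versus-early-MLE comparison in the abstract.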
Jiménez-Sotelo, Paola; Hernández-Martínez, Maylet; Osorio-Revilla, Guillermo; Meza-Márquez, Ofelia Gabriela; García-Ochoa, Felipe; Gallardo-Velázquez, Tzayhrí
2016-07-01
Avocado oil is a high-value and nutraceutical oil whose authentication is very important since the addition of low-cost oils could lower its beneficial properties. Mid-FTIR spectroscopy combined with chemometrics was used to detect and quantify adulteration of avocado oil with sunflower and soybean oils in a ternary mixture. Thirty-seven laboratory-prepared adulterated samples and 20 pure avocado oil samples were evaluated. The adulterated oil amount ranged from 2% to 50% (w/w) in avocado oil. A soft independent modelling class analogy (SIMCA) model was developed to discriminate between pure and adulterated samples. The model showed recognition and rejection rate of 100% and proper classification in external validation. A partial least square (PLS) algorithm was used to estimate the percentage of adulteration. The PLS model showed values of R(2) > 0.9961, standard errors of calibration (SEC) in the range of 0.3963-0.7881, standard errors of prediction (SEP estimated) between 0.6483 and 0.9707, and good prediction performances in external validation. The results showed that mid-FTIR spectroscopy could be an accurate and reliable technique for qualitative and quantitative analysis of avocado oil in ternary mixtures.
Optical Pattern Formation in Spatially Bunched Atoms: A Self-Consistent Model and Experiment
NASA Astrophysics Data System (ADS)
Schmittberger, Bonnie L.; Gauthier, Daniel J.
2014-05-01
The nonlinear optics and optomechanical physics communities use different theoretical models to describe how optical fields interact with a sample of atoms. There does not yet exist a model that is valid for finite atomic temperatures but that also reproduces the zero-temperature results generally assumed in optomechanical systems. We present a self-consistent model that is valid for all atomic temperatures and accounts for the back-action of the atoms on the optical fields. Our model provides new insights into the competing effects of the bunching-induced nonlinearity and the saturable nonlinearity. We show that it is crucial to keep the fifth- and seventh-order nonlinearities that arise when there is atomic bunching, even at very low optical field intensities. We go on to apply this model to the results of our experimental system, in which we observe spontaneous, multimode, transverse optical pattern formation at ultra-low light levels. We show that our model accurately predicts our experimentally observed threshold for optical pattern formation, which is the lowest threshold ever reported for pattern formation. We gratefully acknowledge the financial support of the NSF through Grant #PHY-1206040.
Occurrence analysis of daily rainfalls through non-homogeneous Poissonian processes
NASA Astrophysics Data System (ADS)
Sirangelo, B.; Ferrari, E.; de Luca, D. L.
2011-06-01
A stochastic model based on a non-homogeneous Poisson process, characterised by a time-dependent intensity of rainfall occurrence, is employed to explain seasonal effects in daily rainfalls exceeding prespecified threshold values. Observed daily rainfall data were partitioned into a calibration period for parameter estimation and a validation period for checking for changes in the occurrence process. The model has been applied to a set of rain gauges located in different geographical areas of Southern Italy. The results show a good fit of the time-varying intensity of the rainfall occurrence process by a 2-harmonic Fourier law, and no statistically significant evidence of changes in the validation period for different threshold values.
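A non-homogeneous Poisson process with a 2-harmonic Fourier intensity of this kind can be simulated by Lewis-Shedler thinning. The coefficients below are illustrative, not the fitted values for the Southern Italy gauges:

```python
import numpy as np

rng = np.random.default_rng(11)

W = 2.0 * np.pi / 365.0   # annual angular frequency (1/day)

def intensity(t):
    """2-harmonic Fourier intensity (exceedances/day); illustrative values,
    chosen so the intensity stays strictly positive."""
    return (0.20 + 0.10 * np.cos(W * t) + 0.03 * np.sin(W * t)
            + 0.03 * np.cos(2 * W * t) + 0.02 * np.sin(2 * W * t))

def simulate_nhpp(horizon=365.0, lam_max=0.38):
    """Thinning: propose events from a homogeneous process at rate lam_max,
    accept each proposal with probability intensity(t) / lam_max."""
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)
        if t > horizon:
            return np.array(events)
        if rng.random() < intensity(t) / lam_max:
            events.append(t)

times = simulate_nhpp()   # occurrence times (days) over one year
```

The constant `lam_max` must bound the intensity from above (here 0.38 exceeds the maximum possible value 0.20 + 0.13 + 0.05); the acceptance step then reshapes the homogeneous proposals into the seasonal occurrence pattern.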
Pulsed Inductive Thruster (PIT): Modeling and Validation Using the MACH2 Code
NASA Technical Reports Server (NTRS)
Schneider, Steven (Technical Monitor); Mikellides, Pavlos G.
2003-01-01
Numerical modeling of the Pulsed Inductive Thruster using the magnetohydrodynamics code MACH2 aims to provide bilateral validation of the thruster's measured performance and of the code's capability to capture the pertinent physical processes. Computed impulse values for helium and argon propellants demonstrate excellent correlation with the experimental data over a range of energy levels and propellant-mass values. The effects of the vacuum tank wall and the mass-injection scheme were investigated and shown to produce only trivial changes in overall performance. An idealized model for these energy levels and propellants deduces that the energy expended on the internal energy modes and plasma dissipation processes is independent of the propellant type, mass, and energy level.
Fault Detection for Automotive Shock Absorber
NASA Astrophysics Data System (ADS)
Hernandez-Alcantara, Diana; Morales-Menendez, Ruben; Amezquita-Brooks, Luis
2015-11-01
Fault detection for automotive semi-active shock absorbers is a challenge due to their non-linear dynamics and the strong influence of disturbances such as the road profile. The first obstacle for this task is modeling the fault, which has been shown to be of multiplicative nature, whereas many of the most widespread fault detection schemes consider additive faults. Two model-based fault detection algorithms for semi-active shock absorbers are compared: an observer-based approach and a parameter identification approach. The performance of these schemes is validated and compared using a commercial vehicle model that was experimentally validated. Early results show that the parameter identification approach is more accurate, whereas the observer-based approach is less sensitive to parametric uncertainty.
NASA Astrophysics Data System (ADS)
Thavhana, M. P.; Savage, M. J.; Moeletsi, M. E.
2018-06-01
The soil and water assessment tool (SWAT) was calibrated for the Luvuvhu River catchment, South Africa in order to simulate runoff. The model was executed through QSWAT, which is an interface between SWAT and QGIS. Data from four weather stations and four weir stations evenly distributed over the catchment were used. The model was run for a 33-year period of 1983-2015. Sensitivity analysis, calibration and validation were conducted using the sequential uncertainty fitting (SUFI-2) algorithm through its interface with the SWAT calibration and uncertainty procedure (SWAT-CUP). The calibration process covered the period 1986 to 2005, while the validation process covered 2006 to 2015. Six model efficiency measures were used, namely: coefficient of determination (R2), Nash-Sutcliffe efficiency (NSE) index, root mean square error (RMSE)-observations standard deviation ratio (RSR), percent bias (PBIAS), probability (P)-factor, and correlation coefficient (R)-factor. Initial results indicated an over-estimation of low flows, with a regression slope of less than 0.7. Twelve model parameters were included in the sensitivity analysis, of which four (ALPHA_BF, CN2, GW_DELAY and SOL_K) were found to be clearly identifiable and sensitive to streamflow (p < 0.05). The SUFI-2 algorithm, through the interface with SWAT-CUP, was capable of capturing the model's behaviour, with calibration results showing an R2 of 0.63, NSE index of 0.66, RSR of 0.56 and a positive PBIAS of 16.3, while validation results revealed an R2 of 0.52, NSE of 0.48, RSR of 0.72 and PBIAS of 19.90. The model produced a P-factor of 0.67 and R-factor of 0.68 during calibration, and 0.69 and 0.53 respectively during validation. Although the performance indicators yielded fair and acceptable results, the P-factor was still below the recommended model performance of 70%.
Apart from the unacceptable P-factor values, the results obtained in this study demonstrate acceptable model performance during calibration, while validation results remained inconclusive. The model can therefore be a useful tool for general water resources assessment, but not for analysing hydrological extremes in the Luvuvhu River catchment.
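The efficiency measures named above have simple closed forms. A minimal pure-Python sketch, using the common SWAT convention that a positive PBIAS indicates underestimation bias:

```python
import math

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    sst = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / sst

def rsr(obs, sim):
    """RMSE-observations standard deviation ratio."""
    mean_obs = sum(obs) / len(obs)
    rmse = math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))
    sd = math.sqrt(sum((o - mean_obs) ** 2 for o in obs) / len(obs))
    return rmse / sd

def pbias(obs, sim):
    """Percent bias; positive values indicate the model underestimates."""
    return 100.0 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)
```

Note that RSR and NSE are algebraically linked: RSR = sqrt(1 - NSE), which is a quick consistency check on reported values.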
NASA Astrophysics Data System (ADS)
Trettin, C.; Dai, Z.; Amatya, D. M.
2014-12-01
Long-term climatic and hydrologic observations on the Santee Experimental Forest in the lower coastal plain of South Carolina were used to estimate long-term changes in hydrology and forest carbon dynamics for a pair of first-order watersheds. Over 70 years of climate data indicated that warming in this forest area in recent decades was faster than the global mean; 35+ years of hydrologic records showed that forest ecosystem succession three years following Hurricane Hugo caused a substantial change in the ratio of runoff to precipitation. The change in this relationship between the paired watersheds was attributed to altered evapotranspiration processes caused by a greater abundance of pine in the treatment watershed and regeneration of the mixed hardwood-pine forest on the reference watershed. The long-term records and anomalous observations are highly valuable for reliable calibration and validation of hydrological and biogeochemical models capturing the effects of climate variability. We applied the hydrological model MIKE SHE, which showed that runoff and water table level are sensitive to global warming, and that sustained warming trends can be expected to decrease stream discharge and lower the mean water table depth. The spatially-explicit biogeochemical model Forest-DNDC, validated using biomass measurements from the watersheds, was used to assess carbon dynamics in response to high resolution hydrologic observation data and simulation results. The simulations showed that the long-term spatiotemporal carbon dynamics, including biomass and fluxes of soil carbon dioxide and methane, were highly regulated by disturbance regimes, climatic conditions and water table depth. The utility of the linked-modeling framework demonstrated here to assess biogeochemical responses at the watershed scale suggests applications for assessing the consequences of climate change within an urbanizing forested landscape. 
The approach may also be applicable for validating large-scale models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gokaltun, Seckin; Munroe, Norman; Subramaniam, Shankar
2014-12-31
This study presents a new drag model, based on cohesive inter-particle forces, implemented in the MFIX code. The new drag model combines an existing standard model in MFIX with a particle-based drag model through a switching principle: switches between the models occur in regions of the computational domain where strong particle-to-particle cohesion potential is detected. Three versions of the new model were obtained by using one standard drag model in each version. The performance of each version was then compared against available experimental data for a fluidized bed, published in the literature and used extensively by other researchers for validation purposes. In our analysis of the results, we first observed that the standard models used in this research were incapable of producing closely matching results. We then showed, for a simple case, that a threshold needs to be set on the solid volume fraction to avoid non-physical clustering predictions when the governing equation of the solid granular temperature is solved. Using our hybrid technique, we observed that our approach improved the numerical results significantly; however, the improvement depended on the threshold of the cohesive index used in the switching procedure. Our results showed that small values of the threshold for the cohesive index could significantly reduce the computational error for all versions of the proposed drag model. In addition, we redesigned an existing circulating fluidized bed (CFB) test facility in order to create validation cases for the clustering regime of Geldart A type particles.
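The switching principle described above can be illustrated schematically: apply the cohesion-aware drag correlation only in cells where a cohesive index exceeds a threshold, and clamp the solid volume fraction to avoid the non-physical packing values mentioned. Function names, the threshold, and the packing limit below are illustrative placeholders, not the MFIX implementation.

```python
def clamp_eps_s(eps_s, eps_s_max=0.6):
    """Cap the solid volume fraction at an assumed maximum packing value."""
    return min(eps_s, eps_s_max)

def cell_drag(std_drag, cohesive_drag, cohesion_index, threshold=0.5):
    """Per-cell switch: use the cohesion-aware drag law only where the
    cohesive index signals strong particle-to-particle cohesion."""
    return cohesive_drag if cohesion_index > threshold else std_drag
```

As the abstract notes, results are sensitive to the switching threshold, so in practice it would be treated as a tunable parameter rather than a fixed constant.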
Dégano, Irene R; Subirana, Isaac; Torre, Marina; Grau, María; Vila, Joan; Fusco, Danilo; Kirchberger, Inge; Ferrières, Jean; Malmivaara, Antti; Azevedo, Ana; Meisinger, Christa; Bongard, Vanina; Farmakis, Dimitros; Davoli, Marina; Häkkinen, Unto; Araújo, Carla; Lekakis, John; Elosua, Roberto; Marrugat, Jaume
2015-03-01
Hospital performance models in acute myocardial infarction (AMI) are useful to assess patient management. While models are available for individual countries, mainly US, cross-European performance models are lacking. Thus, we aimed to develop a system to benchmark European hospitals in AMI and percutaneous coronary intervention (PCI), based on predicted in-hospital mortality. We used the EURopean HOspital Benchmarking by Outcomes in ACS Processes (EURHOBOP) cohort to develop the models, which included 11,631 AMI patients and 8276 acute coronary syndrome (ACS) patients who underwent PCI. Models were validated with a cohort of 55,955 European ACS patients. Multilevel logistic regression was used to predict in-hospital mortality in European hospitals for AMI and PCI. Administrative and clinical models were constructed with patient- and hospital-level covariates, as well as hospital- and country-based random effects. Internal cross-validation and external validation showed good discrimination at the patient level and good calibration at the hospital level, based on the C-index (0.736-0.819) and the concordance correlation coefficient (55.4%-80.3%). Mortality ratios (MRs) showed excellent concordance between administrative and clinical models (97.5% for AMI and 91.6% for PCI). Exclusion of transfers and hospital stays ≤1day did not affect in-hospital mortality prediction in sensitivity analyses, as shown by MR concordance (80.9%-85.4%). Models were used to develop a benchmarking system to compare in-hospital mortality rates of European hospitals with similar characteristics. The developed system, based on the EURHOBOP models, is a simple and reliable tool to compare in-hospital mortality rates between European hospitals in AMI and PCI. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Validation and uncertainty analysis of a pre-treatment 2D dose prediction model
NASA Astrophysics Data System (ADS)
Baeza, Jose A.; Wolfs, Cecile J. A.; Nijsten, Sebastiaan M. J. J. G.; Verhaegen, Frank
2018-02-01
Independent verification of complex treatment delivery with megavolt photon beam radiotherapy (RT) has been effectively used to detect and prevent errors. This work presents the validation and uncertainty analysis of a model that predicts 2D portal dose images (PDIs) without a patient or phantom in the beam. The prediction model is based on an exponential point dose model with separable primary and secondary photon fluence components. The model includes a scatter kernel, off-axis ratio map, transmission values and penumbra kernels for beam-delimiting components. These parameters were derived through a model fitting procedure supplied with point dose and dose profile measurements of radiation fields. The model was validated against a treatment planning system (TPS; Eclipse) and radiochromic film measurements for complex clinical scenarios, including volumetric modulated arc therapy (VMAT). Confidence limits on fitted model parameters were calculated based on simulated measurements. A sensitivity analysis was performed to evaluate the effect of the parameter uncertainties on the model output. For the maximum uncertainty, the maximum deviating measurement sets were propagated through the fitting procedure and the model. The overall uncertainty was assessed using all simulated measurements. The validation of the prediction model against the TPS and the film showed a good agreement, with on average 90.8% and 90.5% of pixels passing a (2%,2 mm) global gamma analysis respectively, with a low dose threshold of 10%. The maximum and overall uncertainty of the model is dependent on the type of clinical plan used as input. The results can be used to study the robustness of the model. A model for predicting accurate 2D pre-treatment PDIs in complex RT scenarios can be used clinically and its uncertainties can be taken into account.
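A (2%, 2 mm) global gamma analysis of the kind used above can be sketched for a 1D dose profile; the clinical tool operates on 2D portal dose images and is more elaborate, so this brute-force version only shows the mechanics of the criterion.

```python
import math

def gamma_pass_rate(ref, eval_, spacing, dose_crit=0.02, dist_crit=2.0,
                    threshold=0.10):
    """Global gamma pass rate (%) for two dose profiles on the same axis.

    dose_crit is a fraction of the global maximum reference dose (2%),
    dist_crit the distance-to-agreement in mm, spacing the grid step in mm,
    threshold the low-dose cutoff (points below 10% of max are excluded,
    as in the paper).
    """
    d_max = max(ref)
    passing = total = 0
    for i, d_ref in enumerate(ref):
        if d_ref < threshold * d_max:
            continue  # low-dose region excluded from the analysis
        total += 1
        gamma = min(
            math.sqrt(((j - i) * spacing / dist_crit) ** 2
                      + ((d_ev - d_ref) / (dose_crit * d_max)) ** 2)
            for j, d_ev in enumerate(eval_)
        )
        if gamma <= 1.0:
            passing += 1
    return 100.0 * passing / total
```

Identical distributions pass at 100%, while a uniform dose error well beyond both criteria fails every evaluated point.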
Chen, Ling; Luo, Dan; Yu, Xiajuan; Jin, Mei; Cai, Wenzhi
2018-05-12
The aim of this study was to develop and validate a predictive tool combining pelvic floor ultrasound parameters and clinical factors for stress urinary incontinence during pregnancy. A total of 535 women in the first or second trimester were included for an interview and transperineal ultrasound assessment at two hospitals. Imaging data sets were analyzed offline to assess bladder neck vertical position, urethra angles (α, β, and γ angles), hiatal area and bladder neck funneling. All continuous variables significant at univariable analysis were analyzed by receiver-operating characteristics. Three multivariable logistic models were built on clinical factors, alone and combined with ultrasound parameters. The final predictive model with the best performance and fewest variables was selected to establish a nomogram. Internal and external validation of the nomogram were performed by both discrimination, represented by the C-index, and calibration, measured by the Hosmer-Lemeshow test. A decision curve analysis was conducted to determine the clinical utility of the nomogram. After excluding 14 women with invalid data, 521 women were analyzed. The β angle, γ angle and hiatal area had limited predictive value for stress urinary incontinence during pregnancy, with areas under the curve of 0.558-0.648. The final predictive model included body mass index gain since pregnancy, constipation, previous delivery mode, β angle at rest, and bladder neck funneling. The nomogram based on the final model showed good discrimination with a C-index of 0.789 and satisfactory calibration (P=0.828), both of which were supported by external validation. Decision curve analysis showed that the nomogram was clinically useful. The nomogram incorporating both pelvic floor ultrasound parameters and clinical factors has been validated to show good discrimination and calibration, and could be an important tool for stress urinary incontinence risk prediction at an early stage of pregnancy. 
This article is protected by copyright. All rights reserved.
Real-time numerical forecast of global epidemic spreading: case study of 2009 A/H1N1pdm.
Tizzoni, Michele; Bajardi, Paolo; Poletto, Chiara; Ramasco, José J; Balcan, Duygu; Gonçalves, Bruno; Perra, Nicola; Colizza, Vittoria; Vespignani, Alessandro
2012-12-13
Mathematical and computational models for infectious diseases are increasingly used to support public-health decisions; however, their reliability is currently under debate. Real-time forecasts of epidemic spread using data-driven models have been hindered by the technical challenges posed by parameter estimation and validation. Data gathered for the 2009 H1N1 influenza crisis represent an unprecedented opportunity to validate real-time model predictions and define the main success criteria for different approaches. We used the Global Epidemic and Mobility Model to generate stochastic simulations of epidemic spread worldwide, yielding (among other measures) the incidence and seeding events at a daily resolution for 3,362 subpopulations in 220 countries. Using a Monte Carlo Maximum Likelihood analysis, the model provided an estimate of the seasonal transmission potential during the early phase of the H1N1 pandemic and generated ensemble forecasts for the activity peaks in the northern hemisphere in the fall/winter wave. These results were validated against the real-life surveillance data collected in 48 countries, and their robustness assessed by focusing on 1) the peak timing of the pandemic; 2) the level of spatial resolution allowed by the model; and 3) the clinical attack rate and the effectiveness of the vaccine. In addition, we studied the effect of data incompleteness on the prediction reliability. Real-time predictions of the peak timing are found to be in good agreement with the empirical data, showing strong robustness to data that may not be accessible in real time (such as pre-exposure immunity and adherence to vaccination campaigns), but that affect the predictions for the attack rates. The timing and spatial unfolding of the pandemic are critically sensitive to the level of mobility data integrated into the model. 
Our results show that large-scale models can be used to provide valuable real-time forecasts of influenza spreading, but they require high-performance computing. The quality of the forecast depends on the level of data integration, thus stressing the need for high-quality data in population-based models, and of progressive updates of validated available empirical knowledge to inform these models.
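Stochastic epidemic simulations of the kind described can be illustrated with a chain-binomial SIR model on a single population. GLEaM itself couples thousands of subpopulations through mobility data and estimates parameters by Monte Carlo maximum likelihood; this sketch only shows the per-day stochastic mechanics, with all parameter values illustrative.

```python
import random

def sir_step(s, i, r, n, beta, gamma, rng):
    """One day of a chain-binomial stochastic SIR model."""
    p_inf = 1.0 - (1.0 - beta / n) ** i   # per-susceptible daily infection prob.
    new_inf = sum(1 for _ in range(s) if rng.random() < p_inf)
    new_rec = sum(1 for _ in range(i) if rng.random() < gamma)
    return s - new_inf, i + new_inf - new_rec, r + new_rec

def simulate(s0, i0, beta, gamma, days, seed=1):
    """Return the daily incidence series and the final (S, I, R) state."""
    rng = random.Random(seed)
    s, i, r = s0, i0, 0
    n = s0 + i0
    incidence = []
    for _ in range(days):
        s_next, i_next, r_next = sir_step(s, i, r, n, beta, gamma, rng)
        incidence.append(s - s_next)
        s, i, r = s_next, i_next, r_next
    return incidence, (s, i, r)
```

Running many seeded realizations of such a model is what produces the ensemble forecasts (e.g. for peak timing) that the study validated against surveillance data.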
Ng, Hui Wen; Doughty, Stephen W; Luo, Heng; Ye, Hao; Ge, Weigong; Tong, Weida; Hong, Huixiao
2015-12-21
Some chemicals in the environment possess the potential to interact with the endocrine system in the human body. Multiple receptors are involved in the endocrine system; estrogen receptor α (ERα) plays very important roles in endocrine activity and is the most studied receptor. Understanding and predicting estrogenic activity of chemicals facilitates the evaluation of their endocrine activity. Hence, we have developed a decision forest classification model to predict chemical binding to ERα using a large training data set of 3308 chemicals obtained from the U.S. Food and Drug Administration's Estrogenic Activity Database. We tested the model using cross validations and external data sets of 1641 chemicals obtained from the U.S. Environmental Protection Agency's ToxCast project. The model showed good performance in both internal (92% accuracy) and external validations (∼ 70-89% relative balanced accuracies), where the latter involved the validations of the model across different ER pathway-related assays in ToxCast. The important features that contribute to the prediction ability of the model were identified through informative descriptor analysis and were related to current knowledge of ER binding. Prediction confidence analysis revealed that the model had both high prediction confidence and accuracy for most predicted chemicals. The results demonstrated that the model constructed based on the large training data set is more accurate and robust for predicting ER binding of chemicals than the published models that have been developed using much smaller data sets. The model could be useful for the evaluation of ERα-mediated endocrine activity potential of environmental chemicals.
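The relative balanced accuracies reported above average sensitivity and specificity, which keeps the metric meaningful when active and inactive chemicals are heavily imbalanced, as is typical in ToxCast assays. A minimal sketch:

```python
def balanced_accuracy(y_true, y_pred):
    """Average of sensitivity (recall on actives, label 1) and specificity
    (recall on inactives, label 0); robust to class imbalance."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    pos = sum(1 for t in y_true if t == 1)
    neg = sum(1 for t in y_true if t == 0)
    return 0.5 * (tp / pos + tn / neg)
```

A classifier that predicted "inactive" for everything would score a plain accuracy equal to the inactive fraction, but only 0.5 on balanced accuracy, which is why the latter is preferred here.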
Development and validation of a two-dimensional fast-response flood estimation model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Judi, David R; Mcpherson, Timothy N; Burian, Steven J
2009-01-01
A finite difference formulation of the shallow water equations using an upwind differencing method was developed, maintaining computational efficiency and accuracy such that it can be used as a fast-response flood estimation tool. The model was validated using both laboratory controlled experiments and an actual dam breach. Through the laboratory experiments, the model was shown to give good estimates of depth and velocity when compared to the measured data, as well as when compared to a more complex two-dimensional model. Additionally, the model was compared to high water mark data obtained from the failure of the Taum Sauk dam. The simulated inundation extent agreed well with the observed extent, with the most notable differences resulting from the inability to model sediment transport. The results of these validation studies show that the numerical scheme used to solve the complete shallow water equations can be used to accurately estimate flood inundation. Future work will focus on further reducing the computation time needed to provide flood inundation estimates for fast-response analyses. This will be accomplished through the efficient use of multi-core, multi-processor computers coupled with an efficient domain-tracking algorithm, as well as an understanding of the impacts of grid resolution on model results.
NASA Astrophysics Data System (ADS)
Li, Xiaowen; Janiga, Matthew A.; Wang, Shuguang; Tao, Wei-Kuo; Rowe, Angela; Xu, Weixin; Liu, Chuntao; Matsui, Toshihisa; Zhang, Chidong
2018-04-01
The evolution of precipitation structures is simulated and compared with radar observations for the November Madden-Julian Oscillation (MJO) event during the DYNAmics of the MJO (DYNAMO) field campaign. Three ground-based, ship-borne, and spaceborne precipitation radars and three cloud-resolving models (CRMs) driven by observed large-scale forcing are used to study precipitation structures at different locations over the central equatorial Indian Ocean. Convective strength is represented by 0-dBZ echo-top heights, and convective organization by contiguous 17-dBZ areas. The multi-radar and multi-model framework allows for more stringent model validations. The emphasis is on testing models' ability to simulate subtle differences observed at different radar sites when the MJO event passed through. The results show that CRMs forced by site-specific large-scale forcing can reproduce not only common features in cloud populations but also subtle variations observed by different radars. The comparisons also revealed common deficiencies in CRM simulations where they underestimate radar echo-top heights for the strongest convection within large, organized precipitation features. Cross validations with multiple radars and models also enable quantitative comparisons in CRM sensitivity studies using different large-scale forcing, microphysical schemes and parameters, resolutions, and domain sizes. In terms of radar echo-top height temporal variations, many model sensitivity tests have better correlations than radar/model comparisons, indicating robustness in model performance on this aspect. It is further shown that well-validated model simulations could be used to constrain uncertainties in observed echo-top heights when the low-resolution surveillance scanning strategy is used.
Roze, S; Liens, D; Palmer, A; Berger, W; Tucker, D; Renaudin, C
2006-12-01
The aim of this study was to describe a health economic model developed to project lifetime clinical and cost outcomes of lipid-modifying interventions in patients not reaching target lipid levels and to assess the validity of the model. The internet-based, computer simulation model is made up of two decision analytic sub-models, the first utilizing Monte Carlo simulation, and the second applying Markov modeling techniques. Monte Carlo simulation generates a baseline cohort for long-term simulation by assigning an individual lipid profile to each patient, and applying the treatment effects of interventions under investigation. The Markov model then estimates the long-term clinical (coronary heart disease events, life expectancy, and quality-adjusted life expectancy) and cost outcomes up to a lifetime horizon, based on risk equations from the Framingham study. Internal and external validation analyses were performed. The results of the model validation analyses, plotted against corresponding real-life values from Framingham, 4S, AFCAPS/TexCAPS, and a meta-analysis by Gordon et al., showed that the majority of values were close to the y = x line, which indicates a perfect fit. The R2 value was 0.9575 and the gradient of the regression line was 0.9329, both very close to the perfect fit (= 1). Validation analyses of the computer simulation model suggest the model is able to recreate the outcomes from published clinical studies and would be a valuable tool for the evaluation of new and existing therapy options for patients with persistent dyslipidemia.
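The Markov sub-model described above can be illustrated with a small cohort simulation over hypothetical health states; the transition probabilities and utility weights below are placeholders for illustration, not the model's calibrated Framingham-based values.

```python
def markov_life_expectancy(trans, qol, start=(1.0, 0.0, 0.0), years=60):
    """Cohort Markov model over three states (e.g. no CHD, post-CHD, dead).

    trans[i][j] is the annual probability of moving from state i to state j;
    qol[i] is the utility weight of state i (dead = 0). Returns undiscounted
    life expectancy and quality-adjusted life expectancy in years.
    """
    state = list(start)
    le = qale = 0.0
    for _ in range(years):
        le += state[0] + state[1]                      # fraction alive this cycle
        qale += state[0] * qol[0] + state[1] * qol[1]  # utility-weighted fraction
        state = [sum(state[i] * trans[i][j] for i in range(3)) for j in range(3)]
    return le, qale
```

In the full model the Monte Carlo step first samples an individual lipid profile and treatment effect for each patient, and the Markov step then accumulates events and costs per simulated year.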
Xiao, Li; Cai, Qin; Li, Zhilin; Zhao, Hongkai; Luo, Ray
2014-11-25
A multi-scale framework is proposed for more realistic molecular dynamics simulations in continuum solvent models by coupling a molecular mechanics treatment of solute with a fluid mechanics treatment of solvent. This article reports our initial efforts to formulate the physical concepts necessary for coupling the two mechanics and develop a 3D numerical algorithm to simulate the solvent fluid via the Navier-Stokes equation. The numerical algorithm was validated with multiple test cases. The validation shows that the algorithm is effective and stable, with observed accuracy consistent with our design.
Kojima, Hajime; Katoh, Masakazu; Shinoda, Shinsuke; Hagiwara, Saori; Suzuki, Tamie; Izumi, Runa; Yamaguchi, Yoshihiro; Nakamura, Maki; Kasahawa, Toshihiko; Shibai, Aya
2014-07-01
Three validation studies were conducted by the Japanese Society for Alternatives to Animal Experiments in order to assess the performance of a skin irritation assay using reconstructed human epidermis (RhE) LabCyte EPI-MODEL24 (LabCyte EPI-MODEL24 SIT) developed by the Japan Tissue Engineering Co., Ltd. (J-TEC), and the results of these studies were submitted to the Organisation for Economic Co-operation and Development (OECD) for the creation of a Test Guideline (TG). In the summary review report from the OECD, the peer review panel indicated the need to resolve an issue regarding the misclassification of 1-bromohexane. To this end, a rinsing operation intended to remove exposed chemicals was reviewed and the standard operating procedure (SOP) revised by J-TEC. Thereafter, in order to confirm general versatility of the revised SOP, a new validation management team was organized by the Japanese Center for the Validation of Alternative Methods (JaCVAM) to undertake a catch-up validation study that would compare the revised assay with similar in vitro skin irritation assays, per OECD TG No. 439 (2010). The catch-up validation and supplementary studies for LabCyte EPI-MODEL24 SIT using the revised SOPs were conducted at three laboratories. These results showed that the revised SOP of LabCyte EPI-MODEL24 SIT conformed more accurately to the classifications for skin irritation under the United Nations Globally Harmonised System of Classification and Labelling of Chemicals (UN GHS), thereby highlighting the importance of an optimized rinsing operation for the removal of exposed chemicals in obtaining consistent results from in vitro skin irritation assays. Copyright © 2013 John Wiley & Sons, Ltd.
Zhang, Lei; Lu, Wenxi; An, Yonglei; Li, Di; Gong, Lei
2012-01-01
The impacts of climate change on streamflow and non-point source pollutant loads in the Shitoukoumen reservoir catchment are predicted by combining a general circulation model (HadCM3) with the Soil and Water Assessment Tool (SWAT) hydrological model. A statistical downscaling model was used to generate future local scenarios of meteorological variables such as temperature and precipitation. Then, the downscaled meteorological variables were used as input to the SWAT hydrological model calibrated and validated with observations, and the corresponding changes of future streamflow and non-point source pollutant loads in the Shitoukoumen reservoir catchment were simulated and analyzed. Results show that daily temperature increases in three future periods (2010-2039, 2040-2069, and 2070-2099) relative to a baseline of 1961-1990, and the rate of increase is 0.63°C per decade. Annual precipitation also shows an apparent increase of 11 mm per decade. The calibration and validation results showed that the SWAT model was able to simulate well the streamflow and non-point source pollutant loads, with a coefficient of determination of 0.7 and a Nash-Sutcliffe efficiency of about 0.7 for both the calibration and validation periods. The future climate change has a significant impact on streamflow and non-point source pollutant loads. The annual streamflow shows a fluctuating upward trend from 2010 to 2099, with an increase rate of 1.1 m³ s⁻¹ per decade, and a significant upward trend in summer, with an increase rate of 1.32 m³ s⁻¹ per decade. The increase in summer contributes the most to the increase of annual load compared with other seasons. The annual NH₄⁺-N load into Shitoukoumen reservoir shows a significant downward trend with a decrease rate of 40.6 t per decade. The annual TP load shows an insignificant increasing trend, and its change rate is 3.77 t per decade. 
The results of this analysis provide a scientific basis for effective support of decision makers and strategies of adaptation to climate change.
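Decadal change rates like those reported above are ordinary least-squares slopes scaled to a 10-year step. A minimal sketch:

```python
def decadal_trend(years, values):
    """Ordinary least-squares slope of values against years,
    scaled from per-year to per-decade."""
    n = len(years)
    my = sum(years) / n
    mv = sum(values) / n
    slope = (sum((y - my) * (v - mv) for y, v in zip(years, values))
             / sum((y - my) ** 2 for y in years))
    return 10.0 * slope
```

Applied to an annual streamflow series, a result of 1.1 would correspond to the 1.1 m³ s⁻¹ per decade increase reported in the study.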
Validation of a Spanish-language version of the ADHD Rating Scale IV in a Spanish sample.
Vallejo-Valdivielso, M; Soutullo, C A; de Castro-Manglano, P; Marín-Méndez, J J; Díez-Suárez, A
2017-07-14
The purpose of this study is to validate a Spanish-language version of the 18-item ADHD Rating Scale-IV (ADHD-RS-IV.es) in a Spanish sample. From a total sample of 652 children and adolescents aged 6 to 17 years (mean age was 11.14±3.27), we included 518 who met the DSM-IV-TR criteria for ADHD and 134 healthy controls. To evaluate the factorial structure, validity, and reliability of the scale, we performed a confirmatory factor analysis (CFA) using structural equation modelling on a polychoric correlation matrix and maximum likelihood estimation. The scale's discriminant validity and predictive value were estimated using ROC (receiver operating characteristics) curve analysis. Both the full scale and the subscales of the Spanish-language version of the ADHD-RS-IV showed good internal consistency. Cronbach's alpha was 0.94 for the full scale and ≥ 0.90 for the subscales, and ordinal alpha was 0.95 and ≥ 0.90, respectively. CFA showed that a two-factor model (inattention and hyperactivity/impulsivity) provided the best fit for the data. ADHD-RS-IV.es offered good discriminant ability to distinguish between patients with ADHD and controls (AUC=0.97). The two-factor structure of the Spanish-language version of the ADHD-RS-IV (ADHD-RS-IV.es) is consistent with those of the DSM-IV-TR and DSM-5 as well as with the model proposed by the author of the original scale. Furthermore, it has good discriminant ability. ADHD-RS-IV.es is therefore a valid and reliable tool for determining presence and severity of ADHD symptoms in the Spanish population. Copyright © 2017 Sociedad Española de Neurología. Publicado por Elsevier España, S.L.U. All rights reserved.
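Internal consistency figures like the Cronbach's alpha of 0.94 above follow directly from item and total-score variances. A minimal sketch using sample variance and illustrative data:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a scale.

    items: one list per item, each holding the same respondents' scores
    in the same order. alpha = k/(k-1) * (1 - sum(item var) / total var).
    """
    k = len(items)
    n = len(items[0])

    def var(x):
        m = sum(x) / len(x)
        return sum((v - m) ** 2 for v in x) / (len(x) - 1)

    totals = [sum(item[j] for item in items) for j in range(n)]
    item_var = sum(var(item) for item in items)
    return k / (k - 1) * (1 - item_var / var(totals))
```

Perfectly correlated items give alpha = 1; values of 0.9 and above, as reported for the ADHD-RS-IV.es subscales, indicate very high internal consistency.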
Tandem internal models execute motor learning in the cerebellum.
Honda, Takeru; Nagao, Soichi; Hashimoto, Yuji; Ishikawa, Kinya; Yokota, Takanori; Mizusawa, Hidehiro; Ito, Masao
2018-06-25
In performing skillful movement, humans use predictions from internal models formed by repetition learning. However, the computational organization of internal models in the brain remains unknown. Here, we demonstrate that a computational architecture employing a tandem configuration of forward and inverse internal models enables efficient motor learning in the cerebellum. The model predicted learning adaptations observed in hand-reaching experiments in humans wearing a prism lens and explained the kinetic components of these behavioral adaptations. The tandem system also predicted a form of subliminal motor learning that was experimentally validated after training intentional misses of hand targets. Patients with cerebellar degeneration disease showed behavioral impairments consistent with tandemly arranged internal models. These findings validate computational tandemization of internal models in motor control and its potential uses in more complex forms of learning and cognition. Copyright © 2018 the Author(s). Published by PNAS.
Das, Subhasish; Sen, Ramkrishna
2011-10-01
A logistic kinetic model was derived and validated to characterize the dynamics of a sporogenous bacterium in stationary phase with respect to sporulation and product formation. The kinetic constants as determined using this model are particularly important for describing intrinsic properties of a sporogenous bacterial culture in stationary phase. Non-linear curve fitting of the experimental data into the mathematical model showed very good correlation with the predicted values for sporulation and lipase production by Bacillus coagulans RK-02 culture in minimal media. Model fitting of literature data of sporulation and product (protease and amylase) formation in the stationary phase by some other Bacilli and comparison of the results of model fitting with those of Bacillus coagulans helped validate the significance and robustness of the developed kinetic model. Copyright © 2011 Elsevier Ltd. All rights reserved.
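A logistic kinetic model of the kind derived above has the closed form X(t) = Xmax / (1 + (Xmax/X0 - 1)·e^(-μt)). A sketch of the model and a simple grid-search fit of the specific rate μ (the paper's non-linear curve fitting is more sophisticated; parameter values here are illustrative):

```python
import math

def logistic(t, x0, xmax, mu):
    """Closed-form logistic growth: population (or product) level at time t."""
    return xmax / (1 + (xmax / x0 - 1) * math.exp(-mu * t))

def fit_mu(times, data, x0, xmax, grid):
    """Pick the specific rate mu from a candidate grid by least squares."""
    return min(grid, key=lambda mu: sum((logistic(t, x0, xmax, mu) - d) ** 2
                                        for t, d in zip(times, data)))
```

Fitting such a curve to stationary-phase spore counts or product titres yields the kinetic constants that the study uses to compare Bacillus cultures.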
Guo, Jing; Chen, Shangxiang; Li, Shun; Sun, Xiaowei; Li, Wei; Zhou, Zhiwei; Chen, Yingbo; Xu, Dazhi
2018-01-12
Several studies have highlighted the prognostic value of individual tumor markers, and of various combinations of them, for gastric cancer (GC). Our study was designed to establish a novel model incorporating carcino-embryonic antigen (CEA), carbohydrate antigen 19-9 (CA19-9), and carbohydrate antigen 72-4 (CA72-4). A total of 1,566 GC patients (Primary cohort) between Jan 2000 and July 2013 were analyzed. The Primary cohort was randomly divided into a Training set (n=783) and a Validation set (n=783). A three-tumor-marker classifier was developed in the Training set and validated in the Validation set by multivariate regression and risk-score analysis. We identified a three-tumor-marker classifier (including CEA, CA19-9 and CA72-4) for the cancer-specific survival (CSS) of GC (p<0.001). Consistent results were obtained in both the Training set and the Validation set. Multivariate analysis showed that the classifier was an independent predictor of GC outcome (all p values <0.001 in the Training set, Validation set and Primary cohort). Furthermore, when the leave-one-out approach was performed, the classifier showed predictive value superior to the individual markers or any two of them, with the highest area under the curve (AUC): 0.618 for the Training set and 0.625 for the Validation set. Our three-tumor-marker classifier is closely associated with the CSS of GC and may serve as a novel model for future decisions concerning treatments.
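Risk-score analyses like the one above reduce several markers to a single linear predictor and judge it by AUC; the AUC itself is the Mann-Whitney probability that a random positive case outranks a random negative one. An illustrative sketch with placeholder coefficients (not the study's fitted values):

```python
def risk_score(markers, coefs, intercept=0.0):
    """Linear predictor combining tumor-marker values with regression
    coefficients (placeholder values, not the paper's fitted model)."""
    return intercept + sum(c * m for c, m in zip(coefs, markers))

def auc(pos_scores, neg_scores):
    """Mann-Whitney estimate of the area under the ROC curve:
    the probability that a positive case scores above a negative one,
    counting ties as half."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            wins += 1.0 if p > n else 0.5 if p == n else 0.0
    return wins / (len(pos_scores) * len(neg_scores))
```

An AUC of 0.5 is chance-level discrimination, which puts the study's combined-marker AUCs of 0.618 and 0.625 in context: modest but better than any single marker.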
Urdea, Mickey; Kolberg, Janice; Wilber, Judith; Gerwien, Robert; Moler, Edward; Rowe, Michael; Jorgensen, Paul; Hansen, Torben; Pedersen, Oluf; Jørgensen, Torben; Borch-Johnsen, Knut
2009-01-01
Background Improved identification of subjects at high risk for development of type 2 diabetes would allow preventive interventions to be targeted toward individuals most likely to benefit. In previous research, predictive biomarkers were identified and used to develop multivariate models to assess an individual's risk of developing diabetes. Here we describe the training and validation of the PreDx™ Diabetes Risk Score (DRS) model in a clinical laboratory setting using baseline serum samples from subjects in the Inter99 cohort, a population-based primary prevention study of cardiovascular disease. Methods Among 6784 subjects free of diabetes at baseline, 215 subjects progressed to diabetes (converters) during five years of follow-up. A nested case-control study was performed using serum samples from 202 converters and 597 randomly selected nonconverters. Samples were randomly assigned to equally sized training and validation sets. Seven biomarkers were measured using assays developed for use in a clinical reference laboratory. Results The PreDx DRS model performed better on the training set (area under the curve [AUC] = 0.837) than fasting plasma glucose alone (AUC = 0.779). When applied to the sequestered validation set, the PreDx DRS showed the same performance (AUC = 0.838), thus validating the model. This model had a better AUC than any other single measure from a fasting sample. Moreover, the model provided further risk stratification among high-risk subpopulations with impaired fasting glucose or metabolic syndrome. Conclusions The PreDx DRS provides the absolute risk of diabetes conversion in five years for subjects identified to be “at risk” using the clinical factors. PMID:20144324
Forzley, Brian; Er, Lee; Chiu, Helen Hl; Djurdjev, Ognjenka; Martinusen, Dan; Carson, Rachel C; Hargrove, Gaylene; Levin, Adeera; Karim, Mohamud
2018-02-01
End-stage kidney disease is associated with poor prognosis. Health care professionals must be prepared to address end-of-life issues and identify those at high risk for dying. A 6-month mortality prediction model for patients on dialysis derived in the United States is used but has not been externally validated. We aimed to assess its external validity and clinical utility in an independent cohort in Canada. We examined the performance of the published 6-month mortality prediction model using discrimination, calibration, and decision curve analyses. Data were derived from a cohort of 374 prevalent dialysis patients in two regions of British Columbia, Canada, and included serum albumin, age, peripheral vascular disease, dementia, and answers to the "surprise question" ("Would I be surprised if this patient died within the next year?"). The observed mortality in the validation cohort was 11.5% at 6 months. The prediction model had reasonable discrimination (c-stat = 0.70) but poor calibration (calibration-in-the-large = -0.53 (95% confidence interval: -0.88, -0.18); calibration slope = 0.57 (95% confidence interval: 0.31, 0.83)) in our data. Decision curve analysis showed the model only has added value in guiding clinical decisions in a small range of threshold probabilities: 8%-20%. Despite reasonable discrimination, the prediction model has poor calibration in this external study cohort; thus, it may have limited clinical utility in settings outside of where it was derived. Decision curve analysis clarifies limitations in clinical utility not apparent from receiver operating characteristic curve analysis. This study highlights the importance of external validation of prediction models prior to routine use in clinical practice.
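Decision curve analysis weighs true positives against false positives at each threshold probability. A hedged sketch of the standard net-benefit formula (Vickers-Elkin), using a hypothetical cohort rather than the study's data:

```python
def net_benefit(y_true, risk, pt):
    """Net benefit at threshold probability pt (decision curve
    analysis): NB = TP/N - FP/N * pt / (1 - pt). A patient is
    "treated" when the predicted risk is at least pt."""
    n = len(y_true)
    tp = sum(1 for y, r in zip(y_true, risk) if r >= pt and y == 1)
    fp = sum(1 for y, r in zip(y_true, risk) if r >= pt and y == 0)
    return tp / n - fp / n * pt / (1 - pt)

# Hypothetical 6-month outcomes (1 = died) and predicted risks:
deaths = [1, 1, 0, 0, 0, 0, 0, 1, 0, 0]
risks = [0.40, 0.25, 0.10, 0.05, 0.30, 0.15, 0.08, 0.50, 0.12, 0.20]
for pt in (0.08, 0.20):  # the ends of the useful range reported above
    print(pt, round(net_benefit(deaths, risks, pt), 3))
```

A model adds value at a given threshold only when its net benefit exceeds both "treat all" and "treat none", which is how the 8%-20% range above is identified.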
Predicting protein-binding regions in RNA using nucleotide profiles and compositions.
Choi, Daesik; Park, Byungkyu; Chae, Hanju; Lee, Wook; Han, Kyungsook
2017-03-14
Motivated by the increased amount of data on protein-RNA interactions and the availability of complete genome sequences of several organisms, many computational methods have been proposed to predict binding sites in protein-RNA interactions. However, most computational methods are limited to finding RNA-binding sites in proteins instead of protein-binding sites in RNAs. Predicting protein-binding sites in RNA is more challenging than predicting RNA-binding sites in proteins. Recent computational methods for finding protein-binding sites in RNAs have several drawbacks for practical use. We developed a new support vector machine (SVM) model for predicting protein-binding regions in mRNA sequences. The model uses sequence profiles constructed from log-odds scores of mono- and di-nucleotides and nucleotide compositions. The model was evaluated by standard 10-fold cross validation, leave-one-protein-out (LOPO) cross validation and independent testing. Since actual mRNA sequences have more non-binding regions than protein-binding regions, we tested the model on several datasets with different ratios of protein-binding regions to non-binding regions. The best performance of the model was obtained in a balanced dataset of positive and negative instances. 10-fold cross validation with a balanced dataset achieved a sensitivity of 91.6%, a specificity of 92.4%, an accuracy of 92.0%, a positive predictive value (PPV) of 91.7%, a negative predictive value (NPV) of 92.3% and a Matthews correlation coefficient (MCC) of 0.840. LOPO cross validation showed a lower performance than the 10-fold cross validation, but the performance remains high (87.6% accuracy and 0.752 MCC). In testing the model on independent datasets, it achieved an accuracy of 82.2% and an MCC of 0.656. Testing our model and other state-of-the-art methods on the same dataset showed that our model outperforms the others.
Sequence profiles of log-odds scores of mono- and di-nucleotides were much more powerful features than nucleotide compositions in finding protein-binding regions in RNA sequences. However, a slight additional performance gain was obtained when the sequence profiles were used together with nucleotide compositions. These are preliminary results of ongoing research, but demonstrate the potential of our approach as a powerful predictor of protein-binding regions in RNA. The program and supporting data are available at http://bclab.inha.ac.kr/RBPbinding .
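The feature construction described here can be sketched as follows. This is a rough illustration, not the authors' implementation: the add-one smoothing, averaging, and exact encoding are assumptions, and the toy sequences are invented. The resulting vectors would be fed to an SVM.

```python
import math
from collections import Counter

BASES = "ACGU"
DINUCS = [a + b for a in BASES for b in BASES]

def log_odds_table(pos_seqs, neg_seqs, kmers):
    """Log-odds score of each k-mer: log of its frequency in
    protein-binding regions over its frequency in non-binding
    regions (with add-one smoothing, an assumption here)."""
    def freqs(seqs):
        k = len(kmers[0])
        c = Counter()
        for s in seqs:
            c.update(s[i:i + k] for i in range(len(s) - k + 1))
        total = sum(c[m] for m in kmers) + len(kmers)
        return {m: (c[m] + 1) / total for m in kmers}
    fp, fn = freqs(pos_seqs), freqs(neg_seqs)
    return {m: math.log(fp[m] / fn[m]) for m in kmers}

def features(seq, mono_lo, di_lo):
    """Feature vector: mean mono-/di-nucleotide log-odds scores
    plus nucleotide composition, as the abstract describes."""
    mono = sum(mono_lo[b] for b in seq) / len(seq)
    di = sum(di_lo[seq[i:i + 2]] for i in range(len(seq) - 1)) / (len(seq) - 1)
    comp = [seq.count(b) / len(seq) for b in BASES]
    return [mono, di] + comp

# Toy training windows (invented, purely illustrative):
pos = ["ACGUACGG", "ACGGACGU"]   # "binding" regions
neg = ["UUUUAAAA", "AAAUUUUA"]   # "non-binding" regions
mono_lo = log_odds_table(pos, neg, list(BASES))
di_lo = log_odds_table(pos, neg, DINUCS)
print(features("ACGU", mono_lo, di_lo))
```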
Validation of a dynamic linked segment model to calculate joint moments in lifting.
de Looze, M P; Kingma, I; Bussmann, J B; Toussaint, H M
1992-08-01
A two-dimensional dynamic linked segment model was constructed and applied to a lifting activity. Reactive forces and moments were calculated by an instantaneous approach involving the application of Newtonian mechanics to individual adjacent rigid segments in succession. The analysis started once at the feet and once at a hands/load segment. The model was validated by comparing predicted external forces and moments at the feet or at a hands/load segment to actual values, which were simultaneously measured (ground reaction force at the feet) or assumed to be zero (external moments at feet and hands/load and external forces, besides gravitation, at hands/load). In addition, results of both procedures, in terms of joint moments, including the moment at the intervertebral disc between the fifth lumbar and first sacral vertebra (L5-S1), were compared. A correlation of r = 0.88 between calculated and measured vertical ground reaction forces was found. The calculated external forces and moments at the hands showed only minor deviations from the expected zero level. The moments at L5-S1, calculated starting from the feet compared to starting from the hands/load, yielded a coefficient of correlation of r = 0.99. However, moments calculated from the hands/load were 3.6% (averaged values) and 10.9% (peak values) higher. This difference is assumed to be due mainly to erroneous estimations of the positions of centres of gravity and joint rotation centres. The estimation of the location of the L5-S1 rotation axis can affect the results significantly. Despite the numerous studies estimating the load on the low back during lifting on the basis of linked segment models, only a few attempts to validate these models have been made. This study is concerned with the validity of the presented linked segment model. The results support the model's validity. Effects of several sources of error threatening the validity are discussed. Copyright © 1992. Published by Elsevier Ltd.
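The instantaneous Newtonian approach can be sketched for a single 2D segment: given the force and moment the distal neighbour exerts on the segment, Newton-Euler balance yields the load at the proximal joint, which is then propagated segment by segment toward L5-S1. The sign conventions and worked numbers below are illustrative assumptions, not values from the paper.

```python
def joint_load(m, I, acc, alpha, r_prox, r_dist, f_dist, m_dist, g=9.81):
    """One Newton-Euler step of a 2D linked segment model.
    m, I       : segment mass and moment of inertia about its CoM
    acc        : (ax, ay) linear acceleration of the CoM
    alpha      : angular acceleration of the segment
    r_prox/r_dist : vectors from the CoM to the proximal/distal joint
    f_dist, m_dist: force (fx, fy) and moment applied at the distal joint
    Returns the force and moment at the proximal joint."""
    def cross(r, f):  # 2D cross product r x F (a scalar)
        return r[0] * f[1] - r[1] * f[0]
    fx = m * acc[0] - f_dist[0]
    fy = m * acc[1] + m * g - f_dist[1]   # gravity acts in -y
    m_prox = I * alpha - m_dist - cross(r_dist, f_dist) - cross(r_prox, (fx, fy))
    return (fx, fy), m_prox

# Static check: a 2 kg segment held horizontally with no distal load.
# The proximal joint must carry the full weight and a moment of
# m * g * 0.2 (CoM sits 0.2 m from the joint).
(fx, fy), mp = joint_load(m=2.0, I=0.05, acc=(0, 0), alpha=0,
                          r_prox=(-0.2, 0), r_dist=(0.2, 0),
                          f_dist=(0, 0), m_dist=0)
```

Starting the recursion at the feet uses the measured ground reaction force as `f_dist`; starting at the hands/load uses the assumed-zero external load, which is exactly the redundancy the study exploits for validation.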
Shen, Minxue; Hu, Ming; Sun, Zhenqiu
2017-01-01
Objectives: To develop and validate brief scales to measure common emotional and behavioural problems among adolescents in the examination-oriented education system and collectivistic culture of China. Setting: Middle schools in Hunan province. Participants: 5442 middle school students aged 11–19 years were sampled. 4727 valid questionnaires were collected and used for validation of the scales. The final sample included 2408 boys and 2319 girls. Primary and secondary outcome measures: The tools were assessed by item response theory, classical test theory (reliability and construct validity) and differential item functioning. Results: Four scales to measure anxiety, depression, study problems and sociality problems were established. Exploratory factor analysis showed a two-factor solution for each scale. Confirmatory factor analysis showed acceptable to good model fit for each scale. Internal consistency and test–retest reliability of all scales were above 0.7. Item response theory showed that all items had acceptable discrimination parameters and most items had appropriate difficulty parameters. 10 items demonstrated differential item functioning with respect to gender. Conclusions: Four brief scales were developed and validated among adolescents in middle schools of China. The scales have good psychometric properties with minor differential item functioning. They can be used in middle school settings, and will help school officials to assess students' emotional/behavioural problems. PMID:28062469
Viscoelasticity of Axisymmetric Composite Structures: Analysis and Experimental Validation
2013-02-01
compressive stress at the interface between the composite and steel prior to the sheath's cut-off. Accordingly, the viscoelastic analysis is used... The hoop-stress profile in figure 6 shows the steel region is in compression, resulting from the winding tension of the composite overwrap. The stress... mechanical and thermal loads. Experimental validation of the model is conducted using a high-tensioned composite overwrapped on a steel cylinder. The creep
Lin, Chung-Ying; Pakpour, Amir H
2017-02-01
The problems of mood disorders are critical in people with epilepsy; therefore, there is a need to validate a useful screening tool for this population. The Hospital Anxiety and Depression Scale (HADS) has been used in this population and shown to be a satisfactory screening tool. However, more evidence on its construct validity is needed. A total of 1041 people with epilepsy were recruited in this study, and each completed the HADS. Confirmatory factor analysis (CFA) and Rasch analysis were used to examine the construct validity of the HADS. In addition, internal consistency was tested using Cronbach's α, person separation reliability, and item separation reliability. Ordering of the response descriptors and differential item functioning (DIF) were examined using the Rasch models. Based on its cutoffs, the HADS indicated that 55.3% of our participants had anxiety and 56.0% had depression. CFA and Rasch analyses both showed satisfactory construct validity of the HADS; internal consistency was also acceptable (α=0.82 in anxiety and 0.79 in depression; person separation reliability=0.82 in anxiety and 0.73 in depression; item separation reliability=0.98 in anxiety and 0.91 in depression). The difficulties of the four-point Likert scale used in the HADS increased monotonically, indicating no disordered response categories. The HADS displayed no DIF across male and female patients or across types of epilepsy. The HADS has promising psychometric properties in terms of construct validity in people with epilepsy. Moreover, the additive item score is supported for calculating the cutoff. Copyright © 2016 British Epilepsy Association. Published by Elsevier Ltd. All rights reserved.
IMPACT: a generic tool for modelling and simulating public health policy.
Ainsworth, J D; Carruthers, E; Couch, P; Green, N; O'Flaherty, M; Sperrin, M; Williams, R; Asghar, Z; Capewell, S; Buchan, I E
2011-01-01
Populations are under-served by local health policies and management of resources. This partly reflects a lack of realistically complex models to enable appraisal of a wide range of potential options. Rising computing power coupled with advances in machine learning and healthcare information now enables such models to be constructed and executed. However, such models are not generally accessible to public health practitioners who often lack the requisite technical knowledge or skills. To design and develop a system for creating, executing and analysing the results of simulated public health and healthcare policy interventions, in ways that are accessible and usable by modellers and policy-makers. The system requirements were captured and analysed in parallel with the statistical method development for the simulation engine. From the resulting software requirement specification the system architecture was designed, implemented and tested. A model for Coronary Heart Disease (CHD) was created and validated against empirical data. The system was successfully used to create and validate the CHD model. The initial validation results show concordance between the simulation results and the empirical data. We have demonstrated the ability to connect health policy-modellers and policy-makers in a unified system, thereby making population health models easier to share, maintain, reuse and deploy.
Estrada Álvarez, Jorge M; Ossa García, Ximena; del Quijano del Gordo, Carmen I; Bustos, Luis; Urina, Diana P; Pérez, Celso F; Ossa, John E; Moreno Rojas, Edwin
2015-08-01
To assess the validity and reliability of the chronic respiratory questionnaire (CRQ) in the measurement of HRQL in the Colombian population with COPD, a cross-sectional study was conducted with a sample of 200 patients diagnosed with COPD according to GOLD criteria. Convergent validity was evaluated by correlating the questionnaire results with other clinical variables such as exercise tolerance, forced expiratory volume in the first second (FEV1), and depression levels. HRQL measured through the CRQ correlated significantly with the 6-min walk test (r = 0.34); the fatigue dimension (r = 0.37) and the dyspnoea dimension correlated with FEV1 (r = 0.21), and the emotional function and disease management dimensions with depression levels (r = -0.79). Generalized Structured Component Analysis (GSCA) was performed with the prespecified model; the total variance explained by the items in the model was 61.5% (FIT = 0.615), with unweighted least squares GFI = 0.998 and standardised root mean square residual SRMR = 0.084, indicating that the model fits adequately. The CRQ presents evidence of adequate validity and reliability in the Colombian population. Its use is recommended to measure HRQL in patients with COPD, although future validations will be needed to assess its sensitivity to change.
Kersten, Paula; Cardol, Mieke; George, Steve; Ward, Christopher; Sibley, Andrew; White, Barney
2007-10-15
To evaluate the cross-cultural validity of the five subscales of the Impact on Participation and Autonomy (IPA) measure and the full 31-item scale. Data from two validation studies (Dutch and English) were pooled (n = 106). Participants (aged 18-75), known to rehabilitation services or GP practices, had conditions ranging from minor ailments to significant disability. Validity of the five subscales and the total scale was examined using Rasch analysis (Partial Credit Model). P values smaller than 0.01 were employed to allow for multiple testing. A number of items in all the subscales except 'Outdoor Autonomy' needed rescoring. One 'Indoor Autonomy' item showed uniform DIF by country and was split by country. One 'Work and Education' item displayed uniform and non-uniform DIF by gender. All the subscales fitted the Rasch model and were invariant across country. A 30-item IPA also fitted the Rasch model. The IPA subscales and a 30-item scale are invariant across the two cultures and gender. The IPA can be used validly to assess participation and autonomy in these populations. Further analyses are required to examine whether the IPA is invariant across differing levels of disability and other disease groups not included in this study.
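The Partial Credit Model used here assigns each response category a probability driven by the person's trait level and the item's step thresholds; disordered thresholds are what motivate the rescoring described above. A minimal sketch with illustrative values (not estimates from the IPA data):

```python
import math

def pcm_category_probs(theta, thresholds):
    """Partial Credit Model (Masters): probability of each response
    category for a person at trait level theta, given an item's step
    thresholds. Category k's log-numerator is the cumulative sum of
    (theta - threshold_j) for j <= k."""
    logits = [0.0]
    for t in thresholds:
        logits.append(logits[-1] + (theta - t))
    exps = [math.exp(l) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

# A 4-category item (3 step thresholds), person slightly above average:
probs = pcm_category_probs(theta=0.5, thresholds=[-1.0, 0.0, 1.5])
print([round(p, 3) for p in probs])
```

Fit to this model, and its invariance across country and gender (the DIF tests above), is what licenses treating the subscale scores as comparable across the two cultures.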
Ship Detection in SAR Image Based on the Alpha-stable Distribution
Wang, Changcheng; Liao, Mingsheng; Li, Xiaofeng
2008-01-01
This paper describes an improved Constant False Alarm Rate (CFAR) ship detection algorithm for spaceborne synthetic aperture radar (SAR) images based on the Alpha-stable distribution model. Typically, the CFAR algorithm uses the Gaussian distribution model to describe the statistical characteristics of a SAR image's background clutter. However, the Gaussian distribution is only valid for multilook SAR images when several radar looks are averaged. As sea clutter in SAR images shows spiky or heavy-tailed characteristics, the Gaussian distribution often fails to describe background sea clutter. In this study, we replace the Gaussian distribution with the Alpha-stable distribution, which is widely used in impulsive or spiky signal processing, to describe the background sea clutter in SAR images. In our proposed algorithm, an initial step for detecting possible ship targets is employed. Then, similar to the typical two-parameter CFAR algorithm, a local process is applied to each pixel identified as a possible target. A RADARSAT-1 image is used to validate this Alpha-stable-distribution-based algorithm, and known ship location data from the time of the RADARSAT-1 SAR image acquisition are used to validate the ship detection results. Validation results show improvements of the new CFAR algorithm based on the Alpha-stable distribution over the CFAR algorithm based on the Gaussian distribution. PMID:27873794
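The local CFAR test can be sketched on a 1D intensity profile. This is the conventional Gaussian-style mean-plus-k-sigma threshold, kept for brevity; the paper's contribution is to replace that threshold with one derived from a fitted Alpha-stable clutter model, which better tolerates spiky sea clutter. The window sizes and data below are invented.

```python
import statistics

def cfar_detect(pixels, idx, guard=2, train=8, k=3.0):
    """Cell-averaging CFAR test on a 1D intensity profile: estimate
    the background from training cells around pixel idx (excluding
    guard cells next to it) and flag the pixel if it exceeds
    mean + k * std of that background."""
    left = pixels[max(0, idx - guard - train): max(0, idx - guard)]
    right = pixels[idx + guard + 1: idx + guard + 1 + train]
    background = left + right
    threshold = statistics.mean(background) + k * statistics.pstdev(background)
    return pixels[idx] > threshold

clutter = [1.0, 1.2, 0.9, 1.1, 1.0, 0.8, 1.3, 1.0, 0.9, 1.1,
           9.5,                      # bright ship-like pixel
           1.0, 1.2, 1.1, 0.9, 1.0, 1.1, 0.8, 1.0, 1.2, 0.9]
print(cfar_detect(clutter, 10))     # the bright pixel is flagged
```

Swapping the Gaussian quantile for an Alpha-stable one changes only how `threshold` is computed from the training cells; the sliding-window structure is unchanged.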
Validation of an assay for quantification of alpha-amylase in saliva of sheep
Fuentes-Rubio, Maria; Fuentes, Francisco; Otal, Julio; Quiles, Alberto; Hevia, María Luisa
2016-01-01
The objective of this study was to develop a time-resolved immunofluorometric assay (TR-IFMA) for quantification of salivary alpha-amylase in sheep. For that purpose, after the design of the assay, an analytical and a clinical validation were carried out. The analytical validation of the assay showed intra- and inter-assay coefficients of variation (CVs) of 6.1% and 10.57%, respectively, and an analytical limit of detection of 0.09 ng/mL. The assay also demonstrated a high level of accuracy, as determined by linearity under dilution. For clinical validation, a model of acute stress testing was conducted to determine whether expected significant changes in alpha-amylase were picked up by the newly developed assay. In that model, 11 sheep were immobilized and confronted with a sheepdog to induce stress. Saliva samples were obtained before stress induction and 15, 30, and 60 min afterwards. Salivary cortisol was measured as a reference of stress level. The results of TR-IFMA showed a significant increase (P < 0.01) in the concentration of alpha-amylase in saliva after stress induction. The assay developed in this study could be used to measure alpha-amylase in the saliva of sheep, and this enzyme may be a noninvasive biomarker of stress in sheep. PMID:27408332
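The intra- and inter-assay CVs quoted above are computed as the sample standard deviation of replicate measurements divided by their mean, expressed in percent. A minimal sketch with invented replicate readings:

```python
import statistics

def percent_cv(replicates):
    """Coefficient of variation of replicate measurements, in percent:
    100 * sample standard deviation / mean."""
    return 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)

# Hypothetical within-run replicate readings of one sample (ng/mL):
within_run = [10.2, 9.8, 10.5, 9.9, 10.1]
print(round(percent_cv(within_run), 2))
```

Intra-assay CV uses replicates within one run; inter-assay CV applies the same formula to the same sample measured across separate runs.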
Won, Jongsung; Cheng, Jack C P; Lee, Ghang
2016-03-01
Waste generated in construction and demolition processes comprised around 50% of the solid waste in South Korea in 2013. Many cases show that design validation based on building information modeling (BIM) is an effective means to reduce the amount of construction waste since construction waste is mainly generated due to improper design and unexpected changes in the design and construction phases. However, the amount of construction waste that could be avoided by adopting BIM-based design validation has been unknown. This paper aims to estimate the amount of construction waste prevented by a BIM-based design validation process based on the amount of construction waste that might be generated due to design errors. Two project cases in South Korea were studied in this paper, with 381 and 136 design errors detected, respectively during the BIM-based design validation. Each design error was categorized according to its cause and the likelihood of detection before construction. The case studies show that BIM-based design validation could prevent 4.3-15.2% of construction waste that might have been generated without using BIM. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Pradhan, Biswajeet
2010-05-01
This paper presents the results of the cross-validation of a multivariate logistic regression model using remote sensing data and GIS for landslide hazard analysis on the Penang, Cameron, and Selangor areas in Malaysia. Landslide locations in the study areas were identified by interpreting aerial photographs and satellite images, supported by field surveys. SPOT 5 and Landsat TM satellite imagery were used to map land cover and vegetation index, respectively. Maps of topography, soil type, lineaments and land cover were constructed from the spatial datasets. Ten factors which influence landslide occurrence, i.e., slope, aspect, curvature, distance from drainage, lithology, distance from lineaments, soil type, land cover, rainfall precipitation, and normalized difference vegetation index (NDVI), were extracted from the spatial database and the logistic regression coefficient of each factor was computed. Then the landslide hazard was analysed using the multivariate logistic regression coefficients derived not only from the data for the respective area but also from the logistic regression coefficients calculated from each of the other two areas (nine hazard maps in all) as a cross-validation of the model. For verification of the model, the results of the analyses were then compared with the field-verified landslide locations. Among the three cases applying the logistic regression coefficients in the same study area, Selangor based on the Selangor coefficients showed the highest accuracy (94%), whereas Penang based on the Penang coefficients showed the lowest accuracy (86%). Similarly, among the six cases from the cross-application of logistic regression coefficients to the other two areas, Selangor based on the Cameron coefficients showed the highest prediction accuracy (90%), whereas Penang based on the Selangor coefficients showed the lowest accuracy (79%).
Qualitatively, the cross-application model yields reasonable results that can be used for preliminary landslide hazard mapping.
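The cross-application design above amounts to fitting logistic regression coefficients in one area and scoring another area's cells with them. A self-contained sketch with synthetic two-factor data (not the Malaysian landslide factors), using plain stochastic gradient descent in place of the statistical package the study would have used:

```python
import math
import random

def fit_logistic(X, y, lr=0.5, epochs=500):
    """Logistic regression fitted by stochastic gradient descent;
    the intercept is stored in w[0]."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - yi
            w[0] -= lr * err
            for j, xj in enumerate(xi):
                w[j + 1] -= lr * err * xj
    return w

def predict(w, xi):
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic "area A": a cell slides (1) when its two normalised
# factors sum to more than 1.
random.seed(0)
XA = [[random.random(), random.random()] for _ in range(200)]
yA = [1 if x[0] + x[1] > 1.0 else 0 for x in XA]
wA = fit_logistic(XA, yA)

# Cross-application: hazard scores for "area B" cells computed with
# area A's coefficients, mirroring the nine-map design above.
area_b_cells = [[0.9, 0.8], [0.1, 0.2]]
print([round(predict(wA, x), 2) for x in area_b_cells])
```

Comparing such cross-applied hazard scores against field-verified landslide locations gives the 79-90% accuracies reported above.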
NASA Astrophysics Data System (ADS)
Bahtiar; Rahayu, Y. S.; Wasis
2018-01-01
This research aims to produce the P3E learning model to improve students' critical thinking skills. The developed model is named P3E and consists of four stages, namely organization, inquiry, presentation, and evaluation. This development research follows the development stages of Kemp. The wide-scale try-out used a pretest-posttest group design and was conducted in grade X of the 2016/2017 academic year. The analysis of the results of this development research includes three aspects, namely the validity, practicality, and effectiveness of the developed model. The results showed that (1) the P3E learning model was valid according to experts, with an average value of 3.7; (2) completion of the syntax of the developed learning model reached 98.09% and 94.39% in the two schools, based on the observers' assessments, showing that the developed model is practical to implement; and (3) the developed model is effective for improving students' critical thinking skills, although the n-gain of the students' critical thinking skills was 0.54, in the moderate category. Based on these results, it can be concluded that the developed P3E learning model is suitable for improving students' critical thinking skills.
Bangera, Nitin B; Schomer, Donald L; Dehghani, Nima; Ulbert, Istvan; Cash, Sydney; Papavasiliou, Steve; Eisenberg, Solomon R; Dale, Anders M; Halgren, Eric
2010-12-01
Forward solutions with different levels of complexity are employed for localization of current generators, which are responsible for the electric and magnetic fields measured from the human brain. The influence of brain anisotropy on the forward solution is poorly understood. The goal of this study is to validate an anisotropic model for the intracranial electric forward solution by comparing with the directly measured 'gold standard'. Dipolar sources are created at known locations in the brain and intracranial electroencephalogram (EEG) is recorded simultaneously. Isotropic models with increasing level of complexity are generated along with anisotropic models based on Diffusion tensor imaging (DTI). A Finite Element Method based forward solution is calculated and validated using the measured data. Major findings are (1) An anisotropic model with a linear scaling between the eigenvalues of the electrical conductivity tensor and water self-diffusion tensor in brain tissue is validated. The greatest improvement was obtained when the stimulation site is close to a region of high anisotropy. The model with a global anisotropic ratio of 10:1 between the eigenvalues (parallel: tangential to the fiber direction) has the worst performance of all the anisotropic models. (2) Inclusion of cerebrospinal fluid as well as brain anisotropy in the forward model is necessary for an accurate description of the electric field inside the skull. The results indicate that an anisotropic model based on the DTI can be constructed non-invasively and shows an improved performance when compared to the isotropic models for the calculation of the intracranial EEG forward solution.
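The validated linear-scaling hypothesis can be stated compactly: the conductivity tensor shares the eigenvectors of the water self-diffusion tensor, with its eigenvalues a single scalar multiple of the diffusion eigenvalues. A minimal sketch (the scaling constant `k` and the example values are assumptions, not the study's fitted parameters):

```python
def outer(u, v):
    """Outer product of two 3-vectors."""
    return [[ui * vj for vj in v] for ui in u]

def conductivity_tensor(diff_eigvals, diff_eigvecs, k):
    """Linearly scaled anisotropic conductivity:
    sigma = k * sum_i lambda_i * v_i v_i^T,
    i.e. the conductivity tensor is rebuilt from the diffusion
    tensor's eigen-decomposition with eigenvalues scaled by k."""
    sigma = [[0.0] * 3 for _ in range(3)]
    for lam, v in zip(diff_eigvals, diff_eigvecs):
        o = outer(v, v)
        for i in range(3):
            for j in range(3):
                sigma[i][j] += k * lam * o[i][j]
    return sigma

# A voxel with strong anisotropy along x (illustrative values):
sigma = conductivity_tensor([1.5, 0.3, 0.3],
                            [(1, 0, 0), (0, 1, 0), (0, 0, 1)],
                            k=0.7)
```

This contrasts with the fixed-ratio alternative tested in the study, which forces a global 10:1 eigenvalue ratio regardless of the local diffusion data and performed worst.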
Development and validation of the Hospitality Axiological Scale for Humanization of Nursing Care
Galán González-Serna, José María; Ferreras-Mencia, Soledad; Arribas-Marín, Juan Manuel
2017-01-01
Objective: to develop and validate a scale to evaluate nursing attitudes in relation to hospitality for the humanization of nursing care. Participants: the sample consisted of 499 nursing professionals and undergraduate students of the final two years of the Bachelor of Science in Nursing program. Method: the instrument was developed and validated to evaluate the ethical values related to hospitality using a methodological approach. Subsequently, a model was developed to measure the dimensions forming the construct of hospitality. Results: the Axiological Hospitality Scale showed a high internal consistency, with Cronbach's Alpha=0.901. The validation of the measuring instrument was performed using exploratory and confirmatory factor analysis techniques, with high goodness-of-fit measures. Conclusions: the developed instrument showed adequate validity and high internal consistency. Based on the consistency of its psychometric properties, it is possible to affirm that the scale provides a reliable measurement of hospitality. It was also possible to determine the dimensions or sources that comprise it: respect, responsibility, quality and transpersonal care. PMID:28793127
NASA Astrophysics Data System (ADS)
Xu, Zhongyan; Godrej, Adil N.; Grizzard, Thomas J.
2007-10-01
Runoff models such as HSPF and reservoir models such as CE-QUAL-W2 are used to model water quality in watersheds. Most often, the models are independently calibrated to observed data. While this approach can achieve good calibration, it does not replicate the physically-linked nature of the system. When models are linked by using the model output from an upstream model as input to a downstream model, the physical reality of a continuous watershed, where the overland and waterbody portions are parts of the whole, is better represented. There are some additional challenges in the calibration of such linked models, because the aim is to simulate the entire system as a whole, rather than piecemeal. When public entities are charged with model development, one of the driving forces is to use public-domain models. This paper describes the use of two such models, HSPF and CE-QUAL-W2, in the linked modeling of the Occoquan watershed located in northern Virginia, USA. The description of the process is provided, and results from the hydrological calibration and validation are shown. The Occoquan model consists of six HSPF and two CE-QUAL-W2 models, linked in a complex way, to simulate two major reservoirs and the associated drainage areas. The overall linked model was calibrated for a three-year period and validated for a two-year period. The results show that a successful calibration can be achieved using the linked approach, with moderate additional effort. Overall flow balances based on the three-year calibration period at four stream stations showed agreement ranging from -3.95% to +3.21%. Flow balances for the two reservoirs, compared via the daily water surface elevations, also showed good agreement (R2 values of 0.937 for Lake Manassas and 0.926 for Occoquan Reservoir), when missing (un-monitored) flows were included.
Validation of the models ranged from poor to fair for the watershed models and excellent for the waterbody models, thus indicating that the current model can be used to explore waterbody issues, but should be used with appropriate care for watershed issues. The study objective of being able to use the Occoquan model to study the impact of land use changes on hydrodynamics and water quality in the waterbodies, particularly the Occoquan Reservoir, can be met with the current model. However, appropriate judgment should be exercised when using the model for the prediction of watershed runoff. One of the advantages of using the linked approach is to develop a direct linkage between upstream land use changes and downstream water quality. This makes it easier for decision-makers to evaluate alternative watershed management plans and for the public to understand the decision-making process. The successful calibration of hydrology provides a solid base for further model development and application.
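The two calibration statistics quoted above are straightforward to compute. A sketch under the assumption that the reported R2 is the squared Pearson correlation between observed and simulated series (the paper does not spell out its formula), with invented flow values:

```python
def r_squared(observed, simulated):
    """Squared Pearson correlation between observed and simulated
    series, assumed here to be the R2 reported for daily water
    surface elevations."""
    n = len(observed)
    mo = sum(observed) / n
    ms = sum(simulated) / n
    cov = sum((o - mo) * (s - ms) for o, s in zip(observed, simulated))
    vo = sum((o - mo) ** 2 for o in observed)
    vs = sum((s - ms) ** 2 for s in simulated)
    return cov * cov / (vo * vs)

def flow_balance_error(observed, simulated):
    """Percent difference between total simulated and observed flow
    volumes over a period (cf. the -3.95% to +3.21% figures above)."""
    return 100.0 * (sum(simulated) - sum(observed)) / sum(observed)

daily_obs = [100.0, 120.0, 90.0, 110.0]   # illustrative flows
daily_sim = [97.0, 125.0, 88.0, 108.0]
print(round(r_squared(daily_obs, daily_sim), 3),
      round(flow_balance_error(daily_obs, daily_sim), 2))
```

In a linked calibration these statistics are checked at every station and reservoir, since an error in the upstream HSPF output propagates directly into the CE-QUAL-W2 inputs downstream.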
Impact of Learning Model Based on Cognitive Conflict toward Student’s Conceptual Understanding
NASA Astrophysics Data System (ADS)
Mufit, F.; Festiyed, F.; Fauzan, A.; Lufri, L.
2018-04-01
A recurring problem in physics learning is misconception and low conceptual understanding. Misconceptions occur not only among school students, but also among college students and teachers. Existing learning models have had little impact on improving conceptual understanding or on remediating student misconceptions. This study examines the impact of a cognitive conflict-based learning model on improving conceptual understanding and remediating student misconceptions. The research method used is design/development research. The product developed is a cognitive conflict-based learning model together with its components. This article reports the product design results, validity tests, and practicality tests. The study resulted in the design of a cognitive conflict-based learning model with four learning syntaxes: (1) preconception activation, (2) presentation of cognitive conflict, (3) discovery of concepts and equations, and (4) reflection. Validity tests by several experts on the content, didactic, and appearance/language aspects indicate very valid criteria. Product trial results also show the product to be very practical to use. Based on pretest and posttest results, the cognitive conflict-based learning model has a good impact on improving conceptual understanding and remediating misconceptions, especially in high-ability students.
Isotani, Shuji; Shimoyama, Hirofumi; Yokota, Isao; Noma, Yasuhiro; Kitamura, Kousuke; China, Toshiyuki; Saito, Keisuke; Hisasue, Shin-ichi; Ide, Hisamitsu; Muto, Satoru; Yamaguchi, Raizo; Ukimura, Osamu; Gill, Inderbir S; Horie, Shigeo
2015-10-01
A predictive model of postoperative renal function may inform the planning of nephrectomy. Our aims were to develop a novel predictive model combining clinical indices with computer volumetry, measuring the preserved renal cortex volume (RCV) on multidetector computed tomography (MDCT), and to prospectively validate the model's performance. A total of 60 patients undergoing radical nephrectomy from 2011 to 2013 participated, comprising a development cohort of 39 patients and an external validation cohort of 21 patients. RCV was calculated by voxel count using software (Vincent, FUJIFILM). Renal function before and after radical nephrectomy was assessed via the estimated glomerular filtration rate (eGFR). Factors affecting postoperative eGFR were examined by regression analysis with a backward elimination method to develop the novel model for predicting postoperative eGFR. The predictive model was externally validated and its performance was compared with that of previously reported models. The postoperative eGFR value was associated with age, preoperative eGFR, preserved renal parenchymal volume (RPV), preserved RCV, % of RPV alteration, and % of RCV alteration (p < 0.01). The variables significantly correlated with %eGFR alteration were %RCV preservation (r = 0.58, p < 0.01) and %RPV preservation (r = 0.54, p < 0.01). We developed our regression model as follows: postoperative eGFR = 57.87 - 0.55(age) - 15.01(body surface area) + 0.30(preoperative eGFR) + 52.92(%RCV preservation). A strong correlation was seen between postoperative eGFR and the model's estimate (r = 0.83; p < 0.001). In the external validation cohort (n = 21), our model outperformed previously reported models. Combining MDCT renal volumetry and clinical indices might yield an important tool for predicting postoperative renal function.
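The reported regression equation can be sketched directly in code. One caveat: the abstract does not state whether %RCV preservation enters as a fraction or a percentage; the fraction scaling (0-1) is an assumption made here so that typical inputs give physiologic eGFR values.

```python
def predict_postop_egfr(age_years, bsa_m2, preop_egfr, rcv_preserved):
    """Postoperative eGFR from the reported regression model.

    rcv_preserved: fraction of renal cortex volume preserved (0-1).
    Whether the published model expects a fraction or a percentage is
    an assumption made for this sketch.
    """
    return (57.87
            - 0.55 * age_years       # age in years
            - 15.01 * bsa_m2         # body surface area in m^2
            + 0.30 * preop_egfr      # preoperative eGFR
            + 52.92 * rcv_preserved) # %RCV preservation as a fraction

# e.g. a 60-year-old with BSA 1.7 m^2, preoperative eGFR 80, 60% RCV preserved
egfr = predict_postop_egfr(60, 1.7, 80, 0.6)
```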
Tian, Kuang-da; Qiu, Kai-xian; Li, Zu-hong; Lü, Ya-qiong; Zhang, Qiu-ju; Xiong, Yan-mei; Min, Shun-geng
2014-12-01
The purpose of the present paper is to determine calcium and magnesium in tobacco using NIR combined with least squares-support vector machine (LS-SVM). Five hundred ground and dried tobacco samples from Qujing city, Yunnan province, China, were surveyed by a MATRIX-I spectrometer (Bruker Optics, Bremen, Germany). At the beginning of data processing, outlier samples were eliminated for model stability. The remaining 487 samples were divided into several calibration sets and validation sets according to a hybrid modeling strategy. Monte-Carlo cross validation was used to choose the best spectral preprocessing method from multiplicative scatter correction (MSC), standard normal variate transformation (SNV), S-G smoothing, 1st derivative, etc., and their combinations. To optimize the parameters of the LS-SVM model, a multilayer grid search and 10-fold cross validation were applied. The final LS-SVM models with the optimized parameters were trained on the calibration set and assessed with 287 validation samples selected by the Kennard-Stone method. For the quantitative model of calcium in tobacco, Savitzky-Golay FIR smoothing with frame size 21 showed the best performance. The regularization parameter λ of the LS-SVM was e^16.11, while the bandwidth of the RBF kernel σ² was e^8.42. The determination coefficient for calibration (Rc²) was 0.9755 and the determination coefficient for prediction (Rp²) was 0.9422, better than the performance of the PLS model (Rc² = 0.9593, Rp² = 0.9344). For the quantitative analysis of magnesium, SNV made the regression model more precise than the other preprocessing methods. The optimized λ was e^15.25 and σ² was e^6.32. Rc² and Rp² were 0.9961 and 0.9301, respectively, better than the PLS model (Rc² = 0.9716, Rp² = 0.8924). After modeling, the whole process of NIR scanning and data analysis for one sample took only tens of seconds.
The overall results show that NIR spectroscopy combined with LS-SVM can be efficiently utilized for rapid and accurate analysis of calcium and magnesium in tobacco.
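The LS-SVM machinery described above reduces, in the dual, to a single linear system; the following sketch (not the authors' code) shows that structure with an RBF kernel, using the convention K(x, x') = exp(-||x - x'||² / σ²). The toy data and parameter values are illustrative only.

```python
import numpy as np

def rbf_kernel(A, B, sigma2):
    # K(x, x') = exp(-||x - x'||^2 / sigma^2), a common LS-SVM convention
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / sigma2)

def lssvm_fit(X, y, lam, sigma2):
    # LS-SVM regression dual: solve [[0, 1^T], [1, K + I/lam]] [b; a] = [0; y]
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, sigma2) + np.eye(n) / lam
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]  # bias b, dual weights alpha

def lssvm_predict(X_train, b, alpha, sigma2, X_new):
    return rbf_kernel(X_new, X_train, sigma2) @ alpha + b

# toy 1-D regression: with weak regularization the fit nearly interpolates
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 1.0, 2.0, 3.0])
b, alpha = lssvm_fit(X, y, lam=1e6, sigma2=1.0)
preds = lssvm_predict(X, b, alpha, 1.0, X)
```

In the paper's setting, X would hold the preprocessed NIR spectra and y the measured calcium or magnesium content, with λ and σ² tuned by the grid search and cross validation described above.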
Modeling and characterization of multipath in global navigation satellite system ranging signals
NASA Astrophysics Data System (ADS)
Weiss, Jan Peter
The Global Positioning System (GPS) provides position, velocity, and time information to users anywhere near the Earth, in real time and regardless of weather conditions. Since the system became operational, improvements in many areas have reduced the systematic errors affecting GPS measurements, such that multipath, defined as any signal taking a path other than the direct one, has become a significant, if not dominant, error source for many applications. This dissertation utilizes several approaches to characterize and model multipath errors in GPS measurements. Multipath errors in GPS ranging signals are characterized for several receiver systems and environments. Experimental P(Y) code multipath data are analyzed for ground stations with multipath levels ranging from minimal to severe, a C-12 turboprop, an F-18 jet, and an aircraft carrier. Comparisons between receivers utilizing single patch antennas and multi-element arrays are also made. In general, the results show significant reductions in multipath with antenna array processing, although large errors can occur even with this kind of equipment. Analysis of airborne platform multipath shows that the errors tend to be small in magnitude, because the size of the aircraft limits the geometric delay of multipath signals, and high in frequency, because aircraft dynamics cause rapid variations in geometric delay. A comprehensive multipath model is developed and validated. The model integrates 3D structure models, satellite ephemerides, electromagnetic ray-tracing algorithms, and detailed antenna and receiver models to predict multipath errors. Validation is performed by comparing experimental and simulated multipath via overall error statistics, per-satellite time histories, and frequency content analysis. The validation environments include two urban buildings, an F-18, an aircraft carrier, and a rural area where terrain multipath dominates.
The validated models are used to identify multipath sources, characterize signal properties, evaluate additional antenna and receiver tracking configurations, and estimate the reflection coefficients of multipath-producing surfaces. Dynamic models for an F-18 landing on an aircraft carrier correlate aircraft dynamics to multipath frequency content; the model also characterizes the separate contributions of multipath due to the aircraft, ship, and ocean to the overall error statistics. Finally, reflection coefficients for multipath produced by terrain are estimated via a least-squares algorithm.
Noise optimization of a regenerative automotive fuel pump
NASA Astrophysics Data System (ADS)
Wang, J. F.; Feng, H. H.; Mou, X. L.; Huang, Y. X.
2017-03-01
The regenerative pump used in automotive fuel systems faces a noise problem. To understand the mechanism in detail, Computational Fluid Dynamics (CFD) and Computational Acoustic Analysis (CAA) were used together to characterize the fluid and acoustic behavior of the fuel pump, using ANSYS CFX 15.0 and LMS Virtual.Lab Rev 12, respectively. The CFD model and the acoustic model were validated by a mass flow rate test and a sound pressure test, respectively. Comparing the computational and experimental results shows that sound pressure levels at the observer position are consistent at high frequencies, especially at the blade passing frequency. After the models were validated, several numerical configurations were analyzed for noise improvement. It was observed that for configurations with a greater number of impeller blades, the noise level at the blade passing frequency improved significantly compared with the original model.
Validation of Heat Transfer Thermal Decomposition and Container Pressurization of Polyurethane Foam.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scott, Sarah Nicole; Dodd, Amanda B.; Larsen, Marvin E.
Polymer foam encapsulants provide mechanical, electrical, and thermal isolation in engineered systems. In fire environments, gas pressure from thermal decomposition of polymers can cause mechanical failure of sealed systems. In this work, a detailed uncertainty quantification study of PMDI-based polyurethane foam is presented to assess the validity of the computational model. Both experimental measurement uncertainty and model prediction uncertainty are examined and compared. Both the mean value method and the Latin hypercube sampling approach are used to propagate the uncertainty through the model. In addition to comparing computational and experimental results, the influence of each input parameter on the simulation result is also investigated. These results show that further development of the physics model of the foam and appropriate associated material testing are necessary to improve model accuracy.
A New Model for the Organizational Structure of Medical Record Departments in Hospitals in Iran
Moghaddasi, Hamid; Hosseini, Azamossadat; Sheikhtaheri, Abbas
2006-01-01
The organizational structure of medical record departments in Iran is not appropriate for the efficient management of healthcare information. In addition, there is no strong information management division to provide comprehensive information management services in hospitals in Iran. Therefore, a suggested model was designed based on four main axes: 1) specifications of a Health Information Management Division, 2) specifications of a Healthcare Information Management Department, 3) the functions of the Healthcare Information Management Department, and 4) the units of the Healthcare Information Management Department. The validity of the model was determined through use of the Delphi technique. The results of the validation process show that the majority of experts agree with the model and consider it to be appropriate and applicable for hospitals in Iran. The model is therefore recommended for hospitals in Iran. PMID:18066362
MIXING STUDY FOR JT-71/72 TANKS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, S.
2013-11-26
All modeling calculations for the mixing operations of miscible fluids contained in the HB-Line tanks, JT-71/72, were performed using a three-dimensional Computational Fluid Dynamics (CFD) approach. The CFD modeling results were benchmarked against literature results and previous SRNL test results to validate the model. Final performance calculations were then performed with the validated model to quantify the mixing time for the HB-Line tanks. The mixing study results for the JT-71/72 tanks show that, for the cases modeled, the mixing time required for blending of the tank contents is no more than 35 minutes, well below the 2.5 hours of recirculation pump operation. The results therefore demonstrate that 2.5 hours of operation of one recirculation pump is adequate to achieve well-mixed tank contents.
ERIC Educational Resources Information Center
d'Ailly, Hsiao
2003-01-01
Tests a model of motivation and achievement with data from 50 teachers and 806 Grade 4-6 students in Taiwan. Autonomy as a construct was shown to have ecological validity in Chinese children. The proposed model fit the data well, showing that maternal involvement and autonomy support, as well as teachers' autonomy support, are important for…
Völler, Swantje; Flint, Robert B; Stolk, Leo M; Degraeuwe, Pieter L J; Simons, Sinno H P; Pokorna, Paula; Burger, David M; de Groot, Ronald; Tibboel, Dick; Knibbe, Catherijne A J
2017-11-15
Particularly in the pediatric clinical pharmacology field, data-sharing offers the possibility of making the most of all available data. In this study, we utilize previously collected therapeutic drug monitoring (TDM) data of term and preterm newborns to develop a population pharmacokinetic model for phenobarbital. We externally validate the model using prospective phenobarbital data from an ongoing pharmacokinetic study in preterm neonates. TDM data from 53 neonates (gestational age (GA): 37 (24-42) weeks, bodyweight: 2.7 (0.45-4.5) kg; postnatal age (PNA): 4.5 (0-22) days) contained information on dosage histories, concentrations, and covariate data (including birth weight, actual weight, PNA, postmenstrual age, GA, sex, liver and kidney function, and APGAR score). Model development was carried out using NONMEM® 7.3. After assessment of model fit, the model was validated using data of 17 neonates included in the DINO (Drug dosage Improvement in NeOnates) study. Modelling of 229 plasma concentrations, ranging from 3.2 to 75.2 mg/L, resulted in a one-compartment model for phenobarbital. Clearance (CL) and volume of distribution (Vd) for a child with a birthweight of 2.6 kg at PNA day 4.5 were 0.0091 L/h (9%) and 2.38 L (5%), respectively. Birthweight and PNA were the best predictors of CL maturation, increasing CL by 36.7% per kg birthweight and 5.3% per postnatal day of living, respectively. The best predictor for the increase in Vd was actual bodyweight (0.31 L/kg). External validation showed that the model can adequately predict the pharmacokinetics in a prospective study. Data-sharing can help to successfully develop and validate population pharmacokinetic models in neonates. From the results it seems that both PNA and bodyweight are required to guide dosing of phenobarbital in term and preterm neonates. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
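The reported covariate relations can be sketched as follows. The linear-proportional functional form and the derived half-life are assumptions for illustration; the published NONMEM model's exact parameterization may differ.

```python
import math

CL_REF = 0.0091    # L/h, reference: birthweight 2.6 kg at postnatal day 4.5
VD_PER_KG = 0.31   # L per kg of actual bodyweight

def clearance_l_per_h(birthweight_kg, pna_days):
    # +36.7% per kg birthweight, +5.3% per postnatal day
    # (linear-proportional form assumed for this sketch)
    return (CL_REF
            * (1 + 0.367 * (birthweight_kg - 2.6))
            * (1 + 0.053 * (pna_days - 4.5)))

def half_life_h(birthweight_kg, pna_days, weight_kg):
    # one-compartment elimination half-life: t1/2 = ln(2) * Vd / CL
    vd = VD_PER_KG * weight_kg
    return math.log(2) * vd / clearance_l_per_h(birthweight_kg, pna_days)

cl_ref = clearance_l_per_h(2.6, 4.5)   # recovers the reference CL
t_half = half_life_h(2.6, 4.5, 2.6)    # hours, for the reference child
```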
Modeling Opponents in Adversarial Risk Analysis.
Rios Insua, David; Banks, David; Rios, Jesus
2016-04-01
Adversarial risk analysis has been introduced as a framework to deal with risks derived from intentional actions of adversaries. The analysis supports one of the decision-makers, who must forecast the actions of the other agents. Typically, this forecast must take account of random consequences resulting from the set of selected actions. The solution requires one to model the behavior of the opponents, which entails strategic thinking. The supported agent may face different kinds of opponents, who may use different rationality paradigms: for example, the opponent may behave randomly, seek a Nash equilibrium, perform level-k thinking, use mirroring, or employ prospect theory, among many other possibilities. We describe the appropriate analysis for these situations, and also show how to model the uncertainty about the rationality paradigm used by the opponent through a Bayesian model averaging approach, enabling a fully decision-theoretic solution. We also show how, as we observe an opponent's decision behavior, this approach allows learning about the validity of each of the rationality models used to predict his decisions by computing the models' posterior probabilities, which can be understood as a measure of their validity. We focus on simultaneous decision making by two agents. © 2015 Society for Risk Analysis.
Details of insect wing design and deformation enhance aerodynamic function and flight efficiency.
Young, John; Walker, Simon M; Bomphrey, Richard J; Taylor, Graham K; Thomas, Adrian L R
2009-09-18
Insect wings are complex structures that deform dramatically in flight. We analyzed the aerodynamic consequences of wing deformation in locusts using a three-dimensional computational fluid dynamics simulation based on detailed wing kinematics. We validated the simulation against smoke visualizations and digital particle image velocimetry on real locusts. We then used the validated model to explore the effects of wing topography and deformation, first by removing camber while keeping the same time-varying twist distribution, and second by removing camber and spanwise twist. The full-fidelity model achieved greater power economy than the uncambered model, which performed better than the untwisted model, showing that the details of insect wing topography and deformation are important aerodynamically. Such details are likely to be important in engineering applications of flapping flight.
Dynamics of stochastic SEIS epidemic model with varying population size
NASA Astrophysics Data System (ADS)
Liu, Jiamin; Wei, Fengying
2016-12-01
In this paper we introduce stochasticity into a deterministic model with susceptible-exposed-infected state variables and varying population size. Infected individuals can return to the susceptible compartment after recovery. We show that the stochastic model possesses a unique global solution by constructing a suitable Lyapunov function and using the generalized Itô formula. The densities of the exposed and infected tend to extinction when certain conditions hold. Moreover, conditions for persistence of the global solution are derived when the parameters satisfy some simple criteria. The stochastic model admits a stationary distribution around the endemic equilibrium, which means that the disease will prevail. To check the validity of the main results, numerical simulations are presented at the end of this contribution.
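The abstract does not give the SDE explicitly; as an illustration of the class of model described, here is an Euler-Maruyama simulation of an SEIS system in which white noise perturbs the mass-action transmission term. The functional form and all parameter values are hypothetical.

```python
import math
import random

def simulate_seis(S0, E0, I0, *, Lam, beta, eps, gamma, mu, sigma,
                  dt=1e-3, T=20.0, seed=42):
    """Euler-Maruyama for an SEIS model with noise on transmission.
    Recovered infecteds return to S (no immunity); population size varies
    through recruitment Lam and natural death rate mu."""
    rng = random.Random(seed)
    S, E, I = S0, E0, I0
    for _ in range(int(T / dt)):
        dW = rng.gauss(0.0, math.sqrt(dt))      # Brownian increment
        inc = beta * S * I                      # new infections (mass action)
        dS = (Lam - inc - mu * S + gamma * I) * dt - sigma * S * I * dW
        dE = (inc - (mu + eps) * E) * dt + sigma * S * I * dW
        dI = (eps * E - (mu + gamma) * I) * dt
        # clip at zero: densities must stay nonnegative
        S, E, I = max(S + dS, 0.0), max(E + dE, 0.0), max(I + dI, 0.0)
    return S, E, I

S, E, I = simulate_seis(0.9, 0.05, 0.05,
                        Lam=0.02, beta=0.6, eps=0.3,
                        gamma=0.2, mu=0.02, sigma=0.05)
```

With these (hypothetical) parameters the deterministic total population relaxes toward Lam/mu = 1, so the simulated densities stay bounded; persistence vs. extinction can then be explored by varying sigma and beta, mirroring the conditions discussed in the abstract.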
English, Sangeeta B.; Shih, Shou-Ching; Ramoni, Marco F.; Smith, Lois E.; Butte, Atul J.
2014-01-01
Though genome-wide technologies, such as microarrays, are widely used, data from these methods are considered noisy; there is still varied success in downstream biological validation. We report a method that increases the likelihood of successfully validating microarray findings using real time RT-PCR, including genes at low expression levels and with small differences. We use a Bayesian network to identify the most relevant sources of noise based on the successes and failures in validation for an initial set of selected genes, and then improve our subsequent selection of genes for validation based on eliminating these sources of noise. The network displays the significant sources of noise in an experiment, and scores the likelihood of validation for every gene. We show how the method can significantly increase validation success rates. In conclusion, in this study, we have successfully added a new automated step to determine the contributory sources of noise that determine successful or unsuccessful downstream biological validation. PMID:18790084
Li, Feng; Li, Wen-Xia; Zhao, Guo-Liang; Tang, Shi-Jun; Li, Xue-Jiao; Wu, Hong-Mei
2014-10-01
A series of 354 polyester-cotton blend fabrics was studied by near-infrared spectroscopy (NIRS), and NIR qualitative analysis models for different spectral characteristics were established by the partial least squares (PLS) method combined with a qualitative identification coefficient. Dyed polyester-cotton blend fabrics showed two types of spectra: normal spectra and slash spectra. Slash spectra lose their spectral characteristics, which are affected by the samples' dyes, pigments, matting agents, and other chemical additives. The recognition rate was low when a single model was established from the total sample set, so the samples were divided into two sets, a normal-spectrum set and a slash-spectrum set, and two NIR qualitative analysis models were established separately. After the models were established, the spectral region, pretreatment methods, and factors were optimized based on the validation results, improving the robustness and reliability of the models. The results showed that the recognition rate improved greatly when the models were established separately, reaching 99% when the two models were verified by internal validation. RC (relation coefficient of calibration) values of the normal-spectrum model and slash-spectrum model were 0.991 and 0.991, respectively; RP (relation coefficient of prediction) values were 0.983 and 0.984, respectively; SEC (standard error of calibration) values were 0.887 and 0.453, respectively; and SEP (standard error of prediction) values were 1.131 and 0.573, respectively. A further series of 150 samples was used to verify the normal-spectrum and slash-spectrum models, with recognition rates of 91.33% and 88.00%, respectively. This shows that the NIR qualitative analysis models can be used for identification of polyester-cotton blend fabrics at recycling sites.
Snorradóttir, Bergthóra S; Jónsdóttir, Fjóla; Sigurdsson, Sven Th; Másson, Már
2014-08-01
A model is presented for transdermal drug delivery from single-layered silicone matrix systems. The work is based on our previous results that, in particular, extend the well-known Higuchi model. Recently, we have introduced a numerical transient model describing matrix systems where the drug dissolution can be non-instantaneous. Furthermore, our model can describe complex interactions within a multi-layered matrix and the matrix to skin boundary. The power of the modelling approach presented here is further illustrated by allowing the possibility of a donor solution. The model is validated by a comparison with experimental data, as well as validating the parameter values against each other, using various configurations with donor solution, silicone matrix and skin. Our results show that the model is a good approximation to real multi-layered delivery systems. The model offers the ability of comparing drug release for ibuprofen and diclofenac, which cannot be analysed by the Higuchi model because the dissolution in the latter case turns out to be limited. The experiments and numerical model outlined in this study could also be adjusted to more general formulations, which enhances the utility of the numerical model as a design tool for the development of drug-loaded matrices for trans-membrane and transdermal delivery. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
Lindskog, Marcus; Winman, Anders; Juslin, Peter; Poom, Leo
2013-01-01
Two studies investigated the reliability and predictive validity of commonly used measures and models of Approximate Number System (ANS) acuity. Study 1 investigated reliability by both an empirical approach and a simulation of the maximum obtainable reliability under ideal conditions. Results showed that common measures of the Weber fraction (w) are reliable only when using a substantial number of trials, even under ideal conditions. Study 2 compared different purported measures of ANS acuity for convergent and predictive validity in a within-subjects design and evaluated an adaptive test using the ZEST algorithm. Results showed that the adaptive measure can reduce the number of trials needed to reach acceptable reliability. Only direct tests with non-symbolic numerosity discriminations of stimuli presented simultaneously were related to arithmetic fluency. This correlation remained when controlling for general cognitive ability and perceptual speed. Further, the purported indirect measure of ANS acuity in terms of the Numeric Distance Effect (NDE) was not reliable and showed no sign of predictive validity. The non-symbolic NDE for reaction time was significantly related to direct w estimates in a direction contrary to that expected. Easier stimuli were found to be more reliable, but only harder (7:8 ratio) stimuli contributed to predictive validity. PMID:23964256
Mansberger, Steven L.; Sheppler, Christina R.; McClure, Tina M.; VanAlstine, Cory L.; Swanson, Ingrid L.; Stoumbos, Zoey; Lambert, William E.
2013-01-01
Purpose: To report the psychometrics of the Glaucoma Treatment Compliance Assessment Tool (GTCAT), a new questionnaire designed to assess adherence with glaucoma therapy. Methods: We developed the questionnaire according to the constructs of the Health Belief Model. We evaluated the questionnaire using data from a cross-sectional study with focus groups (n = 20) and a prospective observational case series (n = 58). Principal components analysis provided assessment of construct validity. We repeated the questionnaire after 3 months for test-retest reliability. We evaluated predictive validity using an electronic dosing monitor as an objective measure of adherence. Results: Focus group participants provided 931 statements related to adherence, of which 88.7% (826/931) could be categorized into the constructs of the Health Belief Model. Perceived barriers accounted for 31% (288/931) of statements, cues-to-action 14% (131/931), susceptibility 12% (116/931), benefits 12% (115/931), severity 10% (91/931), and self-efficacy 9% (85/931). The principal components analysis explained 77% of the variance with five components representing Health Belief Model constructs. Reliability analyses showed acceptable Cronbach’s alphas (>.70) for four of the seven components (severity, susceptibility, barriers [eye drop administration], and barriers [discomfort]). Predictive validity was high, with several Health Belief Model questions significantly associated (P < .05) with adherence and a correlation coefficient (R2) of .40. Test-retest reliability was 90%. Conclusion: The GTCAT shows excellent repeatability, content, construct, and predictive validity for glaucoma adherence. A multisite trial is needed to determine whether the results can be generalized and whether the questionnaire accurately measures the effect of interventions to increase adherence. PMID:24072942
Larrosa, José Manuel; Moreno-Montañés, Javier; Martinez-de-la-Casa, José María; Polo, Vicente; Velázquez-Villoria, Álvaro; Berrozpe, Clara; García-Granero, Marta
2015-10-01
The purpose of this study was to develop and validate a multivariate predictive model to detect glaucoma by using a combination of retinal nerve fiber layer (RNFL), retinal ganglion cell-inner plexiform (GCIPL), and optic disc parameters measured using spectral-domain optical coherence tomography (OCT). Five hundred eyes from 500 participants and 187 eyes of another 187 participants were included in the study and validation groups, respectively. Patients with glaucoma were classified in five groups based on visual field damage. Sensitivity and specificity of all glaucoma OCT parameters were analyzed. Receiver operating characteristic curves (ROC) and areas under the ROC (AUC) were compared. Three predictive multivariate models (quantitative, qualitative, and combined) that used a combination of the best OCT parameters were constructed. A diagnostic calculator was created using the combined multivariate model. The best AUC parameters were: inferior RNFL, average RNFL, vertical cup/disc ratio, minimal GCIPL, and inferior-temporal GCIPL. Comparisons among the parameters did not show that the GCIPL parameters were better than those of the RNFL in early and advanced glaucoma. The highest AUC was in the combined predictive model (0.937; 95% confidence interval, 0.911-0.957) and was significantly (P = 0.0001) higher than the other isolated parameters considered in early and advanced glaucoma. The validation group displayed similar results to those of the study group. Best GCIPL, RNFL, and optic disc parameters showed a similar ability to detect glaucoma. The combined predictive formula improved the glaucoma detection compared to the best isolated parameters evaluated. The diagnostic calculator obtained good classification from participants in both the study and validation groups.
Patient and Societal Value Functions for the Testing Morbidities Index
Swan, John Shannon; Kong, Chung Yin; Lee, Janie M.; Akinyemi, Omosalewa; Halpern, Elkan F.; Lee, Pablo; Vavinskiy, Sergey; Williams, Olubunmi; Zoltick, Emilie S.; Donelan, Karen
2013-01-01
Background We developed preference-based and summated scale scoring for the Testing Morbidities Index (TMI) classification, which addresses short-term effects on quality of life from diagnostic testing before, during and after a testing procedure. Methods The two TMI value functions utilize multiattribute value techniques; one is patient-based and the other has a societal perspective. 206 breast biopsy patients and 466 (societal) subjects informed the models. Due to a lack of standard short-term methods for this application, we utilized the visual analog scale (VAS). Waiting trade-off (WTO) tolls provided an additional option for linear transformation of the TMI. We randomized participants to one of three surveys: the first derived weights for generic testing morbidity attributes and levels of severity with the VAS; a second developed VAS values and WTO tolls for linear transformation of the TMI to a death-healthy scale; the third addressed initial validation in a specific test (breast biopsy). 188 patients and 425 community subjects participated in initial validation, comparing direct VAS and WTO values to the TMI. Alternative TMI scoring as a non-preference summated scale was included, given evidence of construct and content validity. Results The patient model can use an additive function, while the societal model is multiplicative. Direct VAS and the VAS-scaled TMI were correlated across modeling groups (r=0.45 to 0.62) and agreement was comparable to the value function validation of the Health Utilities Index 2. Mean Absolute Difference (MAD) calculations showed a range of 0.07–0.10 in patients and 0.11–0.17 in subjects. MAD for direct WTO tolls compared to the WTO-scaled TMI varied closely around one quality-adjusted life day. Conclusions The TMI shows initial promise in measuring short-term testing-related health states. PMID:23689044
NASA Astrophysics Data System (ADS)
Tien Bui, Dieu; Pradhan, Biswajeet; Lofman, Owe; Revhaug, Inge; Dick, Oystein B.
2012-08-01
The objective of this study is to investigate a potential application of the Adaptive Neuro-Fuzzy Inference System (ANFIS) and the Geographic Information System (GIS) as a relatively new approach for landslide susceptibility mapping in the Hoa Binh province of Vietnam. Firstly, a landslide inventory map with a total of 118 landslide locations was constructed from various sources. The landslide inventory was then randomly split into a training dataset (70%, 82 landslide locations) for building the models and a validation dataset (the remaining 30%, 36 landslide locations). Ten landslide conditioning factors, namely slope, aspect, curvature, lithology, land use, soil type, rainfall, distance to roads, distance to rivers, and distance to faults, were considered in the analysis. The hybrid learning algorithm and six different membership functions (Gaussmf, Gauss2mf, Gbellmf, Sigmf, Dsigmf, Psigmf) were applied to generate the landslide susceptibility maps. The validation dataset, which was not used in the ANFIS modeling process, was used to validate the landslide susceptibility maps using the prediction-rate method. The validation results showed that the area under the curve (AUC) for the six ANFIS models varies from 0.739 to 0.848, indicating that the prediction capability depends on the membership functions used in the ANFIS. The models with Sigmf (0.848) and Gaussmf (0.825) showed the highest prediction capability. These results show that landslide susceptibility mapping in the Hoa Binh province of Vietnam using the ANFIS approach is viable, and the performance of the approach appears quite satisfactory, with the zones determined on the map being zones of relative susceptibility.
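The prediction-rate validation above reduces to computing an AUC on the held-out landslide locations. As a minimal illustration (with made-up susceptibility scores, not the ANFIS outputs of the study), the AUC can be computed as the Mann-Whitney probability that a landslide cell outranks a non-landslide cell:

```python
def auc(y_true, scores):
    """AUC as the probability that a positive (landslide) cell receives a
    higher susceptibility score than a negative (stable) cell."""
    pos = [s for s, y in zip(scores, y_true) if y == 1]
    neg = [s for s, y in zip(scores, y_true) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical validation cells: 1 = observed landslide, 0 = stable
y_true = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.2, 0.1]
print(round(auc(y_true, scores), 3))  # 0.889: 8 of 9 pairs ranked correctly
```

An AUC of 0.5 is chance-level ranking, which is why values of 0.739-0.848 indicate genuine, membership-function-dependent predictive skill.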
Li, Tsung-Lung; Lu, Wen-Cai
2015-10-05
In this work, Koopmans' theorem for Kohn-Sham density functional theory (KS-DFT) is applied to photoemission spectra (PES) modeling over the entire valence band. To examine the validity of this application, a PES modeling scheme is developed to facilitate a full valence-band comparison of theoretical PES spectra with experiments. The PES model incorporates the variations of electron ionization cross-sections over atomic orbitals and a linear dispersion of spectral broadening widths. KS-DFT simulations of pristine rubrene (5,6,11,12-tetraphenyltetracene) and the potassium-rubrene complex are performed, and the simulation results are used as the input to the PES models. Two conclusions are reached. First, decompositions of the theoretical total spectra show that the dissociated electron of the potassium mainly remains on the backbone and has little effect on the electronic structures of the phenyl side groups. This and other electronic-structure results deduced from the spectral decompositions have been qualitatively obtained with the anionic approximation to potassium-rubrene complexes. The qualitative validity of the anionic approximation is thus verified. Second, comparison of the theoretical PES with the experiments shows that the full-scale simulations combined with the PES modeling methods greatly enhance the agreement on spectral shapes over the anionic approximation. This agreement of the theoretical PES spectra with the experiments over the full valence band can be regarded, to some extent, as a collective validation of the application of Koopmans' theorem for KS-DFT to valence-band PES, at least, for this hydrocarbon and its alkali-adsorbed complex. Copyright © 2015 Elsevier B.V. All rights reserved.
Psychometric evaluation of the HIV symptom distress scale
Marc, Linda G.; Wang, Ming-Mei; Testa, Marcia A.
2012-01-01
The objective of this paper is to psychometrically validate the HIV Symptom Distress Scale (SDS), an instrument that can be used to measure overall HIV symptom distress or clinically relevant groups of HIV symptoms. A secondary data analysis was conducted using the Collaborations in HIV Outcomes Research U.S. Cohort (CHORUS). Inclusion criteria required study participants (N=5,521) to have a valid baseline measure of the AIDS Clinical Trial Group Symptom Distress Module, with an SF-12 or SF-36 completed on the same day. Psychometric testing assessed unidimensionality, internal consistency and factor structure using exploratory and confirmatory factor analysis, and structural equation modeling (SEM). Construct validity examined whether the new measure discriminates across clinical significance (CD4 and HIV viral load). Findings show that the SDS has high reliability (α=0.92), and SEM supports a correlated second-order factor model (physical and mental distress) with acceptable fit (GFI=0.88, AGFI=0.85, NFI=0.99, NNFI=0.99; RMSEA=0.06 [90% CI 0.06-0.06]; Satorra-Bentler scaled χ²=3274.20, p=0.0). Construct validity shows significant differences across categories for HIV-1 viral load (p<0.001) and CD4 (p<0.001). Differences in mean SDS scores exist across gender (p<0.001), race/ethnicity (p<0.05) and educational attainment (p<0.001). Hence, the HIV Symptom Distress Scale is a reliable and valid instrument, which measures overall HIV symptoms or clinically relevant groups of symptoms. PMID:22409246
Adjustment and validation of a simulation tool for CSP plants based on parabolic trough technology
NASA Astrophysics Data System (ADS)
García-Barberena, Javier; Ubani, Nora
2016-05-01
This work presents the validation process carried out for a simulation tool especially designed for the energy yield assessment of concentrating solar power (CSP) plants based on parabolic trough (PT) technology. The validation was carried out by comparing the model estimations with real data collected from a commercial CSP plant. In order to adjust the model parameters used for the simulation, 12 different days were selected from one year of operational data measured at the real plant. The 12 days were simulated and the estimations compared with the measured data, focusing on the most important variables from the simulation point of view: temperatures, pressures and mass flow of the solar field, gross power, parasitic power, and net power delivered by the plant. Based on these 12 days, the key parameters of the model were fixed and a whole year was simulated. The results of the complete-year simulation showed very good agreement for the gross and net total electric production, with biases of 1.47% and 2.02%, respectively. The results proved that the simulation software describes the real operation of the power plant with great accuracy and correctly reproduces its transient behavior.
Validating spatiotemporal predictions of an important pest of small grains.
Merrill, Scott C; Holtzer, Thomas O; Peairs, Frank B; Lester, Philip J
2015-01-01
Arthropod pests are typically managed using tactics applied uniformly to the whole field. Precision pest management applies tactics under the assumption that within-field pest pressure differences exist. This approach allows for more precise and judicious use of scouting resources and management tactics. For example, a portion of a field delineated as attractive to pests may be selected to receive extra monitoring attention. Likely because of the high variability in pest dynamics, little attention has been given to developing precision pest prediction models. Here, multimodel synthesis was used to develop a spatiotemporal model predicting the density of a key pest of wheat, the Russian wheat aphid, Diuraphis noxia (Kurdjumov). Spatially implicit and spatially explicit models were synthesized to generate spatiotemporal pest pressure predictions. Cross-validation and field validation were used to confirm model efficacy. A strong within-field signal depicting aphid density was confirmed with low prediction errors. Results show that the within-field model predictions will provide higher-quality information than would be provided by traditional field scouting. With improvements to the broad-scale model component, the model synthesis approach and resulting tool could improve pest management strategy and provide a template for the development of spatially explicit pest pressure models. © 2014 Society of Chemical Industry.
Validation of a new plasmapause model derived from CHAMP field-aligned current signatures
NASA Astrophysics Data System (ADS)
Heilig, Balázs; Darrouzet, Fabien; Vellante, Massimo; Lichtenberger, János; Lühr, Hermann
2014-05-01
Recently, a new model for the plasmapause location in the equatorial plane was introduced, based on magnetic field observations made by the CHAMP satellite in the topside ionosphere (Heilig and Lühr, 2013). The related signals are medium-scale field-aligned currents (MSFAC), with scale sizes of some 10 km. An empirical model for the MSFAC boundary was developed as a function of Kp and MLT, and was then compared to in situ plasmapause observations from IMAGE RPI. By correcting for the systematic displacement found in this comparison, and by taking into account the diurnal variation and Kp-dependence of the residuals, an empirical model of the plasmapause location based on MSFAC measurements from CHAMP was constructed. As a first step toward validation of the new plasmapause model we used in situ (Van Allen Probes/EMFISIS, Cluster/WHISPER) and ground-based (EMMA) plasma density observations. Preliminary results show good agreement in general between the model and observations; some observed differences stem from the different definitions of the plasmapause. A more detailed validation of the method can take place as soon as SWARM and VAP data become available. Heilig, B., and H. Lühr (2013) New plasmapause model derived from CHAMP field-aligned current signatures, Ann. Geophys., 31, 529-539, doi:10.5194/angeo-31-529-2013
NASA Astrophysics Data System (ADS)
Obeidat, Abdalla; Abu-Ghazleh, Hind
2018-06-01
Two intermolecular potential models of methanol (TraPPE-UA and OPLS-AA) have been used in order to examine their validity in reproducing selected structural, dynamical, and thermodynamic properties in unary and binary systems. These two models are combined with two water models (SPC/E and TIP4P). The temperature dependence of density, surface tension, diffusion and structural properties of the unary system has been computed over a specific range of temperatures (200-300 K). The very good performance of the TraPPE-UA potential model in predicting surface tension, diffusion, structure, and density of the unary system led us to examine its accuracy and performance in aqueous solution. In the binary system the same properties were examined using different mole fractions of methanol. The TraPPE-UA model combined with TIP4P water shows very good agreement with the experimental results for density and surface tension, whereas OPLS-AA combined with SPC/E water shows very good agreement with experimental results for the diffusion coefficients. Two different approaches have been used in calculating the diffusion coefficient in the mixture, namely the Einstein equation (EE) and the Green-Kubo (GK) method. Our results show the advantage of applying GK over EE in reproducing the experimental results and in saving computer time.
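The two routes to the diffusion coefficient mentioned above are, in 3D, the Einstein relation D = MSD(t)/(6t) at long times and the Green-Kubo integral D = (1/3)∫⟨v(0)·v(t)⟩dt. The sketch below is a numeric consistency check with idealized, assumed inputs (an exactly linear MSD and an exponential velocity autocorrelation function), not output from an actual MD run:

```python
import numpy as np

D_true = 2.3e-9                     # assumed diffusion coefficient, m^2/s

# Einstein route: D = slope of the mean-squared displacement / 6 (3D).
t = np.linspace(1e-12, 1e-9, 200)   # s
msd = 6.0 * D_true * t              # m^2, idealized as exactly linear
D_einstein = np.polyfit(t, msd, 1)[0] / 6.0

# Green-Kubo route: D = (1/3) * integral of <v(0).v(t)> dt.
# The VACF is assumed exponential, <v(0).v(t)> = 3*v0sq*exp(-t/tau),
# whose exact integral gives D = v0sq * tau.
tau = 1e-13                         # s, assumed decay time
v0sq = D_true / tau                 # chosen so both routes must agree
tv = np.linspace(0.0, 5e-12, 4000)  # fine grid resolving the decay
vacf = 3.0 * v0sq * np.exp(-tv / tau)
D_gk = np.sum((vacf[1:] + vacf[:-1]) * np.diff(tv)) / 2.0 / 3.0  # trapezoid
```

In a real simulation the MSD slope converges slowly at long times, while the VACF decays within picoseconds, which is one reason the GK route can be cheaper.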
Validation of A One-Dimensional Snow-Land Surface Model at the Sleepers River Watershed
NASA Astrophysics Data System (ADS)
Sun, Wen-Yih; Chern, Jiun-Dar
A one-dimensional land surface model, based on conservation of heat and water substance inside the soil and snow, is presented. To validate the model, a stand-alone experiment is carried out with five years of meteorological and hydrological observations collected from the NOAA-ARS Cooperative Snow Research Project (1966-1974) at the Sleepers River watershed in Danville, Vermont, U.S.A. The numerical results show that the model is capable of reproducing the observed soil temperature at different depths during the winter as well as the rapid increase of soil temperature after the snow melts in the spring. The model also simulates the density, temperature, thickness, and equivalent water depth of snow reasonably well. The numerical results are sensitive to the fresh snow density and the soil properties used in the model, which affect the heat exchange between the snowpack and the soil.
A three-dimensional, time-dependent model of Mobile Bay
NASA Technical Reports Server (NTRS)
Pitts, F. H.; Farmer, R. C.
1976-01-01
A three-dimensional, time-variant mathematical model for momentum and mass transport in estuaries was developed and its solution implemented on a digital computer. The mathematical model is based on state and conservation equations applied to turbulent flow of a two-component, incompressible fluid having a free surface. Thus, buoyancy effects caused by density differences between the fresh and salt water, inertia from the river and tidal currents, and differences in hydrostatic head are taken into account. The conservation equations, which are partial differential equations, are solved numerically by an explicit, one-step finite difference scheme and the solutions displayed numerically and graphically. To test the validity of the model, a specific estuary for which scaled model and experimental field data are available, Mobile Bay, was simulated. Comparisons of velocity, salinity and water level data show that the model is valid and a viable means of simulating the hydrodynamics and mass transport in non-idealized estuaries.
De Caluwé, Elien; Verbeke, Lize; van Aken, Marcel; van der Heijden, Paul T; De Clercq, Barbara
2018-02-22
The inclusion of a dimensional trait model of personality pathology in DSM-5 creates new opportunities for research on developmental antecedents of personality pathology. The traits of this model can be measured with the Personality Inventory for DSM-5 (PID-5), initially developed for adults, but also demonstrating validity in adolescents. The present study adds to the growing body of literature on the psychometrics of the PID-5, by examining its structure, validity, and reliability in 187 psychiatric-referred late adolescents and emerging adults. PID-5, Big Five Inventory, and Kidscreen self-reports were provided, and 88 non-clinical matched controls completed the PID-5. Results confirm the PID-5's five-factor structure, indicate adequate psychometric properties, and underscore the construct and criterion validity, showing meaningful associations with adaptive traits and quality of life. Results are discussed in terms of the PID-5's applicability in vulnerable populations who are going through important developmental transition phases, such as the step towards early adulthood.
NASA Astrophysics Data System (ADS)
Flocco, D.; Laxon, S. W.; Feltham, D. L.; Haas, C.
2011-12-01
The GlobIce project has provided high-resolution sea ice product datasets over the Arctic derived from SAR data in the ESA archive. The products are validated sea ice motion, deformation and fluxes through straits. GlobIce sea ice velocities, deformation data and sea ice concentration have been validated using buoy data provided by the International Arctic Buoy Program (IABP). Over 95% of the GlobIce and buoy data analysed fell within 5 km of each other. The GlobIce Eulerian image pair product showed a high correlation with buoy data. The sea ice concentration product was compared to SSM/I data. This work presents an evaluation of the validity of the GlobIce data. GlobIce sea ice velocity and deformation were compared with runs of the CICE sea ice model; in particular, the mass fluxes through the straits were used to investigate the correlation between the winter behaviour of sea ice and the sea ice state in the following summer.
Construct validity of the five factor borderline inventory.
DeShong, Hilary L; Lengel, Gregory J; Sauer-Zavala, Shannon E; O'Meara, Madison; Mullins-Sweatt, Stephanie N
2015-06-01
The Five Factor Borderline Inventory (FFBI) is a new self-report measure developed to assess traits of borderline personality disorder (BPD) from the perspective of the Five Factor Model of general personality. The current study sought first to replicate initial validity findings for the FFBI and then to further validate the FFBI against predispositional risk factors of the biosocial theory of BPD and commonly associated features of BPD (e.g., depression, low self-esteem), utilizing two samples of young adults (N = 87 and 85) who have engaged in nonsuicidal self-injury. The FFBI showed strong convergent and discriminant validity across two measures of the Five Factor Model and also correlated strongly with measures of impulsivity, emotion dysregulation, and BPD. The FFBI also related to two measures of early childhood emotional vulnerability and parental invalidation, and to measures of depression, anxiety, and self-esteem. Overall, the results provide support for the FFBI as a measure of BPD. © The Author(s) 2014.
Meat mixture detection in Iberian pork sausages.
Ortiz-Somovilla, V; España-España, F; De Pedro-Sanz, E J; Gaitán-Jurado, A J
2005-11-01
Five homogenized meat mixture treatments of Iberian (I) and/or Standard (S) pork were set up. Each treatment was analyzed by NIRS as a fresh product (N=75) and as dry-cured sausage (N=75). Spectra acquisition was carried out using DA 7000 equipment (Perten Instruments), obtaining a total of 750 spectra. Several absorption peaks and bands were selected as the most representative for homogenized dry-cured and fresh sausages. Discriminant analysis and mixture prediction equations were carried out based on the spectral data gathered. The best results using discriminant models were for fresh products, with 98.3% (calibration) and 60% (validation) correct classification. For dry-cured sausages 91.7% (calibration) and 80% (validation) of the samples were correctly classified. Models developed using mixture prediction equations showed SECV=4.7, r(2)=0.98 (calibration) and 73.3% of validation set were correctly classified for the fresh product. These values for dry-cured sausages were SECV=5.9, r(2)=0.99 (calibration) and 93.3% correctly classified for validation.
Numerical modeling of local scour around hydraulic structure in sandy beds by dynamic mesh method
NASA Astrophysics Data System (ADS)
Fan, Fei; Liang, Bingchen; Bai, Yuchuan; Zhu, Zhixia; Zhu, Yanjun
2017-10-01
Local scour, a non-negligible factor in hydraulic engineering, endangers the safety of hydraulic structures. In this work, a numerical model for simulating local scour was constructed, based on the open source code computational fluid dynamics model OpenFOAM. We consider both the bedload and suspended load sediment transport in the scour model and adopt the dynamic mesh method to simulate the evolution of the bed elevation. We use the finite area method to project data between the three-dimensional flow model and the two-dimensional (2D) scour model. We also improved the 2D sand slide method and added it to the scour model to correct the bed bathymetry when the bed slope angle exceeds the angle of repose. Moreover, to validate our scour model, we conducted and compared the results of three experiments with those of the developed model. The validation results show that our developed model can reliably simulate local scour.
Reduced-order modeling for hyperthermia control.
Potocki, J K; Tharp, H S
1992-12-01
This paper analyzes the feasibility of using reduced-order modeling techniques in the design of multiple-input, multiple-output (MIMO) hyperthermia temperature controllers. State space thermal models are created based upon a finite difference expansion of the bioheat transfer equation model of a scanned focused ultrasound system (SFUS). These thermal state space models are reduced using the balanced realization technique, and an order reduction criterion is tabulated. Results show that a drastic reduction in model dimension can be achieved using the balanced realization. The reduced-order model is then used to design a reduced-order optimal servomechanism controller for a two-scan input, two thermocouple output tissue model. In addition, a full-order optimal servomechanism controller is designed for comparison and validation purposes. These two controllers are applied to a variety of perturbed tissue thermal models to test the robust nature of the reduced-order controller. A comparison of the two controllers validates the use of open-loop balanced reduced-order models in the design of MIMO hyperthermia controllers.
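Balanced-realization reduction of the kind described above ranks states by their Hankel singular values, computed from the controllability and observability Gramians; states with tiny values can be truncated with little input-output error. The following sketch runs on an arbitrary stable 3-state system, not the bioheat/SFUS thermal model of the paper:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Arbitrary stable state-space model (illustrative only)
A = np.diag([-1.0, -2.0, -10.0])
B = np.array([[1.0], [1.0], [0.1]])
C = np.array([[1.0, 1.0, 0.1]])

# Gramians: A P + P A' = -B B'   and   A' Q + Q A = -C' C
P = solve_continuous_lyapunov(A, -B @ B.T)
Q = solve_continuous_lyapunov(A.T, -C.T @ C)

# Hankel singular values: square roots of the eigenvalues of P Q,
# sorted in decreasing order. Small trailing values justify truncation.
hsv = np.sqrt(np.sort(np.linalg.eigvals(P @ Q).real)[::-1])
print(np.round(hsv, 4))
```

Here the fast, weakly coupled third state yields a Hankel singular value orders of magnitude below the first, which is exactly the situation in which a drastic reduction in model dimension costs little accuracy.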
Modelling the permafrost extent on the Tibetan Plateau
NASA Astrophysics Data System (ADS)
Zhao, L.; Zou, D.; Sheng, Y.; Chen, J.; Wu, T.; Wu, J.; Pang, Q.; Wang, W.
2016-12-01
The Tibetan Plateau (TP) possesses the largest area of permafrost terrain in the mid- and low-latitude regions of the world. Permafrost plays a significant role in climatic, hydrological, and ecological systems, and has great influence on landform formation, slopes and engineering construction. A detailed database of the distribution and characteristics of permafrost is crucial for engineering planning, water resource management, ecosystem protection, climate modeling, and carbon cycle research. Although some permafrost distribution maps compiled in previous studies proved very useful, the limited data sources, ambiguous criteria, little validation, and the deficiency of high-quality spatial datasets leave a large uncertainty in mapping permafrost distribution. In this paper, a new permafrost map was generated mostly based on freezing and thawing indices from modified MODIS land surface temperatures (LSTs), and validated by various ground-based datasets. Soil thermal properties of five soil types across the TP were estimated according to an empirical equation and in situ observed soil properties (water content and bulk density) obtained during field surveys. Based on these data sets, the Temperature at the Top Of Permafrost (TTOP) model was applied to simulate permafrost distribution over the TP. The results show that permafrost, seasonally frozen ground, and unfrozen ground cover areas of 106.4×10⁴ km², 145.6×10⁴ km², and 2.9×10⁴ km², respectively. Ground-based observations of permafrost distribution across five investigated regions (IRs) and three highway transects (crossing the entire permafrost region from north to south) were used to validate the model. The validation shows that the kappa coefficient varies from 0.38 to 0.78 (average 0.57) at the five IRs and from 0.62 to 0.74 (average 0.68) within the three transects.
The TTOP modeling result identifies thawing regions more accurately than two earlier maps compiled in 1996 and 2006, and represents the detailed permafrost distribution better than other methods. Overall, the results provide a much more detailed map of permafrost distribution, which could be a promising basic data set for further research on permafrost on the Tibetan Plateau.
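The kappa coefficient used for validation above measures model-observation agreement corrected for the agreement expected by chance. A minimal sketch, using hypothetical frozen-ground labels rather than the authors' survey data:

```python
def cohens_kappa(a, b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(a)
    labels = set(a) | set(b)
    po = sum(x == y for x, y in zip(a, b)) / n                     # observed
    pe = sum((a.count(k) / n) * (b.count(k) / n) for k in labels)  # chance
    return (po - pe) / (1 - pe)

# Hypothetical labels: 'P' = permafrost, 'S' = seasonally frozen ground
model = ['P', 'P', 'S', 'S', 'P', 'S']
field = ['P', 'S', 'S', 'S', 'P', 'S']
print(round(cohens_kappa(model, field), 2))  # 0.67
```

Kappa of 1 is perfect agreement and 0 is chance-level, so the reported averages of 0.57 (regions) and 0.68 (transects) indicate moderate-to-substantial agreement.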
Cold spray nozzle mach number limitation
NASA Astrophysics Data System (ADS)
Jodoin, B.
2002-12-01
The classic one-dimensional isentropic flow approach is used along with a two-dimensional axisymmetric numerical model to show that the exit Mach number of a cold spray nozzle should be limited due to two factors. To this end, the two-dimensional model is validated with experimental data. Although both models show that the stagnation temperature is an important limiting factor, the one-dimensional approach fails to show how important the shock-particle interactions are at limiting the nozzle Mach number. It is concluded that for an air nozzle spraying solid powder particles, the nozzle Mach number should be set between 1.5 and 3 to limit the negative effects of the high stagnation temperature and of the shock-particle interactions.
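In the one-dimensional isentropic framework, the static temperature at the nozzle exit follows from the stagnation temperature via T0/T = 1 + ((γ-1)/2)M². A small sketch with illustrative values (not the paper's operating conditions) shows how strongly the exit static temperature drops as the Mach number rises, which is one reason high stagnation temperatures are needed at high Mach numbers:

```python
def static_temperature(T0, M, gamma=1.4):
    """Isentropic static temperature T at Mach M given stagnation T0 (K)."""
    return T0 / (1.0 + 0.5 * (gamma - 1.0) * M**2)

# Assumed stagnation temperature of 600 K for an air nozzle
for M in (1.5, 3.0, 5.0):
    print(M, round(static_temperature(600.0, M), 1))
# 1.5 -> 413.8 K, 3.0 -> 214.3 K, 5.0 -> 100.0 K
```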
A non-isotropic multiple-scale turbulence model
NASA Technical Reports Server (NTRS)
Chen, C. P.
1990-01-01
A newly developed non-isotropic multiple-scale turbulence model (MS/ASM) is described for complex flow calculations. This model focuses on the direct modeling of Reynolds stresses and utilizes split-spectrum concepts for modeling multiple-scale effects in turbulence. Validation studies on free shear flows, rotating flows and recirculating flows show that the current model performs significantly better than the single-scale k-epsilon model. The present model is relatively inexpensive in terms of CPU time, which makes it suitable for broad engineering flow applications.
Dynamical behavior of a stochastic SVIR epidemic model with vaccination
NASA Astrophysics Data System (ADS)
Zhang, Xinhong; Jiang, Daqing; Hayat, Tasawar; Ahmad, Bashir
2017-10-01
In this paper, we investigate the dynamical behavior of SVIR models in random environments. Firstly, we show that if R0s < 1, the disease in the stochastic autonomous SVIR model will die out exponentially, while if R̃0s > 1 the disease will prevail; moreover, the system admits a unique stationary distribution and is ergodic when R̃0s > 1. These results show that environmental white noise is helpful for disease control. Secondly, we give sufficient conditions for the existence of nontrivial periodic solutions to the stochastic SVIR model with periodic parameters. Finally, numerical simulations validate the analytical results.
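Stochastic epidemic models of this kind are typically simulated with an Euler-Maruyama scheme, in which the infection flow carries a multiplicative white-noise perturbation. The sketch below is a generic illustration: the parameter values, noise placement, and simplification that vaccinated individuals are fully protected are all assumptions, not the paper's exact system. Because every flow moves mass between compartments, the population total is conserved:

```python
import numpy as np

def svir_step(state, dt, dW, beta=0.5, phi=0.1, gamma=0.2, sigma=0.05):
    """One Euler-Maruyama step of a simple stochastic SVIR model.
    White noise (sigma * S * I * dW) perturbs the infection flow only."""
    S, V, I, R = state
    infect = beta * S * I * dt + sigma * S * I * dW   # S -> I
    vacc = phi * S * dt                               # S -> V
    recov = gamma * I * dt                            # I -> R
    return (S - infect - vacc, V + vacc, I + infect - recov, R + recov)

rng = np.random.default_rng(42)
state, dt = (0.9, 0.0, 0.1, 0.0), 0.01               # fractions of population
for _ in range(1000):
    state = svir_step(state, dt, rng.normal(0.0, np.sqrt(dt)))
```

With a large enough noise intensity sigma, sample paths can hit extinction even when the deterministic threshold exceeds 1, which is the sense in which environmental white noise helps disease control.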
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harrison, Thomas J.
2014-03-01
This report documents the efforts to perform dynamic model validation on the Eastern Interconnection (EI) by modeling governor deadband. An on-peak EI dynamic model is modified to represent governor deadband characteristics. Simulation results are compared with synchrophasor measurements collected by the Frequency Monitoring Network (FNET/GridEye). The comparison shows that by modeling governor deadband the simulated frequency response can closely align with the actual system response.
The Predicting Model of E-commerce Site Based on the Ideas of Curve Fitting
NASA Astrophysics Data System (ADS)
Tao, Zhang; Li, Zhang; Dingjun, Chen
Based on the idea of second-order (quadratic) curve fitting, the number and scale of Chinese e-commerce sites are analyzed. A preventing-increase model is introduced in this paper, and the model parameters are solved with the Matlab software. The validity of the preventing-increase model is confirmed through a numerical experiment, and the experimental results show that the precision of the preventing-increase model is satisfactory.
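Second-order curve fitting of this kind amounts to a least-squares quadratic fit of site counts against time. A minimal sketch with made-up yearly counts (not the paper's data, which was fitted in Matlab):

```python
import numpy as np

# Hypothetical counts of e-commerce sites over five years (illustrative)
years = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
sites = np.array([10.0, 14.0, 22.0, 34.0, 50.0])

coeffs = np.polyfit(years, sites, 2)   # least-squares quadratic fit
fitted = np.polyval(coeffs, years)
print(np.round(coeffs, 6))             # ~[2, 2, 10], i.e. 2x^2 + 2x + 10
```

Once the coefficients are estimated, evaluating the polynomial at future years gives the model's prediction, and comparing fitted against observed values gives the precision reported in such experiments.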
NASA Astrophysics Data System (ADS)
Shi, Tiezhu; Wang, Junjie; Chen, Yiyun; Wu, Guofeng
2016-10-01
Visible and near-infrared reflectance spectroscopy provides a beneficial tool for investigating soil heavy metal contamination. This study aimed to investigate mechanisms of soil arsenic prediction using laboratory-based soil and leaf spectra, compare the prediction of arsenic content using soil spectra with that using rice plant spectra, and determine whether the combination of both could improve the prediction of soil arsenic content. A total of 100 samples were collected and the reflectance spectra of soils and rice plants were measured using a FieldSpec3 portable spectroradiometer (350-2500 nm). After eliminating spectral outliers, the reflectance spectra were divided into calibration (n = 62) and validation (n = 32) data sets using the Kennard-Stone algorithm. A genetic algorithm (GA) was used to select useful spectral variables for soil arsenic prediction. Thereafter, the GA-selected spectral variables of the soil and leaf spectra were individually and jointly employed to calibrate partial least squares regression (PLSR) models using the calibration data set. The regression models were validated and compared using the independent validation data set. Furthermore, the correlation coefficients of soil arsenic against soil organic matter, leaf arsenic and leaf chlorophyll were calculated, and the important wavelengths for PLSR modeling were extracted. Results showed that arsenic prediction using the leaf spectra (coefficient of determination in validation, Rv2 = 0.54; root mean square error in validation, RMSEv = 12.99 mg kg-1; and residual prediction deviation in validation, RPDv = 1.35) was slightly better than that using the soil spectra (Rv2 = 0.42, RMSEv = 13.35 mg kg-1, and RPDv = 1.31). However, results also showed that the combined use of soil and leaf spectra yielded better arsenic prediction (Rv2 = 0.63, RMSEv = 11.94 mg kg-1, RPDv = 1.47) than either soil or leaf spectra alone.
Soil spectral bands near 480, 600, 670, 810, 1980, 2050 and 2290 nm, leaf spectral bands near 700, 890 and 900 nm in PLSR models were important wavelengths for soil arsenic prediction. Moreover, soil arsenic showed significantly positive correlations with soil organic matter (r = 0.62, p < 0.01) and leaf arsenic (r = 0.77, p < 0.01), and a significantly negative correlation with leaf chlorophyll (r = -0.67, p < 0.01). The results showed that the prediction of arsenic contents using soil and leaf spectra may be based on their relationships with soil organic matter and leaf chlorophyll contents, respectively. Although RPD of 1.47 was below the recommended RPD of >2 for soil analysis, arsenic prediction in agricultural soils can be improved by combining the leaf and soil spectra.
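The RPD figure of merit quoted above is simply the standard deviation of the observed values divided by the RMSE of prediction, which is why an RPD near 1 means the model barely beats predicting the mean. A minimal sketch with hypothetical arsenic values (mg kg⁻¹, not the study's data; the sample standard deviation with ddof=1 is assumed here, as definitions vary):

```python
import numpy as np

def rpd(y_obs, y_pred):
    """Residual prediction deviation: SD of observations over prediction RMSE."""
    y_obs, y_pred = np.asarray(y_obs), np.asarray(y_pred)
    rmse = np.sqrt(np.mean((y_obs - y_pred) ** 2))
    return np.std(y_obs, ddof=1) / rmse

obs = [10.0, 20.0, 30.0, 40.0, 50.0]
pred = [15.0, 25.0, 25.0, 45.0, 45.0]
print(round(rpd(obs, pred), 2))  # 3.16
```

By this yardstick the study's RPD of 1.47, while the best of the three models, remains well below the commonly recommended threshold of 2 for quantitative soil analysis.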
A KLM-circuit model of a multi-layer transducer for acoustic bladder volume measurements.
Merks, E J W; Borsboom, J M G; Bom, N; van der Steen, A F W; de Jong, N
2006-12-22
In a preceding study a new technique to non-invasively measure the bladder volume on the basis of non-linear wave propagation was validated. It was shown that the harmonic level generated at the posterior bladder wall increases for larger bladder volumes. A dedicated transducer is needed to further verify and implement this approach. This transducer must be capable of both transmission of high-pressure waves at fundamental frequency and reception of up to the third harmonic. For this purpose, a multi-layer transducer was constructed using a single element PZT transducer for transmission and a PVDF top-layer for reception. To determine feasibility of the multi-layer concept for bladder volume measurements, and to ensure optimal performance, an equivalent mathematical model on the basis of KLM-circuit modeling was generated. This model was obtained in two subsequent steps. Firstly, the PZT transducer was modeled without PVDF-layer attached by means of matching the model with the measured electrical input impedance. It was validated using pulse-echo measurements. Secondly, the model was extended with the PVDF-layer. The total model was validated by considering the PVDF-layer as a hydrophone on the PZT transducer surface and comparing the measured and simulated PVDF responses on a wave transmitted by the PZT transducer. The obtained results indicated that a valid model for the multi-layer transducer was constructed. The model showed feasibility of the multi-layer concept for bladder volume measurements. It also allowed for further optimization with respect to electrical matching and transmit waveform. Additionally, the model demonstrated the effect of mechanical loading of the PVDF-layer on the PZT transducer.
Copenhagen Psychosocial Questionnaire - A validation study using the Job Demand-Resources model
Hakanen, Jari J.; Westerlund, Hugo
2018-01-01
Aim: This study aims at investigating the nomological validity of the Copenhagen Psychosocial Questionnaire (COPSOQ II) by using an extension of the Job Demands-Resources (JD-R) model with aspects of work ability (WA) as outcome. Material and methods: The study design is cross-sectional. All staff working at public dental organizations in four regions of Sweden were invited to complete an electronic questionnaire (75% response rate, n = 1345). The questionnaire was based on COPSOQ II scales, the Utrecht Work Engagement Scale, and the one-item Work Ability Score in combination with a proprietary item. The data was analysed by structural equation modelling. Results: This study contributed to the literature by showing that: A) the scale characteristics were satisfactory and the constructs of the COPSOQ instrument could be integrated in the JD-R framework; B) job resources arising from leadership may be a driver of the two processes included in the JD-R model; and C) both the health impairment and motivational processes were associated with WA, and the results suggested that leadership may impact WA, in particular by securing task resources. Conclusion: The nomological validity of COPSOQ was supported, as the JD-R model can be operationalized by the instrument. This may be helpful for transferral of complex survey results and work life theories to practitioners in the field. PMID:29708998
Creating and validating cis-regulatory maps of tissue-specific gene expression regulation
O'Connor, Timothy R.; Bailey, Timothy L.
2014-01-01
Predicting which genomic regions control the transcription of a given gene is a challenge. We present a novel computational approach for creating and validating maps that associate genomic regions (cis-regulatory modules–CRMs) with genes. The method infers regulatory relationships that explain gene expression observed in a test tissue using widely available genomic data for ‘other’ tissues. To predict the regulatory targets of a CRM, we use cross-tissue correlation between histone modifications present at the CRM and expression at genes within 1 Mbp of it. To validate cis-regulatory maps, we show that they yield more accurate models of gene expression than carefully constructed control maps. These gene expression models predict observed gene expression from transcription factor binding in the CRMs linked to that gene. We show that our maps are able to identify long-range regulatory interactions and improve substantially over maps linking genes and CRMs based on either the control maps or a ‘nearest neighbor’ heuristic. Our results also show that it is essential to include CRMs predicted in multiple tissues during map-building, that H3K27ac is the most informative histone modification, and that CAGE is the most informative measure of gene expression for creating cis-regulatory maps. PMID:25200088
NASA Astrophysics Data System (ADS)
Fer, I.; Kelly, R.; Andrews, T.; Dietze, M.; Richardson, A. D.
2016-12-01
Our ability to forecast ecosystems is limited by how well we parameterize ecosystem models. Direct measurements for all model parameters are not always possible, and inverse estimation of these parameters through Bayesian methods is computationally costly. A solution to the computational challenges of Bayesian calibration is to approximate the posterior probability surface using a Gaussian process that emulates the complex process-based model. Here we report the integration of this method within an ecoinformatics toolbox, the Predictive Ecosystem Analyzer (PEcAn), and its application with two ecosystem models: SIPNET and ED2.1. SIPNET is a simple model, allowing application of MCMC methods both to the model itself and to its emulator. We used both approaches to assimilate flux (CO2 and latent heat), soil respiration, and soil carbon data from Bartlett Experimental Forest. This comparison showed that the emulator is reliable in terms of convergence to the posterior distribution. A 10,000-iteration MCMC analysis with SIPNET itself required more than two orders of magnitude greater computation time than an MCMC run of the same length with its emulator. This difference would be greater for a more computationally demanding model. Validation of the emulator-calibrated SIPNET against both the assimilated data and out-of-sample data showed improved fit and reduced uncertainty around model predictions. We next applied the validated emulator method to ED2, whose complexity precludes standard Bayesian data assimilation. We used the ED2 emulator to assimilate demographic data from a network of inventory plots. For validation of the calibrated ED2, we compared the model to results from the Empirical Succession Mapping (ESM), a novel synthesis of successional patterns in Forest Inventory and Analysis data.
Our results revealed that, while the pre-assimilation ED2 formulation cannot capture the emergent demographic patterns from the ESM analysis, constraining the model parameters controlling demographic processes increased the agreement considerably.
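The emulator idea described above can be illustrated with a minimal sketch (a hypothetical one-parameter model, not PEcAn's actual implementation): an expensive model's cost surface is evaluated at a small design of parameter values, a Gaussian-process interpolant is fit to those evaluations, and MCMC then runs against the cheap emulator instead of the model itself.

```python
import numpy as np

def rbf(a, b, ls=0.5):
    """Squared-exponential kernel matrix between 1-D point sets a and b."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

# Stand-in for an expensive process model: a negative-log-likelihood
# surface over a single parameter (purely illustrative).
def expensive_cost(theta):
    return (theta - 1.5) ** 2 / 0.5

# 1) Evaluate the expensive model at a small design of parameter values.
design = np.linspace(0.0, 3.0, 15)
evals = expensive_cost(design)

# 2) Fit a GP emulator (exact kernel regression with a small jitter term).
K = rbf(design, design) + 1e-6 * np.eye(design.size)
alpha = np.linalg.solve(K, evals)

def emulated_cost(theta):
    return float(rbf(np.atleast_1d(theta), design) @ alpha)

# 3) Run a Metropolis sampler against the cheap emulator, with the
#    parameter constrained to the emulator's training range [0, 3].
rng = np.random.default_rng(0)
theta, cost, samples = 0.5, emulated_cost(0.5), []
for _ in range(4000):
    prop = theta + rng.normal(0.0, 0.3)
    if 0.0 <= prop <= 3.0:
        prop_cost = emulated_cost(prop)
        if np.log(rng.random()) < cost - prop_cost:  # Metropolis accept
            theta, cost = prop, prop_cost
    samples.append(theta)

posterior_mean = float(np.mean(samples[1000:]))  # near 1.5, the true optimum
```

Each emulator evaluation is a 15-element dot product, so the MCMC loop costs almost nothing compared with re-running the model 4,000 times, which is the source of the two-orders-of-magnitude speedup reported above.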
A model of electron collecting plasma contactors
NASA Technical Reports Server (NTRS)
Davis, V. A.; Katz, I.; Mandell, M. J.; Parks, D. E.
1989-01-01
A model of plasma contactors is being developed, which can be used to describe electron collection in a laboratory test tank and in the space environment. To validate the model, laboratory experiments are conducted in which the source plasma is separated from the background plasma by a double layer. Model calculations show that an increase in ionization rate with potential produces a steep rise in collected current with increasing potential.
Quasi-steady aerodynamic model of clap-and-fling flapping MAV and validation using free-flight data.
Armanini, S F; Caetano, J V; Croon, G C H E de; Visser, C C de; Mulder, M
2016-06-30
Flapping-wing aerodynamic models that are accurate, computationally efficient and physically meaningful are challenging to obtain. Such models are essential to design flapping-wing micro air vehicles and to develop advanced controllers enhancing the autonomy of such vehicles. In this work, a phenomenological model is developed for the time-resolved aerodynamic forces on clap-and-fling ornithopters. The model is based on quasi-steady theory and accounts for inertial, circulatory, added-mass and viscous forces. It extends existing quasi-steady approaches by including a fling circulation factor to account for unsteady wing-wing interaction, by considering real platform-specific wing kinematics, and by covering different flight regimes. The model parameters are estimated from wind tunnel measurements conducted on a real test platform. Comparison to wind tunnel data shows that the model predicts the lift forces on the test platform accurately and accounts for wing-wing interaction effectively. Additionally, validation tests with real free-flight data show that lift forces can be predicted with considerable accuracy in different flight regimes. The complete parameter-varying model represents a wide range of flight conditions, is computationally simple, physically meaningful and requires few measurements. It is therefore potentially useful both for control design and for preliminary conceptual studies when developing new platforms.
In vitro burn model illustrating heat conduction patterns using compressed thermal papers.
Lee, Jun Yong; Jung, Sung-No; Kwon, Ho
2015-01-01
To date, heat conduction from heat sources to tissue has been estimated by complex mathematical modeling. In the present study, we developed an intuitive in vitro skin burn model that illustrates heat conduction patterns inside the skin. This was composed of tightly compressed thermal papers with compression frames. Heat flow through the model left a trace by changing the color of thermal papers. These were digitized and three-dimensionally reconstituted to reproduce the heat conduction patterns in the skin. For standardization, we validated K91HG-CE thermal paper using a printout test and bivariate correlation analysis. We measured the papers' physical properties and calculated the estimated depth of heat conduction using Fourier's equation. Through contact burns of 5, 10, 15, 20, and 30 seconds on porcine skin and our burn model using a heated brass comb, and comparing the burn wound and heat conduction trace, we validated our model. The heat conduction pattern correlation analysis (intraclass correlation coefficient: 0.846, p < 0.001) and the heat conduction depth correlation analysis (intraclass correlation coefficient: 0.93, p < 0.001) showed statistically significant high correlations between the porcine burn wound and our model. Our model showed good correlation with porcine skin burn injury and replicated its heat conduction patterns. © 2014 by the Wound Healing Society.
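The depth-of-conduction estimate mentioned above can be illustrated with the classical error-function solution of Fourier's heat equation for a semi-infinite solid suddenly heated at its surface. The material values in the example are generic soft-tissue-like numbers chosen for illustration, not the paper's measured thermal-paper properties.

```python
import math

def conduction_depth(k, rho, c, t, frac=0.1):
    """Depth at which the temperature rise in a semi-infinite solid,
    suddenly heated at its surface, falls to `frac` of the surface rise.
    Uses the error-function solution T_rise(x,t)/T_rise(0) = erfc(eta),
    eta = x / (2*sqrt(alpha*t)).  k [W/m/K], rho [kg/m^3], c [J/kg/K], t [s]."""
    alpha = k / (rho * c)               # thermal diffusivity [m^2/s]
    # Solve erfc(eta) = frac for eta by bisection (erfc is decreasing).
    lo, hi = 0.0, 5.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if math.erfc(mid) > frac:
            lo = mid
        else:
            hi = mid
    return 2.0 * lo * math.sqrt(alpha * t)

# Illustrative soft-tissue-like values: a 5 s contact reaches ~2 mm.
depth_5s = conduction_depth(k=0.5, rho=1050.0, c=3600.0, t=5.0)
```

The square-root dependence on contact time explains why the 10-30 s contact burns in the study penetrate progressively but not proportionally deeper.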
Verification and Validation of the k-kL Turbulence Model in FUN3D and CFL3D Codes
NASA Technical Reports Server (NTRS)
Abdol-Hamid, Khaled S.; Carlson, Jan-Renee; Rumsey, Christopher L.
2015-01-01
The implementation of the k-kL turbulence model using multiple computational fluid dynamics (CFD) codes is reported herein. The k-kL model is a two-equation turbulence model based on Abdol-Hamid's closure and Menter's modification to Rotta's two-equation model. Rotta shows that a reliable transport equation can be formed from the turbulent length scale L and the turbulent kinetic energy k. Rotta's equation is well suited for term-by-term modeling and displays useful features compared to other two-equation models. An important difference is that this formulation leads to the inclusion of higher-order velocity derivatives in the source terms of the scale equations. This can enhance the ability of Reynolds-averaged Navier-Stokes (RANS) solvers to simulate unsteady flows. The present report documents the formulation of the model as implemented in the CFD codes FUN3D and CFL3D. Methodology, verification, and validation examples are shown. Attached and separated flow cases are documented and compared with experimental data. The results show generally very good comparisons with canonical and experimental data, as well as matching results code-to-code. The results from this formulation are similar or better than results using the SST turbulence model.
Bounds on quantum collapse models from matter-wave interferometry: calculational details
NASA Astrophysics Data System (ADS)
Toroš, Marko; Bassi, Angelo
2018-03-01
We present a simple derivation of the interference pattern in matter-wave interferometry predicted by a class of quantum master equations. We apply the obtained formulae to the following collapse models: the Ghirardi-Rimini-Weber (GRW) model, the continuous spontaneous localization (CSL) model together with its dissipative (dCSL) and non-Markovian generalizations (cCSL), the quantum mechanics with universal position localization (QMUPL), and the Diósi-Penrose (DP) model. We discuss the separability of the dynamics of the collapse models along the three spatial directions, the validity of the paraxial approximation, and the amplification mechanism. We obtain analytical expressions both in the far field and near field limits. These results agree with those already derived in the Wigner function formalism. We compare the theoretical predictions with the experimental data from two recent matter-wave experiments: the 2012 far-field experiment of Juffmann T et al (2012 Nat. Nanotechnol. 7 297-300) and the 2013 Kapitza-Dirac-Talbot-Lau (KDTL) near-field experiment of Eibenberger et al (2013 Phys. Chem. Chem. Phys. 15 14696-700). We show the region of the parameter space for each collapse model that is excluded by these experiments. We show that matter-wave experiments provide model-insensitive bounds that are valid for a wide family of dissipative and non-Markovian generalizations.
A Review of Hemolysis Prediction Models for Computational Fluid Dynamics.
Yu, Hai; Engel, Sebastian; Janiga, Gábor; Thévenin, Dominique
2017-07-01
Flow-induced hemolysis is a crucial issue for many biomedical applications; in particular, it is an essential issue for the development of blood-transporting devices such as left ventricular assist devices and other types of blood pumps. In order to estimate red blood cell (RBC) damage in blood flows, many models have been proposed in the past. Most models have been validated by their respective authors, but the accuracy and the validity range of these models remain unclear. In this work, the most established hemolysis models compatible with computational fluid dynamics of full-scale devices are described and assessed by comparison against two selected reference experiments: a simple rheometric flow and a more complex hemodialytic flow through a needle. The quantitative comparisons show very large deviations in hemolysis predictions, depending on the model and model parameters. In light of the current results, two simple power-law models deliver the best compromise between computational efficiency and obtained accuracy. Finally, hemolysis has been computed in an axial blood pump. The reconstructed geometry of a HeartMate II shows that hemolysis occurs mainly at the tip and leading edge of the rotor blades, as well as at the leading edge of the diffusor vanes. © 2017 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
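The power-law family discussed above can be sketched as follows. The constants are one frequently quoted literature fit (such fits vary between blood sources and experiments, so treat them as illustrative), and the segment data are invented for illustration, not taken from any device.

```python
# Stress-based power-law hemolysis index: HI(%) = C * tau^A * t^B,
# with tau the scalar shear stress [Pa] and t the exposure time [s].
# Constants below are one frequently quoted literature fit; fits
# differ between blood sources, so they are illustrative only.
C, A, B = 3.62e-5, 2.416, 0.785

def hemolysis_index(tau_pa, t_s):
    """Percent hemoglobin release predicted by the power law."""
    return C * tau_pa**A * t_s**B

# Example: accumulate damage along a discretized path through a pump by
# summing per-segment contributions (a simple, common accumulation rule;
# the (tau, dt) pairs are invented).
segments = [(150.0, 0.01), (400.0, 0.002), (80.0, 0.05)]
total_hi = sum(hemolysis_index(tau, dt) for tau, dt in segments)
```

The strong stress exponent (A > 2) is why short, high-stress passages such as rotor-blade tips dominate the predicted damage even at millisecond exposure times.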
NASA Astrophysics Data System (ADS)
Choi, Sanghun; Choi, Jiwoong; Hoffman, Eric; Lin, Ching-Long
2016-11-01
To predict the proper relationship between airway resistance and regional airflow, we proposed a novel 1-D network model for airway resistance and acinar compliance. First, we extracted 1-D skeletons from inspiration images and generated 1-D trees of CT-unresolved airways with a volume filling method. We used Horsfield ordering with random heterogeneity to assign diameters to the generated 1-D trees. We employed a resistance model that accounts for kinetic energy and viscous dissipation (Model A). The resistance model was further coupled with a regional compliance model estimated from two static images (Model B). For validation, we applied both models to a healthy subject. The results showed that Model A failed to provide airflows consistent with air volume change, whereas Model B provided airflows consistent with air volume change. Since airflows should be regionally consistent with air volume change in subjects with normal airways, Model B was considered validated. We then applied Model B to severe asthmatic subjects. The results showed that regional airflows deviated significantly from air volume change due to airway narrowing. This implies that airway resistance plays a major role in determining regional airflows in patients with airway narrowing. Support for this study was provided, in part, by NIH Grants U01 HL114494, R01 HL094315, R01 HL112986, and S10 RR022421.
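A generic sketch of a per-segment pressure drop combining viscous (Poiseuille) dissipation with a kinetic-energy term, in the spirit of the Model A described above; the exact formulation, the air properties, and the example geometry are illustrative assumptions, not the authors' model.

```python
import math

MU_AIR = 1.8e-5   # dynamic viscosity of air [Pa s] (room conditions)
RHO_AIR = 1.2     # density of air [kg/m^3]

def segment_dp(flow, length, radius):
    """Pressure drop [Pa] across one airway segment for volumetric
    flow [m^3/s], segment length [m], and radius [m]: a fully developed
    Poiseuille (viscous) term plus a kinetic-energy (dynamic pressure) term."""
    area = math.pi * radius**2
    viscous = 8.0 * MU_AIR * length * flow / (math.pi * radius**4)
    kinetic = 0.5 * RHO_AIR * (flow / area) ** 2
    return viscous + kinetic

# Example: a trachea-like segment at a quiet-breathing flow rate.
dp = segment_dp(flow=5e-4, length=0.1, radius=0.009)
```

The r^-4 dependence of the viscous term shows why even modest airway narrowing, as in the asthmatic subjects above, sharply redistributes regional airflow.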
The contribution of CEOP data to the understanding and modeling of monsoon systems
NASA Technical Reports Server (NTRS)
Lau, William K. M.
2005-01-01
CEOP has contributed and will continue to provide integrated data sets from diverse platforms for better understanding of the water and energy cycles, and for validating models. In this talk, I will show examples of how CEOP has contributed to the formulation of a strategy for the study of the monsoon as a system. The CEOP data concept has led to the development of the CEOP Inter-Monsoon Studies (CIMS), which focuses on the identification of model bias and the improvement of model physics such as the diurnal and annual cycles. A multi-model validation project focusing on diurnal variability of the East Asian monsoon, using CEOP reference site data as well as CEOP integrated satellite data, is now ongoing. Similar validation projects in other monsoon regions are being started. Preliminary studies show that climate models have difficulties in simulating the diurnal signals of total rainfall, rainfall intensity, and frequency of occurrence, which have different peak hours depending on location. Furthermore, the modeled diurnal cycle of rainfall in monsoon regions tends to lead the observed cycle by about 2-3 hours. These model biases offer insight into the lack of, or poor representation of, key components of convective and stratiform rainfall. The CEOP data also stimulated studies to compare and contrast monsoon variability in different parts of the world. It was found that seasonal wind reversal, orographic effects, monsoon depressions, meso-scale convective complexes, and SST and land surface influences are common features in all monsoon regions. Strong intraseasonal variability is present in all monsoon regions. While there is a clear demarcation of onset, breaks, and withdrawal in the Asian and Australian monsoon regions associated with climatological intraseasonal variability, it is less clear in the American and African monsoon regions.
The examination of satellite and reference site data in monsoon regions has led to preliminary model experiments to study the impact of aerosols on monsoon variability. I will show examples of how the study of the dynamics of aerosol-water cycle interactions in the monsoon region can best be achieved using the CEOP data and modeling strategy.
RRegrs: an R package for computer-aided model selection with multiple regression models.
Tsiliki, Georgia; Munteanu, Cristian R; Seoane, Jose A; Fernandez-Lozano, Carlos; Sarimveis, Haralambos; Willighagen, Egon L
2015-01-01
Predictive regression models can be created with many different modelling approaches. Choices need to be made for data set splitting, cross-validation methods, specific regression parameters and best model criteria, as they all affect the accuracy and efficiency of the produced predictive models, thereby raising model reproducibility and comparison issues. Cheminformatics and bioinformatics make extensive use of predictive modelling and exhibit a need for standardization of these methodologies in order to assist model selection and speed up the process of predictive model development. A tool accessible to all users, irrespective of their statistical knowledge, would be valuable if it tested several simple and complex regression models and validation schemes, produced unified reports, and offered the option to be integrated into more extensive studies. Additionally, such a methodology should be implemented as a free programming package, in order to be continuously adapted and redistributed by others. We propose an integrated framework for creating multiple regression models, called RRegrs. The tool offers the option of ten simple and complex regression methods combined with repeated 10-fold and leave-one-out cross-validation. Methods include Multiple Linear regression, Generalized Linear Model with Stepwise Feature Selection, Partial Least Squares regression, Lasso regression, and Support Vector Machines Recursive Feature Elimination. The new framework is an automated, fully validated procedure which produces standardized reports to quickly oversee the impact of choices in modelling algorithms and assess the model and cross-validation results. The methodology was implemented as an open source R package, available at https://www.github.com/enanomapper/RRegrs, by reusing and extending the caret package. The universality of the new methodology is demonstrated using five standard data sets from different scientific fields.
Its efficiency in cheminformatics and QSAR modelling is shown with three use cases: proteomics data for surface-modified gold nanoparticles, nano-metal oxide descriptor data, and molecular descriptors for acute aquatic toxicity data. The results show that for all data sets RRegrs reports models with equal or better performance for both training and test sets than those reported in the original publications. Its good performance, as well as its adaptability in terms of parameter optimization, could make RRegrs a popular framework to assist the initial exploration of predictive models and, with that, the design of more comprehensive in silico screening applications. Graphical abstract: RRegrs is a computer-aided model selection framework for R multiple regression models; it is a fully validated procedure with application to QSAR modelling.
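The repeated 10-fold cross-validation workflow at the heart of RRegrs can be sketched as follows. This is a plain-NumPy stand-in for the R/caret machinery, with closed-form ridge regression in place of RRegrs' ten methods; the descriptor matrix and parameters are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data standing in for a QSAR-style descriptor matrix.
X = rng.normal(size=(120, 10))
y = X @ rng.normal(size=10) + rng.normal(scale=0.5, size=120)

def ridge_fit(X, y, lam):
    """Closed-form ridge regression coefficients (no intercept)."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

def repeated_kfold_r2(X, y, lam, k=10, repeats=3, seed=0):
    """Mean out-of-fold R^2 over repeated k-fold splits."""
    rng = np.random.default_rng(seed)
    n, scores = len(y), []
    for _ in range(repeats):
        idx = rng.permutation(n)
        for fold in np.array_split(idx, k):
            train = np.setdiff1d(idx, fold)
            beta = ridge_fit(X[train], y[train], lam)
            pred = X[fold] @ beta
            ss_res = np.sum((y[fold] - pred) ** 2)
            ss_tot = np.sum((y[fold] - y[fold].mean()) ** 2)
            scores.append(1.0 - ss_res / ss_tot)
    return float(np.mean(scores))

# Model selection: pick the candidate with the best cross-validated score.
cv_scores = {lam: repeated_kfold_r2(X, y, lam) for lam in (0.01, 1.0, 100.0)}
best_lam = max(cv_scores, key=cv_scores.get)
```

Scoring every candidate on identical repeated splits is what makes the comparison fair and reproducible, which is the standardization issue the abstract raises.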
Dima, Alexandra Lelia; Schulz, Peter Johannes
2017-01-01
Background The eHealth Literacy Scale (eHEALS) is a tool to assess consumers’ comfort and skills in using information technologies for health. Although evidence exists of reliability and construct validity of the scale, less agreement exists on structural validity. Objective The aim of this study was to validate the Italian version of the eHealth Literacy Scale (I-eHEALS) in a community sample with a focus on its structural validity, by applying psychometric techniques that account for item difficulty. Methods Two Web-based surveys were conducted among a total of 296 people living in the Italian-speaking region of Switzerland (Ticino). After examining the latent variables underlying the observed variables of the Italian scale via principal component analysis (PCA), fit indices for two alternative models were calculated using confirmatory factor analysis (CFA). The scale structure was examined via parametric and nonparametric item response theory (IRT) analyses accounting for differences between items regarding the proportion of answers indicating high ability. Convergent validity was assessed by correlations with theoretically related constructs. Results CFA showed a suboptimal model fit for both models. IRT analyses confirmed all items measure a single dimension as intended. Reliability and construct validity of the final scale were also confirmed. The contrasting results of factor analysis (FA) and IRT analyses highlight the importance of considering differences in item difficulty when examining health literacy scales. Conclusions The findings support the reliability and validity of the translated scale and its use for assessing Italian-speaking consumers’ eHealth literacy. PMID:28400356
Damping in Space Constructions
NASA Astrophysics Data System (ADS)
de Vreugd, Jan; de Lange, Dorus; Winters, Jasper; Human, Jet; Kamphues, Fred; Tabak, Erik
2014-06-01
Monolithic structures are often used in optomechanical designs for space applications to achieve high dimensional stability and to prevent possible backlash and friction phenomena. The capacity of monolithic structures to dissipate mechanical energy is, however, limited due to the high Q-factor, which might result in high stresses during dynamic launch loads such as random vibration, sine sweeps and shock. To reduce the Q-factor in space applications, the effect of constrained layer damping (CLD) is investigated in this work. To predict the damping increase, the CLD effect is implemented locally at the supporting struts in an existing FE model of an optical instrument. Numerical simulations show that the effect of local damping treatment in this instrument could reduce the vibrational stresses by 30-50%. Validation experiments on a simple structure showed good agreement between measured and predicted damping properties. This paper presents material characterization, material modeling, numerical implementation of damping models in finite element code, numerical results on space hardware and the results of validation experiments.
Amin, Elham E; van Kuijk, Sander M J; Joore, Manuela A; Prandoni, Paolo; Cate, Hugo Ten; Cate-Hoek, Arina J Ten
2018-06-04
Post-thrombotic syndrome (PTS) is a common chronic consequence of deep vein thrombosis that affects the quality of life and is associated with substantial costs. In clinical practice, it is not possible to predict the individual patient risk. We develop and validate a practical two-step prediction tool for PTS in the acute and sub-acute phase of deep vein thrombosis. Multivariable regression modelling with data from two prospective cohorts in which 479 (derivation) and 1,107 (validation) consecutive patients with objectively confirmed deep vein thrombosis of the leg, from thrombosis outpatient clinic of Maastricht University Medical Centre, the Netherlands (derivation) and Padua University hospital in Italy (validation), were included. PTS was defined as a Villalta score of ≥ 5 at least 6 months after acute thrombosis. Variables in the baseline model in the acute phase were: age, body mass index, sex, varicose veins, history of venous thrombosis, smoking status, provoked thrombosis and thrombus location. For the secondary model, the additional variable was residual vein obstruction. Optimism-corrected area under the receiver operating characteristic curves (AUCs) were 0.71 for the baseline model and 0.60 for the secondary model. Calibration plots showed well-calibrated predictions. External validation of the derived clinical risk scores was successful: AUC, 0.66 (95% confidence interval [CI], 0.63-0.70) and 0.64 (95% CI, 0.60-0.69). Individual risk for PTS in the acute phase of deep vein thrombosis can be predicted based on readily accessible baseline clinical and demographic characteristics. The individual risk in the sub-acute phase can be predicted with limited additional clinical characteristics. Schattauer GmbH Stuttgart.
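The AUCs reported above are concordance statistics. A minimal sketch of how such an AUC is computed from predicted risks and observed outcomes follows; the risks and outcomes in the example are invented for illustration, not study data.

```python
import numpy as np

def auc(risks, outcomes):
    """Concordance (ROC AUC): the probability that a randomly chosen case
    receives a higher predicted risk than a randomly chosen non-case,
    counting ties as half."""
    risks = np.asarray(risks, dtype=float)
    outcomes = np.asarray(outcomes, dtype=int)
    pos, neg = risks[outcomes == 1], risks[outcomes == 0]
    greater = (pos[:, None] > neg[None, :]).sum()   # concordant pairs
    ties = (pos[:, None] == neg[None, :]).sum()     # tied pairs
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical predicted PTS risks and observed outcomes:
risks = [0.05, 0.10, 0.20, 0.40, 0.55, 0.70, 0.80, 0.90]
outcomes = [0, 0, 0, 1, 0, 1, 1, 1]
example_auc = auc(risks, outcomes)   # 15 of 16 case/non-case pairs concordant
```

An AUC of 0.66, as in the external validation above, means roughly two thirds of such case/non-case pairs are ranked correctly by the score.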
Population-based validation of a German version of the Brief Resilience Scale
Wenzel, Mario; Stieglitz, Rolf-Dieter; Kunzler, Angela; Bagusat, Christiana; Helmreich, Isabella; Gerlicher, Anna; Kampa, Miriam; Kubiak, Thomas; Kalisch, Raffael; Lieb, Klaus; Tüscher, Oliver
2018-01-01
Smith and colleagues developed the Brief Resilience Scale (BRS) to assess the individual ability to recover from stress despite significant adversity. This study aimed to validate the German version of the BRS. We used data from a population-based (sample 1: n = 1,481) and a representative (sample 2: n = 1,128) sample of participants from the German general population (age ≥ 18) to assess reliability and validity. Confirmatory factor analyses (CFA) were conducted to compare one- and two-factorial models from previous studies with a method-factor model which specifically accounts for the wording of the items. Convergent validity was assessed by correlating BRS scores with mental health measures, coping, social support, and optimism. Reliability was good (α = .85, ω = .85 for both samples). The method-factor model showed excellent model fit (sample 1: χ2/df = 7.544; RMSEA = .07; CFI = .99; SRMR = .02; sample 2: χ2/df = 1.166; RMSEA = .01; CFI = 1.00; SRMR = .01), which was significantly better than the one-factor model (Δχ2(4) = 172.71, p < .001) or the two-factor model (Δχ2(3) = 31.16, p < .001). The BRS was positively correlated with well-being, social support, optimism, and the coping strategies active coping, positive reframing, acceptance, and humor. It was negatively correlated with somatic symptoms, anxiety and insomnia, social dysfunction, depression, and the coping strategies religion, denial, venting, substance use, and self-blame. To conclude, our results provide evidence for the reliability and validity of the German adaptation of the BRS as well as the unidimensional structure of the scale once method effects are accounted for. PMID:29438435
An Experimental and Numerical Study of a Supersonic Burner for CFD Model Development
NASA Technical Reports Server (NTRS)
Magnotti, G.; Cutler, A. D.
2008-01-01
A laboratory scale supersonic burner has been developed for validation of computational fluid dynamics models. Detailed numerical simulations were performed for the flow inside the combustor, and coupled with finite element thermal analysis to obtain more accurate outflow conditions. A database of nozzle exit profiles for a wide range of conditions of interest was generated to be used as boundary conditions for simulation of the external jet, or for validation of non-intrusive measurement techniques. A set of experiments was performed to validate the numerical results. In particular, temperature measurements obtained by using an infrared camera show that the computed heat transfer was larger than the measured value. Relaminarization in the convergent part of the nozzle was found to be responsible for this discrepancy, and further numerical simulations sustained this conclusion.
A turbulence model for iced airfoils and its validation
NASA Technical Reports Server (NTRS)
Shin, Jaiwon; Chen, Hsun H.; Cebeci, Tuncer
1992-01-01
A turbulence model based on the extension of the algebraic eddy viscosity formulation of Cebeci and Smith developed for two dimensional flows over smooth and rough surfaces is described for iced airfoils and validated for computed ice shapes obtained for a range of total temperatures varying from 28 to -15 F. The validation is made with an interactive boundary layer method which uses a panel method to compute the inviscid flow and an inverse finite difference boundary layer method to compute the viscous flow. The interaction between inviscid and viscous flows is established by the use of the Hilbert integral. The calculated drag coefficients compare well with recent experimental data taken at the NASA-Lewis Icing Research Tunnel (IRT) and show that, in general, the drag increase due to ice accretion can be predicted well and efficiently.
Du, Yongxing; Zhang, Lingze; Sang, Lulu; Wu, Daocheng
2016-04-29
In this paper, an Archimedean planar spiral antenna for the application of thermotherapy was designed. This type of antenna was chosen for its compact structure, flexible application and wide heating area. The temperature field generated by the use of this Two-armed Spiral Antenna in a muscle-equivalent phantom was simulated and subsequently validated by experimentation. First, the specific absorption rate (SAR) of the field was calculated using the Finite Element Method (FEM) by Ansoft's High Frequency Structure Simulation (HFSS). Then, the temperature elevation in the phantom was simulated by an explicit finite difference approximation of the bioheat equation (BHE). The temperature distribution was then validated by a phantom heating experiment. The results showed that this antenna had a good heating ability and a wide heating area. A comparison between the calculation and the measurement showed a fair agreement in the temperature elevation. The validated model could be applied for the analysis of electromagnetic-temperature distribution in phantoms during the process of antenna design or thermotherapy experimentation.
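The explicit finite-difference treatment of the bioheat equation mentioned above can be sketched in one dimension as follows. All parameter values are assumed phantom-like numbers chosen for illustration, not the paper's settings, and the SAR profile is a simple invented block rather than an HFSS result.

```python
import numpy as np

# Explicit finite-difference marching of the 1-D Pennes bioheat equation:
#   rho*c * dT/dt = k * d2T/dx2 + w_b*c_b*(T_a - T) + rho*SAR
k, rho, c = 0.5, 1000.0, 4000.0        # W/m/K, kg/m^3, J/kg/K (phantom-like)
w_b, c_b, T_a = 0.5, 4000.0, 37.0      # perfusion kg/m^3/s, J/kg/K, degC
dx = 1e-3                              # 1 mm grid spacing
dt = 0.4 * dx**2 * rho * c / (2 * k)   # safely below explicit stability limit

n = 101
T = np.full(n, 37.0)                   # uniform initial temperature
sar = np.zeros(n)
sar[45:56] = 50.0                      # localized absorbed power [W/kg]

for _ in range(2000):                  # time-march the explicit scheme
    lap = (np.roll(T, 1) - 2 * T + np.roll(T, -1)) / dx**2
    T = T + dt / (rho * c) * (k * lap + w_b * c_b * (T_a - T) + rho * sar)
    T[0] = T[-1] = 37.0                # fixed-temperature boundaries

peak_rise = float(T.max() - 37.0)      # hottest point sits under the source
```

The stability restriction dt <= dx^2*rho*c/(2k) is the practical cost of the explicit scheme: halving the grid spacing quadruples the number of time steps.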
Murumkar, Prashant R; Giridhar, Rajani; Yadav, Mange Ram
2008-04-01
A set of 29 benzothiadiazepine hydroxamates having selective tumor necrosis factor-alpha converting enzyme inhibitory activity were used to compare the quality and predictive power of 3D-quantitative structure-activity relationship, comparative molecular field analysis, and comparative molecular similarity indices models for the atom-based, centroid/atom-based, data-based, and docked conformer-based alignment. Removal of two outliers from the initial training set of molecules improved the predictivity of models. Among the 3D-quantitative structure-activity relationship models developed using the above four alignments, the database alignment provided the optimal predictive comparative molecular field analysis model for the training set with cross-validated r(2) (q(2)) = 0.510, non-cross-validated r(2) = 0.972, standard error of estimates (s) = 0.098, and F = 215.44 and the optimal comparative molecular similarity indices model with cross-validated r(2) (q(2)) = 0.556, non-cross-validated r(2) = 0.946, standard error of estimates (s) = 0.163, and F = 99.785. These models also showed the best test set prediction for six compounds with predictive r(2) values of 0.460 and 0.535, respectively. The contour maps obtained from 3D-quantitative structure-activity relationship studies were appraised for activity trends for the molecules analyzed. The comparative molecular similarity indices models exhibited good external predictivity as compared with that of comparative molecular field analysis models. The data generated from the present study helped us to further design and report some novel and potent tumor necrosis factor-alpha converting enzyme inhibitors.
Validation of a school-based amblyopia screening protocol in a kindergarten population.
Casas-Llera, Pilar; Ortega, Paula; Rubio, Inmaculada; Santos, Verónica; Prieto, María J; Alio, Jorge L
2016-08-04
To validate a school-based amblyopia screening program model by comparing its outcomes to those of a state-of-the-art conventional ophthalmic clinic examination in a kindergarten population of children between the ages of 4 and 5 years. An amblyopia screening protocol, which consisted of visual acuity measurement using Lea charts, ocular alignment test, ocular motility assessment, and stereoacuity with TNO random-dot test, was performed at school in a pediatric 4- to 5-year-old population by qualified healthcare professionals. The outcomes were validated in a selected group by a conventional ophthalmologic examination performed in a fully equipped ophthalmologic center. The ophthalmologic evaluation was used to confirm whether or not children were correctly classified by the screening protocol. The sensitivity and specificity of the test model to detect amblyopia were established. A total of 18,587 4- to 5-year-old children were subjected to the amblyopia screening program during the 2010-2011 school year. A population of 100 children were selected for the ophthalmologic validation screening. A sensitivity of 89.3%, specificity of 93.1%, positive predictive value of 83.3%, negative predictive value of 95.7%, positive likelihood ratio of 12.86, and negative likelihood ratio of 0.12 was obtained for the amblyopia screening validation model. The amblyopia screening protocol model tested in this investigation shows high sensitivity and specificity in detecting high-risk cases of amblyopia compared to the standard ophthalmologic examination. This screening program may be highly relevant for amblyopia screening at schools.
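The accuracy metrics reported above all derive from a single 2x2 confusion table. The counts below are reconstructed for illustration (they reproduce the reported values on the 100-child validation sample, but the abstract does not state them), and the derivation shows how each metric follows from the table.

```python
# Screening-program accuracy metrics from a 2x2 confusion table.
# Counts are hypothetical reconstructions, not stated in the abstract.
tp, fn = 25, 3     # amblyopia cases flagged / missed by the protocol
fp, tn = 5, 67     # non-cases flagged / correctly passed

sensitivity = tp / (tp + fn)               # 0.893: cases detected
specificity = tn / (tn + fp)               # 0.931: non-cases passed
ppv = tp / (tp + fp)                       # 0.833: flagged who are cases
npv = tn / (tn + fn)                       # 0.957: passed who are non-cases
lr_pos = sensitivity / (1 - specificity)   # ~12.86: positive likelihood ratio
lr_neg = (1 - sensitivity) / specificity   # ~0.12: negative likelihood ratio
```

Unlike sensitivity and specificity, the predictive values depend on the prevalence in the validated group, which is why a school-wide rollout can see a different PPV than this 100-child sample.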
Brasil, Pedro Emmanuel Alvarenga Americano do; Xavier, Sergio Salles; Holanda, Marcelo Teixeira; Hasslocher-Moreno, Alejandro Marcel; Braga, José Ueleres
2016-01-01
With the globalization of Chagas disease, inexperienced health care providers may have difficulties in identifying which patients should be examined for this condition. This study aimed to develop and validate a diagnostic clinical prediction model for chronic Chagas disease. This diagnostic cohort study included consecutive volunteers suspected to have chronic Chagas disease. The clinical information was blindly compared to serological test results, and a logistic regression model was fit and validated. The development cohort included 602 patients, and the validation cohort included 138 patients. The Chagas disease prevalence was 19.9%. Sex, age, referral from blood bank, history of living in a rural area, recognizing the kissing bug, systemic hypertension, number of siblings with Chagas disease, number of relatives with a history of stroke, ECG with low voltage, anterosuperior divisional block, pathologic Q wave, right bundle branch block, and any kind of extrasystole were included in the final model. Calibration and discrimination in the development and validation cohorts (ROC AUC 0.904 and 0.912, respectively) were good. Sensitivity and specificity analyses showed that specificity reaches at least 95% above the predicted 43% risk, while sensitivity is at least 95% below the predicted 7% risk. Net benefit decision curves favor the model across all thresholds. A nomogram and an online calculator (available at http://shiny.ipec.fiocruz.br:3838/pedrobrasil/chronic_chagas_disease_prediction/) were developed to aid in individual risk estimation.
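The discrimination statistic reported above (ROC AUC) equals the probability that a randomly chosen case receives a higher predicted risk than a randomly chosen non-case. A minimal pure-Python sketch of that Mann-Whitney formulation, with toy numbers rather than the study's data:

```python
def roc_auc(labels, scores):
    """ROC AUC as the fraction of (case, non-case) pairs in which
    the case is ranked higher; ties count half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: two cases, two non-cases
print(roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # → 0.75
```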
de Boer, Pieter T; Frederix, Geert W J; Feenstra, Talitha L; Vemer, Pepijn
2016-09-01
Transparent reporting of validation efforts of health economic models gives stakeholders better insight into the credibility of model outcomes. In this study we reviewed recently published studies on seasonal influenza and early breast cancer in order to gain insight into the reporting of model validation efforts in the overall health economic literature. A literature search was performed in Pubmed and Embase to retrieve health economic modelling studies published between 2008 and 2014. Reporting on model validation was evaluated by checking for the word validation, and by using AdViSHE (Assessment of the Validation Status of Health Economic decision models), a tool containing a structured list of relevant items for validation. Additionally, we contacted corresponding authors to ask whether more validation efforts were performed other than those reported in the manuscripts. A total of 53 studies on seasonal influenza and 41 studies on early breast cancer were included in our review. The word validation was used in 16 studies (30%) on seasonal influenza and 23 studies (56%) on early breast cancer; however, in a minority of studies, this referred to a model validation technique. Fifty-seven percent of seasonal influenza studies and 71% of early breast cancer studies reported one or more validation techniques. Cross-validation of study outcomes was found most often. A limited number of studies reported on model validation efforts, although good examples were identified. Author comments indicated that more validation techniques were performed than those reported in the manuscripts. Although validation is deemed important by many researchers, this is not reflected in the reporting habits of health economic modelling studies. Systematic reporting of validation efforts would be desirable to further enhance decision makers' confidence in health economic models and their outcomes.
Wind tunnel validation of AeroDyn within LIFES50+ project: imposed Surge and Pitch tests
NASA Astrophysics Data System (ADS)
Bayati, I.; Belloli, M.; Bernini, L.; Zasso, A.
2016-09-01
This paper presents the first set of results of the steady and unsteady wind tunnel tests, performed at the Politecnico di Milano wind tunnel, on a 1/75 rigid scale model of the DTU 10 MW wind turbine, within the LIFES50+ project. The aim of these tests is the validation of the open-source code AeroDyn, developed at NREL. Numerical and experimental steady results are compared in terms of thrust and torque coefficients, showing good agreement, as are unsteady measurements gathered with a two-degree-of-freedom test rig capable of imposing displacements at the base of the model, providing the surge and pitch motion of the floating offshore wind turbine (FOWT) scale model. The measurements of the unsteady test configuration are compared with results from the AeroDyn/Dynin module, which implements the generalized dynamic wake (GDW) model. The numerical and experimental comparison showed similar behaviour in terms of nonlinear hysteresis; however, some discrepancies are reported herein that require further data analysis and interpretation of the aerodynamic integral quantities, with special attention to the physics of the unsteady phenomenon.
Gil Solsona, R; Boix, C; Ibáñez, M; Sancho, J V
2018-03-01
The aim of this study was to use an untargeted UHPLC-HRMS-based metabolomics approach allowing discrimination between almonds based on their origin and variety. Samples were homogenised, extracted with ACN:H2O (80:20) containing 0.1% HCOOH and injected in a UHPLC-QTOF instrument in both positive and negative ionisation modes. Principal component analysis (PCA) was performed to ensure the absence of outliers. Partial least squares-discriminant analysis (PLS-DA) was employed to create and validate the models for country (with five different compounds) and variety (with 20 features), showing more than 95% accuracy. Additional samples were injected and the model was evaluated with blind samples, with more than 95% of samples being correctly classified using both models. MS/MS experiments were carried out to tentatively elucidate the highlighted marker compounds (pyranosides, peptides or amino acids, among others). This study has shown the potential of high-resolution mass spectrometry to perform and validate classification models, also providing information concerning the identification of the unexpected biomarkers which showed the highest discriminant power.
Koyama, Suguru; Xia, Jimmy; Leblanc, Brian W; Gu, Jianwen Wendy; Saab, Carl Y
2018-05-08
Paresthesia, a common feature of epidural spinal cord stimulation (SCS) for pain management, presents a challenge to the double-blind study design. Although sub-paresthesia SCS has been shown to be effective in alleviating pain, empirical criteria for sub-paresthesia SCS have not been established and its basic mechanisms of action at supraspinal levels are unknown. We tested our hypothesis that sub-paresthesia SCS attenuates behavioral signs of neuropathic pain in a rat model, and modulates pain-related theta (4-8 Hz) power of the electroencephalogram (EEG), a previously validated correlate of spontaneous pain in rodent models. Results show that sub-paresthesia SCS attenuates thermal hyperalgesia and power amplitude in the 3-4 Hz range, consistent with clinical data showing significant yet modest analgesic effects of sub-paresthesia SCS in humans. Therefore, we present evidence for anti-nociceptive effects of sub-paresthesia SCS in a rat model of neuropathic pain and further validate EEG theta power as a reliable 'biosignature' of spontaneous pain.
Near infrared spectroscopy for prediction of antioxidant compounds in the honey.
Escuredo, Olga; Seijo, M Carmen; Salvador, Javier; González-Martín, M Inmaculada
2013-12-15
The selection of antioxidant variables in honey is considered for the first time using the near infrared (NIR) spectroscopic technique. A total of 60 honey samples were used to develop the calibration models using the modified partial least squares (MPLS) regression method, and 15 samples were used for external validation. Calibration models on the honey matrix for the estimation of phenols, flavonoids, vitamin C, antioxidant capacity (DPPH), oxidation index, and copper using near infrared (NIR) spectroscopy were satisfactorily obtained. These models were optimised by cross-validation, and the best model was evaluated according to the multiple correlation coefficient (RSQ), standard error of cross-validation (SECV), ratio performance deviation (RPD), and root mean standard error (RMSE) in the prediction set. These statistics suggested that the equations developed could be used for rapid determination of antioxidant compounds in honey. This work shows that near infrared spectroscopy can be considered a rapid tool for the nondestructive measurement of antioxidant constituents such as phenols, flavonoids, vitamin C, and copper, as well as the antioxidant capacity of honey. Copyright © 2013 Elsevier Ltd. All rights reserved.
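Two of the evaluation statistics named above have simple definitions that are easy to sketch: RMSE is the root mean squared prediction error, and RPD is the standard deviation of the reference values divided by that error (population SD used here). The numbers below are made up for illustration, not the honey data:

```python
def rmse(y_true, y_pred):
    """Root mean squared error of predictions against reference values."""
    return (sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)) ** 0.5

def rpd(y_true, y_pred):
    """Ratio performance deviation: SD of reference values / RMSE."""
    mean = sum(y_true) / len(y_true)
    sd = (sum((t - mean) ** 2 for t in y_true) / len(y_true)) ** 0.5
    return sd / rmse(y_true, y_pred)

# Illustrative reference and predicted values only:
ref = [1.0, 2.0, 3.0, 4.0]
pred = [1.1, 1.9, 3.2, 3.8]
print(round(rmse(ref, pred), 3), round(rpd(ref, pred), 2))  # → 0.158 7.07
```

A higher RPD means the prediction error is small relative to the natural spread of the constituent, which is why it is a common quality criterion for NIR calibrations.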
Indirect Validation of Probe Speed Data on Arterial Corridors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eshragh, Sepideh; Young, Stanley E.; Sharifi, Elham
This study aimed to estimate the accuracy of probe speed data on arterial corridors on the basis of roadway geometric attributes and functional classification. It was assumed that functional class (medium and low) along with other road characteristics (such as weighted average of the annual average daily traffic, average signal density, average access point density, and average speed) were available as correlation factors to estimate the accuracy of probe traffic data. This study tested these factors as predictors of the fidelity of probe traffic data by using the results of an extensive validation exercise. This study showed strong correlations between these geometric attributes and the accuracy of probe data when they were assessed by using average absolute speed error. Linear models were regressed to existing data to estimate appropriate models for medium- and low-type arterial corridors. The proposed models for medium- and low-type arterials were validated further on the basis of the results of a slowdown analysis. These models can be used to predict the accuracy of probe data indirectly in medium and low types of arterial corridors.
Kook, Seung Hee; Varni, James W
2008-06-02
The Pediatric Quality of Life Inventory (PedsQL) is a child self-report and parent proxy-report instrument designed to assess health-related quality of life (HRQOL) in healthy and ill children and adolescents. It has been translated into over 70 international languages and proposed as a valid and reliable pediatric HRQOL measure. This study aimed to assess the psychometric properties of the Korean translation of the PedsQL 4.0 Generic Core Scales. Following the guidelines for linguistic validation, the original US English scales were translated into Korean and cognitive interviews were administered. The field testing responses of 1425 school children and adolescents and 1431 parents to the Korean version of PedsQL 4.0 Generic Core Scales were analyzed utilizing confirmatory factor analysis and the Rasch model. Consistent with studies using the US English instrument and other translation studies, score distributions were skewed toward higher HRQOL in a predominantly healthy population. Confirmatory factor analysis supported a four-factor and a second order-factor model. The analysis using the Rasch model showed that person reliabilities are low, item reliabilities are high, and the majority of items fit the model's expectation. The Rasch rating scale diagnostics showed that PedsQL 4.0 Generic Core Scales in general have the optimal number of response categories, but category 4 (almost always a problem) is somewhat problematic for the healthy school sample. The agreements between child self-report and parent proxy-report were moderate. The results demonstrate the feasibility, validity, item reliability, item fit, and agreement between child self-report and parent proxy-report of the Korean version of PedsQL 4.0 Generic Core Scales for school population health research in Korea. 
However, the utilization of the Korean version of the PedsQL 4.0 Generic Core Scales for healthy school populations needs to consider low person reliability, ceiling effects and cultural differences, and further validation studies on Korean clinical samples are required.
Improving Risk Adjustment for Mortality After Pediatric Cardiac Surgery: The UK PRAiS2 Model.
Rogers, Libby; Brown, Katherine L; Franklin, Rodney C; Ambler, Gareth; Anderson, David; Barron, David J; Crowe, Sonya; English, Kate; Stickley, John; Tibby, Shane; Tsang, Victor; Utley, Martin; Witter, Thomas; Pagel, Christina
2017-07-01
Partial Risk Adjustment in Surgery (PRAiS), a risk model for 30-day mortality after children's heart surgery, has been used by the UK National Congenital Heart Disease Audit to report expected risk-adjusted survival since 2013. This study aimed to improve the model by incorporating additional comorbidity and diagnostic information. The model development dataset was all procedures performed between 2009 and 2014 in all UK and Ireland congenital cardiac centers. The outcome measure was death within each 30-day surgical episode. Model development followed an iterative process of clinical discussion and development and assessment of models using logistic regression under 25 × 5 cross-validation. Performance was measured using Akaike information criterion, the area under the receiver-operating characteristic curve (AUC), and calibration. The final model was assessed in an external 2014 to 2015 validation dataset. The development dataset comprised 21,838 30-day surgical episodes, with 539 deaths (mortality, 2.5%). The validation dataset comprised 4,207 episodes, with 97 deaths (mortality, 2.3%). The updated risk model included 15 procedural, 11 diagnostic, and 4 comorbidity groupings, and nonlinear functions of age and weight. Performance under cross-validation was: median AUC of 0.83 (range, 0.82 to 0.83), median calibration slope and intercept of 0.92 (range, 0.64 to 1.25) and -0.23 (range, -1.08 to 0.85) respectively. In the validation dataset, the AUC was 0.86 (95% confidence interval [CI], 0.82 to 0.89), and the calibration slope and intercept were 1.01 (95% CI, 0.83 to 1.18) and 0.11 (95% CI, -0.45 to 0.67), respectively, showing excellent performance. A more sophisticated PRAiS2 risk model for UK use was developed with additional comorbidity and diagnostic information, alongside age and weight as nonlinear variables. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Factor complexity of crash occurrence: An empirical demonstration using boosted regression trees.
Chung, Yi-Shih
2013-12-01
Factor complexity is a characteristic of traffic crashes. This paper proposes a novel method, boosted regression trees (BRT), to investigate the complex and nonlinear relationships in high-variance traffic crash data. The Taiwanese 2004-2005 single-vehicle motorcycle crash data are used to demonstrate the utility of BRT. Traditional logistic regression and classification and regression tree (CART) models are also used to compare estimation results and external validities. Both the in-sample cross-validation and out-of-sample validation results show that an increase in tree complexity provides improved, although diminishing, classification performance, indicating a limited factor complexity of single-vehicle motorcycle crashes. Crucial variables, including geographic, temporal, and sociodemographic factors, explain some fatal crashes. Relatively unique fatal crashes are better approximated by interactive terms, especially combinations of behavioral factors. BRT models generally provide better transferability than conventional logistic regression and CART models. This study also discusses the implications of the results for devising safety policies. Copyright © 2012 Elsevier Ltd. All rights reserved.
Predictive models of safety based on audit findings: Part 1: Model development and reliability.
Hsiao, Yu-Lin; Drury, Colin; Wu, Changxu; Paquet, Victor
2013-03-01
This two-part study was aimed at the quantitative validation of safety audit tools as predictors of safety performance, as we were unable to find prior studies that tested audit validity against safety outcomes. An aviation maintenance domain was chosen for this work, as both audits and safety outcomes are currently prescribed and regulated there. In Part 1, we developed a Human Factors/Ergonomics classification framework, based on the HFACS model (Shappell and Wiegmann, 2001a,b), for the human errors detected by audits, because merely counting audit findings did not predict future safety. The framework was tested for measurement reliability using four participants, two of whom classified errors on 1238 audit reports. Kappa values leveled out after about 200 audits at between 0.5 and 0.8 for different tiers of error categories. This showed sufficient reliability to proceed with prediction validity testing in Part 2. Copyright © 2012 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Scaling field data to calibrate and validate moderate spatial resolution remote sensing models
Baccini, A.; Friedl, M.A.; Woodcock, C.E.; Zhu, Z.
2007-01-01
Validation and calibration are essential components of nearly all remote sensing-based studies. In both cases, ground measurements are collected and then related to the remote sensing observations or model results. In many situations, and particularly in studies that use moderate resolution remote sensing, a mismatch exists between the sensor's field of view and the scale at which in situ measurements are collected. The use of in situ measurements for model calibration and validation, therefore, requires a robust and defensible method to spatially aggregate ground measurements to the scale at which the remotely sensed data are acquired. This paper examines this challenge and specifically considers two different approaches for aggregating field measurements to match the spatial resolution of moderate spatial resolution remote sensing data: (a) landscape stratification; and (b) averaging of fine spatial resolution maps. The results show that an empirically estimated stratification based on a regression tree method provides a statistically defensible and operational basis for performing this type of procedure.
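The first aggregation approach described above (stratify the landscape, then average ground measurements within each stratum) reduces to a grouped mean. A minimal sketch with hypothetical stratum labels and plot values, not the study's data:

```python
from collections import defaultdict

def stratified_means(values, strata):
    """Average in situ measurements within each landscape stratum."""
    sums, counts = defaultdict(float), defaultdict(int)
    for v, s in zip(values, strata):
        sums[s] += v
        counts[s] += 1
    return {s: sums[s] / counts[s] for s in sums}

# Hypothetical field plots falling in two strata of one coarse pixel:
plot_biomass = [120.0, 140.0, 80.0]
plot_stratum = ["forest", "forest", "shrub"]
print(stratified_means(plot_biomass, plot_stratum))
# → {'forest': 130.0, 'shrub': 80.0}
```

The per-stratum means can then be area-weighted to the footprint of the moderate-resolution pixel being calibrated or validated.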
Katz, Andrea C; Hee, Danelle; Hooker, Christine I; Shankman, Stewart A
2017-10-03
In Section III of the DSM-5, the American Psychiatric Association (APA) proposes a pathological personality trait model of personality disorders. The recommended assessment instrument is the Personality Inventory for the DSM-5 (PID-5), an empirically derived scale that assesses personality pathology along five domains and 25 facets. Although the PID-5 demonstrates strong convergent validity with other personality measures, no study has examined whether it identifies traits that run in families, another important step toward validating the DSM-5's dimensional model. Using a family study method, we investigated familial associations of PID-5 domain and facet scores in 195 families, examining associations between parents and offspring and across siblings. The Psychoticism, Antagonism, and Detachment domains showed significant familial aggregation, as did facets of Negative Affect and Disinhibition. Results are discussed in the context of personality pathology and family study methodology. The results also help validate the PID-5, given the familial nature of personality traits.
The Development of Statistics Textbook Supported with ICT and Portfolio-Based Assessment
NASA Astrophysics Data System (ADS)
Hendikawati, Putriaji; Yuni Arini, Florentina
2016-02-01
This research was development research aimed at producing a Statistics textbook model supported with information and communication technology (ICT) and Portfolio-Based Assessment. The book was designed for college students of mathematics, to improve students' ability in mathematical connection and communication. There were three stages in this research, i.e. define, design, and develop. The textbook consisted of 10 chapters, each containing an introduction, core material, examples, and exercises. The development phase began with an initial design of the book (draft 1), which was then validated by experts. Revision of draft 1 produced draft 2, which underwent a limited readability test. Revision of draft 2 then produced draft 3, which was piloted on a small sample to produce a valid textbook model. The data were analysed with descriptive statistics. The analysis showed that the Statistics textbook model supported with ICT and Portfolio-Based Assessment is valid and meets the criteria of practicality.
Computer-based personality judgments are more accurate than those made by humans
Youyou, Wu; Kosinski, Michal; Stillwell, David
2015-01-01
Judging others’ personalities is an essential skill in successful social living, as personality is a key driver behind people’s interactions, behaviors, and emotions. Although accurate personality judgments stem from social-cognitive skills, developments in machine learning show that computer models can also make valid judgments. This study compares the accuracy of human and computer-based personality judgments, using a sample of 86,220 volunteers who completed a 100-item personality questionnaire. We show that (i) computer predictions based on a generic digital footprint (Facebook Likes) are more accurate (r = 0.56) than those made by the participants’ Facebook friends using a personality questionnaire (r = 0.49); (ii) computer models show higher interjudge agreement; and (iii) computer personality judgments have higher external validity when predicting life outcomes such as substance use, political attitudes, and physical health; for some outcomes, they even outperform the self-rated personality scores. Computers outpacing humans in personality judgment presents significant opportunities and challenges in the areas of psychological assessment, marketing, and privacy. PMID:25583507
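The accuracies quoted above (r = 0.56 vs. r = 0.49) are Pearson correlations between judged and self-reported trait scores. Computing r is straightforward; this is an illustrative sketch with toy numbers, not the study's data:

```python
def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Perfectly linear toy data gives r = 1.0
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))  # → 1.0
```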
The bottom-up approach to integrative validity: a new perspective for program evaluation.
Chen, Huey T
2010-08-01
The Campbellian validity model and the traditional top-down approach to validity have had a profound influence on research and evaluation. That model includes the concepts of internal and external validity and, within that model, the preeminence of internal validity as demonstrated in the top-down approach. Evaluators and researchers have, however, increasingly recognized that over-emphasis on internal validity reduces an evaluation's usefulness and contributes to the gulf between academic and practical communities regarding interventions. This article examines the limitations of the Campbellian validity model and the top-down approach and provides a comprehensive alternative, known as the integrative validity model for program evaluation. The integrative validity model includes the concept of viable validity, which is predicated on a bottom-up approach to validity. This approach better reflects stakeholders' evaluation views and concerns, makes external validity workable, and is therefore a preferable alternative for evaluation of health promotion/social betterment programs. The integrative validity model and the bottom-up approach enable evaluators to meet scientific and practical requirements, help advance external validity, and gain a new perspective on methods. The new perspective also furnishes a balanced view of credible evidence and offers an alternative perspective on funding. Copyright (c) 2009 Elsevier Ltd. All rights reserved.
Shetty, N; Løvendahl, P; Lund, M S; Buitenhuis, A J
2017-01-01
The present study explored the effectiveness of Fourier transform mid-infrared (FT-IR) spectral profiles as a predictor for dry matter intake (DMI) and residual feed intake (RFI). The partial least squares regression method was used to develop the prediction models. The models were validated using different external test sets: one randomly leaving out 20% of the records (validation A), a second randomly leaving out 20% of cows (validation B), and a third (for DMI prediction models) randomly leaving out one cow (validation C). The data included 1,044 records from 140 cows; 97 were Danish Holstein and 43 Danish Jersey. Results showed better accuracies for validation A compared with the other validation methods. Milk yield (MY) contributed largely to DMI prediction; MY explained 59% of the variation, and the validated model error root mean square error of prediction (RMSEP) was 2.24 kg. The model was improved by adding live weight (LW) as an additional predictor trait, where the accuracy R2 increased from 0.59 to 0.72 and the error RMSEP decreased from 2.24 to 1.83 kg. When only the milk FT-IR spectral profile was used in DMI prediction, a lower prediction ability was obtained, with R2 = 0.30 and RMSEP = 2.91 kg. However, once the spectral information was added, along with MY and LW as predictors, model accuracy improved: R2 increased to 0.81 and RMSEP decreased to 1.49 kg. Prediction accuracies of RFI changed throughout lactation. The RFI prediction model for the early-lactation stage was better than those across lactation or for the mid- and late-lactation stages, with R2 = 0.46 and RMSEP = 1.70. The most important spectral wavenumbers contributing to the DMI and RFI prediction models included fat, protein, and lactose peaks. Comparable prediction results were obtained when using infrared-predicted fat, protein, and lactose instead of full spectra, indicating that FT-IR spectral data do not add significant new information to improve DMI and RFI prediction models. 
Therefore, in practice, if full FT-IR spectral data are not stored, it is possible to achieve similar DMI or RFI prediction results based on standard milk control data. For DMI, the milk fat region was responsible for the major variation in milk spectra; for RFI, the major variation in milk spectra was within the milk protein region. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Blagus, Rok; Lusa, Lara
2015-11-04
Prediction models are used in clinical research to develop rules that can be used to accurately predict the outcome of the patients based on some of their characteristics. They represent a valuable tool in the decision making process of clinicians and health policy makers, as they enable them to estimate the probability that patients have or will develop a disease, will respond to a treatment, or that their disease will recur. The interest devoted to prediction models in the biomedical community has been growing in the last few years. Often the data used to develop the prediction models are class-imbalanced as only few patients experience the event (and therefore belong to minority class). Prediction models developed using class-imbalanced data tend to achieve sub-optimal predictive accuracy in the minority class. This problem can be diminished by using sampling techniques aimed at balancing the class distribution. These techniques include under- and oversampling, where a fraction of the majority class samples are retained in the analysis or new samples from the minority class are generated. The correct assessment of how the prediction model is likely to perform on independent data is of crucial importance; in the absence of an independent data set, cross-validation is normally used. While the importance of correct cross-validation is well documented in the biomedical literature, the challenges posed by the joint use of sampling techniques and cross-validation have not been addressed. We show that care must be taken to ensure that cross-validation is performed correctly on sampled data, and that the risk of overestimating the predictive accuracy is greater when oversampling techniques are used. Examples based on the re-analysis of real datasets and simulation studies are provided. 
We identify some results from the biomedical literature where cross-validation was performed incorrectly and where we therefore expect that the performance of oversampling techniques was heavily overestimated.
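The pitfall described above is oversampling before splitting, which lets duplicates of a minority-class sample appear in both the training and test folds of cross-validation. The correct order is: split first, then oversample only the training part of each fold. A minimal sketch of the correct pattern (hypothetical helper names; random duplication stands in for the oversampling technique):

```python
import random

def oversample(indices, labels, rng):
    """Balance classes by randomly duplicating minority-class indices."""
    pos = [i for i in indices if labels[i] == 1]
    neg = [i for i in indices if labels[i] == 0]
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    extra = [rng.choice(minority) for _ in range(len(majority) - len(minority))]
    return indices + extra

def cv_splits_with_oversampling(labels, k=5, seed=0):
    """Yield (train, test) index lists; oversampling touches training data only."""
    rng = random.Random(seed)
    idx = list(range(len(labels)))
    rng.shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i, test in enumerate(folds):
        train = [j for f, fold in enumerate(folds) if f != i for j in fold]
        yield oversample(train, labels, rng), test  # split first, then oversample

labels = [1] * 8 + [0] * 12  # imbalanced toy outcome
splits = list(cv_splits_with_oversampling(labels))
```

Because duplication happens after the split, no copy of a test sample can leak into the training set, so the cross-validated accuracy estimate stays honest.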
Johnson, Philip L.; Shekhar, Anantha
2013-01-01
Panic disorder (PD) is a severe anxiety disorder characterized by susceptibility to induction of panic attacks by subthreshold interoceptive stimuli such as sodium lactate infusions or hypercapnia induction. Here we review a model of panic vulnerability in rats involving chronic inhibition of GABAergic tone in the dorsomedial/perifornical hypothalamic (DMH/PeF) region that produces enhanced anxiety and freezing responses in fearful situations, as well as a vulnerability to displaying acute panic-like increases in cardioexcitation, respiratory activity, and "flight"-associated behavior following subthreshold interoceptive stimuli that do not elicit panic responses in control rats. This model of panic vulnerability was developed over 15 years ago and has provided an excellent preclinical model with robust face, predictive, and construct validity. The model recapitulates many of the phenotypic features of panic attacks associated with human panic disorder (face validity), including greater sensitivity to panicogenic stimuli, demonstrated by sudden onset of anxiety and autonomic activation following administration of a subthreshold stimulus (i.e., one that does not usually induce panic in healthy subjects) such as sodium lactate, CO2, or yohimbine. The construct validity is supported by several key findings: DMH/PeF neurons regulate behavioral and autonomic components of a normal adaptive panic response, and have been implicated in eliciting panic-like responses in humans. Additionally, patients with PD have deficits in central GABA activity, and pharmacological restoration of central GABA activity prevents panic attacks, consistent with this model. The model's predictive validity is demonstrated not only by panic responses to several panic-inducing agents that elicit panic in patients with PD, but also by positive therapeutic responses to clinically used agents, such as alprazolam and antidepressants, that attenuate panic attacks in patients. 
More importantly, this model has been utilized to discover novel drugs such as group II metabotropic glutamate agonists and a new class of translocator protein enhancers of GABA, both of which subsequently showed anti-panic properties in clinical trials. All of these data suggest that this preparation provides a strong preclinical model of some forms of human panic disorders. PMID:22484112
Applying the Health Belief Model to college students' health behavior
Kim, Hak-Seon; Ahn, Joo
2012-01-01
The purpose of this research was to investigate how university students' nutrition beliefs influence their health behavioral intention. This study used an online survey engine (Qualtrics.com) to collect data from college students. Out of 253 questionnaires collected, 251 (99.2%) were used for the statistical analysis. Confirmatory Factor Analysis (CFA) revealed that seven dimensions, "Nutrition Confidence," "Susceptibility," "Severity," "Barrier," "Benefit," "Behavioral Intention to Eat Healthy Food," and "Behavioral Intention to do Physical Activity," had construct validity; Cronbach's alpha coefficient and composite reliabilities were tested for item reliability. The results validate that objective nutrition knowledge was a good predictor of college students' nutrition confidence. The results also clearly showed that two direct measures were significant predictors of behavioral intentions, as hypothesized. Perceived benefit of, and perceived barrier to, eating healthy food had significant effects on behavioral intentions and were valid measurements to use in determining behavioral intentions. These findings can enhance the extant literature on the universal applicability of the model and serve as useful references for further investigations of the validity of the model within other health care or foodservice settings and for other health behavioral categories. PMID:23346306
Development and validation of the short-form Adolescent Health Promotion Scale.
Chen, Mei-Yen; Lai, Li-Ju; Chen, Hsiu-Chih; Gaete, Jorge
2014-10-26
Health-promoting lifestyle choices of adolescents are closely related to current and subsequent health status. However, parsimonious yet reliable and valid screening tools are scarce. The original 40-item adolescent health promotion (AHP) scale was developed by our research team and has been applied to measure adolescent health-promoting behaviors worldwide. The aim of our study was to examine the psychometric properties of a newly developed short-form version of the AHP (AHP-SF) including tests of its reliability and validity. The study was conducted in nine middle and high schools in southern Taiwan. Participants were 814 adolescents randomly divided into two subgroups with equal size and homogeneity of baseline characteristics. The first subsample (calibration sample) was used to modify and shorten the factorial model while the second subsample (validation sample) was utilized to validate the result obtained from the first one. The psychometric testing of the AHP-SF included internal reliability of McDonald's omega and Cronbach's alpha, convergent validity, discriminant validity, and construct validity with confirmatory factor analysis (CFA). The results of the CFA supported a six-factor model and 21 items were retained in the AHP-SF with acceptable model fit. For the discriminant validity test, results indicated that adolescents with lower AHP-SF scores were more likely to be overweight or obese, skip breakfast, and spend more time watching TV and playing computer games. The AHP-SF also showed excellent internal consistency with a McDonald's omega of 0.904 (Cronbach's alpha 0.905) in the calibration group. The current findings suggest that the AHP-SF is a valid and reliable instrument for the evaluation of adolescent health-promoting behaviors. Primary health care providers and clinicians can use the AHP-SF to assess these behaviors and evaluate the outcome of health promotion programs in the adolescent population.
NASA Astrophysics Data System (ADS)
Radenković, Lazar; Nešić, Ljubiša
2018-05-01
The main contribution of this paper is a didactic adaptation of the biomechanical analysis of the three main lifts in powerlifting (squat, bench press, deadlift). We used simple models that can easily be understood by undergraduate college students to estimate the values of various physical quantities during powerlifting. Specifically, we showed how plate choice affects the bench press and estimated spinal loads and torques at the hip and knee during lifting. Theoretical calculations showed good agreement with experimental data, supporting the validity of the models.
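In the simplest static case, the joint-torque estimates described above reduce to force times moment arm; a minimal sketch, with the barbell mass and moment arm invented for illustration:

```python
G = 9.81  # gravitational acceleration, m/s^2

def joint_torque(mass_kg, moment_arm_m):
    """Static torque (N*m) about a joint from a weight held at a
    horizontal moment arm: tau = m * g * d."""
    return mass_kg * G * moment_arm_m

# Invented example: a 100 kg barbell acting 0.25 m from the hip joint.
tau_hip = joint_torque(100.0, 0.25)
```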
Predicting Pilot Behavior in Medium Scale Scenarios Using Game Theory and Reinforcement Learning
NASA Technical Reports Server (NTRS)
Yildiz, Yildiray; Agogino, Adrian; Brat, Guillaume
2013-01-01
Effective automation is critical in achieving the capacity and safety goals of the Next Generation Air Traffic System. Unfortunately, creating integration and validation tools for such automation is difficult, as the interactions between automation and its human counterparts are complex and unpredictable. This validation becomes even more difficult as we integrate wide-reaching technologies that affect the behavior of different decision makers in the system, such as pilots, controllers and airlines. While overt short-term behavior changes can be explicitly modeled with traditional agent modeling systems, subtle behavior changes caused by the integration of new technologies may snowball into larger problems and be very hard to detect. To overcome these obstacles, we show how the integration of new technologies can be validated by learning behavior models based on goals. In this framework, human participants are not modeled explicitly. Instead, their goals are modeled, and through reinforcement learning their actions are predicted. The main advantage of this approach is that modeling is done within the context of the entire system, allowing for accurate modeling of all participants as they interact as a whole. In addition, such an approach allows for efficient trade studies and feasibility testing on a wide range of automation scenarios. The goal of this paper is to test that such an approach is feasible. To do this, we implement this approach using a simple discrete-state learning system on a scenario where 50 aircraft need to self-navigate using Automatic Dependent Surveillance-Broadcast (ADS-B) information. In this scenario, we show how the approach can be used to predict the ability of pilots to adequately balance aircraft separation and fly efficient paths. We present results with several levels of complexity and airspace congestion.
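A simple discrete-state learning system of the kind mentioned above can be sketched with tabular Q-learning; the toy chain environment below is purely illustrative and is not the paper's airspace scenario:

```python
import random

def q_learning(n_states=5, n_actions=2, episodes=500,
               alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning on a toy chain: action 1 moves right,
    action 0 moves left, reward 1 on reaching the last state."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            if rng.random() < eps:                      # explore
                a = rng.randrange(n_actions)
            else:                                       # exploit
                a = max(range(n_actions), key=lambda x: Q[s][x])
            s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s2 == n_states - 1 else 0.0
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learning()
# Greedy policy over the non-terminal states: always move right.
policy = [max(range(2), key=lambda x: Q[s][x]) for s in range(4)]
```

The learned policy is read off the Q-table greedily, which mirrors the paper's idea of predicting actions from learned goal-directed values rather than modeling participants explicitly.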
Testing and Modeling of a 3-MW Wind Turbine Using Fully Coupled Simulation Codes (Poster)
DOE Office of Scientific and Technical Information (OSTI.GOV)
LaCava, W.; Guo, Y.; Van Dam, J.
This poster describes the NREL/Alstom Wind testing and model verification of the Alstom 3-MW wind turbine located at NREL's National Wind Technology Center (NWTC). NREL, in collaboration with Alstom Wind, is studying a 3-MW wind turbine installed at the NWTC. The project analyzes the turbine design using a state-of-the-art simulation code validated with detailed test data. This poster describes the testing and the model validation effort, and provides conclusions about the performance of the unique drive train configuration used in this wind turbine. The 3-MW machine has been operating at the NWTC since March 2011, and drive train measurements will be collected through the spring of 2012. The NWTC testing site has particularly turbulent wind patterns that allow for the measurement of large transient loads and the resulting turbine response. This poster describes the 3-MW turbine test project, the instrumentation installed, and the load cases captured. The design of a reliable wind turbine drive train increasingly relies on the use of advanced simulation to predict structural responses in a varying wind field. This poster presents a fully coupled, aero-elastic and dynamic model of the wind turbine. It also shows the methodology used to validate the model, including the use of measured tower modes, model-to-model comparisons of the power curve, and mainshaft bending predictions for various load cases. The drivetrain is designed to transmit only torque to the gearbox, eliminating non-torque moments that are known to cause gear misalignment. Preliminary results show that the drivetrain is able to divert bending loads in extreme loading cases, and that a significantly smaller bending moment is induced on the mainshaft compared to a three-point mounting design.
de Castro, Alberto; Rosales, Patricia; Marcos, Susana
2007-03-01
To measure tilt and decentration of intraocular lenses (IOLs) with Scheimpflug and Purkinje imaging systems, in physical model eyes with known amounts of tilt and decentration and in patients. Instituto de Optica Daza de Valdés, Consejo Superior de Investigaciones Científicas, Madrid, Spain. Measurements of IOL tilt and decentration were obtained using a commercial Scheimpflug system (Pentacam, Oculus), custom algorithms, and a custom-built Purkinje imaging apparatus. Twenty-five Scheimpflug images of the anterior segment of the eye were obtained at different meridians. Custom algorithms were used to process the images (correction of geometrical distortion, edge detection, and curve fittings). Intraocular lens tilt and decentration were estimated by fitting sinusoidal functions to the projections of the pupillary axis and IOL axis in each image. The Purkinje imaging system captures pupil images showing reflections of light from the anterior corneal surface and anterior and posterior lens surfaces. Custom algorithms were used to detect the Purkinje image locations and estimate IOL tilt and decentration based on a linear system equation and computer eye models with individual biometry. Both methods were validated with a physical model eye in which IOL tilt and decentration can be set nominally. Twenty-one eyes of 12 patients with IOLs were measured with both systems. Measurements of the physical model eye showed an absolute discrepancy between nominal and measured values of 0.279 degree (Purkinje) and 0.243 degree (Scheimpflug) for tilt and 0.094 mm (Purkinje) and 0.228 mm (Scheimpflug) for decentration. In patients, the mean tilt was less than 2.6 degrees and the mean decentration less than 0.4 mm. Both techniques showed mirror symmetry between right eyes and left eyes for tilt around the vertical axis and for decentration in the horizontal axis. Both systems showed high reproducibility.
Validation experiments on physical model eyes showed slightly higher accuracy with the Purkinje method than the Scheimpflug imaging method. Horizontal measurements of patients with both techniques were highly correlated. The IOLs tended to be tilted and decentered nasally in most patients.
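The sinusoidal-fitting step used above to estimate tilt from multi-meridian projections can be sketched as a linear least-squares fit; the meridian angles and "measurements" below are synthetic, not the study's data:

```python
import numpy as np

# Fit y = A cos(theta) + B sin(theta) + C to projections measured at
# 25 meridians; sqrt(A^2 + B^2) gives the tilt magnitude and
# atan2(B, A) its direction. Angles and values here are synthetic.
theta = np.linspace(0.0, 2.0 * np.pi, 25, endpoint=False)
y = 1.5 * np.cos(theta) - 0.8 * np.sin(theta) + 0.2   # synthetic data

X = np.column_stack([np.cos(theta), np.sin(theta), np.ones_like(theta)])
A, B, C = np.linalg.lstsq(X, y, rcond=None)[0]
tilt_magnitude = float(np.hypot(A, B))
```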
NASA Astrophysics Data System (ADS)
Zhu, Wei; Timmermans, Harry
2011-06-01
Models of geographical choice behavior have been dominantly based on rational choice models, which assume that decision makers are utility-maximizers. Rational choice models may be less appropriate as behavioral models when modeling decisions in complex environments, in which decision makers may simplify the decision problem using heuristics. Pedestrian behavior in shopping streets is an example. We therefore propose a modeling framework for pedestrian shopping behavior incorporating principles of bounded rationality. We extend three classical heuristic rules (conjunctive, disjunctive and lexicographic) by introducing threshold heterogeneity. The proposed models are implemented using data on pedestrian behavior in Wang Fujing Street in the city center of Beijing, China. The models are estimated and compared with multinomial logit models and mixed logit models. Results show that the heuristic models perform best for all the decisions modeled. Validation tests are carried out through multi-agent simulation by comparing simulated spatio-temporal agent behavior with the observed pedestrian behavior. The predictions of the heuristic models are slightly better than those of the multinomial logit models.
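The conjunctive rule mentioned above accepts an alternative only if it meets a threshold on every attribute (a non-compensatory heuristic); a minimal sketch, with the shop attributes and thresholds invented for illustration:

```python
def conjunctive_choice(alternatives, thresholds):
    """Conjunctive rule: keep only alternatives that meet the
    threshold on EVERY attribute (non-compensatory screening)."""
    return [name for name, attrs in alternatives.items()
            if all(attrs[k] >= t for k, t in thresholds.items())]

# Invented attributes for three shops, scored on [0, 1].
shops = {"shop_a": {"attractiveness": 0.8, "proximity": 0.6},
         "shop_b": {"attractiveness": 0.9, "proximity": 0.3},
         "shop_c": {"attractiveness": 0.5, "proximity": 0.9}}
accepted = conjunctive_choice(shops, {"attractiveness": 0.6,
                                      "proximity": 0.5})
```

Threshold heterogeneity, as in the paper, would amount to drawing the threshold values from a distribution across decision makers rather than fixing them.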
Prediction models for clustered data: comparison of a random intercept and standard regression model
2013-01-01
Background: When study data are clustered, standard regression analysis is considered inappropriate and analytical techniques for clustered data need to be used. For prediction research in which the interest is in predictor effects at the patient level, random effect regression models are probably preferred over standard regression analysis. It is well known that the random effect parameter estimates and the standard logistic regression parameter estimates are different. Here, we compared random effect and standard logistic regression models for their ability to provide accurate predictions. Methods: Using an empirical study on 1642 surgical patients at risk of postoperative nausea and vomiting, who were treated by one of 19 anesthesiologists (clusters), we developed prognostic models with either standard or random intercept logistic regression. External validity of these models was assessed in new patients from other anesthesiologists. We supported our results with simulation studies using intra-class correlation coefficients (ICC) of 5%, 15%, or 30%. Standard performance measures and measures adapted for the clustered data structure were estimated. Results: The model developed with random effect analysis showed better discrimination than the standard approach if the cluster effects were used for risk prediction (standard c-index of 0.69 versus 0.66). In the external validation set, both models showed similar discrimination (standard c-index 0.68 versus 0.67). The simulation study confirmed these results. For datasets with a high ICC (≥15%), model calibration was only adequate in external subjects if the performance measure used assumed the same data structure as the model development method: standard calibration measures showed good calibration for the model developed with the standard approach, while calibration measures adapted to the clustered data structure showed good calibration for the prediction model with random intercept.
Conclusion: The models with random intercept discriminate better than the standard model only if the cluster effect is used for predictions. The prediction model with random intercept had good calibration within clusters. PMID:23414436
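The c-index reported above measures discrimination: the fraction of event/non-event pairs in which the event case received the higher predicted risk. A minimal sketch, with the predicted risks and outcomes invented for illustration:

```python
def c_index(risks, outcomes):
    """Concordance index: fraction of event/non-event pairs in which
    the event case has the higher predicted risk (ties count 0.5)."""
    pairs = 0
    concordant = 0.0
    for i in range(len(risks)):
        for j in range(i + 1, len(risks)):
            if outcomes[i] == outcomes[j]:
                continue              # only mixed-outcome pairs count
            pairs += 1
            event, other = (i, j) if outcomes[i] == 1 else (j, i)
            if risks[event] > risks[other]:
                concordant += 1.0
            elif risks[event] == risks[other]:
                concordant += 0.5
    return concordant / pairs

# Invented predicted risks and observed outcomes (1 = event).
c = c_index([0.9, 0.8, 0.4, 0.3, 0.2], [1, 0, 1, 0, 0])
```

A c-index of 0.5 is chance-level discrimination and 1.0 is perfect ranking, which puts the abstract's 0.66-0.69 values in context.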
Bouwmeester, Walter; Twisk, Jos W R; Kappen, Teus H; van Klei, Wilton A; Moons, Karel G M; Vergouwe, Yvonne
2013-02-15
Validation of Yoon's Critical Thinking Disposition Instrument.
Shin, Hyunsook; Park, Chang Gi; Kim, Hyojin
2015-12-01
The lack of reliable and valid evaluation tools targeting Korean nursing students' critical thinking (CT) abilities has been reported as one of the barriers to instructing and evaluating students in undergraduate programs. Yoon's Critical Thinking Disposition (YCTD) instrument was developed for Korean nursing students, but few studies have assessed its validity. This study aimed to validate the YCTD. Specifically, the YCTD was assessed to identify its cross-sectional and longitudinal measurement invariance. This was a validation study in which a cross-sectional and longitudinal (pre- and post-nursing practicum) survey was used to validate the YCTD using 345 nursing students at three universities in Seoul, Korea. The participants' CT abilities were assessed using the YCTD before and after completing an established pediatric nursing practicum. The validity of the YCTD was estimated, and then a group invariance test using multigroup confirmatory factor analysis was performed to confirm the measurement compatibility of multiple groups. A test of the seven-factor model showed that the YCTD demonstrated good construct validity. Multigroup confirmatory factor analysis findings for the measurement invariance suggested that this model structure demonstrated strong invariance between groups (i.e., configural, factor loading, and intercept combined) but weak invariance within a group (i.e., configural and factor loading combined). In general, traditional methods for assessing instrument validity have been less than thorough. In this study, multigroup confirmatory factor analysis using cross-sectional and longitudinal measurement data allowed validation of the YCTD. This study concluded that the YCTD can be used for evaluating Korean nursing students' CT abilities. Copyright © 2015. Published by Elsevier B.V.
Rodenacker, Klaas; Hautmann, Christopher; Görtz-Dorten, Anja; Döpfner, Manfred
2016-01-01
Various studies have demonstrated that bifactor models yield better solutions than models with correlated factors. However, the kind of bifactor model that is most appropriate is yet to be examined. The current study is the first to test bifactor models across the full age range (11-18 years) of adolescents using self-reports, and the first to test bifactor models with German subjects and German questionnaires. The study sample included children and adolescents aged between 6 and 18 years recruited from a German clinical sample (n = 1,081) and a German community sample (n = 642). To examine the factorial validity, we compared unidimensional, correlated factors and higher-order and bifactor models and further tested a modified incomplete bifactor model for measurement invariance. Bifactor models displayed superior model fit statistics compared to correlated factor models or second-order models. However, a more parsimonious incomplete bifactor model with only 2 specific factors (inattention and impulsivity) showed a good model fit and a better factor structure than the other bifactor models. Scalar measurement invariance was given in most group comparisons. An incomplete bifactor model would suggest that the specific inattention and impulsivity factors represent entities separable from the general attention-deficit/hyperactivity disorder construct and might, therefore, give way to a new approach to subtyping of children beyond and above attention-deficit/hyperactivity disorder. © 2016 S. Karger AG, Basel.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nguyen, Ba Nghiep; Bapanapalli, Satish K.; Smith, Mark T.
2008-09-01
The objective of our work is to enable the optimum design of lightweight automotive structural components using injection-molded long fiber thermoplastics (LFTs). To this end, an integrated approach that links process modeling to structural analysis with experimental microstructural characterization and validation is developed. First, process models for LFTs are developed and implemented into processing codes (e.g. ORIENT, Moldflow) to predict the microstructure of the as-formed composite (i.e. fiber length and orientation distributions). In parallel, characterization and testing methods are developed to obtain necessary microstructural data to validate process modeling predictions. Second, the predicted LFT composite microstructure is imported into a structural finite element analysis by ABAQUS to determine the response of the as-formed composite to given boundary conditions. At this stage, constitutive models accounting for the composite microstructure are developed to predict various types of behaviors (i.e. thermoelastic, viscoelastic, elastic-plastic, damage, fatigue, and impact) of LFTs. Experimental methods are also developed to determine material parameters and to validate constitutive models. Such a process-linked-structural modeling approach allows an LFT composite structure to be designed with confidence through numerical simulations. Some recent results of our collaborative research will be illustrated to show the usefulness and applications of this integrated approach.
Zampolli, Mario; Nijhof, Marten J J; de Jong, Christ A F; Ainslie, Michael A; Jansen, Erwin H W; Quesson, Benoit A J
2013-01-01
The acoustic radiation from a pile being driven into the sediment by a sequence of hammer strikes is studied with a linear, axisymmetric, structural acoustic frequency domain finite element model. Each hammer strike results in an impulsive sound that is emitted from the pile and then propagated in the shallow water waveguide. Measurements from accelerometers mounted on the head of a test pile and from hydrophones deployed in the water are used to validate the model results. Transfer functions between the force input at the top of the anvil and field quantities, such as acceleration components in the structure or pressure in the fluid, are computed with the model. These transfer functions are validated using accelerometer or hydrophone measurements to infer the structural forcing. A modeled hammer forcing pulse is used in the successive step to produce quantitative predictions of sound exposure at the hydrophones. The comparison between the model and the measurements shows that, although several simplifying assumptions were made, useful predictions of noise levels based on linear structural acoustic models are possible. In the final part of the paper, the model is used to characterize the pile as an acoustic radiator by analyzing the flow of acoustic energy.
NASA Technical Reports Server (NTRS)
Cognata, Thomas J.; Leimkuehler, Thomas O.; Sheth, Rubik B.; Le, Hung
2012-01-01
The Fusible Heat Sink is a novel vehicle heat rejection technology which combines a flow through radiator with a phase change material. The combined technologies create a multi-function device able to shield crew members against Solar Particle Events (SPE), reduce radiator extent by permitting sizing to the average vehicle heat load rather than to the peak vehicle heat load, and to substantially absorb heat load excursions from the average while constantly maintaining thermal control system setpoints. This multi-function technology provides great flexibility for mission planning, making it possible to operate a vehicle in hot or cold environments and under high or low heat load conditions for extended periods of time. This paper describes the model development and experimental validation of the Fusible Heat Sink technology. The model developed was intended to meet the radiation and heat rejection requirements of a nominal MMSEV mission. Development parameters and results, including sizing and model performance will be discussed. From this flight-sized model, a scaled test-article design was modeled, designed, and fabricated for experimental validation of the technology at Johnson Space Center thermal vacuum chamber facilities. Testing showed performance comparable to the model at nominal loads and the capability to maintain heat loads substantially greater than nominal for extended periods of time.
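The sizing idea described above, a phase-change material absorbing heat-load excursions above the average load that the radiator is sized for, can be sketched as a simple energy balance; all numbers below, including the choice of water ice as the example PCM, are invented:

```python
LATENT_HEAT_ICE = 334e3  # J/kg; water ice used only as an example PCM

def pcm_mass_kg(peak_w, avg_w, duration_s, latent=LATENT_HEAT_ICE):
    """PCM mass needed to absorb the heat-load excursion above the
    average load (which the radiator rejects) over a given duration."""
    excess_j = (peak_w - avg_w) * duration_s
    return excess_j / latent

# Invented numbers: 2 kW excursion above average, sustained for 1 hour.
m = pcm_mass_kg(peak_w=6000.0, avg_w=4000.0, duration_s=3600.0)
```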
NASA Technical Reports Server (NTRS)
Cognata, Thomas J.; Leimkuehler, Thomas; Sheth, Rubik; Le, Hung
2013-01-01
The Fusible Heat Sink is a novel vehicle heat rejection technology which combines a flow through radiator with a phase change material. The combined technologies create a multi-function device able to shield crew members against Solar Particle Events (SPE), reduce radiator extent by permitting sizing to the average vehicle heat load rather than to the peak vehicle heat load, and to substantially absorb heat load excursions from the average while constantly maintaining thermal control system setpoints. This multi-function technology provides great flexibility for mission planning, making it possible to operate a vehicle in hot or cold environments and under high or low heat load conditions for extended periods of time. This paper describes the modeling and experimental validation of the Fusible Heat Sink technology. The model developed was intended to meet the radiation and heat rejection requirements of a nominal MMSEV mission. Development parameters and results, including sizing and model performance will be discussed. From this flight-sized model, a scaled test-article design was modeled, designed, and fabricated for experimental validation of the technology at Johnson Space Center thermal vacuum chamber facilities. Testing showed performance comparable to the model at nominal loads and the capability to maintain heat loads substantially greater than nominal for extended periods of time.
A New 1DVAR Retrieval for AMSR2 and GMI: Validation and Sensitivities
NASA Astrophysics Data System (ADS)
Duncan, D.; Kummerow, C. D.
2015-12-01
A new non-raining retrieval has been developed for microwave imagers and applied to the GMI and AMSR2 sensors. With the Community Radiative Transfer Model (CRTM) as the forward model for the physical retrieval, a 1-dimensional variational method finds the atmospheric state which minimizes the difference between observed and simulated brightness temperatures. A key innovation of the algorithm development is a method to calculate the sensor error covariance matrix that is specific to the forward model employed and includes off-diagonal elements, allowing the algorithm to handle various forward models and sensors with little cross-talk. The water vapor profile is resolved by way of empirical orthogonal functions (EOFs) and then summed to get total precipitable water (TPW). Validation of retrieved 10m wind speed, TPW, and sea surface temperature (SST) is performed via comparison with buoys and radiosondes as well as global models and other remotely sensed products. In addition to the validation, sensitivity experiments investigate the impact of ancillary data on the under-constrained retrieval, a concern for climate data records that strive to be independent of model biases. The introduction of model analysis data is found to aid the algorithm most at high frequency channels and affect TPW retrievals, whereas wind and cloud water retrievals show little effect from ingesting further ancillary data.
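With a linear forward model and Gaussian errors, the variational minimum has a closed form; the toy sketch below illustrates the idea only (the real retrieval uses the nonlinear CRTM and iterates, and every matrix and value here is invented):

```python
import numpy as np

# Toy 1DVAR with linear forward model y = H x and Gaussian errors:
# minimizing J(x) = (x-xa)^T B^-1 (x-xa) + (y-Hx)^T R^-1 (y-Hx)
# gives x_hat = xa + B H^T (H B H^T + R)^-1 (y - H xa).
H = np.array([[1.0, 0.5],
              [0.2, 1.0]])   # forward model (invented)
B = np.eye(2)                # background (prior) error covariance
R = 0.1 * np.eye(2)          # observation error covariance
xa = np.zeros(2)             # prior state
x_true = np.array([1.0, 2.0])
y = H @ x_true               # noise-free observations for the demo

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
x_hat = xa + K @ (y - H @ xa)                  # retrieved state
```

With a small observation error covariance R, the retrieval is pulled strongly toward the observations and away from the prior, mirroring the sensitivity-to-ancillary-data question the abstract raises.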
NASA Technical Reports Server (NTRS)
Geng, Tao; Paxson, Daniel E.; Zheng, Fei; Kuznetsov, Andrey V.; Roberts, William L.
2008-01-01
Pulsed combustion is receiving renewed interest as a potential route to higher performance in air breathing propulsion systems. Pulsejets offer a simple experimental device with which to study unsteady combustion phenomena and validate simulations. Previous computational fluid dynamic (CFD) simulation work focused primarily on the pulsejet combustion and exhaust processes. This paper describes a new inlet sub-model which simulates the fluidic and mechanical operation of a valved pulsejet head. The governing equations for this sub-model are described. Sub-model validation is provided through comparisons of simulated and experimentally measured reed valve motion, and time averaged inlet mass flow rate. The updated pulsejet simulation, with the inlet sub-model implemented, is validated through comparison with experimentally measured combustion chamber pressure, inlet mass flow rate, operational frequency, and thrust. Additionally, the simulated pulsejet exhaust flowfield, which is dominated by a starting vortex ring, is compared with particle imaging velocimetry (PIV) measurements on the bases of velocity, vorticity, and vortex location. The results show good agreement between simulated and experimental data. The inlet sub-model is shown to be critical for the successful modeling of pulsejet operation. This sub-model correctly predicts both the inlet mass flow rate and its phase relationship with the combustion chamber pressure. As a result, the predicted pulsejet thrust agrees very well with experimental data.
Fieberg, John R.; Forester, James D.; Street, Garrett M.; Johnson, Douglas H.; ArchMiller, Althea A.; Matthiopoulos, Jason
2018-01-01
“Species distribution modeling” was recently ranked as one of the top five “research fronts” in ecology and the environmental sciences by ISI's Essential Science Indicators (Renner and Warton 2013), reflecting the importance of predicting how species distributions will respond to anthropogenic change. Unfortunately, species distribution models (SDMs) often perform poorly when applied to novel environments. Compounding this problem is the shortage of methods for evaluating SDMs (hence, we may be getting our predictions wrong and not even know it). Traditional methods for validating SDMs quantify a model's ability to classify locations as used or unused. Instead, we propose to focus on how well SDMs can predict the characteristics of used locations. This subtle shift in viewpoint leads to a more natural and informative evaluation and validation of models across the entire spectrum of SDMs. Through a series of examples, we show how simple graphical methods can help with three fundamental challenges of habitat modeling: identifying missing covariates, non-linearity, and multicollinearity. Identifying habitat characteristics that are not well-predicted by the model can provide insights into variables affecting the distribution of species, suggest appropriate model modifications, and ultimately improve the reliability and generality of conservation and management recommendations.
NASA Astrophysics Data System (ADS)
Yusliana Ekawati, Elvin
2017-01-01
This study aimed to produce a model for assessing scientific attitudes through observation in physics learning based on the scientific approach (a case study of the dynamic fluid topic in high school). The development of the instruments in this study adapted the Plomp model, whose procedure includes initial investigation, design, construction, testing, evaluation and revision. Testing was done in Surakarta, and the resulting data were analyzed using the Aiken formula to determine the content validity of the instrument, Cronbach's alpha to determine the reliability of the instrument, and construct validity using confirmatory factor analysis with the LISREL 8.50 program. The results of this research were conceptual models, instruments and guidelines for assessing scientific attitudes by observation. The assessed constructs include components of curiosity, objectivity, suspended judgment, open-mindedness, honesty and perseverance. The construct validity of the instruments was qualified (factor loadings > 0.3). The reliability of the model was quite good, with an alpha value of 0.899 (> 0.7). The test showed that the model fits the theoretical model, supported by empirical data: p-value 0.315 (≥ 0.05), RMSEA 0.027 (≤ 0.08).
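The Aiken formula referenced above gives a content-validity coefficient V = Σ(r − lo) / (n(c − 1)) for n raters on a c-category scale; a minimal sketch, with the expert ratings invented for illustration:

```python
def aiken_v(ratings, lo=1, hi=5):
    """Aiken's V for one item rated by several judges on a lo..hi
    category scale: V = sum(r - lo) / (n * (c - 1))."""
    n = len(ratings)
    c = hi - lo + 1
    return sum(r - lo for r in ratings) / (n * (c - 1))

# Invented relevance ratings from five expert judges (1-5 scale).
v = aiken_v([4, 5, 4, 5, 4])
```

V ranges from 0 to 1, with values near 1 indicating strong rater agreement that the item is content-valid.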
Modelling population distribution using remote sensing imagery and location-based data
NASA Astrophysics Data System (ADS)
Song, J.; Prishchepov, A. V.
2017-12-01
A detailed spatial distribution of population density is essential for city studies such as urban planning, environmental pollution, and emergency response, and even for estimating pressure on the environment and human exposure and health risks. However, most studies have relied on census data, because detailed, dynamic population distributions are difficult to acquire, especially in microscale research. This research describes a method that uses remote sensing imagery and location-based data to model population distribution at the functional-zone level. First, urban functional zones within a city were mapped from high-resolution remote sensing images and points of interest (POIs). The functional-zone extraction workflow has five parts: (1) urban land-use classification; (2) image segmentation within the built-up area; (3) identification of functional segments from POIs; (4) identification of functional blocks from the functional segmentation and weight coefficients; (5) accuracy assessment using validation points. The result is shown in Fig. 1. Second, we applied ordinary least squares (OLS) and geographically weighted regression (GWR) to assess the spatially nonstationary relationship between night-time light digital number (DN) and population density at sampling points, and used both methods to predict the population distribution over the study area. The R² of the GWR model was on the order of 0.7, and the model captured significant variation over the region that the traditional OLS model could not (Fig. 2). Validation against sampling points of population density demonstrated that the GWR predictions correlated well with the light values (Fig. 3). Results showed that: (1) population density is not linearly correlated with light brightness under a global model; (2) VIIRS night-time light data can estimate population density at the city level when integrated with functional zones; (3) GWR is a robust model for mapping population distribution, with adjusted R² higher than that of the optimal OLS models, confirming its better prediction accuracy. This method therefore provides detailed population density information for microscale citizen studies.
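The OLS-versus-GWR comparison can be sketched with a hand-rolled GWR: a Gaussian distance-decay kernel and one local weighted least-squares fit per location. In practice a dedicated package (e.g. mgwr) with a calibrated bandwidth would be used; all data and the bandwidth below are synthetic:

```python
import numpy as np

def gwr_predict(coords, x, y, bandwidth):
    """Minimal geographically weighted regression: at each location, fit a
    weighted least-squares line using Gaussian distance-decay weights."""
    Xd = np.column_stack([np.ones(len(x)), x])        # add intercept
    preds = np.empty(len(y))
    for i, c in enumerate(coords):
        d = np.linalg.norm(coords - c, axis=1)
        w = np.exp(-0.5 * (d / bandwidth) ** 2)       # Gaussian kernel
        W = np.diag(w)
        beta = np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)
        preds[i] = Xd[i] @ beta                       # local prediction
    return preds

def r2(obs, pred):
    return 1 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Synthetic example: a spatially drifting link between night-light DN and
# population density (values are illustrative, not the paper's data).
rng = np.random.default_rng(1)
coords = rng.uniform(0, 10, size=(200, 2))
light_dn = rng.uniform(5, 60, size=200)
slope = 10 + 3 * coords[:, 0]                         # relationship varies in space
pop = slope * light_dn + rng.normal(0, 50, 200)

# Global OLS fit for comparison.
Xd = np.column_stack([np.ones(200), light_dn])
beta_ols, *_ = np.linalg.lstsq(Xd, pop, rcond=None)
ols_pred = Xd @ beta_ols

gwr_pred = gwr_predict(coords, light_dn, pop, bandwidth=2.0)
print(r2(pop, ols_pred), r2(pop, gwr_pred))
```

Because the light-population slope drifts across space, the single global OLS coefficient underfits while the local GWR fits track the drift, which is exactly the nonstationarity argument the abstract makes.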
Xiao, Li; Cai, Qin; Li, Zhilin; Zhao, Hongkai; Luo, Ray
2014-01-01
A multi-scale framework is proposed for more realistic molecular dynamics simulations in continuum solvent models by coupling a molecular mechanics treatment of solute with a fluid mechanics treatment of solvent. This article reports our initial efforts to formulate the physical concepts necessary for coupling the two mechanics and develop a 3D numerical algorithm to simulate the solvent fluid via the Navier-Stokes equation. The numerical algorithm was validated with multiple test cases. The validation shows that the algorithm is effective and stable, with observed accuracy consistent with our design. PMID:25404761
Show and tell: disclosure and data sharing in experimental pathology.
Schofield, Paul N; Ward, Jerrold M; Sundberg, John P
2016-06-01
Reproducibility of data from experimental investigations using animal models is increasingly under scrutiny because of the potentially negative impact of poor reproducibility on the translation of basic research. Histopathology is a key tool in biomedical research, in particular for the phenotyping of animal models to provide insights into the pathobiology of diseases. Failure to disclose and share crucial histopathological experimental details compromises the validity of the review process and reliability of the conclusions. We discuss factors that affect the interpretation and validation of histopathology data in publications and the importance of making these data accessible to promote replicability in research. © 2016. Published by The Company of Biologists Ltd.
Validation of High Frequency (HF) Propagation Prediction Models in the Arctic region
NASA Astrophysics Data System (ADS)
Athieno, R.; Jayachandran, P. T.
2014-12-01
Despite the emergence of modern techniques for long-distance communication, ionospheric communication in the high frequency (HF) band (3-30 MHz) remains significant to both civilian and military users. However, efficient use of the ever-varying ionosphere as a propagation medium depends on the reliability of ionospheric and HF propagation prediction models. Most available models are empirical, implying that the underlying data collection must be sufficiently large to give good results; the models considered here were developed with little data from the high latitudes, which necessitates their validation. This paper presents the validation of three long-term HF propagation prediction models over a path within the Arctic region. Measurements of the maximum usable frequency for a 3000 km range (MUF(3000)F2) for Resolute, Canada (74.75° N, 265.00° E), are obtained from hand-scaled ionograms generated by the Canadian Advanced Digital Ionosonde (CADI). The observations are compared with predictions from the Ionospheric Communication Enhanced Profile Analysis Program (ICEPAC), the Voice of America Coverage Analysis Program (VOACAP), and International Telecommunication Union Recommendation 533 (ITU-REC533) for 2009, 2011, 2012 and 2013. A statistical analysis shows that the monthly predictions reproduce the general features of the observations throughout the year, most evidently in the winter and equinox months. Both predictions and observations show diurnal and seasonal variation. The analysed models did not differ greatly in overall performance, but there are noticeable differences across seasons for the entire period analysed: REC533 performs better in winter months, while VOACAP performs better in both equinox and summer months.
VOACAP also outperforms ICEPAC in the daily predictions, although, in general, the monthly predictions agree with the observations better than the daily predictions do.
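A validation of this kind typically reduces to per-model error statistics against the ionosonde observations. A minimal sketch: the model names come from the abstract, but every MUF(3000)F2 value below is invented for illustration:

```python
import numpy as np

# Hypothetical observed MUF(3000)F2 values (MHz) for one day, and three
# models' predictions; real inputs would come from CADI ionograms.
observed = np.array([7.2, 6.8, 8.1, 10.5, 12.3, 11.8, 9.4, 8.0])
predictions = {
    "ICEPAC": np.array([6.5, 6.9, 8.8, 11.6, 13.5, 12.9, 10.3, 8.9]),
    "VOACAP": np.array([7.0, 6.6, 8.4, 10.9, 12.8, 12.1, 9.8, 8.3]),
    "REC533": np.array([7.6, 7.3, 8.6, 11.2, 13.0, 12.5, 10.0, 8.6]),
}

for name, pred in predictions.items():
    err = pred - observed
    rmse = np.sqrt(np.mean(err ** 2))   # overall error magnitude
    bias = err.mean()                   # positive bias = over-prediction
    print(f"{name}: RMSE={rmse:.2f} MHz, bias={bias:+.2f} MHz")
```

Stratifying the same RMSE/bias statistics by month and season yields the seasonal comparison reported in the abstract.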
An Easy Tool to Predict Survival in Patients Receiving Radiation Therapy for Painful Bone Metastases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Westhoff, Paulien G., E-mail: p.g.westhoff@umcutrecht.nl; Graeff, Alexander de; Monninkhof, Evelyn M.
2014-11-15
Purpose: Patients with bone metastases have widely varying survival. A reliable estimation of survival is needed for appropriate treatment strategies. Our goal was to assess the value of simple prognostic factors, namely patient and tumor characteristics, Karnofsky performance status (KPS), and patient-reported scores of pain and quality of life, to predict survival in patients with painful bone metastases. Methods and Materials: In the Dutch Bone Metastasis Study, 1157 patients were treated with radiation therapy for painful bone metastases. At randomization, physicians determined the KPS; patients rated general health on a visual analogue scale (VAS-gh), valuation of life on a verbal rating scale (VRS-vl), and pain intensity. To assess the predictive value of the variables, we used multivariate Cox proportional hazard analyses and C-statistics for discriminative value. Calibration of the final model was assessed. External validation was performed on a dataset of 934 patients who were treated with radiation therapy for vertebral metastases. Results: Patients mainly had breast (39%), prostate (23%), or lung cancer (25%). After a maximum of 142 weeks' follow-up, 74% of patients had died. The best predictive model included sex, primary tumor, visceral metastases, KPS, VAS-gh, and VRS-vl (C-statistic = 0.72, 95% CI = 0.70-0.74). A reduced model, with only KPS and primary tumor, showed comparable discriminative capacity (C-statistic = 0.71, 95% CI = 0.69-0.72). External validation showed a C-statistic of 0.72 (95% CI = 0.70-0.73). Calibration of the derivation and the validation dataset showed underestimation of survival. Conclusion: In predicting survival in patients with painful bone metastases, KPS combined with primary tumor was comparable to a more complex model. Considering the number of variables in complex models and the additional burden on patients, the simple model is preferred for daily use. In addition, a risk table for survival is provided.
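The reported C-statistic is Harrell's concordance index, which can be computed directly from follow-up times, event indicators, and model risk scores. A minimal sketch with invented data (not the Dutch Bone Metastasis Study cohort):

```python
import numpy as np

def concordance_index(time, event, risk):
    """Harrell's C-statistic: among usable pairs, the fraction in which the
    higher-risk patient dies earlier. event=1 means death was observed;
    censored patients (event=0) are only usable as the later member of a pair."""
    n = len(time)
    concordant = ties = usable = 0
    for i in range(n):
        for j in range(n):
            if event[i] == 1 and time[i] < time[j]:
                usable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    ties += 1
    return (concordant + 0.5 * ties) / usable

# Illustrative data: follow-up in weeks, death indicator, and a risk score
# (e.g. from a Cox model on KPS and primary tumor); all values are invented.
time  = np.array([10, 25, 40, 60, 90, 120, 142, 30])
event = np.array([1,  1,  1,  0,  1,  1,   0,   1])
risk  = np.array([2.1, 1.8, 1.2, 0.9, 0.8, 0.5, 0.3, 1.5])

print(concordance_index(time, event, risk))  # -> 1.0: risk order matches survival
```

A value of 0.5 indicates no discrimination and 1.0 perfect discrimination, so the paper's 0.71-0.72 represents moderate discriminative capacity.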
NASA Astrophysics Data System (ADS)
Kim, Dongmin; Lee, Myong-In; Jeong, Su-Jong; Im, Jungho; Cha, Dong Hyun; Lee, Sanggyun
2017-12-01
This study compares historical simulations of the terrestrial carbon cycle produced by 10 Earth System Models (ESMs) that participated in the fifth phase of the Coupled Model Intercomparison Project (CMIP5). Using MODIS satellite estimates, this study validates the simulation of gross primary production (GPP), net primary production (NPP), and carbon use efficiency (CUE), which depend on plant function types (PFTs). The models show noticeable deficiencies compared to the MODIS data in the simulation of the spatial patterns of GPP and NPP and large differences among the simulations, although the multi-model ensemble (MME) mean provides a realistic global mean value and spatial distributions. The larger model spreads in GPP and NPP compared to those of surface temperature and precipitation suggest that the differences among simulations in terms of the terrestrial carbon cycle are largely due to uncertainties in the parameterization of terrestrial carbon fluxes by vegetation. The models also exhibit large spatial differences in their simulated CUE values and at locations where the dominant PFT changes, primarily due to differences in the parameterizations. While the MME-simulated CUE values show a strong dependence on surface temperatures, the observed CUE values from MODIS show greater complexity, as well as non-linear sensitivity. This leads to the overall underestimation of CUE using most of the PFTs incorporated into current ESMs. The results of this comparison suggest that more careful and extensive validation is needed to improve the terrestrial carbon cycle in terms of ecosystem-level processes.
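Carbon use efficiency itself is the simple ratio CUE = NPP / GPP. A minimal sketch of the per-model and multi-model ensemble (MME) calculations, using invented global annual means rather than CMIP5 or MODIS values:

```python
import numpy as np

# Illustrative global annual means (PgC/yr) for three hypothetical ESMs;
# real values would come from CMIP5 historical output and MODIS products.
gpp = np.array([118.0, 135.0, 109.0])   # gross primary production
npp = np.array([56.0, 61.0, 54.0])      # net primary production

cue = npp / gpp                          # per-model carbon use efficiency
mme_cue = npp.mean() / gpp.mean()        # MME carbon use efficiency
spread = gpp.max() - gpp.min()           # inter-model spread in GPP

print(cue.round(3), round(mme_cue, 3), spread)
```

Computing CUE separately per plant functional type, as the study does, is the same ratio applied to PFT-stratified GPP and NPP fields.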
Wong, Quincy J J; Certoma, Sarah P; McLellan, Lauren F; Halldorsson, Brynjar; Reyes, Natasha; Boulton, Kelsie; Hudson, Jennifer L; Rapee, Ronald M
2017-12-28
Recent research has started to examine the applicability of influential adult models of the maintenance of social anxiety disorder (SAD) to youth. This research is limited by the lack of psychometrically validated measures of the underlying constructs that are developmentally appropriate for youth. One key construct in adult models of SAD is maladaptive social-evaluative beliefs. The current study aimed to develop and validate a measure of these beliefs in youth, known as the Report of Youth Social Cognitions (RYSC). The RYSC was developed with a clinical sample of youth with anxiety disorders (N = 180) and cross-validated in a community sample of youth (N = 305). In the clinical sample, the RYSC exhibited a 3-factor structure (negative evaluation, revealing self, and positive impression factors), good internal consistency, and construct validity. In the community sample, the 3-factor structure and the internal consistency of the RYSC were replicated, but the test of construct validity showed that the RYSC had similarly strong associations with social anxiety and depressed affect. The RYSC had good test-retest reliability overall, although the revealing self subscale showed lower temporal stability, which improved when only older participants (age ≥9 years) were considered. The RYSC in general was also shown to discriminate between youth with and without SAD, although the revealing self subscale again performed suboptimally and improved when only older participants were considered. These findings provide psychometric support for the RYSC and justify its use with youth in research and clinical settings requiring the assessment of maladaptive social-evaluative beliefs. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Malti, Tina; Zuffianò, Antonio; Noam, Gil G
2018-04-01
Knowing every child's social-emotional development is important, as it can support prevention and intervention approaches that meet the developmental needs and strengths of children. Here, we discuss the role of social-emotional assessment tools in planning, implementing, and evaluating preventative strategies to promote mental health in all children and adolescents. We first selectively review existing tools and identify current gaps in the measurement literature. Next, we introduce the Holistic Student Assessment (HSA), a tool based on our social-emotional developmental theory, the Clover Model, and designed to measure social-emotional development in children and adolescents. Using a sample of 5946 students (51% boys, M age = 13.16 years), we provide evidence for the psychometric validity of the self-report version of the HSA. First, we document the theoretically expected 7-dimension factor structure in a calibration sub-sample (n = 984) and cross-validate this structure in a validation sub-sample (n = 4962). Next, we show measurement invariance across development, i.e., late childhood (9- to 11-year-olds), early adolescence (12- to 14-year-olds), and middle adolescence (15- to 18-year-olds), and evidence for the HSA's construct validity in each age group. The findings support the robustness of the factor structure and confirm its developmental sensitivity. Structural equation modeling validity analysis in a multiple-group framework indicates that the HSA is associated with mental health in the expected directions across ages. Overall, these findings demonstrate the psychometric properties of the tool, and we discuss how social-emotional tools such as the HSA can guide future research and inform large-scale dissemination of preventive strategies.
Dhingra, Madhur S; Artois, Jean; Robinson, Timothy P; Linard, Catherine; Chaiban, Celia; Xenarios, Ioannis; Engler, Robin; Liechti, Robin; Kuznetsov, Dmitri; Xiao, Xiangming; Dobschuetz, Sophie Von; Claes, Filip; Newman, Scott H; Dauphin, Gwenaëlle; Gilbert, Marius
2016-01-01
Global disease suitability models are essential tools to inform surveillance systems and enable early detection. We present the first global suitability model of highly pathogenic avian influenza (HPAI) H5N1 and demonstrate that reliable predictions can be obtained at the global scale. The best predictions are obtained using spatial predictor variables describing host distributions rather than land-use or eco-climatic variables, with a strong association with domestic duck and extensively raised chicken densities. Our results also support a more systematic use of spatial cross-validation in large-scale disease suitability modelling: standard random cross-validation can give unreliable measures of extrapolation accuracy. A global suitability model of the H5 clade 2.3.4.4 viruses, a group of viruses that recently spread extensively in Asia and the US, shows, in comparison, a lower spatial extrapolation capacity than the HPAI H5N1 models, with a stronger association with intensively raised chicken densities and anthropogenic factors. DOI: http://dx.doi.org/10.7554/eLife.19571.001 PMID:27885988
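The contrast between random and spatial cross-validation rests on building folds from spatially contiguous blocks rather than individual points, so test data are geographically separated from training data. A minimal sketch (grid size and coordinates are illustrative, not the paper's design):

```python
import numpy as np

def spatial_blocks(lon, lat, block_deg):
    """Assign each point to a grid-cell block id; folds built from whole
    blocks keep test points spatially separated from training points.
    Assumes non-negative integer cell indices for the simple id encoding."""
    bx = np.floor(lon / block_deg).astype(int)
    by = np.floor(lat / block_deg).astype(int)
    return bx * 1000 + by        # one id per grid cell

def leave_block_out(block_id):
    """Yield (train_idx, test_idx), holding out one spatial block per fold."""
    for b in np.unique(block_id):
        test = np.where(block_id == b)[0]
        train = np.where(block_id != b)[0]
        yield train, test

# Illustrative occurrence points (lon, lat); real inputs would be H5N1
# outbreak locations with host-density predictors.
rng = np.random.default_rng(2)
lon = rng.uniform(90, 120, 500)
lat = rng.uniform(10, 40, 500)
blocks = spatial_blocks(lon, lat, block_deg=10.0)

n_folds = len(np.unique(blocks))
print(n_folds, "spatial folds")  # a 3x3 grid of 10-degree blocks
```

Scoring a model on held-out blocks measures extrapolation to unseen regions, which is why spatial cross-validation gives more honest accuracy estimates than random splits when outbreak points are spatially clustered.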