Sample records for analysis model validation

  1. Load Model Verification, Validation and Calibration Framework by Statistical Analysis on Field Data

    NASA Astrophysics Data System (ADS)

    Jiao, Xiangqing; Liao, Yuan; Nguyen, Thai

    2017-11-01

    Accurate load models are critical for power system analysis and operation. A large amount of research work has been done on load modeling. Most of the existing research focuses on developing load models, while little has been done on developing formal load model verification and validation (V&V) methodologies or procedures. Most existing load model validation is based on qualitative rather than quantitative analysis. In addition, not all aspects of the model V&V problem have been addressed by the existing approaches. To complement the existing methods, this paper proposes a novel load model verification and validation framework that can systematically and more comprehensively examine a load model's effectiveness and accuracy. Statistical analysis, instead of visual check, quantifies the load model's accuracy and provides a confidence level of the developed load model for model users. The analysis results can also be used to calibrate load models. The proposed framework can serve as guidance for utility engineers and researchers to systematically examine load models. The proposed method is demonstrated through analysis of field measurements collected from a utility system.
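
    The kind of statistical check this framework proposes can be illustrated with a short sketch: instead of visually comparing model output to field data, compute an error metric and a bootstrap confidence interval for it. The data and values below are synthetic placeholders, not the utility measurements from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical field measurements and load-model predictions (MW);
# names and data are illustrative, not from the paper.
measured = rng.normal(100.0, 5.0, size=200)
predicted = measured + rng.normal(0.5, 2.0, size=200)  # model with slight bias

errors = predicted - measured
rmse = float(np.sqrt(np.mean(errors ** 2)))

# Bootstrap a 95% confidence interval for the mean error, replacing a
# visual check with a quantitative statement of model accuracy.
boot_means = [rng.choice(errors, size=errors.size, replace=True).mean()
              for _ in range(2000)]
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"RMSE = {rmse:.2f} MW, mean-error 95% CI = [{lo:.2f}, {hi:.2f}] MW")
```

    A confidence interval that excludes zero, as here, would flag a systematic bias that a calibration step could then remove.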

  2. Computer simulation of Cerebral Arteriovenous Malformation-validation analysis of hemodynamics parameters.

    PubMed

    Kumar, Y Kiran; Mehta, Shashi Bhushan; Ramachandra, Manjunath

    2017-01-01

    The purpose of this work is to provide validation methods for evaluating the hemodynamic assessment of Cerebral Arteriovenous Malformation (CAVM). This article emphasizes the importance of validating noninvasive measurements for CAVM patients, whose complex vessel structures are represented using lumped models. The validation of the hemodynamic assessment is based on invasive clinical measurements and on cross-validation against Philips' validated proprietary software packages Qflow and 2D Perfusion. The modeling results are validated for 30 CAVM patients at 150 vessel locations. Mean flow, diameter, and pressure were compared between modeling results and clinical/cross-validation measurements using an independent two-tailed Student t test. Exponential regression analysis was used to assess the relationships among blood flow, vessel diameter, and pressure. Univariate analysis of the relationships among vessel diameter, vessel cross-sectional area, AVM volume, AVM pressure, and AVM flow was performed with linear or exponential regression. Modeling results were compared with clinical measurements from vessel locations in cerebral regions, and the model was cross-validated against Qflow and 2D Perfusion. Our results show that the modeling and clinical results match closely, with only small deviations. In this article, we have validated our modeling results against clinical measurements, and we propose a new approach to cross-validation by demonstrating the accuracy of our results against a validated product in a clinical environment.
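
    The comparison of modelled and measured values with an independent two-tailed Student t test can be sketched as follows; the flow values are synthetic stand-ins, not patient data, and scipy's `ttest_ind` serves as a generic implementation of the test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Illustrative modelled vs. clinically measured mean flow (mL/s) at 150
# vessel locations; values are synthetic, not data from the study.
clinical = rng.normal(4.0, 1.0, size=150)
modelled = clinical + rng.normal(0.0, 0.3, size=150)

# Independent two-tailed Student t test on the two sets of means.
t_stat, p_value = stats.ttest_ind(modelled, clinical)  # two-tailed by default
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```

    A large p-value here is consistent with "no detectable difference" between modelled and measured means, which is the sense in which the abstract reports close agreement.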

  3. Validation Database Based Thermal Analysis of an Advanced RPS Concept

    NASA Technical Reports Server (NTRS)

    Balint, Tibor S.; Emis, Nickolas D.

    2006-01-01

    Advanced RPS concepts can be conceived, designed and assessed using high-end computational analysis tools. These predictions may provide an initial insight into the potential performance of these models, but verification and validation are necessary and required steps to gain confidence in the numerical analysis results. This paper discusses the findings from a numerical validation exercise for a small advanced RPS concept, based on a thermal analysis methodology developed at JPL and on a validation database obtained from experiments performed at Oregon State University. Both the numerical and experimental configurations utilized a single GPHS module enabled design, resembling a Mod-RTG concept. The analysis focused on operating and environmental conditions during the storage phase only. This validation exercise helped to refine key thermal analysis and modeling parameters, such as heat transfer coefficients, and conductivity and radiation heat transfer values. Improved understanding of the Mod-RTG concept through validation of the thermal model allows for future improvements to this power system concept.

  4. Assessment of the Value, Impact, and Validity of the Jobs and Economic Development Impacts (JEDI) Suite of Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Billman, L.; Keyser, D.

    The Jobs and Economic Development Impacts (JEDI) models, developed by the National Renewable Energy Laboratory (NREL) for the U.S. Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy (EERE), use input-output methodology to estimate gross (not net) jobs and economic impacts of building and operating selected types of renewable electricity generation and fuel plants. This analysis provides the DOE with an assessment of the value, impact, and validity of the JEDI suite of models. While the models produce estimates of jobs, earnings, and economic output, this analysis focuses only on jobs estimates. This validation report includes an introduction to the JEDI models, an analysis of their value and impact, and an analysis of the validity of job estimates generated by the JEDI models through comparison to other modeled estimates and to empirical, observed jobs data as reported or estimated for a commercial project, a state, or a region.

  5. Exact Analysis of Squared Cross-Validity Coefficient in Predictive Regression Models

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2009-01-01

    In regression analysis, the notion of population validity is of theoretical interest for describing the usefulness of the underlying regression model, whereas the presumably more important concept of population cross-validity represents the predictive effectiveness for the regression equation in future research. It appears that the inference…

  6. A Model for Estimating the Reliability and Validity of Criterion-Referenced Measures.

    ERIC Educational Resources Information Center

    Edmonston, Leon P.; Randall, Robert S.

    A decision model designed to determine the reliability and validity of criterion referenced measures (CRMs) is presented. General procedures which pertain to the model are discussed as to: Measures of relationship, Reliability, Validity (content, criterion-oriented, and construct validation), and Item Analysis. The decision model is presented in…

  7. Hybrid Soft Soil Tire Model (HSSTM). Part 1: Tire Material and Structure Modeling

    DTIC Science & Technology

    2015-04-28

    commercially available vehicle simulation packages. Model parameters are obtained using a validated finite element tire model, modal analysis, and other...design of experiment matrix. These data, in addition to modal analysis data, were used to validate the tire model. Furthermore, to study the validity... The applied forces at the rim center consist of the axle forces and suspension forces (Eq. 78).

  8. Mathematical modeling in realistic mathematics education

    NASA Astrophysics Data System (ADS)

    Riyanto, B.; Zulkardi; Putri, R. I. I.; Darmawijoyo

    2017-12-01

    The purpose of this paper is to produce mathematical modelling tasks in Realistic Mathematics Education for junior high school. This study used development research consisting of three stages: analysis, design, and evaluation. The success criterion of this study was a local instruction theory for school mathematical modelling learning that is valid and practical for students. The data were analyzed using descriptive methods as follows: (1) walkthrough analysis based on expert comments in the expert review, to obtain a valid Hypothetical Learning Trajectory for mathematical modelling learning; (2) analysis of the results of the one-to-one and small-group reviews, to assess practicality. Based on the expert validation and the students' opinions and answers, the mathematical modelling problems obtained in Realistic Mathematics Education were valid and practical.

  9. Development of a Conservative Model Validation Approach for Reliable Analysis

    DTIC Science & Technology

    2015-01-01

    CIE 2015 August 2-5, 2015, Boston, Massachusetts, USA [DRAFT] DETC2015-46982 DEVELOPMENT OF A CONSERVATIVE MODEL VALIDATION APPROACH FOR RELIABLE...obtain a conservative simulation model for reliable design even with limited experimental data. Very little research has taken into account the...3, the proposed conservative model validation is briefly compared to the conventional model validation approach. Section 4 describes how to account

  10. Crash Certification by Analysis - Are We There Yet?

    NASA Technical Reports Server (NTRS)

    Jackson, Karen E.; Fasanella, Edwin L.; Lyle, Karen H.

    2006-01-01

    This paper addresses the issue of crash certification by analysis. This broad topic encompasses many ancillary issues including model validation procedures, uncertainty in test data and analysis models, probabilistic techniques for test-analysis correlation, verification of the mathematical formulation, and establishment of appropriate qualification requirements. This paper will focus on certification requirements for crashworthiness of military helicopters; capabilities of the current analysis codes used for crash modeling and simulation, including some examples of simulations from the literature to illustrate the current approach to model validation; and future directions needed to achieve "crash certification by analysis."

  11. Applicability of Monte Carlo cross validation technique for model development and validation using generalised least squares regression

    NASA Astrophysics Data System (ADS)

    Haddad, Khaled; Rahman, Ataur; A Zaman, Mohammad; Shrestha, Surendra

    2013-03-01

    In regional hydrologic regression analysis, model selection and validation are regarded as important steps. Here, model selection is usually based on some measure of goodness-of-fit between the model prediction and the observed data. In Regional Flood Frequency Analysis (RFFA), leave-one-out (LOO) validation or a fixed-percentage leave-out validation (e.g., 10%) is commonly adopted to assess the predictive ability of regression-based prediction equations. This paper develops a Monte Carlo Cross Validation (MCCV) technique (which has been widely adopted in chemometrics and econometrics) for RFFA using Generalised Least Squares Regression (GLSR) and compares it with the most commonly adopted LOO validation approach. The study uses simulated and regional flood data from the state of New South Wales in Australia. It is found that when developing hydrologic regression models, application of the MCCV is likely to result in a more parsimonious model than the LOO. It has also been found that the MCCV can provide a more realistic estimate of a model's predictive ability than the LOO.
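
    The contrast between LOO and MCCV can be sketched with scikit-learn; ordinary least squares stands in for the paper's generalised least squares, and the catchment data are synthetic.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, ShuffleSplit, cross_val_score

rng = np.random.default_rng(2)

# Synthetic "catchment" data: two predictors and a flood-quantile response.
X = rng.normal(size=(60, 2))
y = X @ np.array([1.5, -0.7]) + rng.normal(scale=0.5, size=60)

model = LinearRegression()  # OLS stand-in for the paper's GLSR

# Leave-one-out: each site is left out exactly once.
loo_scores = cross_val_score(model, X, y, cv=LeaveOneOut(),
                             scoring="neg_mean_squared_error")

# Monte Carlo cross-validation: many repeated random 90/10 splits.
mccv = ShuffleSplit(n_splits=200, test_size=0.1, random_state=2)
mccv_scores = cross_val_score(model, X, y, cv=mccv,
                              scoring="neg_mean_squared_error")

print(f"LOO MSE  = {-loo_scores.mean():.3f}")
print(f"MCCV MSE = {-mccv_scores.mean():.3f}")
```

    Because MCCV averages over many larger held-out sets, it tends to penalize overfitted models more heavily than LOO, which is the mechanism behind the paper's finding of more parsimonious models.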

  12. Identifying model error in metabolic flux analysis - a generalized least squares approach.

    PubMed

    Sokolenko, Stanislav; Quattrociocchi, Marco; Aucoin, Marc G

    2016-09-13

    The estimation of intracellular flux through traditional metabolic flux analysis (MFA) using an overdetermined system of equations is a well established practice in metabolic engineering. Despite the continued evolution of the methodology since its introduction, there has been little focus on validation and identification of poor model fit outside of identifying "gross measurement error". The growing complexity of metabolic models, which are increasingly generated from genome-level data, has necessitated robust validation that can directly assess model fit. In this work, MFA calculation is framed as a generalized least squares (GLS) problem, highlighting the applicability of the common t-test for model validation. To differentiate between measurement and model error, we simulate ideal flux profiles directly from the model, perturb them with estimated measurement error, and compare their validation to real data. Application of this strategy to an established Chinese Hamster Ovary (CHO) cell model shows how fluxes validated by traditional means may be largely non-significant due to a lack of model fit. With further simulation, we explore how t-test significance relates to calculation error and show that fluxes found to be non-significant have 2-4 fold larger error (if measurement uncertainty is in the 5-10 % range). The proposed validation method goes beyond traditional detection of "gross measurement error" to identify lack of fit between model and data. Although the focus of this work is on t-test validation and traditional MFA, the presented framework is readily applicable to other regression analysis methods and MFA formulations.
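
    The GLS framing of the flux calculation, and the t-statistics that follow from it, can be sketched on a toy overdetermined system; the stoichiometry-like matrix and covariance values below are illustrative, not the CHO model from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy overdetermined MFA-style system: measured rates m = A v + noise,
# where v holds two unknown fluxes. All numbers are illustrative.
A = np.array([[1.0,  0.0],
              [0.0,  1.0],
              [1.0,  1.0],
              [1.0, -1.0]])
v_true = np.array([2.0, 1.0])
cov = np.diag([0.05, 0.05, 0.10, 0.10])   # measurement covariance
m = A @ v_true + rng.multivariate_normal(np.zeros(4), cov)

# GLS estimate and its covariance: v = (A' C^-1 A)^-1 A' C^-1 m
C_inv = np.linalg.inv(cov)
V = np.linalg.inv(A.T @ C_inv @ A)        # covariance of the flux estimates
v_hat = V @ A.T @ C_inv @ m

# t-statistic for each flux: estimate divided by its standard error.
se = np.sqrt(np.diag(V))
t = v_hat / se
print("fluxes:", v_hat, "t-stats:", t)
```

    A flux whose t-statistic is small relative to the measurement noise would be flagged as non-significant, which is the kind of lack-of-fit signal the paper's validation exploits.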

  13. Aeroservoelastic Model Validation and Test Data Analysis of the F/A-18 Active Aeroelastic Wing

    NASA Technical Reports Server (NTRS)

    Brenner, Martin J.; Prazenica, Richard J.

    2003-01-01

    Model validation and flight test data analysis require careful consideration of the effects of uncertainty, noise, and nonlinearity. Uncertainty prevails in the data analysis techniques and results in a composite model uncertainty from unmodeled dynamics, assumptions and mechanics of the estimation procedures, noise, and nonlinearity. A fundamental requirement for reliable and robust model development is an attempt to account for each of these sources of error, in particular, for model validation, robust stability prediction, and flight control system development. This paper is concerned with data processing procedures for uncertainty reduction in model validation for stability estimation and nonlinear identification. F/A-18 Active Aeroelastic Wing (AAW) aircraft data is used to demonstrate signal representation effects on uncertain model development, stability estimation, and nonlinear identification. Data is decomposed using adaptive orthonormal best-basis and wavelet-basis signal decompositions for signal denoising into linear and nonlinear identification algorithms. Nonlinear identification from a wavelet-based Volterra kernel procedure is used to extract nonlinear dynamics from aeroelastic responses, and to assist model development and uncertainty reduction for model validation and stability prediction by removing a class of nonlinearity from the uncertainty.

  14. Advanced Concept Modeling

    NASA Technical Reports Server (NTRS)

    Chaput, Armand; Johns, Zachary; Hodges, Todd; Selfridge, Justin; Bevirt, Joeben; Ahuja, Vivek

    2015-01-01

    Advanced Concepts Modeling software validation, analysis, and design. This was a National Institute of Aerospace contract comprising several distinct efforts, ranging from software development and validation for structures and aerodynamics, through flight control development and aeropropulsive analysis, to UAV piloting services.

  15. Longitudinal Models of Reliability and Validity: A Latent Curve Approach.

    ERIC Educational Resources Information Center

    Tisak, John; Tisak, Marie S.

    1996-01-01

    Dynamic generalizations of reliability and validity that will incorporate longitudinal or developmental models, using latent curve analysis, are discussed. A latent curve model formulated to depict change is incorporated into the classical definitions of reliability and validity. The approach is illustrated with sociological and psychological…

  16. How to test validity in orthodontic research: a mixed dentition analysis example.

    PubMed

    Donatelli, Richard E; Lee, Shin-Jae

    2015-02-01

    The data used to test the validity of a prediction method should be different from the data used to generate the prediction model. In this study, we explored whether an independent data set is mandatory for testing the validity of a new prediction method and how validity can be tested without independent new data. Several validation methods were compared in an example using data from a mixed dentition analysis with a regression model. The validation errors for real mixed dentition analysis data and simulated data were analyzed for increasingly large data sets. The validation results of both the real and the simulation studies demonstrated that the leave-one-out cross-validation method had the smallest errors. The largest errors occurred with the traditional simple validation method. The differences between the validation methods diminished as the sample size increased. The leave-one-out cross-validation method appears to be an optimal validation method for improving prediction accuracy in data sets with limited sample sizes.
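
    The comparison between a traditional simple (hold-out) validation and leave-one-out cross-validation can be sketched with scikit-learn; the tooth-width style data are synthetic, not the study's mixed dentition measurements.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score, train_test_split

rng = np.random.default_rng(4)

# Illustrative stand-in for mixed dentition data: predict unerupted tooth
# widths from erupted incisor widths (all values synthetic, in mm).
X = rng.normal(22.0, 1.5, size=(80, 1))
y = 0.5 * X[:, 0] + rng.normal(10.0, 0.6, size=80)

# Traditional simple validation: a single train/test split.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=4)
simple_err = np.mean(
    (LinearRegression().fit(X_tr, y_tr).predict(X_te) - y_te) ** 2)

# Leave-one-out cross-validation over the full sample.
loo_err = -cross_val_score(LinearRegression(), X, y, cv=LeaveOneOut(),
                           scoring="neg_mean_squared_error").mean()
print(f"simple hold-out MSE = {simple_err:.3f}, LOO MSE = {loo_err:.3f}")
```

    Leave-one-out uses every observation for both fitting and testing, which is why it is attractive when, as here, the sample size is limited.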

  17. Community-wide validation of geospace model local K-index predictions to support model transition to operations

    NASA Astrophysics Data System (ADS)

    Glocer, A.; Rastätter, L.; Kuznetsova, M.; Pulkkinen, A.; Singer, H. J.; Balch, C.; Weimer, D.; Welling, D.; Wiltberger, M.; Raeder, J.; Weigel, R. S.; McCollough, J.; Wing, S.

    2016-07-01

    We present the latest result of a community-wide space weather model validation effort coordinated among the Community Coordinated Modeling Center (CCMC), NOAA Space Weather Prediction Center (SWPC), model developers, and the broader science community. Validation of geospace models is a critical activity for both building confidence in the science results produced by the models and in assessing the suitability of the models for transition to operations. Indeed, a primary motivation of this work is supporting NOAA/SWPC's effort to select a model or models to be transitioned into operations. Our validation efforts focus on the ability of the models to reproduce a regional index of geomagnetic disturbance, the local K-index. Our analysis includes six events representing a range of geomagnetic activity conditions and six geomagnetic observatories representing midlatitude and high-latitude locations. Contingency tables, skill scores, and distribution metrics are used for the quantitative analysis of model performance. We consider model performance on an event-by-event basis, aggregated over events, at specific station locations, and separated into high-latitude and midlatitude domains. A summary of results is presented in this report, and an online tool for detailed analysis is available at the CCMC.
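
    The contingency-table skill scores used in such validations can be computed directly; the counts below are invented for illustration, not results from the six events in the study.

```python
# Illustrative 2x2 contingency table for "disturbed K-index" forecasts:
# rows = forecast yes/no, columns = observed yes/no. Counts are made up.
hits, false_alarms = 30, 10
misses, correct_negatives = 15, 145
n = hits + false_alarms + misses + correct_negatives

# Heidke Skill Score: fraction correct relative to random chance.
expected = ((hits + misses) * (hits + false_alarms)
            + (correct_negatives + misses)
            * (correct_negatives + false_alarms)) / n
hss = (hits + correct_negatives - expected) / (n - expected)

pod = hits / (hits + misses)                # probability of detection
far = false_alarms / (hits + false_alarms)  # false alarm ratio
print(f"HSS = {hss:.3f}, POD = {pod:.3f}, FAR = {far:.3f}")
```

    Scores like these, aggregated per event and per station, allow the quantitative event-by-event and latitude-domain comparisons the abstract describes.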

  19. A diagnostic model for the detection of sensitization to wheat allergens was developed and validated in bakery workers.

    PubMed

    Suarthana, Eva; Vergouwe, Yvonne; Moons, Karel G; de Monchy, Jan; Grobbee, Diederick; Heederik, Dick; Meijer, Evert

    2010-09-01

    To develop and validate a prediction model to detect sensitization to wheat allergens in bakery workers. The prediction model was developed in 867 Dutch bakery workers (development set, prevalence of sensitization 13%) and included questionnaire items as candidate predictors. First, principal component analysis was used to reduce the number of candidate predictors. Then, multivariable logistic regression analysis was used to develop the model. Internal validation and the extent of optimism were assessed with bootstrapping. External validation was studied in 390 independent Dutch bakery workers (validation set, prevalence of sensitization 20%). The prediction model contained the predictors nasoconjunctival symptoms, asthma symptoms, shortness of breath and wheeze, work-related upper and lower respiratory symptoms, and traditional bakery. The model showed good discrimination, with an area under the receiver operating characteristic (ROC) curve of 0.76 (0.75 after internal validation). Application of the model in the validation set gave reasonable discrimination (ROC area = 0.69) and good calibration after a small adjustment of the model intercept. A simple model with questionnaire items only can be used to stratify bakers according to their risk of sensitization to wheat allergens. Its use may increase the cost-effectiveness of (subsequent) medical surveillance.
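
    The development-and-validation pattern described here (logistic regression on questionnaire items, with discrimination measured by ROC area on held-out workers) can be sketched as follows; all features, coefficients, and data are synthetic, not the Dutch cohorts.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)

# Synthetic 0/1 questionnaire items and a sensitization outcome generated
# from an assumed logistic relationship; not the study's data.
n = 800
X = rng.integers(0, 2, size=(n, 5)).astype(float)
logit = -2.0 + X @ np.array([0.8, 0.6, 0.5, 0.7, 0.4])
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Development set for fitting, validation set for external-style testing.
X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.33,
                                              random_state=5)
model = LogisticRegression().fit(X_dev, y_dev)

# Discrimination on held-out workers, as an ROC area.
auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
print(f"validation ROC area = {auc:.2f}")
```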

  20. Verification and validation of a Work Domain Analysis with turing machine task analysis.

    PubMed

    Rechard, J; Bignon, A; Berruet, P; Morineau, T

    2015-03-01

    While the use of Work Domain Analysis as a methodological framework in cognitive engineering is increasing rapidly, verification and validation of the work domain models produced by this method are becoming a significant issue. In this article, we propose the use of a method based on Turing machine formalism, named "Turing Machine Task Analysis", to verify and validate work domain models. The application of this method to two work domain analyses, one of car driving (an "intentional" domain) and the other of a ship water system (a "causal" domain), showed the possibility of highlighting improvements needed by these models. More precisely, the step-by-step analysis of a degraded task scenario in each work domain model pointed out unsatisfactory aspects of the initial modelling, such as overspecification, underspecification, omission of work domain affordances, or unsuitable inclusion of objects in the work domain model.

  1. Construct Validation of the Louisiana School Analysis Model (SAM) Instructional Staff Questionnaire

    ERIC Educational Resources Information Center

    Bray-Clark, Nikki; Bates, Reid

    2005-01-01

    The purpose of this study was to validate the Louisiana SAM Instructional Staff Questionnaire (SISQ), a key component of the Louisiana School Analysis Model. The model was designed as a comprehensive evaluation tool for schools. Principal axis factoring with oblique rotation was used to uncover the underlying structure of the SISQ. (Contains 1 table.)

  2. A Simulation Study of Threats to Validity in Quasi-Experimental Designs: Interrelationship between Design, Measurement, and Analysis.

    PubMed

    Holgado-Tello, Fco P; Chacón-Moscoso, Salvador; Sanduvete-Chaves, Susana; Pérez-Gil, José A

    2016-01-01

    The Campbellian tradition provides a conceptual framework to assess threats to validity. On the other hand, different models of causal analysis have been developed to control estimation biases in different research designs. However, the link between design features, measurement issues, and concrete impact estimation analyses is weak. In order to provide an empirical solution to this problem, we use Structural Equation Modeling (SEM) as a first approximation to operationalize the analytical implications of threats to validity in quasi-experimental designs. Based on the analogies established between the Classical Test Theory (CTT) and causal analysis, we describe an empirical study based on SEM in which range restriction and statistical power have been simulated in two different models: (1) A multistate model in the control condition (pre-test); and (2) A single-trait-multistate model in the control condition (post-test), adding a new mediator latent exogenous (independent) variable that represents a threat to validity. Results show, empirically, how the differences between both the models could be partially or totally attributed to these threats. Therefore, SEM provides a useful tool to analyze the influence of potential threats to validity.

  4. Validation analysis of probabilistic models of dietary exposure to food additives.

    PubMed

    Gilsenan, M B; Thompson, R L; Lambe, J; Gibney, M J

    2003-10-01

    The validity of a range of simple conceptual models designed specifically for the estimation of food additive intakes using probabilistic analysis was assessed. Modelled intake estimates that fell below traditional conservative point estimates of intake and above 'true' additive intakes (calculated from a reference database at brand level) were considered to be in a valid region. Models were developed for 10 food additives by combining food intake data, the probability of an additive being present in a food group, and additive concentration data. Food intake and additive concentration data were entered as raw data or as a lognormal distribution, and the probability of an additive being present was entered based on the per cent of brands or the per cent of eating occasions within a food group that contained the additive. Since each of the three model components had two possible modes of input, the validity of eight (2³) model combinations was assessed. All model inputs were derived from the reference database. An iterative approach was employed in which the validity of individual model components was assessed first, followed by validation of the full conceptual models. While the distributions of intake estimates from the models fell below conservative intakes, which assume that the additive is present at maximum permitted levels (MPLs) in all foods in which it is permitted, intake estimates were not consistently above 'true' intakes. These analyses indicate the need for more complex models for the estimation of food additive intakes using probabilistic analysis. Such models should incorporate information on market share and/or brand loyalty.
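
    One of the eight model combinations can be sketched as a Monte Carlo simulation that combines a lognormal food intake, a lognormal additive concentration, and a probability of presence; every distribution parameter below is invented for illustration, not taken from the reference database.

```python
import numpy as np

rng = np.random.default_rng(6)
n_sim = 10_000

# One model combination, with made-up parameters: lognormal food intake,
# lognormal additive concentration, and per cent of brands containing it.
intake_g = rng.lognormal(mean=4.0, sigma=0.5, size=n_sim)        # g food/day
conc_mg_per_g = rng.lognormal(mean=-3.0, sigma=0.4, size=n_sim)  # mg/g food
present = rng.random(n_sim) < 0.6        # additive present in 60% of brands

additive_mg = intake_g * conc_mg_per_g * present   # mg additive/day

# Compare the modelled distribution against a conservative point estimate
# that assumes the additive is always present at a maximum permitted level.
mpl_mg_per_g = 0.2
conservative = intake_g.mean() * mpl_mg_per_g
print(f"modelled P95 = {np.percentile(additive_mg, 95):.1f} mg/day; "
      f"conservative MPL estimate = {conservative:.1f} mg/day")
```

    The validity check in the paper amounts to asking whether the modelled distribution sits between such a conservative MPL-based estimate and the brand-level 'true' intakes.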

  5. Classification and regression tree analysis of acute-on-chronic hepatitis B liver failure: Seeing the forest for the trees.

    PubMed

    Shi, K-Q; Zhou, Y-Y; Yan, H-D; Li, H; Wu, F-L; Xie, Y-Y; Braddock, M; Lin, X-Y; Zheng, M-H

    2017-02-01

    At present, there is no ideal model for predicting the short-term outcome of patients with acute-on-chronic hepatitis B liver failure (ACHBLF). This study aimed to establish and validate a prognostic model using classification and regression tree (CART) analysis. A total of 1047 patients with suspected ACHBLF from two separate medical centres were screened, forming a derivation cohort and a validation cohort, respectively. CART analysis was applied to predict the 3-month mortality of patients with ACHBLF. The accuracy of the CART model was tested using the area under the receiver operating characteristic curve and compared with the model for end-stage liver disease (MELD) score and a new logistic regression model. CART analysis identified four variables as prognostic factors for ACHBLF: total bilirubin, age, serum sodium, and INR, and three distinct risk groups: low risk (4.2%), intermediate risk (30.2%-53.2%), and high risk (81.4%-96.9%). The new logistic regression model was constructed from four independent factors (age, total bilirubin, serum sodium, and prothrombin activity) identified by multivariate logistic regression analysis. The performance of the CART model (0.896) was similar to that of the logistic regression model (0.914, P=.382) and exceeded that of the MELD score (0.667, P<.001). The results were confirmed in the validation cohort. We have developed and validated a novel CART model superior to MELD for predicting the three-month mortality of patients with ACHBLF. The CART model could thus facilitate medical decision-making and provide clinicians with a validated, practical bedside tool for ACHBLF risk stratification.
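
    A CART model of this kind can be sketched with scikit-learn's `DecisionTreeClassifier`; the four predictors mirror those named in the abstract, but the data, the risk relationship, and the resulting tree are synthetic, not the clinical cohorts.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(7)

# Synthetic stand-ins for the four CART predictors; not real patient data.
n = 600
X = np.column_stack([rng.normal(300, 120, n),   # total bilirubin (umol/L)
                     rng.normal(45, 12, n),     # age (years)
                     rng.normal(135, 5, n),     # serum sodium (mmol/L)
                     rng.normal(2.5, 0.8, n)])  # INR
risk = 0.01 * X[:, 0] + 0.05 * X[:, 1] - 0.1 * X[:, 2] + 1.5 * X[:, 3]
y = (risk > np.median(risk)).astype(int)        # assumed 3-month mortality

X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.3,
                                              random_state=7)

# A shallow tree keeps the result a readable set of bedside decision rules.
cart = DecisionTreeClassifier(max_depth=3, random_state=7).fit(X_dev, y_dev)
auc = roc_auc_score(y_val, cart.predict_proba(X_val)[:, 1])
print(f"validation AUROC = {auc:.3f}")
```

    Each leaf of such a tree corresponds to a risk group with its own observed mortality, which is how the paper arrives at its low/intermediate/high strata.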

  6. Validation of the measure automobile emissions model : a statistical analysis

    DOT National Transportation Integrated Search

    2000-09-01

    The Mobile Emissions Assessment System for Urban and Regional Evaluation (MEASURE) model provides an external validation capability for the hot stabilized option; the model is one of several new modal emissions models designed to predict hot stabilized e...

  7. Cross-validation to select Bayesian hierarchical models in phylogenetics.

    PubMed

    Duchêne, Sebastián; Duchêne, David A; Di Giallonardo, Francesca; Eden, John-Sebastian; Geoghegan, Jemma L; Holt, Kathryn E; Ho, Simon Y W; Holmes, Edward C

    2016-05-26

    Recent developments in Bayesian phylogenetic models have increased the range of inferences that can be drawn from molecular sequence data. Accordingly, model selection has become an important component of phylogenetic analysis. Methods of model selection generally consider the likelihood of the data under the model in question. In the context of Bayesian phylogenetics, the most common approach involves estimating the marginal likelihood, which is typically done by integrating the likelihood across model parameters, weighted by the prior. Although this method is accurate, it is sensitive to the presence of improper priors. We explored an alternative approach based on cross-validation that is widely used in evolutionary analysis. This involves comparing models according to their predictive performance. We analysed simulated data and a range of viral and bacterial data sets using a cross-validation approach to compare a variety of molecular clock and demographic models. Our results show that cross-validation can be effective in distinguishing between strict- and relaxed-clock models and in identifying demographic models that allow growth in population size over time. In most of our empirical data analyses, the model selected using cross-validation was able to match that selected using marginal-likelihood estimation. The accuracy of cross-validation appears to improve with longer sequence data, particularly when distinguishing between relaxed-clock models. Cross-validation is a useful method for Bayesian phylogenetic model selection. This method can be readily implemented even when considering complex models where selecting an appropriate prior for all parameters may be difficult.

  8. A novel integrated framework and improved methodology of computer-aided drug design.

    PubMed

    Chen, Calvin Yu-Chian

    2013-01-01

    Computer-aided drug design (CADD) is a critical initiating step of drug development, but a single model capable of covering all designing aspects remains to be elucidated. Hence, we developed a drug design modeling framework that integrates multiple approaches, including machine learning based quantitative structure-activity relationship (QSAR) analysis, 3D-QSAR, Bayesian network, pharmacophore modeling, and structure-based docking algorithm. Restrictions for each model were defined for improved individual and overall accuracy. An integration method was applied to join the results from each model to minimize bias and errors. In addition, the integrated model adopts both static and dynamic analysis to validate the intermolecular stabilities of the receptor-ligand conformation. The proposed protocol was applied to identifying HER2 inhibitors from traditional Chinese medicine (TCM) as an example for validating our new protocol. Eight potent leads were identified from six TCM sources. A joint validation system comprised of comparative molecular field analysis, comparative molecular similarity indices analysis, and molecular dynamics simulation further characterized the candidates into three potential binding conformations and validated the binding stability of each protein-ligand complex. The ligand pathway was also performed to predict the ligand "in" and "exit" from the binding site. In summary, we propose a novel systematic CADD methodology for the identification, analysis, and characterization of drug-like candidates.

  9. Maximizing the Information and Validity of a Linear Composite in the Factor Analysis Model for Continuous Item Responses

    ERIC Educational Resources Information Center

    Ferrando, Pere J.

    2008-01-01

    This paper develops results and procedures for obtaining linear composites of factor scores that maximize: (a) test information, and (b) validity with respect to external variables in the multiple factor analysis (FA) model. I treat FA as a multidimensional item response theory model, and use Ackerman's multidimensional information approach based…

  10. I-15 San Diego, California, model validation and calibration report.

    DOT National Transportation Integrated Search

    2010-02-01

    The Integrated Corridor Management (ICM) initiative requires the calibration and validation of simulation models used in the Analysis, Modeling, and Simulation of Pioneer Site proposed integrated corridors. This report summarizes the results and proc...

  11. The SCALE Verified, Archived Library of Inputs and Data - VALID

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marshall, William BJ J; Rearden, Bradley T

The Verified, Archived Library of Inputs and Data (VALID) at ORNL contains high quality, independently reviewed models and results that improve confidence in analysis. VALID is developed and maintained according to a procedure of the SCALE quality assurance (QA) plan. This paper reviews the origins of the procedure and its intended purpose, the philosophy of the procedure, some highlights of its implementation, and the future of the procedure and associated VALID library. The original focus of the procedure was the generation of high-quality models that could be archived at ORNL and applied to many studies. The review process associated with model generation minimized the chances of errors in these archived models. Subsequently, the scope of the library and procedure was expanded to provide high quality, reviewed sensitivity data files for deployment through the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE). Sensitivity data files for approximately 400 such models are currently available. The VALID procedure and library continue fulfilling these multiple roles. The VALID procedure is based on the quality assurance principles of ISO 9001 and nuclear safety analysis. Some of these key concepts include: independent generation and review of information, generation and review by qualified individuals, use of appropriate references for design data and documentation, and retrievability of the models, results, and documentation associated with entries in the library. Some highlights of the detailed procedure are discussed to provide background on its implementation and to indicate limitations of data extracted from VALID for use by the broader community. Specifically, external users of data generated within VALID must take responsibility for ensuring that the files are used within the QA framework of their organization and that use is appropriate. The future plans for the VALID library include expansion to include additional experiments from the IHECSBE, to include experiments from areas beyond criticality safety, such as reactor physics and shielding, and to include application models. In the future, external SCALE users may also obtain qualification under the VALID procedure and be involved in expanding the library. The VALID library provides a pathway for the criticality safety community to leverage modeling and analysis expertise at ORNL.

  12. Validating the FOCUS Model Through an Analysis of Identity Fragmentation in Nigerian Social Media

    DTIC Science & Technology

    2015-09-01

TRAC-M-TM-15-032, September 2015. Validating the FOCUS Model Through an Analysis of Identity Fragmentation in Nigerian Social Media. Authors: MAJ Adam Haupt and Dr. Camber Warren. TRAC Project Code 060113.

  13. Validation of Metrics as Error Predictors

    NASA Astrophysics Data System (ADS)

    Mendling, Jan

    In this chapter, we test the validity of metrics that were defined in the previous chapter for predicting errors in EPC business process models. In Section 5.1, we provide an overview of how the analysis data is generated. Section 5.2 describes the sample of EPCs from practice that we use for the analysis. Here we discuss a disaggregation by the EPC model group and by error as well as a correlation analysis between metrics and error. Based on this sample, we calculate a logistic regression model for predicting error probability with the metrics as input variables in Section 5.3. In Section 5.4, we then test the regression function for an independent sample of EPC models from textbooks as a cross-validation. Section 5.5 summarizes the findings.
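The core step the chapter describes, a logistic regression mapping model metrics to an error probability, can be sketched as follows (a single synthetic complexity metric fitted by batch gradient descent; the data are illustrative, not the EPC sample):

```python
import math
import random

def fit_logistic(xs, ys, lr=0.1, steps=2000):
    """Fit P(error | x) = sigmoid(a + b*x) by batch gradient descent."""
    a = b = 0.0
    n = len(xs)
    for _ in range(steps):
        ga = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(a + b * x)))
            ga += p - y
            gb += (p - y) * x
        a -= lr * ga / n
        b -= lr * gb / n
    return a, b

random.seed(1)
# Synthetic data: a larger complexity metric raises the error probability.
xs = [random.uniform(0, 10) for _ in range(200)]
ys = [1 if random.random() < 1 / (1 + math.exp(-(x - 5))) else 0 for x in xs]
a, b = fit_logistic(xs, ys)
# A positive fitted b means higher metric values predict more errors.
```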

  14. Application of Petri net theory for modelling and validation of the sucrose breakdown pathway in the potato tuber.

    PubMed

    Koch, Ina; Junker, Björn H; Heiner, Monika

    2005-04-01

Because of the complexity of metabolic networks and their regulation, formal modelling is a useful method to improve the understanding of these systems. An essential step in network modelling is to validate the network model. Petri net theory provides algorithms and methods, which can be applied directly to metabolic network modelling and analysis in order to validate the model. The metabolism between sucrose and starch in the potato tuber is of great research interest. Although this metabolism is one of the best studied in sink organs, it is not yet fully understood. We provide an approach for model validation of metabolic networks using Petri net theory, which we demonstrate for the sucrose breakdown pathway in the potato tuber. We start with hierarchical modelling of the metabolic network as a Petri net and continue with the analysis of qualitative properties of the network. The results characterize the net structure and give insights into the complex net behaviour.
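Petri-net validation of a metabolic model rests on structural properties such as place invariants. A minimal sketch, using a hypothetical one-transition net for sucrose cleavage rather than the paper's full network:

```python
# Places: 0 = sucrose, 1 = glucose, 2 = fructose.
# One transition t1 (invertase-style cleavage): sucrose -> glucose + fructose.
# Incidence matrix C[place][transition] = tokens produced minus consumed.
C = [
    [-1],  # sucrose is consumed by t1
    [+1],  # glucose is produced by t1
    [+1],  # fructose is produced by t1
]

def is_p_invariant(y, C):
    """A P-invariant y satisfies y^T * C = 0: the y-weighted token sum
    is conserved under every transition firing."""
    n_transitions = len(C[0])
    return all(
        sum(y[p] * C[p][t] for p in range(len(C))) == 0
        for t in range(n_transitions)
    )

conserved = is_p_invariant([2, 1, 1], C)      # 2*sucrose + glucose + fructose
not_conserved = is_p_invariant([1, 0, 0], C)  # sucrose alone is not conserved
```

Checking that biologically expected invariants hold (and unexpected ones do not) is one of the qualitative validation steps the abstract refers to.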

  15. Applicability Analysis of Validation Evidence for Biomedical Computational Models

    DOE PAGES

    Pathmanathan, Pras; Gray, Richard A.; Romero, Vicente J.; ...

    2017-09-07

Computational modeling has the potential to revolutionize medicine the way it transformed engineering. However, despite decades of work, there has only been limited progress to successfully translate modeling research to patient care. One major difficulty which often occurs with biomedical computational models is an inability to perform validation in a setting that closely resembles how the model will be used. For example, for a biomedical model that makes in vivo clinically relevant predictions, direct validation of predictions may be impossible for ethical, technological, or financial reasons. Unavoidable limitations inherent to the validation process lead to challenges in evaluating the credibility of biomedical model predictions. Therefore, when evaluating biomedical models, it is critical to rigorously assess applicability, that is, the relevance of the computational model, and its validation evidence to the proposed context of use (COU). However, there are no well-established methods for assessing applicability. In this paper, we present a novel framework for performing applicability analysis and demonstrate its use with a medical device computational model. The framework provides a systematic, step-by-step method for breaking down the broad question of applicability into a series of focused questions, which may be addressed using supporting evidence and subject matter expertise. The framework can be used for model justification, model assessment, and validation planning. While motivated by biomedical models, it is relevant to a broad range of disciplines and underlying physics. Finally, the proposed applicability framework could help overcome some of the barriers inherent to validation of, and aid clinical implementation of, biomedical models.

  16. Applicability Analysis of Validation Evidence for Biomedical Computational Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pathmanathan, Pras; Gray, Richard A.; Romero, Vicente J.

Computational modeling has the potential to revolutionize medicine the way it transformed engineering. However, despite decades of work, there has only been limited progress to successfully translate modeling research to patient care. One major difficulty which often occurs with biomedical computational models is an inability to perform validation in a setting that closely resembles how the model will be used. For example, for a biomedical model that makes in vivo clinically relevant predictions, direct validation of predictions may be impossible for ethical, technological, or financial reasons. Unavoidable limitations inherent to the validation process lead to challenges in evaluating the credibility of biomedical model predictions. Therefore, when evaluating biomedical models, it is critical to rigorously assess applicability, that is, the relevance of the computational model, and its validation evidence to the proposed context of use (COU). However, there are no well-established methods for assessing applicability. In this paper, we present a novel framework for performing applicability analysis and demonstrate its use with a medical device computational model. The framework provides a systematic, step-by-step method for breaking down the broad question of applicability into a series of focused questions, which may be addressed using supporting evidence and subject matter expertise. The framework can be used for model justification, model assessment, and validation planning. While motivated by biomedical models, it is relevant to a broad range of disciplines and underlying physics. Finally, the proposed applicability framework could help overcome some of the barriers inherent to validation of, and aid clinical implementation of, biomedical models.

  17. External model validation of binary clinical risk prediction models in cardiovascular and thoracic surgery.

    PubMed

    Hickey, Graeme L; Blackstone, Eugene H

    2016-08-01

    Clinical risk-prediction models serve an important role in healthcare. They are used for clinical decision-making and measuring the performance of healthcare providers. To establish confidence in a model, external model validation is imperative. When designing such an external model validation study, thought must be given to patient selection, risk factor and outcome definitions, missing data, and the transparent reporting of the analysis. In addition, there are a number of statistical methods available for external model validation. Execution of a rigorous external validation study rests in proper study design, application of suitable statistical methods, and transparent reporting. Copyright © 2016 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.

  18. Validity and reliability of Chinese version of Adult Carer Quality of Life questionnaire (AC-QoL) in family caregivers of stroke survivors

    PubMed Central

    Li, Yingshuang; Ding, Chunge

    2017-01-01

The Adult Carer Quality of Life questionnaire (AC-QoL) is a reliable and valid instrument used to assess the quality of life (QoL) of adult family caregivers. We explored the psychometric properties of a Chinese version of the AC-QoL and tested its reliability and validity in 409 Chinese stroke caregivers. We used item-total correlation and extreme group comparison for item analysis. To evaluate its reliability, we used a test-retest reliability approach and the intraclass correlation coefficient (ICC), together with Cronbach's alpha and a model-based internal consistency index; to evaluate its validity, we used scale content validity, confirmatory factor analysis (CFA), and exploratory factor analysis (EFA) via principal component analysis with varimax rotation. We found that the CFA did not confirm the original factor model, and our EFA yielded a 31-item measure with a five-factor model. In conclusion, although some items performed differently between the original English-language version and our Chinese-language version, the translated AC-QoL is a reliable and valid tool for assessing the quality of life of stroke caregivers in mainland China. It is a comprehensive measure for understanding caregivers and has the potential to serve as a screening tool for assessing caregiver QoL. PMID:29131845
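One of the reliability statistics named in this record, Cronbach's alpha, is straightforward to compute. A stdlib sketch with made-up item scores (not the AC-QoL data):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """items: one list of scores per item, respondents in the same order.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_var = sum(pvariance(scores) for scores in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Three perfectly consistent (identical) items give alpha = 1.
scores = [3, 4, 5, 2, 4]
alpha = cronbach_alpha([scores, scores, scores])
```

Less consistent items shrink the ratio of total-score variance to summed item variances and pull alpha below 1.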

  19. The psychometric validation of the Social Problem-Solving Inventory--Revised with UK incarcerated sexual offenders.

    PubMed

    Wakeling, Helen C

    2007-09-01

    This study examined the reliability and validity of the Social Problem-Solving Inventory--Revised (SPSI-R; D'Zurilla, Nezu, & Maydeu-Olivares, 2002) with a population of incarcerated sexual offenders. An availability sample of 499 adult male sexual offenders was used. The SPSI-R had good reliability measured by internal consistency and test-retest reliability, and adequate validity. Construct validity was determined via factor analysis. An exploratory factor analysis extracted a two-factor model. This model was then tested against the theory-driven five-factor model using confirmatory factor analysis. The five-factor model was selected as the better fitting of the two, and confirmed the model according to social problem-solving theory (D'Zurilla & Nezu, 1982). The SPSI-R had good convergent validity; significant correlations were found between SPSI-R subscales and measures of self-esteem, impulsivity, and locus of control. SPSI-R subscales were however found to significantly correlate with a measure of socially desirable responding. This finding is discussed in relation to recent research suggesting that impression management may not invalidate self-report measures (e.g. Mills & Kroner, 2005). The SPSI-R was sensitive to sexual offender intervention, with problem-solving improving pre to post-treatment in both rapists and child molesters. The study concludes that the SPSI-R is a reasonably internally valid and appropriate tool to assess problem-solving in sexual offenders. However future research should cross-validate the SPSI-R with other behavioural outcomes to examine the external validity of the measure. Furthermore, future research should utilise a control group to determine treatment impact.

  20. Development and Validation of the Work-Related Well-Being Index: Analysis of the Federal Employee Viewpoint Survey.

    PubMed

    Eaton, Jennifer L; Mohr, David C; Hodgson, Michael J; McPhaul, Kathleen M

    2018-02-01

To describe development and validation of the work-related well-being (WRWB) index. Principal components analysis was performed using Federal Employee Viewpoint Survey (FEVS) data (N = 392,752) to extract variables representing worker well-being constructs. Confirmatory factor analysis was performed to verify factor structure. To validate the WRWB index, we used multiple regression analysis to examine relationships with burnout-associated outcomes. Principal components analysis identified three positive psychology constructs: "Work Positivity", "Co-worker Relationships", and "Work Mastery". An 11-item index explaining 63.5% of the variance was achieved. The structural equation model provided a very good fit to the data. Higher WRWB scores were positively associated with all three employee experience measures examined in regression models. The new WRWB index shows promise as a valid and widely accessible instrument to assess worker well-being.

  1. A ferrofluid based energy harvester: Computational modeling, analysis, and experimental validation

    NASA Astrophysics Data System (ADS)

    Liu, Qi; Alazemi, Saad F.; Daqaq, Mohammed F.; Li, Gang

    2018-03-01

A computational model is described and implemented in this work to analyze the performance of a ferrofluid based electromagnetic energy harvester. The energy harvester converts ambient vibratory energy into an electromotive force through a sloshing motion of a ferrofluid. The computational model solves the coupled Maxwell's equations and Navier-Stokes equations for the dynamic behavior of the magnetic field and fluid motion. The model is validated against experimental results for eight different configurations of the system. The validated model is then employed to study the underlying mechanisms that determine the electromotive force of the energy harvester. Furthermore, computational analysis is performed to test the effect of several modeling aspects, such as three-dimensional effects, surface tension, and the type of ferrofluid-magnetic field coupling, on the accuracy of the model prediction.

  2. U.S. 75 Dallas, Texas, Model Validation and Calibration Report

    DOT National Transportation Integrated Search

    2010-02-01

    This report presents the model validation and calibration results of the Integrated Corridor Management (ICM) analysis, modeling, and simulation (AMS) for the U.S. 75 Corridor in Dallas, Texas. The purpose of the project was to estimate the benefits ...

  3. Verification, Validation and Sensitivity Studies in Computational Biomechanics

    PubMed Central

    Anderson, Andrew E.; Ellis, Benjamin J.; Weiss, Jeffrey A.

    2012-01-01

    Computational techniques and software for the analysis of problems in mechanics have naturally moved from their origins in the traditional engineering disciplines to the study of cell, tissue and organ biomechanics. Increasingly complex models have been developed to describe and predict the mechanical behavior of such biological systems. While the availability of advanced computational tools has led to exciting research advances in the field, the utility of these models is often the subject of criticism due to inadequate model verification and validation. The objective of this review is to present the concepts of verification, validation and sensitivity studies with regard to the construction, analysis and interpretation of models in computational biomechanics. Specific examples from the field are discussed. It is hoped that this review will serve as a guide to the use of verification and validation principles in the field of computational biomechanics, thereby improving the peer acceptance of studies that use computational modeling techniques. PMID:17558646

  4. European Organization for Research and Treatment of Cancer Quality of Life Questionnaire Core 30: factorial models to Brazilian cancer patients

    PubMed Central

    Campos, Juliana Alvares Duarte Bonini; Spexoto, Maria Cláudia Bernardes; da Silva, Wanderson Roberto; Serrano, Sergio Vicente; Marôco, João

    2018-01-01

    ABSTRACT Objective To evaluate the psychometric properties of the seven theoretical models proposed in the literature for European Organization for Research and Treatment of Cancer Quality of Life Questionnaire Core 30 (EORTC QLQ-C30), when applied to a sample of Brazilian cancer patients. Methods Content and construct validity (factorial, convergent, discriminant) were estimated. Confirmatory factor analysis was performed. Convergent validity was analyzed using the average variance extracted. Discriminant validity was analyzed using correlational analysis. Internal consistency and composite reliability were used to assess the reliability of instrument. Results A total of 1,020 cancer patients participated. The mean age was 53.3±13.0 years, and 62% were female. All models showed adequate factorial validity for the study sample. Convergent and discriminant validities and the reliability were compromised in all of the models for all of the single items referring to symptoms, as well as for the “physical function” and “cognitive function” factors. Conclusion All theoretical models assessed in this study presented adequate factorial validity when applied to Brazilian cancer patients. The choice of the best model for use in research and/or clinical protocols should be centered on the purpose and underlying theory of each model. PMID:29694609

  5. Integrated corridor management (ICM) analysis, modeling, and simulation (AMS) for Minneapolis site : model calibration and validation report.

    DOT National Transportation Integrated Search

    2010-02-01

    This technical report documents the calibration and validation of the baseline (2008) mesoscopic model for the I-394 Minneapolis, Minnesota, Pioneer Site. DynusT was selected as the mesoscopic model for analyzing operating conditions in the I-394 cor...

  6. Multimethod latent class analysis

    PubMed Central

    Nussbeck, Fridtjof W.; Eid, Michael

    2015-01-01

    Correct and, hence, valid classifications of individuals are of high importance in the social sciences as these classifications are the basis for diagnoses and/or the assignment to a treatment. The via regia to inspect the validity of psychological ratings is the multitrait-multimethod (MTMM) approach. First, a latent variable model for the analysis of rater agreement (latent rater agreement model) will be presented that allows for the analysis of convergent validity between different measurement approaches (e.g., raters). Models of rater agreement are transferred to the level of latent variables. Second, the latent rater agreement model will be extended to a more informative MTMM latent class model. This model allows for estimating (i) the convergence of ratings, (ii) method biases in terms of differential latent distributions of raters and differential associations of categorizations within raters (specific rater bias), and (iii) the distinguishability of categories indicating if categories are satisfyingly distinct from each other. Finally, an empirical application is presented to exemplify the interpretation of the MTMM latent class model. PMID:26441714

  7. Validation of 2D flood models with insurance claims

    NASA Astrophysics Data System (ADS)

    Zischg, Andreas Paul; Mosimann, Markus; Bernet, Daniel Benjamin; Röthlisberger, Veronika

    2018-02-01

Flood impact modelling requires reliable models for the simulation of flood processes. In recent years, flood inundation models have been remarkably improved and widely used for flood hazard simulation, flood exposure and loss analyses. In this study, we validate a 2D inundation model for the purpose of flood exposure analysis at the river reach scale. We validate the BASEMENT simulation model with insurance claims using conventional validation metrics. The flood model is established on the basis of available topographic data in a high spatial resolution for four test cases. The validation metrics were calculated with two different datasets: a dataset of event documentation reporting flooded areas and a dataset of insurance claims. In three out of four test cases, the model fit relating to insurance claims is slightly lower than the model fit computed on the basis of the observed inundation areas. This comparison between two independent validation data sets suggests that validation metrics using insurance claims can be compared to conventional validation data, such as the flooded area. However, a validation on the basis of insurance claims might be more conservative in cases where model errors are more pronounced in areas with a high density of values at risk.
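A conventional validation metric for comparing a simulated inundation map against observations (flooded areas or claim locations) is the critical success index over flooded cells. A sketch, assuming both maps have been reduced to sets of flooded cell ids (illustrative; not necessarily the exact metric used in the study):

```python
def critical_success_index(predicted, observed):
    """CSI = hits / (hits + misses + false alarms) for sets of flooded cells."""
    union = predicted | observed
    if not union:
        return 1.0  # both maps dry: trivially perfect agreement
    return len(predicted & observed) / len(union)

# Four predicted and four observed flooded cells, three in common:
example_csi = critical_success_index({1, 2, 3, 4}, {2, 3, 4, 5})  # 3 / 5
```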

  8. Finite element analysis of dental implants with validation: to what extent can we expect the model to predict biological phenomena? A literature review and proposal for classification of a validation process.

    PubMed

    Chang, Yuanhan; Tambe, Abhijit Anil; Maeda, Yoshinobu; Wada, Masahiro; Gonda, Tomoya

    2018-03-08

    A literature review of finite element analysis (FEA) studies of dental implants with their model validation process was performed to establish the criteria for evaluating validation methods with respect to their similarity to biological behavior. An electronic literature search of PubMed was conducted up to January 2017 using the Medical Subject Headings "dental implants" and "finite element analysis." After accessing the full texts, the context of each article was searched using the words "valid" and "validation" and articles in which these words appeared were read to determine whether they met the inclusion criteria for the review. Of 601 articles published from 1997 to 2016, 48 that met the eligibility criteria were selected. The articles were categorized according to their validation method as follows: in vivo experiments in humans (n = 1) and other animals (n = 3), model experiments (n = 32), others' clinical data and past literature (n = 9), and other software (n = 2). Validation techniques with a high level of sufficiency and efficiency are still rare in FEA studies of dental implants. High-level validation, especially using in vivo experiments tied to an accurate finite element method, needs to become an established part of FEA studies. The recognition of a validation process should be considered when judging the practicality of an FEA study.

  9. Time Domain Tool Validation Using ARES I-X Flight Data

    NASA Technical Reports Server (NTRS)

    Hough, Steven; Compton, James; Hannan, Mike; Brandon, Jay

    2011-01-01

The ARES I-X vehicle was launched from NASA's Kennedy Space Center (KSC) on October 28, 2009 at approximately 11:30 EDT. ARES I-X was the first test flight for NASA's ARES I launch vehicle, and it was the first non-Shuttle launch vehicle designed and flown by NASA since Saturn. The ARES I-X had a 4-segment solid rocket booster (SRB) first stage and a dummy upper stage (US) to emulate the properties of the ARES I US. During ARES I-X pre-flight modeling and analysis, six (6) independent time domain simulation tools were developed and cross validated. Each tool represents an independent implementation of a common set of models and parameters in a different simulation framework and architecture. Post flight data and reconstructed models provide the means to validate a subset of the simulations against actual flight data and to assess the accuracy of pre-flight dispersion analysis. Post flight data consists of telemetered Operational Flight Instrumentation (OFI) data primarily focused on flight computer outputs and sensor measurements as well as Best Estimated Trajectory (BET) data that estimates vehicle state information from all available measurement sources. While pre-flight models were found to provide a reasonable prediction of the vehicle flight, reconstructed models were generated to better represent and simulate the ARES I-X flight. Post flight reconstructed models include: SRB propulsion model, thrust vector bias models, mass properties, base aerodynamics, and Meteorological Estimated Trajectory (wind and atmospheric data). The result of the effort is a set of independently developed, high fidelity, time-domain simulation tools that have been cross validated and validated against flight data. This paper presents the process and results of high fidelity aerospace modeling, simulation, analysis and tool validation in the time domain.

  10. Multireader multicase reader studies with binary agreement data: simulation, analysis, validation, and sizing.

    PubMed

    Chen, Weijie; Wunderlich, Adam; Petrick, Nicholas; Gallas, Brandon D

    2014-10-01

We treat multireader multicase (MRMC) reader studies for which a reader's diagnostic assessment is converted to binary agreement (1: agree with the truth state, 0: disagree with the truth state). We present a mathematical model for simulating binary MRMC data with a desired correlation structure across readers, cases, and two modalities, assuming the expected probability of agreement is equal for the two modalities (P1 = P2). This model can be used to validate the coverage probabilities of 95% confidence intervals (of P1, P2, or P1 − P2 when P1 − P2 = 0), validate the type I error of a superiority hypothesis test, and size a noninferiority hypothesis test (which assumes P1 = P2). To illustrate the utility of our simulation model, we adapt the Obuchowski-Rockette-Hillis (ORH) method for the analysis of MRMC binary agreement data. Moreover, we use our simulation model to validate the ORH method for binary data and to illustrate sizing in a noninferiority setting. Our software package is publicly available on the Google code project hosting site for use in simulation, analysis, validation, and sizing of MRMC reader studies with binary agreement data.
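The idea of validating the coverage probability of a 95% confidence interval by simulation can be shown in a radically simplified setting: independent Bernoulli agreements and a Wilson score interval, ignoring the reader/case correlation structure that the MRMC model is designed to capture:

```python
import math
import random

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

def coverage(p_true=0.8, n=100, trials=2000, seed=0):
    """Fraction of simulated datasets whose 95% CI contains the true value."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s = sum(rng.random() < p_true for _ in range(n))
        lo, hi = wilson_ci(s, n)
        hits += lo <= p_true <= hi
    return hits / trials

cov = coverage()  # should land near the nominal 0.95
```

An interval procedure passes this check when the empirical coverage stays close to the nominal 95% across the parameter values of interest.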

  11. Multireader multicase reader studies with binary agreement data: simulation, analysis, validation, and sizing

    PubMed Central

    Chen, Weijie; Wunderlich, Adam; Petrick, Nicholas; Gallas, Brandon D.

    2014-01-01

    Abstract. We treat multireader multicase (MRMC) reader studies for which a reader’s diagnostic assessment is converted to binary agreement (1: agree with the truth state, 0: disagree with the truth state). We present a mathematical model for simulating binary MRMC data with a desired correlation structure across readers, cases, and two modalities, assuming the expected probability of agreement is equal for the two modalities (P1=P2). This model can be used to validate the coverage probabilities of 95% confidence intervals (of P1, P2, or P1−P2 when P1−P2=0), validate the type I error of a superiority hypothesis test, and size a noninferiority hypothesis test (which assumes P1=P2). To illustrate the utility of our simulation model, we adapt the Obuchowski–Rockette–Hillis (ORH) method for the analysis of MRMC binary agreement data. Moreover, we use our simulation model to validate the ORH method for binary data and to illustrate sizing in a noninferiority setting. Our software package is publicly available on the Google code project hosting site for use in simulation, analysis, validation, and sizing of MRMC reader studies with binary agreement data. PMID:26158051

  12. Prognostic models for complete recovery in ischemic stroke: a systematic review and meta-analysis.

    PubMed

    Jampathong, Nampet; Laopaiboon, Malinee; Rattanakanokchai, Siwanon; Pattanittum, Porjai

    2018-03-09

    Prognostic models have been increasingly developed to predict complete recovery in ischemic stroke. However, questions arise about the performance characteristics of these models. The aim of this study was to systematically review and synthesize the performance of existing prognostic models for complete recovery in ischemic stroke. We searched journal publications indexed in PUBMED, SCOPUS, CENTRAL, ISI Web of Science and OVID MEDLINE from inception until 4 December 2017 for studies designed to develop and/or validate prognostic models for predicting complete recovery in ischemic stroke patients. Two reviewers independently examined titles and abstracts, assessed whether each study met the pre-defined inclusion criteria, and independently extracted information about model development and performance. We evaluated model validation using the median area under the receiver operating characteristic curve (AUC, or c-statistic) and calibration performance. We used a random-effects meta-analysis to pool AUC values. We included 10 studies with 23 models developed from elderly patients with moderately severe ischemic stroke, mainly in three high-income countries. Sample sizes ranged from 75 to 4441. Logistic regression was the only analytical strategy used to develop the models. The number of predictors varied from one to 11. Internal validation was performed in 12 models, with a median AUC of 0.80 (95% CI 0.73 to 0.84). One model reported good calibration. Nine models reported external validation, with a median AUC of 0.80 (95% CI 0.76 to 0.82). Four models showed good discrimination and calibration on external validation. The pooled AUC of the two validation studies of the same developed model was 0.78 (95% CI 0.71 to 0.85). The performance of the 23 models found in the systematic review varied from fair to good in terms of internal and external validation. Further models should be developed, with internal and external validation, in low- and middle-income countries.
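
Pooling AUC values with a random-effects meta-analysis, as this review does, can be sketched with the standard DerSimonian-Laird estimator. The effect sizes and variances below are hypothetical inputs, not the review's data:

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate via the DerSimonian-Laird method."""
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances                      # fixed-effect (inverse-variance) weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)   # Cochran's Q heterogeneity statistic
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)            # between-study variance estimate
    w_star = 1.0 / (variances + tau2)        # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se

# Two hypothetical external-validation AUCs with their variances.
pooled, se = dersimonian_laird([0.76, 0.82], [0.0009, 0.0012])
print(round(pooled, 3), round(pooled - 1.96 * se, 3), round(pooled + 1.96 * se, 3))
```

With only two studies, as in the review's pooled estimate, the between-study variance is estimated from a single degree of freedom, so the resulting interval should be interpreted cautiously.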

  13. Development and Validity Testing of Belief Measurement Model in Buddhism for Junior High School Students at Chiang Rai Buddhist Scripture School: An Application for Multitrait-Multimethod Analysis

    ERIC Educational Resources Information Center

    Chaidi, Thirachai; Damrongpanich, Sunthorapot

    2016-01-01

    The purposes of this study were to develop a model to measure the belief in Buddhism of junior high school students at Chiang Rai Buddhist Scripture School, and to determine construct validity of the model for measuring the belief in Buddhism by using Multitrait-Multimethod analysis. The samples were 590 junior high school students at Buddhist…

  14. Validation of a common data model for active safety surveillance research

    PubMed Central

    Ryan, Patrick B; Reich, Christian G; Hartzema, Abraham G; Stang, Paul E

    2011-01-01

    Objective Systematic analysis of observational medical databases for active safety surveillance is hindered by the variation in data models and coding systems. Data analysts often find robust clinical data models difficult to understand and ill suited to support their analytic approaches. Further, some models do not facilitate the computations required for systematic analysis across many interventions and outcomes for large datasets. Translating the data from these idiosyncratic data models to a common data model (CDM) could facilitate both the analysts' understanding and the suitability for large-scale systematic analysis. In addition to facilitating analysis, a suitable CDM has to faithfully represent the source observational database. Before beginning to use the Observational Medical Outcomes Partnership (OMOP) CDM and a related dictionary of standardized terminologies for a study of large-scale systematic active safety surveillance, the authors validated the model's suitability for this use by example. Validation by example To validate the OMOP CDM, the model was instantiated into a relational database, data from 10 different observational healthcare databases were loaded into separate instances, a comprehensive array of analytic methods that operate on the data model was created, and these methods were executed against the databases to measure performance. Conclusion There was acceptable representation of the data from 10 observational databases in the OMOP CDM using the standardized terminologies selected, and a range of analytic methods was developed and executed with sufficient performance to be useful for active safety surveillance. PMID:22037893

  15. Validity of High School Physic Module With Character Values Using Process Skill Approach In STKIP PGRI West Sumatera

    NASA Astrophysics Data System (ADS)

    Anaperta, M.; Helendra, H.; Zulva, R.

    2018-04-01

    This study aims to describe the validity of a character-values-oriented physics module using a process skills approach for dynamic electricity material in high school (SMA/MA) and vocational school (SMK) physics. The research is development research, following the development model proposed by Plomp, which consists of (1) a preliminary research phase, (2) a prototyping phase, and (3) an assessment phase. This study covers the initial investigation and design phases. Data on validity were collected through observation and questionnaires. In the initial investigation phase, curriculum analysis, student analysis, and concept analysis were conducted. In the design phase, the module was designed for SMA/MA and SMK subjects covering dynamic electricity material. Formative evaluation then followed, including self-evaluation and prototyping (expert reviews, one-to-one, and small-group evaluation), at which stage validation was performed. The research data were obtained through module validation sheets, yielding a valid module.

  16. The use of docking-based comparative intermolecular contacts analysis to identify optimal docking conditions within glucokinase and to discover new GK activators

    NASA Astrophysics Data System (ADS)

    Taha, Mutasem O.; Habash, Maha; Khanfar, Mohammad A.

    2014-05-01

    Glucokinase (GK) is involved in normal glucose homeostasis and is therefore a valid target for drug design and discovery efforts. GK activators (GKAs) have excellent potential as treatments for hyperglycemia and diabetes. Recent interest in GKAs, together with docking limitations and a shortage of docking validation methods, prompted us to use our new 3D-QSAR analysis, namely docking-based comparative intermolecular contacts analysis (dbCICA), to validate docking configurations performed on a group of GKAs within the GK binding site. dbCICA assesses the consistency of docking by evaluating the correlation between ligands' affinities and their contacts with binding site spots. Optimal dbCICA models were validated by receiver operating characteristic curve analysis and comparative molecular field analysis. dbCICA models were also converted into valid pharmacophores that were used as search queries to mine 3D structural databases for new GKAs. The search yielded several potent bioactivators that experimentally increased GK bioactivity up to 7.5-fold at 10 μM.

  17. Analysis of model development strategies: predicting ventral hernia recurrence.

    PubMed

    Holihan, Julie L; Li, Linda T; Askenasy, Erik P; Greenberg, Jacob A; Keith, Jerrod N; Martindale, Robert G; Roth, J Scott; Liang, Mike K

    2016-11-01

    There have been many attempts to identify variables associated with ventral hernia recurrence; however, it is unclear which statistical modeling approach results in models with greatest internal and external validity. We aim to assess the predictive accuracy of models developed using five common variable selection strategies to determine variables associated with hernia recurrence. Two multicenter ventral hernia databases were used. Database 1 was randomly split into "development" and "internal validation" cohorts. Database 2 was designated "external validation". The dependent variable for model development was hernia recurrence. Five variable selection strategies were used: (1) "clinical"-variables considered clinically relevant, (2) "selective stepwise"-all variables with a P value <0.20 were assessed in a step-backward model, (3) "liberal stepwise"-all variables were included and step-backward regression was performed, (4) "restrictive internal resampling," and (5) "liberal internal resampling." Variables were included with P < 0.05 for the Restrictive model and P < 0.10 for the Liberal model. A time-to-event analysis using Cox regression was performed using these strategies. The predictive accuracy of the developed models was tested on the internal and external validation cohorts using Harrell's C-statistic where C > 0.70 was considered "reasonable". The recurrence rate was 32.9% (n = 173/526; median/range follow-up, 20/1-58 mo) for the development cohort, 36.0% (n = 95/264, median/range follow-up 20/1-61 mo) for the internal validation cohort, and 12.7% (n = 155/1224, median/range follow-up 9/1-50 mo) for the external validation cohort. Internal validation demonstrated reasonable predictive accuracy (C-statistics = 0.772, 0.760, 0.767, 0.757, 0.763), while on external validation, predictive accuracy dipped precipitously (C-statistic = 0.561, 0.557, 0.562, 0.553, 0.560). 
Predictive accuracy was equally adequate on internal validation among models; however, on external validation, all five models failed to demonstrate utility. Future studies should report multiple variable selection techniques and demonstrate predictive accuracy on external data sets for model validation. Copyright © 2016 Elsevier Inc. All rights reserved.
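
Harrell's C-statistic used in this record to grade predictive accuracy can be computed directly from pairwise comparisons. A minimal sketch for right-censored time-to-event data with toy inputs (not the study's hernia cohorts):

```python
def concordance_index(risk, event_time, event_observed):
    """Harrell's C: among usable pairs, the fraction where the subject who
    fails earlier was assigned the higher predicted risk (ties count half)."""
    concordant = tied = usable = 0
    n = len(risk)
    for i in range(n):
        for j in range(i + 1, n):
            if event_time[i] == event_time[j]:
                continue  # tied times: skip in this simplified sketch
            first, second = (i, j) if event_time[i] < event_time[j] else (j, i)
            if not event_observed[first]:
                continue  # earlier subject censored: pair not usable
            usable += 1
            if risk[first] > risk[second]:
                concordant += 1
            elif risk[first] == risk[second]:
                tied += 1
    return (concordant + 0.5 * tied) / usable

# Toy data: predicted risks, follow-up times, and event indicators (1 = recurrence).
risk = [0.9, 0.7, 0.3, 0.1]
time = [1, 2, 3, 4]
observed = [1, 1, 0, 1]
c = concordance_index(risk, time, observed)
print(c)  # 1.0: every usable pair is concordant
```

A C-statistic of 0.5 corresponds to chance-level discrimination, which is why the external-validation values near 0.56 reported above indicate the models failed to generalize.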

  18. AHP-based spatial analysis of water quality impact assessment due to change in vehicular traffic caused by highway broadening in Sikkim Himalaya

    NASA Astrophysics Data System (ADS)

    Banerjee, Polash; Ghose, Mrinal Kanti; Pradhan, Ratika

    2018-05-01

    Spatial analysis of water quality impact assessment of highway projects in mountainous areas remains largely unexplored. A methodology is presented here for Spatial Water Quality Impact Assessment (SWQIA) due to highway-broadening-induced vehicular traffic change in the East district of Sikkim. Pollution load of the highway runoff was estimated using an Average Annual Daily Traffic-Based Empirical model in combination with a mass balance model to predict pollution in the rivers within the study area. Spatial interpolation and overlay analysis were used for impact mapping. An Analytic Hierarchy Process-Based Water Quality Status Index was used to prepare a composite impact map. Model validation criteria, cross-validation criteria, and spatially explicit sensitivity analysis show that the SWQIA model is robust. The study shows that vehicular traffic is a significant contributor to water pollution in the study area. The model caters specifically to impact analysis of the project concerned and can serve as an aid to a decision support system for project stakeholders. The applicability of the SWQIA model needs to be explored and validated for a larger set of water quality parameters and project scenarios at a greater spatial scale.
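
The Analytic Hierarchy Process weighting behind a composite index like the one above works by taking the principal eigenvector of a pairwise comparison matrix. A generic sketch with a hypothetical 3-criteria matrix on the Saaty 1-9 scale (not the study's actual judgments):

```python
import numpy as np

# Hypothetical pairwise comparison matrix: criterion 1 is moderately more
# important than 2 and strongly more important than 3 (reciprocals below diagonal).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

# Priority weights = principal eigenvector, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency ratio checks the judgments (CR < 0.1 is conventionally acceptable).
lam_max = eigvals.real[k]
ci = (lam_max - 3) / (3 - 1)
cr = ci / 0.58  # 0.58 is Saaty's random index for a 3x3 matrix
print(np.round(w, 3), round(float(cr), 3))
```

The resulting weights would then multiply the interpolated water-quality layers in the overlay analysis to produce a composite impact score per map cell.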

  19. Rasch validation of the Arabic version of the lower extremity functional scale.

    PubMed

    Alnahdi, Ali H

    2018-02-01

    The purpose of this study was to examine the internal construct validity of the Arabic version of the Lower Extremity Functional Scale (20-item Arabic LEFS) using Rasch analysis. Patients (n = 170) with lower extremity musculoskeletal dysfunction were recruited. Rasch analysis of the 20-item Arabic LEFS was performed. Once the initial Rasch analysis indicated that the 20-item Arabic LEFS did not fit the Rasch model, follow-up analyses were conducted to improve the fit of the scale to the Rasch measurement model. These modifications included removing misfitting individuals, changing the item scoring structure, removing misfitting items, and addressing bias caused by response dependency between items and differential item functioning (DIF). Initial analysis indicated deviation of the 20-item Arabic LEFS from the Rasch model. Disordered thresholds in eight items and response dependency between six items were detected, and the scale as a whole did not meet the requirement of unidimensionality. Refinements led to a 15-item Arabic LEFS that demonstrated excellent internal consistency (person separation index [PSI] = 0.92) and satisfied all the requirements of the Rasch model. Rasch analysis did not support the 20-item Arabic LEFS as a unidimensional measure of lower extremity function. The refined 15-item Arabic LEFS met all the requirements of the Rasch model and hence is a valid objective measure of lower extremity function. The Rasch-validated 15-item Arabic LEFS needs to be further tested in an independent sample to confirm its fit to the Rasch measurement model. Implications for Rehabilitation The validity of the 20-item Arabic Lower Extremity Functional Scale to measure lower extremity function is not supported. The 15-item Arabic version of the LEFS is a valid measure of lower extremity function and can be used to quantify lower extremity function in patients with lower extremity musculoskeletal disorders.

  20. PCA as a practical indicator of OPLS-DA model reliability.

    PubMed

    Worley, Bradley; Powers, Robert

    Principal Component Analysis (PCA) and Orthogonal Projections to Latent Structures Discriminant Analysis (OPLS-DA) are powerful statistical modeling tools that provide insights into separations between experimental groups based on high-dimensional spectral measurements from NMR, MS or other analytical instrumentation. However, when used without validation, these tools may lead investigators to statistically unreliable conclusions. This danger is especially real for Partial Least Squares (PLS) and OPLS, which aggressively force separations between experimental groups. As a result, OPLS-DA is often used as an alternative method when PCA fails to expose group separation, but this practice is highly dangerous. Without rigorous validation, OPLS-DA can easily yield statistically unreliable group separation. A Monte Carlo analysis of PCA group separations and OPLS-DA cross-validation metrics was performed on NMR datasets with statistically significant separations in scores-space. A linearly increasing amount of Gaussian noise was added to each data matrix followed by the construction and validation of PCA and OPLS-DA models. With increasing added noise, the PCA scores-space distance between groups rapidly decreased and the OPLS-DA cross-validation statistics simultaneously deteriorated. A decrease in correlation between the estimated loadings (added noise) and the true (original) loadings was also observed. While the validity of the OPLS-DA model diminished with increasing added noise, the group separation in scores-space remained basically unaffected. Supported by the results of Monte Carlo analyses of PCA group separations and OPLS-DA cross-validation metrics, we provide practical guidelines and cross-validatory recommendations for reliable inference from PCA and OPLS-DA models.

  1. Validation of the replica trick for simple models

    NASA Astrophysics Data System (ADS)

    Shinzato, Takashi

    2018-04-01

    We discuss the replica analytic continuation using several simple models in order to prove mathematically the validity of the replica analysis, which is used in a wide range of fields related to large-scale complex systems. While replica analysis consists of two analytical techniques—the replica trick (or replica analytic continuation) and the thermodynamical limit (and/or order parameter expansion)—we focus our study on replica analytic continuation, which is the mathematical basis of the replica trick. We apply replica analysis to solve a variety of analytical models, and examine the properties of replica analytic continuation. Based on the positive results for these models we propose that replica analytic continuation is a robust procedure in replica analysis.

  2. Development and Validation of the Work-Related Well-Being Index: Analysis of the Federal Employee Viewpoint Survey (FEVS).

    PubMed

    Eaton, Jennifer L; Mohr, David C; Hodgson, Michael J; McPhaul, Kathleen M

    2017-10-11

    To describe development and validation of the Work-Related Well-Being (WRWB) Index. Principal Components Analysis was performed using Federal Employee Viewpoint Survey (FEVS) data (N = 392,752) to extract variables representing worker well-being constructs. Confirmatory factor analysis was performed to verify factor structure. To validate the WRWB index, we used multiple regression analysis to examine relationships with burnout associated outcomes. PCA identified three positive psychology constructs: "Work Positivity", "Co-worker Relationships", and "Work Mastery". An 11 item index explaining 63.5% of variance was achieved. The structural equation model provided a very good fit to the data. Higher WRWB scores were positively associated with all 3 employee experience measures examined in regression models. The new WRWB index shows promise as a valid and widely accessible instrument to assess worker well-being.

  3. Efficient strategies for leave-one-out cross validation for genomic best linear unbiased prediction.

    PubMed

    Cheng, Hao; Garrick, Dorian J; Fernando, Rohan L

    2017-01-01

    A random multiple-regression model that simultaneously fits all allele substitution effects for additive markers or haplotypes as uncorrelated random effects has been proposed for best linear unbiased prediction using whole-genome data. Leave-one-out cross validation can be used to quantify the predictive ability of a statistical model. Naive application of leave-one-out cross validation is computationally intensive because the training and validation analyses must be repeated n times, once for each observation. Efficient leave-one-out cross validation strategies are presented here, requiring little more effort than a single analysis. The efficient strategy was 786 times faster than the naive application for a simulated dataset with 1,000 observations and 10,000 markers, and 99 times faster with 1,000 observations and 100 markers. These efficiencies relative to the naive approach using the same model will increase with the number of observations.
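
The "little more effort than a single analysis" result rests on a classical identity for ridge-type linear models: the leave-one-out residual equals the full-fit residual divided by one minus that observation's leverage. The sketch below demonstrates the identity for ordinary ridge regression with simulated data; it is an analogue, not the paper's GBLUP implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + rng.normal(scale=0.5, size=n)
lam = 1.0  # ridge penalty

# Single fit on all n observations.
A = X.T @ X + lam * np.eye(p)
beta = np.linalg.solve(A, X.T @ y)
resid = y - X @ beta
h = np.diag(X @ np.linalg.solve(A, X.T))  # leverages (hat-matrix diagonal)

# Leave-one-out residuals from the single fit: e_i / (1 - h_ii).
loo_fast = resid / (1 - h)

# Naive leave-one-out cross validation for comparison: n separate fits.
loo_naive = np.empty(n)
for i in range(n):
    mask = np.arange(n) != i
    Ai = X[mask].T @ X[mask] + lam * np.eye(p)
    bi = np.linalg.solve(Ai, X[mask].T @ y[mask])
    loo_naive[i] = y[i] - X[i] @ bi

print(np.allclose(loo_fast, loo_naive))  # True: the shortcut is exact
```

Because the shortcut reuses one factorization, its cost is essentially that of a single analysis, which is where speedups like the reported 786x come from as n grows.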

  4. Verification and Validation of EnergyPlus Phase Change Material Model for Opaque Wall Assemblies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tabares-Velasco, P. C.; Christensen, C.; Bianchi, M.

    2012-08-01

    Phase change materials (PCMs) represent a technology that may reduce peak loads and HVAC energy consumption in buildings. A few building energy simulation programs have the capability to simulate PCMs, but their accuracy has not been completely tested. This study shows the procedure used to verify and validate the PCM model in EnergyPlus using a similar approach as dictated by ASHRAE Standard 140, which consists of analytical verification, comparative testing, and empirical validation. This process was valuable, as two bugs were identified and fixed in the PCM model, and version 7.1 of EnergyPlus will have a validated PCM model. Preliminary results using whole-building energy analysis show that careful analysis should be done when designing PCMs in homes, as their thermal performance depends on several variables such as PCM properties and location in the building envelope.

  5. Understanding Student Teachers' Behavioural Intention to Use Technology: Technology Acceptance Model (TAM) Validation and Testing

    ERIC Educational Resources Information Center

    Wong, Kung-Teck; Osman, Rosma bt; Goh, Pauline Swee Choo; Rahmat, Mohd Khairezan

    2013-01-01

    This study sets out to validate and test the Technology Acceptance Model (TAM) in the context of Malaysian student teachers' integration of their technology in teaching and learning. To establish factorial validity, data collected from 302 respondents were tested against the TAM using confirmatory factor analysis (CFA), and structural equation…

  6. Different approaches in Partial Least Squares and Artificial Neural Network models applied for the analysis of a ternary mixture of Amlodipine, Valsartan and Hydrochlorothiazide

    NASA Astrophysics Data System (ADS)

    Darwish, Hany W.; Hassan, Said A.; Salem, Maissa Y.; El-Zeany, Badr A.

    2014-03-01

    Different chemometric models were applied for the quantitative analysis of Amlodipine (AML), Valsartan (VAL) and Hydrochlorothiazide (HCT) in a ternary mixture, namely, Partial Least Squares (PLS) as a traditional chemometric model and Artificial Neural Networks (ANN) as an advanced model. PLS and ANN were applied with and without a variable selection procedure (Genetic Algorithm, GA) and a data compression procedure (Principal Component Analysis, PCA). The chemometric methods applied are PLS-1, GA-PLS, ANN, GA-ANN and PCA-ANN. The methods were used for the quantitative analysis of the drugs in raw materials and pharmaceutical dosage form by processing the UV spectral data. A 3-factor 5-level experimental design was established, resulting in 25 mixtures containing different ratios of the drugs. Fifteen mixtures were used as a calibration set and the other ten mixtures were used as a validation set to validate the prediction ability of the suggested methods. The validity of the proposed methods was assessed using the standard addition technique.

  7. Validation and uncertainty analysis of a pre-treatment 2D dose prediction model

    NASA Astrophysics Data System (ADS)

    Baeza, Jose A.; Wolfs, Cecile J. A.; Nijsten, Sebastiaan M. J. J. G.; Verhaegen, Frank

    2018-02-01

    Independent verification of complex treatment delivery with megavolt photon beam radiotherapy (RT) has been effectively used to detect and prevent errors. This work presents the validation and uncertainty analysis of a model that predicts 2D portal dose images (PDIs) without a patient or phantom in the beam. The prediction model is based on an exponential point dose model with separable primary and secondary photon fluence components. The model includes a scatter kernel, off-axis ratio map, transmission values and penumbra kernels for beam-delimiting components. These parameters were derived through a model fitting procedure supplied with point dose and dose profile measurements of radiation fields. The model was validated against a treatment planning system (TPS; Eclipse) and radiochromic film measurements for complex clinical scenarios, including volumetric modulated arc therapy (VMAT). Confidence limits on fitted model parameters were calculated based on simulated measurements. A sensitivity analysis was performed to evaluate the effect of the parameter uncertainties on the model output. For the maximum uncertainty, the maximum deviating measurement sets were propagated through the fitting procedure and the model. The overall uncertainty was assessed using all simulated measurements. The validation of the prediction model against the TPS and the film showed a good agreement, with on average 90.8% and 90.5% of pixels passing a (2%,2 mm) global gamma analysis respectively, with a low dose threshold of 10%. The maximum and overall uncertainty of the model is dependent on the type of clinical plan used as input. The results can be used to study the robustness of the model. A model for predicting accurate 2D pre-treatment PDIs in complex RT scenarios can be used clinically and its uncertainties can be taken into account.

  8. Repeated holdout Cross-Validation of Model to Estimate Risk of Lyme Disease by Landscape Attributes

    EPA Science Inventory

    We previously modeled Lyme disease (LD) risk at the landscape scale; here we evaluate the model's overall goodness-of-fit using holdout validation. Landscapes were characterized within road-bounded analysis units (AU). Observed LD cases (obsLD) were ascertained per AU. Data were ...

  9. Evaluating the dynamic response of in-flight thrust calculation techniques during throttle transients

    NASA Technical Reports Server (NTRS)

    Ray, Ronald J.

    1994-01-01

    New flight test maneuvers and analysis techniques for evaluating the dynamic response of in-flight thrust models during throttle transients have been developed and validated. The approach is based on the aircraft and engine performance relationship between thrust and drag. Two flight test maneuvers, a throttle step and a throttle frequency sweep, were developed and used in the study. Graphical analysis techniques, including a frequency domain analysis method, were also developed and evaluated. They provide quantitative and qualitative results. Four thrust calculation methods were used to demonstrate and validate the test technique. Flight test applications on two high-performance aircraft confirmed the test methods as valid and accurate. These maneuvers and analysis techniques were easy to implement and use. Flight test results indicate the analysis techniques can identify the combined effects of model error and instrumentation response limitations on the calculated thrust value. The methods developed in this report provide an accurate approach for evaluating, validating, or comparing thrust calculation methods for dynamic flight applications.
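
The throttle frequency sweep and frequency-domain analysis can be illustrated generically: drive a simple lag model (a hypothetical stand-in for the thrust dynamics, not the report's method or aircraft data) with a swept sine and recover its frequency response from the ratio of output to input spectra:

```python
import numpy as np

fs = 50.0                      # sample rate, Hz
t = np.arange(0, 40, 1 / fs)   # 40 s record
f0, f1 = 0.1, 2.0              # sweep band, Hz
# Linear swept-sine "throttle" input covering f0 to f1.
u = np.sin(2 * np.pi * (f0 * t + (f1 - f0) * t**2 / (2 * t[-1])))

# First-order lag standing in for the engine/thrust response (tau = 0.5 s).
tau = 0.5
a = np.exp(-1 / (fs * tau))
y = np.zeros_like(u)
for k in range(1, len(u)):
    y[k] = a * y[k - 1] + (1 - a) * u[k]

# Empirical frequency response: ratio of output to input spectra in the sweep band.
U, Y = np.fft.rfft(u), np.fft.rfft(y)
f = np.fft.rfftfreq(len(u), 1 / fs)
band = (f >= f0) & (f <= f1)
gain = np.abs(Y[band] / U[band])
print(round(float(gain[0]), 2), round(float(gain[-1]), 2))  # gain rolls off with frequency
```

Comparing such an empirical gain (and phase) curve between a thrust model and flight data is one way the combined effects of model error and instrumentation response limits show up in the frequency domain.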

  10. Fast and accurate focusing analysis of large photon sieve using pinhole ring diffraction model.

    PubMed

    Liu, Tao; Zhang, Xin; Wang, Lingjie; Wu, Yanxiong; Zhang, Jizhen; Qu, Hemeng

    2015-06-10

    In this paper, we developed a pinhole ring diffraction model for the focusing analysis of a large photon sieve. Instead of analyzing individual pinholes, we discuss the focusing of all of the pinholes in a single ring. An explicit equation for the diffracted field of an individual pinhole ring is proposed. We investigated the validity range of this generalized model and analytically describe the sufficient conditions for its validity. A practical example and investigation reveal the high accuracy of the pinhole ring diffraction model. This simulation method can be used for fast and accurate focusing analysis of a large photon sieve.

  11. Issues and approach to develop validated analysis tools for hypersonic flows: One perspective

    NASA Technical Reports Server (NTRS)

    Deiwert, George S.

    1993-01-01

    Critical issues concerning the modeling of low density hypervelocity flows where thermochemical nonequilibrium effects are pronounced are discussed. Emphasis is on the development of validated analysis tools, and the activity in the NASA Ames Research Center's Aerothermodynamics Branch is described. Inherent in the process is a strong synergism between ground test and real gas computational fluid dynamics (CFD). Approaches to develop and/or enhance phenomenological models and incorporate them into computational flowfield simulation codes are discussed. These models were partially validated with experimental data for flows where the gas temperature is raised (compressive flows). Expanding flows, where temperatures drop, however, exhibit somewhat different behavior. Experimental data for these expanding flow conditions is sparse and reliance must be made on intuition and guidance from computational chemistry to model transport processes under these conditions. Ground based experimental studies used to provide necessary data for model development and validation are described. Included are the performance characteristics of high enthalpy flow facilities, such as shock tubes and ballistic ranges.

  12. Issues and approach to develop validated analysis tools for hypersonic flows: One perspective

    NASA Technical Reports Server (NTRS)

    Deiwert, George S.

    1992-01-01

    Critical issues concerning the modeling of low-density hypervelocity flows where thermochemical nonequilibrium effects are pronounced are discussed. Emphasis is on the development of validated analysis tools. A description of the activity in the Ames Research Center's Aerothermodynamics Branch is also given. Inherent in the process is a strong synergism between ground test and real-gas computational fluid dynamics (CFD). Approaches to develop and/or enhance phenomenological models and incorporate them into computational flow-field simulation codes are discussed. These models have been partially validated with experimental data for flows where the gas temperature is raised (compressive flows). Expanding flows, where temperatures drop, however, exhibit somewhat different behavior. Experimental data for these expanding flow conditions are sparse; reliance must be made on intuition and guidance from computational chemistry to model transport processes under these conditions. Ground-based experimental studies used to provide necessary data for model development and validation are described. Included are the performance characteristics of high-enthalpy flow facilities, such as shock tubes and ballistic ranges.

  13. Least Squares Distance Method of Cognitive Validation and Analysis for Binary Items Using Their Item Response Theory Parameters

    ERIC Educational Resources Information Center

    Dimitrov, Dimiter M.

    2007-01-01

    The validation of cognitive attributes required for correct answers on binary test items or tasks has been addressed in previous research through the integration of cognitive psychology and psychometric models using parametric or nonparametric item response theory, latent class modeling, and Bayesian modeling. All previous models, each with their…

  14. Predictive modeling of infrared radiative heating in tomato dry-peeling process: Part II. Model validation and sensitivity analysis

    USDA-ARS?s Scientific Manuscript database

    A predictive mathematical model was developed to simulate heat transfer in a tomato undergoing double sided infrared (IR) heating in a dry-peeling process. The aims of this study were to validate the developed model using experimental data and to investigate different engineering parameters that mos...

  15. Multivariate meta-analysis of individual participant data helped externally validate the performance and implementation of a prediction model.

    PubMed

    Snell, Kym I E; Hua, Harry; Debray, Thomas P A; Ensor, Joie; Look, Maxime P; Moons, Karel G M; Riley, Richard D

    2016-01-01

    Our aim was to improve meta-analysis methods for summarizing a prediction model's performance when individual participant data are available from multiple studies for external validation. We suggest multivariate meta-analysis for jointly synthesizing calibration and discrimination performance, while accounting for their correlation. The approach estimates a prediction model's average performance, the heterogeneity in performance across populations, and the probability of "good" performance in new populations. This allows different implementation strategies (e.g., recalibration) to be compared. Application is made to a diagnostic model for deep vein thrombosis (DVT) and a prognostic model for breast cancer mortality. In both examples, multivariate meta-analysis reveals that calibration performance is excellent on average but highly heterogeneous across populations unless the model's intercept (baseline hazard) is recalibrated. For the cancer model, the probability of "good" performance (defined by C statistic ≥0.7 and calibration slope between 0.9 and 1.1) in a new population was 0.67 with recalibration but 0.22 without recalibration. For the DVT model, even with recalibration, there was only a 0.03 probability of "good" performance. Multivariate meta-analysis can be used to externally validate a prediction model's calibration and discrimination performance across multiple populations and to evaluate different implementation strategies. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.

  16. Developing and validating risk prediction models in an individual participant data meta-analysis

    PubMed Central

    2014-01-01

    Background Risk prediction models estimate the risk of developing future outcomes for individuals based on one or more underlying characteristics (predictors). We review how researchers develop and validate risk prediction models within an individual participant data (IPD) meta-analysis, in order to assess the feasibility and conduct of the approach. Methods A qualitative review of the aims, methodology, and reporting in 15 articles that developed a risk prediction model using IPD from multiple studies. Results The IPD approach offers many opportunities but methodological challenges exist, including: unavailability of requested IPD, missing patient data and predictors, and between-study heterogeneity in methods of measurement, outcome definitions and predictor effects. Most articles develop their model using IPD from all available studies and perform only an internal validation (on the same set of data). Ten of the 15 articles did not allow for any study differences in baseline risk (intercepts), potentially limiting their model’s applicability and performance in some populations. Only two articles used external validation (on different data), including a novel method which develops the model on all but one of the IPD studies, tests performance in the excluded study, and repeats by rotating the omitted study. Conclusions An IPD meta-analysis offers unique opportunities for risk prediction research. Researchers can make more of this by allowing separate model intercept terms for each study (population) to improve generalisability, and by using ‘internal-external cross-validation’ to simultaneously develop and validate their model. Methodological challenges can be reduced by prospectively planned collaborations that share IPD for risk prediction. PMID:24397587
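The 'internal-external cross-validation' idea mentioned in the conclusions can be sketched directly: fit the model on all IPD studies but one, validate on the omitted study, and rotate. The data, study labels, and coefficients below are simulated for illustration only:

```python
# Internal-external cross-validation sketch on simulated multi-study IPD.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n_per_study, studies = 300, [0, 1, 2, 3, 4]
X = rng.normal(size=(n_per_study * len(studies), 2))
study = np.repeat(studies, n_per_study)
# Simulated binary outcome with a study-specific intercept (baseline risk).
logit = -0.5 + 0.3 * study + 1.2 * X[:, 0] - 0.8 * X[:, 1]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

aucs = []
for held_out in studies:
    train, test = study != held_out, study == held_out
    model = LogisticRegression().fit(X[train], y[train])
    aucs.append(roc_auc_score(y[test], model.predict_proba(X[test])[:, 1]))

print("External C statistic per omitted study:", np.round(aucs, 3))
```

Every study serves once as an external validation set, so the model is developed and externally validated on the same IPD collection, which is the appeal of the approach.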

  17. Measuring engagement in nurses: the psychometric properties of the Persian version of Utrecht Work Engagement Scale

    PubMed Central

    Torabinia, Mansour; Mahmoudi, Sara; Dolatshahi, Mojtaba; Abyaz, Mohamad Reza

    2017-01-01

    Background: Considering the overall tendency in psychology, researchers in the field of work and organizational psychology have become progressively interested in employees’ positive experiences at work, such as work engagement. This study was conducted to investigate 2 main purposes: assessing the psychometric properties of the Utrecht Work Engagement Scale, and finding any association between work engagement and burnout in nurses. Methods: The present methodological study was conducted in 2015 and included 248 females and 34 males with 6 months to 30 years of job experience. After the translation process, face and content validity were calculated by qualitative and quantitative methods. Moreover, content validation ratio, scale-level content validity index and item-level content validity index were measured for this scale. Construct validity was determined by factor analysis. Moreover, internal consistency and stability reliability were assessed. Factor analysis, test-retest, Cronbach’s alpha, and association analysis were used as statistical methods. Results: Face and content validity were acceptable. Exploratory factor analysis suggested a new 3-factor model. In this new model, the same 17 items were retained, but some items shifted position relative to the factor structure of the original version. Divergent validity of the new model, the Persian version of the UWES, was confirmed against the Copenhagen Burnout Inventory. Internal consistency reliability for the total scale and the subscales was 0.76 to 0.89. Results from Pearson correlation test indicated a high degree of test-retest reliability (r = 0.89). ICC was also 0.91. Engagement was negatively related to burnout and overtime per month, whereas it was positively related to age and job experience. Conclusion: The Persian 3-factor model of the Utrecht Work Engagement Scale is a valid and reliable instrument to measure work engagement in Iranian nurses as well as in other medical professionals. PMID:28955665
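The internal consistency figures reported in this abstract rest on Cronbach's alpha, which is straightforward to compute. A minimal sketch on simulated 17-item scale data (the item count matches the UWES; the scores are invented):

```python
# Cronbach's alpha for a respondents-by-items score matrix.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(2)
# Simulated 17-item scale: a shared latent trait plus item-level noise.
trait = rng.normal(size=(200, 1))
scores = trait + 0.8 * rng.normal(size=(200, 17))
print(f"alpha = {cronbach_alpha(scores):.2f}")
```

Alpha rises with the number of items and with inter-item correlation, which is why subscale alphas (fewer items) typically come out lower than the total-scale alpha.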

  18. Theoretical relationship between vibration transmissibility and driving-point response functions of the human body.

    PubMed

    Dong, Ren G; Welcome, Daniel E; McDowell, Thomas W; Wu, John Z

    2013-11-25

    The relationship between the vibration transmissibility and driving-point response functions (DPRFs) of the human body is important for understanding vibration exposures of the system and for developing valid models. This study identified their theoretical relationship and demonstrated that the sum of the DPRFs can be expressed as a linear combination of the transmissibility functions of the individual mass elements distributed throughout the system. The relationship is verified using several human vibration models. This study also clarified the requirements for reliably quantifying transmissibility values used as references for calibrating the system models. As an example application, this study used the developed theory to perform a preliminary analysis of the method for calibrating models using both vibration transmissibility and DPRFs. The results of the analysis show that the combined method can theoretically result in a unique and valid solution of the model parameters, at least for linear systems. However, the validation of the method itself does not guarantee the validation of the calibrated model, because the validation of the calibration also depends on the model structure and the reliability and appropriate representation of the reference functions. The basic theory developed in this study is also applicable to the vibration analyses of other structures.
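For a lumped-parameter system on a moving base, the relationship described above reduces to the identity that the driving-point apparent mass equals the sum of each mass times its own transmissibility. A numerical check on an assumed 2-DOF chain (all parameter values are arbitrary, not from the paper):

```python
# Check: driving-point apparent mass == sum of m_i * T_i for a 2-DOF chain
# excited through a moving base. Parameters are illustrative only.
import numpy as np

m1, m2 = 50.0, 20.0            # kg
k1, k2 = 4.0e4, 1.5e4          # N/m
c1, c2 = 300.0, 120.0          # N s/m
w = 2 * np.pi * 5.0            # excitation frequency, rad/s

# Frequency-domain equations of motion with base displacement X0 = 1.
A = np.array([[-w**2 * m1 + 1j*w*(c1 + c2) + k1 + k2, -(1j*w*c2 + k2)],
              [-(1j*w*c2 + k2), -w**2 * m2 + 1j*w*c2 + k2]])
b = np.array([1j*w*c1 + k1, 0.0])
X1, X2 = np.linalg.solve(A, b)

T1, T2 = X1, X2                               # transmissibilities (X0 = 1)
F_base = (k1 + 1j*w*c1) * (1 - X1)            # force through the base mount
app_mass_direct = F_base / (-w**2)            # apparent mass = F / a0
app_mass_from_T = m1 * T1 + m2 * T2           # linear combination of T_i
print(abs(app_mass_direct - app_mass_from_T))
```

The agreement follows from Newton's second law applied to the whole system: the only external force enters at the driving point, so it must equal the sum of the individual inertial forces.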

  19. Parameterization of Model Validating Sets for Uncertainty Bound Optimizations. Revised

    NASA Technical Reports Server (NTRS)

    Lim, K. B.; Giesy, D. P.

    2000-01-01

    Given measurement data, a nominal model and a linear fractional transformation uncertainty structure with an allowance on unknown but bounded exogenous disturbances, easily computable tests for the existence of a model validating uncertainty set are given. Under mild conditions, these tests are necessary and sufficient for the case of complex, nonrepeated, block-diagonal structure. For the more general case which includes repeated and/or real scalar uncertainties, the tests are only necessary but become sufficient if a collinearity condition is also satisfied. With the satisfaction of these tests, it is shown that a parameterization of all model validating sets of plant models is possible. The new parameterization is used as a basis for a systematic way to construct or perform uncertainty tradeoff with model validating uncertainty sets which have specific linear fractional transformation structure for use in robust control design and analysis. An illustrative example which includes a comparison of candidate model validating sets is given.

  20. Use of Latent Class Analysis to define groups based on validity, cognition, and emotional functioning.

    PubMed

    Morin, Ruth T; Axelrod, Bradley N

    Latent Class Analysis (LCA) was used to classify a heterogeneous sample of neuropsychology data. In particular, we used measures of performance validity, symptom validity, cognition, and emotional functioning to assess and describe latent groups of functioning in these areas. A data-set of 680 neuropsychological evaluation protocols was analyzed using a LCA. Data were collected from evaluations performed for clinical purposes at an urban medical center. A four-class model emerged as the best fitting model of latent classes. The resulting classes were distinct based on measures of performance validity and symptom validity. Class A performed poorly on both performance and symptom validity measures. Class B had intact performance validity and heightened symptom reporting. The remaining two Classes performed adequately on both performance and symptom validity measures, differing only in cognitive and emotional functioning. In general, performance invalidity was associated with worse cognitive performance, while symptom invalidity was associated with elevated emotional distress. LCA appears useful in identifying groups within a heterogeneous sample with distinct performance patterns. Further, the orthogonal nature of performance and symptom validities is supported.
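The class-enumeration step in an analysis like this (choosing the best-fitting number of latent classes) can be illustrated with a finite mixture model selected by BIC. The sketch below uses a Gaussian mixture on simulated continuous validity scores as a stand-in; true LCA on categorical indicators requires a dedicated package, and the groups and score scales here are invented:

```python
# Stand-in for the latent-class enumeration step: fit mixtures with 1-4
# components on simulated validity/cognition scores and pick by BIC.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Two simulated groups: intact validity vs. poor performance validity.
intact = rng.normal([50, 0], [5, 1], size=(300, 2))
invalid = rng.normal([30, 2], [5, 1], size=(100, 2))
scores = np.vstack([intact, invalid])

bics = {k: GaussianMixture(n_components=k, random_state=0).fit(scores).bic(scores)
        for k in (1, 2, 3, 4)}
best_k = min(bics, key=bics.get)
print("BIC by number of classes:", bics, "-> best:", best_k)
```

As in the abstract's four-class solution, the chosen model is the one where adding further classes no longer improves penalized fit.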

  1. A Psychometric Analysis of the Italian Version of the eHealth Literacy Scale Using Item Response and Classical Test Theory Methods

    PubMed Central

    Dima, Alexandra Lelia; Schulz, Peter Johannes

    2017-01-01

    Background The eHealth Literacy Scale (eHEALS) is a tool to assess consumers’ comfort and skills in using information technologies for health. Although evidence exists of reliability and construct validity of the scale, less agreement exists on structural validity. Objective The aim of this study was to validate the Italian version of the eHealth Literacy Scale (I-eHEALS) in a community sample with a focus on its structural validity, by applying psychometric techniques that account for item difficulty. Methods Two Web-based surveys were conducted among a total of 296 people living in the Italian-speaking region of Switzerland (Ticino). After examining the latent variables underlying the observed variables of the Italian scale via principal component analysis (PCA), fit indices for two alternative models were calculated using confirmatory factor analysis (CFA). The scale structure was examined via parametric and nonparametric item response theory (IRT) analyses accounting for differences between items regarding the proportion of answers indicating high ability. Convergent validity was assessed by correlations with theoretically related constructs. Results CFA showed a suboptimal model fit for both models. IRT analyses confirmed all items measure a single dimension as intended. Reliability and construct validity of the final scale were also confirmed. The contrasting results of factor analysis (FA) and IRT analyses highlight the importance of considering differences in item difficulty when examining health literacy scales. Conclusions The findings support the reliability and validity of the translated scale and its use for assessing Italian-speaking consumers’ eHealth literacy. PMID:28400356

  2. Combat Simulation Using Breach Computer Language

    DTIC Science & Technology

    1979-09-01

    simulation and weapon system analysis computer language. Two types of models were constructed: a stochastic duel and a dynamic engagement model. The... duel model validates the BREACH approach by comparing results with mathematical solutions. The dynamic model shows the capability of the BREACH...

  3. The effect of leverage and/or influential on structure-activity relationships.

    PubMed

    Bolboacă, Sorana D; Jäntschi, Lorentz

    2013-05-01

    In the spirit of reporting valid and reliable Quantitative Structure-Activity Relationship (QSAR) models, the aim of our research was to assess how the leverage (analysis with Hat matrix, h(i)) and the influential (analysis with Cook's distance, D(i)) of QSAR models may reflect the models reliability and their characteristics. The datasets included in this research were collected from previously published papers. Seven datasets which accomplished the imposed inclusion criteria were analyzed. Three models were obtained for each dataset (full-model, h(i)-model and D(i)-model) and several statistical validation criteria were applied to the models. In 5 out of 7 sets the correlation coefficient increased when compounds with either h(i) or D(i) higher than the threshold were removed. Withdrawn compounds varied from 2 to 4 for h(i)-models and from 1 to 13 for D(i)-models. Validation statistics showed that D(i)-models possess systematically better agreement than both full-models and h(i)-models. Removal of influential compounds from training set significantly improves the model and is recommended to be conducted in the process of quantitative structure-activity relationships developing. Cook's distance approach should be combined with hat matrix analysis in order to identify the compounds candidates for removal.
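The two diagnostics combined in this study, hat-matrix leverage h(i) and Cook's distance D(i), can be computed directly for any least-squares fit. A minimal sketch on a toy regression with one planted influential point (data and cutoffs are the common rules of thumb, not the paper's datasets):

```python
# Leverage and Cook's distance for a toy OLS fit.
import numpy as np

rng = np.random.default_rng(4)
n, p = 40, 3                                  # observations, parameters (incl. intercept)
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(scale=0.5, size=n)
y[0] += 5.0                                   # plant one influential point

H = X @ np.linalg.inv(X.T @ X) @ X.T          # hat matrix
h = np.diag(H)                                # leverages h_i
resid = y - H @ y
s2 = resid @ resid / (n - p)                  # residual variance
D = resid**2 / (p * s2) * h / (1 - h) ** 2    # Cook's distances D_i

flag_h = np.where(h > 2 * p / n)[0]           # common leverage cutoff
flag_D = np.where(D > 4 / n)[0]               # common Cook's distance cutoff
print("high leverage:", flag_h, "influential:", flag_D)
```

Note that leverage depends only on the descriptor matrix X while Cook's distance also involves the residual, which is why the two diagnostics flag partly different compounds and are best combined, as the abstract recommends.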

  4. Developing and validating a model to predict the success of an IHCS implementation: the Readiness for Implementation Model.

    PubMed

    Wen, Kuang-Yi; Gustafson, David H; Hawkins, Robert P; Brennan, Patricia F; Dinauer, Susan; Johnson, Pauley R; Siegler, Tracy

    2010-01-01

    To develop and validate the Readiness for Implementation Model (RIM). This model predicts a healthcare organization's potential for success in implementing an interactive health communication system (IHCS). The model consists of seven weighted factors, with each factor containing five to seven elements. Two decision-analytic approaches, self-explicated and conjoint analysis, were used to measure the weights of the RIM with a sample of 410 experts. The RIM model with weights was then validated in a prospective study of 25 IHCS implementation cases. Orthogonal main effects design was used to develop 700 conjoint-analysis profiles, which varied on seven factors. Each of the 410 experts rated the importance and desirability of the factors and their levels, as well as a set of 10 different profiles. For the prospective 25-case validation, three time-repeated measures of the RIM scores were collected for comparison with the implementation outcomes. Two of the seven factors, 'organizational motivation' and 'meeting user needs,' were found to be most important in predicting implementation readiness. No statistically significant difference was found in the predictive validity of the two approaches (self-explicated and conjoint analysis). The RIM was a better predictor for the 1-year implementation outcome than the half-year outcome. The expert sample, the order of the survey tasks, the additive model, and basing the RIM cut-off score on experience are possible limitations of the study. The RIM needs to be empirically evaluated in institutions adopting IHCS and sustaining the system in the long term.
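The conjoint-analysis step described above (rating orthogonally varied profiles, then decomposing ratings into factor weights) can be sketched with ordinary least squares. The factors and weights below are invented for illustration, not the RIM's seven factors:

```python
# Minimal conjoint-analysis sketch: recover attribute weights from profile
# ratings via OLS, then normalize to relative importances.
import numpy as np

rng = np.random.default_rng(5)
true_w = np.array([0.5, 0.3, 0.2])            # hypothetical importance weights
profiles = rng.integers(0, 2, size=(64, 3))   # binary factor levels
ratings = profiles @ true_w + rng.normal(scale=0.05, size=64)

X = np.column_stack([np.ones(64), profiles])  # add intercept column
coef, *_ = np.linalg.lstsq(X, ratings, rcond=None)
weights = coef[1:] / coef[1:].sum()           # normalized part-worths
print("estimated weights:", np.round(weights, 2))
```

The self-explicated alternative asks respondents for the importances directly; conjoint analysis infers them from trade-offs, which is why the study could compare the predictive validity of the two approaches.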

  5. AMOVA ["Accumulative Manifold Validation Analysis"]: An Advanced Statistical Methodology Designed to Measure and Test the Validity, Reliability, and Overall Efficacy of Inquiry-Based Psychometric Instruments

    ERIC Educational Resources Information Center

    Osler, James Edward, II

    2015-01-01

    This monograph provides an epistemological rationale for the Accumulative Manifold Validation Analysis [also referred to by the acronym "AMOVA"] statistical methodology designed to test psychometric instruments. This form of inquiry is a type of mathematical optimization in the discipline of linear stochastic modelling. AMOVA is an in-depth…

  6. Economic analysis of model validation for a challenge problem

    DOE PAGES

    Paez, Paul J.; Paez, Thomas L.; Hasselman, Timothy K.

    2016-02-19

    It is now commonplace for engineers to build mathematical models of the systems they are designing, building, or testing. And, it is nearly universally accepted that phenomenological models of physical systems must be validated prior to use for prediction in consequential scenarios. Yet, there are certain situations in which testing only or no testing and no modeling may be economically viable alternatives to modeling and its associated testing. This paper develops an economic framework within which benefit–cost can be evaluated for modeling and model validation relative to other options. The development is presented in terms of a challenge problem. As a result, we provide a numerical example that quantifies when modeling, calibration, and validation yield higher benefit–cost than a testing only or no modeling and no testing option.

  7. Psychometric Properties and Validation of the Arabic Social Media Addiction Scale.

    PubMed

    Al-Menayes, Jamal

    2015-01-01

    This study investigated the psychometric properties of the Arabic version of the SMAS. SMAS is a variant of IAT customized to measure addiction to social media instead of the Internet as a whole. Using a self-report instrument on a cross-sectional sample of undergraduate students, the results revealed the following. First, the exploratory factor analysis showed that a three-factor model fits the data well. Second, concurrent validity analysis showed the SMAS to be a valid measure of social media addiction. However, further studies and data should verify the hypothesized model. Finally, this study showed that the Arabic version of the SMAS is a valid and reliable instrument for use in measuring social media addiction in the Arab world.

  8. Psychometric Properties and Validation of the Arabic Social Media Addiction Scale

    PubMed Central

    Al-Menayes, Jamal

    2015-01-01

    This study investigated the psychometric properties of the Arabic version of the SMAS. SMAS is a variant of IAT customized to measure addiction to social media instead of the Internet as a whole. Using a self-report instrument on a cross-sectional sample of undergraduate students, the results revealed the following. First, the exploratory factor analysis showed that a three-factor model fits the data well. Second, concurrent validity analysis showed the SMAS to be a valid measure of social media addiction. However, further studies and data should verify the hypothesized model. Finally, this study showed that the Arabic version of the SMAS is a valid and reliable instrument for use in measuring social media addiction in the Arab world. PMID:26347848

  9. A Measure for Evaluating the Effectiveness of Teen Pregnancy Prevention Programs.

    ERIC Educational Resources Information Center

    Somers, Cheryl L.; Johnson, Stephanie A.; Sawilowksy, Shlomo S.

    2002-01-01

    The Teen Attitude Pregnancy Scale (TAPS) was developed to measure teen attitudes and intentions regarding teenage pregnancy. The model demonstrated good internal consistency and concurrent validity for the samples in this study. Analysis revealed evidence of validity for this model. (JDM)

  10. Quantitative validation of carbon-fiber laminate low velocity impact simulations

    DOE PAGES

    English, Shawn A.; Briggs, Timothy M.; Nelson, Stacy M.

    2015-09-26

    Simulations of low velocity impact with a flat cylindrical indenter upon a carbon fiber fabric reinforced polymer laminate are rigorously validated. Comparison of the impact energy absorption between the model and experiment is used as the validation metric. Additionally, non-destructive evaluation, including ultrasonic scans and three-dimensional computed tomography, provide qualitative validation of the models. The simulations include delamination, matrix cracks and fiber breaks. An orthotropic damage and failure constitutive model, capable of predicting progressive damage and failure, is developed in conjunction and described. An ensemble of simulations incorporating model parameter uncertainties is used to predict a response distribution which is then compared to experimental output using appropriate statistical methods. Lastly, the model form errors are exposed and corrected for use in an additional blind validation analysis. The result is a quantifiable confidence in material characterization and model physics when simulating low velocity impact in structures of interest.
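The ensemble step in this abstract (propagating parameter uncertainty to a predicted response distribution, then comparing it statistically with experiment) can be sketched generically. The response function, parameter distributions, and "experimental" data below are placeholders, not the paper's impact model:

```python
# Monte Carlo parameter ensemble compared to experiment via a two-sample test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(6)

def absorbed_energy(stiffness, strength):
    """Placeholder response model standing in for the impact simulation."""
    return 0.8 * strength / stiffness

# Sample uncertain material parameters and build the response ensemble.
stiffness = rng.normal(70.0, 3.0, size=500)
strength = rng.normal(600.0, 40.0, size=500)
predicted = absorbed_energy(stiffness, strength)

experiment = rng.normal(6.9, 0.5, size=30)     # stand-in for measured data
stat, p_value = ks_2samp(predicted, experiment)
print(f"KS statistic = {stat:.2f}, p = {p_value:.3f}")
```

Comparing full distributions rather than point predictions is what allows validation to account for parameter uncertainty on the model side and scatter on the experimental side simultaneously.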

  11. [Factor structure validity of the social capital scale used at baseline in the ELSA-Brasil study].

    PubMed

    Souto, Ester Paiva; Vasconcelos, Ana Glória Godoi; Chor, Dora; Reichenheim, Michael E; Griep, Rosane Härter

    2016-07-21

    This study aims to analyze the factor structure of the Brazilian version of the Resource Generator (RG) scale, using baseline data from the Brazilian Longitudinal Health Study in Adults (ELSA-Brasil). Cross-validation was performed in three random subsamples. Exploratory factor analysis using exploratory structural equation models was conducted in the first two subsamples to diagnose the factor structure, and confirmatory factor analysis was used in the third to corroborate the model defined by the exploratory analyses. Based on the 31 initial items, the model with the best fit included 25 items distributed across three dimensions. They all presented satisfactory convergent validity (values greater than 0.50 for the extracted variance) and precision (values greater than 0.70 for compound reliability). All factor correlations were below 0.85, indicating full discriminative factor validity. The RG scale presents acceptable psychometric properties and can be used in populations with similar characteristics.

  12. [Psychometric properties of the French version of the Effort-Reward Imbalance model].

    PubMed

    Niedhammer, I; Siegrist, J; Landre, M F; Goldberg, M; Leclerc, A

    2000-10-01

    Two main models are currently used to evaluate psychosocial factors at work: the Job Strain model developed by Karasek and the Effort-Reward Imbalance model. A French version of the first model has been validated for the dimensions of psychological demands and decision latitude. As regards the second one evaluating three dimensions (extrinsic effort, reward, and intrinsic effort), there are several versions in different languages, but until recently there was no validated French version. The objective of this study was to explore the psychometric properties of the French version of the Effort-Reward Imbalance model in terms of internal consistency, factorial validity, and discriminant validity. The present study was based on the GAZEL cohort and included the 10 174 subjects who were working at the French national electric and gas company (EDF-GDF) and answered the questionnaire in 1998. A French version of Effort-Reward Imbalance was included in this questionnaire. This version was obtained by a standard forward/backward translation procedure. Internal consistency was satisfactory for the three scales of extrinsic effort, reward, and intrinsic effort: Cronbach's Alpha coefficients higher than 0.7 were observed. A one-factor solution was retained for the factor analysis of the scale of extrinsic effort. A three-factor solution was retained for the factor analysis of reward, and these dimensions were interpreted as the factor analysis of intrinsic effort did not support the expected four-dimension structure. The analysis of discriminant validity displayed significant associations between measures of Effort-Reward Imbalance and the variables of sex, age, education level, and occupational grade. This study is the first one supporting satisfactory psychometric properties of the French version of the Effort-Reward Imbalance model. However, the factorial validity of intrinsic effort could be questioned. Furthermore, as most previous studies were based on male samples working in specific occupations, the present one is also one of the first to show strong associations between measures of this model and social class variables in a population of men and women employed in various occupations.

  13. Causal inference with measurement error in outcomes: Bias analysis and estimation methods.

    PubMed

    Shu, Di; Yi, Grace Y

    2017-01-01

    Inverse probability weighting estimation has been popularly used to consistently estimate the average treatment effect. Its validity, however, is challenged by the presence of error-prone variables. In this paper, we explore the inverse probability weighting estimation with mismeasured outcome variables. We study the impact of measurement error for both continuous and discrete outcome variables and reveal interesting consequences of the naive analysis which ignores measurement error. When a continuous outcome variable is mismeasured under an additive measurement error model, the naive analysis may still yield a consistent estimator; when the outcome is binary, we derive the asymptotic bias in closed form. Furthermore, we develop consistent estimation procedures for practical scenarios where either validation data or replicates are available. With validation data, we propose an efficient method for estimation of average treatment effect; the efficiency gain is substantial relative to usual methods of using validation data. To provide protection against model misspecification, we further propose a doubly robust estimator which is consistent even when either the treatment model or the outcome model is misspecified. Simulation studies are reported to assess the performance of the proposed methods. An application to a smoking cessation dataset is presented.
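The inverse probability weighting (IPW) estimator whose error-prone behavior this paper analyzes can be sketched on simulated data. This minimal version has no measurement error and uses an invented data-generating process, just to show the estimator itself:

```python
# Inverse probability weighting estimate of the average treatment effect.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 5000
x = rng.normal(size=n)
p_treat = 1 / (1 + np.exp(-(0.5 * x)))        # true propensity score
a = rng.binomial(1, p_treat)                  # treatment assignment
y = 2.0 * a + x + rng.normal(size=n)          # outcome; true ATE = 2.0

# Estimate propensities, then take the weighted difference of means.
ps = LogisticRegression().fit(x[:, None], a).predict_proba(x[:, None])[:, 1]
ate = np.mean(a * y / ps) - np.mean((1 - a) * y / (1 - ps))
print(f"IPW ATE estimate: {ate:.2f}")
```

Replacing `y` with a noisy surrogate is where the paper's analysis begins: additive noise on a continuous outcome leaves this estimator consistent, while misclassification of a binary outcome biases it.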

  14. Preparation of the implementation plan of AASHTO Mechanistic-Empirical Pavement Design Guide (M-EPDG) in Connecticut : Phase II : expanded sensitivity analysis and validation with pavement management data.

    DOT National Transportation Integrated Search

    2017-02-08

    The study re-evaluates distress prediction models using the Mechanistic-Empirical Pavement Design Guide (MEPDG) and expands the sensitivity analysis to a wide range of pavement structures and soils. In addition, an extensive validation analysis of th...

  15. Measuring the statistical validity of summary meta-analysis and meta-regression results for use in clinical practice.

    PubMed

    Willis, Brian H; Riley, Richard D

    2017-09-20

    An important question for clinicians appraising a meta-analysis is: are the findings likely to be valid in their own practice-does the reported effect accurately represent the effect that would occur in their own clinical population? To this end we advance the concept of statistical validity-where the parameter being estimated equals the corresponding parameter for a new independent study. Using a simple ('leave-one-out') cross-validation technique, we demonstrate how we may test meta-analysis estimates for statistical validity using a new validation statistic, Vn, and derive its distribution. We compare this with the usual approach of investigating heterogeneity in meta-analyses and demonstrate the link between statistical validity and homogeneity. Using a simulation study, the properties of Vn and the Q statistic are compared for univariate random effects meta-analysis and a tailored meta-regression model, where information from the setting (included as model covariates) is used to calibrate the summary estimate to the setting of application. Their properties are found to be similar when there are 50 studies or more, but for fewer studies Vn has greater power but a higher type 1 error rate than Q. The power and type 1 error rate of Vn are also shown to depend on the within-study variance, between-study variance, study sample size, and the number of studies in the meta-analysis. Finally, we apply Vn to two published meta-analyses and conclude that it usefully augments standard methods when deciding upon the likely validity of summary meta-analysis estimates in clinical practice. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
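The leave-one-out idea underlying this kind of validation statistic can be sketched generically (the paper's exact Vn construction and distribution are not reproduced here): omit each study, pool the rest by inverse-variance weighting, and standardize the omitted study's deviation from the pooled prediction. Effect estimates and variances below are invented:

```python
# Generic leave-one-out check for a fixed-effect meta-analysis, plus the
# usual Q heterogeneity statistic for comparison.
import numpy as np

# Hypothetical study effect estimates and within-study variances.
theta = np.array([0.42, 0.55, 0.38, 0.61, 0.47, 0.52])
v = np.array([0.010, 0.020, 0.015, 0.025, 0.012, 0.018])

z = []
for i in range(len(theta)):
    keep = np.arange(len(theta)) != i
    w = 1 / v[keep]
    pooled = np.sum(w * theta[keep]) / w.sum()
    pred_var = v[i] + 1 / w.sum()              # omitted-study + pooled variance
    z.append((theta[i] - pooled) / np.sqrt(pred_var))

q = np.sum((theta - np.sum(theta / v) / np.sum(1 / v)) ** 2 / v)
print("leave-one-out z-scores:", np.round(z, 2), "; Q =", round(q, 2))
```

Large standardized leave-one-out deviations signal that the pooled estimate does not transfer to individual settings, which is the same phenomenon the Q statistic detects as heterogeneity.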

  16. Interpreting Variance Components as Evidence for Reliability and Validity.

    ERIC Educational Resources Information Center

    Kane, Michael T.

    The reliability and validity of measurement is analyzed by a sampling model based on generalizability theory. A model for the relationship between a measurement procedure and an attribute is developed from an analysis of how measurements are used and interpreted in science. The model provides a basis for analyzing the concept of an error of…

  17. Combined Use of Tissue Morphology, Neural Network Analysis of Chromatin Texture and Clinical Variables to Predict Prostate Cancer Aggressiveness from Biopsy Water

    DTIC Science & Technology

    2000-10-01

    Purpose: To combine clinical, serum, pathologic and computer derived information into an artificial neural network to develop/validate a model to... Development of an artificial neural network (year 02). Prospective validation of this model (projected year 03). All models will be tested and

  18. Progress toward the determination of correct classification rates in fire debris analysis.

    PubMed

    Waddell, Erin E; Song, Emma T; Rinke, Caitlin N; Williams, Mary R; Sigman, Michael E

    2013-07-01

    Principal components analysis (PCA), linear discriminant analysis (LDA), and quadratic discriminant analysis (QDA) were used to develop a multistep classification procedure for determining the presence of ignitable liquid residue in fire debris and assigning any ignitable liquid residue present into the classes defined under the American Society for Testing and Materials (ASTM) E 1618-10 standard method. A multistep classification procedure was tested by cross-validation based on model data sets comprised of the time-averaged mass spectra (also referred to as total ion spectra) of commercial ignitable liquids and pyrolysis products from common building materials and household furnishings (referred to simply as substrates). Fire debris samples from laboratory-scale and field test burns were also used to test the model. The optimal model's true-positive rate was 81.3% for cross-validation samples and 70.9% for fire debris samples. The false-positive rate was 9.9% for cross-validation samples and 8.9% for fire debris samples. © 2013 American Academy of Forensic Sciences.
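The dimension-reduction-plus-discriminant step described above can be sketched as a cross-validated pipeline. The "spectra" and two-class labels below are synthetic stand-ins, not total ion spectra or ASTM E 1618-10 classes:

```python
# PCA followed by linear discriminant analysis, scored by cross-validation.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)
# Two synthetic classes of spectrum-like vectors (e.g., ignitable-liquid
# residue vs. substrate pyrolysis products).
class0 = rng.normal(0.0, 1.0, size=(100, 50))
class1 = rng.normal(0.6, 1.0, size=(100, 50))
X = np.vstack([class0, class1])
y = np.repeat([0, 1], 100)

model = make_pipeline(PCA(n_components=5), LinearDiscriminantAnalysis())
scores = cross_val_score(model, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```

Cross-validating the whole pipeline (including the PCA step) is what yields honest true-positive and false-positive rates like those the abstract reports.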

  19. Dynamic modelling and experimental validation of three wheeled tilting vehicles

    NASA Astrophysics Data System (ADS)

    Amati, Nicola; Festini, Andrea; Pelizza, Luigi; Tonoli, Andrea

    2011-06-01

    The present paper describes the study of the stability in the straight running of a three-wheeled tilting vehicle for urban and sub-urban mobility. The analysis was carried out by developing a multibody model in the Matlab/Simulink SimMechanics environment. An Adams-Motorcycle model and an equivalent analytical model were developed for the cross-validation and for highlighting the similarities with the lateral dynamics of motorcycles. Field tests were carried out to validate the model and identify some critical parameters, such as the damping on the steering system. The stability analysis demonstrates that the lateral dynamic motions are characterised by vibration modes that are similar to those of a motorcycle. Additionally, it shows that the wobble mode is significantly affected by the castor trail, whereas it is only slightly affected by the dynamics of the front suspension. For the present case study, the frame compliance also has no influence on the weave and wobble.
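A stability analysis of this kind boils down to the eigenvalues of a linearized state matrix: each complex pair is a mode (weave, wobble, ...) with a frequency and damping ratio, and a positive real part means instability. The sketch below uses an arbitrary stand-in matrix, not the vehicle model from the paper:

```python
# Modal frequencies and damping ratios from a linearized state matrix x' = Ax.
import numpy as np

# Hypothetical linearized lateral-dynamics state matrix [x1, v1, x2, v2].
A = np.array([[0.0,    1.0,   0.0,   0.0],
              [-900.0, -4.0,  80.0,  0.0],
              [0.0,    0.0,   0.0,   1.0],
              [80.0,   0.0, -400.0, -2.0]])

eigvals = np.linalg.eigvals(A)
for lam in eigvals[np.imag(eigvals) > 0]:      # one entry per oscillatory mode
    wn = abs(lam)                              # natural frequency, rad/s
    zeta = -lam.real / wn                      # damping ratio
    stable = "stable" if lam.real < 0 else "unstable"
    print(f"mode: f = {wn/2/np.pi:.1f} Hz, zeta = {zeta:.3f} ({stable})")
```

Tracking how these eigenvalues move as a parameter such as castor trail is varied is how sensitivities like those reported for the wobble mode are obtained.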

  20. Reinforced Carbon-Carbon Subcomponent Flat Plate Impact Testing for Space Shuttle Orbiter Return to Flight

    NASA Technical Reports Server (NTRS)

    Melis, Matthew E.; Brand, Jeremy H.; Pereira, J. Michael; Revilock, Duane M.

    2007-01-01

    Following the tragedy of the Space Shuttle Columbia on February 1, 2003, a major effort commenced to develop a better understanding of debris impacts and their effect on the Space Shuttle subsystems. An initiative to develop and validate physics-based computer models to predict damage from such impacts was a fundamental component of this effort. To develop the models it was necessary to physically characterize Reinforced Carbon-Carbon (RCC) and various debris materials which could potentially shed on ascent and impact the Orbiter RCC leading edges. The validated models enabled the launch system community to use the impact analysis software LS DYNA to predict damage by potential and actual impact events on the Orbiter leading edge and nose cap thermal protection systems. Validation of the material models was done through a three-level approach: fundamental tests to obtain independent static and dynamic material model properties of materials of interest, sub-component impact tests to provide highly controlled impact test data for the correlation and validation of the models, and full-scale impact tests to establish the final level of confidence for the analysis methodology. This paper discusses the second level subcomponent test program in detail and its application to the LS DYNA model validation process. The level two testing consisted of over one hundred impact tests in the NASA Glenn Research Center Ballistic Impact Lab on 6 by 6 in. and 6 by 12 in. flat plates of RCC and evaluated three types of debris projectiles: BX 265 External Tank foam, ice, and PDL 1034 External Tank foam. These impact tests helped determine the level of damage generated in the RCC flat plates by each projectile. The information obtained from this testing validated the LS DYNA damage prediction models and provided a certain level of confidence to begin performing analysis for full-size RCC test articles for returning NASA to flight with STS 114 and beyond.

  1. Potential Vorticity Analysis of Low Level Thunderstorm Dynamics in an Idealized Supercell Simulation

    DTIC Science & Technology

    2009-03-01

Keywords: severe weather, supercell, Weather Research and Forecasting (WRF) model, Advanced Research WRF. The report covers data, model setup, and methodology for the Advanced Research WRF model, including analysis of the 03/11/2006 GFS model run (11/12Z initialization; 12-hour forecast valid at 12/00Z; 24-hour forecast valid at ...).

  2. Vivaldi: visualization and validation of biomacromolecular NMR structures from the PDB.

    PubMed

    Hendrickx, Pieter M S; Gutmanas, Aleksandras; Kleywegt, Gerard J

    2013-04-01

    We describe Vivaldi (VIsualization and VALidation DIsplay; http://pdbe.org/vivaldi), a web-based service for the analysis, visualization, and validation of NMR structures in the Protein Data Bank (PDB). Vivaldi provides access to model coordinates and several types of experimental NMR data using interactive visualization tools, augmented with structural annotations and model-validation information. The service presents information about the modeled NMR ensemble, validation of experimental chemical shifts, residual dipolar couplings, distance and dihedral angle constraints, as well as validation scores based on empirical knowledge and databases. Vivaldi was designed for both expert NMR spectroscopists and casual non-expert users who wish to obtain a better grasp of the information content and quality of NMR structures in the public archive. Copyright © 2013 Wiley Periodicals, Inc.

  3. The Persian Version of the "Life Satisfaction Scale": Construct Validity and Test-Re-Test Reliability among Iranian Older Adults.

    PubMed

    Moghadam, Manije; Salavati, Mahyar; Sahaf, Robab; Rassouli, Maryam; Moghadam, Mojgan; Kamrani, Ahmad Ali Akbari

    2018-03-01

    After forward-backward translation, the LSS was administered to 334 Persian-speaking, cognitively healthy elderly aged 60 years and over, recruited through convenience sampling. To analyze the validity of the model's constructs and the relationships between the constructs, a confirmatory factor analysis followed by PLS analysis was performed. Construct validity was further investigated by calculating the correlations between the LSS and the "Short Form Health Survey" (SF-36) subscales measuring similar and dissimilar constructs. The LSS was re-administered to 50 participants a month later to assess the reliability. For the eight-factor model of the life satisfaction construct, adequate goodness of fit between the hypothesized model and the model derived from the sample data was attained (positive and statistically significant beta coefficients, good R-squares and acceptable GoF). Construct validity was supported by convergent and discriminant validity, and correlations between the LSS and SF-36 subscales. A minimum Intraclass Correlation Coefficient level of 0.60 was exceeded by all subscales. The minimum level of the reliability indices (Cronbach's α, composite reliability and indicator reliability) was exceeded by all subscales. The Persian version of the Life Satisfaction Scale is a reliable and valid instrument, with psychometric properties consistent with the original version.
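Several of the reliability indices reported here (Cronbach's α in particular) recur throughout these records; a minimal sketch of the computation, using hypothetical Likert data rather than anything from the study:

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for a persons x items score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores)),
    using sample variances (n-1 denominator)."""
    k = len(scores[0])  # number of items

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    total_var = var([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Hypothetical responses: 4 persons x 3 Likert items (not study data)
data = [[2, 3, 3], [4, 4, 5], [3, 3, 4], [5, 5, 5]]
alpha = cronbach_alpha(data)  # high internal consistency for this toy set
```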

  4. Ascertaining Validity in the Abstract Realm of PMESII Simulation Models: An Analysis of the Peace Support Operations Model (PSOM)

    DTIC Science & Technology

    2009-06-01

    The simulation is the campaign-level Peace Support Operations Model (PSOM). This thesis provides a quantitative analysis of PSOM. The results are based ...multiple potential outcomes; further development and analysis is required before the model is used for large-scale analysis.

  5. Cross-validation of an employee safety climate model in Malaysia.

    PubMed

    Bahari, Siti Fatimah; Clarke, Sharon

    2013-06-01

    Whilst substantial research has investigated the nature of safety climate, and its importance as a leading indicator of organisational safety, much of this research has been conducted with Western industrial samples. The current study focuses on the cross-validation of a safety climate model in the non-Western industrial context of Malaysian manufacturing. The first-order factorial validity of Cheyne et al.'s (1998) [Cheyne, A., Cox, S., Oliver, A., Tomas, J.M., 1998. Modelling safety climate in the prediction of levels of safety activity. Work and Stress, 12(3), 255-271] model was tested, using confirmatory factor analysis, in a Malaysian sample. Results showed that the model fit indices were below accepted levels, indicating that the original Cheyne et al. (1998) safety climate model was not supported. An alternative three-factor model was developed using exploratory factor analysis. Although these findings are not consistent with previously reported cross-validation studies, we argue that previous studies have focused on validation across Western samples, and that the current study demonstrates the need to take account of cultural factors in the development of safety climate models intended for use in non-Western contexts. The results have important implications for the transferability of existing safety climate models across cultures (for example, in global organisations) and highlight the need for future research to examine cross-cultural issues in relation to safety climate. Copyright © 2013 National Safety Council and Elsevier Ltd. All rights reserved.

  6. Construct validity of the ovine model in endoscopic sinus surgery training.

    PubMed

    Awad, Zaid; Taghi, Ali; Sethukumar, Priya; Tolley, Neil S

    2015-03-01

    To demonstrate construct validity of the ovine model as a tool for training in endoscopic sinus surgery (ESS). Prospective, cross-sectional evaluation study. Over 18 consecutive months, trainees and experts were evaluated in their ability to perform a range of tasks (based on previous face validation and descriptive studies conducted by the same group) relating to ESS on the sheep-head model. Anonymized randomized video recordings of the above were assessed by two independent and blinded assessors. A validated assessment tool utilizing a five-point Likert scale was employed. Construct validity was calculated by comparing scores across training levels and experts using mean and interquartile range of global and task-specific scores. Subgroup analysis of the intermediate group ascertained previous experience. Nonparametric descriptive statistics were used, and analysis was carried out using SPSS version 21 (IBM, Armonk, NY). Reliability of the assessment tool was confirmed. The model discriminated well between different levels of expertise in global and task-specific scores. A positive correlation was noted between year in training and both global and task-specific scores (P < .001). Experience of the intermediate group was variable, and the number of ESS procedures performed under supervision had the highest impact on performance. This study describes an alternative model for ESS training and assessment. It is also the first to demonstrate construct validity of the sheep-head model for ESS training. © 2014 The American Laryngological, Rhinological and Otological Society, Inc.

  7. Hand-to-Hand Model for Bioelectrical Impedance Analysis to Estimate Fat Free Mass in a Healthy Population

    PubMed Central

    Lu, Hsueh-Kuan; Chiang, Li-Ming; Chen, Yu-Yawn; Chuang, Chih-Lin; Chen, Kuen-Tsann; Dwyer, Gregory B.; Hsu, Ying-Lin; Chen, Chun-Hao; Hsieh, Kuen-Chang

    2016-01-01

    This study aimed to establish a hand-to-hand (HH) model for bioelectrical impedance analysis (BIA) fat free mass (FFM) estimation by comparing with a standing position hand-to-foot (HF) BIA model and dual energy X-ray absorptiometry (DXA); we also verified the reliability of the newly developed model. A total of 704 healthy Chinese individuals (403 men and 301 women) participated. FFM (FFMDXA) reference variables were measured using DXA and segmental BIA. Further, regression analysis, Bland–Altman plots, and cross-validation (2/3 participants as the modeling group, 1/3 as the validation group; three turns were repeated for validation grouping) were conducted to compare tests of agreement with FFMDXA reference variables. In male participants, the hand-to-hand BIA model estimation equation was calculated as follows: FFMmHH = 0.537 h2/ZHH − 0.126 year + 0.217 weight + 18.235 (r2 = 0.919, standard estimate of error (SEE) = 2.164 kg, n = 269). The mean validated correlation coefficients and limits of agreement (LOAs) of the Bland–Altman analysis of the calculated values for FFMmHH and FFMDXA were 0.958 and −4.369–4.343 kg, respectively, for hand-to-foot BIA model measurements for men; the FFM (FFMmHF) and FFMDXA were 0.958 and −4.356–4.375 kg, respectively. The hand-to-hand BIA model estimating equation for female participants was FFMFHH = 0.615 h2/ZHH − 0.144 year + 0.132 weight + 16.507 (r2 = 0.870, SEE = 1.884 kg, n = 201); the three mean validated correlation coefficient and LOA for the hand-to-foot BIA model measurements for female participants (FFMFHH and FFMDXA) were 0.929 and −3.880–3.886 kg, respectively. The FFMHF and FFMDXA were 0.942 and −3.511–3.489 kg, respectively. The results of both hand-to-hand and hand-to-foot BIA models demonstrated similar reliability, and the hand-to-hand BIA models are practical for assessing FFM. PMID:27775642
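The estimating equations above are plain linear regressions and can be applied directly; a small sketch of the male hand-to-hand equation (example inputs are hypothetical; units assumed to be height in cm, impedance in Ω, age in years, weight in kg):

```python
def ffm_male_hh(height_cm, impedance_ohm, age_yr, weight_kg):
    """Male hand-to-hand BIA estimate from the paper's regression:
    FFM = 0.537*h^2/Z - 0.126*age + 0.217*weight + 18.235 (kg)."""
    return (0.537 * height_cm ** 2 / impedance_ohm
            - 0.126 * age_yr
            + 0.217 * weight_kg
            + 18.235)

# Hypothetical subject: 170 cm, 500 ohm hand-to-hand impedance, 30 y, 70 kg
ffm = ffm_male_hh(170, 500, 30, 70)  # ≈ 60.7 kg
```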

  8. Hand-to-Hand Model for Bioelectrical Impedance Analysis to Estimate Fat Free Mass in a Healthy Population.

    PubMed

    Lu, Hsueh-Kuan; Chiang, Li-Ming; Chen, Yu-Yawn; Chuang, Chih-Lin; Chen, Kuen-Tsann; Dwyer, Gregory B; Hsu, Ying-Lin; Chen, Chun-Hao; Hsieh, Kuen-Chang

    2016-10-21

    This study aimed to establish a hand-to-hand (HH) model for bioelectrical impedance analysis (BIA) fat free mass (FFM) estimation by comparing with a standing position hand-to-foot (HF) BIA model and dual energy X-ray absorptiometry (DXA); we also verified the reliability of the newly developed model. A total of 704 healthy Chinese individuals (403 men and 301 women) participated. FFM (FFMDXA) reference variables were measured using DXA and segmental BIA. Further, regression analysis, Bland-Altman plots, and cross-validation (2/3 participants as the modeling group, 1/3 as the validation group; three turns were repeated for validation grouping) were conducted to compare tests of agreement with FFMDXA reference variables. In male participants, the hand-to-hand BIA model estimation equation was calculated as follows: FFMmHH = 0.537 h²/ZHH - 0.126 year + 0.217 weight + 18.235 (r² = 0.919, standard estimate of error (SEE) = 2.164 kg, n = 269). The mean validated correlation coefficients and limits of agreement (LOAs) of the Bland-Altman analysis of the calculated values for FFMmHH and FFMDXA were 0.958 and -4.369-4.343 kg, respectively; for hand-to-foot BIA model measurements for men, the FFM (FFMmHF) and FFMDXA were 0.958 and -4.356-4.375 kg, respectively. The hand-to-hand BIA model estimating equation for female participants was FFMFHH = 0.615 h²/ZHH - 0.144 year + 0.132 weight + 16.507 (r² = 0.870, SEE = 1.884 kg, n = 201); the three mean validated correlation coefficients and LOAs for the hand-to-foot BIA model measurements for female participants (FFMFHH and FFMDXA) were 0.929 and -3.880-3.886 kg, respectively. The FFMHF and FFMDXA were 0.942 and -3.511-3.489 kg, respectively. The results of both hand-to-hand and hand-to-foot BIA models demonstrated similar reliability, and the hand-to-hand BIA models are practical for assessing FFM.

  9. A Psychometric Analysis of the Italian Version of the eHealth Literacy Scale Using Item Response and Classical Test Theory Methods.

    PubMed

    Diviani, Nicola; Dima, Alexandra Lelia; Schulz, Peter Johannes

    2017-04-11

    The eHealth Literacy Scale (eHEALS) is a tool to assess consumers' comfort and skills in using information technologies for health. Although evidence exists of reliability and construct validity of the scale, less agreement exists on structural validity. The aim of this study was to validate the Italian version of the eHealth Literacy Scale (I-eHEALS) in a community sample with a focus on its structural validity, by applying psychometric techniques that account for item difficulty. Two Web-based surveys were conducted among a total of 296 people living in the Italian-speaking region of Switzerland (Ticino). After examining the latent variables underlying the observed variables of the Italian scale via principal component analysis (PCA), fit indices for two alternative models were calculated using confirmatory factor analysis (CFA). The scale structure was examined via parametric and nonparametric item response theory (IRT) analyses accounting for differences between items regarding the proportion of answers indicating high ability. Convergent validity was assessed by correlations with theoretically related constructs. CFA showed a suboptimal model fit for both models. IRT analyses confirmed all items measure a single dimension as intended. Reliability and construct validity of the final scale were also confirmed. The contrasting results of factor analysis (FA) and IRT analyses highlight the importance of considering differences in item difficulty when examining health literacy scales. The findings support the reliability and validity of the translated scale and its use for assessing Italian-speaking consumers' eHealth literacy. ©Nicola Diviani, Alexandra Lelia Dima, Peter Johannes Schulz. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 11.04.2017.
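The IRT analyses described account for item difficulty when modeling response probabilities; a minimal two-parameter logistic sketch (parameter values are illustrative, not estimates from this study):

```python
import math

def p_endorse(theta, difficulty, discrimination=1.0):
    """Two-parameter logistic IRT model: probability that a person with
    ability theta gives the high-ability response to an item with the
    given difficulty (b) and discrimination (a)."""
    return 1.0 / (1.0 + math.exp(-discrimination * (theta - difficulty)))

# For an average respondent (theta = 0), an easy item (b = -1) is far
# more likely to be endorsed than a hard one (b = +2):
p_easy = p_endorse(0.0, -1.0)
p_hard = p_endorse(0.0, 2.0)
```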

  10. Ventilation tube insertion simulation: a literature review and validity assessment of five training models.

    PubMed

    Mahalingam, S; Awad, Z; Tolley, N S; Khemani, S

    2016-08-01

    The objective of this study was to identify and investigate the face and content validity of ventilation tube insertion (VTI) training models described in the literature. A review of the literature was carried out to identify articles describing VTI simulators. Feasible models were replicated and assessed by a group of experts at a postgraduate simulation centre. Experts were defined as surgeons who had performed at least 100 VTI procedures on patients. Seventeen experts participated, ensuring sufficient statistical power for analysis. A standardised 18-item Likert-scale questionnaire was used. This addressed face validity (realism), global and task-specific content (suitability of the model for teaching) and curriculum recommendation. The search revealed eleven models, of which only five had associated validity data. Five models were found to be feasible to replicate. None of the tested models achieved face or global content validity. Only one model achieved task-specific validity, and hence there was no agreement on curriculum recommendation. The quality of simulation models is moderate and there is room for improvement. There is a need for new models to be developed, or existing ones to be refined, in order to construct a more realistic training platform for VTI simulation. © 2015 John Wiley & Sons Ltd.

  11. Application of a High-Fidelity Icing Analysis Method to a Model-Scale Rotor in Forward Flight

    NASA Technical Reports Server (NTRS)

    Narducci, Robert; Orr, Stanley; Kreeger, Richard E.

    2012-01-01

    An icing analysis process involving the loose coupling of OVERFLOW-RCAS for rotor performance prediction with LEWICE3D for thermal analysis and ice accretion is applied to a model-scale rotor for validation. The process offers high-fidelity rotor analysis for non-iced and iced rotor performance evaluation that accounts for the interaction of nonlinear aerodynamics with blade elastic deformations. Ice accumulation prediction also involves loosely coupled data exchanges between OVERFLOW and LEWICE3D to produce accurate ice shapes. Validation of the process uses data collected in the 1993 icing test involving Sikorsky's Powered Force Model. Non-iced and iced rotor performance predictions are compared to experimental measurements, as are predicted ice shapes.

  12. Exploring the measurement properties of the osteopathy clinical teaching questionnaire using Rasch analysis.

    PubMed

    Vaughan, Brett

    2018-01-01

    Clinical teaching evaluations are common in health profession education programs to ensure students are receiving a quality clinical education experience. Questionnaires students use to evaluate their clinical teachers have been developed in professions such as medicine and nursing. The development of a questionnaire that is specifically for the osteopathy on-campus, student-led clinic environment is warranted. Previous work developed the 30-item Osteopathy Clinical Teaching Questionnaire. The current study utilised Rasch analysis to investigate the construct validity of the Osteopathy Clinical Teaching Questionnaire and provide evidence for the validity argument through fit to the Rasch model. Senior osteopathy students at four institutions in Australia, New Zealand and the United Kingdom rated their clinical teachers using the Osteopathy Clinical Teaching Questionnaire. Three hundred and ninety-nine valid responses were received and the data were evaluated for fit to the Rasch model. Reliability estimations (Cronbach's alpha and McDonald's omega) were also evaluated for the final model. The initial analysis demonstrated the data did not fit the Rasch model. Accordingly, modifications to the questionnaire were made, including removing items, removing person responses, and rescoring one item. The final model contained 12 items and fit to the Rasch model was adequate. Support for unidimensionality was demonstrated through both the Principal Components Analysis/t-test and the Cronbach's alpha and McDonald's omega reliability estimates. Analysis of the questionnaire using McDonald's omega hierarchical supported a general factor (quality of clinical teaching in osteopathy). The evidence for unidimensionality and the presence of a general factor support the calculation of a total score for the questionnaire as a sufficient statistic. Further work is now required to investigate the reliability of the 12-item Osteopathy Clinical Teaching Questionnaire to provide evidence for the validity argument.

  13. Validation of a new mortality risk prediction model for people 65 years and older in northwest Russia: The Crystal risk score.

    PubMed

    Turusheva, Anna; Frolova, Elena; Bert, Vaes; Hegendoerfer, Eralda; Degryse, Jean-Marie

    2017-07-01

    Prediction models help to make decisions about further management in clinical practice. This study aims to develop a mortality risk score based on previously identified risk predictors and to perform internal and external validation. In a population-based prospective cohort study of 611 community-dwelling individuals aged 65+ in St. Petersburg (Russia), all-cause mortality risks over 2.5 years of follow-up were determined based on the results obtained from anthropometry, medical history, physical performance tests, spirometry and laboratory tests. C-statistic, risk reclassification analysis, integrated discrimination improvement analysis, decision curve analysis, internal validation and external validation were performed. Older adults were at higher risk for mortality [HR (95%CI) = 4.54 (3.73-5.52)] when two or more of the following components were present: poor physical performance, low muscle mass, poor lung function, and anemia. When anemia combined with high C-reactive protein (CRP) and high B-type natriuretic peptide (BNP) was added, the HR (95%CI) was slightly higher [5.81 (4.73-7.14)], even after adjusting for age, sex and comorbidities. Our models were validated in an external population of adults 80+. The extended model had a better predictive capacity for cardiovascular mortality [HR (95%CI) = 5.05 (2.23-11.44)] compared to the baseline model [HR (95%CI) = 2.17 (1.18-4.00)] in the external population. We developed and validated a new risk prediction score that may be used to identify older adults at higher risk for mortality in Russia. Additional studies need to determine which targeted interventions improve the outcomes of these at-risk individuals. Copyright © 2017 Elsevier B.V. All rights reserved.
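The baseline rule described above (two or more deficits among four components) amounts to a simple screen; a sketch with hypothetical argument names:

```python
def crystal_high_risk(poor_physical_performance, low_muscle_mass,
                      poor_lung_function, anemia):
    """Flag an individual as higher mortality risk when two or more of
    the four baseline components are present (per the abstract's rule)."""
    deficits = sum([poor_physical_performance, low_muscle_mass,
                    poor_lung_function, anemia])
    return deficits >= 2

# Two deficits present -> flagged as higher risk
flag = crystal_high_risk(True, False, True, False)
```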

  14. SWAT model uncertainty analysis, calibration and validation for runoff simulation in the Luvuvhu River catchment, South Africa

    NASA Astrophysics Data System (ADS)

    Thavhana, M. P.; Savage, M. J.; Moeletsi, M. E.

    2018-06-01

    The soil and water assessment tool (SWAT) was calibrated for the Luvuvhu River catchment, South Africa in order to simulate runoff. The model was executed through QSWAT, an interface between SWAT and QGIS. Data from four weather stations and four weir stations evenly distributed over the catchment were used. The model was run for the 33-year period 1983-2015. Sensitivity analysis, calibration and validation were conducted using the sequential uncertainty fitting (SUFI-2) algorithm through its interface with the SWAT calibration and uncertainty procedure (SWAT-CUP). Calibration covered the period 1986 to 2005, and validation the period 2006 to 2015. Six model efficiency measures were used, namely: coefficient of determination (R2), Nash-Sutcliffe efficiency (NSE) index, root mean square error (RMSE)-observations standard deviation ratio (RSR), percent bias (PBIAS), probability (P)-factor and correlation coefficient (R)-factor. Initial results indicated an over-estimation of low flows, with a regression slope of less than 0.7. Twelve model parameters were included in the sensitivity analysis, of which four (ALPHA_BF, CN2, GW_DELAY and SOL_K) were found to be distinguishable and sensitive to streamflow (p < 0.05). The SUFI-2 algorithm through the interface with SWAT-CUP was capable of capturing the model's behaviour, with calibration results showing an R2 of 0.63, NSE index of 0.66, RSR of 0.56 and a positive PBIAS of 16.3, while validation results revealed an R2 of 0.52, NSE of 0.48, RSR of 0.72 and PBIAS of 19.90. The model produced a P-factor of 0.67 and R-factor of 0.68 during calibration, and 0.69 and 0.53 respectively during validation. Although the performance indicators yielded fair and acceptable results, the P-factor was still below the recommended model performance of 70%. The results thus demonstrate acceptable model performance during calibration, while validation results remained inconclusive. The model can therefore be a useful tool for general water resources assessment, but not for analysing hydrological extremes in the Luvuvhu River catchment.
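The efficiency measures used in this calibration (NSE, PBIAS, RSR) have standard definitions; a sketch with toy flow series (not the catchment data):

```python
import math

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance about the observed mean."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    return 1 - sse / sum((o - mean_obs) ** 2 for o in obs)

def pbias(obs, sim):
    """Percent bias: positive values indicate model underestimation."""
    return 100 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)

def rsr(obs, sim):
    """RMSE divided by the standard deviation of the observations."""
    n = len(obs)
    rmse = math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / n)
    mean_obs = sum(obs) / n
    sd_obs = math.sqrt(sum((o - mean_obs) ** 2 for o in obs) / n)
    return rmse / sd_obs

# Hypothetical observed vs simulated flows
obs = [1.0, 2.0, 3.0, 4.0, 5.0]
sim = [1.1, 1.9, 3.2, 3.8, 5.1]
```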

  15. External validation and clinical utility of a prediction model for 6-month mortality in patients undergoing hemodialysis for end-stage kidney disease.

    PubMed

    Forzley, Brian; Er, Lee; Chiu, Helen Hl; Djurdjev, Ognjenka; Martinusen, Dan; Carson, Rachel C; Hargrove, Gaylene; Levin, Adeera; Karim, Mohamud

    2018-02-01

    End-stage kidney disease is associated with poor prognosis. Health care professionals must be prepared to address end-of-life issues and identify those at high risk for dying. A 6-month mortality prediction model for patients on dialysis derived in the United States is used but has not been externally validated. We aimed to assess the external validity and clinical utility in an independent cohort in Canada. We examined the performance of the published 6-month mortality prediction model, using discrimination, calibration, and decision curve analyses. Data were derived from a cohort of 374 prevalent dialysis patients in two regions of British Columbia, Canada, which included serum albumin, age, peripheral vascular disease, dementia, and answers to the "the surprise question" ("Would I be surprised if this patient died within the next year?"). The observed mortality in the validation cohort was 11.5% at 6 months. The prediction model had reasonable discrimination (c-stat = 0.70) but poor calibration (calibration-in-the-large = -0.53 (95% confidence interval: -0.88, -0.18); calibration slope = 0.57 (95% confidence interval: 0.31, 0.83)) in our data. Decision curve analysis showed the model only has added value in guiding clinical decision in a small range of threshold probabilities: 8%-20%. Despite reasonable discrimination, the prediction model has poor calibration in this external study cohort; thus, it may have limited clinical utility in settings outside of where it was derived. Decision curve analysis clarifies limitations in clinical utility not apparent by receiver operating characteristic curve analysis. This study highlights the importance of external validation of prediction models prior to routine use in clinical practice.
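Decision curve analysis as applied above compares net benefit across threshold probabilities; a minimal sketch of the net-benefit calculation (toy data, not the study cohort):

```python
def net_benefit(y_true, y_prob, threshold):
    """Net benefit at a threshold probability pt:
    TP/n - FP/n * pt/(1 - pt), treating predicted prob >= pt as 'treat'."""
    n = len(y_true)
    treat = [p >= threshold for p in y_prob]
    tp = sum(1 for y, t in zip(y_true, treat) if t and y == 1)
    fp = sum(1 for y, t in zip(y_true, treat) if t and y == 0)
    return tp / n - fp / n * threshold / (1 - threshold)

# Hypothetical outcomes and predicted 6-month mortality probabilities
y = [1, 0, 1, 0, 0, 1, 0, 0]
p = [0.9, 0.2, 0.6, 0.4, 0.1, 0.8, 0.3, 0.5]
nb = net_benefit(y, p, 0.5)  # net benefit at a 50% threshold
```

Plotting net benefit over a range of thresholds, against the "treat all" and "treat none" strategies, gives the decision curve referenced in the abstract.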

  16. Detailed weather and terrain analysis for aircraft noise modeling

    DOT National Transportation Integrated Search

    2013-04-30

    A study has been conducted supporting refinement and development of FAA's airport environmental analysis tools. Tasks conducted in this study are: (1) updated analysis of the 1997 KDEN noise model validation study with newer versions of INM and rel...

  17. Concept analysis and validation of the nursing diagnosis, delayed surgical recovery.

    PubMed

    Appoloni, Aline Helena; Herdman, T Heather; Napoleão, Anamaria Alves; Campos de Carvalho, Emilia; Hortense, Priscilla

    2013-10-01

    To analyze the human response of delayed surgical recovery, approved by NANDA-I, and to validate its defining characteristics (DCs) and related factors (RFs). This was a two-part study using a concept analysis based on the method of Walker and Avant, and diagnostic content validation based on Fehring's model. Three of the original DCs, and three proposed DCs identified from the concept analysis, were validated in this study; five of the original RFs and four proposed RFs were validated. A revision of the concept studied is suggested, incorporating the validation of some of the DCs and RFs presented by NANDA-I, and the insertion of new, validated DCs and RFs. This study may enable the extension of the use of this diagnosis and contribute to quality surgical care of clients. © 2013, The Authors. International Journal of Nursing Knowledge © 2013, NANDA International.

  18. Validation of the Weight Concerns Scale Applied to Brazilian University Students.

    PubMed

    Dias, Juliana Chioda Ribeiro; da Silva, Wanderson Roberto; Maroco, João; Campos, Juliana Alvares Duarte Bonini

    2015-06-01

    The aim of this study was to evaluate the validity and reliability of the Portuguese version of the Weight Concerns Scale (WCS) when applied to Brazilian university students. The scale was completed by 1084 university students from Brazilian public education institutions. A confirmatory factor analysis was conducted. The stability of the model in independent samples was assessed through multigroup analysis, and the invariance was estimated. Convergent, concurrent, divergent, and criterion validities as well as internal consistency were estimated. Results indicated that the one-factor model presented an adequate fit to the sample and values of convergent validity. The concurrent validity with the Body Shape Questionnaire and divergent validity with the Maslach Burnout Inventory for Students were adequate. Internal consistency was adequate, and the factorial structure was invariant in independent subsamples. The results present a simple and short instrument capable of precisely and accurately assessing concerns with weight among Brazilian university students. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Prior Study of Cross-Cultural Validation of McGill Quality-of-Life Questionnaire in Mainland Mandarin Chinese Patients With Cancer.

    PubMed

    Hu, Liya; Li, Jingwen; Wang, Xu; Payne, Sheila; Chen, Yuan; Mei, Qi

    2015-11-01

    Whether the McGill quality-of-life questionnaire (MQOLQ), already used in multicultural palliative care settings including Hong Kong and Taiwan, is valid in mainland China remained unknown. Eligible patients completed the translated Chinese version of the questionnaire (MQOL-C), which had been examined before the study. Construct validity was preliminarily assessed through exploratory factor analysis, extracting four factors that formed a new hypothesized model; confirmatory factor analysis then showed that the original model fit better. Internal consistency of all subscales was within 0.582 to 0.917. Furthermore, test-retest reliability, determined by the Spearman rank correlation coefficient, ranged from 0.509 to 0.859. Face validation and feasibility assessments also confirmed the good validity of the MQOL-C. The MQOL-C shows satisfactory validity in mainland Chinese patients with cancer, although cultural differences should be considered while using it. © The Author(s) 2014.

  20. Organizational citizenship behavior in schools: validation of a questionnaire.

    PubMed

    Neves, Paula C; Paixão, Rui; Alarcão, Madalena; Gomes, A Duarte

    2014-01-01

    The present study examines the psychometric properties (including factorial validity) of an organizational citizenship behavior (OCB) scale in a school context. A total of 321 middle and high school teachers from 59 schools in urban and rural areas of central Portugal completed the OCB scale at their schools. The confirmatory factor analysis validated a hierarchical model with four latent factors on the first level (altruism, conscientiousness, civic participation and courtesy) and a second-order factor (OCB). The revised model fit the data, χ²/gl = 1.97; CFI = .962; GFI = .952; RMSEA = .05. The proposed scale (Comportamentos de Cidadania Organizacional em Escolas - Revista; CCOE-R) is a valid instrument to assess teachers' perceptions of OCB in their schools, allowing investigation at the organizational level of analysis.

  1. Is Going Beyond Rasch Analysis Necessary to Assess the Construct Validity of a Motor Function Scale?

    PubMed

    Guillot, Tiffanie; Roche, Sylvain; Rippert, Pascal; Hamroun, Dalil; Iwaz, Jean; Ecochard, René; Vuillerot, Carole

    2018-04-03

    To examine whether a Rasch analysis is sufficient to establish the construct validity of the Motor Function Measure (MFM) and discuss whether weighting the MFM item scores would improve the MFM construct validity. Observational cross-sectional multicenter study. Twenty-three physical medicine departments, neurology departments, or reference centers for neuromuscular diseases. Patients (N=911) aged 6 to 60 years with Charcot-Marie-Tooth disease (CMT), facioscapulohumeral dystrophy (FSHD), or myotonic dystrophy type 1 (DM1). None. Comparison of the goodness-of-fit of the confirmatory factor analysis (CFA) model vs that of a modified multidimensional Rasch model on MFM item scores in each considered disease. The CFA model showed good fit to the data and significantly better goodness of fit than the modified multidimensional Rasch model regardless of the disease (P<.001). Statistically significant differences in item standardized factor loadings were found between DM1, CMT, and FSHD in only 6 of 32 items (items 6, 27, 2, 7, 9 and 17). For multidimensional scales designed to measure patient abilities in various diseases, a Rasch analysis might not be the most convenient, whereas a CFA is able to establish the scale construct validity and provide weights to adapt the item scores to a specific disease. Copyright © 2018 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
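The dichotomous Rasch model compared against CFA above predicts item success from the gap between person ability and item difficulty alone; a minimal sketch (parameter values illustrative):

```python
import math

def rasch_probability(ability, difficulty):
    """Dichotomous Rasch model:
    P(X=1) = exp(theta - b) / (1 + exp(theta - b)).
    Unlike the CFA weighting discussed in the abstract, every item shares
    the same (unit) discrimination."""
    z = ability - difficulty
    return math.exp(z) / (1.0 + math.exp(z))

# A person one logit above an item's difficulty succeeds ~73% of the time
p = rasch_probability(1.0, 0.0)
```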

  2. Study on rapid valid acidity evaluation of apple by fiber optic diffuse reflectance technique

    NASA Astrophysics Data System (ADS)

    Liu, Yande; Ying, Yibin; Fu, Xiaping; Jiang, Xuesong

    2004-03-01

    Some issues related to nondestructive evaluation of valid acidity in intact apples by the Fourier transform near-infrared (FTNIR) (800-2631 nm) method were addressed. A relationship was established between the diffuse reflectance spectra, recorded with a bifurcated optic fiber, and the valid acidity. The data were analyzed by multivariate calibration techniques, namely partial least squares (PLS) analysis and principal component regression (PCR). A total of 120 Fuji apples were tested, 80 of which formed the calibration data set. The influence of data preprocessing and different spectral treatments was also investigated. Models based on smoothed spectra were slightly worse than models based on derivative spectra, and the best result was obtained with a segment length of 5 and a gap size of 10. Depending on the data preprocessing and multivariate calibration technique, the best prediction model, obtained by PLS analysis, had a high correlation coefficient (0.871), a low RMSEP (0.0677), a low RMSEC (0.056), and a small difference between RMSEP and RMSEC. The results indicate the feasibility of FTNIR spectral analysis for predicting fruit valid acidity nondestructively. However, the ratio of the data standard deviation to the root mean square error of prediction (SDR) should preferably exceed 3 for a calibration model, a requirement the present results could not meet for practical application. Therefore, further study is required for better calibration and prediction.

  3. Experimental validation of finite element model analysis of a steel frame in simulated post-earthquake fire environments

    NASA Astrophysics Data System (ADS)

    Huang, Ying; Bevans, W. J.; Xiao, Hai; Zhou, Zhi; Chen, Genda

    2012-04-01

    During or after an earthquake event, building systems often experience large strains due to shaking effects, as observed during recent earthquakes, causing permanent inelastic deformation. In addition to the inelastic deformation induced by the earthquake itself, post-earthquake fires associated with short circuits in electrical systems and leakage from gas devices can further strain structures already damaged by the earthquake, potentially leading to progressive collapse of buildings. Under these harsh environments, measurements on the affected building by various sensors can provide only limited structural health information. Finite element model analysis, on the other hand, if validated by predesigned experiments, can provide detailed structural behavior information for the entire structure. In this paper, a temperature-dependent nonlinear 3-D finite element model (FEM) of a one-story steel frame is set up in ABAQUS based on the material properties of steel cited from EN 1993-1-2 and AISC manuals. The FEM is validated by testing the modeled steel frame in simulated post-earthquake environments. Comparisons between the FEM analysis and the experimental results show that the FEM predicts the structural behavior of the steel frame in post-earthquake fire conditions reasonably well. With experimental validation, FEM analysis could continuously predict the behavior of critical structures in these harsh environments, better assisting firefighters in their rescue efforts and helping save fire victims.

  4. Critical evaluation of mechanistic two-phase flow pipeline and well simulation models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dhulesia, H.; Lopez, D.

    1996-12-31

    Mechanistic steady-state simulation models, rather than empirical correlations, are used for the design of multiphase production systems including wells, pipelines, and downstream installations. Among the available models, PEPITE, WELLSIM, OLGA, TACITE, and TUFFP are widely used for this purpose, and consequently a critical evaluation of these models is needed. An extensive validation methodology is proposed that consists of two distinct steps: first, validate the hydrodynamic point model using test loop data; then, validate the overall simulation model using data from real pipelines and wells. The test loop databank used in this analysis contains about 5952 data sets originating from four different test loops, and a majority of these data were obtained at high pressures (up to 90 bars) with real hydrocarbon fluids. Before performing the model evaluation, physical analysis of the test loop data is required to eliminate non-coherent data. The evaluation of these point models demonstrates that the TACITE and OLGA models can be applied to any pipe configuration. The TACITE model performs better than the OLGA model because it uses the most appropriate closure laws from the literature, validated on a large number of data. The comparison of predicted and measured pressure drops for various real pipelines and wells demonstrates that the TACITE model is a reliable tool.

  5. Cultural Geography Model Validation

    DTIC Science & Technology

    2010-03-01

    the Cultural Geography Model (CGM), a government-owned, open source multi-agent system utilizing Bayesian networks, queuing systems, the Theory of...referent determined either from theory or SME opinion. 4. CGM Overview The CGM is a government-owned, open source, data-driven multi-agent social...HSCB, validation, social network analysis ABSTRACT: In the current warfighting environment, the military needs robust modeling and simulation (M&S

  6. Modeling and simulation of soft sensor design for real-time speed estimation, measurement and control of induction motor.

    PubMed

    Etien, Erik

    2013-05-01

    This paper deals with the design of a speed soft sensor for an induction motor. The sensor is based on the physical model of the motor. Because the validation step highlights the fact that the sensor cannot be validated for all operating points, the model is modified in order to obtain a fully validated sensor over the whole speed range. An original feature of the proposed approach is that the modified model is derived from stability analysis using automatic control theory. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.

  7. The Model Human Processor and the Older Adult: Parameter Estimation and Validation Within a Mobile Phone Task

    PubMed Central

    Jastrzembski, Tiffany S.; Charness, Neil

    2009-01-01

    The authors estimate weighted mean values for nine information processing parameters for older adults using the Card, Moran, and Newell (1983) Model Human Processor model. The authors validate a subset of these parameters by modeling two mobile phone tasks using two different phones and comparing model predictions to a sample of younger (N = 20; Mage = 20) and older (N = 20; Mage = 69) adults. Older adult models fit keystroke-level performance at the aggregate grain of analysis extremely well (R = 0.99) and produced equivalent fits to previously validated younger adult models. Critical path analyses highlighted points of poor design as a function of cognitive workload, hardware/software design, and user characteristics. The findings demonstrate that estimated older adult information processing parameters are valid for modeling purposes, can help designers understand age-related performance using existing interfaces, and may support the development of age-sensitive technologies. PMID:18194048

  8. The Model Human Processor and the older adult: parameter estimation and validation within a mobile phone task.

    PubMed

    Jastrzembski, Tiffany S; Charness, Neil

    2007-12-01

    The authors estimate weighted mean values for nine information processing parameters for older adults using the Card, Moran, and Newell (1983) Model Human Processor model. The authors validate a subset of these parameters by modeling two mobile phone tasks using two different phones and comparing model predictions to a sample of younger (N = 20; Mage = 20) and older (N = 20; Mage = 69) adults. Older adult models fit keystroke-level performance at the aggregate grain of analysis extremely well (R = 0.99) and produced equivalent fits to previously validated younger adult models. Critical path analyses highlighted points of poor design as a function of cognitive workload, hardware/software design, and user characteristics. The findings demonstrate that estimated older adult information processing parameters are valid for modeling purposes, can help designers understand age-related performance using existing interfaces, and may support the development of age-sensitive technologies.

  9. Comprehensive analysis of transport aircraft flight performance

    NASA Astrophysics Data System (ADS)

    Filippone, Antonio

    2008-04-01

    This paper reviews the state-of-the-art in comprehensive performance codes for fixed-wing aircraft. The importance of system analysis in flight performance is discussed. The paper highlights the role of aerodynamics, propulsion, flight mechanics, aeroacoustics, flight operation, numerical optimisation, stochastic methods and numerical analysis. The latter discipline is used to investigate the sensitivities of the sub-systems to uncertainties in critical state parameters or functional parameters. The paper discusses critically the data used for performance analysis, and the areas where progress is required. Comprehensive analysis codes can be used for mission fuel planning, envelope exploration, competition analysis, a wide variety of environmental studies, marketing analysis, aircraft certification and conceptual aircraft design. A comprehensive program that uses the multi-disciplinary approach for transport aircraft is presented. The model includes a geometry deck, a separate engine input deck with the main parameters, a database of engine performance from an independent simulation, and an operational deck. The comprehensive code has modules for deriving the geometry from bitmap files, an aerodynamics model for all flight conditions, a flight mechanics model for flight envelopes and mission analysis, an aircraft noise model and engine emissions. The model is validated at different levels. Validation of the aerodynamic model is done against the scale models DLR-F4 and F6. A general model analysis and flight envelope exploration are shown for the Boeing B-777-300 with GE-90 turbofan engines with intermediate passenger capacity (394 passengers in 2 classes). Validation of the flight model is done by sensitivity analysis on the wetted area (or profile drag), on the specific air range, the brake-release gross weight and the aircraft noise.
A variety of results is shown, including specific air range charts, take-off weight-altitude charts, payload-range performance, atmospheric effects, economic Mach number and noise trajectories at F.A.R. landing points.

  10. Developing a model for hospital inherent safety assessment: Conceptualization and validation.

    PubMed

    Yari, Saeed; Akbari, Hesam; Gholami Fesharaki, Mohammad; Khosravizadeh, Omid; Ghasemi, Mohammad; Barsam, Yalda; Akbari, Hamed

    2018-01-01

    Paying attention to the safety of hospitals, as the most crucial institutions for providing medical and health services, wherein a bundle of facilities, equipment, and human resources exists, is of significant importance. The present research aims at developing a model for assessing hospital safety based on principles of inherent safety design. Face validity (30 experts), content validity (20 experts), construct validity (268 samples), convergent validity, and divergent validity were employed to validate the prepared questionnaire; item analysis, Cronbach's alpha, the ICC test (to measure test reliability), and the composite reliability coefficient were used to measure primary reliability. The relationships between variables and factors were confirmed at the 0.05 significance level by conducting confirmatory factor analysis (CFA) and structural equation modeling (SEM) with Smart-PLS. R-squared and factor loading values, which were higher than 0.67 and 0.300 respectively, indicated a strong fit. Moderation (0.970), simplification (0.959), substitution (0.943), and minimization (0.5008) had the greatest weights in determining the inherent safety of a hospital, in that order. Moderation, simplification, and substitution thus carry more weight in inherent safety, while minimization carries the least, which could be due to its definition as minimizing the risk.

  11. Pre-launch Optical Characteristics of the Oculus-ASR Nanosatellite for Attitude and Shape Recognition Experiments

    DTIC Science & Technology

    2011-12-02

    construction and validation of predictive computer models such as those used in Time-domain Analysis Simulation for Advanced Tracking (TASAT), a...characterization data, successful construction and validation of predictive computer models was accomplished. And an investigation in pose determination from...

  12. Measuring striving for understanding and learning value of geometry: a validity study

    NASA Astrophysics Data System (ADS)

    Ubuz, Behiye; Aydınyer, Yurdagül

    2017-11-01

    The current study aimed to construct a questionnaire that measures students' personality traits related to striving for understanding and learning value of geometry, and then to examine its psychometric properties. Through the use of multiple methods on two independent samples of 402 and 521 middle school students, two studies were performed to address this issue and provide support for its validity. In Study 1, exploratory factor analysis indicated a two-factor model. In Study 2, confirmatory factor analysis indicated a better fit for the two-factor model compared to one- and three-factor models. Convergent and discriminant validity evidence provided insight into the distinctiveness of the two factors. Subgroup validity evidence revealed gender differences for the striving for understanding geometry trait favouring girls, and grade-level differences for the learning value of geometry trait favouring sixth- and seventh-grade students. Predictive validity evidence demonstrated that the striving for understanding geometry trait, but not the learning value of geometry trait, was significantly correlated with prior mathematics achievement. In both studies, each factor and the entire questionnaire showed satisfactory reliability. In conclusion, the questionnaire was psychometrically sound.

  13. Assessment of family functioning in Caucasian and Hispanic Americans: reliability, validity, and factor structure of the Family Assessment Device.

    PubMed

    Aarons, Gregory A; McDonald, Elizabeth J; Connelly, Cynthia D; Newton, Rae R

    2007-12-01

    The purpose of this study was to examine the factor structure, reliability, and validity of the Family Assessment Device (FAD) among a national sample of Caucasian and Hispanic American families receiving public sector mental health services. A confirmatory factor analysis conducted to test model fit yielded equivocal findings. With few exceptions, indices of model fit, reliability, and validity were poorer for Hispanic Americans compared with Caucasian Americans. Contrary to our expectation, an exploratory factor analysis did not result in a better fitting model of family functioning. Without stronger evidence supporting a reformulation of the FAD, we recommend against such a course of action. Findings highlight the need for additional research on the role of culture in measurement of family functioning.

  14. Development of estrogen receptor beta binding prediction model using large sets of chemicals.

    PubMed

    Sakkiah, Sugunadevi; Selvaraj, Chandrabose; Gong, Ping; Zhang, Chaoyang; Tong, Weida; Hong, Huixiao

    2017-11-03

    We developed an ERβ binding prediction model to facilitate identification of chemicals that specifically bind ERβ or ERα, together with our previously developed ERα binding model. Decision Forest was used to train the ERβ binding prediction model on a large set of compounds obtained from EADB. Model performance was estimated through 1000 iterations of 5-fold cross-validation. Prediction confidence was analyzed using predictions from the cross-validations. Informative chemical features for ERβ binding were identified through analysis of the frequency of chemical descriptors used in the models in the 5-fold cross-validations. 1000 permutations were conducted to assess chance correlation. The average accuracy of the 5-fold cross-validations was 93.14%, with a standard deviation of 0.64%. Prediction confidence analysis indicated that the higher the prediction confidence, the more accurate the predictions. Permutation testing revealed that the prediction model is unlikely to have been generated by chance. Eighteen informative descriptors were identified as important to ERβ binding prediction. Application of the prediction model to data from the ToxCast project yielded a very high sensitivity of 90-92%. Our results demonstrated that ERβ binding of chemicals can be accurately predicted using the developed model. Coupled with our previously developed ERα prediction model, this model can be expected to facilitate drug development through identification of chemicals that specifically bind ERβ or ERα.
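The repeated cross-validation scheme described above (1000 iterations of 5-fold CV) can be sketched as follows. A scikit-learn random forest stands in for the proprietary Decision Forest algorithm, a synthetic binary binder/non-binder dataset replaces EADB, and the iteration count is reduced to keep the example fast; all of these are assumptions, not the paper's actual setup:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Synthetic binder/non-binder data standing in for the EADB compounds.
X, y = make_classification(n_samples=500, n_features=30, n_informative=10,
                           random_state=0)

accs = []
for it in range(20):  # the paper used 1000 iterations; 20 keeps this fast
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=it)
    clf = RandomForestClassifier(n_estimators=100, random_state=it)
    accs.extend(cross_val_score(clf, X, y, cv=cv, scoring="accuracy"))

# Mean and spread across all folds of all iterations, as reported above.
print(f"mean accuracy {np.mean(accs):.4f} +/- {np.std(accs):.4f}")
```

Repeating the k-fold split with different shuffles, as here, is what yields the mean-plus-standard-deviation accuracy the abstract reports.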

  15. Validating Remotely Sensed Land Surface Evapotranspiration Based on Multi-scale Field Measurements

    NASA Astrophysics Data System (ADS)

    Jia, Z.; Liu, S.; Ziwei, X.; Liang, S.

    2012-12-01

    Land surface evapotranspiration plays an important role in the surface energy balance and the water cycle. There have been significant technical and theoretical advances in our knowledge of evapotranspiration over the past two decades. Acquisition of the temporally and spatially continuous distribution of evapotranspiration using remote sensing technology has attracted widespread attention from researchers and managers. However, remote sensing technology still carries many uncertainties arising from the model mechanism, model inputs, parameterization schemes, and scaling issues in regional estimation. Achieving remotely sensed evapotranspiration (RS_ET) with high confidence is required but difficult. As a result, it is indispensable to develop validation methods to quantitatively assess the accuracy and error sources of regional RS_ET estimations. This study proposes an innovative validation method based on multi-scale evapotranspiration acquired from field measurements, with the validation results including accuracy assessment, error source analysis, and uncertainty analysis of the validation process. It is a potentially useful approach to evaluate the accuracy and analyze the spatio-temporal properties of RS_ET at both the basin and local scales, and it is appropriate for validating RS_ET at diverse resolutions and different time scales. An independent RS_ET validation using this method over the Hai River Basin, China, in 2002-2009 is presented as a case study. Validation at the basin scale showed good agreement between the 1 km annual RS_ET and validation data such as the water-balanced evapotranspiration, MODIS evapotranspiration products, precipitation, and land-use types. Validation at the local scale also gave good results for monthly and daily RS_ET at 30 m and 1 km resolutions, compared to multi-scale evapotranspiration measurements from EC and LAS instruments, respectively, with a footprint model over three typical landscapes.
Although some validation experiments demonstrated that the models yield accurate estimates at flux measurement sites, the question remains whether they perform well over the broader landscape. Moreover, a large number of RS_ET products have been released in recent years. Thus, we also pay attention to the cross-validation of RS_ET derived from multi-source models. The "Multi-scale Observation Experiment on Evapotranspiration over Heterogeneous Land Surfaces: Flux Observation Matrix" campaign was carried out in the middle reaches of the Heihe River Basin, China, in 2012. Flux measurements from an observation matrix composed of 22 EC and 4 LAS instruments were acquired to investigate the cross-validation of multi-source models over different landscapes. In this case, six remote sensing models, including an empirical statistical model, one-source and two-source models, a Penman-Monteith-based model, a Priestley-Taylor-based model, and a complementary-relationship-based model, were used to perform an intercomparison. All the results from the two RS_ET validation cases showed that the proposed validation methods are reasonable and feasible.

  16. Structural Validation of the Holistic Wellness Assessment

    ERIC Educational Resources Information Center

    Brown, Charlene; Applegate, E. Brooks; Yildiz, Mustafa

    2015-01-01

    The Holistic Wellness Assessment (HWA) is a relatively new assessment instrument based on an emergent transdisciplinary model of wellness. This study validated the factor structure identified via exploratory factor analysis (EFA), assessed test-retest reliability, and investigated concurrent validity of the HWA in three separate samples. The…

  17. ASME V&V challenge problem: Surrogate-based V&V

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beghini, Lauren L.; Hough, Patricia D.

    2015-12-18

    The process of verification and validation can be resource intensive. From the computational model perspective, the resource demand typically arises from long simulation run times on multiple cores, coupled with the need to characterize and propagate uncertainties. In addition, predictive computations performed for safety and reliability analyses have similar resource requirements. For this reason, there is a tradeoff between the time required to complete the requisite studies and the fidelity or accuracy of the results that can be obtained. At a high level, our approach is cast within a validation hierarchy that provides a framework in which we perform sensitivity analysis, model calibration, model validation, and prediction. The evidence gathered as part of these activities is mapped into the Predictive Capability Maturity Model to assess the credibility of the model used for the reliability predictions. With regard to specific technical aspects of our analysis, we employ surrogate-based methods, primarily based on polynomial chaos expansions and Gaussian processes, for model calibration, sensitivity analysis, and uncertainty quantification, in order to reduce the number of simulations that must be done. The goal is to tip the tradeoff balance toward improved accuracy without increasing the computational demands.
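A minimal illustration of the surrogate idea described above: fit a Gaussian process to a handful of expensive simulator runs, then query the cheap surrogate (with uncertainty) instead of the simulator. A toy 1-D function stands in for the long-running simulation; this is a sketch of the general technique, not the report's actual workflow:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_simulation(x):
    """Stand-in for a long-running physics code."""
    return np.sin(3 * x) + 0.5 * x

# A small design of experiments: only 8 simulator runs.
X_train = np.linspace(0, 2, 8).reshape(-1, 1)
y_train = expensive_simulation(X_train).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5),
                              normalize_y=True).fit(X_train, y_train)

# Cheap surrogate predictions (with uncertainty) at many new points.
X_query = np.linspace(0, 2, 200).reshape(-1, 1)
y_pred, y_std = gp.predict(X_query, return_std=True)

err = np.max(np.abs(y_pred - expensive_simulation(X_query).ravel()))
print(f"max surrogate error: {err:.3f}")
```

The predictive standard deviation `y_std` is what makes such surrogates useful for the calibration and uncertainty-quantification loops mentioned in the abstract.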

  18. Man-machine analysis of translation and work tasks of Skylab films

    NASA Technical Reports Server (NTRS)

    Hosler, W. W.; Boelter, J. G.; Morrow, J. R., Jr.; Jackson, J. T.

    1979-01-01

    An objective approach to determining the concurrent validity of computer-graphic models is real-time film analysis. This technique was illustrated through the procedures and results obtained in an evaluation of translation by Skylab mission astronauts. The quantitative analysis was facilitated by the use of an electronic film analyzer, a minicomputer, and specifically supportive software. The uses of this technique for human factors research are: (1) validation of theoretical operator models; (2) biokinetic analysis; (3) objective data evaluation; (4) dynamic anthropometry; (5) empirical time-line analysis; and (6) consideration of human variability. Computer-assisted techniques for interface design and evaluation have the potential to improve the capability for human factors engineering.

  19. A full-spectrum analysis of high-speed train interior noise under multi-physical-field coupling excitations

    NASA Astrophysics Data System (ADS)

    Zheng, Xu; Hao, Zhiyong; Wang, Xu; Mao, Jie

    2016-06-01

    High-speed-train (HST) interior noise at low, medium, and high frequencies can be simulated by finite element analysis (FEA) or boundary element analysis (BEA), hybrid finite element analysis-statistical energy analysis (FEA-SEA), and statistical energy analysis (SEA), respectively. First, a new method named statistical acoustic energy flow (SAEF) is proposed, which can be applied to full-spectrum HST interior noise simulation (covering low, medium, and high frequencies) with a single model. In an SAEF model, the corresponding multi-physical-field coupling excitations are first fully considered and coupled to excite the interior noise. The interior noise attenuated by the sound insulation panels of the carriage is simulated by modeling the acoustic energy inflow from the exterior excitations into the interior acoustic cavities. Rigid multi-body dynamics, fast multipole BEA, and large-eddy simulation with indirect boundary element analysis are employed to extract the multi-physical-field excitations, which include the wheel-rail interaction forces/secondary suspension forces, the wheel-rail rolling noise, and the aerodynamic noise, respectively. All the peak values and frequency bands of the simulated acoustic excitations are validated against those from a noise source identification test. In addition, the measured equipment noise inside the equipment compartment is used as one of the excitation sources contributing to the interior noise. Second, a fully trimmed FE carriage model is constructed, and the simulated modal shapes and frequencies agree well with the measured ones, which validates the global FE carriage model as well as the local FE models of the aluminum alloy-trim composite panel. Thus, the sound transmission loss model of any composite panel is indirectly validated. Finally, the SAEF model of the carriage is constructed from the accurate FE model and excited by the multi-physical-field excitations.
The results show that the trend of the simulated 1/3-octave-band sound pressure spectrum agrees well with that of the on-site measurement. The deviation between the simulated and measured overall sound pressure level (SPL) is 2.6 dB(A), well within the engineering tolerance limit, which validates the SAEF model for full-spectrum analysis of high-speed-train interior noise.

  20. Measuring the statistical validity of summary meta‐analysis and meta‐regression results for use in clinical practice

    PubMed Central

    Riley, Richard D.

    2017-01-01

    An important question for clinicians appraising a meta‐analysis is: are the findings likely to be valid in their own practice—does the reported effect accurately represent the effect that would occur in their own clinical population? To this end we advance the concept of statistical validity—where the parameter being estimated equals the corresponding parameter for a new independent study. Using a simple (‘leave‐one‐out’) cross‐validation technique, we demonstrate how we may test meta‐analysis estimates for statistical validity using a new validation statistic, Vn, and derive its distribution. We compare this with the usual approach of investigating heterogeneity in meta‐analyses and demonstrate the link between statistical validity and homogeneity. Using a simulation study, the properties of Vn and the Q statistic are compared for univariate random effects meta‐analysis and a tailored meta‐regression model, where information from the setting (included as model covariates) is used to calibrate the summary estimate to the setting of application. Their properties are found to be similar when there are 50 studies or more, but for fewer studies Vn has greater power but a higher type 1 error rate than Q. The power and type 1 error rate of Vn are also shown to depend on the within‐study variance, between‐study variance, study sample size, and the number of studies in the meta‐analysis. Finally, we apply Vn to two published meta‐analyses and conclude that it usefully augments standard methods when deciding upon the likely validity of summary meta‐analysis estimates in clinical practice. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. PMID:28620945
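The "leave-one-out" idea behind the validation statistic above can be sketched in a few lines: hold each study out in turn and compare it against the inverse-variance summary of the remaining studies. This is a simplified illustration with synthetic effect sizes, not the paper's exact derivation of Vn or its distribution:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic meta-analysis: 10 studies, true effect 0.3, within-study SEs.
k = 10
se = rng.uniform(0.05, 0.2, size=k)
effects = 0.3 + rng.normal(0, se)

def pooled(effects, se):
    """Inverse-variance (fixed-effect) pooled estimate and its variance."""
    w = 1.0 / se**2
    return np.sum(w * effects) / np.sum(w), 1.0 / np.sum(w)

# Leave each study out; compare it with the summary of the remaining k-1.
z = []
for i in range(k):
    keep = np.arange(k) != i
    m, v = pooled(effects[keep], se[keep])
    # Standardized difference between the held-out study and the summary.
    z.append((effects[i] - m) / np.sqrt(se[i]**2 + v))
z = np.asarray(z)

print(f"mean |z| across held-out studies: {np.mean(np.abs(z)):.2f}")
```

Under homogeneity these standardized differences behave approximately like standard-normal draws; systematically large values signal that the summary estimate would not be statistically valid for a new study, which is the intuition the paper formalizes.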

  1. Correlation Results for a Mass Loaded Vehicle Panel Test Article Finite Element Models and Modal Survey Tests

    NASA Technical Reports Server (NTRS)

    Maasha, Rumaasha; Towner, Robert L.

    2012-01-01

    High-fidelity Finite Element Models (FEMs) were developed to support a recent test program at Marshall Space Flight Center (MSFC). The FEMs correspond to test articles used for a series of acoustic tests. Modal survey tests were used to validate the FEMs for five acoustic tests (a bare panel and four different mass-loaded panel configurations). An additional modal survey test was performed on the empty test fixture (orthogrid panel mounting fixture, between the reverb and anechoic chambers). Modal survey tests were used to test-validate the dynamic characteristics of FEMs used for acoustic test excitation. Modal survey testing and subsequent model correlation has validated the natural frequencies and mode shapes of the FEMs. The modal survey test results provide a basis for the analysis models used for acoustic loading response test and analysis comparisons

  2. Geographic and temporal validity of prediction models: Different approaches were useful to examine model performance

    PubMed Central

    Austin, Peter C.; van Klaveren, David; Vergouwe, Yvonne; Nieboer, Daan; Lee, Douglas S.; Steyerberg, Ewout W.

    2017-01-01

    Objective Validation of clinical prediction models traditionally refers to the assessment of model performance in new patients. We studied different approaches to geographic and temporal validation in the setting of multicenter data from two time periods. Study Design and Setting We illustrated different analytic methods for validation using a sample of 14,857 patients hospitalized with heart failure at 90 hospitals in two distinct time periods. Bootstrap resampling was used to assess internal validity. Meta-analytic methods were used to assess geographic transportability. Each hospital was used once as a validation sample, with the remaining hospitals used for model derivation. Hospital-specific estimates of discrimination (c-statistic) and calibration (calibration intercepts and slopes) were pooled using random effects meta-analysis methods. I² statistics and prediction interval width quantified geographic transportability. Temporal transportability was assessed using patients from the earlier period for model derivation and patients from the later period for model validation. Results Estimates of reproducibility, pooled hospital-specific performance, and temporal transportability were on average very similar, with c-statistics of 0.75. Between-hospital variation was moderate according to I² statistics and prediction intervals for c-statistics. Conclusion This study illustrates how performance of prediction models can be assessed in settings with multicenter data at different time periods. PMID:27262237
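The meta-analytic pooling of hospital-specific c-statistics described above can be sketched with a DerSimonian-Laird random-effects model, including the I² heterogeneity statistic. The per-hospital estimates below are synthetic stand-ins, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic hospital-specific c-statistics and their standard errors.
n_hosp = 90
true_c = rng.normal(0.75, 0.02, size=n_hosp)   # between-hospital spread
se = rng.uniform(0.01, 0.04, size=n_hosp)
c_hat = true_c + rng.normal(0, se)

# DerSimonian-Laird random-effects pooling.
w = 1.0 / se**2
c_fixed = np.sum(w * c_hat) / np.sum(w)
Q = np.sum(w * (c_hat - c_fixed) ** 2)          # Cochran's Q
df = n_hosp - 1
tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
w_re = 1.0 / (se**2 + tau2)                     # random-effects weights
c_pooled = np.sum(w_re * c_hat) / np.sum(w_re)
i2 = max(0.0, (Q - df) / Q) * 100               # % variation beyond chance

print(f"pooled c-statistic {c_pooled:.3f}, tau^2 {tau2:.5f}, I^2 {i2:.1f}%")
```

The between-hospital variance tau² is what widens the prediction interval for a new hospital's c-statistic, which is how the paper quantifies geographic transportability.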

  3. Hurricane Sandy Economic Impacts Assessment: A Computable General Equilibrium Approach and Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boero, Riccardo; Edwards, Brian Keith

    Economists use computable general equilibrium (CGE) models to assess how economies react and self-organize after changes in policies, technology, and other exogenous shocks. CGE models are equation-based, empirically calibrated, and inspired by Neoclassical economic theory. The focus of this work was to validate the National Infrastructure Simulation and Analysis Center (NISAC) CGE model and apply it to the problem of assessing the economic impacts of severe events. We used the 2012 Hurricane Sandy event as our validation case. In particular, this work first introduces the model and then describes the validation approach and the empirical data available for studying the event of focus. Shocks to the model are then formalized and applied. Finally, model results and limitations are presented and discussed, pointing out both the model's degree of accuracy and the assessed total damage caused by Hurricane Sandy.

  4. Evidence of validity of the Stress-Producing Life Events (SPLE) instrument.

    PubMed

    Rizzini, Marta; Santos, Alcione Miranda Dos; Silva, Antônio Augusto Moura da

    2018-01-01

    OBJECTIVE Evaluate the construct validity of a list of eight stressful life events in pregnant women. METHODS A cross-sectional study was conducted with 1,446 pregnant women in São Luís, MA, and 1,364 pregnant women in Ribeirão Preto, SP (BRISA cohort), from February 2010 to June 2011. In the exploratory factor analysis, promax oblique rotation was used, and internal consistency was assessed with composite reliability. Construct validity was determined by means of confirmatory factor analysis, using weighted least squares estimation adjusted by mean and variance. RESULTS The model with the best fit in the exploratory analysis was the one that retained three factors, with a cumulative variance of 61.1%. The one-factor model did not obtain a good fit in either sample in the confirmatory analysis. The three-factor model, called Stress-Producing Life Events, presented a good fit (RMSEA < 0.05; CFI/TLI > 0.90) for both samples. CONCLUSIONS The Stress-Producing Life Events constitute a second-order construct with three dimensions related to health, personal and financial aspects, and violence. This study found evidence that confirms the construct validity of a list of stressor events, entitled the Stress-Producing Life Events Inventory.
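The fit criteria quoted above (RMSEA < 0.05, CFI/TLI > 0.90) follow from standard formulas based on the fitted model's and the baseline model's chi-square statistics. A minimal sketch, with hypothetical chi-square values rather than the study's actual fit results:

```python
import math

def fit_indices(chi2_m, df_m, chi2_0, df_0, n):
    """Standard CFA fit indices from the fitted model (m) and the
    independence/baseline model (0); n is the sample size."""
    rmsea = math.sqrt(max(chi2_m - df_m, 0.0) / (df_m * (n - 1)))
    cfi = 1.0 - max(chi2_m - df_m, 0.0) / max(chi2_m - df_m, chi2_0 - df_0, 0.0)
    tli = ((chi2_0 / df_0) - (chi2_m / df_m)) / ((chi2_0 / df_0) - 1.0)
    return rmsea, cfi, tli

# hypothetical chi-square statistics, not values from the BRISA analysis
rmsea, cfi, tli = fit_indices(chi2_m=35.2, df_m=24, chi2_0=820.5, df_0=28, n=1446)
```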

  5. Models and applications for space weather forecasting and analysis at the Community Coordinated Modeling Center.

    NASA Astrophysics Data System (ADS)

    Kuznetsova, Maria

    The Community Coordinated Modeling Center (CCMC, http://ccmc.gsfc.nasa.gov) was established at the dawn of the new millennium as a long-term, flexible solution to the problem of transitioning progress in space environment modeling into operational space weather forecasting. CCMC hosts an expanding collection of state-of-the-art space weather models developed by the international space science community. Over the years the CCMC has acquired unique experience in preparing complex models and model chains for the operational environment, and in developing and maintaining custom displays and powerful web-based systems and tools ready to be used by researchers, space weather service providers and decision makers. In support of the space weather needs of NASA users, CCMC develops highly tailored applications and services that target specific orbits or locations in space, and partners with NASA mission specialists on linking CCMC space environment modeling with impacts on biological and technological systems in space. Confidence assessment of model predictions is an essential element of space environment modeling. CCMC facilitates interaction between model owners and users in defining physical parameters and metrics formats relevant to specific applications, and leads community efforts to quantify models' ability to simulate and predict space environment events. Interactive on-line model validation systems developed at CCMC make validation a seamless part of the model development cycle. The talk will showcase innovative solutions for space weather research, validation, anomaly analysis and forecasting, and review ongoing community-wide model validation initiatives enabled by CCMC applications.

  6. Validation and influence of anthropometric and kinematic models of obese teenagers in vertical jump performance and mechanical internal energy expenditure.

    PubMed

    Achard de Leluardière, F; Hajri, L N; Lacouture, P; Duboy, J; Frelut, M L; Peres, G

    2006-02-01

    There may be concerns about the validity of kinetic models when studying locomotion in obese subjects (OS). The aim of the present study was to improve and validate a relevant representation of obese subjects based on four kinetic models. Fourteen teenagers with severe primary obesity (BMI = 40 +/- 5.2 kg/m²) were studied during jumping. The jumps were filmed by six synchronized cameras (50 Hz) associated with a force-plate (1,000 Hz). All the tested models were valid; the linear mechanical analysis of the jumps gave similar results (p > 0.05), but segment inertias differed significantly when the subjects' abdomen was taken into account (p < 0.01), which was associated with a mechanical internal energy expenditure significantly higher (p < 0.01) than that estimated from Dempster's and Hanavan's models, by about 40% and 30%, respectively. The validation of a modelling approach specific to obese subjects will enable a better understanding of their locomotion.

  7. Numerical model (switchable/dual model) of the human head for rigid body and finite elements applications.

    PubMed

    Tabacu, Stefan

    2015-01-01

    In this paper, a methodology for the development and validation of a numerical model of the human head using generic procedures is presented. All required steps, starting with model generation and continuing through model validation and applications, are discussed. The proposed model may be considered a dual one due to its capability to switch from a deformable to a rigid body according to the application's requirements. The first step is to generate the numerical model of the human head using geometry files or medical images. The stiffness and damping required for the elastic connection used in the rigid body model are identified by performing a natural frequency analysis. The presented applications for model validation are related to impact analysis. The first case is based on Nahum's experiments (Nahum and Smith 1970): pressure data are evaluated and a pressure map is generated using the results from discrete elements. In the second case, the relative displacement between the brain and the skull is evaluated according to Hardy's experiments (Hardy WH, Foster CD, Mason MJ, Yang KH, King A, Tashman S. 2001. Investigation of head injury mechanisms using neutral density technology and high-speed biplanar X-ray. Stapp Car Crash J. 45:337-368, SAE Paper 2001-22-0016). The main objective is to validate the rigid model as a quick and versatile tool for acquiring the input data for specific brain analyses.

  8. SBKF Modeling and Analysis Plan: Buckling Analysis of Compression-Loaded Orthogrid and Isogrid Cylinders

    NASA Technical Reports Server (NTRS)

    Lovejoy, Andrew E.; Hilburger, Mark W.

    2013-01-01

    This document outlines a Modeling and Analysis Plan (MAP) to be followed by the SBKF analysts. It includes instructions on modeling and analysis formulation and execution, model verification and validation, identifying sources of error and uncertainty, and documentation. The goal of this MAP is to provide a standardized procedure that ensures uniformity and quality of the results produced by the project and corresponding documentation.

  9. Validating Cognitive Models of Task Performance in Algebra on the SAT®. Research Report No. 2009-3

    ERIC Educational Resources Information Center

    Gierl, Mark J.; Leighton, Jacqueline P.; Wang, Changjiang; Zhou, Jiawen; Gokiert, Rebecca; Tan, Adele

    2009-01-01

    The purpose of the study is to present research focused on validating the four algebra cognitive models in Gierl, Wang, et al., using student response data collected with protocol analysis methods to evaluate the knowledge structures and processing skills used by a sample of SAT test takers.

  10. Validity and Reliability of Chemistry Systemic Multiple Choices Questions (CSMCQs)

    ERIC Educational Resources Information Center

    Priyambodo, Erfan; Marfuatun

    2016-01-01

    Nowadays, Rasch model analysis is widely used in social research, especially in educational research. In this research, the Rasch model is used to determine the validity and reliability of systemic multiple choice questions in chemistry teaching and learning. There were 30 multiple choice questions with a systemic approach for high school student…

  11. The Utility of Job Dimensions Based on Form B of the Position Analysis Questionnaire (PAQ) in a Job Component Validation Model. Report No. 5.

    ERIC Educational Resources Information Center

    Marquardt, Lloyd D.; McCormick, Ernest J.

    The study involved the use of a structured job analysis instrument called the Position Analysis Questionnaire (PAQ) as the direct basis for the establishment of the job component validity of aptitude tests (that is, a procedure for estimating the aptitude requirements for jobs strictly on the basis of job analysis data). The sample of jobs used…

  12. XMI2USE: A Tool for Transforming XMI to USE Specifications

    NASA Astrophysics Data System (ADS)

    Sun, Wuliang; Song, Eunjee; Grabow, Paul C.; Simmonds, Devon M.

    The UML-based Specification Environment (USE) tool supports syntactic analysis, type checking, consistency checking, and dynamic validation of invariants and pre-/post conditions specified in the Object Constraint Language (OCL). Due to its animation and analysis power, it is useful when checking critical non-functional properties such as security policies. However, the USE tool requires one to specify (i.e., "write") a model using its own textual language and does not allow one to import any model specification files created by other UML modeling tools. Hence, to make the best use of existing UML tools, we often create a model with OCL constraints using a modeling tool such as the IBM Rational Software Architect (RSA) and then use the USE tool for model validation. This approach, however, requires a manual transformation between the specifications of two different tool formats, which is error-prone and diminishes the benefit of automated model-level validations. In this paper, we describe our own implementation of a specification transformation engine that is based on the Model Driven Architecture (MDA) framework and currently supports automatic tool-level transformations from RSA to USE.
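As a toy illustration of the kind of tool-level transformation described, the sketch below maps a deliberately simplified XMI-like fragment to USE's textual class syntax. Real RSA-exported XMI and the full USE grammar (associations, invariants, OCL bodies) are far richer, and the element names used here are invented for the example:

```python
import xml.etree.ElementTree as ET

# A deliberately simplified XMI-like fragment; real RSA exports carry
# namespaces, xmi:id cross-references, and OCL constraint bodies.
XMI = """<Model name="Bank">
  <Class name="Account">
    <Attribute name="balance" type="Integer"/>
    <Attribute name="owner" type="String"/>
  </Class>
</Model>"""

def xmi_to_use(xmi_text):
    """Emit a USE specification for the toy XMI above (sketch only)."""
    root = ET.fromstring(xmi_text)
    lines = ["model " + root.get("name")]
    for cls in root.iter("Class"):
        lines.append("class " + cls.get("name"))
        attrs = list(cls.iter("Attribute"))
        if attrs:
            lines.append("attributes")
            for a in attrs:
                lines.append("  {} : {}".format(a.get("name"), a.get("type")))
        lines.append("end")
    return "\n".join(lines)

print(xmi_to_use(XMI))
```

A production transformation, as the paper notes, would be driven by the MDA framework rather than hand-written tag matching; this sketch only shows why automating the mapping removes the error-prone manual step.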

  13. AIR Model Preflight Analysis

    NASA Technical Reports Server (NTRS)

    Tai, H.; Wilson, J. W.; Maiden, D. L.

    2003-01-01

    The atmospheric ionizing radiation (AIR) ER-2 preflight analysis, one of the first attempts to obtain a relatively complete measurement set of the high-altitude radiation level environment, is described in this paper. The primary thrust is to characterize the atmospheric radiation and to define dose levels at high-altitude flight. A secondary thrust is to develop and validate dosimetric techniques and monitoring devices for protecting aircrews. With a few chosen routes, we can measure the experimental results and validate the AIR model predictions. Eventually, as more measurements are made, we gain more understanding about the hazardous radiation environment and acquire more confidence in the prediction models.

  14. Modeling effectiveness of management practices for flood mitigation using GIS spatial analysis functions in Upper Ciliwung watershed

    NASA Astrophysics Data System (ADS)

    Darma Tarigan, Suria

    2016-01-01

    Flooding is caused by excessive rainfall flowing downstream as cumulative surface runoff. A flooding event is the result of complex interaction among natural system components such as rainfall events, land use, soil, topography and channel characteristics. Modeling flooding events as the outcome of the interaction of those components is a central theme in watershed management. Such models are usually used to test the performance of various management practices in flood mitigation. There are various types of management practices for flood mitigation, including vegetative and structural practices. Existing hydrological models such as SWAT and HEC-HMS have limited ability to accommodate discrete management practices such as infiltration wells, small farm reservoirs and silt pits, due to the lumped structure of these models. The aim of this research is to use the raster spatial analysis functions of a Geo-Information System (RGIS-HM) to model flooding events in the Ciliwung watershed and to simulate the impact of discrete management practices on surface runoff reduction. The model was validated using the flooding event of the Ciliwung watershed on 29 January 2004, for which hourly hydrograph and rainfall data were available. The model validation provided good results, with a Nash-Sutcliffe efficiency of 0.8. We also compared the RGIS-HM with the Netlogo Hydrological Model (NL-HM). The RGIS-HM has similar capability to the NL-HM in simulating discrete management practices at the watershed scale.
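The Nash-Sutcliffe efficiency used to judge the validation above can be computed directly from observed and simulated hydrographs; the discharge values below are hypothetical, not the Ciliwung data:

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe model efficiency: 1 is a perfect fit, 0 means the
    model predicts no better than the mean of the observations."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

# hypothetical hourly discharge values (m^3/s)
obs = [10.0, 45.0, 120.0, 80.0, 30.0, 15.0]
sim = [12.0, 40.0, 110.0, 85.0, 33.0, 14.0]
nse = nash_sutcliffe(obs, sim)
```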

  15. Bem Sex Role Inventory Validation in the International Mobility in Aging Study.

    PubMed

    Ahmed, Tamer; Vafaei, Afshin; Belanger, Emmanuelle; Phillips, Susan P; Zunzunegui, Maria-Victoria

    2016-09-01

    This study investigated the measurement structure of the Bem Sex Role Inventory (BSRI) with different factor analysis methods. Most previous validity studies applied exploratory factor analysis (EFA) to examine the BSRI. We aimed to assess the psychometric properties and construct validity of the 12-item short-form BSRI administered to 1,995 older adults in wave 1 of the International Mobility in Aging Study (IMIAS). We used Cronbach's alpha to assess internal consistency reliability and confirmatory factor analysis (CFA) to assess psychometric properties. EFA revealed a three-factor model, which was further examined by CFA and compared with the original two-factor structure. Results revealed that the two-factor solution (instrumentality-expressiveness) has satisfactory construct validity and superior fit to the data compared to the three-factor solution. The two-factor solution also confirms expected gender differences in older adults. The 12-item BSRI provides a brief, psychometrically sound, and reliable instrument for international samples of older adults.
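The internal-consistency check mentioned above uses Cronbach's alpha, which can be sketched as follows with hypothetical item responses (not IMIAS data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha from item-score columns
    (items[i][j] = score of respondent j on item i)."""
    k = len(items)

    def var(xs):  # population variance, as in the usual alpha formula
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    n = len(items[0])
    totals = [sum(items[i][j] for i in range(k)) for j in range(n)]
    item_var = sum(var(col) for col in items)
    return (k / (k - 1)) * (1.0 - item_var / var(totals))

# hypothetical 7-point responses to three instrumentality items
items = [
    [5, 6, 4, 7, 5, 6, 3, 5],
    [4, 6, 5, 7, 4, 6, 4, 5],
    [5, 7, 4, 6, 5, 7, 3, 4],
]
alpha = cronbach_alpha(items)
```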

  16. 3D-quantitative structure-activity relationship studies on benzothiadiazepine hydroxamates as inhibitors of tumor necrosis factor-alpha converting enzyme.

    PubMed

    Murumkar, Prashant R; Giridhar, Rajani; Yadav, Mange Ram

    2008-04-01

    A set of 29 benzothiadiazepine hydroxamates having selective tumor necrosis factor-alpha converting enzyme inhibitory activity were used to compare the quality and predictive power of 3D-quantitative structure-activity relationship, comparative molecular field analysis, and comparative molecular similarity indices models for the atom-based, centroid/atom-based, database, and docked conformer-based alignments. Removal of two outliers from the initial training set of molecules improved the predictivity of the models. Among the 3D-quantitative structure-activity relationship models developed using the above four alignments, the database alignment provided the optimal predictive comparative molecular field analysis model for the training set, with cross-validated r² (q²) = 0.510, non-cross-validated r² = 0.972, standard error of estimate (s) = 0.098, and F = 215.44, and the optimal comparative molecular similarity indices model, with cross-validated r² (q²) = 0.556, non-cross-validated r² = 0.946, standard error of estimate (s) = 0.163, and F = 99.785. These models also showed the best test set prediction for six compounds, with predictive r² values of 0.460 and 0.535, respectively. The contour maps obtained from the 3D-quantitative structure-activity relationship studies were appraised for activity trends for the molecules analyzed. The comparative molecular similarity indices models exhibited good external predictivity compared with that of the comparative molecular field analysis models. The data generated from the present study helped us to further design and report some novel and potent tumor necrosis factor-alpha converting enzyme inhibitors.
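The cross-validated q² quoted above is typically obtained by leave-one-out cross-validation, q² = 1 - PRESS/SS. A minimal sketch with a single hypothetical descriptor and invented activity values (the actual models regress on full CoMFA/CoMSIA field descriptors, not one variable):

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for one descriptor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def loo_q2(xs, ys):
    """Leave-one-out cross-validated q^2 = 1 - PRESS / SS."""
    my = sum(ys) / len(ys)
    press = 0.0
    for i in range(len(xs)):
        xt = xs[:i] + xs[i + 1:]          # drop one compound
        yt = ys[:i] + ys[i + 1:]
        b, a = fit_line(xt, yt)
        press += (ys[i] - (b * xs[i] + a)) ** 2
    ss = sum((y - my) ** 2 for y in ys)
    return 1.0 - press / ss

# hypothetical descriptor values vs. invented pIC50-style activities
x = [1.0, 1.5, 2.1, 2.8, 3.3, 4.0, 4.6, 5.2]
y = [5.1, 5.4, 5.9, 6.5, 6.8, 7.4, 7.7, 8.3]
q2 = loo_q2(x, y)
```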

  17. Self-esteem among nursing assistants: reliability and validity of the Rosenberg Self-Esteem Scale.

    PubMed

    McMullen, Tara; Resnick, Barbara

    2013-01-01

    To establish the reliability and validity of the Rosenberg Self-Esteem Scale (RSES) when used with nursing assistants (NAs). Testing the RSES used baseline data from a randomized controlled trial testing the Res-Care Intervention. Female NAs were recruited from nursing homes (n = 508). Validity testing for the positive and negative subscales of the RSES was based on confirmatory factor analysis (CFA) using structural equation modeling and Rasch analysis. Estimates of reliability were based on Rasch analysis and the person separation index. Evidence supports the reliability and validity of the RSES in NAs although we recommend minor revisions to the measure for subsequent use. Establishing reliable and valid measures of self-esteem in NAs will facilitate testing of interventions to strengthen workplace self-esteem, job satisfaction, and retention.

  18. One Size Fits All? The Validity of a Composite Poverty Index Across Urban and Rural Households in South Africa.

    PubMed

    Steinert, Janina Isabel; Cluver, Lucie Dale; Melendez-Torres, G J; Vollmer, Sebastian

    2018-01-01

    Composite indices have been prominently used in poverty research. However, validity of these indices remains subject to debate. This paper examines the validity of a common type of composite poverty indices using data from a cross-sectional survey of 2477 households in urban and rural KwaZulu-Natal, South Africa. Multiple-group comparisons in structural equation modelling were employed for testing differences in the measurement model across urban and rural groups. The analysis revealed substantial variations between urban and rural respondents both in the conceptualisation of poverty as well as in the weights and importance assigned to individual poverty indicators. The validity of a 'one size fits all' measurement model can therefore not be confirmed. In consequence, it becomes virtually impossible to determine a household's poverty level relative to the full sample. Findings from our analysis have important practical implications in nuancing how we can sensitively use composite poverty indices to identify poor people.

  19. Validation of reactive gases and aerosols in the MACC global analysis and forecast system

    NASA Astrophysics Data System (ADS)

    Eskes, H.; Huijnen, V.; Arola, A.; Benedictow, A.; Blechschmidt, A.-M.; Botek, E.; Boucher, O.; Bouarar, I.; Chabrillat, S.; Cuevas, E.; Engelen, R.; Flentje, H.; Gaudel, A.; Griesfeller, J.; Jones, L.; Kapsomenakis, J.; Katragkou, E.; Kinne, S.; Langerock, B.; Razinger, M.; Richter, A.; Schultz, M.; Schulz, M.; Sudarchikova, N.; Thouret, V.; Vrekoussis, M.; Wagner, A.; Zerefos, C.

    2015-02-01

    The European MACC (Monitoring Atmospheric Composition and Climate) project is preparing the operational Copernicus Atmosphere Monitoring Service (CAMS), one of the services of the European Copernicus Programme on Earth observation and environmental services. MACC uses data assimilation to combine in-situ and remote sensing observations with global and regional models of atmospheric reactive gases, aerosols and greenhouse gases, and is based on the Integrated Forecast System of the ECMWF. The global component of the MACC service has a dedicated validation activity to document the quality of the atmospheric composition products. In this paper we discuss the approach to validation that has been developed over the past three years. Topics discussed are the validation requirements, the operational aspects, the measurement data sets used, the structure of the validation reports, the models and assimilation systems validated, the procedure to introduce new upgrades, and the scoring methods. One specific target of the MACC system concerns forecasting special events with high pollution concentrations. Such events receive extra attention in the validation process. Finally, a summary is provided of the results from the validation of the latest set of daily global analysis and forecast products from the MACC system reported in November 2014.

  20. Statistical Modeling of Natural Backgrounds in Hyperspectral LWIR Data

    DTIC Science & Technology

    2016-09-06

    extremely important for studying performance trades. First, we study the validity of this model using real hyperspectral data, and compare the relative...difficult to validate any statistical model created for a target of interest. However, since background measurements are plentiful, it is reasonable to...Golden, S., Less, D., Jin, X., and Rynes, P., “ Modeling and analysis of LWIR signature variability associated with 3d and BRDF effects,” 98400P (May 2016

  1. NDARC - NASA Design and Analysis of Rotorcraft Validation and Demonstration

    NASA Technical Reports Server (NTRS)

    Johnson, Wayne

    2010-01-01

    Validation and demonstration results from the development of the conceptual design tool NDARC (NASA Design and Analysis of Rotorcraft) are presented. The principal tasks of NDARC are to design a rotorcraft to satisfy specified design conditions and missions, and then analyze the performance of the aircraft for a set of off-design missions and point operating conditions. The aircraft chosen as NDARC development test cases are the UH-60A single main-rotor and tail-rotor helicopter, the CH-47D tandem helicopter, the XH-59A coaxial lift-offset helicopter, and the XV-15 tiltrotor. These aircraft were selected because flight performance data, a weight statement, detailed geometry information, and a correlated comprehensive analysis model are available for each. Validation consists of developing the NDARC models for these aircraft by using geometry and weight information, airframe wind tunnel test data, engine decks, rotor performance tests, and comprehensive analysis results; and then comparing the NDARC results for aircraft and component performance with flight test data. Based on the calibrated models, the capability of the code to size rotorcraft is explored.

  2. Geographical origin discrimination of lentils (Lens culinaris Medik.) using 1H NMR fingerprinting and multivariate statistical analyses.

    PubMed

    Longobardi, Francesco; Innamorato, Valentina; Di Gioia, Annalisa; Ventrella, Andrea; Lippolis, Vincenzo; Logrieco, Antonio F; Catucci, Lucia; Agostiano, Angela

    2017-12-15

    Lentil samples coming from two different countries, i.e. Italy and Canada, were analysed using untargeted 1H NMR fingerprinting in combination with chemometrics in order to build models able to classify them according to their geographical origin. For such aim, Soft Independent Modelling of Class Analogy (SIMCA), k-Nearest Neighbor (k-NN), Principal Component Analysis followed by Linear Discriminant Analysis (PCA-LDA) and Partial Least Squares-Discriminant Analysis (PLS-DA) were applied to the NMR data and the results were compared. The best combination of average recognition (100%) and cross-validation prediction abilities (96.7%) was obtained for the PCA-LDA. All the statistical models were validated both by using a test set and by carrying out a Monte Carlo cross-validation: the obtained performances were found to be satisfying for all the models, with prediction abilities higher than 95% demonstrating the suitability of the developed methods. Finally, the metabolites that mostly contributed to the lentil discrimination were indicated. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Software phantom with realistic speckle modeling for validation of image analysis methods in echocardiography

    NASA Astrophysics Data System (ADS)

    Law, Yuen C.; Tenbrinck, Daniel; Jiang, Xiaoyi; Kuhlen, Torsten

    2014-03-01

    Computer-assisted processing and interpretation of medical ultrasound images is one of the most challenging tasks within image analysis. Physical phenomena in ultrasonographic images, e.g., the characteristic speckle noise and shadowing effects, make the majority of standard methods from image analysis non-optimal. Furthermore, validation of adapted computer vision methods proves to be difficult due to missing ground truth information. There is no widely accepted software phantom in the community, and existing software phantoms are not flexible enough to support the use of specific speckle models for different tissue types, e.g., muscle and fat tissue. In this work we propose an anatomical software phantom with a realistic speckle pattern simulation to fill this gap and provide a flexible tool for validation purposes in medical ultrasound image analysis. We discuss the generation of speckle patterns and perform statistical analysis of the simulated textures to obtain quantitative measures of the realism and accuracy of the resulting textures.

  4. Validation of Yoon's Critical Thinking Disposition Instrument.

    PubMed

    Shin, Hyunsook; Park, Chang Gi; Kim, Hyojin

    2015-12-01

    The lack of reliable and valid evaluation tools targeting Korean nursing students' critical thinking (CT) abilities has been reported as one of the barriers to instructing and evaluating students in undergraduate programs. Yoon's Critical Thinking Disposition (YCTD) instrument was developed for Korean nursing students, but few studies have assessed its validity. This study aimed to validate the YCTD. Specifically, the YCTD was assessed to identify its cross-sectional and longitudinal measurement invariance. This was a validation study in which a cross-sectional and longitudinal (prenursing and postnursing practicum) survey was used to validate the YCTD using 345 nursing students at three universities in Seoul, Korea. The participants' CT abilities were assessed using the YCTD before and after completing an established pediatric nursing practicum. The validity of the YCTD was estimated and then group invariance test using multigroup confirmatory factor analysis was performed to confirm the measurement compatibility of multigroups. A test of the seven-factor model showed that the YCTD demonstrated good construct validity. Multigroup confirmatory factor analysis findings for the measurement invariance suggested that this model structure demonstrated strong invariance between groups (i.e., configural, factor loading, and intercept combined) but weak invariance within a group (i.e., configural and factor loading combined). In general, traditional methods for assessing instrument validity have been less than thorough. In this study, multigroup confirmatory factor analysis using cross-sectional and longitudinal measurement data allowed validation of the YCTD. This study concluded that the YCTD can be used for evaluating Korean nursing students' CT abilities. Copyright © 2015. Published by Elsevier B.V.

  5. A trace map comparison algorithm for the discrete fracture network models of rock masses

    NASA Astrophysics Data System (ADS)

    Han, Shuai; Wang, Gang; Li, Mingchao

    2018-06-01

    Discrete fracture networks (DFN) are widely used to build refined geological models. However, validating whether a refined model matches reality is a crucial problem, as it determines whether the model can be used for analysis. Current validation methods include numerical validation and graphical validation. However, graphical validation, which estimates the similarity between a simulated trace map and the real trace map by visual observation, is subjective. In this paper, an algorithm for the graphical validation of DFN models is developed. Four main indicators, including total gray, gray grade curve, characteristic direction and gray density distribution curve, are presented to assess the similarity between two trace maps. A modified Radon transform and a loop cosine similarity are presented, based on the Radon transform and cosine similarity respectively. In addition, the use of Bézier curves to reduce the edge effect is described. Finally, a case study shows that the new algorithm can effectively distinguish which simulated trace map is more similar to the real trace map.
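The cosine-similarity comparison underlying the paper's loop cosine similarity can be illustrated on plain indicator vectors; the curves below are invented, and the paper's modified Radon transform and loop variant add further steps not shown here:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two indicator vectors, e.g. gray-grade
    curves sampled from a simulated and a real trace map."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# hypothetical gray-grade curves for a real map and two DFN realizations
real = [0.9, 0.7, 0.5, 0.3, 0.2, 0.1]
sim_a = [0.85, 0.72, 0.48, 0.33, 0.18, 0.12]   # close to the real curve
sim_b = [0.2, 0.4, 0.9, 0.1, 0.6, 0.5]          # clearly different shape
score_a = cosine_similarity(real, sim_a)
score_b = cosine_similarity(real, sim_b)
```

The realization with the higher score would be ranked as the better match to the real trace map.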

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    English, Shawn A.; Briggs, Timothy M.; Nelson, Stacy M.

    Simulations of low velocity impact with a flat cylindrical indenter upon a carbon fiber fabric reinforced polymer laminate are rigorously validated. Comparison of the impact energy absorption between the model and experiment is used as the validation metric. Additionally, non-destructive evaluation, including ultrasonic scans and three-dimensional computed tomography, provides qualitative validation of the models. The simulations include delamination, matrix cracks and fiber breaks. An orthotropic damage and failure constitutive model, capable of predicting progressive damage and failure, is developed in conjunction and described. An ensemble of simulations incorporating model parameter uncertainties is used to predict a response distribution, which is then compared to experimental output using appropriate statistical methods. Lastly, the model form errors are exposed and corrected for use in an additional blind validation analysis. The result is a quantifiable confidence in material characterization and model physics when simulating low velocity impact in structures of interest.

  7. Testing alternative ground water models using cross-validation and other methods

    USGS Publications Warehouse

    Foglia, L.; Mehl, S.W.; Hill, M.C.; Perona, P.; Burlando, P.

    2007-01-01

    Many methods can be used to test alternative ground water models. Of concern in this work are methods able to (1) rank alternative models (also called model discrimination) and (2) identify observations important to parameter estimates and predictions (equivalent to the purpose served by some types of sensitivity analysis). Some of the measures investigated are computationally efficient; others are computationally demanding. The latter are generally needed to account for model nonlinearity. The efficient model discrimination methods investigated include the information criteria: the corrected Akaike information criterion, Bayesian information criterion, and generalized cross-validation. The efficient sensitivity analysis measures used are dimensionless scaled sensitivity (DSS), composite scaled sensitivity, and parameter correlation coefficient (PCC); the other statistics are DFBETAS, Cook's D, and observation-prediction statistic. Acronyms are explained in the introduction. Cross-validation (CV) is a computationally intensive nonlinear method that is used for both model discrimination and sensitivity analysis. The methods are tested using up to five alternative parsimoniously constructed models of the ground water system of the Maggia Valley in southern Switzerland. The alternative models differ in their representation of hydraulic conductivity. A new method for graphically representing CV and sensitivity analysis results for complex models is presented and used to evaluate the utility of the efficient statistics. The results indicate that for model selection, the information criteria produce similar results at much smaller computational cost than CV. For identifying important observations, the only obviously inferior linear measure is DSS; the poor performance was expected because DSS does not include the effects of parameter correlation and PCC reveals large parameter correlations. © 2007 National Ground Water Association.
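The efficient information criteria mentioned above can be computed directly from a least-squares fit. A sketch using the standard least-squares forms of AICc and BIC, with hypothetical fit statistics rather than values from the Maggia Valley models:

```python
import math

def aicc(n, k, sse):
    """Corrected Akaike information criterion for a least-squares model
    with n observations, k parameters, and residual sum of squares sse."""
    aic = n * math.log(sse / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)   # small-sample correction

def bic(n, k, sse):
    """Bayesian information criterion for a least-squares model."""
    return n * math.log(sse / n) + k * math.log(n)

# hypothetical fits: model A (3 parameters) vs. model B (6 parameters)
n = 50                        # number of head observations
sse_a, k_a = 12.4, 3
sse_b, k_b = 11.9, 6          # slightly better fit, twice the parameters
```

With these numbers both criteria prefer the parsimonious model A: the small reduction in residual error does not pay for the extra parameters, which is exactly the trade-off model discrimination formalizes.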

  8. Guidelines for Use of the Approximate Beta-Poisson Dose-Response Model.

    PubMed

    Xie, Gang; Roiko, Anne; Stratton, Helen; Lemckert, Charles; Dunn, Peter K; Mengersen, Kerrie

    2017-07-01

    For dose-response analysis in quantitative microbial risk assessment (QMRA), the exact beta-Poisson model is a two-parameter mechanistic dose-response model with parameters α > 0 and β > 0, which involves the Kummer confluent hypergeometric function. Evaluation of a hypergeometric function is a computational challenge. Denoting P_I(d) as the probability of infection at a given mean dose d, the widely used dose-response model P_I(d) = 1 - (1 + d/β)^(-α) is an approximate formula for the exact beta-Poisson model. Notwithstanding the required conditions α ≪ β and β ≫ 1, issues related to the validity and approximation accuracy of this approximate formula have remained largely ignored in practice, partly because these conditions are too general to provide clear guidance. Consequently, this study proposes a probability measure Pr(0 < r < 1 | α̂, β̂) as a validity measure (r is a random variable that follows a gamma distribution; α̂ and β̂ are the maximum likelihood estimates of α and β in the approximate model), and the constraint condition β̂ > (22α̂)^0.50 for 0.02 < α̂ < 2 as a rule of thumb to ensure an accurate approximation (e.g., Pr(0 < r < 1 | α̂, β̂) > 0.99). This validity measure and rule of thumb were validated by application to all the completed beta-Poisson models (related to 85 data sets) from the QMRA community portal (QMRA Wiki). The results showed that the higher the probability Pr(0 < r < 1 | α̂, β̂), the better the approximation. The results further showed that, among the total 85 models examined, 68 models were identified as valid approximate model applications, which all had a near perfect match to the corresponding exact beta-Poisson model dose-response curve. © 2016 Society for Risk Analysis.
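The approximate dose-response formula and the proposed validity measure can be sketched directly. This assumes, as in the abstract, that r follows a gamma distribution (parameterized here with shape α̂ and rate β̂, so the measure is the gamma CDF at r = 1); the parameter values are illustrative:

```python
import math

def p_infection(dose, alpha, beta):
    """Approximate beta-Poisson dose-response: P_I(d) = 1 - (1 + d/beta)^-alpha."""
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

def reg_lower_gamma(a, x):
    """Regularized lower incomplete gamma P(a, x) via its series expansion."""
    term = 1.0 / a
    total = term
    n = 0
    while abs(term) > 1e-15 * abs(total):
        n += 1
        term *= x / (a + n)
        total += term
    return total * math.exp(-x + a * math.log(x) - math.lgamma(a))

def validity_measure(alpha_hat, beta_hat):
    """Pr(0 < r < 1 | alpha_hat, beta_hat), assuming r ~ Gamma(shape=alpha_hat,
    rate=beta_hat): the gamma CDF evaluated at r = 1."""
    return reg_lower_gamma(alpha_hat, beta_hat)
```

For example, α̂ = 0.2 and β̂ = 10 satisfy the rule of thumb (10 > (22 · 0.2)^0.50 ≈ 2.1), and the corresponding validity measure is close to 1.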

  9. Modeling Piezoelectric Stack Actuators for Control of Micromanipulation

    NASA Technical Reports Server (NTRS)

    Goldfarb, Michael; Celanovic, Nikola

    1997-01-01

    A nonlinear lumped-parameter model of a piezoelectric stack actuator has been developed to describe actuator behavior for purposes of control system analysis and design, and, in particular, for microrobotic applications requiring accurate position and/or force control. In formulating this model, the authors propose a generalized Maxwell resistive capacitor as a lumped-parameter causal representation of rate-independent hysteresis. Model formulation is validated by comparing results of numerical simulations to experimental data. Validation is followed by a discussion of model implications for purposes of actuator control.

  10. Simulation for Prediction of Entry Article Demise (SPEAD): An Analysis Tool for Spacecraft Safety Analysis and Ascent/Reentry Risk Assessment

    NASA Technical Reports Server (NTRS)

    Ling, Lisa

    2014-01-01

    For the purpose of performing safety analysis and risk assessment for a potential off-nominal atmospheric reentry resulting in vehicle breakup, a synthesis of trajectory propagation coupled with thermal analysis and the evaluation of node failure is required to predict the sequence of events, the timeline, and the progressive demise of spacecraft components. To provide this capability, the Simulation for Prediction of Entry Article Demise (SPEAD) analysis tool was developed. The software and methodology have been validated against actual flights, telemetry data, and validated software, and safety/risk analyses were performed for various programs using SPEAD. This report discusses the capabilities, modeling, validation, and application of the SPEAD analysis tool.

  11. The development and testing of a skin tear risk assessment tool.

    PubMed

    Newall, Nelly; Lewin, Gill F; Bulsara, Max K; Carville, Keryln J; Leslie, Gavin D; Roberts, Pam A

    2017-02-01

    The aim of the present study was to develop a reliable and valid skin tear risk assessment tool. The six characteristics identified in a previous case control study as constituting the best risk model for skin tear development were used to construct a risk assessment tool. The ability of the tool to predict skin tear development was then tested in a prospective study. Between August 2012 and September 2013, 1466 tertiary hospital patients were assessed at admission and followed up for 10 days to see if they developed a skin tear. The predictive validity of the tool was assessed using receiver operating characteristic (ROC) analysis. When the tool was found not to have performed as well as hoped, secondary analyses were performed to determine whether a potentially better performing risk model could be identified. The tool was found to have high sensitivity but low specificity and therefore to have inadequate predictive validity. Secondary analysis of the combined data from this and the previous case control study identified an alternative, better-performing risk model. The tool developed and tested in this study was found to have inadequate predictive validity. The predictive validity of an alternative, more parsimonious model now needs to be tested. © 2015 Medicalhelplines.com Inc and John Wiley & Sons Ltd.

  12. SCALE TSUNAMI Analysis of Critical Experiments for Validation of 233U Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mueller, Don; Rearden, Bradley T

    2009-01-01

    Oak Ridge National Laboratory (ORNL) staff used the SCALE TSUNAMI tools to provide a demonstration evaluation of critical experiments considered for use in validation of current and anticipated operations involving 233U at the Radiochemical Development Facility (RDF). This work was reported in ORNL/TM-2008/196 issued in January 2009. This paper presents the analysis of two representative safety analysis models provided by RDF staff.

  13. Differential Item Functioning Analysis of High-Stakes Test in Terms of Gender: A Rasch Model Approach

    ERIC Educational Resources Information Center

    Alavi, Seyed Mohammad; Bordbar, Soodeh

    2017-01-01

    Differential Item Functioning (DIF) analysis is a key element in evaluating educational test fairness and validity. One of the frequently cited sources of construct-irrelevant variance is gender which has an important role in the university entrance exam; therefore, it causes bias and consequently undermines test validity. The present study aims…

  14. Knowledge about dietary fibres (KADF): development and validation of an evaluation instrument through structural equation modelling (SEM).

    PubMed

    Guiné, R P F; Duarte, J; Ferreira, M; Correia, P; Leal, M; Rumbak, I; Barić, I C; Komes, D; Satalić, Z; Sarić, M M; Tarcea, M; Fazakas, Z; Jovanoska, D; Vanevski, D; Vittadini, E; Pellegrini, N; Szűcs, V; Harangozó, J; El-Kenawy, A; El-Shenawy, O; Yalçın, E; Kösemeci, C; Klava, D; Straumite, E

    2016-09-01

    Because there is scientific evidence that an appropriate intake of dietary fibre should be part of a healthy diet, given its importance in promoting health, the present study aimed to develop and validate an instrument to evaluate the knowledge of the general population about dietary fibres. The present study was a cross-sectional study. The methodological study of psychometric validation was conducted with 6010 participants residing in 10 countries on three continents. The instrument is a self-response questionnaire aimed at collecting information on knowledge about food fibres. Exploratory factor analysis (EFA) was performed using principal component extraction with varimax orthogonal rotation, retaining factors with eigenvalues greater than 1. Confirmatory factor analysis by structural equation modelling (SEM) used the covariance matrix and the maximum likelihood estimation algorithm for parameter estimation. Exploratory factor analysis retained two factors. The first was called dietary fibre and promotion of health (DFPH) and included seven questions that explained 33.94% of total variance (α = 0.852). The second was named sources of dietary fibre (SDF) and included four questions that explained 22.46% of total variance (α = 0.786). The model was tested by SEM, giving a final solution with four questions in each factor. This model showed a very good fit in practically all the indexes considered, except for the ratio χ²/df. The values of average variance extracted (0.458 and 0.483) demonstrate the existence of convergent validity; the results also prove the existence of discriminant validity of the factors (r² = 0.028), and finally good internal consistency was confirmed by the values of composite reliability (0.854 and 0.787). This study allowed validation of the KADF scale, increasing the degree of confidence in the information obtained through this instrument in this and in future studies. Copyright © 2016 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
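
    The EFA step described in this record (varimax orthogonal rotation, two retained factors) can be sketched with scikit-learn's rotated factor analysis on synthetic questionnaire data. Everything below is illustrative: the item counts, loadings, and sample size are made up, not taken from the study.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Synthetic questionnaire: two latent factors driving 8 items
# (stand-ins for the DFPH and SDF item blocks; sizes are invented).
latent = rng.normal(size=(500, 2))
true_loadings = np.zeros((8, 2))
true_loadings[:4, 0] = 0.8   # items 1-4 load on factor 1
true_loadings[4:, 1] = 0.8   # items 5-8 load on factor 2
items = latent @ true_loadings.T + 0.5 * rng.normal(size=(500, 8))

# EFA with varimax orthogonal rotation, two retained factors
fa = FactorAnalysis(n_components=2, rotation="varimax").fit(items)
print(np.round(fa.components_, 2))  # rotated loading matrix (2 x 8)
```

    In a real analysis the rotated loading matrix is inspected to name the factors (as the authors did with DFPH and SDF) before the confirmatory SEM stage.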

  15. Development and validation of a tool to assess knowledge and attitudes towards generic medicines among students in Greece: The ATtitude TOwards GENerics (ATTOGEN) questionnaire.

    PubMed

    Domeyer, Philip J; Aletras, Vassilis; Anagnostopoulos, Fotios; Katsari, Vasiliki; Niakas, Dimitris

    2017-01-01

    The use of generic medicines is a cost-effective policy, often dictated by fiscal restraints. To our knowledge, no fully validated tool exploring students' knowledge and attitudes towards generic medicines exists. The aim of our study was to develop and validate a questionnaire exploring the knowledge and attitudes towards generic drugs of M.Sc. in Health Care Management students and recent alumni in Greece. The development of the questionnaire was the result of a literature review and pilot-testing of its preliminary versions with researchers and students. The final version of the questionnaire contains 18 items measuring the respondents' knowledge of and attitude towards generic medicines on a 5-point Likert scale. Given the ordinal nature of the data, ordinal alpha and polychoric correlations were computed. The sample was randomly split into two halves. Exploratory factor analysis, performed on the first sample, was used for the creation of multi-item scales. Confirmatory factor analysis and Generalized Linear Latent and Mixed Model (GLLAMM) analysis with the rating scale model were used in the second sample to assess goodness of fit. An assessment of internal consistency reliability, test-retest reliability, and construct validity was also performed. Among 1402 persons contacted, 986 completed our questionnaire (response rate = 70.3%). Overall Cronbach's alpha was 0.871. The conjoint use of exploratory and confirmatory factor analysis resulted in a six-scale model, which seemed to fit the data well. Five of the six scales, namely trust, drug quality, state audit, fiscal impact, and drug substitution, were found to be valid and reliable, while the knowledge scale suffered only from low inter-scale correlations and a ceiling effect. However, the subsequent confirmatory factor and GLLAMM analyses indicated a good fit of the model to the data. The ATTOGEN instrument proved to be a reliable and valid tool, suitable for assessing students' knowledge and attitudes towards generic medicines.

  16. Validating proposed migration equation and parameters' values as a tool to reproduce and predict 137Cs vertical migration activity in Spanish soils.

    PubMed

    Olondo, C; Legarda, F; Herranz, M; Idoeta, R

    2017-04-01

    This paper describes the procedure performed to validate the migration equation and the migration parameters' values presented in a previous paper (Legarda et al., 2011) regarding the migration of 137Cs in Spanish mainland soils. This model validation has been carried out by checking experimentally obtained activity concentration values against those predicted by the model. The experimental data come from the measured vertical activity profiles of 8 new sampling points located in northern Spain. Before testing the predicted values of the model, their uncertainty was assessed with an appropriate uncertainty analysis. Once the uncertainty of the model was established, the two sets of activity concentration values, experimental versus model-predicted, were compared. Model validation was performed by analyzing the model's accuracy, studying it as a whole and also at different depth intervals. As a result, this model has been validated as a tool to predict 137Cs behaviour in a Mediterranean environment. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Testing the Construct Validity of Proposed Criteria for "DSM-5" Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Mandy, William P. L.; Charman, Tony; Skuse, David H.

    2012-01-01

    Objective: To use confirmatory factor analysis to test the construct validity of the proposed "DSM-5" symptom model of autism spectrum disorder (ASD), in comparison to alternative models, including that described in "DSM-IV-TR." Method: Participants were 708 verbal children and young persons (mean age, 9.5 years) with mild to severe autistic…

  18. Longitudinal Construct Validity of Brief Symptom Inventory Subscales in Schizophrenia

    ERIC Educational Resources Information Center

    Long, Jeffrey D.; Harring, Jeffrey R.; Brekke, John S.; Test, Mary Ann; Greenberg, Jan

    2007-01-01

    Longitudinal validity of Brief Symptom Inventory subscales was examined in a sample (N = 318) with schizophrenia-related illness measured at baseline and every 6 months for 3 years. Nonlinear factor analysis of items was used to test graded response models (GRMs) for subscales in isolation. The models varied in their within-time and between-times…

  19. Using Structural Equation Modeling to Validate Online Game Players' Motivations Relative to Self-Concept and Life Adaptation

    ERIC Educational Resources Information Center

    Yang, Shu Ching; Huang, Chiao Ling

    2013-01-01

    This study aimed to validate a systematic instrument to measure online players' motivations for playing online games (MPOG) and examine how the interplay of differential motivations impacts young gamers' self-concept and life adaptation. Confirmatory factor analysis determined that a hierarchical model with a two-factor structure of…

  20. A Confirmatory Factor Analysis of the Structure of Statistics Anxiety Measure: An examination of four alternative models

    PubMed Central

    Vahedi, Shahram; Farrokhi, Farahman

    2011-01-01

    Objective The aim of this study is to explore the confirmatory factor analysis results of the Persian adaptation of Statistics Anxiety Measure (SAM), proposed by Earp. Method The validity and reliability assessments of the scale were performed on 298 college students chosen randomly from Tabriz University in Iran. Confirmatory factor analysis (CFA) was carried out to determine the factor structures of the Persian adaptation of SAM. Results As expected, the second order model provided a better fit to the data than the three alternative models. Conclusions Hence, SAM provides an equally valid measure for use among college students. The study both expands and adds support to the existing body of math anxiety literature. PMID:22952530

  1. The Model Analyst’s Toolkit: Scientific Model Development, Analysis, and Validation

    DTIC Science & Technology

    2013-11-20

    Granger causality F-test validation 3.1.2. Dynamic time warping for uneven temporal relationships Many causal relationships are imperfectly...mapping for dynamic feedback models Granger causality and DTW can identify causal relationships and consider complex temporal factors. However, many ...variant of the tf-idf algorithm (Manning, Raghavan, Schutze et al., 2008), typically used in search engines, to “score” features. The (-log tf) in

  2. Establishment of an Adjusted Prognosis Analysis Model for Initially Diagnosed Non-Small-Cell Lung Cancer With Brain Metastases From Sun Yat-Sen University Cancer Center.

    PubMed

    Dinglin, Xiao-Xiao; Ma, Shu-Xiang; Wang, Fang; Li, De-Lan; Liang, Jian-Zhong; Chen, Xin-Ru; Liu, Qing; Zeng, Yin-Duo; Chen, Li-Kun

    2017-05-01

    The current published prognosis models for brain metastases (BMs) from cancer have not addressed the issue of either newly diagnosed non-small-cell lung cancer (NSCLC) with BMs or the lung cancer genotype. We sought to build an adjusted prognosis analysis (APA) model, a new prognosis model specifically for NSCLC patients with BMs at the initial diagnosis. The model was derived using data from 1158 consecutive patients, with 837 in the derivation cohort and 321 in the validation cohort. The patients had initially received a diagnosis of BMs from NSCLC at Sun Yat-Sen University Cancer Center from 1994 to 2015. The prognostic factors analyzed included patient characteristics, disease characteristics, and treatments. The APA model was built according to the numerical score derived from the hazard ratio of each independent prognostic variable. The predictive accuracy of the APA model was determined using a concordance index and was compared with current prognosis models. The results were validated using bootstrap resampling and a validation cohort. We established 2 prognostic models (APA 1 and 2) for the whole group of patients and for those with known epidermal growth factor receptor (EGFR) genotype, respectively. Six factors were independently associated with survival time: Karnofsky performance status, age, smoking history (replaced by EGFR mutation in APA 2), local treatment of intracranial metastases, EGFR-tyrosine kinase inhibitor treatment, and chemotherapy. Patients in the derivation cohort were stratified into low- (score 0-2), moderate- (score 3-5), and high-risk (score 6-7) groups according to the median survival time (16.6, 10.3, and 5.2 months, respectively; P < .001). The results were further confirmed in the validation cohort. Compared with recursive partitioning analysis and graded prognostic assessment, APA seems to be more suitable for initially diagnosed NSCLC with BMs. Copyright © 2017 Elsevier Inc. All rights reserved.
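
    The three-tier stratification reported in this abstract is a direct score lookup. A trivial sketch of the cutoffs quoted there (score 0-2 low, 3-5 moderate, 6-7 high risk); the function name is ours, not the authors':

```python
def apa_risk_group(score):
    """Map a summed APA prognostic score (0-7) to the risk groups
    reported in the abstract: 0-2 low, 3-5 moderate, 6-7 high."""
    if not 0 <= score <= 7:
        raise ValueError("APA score must lie in 0-7")
    if score <= 2:
        return "low"
    if score <= 5:
        return "moderate"
    return "high"

print(apa_risk_group(1), apa_risk_group(4), apa_risk_group(7))
```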

  3. A Hardware Model Validation Tool for Use in Complex Space Systems

    NASA Technical Reports Server (NTRS)

    Davies, Misty Dawn; Gundy-Burlet, Karen L.; Limes, Gregory L.

    2010-01-01

    One of the many technological hurdles that must be overcome in future missions is the challenge of validating as-built systems against the models used for design. We propose a technique composed of intelligent parameter exploration in concert with automated failure analysis as a scalable method for the validation of complex space systems. The technique is impervious to discontinuities and linear dependencies in the data, and can handle dimensionalities consisting of hundreds of variables over tens of thousands of experiments.

  4. Developing a Psychometric Instrument to Measure Physical Education Teachers' Job Demands and Resources.

    PubMed

    Zhang, Tan; Chen, Ang

    2017-01-01

    Based on the job demands-resources model, the study developed and validated an instrument that measures physical education teachers' job demands-resources perception. Expert review established content validity with the average item rating of 3.6/5.0. Construct validity and reliability were determined with a teacher sample (n = 397). Exploratory factor analysis established a five-dimension construct structure matching the theoretical construct deliberated in the literature. The composite reliability scores for the five dimensions range from .68 to .83. Validity coefficients (intraclass correlational coefficients) are .69 for job resources items and .82 for job demands items. Inter-scale correlational coefficients range from -.32 to .47. Confirmatory factor analysis confirmed the construct validity with high dimensional factor loadings (ranging from .47 to .84 for job resources scale and from .50 to .85 for job demands scale) and adequate model fit indexes (root mean square error of approximation = .06). The instrument provides a tool to measure physical education teachers' perception of their working environment.

  5. Developing a Psychometric Instrument to Measure Physical Education Teachers’ Job Demands and Resources

    PubMed Central

    Zhang, Tan; Chen, Ang

    2017-01-01

    Based on the job demands–resources model, the study developed and validated an instrument that measures physical education teachers’ job demands–resources perception. Expert review established content validity with the average item rating of 3.6/5.0. Construct validity and reliability were determined with a teacher sample (n = 397). Exploratory factor analysis established a five-dimension construct structure matching the theoretical construct deliberated in the literature. The composite reliability scores for the five dimensions range from .68 to .83. Validity coefficients (intraclass correlational coefficients) are .69 for job resources items and .82 for job demands items. Inter-scale correlational coefficients range from −.32 to .47. Confirmatory factor analysis confirmed the construct validity with high dimensional factor loadings (ranging from .47 to .84 for job resources scale and from .50 to .85 for job demands scale) and adequate model fit indexes (root mean square error of approximation = .06). The instrument provides a tool to measure physical education teachers’ perception of their working environment. PMID:29200808

  6. Development and Validation of the Sorokin Psychosocial Love Inventory for Divorced Individuals

    ERIC Educational Resources Information Center

    D'Ambrosio, Joseph G.; Faul, Anna C.

    2013-01-01

    Objective: This study describes the development and validation of the Sorokin Psychosocial Love Inventory (SPSLI) measuring love actions toward a former spouse. Method: Classical measurement theory and confirmatory factor analysis (CFA) were utilized with an a priori theory and factor model to validate the SPSLI. Results: A 15-item scale…

  7. Development of an online, publicly accessible naive Bayesian decision support tool for mammographic mass lesions based on the American College of Radiology (ACR) BI-RADS lexicon.

    PubMed

    Benndorf, Matthias; Kotter, Elmar; Langer, Mathias; Herda, Christoph; Wu, Yirong; Burnside, Elizabeth S

    2015-06-01

    To develop and validate a decision support tool for mammographic mass lesions based on a standardized descriptor terminology (BI-RADS lexicon) to reduce variability of practice. We used separate training data (1,276 lesions, 138 malignant) and validation data (1,177 lesions, 175 malignant). We created naïve Bayes (NB) classifiers from the training data with tenfold cross-validation. Our "inclusive model" comprised BI-RADS categories, BI-RADS descriptors, and age as predictive variables; our "descriptor model" comprised BI-RADS descriptors and age. The resulting NB classifiers were applied to the validation data. We evaluated and compared classifier performance with ROC analysis. In the training data, the inclusive model yields an AUC of 0.959; the descriptor model yields an AUC of 0.910 (P < 0.001). The inclusive model is superior to the clinical performance (BI-RADS categories alone, P < 0.001); the descriptor model performs similarly. When applied to the validation data, the inclusive model yields an AUC of 0.935; the descriptor model yields an AUC of 0.876 (P < 0.001). Again, the inclusive model is superior to the clinical performance (P < 0.001); the descriptor model performs similarly. We consider our classifier a step towards a more uniform interpretation of combinations of BI-RADS descriptors. We provide our classifier at www.ebm-radiology.com/nbmm/index.html. • We provide a decision support tool for mammographic masses at www.ebm-radiology.com/nbmm/index.html. • Our tool may reduce variability of practice in BI-RADS category assignment. • A formal analysis of BI-RADS descriptors may enhance radiologists' diagnostic performance.
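
    The descriptor-based workflow in this record (categorical predictors → naive Bayes → ROC analysis) can be reproduced in outline with any categorical NB implementation. The sketch below uses synthetic, randomly generated stand-ins for the BI-RADS descriptors; the feature encoding and data are hypothetical, not the study's:

```python
import numpy as np
from sklearn.naive_bayes import CategoricalNB
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 1000

# Hypothetical integer-encoded descriptors (e.g. shape, margin,
# density, age band) -- purely synthetic placeholders.
X = rng.integers(0, 4, size=(n, 4))
# Synthetic outcome loosely driven by the first descriptor.
y = (X[:, 0] + rng.integers(0, 4, size=n) >= 5).astype(int)

# Train on the first 700 cases, validate on the held-out 300,
# scoring discrimination with the area under the ROC curve.
clf = CategoricalNB().fit(X[:700], y[:700])
scores = clf.predict_proba(X[700:])[:, 1]
print(round(roc_auc_score(y[700:], scores), 3))
```

    Comparing AUCs between two such classifiers (e.g. with and without the BI-RADS category feature) mirrors the inclusive-versus-descriptor comparison the authors report.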

  8. Linguistic validation of stigmatisation degree, self-esteem and knowledge questionnaire among asthma patients using Rasch analysis.

    PubMed

    Ahmad, Sohail; Ismail, Ahmad Izuanuddin; Khan, Tahir Mehmood; Akram, Waqas; Mohd Zim, Mohd Arif; Ismail, Nahlah Elkudssiah

    2017-04-01

    The stigmatisation degree, self-esteem and knowledge either directly or indirectly influence the control and self-management of asthma. To date, there is no valid and reliable instrument that can assess these key issues collectively. The main aim of this study was to test the reliability and validity of the newly devised and translated "Stigmatisation Degree, Self-Esteem and Knowledge Questionnaire" among adult asthma patients using the Rasch measurement model. This cross-sectional study recruited thirty adult asthma patients from two respiratory specialist clinics in Selangor, Malaysia. The newly devised self-administered questionnaire was adapted from relevant publications and translated into the Malay language using international standard translation guidelines. Content and face validation was done. The data were extracted and analysed for real item reliability and construct validation using the Rasch model. The translated "Stigmatisation Degree, Self-Esteem and Knowledge Questionnaire" showed high real item reliability values of 0.90, 0.86 and 0.89 for stigmatisation degree, self-esteem, and knowledge of asthma, respectively. Furthermore, all values of point measure correlation (PTMEA Corr) analysis were within the acceptable specified range of the Rasch model. Infit/outfit mean square values and Z standard (ZSTD) values of each item verified the construct validity and suggested retaining all the items in the questionnaire. The reliability analyses and output tables of item measures for construct validation proved the translated Malaysian version of "Stigmatisation Degree, Self-Esteem and Knowledge Questionnaire" as a valid and highly reliable questionnaire.

  9. Multiple-Group Analysis Using the sem Package in the R System

    ERIC Educational Resources Information Center

    Evermann, Joerg

    2010-01-01

    Multiple-group analysis in covariance-based structural equation modeling (SEM) is an important technique to ensure the invariance of latent construct measurements and the validity of theoretical models across different subpopulations. However, not all SEM software packages provide multiple-group analysis capabilities. The sem package for the R…

  10. Questionable Validity of Poisson Assumptions in a Combined Loglinear/MDS Mapping Model.

    ERIC Educational Resources Information Center

    Gleason, John M.

    1993-01-01

    This response to an earlier article on a combined log-linear/MDS model for mapping journals by citation analysis discusses the underlying assumptions of the Poisson model with respect to characteristics of the citation process. The importance of empirical data analysis is also addressed. (nine references) (LRW)

  11. Validation of Slosh Model Parameters and Anti-Slosh Baffle Designs of Propellant Tanks by Using Lateral Slosh Testing

    NASA Technical Reports Server (NTRS)

    Perez, Jose G.; Parks, Russel, A.; Lazor, Daniel R.

    2012-01-01

    The slosh dynamics of propellant tanks can be represented by an equivalent mass-pendulum-dashpot mechanical model. The parameters of this equivalent model, identified as slosh mechanical model parameters, are slosh frequency, slosh mass, and pendulum hinge point location. They can be obtained by both analysis and testing for discrete fill levels. Anti-slosh baffles are usually needed in propellant tanks to control the movement of the fluid inside the tank. Lateral slosh testing, involving both random excitation testing and free-decay testing, is performed to validate the slosh mechanical model parameters and the damping added to the fluid by the anti-slosh baffles. Traditional modal analysis procedures were used to extract the parameters from the experimental data. Test setup of sub-scale tanks will be described. A comparison between experimental results and analysis will be presented.
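
    The equivalent-pendulum representation implies two standard relations that such tests exploit: the pendulum arm length that reproduces a measured slosh frequency, and the damping ratio extracted from free-decay amplitudes via the logarithmic decrement. A sketch of these generic textbook formulas (not the paper's specific tank values):

```python
import math

def pendulum_length_from_slosh_freq(f_hz, g=9.81):
    """Equivalent pendulum arm length (m) that reproduces a measured
    lateral slosh frequency, from f = (1 / (2*pi)) * sqrt(g / L)."""
    omega = 2.0 * math.pi * f_hz
    return g / omega ** 2

def damping_from_free_decay(x0, xn, n_cycles):
    """Damping ratio from free-decay amplitudes x0 and xn measured
    n_cycles apart (logarithmic decrement method)."""
    delta = math.log(x0 / xn) / n_cycles
    return delta / math.sqrt(4.0 * math.pi ** 2 + delta ** 2)

print(pendulum_length_from_slosh_freq(1.0))   # arm length for a 1 Hz slosh mode
print(damping_from_free_decay(1.0, 0.5, 5))   # amplitude halves over 5 cycles
```

    Repeating the free-decay measurement with and without baffles isolates the damping contributed by the baffles themselves, which is the quantity the test campaign validates.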

  12. Validation of Slosh Model Parameters and Anti-Slosh Baffle Designs of Propellant Tanks by Using Lateral Slosh Testing

    NASA Technical Reports Server (NTRS)

    Perez, Jose G.; Parks, Russel A.; Lazor, Daniel R.

    2012-01-01

    The slosh dynamics of propellant tanks can be represented by an equivalent pendulum-mass mechanical model. The parameters of this equivalent model, identified as slosh model parameters, are slosh mass, slosh mass center of gravity, slosh frequency, and smooth-wall damping. They can be obtained by both analysis and testing for discrete fill heights. Anti-slosh baffles are usually needed in propellant tanks to control the movement of the fluid inside the tank. Lateral slosh testing, involving both random testing and free-decay testing, is performed to validate the slosh model parameters and the damping added to the fluid by the anti-slosh baffles. Traditional modal analysis procedures are used to extract the parameters from the experimental data. Test setup of sub-scale test articles of cylindrical and spherical shapes will be described. A comparison between experimental results and analysis will be presented.

  13. Development and Validation of a Safety Climate Scale for Manufacturing Industry

    PubMed Central

    Ghahramani, Abolfazl; Khalkhali, Hamid R.

    2015-01-01

    Background This paper describes the development of a scale for measuring safety climate. Methods This study was conducted in six manufacturing companies in Iran. The scale was developed by conducting a literature review on safety climate and constructing a question pool. The number of items was reduced to 71 after a screening process. Results The content validity analysis showed that 59 items had an excellent item content validity index (≥ 0.78) and content validity ratio (> 0.38). The exploratory factor analysis resulted in eight safety climate dimensions. The reliability value for the final 45-item scale was 0.96. The result of confirmatory factor analysis showed that the safety climate model is satisfactory. Conclusion This study produced a valid and reliable scale for measuring safety climate in manufacturing companies. PMID:26106508
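
    The screening thresholds quoted in this record (item content validity index ≥ 0.78, content validity ratio > 0.38) come from standard content-validity formulas. A minimal sketch of those formulas (the usual Polit I-CVI and Lawshe CVR definitions; the worked numbers are illustrative, not the study's):

```python
def item_content_validity_index(n_agree, n_experts):
    """I-CVI: proportion of experts rating an item as content-relevant."""
    return n_agree / n_experts

def content_validity_ratio(n_essential, n_experts):
    """Lawshe's CVR = (n_e - N/2) / (N/2), where n_e experts
    out of N rate the item 'essential'."""
    return (n_essential - n_experts / 2.0) / (n_experts / 2.0)

# e.g. 8 of 10 experts agree; 7 of 10 rate the item essential
print(item_content_validity_index(8, 10))   # 0.8 -> clears the 0.78 cutoff
print(content_validity_ratio(7, 10))        # 0.4 -> clears the 0.38 cutoff
```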

  14. Validating a model that predicts daily growth and feed quality of New Zealand dairy pastures.

    PubMed

    Woodward, S J

    2001-09-01

    The Pasture Quality (PQ) model is a simple, mechanistic, dynamical system model that was designed to capture the essential biological processes in grazed grass-clover pasture, and to be optimised to derive improved grazing strategies for New Zealand dairy farms. While the individual processes represented in the model (photosynthesis, tissue growth, flowering, leaf death, decomposition, worms) were based on experimental data, this did not guarantee that the assembled model would accurately predict the behaviour of the system as a whole (i.e., pasture growth and quality). Validation of the whole model was thus a priority, since any strategy derived from the model could impact a farm business in the order of thousands of dollars per annum if adopted. This paper describes the process of defining performance criteria for the model, obtaining suitable data to test the model, and carrying out the validation analysis. The validation process highlighted a number of weaknesses in the model, which will lead to the model being improved. As a result, the model's utility will be enhanced. Furthermore, validation was found to have an unexpected additional benefit, in that despite the model's poor initial performance, support was generated for the model among field scientists involved in the wider project.

  15. Model Validation Status Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    E.L. Hardin

    The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified,more » and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M&O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. 
    The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural and engineered barriers, plus the TSPA model itself. Description of the model areas is provided in Section 3, and the documents reviewed are described in Section 4. The responsible manager for the Model Validation Status Review was the Chief Science Officer (CSO) for Bechtel-SAIC Co. (BSC). The team lead was assigned by the CSO. A total of 32 technical specialists were engaged to evaluate model validation status in the 21 model areas. The technical specialists were generally independent of the work reviewed, meeting technical qualifications as discussed in Section 5.

  16. A mathematical prediction model incorporating molecular subtype for risk of non-sentinel lymph node metastasis in sentinel lymph node-positive breast cancer patients: a retrospective analysis and nomogram development.

    PubMed

    Wang, Na-Na; Yang, Zheng-Jun; Wang, Xue; Chen, Li-Xuan; Zhao, Hong-Meng; Cao, Wen-Feng; Zhang, Bin

    2018-04-25

    Molecular subtype of breast cancer is associated with sentinel lymph node status. We sought to establish a mathematical prediction model that included breast cancer molecular subtype for risk of positive non-sentinel lymph nodes in breast cancer patients with sentinel lymph node metastasis and further validate the model in a separate validation cohort. We reviewed the clinicopathologic data of breast cancer patients with sentinel lymph node metastasis who underwent axillary lymph node dissection between June 16, 2014 and November 16, 2017 at our hospital. Sentinel lymph node biopsy was performed and patients with pathologically proven sentinel lymph node metastasis underwent axillary lymph node dissection. Independent risk factors for non-sentinel lymph node metastasis were assessed in a training cohort by multivariate analysis and incorporated into a mathematical prediction model. The model was further validated in a separate validation cohort, and a nomogram was developed and evaluated for diagnostic performance in predicting the risk of non-sentinel lymph node metastasis. Moreover, we assessed the performance of five different models in predicting non-sentinel lymph node metastasis in the training cohort. In total, 495 cases were eligible for the study, including 291 patients in the training cohort and 204 in the validation cohort. Non-sentinel lymph node metastasis was observed in 33.3% (97/291) of patients in the training cohort. The AUCs of the MSKCC, Tenon, MDA, Ljubljana, and Louisville models in the training cohort were 0.7613, 0.7142, 0.7076, 0.7483, and 0.671, respectively.
    Multivariate regression analysis indicated that tumor size (OR = 1.439; 95% CI 1.025-2.021; P = 0.036), sentinel lymph node macro-metastasis versus micro-metastasis (OR = 5.063; 95% CI 1.111-23.074; P = 0.036), the number of positive sentinel lymph nodes (OR = 2.583, 95% CI 1.714-3.892; P < 0.001), and the number of negative sentinel lymph nodes (OR = 0.686, 95% CI 0.575-0.817; P < 0.001) were independent statistically significant predictors of non-sentinel lymph node metastasis. Furthermore, luminal B (OR = 3.311, 95% CI 1.593-6.884; P = 0.001) and HER2 overexpression (OR = 4.308, 95% CI 1.097-16.912; P = 0.036) were independent and statistically significant predictors of non-sentinel lymph node metastasis versus luminal A. A regression model based on the results of multivariate analysis was established to predict the risk of non-sentinel lymph node metastasis, which had an AUC of 0.8188. The model was validated in the validation cohort and showed excellent diagnostic performance. The mathematical prediction model that incorporates five variables including breast cancer molecular subtype demonstrates excellent diagnostic performance in assessing the risk of non-sentinel lymph node metastasis in sentinel lymph node-positive patients. The prediction model could help surgeons evaluate the risk of non-sentinel lymph node involvement for breast cancer patients; however, the model requires further validation in prospective studies.
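
    A discrimination figure such as the AUC of 0.8188 reported above can be computed directly from a model's predicted probabilities via the concordance definition of the ROC AUC. A minimal sketch in Python, using illustrative toy data rather than the study's cohort:

```python
# Hedged sketch: computing an ROC AUC (concordance probability) for a risk
# model from predicted probabilities. Data below are illustrative only.

def roc_auc(y_true, scores):
    """AUC = P(score of a random positive > score of a random negative),
    counting ties as 0.5."""
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    concordant = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos for n in neg
    )
    return concordant / (len(pos) * len(neg))

# Toy example: 1 = non-sentinel node metastasis, p = model probabilities.
y = [1, 0, 1, 0, 0, 1, 0, 0]
p = [0.9, 0.2, 0.5, 0.4, 0.1, 0.8, 0.6, 0.3]
print(round(roc_auc(y, p), 4))   # prints 0.9333
```

    An AUC of 0.5 would indicate chance-level discrimination; values approaching 1.0 indicate that the model ranks metastatic cases above non-metastatic ones almost always.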

  17. Exploring the validity and reliability of a questionnaire for evaluating veterinary clinical teachers' supervisory skills during clinical rotations.

    PubMed

    Boerboom, T B B; Dolmans, D H J M; Jaarsma, A D C; Muijtjens, A M M; Van Beukelen, P; Scherpbier, A J J A

    2011-01-01

    Feedback to aid teachers in improving their teaching requires validated evaluation instruments. When implementing an evaluation instrument in a different context, it is important to collect validity evidence from multiple sources. We examined the validity and reliability of the Maastricht Clinical Teaching Questionnaire (MCTQ) as an instrument to evaluate individual clinical teachers during short clinical rotations in veterinary education. We examined four sources of validity evidence: (1) Content was examined based on theory of effective learning. (2) Response process was explored in a pilot study. (3) Internal structure was assessed by confirmatory factor analysis using 1086 student evaluations and reliability was examined utilizing generalizability analysis. (4) Relations with other relevant variables were examined by comparing factor scores with other outcomes. Content validity was supported by theory underlying the cognitive apprenticeship model on which the instrument is based. The pilot study resulted in an additional question about supervision time. A five-factor model showed a good fit with the data. Acceptable reliability was achievable with 10-12 questionnaires per teacher. Correlations between the factors and overall teacher judgement were strong. The MCTQ appears to be a valid and reliable instrument to evaluate clinical teachers' performance during short rotations.

  18. Introducing the Professionalism Mini-Evaluation Exercise (P-MEX) in Japan: results from a multicenter, cross-sectional study.

    PubMed

    Tsugawa, Yusuke; Ohbu, Sadayoshi; Cruess, Richard; Cruess, Sylvia; Okubo, Tomoya; Takahashi, Osamu; Tokuda, Yasuharu; Heist, Brian S; Bito, Seiji; Itoh, Toshiyuki; Aoki, Akiko; Chiba, Tsutomu; Fukui, Tsuguya

    2011-08-01

    Despite the growing importance of and interest in medical professionalism, there is no standardized tool for its measurement. The authors sought to verify the validity, reliability, and generalizability of the Professionalism Mini-Evaluation Exercise (P-MEX), a previously developed and tested tool, in the context of Japanese hospitals. A multicenter, cross-sectional evaluation study was performed to investigate the validity, reliability, and generalizability of the P-MEX in seven Japanese hospitals. In 2009-2010, 378 evaluators (attending physicians, nurses, peers, and junior residents) completed 360-degree assessments of 165 residents and fellows using the P-MEX. The content validity and criterion-related validity were examined, and the construct validity of the P-MEX was investigated by performing confirmatory factor analysis through a structural equation model. The reliability was tested using generalizability analysis. The contents of the P-MEX achieved good acceptance in a preliminary working group, and the poststudy survey revealed that 302 (79.9%) evaluators rated the P-MEX items as appropriate, indicating good content validity. The correlation coefficient between P-MEX scores and external criteria was 0.78 (P < .001), demonstrating good criterion-related validity. Confirmatory factor analysis verified high path coefficients (0.60-0.99) and adequate goodness of fit of the model. The generalizability analysis yielded a high dependability coefficient, suggesting good reliability, except when evaluators were peers or junior residents. Findings show evidence of adequate validity, reliability, and generalizability of the P-MEX in Japanese hospital settings. The P-MEX is the only evaluation tool for medical professionalism verified in both a Western and East Asian cultural context.

  19. [Construction of competency model of 'excellent doctor' in Chinese medicine].

    PubMed

    Jin, Aning; Tian, Yongquan; Zhao, Taiyang

    2014-05-01

    Distinguishing outstanding from ordinary practitioners on the basis of personal characteristics, with competency as the key criterion, is an important direction of medical education reform. We conducted behavioral event interviews with renowned senior doctors of traditional Chinese medicine, compiled a competency dictionary, and carried out a controlled prediction test. SPSS and AMOS were used for statistical analysis, and empirical research was conducted using peer assessment and contrast groups. Through exploratory factor analysis and confirmatory factor analysis, the project established a "5A" competency model comprising moral ability, thinking ability, communication ability, and learning and practical ability. The competency model of the "excellent doctor" in Chinese medicine was validated with good reliability and validity; it embodies the characteristics of traditional Chinese medicine personnel training and has theoretical and practical significance for the training of excellent physicians.

  20. Emotional and tangible social support in a German population-based sample: Development and validation of the Brief Social Support Scale (BS6).

    PubMed

    Beutel, Manfred E; Brähler, Elmar; Wiltink, Jörg; Michal, Matthias; Klein, Eva M; Jünger, Claus; Wild, Philipp S; Münzel, Thomas; Blettner, Maria; Lackner, Karl; Nickels, Stefan; Tibubos, Ana N

    2017-01-01

    The aim of the study was the development and validation of the psychometric properties of a six-item bi-factorial instrument for the assessment of social support (emotional and tangible support) with a population-based sample. A cross-sectional data set of N = 15,010 participants enrolled in the Gutenberg Health Study (GHS) in 2007-2012 was divided into two sub-samples. The GHS is a population-based, prospective, observational single-center cohort study in the Rhein-Main-Region in western Mid-Germany. The first sub-sample was used for scale development by performing an exploratory factor analysis. In order to test construct validity, confirmatory factor analyses were run to compare the extracted bi-factorial model with the one-factor solution. Reliability of the scales was indicated by calculating internal consistency. External validity was tested by investigating demographic characteristics, health behavior, and distress using analysis of variance, Spearman and Pearson correlation analysis, and logistic regression analysis. Based on an exploratory factor analysis, a set of six items was extracted representing two independent factors. The two-factor structure of the Brief Social Support Scale (BS6) was confirmed by the results of the confirmatory factor analyses. Fit indices of the bi-factorial model were good and better compared to the one-factor solution. External validity was demonstrated for the BS6. The BS6 is a reliable and valid short scale that can be applied in social surveys due to its brevity to assess emotional and practical dimensions of social support.

  1. Baseline Error Analysis and Experimental Validation for Height Measurement of Formation Insar Satellite

    NASA Astrophysics Data System (ADS)

    Gao, X.; Li, T.; Zhang, X.; Geng, X.

    2018-04-01

    In this paper, we propose a stochastic model of InSAR height measurement that accounts for the interferometric geometry. The model directly describes the relationship between baseline error and height measurement error. A simulation analysis using TanDEM-X parameters was then carried out to quantitatively evaluate the influence of baseline error on height measurement. Furthermore, a full simulation-based validation of the InSAR stochastic model was performed on the basis of SRTM DEM and TanDEM-X parameters. The spatial distribution characteristics and error propagation behavior of InSAR height measurement were fully evaluated.

  2. Chemometric and biological validation of a capillary electrophoresis metabolomic experiment of Schistosoma mansoni infection in mice.

    PubMed

    Garcia-Perez, Isabel; Angulo, Santiago; Utzinger, Jürg; Holmes, Elaine; Legido-Quigley, Cristina; Barbas, Coral

    2010-07-01

    Metabonomic and metabolomic studies are increasingly utilized for biomarker identification in different fields, including biology of infection. The confluence of improved analytical platforms and the availability of powerful multivariate analysis software have rendered the multiparameter profiles generated by these omics platforms a user-friendly alternative to the established analysis methods where the quality and practice of a procedure is well defined. However, unlike traditional assays, validation methods for these new multivariate profiling tools have yet to be established. We propose a validation for models obtained by CE fingerprinting of urine from mice infected with the blood fluke Schistosoma mansoni. We have analysed urine samples from two sets of mice infected in an inter-laboratory experiment where different infection methods and animal husbandry procedures were employed in order to establish the core biological response to a S. mansoni infection. CE data were analysed using principal component analysis. Validation of the scores consisted of permutation scrambling (100 repetitions) and a manual validation method, using a third of the samples (not included in the model) as a test or prediction set. The validation yielded 100% specificity and 100% sensitivity, demonstrating the robustness of these models with respect to deciphering metabolic perturbations in the mouse due to a S. mansoni infection. A total of 20 metabolites across the two experiments were identified that significantly discriminated between S. mansoni-infected and noninfected control samples. Only one of these metabolites, allantoin, was identified as manifesting different behaviour in the two experiments. This study shows the reproducibility of CE-based metabolic profiling methods for disease characterization and screening and highlights the importance of much needed validation strategies in the emerging field of metabolomics.
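
    The permutation ("scrambling") validation described above can be sketched as follows: model quality on the true class labels is compared against a null distribution built from repeatedly shuffled labels. A minimal illustration using a toy one-dimensional "metabolite" level and a simple leave-one-out nearest-centroid rule (both are assumptions for illustration, not the study's CE data or model):

```python
import random

# Hedged sketch of permutation validation: classification quality on the true
# labels is compared with 100 label-scrambled repetitions. Data and the
# nearest-centroid rule are illustrative, not the study's.

def accuracy(xs, ys):
    # Leave-one-out nearest-centroid accuracy on one feature.
    hits = 0
    for i, (x, y) in enumerate(zip(xs, ys)):
        rest = [(a, b) for j, (a, b) in enumerate(zip(xs, ys)) if j != i]
        c0 = [a for a, b in rest if b == 0]
        c1 = [a for a, b in rest if b == 1]
        m0, m1 = sum(c0) / len(c0), sum(c1) / len(c1)
        pred = 0 if abs(x - m0) < abs(x - m1) else 1
        hits += pred == y
    return hits / len(xs)

random.seed(0)
infected = [1.8, 2.1, 2.4, 2.0, 2.2]   # e.g. a urinary metabolite level
control  = [1.0, 1.2, 0.9, 1.1, 1.3]
xs = infected + control
ys = [1] * 5 + [0] * 5

observed = accuracy(xs, ys)
null = []
for _ in range(100):              # 100 scrambling repetitions, as in the text
    shuffled = ys[:]
    random.shuffle(shuffled)
    null.append(accuracy(xs, shuffled))

print(round(observed, 2), observed > sum(null) / len(null))
```

    A model is considered robust when the observed accuracy clearly exceeds the scrambled-label null distribution; if scrambling performed comparably, the apparent discrimination would be an overfitting artifact.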

  3. Hospital survey on patient safety culture: psychometric analysis on a Scottish sample.

    PubMed

    Sarac, Cakil; Flin, Rhona; Mearns, Kathryn; Jackson, Jeanette

    2011-10-01

    To investigate the psychometric properties of the Hospital Survey on Patient Safety Culture on a Scottish NHS data set. The data were collected from 1969 clinical staff (estimated 22% response rate) from one acute hospital from each of seven Scottish Health boards. Using a split-half validation technique, the data were randomly split; an exploratory factor analysis was conducted on the calibration data set, and confirmatory factor analyses were conducted on the validation data set to investigate and check the original US model fit in a Scottish sample. Following the split-half validation technique, exploratory factor analysis results showed a 10-factor optimal measurement model. The confirmatory factor analyses were then performed to compare the model fit of two competing models (10-factor alternative model vs 12-factor original model). A Satorra-Bentler (S-B) scaled χ² difference test demonstrated that the original 12-factor model performed significantly better in a Scottish sample. Furthermore, reliability analyses of each component yielded satisfactory results. The mean scores on the climate dimensions in the Scottish sample were comparable with those found in other European countries. This study provided evidence that the original 12-factor structure of the Hospital Survey on Patient Safety Culture scale has been replicated in this Scottish sample. Therefore, no modifications are required to the original 12-factor model, which is suggested for use, since it would allow researchers the possibility of cross-national comparisons.
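
    The split-half technique used above can be sketched in a few lines: respondents are randomly divided once, with one half reserved for exploratory factor analysis and the other for confirmatory analysis. The record IDs and seed below are illustrative assumptions:

```python
import random

# Hedged sketch of a split-half validation split. The sample size matches the
# 1969 respondents mentioned above; IDs and seed are illustrative.

random.seed(42)
records = list(range(1969))          # one entry per survey respondent
random.shuffle(records)
half = len(records) // 2
calibration, validation = records[:half], records[half:]

print(len(calibration), len(validation),
      set(calibration).isdisjoint(validation))   # prints: 984 985 True
```

    Keeping the two halves disjoint is the point of the design: the factor structure discovered on the calibration half is then tested, not re-fitted, on the validation half.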

  4. Verification, Validation, and Solution Quality in Computational Physics: CFD Methods Applied to Ice Sheet Physics

    NASA Technical Reports Server (NTRS)

    Thompson, David E.

    2005-01-01

    Procedures and methods for verification of coding algebra and for validation of models and calculations used in the aerospace computational fluid dynamics (CFD) community would be efficacious if used by the glacier dynamics modeling community. This paper presents some of those methods, and how they might be applied to uncertainty management supporting code verification and model validation for glacier dynamics. The similarities and differences between their use in CFD analysis and the proposed application of these methods to glacier modeling are discussed. After establishing sources of uncertainty and methods for code verification, the paper looks at a representative sampling of verification and validation efforts that are underway in the glacier modeling community, and establishes a context for these within an overall solution quality assessment. Finally, a vision of a new information architecture and interactive scientific interface is introduced and advocated.

  5. Analysis of SSME HPOTP rotordynamics subsynchronous whirl

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The causes and remedies of vibration and subsynchronous whirl problems encountered in the Space Shuttle Main Engine (SSME) turbomachinery are analyzed. Because the nonlinear and linearized models of the turbopumps play such an important role in the analysis process, the main emphasis is concentrated on the verification and improvement of these tools. The goal of our work has been to validate the equations of motion used in the models, including the assumptions upon which they are based. Verification of the SSME rotordynamics simulation, together with the developed enhancements, is emphasized.

  6. Comprehensive Approach to Verification and Validation of CFD Simulations Applied to Backward Facing Step-Application of CFD Uncertainty Analysis

    NASA Technical Reports Server (NTRS)

    Groves, Curtis E.; LLie, Marcel; Shallhorn, Paul A.

    2012-01-01

    There are inherent uncertainties and errors associated with using Computational Fluid Dynamics (CFD) to predict the flow field, and there is no standard method for evaluating uncertainty in the CFD community. This paper describes an approach to validate the uncertainty in using CFD. The method applies state-of-the-art uncertainty analysis to different turbulence models and draws conclusions on which models provide the least uncertainty and which models most accurately predict the flow over a backward-facing step.

  7. Validation of new psychosocial factors questionnaires: a Colombian national study.

    PubMed

    Villalobos, Gloria H; Vargas, Angélica M; Rondón, Martin A; Felknor, Sarah A

    2013-01-01

    The study of workers' health problems possibly associated with stressful conditions requires valid and reliable tools for monitoring risk factors. The present study validates two questionnaires to assess psychosocial risk factors for stress-related illnesses within a sample of Colombian workers. The validation process was based on a representative sample survey of 2,360 Colombian employees, aged 18-70 years. Worker response rate was 90%; 46% of the responders were women. Internal consistency was calculated, construct validity was tested with factor analysis and concurrent validity was tested with Spearman correlations. The questionnaires demonstrated adequate reliability (0.88-0.95). Factor analysis confirmed the dimensions proposed in the measurement model. Concurrent validity resulted in significant correlations with stress and health symptoms. "Work and Non-work Psychosocial Factors Questionnaires" were found to be valid and reliable for the assessment of workers' psychosocial factors, and they provide information for research and intervention. Copyright © 2012 Wiley Periodicals, Inc.
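
    The internal-consistency figures quoted above (0.88-0.95) are typically Cronbach's alpha values. A minimal sketch of the computation on a toy three-item, five-respondent score matrix (illustrative data, not the Colombian survey's):

```python
# Hedged sketch of Cronbach's alpha, the usual internal-consistency measure
# behind reliability figures like 0.88-0.95. Scores below are illustrative.

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(item_scores):
    """item_scores: one list of respondent scores per questionnaire item."""
    k = len(item_scores)
    totals = [sum(resp) for resp in zip(*item_scores)]
    item_var = sum(variance(item) for item in item_scores)
    return (k / (k - 1)) * (1 - item_var / variance(totals))

items = [
    [3, 4, 5, 2, 4],   # item 1 scores for five respondents
    [3, 5, 5, 2, 3],   # item 2
    [2, 4, 4, 1, 3],   # item 3
]
print(round(cronbach_alpha(items), 2))   # prints 0.97
```

    Alpha rises when items co-vary strongly relative to their individual variances, which is why it is read as evidence that the items measure a common construct.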

  8. Integrating Genetic, Neuropsychological and Neuroimaging Data to Model Early-Onset Obsessive Compulsive Disorder Severity

    PubMed Central

    Mas, Sergi; Gassó, Patricia; Morer, Astrid; Calvo, Anna; Bargalló, Nuria; Lafuente, Amalia; Lázaro, Luisa

    2016-01-01

    We propose an integrative approach that combines structural magnetic resonance imaging data (MRI), diffusion tensor imaging data (DTI), neuropsychological data, and genetic data to predict early-onset obsessive compulsive disorder (OCD) severity. From a cohort of 87 patients, 56 with complete information were used in the present analysis. First, we performed a multivariate genetic association analysis of OCD severity with 266 genetic polymorphisms. This association analysis was used to select and prioritize the SNPs that would be included in the model. Second, we split the sample into a training set (N = 38) and a validation set (N = 18). Third, entropy-based measures of information gain were used for feature selection with the training subset. Fourth, the selected features were fed into two supervised methods of class prediction based on machine learning, using the leave-one-out procedure with the training set. Finally, the resulting model was validated with the validation set. Nine variables were used for the creation of the OCD severity predictor, including six genetic polymorphisms and three variables from the neuropsychological data. The developed model classified child and adolescent patients with OCD by disease severity with an accuracy of 0.90 in the testing set and 0.70 in the validation sample. Beyond its clinical applicability, the combination of particular neuropsychological, neuroimaging, and genetic characteristics could enhance our understanding of the neurobiological basis of the disorder. PMID:27093171
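
    The entropy-based information gain used for feature selection above scores how much a candidate feature reduces uncertainty about the class labels. A minimal sketch; the split threshold and toy severity data are illustrative assumptions:

```python
from math import log2

# Hedged sketch of entropy-based information gain for feature selection,
# the criterion described above. Data and threshold are illustrative.

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n)
                for c in (labels.count(v) for v in set(labels)) if c)

def information_gain(values, labels, threshold):
    """Gain from splitting a numeric feature at `threshold`."""
    left  = [y for x, y in zip(values, labels) if x <= threshold]
    right = [y for x, y in zip(values, labels) if x > threshold]
    n = len(labels)
    remainder = (len(left) / n) * entropy(left) + \
                (len(right) / n) * entropy(right)
    return entropy(labels) - remainder

# Toy severity labels (0 = mild, 1 = severe) and two candidate features.
labels  = [0, 0, 0, 1, 1, 1]
feature = [2.0, 2.5, 3.0, 7.0, 7.5, 8.0]   # separates the classes well
noise   = [5.0, 1.0, 6.0, 2.0, 7.0, 3.0]   # unrelated to the labels

print(round(information_gain(feature, labels, 5.0), 3))   # → 1.0
print(round(information_gain(noise, labels, 5.0), 3))     # → 0.0
```

    Ranking features by this gain and keeping only the top scorers is one common way to arrive at a compact predictor set such as the nine variables reported above.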

  9. Design and validation of a model to predict early mortality in haemodialysis patients.

    PubMed

    Mauri, Joan M; Clèries, Montse; Vela, Emili

    2008-05-01

    Mortality and morbidity rates are higher in patients receiving haemodialysis therapy than in the general population. Detection of risk factors related to early death in these patients could be of aid for clinical and administrative decision making. The aims of this study were (1) to identify risk factors (comorbidity and variables specific to haemodialysis) associated with death in the first year following the start of haemodialysis and (2) to design and validate a prognostic model to quantify the probability of death for each patient. An analysis was carried out on all patients starting haemodialysis treatment in Catalonia during the period 1997-2003 (n = 5738). The data source was the Renal Registry of Catalonia, a mandatory population registry. Patients were randomly divided into two samples: 60% (n = 3455) of the total were used to develop the prognostic model and the remaining 40% (n = 2283) to validate the model. Logistic regression analysis was used to construct the model. One-year mortality in the total study population was 16.5%. The predictive model included the following variables: age, sex, primary renal disease, grade of functional autonomy, chronic obstructive pulmonary disease, malignant processes, chronic liver disease, cardiovascular disease, initial vascular access and malnutrition. The analyses showed adequate calibration for both the sample to develop the model and the validation sample (Hosmer-Lemeshow statistic 0.97 and P = 0.49, respectively) as well as adequate discrimination (ROC curve 0.78 in both cases). Risk factors implicated in mortality at one year following the start of haemodialysis have been determined and a prognostic model designed. The validated, easy-to-apply model quantifies individual patient risk attributable to various factors, some of them amenable to correction by directed interventions.
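
    The Hosmer-Lemeshow statistic used above to check calibration compares observed and expected deaths within risk-ordered groups of patients. A minimal sketch with four groups and toy predictions (the published analysis used the registry cohort and standard decile grouping):

```python
# Hedged sketch of a Hosmer-Lemeshow-style calibration statistic: patients
# are sorted by predicted risk, grouped, and observed vs expected event
# counts are compared. Predictions and outcomes below are illustrative.

def hosmer_lemeshow(probs, outcomes, groups=4):
    pairs = sorted(zip(probs, outcomes))
    size = len(pairs) // groups
    chi2 = 0.0
    for g in range(groups):
        chunk = pairs[g * size:(g + 1) * size] if g < groups - 1 \
            else pairs[g * size:]
        expected = sum(p for p, _ in chunk)    # expected events in group
        observed = sum(y for _, y in chunk)    # observed events in group
        n = len(chunk)
        var = expected * (1 - expected / n)    # binomial variance approx.
        chi2 += (observed - expected) ** 2 / var
    return chi2

probs    = [0.05, 0.10, 0.15, 0.20, 0.30, 0.35, 0.55, 0.60]
outcomes = [0,    0,    0,    1,    0,    1,    0,    1]
print(round(hosmer_lemeshow(probs, outcomes), 2))   # prints 1.95
```

    A small statistic (large P value, as with P = 0.49 above) indicates that predicted probabilities track observed event rates well across the risk spectrum.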

  10. Validation of VARK learning modalities questionnaire using Rasch analysis

    NASA Astrophysics Data System (ADS)

    Fitkov-Norris, E. D.; Yeghiazarian, A.

    2015-02-01

    This article discusses the application of Rasch analysis to assess the internal validity of a four sub-scale VARK (Visual, Auditory, Read/Write and Kinaesthetic) learning styles instrument. The results from the analysis show that the Rasch model fits the majority of the VARK questionnaire data and the sample data support the internal validity of the four sub-constructs at the 1% level of significance for all but one item. While this suggests that the instrument could potentially be used as a predictor for a person's learning preference orientation, further analysis is necessary to confirm the invariability of the instrument across different user groups and factors such as gender, age, educational and cultural background.

  11. Quantitative determination and classification of energy drinks using near-infrared spectroscopy.

    PubMed

    Rácz, Anita; Héberger, Károly; Fodor, Marietta

    2016-09-01

    Almost a hundred commercially available energy drink samples from Hungary, Slovakia, and Greece were collected for the quantitative determination of their caffeine and sugar content with FT-NIR spectroscopy and high-performance liquid chromatography (HPLC). Calibration models were built with partial least-squares regression (PLSR). An HPLC-UV method was used to measure the reference values for caffeine content, while sugar contents were measured with the Schoorl method. Both the nominal sugar content (as indicated on the cans) and the measured sugar concentration were used as references. Although the Schoorl method has larger error and bias, appropriate models could be developed using both references. The validation of the models was based on sevenfold cross-validation and external validation. FT-NIR analysis is a good candidate to replace the HPLC-UV method, because it is much cheaper than any chromatographic method, while it is also more time-efficient. The combination of FT-NIR with multidimensional chemometric techniques like PLSR can be a good option for the detection of low caffeine concentrations in energy drinks. Moreover, three types of energy drinks that contain (i) taurine, (ii) arginine, and (iii) none of these two components were classified correctly using principal component analysis and linear discriminant analysis. Such classifications are important for the detection of adulterated samples and for quality control, as well. In this case, more than a hundred samples were used for the evaluation. The classification was validated with cross-validation and several randomization tests (X-scrambling). Graphical Abstract: The way of energy drinks from cans to appropriate chemometric models.
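
    The sevenfold cross-validation used above to validate the calibration models can be sketched as follows, with a one-variable least-squares calibration standing in for PLSR and synthetic absorbance/caffeine values (both are assumptions for illustration):

```python
# Hedged sketch of k-fold cross-validation for a calibration model (here
# sevenfold). A univariate least-squares fit stands in for PLSR; the
# "absorbance" and caffeine values are synthetic.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

def kfold_rmse(xs, ys, k=7):
    folds = [list(range(i, len(xs), k)) for i in range(k)]
    sq_err, count = 0.0, 0
    for held_out in folds:
        train = [i for i in range(len(xs)) if i not in held_out]
        a, b = fit_line([xs[i] for i in train], [ys[i] for i in train])
        for i in held_out:          # predict only the held-out samples
            sq_err += (a * xs[i] + b - ys[i]) ** 2
            count += 1
    return (sq_err / count) ** 0.5  # cross-validated RMSE (RMSECV)

# Synthetic calibration set: "absorbance" vs caffeine (mg/100 mL).
absorbance = [0.10, 0.14, 0.18, 0.22, 0.26, 0.30, 0.34, 0.38, 0.42, 0.46,
              0.50, 0.54, 0.58, 0.62]
caffeine   = [10.1, 13.9, 18.2, 21.8, 26.3, 29.7, 34.1, 37.8, 42.2, 45.9,
              50.2, 53.8, 58.3, 61.7]
print(round(kfold_rmse(absorbance, caffeine), 2))
```

    Each sample is predicted exactly once by a model that never saw it, so the resulting RMSECV is a less optimistic estimate of calibration error than the fit residuals alone.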

  12. The Construct Validity of Higher Order Structure-of-Intellect Abilities in a Battery of Tests Emphasizing the Product of Transformations: A Confirmatory Maximum Likelihood Factor Analysis.

    ERIC Educational Resources Information Center

    Khattab, Ali-Maher; And Others

    1982-01-01

    A causal modeling system, using confirmatory maximum likelihood factor analysis with the LISREL IV computer program, evaluated the construct validity underlying the higher order factor structure of a given correlation matrix of 46 structure-of-intellect tests emphasizing the product of transformations. (Author/PN)

  13. Construction and Validation of the Career and Educational Decision Self-Efficacy Inventory for Secondary Students (CEDSIS)

    ERIC Educational Resources Information Center

    Ho, Esther Sui Chu; Sum, Kwok Wing

    2018-01-01

    This study aims to construct and validate the Career and Educational Decision Self-Efficacy Inventory for Secondary Students (CEDSIS) by using a sample of 2,631 students in Hong Kong. Principal component analysis yielded a three-factor structure, which demonstrated good model fit in confirmatory factor analysis. High reliability was found for the…

  14. Modeling the Space Debris Environment with MASTER-2009 and ORDEM2010

    NASA Technical Reports Server (NTRS)

    Flegel, S.; Gelhaus, J.; Wiedemann, C.; Mockel, M.; Vorsmann, P.; Krisko, P.; Xu, Y. -L.; Horstman, M. F.; Opiela, J. N.; Matney, M.; hide

    2010-01-01

    Spacecraft analysis using ORDEM2010 uses a high-fidelity population model to compute risk to on-orbit assets. The ORDEM2010 GUI allows visualization of spacecraft flux in 2-D and 1-D. The population was produced using a Bayesian statistical approach with measured and modeled environment data. Validation of sizes < 1 mm was performed using Shuttle window and radiator impact measurements. Validation of sizes > 1 mm is ongoing.

  15. The Model Analyst’s Toolkit: Scientific Model Development, Analysis, and Validation

    DTIC Science & Technology

    2014-05-20

    but there can still be many recommendations generated. Therefore, the recommender results are displayed in a sortable table where each row is a...reporting period. Since the synthesis graph can be complex and have many dependencies, the system must determine the order of evaluation of nodes, and...validation failure, if any. 3.1. Automatic Feature Extraction In many domains, causal models can often be more readily described as patterns of

  16. Improved Healing of Large, Osseous, Segmental Defects by Reverse Dynamization: Evaluation in a Sheep Model

    DTIC Science & Technology

    2017-12-01

    reverse dynamization. This was supplemented by finite element analysis and the use of a strain gauge. This aim was successfully completed, with the...testing deformation results for model validation. Development of a Finite Element (FE) model was conducted through ANSYS 16 to help characterize...Fixators were characterized through mechanical testing by sawbone and ovine cadaver tibiae samples, and data was used to validate a finite element

  17. Validation of reactive gases and aerosols in the MACC global analysis and forecast system

    NASA Astrophysics Data System (ADS)

    Eskes, H.; Huijnen, V.; Arola, A.; Benedictow, A.; Blechschmidt, A.-M.; Botek, E.; Boucher, O.; Bouarar, I.; Chabrillat, S.; Cuevas, E.; Engelen, R.; Flentje, H.; Gaudel, A.; Griesfeller, J.; Jones, L.; Kapsomenakis, J.; Katragkou, E.; Kinne, S.; Langerock, B.; Razinger, M.; Richter, A.; Schultz, M.; Schulz, M.; Sudarchikova, N.; Thouret, V.; Vrekoussis, M.; Wagner, A.; Zerefos, C.

    2015-11-01

    The European MACC (Monitoring Atmospheric Composition and Climate) project is preparing the operational Copernicus Atmosphere Monitoring Service (CAMS), one of the services of the European Copernicus Programme on Earth observation and environmental services. MACC uses data assimilation to combine in situ and remote sensing observations with global and regional models of atmospheric reactive gases, aerosols, and greenhouse gases, and is based on the Integrated Forecasting System of the European Centre for Medium-Range Weather Forecasts (ECMWF). The global component of the MACC service has a dedicated validation activity to document the quality of the atmospheric composition products. In this paper we discuss the approach to validation that has been developed over the past 3 years. Topics discussed are the validation requirements, the operational aspects, the measurement data sets used, the structure of the validation reports, the models and assimilation systems validated, the procedure to introduce new upgrades, and the scoring methods. One specific target of the MACC system concerns forecasting special events with high-pollution concentrations. Such events receive extra attention in the validation process. Finally, a summary is provided of the results from the validation of the latest set of daily global analysis and forecast products from the MACC system reported in November 2014.

  18. Topological characterization versus synchronization for assessing (or not) dynamical equivalence

    NASA Astrophysics Data System (ADS)

    Letellier, Christophe; Mangiarotti, Sylvain; Sendiña-Nadal, Irene; Rössler, Otto E.

    2018-04-01

    Model validation from experimental data is an important and nontrivial topic that is too often reduced to a simple visual inspection of the state portrait spanned by the variables of the system. Synchronization was suggested as a possible technique for model validation. By means of a topological analysis, we revisited this concept with the help of an abstract chemical reaction system and data from two electrodissolution experiments conducted by Jack Hudson's group. The fact that it was possible to synchronize topologically different global models led us to conclude that synchronization is not a recommendable technique for model validation. A short historical preamble evokes Jack Hudson's early career in interaction with Otto E. Rössler.

  19. Analysis of various quality attributes of sunflower and soybean plants by near-infrared reflectance spectroscopy: Development and validation of calibration models

    USDA-ARS?s Scientific Manuscript database

    Sunflower and soybean are summer annuals that can be grown as an alternative to corn and may be particularly useful in organic production systems. Rapid and low cost methods of analyzing plant quality would be helpful for crop management. We developed and validated calibration models for Near-infrar...

  20. Selecting the "Best" Factor Structure and Moving Measurement Validation Forward: An Illustration.

    PubMed

    Schmitt, Thomas A; Sass, Daniel A; Chappelle, Wayne; Thompson, William

    2018-04-09

    Despite the broad literature base on factor analysis best practices, research seeking to evaluate a measure's psychometric properties frequently fails to consider or follow these recommendations. This leads to incorrect factor structures, numerous and often overly complex competing factor models and, perhaps most harmful, biased model results. Our goal is to demonstrate a practical and actionable process for factor analysis through (a) an overview of six statistical and psychometric issues and approaches to be aware of, investigate, and report when engaging in factor structure validation, along with a flowchart for recommended procedures to understand latent factor structures; (b) demonstrating these issues to provide a summary of the updated Posttraumatic Stress Disorder Checklist (PCL-5) factor models and a rationale for validation; and (c) conducting a comprehensive statistical and psychometric validation of the PCL-5 factor structure to demonstrate all the issues we described earlier. Considering previous research, the PCL-5 was evaluated using a sample of 1,403 U.S. Air Force remotely piloted aircraft operators with high levels of battlefield exposure. Previously proposed PCL-5 factor structures were not supported by the data, but instead a bifactor model is arguably more statistically appropriate.

  1. Testing the psychometric properties of the Environmental Attitudes Inventory on undergraduate students in the Arab context: A test-retest approach.

    PubMed

    AlMenhali, Entesar Ali; Khalid, Khalizani; Iyanna, Shilpa

    2018-01-01

    The Environmental Attitudes Inventory (EAI) was developed to evaluate the multidimensional nature of environmental attitudes; however, it is based on a dataset from outside the Arab context. This study reinvestigated the construct validity of the EAI with a new dataset and confirmed the feasibility of applying it in the Arab context. One hundred and forty-eight subjects in Study 1 and 130 in Study 2 provided valid responses. An exploratory factor analysis (EFA) was used to extract a new factor structure in Study 1, and a confirmatory factor analysis (CFA) was performed in Study 2. Both studies generated a seven-factor model, and model fit was discussed for both studies. Study 2 exhibited satisfactory model fit indices compared to Study 1. The factor loading values of a few items in Study 1 affected the reliability values and average variance extracted values, which demonstrated low discriminant validity. Based on the results of the EFA and CFA, this study showed sufficient model fit and suggested the feasibility of applying the EAI in the Arab context with good construct validity and internal consistency.

  2. Testing the psychometric properties of the Environmental Attitudes Inventory on undergraduate students in the Arab context: A test-retest approach

    PubMed Central

    2018-01-01

    The Environmental Attitudes Inventory (EAI) was developed to evaluate the multidimensional nature of environmental attitudes; however, it is based on a dataset from outside the Arab context. This study reinvestigated the construct validity of the EAI with a new dataset and confirmed the feasibility of applying it in the Arab context. One hundred and forty-eight subjects in Study 1 and 130 in Study 2 provided valid responses. An exploratory factor analysis (EFA) was used to extract a new factor structure in Study 1, and a confirmatory factor analysis (CFA) was performed in Study 2. Both studies generated a seven-factor model, and model fit was discussed for both studies. Study 2 exhibited satisfactory model fit indices compared to Study 1. The factor loading values of a few items in Study 1 affected the reliability values and average variance extracted values, which demonstrated low discriminant validity. Based on the results of the EFA and CFA, this study showed sufficient model fit and suggested the feasibility of applying the EAI in the Arab context with good construct validity and internal consistency. PMID:29758021

  3. Exploration of Uncertainty in Glacier Modelling

    NASA Technical Reports Server (NTRS)

    Thompson, David E.

    1999-01-01

    There are procedures and methods for verification of coding algebra and for validations of models and calculations that are in use in the aerospace computational fluid dynamics (CFD) community. These methods would be efficacious if used by the glacier dynamics modelling community. This paper is a presentation of some of those methods, and how they might be applied to uncertainty management supporting code verification and model validation for glacier dynamics. The similarities and differences between their use in CFD analysis and the proposed application of these methods to glacier modelling are discussed. After establishing sources of uncertainty and methods for code verification, the paper looks at a representative sampling of verification and validation efforts that are underway in the glacier modelling community, and establishes a context for these within overall solution quality assessment. Finally, an information architecture and interactive interface is introduced and advocated. This Integrated Cryospheric Exploration (ICE) Environment is proposed for exploring and managing sources of uncertainty in glacier modelling codes and methods, and for supporting scientific numerical exploration and verification. The details and functionality of this Environment are described based on modifications of a system already developed for CFD modelling and analysis.

  4. Towards Automatic Validation and Healing of CityGML Models for Geometric and Semantic Consistency

    NASA Astrophysics Data System (ADS)

    Alam, N.; Wagner, D.; Wewetzer, M.; von Falkenhausen, J.; Coors, V.; Pries, M.

    2013-09-01

    A steadily growing number of application fields for large 3D city models have emerged in recent years. As in many other domains, data quality is recognized as a key factor for successful business, and quality management is now mandatory in the production chain. Automated domain-specific tools are widely used for validation of business-critical data, but common standards defining correct geometric modeling are still not precise enough to provide a sound basis for data validation of 3D city models. Although the workflow for 3D city models is well established, from data acquisition to processing, analysis, and visualization, quality management is not yet a standard part of this workflow. Processing data sets with unclear specifications leads to erroneous results and application defects. We show that this problem persists even when data are standard compliant. Validation results of real-world city models are presented to demonstrate the potential of the approach. A tool to repair the errors detected during the validation process is under development; first results are presented and discussed. The goal is to heal defects of the models automatically and export a corrected CityGML model.

  5. Development and Validation of a Disease Severity Scoring Model for Pediatric Sepsis.

    PubMed

    Hu, Li; Zhu, Yimin; Chen, Mengshi; Li, Xun; Lu, Xiulan; Liang, Ying; Tan, Hongzhuan

    2016-07-01

    Multiple severity scoring systems have been devised and evaluated in adult sepsis, but a simplified scoring model for pediatric sepsis has not yet been developed. This study aimed to develop and validate a new scoring model to stratify the severity of pediatric sepsis, thus assisting the treatment of sepsis in children. Data from 634 consecutive patients who presented with sepsis at the Children's Hospital of Hunan Province in China in 2011-2013 were analyzed, with 476 patients placed in the training group and 158 patients in the validation group. Stepwise discriminant analysis was used to develop an accurate discriminant model. A simplified scoring model was generated using weightings defined by the discriminant coefficients. The discriminant ability of the model was tested using receiver operating characteristic (ROC) curves. The discriminant analysis showed that prothrombin time, D-dimer, total bilirubin, serum total protein, uric acid, PaO2/FiO2 ratio, and myoglobin were associated with severity of sepsis. These seven variables were assigned values of 4, 3, 3, 4, 3, 3, and 3, respectively, based on the standardized discriminant coefficients. Patients with higher scores had a higher risk of severe sepsis. The areas under the ROC curve were 0.836 for the accurate discriminant model and 0.825 for the simplified scoring model in the validation group. The proposed disease severity scoring model for pediatric sepsis showed adequate discriminatory capacity and sufficient accuracy, which has important clinical significance in evaluating the severity of pediatric sepsis and predicting its progress.
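
    The discriminatory capacity reported above is the area under the ROC curve. As a minimal illustration (the scores and outcome labels below are invented, not the study's data), the AUC can be computed directly from its rank interpretation, the probability that a randomly chosen positive case scores higher than a randomly chosen negative one:

```python
# Hedged sketch: AUC via the Mann-Whitney pairwise-comparison identity.
# Ties between a positive and a negative score count as 1/2.

def roc_auc(scores, labels):
    """AUC = P(score of random positive > score of random negative)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

scores = [23, 19, 17, 14, 12, 10, 9, 7]   # hypothetical severity scores
labels = [1, 1, 1, 0, 1, 0, 0, 0]         # 1 = severe sepsis (invented)
print(roc_auc(scores, labels))            # fraction of concordant pairs
```

The same quantity is what an ROC plot integrates; this pairwise form is convenient for small validation sets.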

  6. Development and validation of a new knowledge, attitude, belief and practice questionnaire on leptospirosis in Malaysia.

    PubMed

    Zahiruddin, Wan Mohd; Arifin, Wan Nor; Mohd-Nazri, Shafei; Sukeri, Surianti; Zawaha, Idris; Bakar, Rahman Abu; Hamat, Rukman Awang; Malina, Osman; Jamaludin, Tengku Zetty Maztura Tengku; Pathman, Arumugam; Mas-Harithulfadhli-Agus, Ab Rahman; Norazlin, Idris; Suhailah, Binti Samsudin; Saudi, Siti Nor Sakinah; Abdullah, Nurul Munirah; Nozmi, Noramira; Zainuddin, Abdul Wahab; Aziah, Daud

    2018-03-07

    In Malaysia, leptospirosis is considered an endemic disease, with sporadic outbreaks following rainy or flood seasons. The objective of this study was to develop and validate a new knowledge, attitude, belief and practice (KABP) questionnaire on leptospirosis for use in urban and rural populations in Malaysia. The study comprised development and validation stages. The development phase encompassed a literature review, expert panel review, focus-group testing, and evaluation. The validation phase consisted of exploratory and confirmatory parts to verify the psychometric properties of the questionnaire. A total of 214 and 759 participants were recruited for the validation phase from two Malaysian states, Kelantan and Selangor, respectively. The participants comprised urban and rural communities with a high reported incidence of leptospirosis. The knowledge section of the validation phase utilized item response theory (IRT) analysis. The attitude and belief sections utilized exploratory factor analysis (EFA) and confirmatory factor analysis (CFA). The development phase resulted in a questionnaire that included four main sections: knowledge, attitude, belief, and practice. In the exploratory phase, as shown by the IRT analysis of knowledge about leptospirosis, the difficulty and discrimination values of the items were acceptable, with the exception of two items. Based on the EFA, the psychometric properties of the attitude, belief, and practice sections were poor. Thus, these sections were revised, and no further factor analysis of the practice section was conducted. In the confirmatory stage, the difficulty and discrimination values of the items in the knowledge section remained within the acceptable range. The CFA of the attitude section resulted in a good-fitting two-factor model. The CFA of the belief section retained a low number of items, although the analysis resulted in a good fit for the final three-factor model. 
Based on the IRT analysis and factor analytic evidence, the knowledge and attitude sections of the KABP questionnaire on leptospirosis were psychometrically valid. However, the psychometric properties of the belief section were unsatisfactory, despite being revised after the initial validation study. Further development of this section is warranted in future studies.
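
    The IRT analysis above characterizes each knowledge item by a discrimination and a difficulty parameter. A minimal sketch of the two-parameter logistic (2PL) item response function commonly used for such analyses (the parameter values here are invented for illustration):

```python
# Hedged sketch of the 2PL model: P(correct | theta) for an item with
# discrimination a and difficulty b, where theta is the respondent's
# latent knowledge level.
import math

def p_correct(theta, a, b):
    """2PL item response function."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# An easy, highly discriminating item vs. a hard item, for an average
# respondent (theta = 0); both items are hypothetical.
print(p_correct(0.0, a=1.8, b=-1.0))
print(p_correct(0.0, a=1.2, b=1.5))
```

At theta equal to the item difficulty b, the probability of a correct response is exactly 0.5, which is how difficulty is anchored on the latent scale.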

  7. The Validity and Utility of the California Family Risk Assessment under Practice Conditions in the Field: A Prospective Study

    ERIC Educational Resources Information Center

    Johnson, Will L.

    2011-01-01

    Objective: Analysis of the validity and implementation of a child maltreatment actuarial risk assessment model, the California Family Risk Assessment (CFRA). Questions addressed: (1) Is there evidence of the validity of the CFRA under field operating conditions? (2) Do actuarial risk assessment results influence child welfare workers' service…

  8. Royal London space analysis: plaster versus digital model assessment.

    PubMed

    Grewal, Balpreet; Lee, Robert T; Zou, Lifong; Johal, Ama

    2017-06-01

    With the advent of digital study models, the ability to evaluate space requirements becomes valuable to treatment planning and the justification for any required extraction pattern. This study was undertaken to compare the validity and reliability of the Royal London space analysis (RLSA) performed on plaster as compared with digital models. A pilot study (n = 5) was undertaken on plaster and digital models to evaluate the feasibility of digital space planning. This also informed the sample size calculation; as a result, 30 sets of study models meeting specified inclusion criteria were selected. All five components of the RLSA, namely crowding, depth of occlusal curve, arch expansion/contraction, and incisor antero-posterior advancement and inclination (assessed from the pre-treatment lateral cephalogram), were accounted for in relation to both model types. The plaster models served as the gold standard. Intra-operator measurement error (reliability) was evaluated along with a direct comparison of the measured digital values (validity) with the plaster models. The measurement error, or coefficient of repeatability, was comparable for plaster and digital space analyses and ranged from 0.66 to 0.95 mm. No difference was found between the space analysis performed in either the upper or lower dental arch; hence, the null hypothesis was accepted. The digital model measurements were consistently larger, albeit by a relatively small amount, than the plaster models (0.35 mm upper arch and 0.32 mm lower arch). No difference was detected in the RLSA when performed using either plaster or digital models. Thus, digital space analysis provides a valid and reproducible alternative method in the new era of digital records.

  9. Validating the WRF-Chem model for wind energy applications using High Resolution Doppler Lidar data from a Utah 2012 field campaign

    NASA Astrophysics Data System (ADS)

    Mitchell, M. J.; Pichugina, Y. L.; Banta, R. M.

    2015-12-01

    Models are important tools for assessing potential of wind energy sites, but the accuracy of these projections has not been properly validated. In this study, High Resolution Doppler Lidar (HRDL) data obtained with high temporal and spatial resolution at heights of modern turbine rotors were compared to output from the WRF-chem model in order to help improve the performance of the model in producing accurate wind forecasts for the industry. HRDL data were collected from January 23-March 1, 2012 during the Uintah Basin Winter Ozone Study (UBWOS) field campaign. A model validation method was based on the qualitative comparison of the wind field images, time-series analysis and statistical analysis of the observed and modeled wind speed and direction, both for case studies and for the whole experiment. To compare the WRF-chem model output to the HRDL observations, the model heights and forecast times were interpolated to match the observed times and heights. Then, time-height cross-sections of the HRDL and WRF-Chem wind speed and directions were plotted to select case studies. Cross-sections of the differences between the observed and forecasted wind speed and directions were also plotted to visually analyze the model performance in different wind flow conditions. A statistical analysis includes the calculation of vertical profiles and time series of bias, correlation coefficient, root mean squared error, and coefficient of determination between two datasets. The results from this analysis reveals where and when the model typically struggles in forecasting winds at heights of modern turbine rotors so that in the future the model can be improved for the industry.
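
    The statistics named above (bias, RMSE, correlation coefficient) are straightforward to compute for paired observed/modeled series. A minimal sketch, using invented wind-speed values in m/s rather than the campaign's data:

```python
# Hedged illustration of model-vs-observation skill statistics:
# bias (mean model minus observation), root mean squared error, and
# Pearson correlation, computed over paired wind-speed samples.
import math
from statistics import fmean

def bias(obs, mod):
    return fmean(m - o for o, m in zip(obs, mod))

def rmse(obs, mod):
    return math.sqrt(fmean((m - o) ** 2 for o, m in zip(obs, mod)))

def pearson(x, y):
    mx, my = fmean(x), fmean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

obs = [4.2, 5.1, 6.3, 7.0, 6.1, 5.4]   # hypothetical lidar wind speeds
mod = [4.6, 5.0, 6.9, 7.4, 5.8, 5.9]   # hypothetical model output
print(f"bias={bias(obs, mod):+.2f} m/s, RMSE={rmse(obs, mod):.2f} m/s, "
      f"r={pearson(obs, mod):.3f}")
```

In practice these would be evaluated per height level and per forecast lead time to build the vertical profiles and time series the study describes.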

  10. CheS-Mapper 2.0 for visual validation of (Q)SAR models

    PubMed Central

    2014-01-01

    Background Sound statistical validation is important to evaluate and compare the overall performance of (Q)SAR models. However, classical validation does not support the user in better understanding the properties of the model or the underlying data. Although a number of visualization tools for analyzing (Q)SAR information in small-molecule datasets exist, integrated visualization methods that allow the investigation of model validation results are still lacking. Results We propose visual validation as an approach for the graphical inspection of (Q)SAR model validation results. The approach applies the 3D viewer CheS-Mapper, an open-source application for the exploration of small molecules in virtual 3D space. The present work describes the new functionalities in CheS-Mapper 2.0 that facilitate the analysis of (Q)SAR information and allow the visual validation of (Q)SAR models. The tool enables the comparison of model predictions to the actual activity in feature space. The approach is generic: it is model-independent and can handle physico-chemical and structural input features as well as quantitative and qualitative endpoints. Conclusions Visual validation with CheS-Mapper enables analyzing (Q)SAR information in the data and indicates how this information is employed by the (Q)SAR model. It reveals whether the endpoint is modeled too specifically or too generically and highlights common properties of misclassified compounds. Moreover, the researcher can use CheS-Mapper to inspect how the (Q)SAR model predicts activity cliffs. The CheS-Mapper software is freely available at http://ches-mapper.org. Graphical abstract Comparing actual and predicted activity values with CheS-Mapper.

  11. [Vis-NIR spectroscopic pattern recognition combined with SG smoothing applied to breed screening of transgenic sugarcane].

    PubMed

    Liu, Gui-Song; Guo, Hao-Song; Pan, Tao; Wang, Ji-Hua; Cao, Gan

    2014-10-01

    Based on Savitzky-Golay (SG) smoothing screening, principal component analysis (PCA) combined separately with supervised linear discriminant analysis (LDA) and unsupervised hierarchical clustering analysis (HCA) was used for non-destructive visible and near-infrared (Vis-NIR) detection for breed screening of transgenic sugarcane. A random and stability-dependent framework of calibration, prediction, and validation was proposed. A total of 456 samples of sugarcane leaves at the elongating stage were collected from the field, comprising 306 transgenic (positive) samples containing the Bt and Bar genes and 150 non-transgenic (negative) samples. A total of 156 samples (50 negative and 106 positive) were randomly selected as the validation set; the remaining samples (100 negative and 200 positive, 300 samples in total) were used as the modeling set, which was then subdivided into calibration (50 negative and 100 positive, 150 samples) and prediction sets (50 negative and 100 positive, 150 samples) 50 times. The number of SG smoothing points was expanded, while some higher-derivative modes were removed because of their small absolute values, leaving a total of 264 smoothing modes for screening. The pairwise combinations of the first three principal components were used, and the optimal combination of principal components was selected according to the model effect. Based on all divisions of the calibration and prediction sets and all SG smoothing modes, the SG-PCA-LDA and SG-PCA-HCA models were established, and the model parameters were optimized based on the average prediction effect over all divisions to ensure modeling stability. Finally, the models were validated using the validation set. With SG smoothing, the modeling accuracy and stability of PCA-LDA and PCA-HCA were significantly improved. For the optimal SG-PCA-LDA model, the recognition rates for positive and negative validation samples were 94.3% and 96.0%, respectively; for the optimal SG-PCA-HCA model, they were 92.5% and 98.0%. Vis-NIR spectroscopic pattern recognition combined with SG smoothing could be used for accurate recognition of transgenic sugarcane leaves, and provides a convenient screening method for transgenic sugarcane breeding.
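
    The SG pre-processing step can be sketched in a few lines. The example below applies only the classic 5-point quadratic/cubic Savitzky-Golay kernel (-3, 12, 17, 12, -3)/35, one of many smoothing modes a screening like the study's would cover; the PCA-LDA/HCA steps are omitted and the spectrum values are invented:

```python
# Hedged sketch: 5-point quadratic Savitzky-Golay smoothing of a spectrum.
# Endpoints are left unsmoothed for simplicity.

def sg_smooth_5(y):
    """Apply the classic 5-point quadratic SG filter to a sequence."""
    c = (-3, 12, 17, 12, -3)   # convolution weights, normalized by 35
    out = list(y)
    for i in range(2, len(y) - 2):
        out[i] = sum(ci * y[i + k] for ci, k in zip(c, range(-2, 3))) / 35.0
    return out

spectrum = [0.10, 0.12, 0.40, 0.13, 0.11, 0.12, 0.10]  # invented noisy spike
print([round(v, 3) for v in sg_smooth_5(spectrum)])
```

Unlike a plain moving average, the SG fit preserves low-order polynomial features of the signal while attenuating noise, which is why it is a standard pre-treatment for NIR spectra.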

  12. Correlation and agreement of a digital and conventional method to measure arch parameters.

    PubMed

    Nawi, Nes; Mohamed, Alizae Marny; Marizan Nor, Murshida; Ashar, Nor Atika

    2018-01-01

    The aim of the present study was to determine the overall reliability and validity of arch parameters measured digitally compared to conventional measurement. A sample of 111 plaster study models of Down syndrome (DS) patients was digitized using a blue-light three-dimensional (3D) scanner. Digital and manual measurements of the defined parameters were performed using Geomagic analysis software (Geomagic Studio 2014 software, 3D Systems, Rock Hill, SC, USA) on the digital models and with a digital calliper (Tuten, Germany) on the plaster study models. Both measurements were repeated twice to assess intraexaminer reliability based on intraclass correlation coefficients (ICCs), using the independent t test and Pearson's correlation, respectively. The Bland-Altman method of analysis was used to evaluate the agreement of the measurements between the digital and plaster models. No statistically significant differences (p > 0.05) were found between the manual and digital methods when measuring the arch width, arch length, and space analysis. In addition, all parameters showed a significant correlation coefficient (r ≥ 0.972; p < 0.01) between all digital and manual measurements. Furthermore, positive agreement between digital and manual measurements of the arch width (90-96%), arch length, and space analysis (95-99%) was also demonstrated using the Bland-Altman method. These results demonstrate that 3D blue-light scanning and measurement software are able to precisely produce a 3D digital model and measure arch width, arch length, and space analysis. The 3D digital model is valid for use in various clinical applications.
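
    The Bland-Altman analysis used above reduces to the mean difference (bias) between the two methods and its 95% limits of agreement. A minimal sketch with invented paired measurements in mm, not the study's data:

```python
# Hedged sketch of a Bland-Altman agreement analysis: bias and
# 95% limits of agreement (bias ± 1.96 × SD of the differences).
from statistics import fmean, stdev

def bland_altman(a, b):
    diffs = [x - y for x, y in zip(a, b)]
    d = fmean(diffs)
    sd = stdev(diffs)
    return d, d - 1.96 * sd, d + 1.96 * sd  # bias, lower LoA, upper LoA

digital = [35.2, 36.1, 34.8, 35.9, 36.4]   # hypothetical digital values (mm)
plaster = [34.9, 35.8, 34.5, 35.6, 36.0]   # hypothetical calliper values (mm)
d, lo, hi = bland_altman(digital, plaster)
print(f"bias={d:.2f} mm, LoA=({lo:.2f}, {hi:.2f}) mm")
```

If the limits of agreement are narrow enough to be clinically unimportant, the two methods can be used interchangeably, which is the conclusion the study draws for digital models.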

  13. Turkish translation and adaptation of Champion's Health Belief Model Scales for breast cancer mammography screening.

    PubMed

    Yilmaz, Meryem; Sayin, Yazile Yazici

    2014-07-01

    To examine the translation and adaptation process from English to Turkish and the validity and reliability of the Champion's Health Belief Model Scales for Mammography Screening. Its aims are (1) to provide data about and (2) to assess Turkish women's attitudes and behaviours towards mammography. The proportion of women who undergo mammography is low in Turkey. The Champion's Health Belief Model Scales for Mammography Screening-Turkish version can be helpful in determining Turkish women's health beliefs, particularly about mammography. A cross-sectional design and a classical measurement method were used to collect survey data from Turkish women. The Champion's Health Belief Model Scales for Mammography Screening was translated from English to Turkish and then back-translated into English. Next, the meaning and clarity of the scale items were evaluated by a bilingual group representing the culture of the target population. Finally, the tool was evaluated by two bilingual professional researchers in terms of content validity, translation validity, and psychometric estimates of validity and reliability. The analysis included a total of 209 Turkish women. The validity of the scale was confirmed by confirmatory factor analysis and criterion-related validity testing. The Champion's Health Belief Model Scales for Mammography Screening aligned to four factors that were coherent and relatively independent of each other. There was a statistically significant relationship among all of the subscale items: a positive and high correlation of the total item test score and a high Cronbach's α. The scale has strong stability over time: the Champion's Health Belief Model Scales for Mammography Screening demonstrated acceptable preliminary values of reliability and validity. 
It can be used to provide data about healthcare practices required for mammography screening and breast cancer prevention. This scale will show nurses that nursing intervention planning is essential for increasing Turkish women's participation in mammography screening.

  14. Novel risk score of contrast-induced nephropathy after percutaneous coronary intervention.

    PubMed

    Ji, Ling; Su, XiaoFeng; Qin, Wei; Mi, XuHua; Liu, Fei; Tang, XiaoHong; Li, Zi; Yang, LiChuan

    2015-08-01

    Contrast-induced nephropathy (CIN) post-percutaneous coronary intervention (PCI) is a major cause of acute kidney injury. In this study, we established a comprehensive risk score model to assess the risk of CIN after the PCI procedure, which could be easily used in a clinical environment. A total of 805 PCI patients, divided into an analysis cohort (70%) and a validation cohort (30%), were enrolled retrospectively in this study. Risk factors for CIN were identified using univariate analysis and multivariate logistic regression in the analysis cohort. The risk score model was developed based on the multiple regression coefficients. The sensitivity and specificity of the new risk score system were validated in the validation cohort, and comparisons between the new risk score model and previously reported models were performed. The incidence of post-PCI CIN in the analysis cohort (n = 565) was 12%. A considerably higher CIN incidence (50%) was observed in patients with chronic kidney disease (CKD). Age >75, body mass index (BMI) >25, myoglobin level, cardiac function level, hypoalbuminaemia, history of CKD, intra-aortic balloon pump (IABP) use, and peripheral vascular disease (PVD) were identified as independent risk factors for post-PCI CIN. A novel risk score model was established using the multivariate regression coefficients, which showed the highest sensitivity and specificity (0.917, 95% CI 0.877-0.957) compared with previous models. A new post-PCI CIN risk score model was developed based on a retrospective study of 805 patients. Application of this model might be helpful for predicting CIN in patients undergoing the PCI procedure.
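
    The abstract does not give the exact weighting recipe, but a common way to turn logistic-regression coefficients into a bedside score is to divide each coefficient by the smallest one and round to an integer. A generic sketch with invented coefficients and factor names (not the paper's fitted values):

```python
# Hedged sketch: deriving integer risk-score points from hypothetical
# logistic-regression (log-odds) coefficients by scaling and rounding.

def coef_to_points(coefs):
    base = min(abs(b) for b in coefs.values())
    return {name: round(b / base) for name, b in coefs.items()}

betas = {                      # invented coefficients for illustration
    "age>75": 0.92,
    "BMI>25": 0.48,
    "CKD history": 1.35,
    "IABP use": 0.61,
}
points = coef_to_points(betas)
total = sum(points[f] for f in ("age>75", "CKD history"))  # one patient
print(points, "total =", total)
```

A patient's total is the sum of points for the factors they carry; thresholds on the total then map to risk strata, which is what an ROC analysis like the study's would calibrate.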

  15. Measuring leader perceptions of school readiness for reforms: use of an iterative model combining classical and Rasch methods.

    PubMed

    Chatterji, Madhabi

    2002-01-01

    This study examines the validity of data generated by the School Readiness for Reforms: Leader Questionnaire (SRR-LQ) using an iterative procedure that combines classical and Rasch rating scale analysis. Following content validation and pilot testing, principal axis factor extraction and promax rotation of factors yielded a five-factor structure consistent with the content-validated subscales of the original instrument. Factors were identified based on inspection of pattern and structure coefficients. The rotated factor pattern, inter-factor correlations, convergent validity coefficients, and Cronbach's alpha reliability estimates supported the hypothesized construct properties. To further examine the unidimensionality and efficacy of the rating scale structures, item-level data from each factor-defined subscale were subjected to analysis with the Rasch rating scale model. Data-to-model fit statistics and separation reliability for items and persons met acceptable criteria. Rating scale results suggested consistency of expected and observed step difficulties in rating categories, and correspondence of step calibrations with increases in the underlying variables. The combined approach yielded more comprehensive diagnostic information on the quality of the five SRR-LQ subscales; further research is continuing.
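
    Cronbach's alpha, the classical reliability estimate cited above, is alpha = k/(k-1) × (1 − Σσ²_item / σ²_total) for a k-item subscale. A minimal sketch with an invented item-response matrix (rows = respondents, columns = items), not the SRR-LQ data:

```python
# Hedged sketch: Cronbach's alpha from a small response matrix,
# using population variances for both items and totals.
from statistics import pvariance

def cronbach_alpha(rows):
    k = len(rows[0])                       # number of items
    items = list(zip(*rows))               # column-wise item scores
    item_var = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - item_var / total_var)

responses = [                              # invented 5-point ratings
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
]
print(round(cronbach_alpha(responses), 3))
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency, though the threshold depends on the intended use of the scale.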

  16. Development and validation of a risk-prediction nomogram for in-hospital mortality in adults poisoned with drugs and nonpharmaceutical agents

    PubMed Central

    Lionte, Catalina; Sorodoc, Victorita; Jaba, Elisabeta; Botezat, Alina

    2017-01-01

    Acute poisoning with drugs and nonpharmaceutical agents represents an important challenge in the emergency department (ED). The objective was to create and validate a risk-prediction nomogram for use in the ED to predict the risk of in-hospital mortality in adults with acute poisoning from drugs and nonpharmaceutical agents. This was a prospective cohort study involving adults with acute poisoning from drugs and nonpharmaceutical agents admitted to a tertiary referral center for toxicology between January and December 2015 (derivation cohort) and between January and June 2016 (validation cohort). We used a program to generate nomograms based on binary logistic regression predictive models, including variables that had significant associations with death. Using the regression coefficients, we calculated scores for each variable and estimated the event probability. Model validation was performed using bootstrapping to quantify our modeling strategy and using receiver operating characteristic (ROC) analysis. The nomogram was tested on a separate validation cohort using ROC analysis and goodness-of-fit tests. Data from 315 patients aged 18 to 91 years were analyzed (n = 180 in the derivation cohort; n = 135 in the validation cohort). In the final model, the following variables were significantly associated with mortality: age, laboratory test results (lactate, potassium, MB isoenzyme of creatine kinase), electrocardiogram parameters (QTc interval), and echocardiography findings (E-wave velocity deceleration time). Sex was also included so that the same model could be used for men and women. The resulting nomogram showed excellent survival/mortality discrimination (area under the curve [AUC] 0.976, 95% confidence interval [CI] 0.954-0.998, P < 0.0001 for the derivation cohort; AUC 0.957, 95% CI 0.892-1, P < 0.0001 for the validation cohort). 
This nomogram provides more precise, rapid, and simple risk-analysis information for individual patients acutely exposed to drugs and nonpharmaceutical agents, and accurately estimates the probability of in-hospital death, exclusively using the results of objective tests available in the ED. PMID:28328838

  17. Integration of system identification and finite element modelling of nonlinear vibrating structures

    NASA Astrophysics Data System (ADS)

    Cooper, Samson B.; DiMaio, Dario; Ewins, David J.

    2018-03-01

    The Finite Element Method (FEM), Experimental Modal Analysis (EMA) and other linear analysis techniques have been established as reliable tools for the dynamic analysis of engineering structures. They are often used to provide solutions to small and large structures and a variety of other cases in structural dynamics, even those exhibiting a certain degree of nonlinearity. Unfortunately, when the nonlinear effects are substantial or the accuracy of the predicted response is of vital importance, a linear finite element model will generally prove to be unsatisfactory. As a result, the validated linear FE model requires further enhancement so that it can represent and predict the nonlinear behaviour exhibited by the structure. In this paper, a pragmatic approach to integrating test-based system identification and FE modelling of a nonlinear structure is presented. This integration is based on three different phases: the first phase involves the derivation of an Underlying Linear Model (ULM) of the structure, the second phase includes experiment-based nonlinear identification using measured time series and the third phase covers augmenting the linear FE model and experimental validation of the nonlinear FE model. The proposed approach is demonstrated in a case study of a twin cantilever beam assembly coupled with a flexible arch-shaped beam. In this case, polynomial-type nonlinearities are identified and validated with force-controlled stepped-sine test data at several excitation levels.
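
    The nonlinear identification phase can be illustrated with a restoring-force least-squares fit on a simulated Duffing-type oscillator; the parameters below are illustrative assumptions, not the paper's twin-beam assembly:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed single-DOF Duffing parameters (illustrative only)
m, c, k, k3 = 1.0, 0.05, 100.0, 5.0e4

# Simulate the oscillator under random excitation (semi-implicit Euler)
dt, steps = 1e-4, 100_000
x = np.zeros(steps)
v = np.zeros(steps)
f = rng.normal(0.0, 5.0, steps)
for i in range(steps - 1):
    a = (f[i] - c * v[i] - k * x[i] - k3 * x[i] ** 3) / m
    v[i + 1] = v[i] + a * dt
    x[i + 1] = x[i] + v[i + 1] * dt

# Restoring-force identification: f - m*a = c*v + k*x + k3*x^3, solved by
# least squares on the simulated time series
a = np.gradient(v, dt)
A = np.column_stack([v, x, x ** 3])
c_hat, k_hat, k3_hat = np.linalg.lstsq(A, f - m * a, rcond=None)[0]
print(f"identified k = {k_hat:.1f}, k3 = {k3_hat:.3g}")
```

    The recovered linear and cubic stiffness terms are what would then be added to the underlying linear FE model.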

  18. Validation of the Adolescent Concerns Measure (ACM): evidence from exploratory and confirmatory factor analysis.

    PubMed

    Ang, Rebecca P; Chong, Wan Har; Huan, Vivien S; Yeo, Lay See

    2007-01-01

    This article reports the development and initial validation of scores obtained from the Adolescent Concerns Measure (ACM), a scale which assesses concerns of Asian adolescent students. In Study 1, findings from exploratory factor analysis using 619 adolescents suggested a 24-item scale with four correlated factors--Family Concerns (9 items), Peer Concerns (5 items), Personal Concerns (6 items), and School Concerns (4 items). Initial estimates of convergent validity for ACM scores were also reported. The four-factor structure of ACM scores derived from Study 1 was confirmed via confirmatory factor analysis in Study 2 using a two-fold cross-validation procedure with a separate sample of 811 adolescents. Support was found for both the multidimensional and hierarchical models of adolescent concerns using the ACM. Internal consistency and test-retest reliability estimates were adequate for research purposes. ACM scores show promise as a reliable and potentially valid measure of Asian adolescents' concerns.
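
    The exploratory step can be sketched with a Kaiser-criterion eigenvalue count on simulated questionnaire data with four independent latent factors (a simplification: six items per factor here, versus the ACM's 9/5/6/4 split):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated responses: 619 adolescents x 24 items loading on 4 latent factors
n, items_per_factor, n_factors = 619, 6, 4
latent = rng.normal(size=(n, n_factors))
X = np.repeat(latent, items_per_factor, axis=1) * 0.8 \
    + rng.normal(0, 0.6, size=(n, items_per_factor * n_factors))

# Kaiser criterion: retain factors whose correlation-matrix eigenvalue > 1
eigvals = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
retained = int((eigvals > 1.0).sum())
print("leading eigenvalues:", np.round(eigvals[:6], 2))
print("factors retained:", retained)
```

    With four clean factors the scree drops sharply after the fourth eigenvalue; confirming the structure on an independent sample, as in Study 2, is the usual guard against capitalizing on chance.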

  19. Development of Evaluation Indicators for Hospice and Palliative Care Professionals Training Programs in Korea.

    PubMed

    Kang, Jina; Park, Kyoung-Ok

    2017-01-01

    The importance of training for Hospice and Palliative Care (HPC) professionals has been increasing with the systemization of HPC in Korea. Hence, the need and importance of training quality for HPC professionals are growing. This study evaluated the construct validity and reliability of the Evaluation Indicators for standard Hospice and Palliative Care Training (EIHPCT) program. As a framework for developing the evaluation indicators, a theoretical model combining Stufflebeam's CIPP (Context-Input-Process-Product) evaluation model with the PRECEDE-PROCEED model was devised. To verify the construct validity of the EIHPCT program, a structured survey was performed with 169 professionals who were HPC training program administrators, trainers, and trainees. To examine the validity of the areas of the EIHPCT program, exploratory factor analysis and confirmatory factor analysis were conducted. First, in the exploratory factor analysis, indicators with factor loadings above 0.4 were retained as desirable items, and cross-loaded items that loaded at 0.4 or higher on two or more factors were assigned to the factor with the higher loading. Second, the model fit of the modified EIHPCT program was quite good in the confirmatory factor analysis (Goodness-of-Fit Index > 0.70, Comparative Fit Index > 0.80, Normed Fit Index > 0.80, Root Mean square of Residuals < 0.05). The modified model of the EIHPCT comprised 4 areas, 13 subdomains, and 61 indicators. The evaluation indicators of the modified model will be valuable references for improving the HPC professional training program.

  20. Distress modeling for DARWin-ME : final report.

    DOT National Transportation Integrated Search

    2013-12-01

    Distress prediction models, or transfer functions, are key components of the Pavement M-E Design and relevant analysis. The accuracy of such models depends on a successful process of calibration and subsequent validation of model coefficients in the ...

  1. Transtheoretical Model Constructs for Physical Activity Behavior are Invariant across Time among Ethnically Diverse Adults in Hawaii

    PubMed Central

    Nigg, Claudio R; Motl, Robert W; Horwath, Caroline; Dishman, Rod K

    2012-01-01

    Objectives Physical activity (PA) research applying the Transtheoretical Model (TTM) to examine group differences and/or change over time requires preliminary evidence of factorial validity and invariance. The current study examined the factorial validity and longitudinal invariance of TTM constructs recently revised for PA. Method Participants from an ethnically diverse sample in Hawaii (N=700) completed questionnaires capturing each TTM construct. Results Factorial validity was confirmed for each construct using confirmatory factor analysis with full-information maximum likelihood. Longitudinal invariance was evidenced across a shorter (3-month) and longer (6-month) time period via nested model comparisons. Conclusions The questionnaires for each validated TTM construct are provided, and can now be generalized across similar subgroups and time points. Further validation of the provided measures is suggested in additional populations and across extended time points. PMID:22778669

  2. Validity of Sensory Systems as Distinct Constructs

    PubMed Central

    Su, Chia-Ting

    2014-01-01

    This study investigated the validity of sensory systems as distinct measurable constructs as part of a larger project examining Ayres’s theory of sensory integration. Confirmatory factor analysis (CFA) was conducted to test whether sensory questionnaire items represent distinct sensory system constructs. Data were obtained from clinical records of two age groups, 2- to 5-yr-olds (n = 231) and 6- to 10-yr-olds (n = 223). With each group, we tested several CFA models for goodness of fit with the data. The accepted model was identical for each group and indicated that tactile, vestibular–proprioceptive, visual, and auditory systems form distinct, valid factors that are not age dependent. In contrast, alternative models that grouped items according to sensory processing problems (e.g., over- or underresponsiveness within or across sensory systems) did not yield valid factors. Results indicate that distinct sensory system constructs can be measured validly using questionnaire data. PMID:25184467

  3. Developing evaluation instrument based on CIPP models on the implementation of portfolio assessment

    NASA Astrophysics Data System (ADS)

    Kurnia, Feni; Rosana, Dadan; Supahar

    2017-08-01

    This study aimed to develop an evaluation instrument, constructed on the CIPP model, for the implementation of portfolio assessment in science learning. This study used the research and development (R&D) method, adapting the 4-D model for the development of a non-test instrument, with the evaluation instrument constructed on the CIPP model. CIPP is the abbreviation of Context, Input, Process, and Product. The techniques of data collection were interviews, questionnaires, and observations. The data collection instruments were: 1) interview guidelines for the analysis of the problems and the needs, 2) a questionnaire to assess the level of accomplishment of the portfolio assessment instrument, and 3) observation sheets for teachers and students to gather responses to the portfolio assessment instrument. The data were quantitative ratings obtained from several validators. The validators consisted of two lecturers as evaluation experts, two practitioners (science teachers), and three colleagues. This paper shows the results of content validity obtained from the validators and the analysis of the data using Aiken's V formula. The results of this study show that the evaluation instrument based on the CIPP model is appropriate for evaluating the implementation of portfolio assessment instruments. Based on the judgments of the experts, practitioners, and colleagues, the Aiken's V coefficient was between 0.86 and 1.00, which means that the instrument is valid and can be used in the limited trial and operational field trial.
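
    The Aiken's V computation used above has a simple closed form, V = S / (n(c - 1)), where S is the sum of ratings above the lowest category, n the number of raters, and c the number of rating categories. A small sketch with hypothetical ratings from the seven validators:

```python
def aikens_v(ratings, lo=1, hi=5):
    """Aiken's V = sum(r - lo) / (n * (hi - lo)) for n ratings on a lo..hi scale."""
    return sum(r - lo for r in ratings) / (len(ratings) * (hi - lo))

# Seven raters (two experts, two practitioners, three colleagues), 5-point scale;
# the individual ratings are invented for illustration
v = aikens_v([5, 4, 5, 5, 4, 5, 5])
print(round(v, 3))  # 0.929
```

    Values near 1.0 indicate strong rater agreement that an item is content-valid, which is how the 0.86-1.00 range above is read.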

  4. Application of Multivariable Analysis and FTIR-ATR Spectroscopy to the Prediction of Properties in Campeche Honey

    PubMed Central

    Pat, Lucio; Ali, Bassam; Guerrero, Armando; Córdova, Atl V.; Garduza, José P.

    2016-01-01

    Attenuated total reflectance-Fourier transform infrared spectrometry combined with a chemometric model was used for the determination of physicochemical properties (pH, redox potential, free acidity, electrical conductivity, moisture, total soluble solids (TSS), ash, and HMF) in honey samples. The reference values of 189 honey samples of different botanical origin were determined using the Association of Official Analytical Chemists (AOAC, 1990), Codex Alimentarius (2001), and International Honey Commission (2002) methods. Multivariate calibration models were built using partial least squares (PLS) for the measurands studied. The developed models were validated using cross-validation and external validation; several statistical parameters were obtained to determine the robustness of the calibration models: the optimum number of principal components (PCs), the standard error of cross-validation (SECV), the coefficient of determination of calibration (R2cal), the standard error of validation (SEP), the coefficient of determination for external validation (R2val), and the coefficient of variation (CV). The prediction accuracy for pH, redox potential, electrical conductivity, moisture, TSS, and ash was good, while for free acidity and HMF it was poor. The results demonstrate that attenuated total reflectance-Fourier transform infrared spectrometry is a valuable, rapid, and nondestructive tool for the quantification of physicochemical properties of honey. PMID:28070445

  5. Modeling Opponents in Adversarial Risk Analysis.

    PubMed

    Rios Insua, David; Banks, David; Rios, Jesus

    2016-04-01

    Adversarial risk analysis has been introduced as a framework to deal with risks derived from intentional actions of adversaries. The analysis supports one of the decisionmakers, who must forecast the actions of the other agents. Typically, this forecast must take account of random consequences resulting from the set of selected actions. The solution requires one to model the behavior of the opponents, which entails strategic thinking. The supported agent may face different kinds of opponents, who may use different rationality paradigms, for example, the opponent may behave randomly, or seek a Nash equilibrium, or perform level-k thinking, or use mirroring, or employ prospect theory, among many other possibilities. We describe the appropriate analysis for these situations, and also show how to model the uncertainty about the rationality paradigm used by the opponent through a Bayesian model averaging approach, enabling a fully decision-theoretic solution. We also show how as we observe an opponent's decision behavior, this approach allows learning about the validity of each of the rationality models used to predict his decision by computing the models' (posterior) probabilities, which can be understood as a measure of their validity. We focus on simultaneous decision making by two agents. © 2015 Society for Risk Analysis.
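
    The Bayesian model averaging step over rationality paradigms can be sketched as follows; the candidate models, their action distributions, and the observed actions are all illustrative:

```python
import numpy as np

# Candidate rationality models for the opponent, each a distribution over his
# three possible actions (all numbers invented for illustration)
models = {
    "random":  np.array([1 / 3, 1 / 3, 1 / 3]),
    "nash":    np.array([0.7, 0.2, 0.1]),
    "level-1": np.array([0.1, 0.2, 0.7]),
}
prior = {name: 1 / 3 for name in models}

observed_actions = [0, 0, 1, 0, 2, 0, 0]   # indices of actions seen so far

# Posterior over models: p(M | data) is proportional to p(M) * prod_t p(a_t | M)
post = {name: prior[name] * np.prod(probs[observed_actions])
        for name, probs in models.items()}
z = sum(post.values())
post = {name: v / z for name, v in post.items()}

# Model-averaged predictive distribution for the opponent's next action
predictive = sum(post[name] * models[name] for name in models)
print(post)
print(predictive)
```

    The posterior weights play the role of the validity measure described above: models whose predictions match the opponent's observed behavior gain probability mass, and the forecast used for the supported decisionmaker is the weighted average.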

  6. Validation of the Transient Structural Response of a Threaded Assembly: Phase I

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doebling, Scott W.; Hemez, Francois M.; Robertson, Amy N.

    2004-04-01

    This report explores the application of model validation techniques in structural dynamics. The problem of interest is the propagation of an explosive-driven mechanical shock through a complex threaded joint. The study serves the purpose of assessing whether validating a large-size computational model is feasible, which unit experiments are required, and where the main sources of uncertainty reside. The results documented here are preliminary, and the analyses are exploratory in nature. The results obtained to date reveal several deficiencies of the analysis, to be rectified in future work.

  7. Validity of empirical models of exposure in asphalt paving

    PubMed Central

    Burstyn, I; Boffetta, P; Burr, G; Cenni, A; Knecht, U; Sciarra, G; Kromhout, H

    2002-01-01

    Aims: To investigate the validity of empirical models of exposure to bitumen fume and benzo(a)pyrene, developed for a historical cohort study of asphalt paving in Western Europe. Methods: Validity was evaluated using data from the USA, Italy, and Germany not used to develop the original models. Correlation between observed and predicted exposures was examined. Bias and precision were estimated. Results: Models were imprecise. Furthermore, predicted bitumen fume exposures tended to be lower (-70%) than concentrations found during paving in the USA. This apparent bias might be attributed to differences between Western European and USA paving practices. Evaluation of the validity of the benzo(a)pyrene exposure model revealed a similar to expected effect of re-paving and a larger than expected effect of tar use. Overall, benzo(a)pyrene models underestimated exposures by 51%. Conclusions: Possible bias as a result of underestimation of the impact of coal tar on benzo(a)pyrene exposure levels must be explored in sensitivity analysis of the exposure–response relation. Validation of the models, albeit limited, increased our confidence in their applicability to exposure assessment in the historical cohort study of cancer risk among asphalt workers. PMID:12205236
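
    The bias and precision estimates reported above can be computed as in this sketch; the exposure values are invented to illustrate the calculation, not the study's measurements:

```python
import numpy as np

# Illustrative observed vs model-predicted bitumen-fume exposures (mg/m^3)
observed = np.array([0.9, 1.4, 0.7, 2.1, 1.0, 1.6, 0.8, 1.2])
predicted = np.array([0.4, 0.5, 0.3, 0.6, 0.4, 0.5, 0.3, 0.5])

# Work on the log scale, as is usual for right-skewed exposure data
resid = np.log(predicted) - np.log(observed)
bias = np.exp(resid.mean()) - 1          # relative bias (negative = underestimation)
precision = resid.std(ddof=1)            # spread of the log residuals
corr = np.corrcoef(observed, predicted)[0, 1]
print(f"relative bias: {bias:+.0%}, precision (log sd): {precision:.2f}, r={corr:.2f}")
```

    A systematically negative relative bias of this kind is what the abstract summarizes as predictions "tending to be lower" than measured concentrations.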

  8. Factorial Validity of the Decisional Involvement Scale as a Measure of Content and Context of Nursing Practice.

    PubMed

    Yurek, Leo A; Havens, Donna S; Hays, Spencer; Hughes, Linda C

    2015-10-01

    Decisional involvement is widely recognized as an essential component of a professional nursing practice environment. In recent years, researchers have added to the conceptualization of nurses' role in decision-making to differentiate between the content and context of nursing practice. Yet, instruments that clearly distinguish between these two dimensions of practice are lacking. The purpose of this study was to examine the factorial validity of the Decisional Involvement Scale (DIS) as a measure of both the content and context of nursing practice. This secondary analysis was conducted using data from a longitudinal action research project to improve the quality of nursing practice and patient care in six hospitals (N = 1,034) in medically underserved counties of Pennsylvania. A cross-sectional analysis of baseline data from the parent study was used to compare the factor structure of two models (one nested within the other) using confirmatory factor analysis. Although a comparison of the two models indicated that the addition of second-order factors for the content and context of nursing practice improved model fit, neither model provided optimal fit to the data. Additional model-generating research is needed to develop the DIS as a valid measure of decisional involvement for both the content and context of nursing practice. © 2015 Wiley Periodicals, Inc.

  9. Identifying Talent in Youth Sport: A Novel Methodology Using Higher-Dimensional Analysis.

    PubMed

    Till, Kevin; Jones, Ben L; Cobley, Stephen; Morley, David; O'Hara, John; Chapman, Chris; Cooke, Carlton; Beggs, Clive B

    2016-01-01

    Prediction of adult performance from early age talent identification in sport remains difficult. Talent identification research has generally been performed using univariate analysis, which ignores multivariate relationships. To address this issue, this study used a novel higher-dimensional model to orthogonalize multivariate anthropometric and fitness data from junior rugby league players, with the aim of differentiating future career attainment. Anthropometric and fitness data from 257 Under-15 rugby league players was collected. Players were grouped retrospectively according to their future career attainment (i.e., amateur, academy, professional). Players were blindly and randomly divided into an exploratory (n = 165) and validation dataset (n = 92). The exploratory dataset was used to develop and optimize a novel higher-dimensional model, which combined singular value decomposition (SVD) with receiver operating characteristic analysis. Once optimized, the model was tested using the validation dataset. SVD analysis revealed 60 m sprint and agility 505 performance were the most influential characteristics in distinguishing future professional players from amateur and academy players. The exploratory dataset model was able to distinguish between future amateur and professional players with a high degree of accuracy (sensitivity = 85.7%, specificity = 71.1%; p<0.001), although it could not distinguish between future professional and academy players. The validation dataset model was able to distinguish future professionals from the rest with reasonable accuracy (sensitivity = 83.3%, specificity = 63.8%; p = 0.003). Through the use of SVD analysis it was possible to objectively identify criteria to distinguish future career attainment with a sensitivity over 80% using anthropometric and fitness data alone. As such, this suggests that SVD analysis may be a useful analysis tool for research and practice within talent identification.
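
    A rough sketch of the SVD-plus-ROC idea on synthetic player data; the effect sizes, prevalence, and the component-selection rule below are assumptions, not the paper's optimized model:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-in for the 257-player anthropometric/fitness matrix (6 tests).
# Future professionals (y = 1) are shifted on three of the tests (hypothetical)
y = (rng.random(257) < 0.2).astype(int)
X = rng.normal(size=(257, 6))
X[:, :3] -= 2.0 * y[:, None]      # e.g. faster 60 m sprint / agility 505 times

# Standardize, orthogonalize with SVD, then keep the component most associated
# with future attainment (the paper optimizes this choice with ROC analysis)
Z = (X - X.mean(0)) / X.std(0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
corr = np.array([np.corrcoef(U[:, k], y)[0, 1] for k in range(6)])
k = int(np.argmax(np.abs(corr)))
score = np.sign(corr[k]) * (Z @ Vt[k])   # orient so high score => professional

# ROC-style sweep: choose the cut-off maximizing sensitivity + specificity
def sens_spec(t):
    sens = ((score >= t) & (y == 1)).sum() / (y == 1).sum()
    spec = ((score < t) & (y == 0)).sum() / (y == 0).sum()
    return sens, spec

thresholds = np.quantile(score, np.linspace(0.05, 0.95, 50))
best_t = max(thresholds, key=lambda t: sum(sens_spec(t)))
sens, spec = sens_spec(best_t)
print(f"component {k}: sensitivity {sens:.2f}, specificity {spec:.2f}")
```

    As in the paper, the threshold would be fixed on the exploratory subset and then applied unchanged to the held-out validation subset.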

  11. High-resolution computational algorithms for simulating offshore wind turbines and farms: Model development and validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calderer, Antoni; Yang, Xiaolei; Angelidis, Dionysios

    2015-10-30

    The present project involves the development of modeling and analysis design tools for assessing offshore wind turbine technologies. The computational tools developed herein are able to resolve the effects of the coupled interaction of atmospheric turbulence and ocean waves on aerodynamic performance and structural stability and reliability of offshore wind turbines and farms. Laboratory scale experiments have been carried out to derive data sets for validating the computational models.

  12. Parameter Identification and Uncertainty Analysis for Visual MODFLOW based Groundwater Flow Model in a Small River Basin, Eastern India

    NASA Astrophysics Data System (ADS)

    Jena, S.

    2015-12-01

    The overexploitation of groundwater resulted in the abandonment of many shallow tube wells in the river basin in Eastern India. For the sustainability of groundwater resources, basin-scale modelling of groundwater flow is essential for the efficient planning and management of the water resources. The main intent of this study is to develop a 3-D groundwater flow model of the study basin using the Visual MODFLOW package and to calibrate and validate it using 17 years of observed data. A sensitivity analysis was carried out to quantify the susceptibility of the aquifer system to river bank seepage, recharge from rainfall and agricultural practices, horizontal and vertical hydraulic conductivities, and specific yield. To quantify the impact of parameter uncertainties, the Sequential Uncertainty Fitting Algorithm (SUFI-2) and Markov chain Monte Carlo (MCMC) techniques were implemented. Results from the two techniques were compared and their advantages and disadvantages were analysed. The Nash-Sutcliffe coefficient (NSE) and coefficient of determination (R2) were adopted as the two criteria during calibration and validation of the developed model. NSE and R2 values of the groundwater flow model for the calibration and validation periods were in the acceptable range. Also, the MCMC technique was able to provide more reasonable results than SUFI-2. The calibrated and validated model will be useful to identify the aquifer properties, analyse the groundwater flow dynamics and the change in groundwater levels in future forecasts.
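
    The two calibration criteria used above are standard closed forms; a minimal sketch with invented water-table observations:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SS_res / SS_var (1 = perfect, <0 = worse than mean)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def r2(obs, sim):
    """Coefficient of determination: squared Pearson correlation."""
    return np.corrcoef(obs, sim)[0, 1] ** 2

# Hypothetical observed vs simulated water-table depths (m) for one well
obs = np.array([12.1, 11.8, 11.2, 10.9, 10.4, 10.0])
sim = np.array([12.0, 11.9, 11.4, 10.7, 10.5, 10.1])
print(f"NSE={nse(obs, sim):.3f}  R2={r2(obs, sim):.3f}")
```

    Note the difference: R2 only rewards correlation, while NSE also penalizes systematic offset, which is why both are usually reported together for groundwater models.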

  13. Metrological analysis of a virtual flowmeter-based transducer for cryogenic helium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arpaia, P., E-mail: pasquale.arpaia@unina.it; Technology Department, European Organization for Nuclear Research; Girone, M., E-mail: mario.girone@cern.ch

    2015-12-15

    The metrological performance of a virtual flowmeter-based transducer for monitoring helium under cryogenic conditions is assessed. To this aim, an uncertainty model of the transducer, mainly based on a valve model exploiting a finite-element approach and a virtual flowmeter model based on the Sereg-Schlumberger method, is presented. The models are validated experimentally on a case study for helium monitoring in cryogenic systems at the European Organization for Nuclear Research (CERN). The impact of uncertainty sources on the transducer metrological performance is assessed by a sensitivity analysis, based on statistical experiment design and analysis of variance. In this way, the uncertainty sources most influencing the metrological performance of the transducer are singled out over the input range as a whole, at varying operating and setting conditions. This analysis turns out to be important for CERN cryogenics operation because the metrological design of the transducer is validated, and its components and working conditions with critical specifications for future improvements are identified.

  14. Analytic Modeling of Pressurization and Cryogenic Propellant Conditions for Lunar Landing Vehicle

    NASA Technical Reports Server (NTRS)

    Corpening, Jeremy

    2010-01-01

    This slide presentation reviews the development, validation and application of the model to the Lunar Landing Vehicle. The model, named Computational Propellant and Pressurization Program -- One Dimensional (CPPPO), is used in this case to model cryogenic propellant conditions of the Altair lunar lander. The validation of CPPPO was accomplished via comparison to an existing analytic model (i.e., ROCETS), a flight experiment and ground experiments. The model was then used to perform a parametric analysis of pressurant conditions for the Lunar Landing Vehicle and to examine the results of unequal tank pressurization and draining for multiple tank designs.

  15. External validation of prognostic models to predict risk of gestational diabetes mellitus in one Dutch cohort: prospective multicentre cohort study.

    PubMed

    Lamain-de Ruiter, Marije; Kwee, Anneke; Naaktgeboren, Christiana A; de Groot, Inge; Evers, Inge M; Groenendaal, Floris; Hering, Yolanda R; Huisjes, Anjoke J M; Kirpestein, Cornel; Monincx, Wilma M; Siljee, Jacqueline E; Van 't Zelfde, Annewil; van Oirschot, Charlotte M; Vankan-Buitelaar, Simone A; Vonk, Mariska A A W; Wiegers, Therese A; Zwart, Joost J; Franx, Arie; Moons, Karel G M; Koster, Maria P H

    2016-08-30

    Objective: To perform an external validation and direct comparison of published prognostic models for early prediction of the risk of gestational diabetes mellitus, including predictors applicable in the first trimester of pregnancy. Design: External validation of all published prognostic models in a large-scale, prospective, multicentre cohort study. Setting: 31 independent midwifery practices and six hospitals in the Netherlands. Participants: Women recruited in their first trimester (<14 weeks) of pregnancy between December 2012 and January 2014, at their initial prenatal visit. Women with pre-existing diabetes mellitus of any type were excluded. Main outcome measures: Discrimination of the prognostic models was assessed by the C statistic, and calibration assessed by calibration plots. Results: 3723 women were included for analysis, of whom 181 (4.9%) developed gestational diabetes mellitus in pregnancy. 12 prognostic models for the disorder could be validated in the cohort. C statistics ranged from 0.67 to 0.78. Calibration plots showed that eight of the 12 models were well calibrated. The four models with the highest C statistics included almost all of the following predictors: maternal age, maternal body mass index, history of gestational diabetes mellitus, ethnicity, and family history of diabetes. Prognostic models had a similar performance in a subgroup of nulliparous women only. Decision curve analysis showed that the use of these four models always had a positive net benefit. Conclusions: In this external validation study, most of the published prognostic models for gestational diabetes mellitus show acceptable discrimination and calibration. The four models with the highest discriminative abilities in this study cohort, which also perform well in a subgroup of nulliparous women, are easy models to apply in clinical practice and therefore deserve further evaluation regarding their clinical impact. Published by the BMJ Publishing Group Limited.
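
    The decision curve analysis mentioned in the results uses the standard net-benefit formula; a sketch on a synthetic cohort with roughly the study's prevalence (the risk model here is invented):

```python
import numpy as np

def net_benefit(risk, outcome, threshold):
    """Net benefit at probability threshold p_t:
    TP/n - FP/n * p_t / (1 - p_t)  (decision curve analysis)."""
    treat = risk >= threshold
    n = len(outcome)
    tp = np.sum(treat & (outcome == 1)) / n
    fp = np.sum(treat & (outcome == 0)) / n
    return tp - fp * threshold / (1 - threshold)

rng = np.random.default_rng(5)
# Illustrative cohort: ~5% develop the outcome; predictions carry some signal
outcome = (rng.random(3723) < 0.05).astype(int)
risk = np.clip(0.05 + 0.10 * outcome + rng.normal(0, 0.04, 3723), 0, 1)

for pt in (0.02, 0.05, 0.10):
    nb_model = net_benefit(risk, outcome, pt)
    nb_all = outcome.mean() - (1 - outcome.mean()) * pt / (1 - pt)  # treat everyone
    print(f"p_t={pt:.2f}  model NB={nb_model:.4f}  treat-all NB={nb_all:.4f}")
```

    A model has "positive net benefit" in the abstract's sense when its curve lies above both the treat-all and treat-none (NB = 0) strategies across clinically sensible thresholds.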

  16. Factor Validity of the Motivated Strategies for Learning Questionnaire (MSLQ) in Asynchronous Online Learning Environments (AOLE)

    ERIC Educational Resources Information Center

    Cho, Moon-Heum; Summers, Jessica

    2012-01-01

    The purpose of this study was to investigate the factor validity of the Motivated Strategies for Learning Questionnaire (MSLQ) in asynchronous online learning environments. In order to check the factor validity, confirmatory factor analysis (CFA) was conducted with 193 cases. Using CFA, it was found that the original measurement model fit for…

  17. The Involvement of Hong Kong Parents in the Education of Their Children: A Validation of the Parents' Attributions and Perception Questionnaire

    ERIC Educational Resources Information Center

    Phillipson, Sivanes; Phillipson, Shane N.

    2010-01-01

    This study describes the validation and interpretation of the Parents' Attributions and Perception Questionnaire (PAPQ) using confirmatory factor analysis and Rasch modelling to report both the construct validity and category structure of the scales in the questionnaire. The PAPQ was developed to reflect the proposal that parents mediate the…

  18. A validated model for the 22-item Sino-Nasal Outcome Test subdomain structure in chronic rhinosinusitis.

    PubMed

    Feng, Allen L; Wesely, Nicholas C; Hoehle, Lloyd P; Phillips, Katie M; Yamasaki, Alisa; Campbell, Adam P; Gregorio, Luciano L; Killeen, Thomas E; Caradonna, David S; Meier, Josh C; Gray, Stacey T; Sedaghat, Ahmad R

    2017-12-01

    Previous studies have identified subdomains of the 22-item Sino-Nasal Outcome Test (SNOT-22), reflecting distinct and largely independent categories of chronic rhinosinusitis (CRS) symptoms. However, no study has validated the subdomain structure of the SNOT-22. This study aims to validate the existence of underlying symptom subdomains of the SNOT-22 using confirmatory factor analysis (CFA) and to develop a subdomain model that practitioners and researchers can use to describe CRS symptomatology. A total of 800 patients with CRS were included into this cross-sectional study (400 CRS patients from Boston, MA, and 400 CRS patients from Reno, NV). Their SNOT-22 responses were analyzed using exploratory factor analysis (EFA) to determine the number of symptom subdomains. A CFA was performed to develop a validated measurement model for the underlying SNOT-22 subdomains along with various tests of validity and goodness of fit. EFA demonstrated 4 distinct factors reflecting: sleep, nasal, otologic/facial pain, and emotional symptoms (Cronbach's alpha, >0.7; Bartlett's test of sphericity, p < 0.001; Kaiser-Meyer-Olkin >0.90), independent of geographic locale. The corresponding CFA measurement model demonstrated excellent measures of fit (root mean square error of approximation, <0.06; standardized root mean square residual, <0.08; comparative fit index, >0.95; Tucker-Lewis index, >0.95) and measures of construct validity (heterotrait-monotrait [HTMT] ratio, <0.85; composite reliability, >0.7), again independent of geographic locale. The use of the 4-subdomain structure for SNOT-22 (reflecting sleep, nasal, otologic/facial pain, and emotional symptoms of CRS) was validated as the most appropriate to calculate SNOT-22 subdomain scores for patients from different geographic regions using CFA. © 2017 ARS-AAOA, LLC.
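
    The fit indices reported above can be computed from model and baseline chi-square statistics with standard formulas; the chi-square values below are hypothetical, chosen only to land near the quoted cut-offs:

```python
def fit_indices(chi2_m, df_m, chi2_0, df_0, n):
    """Approximate CFA fit indices from model and baseline (null) chi-squares.
    Cut-offs used in the study: RMSEA < .06, CFI > .95, TLI > .95."""
    rmsea = (max(chi2_m - df_m, 0.0) / (df_m * (n - 1))) ** 0.5
    cfi = 1.0 - max(chi2_m - df_m, 0.0) / max(chi2_0 - df_0, chi2_m - df_m, 0.0)
    tli = ((chi2_0 / df_0) - (chi2_m / df_m)) / ((chi2_0 / df_0) - 1.0)
    return rmsea, cfi, tli

# Hypothetical values for a 4-factor model fitted on n = 400 patients
rmsea, cfi, tli = fit_indices(chi2_m=450.0, df_m=203, chi2_0=5200.0, df_0=231, n=400)
print(f"RMSEA={rmsea:.3f}  CFI={cfi:.3f}  TLI={tli:.3f}")
```

    RMSEA measures absolute misfit per degree of freedom, while CFI and TLI compare the model against an independence baseline, which is why both kinds of index are reported together.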

  19. Development and Validation of Decision Forest Model for Estrogen Receptor Binding Prediction of Chemicals Using Large Data Sets.

    PubMed

    Ng, Hui Wen; Doughty, Stephen W; Luo, Heng; Ye, Hao; Ge, Weigong; Tong, Weida; Hong, Huixiao

    2015-12-21

    Some chemicals in the environment possess the potential to interact with the endocrine system in the human body. Multiple receptors are involved in the endocrine system; estrogen receptor α (ERα) plays very important roles in endocrine activity and is the most studied receptor. Understanding and predicting estrogenic activity of chemicals facilitates the evaluation of their endocrine activity. Hence, we have developed a decision forest classification model to predict chemical binding to ERα using a large training data set of 3308 chemicals obtained from the U.S. Food and Drug Administration's Estrogenic Activity Database. We tested the model using cross validations and external data sets of 1641 chemicals obtained from the U.S. Environmental Protection Agency's ToxCast project. The model showed good performance in both internal (92% accuracy) and external validations (∼ 70-89% relative balanced accuracies), where the latter involved the validations of the model across different ER pathway-related assays in ToxCast. The important features that contribute to the prediction ability of the model were identified through informative descriptor analysis and were related to current knowledge of ER binding. Prediction confidence analysis revealed that the model had both high prediction confidence and accuracy for most predicted chemicals. The results demonstrated that the model constructed based on the large training data set is more accurate and robust for predicting ER binding of chemicals than the published models that have been developed using much smaller data sets. The model could be useful for the evaluation of ERα-mediated endocrine activity potential of environmental chemicals.
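
    The ensemble-of-trees workflow can be sketched with scikit-learn's random forest as a stand-in for the decision forest algorithm; the descriptors and binding labels are synthetic, not the Estrogenic Activity Database:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)

# Synthetic stand-in for molecular descriptors: 1500 chemicals x 50 features,
# with binding labels driven by 5 informative descriptors (all hypothetical)
X = rng.normal(size=(1500, 50))
y = (X[:, :5].sum(axis=1) + rng.normal(0, 1.0, 1500) > 0).astype(int)

# A random forest stands in here for the decision forest; balanced accuracy
# mirrors the external-validation metric quoted above
clf = RandomForestClassifier(n_estimators=100, random_state=0)
acc = cross_val_score(clf, X, y, cv=5, scoring="balanced_accuracy")
print(f"5-fold balanced accuracy: {acc.mean():.3f}")
```

    Feature importances from the fitted ensemble play the role of the "informative descriptor analysis" described in the abstract, linking predictions back to chemistry.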

  20. Modeling the Dynamic Interrelations between Mobility, Utility, and Land Asking Price

    NASA Astrophysics Data System (ADS)

    Hidayat, E.; Rudiarto, I.; Siegert, F.; Vries, W. D.

    2018-02-01

    Limited and insufficient information about the dynamic interrelation among mobility, utility, and land price is the main reason to conduct this research. Several studies, with various approaches and variables, have been conducted so far to model land price. However, most of these models appear to generate primarily static land prices. Thus, research is required to compare, design, and validate models that calculate and/or compare the inter-relational changes of mobility, utility, and land price. The applied method is a combination of literature review, expert interviews, and statistical analysis. The result is a newly improved mathematical model which has been validated and is suitable for the case study location. This improved model consists of 12 appropriate variables. The model can be implemented in the city of Salatiga, the case study location, to support better land use planning and mitigate uncontrolled urban growth.
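    A land price model of the kind described ultimately reduces to a regression of price on explanatory variables. A minimal least-squares sketch with made-up predictors and coefficients (illustrative only, not the study's 12 variables):

```python
import numpy as np

# Hypothetical design: rows are parcels, columns are illustrative predictors
# (e.g. accessibility, utility index, distance to center) -- not the study's set.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
true_beta = np.array([2.0, -1.0, 0.5])
price = X @ true_beta + 10.0 + rng.normal(scale=0.1, size=200)

# Fit intercept + coefficients by ordinary least squares
A = np.column_stack([np.ones(200), X])
coef, *_ = np.linalg.lstsq(A, price, rcond=None)
print(coef.round(2))  # intercept ≈ 10, slopes ≈ (2, -1, 0.5)
```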

  1. International Space Station Modal Correlation Analysis

    NASA Technical Reports Server (NTRS)

    Fitzpatrick, Kristin; Grygier, Michael; Laible, Michael; Sugavanam, Sujatha

    2012-01-01

    This paper summarizes the on-orbit modal test and the related modal analysis, model validation and correlation performed for the ISS Stage ULF4, DTF S4-1A, October 11, 2010, GMT 284/06:13:00.00. The objective of this analysis is to validate and correlate analytical models with the intent to verify the ISS critical interface dynamic loads and improve fatigue life prediction. For the ISS configurations under consideration, on-orbit dynamic responses were collected with Russian vehicles attached and without the Orbiter attached to the ISS. ISS instrumentation systems that were used to collect the dynamic responses during the DTF S4-1A included the Internal Wireless Instrumentation System (IWIS), External Wireless Instrumentation System (EWIS), Structural Dynamic Measurement System (SDMS), Space Acceleration Measurement System (SAMS), Inertial Measurement Unit (IMU) and ISS External Cameras. Experimental modal analyses were performed on the measured data to extract modal parameters including frequency, damping and mode shape information. Correlation and comparisons between test and analytical modal parameters were performed to assess the accuracy of models for the ISS configuration under consideration. Based on the frequency comparisons, the accuracy of the mathematical models is assessed and model refinement recommendations are given. Section 2.0 of this report presents the math model used in the analysis. This section also describes the ISS configuration under consideration and summarizes the associated primary modes of interest along with the fundamental appendage modes. Section 3.0 discusses the details of the ISS Stage ULF4 DTF S4-1A test. Section 4.0 discusses the on-orbit instrumentation systems that were used in the collection of the data analyzed in this paper. The modal analysis approach and results used in the analysis of the collected data are summarized in Section 5.0. The model correlation and validation effort is reported in Section 6.0.
Conclusions and recommendations drawn from this analysis are included in Section 7.0.
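    As an illustration of the experimental modal analysis step described above (extracting frequency and damping from measured response data), here is a minimal logarithmic-decrement sketch on a synthetic free-decay signal; the 2 Hz frequency and 5% damping are made-up values, not ISS modes:

```python
import numpy as np

# Free-decay response of a single mode: x(t) = exp(-z*wn*t) * cos(wd*t)
fn, zeta = 2.0, 0.05                      # assumed frequency (Hz) and damping
wn = 2 * np.pi * fn
wd = wn * np.sqrt(1 - zeta**2)
t = np.arange(0, 3, 0.001)
x = np.exp(-zeta * wn * t) * np.cos(wd * t)

# Locate successive positive peaks (simple discrete local maxima)
idx = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]) & (x[1:-1] > 0))[0] + 1
Td = np.mean(np.diff(t[idx]))                        # damped period
delta = np.mean(np.log(x[idx[:-1]] / x[idx[1:]]))    # log decrement
fn_est = 1.0 / Td
zeta_est = delta / np.sqrt(4 * np.pi**2 + delta**2)
print(fn_est, zeta_est)   # recovers ~2 Hz and ~0.05
```

    Real on-orbit data would of course require multi-channel system identification rather than single-mode peak picking.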

  2. Clinimetrics and clinical psychometrics: macro- and micro-analysis.

    PubMed

    Tomba, Elena; Bech, Per

    2012-01-01

    Clinimetrics was introduced three decades ago to specify the domain of clinical markers in clinical medicine (indexes or rating scales). In this perspective, clinical validity is the platform for selecting the various indexes or rating scales (macro-analysis). Psychometric validation of these indexes or rating scales is the measuring aspect (micro-analysis). Clinical judgment analysis by experienced psychiatrists is included in the macro-analysis and the item response theory models are especially preferred in the micro-analysis when using the total score as a sufficient statistic. Clinical assessment tools covering severity of illness scales, prognostic measures, issues of co-morbidity, longitudinal assessments, recovery, stressors, lifestyle, psychological well-being, and illness behavior have been identified. The constructive dialogue in clinimetrics between clinical judgment and psychometric validation procedures is outlined for generating developments of clinical practice in psychiatry. Copyright © 2012 S. Karger AG, Basel.

  3. Derivation and external validation of a case mix model for the standardized reporting of 30-day stroke mortality rates.

    PubMed

    Bray, Benjamin D; Campbell, James; Cloud, Geoffrey C; Hoffman, Alex; James, Martin; Tyrrell, Pippa J; Wolfe, Charles D A; Rudd, Anthony G

    2014-11-01

    Case mix adjustment is required to allow valid comparison of outcomes across care providers. However, there is a lack of externally validated models suitable for use in unselected stroke admissions. We therefore aimed to develop and externally validate prediction models to enable comparison of 30-day post-stroke mortality outcomes using routine clinical data. Models were derived (n=9000 patients) and internally validated (n=18 169 patients) using data from the Sentinel Stroke National Audit Program, the national register of acute stroke in England and Wales. External validation (n=1470 patients) was performed in the South London Stroke Register, a population-based longitudinal study. Models were fitted using general estimating equations. Discrimination and calibration were assessed using receiver operating characteristic curve analysis and correlation plots. Two final models were derived. Model A included age (<60, 60-69, 70-79, 80-89, and ≥90 years), National Institutes of Health Stroke Severity Score (NIHSS) on admission, presence of atrial fibrillation on admission, and stroke type (ischemic versus primary intracerebral hemorrhage). Model B was similar but included only the consciousness component of the NIHSS in place of the full NIHSS. Both models showed excellent discrimination and calibration in internal and external validation. The c-statistics in external validation were 0.87 (95% confidence interval, 0.84-0.89) and 0.86 (95% confidence interval, 0.83-0.89) for models A and B, respectively. We have derived and externally validated 2 models to predict mortality in unselected patients with acute stroke using commonly collected clinical variables. In settings where the ability to record the full NIHSS on admission is limited, the level of consciousness component of the NIHSS provides a good approximation of the full NIHSS for mortality prediction. © 2014 American Heart Association, Inc.
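    The c-statistic workflow above can be reproduced in outline: fit a logistic model on a derivation split, then score discrimination on held-out patients. Everything below is synthetic (made-up variables and coefficients), shown only to illustrate the mechanics:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic cohort: age and a severity score drive a hypothetical mortality risk
rng = np.random.default_rng(0)
n = 1000
age = rng.integers(40, 95, n).astype(float)
severity = rng.normal(10.0, 4.0, n)
logit = -8.0 + 0.06 * age + 0.3 * severity      # made-up true model
died = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([age, severity])
model = LogisticRegression(max_iter=1000).fit(X[:700], died[:700])  # derivation
pred = model.predict_proba(X[700:])[:, 1]                           # validation
auc = roc_auc_score(died[700:], pred)   # c-statistic on held-out patients
print(round(auc, 2))
```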

  4. Dimensionality of the Knee Numeric-Entity Evaluation Score (KNEES-ACL): a condition-specific questionnaire.

    PubMed

    Comins, J D; Krogsgaard, M R; Kreiner, S; Brodersen, J

    2013-10-01

    The benefit of anterior cruciate ligament (ACL) reconstruction has been questioned based on patient-reported outcome measures (PROMs). Valid interpretation of such results requires confirmation of the psychometric properties of the PROM. Rasch analysis is the gold standard for validation of PROMs, yet PROMs used for ACL reconstruction have not been validated using Rasch analysis. We used Rasch analysis to investigate the psychometric properties of the Knee Numeric-Entity Evaluation Score (KNEES-ACL), a newly developed PROM for patients treated for ACL deficiency. Two-hundred forty-two patients pre- and post-ACL reconstruction completed the pilot PROM. Rasch models were used to assess the psychometric properties (e.g., unidimensionality, local response dependency, and differential item functioning). Forty-one items distributed across seven unidimensional constructs measuring impairment, functional limitations, and psychosocial consequences were confirmed to fit Rasch models. Fourteen items were removed because of statistical lack of fit and inadequate face validity. Local response dependency and differential item functioning were identified and adjusted. The KNEES-ACL is the first Rasch-validated condition-specific PROM constructed for patients with ACL deficiency and patients with ACL reconstruction. Thus, this instrument can be used for within- and between-group comparisons. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
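    The Rasch analysis above rests on the dichotomous Rasch model, whose item response function depends only on the difference between person ability and item difficulty. A minimal sketch (illustrative values only):

```python
import numpy as np

def rasch_p(theta, b):
    """Dichotomous Rasch model: probability that a person with ability
    theta endorses an item of difficulty b (both on the logit scale)."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

print(rasch_p(0.0, 0.0))                       # 0.5: ability equals difficulty
print(rasch_p(2.0, 0.0) > rasch_p(1.0, 0.0))   # monotone in ability: True
```

    Fit statistics, local dependency, and differential item functioning checks are then tests of how well observed responses conform to this function.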

  5. Validation of a model with climatic and flow scenario analysis: case of Lake Burrumbeet in southeastern Australia.

    PubMed

    Yihdego, Yohannes; Webb, John

    2016-05-01

    Forecast evaluation is an important topic that addresses the development of reliable hydrological probabilistic forecasts, mainly through the use of climate uncertainties. Validation is often neglected in hydrology, even though the parameters of a model are uncertain and the structure of the model may be incorrectly chosen. A calibrated and verified dynamic hydrologic water balance spreadsheet model has been used to assess the effect of climate variability on Lake Burrumbeet, southeastern Australia. The model has been verified against lake level, lake volume, lake surface area, surface outflow and lake salinity. The current study aims to increase confidence in the model's lake level predictions through historical validation for the years 2008-2013 under different climatic scenarios. The observed climatic conditions (2008-2013) are best matched by a hybrid of scenarios, since the period covers both dry and wet climatic conditions. Besides the uncertainty in hydrologic stresses, uncertainty in the calibrated model is among the major drawbacks involved in making scenario simulations. The uncertainty in the calibrated model was therefore tested using sensitivity analysis, which showed that errors in the model can largely be attributed to erroneous estimates of evaporation and rainfall and, to a lesser extent, surface inflow. The study demonstrates that several climatic scenarios should be analysed, combining extreme climate, stream flow and climate change, instead of one assumed climatic sequence, to improve prediction of climate variability in the future. 
Performing such scenario analysis is a valid exercise to comprehend, in a meaningful way, the uncertainty in the model structure and hydrology; scenarios considered less probable may ultimately turn out to be crucial for decision making. This will increase the confidence of model predictions for the management of water resources.
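    The one-at-a-time sensitivity testing mentioned above can be sketched on a toy annual water balance; the coefficients are illustrative, not the Lake Burrumbeet model's:

```python
# One-at-a-time sensitivity of a toy annual lake water balance (mm/yr).
# All values are illustrative placeholders, not calibrated quantities.
def storage_change(rain=700.0, evap=900.0, inflow=300.0, outflow=50.0):
    """Net change in lake storage depth (mm/yr)."""
    return rain + inflow - evap - outflow

base = storage_change()
for name, kwargs in [("rain",   {"rain": 770.0}),     # each input +10%
                     ("evap",   {"evap": 990.0}),
                     ("inflow", {"inflow": 330.0})]:
    print(name, storage_change(**kwargs) - base)
```

    With these placeholder magnitudes, a 10% perturbation of evaporation moves the balance the most, mirroring the abstract's finding that evaporation and rainfall errors dominate.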

  6. A Preliminary Assessment of the SURF Reactive Burn Model Implementation in FLAG

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Carl Edward; McCombe, Ryan Patrick; Carver, Kyle

    Properly validated and calibrated reactive burn models (RBM) can be useful engineering tools for assessing high explosive performance and safety. Experiments with high explosives are expensive, so inexpensive RBM calculations are increasingly relied on for predictive analysis of performance and safety. This report discusses the validation of Menikoff and Shaw's SURF reactive burn model, which has recently been implemented in the FLAG code. The LANL Gapstick experiment is discussed, as is its utility in reactive burn model validation. Data obtained from pRad for the LT-63 series is also presented along with FLAG simulations using SURF for both PBX 9501 and PBX 9502. Calibration parameters for both explosives are presented.

  7. A Developmental Analysis of the Factorial Validity of the Parent-Report Version of the Adult Responses to Children’s Symptoms in Children Versus Adolescents With Chronic Pain or Pain-Related Chronic Illness

    PubMed Central

    Noel, Melanie; Palermo, Tonya M.; Essner, Bonnie; Zhou, Chuan; Levy, Rona L.; Langer, Shelby L.; Sherman, Amanda L.; Walker, Lynn S.

    2015-01-01

    The widely used Adult Responses to Children’s Symptoms measures parental responses to child symptom complaints among youth aged 7 to 18 years with recurrent/chronic pain. Given developmental differences between children and adolescents and the impact of developmental stage on parenting, the factorial validity of the parent-report version of the Adult Responses to Children’s Symptoms with a pain-specific stem was examined separately in 743 parents of 281 children (7–11 years) and 462 adolescents (12–18 years) with chronic pain or pain-related chronic illness. Factor structures of the Adult Responses to Children’s Symptoms beyond the original 3-factor model were also examined. Exploratory factor analysis with oblique rotation was conducted on a randomly chosen half of the sample of children and adolescents as well as the 2 groups combined to assess underlying factor structure. Confirmatory factor analysis was conducted on the other randomly chosen half of the sample to cross-validate factor structure revealed by exploratory factor analyses and compare it to other model variants. Poor loading and high cross loading items were removed. A 4-factor model (Protect, Minimize, Monitor, and Distract) for children and the combined (child and adolescent) sample and a 5-factor model (Protect, Minimize, Monitor, Distract, and Solicitousness) for adolescents was superior to the 3-factor model proposed in previous literature. Future research should examine the validity of derived subscales and developmental differences in their relationships with parent and child functioning. PMID:25451623

  8. Nonlinear dynamics of planetary gears using analytical and finite element models

    NASA Astrophysics Data System (ADS)

    Ambarisha, Vijaya Kumar; Parker, Robert G.

    2007-05-01

    Vibration-induced gear noise and dynamic loads remain key concerns in many transmission applications that use planetary gears. Tooth separations at large vibrations introduce nonlinearity in geared systems. The present work examines the complex, nonlinear dynamic behavior of spur planetary gears using two models: (i) a lumped-parameter model, and (ii) a finite element model. The two-dimensional (2D) lumped-parameter model represents the gears as lumped inertias, the gear meshes as nonlinear springs with tooth contact loss and periodically varying stiffness due to changing tooth contact conditions, and the supports as linear springs. The 2D finite element model is developed from a unique finite element-contact analysis solver specialized for gear dynamics. Mesh stiffness variation excitation, corner contact, and gear tooth contact loss are all intrinsically considered in the finite element analysis. The dynamics of planetary gears show a rich spectrum of nonlinear phenomena. Nonlinear jumps, chaotic motions, and period-doubling bifurcations occur when the mesh frequency or any of its higher harmonics are near a natural frequency of the system. Responses from the dynamic analysis using analytical and finite element models are successfully compared qualitatively and quantitatively. These comparisons validate the effectiveness of the lumped-parameter model to simulate the dynamics of planetary gears. Mesh phasing rules to suppress rotational and translational vibrations in planetary gears are valid even when nonlinearity from tooth contact loss occurs. These mesh phasing rules, however, are not valid in the chaotic and period-doubling regions.
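    A heavily simplified single-DOF sketch of the kind of lumped-parameter mesh model described, with periodically varying stiffness and tooth contact loss (restoring force only when the mesh deflection is positive), integrated by fixed-step RK4; the parameter values are illustrative, not taken from the paper:

```python
import numpy as np

# 1-DOF gear mesh sketch: time-varying stiffness k(t), contact loss when x <= 0.
# Illustrative parameters only (nondimensional), not the paper's models.
def accel(t, x, v, m=1.0, c=0.05, k0=1.0, dk=0.3, wm=1.2, F=0.2):
    k = k0 + dk * np.cos(wm * t)          # periodically varying mesh stiffness
    spring = k * x if x > 0 else 0.0      # contact loss: no force in separation
    return (F - c * v - spring) / m

# Fixed-step RK4 integration of x'' = accel(t, x, x')
dt, steps = 0.01, 5000
x, v, t = 0.0, 0.0, 0.0
xs = []
for _ in range(steps):
    k1x, k1v = v, accel(t, x, v)
    k2x, k2v = v + 0.5*dt*k1v, accel(t + 0.5*dt, x + 0.5*dt*k1x, v + 0.5*dt*k1v)
    k3x, k3v = v + 0.5*dt*k2v, accel(t + 0.5*dt, x + 0.5*dt*k2x, v + 0.5*dt*k2v)
    k4x, k4v = v + dt*k3v, accel(t + dt, x + dt*k3x, v + dt*k3v)
    x += dt * (k1x + 2*k2x + 2*k3x + k4x) / 6
    v += dt * (k1v + 2*k2v + 2*k3v + k4v) / 6
    t += dt
    xs.append(x)
print(max(xs))   # bounded response amplitude
```

    Sweeping the mesh frequency `wm` in such a model is how jumps and period-doubling regions like those discussed above are mapped out.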

  9. Evaluating the predictive accuracy and the clinical benefit of a nomogram aimed to predict survival in node-positive prostate cancer patients: External validation on a multi-institutional database.

    PubMed

    Bianchi, Lorenzo; Schiavina, Riccardo; Borghesi, Marco; Bianchi, Federico Mineo; Briganti, Alberto; Carini, Marco; Terrone, Carlo; Mottrie, Alex; Gacci, Mauro; Gontero, Paolo; Imbimbo, Ciro; Marchioro, Giansilvio; Milanese, Giulio; Mirone, Vincenzo; Montorsi, Francesco; Morgia, Giuseppe; Novara, Giacomo; Porreca, Angelo; Volpe, Alessandro; Brunocilla, Eugenio

    2018-04-06

    To assess the predictive accuracy and the clinical value of a recent nomogram predicting cancer-specific mortality-free survival after surgery in pN1 prostate cancer patients through an external validation. We evaluated 518 prostate cancer patients treated with radical prostatectomy and pelvic lymph node dissection with evidence of nodal metastases at final pathology, at 10 tertiary centers. External validation was carried out using regression coefficients of the previously published nomogram. The performance characteristics of the model were assessed by quantifying predictive accuracy, according to the area under the curve in the receiver operating characteristic curve and model calibration. Furthermore, we systematically analyzed the specificity, sensitivity, positive predictive value and negative predictive value for each nomogram-derived probability cut-off. Finally, we implemented decision curve analysis, in order to quantify the nomogram's clinical value in routine practice. External validation showed inferior predictive accuracy compared with the internal validation (65.8% vs 83.3%, respectively). The discrimination (area under the curve) of the multivariable model was 66.7% (95% CI 60.1-73.0%) by receiver operating characteristic curve analysis. The calibration plot showed an overestimation throughout the range of predicted cancer-specific mortality-free survival probabilities. However, in decision curve analysis, the nomogram's use showed a net benefit when compared with the scenarios of treating all patients or none. In an external setting, the nomogram showed inferior predictive accuracy and suboptimal calibration characteristics compared to those reported in the original population. However, decision curve analysis showed a clinical net benefit, supporting its use in the management of pN1 prostate cancer patients after surgery. © 2018 The Japanese Urological Association.
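    Decision curve analysis compares a model's net benefit against the treat-all and treat-none strategies, where net benefit at threshold probability pt is TP/n − (FP/n)·pt/(1 − pt). A minimal sketch on made-up predictions:

```python
import numpy as np

def net_benefit(y, p, pt):
    """Net benefit of treating patients with predicted risk >= pt."""
    n = len(y)
    treat = p >= pt
    tp = np.sum(treat & (y == 1))
    fp = np.sum(treat & (y == 0))
    return tp / n - (fp / n) * pt / (1 - pt)

# Made-up outcomes and predicted risks for 10 patients
y = np.array([1, 1, 0, 0, 0, 0, 0, 0, 0, 0])
p = np.array([0.9, 0.8, 0.7, 0.2, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1])
print(net_benefit(y, p, 0.5))                       # model-guided strategy
print(np.mean(y) - (1 - np.mean(y)) * 0.5 / 0.5)    # "treat all" comparator
```

    Plotting net benefit over a range of thresholds, against the treat-all and treat-none (zero) lines, produces the decision curve referred to above.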

  10. Automatic RBG-depth-pressure anthropometric analysis and individualised sleep solution prescription.

    PubMed

    Esquirol Caussa, Jordi; Palmero Cantariño, Cristina; Bayo Tallón, Vanessa; Cos Morera, Miquel Àngel; Escalera, Sergio; Sánchez, David; Sánchez Padilla, Maider; Serrano Domínguez, Noelia; Relats Vilageliu, Mireia

    2017-08-01

    Sleep surfaces must adapt to individual somatotypic features to maintain a comfortable, convenient and healthy sleep, preventing diseases and injuries. Individually determining the most adequate rest surface can often be a complex and subjective question. To design and validate an automatic multimodal somatotype determination model to automatically recommend an individually designed mattress-topper-pillow combination. Design and validation of an automated prescription model for an individualised sleep system is performed through a single-image 2D-3D analysis and body pressure distribution, to objectively determine optimal individual sleep surfaces combining five different mattress densities, three different toppers and three cervical pillows. A final study (n = 151) and re-analysis (n = 117) defined and validated the model, showing high correlations between calculated and real data (>85% in height and body circumferences, 89.9% in weight, 80.4% in body mass index and more than 70% in morphotype categorisation). The somatotype determination model can accurately prescribe an individualised sleep solution. This can be useful for healthy people and for health centres that need to adapt sleep surfaces to people with special needs. Next steps will increase the model's accuracy and analyse whether this prescribed individualised sleep solution can improve sleep quantity and quality; additionally, future studies will adapt the model to mattresses with technological improvements and tailor-made production, and will define interfaces for people with special needs.

  11. The Probability Heuristics Model of Syllogistic Reasoning.

    ERIC Educational Resources Information Center

    Chater, Nick; Oaksford, Mike

    1999-01-01

    Proposes a probability heuristic model for syllogistic reasoning and confirms the rationality of this heuristic by an analysis of the probabilistic validity of syllogistic reasoning that treats logical inference as a limiting case of probabilistic inference. Meta-analysis and two experiments involving 40 adult participants and using generalized…

  12. Validation in the Absence of Observed Events.

    PubMed

    Lathrop, John; Ezell, Barry

    2016-04-01

    This article addresses the problem of validating models in the absence of observed events, in the area of weapons of mass destruction terrorism risk assessment. We address that problem with a broadened definition of "validation," based on stepping "up" a level to considering the reason why decisionmakers seek validation, and from that basis redefine validation as testing how well the model can advise decisionmakers in terrorism risk management decisions. We develop that into two conditions: validation must be based on cues available in the observable world; and it must focus on what can be done to affect that observable world, i.e., risk management. That leads to two foci: (1) the real-world risk generating process, and (2) best use of available data. Based on our experience with nine WMD terrorism risk assessment models, we then describe three best use of available data pitfalls: SME confidence bias, lack of SME cross-referencing, and problematic initiation rates. Those two foci and three pitfalls provide a basis from which we define validation in this context in terms of four tests--Does the model: … capture initiation? … capture the sequence of events by which attack scenarios unfold? … consider unanticipated scenarios? … consider alternative causal chains? Finally, we corroborate our approach against three validation tests from the DOD literature: Is the model a correct representation of the process to be simulated? To what degree are the model results comparable to the real world? Over what range of inputs are the model results useful? © 2015 Society for Risk Analysis.

  13. Experimental Validation of a Thermoelastic Model for SMA Hybrid Composites

    NASA Technical Reports Server (NTRS)

    Turner, Travis L.

    2001-01-01

    This study presents results from experimental validation of a recently developed model for predicting the thermomechanical behavior of shape memory alloy hybrid composite (SMAHC) structures, composite structures with an embedded SMA constituent. The model captures the material nonlinearity of the material system with temperature and is capable of modeling constrained, restrained, or free recovery behavior from experimental measurement of fundamental engineering properties. A brief description of the model and analysis procedures is given, followed by an overview of a parallel effort to fabricate and characterize the material system of SMAHC specimens. Static and dynamic experimental configurations for the SMAHC specimens are described and experimental results for thermal post-buckling and random response are presented. Excellent agreement is achieved between the measured and predicted results, fully validating the theoretical model for constrained recovery behavior of SMAHC structures.

  14. Validation of the Dutch version of the Swallowing Quality-of-Life Questionnaire (DSWAL-QoL) and the adjusted DSWAL-QoL (aDSWAL-QoL) using item analysis with the Rasch model: a pilot study.

    PubMed

    Simpelaere, Ingeborg S; Van Nuffelen, Gwen; De Bodt, Marc; Vanderwegen, Jan; Hansen, Tina

    2017-04-07

    The Swallowing Quality-of-Life Questionnaire (SWAL-QoL) is considered the gold standard for assessing health-related QoL in oropharyngeal dysphagia. The Dutch translation (DSWAL-QoL) and its adjusted version (aDSWAL-QoL) have been validated using classical test theory (CTT). However, these scales have not been tested against the Rasch measurement model, which is required to establish the structural validity and objectivity of the total scale and subscale scores. Thus, the purpose of this study was to examine the psychometric properties of these scales using item analysis according to the Rasch model. Item analysis with the Rasch model was performed using RUMM2030 software with previously collected data from a validation study of 108 patients. The assessment included evaluations of overall model fit, reliability, unidimensionality, threshold ordering, individual item and person fits, differential item functioning (DIF), local item dependency (LID) and targeting. The analysis could not establish the psychometric properties of either of the scales or their subscales because they did not fit the Rasch model, and multidimensionality, disordered thresholds, DIF, and/or LID were found. The reliability and power of fit were high for the total scales (PSI = 0.93) but low for most of the subscales (PSI < 0.70). The targeting of persons and items was suboptimal. The main source of misfit was disordered thresholds for both the total scales and subscales. Based on the results of the analysis, adjustments to improve the scales were implemented as follows: disordered thresholds were rescaled, misfit items were removed and items were split for DIF. However, the multidimensionality and LID could not be resolved. The reliability and power of fit remained low for most of the subscales. This study represents the first analyses of the DSWAL-QoL and aDSWAL-QoL with the Rasch model. 
Conclusions about dysphagia-related HRQoL based on the DSWAL-QoL and aDSWAL-QoL total and subscale scores should be treated with caution until the structural validity and objectivity of both scales have been established. A larger and well-targeted sample is recommended to derive definitive conclusions about the items and scales. Solutions for the psychometric weaknesses suggested by the model and practical implications are discussed.

  15. A metabolic fingerprinting approach based on selected ion flow tube mass spectrometry (SIFT-MS) and chemometrics: A reliable tool for Mediterranean origin-labeled olive oils authentication.

    PubMed

    Bajoub, Aadil; Medina-Rodríguez, Santiago; Ajal, El Amine; Cuadros-Rodríguez, Luis; Monasterio, Romina Paula; Vercammen, Joeri; Fernández-Gutiérrez, Alberto; Carrasco-Pancorbo, Alegría

    2018-04-01

    Selected ion flow tube mass spectrometry (SIFT-MS) in combination with chemometrics was used to authenticate the geographical origin of Mediterranean virgin olive oils (VOOs) produced under geographical origin labels. In particular, 130 oil samples from six different Mediterranean regions (Kalamata (Greece); Toscana (Italy); Meknès and Tyout (Morocco); and Priego de Córdoba and Baena (Spain)) were considered. The headspace volatile fingerprints were measured by SIFT-MS in full scan with H3O+, NO+ and O2+ as precursor ions and the results were subjected to chemometric treatments. Principal Component Analysis (PCA) was used for preliminary multivariate data analysis and Partial Least Squares-Discriminant Analysis (PLS-DA) was applied to build different models (considering the three reagent ions) to classify samples according to the country of origin and regions (within the same country). The multi-class PLS-DA models showed very good performance in terms of fitting accuracy (98.90-100%) and prediction accuracy (96.70-100% accuracy for cross validation and 97.30-100% accuracy for external validation (test set)). Considering the two-class PLS-DA models, the one for the Spanish samples showed 100% sensitivity, specificity and accuracy in calibration, cross validation and external validation; the model for Moroccan oils also showed very satisfactory results (with perfect scores for almost every parameter in all the cases). Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Modeling Terrorism Risk to the Air Transportation System: An Independent Assessment of TSA’s Risk Management Analysis Tool and Associated Methods

    DTIC Science & Technology

    2012-01-01

    our own work for this discussion. DoD Instruction 5000.61 defines model validation as "the process of determining the degree to which a model and its... determined that RMAT is highly concrete code, potentially leading to redundancies in the code itself and making RMAT more difficult to maintain... system conceptual models valid, and are the data used to support them adequate? (Chapters Two and Three) 2. Are the sources and methods for populating

  17. Rapid and non-invasive analysis of deoxynivalenol in durum and common wheat by Fourier-Transform Near Infrared (FT-NIR) spectroscopy.

    PubMed

    De Girolamo, A; Lippolis, V; Nordkvist, E; Visconti, A

    2009-06-01

    Fourier transform near-infrared spectroscopy (FT-NIR) was used for rapid and non-invasive analysis of deoxynivalenol (DON) in durum and common wheat. The relevance of using ground wheat samples with a homogeneous particle size distribution to minimize measurement variations and avoid DON segregation among particles of different sizes was established. Calibration models for durum wheat, common wheat and durum + common wheat samples, with particle size <500 microm, were obtained by using partial least squares (PLS) regression with an external validation technique. Values of root mean square error of prediction (RMSEP, 306-379 microg kg(-1)) were comparable and not too far from values of root mean square error of cross-validation (RMSECV, 470-555 microg kg(-1)). Coefficients of determination (r(2)) indicated an "approximate to good" level of prediction of the DON content by FT-NIR spectroscopy in the PLS calibration models (r(2) = 0.71-0.83), and a "good" discrimination between low and high DON contents in the PLS validation models (r(2) = 0.58-0.63). A "limited to good" practical utility of the models was ascertained by range error ratio (RER) values higher than 6. A qualitative model, based on 197 calibration samples, was developed to discriminate between blank and naturally contaminated wheat samples by setting a cut-off at 300 microg kg(-1) DON to separate the two classes. The model correctly classified 69% of the 65 validation samples with most misclassified samples (16 of 20) showing DON contamination levels quite close to the cut-off level. These findings suggest that FT-NIR analysis is suitable for the determination of DON in unprocessed wheat at levels far below the maximum permitted limits set by the European Commission.
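    The figures of merit used above (RMSEP and the range error ratio) have simple definitions; a sketch with made-up reference and predicted DON values:

```python
import numpy as np

def rmsep(y_true, y_pred):
    """Root mean square error of prediction."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def rer(y_true, y_pred):
    """Range error ratio: reference-value range divided by RMSEP."""
    return (np.max(y_true) - np.min(y_true)) / rmsep(y_true, y_pred)

# Made-up reference vs predicted DON contents (microgram/kg)
y_true = [100.0, 400.0, 700.0, 1000.0]
y_pred = [150.0, 350.0, 750.0, 950.0]
print(rmsep(y_true, y_pred))  # → 50.0
print(rer(y_true, y_pred))    # → 18.0
```

    An RER above about 6, as in the abstract, is commonly read as the model having at least limited practical utility.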

  18. Measurements using orthodontic analysis software on digital models obtained by 3D scans of plaster casts : Intrarater reliability and validity.

    PubMed

    Czarnota, Judith; Hey, Jeremias; Fuhrmann, Robert

    2016-01-01

    The purpose of this work was to determine the reliability and validity of measurements performed on digital models with a desktop scanner and analysis software in comparison with measurements performed manually on conventional plaster casts. A total of 20 pairs of plaster casts reflecting the intraoral conditions of 20 fully dentate individuals were digitized using a three-dimensional scanner (D700; 3Shape). A series of defined parameters were measured both on the resultant digital models with analysis software (Ortho Analyzer; 3Shape) and on the original plaster casts with a digital caliper (Digimatic CD-15DCX; Mitutoyo). Both measurement series were repeated twice and analyzed for intrarater reliability based on intraclass correlation coefficients (ICCs). The results from the digital models were evaluated for their validity against the casts by calculating mean-value differences and associated 95 % limits of agreement (Bland-Altman method). Statistically significant differences were identified via a paired t test. Significant differences were obtained for 16 of 24 tooth-width measurements, for 2 of 5 sites of contact-point displacement in the mandibular anterior segment, for overbite, for maxillary intermolar distance, for Little's irregularity index, and for the summation indices of maxillary and mandibular incisor width. Overall, however, both the mean differences between the results obtained on the digital models versus on the plaster casts and the dispersion ranges associated with these differences suggest that the deviations incurred by the digital measuring technique are not clinically significant. Digital models are adequately reproducible and valid to be employed for routine measurements in orthodontic practice.
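    The Bland-Altman method used above computes the mean difference between paired measurements and the 95% limits of agreement; a sketch with hypothetical tooth-width values, not the study's data:

```python
import numpy as np

def bland_altman(a, b):
    """Mean difference (bias) and 95% limits of agreement between two methods."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical tooth-width measurements (mm): digital model vs caliper
digital = np.array([8.1, 7.9, 10.2, 6.5, 9.0])
caliper = np.array([8.0, 8.0, 10.0, 6.6, 8.9])
bias, lo, hi = bland_altman(digital, caliper)
print(round(bias, 3), round(lo, 3), round(hi, 3))
```

    Clinical acceptability is then judged by whether the limits of agreement fall within a tolerance deemed clinically insignificant, as the abstract concludes.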

  19. Validation Test Report for the CRWMS Analysis and Logistics Visually Interactive Model (CALVIN) Version 3.0, 10074-VTR-3.0-00

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    S. Gillespie

    2000-07-27

    This report describes the tests performed to validate the CRWMS ''Analysis and Logistics Visually Interactive'' Model (CALVIN) Version 3.0 (V3.0) computer code (STN: 10074-3.0-00). To validate the code, a series of test cases was developed in the CALVIN V3.0 Validation Test Plan (CRWMS M&O 1999a) that exercises the principal calculation models and options of CALVIN V3.0. Twenty-five test cases were developed: 18 logistics test cases and 7 cost test cases. These cases test the features of CALVIN in a sequential manner, so that the validation of each test case is used to demonstrate the accuracy of the input to subsequent calculations. Where necessary, the test cases utilize reduced-size data tables to make the hand calculations used to verify the results more tractable, while still adequately testing the code's capabilities. Acceptance criteria were established for the logistics and cost test cases in the Validation Test Plan (CRWMS M&O 1999a). The logistics test cases were developed to test the following CALVIN calculation models: spent nuclear fuel (SNF) and reactivity calculations; options for altering reactor life; adjustment of commercial SNF (CSNF) acceptance rates for fiscal year calculations and mid-year acceptance start; fuel selection, transportation cask loading, and shipping to the Monitored Geologic Repository (MGR); transportation cask shipping to and storage at an Interim Storage Facility (ISF); reactor pool allocation options; and disposal options at the MGR. Two types of cost test cases were developed: cases to validate the detailed transportation costs, and cases to validate the costs associated with the Civilian Radioactive Waste Management System (CRWMS) Management and Operating Contractor (M&O) and Regional Servicing Contractors (RSCs). For each test case, values calculated using Microsoft Excel 97 worksheets were compared to CALVIN V3.0 scenarios with the same input data and assumptions. All of the test case results agree with the CALVIN V3.0 results within the bounds of the acceptance criteria. Therefore, it is concluded that the CALVIN V3.0 calculation models and options tested in this report are validated.

  20. Validating Human Behavioral Models for Combat Simulations Using Techniques for the Evaluation of Human Performance

    DTIC Science & Technology

    2004-01-01

    Cognitive Task Analysis Abstract As Department of Defense (DoD) leaders rely more on modeling and simulation to provide information on which to base... capabilities and intent. Cognitive Task Analysis (CTA) Cognitive Task Analysis (CTA) is an extensive/detailed look at tasks and subtasks performed by a... Domain Analysis and Task Analysis: A Difference That Matters. In Cognitive Task Analysis, edited by J. M. Schraagen, S.

  1. Issues in cross-cultural validity: example from the adaptation, reliability, and validity testing of a Turkish version of the Stanford Health Assessment Questionnaire.

    PubMed

    Küçükdeveci, Ayse A; Sahin, Hülya; Ataman, Sebnem; Griffiths, Bridget; Tennant, Alan

    2004-02-15

    Guidelines have been established for cross-cultural adaptation of outcome measures. However, invariance across cultures must also be demonstrated through analysis of Differential Item Functioning (DIF). This is tested in the context of a Turkish adaptation of the Health Assessment Questionnaire (HAQ). Internal construct validity of the adapted HAQ is assessed by Rasch analysis; reliability, by internal consistency and the intraclass correlation coefficient; external construct validity, by association with impairments and American College of Rheumatology functional stages. Cross-cultural validity is tested through DIF by comparison with data from the UK version of the HAQ. The adapted version of the HAQ demonstrated good internal construct validity through fit of the data to the Rasch model (mean item fit 0.205; SD 0.998). Reliability was excellent (alpha = 0.97) and external construct validity was confirmed by expected associations. DIF for culture was found in only 1 item. Cross-cultural validity was found to be sufficient for use in international studies between the UK and Turkey. Future adaptation of instruments should include analysis of DIF at the field testing stage in the adaptation process.

  2. Test Cases for Modeling and Validation of Structures with Piezoelectric Actuators

    NASA Technical Reports Server (NTRS)

    Reaves, Mercedes C.; Horta, Lucas G.

    2001-01-01

    A set of benchmark test articles were developed to validate techniques for modeling structures containing piezoelectric actuators using commercially available finite element analysis packages. The paper presents the development, modeling, and testing of two structures: an aluminum plate with surface mounted patch actuators and a composite box beam with surface mounted actuators. Three approaches for modeling structures containing piezoelectric actuators using the commercially available packages: MSC/NASTRAN and ANSYS are presented. The approaches, applications, and limitations are discussed. Data for both test articles are compared in terms of frequency response functions from deflection and strain data to input voltage to the actuator. Frequency response function results using the three different analysis approaches provided comparable test/analysis results. It is shown that global versus local behavior of the analytical model and test article must be considered when comparing different approaches. Also, improper bonding of actuators greatly reduces the electrical to mechanical effectiveness of the actuators producing anti-resonance errors.

  3. Development and validation of a tool to assess knowledge and attitudes towards generic medicines among students in Greece: The ATtitude TOwards GENerics (ATTOGEN) questionnaire

    PubMed Central

    Katsari, Vasiliki; Niakas, Dimitris

    2017-01-01

    Introduction The use of generic medicines is a cost-effective policy, often dictated by fiscal restraints. To our knowledge, no fully validated tool exploring students’ knowledge and attitudes towards generic medicines exists. The aim of our study was to develop and validate a questionnaire exploring the knowledge and attitudes of M.Sc. in Health Care Management students and recent alumni towards generic drugs in Greece. Materials and methods The development of the questionnaire was a result of literature review and pilot-testing of its preliminary versions with researchers and students. The final version of the questionnaire contains 18 items measuring the respondents’ knowledge and attitude towards generic medicines on a 5-point Likert scale. Given the ordinal nature of the data, ordinal alpha and polychoric correlations were computed. The sample was randomly split into two halves. Exploratory factor analysis, performed in the first sample, was used for the creation of multi-item scales. Confirmatory factor analysis and Generalized Linear Latent and Mixed Model analysis (GLLAMM) with the use of the rating scale model were used in the second sample to assess goodness of fit. An assessment of internal consistency reliability, test-retest reliability, and construct validity was also performed. Results Among 1402 persons contacted, 986 completed our questionnaire (response rate = 70.3%). Overall Cronbach’s alpha was 0.871. The conjoint use of exploratory and confirmatory factor analysis resulted in a six-scale model, which seemed to fit the data well. Five of the six scales, namely trust, drug quality, state audit, fiscal impact and drug substitution, were found to be valid and reliable, while the knowledge scale suffered only from low inter-scale correlations and a ceiling effect. However, the subsequent confirmatory factor and GLLAMM analyses indicated a good fit of the model to the data. 
Conclusions The ATTOGEN instrument proved to be a reliable and valid tool, suitable for assessing students’ knowledge and attitudes towards generic medicines. PMID:29186163
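Cronbach's alpha (reported here as 0.871 overall) follows directly from the item variances and the variance of the summed score. A minimal sketch with hypothetical Likert responses; note the study also computed ordinal alpha from polychoric correlations, which this plain version does not reproduce:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of
    total score), for a matrix with one row per respondent, one column per item."""
    X = np.asarray(items, float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 5-point Likert responses (5 respondents x 3 items), not study data.
responses = [[4, 5, 4], [2, 2, 3], [5, 4, 5], [3, 3, 3], [1, 2, 2]]
alpha = cronbach_alpha(responses)  # high internal consistency for this toy data
```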

  4. Description, validation, and modification of the Guyton model for space-flight applications. Part A. Guyton model of circulatory, fluid and electrolyte control. Part B. Modification of the Guyton model for circulatory, fluid and electrolyte control

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1985-01-01

    The mathematical model that has been a cornerstone for the systems analysis of space-flight physiological studies is the Guyton model describing circulatory, fluid and electrolyte regulation. The model and the modifications that are made to permit simulation and analysis of the stress of weightlessness are described.

  5. The hind wing of the desert locust (Schistocerca gregaria Forskål). III. A finite element analysis of a deployable structure.

    PubMed

    Herbert, R C; Young, P G; Smith, C W; Wootton, R J; Evans, K E

    2000-10-01

    Finite element analysis is used to model the automatic cambering of the locust hind wing during promotion: the umbrella effect. It was found that the model required a high degree of sophistication before replicating the deformations found in vivo. The model has been validated using experimental data and the deformations recorded both in vivo and ex vivo. It predicts that even slight modifications to the geometrical description used can lead to significant changes in the deformations observed in the anal fan. The model agrees with experimental data and produces deformations very close to those seen in free-flying locusts. The validated model may be used to investigate the varying geometries found in orthopteran anal fans and the stresses found throughout the wing when loaded.

  6. SU-E-T-50: Automatic Validation of Megavoltage Beams Modeled for Clinical Use in Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Melchior, M; Salinas Aranda, F; 21st Century Oncology, Ft. Myers, FL

    2014-06-01

    Purpose: To automatically validate megavoltage beams modeled in XiO™ 4.50 (Elekta, Stockholm, Sweden) and Varian Eclipse™ Treatment Planning Systems (TPS) (Varian Associates, Palo Alto, CA, USA), reducing validation time before beam-on for clinical use. Methods: A software application that can automatically read and analyze DICOM RT Dose and W2CAD files was developed in the MATLAB integrated development environment. TPS-calculated dose distributions, in DICOM RT Dose format, and dose values measured in different Varian Clinac beams, in W2CAD format, were compared. The experimental beam data used were those acquired for beam commissioning, collected on a water phantom with a 2D automatic beam scanning system. Two methods were chosen to evaluate dose distribution fitting: gamma analysis and the point tests described in Appendix E of IAEA TECDOC-1583. Depth dose curves and beam profiles were evaluated for both open and wedged beams. The tolerance parameters chosen for gamma analysis are 3% and 3 mm for dose and distance, respectively. Absolute dose was measured independently at the points proposed in Appendix E of TECDOC-1583 to validate software results. Results: TPS-calculated depth dose distributions agree with measured beam data under fixed precision values at all depths analyzed. Measured beam dose profiles match TPS-calculated doses with high accuracy in both open and wedged beams. Depth and profile dose distribution fitting analysis shows gamma values < 1. Relative errors at the points proposed in Appendix E of TECDOC-1583 meet the tolerances recommended therein. Independent absolute dose measurements at the points proposed in Appendix E of TECDOC-1583 confirm the software results. Conclusion: Automatic validation of megavoltage beams modeled for their use in the clinic was accomplished. The software tool developed proved efficient, giving users a convenient and reliable environment to decide whether or not to accept a beam model for clinical use. Validation time before beam-on for clinical use was reduced to a few hours.
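The gamma analysis used above combines a dose-difference criterion (3%) and a distance-to-agreement criterion (3 mm) into a single pass/fail index. A simplified 1-D global-gamma sketch on a hypothetical depth-dose curve; real implementations interpolate finely and work in 2-D/3-D, so this is an illustration of the metric, not the study's MATLAB tool:

```python
import numpy as np

def gamma_index(x_ref, d_ref, x_eval, d_eval, dd=0.03, dta=3.0):
    """Simplified 1-D global gamma: for each reference point, the minimum over all
    evaluated points of sqrt((dose diff / (dd * max dose))^2 + (distance / dta)^2).
    A point passes the 3%/3 mm criterion when gamma <= 1."""
    d_eval = np.asarray(d_eval, float)
    x_eval = np.asarray(x_eval, float)
    d_max = float(np.max(d_ref))
    g = np.empty(len(x_ref))
    for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
        dose_term = (d_eval - dr) / (dd * d_max)
        dist_term = (x_eval - xr) / dta
        g[i] = np.sqrt(dose_term ** 2 + dist_term ** 2).min()
    return g

# Hypothetical depth-dose curves: a "measured" exponential falloff and a TPS curve
# carrying a uniform 1% dose offset (well inside the 3% tolerance).
x = np.arange(0.0, 50.0, 1.0)             # depth in mm
d_meas = 100.0 * np.exp(-0.03 * x)
d_tps = 1.01 * d_meas
gamma = gamma_index(x, d_meas, x, d_tps)
pass_rate = float(np.mean(gamma <= 1.0))  # 1.0: every point passes
```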

  7. Robust QCT/FEA Models of Proximal Femur Stiffness and Fracture Load During a Sideways Fall on the Hip

    PubMed Central

    Dragomir-Daescu, Dan; Buijs, Jorn Op Den; McEligot, Sean; Dai, Yifei; Entwistle, Rachel C.; Salas, Christina; Melton, L. Joseph; Bennet, Kevin E.; Khosla, Sundeep; Amin, Shreyasee

    2013-01-01

    Clinical implementation of quantitative computed tomography-based finite element analysis (QCT/FEA) of proximal femur stiffness and strength to assess the likelihood of proximal femur (hip) fractures requires a unified modeling procedure, consistency in predicting bone mechanical properties, and validation with realistic test data that represent typical hip fractures, specifically, a sideways fall on the hip. We, therefore, used two sets (n = 9, each) of cadaveric femora with bone densities varying from normal to osteoporotic to build, refine, and validate a new class of QCT/FEA models for hip fracture under loading conditions that simulate a sideways fall on the hip. Convergence requirements of finite element models of the first set of femora led to the creation of a new meshing strategy and a robust process to model proximal femur geometry and material properties from QCT images. We used a second set of femora to cross-validate the model parameters derived from the first set. Refined models were validated experimentally by fracturing femora using specially designed fixtures, load cells, and high speed video capture. CT image reconstructions of fractured femora were created to classify the fractures. The predicted stiffness (cross-validation R2 = 0.87), fracture load (cross-validation R2 = 0.85), and fracture patterns (83% agreement) correlated well with experimental data. PMID:21052839

  8. The presentation and preliminary validation of KIWEST using a large sample of Norwegian university staff.

    PubMed

    Innstrand, Siw Tone; Christensen, Marit; Undebakke, Kirsti Godal; Svarva, Kyrre

    2015-12-01

    The aim of the present paper is to present and validate the Knowledge-Intensive Work Environment Survey Target (KIWEST), a questionnaire developed for assessing psychosocial factors among people in knowledge-intensive work environments. The construct validity and reliability of the measurement model were tested on a representative sample of 3066 academic and administrative staff working at one of the largest universities in Norway. Confirmatory factor analysis provided initial support for the convergent validity and internal consistency of the 30-construct KIWEST measurement model. However, discriminant validity tests indicated that some of the constructs might overlap to some degree. Overall, the KIWEST measure showed promising psychometric properties as a psychosocial work environment measure. © 2015 the Nordic Societies of Public Health.

  9. QSAR modeling of GPCR ligands: methodologies and examples of applications.

    PubMed

    Tropsha, A; Wang, S X

    2006-01-01

    GPCR ligands represent not only one of the major classes of current drugs but also the major continuing source of novel potent pharmaceutical agents. Because 3D structures of GPCRs as determined by experimental techniques are still unavailable, ligand-based drug discovery methods remain the major computational molecular modeling approaches to the analysis of growing data sets of tested GPCR ligands. This paper presents an overview of modern Quantitative Structure-Activity Relationship (QSAR) modeling. We discuss the critical issue of model validation and the strategy for applying successfully validated QSAR models to virtual screening of available chemical databases. We present several examples of applications of validated QSAR modeling approaches to GPCR ligands. We conclude with comments on exciting developments in the QSAR modeling of GPCR ligands that focus on the study of emerging data sets of compounds with dual or even multiple activities against two or more GPCRs.

  10. Validation of Rolls-Royce RR-BK01 Digital Recording and 1/3 Octave Analysis System for Use in Support of Aircraft Noise Certification Efforts in Compliance with 14 CFR Part 36; Letter Report: V324-FB48B3-LR4

    DOT National Transportation Integrated Search

    2017-10-23

    In support of the Federal Aviation Administration's Office of Environment and Energy, the Volpe Center Environmental Measurement and Modeling Division (Volpe) has completed validation of the digital recording and 1/3 octave band analysis components...

  11. Viscoelasticity of Axisymmetric Composite Structures: Analysis and Experimental Validation

    DTIC Science & Technology

    2013-02-01

    compressive stress at the interface between the composite and steel prior to the sheath’s cut-off. Accordingly, the viscoelastic analysis is used... The hoop-stress profile in figure 6 shows the steel region is in compression, resulting from the winding tension of composite overwrap. The stress... mechanical and thermal loads. Experimental validation of the model is conducted using a high-tensioned composite overwrapped on a steel cylinder. The creep

  12. Systematic review of prognostic prediction models for acute kidney injury (AKI) in general hospital populations.

    PubMed

    Hodgson, Luke Eliot; Sarnowski, Alexander; Roderick, Paul J; Dimitrov, Borislav D; Venn, Richard M; Forni, Lui G

    2017-09-27

    Critically appraise prediction models for hospital-acquired acute kidney injury (HA-AKI) in general populations. Systematic review. Medline, Embase and Web of Science until November 2016. Studies describing development of a multivariable model for predicting HA-AKI in non-specialised adult hospital populations. Published guidance was followed for data extraction, reporting and appraisal. 14 046 references were screened. Of 53 HA-AKI prediction models, 11 met inclusion criteria (general medicine and/or surgery populations, 474 478 patient episodes) and five were externally validated. The most common predictors were age (n=9 models), diabetes (5), admission serum creatinine (SCr) (5), chronic kidney disease (CKD) (4), drugs (diuretics (4) and/or ACE inhibitors/angiotensin-receptor blockers (3)), bicarbonate and heart failure (4 models each). Heterogeneity was identified for outcome definition. Deficiencies in reporting included handling of predictors, missing data and sample size. Admission SCr was frequently taken to represent baseline renal function. Most models were considered at high risk of bias. Areas under the receiver operating characteristic curve to predict HA-AKI ranged from 0.71 to 0.80 in derivation (reported in 8/11 studies), 0.66-0.80 in internal validation studies (n=7) and 0.65-0.71 in five external validations. For calibration, the Hosmer-Lemeshow test or a calibration plot was provided in 4/11 derivations, 3/11 internal and 3/5 external validations. A minority of the models allow easy bedside calculation and potential electronic automation. No impact analysis studies were found. AKI prediction models may help address shortcomings in risk assessment; however, in general hospital populations, few have external validation. Similar predictors reflect an elderly demographic with chronic comorbidities. Reporting deficiencies mirror those of prediction research more broadly, with handling of SCr (baseline function and use as a predictor) a concern. 
Future research should focus on validation, exploration of electronic linkage and impact analysis. The latter could combine a prediction model with AKI alerting to address prevention and early recognition of evolving AKI. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
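The discrimination statistic reported for these models, the area under the ROC curve, equals the probability that a randomly chosen positive case is scored higher than a randomly chosen negative one. A dependency-free sketch with hypothetical risk scores (not data from any reviewed model):

```python
def auroc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic: the probability
    that a randomly chosen positive case scores higher than a randomly chosen
    negative case, counting ties as one half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical AKI risk scores for eight patients (1 = developed HA-AKI).
outcome = [0, 0, 1, 0, 1, 1, 0, 1]
risk = [0.10, 0.30, 0.35, 0.20, 0.80, 0.60, 0.40, 0.50]
auc = auroc(outcome, risk)  # 0.9375 for this toy data
```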

  13. Static Aeroelastic Analysis with an Inviscid Cartesian Method

    NASA Technical Reports Server (NTRS)

    Rodriguez, David L.; Aftosmis, Michael J.; Nemec, Marian; Smith, Stephen C.

    2014-01-01

    An embedded-boundary Cartesian-mesh flow solver is coupled with a three degree-of-freedom structural model to perform static, aeroelastic analysis of complex aircraft geometries. The approach solves the complete system of aero-structural equations using a modular, loosely-coupled strategy which allows the lower-fidelity structural model to deform the high-fidelity CFD model. The approach uses an open-source, 3-D discrete-geometry engine to deform a triangulated surface geometry according to the shape predicted by the structural model under the computed aerodynamic loads. The deformation scheme is capable of modeling large deflections and is applicable to the design of modern, very flexible transport wings. The interface is modular so that aerodynamic or structural analysis methods can be easily swapped or enhanced. This extended abstract includes a brief description of the architecture, along with some preliminary validation of underlying assumptions and early results on a generic 3D transport model. The final paper will present more concrete cases and validation of the approach. Preliminary results demonstrate convergence of the complete aero-structural system and investigate the accuracy of the approximations used in the formulation of the structural model.

  14. Finite Element Model Development For Aircraft Fuselage Structures

    NASA Technical Reports Server (NTRS)

    Buehrle, Ralph D.; Fleming, Gary A.; Pappa, Richard S.; Grosveld, Ferdinand W.

    2000-01-01

    The ability to extend the valid frequency range for finite element based structural dynamic predictions using detailed models of the structural components and attachment interfaces is examined for several stiffened aircraft fuselage structures. This extended dynamic prediction capability is needed for the integration of mid-frequency noise control technology. Beam, plate and solid element models of the stiffener components are evaluated. Attachment models between the stiffener and panel skin range from a line along the rivets of the physical structure to a constraint over the entire contact surface. The finite element models are validated using experimental modal analysis results.

  15. The establishment and external validation of NIR qualitative analysis model for waste polyester-cotton blend fabrics.

    PubMed

    Li, Feng; Li, Wen-Xia; Zhao, Guo-Liang; Tang, Shi-Jun; Li, Xue-Jiao; Wu, Hong-Mei

    2014-10-01

    A series of 354 polyester-cotton blend fabrics was studied by near-infrared spectroscopy (NIRS), and NIR qualitative analysis models for different spectral characteristics were established by the partial least squares (PLS) method combined with a qualitative identification coefficient. There were two types of spectra for dyed polyester-cotton blend fabrics: normal spectra and slash spectra. Slash spectra lose their spectral characteristics, which are affected by the samples' dyes, pigments, matting agents and other chemical additives. The recognition rate was low when a single model was established from the total sample set, so the samples were divided into two sets, a normal spectrum sample set and a slash spectrum sample set, and two NIR qualitative analysis models were established respectively. After the models were established, the spectral region, pretreatment methods and number of factors were optimized based on the validation results, improving the robustness and reliability of the models. The results showed that the recognition rate improved greatly when the models were established separately, reaching 99% when the two models were verified by internal validation. RC (correlation coefficient of calibration) values of the normal spectrum model and slash spectrum model were 0.991 and 0.991 respectively, RP (correlation coefficient of prediction) values were 0.983 and 0.984 respectively, SEC (standard error of calibration) values were 0.887 and 0.453 respectively, and SEP (standard error of prediction) values were 1.131 and 0.573 respectively. A further series of 150 samples was used to verify the normal spectrum model and slash spectrum model, with recognition rates of 91.33% and 88.00% respectively. This shows that the NIR qualitative analysis model can be used for identification of polyester-cotton blend fabrics at recycling sites.

  16. Statistical analysis and model validation of automobile emissions

    DOT National Transportation Integrated Search

    2000-09-01

    The article discusses the development of a comprehensive modal emissions model that is currently being integrated with a variety of transportation models as part of National Cooperative Highway Research Program project 25-11. Described is the second-...

  17. Differentiating Categories and Dimensions: Evaluating the Robustness of Taxometric Analyses

    ERIC Educational Resources Information Center

    Ruscio, John; Kaczetow, Walter

    2009-01-01

    Interest in modeling the structure of latent variables is gaining momentum, and many simulation studies suggest that taxometric analysis can validly assess the relative fit of categorical and dimensional models. The generation and parallel analysis of categorical and dimensional comparison data sets reduces the subjectivity required to interpret…

  18. Development of design parameters for mass concrete using finite element analysis : final report, February 2010.

    DOT National Transportation Integrated Search

    2010-02-01

    A finite element model for analysis of mass concrete was developed in this study. To validate the developed model, large concrete blocks made with four different mixes of concrete, typical of use in mass concrete applications in Florida, were made an...

  19. What Do HPT Consultants Do for Performance Analysis?

    ERIC Educational Resources Information Center

    Kang, Sung

    2017-01-01

    This study was conducted to contribute to the field of Human Performance Technology (HPT) through the validation of the performance analysis process of the International Society for Performance Improvement (ISPI) HPT model, the most representative and frequently utilized process model in the HPT field. The study was conducted using content…

  20. Precision Efficacy Analysis for Regression.

    ERIC Educational Resources Information Center

    Brooks, Gordon P.

    When multiple linear regression is used to develop a prediction model, sample size must be large enough to ensure stable coefficients. If the derivation sample size is inadequate, the model may not predict well for future subjects. The precision efficacy analysis for regression (PEAR) method uses a cross-validity approach to select sample sizes…

  1. Modeling the Relationship between Safety Climate and Safety Performance in a Developing Construction Industry: A Cross-Cultural Validation Study

    PubMed Central

    Zahoor, Hafiz; Chan, Albert P. C.; Utama, Wahyudi P.; Gao, Ran; Zafar, Irfan

    2017-01-01

    This study attempts to validate a safety performance (SP) measurement model in the cross-cultural setting of a developing country. In addition, it highlights the variations in investigating the relationship between safety climate (SC) factors and SP indicators. The data were collected from forty under-construction multi-storey building projects in Pakistan. Based on the results of exploratory factor analysis, a SP measurement model was hypothesized. It was tested and validated by conducting confirmatory factor analysis on calibration and validation sub-samples respectively. The study confirmed the significant positive impact of SC on safety compliance and safety participation, and negative impact on number of self-reported accidents/injuries. However, number of near-misses could not be retained in the final SP model because it attained a lower standardized path coefficient value. Moreover, instead of safety participation, safety compliance established a stronger impact on SP. The study uncovered safety enforcement and promotion as a novel SC factor, whereas safety rules and work practices was identified as the most neglected factor. The study contributed to the body of knowledge by unveiling the deviations in existing dimensions of SC and SP. The refined model is expected to concisely measure the SP in the Pakistani construction industry, however, caution must be exercised while generalizing the study results to other developing countries. PMID:28350366

  2. Modeling the Relationship between Safety Climate and Safety Performance in a Developing Construction Industry: A Cross-Cultural Validation Study.

    PubMed

    Zahoor, Hafiz; Chan, Albert P C; Utama, Wahyudi P; Gao, Ran; Zafar, Irfan

    2017-03-28

    This study attempts to validate a safety performance (SP) measurement model in the cross-cultural setting of a developing country. In addition, it highlights the variations in investigating the relationship between safety climate (SC) factors and SP indicators. The data were collected from forty under-construction multi-storey building projects in Pakistan. Based on the results of exploratory factor analysis, a SP measurement model was hypothesized. It was tested and validated by conducting confirmatory factor analysis on calibration and validation sub-samples respectively. The study confirmed the significant positive impact of SC on safety compliance and safety participation, and negative impact on number of self-reported accidents/injuries. However, number of near-misses could not be retained in the final SP model because it attained a lower standardized path coefficient value. Moreover, instead of safety participation, safety compliance established a stronger impact on SP. The study uncovered safety enforcement and promotion as a novel SC factor, whereas safety rules and work practices was identified as the most neglected factor. The study contributed to the body of knowledge by unveiling the deviations in existing dimensions of SC and SP. The refined model is expected to concisely measure the SP in the Pakistani construction industry, however, caution must be exercised while generalizing the study results to other developing countries.

  3. The relationship between cost estimates reliability and BIM adoption: SEM analysis

    NASA Astrophysics Data System (ADS)

    Ismail, N. A. A.; Idris, N. H.; Ramli, H.; Rooshdi, R. R. Raja Muhammad; Sahamir, S. R.

    2018-02-01

    This paper presents the usage of the Structural Equation Modelling (SEM) approach in analysing the effects of Building Information Modelling (BIM) technology adoption in improving the reliability of cost estimates. Based on the questionnaire survey results, SEM analysis using the SPSS-AMOS application examined the relationships between BIM-improved information and cost estimates reliability factors, leading to BIM technology adoption. Six hypotheses were established prior to SEM analysis employing two types of SEM models, namely the Confirmatory Factor Analysis (CFA) model and the full structural model. The SEM models were then validated through the assessment of their uni-dimensionality, validity, reliability, and fitness index, in line with the hypotheses tested. The final SEM model fit measures are: P-value=0.000, RMSEA=0.079<0.08, GFI=0.824, CFI=0.962>0.90, TLI=0.956>0.90, NFI=0.935>0.90 and ChiSq/df=2.259, indicating that the overall index values achieved the required level of model fitness. The model supports all the hypotheses evaluated, confirming that all relationships amongst the constructs are positive and significant. Ultimately, the analysis verified that most of the respondents foresee better understanding of project input information through BIM visualization, its reliable database and coordinated data, in developing more reliable cost estimates. They also perceive that BIM adoption would accelerate their cost estimating tasks.
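The RMSEA and CFI cut-offs quoted in this abstract derive from the model and null-model chi-square statistics. A sketch of the standard formulas, using hypothetical chi-square values for illustration (the paper reports only the final indices, not the underlying chi-squares):

```python
import math

def rmsea(chi2, df, n):
    """RMSEA = sqrt(max(chi2 - df, 0) / (df * (n - 1))); values <= 0.08 are
    conventionally taken as acceptable fit."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2, df, chi2_null, df_null):
    """CFI = 1 - max(chi2 - df, 0) / max(chi2_null - df_null, chi2 - df, 0);
    values >= 0.90 are conventionally taken as good fit."""
    num = max(chi2 - df, 0.0)
    den = max(chi2_null - df_null, chi2 - df, 0.0)
    return 1.0 - num / den if den > 0 else 1.0

# Hypothetical chi-square values for illustration only.
fit_rmsea = rmsea(180.7, 80, 200)         # ≈ 0.0795, below the 0.08 cut-off
fit_cfi = cfi(180.7, 80, 1500.0, 105)     # ≈ 0.928, above the 0.90 cut-off
```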

  4. Machine Learning Meta-analysis of Large Metagenomic Datasets: Tools and Biological Insights.

    PubMed

    Pasolli, Edoardo; Truong, Duy Tin; Malik, Faizan; Waldron, Levi; Segata, Nicola

    2016-07-01

    Shotgun metagenomic analysis of the human-associated microbiome provides a rich set of microbial features for prediction and biomarker discovery in the context of human diseases and health conditions. However, the use of such high-resolution microbial features presents new challenges, and validated computational tools for learning tasks are lacking. Moreover, classification rules have scarcely been validated in independent studies, posing questions about the generalization of disease-predictive models across cohorts. In this paper, we comprehensively assess approaches to metagenomics-based prediction tasks and for quantitative assessment of the strength of potential microbiome-phenotype associations. We develop a computational framework for prediction tasks using quantitative microbiome profiles, including species-level relative abundances and presence of strain-specific markers. A comprehensive meta-analysis, with particular emphasis on generalization across cohorts, was performed in a collection of 2424 publicly available metagenomic samples from eight large-scale studies. Cross-validation revealed good disease-prediction capabilities, which were in general improved by feature selection and use of strain-specific markers instead of species-level taxonomic abundance. In cross-study analysis, models transferred between studies were in some cases less accurate than models tested by within-study cross-validation. Interestingly, the addition of healthy (control) samples from other studies to training sets improved disease prediction capabilities. Some microbial species (most notably Streptococcus anginosus) seem to characterize general dysbiotic states of the microbiome rather than connections with a specific disease. Our results in modelling features of the "healthy" microbiome can be considered a first step toward defining general microbial dysbiosis. The software framework, microbiome profiles, and metadata for thousands of samples are publicly available at http://segatalab.cibio.unitn.it/tools/metaml.
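
    Cross-study validation of the kind described (train on some cohorts, test on a held-out cohort) can be sketched on synthetic data. The nearest-centroid classifier and the batch-effect shifts below are illustrative stand-ins, not the paper's actual methods or metagenomic profiles:

```python
import numpy as np

# Sketch of leave-one-study-out validation on synthetic two-class data; each
# "study" gets a small uniform shift to mimic a cohort/batch effect.
rng = np.random.default_rng(1)

def make_study(shift, n=60):
    """Synthetic 'abundance' data for two classes, offset by a study shift."""
    x0 = rng.normal(loc=0.0 + shift, size=(n // 2, 5))
    x1 = rng.normal(loc=1.0 + shift, size=(n // 2, 5))
    return np.vstack([x0, x1]), np.array([0] * (n // 2) + [1] * (n // 2))

studies = [make_study(shift) for shift in (0.0, 0.1, -0.1)]

def nearest_centroid_acc(X_tr, y_tr, X_te, y_te):
    """Classify each test point by its nearer training-class centroid."""
    c0, c1 = X_tr[y_tr == 0].mean(0), X_tr[y_tr == 1].mean(0)
    pred = (np.linalg.norm(X_te - c1, axis=1)
            < np.linalg.norm(X_te - c0, axis=1)).astype(int)
    return float((pred == y_te).mean())

# Leave-one-study-out: hold out each study in turn, train on the rest
accs = []
for i in range(len(studies)):
    X_te, y_te = studies[i]
    X_tr = np.vstack([studies[j][0] for j in range(len(studies)) if j != i])
    y_tr = np.concatenate([studies[j][1] for j in range(len(studies)) if j != i])
    accs.append(nearest_centroid_acc(X_tr, y_tr, X_te, y_te))
print([round(a, 2) for a in accs])
```

    The held-out-study accuracies are typically lower than within-study cross-validation would suggest when the cohort shift is large, which is exactly the transferability issue the meta-analysis quantifies.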

  5. Uncertainty aggregation and reduction in structure-material performance prediction

    NASA Astrophysics Data System (ADS)

    Hu, Zhen; Mahadevan, Sankaran; Ao, Dan

    2018-02-01

    An uncertainty aggregation and reduction framework is presented for structure-material performance prediction. Different types of uncertainty sources, structural analysis model, and material performance prediction model are connected through a Bayesian network for systematic uncertainty aggregation analysis. To reduce the uncertainty in the computational structure-material performance prediction model, Bayesian updating using experimental observation data is investigated based on the Bayesian network. It is observed that the Bayesian updating results will have large error if the model cannot accurately represent the actual physics, and that this error will be propagated to the predicted performance distribution. To address this issue, this paper proposes a novel uncertainty reduction method by integrating Bayesian calibration with model validation adaptively. The observation domain of the quantity of interest is first discretized into multiple segments. An adaptive algorithm is then developed to perform model validation and Bayesian updating over these observation segments sequentially. Only information from observation segments where the model prediction is highly reliable is used for Bayesian updating; this is found to increase the effectiveness and efficiency of uncertainty reduction. A composite rotorcraft hub component fatigue life prediction model, which combines a finite element structural analysis model and a material damage model, is used to demonstrate the proposed method.
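
    The segment-wise calibration idea can be illustrated with a toy grid-based Bayesian update in which segments failing validation are simply skipped. All numbers and the linear model are invented for the sketch, not the paper's rotorcraft model:

```python
import numpy as np

# Toy segment-wise Bayesian calibration: the observation domain is split into
# segments, and only segments where the model is judged reliable (i.e. passes
# validation) contribute to the posterior over the parameter theta.
rng = np.random.default_rng(2)

theta_true = 1.5
theta_grid = np.linspace(0.0, 3.0, 301)
posterior = np.ones_like(theta_grid) / theta_grid.size   # flat prior

def model(theta, x):
    return theta * x                                      # stand-in physics model

# Observations over 4 segments of the input domain; pretend the model fails
# validation on segment index 2
segments = [np.linspace(a, a + 1, 10) for a in (0, 1, 2, 3)]
reliable = [True, True, False, True]

sigma = 0.2
for xs, ok in zip(segments, reliable):
    if not ok:
        continue                                          # skip unreliable segment
    ys = model(theta_true, xs) + rng.normal(scale=sigma, size=xs.size)
    for x, y in zip(xs, ys):
        posterior *= np.exp(-0.5 * ((y - model(theta_grid, x)) / sigma) ** 2)
    posterior /= posterior.sum()                          # renormalise per segment

theta_map = float(theta_grid[np.argmax(posterior)])
print(round(theta_map, 2))
```

    Dropping the unreliable segment keeps observations the model cannot represent from biasing the update, mirroring the adaptive validate-then-calibrate loop described above.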

  6. Overcoming redundancies in bedside nursing assessments by validating a parsimonious meta-tool: findings from a methodological exercise study.

    PubMed

    Palese, Alvisa; Marini, Eva; Guarnier, Annamaria; Barelli, Paolo; Zambiasi, Paola; Allegrini, Elisabetta; Bazoli, Letizia; Casson, Paola; Marin, Meri; Padovan, Marisa; Picogna, Michele; Taddia, Patrizia; Chiari, Paolo; Salmaso, Daniele; Marognolli, Oliva; Canzan, Federica; Ambrosi, Elisa; Saiani, Luisa; Grassetti, Luca

    2016-10-01

    There is growing interest in validating tools aimed at supporting the clinical decision-making process and research. However, increased bureaucratization of clinical practice and redundancies in the measures collected have been reported by clinicians. Redundancies in clinical assessments affect both patients and nurses negatively. The aim was to validate a meta-tool measuring the risks/problems currently estimated by multiple tools used in daily practice. A secondary analysis of a database was performed, using cross-validation and longitudinal study designs. In total, 1464 patients admitted to 12 medical units in 2012 were assessed at admission with the Brass, Barthel, Conley and Braden tools. Pertinent outcomes, such as the occurrence of post-discharge need for resources and functional decline at discharge, as well as falls and pressure sores, were measured. Explorative factor analysis of each tool, inter-tool correlations and a conceptual evaluation of the redundant/similar items across tools were performed. The validation of the meta-tool was then performed through explorative factor analysis, confirmatory factor analysis and the structural equation model to establish the ability of the meta-tool to predict the outcomes estimated by the original tools. High correlations between the tools emerged (r from 0.428 to 0.867), with common variance from 18.3% to 75.1%. Through conceptual evaluation and explorative factor analysis, the items were reduced from 42 to 20, and the three factors that emerged were confirmed by confirmatory factor analysis. According to the structural equation model results, two of the three emerged factors predicted the outcomes. From the initial 42 items, the meta-tool is composed of 20 items capable of predicting the outcomes as with the original tools. © 2016 John Wiley & Sons, Ltd.

  7. Examining construct and predictive validity of the Health-IT Usability Evaluation Scale: confirmatory factor analysis and structural equation modeling results

    PubMed Central

    Yen, Po-Yin; Sousa, Karen H; Bakken, Suzanne

    2014-01-01

    Background In a previous study, we developed the Health Information Technology Usability Evaluation Scale (Health-ITUES), which is designed to support customization at the item level. Such customization matches the specific tasks/expectations of a health IT system while retaining comparability at the construct level, and provides evidence of its factorial validity and internal consistency reliability through exploratory factor analysis. Objective In this study, we advanced the development of Health-ITUES to examine its construct validity and predictive validity. Methods The health IT system studied was a web-based communication system that supported nurse staffing and scheduling. Using Health-ITUES, we conducted a cross-sectional study to evaluate users’ perception toward the web-based communication system after system implementation. We examined Health-ITUES's construct validity through first and second order confirmatory factor analysis (CFA), and its predictive validity via structural equation modeling (SEM). Results The sample comprised 541 staff nurses in two healthcare organizations. The CFA (n=165) showed that a general usability factor accounted for 78.1%, 93.4%, 51.0%, and 39.9% of the explained variance in ‘Quality of Work Life’, ‘Perceived Usefulness’, ‘Perceived Ease of Use’, and ‘User Control’, respectively. The SEM (n=541) supported the predictive validity of Health-ITUES, explaining 64% of the variance in intention for system use. Conclusions The results of CFA and SEM provide additional evidence for the construct and predictive validity of Health-ITUES. The customizability of Health-ITUES has the potential to support comparisons at the construct level, while allowing variation at the item level. We also illustrate application of Health-ITUES across stages of system development. PMID:24567081

  8. Funding for the 2ND IAEA technical meeting on fusion data processing, validation and analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greenwald, Martin

    The International Atomic Energy Agency (IAEA) will organize the second Technical Meeting on Fusion Data Processing, Validation and Analysis from 30 May to 02 June, 2017, in Cambridge, MA USA. The meeting will be hosted by the MIT Plasma Science and Fusion Center (PSFC). The objective of the meeting is to provide a platform where a set of topics relevant to fusion data processing, validation and analysis are discussed with the view of extrapolation needs to next step fusion devices such as ITER. The validation and analysis of experimental data obtained from diagnostics used to characterize fusion plasmas are crucial for a knowledge-based understanding of the physical processes governing the dynamics of these plasmas. The meeting will aim at fostering, in particular, discussions of research and development results that set out or underline trends observed in the current major fusion confinement devices. General information on the IAEA, including its mission and organization, can be found at the IAEA website. Meeting topics include: uncertainty quantification (UQ); model selection, validation, and verification (V&V); probability theory and statistical analysis; inverse problems and equilibrium reconstruction; integrated data analysis; real-time data analysis; machine learning; signal/image processing and pattern recognition; experimental design and synthetic diagnostics; and data management.

  9. A METHODOLOGY FOR ESTIMATING UNCERTAINTY OF A DISTRIBUTED HYDROLOGIC MODEL: APPLICATION TO POCONO CREEK WATERSHED

    EPA Science Inventory

    Utility of distributed hydrologic and water quality models for watershed management and sustainability studies should be accompanied by rigorous model uncertainty analysis. However, the use of complex watershed models primarily follows the traditional {calibrate/validate/predict}...

  10. LIVVkit: An extensible, python-based, land ice verification and validation toolkit for ice sheet models

    NASA Astrophysics Data System (ADS)

    Kennedy, Joseph H.; Bennett, Andrew R.; Evans, Katherine J.; Price, Stephen; Hoffman, Matthew; Lipscomb, William H.; Fyke, Jeremy; Vargo, Lauren; Boghozian, Adrianna; Norman, Matthew; Worley, Patrick H.

    2017-06-01

    To address the pressing need to better understand the behavior and complex interaction of ice sheets within the global Earth system, significant development of continental-scale, dynamical ice sheet models is underway. Concurrent to the development of the Community Ice Sheet Model (CISM), the corresponding verification and validation (V&V) process is being coordinated through a new, robust, Python-based extensible software package, the Land Ice Verification and Validation toolkit (LIVVkit). Incorporated into the typical ice sheet model development cycle, it provides robust and automated numerical verification, software verification, performance validation, and physical validation analyses on a variety of platforms, from personal laptops to the largest supercomputers. LIVVkit operates on sets of regression test and reference data sets, and provides comparisons for a suite of community prioritized tests, including configuration and parameter variations, bit-for-bit evaluation, and plots of model variables to indicate where differences occur. LIVVkit also provides an easily extensible framework to incorporate and analyze results of new intercomparison projects, new observation data, and new computing platforms. LIVVkit is designed for quick adaptation to additional ice sheet models via abstraction of model specific code, functions, and configurations into an ice sheet model description bundle outside the main LIVVkit structure. Ultimately, through shareable and accessible analysis output, LIVVkit is intended to help developers build confidence in their models and enhance the credibility of ice sheet models overall.
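
    The kind of regression comparison LIVVkit automates (bit-for-bit evaluation with a tolerance fallback) can be sketched as follows; the arrays and tolerance are hypothetical, not CISM output:

```python
import numpy as np

# Minimal sketch of a verification-style regression check: compare a new
# model run against a stored reference, reporting exact (bit-for-bit)
# agreement and, failing that, whether differences stay within a tolerance.
def compare_to_reference(new, ref, rtol=1e-12):
    return {
        "bit_for_bit": np.array_equal(new, ref),
        "within_tol": np.allclose(new, ref, rtol=rtol, atol=0.0),
        "max_diff": float(np.max(np.abs(new - ref))),
    }

ref = np.linspace(0.0, 1.0, 5)           # stored reference output
new = ref.copy()
new[2] += 1e-15                          # tiny perturbation, e.g. compiler change
report = compare_to_reference(new, ref)
print(report["bit_for_bit"], report["within_tol"])
```

    Separating the two checks matters in practice: a compiler or platform change can break bit-for-bit agreement while the result remains scientifically identical within tolerance.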

  11. Predicting surgical site infection after spine surgery: a validated model using a prospective surgical registry.

    PubMed

    Lee, Michael J; Cizik, Amy M; Hamilton, Deven; Chapman, Jens R

    2014-09-01

    The impact of surgical site infection (SSI) is substantial. Although previous studies have determined relative risk and odds ratio (OR) values to quantify risk factors, these values may be difficult to translate to the patient during counseling of surgical options. Ideally, a model that predicts absolute risk of SSI, rather than relative risk or OR values, would greatly enhance the discussion of the safety of spine surgery. To date, there is no risk stratification model that specifically predicts this risk. The purpose of this study was to create and validate a predictive model for the risk of SSI after spine surgery. This study performs a multivariate analysis of SSI after spine surgery using a large prospective surgical registry. Using the results of this analysis, the study then creates and validates a predictive model for SSI after spine surgery. The patient sample is from a high-quality surgical registry from our two institutions with prospectively collected, detailed demographic, comorbidity, and complication data. The outcome measure was an SSI that required return to the operating room for surgical debridement. Using a prospectively collected surgical registry of more than 1,532 patients with extensive demographic, comorbidity, surgical, and complication details recorded for 2 years after the surgery, we identified several risk factors for SSI after multivariate analysis. Using the beta coefficients from those regression analyses, we created a model to predict the occurrence of SSI after spine surgery. We split our data into two subsets for internal and cross-validation of our model. The final predictive model for SSI had a receiver operating characteristic curve value of 0.72, considered to be a fair measure. The final model has been uploaded for use on SpineSage.com. We present a validated model for predicting SSI after spine surgery. 
The value in this model is that it gives the user an absolute percent likelihood of SSI after spine surgery based on the patient's comorbidity profile and invasiveness of surgery. Patients are far more likely to understand an absolute percentage, rather than relative risk and confidence interval values. A model such as this is of paramount importance in counseling patients and enhancing the safety of spine surgery. In addition, a tool such as this can be of great use particularly as health care trends toward pay for performance, quality metrics (such as SSI), and risk adjustment. To facilitate the use of this model, we have created a Web site (SpineSage.com) where users can enter patient data to determine likelihood for SSI. Copyright © 2014 Elsevier Inc. All rights reserved.
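
    The conversion from regression beta coefficients to an absolute percent likelihood works as follows. The coefficients and covariates below are invented for illustration and are NOT the published SpineSage model:

```python
import math

# Hypothetical illustration: beta coefficients from a multivariate logistic
# regression converted into an absolute percent likelihood of SSI for one
# patient via risk = 1 / (1 + exp(-(b0 + sum(b_i * x_i)))).
def predicted_risk(intercept, betas, covariates):
    logit = intercept + sum(b * x for b, x in zip(betas, covariates))
    return 1.0 / (1.0 + math.exp(-logit))

b0 = -4.0                                # assumed intercept
betas = [0.9, 0.6, 0.03]                 # assumed: diabetes, revision surgery, invasiveness
patient = [1, 0, 20]                     # diabetic, primary surgery, invasiveness index 20

risk = predicted_risk(b0, betas, patient)
print(f"{100 * risk:.1f}%")              # absolute percent likelihood, e.g. "7.6%"
```

    This is why an absolute-risk model is easier to counsel with than an OR table: the output is a single percentage specific to the patient's covariate profile.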

  12. Approach for validating actinide and fission product compositions for burnup credit criticality safety analyses

    DOE PAGES

    Radulescu, Georgeta; Gauld, Ian C.; Ilas, Germina; ...

    2014-11-01

    This paper describes a depletion code validation approach for criticality safety analysis using burnup credit for actinide and fission product nuclides in spent nuclear fuel (SNF) compositions. The technical basis for determining the uncertainties in the calculated nuclide concentrations is comparison of calculations to available measurements obtained from destructive radiochemical assay of SNF samples. Probability distributions developed for the uncertainties in the calculated nuclide concentrations were applied to the SNF compositions of a criticality safety analysis model by the use of a Monte Carlo uncertainty sampling method to determine bias and bias uncertainty in the effective neutron multiplication factor. Application of the Monte Carlo uncertainty sampling approach is demonstrated for representative criticality safety analysis models of pressurized water reactor spent fuel pool storage racks and transportation packages using burnup-dependent nuclide concentrations calculated with SCALE 6.1 and the ENDF/B-VII nuclear data. Furthermore, the validation approach and results support a recent revision of the U.S. Nuclear Regulatory Commission Interim Staff Guidance 8.
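
    The Monte Carlo uncertainty sampling step can be illustrated with a toy stand-in for the criticality calculation. The k-eff function, weights, and uncertainties below are invented for the sketch, not SCALE/ENDF values:

```python
import numpy as np

# Toy illustration: sample nuclide-concentration uncertainties from assumed
# distributions, propagate each sample through a stand-in k-eff function,
# and summarise the resulting bias and bias uncertainty.
rng = np.random.default_rng(3)

def k_eff(concentrations):
    """Stand-in for a criticality calculation: linear response to composition."""
    weights = np.array([0.5, 0.3, 0.2])
    return 0.90 + (weights @ (concentrations - 1.0)) * 0.05

nominal = np.ones(3)                      # nominal (calculated) concentrations
rel_sigma = np.array([0.02, 0.05, 0.10])  # assumed relative uncertainty per nuclide

samples = rng.normal(loc=nominal, scale=rel_sigma, size=(10_000, 3))
k_samples = np.array([k_eff(s) for s in samples])

bias = float(k_samples.mean() - k_eff(nominal))   # shift of mean k-eff
bias_unc = float(k_samples.std(ddof=1))           # spread from nuclide uncertainties
print(round(bias, 4), round(bias_unc, 4))
```

    In a real analysis the stand-in function is a full transport calculation and the distributions come from the assay-versus-calculation comparisons, but the sampling logic is the same.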

  13. Portable visible and near-infrared spectrophotometer for triglyceride measurements.

    PubMed

    Kobayashi, Takanori; Kato, Yukiko Hakariya; Tsukamoto, Megumi; Ikuta, Kazuyoshi; Sakudo, Akikazu

    2009-01-01

    An affordable and portable machine is required for the practical use of visible and near-infrared (Vis-NIR) spectroscopy. A portable fruit tester comprising a Vis-NIR spectrophotometer was modified for use in transmittance mode and employed to quantify triglyceride levels in serum in combination with a chemometric analysis. Transmittance spectra collected in the 600- to 1100-nm region were subjected to partial least-squares regression analysis and leave-one-out cross-validation to develop a chemometric model for predicting triglyceride concentrations in serum. The model yielded a coefficient of determination in cross-validation (R2VAL) of 0.7831 with a standard error of cross-validation (SECV) of 43.68 mg/dl. The detection limit of the model was 148.79 mg/dl. Furthermore, masked samples predicted by the model yielded a coefficient of determination in prediction (R2PRED) of 0.6856 with a standard error of prediction (SEP) and detection limit of 61.54 and 159.38 mg/dl, respectively. The portable Vis-NIR spectrophotometer may prove convenient for the measurement of triglyceride concentrations in serum, although obstacles, which are discussed, remain before practical use.
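
    Leave-one-out cross-validation with an SECV summary can be sketched on synthetic data. A univariate linear fit stands in for the partial least-squares model here, and all numbers are invented:

```python
import numpy as np

# Sketch of leave-one-out CV for a calibration model: each sample is
# predicted from a model fit to all remaining samples; SECV and R2(val)
# are computed from those held-out predictions.
rng = np.random.default_rng(4)

n = 40
absorbance = rng.uniform(0.1, 1.0, size=n)                       # synthetic spectra feature
triglyceride = 250 * absorbance + rng.normal(scale=20, size=n)   # mg/dl, synthetic

preds = np.empty(n)
for i in range(n):
    mask = np.arange(n) != i                        # leave sample i out
    coeffs = np.polyfit(absorbance[mask], triglyceride[mask], 1)
    preds[i] = np.polyval(coeffs, absorbance[i])

residuals = triglyceride - preds
secv = float(np.sqrt(np.mean(residuals ** 2)))      # standard error of cross-validation
r2 = float(1 - np.sum(residuals ** 2)
           / np.sum((triglyceride - triglyceride.mean()) ** 2))
print(round(secv, 1), round(r2, 3))
```

    A real PLS model would regress on many wavelengths at once, but the SECV/R2VAL bookkeeping is identical.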

  14. Promoting motivation through mode of instruction: The relationship between use of affective teaching techniques and motivation to learn science

    NASA Astrophysics Data System (ADS)

    Sanchez Rivera, Yamil

    The purpose of this study is to add to what we know about the affective domain and to create a valid instrument for future studies. The Motivation to Learn Science (MLS) Inventory is based on Krathwohl's Taxonomy of Affective Behaviors (Krathwohl et al., 1964). The results of the Exploratory Factor Analysis (EFA) and Confirmatory Factor Analysis (CFA) demonstrated that the MLS Inventory is a valid and reliable instrument: a uni-dimensional instrument composed of 9 items with convergent validity (no divergence). The instrument had a high Cronbach's alpha value of .898 in the EFA analysis and .919 in the CFA analysis. Factor loadings on the 9 items ranged from .617 to .800. Standardized regression weights ranged from .639 to .835 in the CFA analysis. Various indices (RMSEA = .033; NFI = .987; GFI = .985; CFI = 1.000) demonstrated a good fit of the proposed model. Hierarchical linear modeling was used to statistically analyze data in which students' motivation to learn science scores (level-1) were nested within teachers (level-2). The analysis was geared toward identifying whether teachers' use of affective behavior (a level-2 classroom variable) was significantly related to students' MLS scores (the level-1 criterion variable). Model testing proceeded in three phases: an intercept-only model, a means-as-outcomes model, and a random-regression-coefficient model. The intercept-only model revealed an intra-class correlation coefficient of .224 with an estimated reliability of .726. The data therefore suggested that only 22.4% of the variance in MLS scores is between classes and the remaining 77.6% is at the student level. Due to the significant variance in MLS scores (χ² = 62.756, p < .0001), teachers' TAB scores were added as a level-2 predictor. The regression coefficient was non-significant (p > .05). Therefore, the teachers' self-reported use of affective behaviors was not a significant predictor of students' motivation to learn science.
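
    The intercept-only (null) HLM step reduces to estimating the intra-class correlation, ICC = tau00 / (tau00 + sigma2), the share of total variance lying between classes. A minimal ANOVA-based sketch on synthetic nested data, with the variance components invented to mirror the reported .224:

```python
import numpy as np

# Synthetic students-within-classrooms data: between-class variance 0.224,
# within-class (student-level) variance 0.776, matching the reported split.
rng = np.random.default_rng(5)

n_classes, n_students = 30, 25
class_effects = rng.normal(scale=np.sqrt(0.224), size=n_classes)
scores = class_effects[:, None] + rng.normal(scale=np.sqrt(0.776),
                                             size=(n_classes, n_students))

# One-way ANOVA estimates of the variance components
msb = n_students * scores.mean(axis=1).var(ddof=1)   # between-class mean square
msw = scores.var(axis=1, ddof=1).mean()              # within-class mean square
tau00 = (msb - msw) / n_students                     # between-class variance
icc = float(tau00 / (tau00 + msw))                   # intra-class correlation
print(round(icc, 2))
```

    An ICC near zero would mean multilevel modeling adds little over ordinary regression; the .224 reported above is what justified adding a level-2 predictor.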

  15. Psychometric evaluation of a unified Portuguese-language version of the Body Shape Questionnaire in female university students.

    PubMed

    Silva, Wanderson Roberto; Costa, David; Pimenta, Filipa; Maroco, João; Campos, Juliana Alvares Duarte Bonini

    2016-07-21

    The objectives of this study were to develop a unified Portuguese-language version, for use in Brazil and Portugal, of the Body Shape Questionnaire (BSQ) and to estimate its validity, reliability, and internal consistency in Brazilian and Portuguese female university students. Confirmatory factor analysis was performed using both the original (34-item) and shortened (8-item) versions. The model's fit was assessed with χ²/df, CFI, NFI, and RMSEA. Concurrent and convergent validity were assessed. Reliability was estimated through internal consistency and composite reliability (α). Transnational invariance of the BSQ was tested using multi-group analysis. The original 34-item model was refined to present a better fit and adequate validity and reliability. The shortened model was stable in both independent samples and in transnational samples (Brazil and Portugal). The use of this unified version is recommended for the assessment of body shape concerns in both Brazilian and Portuguese college students.

  16. Parameter Optimisation and Uncertainty Analysis in Visual MODFLOW based Flow Model for predicting the groundwater head in an Eastern Indian Aquifer

    NASA Astrophysics Data System (ADS)

    Mohanty, B.; Jena, S.; Panda, R. K.

    2016-12-01

    The overexploitation of groundwater has led to the abandonment of several shallow tube wells in the study basin in Eastern India. For the sustainability of groundwater resources, basin-scale modelling of groundwater flow is indispensable for the effective planning and management of the water resources. The basic intent of this study is to develop a 3-D groundwater flow model of the study basin using the Visual MODFLOW Flex 2014.2 package and to calibrate and validate the model using 17 years of observed data. A sensitivity analysis was carried out to quantify the susceptibility of the aquifer system to river bank seepage, recharge from rainfall and agricultural practices, horizontal and vertical hydraulic conductivities, and specific yield. To quantify the impact of parameter uncertainties, the Sequential Uncertainty Fitting Algorithm (SUFI-2) and Markov chain Monte Carlo (McMC) techniques were implemented. Results from the two techniques were compared and their advantages and disadvantages analysed. The Nash-Sutcliffe coefficient (NSE), Coefficient of Determination (R2), Mean Absolute Error (MAE), Mean Percent Deviation (Dv) and Root Mean Squared Error (RMSE) were adopted as model evaluation criteria during calibration and validation of the developed model. NSE, R2, MAE, Dv and RMSE values for the groundwater flow model during calibration and validation were in the acceptable range. The McMC technique was also able to provide more reasonable results than SUFI-2. The calibrated and validated model will be useful to identify the aquifer properties, analyse the groundwater flow dynamics and forecast future changes in groundwater levels.
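
    The evaluation criteria named above have simple closed forms. A minimal sketch with made-up observed and simulated head series (not the study's data):

```python
import numpy as np

# Plain-numpy versions of common calibration/validation criteria:
# Nash-Sutcliffe efficiency (NSE), RMSE, and MAE.
def nse(obs, sim):
    """1 is a perfect match; values below 0 are worse than the observed mean."""
    return float(1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2))

def rmse(obs, sim):
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

def mae(obs, sim):
    return float(np.mean(np.abs(obs - sim)))

obs = np.array([10.2, 10.8, 11.5, 12.1, 11.9, 11.0])   # observed heads (m), made up
sim = np.array([10.4, 10.7, 11.2, 12.3, 11.7, 11.2])   # simulated heads (m), made up

print(round(nse(obs, sim), 3), round(rmse(obs, sim), 3), round(mae(obs, sim), 3))
```

    NSE is the criterion most sensitive to how well the model tracks variability: a model that always predicts the observed mean scores exactly 0.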

  17. Development of a Home Food Safety Questionnaire Based on the PRECEDE Model: Targeting Iranian Women.

    PubMed

    Esfarjani, Fatemeh; Hosseini, Hedayat; Mohammadi-Nasrabadi, Fatemeh; Abadi, Alireza; Roustaee, Roshanak; Alikhanian, Haleh; Khalafi, Marjan; Kiaee, Mohammad Farhad; Khaksar, Ramin

    2016-12-01

    Food safety is an essential public health issue for all countries. This study was the first attempt to design and develop a home food safety questionnaire (HFSQ), in the conceptual framework of the PRECEDE (predisposing, reinforcing, and enabling constructs in educational diagnosis and evaluation) model, and to assess its validity and reliability. The HFSQ was developed by reviewing electronic databases and 12 focus group discussions with 96 women volunteers. Ten panel members reviewed the questionnaire, and the content validity ratio and content validity index were computed. Twenty women completed the HFSQ, and face validity was assessed. Women who were responsible for food handling in their households (n = 320) were selected randomly from 10 health centers and completed the HFSQ based on the PRECEDE model. To examine the construct validity, a principal components factor analysis with varimax rotation was used. Internal consistency was determined with Cronbach's α. Reproducibility was checked by Kendall's τ after 4 weeks with 30 women. The developed HFSQ was considered acceptable with a content validity index of 0.88. Face validity revealed that 95% of the participants understood the questions and found them easy to answer, and 90% confirmed the appearance of the HFSQ and declared the layout acceptable. Principal component factor analysis revealed that the HFSQ could explain 33.7, 55.3, 34.8, and 60.0% of the total variance of the predisposing, reinforcing, practice, and enabling components, respectively. Cronbach's α was acceptable at 0.73. For Kendall's τc, r = 0.89, with a 95% confidence interval of 0.85 to 0.93. The HFSQ developed based on the PRECEDE model met the standards of acceptable reliability and validity, and can be generalized to a wider population. These results can provide information for the development of effective communication strategies to promote home food safety.
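
    Lawshe's content validity ratio is one common way panel ratings feed into a content validity index; it can be computed directly. The panel counts below are invented, and the study's own CVI convention may differ:

```python
# Lawshe's content validity ratio for one item:
#   CVR = (n_e - N/2) / (N/2)
# where n_e of N panelists rate the item "essential". A simple scale-level
# summary used here is the mean item CVR (one of several CVI conventions).
def content_validity_ratio(n_essential, n_panelists):
    half = n_panelists / 2
    return (n_essential - half) / half

# Ten panel members, as in the study; per-item "essential" counts are made up
essential_counts = [10, 9, 9, 8, 10]
cvrs = [content_validity_ratio(n, 10) for n in essential_counts]
cvi = sum(cvrs) / len(cvrs)
print([round(c, 1) for c in cvrs], round(cvi, 2))
```

    CVR runs from -1 (no panelist rates the item essential) to +1 (all do), so items with low or negative CVR are candidates for removal before the CVI is summarised.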

  18. Structural exploration for the refinement of anticancer matrix metalloproteinase-2 inhibitor designing approaches through robust validated multi-QSARs

    NASA Astrophysics Data System (ADS)

    Adhikari, Nilanjan; Amin, Sk. Abdul; Saha, Achintya; Jha, Tarun

    2018-03-01

    Matrix metalloproteinase-2 (MMP-2) is a promising pharmacological target for designing potential anticancer drugs. MMP-2 plays critical functions in apoptosis by cleaving the DNA repair enzyme poly(ADP-ribose) polymerase (PARP). Moreover, MMP-2 expression triggers the vascular endothelial growth factor (VEGF), which has a positive influence on tumor size, invasion, and angiogenesis. There is therefore an urgent need to develop potential MMP-2 inhibitors with minimal toxicity and better pharmacokinetic properties. In this article, robust validated multi-quantitative structure-activity relationship (QSAR) modeling approaches were applied to a dataset of 222 MMP-2 inhibitors to explore the important structural and pharmacophoric requirements for higher MMP-2 inhibition. Different validated regression and classification-based QSARs, pharmacophore mapping and 3D-QSAR techniques were performed. These models were challenged and subjected to further validation against 24 in-house MMP-2 inhibitors to judge their reliability. All models were individually validated, internally as well as externally, and were supported by each other. The results were further justified by molecular docking analysis. The modeling techniques adopted here help not only to explore the necessary structural and pharmacophoric requirements but also to provide overall validation and refinement techniques for designing potential MMP-2 inhibitors.

  19. Automated Gait Analysis Through Hues and Areas (AGATHA): a method to characterize the spatiotemporal pattern of rat gait

    PubMed Central

    Kloefkorn, Heidi E.; Pettengill, Travis R.; Turner, Sara M. F.; Streeter, Kristi A.; Gonzalez-Rothi, Elisa J.; Fuller, David D.; Allen, Kyle D.

    2016-01-01

    While rodent gait analysis can quantify the behavioral consequences of disease, significant methodological differences exist between analysis platforms and little validation has been performed to understand or mitigate these sources of variance. By providing the algorithms used to quantify gait, open-source gait analysis software can be validated and used to explore methodological differences. Our group is introducing, for the first time, a fully-automated, open-source method for the characterization of rodent spatiotemporal gait patterns, termed Automated Gait Analysis Through Hues and Areas (AGATHA). This study describes how AGATHA identifies gait events, validates AGATHA relative to manual digitization methods, and utilizes AGATHA to detect gait compensations in orthopaedic and spinal cord injury models. To validate AGATHA against manual digitization, results from videos of rodent gait, recorded at 1000 frames per second (fps), were compared. To assess one common source of variance (the effects of video frame rate), these 1000 fps videos were re-sampled to mimic several lower fps and compared again. While spatial variables were indistinguishable between AGATHA and manual digitization, low video frame rates resulted in temporal errors for both methods. At frame rates over 125 fps, AGATHA achieved a comparable accuracy and precision to manual digitization for all gait variables. Moreover, AGATHA detected unique gait changes in each injury model. These data demonstrate AGATHA is an accurate and precise platform for the analysis of rodent spatiotemporal gait patterns. PMID:27554674
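
    The frame-rate effect on temporal gait variables has a simple back-of-envelope form: an event timed from video is resolved only to the nearest frame, so the worst-case single-event timing error is one frame interval:

```python
# Worst-case timing error for a single video-detected gait event is one
# frame interval, 1/fps (here reported in milliseconds).
def worst_case_timing_error_ms(fps):
    return 1000.0 / fps

for fps in (1000, 125, 30):
    print(fps, "fps ->", round(worst_case_timing_error_ms(fps), 1), "ms")
```

    This is consistent with the finding above: at 125 fps the per-event error is already down to a few milliseconds, small relative to rat stance and swing durations, while very low frame rates introduce temporal errors regardless of the digitization method.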

  1. Applying the Health Belief Model to college students' health behavior

    PubMed Central

    Kim, Hak-Seon; Ahn, Joo

    2012-01-01

    The purpose of this research was to investigate how university students' nutrition beliefs influence their health behavioral intention. This study used an online survey engine (Qualtrics.com) to collect data from college students. Out of 253 questionnaires collected, 251 (99.2%) were used for the statistical analysis. Confirmatory Factor Analysis (CFA) revealed that seven dimensions, "Nutrition Confidence," "Susceptibility," "Severity," "Barrier," "Benefit," "Behavioral Intention to Eat Healthy Food," and "Behavioral Intention to do Physical Activity," had construct validity; Cronbach's alpha coefficients and composite reliabilities were tested for item reliability. The results validate that objective nutrition knowledge was a good predictor of college students' nutrition confidence. The results also clearly showed that two direct measures were significant predictors of behavioral intentions as hypothesized. Perceived benefit of and perceived barrier to eating healthy food had significant effects on behavioral intentions and were valid measures for determining behavioral intentions. These findings can enhance the extant literature on the universal applicability of the model and serve as useful references for further investigations of the validity of the model within other health care or foodservice settings and for other health behavioral categories. PMID:23346306

  2. Anisotropic composite human skull model and skull fracture validation against temporo-parietal skull fracture.

    PubMed

    Sahoo, Debasis; Deck, Caroline; Yoganandan, Narayan; Willinger, Rémy

    2013-12-01

    A composite material model for the skull, taking damage into account, is implemented in the Strasbourg University finite element head model (SUFEHM) in order to enhance the existing skull mechanical constitutive law. The skull behavior is validated in terms of fracture patterns and contact forces by reconstructing 15 experimental cases. The new SUFEHM skull model is capable of reproducing skull fracture precisely. The composite skull model is validated not only for maximum forces, but also, for the first time, for lateral impact against actual force-time curves from PMHS. Skull strain energy is found to be a pertinent parameter for predicting skull fracture; based on statistical (binary logistic regression) analysis, a 50% risk of skull fracture occurred at a skull strain energy of 544.0 mJ. © 2013 Elsevier Ltd. All rights reserved.

  3. Potential serum biomarkers from a metabolomics study of autism

    PubMed Central

    Wang, Han; Liang, Shuang; Wang, Maoqing; Gao, Jingquan; Sun, Caihong; Wang, Jia; Xia, Wei; Wu, Shiying; Sumner, Susan J.; Zhang, Fengyu; Sun, Changhao; Wu, Lijie

    2016-01-01

    Background Early detection and diagnosis are very important for autism. Current diagnosis of autism relies mainly on observational questionnaires and interview tools that may involve great variability. We performed a metabolomics analysis of serum to identify potential biomarkers for the early diagnosis and clinical evaluation of autism. Methods We analyzed a discovery cohort of patients with autism and participants without autism in the Chinese Han population using ultra-performance liquid chromatography quadrupole time-of-flight tandem mass spectrometry (UPLC/Q-TOF MS/MS) to detect metabolic changes in serum associated with autism. The potential metabolite candidates for biomarkers were individually validated in an additional independent cohort of cases and controls. We built a multiple logistic regression model to evaluate the validated biomarkers. Results We included 73 patients and 63 controls in the discovery cohort and 100 cases and 100 controls in the validation cohort. Metabolomic analysis of serum in the discovery stage identified 17 metabolites, 11 of which were validated in an independent cohort. A multiple logistic regression model built on the 11 validated metabolites fit well in both cohorts. The model consistently showed that autism was associated with 2 particular metabolites: sphingosine 1-phosphate and docosahexaenoic acid. Limitations While autism is diagnosed predominantly in boys, we were unable to perform the analysis by sex owing to difficulty recruiting enough female patients. Other limitations include the need to perform test–retest assessment within the same individual and the relatively small sample size. Conclusion Two metabolites have potential as biomarkers for the clinical diagnosis and evaluation of autism. PMID:26395811

  4. Inter-Disciplinary Collaboration in Support of the Post-Standby TREAT Mission

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeHart, Mark; Baker, Benjamin; Ortensi, Javier

    Although analysis methods have advanced significantly in the last two decades, high-fidelity multi-physics methods for reactor systems have been under development for only a few years and are not yet mature or deployed. Furthermore, very few methods provide the ability to simulate rapid transients in three dimensions. Data for validation of advanced time-dependent multi-physics are sparse; at TREAT, historical data were not collected for the purpose of validating three-dimensional methods, let alone multi-physics simulations. Existing data continue to be collected to attempt to simulate the behavior of experiments and calibration transients, but they will be insufficient for the complete validation of analysis methods used for TREAT transient simulations. Hence, a 2018 restart will most likely occur without the direct application of advanced modeling and simulation methods. At present, the INL modeling and simulation team plans to work with TREAT operations staff in performing reactor simulations with MAMMOTH, in parallel with the software packages currently being used in preparation for core restart (e.g., MCNP5, RELAP5, ABAQUS). The TREAT team has also requested specific measurements to be performed during startup testing, currently scheduled to run from February to August of 2018. These startup measurements will be crucial in validating the new analysis methods in preparation for ultimate application to TREAT operations and experiment design. This document describes the collaboration between modeling and simulation staff and the restart, operations, instrumentation, and experiment development teams needed to interact effectively and achieve successful validation work during restart testing.

  5. Thermal analysis of combinatorial solid geometry models using SINDA

    NASA Technical Reports Server (NTRS)

    Gerencser, Diane; Radke, George; Introne, Rob; Klosterman, John; Miklosovic, Dave

    1993-01-01

    Algorithms have been developed using Monte Carlo techniques to determine the thermal network parameters necessary to perform a finite difference analysis on Combinatorial Solid Geometry (CSG) models. Orbital and laser fluxes as well as internal heat generation are modeled to facilitate satellite modeling. The results of the thermal calculations are used to model the infrared (IR) images of targets and assess target vulnerability. Sample analyses and validation results are presented to demonstrate the code's products.

  6. Perspectives on the Validity of the Thinking Styles Inventories

    ERIC Educational Resources Information Center

    Berding, Florian; Masemann, Maike; Rebmann, Karin; Paechter, Manuela

    2016-01-01

    The Thinking Styles Inventories (TSI) are questionnaires for assessing individual preferences in constructing knowledge. This paper identifies several problems concerning their validity, which range from an inadequate use of factor analysis, to missing information on the measurement model, to findings indicating a low discrimination between the…

  7. Software development predictors, error analysis, reliability models and software metric analysis

    NASA Technical Reports Server (NTRS)

    Basili, Victor

    1983-01-01

    The use of dynamic characteristics as predictors for software development was studied. It was found that there are some significant factors that could be useful as predictors. From a study on software errors and complexity, it was shown that meaningful results can be obtained which allow insight into software traits and the environment in which it is developed. Reliability models were studied. The research included the field of program testing because the validity of some reliability models depends on the answers to some unanswered questions about testing. In studying software metrics, data collected from seven software engineering laboratory (FORTRAN) projects were examined and three effort reporting accuracy checks were applied to demonstrate the need to validate a data base. Results are discussed.

  8. Analysis of Whole-Sky Imager Data to Determine the Validity of PCFLOS models

    DTIC Science & Technology

    1992-12-01

    …included in the data sample. Data arrangement for an r x c contingency table. ARIMA models estimated for each… satellites. This model uses the multidimensional Boehm Sawtooth Wave Model to establish climatic probabilities through repetitive simulations of… analysis techniques to develop an ARIMA model for each direction at the Columbia and Kirtland sites. Then, the models can be compared and analyzed to…

  9. Perception of competence in middle school physical education: instrument development and validation.

    PubMed

    Scrabis-Fletcher, Kristin; Silverman, Stephen

    2010-03-01

    Perception of Competence (POC) has been studied extensively in physical activity (PA) research, with similar instruments adapted for physical education (PE) research. Such instruments do not account for the unique PE learning environment. Therefore, an instrument was developed and the scores validated to measure POC in middle school PE. A multiphase design was used, consisting of an intensive theoretical review, elicitation study, prepilot study, pilot study, content validation study, and final validation study (N=1281). Data analysis included a multistep iterative process to identify the best model fit. A three-factor model for POC was tested and resulted in root mean square error of approximation = .09, root mean square residual = .07, goodness-of-fit index = .90, and adjusted goodness-of-fit index = .86, values in the acceptable range (Hu & Bentler, 1999). A two-factor model was also tested and resulted in a good fit (two-factor fit index values = .05, .03, .98, .97, respectively). The results of this study suggest that an instrument using a three- or two-factor model provides reliable and valid scores of POC measurement in middle school PE.

  10. High Fidelity System Simulation of Multiple Components in Support of the UEET Program

    NASA Technical Reports Server (NTRS)

    Plybon, Ronald C.; VanDeWall, Allan; Sampath, Rajiv; Balasubramaniam, Mahadevan; Mallina, Ramakrishna; Irani, Rohinton

    2006-01-01

    The High Fidelity System Simulation effort has addressed various important objectives to enable additional capability within the NPSS framework. The scope emphasized the High Pressure Turbine and High Pressure Compressor components. Initial effort was directed at developing and validating an intermediate-fidelity NPSS model using PD geometry, and was extended to a high-fidelity NPSS model by overlaying detailed geometry to validate CFD against rig data. Both "feed-forward" and "feedback" approaches to analysis zooming were employed to enable system simulation capability in NPSS. These approaches have different benefits and applicability. For specific applications, "feedback" zooming allows information from the high-fidelity analysis to flow up and update the NPSS model results by forcing the NPSS solver to converge to the high-fidelity analysis predictions. This approach is effective in improving the accuracy of the NPSS model; however, it can only be used in circumstances where there is a clear physics-based strategy for flowing the high-fidelity analysis results up to the NPSS system model. The "feed-forward" zooming approach is more broadly useful for enabling detailed analysis at early stages of design for a specified set of critical operating points and using these analysis results to drive design decisions early in the development process.

  11. Machine Learning Algorithms Outperform Conventional Regression Models in Predicting Development of Hepatocellular Carcinoma

    PubMed Central

    Singal, Amit G.; Mukherjee, Ashin; Elmunzer, B. Joseph; Higgins, Peter DR; Lok, Anna S.; Zhu, Ji; Marrero, Jorge A; Waljee, Akbar K

    2015-01-01

    Background Predictive models for hepatocellular carcinoma (HCC) have been limited by modest accuracy and lack of validation. Machine learning algorithms offer a novel methodology, which may improve HCC risk prognostication among patients with cirrhosis. Our study's aim was to develop and compare predictive models for HCC development among cirrhotic patients, using conventional regression analysis and machine learning algorithms. Methods We enrolled 442 patients with Child A or B cirrhosis at the University of Michigan between January 2004 and September 2006 (UM cohort) and prospectively followed them until HCC development, liver transplantation, death, or study termination. Regression analysis and machine learning algorithms were used to construct predictive models for HCC development, which were tested on an independent validation cohort from the Hepatitis C Antiviral Long-term Treatment against Cirrhosis (HALT-C) Trial. Both models were also compared to the previously published HALT-C model. Discrimination was assessed using receiver operating characteristic curve analysis and diagnostic accuracy was assessed with net reclassification improvement and integrated discrimination improvement statistics. Results After a median follow-up of 3.5 years, 41 patients developed HCC. The UM regression model had a c-statistic of 0.61 (95%CI 0.56-0.67), whereas the machine learning algorithm had a c-statistic of 0.64 (95%CI 0.60-0.69) in the validation cohort. The machine learning algorithm had significantly better diagnostic accuracy as assessed by net reclassification improvement (p<0.001) and integrated discrimination improvement (p=0.04). The HALT-C model had a c-statistic of 0.60 (95%CI 0.50-0.70) in the validation cohort and was outperformed by the machine learning algorithm (p=0.047). Conclusion Machine learning algorithms improve the accuracy of risk stratifying patients with cirrhosis and can be used to accurately identify patients at high-risk for developing HCC. PMID:24169273
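
    The c-statistic used to compare these models is the probability that a randomly chosen case receives a higher predicted risk than a randomly chosen control (ties counted as one half). A small self-contained sketch of computing it directly from scores; the scores and labels are invented for illustration:

```python
def c_statistic(scores, labels):
    """c-statistic (ROC AUC): fraction of case/control pairs in which
    the case scores higher; ties contribute 0.5."""
    cases = [s for s, y in zip(scores, labels) if y == 1]
    controls = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((c > d) + 0.5 * (c == d) for c in cases for d in controls)
    return wins / (len(cases) * len(controls))

# Hypothetical risk scores: label 1 = developed HCC, 0 = did not
auc = c_statistic([0.9, 0.4, 0.7, 0.6, 0.2, 0.5], [1, 1, 1, 0, 0, 0])
```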

  12. Analytical Formulation for Sizing and Estimating the Dimensions and Weight of Wind Turbine Hub and Drivetrain Components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Y.; Parsons, T.; King, R.

    This report summarizes the theory, verification, and validation of a new sizing tool for wind turbine drivetrain components, the Drivetrain Systems Engineering (DriveSE) tool. DriveSE calculates the dimensions and mass properties of the hub, main shaft, main bearing(s), gearbox, bedplate, transformer (if up-tower), and yaw system. The level of fidelity for each component varies depending on whether semiempirical parametric or physics-based models are used. The physics-based models have internal iteration schemes based on system constraints and design criteria. Every model is validated against available industry data or finite-element analysis. The verification and validation results show that the models reasonably capture the primary drivers for the sizing and design of major drivetrain components.

  13. Validation of a mixture-averaged thermal diffusion model for premixed lean hydrogen flames

    NASA Astrophysics Data System (ADS)

    Schlup, Jason; Blanquart, Guillaume

    2018-03-01

    The mixture-averaged thermal diffusion model originally proposed by Chapman and Cowling is validated using multiple flame configurations. Simulations using detailed hydrogen chemistry are done on one-, two-, and three-dimensional flames. The analysis spans flat and stretched, steady and unsteady, and laminar and turbulent flames. Quantitative and qualitative results using the thermal diffusion model compare very well with the more complex multicomponent diffusion model. Comparisons are made using flame speeds, surface areas, species profiles, and chemical source terms. Once validated, this model is applied to three-dimensional laminar and turbulent flames. For these cases, thermal diffusion causes an increase in the propagation speed of the flames as well as increased product chemical source terms in regions of high positive curvature. The results illustrate the necessity for including thermal diffusion, and the accuracy and computational efficiency of the mixture-averaged thermal diffusion model.

  14. DEVELOPMENT AND VALIDATION OF A MULTIFIELD MODEL OF CHURN-TURBULENT GAS/LIQUID FLOWS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elena A. Tselishcheva; Steven P. Antal; Michael Z. Podowski

    The accuracy of numerical predictions for gas/liquid two-phase flows using Computational Multiphase Fluid Dynamics (CMFD) methods strongly depends on the formulation of models governing the interaction between the continuous liquid field and bubbles of different sizes. The purpose of this paper is to develop, test, and validate a multifield model of adiabatic gas/liquid flows at intermediate gas concentrations (e.g., the churn-turbulent flow regime), in which multiple-size bubbles are divided into a specified number of groups, each representing a prescribed range of sizes. The proposed modeling concept uses transport equations for the continuous liquid field and for each bubble field. The overall model has been implemented in the NPHASE-CMFD computer code. The results of NPHASE-CMFD simulations have been validated against the experimental data from the TOPFLOW test facility. Also, a parametric analysis of the effect of various modeling assumptions has been performed.

  15. An Integrated Approach Linking Process to Structural Modeling With Microstructural Characterization for Injection-Molded Long-Fiber Thermoplastics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Ba Nghiep; Bapanapalli, Satish K.; Smith, Mark T.

    2008-09-01

    The objective of our work is to enable the optimum design of lightweight automotive structural components using injection-molded long-fiber thermoplastics (LFTs). To this end, an integrated approach that links process modeling to structural analysis, with experimental microstructural characterization and validation, is developed. First, process models for LFTs are developed and implemented into processing codes (e.g., ORIENT, Moldflow) to predict the microstructure of the as-formed composite (i.e., fiber length and orientation distributions). In parallel, characterization and testing methods are developed to obtain the necessary microstructural data to validate process modeling predictions. Second, the predicted LFT composite microstructure is imported into a structural finite element analysis in ABAQUS to determine the response of the as-formed composite to given boundary conditions. At this stage, constitutive models accounting for the composite microstructure are developed to predict various types of behavior (i.e., thermoelastic, viscoelastic, elastic-plastic, damage, fatigue, and impact) of LFTs. Experimental methods are also developed to determine material parameters and to validate the constitutive models. Such a process-linked structural modeling approach allows an LFT composite structure to be designed with confidence through numerical simulations. Some recent results of our collaborative research will be illustrated to show the usefulness and applications of this integrated approach.

  16. Factors affecting GEBV accuracy with single-step Bayesian models.

    PubMed

    Zhou, Lei; Mrode, Raphael; Zhang, Shengli; Zhang, Qin; Li, Bugao; Liu, Jian-Feng

    2018-01-01

    A single-step approach to obtaining genomic predictions was first proposed in 2009. Many studies have investigated the components of GEBV accuracy in genomic selection. However, it is still unclear how the population structure and the relationships between training and validation populations influence GEBV accuracy in single-step analysis. Here, we explored the components of GEBV accuracy in single-step Bayesian analysis with a simulation study. Three scenarios with various numbers of QTL (5, 50, and 500) were simulated. Three models were implemented to analyze the simulated data: single-step genomic best linear unbiased prediction (SS-GBLUP), single-step BayesA (SS-BayesA), and single-step BayesB (SS-BayesB). According to our results, GEBV accuracy was influenced by the relationships between the training and validation populations more strongly for ungenotyped animals than for genotyped animals. SS-BayesA/BayesB showed an obvious advantage over SS-GBLUP in the 5- and 50-QTL scenarios, while SS-BayesB obtained the lowest accuracy in the 500-QTL scenario. The SS-BayesA model was the most efficient and robust across all QTL scenarios. Generally, both the relationships between training and validation populations and the LD between markers and QTL contributed to GEBV accuracy in the single-step analysis, and the advantages of single-step Bayesian models were more apparent when the trait was controlled by fewer QTL.

  17. Validation of the Epworth Sleepiness Scale for Children and Adolescents using Rasch analysis.

    PubMed

    Janssen, Kitty C; Phillipson, Sivanes; O'Connor, Justen; Johns, Murray W

    2017-05-01

    A validated measure of daytime sleepiness for adolescents is needed to better explore emerging relationships between sleepiness and the mental and physical health of adolescents. The Epworth Sleepiness Scale (ESS) is a widely used scale for daytime sleepiness in adults but contains references to alcohol and driving. The Epworth Sleepiness Scale for Children and Adolescents (ESS-CHAD) has been proposed as the official modified version of the ESS for children and adolescents. This study describes the psychometric analysis of the ESS-CHAD as a measure of daytime sleepiness for adolescents. The ESS-CHAD was completed by 297 adolescents, 12-18 years old, from two independent schools in Victoria, Australia. Exploratory factor analysis and Rasch analysis were conducted to determine the validity of the scale. Exploratory factor analysis and Rasch analysis indicated that the ESS-CHAD has internal validity and a unidimensional structure with good model fit. Rasch analysis of four subgroups based on gender and year-level was consistent with the overall results. The results were consistent with published ESS results, which strongly indicates that the changes to the scale do not affect the scale's capacity to measure daytime sleepiness. It is concluded that the ESS-CHAD is a reliable and internally valid measure of daytime sleepiness in adolescents 12-18 years old. Further studies are needed to establish the internal validity of the ESS-CHAD for children under 12 years, and to establish external validity and accurate cut-off points for children and adolescents. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Joint use of over- and under-sampling techniques and cross-validation for the development and assessment of prediction models.

    PubMed

    Blagus, Rok; Lusa, Lara

    2015-11-04

    Prediction models are used in clinical research to develop rules that can be used to accurately predict the outcome of the patients based on some of their characteristics. They represent a valuable tool in the decision making process of clinicians and health policy makers, as they enable them to estimate the probability that patients have or will develop a disease, will respond to a treatment, or that their disease will recur. The interest devoted to prediction models in the biomedical community has been growing in the last few years. Often the data used to develop the prediction models are class-imbalanced as only few patients experience the event (and therefore belong to minority class). Prediction models developed using class-imbalanced data tend to achieve sub-optimal predictive accuracy in the minority class. This problem can be diminished by using sampling techniques aimed at balancing the class distribution. These techniques include under- and oversampling, where a fraction of the majority class samples are retained in the analysis or new samples from the minority class are generated. The correct assessment of how the prediction model is likely to perform on independent data is of crucial importance; in the absence of an independent data set, cross-validation is normally used. While the importance of correct cross-validation is well documented in the biomedical literature, the challenges posed by the joint use of sampling techniques and cross-validation have not been addressed. We show that care must be taken to ensure that cross-validation is performed correctly on sampled data, and that the risk of overestimating the predictive accuracy is greater when oversampling techniques are used. Examples based on the re-analysis of real datasets and simulation studies are provided. 
We identify some results from the biomedical literature where the incorrect cross-validation was performed, where we expect that the performance of oversampling techniques was heavily overestimated.
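
    The cross-validation pitfall described above can be made concrete with a toy experiment: if random oversampling is applied before the data are split into folds, duplicates of the same minority-class sample can land in both the training and the test fold, so the test fold is no longer independent. A stdlib sketch in which the dataset, fold count, and sampling scheme are all hypothetical:

```python
import random

random.seed(0)

# Hypothetical class-imbalanced dataset of (patient_id, label): 10% minority
data = [(i, 1 if i < 10 else 0) for i in range(100)]

def oversample(rows):
    """Random oversampling: duplicate minority-class rows until balanced."""
    pos = [r for r in rows if r[1] == 1]
    neg = [r for r in rows if r[1] == 0]
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    extra = [random.choice(minority) for _ in range(len(majority) - len(minority))]
    return rows + extra

def make_folds(rows, k=5):
    rows = rows[:]
    random.shuffle(rows)
    return [rows[i::k] for i in range(k)]

# WRONG order: oversample the whole dataset, then split into folds.
folds = make_folds(oversample(data))
test_ids = {pid for pid, _ in folds[0]}
train_ids = {pid for f in folds[1:] for pid, _ in f}
leaked = test_ids & train_ids  # duplicated patients on both sides of the split

# RIGHT order: split first, then oversample only the training folds.
folds = make_folds(data)
test_ids = {pid for pid, _ in folds[0]}
train_ids = {pid for pid, _ in oversample([r for f in folds[1:] for r in f])}
clean_overlap = test_ids & train_ids  # empty: test fold stays independent
```

Evaluating a model on the `leaked` split rewards memorizing duplicated minority samples, which is exactly the mechanism by which oversampling before cross-validation overestimates predictive accuracy.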

  19. Development of Flight-Test Performance Estimation Techniques for Small Unmanned Aerial Systems

    NASA Astrophysics Data System (ADS)

    McCrink, Matthew Henry

    This dissertation provides a flight-testing framework for assessing the performance of fixed-wing, small-scale unmanned aerial systems (sUAS) by leveraging sub-system models of components unique to these vehicles. The development of the sub-system models, and their links to broader impacts on sUAS performance, is the key contribution of this work. The sub-system modeling and analysis focuses on the vehicle's propulsion, navigation and guidance, and airframe components. Quantification of the uncertainty in the vehicle's power available and control states is essential for assessing the validity of both the methods and results obtained from flight-tests. Therefore, detailed propulsion and navigation system analyses are presented to validate the flight testing methodology. Propulsion system analysis required the development of an analytic model of the propeller in order to predict the power available over a range of flight conditions. The model is based on the blade element momentum (BEM) method. Additional corrections are added to the basic model in order to capture the Reynolds-dependent scale effects unique to sUAS. The model was experimentally validated using a ground based testing apparatus. The BEM predictions and experimental analysis allow for a parameterized model relating the electrical power, measurable during flight, to the power available required for vehicle performance analysis. Navigation system details are presented with a specific focus on the sensors used for state estimation, and the resulting uncertainty in vehicle state. Uncertainty quantification is provided by detailed calibration techniques validated using quasi-static and hardware-in-the-loop (HIL) ground based testing. The HIL methods introduced use a soft real-time flight simulator to provide inertial quality data for assessing overall system performance. 
    Using this tool, the uncertainty in vehicle state estimation based on a range of sensors and vehicle operational environments is presented. The propulsion and navigation system models are used to evaluate flight-testing methods for evaluating fixed-wing sUAS performance. A brief airframe analysis is presented to provide a foundation for assessing the efficacy of the flight-test methods. The flight-testing presented in this work is focused on validating the aircraft drag polar, zero-lift drag coefficient, and span efficiency factor. Three methods are detailed and evaluated for estimating these design parameters. Specific focus is placed on the influence of propulsion and navigation system uncertainty on the resulting performance data. Performance estimates are used in conjunction with the propulsion model to estimate the impact of sensor and measurement uncertainty on the endurance and range of a fixed-wing sUAS. Endurance and range results for a simplistic power available model are compared to the Reynolds-dependent model presented in this work. Additional parameter sensitivity analysis related to state estimation uncertainties encountered in flight-testing is presented. Results from these analyses indicate that the sub-system models introduced in this work are of first-order importance, on the order of a 5-10% change in range and endurance, in assessing the performance of a fixed-wing sUAS.

  20. Validation of periodontitis screening model using sociodemographic, systemic, and molecular information in a Korean population.

    PubMed

    Kim, Hyun-Duck; Sukhbaatar, Munkhzaya; Shin, Myungseop; Ahn, Yoo-Been; Yoo, Wook-Sung

    2014-12-01

    This study aims to evaluate and validate a periodontitis screening model that includes sociodemographic, metabolic syndrome (MetS), and molecular information, including gingival crevicular fluid (GCF), matrix metalloproteinase (MMP), and blood cytokines. The authors selected 506 participants from the Shiwha-Banwol cohort: 322 participants from the 2005 cohort for deriving the screening model and 184 participants from the 2007 cohort for its validation. Periodontitis was assessed by dentists using the community periodontal index. Interleukin (IL)-6, IL-8, and tumor necrosis factor-α in blood and MMP-8, -9, and -13 in GCF were assayed using enzyme-linked immunosorbent assay. MetS was assessed by physicians using physical examination and blood laboratory data. Information about age, sex, income, smoking, and drinking was obtained by interview. Logistic regression analysis was applied to finalize the best-fitting model and validate the model using sensitivity, specificity, and c-statistics. The derived model for periodontitis screening had a sensitivity of 0.73, specificity of 0.85, and c-statistic of 0.86 (P <0.001); those of the validated model were 0.64, 0.91, and 0.83 (P <0.001), respectively. The model that included age, sex, income, smoking, drinking, and blood and GCF biomarkers could be useful in screening for periodontitis. A future prospective study is indicated for evaluating this model's ability to predict the occurrence of periodontitis.
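
    The sensitivity and specificity figures reported above come straight from confusion-matrix counts on the validation cohort. A minimal sketch of the two screening metrics; the labels and predictions below are invented, not the study's data:

```python
def sens_spec(y_true, y_pred):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

# Invented screening outcomes: 1 = periodontitis, 0 = healthy
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1]
sensitivity, specificity = sens_spec(y_true, y_pred)
```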

  1. A kinematic model to assess spinal motion during walking.

    PubMed

    Konz, Regina J; Fatone, Stefania; Stine, Rebecca L; Ganju, Aruna; Gard, Steven A; Ondra, Stephen L

    2006-11-15

    A 3-dimensional multi-segment kinematic spine model was developed for noninvasive analysis of spinal motion during walking. Preliminary data from able-bodied ambulators were collected and analyzed using the model. Neither the spine's role during walking nor the effect of surgical spinal stabilization on gait is fully understood. Typically, gait analysis models disregard the spine entirely or regard it as a single rigid structure. Data on regional spinal movements, in conjunction with lower limb data, associated with walking are scarce. KinTrak software (Motion Analysis Corp., Santa Rosa, CA) was used to create a biomechanical model for analysis of 3-dimensional regional spinal movements. Measuring known angles from a mechanical model and comparing them to the calculated angles validated the kinematic model. Spine motion data were collected from 10 able-bodied adults walking at 5 self-selected speeds. These results were compared to data reported in the literature. The uniaxial angles measured on the mechanical model were within 5 degrees of the calculated kinematic model angles, and the coupled angles were within 2 degrees. Regional spine kinematics from able-bodied subjects calculated with this model compared well to data reported by other authors. A multi-segment kinematic spine model has been developed and validated for analysis of spinal motion during walking. By understanding the spine's role during ambulation and the cause-and-effect relationship between spine motion and lower limb motion, preoperative planning may be augmented to restore normal alignment and balance with minimal negative effects on walking.

  2. Developing Guided Inquiry-Based Student Lab Worksheet for Laboratory Knowledge Course

    NASA Astrophysics Data System (ADS)

    Rahmi, Y. L.; Novriyanti, E.; Ardi, A.; Rifandi, R.

    2018-04-01

    The course of laboratory knowledge is an introductory course that prepares biology students for the various practical courses held in the biology laboratory. At the time of this study, the learning activities of the laboratory knowledge course in the Biology Department, Universitas Negeri Padang, were not yet supported by learning media such as a student lab worksheet. The guided inquiry learning model is one of the learning models that can be integrated into laboratory activity. The study aimed to produce a student lab worksheet based on guided inquiry for the laboratory knowledge course and to determine the validity of the lab worksheet. The research was conducted using a research and development (R&D) model. The instruments used for data collection in this research were a questionnaire for student needs analysis and a questionnaire to measure the student lab worksheet's validity. Quantitative data were obtained from several validators, consisting of three lecturers. The validity score of the student lab worksheet was 94.18, which can be categorized as very good.

  3. VALIDITY OF A TWO-DIMENSIONAL MODEL FOR VARIABLE-DENSITY HYDRODYNAMIC CIRCULATION

    EPA Science Inventory

    A three-dimensional model of temperatures and currents has been formulated to assist in the analysis and interpretation of the dynamics of stratified lakes. In this model, nonlinear eddy coefficients for viscosity and conductivities are included. A two-dimensional model (one vert...

  4. Dependency of the Reynolds number on the water flow through the perforated tube

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Závodný, Zdenko, E-mail: zdenko.zavodny@stuba.sk; Bereznai, Jozef, E-mail: jozef.bereznai@stuba.sk; Urban, František

Safe and effective loading of nuclear reactor fuel assemblies demands qualitative and quantitative analysis of the relationship between the coolant temperature at the fuel assembly outlet, measured by the thermocouple, and the mean coolant temperature profile in the thermocouple plane. It is not possible to perform the analysis directly in the reactor, so it is carried out using measurements on a physical model together with CFD models of the fuel assembly coolant flow. The CFD models have to be verified and validated against the temperature and velocity profiles obtained from measurements of the cooling water flowing in the physical model of the fuel assembly. A simplified physical model with a perforated central tube and its validated CFD model serve as the basis for the design of the second physical model of the fuel assembly of the VVER 440 nuclear reactor. This physical model will be manufactured and installed in the laboratory of the Institute of Energy Machines, Faculty of Mechanical Engineering, Slovak University of Technology in Bratislava.

  5. 6DOF Testing of the SLS Inertial Navigation Unit

    NASA Technical Reports Server (NTRS)

    Geohagan, Kevin W.; Bernard, William P.; Oliver, T. Emerson; Strickland, Dennis J.; Leggett, Jared O.

    2018-01-01

    The Navigation System on the NASA Space Launch System (SLS) Block 1 vehicle performs initial alignment of the Inertial Navigation System (INS) navigation frame through gyrocompass alignment (GCA). In lieu of direct testing of GCA accuracy in support of requirement verification, the SLS Navigation Team proposed and conducted an engineering test to, among other things, validate the GCA performance and overall behavior of the SLS INS model through comparison with test data. This paper will detail dynamic hardware testing of the SLS INS, conducted by the SLS Navigation Team at Marshall Space Flight Center's 6DOF Table Facility, in support of GCA performance characterization and INS model validation. A 6-DOF motion platform was used to produce 6DOF pad twist and sway dynamics while a simulated SLS flight computer communicated with the INS. Tests conducted include an evaluation of GCA algorithm robustness to increasingly dynamic pad environments, an examination of GCA algorithm stability and accuracy over long durations, and a long-duration static test to gather enough data for Allan Variance analysis. Test setup, execution, and data analysis will be discussed, including analysis performed in support of SLS INS model validation.
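As a rough illustration of the Allan variance analysis mentioned above, the sketch below computes a non-overlapping Allan variance from a record of inertial rate samples. This is a minimal sketch, not the SLS team's analysis code; the function name and interface are invented for this example.

```python
import numpy as np

def allan_variance(omega, fs, m):
    """Non-overlapping Allan variance of rate samples `omega` sampled at
    `fs` Hz, for a cluster size of `m` samples. Returns (tau, avar)."""
    n = len(omega) // m
    # average each cluster of m consecutive samples
    means = omega[: n * m].reshape(n, m).mean(axis=1)
    # Allan variance: half the mean squared difference of successive cluster means
    avar = 0.5 * np.mean(np.diff(means) ** 2)
    return m / fs, avar
```

Sweeping `m` over a range of cluster sizes and plotting the resulting Allan deviation against averaging time `tau` is the usual way to separate white noise, bias instability, and random walk contributions in long static records like the one described above.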

  6. 3D-QSAR studies on the inhibitory activity of trimethoprim analogues against Escherichia coli dihydrofolate reductase.

    PubMed

    Vijayaraj, Ramadoss; Devi, Mekapothula Lakshmi Vasavi; Subramanian, Venkatesan; Chattaraj, Pratim Kumar

    2012-06-01

A three-dimensional quantitative structure-activity relationship (3D-QSAR) study has been carried out on the Escherichia coli DHFR inhibitors 2,4-diamino-5-(substituted-benzyl)pyrimidine derivatives to understand the structural features responsible for their improved potency. To construct highly predictive 3D-QSAR models, comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA) methods were used. The resulting models show statistically significant cross-validated (r²cv) and non-cross-validated (r²ncv) correlation coefficients. The final 3D-QSAR models were validated using structurally diverse test set compounds. Analysis of the contour maps generated from the CoMFA and CoMSIA methods reveals that substitution of electronegative groups at the first and second positions, along with an electropositive group at the third position of the R2 substituent, significantly increases the potency of the derivatives. The results obtained from the CoMFA and CoMSIA study delineate the substituents on the trimethoprim analogues responsible for the enhanced potency and also provide valuable directions for the design of new trimethoprim analogues with improved affinity. © 2012 John Wiley & Sons A/S.

  7. Serum and urine metabolomics study reveals a distinct diagnostic model for cancer cachexia

    PubMed Central

    Yang, Quan‐Jun; Zhao, Jiang‐Rong; Hao, Juan; Li, Bin; Huo, Yan; Han, Yong‐Long; Wan, Li‐Li; Li, Jie; Huang, Jinlu; Lu, Jin

    2017-01-01

    Abstract Background Cachexia is a multifactorial metabolic syndrome with high morbidity and mortality in patients with advanced cancer. The diagnosis of cancer cachexia depends on objective measures of clinical symptoms and a history of weight loss, which lag behind disease progression and have limited utility for the early diagnosis of cancer cachexia. In this study, we performed a nuclear magnetic resonance‐based metabolomics analysis to reveal the metabolic profile of cancer cachexia and establish a diagnostic model. Methods Eighty‐four cancer cachexia patients, 33 pre‐cachectic patients, 105 weight‐stable cancer patients, and 74 healthy controls were included in the training and validation sets. Comparative analysis was used to elucidate the distinct metabolites of cancer cachexia, while metabolic pathway analysis was employed to elucidate reprogramming pathways. Random forest, logistic regression, and receiver operating characteristic analyses were used to select and validate the biomarker metabolites and establish a diagnostic model. Results Forty‐six cancer cachexia patients, 22 pre‐cachectic patients, 68 weight‐stable cancer patients, and 48 healthy controls were included in the training set, and 38 cancer cachexia patients, 11 pre‐cachectic patients, 37 weight‐stable cancer patients, and 26 healthy controls were included in the validation set. All four groups were age‐matched and sex‐matched in the training set. Metabolomics analysis showed a clear separation of the four groups. Overall, 45 metabolites and 18 metabolic pathways were associated with cancer cachexia. Using random forest analysis, 15 of these metabolites were identified as highly discriminating between disease states. Logistic regression and receiver operating characteristic analyses were used to create a distinct diagnostic model with an area under the curve of 0.991 based on three metabolites. 
The diagnostic equation was Logit(P) = −400.53 − 481.88 × log(Carnosine) − 239.02 × log(Leucine) + 383.92 × log(Phenyl acetate), and the model achieved 94.64% accuracy in the validation set. Conclusions This metabolomics study revealed a distinct metabolic profile of cancer cachexia and established and validated a diagnostic model. This research provided a feasible diagnostic tool for identifying at-risk populations through the detection of serum metabolites. PMID:29152916
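The diagnostic equation above can be evaluated directly for a new patient. The sketch below is a minimal illustration that assumes base-10 logarithms (the abstract does not state the base) and metabolite concentrations in the authors' units; the function names are invented for this example.

```python
import math

def _sigmoid(x):
    """Numerically stable logistic function."""
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    z = math.exp(x)
    return z / (1.0 + z)

def cachexia_logit(carnosine, leucine, phenylacetate):
    """Logit(P) from the published coefficients (base-10 log assumed)."""
    return (-400.53
            - 481.88 * math.log10(carnosine)
            - 239.02 * math.log10(leucine)
            + 383.92 * math.log10(phenylacetate))

def cachexia_probability(carnosine, leucine, phenylacetate):
    """Predicted probability of cancer cachexia from the three metabolites."""
    return _sigmoid(cachexia_logit(carnosine, leucine, phenylacetate))
```

The signs of the coefficients match the reported biology: higher carnosine and leucine lower the predicted probability, while higher phenyl acetate raises it.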

  8. Clinical prediction models for mortality and functional outcome following ischemic stroke: A systematic review and meta-analysis

    PubMed Central

    Crayton, Elise; Wolfe, Charles; Douiri, Abdel

    2018-01-01

Objective We aim to identify and critically appraise clinical prediction models of mortality and function following ischaemic stroke. Methods Electronic databases, reference lists, and citations were searched from inception to September 2015. Studies were selected for inclusion according to pre-specified criteria and critically appraised by independent, blinded reviewers. The discrimination of the prediction models was measured by the area under the receiver operating characteristic curve (c-statistic) in random-effects meta-analysis. Heterogeneity was measured using I2. Appropriate appraisal tools and reporting guidelines were used in this review. Results 31,395 references were screened, of which 109 articles were included in the review. These articles described 66 different predictive risk models. Appraisal identified poor methodological quality and a high risk of bias for most models. However, all models precede the development of reporting guidelines for prediction modelling studies. Generalisability of the models could be improved: fewer than half of the included models have been externally validated (n = 27/66). 152 predictors of mortality and 192 predictors of functional outcome were identified. No studies assessing the ability to improve patient outcomes (model impact studies) were identified. Conclusions Further external validation and model impact studies are required to confirm the utility of existing models in supporting decision-making. Existing models have much potential. Those wishing to predict stroke outcome are advised to build on previous work, updating and adapting validated models to their specific contexts as opposed to designing new ones. PMID:29377923

  9. Validity And Practicality of Experiment Integrated Guided Inquiry-Based Module on Topic of Colloidal Chemistry for Senior High School Learning

    NASA Astrophysics Data System (ADS)

    Andromeda, A.; Lufri; Festiyed; Ellizar, E.; Iryani, I.; Guspatni, G.; Fitri, L.

    2018-04-01

This research and development study aims to produce a valid and practical experiment-integrated guided inquiry-based module on the topic of colloidal chemistry. The 4D instructional design model was selected for this study. A limited trial of the product was conducted at SMAN 7 Padang. The instruments used were validity and practicality questionnaires, and the data were analyzed using the Kappa moment. The Kappa moment for validity was 0.88, indicating a very high degree of validity; the Kappa moments for practicality from students and teachers were 0.89 and 0.95 respectively, indicating a high degree of practicality. Analysis of the modules filled in by students shows that 91.37% of students could correctly answer the critical thinking, exercise, prelab, postlab and worksheet questions asked in the module. These findings indicate that the experiment-integrated guided inquiry-based module on the topic of colloidal chemistry is valid and practical for chemistry learning in senior high school.

  10. Multicenter external validation of two malignancy risk prediction models in patients undergoing 18F-FDG-PET for solitary pulmonary nodule evaluation.

    PubMed

    Perandini, Simone; Soardi, G A; Larici, A R; Del Ciello, A; Rizzardi, G; Solazzo, A; Mancino, L; Zeraj, F; Bernhart, M; Signorini, M; Motton, M; Montemezzi, S

    2017-05-01

To achieve multicentre external validation of the Herder and Bayesian Inference Malignancy Calculator (BIMC) models. Two hundred and fifty-nine solitary pulmonary nodules (SPNs) that underwent 18F-FDG-PET characterization were collected from four major hospitals and included in this multicentre retrospective study. The Herder model was tested on all available lesions (group A). A subgroup of 180 SPNs (group B) was used to provide an unbiased comparison between the Herder and BIMC models. Receiver operating characteristic (ROC) area under the curve (AUC) analysis was performed to assess diagnostic accuracy. Decision analysis was performed by adopting the risk thresholds stated in the British Thoracic Society (BTS) guidelines. The unbiased comparison performed in group B showed a ROC AUC for the Herder model of 0.807 (95 % CI 0.742-0.862) and for the BIMC model of 0.822 (95 % CI 0.758-0.875). Both the Herder and the BIMC models were proven to accurately predict the risk of malignancy when tested on a large multicentre external case series. The BIMC model seems advantageous on the basis of a more favourable decision analysis. • The Herder model showed a ROC AUC of 0.807 on 180 SPNs. • The BIMC model showed a ROC AUC of 0.822 on 180 SPNs. • Decision analysis is more favourable to the BIMC model.
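The ROC AUC figures reported above can be reproduced for any external case series from binary outcomes and model-predicted risks. The sketch below computes the AUC via the equivalent Mann-Whitney statistic; it is a generic illustration, not the authors' analysis code.

```python
import numpy as np

def roc_auc(y_true, scores):
    """ROC AUC as the Mann-Whitney probability that a randomly chosen
    malignant nodule (y_true == 1) scores higher than a randomly chosen
    benign one (y_true == 0), with ties counting half."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    # all pairwise comparisons of positive vs negative scores
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

An AUC of 0.5 means the model ranks cases no better than chance; values around 0.8, as reported for both models above, indicate good discrimination on the external series.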

  11. Application of the DART Code for the Assessment of Advanced Fuel Behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rest, J.; Totev, T.

    2007-07-01

The Dispersion Analysis Research Tool (DART) code is a dispersion fuel analysis code that contains mechanistically based fuel and reaction-product swelling models, a one-dimensional heat transfer analysis, and mechanical deformation models. DART has been used to simulate the irradiation behavior of uranium oxide, uranium silicide, and uranium-molybdenum aluminum dispersion fuels, as well as their monolithic counterparts. The thermal-mechanical DART code has been validated against RERTR tests performed in the ATR for irradiation data on interaction thickness; fuel, matrix, and reaction product volume fractions; and plate thickness changes. The DART fission gas behavior model has been validated against UO2 fission gas release data as well as measured fission gas-bubble size distributions. Here DART is utilized to analyze various aspects of the observed bubble growth in the U-Mo/Al interaction product. (authors)

  12. A systematic review of validated sinus surgery simulators.

    PubMed

    Stew, B; Kao, S S-T; Dharmawardana, N; Ooi, E H

    2018-06-01

Simulation provides a safe and effective opportunity to develop surgical skills. A variety of endoscopic sinus surgery (ESS) simulators has been described in the literature. Validation of these simulators allows for their effective utilisation in training. To conduct a systematic review of the published literature analysing the evidence for validated ESS simulation, Pubmed, Embase, Cochrane and Cinahl were searched from inception of the databases to 11 January 2017. 12,516 articles were retrieved, of which 10,112 were screened following the removal of duplicates. Thirty-eight full-text articles were reviewed after meeting the search criteria. Evidence of face, content, construct, discriminant and predictive validity was extracted. Twenty articles were included in the analysis, describing 12 ESS simulators. Eleven of these simulators had undergone validation: 3 virtual reality, 7 physical bench models and 1 cadaveric simulator. Seven of the simulators were shown to have face validity, 7 had construct validity and 1 had predictive validity. None of the simulators demonstrated discriminant validity. This systematic review demonstrates that a number of ESS simulators have been comprehensively validated. Many of the validation processes, however, lack standardisation in outcome reporting, thus limiting meta-analysis comparison between simulators. © 2017 John Wiley & Sons Ltd.

  13. The 2006 Cape Canaveral Air Force Station Range Reference Atmosphere Model Validation Study and Sensitivity Analysis to the National Aeronautics and Space Administration's Space Shuttle

    NASA Technical Reports Server (NTRS)

    Burns, Lee; Merry, Carl; Decker, Ryan; Harrington, Brian

    2008-01-01

The 2006 Cape Canaveral Air Force Station (CCAFS) Range Reference Atmosphere (RRA) is a statistical model summarizing the wind and thermodynamic atmospheric variability from the surface to 70 km. Launches of the National Aeronautics and Space Administration's (NASA) Space Shuttle from Kennedy Space Center utilize CCAFS RRA data to evaluate environmental constraints on various aspects of the vehicle during ascent. An update to the CCAFS RRA was recently completed. As part of the update, a validation study on the 2006 version was conducted, as well as a comparison analysis of the 2006 version against the existing 1983 CCAFS RRA database. Assessments of the Space Shuttle vehicle ascent profile characteristics were performed to determine the impacts of the updated model on vehicle performance. Details on the model updates and the vehicle sensitivity analyses with the updated model are presented.

  14. Validation of the Integrated Medical Model Using Historical Space Flight Data

    NASA Technical Reports Server (NTRS)

    Kerstman, Eric L.; Minard, Charles G.; FreiredeCarvalho, Mary H.; Walton, Marlei E.; Myers, Jerry G., Jr.; Saile, Lynn G.; Lopez, Vilma; Butler, Douglas J.; Johnson-Throop, Kathy A.

    2010-01-01

The Integrated Medical Model (IMM) utilizes Monte Carlo methodologies to predict the occurrence of medical events, utilization of resources, and clinical outcomes during space flight. Real-world data may be used to demonstrate the accuracy of the model. For this analysis, IMM predictions were compared to data from historical shuttle missions not yet included as model source input. Initial goodness-of-fit testing on International Space Station data suggests that the IMM may overestimate the number of occurrences for three of the 83 medical conditions in the model. The IMM did not underestimate the occurrence of any medical condition. Initial comparisons with shuttle data demonstrate the importance of understanding crew preference (i.e., preferred analgesic) for accurately predicting the utilization of resources. The initial analysis demonstrates the validity of the IMM for its intended use and highlights areas for improvement.

  15. Concurrent and convergent validity of the mobility- and multidimensional-hierarchical disability categorization models with physical performance in community older adults.

    PubMed

    Hu, Ming-Hsia; Yeh, Chih-Jun; Chen, Tou-Rong; Wang, Ching-Yi

    2014-01-01

A valid, time-efficient and easy-to-use instrument is important for busy clinical settings, large-scale surveys, and community screening. The purpose of this study was to validate the mobility hierarchical disability categorization model (an abbreviated model) by investigating its concurrent validity with the multidimensional hierarchical disability categorization model (a comprehensive model) and by triangulating both models with physical performance measures in older adults. 604 community-dwelling older adults aged at least 60 years volunteered to participate. Self-reported function in the mobility, instrumental activities of daily living (IADL) and activities of daily living (ADL) domains was recorded, and disability status was then determined under both the multidimensional and the mobility hierarchical categorization models. The physical performance measures, consisting of grip strength and usual and fastest gait speeds (UGS, FGS), were collected on the same day. The two categorization models showed high correlation (rs = 0.92, p < 0.001) and agreement (kappa = 0.61, p < 0.0001). Physical performance measures demonstrated significantly different group means among the disability subgroups under both categorization models. The results of multiple regression analysis indicated that both models individually explain a similar amount of variance in all physical performance measures, with adjustments for age, sex, and number of comorbidities. Our results indicate that the mobility hierarchical disability categorization model is a valid and time-efficient tool for large surveys or screening use.
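The kappa agreement statistic quoted above can be computed from the paired category assignments produced by the two models. The following is a minimal sketch of Cohen's kappa (the abstract does not specify which kappa variant was used, so the unweighted form is assumed here):

```python
import numpy as np

def cohens_kappa(a, b):
    """Cohen's kappa: chance-corrected agreement between two categorical
    label sequences of equal length."""
    a, b = np.asarray(a), np.asarray(b)
    cats = np.union1d(a, b)
    po = np.mean(a == b)  # observed agreement
    # expected chance agreement from the marginal category frequencies
    pe = sum(np.mean(a == c) * np.mean(b == c) for c in cats)
    return (po - pe) / (1 - pe)
```

Kappa of 1 means perfect agreement, 0 means agreement no better than chance; the 0.61 reported above is conventionally read as substantial agreement between the abbreviated and comprehensive models.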

  16. [State Recognition of Solid Fermentation Process Based on Near Infrared Spectroscopy with Adaboost and Spectral Regression Discriminant Analysis].

    PubMed

    Yu, Shuang; Liu, Guo-hai; Xia, Rong-sheng; Jiang, Hui

    2016-01-01

In order to achieve rapid monitoring of the process state of solid state fermentation (SSF), this study attempted qualitative identification of the process state of SSF of feed protein using the Fourier transform near infrared (FT-NIR) spectroscopy analysis technique. More specifically, FT-NIR spectroscopy combined with an Adaboost-SRDA-NN integrated learning algorithm was used to accurately and rapidly monitor chemical and physical changes in SSF of feed protein without the need for chemical analysis. First, the raw spectra of all 140 fermentation samples were collected with a Fourier transform near infrared spectrometer (Antaris II) and preprocessed using the standard normal variate transformation (SNV) algorithm. Thereafter, the characteristic information of the preprocessed spectra was extracted by spectral regression discriminant analysis (SRDA). Finally, the nearest neighbors (NN) algorithm was selected as the basic classifier, and a state recognition model was built to identify the fermentation samples in the validation set. The SRDA-NN model outperformed two other NN models developed from the feature information of principal component analysis (PCA) and linear discriminant analysis (LDA), achieving a correct recognition rate of 94.28% in the validation set. To further improve the recognition accuracy of the final model, an Adaboost-SRDA-NN ensemble learning algorithm was proposed by integrating the Adaboost and SRDA-NN methods, and this algorithm was used to construct an online monitoring model of the process state of SSF of feed protein.
The prediction performance of the SRDA-NN model was further enhanced by the Adaboost lifting algorithm, and the correct recognition rate of the Adaboost-SRDA-NN model reached 100% in the validation set. The overall results demonstrate that the SRDA algorithm can effectively extract spectral feature information for dimension reduction when calibrating qualitative NIR spectroscopy models, and that the Adaboost lifting algorithm can improve the classification accuracy of the final model. These results provide a research foundation for developing online instruments for monitoring the SSF process.
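Of the processing chain described above, the SNV preprocessing step is straightforward to sketch; the SRDA feature extraction and Adaboost ensemble are not reproduced here. A minimal SNV implementation:

```python
import numpy as np

def snv(spectra):
    """Standard normal variate transformation: each spectrum (row) is
    centred on its own mean and scaled by its own standard deviation,
    removing baseline offset and multiplicative scatter effects before
    feature extraction."""
    spectra = np.asarray(spectra, dtype=float)
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std
```

After SNV, every spectrum has zero mean and unit standard deviation, so downstream discriminant analysis responds to band shape rather than overall intensity.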

  17. Functional Data Analysis Applied to Modeling of Severe Acute Mucositis and Dysphagia Resulting From Head and Neck Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dean, Jamie A., E-mail: jamie.dean@icr.ac.uk; Wong, Kee H.; Gay, Hiram

Purpose: Current normal tissue complication probability modeling using logistic regression suffers from bias and high uncertainty in the presence of highly correlated radiation therapy (RT) dose data. This hinders robust estimates of dose-response associations and, hence, optimal normal tissue–sparing strategies from being elucidated. Using functional data analysis (FDA) to reduce the dimensionality of the dose data could overcome this limitation. Methods and Materials: FDA was applied to modeling of severe acute mucositis and dysphagia resulting from head and neck RT. Functional partial least squares regression (FPLS) and functional principal component analysis were used for dimensionality reduction of the dose-volume histogram data. The reduced dose data were input into functional logistic regression models (functional partial least squares–logistic regression [FPLS-LR] and functional principal component–logistic regression [FPC-LR]) along with clinical data. This approach was compared with penalized logistic regression (PLR) in terms of predictive performance and the significance of treatment covariate–response associations, assessed using bootstrapping. Results: The area under the receiver operating characteristic curve for the PLR, FPC-LR, and FPLS-LR models was 0.65, 0.69, and 0.67, respectively, for mucositis (internal validation) and 0.81, 0.83, and 0.83, respectively, for dysphagia (external validation). The calibration slopes/intercepts for the PLR, FPC-LR, and FPLS-LR models were 1.6/−0.67, 0.45/0.47, and 0.40/0.49, respectively, for mucositis (internal validation) and 2.5/−0.96, 0.79/−0.04, and 0.79/0.00, respectively, for dysphagia (external validation). The bootstrapped odds ratios indicated significant associations between RT dose and severe toxicity in the mucositis and dysphagia FDA models. Cisplatin was significantly associated with severe dysphagia in the FDA models. 
None of the covariates was significantly associated with severe toxicity in the PLR models. Dose levels greater than approximately 1.0 Gy/fraction were most strongly associated with severe acute mucositis and dysphagia in the FDA models. Conclusions: FPLS and functional principal component analysis marginally improved predictive performance compared with PLR and provided robust dose-response associations. FDA is recommended for use in normal tissue complication probability modeling.

  18. Functional Data Analysis Applied to Modeling of Severe Acute Mucositis and Dysphagia Resulting From Head and Neck Radiation Therapy.

    PubMed

    Dean, Jamie A; Wong, Kee H; Gay, Hiram; Welsh, Liam C; Jones, Ann-Britt; Schick, Ulrike; Oh, Jung Hun; Apte, Aditya; Newbold, Kate L; Bhide, Shreerang A; Harrington, Kevin J; Deasy, Joseph O; Nutting, Christopher M; Gulliford, Sarah L

    2016-11-15

    Current normal tissue complication probability modeling using logistic regression suffers from bias and high uncertainty in the presence of highly correlated radiation therapy (RT) dose data. This hinders robust estimates of dose-response associations and, hence, optimal normal tissue-sparing strategies from being elucidated. Using functional data analysis (FDA) to reduce the dimensionality of the dose data could overcome this limitation. FDA was applied to modeling of severe acute mucositis and dysphagia resulting from head and neck RT. Functional partial least squares regression (FPLS) and functional principal component analysis were used for dimensionality reduction of the dose-volume histogram data. The reduced dose data were input into functional logistic regression models (functional partial least squares-logistic regression [FPLS-LR] and functional principal component-logistic regression [FPC-LR]) along with clinical data. This approach was compared with penalized logistic regression (PLR) in terms of predictive performance and the significance of treatment covariate-response associations, assessed using bootstrapping. The area under the receiver operating characteristic curve for the PLR, FPC-LR, and FPLS-LR models was 0.65, 0.69, and 0.67, respectively, for mucositis (internal validation) and 0.81, 0.83, and 0.83, respectively, for dysphagia (external validation). The calibration slopes/intercepts for the PLR, FPC-LR, and FPLS-LR models were 1.6/-0.67, 0.45/0.47, and 0.40/0.49, respectively, for mucositis (internal validation) and 2.5/-0.96, 0.79/-0.04, and 0.79/0.00, respectively, for dysphagia (external validation). The bootstrapped odds ratios indicated significant associations between RT dose and severe toxicity in the mucositis and dysphagia FDA models. Cisplatin was significantly associated with severe dysphagia in the FDA models. None of the covariates was significantly associated with severe toxicity in the PLR models. 
Dose levels greater than approximately 1.0 Gy/fraction were most strongly associated with severe acute mucositis and dysphagia in the FDA models. FPLS and functional principal component analysis marginally improved predictive performance compared with PLR and provided robust dose-response associations. FDA is recommended for use in normal tissue complication probability modeling. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.
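As an illustration of the dimensionality-reduction step described in the two records above, the sketch below projects rows of discretised dose-volume histogram data onto their leading principal components. Ordinary PCA is used here as a simplified stand-in for the paper's functional PCA; the reduced scores would then feed a logistic regression alongside clinical covariates.

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project rows of X (e.g. discretised dose-volume histograms) onto the
    top `n_components` principal components. Returns (scores, components)."""
    Xc = X - X.mean(axis=0)                      # centre each dose bin
    # SVD of the centred data; rows of Vt are the principal directions
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    comps = Vt[:n_components]
    return Xc @ comps.T, comps
```

Replacing dozens of highly correlated dose bins with a handful of orthogonal scores is exactly what lets the downstream logistic regression avoid the bias and instability the abstract attributes to raw dose-volume predictors.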

  19. The Development and Validation of a Mental Toughness Scale for Adolescents

    ERIC Educational Resources Information Center

    McGeown, Sarah; St. Clair-Thompson, Helen; Putwain, David W.

    2018-01-01

    The present study examined the validity of a newly developed instrument, the Mental Toughness Scale for Adolescents, which examines the attributes of challenge, commitment, confidence (abilities and interpersonal), and control (life and emotion). The six-factor model was supported using exploratory factor analysis (n = 373) and confirmatory factor…

  20. Examining the Cultural Validity of a College Student Engagement Survey for Latinos

    ERIC Educational Resources Information Center

    Hernandez, Ebelia; Mobley, Michael; Coryell, Gayle; Yu, En-Hui; Martinez, Gladys

    2013-01-01

    Using critical race theory and quantitative criticalist stance, this study examines the construct validity of an engagement survey, "Student Experiences in the Research University" (SERU) for Latino college students through exploratory factor analysis. Results support the principal seven-factor SERU model. However subfactors exhibited…

  1. Evidence of Convergent and Discriminant Validity of Child, Teacher, and Peer Reports of Teacher-Student Support

    PubMed Central

    Li, Yan; Hughes, Jan N.; Kwok, Oi-man; Hsu, Hsien-Yuan

    2012-01-01

    This study investigated the construct validity of measures of teacher-student support in a sample of 709 ethnically diverse second and third grade academically at-risk students. Confirmatory factor analysis investigated the convergent and discriminant validities of teacher, child, and peer reports of teacher-student support and child conduct problems. Results supported the convergent and discriminant validity of scores on the measures. Peer reports accounted for the largest proportion of trait variance and non-significant method variance. Child reports accounted for the smallest proportion of trait variance and the largest method variance. A model with two latent factors provided a better fit to the data than a model with one factor, providing further evidence of the discriminant validity of measures of teacher-student support. Implications for research, policy, and practice are discussed. PMID:21767024

  2. Development and Validation of Discriminant Analysis Models for Student Loan Defaultees and Non-Defaultees.

    ERIC Educational Resources Information Center

    Myers, Greeley; Siera, Steven

    1980-01-01

Default on guaranteed student loans has been increasing. The use of discriminant analysis as a technique to identify "good" versus "bad" student loans, based on information available from the loan application, is discussed. Research to test the ability of models to make such predictions is reported. (Author/MLW)

  3. [Development and testing of a preparedness and response capacity questionnaire in public health emergency for Chinese provincial and municipal governments].

    PubMed

    Hu, Guo-Qing; Rao, Ke-Qin; Sun, Zhen-Qiu

    2008-12-01

To develop a public health emergency capacity questionnaire for Chinese local governments, literature reviews, conceptual modelling, stakeholder analysis, focus groups, interviews, and the Delphi technique were employed together. Classical test theory and case study were used to assess reliability and validity. (1) A two-dimensional conceptual model was built, and a preparedness and response capacity questionnaire in public health emergency with 10 dimensions and 204 items was developed. (2) Reliability and validity results. Internal consistency: except for dimensions 3 and 8, the Cronbach's alpha coefficients of the dimensions were higher than 0.60; the alpha coefficients of dimensions 3 and 8 were 0.59 and 0.39 respectively. Content validity: the questionnaire was endorsed by the respondents. Construct validity: the Spearman correlation coefficients among the 10 dimensions fluctuated around 0.50, ranging from 0.26 to 0.75 (P<0.05). Discrimination validity: comparisons of the 10 dimensions among 4 provinces did not show statistical significance using one-way analysis of variance (P>0.05). Criterion-related validity: case study showed significant differences in the 10 dimensions in Beijing between February 2003 (before the SARS event) and November 2005 (after the SARS event). The preparedness and response capacity questionnaire in public health emergency is a reliable and valid tool, which can be used in all provinces and municipalities in China.
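The Cronbach's alpha internal-consistency coefficients reported above can be computed from a respondents-by-items score matrix. A minimal sketch:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of respondents' totals
    return k / (k - 1) * (1 - item_var / total_var)
```

Values near 1 indicate that the items of a dimension vary together (high internal consistency), which is why the 0.39 for dimension 8 above flags that dimension for revision.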

  4. Validation of the Malay Version of the Parental Bonding Instrument among Malaysian Youths Using Exploratory Factor Analysis.

    PubMed

    Muhammad, Noor Azimah; Shamsuddin, Khadijah; Omar, Khairani; Shah, Shamsul Azhar; Mohd Amin, Rahmah

    2014-01-01

    Parenting behaviour is culturally sensitive. The aims of this study were (1) to translate the Parental Bonding Instrument into Malay (PBI-M) and (2) to determine its factorial structure and validity among the Malaysian population. The PBI-M was generated from a standard translation process and comprehension testing. The validation study of the PBI-M was administered to 248 college students aged 18 to 22 years. Participants in the comprehension testing had difficulty understanding negative items. Five translated double negative items were replaced with five positive items with similar meanings. Exploratory factor analysis showed a three-factor model for the PBI-M with acceptable reliability. Four negative items (items 3, 4, 8, and 16) and item 19 were omitted from the final PBI-M list because of incorrect placement or low factor loading (< 0.32). Out of the final 20 items of the PBI-M, there were 10 items for the care factor, five items for the autonomy factor and five items for the overprotection factor. All the items loaded positively on their respective factors. The Malaysian population favoured positive items in answering questions. The PBI-M confirmed the three-factor model that consisted of care, autonomy and overprotection. The PBI-M is a valid and reliable instrument to assess the Malaysian parenting style. Confirmatory factor analysis may further support this finding. Keywords: Malaysia, parenting, questionnaire, validity.

  5. Rapid analysis of glucose, fructose, sucrose, and maltose in honeys from different geographic regions using fourier transform infrared spectroscopy and multivariate analysis.

    PubMed

    Wang, Jun; Kliks, Michael M; Jun, Soojin; Jackson, Mel; Li, Qing X

    2010-03-01

    Quantitative analysis of glucose, fructose, sucrose, and maltose in honey samples of different geographic origins was performed using Fourier transform infrared (FTIR) spectroscopy and chemometric methods such as partial least squares (PLS) regression and principal component regression. The calibration series consisted of 45 standard mixtures of glucose, fructose, sucrose, and maltose. There were distinct peak variations of all sugar mixtures in the spectral "fingerprint" region between 1500 and 800 cm(-1). The calibration model was successfully validated using 7 synthetic blend sets of sugars. The PLS 2nd-derivative model showed the highest degree of prediction accuracy, with a highest R(2) value of 0.999. Along with canonical variate analysis, the calibration model was further validated by high-performance liquid chromatography measurements for commercial honey samples, demonstrating that FTIR can qualitatively and quantitatively determine the presence of glucose, fructose, sucrose, and maltose in honey samples from multiple regions.
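The calibrate-then-validate workflow above can be sketched with principal component regression, one of the chemometric methods the abstract names. Everything here is synthetic and illustrative: 45 fake "spectra" built as linear mixes of 4 pure-component profiles, validated on 7 independent blends, mirroring the study's design but not its data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 45 calibration mixtures of 4 sugars; each "spectrum"
# is a linear mix of 4 pure-component profiles over 200 wavenumber points,
# plus noise. A simplified stand-in for real FTIR measurements.
n_cal, n_val, n_sugars, n_points = 45, 7, 4, 200
pure = rng.random((n_sugars, n_points))
conc_cal = rng.random((n_cal, n_sugars))
spectra_cal = conc_cal @ pure + 0.01 * rng.normal(size=(n_cal, n_points))

# Principal component regression: project spectra onto the top-4 principal
# components, then fit a linear map from PC scores to concentrations.
mean_spec = spectra_cal.mean(axis=0)
_, _, Vt = np.linalg.svd(spectra_cal - mean_spec, full_matrices=False)
pcs = Vt[:n_sugars]                                   # top-4 loading vectors
scores_cal = (spectra_cal - mean_spec) @ pcs.T
A = np.hstack([scores_cal, np.ones((n_cal, 1))])      # intercept column
coef, *_ = np.linalg.lstsq(A, conc_cal, rcond=None)

# Validate on 7 independent synthetic blends, as the study does.
conc_val = rng.random((n_val, n_sugars))
spectra_val = conc_val @ pure + 0.01 * rng.normal(size=(n_val, n_points))
scores_val = (spectra_val - mean_spec) @ pcs.T
pred = np.hstack([scores_val, np.ones((n_val, 1))]) @ coef

rmse = np.sqrt(np.mean((pred - conc_val) ** 2))
print(f"validation RMSE: {rmse:.4f}")
```

Because the signal is genuinely 4-dimensional, four components recover the concentrations almost exactly; real honey matrices are messier, which is why the study cross-checks against HPLC.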

  6. Validation of a Best-Fit Pharmacokinetic Model for Scopolamine Disposition after Intranasal Administration

    NASA Technical Reports Server (NTRS)

    Wu, L.; Chow, D. S-L.; Tam, V.; Putcha, L.

    2015-01-01

    An intranasal gel formulation of scopolamine (INSCOP) was developed for the treatment of motion sickness. Bioavailability and pharmacokinetics (PK) were determined per Investigational New Drug (IND) evaluation guidance from the Food and Drug Administration. Earlier, we reported the development of a PK model that can predict the relationship between plasma, saliva, and urinary scopolamine (SCOP) concentrations using data collected from an IND clinical trial with INSCOP. This data analysis project is designed to validate the reported best-fit PK model for SCOP by comparing observed and model-predicted SCOP concentration-time profiles after administration of INSCOP.

  7. Development of a Rational Modeling Approach for the Design, and Optimization of the Multifiltration Unit. Volume 1

    NASA Technical Reports Server (NTRS)

    Hand, David W.; Crittenden, John C.; Ali, Anisa N.; Bulloch, John L.; Hokanson, David R.; Parrem, David L.

    1996-01-01

    This thesis includes the development and verification of an adsorption model for analysis and optimization of the adsorption processes within the International Space Station multifiltration beds. The fixed bed adsorption model includes multicomponent equilibrium and both external and intraparticle mass transfer resistances. Single solute isotherm parameters were used in the multicomponent equilibrium description to predict the competitive adsorption interactions occurring during the adsorption process. The multicomponent equilibrium description used the Fictive Component Analysis to describe adsorption in unknown background matrices. Multicomponent isotherms were used to validate the multicomponent equilibrium description. Column studies were used to develop and validate external and intraparticle mass transfer parameter correlations for compounds of interest. The fixed bed model was verified using a shower and handwash ersatz water which served as a surrogate for the actual shower and handwash wastewater.

  8. Validity of segmental bioelectrical impedance analysis for estimating fat-free mass in children including overweight individuals.

    PubMed

    Ohta, Megumi; Midorikawa, Taishi; Hikihara, Yuki; Masuo, Yoshihisa; Sakamoto, Shizuo; Torii, Suguru; Kawakami, Yasuo; Fukunaga, Tetsuo; Kanehisa, Hiroaki

    2017-02-01

    This study examined the validity of segmental bioelectrical impedance (BI) analysis for predicting the fat-free masses (FFMs) of the whole body and body segments in children, including overweight individuals. The FFM and impedance (Z) values of the arms, trunk, legs, and whole body were determined using dual-energy X-ray absorptiometry and segmental BI analyses, respectively, in 149 boys and girls aged 6 to 12 years, who were divided into model-development (n = 74), cross-validation (n = 35), and overweight (n = 40) groups. Simple regression analysis was applied to length²/Z (the BI index) for the whole body and each of the 3 segments to develop prediction equations for the measured FFM of the related body part. In the model-development group, the BI index of each of the 3 segments and the whole body was significantly correlated to the measured FFM (R² = 0.867-0.932, standard error of estimation = 0.18-1.44 kg (5.9%-8.7%)). There was no significant difference between the measured and predicted FFM values, without systematic error. The application of each equation derived in the model-development group to the cross-validation and overweight groups did not produce significant differences between the measured and predicted FFM values or systematic errors, with the exception that the arm FFM in the overweight group was overestimated. Segmental bioelectrical impedance analysis is useful for predicting the FFM of the whole body and each body segment in children, including overweight individuals, although its application for estimating arm FFM in overweight individuals requires a certain modification.
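The core of this design is a simple regression of DXA-measured FFM on the BI index (length²/Z) fitted in one group and applied unchanged to another. A minimal sketch with synthetic numbers (coefficients, ranges, and noise levels are all made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical illustration: regress measured FFM on the BI index
# length**2 / Z for one body segment, then apply the fixed equation
# to an independent cross-validation group.
n_dev, n_val = 74, 35                        # development / cross-validation sizes
bi_dev = rng.uniform(10, 60, n_dev)          # BI index, arbitrary units
ffm_dev = 0.35 * bi_dev + 2.0 + rng.normal(0, 1.0, n_dev)   # synthetic "DXA" FFM (kg)

# Fit FFM = a * BI-index + b on the development group only.
a, b = np.polyfit(bi_dev, ffm_dev, 1)

# Apply the frozen equation to the cross-validation group.
bi_val = rng.uniform(10, 60, n_val)
ffm_val = 0.35 * bi_val + 2.0 + rng.normal(0, 1.0, n_val)
pred_val = a * bi_val + b

see = np.sqrt(np.mean((pred_val - ffm_val) ** 2))   # standard error of estimate
r2 = np.corrcoef(pred_val, ffm_val)[0, 1] ** 2
print(f"SEE = {see:.2f} kg, R^2 = {r2:.3f}")
```

Keeping the coefficients fixed when scoring the second group is what makes this a cross-validation rather than a refit, and it is how systematic over- or under-estimation (as seen for arm FFM in the overweight group) becomes visible.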

  9. Psychometric analysis of the Swedish version of the General Medical Council's multi source feedback questionnaires.

    PubMed

    Olsson, Jan-Eric; Wallentin, Fan Yang; Toth-Pal, Eva; Ekblad, Solvig; Bertilson, Bo Christer

    2017-07-10

    To determine the internal consistency and the underlying components of our translated and adapted Swedish version of the General Medical Council's multisource feedback questionnaires (GMC questionnaires) for physicians and to confirm which aspects of good medical practice the latent variable structure reflected. From October 2015 to March 2016, residents in family medicine in Sweden were invited to participate in the study and to use the Swedish version to perform self-evaluations and acquire feedback from both their patients and colleagues. The validation focused on internal consistency and construct validity. Main outcome measures were Cronbach's alpha coefficients, Principal Component Analysis, and Confirmatory Factor Analysis indices. A total of 752 completed questionnaires from patients, colleagues, and residents were analysed. Of these, 213 comprised resident self-evaluations, 336 were feedback from residents' patients, and 203 were feedback from residents' colleagues. Cronbach's alpha coefficients of the scores were 0.88 from patients, 0.93 from colleagues, and 0.84 in the self-evaluations. The Confirmatory Factor Analysis validated two models that fit the data reasonably well and reflected important aspects of good medical practice. The first model had two latent factors for patient-related items concerning empathy and consultation management, and the second model had five latent factors for colleague-related items, including knowledge and skills, attitude and approach, reflection and development, teaching, and trust. The current Swedish version seems to be a reliable and valid tool for formative assessment for resident physicians and their supervisors. This needs to be verified in larger samples.

  10. Rapid Quantitative Determination of Squalene in Shark Liver Oils by Raman and IR Spectroscopy.

    PubMed

    Hall, David W; Marshall, Susan N; Gordon, Keith C; Killeen, Daniel P

    2016-01-01

    Squalene is sourced predominantly from shark liver oils and to a lesser extent from plants such as olives. It is used for the production of surfactants, dyes, sunscreen, and cosmetics. The economic value of shark liver oil is directly related to the squalene content, which in turn is highly variable and species-dependent. Presented here is a validated gas chromatography-mass spectrometry analysis method for the quantitation of squalene in shark liver oils, with an accuracy of 99.0%, precision of 0.23% (standard deviation), and linearity of >0.999. The method has been used to measure the squalene concentration of 16 commercial shark liver oils. These reference squalene concentrations were related to infrared (IR) and Raman spectra of the same oils using partial least squares regression. The resultant models were suitable for the rapid quantitation of squalene in shark liver oils, with cross-validation r(2) values of >0.98 and root mean square errors of validation of ≤4.3% w/w. Independent test set validation of these models found mean absolute deviations of 4.9 and 1.0% w/w for the IR and Raman models, respectively. Both techniques were more accurate than results obtained by an industrial refractive index analysis method, which is used for rapid, cheap quantitation of squalene in shark liver oils. In particular, the Raman partial least squares regression was suited to quantitative squalene analysis. The intense and highly characteristic Raman bands of squalene made quantitative analysis possible irrespective of the lipid matrix.

  12. Computational Fluid Dynamics Best Practice Guidelines in the Analysis of Storage Dry Cask

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zigh, A.; Solis, J.

    2008-07-01

    Computational fluid dynamics (CFD) methods are used to evaluate the thermal performance of a dry cask under long-term storage conditions in accordance with NUREG-1536 [NUREG-1536, 1997]. A three-dimensional CFD model was developed and validated using data for a ventilated storage cask (VSC-17) collected by Idaho National Laboratory (INL). The developed Fluent CFD model was validated to minimize the modeling and application uncertainties. To address modeling uncertainties, the paper focused on turbulence modeling of buoyancy-driven air flow. Similarly, for the application uncertainties, the pressure boundary conditions used to model the air inlet and outlet vents were investigated and validated. Different turbulence models were used to reduce the modeling uncertainty in the CFD simulation of the air flow through the annular gap between the overpack and the multi-assembly sealed basket (MSB). Among the chosen turbulence models, the validation showed that the low-Reynolds k-{epsilon} and the transitional k-{omega} turbulence models predicted the measured temperatures closely. To assess the impact of the pressure boundary conditions used at the air inlet and outlet channels on the application uncertainties, a sensitivity analysis of operating density was undertaken. For convergence purposes, all available commercial CFD codes include the operating density in the pressure gradient term of the momentum equation. The validation showed that the correct operating density corresponds to the density evaluated at the air inlet condition of pressure and temperature. Next, the validated CFD method was used to predict the thermal performance of an existing dry cask storage system. The evaluation uses two distinct models: a three-dimensional and an axisymmetric representation of the cask. In the 3-D model, porous media was used to model only the volume occupied by the rodded region that is surrounded by the BWR channel box. In the axisymmetric model, porous media was used to model the entire region that encompasses the fuel assemblies as well as the gaps in between. Consequently, a larger volume is represented by porous media in the second model; hence, a higher frictional flow resistance is introduced in the momentum equations. The conservatism and the safety margins of these models were compared to assess the applicability and the realism of the two models. The three-dimensional model included fewer geometry simplifications and is recommended, as it predicted less conservative fuel cladding temperature values while still assuring the existence of adequate safety margins. (authors)

  13. Moment method analysis of linearly tapered slot antennas

    NASA Technical Reports Server (NTRS)

    Koeksal, Adnan

    1993-01-01

    A method of moments (MOM) model for the analysis of the Linearly Tapered Slot Antenna (LTSA) is developed and implemented. The model employs unequal-size rectangular sectioning for the conducting parts of the antenna. Piecewise sinusoidal basis functions are used for the expansion of the conductor current. The effect of the dielectric is incorporated in the model by using an equivalent volume polarization current density and solving the equivalent problem in free space. The feed section of the antenna, including the microstripline, is handled rigorously in the MOM model by including the slotline short circuit and microstripline currents among the unknowns. Comparison with measurements is made to demonstrate the validity of the model for both the air case and the dielectric case. The validity of the model is also verified by extending it to handle the analysis of the skew-plate antenna and comparing the results to those of a skew-segmentation model of the same structure and to available data in the literature. The variation of the radiation pattern of the air LTSA with length, height, and taper angle is investigated, and the results are tabulated. Numerical results for the effect of the dielectric thickness and permittivity are presented.

  14. Assessing Requirements Quality through Requirements Coverage

    NASA Technical Reports Server (NTRS)

    Rajan, Ajitha; Heimdahl, Mats; Woodham, Kurt

    2008-01-01

    In model-based development, the development effort is centered around a formal description of the proposed software system: the model. This model is derived from some high-level requirements describing the expected behavior of the software. For validation and verification purposes, this model can then be subjected to various types of analysis, for example, completeness and consistency analysis [6], model checking [3], theorem proving [1], and test-case generation [4, 7]. This development paradigm is making rapid inroads in certain industries, e.g., automotive, avionics, space applications, and medical technology. This shift towards model-based development naturally leads to changes in the verification and validation (V&V) process. The model validation problem, determining that the model accurately captures the customer's high-level requirements, has received little attention, and the sufficiency of the validation activities has been largely determined through ad-hoc methods. Since the model serves as the central artifact, its correctness with respect to the user's needs is absolutely crucial. In our investigation, we attempt to answer the following two questions with respect to validation: (1) Are the requirements sufficiently defined for the system? and (2) How well does the model implement the behaviors specified by the requirements? The second question can be addressed using formal verification. Nevertheless, the size and complexity of many industrial systems make formal verification infeasible even if we have a formal model and formalized requirements. Thus, presently, there is no objective way of answering these two questions. To this end, we propose an approach based on testing that, when given a set of formal requirements, explores the relationship between requirements-based structural test-adequacy coverage and model-based structural test-adequacy coverage.
The proposed technique uses requirements coverage metrics defined in [9] on formal high-level software requirements and existing model coverage metrics such as the Modified Condition and Decision Coverage (MC/DC) used when testing highly critical software in the avionics industry [8]. Our work is related to Chockler et al. [2], but we base our work on traditional testing techniques as opposed to verification techniques.

  15. Toward a CFD nose-to-tail capability - Hypersonic unsteady Navier-Stokes code validation

    NASA Technical Reports Server (NTRS)

    Edwards, Thomas A.; Flores, Jolen

    1989-01-01

    Computational fluid dynamics (CFD) research for hypersonic flows presents new problems in code validation because of the added complexity of the physical models. This paper surveys code validation procedures applicable to hypersonic flow models that include real gas effects. The current status of hypersonic CFD flow analysis is assessed with the Compressible Navier-Stokes (CNS) code as a case study. The methods of code validation discussed go beyond comparison with experimental data to include comparisons with other codes and formulations, component analyses, and estimation of numerical errors. Current results indicate that predicting hypersonic flows of perfect gases and equilibrium air is well in hand. Pressure, shock location, and integrated quantities are relatively easy to predict accurately, while surface quantities such as heat transfer are more sensitive to the solution procedure. Modeling transition to turbulence needs refinement, though preliminary results are promising.

  16. Validation of X1 motorcycle model in industrial plant layout by using WITNESSTM simulation software

    NASA Astrophysics Data System (ADS)

    Hamzas, M. F. M. A.; Bareduan, S. A.; Zakaria, M. Z.; Tan, W. J.; Zairi, S.

    2017-09-01

    This paper demonstrates a case study on simulation, modelling, and analysis for the X1 motorcycle model. In this research, a motorcycle assembly plant was selected as the main place of the research study. Simulation techniques using the Witness software were applied to evaluate the performance of the existing manufacturing system. The main objective is to validate the data and determine their significant impact on the overall performance of the system for future improvement. The process of validation starts when the layout of the assembly line is identified. All components are evaluated to validate whether the data are significant for future improvement. Machine and labor statistics are among the parameters that were evaluated for process improvement. The average total cycle time for given workstations is used as the criterion for comparison of possible variants. From the simulation process, the data used are appropriate and meet the criteria for two-sided assembly line problems.

  17. Quality Control Analysis of Selected Aspects of Programs Administered by the Bureau of Student Financial Assistance. Task 1 and Quality Control Sample; Error-Prone Modeling Analysis Plan.

    ERIC Educational Resources Information Center

    Saavedra, Pedro; And Others

    Parameters and procedures for developing an error-prone model (EPM) to predict financial aid applicants who are likely to misreport on Basic Educational Opportunity Grant (BEOG) applications are introduced. Specifications to adapt these general parameters to secondary data analysis of the Validation, Edits, and Applications Processing Systems…

  18. Validation of the Spanish version of the Drive for Muscularity Scale (DMS) among males: Confirmatory factor analysis.

    PubMed

    Sepulveda, Ana R; Parks, Melissa; de Pellegrin, Yolanda; Anastasiadou, Dimitra; Blanco, Miriam

    2016-04-01

    Drive for Muscularity (DM) has been shown to be a relevant construct for measuring and understanding male body image. For this reason, it is important to have reliable and valid instruments with which to measure DM, and to date no such instruments exist in Spain. This study analyzes the psychometric and structural properties of the Drive for Muscularity Scale (DMS) in a sample of Spanish adolescent males (N=212), with the aim of studying the structural validity of the scale by using a confirmatory factor analysis (CFA), as well as analyzing the internal consistency and construct (convergent and discriminant) and concurrent validity of the instrument. After testing three models, results indicated that the best structure was a two-dimensional model, with the factors of muscularity-oriented body image (MBI) and muscularity behavior (MB). The scale showed good internal consistency (α=.90) and adequate construct validity. Furthermore, significant associations were found between DM and increased difficulties in emotional regulation (rho=.37) and low self-esteem (rho=-.19). Findings suggest that the two-factor structure may be used when assessing drive for muscularity among adolescent males in Spain. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walsh, Seán, E-mail: walshsharp@gmail.com; Department of Oncology, Gray Institute for Radiation Oncology and Biology, University of Oxford, Oxford OX3 7DQ; Roelofs, Erik

    Purpose: A fully heterogeneous population-averaged mechanistic tumor control probability (TCP) model is appropriate for the analysis of external beam radiotherapy (EBRT). This has been accomplished for EBRT photon treatment of intermediate-risk prostate cancer. Extending the TCP model to low- and high-risk patients would be beneficial in terms of overall decision making. Furthermore, different radiation treatment modalities such as protons and carbon-ions are becoming increasingly available. Consequently, there is a need for a complete TCP model. Methods: A TCP model was fitted and validated to a primary endpoint of 5-year biological no evidence of disease clinical outcome data obtained from a review of the literature for low, intermediate, and high-risk prostate cancer patients (5218 patients fitted, 1088 patients validated), treated by photons, protons, or carbon-ions. The review followed the preferred reporting items for systematic reviews and meta-analyses statement. Treatment regimens include standard fractionation and hypofractionation treatments. Residual analysis and goodness-of-fit statistics were applied. Results: The TCP model achieves a good level of fit overall; linear regression results in a p-value of <0.00001 with an adjusted weighted R² value of 0.77 and a weighted root mean squared error (wRMSE) of 1.2% to the fitted clinical outcome data. Validation of the model utilizing three independent datasets obtained from the literature resulted in an adjusted weighted R² value of 0.78 and a wRMSE of less than 1.8% to the validation clinical outcome data. The weighted mean absolute residual across the entire dataset is found to be 5.4%. Conclusions: This TCP model, fitted and validated to clinical outcome data, appears to be an appropriate model for the inclusion of all clinical prostate cancer risk categories, and allows evaluation of current EBRT modalities with regard to tumor control prediction.

  20. Validity and Reliability of the 8-Item Work Limitations Questionnaire.

    PubMed

    Walker, Timothy J; Tullar, Jessica M; Diamond, Pamela M; Kohl, Harold W; Amick, Benjamin C

    2017-12-01

    Purpose To evaluate the factorial validity, scale reliability, test-retest reliability, convergent validity, and discriminant validity of the 8-item Work Limitations Questionnaire (WLQ) among employees from a public university system. Methods A secondary analysis of de-identified data from employees who completed an annual Health Assessment between the years 2009-2015 tested the research aims. Confirmatory factor analysis (CFA) (n = 10,165) tested the latent structure of the 8-item WLQ. Scale reliability was determined using a CFA-based approach, while test-retest reliability was determined using the intraclass correlation coefficient. Convergent/discriminant validity was tested by evaluating relations of the 8-item WLQ with health/performance variables for convergent validity (health-related work performance, number of chronic conditions, and general health) and demographic variables for discriminant validity (gender and institution type). Results A 1-factor model with three correlated residuals demonstrated excellent model fit (CFI = 0.99, TLI = 0.99, RMSEA = 0.03, and SRMR = 0.01). The scale reliability was acceptable (0.69, 95% CI 0.68-0.70) and the test-retest reliability was very good (ICC = 0.78). Low-to-moderate associations were observed between the 8-item WLQ and the health/performance variables, while weak associations were observed with the demographic variables. Conclusions The 8-item WLQ demonstrated sufficient reliability and validity among employees from a public university system. Results suggest the 8-item WLQ is a usable alternative for studies when the more comprehensive 25-item WLQ is not available.
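The test-retest figure above is an intraclass correlation coefficient. A minimal sketch of the one-way random-effects ICC(1,1), computed from made-up two-occasion scores (not the study's data; the study does not state which ICC form it used):

```python
import numpy as np

rng = np.random.default_rng(2)

def icc_oneway(scores: np.ndarray) -> float:
    """One-way random-effects ICC(1,1) for an (n_subjects, k_ratings) matrix."""
    n, k = scores.shape
    grand = scores.mean()
    subj_means = scores.mean(axis=1)
    msb = k * ((subj_means - grand) ** 2).sum() / (n - 1)              # between-subjects MS
    msw = ((scores - subj_means[:, None]) ** 2).sum() / (n * (k - 1))  # within-subjects MS
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical test-retest data: each employee's questionnaire score at two
# time points, sharing a stable "true score" plus occasion-specific noise.
true_score = rng.normal(50, 10, 200)
scores = np.column_stack([true_score + rng.normal(0, 5, 200),
                          true_score + rng.normal(0, 5, 200)])
icc = icc_oneway(scores)
print(f"ICC = {icc:.2f}")
```

With a between-subjects SD of 10 and an occasion SD of 5, the expected ICC is 100/125 = 0.8, in the same "very good" range the study reports.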

  1. Predictive Ability of Pender's Health Promotion Model for Physical Activity and Exercise in People with Spinal Cord Injuries: A Hierarchical Regression Analysis

    ERIC Educational Resources Information Center

    Keegan, John P.; Chan, Fong; Ditchman, Nicole; Chiu, Chung-Yi

    2012-01-01

    The main objective of this study was to validate Pender's Health Promotion Model (HPM) as a motivational model for exercise/physical activity self-management for people with spinal cord injuries (SCIs). Quantitative descriptive research design using hierarchical regression analysis (HRA) was used. A total of 126 individuals with SCI were recruited…

  2. Validation of GEMACS (General Electromagnetic Model for the Analysis of Complex Systems) for Modeling Lightning-Induced Electromagnetic Fields.

    DTIC Science & Technology

    1987-12-01

    Validation of GEMACS for Modeling Lightning-Induced Electromagnetic Fields. Thesis, David S. Mabee, Captain, USAF. AFIT/GE/ENG/87D-39, December 1987. Approved for public release; distribution unlimited.

  3. Rational selection of training and test sets for the development of validated QSAR models

    NASA Astrophysics Data System (ADS)

    Golbraikh, Alexander; Shen, Min; Xiao, Zhiyan; Xiao, Yun-De; Lee, Kuo-Hsiung; Tropsha, Alexander

    2003-02-01

    Quantitative Structure-Activity Relationship (QSAR) models are used increasingly to screen chemical databases and/or virtual chemical libraries for potentially bioactive molecules. These developments emphasize the importance of rigorous model validation to ensure that the models have acceptable predictive power. Using the k nearest neighbors (kNN) variable selection QSAR method for the analysis of several datasets, we have demonstrated recently that the widely accepted leave-one-out (LOO) cross-validated R2 (q2) is an inadequate characteristic to assess the predictive ability of the models [Golbraikh, A., Tropsha, A. Beware of q2! J. Mol. Graphics Mod. 20, 269-276, (2002)]. Herein, we provide additional evidence that there exists no correlation between the values of q2 for the training set and accuracy of prediction (R2) for the test set and argue that this observation is a general property of any QSAR model developed with LOO cross-validation. We suggest that external validation using rationally selected training and test sets provides a means to establish a reliable QSAR model. We propose several approaches to the division of experimental datasets into training and test sets and apply them in QSAR studies of 48 functionalized amino acid anticonvulsants and a series of 157 epipodophyllotoxin derivatives with antitumor activity. We formulate a set of general criteria for the evaluation of predictive power of QSAR models.
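The two statistics being contrasted, LOO-cross-validated q2 on the training set versus R2 on an external test set, can be sketched as follows. A plain linear model stands in for the paper's kNN variable-selection QSAR, and the descriptor data are synthetic, so this only illustrates how the two numbers are computed, not the paper's finding that they can disagree:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "descriptors" and "activities": 48 training compounds (as in the
# anticonvulsant dataset) and 16 external test compounds. All values invented.
n_train, n_test, n_desc = 48, 16, 5
X_tr = rng.normal(size=(n_train, n_desc))
w_true = rng.normal(size=n_desc)                      # hypothetical activity weights
y_tr = X_tr @ w_true + rng.normal(0, 0.3, n_train)
X_te = rng.normal(size=(n_test, n_desc))
y_te = X_te @ w_true + rng.normal(0, 0.3, n_test)

def fit_predict(X_fit, y_fit, X_query):
    A = np.hstack([X_fit, np.ones((len(X_fit), 1))])  # intercept column
    coef, *_ = np.linalg.lstsq(A, y_fit, rcond=None)
    return np.hstack([X_query, np.ones((len(X_query), 1))]) @ coef

# Leave-one-out cross-validated q2 on the training set.
loo_pred = np.array([
    fit_predict(np.delete(X_tr, i, 0), np.delete(y_tr, i), X_tr[i:i + 1])[0]
    for i in range(n_train)
])
q2 = 1 - ((y_tr - loo_pred) ** 2).sum() / ((y_tr - y_tr.mean()) ** 2).sum()

# External R2 on the held-out test set.
r2_ext = 1 - ((y_te - fit_predict(X_tr, y_tr, X_te)) ** 2).sum() \
           / ((y_te - y_te.mean()) ** 2).sum()
print(f"LOO q2 = {q2:.2f}, external R2 = {r2_ext:.2f}")
```

The paper's point is precisely that a high q2 here does not guarantee a high r2_ext on real chemical data, which is why it argues for rational training/test set selection.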

  4. Head-Spine Structure Modeling: Enhancements to Secondary Loading Path Model and Validation of Head-Cervical Spine Model.

    DTIC Science & Technology

    1985-07-01

    Contents include enhancements to the secondary loading path model of the head-cervical spine, and an axisymmetric finite element analysis of a lumbar vertebral body with comparisons to other models (model description, stress nomenclature, comparison of Models C and S, and comparison with earlier work).

  5. A FAST BAYESIAN METHOD FOR UPDATING AND FORECASTING HOURLY OZONE LEVELS

    EPA Science Inventory

    A Bayesian hierarchical space-time model is proposed by combining information from real-time ambient AIRNow air monitoring data, and output from a computer simulation model known as the Community Multi-scale Air Quality (Eta-CMAQ) forecast model. A model validation analysis shows...

  6. Load Composition Model Workflow (BPA TIP-371 Deliverable 1A)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chassin, David P.; Cezar, Gustavo V.

    This project is funded under Bonneville Power Administration (BPA) Strategic Partnership Project (SPP) 17-005 between BPA and SLAC National Accelerator Laboratory. The project is a BPA Technology Improvement Project (TIP) that builds on and validates the Composite Load Model developed by the Western Electricity Coordinating Council's (WECC) Load Modeling Task Force (LMTF). The composite load model is used by the WECC Modeling and Validation Work Group to study the stability and security of the western electricity interconnection. The work includes development of load composition data sets, collection of load disturbance data, and model development and validation. This work supports reliable and economic operation of the power system. This report was produced for Deliverable 1A of the BPA TIP-371 Project, entitled "TIP 371: Advancing the Load Composition Model". The deliverable documents the proposed workflow for the Composite Load Model, which provides the basis for the instrumentation, data acquisition, analysis, and data dissemination activities addressed by later phases of the project.

  7. [Hyperspectral Estimation of Apple Tree Canopy LAI Based on SVM and RF Regression].

    PubMed

    Han, Zhao-ying; Zhu, Xi-cun; Fang, Xian-yi; Wang, Zhuo-yuan; Wang, Ling; Zhao, Geng-Xing; Jiang, Yuan-mao

    2016-03-01

    Leaf area index (LAI) is a dynamic index of crop population size. Hyperspectral technology can be used to estimate apple canopy LAI rapidly and nondestructively, providing a reference for monitoring tree growth and estimating yield. Red Fuji apple trees at the full fruit-bearing stage were studied. Canopy spectral reflectance and LAI values of ninety apple trees were measured with an ASD FieldSpec3 spectrometer and an LAI-2200 in thirty orchards over two consecutive years in the Qixia research area of Shandong Province. The optimal vegetation indices were selected by correlation analysis of the original spectral reflectance and vegetation indices. Models predicting LAI were built with two multivariate regression methods: support vector machine (SVM) and random forest (RF). The new vegetation indices GNDVI527, NDVI676, RVI682, FD-NDVI656, and GRVI517, and the two main previously reported vegetation indices, NDVI670 and NDVI705, are in accordance with LAI. In the RF regression model, the calibration set determination coefficient C-R2 of 0.920 and validation set determination coefficient V-R2 of 0.889 are higher than those of the SVM regression model by 0.045 and 0.033, respectively. The calibration set root mean square error C-RMSE of 0.249 and the validation set root mean square error V-RMSE of 0.236 are lower than those of the SVM regression model by 0.054 and 0.058, respectively. The residual predictive deviations of the calibration set (C-RPD) and validation set (V-RPD) reached 3.363 and 2.520, higher than those of the SVM regression model by 0.598 and 0.262, respectively. The slopes C-S and V-S of the measured-versus-predicted scatterplot trend lines for the calibration and validation sets are close to 1. The estimation result of the RF regression model is better than that of the SVM model, and the RF regression model can be used to estimate the LAI of Red Fuji apple trees in the full fruit period.
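
    The calibration/validation statistics reported above (R2, RMSE, and RPD) can be computed from measured and predicted LAI values as follows (a generic sketch; the variable names are illustrative, not from the paper):

```python
import numpy as np

def validation_metrics(y_true, y_pred):
    """R2, RMSE, and RPD (residual predictive deviation) for a
    calibration or validation set, as used to compare regression models."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    resid = y_true - y_pred
    rmse = np.sqrt(np.mean(resid ** 2))
    r2 = 1.0 - np.sum(resid ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    rpd = np.std(y_true, ddof=1) / rmse  # RPD > 2 is often read as a reliable model
    return r2, rmse, rpd
```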

  8. Model Verification and Validation Concepts for a Probabilistic Fracture Assessment Model to Predict Cracking of Knife Edge Seals in the Space Shuttle Main Engine High Pressure Oxidizer

    NASA Technical Reports Server (NTRS)

    Pai, Shantaram S.; Riha, David S.

    2013-01-01

    Physics-based models are routinely used to predict the performance of engineered systems to make decisions such as when to retire system components, how to extend the life of an aging system, or if a new design will be safe or available. Model verification and validation (V&V) is a process to establish credibility in model predictions. Ideally, carefully controlled validation experiments will be designed and performed to validate models or submodels. In reality, time and cost constraints limit experiments and even model development. This paper describes elements of model V&V during the development and application of a probabilistic fracture assessment model to predict cracking in space shuttle main engine high-pressure oxidizer turbopump knife-edge seals. The objective of this effort was to assess the probability of initiating and growing a crack to a specified failure length in specific flight units for different usage and inspection scenarios. The probabilistic fracture assessment model developed in this investigation combined a series of submodels describing the usage, temperature history, flutter tendencies, tooth stresses and numbers of cycles, fatigue cracking, nondestructive inspection, and finally the probability of failure. The analysis accounted for unit-to-unit variations in temperature, flutter limit state, flutter stress magnitude, and fatigue life properties. The investigation focused on the calculation of relative risk rather than absolute risk between the usage scenarios. Verification predictions were first performed for three units with known usage and cracking histories to establish credibility in the model predictions. Then, numerous predictions were performed for an assortment of operating units that had flown recently or that were projected for future flights. 
Calculations were performed using two NASA-developed software tools: NESSUS(Registered Trademark) for the probabilistic analysis, and NASGRO(Registered Trademark) for the fracture mechanics analysis. The goal of these predictions was to provide additional information to guide decisions on the potential of reusing existing and installed units prior to the new design certification.

  9. Evaluating model structure adequacy: The case of the Maggia Valley groundwater system, southern Switzerland

    USGS Publications Warehouse

    Hill, Mary C.; L. Foglia,; S. W. Mehl,; P. Burlando,

    2013-01-01

    Model adequacy is evaluated with alternative models rated using model selection criteria (AICc, BIC, and KIC) and three other statistics. Model selection criteria are tested with cross-validation experiments and insights for using alternative models to evaluate model structural adequacy are provided. The study is conducted using the computer codes UCODE_2005 and MMA (MultiModel Analysis). One recharge alternative is simulated using the TOPKAPI hydrological model. The predictions evaluated include eight heads and three flows located where ecological consequences and model precision are of concern. Cross-validation is used to obtain measures of prediction accuracy. Sixty-four models were designed deterministically and differ in representation of river, recharge, bedrock topography, and hydraulic conductivity. Results include: (1) What may seem like inconsequential choices in model construction may be important to predictions. Analysis of predictions from alternative models is advised. (2) None of the model selection criteria consistently identified models with more accurate predictions. This is a disturbing result that suggests reconsidering the utility of model selection criteria, and/or the cross-validation measures used in this work to measure model accuracy. (3) KIC displayed poor performance for the present regression problems; theoretical considerations suggest that difficulties are associated with wide variations in the sensitivity term of KIC resulting from the models being nonlinear and the problems being ill-posed due to parameter correlations and insensitivity. The other criteria performed somewhat better, and similarly to each other. (4) Quantities with high leverage are more difficult to predict. The results are expected to be generally applicable to models of environmental systems.
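
    The AICc and BIC criteria compared in this study can be computed from a model's residual sum of squares under a Gaussian likelihood; a minimal sketch using the standard formulas (not code from UCODE_2005 or MMA):

```python
import math

def aicc_bic(rss, n, k):
    """AICc and BIC from a residual sum of squares under a Gaussian
    likelihood, with n observations and k estimated parameters
    (conventionally including the error variance)."""
    # maximized Gaussian log-likelihood given the RSS
    log_lik = -0.5 * n * (math.log(2 * math.pi * rss / n) + 1)
    aic = 2 * k - 2 * log_lik
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)  # small-sample correction
    bic = k * math.log(n) - 2 * log_lik
    return aicc, bic
```

    With equal fit (same RSS), both criteria penalize the model with more parameters, which is how they rank alternative model structures.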

  10. The application of sensitivity analysis to models of large scale physiological systems

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1974-01-01

    A survey of the literature of sensitivity analysis as it applies to biological systems is reported, along with a brief development of sensitivity theory. A simple population model and a more complex thermoregulatory model illustrate the investigatory techniques and interpretation of parameter sensitivity analysis. The role of sensitivity analysis in validating and verifying models, in identifying relative parameter influence, and in estimating errors in model behavior due to uncertainty in input data is presented. This analysis is valuable to the simulationist and the experimentalist in allocating resources for data collection. A method is also presented for reducing highly complex, nonlinear models to simple linear algebraic models that could be useful for making rapid, first-order calculations of system behavior.
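
    Parameter sensitivity of the kind surveyed here is often estimated with normalized finite differences; a small sketch using a hypothetical logistic population model (the model and parameter values are illustrative, not from the report):

```python
import math

def sensitivity(model, params, name, h=1e-6):
    """Normalized first-order sensitivity S = (p / y) * dy/dp of a scalar
    model output to one parameter, by central finite differences."""
    p = dict(params)
    y0 = model(p)
    p[name] = params[name] * (1 + h)
    y_up = model(p)
    p[name] = params[name] * (1 - h)
    y_dn = model(p)
    dydp = (y_up - y_dn) / (2 * h * params[name])
    return params[name] * dydp / y0

def population(p):
    """Hypothetical logistic growth model: population size at t = 5."""
    K, r, N0, t = p["K"], p["r"], p["N0"], 5.0
    return K / (1 + (K / N0 - 1) * math.exp(-r * t))
```

    Ranking parameters by |S| identifies which inputs most strongly influence model behavior, and hence where data-collection resources are best spent.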

  11. QSAR Analysis of 2-Amino or 2-Methyl-1-Substituted Benzimidazoles Against Pseudomonas aeruginosa

    PubMed Central

    Podunavac-Kuzmanović, Sanja O.; Cvetković, Dragoljub D.; Barna, Dijana J.

    2009-01-01

    A set of benzimidazole derivatives were tested for their inhibitory activities against the Gram-negative bacterium Pseudomonas aeruginosa and minimum inhibitory concentrations were determined for all the compounds. Quantitative structure-activity relationship (QSAR) analysis was applied to fourteen of the abovementioned derivatives using a combination of various physicochemical, steric, electronic, and structural molecular descriptors. A multiple linear regression (MLR) procedure was used to model the relationships between molecular descriptors and the antibacterial activity of the benzimidazole derivatives. The stepwise regression method was used to derive the most significant models as a calibration model for predicting the inhibitory activity of this class of molecules. The best QSAR models were further validated by a leave-one-out technique as well as by the calculation of statistical parameters for the established theoretical models. To confirm the predictive power of the models, an external set of molecules was used. High agreement between experimental and predicted inhibitory values, obtained in the validation procedure, indicated the good quality of the derived QSAR models. PMID:19468332

  12. Validation and Verification of Operational Land Analysis Activities at the Air Force Weather Agency

    NASA Technical Reports Server (NTRS)

    Shaw, Michael; Kumar, Sujay V.; Peters-Lidard, Christa D.; Cetola, Jeffrey

    2012-01-01

    The NASA-developed Land Information System (LIS) is the Air Force Weather Agency's (AFWA) operational Land Data Assimilation System (LDAS), combining real-time precipitation observations and analyses, global forecast model data, vegetation, terrain, and soil parameters with the community Noah land surface model, along with other hydrology module options, to generate profile analyses of global soil moisture, soil temperature, and other important land surface characteristics: (1) a range of satellite data products and surface observations used to generate the land analysis products; (2) global, 1/4 deg spatial resolution; (3) model analysis generated at 3-hour intervals. AFWA recognizes the importance of operational benchmarking and uncertainty characterization for land surface modeling and is developing standard methods, software, and metrics to verify and/or validate LIS output products. To facilitate this and other needs for land analysis activities at AFWA, the Model Evaluation Toolkit (MET) -- a joint product of the National Center for Atmospheric Research Developmental Testbed Center (NCAR DTC), AFWA, and the user community -- and the Land surface Verification Toolkit (LVT), developed at the Goddard Space Flight Center (GSFC), have been adapted to the operational benchmarking needs of AFWA's land characterization activities.

  13. Initial Validation of a Comprehensive Assessment Instrument for Bereavement-Related Grief Symptoms and Risk of Complications: The Indicator of Bereavement Adaptation—Cruse Scotland (IBACS)

    PubMed Central

    Schut, Henk; Stroebe, Margaret S.; Wilson, Stewart; Birrell, John

    2016-01-01

    Objective This study assessed the validity of the Indicator of Bereavement Adaptation Cruse Scotland (IBACS). Designed for use in clinical and non-clinical settings, the IBACS measures severity of grief symptoms and risk of developing complications. Method N = 196 (44 male, 152 female) help-seeking, bereaved Scottish adults participated at two timepoints: T1 (baseline) and T2 (after 18 months). Four validated assessment instruments were administered: CORE-R, ICG-R, IES-R, SCL-90-R. Discriminative ability was assessed using ROC curve analysis. Concurrent validity was tested through correlation analysis at T1. Predictive validity was assessed using correlation analyses and ROC curve analysis. Optimal IBACS cutoff values were obtained by calculating a maximal Youden index J in ROC curve analysis. Clinical implications were compared across instruments. Results ROC curve analysis results (AUC = .84, p < .01, 95% CI between .77 and .90) indicated the IBACS is a good diagnostic instrument for assessing complicated grief. Positive correlations (p < .01, 2-tailed) with all four instruments at T1 demonstrated the IBACS' concurrent validity, strongest with complicated grief measures (r = .82). Predictive validity was shown to be fair in T2 ROC curve analysis results (n = 67, AUC = .78, 95% CI between .65 and .92; p < .01). Predictive validity was also supported by stable positive correlations between IBACS and other instruments at T2. Clinical indications were found not to differ across instruments. Conclusions The IBACS offers effective grief symptom and risk assessment for use by non-clinicians. Indications are sufficient to support intake assessment for a stepped model of bereavement intervention. PMID:27741246
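
    The ROC curve analysis with a maximal Youden index J, as used above to derive the IBACS cutoffs, can be sketched directly from instrument scores and outcome labels (a generic empirical-ROC implementation, not the authors' code):

```python
import numpy as np

def roc_youden(scores, labels):
    """Empirical ROC analysis: returns the AUC and the cutoff maximizing
    Youden's J = sensitivity + specificity - 1 (the optimal dichotomization)."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, int)
    pos, neg = scores[labels == 1], scores[labels == 0]
    # AUC as the probability a positive case outranks a negative one (ties count half)
    auc = np.mean([(p > neg).mean() + 0.5 * (p == neg).mean() for p in pos])
    best_j, best_cut = -1.0, None
    for c in np.unique(scores):
        sens = (pos >= c).mean()   # positives at or above the cutoff
        spec = (neg < c).mean()    # negatives below the cutoff
        j = sens + spec - 1
        if j > best_j:
            best_j, best_cut = j, c
    return auc, best_cut, best_j
```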

  14. Heavy Metal Adsorption onto Kappaphycus sp. from Aqueous Solutions: The Use of Error Functions for Validation of Isotherm and Kinetics Models

    PubMed Central

    Rahman, Md. Sayedur; Sathasivam, Kathiresan V.

    2015-01-01

    Biosorption process is a promising technology for the removal of heavy metals from industrial wastes and effluents using low-cost and effective biosorbents. In the present study, adsorption of Pb2+, Cu2+, Fe2+, and Zn2+ onto dried biomass of red seaweed Kappaphycus sp. was investigated as a function of pH, contact time, initial metal ion concentration, and temperature. The experimental data were evaluated by four isotherm models (Langmuir, Freundlich, Temkin, and Dubinin-Radushkevich) and four kinetic models (pseudo-first-order, pseudo-second-order, Elovich, and intraparticle diffusion models). The adsorption process was feasible, spontaneous, and endothermic in nature. Functional groups in the biomass involved in metal adsorption process were revealed as carboxylic and sulfonic acids and sulfonate by Fourier transform infrared analysis. A total of nine error functions were applied to validate the models. We strongly suggest the analysis of error functions for validating adsorption isotherm and kinetic models using linear methods. The present work shows that the red seaweed Kappaphycus sp. can be used as a potentially low-cost biosorbent for the removal of heavy metal ions from aqueous solutions. Further study is warranted to evaluate its feasibility for the removal of heavy metals from the real environment. PMID:26295032
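
    A few of the error functions commonly applied to validate isotherm fits can be sketched as follows (the paper uses nine such functions; only SSE, ARE, and HYBRID are shown here, and the parameter count p is an assumption for a two-parameter isotherm such as Langmuir):

```python
import numpy as np

def error_functions(q_exp, q_calc, p=2):
    """Three common adsorption-isotherm error functions:
    SSE (sum of squared errors), ARE (average relative error, %), and
    HYBRID (hybrid fractional error function, %). `p` is the number of
    fitted isotherm parameters (assumed 2, e.g. Langmuir qmax and KL)."""
    q_exp = np.asarray(q_exp, float)
    q_calc = np.asarray(q_calc, float)
    n = len(q_exp)
    sse = np.sum((q_calc - q_exp) ** 2)
    are = 100.0 / n * np.sum(np.abs((q_calc - q_exp) / q_exp))
    hybrid = 100.0 / (n - p) * np.sum((q_exp - q_calc) ** 2 / q_exp)
    return sse, are, hybrid
```

    Ranking candidate isotherm or kinetic models by several such error functions, rather than by R2 of a linearized fit alone, is the validation practice the authors advocate.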

  15. Heavy Metal Adsorption onto Kappaphycus sp. from Aqueous Solutions: The Use of Error Functions for Validation of Isotherm and Kinetics Models.

    PubMed

    Rahman, Md Sayedur; Sathasivam, Kathiresan V

    2015-01-01

    Biosorption process is a promising technology for the removal of heavy metals from industrial wastes and effluents using low-cost and effective biosorbents. In the present study, adsorption of Pb(2+), Cu(2+), Fe(2+), and Zn(2+) onto dried biomass of red seaweed Kappaphycus sp. was investigated as a function of pH, contact time, initial metal ion concentration, and temperature. The experimental data were evaluated by four isotherm models (Langmuir, Freundlich, Temkin, and Dubinin-Radushkevich) and four kinetic models (pseudo-first-order, pseudo-second-order, Elovich, and intraparticle diffusion models). The adsorption process was feasible, spontaneous, and endothermic in nature. Functional groups in the biomass involved in metal adsorption process were revealed as carboxylic and sulfonic acids and sulfonate by Fourier transform infrared analysis. A total of nine error functions were applied to validate the models. We strongly suggest the analysis of error functions for validating adsorption isotherm and kinetic models using linear methods. The present work shows that the red seaweed Kappaphycus sp. can be used as a potentially low-cost biosorbent for the removal of heavy metal ions from aqueous solutions. Further study is warranted to evaluate its feasibility for the removal of heavy metals from the real environment.

  16. Validity of the Eating Attitude Test among Exercisers.

    PubMed

    Lane, Helen J; Lane, Andrew M; Matheson, Hilary

    2004-12-01

    Theory testing and construct measurement are inextricably linked. To date, no published research has looked at the factorial validity of an existing eating attitude inventory for use with exercisers. The Eating Attitude Test (EAT) is a 26-item measure that yields a single index of disordered eating attitudes. The original factor analysis showed three interrelated factors: Dieting behavior (13-items), oral control (7-items), and bulimia nervosa-food preoccupation (6-items). The primary purpose of the study was to examine the factorial validity of the EAT among a sample of exercisers. The second purpose was to investigate relationships between eating attitudes scores and selected psychological constructs. In stage one, 598 regular exercisers completed the EAT. Confirmatory factor analysis (CFA) was used to test the single-factor, a three-factor model, and a four-factor model, which distinguished bulimia from food pre-occupation. CFA of the single-factor model (RCFI = 0.66, RMSEA = 0.10), the three-factor-model (RCFI = 0.74; RMSEA = 0.09) showed poor model fit. There was marginal fit for the 4-factor model (RCFI = 0.91, RMSEA = 0.06). Results indicated five-items showed poor factor loadings. After these 5-items were discarded, the three models were re-analyzed. CFA results indicated that the single-factor model (RCFI = 0.76, RMSEA = 0.10) and three-factor model (RCFI = 0.82, RMSEA = 0.08) showed poor fit. CFA results for the four-factor model showed acceptable fit indices (RCFI = 0.98, RMSEA = 0.06). Stage two explored relationships between EAT scores, mood, self-esteem, and motivational indices toward exercise in terms of self-determination, enjoyment and competence. Correlation results indicated that depressed mood scores positively correlated with bulimia and dieting scores. Further, dieting was inversely related with self-determination toward exercising. 
Collectively, findings suggest that a 21-item four-factor model shows promising validity coefficients among exercise participants, and that future research is needed to investigate eating attitudes among samples of exercisers. Key Points: Validity of psychometric measures should be thoroughly investigated; researchers should not assume that a scale validated on one sample will show the same validity coefficients in a different population. The Eating Attitude Test is a commonly used scale, and the present study shows a revised 21-item scale was suitable for exercisers. Researchers using the Eating Attitude Test should use subscales of Dieting, Oral Control, Food Preoccupation, and Bulimia. Future research should involve qualitative techniques and interview exercise participants to explore the nature of eating attitudes.

  17. Real-time remote scientific model validation

    NASA Technical Reports Server (NTRS)

    Frainier, Richard; Groleau, Nicolas

    1994-01-01

    This paper describes flight results from the use of a CLIPS-based validation facility to compare analyzed data from a space life sciences (SLS) experiment to an investigator's preflight model. The comparison, performed in real-time, either confirms or refutes the model and its predictions. This result then becomes the basis for continuing or modifying the investigator's experiment protocol. Typically, neither the astronaut crew in Spacelab nor the ground-based investigator team are able to react to their experiment data in real time. This facility, part of a larger science advisor system called Principal Investigator in a Box, was flown on the space shuttle in October, 1993. The software system aided the conduct of a human vestibular physiology experiment and was able to outperform humans in the tasks of data integrity assurance, data analysis, and scientific model validation. Of twelve preflight hypotheses associated with investigator's model, seven were confirmed and five were rejected or compromised.
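
    The core validation step described here, confirming or refuting preflight hypotheses against analyzed flight data, can be sketched as a tolerance check (a simplified illustration; the flown CLIPS-based system's logic was more elaborate):

```python
def check_hypotheses(predictions, observations, tolerance):
    """Confirm or refute each preflight model prediction by comparing the
    analyzed in-flight value against the predicted value. A hypothesis is
    confirmed when |observed - predicted| <= its tolerance."""
    results = {}
    for name, pred in predictions.items():
        obs = observations[name]
        results[name] = "confirmed" if abs(obs - pred) <= tolerance[name] else "refuted"
    return results
```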

  18. A Nonparametric Statistical Approach to the Validation of Computer Simulation Models

    DTIC Science & Technology

    1985-11-01

    Ballistic Research Laboratory, the Experimental Design and Analysis Branch of the Systems Engineering and Concepts Analysis Division was funded to...2 Winter, E. M., Wisemiller, D. P., and Ujihara, J. K., "Verification and Validation of Engineering Simulations with Minimal Data," Proceedings of the 1976 Summer...used by numerous authors. Law has augmented their approach with specific suggestions for each of the three stages: 1. develop high face-validity

  19. A Comparison of Three Methods for the Analysis of Skin Flap Viability: Reliability and Validity.

    PubMed

    Tim, Carla Roberta; Martignago, Cintia Cristina Santi; da Silva, Viviane Ribeiro; Dos Santos, Estefany Camila Bonfim; Vieira, Fabiana Nascimento; Parizotto, Nivaldo Antonio; Liebano, Richard Eloin

    2018-05-01

    Objective: Technological advances have provided new alternatives to the analysis of skin flap viability in animal models; however, the interrater validity and reliability of these techniques have yet to be analyzed. The present study aimed to evaluate the interrater validity and reliability of three different methods: weight of paper template (WPT), paper template area (PTA), and photographic analysis. Approach: Sixteen male Wistar rats had their cranially based dorsal skin flap elevated. On the seventh postoperative day, the viable tissue area and the necrotic area of the skin flap were recorded using the paper template method and photo image. The evaluation of the percentage of viable tissue was performed using the three methods, simultaneously and independently, by two raters. The analysis of interrater reliability and validity was performed using the intraclass correlation coefficient, and Bland-Altman plot analysis was used to visualize the presence or absence of systematic bias in the evaluations of data validity. Results: The results showed that interrater reliability for WPT, measurement of PTA, and photographic analysis was 0.995, 0.990, and 0.982, respectively. For data validity, a correlation >0.90 was observed for all comparisons made between the three methods. In addition, Bland-Altman plot analysis showed agreement between the comparisons of the methods, and the presence of systematic bias was not observed. Innovation: Digital methods are an excellent choice for assessing skin flap viability; moreover, they make data use and storage easier. Conclusion: Independently of the method used, the interrater reliability and validity proved to be excellent for the analysis of skin flap viability.
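
    The intraclass correlation used for interrater reliability can be sketched for the two-rater case; this computes the consistency form ICC(3,1) from a two-way layout (the paper does not state which ICC form was used, so the choice here is an assumption):

```python
import numpy as np

def icc_consistency(ratings):
    """Two-way mixed, consistency, single-measure ICC(3,1).
    `ratings` is an n_targets x n_raters array of scores."""
    Y = np.asarray(ratings, float)
    n, k = Y.shape
    grand = Y.mean()
    row_means = Y.mean(axis=1)   # per-target means
    col_means = Y.mean(axis=0)   # per-rater means
    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_err = np.sum((Y - grand) ** 2) - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                  # between-targets mean square
    mse = ss_err / ((n - 1) * (k - 1))       # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse)
```

    Two raters who differ only by a constant offset still get ICC(3,1) = 1, since the consistency form ignores systematic rater bias; that bias is what the Bland-Altman plot is used to inspect.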

  20. Cross validation issues in multiobjective clustering

    PubMed Central

    Brusco, Michael J.; Steinley, Douglas

    2018-01-01

    The implementation of multiobjective programming methods in combinatorial data analysis is an emergent area of study with a variety of pragmatic applications in the behavioural sciences. Most notably, multiobjective programming provides a tool for analysts to model trade-offs among competing criteria in clustering, seriation, and unidimensional scaling tasks. Although multiobjective programming has considerable promise, the technique can produce numerically appealing results that lack empirical validity. With this issue in mind, the purpose of this paper is to briefly review viable areas of application for multiobjective programming and, more importantly, to outline the importance of cross-validation when using this method in cluster analysis. PMID:19055857

  1. Ares I-X Flight Test Validation of Control Design Tools in the Frequency-Domain

    NASA Technical Reports Server (NTRS)

    Johnson, Matthew; Hannan, Mike; Brandon, Jay; Derry, Stephen

    2011-01-01

    A major motivation of the Ares I-X flight test program was to Design for Data, in order to maximize the usefulness of the data recorded in support of Ares I modeling and validation of design and analysis tools. The Design for Data effort was intended to enable good post-flight characterizations of the flight control system, the vehicle structural dynamics, and also the aerodynamic characteristics of the vehicle. To extract the necessary data from the system during flight, a set of small predetermined Programmed Test Inputs (PTIs) was injected directly into the TVC signal. These PTIs were designed to excite the necessary vehicle dynamics while exhibiting a minimal impact on loads. The method is similar to common approaches in aircraft flight test programs, but with unique launch vehicle challenges due to rapidly changing states, short duration of flight, a tight flight envelope, and an inability to repeat any test. This paper documents the validation effort of the stability analysis tools against the flight data, performed by comparing the post-flight calculated frequency response of the vehicle to the frequency response calculated by the stability analysis tools used to design and analyze the preflight models during the control design effort. The comparison between flight day frequency response and stability tool analysis for flight of the simulated vehicle shows good agreement and provides a high level of confidence in the stability analysis tools for use in any future program. This is true for both a nominal model as well as for dispersed analysis, which shows that the flight day frequency response is enveloped by the vehicle's preflight uncertainty models.
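
    A frequency response of the kind compared between flight data and the stability tools can be evaluated directly from a transfer function on the imaginary axis (a generic sketch with an illustrative first-order system, not the Ares I-X vehicle model):

```python
import numpy as np

def freq_response(num, den, freqs_hz):
    """Gain (dB) and phase (deg) of H(s) = num(s)/den(s) evaluated at
    s = jw; these are the frequency-domain quantities compared between
    the post-flight-identified and the preflight-analysis models.
    `num` and `den` are polynomial coefficients, highest order first."""
    w = 2 * np.pi * np.asarray(freqs_hz, float)
    s = 1j * w
    H = np.polyval(num, s) / np.polyval(den, s)
    return 20 * np.log10(np.abs(H)), np.degrees(np.angle(H))
```

    For example, a first-order lag H(s) = 1/(s + 1) shows the familiar -3 dB gain and -45 deg phase at its corner frequency.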

  2. Validating archetypes for the Multiple Sclerosis Functional Composite.

    PubMed

    Braun, Michael; Brandt, Alexander Ulrich; Schulz, Stefan; Boeker, Martin

    2014-08-03

    Numerous information models for electronic health records, such as openEHR archetypes, are available. The quality of such clinical models is important to guarantee standardised semantics and to facilitate their interoperability. However, validation aspects are not regarded sufficiently yet. The objective of this report is to investigate the feasibility of archetype development and its community-based validation process, presuming that this review process is a practical way to ensure high-quality information models amending the formal reference model definitions. A standard archetype development approach was applied on a case set of three clinical tests for multiple sclerosis assessment: After an analysis of the tests, the obtained data elements were organised and structured. The appropriate archetype class was selected and the data elements were implemented in an iterative refinement process. Clinical and information modelling experts validated the models in a structured review process. Four new archetypes were developed and publicly deployed in the openEHR Clinical Knowledge Manager, an online platform provided by the openEHR Foundation. Afterwards, these four archetypes were validated by domain experts in a team review. The review was a formalised process, organised in the Clinical Knowledge Manager. Both the development and the review process turned out to be time-consuming tasks, mostly due to difficult selection processes between alternative modelling approaches. The archetype review was a straightforward team process with the goal to validate archetypes pragmatically. The quality of medical information models is crucial to guarantee standardised semantic representation in order to improve interoperability. 
The validation process is a practical way to better harmonise models that diverge due to necessary flexibility left open by the underlying formal reference model definitions.This case study provides evidence that both community- and tool-enabled review processes, structured in the Clinical Knowledge Manager, ensure archetype quality. It offers a pragmatic but feasible way to reduce variation in the representation of clinical information models towards a more unified and interoperable model.

  3. Validating archetypes for the Multiple Sclerosis Functional Composite

    PubMed Central

    2014-01-01

    Background Numerous information models for electronic health records, such as openEHR archetypes, are available. The quality of such clinical models is important to guarantee standardised semantics and to facilitate their interoperability. However, validation aspects are not regarded sufficiently yet. The objective of this report is to investigate the feasibility of archetype development and its community-based validation process, presuming that this review process is a practical way to ensure high-quality information models amending the formal reference model definitions. Methods A standard archetype development approach was applied on a case set of three clinical tests for multiple sclerosis assessment: After an analysis of the tests, the obtained data elements were organised and structured. The appropriate archetype class was selected and the data elements were implemented in an iterative refinement process. Clinical and information modelling experts validated the models in a structured review process. Results Four new archetypes were developed and publicly deployed in the openEHR Clinical Knowledge Manager, an online platform provided by the openEHR Foundation. Afterwards, these four archetypes were validated by domain experts in a team review. The review was a formalised process, organised in the Clinical Knowledge Manager. Both the development and the review process turned out to be time-consuming tasks, mostly due to difficult selection processes between alternative modelling approaches. The archetype review was a straightforward team process with the goal to validate archetypes pragmatically. Conclusions The quality of medical information models is crucial to guarantee standardised semantic representation in order to improve interoperability. The validation process is a practical way to better harmonise models that diverge due to necessary flexibility left open by the underlying formal reference model definitions. 
This case study provides evidence that both community- and tool-enabled review processes, structured in the Clinical Knowledge Manager, ensure archetype quality. It offers a pragmatic but feasible way to reduce variation in the representation of clinical information models towards a more unified and interoperable model. PMID:25087081

  4. Modeling the Biodynamical Response of the Human Head for Injury Analysis

    DTIC Science & Technology

    2001-09-01

    II. BACKGROUND ... A. HUMAN ANATOMY ... facilitate the simulation of the sled acceleration test used for model validation. A. HUMAN ANATOMY: 1. The Spine: The muscles and other soft tissue

  5. [Risk factor analysis of the patients with solitary pulmonary nodules and establishment of a prediction model for the probability of malignancy].

    PubMed

    Wang, X; Xu, Y H; Du, Z Y; Qian, Y J; Xu, Z H; Chen, R; Shi, M H

    2018-02-23

    Objective: This study aims to analyze the relationships among clinical features, radiologic characteristics, and pathological diagnosis in patients with solitary pulmonary nodules (SPNs), and to establish a prediction model for the probability of malignancy. Methods: Clinical data of 372 patients with solitary pulmonary nodules who underwent surgical resection with a definite postoperative pathological diagnosis were retrospectively analyzed. For these cases, we collected clinical and radiologic features including gender, age, smoking history, history of tumor, family history of cancer, location of the lesion, ground-glass opacity, maximum diameter, calcification, vessel convergence sign, vacuole sign, pleural indentation, spiculation, and lobulation. The cases were divided into a modeling group (268 cases) and a validation group (104 cases). A new prediction model was established by logistic regression analysis of the data from the modeling group. The validation group data were then used to test the efficiency of the new model and to compare it with three classical models (Mayo model, VA model, and LiYun model). With the calculated probability values for each model in the validation group, SPSS 22.0 was used to draw receiver operating characteristic curves and assess the predictive value of the new model. Results: 112 benign and 156 malignant SPNs were included in the modeling group. Multivariable logistic regression analysis showed that gender, age, history of tumor, ground-glass opacity, maximum diameter, and spiculation were independent predictors of malignancy in patients with an SPN (P < 0.05). The resulting prediction model for the probability of malignancy is p = e^x/(1 + e^x), where x = -4.8029 - 0.743 × gender + 0.057 × age + 1.306 × history of tumor + 1.305 × ground-glass opacity + 0.051 × maximum diameter + 1.043 × spiculation. 
    When the validation group data were applied to the four prediction models, the area under the curve of our model was 0.742, greater than that of the other models (Mayo 0.696, VA 0.634, LiYun 0.681), although the differences between any two of the four models were not significant (P > 0.05). Conclusions: Patient age, gender, history of tumor, ground-glass opacity, maximum diameter, and spiculation are independent predictors of malignancy in patients with a solitary pulmonary nodule. This logistic regression prediction model is not inferior to the classical models in estimating the probability of malignancy of SPNs.
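    The published regression equation can be applied directly. A minimal Python sketch follows; the predictor codings (1 = feature present, 0 = absent, age in years, maximum diameter in mm, and the direction of the gender code) are assumptions, since the abstract does not state them:

```python
import math

def malignancy_probability(gender, age, tumor_history, ggo, max_diameter, spiculation):
    """Probability of malignancy for a solitary pulmonary nodule, p = e^x / (1 + e^x).

    Binary predictors are assumed coded 1 = present, 0 = absent; age in years,
    maximum diameter in mm. These codings are not stated in the abstract.
    """
    x = (-4.8029
         - 0.743 * gender
         + 0.057 * age
         + 1.306 * tumor_history
         + 1.305 * ggo
         + 0.051 * max_diameter
         + 1.043 * spiculation)
    return math.exp(x) / (1.0 + math.exp(x))

# A 70-year-old with a 25 mm spiculated ground-glass nodule and a tumor history
# scores far higher than a 45-year-old with a small smooth solid nodule.
high = malignancy_probability(gender=0, age=70, tumor_history=1, ggo=1,
                              max_diameter=25, spiculation=1)
low = malignancy_probability(gender=1, age=45, tumor_history=0, ggo=0,
                             max_diameter=6, spiculation=0)
```

    With hypothetical inputs like these, the model separates the two cases by more than an order of magnitude of predicted probability.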

  6. Finite Element Model Development and Validation for Aircraft Fuselage Structures

    NASA Technical Reports Server (NTRS)

    Buehrle, Ralph D.; Fleming, Gary A.; Pappa, Richard S.; Grosveld, Ferdinand W.

    2000-01-01

    The ability to extend the valid frequency range for finite element based structural dynamic predictions using detailed models of the structural components and attachment interfaces is examined for several stiffened aircraft fuselage structures. This extended dynamic prediction capability is needed for the integration of mid-frequency noise control technology. Beam, plate and solid element models of the stiffener components are evaluated. Attachment models between the stiffener and panel skin range from a line along the rivets of the physical structure to a constraint over the entire contact surface. The finite element models are validated using experimental modal analysis results. The increased frequency range results in a corresponding increase in the number of modes, modal density and spatial resolution requirements. In this study, conventional modal tests using accelerometers are complemented with Scanning Laser Doppler Velocimetry and Electro-Optic Holography measurements to further resolve the spatial response characteristics. Whenever possible, component and subassembly modal tests are used to validate the finite element models at lower levels of assembly. Normal mode predictions for different finite element representations of components and assemblies are compared with experimental results to assess the most accurate techniques for modeling aircraft fuselage type structures.

  7. Validation Assessment of a Glass-to-Metal Seal Finite-Element Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jamison, Ryan Dale; Buchheit, Thomas E.; Emery, John M

    Sealing glasses are ubiquitous in high-pressure and high-temperature engineering applications, such as hermetic feed-through electrical connectors. A common connector technology is the glass-to-metal seal, in which a metal shell compresses a sealing glass to create a hermetic seal. Though finite-element analysis has been used to understand and design glass-to-metal seals for many years, there has been little validation of these models. An indentation technique was employed to measure the residual stress on the surface of a simple glass-to-metal seal. Recently developed rate-dependent material models of both Schott 8061 and 304L VAR stainless steel have been applied to a finite-element model of the simple glass-to-metal seal. Model predictions of residual stress based on the evolution of the material models are shown and compared to measured data. The validity of the finite-element predictions is discussed. It is shown that the finite-element model of the glass-to-metal seal accurately predicts the mean residual stress in the glass near the glass-to-metal interface and is valid for this quantity of interest.

  8. Assessment of MARMOT Grain Growth Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fromm, B.; Zhang, Y.; Schwen, D.

    2015-12-01

    This report assesses the MARMOT grain growth model by comparing modeling predictions with experimental results from thermal annealing. The purpose is threefold: (1) to demonstrate the validation approach of using thermal annealing experiments with non-destructive characterization, (2) to test the reconstruction capability and computational efficiency in MOOSE, and (3) to validate the grain growth model and the associated parameters that are implemented in MARMOT for UO2. To assure a rigorous comparison, the 2D and 3D initial experimental microstructures of UO2 samples were characterized using non-destructive synchrotron x-ray methods. The same samples were then annealed at 2273 K for grain growth, and their initial microstructures were used as initial conditions for simulated annealing at the same temperature using MARMOT. After annealing, the final experimental microstructures were characterized again for comparison with the simulation results. So far, the comparison between modeling and experiments has been done for 2D microstructures, and the 3D comparison is underway. The preliminary results demonstrate the usefulness of the non-destructive characterization method for MARMOT grain growth model validation. A detailed analysis of the 3D microstructures is in progress to fully validate the current model in MARMOT.

  9. Reaction Wheel Disturbance Modeling, Jitter Analysis, and Validation Tests for Solar Dynamics Observatory

    NASA Technical Reports Server (NTRS)

    Liu, Kuo-Chia; Maghami, Peiman; Blaurock, Carl

    2008-01-01

    The Solar Dynamics Observatory (SDO) aims to study the Sun's influence on the Earth by understanding the source, storage, and release of the solar energy, and the interior structure of the Sun. During science observations, the jitter stability at the instrument focal plane must be maintained to less than a fraction of an arcsecond for two of the SDO instruments. To meet these stringent requirements, a significant amount of analysis and test effort has been devoted to predicting the jitter induced from various disturbance sources. One of the largest disturbance sources onboard is the reaction wheel. This paper presents the SDO approach on reaction wheel disturbance modeling and jitter analysis. It describes the verification and calibration of the disturbance model, and ground tests performed for validating the reaction wheel jitter analysis. To mitigate the reaction wheel disturbance effects, the wheels will be limited to operate at low wheel speeds based on the current analysis. An on-orbit jitter test algorithm is also presented in the paper which will identify the true wheel speed limits in order to ensure that the wheel jitter requirements are met.

  10. LIVVkit: An extensible, python-based, land ice verification and validation toolkit for ice sheet models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kennedy, Joseph H.; Bennett, Andrew R.; Evans, Katherine J.

    To address the pressing need to better understand the behavior and complex interaction of ice sheets within the global Earth system, significant development of continental-scale, dynamical ice sheet models is underway. Concurrent to the development of the Community Ice Sheet Model (CISM), the corresponding verification and validation (V&V) process is being coordinated through a new, robust, Python-based extensible software package, the Land Ice Verification and Validation toolkit (LIVVkit). Incorporated into the typical ice sheet model development cycle, it provides robust and automated numerical verification, software verification, performance validation, and physical validation analyses on a variety of platforms, from personal laptops to the largest supercomputers. LIVVkit operates on sets of regression test and reference data sets, and provides comparisons for a suite of community prioritized tests, including configuration and parameter variations, bit-for-bit evaluation, and plots of model variables to indicate where differences occur. LIVVkit also provides an easily extensible framework to incorporate and analyze results of new intercomparison projects, new observation data, and new computing platforms. LIVVkit is designed for quick adaptation to additional ice sheet models via abstraction of model specific code, functions, and configurations into an ice sheet model description bundle outside the main LIVVkit structure. Furthermore, through shareable and accessible analysis output, LIVVkit is intended to help developers build confidence in their models and enhance the credibility of ice sheet models overall.

  11. LIVVkit: An extensible, python-based, land ice verification and validation toolkit for ice sheet models

    DOE PAGES

    Kennedy, Joseph H.; Bennett, Andrew R.; Evans, Katherine J.; ...

    2017-03-23

    To address the pressing need to better understand the behavior and complex interaction of ice sheets within the global Earth system, significant development of continental-scale, dynamical ice sheet models is underway. Concurrent to the development of the Community Ice Sheet Model (CISM), the corresponding verification and validation (V&V) process is being coordinated through a new, robust, Python-based extensible software package, the Land Ice Verification and Validation toolkit (LIVVkit). Incorporated into the typical ice sheet model development cycle, it provides robust and automated numerical verification, software verification, performance validation, and physical validation analyses on a variety of platforms, from personal laptops to the largest supercomputers. LIVVkit operates on sets of regression test and reference data sets, and provides comparisons for a suite of community prioritized tests, including configuration and parameter variations, bit-for-bit evaluation, and plots of model variables to indicate where differences occur. LIVVkit also provides an easily extensible framework to incorporate and analyze results of new intercomparison projects, new observation data, and new computing platforms. LIVVkit is designed for quick adaptation to additional ice sheet models via abstraction of model specific code, functions, and configurations into an ice sheet model description bundle outside the main LIVVkit structure. Furthermore, through shareable and accessible analysis output, LIVVkit is intended to help developers build confidence in their models and enhance the credibility of ice sheet models overall.

  12. A New Approach to Computing Information in Measurements of Non-Resolved Space Objects by the Falcon Telescope Network

    DTIC Science & Technology

    2014-09-01

    Analysis Simulation for Advanced Tracking (TASAT) satellite modeling tool [8,9]. The method uses the bi-directional reflectance distribution functions (BRDF) ...directional Reflectance Model Validation and Utilization, Air Force Avionics Laboratory Technical Report, AFAL-TR-73-303, October 1973. [10] Hall, D...

  13. Novel Screening Tool for Stroke Using Artificial Neural Network.

    PubMed

    Abedi, Vida; Goyal, Nitin; Tsivgoulis, Georgios; Hosseinichimeh, Niyousha; Hontecillas, Raquel; Bassaganya-Riera, Josep; Elijovich, Lucas; Metter, Jeffrey E; Alexandrov, Anne W; Liebeskind, David S; Alexandrov, Andrei V; Zand, Ramin

    2017-06-01

    The timely diagnosis of stroke at the initial examination is extremely important given the disease morbidity and narrow time window for intervention. The goal of this study was to develop a supervised learning method to recognize acute cerebral ischemia (ACI) and differentiate it from stroke mimics in an emergency setting. Consecutive patients presenting to the emergency department with stroke-like symptoms, within 4.5 hours of symptom onset, in 2 tertiary care stroke centers were randomized for inclusion in the model. We developed an artificial neural network (ANN) model whose learning algorithm was based on backpropagation. To validate the model, we used a 10-fold cross-validation method. A total of 260 patients (equal numbers of stroke mimics and ACIs) were enrolled for the development and validation of our ANN model. Our analysis indicated that the average sensitivity and specificity of the ANN for the diagnosis of ACI, based on the 10-fold cross-validation analysis, were 80.0% (95% confidence interval, 71.8-86.3) and 86.2% (95% confidence interval, 78.7-91.4), respectively. The median precision of the ANN for the diagnosis of ACI was 92% (95% confidence interval, 88.7-95.3). Our results show that an ANN can be an effective tool for the recognition of ACI and differentiation of ACI from stroke mimics at the initial examination. © 2017 American Heart Association, Inc.
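    The headline figures above come from 10-fold cross-validated sensitivity and specificity. A generic plain-Python sketch of that evaluation loop is shown below; the folds are unstratified for brevity and the training function is caller-supplied, since the study's actual ANN and backpropagation code are not part of the abstract:

```python
import random

def cv_sensitivity_specificity(X, y, train_fn, k=10, seed=0):
    """k-fold cross-validated sensitivity and specificity for a binary classifier.

    train_fn(X_train, y_train) must return a callable mapping one sample to a
    predicted label in {0, 1}. Folds are unstratified for brevity.
    """
    idx = list(range(len(y)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    sens, spec = [], []
    for fold in folds:
        held_out = set(fold)
        train_idx = [i for i in idx if i not in held_out]
        model = train_fn([X[i] for i in train_idx], [y[i] for i in train_idx])
        tp = fn = tn = fp = 0
        for i in fold:
            pred = model(X[i])
            if y[i] == 1:
                tp, fn = tp + (pred == 1), fn + (pred == 0)
            else:
                tn, fp = tn + (pred == 0), fp + (pred == 1)
        if tp + fn:
            sens.append(tp / (tp + fn))   # per-fold sensitivity
        if tn + fp:
            spec.append(tn / (tn + fp))   # per-fold specificity
    return sum(sens) / len(sens), sum(spec) / len(spec)
```

    A perfectly separable toy problem (label = sign of a single feature) recovers sensitivity and specificity of 1.0, which is a convenient sanity check for the fold bookkeeping.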

  14. On Nomological Validity and Auxiliary Assumptions: The Importance of Simultaneously Testing Effects in Social Cognitive Theories Applied to Health Behavior and Some Guidelines

    PubMed Central

    Hagger, Martin S.; Gucciardi, Daniel F.; Chatzisarantis, Nikos L. D.

    2017-01-01

    Tests of social cognitive theories provide informative data on the factors that relate to health behavior, and the processes and mechanisms involved. In the present article, we contend that tests of social cognitive theories should adhere to the principles of nomological validity, defined as the degree to which predictions in a formal theoretical network are confirmed. We highlight the importance of nomological validity tests to ensure theory predictions can be disconfirmed through observation. We argue that researchers should be explicit on the conditions that lead to theory disconfirmation, and identify any auxiliary assumptions on which theory effects may be conditional. We contend that few researchers formally test the nomological validity of theories, or outline conditions that lead to model rejection and the auxiliary assumptions that may explain findings that run counter to hypotheses, raising potential for ‘falsification evasion.’ We present a brief analysis of studies (k = 122) testing four key social cognitive theories in health behavior to illustrate deficiencies in reporting theory tests and evaluations of nomological validity. Our analysis revealed that few articles report explicit statements suggesting that their findings support or reject the hypotheses of the theories tested, even when findings point to rejection. We illustrate the importance of explicit a priori specification of fundamental theory hypotheses and associated auxiliary assumptions, and identification of the conditions which would lead to rejection of theory predictions. We also demonstrate the value of confirmatory analytic techniques, meta-analytic structural equation modeling, and Bayesian analyses in providing robust converging evidence for nomological validity. We provide a set of guidelines for researchers on how to adopt and apply the nomological validity approach to testing health behavior models. PMID:29163307

  15. Modal Test/Analysis Correlation of Space Station Structures Using Nonlinear Sensitivity

    NASA Technical Reports Server (NTRS)

    Gupta, Viney K.; Newell, James F.; Berke, Laszlo; Armand, Sasan

    1992-01-01

    The modal correlation problem is formulated as a constrained optimization problem for validation of finite element models (FEM's). For large-scale structural applications, a pragmatic procedure for substructuring, model verification, and system integration is described to achieve effective modal correlation. The space station substructure FEM's are reduced using Lanczos vectors and integrated into a system FEM using Craig-Bampton component modal synthesis. The optimization code is interfaced with MSC/NASTRAN to solve the problem of modal test/analysis correlation; that is, the problem of validating FEM's for launch and on-orbit coupled loads analysis against experimentally observed frequencies and mode shapes. An iterative perturbation algorithm is derived and implemented to update nonlinear sensitivity (derivatives of eigenvalues and eigenvectors) during optimizer iterations, which reduced the number of finite element analyses.
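    The sensitivity update at the heart of such correlation loops rests on first-order eigenvalue derivatives. A NumPy sketch for the simplified standard eigenproblem K φ = λ φ with orthonormal modes is given below; the generalised (K, M) problem and the eigenvector derivatives used in the paper follow the same pattern but are omitted here:

```python
import numpy as np

def eigenvalue_sensitivities(K, dK):
    """First-order sensitivities dλ_i/dp for the symmetric eigenproblem
    K φ = λ φ: dλ_i = φ_iᵀ (∂K/∂p) φ_i, with orthonormal eigenvectors.
    A unit mass matrix is assumed for brevity."""
    lam, phi = np.linalg.eigh(K)
    dlam = np.array([phi[:, i] @ dK @ phi[:, i] for i in range(len(lam))])
    return lam, dlam

# Finite-difference check of the analytic sensitivities on a toy 2-DOF system.
K = np.array([[2.0, 1.0], [1.0, 3.0]])
dK = np.array([[1.0, 0.0], [0.0, 0.0]])   # ∂K/∂p for some design parameter p
lam, dlam = eigenvalue_sensitivities(K, dK)
h = 1e-6
lam_h = np.linalg.eigvalsh(K + h * dK)    # eigenvalues of the perturbed system
```

    The finite-difference quotient (lam_h - lam)/h should agree with the analytic derivatives to first order, and the sensitivities sum to the trace of ∂K/∂p.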

  16. Modal test/analysis correlation of Space Station structures using nonlinear sensitivity

    NASA Technical Reports Server (NTRS)

    Gupta, Viney K.; Newell, James F.; Berke, Laszlo; Armand, Sasan

    1992-01-01

    The modal correlation problem is formulated as a constrained optimization problem for validation of finite element models (FEM's). For large-scale structural applications, a pragmatic procedure for substructuring, model verification, and system integration is described to achieve effective modal correlations. The space station substructure FEM's are reduced using Lanczos vectors and integrated into a system FEM using Craig-Bampton component modal synthesis. The optimization code is interfaced with MSC/NASTRAN to solve the problem of modal test/analysis correlation; that is, the problem of validating FEM's for launch and on-orbit coupled loads analysis against experimentally observed frequencies and mode shapes. An iterative perturbation algorithm is derived and implemented to update nonlinear sensitivity (derivatives of eigenvalues and eigenvectors) during optimizer iterations, which reduced the number of finite element analyses.

  17. Seal Analysis for the Ares-I Upper Stage Fuel Tank Manhole Cover

    NASA Technical Reports Server (NTRS)

    Phillips, Dawn R.; Wingate, Robert J.

    2010-01-01

    Techniques for studying the performance of Naflex pressure-assisted seals in the Ares-I Upper Stage liquid hydrogen tank manhole cover seal joint are explored. To assess the feasibility of using the identical seal design for the Upper Stage as was used for the Space Shuttle External Tank manhole covers, a preliminary seal deflection analysis using the ABAQUS commercial finite element software is employed. The ABAQUS analyses are performed using three-dimensional symmetric wedge finite element models. This analysis technique is validated by first modeling a heritage External Tank liquid hydrogen tank manhole cover joint and correlating the results to heritage test data. Once the technique is validated, the Upper Stage configuration is modeled. The Upper Stage analyses are performed at 1.4 times the expected pressure to comply with the Constellation Program factor of safety requirement on joint separation. Results from the analyses performed with the External Tank and Upper Stage models demonstrate the effects of several modeling assumptions on the seal deflection. The analyses for Upper Stage show that the integrity of the seal is successfully maintained.

  18. Modelling, validation and analysis of a three-dimensional railway vehicle-track system model with linear and nonlinear track properties in the presence of wheel flats

    NASA Astrophysics Data System (ADS)

    Uzzal, R. U. A.; Ahmed, A. K. W.; Bhat, R. B.

    2013-11-01

    This paper presents dynamic contact loads at the wheel-rail contact point in a three-dimensional railway vehicle-track model, as well as the dynamic response at vehicle-track component levels in the presence of wheel flats. The 17-degree-of-freedom lumped-mass vehicle is modelled as a full car body, two bogies and four wheelsets, whereas the railway track is modelled as two parallel Timoshenko beams periodically supported by lumped masses representing the sleepers. The rail beam is also supported by nonlinear spring and damper elements representing the railpad and ballast. In order to account for the interactions between the railpads, a shear parameter beneath the rail beams has also been included in the model. The wheel-rail contact is modelled using nonlinear Hertzian contact theory. In order to solve the coupled partial and ordinary differential equations of the vehicle-track system, the modal analysis method is employed. Idealised Haversine wheel flats with rounded corners are included in the wheel-rail contact model. The developed model is validated against existing measured and analytical data available in the literature. The nonlinear model is then employed to investigate the wheel-rail impact forces that arise at the wheel-rail interface due to the presence of wheel flats. The validated model is further employed to investigate the dynamic responses of vehicle and track components in terms of displacement, velocity, and acceleration in the presence of a single wheel flat.
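    The nonlinear Hertzian contact law referred to above has the familiar 3/2-power form. A one-line sketch follows; the contact constant used here is illustrative only, as the paper's parameter values are not given in the abstract:

```python
def hertz_contact_force(delta, C_H=8.7e10):
    """Nonlinear Hertzian normal wheel-rail contact force, P = C_H * delta**1.5.

    delta: elastic compression at the contact point (m); C_H: Hertzian constant
    (N/m^1.5), an illustrative value. Loss of contact (delta <= 0), e.g. as the
    wheel flat passes through the contact, gives zero force.
    """
    return C_H * delta ** 1.5 if delta > 0.0 else 0.0
```

    The 3/2-power nonlinearity is what makes the impact force grow faster than linearly with compression: quadrupling the overlap multiplies the force by eight.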

  19. Validating a Cantonese short version of the Zarit Burden Interview (CZBI-Short) for dementia caregivers.

    PubMed

    Tang, Jennifer Yee-Man; Ho, Andy Hau-Yan; Luo, Hao; Wong, Gloria Hoi-Yan; Lau, Bobo Hi-Po; Lum, Terry Yat-Sang; Cheung, Karen Siu-Lan

    2016-09-01

    The present study aimed to develop and validate a Cantonese short version of the Zarit Burden Interview (CZBI-Short) for Hong Kong Chinese dementia caregivers. The 12-item Zarit Burden Interview (ZBI) was translated into spoken Cantonese and back-translated by two bilingual research assistants, and face-validated by a panel of experts. Five hundred Chinese dementia caregivers showing signs of stress reported their burden using the translated ZBI and rated their depressive symptoms, overall health, and care recipients' physical functioning and behavioral problems. The factor structure of the translated scale was identified using principal component analysis and confirmatory factor analysis; internal consistency and item-total correlations were assessed; and concurrent validity was tested by correlating the ZBI with depressive symptoms, self-rated health, and care recipients' physical functioning and behavioral problems. The principal component analysis resulted in 11 items loading on a three-factor model comprising role strain, self-criticism, and negative emotion, which accounted for 59% of the variance. The confirmatory factor analysis supported the three-factor model (CZBI-Short), which explained 61% of the total variance. Cronbach's alpha (0.84) and item-total correlations (rho = 0.39-0.71) indicated that the CZBI-Short had good reliability. The CZBI-Short showed correlations with depressive symptoms (r = 0.50), self-rated health (r = -0.26), and care recipients' physical functioning (r = 0.18-0.26) and disruptive behaviors (r = 0.36). The 12-item CZBI-Short is a concise, reliable, and valid instrument to assess burden in Chinese dementia caregivers in clinical and social care settings.
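    The internal-consistency statistic quoted above, Cronbach's alpha, is simple to compute from raw item scores. A plain-Python sketch using population variances (one of several common variance conventions) follows:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a scale.

    items: list of per-item score lists (equal length, one score per respondent).
    alpha = k/(k-1) * (1 - sum of item variances / variance of the total score),
    using population (divide-by-n) variances.
    """
    k, n = len(items), len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[j] for item in items) for j in range(n)]
    return k / (k - 1) * (1.0 - sum(var(it) for it in items) / var(totals))
```

    Two identical items with equal variances give alpha = 1, while uncorrelated items drive alpha down, which matches the intuition that alpha measures how consistently the items track one underlying construct.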

  20. QSAR studies on triazole derivatives as SGLT inhibitors via CoMFA and CoMSIA

    NASA Astrophysics Data System (ADS)

    Zhi, Hui; Zheng, Junxia; Chang, Yiqun; Li, Qingguo; Liao, Guochao; Wang, Qi; Sun, Pinghua

    2015-10-01

    Forty-six sodium-dependent glucose cotransporter-2 (SGLT-2) inhibitors with hypoglycemic activity were selected to develop three-dimensional quantitative structure-activity relationship (3D-QSAR) models using comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA). A training set of 39 compounds was used to build the models, which were then evaluated by a series of internal and external cross-validation techniques. A test set of 7 compounds was used for the external validation. The CoMFA model gave a q2 value of 0.792 and an r2 value of 0.985. The best CoMSIA model gave a q2 value of 0.633 and an r2 value of 0.895 based on a combination of steric, electrostatic, hydrophobic and hydrogen-bond acceptor effects. The predictive correlation coefficients (rpred2) of the CoMFA and CoMSIA models were 0.872 and 0.839, respectively. Analysis of the contour maps from each model provided insight into the structural requirements for the development of more active SGLT inhibitors, and on the basis of the models 8 new SGLT inhibitors were designed and their activities predicted.
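    The q2 values quoted above are cross-validated correlation coefficients, conventionally computed from the PRESS statistic over leave-one-out predictions. A minimal sketch, assuming the leave-one-out predictions have already been generated:

```python
def q_squared(y_actual, y_loo_pred):
    """Cross-validated q² = 1 - PRESS / SS.

    PRESS is the sum of squared leave-one-out prediction errors; SS is the
    total sum of squares about the mean of the observed activities. In QSAR
    practice q² <= r², and q² > 0.5 is a common acceptance cut-off.
    """
    mean = sum(y_actual) / len(y_actual)
    press = sum((a - p) ** 2 for a, p in zip(y_actual, y_loo_pred))
    ss = sum((a - mean) ** 2 for a in y_actual)
    return 1.0 - press / ss
```

    Perfect leave-one-out predictions give q² = 1, while a model that can only predict the mean gives q² = 0, which is why q² is a far harsher test than the fitted r².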

  1. Visualization and Rule Validation in Human-Behavior Representation

    ERIC Educational Resources Information Center

    Moya, Lisa Jean; McKenzie, Frederic D.; Nguyen, Quynh-Anh H.

    2008-01-01

    Human behavior representation (HBR) models simulate human behaviors and responses. The Joint Crowd Federate [TM] cognitive model developed by the Virginia Modeling, Analysis, and Simulation Center (VMASC) and licensed by WernerAnderson, Inc., models the cognitive behavior of crowds to provide credible crowd behavior in support of military…

  2. Design and validation of a comprehensive fecal incontinence questionnaire.

    PubMed

    Macmillan, Alexandra K; Merrie, Arend E H; Marshall, Roger J; Parry, Bryan R

    2008-10-01

    Fecal incontinence can have a profound effect on quality of life. Its prevalence remains uncertain because of stigma, lack of a consistent definition, and a dearth of validated measures. This study was designed to develop a valid clinical and epidemiologic questionnaire, building on current literature and expertise. Patients and experts undertook face validity testing. Construct validity, criterion validity, and test-retest reliability were then assessed. Construct validity comprised factor analysis and internal consistency of the quality of life scale. Known-groups validity was tested against 77 control subjects using regression models. Questionnaire results were compared with a stool diary for criterion validity. Test-retest reliability was calculated from repeated questionnaire completion. The questionnaire achieved good face validity. It was completed by 104 patients. The quality of life scale had four underlying traits (factor analysis) and high internal consistency (overall Cronbach alpha = 0.97). Patients and control subjects answered the questionnaire significantly differently (P < 0.01) in known-groups validity testing. Criterion validity assessment found mean differences close to zero. Median reliability for the whole questionnaire was 0.79 (range, 0.35-1). This questionnaire compares favorably with other available instruments, although the interpretation of stool consistency requires further research. Its sensitivity to treatment still needs to be investigated.

  3. A model for plant lighting system selection.

    PubMed

    Ciolkosz, D E; Albright, L D; Sager, J C; Langhans, R W

    2002-01-01

    A decision model is presented that compares lighting systems for a plant growth scenario and chooses the most appropriate system from a given set of possible choices. The model utilizes a Multiple Attribute Utility Theory approach, and incorporates expert input and performance simulations to calculate a utility value for each lighting system being considered. The system with the highest utility is deemed the most appropriate system. The model was applied to a greenhouse scenario, and analyses were conducted to test the model's output for validity. Parameter variation indicates that the model performed as expected. Analysis of model output indicates that differences in utility among the candidate lighting systems were sufficiently large to give confidence that the model's order of selection was valid.
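    The additive aggregation at the core of a Multiple Attribute Utility Theory model can be sketched in a few lines. The attribute names, weights, and single-attribute utilities below are hypothetical illustrations, not the paper's elicited values:

```python
def select_lighting_system(systems, weights):
    """Pick the system with the highest overall utility U = Σ w_a · u_a
    (additive MAUT aggregation).

    systems: {name: {attribute: single-attribute utility in [0, 1]}}
    weights: {attribute: weight}, normally summing to 1.
    """
    def utility(attrs):
        return sum(weights[a] * attrs[a] for a in weights)
    return max(systems, key=lambda name: utility(systems[name]))

# Hypothetical greenhouse comparison with energy cost weighted most heavily.
weights = {"energy_cost": 0.5, "light_quality": 0.3, "capital_cost": 0.2}
systems = {
    "HPS": {"energy_cost": 0.4, "light_quality": 0.7, "capital_cost": 0.9},
    "LED": {"energy_cost": 0.9, "light_quality": 0.8, "capital_cost": 0.4},
}
best = select_lighting_system(systems, weights)
```

    In this toy comparison the heavier weight on energy cost outweighs the LED system's higher capital cost, mirroring how the weights encode the expert input described above.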

  4. Beware the black box: investigating the sensitivity of FEA simulations to modelling factors in comparative biomechanics.

    PubMed

    Walmsley, Christopher W; McCurry, Matthew R; Clausen, Phillip D; McHenry, Colin R

    2013-01-01

    Finite element analysis (FEA) is a computational technique of growing popularity in the field of comparative biomechanics, and is an easily accessible platform for form-function analyses of biological structures. However, its rapid evolution in recent years from a novel approach to common practice demands some scrutiny in regards to the validity of results and the appropriateness of assumptions inherent in setting up simulations. Both validation and sensitivity analyses remain unexplored in many comparative analyses, and assumptions considered to be 'reasonable' are often assumed to have little influence on the results and their interpretation. Here we report an extensive sensitivity analysis where high resolution finite element (FE) models of mandibles from seven species of crocodile were analysed under loads typical for comparative analysis: biting, shaking, and twisting. Simulations explored the effect on both the absolute response and the interspecies pattern of results to variations in commonly used input parameters. Our sensitivity analysis focuses on assumptions relating to the selection of material properties (heterogeneous or homogeneous), scaling (standardising volume, surface area, or length), tooth position (front, mid, or back tooth engagement), and linear load case (type of loading for each feeding type). Our findings show that in a comparative context, FE models are far less sensitive to the selection of material property values and scaling to either volume or surface area than they are to those assumptions relating to the functional aspects of the simulation, such as tooth position and linear load case. Results show a complex interaction between simulation assumptions, depending on the combination of assumptions and the overall shape of each specimen. Keeping assumptions consistent between models in an analysis does not ensure that results can be generalised beyond the specific set of assumptions used. 
Logically, different comparative datasets would also be sensitive to identical simulation assumptions; hence, modelling assumptions should undergo rigorous selection. The accuracy of input data is paramount, and simulations should focus on taking biological context into account. Ideally, validation of simulations should be addressed; however, where validation is impossible or unfeasible, sensitivity analyses should be performed to identify which assumptions have the greatest influence upon the results.

  5. Beware the black box: investigating the sensitivity of FEA simulations to modelling factors in comparative biomechanics

    PubMed Central

    McCurry, Matthew R.; Clausen, Phillip D.; McHenry, Colin R.

    2013-01-01

    Finite element analysis (FEA) is a computational technique of growing popularity in the field of comparative biomechanics, and is an easily accessible platform for form-function analyses of biological structures. However, its rapid evolution in recent years from a novel approach to common practice demands some scrutiny in regards to the validity of results and the appropriateness of assumptions inherent in setting up simulations. Both validation and sensitivity analyses remain unexplored in many comparative analyses, and assumptions considered to be ‘reasonable’ are often assumed to have little influence on the results and their interpretation. Here we report an extensive sensitivity analysis where high resolution finite element (FE) models of mandibles from seven species of crocodile were analysed under loads typical for comparative analysis: biting, shaking, and twisting. Simulations explored the effect on both the absolute response and the interspecies pattern of results to variations in commonly used input parameters. Our sensitivity analysis focuses on assumptions relating to the selection of material properties (heterogeneous or homogeneous), scaling (standardising volume, surface area, or length), tooth position (front, mid, or back tooth engagement), and linear load case (type of loading for each feeding type). Our findings show that in a comparative context, FE models are far less sensitive to the selection of material property values and scaling to either volume or surface area than they are to those assumptions relating to the functional aspects of the simulation, such as tooth position and linear load case. Results show a complex interaction between simulation assumptions, depending on the combination of assumptions and the overall shape of each specimen. Keeping assumptions consistent between models in an analysis does not ensure that results can be generalised beyond the specific set of assumptions used. 
Logically, different comparative datasets would also be sensitive to identical simulation assumptions; hence, modelling assumptions should undergo rigorous selection. The accuracy of input data is paramount, and simulations should focus on taking biological context into account. Ideally, validation of simulations should be addressed; however, where validation is impossible or unfeasible, sensitivity analyses should be performed to identify which assumptions have the greatest influence upon the results. PMID:24255817
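As a rough illustration of the kind of pattern-sensitivity check advocated above, one can rank the specimens under each assumption set and correlate the ranks; a high correlation means the comparative (interspecies) conclusions are robust to that assumption. The snippet below is a sketch with invented stress values, not data from the study:

```python
import numpy as np

def rank_agreement(results_a, results_b):
    """Spearman rank correlation between two sets of model outputs,
    e.g. peak mandible stress per specimen under two assumption sets."""
    ra = np.argsort(np.argsort(results_a)).astype(float)
    rb = np.argsort(np.argsort(results_b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float(np.sum(ra * rb) / np.sqrt(np.sum(ra**2) * np.sum(rb**2)))

# Hypothetical peak stresses for seven specimens under two scaling choices.
volume_scaled = [12.0, 9.5, 15.2, 8.1, 11.0, 14.3, 10.2]
length_scaled = [13.1, 9.9, 14.8, 8.5, 10.4, 15.0, 11.1]
print(rank_agreement(volume_scaled, length_scaled))
```

A value near 1 indicates the interspecies pattern is preserved; values well below 1 flag an assumption the comparison is sensitive to.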

  6. Validation of a Latent Construct for Dementia in a Population-Wide Dataset from Singapore.

    PubMed

    Peh, Chao Xu; Abdin, Edimansyah; Vaingankar, Janhavi A; Verma, Swapna; Chua, Boon Yiang; Sagayadevan, Vathsala; Seow, Esmond; Zhang, YunJue; Shahwan, Shazana; Ng, Li Ling; Prince, Martin; Chong, Siow Ann; Subramaniam, Mythily

    2017-01-01

The latent variable δ has been proposed as a proxy for dementia. Previous validation studies have been conducted using convenience samples, and it is currently unknown how δ performs in population-wide data. This study aimed to validate δ in Singapore using population-wide epidemiological study data on persons aged 60 and above. δ was constructed using items from the Community Screening Instrument for Dementia (CSI-D) and the World Health Organization Disability Assessment Schedule (WHODAS II). Confirmatory factor analysis (CFA) was conducted to examine δ model fit. Convergent validity was examined with the Clinical Dementia Rating scale (CDR) and GMS-AGECAT dementia; divergent validity was examined with GMS-AGECAT depression. The δ model demonstrated good fit to the data, χ2(55) = 249.71, p < 0.001, CFI = 0.990, TLI = 0.997, RMSEA = 0.037. Latent variable δ was significantly associated with CDR and GMS-AGECAT dementia (range: β = 0.32 to 0.63) and was not associated with GMS-AGECAT depression. Compared to the unadjusted model, δ model fit was poorer when adjusted for age, gender, ethnicity, and education. The study found some support for δ as a proxy for dementia in Singapore based on population data: both convergent and divergent validity were established. In addition, the δ model structure appeared to be influenced by age, gender, ethnicity, and education covariates.
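The reported RMSEA relates to the other fit statistics through the standard formula RMSEA = sqrt(max(χ² − df, 0) / (df·(N − 1))). A quick sketch; the sample size used below is illustrative only, not taken from the paper:

```python
import math

def rmsea(chi2, df, n):
    """Root mean square error of approximation from a chi-square
    fit statistic, its degrees of freedom, and the sample size."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Reported fit: chi2(55) = 249.71; n = 2565 is a hypothetical sample size.
print(round(rmsea(249.71, 55, 2565), 3))
```

When χ² does not exceed its degrees of freedom, the estimate is clamped to zero, which is why well-fitting models can report RMSEA = 0.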

  7. MotiveValidator: interactive web-based validation of ligand and residue structure in biomolecular complexes.

    PubMed

    Vařeková, Radka Svobodová; Jaiswal, Deepti; Sehnal, David; Ionescu, Crina-Maria; Geidl, Stanislav; Pravda, Lukáš; Horský, Vladimír; Wimmerová, Michaela; Koča, Jaroslav

    2014-07-01

    Structure validation has become a major issue in the structural biology community, and an essential step is checking the ligand structure. This paper introduces MotiveValidator, a web-based application for the validation of ligands and residues in PDB or PDBx/mmCIF format files provided by the user. Specifically, MotiveValidator is able to evaluate in a straightforward manner whether the ligand or residue being studied has a correct annotation (3-letter code), i.e. if it has the same topology and stereochemistry as the model ligand or residue with this annotation. If not, MotiveValidator explicitly describes the differences. MotiveValidator offers a user-friendly, interactive and platform-independent environment for validating structures obtained by any type of experiment. The results of the validation are presented in both tabular and graphical form, facilitating their interpretation. MotiveValidator can process thousands of ligands or residues in a single validation run that takes no more than a few minutes. MotiveValidator can be used for testing single structures, or the analysis of large sets of ligands or fragments prepared for binding site analysis, docking or virtual screening. MotiveValidator is freely available via the Internet at http://ncbr.muni.cz/MotiveValidator. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  8. Validity analysis on merged and averaged data using within and between analysis: focus on effect of qualitative social capital on self-rated health.

    PubMed

    Shin, Sang Soo; Shin, Young-Jeon

    2016-01-01

With an increasing number of studies highlighting regional social capital (SC) as a determinant of health, many studies use multi-level analysis with community SC variables formed by merging and averaging community residents' survey responses. Sufficient examination is required to confirm that such merged and averaged data can represent the community. This study therefore analyzes the validity of the selected indicators and their applicability in multi-level analysis. Within and between analysis (WABA) was performed after creating community variables from merged and averaged residents' responses in the 2013 Community Health Survey in Korea, with subjective self-rated health as the dependent variable. Further analysis was performed following the model suggested by the WABA result. Both the E-test and the WABA results indicated that single-level analysis should be performed using the qualitative SC variable with cluster mean centering. In single-level multivariate regression analysis, qualitative SC with cluster mean centering showed a positive effect on self-rated health (0.054, p<0.001), although the result did not differ substantially from analyses using SC variables without cluster mean centering or from multi-level analysis. Because variation in qualitative SC was larger within communities than between them, we conclude that relational analysis of individual self-rated health can be performed within the group, using cluster mean centering. Tests other than the WABA can be performed in the future to confirm the validity of using community variables and their applicability in multi-level analysis.
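Cluster mean centering, the transformation the WABA result calls for, simply removes each community's mean from its members' scores so that only within-community variation remains. A minimal sketch with invented scores:

```python
import numpy as np

def cluster_mean_center(x, groups):
    """Subtract each group's (community's) mean from its members'
    scores, leaving only within-group variation."""
    x = np.asarray(x, dtype=float)
    groups = np.asarray(groups)
    centered = np.empty_like(x)
    for g in np.unique(groups):
        mask = groups == g
        centered[mask] = x[mask] - x[mask].mean()
    return centered

sc = [3.0, 5.0, 2.0, 4.0]   # hypothetical social-capital scores
community = [1, 1, 2, 2]
print(cluster_mean_center(sc, community))  # [-1.  1. -1.  1.]
```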

  9. Predictive and concurrent validity of the Braden scale in long-term care: a meta-analysis.

    PubMed

    Wilchesky, Machelle; Lungu, Ovidiu

    2015-01-01

    Pressure ulcer prevention is an important long-term care (LTC) quality indicator. While the Braden Scale is a recommended risk assessment tool, there is a paucity of information specifically pertaining to its validity within the LTC setting. We, therefore, undertook a systematic review and meta-analysis comparing Braden Scale predictive and concurrent validity within this context. We searched the Medline, EMBASE, PsychINFO and PubMed databases from 1985-2014 for studies containing the requisite information to analyze tool validity. Our initial search yielded 3,773 articles. Eleven datasets emanating from nine published studies describing 40,361 residents met all meta-analysis inclusion criteria and were analyzed using random effects models. Pooled sensitivity, specificity, positive predictive value (PPV), and negative predictive values were 86%, 38%, 28%, and 93%, respectively. Specificity was poorer in concurrent samples as compared with predictive samples (38% vs. 72%), while PPV was low in both sample types (25 and 37%). Though random effects model results showed that the Scale had good overall predictive ability [RR, 4.33; 95% CI, 3.28-5.72], none of the concurrent samples were found to have "optimal" sensitivity and specificity. In conclusion, the appropriateness of the Braden Scale in LTC is questionable given its low specificity and PPV, in particular in concurrent validity studies. Future studies should further explore the extent to which the apparent low validity of the Scale in LTC is due to the choice of cutoff point and/or preventive strategies implemented by LTC staff as a matter of course. © 2015 by the Wound Healing Society.
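The pooled metrics reported above all derive from a single 2×2 table of risk-assessment result against observed pressure ulcers. A sketch with illustrative counts, chosen only so that the sensitivity and specificity match the reported 86% and 38% (these are not the review's actual pooled data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a pooled 2x2 table
    (tp = at-risk and ulcerated, tn = not-at-risk and ulcer-free)."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Illustrative counts only; PPV/NPV depend on prevalence, so they will
# not match the review's pooled values.
m = diagnostic_metrics(tp=86, fp=186, fn=14, tn=114)
print(m["sensitivity"], m["specificity"])
```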

  10. Assessing Model Fit: Caveats and Recommendations for Confirmatory Factor Analysis and Exploratory Structural Equation Modeling

    ERIC Educational Resources Information Center

    Perry, John L.; Nicholls, Adam R.; Clough, Peter J.; Crust, Lee

    2015-01-01

    Despite the limitations of overgeneralizing cutoff values for confirmatory factor analysis (CFA; e.g., Marsh, Hau, & Wen, 2004), they are still often employed as golden rules for assessing factorial validity in sport and exercise psychology. The purpose of this study was to investigate the appropriateness of using the CFA approach with these…

  11. A validation of the construct and reliability of an emotional intelligence scale applied to nursing students1

    PubMed Central

    Espinoza-Venegas, Maritza; Sanhueza-Alvarado, Olivia; Ramírez-Elizondo, Noé; Sáez-Carrillo, Katia

    2015-01-01

    OBJECTIVE: The current study aimed to validate the construct and reliability of an emotional intelligence scale. METHOD: The Trait Meta-Mood Scale-24 was applied to 349 nursing students. The process included content validation, which involved expert reviews, pilot testing, measurements of reliability using Cronbach's alpha, and factor analysis to corroborate the validity of the theoretical model's construct. RESULTS: Adequate Cronbach coefficients were obtained for all three dimensions, and factor analysis confirmed the scale's dimensions (perception, comprehension, and regulation). CONCLUSION: The Trait Meta-Mood Scale is a reliable and valid tool to measure the emotional intelligence of nursing students. Its use allows for accurate determinations of individuals' abilities to interpret and manage emotions. At the same time, this new construct is of potential importance for measurements in nursing leadership; educational, organizational, and personal improvements; and the establishment of effective relationships with patients. PMID:25806642
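Cronbach's alpha, the reliability coefficient used in this and several neighbouring records, can be computed directly from a respondents-by-items score matrix. A self-contained sketch with toy data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha; `items` is an (n_respondents, n_items) array.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Toy data: three items that rise and fall together -> high alpha.
scores = np.array([[2, 3, 2], [4, 4, 5], [3, 3, 3], [5, 5, 4]])
print(cronbach_alpha(scores))
```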

  12. Commercial Supersonics Technology Project - Status of Airport Noise

    NASA Technical Reports Server (NTRS)

    Bridges, James

    2016-01-01

    The Commercial Supersonic Technology Project has been developing databases, computational tools, and system models to prepare for a level 1 milestone, the Low Noise Propulsion Tech Challenge, to be delivered Sept 2016. Steps taken to prepare for the final validation test are given, including system analysis, code validation, and risk reduction testing.

  13. Development and Validation of Two Scales to Measure Elaboration and Behaviors Associated with Stewardship in Children

    ERIC Educational Resources Information Center

    Vezeau, Susan Lynn; Powell, Robert B.; Stern, Marc J.; Moore, D. DeWayne; Wright, Brett A.

    2017-01-01

    This investigation examines the development of two scales that measure elaboration and behaviors associated with stewardship in children. The scales were developed using confirmatory factor analysis to investigate their construct validity, reliability, and psychometric properties. Results suggest that a second-order factor model structure provides…

  14. Model development and validation of geometrically complex eddy current coils using finite element methods

    NASA Astrophysics Data System (ADS)

    Brown, Alexander; Eviston, Connor

    2017-02-01

    Multiple FEM models of complex eddy current coil geometries were created and validated to calculate the change of impedance due to the presence of a notch. Capable realistic simulations of eddy current inspections are required for model assisted probability of detection (MAPOD) studies, inversion algorithms, experimental verification, and tailored probe design for NDE applications. An FEM solver was chosen to model complex real world situations including varying probe dimensions and orientations along with complex probe geometries. This will also enable creation of a probe model library database with variable parameters. Verification and validation was performed using other commercially available eddy current modeling software as well as experimentally collected benchmark data. Data analysis and comparison showed that the created models were able to correctly model the probe and conductor interactions and accurately calculate the change in impedance of several experimental scenarios with acceptable error. The promising results of the models enabled the start of an eddy current probe model library to give experimenters easy access to powerful parameter based eddy current models for alternate project applications.

  15. Prediction models for the risk of spontaneous preterm birth based on maternal characteristics: a systematic review and independent external validation.

    PubMed

    Meertens, Linda J E; van Montfort, Pim; Scheepers, Hubertina C J; van Kuijk, Sander M J; Aardenburg, Robert; Langenveld, Josje; van Dooren, Ivo M A; Zwaan, Iris M; Spaanderman, Marc E A; Smits, Luc J M

    2018-04-17

Prediction models may contribute to personalized risk-based management of women at high risk of spontaneous preterm delivery. Although prediction models are published frequently, often with promising results, external validation is generally lacking. We performed a systematic review of prediction models for the risk of spontaneous preterm birth based on routine clinical parameters. Additionally, we externally validated and evaluated the clinical potential of the models. Prediction models based on routinely collected maternal parameters obtainable during the first 16 weeks of gestation were eligible for selection. Risk of bias was assessed according to the CHARMS guidelines. We validated the selected models in a Dutch multicenter prospective cohort study comprising 2614 unselected pregnant women. Information on predictors was obtained by a web-based questionnaire. Predictive performance of the models was quantified by the area under the receiver operating characteristic curve (AUC) and calibration plots for the outcomes spontaneous preterm birth <37 weeks and <34 weeks of gestation. Clinical value was evaluated by means of decision curve analysis and by calculating classification accuracy for different risk thresholds. Four studies describing five prediction models fulfilled the eligibility criteria. Risk of bias assessment revealed a moderate to high risk of bias in three studies. The AUC of the models ranged from 0.54 to 0.67 and from 0.56 to 0.70 for the outcomes spontaneous preterm birth <37 weeks and <34 weeks of gestation, respectively. A subanalysis showed that the models discriminated poorly (AUC 0.51-0.56) for nulliparous women. Although we recalibrated the models, two models retained evidence of overfitting. The decision curve analysis showed low clinical benefit for the best performing models. This review revealed several reporting and methodological shortcomings of published prediction models for spontaneous preterm birth. 
Our external validation study indicated that none of the models had the ability to predict spontaneous preterm birth adequately in our population. Further improvement of prediction models, using recent knowledge about both model development and potential risk factors, is necessary to provide an added value in personalized risk assessment of spontaneous preterm birth. © 2018 The Authors Acta Obstetricia et Gynecologica Scandinavica published by John Wiley & Sons Ltd on behalf of Nordic Federation of Societies of Obstetrics and Gynecology (NFOG).
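Decision curve analysis, used above to judge clinical value, reduces to computing a net benefit at each risk threshold (the Vickers and Elkin formulation: NB = TP/n − FP/n · pt/(1 − pt)). A minimal sketch with toy data:

```python
import numpy as np

def net_benefit(risk, outcome, threshold):
    """Decision-curve net benefit of treating everyone whose predicted
    risk is at or above the threshold."""
    risk = np.asarray(risk, dtype=float)
    outcome = np.asarray(outcome, dtype=bool)
    n = len(risk)
    treat = risk >= threshold
    tp = np.sum(treat & outcome)
    fp = np.sum(treat & ~outcome)
    return tp / n - fp / n * threshold / (1.0 - threshold)

# Toy data: a model that separates cases perfectly.
risk = [0.9, 0.8, 0.2, 0.1]
outcome = [1, 1, 0, 0]
print(net_benefit(risk, outcome, 0.5))  # 0.5, i.e. the prevalence
```

Plotting net benefit across thresholds against the "treat all" and "treat none" strategies yields the decision curve itself.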

  16. Evaluation of bending modulus of lipid bilayers using undulation and orientation analysis

    NASA Astrophysics Data System (ADS)

    Chaurasia, Adarsh K.; Rukangu, Andrew M.; Philen, Michael K.; Seidel, Gary D.; Freeman, Eric C.

    2018-03-01

    In the current paper, phospholipid bilayers are modeled using coarse-grained molecular dynamics simulations with the MARTINI force field. The extracted molecular trajectories are analyzed using Fourier analysis of the undulations and orientation vectors to establish the differences between the two approaches for evaluating the bending modulus. The current work evaluates and extends the implementation of the Fourier analysis for molecular trajectories using a weighted horizon-based averaging approach. The effect of numerical parameters in the analysis of these trajectories is explored by conducting parametric studies. Computational modeling results are validated against experimentally characterized bending modulus of lipid membranes using a shape fluctuation analysis. The computational framework is then used to estimate the bending moduli for different types of lipids (phosphocholine, phosphoethanolamine, and phosphoglycerol). This work provides greater insight into the numerical aspects of evaluating the bilayer bending modulus, provides validation for the orientation analysis technique, and explores differences in bending moduli based on differences in the lipid nanostructures.
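Assuming the standard Helfrich undulation spectrum, ⟨|h_q|²⟩ = k_BT/(Aκq⁴), the bending modulus can be estimated by inverting each Fourier mode and averaging; note that prefactor and Fourier conventions vary between implementations, so this is a sketch in reduced units rather than the paper's weighted-horizon method:

```python
import numpy as np

KB_T = 1.0  # thermal energy in reduced units

def fit_kappa(q, spectrum, area):
    """Estimate the bending modulus from an undulation spectrum,
    assuming <|h_q|^2> = kB*T / (area * kappa * q**4)."""
    q = np.asarray(q, dtype=float)
    spectrum = np.asarray(spectrum, dtype=float)
    # Each mode gives an independent kappa estimate; average them.
    return np.mean(KB_T / (area * spectrum * q**4))

# Synthetic spectrum generated with kappa = 20 (reduced units), area = 1.
q = np.linspace(0.1, 1.0, 10)
kappa_true = 20.0
spectrum = KB_T / (1.0 * kappa_true * q**4)
print(fit_kappa(q, spectrum, area=1.0))  # ~20.0, the input value
```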

  17. Impact Testing on Reinforced Carbon-Carbon Flat Panels with Ice Projectiles for the Space Shuttle Return to Flight Program

    NASA Technical Reports Server (NTRS)

    Melis, Matthew E.; Revilock, Duane M.; Pereira, Michael J.; Lyle, Karen H.

    2009-01-01

Following the tragedy of the Orbiter Columbia (STS-107) on February 1, 2003, a major effort commenced to develop a better understanding of debris impacts and their effect on the space shuttle subsystems. An initiative to develop and validate physics-based computer models to predict damage from such impacts was a fundamental component of this effort. To develop the models it was necessary to physically characterize reinforced carbon-carbon (RCC) along with ice and foam debris materials, which could shed on ascent and impact the orbiter RCC leading edges. The validated models enabled the launch system community to use the impact analysis software LS-DYNA (Livermore Software Technology Corp.) to predict damage by potential and actual impact events on the orbiter leading edge and nose cap thermal protection systems. Validation of the material models was done through a three-level approach: Level 1--fundamental tests to obtain independent static and dynamic constitutive model properties of materials of interest, Level 2--subcomponent impact tests to provide highly controlled impact test data for the correlation and validation of the models, and Level 3--full-scale orbiter leading-edge impact tests to establish the final level of confidence for the analysis methodology. This report discusses the Level 2 test program conducted in the NASA Glenn Research Center (GRC) Ballistic Impact Laboratory with ice projectile impact tests on flat RCC panels, and presents the data observed. The Level 2 testing consisted of 54 impact tests in the NASA GRC Ballistic Impact Laboratory on 6- by 6-in. and 6- by 12-in. flat plates of RCC and evaluated three types of debris projectiles: single-crystal, polycrystal, and "soft" ice. These impact tests helped determine the level of damage generated in the RCC flat plates by each projectile and validated the use of the ice and RCC models for use in LS-DYNA.

  18. Impact Testing on Reinforced Carbon-Carbon Flat Panels With BX-265 and PDL-1034 External Tank Foam for the Space Shuttle Return to Flight Program

    NASA Technical Reports Server (NTRS)

    Melis, Matthew E.; Revilock, Duane M.; Pereira, Michael J.; Lyle, Karen H.

    2009-01-01

Following the tragedy of the Orbiter Columbia (STS-107) on February 1, 2003, a major effort commenced to develop a better understanding of debris impacts and their effect on the space shuttle subsystems. An initiative to develop and validate physics-based computer models to predict damage from such impacts was a fundamental component of this effort. To develop the models it was necessary to physically characterize reinforced carbon-carbon (RCC) along with ice and foam debris materials, which could shed on ascent and impact the orbiter RCC leading edges. The validated models enabled the launch system community to use the impact analysis software LS-DYNA (Livermore Software Technology Corp.) to predict damage by potential and actual impact events on the orbiter leading edge and nose cap thermal protection systems. Validation of the material models was done through a three-level approach: Level 1--fundamental tests to obtain independent static and dynamic constitutive model properties of materials of interest, Level 2--subcomponent impact tests to provide highly controlled impact test data for the correlation and validation of the models, and Level 3--full-scale orbiter leading-edge impact tests to establish the final level of confidence for the analysis methodology. This report discusses the Level 2 test program conducted in the NASA Glenn Research Center (GRC) Ballistic Impact Laboratory with external tank foam impact tests on flat RCC panels, and presents the data observed. The Level 2 testing consisted of 54 impact tests in the NASA GRC Ballistic Impact Laboratory on 6- by 6-in. and 6- by 12-in. flat plates of RCC and evaluated two types of debris projectiles: BX-265 and PDL-1034 external tank foam. These impact tests helped determine the level of damage generated in the RCC flat plates by each projectile and validated the use of the foam and RCC models for use in LS-DYNA.

  19. Development and validation of a nutrition knowledge questionnaire for a Canadian population.

    PubMed

    Bradette-Laplante, Maude; Carbonneau, Élise; Provencher, Véronique; Bégin, Catherine; Robitaille, Julie; Desroches, Sophie; Vohl, Marie-Claude; Corneau, Louise; Lemieux, Simone

    2017-05-01

    The present study aimed to develop and validate a nutrition knowledge questionnaire in a sample of French Canadians from the province of Quebec, taking into account dietary guidelines. A thirty-eight-item questionnaire was developed by the research team and evaluated for content validity by an expert panel, and then administered to respondents. Face validity and construct validity were measured in a pre-test. Exploratory factor analysis and covariance structure analysis were performed to verify the structure of the questionnaire and identify problematic items. Internal consistency and test-retest reliability were evaluated through a validation study. Online survey. Six nutrition and psychology experts, fifteen registered dietitians (RD) and 180 lay people participated. Content validity evaluation resulted in the removal of two items and reformulation of one item. Following face validity, one item was reformulated. Construct validity was found to be adequate, with higher scores for RD v. non-RD (21·5 (sd 2·1) v. 15·7 (sd 3·0) out of 24, P<0·001). Exploratory factor analysis revealed that the questionnaire contained only one factor. Covariance structure analysis led to removal of sixteen items. Internal consistency for the overall questionnaire was adequate (Cronbach's α=0·73). Assessment of test-retest reliability resulted in significant associations for the total knowledge score (r=0·59, P<0·001). This nutrition knowledge questionnaire was found to be a suitable instrument which can be used to measure levels of nutrition knowledge in a Canadian population. It could also serve as a model for the development of similar instruments in other populations.

  20. The Chinese version of the Outcome Expectations for Exercise scale: validation study.

    PubMed

    Lee, Ling-Ling; Chiu, Yu-Yun; Ho, Chin-Chih; Wu, Shu-Chen; Watson, Roger

    2011-06-01

The English nine-item Outcome Expectations for Exercise (OEE) scale has been tested and found valid for use in various settings, particularly among older people, with good internal consistency. Data on the use of the OEE scale among older Chinese people living in the community, and on how cultural differences might affect its administration, are limited. This study aimed to test the validity and reliability of the Chinese version of the Outcome Expectations for Exercise scale among older people. A cross-sectional validation study was designed to test the Chinese version of the OEE scale (OEE-C). Reliability was examined by testing both the internal consistency of the overall scale and the squared multiple correlation coefficient for the single item measure. The validity of the scale was tested on the basis of both a traditional psychometric test and a confirmatory factor analysis using structural equation modelling. The Mokken Scaling Procedure (MSP) was used to investigate whether there were any hierarchical, cumulative sets of items in the measure. The OEE-C scale was tested in a group of older people in Taiwan (n=108, mean age=77.1). The scale showed acceptable internal consistency (alpha=.85) and model fit. Evidence of the validity of the measure was demonstrated by the tests for criterion-related validity and construct validity. There was a statistically significant correlation between exercise outcome expectations and exercise self-efficacy (r=.34, p<.01). A Mokken Scaling Procedure analysis retained all nine items of the scale, and the resulting scale was reliable and statistically significant (p=.0008). The results obtained in the present study provided acceptable levels of reliability and validity evidence for the Chinese Outcome Expectations for Exercise scale when used with older people in Taiwan. 
Future testing of the OEE-C scale needs to be carried out to see whether these results are generalisable to older Chinese people living in urban areas. Copyright © 2010 Elsevier Ltd. All rights reserved.

  1. Nursing Job Rotation Stress Scale development and psychometric evaluation.

    PubMed

    Huang, Shan; Lin, Yu-Hua; Kao, Chia-Chan; Yang, Hsing-Yu; Anne, Ya-Li; Wang, Cheng-Hua

    2016-01-01

The aim of this study was to develop and assess the reliability and validity of the Nurse Job Rotation Stress Scale (NJRS). A convenience sampling method was used to recruit two groups of nurses (n = 150 and 253) from a 2751-bed medical center in southern Taiwan. The NJRS was developed and its reliability and validity were then evaluated. Exploratory factor analysis revealed that three factors accounted for 74.11% of the explained variance. Confirmatory factor analysis supported the three-factor structure and the construct validity. Cronbach's alpha for the 10-item model was 0.87, with high linearity. The NJRS can be considered a reliable and valid scale for measuring nurse job rotation stress for nursing management and research purposes. © 2015 Japan Academy of Nursing Science.
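The "74.11% of the explained variance" figure is the share of total variance carried by the leading factors. A quick PCA-style approximation of that summary statistic, using the eigenvalues of the item correlation matrix (toy data, for illustration only):

```python
import numpy as np

def explained_variance(data, n_factors):
    """Share of total variance carried by the leading eigenvalues of
    the item correlation matrix (an EFA-style summary statistic)."""
    corr = np.corrcoef(np.asarray(data, dtype=float), rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
    return eigvals[:n_factors].sum() / eigvals.sum()

# Toy responses: two items that move in lockstep, so a single factor
# explains essentially all of the variance.
data = [[1, 2], [2, 4], [3, 6], [4, 8]]
print(explained_variance(data, 1))  # ~1.0
```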

  2. 6DOF Testing of the SLS Inertial Navigation Unit

    NASA Technical Reports Server (NTRS)

    Geohagan, Kevin; Bernard, Bill; Oliver, T. Emerson; Leggett, Jared; Strickland, Dennis

    2018-01-01

The Navigation System on the NASA Space Launch System (SLS) Block 1 vehicle performs initial alignment of the Inertial Navigation System (INS) navigation frame through gyrocompass alignment (GCA). Because the navigation architecture of the SLS Block 1 vehicle is purely inertial, the accuracy of the achieved orbit relative to mission requirements is very sensitive to initial alignment accuracy. The assessment of this sensitivity and many others via simulation is part of the SLS Model-Based Design and Model-Based Requirements approach. As part of this approach, 6DOF Monte Carlo simulation is used in large part to develop and demonstrate verification of program requirements. To facilitate this and the GN&C flight software design process, an SLS-Program-controlled Design Math Model (DMM) of the SLS INS was developed by the SLS Navigation Team. The SLS INS model implements all of the key functions of the hardware, namely GCA, inertial navigation, and FDIR (Fault Detection, Isolation, and Recovery), in support of SLS GN&C design requirements verification. Despite the strong sensitivity to initial alignment, GCA accuracy requirements were not verified by test due to program cost and schedule constraints; instead, the system relies upon assessments performed using the SLS INS model. In order to verify SLS program requirements by analysis, the SLS INS model is verified and validated against flight hardware. In lieu of direct testing of GCA accuracy in support of requirement verification, the SLS Navigation Team proposed and conducted an engineering test to, among other things, validate the GCA performance and overall behavior of the SLS INS model through comparison with test data. This paper details dynamic hardware testing of the SLS INS, conducted by the SLS Navigation Team at Marshall Space Flight Center's 6DOF Table Facility, in support of GCA performance characterization and INS model validation. A 6DOF motion platform was used to produce pad twist and sway dynamics while a simulated SLS flight computer communicated with the INS. Tests conducted include an evaluation of GCA algorithm robustness to increasingly dynamic pad environments, an examination of GCA algorithm stability and accuracy over long durations, and a long-duration static test to gather enough data for Allan variance analysis. Test setup, execution, and data analysis are discussed, including analysis performed in support of SLS INS model validation.
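Allan variance, used for the long-duration static test above, clusters the rate samples by averaging time and measures the scatter between successive cluster means; plotting the Allan deviation against averaging time separates gyro noise terms by their slopes. A sketch of the non-overlapping estimator:

```python
import numpy as np

def allan_variance(rates, m):
    """Non-overlapping Allan variance of a sampled rate signal for
    cluster size m (averaging time tau = m * sample period)."""
    rates = np.asarray(rates, dtype=float)
    n_clusters = len(rates) // m
    cluster_means = rates[: n_clusters * m].reshape(n_clusters, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(cluster_means) ** 2)

# A constant (noise-free) rate signal has zero Allan variance at every tau.
print(allan_variance(np.ones(1000), 10))  # 0.0
```

In practice the overlapping estimator is preferred for its lower variance; the non-overlapping form above is the simplest correct version.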

  3. Robustness Analysis and Reliable Flight Regime Estimation of an Integrated Resilient Control System for a Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Shin, Jong-Yeob; Belcastro, Christine

    2008-01-01

Formal robustness analysis of aircraft control upset prevention and recovery systems could play an important role in their validation and ultimate certification. As a part of the validation process, this paper describes an analysis method for determining a reliable flight regime in the flight envelope, within which an integrated resilient control system can achieve the desired performance of tracking command signals and detecting additive faults in the presence of parameter uncertainty and unmodeled dynamics. To calculate a reliable flight regime, a structured singular value analysis method is applied to analyze the closed-loop system over the entire flight envelope. To use the structured singular value analysis method, a linear fractional transformation (LFT) model of the transport aircraft's longitudinal dynamics is developed over the flight envelope using a preliminary LFT modeling software tool developed at the NASA Langley Research Center, which utilizes a matrix-based computational approach. The developed LFT model can capture the original nonlinear dynamics over the flight envelope with the Δ block, which contains the key varying parameters (angle of attack and velocity) and real parameter uncertainties (aerodynamic coefficient uncertainty and moment of inertia uncertainty). Using the developed LFT model and a formal robustness analysis method, a reliable flight regime is calculated for a transport aircraft closed-loop system.
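An LFT model packages the varying and uncertain parameters into a Δ block that closes a feedback loop around a fixed partitioned matrix M; evaluating the model for a given Δ is a single matrix formula, F_u(M, Δ) = M22 + M21 Δ (I − M11 Δ)⁻¹ M12. A minimal numerical sketch (the matrices below are invented, not the paper's aircraft model):

```python
import numpy as np

def upper_lft(m11, m12, m21, m22, delta):
    """Close the upper loop of a partitioned LFT model M with the
    parameter/uncertainty block Delta:
    F_u(M, Delta) = M22 + M21 Delta (I - M11 Delta)^-1 M12."""
    n = m11.shape[0]
    return m22 + m21 @ delta @ np.linalg.inv(np.eye(n) - m11 @ delta) @ m12

# Scalar example with hypothetical entries.
m11 = np.array([[0.5]]); m12 = np.array([[1.0]])
m21 = np.array([[1.0]]); m22 = np.array([[2.0]])
print(upper_lft(m11, m12, m21, m22, np.zeros((1, 1))))   # nominal: [[2.]]
print(upper_lft(m11, m12, m21, m22, np.array([[0.5]])))  # perturbed: ~2.667
```

Structured singular value (μ) analysis then asks how large a structured Δ can be before (I − M11 Δ) loses invertibility or performance is violated.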

  4. Comparison of two weighted integration models for the cueing task: linear and likelihood

    NASA Technical Reports Server (NTRS)

    Shimozaki, Steven S.; Eckstein, Miguel P.; Abbey, Craig K.

    2003-01-01

    In a task in which the observer must detect a signal at two locations, presenting a precue that predicts the location of a signal leads to improved performance with a valid cue (signal location matches the cue), compared to an invalid cue (signal location does not match the cue). The cue validity effect has often been explained with a limited-capacity attentional mechanism improving the perceptual quality at the cued location. Alternatively, the cueing effect can also be explained by unlimited-capacity models that assume a weighted combination of noisy responses across the two locations. We compare two weighted integration models: a linear model and a sum of weighted likelihoods model based on a Bayesian observer. While qualitatively these models are similar, quantitatively they predict different cue validity effects as the signal-to-noise ratio (SNR) increases. To test these models, three observers performed a cued discrimination task of Gaussian targets with an 80% valid precue across a broad range of SNRs. Analysis of a limited-capacity attentional switching model was also included and rejected. The sum of weighted likelihoods model best described the psychophysical results, suggesting that human observers approximate a weighted combination of likelihoods, and not a weighted linear combination.
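The two integration rules can be contrasted with a small Monte Carlo sketch. The parameters below (cue weights, SNR, decision criteria) are hypothetical illustrations, not the authors' fitted observer models; the point is only that both rules produce a cue validity effect from weighting alone, with no capacity limit:

```python
import numpy as np

def cue_validity_effect(d, w=(0.8, 0.2), n=200000, seed=1):
    """Hit rates for valid vs. invalid cues under two integration rules:
    a weighted linear sum and a weighted sum of likelihood ratios."""
    rng = np.random.default_rng(seed)
    w = np.asarray(w)
    out = {}
    for name, loc in (("valid", 0), ("invalid", 1)):
        x = rng.standard_normal((n, 2))
        x[:, loc] += d                          # signal added at one location
        linear = x @ w                          # weighted linear combination
        lik = np.exp(d * x - d ** 2 / 2) @ w    # weighted likelihood ratios
        out[name] = (np.mean(linear > d / 2), np.mean(lik > 1.0))
    return out
```

Running this across a range of d values is one way to see the quantitatively different cue validity effects the abstract refers to.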

  5. Mesoscale Characterization of Fracture Properties of Steel Fiber-Reinforced Concrete Using a Lattice-Particle Model.

    PubMed

    Montero-Chacón, Francisco; Cifuentes, Héctor; Medina, Fernando

    2017-02-21

    This work presents a lattice-particle model for the analysis of steel fiber-reinforced concrete (SFRC). In this approach, fibers are explicitly modeled and connected to the concrete matrix lattice via interface elements. The interface behavior was calibrated by means of pullout tests and a range for the bond properties is proposed. The model was validated with analytical and experimental results under uniaxial tension and compression, demonstrating the ability of the model to correctly describe the effect of fiber volume fraction and distribution on fracture properties of SFRC. The lattice-particle model was integrated into a hierarchical homogenization-based scheme in which macroscopic material parameters are obtained from mesoscale simulations. Moreover, a representative volume element (RVE) analysis was carried out, and the results show that such an RVE does exist in the post-peak regime, up until localization takes place. Finally, the multiscale upscaling strategy was successfully validated with three-point bending tests.

  6. Mesoscale Characterization of Fracture Properties of Steel Fiber-Reinforced Concrete Using a Lattice–Particle Model

    PubMed Central

    Montero-Chacón, Francisco; Cifuentes, Héctor; Medina, Fernando

    2017-01-01

    This work presents a lattice–particle model for the analysis of steel fiber-reinforced concrete (SFRC). In this approach, fibers are explicitly modeled and connected to the concrete matrix lattice via interface elements. The interface behavior was calibrated by means of pullout tests and a range for the bond properties is proposed. The model was validated with analytical and experimental results under uniaxial tension and compression, demonstrating the ability of the model to correctly describe the effect of fiber volume fraction and distribution on fracture properties of SFRC. The lattice–particle model was integrated into a hierarchical homogenization-based scheme in which macroscopic material parameters are obtained from mesoscale simulations. Moreover, a representative volume element (RVE) analysis was carried out, and the results show that such an RVE does exist in the post-peak regime, up until localization takes place. Finally, the multiscale upscaling strategy was successfully validated with three-point bending tests. PMID:28772568

  7. Quasi-likelihood generalized linear regression analysis of fatality risk data

    DOT National Transportation Integrated Search

    2009-01-01

    Transportation-related fatality risk is a function of many interacting human, vehicle, and environmental factors. Statistically valid analysis of such data is challenged both by the complexity of plausible structural models relating fatality rates t...
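The quasi-likelihood idea in the title can be illustrated compactly: fit a log-link Poisson mean model by IRLS and inflate the standard errors by the Pearson dispersion, so overdispersed count data do not produce overconfident inference. This is a generic sketch of the technique, not the report's own methodology:

```python
import numpy as np

def quasi_poisson_fit(X, y, n_iter=50):
    """Poisson IRLS; quasi-likelihood inflates covariance by the Pearson dispersion."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        W = mu                                   # Poisson working weights
        z = X @ beta + (y - mu) / mu             # working response
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    mu = np.exp(X @ beta)
    # Pearson chi-square / residual dof; ~1 for genuinely Poisson data
    dispersion = np.sum((y - mu) ** 2 / mu) / (len(y) - X.shape[1])
    cov = dispersion * np.linalg.inv(X.T @ (mu[:, None] * X))
    return beta, np.sqrt(np.diag(cov)), dispersion
```

With an exposure offset (log of vehicle-miles, say) the same machinery models fatality rates rather than raw counts.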

  8. Non-Contact Heart Rate and Blood Pressure Estimations from Video Analysis and Machine Learning Modelling Applied to Food Sensory Responses: A Case Study for Chocolate.

    PubMed

    Gonzalez Viejo, Claudia; Fuentes, Sigfredo; Torrico, Damir D; Dunshea, Frank R

    2018-06-03

    Traditional methods to assess heart rate (HR) and blood pressure (BP) are intrusive and can affect results in sensory analysis of food as participants are aware of the sensors. This paper aims to validate a non-contact method to measure HR using the photoplethysmography (PPG) technique and to develop models to predict the real HR and BP based on raw video analysis (RVA), with an example application in chocolate consumption using machine learning (ML). The RVA used a computer vision algorithm based on luminosity changes on the different RGB color channels using three face regions (forehead and both cheeks). To validate the proposed method and ML models, a home oscillometric monitor and a finger sensor were used. Results showed high correlations with the G color channel (R² = 0.83). Two ML models were developed using the three face regions: (i) Model 1, which predicts HR and BP from the RVA outputs, with R = 0.85, and (ii) Model 2, a time-series model that maps the HR, magnitude, and luminosity outputs of the RVA to HR values every second, with R = 0.97. An application to the sensory analysis of chocolate showed significant correlations of changes in HR and BP with chocolate hardness and purchase intention.
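The core of the RVA signal path, recovering a pulse frequency from luminosity changes in one color channel, can be sketched as follows. This assumes a per-frame mean green-channel trace is already available; the function name and band limits are illustrative, not from the paper:

```python
import numpy as np

def estimate_hr_bpm(green_means, fps, lo=0.7, hi=3.0):
    """Estimate heart rate from a mean green-channel trace via the dominant FFT peak."""
    x = green_means - np.mean(green_means)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= lo) & (freqs <= hi)   # plausible HR band: 42-180 BPM
    return 60.0 * freqs[band][np.argmax(power[band])]
```

Restricting the search to a physiological band is what keeps illumination drift and motion at other frequencies from dominating the estimate.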

  9. RELAP-7 Software Verification and Validation Plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Curtis L.; Choi, Yong-Joon; Zou, Ling

    This INL plan comprehensively describes the software for RELAP-7 and documents the software, interface, and software design requirements for the application. The plan also describes the testing-based software verification and validation (SV&V) process—a set of specially designed software models used to test RELAP-7. The RELAP-7 (Reactor Excursion and Leak Analysis Program) code is a nuclear reactor system safety analysis code being developed at Idaho National Laboratory (INL). The code is based on the INL’s modern scientific software development framework – MOOSE (Multi-Physics Object-Oriented Simulation Environment). The overall design goal of RELAP-7 is to take advantage of the previous thirty years of advancements in computer architecture, software design, numerical integration methods, and physical models. The end result will be a reactor systems analysis capability that retains and improves upon RELAP5’s capability and extends the analysis capability for all reactor system simulation scenarios.

  10. SLS Navigation Model-Based Design Approach

    NASA Technical Reports Server (NTRS)

    Oliver, T. Emerson; Anzalone, Evan; Geohagan, Kevin; Bernard, Bill; Park, Thomas

    2018-01-01

    The SLS Program chose to implement a Model-based Design and Model-based Requirements approach for managing component design information and system requirements. This approach differs from previous large-scale design efforts at Marshall Space Flight Center where design documentation alone conveyed information required for vehicle design and analysis and where extensive requirements sets were used to scope and constrain the design. The SLS Navigation Team has been responsible for the Program-controlled Design Math Models (DMMs) which describe and represent the performance of the Inertial Navigation System (INS) and the Rate Gyro Assemblies (RGAs) used by Guidance, Navigation, and Controls (GN&C). The SLS Navigation Team is also responsible for the navigation algorithms. The navigation algorithms are delivered for implementation on the flight hardware as a DMM. For the SLS Block 1-B design, the additional GPS Receiver hardware is managed as a DMM at the vehicle design level. This paper provides a discussion of the processes and methods used to engineer, design, and coordinate engineering trades and performance assessments using SLS practices as applied to the GN&C system, with a particular focus on the Navigation components. These include composing system requirements, requirements verification, model development, model verification and validation, and modeling and analysis approaches. The Model-based Design and Requirements approach does not reduce the effort associated with the design process versus previous processes used at Marshall Space Flight Center. Instead, the approach takes advantage of overlap between the requirements development and management process, and the design and analysis process by efficiently combining the control (i.e. the requirement) and the design mechanisms. The design mechanism is the representation of the component behavior and performance in design and analysis tools. 
The focus in the early design process shifts from the development and management of design requirements to the development of usable models, model requirements, and model verification and validation efforts. The models themselves are represented in C/C++ code and accompanying data files. Under the idealized process, potential ambiguity in specification is reduced because the model must be implementable, versus a requirement, which is not necessarily subject to this constraint. Further, the models are shown to emulate the hardware during validation. For models developed by the Navigation Team, a common interface/standalone environment was developed. The common environment allows for easy implementation in design and analysis tools. Mechanisms such as unit test cases ensure implementation as the developer intended. The model verification and validation process provides a very high level of component design insight. The origin and implementation of the SLS variant of Model-based Design is described from the perspective of the SLS Navigation Team. The format of the models and the requirements are described. The Model-based Design approach has many benefits but is not without potential complications. Key lessons learned associated with the implementation of the Model-based Design approach and process from infancy to verification and certification are discussed.

  11. Validation of a program for supercritical power plant calculations

    NASA Astrophysics Data System (ADS)

    Kotowicz, Janusz; Łukowicz, Henryk; Bartela, Łukasz; Michalski, Sebastian

    2011-12-01

    This article describes the validation of a supercritical steam cycle. The cycle model was created with the commercial program GateCycle and validated using in-house code of the Institute of Power Engineering and Turbomachinery. The Institute's in-house code has been used extensively for industrial power plant calculations with good results. In the first step of the validation process, assumptions were made about the live steam temperature and pressure, net power, characteristic quantities for high- and low-pressure regenerative heat exchangers and pressure losses in heat exchangers. These assumptions were then used to develop a steam cycle model in GateCycle and a model based on the code developed in-house at the Institute of Power Engineering and Turbomachinery. Properties, such as thermodynamic parameters at characteristic points of the steam cycle, net power values and efficiencies, heat provided to the steam cycle and heat taken from the steam cycle, were compared. The last step of the analysis was calculation of relative errors of the compared values. The method used for relative error calculations is presented in the paper. The resulting relative errors are very small, generally not exceeding 0.1%. Based on our analysis, it can be concluded that using the GateCycle software for calculations of supercritical power plants is possible.
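The code-to-code comparison described here reduces to computing relative deviations of one model's outputs from the reference and checking them against a tolerance. A minimal sketch (quantity names and the 0.1% threshold used here are illustrative):

```python
def relative_error_percent(reference, value):
    """Relative deviation of a model result from the reference code, in percent."""
    return abs(value - reference) / abs(reference) * 100.0

def validate_against_reference(results, reference, tol_percent=0.1):
    """Return only the quantities whose relative error exceeds the tolerance."""
    return {k: relative_error_percent(reference[k], results[k])
            for k in reference
            if relative_error_percent(reference[k], results[k]) > tol_percent}
```

An empty result dictionary corresponds to the paper's conclusion that all compared quantities agree to within roughly 0.1%.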

  12. Simulation of fMRI signals to validate dynamic causal modeling estimation

    NASA Astrophysics Data System (ADS)

    Anandwala, Mobin; Siadat, Mohamad-Reza; Hadi, Shamil M.

    2012-03-01

    Through cognitive tasks certain brain areas are activated and also receive increased blood flow. This is modeled through a state system consisting of two separate parts: one that deals with the neural node stimulation and the other with the blood response during that stimulation. The rationale behind using this state system is to validate existing analysis methods such as DCM to see what levels of noise they can handle. Using the forward Euler method, this system was approximated as a series of difference equations. What was obtained was the hemodynamic response for each brain area, and this was used to test an analysis tool that estimates functional connectivity between brain areas with a given amount of noise. The importance of modeling this system lies not only in having a model of the neural response but also in enabling comparison with actual data obtained through functional imaging scans.
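The forward Euler step used here is, in general form, x[k+1] = x[k] + dt·(A x[k] + u). A minimal sketch for a generic linear state system (not the authors' specific neural/hemodynamic equations):

```python
import numpy as np

def forward_euler(A, x0, u, dt, n_steps):
    """Integrate x' = A x + u with fixed-step forward Euler: x[k+1] = x[k] + dt*(A x[k] + u)."""
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(n_steps):
        x = x + dt * (A @ x + u)
        traj.append(x.copy())
    return np.array(traj)
```

The step size dt trades accuracy against cost: halving dt roughly halves the global error of this first-order scheme.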

  13. The problem of fouling in submerged membrane bioreactors - Model validation and experimental evidence

    NASA Astrophysics Data System (ADS)

    Tsibranska, Irene; Vlaev, Serafim; Tylkowski, Bartosz

    2018-01-01

    Integrating biological treatment with membrane separation has found a broad area of applications and industrial attention. Submerged membrane bioreactors (SMBRs), based on membrane modules immersed in the bioreactor, or side-stream modules connected in a recycle loop, have been employed in different biotechnological processes for separation of thermally unstable products. Fouling is one of the most important challenges in integrated SMBRs. A number of works are devoted to fouling analysis and its treatment, especially exploring the opportunity for enhanced fouling control in SMBRs. The main goal of the review is to provide a comprehensive yet concise overview of modeling the fouling in SMBRs in view of the challenges of model validation, either by real-system measurements at different scales or by analysis of the obtained theoretical results. The review is focused on the current state of research applying computational fluid dynamics (CFD) modeling techniques.

  14. Refining and validating a conceptual model of Clinical Nurse Leader integrated care delivery.

    PubMed

    Bender, Miriam; Williams, Marjory; Su, Wei; Hites, Lisle

    2017-02-01

    To empirically validate a conceptual model of Clinical Nurse Leader integrated care delivery. There is limited evidence of frontline care delivery models that consistently achieve quality patient outcomes. Clinical Nurse Leader integrated care delivery is a promising nursing model with a growing record of success. However, theoretical clarity is necessary to generate causal evidence of effectiveness. Sequential mixed methods. A preliminary Clinical Nurse Leader practice model was refined and survey items developed to correspond with model domains, using focus groups and a Delphi process with a multi-professional expert panel. The survey was administered in 2015 to clinicians and administrators involved in Clinical Nurse Leader initiatives. Confirmatory factor analysis and structural equation modelling were used to validate the measurement and model structure. Final sample n = 518. The model incorporates 13 components organized into five conceptual domains: 'Readiness for Clinical Nurse Leader integrated care delivery'; 'Structuring Clinical Nurse Leader integrated care delivery'; 'Clinical Nurse Leader Practice: Continuous Clinical Leadership'; 'Outcomes of Clinical Nurse Leader integrated care delivery'; and 'Value'. Sample data had good fit with the specified model and two-level measurement structure. All hypothesized pathways were significant, with strong coefficients suggesting good fit between theorized and observed path relationships. The validated model articulates an explanatory pathway of Clinical Nurse Leader integrated care delivery, including Clinical Nurse Leader practices that result in improved care dynamics and patient outcomes. The validated model provides a basis for testing in practice to generate evidence that can be deployed across the healthcare spectrum. © 2016 John Wiley & Sons Ltd.

  15. Wavelet Filtering to Reduce Conservatism in Aeroservoelastic Robust Stability Margins

    NASA Technical Reports Server (NTRS)

    Brenner, Marty; Lind, Rick

    1998-01-01

    Wavelet analysis for filtering and system identification was used to improve the estimation of aeroservoelastic stability margins. The conservatism of the robust stability margins was reduced with parametric and nonparametric time-frequency analysis of flight data in the model validation process. Nonparametric wavelet processing of data was used to reduce the effects of external disturbances and unmodeled dynamics. Parametric estimates of modal stability were also extracted using the wavelet transform. Computation of robust stability margins for stability boundary prediction depends on uncertainty descriptions derived from the data for model validation. F-18 High Alpha Research Vehicle aeroservoelastic flight test data demonstrated improved robust stability prediction by extension of the stability boundary beyond the flight regime.
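Nonparametric wavelet filtering of this kind can be sketched with a complex Morlet kernel, whose convolution magnitude localizes a modal response in both time and frequency. The code below is a generic illustration with hypothetical parameters, not the processing applied to the F-18 data:

```python
import numpy as np

def morlet_filter(x, fs, f0, width=6.0):
    """Time-frequency magnitude of x around f0 via convolution with a complex Morlet wavelet."""
    sigma_t = width / (2 * np.pi * f0)                 # time-domain width of the envelope
    t = np.arange(-4 * sigma_t, 4 * sigma_t, 1 / fs)
    wavelet = np.exp(2j * np.pi * f0 * t) * np.exp(-t ** 2 / (2 * sigma_t ** 2))
    wavelet /= np.sum(np.abs(wavelet))                 # unit-gain normalization
    return np.abs(np.convolve(x, wavelet, mode="same"))
```

Because the kernel is band-limited around f0, broadband disturbances outside the modal band are attenuated, which is the filtering effect exploited to reduce conservatism in the uncertainty description.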

  16. Modal testing for model validation of structures with discrete nonlinearities.

    PubMed

    Ewins, D J; Weekes, B; delli Carri, A

    2015-09-28

    Model validation using data from modal tests is now widely practiced in many industries for advanced structural dynamic design analysis, especially where structural integrity is a primary requirement. These industries tend to demand highly efficient designs for their critical structures which, as a result, are increasingly operating in regimes where traditional linearity assumptions are no longer adequate. In particular, many modern structures are found to contain localized areas, often around joints or boundaries, where the actual mechanical behaviour is far from linear. Such structures need to have appropriate representation of these nonlinear features incorporated into the otherwise largely linear models that are used for design and operation. This paper proposes an approach to this task which is an extension of existing linear techniques, especially in the testing phase, involving only just as much nonlinear analysis as is necessary to construct a model which is good enough, or 'valid': i.e. capable of predicting the nonlinear response behaviour of the structure under all in-service operating and test conditions with a prescribed accuracy. A short-list of methods described in the recent literature categorized using our framework is given, which identifies those areas in which further development is most urgently required. © 2015 The Authors.

  17. An investigation of the factor structure of the beck depression inventory-II in anorexia nervosa.

    PubMed

    Fuss, Samantha; Trottier, Kathryn; Carter, Jacqueline

    2015-01-01

    Symptoms of depression frequently co-occur with eating disorders and have been associated with negative outcomes. Self-report measures such as the Beck Depression Inventory-II (BDI-II) are commonly used to assess for the presence of depressive symptoms in eating disorders, but the instrument's factor structure in this population has not been examined. The purposes of this study were to explore the factor structure of the BDI-II in a sample of individuals (N = 437) with anorexia nervosa undergoing inpatient treatment and to examine changes in depressive symptoms on each of the identified factors following a course of treatment for anorexia nervosa in order to provide evidence supporting the construct validity of the measure. Exploratory factor analysis revealed that a three-factor model reflected the best fit for the data. Confirmatory factor analysis was used to validate this model against competing models and the three-factor model exhibited strong model fit characteristics. BDI-II scores were significantly reduced on all three factors following inpatient treatment, which supported the construct validity of the scale. The BDI-II appears to be reliable in this population, and the factor structure identified through this analysis may offer predictive utility for identifying individuals who may have more difficulty achieving weight restoration in the context of inpatient treatment. Copyright © 2014 John Wiley & Sons, Ltd and Eating Disorders Association.

  18. Quantitative structure-activity relationship study of P2X7 receptor inhibitors using combination of principal component analysis and artificial intelligence methods.

    PubMed

    Ahmadi, Mehdi; Shahlaei, Mohsen

    2015-01-01

    P2X7 antagonist activity for a set of 49 molecules of the P2X7 receptor antagonists, derivatives of purine, was modeled with the aid of chemometric and artificial intelligence techniques. The activity of these compounds was estimated by means of a combination of principal component analysis (PCA), as a well-known data reduction method, genetic algorithm (GA), as a variable selection technique, and artificial neural network (ANN), as a non-linear modeling method. First, a linear regression combined with PCA (principal component regression) was applied to model the structure-activity relationships, and afterwards a combination of PCA and the ANN algorithm was employed to accurately predict the biological activity of the P2X7 antagonists. PCA preserves as much of the information as possible contained in the original data set. The seven PCs most relevant to the studied activity were selected as inputs to the ANN by an efficient variable selection method, GA. The best computational neural network model was a fully-connected, feed-forward model with 7-7-1 architecture. The developed ANN model was fully evaluated by different validation techniques, including internal and external validation, and chemical applicability domain. All validations showed that the constructed quantitative structure-activity relationship model is robust and satisfactory.
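The PCA step that feeds the downstream models can be sketched with an SVD-based projection. This is a generic illustration with a hypothetical descriptor matrix, not the study's 49-molecule data set:

```python
import numpy as np

def pca_scores(X, n_components):
    """Project mean-centered descriptors onto the top principal components (via SVD)."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = s ** 2 / np.sum(s ** 2)        # fraction of variance per component
    return Xc @ Vt[:n_components].T, explained[:n_components]
```

The returned scores are what a GA would then winnow before they reach the ANN; working with a handful of PCs instead of raw correlated descriptors is what makes the small-sample QSAR fit tractable.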

  19. Quantitative structure–activity relationship study of P2X7 receptor inhibitors using combination of principal component analysis and artificial intelligence methods

    PubMed Central

    Ahmadi, Mehdi; Shahlaei, Mohsen

    2015-01-01

    P2X7 antagonist activity for a set of 49 molecules of the P2X7 receptor antagonists, derivatives of purine, was modeled with the aid of chemometric and artificial intelligence techniques. The activity of these compounds was estimated by means of a combination of principal component analysis (PCA), as a well-known data reduction method, genetic algorithm (GA), as a variable selection technique, and artificial neural network (ANN), as a non-linear modeling method. First, a linear regression combined with PCA (principal component regression) was applied to model the structure–activity relationships, and afterwards a combination of PCA and the ANN algorithm was employed to accurately predict the biological activity of the P2X7 antagonists. PCA preserves as much of the information as possible contained in the original data set. The seven PCs most relevant to the studied activity were selected as inputs to the ANN by an efficient variable selection method, GA. The best computational neural network model was a fully-connected, feed-forward model with 7-7-1 architecture. The developed ANN model was fully evaluated by different validation techniques, including internal and external validation, and chemical applicability domain. All validations showed that the constructed quantitative structure–activity relationship model is robust and satisfactory. PMID:26600858

  20. Improving mathematical problem solving ability through problem-based learning and authentic assessment for the students of Bali State Polytechnic

    NASA Astrophysics Data System (ADS)

    Darma, I. K.

    2018-01-01

    This research is aimed at determining: 1) the differences in mathematical problem solving ability between students facilitated with the problem-based learning model and the conventional learning model, 2) the differences in mathematical problem solving ability between students facilitated with the authentic and the conventional assessment model, and 3) the interaction effect between learning and assessment model on mathematical problem solving. The research was conducted in Bali State Polytechnic, using a 2×2 factorial experimental design. The samples of this research were 110 students. The data were collected using a theoretically and empirically validated test. Instruments were validated using Aiken's content validity technique and item analysis, and the data were then analyzed using ANOVA. The result of the analysis shows that the students facilitated with the problem-based learning and authentic assessment models obtained the highest average score compared to the other students, both in concept understanding and mathematical problem solving. The result of the hypothesis test shows that, significantly: 1) there is a difference in mathematical problem solving ability between students facilitated with the problem-based learning model and the conventional learning model, 2) there is a difference in mathematical problem solving ability between students facilitated with the authentic assessment model and the conventional assessment model, and 3) there is an interaction effect between learning model and assessment model on mathematical problem solving. In order to improve the effectiveness of mathematics learning, the combination of the problem-based learning model and the authentic assessment model can be considered as one of the learning models in class.
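In a 2×2 factorial design like this, the two main effects and the interaction can be read off the table of cell means. A sketch with hypothetical score means (not the study's data):

```python
import numpy as np

def factorial_2x2_effects(cell_means):
    """Main effects and interaction from a 2x2 table of cell means.
    Rows: factor A levels (e.g. learning model); columns: factor B levels (e.g. assessment)."""
    m = np.asarray(cell_means, dtype=float)
    grand = m.mean()
    main_a = m[0].mean() - m[1].mean()          # row (learning-model) effect
    main_b = m[:, 0].mean() - m[:, 1].mean()    # column (assessment) effect
    interaction = (m[0, 0] - m[0, 1]) - (m[1, 0] - m[1, 1])
    return grand, main_a, main_b, interaction
```

A nonzero interaction term is exactly the situation the abstract reports: the benefit of the assessment model depends on which learning model it is paired with (the ANOVA F-tests then ask whether these contrasts exceed sampling noise).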

  1. The Role of Integrated Modeling in the Design and Verification of the James Webb Space Telescope

    NASA Technical Reports Server (NTRS)

    Mosier, Gary E.; Howard, Joseph M.; Johnston, John D.; Parrish, Keith A.; Hyde, T. Tupper; McGinnis, Mark A.; Bluth, Marcel; Kim, Kevin; Ha, Kong Q.

    2004-01-01

    The James Webb Space Telescope (JWST) is a large, infrared-optimized space telescope scheduled for launch in 2011. System-level verification of critical optical performance requirements will rely on integrated modeling to a considerable degree. In turn, requirements for accuracy of the models are significant. The size of the lightweight observatory structure, coupled with the need to test at cryogenic temperatures, effectively precludes validation of the models and verification of optical performance with a single test in 1-g. Rather, a complex series of steps is planned by which the components of the end-to-end models are validated at various levels of subassembly, and the ultimate verification of optical performance is by analysis using the assembled models. This paper describes the critical optical performance requirements driving the integrated modeling activity, shows how the error budget is used to allocate and track contributions to total performance, and presents examples of integrated modeling methods and results that support the preliminary observatory design. Finally, the concepts for model validation and the role of integrated modeling in the ultimate verification of the observatory are described.
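Error budgets of the kind described are commonly rolled up by combining statistically independent contributors in root-sum-square fashion and comparing the total against its allocation. A minimal sketch with hypothetical contributor names and values (not JWST's actual budget):

```python
import numpy as np

def rss_rollup(contributions):
    """Root-sum-square roll-up of statistically independent error contributions."""
    vals = np.array(list(contributions.values()), dtype=float)
    return float(np.sqrt(np.sum(vals ** 2)))

def budget_margin(contributions, allocation):
    """Remaining margin after RSS roll-up (negative means the allocation is exceeded)."""
    return allocation - rss_rollup(contributions)
```

Tracking each subassembly's validated contribution against its allocation this way is what lets verification proceed by analysis when a single end-to-end 1-g test is impossible.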

  2. Assessing normative cut points through differential item functioning analysis: an example from the adaptation of the Middlesex Elderly Assessment of Mental State (MEAMS) for use as a cognitive screening test in Turkey.

    PubMed

    Tennant, Alan; Küçükdeveci, Ayse A; Kutlay, Sehim; Elhan, Atilla H

    2006-03-23

    The Middlesex Elderly Assessment of Mental State (MEAMS) was developed as a screening test to detect cognitive impairment in the elderly. It includes 12 subtests, each having a 'pass score'. A series of tasks were undertaken to adapt the measure for use in the adult population in Turkey and to determine the validity of existing cut points for passing subtests, given the wide range of educational level in the Turkish population. This study focuses on identifying and validating the scoring system of the MEAMS for the Turkish adult population. After the translation procedure, 350 normal subjects and 158 acquired brain injury patients were assessed with the Turkish version of the MEAMS. Initially, appropriate pass scores for the normal population were determined through ANOVA post-hoc tests according to age, gender and education. Rasch analysis was then used to test the internal construct validity of the scale and the validity of the cut points for pass scores on the pooled data by using Differential Item Functioning (DIF) analysis within the framework of the Rasch model. Data with the initially modified pass scores were analyzed. DIF was found for certain subtests by age and education, but not for gender. Following this, pass scores were further adjusted and the data re-fitted to the model. All subtests were found to fit the Rasch model (mean item fit 0.184, SD 0.319; person fit -0.224, SD 0.557) and DIF was then found to be absent. Thus the final pass scores for all subtests were determined. The MEAMS offers a valid assessment of cognitive state for the adult Turkish population, and the revised cut points accommodate for age and education. Further studies are required to ascertain the validity in different diagnostic groups.

  3. Summary of BISON Development and Validation Activities - NEAMS FY16 Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williamson, R. L.; Pastore, G.; Gamble, K. A.

    This summary report contains an overview of work performed under the work package entitled “FY2016 NEAMS INL-Engineering Scale Fuel Performance (BISON)”. A first chapter identifies the specific FY-16 milestones, providing a basic description of the associated work and references to related detailed documentation. Where applicable, a representative technical result is provided. A second chapter summarizes major additional accomplishments, which include: 1) publication of a journal article on solution verification and validation of BISON for LWR fuel, 2) publication of a journal article on 3D Missing Pellet Surface (MPS) analysis of BWR fuel, 3) use of BISON to design a unique 3D MPS validation experiment for future installation in the Halden research reactor, 4) participation in an OECD benchmark on Pellet Clad Mechanical Interaction (PCMI), 5) participation in an OECD benchmark on Reactivity Insertion Accident (RIA) analysis, 6) participation in an OECD activity on uncertainty quantification and sensitivity analysis in nuclear fuel modeling and 7) major improvements to BISON’s fission gas behavior models. A final chapter outlines FY-17 future work.

  4. Real-Time Onboard Global Nonlinear Aerodynamic Modeling from Flight Data

    NASA Technical Reports Server (NTRS)

    Brandon, Jay M.; Morelli, Eugene A.

    2014-01-01

    Flight test and modeling techniques were developed to accurately identify global nonlinear aerodynamic models onboard an aircraft. The techniques were developed and demonstrated during piloted flight testing of an Aermacchi MB-326M Impala jet aircraft. Advanced piloting techniques and nonlinear modeling techniques based on fuzzy logic and multivariate orthogonal function methods were implemented with efficient onboard calculations and flight operations to achieve real-time maneuver monitoring and analysis, and near-real-time global nonlinear aerodynamic modeling and prediction validation testing in flight. Results demonstrated that global nonlinear aerodynamic models for a large portion of the flight envelope were identified rapidly and accurately using piloted flight test maneuvers during a single flight, with the final identified and validated models available before the aircraft landed.
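Onboard real-time identification of this kind is typically built on recursive estimators that absorb one regressor/measurement pair per update. The sketch below is a generic recursive least squares updater, not the fuzzy-logic or multivariate-orthogonal-function implementation flown on the Impala:

```python
import numpy as np

class RecursiveLeastSquares:
    """RLS estimator: refine parameter estimates one measurement at a time."""

    def __init__(self, n_params, forgetting=1.0):
        self.theta = np.zeros(n_params)      # parameter estimates
        self.P = 1e6 * np.eye(n_params)      # large initial covariance (weak prior)
        self.lam = forgetting                # <1 discounts old data for time-varying systems

    def update(self, phi, y):
        phi = np.asarray(phi, dtype=float)
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)                   # gain
        self.theta = self.theta + k * (y - phi @ self.theta) # innovation correction
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return self.theta
```

The constant per-update cost (no growing data matrix) is what makes near-real-time model identification feasible on flight hardware.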

  5. [Determination of calcium and magnesium in tobacco by near-infrared spectroscopy and least squares-support vector machine].

    PubMed

    Tian, Kuang-da; Qiu, Kai-xian; Li, Zu-hong; Lü, Ya-qiong; Zhang, Qiu-ju; Xiong, Yan-mei; Min, Shun-geng

    2014-12-01

    The purpose of the present paper is to determine calcium and magnesium in tobacco using NIR combined with least squares-support vector machine (LS-SVM). Five hundred ground and dried tobacco samples from Qujing city, Yunnan province, China, were surveyed by a MATRIX-I spectrometer (Bruker Optics, Bremen, Germany). At the beginning of data processing, outlier samples were eliminated for stability of the model. The remaining 487 samples were divided into several calibration sets and validation sets according to a hybrid modeling strategy. Monte-Carlo cross validation was used to choose the best spectral preprocessing method from multiplicative scatter correction (MSC), standard normal variate transformation (SNV), S-G smoothing, 1st derivative, etc., and their combinations. To optimize the parameters of the LS-SVM model, multilayer grid search and 10-fold cross validation were applied. The final LS-SVM models with the optimized parameters were trained on the calibration set and assessed with 287 validation samples selected by the Kennard-Stone method. For the quantitative model of calcium in tobacco, Savitzky-Golay FIR smoothing with frame size 21 showed the best performance. The regularization parameter λ of LS-SVM was e^16.11, while the bandwidth of the RBF kernel σ2 was e^8.42. The determination coefficient for calibration (Rc(2)) was 0.9755 and the determination coefficient for prediction (Rp(2)) was 0.9422, better than the performance of the PLS model (Rc(2)=0.9593, Rp(2)=0.9344). For the quantitative analysis of magnesium, SNV made the regression model more precise than the other preprocessing methods. The optimized λ was e^15.25 and σ2 was e^6.32. Rc(2) and Rp(2) were 0.9961 and 0.9301, respectively, better than the PLS model (Rc(2)=0.9716, Rp(2)=0.8924). After modeling, the whole process of NIR scanning and data analysis for one sample took only tens of seconds. The overall results show that NIR spectroscopy combined with LS-SVM can be efficiently utilized for rapid and accurate analysis of calcium and magnesium in tobacco.
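    The SNV transform compared in this record is simple enough to sketch directly: each spectrum is centered and scaled by its own mean and standard deviation, removing multiplicative scatter effects. The spectrum values below are hypothetical illustration data, not from the study:

```python
import math

def snv(spectrum):
    """Standard normal variate: center and scale one spectrum
    by its own mean and sample standard deviation."""
    n = len(spectrum)
    mean = sum(spectrum) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in spectrum) / (n - 1))
    return [(x - mean) / sd for x in spectrum]

# hypothetical absorbance values for one sample
raw = [0.42, 0.45, 0.51, 0.48, 0.60]
corrected = snv(raw)
```

    After the transform, every spectrum has zero mean and unit variance, so differences between samples reflect chemical rather than scatter variation.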

  6. Parametric Study of Shear Strength of Concrete Beams Reinforced with FRP Bars

    NASA Astrophysics Data System (ADS)

    Thomas, Job; Ramadass, S.

    2016-09-01

    Fibre Reinforced Polymer (FRP) bars have been widely used as internal reinforcement in structural elements over the last decade. The corrosion resistance of FRP bars qualifies their use in severe and marine exposure conditions. A total of eight concrete beams longitudinally reinforced with FRP bars were cast and tested over shear span to depth ratios of 0.5 and 1.75. Shear strength test data of 188 beams published in the literature were also used. The model originally proposed by the Indian Standard code of practice for the prediction of shear strength of concrete beams reinforced with steel bars, IS:456 (Plain and reinforced concrete, code of practice, fourth revision. Bureau of Indian Standards, New Delhi, 2000), is considered, and a modification to account for the influence of the FRP bars is proposed based on regression analysis. Out of the 196 test data, 110 are used for the regression analysis and 86 for the validation of the model. In addition, the shear strength of the 86 validation test data is assessed using eleven models proposed by various researchers. The proposed model accounts for the compressive strength of concrete (fck), modulus of elasticity of FRP rebar (Ef), longitudinal reinforcement ratio (ρf), shear span to depth ratio (a/d) and the size effect of beams. The shear strength predicted by the proposed model and by the 11 models proposed by other researchers is compared with the corresponding experimental results. The mean ratio of predicted to experimental shear strength for the 86 validation beams is found to be 0.93. The result of the statistical analysis indicates that predictions based on the proposed model corroborate the corresponding experimental data.
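    The mean predicted-to-experimental ratio reported above (0.93) is a standard validation statistic for design models; a minimal sketch, with hypothetical beam strengths rather than the study's data, is:

```python
import math

def ratio_stats(predicted, experimental):
    """Mean and coefficient of variation (COV) of the predicted-to-
    experimental strength ratio used to judge a design model."""
    ratios = [p / e for p, e in zip(predicted, experimental)]
    n = len(ratios)
    mean = sum(ratios) / n
    sd = math.sqrt(sum((r - mean) ** 2 for r in ratios) / (n - 1))
    return mean, sd / mean

# hypothetical shear strengths (kN) for a few validation beams
v_pred = [105.0, 88.0, 132.0, 76.0, 150.0]
v_exp  = [112.0, 95.0, 140.0, 84.0, 158.0]
mean_ratio, cov = ratio_stats(v_pred, v_exp)
```

    A mean ratio just below 1.0 with a small COV indicates slightly conservative, consistent predictions.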

  7. Introduction to Bayesian statistical approaches to compositional analyses of transgenic crops 1. Model validation and setting the stage.

    PubMed

    Harrison, Jay M; Breeze, Matthew L; Harrigan, George G

    2011-08-01

    Statistical comparisons of compositional data generated on genetically modified (GM) crops and their near-isogenic conventional (non-GM) counterparts typically rely on classical significance testing. This manuscript presents an introduction to Bayesian methods for compositional analysis along with recommendations for model validation. The approach is illustrated using protein and fat data from two herbicide tolerant GM soybeans (MON87708 and MON87708×MON89788) and a conventional comparator grown in the US in 2008 and 2009. Guidelines recommended by the US Food and Drug Administration (FDA) in conducting Bayesian analyses of clinical studies on medical devices were followed. This study is the first Bayesian approach to GM and non-GM compositional comparisons. The evaluation presented here supports a conclusion that a Bayesian approach to analyzing compositional data can provide meaningful and interpretable results. We further describe the importance of method validation and approaches to model checking if Bayesian approaches to compositional data analysis are to be considered viable by scientists involved in GM research and regulation. Copyright © 2011 Elsevier Inc. All rights reserved.

  8. Patient self-report section of the ASES questionnaire: a Spanish validation study using classical test theory and the Rasch model.

    PubMed

    Vrotsou, Kalliopi; Cuéllar, Ricardo; Silió, Félix; Rodriguez, Miguel Ángel; Garay, Daniel; Busto, Gorka; Trancho, Ziortza; Escobar, Antonio

    2016-10-18

    The aim of the current study was to validate the self-report section of the American Shoulder and Elbow Surgeons questionnaire (ASES-p) into Spanish. Shoulder pathology patients were recruited and followed for up to 6 months post treatment. The ASES-p, Constant, SF-36 and Barthel scales were completed pre and post treatment. Reliability was tested with Cronbach's alpha, and convergent validity with Spearman's correlation coefficients. Confirmatory factor analysis (CFA) and the Rasch model were implemented for assessing structural validity and unidimensionality of the scale. Models with and without the pain item were considered. Responsiveness to change was explored via standardised effect sizes. Results were acceptable for both tested models. Cronbach's alpha was 0.91, and total scale correlations with the Constant and physical SF-36 dimensions were >0.50. Factor loadings for CFA were >0.40. The Rasch model confirmed unidimensionality of the scale, even though item 10 "do usual sport" was flagged as non-informative. Finally, patients with improved post-treatment shoulder function and those receiving surgery had higher standardised effect sizes. The adapted Spanish ASES-p version is a valid and reliable tool for shoulder evaluation and its unidimensionality is supported by the data.
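    The Cronbach's alpha of 0.91 reported here follows the standard formula, k/(k-1) × (1 − Σ item variances / total-score variance). A minimal sketch on hypothetical item scores (not the study's data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns
    (each inner list holds one item's scores across respondents)."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # total score for each respondent
    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var = sum(var(item) for item in items)
    return k / (k - 1) * (1 - item_var / var(totals))

# hypothetical Likert responses: 3 items, 5 respondents
scores = [[4, 5, 3, 4, 2],
          [4, 4, 3, 5, 2],
          [5, 5, 2, 4, 3]]
alpha = cronbach_alpha(scores)
```

    Values above roughly 0.9, as in this study, indicate excellent internal consistency.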

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Epiney, A.; Canepa, S.; Zerkak, O.

    The STARS project at the Paul Scherrer Institut (PSI) has adopted the TRACE thermal-hydraulic (T-H) code for best-estimate system transient simulations of the Swiss Light Water Reactors (LWRs). For analyses involving interactions between system and core, a coupling of TRACE with the SIMULATE-3K (S3K) LWR core simulator has also been developed. In this configuration, the TRACE code and associated nuclear power reactor simulation models play a central role in achieving a comprehensive safety analysis capability. Thus, efforts have now been undertaken to consolidate the validation strategy by implementing a more rigorous and structured assessment approach for TRACE applications involving either only system T-H evaluations or requiring interfaces to e.g. detailed core or fuel behavior models. The first part of this paper presents the preliminary concepts of this validation strategy. The principle is to systematically track the evolution of a given set of predicted physical Quantities of Interest (QoIs) over a multidimensional parametric space where each of the dimensions represents the evolution of specific analysis aspects, including e.g. code version, transient-specific simulation methodology and model "nodalisation". If properly set up, such an environment should provide code developers and code users with persistent (less affected by user effect) and quantified information (sensitivity of QoIs) on the applicability of a simulation scheme (codes, input models, methodology) for steady state and transient analysis of full LWR systems. Through this, for each given transient/accident, critical paths of the validation process can be identified that could then translate into defining reference schemes to be applied for downstream predictive simulations. In order to illustrate this approach, the second part of this paper presents a first application of this validation strategy to an inadvertent blowdown event that occurred in a Swiss BWR/6. The transient was initiated by the spurious actuation of the Automatic Depressurization System (ADS). The validation approach progresses through a number of dimensions here: First, the same BWR system simulation model is assessed for different versions of the TRACE code, up to the most recent one. The second dimension is the "nodalisation" dimension, where changes to the input model are assessed. The third dimension is the "methodology" dimension, in which imposed power and an updated TRACE core model are investigated. For each step in each validation dimension, a common set of QoIs is investigated. For the steady-state results, these include fuel temperature distributions. For the transient part of the present study, the evaluated QoIs include the system pressure evolution and water carry-over into the steam line.

  10. A validated finite element model of a soft artificial muscle motor

    NASA Astrophysics Data System (ADS)

    Tse, Tony Chun H.; O'Brien, Benjamin; McKay, Thomas; Anderson, Iain A.

    2011-04-01

    The Biomimetics Laboratory has developed a soft artificial muscle motor based on Dielectric Elastomers. The motor, 'Flexidrive', is light-weight and has low system complexity. It works by gripping and turning a shaft with a soft gear, like we would with our fingers. The motor's performance depends on many factors, such as actuation waveform, electrode patterning, geometries and contact tribology between the shaft and gear. We have developed a finite element model (FEM) of the motor as a study and design tool. Contact interaction was integrated with previous material and electromechanical coupling models in ABAQUS. The model was experimentally validated through a shape and blocked force analysis.

  11. Analyzing the Validity of Relationship Banking through Agent-based Modeling

    NASA Astrophysics Data System (ADS)

    Nishikido, Yukihito; Takahashi, Hiroshi

    This article analyzes the validity of relationship banking through agent-based modeling. In the analysis, we especially focus on the relationship between economic conditions and both lenders' and borrowers' behaviors. As a result of intensive experiments, we made the following interesting findings: (1) relationship banking contributes to reducing bad loans; (2) relationship banking is more effective in enhancing market growth than transaction banking when borrowers' sales scale is large; (3) keener competition among lenders may bring inefficiency to the market.

  12. ASC-AD penetration modeling FY05 status report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kistler, Bruce L.; Ostien, Jakob T.; Chiesa, Michael L.

    2006-04-01

    Sandia currently lacks a high fidelity method for predicting loads on, and the subsequent structural response of, earth penetrating weapons. This project seeks to test, debug, improve and validate methodologies for modeling earth penetration. Results of this project will allow us to optimize and certify designs for the B61-11, Robust Nuclear Earth Penetrator (RNEP), PEN-X and future nuclear and conventional penetrator systems. Since this is an ASC Advanced Deployment project, the primary goal of the work is to test, debug, verify and validate new Sierra (and Nevada) tools. Also, since this project is part of the V&V program within ASC, uncertainty quantification (UQ), optimization using DAKOTA [1] and sensitivity analysis are an integral part of the work. This project evaluates, verifies and validates new constitutive models, penetration methodologies and Sierra/Nevada codes. In FY05 the project focused mostly on PRESTO [2] using the Spherical Cavity Expansion (SCE) [3,4] and PRESTO Lagrangian analysis with a preformed hole (Pen-X) methodologies. Modeling penetration tests using PRESTO with a pilot hole was also attempted to evaluate constitutive models. Future years' work would include the Alegra/SHISM [5] and Alegra/EP (Earth Penetration) methodologies when they are ready for validation testing. Constitutive models such as Soil-and-Foam, the Sandia Geomodel [6], and the K&C Concrete model [7] were also tested and evaluated. This report is submitted to satisfy annual documentation requirements for the ASC Advanced Deployment program. It summarizes FY05 work performed in the Penetration Mechanical Response (ASC-APPS) and Penetration Mechanics (ASC-V&V) projects. A single report is written to document the two projects because of the significant amount of technical overlap.

  13. Prognostic indices for early mortality in ischaemic stroke - meta-analysis.

    PubMed

    Mattishent, K; Kwok, C S; Mahtani, A; Pelpola, K; Myint, P K; Loke, Y K

    2016-01-01

    Several models have been developed to predict mortality in ischaemic stroke. We aimed to evaluate systematically the performance of published stroke prognostic scores. We searched MEDLINE and EMBASE in February 2014 for prognostic models (published between 2003 and 2014) used in predicting early mortality (<6 months) after ischaemic stroke. We evaluated the discriminant ability of the tools through meta-analysis of the area under the receiver operating characteristic curve (AUROC, or c-statistic). We evaluated the following components of study validity: collection of prognostic variables, neuroimaging, treatment pathways and missing data. We identified 18 articles (involving 163,240 patients) reporting on the performance of prognostic models for mortality in ischaemic stroke, with 15 articles providing AUC values for meta-analysis. Most studies were either retrospective or post hoc analyses of prospectively collected data; all but three reported validation data. The iSCORE had the largest number of validation cohorts (five) within our systematic review and showed good performance in four different countries, pooled AUC 0.84 (95% CI 0.82-0.87). We identified other potentially useful prognostic tools that have yet to be as extensively validated as iSCORE - these include SOAR (2 studies, pooled AUC 0.79, 95% CI 0.78-0.80), GWTG (2 studies, pooled AUC 0.72, 95% CI 0.72-0.72) and PLAN (1 study, AUC 0.85, 95% CI 0.84-0.87). Our meta-analysis has identified and summarized the performance of several prognostic scores with modest to good predictive accuracy for early mortality in ischaemic stroke, with the iSCORE having the broadest evidence base. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
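    Pooling per-cohort AUCs as done for iSCORE is typically an inverse-variance weighted average. A minimal fixed-effect sketch (the review may have used a random-effects model; the AUCs and standard errors below are hypothetical):

```python
import math

def pool_fixed_effect(estimates, std_errors):
    """Inverse-variance fixed-effect pooling of per-study estimates.
    Returns the pooled estimate and its 95% confidence interval."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

# hypothetical per-cohort AUCs and standard errors
aucs = [0.83, 0.86, 0.84, 0.85]
ses  = [0.02, 0.015, 0.025, 0.02]
pooled, lo, hi = pool_fixed_effect(aucs, ses)
```

    More precise cohorts (smaller standard errors) dominate the pooled value, which is why a single large validation study can outweigh several small ones.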

  14. Cross-national validation of prognostic models predicting sickness absence and the added value of work environment variables.

    PubMed

    Roelen, Corné A M; Stapelfeldt, Christina M; Heymans, Martijn W; van Rhenen, Willem; Labriola, Merete; Nielsen, Claus V; Bültmann, Ute; Jensen, Chris

    2015-06-01

    To validate Dutch prognostic models including age, self-rated health and prior sickness absence (SA) for their ability to predict high SA in Danish eldercare. The added value of work environment variables to the models' risk discrimination was also investigated. 2,562 municipal eldercare workers (95% women) participated in the Working in Eldercare Survey. Predictor variables were measured by questionnaire at baseline in 2005. Prognostic models were validated for predictions of high (≥30) SA days and high (≥3) SA episodes retrieved from employer records during 1-year follow-up. The accuracy of predictions was assessed by calibration graphs, and the ability of the models to discriminate between high- and low-risk workers was investigated by ROC analysis. The added value of work environment variables was measured with the Integrated Discrimination Improvement (IDI). 1,930 workers had complete data for analysis. The models underestimated the risk of high SA in eldercare workers and the SA episodes model had to be re-calibrated to the Danish data. Discrimination was practically useful for the re-calibrated SA episodes model, but not for the SA days model. Physical workload improved the SA days model (IDI = 0.40; 95% CI 0.19-0.60) and psychosocial work factors, particularly the quality of leadership (IDI = 0.70; 95% CI 0.53-0.86), improved the SA episodes model. The prognostic model predicting high SA days showed poor performance even after physical workload was added. The prognostic model predicting high SA episodes could be used to identify high-risk workers, especially when psychosocial work factors are added as predictor variables.
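    The IDI used above measures how much an added predictor improves the separation of mean predicted risks between cases and non-cases. A minimal sketch with hypothetical predicted risks (not the study's data):

```python
def integrated_discrimination_improvement(p_old, p_new, outcomes):
    """IDI: improvement in mean predicted-risk separation between
    cases and non-cases when moving from the old to the new model."""
    def mean(xs):
        return sum(xs) / len(xs)

    cases_old = [p for p, y in zip(p_old, outcomes) if y == 1]
    cases_new = [p for p, y in zip(p_new, outcomes) if y == 1]
    ctrls_old = [p for p, y in zip(p_old, outcomes) if y == 0]
    ctrls_new = [p for p, y in zip(p_new, outcomes) if y == 0]
    return ((mean(cases_new) - mean(cases_old))
            - (mean(ctrls_new) - mean(ctrls_old)))

# hypothetical predicted risks of high sickness absence
p_base     = [0.20, 0.35, 0.40, 0.15, 0.55, 0.30]
p_extended = [0.15, 0.45, 0.50, 0.10, 0.70, 0.25]
y          = [0,    1,    1,    0,    1,    0]
idi = integrated_discrimination_improvement(p_base, p_extended, y)
```

    A positive IDI means the extended model raises predicted risks for workers who went on to have high absence while lowering them for those who did not.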

  15. Numerical analysis of the dynamic interaction between wheel set and turnout crossing using the explicit finite element method

    NASA Astrophysics Data System (ADS)

    Xin, L.; Markine, V. L.; Shevtsov, I. Y.

    2016-03-01

    A three-dimensional (3-D) explicit dynamic finite element (FE) model is developed to simulate the impact of the wheel on the crossing nose. The model consists of a wheel set moving over the turnout crossing. Realistic wheel, wing rail and crossing geometries have been used in the model. Using this model the dynamic responses of the system such as the contact forces between the wheel and the crossing, crossing nose displacements and accelerations, and stresses in the rail material as well as in sleepers and ballast can be obtained. Detailed analysis of the wheel set and crossing interaction using the local contact stress state in the rail is possible as well, which provides a good basis for prediction of the long-term behaviour of the crossing (fatigue analysis). In order to tune and validate the FE model, field measurements conducted on several turnouts in the railway network in the Netherlands are used here. The parametric study performed here, including variations of the crossing nose geometry, demonstrates the capabilities of the developed model. The results of the validation and parametric study are presented and discussed.

  16. A high-performance spatial database based approach for pathology imaging algorithm evaluation

    PubMed Central

    Wang, Fusheng; Kong, Jun; Gao, Jingjing; Cooper, Lee A.D.; Kurc, Tahsin; Zhou, Zhengwen; Adler, David; Vergara-Niedermayr, Cristobal; Katigbak, Bryan; Brat, Daniel J.; Saltz, Joel H.

    2013-01-01

    Background: Algorithm evaluation provides a means to characterize variability across image analysis algorithms, validate algorithms by comparison with human annotations, combine results from multiple algorithms for performance improvement, and facilitate algorithm sensitivity studies. The sizes of images and image analysis results in pathology image analysis pose significant challenges in algorithm evaluation. We present an efficient parallel spatial database approach to model, normalize, manage, and query large volumes of analytical image result data. This provides an efficient platform for algorithm evaluation. Our experiments with a set of brain tumor images demonstrate the application, scalability, and effectiveness of the platform. Context: The paper describes an approach and platform for evaluation of pathology image analysis algorithms. The platform facilitates algorithm evaluation through a high-performance database built on the Pathology Analytic Imaging Standards (PAIS) data model. Aims: (1) Develop a framework to support algorithm evaluation by modeling and managing analytical results and human annotations from pathology images; (2) Create a robust data normalization tool for converting, validating, and fixing spatial data from algorithm or human annotations; (3) Develop a set of queries to support data sampling and result comparisons; (4) Achieve high performance computation capacity via a parallel data management infrastructure, parallel data loading and spatial indexing optimizations in this infrastructure. Materials and Methods: We have considered two scenarios for algorithm evaluation: (1) algorithm comparison where multiple result sets from different methods are compared and consolidated; and (2) algorithm validation where algorithm results are compared with human annotations. We have developed a spatial normalization toolkit to validate and normalize spatial boundaries produced by image analysis algorithms or human annotations. 
The validated data were formatted based on the PAIS data model and loaded into a spatial database. To support efficient data loading, we have implemented a parallel data loading tool that takes advantage of multi-core CPUs to accelerate data injection. The spatial database manages both geometric shapes and image features or classifications, and enables spatial sampling, result comparison, and result aggregation through expressive structured query language (SQL) queries with spatial extensions. To provide scalable and efficient query support, we have employed a shared-nothing parallel database architecture, which distributes data homogeneously across multiple database partitions to take advantage of parallel computation power, and implements spatial indexing to achieve high I/O throughput. Results: Our work proposes a high-performance, parallel spatial database platform for algorithm validation and comparison. This platform was evaluated by storing, managing, and comparing analysis results from a set of brain tumor whole slide images. The tools we developed are open source and available for download. Conclusions: Pathology image algorithm validation and comparison are essential to iterative algorithm development and refinement. One critical component is the support for queries involving spatial predicates and comparisons. In our work, we develop an efficient data model and parallel database approach to model, normalize, manage and query large volumes of analytical image result data. Our experiments demonstrate that the data partitioning strategy and the grid-based indexing result in good data distribution across database nodes and reduce I/O overhead in spatial join queries through parallel retrieval of relevant data and quick subsetting of datasets. The set of tools in the framework provides a full pipeline to normalize, load, manage and query analytical results for algorithm evaluation. PMID:23599905
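    The grid-based indexing idea behind the platform's spatial joins can be sketched in a few lines: bucket each bounding box by the grid cells it touches, then test intersection only between boxes sharing a cell. This is a pure-Python toy, not the platform's database implementation, and the nucleus boxes are hypothetical:

```python
from collections import defaultdict

def grid_cells(box, cell):
    """Yield the grid cells a bounding box (x0, y0, x1, y1) touches."""
    x0, y0, x1, y1 = box
    for gx in range(int(x0 // cell), int(x1 // cell) + 1):
        for gy in range(int(y0 // cell), int(y1 // cell) + 1):
            yield gx, gy

def overlaps(a, b):
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def spatial_join(boxes_a, boxes_b, cell=100.0):
    """Pair up intersecting bounding boxes from two result sets,
    probing only boxes that share a grid cell."""
    index = defaultdict(list)
    for j, b in enumerate(boxes_b):
        for c in grid_cells(b, cell):
            index[c].append(j)
    pairs = set()
    for i, a in enumerate(boxes_a):
        for c in grid_cells(a, cell):
            for j in index[c]:
                if overlaps(a, boxes_b[j]):
                    pairs.add((i, j))
    return sorted(pairs)

# hypothetical nucleus bounding boxes from two algorithms
algo1 = [(10, 10, 40, 40), (200, 200, 230, 240)]
algo2 = [(35, 35, 60, 60), (500, 500, 520, 520)]
matches = spatial_join(algo1, algo2)
```

    The grid prunes most candidate pairs before the exact intersection test, which is the same effect the paper attributes to its spatial indexing.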

  17. Design of Novel Chemotherapeutic Agents Targeting Checkpoint Kinase 1 Using 3D-QSAR Modeling and Molecular Docking Methods.

    PubMed

    Balupuri, Anand; Balasubramanian, Pavithra K; Cho, Seung J

    2016-01-01

    Checkpoint kinase 1 (Chk1) has emerged as a potential therapeutic target for design and development of novel anticancer drugs. Herein, we have performed three-dimensional quantitative structure-activity relationship (3D-QSAR) and molecular docking analyses on a series of diazacarbazoles to design potent Chk1 inhibitors. 3D-QSAR models were developed using comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA) techniques. Docking studies were performed using AutoDock. The best CoMFA and CoMSIA models exhibited cross-validated correlation coefficient (q2) values of 0.631 and 0.585, and non-cross-validated correlation coefficient (r2) values of 0.933 and 0.900, respectively. CoMFA and CoMSIA models showed reasonable external predictabilities (r2 pred) of 0.672 and 0.513, respectively. A satisfactory performance in the various internal and external validation techniques indicated the reliability and robustness of the best model. Docking studies were performed to explore the binding mode of inhibitors inside the active site of Chk1. Molecular docking revealed that hydrogen bond interactions with Lys38, Glu85 and Cys87 are essential for Chk1 inhibitory activity. The binding interaction patterns observed during docking studies were complementary to 3D-QSAR results. Information obtained from the contour map analysis was utilized to design novel potent Chk1 inhibitors. Their activities and binding affinities were predicted using the derived model and docking studies. Designed inhibitors were proposed as potential candidates for experimental synthesis.
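    The cross-validated q2 values reported for the CoMFA and CoMSIA models follow the leave-one-out definition q2 = 1 − PRESS/TSS. A minimal single-descriptor sketch (the actual models use many 3D field descriptors; the data below are hypothetical):

```python
def fit_line(xs, ys):
    """Ordinary least squares for one descriptor: y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def q2_loo(xs, ys):
    """Leave-one-out cross-validated q2 = 1 - PRESS / TSS."""
    my = sum(ys) / len(ys)
    press = 0.0
    for i in range(len(xs)):
        a, b = fit_line(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        press += (ys[i] - (a + b * xs[i])) ** 2
    tss = sum((y - my) ** 2 for y in ys)
    return 1.0 - press / tss

# hypothetical descriptor values and activities (pIC50-like)
desc = [1.0, 1.5, 2.1, 2.9, 3.4, 4.0, 4.8]
act  = [5.1, 5.6, 6.0, 6.9, 7.2, 7.9, 8.6]
q2 = q2_loo(desc, act)
```

    Because each prediction is made with the sample left out, q2 is always below the fitted r2, which is why thresholds such as q2 > 0.5 are used to flag a predictive 3D-QSAR model.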

  18. Predicting human skin absorption of chemicals: development of a novel quantitative structure activity relationship.

    PubMed

    Luo, Wen; Medrek, Sarah; Misra, Jatin; Nohynek, Gerhard J

    2007-02-01

    The objective of this study was to construct and validate a quantitative structure-activity relationship model for skin absorption. Such models are valuable tools for screening and prioritization in safety and efficacy evaluation, and in risk assessment of drugs and chemicals. A database of 340 chemicals with percutaneous absorption data was assembled. Two models were derived from the training set consisting of 306 chemicals (90/10 random split). In addition to the experimental K(ow) values, over 300 2D and 3D atomic and molecular descriptors were analyzed using MDL's QsarIS computer program. Subsequently, the models were validated using both internal (leave-one-out) and external validation (test set) procedures. Using stepwise regression analysis, three molecular descriptors were determined to have significant statistical correlation with K(p) (R2 = 0.8225): logK(ow), X0 (quantification of both molecular size and the degree of skeletal branching), and SsssCH (count of aromatic carbon groups). In conclusion, two models to estimate skin absorption were developed. Compared to other skin absorption QSAR models in the literature, our model incorporated more chemicals and explored a larger number of descriptors. Additionally, our models are reasonably predictive and have met both internal and external statistical validations.

  19. A solution to the static frame validation challenge problem using Bayesian model selection

    DOE PAGES

    Grigoriu, M. D.; Field, R. V.

    2007-12-23

    Within this paper, we provide a solution to the static frame validation challenge problem (see this issue) in a manner that is consistent with the guidelines provided by the Validation Challenge Workshop tasking document. The static frame problem is constructed such that variability in material properties is known to be the only source of uncertainty in the system description, but there is ignorance on the type of model that best describes this variability. Hence both types of uncertainty, aleatoric and epistemic, are present and must be addressed. Our approach is to consider a collection of competing probabilistic models for the material properties, and calibrate these models to the information provided; models of different levels of complexity and numerical efficiency are included in the analysis. A Bayesian formulation is used to select the optimal model from the collection, which is then used for the regulatory assessment. Lastly, Bayesian credible intervals are used to provide a measure of confidence in our regulatory assessment.
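    Bayesian selection among competing models amounts to computing posterior model probabilities from each model's marginal likelihood and prior. The paper presumably evaluates marginal likelihoods directly; the sketch below instead uses the common BIC approximation (an assumption, not the authors' method), with hypothetical fit results:

```python
import math

def posterior_model_probs(log_likelihoods, n_params, n_data, priors=None):
    """Approximate posterior model probabilities via the BIC
    approximation to each model's marginal likelihood."""
    k = len(log_likelihoods)
    priors = priors or [1.0 / k] * k
    # BIC = -2 log L + p log n; marginal likelihood ~ exp(-BIC / 2)
    bics = [-2.0 * ll + p * math.log(n_data)
            for ll, p in zip(log_likelihoods, n_params)]
    # subtract the minimum BIC before exponentiating for stability
    b0 = min(bics)
    weights = [pr * math.exp(-(b - b0) / 2.0)
               for b, pr in zip(bics, priors)]
    total = sum(weights)
    return [w / total for w in weights]

# hypothetical candidate models for material-property variability
probs = posterior_model_probs(
    log_likelihoods=[-105.2, -103.9, -104.8],
    n_params=[2, 4, 3],
    n_data=30)
```

    The log-n penalty on parameter count embodies the trade-off the abstract describes between model complexity and fit to the calibration data.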

  20. Construct validity test of evaluation tool for professional behaviors of entry-level occupational therapy students in the United States.

    PubMed

    Yuen, Hon K; Azuero, Andres; Lackey, Kaitlin W; Brown, Nicole S; Shrestha, Sangita

    2016-01-01

    This study aimed to test the construct validity of an instrument to measure student professional behaviors in entry-level occupational therapy (OT) students in the academic setting. A total of 718 students from 37 OT programs across the United States answered a self-assessment survey of professional behavior that we developed. The survey consisted of rating 28 attributes, each on a 5-point Likert scale. A split-sample approach was used for exploratory and then confirmatory factor analysis. A three-factor solution with nine items was extracted using exploratory factor analysis (EFA) (n=430, 60%). The factors were 'Commitment to Learning' (2 items), 'Skills for Learning' (4 items), and 'Cultural Competence' (3 items). Confirmatory factor analysis (CFA) on the validation split (n=288, 40%) indicated fair fit for this three-factor model (fit indices: CFI=0.96, RMSEA=0.06, and SRMR=0.05). Internal consistency reliability estimates of each factor and of the instrument ranged from 0.63 to 0.79. Results of the CFA in a separate validation dataset provided robust measures of goodness-of-fit for the three-factor solution developed in the EFA, and indicated that the three-factor model fit the data adequately. Therefore, we can conclude that this student professional behavior evaluation instrument is a structurally validated tool to measure professional behaviors reported by entry-level OT students. The internal consistency reliability of each individual factor and of the whole instrument was considered to be adequate to good.

  1. Validation of the Malay Version of the Parental Bonding Instrument among Malaysian Youths Using Exploratory Factor Analysis

    PubMed Central

    MUHAMMAD, Noor Azimah; SHAMSUDDIN, Khadijah; OMAR, Khairani; SHAH, Shamsul Azhar; MOHD AMIN, Rahmah

    2014-01-01

    Background: Parenting behaviour is culturally sensitive. The aims of this study were (1) to translate the Parental Bonding Instrument into Malay (PBI-M) and (2) to determine its factorial structure and validity among the Malaysian population. Methods: The PBI-M was generated from a standard translation process and comprehension testing. The validation study of the PBI-M was administered to 248 college students aged 18 to 22 years. Results: Participants in the comprehension testing had difficulty understanding negative items. Five translated double negative items were replaced with five positive items with similar meanings. Exploratory factor analysis showed a three-factor model for the PBI-M with acceptable reliability. Four negative items (items 3, 4, 8, and 16) and item 19 were omitted from the final PBI-M list because of incorrect placement or low factor loading (< 0.32). Out of the final 20 items of the PBI-M, there were 10 items for the care factor, five items for the autonomy factor and five items for the overprotection factor. All the items loaded positively on their respective factors. Conclusion: The Malaysian population favoured positive items in answering questions. The PBI-M confirmed the three-factor model that consisted of care, autonomy and overprotection. The PBI-M is a valid and reliable instrument to assess the Malaysian parenting style. Confirmatory factor analysis may further support this finding. Keywords: Malaysia, parenting, questionnaire, validity PMID:25977634

  2. Sensitivity Analysis of the Integrated Medical Model for ISS Programs

    NASA Technical Reports Server (NTRS)

    Goodenow, D. A.; Myers, J. G.; Arellano, J.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Young, M.

    2016-01-01

    Sensitivity analysis estimates the relative contribution of the uncertainty in input values to the uncertainty of model outputs. Partial Rank Correlation Coefficient (PRCC) and Standardized Rank Regression Coefficient (SRRC) are methods of conducting sensitivity analysis on nonlinear simulation models like the Integrated Medical Model (IMM). The PRCC method estimates the sensitivity using partial correlation of the ranks of the generated input values to each generated output value. The partial part is so named because adjustments are made for the linear effects of all the other input values in the calculation of correlation between a particular input and each output. In SRRC, standardized regression-based coefficients measure the sensitivity of each input, adjusted for all the other inputs, on each output. Because the relative ranking of each of the inputs and outputs is used, as opposed to the values themselves, both methods accommodate the nonlinear relationship of the underlying model. As part of the IMM v4.0 validation study, simulations are available that predict 33 person-missions on ISS and 111 person-missions on STS. These simulated data predictions feed the sensitivity analysis procedures. The inputs to the sensitivity procedures include the number occurrences of each of the one hundred IMM medical conditions generated over the simulations and the associated IMM outputs: total quality time lost (QTL), number of evacuations (EVAC), and number of loss of crew lives (LOCL). The IMM team will report the results of using PRCC and SRRC on IMM v4.0 predictions of the ISS and STS missions created as part of the external validation study. Tornado plots will assist in the visualization of the condition-related input sensitivities to each of the main outcomes. The outcomes of this sensitivity analysis will drive review focus by identifying conditions where changes in uncertainty could drive changes in overall model output uncertainty. 
These efforts are an integral part of the overall verification, validation, and credibility review of IMM v4.0.
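
    The PRCC calculation described above can be sketched generically: rank-transform inputs and output, regress out the other inputs from both the input of interest and the output, then correlate the residuals. This is an illustrative sketch, not the IMM team's code; the toy model and its inputs are invented for the example.

```python
import numpy as np

def ranks(a):
    # rank-transform a 1-D sample to 0..n-1 (ties not handled specially)
    order = np.argsort(a)
    r = np.empty(len(a))
    r[order] = np.arange(len(a))
    return r

def prcc(X, y):
    # Partial Rank Correlation Coefficient of each column of X with y:
    # correlate the ranks of input j and the ranks of y after removing
    # the linear effect of all other rank-transformed inputs from both.
    Xr = np.column_stack([ranks(X[:, j]) for j in range(X.shape[1])])
    yr = ranks(y)
    n, k = Xr.shape
    coeffs = []
    for j in range(k):
        A = np.column_stack([np.ones(n), np.delete(Xr, j, axis=1)])
        rx = Xr[:, j] - A @ np.linalg.lstsq(A, Xr[:, j], rcond=None)[0]
        ry = yr - A @ np.linalg.lstsq(A, yr, rcond=None)[0]
        coeffs.append(float(np.corrcoef(rx, ry)[0, 1]))
    return coeffs

# toy nonlinear "model": input 0 dominates, input 2 is irrelevant
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 3))
y = 5.0 * np.exp(X[:, 0]) + X[:, 1] + rng.normal(scale=0.1, size=500)
coeffs = prcc(X, y)
```

    Because only ranks enter the calculation, the monotone-but-nonlinear dependence on input 0 still yields a PRCC near 1, while the irrelevant input 2 scores near 0.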

  3. Advancing the argument for validity of the Alberta Context Tool with healthcare aides in residential long-term care

    PubMed Central

    2011-01-01

    Background Organizational context has the potential to influence the use of new knowledge. However, despite advances in understanding the theoretical base of organizational context, its measurement has not been adequately addressed, limiting our ability to quantify and assess context in healthcare settings and thus, advance development of contextual interventions to improve patient care. We developed the Alberta Context Tool (the ACT) to address this concern. It consists of 58 items representing 10 modifiable contextual concepts. We reported the initial validation of the ACT in 2009. This paper presents the second stage of the psychometric validation of the ACT. Methods We used the Standards for Educational and Psychological Testing to frame our validity assessment. Data from 645 English speaking healthcare aides from 25 urban residential long-term care facilities (nursing homes) in the three Canadian Prairie Provinces were used for this stage of validation. In this stage we focused on: (1) advanced aspects of internal structure (e.g., confirmatory factor analysis) and (2) relations with other variables validity evidence. To assess reliability and validity of scores obtained using the ACT we conducted: Cronbach's alpha, confirmatory factor analysis, analysis of variance, and tests of association. We also assessed the performance of the ACT when individual responses were aggregated to the care unit level, because the instrument was developed to obtain unit-level scores of context. Results Item-total correlations exceeded acceptable standards (> 0.3) for the majority of items (51 of 58). We ran three confirmatory factor models. Model 1 (all ACT items) displayed unacceptable fit overall and for five specific items (1 item on adequate space for resident care in the Organizational Slack-Space ACT concept and 4 items on use of electronic resources in the Structural and Electronic Resources ACT concept). This prompted specification of two additional models. 
Model 2 used the 7 scaled ACT concepts while Model 3 used the 3 count-based ACT concepts. Both models displayed substantially improved fit in comparison to Model 1. Cronbach's alpha for the 10 ACT concepts ranged from 0.37 to 0.92 with 2 concepts performing below the commonly accepted standard of 0.70. Bivariate associations between the ACT concepts and instrumental research utilization levels (which the ACT should predict) were statistically significant at the 5% level for 8 of the 10 ACT concepts. The majority (8/10) of the ACT concepts also showed a statistically significant trend of increasing mean scores when arrayed across the lowest to the highest levels of instrumental research use. Conclusions The validation process in this study demonstrated additional empirical support for construct validity of the ACT, when completed by healthcare aides in nursing homes. The overall pattern of the data was consistent with the structure hypothesized in the development of the ACT and supports the ACT as an appropriate measure for assessing organizational context in nursing homes. Caution should be applied in using the one space and four electronic resource items that displayed misfit in this study with healthcare aides until further assessments are made. PMID:21767378
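
    Of the statistics reported above, Cronbach's alpha is simplest to illustrate: it compares the sum of item variances to the variance of the total score. A minimal generic sketch (the toy scores are invented, not ACT data):

```python
def cronbach_alpha(items):
    # items: list of equal-length score lists, one list per scale item
    k = len(items)
    n = len(items[0])
    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var_sum = sum(var(item) for item in items)
    # alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    return k / (k - 1) * (1 - item_var_sum / var(totals))

# three items that track the same construct perfectly -> alpha = 1
perfect = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]])
```

    Values near 1 indicate highly consistent items; the 0.70 threshold mentioned in the abstract is the conventional acceptability cut-off.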

  4. Psychometric Properties of the Serbian Version of the Maslach Burnout Inventory-Human Services Survey: A Validation Study among Anesthesiologists from Belgrade Teaching Hospitals

    PubMed Central

    Matejić, Bojana; Milenović, Miodrag; Kisić Tepavčević, Darija; Simić, Dušica; Pekmezović, Tatjana; Worley, Jody A.

    2015-01-01

    We report findings from a validation study of the translated and culturally adapted Serbian version of the Maslach Burnout Inventory-Human Services Survey (MBI-HSS) for a sample of anesthesiologists working in tertiary healthcare. The results showed sufficient overall reliability (Cronbach's α = 0.72) of the scores (items 1–22). The results of Bartlett's test of sphericity (χ² = 1983.75, df = 231, p < 0.001) and the Kaiser-Meyer-Olkin measure of sampling adequacy (0.866) provided solid justification for factor analysis. In order to increase the sensitivity of this questionnaire, we performed an unrestricted factor analysis (retaining factors with eigenvalues greater than 1), which enabled us to extract the most suitable factor structure for our study instrument. The exploratory factor analysis model revealed five factors with eigenvalues greater than 1.0, explaining 62.0% of cumulative variance. Velicer's MAP test supported the five-factor model with the smallest average squared correlation of 0.184. This study indicated that the Serbian version of the MBI-HSS is a reliable and valid instrument to measure burnout among a population of anesthesiologists. Results confirmed strong psychometric characteristics of the study instrument, with recommendations for interpretation of two new factors that may be unique to the Serbian version of the MBI-HSS. PMID:26090517
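
    The eigenvalue-greater-than-1 retention rule used above (the Kaiser criterion) can be sketched generically: compute the eigenvalues of the item correlation matrix and count those above 1. The simulated two-factor item data below are invented for illustration, not the MBI-HSS responses.

```python
import numpy as np

rng = np.random.default_rng(1)
# hypothetical responses: 200 respondents x 6 items;
# items 0-2 load on one latent factor, items 3-5 on another
latent = rng.normal(size=(200, 2))
loadings = np.array([[1, 0], [1, 0], [1, 0],
                     [0, 1], [0, 1], [0, 1]], dtype=float)
X = latent @ loadings.T + 0.5 * rng.normal(size=(200, 6))

R = np.corrcoef(X, rowvar=False)            # item correlation matrix
eigvals = np.linalg.eigvalsh(R)[::-1]       # eigenvalues, descending
n_factors = int((eigvals > 1.0).sum())      # Kaiser criterion
```

    With two latent factors driving the six items, two eigenvalues stand well above 1 and the rest fall well below it, so the rule retains two factors.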

  5. Psychometric properties of the college survey for students with brain injury: individuals with and without traumatic brain injury.

    PubMed

    Kennedy, Mary R T; Krause, Miriam O; O'Brien, Katy H

    2014-01-01

    The psychometric properties of the college challenges sub-set from The College Survey for Students with Brain Injury (CSS-BI) were investigated with adults with and without traumatic brain injury (TBI). Adults with and without TBI completed the CSS-BI. A sub-set of participants with TBI were interviewed, intentional and convergent validity were investigated, and the internal structure of the college challenges was analysed with exploratory factor analysis/principal component analysis. Respondents with TBI understood the items describing college challenges with evidence of intentional validity. More individuals with TBI than controls endorsed eight of the 13 college challenges. Those who reported more health issues endorsed more college challenges, demonstrating preliminary convergent validity. Cronbach's alphas of >0.85 demonstrated acceptable internal reliability. Factor analysis revealed a four-factor model for those with TBI: studying and learning (Factor 1), time management and organization (Factor 2), social (Factor 3) and nervousness/anxiety (Factor 4). This model explained 72% and 69% of the variance for those with and without TBI, respectively. The college challenges sub-set from the CSS-BI identifies challenges that individuals with TBI face when going to college. Some challenges were related to two factors in the model, demonstrating the inter-connections of these experiences.

  7. A Baseline Patient Model to Support Testing of Medical Cyber-Physical Systems.

    PubMed

    Silva, Lenardo C; Perkusich, Mirko; Almeida, Hyggo O; Perkusich, Angelo; Lima, Mateus A M; Gorgônio, Kyller C

    2015-01-01

    Medical Cyber-Physical Systems (MCPS) are currently a trending topic of research. The main challenges are related to the integration and interoperability of connected medical devices, patient safety, physiologic closed-loop control, and the verification and validation of these systems. In this paper, we focus on patient safety and MCPS validation. We present a formal patient model to be used in health care systems validation without jeopardizing the patient's health. To determine the basic patient conditions, our model considers the four main vital signs: heart rate, respiratory rate, blood pressure and body temperature. To generate the vital signs we used regression models based on statistical analysis of a clinical database. Our solution should be used as a starting point for a behavioral patient model and adapted to specific clinical scenarios. We present the modeling process of the baseline patient model and show its evaluation. The conception process may be used to build different patient models. The results show the feasibility of the proposed model as an alternative to the immediate need for clinical trials to test these medical systems.
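
    The idea of generating baseline vital signs from regression models can be sketched as follows. The coefficients and noise scales here are hypothetical placeholders chosen to stay in plausible adult ranges; they are not the fitted values from the paper's clinical database.

```python
import random

def baseline_vitals(age, rng):
    # Hypothetical linear regressions of resting vitals on age,
    # plus Gaussian residual noise (illustrative values only).
    heart_rate  = 70.0 + 0.10 * (age - 40) + rng.gauss(0, 3)   # beats/min
    resp_rate   = 14.0 + 0.05 * (age - 40) + rng.gauss(0, 1)   # breaths/min
    systolic_bp = 115.0 + 0.45 * (age - 40) + rng.gauss(0, 5)  # mmHg
    temperature = 36.8 + rng.gauss(0, 0.2)                     # deg C
    return {"hr": heart_rate, "rr": resp_rate,
            "sbp": systolic_bp, "temp": temperature}

rng = random.Random(42)
v = baseline_vitals(60, rng)  # one simulated 60-year-old baseline patient
```

    A test harness for an MCPS would sample many such patients and feed the streams to the device under validation, rather than exposing a real patient.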

  8. [Development and Validation of the Academic Resilience Inventory for Nursing Students in Taiwan].

    PubMed

    Li, Cheng-Chieh; Wei, Chi-Fang; Tung, Yuk-Ying

    2017-10-01

    Failure to cope with learning pressures has been shown to influence the learning achievement and professional performance of nursing students. In order to enable nursing students to adapt successfully to their academic stress, it is essential to explore their academic resilience in the process of learning. To develop the Academic Resilience Inventory for Nursing Students (ARINS) and to test its reliability and validity. A total of 611 nursing students in central and southern Taiwan were recruited as participants. We randomly divided the sample into two subsamples using R software. The first sample was used to conduct item analysis and exploratory factor analysis. The other sample was used to conduct confirmatory factor analysis, cross validation, and criterion-related validity. There are 15 items in the ARINS, with cognitive maturity, emotional regulation, and help-seeking behavior used as the measurement indicators of academic resilience in nursing students. The goodness-of-fit indices from the CFA indicate that the model fit the data well and has good convergent and discriminant validity. Criterion-related validity was supported by the correlation among the ARINS, learning performance and attitude, hope and optimism, and depression. The ARINS has good reliability and validity and is a suitable measure of academic resilience in nursing students. It is helpful for nursing students to examine their academic stress and coping efficacy in the learning process.

  9. Development of Chemistry Game Card as an Instructional Media in the Subject of Naming Chemical Compound in Grade X

    NASA Astrophysics Data System (ADS)

    Bayharti; Iswendi, I.; Arifin, M. N.

    2018-04-01

    The purpose of this research was to produce a chemistry game card as an instructional medium for the subject of naming chemical compounds and to determine the degree of validity and practicality of the instructional media produced. This research was of the Research and Development (R&D) type, producing a product. The development model used was the 4-D model, which comprises four stages including: (1) define, (2) design, (3) develop, and (4) disseminate. This research was restricted to the development stage. The chemistry game card developed was validated by seven validators, and its practicality was tested on class X6 students of SMAN 5 Padang. The instrument of this research was a questionnaire consisting of a validity sheet and a practicality sheet. Data were collected by distributing the questionnaire to the validators, chemistry teachers, and students. The data were analyzed using Cohen's Kappa. Based on the data analysis, the validity of the chemistry game card was 0.87 (category: highly valid) and its practicality was 0.91 (category: highly practical).
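
    Cohen's Kappa, used above to quantify agreement on the validity and practicality sheets, corrects observed agreement for agreement expected by chance. A generic sketch (the ratings are invented):

```python
def cohens_kappa(r1, r2):
    # r1, r2: category ratings from two raters, same length
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n   # observed agreement
    cats = set(r1) | set(r2)
    # chance agreement from each rater's marginal category proportions
    pe = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)
    return (po - pe) / (1 - pe)

identical = cohens_kappa([1, 1, 0, 0], [1, 1, 0, 0])  # full agreement
chance    = cohens_kappa([1, 1, 0, 0], [1, 0, 1, 0])  # agreement at chance level
```

    Kappa of 1 means perfect agreement and 0 means chance-level agreement; the 0.87 and 0.91 reported above fall in the "almost perfect" band of the usual interpretation scale.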

  10. Developing rural palliative care: validating a conceptual model.

    PubMed

    Kelley, Mary Lou; Williams, Allison; DeMiglio, Lily; Mettam, Hilary

    2011-01-01

    The purpose of this research was to validate a conceptual model for developing palliative care in rural communities. This model articulates how local rural healthcare providers develop palliative care services according to four sequential phases. The model has roots in concepts of community capacity development, evolves from collaborative, generalist rural practice, and utilizes existing health services infrastructure. It addresses how rural providers manage challenges, specifically those related to: lack of resources, minimal community understanding of palliative care, health professionals' resistance, the bureaucracy of the health system, and the obstacles of providing services in rural environments. Seven semi-structured focus groups were conducted with interdisciplinary health providers in 7 rural communities in two Canadian provinces. Using a constant comparative analysis approach, focus group data were analyzed by examining participants' statements in relation to the model and comparing emerging themes in the development of rural palliative care to the elements of the model. The data validated the conceptual model as the model was able to theoretically predict and explain the experiences of the 7 rural communities that participated in the study. New emerging themes from the data elaborated existing elements in the model and informed the requirement for minor revisions. The model was validated and slightly revised, as suggested by the data. The model was confirmed as being a useful theoretical tool for conceptualizing the development of rural palliative care that is applicable in diverse rural communities.

  11. Linking big models to big data: efficient ecosystem model calibration through Bayesian model emulation

    NASA Astrophysics Data System (ADS)

    Fer, I.; Kelly, R.; Andrews, T.; Dietze, M.; Richardson, A. D.

    2016-12-01

    Our ability to forecast ecosystems is limited by how well we parameterize ecosystem models. Direct measurements for all model parameters are not always possible and inverse estimation of these parameters through Bayesian methods is computationally costly. A solution to computational challenges of Bayesian calibration is to approximate the posterior probability surface using a Gaussian Process that emulates the complex process-based model. Here we report the integration of this method within an ecoinformatics toolbox, Predictive Ecosystem Analyzer (PEcAn), and its application with two ecosystem models: SIPNET and ED2.1. SIPNET is a simple model, allowing application of MCMC methods both to the model itself and to its emulator. We used both approaches to assimilate flux (CO2 and latent heat), soil respiration, and soil carbon data from Bartlett Experimental Forest. This comparison showed that the emulator is reliable in terms of convergence to the posterior distribution. A 10000-iteration MCMC analysis with SIPNET itself required more than two orders of magnitude greater computation time than an MCMC run of the same length with its emulator. This difference would be greater for a more computationally demanding model. Validation of the emulator-calibrated SIPNET against both the assimilated data and out-of-sample data showed improved fit and reduced uncertainty around model predictions. We next applied the validated emulator method to the ED2, whose complexity precludes standard Bayesian data assimilation. We used the ED2 emulator to assimilate demographic data from a network of inventory plots. For validation of the calibrated ED2, we compared the model to results from Empirical Succession Mapping (ESM), a novel synthesis of successional patterns in Forest Inventory and Analysis data.
    Our results revealed that, while the pre-assimilation ED2 formulation could not capture the emergent demographic patterns from the ESM analysis, constraining the model parameters that control demographic processes increased the agreement considerably.
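
    The core emulation idea, evaluating the expensive model at a few design points and fitting a Gaussian Process through them, can be sketched in a few lines. This is a generic zero-mean GP regression with a squared-exponential kernel, not PEcAn's implementation; the quadratic log-likelihood stand-in and its settings are invented.

```python
import numpy as np

def expensive_model(theta):
    # stand-in for a costly simulator's log-likelihood surface
    return -0.5 * (theta - 2.0) ** 2

# design points: a handful of "expensive" evaluations
X = np.linspace(-2.0, 6.0, 9)
y = expensive_model(X)

def gp_predict(xstar, X, y, length=1.5, jitter=1e-8):
    # zero-mean GP regression with a squared-exponential kernel
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    K = k(X, X) + jitter * np.eye(len(X))
    alpha = np.linalg.solve(K, y)
    return k(np.atleast_1d(xstar), X) @ alpha

mu = gp_predict(2.0, X, y)  # emulated log-likelihood at theta = 2
```

    An MCMC sampler can then query `gp_predict` thousands of times at negligible cost, which is the source of the two-orders-of-magnitude speed-up reported above.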

  12. Predicting Overall Survival After Stereotactic Ablative Radiation Therapy in Early-Stage Lung Cancer: Development and External Validation of the Amsterdam Prognostic Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Louie, Alexander V., E-mail: Dr.alexlouie@gmail.com; Department of Radiation Oncology, London Regional Cancer Program, University of Western Ontario, London, Ontario; Department of Epidemiology, Harvard School of Public Health, Harvard University, Boston, Massachusetts

    Purpose: A prognostic model for 5-year overall survival (OS), consisting of recursive partitioning analysis (RPA) and a nomogram, was developed for patients with early-stage non-small cell lung cancer (ES-NSCLC) treated with stereotactic ablative radiation therapy (SABR). Methods and Materials: A primary dataset of 703 ES-NSCLC SABR patients was randomly divided into a training (67%) and an internal validation (33%) dataset. In the former group, 21 unique parameters consisting of patient, treatment, and tumor factors were entered into an RPA model to predict OS. Univariate and multivariate models were constructed for RPA-selected factors to evaluate their relationship with OS. A nomogram for OS was constructed based on factors significant in multivariate modeling and validated with calibration plots. Both the RPA and the nomogram were externally validated in independent surgical (n=193) and SABR (n=543) datasets. Results: RPA identified 2 distinct risk classes based on tumor diameter, age, World Health Organization performance status (PS), and Charlson comorbidity index. This RPA had moderate discrimination in SABR datasets (c-index range: 0.52-0.60) but was of limited value in the surgical validation cohort. The nomogram predicting OS included smoking history in addition to RPA-identified factors. In contrast to the RPA, the nomogram performed well in internal validation (r² = 0.97) and in external SABR (r² = 0.79) and surgical cohorts (r² = 0.91). Conclusions: The Amsterdam prognostic model is the first externally validated prognostication tool for OS in ES-NSCLC treated with SABR available to individualize patient decision making. The nomogram retained strong performance across surgical and SABR external validation datasets. RPA performance was poor in surgical patients, suggesting that 2 distinct patient populations are being treated with these 2 effective modalities.
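
    The c-index used above to measure discrimination can be sketched for the uncensored case: it is the fraction of comparable patient pairs in which the patient with the earlier event was assigned the higher predicted risk. Real survival data require handling censoring, which this generic sketch omits; the toy values are invented.

```python
def concordance_index(times, risks):
    # times: event times; risks: predicted risk scores (no censoring)
    concordant = comparable = 0
    n = len(times)
    for i in range(n):
        for j in range(i + 1, n):
            if times[i] == times[j]:
                continue  # tied event times are not comparable here
            comparable += 1
            early, late = (i, j) if times[i] < times[j] else (j, i)
            if risks[early] > risks[late]:
                concordant += 1
            elif risks[early] == risks[late]:
                concordant += 0.5  # tied risks count as half-concordant
    return concordant / comparable

c_perfect = concordance_index([1, 2, 3, 4], [4, 3, 2, 1])  # ideal ordering
c_random  = concordance_index([1, 2, 3, 4], [1, 1, 1, 1])  # uninformative model
```

    A c-index of 0.5 means no discrimination, which is why the 0.52-0.60 range reported for the RPA counts as only moderate.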

  13. Examining construct and predictive validity of the Health-IT Usability Evaluation Scale: confirmatory factor analysis and structural equation modeling results.

    PubMed

    Yen, Po-Yin; Sousa, Karen H; Bakken, Suzanne

    2014-10-01

    In a previous study, we developed the Health Information Technology Usability Evaluation Scale (Health-ITUES), which is designed to support customization at the item level: customized items match the specific tasks/expectations of a health IT system while retaining comparability at the construct level. That study provided evidence of factorial validity and internal consistency reliability through exploratory factor analysis. In this study, we advanced the development of Health-ITUES by examining its construct validity and predictive validity. The health IT system studied was a web-based communication system that supported nurse staffing and scheduling. Using Health-ITUES, we conducted a cross-sectional study to evaluate users' perception toward the web-based communication system after system implementation. We examined Health-ITUES's construct validity through first and second order confirmatory factor analysis (CFA), and its predictive validity via structural equation modeling (SEM). The sample comprised 541 staff nurses in two healthcare organizations. The CFA (n=165) showed that a general usability factor accounted for 78.1%, 93.4%, 51.0%, and 39.9% of the explained variance in 'Quality of Work Life', 'Perceived Usefulness', 'Perceived Ease of Use', and 'User Control', respectively. The SEM (n=541) supported the predictive validity of Health-ITUES, explaining 64% of the variance in intention for system use. The results of CFA and SEM provide additional evidence for the construct and predictive validity of Health-ITUES. The customizability of Health-ITUES has the potential to support comparisons at the construct level, while allowing variation at the item level. We also illustrate application of Health-ITUES across stages of system development. Published by the BMJ Publishing Group Limited.

  14. Review of validation and reporting of non-targeted fingerprinting approaches for food authentication.

    PubMed

    Riedl, Janet; Esslinger, Susanne; Fauhl-Hassek, Carsten

    2015-07-23

    Food fingerprinting approaches are expected to become a very potent tool in authentication processes aiming at a comprehensive characterization of complex food matrices. By non-targeted spectrometric or spectroscopic chemical analysis with a subsequent (multivariate) statistical evaluation of acquired data, food matrices can be investigated in terms of their geographical origin, species variety or possible adulterations. Although many successful research projects have already demonstrated the feasibility of non-targeted fingerprinting approaches, their uptake and implementation into routine analysis and food surveillance is still limited. In many proof-of-principle studies, the prediction ability of only one data set was explored, measured within a limited period of time using one instrument within one laboratory. Thorough validation strategies that guarantee reliability of the underlying data basis and that allow conclusions on whether the respective approaches are fit for purpose have not yet been proposed. Within this review, critical steps of the fingerprinting workflow were explored to develop a generic scheme for multivariate model validation. As a result, a proposed scheme for "good practice" shall guide users through validation and reporting of non-targeted fingerprinting results. Furthermore, food fingerprinting studies were selected by a systematic search approach and reviewed with regard to (a) transparency of data processing and (b) validity of study results. Subsequently, the studies were inspected for measures of statistical model validation, analytical method validation and quality assurance measures. In this context, issues and recommendations were found that might be considered as an actual starting point for developing validation standards of non-targeted metabolomics approaches for food authentication in the future.
Hence, this review intends to contribute to the harmonization and standardization of food fingerprinting, both required as a prior condition for the authentication of food in routine analysis and official control. Copyright © 2015 Elsevier B.V. All rights reserved.
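
    A core element of the validation scheme discussed above is estimating a classifier's prediction ability on data it has not seen, for example by leave-one-out cross-validation. The sketch below is generic, not the review's proposed scheme: a toy nearest-centroid classifier and invented two-dimensional "fingerprints" stand in for a real multivariate model and spectra.

```python
def nearest_centroid_predict(train, labels, x):
    # mean "spectrum" per class; assign x to the closest centroid
    cents = {}
    for c in set(labels):
        rows = [t for t, l in zip(train, labels) if l == c]
        cents[c] = [sum(col) / len(rows) for col in zip(*rows)]
    def dist2(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))
    return min(cents, key=lambda c: dist2(cents[c], x))

def loocv_accuracy(X, y):
    # leave-one-out cross-validation: each sample is predicted by a
    # model trained on all the other samples
    hits = 0
    for i in range(len(X)):
        hits += nearest_centroid_predict(X[:i] + X[i+1:],
                                         y[:i] + y[i+1:], X[i]) == y[i]
    return hits / len(X)

# toy fingerprints: two well-separated groups (e.g. two origins)
X = [[0.0, 0.1], [0.2, 0.0], [0.1, 0.2],
     [5.0, 5.1], [5.2, 4.9], [4.9, 5.0]]
y = ["A", "A", "A", "B", "B", "B"]
acc = loocv_accuracy(X, y)
```

    As the review stresses, a single cross-validated accuracy on one data set is only a starting point; robust validation also needs independent test sets measured on other instruments and at other times.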

  15. Quantitative validation of an air-coupled ultrasonic probe model by Interferometric laser tomography

    NASA Astrophysics Data System (ADS)

    Revel, G. M.; Pandarese, G.; Cavuto, A.

    2012-06-01

    The present paper describes the quantitative validation of a finite element (FE) model of the ultrasound beam generated by an air-coupled non-contact ultrasound transducer. The model boundary conditions are given by vibration velocities measured by laser vibrometry on the probe membrane. The proposed validation method is based on the comparison between the simulated 3D pressure field and the pressure data measured with the interferometric laser tomography technique. The model details and the experimental techniques are described in the paper. The analysis of results shows the effectiveness of the proposed approach and the possibility to quantitatively assess and predict the generated acoustic pressure field, with maximum discrepancies on the order of 20% due to uncertainty effects. This step is important for determining the real applicability of air-coupled probes in complex problems and for simulating the whole inspection procedure, even at the component design stage, so as to virtually verify inspectability.

  16. An Innovative Software Tool Suite for Power Plant Model Validation and Parameter Calibration using PMU Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yuanyuan; Diao, Ruisheng; Huang, Renke

    Maintaining good quality of power plant stability models is of critical importance to ensure the secure and economic operation and planning of today's power grid with its increasing stochastic and dynamic behavior. According to North American Electric Reliability Corporation (NERC) standards, all generators in North America with capacities larger than 10 MVA are required to validate their models every five years. Validation is quite costly and can significantly affect the revenue of generator owners, because the traditional staged testing requires generators to be taken offline. Over the past few years, validating and calibrating parameters using online measurements including phasor measurement units (PMUs) and digital fault recorders (DFRs) has been proven to be a cost-effective approach. In this paper, an innovative open-source tool suite is presented for validating power plant models using the PPMV tool, identifying bad parameters with trajectory sensitivity analysis, and finally calibrating parameters using an ensemble Kalman filter (EnKF) based algorithm. The architectural design and the detailed procedures to run the tool suite are presented, with results of tests on a realistic hydro power plant using PMU measurements for 12 different events. The calibrated parameters of machine, exciter, governor and PSS models demonstrate much better performance than the original models for all the events and show the robustness of the proposed calibration algorithm.
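
    The EnKF update at the heart of such calibration can be sketched in the simplest scalar case: an ensemble of parameter guesses is nudged toward a measurement in proportion to the sampled parameter-output covariance. This is a generic illustration with an invented linear response, not the tool suite's algorithm or a real plant model.

```python
import numpy as np

rng = np.random.default_rng(7)
theta_true = 2.0          # "true" plant parameter to recover
obs_std = 0.1             # measurement noise standard deviation

def h(theta):
    # stand-in for the simulated plant response to the parameter
    return 3.0 * theta

d = h(theta_true) + rng.normal(0, obs_std)   # one noisy PMU-like measurement

# prior ensemble of the uncertain parameter (deliberately biased)
N = 200
theta = rng.normal(0.5, 1.0, N)
hx = h(theta)

# ensemble Kalman update (scalar form): K = Cov(theta, h) / (Var(h) + R)
cov_th = np.cov(theta, hx)[0, 1]
K = cov_th / (np.var(hx, ddof=1) + obs_std ** 2)
perturbed = d + rng.normal(0, obs_std, N)    # perturbed observations
theta_post = theta + K * (perturbed - hx)

prior_err = abs(theta.mean() - theta_true)
post_err = abs(theta_post.mean() - theta_true)
```

    In practice the state and parameter vectors are high-dimensional and the update uses ensemble cross-covariances over simulated event trajectories, but the correction mechanism is the same.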

  17. Modeling and Validation of Microwave Ablations with Internal Vaporization

    PubMed Central

    Chiang, Jason; Birla, Sohan; Bedoya, Mariajose; Jones, David; Subbiah, Jeyam; Brace, Christopher L.

    2014-01-01

    Numerical simulation is increasingly being utilized for computer-aided design of treatment devices, analysis of ablation growth, and clinical treatment planning. Simulation models to date have incorporated electromagnetic wave propagation and heat conduction, but not other relevant physics such as water vaporization and mass transfer. Such physical changes are particularly noteworthy during the intense heat generation associated with microwave heating. In this work, a numerical model was created that integrates microwave heating with water vapor generation and transport by using porous media assumptions in the tissue domain. The heating physics of the water vapor model was validated through temperature measurements taken at locations 5, 10 and 20 mm away from the heating zone of the microwave antenna in a homogenized ex vivo bovine liver setup. Cross-sectional area of water vapor transport was validated through intra-procedural computed tomography (CT) during microwave ablations in homogenized ex vivo bovine liver. Iso-density contours from CT images were compared to vapor concentration contours from the numerical model at intermittent time points using the Jaccard Index. In general, there was an improving correlation in ablation size dimensions as the ablation procedure proceeded, with a Jaccard Index of 0.27, 0.49, 0.61, 0.67 and 0.69 at 1, 2, 3, 4, and 5 minutes, respectively. This study demonstrates the feasibility and validity of incorporating water vapor concentration into thermal ablation simulations and validating such models experimentally. PMID:25330481
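
    The Jaccard Index used above is simply intersection over union of the two contoured regions. A minimal generic sketch on invented pixel masks (not the study's CT data):

```python
def jaccard_index(mask_a, mask_b):
    # masks: sets of (row, col) pixels inside each contour
    inter = len(mask_a & mask_b)
    union = len(mask_a | mask_b)
    return inter / union if union else 1.0

# toy 4x4 regions offset by one pixel, standing in for the simulated
# vapor-concentration contour and the CT iso-density contour
sim = {(r, c) for r in range(4) for c in range(4)}
ct  = {(r, c) for r in range(1, 5) for c in range(1, 5)}
j = jaccard_index(sim, ct)
```

    An index of 1 means the simulated and measured regions coincide exactly, so the rise from 0.27 to 0.69 over the five-minute ablation reflects increasingly good spatial agreement.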

  18. Using the split Hopkinson pressure bar to validate material models.

    PubMed

    Church, Philip; Cornish, Rory; Cullis, Ian; Gould, Peter; Lewtas, Ian

    2014-08-28

    This paper gives a discussion of the use of the split-Hopkinson bar with particular reference to the requirements of materials modelling at QinetiQ. This is to deploy validated material models for numerical simulations that are physically based and have as little characterization overhead as possible. In order to have confidence that the models have a wide range of applicability, this means, at most, characterizing the models at low rate and then validating them at high rate. The split Hopkinson pressure bar (SHPB) is ideal for this purpose. It is also a very useful tool for analysing material behaviour under non-shock wave loading. This means understanding the output of the test and developing techniques for reliable comparison of simulations with SHPB data. For materials other than metals, comparison with an output stress vs. strain curve is not sufficient as the assumptions built into the classical analysis are generally violated. The method described in this paper compares the simulations with as much validation data as can be derived from deployed instrumentation including the raw strain gauge data on the input and output bars, which avoids any assumptions about stress equilibrium. One has to take into account Pochhammer-Chree oscillations and their effect on the specimen and recognize that this is itself also a valuable validation test of the material model. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  19. Validating an Air Traffic Management Concept of Operation Using Statistical Modeling

    NASA Technical Reports Server (NTRS)

    He, Yuning; Davies, Misty Dawn

    2013-01-01

    Validating a concept of operation for a complex, safety-critical system (like the National Airspace System) is challenging because of the high dimensionality of the controllable parameters and the infinite number of states of the system. In this paper, we use statistical modeling techniques to explore the behavior of a conflict detection and resolution algorithm designed for the terminal airspace. These techniques predict the robustness of the system simulation to both nominal and off-nominal behaviors within the overall airspace. They also can be used to evaluate the output of the simulation against recorded airspace data. Additionally, the techniques carry with them a mathematical value of the worth of each prediction: a statistical uncertainty for any robustness estimate. Uncertainty Quantification (UQ) is the process of quantitative characterization and ultimately a reduction of uncertainties in complex systems. UQ is important for understanding the influence of uncertainties on the behavior of a system and therefore is valuable for design, analysis, and verification and validation. In this paper, we apply advanced statistical modeling methodologies and techniques on an advanced air traffic management system, namely the Terminal Tactical Separation Assured Flight Environment (T-TSAFE). We show initial results for a parameter analysis and safety boundary (envelope) detection in the high-dimensional parameter space. For our boundary analysis, we developed a new sequential approach based upon the design of computer experiments, allowing us to incorporate knowledge from domain experts into our modeling and to determine the most likely boundary shapes and their parameters. We carried out the analysis on system parameters and describe an initial approach that will allow us to include time-series inputs, such as the radar track data, into the analysis.

  20. Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation

    NASA Astrophysics Data System (ADS)

    Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten

    2015-04-01

    Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. However, little guidance is available for either step in environmental modelling. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models of increasing complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test, or method of Morris; Regional Sensitivity Analysis; and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: convergence of the values of the sensitivity indices, of the ranking (the ordering among the parameters), and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the values of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical sample sizes reported in the literature can be well below those that actually ensure convergence of ranking and screening.
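    The Elementary Effect Test (method of Morris) mentioned above can be sketched compactly: perturb one normalized factor at a time and average the absolute elementary effects per factor, then screen out factors whose mean effect falls below a threshold. This is a minimal radial-style sketch under assumed uniform sampling, not the trajectory design or the hydrological models (Hymod, HBV, SWAT) used in the study.

    ```python
    import random

    def morris_mu_star(model, bounds, n_base=20, delta=0.25, seed=42):
        # Screening sketch: for each of n_base random base points, perturb
        # each factor by `delta` in normalized [0, 1] space, compute the
        # elementary effect, and return the mean absolute effect (mu*)
        # per factor. Factors with small mu* are candidates for screening.
        rng = random.Random(seed)
        k = len(bounds)
        effects = [[] for _ in range(k)]
        for _ in range(n_base):
            x = [rng.uniform(0.0, 1.0 - delta) for _ in range(k)]
            base = model([lo + xi * (hi - lo)
                          for xi, (lo, hi) in zip(x, bounds)])
            for i in range(k):
                xp = list(x)
                xp[i] += delta  # one-at-a-time perturbation
                pert = model([lo + xi * (hi - lo)
                              for xi, (lo, hi) in zip(xp, bounds)])
                effects[i].append(abs(pert - base) / delta)
        return [sum(e) / len(e) for e in effects]
    ```

    For a toy linear model `lambda p: 5.0 * p[0] + 0.01 * p[1]` with unit bounds, mu* clearly separates the influential first factor from the near-insensitive second one; the study's point is that choosing `n_base` and the screening threshold reliably requires convergence checks such as bootstrapping.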
