Sample records for model validation tests

  1. Solar Sail Models and Test Measurements Correspondence for Validation Requirements Definition

    NASA Technical Reports Server (NTRS)

    Ewing, Anthony; Adams, Charles

    2004-01-01

    Solar sails are being developed as a mission-enabling technology in support of future NASA science missions. Current efforts have advanced solar sail technology sufficient to justify a flight validation program. A primary objective of this activity is to test and validate solar sail models that are currently under development so that they may be used with confidence in future science mission development (e.g., scalable to larger sails). Both system and model validation requirements must be defined early in the program to guide design cycles and to ensure that relevant and sufficient test data will be obtained to conduct model validation to the level required. A process of model identification, model input/output documentation, model sensitivity analyses, and test measurement correspondence is required so that decisions can be made to satisfy validation requirements within program constraints.

  2. Economic analysis of model validation for a challenge problem

    DOE PAGES

    Paez, Paul J.; Paez, Thomas L.; Hasselman, Timothy K.

    2016-02-19

    It is now commonplace for engineers to build mathematical models of the systems they are designing, building, or testing. And, it is nearly universally accepted that phenomenological models of physical systems must be validated prior to use for prediction in consequential scenarios. Yet, there are certain situations in which testing only or no testing and no modeling may be economically viable alternatives to modeling and its associated testing. This paper develops an economic framework within which benefit–cost can be evaluated for modeling and model validation relative to other options. The development is presented in terms of a challenge problem. As a result, we provide a numerical example that quantifies when modeling, calibration, and validation yield higher benefit–cost than a testing only or no modeling and no testing option.
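
    The paper's framework weighs each option's upfront cost against the expected loss it leaves exposed. A minimal sketch of that benefit–cost comparison follows; all costs and probabilities are hypothetical placeholders, not the paper's challenge-problem numbers.

    ```python
    # Toy benefit-cost ranking of the three options discussed above. The numbers
    # are hypothetical; only the comparison logic is illustrated.
    options = {
        # cost: upfront expense of the strategy
        # p_failure: chance of a consequential failure the strategy fails to catch
        "model_and_validate": {"cost": 120.0, "p_failure": 0.02},
        "test_only":          {"cost": 200.0, "p_failure": 0.05},
        "no_model_no_test":   {"cost":   0.0, "p_failure": 0.30},
    }
    FAILURE_LOSS = 1000.0  # cost of an undetected failure (hypothetical)

    for name, o in options.items():
        expected_total = o["cost"] + o["p_failure"] * FAILURE_LOSS
        print(f"{name:20s} expected total cost = {expected_total:7.1f}")
    # The option with the lowest expected total cost is economically preferred;
    # changing FAILURE_LOSS shows how consequence severity shifts the ranking.
    ```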

  3. Statistical validation of normal tissue complication probability models.

    PubMed

    Xu, Cheng-Jian; van der Schaaf, Arjen; Van't Veld, Aart A; Langendijk, Johannes A; Schilstra, Cornelis

    2012-09-01

    To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use. Copyright © 2012 Elsevier Inc. All rights reserved.
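
    A minimal sketch of repeated double (nested) cross-validation with a permutation test, in the spirit of the LASSO-based NTCP validation above. The data are synthetic stand-ins for dose-volume features, and the loop counts are kept small for illustration.

    ```python
    # Nested CV: the L1 (LASSO-type) penalty is tuned in the inner loop, the
    # outer loop estimates performance, and label permutation gives a null
    # reference for significance. Data are synthetic, not the xerostomia cohort.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 20))                         # candidate predictors
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=200) > 0).astype(int)

    lasso = GridSearchCV(                                  # inner loop: tune penalty
        LogisticRegression(penalty="l1", solver="liblinear"),
        {"C": np.logspace(-2, 1, 8)},
        cv=StratifiedKFold(5), scoring="roc_auc",
    )
    auc = cross_val_score(lasso, X, y,                     # outer loop: performance
                          cv=StratifiedKFold(5), scoring="roc_auc").mean()

    null = [cross_val_score(lasso, X, rng.permutation(y),  # permutation null
                            cv=StratifiedKFold(5), scoring="roc_auc").mean()
            for _ in range(20)]                            # use >= 1000 in practice
    p = (1 + sum(n >= auc for n in null)) / (1 + len(null))
    print(f"nested-CV AUC = {auc:.3f}, permutation p = {p:.3f}")
    ```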

  4. SDG and qualitative trend based model multiple scale validation

    NASA Astrophysics Data System (ADS)

    Gao, Dong; Xu, Xin; Yin, Jianjin; Zhang, Hongyu; Zhang, Beike

    2017-09-01

    Verification, Validation and Accreditation (VV&A) is a key technology of simulation and modelling. Traditional model validation methods suffer from weak completeness, are carried out at a single scale, and depend on human experience. A multiple-scale validation method based on the SDG (Signed Directed Graph) and qualitative trends is proposed. First, the SDG model is built and qualitative trends are added to it. Complete testing scenarios are then produced by positive inference. The multiple-scale validation is carried out by comparing the testing scenarios with the outputs of the simulation model at different scales. Finally, the effectiveness of the method is demonstrated by validating a reactor model.

  5. Testing and validating environmental models

    USGS Publications Warehouse

    Kirchner, J.W.; Hooper, R.P.; Kendall, C.; Neal, C.; Leavesley, G.

    1996-01-01

    Generally accepted standards for testing and validating ecosystem models would benefit both modellers and model users. Universally applicable test procedures are difficult to prescribe, given the diversity of modelling approaches and the many uses for models. However, the generally accepted scientific principles of documentation and disclosure provide a useful framework for devising general standards for model evaluation. Adequately documenting model tests requires explicit performance criteria, and explicit benchmarks against which model performance is compared. A model's validity, reliability, and accuracy can be most meaningfully judged by explicit comparison against the available alternatives. In contrast, current practice is often characterized by vague, subjective claims that model predictions show 'acceptable' agreement with data; such claims provide little basis for choosing among alternative models. Strict model tests (those that invalid models are unlikely to pass) are the only ones capable of convincing rational skeptics that a model is probably valid. However, 'false positive' rates as low as 10% can substantially erode the power of validation tests, making them insufficiently strict to convince rational skeptics. Validation tests are often undermined by excessive parameter calibration and overuse of ad hoc model features. Tests are often also divorced from the conditions under which a model will be used, particularly when it is designed to forecast beyond the range of historical experience. In such situations, data from laboratory and field manipulation experiments can provide particularly effective tests, because one can create experimental conditions quite different from historical data, and because experimental data can provide a more precisely defined 'target' for the model to hit. We present a simple demonstration showing that the two most common methods for comparing model predictions to environmental time series (plotting model time series against data time series, and plotting predicted versus observed values) have little diagnostic power. We propose that it may be more useful to statistically extract the relationships of primary interest from the time series, and test the model directly against them.
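
    The paper's central warning, that predicted-versus-observed comparisons have little diagnostic power, can be reproduced in a few lines. In this synthetic sketch a model with the wrong rainfall sensitivity still scores a high predicted-versus-observed R^2 because a shared seasonal cycle dominates both series; all variables are invented stand-ins.

    ```python
    # Synthetic demonstration: a model with a 4x error in rainfall sensitivity
    # still matches observations closely in a predicted-vs-observed comparison,
    # because the shared seasonal cycle dominates both series.
    import numpy as np

    rng = np.random.default_rng(1)
    t = np.arange(1000)
    rain = rng.gamma(2.0, 1.0, size=1000)
    season = 10 * np.sin(2 * np.pi * t / 365)
    obs = 2.0 * rain + season                        # truth: rain sensitivity 2.0
    model = 0.5 * rain + season + 1.5 * rain.mean()  # wrong sensitivity, right mean

    r2 = np.corrcoef(model, obs)[0, 1] ** 2            # conventional comparison
    sens_obs = np.polyfit(rain, obs - season, 1)[0]    # extracted relationship: data
    sens_mod = np.polyfit(rain, model - season, 1)[0]  # extracted relationship: model
    print(f"predicted-vs-observed R^2 = {r2:.2f}")     # high: looks 'acceptable'
    print(f"rain sensitivity: data {sens_obs:.2f} vs model {sens_mod:.2f}")
    ```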

  6. Parameterization of Model Validating Sets for Uncertainty Bound Optimizations. Revised

    NASA Technical Reports Server (NTRS)

    Lim, K. B.; Giesy, D. P.

    2000-01-01

    Given measurement data, a nominal model and a linear fractional transformation uncertainty structure with an allowance on unknown but bounded exogenous disturbances, easily computable tests for the existence of a model validating uncertainty set are given. Under mild conditions, these tests are necessary and sufficient for the case of complex, nonrepeated, block-diagonal structure. For the more general case which includes repeated and/or real scalar uncertainties, the tests are only necessary but become sufficient if a collinearity condition is also satisfied. With the satisfaction of these tests, it is shown that a parameterization of all model validating sets of plant models is possible. The new parameterization is used as a basis for a systematic way to construct or perform uncertainty tradeoff with model validating uncertainty sets which have specific linear fractional transformation structure for use in robust control design and analysis. An illustrative example which includes a comparison of candidate model validating sets is given.

  7. Validation Test Report for the CRWMS Analysis and Logistics Visually Interactive Model (CALVIN) Version 3.0, 10074-VTR-3.0-00

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    S. Gillespie

    2000-07-27

    This report describes the tests performed to validate the CRWMS "Analysis and Logistics Visually Interactive" Model (CALVIN) Version 3.0 (V3.0) computer code (STN: 10074-3.0-00). To validate the code, a series of test cases was developed in the CALVIN V3.0 Validation Test Plan (CRWMS M&O 1999a) that exercises the principal calculation models and options of CALVIN V3.0. Twenty-five test cases were developed: 18 logistics test cases and 7 cost test cases. These cases test the features of CALVIN in a sequential manner, so that the validation of each test case is used to demonstrate the accuracy of the input to subsequent calculations. Where necessary, the test cases utilize reduced-size data tables to make the hand calculations used to verify the results more tractable, while still adequately testing the code's capabilities. Acceptance criteria were established for the logistics and cost test cases in the Validation Test Plan (CRWMS M&O 1999a). The logistics test cases were developed to test the following CALVIN calculation models: spent nuclear fuel (SNF) and reactivity calculations; options for altering reactor life; adjustment of commercial SNF (CSNF) acceptance rates for fiscal year calculations and mid-year acceptance start; fuel selection, transportation cask loading, and shipping to the Monitored Geologic Repository (MGR); transportation cask shipping to and storage at an Interim Storage Facility (ISF); reactor pool allocation options; and disposal options at the MGR. Two types of cost test cases were developed: cases to validate the detailed transportation costs, and cases to validate the costs associated with the Civilian Radioactive Waste Management System (CRWMS) Management and Operating Contractor (M&O) and Regional Servicing Contractors (RSCs). For each test case, values calculated using Microsoft Excel 97 worksheets were compared to CALVIN V3.0 scenarios with the same input data and assumptions. All of the test case results agree with the CALVIN V3.0 results within the bounds of the acceptance criteria. Therefore, it is concluded that the CALVIN V3.0 calculation models and options tested in this report are validated.

  8. Overview of Heat Addition and Efficiency Predictions for an Advanced Stirling Convertor

    NASA Technical Reports Server (NTRS)

    Wilson, Scott D.; Reid, Terry; Schifer, Nicholas; Briggs, Maxwell

    2011-01-01

    Past methods of predicting net heat input needed to be validated. The validation effort pursued several paths, including improving model inputs, using test hardware to provide validation data, and validating high-fidelity models. Validation test hardware provided a direct measurement of net heat input for comparison to predicted values. The predicted net heat input was 1.7 percent less than the measured value, and initial calculations of measurement uncertainty were 2.1 percent (under review). Lessons learned during the validation effort were incorporated into the convertor modeling approach, which improved predictions of convertor efficiency.

  9. Use of the Ames Check Standard Model for the Validation of Wall Interference Corrections

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.; Amaya, M.; Flach, R.

    2018-01-01

    The new check standard model of the NASA Ames 11-ft Transonic Wind Tunnel was chosen for a future validation of the facility's wall interference correction system. The chosen validation approach takes advantage of the fact that test conditions experienced by a large model in the slotted part of the tunnel's test section will change significantly if a subset of the slots is temporarily sealed. Therefore, the model's aerodynamic coefficients have to be recorded, corrected, and compared for two different test section configurations in order to perform the validation. Test section configurations with highly accurate Mach number and dynamic pressure calibrations were selected for the validation. First, the model is tested with all test section slots in open configuration while keeping the model's center of rotation on the tunnel centerline. In the next step, slots on the test section floor are sealed and the model is moved to a new center of rotation that is 33 inches below the tunnel centerline. Then, the original angle of attack sweeps are repeated. Afterwards, wall interference corrections are applied to both test data sets and response surface models of the resulting aerodynamic coefficients in interference-free flow are generated. Finally, the response surface models are used to predict the aerodynamic coefficients for a family of angles of attack while keeping dynamic pressure, Mach number, and Reynolds number constant. The validation is considered successful if the corrected aerodynamic coefficients obtained from the related response surface model pair show good agreement. Residual differences between the corrected coefficient sets will be analyzed as well because they are an indicator of the overall accuracy of the facility's wall interference correction process.

  10. Understanding Student Teachers' Behavioural Intention to Use Technology: Technology Acceptance Model (TAM) Validation and Testing

    ERIC Educational Resources Information Center

    Wong, Kung-Teck; Osman, Rosma bt; Goh, Pauline Swee Choo; Rahmat, Mohd Khairezan

    2013-01-01

    This study sets out to validate and test the Technology Acceptance Model (TAM) in the context of Malaysian student teachers' integration of their technology in teaching and learning. To establish factorial validity, data collected from 302 respondents were tested against the TAM using confirmatory factor analysis (CFA), and structural equation…

  11. Correlation Results for a Mass Loaded Vehicle Panel Test Article Finite Element Models and Modal Survey Tests

    NASA Technical Reports Server (NTRS)

    Maasha, Rumaasha; Towner, Robert L.

    2012-01-01

    High-fidelity Finite Element Models (FEMs) were developed to support a recent test program at Marshall Space Flight Center (MSFC). The FEMs correspond to test articles used for a series of acoustic tests. Modal survey tests were used to validate the FEMs for five acoustic tests (a bare panel and four different mass-loaded panel configurations). An additional modal survey test was performed on the empty test fixture (the orthogrid panel mounting fixture between the reverberant and anechoic chambers). Modal survey tests were used to test-validate the dynamic characteristics of the FEMs used for acoustic test excitation. Modal survey testing and subsequent model correlation have validated the natural frequencies and mode shapes of the FEMs. The modal survey test results provide a basis for the analysis models used for acoustic loading response test and analysis comparisons.

  12. Prospective validation of pathologic complete response models in rectal cancer: Transferability and reproducibility.

    PubMed

    van Soest, Johan; Meldolesi, Elisa; van Stiphout, Ruud; Gatta, Roberto; Damiani, Andrea; Valentini, Vincenzo; Lambin, Philippe; Dekker, Andre

    2017-09-01

    Multiple models have been developed to predict pathologic complete response (pCR) in locally advanced rectal cancer patients. Unfortunately, validation of these models normally omits the implications of cohort differences on prediction model performance. In this work, we perform a prospective validation of three pCR models, including an assessment of whether this validation targets the transferability or the reproducibility (cohort differences) of the given models. We applied a novel methodology, the cohort differences model, to predict whether a patient belongs to the training or to the validation cohort. If the cohort differences model performs well, it would suggest a large difference in cohort characteristics, meaning we would validate the transferability of the model rather than its reproducibility. We tested our method in a prospective validation of three existing models for pCR prediction in 154 patients. Our results showed a large difference between training and validation cohort for one of the three tested models [area under the receiver operating curve (AUC) of the cohort differences model: 0.85], signaling that the validation leans towards transferability. Two out of three models had a lower AUC at validation (0.66 and 0.58); one model showed a higher AUC in the validation cohort (0.70). We have successfully applied a new methodology in the validation of three prediction models, which allows us to indicate whether a validation targeted transferability (large differences between training/validation cohort) or reproducibility (small cohort differences). © 2017 American Association of Physicists in Medicine.
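
    A minimal sketch of the cohort differences model described above: a classifier is trained to predict cohort membership, and its cross-validated AUC indicates whether the validation speaks to transferability (cohorts differ) or reproducibility (cohorts similar). Features here are synthetic stand-ins for patient data.

    ```python
    # Train a classifier to tell the development cohort from the validation
    # cohort. AUC near 0.5 means similar cohorts (reproducibility); AUC near 1
    # means different cohorts (transferability). Synthetic data only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X_train = rng.normal(0.0, 1.0, size=(300, 5))   # development cohort features
    X_valid = rng.normal(0.8, 1.2, size=(154, 5))   # shifted validation cohort

    X = np.vstack([X_train, X_valid])
    cohort = np.r_[np.zeros(len(X_train)), np.ones(len(X_valid))]

    auc = cross_val_score(LogisticRegression(), X, cohort,
                          cv=5, scoring="roc_auc").mean()
    print(f"cohort-differences AUC = {auc:.2f}")
    ```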

  13. X-56A MUTT: Aeroservoelastic Modeling

    NASA Technical Reports Server (NTRS)

    Ouellette, Jeffrey A.

    2015-01-01

    For the NASA X-56A program, Armstrong Flight Research Center has been developing a set of linear state-space models that integrate the flight dynamics and structural dynamics. These high-order models are needed for control design, control evaluation, and test input design. The current focus has been on developing stiff-wing models to validate the modeling approach; extending the approach to the flexible wings requires only a change in the structural model. Individual subsystem models (actuators, inertial properties, etc.) have been validated by component-level ground tests. Closed-loop simulation of maneuvers designed to validate the flight dynamics of these models correlates very well with flight test data. The open-loop structural dynamics are also shown to correlate well with the flight test data.

  14. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE PAGES

    Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...

    2017-11-08

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.
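
    The semi-nonparametric likelihood itself is beyond a short snippet, but the test it enables is an ordinary likelihood-ratio test with the MNL (standard Gumbel errors) nested inside the SNP model. A sketch of those mechanics, with hypothetical fitted log-likelihoods, is below.

    ```python
    # Mechanics of the likelihood-ratio test of the standard Gumbel assumption.
    # The maximized log-likelihoods would come from fitting the restricted MNL
    # and the nesting semi-nonparametric (SNP) model; values here are hypothetical.
    from scipy.stats import chi2

    def gumbel_lr_test(ll_mnl, ll_snp, n_extra_params):
        """LR test: MNL (Gumbel errors) nested in the SNP model.

        n_extra_params -- number of additional SNP distribution parameters,
        used as the degrees of freedom of the chi-square reference.
        """
        stat = 2.0 * (ll_snp - ll_mnl)
        return stat, chi2.sf(stat, df=n_extra_params)

    stat, p = gumbel_lr_test(ll_mnl=-1523.4, ll_snp=-1516.9, n_extra_params=2)
    print(f"LR statistic = {stat:.1f}, p = {p:.4f}")  # small p rejects Gumbel
    ```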

  15. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ye, Xin; Garikapati, Venu M.; You, Daehyun

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.

  16. Reinforced Carbon-Carbon Subcomponent Flat Plate Impact Testing for Space Shuttle Orbiter Return to Flight

    NASA Technical Reports Server (NTRS)

    Melis, Matthew E.; Brand, Jeremy H.; Pereira, J. Michael; Revilock, Duane M.

    2007-01-01

    Following the tragedy of the Space Shuttle Columbia on February 1, 2003, a major effort commenced to develop a better understanding of debris impacts and their effect on the Space Shuttle subsystems. An initiative to develop and validate physics-based computer models to predict damage from such impacts was a fundamental component of this effort. To develop the models it was necessary to physically characterize Reinforced Carbon-Carbon (RCC) and various debris materials which could potentially shed on ascent and impact the Orbiter RCC leading edges. The validated models enabled the launch system community to use the impact analysis software LS-DYNA to predict damage by potential and actual impact events on the Orbiter leading edge and nose cap thermal protection systems. Validation of the material models was done through a three-level approach: fundamental tests to obtain independent static and dynamic material model properties of materials of interest, sub-component impact tests to provide highly controlled impact test data for the correlation and validation of the models, and full-scale impact tests to establish the final level of confidence for the analysis methodology. This paper discusses the second-level subcomponent test program in detail and its application to the LS-DYNA model validation process. The level two testing consisted of over one hundred impact tests in the NASA Glenn Research Center Ballistic Impact Lab on 6 by 6 in. and 6 by 12 in. flat plates of RCC and evaluated three types of debris projectiles: BX-265 External Tank foam, ice, and PDL-1034 External Tank foam. These impact tests helped determine the level of damage generated in the RCC flat plates by each projectile. The information obtained from this testing validated the LS-DYNA damage prediction models and provided a certain level of confidence to begin performing analysis for full-size RCC test articles for returning NASA to flight with STS-114 and beyond.

  17. Validation through Understanding Test-Taking Strategies: An Illustration With the CELPIP-General Reading Pilot Test Using Structural Equation Modeling

    ERIC Educational Resources Information Center

    Wu, Amery D.; Stone, Jake E.

    2016-01-01

    This article explores an approach for test score validation that examines test takers' strategies for taking a reading comprehension test. The authors formulated three working hypotheses about score validity pertaining to three types of test-taking strategy (comprehending meaning, test management, and test-wiseness). These hypotheses were…

  18. 75 FR 53371 - Liquefied Natural Gas Facilities: Obtaining Approval of Alternative Vapor-Gas Dispersion Models

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-31

    ... factors as the approved models, are validated by experimental test data, and receive the Administrator's... stage of the MEP involves applying the model against a database of experimental test cases including..., particularly the requirement for validation by experimental test data. That guidance is based on the MEP's...

  19. Assessing Discriminative Performance at External Validation of Clinical Prediction Models

    PubMed Central

    Nieboer, Daan; van der Ploeg, Tjeerd; Steyerberg, Ewout W.

    2016-01-01

    Introduction: External validation studies are essential to study the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures to judge any changes in c-statistic from development to external validation setting. Methods: We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated it in the validation set. We concentrated on two scenarios: 1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set compared to the development set, and 2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development set. Furthermore we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. Results: The permutation test indicated that the validation and development set were homogeneous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. Conclusion: The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation population. To correctly interpret the c-statistic found at external validation it is crucial to disentangle case-mix differences from incorrect regression coefficients. PMID:26881753
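
    A minimal sketch of one reading of the evaluated permutation test: pool the development and validation sets, repeatedly re-split them at random, refit, and collect the c-statistic on each pseudo-validation set; the observed external c-statistic is then compared against that resampled distribution. Cohorts below are synthetic stand-ins, not the brain-injury datasets.

    ```python
    # Permutation-style check of an external c-statistic. Synthetic cohorts
    # with a deliberately different case-mix stand in for real data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    X_dev = rng.normal(size=(400, 4))
    X_val = rng.normal(0.5, 1.0, size=(200, 4))          # shifted case-mix
    y_dev = (X_dev @ np.r_[1, 1, 0, 0] + rng.normal(size=400) > 0).astype(int)
    y_val = (X_val @ np.r_[0.6, 0.6, 0, 0] + rng.normal(size=200) > 0).astype(int)

    c_ext = roc_auc_score(
        y_val, LogisticRegression().fit(X_dev, y_dev).predict_proba(X_val)[:, 1])

    X_all, y_all = np.vstack([X_dev, X_val]), np.r_[y_dev, y_val]
    null = []
    for _ in range(200):                                  # random re-splits
        idx = rng.permutation(len(y_all))
        d, v = idx[:400], idx[400:]
        m = LogisticRegression().fit(X_all[d], y_all[d])
        null.append(roc_auc_score(y_all[v], m.predict_proba(X_all[v])[:, 1]))
    p = np.mean(np.array(null) <= c_ext)                  # small p: cohorts differ
    print(f"external c = {c_ext:.3f}, permutation p = {p:.3f}")
    ```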

  20. Assessing Discriminative Performance at External Validation of Clinical Prediction Models.

    PubMed

    Nieboer, Daan; van der Ploeg, Tjeerd; Steyerberg, Ewout W

    2016-01-01

    External validation studies are essential to study the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures to judge any changes in c-statistic from development to external validation setting. We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated it in the validation set. We concentrated on two scenarios: 1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set compared to the development set, and 2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development set. Furthermore we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. The permutation test indicated that the validation and development set were homogeneous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation population. To correctly interpret the c-statistic found at external validation it is crucial to disentangle case-mix differences from incorrect regression coefficients.

  1. A Model-Based Method for Content Validation of Automatically Generated Test Items

    ERIC Educational Resources Information Center

    Zhang, Xinxin; Gierl, Mark

    2016-01-01

    The purpose of this study is to describe a methodology to recover the item model used to generate multiple-choice test items with a novel graph theory approach. Beginning with the generated test items and working backward to recover the original item model provides a model-based method for validating the content used to automatically generate test…

  2. Model-Based Verification and Validation of Spacecraft Avionics

    NASA Technical Reports Server (NTRS)

    Khan, M. Omair; Sievers, Michael; Standley, Shaun

    2012-01-01

    Verification and Validation (V&V) at JPL is traditionally performed on flight or flight-like hardware running flight software. For some time, the complexity of avionics has increased exponentially while the time allocated for system integration and associated V&V testing has remained fixed. There is an increasing need to perform comprehensive system-level V&V using modeling and simulation, and to use scarce hardware testing time to validate models, as has long been the norm for thermal and structural V&V. Our approach extends model-based V&V to electronics and software through functional and structural models implemented in SysML. We develop component models of electronics and software that are validated by comparison with test results from actual equipment. The models are then simulated, enabling a more complete set of test cases than is possible on flight hardware. SysML simulations provide access and control of internal nodes that may not be available in physical systems. This is particularly helpful in testing fault protection behaviors when injecting faults is either not possible or potentially damaging to the hardware. We can also model both hardware and software behaviors in SysML, which allows us to simulate hardware and software interactions. With an integrated model and simulation capability we can evaluate the hardware and software interactions and identify problems sooner. The primary missing piece is validating SysML model correctness against hardware; this experiment demonstrated that such an approach is possible.

  3. High Fidelity Modeling of Field-Reversed Configuration (FRC) Thrusters (Briefing Charts)

    DTIC Science & Technology

    2017-05-24

    Briefing charts; only fragmented slide text was extracted. Recoverable topics: verification of asymptotic models against analytical solutions to yield exact convergence tests; the caution that converged mathematics may still give irrelevant solutions outside the regions where assumptions are valid; and a fluids validation example using Stokes flow (Martin, Sousa, Tran, AFRL/RQRS; Distribution A, approved for public release).

  4. Validation of 2D flood models with insurance claims

    NASA Astrophysics Data System (ADS)

    Zischg, Andreas Paul; Mosimann, Markus; Bernet, Daniel Benjamin; Röthlisberger, Veronika

    2018-02-01

    Flood impact modelling requires reliable models for the simulation of flood processes. In recent years, flood inundation models have been remarkably improved and widely used for flood hazard simulation and flood exposure and loss analyses. In this study, we validate a 2D inundation model for the purpose of flood exposure analysis at the river reach scale. We validate the BASEMENT simulation model against insurance claims using conventional validation metrics. The flood model is established on the basis of available topographic data at a high spatial resolution for four test cases. The validation metrics were calculated with two different datasets: a dataset of event documentations reporting flooded areas and a dataset of insurance claims. In three out of four test cases, the model fit relating to insurance claims is slightly lower than the model fit computed on the basis of the observed inundation areas. This comparison between two independent validation datasets suggests that validation metrics using insurance claims are comparable to conventional validation data, such as the flooded area. However, a validation on the basis of insurance claims might be more conservative in cases where model errors are more pronounced in areas with a high density of values at risk.
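
    A minimal sketch of how point-wise insurance claims can score a simulated inundation map: each claim either falls inside the simulated flooded area (hit) or outside it (miss). The raster and claim coordinates are illustrative stand-ins, not the BASEMENT test cases.

    ```python
    # Score a simulated inundation raster against claim locations (toy data).
    import numpy as np

    flooded = np.zeros((100, 100), dtype=bool)   # simulated inundation raster
    flooded[40:70, 20:80] = True

    # Claim coordinates as (row, col) raster indices (hypothetical):
    claims = np.array([[45, 30], [60, 75], [10, 10], [65, 50], [90, 90]])
    hits = flooded[claims[:, 0], claims[:, 1]]
    print(f"hit rate = {hits.mean():.2f} ({hits.sum()} of {len(claims)} claims inside)")
    # With observed inundation polygons rasterized the same way, conventional
    # metrics (e.g., critical success index) support a side-by-side comparison.
    ```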

  5. How to test validity in orthodontic research: a mixed dentition analysis example.

    PubMed

    Donatelli, Richard E; Lee, Shin-Jae

    2015-02-01

    The data used to test the validity of a prediction method should be different from the data used to generate the prediction model. In this study, we explored whether an independent data set is mandatory for testing the validity of a new prediction method and how validity can be tested without independent new data. Several validation methods were compared in an example using the data from a mixed dentition analysis with a regression model. The validation errors of real mixed dentition analysis data and simulation data were analyzed for increasingly large data sets. The validation results of both the real and the simulation studies demonstrated that the leave-1-out cross-validation method had the smallest errors. The largest errors occurred in the traditional simple validation method. The differences between the validation methods diminished as the sample size increased. The leave-1-out cross-validation method seems to be an optimal validation method for improving the prediction accuracy in a data set with limited sample sizes. Copyright © 2015 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
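
    A minimal sketch contrasting the traditional simple (holdout) validation with leave-one-out cross-validation for a regression-based mixed dentition analysis; the tooth-width data are synthetic stand-ins.

    ```python
    # Compare a single random holdout split with leave-one-out cross-validation
    # for a simple regression prediction model. Data are synthetic.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_score, train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(22.0, 1.5, size=(60, 1))               # e.g., incisor width sums
    y = 21.0 + 0.45 * X[:, 0] + rng.normal(0, 0.6, 60)    # unerupted-tooth widths

    # Simple (traditional) validation: one random holdout split.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
    simple_err = np.mean(np.abs(LinearRegression().fit(X_tr, y_tr).predict(X_te) - y_te))

    # Leave-one-out cross-validation: every sample serves as the test set once.
    loo_err = -cross_val_score(LinearRegression(), X, y,
                               cv=LeaveOneOut(),
                               scoring="neg_mean_absolute_error").mean()
    print(f"simple split MAE = {simple_err:.3f}, LOO MAE = {loo_err:.3f}")
    ```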

  6. Validation of a computational knee joint model using an alignment method for the knee laxity test and computed tomography.

    PubMed

    Kang, Kyoung-Tak; Kim, Sung-Hwan; Son, Juhyun; Lee, Young Han; Koh, Yong-Gon

    2017-01-01

    Computational models have been identified as efficient techniques in the clinical decision-making process. However, in most previous studies computational models were validated using published data, and the kinematic validation of such models remains a challenge. Recently, studies using medical imaging have provided a more accurate visualization of knee joint kinematics. The purpose of the present study was to perform kinematic validation of a subject-specific computational knee joint model by comparison with the subject's medical imaging under identical laxity conditions. The laxity test was applied to the anterior-posterior drawer under 90° flexion and the varus-valgus under 20° flexion with a series of stress radiographs, a Telos device, and computed tomography. The loading condition in the computational subject-specific knee joint model was identical to the laxity test condition in the medical image. Our computational model showed knee laxity kinematic trends that were consistent with the computed tomography images, except for negligible differences because of the indirect application of the subject's in vivo material properties. Medical imaging based on computed tomography with the laxity test allowed us to measure not only the precise translation but also the rotation of the knee joint. This methodology will be beneficial in the validation of laxity tests for subject- or patient-specific computational models.

  7. Validation Test Report for the 1/8 deg Global Navy Coastal Ocean Model Nowcast/Forecast System

    DTIC Science & Technology

    2007-01-24

    Validation test report for the 1/8° Global Navy Coastal Ocean Model nowcast/forecast system, by Charlie N. Barron, A. Birol Kara, Robert C. Rhodes, and Clark Rowley. Only title-page and table-of-contents fragments were extracted from this report.

  8. Validation of tsunami inundation model TUNA-RP using OAR-PMEL-135 benchmark problem set

    NASA Astrophysics Data System (ADS)

    Koh, H. L.; Teh, S. Y.; Tan, W. K.; Kh'ng, X. Y.

    2017-05-01

    A standard set of benchmark problems, known as OAR-PMEL-135, is developed by the US National Tsunami Hazard Mitigation Program for tsunami inundation model validation. Any tsunami inundation model must be tested for its accuracy and capability using this standard set of benchmark problems before it can be gainfully used for inundation simulation. The authors have previously developed an in-house tsunami inundation model known as TUNA-RP. This inundation model solves the two-dimensional nonlinear shallow water equations coupled with a wet-dry moving boundary algorithm. This paper presents the validation of TUNA-RP against the solutions provided in the OAR-PMEL-135 benchmark problem set. This benchmark validation testing shows that TUNA-RP can indeed perform inundation simulation with accuracy consistent with that in the tested benchmark problem set.

  9. Utilisation of real-scale renewable energy test facility for validation of generic wind turbine and wind power plant controller models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeni, Lorenzo; Hesselbæk, Bo; Bech, John

    This article presents an example of the application of a modern test facility conceived for experiments on the integration of renewable energy in the power system. The capabilities of the test facility are used to validate dynamic simulation models of wind power plants and their controllers. The models are based on standard and generic blocks. The successful validation of events related to the control of active power (control phenomena in the <10 Hz range, including frequency control and power oscillation damping) is described, demonstrating the capabilities of the test facility and setting the direction for future work and improvements.

  10. Phase 1 Validation Testing and Simulation for the WEC-Sim Open Source Code

    NASA Astrophysics Data System (ADS)

    Ruehl, K.; Michelen, C.; Gunawan, B.; Bosma, B.; Simmons, A.; Lomonaco, P.

    2015-12-01

    WEC-Sim is an open source code to model wave energy converter performance in operational waves, developed by Sandia and NREL and funded by the US DOE. The code is a time-domain modeling tool developed in MATLAB/SIMULINK using the multibody dynamics solver SimMechanics, and solves the WEC's governing equations of motion using the Cummins time-domain impulse response formulation in 6 degrees of freedom. The WEC-Sim code has undergone verification through code-to-code comparisons; however, validation of the code has been limited to publicly available experimental data sets. While these data sets provide preliminary code validation, the experimental tests were not explicitly designed for code validation, and as a result are limited in their ability to validate the full functionality of the WEC-Sim code. Therefore, dedicated physical model tests for WEC-Sim validation have been performed. This presentation provides an overview of the WEC-Sim validation experimental wave tank tests performed at Oregon State University's Directional Wave Basin at the Hinsdale Wave Research Laboratory. Phase 1 of experimental testing was focused on device characterization and completed in Fall 2015. Phase 2 is focused on WEC performance and scheduled for Winter 2015/2016. These experimental tests were designed explicitly to validate the performance of the WEC-Sim code and its new feature additions. Upon completion, the WEC-Sim validation data set will be made publicly available to the wave energy community. For the physical model test, a controllable model of a floating wave energy converter has been designed and constructed. The instrumentation includes state-of-the-art devices to measure pressure fields, motions in 6 DOF, multi-axial load cells, torque transducers, position transducers, and encoders. The model also incorporates a fully programmable Power-Take-Off system which can be used to generate or absorb wave energy. Numerical simulations of the experiments using WEC-Sim will be presented. These simulations highlight the code features included in the latest release of WEC-Sim (v1.2), including: wave directionality, nonlinear hydrostatics and hydrodynamics, user-defined wave elevation time-series, state space radiation, and WEC-Sim compatibility with BEMIO (open source AQWA/WAMIT/NEMOH coefficient parser).

  11. Item Development and Validity Testing for a Self- and Proxy Report: The Safe Driving Behavior Measure

    PubMed Central

    Classen, Sherrilene; Winter, Sandra M.; Velozo, Craig A.; Bédard, Michel; Lanford, Desiree N.; Brumback, Babette; Lutz, Barbara J.

    2010-01-01

    OBJECTIVE We report on item development and validity testing of a self-report older adult safe driving behaviors measure (SDBM). METHOD On the basis of theoretical frameworks (Precede–Proceed Model of Health Promotion, Haddon’s matrix, and Michon’s model), existing driving measures, and previous research and guided by measurement theory, we developed items capturing safe driving behavior. Item development was further informed by focus groups. We established face validity using peer reviewers and content validity using expert raters. RESULTS Peer review indicated acceptable face validity. Initial expert rater review yielded a scale content validity index (CVI) rating of 0.78, with 44 of 60 items rated ≥0.75. Sixteen unacceptable items (≤0.5) required major revision or deletion. The next CVI scale average was 0.84, indicating acceptable content validity. CONCLUSION The SDBM has relevance as a self-report to rate older drivers. Future pilot testing of the SDBM comparing results with on-road testing will define criterion validity. PMID:20437917
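
    A minimal sketch of the content validity index (CVI) arithmetic used above: an item's CVI is the proportion of expert raters scoring it relevant (e.g., 3 or 4 on a 4-point scale), and the scale CVI is the mean of the item CVIs. The rating matrix is hypothetical.

    ```python
    # Item and scale CVI from a matrix of expert relevance ratings (toy data).
    import numpy as np

    # rows = items, cols = expert raters; ratings on a 1-4 relevance scale
    ratings = np.array([
        [4, 3, 4, 4],
        [3, 4, 3, 2],
        [2, 2, 3, 2],   # a weak item: CVI 0.25, flagged for revision/deletion
        [4, 4, 4, 3],
    ])
    item_cvi = (ratings >= 3).mean(axis=1)
    scale_cvi = item_cvi.mean()
    print("item CVIs:", item_cvi)            # items below 0.75 need revision
    print(f"scale CVI = {scale_cvi:.2f}")    # around 0.80+ is commonly acceptable
    ```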

  12. Pretest information for a test to validate plume simulation procedures (FA-17)

    NASA Technical Reports Server (NTRS)

    Hair, L. M.

    1978-01-01

    The results of an effort to plan a final verification wind tunnel test to validate the recommended correlation parameters and application techniques were presented. The test planning effort was complete except for test site finalization and the associated coordination. Two suitable test sites were identified. Desired test conditions were shown. Subsequent sections of this report present the selected model and test site, instrumentation of this model, planned test operations, and some concluding remarks.

  13. Identifying model error in metabolic flux analysis - a generalized least squares approach.

    PubMed

    Sokolenko, Stanislav; Quattrociocchi, Marco; Aucoin, Marc G

    2016-09-13

    The estimation of intracellular flux through traditional metabolic flux analysis (MFA) using an overdetermined system of equations is a well established practice in metabolic engineering. Despite the continued evolution of the methodology since its introduction, there has been little focus on validation and identification of poor model fit outside of identifying "gross measurement error". The growing complexity of metabolic models, which are increasingly generated from genome-level data, has necessitated robust validation that can directly assess model fit. In this work, MFA calculation is framed as a generalized least squares (GLS) problem, highlighting the applicability of the common t-test for model validation. To differentiate between measurement and model error, we simulate ideal flux profiles directly from the model, perturb them with estimated measurement error, and compare their validation to real data. Application of this strategy to an established Chinese Hamster Ovary (CHO) cell model shows how fluxes validated by traditional means may be largely non-significant due to a lack of model fit. With further simulation, we explore how t-test significance relates to calculation error and show that fluxes found to be non-significant have 2-4 fold larger error (if measurement uncertainty is in the 5-10 % range). The proposed validation method goes beyond traditional detection of "gross measurement error" to identify lack of fit between model and data. Although the focus of this work is on t-test validation and traditional MFA, the presented framework is readily applicable to other regression analysis methods and MFA formulations.
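
    A minimal sketch of MFA framed as generalized least squares, as the paper proposes: fluxes are estimated from measurements under a linear model with known error covariance, the weighted residuals give a lack-of-fit statistic, and per-flux t-statistics follow from the GLS covariance. The stoichiometry and measurements are toy stand-ins.

    ```python
    # GLS flux estimation with a residual lack-of-fit test (toy system).
    import numpy as np
    from scipy import stats

    A = np.array([[1.0,  0.0],   # measurement matrix: 4 measured rates, 2 fluxes
                  [0.0,  1.0],
                  [1.0,  1.0],
                  [1.0, -1.0]])
    m = np.array([1.1, 2.0, 3.4, -0.8])          # measured rates (toy values)
    Sigma = np.diag([0.05, 0.05, 0.10, 0.10])    # measurement error covariance

    W = np.linalg.inv(Sigma)                     # GLS weight matrix
    AtWA = A.T @ W @ A
    v_hat = np.linalg.solve(AtWA, A.T @ W @ m)   # GLS flux estimates
    se = np.sqrt(np.diag(np.linalg.inv(AtWA)))   # flux standard errors
    t_stats = v_hat / se                         # per-flux t statistics

    r = m - A @ v_hat                            # weighted residual test
    ssr = r @ W @ r
    dof = len(m) - len(v_hat)
    p_fit = stats.chi2.sf(ssr, dof)              # small p flags poor model fit
    print(f"fluxes {v_hat}, t {t_stats}, lack-of-fit p = {p_fit:.3f}")
    ```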

  14. Calibration and validation of a spar-type floating offshore wind turbine model using the FAST dynamic simulation tool

    DOE PAGES

    Browning, J. R.; Jonkman, J.; Robertson, A.; ...

    2014-12-16

    High-quality computer simulations are required when designing floating wind turbines because of the complex dynamic responses that are inherent with a high number of degrees of freedom and variable metocean conditions. In 2007, the FAST wind turbine simulation tool, developed and maintained by the U.S. Department of Energy's (DOE's) National Renewable Energy Laboratory (NREL), was expanded to include capabilities that are suitable for modeling floating offshore wind turbines. In an effort to validate FAST and other offshore wind energy modeling tools, DOE funded the DeepCwind project that tested three prototype floating wind turbines at 1/50th scale in a wave basin, including a semisubmersible, a tension-leg platform, and a spar buoy. This paper describes the use of the results of the spar wave basin tests to calibrate and validate the FAST offshore floating simulation tool, and presents some initial results of simulated dynamic responses of the spar to several combinations of wind and sea states. Wave basin tests with the spar attached to a scale model of the NREL 5-megawatt reference wind turbine were performed at the Maritime Research Institute Netherlands under the DeepCwind project. This project included free-decay tests, tests with steady or turbulent wind and still water (both periodic and irregular waves with no wind), and combined wind/wave tests. The resulting data from the 1/50th model was scaled using Froude scaling to full size and used to calibrate and validate a full-size simulated model in FAST. Results of the model calibration and validation include successes, subtleties, and limitations of both wave basin testing and FAST modeling capabilities.

  15. CAPTIONALS: A computer aided testing environment for the verification and validation of communication protocols

    NASA Technical Reports Server (NTRS)

    Feng, C.; Sun, X.; Shen, Y. N.; Lombardi, Fabrizio

    1992-01-01

    This paper covers verification and protocol validation for distributed computer and communication systems using a computer-aided testing approach. Validation and verification together make up the so-called process of conformance testing. Protocol applications which pass conformance testing are then checked to see whether they can operate together; this is referred to as interoperability testing. A new comprehensive approach to protocol testing is presented which addresses: (1) modeling of inter-layer representation for compatibility between conformance and interoperability testing; (2) computational improvement to current testing methods by using the proposed model, including the formulation of new qualitative and quantitative measures and time-dependent behavior; and (3) analysis and evaluation of protocol behavior for interactive testing without extensive simulation.

  16. Assessment of human epidermal model LabCyte EPI-MODEL for in vitro skin irritation testing according to European Centre for the Validation of Alternative Methods (ECVAM)-validated protocol.

    PubMed

    Katoh, Masakazu; Hamajima, Fumiyasu; Ogasawara, Takahiro; Hata, Ken-Ichiro

    2009-06-01

    A validation study of an in vitro skin irritation testing method using a reconstructed human skin model has been conducted by the European Centre for the Validation of Alternative Methods (ECVAM), and a protocol using EpiSkin (SkinEthic, France) has been approved. The structural and performance criteria of skin models for testing are defined in the ECVAM Performance Standards announced along with the approval. We have performed several evaluations of the new reconstructed human epidermal model LabCyte EPI-MODEL, and confirmed that it is applicable to skin irritation testing as defined in the ECVAM Performance Standards. We selected 19 materials (nine irritants and ten non-irritants) available in Japan as test chemicals among the 20 reference chemicals described in the ECVAM Performance Standards. A test chemical was applied to the surface of the LabCyte EPI-MODEL for 15 min, after which it was completely removed and the model post-incubated for 42 hr. Cell viability was measured by MTT assay and the skin irritancy of the test chemical was evaluated. In addition, interleukin-1 alpha (IL-1alpha) concentration in the culture supernatant after post-incubation was measured to provide a complementary evaluation of skin irritation. Evaluation of the 19 test chemicals resulted in 79% accuracy, 78% sensitivity and 80% specificity, confirming that the in vitro skin irritancy of the LabCyte EPI-MODEL correlates highly with in vivo skin irritation. These results suggest that LabCyte EPI-MODEL is applicable to the skin irritation testing protocol set out in the ECVAM Performance Standards.
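
    A minimal sketch of the accuracy/sensitivity/specificity bookkeeping reported above, comparing in vitro irritancy calls against in vivo reference labels; the label vectors are hypothetical, not the study's 19 chemicals.

    ```python
    # Assay performance metrics from paired classifications (toy labels).
    # 1 = irritant, 0 = non-irritant.
    import numpy as np

    in_vivo  = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 1])  # reference classification
    in_vitro = np.array([1, 1, 0, 1, 0, 0, 1, 0, 0, 1])  # assay calls

    tp = np.sum((in_vitro == 1) & (in_vivo == 1))
    tn = np.sum((in_vitro == 0) & (in_vivo == 0))
    accuracy = (tp + tn) / len(in_vivo)
    sensitivity = tp / np.sum(in_vivo == 1)   # irritants correctly identified
    specificity = tn / np.sum(in_vivo == 0)   # non-irritants correctly identified
    print(f"accuracy {accuracy:.2f}, sensitivity {sensitivity:.2f}, "
          f"specificity {specificity:.2f}")
    ```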

  17. Rational selection of training and test sets for the development of validated QSAR models

    NASA Astrophysics Data System (ADS)

    Golbraikh, Alexander; Shen, Min; Xiao, Zhiyan; Xiao, Yun-De; Lee, Kuo-Hsiung; Tropsha, Alexander

    2003-02-01

    Quantitative Structure-Activity Relationship (QSAR) models are used increasingly to screen chemical databases and/or virtual chemical libraries for potentially bioactive molecules. These developments emphasize the importance of rigorous model validation to ensure that the models have acceptable predictive power. Using the k nearest neighbors (kNN) variable selection QSAR method for the analysis of several datasets, we have demonstrated recently that the widely accepted leave-one-out (LOO) cross-validated R2 (q2) is an inadequate characteristic to assess the predictive ability of the models [Golbraikh, A., Tropsha, A. Beware of q2! J. Mol. Graphics Mod. 20, 269-276 (2002)]. Herein, we provide additional evidence that there exists no correlation between the values of q2 for the training set and the accuracy of prediction (R2) for the test set, and argue that this observation is a general property of any QSAR model developed with LOO cross-validation. We suggest that external validation using rationally selected training and test sets provides a means to establish a reliable QSAR model. We propose several approaches to the division of experimental datasets into training and test sets and apply them in QSAR studies of 48 functionalized amino acid anticonvulsants and a series of 157 epipodophyllotoxin derivatives with antitumor activity. We formulate a set of general criteria for the evaluation of predictive power of QSAR models.
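
    A minimal sketch of the paper's caution that LOO q2 on the training set does not guarantee external predictivity: q2 and the external-set R2 are computed separately for a kNN regressor on synthetic descriptor data.

    ```python
    # Compute LOO q2 on the training set and R2 on a held-out external set for
    # a kNN regressor. Descriptors and activities are synthetic stand-ins.
    import numpy as np
    from sklearn.metrics import r2_score
    from sklearn.model_selection import LeaveOneOut, cross_val_predict, train_test_split
    from sklearn.neighbors import KNeighborsRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 10))                         # molecular descriptors
    y = X[:, 0] - X[:, 1] + rng.normal(0, 0.8, size=120)   # activities

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=3)
    knn = KNeighborsRegressor(n_neighbors=5)

    q2 = r2_score(y_tr, cross_val_predict(knn, X_tr, y_tr, cv=LeaveOneOut()))
    r2_ext = r2_score(y_te, knn.fit(X_tr, y_tr).predict(X_te))
    print(f"LOO q2 = {q2:.2f}, external R2 = {r2_ext:.2f}")  # judge on the latter
    ```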

  18. Validation in the Absence of Observed Events

    DOE PAGES

    Lathrop, John; Ezell, Barry

    2015-07-22

    Here our paper addresses the problem of validating models in the absence of observed events, in the area of Weapons of Mass Destruction terrorism risk assessment. We address that problem with a broadened definition of "Validation," based on "backing up" to the reason why modelers and decision makers seek validation, and from that basis re-define validation as testing how well the model can advise decision makers in terrorism risk management decisions. We develop that into two conditions: Validation must be based on cues available in the observable world; and it must focus on what can be done to affect that observable world, i.e. risk management. That in turn leads to two foci: 1.) the risk generating process, 2.) best use of available data. Based on our experience with nine WMD terrorism risk assessment models, we then describe three best use of available data pitfalls: SME confidence bias, lack of SME cross-referencing, and problematic initiation rates. Those two foci and three pitfalls provide a basis from which we define validation in this context in terms of four tests -- Does the model: … capture initiation? … capture the sequence of events by which attack scenarios unfold? … consider unanticipated scenarios? … consider alternative causal chains? Finally, we corroborate our approach against three key validation tests from the DOD literature: Is the model a correct representation of the simuland? To what degree are the model results comparable to the real world? Over what range of inputs are the model results useful?

  19. CFD Modeling Needs and What Makes a Good Supersonic Combustion Validation Experiment

    NASA Technical Reports Server (NTRS)

    Gaffney, Richard L., Jr.; Cutler, Andrew D.

    2005-01-01

    If a CFD code/model developer is asked what experimental data he wants to validate his code or numerical model, his answer will be: "Everything, everywhere, at all times." Since this is not possible, practical, or even reasonable, the developer must understand what can be measured within the limits imposed by the test article, the test location, the test environment and the available diagnostic equipment. At the same time, it is important for the experimentalist/diagnostician to understand what the CFD developer needs (as opposed to wants) in order to conduct a useful CFD validation experiment. If these needs are not known, it is possible to neglect easily measured quantities at locations needed by the developer, rendering the data set useless for validation purposes. It is also important for the experimentalist/diagnostician to understand what the developer is trying to validate so that the experiment can be designed to isolate (as much as possible) the effects of a particular physical phenomenon that is associated with the model to be validated. The probability of a successful validation experiment can be greatly increased if the two groups work together, each understanding the needs and limitations of the other.

  20. Revalidation of the NASA Ames 11-by 11-Foot Transonic Wind Tunnel with a Commercial Airplane Model

    NASA Technical Reports Server (NTRS)

    Kmak, Frank J.; Hudgins, M.; Hergert, D.; George, Michael W. (Technical Monitor)

    2001-01-01

    The 11-by 11-Foot Transonic leg of the Unitary Plan Wind Tunnel (UPWT) was modernized to improve tunnel performance, capability, productivity, and reliability. Wind tunnel tests to demonstrate the readiness of the tunnel for a return to production operations included an Integrated Systems Test (IST), calibration tests, and airplane validation tests. One of the two validation tests used a 0.037-scale Boeing 777 model that was previously tested in the 11-by 11-Foot tunnel in 1991. The objective of the validation tests was to compare pre-modernization and post-modernization results from the same airplane model in order to substantiate the operational readiness of the facility. Evaluations of within-test, test-to-test, and tunnel-to-tunnel data repeatability were made to study the effects of the tunnel modifications. Tunnel productivity was also evaluated to determine the readiness of the facility for production operations. The operation of the facility, including model installation, tunnel operations, and the performance of tunnel systems, was observed, and facility deficiency findings were generated. The data repeatability studies and tunnel-to-tunnel comparisons demonstrated outstanding data repeatability and a high overall level of data quality. Despite some operational and facility problems, the validation test was successful in demonstrating the readiness of the facility to perform production airplane wind tunnel tests.

  1. Impact Testing on Reinforced Carbon-Carbon Flat Panels with Ice Projectiles for the Space Shuttle Return to Flight Program

    NASA Technical Reports Server (NTRS)

    Melis, Matthew E.; Revilock, Duane M.; Pereira, Michael J.; Lyle, Karen H.

    2009-01-01

    Following the tragedy of the Orbiter Columbia (STS-107) on February 1, 2003, a major effort commenced to develop a better understanding of debris impacts and their effect on the space shuttle subsystems. An initiative to develop and validate physics-based computer models to predict damage from such impacts was a fundamental component of this effort. To develop the models it was necessary to physically characterize reinforced carbon-carbon (RCC) along with ice and foam debris materials, which could shed on ascent and impact the orbiter RCC leading edges. The validated models enabled the launch system community to use the impact analysis software LS-DYNA (Livermore Software Technology Corp.) to predict damage by potential and actual impact events on the orbiter leading edge and nose cap thermal protection systems. Validation of the material models was done through a three-level approach: Level 1--fundamental tests to obtain independent static and dynamic constitutive model properties of materials of interest, Level 2--subcomponent impact tests to provide highly controlled impact test data for the correlation and validation of the models, and Level 3--full-scale orbiter leading-edge impact tests to establish the final level of confidence for the analysis methodology. This report discusses the Level 2 test program conducted in the NASA Glenn Research Center (GRC) Ballistic Impact Laboratory with ice projectile impact tests on flat RCC panels, and presents the data observed. The Level 2 testing consisted of 54 impact tests in the NASA GRC Ballistic Impact Laboratory on 6- by 6-in. and 6- by 12-in. flat plates of RCC and evaluated three types of debris projectiles: Single-crystal, polycrystal, and "soft" ice. These impact tests helped determine the level of damage generated in the RCC flat plates by each projectile and validated the use of the ice and RCC models for use in LS-DYNA.

  2. Impact Testing on Reinforced Carbon-Carbon Flat Panels With BX-265 and PDL-1034 External Tank Foam for the Space Shuttle Return to Flight Program

    NASA Technical Reports Server (NTRS)

    Melis, Matthew E.; Revilock, Duane M.; Pereira, Michael J.; Lyle, Karen H.

    2009-01-01

    Following the tragedy of the Orbiter Columbia (STS-107) on February 1, 2003, a major effort commenced to develop a better understanding of debris impacts and their effect on the space shuttle subsystems. An initiative to develop and validate physics-based computer models to predict damage from such impacts was a fundamental component of this effort. To develop the models it was necessary to physically characterize reinforced carbon-carbon (RCC) along with ice and foam debris materials, which could shed on ascent and impact the orbiter RCC leading edges. The validated models enabled the launch system community to use the impact analysis software LS-DYNA (Livermore Software Technology Corp.) to predict damage by potential and actual impact events on the orbiter leading edge and nose cap thermal protection systems. Validation of the material models was done through a three-level approach: Level 1-fundamental tests to obtain independent static and dynamic constitutive model properties of materials of interest, Level 2-subcomponent impact tests to provide highly controlled impact test data for the correlation and validation of the models, and Level 3-full-scale orbiter leading-edge impact tests to establish the final level of confidence for the analysis methodology. This report discusses the Level 2 test program conducted in the NASA Glenn Research Center (GRC) Ballistic Impact Laboratory with external tank foam impact tests on flat RCC panels, and presents the data observed. The Level 2 testing consisted of 54 impact tests in the NASA GRC Ballistic Impact Laboratory on 6- by 6-in. and 6- by 12-in. flat plates of RCC and evaluated two types of debris projectiles: BX-265 and PDL-1034 external tank foam. These impact tests helped determine the level of damage generated in the RCC flat plates by each projectile and validated the use of the foam and RCC models for use in LS-DYNA.

  3. Study of Bias in 2012-Placement Test through Rasch Model in Terms of Gender Variable

    ERIC Educational Resources Information Center

    Turkan, Azmi; Cetin, Bayram

    2017-01-01

    Validity and reliability are among the most crucial characteristics of a test. One of the steps to make sure that a test is valid and reliable is to examine the bias in test items. The purpose of this study was to examine the bias in 2012 Placement Test items in terms of gender variable using Rasch Model in Turkey. The sample of this study was…

  4. Does rational selection of training and test sets improve the outcome of QSAR modeling?

    PubMed

    Martin, Todd M; Harten, Paul; Young, Douglas M; Muratov, Eugene N; Golbraikh, Alexander; Zhu, Hao; Tropsha, Alexander

    2012-10-22

    Prior to using a quantitative structure activity relationship (QSAR) model for external predictions, its predictive power should be established and validated. In the absence of a true external data set, the best way to validate the predictive ability of a model is to perform its statistical external validation. In statistical external validation, the overall data set is divided into training and test sets. Commonly, this splitting is performed using random division. Rational splitting methods can divide data sets into training and test sets in an intelligent fashion. The purpose of this study was to determine whether rational division methods lead to more predictive models compared to random division. A special data splitting procedure was used to facilitate the comparison between random and rational division methods. For each toxicity end point, the overall data set was divided into a modeling set (80% of the overall set) and an external evaluation set (20% of the overall set) using random division. The modeling set was then subdivided into a training set (80% of the modeling set) and a test set (20% of the modeling set) using rational division methods and by using random division. The Kennard-Stone, minimal test set dissimilarity, and sphere exclusion algorithms were used as the rational division methods. The hierarchical clustering, random forest, and k-nearest neighbor (kNN) methods were used to develop QSAR models based on the training sets. For kNN QSAR, multiple training and test sets were generated, and multiple QSAR models were built. The results of this study indicate that models based on rational division methods generate better statistical results for the test sets than models based on random division, but the predictive power of both types of models is comparable.
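
    The Kennard-Stone selection named above can be sketched in a few lines. The following Python fragment is a minimal illustration under our own assumptions (the function name, toy data, and 80/20 split are ours, not the authors' software):

```python
# Minimal Kennard-Stone split: pick training samples that are maximally
# spread out in descriptor space (illustrative, not the authors' code).
import numpy as np

def kennard_stone_split(X, n_train):
    X = np.asarray(X, dtype=float)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    i, j = np.unravel_index(np.argmax(dist), dist.shape)  # most distant pair
    train = [int(i), int(j)]
    remaining = [k for k in range(len(X)) if k not in train]
    while len(train) < n_train:
        # Each candidate's distance to its nearest already-selected sample;
        # pick the candidate farthest from the current training set.
        d_min = dist[np.ix_(remaining, train)].min(axis=1)
        nxt = remaining[int(np.argmax(d_min))]
        train.append(nxt)
        remaining.remove(nxt)
    return np.array(train), np.array(remaining)

# Usage: an 80/20 training/test split on a toy descriptor matrix.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
train_idx, test_idx = kennard_stone_split(X, n_train=40)
print(len(train_idx), len(test_idx))  # 40 10
```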

  5. FAST Model Calibration and Validation of the OC5-DeepCwind Floating Offshore Wind System Against Wave Tank Test Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wendt, Fabian F; Robertson, Amy N; Jonkman, Jason

    During the course of the Offshore Code Comparison Collaboration, Continued, with Correlation (OC5) project, which focused on the validation of numerical methods through comparison against tank test data, the authors created a numerical FAST model of the 1:50-scale DeepCwind semisubmersible system that was tested at the Maritime Research Institute Netherlands ocean basin in 2013. This paper discusses several model calibration studies that were conducted to identify model adjustments that improve the agreement between the numerical simulations and the experimental test data. These calibration studies cover wind-field-specific parameters (coherence, turbulence), hydrodynamic and aerodynamic modeling approaches, as well as rotor model (blade-pitch and blade-mass imbalances) and tower model (structural tower damping coefficient) adjustments. These calibration studies were conducted based on relatively simple calibration load cases (wave only/wind only). The agreement between the final FAST model and experimental measurements is then assessed based on more-complex combined wind and wave validation cases.

  6. Validation in the Absence of Observed Events.

    PubMed

    Lathrop, John; Ezell, Barry

    2016-04-01

    This article addresses the problem of validating models in the absence of observed events, in the area of weapons of mass destruction terrorism risk assessment. We address that problem with a broadened definition of "validation," based on stepping "up" a level to considering the reason why decisionmakers seek validation, and from that basis redefine validation as testing how well the model can advise decisionmakers in terrorism risk management decisions. We develop that into two conditions: validation must be based on cues available in the observable world; and it must focus on what can be done to affect that observable world, i.e., risk management. That leads to two foci: (1) the real-world risk generating process, and (2) best use of available data. Based on our experience with nine WMD terrorism risk assessment models, we then describe three best use of available data pitfalls: SME confidence bias, lack of SME cross-referencing, and problematic initiation rates. Those two foci and three pitfalls provide a basis from which we define validation in this context in terms of four tests--Does the model: … capture initiation? … capture the sequence of events by which attack scenarios unfold? … consider unanticipated scenarios? … consider alternative causal chains? Finally, we corroborate our approach against three validation tests from the DOD literature: Is the model a correct representation of the process to be simulated? To what degree are the model results comparable to the real world? Over what range of inputs are the model results useful? © 2015 Society for Risk Analysis.

  7. Applying Multidimensional Item Response Theory Models in Validating Test Dimensionality: An Example of K-12 Large-Scale Science Assessment

    ERIC Educational Resources Information Center

    Li, Ying; Jiao, Hong; Lissitz, Robert W.

    2012-01-01

    This study investigated the application of multidimensional item response theory (IRT) models to validate test structure and dimensionality. Multiple content areas or domains within a single subject often exist in large-scale achievement tests. Such areas or domains may cause multidimensionality or local item dependence, which both violate the…

  8. A Method of Q-Matrix Validation for the Linear Logistic Test Model

    PubMed Central

    Baghaei, Purya; Hohensinn, Christine

    2017-01-01

    The linear logistic test model (LLTM) is a well-recognized psychometric model for examining the components of difficulty in cognitive tests and validating construct theories. The plausibility of the construct model, summarized in a matrix of weights, known as the Q-matrix or weight matrix, is tested by (1) comparing the fit of LLTM with the fit of the Rasch model (RM) using the likelihood ratio (LR) test and (2) examining the correlation between the Rasch model item parameters and LLTM reconstructed item parameters. The problem with the LR test is that it is almost always significant and, consequently, LLTM is rejected. The drawback of examining the correlation coefficient is that there is no cut-off value or lower bound for the magnitude of the correlation coefficient. In this article we suggest a simulation method to set a minimum benchmark for the correlation between item parameters from the Rasch model and those reconstructed by the LLTM. If the cognitive model is valid, then the correlation coefficient between the RM-based item parameters and the LLTM-reconstructed item parameters derived from the theoretical weight matrix should be greater than those derived from the simulated matrices. PMID:28611721
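
    The proposed benchmark can be approximated as follows: compute the correlation between Rasch-based item difficulties and the LLTM reconstruction from the theoretical Q-matrix, then compare it with correlations obtained from simulated matrices of the same size and density. The sketch below is a hedged illustration; the shuffled-weights null and all variable names are our assumptions, not the authors' exact procedure:

```python
# Benchmark sketch: correlation from the theoretical Q-matrix vs. the
# distribution of correlations from density-preserving random matrices.
import numpy as np

def lltm_correlation(beta_rm, Q):
    # Least-squares estimates of the basic parameters, then reconstruct.
    eta, *_ = np.linalg.lstsq(Q, beta_rm, rcond=None)
    return np.corrcoef(beta_rm, Q @ eta)[0, 1]

rng = np.random.default_rng(1)
n_items, n_ops = 30, 5
Q = rng.integers(0, 2, size=(n_items, n_ops)).astype(float)  # theoretical weights
beta_rm = Q @ rng.normal(size=n_ops) + rng.normal(scale=0.3, size=n_items)

r_theory = lltm_correlation(beta_rm, Q)
# Null distribution: shuffle the Q entries, keeping shape and density.
r_null = [lltm_correlation(beta_rm, rng.permutation(Q.ravel()).reshape(Q.shape))
          for _ in range(1000)]
benchmark = np.percentile(r_null, 95)
print(f"theoretical r = {r_theory:.3f}, simulated benchmark = {benchmark:.3f}")
```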

  9. TOPEX Microwave Radiometer - Thermal design verification test and analytical model validation

    NASA Technical Reports Server (NTRS)

    Lin, Edward I.

    1992-01-01

    The testing of the TOPEX Microwave Radiometer (TMR) is described in terms of hardware development based on the modeling and thermal vacuum testing conducted. The TMR and the vacuum-test facility are described, and the thermal verification test includes a hot steady-state segment, a cold steady-state segment, and a cold survival mode segment totalling 65 hours. A graphic description is given of the test history, which is related to temperature tracking, and two multinode TMR test-chamber models are compared to the test results. Large discrepancies between the test data and the model predictions are attributed to contact conductance, effective emittance from the multilayer insulation, and heat leaks related to deviations from the flight configuration. The TMR thermal testing/modeling effort is shown to provide technical corrections for the procedure outlined, and the need for validating predictive models is underscored.

  10. On Nomological Validity and Auxiliary Assumptions: The Importance of Simultaneously Testing Effects in Social Cognitive Theories Applied to Health Behavior and Some Guidelines

    PubMed Central

    Hagger, Martin S.; Gucciardi, Daniel F.; Chatzisarantis, Nikos L. D.

    2017-01-01

    Tests of social cognitive theories provide informative data on the factors that relate to health behavior, and the processes and mechanisms involved. In the present article, we contend that tests of social cognitive theories should adhere to the principles of nomological validity, defined as the degree to which predictions in a formal theoretical network are confirmed. We highlight the importance of nomological validity tests to ensure theory predictions can be disconfirmed through observation. We argue that researchers should be explicit on the conditions that lead to theory disconfirmation, and identify any auxiliary assumptions on which theory effects may be conditional. We contend that few researchers formally test the nomological validity of theories, or outline conditions that lead to model rejection and the auxiliary assumptions that may explain findings that run counter to hypotheses, raising the potential for 'falsification evasion.' We present a brief analysis of studies (k = 122) testing four key social cognitive theories in health behavior to illustrate deficiencies in reporting theory tests and evaluations of nomological validity. Our analysis revealed that few articles report explicit statements suggesting that their findings support or reject the hypotheses of the theories tested, even when findings point to rejection. We illustrate the importance of explicit a priori specification of fundamental theory hypotheses and associated auxiliary assumptions, and identification of the conditions which would lead to rejection of theory predictions. We also demonstrate the value of confirmatory analytic techniques, meta-analytic structural equation modeling, and Bayesian analyses in providing robust converging evidence for nomological validity. We provide a set of guidelines for researchers on how to adopt and apply the nomological validity approach to testing health behavior models. PMID:29163307

  11. Implementation of the validation testing in MPPG 5.a "Commissioning and QA of treatment planning dose calculations-megavoltage photon and electron beams".

    PubMed

    Jacqmin, Dustin J; Bredfeldt, Jeremy S; Frigo, Sean P; Smilowitz, Jennifer B

    2017-01-01

    The AAPM Medical Physics Practice Guideline (MPPG) 5.a provides concise guidance on the commissioning and QA of beam modeling and dose calculation in radiotherapy treatment planning systems. This work discusses the implementation of the validation testing recommended in MPPG 5.a at two institutions. The two institutions worked collaboratively to create a common set of treatment fields and analysis tools to deliver and analyze the validation tests. This included the development of a novel, open-source software tool to compare scanning water tank measurements to 3D DICOM-RT Dose distributions. Dose calculation algorithms in both Pinnacle and Eclipse were tested with MPPG 5.a to validate the modeling of Varian TrueBeam linear accelerators. The validation process resulted in more than 200 water tank scans and more than 50 point measurements per institution, each of which was compared to a dose calculation from the institution's treatment planning system (TPS). Overall, the validation testing recommended in MPPG 5.a took approximately 79 person-hours for a machine with four photon and five electron energies for a single TPS. Of the 79 person-hours, 26 person-hours required time on the machine, and the remainder involved preparation and analysis. The basic photon, electron, and heterogeneity correction tests were evaluated with the tolerances in MPPG 5.a, and the tolerances were met for all tests. The MPPG 5.a evaluation criteria were used to assess the small field and IMRT/VMAT validation tests. Both institutions found the use of MPPG 5.a to be a valuable resource during the commissioning process. The validation testing in MPPG 5.a showed the strengths and limitations of the TPS models. In addition, the data collected during the validation testing is useful for routine QA of the TPS, validation of software upgrades, and commissioning of new algorithms. © 2016 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
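
    The comparison such a tool performs can be outlined roughly as follows: read the TPS dose grid from a DICOM-RT Dose file, build an interpolator over the grid, and sample it at the water-tank scan positions. This is a hedged sketch, not the open-source tool described above; the file name, axis conventions, and off-axis positions below are hypothetical assumptions:

```python
# Sample a DICOM-RT Dose grid along a hypothetical water-tank scan line.
# Assumes an axis-aligned dose grid with ascending frame offsets.
import numpy as np
import pydicom
from scipy.interpolate import RegularGridInterpolator

ds = pydicom.dcmread("rtdose.dcm")                    # hypothetical file name
dose = ds.pixel_array * float(ds.DoseGridScaling)     # (z, y, x) grid in Gy
x0, y0, z0 = (float(v) for v in ds.ImagePositionPatient)
dy, dx = (float(v) for v in ds.PixelSpacing)          # row, column spacing
zs = z0 + np.asarray(ds.GridFrameOffsetVector, dtype=float)
ys = y0 + dy * np.arange(dose.shape[1])
xs = x0 + dx * np.arange(dose.shape[2])
interp = RegularGridInterpolator((zs, ys, xs), dose, bounds_error=False)

# Depth-dose along an assumed central axis: depths in mm below the surface.
depths = np.arange(15.0, 300.0, 5.0)
points = np.column_stack([z0 + depths,
                          np.full_like(depths, y0 + 100.0),  # assumed axis
                          np.full_like(depths, x0 + 100.0)])
tps_dose = interp(points)
# The measured scan would then be compared point by point, e.g., as a
# percent difference normalized to the curve maximum.
```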

  12. Validation of Survivability Validation Protocols

    DTIC Science & Technology

    1993-05-01

    simulation fidelity. Physical testing of P.i SOS, in either aboveground tests (AGTs) or underground tests (UGTs), will usually be impossible, due...with some simulation fidelity compromises) are possible in UGTs and/or AGTs. Hence proof tests, if done in statistically significant numbers, can...level. Simulation fidelity and AGT/UGT/threat correlation will be validation issues here. Extrapolation to threat environments will be done via modeling

  13. Modeling complex treatment strategies: construction and validation of a discrete event simulation model for glaucoma.

    PubMed

    van Gestel, Aukje; Severens, Johan L; Webers, Carroll A B; Beckers, Henny J M; Jansonius, Nomdo M; Schouten, Jan S A G

    2010-01-01

    Discrete event simulation (DES) modeling has several advantages over simpler modeling techniques in health economics, such as increased flexibility and the ability to model complex systems. Nevertheless, these benefits may come at the cost of reduced transparency, which may compromise the model's face validity and credibility. We aimed to produce a transparent report on the construction and validation of a DES model using a recently developed model of ocular hypertension and glaucoma. Current evidence of associations between prognostic factors and disease progression in ocular hypertension and glaucoma was translated into DES model elements. The model was extended to simulate treatment decisions and effects. Utility and costs were linked to disease status and treatment, and clinical and health economic outcomes were defined. The model was validated at several levels. The soundness of design and the plausibility of input estimates were evaluated in interdisciplinary meetings (face validity). Individual patients were traced throughout the simulation under a multitude of model settings to debug the model, and the model was run with a variety of extreme scenarios to compare the outcomes with prior expectations (internal validity). Finally, several intermediate (clinical) outcomes of the model were compared with those observed in experimental or observational studies (external validity) and the feasibility of evaluating hypothetical treatment strategies was tested. The model performed well in all validity tests. Analyses of hypothetical treatment strategies took about 30 minutes per cohort and led to plausible health-economic outcomes. There is added value of DES models in modeling complex treatment strategies such as those in glaucoma. Achieving transparency in model structure and outcomes may require some effort in reporting and validating the model, but it is feasible.
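
    For readers unfamiliar with the technique, a discrete event simulation reduces to a time-ordered event queue. The toy Python loop below shows the skeleton (a "visit" and a "progression" process); event names, rates, and the exponential waiting times are invented for illustration and are not the glaucoma model's inputs:

```python
# Skeleton of a discrete event simulation: a priority queue of timed events.
import heapq
import random

def simulate_patient(horizon_years=10.0, seed=0):
    rng = random.Random(seed)
    events = [(rng.expovariate(0.2), "progression"), (0.5, "visit")]
    heapq.heapify(events)
    history, treated = [], False
    while events:
        t, name = heapq.heappop(events)
        if t > horizon_years:
            break
        history.append((round(t, 2), name))
        if name == "visit":
            heapq.heappush(events, (t + 0.5, "visit"))   # next scheduled visit
            treated = True                               # treatment decision
        else:  # disease progression; treatment slows the next event
            rate = 0.1 if treated else 0.2
            heapq.heappush(events, (t + rng.expovariate(rate), "progression"))
    return history

print(simulate_patient()[:6])
```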

  14. A Simulation Study of Threats to Validity in Quasi-Experimental Designs: Interrelationship between Design, Measurement, and Analysis.

    PubMed

    Holgado-Tello, Fco P; Chacón-Moscoso, Salvador; Sanduvete-Chaves, Susana; Pérez-Gil, José A

    2016-01-01

    The Campbellian tradition provides a conceptual framework to assess threats to validity. On the other hand, different models of causal analysis have been developed to control estimation biases in different research designs. However, the link between design features, measurement issues, and concrete impact estimation analyses is weak. In order to provide an empirical solution to this problem, we use Structural Equation Modeling (SEM) as a first approximation to operationalize the analytical implications of threats to validity in quasi-experimental designs. Based on the analogies established between the Classical Test Theory (CTT) and causal analysis, we describe an empirical study based on SEM in which range restriction and statistical power have been simulated in two different models: (1) A multistate model in the control condition (pre-test); and (2) A single-trait-multistate model in the control condition (post-test), adding a new mediator latent exogenous (independent) variable that represents a threat to validity. Results show, empirically, how the differences between both the models could be partially or totally attributed to these threats. Therefore, SEM provides a useful tool to analyze the influence of potential threats to validity.

  15. A Simulation Study of Threats to Validity in Quasi-Experimental Designs: Interrelationship between Design, Measurement, and Analysis

    PubMed Central

    Holgado-Tello, Fco. P.; Chacón-Moscoso, Salvador; Sanduvete-Chaves, Susana; Pérez-Gil, José A.

    2016-01-01

    The Campbellian tradition provides a conceptual framework to assess threats to validity. On the other hand, different models of causal analysis have been developed to control estimation biases in different research designs. However, the link between design features, measurement issues, and concrete impact estimation analyses is weak. In order to provide an empirical solution to this problem, we use Structural Equation Modeling (SEM) as a first approximation to operationalize the analytical implications of threats to validity in quasi-experimental designs. Based on the analogies established between the Classical Test Theory (CTT) and causal analysis, we describe an empirical study based on SEM in which range restriction and statistical power have been simulated in two different models: (1) A multistate model in the control condition (pre-test); and (2) A single-trait-multistate model in the control condition (post-test), adding a new mediator latent exogenous (independent) variable that represents a threat to validity. Results show, empirically, how the differences between both the models could be partially or totally attributed to these threats. Therefore, SEM provides a useful tool to analyze the influence of potential threats to validity. PMID:27378991

  16. WEC-SIM Phase 1 Validation Testing -- Numerical Modeling of Experiments: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruehl, Kelley; Michelen, Carlos; Bosma, Bret

    2016-08-01

    The Wave Energy Converter Simulator (WEC-Sim) is an open-source code jointly developed by Sandia National Laboratories and the National Renewable Energy Laboratory. It is used to model wave energy converters subjected to operational and extreme waves. In order for the WEC-Sim code to be beneficial to the wave energy community, code verification and physical model validation is necessary. This paper describes numerical modeling of the wave tank testing for the 1:33-scale experimental testing of the floating oscillating surge wave energy converter. The comparison between WEC-Sim and the Phase 1 experimental data set serves as code validation. This paper is a follow-up to the WEC-Sim paper on experimental testing, and describes the WEC-Sim numerical simulations for the floating oscillating surge wave energy converter.

  17. A Proposal on the Validation Model of Equivalence between PBLT and CBLT

    ERIC Educational Resources Information Center

    Chen, Huilin

    2014-01-01

    The validity of the computer-based language test is possibly affected by three factors: computer familiarity, audio-visual cognitive competence, and other discrepancies in construct. Therefore, validating the equivalence between the paper-and-pencil language test and the computer-based language test is a key step in the procedure of designing a…

  18. Critical evaluation of mechanistic two-phase flow pipeline and well simulation models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dhulesia, H.; Lopez, D.

    1996-12-31

    Mechanistic steady-state simulation models, rather than empirical correlations, are used for the design of multiphase production systems including wells, pipelines, and downstream installations. Among the available models, PEPITE, WELLSIM, OLGA, TACITE, and TUFFP are widely used for this purpose; consequently, a critical evaluation of these models is needed. An extensive validation methodology is proposed which consists of two distinct steps: first, validate the hydrodynamic point model using test loop data; then, validate the overall simulation model using data from real pipelines and wells. The test loop databank used in this analysis contains about 5952 data sets originating from four different test loops, and a majority of these data were obtained at high pressures (up to 90 bar) with real hydrocarbon fluids. Before performing the model evaluation, physical analysis of the test loop data is required to eliminate non-coherent data. The evaluation of these point models demonstrates that the TACITE and OLGA models can be applied to any configuration of pipes. The TACITE model performs better than the OLGA model because it uses the most appropriate closure laws from the literature, validated on a large number of data. The comparison of predicted and measured pressure drops for various real pipelines and wells demonstrates that the TACITE model is a reliable tool.

  19. Cross-Cultural Validation of the Preventive Health Model for Colorectal Cancer Screening: An Australian Study

    ERIC Educational Resources Information Center

    Flight, Ingrid H.; Wilson, Carlene J.; McGillivray, Jane; Myers, Ronald E.

    2010-01-01

    We investigated whether the five-factor structure of the Preventive Health Model for colorectal cancer screening, developed in the United States, has validity in Australia. We also tested extending the model with the addition of the factor Self-Efficacy to Screen using Fecal Occult Blood Test (SESFOBT). Randomly selected men and women aged between…

  20. Recent Advances in Simulation of Eddy Current Testing of Tubes and Experimental Validations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reboud, C.; Premel, D.; Lesselier, D.

    2007-03-21

    Eddy current testing (ECT) is widely used in the iron and steel industry for the inspection of tubes during manufacturing. A collaboration between CEA and the Vallourec Research Center led to the development of new numerical functionalities dedicated to the simulation of ECT of non-magnetic tubes by external probes. The achievement of experimental validations led us to the integration of these models into the CIVA platform. The modeling approach and validation results are discussed here. A new numerical scheme is also proposed in order to improve the accuracy of the model.

  1. Recent Advances in Simulation of Eddy Current Testing of Tubes and Experimental Validations

    NASA Astrophysics Data System (ADS)

    Reboud, C.; Prémel, D.; Lesselier, D.; Bisiaux, B.

    2007-03-01

    Eddy current testing (ECT) is widely used in the iron and steel industry for the inspection of tubes during manufacturing. A collaboration between CEA and the Vallourec Research Center led to the development of new numerical functionalities dedicated to the simulation of ECT of non-magnetic tubes by external probes. The achievement of experimental validations led us to the integration of these models into the CIVA platform. The modeling approach and validation results are discussed here. A new numerical scheme is also proposed in order to improve the accuracy of the model.

  2. [Comparison of the Wechsler Memory Scale-III and the Spain-Complutense Verbal Learning Test in acquired brain injury: construct validity and ecological validity].

    PubMed

    Luna-Lario, P; Pena, J; Ojeda, N

    2017-04-16

    The aim was to perform an in-depth examination of the construct validity and the ecological validity of the Wechsler Memory Scale-III (WMS-III) and the Spain-Complutense Verbal Learning Test (TAVEC). The sample consists of 106 adults with acquired brain injury who were treated in the Area of Neuropsychology and Neuropsychiatry of the Complejo Hospitalario de Navarra and displayed memory deficit as the main sequela, measured by means of specific memory tests. The construct validity is determined by examining the tasks required in each test against the underlying theoretical models, comparing performance according to the parameters offered by the tests, contrasting the severity indices of each test, and analysing their convergence. The external validity is explored through the correlation between the tests and by using regression models. According to the results obtained, both the WMS-III and the TAVEC have construct validity. The TAVEC is more sensitive and captures not only deficits in mnemonic consolidation, but also in the executive functions involved in memory. The working memory index of the WMS-III is useful for predicting the return to work at two years after the acquired brain injury, but neither instrument anticipates the disability and dependence at least six months after the injury. We reflect upon the construct validity of the tests and their insufficient capacity to predict functionality when the sequelae become chronic.

  3. Multi-body modeling method for rollover using MADYMO

    NASA Astrophysics Data System (ADS)

    Liu, Changye; Lin, Zhigui; Lv, Juncheng; Luo, Qinyue; Qin, Zhenyao; Zhang, Pu; Chen, Tao

    2017-04-01

    Rollovers are complex road accidents that cause a large number of fatalities. A finite element (FE) model for rollover study costs too much computation time because of the event's long duration. A new multi-body modeling method is proposed in this paper that saves substantial time while retaining high fidelity. The following work was carried out to validate the new method. First, a small van was tested following the FMVSS 208 protocol for the validation of the proposed modeling method. Second, a MADYMO model of this small van was reconstructed. The vehicle body was divided into two main parts, the deformable upper body and the rigid lower body, modeled in different ways based on an FE model. The specific modeling method is presented in this paper. Finally, the trajectories of the vehicle from test and simulation were compared, and the match was very good. The acceleration of the left B-pillar was also considered and fit the test result well over the duration of the event. The final deformation of the vehicle showed a similar trend in test and simulation. This validated model provides a reliable basis for further research on occupant injuries during rollovers.

  4. Calibration and Validation of a Finite Element Model of the THOR-K Anthropomorphic Test Device for Aerospace Safety Applications

    NASA Technical Reports Server (NTRS)

    Putnam, J. B.; Untaroiu, C. D.; Somers, J. T.

    2014-01-01

    The THOR anthropomorphic test device (ATD) has been developed and continuously improved by the National Highway Traffic Safety Administration to provide automotive manufacturers an advanced tool that can be used to assess the injury risk of vehicle occupants in crash tests. Recently, a series of modifications were completed to improve the biofidelity of the THOR ATD [1]. The updated THOR Modification Kit (THOR-K) ATD was employed at Wright-Patterson Air Base in 22 impact tests in three configurations: vertical, lateral, and spinal [2]. Although a computational finite element (FE) model of the THOR had been previously developed [3], updates to the model were needed to incorporate the recent changes in the modification kit. The main goal of this study was to develop and validate an FE model of the THOR-K ATD. The CAD drawings of the THOR-K ATD were reviewed and FE models were developed for the updated parts. For example, the head-skin geometry was found to change significantly, so its model was re-meshed (Fig. 1a). A protocol was developed to calibrate each component identified as key to the kinematic and kinetic response of the THOR-K head/neck ATD FE model (Fig. 1b). The available ATD tests were divided into two groups: a) calibration tests, where the unknown material parameters of deformable parts (e.g., head skin, pelvis foam) were optimized to match the data, and b) validation tests, where the model response was only compared with test data by calculating their score using the CORrelation and Analysis (CORA) rating system. Finally, the whole ATD model was validated under horizontal-, vertical-, and lateral-loading conditions against data recorded in the Wright-Patterson tests [2]. Overall, the final THOR-K ATD model developed in this study is shown to respond similarly to the ATD in all validation tests. This good performance indicates that the optimization performed during calibration using the CORA score as the objective function is not test specific. This provides confidence in using the ATD model to predict responses in test conditions not performed in this study, such as those observed in spacecraft landings. Comparison studies with ATD and human models may also be performed to contribute to future changes in THOR ATD design in an effort to improve its biofidelity, which has been traditionally based on post-mortem human subject testing and designer experience.

  5. Forward Bay Cover Separation Modeling and Testing for the Orion Multi-Purpose Crew Vehicle

    NASA Technical Reports Server (NTRS)

    Ali, Yasmin; Radke, Tara; Chuhta, Jesse; Hughes, Michael

    2014-01-01

    Spacecraft multi-body separation events during atmospheric descent require complex testing and analysis to validate the flight separation dynamics model and to verify no recontact. NASA Orion Multi-Purpose Crew Vehicle (MPCV) teams examined key model parameters and risk areas to develop a robust but affordable test campaign in order to validate and verify the Forward Bay Cover (FBC) separation event for Exploration Flight Test-1 (EFT-1). The FBC jettison simulation model is highly complex, consisting of dozens of parameters varied simultaneously, with numerous multi-parameter interactions (coupling and feedback) among the various model elements, and encompassing distinct near-field, mid-field, and far-field regimes. The test campaign was composed of component-level testing (for example gas-piston thrusters and parachute mortars), ground FBC jettison tests, and FBC jettison air-drop tests that were accomplished by a highly multi-disciplinary team. Three ground jettison tests isolated the testing of mechanisms and structures to anchor the simulation models excluding aerodynamic effects. Subsequently, two air-drop tests added aerodynamic and parachute parameters, and served as integrated system demonstrations, which had been preliminarily explored during the Orion Pad Abort-1 (PA-1) flight test in May 2010. Both ground and drop tests provided extensive data to validate analytical models and to verify the FBC jettison event for EFT-1, but more testing is required to support human certification, for which NASA and Lockheed Martin are applying knowledge from Apollo and EFT-1 testing and modeling to develop a robust but affordable human spacecraft capability.

  6. An Efficient Data Partitioning to Improve Classification Performance While Keeping Parameters Interpretable.

    PubMed

    Korjus, Kristjan; Hebart, Martin N; Vicente, Raul

    2016-01-01

    Supervised machine learning methods typically require splitting data into multiple chunks for training, validating, and finally testing classifiers. For finding the best parameters of a classifier, training and validation are usually carried out with cross-validation. This is followed by application of the classifier with optimized parameters to a separate test set for estimating the classifier's generalization performance. With limited data, this separation of test data creates a difficult trade-off between having more statistical power in estimating generalization performance versus choosing better parameters and fitting a better model. We propose a novel approach that we term "Cross-validation and cross-testing" improving this trade-off by re-using test data without biasing classifier performance. The novel approach is validated using simulated data and electrophysiological recordings in humans and rodents. The results demonstrate that the approach has a higher probability of discovering significant results than the standard approach of cross-validation and testing, while maintaining the nominal alpha level. In contrast to nested cross-validation, which is maximally efficient in re-using data, the proposed approach additionally maintains the interpretability of individual parameters. Taken together, we suggest an addition to currently used machine learning approaches which may be particularly useful in cases where model weights do not require interpretation, but parameters do.
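
    The standard alternative the abstract compares against, nested cross-validation with a separate tuning loop, can be sketched with scikit-learn as below. The novel "cross-validation and cross-testing" scheme itself is not reproduced here; the dataset and parameter grid are illustrative:

```python
# Nested cross-validation: an inner loop tunes parameters, an outer loop
# estimates generalization performance on data unseen by the tuning step.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=20, random_state=0)
inner = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=5)  # parameter tuning
outer_scores = cross_val_score(inner, X, y, cv=5)       # generalization estimate
print(f"estimated accuracy: {outer_scores.mean():.2f}")
```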

  7. The clinical effectiveness and cost-effectiveness of testing for cytochrome P450 polymorphisms in patients with schizophrenia treated with antipsychotics: a systematic review and economic evaluation.

    PubMed

    Fleeman, N; McLeod, C; Bagust, A; Beale, S; Boland, A; Dundar, Y; Jorgensen, A; Payne, K; Pirmohamed, M; Pushpakom, S; Walley, T; de Warren-Penny, P; Dickson, R

    2010-01-01

    To determine whether testing for cytochrome P450 (CYP) polymorphisms in adults entering antipsychotic treatment for schizophrenia leads to improvement in outcomes, is useful in medical, personal or public health decision-making, and is a cost-effective use of health-care resources. The following electronic databases were searched for relevant published literature: Cochrane Controlled Trials Register, Cochrane Database of Systematic Reviews, Database of Abstracts of Reviews of Effectiveness, EMBASE, Health Technology Assessment database, ISI Web of Knowledge, MEDLINE, PsycINFO, NHS Economic Evaluation Database, Health Economic Evaluation Database, Cost-effectiveness Analysis (CEA) Registry and the Centre for Health Economics website. In addition, publicly available information on various genotyping tests was sought from the internet and advisory panel members. A systematic review of analytical validity, clinical validity and clinical utility of CYP testing was undertaken. Data were extracted into structured tables and narratively discussed, and meta-analysis was undertaken when possible. A review of economic evaluations of CYP testing in psychiatry and a review of economic models related to schizophrenia were also carried out. For analytical validity, 46 studies of a range of different genotyping tests for 11 different CYP polymorphisms (most commonly CYP2D6) were included. Sensitivity and specificity were high (99-100%). For clinical validity, 51 studies were found. In patients tested for CYP2D6, an association between genotype and tardive dyskinesia (including Abnormal Involuntary Movement Scale scores) was found. The only other significant finding linked the CYP2D6 genotype to parkinsonism. One small unpublished study met the inclusion criteria for clinical utility. One economic evaluation assessing the costs and benefits of CYP testing for prescribing antidepressants and 28 economic models of schizophrenia were identified; none was suitable for developing a model to examine the cost-effectiveness of CYP testing. Tests for determining genotypes appear to be accurate although not all aspects of analytical validity were reported. Given the absence of convincing evidence from clinical validity studies, the lack of clinical utility and economic studies, and the unsuitability of published schizophrenia models, no model was developed; instead key features and data requirements for economic modelling are presented. Recommendations for future research cover both aspects of research quality and data that will be required to inform the development of future economic models.

  8. Real-Time Onboard Global Nonlinear Aerodynamic Modeling from Flight Data

    NASA Technical Reports Server (NTRS)

    Brandon, Jay M.; Morelli, Eugene A.

    2014-01-01

    Flight test and modeling techniques were developed to accurately identify global nonlinear aerodynamic models onboard an aircraft. The techniques were developed and demonstrated during piloted flight testing of an Aermacchi MB-326M Impala jet aircraft. Advanced piloting techniques and nonlinear modeling techniques based on fuzzy logic and multivariate orthogonal function methods were implemented with efficient onboard calculations and flight operations to achieve real-time maneuver monitoring and analysis, and near-real-time global nonlinear aerodynamic modeling and prediction validation testing in flight. Results demonstrated that global nonlinear aerodynamic models for a large portion of the flight envelope were identified rapidly and accurately using piloted flight test maneuvers during a single flight, with the final identified and validated models available before the aircraft landed.

  9. Rotary Engine Friction Test Rig Development Report

    DTIC Science & Technology

    2011-12-01

    fundamental research is needed to understand the friction characteristics of the rotary engine that lead to accelerated wear and tear on the seals...that includes a turbocharger. Once the original GT-Suite model is validated, the turbocharger model will be more accurate. This validation will...prepare for turbocharger and fuel-injector testing, which will lead to further development and calibration of the model. Further details are beyond the

  10. FAST Model Calibration and Validation of the OC5- DeepCwind Floating Offshore Wind System Against Wave Tank Test Data: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wendt, Fabian F; Robertson, Amy N; Jonkman, Jason

    During the course of the Offshore Code Comparison Collaboration, Continued, with Correlation (OC5) project, which focused on the validation of numerical methods through comparison against tank test data, the authors created a numerical FAST model of the 1:50-scale DeepCwind semisubmersible system that was tested at the Maritime Research Institute Netherlands ocean basin in 2013. This paper discusses several model calibration studies that were conducted to identify model adjustments that improve the agreement between the numerical simulations and the experimental test data. These calibration studies cover wind-field-specific parameters (coherence, turbulence), hydrodynamic and aerodynamic modeling approaches, as well as rotor model (blade-pitch and blade-mass imbalances) and tower model (structural tower damping coefficient) adjustments. These calibration studies were conducted based on relatively simple calibration load cases (wave only/wind only). The agreement between the final FAST model and experimental measurements is then assessed based on more-complex combined wind and wave validation cases.

  11. Validation of Blockage Interference Corrections in the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Walker, Eric L.

    2007-01-01

    A validation test has recently been constructed for wall interference methods as applied to the National Transonic Facility (NTF). The goal of this study was to begin to address the uncertainty of wall-induced-blockage interference corrections, which will make it possible to address the overall quality of data generated by the facility. The validation test itself is not specific to any particular modeling approach. For this present effort, the Transonic Wall Interference Correction System (TWICS) as implemented at the NTF is the mathematical model being tested. TWICS uses linear, potential boundary conditions that must first be calibrated. These boundary conditions include three different classical, linear, homogeneous forms that have been historically used to approximate the physical behavior of longitudinally slotted test section walls. Results of the application of the calibrated wall boundary conditions are discussed in the context of the validation test.

  12. Performance Evaluation of a Data Validation System

    NASA Technical Reports Server (NTRS)

    Wong, Edmond (Technical Monitor); Sowers, T. Shane; Santi, L. Michael; Bickford, Randall L.

    2005-01-01

    Online data validation is a performance-enhancing component of modern control and health management systems. It is essential that performance of the data validation system be verified prior to its use in a control and health management system. A new Data Qualification and Validation (DQV) Test-bed application was developed to provide a systematic test environment for this performance verification. The DQV Test-bed was used to evaluate a model-based data validation package known as the Data Quality Validation Studio (DQVS). DQVS was employed as the primary data validation component of a rocket engine health management (EHM) system developed under NASA's NGLT (Next Generation Launch Technology) program. In this paper, the DQVS and DQV Test-bed software applications are described, and the DQV Test-bed verification procedure for this EHM system application is presented. Test-bed results are summarized and implications for EHM system performance improvements are discussed.

  13. Reliability and Validity of Inferences about Teachers Based on Student Scores. William H. Angoff Memorial Lecture Series

    ERIC Educational Resources Information Center

    Haertel, Edward H.

    2013-01-01

    Policymakers and school administrators have embraced value-added models of teacher effectiveness as tools for educational improvement. Teacher value-added estimates may be viewed as complicated scores of a certain kind. This suggests using a test validation model to examine their reliability and validity. Validation begins with an interpretive…

  14. Two-Speed Gearbox Dynamic Simulation Predictions and Test Validation

    NASA Technical Reports Server (NTRS)

    Lewicki, David G.; DeSmidt, Hans; Smith, Edward C.; Bauman, Steven W.

    2010-01-01

    Dynamic simulations and experimental validation tests were performed on a two-stage, two-speed gearbox as part of the drive system research activities of the NASA Fundamental Aeronautics Subsonics Rotary Wing Project. The gearbox was driven by two electromagnetic motors and had two electromagnetic, multi-disk clutches to control output speed. A dynamic model of the system was created which included a direct current electric motor with proportional-integral-derivative (PID) speed control, a two-speed gearbox with dual electromagnetically actuated clutches, and an eddy current dynamometer. A six degree-of-freedom model of the gearbox accounted for the system torsional dynamics and included gear, clutch, shaft, and load inertias as well as shaft flexibilities and a dry clutch stick-slip friction model. Experimental validation tests were performed on the gearbox in the NASA Glenn gear noise test facility. Gearbox output speed and torque as well as drive motor speed and current were compared to those from the analytical predictions. The experiments correlate very well with the predictions, thus validating the dynamic simulation methodologies.
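
    The torsional dynamics described above can be illustrated with a much-reduced sketch: two inertias coupled through a flexible shaft, with PI speed control on the drive motor. All parameter values are invented, and the clutch stick-slip and dynamometer models are omitted:

```python
# Two-inertia torsional sketch: motor and load coupled by a flexible shaft,
# PI speed control on the motor. Parameters are invented for illustration.
import numpy as np
from scipy.integrate import solve_ivp

J1, J2 = 0.02, 0.10   # motor and load inertias [kg m^2]
k, c = 500.0, 0.5     # shaft stiffness [N m/rad] and damping [N m s/rad]
Kp, Ki = 2.0, 10.0    # speed-control gains
w_cmd = 100.0         # commanded motor speed [rad/s]
T_load = 5.0          # dynamometer load torque [N m]

def rhs(t, s):
    th1, w1, th2, w2, e_int = s
    T_shaft = k * (th1 - th2) + c * (w1 - w2)   # torque through the shaft
    T_motor = Kp * (w_cmd - w1) + Ki * e_int    # PI controller output
    return [w1, (T_motor - T_shaft) / J1,
            w2, (T_shaft - T_load) / J2,
            w_cmd - w1]                         # integral of the speed error

sol = solve_ivp(rhs, (0.0, 2.0), [0.0, 0.0, 0.0, 0.0, 0.0], max_step=1e-3)
print(f"final load speed: {sol.y[3, -1]:.1f} rad/s")  # settles near w_cmd
```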

  15. Procedure for the Selection and Validation of a Calibration Model I-Description and Application.

    PubMed

    Desharnais, Brigitte; Camirand-Lemyre, Félix; Mireault, Pascal; Skinner, Cameron D

    2017-05-01

    Calibration model selection is required for all quantitative methods in toxicology and more broadly in bioanalysis. This typically involves selecting the equation order (quadratic or linear) and the weighting factor that correctly model the data. Misselection of the calibration model will reduce quality control (QC) accuracy, with errors of up to 154%. Unfortunately, simple tools to perform this selection and tests to validate the resulting model are lacking. We present a stepwise, analyst-independent scheme for selection and validation of calibration models. The success rate of this scheme is on average 40% higher than a traditional "fit and check the QC accuracy" method of selecting the calibration model. Moreover, the process was completely automated through a script (available in Supplemental Data 3) running in RStudio (free, open-source software). The need for weighting was assessed through an F-test using the variances of the upper limit of quantification and lower limit of quantification replicate measurements. When weighting was required, the choice between 1/x and 1/x² was determined by calculating which option generated the smallest spread of weighted normalized variances. Finally, model order was selected through a partial F-test. The chosen calibration model was validated through Cramer-von Mises or Kolmogorov-Smirnov normality testing of the standardized residuals. Performance of the different tests was assessed using 50 simulated data sets per possible calibration model (e.g., linear-no weight, quadratic-no weight, linear-1/x, etc.). This first of two papers describes the tests, procedures and outcomes of the developed procedure using real LC-MS-MS results for the quantification of cocaine and naltrexone. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
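
    Two steps of the scheme, the replicate-variance F-test for weighting and the partial F-test for model order, can be sketched as follows. Thresholds and data are illustrative stand-ins, not the authors' script:

```python
# Step 1: F-test on LLOQ/ULOQ replicate variances (is weighting needed?).
# Step 2: partial F-test for quadratic vs. linear calibration order.
import numpy as np
from scipy import stats

lloq = np.array([0.98, 1.05, 1.02, 0.95, 1.01])    # low-end replicates
uloq = np.array([98.0, 103.0, 96.0, 105.0, 99.0])  # high-end replicates
F_var = uloq.var(ddof=1) / lloq.var(ddof=1)
p_weight = stats.f.sf(F_var, len(uloq) - 1, len(lloq) - 1)
print(f"weighting needed: {p_weight < 0.05}")

x = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0, 100.0])
y = 2.0 * x + 0.01 * x**2 + np.random.default_rng(2).normal(scale=0.5, size=x.size)
rss = {}
for order in (1, 2):                               # fit both candidate orders
    resid = y - np.polyval(np.polyfit(x, y, order), x)
    rss[order] = float((resid**2).sum())
# One extra parameter in the quadratic fit; n - 3 residual degrees of freedom.
F_order = (rss[1] - rss[2]) / (rss[2] / (len(x) - 3))
p_order = stats.f.sf(F_order, 1, len(x) - 3)
print(f"quadratic term justified: {p_order < 0.05}")
```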

  16. The development and testing of a skin tear risk assessment tool.

    PubMed

    Newall, Nelly; Lewin, Gill F; Bulsara, Max K; Carville, Keryln J; Leslie, Gavin D; Roberts, Pam A

    2017-02-01

    The aim of the present study is to develop a reliable and valid skin tear risk assessment tool. The six characteristics identified in a previous case control study as constituting the best risk model for skin tear development were used to construct a risk assessment tool. The ability of the tool to predict skin tear development was then tested in a prospective study. Between August 2012 and September 2013, 1466 tertiary hospital patients were assessed at admission and followed up for 10 days to see if they developed a skin tear. The predictive validity of the tool was assessed using receiver operating characteristic (ROC) analysis. When the tool was found not to have performed as well as hoped, secondary analyses were performed to determine whether a potentially better performing risk model could be identified. The tool was found to have high sensitivity but low specificity and therefore have inadequate predictive validity. Secondary analysis of the combined data from this and the previous case control study identified an alternative better performing risk model. The tool developed and tested in this study was found to have inadequate predictive validity. The predictive validity of an alternative, more parsimonious model now needs to be tested. © 2015 Medicalhelplines.com Inc and John Wiley & Sons Ltd.
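
    The ROC assessment of predictive validity described above can be reproduced in outline, with synthetic risk scores standing in for the study's six-item tool:

```python
# ROC analysis of a risk score against observed skin tear development.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(3)
developed = rng.integers(0, 2, size=500)         # 1 = developed a skin tear
score = np.where(developed == 1,
                 rng.normal(3.5, 1.5, 500),      # cases score higher...
                 rng.normal(2.5, 1.5, 500))      # ...but with heavy overlap

auc = roc_auc_score(developed, score)
fpr, tpr, thr = roc_curve(developed, score)
# A low cutoff gives high sensitivity (tpr) but low specificity (1 - fpr),
# which is the pattern the study reports for its tool.
print(f"AUC = {auc:.2f}")
```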

  17. Assessing Requirements Quality through Requirements Coverage

    NASA Technical Reports Server (NTRS)

    Rajan, Ajitha; Heimdahl, Mats; Woodham, Kurt

    2008-01-01

    In model-based development, the development effort is centered around a formal description of the proposed software system the model. This model is derived from some high-level requirements describing the expected behavior of the software. For validation and verification purposes, this model can then be subjected to various types of analysis, for example, completeness and consistency analysis [6], model checking [3], theorem proving [1], and test-case generation [4, 7]. This development paradigm is making rapid inroads in certain industries, e.g., automotive, avionics, space applications, and medical technology. This shift towards model-based development naturally leads to changes in the verification and validation (V&V) process. The model validation problem determining that the model accurately captures the customer's high-level requirements has received little attention and the sufficiency of the validation activities has been largely determined through ad-hoc methods. Since the model serves as the central artifact, its correctness with respect to the users needs is absolutely crucial. In our investigation, we attempt to answer the following two questions with respect to validation (1) Are the requirements sufficiently defined for the system? and (2) How well does the model implement the behaviors specified by the requirements? The second question can be addressed using formal verification. Nevertheless, the size and complexity of many industrial systems make formal verification infeasible even if we have a formal model and formalized requirements. Thus, presently, there is no objective way of answering these two questions. To this end, we propose an approach based on testing that, when given a set of formal requirements, explores the relationship between requirements-based structural test-adequacy coverage and model-based structural test-adequacy coverage. The proposed technique uses requirements coverage metrics defined in [9] on formal high-level software requirements and existing model coverage metrics such as the Modified Condition and Decision Coverage (MC/DC) used when testing highly critical software in the avionics industry [8]. Our work is related to Chockler et al. [2], but we base our work on traditional testing techniques as opposed to verification techniques.

  18. Tree-Based Global Model Tests for Polytomous Rasch Models

    ERIC Educational Resources Information Center

    Komboz, Basil; Strobl, Carolin; Zeileis, Achim

    2018-01-01

    Psychometric measurement models are only valid if measurement invariance holds between test takers of different groups. Global model tests, such as the well-established likelihood ratio (LR) test, are sensitive to violations of measurement invariance, such as differential item functioning and differential step functioning. However, these…

  19. Testing of the SEE and OEE post-hip fracture.

    PubMed

    Resnick, Barbara; Orwig, Denise; Zimmerman, Sheryl; Hawkes, William; Golden, Justine; Werner-Bronzert, Michelle; Magaziner, Jay

    2006-08-01

    The purpose of this study was to test the reliability and validity of the Self-Efficacy for Exercise (SEE) and the Outcome Expectations for Exercise (OEE) scales in a sample of 166 older women post-hip fracture. There was some evidence of validity of the SEE and OEE based on confirmatory factor analysis and Rasch model testing, criterion-based and convergent validity, and evidence of internal consistency based on alpha coefficients and separation indices and reliability based on R2 estimates. Rasch model testing demonstrated that some items had high variability. Based on these findings, suggestions are made for how items could be revised and the scales improved for future use.
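
    The internal-consistency check named in the abstract (the alpha coefficient) is simple to compute; the sketch below uses simulated responses, not the SEE/OEE data:

```python
# Cronbach's alpha for a multi-item scale (simulated responses).
import numpy as np

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)   # (n_respondents, n_items)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(4)
trait = rng.normal(size=(166, 1))                         # one latent trait
responses = trait + rng.normal(scale=0.8, size=(166, 9))  # 9-item scale
print(f"alpha = {cronbach_alpha(responses):.2f}")
```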

  20. Flight Testing an Iced Business Jet for Flight Simulation Model Validation

    NASA Technical Reports Server (NTRS)

    Ratvasky, Thomas P.; Barnhart, Billy P.; Lee, Sam; Cooper, Jon

    2007-01-01

    A flight test of a business jet aircraft with various ice accretions was performed to obtain data to validate flight simulation models developed through wind tunnel tests. Three types of ice accretions were tested: pre-activation roughness, runback shapes that form downstream of the thermal wing ice protection system, and a wing ice protection system failure shape. The high fidelity flight simulation models of this business jet aircraft were validated using a software tool called "Overdrive." Through comparisons of flight-extracted aerodynamic forces and moments to simulation-predicted forces and moments, the simulation models were successfully validated. Only minor adjustments in the simulation database were required to obtain adequate match, signifying the process used to develop the simulation models was successful. The simulation models were implemented in the NASA Ice Contamination Effects Flight Training Device (ICEFTD) to enable company pilots to evaluate flight characteristics of the simulation models. By and large, the pilots confirmed good similarities in the flight characteristics when compared to the real airplane. However, pilots noted pitch up tendencies at stall with the flaps extended that were not representative of the airplane and identified some differences in pilot forces. The elevator hinge moment model and implementation of the control forces on the ICEFTD were identified as a driver in the pitch ups and control force issues, and will be an area for future work.

  1. The Challenge of Grounding Planning in Simulation with an Interactive Model Development Environment

    NASA Technical Reports Server (NTRS)

    Clement, Bradley J.; Frank, Jeremy D.; Chachere, John M.; Smith, Tristan B.; Swanson, Keith J.

    2011-01-01

    A principal obstacle to fielding automated planning systems is the difficulty of modeling. Physical systems are conventionally modeled based on specification documents and the modeler's understanding of the system. Thus, the model is developed in a way that is disconnected from the system's actual behavior and is vulnerable to manual error. Another obstacle to fielding planners is testing and validation. For a space mission, generated plans must often be validated by translating them into command sequences that are run in a simulation testbed. Testing in this way is complex and onerous because of the large number of possible plans and states of the spacecraft. However, if used as a source of domain knowledge, the simulator can ease validation. This paper poses a challenge: to ground planning models in the system physics represented by simulation. A proposed interactive model development environment illustrates the integration of planning and simulation to meet the challenge. This integration reveals research paths for automated model construction and validation.

  2. Validation of individual and aggregate global flood hazard models for two major floods in Africa.

    NASA Astrophysics Data System (ADS)

    Trigg, M.; Bernhofen, M.; Whyman, C.

    2017-12-01

    A recent intercomparison of global flood hazard models undertaken by the Global Flood Partnership shows that there is an urgent requirement to undertake more validation of the models against flood observations. As part of the intercomparison, the aggregated model dataset resulting from the project was provided as open access data. We compare the individual and aggregated flood extent outputs from the six global models and test these against two major floods on the African continent within the last decade, namely the severe flooding on the Niger River in Nigeria in 2012 and on the Zambezi River in Mozambique in 2007. We test whether aggregating different numbers and combinations of models increases model fit to the observations compared with the individual model outputs. We present results that illustrate some of the challenges of comparing imperfect models with imperfect observations, and also that of defining the probability of a real event in order to test standard model output probabilities. Finally, we propose a collective set of open access validation flood events, with associated observational data and descriptions, that provides a standard set of tests across different climates and hydraulic conditions.
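
    A common way to score a binary flood-extent map against observations is the critical success index, and a simple aggregation rule is a cell-wise vote across models. A sketch under those assumptions (the rasters here are random placeholders, not the Global Flood Partnership dataset):

    ```python
    import numpy as np

    def flood_fit(modeled, observed):
        """Critical success index: hits / (hits + misses + false alarms),
        a standard skill score for binary flood-extent comparison."""
        modeled, observed = modeled.astype(bool), observed.astype(bool)
        hits = np.sum(modeled & observed)
        misses = np.sum(~modeled & observed)
        false_alarms = np.sum(modeled & ~observed)
        return hits / (hits + misses + false_alarms)

    def aggregate(model_stack, k):
        """Flag a cell as flooded when at least k of the stacked models agree."""
        return np.sum(model_stack, axis=0) >= k

    rng = np.random.default_rng(0)
    models = rng.random((6, 100, 100)) < 0.3      # six hypothetical model outputs
    observed = rng.random((100, 100)) < 0.3       # hypothetical observed extent
    print([flood_fit(aggregate(models, k), observed) for k in range(1, 7)])
    ```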

  3. Validation of the Economic and Health Outcomes Model of Type 2 Diabetes Mellitus (ECHO-T2DM).

    PubMed

    Willis, Michael; Johansen, Pierre; Nilsson, Andreas; Asseburg, Christian

    2017-03-01

    The Economic and Health Outcomes Model of Type 2 Diabetes Mellitus (ECHO-T2DM) was developed to address study questions pertaining to the cost-effectiveness of treatment alternatives in the care of patients with type 2 diabetes mellitus (T2DM). Naturally, the usefulness of a model is determined by the accuracy of its predictions. A previous version of ECHO-T2DM was validated against actual trial outcomes and the model predictions were generally accurate. However, there have been recent upgrades to the model, which modify model predictions and necessitate an update of the validation exercises. The objectives of this study were to extend the methods available for evaluating model validity, to conduct a formal model validation of ECHO-T2DM (version 2.3.0) in accordance with the principles espoused by the International Society for Pharmacoeconomics and Outcomes Research (ISPOR) and the Society for Medical Decision Making (SMDM), and secondarily to evaluate the relative accuracy of four sets of macrovascular risk equations included in ECHO-T2DM. We followed the ISPOR/SMDM guidelines on model validation, evaluating face validity, verification, cross-validation, and external validation. Model verification involved 297 'stress tests', in which specific model inputs were modified systematically to ascertain correct model implementation. Cross-validation consisted of a comparison between ECHO-T2DM predictions and those of the seminal National Institutes of Health model. In external validation, study characteristics were entered into ECHO-T2DM to replicate the clinical results of 12 studies (including 17 patient populations), and model predictions were compared to observed values using established statistical techniques as well as measures of average prediction error, separately for the four sets of macrovascular risk equations supported in ECHO-T2DM. Sub-group analyses were conducted for dependent vs. independent outcomes and for microvascular vs. macrovascular vs. mortality endpoints. All stress tests were passed. ECHO-T2DM replicated the National Institutes of Health cost-effectiveness application with numerically similar results. In external validation of ECHO-T2DM, model predictions agreed well with observed clinical outcomes. For all sets of macrovascular risk equations, the results were close to the intercept and slope coefficients corresponding to a perfect match, resulting in a high R2 and failure to reject concordance using an F test. The results were similar for sub-groups of dependent and independent validation, with some degree of under-prediction of macrovascular events. ECHO-T2DM continues to match health outcomes in clinical trials in T2DM, with prediction accuracy similar to that of other leading models of T2DM.
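
    The concordance check reported above can be outlined as regressing observed outcomes on model predictions and jointly testing intercept = 0 and slope = 1 with an F test, where failure to reject supports validity. A sketch with synthetic data (all values are invented; 17 merely echoes the number of patient populations):

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    predicted = rng.uniform(0.05, 0.30, size=17)          # model-predicted event rates
    observed = predicted + rng.normal(0, 0.02, size=17)   # observed trial outcomes

    fit = sm.OLS(observed, sm.add_constant(predicted)).fit()
    print(fit.params, fit.rsquared)        # intercept near 0, slope near 1, high R2
    # Joint test of perfect concordance; a large p-value fails to reject it.
    print(fit.f_test("const = 0, x1 = 1"))
    ```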

  4. Validity of the Eating Attitude Test among Exercisers.

    PubMed

    Lane, Helen J; Lane, Andrew M; Matheson, Hilary

    2004-12-01

    Theory testing and construct measurement are inextricably linked. To date, no published research has examined the factorial validity of an existing eating attitude inventory for use with exercisers. The Eating Attitude Test (EAT) is a 26-item measure that yields a single index of disordered eating attitudes. The original factor analysis showed three interrelated factors: dieting behavior (13 items), oral control (7 items), and bulimia nervosa-food preoccupation (6 items). The primary purpose of the study was to examine the factorial validity of the EAT among a sample of exercisers. The second purpose was to investigate relationships between eating attitude scores and selected psychological constructs. In stage one, 598 regular exercisers completed the EAT. Confirmatory factor analysis (CFA) was used to test a single-factor model, a three-factor model, and a four-factor model, which distinguished bulimia from food preoccupation. CFA of the single-factor model (RCFI = 0.66, RMSEA = 0.10) and the three-factor model (RCFI = 0.74; RMSEA = 0.09) showed poor model fit. There was marginal fit for the four-factor model (RCFI = 0.91, RMSEA = 0.06). Results indicated that five items showed poor factor loadings. After these five items were discarded, the three models were re-analyzed. CFA results indicated that the single-factor model (RCFI = 0.76, RMSEA = 0.10) and the three-factor model (RCFI = 0.82, RMSEA = 0.08) showed poor fit. CFA results for the four-factor model showed acceptable fit indices (RCFI = 0.98, RMSEA = 0.06). Stage two explored relationships between EAT scores, mood, self-esteem, and motivational indices toward exercise in terms of self-determination, enjoyment, and competence. Correlation results indicated that depressed mood scores positively correlated with bulimia and dieting scores. Further, dieting was inversely related to self-determination toward exercising. Collectively, findings suggest that a 21-item four-factor model shows promising validity coefficients among exercise participants, and that future research is needed to investigate eating attitudes among samples of exercisers. Key points: Validity of psychometric measures should be thoroughly investigated; researchers should not assume that a scale validated on one sample will show the same validity coefficients in a different population. The Eating Attitude Test is a commonly used scale, and the present study shows that a revised 21-item scale is suitable for exercisers. Researchers using the Eating Attitude Test should use the subscales of dieting, oral control, food preoccupation, and bulimia. Future research should involve qualitative techniques and interviews with exercise participants to explore the nature of eating attitudes.
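
    The fit indices quoted above are functions of model chi-square output; the study reports robust variants (RCFI), but the standard RMSEA and CFI can be computed as sketched below (the chi-square values in the example are hypothetical):

    ```python
    import math

    def rmsea(chi2, df, n):
        """Root mean square error of approximation for a fitted CFA model."""
        return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

    def cfi(chi2_m, df_m, chi2_b, df_b):
        """Comparative fit index against the baseline (independence) model."""
        d_m = max(chi2_m - df_m, 0.0)
        d_b = max(chi2_b - df_b, d_m)
        return 1.0 - d_m / d_b if d_b > 0 else 1.0

    # Hypothetical four-factor solution fitted to n = 598 exercisers
    print(rmsea(chi2=520.4, df=183, n=598))      # ~0.056, under the common 0.06 cutoff
    print(cfi(520.4, 183, 5200.0, 210))
    ```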

  5. Testing the Construct Validity of Proposed Criteria for "DSM-5" Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Mandy, William P. L.; Charman, Tony; Skuse, David H.

    2012-01-01

    Objective: To use confirmatory factor analysis to test the construct validity of the proposed "DSM-5" symptom model of autism spectrum disorder (ASD), in comparison to alternative models, including that described in "DSM-IV-TR." Method: Participants were 708 verbal children and young persons (mean age, 9.5 years) with mild to severe autistic…

  6. Development and Validation of Methodology to Model Flow in Ventilation Systems Commonly Found in Nuclear Facilities. Phase I

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strons, Philip; Bailey, James L.; Davis, John

    2016-03-01

    In this work, we apply CFD to model airflow and particulate transport. The modeling is then compared to field validation studies to both inform and validate the modeling assumptions. Based on the results of field tests, modeling assumptions and boundary conditions are refined, and the process is repeated until the results are found to be reliable with a high level of confidence.

  7. Evaluating the dynamic response of in-flight thrust calculation techniques during throttle transients

    NASA Technical Reports Server (NTRS)

    Ray, Ronald J.

    1994-01-01

    New flight test maneuvers and analysis techniques for evaluating the dynamic response of in-flight thrust models during throttle transients have been developed and validated. The approach is based on the aircraft and engine performance relationship between thrust and drag. Two flight test maneuvers, a throttle step and a throttle frequency sweep, were developed and used in the study. Graphical analysis techniques, including a frequency domain analysis method, were also developed and evaluated. They provide quantitative and qualitative results. Four thrust calculation methods were used to demonstrate and validate the test technique. Flight test applications on two high-performance aircraft confirmed the test methods as valid and accurate. These maneuvers and analysis techniques were easy to implement and use. Flight test results indicate the analysis techniques can identify the combined effects of model error and instrumentation response limitations on the calculated thrust value. The methods developed in this report provide an accurate approach for evaluating, validating, or comparing thrust calculation methods for dynamic flight applications.
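
    In general terms, a throttle frequency sweep supports estimating the frequency response between the commanded input and the resulting thrust-related output, from which gain and phase roll-off can be read. A generic sketch using cross- and auto-spectra (the signals, sample rate, and dynamics are invented, not the report's data or procedure):

    ```python
    import numpy as np
    from scipy import signal

    fs = 50.0                                   # sample rate in Hz (assumed)
    t = np.arange(0, 120, 1 / fs)
    throttle = signal.chirp(t, f0=0.05, f1=2.0, t1=t[-1])   # frequency-sweep input
    kernel = np.exp(-np.arange(100) / 20.0)                  # stand-in engine lag
    thrust = np.convolve(throttle, kernel, mode="same")

    f, Pxy = signal.csd(throttle, thrust, fs=fs, nperseg=1024)
    _, Pxx = signal.welch(throttle, fs=fs, nperseg=1024)
    H = Pxy / Pxx                               # frequency-response estimate
    gain_db = 20 * np.log10(np.abs(H))
    phase_deg = np.angle(H, deg=True)
    ```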

  8. Forward Bay Cover Separation Modeling and Testing for the Orion Multi-Purpose Crew Vehicle

    NASA Technical Reports Server (NTRS)

    Ali, Yasmin; Chuhta, Jesse D.; Hughes, Michael P.; Radke, Tara S.

    2015-01-01

    Spacecraft multi-body separation events during atmospheric descent require complex testing and analysis to validate the flight separation dynamics models used to verify no re-contact. The NASA Orion Multi-Purpose Crew Vehicle (MPCV) architecture includes a highly-integrated Forward Bay Cover (FBC) jettison assembly design that combines parachutes and piston thrusters to separate the FBC from the Crew Module (CM) and avoid re-contact. A multi-disciplinary team across numerous organizations examined key model parameters and risk areas to develop a robust but affordable test campaign in order to validate and verify the FBC separation event for Exploration Flight Test-1 (EFT-1). The FBC jettison simulation model is highly complex, consisting of dozens of parameters varied simultaneously, with numerous multi-parameter interactions (coupling and feedback) among the various model elements, and encompassing distinct near-field, mid-field, and far-field regimes. The test campaign was composed of component-level testing (for example gas-piston thrusters and parachute mortars), ground FBC jettison tests, and FBC jettison air-drop tests that were accomplished by a highly multi-disciplinary team. Three ground jettison tests isolated the testing of mechanisms and structures to anchor the simulation models excluding aerodynamic effects. Subsequently, two air-drop tests added aerodynamic and parachute elements, and served as integrated system demonstrations, which had been preliminarily explored during the Orion Pad Abort-1 (PA-1) flight test in May 2010. Both ground and drop tests provided extensive data to validate analytical models and to verify the FBC jettison event for EFT-1. Additional testing will be required to support human certification of this separation event, for which NASA and Lockheed Martin are applying knowledge from Apollo and EFT-1 testing and modeling to develop a robust human-rated FBC separation event.

  9. Accounting for Test Variability through Sizing Local Domains in Sequential Design Optimization with Concurrent Calibration-Based Model Validation

    DTIC Science & Technology

    Drignei, Dorin; Mourelatos, Zissimos; Pandey, Vijitashwa

    2013-08-01

  10. Evaluation of Validity and Reliability for Hierarchical Scales Using Latent Variable Modeling

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.

    2012-01-01

    A latent variable modeling method is outlined, which accomplishes estimation of criterion validity and reliability for a multicomponent measuring instrument with hierarchical structure. The approach provides point and interval estimates for the scale criterion validity and reliability coefficients, and can also be used for testing composite or…

  11. Aeroservoelastic Model Validation and Test Data Analysis of the F/A-18 Active Aeroelastic Wing

    NASA Technical Reports Server (NTRS)

    Brenner, Martin J.; Prazenica, Richard J.

    2003-01-01

    Model validation and flight test data analysis require careful consideration of the effects of uncertainty, noise, and nonlinearity. Uncertainty prevails in the data analysis techniques and results in a composite model uncertainty from unmodeled dynamics, assumptions and mechanics of the estimation procedures, noise, and nonlinearity. A fundamental requirement for reliable and robust model development is an attempt to account for each of these sources of error, in particular, for model validation, robust stability prediction, and flight control system development. This paper is concerned with data processing procedures for uncertainty reduction in model validation for stability estimation and nonlinear identification. F/A-18 Active Aeroelastic Wing (AAW) aircraft data is used to demonstrate signal representation effects on uncertain model development, stability estimation, and nonlinear identification. Data is decomposed using adaptive orthonormal best-basis and wavelet-basis signal decompositions for signal denoising into linear and nonlinear identification algorithms. Nonlinear identification from a wavelet-based Volterra kernel procedure is used to extract nonlinear dynamics from aeroelastic responses, and to assist model development and uncertainty reduction for model validation and stability prediction by removing a class of nonlinearity from the uncertainty.
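
    Wavelet-based denoising of the kind referenced above typically thresholds the detail coefficients and reconstructs the signal. A minimal soft-thresholding sketch with the universal threshold (a standard recipe, not the paper's adaptive best-basis procedure):

    ```python
    import numpy as np
    import pywt

    def wavelet_denoise(x, wavelet="db4", level=5):
        """Soft-threshold the detail coefficients using the universal threshold."""
        coeffs = pywt.wavedec(x, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # noise scale from finest level
        thresh = sigma * np.sqrt(2 * np.log(len(x)))
        coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[: len(x)]

    t = np.linspace(0, 1, 2048)
    noisy = np.sin(2 * np.pi * 12 * t) + 0.3 * np.random.randn(t.size)
    denoised = wavelet_denoise(noisy)
    ```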

  12. Development, Testing, and Validation of a Model-Based Tool to Predict Operator Responses in Unexpected Workload Transitions

    NASA Technical Reports Server (NTRS)

    Sebok, Angelia; Wickens, Christopher; Sargent, Robert

    2015-01-01

    One human factors challenge is predicting operator performance in novel situations. Approaches such as drawing on relevant previous experience and developing computational models to predict operator performance in complex situations offer potential methods to address this challenge. A few concerns with modeling operator performance are that models need to be realistic, and that they need to be tested empirically and validated. In addition, many existing human performance modeling tools are complex and require that an analyst gain significant experience to be able to develop models for meaningful data collection. This paper describes an effort to address these challenges by developing an easy-to-use model-based tool, using models that were developed from a review of the existing human performance literature and targeted experimental studies, and performing an empirical validation of key model predictions.

  13. An Efficient Data Partitioning to Improve Classification Performance While Keeping Parameters Interpretable

    PubMed Central

    Korjus, Kristjan; Hebart, Martin N.; Vicente, Raul

    2016-01-01

    Supervised machine learning methods typically require splitting data into multiple chunks for training, validating, and finally testing classifiers. For finding the best parameters of a classifier, training and validation are usually carried out with cross-validation. This is followed by application of the classifier with optimized parameters to a separate test set for estimating the classifier’s generalization performance. With limited data, this separation of test data creates a difficult trade-off between having more statistical power in estimating generalization performance versus choosing better parameters and fitting a better model. We propose a novel approach that we term “Cross-validation and cross-testing” improving this trade-off by re-using test data without biasing classifier performance. The novel approach is validated using simulated data and electrophysiological recordings in humans and rodents. The results demonstrate that the approach has a higher probability of discovering significant results than the standard approach of cross-validation and testing, while maintaining the nominal alpha level. In contrast to nested cross-validation, which is maximally efficient in re-using data, the proposed approach additionally maintains the interpretability of individual parameters. Taken together, we suggest an addition to currently used machine learning approaches which may be particularly useful in cases where model weights do not require interpretation, but parameters do. PMID:27564393
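
    For contrast, the standard scheme the paper improves upon holds out a test set up front, tunes parameters by cross-validation on the remaining data, and spends the test set exactly once. A sketch of that baseline (synthetic data; the proposed cross-validation-and-cross-testing procedure itself is not reproduced here):

    ```python
    from sklearn.datasets import make_classification
    from sklearn.model_selection import GridSearchCV, train_test_split
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=300, n_features=20, random_state=0)

    # Separate the test data before any tuning takes place.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    # Choose classifier parameters by cross-validation on the training part only.
    search = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=5).fit(X_tr, y_tr)

    # Estimate generalization performance once, on the untouched test set.
    print(search.best_params_, search.score(X_te, y_te))
    ```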

  14. Evaluating a technical university's placement test using the Rasch measurement model

    NASA Astrophysics Data System (ADS)

    Salleh, Tuan Salwani; Bakri, Norhayati; Zin, Zalhan Mohd

    2016-10-01

    This study discusses the process of validating a mathematics placement test at a technical university. The main objective is to produce a valid and reliable test to measure students' prerequisite knowledge for learning engineering technology mathematics. It is crucial to have a valid and reliable test, as the results will be used in critical decision making when assigning students to different groups of Technical Mathematics 1. The placement test, which consists of 50 mathematics questions, was administered to 82 new Diploma in Engineering Technology students at a technical university. This study employed the Rasch measurement model to analyze the data through the Winsteps software. The results revealed that ten test questions were easier than the ability level of even the less able students. Nevertheless, all ten questions satisfied the infit and outfit standard values. Thus, all the questions can be reused in future placement tests at the technical university.
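
    Given estimated person abilities and an item difficulty, the infit and outfit mean-square statistics cited above are simple functions of standardized residuals under the dichotomous Rasch model. A sketch of just that computation (Winsteps also estimates the parameters, which this omits; the data are simulated):

    ```python
    import numpy as np

    def rasch_fit(x, theta, b):
        """Infit/outfit mean squares for one dichotomous item.
        x: 0/1 responses; theta: person abilities; b: item difficulty."""
        p = 1.0 / (1.0 + np.exp(-(theta - b)))          # Rasch success probability
        w = p * (1.0 - p)                               # response variance
        outfit = np.mean((x - p) ** 2 / w)              # unweighted mean square
        infit = np.sum((x - p) ** 2) / np.sum(w)        # information-weighted mean square
        return infit, outfit

    rng = np.random.default_rng(0)
    theta = rng.normal(0, 1, 82)                        # 82 examinees, as in the study
    b = -1.2                                            # an easy item
    x = (rng.random(82) < 1 / (1 + np.exp(-(theta - b)))).astype(float)
    print(rasch_fit(x, theta, b))                       # values near 1.0 indicate good fit
    ```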

  15. Model-Based Thermal System Design Optimization for the James Webb Space Telescope

    NASA Technical Reports Server (NTRS)

    Cataldo, Giuseppe; Niedner, Malcolm B.; Fixsen, Dale J.; Moseley, Samuel H.

    2017-01-01

    Spacecraft thermal model validation is normally performed by comparing model predictions with thermal test data and reducing their discrepancies to meet the mission requirements. Based on thermal engineering expertise, the model input parameters are adjusted to tune the model output response to the test data. The end result is not guaranteed to be the best solution in terms of reduced discrepancy and the process requires months to complete. A model-based methodology was developed to perform the validation process in a fully automated fashion and provide mathematical bases to the search for the optimal parameter set that minimizes the discrepancies between model and data. The methodology was successfully applied to several thermal subsystems of the James Webb Space Telescope (JWST). Global or quasiglobal optimal solutions were found and the total execution time of the model validation process was reduced to about two weeks. The model sensitivities to the parameters, which are required to solve the optimization problem, can be calculated automatically before the test begins and provide a library for sensitivity studies. This methodology represents a crucial commodity when testing complex, large-scale systems under time and budget constraints. Here, results for the JWST Core thermal system will be presented in detail.
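
    At its core, the automated correlation described above casts model tuning as minimizing the discrepancy between predicted and measured temperatures over the input parameters. A toy version of that formulation (the two-parameter stand-in model and the "measured" values are invented, not JWST data):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    measured = np.array([293.1, 310.4, 285.7])        # node temperatures from test (invented)

    def thermal_model(params):
        """Stand-in for a thermal solver returning predicted node temperatures."""
        conductance, emissivity = params
        return np.array([290.0 + 5.0 * conductance,
                         305.0 + 8.0 * emissivity,
                         284.0 + 2.0 * conductance])

    def discrepancy(params):
        return np.sum((thermal_model(params) - measured) ** 2)

    result = minimize(discrepancy, x0=[1.0, 0.5], method="Nelder-Mead")
    print(result.x, result.fun)                       # tuned parameters, residual error
    ```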

  16. Model-based thermal system design optimization for the James Webb Space Telescope

    NASA Astrophysics Data System (ADS)

    Cataldo, Giuseppe; Niedner, Malcolm B.; Fixsen, Dale J.; Moseley, Samuel H.

    2017-10-01

    Spacecraft thermal model validation is normally performed by comparing model predictions with thermal test data and reducing their discrepancies to meet the mission requirements. Based on thermal engineering expertise, the model input parameters are adjusted to tune the model output response to the test data. The end result is not guaranteed to be the best solution in terms of reduced discrepancy and the process requires months to complete. A model-based methodology was developed to perform the validation process in a fully automated fashion and provide mathematical bases to the search for the optimal parameter set that minimizes the discrepancies between model and data. The methodology was successfully applied to several thermal subsystems of the James Webb Space Telescope (JWST). Global or quasiglobal optimal solutions were found and the total execution time of the model validation process was reduced to about two weeks. The model sensitivities to the parameters, which are required to solve the optimization problem, can be calculated automatically before the test begins and provide a library for sensitivity studies. This methodology represents a crucial commodity when testing complex, large-scale systems under time and budget constraints. Here, results for the JWST Core thermal system will be presented in detail.

  17. Object-Oriented Modeling of an Energy Harvesting System Based on Thermoelectric Generators

    NASA Astrophysics Data System (ADS)

    Nesarajah, Marco; Frey, Georg

    This paper deals with the modeling of an energy harvesting system based on thermoelectric generators (TEG), and the validation of the model by means of a test bench. TEGs are capable of improving the overall energy efficiency of energy systems, e.g. combustion engines or heating systems, by using the remaining waste heat to generate electrical power. Previously, a component-oriented model of the TEG itself was developed in the Modelica® language. With this model, any TEG can be described and simulated given the material properties and the physical dimensions. This model was then extended with the surrounding components into a complete model of a thermoelectric energy harvesting system. In addition to the TEG, the model contains the cooling system, the heat source, and the power electronics. To validate the simulation model, a test bench was built and installed on an oil-fired household heating system. The paper reports the results of the measurements and discusses the validity of the developed simulation models. Furthermore, the efficiency of the proposed energy harvesting system is derived, and possible improvements based on design variations tested in the simulation model are proposed.
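
    In the simplest lumped form, the electrical side of a TEG reduces to an open-circuit voltage proportional to the temperature difference in series with an internal resistance, so delivered power peaks at a matched load. A sketch under that assumption (parameter values are illustrative; the Modelica model in the paper is far more detailed):

    ```python
    import numpy as np

    def teg_power(alpha, dT, r_int, r_load):
        """Power into a resistive load: V_oc = alpha * dT, I = V_oc / (r_int + r_load)."""
        v_oc = alpha * dT
        i = v_oc / (r_int + r_load)
        return i ** 2 * r_load

    alpha, dT, r_int = 0.05, 60.0, 2.0                # V/K, K, ohm (illustrative)
    loads = np.linspace(0.1, 10.0, 500)
    power = teg_power(alpha, dT, r_int, loads)
    print(loads[np.argmax(power)])                    # maximum near r_load == r_int
    ```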

  18. Validation of Slosh Model Parameters and Anti-Slosh Baffle Designs of Propellant Tanks by Using Lateral Slosh Testing

    NASA Technical Reports Server (NTRS)

    Perez, Jose G.; Parks, Russel A.; Lazor, Daniel R.

    2012-01-01

    The slosh dynamics of propellant tanks can be represented by an equivalent pendulum-mass mechanical model. The parameters of this equivalent model, identified as slosh model parameters, are slosh mass, slosh mass center of gravity, slosh frequency, and smooth-wall damping. They can be obtained by both analysis and testing for discrete fill heights. Anti-slosh baffles are usually needed in propellant tanks to control the movement of the fluid inside the tank. Lateral slosh testing, involving both random testing and free-decay testing, are performed to validate the slosh model parameters and the damping added to the fluid by the anti-slosh baffles. Traditional modal analysis procedures are used to extract the parameters from the experimental data. Test setup of sub-scale test articles of cylindrical and spherical shapes will be described. A comparison between experimental results and analysis will be presented.
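
    For the common case of a flat-bottomed cylindrical tank, the slosh frequency that such an equivalent pendulum must reproduce has a classical closed-form expression (the NASA SP-106 formulation). A sketch, assuming that geometry:

    ```python
    import math

    def slosh_frequency(radius, fill_height, g=9.81, n=1):
        """n-th lateral slosh mode of a cylindrical tank:
        omega^2 = (g * xi_n / R) * tanh(xi_n * h / R), xi_1 = 1.841."""
        xi = (1.841, 5.331, 8.536)[n - 1]             # roots of J1'
        omega2 = g * xi / radius * math.tanh(xi * fill_height / radius)
        return math.sqrt(omega2) / (2.0 * math.pi)    # Hz

    f1 = slosh_frequency(radius=0.5, fill_height=0.8)
    omega = 2.0 * math.pi * f1
    print(f1, 9.81 / omega ** 2)    # frequency and equivalent pendulum length L = g/omega^2
    ```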

  19. Validity and Reliability of the 8-Item Work Limitations Questionnaire.

    PubMed

    Walker, Timothy J; Tullar, Jessica M; Diamond, Pamela M; Kohl, Harold W; Amick, Benjamin C

    2017-12-01

    Purpose: To evaluate the factorial validity, scale reliability, test-retest reliability, convergent validity, and discriminant validity of the 8-item Work Limitations Questionnaire (WLQ) among employees from a public university system. Methods: A secondary analysis of de-identified data from employees who completed an annual Health Assessment between the years 2009-2015 addressed the research aims. Confirmatory factor analysis (CFA) (n = 10,165) tested the latent structure of the 8-item WLQ. Scale reliability was determined using a CFA-based approach, while test-retest reliability was determined using the intraclass correlation coefficient. Convergent/discriminant validity was tested by evaluating relations of the 8-item WLQ with health/performance variables for convergent validity (health-related work performance, number of chronic conditions, and general health) and demographic variables for discriminant validity (gender and institution type). Results: A 1-factor model with three correlated residuals demonstrated excellent model fit (CFI = 0.99, TLI = 0.99, RMSEA = 0.03, and SRMR = 0.01). The scale reliability was acceptable (0.69, 95% CI 0.68-0.70) and the test-retest reliability was very good (ICC = 0.78). Low-to-moderate associations were observed between the 8-item WLQ and the health/performance variables, while weak associations were observed with the demographic variables. Conclusions: The 8-item WLQ demonstrated sufficient reliability and validity among employees from a public university system. Results suggest the 8-item WLQ is a usable alternative for studies when the more comprehensive 25-item WLQ is not available.

  20. A New Approach of Juvenile Age Estimation using Measurements of the Ilium and Multivariate Adaptive Regression Splines (MARS) Models for Better Age Prediction.

    PubMed

    Corron, Louise; Marchal, François; Condemi, Silvana; Chaumoître, Kathia; Adalian, Pascal

    2017-01-01

    Juvenile age estimation methods used in forensic anthropology generally lack methodological consistency and/or statistical validity. Considering this, a standard approach using nonparametric Multivariate Adaptive Regression Splines (MARS) models was tested to predict age from iliac biometric variables of male and female juveniles from Marseilles, France, aged 0-12 years. Models using unidimensional (length and width) and bidimensional iliac data (module and surface) were constructed on a training sample of 176 individuals and validated on an independent test sample of 68 individuals. Results show that MARS prediction models using iliac width, module, and area give overall better and statistically valid age estimates. These models integrate punctual nonlinearities of the relationship between age and osteometric variables. By constructing valid prediction intervals whose size increases with age, MARS models take into account the normal increase in individual variability. MARS models can qualify as a practical and standardized approach for juvenile age estimation. © 2016 American Academy of Forensic Sciences.

  1. Development of Decision Support Formulas for the Prediction of Bladder Outlet Obstruction and Prostatic Surgery in Patients With Lower Urinary Tract Symptom/Benign Prostatic Hyperplasia: Part II, External Validation and Usability Testing of a Smartphone App.

    PubMed

    Choo, Min Soo; Jeong, Seong Jin; Cho, Sung Yong; Yoo, Changwon; Jeong, Chang Wook; Ku, Ja Hyeon; Oh, Seung-June

    2017-04-01

    We aimed to externally validate the prediction model we developed for having bladder outlet obstruction (BOO) and requiring prostatic surgery using 2 independent data sets from tertiary referral centers, and also aimed to validate a mobile app for using this model through usability testing. Formulas and nomograms predicting whether a subject has BOO and needs prostatic surgery were validated with an external validation cohort from Seoul National University Bundang Hospital and Seoul Metropolitan Government-Seoul National University Boramae Medical Center between January 2004 and April 2015. A smartphone-based app was developed, and 8 young urologists were enrolled for usability testing to identify any human factor issues of the app. A total of 642 patients were included in the external validation cohort. No significant differences were found in the baseline characteristics of major parameters between the original cohort (n=1,179) and the external validation cohort, except for the maximal flow rate. Predictions of requiring prostatic surgery in the validation cohort showed a sensitivity of 80.6%, a specificity of 73.2%, a positive predictive value of 49.7%, a negative predictive value of 92.0%, and an area under the receiver operating characteristic curve of 0.84. The calibration plot indicated that the predictions have good correspondence. The decision curve also showed a high net benefit. Similar evaluation results were seen in the predictions of having BOO using the external validation cohort. Overall results of the usability test demonstrated that the app was user-friendly, with no major human factor issues. External validation of the newly developed prediction models demonstrated a moderate level of discrimination, adequate calibration, and high net benefit gains for predicting both having BOO and requiring prostatic surgery. In addition, the smartphone app implementing the prediction model was user-friendly, with no major human factor issues.
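
    The discrimination figures reported above are standard confusion-matrix quantities plus the area under the ROC curve. A sketch computing them from predicted probabilities (the data are simulated and sized only to echo the cohort's n = 642):

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    def diagnostic_metrics(y_true, y_prob, threshold=0.5):
        y_pred = (y_prob >= threshold).astype(int)
        tp = np.sum((y_pred == 1) & (y_true == 1))
        tn = np.sum((y_pred == 0) & (y_true == 0))
        fp = np.sum((y_pred == 1) & (y_true == 0))
        fn = np.sum((y_pred == 0) & (y_true == 1))
        return {"sensitivity": tp / (tp + fn), "specificity": tn / (tn + fp),
                "ppv": tp / (tp + fp), "npv": tn / (tn + fn),
                "auc": roc_auc_score(y_true, y_prob)}

    rng = np.random.default_rng(1)
    y = rng.integers(0, 2, 642)                             # simulated outcomes
    prob = np.clip(0.6 * y + rng.normal(0.3, 0.2, 642), 0, 1)
    print(diagnostic_metrics(y, prob))
    ```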

  2. Development and validation of age-dependent FE human models of a mid-sized male thorax.

    PubMed

    El-Jawahri, Raed E; Laituri, Tony R; Ruan, Jesse S; Rouhana, Stephen W; Barbat, Saeed D

    2010-11-01

    The increasing number of people over 65 years old (YO) is an important research topic in the area of impact biomechanics, and finite element (FE) modeling can provide valuable support for related research. There were three objectives of this study: (1) estimation of the representative age of the previously documented Ford Human Body Model (FHBM), an FE model which approximates the geometry and mass of a mid-sized male; (2) development of FE models representing two additional ages; and (3) validation of the resulting three models to the extent possible with respect to available physical tests. Specifically, the geometry of the model was compared to published data relating rib angles to age, and the mechanical properties of different simulated tissues were compared to a number of published aging functions. The FHBM was determined to represent a 53-59 YO mid-sized male. The aforementioned aging functions were used to develop FE models representing two additional ages: 35 and 75 YO. The rib model was validated against human rib specimens and whole-rib tests, under different loading conditions, with and without modeled fracture. In addition, the resulting three age-dependent models were validated by simulating cadaveric tests of blunt and sled impacts. The responses of the models, in general, were within the cadaveric response corridors. When compared to peak responses from individual cadavers similar in size and age to the age-dependent models, some responses were within one standard deviation of the test data. All other responses but one were within two standard deviations.

  3. Improved Healing of Large, Osseous, Segmental Defects by Reverse Dynamization: Evaluation in a Sheep Model

    DTIC Science & Technology

    2017-12-01

    reverse dynamization. This was supplemented by finite element analysis and the use of a strain gauge. This aim was successfully completed, with the ... testing deformation results for model validation. Development of a finite element (FE) model was conducted through ANSYS 16 to help characterize ... Fixators were characterized through mechanical testing of sawbone and ovine cadaver tibiae samples, and the data were used to validate a finite element ...

  4. Optimal test selection for prediction uncertainty reduction

    DOE PAGES

    Mullins, Joshua; Mahadevan, Sankaran; Urbina, Angel

    2016-12-02

    Economic factors and experimental limitations often lead to sparse and/or imprecise data used for the calibration and validation of computational models. This paper addresses resource allocation for calibration and validation experiments, in order to maximize their effectiveness within given resource constraints. When observation data are used for model calibration, the quality of the inferred parameter descriptions is directly affected by the quality and quantity of the data. This paper characterizes parameter uncertainty within a probabilistic framework, which enables the uncertainty to be systematically reduced with additional data. The validation assessment is also uncertain in the presence of sparse and imprecise data; therefore, this paper proposes an approach for quantifying the resulting validation uncertainty. Since calibration and validation uncertainty affect the prediction of interest, the proposed framework explores the decision of cost versus importance of data in terms of the impact on the prediction uncertainty. Often, calibration and validation tests may be performed for different input scenarios, and this paper shows how the calibration and validation results from different conditions may be integrated into the prediction. Then, a constrained discrete optimization formulation that selects the number of tests of each type (calibration or validation at given input conditions) is proposed. Furthermore, the proposed test selection methodology is demonstrated on a microelectromechanical system (MEMS) example.
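
    The selection step above is a constrained discrete optimization: pick how many tests of each type to run, within a budget, to minimize the resulting prediction uncertainty. A brute-force toy version (test types, costs, budget, and the uncertainty model are all invented; the paper's framework propagates calibration and validation uncertainty rigorously):

    ```python
    from itertools import product

    costs = {"cal_A": 2.0, "cal_B": 3.5, "val_A": 5.0}   # hypothetical per-test costs
    gains = {"cal_A": 0.10, "cal_B": 0.15, "val_A": 0.20}
    budget = 20.0

    def predicted_uncertainty(counts):
        """Stand-in for propagated prediction uncertainty; each test shrinks it."""
        u = 1.0
        for test_type, n in counts.items():
            u *= (1.0 - gains[test_type]) ** n
        return u

    best = None
    for n in product(range(6), repeat=len(costs)):        # exhaustive discrete search
        counts = dict(zip(costs, n))
        cost = sum(costs[t] * k for t, k in counts.items())
        if cost <= budget:
            u = predicted_uncertainty(counts)
            if best is None or u < best[0]:
                best = (u, counts, cost)
    print(best)                                           # lowest uncertainty within budget
    ```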

  5. Patient self-report section of the ASES questionnaire: a Spanish validation study using classical test theory and the Rasch model.

    PubMed

    Vrotsou, Kalliopi; Cuéllar, Ricardo; Silió, Félix; Rodriguez, Miguel Ángel; Garay, Daniel; Busto, Gorka; Trancho, Ziortza; Escobar, Antonio

    2016-10-18

    The aim of the current study was to validate the self-report section of the American Shoulder and Elbow Surgeons questionnaire (ASES-p) into Spanish. Shoulder pathology patients were recruited and followed up to 6 months post treatment. The ASES-p, Constant, SF-36, and Barthel scales were filled in pre and post treatment. Reliability was tested with Cronbach's alpha, and convergent validity with Spearman's correlation coefficients. Confirmatory factor analysis (CFA) and the Rasch model were implemented to assess the structural validity and unidimensionality of the scale. Models with and without the pain item were considered. Responsiveness to change was explored via standardised effect sizes. Results were acceptable for both tested models. Cronbach's alpha was 0.91, and total scale correlations with the Constant and physical SF-36 dimensions were >0.50. Factor loadings for CFA were >0.40. The Rasch model confirmed the unidimensionality of the scale, even though item 10 "do usual sport" was flagged as non-informative. Finally, patients with improved post-treatment shoulder function and those receiving surgery had higher standardised effect sizes. The adapted Spanish ASES-p version is a valid and reliable tool for shoulder evaluation, and its unidimensionality is supported by the data.

  6. Uncertainty Analysis of OC5-DeepCwind Floating Semisubmersible Offshore Wind Test Campaign

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, Amy N

    This paper examines how to assess the uncertainty levels for test measurements of the Offshore Code Comparison, Continued, with Correlation (OC5)-DeepCwind floating offshore wind system, examined within the OC5 project. The goal of the OC5 project was to validate the accuracy of ultimate and fatigue load estimates from a numerical model of the floating semisubmersible using data measured during scaled tank testing of the system under wind and wave loading. The examination of uncertainty was done after the test, and it was found that the limited amount of data available did not allow for an acceptable uncertainty assessment. Therefore, this paper instead qualitatively examines the sources of uncertainty associated with this test to start a discussion of how to assess uncertainty for these types of experiments and to summarize what should be done during future testing to acquire the information needed for a proper uncertainty assessment. Foremost, future validation campaigns should initiate numerical modeling before testing to guide the test campaign, which should include a rigorous assessment of uncertainty, and perform validation during testing to ensure that the tests address all of the validation needs.

  7. Uncertainty Analysis of OC5-DeepCwind Floating Semisubmersible Offshore Wind Test Campaign: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, Amy N

    This paper examines how to assess the uncertainty levels for test measurements of the Offshore Code Comparison, Continued, with Correlation (OC5)-DeepCwind floating offshore wind system, examined within the OC5 project. The goal of the OC5 project was to validate the accuracy of ultimate and fatigue load estimates from a numerical model of the floating semisubmersible using data measured during scaled tank testing of the system under wind and wave loading. The examination of uncertainty was done after the test, and it was found that the limited amount of data available did not allow for an acceptable uncertainty assessment. Therefore, this paper instead qualitatively examines the sources of uncertainty associated with this test to start a discussion of how to assess uncertainty for these types of experiments and to summarize what should be done during future testing to acquire the information needed for a proper uncertainty assessment. Foremost, future validation campaigns should initiate numerical modeling before testing to guide the test campaign, which should include a rigorous assessment of uncertainty, and perform validation during testing to ensure that the tests address all of the validation needs.

  8. Validation of Groundwater Models: Meaningful or Meaningless?

    NASA Astrophysics Data System (ADS)

    Konikow, L. F.

    2003-12-01

    Although numerical simulation models are valuable tools for analyzing groundwater systems, their predictive accuracy is limited. People who apply groundwater flow or solute-transport models, as well as those who make decisions based on model results, naturally want assurance that a model is "valid." To many people, model validation implies some authentication of the truth or accuracy of the model. History matching is often presented as the basis for model validation. Although such model calibration is a necessary modeling step, it is simply insufficient for model validation. Because of parameter uncertainty and solution non-uniqueness, declarations of validation (or verification) of a model are not meaningful. Post-audits represent a useful means to assess the predictive accuracy of a site-specific model, but they require the existence of long-term monitoring data. Model testing may yield invalidation, but that is an opportunity to learn and to improve the conceptual and numerical models. Examples of post-audits and of the application of a solute-transport model to a radioactive waste disposal site illustrate deficiencies in model calibration, prediction, and validation.

  9. Developing a model for hospital inherent safety assessment: Conceptualization and validation.

    PubMed

    Yari, Saeed; Akbari, Hesam; Gholami Fesharaki, Mohammad; Khosravizadeh, Omid; Ghasemi, Mohammad; Barsam, Yalda; Akbari, Hamed

    2018-01-01

    Paying attention to the safety of hospitals, as the most crucial institutions for providing medical and health services, wherein extensive facilities, equipment, and human resources exist, is of significant importance. The present research aims at developing a model for assessing hospitals' safety based on the principles of inherent safety design. Face validity (30 experts), content validity (20 experts), construct validity (268 samples), convergent validity, and divergent validity were employed to validate the prepared questionnaire; and item analysis, the Cronbach's alpha test, the ICC test (to measure test reliability), and the composite reliability coefficient were used to measure primary reliability. The relationships between variables and factors were confirmed at the 0.05 significance level by conducting confirmatory factor analysis (CFA) and the structural equation modeling (SEM) technique with the use of Smart-PLS. R-square and factor loading values, which were higher than 0.67 and 0.300 respectively, indicated a strong fit. Moderation (0.970), simplification (0.959), substitution (0.943), and minimization (0.5008) had the greatest weights in determining the inherent safety of a hospital, in that order. Moderation, simplification, and substitution carry more weight in inherent safety than the other dimensions, while minimization carries the least weight, which could be due to its definition as minimizing risk.

  10. Embedded performance validity testing in neuropsychological assessment: Potential clinical tools.

    PubMed

    Rickards, Tyler A; Cranston, Christopher C; Touradji, Pegah; Bechtold, Kathleen T

    2018-01-01

    The article aims to suggest clinically-useful tools in neuropsychological assessment for efficient use of embedded measures of performance validity. To accomplish this, we integrated available validity-related and statistical research from the literature, consensus statements, and survey-based data from practicing neuropsychologists. We provide recommendations for use of 1) Cutoffs for embedded performance validity tests including Reliable Digit Span, California Verbal Learning Test (Second Edition) Forced Choice Recognition, Rey-Osterrieth Complex Figure Test Combination Score, Wisconsin Card Sorting Test Failure to Maintain Set, and the Finger Tapping Test; 2) Selecting number of performance validity measures to administer in an assessment; and 3) Hypothetical clinical decision-making models for use of performance validity testing in a neuropsychological assessment collectively considering behavior, patient reporting, and data indicating invalid or noncredible performance. Performance validity testing helps inform the clinician about an individual's general approach to tasks: response to failure, task engagement and persistence, compliance with task demands. Data-driven clinical suggestions provide a resource to clinicians and to instigate conversation within the field to make more uniform, testable decisions to further the discussion, and guide future research in this area.

  11. Testing for the validity of purchasing power parity theory both in the long-run and the short-run for ASEAN-5

    NASA Astrophysics Data System (ADS)

    Choji, Niri Martha; Sek, Siok Kun

    2017-11-01

    The purchasing power parity theory says that the exchange rates between two nations ought to be equivalent to the ratio of the aggregate price levels between the two nations. For more than a decade, there has been substantial interest in testing the validity of purchasing power parity (PPP) empirically. This paper performs a series of tests to see if PPP is valid for the ASEAN-5 nations for the period 2000-2016 using monthly data. For this purpose, we conducted four different tests of stationarity, two cointegration tests (Pedroni and Westerlund), and also estimated a VAR model. The stationarity (unit root) tests reveal that the variables are not stationary in levels but are stationary in first differences. The cointegration test results did not reject the H0 of no cointegration, implying the absence of a long-run association among the variables, and the results of the VAR model did not reveal a strong short-run relationship. Based on the data, we therefore conclude that PPP is not valid in the long or short run for ASEAN-5 during 2000-2016.
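
    The time-series analogues of those steps are unit-root tests on levels and first differences, followed by a residual-based cointegration test between the (log) exchange rate and relative prices. A sketch with simulated random walks (the paper's panel tests, Pedroni and Westerlund, are not reproduced here):

    ```python
    import numpy as np
    from statsmodels.tsa.stattools import adfuller, coint

    rng = np.random.default_rng(0)
    n = 204                                              # monthly observations, 2000-2016
    log_exchange_rate = np.cumsum(rng.normal(size=n))    # simulated I(1) series
    log_price_ratio = np.cumsum(rng.normal(size=n))      # simulated I(1) series

    # Unit-root tests: non-stationary in levels, stationary in first differences.
    print(adfuller(log_exchange_rate)[1])                # p-value in levels (large)
    print(adfuller(np.diff(log_exchange_rate))[1])       # p-value in differences (small)

    # Engle-Granger test; failing to reject no-cointegration argues against long-run PPP.
    print(coint(log_exchange_rate, log_price_ratio)[1])
    ```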

  12. EUCLID/NISP GRISM qualification model AIT/AIV campaign: optical, mechanical, thermal and vibration tests

    NASA Astrophysics Data System (ADS)

    Caillat, A.; Costille, A.; Pascal, S.; Rossin, C.; Vives, S.; Foulon, B.; Sanchez, P.

    2017-09-01

    Dark matter and dark energy mysteries will be explored by the Euclid ESA M-class space mission, which will be launched in 2020. Millions of galaxies will be surveyed through visible imagery and NIR imagery and spectroscopy in order to map the Universe in three dimensions at different evolution stages over the past 10 billion years. The massive NIR spectroscopic survey will be carried out efficiently by the NISP instrument thanks to the use of grisms (from "Grating pRISMs") developed under the responsibility of LAM. In this paper, we present the verification philosophy applied to test and validate each grism before delivery to the project. The test sequence covers a large set of verifications: optical tests to validate the efficiency and WFE of the component, mechanical tests to validate robustness to vibration, thermal tests to validate behavior in a cryogenic environment, and a complete metrology of the assembled component. We show the test results obtained on the first grism Engineering and Qualification Model (EQM), which will be delivered to the NISP project in fall 2016.

  13. The Role of Psychometric Modeling in Test Validation: An Application of Multidimensional Item Response Theory

    ERIC Educational Resources Information Center

    Schilling, Stephen G.

    2007-01-01

    In this paper the author examines the role of item response theory (IRT), particularly multidimensional item response theory (MIRT) in test validation from a validity argument perspective. The author provides justification for several structural assumptions and interpretations, taking care to describe the role he believes they should play in any…

  14. Development of Detonation Modeling Capabilities for Rocket Test Facilities: Hydrogen-Oxygen-Nitrogen Mixtures

    NASA Technical Reports Server (NTRS)

    Allgood, Daniel C.

    2016-01-01

    The objective of the presented work was to develop validated computational fluid dynamics (CFD) based methodologies for predicting propellant detonations and their associated blast environments. Applications of interest were scenarios relevant to rocket propulsion test and launch facilities. All model development was conducted within the framework of the Loci/CHEM CFD tool due to its reliability and robustness in predicting high-speed combusting flow-fields associated with rocket engines and plumes. During the course of the project, verification and validation studies were completed for hydrogen-fueled detonation phenomena such as shock-induced combustion, confined detonation waves, vapor cloud explosions, and deflagration-to-detonation transition (DDT) processes. The DDT validation cases included predicting flame acceleration mechanisms associated with turbulent flame-jets and flow-obstacles. Excellent comparison between test data and model predictions were observed. The proposed CFD methodology was then successfully applied to model a detonation event that occurred during liquid oxygen/gaseous hydrogen rocket diffuser testing at NASA Stennis Space Center.

  15. Validation of catchment models for predicting land-use and climate change impacts. 2. Case study for a Mediterranean catchment

    NASA Astrophysics Data System (ADS)

    Parkin, G.; O'Donnell, G.; Ewen, J.; Bathurst, J. C.; O'Connell, P. E.; Lavabre, J.

    1996-02-01

    Validation methods commonly used to test catchment models are not capable of demonstrating a model's fitness for making predictions for catchments where the catchment response is not known (including hypothetical catchments, and future conditions of existing catchments which are subject to land-use or climate change). This paper describes the first use of a new method of validation (Ewen and Parkin, 1996. J. Hydrol., 175: 583-594) designed to address these types of application; the method involves making 'blind' predictions of selected hydrological responses which are considered important for a particular application. SHETRAN (a physically based, distributed catchment modelling system) is tested on a small Mediterranean catchment. The test involves quantification of the uncertainty in four predicted features of the catchment response (continuous hydrograph, peak discharge rates, monthly runoff, and total runoff), and comparison of observations with the predicted ranges for these features. The results of this test are considered encouraging.

  16. Validity of Sensory Systems as Distinct Constructs

    PubMed Central

    Su, Chia-Ting

    2014-01-01

    This study investigated the validity of sensory systems as distinct measurable constructs as part of a larger project examining Ayres’s theory of sensory integration. Confirmatory factor analysis (CFA) was conducted to test whether sensory questionnaire items represent distinct sensory system constructs. Data were obtained from clinical records of two age groups, 2- to 5-yr-olds (n = 231) and 6- to 10-yr-olds (n = 223). With each group, we tested several CFA models for goodness of fit with the data. The accepted model was identical for each group and indicated that tactile, vestibular–proprioceptive, visual, and auditory systems form distinct, valid factors that are not age dependent. In contrast, alternative models that grouped items according to sensory processing problems (e.g., over- or underresponsiveness within or across sensory systems) did not yield valid factors. Results indicate that distinct sensory system constructs can be measured validly using questionnaire data. PMID:25184467

  17. ASTP ranging system mathematical model

    NASA Technical Reports Server (NTRS)

    Ellis, M. R.; Robinson, L. H.

    1973-01-01

    A mathematical model of the VHF ranging system is presented to analyze its performance in the Apollo-Soyuz Test Project (ASTP). The system was adapted for use in the ASTP. The ranging system mathematical model is presented in block diagram form, and a brief description of the overall model is also included. A procedure for implementing the math model is presented, along with a discussion of the validation of the math model and the overall summary and conclusions of the study effort. Detailed appendices cover the five study tasks: early/late gate model development, unlock probability development, system error model development, probability of acquisition model development, and math model validation testing.

  18. Bond Graph Modeling and Validation of an Energy Regenerative System for Emulsion Pump Tests

    PubMed Central

    Li, Yilei; Zhu, Zhencai; Chen, Guoan

    2014-01-01

    Test systems for emulsion pumps currently face serious challenges due to their huge energy consumption and waste. To address this energy issue, a novel energy regenerative system (ERS) for emulsion pump tests is briefly introduced first. Modeling such an ERS, which spans multiple energy domains, requires a unified and systematic approach, and bond graph modeling is well suited for this task. The bond graph model of this ERS is developed by first considering the separate components before assembling them together, and the state-space equation is derived in the same way. Both numerical simulation and experiments are carried out to validate the bond graph model of this ERS. Moreover, the simulation and experimental results show that this ERS not only satisfies the test requirements, but could also save at least 25% of energy consumption compared to the original test system, demonstrating that it is a promising method of energy regeneration for emulsion pump tests. PMID:24967428

  19. Validation of Slosh Model Parameters and Anti-Slosh Baffle Designs of Propellant Tanks by Using Lateral Slosh Testing

    NASA Technical Reports Server (NTRS)

    Perez, Jose G.; Parks, Russel, A.; Lazor, Daniel R.

    2012-01-01

    The slosh dynamics of propellant tanks can be represented by an equivalent mass-pendulum-dashpot mechanical model. The parameters of this equivalent model, identified as slosh mechanical model parameters, are slosh frequency, slosh mass, and pendulum hinge point location. They can be obtained by both analysis and testing for discrete fill levels. Anti-slosh baffles are usually needed in propellant tanks to control the movement of the fluid inside the tank. Lateral slosh testing, involving both random excitation testing and free-decay testing, are performed to validate the slosh mechanical model parameters and the damping added to the fluid by the anti-slosh baffles. Traditional modal analysis procedures were used to extract the parameters from the experimental data. Test setup of sub-scale tanks will be described. A comparison between experimental results and analysis will be presented.

  20. An RL10A-3-3A rocket engine model using the rocket engine transient simulator (ROCETS) software

    NASA Technical Reports Server (NTRS)

    Binder, Michael

    1993-01-01

    Steady-state and transient computer models of the RL10A-3-3A rocket engine have been created using the Rocket Engine Transient Simulation (ROCETS) code. These models were created for several purposes. The RL10 engine is a critical component of past, present, and future space missions; the model will give NASA an in-house capability to simulate the performance of the engine under various operating conditions and mission profiles. The RL10 simulation activity is also an opportunity to further validate the ROCETS program. The ROCETS code is an important tool for modeling rocket engine systems at NASA Lewis. ROCETS provides a modular and general framework for simulating the steady-state and transient behavior of any desired propulsion system. Although the ROCETS code is being used in a number of different analysis and design projects within NASA, it has not been extensively validated for any system using actual test data. The RL10A-3-3A has a ten year history of test and flight applications; it should provide sufficient data to validate the ROCETS program capability. The ROCETS models of the RL10 system were created using design information provided by Pratt & Whitney, the engine manufacturer. These models are in the process of being validated using test-stand and flight data. This paper includes a brief description of the models and comparison of preliminary simulation output against flight and test-stand data.

  1. Validation of catchment models for predicting land-use and climate change impacts. 1. Method

    NASA Astrophysics Data System (ADS)

    Ewen, J.; Parkin, G.

    1996-02-01

    Computer simulation models are increasingly being proposed as tools capable of giving water resource managers accurate predictions of the impact of changes in land-use and climate. Previous validation testing of catchment models is reviewed, and it is concluded that the methods used do not clearly test a model's fitness for such a purpose. A new generally applicable method is proposed. This involves the direct testing of fitness for purpose, uses established scientific techniques, and may be implemented within a quality assured programme of work. The new method is applied in Part 2 of this study (Parkin et al., J. Hydrol., 175:595-613, 1996).

  2. Validation of the SimSET simulation package for modeling the Siemens Biograph mCT PET scanner

    NASA Astrophysics Data System (ADS)

    Poon, Jonathan K.; Dahlbom, Magnus L.; Casey, Michael E.; Qi, Jinyi; Cherry, Simon R.; Badawi, Ramsey D.

    2015-02-01

    Monte Carlo simulation provides a valuable tool in performance assessment and optimization of system design parameters for PET scanners. SimSET is a popular Monte Carlo simulation toolkit that features fast simulation time, as well as variance reduction tools to further enhance computational efficiency. However, SimSET has lacked the ability to simulate block detectors until its most recent release. Our goal is to validate new features of SimSET by developing a simulation model of the Siemens Biograph mCT PET scanner and comparing the results to a simulation model developed in the GATE simulation suite and to experimental results. We used the NEMA NU-2 2007 scatter fraction, count rates, and spatial resolution protocols to validate the SimSET simulation model and its new features. The SimSET model overestimated the experimental results of the count rate tests by 11-23% and the spatial resolution test by 13-28%, which is comparable to previous validation studies of other PET scanners in the literature. The difference between the SimSET and GATE simulation was approximately 4-8% for the count rate test and approximately 3-11% for the spatial resolution test. In terms of computational time, SimSET performed simulations approximately 11 times faster than GATE simulations. The new block detector model in SimSET offers a fast and reasonably accurate simulation toolkit for PET imaging applications.

  3. Global Aerodynamic Modeling for Stall/Upset Recovery Training Using Efficient Piloted Flight Test Techniques

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Cunningham, Kevin; Hill, Melissa A.

    2013-01-01

    Flight test and modeling techniques were developed for efficiently identifying global aerodynamic models that can be used to accurately simulate stall, upset, and recovery on large transport airplanes. The techniques were developed and validated in a high-fidelity fixed-base flight simulator using a wind-tunnel aerodynamic database, realistic sensor characteristics, and a realistic flight deck representative of a large transport aircraft. Results demonstrated that aerodynamic models for stall, upset, and recovery can be identified rapidly and accurately using relatively simple piloted flight test maneuvers. Stall maneuver predictions and comparisons of identified aerodynamic models with data from the underlying simulation aerodynamic database were used to validate the techniques.

  4. Development and validation of an energy-balance knowledge test for fourth- and fifth-grade students.

    PubMed

    Chen, Senlin; Zhu, Xihe; Kang, Minsoo

    2017-05-01

    A valid test measuring children's energy-balance (EB) knowledge is lacking in research. This study developed and validated the energy-balance knowledge test (EBKT) for fourth- and fifth-grade students. The original EBKT contained 25 items but was reduced to 23 items based on pilot results and intensive expert panel discussion. De-identified data were collected from 468 fourth- and fifth-grade students enrolled in four schools to examine the psychometric properties of the EBKT items. The Rasch model analysis was conducted using the Winstep 3.65.0 software. Differential item functioning (DIF) analysis flagged 1 item (item #4) functioning differently between boys and girls, which was deleted. The final 22-item EBKT showed desirable model-data fit indices. The items had large variability, ranging from -3.58 logit (item #10, the easiest) to 1.70 logit (item #3, the hardest). The average person ability on the test was 0.28 logit (SD = .78). Additional analyses supported known-group difference validity of the EBKT scores in capturing gender- and grade-based ability differences. The test was overall valid but could be further improved by expanding test items to discern various ability levels. For lack of a better test, researchers and practitioners may use the EBKT to assess fourth- and fifth-grade students' EB knowledge.
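
    For context, the dichotomous Rasch model underlying this analysis scores the probability of a correct response from the difference between person ability and item difficulty, both on the logit scale. A minimal sketch using the ability and difficulty values reported above (the function is illustrative, not the Winstep implementation):

        import math

        def rasch_p_correct(theta, b):
            """Dichotomous Rasch model:
            P(correct) = exp(theta - b) / (1 + exp(theta - b))."""
            return 1.0 / (1.0 + math.exp(-(theta - b)))

        # Average person ability 0.28 logit against the easiest (-3.58)
        # and hardest (1.70) item difficulties reported above.
        for b in (-3.58, 1.70):
            p = rasch_p_correct(0.28, b)
            print(f"item difficulty {b:+.2f} logit -> P(correct) = {p:.2f}")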

  5. Verification and Validation of EnergyPlus Phase Change Material Model for Opaque Wall Assemblies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tabares-Velasco, P. C.; Christensen, C.; Bianchi, M.

    2012-08-01

    Phase change materials (PCMs) represent a technology that may reduce peak loads and HVAC energy consumption in buildings. A few building energy simulation programs have the capability to simulate PCMs, but their accuracy has not been completely tested. This study shows the procedure used to verify and validate the PCM model in EnergyPlus using a similar approach as dictated by ASHRAE Standard 140, which consists of analytical verification, comparative testing, and empirical validation. This process was valuable, as two bugs were identified and fixed in the PCM model, and version 7.1 of EnergyPlus will have a validated PCM model. Preliminary results using whole-building energy analysis show that careful analysis should be done when designing PCMs in homes, as their thermal performance depends on several variables such as PCM properties and location in the building envelope.
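
    As an illustration of the analytical-verification step in the ASHRAE Standard 140 style of approach (a hedged sketch; the case, properties, and method are hypothetical and not the actual EnergyPlus procedure), a numerical solution can be checked against a closed-form answer on a simplified case, such as lumped-capacitance cooling of a wall layer:

        import math

        # Hypothetical wall layer: lumped capacitance cooling toward ambient.
        H_A_OVER_C = 0.3        # hA/C, inverse time constant (1/h)
        T0, T_AMB = 35.0, 20.0  # initial and ambient temperatures (C)
        DT, T_END = 0.01, 10.0  # time step and duration (h)

        # Explicit-Euler "simulation" of dT/dt = -(hA/C) * (T - T_amb).
        t, temp = 0.0, T0
        while t < T_END:
            temp += -H_A_OVER_C * (temp - T_AMB) * DT
            t += DT

        # Closed-form solution of the same problem for comparison.
        exact = T_AMB + (T0 - T_AMB) * math.exp(-H_A_OVER_C * T_END)
        print(f"numerical: {temp:.3f} C, analytical: {exact:.3f} C, "
              f"error: {abs(temp - exact):.4f} C")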

  6. Construct validity of the Moral Development Scale for Professionals (MDSP).

    PubMed

    Söderhamn, Olle; Bjørnestad, John Olav; Skisland, Anne; Cliffordson, Christina

    2011-01-01

    The aim of this study was to investigate the construct validity of the Moral Development Scale for Professionals (MDSP) using structural equation modeling. The instrument is a 12-item self-report instrument, developed in the Scandinavian cultural context and based on Kohlberg's theory. A hypothesized simplex structure model underlying the MDSP was tested through structural equation modeling. Validity was also tested as the proportion of respondents older than 20 years that reached the highest moral level, which according to the theory should be small. A convenience sample of 339 nursing students with a mean age of 25.3 years participated. Results confirmed the simplex model structure, indicating that MDSP reflects a moral construct empirically organized from low to high. A minority of respondents >20 years of age (13.5%) scored more than 80% on the highest moral level. The findings support the construct validity of the MDSP and the stages and levels in Kohlberg's theory.

  7. Flight-Test Validation and Flying Qualities Evaluation of a Rotorcraft UAV Flight Control System

    NASA Technical Reports Server (NTRS)

    Mettler, Bernard; Tuschler, Mark B.; Kanade, Takeo

    2000-01-01

    This paper presents a process of design and flight-test validation and flying qualities evaluation of a flight control system for a rotorcraft-based unmanned aerial vehicle (RUAV). The keystone of this process is an accurate flight-dynamic model of the aircraft, derived by using system identification modeling. The model captures the most relevant dynamic features of our unmanned rotorcraft, and explicitly accounts for the presence of a stabilizer bar. Using the identified model, we were able to determine the performance margins of our original control system and identify limiting factors. The performance limitations were addressed, and the attitude control system was optimized for three different performance levels: slow, medium, and fast. The optimized control laws will be implemented in our RUAV. We will first determine the validity of our control design approach by validating the optimized controllers in flight tests. Subsequently, we will fly a series of maneuvers with the three optimized controllers to determine the level of flying qualities that can be attained. The outcome will enable us to draw important conclusions on the flying qualities requirements for small-scale RUAVs.

  8. MAVRIC Flutter Model Transonic Limit Cycle Oscillation Test

    NASA Technical Reports Server (NTRS)

    Edwards, John W.; Schuster, David M.; Spain, Charles V.; Keller, Donald F.; Moses, Robert W.

    2001-01-01

    The Models for Aeroelastic Validation Research Involving Computation semi-span wind-tunnel model (MAVRIC-I), a business jet wing-fuselage flutter model, was tested in NASA Langley's Transonic Dynamics Tunnel with the goal of obtaining experimental data suitable for Computational Aeroelasticity code validation at transonic separation onset conditions. This research model is notable for its inexpensive construction and instrumentation installation procedures. Unsteady pressures and wing responses were obtained for three wingtip configurations of clean, tipstore, and winglet. Traditional flutter boundaries were measured over the range of M = 0.6 to 0.9 and maps of Limit Cycle Oscillation (LCO) behavior were made in the range of M = 0.85 to 0.95. Effects of dynamic pressure and angle-of-attack were measured. Testing in both R134a heavy gas and air provided unique data on Reynolds number, transition effects, and the effect of speed of sound on LCO behavior. The data set provides excellent code validation test cases for the important class of flow conditions involving shock-induced transonic flow separation onset at low wing angles, including LCO behavior.

  10. Piloted Evaluation of a UH-60 Mixer Equivalent Turbulence Simulation Model

    NASA Technical Reports Server (NTRS)

    Lusardi, Jeff A.; Blanken, Chris L.; Tischler, Mark B.

    2002-01-01

    A simulation study of a recently developed hover/low speed Mixer Equivalent Turbulence Simulation (METS) model for the UH-60 Black Hawk helicopter was conducted in the NASA Ames Research Center Vertical Motion Simulator (VMS). The experiment was a continuation of previous work to develop a simple, but validated, turbulence model for hovering rotorcraft. To validate the METS model, two experienced test pilots replicated precision hover tasks that had been conducted in an instrumented UH-60 helicopter in turbulence. Objective simulation data were collected for comparison with flight test data, and subjective data were collected that included handling qualities ratings and pilot comments for increasing levels of turbulence. Analyses of the simulation results show good analytic agreement between the METS model and flight test data, with favorable pilot perception of the simulated turbulence. Precision hover tasks were also repeated using the more complex rotating-frame SORBET (Simulation Of Rotor Blade Element Turbulence) model to generate turbulence. Comparisons of the empirically derived METS model with the theoretical SORBET model show good agreement, providing validation of the more complex blade element method of simulating turbulence.

  11. SPR Hydrostatic Column Model Verification and Validation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bettin, Giorgia; Lord, David; Rudeen, David Keith

    2015-10-01

    A Hydrostatic Column Model (HCM) was developed to help differentiate between normal "tight" well behavior and small-leak behavior under nitrogen for testing the pressure integrity of crude oil storage wells at the U.S. Strategic Petroleum Reserve. This effort was motivated by steady, yet distinct, pressure behavior of a series of Big Hill caverns that have been placed under nitrogen for an extended period of time. This report describes the HCM model, its functional requirements, the model structure, and the verification and validation process. Different modes of operation are also described, which illustrate how the software can be used to model extended nitrogen monitoring and Mechanical Integrity Tests by predicting wellhead pressures along with nitrogen interface movements. Model verification has shown that the program runs correctly and is implemented as intended. The cavern BH101 long-term nitrogen test was used to validate the model, which showed very good agreement with measured data. This supports the claim that the model is, in fact, capturing the relevant physical phenomena and can be used to make accurate predictions of both wellhead pressure and interface movements.
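
    A minimal sketch of the hydrostatic-column idea (the fluid layers and values are hypothetical, not the HCM implementation): the predicted wellhead pressure is the cavern pressure minus the hydrostatic head of the stacked fluid columns in the well, so a measured wellhead pressure drifting from this prediction can flag a possible leak.

        # Hypothetical fluid column in a nitrogen-tested brine well:
        # (fluid name, density kg/m^3, column height m), listed top-down.
        G = 9.81
        column = [
            ("nitrogen", 90.0, 300.0),
            ("crude oil", 850.0, 250.0),
            ("brine", 1200.0, 100.0),
        ]

        def wellhead_pressure(cavern_pressure_pa, layers, g=G):
            """Cavern pressure minus the hydrostatic head of each fluid layer."""
            head = sum(rho * g * h for _, rho, h in layers)
            return cavern_pressure_pa - head

        p_wh = wellhead_pressure(12.0e6, column)
        print(f"predicted wellhead pressure: {p_wh / 1e6:.2f} MPa")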

  12. A validated model for the 22-item Sino-Nasal Outcome Test subdomain structure in chronic rhinosinusitis.

    PubMed

    Feng, Allen L; Wesely, Nicholas C; Hoehle, Lloyd P; Phillips, Katie M; Yamasaki, Alisa; Campbell, Adam P; Gregorio, Luciano L; Killeen, Thomas E; Caradonna, David S; Meier, Josh C; Gray, Stacey T; Sedaghat, Ahmad R

    2017-12-01

    Previous studies have identified subdomains of the 22-item Sino-Nasal Outcome Test (SNOT-22), reflecting distinct and largely independent categories of chronic rhinosinusitis (CRS) symptoms. However, no study has validated the subdomain structure of the SNOT-22. This study aims to validate the existence of underlying symptom subdomains of the SNOT-22 using confirmatory factor analysis (CFA) and to develop a subdomain model that practitioners and researchers can use to describe CRS symptomatology. A total of 800 patients with CRS were included into this cross-sectional study (400 CRS patients from Boston, MA, and 400 CRS patients from Reno, NV). Their SNOT-22 responses were analyzed using exploratory factor analysis (EFA) to determine the number of symptom subdomains. A CFA was performed to develop a validated measurement model for the underlying SNOT-22 subdomains along with various tests of validity and goodness of fit. EFA demonstrated 4 distinct factors reflecting: sleep, nasal, otologic/facial pain, and emotional symptoms (Cronbach's alpha, >0.7; Bartlett's test of sphericity, p < 0.001; Kaiser-Meyer-Olkin >0.90), independent of geographic locale. The corresponding CFA measurement model demonstrated excellent measures of fit (root mean square error of approximation, <0.06; standardized root mean square residual, <0.08; comparative fit index, >0.95; Tucker-Lewis index, >0.95) and measures of construct validity (heterotrait-monotrait [HTMT] ratio, <0.85; composite reliability, >0.7), again independent of geographic locale. The use of the 4-subdomain structure for SNOT-22 (reflecting sleep, nasal, otologic/facial pain, and emotional symptoms of CRS) was validated as the most appropriate to calculate SNOT-22 subdomain scores for patients from different geographic regions using CFA. © 2017 ARS-AAOA, LLC.
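
    The goodness-of-fit cut-offs quoted above can be collected into a single acceptance gate; a minimal sketch (the fit values in the example are hypothetical, not results from this study):

        def cfa_fit_acceptable(rmsea, srmr, cfi, tli):
            """Cut-offs quoted above: RMSEA < .06, SRMR < .08,
            CFI > .95, TLI > .95."""
            return rmsea < 0.06 and srmr < 0.08 and cfi > 0.95 and tli > 0.95

        # Hypothetical fit indices from a 4-factor CFA run.
        print(cfa_fit_acceptable(rmsea=0.048, srmr=0.041, cfi=0.97, tli=0.96))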

  13. The Chinese version of the Outcome Expectations for Exercise scale: validation study.

    PubMed

    Lee, Ling-Ling; Chiu, Yu-Yun; Ho, Chin-Chih; Wu, Shu-Chen; Watson, Roger

    2011-06-01

    The English nine-item Outcome Expectations for Exercise (OEE) scale has been tested and found to be valid for use in various settings, particularly among older people, with good internal consistency and validity. Data on the use of the OEE scale among older Chinese people living in the community and how cultural differences might affect the administration of the OEE scale are limited. To test the validity and reliability of the Chinese version of the Outcome Expectations for Exercise scale among older people, a cross-sectional validation study was designed to test the Chinese version of the OEE scale (OEE-C). Reliability was examined by testing both the internal consistency for the overall scale and the squared multiple correlation coefficient for the single item measure. The validity of the scale was tested on the basis of both a traditional psychometric test and a confirmatory factor analysis using structural equation modelling. The Mokken Scaling Procedure (MSP) was used to investigate if there were any hierarchical, cumulative sets of items in the measure. The OEE-C scale was tested in a group of older people in Taiwan (n=108, mean age=77.1). There was acceptable internal consistency (alpha=.85) and model fit in the scale. Evidence of the validity of the measure was demonstrated by the tests for criterion-related validity and construct validity. There was a statistically significant correlation between exercise outcome expectations and exercise self-efficacy (r=.34, p<.01). An analysis of the Mokken Scaling Procedure found that nine items of the scale were all retained in the analysis and the resulting scale was reliable and statistically significant (p=.0008). The results obtained in the present study provided acceptable levels of reliability and validity evidence for the Chinese Outcome Expectations for Exercise scale when used with older people in Taiwan. Future testing of the OEE-C scale needs to be carried out to see whether these results are generalisable to older Chinese people living in urban areas. Copyright © 2010 Elsevier Ltd. All rights reserved.
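
    For reference, the internal-consistency figure quoted above (alpha=.85) is Cronbach's alpha, which can be computed directly from item-level responses. A minimal sketch with made-up data (not the OEE-C responses):

        def cronbach_alpha(items):
            """Cronbach's alpha from a list of item-score columns
            (one list of person scores per item)."""
            k = len(items)
            n = len(items[0])

            def var(xs):
                m = sum(xs) / len(xs)
                return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

            totals = [sum(item[i] for item in items) for i in range(n)]
            item_var_sum = sum(var(item) for item in items)
            return k / (k - 1) * (1.0 - item_var_sum / var(totals))

        # Hypothetical responses from 4 people to three 5-point items.
        items = [[4, 5, 3, 4], [4, 4, 3, 5], [5, 5, 2, 4]]
        print(f"alpha = {cronbach_alpha(items):.2f}")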

  14. Control Oriented Modeling and Validation of Aeroservoelastic Systems

    NASA Technical Reports Server (NTRS)

    Crowder, Marianne; deCallafon, Raymond (Principal Investigator)

    2002-01-01

    Lightweight aircraft design emphasizes the reduction of structural weight to maximize aircraft efficiency and agility at the cost of increasing the likelihood of structural dynamic instabilities. To ensure flight safety, extensive flight testing and active structural servo control strategies are required to explore and expand the boundary of the flight envelope. Aeroservoelastic (ASE) models can provide online flight monitoring of dynamic instabilities to reduce flight time testing and increase flight safety. The success of ASE models is determined by the ability to take into account varying flight conditions and the possibility to perform flight monitoring under the presence of active structural servo control strategies. In this continued study, these aspects are addressed by developing specific methodologies and algorithms for control relevant robust identification and model validation of aeroservoelastic structures. The closed-loop model robust identification and model validation are based on a fractional model approach where the model uncertainties are characterized in a closed-loop relevant way.

  15. Large Engine Technology (LET) Short Haul Civil Tiltrotor Contingency Power Materials Knowledge and Lifing Methodologies

    NASA Technical Reports Server (NTRS)

    Spring, Samuel D.

    2006-01-01

    This report documents the results of an experimental program conducted on two advanced metallic alloy systems (Rene' 142 directionally solidified (DS) alloy and Rene' N6 single crystal alloy) and the characterization of two distinct internal state variable inelastic constitutive models. The long-term objective of the study was to develop a computational life prediction methodology that can integrate the obtained material data. A specialized test matrix for characterizing advanced unified viscoplastic models was specified and conducted. This matrix included strain-controlled tensile tests with intermittent relaxation tests with 2-hr hold times, constant stress creep tests, stepped creep tests, mixed creep and plasticity tests, cyclic temperature creep tests, and tests in which temperature overloads were present to simulate actual operation conditions for validation of the models. The selected internal state variable models were shown to be capable of representing the material behavior exhibited by the experimental results; however, the program ended prior to final validation of the models.

  16. Validation of Alternative In Vitro Methods to Animal Testing: Concepts, Challenges, Processes and Tools.

    PubMed

    Griesinger, Claudius; Desprez, Bertrand; Coecke, Sandra; Casey, Warren; Zuang, Valérie

    This chapter explores the concepts, processes, tools and challenges relating to the validation of alternative methods for toxicity and safety testing. In general terms, validation is the process of assessing the appropriateness and usefulness of a tool for its intended purpose. Validation is routinely used in various contexts in science, technology, the manufacturing and services sectors. It serves to assess the fitness-for-purpose of devices, systems, and software, up to entire methodologies. In the area of toxicity testing, validation plays an indispensable role: "alternative approaches" are increasingly replacing animal models as predictive tools, and it needs to be demonstrated that these novel methods are fit for purpose. Alternative approaches include in vitro test methods and non-testing approaches such as predictive computer models, up to entire testing and assessment strategies composed of method suites, data sources and decision-aiding tools. Data generated with alternative approaches are ultimately used for decision-making on public health and the protection of the environment. It is therefore essential that the underlying methods and methodologies are thoroughly characterised, assessed and transparently documented through validation studies involving impartial actors. Importantly, validation serves as a filter to ensure that only test methods able to produce data that help to address legislative requirements (e.g. EU's REACH legislation) are accepted as official testing tools and, owing to the globalisation of markets, recognised at the international level (e.g. through inclusion in OECD test guidelines). Since validation creates a credible and transparent evidence base on test methods, it provides a quality stamp, supporting companies developing and marketing alternative methods and creating considerable business opportunities. Validation of alternative methods is conducted through scientific studies assessing two key hypotheses: the reliability and relevance of the test method for a given purpose. Relevance encapsulates the scientific basis of the test method, its capacity to predict adverse effects in the "target system" (i.e. human health or the environment) as well as its applicability for the intended purpose. In this chapter we focus on the validation of non-animal in vitro alternative testing methods and review the concepts, challenges, processes and tools fundamental to the validation of in vitro methods intended for hazard testing of chemicals. We explore major challenges and peculiarities of validation in this area. Based on the notion that validation per se is a scientific endeavour that needs to adhere to key scientific principles, namely objectivity and appropriate choice of methodology, we examine basic aspects of study design and management, and provide illustrations of statistical approaches to describe the predictive performance of validated test methods as well as their reliability.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rhinefrank, Kenneth E.; Lenee-Bluhm, Pukha; Prudell, Joseph H.

    The most prudent path to a full-scale design, build and deployment of a wave energy conversion (WEC) system involves establishment of validated numerical models using physical experiments in a methodical scaling program. This Project provides essential additional rounds of wave tank testing at 1:33 scale and ocean/bay testing at a 1:7 scale, necessary to validate numerical modeling that is essential to a utility-scale WEC design and associated certification.

  18. A Criterion-Related Validation Study of the Army Core Leader Competency Model

    DTIC Science & Technology

    2007-04-01

    2004). Transformational and transactional leadership: A meta-analytic test of their relative validity. Journal of Applied Psychology, 89, 755-768...performance criteria in an attempt to adjust ratings for this influence. Leader survey materials were developed and pilot tested at Ft. Drum and Ft... psychological constructs in the behavioral science realm. Numerous theories, popular literature, websites, assessments, and competency models are

  19. Stability of the thermodynamic equilibrium - A test of the validity of dynamic models as applied to gyroviscous perpendicular magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Faghihi, Mustafa; Scheffel, Jan; Spies, Guenther O.

    1988-05-01

    Stability of the thermodynamic equilibrium is put forward as a simple test of the validity of dynamic equations, and is applied to perpendicular gyroviscous magnetohydrodynamics (i.e., perpendicular magnetohydrodynamics with gyroviscosity added). This model turns out to be invalid because it predicts exponentially growing Alfven waves in a spatially homogeneous static equilibrium with scalar pressure.

  20. Testing Math or Testing Language? The Construct Validity of the KeyMath-Revised for Children With Intellectual Disability and Language Difficulties.

    PubMed

    Rhodes, Katherine T; Branum-Martin, Lee; Morris, Robin D; Romski, MaryAnn; Sevcik, Rose A

    2015-11-01

    Although it is often assumed that mathematics ability alone predicts mathematics test performance, linguistic demands may also predict achievement. This study examined the role of language in mathematics assessment performance for children with less severe levels of intellectual disability (ID), using the KeyMath-Revised Inventory (KM-R) with a sample of 264 children in grades 2-5. Using confirmatory factor analysis, the hypothesis that the KM-R would demonstrate discriminant validity with measures of language abilities in a two-factor model was compared to two plausible alternative models. Results indicated that KM-R did not have discriminant validity with measures of children's language abilities and was a multidimensional test of both mathematics and language abilities for this population of test users. Implications are considered for test development, interpretation, and intervention.

  1. A Finite Element Model of a Midsize Male for Simulating Pedestrian Accidents.

    PubMed

    Untaroiu, Costin D; Pak, Wansoo; Meng, Yunzhu; Schap, Jeremy; Koya, Bharath; Gayzik, Scott

    2018-01-01

    Pedestrians represent one of the most vulnerable road users and comprise nearly 22% of road crash-related fatalities in the world. Therefore, protection of pedestrians in car-to-pedestrian collisions (CPC) has recently generated increased attention, with regulations involving three subsystem tests. The development of a finite element (FE) pedestrian model could provide a complementary component that characterizes the whole-body response of vehicle-pedestrian interactions and assesses the pedestrian injuries. The main goal of this study was to develop and to validate a simplified full body FE model corresponding to a 50th percentile male pedestrian in standing posture (M50-PS). The FE model mesh and defined material properties are based on a 50th percentile male occupant model. The lower limb-pelvis and lumbar spine regions of the human model were validated against the postmortem human surrogate (PMHS) test data recorded in four-point lateral knee bending tests, pelvic/abdomen/shoulder/thoracic impact tests, and lumbar spine bending tests. Then, a pedestrian-to-vehicle impact simulation was performed using the whole pedestrian model, and the results were compared to corresponding PMHS tests. Overall, the simulation results showed that the lower leg response is mostly within the boundaries of PMHS corridors. In addition, the model shows the capability to predict the most common lower extremity injuries observed in pedestrian accidents. Generally, the validated pedestrian model may be used by safety researchers in the design of front ends of new vehicles in order to increase pedestrian protection.
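
    A common acceptance check in this kind of validation is whether the simulated time history stays within the PMHS response corridor. A minimal sketch (the signals and bounds are hypothetical, not data from this study):

        def fraction_within_corridor(sim, lower, upper):
            """Fraction of time steps where the simulated response lies
            inside the PMHS corridor bounds (all on the same time grid)."""
            inside = sum(1 for s, lo, hi in zip(sim, lower, upper)
                         if lo <= s <= hi)
            return inside / len(sim)

        # Hypothetical impact-force histories (kN) on a shared time grid.
        sim = [0.0, 1.2, 2.8, 3.9, 3.1, 1.5]
        lower = [0.0, 0.8, 2.0, 3.0, 2.5, 1.0]
        upper = [0.5, 1.8, 3.5, 4.5, 4.0, 2.0]
        frac = fraction_within_corridor(sim, lower, upper)
        print(f"{frac:.0%} of samples inside corridor")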

  2. Development and Validation of Scores from an Instrument Measuring Student Test-Taking Motivation

    ERIC Educational Resources Information Center

    Eklof, Hanna

    2006-01-01

    Using the expectancy-value model of achievement motivation as a basis, this study's purpose is to develop, apply, and validate scores from a self-report instrument measuring student test-taking motivation. Sampled evidence of construct validity for the present sample indicates that a number of the items in the instrument could be used as an…

  3. Comparing the Construct and Criterion-Related Validity of Ability-Based and Mixed-Model Measures of Emotional Intelligence

    ERIC Educational Resources Information Center

    Livingstone, Holly A.; Day, Arla L.

    2005-01-01

    Despite the popularity of the concept of emotional intelligence(EI), there is much controversy around its definition, measurement, and validity. Therefore, the authors examined the construct and criterion-related validity of an ability-based EI measure (Mayer Salovey Caruso Emotional Intelligence Test [MSCEIT]) and a mixed-model EI measure…

  4. Crash Certification by Analysis - Are We There Yet?

    NASA Technical Reports Server (NTRS)

    Jackson, Karen E.; Fasanella, Edwin L.; Lyle, Karen H.

    2006-01-01

    This paper addresses the issue of crash certification by analysis. This broad topic encompasses many ancillary issues including model validation procedures, uncertainty in test data and analysis models, probabilistic techniques for test-analysis correlation, verification of the mathematical formulation, and establishment of appropriate qualification requirements. This paper will focus on certification requirements for crashworthiness of military helicopters; capabilities of the current analysis codes used for crash modeling and simulation, including some examples of simulations from the literature to illustrate the current approach to model validation; and future directions needed to achieve "crash certification by analysis."

  5. Recovery Act. Development and Validation of an Advanced Stimulation Prediction Model for Enhanced Geothermal System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gutierrez, Marte

    The research project aims to develop and validate an advanced computer model that can be used in the planning and design of stimulation techniques to create engineered reservoirs for Enhanced Geothermal Systems. The specific objectives of the proposal are to: 1) Develop a true three-dimensional hydro-thermal fracturing simulator that is particularly suited for EGS reservoir creation. 2) Perform laboratory scale model tests of hydraulic fracturing and proppant flow/transport using a polyaxial loading device, and use the laboratory results to test and validate the 3D simulator. 3) Perform discrete element/particulate modeling of proppant transport in hydraulic fractures, and use the results to improve understanding of proppant flow and transport. 4) Test and validate the 3D hydro-thermal fracturing simulator against case histories of EGS energy production. 5) Develop a plan to commercialize the 3D fracturing and proppant flow/transport simulator. The project is expected to yield several specific results and benefits. Major technical products from the proposal include: 1) A true-3D hydro-thermal fracturing computer code that is particularly suited to EGS, 2) Documented results of scale model tests on hydro-thermal fracturing and fracture propping in an analogue crystalline rock, 3) Documented procedures and results of discrete element/particulate modeling of flow and transport of proppants for EGS applications, and 4) Database of monitoring data, with focus on Acoustic Emissions (AE) from lab scale modeling and field case histories of EGS reservoir creation.

  6. Recovery Act. Development and Validation of an Advanced Stimulation Prediction Model for Enhanced Geothermal Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gutierrez, Marte

    2013-12-31

    This research project aims to develop and validate an advanced computer model that can be used in the planning and design of stimulation techniques to create engineered reservoirs for Enhanced Geothermal Systems. The specific objectives of the proposal are to: develop a true three-dimensional hydro-thermal fracturing simulator that is particularly suited for EGS reservoir creation; perform laboratory scale model tests of hydraulic fracturing and proppant flow/transport using a polyaxial loading device, and use the laboratory results to test and validate the 3D simulator; perform discrete element/particulate modeling of proppant transport in hydraulic fractures, and use the results to improve understanding of proppant flow and transport; test and validate the 3D hydro-thermal fracturing simulator against case histories of EGS energy production; and develop a plan to commercialize the 3D fracturing and proppant flow/transport simulator. The project is expected to yield several specific results and benefits. Major technical products from the proposal include: a true-3D hydro-thermal fracturing computer code that is particularly suited to EGS; documented results of scale model tests on hydro-thermal fracturing and fracture propping in an analogue crystalline rock; documented procedures and results of discrete element/particulate modeling of flow and transport of proppants for EGS applications; and a database of monitoring data, with focus on Acoustic Emissions (AE) from lab scale modeling and field case histories of EGS reservoir creation.

  7. Comparison of modeled backscatter with SAR data at P-band

    NASA Technical Reports Server (NTRS)

    Wang, Yong; Davis, Frank W.; Melack, John M.

    1992-01-01

    In recent years several analytical models were developed to predict microwave scattering by trees and forest canopies. These models contribute to the understanding of radar backscatter over forested regions to the extent that they capture the basic interactions between microwave radiation and tree canopies, understories, and ground layers as functions of incidence angle, wavelength, and polarization. The Santa Barbara microwave backscatter model for woodland (i.e., with discontinuous tree canopies) combines a single-tree backscatter model and a gap probability model. Comparison of model predictions with synthetic aperture radar (SAR) data at L-band (lambda = 0.235 m) is promising, but much work is still needed to test the validity of model predictions at other wavelengths. Here, the validity of the model predictions at P-band (lambda = 0.68 m) was tested for woodland stands at our Mt. Shasta test site.

  8. Conducting field studies for testing pesticide leaching models

    USGS Publications Warehouse

    Smith, Charles N.; Parrish, Rudolph S.; Brown, David S.

    1990-01-01

    A variety of predictive models are being applied to evaluate the transport and transformation of pesticides in the environment. These include well known models such as the Pesticide Root Zone Model (PRZM), the Risk of Unsaturated-Saturated Transport and Transformation Interactions for Chemical Concentrations Model (RUSTIC) and the Groundwater Loading Effects of Agricultural Management Systems Model (GLEAMS). The potentially large impacts of using these models as tools for developing pesticide management strategies and regulatory decisions necessitates development of sound model validation protocols. This paper offers guidance on many of the theoretical and practical problems encountered in the design and implementation of field-scale model validation studies. Recommendations are provided for site selection and characterization, test compound selection, data needs, measurement techniques, statistical design considerations and sampling techniques. A strategy is provided for quantitatively testing models using field measurements.
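
    For the quantitative model testing recommended here, simple agreement statistics between predicted and observed concentrations are a natural starting point. A minimal sketch (the data are hypothetical) computing the root-mean-square error and the Nash-Sutcliffe model efficiency:

        import math

        def rmse(obs, pred):
            """Root-mean-square error between observations and predictions."""
            return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred))
                             / len(obs))

        def nash_sutcliffe(obs, pred):
            """Model efficiency: 1 is perfect; <= 0 means the model is no
            better than predicting the observed mean."""
            mean_obs = sum(obs) / len(obs)
            ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
            ss_tot = sum((o - mean_obs) ** 2 for o in obs)
            return 1.0 - ss_res / ss_tot

        # Hypothetical pesticide concentrations (ug/L) at sampling depths.
        obs = [12.0, 8.5, 5.1, 2.9, 1.4]
        pred = [11.2, 9.0, 4.6, 3.3, 1.1]
        print(f"RMSE = {rmse(obs, pred):.2f} ug/L, "
              f"NSE = {nash_sutcliffe(obs, pred):.2f}")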

  9. Does Rational Selection of Training and Test Sets Improve the Outcome of QSAR Modeling?

    EPA Science Inventory

    Prior to using a quantitative structure activity relationship (QSAR) model for external predictions, its predictive power should be established and validated. In the absence of a true external dataset, the best way to validate the predictive ability of a model is to perform its s...

  10. Testing Crites' Model of Career Maturity: A Hierarchical Strategy.

    ERIC Educational Resources Information Center

    Wallbrown, Fred H.; And Others

    1986-01-01

    Investigated the construct validity of Crites' model of career maturity and the Career Maturity Inventory (CMI). Results from a nationwide sample of adolescents, using hierarchical factor analytic methodology, indicated confirmatory support for the multidimensionality of Crites' model of career maturity, and the construct validity of the CMI as a…

  11. Simulator validation results and proposed reporting format from flight testing a software model of a complex, high-performance airplane.

    DOT National Transportation Integrated Search

    2008-01-01

    Computer simulations are often used in aviation studies. These simulation tools may require complex, high-fidelity aircraft models. Since many of the flight models used are third-party developed products, independent validation is desired prior to im...

  12. Creating a Test Validated Structural Dynamic Finite Element Model of the Multi-Utility Technology Test Bed Aircraft

    NASA Technical Reports Server (NTRS)

    Pak, Chan-Gi; Truong, Samson S.

    2014-01-01

    Small modeling errors in the finite element model will eventually induce errors in the structural flexibility and mass, thus propagating into unpredictable errors in the unsteady aerodynamics and the control law design. One of the primary objectives of the Multi-Utility Technology Test Bed (X-56A) aircraft is the flight demonstration of active flutter suppression; therefore, this study identifies the primary and secondary modes for structural model tuning based on the flutter analysis of the X-56A. A ground-vibration-test-validated structural dynamic finite element model of the X-56A is created in this study. The structural dynamic finite element model of the X-56A is improved using a model tuning tool. In this study, two different weight configurations of the X-56A have been improved in a single optimization run.

  13. The presentation and preliminary validation of KIWEST using a large sample of Norwegian university staff.

    PubMed

    Innstrand, Siw Tone; Christensen, Marit; Undebakke, Kirsti Godal; Svarva, Kyrre

    2015-12-01

    The aim of the present paper is to present and validate a Knowledge-Intensive Work Environment Survey Target (KIWEST), a questionnaire developed for assessing the psychosocial factors among people in knowledge-intensive work environments. The construct validity and reliability of the measurement model were tested on a representative sample of 3066 academic and administrative staff working at one of the largest universities in Norway. Confirmatory factor analysis provided initial support for the convergent validity and internal consistency of the 30-construct KIWEST measurement model. However, discriminant validity tests indicated that some of the constructs might overlap to some degree. Overall, the KIWEST measure showed promising psychometric properties as a psychosocial work environment measure. © 2015 the Nordic Societies of Public Health.

  14. Lightweight ZERODUR: Validation of Mirror Performance and Mirror Modeling Predictions

    NASA Technical Reports Server (NTRS)

    Hull, Tony; Stahl, H. Philip; Westerhoff, Thomas; Valente, Martin; Brooks, Thomas; Eng, Ron

    2017-01-01

    Upcoming spaceborne missions, both moderate and large in scale, require extreme dimensional stability while relying both upon established lightweight mirror materials, and also upon accurate modeling methods to predict performance under varying boundary conditions. We describe tests, recently performed at NASA's XRCF chambers and laboratories in Huntsville, Alabama, during which a 1.2 m diameter, f/1.29, 88% lightweighted SCHOTT ZERODUR(TradeMark) mirror was tested for thermal stability under static loads in steps down to 230K. Test results are compared to model predictions, based upon recently published data on ZERODUR(TradeMark). In addition to monitoring the mirror surface for thermal perturbations in XRCF Thermal Vacuum tests, static load gravity deformations have been measured and compared to model predictions. Also, the Modal Response (dynamic disturbance) was measured and compared to model. We discuss the fabrication approach and optomechanical design of the ZERODUR(TradeMark) mirror substrate by SCHOTT, its optical preparation for test by Arizona Optical Systems (AOS), and summarize the outcome of NASA's XRCF tests and model validations.

  15. Lightweight ZERODUR®: Validation of mirror performance and mirror modeling predictions

    NASA Astrophysics Data System (ADS)

    Hull, Anthony B.; Stahl, H. Philip; Westerhoff, Thomas; Valente, Martin; Brooks, Thomas; Eng, Ron

    2017-01-01

    Upcoming spaceborne missions, both moderate and large in scale, require extreme dimensional stability while relying both upon established lightweight mirror materials, and also upon accurate modeling methods to predict performance under varying boundary conditions. We describe tests, recently performed at NASA’s XRCF chambers and laboratories in Huntsville Alabama, during which a 1.2m diameter, f/1.29 88% lightweighted SCHOTT lightweighted ZERODUR® mirror was tested for thermal stability under static loads in steps down to 230K. Test results are compared to model predictions, based upon recently published data on ZERODUR®. In addition to monitoring the mirror surface for thermal perturbations in XRCF Thermal Vacuum tests, static load gravity deformations have been measured and compared to model predictions. Also the Modal Response (dynamic disturbance) was measured and compared to model. We will discuss the fabrication approach and optomechanical design of the ZERODUR® mirror substrate by SCHOTT, its optical preparation for test by Arizona Optical Systems (AOS), and summarize the outcome of NASA’s XRCF tests and model validations.

  16. Sorbent, Sublimation, and Icing Modeling Methods: Experimental Validation and Application to an Integrated MTSA Subassembly Thermal Model

    NASA Technical Reports Server (NTRS)

    Bower, Chad; Padilla, Sebastian; Iacomini, Christie; Paul, Heather L.

    2010-01-01

    This paper details the validation of modeling methods for the three core components of a Metabolic heat regenerated Temperature Swing Adsorption (MTSA) subassembly, developed for use in a Portable Life Support System (PLSS). The first core component in the subassembly is a sorbent bed, used to capture and reject metabolically produced carbon dioxide (CO2). The sorbent bed performance can be augmented with a temperature swing driven by a liquid CO2 (LCO2) sublimation heat exchanger (SHX) for cooling the sorbent bed, and a condensing, icing heat exchanger (CIHX) for warming the sorbent bed. As part of the overall MTSA effort, scaled design validation test articles for each of these three components have been independently tested in laboratory conditions. Previously described modeling methodologies developed for implementation in Thermal Desktop and SINDA/FLUINT are reviewed and updated, their application in test article models outlined, and the results of those model correlations relayed. Assessment of the applicability of each modeling methodology to the challenge of simulating the response of the test articles and their extensibility to a full-scale integrated subassembly model is given. The independently verified and validated modeling methods are applied to the development of an MTSA subassembly prototype model, and predictions of the subassembly performance are given. These models and modeling methodologies capture simulation of several challenging and novel physical phenomena in the Thermal Desktop and SINDA/FLUINT software suite. Novel methodologies include CO2 adsorption front tracking and associated thermal response in the sorbent bed, heat transfer associated with sublimation of entrained solid CO2 in the SHX, and water mass transfer in the form of ice as low as 210 K in the CIHX.

  17. Use of the FDA nozzle model to illustrate validation techniques in computational fluid dynamics (CFD) simulations

    PubMed Central

    Hariharan, Prasanna; D’Souza, Gavin A.; Horner, Marc; Morrison, Tina M.; Malinauskas, Richard A.; Myers, Matthew R.

    2017-01-01

    A “credible” computational fluid dynamics (CFD) model has the potential to provide a meaningful evaluation of safety in medical devices. One major challenge in establishing “model credibility” is to determine the required degree of similarity between the model and experimental results for the model to be considered sufficiently validated. This study proposes a “threshold-based” validation approach that provides a well-defined acceptance criterion, which is a function of how close the simulation and experimental results are to the safety threshold, for establishing the model validity. The validation criterion developed following the threshold approach is not only a function of Comparison Error, E (which is the difference between experiments and simulations), but also takes into account the risk to patient safety because of E. The method is applicable for scenarios in which a safety threshold can be clearly defined (e.g., the viscous shear-stress threshold for hemolysis in blood contacting devices). The applicability of the new validation approach was tested on the FDA nozzle geometry. The context of use (COU) was to evaluate if the instantaneous viscous shear stress in the nozzle geometry at Reynolds numbers (Re) of 3500 and 6500 was below the commonly accepted threshold for hemolysis. The CFD results (“S”) of velocity and viscous shear stress were compared with inter-laboratory experimental measurements (“D”). The uncertainties in the CFD and experimental results due to input parameter uncertainties were quantified following the ASME V&V 20 standard. The CFD models for both Re = 3500 and 6500 could not be sufficiently validated by performing a direct comparison between CFD and experimental results using the Student’s t-test. However, following the threshold-based approach, a Student’s t-test comparing |S-D| and |Threshold-S| showed that relative to the threshold, the CFD and experimental datasets for Re = 3500 were statistically similar and the model could be considered sufficiently validated for the COU. However, for Re = 6500, at certain locations where the shear stress is close to the hemolysis threshold, the CFD model could not be considered sufficiently validated for the COU. Our analysis showed that the model could be sufficiently validated either by reducing the uncertainties in experiments, simulations, and the threshold or by increasing the sample size for the experiments and simulations. The threshold approach can be applied to all types of computational models and provides an objective way of determining model credibility and evaluating medical devices. PMID:28594889

  18. Use of the FDA nozzle model to illustrate validation techniques in computational fluid dynamics (CFD) simulations.

    PubMed

    Hariharan, Prasanna; D'Souza, Gavin A; Horner, Marc; Morrison, Tina M; Malinauskas, Richard A; Myers, Matthew R

    2017-01-01

    A "credible" computational fluid dynamics (CFD) model has the potential to provide a meaningful evaluation of safety in medical devices. One major challenge in establishing "model credibility" is to determine the required degree of similarity between the model and experimental results for the model to be considered sufficiently validated. This study proposes a "threshold-based" validation approach that provides a well-defined acceptance criteria, which is a function of how close the simulation and experimental results are to the safety threshold, for establishing the model validity. The validation criteria developed following the threshold approach is not only a function of Comparison Error, E (which is the difference between experiments and simulations) but also takes in to account the risk to patient safety because of E. The method is applicable for scenarios in which a safety threshold can be clearly defined (e.g., the viscous shear-stress threshold for hemolysis in blood contacting devices). The applicability of the new validation approach was tested on the FDA nozzle geometry. The context of use (COU) was to evaluate if the instantaneous viscous shear stress in the nozzle geometry at Reynolds numbers (Re) of 3500 and 6500 was below the commonly accepted threshold for hemolysis. The CFD results ("S") of velocity and viscous shear stress were compared with inter-laboratory experimental measurements ("D"). The uncertainties in the CFD and experimental results due to input parameter uncertainties were quantified following the ASME V&V 20 standard. The CFD models for both Re = 3500 and 6500 could not be sufficiently validated by performing a direct comparison between CFD and experimental results using the Student's t-test. However, following the threshold-based approach, a Student's t-test comparing |S-D| and |Threshold-S| showed that relative to the threshold, the CFD and experimental datasets for Re = 3500 were statistically similar and the model could be considered sufficiently validated for the COU. However, for Re = 6500, at certain locations where the shear stress is close the hemolysis threshold, the CFD model could not be considered sufficiently validated for the COU. Our analysis showed that the model could be sufficiently validated either by reducing the uncertainties in experiments, simulations, and the threshold or by increasing the sample size for the experiments and simulations. The threshold approach can be applied to all types of computational models and provides an objective way of determining model credibility and for evaluating medical devices.

  19. Mind the Noise When Identifying Computational Models of Cognition from Brain Activity.

    PubMed

    Kolossa, Antonio; Kopp, Bruno

    2016-01-01

    The aim of this study was to analyze how measurement error affects the validity of modeling studies in computational neuroscience. A synthetic validity test was created using simulated P300 event-related potentials as an example. The model space comprised four computational models of single-trial P300 amplitude fluctuations which differed in terms of complexity and dependency. The single-trial fluctuation of simulated P300 amplitudes was computed on the basis of one of the models, at various levels of measurement error and at various numbers of data points. Bayesian model selection was performed based on exceedance probabilities. At very low numbers of data points, the least complex model generally outperformed the data-generating model. Invalid model identification also occurred at low levels of data quality and under low numbers of data points if the winning model's predictors were closely correlated with the predictors from the data-generating model. Given sufficient data quality and numbers of data points, the data-generating model could be correctly identified, even against models which were very similar to the data-generating model. Thus, a number of variables affect the validity of computational modeling studies, and data quality and numbers of data points are among the main factors relevant to the issue. Further, the nature of the model space (i.e., model complexity, model dependency) should not be neglected. This study provided quantitative results which show the importance of ensuring the validity of computational modeling via adequately prepared studies. The accomplishment of synthetic validity tests is recommended for future applications. Beyond that, we propose to render the demonstration of sufficient validity via adequate simulations mandatory for computational modeling studies.
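
    A minimal synthetic-validity sketch in the spirit of this study (the generating model, candidate models, and noise level are all hypothetical, and BIC stands in for the paper's Bayesian model selection): data are simulated from a known model at a chosen noise level, and one checks whether model comparison recovers the generating model.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 200                     # number of simulated trials
        x = rng.normal(size=n)      # hypothetical trial-wise predictor
        sigma = 2.0                 # measurement noise level

        # Data-generating model: amplitude varies linearly with the predictor.
        y = 1.5 * x + rng.normal(scale=sigma, size=n)

        def bic(y, design):
            """BIC of an ordinary least-squares fit with Gaussian residuals."""
            beta, *_ = np.linalg.lstsq(design, y, rcond=None)
            resid = y - design @ beta
            n_obs, k = design.shape
            return n_obs * np.log(resid @ resid / n_obs) + k * np.log(n_obs)

        ones = np.ones_like(x)
        bic_null = bic(y, np.column_stack([ones]))      # constant amplitude
        bic_true = bic(y, np.column_stack([ones, x]))   # generating structure
        print(f"BIC null: {bic_null:.1f}, BIC true: {bic_true:.1f} (lower wins)")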

  20. Temperature and heat flux datasets of a complex object in a fire plume for the validation of fire and thermal response codes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jernigan, Dann A.; Blanchat, Thomas K.

    To validate the SIERRA/FUEGO/SYRINX fire and SIERRA/CALORE codes, it is necessary to improve understanding and to develop temporally and spatially resolved, integral-scale validation data of the heat flux incident on a complex object, in addition to measuring the thermal response of that object within the fire plume. To meet this objective, a complex calorimeter with sufficient instrumentation to allow validation of the coupling between FUEGO/SYRINX/CALORE has been designed, fabricated, and tested in the Fire Laboratory for Accreditation of Models and Experiments (FLAME) facility. Validation experiments are specifically designed for direct comparison with the computational predictions. Making meaningful comparisons between the computational and experimental results requires careful characterization and control of the experimental features or parameters used as inputs into the computational model. Validation experiments must be designed to capture the essential physical phenomena, including all relevant initial and boundary conditions. This report presents the data validation steps and processes, the results of the penlight radiant heat experiments (for the purpose of validating the CALORE heat transfer modeling of the complex calorimeter), and the results of the fire tests in FLAME.

  1. Calibration and validation of toxicokinetic-toxicodynamic models for three neonicotinoids and some aquatic macroinvertebrates.

    PubMed

    Focks, Andreas; Belgers, Dick; Boerwinkel, Marie-Claire; Buijse, Laura; Roessink, Ivo; Van den Brink, Paul J

    2018-05-01

    Exposure patterns in ecotoxicological experiments often do not match the exposure profiles for which a risk assessment needs to be performed. This limitation can be overcome by using toxicokinetic-toxicodynamic (TKTD) models for the prediction of effects under time-variable exposure. For the use of TKTD models in the environmental risk assessment of chemicals, it is required to calibrate and validate the model for specific compound-species combinations. In this study, the survival of macroinvertebrates after exposure to neonicotinoid insecticides was modelled using TKTD models from the General Unified Threshold models of Survival (GUTS) framework. The models were calibrated on existing survival data from acute or chronic tests under a static exposure regime. Validation experiments were performed for two sets of species-compound combinations: one set focussed on the sensitivity of multiple species to a single compound, imidacloprid, and the other set on the effects of multiple compounds, i.e., the three neonicotinoids imidacloprid, thiacloprid and thiamethoxam, on the survival of a single species, the mayfly Cloeon dipterum. The calibrated models were used to predict survival over time, including uncertainty ranges, for the different time-variable exposure profiles used in the validation experiments. From the comparison between observed and predicted survival, it appeared that the accuracy of the model predictions was acceptable for four of five tested species in the multiple species data set. For compounds such as neonicotinoids, which are known to have the potential to show increased toxicity under prolonged exposure, the calibration and validation of TKTD models for survival should ideally be performed by considering calibration data from both acute and chronic tests.
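
    A minimal sketch of one GUTS variant, the reduced stochastic-death model (GUTS-RED-SD), under a time-variable exposure profile (parameter values are hypothetical, not calibrated values from this study): scaled internal damage follows first-order kinetics toward the external concentration, and the hazard rate switches on above a threshold.

        import math

        # Hypothetical GUTS-RED-SD parameters.
        KD = 0.5    # dominant rate constant (1/d)
        Z = 2.0     # threshold for effects (ug/L)
        B = 0.15    # killing rate (L/ug/d)
        HB = 0.01   # background hazard rate (1/d)

        def survival(exposure, dt=0.01, t_end=10.0):
            """Survival probability at t_end for an exposure profile
            given as a function t -> concentration (ug/L)."""
            d = 0.0       # scaled internal damage
            h_cum = 0.0   # cumulative hazard
            t = 0.0
            while t < t_end:
                c = exposure(t)
                d += KD * (c - d) * dt              # dD/dt = kd * (C - D)
                hazard = B * max(0.0, d - Z) + HB   # stochastic-death hazard
                h_cum += hazard * dt
                t += dt
            return math.exp(-h_cum)

        # Hypothetical pulsed exposure: 5 ug/L for 2 days, then clean water.
        def pulse(t):
            return 5.0 if t < 2.0 else 0.0

        print(f"predicted 10-d survival: {survival(pulse):.2f}")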

  2. Performance validation of the ANSER control laws for the F-18 HARV

    NASA Technical Reports Server (NTRS)

    Messina, Michael D.

    1995-01-01

    The ANSER control laws were implemented in Ada by NASA Dryden for flight test on the High Alpha Research Vehicle (HARV). The Ada implementation was tested in the hardware-in-the-loop (HIL) simulation, and results were compared to those obtained with the NASA Langley batch Fortran implementation of the control laws which are considered the 'truth model.' This report documents the performance validation test results between these implementations. This report contains the ANSER performance validation test plan, HIL versus batch time-history comparisons, simulation scripts used to generate checkcases, and detailed analysis of discrepancies discovered during testing.

  3. Performance validation of the ANSER Control Laws for the F-18 HARV

    NASA Technical Reports Server (NTRS)

    Messina, Michael D.

    1995-01-01

    The ANSER control laws were implemented in Ada by NASA Dryden for flight test on the High Alpha Research Vehicle (HARV). The Ada implementation was tested in the hardware-in-the-loop (HIL) simulation, and results were compared to those obtained with the NASA Langley batch Fortran implementation of the control laws which are considered the 'truth model'. This report documents the performance validation test results between these implementations. This report contains the ANSER performance validation test plan, HIL versus batch time-history comparisons, simulation scripts used to generate checkcases, and detailed analysis of discrepancies discovered during testing.

  4. Virtual experiments, physical validation: dental morphology at the intersection of experiment and theory

    PubMed Central

    Anderson, P. S. L.; Rayfield, E. J.

    2012-01-01

    Computational models such as finite-element analysis offer biologists a means of exploring the structural mechanics of biological systems that cannot be directly observed. Validated against experimental data, a model can be manipulated to perform virtual experiments, testing variables that are hard to control in physical experiments. The relationship between tooth form and the ability to break down prey is key to understanding the evolution of dentition. Recent experimental work has quantified how tooth shape promotes fracture in biological materials. We present a validated finite-element model derived from physical compression experiments. The model shows close agreement with strain patterns observed in photoelastic test materials and reaction forces measured during these experiments. We use the model to measure strain energy within the test material when different tooth shapes are used. Results show that notched blades deform materials for less strain energy cost than straight blades, giving insights into the energetic relationship between tooth form and prey materials. We identify a hypothetical ‘optimal’ blade angle that minimizes strain energy costs and test alternative prey materials via virtual experiments. Using experimental data and computational models offers an integrative approach to understand the mechanics of tooth morphology. PMID:22399789

  5. Development of a model for occipital fixation--validation of an analogue bone material.

    PubMed

    Mullett, H; O'Donnell, T; Felle, P; O'Rourke, K; FitzPatrick, D

    2002-01-01

    Several implant systems may be used to fuse the skull to the upper cervical spine (occipitocervical fusion). Current biomechanical evaluation is restricted by the limitations of human cadaveric specimens. This paper describes the design and validation of a synthetic testing model of the occipital bone. Data from thickness measurements and pull-out strength testing of a series of human cadaveric skulls were used in the design of a high-density rigid polyurethane foam model. The synthetic occipital model demonstrated repeatable and consistent morphological and biomechanical properties, providing a standardized environment for the evaluation of occipital implants.

  6. OC5 Project Phase II: Validation of Global Loads of the DeepCwind Floating Semisubmersible Wind Turbine

    DOE PAGES

    Robertson, Amy N.; Wendt, Fabian; Jonkman, Jason M.; ...

    2017-10-01

    This paper summarizes the findings from Phase II of the Offshore Code Comparison, Collaboration, Continued, with Correlation project. The project is run under the International Energy Agency Wind Research Task 30, and is focused on validating the tools used for modeling offshore wind systems through the comparison of simulated responses of select system designs to physical test data. Validation activities such as these lead to improvement of offshore wind modeling tools, which will enable the development of more innovative and cost-effective offshore wind designs. For Phase II of the project, numerical models of the DeepCwind floating semisubmersible wind system were validated using measurement data from a 1/50th-scale validation campaign performed at the Maritime Research Institute Netherlands offshore wave basin. Validation of the models was performed by comparing the calculated ultimate and fatigue loads for eight different wave-only and combined wind/wave test cases against the measured data, after calibration was performed using free-decay, wind-only, and wave-only tests. The results show a decent estimation of both the ultimate and fatigue loads for the simulated results, but with a fairly consistent underestimation in the tower and upwind mooring line loads that can be attributed to an underestimation of wave-excitation forces outside the linear wave-excitation region, and the presence of broadband frequency excitation in the experimental measurements from wind. Participant results showed varied agreement with the experimental measurements based on the modeling approach used. Modeling attributes that enabled better agreement included: the use of a dynamic mooring model; wave stretching, or some other hydrodynamic modeling approach that excites frequencies outside the linear wave region; nonlinear wave kinematics models; and unsteady aerodynamics models. Also, it was observed that a Morison-only hydrodynamic modeling approach could create excessive pitch excitation and resulting tower loads in some frequency bands.

  7. OC5 Project Phase II: Validation of Global Loads of the DeepCwind Floating Semisubmersible Wind Turbine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, Amy N.; Wendt, Fabian; Jonkman, Jason M.

    This paper summarizes the findings from Phase II of the Offshore Code Comparison, Collaboration, Continued, with Correlation project. The project is run under the International Energy Agency Wind Research Task 30, and is focused on validating the tools used for modeling offshore wind systems through the comparison of simulated responses of select system designs to physical test data. Validation activities such as these lead to improvement of offshore wind modeling tools, which will enable the development of more innovative and cost-effective offshore wind designs. For Phase II of the project, numerical models of the DeepCwind floating semisubmersible wind system were validated using measurement data from a 1/50th-scale validation campaign performed at the Maritime Research Institute Netherlands offshore wave basin. Validation of the models was performed by comparing the calculated ultimate and fatigue loads for eight different wave-only and combined wind/wave test cases against the measured data, after calibration was performed using free-decay, wind-only, and wave-only tests. The results show a decent estimation of both the ultimate and fatigue loads for the simulated results, but with a fairly consistent underestimation in the tower and upwind mooring line loads that can be attributed to an underestimation of wave-excitation forces outside the linear wave-excitation region, and the presence of broadband frequency excitation in the experimental measurements from wind. Participant results showed varied agreement with the experimental measurements based on the modeling approach used. Modeling attributes that enabled better agreement included: the use of a dynamic mooring model; wave stretching, or some other hydrodynamic modeling approach that excites frequencies outside the linear wave region; nonlinear wave kinematics models; and unsteady aerodynamics models. Also, it was observed that a Morison-only hydrodynamic modeling approach could create excessive pitch excitation and resulting tower loads in some frequency bands.

  8. 6DOF Testing of the SLS Inertial Navigation Unit

    NASA Technical Reports Server (NTRS)

    Geohagan, Kevin W.; Bernard, William P.; Oliver, T. Emerson; Strickland, Dennis J.; Leggett, Jared O.

    2018-01-01

    The Navigation System on the NASA Space Launch System (SLS) Block 1 vehicle performs initial alignment of the Inertial Navigation System (INS) navigation frame through gyrocompass alignment (GCA). In lieu of direct testing of GCA accuracy in support of requirement verification, the SLS Navigation Team proposed and conducted an engineering test to, among other things, validate the GCA performance and overall behavior of the SLS INS model through comparison with test data. This paper will detail dynamic hardware testing of the SLS INS, conducted by the SLS Navigation Team at Marshall Space Flight Center's 6DOF Table Facility, in support of GCA performance characterization and INS model validation. A 6-DOF motion platform was used to produce 6DOF pad twist and sway dynamics while a simulated SLS flight computer communicated with the INS. Tests conducted include an evaluation of GCA algorithm robustness to increasingly dynamic pad environments, an examination of GCA algorithm stability and accuracy over long durations, and a long-duration static test to gather enough data for Allan Variance analysis. Test setup, execution, and data analysis will be discussed, including analysis performed in support of SLS INS model validation.
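
    The long-duration static test mentioned here supports Allan variance analysis, the standard technique for separating inertial-sensor noise terms from long static records. A minimal non-overlapping Allan deviation computation, on synthetic gyro data with an assumed 100 Hz sample rate, might look like this:

```python
import numpy as np

def allan_deviation(rate, fs, taus):
    """Non-overlapping Allan deviation of a rate signal sampled at fs Hz."""
    devs = []
    for tau in taus:
        m = int(tau * fs)          # samples per cluster
        n = len(rate) // m         # number of clusters
        if n < 2:
            devs.append(np.nan)
            continue
        means = rate[: n * m].reshape(n, m).mean(axis=1)  # cluster averages
        avar = 0.5 * np.mean(np.diff(means) ** 2)         # Allan variance
        devs.append(np.sqrt(avar))
    return np.array(devs)

fs = 100.0                                     # Hz, assumed sample rate
gyro = 0.01 * np.random.randn(int(3600 * fs))  # synthetic 1 h static gyro data
taus = np.logspace(-1, 2, 20)                  # averaging times, 0.1 s to 100 s
print(allan_deviation(gyro, fs, taus))
```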

  9. Scopolamine provocation-based pharmacological MRI model for testing procognitive agents.

    PubMed

    Hegedűs, Nikolett; Laszy, Judit; Gyertyán, István; Kocsis, Pál; Gajári, Dávid; Dávid, Szabolcs; Deli, Levente; Pozsgay, Zsófia; Tihanyi, Károly

    2015-04-01

    There is a huge unmet need to understand and treat pathological cognitive impairment. The development of disease-modifying cognitive enhancers is hindered by the lack of a known pathomechanism and of suitable animal models. Most animal models used to study cognition and pathology fail to meet the criteria of predictive, face, or construct validity, and their outcome measures differ greatly from those of human trials. Fortunately, some pharmacological agents such as scopolamine evoke similar effects on cognition and cerebral circulation in rodents and humans, and functional MRI enables direct comparison of cognitive agents across species. In this paper we report the validation of a scopolamine-based rodent pharmacological MRI provocation model. The effects of putative procognitive agents (donepezil, vinpocetine, piracetam, and the alpha-7 selective cholinergic compounds EVP-6124 and PNU-120596) were compared on blood-oxygen-level-dependent responses and also linked to rodent cognitive models. All of these drugs had a significant effect on the scopolamine-induced blood-oxygen-level-dependent change except for piracetam; in the water labyrinth test, only PNU-120596 did not show a significant effect. This provocation model is suitable for testing procognitive compounds. Such functional MR imaging experiments can be paralleled with human studies, which may help reduce the number of failed cognitive clinical trials. © The Author(s) 2015.

  10. Electromagnetic Compatibility Testing Studies

    NASA Technical Reports Server (NTRS)

    Trost, Thomas F.; Mitra, Atindra K.

    1996-01-01

    This report discusses results on analytical models and on the measurement and simulation of statistical properties from a study of microwave reverberation (mode-stirred) chambers performed at Texas Tech University. Two analytical models of power transfer vs. frequency in a chamber, one for antenna-to-antenna transfer and the other for antenna-to-D-dot-sensor transfer, were experimentally validated in our chamber, and two examples are presented of the measurement and calculation of chamber Q, one for each model. Measurements of EM power density validate a theoretical probability distribution on and away from the chamber walls, and also yield a distribution with larger standard deviation at frequencies below the range of validity of the theory. Measurements of EM power density at pairs of points validate a theoretical spatial correlation function on the chamber walls, and likewise yield a correlation function with larger correlation length, R(sub corr), at frequencies below the range of validity of the theory. A numerical simulation employing a rectangular cavity with a moving wall shows agreement with the measurements. We find that the lowest frequency at which the theoretical spatial correlation function is valid in our chamber is considerably higher than the lowest frequency recommended by current guidelines for utilizing reverberation chambers in EMC testing. Two suggestions are made for future studies related to EMC testing.
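
    For reference, ideal-chamber theory (Hill's plane-wave model) gives the spatial correlation of the field at two points separated by a distance r as approximately sin(kr)/(kr), with k the free-space wavenumber. The sketch below evaluates this theoretical function and reads off an illustrative correlation length; the 1/e cutoff is an arbitrary choice for illustration, not the criterion used in the report.

```python
import numpy as np

def spatial_correlation(r, f_hz, c=3e8):
    """Ideal reverberation-chamber spatial correlation, rho(r) = sin(kr)/(kr)."""
    k = 2 * np.pi * f_hz / c
    kr = k * np.asarray(r)
    return np.sinc(kr / np.pi)   # numpy's sinc(x) = sin(pi x)/(pi x)

r = np.linspace(1e-4, 0.5, 1000)          # separation in metres
rho = spatial_correlation(r, f_hz=1e9)    # evaluate at 1 GHz
r_corr = r[np.argmax(rho < 1 / np.e)]     # first drop below 1/e
print(f"illustrative correlation length at 1 GHz: {100 * r_corr:.1f} cm")
```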

  11. Turbine Engine Mathematical Model Validation

    DTIC Science & Technology

    1976-12-01

    AEDC-TR-76-90: Turbine Engine Mathematical Model Validation. Engine Test Facility, Arnold Engineering Development Center, Air Force... Keywords: YJ101-GE-100 engine; turbine engines; mathematical models; computations. This report presents and discusses the results of an investigation to develop a rationale and technique for the validation of turbine engine steady-state mathematical models.

  12. Least Squares Distance Method of Cognitive Validation and Analysis for Binary Items Using Their Item Response Theory Parameters

    ERIC Educational Resources Information Center

    Dimitrov, Dimiter M.

    2007-01-01

    The validation of cognitive attributes required for correct answers on binary test items or tasks has been addressed in previous research through the integration of cognitive psychology and psychometric models using parametric or nonparametric item response theory, latent class modeling, and Bayesian modeling. All previous models, each with their…

  13. Choice Inconsistencies among the Elderly: Evidence from Plan Choice in the Medicare Part D Program: Comment.

    PubMed

    Ketcham, Jonathan D; Kuminoff, Nicolai V; Powers, Christopher A

    2016-12-01

    Consumers' enrollment decisions in Medicare Part D can be explained by Abaluck and Gruber’s (2011) model of utility maximization with psychological biases or by a neoclassical version of their model that precludes such biases. We evaluate these competing hypotheses by applying nonparametric tests of utility maximization and model validation tests to administrative data. We find that 79 percent of enrollment decisions from 2006 to 2010 satisfied basic axioms of consumer theory under the assumption of full information. The validation tests provide evidence against widespread psychological biases. In particular, we find that precluding psychological biases improves the structural model's out-of-sample predictions for consumer behavior.
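
    The nonparametric tests of utility maximization referenced here are in the revealed-preference tradition; a common formalization is the Generalized Axiom of Revealed Preference (GARP). Below is a minimal sketch of a GARP check on toy price/bundle data, using a Warshall-style transitive closure; it is not the authors' actual test code.

```python
import numpy as np

def satisfies_garp(prices, bundles):
    """Check the Generalized Axiom of Revealed Preference for T observed choices.

    prices, bundles: (T, G) arrays. Bundle x_t is directly revealed preferred
    to x_s when p_t.x_t >= p_t.x_s. GARP fails if the transitive closure of
    this relation ever contradicts a strict direct revealed preference.
    """
    cost = prices @ bundles.T                    # cost[t, s] = p_t . x_s
    R = cost.diagonal()[:, None] >= cost         # direct revealed preference
    for k in range(len(bundles)):                # Warshall transitive closure
        R = R | (R[:, [k]] & R[[k], :])
    strict = cost.diagonal()[:, None] > cost     # strict direct preference
    # Violation: x_t R x_s while x_s is strictly revealed preferred to x_t
    return not np.any(R & strict.T)

prices = np.array([[1.0, 2.0], [2.0, 1.0]])      # toy data, two goods
bundles = np.array([[3.0, 1.0], [1.0, 3.0]])
print(satisfies_garp(prices, bundles))           # True: consistent choices
```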

  14. Design and validation of a comprehensive fecal incontinence questionnaire.

    PubMed

    Macmillan, Alexandra K; Merrie, Arend E H; Marshall, Roger J; Parry, Bryan R

    2008-10-01

    Fecal incontinence can have a profound effect on quality of life. Its prevalence remains uncertain because of stigma, lack of consistent definition, and dearth of validated measures. This study was designed to develop a valid clinical and epidemiologic questionnaire, building on current literature and expertise. Patients and experts undertook face validity testing. Construct validity, criterion validity, and test-retest reliability was undertaken. Construct validity comprised factor analysis and internal consistency of the quality of life scale. The validity of known groups was tested against 77 control subjects by using regression models. Questionnaire results were compared with a stool diary for criterion validity. Test-retest reliability was calculated from repeated questionnaire completion. The questionnaire achieved good face validity. It was completed by 104 patients. The quality of life scale had four underlying traits (factor analysis) and high internal consistency (overall Cronbach alpha = 0.97). Patients and control subjects answered the questionnaire significantly differently (P < 0.01) in known-groups validity testing. Criterion validity assessment found mean differences close to zero. Median reliability for the whole questionnaire was 0.79 (range, 0.35-1). This questionnaire compares favorably with other available instruments, although the interpretation of stool consistency requires further research. Its sensitivity to treatment still needs to be investigated.
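
    The internal consistency reported for the quality-of-life scale is Cronbach's alpha, which is simple enough to sketch; the item scores below are synthetic, not the questionnaire data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of scale totals
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 1))                 # shared trait
scores = latent + 0.5 * rng.normal(size=(100, 8))  # 8 correlated items
print(f"alpha = {cronbach_alpha(scores):.2f}")
```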

  15. AMOVA ["Accumulative Manifold Validation Analysis"]: An Advanced Statistical Methodology Designed to Measure and Test the Validity, Reliability, and Overall Efficacy of Inquiry-Based Psychometric Instruments

    ERIC Educational Resources Information Center

    Osler, James Edward, II

    2015-01-01

    This monograph provides an epistemological rational for the Accumulative Manifold Validation Analysis [also referred by the acronym "AMOVA"] statistical methodology designed to test psychometric instruments. This form of inquiry is a form of mathematical optimization in the discipline of linear stochastic modelling. AMOVA is an in-depth…

  16. Ares I Scale Model Acoustic Test Liftoff Acoustic Results and Comparisons

    NASA Technical Reports Server (NTRS)

    Counter, Doug; Houston, Janice

    2011-01-01

    Conclusions: Ares I-X flight data validated the ASMAT liftoff acoustic (LOA) results, and Ares I liftoff acoustic environments were verified with scale model test results. The results showed that data book environments were under-conservative for the frustum (Zone 5). Recommendations: data book environments can be updated with scale model test and flight data; subscale acoustic model testing is useful for future vehicle environment assessments.

  17. An improved snow scheme for the ECMWF land surface model: Description and offline validation

    Treesearch

    Emanuel Dutra; Gianpaolo Balsamo; Pedro Viterbo; Pedro M. A. Miranda; Anton Beljaars; Christoph Schar; Kelly Elder

    2010-01-01

    A new snow scheme for the European Centre for Medium-Range Weather Forecasts (ECMWF) land surface model has been tested and validated. The scheme includes a new parameterization of snow density, incorporating a liquid water reservoir, and revised formulations for the subgrid snow cover fraction and snow albedo. Offline validation (covering a wide range of spatial and...

  18. Differential Item Functioning Analysis of High-Stakes Test in Terms of Gender: A Rasch Model Approach

    ERIC Educational Resources Information Center

    Alavi, Seyed Mohammad; Bordbar, Soodeh

    2017-01-01

    Differential Item Functioning (DIF) analysis is a key element in evaluating educational test fairness and validity. One of the frequently cited sources of construct-irrelevant variance is gender which has an important role in the university entrance exam; therefore, it causes bias and consequently undermines test validity. The present study aims…

  19. Initial Development and Validation of the BullyHARM: The Bullying, Harassment, and Aggression Receipt Measure.

    PubMed

    Hall, William J

    2016-11-01

    This article describes the development and preliminary validation of the Bullying, Harassment, and Aggression Receipt Measure (BullyHARM). The development of the BullyHARM involved a number of steps and methods, including a literature review, expert review, cognitive testing, readability testing, data collection from a large sample, reliability testing, and confirmatory factor analysis. A sample of 275 middle school students was used to examine the psychometric properties and factor structure of the BullyHARM, which consists of 22 items and 6 subscales: physical bullying, verbal bullying, social/relational bullying, cyber-bullying, property bullying, and sexual bullying. First-order and second-order factor models were evaluated. Results demonstrate that the first-order factor model had superior fit. Results of reliability testing indicate that the BullyHARM scale and subscales have very good internal consistency reliability. Findings indicate that the BullyHARM has good properties regarding content validation and respondent-related validation and is a promising instrument for measuring bullying victimization in school.

  20. Initial Development and Validation of the BullyHARM: The Bullying, Harassment, and Aggression Receipt Measure

    PubMed Central

    Hall, William J.

    2017-01-01

    This article describes the development and preliminary validation of the Bullying, Harassment, and Aggression Receipt Measure (BullyHARM). The development of the BullyHARM involved a number of steps and methods, including a literature review, expert review, cognitive testing, readability testing, data collection from a large sample, reliability testing, and confirmatory factor analysis. A sample of 275 middle school students was used to examine the psychometric properties and factor structure of the BullyHARM, which consists of 22 items and 6 subscales: physical bullying, verbal bullying, social/relational bullying, cyber-bullying, property bullying, and sexual bullying. First-order and second-order factor models were evaluated. Results demonstrate that the first-order factor model had superior fit. Results of reliability testing indicate that the BullyHARM scale and subscales have very good internal consistency reliability. Findings indicate that the BullyHARM has good properties regarding content validation and respondent-related validation and is a promising instrument for measuring bullying victimization in school. PMID:28194041

  1. Summary of EASM Turbulence Models in CFL3D With Validation Test Cases

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Gatski, Thomas B.

    2003-01-01

    This paper summarizes the Explicit Algebraic Stress Model in k-omega form (EASM-ko) and in k-epsilon form (EASM-ke) in the Reynolds-averaged Navier-Stokes code CFL3D. These models have been actively used over the last several years in CFL3D, and have undergone some minor modifications during that time. Details of the equations and method for coding the latest versions of the models are given, and numerous validation cases are presented. This paper serves as a validation archive for these models.

  2. Validation of a school-based amblyopia screening protocol in a kindergarten population.

    PubMed

    Casas-Llera, Pilar; Ortega, Paula; Rubio, Inmaculada; Santos, Verónica; Prieto, María J; Alio, Jorge L

    2016-08-04

    To validate a school-based amblyopia screening program model by comparing its outcomes to those of a state-of-the-art conventional ophthalmic clinic examination in a kindergarten population of children between the ages of 4 and 5 years. An amblyopia screening protocol, consisting of visual acuity measurement using Lea charts, an ocular alignment test, ocular motility assessment, and stereoacuity with the TNO random-dot test, was performed at school in a pediatric 4- to 5-year-old population by qualified healthcare professionals. The outcomes were validated in a selected group by a conventional ophthalmologic examination performed in a fully equipped ophthalmologic center. The ophthalmologic evaluation was used to confirm whether or not children were correctly classified by the screening protocol, and the sensitivity and specificity of the test model to detect amblyopia were established. A total of 18,587 4- to 5-year-old children were subjected to the amblyopia screening program during the 2010-2011 school year, and a population of 100 children was selected for the ophthalmologic validation screening. A sensitivity of 89.3%, specificity of 93.1%, positive predictive value of 83.3%, negative predictive value of 95.7%, positive likelihood ratio of 12.86, and negative likelihood ratio of 0.12 were obtained for the amblyopia screening validation model. The amblyopia screening protocol tested in this investigation shows high sensitivity and specificity in detecting high-risk cases of amblyopia compared to the standard ophthalmologic examination, and may be highly relevant for amblyopia screening at schools.
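
    All six reported indices follow from a single 2x2 table of screening outcome versus reference diagnosis. The counts below are inferred to be consistent with the reported rates for the 100 ophthalmologically validated children (25/28 gives the 89.3% sensitivity, 67/72 the 93.1% specificity); the paper does not print the table itself, so treat them as illustrative.

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, predictive values, and likelihood ratios
    from a 2x2 screening-vs-reference confusion table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "lr_plus": sens / (1 - spec),
        "lr_minus": (1 - sens) / spec,
    }

# Counts inferred from the reported rates (N = 100), for illustration only
print(screening_metrics(tp=25, fp=5, fn=3, tn=67))
```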

  3. Self-esteem among nursing assistants: reliability and validity of the Rosenberg Self-Esteem Scale.

    PubMed

    McMullen, Tara; Resnick, Barbara

    2013-01-01

    To establish the reliability and validity of the Rosenberg Self-Esteem Scale (RSES) when used with nursing assistants (NAs). Testing the RSES used baseline data from a randomized controlled trial testing the Res-Care Intervention. Female NAs were recruited from nursing homes (n = 508). Validity testing for the positive and negative subscales of the RSES was based on confirmatory factor analysis (CFA) using structural equation modeling and Rasch analysis. Estimates of reliability were based on Rasch analysis and the person separation index. Evidence supports the reliability and validity of the RSES in NAs although we recommend minor revisions to the measure for subsequent use. Establishing reliable and valid measures of self-esteem in NAs will facilitate testing of interventions to strengthen workplace self-esteem, job satisfaction, and retention.

  4. LIVVkit: An extensible, python-based, land ice verification and validation toolkit for ice sheet models

    NASA Astrophysics Data System (ADS)

    Kennedy, Joseph H.; Bennett, Andrew R.; Evans, Katherine J.; Price, Stephen; Hoffman, Matthew; Lipscomb, William H.; Fyke, Jeremy; Vargo, Lauren; Boghozian, Adrianna; Norman, Matthew; Worley, Patrick H.

    2017-06-01

    To address the pressing need to better understand the behavior and complex interaction of ice sheets within the global Earth system, significant development of continental-scale, dynamical ice sheet models is underway. Concurrent to the development of the Community Ice Sheet Model (CISM), the corresponding verification and validation (V&V) process is being coordinated through a new, robust, Python-based extensible software package, the Land Ice Verification and Validation toolkit (LIVVkit). Incorporated into the typical ice sheet model development cycle, it provides robust and automated numerical verification, software verification, performance validation, and physical validation analyses on a variety of platforms, from personal laptops to the largest supercomputers. LIVVkit operates on sets of regression test and reference data sets, and provides comparisons for a suite of community prioritized tests, including configuration and parameter variations, bit-for-bit evaluation, and plots of model variables to indicate where differences occur. LIVVkit also provides an easily extensible framework to incorporate and analyze results of new intercomparison projects, new observation data, and new computing platforms. LIVVkit is designed for quick adaptation to additional ice sheet models via abstraction of model specific code, functions, and configurations into an ice sheet model description bundle outside the main LIVVkit structure. Ultimately, through shareable and accessible analysis output, LIVVkit is intended to help developers build confidence in their models and enhance the credibility of ice sheet models overall.

  5. The PDB_REDO server for macromolecular structure model optimization.

    PubMed

    Joosten, Robbie P; Long, Fei; Murshudov, Garib N; Perrakis, Anastassis

    2014-07-01

    The refinement and validation of a crystallographic structure model is the last step before the coordinates and the associated data are submitted to the Protein Data Bank (PDB). The success of the refinement procedure is typically assessed by validating the models against geometrical criteria and the diffraction data, and is an important step in ensuring the quality of the PDB public archive [Read et al. (2011), Structure, 19, 1395-1412]. The PDB_REDO procedure aims for 'constructive validation', aspiring to consistent and optimal refinement parameterization and pro-active model rebuilding, not only correcting errors but striving for optimal interpretation of the electron density. A web server for PDB_REDO has been implemented, allowing thorough, consistent and fully automated optimization of the refinement procedure in REFMAC and partial model rebuilding. The goal of the web server is to help practicing crystallographers to improve their model prior to submission to the PDB. For this, additional steps were implemented in the PDB_REDO pipeline, both in the refinement procedure, e.g. testing of resolution limits and k-fold cross-validation for small test sets, and as new validation criteria, e.g. the density-fit metrics implemented in EDSTATS and ligand validation as implemented in YASARA. Innovative ways to present the refinement and validation results to the user are also described, which together with auto-generated Coot scripts can guide users to subsequent model inspection and improvement. It is demonstrated that using the server can lead to substantial improvement of structure models before they are submitted to the PDB.

  6. Testability of evolutionary game dynamics based on experimental economics data

    NASA Astrophysics Data System (ADS)

    Wang, Yijia; Chen, Xiaojie; Wang, Zhijian

    In order to better understand the dynamic processes of a real game system, we need an appropriate dynamics model, and evaluating the validity of such a model is not a trivial task. Here we demonstrate an approach that takes the dynamical macroscopic patterns of angular momentum and speed as measurement variables to evaluate the validity of various dynamics models. Using data from real-time Rock-Paper-Scissors (RPS) game experiments, we obtain the experimental dynamic patterns and then derive the corresponding theoretical patterns from a series of typical dynamics models. By testing the goodness-of-fit between the experimental and theoretical patterns, the validity of the models can be evaluated. One result of our case study is that, among all the nonparametric models tested, the best-known Replicator dynamics model performs almost worst, while the Projection dynamics model performs best. Besides providing new empirical macroscopic patterns of social dynamics, we demonstrate that the approach can be an effective and rigorous tool for testing game dynamics models. Supported by the Fundamental Research Funds for the Central Universities (SSEYI2014Z) and the National Natural Science Foundation of China (Grant No. 61503062).
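
    As context for the measurement variables, the sketch below integrates the classic Replicator dynamics for RPS (one of the models tested in this line of work) and computes an angular-momentum-like quantity of the mixed-strategy trajectory about the simplex centroid. The payoff matrix is the standard zero-sum RPS matrix; initial point and step size are arbitrary illustrative choices.

```python
import numpy as np

A = np.array([[0, -1, 1],
              [1, 0, -1],
              [-1, 1, 0]])       # standard zero-sum RPS payoff matrix

def replicator_step(x, dt=0.01):
    """One Euler step of replicator dynamics: dx_i/dt = x_i (f_i - f_bar)."""
    f = A @ x                    # fitness of each strategy
    return x + dt * x * (f - x @ f)

x = np.array([0.5, 0.3, 0.2])
traj = [x]
for _ in range(5000):
    x = replicator_step(x)
    traj.append(x)
traj = np.array(traj)

# Angular momentum-like measure of circulation about the centroid,
# using the first two simplex coordinates
d = traj - np.array([1 / 3, 1 / 3, 1 / 3])
L = d[:-1, 0] * np.diff(traj[:, 1]) - d[:-1, 1] * np.diff(traj[:, 0])
print(f"mean angular momentum of the trajectory: {L.mean():.2e}")
```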

  7. The Role of Structural Models in the Solar Sail Flight Validation Process

    NASA Technical Reports Server (NTRS)

    Johnston, John D.

    2004-01-01

    NASA is currently soliciting proposals via the New Millennium Program ST-9 opportunity for a potential Solar Sail Flight Validation (SSFV) experiment to develop and operate in space a deployable solar sail that can be steered and provides measurable acceleration. The approach planned for this experiment is to test and validate models and processes for solar sail design, fabrication, deployment, and flight. These models and processes would then be used to design, fabricate, and operate scaleable solar sails for future space science missions. There are six validation objectives planned for the ST9 SSFV experiment: 1) Validate solar sail design tools and fabrication methods; 2) Validate controlled deployment; 3) Validate in space structural characteristics (focus of poster); 4) Validate solar sail attitude control; 5) Validate solar sail thrust performance; 6) Characterize the sail's electromagnetic interaction with the space environment. This poster presents a top-level assessment of the role of structural models in the validation process for in-space structural characteristics.

  8. Validation of Metrics as Error Predictors

    NASA Astrophysics Data System (ADS)

    Mendling, Jan

    In this chapter, we test the validity of metrics that were defined in the previous chapter for predicting errors in EPC business process models. In Section 5.1, we provide an overview of how the analysis data is generated. Section 5.2 describes the sample of EPCs from practice that we use for the analysis. Here we discuss a disaggregation by the EPC model group and by error as well as a correlation analysis between metrics and error. Based on this sample, we calculate a logistic regression model for predicting error probability with the metrics as input variables in Section 5.3. In Section 5.4, we then test the regression function for an independent sample of EPC models from textbooks as a cross-validation. Section 5.5 summarizes the findings.
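
    A logistic regression of error occurrence on model metrics, as described, can be set up in a few lines; the metrics, coefficients, and data below are hypothetical stand-ins for the EPC sample, and the train/held-out split stands in for the chapter's cross-validation against an independent sample.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Hypothetical normalized metrics per process model, e.g. size and density
X = rng.uniform(0, 1, size=(300, 2))
logit = -3 + 2.5 * X[:, 0] + 3.0 * X[:, 1]      # assumed true relationship
y = rng.random(300) < 1 / (1 + np.exp(-logit))  # 1 = model contains an error

clf = LogisticRegression().fit(X[:200], y[:200])           # calibration sample
print("held-out accuracy:", clf.score(X[200:], y[200:]))   # validation sample
print("predicted error probability for a large, dense model:",
      clf.predict_proba([[0.9, 0.9]])[0, 1])
```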

  9. DEVELOPMENT AND VALIDATION OF A MULTIFIELD MODEL OF CHURN-TURBULENT GAS/LIQUID FLOWS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elena A. Tselishcheva; Steven P. Antal; Michael Z. Podowski

    The accuracy of numerical predictions for gas/liquid two-phase flows using Computational Multiphase Fluid Dynamics (CMFD) methods strongly depends on the formulation of models governing the interaction between the continuous liquid field and bubbles of different sizes. The purpose of this paper is to develop, test, and validate a multifield model of adiabatic gas/liquid flows at intermediate gas concentrations (e.g., the churn-turbulent flow regime), in which multiple-size bubbles are divided into a specified number of groups, each representing a prescribed range of sizes. The proposed modeling concept uses transport equations for the continuous liquid field and for each bubble field. The overall model has been implemented in the NPHASE-CMFD computer code. The results of NPHASE-CMFD simulations have been validated against the experimental data from the TOPFLOW test facility, and a parametric analysis of the effect of various modeling assumptions has been performed.

  10. Perception of competence in middle school physical education: instrument development and validation.

    PubMed

    Scrabis-Fletcher, Kristin; Silverman, Stephen

    2010-03-01

    Perception of Competence (POC) has been studied extensively in physical activity (PA) research with similar instruments adapted for physical education (PE) research. Such instruments do not account for the unique PE learning environment. Therefore, an instrument was developed and the scores validated to measure POC in middle school PE. A multiphase design was used consisting of an intensive theoretical review, elicitation study, prepilot study, pilot study, content validation study, and final validation study (N = 1281). Data analysis included a multistep iterative process to identify the best model fit. A three-factor model for POC was tested and resulted in root mean square error of approximation = .09, root mean square residual = .07, goodness-of-fit index = .90, and adjusted goodness-of-fit index = .86, values in the acceptable range (Hu & Bentler, 1999). A two-factor model was also tested and resulted in a good fit (two-factor fit index values = .05, .03, .98, and .97, respectively). The results of this study suggest that an instrument using a three- or two-factor model provides reliable and valid scores of POC measurement in middle school PE.

  11. Validation of the Continuum of Care Conceptual Model for Athletic Therapy

    PubMed Central

    Lafave, Mark R.; Butterwick, Dale; Eubank, Breda

    2015-01-01

    Utilization of conceptual models in field-based emergency care currently borrows from existing standards of medical and paramedical professions. The purpose of this study was to develop and validate a comprehensive conceptual model that could account for injuries ranging from nonurgent to catastrophic events including events that do not follow traditional medical or prehospital care protocols. The conceptual model should represent the continuum of care from the time of initial injury spanning to an athlete's return to participation in their sport. Finally, the conceptual model should accommodate both novices and experts in the AT profession. This paper chronicles the content validation steps of the Continuum of Care Conceptual Model for Athletic Therapy (CCCM-AT). The stages of model development were domain and item generation, content expert validation using a three-stage modified Ebel procedure, and pilot testing. Only the final stage of the modified Ebel procedure reached a priori 80% consensus on three domains of interest: (1) heading descriptors; (2) the order of the model; (3) the conceptual model as a whole. Future research is required to test the use of the CCCM-AT in order to understand its efficacy in teaching and practice within the AT discipline. PMID:26464897

  12. ASC-AD penetration modeling FY05 status report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kistler, Bruce L.; Ostien, Jakob T.; Chiesa, Michael L.

    2006-04-01

    Sandia currently lacks a high-fidelity method for predicting loads on, and the subsequent structural response of, earth penetrating weapons. This project seeks to test, debug, improve, and validate methodologies for modeling earth penetration. Results of this project will allow us to optimize and certify designs for the B61-11, the Robust Nuclear Earth Penetrator (RNEP), PEN-X, and future nuclear and conventional penetrator systems. Since this is an ASC Advanced Deployment project, the primary goal of the work is to test, debug, verify, and validate new Sierra (and Nevada) tools. Also, since this project is part of the V&V program within ASC, uncertainty quantification (UQ), optimization using DAKOTA [1], and sensitivity analysis are an integral part of the work. This project evaluates, verifies, and validates new constitutive models, penetration methodologies, and Sierra/Nevada codes. In FY05 the project focused mostly on PRESTO [2], using the Spherical Cavity Expansion (SCE) [3,4] and PRESTO Lagrangian analysis with a preformed hole (Pen-X) methodologies. Modeling penetration tests using PRESTO with a pilot hole was also attempted to evaluate constitutive models. Future years' work would include the Alegra/SHISM [5] and Alegra/EP (Earth Penetration) methodologies when they are ready for validation testing. Constitutive models such as Soil-and-Foam, the Sandia Geomodel [6], and the K&C Concrete model [7] were also tested and evaluated. This report is submitted to satisfy annual documentation requirements for the ASC Advanced Deployment program. It summarizes FY05 work performed in the Penetration Mechanical Response (ASC-APPS) and Penetration Mechanics (ASC-V&V) projects; a single report documents the two projects because of the significant technical overlap.

  13. 40 CFR 1037.501 - General testing and modeling provisions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    Excerpts: ... 1065 to perform valid tests. (1) For service accumulation, use the test fuel or any commercially... the appropriate diesel test fuel is ultra-low-sulfur diesel fuel. (3) For gasoline-fueled vehicles, use the... [From the subpart heading: Air Pollution Controls; Control of Emissions from New Heavy-Duty Motor Vehicles; Test and Modeling...]

  14. 1:50 Scale Testing of Three Floating Wind Turbines at MARIN and Numerical Model Validation Against Test Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dagher, Habib; Viselli, Anthony; Goupee, Andrew

    The primary goal of the basin model test program discussed herein is to properly scale and accurately capture physical data of the rigid body motions, accelerations and loads for different floating wind turbine platform technologies. The intended use for this data is for performing comparisons with predictions from various aero-hydro-servo-elastic floating wind turbine simulators for calibration and validation. Of particular interest is validating the floating offshore wind turbine simulation capabilities of NREL's FAST open-source simulation tool. Once the validation process is complete, coupled simulators such as FAST can be used with a much greater degree of confidence in design processes for commercial development of floating offshore wind turbines. The test program subsequently described in this report was performed at MARIN (Maritime Research Institute Netherlands) in Wageningen, the Netherlands. The models considered consisted of the horizontal axis, NREL 5 MW Reference Wind Turbine (Jonkman et al., 2009) with a flexible tower affixed atop three distinct platforms: a tension leg platform (TLP), a spar-buoy modeled after the OC3 Hywind (Jonkman, 2010) and a semi-submersible. The three generic platform designs were intended to cover the spectrum of currently investigated concepts, each based on proven floating offshore structure technology. The models were tested under Froude scale wind and wave loads. The high-quality wind environments, unique to these tests, were realized in the offshore basin via a novel wind machine which exhibits negligible swirl and low turbulence intensity in the flow field. Recorded data from the floating wind turbine models included rotor torque and position, tower top and base forces and moments, mooring line tensions, six-axis platform motions and accelerations at key locations on the nacelle, tower, and platform. A large number of tests were performed ranging from simple free-decay tests to complex operating conditions with irregular sea states and dynamic winds.

  15. 2D-QSAR and 3D-QSAR Analyses for EGFR Inhibitors

    PubMed Central

    Zhao, Manman; Zheng, Linfeng; Qiu, Chun

    2017-01-01

    Epidermal growth factor receptor (EGFR) is an important target for cancer therapy. In this study, EGFR inhibitors were investigated to build a two-dimensional quantitative structure-activity relationship (2D-QSAR) model and a three-dimensional quantitative structure-activity relationship (3D-QSAR) model. In the 2D-QSAR model, the support vector machine (SVM) classifier combined with the feature selection method was applied to predict whether a compound was an EGFR inhibitor. As a result, the prediction accuracy of the 2D-QSAR model was 98.99% by using tenfold cross-validation test and 97.67% by using independent set test. Then, in the 3D-QSAR model, the model with q2 = 0.565 (cross-validated correlation coefficient) and r2 = 0.888 (non-cross-validated correlation coefficient) was built to predict the activity of EGFR inhibitors. The mean absolute error (MAE) of the training set and test set was 0.308 log units and 0.526 log units, respectively. In addition, molecular docking was also employed to investigate the interaction between EGFR inhibitors and EGFR. PMID:28630865
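
    The 2D-QSAR workflow described (an SVM classifier evaluated by tenfold cross-validation and an independent set test) can be sketched with scikit-learn; the molecular descriptors and inhibitor labels below are synthetic stand-ins, not the EGFR data.

```python
import numpy as np
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(7)
X = rng.normal(size=(400, 50))               # hypothetical molecular descriptors
y = (X[:, :5].sum(axis=1) > 0).astype(int)   # synthetic "inhibitor" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", C=1.0)
cv_acc = cross_val_score(clf, X_tr, y_tr, cv=10).mean()   # tenfold CV
test_acc = clf.fit(X_tr, y_tr).score(X_te, y_te)          # independent set test
print(f"CV accuracy {cv_acc:.3f}, independent-set accuracy {test_acc:.3f}")
```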

  16. Testing the Predictive Validity of the Hendrich II Fall Risk Model.

    PubMed

    Jung, Hyesil; Park, Hyeoun-Ae

    2018-03-01

    Cumulative data on patient fall risk have been compiled in electronic medical records systems, making it possible to test the validity of fall-risk assessment tools using data collected between the time of admission and the occurrence of a fall. Hendrich II Fall Risk Model scores assessed at three time points during hospital stays were extracted and used to test predictive validity: (a) the score upon admission, (b) the maximum score between admission and the fall or discharge, and (c) the score immediately before the fall or discharge. Predictive validity was examined using seven predictive indicators, and logistic regression analysis was used to identify factors that significantly affect the occurrence of a fall. Among the time points, the maximum fall-risk score assessed between admission and the fall or discharge showed the best predictive performance. Confusion or disorientation and a poor ability to rise from a sitting position were significant risk factors for a fall.

  17. A multivariate model and statistical method for validating tree grade lumber yield equations

    Treesearch

    Donald W. Seegrist

    1975-01-01

    Lumber yields within lumber grades can be described by a multivariate linear model. A method for validating lumber yield prediction equations when there are several tree grades is presented. The method is based on multivariate simultaneous test procedures.

  18. Criteria for Reviewing District Competency Tests.

    ERIC Educational Resources Information Center

    Herman, Joan L.

    A formative evaluation minimum competency test model is examined. The model systematically uses assessment information to support and facilitate program improvement. In terms of the model, four inter-related qualities are essential for a sound testing program. The content validity perspective looks at how well the district has defined competency…

  19. Progress toward the determination of correct classification rates in fire debris analysis.

    PubMed

    Waddell, Erin E; Song, Emma T; Rinke, Caitlin N; Williams, Mary R; Sigman, Michael E

    2013-07-01

    Principal components analysis (PCA), linear discriminant analysis (LDA), and quadratic discriminant analysis (QDA) were used to develop a multistep classification procedure for determining the presence of ignitable liquid residue in fire debris and assigning any ignitable liquid residue present to the classes defined under the American Society for Testing and Materials (ASTM) E 1618-10 standard method. The multistep classification procedure was tested by cross-validation on model data sets comprising the time-averaged mass spectra (also referred to as total ion spectra) of commercial ignitable liquids and of pyrolysis products from common building materials and household furnishings (referred to simply as substrates). Fire debris samples from laboratory-scale and field test burns were also used to test the model. The optimal model's true-positive rate was 81.3% for cross-validation samples and 70.9% for fire debris samples; the false-positive rate was 9.9% for cross-validation samples and 8.9% for fire debris samples. © 2013 American Academy of Forensic Sciences.
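
    The PCA-then-discriminant-analysis structure of such a procedure is easy to sketch. The version below reduces the problem to two classes (substrate-only vs. ignitable liquid present) rather than the full ASTM E 1618 class assignment, and uses entirely synthetic spectra.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
# Hypothetical total ion spectra: rows = samples, columns = m/z intensities
X = np.vstack([rng.normal(0.0, 1, (60, 200)),   # substrate-only pyrolysis
               rng.normal(0.4, 1, (60, 200))])  # ignitable liquid present
y = np.r_[np.zeros(60), np.ones(60)]

for clf in (LinearDiscriminantAnalysis(), QuadraticDiscriminantAnalysis()):
    model = make_pipeline(PCA(n_components=10), clf)   # PCA then DA
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(type(clf).__name__, f"cross-validated accuracy: {acc:.2f}")
```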

  20. Development of an anaerobic threshold (HRLT, HRVT) estimation equation using the heart rate threshold (HRT) during the treadmill incremental exercise test

    PubMed Central

    Ham, Joo-ho; Park, Hun-Young; Kim, Youn-ho; Bae, Sang-kon; Ko, Byung-hoon

    2017-01-01

    [Purpose] The purpose of this study was to develop a regression model to estimate the heart rate at the lactate threshold (HRLT) and the heart rate at the ventilatory threshold (HRVT) using the heart rate threshold (HRT), and to test the validity of the regression model. [Methods] We performed a graded exercise test with a treadmill in 220 normal individuals (men: 112, women: 108) aged 20–59 years. HRT, HRLT, and HRVT were measured in all subjects. A regression model was developed to estimate HRLT and HRVT using HRT with 70% of the data (men: 79, women: 76) through randomization (7:3), with the Bernoulli trial. The validity of the regression model developed with the remaining 30% of the data (men: 33, women: 32) was also examined. [Results] Based on the regression coefficient, we found that the independent variable HRT was a significant variable in all regression models. The adjusted R2 of the developed regression models averaged about 70%, and the standard error of estimation of the validity test results was 11 bpm, which is similar to that of the developed model. [Conclusion] These results suggest that HRT is a useful parameter for predicting HRLT and HRVT. PMID:29036765
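
    The 70/30 development/validation design described here pairs naturally with a simple linear model and a standard-error-of-estimate check. The sketch below uses synthetic HRT/HRLT values, with noise chosen to land near the reported ~11 bpm standard error of estimation; it mirrors the design, not the study's data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
hrt = rng.uniform(120, 180, 220)                 # hypothetical HRT values (bpm)
hrlt = 0.9 * hrt + 15 + rng.normal(0, 11, 220)   # synthetic HRLT values (bpm)

X_tr, X_te, y_tr, y_te = train_test_split(
    hrt.reshape(-1, 1), hrlt, test_size=0.3, random_state=0)  # 70/30 split
model = LinearRegression().fit(X_tr, y_tr)       # develop on 70%

resid = y_te - model.predict(X_te)               # validate on 30%
see = np.sqrt(np.sum(resid ** 2) / (len(y_te) - 2))  # standard error of estimate
print(f"HRLT = {model.coef_[0]:.2f} * HRT + {model.intercept_:.1f}, "
      f"SEE = {see:.1f} bpm")
```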

  1. Development of an anaerobic threshold (HRLT, HRVT) estimation equation using the heart rate threshold (HRT) during the treadmill incremental exercise test.

    PubMed

    Ham, Joo-Ho; Park, Hun-Young; Kim, Youn-Ho; Bae, Sang-Kon; Ko, Byung-Hoon; Nam, Sang-Seok

    2017-09-30

    The purpose of this study was to develop a regression model to estimate the heart rate at the lactate threshold (HRLT) and the heart rate at the ventilatory threshold (HRVT) using the heart rate threshold (HRT), and to test the validity of the regression model. We performed a graded exercise test with a treadmill in 220 normal individuals (men: 112, women: 108) aged 20-59 years. HRT, HRLT, and HRVT were measured in all subjects. A regression model was developed to estimate HRLT and HRVT using HRT with 70% of the data (men: 79, women: 76) through randomization (7:3), with the Bernoulli trial. The validity of the regression model developed with the remaining 30% of the data (men: 33, women: 32) was also examined. Based on the regression coefficient, we found that the independent variable HRT was a significant variable in all regression models. The adjusted R2 of the developed regression models averaged about 70%, and the standard error of estimation of the validity test results was 11 bpm, which is similar to that of the developed model. These results suggest that HRT is a useful parameter for predicting HRLT and HRVT. ©2017 The Korean Society for Exercise Nutrition

  2. Test Cases for Modeling and Validation of Structures with Piezoelectric Actuators

    NASA Technical Reports Server (NTRS)

    Reaves, Mercedes C.; Horta, Lucas G.

    2001-01-01

    A set of benchmark test articles were developed to validate techniques for modeling structures containing piezoelectric actuators using commercially available finite element analysis packages. The paper presents the development, modeling, and testing of two structures: an aluminum plate with surface mounted patch actuators and a composite box beam with surface mounted actuators. Three approaches for modeling structures containing piezoelectric actuators using the commercially available packages: MSC/NASTRAN and ANSYS are presented. The approaches, applications, and limitations are discussed. Data for both test articles are compared in terms of frequency response functions from deflection and strain data to input voltage to the actuator. Frequency response function results using the three different analysis approaches provided comparable test/analysis results. It is shown that global versus local behavior of the analytical model and test article must be considered when comparing different approaches. Also, improper bonding of actuators greatly reduces the electrical to mechanical effectiveness of the actuators producing anti-resonance errors.

  3. Prediction of biodegradability from chemical structure: Modeling or ready biodegradation test data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loonen, H.; Lindgren, F.; Hansen, B.

    1999-08-01

    Biodegradation data were collected and evaluated for 894 substances with widely varying chemical structures. All data were determined according to the Japanese Ministry of International Trade and Industry (MITI) I test protocol. The MITI I test is a screening test for ready biodegradability and has been described by Organization for Economic Cooperation and Development (OECD) test guideline 301 C and European Union (EU) test guideline C4F. The chemicals were characterized by a set of 127 predefined structural fragments. This data set was used to develop a model for the prediction of the biodegradability of chemicals under standardized OECD and EU ready biodegradation test conditions. Partial least squares (PLS) discriminant analysis was used for the model development. The model was evaluated by means of internal cross-validation and repeated external validation, and the importance of various structural fragments and fragment interactions was investigated. The most important fragments include the presence of a long alkyl chain and of hydroxy, ester, and acid groups (enhancing biodegradation), and the presence of one or more aromatic rings and halogen substituents (retarding biodegradation). More than 85% of the model predictions were correct when using the complete data set; the not-readily-biodegradable predictions were slightly better than the readily biodegradable predictions (86 vs. 84%). The average percentage of correct predictions from four external validation studies was 83%. Model optimization by including fragment interactions improved the model's predictive capability to 89%. It can be concluded that the PLS model provides predictions of high reliability for a diverse range of chemical structures, and that the predictions conform to the concept of readily biodegradable (or not readily biodegradable) as defined by OECD and EU test guidelines.
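
    PLS discriminant analysis on a binary fragment matrix can be sketched with scikit-learn's PLSRegression, treating the class label as the response and thresholding the predicted score. The fragment matrix below is random, sized only to match the paper's 894 x 127 design; nothing else about it reflects the actual data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(11)
X = rng.integers(0, 2, size=(894, 127)).astype(float)  # fragment indicators
w = rng.normal(size=127)                               # assumed fragment effects
y = (X @ w + rng.normal(0, 1, 894) > 0).astype(float)  # 1 = readily biodegradable

pls = PLSRegression(n_components=5)
y_hat = cross_val_predict(pls, X, y, cv=10).ravel()    # internal cross-validation
pred = (y_hat > 0.5).astype(float)                     # discriminant threshold
print(f"fraction of correct predictions: {(pred == y).mean():.2f}")
```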

  4. Alphabus Mechanical Validation Plan and Test Campaign

    NASA Astrophysics Data System (ADS)

    Calvisi, G.; Bonnet, D.; Belliol, P.; Lodereau, P.; Redoundo, R.

    2012-07-01

    A joint team of the two leading European satellite companies (Astrium and Thales Alenia Space) worked with the support of ESA and CNES to define a product line able to efficiently address the upper segment of communications satellites: Alphabus. From 2009 to 2011, mechanical validation of the Alphabus platform was achieved through static tests performed on a dedicated static model and environmental tests performed on the first satellite based on Alphabus, Alphasat I-XL. The mechanical validation of the Alphabus platform presented an excellent opportunity to improve the validation and qualification process with respect to the static, sine vibration, acoustic, and launch-vehicle shock environments, minimizing the recurrent cost of manufacturing, integration, and testing. A main driver of the mechanical testing is that mechanical acceptance testing at satellite level will be performed with empty tanks, owing to technical constraints (limitations of existing vibration devices) and programmatic advantages (test risk reduction, test schedule minimization). In this paper, the impacts of this testing logic on the validation plan are briefly recalled, and its application to the Alphasat PFM mechanical test campaign is detailed.

  5. Ada (Trade name) Compiler Validation Summary Report: Rational. Rational Environment (Trademark) A952. Rational Architecture (R1000 (Trade name) Model 200).

    DTIC Science & Technology

    1987-05-06

    This report documents the validation testing performed on the Rational Environment, A_9_5_2, using Version 1.8 of the Ada Compiler Validation Capability (ACVC). The Rational Environment is hosted on a Rational Architecture (R1000 Model 200) operating under Rational Environment, Release A_9_5_2.

  6. Infrared Imagery of Solid Rocket Exhaust Plumes

    NASA Technical Reports Server (NTRS)

    Moran, Robert P.; Houston, Janice D.

    2011-01-01

    The Ares I Scale Model Acoustic Test program consisted of a series of 18 solid rocket motor static firings, simulating the liftoff conditions of the Ares I five-segment Reusable Solid Rocket Motor vehicle. Primary test objectives included acquiring acoustic and pressure data which will be used to validate analytical models for the prediction of Ares I liftoff acoustics and ignition overpressure environments. The test article consisted of a 5% scale Ares I vehicle and launch tower mounted on the Mobile Launch Pad. The testing also incorporated several water sound suppression systems. Infrared imagery was employed during the solid rocket testing to support the validation or improvement of analytical models, and to identify correlations between rocket plume size or shape and the accompanying measured level of noise suppression obtained by the water sound suppression systems.

  7. Testability of evolutionary game dynamics based on experimental economics data

    NASA Astrophysics Data System (ADS)

    Wang, Yijia; Chen, Xiaojie; Wang, Zhijian

    2017-11-01

    Understanding the dynamic processes of a real game system requires an appropriate dynamics model, and rigorously testing a dynamics model is nontrivial. In our methodological research, we develop an approach to testing the validity of game dynamics models that considers the dynamic patterns of angular momentum and speed as measurement variables. Using Rock-Paper-Scissors (RPS) games as an example, we illustrate the geometric patterns in the experiment data. We then derive the related theoretical patterns from a series of typical dynamics models. By testing the goodness-of-fit between the experimental and theoretical patterns, we show that the validity of these models can be evaluated quantitatively. Our approach establishes a link between dynamics models and experimental systems, which is, to the best of our knowledge, the most effective and rigorous strategy for ascertaining the testability of evolutionary game dynamics models.

  8. Validation of Finite Element Crash Test Dummy Models for Predicting Orion Crew Member Injuries During a Simulated Vehicle Landing

    NASA Technical Reports Server (NTRS)

    Tabiei, Al; Lawrence, Charles; Fasanella, Edwin L.

    2009-01-01

    A series of crash tests was conducted with dummies during simulated Orion crew module landings at the Wright-Patterson Air Force Base. These tests covered several crew configurations, with and without astronaut suits. Some test results were collected and are presented. In addition, finite element models of the tests were developed and are presented. The finite element models were validated using the experimental data, and the test responses were compared with the computed results. Occupant crash data, such as forces, moments, and accelerations, were collected from the simulations and compared with injury criteria to assess occupant survivability and injury. Some of the injury criteria published in the literature are summarized for completeness. These criteria were used to determine potential injury during crew impact events.

  9. Current progress in patient-specific modeling

    PubMed Central

    2010-01-01

    We present a survey of recent advancements in the emerging field of patient-specific modeling (PSM). Researchers in this field are currently simulating a wide variety of tissue and organ dynamics to address challenges in various clinical domains. The majority of this research employs three-dimensional, image-based modeling techniques. Recent PSM publications mostly represent feasibility or preliminary validation studies on modeling technologies, and these systems will require further clinical validation and usability testing before they can become a standard of care. We anticipate that with further testing and research, PSM-derived technologies will eventually become valuable, versatile clinical tools. PMID:19955236

  10. Switching moving boundary models for two-phase flow evaporators and condensers

    NASA Astrophysics Data System (ADS)

    Bonilla, Javier; Dormido, Sebastián; Cellier, François E.

    2015-03-01

    The moving boundary method is an appealing approach for the design, testing and validation of advanced control schemes for evaporators and condensers. When it comes to advanced control strategies, not only accurate but also fast dynamic models are required. Moving boundary models are fast, low-order dynamic models that can describe the dynamic behavior with high accuracy. This paper presents a mathematical formulation based on physical principles for two-phase flow moving boundary evaporator and condenser models which support dynamic switching between all possible flow configurations. The models were implemented in a library using the equation-based, object-oriented Modelica language. Several integrity tests of steady-state and transient predictions, together with stability tests, verified the models. Experimental data from a direct steam generation parabolic-trough solar thermal power plant were used to validate the developed moving boundary models and compare them against finite volume models.

  11. Accelerated Aging in Electrolytic Capacitors for Prognostics

    NASA Technical Reports Server (NTRS)

    Celaya, Jose R.; Kulkarni, Chetan; Saha, Sankalita; Biswas, Gautam; Goebel, Kai Frank

    2012-01-01

    The focus of this work is the analysis of different degradation phenomena based on thermal overstress and electrical overstress accelerated aging systems, and the use of accelerated aging techniques for prognostics algorithm development. Results of thermal overstress and electrical overstress experiments are presented. In addition, preliminary results toward the development of physics-based degradation models are presented, focusing on the electrolyte evaporation failure mechanism. An empirical degradation model based on percentage capacitance loss under electrical overstress is presented and used in: (i) a Bayesian implementation of model-based prognostics using a discrete Kalman filter for health state estimation, and (ii) a dynamic system representation of the degradation model for forecasting and remaining useful life (RUL) estimation. A leave-one-out validation methodology is used to assess the validity of the approach under the small-sample-size constraint. The RUL estimation results are consistent throughout the validation tests when comparing relative accuracy and prediction error. The inaccuracy of the model in representing the change in degradation behavior observed at the end of the test data is also consistent throughout the validation tests, indicating the need for a more detailed degradation model or for an algorithm that could estimate model parameters on-line. The degradation behavior observed under different stress intensities with rest periods further supports the need for more sophisticated degradation models: the current degradation model does not represent the capacitance recovery over rest periods following an accelerated aging stress period.
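    As a rough illustration of the two uses of the empirical degradation model described above, the sketch below runs a scalar discrete Kalman filter over noisy percentage-capacitance-loss measurements and extrapolates the filtered state to an end-of-life threshold for RUL. The linear growth rate, noise levels, and threshold are assumed values for illustration only.

```python
# Sketch: scalar discrete Kalman filter for capacitance-loss tracking
# plus a simple extrapolation-to-threshold RUL estimate.
import numpy as np

def track_and_predict_rul(measured_loss, rate=0.05, q=1e-4, r=0.25,
                          eol_loss=20.0, dt=1.0):
    """measured_loss: noisy % capacitance loss, one value per aging cycle.
    rate: assumed constant degradation rate (%/cycle); in practice the
    paper notes this may need on-line estimation."""
    x, p = 0.0, 1.0                           # state estimate and covariance
    for z in measured_loss:
        x, p = x + rate * dt, p + q           # predict: linear degradation
        k = p / (p + r)                       # Kalman gain
        x, p = x + k * (z - x), (1 - k) * p   # update with measurement
    rul = max(eol_loss - x, 0.0) / rate       # cycles until threshold
    return x, rul
```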

  12. The diagnostic value of specific IgE to Ara h 2 to predict peanut allergy in children is comparable to a validated and updated diagnostic prediction model.

    PubMed

    Klemans, Rob J B; Otte, Dianne; Knol, Mirjam; Knol, Edward F; Meijer, Yolanda; Gmelig-Meyling, Frits H J; Bruijnzeel-Koomen, Carla A F M; Knulst, André C; Pasmans, Suzanne G M A

    2013-01-01

    A diagnostic prediction model for peanut allergy in children was recently published, using 6 predictors: sex, age, history, skin prick test, peanut specific immunoglobulin E (sIgE), and total IgE minus peanut sIgE. To validate this model and update it by adding allergic rhinitis, atopic dermatitis, and sIgE to peanut components Ara h 1, 2, 3, and 8 as candidate predictors. To develop a new model based only on sIgE to peanut components. Validation was performed by testing discrimination (diagnostic value) with an area under the receiver operating characteristic curve and calibration (agreement between predicted and observed frequencies of peanut allergy) with the Hosmer-Lemeshow test and a calibration plot. The performance of the (updated) models was similarly analyzed. Validation of the model in 100 patients showed good discrimination (88%) but poor calibration (P < .001). In the updating process, age, history, and additional candidate predictors did not significantly increase discrimination, being 94%, and leaving only 4 predictors of the original model: sex, skin prick test, peanut sIgE, and total IgE minus sIgE. When building a model with sIgE to peanut components, Ara h 2 was the only predictor, with a discriminative ability of 90%. Cutoff values with 100% positive and negative predictive values could be calculated for both the updated model and sIgE to Ara h 2. In this way, the outcome of the food challenge could be predicted with 100% accuracy in 59% (updated model) and 50% (Ara h 2) of the patients. Discrimination of the validated model was good; however, calibration was poor. The discriminative ability of Ara h 2 was almost comparable to that of the updated model, containing 4 predictors. With both models, the need for peanut challenges could be reduced by at least 50%. Copyright © 2012 American Academy of Allergy, Asthma & Immunology. Published by Mosby, Inc. All rights reserved.
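    The two checks reported here, discrimination and calibration, can be sketched in a few lines. The decile grouping, the variable names, and the use of SciPy and scikit-learn are illustrative assumptions; the study's actual computations may differ.

```python
# Sketch: discrimination (ROC AUC) and a Hosmer-Lemeshow-type
# calibration statistic for a diagnostic prediction model.
import numpy as np
from scipy.stats import chi2
from sklearn.metrics import roc_auc_score

def hosmer_lemeshow(y, p, g=10):
    """y: observed 0/1 outcomes; p: predicted probabilities."""
    y, p = np.asarray(y), np.asarray(p)
    order = np.argsort(p)
    groups = np.array_split(order, g)          # deciles of predicted risk
    stat = 0.0
    for idx in groups:
        obs, exp, n = y[idx].sum(), p[idx].sum(), len(idx)
        stat += (obs - exp) ** 2 / (exp * (1 - exp / n) + 1e-12)
    return stat, chi2.sf(stat, g - 2)          # small p -> poor calibration

# discrimination is simply roc_auc_score(y, p); the paper reports 88%
# discrimination but a significant (poor) calibration result.
```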

  13. New strategy to improve quality control of Montenegro skin test at the production level.

    PubMed

    Guedes, Deborah Carbonera; Minozzo, João Carlos; Pasquali, Aline Kuhn Sbruzzi; Faulds, Craig; Soccol, Carlos Ricardo; Thomaz-Soccol, Vanete

    2017-01-01

    The production of the Montenegro antigen for skin testing poses difficulties regarding quality control. Here, we propose that certain animal models reproducing an immune response similar to that of humans may be used in the quality control of Montenegro antigen production. Fifteen Cavia porcellus (guinea pigs) were immunized with Leishmania amazonensis or Leishmania braziliensis and, after 30 days, were skin tested with standard Montenegro antigen. To validate C. porcellus as an animal model for skin tests, eighteen Mesocricetus auratus (hamsters) were infected with L. amazonensis or L. braziliensis and, after 45 days, were skin tested with standard Montenegro antigen. Cavia porcellus immunized with L. amazonensis or L. braziliensis, and hamsters infected with the same species, presented induration reactions 48-72 h after being skin tested with standard Montenegro antigen. The comparison between immunization methods and the immune responses of the two animal species validated C. porcellus as a good model for the Montenegro skin test, and the model showed strong potential as an in vivo model for quality control in the production of Montenegro antigen.

  14. Overview of Heat Addition and Efficiency Predictions for an Advanced Stirling Convertor

    NASA Technical Reports Server (NTRS)

    Wilson, Scott D.; Reid, Terry V.; Schifer, Nicholas A.; Briggs, Maxwell H.

    2012-01-01

    The U.S. Department of Energy (DOE) and Lockheed Martin Space Systems Company (LMSSC) have been developing the Advanced Stirling Radioisotope Generator (ASRG) for use as a power system for space science missions. This generator would use two high-efficiency Advanced Stirling Convertors (ASCs), developed by Sunpower Inc. and NASA Glenn Research Center (GRC). The ASCs convert thermal energy from a radioisotope heat source into electricity. As part of ground testing of these ASCs, different operating conditions are used to simulate expected mission conditions. These conditions require achieving a particular operating frequency, hot end and cold end temperatures, and specified electrical power output for a given net heat input. Microporous bulk insulation is used in the ground support test hardware to minimize the loss of thermal energy from the electric heat source to the environment. The insulation package is characterized before operation to predict how much heat will be absorbed by the convertor and how much will be lost to the environment during operation. In an effort to validate these predictions, numerous tasks have been performed to provide a more accurate value for net heat input into the ASCs. This test and modeling effort included: (a) making thermophysical property measurements of test setup materials to provide inputs to the numerical models, (b) acquiring additional test data during convertor tests to provide the numerical models with temperature profiles of the test setup via thermocouple and infrared measurements, (c) using multidimensional numerical models (computational fluid dynamics code) to predict the net heat input of an operating convertor, and (d) using validation test hardware to provide direct comparison of numerical results and validate the multidimensional numerical models used to predict convertor net heat input. This effort produced high-fidelity ASC net heat input predictions, which were successfully validated using specially designed test hardware enabling measurement of the heat transferred through a simulated Stirling cycle. The overall effort and results are discussed.

  15. Verification of mechanistic-empirical design models for flexible pavements through accelerated pavement testing : technical summary.

    DOT National Transportation Integrated Search

    2014-08-01

    Midwest States Accelerated Pavement Testing Pooled-Fund Program, financed by the highway departments of Kansas, Iowa, and Missouri, has supported an accelerated pavement testing (APT) project to validate several models incorporated in the NCHRP ...

  16. Verification of mechanistic-empirical design models for flexible pavements through accelerated pavement testing.

    DOT National Transportation Integrated Search

    2014-08-01

    The Midwest States Accelerated Pavement Testing Pooled Fund Program, financed by the highway departments of Kansas, Iowa, and Missouri, has supported an accelerated pavement testing (APT) project to validate several models incorporated in the NCH...

  17. Computer simulation of Cerebral Arteriovenous Malformation-validation analysis of hemodynamics parameters.

    PubMed

    Kumar, Y Kiran; Mehta, Shashi Bhushan; Ramachandra, Manjunath

    2017-01-01

    The purpose of this work is to provide validation methods for evaluating the hemodynamic assessment of Cerebral Arteriovenous Malformation (CAVM). This article emphasizes the importance of validating noninvasive measurements for CAVM patients, which are designed using lumped models of the complex vessel structure. The validation of the hemodynamic assessment is based on invasive clinical measurements and cross-validation against the Philips proprietary validated software packages Qflow and 2D Perfusion. The modeling results are validated for 30 CAVM patients across 150 vessel locations. Mean flow, diameter, and pressure from the model were compared with the clinical and cross-validation measurements using an independent two-tailed Student t test. Exponential regression analysis was used to assess the relationships among blood flow, vessel diameter, and pressure. Univariate analyses of the relationships among vessel diameter, vessel cross-sectional area, AVM volume, AVM pressure, and AVM flow were performed with linear or exponential regression. Modeling results were compared with clinical measurements from vessel locations in cerebral regions, and the model was cross-validated against Qflow and 2D Perfusion. Our results show that the modeling results closely match the clinical measurements, with only small deviations. The proposed cross-validation approach demonstrates the accuracy of our results against a validated product in a clinical environment.
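    A minimal sketch of the statistical comparison described above, assuming one modeled and one measured flow value per vessel location; the file names are hypothetical placeholders.

```python
# Sketch: independent two-tailed t test between modeled and measured flows.
import numpy as np
from scipy.stats import ttest_ind

modeled_flow = np.loadtxt("modeled_flow.txt")    # hypothetical files,
measured_flow = np.loadtxt("measured_flow.txt")  # one value per location

t_stat, p_value = ttest_ind(modeled_flow, measured_flow)
# a large p-value is consistent with no detectable difference between
# the lumped-model predictions and the invasive clinical measurements
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```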

  18. Construct validity of the Moral Development Scale for Professionals (MDSP)

    PubMed Central

    Söderhamn, Olle; Bjørnestad, John Olav; Skisland, Anne; Cliffordson, Christina

    2011-01-01

    The aim of this study was to investigate the construct validity of the Moral Development Scale for Professionals (MDSP) using structural equation modeling. The instrument is a 12-item self-report instrument, developed in the Scandinavian cultural context and based on Kohlberg’s theory. A hypothesized simplex structure model underlying the MDSP was tested through structural equation modeling. Validity was also tested as the proportion of respondents older than 20 years that reached the highest moral level, which according to the theory should be small. A convenience sample of 339 nursing students with a mean age of 25.3 years participated. Results confirmed the simplex model structure, indicating that MDSP reflects a moral construct empirically organized from low to high. A minority of respondents >20 years of age (13.5%) scored more than 80% on the highest moral level. The findings support the construct validity of the MDSP and the stages and levels in Kohlberg’s theory. PMID:21655343

  19. Unsteady Aerodynamic Model Tuning for Precise Flutter Prediction

    NASA Technical Reports Server (NTRS)

    Pak, Chan-Gi

    2011-01-01

    A simple method for unsteady aerodynamic model tuning is proposed in this study. The method is based on direct modification of the aerodynamic influence coefficient matrices. Flight-test data from the Aerostructures Test Wing 2 are used to demonstrate the proposed model tuning method. The flutter speed margin computed using only the test-validated structural dynamic model can be improved using the additional unsteady aerodynamic model tuning, so that the 15% flutter speed margin requirement in military specifications can be applied to the test-validated aeroelastic model. In this study, unsteady aerodynamic model tunings are performed at two time-invariant flight conditions, at Mach numbers of 0.390 and 0.456. As the Mach number for the unsteady aerodynamic model tuning approaches the measured flutter Mach number of 0.502 at a flight altitude of 9,837 ft, the estimated flutter speed approaches the flutter speed measured at this altitude. The minimum difference between the estimated and measured flutter speeds is -0.14%.

  1. Acoustic-Structure Interaction in Rocket Engines: Validation Testing

    NASA Technical Reports Server (NTRS)

    Davis, R. Benjamin; Joji, Scott S.; Parks, Russel A.; Brown, Andrew M.

    2009-01-01

    While analyzing a rocket engine component, it is often necessary to account for any effects that adjacent fluids (e.g., liquid fuels or oxidizers) might have on the structural dynamics of the component. To better characterize the fully coupled fluid-structure system responses, an analytical approach that models the system as a coupled expansion of rigid-wall acoustic modes and in vacuo structural modes has been proposed. The present work seeks to experimentally validate this approach. To experimentally observe well-coupled system modes, the test article and fluid cavities are designed such that the uncoupled structural frequencies are comparable to the uncoupled acoustic frequencies. The test measures the natural frequencies, mode shapes, and forced response of cylindrical test articles in contact with fluid-filled cylindrical and/or annular cavities. The test article is excited with a stinger and the fluid-loaded response is acquired using a laser Doppler vibrometer. The experimentally determined fluid-loaded natural frequencies are compared directly to the results of the analytical model. Due to the geometric configuration of the test article, the analytical model is found to be valid for natural modes with circumferential wave numbers greater than four. For these modes, the natural frequencies predicted by the analytical model demonstrate excellent agreement with the experimentally determined natural frequencies.

  2. The Screening Test for Emotional Problems--Teacher-Report Version (Step-T): Studies of Reliability and Validity

    ERIC Educational Resources Information Center

    Erford, Bradley T.; Butler, Caitlin; Peacock, Elizabeth

    2015-01-01

    The Screening Test for Emotional Problems-Teacher Version (STEP-T) was designed to identify students aged 7-17 years with wide-ranging emotional disturbances. Coefficients alpha and test-retest reliability were adequate for all subscales except Anxiety. The hypothesized five-factor model fit the data very well and external aspects of validity were…

  3. Some guidance on preparing validation plans for the DART Full System Models.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gray, Genetha Anne; Hough, Patricia Diane; Hills, Richard Guy

    2009-03-01

    Planning is an important part of computational model verification and validation (V&V), and the requisite planning document is vital for effectively executing the plan. The document provides a means of communicating intent to the typically large group of people, from program management to analysts to test engineers, who must work together to complete the validation activities. This report provides guidelines for writing a validation plan. It describes the components of such a plan and includes important references and resources. While the initial target audience is the DART Full System Model teams in the nuclear weapons program, the guidelines are generally applicable to other modeling efforts. Our goal in writing this document is to provide a framework for consistency in validation plans across weapon systems, different types of models, and different scenarios. Specific details contained in any given validation plan will vary according to application requirements and available resources.

  4. Modeling of correlated data with informative cluster sizes: An evaluation of joint modeling and within-cluster resampling approaches.

    PubMed

    Zhang, Bo; Liu, Wei; Zhang, Zhiwei; Qu, Yanping; Chen, Zhen; Albert, Paul S

    2017-08-01

    Joint modeling and within-cluster resampling are two approaches used for analyzing correlated data with informative cluster sizes. Motivated by a developmental toxicity study, we examined the performance and validity of these two approaches in testing covariate effects in generalized linear mixed-effects models. We show that the joint modeling approach is robust to misspecification of the cluster size model in terms of Type I and Type II errors when the corresponding covariates are not included in the random effects structure; otherwise, statistical tests may be affected. We also evaluate the performance of the within-cluster resampling procedure and thoroughly investigate its validity for modeling correlated data with informative cluster sizes. We show that within-cluster resampling is a valid alternative to joint modeling for cluster-specific covariates, but it is invalid for time-dependent covariates. The two methods are applied to a developmental toxicity study that investigated the effect of exposure to diethylene glycol dimethyl ether.
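    A minimal sketch of the within-cluster resampling idea for a binary outcome follows. The column names and the logistic model are illustrative; note that full WCR inference uses a within/between variance decomposition rather than the simple spread reported here.

```python
# Sketch: within-cluster resampling (WCR) for clustered binary data.
# Assumes a pandas DataFrame with one row per observation, a cluster id,
# a binary outcome, and covariates (all names hypothetical).
import numpy as np
import pandas as pd
import statsmodels.api as sm

def wcr_logistic(df, cluster_col, y_col, x_cols, n_resamples=1000, seed=0):
    """Fit a logistic model on many 'one observation per cluster'
    resamples and average the coefficients (Hoffman-Sen-Weinberg WCR)."""
    rng = np.random.default_rng(seed)
    groups = df.groupby(cluster_col)
    coefs = []
    for _ in range(n_resamples):
        # draw exactly one observation from each cluster
        sample = groups.sample(n=1, random_state=int(rng.integers(1 << 31)))
        X = sm.add_constant(sample[x_cols])
        fit = sm.Logit(sample[y_col], X).fit(disp=0)
        coefs.append(fit.params.to_numpy())
    coefs = np.asarray(coefs)
    return coefs.mean(axis=0), coefs.std(axis=0)  # std is indicative only
```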

  5. FEMFLOW3D; a finite-element program for the simulation of three-dimensional aquifers; version 1.0

    USGS Publications Warehouse

    Durbin, Timothy J.; Bond, Linda D.

    1998-01-01

    This document also includes model validation, source code, and example input and output files. Model validation was performed using four test problems. For each test problem, the results of a model simulation with FEMFLOW3D were compared with either an analytic solution or the results of an independent numerical approach. The source code, written in the ANSI X3.9-1978 FORTRAN standard, and the complete input and output of an example problem are listed in the appendixes.

  6. [Validity evidence of the Health-Related Quality of Life for Drug Abusers Test based on the Biaxial Model of Addiction].

    PubMed

    Lozano, Oscar M; Rojas, Antonio J; Pérez, Cristino; González-Sáiz, Francisco; Ballesta, Rosario; Izaskun, Bilbao

    2008-05-01

    The aim of this work is to show evidence of the validity of the Health-Related Quality of Life for Drug Abusers Test (HRQoLDA Test). This test was developed to measure specific HRQoL for drugs abusers, within the theoretical addiction framework of the biaxial model. The sample comprised 138 patients diagnosed with opiate drug dependence. In this study, the following constructs and variables of the biaxial model were measured: severity of dependence, physical health status, psychological adjustment and substance consumption. Results indicate that the HRQoLDA Test scores are related to dependency and consumption-related problems. Multiple regression analysis reveals that HRQoL can be predicted from drug dependence, physical health status and psychological adjustment. These results contribute empirical evidence of the theoretical relationships established between HRQoL and the biaxial model, and they support the interpretation of the HRQoLDA Test to measure HRQoL in drug abusers, thus providing a test to measure this specific construct in this population.

  7. TU-D-201-05: Validation of Treatment Planning Dose Calculations: Experience Working with MPPG 5.a

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xue, J; Park, J; Kim, L

    2016-06-15

    Purpose: The newly published medical physics practice guideline MPPG 5.a has set the minimum requirements for commissioning and QA of treatment planning dose calculations. We present our experience in the validation of a commercial treatment planning system based on MPPG 5.a. Methods: In addition to tests traditionally performed to commission a model-based dose calculation algorithm, extensive tests were carried out at short and extended SSDs, various depths, oblique gantry angles and off-axis conditions to verify the robustness and limitations of the dose calculation algorithm. A comparison between measured and calculated dose was performed based on the validation tests and evaluation criteria recommended by MPPG 5.a. An ion chamber was used for the measurement of dose at points of interest, and diodes were used for photon IMRT/VMAT validations. Dose profiles were measured with a three-dimensional scanning system and calculated in the TPS using a virtual water phantom. Results: Calculated and measured absolute dose profiles were compared at each specified SSD and depth for open fields. Disagreement is easily identifiable with the difference curve. Subtle discrepancies revealed the limitations of the measurement, e.g., a spike in the high-dose region and an asymmetrical penumbra observed in the tests with an oblique MLC beam. The excellent results (>98% pass rate with a 3%/3 mm gamma index) on the end-to-end tests for both IMRT and VMAT are attributed to the quality of the beam data and a good understanding of the modeling. The limitations of the model and the uncertainty of the measurements were considered when comparing the results. Conclusion: The extensive tests recommended by the MPPG encourage us to understand the accuracy and limitations of a dose algorithm as well as the uncertainty of measurement. Our experience has shown how the suggested tests can be performed effectively to validate dose calculation models.
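    For readers unfamiliar with the 3%/3 mm criterion mentioned in the results, a minimal one-dimensional gamma-index sketch follows. It uses global dose normalization and a brute-force search; clinical QA tools interpolate the profiles and handle 2D/3D dose grids.

```python
# Sketch: 1D gamma index (illustrative, not a clinical QA tool).
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dd=0.03, dta=3.0):
    """Gamma value at each reference point.
    dd: dose-difference criterion (fraction of max reference dose)
    dta: distance-to-agreement criterion in mm"""
    d_norm = dd * d_ref.max()              # global dose normalization
    gammas = np.empty_like(d_ref, dtype=float)
    for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
        dist2 = ((x_eval - xr) / dta) ** 2
        dose2 = ((d_eval - dr) / d_norm) ** 2
        gammas[i] = np.sqrt(np.min(dist2 + dose2))
    return gammas

# pass rate = fraction of points with gamma <= 1
```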

  8. Development of diagnostic test instruments to reveal level student conception in kinematic and dynamics

    NASA Astrophysics Data System (ADS)

    Handhika, J.; Cari, C.; Suparmi, A.; Sunarno, W.; Purwandari, P.

    2018-03-01

    The purpose of this research was to develop a diagnostic test instrument to reveal students' conceptions in kinematics and dynamics. The diagnostic test was developed based on content indicators for the concepts of (1) displacement and distance, (2) instantaneous and average velocity, (3) zero and constant acceleration, (4) gravitational acceleration, (5) Newton's first law, and (6) Newton's third law. The development model included: analysis of diagnostic test requirements, formulation of test objectives, test development, checking of content validity and reliability, and application of the test. The Content Validity Index (CVI) placed the test in the highly relevant category, with a value of 0.85. Three items received a negative Content Validity Ratio (CVR) of -0.6; after the distractors were revised and the visual presentation clarified, their CVR became 1 (highly relevant). When the test was applied, 16 valid test items were obtained, with a Cronbach's alpha of 0.80. It can be concluded that the diagnostic test can be used to reveal the level of students' conceptions in kinematics and dynamics.
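    The CVR and CVI figures quoted above follow directly from panel ratings. The sketch below assumes a Lawshe-style panel where each expert marks each item as essential (1) or not (0); the example reproduces the -0.6 value that arises when only one of five experts rates an item essential.

```python
# Sketch: Lawshe-style content validity from a 0/1 expert rating matrix.
import numpy as np

def content_validity(ratings):
    """ratings: (n_experts, n_items) array of 0/1 'essential' judgments.
    Returns per-item CVR and a scale CVI (mean of item-level CVIs,
    one common convention)."""
    ratings = np.asarray(ratings)
    n = ratings.shape[0]            # number of experts
    ne = ratings.sum(axis=0)        # experts rating each item essential
    cvr = (ne - n / 2) / (n / 2)    # Lawshe's CVR per item, in [-1, 1]
    i_cvi = ne / n                  # item-level CVI
    return cvr, i_cvi.mean()

# Example: 5 experts, 3 items; item 3 is essential to only 1 expert,
# giving CVR = (1 - 2.5) / 2.5 = -0.6 for that item.
cvr, cvi = content_validity([[1, 1, 0], [1, 1, 0], [1, 1, 1],
                             [1, 0, 0], [1, 1, 0]])
```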

  9. Investigation of Zircaloy-2 oxidation model for SFP accident analysis

    NASA Astrophysics Data System (ADS)

    Nemoto, Yoshiyuki; Kaji, Yoshiyuki; Ogawa, Chihiro; Kondo, Keietsu; Nakashima, Kazuo; Kanazawa, Toru; Tojo, Masayuki

    2017-05-01

    The authors previously conducted thermogravimetric analyses of Zircaloy-2 in air. Using the thermogravimetric data, an oxidation model was constructed in this study so that it can be applied to the modeling of cladding degradation under spent fuel pool (SFP) severe accident conditions. For its validation, oxidation tests of long cladding tubes were conducted, and computational fluid dynamics analyses using the constructed oxidation model were performed to simulate the experiments. In the oxidation tests, a high thermal gradient along the cladding axis was applied, and air flow rates in the testing chamber were controlled to simulate hypothetical SFP accidents. The analytical outputs successfully reproduced the growth of the oxide film and the porous oxide layer on the claddings in the oxidation tests, demonstrating the validity of the oxidation model. The influence of air flow rate on the oxidation behavior was considered negligible under the conditions investigated in this study.

  10. Towards Validation of an Adaptive Flight Control Simulation Using Statistical Emulation

    NASA Technical Reports Server (NTRS)

    He, Yuning; Lee, Herbert K. H.; Davies, Misty D.

    2012-01-01

    Traditional validation of flight control systems is based primarily upon empirical testing. Empirical testing is sufficient for simple systems in which (a) the behavior is approximately linear and (b) humans are in the loop and responsible for off-nominal flight regimes. A different possible concept of operation is to use adaptive flight control systems with online learning neural networks (OLNNs) in combination with a human pilot for off-nominal flight behavior (such as when a plane has been damaged). Validating these systems is difficult because the controller is changing during the flight in a nonlinear way, and because the pilot and the control system have the potential to co-adapt in adverse ways; traditional empirical methods are unlikely to provide any guarantees in this case. Additionally, the time it takes to find unsafe regions within the flight envelope using empirical testing means that the time between adaptive controller design iterations is large. This paper describes a new concept for validating adaptive control systems using methods based on Bayesian statistics. This validation framework allows the analyst to build nonlinear models with modal behavior, and to obtain an uncertainty estimate for the difference between the behaviors of the model and the system under test.
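    A toy sketch of the statistical-emulation ingredient of this framework: fit a Gaussian-process surrogate to a few expensive closed-loop simulation runs and use its predictive uncertainty to flag candidate unsafe envelope regions for further testing. The inputs, response, and threshold are stand-ins, and the paper's Bayesian machinery is richer than this.

```python
# Sketch: Gaussian-process emulator over a 2D "flight envelope".
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_sim(x):
    # placeholder for a closed-loop adaptive-control simulation run
    return np.sin(3 * x[:, 0]) * np.exp(-x[:, 1])

rng = np.random.default_rng(1)
X_train = rng.uniform(0, 1, size=(30, 2))       # sampled envelope points
y_train = expensive_sim(X_train)

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.2, 0.2]),
                              normalize_y=True).fit(X_train, y_train)

X_grid = rng.uniform(0, 1, size=(2000, 2))      # cheap dense candidates
mean, std = gp.predict(X_grid, return_std=True)
# flag points whose plausible response could exceed a safety threshold
suspect = X_grid[mean + 2 * std > 0.9]
```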

  11. Numerical Modelling of Femur Fracture and Experimental Validation Using Bone Simulant.

    PubMed

    Marco, Miguel; Giner, Eugenio; Larraínzar-Garijo, Ricardo; Caeiro, José Ramón; Miguélez, María Henar

    2017-10-01

    Bone fracture pattern prediction is still a challenge and an active field of research. The main goal of this article is to present a combined methodology (experimental and numerical) for femur fracture onset analysis. Experimental work includes the characterization of the mechanical properties and fracture testing on a bone simulant. The numerical work focuses on the development of a model whose material properties are provided by the characterization tests. The fracture location and the early stages of the crack propagation are modelled using the extended finite element method and the model is validated by fracture tests developed in the experimental work. It is shown that the accuracy of the numerical results strongly depends on a proper bone behaviour characterization.

  12. An Investigation to Advance the Technology Readiness Level of the Centaur Derived On-orbit Propellant Storage and Transfer System

    NASA Astrophysics Data System (ADS)

    Silvernail, Nathan L.

    This research was carried out in collaboration with the United Launch Alliance (ULA), to advance an innovative Centaur-based on-orbit propellant storage and transfer system that takes advantage of rotational settling to simplify Fluid Management (FM), specifically enabling settled fluid transfer between two tanks and settled pressure control. This research consists of two specific objectives: (1) technique and process validation and (2) computational model development. In order to raise the Technology Readiness Level (TRL) of this technology, the corresponding FM techniques and processes must be validated in a series of experimental tests, including: laboratory/ground testing, microgravity flight testing, suborbital flight testing, and orbital testing. Researchers from Embry-Riddle Aeronautical University (ERAU) have joined with the Massachusetts Institute of Technology (MIT) Synchronized Position Hold Engage and Reorient Experimental Satellites (SPHERES) team to develop a prototype FM system for operations aboard the International Space Station (ISS). Testing of the integrated system in a representative environment will raise the FM system to TRL 6. The tests will demonstrate the FM system and provide unique data pertaining to the vehicle's rotational dynamics while undergoing fluid transfer operations. These data sets provide insight into the behavior and physical tendencies of the on-orbit refueling system. Furthermore, they provide a baseline for comparison against the data produced by various computational models; thus verifying the accuracy of the models output and validating the modeling approach. Once these preliminary models have been validated, the parameters defined by them will provide the basis of development for accurate simulations of full scale, on-orbit systems. The completion of this project and the models being developed will accelerate the commercialization of on-orbit propellant storage and transfer technologies as well as all in-space technologies that utilize or will utilize similar FM techniques and processes.

  13. Combined Use of Tissue Morphology, Neural Network Analysis of Chromatin Texture and Clinical Variables to Predict Prostate Cancer Aggressiveness from Biopsy Water

    DTIC Science & Technology

    2000-10-01

    Purpose: To combine clinical, serum, pathologic and computer-derived information into an artificial neural network to develop/validate a model to... Development of an artificial neural network (year 02). Prospective validation of this model (projected year 03). All models will be tested and

  14. Validity and reliability of Chinese version of Adult Carer Quality of Life questionnaire (AC-QoL) in family caregivers of stroke survivors

    PubMed Central

    Li, Yingshuang; Ding, Chunge

    2017-01-01

    The Adult Carer Quality of Life questionnaire (AC-QoL) is a reliable and valid instrument used to assess the quality of life (QoL) of adult family caregivers. We explored the psychometric properties and tested the reliability and validity of a Chinese version of the AC-QoL in 409 Chinese stroke caregivers. We used item-total correlations and extreme-group comparisons for item analysis. To evaluate reliability, we used test-retest reliability with the intraclass correlation coefficient (ICC), together with Cronbach's alpha and a model-based internal consistency index; to evaluate validity, we used scale content validity, confirmatory factor analysis (CFA) and exploratory factor analysis (EFA) via principal component analysis with varimax rotation. The CFA did not confirm the original factor model, and our EFA yielded a 31-item measure with a five-factor model. In conclusion, although some items performed differently between the original English version and our Chinese version, the translated AC-QoL is a reliable and valid tool for assessing the quality of life of stroke caregivers in mainland China. The Chinese version of the AC-QoL is a comprehensive measure for understanding caregivers and has the potential to serve as a screening tool to assess caregiver QoL. PMID:29131845

  15. Modeling Adhesive Anchors in a Discrete Element Framework

    PubMed Central

    Marcon, Marco; Vorel, Jan; Ninčević, Krešimir; Wan-Wendner, Roman

    2017-01-01

    In recent years, post-installed anchors are widely used to connect structural members and to fix appliances to load-bearing elements. A bonded anchor typically denotes a threaded bar placed into a borehole filled with adhesive mortar. The high complexity of the problem, owing to the multiple materials and failure mechanisms involved, requires a numerical support for the experimental investigation. A reliable model able to reproduce a system’s short-term behavior is needed before the development of a more complex framework for the subsequent investigation of the lifetime of fasteners subjected to various deterioration processes can commence. The focus of this contribution is the development and validation of such a model for bonded anchors under pure tension load. Compression, modulus, fracture and splitting tests are performed on standard concrete specimens. These serve for the calibration and validation of the concrete constitutive model. The behavior of the adhesive mortar layer is modeled with a stress-slip law, calibrated on a set of confined pull-out tests. The model validation is performed on tests with different configurations comparing load-displacement curves, crack patterns and concrete cone shapes. A model sensitivity analysis and the evaluation of the bond stress and slippage along the anchor complete the study. PMID:28786964

  16. Development and validation of a modified Hybrid-III six-year-old dummy model for simulating submarining in motor-vehicle crashes.

    PubMed

    Hu, Jingwen; Klinich, Kathleen D; Reed, Matthew P; Kokkolaras, Michael; Rupp, Jonathan D

    2012-06-01

    In motor-vehicle crashes, young school-aged children restrained by vehicle seat belt systems often suffer from abdominal injuries due to submarining. However, the current anthropomorphic test device, so-called "crash dummy", is not adequate for proper simulation of submarining. In this study, a modified Hybrid-III six-year-old dummy model capable of simulating and predicting submarining was developed using MADYMO (TNO Automotive Safety Solutions). The model incorporated improved pelvis and abdomen geometry and properties previously tested in a modified physical dummy. The model was calibrated and validated against four sled tests under two test conditions with and without submarining using a multi-objective optimization method. A sensitivity analysis using this validated child dummy model showed that dummy knee excursion, torso rotation angle, and the difference between head and knee excursions were good predictors for submarining status. It was also shown that restraint system design variables, such as lap belt angle, D-ring height, and seat coefficient of friction (COF), may have opposite effects on head and abdomen injury risks; therefore child dummies and dummy models capable of simulating submarining are crucial for future restraint system design optimization for young school-aged children. Copyright © 2011 IPEM. Published by Elsevier Ltd. All rights reserved.

  17. A cell-based assay for aggregation inhibitors as therapeutics of polyglutamine-repeat disease and validation in Drosophila

    NASA Astrophysics Data System (ADS)

    Apostol, Barbara L.; Kazantsev, Alexsey; Raffioni, Simona; Illes, Katalin; Pallos, Judit; Bodai, Laszlo; Slepko, Natalia; Bear, James E.; Gertler, Frank B.; Hersch, Steven; Housman, David E.; Marsh, J. Lawrence; Michels Thompson, Leslie

    2003-05-01

    The formation of polyglutamine-containing aggregates and inclusions are hallmarks of pathogenesis in Huntington's disease that can be recapitulated in model systems. Although the contribution of inclusions to pathogenesis is unclear, cell-based assays can be used to screen for chemical compounds that affect aggregation and may provide therapeutic benefit. We have developed inducible PC12 cell-culture models to screen for loss of visible aggregates. To test the validity of this approach, compounds that inhibit aggregation in the PC12 cell-based screen were tested in a Drosophila model of polyglutamine-repeat disease. The disruption of aggregation in PC12 cells strongly correlates with suppression of neuronal degeneration in Drosophila. Thus, the engineered PC12 cells coupled with the Drosophila model provide a rapid and effective method to screen and validate compounds.

  18. Finite Element Model Development and Validation for Aircraft Fuselage Structures

    NASA Technical Reports Server (NTRS)

    Buehrle, Ralph D.; Fleming, Gary A.; Pappa, Richard S.; Grosveld, Ferdinand W.

    2000-01-01

    The ability to extend the valid frequency range for finite element based structural dynamic predictions using detailed models of the structural components and attachment interfaces is examined for several stiffened aircraft fuselage structures. This extended dynamic prediction capability is needed for the integration of mid-frequency noise control technology. Beam, plate and solid element models of the stiffener components are evaluated. Attachment models between the stiffener and panel skin range from a line along the rivets of the physical structure to a constraint over the entire contact surface. The finite element models are validated using experimental modal analysis results. The increased frequency range results in a corresponding increase in the number of modes, modal density and spatial resolution requirements. In this study, conventional modal tests using accelerometers are complemented with Scanning Laser Doppler Velocimetry and Electro-Optic Holography measurements to further resolve the spatial response characteristics. Whenever possible, component and subassembly modal tests are used to validate the finite element models at lower levels of assembly. Normal mode predictions for different finite element representations of components and assemblies are compared with experimental results to assess the most accurate techniques for modeling aircraft fuselage type structures.

  19. ADVANCED UTILITY SIMULATION MODEL, REPORT OF SENSITIVITY TESTING, CALIBRATION, AND MODEL OUTPUT COMPARISONS (VERSION 3.0)

    EPA Science Inventory

    The report gives results of activities relating to the Advanced Utility Simulation Model (AUSM): sensitivity testing. comparison with a mature electric utility model, and calibration to historical emissions. The activities were aimed at demonstrating AUSM's validity over input va...

  20. Validating soil denitrification models based on laboratory N2 and N2O fluxes and underlying processes derived by stable isotope approaches

    NASA Astrophysics Data System (ADS)

    Well, Reinhard; Böttcher, Jürgen; Butterbach-Bahl, Klaus; Dannenmann, Michael; Deppe, Marianna; Dittert, Klaus; Dörsch, Peter; Horn, Marcus; Ippisch, Olaf; Mikutta, Robert; Müller, Carsten; Müller, Christoph; Senbayram, Mehmet; Vogel, Hans-Jörg; Wrage-Mönnig, Nicole

    2016-04-01

    Robust denitrification data suitable for validating soil N2 fluxes in denitrification models are scarce due to methodological limitations and the extreme spatio-temporal heterogeneity of denitrification in soils. Numerical models have become essential tools to predict denitrification at different scales. Model performance can be tested for total gaseous flux (NO + N2O + N2), for individual denitrification products (e.g. N2O and/or NO), or for the effect of denitrification factors (e.g. C availability, respiration, diffusivity, anaerobic volume, etc.). While there are numerous examples of validating N2O fluxes, there are neither robust field data of N2 fluxes nor sufficiently resolved measurements of the control factors used as state variables in the models. To the best of our knowledge, there has to date been only one published validation of modelled soil N2 flux, using a laboratory data set to validate an ecosystem model. Hence there is a need for validation data at both the mesocosm and the field scale, including validation of individual denitrification controls. Here we present the concept for collecting model validation data, which is part of the DFG research unit "Denitrification in Agricultural Soils: Integrated Control and Modelling at Various Scales (DASIM)" starting this year. We will use novel approaches including analysis of stable isotopes, microbial communities, pore structure and organic matter fractions to provide denitrification data sets comprising as much detail on activity and regulation as possible, as a basis to validate existing and calibrate new denitrification models that are applied and/or developed by DASIM subprojects. The basic idea is to simulate "field-like" conditions as far as possible in an automated mesocosm system without plants, in order to mimic processes in the soil parts not significantly influenced by the rhizosphere (rhizosphere soils are studied by other DASIM projects). Hence, to allow model testing over a wide range of conditions, denitrification control factors will be varied in the initial settings (pore volume, plant residues, mineral N, pH) and also over time, where moisture, temperature, and mineral N will be manipulated according to typical time patterns in the field. This will be realized by including precipitation events, fertilization (via irrigation), drainage (via water potential) and temperature changes in the course of the incubations. Moreover, oxygen concentration will be varied to simulate anaerobic events. These data will be used to calibrate the denitrification models newly developed within DASIM as well as existing ones. One goal of DASIM is to create a public database as a joint basis for model testing by denitrification modellers. We therefore invite contributions of suitable data sets from the scientific community; requirements will be briefly outlined.

  1. V-SUIT Model Validation Using PLSS 1.0 Test Results

    NASA Technical Reports Server (NTRS)

    Olthoff, Claas

    2015-01-01

    The dynamic portable life support system (PLSS) simulation software Virtual Space Suit (V-SUIT) has been under development at the Technische Universität München since 2011 as a spin-off from the Virtual Habitat (V-HAB) project. The MATLAB(trademark)-based V-SUIT simulates space suit portable life support systems and their interaction with a detailed and dynamic human model, as well as the dynamic external environment of a space suit moving on a planetary surface. To demonstrate the feasibility of a large, system-level simulation like V-SUIT, a model of NASA's PLSS 1.0 prototype was created. This prototype was run through an extensive series of tests in 2011. Since the test setup was heavily instrumented, it produced a wealth of data, making it ideal for model validation. The implemented model includes all components of the PLSS in both the ventilation and thermal loops. The major components are modeled in greater detail, while smaller and ancillary components are low-fidelity black box models. The major components include the Rapid Cycle Amine (RCA) CO2 removal system, the Primary and Secondary Oxygen Assembly (POS/SOA), the Pressure Garment System Volume Simulator (PGSVS), the Human Metabolic Simulator (HMS), the heat exchanger between the ventilation and thermal loops, the Space Suit Water Membrane Evaporator (SWME) and finally the Liquid Cooling Garment Simulator (LCGS). Using the created model, dynamic simulations were performed using the same test points used during PLSS 1.0 testing. The results of the simulation were then compared to the test data, with special focus on absolute values during the steady-state phases and dynamic behavior during the transitions between test points. Quantified simulation results are presented that demonstrate which areas of the V-SUIT model are in need of further refinement and which are sufficiently close to the test results. Finally, lessons learned from the modelling and validation process are given, in combination with implications for the future development of other PLSS models in V-SUIT.

  2. System-Integrated Finite Element Analysis of a Full-Scale Helicopter Crash Test with Deployable Energy Absorbers

    NASA Technical Reports Server (NTRS)

    Annett, Martin S.; Polanco, Michael A.

    2010-01-01

    A full-scale crash test of an MD-500 helicopter was conducted in December 2009 at NASA Langley's Landing and Impact Research facility (LandIR). The MD-500 helicopter was fitted with a composite honeycomb Deployable Energy Absorber (DEA) and tested under vertical and horizontal impact velocities of 26-ft/sec and 40-ft/sec, respectively. The objectives of the test were to evaluate the performance of the DEA concept under realistic crash conditions and to generate test data for validation of a system integrated finite element model. In preparation for the full-scale crash test, a series of sub-scale and MD-500 mass simulator tests was conducted to evaluate the impact performances of various components, including a new crush tube and the DEA blocks. Parameters defined within the system integrated finite element model were determined from these tests. The objective of this paper is to summarize the finite element models developed and analyses performed, beginning with pre-test predictions and continuing through post-test validation.

  3. LS-DYNA Analysis of a Full-Scale Helicopter Crash Test

    NASA Technical Reports Server (NTRS)

    Annett, Martin S.

    2010-01-01

    A full-scale crash test of an MD-500 helicopter was conducted in December 2009 at NASA Langley's Landing and Impact Research facility (LandIR). The MD-500 helicopter was fitted with a composite honeycomb Deployable Energy Absorber (DEA) and tested under vertical and horizontal impact velocities of 26 ft/sec and 40 ft/sec, respectively. The objectives of the test were to evaluate the performance of the DEA concept under realistic crash conditions and to generate test data for validation of a system-integrated LS-DYNA finite element model. In preparation for the full-scale crash test, a series of sub-scale and MD-500 mass simulator tests was conducted to evaluate the impact performance of various components, including a new crush tube and the DEA blocks. Parameters defined within the system-integrated finite element model were determined from these tests. The objective of this paper is to summarize the finite element models developed and analyses performed, beginning with pre-test predictions and continuing through post-test validation.

  4. A Comprehensive Validation Methodology for Sparse Experimental Data

    NASA Technical Reports Server (NTRS)

    Norman, Ryan B.; Blattnig, Steve R.

    2010-01-01

    A comprehensive program of verification and validation has been undertaken to assess the applicability of models to space radiation shielding applications and to track progress as models are developed over time. The models are placed under configuration control, and automated validation tests are used so that comparisons can readily be made as models are improved. Though direct comparisons between theoretical results and experimental data are desired for validation purposes, such comparisons are not always possible due to lack of data. In this work, two uncertainty metrics are introduced that are suitable for validating theoretical models against sparse experimental databases. The nuclear physics models, NUCFRG2 and QMSFRG, are compared to an experimental database consisting of over 3600 experimental cross sections to demonstrate the applicability of the metrics. A cumulative uncertainty metric is applied to the question of overall model accuracy, while a metric based on the median uncertainty is used to analyze the models from the perspective of model development by analyzing subsets of the model parameter space.
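    In the spirit of the two metrics described above, the sketch below computes a cumulative and a median uncertainty from paired model and experimental cross sections; the exact relative-difference form used in the paper is not reproduced here and is an assumption.

```python
# Sketch: cumulative and median uncertainty metrics for validating a
# model against a sparse experimental cross-section database.
import numpy as np

def validation_metrics(model, experiment):
    """model, experiment: arrays of cross sections at matching conditions."""
    rel = np.abs(model - experiment) / experiment   # relative deviation
    cumulative = rel.sum()     # overall accuracy across the whole database
    median = np.median(rel)    # robust "typical" model uncertainty,
                               # useful when tracking model development
    return cumulative, median
```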

  5. 6DOF Testing of the SLS Inertial Navigation Unit

    NASA Technical Reports Server (NTRS)

    Geohagan, Kevin; Bernard, Bill; Oliver, T. Emerson; Leggett, Jared; Strickland, Dennis

    2018-01-01

    The Navigation System on the NASA Space Launch System (SLS) Block 1 vehicle performs initial alignment of the Inertial Navigation System (INS) navigation frame through gyrocompass alignment (GCA). Because the navigation architecture for the SLS Block 1 vehicle is purely inertial, the accuracy of the achieved orbit relative to mission requirements is very sensitive to initial alignment accuracy. The assessment of this sensitivity, among many others, via simulation is part of the SLS Model-Based Design and Model-Based Requirements approach. As part of this approach, 6DOF Monte Carlo simulation is used in large part to develop and demonstrate verification of program requirements. To facilitate this and the GN&C flight software design process, an SLS-Program-controlled Design Math Model (DMM) of the SLS INS was developed by the SLS Navigation Team. The SLS INS model implements all of the key functions of the hardware, namely GCA, inertial navigation, and FDIR (Fault Detection, Isolation, and Recovery), in support of SLS GN&C design requirements verification. Despite the strong sensitivity to initial alignment, GCA accuracy requirements were not verified by test due to program cost and schedule constraints. Instead, the program relies upon assessments performed using the SLS INS model. In order to verify SLS program requirements by analysis, the SLS INS model is verified and validated against flight hardware. In lieu of direct testing of GCA accuracy in support of requirement verification, the SLS Navigation Team proposed and conducted an engineering test to, among other things, validate the GCA performance and overall behavior of the SLS INS model through comparison with test data. This paper details dynamic hardware testing of the SLS INS, conducted by the SLS Navigation Team at Marshall Space Flight Center's 6DOF Table Facility, in support of GCA performance characterization and INS model validation. A 6DOF motion platform was used to produce pad twist and sway dynamics while a simulated SLS flight computer communicated with the INS. Tests conducted include an evaluation of GCA algorithm robustness to increasingly dynamic pad environments, an examination of GCA algorithm stability and accuracy over long durations, and a long-duration static test to gather enough data for Allan variance analysis. Test setup, execution, and data analysis are discussed, including analysis performed in support of SLS INS model validation.
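    The long-duration static test mentioned above feeds an Allan variance analysis, the standard tool for separating gyro noise terms. A minimal overlapping Allan deviation sketch for static rate data is given below, with illustrative units and sample rates.

```python
# Sketch: overlapping Allan deviation from static gyro rate samples.
import numpy as np

def allan_deviation(rate, fs, taus):
    """rate: gyro rate samples [deg/s]; fs: sample rate [Hz];
    taus: averaging times [s]. Returns sigma(tau)."""
    theta = np.cumsum(rate) / fs               # integrated angle [deg]
    sigmas = []
    for tau in taus:
        m = int(tau * fs)                      # samples per averaging window
        d = theta[2 * m:] - 2 * theta[m:-m] + theta[:-2 * m]
        avar = np.mean(d ** 2) / (2 * tau ** 2)
        sigmas.append(np.sqrt(avar))
    return np.array(sigmas)

# Slopes of log(sigma) vs log(tau) identify angle random walk (-1/2),
# bias instability (0), and rate random walk (+1/2) regions.
```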

  6. The development and validation of a clinical prediction model to determine the probability of MODY in patients with young-onset diabetes.

    PubMed

    Shields, B M; McDonald, T J; Ellard, S; Campbell, M J; Hyde, C; Hattersley, A T

    2012-05-01

    Diagnosing MODY is difficult. To date, selection for molecular genetic testing for MODY has used discrete cut-offs of limited clinical characteristics with varying sensitivity and specificity. We aimed to use multiple, weighted, clinical criteria to determine an individual's probability of having MODY, as a crucial tool for rational genetic testing. We developed prediction models using logistic regression on data from 1,191 patients with MODY (n = 594), type 1 diabetes (n = 278) and type 2 diabetes (n = 319). Model performance was assessed by receiver operating characteristic (ROC) curves, cross-validation and validation in a further 350 patients. The models defined an overall probability of MODY using a weighted combination of the most discriminative characteristics. For MODY, compared with type 1 diabetes, these were: lower HbA(1c), parent with diabetes, female sex and older age at diagnosis. MODY was discriminated from type 2 diabetes by: lower BMI, younger age at diagnosis, female sex, lower HbA(1c), parent with diabetes, and not being treated with oral hypoglycaemic agents or insulin. Both models showed excellent discrimination (c-statistic = 0.95 and 0.98, respectively), low rates of cross-validated misclassification (9.2% and 5.3%), and good performance on the external test dataset (c-statistic = 0.95 and 0.94). Using the optimal cut-offs, the probability models improved the sensitivity (91% vs 72%) and specificity (94% vs 91%) for identifying MODY compared with standard criteria of diagnosis <25 years and an affected parent. The models are now available online at www.diabetesgenes.org . We have developed clinical prediction models that calculate an individual's probability of having MODY. This allows an improved and more rational approach to determine who should have molecular genetic testing.
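    A hedged sketch of the model-building recipe: logistic regression on the MODY-versus-type-1 predictors named above, assessed by a cross-validated c-statistic. The data file and column names are hypothetical stand-ins, and the published models' exact weights are not reproduced.

```python
# Sketch: logistic discrimination of MODY vs type 1 diabetes with a
# cross-validated c-statistic (ROC AUC).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

features = ["hba1c", "parent_with_diabetes", "female", "age_at_diagnosis"]
df = pd.read_csv("diabetes_cohort.csv")         # hypothetical file
X, y = df[features], df["mody"]                 # y: 1 = MODY, 0 = type 1

model = LogisticRegression(max_iter=1000)
c_stats = cross_val_score(model, X, y, cv=10, scoring="roc_auc")
print(f"cross-validated c-statistic: {c_stats.mean():.2f}")
# model.fit(X, y).predict_proba(X)[:, 1] gives each patient's MODY
# probability, the quantity served by the online calculator.
```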

  7. Establishment of a VISAR Measurement System for Material Model Validation in DSTO

    DTIC Science & Technology

    2013-02-01

    advancements published in the works by L.M. Barker, R.E. Hollenbach and W.F. Hemsing [1-3] and results in the user-friendly interface and configuration of the ... VISAR system [4] used in the current work. VISAR tests are among the mandatory instrumentation techniques when validating material models and ... The present work reports on preliminary tests using the recently commissioned DSTO VISAR system, providing an assessment of the experimental set-up

  8. Thermal Vacuum Test Correlation of A Zero Propellant Load Case Thermal Capacitance Propellant Gauging Analytics Model

    NASA Technical Reports Server (NTRS)

    McKim, Stephen A.

    2016-01-01

    This thesis describes the development and test data validation of the thermal model that is the foundation of a thermal capacitance spacecraft propellant load estimator. Specific details of creating the thermal model for the diaphragm propellant tank used on NASA's Magnetospheric Multiscale (MMS) spacecraft using ANSYS, and the correlation process implemented to validate the model, are presented. The thermal model was correlated to within plus or minus 3 degrees Centigrade of the thermal vacuum test data, and was found to be relatively insensitive to uncertainties in applied heat flux and mass knowledge of the tank. More work is needed, however, to refine the thermal model to further improve temperature predictions in the upper hemisphere of the propellant tank. Temperature predictions in this portion were found to be 2 to 2.5 degrees Centigrade lower than the test data. A road map to apply the model to predict propellant loads on the actual MMS spacecraft toward its end of life in 2017-2018 is also presented.

  9. Modeling and experimental validation of a Hybridized Energy Storage System for automotive applications

    NASA Astrophysics Data System (ADS)

    Fiorenti, Simone; Guanetti, Jacopo; Guezennec, Yann; Onori, Simona

    2013-11-01

    This paper presents the development and experimental validation of a dynamic model of a Hybridized Energy Storage System (HESS) consisting of a parallel connection of a lead acid (PbA) battery and double layer capacitors (DLCs), for automotive applications. The dynamic modeling of both the PbA battery and the DLC has been tackled via the equivalent-electric-circuit approach. Experimental tests are designed for identification purposes. Parameters of the PbA battery model are identified as a function of state of charge and current direction, whereas parameters of the DLC model are identified for different temperatures. A physical HESS has been assembled at the Center for Automotive Research at The Ohio State University and used as a test bench to validate the model against a typical current profile generated for Start&Stop applications. The HESS model is then integrated into a vehicle simulator to assess the effects of the battery hybridization on the vehicle fuel economy and mitigation of the battery stress.
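
    A minimal sketch of the equivalent-electric-circuit approach described above, assuming a zeroth-order battery model (open-circuit voltage plus series resistance) in parallel with an ideal capacitor and its series resistance. Parameter values are illustrative, not the identified ones from the paper's test data.

        import numpy as np

        # Illustrative parameters (not identified from the paper's tests)
        OCV, R_B = 12.6, 0.02     # battery open-circuit voltage [V], resistance [ohm]
        C_DLC, R_C = 100.0, 0.01  # DLC capacitance [F] and series resistance [ohm]

        def simulate_hess(i_load, dt=0.01):
            """Zeroth-order equivalent-circuit sketch of a PbA battery in
            parallel with a double-layer capacitor sharing a load current."""
            v_c = OCV                      # DLC pre-charged to battery voltage
            v_t, i_batt, i_dlc = [], [], []
            for i in i_load:
                # Both branches see the same terminal voltage V:
                #   (OCV - V)/R_B + (v_c - V)/R_C = i
                v = (OCV / R_B + v_c / R_C - i) / (1.0 / R_B + 1.0 / R_C)
                ib = (OCV - v) / R_B
                ic = (v_c - v) / R_C
                v_c -= ic * dt / C_DLC     # capacitor discharges when ic > 0
                v_t.append(v); i_batt.append(ib); i_dlc.append(ic)
            return np.array(v_t), np.array(i_batt), np.array(i_dlc)

        # Start&Stop-like pulse: 200 A cranking for 1 s, then rest
        profile = np.concatenate([np.full(100, 200.0), np.zeros(200)])
        v, ib, ic = simulate_hess(profile)
        print(f"min terminal voltage: {v.min():.2f} V, "
              f"peak battery current: {ib.max():.1f} A")

    Note how the lower-resistance DLC branch absorbs most of the pulse current, which is the intended benefit of the hybridization.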

  10. Prediction of functional aerobic capacity without exercise testing

    NASA Technical Reports Server (NTRS)

    Jackson, A. S.; Blair, S. N.; Mahar, M. T.; Wier, L. T.; Ross, R. M.; Stuteville, J. E.

    1990-01-01

    The purpose of this study was to develop functional aerobic capacity prediction models without using exercise tests (N-Ex) and to compare their accuracy with Astrand single-stage submaximal prediction methods. The data of 2,009 subjects (9.7% female) were randomly divided into validation (N = 1,543) and cross-validation (N = 466) samples. The validation sample was used to develop two N-Ex models to estimate VO2peak. Gender, age, body composition, and self-reported activity were used to develop the two N-Ex prediction models. One model estimated percent fat from skinfolds (N-Ex %fat) and the other used body mass index (N-Ex BMI) to represent body composition. The multiple correlations for the developed models were R = 0.81 (SE = 5.3 ml.kg-1.min-1) and R = 0.78 (SE = 5.6 ml.kg-1.min-1). This accuracy was confirmed when the models were applied to the cross-validation sample. The N-Ex models were more accurate than VO2peak estimates obtained from the Astrand prediction models, whose SEs ranged from 5.5-9.7 ml.kg-1.min-1. The N-Ex models were further cross-validated on 59 men on hypertensive medication and 71 men who were found to have a positive exercise ECG. The SEs of the N-Ex models ranged from 4.6-5.4 ml.kg-1.min-1 with these subjects. (ABSTRACT TRUNCATED AT 250 WORDS)
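
    The validation/cross-validation split used in this study can be sketched generically as below: fit a linear model on the development sample, then report the standard error on the held-out sample. The data here are synthetic and the column layout (gender, age, body composition, activity) is assumed for illustration.

        import numpy as np

        def fit_and_cross_validate(X, y, n_val):
            """Least-squares fit on the development portion, standard error of
            estimate (SEE) on the held-out cross-validation portion."""
            Xd, yd = X[:-n_val], y[:-n_val]      # development sample
            Xc, yc = X[-n_val:], y[-n_val:]      # cross-validation sample
            A = np.column_stack([np.ones(len(Xd)), Xd])
            beta, *_ = np.linalg.lstsq(A, yd, rcond=None)
            pred = np.column_stack([np.ones(len(Xc)), Xc]) @ beta
            see = np.sqrt(np.mean((yc - pred) ** 2))
            return beta, see

        # Synthetic stand-in for the 2,009-subject dataset
        rng = np.random.default_rng(0)
        X = rng.normal(size=(2009, 4))
        y = 45 + X @ np.array([3.0, -4.0, -5.0, 4.0]) + rng.normal(0, 5.3, 2009)
        beta, see = fit_and_cross_validate(X, y, n_val=466)
        print(f"SEE on cross-validation sample: {see:.1f} ml/kg/min")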

  11. LIVVkit: An extensible, python-based, land ice verification and validation toolkit for ice sheet models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kennedy, Joseph H.; Bennett, Andrew R.; Evans, Katherine J.

    To address the pressing need to better understand the behavior and complex interaction of ice sheets within the global Earth system, significant development of continental-scale, dynamical ice sheet models is underway. Concurrent to the development of the Community Ice Sheet Model (CISM), the corresponding verification and validation (V&V) process is being coordinated through a new, robust, Python-based extensible software package, the Land Ice Verification and Validation toolkit (LIVVkit). Incorporated into the typical ice sheet model development cycle, it provides robust and automated numerical verification, software verification, performance validation, and physical validation analyses on a variety of platforms, from personal laptops to the largest supercomputers. LIVVkit operates on sets of regression test and reference data sets, and provides comparisons for a suite of community prioritized tests, including configuration and parameter variations, bit-for-bit evaluation, and plots of model variables to indicate where differences occur. LIVVkit also provides an easily extensible framework to incorporate and analyze results of new intercomparison projects, new observation data, and new computing platforms. LIVVkit is designed for quick adaptation to additional ice sheet models via abstraction of model-specific code, functions, and configurations into an ice sheet model description bundle outside the main LIVVkit structure. Furthermore, through shareable and accessible analysis output, LIVVkit is intended to help developers build confidence in their models and enhance the credibility of ice sheet models overall.

  12. LIVVkit: An extensible, python-based, land ice verification and validation toolkit for ice sheet models

    DOE PAGES

    Kennedy, Joseph H.; Bennett, Andrew R.; Evans, Katherine J.; ...

    2017-03-23

    To address the pressing need to better understand the behavior and complex interaction of ice sheets within the global Earth system, significant development of continental-scale, dynamical ice sheet models is underway. Concurrent to the development of the Community Ice Sheet Model (CISM), the corresponding verification and validation (V&V) process is being coordinated through a new, robust, Python-based extensible software package, the Land Ice Verification and Validation toolkit (LIVVkit). Incorporated into the typical ice sheet model development cycle, it provides robust and automated numerical verification, software verification, performance validation, and physical validation analyses on a variety of platforms, from personal laptops to the largest supercomputers. LIVVkit operates on sets of regression test and reference data sets, and provides comparisons for a suite of community prioritized tests, including configuration and parameter variations, bit-for-bit evaluation, and plots of model variables to indicate where differences occur. LIVVkit also provides an easily extensible framework to incorporate and analyze results of new intercomparison projects, new observation data, and new computing platforms. LIVVkit is designed for quick adaptation to additional ice sheet models via abstraction of model-specific code, functions, and configurations into an ice sheet model description bundle outside the main LIVVkit structure. Furthermore, through shareable and accessible analysis output, LIVVkit is intended to help developers build confidence in their models and enhance the credibility of ice sheet models overall.
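
    The bit-for-bit evaluation mentioned above reduces, at its core, to an exact array comparison with an optional tolerance fallback, as in the generic sketch below. This is a plain illustration, not LIVVkit's actual interface.

        import numpy as np

        def bit_for_bit(test, reference, rtol=0.0, atol=0.0):
            """Compare a model output variable against a reference data set.
            With rtol = atol = 0 this is a strict bit-for-bit check; nonzero
            tolerances turn it into an approximate regression comparison."""
            test, reference = np.asarray(test), np.asarray(reference)
            if test.shape != reference.shape:
                return False, None
            diff = np.abs(test - reference)
            ok = np.allclose(test, reference, rtol=rtol, atol=atol)
            return ok, diff.max()

        ok, max_diff = bit_for_bit([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
        print(ok, max_diff)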

  13. A 45-Second Self-Test for Cardiorespiratory Fitness: Heart Rate-Based Estimation in Healthy Individuals

    PubMed Central

    Bonato, Matteo; Papini, Gabriele; Bosio, Andrea; Mohammed, Rahil A.; Bonomi, Alberto G.; Moore, Jonathan P.; Merati, Giampiero; La Torre, Antonio; Kubis, Hans-Peter

    2016-01-01

    Cardio-respiratory fitness (CRF) is a widespread essential indicator in Sports Science as well as in Sports Medicine. This study aimed to develop and validate a prediction model for CRF based on a 45 second self-test, which can be conducted anywhere. A criterion validity, test-retest study was set up to accomplish our objectives. Data from 81 healthy volunteers (age: 29 ± 8 years, BMI: 24.0 ± 2.9), 18 of whom were female, were used to validate this test against the gold standard. Nineteen volunteers repeated the test twice in order to evaluate its repeatability. CRF estimation models were developed using heart rate (HR) features extracted from the resting, exercise, and recovery phases. The most predictive HR feature was the intercept of the linear equation fitting the HR values during the recovery phase, normalized for height2 (r2 = 0.30). The Ruffier-Dickson Index (RDI), which was originally developed for this squat test, showed a significant negative correlation with CRF (r = -0.40), but explained only 15% of the variability in CRF. A multivariate model based on RDI, sex, age and height increased the explained variability up to 53%, with a cross-validation (CV) error of 0.532 L·min-1 and substantial repeatability (ICC = 0.91). The best predictive multivariate model made use of the linear intercept of HR at the beginning of the recovery, normalized for height2 and age2; this had an adjusted r2 = 0.59, a CV error of 0.495 L·min-1 and substantial repeatability (ICC = 0.93). It also had higher agreement in classifying CRF levels (κ = 0.42) than the RDI-based model (κ = 0.29). In conclusion, this simple 45 s self-test can be used to estimate and classify CRF in healthy individuals with moderate accuracy and large repeatability when HR recovery features are included. PMID:27959935

  14. A 45-Second Self-Test for Cardiorespiratory Fitness: Heart Rate-Based Estimation in Healthy Individuals.

    PubMed

    Sartor, Francesco; Bonato, Matteo; Papini, Gabriele; Bosio, Andrea; Mohammed, Rahil A; Bonomi, Alberto G; Moore, Jonathan P; Merati, Giampiero; La Torre, Antonio; Kubis, Hans-Peter

    2016-01-01

    Cardio-respiratory fitness (CRF) is a widespread essential indicator in Sports Science as well as in Sports Medicine. This study aimed to develop and validate a prediction model for CRF based on a 45 second self-test, which can be conducted anywhere. A criterion validity, test-retest study was set up to accomplish our objectives. Data from 81 healthy volunteers (age: 29 ± 8 years, BMI: 24.0 ± 2.9), 18 of whom were female, were used to validate this test against the gold standard. Nineteen volunteers repeated the test twice in order to evaluate its repeatability. CRF estimation models were developed using heart rate (HR) features extracted from the resting, exercise, and recovery phases. The most predictive HR feature was the intercept of the linear equation fitting the HR values during the recovery phase, normalized for height2 (r2 = 0.30). The Ruffier-Dickson Index (RDI), which was originally developed for this squat test, showed a significant negative correlation with CRF (r = -0.40), but explained only 15% of the variability in CRF. A multivariate model based on RDI, sex, age and height increased the explained variability up to 53%, with a cross-validation (CV) error of 0.532 L·min-1 and substantial repeatability (ICC = 0.91). The best predictive multivariate model made use of the linear intercept of HR at the beginning of the recovery, normalized for height2 and age2; this had an adjusted r2 = 0.59, a CV error of 0.495 L·min-1 and substantial repeatability (ICC = 0.93). It also had higher agreement in classifying CRF levels (κ = 0.42) than the RDI-based model (κ = 0.29). In conclusion, this simple 45 s self-test can be used to estimate and classify CRF in healthy individuals with moderate accuracy and large repeatability when HR recovery features are included.
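
    The most predictive feature described above, the linear intercept of HR at the start of recovery normalized by height squared, can be extracted as in the following sketch. The sampling rate and values are assumed purely for illustration.

        import numpy as np

        def recovery_intercept_feature(t, hr, height_m):
            """Fit a line to heart-rate samples from the recovery phase and
            return the intercept (HR at the start of recovery) normalized by
            height^2, mirroring the feature reported for the 45 s self-test."""
            slope, intercept = np.polyfit(t, hr, 1)
            return intercept / height_m ** 2

        # e.g., HR sampled once per second for the first 30 s of recovery
        t = np.arange(30)
        hr = 140 - 1.2 * t + np.random.default_rng(1).normal(0, 2, 30)
        print(recovery_intercept_feature(t, hr, height_m=1.78))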

  15. Patient and Societal Value Functions for the Testing Morbidities Index

    PubMed Central

    Swan, John Shannon; Kong, Chung Yin; Lee, Janie M.; Akinyemi, Omosalewa; Halpern, Elkan F.; Lee, Pablo; Vavinskiy, Sergey; Williams, Olubunmi; Zoltick, Emilie S.; Donelan, Karen

    2013-01-01

    Background We developed preference-based and summated scale scoring for the Testing Morbidities Index (TMI) classification, which addresses short-term effects on quality of life from diagnostic testing before, during and after a testing procedure. Methods The two TMI value functions utilize multiattribute value techniques; one is patient-based and the other has a societal perspective. 206 breast biopsy patients and 466 (societal) subjects informed the models. Due to a lack of standard short-term methods for this application, we utilized the visual analog scale (VAS). Waiting trade-off (WTO) tolls provided an additional option for linear transformation of the TMI. We randomized participants to one of three surveys: the first derived weights for generic testing morbidity attributes and levels of severity with the VAS; a second developed VAS values and WTO tolls for linear transformation of the TMI to a death-healthy scale; the third addressed initial validation in a specific test (breast biopsy). 188 patients and 425 community subjects participated in initial validation, comparing direct VAS and WTO values to the TMI. Alternative TMI scoring as a non-preference summated scale was included, given evidence of construct and content validity. Results The patient model can use an additive function, while the societal model is multiplicative. Direct VAS and the VAS-scaled TMI were correlated across modeling groups (r=0.45 to 0.62) and agreement was comparable to the value function validation of the Health Utilities Index 2. Mean Absolute Difference (MAD) calculations showed a range of 0.07–0.10 in patients and 0.11–0.17 in subjects. MAD for direct WTO tolls compared to the WTO-scaled TMI varied closely around one quality-adjusted life day. Conclusions The TMI shows initial promise in measuring short-term testing-related health states. PMID:23689044

  16. Estimation of Unsteady Aerodynamic Models from Dynamic Wind Tunnel Data

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick; Klein, Vladislav

    2011-01-01

    Demanding aerodynamic modelling requirements for military and civilian aircraft have motivated researchers to improve computational and experimental techniques and to pursue closer collaboration in these areas. Model identification and validation techniques are key components for this research. This paper presents mathematical model structures and identification techniques that have been used successfully to model more general aerodynamic behaviours in single-degree-of-freedom dynamic testing. Model parameters, characterizing aerodynamic properties, are estimated using linear and nonlinear regression methods in both time and frequency domains. Steps in identification including model structure determination, parameter estimation, and model validation, are addressed in this paper with examples using data from one-degree-of-freedom dynamic wind tunnel and water tunnel experiments. These techniques offer a methodology for expanding the utility of computational methods in application to flight dynamics, stability, and control problems. Since flight test is not always an option for early model validation, time history comparisons are commonly made between computational and experimental results and model adequacy is inferred by corroborating results. An extension is offered to this conventional approach where more general model parameter estimates and their standard errors are compared.

  17. The psychometric validation of the Social Problem-Solving Inventory--Revised with UK incarcerated sexual offenders.

    PubMed

    Wakeling, Helen C

    2007-09-01

    This study examined the reliability and validity of the Social Problem-Solving Inventory--Revised (SPSI-R; D'Zurilla, Nezu, & Maydeu-Olivares, 2002) with a population of incarcerated sexual offenders. An availability sample of 499 adult male sexual offenders was used. The SPSI-R had good reliability measured by internal consistency and test-retest reliability, and adequate validity. Construct validity was determined via factor analysis. An exploratory factor analysis extracted a two-factor model. This model was then tested against the theory-driven five-factor model using confirmatory factor analysis. The five-factor model was selected as the better fitting of the two, and confirmed the model according to social problem-solving theory (D'Zurilla & Nezu, 1982). The SPSI-R had good convergent validity; significant correlations were found between SPSI-R subscales and measures of self-esteem, impulsivity, and locus of control. SPSI-R subscales were however found to significantly correlate with a measure of socially desirable responding. This finding is discussed in relation to recent research suggesting that impression management may not invalidate self-report measures (e.g. Mills & Kroner, 2005). The SPSI-R was sensitive to sexual offender intervention, with problem-solving improving pre to post-treatment in both rapists and child molesters. The study concludes that the SPSI-R is a reasonably internally valid and appropriate tool to assess problem-solving in sexual offenders. However future research should cross-validate the SPSI-R with other behavioural outcomes to examine the external validity of the measure. Furthermore, future research should utilise a control group to determine treatment impact.

  18. What Different Kinds of Stratification Can Reveal about the Generalizability of Data-Mined Skill Assessment Models

    ERIC Educational Resources Information Center

    Sao Pedro, Michael A.; Baker, Ryan S. J. d.; Gobert, Janice D.

    2013-01-01

    When validating assessment models built with data mining, generalization is typically tested at the student-level, where models are tested on new students. This approach, though, may fail to find cases where model performance suffers if other aspects of those cases relevant to prediction are not well represented. We explore this here by testing if…

  19. Measuring Children's Environmental Attitudes and Values in Northwest Mexico: Validating a Modified Version of Measures to Test the Model of Ecological Values (2-MEV)

    ERIC Educational Resources Information Center

    Schneller, A. J.; Johnson, B.; Bogner, F. X.

    2015-01-01

    This paper describes the validation process of measuring children's attitudes and values toward the environment within a Mexican sample. We applied the Model of Ecological Values (2-MEV), which has been shown to be valid and reliable in 20 countries, including one Spanish speaking culture. Items were initially modified to fit the regional dialect,…

  20. Reliability and validity of electrothermometers and associated thermocouples.

    PubMed

    Jutte, Lisa S; Knight, Kenneth L; Long, Blaine C

    2008-02-01

    Examine thermocouple model uncertainty (reliability + validity). First, a 3x3 repeated measures design with independent variables electrothermometer and thermocouple model. Second, a 1x3 repeated measures design with independent variable subprobe. Three electrothermometers, 3 thermocouple models, a multi-sensor probe and a mercury thermometer measured a stable water bath. Temperature and absolute temperature differences between thermocouples and a mercury thermometer were the outcome measures. Thermocouple uncertainty was greater than manufacturers' claims. For all thermocouple models, validity and reliability were better in the Iso-Themex than the Datalogger, but there were no practical differences between models within an electrothermometer. Validity of multi-sensor probes and thermocouples within a probe were not different but were greater than manufacturers' claims. Reliability of multiprobes and thermocouples within a probe were within manufacturers' claims. Thermocouple models vary in reliability and validity. Scientists should test and report the uncertainty of their equipment rather than depending on manufacturers' claims.
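
    As a generic illustration of the uncertainty assessment advocated above, validity can be scored as the mean absolute difference from a reference thermometer and reliability as the spread of repeated readings of a stable bath. The readings below are made up.

        import numpy as np

        def uncertainty_stats(readings, reference):
            """Validity as mean absolute difference from a reference value,
            reliability as the standard deviation of repeated readings."""
            readings = np.asarray(readings, dtype=float)
            validity = np.mean(np.abs(readings - reference))
            reliability = np.std(readings, ddof=1)
            return validity, reliability

        # Repeated readings of a stable water bath vs. a mercury reference of 25.00 C
        v, r = uncertainty_stats([25.3, 25.2, 25.4, 25.3], reference=25.00)
        print(f"validity (MAD): {v:.2f} C, reliability (SD): {r:.2f} C")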

  1. On the validation of a code and a turbulence model appropriate to circulation control airfoils

    NASA Technical Reports Server (NTRS)

    Viegas, J. R.; Rubesin, M. W.; Maccormack, R. W.

    1988-01-01

    A computer code for calculating flow about a circulation control airfoil within a wind tunnel test section has been developed. This code is being validated for eventual use as an aid to design such airfoils. The concept of code validation being used is explained. The initial stages of the process have been accomplished. The present code has been applied to a low-subsonic, 2-D flow about a circulation control airfoil for which extensive data exist. Two basic turbulence models and variants thereof have been successfully introduced into the algorithm, the Baldwin-Lomax algebraic and the Jones-Launder two-equation models of turbulence. The variants include adding a history of the jet development for the algebraic model and adding streamwise curvature effects for both models. Numerical difficulties and difficulties in the validation process are discussed. Turbulence model and code improvements to proceed with the validation process are also discussed.

  2. The Persian Version of the "Life Satisfaction Scale": Construct Validity and Test-Re-Test Reliability among Iranian Older Adults.

    PubMed

    Moghadam, Manije; Salavati, Mahyar; Sahaf, Robab; Rassouli, Maryam; Moghadam, Mojgan; Kamrani, Ahmad Ali Akbari

    2018-03-01

    After forward-backward translation, the LSS was administered to 334 Persian-speaking, cognitively healthy elderly people aged 60 years and over, recruited through convenience sampling. To analyze the validity of the model's constructs and the relationships between the constructs, a confirmatory factor analysis followed by PLS analysis was performed. Construct validity was further investigated by calculating the correlations between the LSS and the "Short Form Health Survey" (SF-36) subscales measuring similar and dissimilar constructs. The LSS was re-administered to 50 participants a month later to assess reliability. For the eight-factor model of the life satisfaction construct, adequate goodness of fit between the hypothesized model and the model derived from the sample data was attained (positive and statistically significant beta coefficients, good R-squares and acceptable GoF). Construct validity was supported by convergent and discriminant validity, and by correlations between the LSS and SF-36 subscales. A minimum Intraclass Correlation Coefficient level of 0.60 was exceeded by all subscales, as was the minimum level of the reliability indices (Cronbach's α, composite reliability and indicator reliability). The Persian version of the Life Satisfaction Scale is a reliable and valid instrument, with psychometric properties consistent with the original version.

  3. FUEL ASSEMBLY SHAKER TEST SIMULATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klymyshyn, Nicholas A.; Sanborn, Scott E.; Adkins, Harold E.

    This report describes the modeling of a PWR fuel assembly under dynamic shock loading in support of the Sandia National Laboratories (SNL) shaker test campaign. The focus of the test campaign is on evaluating the response of used fuel to shock and vibration loads that can occur during highway transport. Modeling began in 2012 using an LS-DYNA fuel assembly model that was first created for modeling impact scenarios. SNL's proposed test scenario was simulated through analysis, and the calculated results helped guide the instrumentation and other aspects of the testing. During FY 2013, the fuel assembly model was refined to better represent the test surrogate. Analysis of the proposed loads suggested the frequency band needed to be lowered to attempt to excite the lower natural frequencies of the fuel assembly. Despite SNL's expansion of lower frequency components in their five shock realizations, pretest predictions suggested a very mild dynamic response to the test loading. After testing was completed, one specific shock case was modeled, using recorded accelerometer data to excite the model. Direct comparison of predicted strain in the cladding was made to the recorded strain gauge data. The magnitudes of both sets of strain (calculated and recorded) are very low compared to the expected yield strength of the Zircaloy-4 material. The model was accurate enough to predict that no yielding of the cladding was expected, but its precision at predicting micro strains is questionable. The SNL test data offer some opportunity for validation of the finite element model, but the specific loading conditions of the testing only excite the fuel assembly to respond in a limited manner. For example, the test accelerations were not strong enough to substantially drive the fuel assembly out of contact with the basket. Under this test scenario, the fuel assembly model does a reasonable job of approximating actual fuel assembly response, a claim that can be verified through direct comparison of model results to recorded test results. This does not offer validation for the fuel assembly model in all conceivable cases, such as high kinetic energy shock cases where the fuel assembly might lift off the basket floor and strike the basket ceiling. This type of nonlinear behavior was not witnessed in testing, so the model does not have test data to provide a basis for validation in cases that substantially alter the fuel assembly response range. This leads to a gap in knowledge that is identified through this modeling study. The SNL shaker testing loaded a surrogate fuel assembly with a certain set of artificially generated time histories. One thing all the shock cases had in common was an elimination of low frequency components, which reduces the rigid body dynamic response of the system. It is not known if the SNL test cases effectively bound all highway transportation scenarios, or if significantly greater rigid body motion than was tested is credible. This knowledge gap could be filled through modeling the vehicle dynamics of a used fuel conveyance, or by collecting acceleration time history data from an actual conveyance under highway conditions.

  4. Design and Development Computer-Based E-Learning Teaching Material for Improving Mathematical Understanding Ability and Spatial Sense of Junior High School Students

    NASA Astrophysics Data System (ADS)

    Nurjanah; Dahlan, J. A.; Wibisono, Y.

    2017-02-01

    This paper aims to design and develop computer-based e-learning teaching materials for improving the mathematical understanding ability and spatial sense of junior high school students. The particular aims are: (1) to produce a teaching material design, an evaluation model, and instruments to measure the mathematical understanding ability and spatial sense of junior high school students; (2) to conduct trials of the computer-based e-learning teaching material model, assessment, and instruments; (3) to complete the teaching material models of computer-based e-learning and assessment; and (4) to deliver the resulting research product, computer-based e-learning teaching materials in the form of an interactive learning disc. The research method used in this study is developmental research, conducted by thought experiment and instruction experiment. The results showed that the teaching materials could be used very well, based on validation of the computer-based e-learning teaching materials by 5 multimedia experts. The 5 validators gave consistent judgements of the face and content validity of each test item for mathematical understanding ability and spatial sense. The reliability coefficients of the mathematical understanding ability and spatial sense tests are 0.929 and 0.939, respectively, which is very high, and both tests meet high and very high validity criteria.

  5. Multi-Evaporator Miniature Loop Heat Pipe for Small Spacecraft Thermal Control. Part 1; New Technologies and Validation Approach

    NASA Technical Reports Server (NTRS)

    Ku, Jentung; Ottenstein, Laura; Douglas, Donya; Hoang, Triem

    2010-01-01

    Under NASA's New Millennium Program Space Technology 8 (ST 8) Project, four experiments (Thermal Loop, Dependable Microprocessor, SAILMAST, and UltraFlex) were conducted to advance the maturity of individual technologies from proof of concept to prototype demonstration in a relevant environment, i.e., from a technology readiness level (TRL) of 3 to a level of 6. This paper presents the new technologies and the validation approach of the Thermal Loop experiment. The Thermal Loop is an advanced thermal control system consisting of a miniature loop heat pipe (MLHP) with multiple evaporators and multiple condensers designed for future small-system applications requiring low mass, low power, and compactness. The MLHP retains all features of state-of-the-art loop heat pipes (LHPs) and offers additional advantages to enhance the functionality, performance, versatility, and reliability of the system. Details of the thermal loop concept, technical advances, benefits, objectives, level 1 requirements, and performance characteristics are described. Also included in the paper are descriptions of the test articles and mathematical modeling used for the technology validation. An MLHP breadboard was built and tested in the laboratory and thermal vacuum environments for TRL 4 and TRL 5 validations, and an MLHP proto-flight unit was built and tested in a thermal vacuum chamber for the TRL 6 validation. In addition, an analytical model was developed to simulate the steady-state and transient behaviors of the MLHP during various validation tests. Capabilities and limitations of the analytical model are also addressed.

  6. Validation of the filament winding process model

    NASA Technical Reports Server (NTRS)

    Calius, Emilo P.; Springer, George S.; Wilson, Brian A.; Hanson, R. Scott

    1987-01-01

    Tests were performed toward validating the WIND model developed previously for simulating the filament winding of composite cylinders. In these tests, two 24 in. long, 8 in. diameter, 0.285 in. thick cylinders, made of IM-6G fibers and HBRF-55 resin, were wound at ±45 deg on steel mandrels. The temperatures on the inner and outer surfaces and inside the composite cylinders were recorded during oven cure. The temperatures inside the cylinders were also calculated by the WIND model. The measured and calculated temperatures were then compared. In addition, the degree of cure and resin viscosity distributions inside the cylinders were calculated for the conditions which existed in the tests.

  7. Iced Aircraft Flight Data for Flight Simulator Validation

    NASA Technical Reports Server (NTRS)

    Ratvasky, Thomas P.; Blankenship, Kurt; Rieke, William; Brinker, David J.

    2003-01-01

    NASA is developing and validating technology to incorporate aircraft icing effects into a flight training device concept demonstrator. Flight simulation models of a DHC-6 Twin Otter were developed from wind tunnel data using a subscale, complete aircraft model with and without simulated ice, and from previously acquired flight data. The validation of the simulation models required additional aircraft response time histories of the airplane configured with simulated ice similar to the subscale model testing. Therefore, a flight test was conducted using the NASA Twin Otter Icing Research Aircraft. Over 500 maneuvers of various types were conducted in this flight test. The validation data consisted of aircraft state parameters, pilot inputs, propulsion, weight, center of gravity, and moments of inertia with the airplane configured with different amounts of simulated ice. Emphasis was placed on acquiring data at wing stall and tailplane stall, since these events are of primary interest to model accurately in the flight training device. Analyses of several datasets are described regarding wing and tailplane stall. Key findings from these analyses are that the simulated wing ice shapes significantly reduced the CL,max, while the simulated tail ice caused elevator control force anomalies and tailplane stall when flaps were deflected 30 deg or greater. This effectively reduced the safe operating margins between iced-wing and iced-tail stall as flap deflection and thrust were increased. This flight test demonstrated that the critical aspects to be modeled in the icing effects flight training device include: iced-wing and iced-tail stall speeds, flap and thrust effects, control forces, and control effectiveness.

  8. Testing the validity of the International Atomic Energy Agency (IAEA) safety culture model.

    PubMed

    López de Castro, Borja; Gracia, Francisco J; Peiró, José M; Pietrantoni, Luca; Hernández, Ana

    2013-11-01

    This paper takes the first steps to empirically validate the widely used model of safety culture of the International Atomic Energy Agency (IAEA), composed of five dimensions, further specified by 37 attributes. To do so, three independent and complementary studies are presented. First, 290 students serve to collect evidence about the face validity of the model. Second, 48 experts in organizational behavior judge its content validity. And third, 468 workers in a Spanish nuclear power plant help to reveal how closely the theoretical five-dimensional model can be replicated. Our findings suggest that several attributes of the model may not be related to their corresponding dimensions. According to our results, a one-dimensional structure fits the data better than the five dimensions proposed by the IAEA. Moreover, the IAEA model, as it stands, seems to have rather moderate content validity and low face validity. Practical implications for researchers and practitioners are included. Copyright © 2013 Elsevier Ltd. All rights reserved.

  9. Pre-engineering Spaceflight Validation of Environmental Models and the 2005 HZETRN Simulation Code

    NASA Technical Reports Server (NTRS)

    Nealy, John E.; Cucinotta, Francis A.; Wilson, John W.; Badavi, Francis F.; Dachev, Ts. P.; Tomov, B. T.; Walker, Steven A.; DeAngelis, Giovanni; Blattnig, Steve R.; Atwell, William

    2006-01-01

    The HZETRN code has been identified by NASA for engineering design in the next phase of space exploration, highlighting a return to the Moon in preparation for a Mars mission. In response, a new series of algorithms, beginning with 2005 HZETRN, will be issued, correcting some prior limitations and improving control of propagated errors, along with established code verification processes. Code validation processes will use new and improved low Earth orbit (LEO) environmental models with a recently improved International Space Station (ISS) shield model to validate computational models and procedures using measured data aboard the ISS. These validated models will provide a basis for flight-testing the designs of future space vehicles and systems of the Constellation program in the LEO environment.

  10. Developing a short measure of organizational justice: a multisample health professionals study.

    PubMed

    Elovainio, Marko; Heponiemi, Tarja; Kuusio, Hannamaria; Sinervo, Timo; Hintsa, Taina; Aalto, Anna-Mari

    2010-11-01

    To develop and test the validity of a short version of the original questionnaire measuring organizational justice. The study samples comprised working physicians (n = 2,792) and registered nurses (n = 2,137) from the Finnish Health Professionals study. Structural equation modelling was applied to test structural validity, using the justice scales. Furthermore, criterion validity was explored with well-being (sleeping problems) and health indicators (psychological distress/self-rated health). The short version of the organizational justice questionnaire (eight items) provides satisfactory psychometric properties (internal consistency and a good model fit to the data). All scales were associated with an increased risk of sleeping problems and psychological distress, indicating satisfactory criterion validity. This short version of the organizational justice questionnaire provides a useful tool for epidemiological studies focused on the adverse health effects of the work environment.

  11. The PDB_REDO server for macromolecular structure model optimization

    PubMed Central

    Joosten, Robbie P.; Long, Fei; Murshudov, Garib N.; Perrakis, Anastassis

    2014-01-01

    The refinement and validation of a crystallographic structure model is the last step before the coordinates and the associated data are submitted to the Protein Data Bank (PDB). The success of the refinement procedure is typically assessed by validating the models against geometrical criteria and the diffraction data, and is an important step in ensuring the quality of the PDB public archive [Read et al. (2011), Structure, 19, 1395-1412]. The PDB_REDO procedure aims for ‘constructive validation’, aspiring to consistent and optimal refinement parameterization and pro-active model rebuilding, not only correcting errors but striving for optimal interpretation of the electron density. A web server for PDB_REDO has been implemented, allowing thorough, consistent and fully automated optimization of the refinement procedure in REFMAC and partial model rebuilding. The goal of the web server is to help practicing crystallographers to improve their model prior to submission to the PDB. For this, additional steps were implemented in the PDB_REDO pipeline, both in the refinement procedure, e.g. testing of resolution limits and k-fold cross-validation for small test sets, and as new validation criteria, e.g. the density-fit metrics implemented in EDSTATS and ligand validation as implemented in YASARA. Innovative ways to present the refinement and validation results to the user are also described, which together with auto-generated Coot scripts can guide users to subsequent model inspection and improvement. It is demonstrated that using the server can lead to substantial improvement of structure models before they are submitted to the PDB. PMID:25075342

  12. Validating archetypes for the Multiple Sclerosis Functional Composite.

    PubMed

    Braun, Michael; Brandt, Alexander Ulrich; Schulz, Stefan; Boeker, Martin

    2014-08-03

    Numerous information models for electronic health records, such as openEHR archetypes, are available. The quality of such clinical models is important to guarantee standardised semantics and to facilitate their interoperability. However, validation aspects are not sufficiently regarded yet. The objective of this report is to investigate the feasibility of archetype development and its community-based validation process, presuming that this review process is a practical way to ensure high-quality information models amending the formal reference model definitions. A standard archetype development approach was applied on a case set of three clinical tests for multiple sclerosis assessment: after an analysis of the tests, the obtained data elements were organised and structured. The appropriate archetype class was selected and the data elements were implemented in an iterative refinement process. Clinical and information modelling experts validated the models in a structured review process. Four new archetypes were developed and publicly deployed in the openEHR Clinical Knowledge Manager, an online platform provided by the openEHR Foundation. Afterwards, these four archetypes were validated by domain experts in a team review. The review was a formalised process, organised in the Clinical Knowledge Manager. Both the development and the review process turned out to be time-consuming tasks, mostly due to difficult selection processes between alternative modelling approaches. The archetype review was a straightforward team process with the goal to validate archetypes pragmatically. The quality of medical information models is crucial to guarantee standardised semantic representation in order to improve interoperability. The validation process is a practical way to better harmonise models that diverge due to necessary flexibility left open by the underlying formal reference model definitions. This case study provides evidence that both community- and tool-enabled review processes, structured in the Clinical Knowledge Manager, ensure archetype quality. It offers a pragmatic but feasible way to reduce variation in the representation of clinical information models towards a more unified and interoperable model.

  13. Validating archetypes for the Multiple Sclerosis Functional Composite

    PubMed Central

    2014-01-01

    Background Numerous information models for electronic health records, such as openEHR archetypes, are available. The quality of such clinical models is important to guarantee standardised semantics and to facilitate their interoperability. However, validation aspects are not sufficiently regarded yet. The objective of this report is to investigate the feasibility of archetype development and its community-based validation process, presuming that this review process is a practical way to ensure high-quality information models amending the formal reference model definitions. Methods A standard archetype development approach was applied on a case set of three clinical tests for multiple sclerosis assessment: after an analysis of the tests, the obtained data elements were organised and structured. The appropriate archetype class was selected and the data elements were implemented in an iterative refinement process. Clinical and information modelling experts validated the models in a structured review process. Results Four new archetypes were developed and publicly deployed in the openEHR Clinical Knowledge Manager, an online platform provided by the openEHR Foundation. Afterwards, these four archetypes were validated by domain experts in a team review. The review was a formalised process, organised in the Clinical Knowledge Manager. Both the development and the review process turned out to be time-consuming tasks, mostly due to difficult selection processes between alternative modelling approaches. The archetype review was a straightforward team process with the goal to validate archetypes pragmatically. Conclusions The quality of medical information models is crucial to guarantee standardised semantic representation in order to improve interoperability. The validation process is a practical way to better harmonise models that diverge due to necessary flexibility left open by the underlying formal reference model definitions. This case study provides evidence that both community- and tool-enabled review processes, structured in the Clinical Knowledge Manager, ensure archetype quality. It offers a pragmatic but feasible way to reduce variation in the representation of clinical information models towards a more unified and interoperable model. PMID:25087081

  14. PIV Measurements of the CEV Hot Abort Motor Plume for CFD Validation

    NASA Technical Reports Server (NTRS)

    Wernet, Mark; Wolter, John D.; Locke, Randy; Wroblewski, Adam; Childs, Robert; Nelson, Andrea

    2010-01-01

    NASA's next manned launch platforms for missions to the Moon and Mars are the Orion and Ares systems. Many critical aspects of the launch system performance are being verified using computational fluid dynamics (CFD) predictions. The Orion Launch Abort Vehicle (LAV) consists of a tower-mounted tractor rocket tasked with carrying the Crew Module (CM) safely away from the launch vehicle in the event of a catastrophic failure during the vehicle's ascent. Some of the predictions involving the launch abort system flow fields produced conflicting results, which required further investigation through ground test experiments. Ground tests were performed to acquire data from a hot supersonic jet in cross-flow for the purpose of validating CFD turbulence modeling relevant to the Orion LAV. Both 2-component axial-plane Particle Image Velocimetry (PIV) and 3-component cross-stream Stereo Particle Image Velocimetry (SPIV) measurements were obtained on a model of an Abort Motor (AM). Actual flight conditions could not be simulated on the ground, so the highest temperature and pressure conditions that could be safely used in the test facility (nozzle pressure ratio of 28.5 and nozzle temperature ratio of 3) were used for the validation tests. These conditions are significantly different from those of the flight vehicle, but were sufficiently high to begin addressing the turbulence modeling issues that predicated the need for the validation tests.

  15. Commercial Supersonics Technology Project - Status of Airport Noise

    NASA Technical Reports Server (NTRS)

    Bridges, James

    2016-01-01

    The Commercial Supersonic Technology Project has been developing databases, computational tools, and system models to prepare for a level 1 milestone, the Low Noise Propulsion Tech Challenge, to be delivered Sept 2016. Steps taken to prepare for the final validation test are given, including system analysis, code validation, and risk reduction testing.

  16. Does IQ Really Predict Job Performance?

    PubMed Central

    Richardson, Ken; Norgate, Sarah H.

    2015-01-01

    IQ has played a prominent part in developmental and adult psychology for decades. In the absence of a clear theoretical model of internal cognitive functions, however, construct validity for IQ tests has always been difficult to establish. Test validity, therefore, has always been indirect, by correlating individual differences in test scores with what are assumed to be other criteria of intelligence. Job performance has, for several reasons, been one such criterion. Correlations of around 0.5 have been regularly cited as evidence of test validity, and as justification for the use of the tests in developmental studies, in educational and occupational selection and in research programs on sources of individual differences. Here, those correlations are examined together with the quality of the original data and the many corrections needed to arrive at them. It is concluded that considerable caution needs to be exercised in citing such correlations for test validation purposes. PMID:26405429

  17. The Role of Integrated Modeling in the Design and Verification of the James Webb Space Telescope

    NASA Technical Reports Server (NTRS)

    Mosier, Gary E.; Howard, Joseph M.; Johnston, John D.; Parrish, Keith A.; Hyde, T. Tupper; McGinnis, Mark A.; Bluth, Marcel; Kim, Kevin; Ha, Kong Q.

    2004-01-01

    The James Webb Space Telescope (JWST) is a large, infrared-optimized space telescope scheduled for launch in 2011. System-level verification of critical optical performance requirements will rely on integrated modeling to a considerable degree. In turn, requirements for the accuracy of the models are significant. The size of the lightweight observatory structure, coupled with the need to test at cryogenic temperatures, effectively precludes validation of the models and verification of optical performance with a single test in 1-g. Rather, a complex series of steps is planned by which the components of the end-to-end models are validated at various levels of subassembly, and the ultimate verification of optical performance is by analysis using the assembled models. This paper describes the critical optical performance requirements driving the integrated modeling activity, shows how the error budget is used to allocate and track contributions to total performance, and presents examples of integrated modeling methods and results that support the preliminary observatory design. Finally, the concepts for model validation and the role of integrated modeling in the ultimate verification of the observatory are described.

  18. Multireader multicase reader studies with binary agreement data: simulation, analysis, validation, and sizing.

    PubMed

    Chen, Weijie; Wunderlich, Adam; Petrick, Nicholas; Gallas, Brandon D

    2014-10-01

    We treat multireader multicase (MRMC) reader studies for which a reader's diagnostic assessment is converted to binary agreement (1: agree with the truth state, 0: disagree with the truth state). We present a mathematical model for simulating binary MRMC data with a desired correlation structure across readers, cases, and two modalities, assuming the expected probability of agreement is equal for the two modalities (P1 = P2). This model can be used to validate the coverage probabilities of 95% confidence intervals (of P1, P2, or P1 - P2 when P1 - P2 = 0), validate the type I error of a superiority hypothesis test, and size a noninferiority hypothesis test (which assumes P1 = P2). To illustrate the utility of our simulation model, we adapt the Obuchowski-Rockette-Hillis (ORH) method for the analysis of MRMC binary agreement data. Moreover, we use our simulation model to validate the ORH method for binary data and to illustrate sizing in a noninferiority setting. Our software package is publicly available on the Google code project hosting site for use in simulation, analysis, validation, and sizing of MRMC reader studies with binary agreement data.

  19. Multireader multicase reader studies with binary agreement data: simulation, analysis, validation, and sizing

    PubMed Central

    Chen, Weijie; Wunderlich, Adam; Petrick, Nicholas; Gallas, Brandon D.

    2014-01-01

    We treat multireader multicase (MRMC) reader studies for which a reader’s diagnostic assessment is converted to binary agreement (1: agree with the truth state, 0: disagree with the truth state). We present a mathematical model for simulating binary MRMC data with a desired correlation structure across readers, cases, and two modalities, assuming the expected probability of agreement is equal for the two modalities (P1=P2). This model can be used to validate the coverage probabilities of 95% confidence intervals (of P1, P2, or P1−P2 when P1−P2=0), validate the type I error of a superiority hypothesis test, and size a noninferiority hypothesis test (which assumes P1=P2). To illustrate the utility of our simulation model, we adapt the Obuchowski–Rockette–Hillis (ORH) method for the analysis of MRMC binary agreement data. Moreover, we use our simulation model to validate the ORH method for binary data and to illustrate sizing in a noninferiority setting. Our software package is publicly available on the Google code project hosting site for use in simulation, analysis, validation, and sizing of MRMC reader studies with binary agreement data. PMID:26158051
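
    One simple way to realize the kind of correlated binary data described above is a latent-Gaussian threshold model with shared reader and case effects, as sketched below. This is a plausible construction for illustration, not necessarily the authors' exact simulation model, and the variance components are assumed.

        import numpy as np
        from scipy.stats import norm

        def simulate_binary_mrmc(n_readers, n_cases, p_agree,
                                 var_reader=0.1, var_case=0.2, seed=0):
            """Correlated binary agreement scores via a latent-Gaussian
            threshold model: shared reader and case effects induce the
            correlation structure; total latent variance is fixed at 1 so
            the marginal agreement probability equals p_agree."""
            rng = np.random.default_rng(seed)
            mu = norm.ppf(p_agree)
            r = rng.normal(0.0, np.sqrt(var_reader), (n_readers, 1))
            c = rng.normal(0.0, np.sqrt(var_case), (1, n_cases))
            eps = rng.normal(0.0, np.sqrt(1.0 - var_reader - var_case),
                             (n_readers, n_cases))
            return (mu + r + c + eps > 0).astype(int)

        scores = simulate_binary_mrmc(n_readers=5, n_cases=200, p_agree=0.8)
        print(scores.mean())   # should be near 0.8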

  20. Test, revision, and cross-validation of the Physical Activity Self-Definition Model.

    PubMed

    Kendzierski, Deborah; Morganstein, Mara S

    2009-08-01

    Structural equation modeling was used to test an extended version of the Kendzierski, Furr, and Schiavoni (1998) Physical Activity Self-Definition Model. A revised model using data from 622 runners fit the data well. Cross-validation indices supported the revised model, and this model also provided a good fit to data from 397 cyclists. Partial invariance was found across activities. In both samples, perceived commitment and perceived ability had direct effects on self-definition, and perceived wanting, perceived trying, and enjoyment had indirect effects. The contribution of perceived ability to self-definition did not differ across activities. Implications concerning the original model, indirect effects, skill salience, and the role of context in self-definition are discussed.

  1. A Capable and Temporary Test Facility on a Shoestring Budget: The MSL Touchdown Test Facility

    NASA Technical Reports Server (NTRS)

    White, Christopher V.; Frankovich, John K.; Yates, Philip; Wells, George, Jr.; Losey, Robert

    2008-01-01

    The Mars Science Laboratory mission (MSL) has undertaken a developmental Touchdown Test Program that utilizes a full-scale rover vehicle and an overhead winch system to replicate the skycrane landing event. Landing surfaces consisting of flat and sloped granular media, planar, rigid surfaces, and various combinations of rocks and slopes were studied. Information gathered from these tests was vital for validating the rover analytical model, validating certain design or system behavior assumptions, and for exploring events and phenomenon that are either very difficult or too costly to model in a credible way. This paper describes this test program, with a focus on the creation of test facility, daily test operations, and some of the challenges faced and lessons learned along the way.

  2. Coupled Hydro-Mechanical Constitutive Model for Vegetated Soils: Validation and Applications

    NASA Astrophysics Data System (ADS)

    Switala, Barbara Maria; Veenhof, Rick; Wu, Wei; Askarinejad, Amin

    2016-04-01

    It is well known that the presence of vegetation influences the stability of a slope. However, the quantitative assessment of this contribution remains challenging. It is essential to develop a numerical model which combines mechanical root reinforcement and root water uptake, and allows modelling rainfall-induced landslides of vegetated slopes. Therefore, a novel constitutive formulation is proposed, based on the modified Cam-clay model for unsaturated soils. Mechanical root reinforcement is modelled by introducing a new constitutive parameter, which governs the evolution of the Cam-clay failure surface with the degree of root reinforcement. Evapotranspiration is modelled in terms of the root water uptake, defined as a sink term in the water flow continuity equation. The original concept is extended to different shapes of the root architecture in three dimensions, and combined with the mechanical model. The model is implemented in the research finite element code Comes-Geo and in the commercial software Abaqus. The formulation is tested by performing a series of numerical examples, which allow validation of the concept. The direct shear test and the triaxial test are modelled in order to test the performance of the mechanical part of the model. In order to validate the hydrological part of the constitutive formulation, evapotranspiration from a vegetated box is simulated and compared with experimental results. The obtained numerical results exhibit good agreement with the experimental data. The implemented model is capable of reproducing the results of basic geotechnical laboratory tests. Moreover, the constitutive formulation can be used to model rainfall-induced landslides of vegetated slopes, taking into account the most important factors influencing slope stability (root reinforcement and evapotranspiration).

  3. Multivariate statistical assessment of predictors of firefighters' muscular and aerobic work capacity.

    PubMed

    Lindberg, Ann-Sofie; Oksa, Juha; Antti, Henrik; Malm, Christer

    2015-01-01

    Physical capacity has previously been deemed important for firefighters' physical work capacity, and aerobic fitness, muscular strength, and muscular endurance are the most frequently investigated parameters of importance. Traditionally, bivariate and multivariate linear regression statistics have been used to study relationships between physical capacities and work capacities among firefighters. An alternative way to handle datasets consisting of numerous correlated variables is to use multivariate projection analyses, such as Orthogonal Projection to Latent Structures (OPLS). The first aim of the present study was to evaluate the prediction and predictive power of field and laboratory tests, respectively, on firefighters' physical work capacity on selected work tasks, and to study whether valid predictions could be achieved without anthropometric data. The second aim was to externally validate selected models, and the third aim was to validate selected models on both firefighters and civilians. A total of 38 (26 men and 12 women) and 90 (38 men and 52 women) subjects were included in the models and the external validation, respectively. The best prediction (R2) and predictive power (Q2) for the Stairs, Pulling, Demolition, Terrain, and Rescue work capacities were obtained with models including field tests (R2 = 0.73 to 0.84, Q2 = 0.68 to 0.82). The external validation was best for the Stairs work capacity (R2 = 0.80) and worst for the Demolition work capacity (R2 = 0.40). In conclusion, field and laboratory tests could equally well predict physical work capacities for firefighting work tasks, and models excluding anthropometric data were valid. The predictive power was satisfactory for all included work tasks except Demolition.
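
    The predictive power Q2 reported above is the cross-validated analogue of R2. The sketch below computes it with plain least squares standing in for the OPLS projection, purely for illustration on synthetic data.

        import numpy as np

        def q2_cross_validated(X, y, k=7, seed=0):
            """Q2 = 1 - PRESS/SS from k-fold cross-validation, the figure of
            merit used alongside R2 in projection-based regression models."""
            rng = np.random.default_rng(seed)
            idx = rng.permutation(len(y))
            folds = np.array_split(idx, k)
            press = 0.0
            for f in folds:
                train = np.setdiff1d(idx, f)
                A = np.column_stack([np.ones(len(train)), X[train]])
                beta, *_ = np.linalg.lstsq(A, y[train], rcond=None)
                pred = np.column_stack([np.ones(len(f)), X[f]]) @ beta
                press += np.sum((y[f] - pred) ** 2)
            return 1.0 - press / np.sum((y - y.mean()) ** 2)

        # Synthetic demonstration
        rng = np.random.default_rng(1)
        X = rng.normal(size=(128, 6))
        y = X @ rng.normal(size=6) + rng.normal(0, 0.5, 128)
        print(f"Q2 = {q2_cross_validated(X, y):.2f}")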

  4. Land Ice Verification and Validation Kit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-07-15

    To address a pressing need to better understand the behavior and complex interaction of ice sheets within the global Earth system, significant development of continental-scale, dynamical ice-sheet models is underway. The associated verification and validation process of these models is being coordinated through a new, robust, python-based extensible software package, the Land Ice Verification and Validation toolkit (LIVV). This release provides robust and automated verification and performance evaluation on LCF platforms. The performance V&V involves a comprehensive comparison of model performance relative to expected behavior on a given computing platform. LIVV operates on a set of benchmark and test data, and provides comparisons for a suite of community-prioritized tests, including configuration and parameter variations, bit-for-bit evaluation, and plots of tests where differences occur.

  5. Development Instrument’s Learning of Physics Through Scientific Inquiry Model Based Batak Culture to Improve Science Process Skill and Student’s Curiosity

    NASA Astrophysics Data System (ADS)

    Nasution, Derlina; Syahreni Harahap, Putri; Harahap, Marabangun

    2018-03-01

    This research aims to (1) develop learning instruments for physics (lesson plan, worksheet, student's book, teacher's guide book, and test instrument) based on a scientific inquiry learning model grounded in Batak culture, intended to improve students' science process skills and curiosity; and (2) describe the quality of the developed instruments for high school use. This is development research, adapting the development model of Thiagarajan, Semmel, and Semmel; its stages are followed until a valid, practical, and effective set of physics learning instruments is obtained: (1) the definition phase, (2) the planning phase, and (3) the development phase. Testing included expert validation, small-group trials, and limited classroom trials; the limited classroom trials were conducted in class X MIA at SMAN 1 Padang Bolak. The research produced (1) physics learning instruments on static fluids for high school grade 10 (lesson plan, worksheet, student's book, teacher's guide book, and test instrument) of a quality worthy of use in the learning process, and (2) instrument components that meet the criteria of being valid, practical, and effective for improving students' science process skills and curiosity.

  6. Development and validation of a risk-prediction nomogram for in-hospital mortality in adults poisoned with drugs and nonpharmaceutical agents

    PubMed Central

    Lionte, Catalina; Sorodoc, Victorita; Jaba, Elisabeta; Botezat, Alina

    2017-01-01

    Acute poisoning with drugs and nonpharmaceutical agents represents an important challenge in the emergency department (ED). The objective was to create and validate a risk-prediction nomogram for use in the ED to predict the risk of in-hospital mortality in adults with acute poisoning from drugs and nonpharmaceutical agents. This was a prospective cohort study involving adults with acute poisoning from drugs and nonpharmaceutical agents admitted to a tertiary referral center for toxicology between January and December 2015 (derivation cohort) and between January and June 2016 (validation cohort). We used a program to generate nomograms based on binary logistic regression predictive models. We included variables that had significant associations with death. Using regression coefficients, we calculated scores for each variable and estimated the event probability. Model validation was performed using bootstrap resampling to quantify our modeling strategy and using receiver operating characteristic (ROC) analysis. The nomogram was then tested on a separate validation cohort using ROC analysis and goodness-of-fit tests. Data from 315 patients aged 18 to 91 years were analyzed (n = 180 in the derivation cohort; n = 135 in the validation cohort). In the final model, the following variables were significantly associated with mortality: age, laboratory test results (lactate, potassium, MB isoenzyme of creatine kinase), electrocardiogram parameters (QTc interval), and echocardiography findings (E wave velocity deceleration time). Sex was also included so that the same model could be used for men and women. The resulting nomogram showed excellent survival/mortality discrimination (area under the curve [AUC] 0.976, 95% confidence interval [CI] 0.954-0.998, P < 0.0001 for the derivation cohort; AUC 0.957, 95% CI 0.892-1, P < 0.0001 for the validation cohort). This nomogram provides more precise, rapid, and simple risk-analysis information for individual patients acutely exposed to drugs and nonpharmaceutical agents, and accurately estimates the probability of in-hospital death, exclusively using the results of objective tests available in the ED. PMID:28328838
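
    The general workflow (binary logistic regression, ROC analysis on derivation and validation cohorts, and bootstrap resampling) can be sketched as below. The synthetic data and all settings are invented for illustration and do not reproduce the paper's fitted model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Hypothetical predictors mirroring the paper's variable list:
# age, lactate, potassium, CK-MB, QTc, E-wave deceleration time, sex.
rng = np.random.default_rng(1)
X_deriv, X_valid = rng.normal(size=(180, 7)), rng.normal(size=(135, 7))
y_deriv = (X_deriv[:, 1] + rng.normal(size=180)) > 1.5   # synthetic mortality
y_valid = (X_valid[:, 1] + rng.normal(size=135)) > 1.5

model = LogisticRegression().fit(X_deriv, y_deriv)

# Discrimination on the derivation and external validation cohorts
auc_d = roc_auc_score(y_deriv, model.predict_proba(X_deriv)[:, 1])
auc_v = roc_auc_score(y_valid, model.predict_proba(X_valid)[:, 1])
print(f"AUC derivation {auc_d:.3f}, validation {auc_v:.3f}")

# Bootstrap check on the derivation cohort
aucs = []
for _ in range(200):
    idx = rng.integers(0, len(y_deriv), len(y_deriv))
    if len(set(y_deriv[idx])) < 2:
        continue  # a resample needs both classes to be usable
    m = LogisticRegression().fit(X_deriv[idx], y_deriv[idx])
    aucs.append(roc_auc_score(y_deriv, m.predict_proba(X_deriv)[:, 1]))
print(f"bootstrap AUC {np.mean(aucs):.3f} +/- {np.std(aucs):.3f}")
```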

  7. Impact of Learning Model Based on Cognitive Conflict toward Student’s Conceptual Understanding

    NASA Astrophysics Data System (ADS)

    Mufit, F.; Festiyed, F.; Fauzan, A.; Lufri, L.

    2018-04-01

    Problems that often occur in the learning of physics are misconceptions and low conceptual understanding. Misconceptions occur not only among school students but also among college students and teachers, and existing learning models have had little impact on improving conceptual understanding or remediating student misconceptions. This study examines the impact of a cognitive conflict-based learning model on improving conceptual understanding and remediating student misconceptions. The research method used is design/development research, and the product developed is a cognitive conflict-based learning model together with its components. This article reports the product design results, validity tests, and practicality tests. The study produced a cognitive conflict-based learning model with four learning syntaxes: (1) activation of preconceptions, (2) presentation of cognitive conflict, (3) discovery of concepts and equations, and (4) reflection. Validity tests by experts on the content, didactic, presentation, and language aspects indicate very valid criteria, and product trials show the product is very practical to use. Based on pretest and posttest results, the cognitive conflict-based learning model has a good impact on improving conceptual understanding and remediating misconceptions, especially among high-ability students.

  8. Development and validation of Dutch version of Lasater Clinical Judgment Rubric in hospital practice: An instrument design study.

    PubMed

    Vreugdenhil, Jettie; Spek, Bea

    2018-03-01

    Clinical reasoning in patient care is a skill that cannot be observed directly. So far, no reliable, valid instrument exists for the assessment of nursing students' clinical reasoning skills in hospital practice. Lasater's clinical judgment rubric (LCJR), based on Tanner's model "Thinking like a nurse", has been tested mainly in academic simulation settings. The aim was to develop a Dutch version of the LCJR (D-LCJR) and to test its psychometric properties when used in a hospital traineeship context. A mixed-methods approach was used to develop and validate the instrument in ten dedicated educational units of a university hospital, with a well-mixed group of 52 nursing students, nurse coaches and nurse educators. A Delphi panel developed the D-LCJR. Students' clinical reasoning skills were assessed "live" by nurse coaches and nurse educators, and students also rated themselves. The psychometric properties tested during the assessment process were reliability, reproducibility, content validity and construct validity, the latter by testing two hypotheses: (1) a positive correlation between assessed and self-reported sum scores (convergent validity), and (2) a linear relation between experience and sum score (clinical validity). The D-LCJR was found to be internally consistent (Cronbach's alpha 0.93) and reproducible, with intraclass correlations between 0.69 and 0.78. Experts judged it to be content valid, and both hypotheses were statistically supported, providing evidence for construct validity. The translated and modified LCJR is a promising tool for evaluating nursing students' development in clinical reasoning during hospital traineeships, for use by students, nurse coaches and nurse educators. More evidence on construct validity is needed, in particular for students at the end of their hospital traineeship. Based on our research, the D-LCJR applied in hospital traineeships is a usable and reliable tool. Copyright © 2017 Elsevier Ltd. All rights reserved.
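
    Internal consistency of a rubric is typically summarized by Cronbach's alpha, computed from item and total-score variances. A minimal sketch follows, assuming an (assessments x items) rating matrix; the dimensions and ratings are invented.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_assessments, n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of sum scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical rubric ratings: 52 assessments x 11 LCJR items on a 1-4 scale
rng = np.random.default_rng(2)
latent = rng.integers(1, 5, size=(52, 1))       # shared "true" level
ratings = np.clip(latent + rng.integers(-1, 2, size=(52, 11)), 1, 4)
print(f"alpha = {cronbach_alpha(ratings):.2f}")
```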

  9. Modal testing for model validation of structures with discrete nonlinearities.

    PubMed

    Ewins, D J; Weekes, B; delli Carri, A

    2015-09-28

    Model validation using data from modal tests is now widely practiced in many industries for advanced structural dynamic design analysis, especially where structural integrity is a primary requirement. These industries tend to demand highly efficient designs for their critical structures which, as a result, are increasingly operating in regimes where traditional linearity assumptions are no longer adequate. In particular, many modern structures are found to contain localized areas, often around joints or boundaries, where the actual mechanical behaviour is far from linear. Such structures need to have appropriate representation of these nonlinear features incorporated into the otherwise largely linear models that are used for design and operation. This paper proposes an approach to this task which is an extension of existing linear techniques, especially in the testing phase, involving only as much nonlinear analysis as is necessary to construct a model which is good enough, or 'valid': i.e., capable of predicting the nonlinear response behaviour of the structure under all in-service operating and test conditions with a prescribed accuracy. A short list of methods described in the recent literature, categorized using our framework, is given, which identifies those areas in which further development is most urgently required. © 2015 The Authors.

  10. Working Memory Structure in 10- and 15-Year-Old Children with Mild to Borderline Intellectual Disabilities

    ERIC Educational Resources Information Center

    van der Molen, Mariet J.

    2010-01-01

    The validity of Baddeley's working memory model has been tested within the typically developing population. However, it is not clear if this model also holds in children and adolescents with mild to borderline intellectual disabilities (ID; IQ score 55-85). The main purpose of this study was therefore to explore the model's validity in this…

  11. Maximizing the Information and Validity of a Linear Composite in the Factor Analysis Model for Continuous Item Responses

    ERIC Educational Resources Information Center

    Ferrando, Pere J.

    2008-01-01

    This paper develops results and procedures for obtaining linear composites of factor scores that maximize: (a) test information, and (b) validity with respect to external variables in the multiple factor analysis (FA) model. I treat FA as a multidimensional item response theory model, and use Ackerman's multidimensional information approach based…

  12. An externally validated model for predicting long-term survival after exercise treadmill testing in patients with suspected coronary artery disease and a normal electrocardiogram.

    PubMed

    Lauer, Michael S; Pothier, Claire E; Magid, David J; Smith, S Scott; Kattan, Michael W

    2007-12-18

    The exercise treadmill test is recommended for risk stratification among patients with intermediate to high pretest probability of coronary artery disease. Posttest risk stratification is based on the Duke treadmill score, which includes only functional capacity and measures of ischemia. The objective was to develop and externally validate a post-treadmill-test multivariable mortality prediction rule for adults with suspected coronary artery disease and normal electrocardiograms. This prospective cohort study was conducted from September 1990 to May 2004 in the exercise treadmill laboratories of a major medical center (derivation set) and a separate HMO (validation set), with 33,268 patients in the derivation set and 5821 in the validation set. All patients had normal electrocardiograms and were referred for evaluation of suspected coronary artery disease. The derivation set patients were followed for a median of 6.2 years. A nomogram-illustrated model was derived on the basis of variables easily obtained in the stress laboratory, including age; sex; history of smoking, hypertension, diabetes, or typical angina; and exercise findings of functional capacity, ST-segment changes, symptoms, heart rate recovery, and frequent ventricular ectopy in recovery. The derivation data set included 1619 deaths. Although both the Duke treadmill score and our nomogram-illustrated model were significantly associated with death (P < 0.001), the nomogram was better at discrimination (concordance index for right-censored data, 0.83 vs. 0.73) and calibration. We reclassified many patients with intermediate- to high-risk Duke treadmill scores as low risk on the basis of the nomogram. The model also predicted 3-year mortality rates well in the validation set: based on an optimal cut-point for a negative predictive value of 0.97, derivation and validation rates were, respectively, 1.7% and 2.5% below the cut-point and 25% and 29% above the cut-point. Limitations are that blood test-based measures and left ventricular ejection fraction were not included, the nomogram can be applied only to patients with a normal electrocardiogram, and clinical utility remains to be tested. In conclusion, a simple nomogram based on easily obtained pretest and exercise test variables predicted all-cause mortality in adults with suspected coronary artery disease and normal electrocardiograms.

  13. Renewable Energy Generation and Storage Models | Grid Modernization | NREL

    Science.gov Websites

    Projects: generator, plant, and storage modeling, simulation, and validation for power plants. NREL researchers are developing combined software-and-hardware simulation testing methods known as power hardware-in-the-loop testing, in which simulated models are exercised against real power hardware.

  14. Using School Lotteries to Evaluate the Value-Added Model

    ERIC Educational Resources Information Center

    Deutsch, Jonah

    2013-01-01

    There has been an active debate in the literature over the validity of value-added models. In this study, the author tests the central assumption of value-added models that school assignment is random relative to expected test scores conditional on prior test scores, demographic variables, and other controls. He uses a Chicago charter school's…

  15. An entropy-based nonparametric test for the validation of surrogate endpoints.

    PubMed

    Miao, Xiaopeng; Wang, Yong-Cheng; Gangopadhyay, Ashis

    2012-06-30

    We present a nonparametric test to validate surrogate endpoints based on a measure of divergence and random permutation. This test is a proposal to directly verify the Prentice statistical definition of surrogacy. The test does not impose distributional assumptions on the endpoints, and it is robust to model misspecification. Our simulation study shows that the proposed nonparametric test outperforms the practical test of the Prentice criterion in terms of both robustness of size and power. We also evaluate the performance of three leading methods that attempt to quantify the effect of surrogate endpoints. The proposed method is applied to validate magnetic resonance imaging lesions as the surrogate endpoint for clinical relapses in a multiple sclerosis trial. Copyright © 2012 John Wiley & Sons, Ltd.
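
    The permutation logic is generic enough to sketch: the observed statistic is compared with its distribution under random relabeling of the pooled sample. In the sketch below, an absolute difference of means stands in for the paper's divergence measure, which is an assumption for illustration only.

```python
import numpy as np

def permutation_pvalue(x, y, statistic, n_perm=5000, seed=0):
    """Generic two-sample permutation test.

    `statistic` is any callable mapping (x, y) -> float; here it stands
    in for the paper's divergence measure between conditional distributions.
    """
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    observed = statistic(x, y)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                       # random relabeling
        if statistic(pooled[:len(x)], pooled[len(x):]) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)             # add-one p-value estimate

# Illustrative stand-in statistic: absolute difference of means
x = np.random.default_rng(1).normal(0.0, 1.0, 40)
y = np.random.default_rng(2).normal(0.6, 1.0, 40)
print(permutation_pvalue(x, y, lambda a, b: abs(a.mean() - b.mean())))
```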

  16. Validation and Simulation of Ares I Scale Model Acoustic Test - 3 - Modeling and Evaluating the Effect of Rainbird Water Deluge Inclusion

    NASA Technical Reports Server (NTRS)

    Strutzenberg, Louise L.; Putman, Gabriel C.

    2011-01-01

    The Ares I Scale Model Acoustics Test (ASMAT) is a series of live-fire tests of scaled rocket motors meant to simulate the conditions of the Ares I launch configuration. These tests have provided a well-documented set of high-fidelity measurements useful for validation, including data taken over a range of test conditions and containing phenomena like Ignition Over-Pressure and water suppression of acoustics. Building on dry simulations of the ASMAT tests with the vehicle at 5 ft. elevation (100 ft. real vehicle elevation), wet simulations of the ASMAT test setup have been performed using the Loci/CHEM computational fluid dynamics software to explore the effect of including rainbird water suppression on the launch platform deck. Two-phase water simulation has been performed using an energy- and mass-coupled Lagrangian particle system module, in which liquid-phase emissions are segregated into clouds of virtual particles and gas-phase mass transfer is accomplished through simple Weber-number-controlled breakup and boiling models. Comparisons have been performed to the dry 5 ft. elevation cases, using configurations with and without launch mounts. These cases have been used to explore the interaction between rainbird spray patterns and launch mount geometry, and to evaluate the acoustic sound pressure level knockdown achieved through above-deck rainbird deluge inclusion. This comparison has been anchored with validation from live-fire test data, which showed a reduction in rainbird effectiveness in the presence of a launch mount.

  17. Comprehensive Modeling of Temperature-Dependent Degradation Mechanisms in Lithium Iron Phosphate Batteries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schimpe, Michael; von Kuepach, M. E.; Naumann, M.

    For reliable lifetime predictions of lithium-ion batteries, models for cell degradation are required. A comprehensive semi-empirical model based on a reduced set of internal cell parameters and physically justified degradation functions for the capacity loss is developed and presented for a commercial lithium iron phosphate/graphite cell. One calendar and several cycle aging effects are modeled separately. Emphasis is placed on the varying degradation at different temperatures. Degradation mechanisms for cycle aging at high and low temperatures, as well as the increased cycling degradation at high state of charge, are calculated separately. For parameterization, a lifetime test study is conducted, including storage and cycle tests. Additionally, the model is validated against a dynamic current profile based on real-world application in a stationary energy storage system, revealing its accuracy. Tests for validation are continued for up to 114 days after the longest parametrization tests. In conclusion, the model error for the cell capacity loss in the application-based tests is below 1% of the original cell capacity at the end of testing, and the maximum relative model error is below 21%.
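
    A generic semi-empirical capacity-fade form, separating calendar and cycle aging with Arrhenius temperature terms, might look as follows. All functional forms and parameter values here are illustrative assumptions, not the paper's fitted degradation functions.

```python
import numpy as np

R_GAS = 8.314  # J/(mol*K)

def capacity_loss(t_days, fec, T_kelvin, soc,
                  k_cal=2.5e-3, Ea_cal=4.0e4, k_cyc=1.5e-4, Ea_cyc=3.0e4,
                  T_ref=298.15):
    """Illustrative semi-empirical capacity fade (fraction of nominal
    capacity); parameters are invented stand-ins, not fitted values.

    t_days   : storage time in days
    fec      : full equivalent cycles
    T_kelvin : cell temperature
    soc      : average state of charge (0..1)
    """
    arrh = lambda Ea: np.exp(-Ea / R_GAS * (1.0 / T_kelvin - 1.0 / T_ref))
    # Calendar aging: sqrt-of-time growth, accelerated by temperature and SOC
    q_cal = k_cal * (0.5 + soc) * arrh(Ea_cal) * np.sqrt(t_days)
    # Cycle aging: grows with charge throughput
    q_cyc = k_cyc * arrh(Ea_cyc) * fec
    return q_cal + q_cyc

print(f"loss after 1 year, 300 FEC at 35 C: "
      f"{100 * capacity_loss(365, 300, 308.15, 0.8):.1f} %")
```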

  18. Are Model Transferability and Complexity Antithetical? Insights From Validation of a Variable-Complexity Empirical Snow Model in Space and Time

    NASA Astrophysics Data System (ADS)

    Lute, A. C.; Luce, Charles H.

    2017-11-01

    The related challenges of predictions in ungauged basins and predictions in ungauged climates point to the need to develop environmental models that are transferable across both space and time. Hydrologic modeling has historically focused on modeling one or only a few basins using highly parameterized conceptual or physically based models. However, model parameters and structures have been shown to change significantly when calibrated to new basins or time periods, suggesting that model complexity and model transferability may be antithetical. Empirical space-for-time models provide a framework within which to assess model transferability and any tradeoff with model complexity. Using 497 SNOTEL sites in the western U.S., we develop space-for-time models of April 1 SWE and Snow Residence Time based on mean winter temperature and cumulative winter precipitation. The transferability of the models to new conditions (in both space and time) is assessed using non-random cross-validation tests, with consideration of the influence of model complexity on transferability. As others have noted, the algorithmic empirical models transfer best when minimal extrapolation in input variables is required. Temporal split-sample validations use pseudoreplicated samples, resulting in the selection of overly complex models, which has implications for the design of hydrologic model validation tests. Finally, we show that low to moderate complexity models transfer most successfully to new conditions in space and time, providing empirical confirmation of the parsimony principle.
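
    The non-random cross-validation idea can be sketched as leave-one-region-out validation: every fold holds out all sites in one region, probing transfer to genuinely new places. The data, grouping, and linear model below are invented stand-ins for the SNOTEL-based models.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

# Hypothetical SNOTEL-like data: predictors are mean winter temperature
# and cumulative winter precipitation; target is April 1 SWE.
rng = np.random.default_rng(3)
n = 497
X = np.column_stack([rng.normal(-5, 3, n), rng.gamma(4, 100, n)])
y = 0.6 * X[:, 1] - 20 * X[:, 0] + rng.normal(0, 40, n)
region = rng.integers(0, 8, n)   # stand-in for geographic clusters

# Leave-one-region-out validation probes transferability to new
# conditions far better than a random split would.
scores = cross_val_score(LinearRegression(), X, y,
                         groups=region, cv=LeaveOneGroupOut(), scoring="r2")
print(f"transfer R2 per held-out region: {np.round(scores, 2)}")
```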

  19. Development and Demonstration of a Computational Tool for the Analysis of Particle Vitiation Effects in Hypersonic Propulsion Test Facilities

    NASA Technical Reports Server (NTRS)

    Perkins, Hugh Douglas

    2010-01-01

    In order to improve the understanding of particle vitiation effects in hypersonic propulsion test facilities, a quasi-one dimensional numerical tool was developed to efficiently model reacting particle-gas flows over a wide range of conditions. Features of this code include gas-phase finite-rate kinetics, a global porous-particle combustion model, mass, momentum and energy interactions between phases, and subsonic and supersonic particle drag and heat transfer models. The basic capabilities of this tool were validated against available data or other validated codes. To demonstrate the capabilities of the code a series of computations were performed for a model hypersonic propulsion test facility and scramjet. Parameters studied were simulated flight Mach number, particle size, particle mass fraction and particle material.

  1. Development of Flight-Test Performance Estimation Techniques for Small Unmanned Aerial Systems

    NASA Astrophysics Data System (ADS)

    McCrink, Matthew Henry

    This dissertation provides a flight-testing framework for assessing the performance of fixed-wing, small-scale unmanned aerial systems (sUAS) by leveraging sub-system models of components unique to these vehicles. The development of the sub-system models, and their links to broader impacts on sUAS performance, is the key contribution of this work. The sub-system modeling and analysis focuses on the vehicle's propulsion, navigation and guidance, and airframe components. Quantification of the uncertainty in the vehicle's power available and control states is essential for assessing the validity of both the methods and results obtained from flight tests; therefore, detailed propulsion and navigation system analyses are presented to validate the flight-testing methodology. The propulsion system analysis required the development of an analytic model of the propeller in order to predict the power available over a range of flight conditions. The model is based on the blade element momentum (BEM) method, with additional corrections added to the basic model in order to capture the Reynolds-dependent scale effects unique to sUAS. The model was experimentally validated using a ground-based testing apparatus. The BEM predictions and experimental analysis allow for a parameterized model relating the electrical power, measurable during flight, to the power available required for vehicle performance analysis. Navigation system details are presented with a specific focus on the sensors used for state estimation and the resulting uncertainty in vehicle state. Uncertainty quantification is provided by detailed calibration techniques validated using quasi-static and hardware-in-the-loop (HIL) ground-based testing. The HIL methods introduced use a soft real-time flight simulator to provide inertial-quality data for assessing overall system performance. Using this tool, the uncertainty in vehicle state estimation across a range of sensors and vehicle operational environments is presented. The propulsion and navigation system models are then used to evaluate flight-testing methods for fixed-wing sUAS performance, and a brief airframe analysis provides a foundation for assessing the efficacy of the flight-test methods. The flight testing presented in this work is focused on validating the aircraft drag polar, zero-lift drag coefficient, and span efficiency factor. Three methods are detailed and evaluated for estimating these design parameters, with specific focus on the influence of propulsion and navigation system uncertainty on the resulting performance data. Performance estimates are used in conjunction with the propulsion model to estimate the impact of sensor and measurement uncertainty on the endurance and range of a fixed-wing sUAS. Endurance and range results for a simplistic power-available model are compared to the Reynolds-dependent model presented in this work, and additional parameter sensitivity analyses related to state estimation uncertainties encountered in flight testing are presented. Results from these analyses indicate that the sub-system models introduced in this work are of first-order importance, on the order of a 5-10% change in range and endurance, in assessing the performance of a fixed-wing sUAS.
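
    One element of this chain, propagating sensor uncertainty in voltage and current through an assumed efficiency model to power available and endurance, can be sketched with a simple Monte Carlo. The efficiencies, sensor noise levels, and battery capacity below are illustrative assumptions, not values from the dissertation (which derives propeller efficiency from a Reynolds-corrected BEM model).

```python
import numpy as np

# Hypothetical in-flight measurements with sensor noise, and an assumed
# fixed efficiency chain in place of the dissertation's BEM-based model.
rng = np.random.default_rng(4)
volts = rng.normal(11.1, 0.05, 10000)    # battery voltage [V] + sensor noise
amps = rng.normal(14.0, 0.30, 10000)     # current draw [A] + sensor noise
eta_motor, eta_prop = 0.85, 0.62         # assumed efficiencies

p_avail = eta_motor * eta_prop * volts * amps   # power available [W]

# Electrical endurance: time to drain the battery at the measured draw
battery_wh = 48.0                               # assumed usable energy [Wh]
endurance_h = battery_wh / (volts * amps)

print(f"P_avail = {p_avail.mean():.0f} +/- {p_avail.std():.0f} W")
print(f"endurance = {60 * endurance_h.mean():.1f} "
      f"+/- {60 * endurance_h.std():.1f} min")
```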

  2. A Baseline Patient Model to Support Testing of Medical Cyber-Physical Systems.

    PubMed

    Silva, Lenardo C; Perkusich, Mirko; Almeida, Hyggo O; Perkusich, Angelo; Lima, Mateus A M; Gorgônio, Kyller C

    2015-01-01

    Medical Cyber-Physical Systems (MCPS) are currently a trending topic of research. The main challenges are related to the integration and interoperability of connected medical devices, patient safety, physiologic closed-loop control, and the verification and validation of these systems. In this paper, we focus on patient safety and MCPS validation. We present a formal patient model to be used in health care systems validation without jeopardizing the patient's health. To determine the basic patient conditions, our model considers the four main vital signs: heart rate, respiratory rate, blood pressure and body temperature. To generate the vital signs we used regression models based on statistical analysis of a clinical database. Our solution should be used as a starting point for a behavioral patient model and adapted to specific clinical scenarios. We present the modeling process of the baseline patient model and show its evaluation. The conception process may be used to build different patient models. The results show the feasibility of the proposed model as an alternative to the immediate need for clinical trials to test these medical systems.
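
    A minimal sketch of such a vital-signs generator is shown below; the regression coefficients and noise levels are invented placeholders for the paper's clinically derived regression models.

```python
import numpy as np

def baseline_vitals(age, minutes=60, seed=0):
    """Illustrative baseline patient generator: one sample per minute.
    Coefficients are invented stand-ins for regression models fitted
    to a clinical database."""
    rng = np.random.default_rng(seed)
    hr = 70 + 0.1 * (age - 40) + rng.normal(0, 2.0, minutes)    # beats/min
    rr = 14 + 0.03 * (age - 40) + rng.normal(0, 1.0, minutes)   # breaths/min
    sbp = 110 + 0.5 * (age - 40) + rng.normal(0, 4.0, minutes)  # mmHg
    temp = 36.8 + rng.normal(0, 0.1, minutes)                   # deg C
    return {"HR": hr, "RR": rr, "SBP": sbp, "Temp": temp}

# Feed a simulated 55-year-old patient to a device under test
vitals = baseline_vitals(age=55)
print({k: round(v.mean(), 1) for k, v in vitals.items()})
```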

  3. NDARC - NASA Design and Analysis of Rotorcraft Validation and Demonstration

    NASA Technical Reports Server (NTRS)

    Johnson, Wayne

    2010-01-01

    Validation and demonstration results from the development of the conceptual design tool NDARC (NASA Design and Analysis of Rotorcraft) are presented. The principal tasks of NDARC are to design a rotorcraft to satisfy specified design conditions and missions, and then analyze the performance of the aircraft for a set of off-design missions and point operating conditions. The aircraft chosen as NDARC development test cases are the UH-60A single main-rotor and tail-rotor helicopter, the CH-47D tandem helicopter, the XH-59A coaxial lift-offset helicopter, and the XV-15 tiltrotor. These aircraft were selected because flight performance data, a weight statement, detailed geometry information, and a correlated comprehensive analysis model are available for each. Validation consists of developing the NDARC models for these aircraft by using geometry and weight information, airframe wind tunnel test data, engine decks, rotor performance tests, and comprehensive analysis results; and then comparing the NDARC results for aircraft and component performance with flight test data. Based on the calibrated models, the capability of the code to size rotorcraft is explored.

  4. "La Clave Profesional": Validation of a Vocational Guidance Instrument

    ERIC Educational Resources Information Center

    Mudarra, Maria J.; Lázaro Martínez, Ángel

    2014-01-01

    Introduction: The current study demonstrates the empirical and cultural validity of "La Clave Profesional" (a Spanish adaptation of the Career Key, Jones's test based on Holland's RIASEC model). The process of providing validity evidence also includes a reflection on personal and career development and examines the relationships between RIASEC…

  5. Structural Validation of the Holistic Wellness Assessment

    ERIC Educational Resources Information Center

    Brown, Charlene; Applegate, E. Brooks; Yildiz, Mustafa

    2015-01-01

    The Holistic Wellness Assessment (HWA) is a relatively new assessment instrument based on an emergent transdisciplinary model of wellness. This study validated the factor structure identified via exploratory factor analysis (EFA), assessed test-retest reliability, and investigated concurrent validity of the HWA in three separate samples. The…

  6. On-Road Validation of a Simplified Model for Estimating Real-World Fuel Economy: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, Eric; Gonder, Jeff; Jehlik, Forrest

    On-road fuel economy is known to vary significantly between individual trips in real-world driving conditions. This work introduces a methodology for rapidly simulating a specific vehicle's fuel economy over the wide range of real-world conditions experienced across the country. On-road test data collected using a highly instrumented vehicle is used to refine and validate this modeling approach. Model accuracy relative to on-road data collection is relevant to the estimation of 'off-cycle credits' that compensate for real-world fuel economy benefits that are not observed during certification testing on a chassis dynamometer.

  7. Injector Design Tool Improvements: User's manual for FDNS V.4.5

    NASA Technical Reports Server (NTRS)

    Chen, Yen-Sen; Shang, Huan-Min; Wei, Hong; Liu, Jiwen

    1998-01-01

    The major emphasis of the current effort is the development and validation of an efficient parallel-machine computational model, based on the FDNS code, to analyze the fluid dynamics of a wide variety of liquid jet configurations for general liquid rocket engine injection system applications. This model includes physical models for droplet atomization, breakup/coalescence, evaporation, turbulence mixing and gas-phase combustion. Benchmark validation cases for liquid rocket engine chamber combustion conditions will be performed for model validation purposes. Test cases may include shear coaxial, swirl coaxial and impinging injection systems with combinations of LOX/H2 or LOX/RP-1 propellant injector elements used in rocket engine designs. As a final goal of this project, a well-tested parallel CFD performance methodology, together with a description of its operation for users, will be documented in a final technical report at the end of the proposed research effort.

  8. Parametric Study of Shear Strength of Concrete Beams Reinforced with FRP Bars

    NASA Astrophysics Data System (ADS)

    Thomas, Job; Ramadass, S.

    2016-09-01

    Fibre Reinforced Polymer (FRP) bars have been widely used as internal reinforcement in structural elements over the last decade; the corrosion resistance of FRP bars qualifies them for use in severe and marine exposure conditions. A total of eight concrete beams longitudinally reinforced with FRP bars were cast and tested at shear span to depth ratios of 0.5 and 1.75. Shear strength test data for 188 beams published in the literature were also used. The model originally proposed by the Indian Standard code of practice for the prediction of the shear strength of concrete beams reinforced with steel bars, IS:456 (Plain and reinforced concrete, code of practice, fourth revision. Bureau of Indian Standards, New Delhi, 2000), is considered, and a modification to account for the influence of FRP bars is proposed based on regression analysis. Of the 196 test data sets, 110 were used for the regression analysis and 86 for validation of the model. In addition, the shear strengths of the 86 validation beams were assessed using eleven models proposed by various researchers. The proposed model accounts for the compressive strength of concrete (f_ck), the modulus of elasticity of the FRP rebar (E_f), the longitudinal reinforcement ratio (ρ_f), the shear span to depth ratio (a/d) and the size effect of beams. The shear strengths predicted by the proposed model and by the 11 models from other researchers are compared with the corresponding experimental results. The mean ratio of predicted to experimental shear strength for the 86 validation beams is found to be 0.93. The statistical analysis indicates that predictions based on the proposed model corroborate the corresponding experimental data.
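
    The calibration step, fitting a modification factor by least squares against test data, can be sketched as follows. The cube-root stiffness-ratio basis and all numbers are illustrative assumptions for the sketch, not the paper's actual functional form (which also carries a/d and size-effect terms).

```python
import numpy as np

# Hypothetical calibration: find k so that the steel-based IS:456 shear
# strength tau_c, scaled by an FRP-to-steel stiffness ratio, matches tests.
rng = np.random.default_rng(7)
tau_is456 = rng.uniform(0.4, 0.9, 110)          # code prediction [MPa]
stiff_ratio = rng.uniform(0.2, 0.6, 110)        # assumed E_f*rho_f/(E_s*rho_s)
tau_exp = 1.1 * tau_is456 * stiff_ratio**(1/3) * rng.normal(1, 0.1, 110)

basis = tau_is456 * stiff_ratio**(1/3)          # regression basis function
k = np.sum(basis * tau_exp) / np.sum(basis**2)  # closed-form least squares
print(f"k = {k:.2f}, mean pred/exp = {np.mean(k * basis / tau_exp):.2f}")
```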

  9. Modeling Liver-Related Adverse Effects of Drugs Using kNN QSAR Method

    PubMed Central

    Rodgers, Amie D.; Zhu, Hao; Fourches, Dennis; Rusyn, Ivan; Tropsha, Alexander

    2010-01-01

    Adverse effects of drugs (AEDs) continue to be a major cause of drug withdrawals both in development and post-marketing. While liver-related AEDs are a major concern for drug safety, there are few in silico models for predicting human liver toxicity for drug candidates. We have applied the Quantitative Structure Activity Relationship (QSAR) approach to model liver AEDs. In this study, we aimed to construct a QSAR model capable of binary classification (active vs. inactive) of drugs for liver AEDs based on chemical structure. To build QSAR models, we have employed an FDA spontaneous reporting database of human liver AEDs (elevations in activity of serum liver enzymes), which contains data on approximately 500 approved drugs. Approximately 200 compounds with wide clinical data coverage, structural similarity and balanced (40/60) active/inactive ratio were selected for modeling and divided into multiple training/test and external validation sets. QSAR models were developed using the k nearest neighbor method and validated using external datasets. Models with high sensitivity (>73%) and specificity (>94%) for prediction of liver AEDs in external validation sets were developed. To test applicability of the models, three chemical databases (World Drug Index, Prestwick Chemical Library, and Biowisdom Liver Intelligence Module) were screened in silico and the validity of predictions was determined, where possible, by comparing model-based classification with assertions in publicly available literature. Validated QSAR models of liver AEDs based on the data from the FDA spontaneous reporting system can be employed as sensitive and specific predictors of AEDs in pre-clinical screening of drug candidates for potential hepatotoxicity in humans. PMID:20192250
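
    The classification-and-external-validation workflow can be sketched with scikit-learn; the descriptor matrix, train/validation split, and neighbor count below are invented for illustration, not the study's curated dataset.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix

# Hypothetical descriptor matrix (rows = compounds) with a roughly
# 40/60 active/inactive split, echoing the modeling-set design.
rng = np.random.default_rng(5)
X = rng.normal(size=(200, 30))                  # chemical descriptors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.8, 200)) > 0.5

train, test = np.arange(150), np.arange(150, 200)   # external validation split
model = KNeighborsClassifier(n_neighbors=5).fit(X[train], y[train])

tn, fp, fn, tp = confusion_matrix(y[test], model.predict(X[test])).ravel()
print(f"sensitivity = {tp/(tp+fn):.2f}, specificity = {tn/(tn+fp):.2f}")
```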

  10. A Frequency-Domain Substructure System Identification Algorithm

    NASA Technical Reports Server (NTRS)

    Blades, Eric L.; Craig, Roy R., Jr.

    1996-01-01

    A new frequency-domain system identification algorithm is presented for system identification of substructures, such as payloads to be flown aboard the Space Shuttle. In the vibration test, all interface degrees of freedom where the substructure is connected to the carrier structure are either subjected to active excitation or are supported by a test stand with the reaction forces measured. The measured frequency-response data is used to obtain a linear, viscous-damped model with all interface-degree of freedom entries included. This model can then be used to validate analytical substructure models. This procedure makes it possible to obtain not only the fixed-interface modal data associated with a Craig-Bampton substructure model, but also the data associated with constraint modes. With this proposed algorithm, multiple-boundary-condition tests are not required, and test-stand dynamics is accounted for without requiring a separate modal test or finite element modeling of the test stand. Numerical simulations are used in examining the algorithm's ability to estimate valid reduced-order structural models. The algorithm's performance when frequency-response data covering narrow and broad frequency bandwidths is used as input is explored. Its performance when noise is added to the frequency-response data and the use of different least squares solution techniques are also examined. The identified reduced-order models are also compared for accuracy with other test-analysis models and a formulation for a Craig-Bampton test-analysis model is also presented.

  11. Measuring engagement in nurses: the psychometric properties of the Persian version of Utrecht Work Engagement Scale

    PubMed Central

    Torabinia, Mansour; Mahmoudi, Sara; Dolatshahi, Mojtaba; Abyaz, Mohamad Reza

    2017-01-01

    Background: In line with the overall tendency in psychology, researchers in the field of work and organizational psychology have become progressively interested in employees' affective and positive experiences at work, such as work engagement. This study was conducted with two main purposes: to assess the psychometric properties of the Utrecht Work Engagement Scale (UWES), and to test for any association between work engagement and burnout in nurses. Methods: This methodological study was conducted in 2015 and included 248 females and 34 males with 6 months to 30 years of job experience. After the translation process, face and content validity were assessed by qualitative and quantitative methods; the content validation ratio, scale-level content validity index and item-level content validity index were calculated for the scale. Construct validity was determined by factor analysis, and internal consistency and stability reliability were assessed. Factor analysis, test-retest, Cronbach's alpha, and association analysis were used as statistical methods. Results: Face and content validity were acceptable. Exploratory factor analysis suggested a new 3-factor model in which some items were relocated relative to the construct model of the original version, while the same 17 items were retained. The new model, as the Persian version of the UWES, was supported by divergent validity testing against the Copenhagen Burnout Inventory. Internal consistency reliability for the total scale and the subscales was 0.76 to 0.89. The Pearson correlation test indicated a high degree of test-retest reliability (r = 0.89), and the ICC was 0.91. Engagement was negatively related to burnout and overtime per month, whereas it was positively related to age and job experience. Conclusion: The Persian 3-factor model of the Utrecht Work Engagement Scale is a valid and reliable instrument to measure work engagement in Iranian nurses as well as in other medical professionals. PMID:28955665

  12. A posteriori model validation for the temporal order of directed functional connectivity maps.

    PubMed

    Beltz, Adriene M; Molenaar, Peter C M

    2015-01-01

    A posteriori model validation for the temporal order of neural directed functional connectivity maps is rare. This is striking because models that require sequential independence among residuals are regularly implemented. The aim of the current study was (a) to apply to directed functional connectivity maps of functional magnetic resonance imaging data an a posteriori model validation procedure (i.e., white noise tests of one-step-ahead prediction errors combined with decision criteria for revising the maps based upon Lagrange Multiplier tests), and (b) to demonstrate how the procedure applies to single-subject simulated, single-subject task-related, and multi-subject resting state data. Directed functional connectivity was determined by the unified structural equation model family of approaches in order to map contemporaneous and first order lagged connections among brain regions at the group- and individual-levels while incorporating external input, then white noise tests were run. Findings revealed that the validation procedure successfully detected unmodeled sequential dependencies among residuals and recovered higher order (greater than one) simulated connections, and that the procedure can accommodate task-related input. Findings also revealed that lags greater than one were present in resting state data: With a group-level network that contained only contemporaneous and first order connections, 44% of subjects required second order, individual-level connections in order to obtain maps with white noise residuals. Results have broad methodological relevance (e.g., temporal validation is necessary after directed functional connectivity analyses because the presence of unmodeled higher order sequential dependencies may bias parameter estimates) and substantive implications (e.g., higher order lags may be common in resting state data).
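
    A common way to operationalize white noise tests on one-step-ahead prediction errors is the Ljung-Box statistic; the paper's specific test may differ, and the residual series below are synthetic.

```python
import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox

# Synthetic one-step-ahead prediction errors from a fitted model.
# White residuals support the modeled temporal order; significant
# autocorrelation signals unmodeled higher-order lags.
rng = np.random.default_rng(6)
white = rng.normal(size=300)
colored = white + 0.6 * np.roll(white, 2)   # leftover lag-2 structure

for name, resid in [("white", white), ("colored", colored)]:
    p = acorr_ljungbox(resid, lags=[10])["lb_pvalue"].iloc[0]
    print(f"{name}: Ljung-Box p = {p:.3f}")
```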

  13. Testing Strategies for Model-Based Development

    NASA Technical Reports Server (NTRS)

    Heimdahl, Mats P. E.; Whalen, Mike; Rajan, Ajitha; Miller, Steven P.

    2006-01-01

    This report presents an approach for testing artifacts generated in a model-based development process. This approach divides the traditional testing process into two parts: requirements-based testing (validation testing) which determines whether the model implements the high-level requirements and model-based testing (conformance testing) which determines whether the code generated from a model is behaviorally equivalent to the model. The goals of the two processes differ significantly and this report explores suitable testing metrics and automation strategies for each. To support requirements-based testing, we define novel objective requirements coverage metrics similar to existing specification and code coverage metrics. For model-based testing, we briefly describe automation strategies and examine the fault-finding capability of different structural coverage metrics using tests automatically generated from the model.

  14. Free kick instead of cross-validation in maximum-likelihood refinement of macromolecular crystal structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pražnikar, Jure; University of Primorska,; Turk, Dušan, E-mail: dusan.turk@ijs.si

    2014-12-01

    The maximum-likelihood free-kick target, which calculates model error estimates from the work set and a randomly displaced model, proved superior in the accuracy and consistency of refinement of crystal structures compared with the maximum-likelihood cross-validation target, which calculates error estimates from the test set and the unperturbed model. The refinement of a molecular model is a computational procedure by which the atomic model is fitted to the diffraction data. The commonly used target in the refinement of macromolecular structures is the maximum-likelihood (ML) function, which relies on the assessment of model errors. The current ML functions rely on cross-validation: they utilize phase-error estimates that are calculated from a small fraction of diffraction data, called the test set, that are not used to fit the model. An approach has been developed that uses the work set to calculate the phase-error estimates in the ML refinement by simulating the model errors via the random displacement of atomic coordinates. It is called ML free-kick refinement, as it uses the ML formulation of the target function and is based on the idea of freeing the model from the model bias imposed by the chemical energy restraints used in refinement. This approach for the calculation of error estimates is superior to the cross-validation approach: it reduces the phase error and increases the accuracy of molecular models, is more robust, provides clearer maps and may use a smaller portion of data for the test set for the calculation of R_free, or may leave it out completely.

  15. An Innovative Software Tool Suite for Power Plant Model Validation and Parameter Calibration using PMU Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yuanyuan; Diao, Ruisheng; Huang, Renke

    Maintaining good quality of power plant stability models is of critical importance to ensure the secure and economic operation and planning of today's power grid, with its increasingly stochastic and dynamic behavior. According to North American Electric Reliability Corporation (NERC) standards, all generators in North America with capacities larger than 10 MVA are required to validate their models every five years. Validation is quite costly and can significantly affect the revenue of generator owners, because the traditional staged testing requires generators to be taken offline. Over the past few years, validating and calibrating parameters using online measurements, including phasor measurement units (PMUs) and digital fault recorders (DFRs), has been proven to be a cost-effective approach. In this paper, an innovative open-source tool suite is presented for validating power plant models using the PPMV tool, identifying bad parameters with trajectory sensitivity analysis, and finally calibrating parameters using an ensemble Kalman filter (EnKF) based algorithm. The architectural design and the detailed procedures to run the tool suite are presented, with results of tests on a realistic hydro power plant using PMU measurements for 12 different events. The calibrated parameters of the machine, exciter, governor and PSS models demonstrate much better performance than the original models for all the events and show the robustness of the proposed calibration algorithm.
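
    The parameter-calibration step can be illustrated with a textbook ensemble Kalman filter analysis update; the implementation below is a generic sketch of the EnKF formula, not the tool suite's actual code, and the toy linear "model" is invented.

```python
import numpy as np

def enkf_update(params, predicted, observed, noise_std, seed=0):
    """One ensemble Kalman filter analysis step for parameter calibration.

    params    : (n_ens, n_param) ensemble of model parameters
    predicted : (n_ens, n_obs) model outputs for each ensemble member
    observed  : (n_obs,) measurements (e.g., from PMUs)
    noise_std : (n_obs,) measurement noise standard deviations
    """
    rng = np.random.default_rng(seed)
    P = params - params.mean(axis=0)              # parameter anomalies
    Y = predicted - predicted.mean(axis=0)        # output anomalies
    n = params.shape[0]
    Cpy = P.T @ Y / (n - 1)                       # param-output covariance
    Cyy = Y.T @ Y / (n - 1) + np.diag(noise_std**2)
    K = Cpy @ np.linalg.inv(Cyy)                  # Kalman gain
    perturbed = observed + rng.normal(0, noise_std, predicted.shape)
    return params + (perturbed - predicted) @ K.T

# Toy usage: calibrate one parameter against two observations of a
# linear model y = [2x, 3x]; the true parameter is about 1.2.
ens = np.random.default_rng(1).normal(1.0, 0.3, size=(50, 1))
pred = np.hstack([2 * ens, 3 * ens])
print(enkf_update(ens, pred, np.array([2.4, 3.6]),
                  np.array([0.05, 0.05])).mean())
```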

  16. Using the split Hopkinson pressure bar to validate material models.

    PubMed

    Church, Philip; Cornish, Rory; Cullis, Ian; Gould, Peter; Lewtas, Ian

    2014-08-28

    This paper discusses the use of the split Hopkinson pressure bar with particular reference to the requirements of materials modelling at QinetiQ. The aim is to deploy validated material models for numerical simulations that are physically based and have as little characterization overhead as possible. In order to have confidence that the models have a wide range of applicability, this means, at most, characterizing the models at low rate and then validating them at high rate. The split Hopkinson pressure bar (SHPB) is ideal for this purpose. It is also a very useful tool for analysing material behaviour under non-shock-wave loading. This requires understanding the output of the test and developing techniques for reliable comparison of simulations with SHPB data. For materials other than metals, comparison with an output stress versus strain curve is not sufficient, as the assumptions built into the classical analysis are generally violated. The method described in this paper compares the simulations with as much validation data as can be derived from the deployed instrumentation, including the raw strain gauge data on the input and output bars, which avoids any assumptions about stress equilibrium. One has to take into account Pochhammer-Chree oscillations and their effect on the specimen, and recognize that this is itself also a valuable validation test of the material model. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
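
    For contrast, the classical one-wave SHPB reduction whose assumptions (stress equilibrium, one-dimensional waves) the paper warns about is short enough to state in code; the geometry and material numbers in the usage example are invented.

```python
import numpy as np

def shpb_classical(eps_r, eps_t, dt, E_bar, A_bar, c0, A_spec, L_spec):
    """Classical one-wave SHPB analysis from strain-gauge signals.

    eps_r, eps_t   : reflected and transmitted strain histories
    dt             : sample interval [s]
    E_bar, A_bar   : bar modulus [Pa] and cross-section [m^2]
    c0             : bar wave speed [m/s]
    A_spec, L_spec : specimen cross-section [m^2] and length [m]
    """
    strain_rate = -2.0 * c0 / L_spec * eps_r          # specimen strain rate
    strain = np.cumsum(strain_rate) * dt              # integrate in time
    stress = E_bar * A_bar / A_spec * eps_t           # one-wave stress
    return strain, stress, strain_rate

# Synthetic half-sine pulses just to exercise the function
t = np.linspace(0, 100e-6, 500)
eps_r = -2e-3 * np.sin(np.pi * t / 100e-6)
eps_t = 1e-3 * np.sin(np.pi * t / 100e-6)
strain, stress, rate = shpb_classical(eps_r, eps_t, t[1] - t[0],
                                      200e9, 2.0e-4, 5000.0, 7.9e-5, 5e-3)
print(f"peak stress {stress.max()/1e6:.0f} MPa at strain {strain[-1]:.3f}")
```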

  17. Numerical modeling and experimental validation of thermoplastic composites induction welding

    NASA Astrophysics Data System (ADS)

    Palmieri, Barbara; Nele, Luigi; Galise, Francesco

    2018-05-01

    In this work, a numerical simulation and experimental tests of the induction welding of continuous fibre-reinforced thermoplastic composites (CFRTPCs) are presented. The thermoplastic polyamide 66 (PA66) with carbon fiber fabric was used. Using dedicated software (JMag Designer), the influence of the fundamental process parameters, such as temperature, current and holding time, was investigated. In order to validate the results of the simulations, and therefore the numerical model used, experimental tests were carried out, and the temperature values measured during the tests with an optical pyrometer were compared with those provided by the numerical simulation. The mechanical properties of the welded joints were evaluated by single lap shear tests.

  18. Cross-Validation of Survival Bump Hunting by Recursive Peeling Methods.

    PubMed

    Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J Sunil

    2014-08-01

    We introduce a survival/risk bump hunting framework to build a bump hunting model with a possibly censored time-to-event type of response and to validate model estimates. First, we describe the use of adequate survival peeling criteria to build a survival/risk bump hunting model based on recursive peeling methods. Our method called "Patient Recursive Survival Peeling" is a rule-induction method that makes use of specific peeling criteria such as hazard ratio or log-rank statistics. Second, to validate our model estimates and improve survival prediction accuracy, we describe a resampling-based validation technique specifically designed for the joint task of decision rule making by recursive peeling (i.e. decision-box) and survival estimation. This alternative technique, called "combined" cross-validation is done by combining test samples over the cross-validation loops, a design allowing for bump hunting by recursive peeling in a survival setting. We provide empirical results showing the importance of cross-validation and replication.

  1. Impact and Penetration of Thin Aluminum 2024 Flat Panels at Oblique Angles of Incidence

    NASA Technical Reports Server (NTRS)

    Ruggeri, Charles R.; Revilock, Duane M.; Pereira, J. Michael; Emmerling, William; Queitzsch, Gilbert K., Jr.

    2015-01-01

    The U.S. Federal Aviation Administration (FAA) and the National Aeronautics and Space Administration (NASA) are actively involved in improving the predictive capabilities of transient finite element computational methods for application to safety issues involving unintended impacts on aircraft and aircraft engine structures. One aspect of this work involves the development of an improved deformation and failure model for metallic materials, known as the Tabulated Johnson-Cook model, or MAT224, which has been implemented in the LS-DYNA commercial transient finite element analysis code (LSTC Corp., Livermore, CA) (Ref. 1). In this model the yield stress is a function of strain, strain rate and temperature and the plastic failure strain is a function of the state of stress, temperature and strain rate. The failure criterion is based on the accumulation of plastic strain in an element. The model also incorporates a regularization scheme to account for the dependency of plastic failure strain on mesh size. For a given material the model requires a significant amount of testing to determine the yield stress and failure strain as a function of the three-dimensional state of stress, strain rate and temperature. In addition, experiments are required to validate the model. Currently the model has been developed for Aluminum 2024 and validated against a series of ballistic impact tests on flat plates of various thicknesses (Refs. 1 to 3). Full development of the model for Titanium 6Al-4V is being completed, and mechanical testing for Inconel 718 has begun. The validation testing for the models involves ballistic impact tests using cylindrical projectiles impacting flat plates at a normal incidence (Ref. 2). By varying the thickness of the plates, different stress states and resulting failure modes are induced, providing a range of conditions over which the model can be validated. The objective of the study reported here was to provide experimental data to evaluate the model under more extreme conditions, using a projectile with a more complex shape and sharp contacts, impacting flat panels at oblique angles of incidence.

  3. Ventilation tube insertion simulation: a literature review and validity assessment of five training models.

    PubMed

    Mahalingam, S; Awad, Z; Tolley, N S; Khemani, S

    2016-08-01

    The objective of this study was to identify and investigate the face and content validity of ventilation tube insertion (VTI) training models described in the literature. A review of the literature was carried out to identify articles describing VTI simulators. Feasible models were replicated and assessed by a group of experts. Postgraduate simulation centre. Experts were defined as surgeons who had performed at least 100 VTIs on patients. Seventeen experts participated, ensuring sufficient statistical power for analysis. A standardised 18-item Likert-scale questionnaire was used. This addressed face validity (realism), global and task-specific content (suitability of the model for teaching) and curriculum recommendation. The search revealed eleven models, of which only five had associated validity data. Five models were found to be feasible to replicate. None of the tested models achieved face or global content validity. Only one model achieved task-specific validity, and hence, there was no agreement on curriculum recommendation. The quality of simulation models is moderate and there is room for improvement. There is a need for new models to be developed or existing ones to be refined in order to construct a more realistic training platform for VTI simulation. © 2015 John Wiley & Sons Ltd.

  4. Predictive models of safety based on audit findings: Part 1: Model development and reliability.

    PubMed

    Hsiao, Yu-Lin; Drury, Colin; Wu, Changxu; Paquet, Victor

    2013-03-01

    This consecutive study was aimed at the quantitative validation of safety audit tools as predictors of safety performance, as we were unable to find prior studies that tested audit validity against safety outcomes. An aviation maintenance domain was chosen for this work as both audits and safety outcomes are currently prescribed and regulated. In Part 1, we developed a Human Factors/Ergonomics classification framework based on the HFACS model (Shappell and Wiegmann, 2001a,b) for the human errors detected by audits, because merely counting audit findings did not predict future safety. The framework was tested for measurement reliability using four participants, two of whom classified errors on 1238 audit reports. Kappa values leveled out after about 200 audits at between 0.5 and 0.8 for different tiers of error categories. This showed sufficient reliability to proceed with prediction validity testing in Part 2. Copyright © 2012 Elsevier Ltd and The Ergonomics Society. All rights reserved.
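
    The reliability statistic used here is Cohen's kappa, which corrects inter-rater agreement for chance. A minimal sketch, with invented HFACS-style category labels:

    ```python
    # Cohen's kappa between two raters classifying audit findings into
    # error categories; labels and ratings are invented for illustration.
    from sklearn.metrics import cohen_kappa_score

    rater_a = ["skill", "decision", "skill", "perceptual", "decision", "skill"]
    rater_b = ["skill", "decision", "violation", "perceptual", "skill", "skill"]

    kappa = cohen_kappa_score(rater_a, rater_b)
    print(f"kappa = {kappa:.2f}")  # the study reported values of about 0.5-0.8
    ```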

  5. The Politics and Statistics of Value-Added Modeling for Accountability of Teacher Preparation Programs

    ERIC Educational Resources Information Center

    Lincove, Jane Arnold; Osborne, Cynthia; Dillon, Amanda; Mills, Nicholas

    2014-01-01

    Despite questions about validity and reliability, the use of value-added estimation methods has moved beyond academic research into state accountability systems for teachers, schools, and teacher preparation programs (TPPs). Prior studies of value-added measurement for TPPs test the validity of researcher-designed models and find that measuring…

  6. The Validity of the Three-Component Model of Organizational Commitment in a Chinese Context.

    ERIC Educational Resources Information Center

    Cheng, Yuqiu; Stockdale, Margaret S.

    2003-01-01

    The construct validity of a three-component model of organizational commitment was tested with 226 Chinese employees. Affective and normative commitment significantly predicted job satisfaction; all three components predicted turnover intention. Compared with Canadian (n=603) and South Korean (n=227) samples, normative and affective commitment…

  7. 77 FR 485 - Wind Plant Performance-Public Meeting on Modeling and Testing Needs for Complex Air Flow...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-05

    ... modeling needs and experimental validation techniques for complex flow phenomena in and around off- shore... experimental validation. Ultimately, research in this area may lead to significant improvements in wind plant... meeting will consist of an initial plenary session in which invited speakers will survey available...

  8. Validating Cognitive Models of Task Performance in Algebra on the SAT®. Research Report No. 2009-3

    ERIC Educational Resources Information Center

    Gierl, Mark J.; Leighton, Jacqueline P.; Wang, Changjiang; Zhou, Jiawen; Gokiert, Rebecca; Tan, Adele

    2009-01-01

    The purpose of the study is to present research focused on validating the four algebra cognitive models in Gierl, Wang, et al., using student response data collected with protocol analysis methods to evaluate the knowledge structures and processing skills used by a sample of SAT test takers.

  9. Pressure distribution on the roof of a model low-rise building tested in a boundary layer wind tunnel

    NASA Astrophysics Data System (ADS)

    Goliber, Matthew Robert

    With three of the largest metropolitan areas in the United States along the Gulf coast (Houston, Tampa, and New Orleans), residential populations ever increasing due to the subtropical climate, and insured land value along the coast from Texas to the Florida panhandle greater than $500 billion, hurricane related knowledge is as important now as ever before. This thesis focuses on model low-rise building wind tunnel tests done in connection with full-scale low-rise building tests. Mainly, pressure data collection equipment and methods used in the wind tunnel are compared to pressure data collection equipment and methods used in the field. Although the focus of this report is on the testing of models in the wind tunnel, the low-rise building in the field is located in Pensacola, Florida. It has a wall length of 48 feet, a width of 32 feet, a height of 10 feet, and a gable roof with a pitch of 1:3 and 68 pressure ports strategically placed on the surface of the roof. Built by the Forest Products Laboratory (FPL) in 2002, the importance of the test structure has been realized as it has been subjected to numerous hurricanes. In fact, the validity of the field data is so important that the following thesis was necessary. The first model tested in the Bill James Wind Tunnel for this research was a rectangular box. It was through the testing of this box that much of the basic wind tunnel and pressure data collection knowledge was gathered. Knowledge gained from Model 1 tests was as basic as how to: mount pressure tubes on a model, use a pressure transducer, operate the wind tunnel, utilize the pitot tube and reference pressure, and measure wind velocity. Model 1 tests also showed the importance of precise construction to produce precise pressure coefficients. Model 2 was tested in the AABL Wind Tunnel at Iowa State University. This second model was a 22 inch cube which contained a total of 11 rows of pressure ports on its front and top faces. The purpose of Model 2 was to validate the tube length, tube diameter, port diameter, and pressure transducer used in the field. Also, Model 2 was used to study the effects of surface roughness on pressure readings. A partial roof and wall of the low-rise building in the field was used as the third model. Similar to the second model, Model 3 was tested in the AABL Wind Tunnel. Initially, the objectives of the third model were to validate the pressure port protection device (PPPD) being used in the field and test the possibility of interpolating between pressure ports. But in the end, Model 3 was best used to validate the inconsistencies of the full-scale PPPD, validate the transducers used in the field, and prove the importance of scaling either all or none of the model. Fourth, Model 4 was a 1:16 model of the low-rise building itself. Based on the three previous model tests, Model 4 was instrumented with 202 pressure transducers to better understand: (1) the pressure distribution on the roof of the structure, (2) the effects of the fundamental test variables such as tube length, tube diameter, port diameter, transducer type, and surface roughness, (3) the effects of a scaled PPPD, (4) the importance of wind angle of attack, and (5) the possibility of measuring pressure data and load data simultaneously. In the end, the combination of all four model tests proved to be helpful in understanding the pressure data gathered on the roof of the low-rise building in the field.
The two main recommendations for the field structure are for reevaluation of the PPPD design and slight redistribution of the pressure ports. The wind tunnel model tests show a need for these two modifications in order to gather more accurate field pressure data. Other than these two adjustments, the model tests show that the remaining data gathering system is currently accurate.

  10. External validation of preexisting first trimester preeclampsia prediction models.

    PubMed

    Allen, Rebecca E; Zamora, Javier; Arroyo-Manzano, David; Velauthar, Luxmilar; Allotey, John; Thangaratinam, Shakila; Aquilina, Joseph

    2017-10-01

    To validate the increasing number of prognostic models being developed for preeclampsia using our own prospective study. A systematic review of the literature that assessed biomarkers, uterine artery Doppler and maternal characteristics in the first trimester for the prediction of preeclampsia was performed, and models were selected based on predefined criteria. Validation was performed by applying the regression coefficients published in the different derivation studies to our cohort. We assessed the models' discrimination ability and calibration. Twenty models were identified for validation. The discrimination ability observed in the derivation studies (area under the curve, AUC) ranged from 0.70 to 0.96; when these models were validated against the validation cohort, the AUCs varied considerably, ranging from 0.504 to 0.833. Comparing the AUCs obtained in the derivation studies to those in the validation cohort, we found statistically significant differences in several studies. There is currently no definitive prediction model with adequate ability to discriminate for preeclampsia that performs as well when applied to a different population and can differentiate well between the highest and lowest risk groups within the tested population. The pre-existing large number of models limits the value of further model development, and future research should be focussed on further attempts to validate existing models and on assessing whether implementation of these improves patient care. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.
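
    The validation procedure described, applying published regression coefficients to a new cohort and summarizing discrimination with the AUC, can be sketched as follows. The coefficients, predictors, and cohort below are all invented for illustration.

    ```python
    # Sketch of external validation of a published logistic prediction model.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    beta = np.array([-4.0, 0.03, 1.2])  # hypothetical published coefficients
    rng = np.random.default_rng(1)
    X = np.column_stack([np.ones(500),            # intercept column
                         rng.normal(90, 10, 500),  # e.g. mean arterial pressure
                         rng.normal(0, 1, 500)])   # e.g. log-transformed biomarker

    lp = X @ beta                     # linear predictor from published betas
    prob = 1 / (1 + np.exp(-lp))      # predicted risk for each woman
    y = rng.random(500) < prob        # simulated observed outcomes
    print("validation AUC:", round(roc_auc_score(y, prob), 3))
    ```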

  11. Community-Based Participatory Research Conceptual Model: Community Partner Consultation and Face Validity.

    PubMed

    Belone, Lorenda; Lucero, Julie E; Duran, Bonnie; Tafoya, Greg; Baker, Elizabeth A; Chan, Domin; Chang, Charlotte; Greene-Moton, Ella; Kelley, Michele A; Wallerstein, Nina

    2016-01-01

    A national community-based participatory research (CBPR) team developed a conceptual model of CBPR partnerships to understand the contribution of partnership processes to improved community capacity and health outcomes. With the model primarily developed through academic literature and expert consensus building, we sought community input to assess face validity and acceptability. Our research team conducted semi-structured focus groups with six partnerships nationwide. Participants validated and expanded on existing model constructs and identified new constructs based on "real-world" praxis, resulting in a revised model. Four cross-cutting constructs were identified: trust development, capacity, mutual learning, and power dynamics. By empirically testing the model, we found community face validity and capacity to adapt the model to diverse contexts. We recommend partnerships use and adapt the CBPR model and its constructs, for collective reflection and evaluation, to enhance their partnering practices and achieve their health and research goals. © The Author(s) 2014.

  12. Base Flow Model Validation

    NASA Technical Reports Server (NTRS)

    Sinha, Neeraj; Brinckman, Kevin; Jansen, Bernard; Seiner, John

    2011-01-01

    A method was developed for obtaining propulsive base flow data in both hot and cold jet environments, at Mach numbers and altitudes of relevance to NASA launcher designs. The base flow data was used to perform computational fluid dynamics (CFD) turbulence model assessments of base flow predictive capabilities in order to provide increased confidence in base thermal and pressure load predictions obtained from computational modeling efforts. Predictive CFD analyses were used in the design of the experiments, available propulsive models were used to reduce program costs and increase success, and a wind tunnel facility was used. The data obtained allowed assessment of CFD/turbulence models in a complex flow environment, working within a building-block procedure to validation, where cold, non-reacting test data was first used for validation, followed by more complex reacting base flow validation.

  13. RELAP-7 Software Verification and Validation Plan: Requirements Traceability Matrix (RTM) Part 1 – Physics and numerical methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Yong Joon; Yoo, Jun Soo; Smith, Curtis Lee

    2015-09-01

    This INL plan comprehensively describes the Requirements Traceability Matrix (RTM) for the main physics and numerical methods of RELAP-7. The plan also describes the testing-based software verification and validation (SV&V) process—a set of specially designed software models used to test RELAP-7.

  14. The Impact of Model Parameterization and Estimation Methods on Tests of Measurement Invariance with Ordered Polytomous Data

    ERIC Educational Resources Information Center

    Koziol, Natalie A.; Bovaird, James A.

    2018-01-01

    Evaluations of measurement invariance provide essential construct validity evidence--a prerequisite for seeking meaning in psychological and educational research and ensuring fair testing procedures in high-stakes settings. However, the quality of such evidence is partly dependent on the validity of the resulting statistical conclusions. Type I or…

  15. Construct validity of the Chinese version of the Self-care of Heart Failure Index determined using structural equation modeling.

    PubMed

    Kang, Xiaofeng; Dennison Himmelfarb, Cheryl R; Li, Zheng; Zhang, Jian; Lv, Rong; Guo, Jinyu

    2015-01-01

    The Self-care of Heart Failure Index (SCHFI) is an empirically tested instrument for measuring the self-care of patients with heart failure. The aim of this study was to develop a simplified Chinese version of the SCHFI and provide evidence for its construct validity. A total of 182 Chinese with heart failure were surveyed. A 2-step structural equation modeling procedure was applied to test construct validity. Factor analysis showed 3 factors explaining 43% of the variance. Structural equation model confirmed that self-care maintenance, self-care management, and self-care confidence are indeed indicators of self-care, and self-care confidence was a positive and equally strong predictor of self-care maintenance and self-care management. Moreover, self-care scores were correlated with the Partners in Health Scale, indicating satisfactory concurrent validity. The Chinese version of the SCHFI is a theory-based instrument for assessing self-care of Chinese patients with heart failure.

  16. An extended protocol for usability validation of medical devices: Research design and reference model.

    PubMed

    Schmettow, Martin; Schnittker, Raphaela; Schraagen, Jan Maarten

    2017-05-01

    This paper proposes and demonstrates an extended protocol for usability validation testing of medical devices. A review of currently used methods for the usability evaluation of medical devices revealed two main shortcomings. Firstly, there is a lack of methods to closely trace the interaction sequences and derive performance measures. Secondly, there is a prevailing focus on cross-sectional validation studies, ignoring the issues of learnability and training. The U.S. Food and Drug Administration's recent proposal for a validation testing protocol for medical devices is then extended to address these shortcomings: (1) a novel process measure, 'normative path deviations', is introduced that is useful for both quantitative and qualitative usability studies, and (2) a longitudinal, completely within-subject study design is presented that assesses learnability and training effects and allows analysis of the diversity of users. A reference regression model is introduced to analyze data from this and similar studies, drawing upon generalized linear mixed-effects models and a Bayesian estimation approach. The extended protocol is implemented and demonstrated in a study comparing a novel syringe infusion pump prototype to an existing design with a sample of 25 healthcare professionals. Strong performance differences between designs were observed with a variety of usability measures, as well as varying training-on-the-job effects. We discuss our findings with regard to validation testing guidelines, reflect on the extensions and discuss the perspectives they add to the validation process. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. Improving the quality of discrete-choice experiments in health: how can we assess validity and reliability?

    PubMed

    Janssen, Ellen M; Marshall, Deborah A; Hauber, A Brett; Bridges, John F P

    2017-12-01

    The recent endorsement of discrete-choice experiments (DCEs) and other stated-preference methods by regulatory and health technology assessment (HTA) agencies has placed a greater focus on demonstrating the validity and reliability of preference results. Areas covered: We present a practical overview of tests of validity and reliability that have been applied in the health DCE literature and explore other study qualities of DCEs. From the published literature, we identify a variety of methods to assess the validity and reliability of DCEs. We conceptualize these methods to create a conceptual model with four domains: measurement validity, measurement reliability, choice validity, and choice reliability. Each domain consists of three categories that can be assessed using one to four procedures (for a total of 24 tests). We present how these tests have been applied in the literature and direct readers to applications of these tests in the health DCE literature. Based on a stakeholder engagement exercise, we consider the importance of study characteristics beyond traditional concepts of validity and reliability. Expert commentary: We discuss study design considerations to assess the validity and reliability of a DCE, consider limitations to the current application of tests, and discuss future work to consider the quality of DCEs in healthcare.

  18. Parametric Study of the Effect of Membrane Tension on Sunshield Dynamics

    NASA Technical Reports Server (NTRS)

    Ross, Brian; Johnston, John D.; Smith, James

    2002-01-01

    The NGST sunshield is a lightweight, flexible structure consisting of pretensioned membranes supported by deployable booms. The structural dynamic behavior of the sunshield must be well understood in order to predict its influence on observatory performance. A 1/10th scale model of the sunshield has been developed for ground testing to provide data to validate modeling techniques for thin film membrane structures. The validated models can then be used to predict the behaviour of the full scale sunshield. This paper summarizes the most recent tests performed on the 1/10th scale sunshield to study the effect of membrane preload on sunshield dynamics. Topics to be covered include the test setup, procedures, and a summary of results.

  19. Wall-to-wall Landsat TM classifications for Georgia in support of SAFIS using FIA plots for training and verification

    Treesearch

    William H. Cooke; Andrew J. Hartsell

    2000-01-01

    Wall-to-wall Landsat TM classification efforts in Georgia require field validation. Validation using FIA data was tested by developing a new crown modeling procedure. A methodology is under development at the Southern Research Station to model crown diameter using Forest Health Monitoring data. These models are used to simulate the proportion of tree crowns that...

  20. A model of scientific attitudes assessment by observation in physics learning based scientific approach: case study of dynamic fluid topic in high school

    NASA Astrophysics Data System (ADS)

    Yusliana Ekawati, Elvin

    2017-01-01

    This study aimed to produce a model of scientific attitude assessment based on observation for physics learning using a scientific approach (a case study of the dynamic fluid topic in high school). The development of the instruments in this study adapted the Plomp model; the procedure includes initial investigation, design, construction, testing, evaluation and revision. The test was done in Surakarta, and the data obtained were analyzed using the Aiken formula to determine the content validity of the instrument, Cronbach’s alpha to determine the reliability of the instrument, and construct validity using confirmatory factor analysis with the LISREL 8.50 program. The results of this research were conceptual models, instruments and guidelines on scientific attitude assessment by observation. The construct assessment instruments include components of curiosity, objectivity, suspended judgment, open-mindedness, honesty and perseverance. The construct validity of the instruments has been qualified (rated load factor > 0.3). The reliability of the model is quite good, with an alpha value of 0.899 (> 0.7). The test showed that the model fits the theoretical model and is supported by empirical data, namely p-value 0.315 (≥ 0.05) and RMSEA 0.027 (≤ 0.08).
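
    For reference, the two classical statistics used in this study, Aiken's V for content validity and Cronbach's alpha for reliability, can be computed directly. A minimal sketch with invented expert ratings and observation scores:

    ```python
    # Standard Aiken's V and Cronbach's alpha; all data below are invented.
    import numpy as np

    def aiken_v(ratings, c):
        """Aiken's V for one item: expert ratings on a 1..c scale."""
        r = np.asarray(ratings)
        return (r - 1).sum() / (len(r) * (c - 1))

    def cronbach_alpha(items):
        """items: array of shape (n_respondents, n_items)."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_var = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_var / total_var)

    print(aiken_v([4, 5, 4, 5, 5], c=5))  # content validity of one item

    rng = np.random.default_rng(2)
    base = rng.integers(1, 6, size=(30, 1))  # a common trait per respondent
    scores = np.clip(base + rng.integers(-1, 2, size=(30, 6)), 1, 5)
    print(cronbach_alpha(scores))            # reliability across 6 items
    ```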

  1. Abe Silverstein 10- by 10-Foot Supersonic Wind Tunnel Validated for Low-Speed (Subsonic) Operation

    NASA Technical Reports Server (NTRS)

    Hoffman, Thomas R.

    2001-01-01

    The NASA Glenn Research Center and Lockheed Martin Corporation tested an aircraft model in two wind tunnels to compare low-speed (subsonic) flow characteristics. Objectives of the test were to determine and document the similarities and uniqueness of the tunnels and to validate that Glenn's 10- by 10-Foot Supersonic Wind Tunnel (10x10 SWT) is a viable low-speed test facility. Results from two of Glenn's wind tunnels compare very favorably and show that the 10x10 SWT is a viable low-speed wind tunnel. The Subsonic Comparison Test was a joint effort by NASA and Lockheed Martin using Lockheed Martin's Joint Strike Fighter Concept Demonstration Aircraft model. Although Glenn's 10x10 and 8x6 SWTs have many similarities, they also have unique characteristics. Therefore, test data were collected for multiple model configurations at various vertical locations in the test section, starting at the test section centerline and extending into the ceiling and floor boundary layers.

  2. Validity and reliability of the PowerTap mobile cycling powermeter when compared with the SRM Device.

    PubMed

    Bertucci, W; Duc, S; Villerius, V; Pernin, J N; Grappe, F

    2005-12-01

    The SRM power measuring crank system is nowadays a popular device for cycling power output (PO) measurements in the field and in laboratories. The PowerTap (CycleOps, Madison, USA) is a more recent and less well-known device that allows mobile PO measurements of cycling via the rear wheel hub. The aim of this study is to test the validity and reliability of the PowerTap by comparing it with the most accurate version of the SRM system (i.e. the scientific model). The validity of the PowerTap is tested during i) sub-maximal incremental intensities (ranging from 100 to 420 W) on a treadmill with different pedalling cadences (45 to 120 rpm) and cycling positions (standing and seated) on different grades, ii) a continuous sub-maximal intensity lasting 30 min, iii) a maximal intensity (8-s sprint), and iv) real road cycling. The reliability is assessed by repeating the sub-maximal incremental and continuous tests ten times. The results show a good validity of the PowerTap during sub-maximal intensities between 100 and 450 W (mean PO difference -1.2 +/- 1.3 %) when it is compared to the scientific SRM model, but less validity for the maximal PO during sprint exercise, where the validity appears to depend on the gear ratio. The reliability of the PowerTap during the sub-maximal intensities is similar to that of the scientific SRM model (the coefficient of variation is 0.9 to 2.9 % for the PowerTap and 0.7 to 2.1 % for the SRM). The PowerTap must be considered as a suitable device for PO measurements during sub-maximal real road cycling and in sub-maximal laboratory tests.
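
    The agreement statistics reported in device comparisons of this kind, the mean percent difference between paired readings and the coefficient of variation across repeated trials, can be sketched as follows with invented power readings:

    ```python
    # Agreement statistics for paired power meter readings; numbers invented.
    import numpy as np

    srm = np.array([150.0, 200.0, 250.0, 300.0, 350.0])       # reference PO (W)
    powertap = np.array([148.5, 197.0, 247.5, 296.0, 345.5])  # device under test

    pct_diff = 100 * (powertap - srm) / srm
    print("mean PO difference: %.1f +/- %.1f %%"
          % (pct_diff.mean(), pct_diff.std(ddof=1)))

    trials = np.array([248.0, 251.0, 247.0, 252.0, 249.0])  # repeated trials (W)
    cv = 100 * trials.std(ddof=1) / trials.mean()
    print("coefficient of variation: %.1f %%" % cv)
    ```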

  3. Crack propagation monitoring in a full-scale aircraft fatigue test based on guided wave-Gaussian mixture model

    NASA Astrophysics Data System (ADS)

    Qiu, Lei; Yuan, Shenfang; Bao, Qiao; Mei, Hanfei; Ren, Yuanqiang

    2016-05-01

    For aerospace application of structural health monitoring (SHM) technology, the problem of reliable damage monitoring under time-varying conditions must be addressed, and the SHM technology has to be fully validated on real aircraft structures under realistic load conditions on the ground before it can reach the status of flight test. In this paper, the guided wave (GW) based SHM method is applied to a full-scale aircraft fatigue test, one of the test conditions most similar to flight test. To deal with the time-varying problem, a GW-Gaussian mixture model (GW-GMM) is proposed. The probability characteristic of GW features, which is introduced by time-varying conditions, is modeled by the GW-GMM. The weak cumulative variation trend of crack propagation, which is mixed with the time-varying influence, can be tracked by the GW-GMM migration during the on-line damage monitoring process. A best-match based Kullback-Leibler divergence is proposed to measure the GW-GMM migration degree to reveal the crack propagation. The method is validated in the full-scale aircraft fatigue test. The validation results indicate that reliable crack propagation monitoring of the left landing gear spar and the right wing panel under realistic load conditions is achieved.
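
    A sketch of the best-match Kullback-Leibler idea described above, assuming the closed-form KL divergence between Gaussian components and invented toy parameters; this illustrates the concept rather than the authors' implementation:

    ```python
    # Best-match KL divergence between two Gaussian mixtures: each baseline
    # component is matched to its nearest component in the current GMM.
    import numpy as np

    def kl_gauss(m0, S0, m1, S1):
        """Closed-form KL divergence between two multivariate Gaussians."""
        k = len(m0)
        S1_inv = np.linalg.inv(S1)
        d = m1 - m0
        return 0.5 * (np.trace(S1_inv @ S0) + d @ S1_inv @ d - k
                      + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

    def best_match_kl(weights0, means0, covs0, means1, covs1):
        total = 0.0
        for w, m0, S0 in zip(weights0, means0, covs0):
            total += w * min(kl_gauss(m0, S0, m1, S1)
                             for m1, S1 in zip(means1, covs1))
        return total

    I = np.eye(2)
    baseline = ([0.5, 0.5], [np.zeros(2), np.ones(2)], [I, I])
    current = ([np.array([0.3, 0.1]), np.array([1.4, 1.2])], [I, 1.2 * I])
    print("migration index:", best_match_kl(*baseline, *current))
    ```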

  4. Issues in cross-cultural validity: example from the adaptation, reliability, and validity testing of a Turkish version of the Stanford Health Assessment Questionnaire.

    PubMed

    Küçükdeveci, Ayse A; Sahin, Hülya; Ataman, Sebnem; Griffiths, Bridget; Tennant, Alan

    2004-02-15

    Guidelines have been established for cross-cultural adaptation of outcome measures. However, invariance across cultures must also be demonstrated through analysis of Differential Item Functioning (DIF). This is tested in the context of a Turkish adaptation of the Health Assessment Questionnaire (HAQ). Internal construct validity of the adapted HAQ is assessed by Rasch analysis; reliability, by internal consistency and the intraclass correlation coefficient; external construct validity, by association with impairments and American College of Rheumatology functional stages. Cross-cultural validity is tested through DIF by comparison with data from the UK version of the HAQ. The adapted version of the HAQ demonstrated good internal construct validity through fit of the data to the Rasch model (mean item fit 0.205; SD 0.998). Reliability was excellent (alpha = 0.97) and external construct validity was confirmed by expected associations. DIF for culture was found in only 1 item. Cross-cultural validity was found to be sufficient for use in international studies between the UK and Turkey. Future adaptation of instruments should include analysis of DIF at the field testing stage in the adaptation process.

  5. Assessing Wildlife Habitat Value of New England Salt Marshes: II. Model Testing and Validation

    EPA Science Inventory

    We test a previously described model to assess the wildlife habitat value of New England salt marshes by comparing modeled habitat values and scores with bird abundance and species richness at sixteen salt marshes in Narragansett Bay, Rhode Island USA. Assessment scores ranged f...

  6. Development and validation of the short-form Adolescent Health Promotion Scale.

    PubMed

    Chen, Mei-Yen; Lai, Li-Ju; Chen, Hsiu-Chih; Gaete, Jorge

    2014-10-26

    Health-promoting lifestyle choices of adolescents are closely related to current and subsequent health status. However, parsimonious yet reliable and valid screening tools are scarce. The original 40-item adolescent health promotion (AHP) scale was developed by our research team and has been applied to measure adolescent health-promoting behaviors worldwide. The aim of our study was to examine the psychometric properties of a newly developed short-form version of the AHP (AHP-SF) including tests of its reliability and validity. The study was conducted in nine middle and high schools in southern Taiwan. Participants were 814 adolescents randomly divided into two subgroups with equal size and homogeneity of baseline characteristics. The first subsample (calibration sample) was used to modify and shorten the factorial model while the second subsample (validation sample) was utilized to validate the result obtained from the first one. The psychometric testing of the AHP-SF included internal reliability of McDonald's omega and Cronbach's alpha, convergent validity, discriminant validity, and construct validity with confirmatory factor analysis (CFA). The results of the CFA supported a six-factor model and 21 items were retained in the AHP-SF with acceptable model fit. For the discriminant validity test, results indicated that adolescents with lower AHP-SF scores were more likely to be overweight or obese, skip breakfast, and spend more time watching TV and playing computer games. The AHP-SF also showed excellent internal consistency with a McDonald's omega of 0.904 (Cronbach's alpha 0.905) in the calibration group. The current findings suggest that the AHP-SF is a valid and reliable instrument for the evaluation of adolescent health-promoting behaviors. Primary health care providers and clinicians can use the AHP-SF to assess these behaviors and evaluate the outcome of health promotion programs in the adolescent population.

  7. In-line pressure-flow module for in vitro modelling of haemodynamics and biosensor validation

    NASA Technical Reports Server (NTRS)

    Koenig, S. C.; Schaub, J. D.; Ewert, D. L.; Swope, R. D.; Convertino, V. A. (Principal Investigator)

    1997-01-01

    An in-line pressure-flow module for in vitro modelling of haemodynamics and biosensor validation has been developed. Studies show that good accuracy can be achieved in the measurement of pressure and of flow, in steady and pulsatile flow systems. The module can be used for the development, testing and evaluation of cardiovascular mechanical-electrical analogue models, cardiovascular prosthetics (i.e. valves, vascular grafts) and pressure and flow biosensors.

  8. Short-Term Forecasts Using NU-WRF for the Winter Olympics 2018

    NASA Technical Reports Server (NTRS)

    Srikishen, Jayanthi; Case, Jonathan L.; Petersen, Walter A.; Iguchi, Takamichi; Tao, Wei-Kuo; Zavodsky, Bradley T.; Molthan, Andrew

    2017-01-01

    The NASA Unified-Weather Research and Forecasting model (NU-WRF) will be included for testing and evaluation in the forecast demonstration project (FDP) of the International Collaborative Experiment - PyeongChang 2018 Olympic and Paralympic (ICE-POP) Winter Games. An international array of radar and supporting ground based observations together with various forecast and now-cast models will be operational during ICE-POP. In conjunction with personnel from NASA's Goddard Space Flight Center, the NASA Short-term Prediction Research and Transition (SPoRT) Center is developing benchmark simulations for a real-time NU-WRF configuration to run during the FDP. ICE-POP observational datasets will be used to validate model simulations and investigate improved model physics and performance for prediction of snow events during the research phase (RDP) of the project. The NU-WRF model simulations will also support NASA Global Precipitation Measurement (GPM) Mission ground-validation physical and direct validation activities in relation to verifying, testing and improving satellite-based snowfall retrieval algorithms over complex terrain.

  9. Design and landing dynamic analysis of reusable landing leg for a near-space manned capsule

    NASA Astrophysics Data System (ADS)

    Yue, Shuai; Nie, Hong; Zhang, Ming; Wei, Xiaohui; Gan, Shengyong

    2018-06-01

    To improve the landing performance of a near-space manned capsule under various landing conditions, a novel landing system is designed that employs double chamber and single chamber dampers in the primary and auxiliary struts, respectively. A dynamic model of the landing system is established, and the damper parameters are determined by employing the design method. A single-leg drop test with different initial pitch angles is then conducted to compare and validate the simulation model. Based on the validated simulation model, seven critical landing conditions regarding nine crucial landing responses are found by combining the radial basis function (RBF) surrogate model and adaptive simulated annealing (ASA) optimization method. Subsequently, the adaptability of the landing system under critical landing conditions is analyzed. The results show that the simulation results effectively match the test results, which validates the accuracy of the dynamic model. In addition, all of the crucial responses under their corresponding critical landing conditions satisfy the design specifications, demonstrating the feasibility of the landing system.

  10. Development and Psychometric Testing of a Sexual Concerns Questionnaire for Kidney Transplant Recipients.

    PubMed

    Muehrer, Rebecca J; Lanuza, Dorothy M; Brown, Roger L; Djamali, Arjang

    2015-01-01

    This study describes the development and psychometric testing of the Sexual Concerns Questionnaire (SCQ) in kidney transplant (KTx) recipients. Construct validity was assessed using the Kroonenberg and Lewis exploratory/confirmatory procedure and testing hypothesized relationships with established questionnaires. Configural and weak invariance were examined across gender, dialysis history, relationship status, and transplant type. Reliability was assessed with Cronbach's alpha, composite reliability, and test-retest reliability. Factor analysis resulted in a 7-factor solution and suggests good model fit. Construct validity was also supported by the tests of hypothesized relationships. Configural and weak invariance were supported for all subgroups. Reliability of the SCQ was also supported. Findings indicate the SCQ is a valid and reliable measure of KTx recipients' sexual concerns.

  11. Concurrent validity of persian version of wechsler intelligence scale for children - fourth edition and cognitive assessment system in patients with learning disorder.

    PubMed

    Rostami, Reza; Sadeghi, Vahid; Zarei, Jamileh; Haddadi, Parvaneh; Mohazzab-Torabi, Saman; Salamati, Payman

    2013-04-01

    The aim of this study was to compare the Persian version of the Wechsler Intelligence Scale for Children - Fourth Edition (WISC-IV) and the Cognitive Assessment System (CAS) tests, to determine the correlation between their scales and to evaluate the probable concurrent validity of these tests in patients with learning disorders. One hundred sixty-two children with learning disorders who presented at Atieh Comprehensive Psychiatry Center were selected in a consecutive non-randomized order. All of the patients were assessed using the WISC-IV and CAS questionnaires. The Pearson correlation coefficient was used to analyze the correlation between the data and to assess the concurrent validity of the two tests. Linear regression was used for statistical modeling. The maximum type I error was set at 5%. There was a strong correlation between the total score of the WISC-IV test and the total score of the CAS test in the patients (r=0.75, P<0.001). The correlations among the other scales were mostly high and all of them were statistically significant (P<0.001). A linear regression model was obtained (α = 0.51, β = 0.81 and P<0.001). There is an acceptable correlation between the WISC-IV scales and the CAS test in children with learning disorders. A concurrent validity is established between the two tests and their scales.

  12. Concurrent Validity of Persian Version of Wechsler Intelligence Scale for Children - Fourth Edition and Cognitive Assessment System in Patients with Learning Disorder

    PubMed Central

    Rostami, Reza; Sadeghi, Vahid; Zarei, Jamileh; Haddadi, Parvaneh; Mohazzab-Torabi, Saman; Salamati, Payman

    2013-01-01

    Objective The aim of this study was to compare the Persian version of the Wechsler Intelligence Scale for Children - Fourth Edition (WISC-IV) and the Cognitive Assessment System (CAS) tests, to determine the correlation between their scales and to evaluate the probable concurrent validity of these tests in patients with learning disorders. Methods One hundred sixty-two children with learning disorders who presented at Atieh Comprehensive Psychiatry Center were selected in a consecutive non-randomized order. All of the patients were assessed using the WISC-IV and CAS questionnaires. The Pearson correlation coefficient was used to analyze the correlation between the data and to assess the concurrent validity of the two tests. Linear regression was used for statistical modeling. The maximum type I error was set at 5%. Findings There was a strong correlation between the total score of the WISC-IV test and the total score of the CAS test in the patients (r=0.75, P<0.001). The correlations among the other scales were mostly high and all of them were statistically significant (P<0.001). A linear regression model was obtained (α = 0.51, β = 0.81 and P<0.001). Conclusion There is an acceptable correlation between the WISC-IV scales and the CAS test in children with learning disorders. A concurrent validity is established between the two tests and their scales. PMID:23724180

  13. Testing and Modeling of a 3-MW Wind Turbine Using Fully Coupled Simulation Codes (Poster)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    LaCava, W.; Guo, Y.; Van Dam, J.

    This poster describes the NREL/Alstom Wind testing and model verification of the Alstom 3-MW wind turbine located at NREL's National Wind Technology Center. NREL, in collaboration with ALSTOM Wind, is studying a 3-MW wind turbine installed at the National Wind Technology Center (NWTC). The project analyzes the turbine design using a state-of-the-art simulation code validated with detailed test data. This poster describes the testing and the model validation effort, and provides conclusions about the performance of the unique drive train configuration used in this wind turbine. The 3-MW machine has been operating at the NWTC since March 2011, and drive train measurements will be collected through the spring of 2012. The NWTC testing site has particularly turbulent wind patterns that allow for the measurement of large transient loads and the resulting turbine response. This poster describes the 3-MW turbine test project, the instrumentation installed, and the load cases captured. The design of a reliable wind turbine drive train increasingly relies on the use of advanced simulation to predict structural responses in a varying wind field. This poster presents a fully coupled, aero-elastic and dynamic model of the wind turbine. It also shows the methodology used to validate the model, including the use of measured tower modes, model-to-model comparisons of the power curve, and mainshaft bending predictions for various load cases. The drivetrain is designed to only transmit torque to the gearbox, eliminating non-torque moments that are known to cause gear misalignment. Preliminary results show that the drivetrain is able to divert bending loads in extreme loading cases, and that a significantly smaller bending moment is induced on the mainshaft compared to a three-point mounting design.

  14. Similarity Metrics for Closed Loop Dynamic Systems

    NASA Technical Reports Server (NTRS)

    Whorton, Mark S.; Yang, Lee C.; Bedrossian, Naz; Hall, Robert A.

    2008-01-01

    To what extent and in what ways can two closed-loop dynamic systems be said to be "similar?" This question arises in a wide range of dynamic systems modeling and control system design applications. For example, bounds on error models are fundamental to the controller optimization with modern control design methods. Metrics such as the structured singular value are direct measures of the degree to which properties such as stability or performance are maintained in the presence of specified uncertainties or variations in the plant model. Similarly, controls-related areas such as system identification, model reduction, and experimental model validation employ measures of similarity between multiple realizations of a dynamic system. Each area has its tools and approaches, with each tool more or less suited for one application or the other. Similarity in the context of closed-loop model validation via flight test is subtly different from error measures in the typical controls-oriented application. Whereas similarity in a robust control context relates to plant variation and the attendant effect on stability and performance, in this context similarity metrics are sought that assess the relevance of a dynamic system test for the purpose of validating the stability and performance of a "similar" dynamic system. Similarity in the context of system identification is much more relevant than are robust control analogies in that errors between one dynamic system (the test article) and another (the nominal "design" model) are sought for the purpose of bounding the validity of a model for control design and analysis. Yet system identification typically involves open-loop plant models which are independent of the control system (with the exception of limited developments in closed-loop system identification, which is nonetheless focused on obtaining open-loop plant models from closed-loop data). Moreover, the objectives of system identification are not the same as a flight test, and hence system identification error metrics are not directly relevant. In applications such as launch vehicles, where the open-loop plant is unstable, it is similarity of the closed-loop system dynamics of a flight test that is relevant.

  15. A signal detection-item response theory model for evaluating neuropsychological measures.

    PubMed

    Thomas, Michael L; Brown, Gregory G; Gur, Ruben C; Moore, Tyler M; Patt, Virginie M; Risbrough, Victoria B; Baker, Dewleen G

    2018-02-05

    Models from signal detection theory are commonly used to score neuropsychological test data, especially tests of recognition memory. Here we show that certain item response theory models can be formulated as signal detection theory models, thus linking two complementary but distinct methodologies. We then use the approach to evaluate the validity (construct representation) of commonly used research measures, demonstrate the impact of conditional error on neuropsychological outcomes, and evaluate measurement bias. Signal detection-item response theory (SD-IRT) models were fitted to recognition memory data for words, faces, and objects. The sample consisted of U.S. Infantry Marines and Navy Corpsmen participating in the Marine Resiliency Study. Data comprised item responses to the Penn Face Memory Test (PFMT; N = 1,338), Penn Word Memory Test (PWMT; N = 1,331), and Visual Object Learning Test (VOLT; N = 1,249), and self-report of past head injury with loss of consciousness. SD-IRT models adequately fitted recognition memory item data across all modalities. Error varied systematically with ability estimates, and distributions of residuals from the regression of memory discrimination onto self-report of past head injury were positively skewed towards regions of larger measurement error. Analyses of differential item functioning revealed little evidence of systematic bias by level of education. SD-IRT models benefit from the measurement rigor of item response theory, which permits the modeling of item difficulty and examinee ability, and from signal detection theory, which provides an interpretive framework encompassing the experimentally validated constructs of memory discrimination and response bias. We used this approach to validate the construct representation of commonly used research measures and to demonstrate how nonoptimized item parameters can lead to erroneous conclusions when interpreting neuropsychological test data. Future work might include the development of computerized adaptive tests and integration with mixture and random-effects models.
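
    The signal detection quantities underlying the SD-IRT approach, memory discrimination (d') and response bias (c), are conventionally estimated from hit and false-alarm rates. A minimal sketch with invented response counts and a standard log-linear correction:

    ```python
    # Equal-variance signal detection estimates for an old/new recognition test.
    from scipy.stats import norm

    hits, misses = 32, 8               # responses to "old" (studied) items
    false_alarms, correct_rej = 6, 34  # responses to "new" (lure) items

    # log-linear correction guards against rates of exactly 0 or 1
    hr = (hits + 0.5) / (hits + misses + 1)
    far = (false_alarms + 0.5) / (false_alarms + correct_rej + 1)

    d_prime = norm.ppf(hr) - norm.ppf(far)             # memory discrimination
    criterion = -0.5 * (norm.ppf(hr) + norm.ppf(far))  # response bias
    print(f"d' = {d_prime:.2f}, c = {criterion:.2f}")
    ```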

  16. Simulation, Model Verification and Controls Development of Brayton Cycle PM Alternator: Testing and Simulation of 2 KW PM Generator with Diode Bridge Output

    NASA Technical Reports Server (NTRS)

    Stankovic, Ana V.

    2003-01-01

    Professor Stankovic will be developing and refining Simulink based models of the PM alternator and comparing the simulation results with experimental measurements taken from the unit. Her first task is to validate the models using the experimental data. Her next task is to develop alternative control techniques for the application of the Brayton Cycle PM Alternator in a nuclear electric propulsion vehicle. The control techniques will be first simulated using the validated models then tried experimentally with hardware available at NASA. Testing and simulation of a 2KW PM synchronous generator with diode bridge output is described. The parameters of a synchronous PM generator have been measured and used in simulation. Test procedures have been developed to verify the PM generator model with diode bridge output. Experimental and simulation results are in excellent agreement.

  17. Mathematical Models of IABG Thermal-Vacuum Facilities

    NASA Astrophysics Data System (ADS)

    Doring, Daniel; Ulfers, Hendrik

    2014-06-01

    IABG in Ottobrunn, Germany, operates thermal-vacuum facilities of different sizes and complexities as a service for space-testing of satellites and components. One aspect of these tests is the qualification of the thermal control system that keeps all onboard components within their safe operating temperature band. As not all possible operation / mission states can be simulated within a sensible test time, usually a subset of important and extreme states is tested at TV facilities to validate the thermal model of the satellite, which is then used to model all other possible mission states. With advances in the precision of customer thermal models, simple assumptions of the test environment (e.g. everything black & cold, one solar constant of light from this side) are no longer sufficient, as real space simulation chambers do deviate from this ideal. For example, the mechanical adapters which support the spacecraft are usually not actively cooled. To enable IABG to provide a model that is sufficiently detailed and realistic for current system tests, Munich engineering company CASE developed ESATAN models for the two larger chambers. CASE has many years of experience in thermal analysis for space-flight systems and ESATAN. The two models represent the rather simple (and therefore very homogeneous) 3m-TVA and the extremely complex space simulation test facility and its solar simulator. The cooperation of IABG and CASE built up extensive knowledge of the facilities' thermal behaviour. This is the key to optimally support customers with their test campaigns in the future. The ESARAD part of the models contains all relevant information with regard to geometry (CAD data), surface properties (optical measurements) and solar irradiation for the sun simulator. The temperature of the actively cooled thermal shrouds is measured and mapped to the thermal mesh to create the temperature field in the ESATAN part as boundary conditions. Both models comprise switches to easily establish multiple possible set-ups (e.g. exclude components like the motion system or enable / disable the solar simulator). Both models were validated by comparing calculated results (thermal balance temperatures for simple passive test articles) with measured temperatures generated in actual tests in these facilities. This paper presents information about the chambers, the modelling approach, properties of the models and their performance in the validation tests.

  18. Practical Results from the Application of Model Checking and Test Generation from UML/SysML Models of On-Board Space Applications

    NASA Astrophysics Data System (ADS)

    Faria, J. M.; Mahomad, S.; Silva, N.

    2009-05-01

    The deployment of complex safety-critical applications requires rigorous techniques and powerful tools for both the development and V&V stages. Model-based technologies are increasingly being used to develop safety-critical software, and arguably, turning to them can bring significant benefits to such processes, along with new challenges. This paper presents the results of a research project in which we extended current V&V methodologies to be applied to UML/SysML models, aiming to answer the demands related to validation issues. Two quite different but complementary approaches were investigated: (i) model checking and (ii) the extraction of robustness test cases from the same models. These two approaches do not overlap and, when combined, provide a wider-reaching model/design validation ability than each one alone, thus offering improved safety assurance. Results are encouraging, even though they either fell short of the desired outcome, as with model checking, or are not yet fully mature, as with robustness test-case extraction. In the case of model checking, it was verified that the automatic model validation process can become fully operational, and even expanded in scope, once tool vendors help (inevitably) to improve the XMI standard interoperability situation. For the robustness test-case extraction methodology, the early approach produced interesting results but needs further systematisation and consolidation effort in order to produce results in a more predictable fashion and to reduce reliance on experts' heuristics. Finally, further improvements and innovation research projects were immediately apparent for both investigated approaches, which point to circumventing current limitations in XMI interoperability on the one hand, and, on the other, bringing test-case specification onto the same graphical level as the models themselves and then attempting to automate the generation of executable test cases from its standard UML notation.

  19. QSAR study of curcumine derivatives as HIV-1 integrase inhibitors.

    PubMed

    Gupta, Pawan; Sharma, Anju; Garg, Prabha; Roy, Nilanjan

    2013-03-01

    A QSAR study was performed on curcumine derivatives as HIV-1 integrase inhibitors using multiple linear regression. A statistically significant model was developed with a squared correlation coefficient (r(2)) of 0.891 and a cross-validated r(2) (r(2) cv) of 0.825. The developed model revealed that electronic properties, shape, size, geometry, substitution information and hydrophilicity were important atomic properties for determining the inhibitory activity of these molecules. The model was also tested successfully for external validation (r(2) pred = 0.849), as well as with Tropsha's test for model predictability. Furthermore, a domain analysis was carried out to evaluate the prediction reliability for external set molecules. The model was statistically robust and had good predictive power, and it can be successfully utilized for screening of new molecules.
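
    The cross-validated r(2) (q(2)) reported for QSAR models of this kind is typically a leave-one-out statistic. A minimal sketch with a random placeholder descriptor matrix rather than curcumine data:

    ```python
    # Leave-one-out cross-validated r^2 (q^2) for a multiple linear regression
    # QSAR model; descriptors and activities are synthetic placeholders.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import LeaveOneOut

    rng = np.random.default_rng(3)
    X = rng.normal(size=(25, 4))  # 25 molecules x 4 descriptors
    y = X @ np.array([0.8, -0.5, 0.3, 0.1]) + rng.normal(scale=0.2, size=25)

    press = 0.0  # predictive residual sum of squares
    for train, test in LeaveOneOut().split(X):
        model = LinearRegression().fit(X[train], y[train])
        press += (y[test] - model.predict(X[test])) ** 2
    q2 = 1 - press.item() / ((y - y.mean()) ** 2).sum()
    print(f"q^2 (LOO) = {q2:.3f}")
    ```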

  20. Project on Elite Athlete Commitment (PEAK): III. An examination of the external validity across gender, and the expansion and clarification of the Sport Commitment Model.

    PubMed

    Scanlan, Tara K; Russell, David G; Magyar, T Michelle; Scanlan, Larry A

    2009-12-01

    The Sport Commitment Model was further tested using the Scanlan Collaborative Interview Method to examine its generalizability to New Zealand's elite female amateur netball team, the Silver Ferns. Results supported or clarified Sport Commitment Model predictions, revealed avenues for model expansion, and elucidated the functions of perceived competence and enjoyment in the commitment process. A comparison and contrast of the in-depth interview data from the Silver Ferns with previous interview data from a comparable elite team of amateur male athletes allowed assessment of model external validity, tested the generalizability of the underlying mechanisms, and separated gender differences from discrepancies that simply reflected team or idiosyncratic differences.

  1. Genetic and Environmental Influences of General Cognitive Ability: Is g a valid latent construct?

    PubMed Central

    Panizzon, Matthew S.; Vuoksimaa, Eero; Spoon, Kelly M.; Jacobson, Kristen C.; Lyons, Michael J.; Franz, Carol E.; Xian, Hong; Vasilopoulos, Terrie; Kremen, William S.

    2014-01-01

    Despite an extensive literature, the “g” construct remains a point of debate. Different models explaining the observed relationships among cognitive tests make distinct assumptions about the role of g in relation to those tests and specific cognitive domains. Surprisingly, these different models and their corresponding assumptions are rarely tested against one another. In addition to the comparison of distinct models, a multivariate application of the twin design offers a unique opportunity to test whether there is support for g as a latent construct with its own genetic and environmental influences, or whether the relationships among cognitive tests are instead driven by independent genetic and environmental factors. Here we tested multiple distinct models of the relationships among cognitive tests utilizing data from the Vietnam Era Twin Study of Aging (VETSA), a study of middle-aged male twins. Results indicated that a hierarchical (higher-order) model with a latent g phenotype, as well as specific cognitive domains, was best supported by the data. The latent g factor was highly heritable (86%), and accounted for most, but not all, of the genetic effects in specific cognitive domains and elementary cognitive tests. By directly testing multiple competing models of the relationships among cognitive tests in a genetically-informative design, we are able to provide stronger support than in prior studies for g being a valid latent construct. PMID:24791031

  2. Validation and influence of anthropometric and kinematic models of obese teenagers in vertical jump performance and mechanical internal energy expenditure.

    PubMed

    Achard de Leluardière, F; Hajri, L N; Lacouture, P; Duboy, J; Frelut, M L; Peres, G

    2006-02-01

    There may be concerns about the validity of kinetic models when studying locomotion in obese subjects (OS). The aim of the present study was to improve and validate a relevant representation of obese subjects based on four kinetic models. Fourteen teenagers with severe primary obesity (BMI = 40 +/- 5.2 kg/m(2)) were studied during jumping. The jumps were filmed by six cameras (synchronized, 50 Hz) associated with a force-plate (1,000 Hz). All the tested models were valid; the linear mechanical analysis of the jumps gave similar results (p > 0.05), but there were significantly different segment inertias when considering the subjects' abdomen (p < 0.01), which was associated with a significantly higher mechanical internal energy expenditure (p < 0.01) than that estimated from Dempster's and Hanavan's models, by about 40% and 30%, respectively. The validation of a modelling approach specific to obese subjects will enable a better understanding of their locomotion.

  3. Development and Validation of Linear Alternator Models for the Advanced Stirling Convertor

    NASA Technical Reports Server (NTRS)

    Metscher, Jonathan F.; Lewandowski, Edward

    2014-01-01

    Two models of the linear alternator of the Advanced Stirling Convertor (ASC) have been developed using the Sage 1-D modeling software package. The first model relates the piston motion to electric current by means of a motor constant. The second uses electromagnetic model components to model the magnetic circuit of the alternator. The models are tuned and validated using test data and compared against each other. Results show both models can be tuned to achieve results within 7% of ASC test data under normal operating conditions. Using Sage enables a complete ASC model to be developed and simulations to be completed quickly compared to more complex multi-dimensional models. These models allow for better insight into overall Stirling convertor performance, aid with Stirling power system modeling, and in the future support NASA mission planning for Stirling-based power systems.
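
    The first modeling approach, relating piston motion to electrical output through a motor constant, can be sketched as a toy steady-state calculation; all parameter values below are illustrative assumptions, not ASC data:

    ```python
    # Motor-constant alternator sketch: back-EMF proportional to piston
    # velocity, current set by a simple series circuit with a resistive load.
    import numpy as np

    K = 60.0        # motor constant, V per (m/s), equivalently N per A (assumed)
    R_coil = 2.0    # alternator winding resistance, ohm (assumed)
    R_load = 50.0   # resistive load, ohm (assumed)
    f = 100.0       # operating frequency, Hz
    x_amp = 4e-3    # piston amplitude, m

    t = np.linspace(0.0, 2.0 / f, 400)
    velocity = 2 * np.pi * f * x_amp * np.cos(2 * np.pi * f * t)  # piston velocity
    emf = K * velocity                    # back-EMF from piston motion
    current = emf / (R_coil + R_load)     # series-circuit current into the load
    power = np.mean(current**2 * R_load)  # mean electrical power delivered
    print(f"mean load power ~ {power:.0f} W")
    ```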

  4. Development and Validation of Linear Alternator Models for the Advanced Stirling Convertor

    NASA Technical Reports Server (NTRS)

    Metscher, Jonathan F.; Lewandowski, Edward J.

    2015-01-01

    Two models of the linear alternator of the Advanced Stirling Convertor (ASC) have been developed using the Sage 1-D modeling software package. The first model relates the piston motion to electric current by means of a motor constant. The second uses electromagnetic model components to model the magnetic circuit of the alternator. The models are tuned and validated using test data and also compared against each other. Results show both models can be tuned to achieve results within 7% of ASC test data under normal operating conditions. Using Sage enables a complete ASC model to be developed, and simulations to be completed, more quickly than with more complex multi-dimensional models. These models allow for better insight into overall Stirling convertor performance, aid with Stirling power system modeling, and in the future support NASA mission planning for Stirling-based power systems.

  5. A unified bond theory, probabilistic meso-scale modeling, and experimental validation of deformed steel rebar in normal strength concrete

    NASA Astrophysics Data System (ADS)

    Wu, Chenglin

    Bond between deformed rebar and concrete is affected by the rebar deformation pattern, concrete properties, concrete confinement, and rebar-concrete interfacial properties. Two distinct groups of bond models were traditionally developed based on the dominant effects of concrete splitting and near-interface shear-off failures; their accuracy depended heavily upon the test data sets selected for analysis and calibration. In this study, a unified bond model is proposed and developed based on an analogy to the indentation problem around the rib front of deformed rebar. This mechanics-based model can take into account the combined effect of concrete splitting and interface shear-off failures, yielding average bond strengths for all practical scenarios. To understand the fracture process associated with bond failure, a probabilistic meso-scale model of concrete is proposed and its sensitivity to interface and confinement strengths is investigated. Both the mechanical and finite element models are validated with the available test data sets and are superior to existing models in predicting average bond strength (< 6% error) and crack spacing (< 6% error). The validated bond model is applied to derive various interrelations among concrete crushing, concrete splitting, interfacial behavior, and the rib spacing-to-height ratio of deformed rebar. It can accurately predict the transition of failure modes from concrete splitting to rebar pullout and the effect of rebar surface characteristics as the rib spacing-to-height ratio increases. Based on the unified theory, a global bond model is proposed and developed by introducing bond-slip laws, and validated with testing of concrete beams with spliced reinforcement, achieving a load capacity prediction error of less than 26%. The optimal rebar parameters and concrete cover in structural designs can be derived from this study.

  6. PACIC Instrument: disentangling dimensions using published validation models.

    PubMed

    Iglesias, K; Burnand, B; Peytremann-Bridevaux, I

    2014-06-01

    The aim was to better understand the structure of the Patient Assessment of Chronic Illness Care (PACIC) instrument and, more specifically, to test all published validation models using a single data set and appropriate statistical tools. This validation study used data from a cross-sectional survey of a population-based sample of non-institutionalized adults with diabetes residing in Switzerland (canton of Vaud), who completed the French version of the 20-item PACIC instrument (5-point response scale). We conducted validation analyses using confirmatory factor analysis (CFA). The original five-dimension model and other published models were tested with three types of CFA, based on (i) a Pearson estimator of the variance-covariance matrix, (ii) a polychoric correlation matrix and (iii) likelihood estimation with a multinomial distribution for the manifest variables. All models were assessed using loadings and goodness-of-fit measures. The analytical sample included 406 patients; mean age was 64.4 years and 59% were men. Medians of item responses varied between 1 and 4 (range 1-5), and the proportion of missing values per item ranged between 5.7 and 12.3%. Strong floor and ceiling effects were present. Even though loadings of the tested models were relatively high, the only model showing acceptable fit was the 11-item single-dimension model. PACIC was associated with the expected variables of the field. Our results showed that the model considering 11 items in a single dimension exhibited the best fit for our data. A single score, in complement to the consideration of single-item results, might be used instead of the five dimensions usually described. © The Author 2014. Published by Oxford University Press in association with the International Society for Quality in Health Care; all rights reserved.

  7. Development and Validation of an Older Occupant Finite Element Model of a Mid-Sized Male for Investigation of Age-related Injury Risk.

    PubMed

    Schoell, Samantha L; Weaver, Ashley A; Urban, Jillian E; Jones, Derek A; Stitzel, Joel D; Hwang, Eunjoo; Reed, Matthew P; Rupp, Jonathan D; Hu, Jingwen

    2015-11-01

    The aging population is a growing concern as the increased fragility and frailty of the elderly results in an elevated incidence of injury as well as an increased risk of mortality and morbidity. To assess elderly injury risk, age-specific computational models can be developed to directly calculate biomechanical metrics for injury. The first objective was to develop an older occupant Global Human Body Models Consortium (GHBMC) average male model (M50) representative of a 65 year old (YO) and to perform regional validation tests to investigate predicted fractures and injury severity with age. Development of the GHBMC M50 65 YO model involved implementing geometric, cortical thickness, and material property changes with age. Regional validation tests included a chest impact, a lateral impact, a shoulder impact, a thoracoabdominal impact, an abdominal bar impact, a pelvic impact, and a lateral sled test. The second objective was to investigate age-related injury risks by performing a frontal US NCAP simulation test with the GHBMC M50 65 YO and the GHBMC M50 v4.2 models. Simulation results were compared to the GHBMC M50 v4.2 to evaluate the effect of age on occupant response and risk for head injury, neck injury, thoracic injury, and lower extremity injury. Overall, the GHBMC M50 65 YO model predicted higher probabilities of AIS 3+ injury for the head and thorax.

  8. A quantitative dynamic systems model of health-related quality of life among older adults

    PubMed Central

    Roppolo, Mattia; Kunnen, E Saskia; van Geert, Paul L; Mulasso, Anna; Rabaglietti, Emanuela

    2015-01-01

    Health-related quality of life (HRQOL) is a person-centered concept. The analysis of HRQOL is highly relevant in the aged population, which is generally suffering from health decline. Starting from a conceptual dynamic systems model that describes the development of HRQOL in individuals over time, this study aims to develop and test a quantitative dynamic systems model, in order to reveal the possible dynamic trends of HRQOL among older adults. The model is tested in different ways: first, with a calibration procedure to test whether the model produces theoretically plausible results, and second, with a preliminary validation procedure using empirical data of 194 older adults. This first validation tested the prediction that given a particular starting point (first empirical data point), the model will generate dynamic trajectories that lead to the observed endpoint (second empirical data point). The analyses reveal that the quantitative model produces theoretically plausible trajectories, thus providing support for the calibration procedure. Furthermore, the analyses of validation show a good fit between empirical and simulated data. In fact, no differences were found in the comparison between empirical and simulated final data for the same subgroup of participants, whereas the comparison between different subgroups of people resulted in significant differences. These data provide an initial basis of evidence for the dynamic nature of HRQOL during the aging process. Therefore, these data may give new theoretical and applied insights into the study of HRQOL and its development with time in the aging population. PMID:26604722
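    To make the idea of generating trajectories from a first empirical data point concrete, here is a minimal, hypothetical discrete-time sketch; the change rule and all parameters are invented for illustration and are not those of the published model.

    ```python
    import numpy as np

    def simulate_hrqol(q0, resources=0.6, growth=0.15, decline=0.05, steps=52):
        """Iterate a logistic-style change rule from a starting HRQOL score in [0, 1]."""
        q = np.empty(steps)
        q[0] = q0
        for t in range(1, steps):
            change = growth * resources * q[t - 1] * (1 - q[t - 1]) - decline * q[t - 1]
            q[t] = np.clip(q[t - 1] + change, 0.0, 1.0)
        return q

    trajectory = simulate_hrqol(q0=0.55)    # start at the first empirical data point
    print(round(float(trajectory[-1]), 2))  # simulated endpoint, to compare with data
    ```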

  9. The development and validation of a multidimensional sum-scaling questionnaire to measure patient-reported outcomes in acute respiratory tract infections in primary care: the acute respiratory tract infection questionnaire.

    PubMed

    Aabenhus, Rune; Thorsen, Hanne; Siersma, Volkert; Brodersen, John

    2013-01-01

    Patient-reported outcomes are seldom validated measures in clinical trials of acute respiratory tract infections (ARTIs) in primary care. We developed and validated a patient-reported outcome sum-scaling measure to assess the severity and functional impacts of ARTIs. Qualitative interviews and field testing among adults with an ARTI were conducted to ascertain a high degree of face and content validity of the questionnaire. Subsequently, a draft version of the Acute Respiratory Tract Infection Questionnaire (ARTIQ) was statistically validated by using the partial credit Rasch model to test dimensionality, objectivity, and reliability of items. Test of known groups' validity was conducted by comparing participants with and without an ARTI. The final version of the ARTIQ consisted of 38 items covering five dimensions (Physical-upper, Physical-lower, Psychological, Sleep, and Medicine) and five single items. All final dimensions were confirmed to fit the Rasch model, thus enabling sum-scaling of responses. The ARTIQ scores in participants with an ARTI were significantly higher than in those without ARTI (known groups' validity). A self-administered, multidimensional, sum-scaling questionnaire with high face and content validity and adequate psychometric properties for assessing severity and functional impacts from ARTIs in adults is available to clinical trials and audits in primary care. Copyright © 2013, International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc.
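    The partial credit Rasch model used here generalizes the dichotomous Rasch model, in which the probability of endorsing an item depends only on the gap between person ability (theta) and item difficulty (b). A minimal sketch of that core item characteristic curve, with made-up parameter values:

    ```python
    import math

    def rasch_p(theta: float, b: float) -> float:
        """Probability of a positive response under the dichotomous Rasch model."""
        return 1.0 / (1.0 + math.exp(-(theta - b)))

    for theta in (-1.0, 0.0, 1.0):
        print(f"theta={theta:+.1f}  P={rasch_p(theta, b=0.0):.2f}")  # 0.27, 0.50, 0.73
    ```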

  10. Developing and upgrading of solar system thermal energy storage simulation models. Technical progress report, March 1, 1979-February 29, 1980

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuhn, J K; von Fuchs, G F; Zob, A P

    1980-05-01

    Two water tank component simulation models have been selected and upgraded. These models are called the CSU Model and the Extended SOLSYS Model. The models have been standardized and links have been provided for operation in the TRNSYS simulation program. The models are described in analytical terms as well as in computer code. Specific water tank tests were performed for the purpose of model validation. Agreement between model data and test data is excellent. A description of the limitations has also been included. Streamlining results and criteria for the reduction of computer time have also been shown for both water tank computer models. Computer codes for the models and instructions for operating these models in TRNSYS have also been included, making the models readily available for DOE and industry use. Rock bed component simulation models have been reviewed and a model selected and upgraded. This model is a logical extension of the Mumma-Marvin model. Specific rock bed tests have been performed for the purpose of validation. Data have been reviewed for consistency. Details of the test results concerned with rock characteristics and pressure drop through the bed have been explored and are reported.

  11. Data reduction of room tests for zone model validation

    Treesearch

    M. Janssens; H. C. Tran

    1992-01-01

    Compartment fire zone models are based on many simplifying assumptions, in particular that gases stratify in two distinct layers. Because of these assumptions, certain model output is in a form unsuitable for direct comparison to measurements made in full-scale room tests. The experimental data must first be reduced and transformed to be compatible with the model...

  12. Validation of the Work-Life Balance Culture Scale (WLBCS).

    PubMed

    Nitzsche, Anika; Jung, Julia; Kowalski, Christoph; Pfaff, Holger

    2014-01-01

    The purpose of this paper is to describe the theoretical development and initial validation of the newly developed Work-Life Balance Culture Scale (WLBCS), an instrument for measuring an organizational culture that promotes the work-life balance of employees. In Study 1 (N=498), the scale was developed and its factorial validity tested through exploratory factor analyses. In Study 2 (N=513), confirmatory factor analysis (CFA) was performed to examine model fit and retest the dimensional structure of the instrument. To assess construct validity, a priori hypotheses were formulated and subsequently tested using correlation analyses. Exploratory and confirmatory factor analyses revealed a one-factor model. Results of the bivariate correlation analyses may be interpreted as preliminary evidence of the scale's construct validity. The five-item WLBCS is a new and efficient instrument with good overall quality. Its conciseness makes it particularly suitable for use in employee surveys to gain initial insight into a company's perceived work-life balance culture.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Kandler A; Santhanagopalan, Shriram; Yang, Chuanbo

    Computer models are helping to accelerate the design and validation of next generation batteries and provide valuable insights not possible through experimental testing alone. Validated 3-D physics-based models exist for predicting electrochemical performance, thermal and mechanical response of cells and packs under normal and abuse scenarios. The talk describes present efforts to make the models better suited for engineering design, including improving their computation speed, developing faster processes for model parameter identification including under aging, and predicting the performance of a proposed electrode material recipe a priori using microstructure models.

  14. Longitudinal Construct Validity of Brief Symptom Inventory Subscales in Schizophrenia

    ERIC Educational Resources Information Center

    Long, Jeffrey D.; Harring, Jeffrey R.; Brekke, John S.; Test, Mary Ann; Greenberg, Jan

    2007-01-01

    Longitudinal validity of Brief Symptom Inventory subscales was examined in a sample (N = 318) with schizophrenia-related illness measured at baseline and every 6 months for 3 years. Nonlinear factor analysis of items was used to test graded response models (GRMs) for subscales in isolation. The models varied in their within-time and between-times…

  15. Transitioning from Software Requirements Models to Design Models

    NASA Technical Reports Server (NTRS)

    Lowry, Michael (Technical Monitor); Whittle, Jon

    2003-01-01

    Summary: 1. Proof-of-concept of state machine synthesis from scenarios - CTAS case study. 2. CTAS team wants to use the synthesis algorithm to validate trajectory generation. 3. Extending the synthesis algorithm towards requirements validation: (a) scenario relationships; (b) methodology for generalizing/refining scenarios; and (c) interaction patterns to control synthesis. 4. Initial ideas tested on conflict detection scenarios.

  16. A Testing Platform for Validation of Overhead Conductor Aging Models and Understanding Thermal Limits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Irminger, Philip; Starke, Michael R; Dimitrovski, Aleksandar D

    2014-01-01

    Power system equipment manufacturers and researchers continue to experiment with novel overhead electric conductor designs that support better conductor performance and address congestion issues. To address the technology gap in testing these novel designs, Oak Ridge National Laboratory constructed the Powerline Conductor Accelerated Testing (PCAT) facility to evaluate the performance of novel overhead conductors in an accelerated fashion in a field environment. Additionally, PCAT has the capability to test advanced sensors and measurement methods for assessing overhead conductor performance and condition. Equipped with extensive measurement and monitoring devices, PCAT provides a platform to improve and validate conductor computer models and assess the performance of novel conductors. The PCAT facility and its testing capabilities are described in this paper.

  17. A high power ion thruster for deep space missions

    NASA Astrophysics Data System (ADS)

    Polk, James E.; Goebel, Dan M.; Snyder, John S.; Schneider, Analyn C.; Johnson, Lee K.; Sengupta, Anita

    2012-07-01

    The Nuclear Electric Xenon Ion System ion thruster was developed for potential outer planet robotic missions using nuclear electric propulsion (NEP). This engine was designed to operate at power levels ranging from 13 to 28 kW at specific impulses of 6000-8500 s and for burn times of up to 10 years. State-of-the-art performance and life assessment tools were used to design the thruster, which featured 57-cm-diameter carbon-carbon composite grids operating at voltages of 3.5-6.5 kV. Preliminary validation of the thruster performance was accomplished with a laboratory model thruster, while in parallel, a flight-like development model (DM) thruster was completed and two DM thrusters fabricated. The first thruster completed full performance testing and a 2000-h wear test. The second successfully completed vibration tests at the full protoflight levels defined for this NEP program and then passed performance validation testing. The thruster design, performance, and the experimental validation of the design tools are discussed in this paper.

  18. A high power ion thruster for deep space missions.

    PubMed

    Polk, James E; Goebel, Dan M; Snyder, John S; Schneider, Analyn C; Johnson, Lee K; Sengupta, Anita

    2012-07-01

    The Nuclear Electric Xenon Ion System ion thruster was developed for potential outer planet robotic missions using nuclear electric propulsion (NEP). This engine was designed to operate at power levels ranging from 13 to 28 kW at specific impulses of 6000-8500 s and for burn times of up to 10 years. State-of-the-art performance and life assessment tools were used to design the thruster, which featured 57-cm-diameter carbon-carbon composite grids operating at voltages of 3.5-6.5 kV. Preliminary validation of the thruster performance was accomplished with a laboratory model thruster, while in parallel, a flight-like development model (DM) thruster was completed and two DM thrusters fabricated. The first thruster completed full performance testing and a 2000-h wear test. The second successfully completed vibration tests at the full protoflight levels defined for this NEP program and then passed performance validation testing. The thruster design, performance, and the experimental validation of the design tools are discussed in this paper.

  19. NASA Occupant Protection Standards Development

    NASA Technical Reports Server (NTRS)

    Somers, Jeffrey; Gernhardt, Michael; Lawrence, Charles

    2012-01-01

    Historically, spacecraft landing systems have been tested with human volunteers, because analytical methods for estimating injury risk were insufficient. These tests were conducted with flight-like suits and seats to verify the safety of the landing systems. Currently, NASA uses the Brinkley Dynamic Response Index to estimate injury risk, although applying it to the NASA environment has drawbacks: (1) it does not indicate the severity or anatomical location of injury, and (2) it is unclear whether the model applies to NASA applications. Because of these limitations, a new validated, analytical approach was desired. Leveraging the current state of the art in automotive and racing safety, a new approach was developed. The approach has several aspects: (1) define the acceptable level of injury risk by injury severity; (2) determine the appropriate human surrogate for testing and modeling; (3) mine existing human injury data to determine appropriate Injury Assessment Reference Values (IARVs); (4) rigorously validate the IARVs with sub-injurious human testing; (5) use the validated IARVs to update standards and vehicle requirements.
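    The Brinkley index itself reduces to a single degree-of-freedom calculation: DR(t) = wn^2 * x(t) / g, where x solves x'' + 2*zeta*wn*x' + wn^2*x = a(t). The sketch below is illustrative only; the z-axis parameters shown (wn = 52.9 rad/s, zeta = 0.224) are commonly cited values taken here as assumptions, and the input pulse is invented.

    ```python
    import numpy as np

    def max_dr(accel_g, dt=1e-3, wn=52.9, zeta=0.224):
        """Semi-implicit Euler integration of the Brinkley lumped-parameter model."""
        g = 9.81
        x, v, dr_max = 0.0, 0.0, 0.0
        for a in accel_g:
            acc = a * g - 2.0 * zeta * wn * v - wn**2 * x  # relative acceleration
            v += acc * dt
            x += v * dt
            dr_max = max(dr_max, abs(wn**2 * x / g))
        return dr_max

    t = np.arange(0.0, 0.5, 1e-3)
    pulse = np.where(t < 0.05, 10.0, 0.0)  # hypothetical 10 g, 50 ms landing pulse
    print(f"max DR = {max_dr(pulse):.1f}")
    ```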

  20. The Construct Validity of Higher Order Structure-of-Intellect Abilities in a Battery of Tests Emphasizing the Product of Transformations: A Confirmatory Maximum Likelihood Factor Analysis.

    ERIC Educational Resources Information Center

    Khattab, Ali-Maher; And Others

    1982-01-01

    A causal modeling system, using confirmatory maximum likelihood factor analysis with the LISREL IV computer program, evaluated the construct validity underlying the higher order factor structure of a given correlation matrix of 46 structure-of-intellect tests emphasizing the product of transformations. (Author/PN)

  1. Technology Readiness of the NEXT Ion Propulsion System

    NASA Technical Reports Server (NTRS)

    Benson, Scott W.; Patterson, Michael J.

    2008-01-01

    The NASA's Evolutionary Xenon Thruster (NEXT) ion propulsion system has been in advanced technology development under the NASA In-Space Propulsion Technology project. The highest fidelity hardware planned has now been completed by the government/industry team, including: a flight prototype model (PM) thruster, an engineering model (EM) power processing unit, EM propellant management assemblies, a breadboard gimbal, and control unit simulators. Subsystem and system level technology validation testing is in progress. To achieve the objective Technology Readiness Level 6, environmental testing is being conducted to qualification levels in ground facilities simulating the space environment. Additional tests have been conducted to characterize the performance range and life capability of the NEXT thruster. This paper presents the status and results of technology validation testing accomplished to date, the validated subsystem and system capabilities, and the plans for completion of this phase of NEXT development. The next round of competed planetary science mission announcements of opportunity, and directed mission decisions, are anticipated to occur in 2008 and 2009. Progress to date, and the success of on-going technology validation, indicate that the NEXT ion propulsion system will be a primary candidate for mission consideration in these upcoming opportunities.

  2. Toward a Process-Focused Model of Test Score Validity: Improving Psychological Assessment in Science and Practice

    ERIC Educational Resources Information Center

    Bornstein, Robert F.

    2011-01-01

    Although definitions of validity have evolved considerably since L. J. Cronbach and P. E. Meehl's classic (1955) review, contemporary validity research continues to emphasize correlational analyses assessing predictor-criterion relationships, with most outcome criteria being self-reports. The present article describes an alternative way of…

  3. The Model Analyst’s Toolkit: Scientific Model Development, Analysis, and Validation

    DTIC Science & Technology

    2013-11-20

    Granger causality F-test validation 3.1.2. Dynamic time warping for uneven temporal relationships Many causal relationships are imperfectly...mapping for dynamic feedback models Granger causality and DTW can identify causal relationships and consider complex temporal factors. However, many ...variant of the tf-idf algorithm (Manning, Raghavan, Schutze et al., 2008), typically used in search engines, to “score” features. The (-log tf) in
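    Of the techniques named in this excerpt, dynamic time warping (DTW) is the most self-contained; a minimal textbook implementation follows. This is a generic sketch, not the Toolkit's own code.

    ```python
    def dtw_distance(a, b):
        """Classic O(len(a) * len(b)) dynamic-programming DTW distance."""
        n, m = len(a), len(b)
        inf = float("inf")
        cost = [[inf] * (m + 1) for _ in range(n + 1)]
        cost[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = abs(a[i - 1] - b[j - 1])
                cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                     cost[i][j - 1],      # deletion
                                     cost[i - 1][j - 1])  # match
        return cost[n][m]

    # The second series is the first with a lag-like repeat; DTW absorbs it.
    print(dtw_distance([0, 1, 2, 3, 2], [0, 0, 1, 2, 3, 2]))  # -> 0.0
    ```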

  4. Identification of patients at high risk for Clostridium difficile infection: development and validation of a risk prediction model in hospitalized patients treated with antibiotics.

    PubMed

    van Werkhoven, C H; van der Tempel, J; Jajou, R; Thijsen, S F T; Diepersloot, R J A; Bonten, M J M; Postma, D F; Oosterheert, J J

    2015-08-01

    To develop and validate a prediction model for Clostridium difficile infection (CDI) in hospitalized patients treated with systemic antibiotics, we performed a case-cohort study in a tertiary (derivation) and secondary care hospital (validation). Cases had a positive Clostridium test and were treated with systemic antibiotics before suspicion of CDI. Controls were randomly selected from hospitalized patients treated with systemic antibiotics. Potential predictors were selected from the literature. Logistic regression was used to derive the model. Discrimination and calibration of the model were tested in internal and external validation. A total of 180 cases and 330 controls were included for derivation. Age >65 years, recent hospitalization, CDI history, malignancy, chronic renal failure, use of immunosuppressants, receipt of antibiotics before admission, nonsurgical admission, admission to the intensive care unit, gastric tube feeding, treatment with cephalosporins and presence of an underlying infection were independent predictors of CDI. The area under the receiver operating characteristic curve of the model in the derivation cohort was 0.84 (95% confidence interval 0.80-0.87), and was reduced to 0.81 after internal validation. In external validation, consisting of 97 cases and 417 controls, the model area under the curve was 0.81 (95% confidence interval 0.77-0.85) and model calibration was adequate (Brier score 0.004). A simplified risk score was derived. Using a cutoff of 7 points, the positive predictive value, sensitivity and specificity were 1.0%, 72% and 73%, respectively. In conclusion, a risk prediction model was developed and validated, with good discrimination and calibration, that can be used to target preventive interventions in patients with increased risk of CDI. Copyright © 2015 European Society of Clinical Microbiology and Infectious Diseases. Published by Elsevier Ltd. All rights reserved.
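    The generic derivation-and-discrimination workflow described here (fit a logistic model on a derivation set, check the area under the curve on held-out data) can be sketched in a few lines. The data below are synthetic and the predictors are placeholders, not the CDI cohort variables.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(600, 5))                      # stand-ins for predictors
    logit = 1.2 * X[:, 0] - 0.8 * X[:, 1] + rng.normal(scale=1.0, size=600)
    y = (logit > 0.5).astype(int)                      # synthetic outcome labels

    X_dev, X_val, y_dev, y_val = train_test_split(X, y, random_state=0)
    model = LogisticRegression().fit(X_dev, y_dev)     # derivation
    auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])  # validation
    print(f"validation AUC = {auc:.2f}")
    ```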

  5. SAMICS Validation. SAMICS Support Study, Phase 3

    NASA Technical Reports Server (NTRS)

    1979-01-01

    SAMICS provides a consistent basis for estimating array costs and compares production technology costs. A review and a validation of the SAMICS model are reported. The review had the following purposes: (1) to test the computational validity of the computer model by comparison with preliminary hand calculations based on conventional cost estimating techniques; (2) to review and improve the accuracy of the cost relationships being used by the model; and (3) to provide an independent verification to users of the model's value in decision making for the allocation of research and development funds and for investment in manufacturing capacity. It is concluded that the SAMICS model is a flexible, accurate, and useful tool for managerial decision making.

  6. The EGS Collab Project: Stimulation Investigations for Geothermal Modeling Analysis and Validation

    NASA Astrophysics Data System (ADS)

    Blankenship, D.; Kneafsey, T. J.

    2017-12-01

    The US DOE's EGS Collab project team is establishing a suite of intermediate-scale (~10-20 m) field test beds for coupled stimulation and interwell flow tests. The team, drawn from multiple national laboratories and universities, is designing the tests to compare measured data to models so as to improve the measurement and modeling toolsets available for use in field sites and investigations such as DOE's Frontier Observatory for Research in Geothermal Energy (FORGE) Project. Our tests will be well-controlled, in situ experiments focused on rock fracture behavior, seismicity, and permeability enhancement. Pre- and post-test modeling will allow for model prediction and validation. High-quality, high-resolution geophysical and other fracture characterization data will be collected, analyzed, and compared with models and field observations to further elucidate the basic relationships between stress, induced seismicity, and permeability enhancement. Coring through the stimulated zone after tests will provide fracture characteristics that can be compared to monitoring data and model predictions. We will also observe and quantify other key governing parameters that impact permeability, and attempt to understand how these parameters might change throughout the development and operation of an Enhanced Geothermal System (EGS) project, with the goal of enabling commercial viability of EGS. The Collab team will perform three major experiments over the three-year project duration. Experiment 1, intended to investigate hydraulic fracturing, will be performed in the Sanford Underground Research Facility (SURF) at 4,850 feet depth and will build on kISMET Project findings. Experiment 2 will be designed to investigate hydroshearing. Experiment 3 will investigate changes in fracturing strategies and will be further specified as the project proceeds. The tests will provide quantitative insights into the nature of stimulation (e.g., hydraulic fracturing, hydroshearing, mixed-mode fracturing, thermal fracturing) in crystalline rock under reservoir-like stress conditions and will generate high-quality, high-resolution, diverse data sets to be simulated, allowing model validation. Monitoring techniques will also be evaluated under controlled conditions, identifying technologies appropriate for deeper full-scale EGS sites.

  7. Integration and validation of a data grid software

    NASA Astrophysics Data System (ADS)

    Carenton-Madiec, Nicolas; Berger, Katharina; Cofino, Antonio

    2014-05-01

    The Earth System Grid Federation (ESGF) Peer-to-Peer (P2P) system is a software infrastructure for the management, dissemination, and analysis of model output and observational data. The ESGF grid is composed of several types of nodes with different roles: about 40 data nodes host model outputs and datasets using THREDDS catalogs; about 25 compute nodes offer remote visualization and analysis tools; about 15 index nodes crawl data node catalogs and implement faceted, federated search in a web interface; and about 15 identity provider nodes manage accounts, authentication, and authorization. Here we present a full-size test federation spread across different institutes in different countries, together with a python test suite started in December 2013. The first objective of the test suite is to provide a simple tool that helps to test and validate a single data node and its closest index, compute, and identity provider peers. The next objective will be to run this test suite on every data node of the federation, thereby testing and validating every single node of the whole federation. The suite already uses the nosetests, requests, myproxy-logon, subprocess, selenium, and fabric python libraries in order to test web front ends, back ends, and security services. The goal of this project is to improve the quality of deliverables in the context of a small development team whose members are widely spread around the world, working collaboratively and without hierarchy. This working context highlighted the need for a federated integration test and validation process.
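    A toy version of the kind of per-node check such a suite runs, hitting a data node's THREDDS catalog and an index node's search endpoint and asserting they respond, might look like the following. The URLs are placeholders, not real federation endpoints, and this is a sketch in the style of the suite rather than its actual code.

    ```python
    import requests

    DATA_NODE_CATALOG = "https://data-node.example.org/thredds/catalog.html"
    INDEX_NODE_SEARCH = "https://index-node.example.org/esg-search/search"

    def test_data_node_catalog():
        resp = requests.get(DATA_NODE_CATALOG, timeout=30)
        assert resp.status_code == 200, "data node catalog unreachable"

    def test_index_node_search():
        resp = requests.get(INDEX_NODE_SEARCH, params={"limit": 1}, timeout=30)
        assert resp.status_code == 200, "index node search unreachable"

    if __name__ == "__main__":
        test_data_node_catalog()
        test_index_node_search()
        print("node endpoints OK")
    ```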

  8. A Comparison of Different Psychometric Approaches to Modeling Testlet Structures: An Example with C-Tests

    ERIC Educational Resources Information Center

    Schroeders, Ulrich; Robitzsch, Alexander; Schipolowski, Stefan

    2014-01-01

    C-tests are a specific variant of cloze tests that are considered time-efficient, valid indicators of general language proficiency. They are commonly analyzed with models of item response theory assuming local item independence. In this article we estimated local interdependencies for 12 C-tests and compared the changes in item difficulties,…

  9. Validity, Reliability, and the Questionable Role of Psychometrics in Plastic Surgery

    PubMed Central

    2014-01-01

    Summary: This report examines the meaning of validity and reliability and the role of psychometrics in plastic surgery. Study titles increasingly include the word “valid” to support the authors’ claims. Studies by other investigators may be labeled “not validated.” Validity simply refers to the ability of a device to measure what it intends to measure. Validity is not an intrinsic test property. It is a relative term most credibly assigned by the independent user. Similarly, the word “reliable” is subject to interpretation. In psychometrics, its meaning is synonymous with “reproducible.” The definitions of valid and reliable are analogous to accuracy and precision. Reliability (both the reliability of the data and the consistency of measurements) is a prerequisite for validity. Outcome measures in plastic surgery are intended to be surveys, not tests. The role of psychometric modeling in plastic surgery is unclear, and this discipline introduces difficult jargon that can discourage investigators. Standard statistical tests suffice. The unambiguous term “reproducible” is preferred when discussing data consistency. Study design and methodology are essential considerations when assessing a study’s validity. PMID:25289354

  10. Psychometric properties of the Bulgarian translation of noise sensitivity scale short form (NSS-SF): implementation in the field of noise control.

    PubMed

    Dzhambov, Angel M; Dimitrova, Donka D

    2014-01-01

    The Noise Sensitivity Scale Short Form (NSS-SF), developed in English as a more practical form of the classical Weinstein NSS, had not previously been validated in other cultures, and its validity and reliability had not yet been confirmed. This study aimed to validate the NSS-SF in Bulgarian and to demonstrate its applicability. The study comprised a test-retest study (n = 115) and field-testing (n = 71) of the newly validated scale. Its construct validity was examined with confirmatory factor analysis, and very good model fit was observed. Temporal stability was assessed in the test-retest study (r = 0.990), convergent validity was examined against a single-item susceptibility-to-noise scale (r = 0.906), and discriminant validity was confirmed with a single-item noise annoyance scale (r = 0.718). The lowest observed McDonald's omega across the studies was 0.923. The cross-cultural validation of the NSS-SF was successful, although it proved somewhat problematic with respect to its annoyance-based items.

  11. Simulation-Based Training for Colonoscopy

    PubMed Central

    Preisler, Louise; Svendsen, Morten Bo Søndergaard; Nerup, Nikolaj; Svendsen, Lars Bo; Konge, Lars

    2015-01-01

    The aim of this study was to create simulation-based tests with credible pass/fail standards for 2 different fidelities of colonoscopy models. Only competent practitioners should perform colonoscopy, and reliable and valid simulation-based tests could be used to establish basic competency in colonoscopy before practicing on patients. Twenty-five physicians (10 consultants with endoscopic experience and 15 fellows with very little endoscopic experience) were tested on 2 different simulator models: a virtual-reality simulator and a physical model. Tests were repeated twice on each simulator model. Metrics with discriminatory ability were identified for both modalities and reliability was determined. The contrasting-groups method was used to create pass/fail standards and the consequences of these were explored. The consultants performed significantly faster and scored higher than the fellows on both models (P < 0.001). Reliability analysis showed Cronbach α = 0.80 and 0.87 for the virtual-reality and the physical model, respectively. The established pass/fail standards failed one of the consultants (virtual-reality simulator) and allowed one fellow to pass (physical model). The 2 tested simulation-based modalities provided reliable and valid assessments of competence in colonoscopy, and credible pass/fail standards were established for both tests. We propose to use these standards in simulation-based training programs before proceeding to supervised training on patients. PMID:25634177
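    The contrasting-groups method places the pass/fail cutoff where the score distributions of the experienced and novice groups intersect. A sketch with invented scores (the study's data are not reproduced), modeling each group as normal:

    ```python
    import numpy as np
    from scipy.optimize import brentq
    from scipy.stats import norm

    novice = np.array([41, 48, 52, 55, 57, 60, 61, 63, 66, 70], float)
    expert = np.array([68, 72, 75, 78, 80, 82, 85, 88, 90, 93], float)

    def density_gap(x):
        """Expert density minus novice density; zero at the crossing point."""
        return (norm.pdf(x, expert.mean(), expert.std(ddof=1))
                - norm.pdf(x, novice.mean(), novice.std(ddof=1)))

    cutoff = brentq(density_gap, novice.mean(), expert.mean())
    print(f"pass/fail cutoff ~ {cutoff:.1f}")
    ```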

  12. Aquatic Animal Models – Not Just for Ecotox Anymore

    EPA Science Inventory

    A wide range of internationally harmonized toxicity test guidelines employing aquatic animal models have been established for regulatory use. For fish alone, there are over a dozen internationally harmonized toxicity test guidelines that have been, or are being, validated. To dat...

  13. Comparison of machine-learning algorithms to build a predictive model for detecting undiagnosed diabetes - ELSA-Brasil: accuracy study.

    PubMed

    Olivera, André Rodrigues; Roesler, Valter; Iochpe, Cirano; Schmidt, Maria Inês; Vigo, Álvaro; Barreto, Sandhi Maria; Duncan, Bruce Bartholow

    2017-01-01

    Type 2 diabetes is a chronic disease associated with a wide range of serious health complications that have a major impact on overall health. The aims here were to develop and validate predictive models for detecting undiagnosed diabetes using data from the Longitudinal Study of Adult Health (ELSA-Brasil) and to compare the performance of different machine-learning algorithms in this task. This was a comparison of machine-learning algorithms to develop predictive models using data from ELSA-Brasil. After selecting a subset of 27 candidate variables from the literature, models were built and validated in four sequential steps: (i) parameter tuning with tenfold cross-validation, repeated three times; (ii) automatic variable selection using forward selection, a wrapper strategy with four different machine-learning algorithms and tenfold cross-validation (repeated three times) to evaluate each subset of variables; (iii) error estimation of model parameters with tenfold cross-validation, repeated ten times; and (iv) generalization testing on an independent dataset. The models were created with the following machine-learning algorithms: logistic regression, artificial neural network, naïve Bayes, K-nearest neighbor, and random forest. The best models were created using artificial neural networks and logistic regression, which achieved mean areas under the curve of, respectively, 75.24% and 74.98% in the error estimation step and 74.17% and 74.41% in the generalization testing step. Most of the predictive models produced similar results and demonstrated the feasibility of identifying the individuals with the highest probability of having undiagnosed diabetes through easily obtained clinical data.
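    Steps (i) and (iii) of this pipeline amount to repeated stratified k-fold cross-validation; a minimal sklearn analogue on synthetic data (the ELSA-Brasil variables are not reproduced here):

    ```python
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

    # Synthetic stand-in for a 27-variable candidate set.
    X, y = make_classification(n_samples=500, n_features=27, random_state=1)
    cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
    scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                             scoring="roc_auc", cv=cv)
    print(f"mean AUC = {scores.mean():.3f} +/- {scores.std():.3f}")
    ```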

  14. Ada Compiler Validation Summary Report. Certificate Number: 900726W1. 11017, Verdix Corporation VADS IBM RISC System/6000, AIX 3.1, VAda-110-7171, Version 6.0 IBM RISC System/6000 Model 530 = IBM RISC System/6000 Model 530

    DTIC Science & Technology

    1991-01-22

    Customer Agreement Number: 90-05-29- VRX See Section 3.1 for any additional information about the testing environment. As a result of this validation...22 January 1991 90-05-29- VRX Ada COMPILER VALIDATION SUMMARY REPORT: Certificate Number: 900726W1.11017 Verdix Corporation VADS IBM RISC System/6000

  15. Propeller aircraft interior noise model utilization study and validation

    NASA Technical Reports Server (NTRS)

    Pope, L. D.

    1984-01-01

    Utilization and validation of a computer program designed for aircraft interior noise prediction is considered. The program, entitled PAIN (an acronym for Propeller Aircraft Interior Noise), permits (in theory) predictions of sound levels inside propeller driven aircraft arising from sidewall transmission. The objective of the work reported was to determine the practicality of making predictions for various airplanes and the extent of the program's capabilities. The ultimate purpose was to discern the quality of predictions for tonal levels inside an aircraft occurring at the propeller blade passage frequency and its harmonics. The effort involved three tasks: (1) program validation through comparisons of predictions with scale-model test results; (2) development of utilization schemes for large (full scale) fuselages; and (3) validation through comparisons of predictions with measurements taken in flight tests on a turboprop aircraft. Findings should enable future users of the program to efficiently undertake and correctly interpret predictions.

  16. A posteriori model validation for the temporal order of directed functional connectivity maps

    PubMed Central

    Beltz, Adriene M.; Molenaar, Peter C. M.

    2015-01-01

    A posteriori model validation for the temporal order of neural directed functional connectivity maps is rare. This is striking because models that require sequential independence among residuals are regularly implemented. The aims of the current study were (a) to apply an a posteriori model validation procedure (i.e., white noise tests of one-step-ahead prediction errors combined with decision criteria for revising the maps based upon Lagrange Multiplier tests) to directed functional connectivity maps of functional magnetic resonance imaging data, and (b) to demonstrate how the procedure applies to single-subject simulated, single-subject task-related, and multi-subject resting state data. Directed functional connectivity was determined by the unified structural equation model family of approaches in order to map contemporaneous and first order lagged connections among brain regions at the group- and individual-levels while incorporating external input; white noise tests were then run. Findings revealed that the validation procedure successfully detected unmodeled sequential dependencies among residuals and recovered higher order (greater than one) simulated connections, and that the procedure can accommodate task-related input. Findings also revealed that lags greater than one were present in resting state data: with a group-level network that contained only contemporaneous and first order connections, 44% of subjects required second order, individual-level connections in order to obtain maps with white noise residuals. Results have broad methodological relevance (e.g., temporal validation is necessary after directed functional connectivity analyses because the presence of unmodeled higher order sequential dependencies may bias parameter estimates) and substantive implications (e.g., higher order lags may be common in resting state data). PMID:26379489
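    One standard white noise test of the kind applied to one-step-ahead prediction errors is the Ljung-Box test on residual autocorrelations; a sketch using statsmodels with simulated residuals (not fMRI model output):

    ```python
    import numpy as np
    from statsmodels.stats.diagnostic import acorr_ljungbox

    rng = np.random.default_rng(42)
    white = rng.normal(size=300)              # residuals from a well-specified model
    lagged = white + 0.6 * np.roll(white, 1)  # residuals with a leftover lag-1 term

    for name, resid in [("white", white), ("lagged", lagged)]:
        pval = acorr_ljungbox(resid, lags=[10])["lb_pvalue"].iloc[0]
        print(f"{name:6s}  Ljung-Box p = {pval:.3f}")  # small p => not white noise
    ```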

  17. A novel QSAR model of Salmonella mutagenicity and its application in the safety assessment of drug impurities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valencia, Antoni; Prous, Josep; Mora, Oscar

    As indicated in the ICH M7 draft guidance, in silico predictive tools, including statistically-based QSARs and expert analysis, may be used as a computational assessment of bacterial mutagenicity for the qualification of impurities in pharmaceuticals. To address this need, we developed and validated a QSAR model to predict Salmonella t. mutagenicity (Ames assay outcome) of pharmaceutical impurities using Prous Institute's Symmetry℠, a new in silico solution for drug discovery and toxicity screening, and the Mold2 molecular descriptor package (FDA/NCTR). Data were sourced from public benchmark databases with known Ames assay mutagenicity outcomes for 7300 chemicals (57% mutagens). Of these data, 90% was used to train the model and the remaining 10% was set aside as a holdout set for validation. The model's applicability to drug impurities was tested using a FDA/CDER database of 951 structures, of which 94% were found within the model's applicability domain. The predictive performance of the model is acceptable for supporting regulatory decision-making, with 84 ± 1% sensitivity, 81 ± 1% specificity, 83 ± 1% concordance and 79 ± 1% negative predictivity based on internal cross-validation, while the holdout dataset yielded 83% sensitivity, 77% specificity, 80% concordance and 78% negative predictivity. Given the importance of having confidence in negative predictions, an additional external validation of the model was carried out using marketed drugs known to be Ames-negative, obtaining 98% coverage and 81% specificity. Additionally, Ames mutagenicity data from FDA/CFSAN were used to create another data set of 1535 chemicals for external validation of the model, yielding 98% coverage, 73% sensitivity, 86% specificity, 81% concordance and 84% negative predictivity. - Highlights: • A new in silico QSAR model to predict Ames mutagenicity is described. • The model is extensively validated with chemicals from the FDA and the public domain. • Validation tests show desirable high sensitivity and high negative predictivity. • The model predicted 14 reportedly difficult to predict drug impurities with accuracy. • The model is suitable to support risk evaluation of potentially mutagenic compounds.
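    The four performance figures quoted above all derive from a 2x2 confusion matrix; the arithmetic is shown below with invented counts, not the study's tallies.

    ```python
    def classification_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
        """Sensitivity, specificity, concordance and negative predictivity."""
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "concordance": (tp + tn) / (tp + fp + tn + fn),
            "negative_predictivity": tn / (tn + fn),
        }

    print(classification_metrics(tp=346, fp=59, tn=254, fn=71))
    ```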

  18. Data Validation in the Kepler Science Operations Center Pipeline

    NASA Technical Reports Server (NTRS)

    Wu, Hayley; Twicken, Joseph D.; Tenenbaum, Peter; Clarke, Bruce D.; Li, Jie; Quintana, Elisa V.; Allen, Christopher; Chandrasekaran, Hema; Jenkins, Jon M.; Caldwell, Douglas A.

    2010-01-01

    We present an overview of the Data Validation (DV) software component and its context within the Kepler Science Operations Center (SOC) pipeline and overall Kepler Science mission. The SOC pipeline performs a transiting planet search on the corrected light curves for over 150,000 targets across the focal plane array. We discuss the DV strategy for automated validation of Threshold Crossing Events (TCEs) generated in the transiting planet search. For each TCE, a transiting planet model is fitted to the target light curve. A multiple planet search is conducted by repeating the transiting planet search on the residual light curve after the model flux has been removed; if an additional detection occurs, a planet model is fitted to the new TCE. A suite of automated tests are performed after all planet candidates have been identified. We describe a centroid motion test to determine the significance of the motion of the target photocenter during transit and to estimate the coordinates of the transit source within the photometric aperture; a series of eclipsing binary discrimination tests on the parameters of the planet model fits to all transits and the sequences of odd and even transits; and a statistical bootstrap to assess the likelihood that the TCE would have been generated purely by chance given the target light curve with all transits removed. Keywords: photometry, data validation, Kepler, Earth-size planets

  19. Comparison effectiveness of cooperative learning type STAD with cooperative learning type TPS in terms of mathematical method of Junior High School students

    NASA Astrophysics Data System (ADS)

    Wahyuni, A.

    2018-05-01

    This research aimed to find out whether the cooperative learning model of type Student Team Achievement Division (STAD) is more effective than cooperative learning of type Think-Pair-Share (TPS) in SMP Negeri 7 Yogyakarta. This was a quasi-experimental study using two experimental groups. The population was all students of the 7th class in SMP Negeri 7 Yogyakarta, consisting of 5 classes, from which 2 classes were randomly selected as the sample. The instrument for collecting data was a description test. Instrument validity was assessed through content and construct validity, while instrument reliability was measured with the Cronbach alpha formula. To investigate the effectiveness of cooperative learning of types STAD and TPS with respect to students' mathematical method, the data were analyzed with a one-sample test; the effectiveness of the two types in terms of mathematical communication skills was compared using a t-test, as sketched below. A normality test was not conducted because the sample comprised more than 30 students, while homogeneity was tested using the Kolmogorov-Smirnov test. The analysis was performed at the 5% confidence level. The results show the following: 1) the STAD and TPS cooperative learning models are both effective in terms of the mathematical method of junior high school students; 2) the STAD model is more effective than the TPS model in terms of the mathematical methods of junior high school students.
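    The effectiveness comparison reduces to a two-sample t-test on post-test scores; a sketch with made-up score vectors at the 5% level:

    ```python
    from scipy import stats

    stad_scores = [78, 82, 85, 74, 90, 88, 76, 84, 81, 87]
    tps_scores = [70, 75, 80, 68, 77, 73, 79, 72, 74, 76]

    t, p = stats.ttest_ind(stad_scores, tps_scores, equal_var=True)
    print(f"t = {t:.2f}, p = {p:.4f}")  # p < 0.05 would favor the STAD group here
    ```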

  20. Validating Cellular Automata Lava Flow Emplacement Algorithms with Standard Benchmarks

    NASA Astrophysics Data System (ADS)

    Richardson, J. A.; Connor, L.; Charbonnier, S. J.; Connor, C.; Gallant, E.

    2015-12-01

    A major existing need in assessing lava flow simulators is a common set of validation benchmark tests. We propose three levels of benchmarks which test model output against increasingly complex standards. First, simulated lava flows should be morphologically identical given changes in parameter space that should be inconsequential, such as slope direction. Second, lava flows simulated in simple parameter spaces can be tested against analytical solutions or empirical relationships seen in Bingham fluids; for instance, a lava flow simulated on a flat surface should produce a circular outline. Third, lava flows simulated over real world topography can be compared to recent real world lava flows, such as those at Tolbachik, Russia, and Fogo, Cape Verde. Success or failure of emplacement algorithms in these validation benchmarks can be determined using a Bayesian approach, which directly tests the ability of an emplacement algorithm to correctly forecast lava inundation. Here we focus on two posterior metrics, P(A|B) and P(¬A|¬B), which describe the positive and negative predictive value of flow algorithms. This is an improvement on less direct statistics such as model sensitivity and the Jaccard fitness coefficient. We have performed these validation benchmarks on a new, modular lava flow emplacement simulator that we have developed. This simulator, which we call MOLASSES, follows a Cellular Automata (CA) method. The code is developed in several interchangeable modules, which enables quick modification of the distribution algorithm from cell locations to their neighbors. By assessing several different distribution schemes with the benchmark tests, we have improved the performance of MOLASSES to correctly match early stages of the 2012-13 Tolbachik flow, Kamchatka, Russia, to 80%. We can also evaluate model performance given uncertain input parameters using a Monte Carlo setup. This illuminates sensitivity to model uncertainty.
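    When the simulated and observed flows are rasterized as boolean grids, the two posterior metrics reduce to cell counting; the toy arrays below are illustrative, not Tolbachik or Fogo data.

    ```python
    import numpy as np

    observed = np.array([[1, 1, 0], [1, 1, 0], [0, 0, 0]], bool)   # A: real flow
    simulated = np.array([[1, 1, 1], [1, 0, 0], [0, 0, 0]], bool)  # B: model flow

    p_a_given_b = (observed & simulated).sum() / simulated.sum()          # P(A|B)
    p_na_given_nb = (~observed & ~simulated).sum() / (~simulated).sum()   # P(~A|~B)
    print(f"P(A|B) = {p_a_given_b:.2f}, P(~A|~B) = {p_na_given_nb:.2f}")
    ```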

  1. Personality, Cognitive, and Interpersonal Factors in Adolescent Substance Use: A Longitudinal Test of an Integrative Model.

    ERIC Educational Resources Information Center

    Barnea, Zipora; And Others

    1992-01-01

    A test with 1,446 high school students in Israel of a multidimensional model of adolescent drug use that incorporates sociodemographic variables, personality variables, cognitive variables, interpersonal factors, and the availability of drugs validated the model longitudinally. Results suggest that different legal and illegal substances share a…

  2. Validation of new psychosocial factors questionnaires: a Colombian national study.

    PubMed

    Villalobos, Gloria H; Vargas, Angélica M; Rondón, Martin A; Felknor, Sarah A

    2013-01-01

    The study of workers' health problems possibly associated with stressful conditions requires valid and reliable tools for monitoring risk factors. The present study validates two questionnaires to assess psychosocial risk factors for stress-related illnesses within a sample of Colombian workers. The validation process was based on a representative sample survey of 2,360 Colombian employees, aged 18-70 years. Worker response rate was 90%; 46% of the responders were women. Internal consistency was calculated, construct validity was tested with factor analysis and concurrent validity was tested with Spearman correlations. The questionnaires demonstrated adequate reliability (0.88-0.95). Factor analysis confirmed the dimensions proposed in the measurement model. Concurrent validity resulted in significant correlations with stress and health symptoms. "Work and Non-work Psychosocial Factors Questionnaires" were found to be valid and reliable for the assessment of workers' psychosocial factors, and they provide information for research and intervention. Copyright © 2012 Wiley Periodicals, Inc.
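    The internal consistency figures reported here (0.88-0.95) are Cronbach's alpha values, computable directly from a respondents-by-items matrix; a compact sketch on synthetic responses:

    ```python
    import numpy as np

    def cronbach_alpha(responses: np.ndarray) -> float:
        """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
        k = responses.shape[1]
        item_vars = responses.var(axis=0, ddof=1).sum()
        total_var = responses.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1.0 - item_vars / total_var)

    rng = np.random.default_rng(7)
    trait = rng.normal(size=(200, 1))                     # latent factor
    items = trait + rng.normal(scale=0.8, size=(200, 8))  # 8 correlated items
    print(f"alpha = {cronbach_alpha(items):.2f}")
    ```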

  3. OC5 Project Phase Ib: Validation of hydrodynamic loading on a fixed, flexible cylinder for offshore wind applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, Amy N.; Wendt, Fabian; Jonkman, Jason M.

    This paper summarizes the findings from Phase Ib of the Offshore Code Comparison, Collaboration, Continued with Correlation (OC5) project. OC5 is a project run under the International Energy Agency (IEA) Wind Research Task 30, and is focused on validating the tools used for modelling offshore wind systems through the comparison of simulated responses of select offshore wind systems (and components) to physical test data. For Phase Ib of the project, simulated hydrodynamic loads on a flexible cylinder fixed to a sloped bed were validated against test measurements made in the shallow water basin at the Danish Hydraulic Institute (DHI) with support from the Technical University of Denmark (DTU). The first phase of OC5 examined two simple cylinder structures (Phase Ia and Ib) to focus on validation of the hydrodynamic models used in the various tools before moving on to more complex offshore wind systems and the associated coupled physics. As a result, verification and validation activities such as these lead to improvement of offshore wind modelling tools, which will enable the development of more innovative and cost-effective offshore wind designs.

  4. OC5 Project Phase Ib: Validation of hydrodynamic loading on a fixed, flexible cylinder for offshore wind applications

    DOE PAGES

    Robertson, Amy N.; Wendt, Fabian; Jonkman, Jason M.; ...

    2016-10-13

    This paper summarizes the findings from Phase Ib of the Offshore Code Comparison, Collaboration, Continued with Correlation (OC5) project. OC5 is a project run under the International Energy Agency (IEA) Wind Research Task 30, and is focused on validating the tools used for modelling offshore wind systems through the comparison of simulated responses of select offshore wind systems (and components) to physical test data. For Phase Ib of the project, simulated hydrodynamic loads on a flexible cylinder fixed to a sloped bed were validated against test measurements made in the shallow water basin at the Danish Hydraulic Institute (DHI) with support from the Technical University of Denmark (DTU). The first phase of OC5 examined two simple cylinder structures (Phase Ia and Ib) to focus on validation of the hydrodynamic models used in the various tools before moving on to more complex offshore wind systems and the associated coupled physics. As a result, verification and validation activities such as these lead to improvement of offshore wind modelling tools, which will enable the development of more innovative and cost-effective offshore wind designs.

  5. Modeling and Validation of a Navy A6-Intruder Actively Controlled Landing Gear System

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Daugherty, Robert H.; Martinson, Veloria J.

    1999-01-01

    Concepts for long-range air travel are characterized by airframe designs with long, slender, relatively flexible fuselages. One aspect often overlooked is ground-induced vibration of these aircraft. This paper presents an analytical and experimental study of reducing ground-induced aircraft vibration loads by using actively controlled landing gear. A facility has been developed to test various active landing gear control concepts and their performance. The facility uses a Navy A6 Intruder landing gear fitted with an auxiliary hydraulic supply electronically controlled by servo valves. An analytical model of the gear is presented, including modifications to actuate the gear externally, and test data are used to validate the model. The control design is described and closed-loop test and analysis comparisons are presented.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koh, J. H.; Ng, E. Y. K.; Robertson, Amy

    As part of a collaboration of the National Renewable Energy Laboratory (NREL) and SWAY AS, NREL installed scientific wind, wave, and motion measurement equipment on the spar-type 1/6.5th-scale prototype SWAY floating offshore wind system. The equipment enhanced SWAY's data collection and allowed SWAY to verify the concept and NREL to validate a FAST model of the SWAY design in an open-water condition. Nanyang Technological University (NTU), in collaboration with NREL, assisted with the validation. This final report gives an overview of the SWAY prototype and NREL and NTU's efforts to validate a model of the system. The report provides a summary of the different software tools used in the study, the modeling strategies, and the development of a FAST model of the SWAY prototype wind turbine, including justification of the modeling assumptions. Because of uncertainty in system parameters and modeling assumptions due to the complexity of the design, several system properties were tuned to better represent the system and improve the accuracy of the simulations. Calibration was performed using data from a static equilibrium test and free-decay tests.

  7. Finding Furfural Hydrogenation Catalysts via Predictive Modelling

    PubMed Central

    Strassberger, Zea; Mooijman, Maurice; Ruijter, Eelco; Alberts, Albert H; Maldonado, Ana G; Orru, Romano V A; Rothenberg, Gadi

    2010-01-01

    We combine multicomponent reactions, catalytic performance studies and predictive modelling to find transfer hydrogenation catalysts. An initial set of 18 ruthenium-carbene complexes were synthesized and screened in the transfer hydrogenation of furfural to furfurol with isopropyl alcohol. The complexes gave varied yields, from 62% up to >99.9%, with no obvious structure/activity correlations. Control experiments proved that the carbene ligand remains coordinated to the ruthenium centre throughout the reaction. Deuterium-labelling studies showed a secondary isotope effect (kH:kD=1.5). Further mechanistic studies showed that this transfer hydrogenation follows the so-called monohydride pathway. Using these data, we built a predictive model for 13 of the catalysts, based on 2D and 3D molecular descriptors. We tested and validated the model using the remaining five catalysts (cross-validation, R2=0.913). Then, with this model, the conversion and selectivity were predicted for four completely new ruthenium-carbene complexes. These four catalysts were then synthesized and tested. The results were within 3% of the model's predictions, demonstrating the validity and value of predictive modelling in catalyst optimization. PMID:23193388
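
    As an illustration of the workflow in this record (fit a descriptor-based model on a training subset, cross-validate it internally, then predict held-out catalysts), the following minimal Python sketch uses a hypothetical descriptor matrix and synthetic yields; it is not the study's actual descriptor set or data.

```python
# Sketch of descriptor-based activity prediction with internal and external
# validation; all data here are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(18, 5))                 # 18 catalysts x 5 descriptors
y = X @ rng.normal(size=5) + rng.normal(scale=0.1, size=18)  # synthetic yields

train, test = np.arange(13), np.arange(13, 18)   # 13 to fit, 5 held out
model = LinearRegression().fit(X[train], y[train])

r2_cv = cross_val_score(LinearRegression(), X[train], y[train],
                        cv=5, scoring="r2").mean()   # internal cross-validation
r2_ext = model.score(X[test], y[test])               # external check on held-out set
print(f"cross-validated R2 = {r2_cv:.3f}, external R2 = {r2_ext:.3f}")
```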

  8. LIVVkit 2: An extensible land ice verification and validation toolkit for comparing observations and models

    NASA Astrophysics Data System (ADS)

    Kennedy, J. H.; Bennett, A. R.; Evans, K. J.; Fyke, J. G.; Vargo, L.; Price, S. F.; Hoffman, M. J.

    2016-12-01

    Accurate representation of ice sheets and glaciers is essential for robust predictions of arctic climate within Earth System models. Verification and Validation (V&V) is a set of techniques used to quantify the correctness and accuracy of a model, which builds developer/modeler confidence and can be used to enhance the credibility of the model. Fundamentally, V&V is a continuous process because each model change requires a new round of V&V testing. The Community Ice Sheet Model (CISM) development community is actively developing LIVVkit, the Land Ice Verification and Validation toolkit, which is designed to easily integrate into an ice-sheet model's development workflow (on both personal and high-performance computers) to provide continuous V&V testing. LIVVkit is a robust and extensible python package for V&V, which has components for both software V&V (construction and use) and model V&V (mathematics and physics). The model verification component is used, for example, to verify model results against community intercomparisons such as ISMIP-HOM. The model validation component is used, for example, to generate a series of diagnostic plots showing the differences between model results and observations for variables such as thickness, surface elevation, basal topography, surface velocity, surface mass balance, etc. Because many different ice-sheet models are under active development, new validation datasets are becoming available, and new methods of analysing these models are actively being researched, LIVVkit includes a framework that allows ice-sheet modelers to easily extend the model V&V analyses. This allows modelers and developers to develop evaluations of parameters, implement changes, and quickly see how those changes affect the ice-sheet model and the earth system model (when coupled). Furthermore, LIVVkit outputs a portable hierarchical website, allowing evaluations to be easily shared, published, and analysed throughout the arctic and Earth system communities.

  9. Development and validation of the ExPRESS instrument for primary health care providers' evaluation of external supervision.

    PubMed

    Schriver, Michael; Cubaka, Vincent Kalumire; Vedsted, Peter; Besigye, Innocent; Kallestrup, Per

    2018-01-01

    External supervision of primary health care facilities to monitor and improve services is common in low-income countries. Currently there are no tools to measure the quality of support in external supervision in these countries. To develop a provider-reported instrument to assess the support delivered through external supervision in Rwanda and other countries. "External supervision: Provider Evaluation of Supervisor Support" (ExPRESS) was developed in 18 steps, primarily in Rwanda. Content validity was optimised using systematic search for related instruments, interviews, translations, and relevance assessments by international supervision experts as well as local experts in Nigeria, Kenya, Uganda and Rwanda. Construct validity and reliability were examined in two separate field tests, the first using exploratory factor analysis and a test-retest design, the second for confirmatory factor analysis. We included 16 items in section A ('The most recent experience with an external supervisor'), and 13 items in section B ('The overall experience with external supervisors'). Item-content validity index was acceptable. In field test I, test-retest had acceptable kappa values and exploratory factor analysis suggested relevant factors in sections A and B used for model hypotheses. In field test II, models were tested by confirmatory factor analysis fitting a 4-factor model for section A, and a 3-factor model for section B. ExPRESS is a promising tool for evaluation of the quality of support of primary health care providers in external supervision of primary health care facilities in resource-constrained settings. ExPRESS may be used as specific feedback to external supervisors to help identify and address gaps in the supervision they provide. Further studies should determine optimal interpretation of scores and the number of respondents needed per supervisor to obtain precise results, as well as test the functionality of section B.

  10. Validation of mechanical models for reinforced concrete structures: Presentation of the French project ``Benchmark des Poutres de la Rance''

    NASA Astrophysics Data System (ADS)

    L'Hostis, V.; Brunet, C.; Poupard, O.; Petre-Lazar, I.

    2006-11-01

    Several ageing models are available for the prediction of the mechanical consequences of rebar corrosion. They are used for service life prediction of reinforced concrete structures. Concerning corrosion diagnosis of reinforced concrete, some Non Destructive Testing (NDT) tools have been developed, and have been in use for some years. However, these developments require validation on existing concrete structures. The French project “Benchmark des Poutres de la Rance” contributes to this aspect. It has two main objectives: (i) validation of mechanical models to estimate the influence of rebar corrosion on the load bearing capacity of a structure, (ii) qualification of the use of the NDT results to collect information on steel corrosion within reinforced-concrete structures. Ten French and European institutions from both academic research laboratories and industrial companies contributed during the years 2004 and 2005. This paper presents the project that was divided into several work packages: (i) the reinforced concrete beams were characterized from non-destructive testing tools, (ii) the mechanical behaviour of the beams was experimentally tested, (iii) complementary laboratory analysis were performed and (iv) finally numerical simulations results were compared to the experimental results obtained with the mechanical tests.

  11. Validating Translation Test Items via the Many-Facet Rasch Model.

    PubMed

    Tseng, Wen-Ta; Su, Tzi-Ying; Nix, John-Michael L

    2018-01-01

    This study applied the many-facet Rasch model to assess learners' translation ability in an English as a foreign language context. Few attempts have been made in extant research to detect and calibrate rater severity in the domain of translation testing. To fill the research gap, this study documented the process of validating a test of Chinese-to-English sentence translation and modeled raters' scoring propensity defined by harshness or leniency, expert/novice effects on severity, and concomitant effects on item difficulty. Two hundred twenty-five third-year senior high school Taiwanese students and six educators from tertiary and secondary educational institutions served as participants. The students' mean age was 17.80 years (SD = 1.20, range 17-19). The exam consisted of 10 translation items adapted from two entrance exam tests. The results showed that this subjectively scored performance assessment exhibited robust unidimensionality, thus reliably measuring translation ability free from unmodeled disturbances. Furthermore, discrepancies in ratings between novice and expert raters were also identified and modeled by the many-facet Rasch model. The implications of applying the many-facet Rasch model in translation tests at the tertiary level were discussed.

  12. Calibration and Validation of the Dutch-Flemish PROMIS Pain Interference Item Bank in Patients with Chronic Pain

    PubMed Central

    Crins, Martine H. P.; Roorda, Leo D.; Smits, Niels; de Vet, Henrica C. W.; Westhovens, Rene; Cella, David; Cook, Karon F.; Revicki, Dennis; van Leeuwen, Jaap; Boers, Maarten; Dekker, Joost; Terwee, Caroline B.

    2015-01-01

    The Dutch-Flemish PROMIS Group translated the adult PROMIS Pain Interference item bank into Dutch-Flemish. The aims of the current study were to calibrate the parameters of these items using an item response theory (IRT) model, to evaluate the cross-cultural validity of the Dutch-Flemish translations compared to the original English items, and to evaluate their reliability and construct validity. The 40 items in the bank were completed by 1085 Dutch chronic pain patients. Before calibrating the items, IRT model assumptions were evaluated using confirmatory factor analysis (CFA). Items were calibrated using the graded response model (GRM), an IRT model appropriate for items with more than two response options. To evaluate cross-cultural validity, differential item functioning (DIF) for language (Dutch vs. English) was examined. Reliability was evaluated based on standard errors and Cronbach's alpha. To evaluate construct validity, correlations with scores on legacy instruments (e.g., the Disabilities of the Arm, Shoulder and Hand Questionnaire) were calculated. Unidimensionality of the Dutch-Flemish PROMIS Pain Interference item bank was supported by CFA tests of model fit (CFI = 0.986, TLI = 0.986). Furthermore, the data fit the GRM and showed good coverage across the pain interference continuum (threshold-parameters range: -3.04 to 3.44). The Dutch-Flemish PROMIS Pain Interference item bank has good cross-cultural validity (only two out of 40 items showing DIF), good reliability (Cronbach's alpha = 0.98), and good construct validity (Pearson correlations between 0.62 and 0.75). A computer adaptive test (CAT) and Dutch-Flemish PROMIS short forms of the Dutch-Flemish PROMIS Pain Interference item bank can now be developed. PMID:26214178

  13. Calibration and Validation of the Dutch-Flemish PROMIS Pain Interference Item Bank in Patients with Chronic Pain.

    PubMed

    Crins, Martine H P; Roorda, Leo D; Smits, Niels; de Vet, Henrica C W; Westhovens, Rene; Cella, David; Cook, Karon F; Revicki, Dennis; van Leeuwen, Jaap; Boers, Maarten; Dekker, Joost; Terwee, Caroline B

    2015-01-01

    The Dutch-Flemish PROMIS Group translated the adult PROMIS Pain Interference item bank into Dutch-Flemish. The aims of the current study were to calibrate the parameters of these items using an item response theory (IRT) model, to evaluate the cross-cultural validity of the Dutch-Flemish translations compared to the original English items, and to evaluate their reliability and construct validity. The 40 items in the bank were completed by 1085 Dutch chronic pain patients. Before calibrating the items, IRT model assumptions were evaluated using confirmatory factor analysis (CFA). Items were calibrated using the graded response model (GRM), an IRT model appropriate for items with more than two response options. To evaluate cross-cultural validity, differential item functioning (DIF) for language (Dutch vs. English) was examined. Reliability was evaluated based on standard errors and Cronbach's alpha. To evaluate construct validity, correlations with scores on legacy instruments (e.g., the Disabilities of the Arm, Shoulder and Hand Questionnaire) were calculated. Unidimensionality of the Dutch-Flemish PROMIS Pain Interference item bank was supported by CFA tests of model fit (CFI = 0.986, TLI = 0.986). Furthermore, the data fit the GRM and showed good coverage across the pain interference continuum (threshold-parameters range: -3.04 to 3.44). The Dutch-Flemish PROMIS Pain Interference item bank has good cross-cultural validity (only two out of 40 items showing DIF), good reliability (Cronbach's alpha = 0.98), and good construct validity (Pearson correlations between 0.62 and 0.75). A computer adaptive test (CAT) and Dutch-Flemish PROMIS short forms of the Dutch-Flemish PROMIS Pain Interference item bank can now be developed.
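
    The reliability figure quoted in the two preceding records (Cronbach's alpha = 0.98) follows from a standard formula that is easy to reproduce. The sketch below computes alpha from a respondents-by-items response matrix; the simulated 40-item data are illustrative placeholders, not the PROMIS data.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a respondents x items score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
trait = rng.normal(size=(200, 1))                      # latent trait per respondent
items = trait + rng.normal(scale=0.4, size=(200, 40))  # 40 correlated items
print(f"alpha = {cronbach_alpha(items):.2f}")
```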

  14. Validation of a Scalable Solar Sailcraft

    NASA Technical Reports Server (NTRS)

    Murphy, D. M.

    2006-01-01

    The NASA In-Space Propulsion (ISP) program sponsored intensive solar sail technology and systems design, development, and hardware demonstration activities over the past 3 years. Efforts to validate a scalable solar sail system by functional demonstration in relevant environments, together with test-analysis correlation activities on a scalable solar sail system have recently been successfully completed. A review of the program, with descriptions of the design, results of testing, and analytical model validations of component and assembly functional, strength, stiffness, shape, and dynamic behavior are discussed. The scaled performance of the validated system is projected to demonstrate the applicability to flight demonstration and important NASA road-map missions.

  15. Development and Validation of a Computational Model for Androgen Receptor Activity

    PubMed Central

    2016-01-01

    Testing thousands of chemicals to identify potential androgen receptor (AR) agonists or antagonists would cost millions of dollars and take decades to complete using current validated methods. High-throughput in vitro screening (HTS) and computational toxicology approaches can more rapidly and inexpensively identify potential androgen-active chemicals. We integrated 11 HTS ToxCast/Tox21 in vitro assays into a computational network model to distinguish true AR pathway activity from technology-specific assay interference. The in vitro HTS assays probed perturbations of the AR pathway at multiple points (receptor binding, coregulator recruitment, gene transcription, and protein production) and multiple cell types. Confirmatory in vitro antagonist assay data and cytotoxicity information were used as additional flags for potential nonspecific activity. Validating such alternative testing strategies requires high-quality reference data. We compiled 158 putative androgen-active and -inactive chemicals from a combination of international test method validation efforts and semiautomated systematic literature reviews. Detailed in vitro assay information and results were compiled into a single database using a standardized ontology. Reference chemical concentrations that activated or inhibited AR pathway activity were identified to establish a range of potencies with reproducible reference chemical results. Comparison with existing Tier 1 AR binding data from the U.S. EPA Endocrine Disruptor Screening Program revealed that the model identified binders at relevant test concentrations (<100 μM) and was more sensitive to antagonist activity. The AR pathway model based on the ToxCast/Tox21 assays had balanced accuracies of 95.2% for agonist (n = 29) and 97.5% for antagonist (n = 28) reference chemicals. Out of 1855 chemicals screened in the AR pathway model, 220 chemicals demonstrated AR agonist or antagonist activity and an additional 174 chemicals were predicted to have potential weak AR pathway activity. PMID:27933809

  16. Fundamental Movement Skills Are More than Run, Throw and Catch: The Role of Stability Skills.

    PubMed

    Rudd, James R; Barnett, Lisa M; Butson, Michael L; Farrow, Damian; Berry, Jason; Polman, Remco C J

    2015-01-01

    In the motor development literature fundamental movement skills are divided into three constructs: locomotor, object control and stability skills. Most fundamental movement skills research has focused on children's competency in locomotor and object control skills. The first aim of this study was to validate a test battery to assess the construct of stability skills in children aged 6 to 10 (M age = 8.2, SD = 1.2). Secondly, we assessed how the stability skills construct fitted into a model of fundamental movement skill. The Delphi method was used to select the stability skill battery. Confirmatory factor analysis (CFA) was used to assess whether the skills loaded onto the same construct, and a new model of FMS was developed using structural equation modelling. Three postural control tasks were selected (the log roll, rock and back support) because they had good face and content validity. These skills also demonstrated good predictive validity, with gymnasts scoring significantly better than children without gymnastic training, children from a high SES school performing better than those from mid and low SES schools, and the mid SES children scoring better than the low SES children (all p < .05). Inter-rater reliability tests were excellent for all three skills (ICC = 0.81, 0.87, 0.87), as was test-retest reliability (ICC 0.87-0.95). CFA provided good construct validity, and structural equation modelling revealed stability skills to be an independent factor in an overall FMS model which included locomotor (r = .88), object control (r = .76) and stability skills (r = .81). This study provides a rationale for the inclusion of stability skills in FMS assessment. The stability skills could be used alongside other FMS assessment tools to provide a holistic assessment of children's fundamental movement skills.

  17. Isokinetic knee strength qualities as predictors of jumping performance in high-level volleyball athletes: multiple regression approach.

    PubMed

    Sattler, Tine; Sekulic, Damir; Spasic, Miodrag; Osmankac, Nedzad; Vicente João, Paulo; Dervisevic, Edvin; Hadzic, Vedran

    2016-01-01

    Previous investigations noted the potential importance of isokinetic strength in rapid muscular performances, such as jumping. This study aimed to identify the influence of isokinetic knee strength on specific jumping performance in volleyball. The secondary aim of the study was to evaluate the reliability and validity of two volleyball-specific jumping tests. The sample comprised 67 female (21.96±3.79 years; 68.26±8.52 kg; 174.43±6.85 cm) and 99 male (23.62±5.27 years; 84.83±10.37 kg; 189.01±7.21 cm) high-level volleyball players who competed in the 1st and 2nd National Division. Subjects were randomly divided into validation (N.=55 and 33 for males and females, respectively) and cross-validation subsamples (N.=54 and 34 for males and females, respectively). The set of predictors included isokinetic tests to evaluate the eccentric and concentric strength capacities of the knee extensors and flexors for the dominant and non-dominant leg. The main outcome measure for the isokinetic testing was peak torque (PT), which was later normalized for body mass and expressed as PT/kg. Block-jump and spike-jump performances were measured over three trials and observed as criteria. Forward stepwise multiple regressions were calculated for the validation subsamples and then cross-validated. Cross-validation included correlations and t-test differences between observed and predicted scores, and Bland-Altman graphics. The jumping tests were found to be reliable (spike jump: ICC of 0.79 and 0.86; block jump: ICC of 0.86 and 0.90, for males and females, respectively), and their validity was confirmed by significant t-test differences between 1st and 2nd division players. Isokinetic variables were found to be significant predictors of jumping performance in females, but not among males. In females, the isokinetic knee measures were shown to be stronger and more valid predictors of the block jump (42% and 64% of the explained variance for the validation and cross-validation subsamples, respectively) than of the spike jump (39% and 34% of the explained variance for the validation and cross-validation subsamples, respectively). Differences between the prediction models calculated for males and females are mostly explained by gender-specific biomechanics of jumping. The study defined the importance of isokinetic knee strength in volleyball jumping performance in female athletes. Further studies should evaluate the association between isokinetic ankle strength and volleyball-specific jumping performances. The results reinforce the need for cross-validation of prediction models in sport and exercise sciences.
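
    The cross-validation checks named in this record (correlations and t-test differences between observed and predicted scores, plus Bland-Altman statistics) can be reproduced in a few lines of Python. The jump heights below are simulated stand-ins for the study's data.

```python
import numpy as np
from scipy.stats import pearsonr, ttest_rel

rng = np.random.default_rng(2)
observed = rng.normal(45.0, 5.0, size=34)             # block-jump heights (cm)
predicted = observed + rng.normal(0.0, 3.0, size=34)  # regression predictions

r, _ = pearsonr(observed, predicted)       # agreement between scores
t, p = ttest_rel(observed, predicted)      # paired test for systematic bias
diff = predicted - observed
loa = 1.96 * diff.std(ddof=1)              # Bland-Altman limits of agreement
print(f"r = {r:.2f}, t-test p = {p:.2f}, bias = {diff.mean():.2f} +/- {loa:.2f}")
```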

  18. Opportunity integrated assessment facilitating critical thinking and science process skills measurement on acid base matter

    NASA Astrophysics Data System (ADS)

    Sari, Anggi Ristiyana Puspita; Suyanta, LFX, Endang Widjajanti; Rohaeti, Eli

    2017-05-01

    Recognizing the importance of the development of critical thinking and science process skills, the instrument should reflect the characteristics of chemistry. Therefore, constructing an accurate instrument for measuring those skills is important. However, integrated assessment instruments are limited in number. The purpose of this study is to validate an integrated assessment instrument for measuring students' critical thinking and science process skills on acid-base matter. The development model of the test instrument adapted the McIntire model. The sample consisted of 392 second-grade high school students in the academic year 2015/2016 in Yogyakarta. Exploratory Factor Analysis (EFA) was conducted to explore construct validity, whereas content validity was substantiated by Aiken's formula. The results show that the KMO test is 0.714, which indicates sufficient items for each factor, and the Bartlett test is significant (a significance value of less than 0.05). Furthermore, the content validity coefficient, based on 8 experts, is 0.85. The findings support the use of the integrated assessment instrument to measure critical thinking and science process skills on acid-base matter.
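
    The content validity coefficient reported here is Aiken's V, which has a simple closed form. A minimal sketch, with hypothetical expert ratings on a 1-5 relevance scale (not the study's actual ratings):

```python
import numpy as np

def aikens_v(ratings, lo=1, hi=5):
    """Aiken's V for one item: V = sum(r - lo) / (n * (hi - lo)),
    where r are the experts' relevance ratings on a lo..hi scale."""
    r = np.asarray(ratings, dtype=float)
    return (r - lo).sum() / (len(r) * (hi - lo))

expert_ratings = [5, 4, 5, 4, 5, 4, 5, 4]     # hypothetical ratings by 8 experts
print(f"V = {aikens_v(expert_ratings):.2f}")  # values near 1 indicate high relevance
```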

  19. Development and validation of a discriminating in vitro dissolution method for a poorly soluble drug, olmesartan medoxomil: comparison between commercial tablets.

    PubMed

    Bajerski, Lisiane; Rossi, Rochele Cassanta; Dias, Carolina Lupi; Bergold, Ana Maria; Fröehlich, Pedro Eduardo

    2010-06-01

    A dissolution test for tablets containing 40 mg of olmesartan medoxomil (OLM) was developed and validated using both LC-UV and UV methods. After evaluation of the sink condition, dissolution medium, and stability of the drug, the method was validated using USP apparatus 2, a 50 rpm rotation speed, and 900 ml of deaerated H2O + 0.5% sodium lauryl sulfate (w/v) at pH 6.8 (adjusted with 18% phosphoric acid) as the dissolution medium. The model-independent method using the difference factor (f1) and similarity factor (f2), the model-dependent method, and dissolution efficiency were employed to compare dissolution profiles. The kinetic parameters of drug release were also investigated. The obtained results provided adequate dissolution profiles. The developed dissolution test was validated according to international guidelines. Since there is no monograph for this drug in tablets, the dissolution method presented here can be used as a quality control test for OLM in this dosage form, especially in batch-to-batch evaluation.
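
    The model-independent comparison mentioned in this record rests on two standard formulas: the difference factor f1 = 100 * sum|R - T| / sum(R) and the similarity factor f2 = 50 * log10(100 / sqrt(1 + mean((R - T)^2))). A minimal sketch with hypothetical dissolution profiles:

```python
import numpy as np

def f1_f2(R, T):
    """Difference factor f1 and similarity factor f2 for reference (R) and
    test (T) dissolution profiles sampled at the same time points."""
    R, T = np.asarray(R, dtype=float), np.asarray(T, dtype=float)
    f1 = 100.0 * np.abs(R - T).sum() / R.sum()
    f2 = 50.0 * np.log10(100.0 / np.sqrt(1.0 + ((R - T) ** 2).mean()))
    return f1, f2

reference = [22, 45, 68, 84, 93, 97]   # hypothetical % OLM dissolved over time
test      = [20, 41, 63, 80, 91, 96]
f1, f2 = f1_f2(reference, test)
print(f"f1 = {f1:.1f} (similar if <15), f2 = {f2:.1f} (similar if >50)")
```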

  20. Validation of US3D for Capsule Aerodynamics using 05-CA Wind Tunnel Test Data

    NASA Technical Reports Server (NTRS)

    Schwing, Alan

    2012-01-01

    Several comparisons of computational fluid dynamics to wind tunnel test data are shown for the purpose of code validation. The wind tunnel test, 05-CA, uses a 7.66% model of NASA's Multi-Purpose Crew Vehicle in the 11-foot test section of the Ames Unitary Plan Wind Tunnel. A variety of freestream conditions over four Mach numbers and three angles of attack are considered. Test data comparisons include time-averaged integrated forces and moments, time-averaged static pressure ports on the surface, and Strouhal number. The applicability of the US3D code to subsonic and transonic flow over a bluff body is assessed on a comprehensive data set. With close comparison, this work validates US3D for highly separated flows similar to those examined here.

  1. The Trunk Impairment Scale - modified to ordinal scales in the Norwegian version.

    PubMed

    Gjelsvik, Bente; Breivik, Kyrre; Verheyden, Geert; Smedal, Tori; Hofstad, Håkon; Strand, Liv Inger

    2012-01-01

    To translate the Trunk Impairment Scale (TIS), a measure of trunk control in patients after stroke, into Norwegian (TIS-NV), and to explore its construct validity, internal consistency, intertester and test-retest reliability. TIS was translated according to international guidelines. The validity study was performed on data from 201 patients with acute stroke. Fifty patients with stroke and acquired brain injury were recruited to examine intertester and test-retest reliability. Construct validity was analyzed with exploratory and confirmatory factor analysis and item response theory, internal consistency with Cronbach's alpha test, and intertester and test-retest reliability with kappa and intraclass correlation coefficient tests. The back-translated version of TIS-NV was validated by the original developer. The subscale Static sitting balance was removed. By combining items from the subscales Dynamic sitting balance and Coordination, six ordinal superitems (testlets) were constructed. The TIS-NV was renamed the modified TIS-NV (TIS-modNV). After modifications the TIS-modNV fitted well to a locally dependent unidimensional item response theory model. It demonstrated good construct validity, excellent internal consistency, and high intertester and test-retest reliability for the total score. This study supports that the TIS-modNV is a valid and reliable scale for use in clinical practice and research.

  2. Cryogenic Orbital Test Bed 3 (CRYOTE3) Overview and Status

    NASA Technical Reports Server (NTRS)

    Stephens, Jonathan; Martin, Jim; Smith, James; Sisco, Jim; Marsell, Brandon; Roth, Jacob; Schallhorn, Paul; Wanzie, Nathaniel; Piryk, David; Bauer, Jeffrey; ...

    2015-01-01

    CRYOTE3 is a grassroots cryogenic fluid management (CFM) test effort with contributing government and industry partners, focused on developing and testing hardware to produce the data needed for model validation and implementation into flight systems.

  3. Investigating the Capacity of Hydrological Models to Project Impacts of Climate Change in the Context of Water Allocation

    NASA Astrophysics Data System (ADS)

    Velez, Carlos; Maroy, Edith; Rocabado, Ivan; Pereira, Fernando

    2017-04-01

    To analyse the impacts of climate change, hydrological models are used to project the hydrological response under future conditions that normally differ from those for which the models were calibrated. The challenge is to assess the validity of the projected effects when there are no data to validate them. A framework for testing the ability of models to project climate change was proposed by Refsgaard et al. (2014). The authors recommend the use of the differential split-sample test (DSST) in order to build confidence in the model projections. The method follows three steps: 1. A small number of sub-periods are selected according to one climate characteristic; 2. The calibration-validation test is applied on these periods; 3. The validation performances are compared to evaluate whether they vary significantly when climatic characteristics differ between calibration and validation. DSST relies on the existing records of climate and hydrological variables, and performances are estimated based on indicators of error between observed and simulated variables. Other authors suggest that, since climate models are not able to reproduce single events but rather statistical properties describing the climate, this should be reflected when testing hydrological models. Thus, performance criteria such as RMSE should be replaced by, for instance, flow duration curves or other distribution functions. Using this type of performance criteria, Van Steenbergen and Willems (2012) proposed a method to test the validity of hydrological models in a climate-changing context. The method is based on the evaluation of peak flow increases due to different levels of rainfall increases. In contrast to DSST, this method uses the projected climate variability and is especially useful for comparing different modelling tools. In the framework of a water allocation project for the region of Flanders (Belgium) we calibrated three hydrological models, NAM, PDM and VHM, for 67 gauged sub-catchments with approximately 40 years of records. This paper investigates the capacity of the three hydrological models to project the impacts of climate change scenarios. We propose a general testing framework which combines the use of the existing information, through an adapted form of DSST, with the approach proposed by Van Steenbergen and Willems (2012), adapted to assess statistical properties of flows useful in the context of water allocation. To assess the models we use robustness criteria based on a log Nash-Sutcliffe efficiency, bias on cumulative volumes, and relative changes in Q50/Q90 estimated from the flow duration curve. The three conceptual rainfall-runoff models yielded different results per sub-catchment. A relation was found between the robustness criteria and changes in mean rainfall and mean potential evapotranspiration. Biases are greatly affected by changes in precipitation, especially when the climate scenarios involve changes in precipitation volume beyond the range used for calibration. Using the combined approach we were able to rank the modelling tools per sub-catchment and create an ensemble of best models to project the impacts of climate variability for the catchments of 10 main rivers in Flanders. Thus, managers can better understand the usability of the modelling tools and the credibility of their outputs for water allocation applications.
    References: Refsgaard, J.C., Madsen, H., Andréassian, V., Arnbjerg-Nielsen, K., Davidson, T.A., Drews, M., Hamilton, D.P., Jeppesen, E., Kjellström, E., Olesen, J.E., Sonnenborg, T.O., Trolle, D., Willems, P., Christensen, J.H., 2014. A framework for testing the ability of models to project climate change and its impacts. Clim. Change. Van Steenbergen, N., Willems, P., 2012. Method for testing the accuracy of rainfall-runoff models in predicting peak flow changes due to rainfall changes, in a climate changing context. J. Hydrol. 415, 425-434.
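
    The robustness criteria named in this record (log Nash-Sutcliffe efficiency, bias on cumulative volumes, and Q50/Q90 from the flow duration curve) are straightforward to compute. A minimal sketch with synthetic daily flows, not the Flanders data:

```python
import numpy as np

def log_nse(obs, sim, eps=1e-6):
    """Nash-Sutcliffe efficiency on log-transformed flows."""
    lo, ls = np.log(np.asarray(obs) + eps), np.log(np.asarray(sim) + eps)
    return 1.0 - ((lo - ls) ** 2).sum() / ((lo - lo.mean()) ** 2).sum()

def fdc_quantile(flows, exceedance):
    """Flow exceeded `exceedance` percent of the time (e.g. Q50, Q90)."""
    return np.percentile(flows, 100 - exceedance)

obs = np.random.default_rng(3).gamma(2.0, 5.0, size=365)  # synthetic daily flows
sim = 1.1 * obs                                           # a biased "model"
vol_bias = sim.sum() / obs.sum() - 1.0                    # cumulative-volume bias
print(f"logNSE = {log_nse(obs, sim):.2f}, volume bias = {vol_bias:+.1%}, "
      f"Q50 = {fdc_quantile(sim, 50):.1f}, Q90 = {fdc_quantile(sim, 90):.1f}")
```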

  4. A Nonparametric Approach for Assessing Goodness-of-Fit of IRT Models in a Mixed Format Test

    ERIC Educational Resources Information Center

    Liang, Tie; Wells, Craig S.

    2015-01-01

    Investigating the fit of a parametric model plays a vital role in validating an item response theory (IRT) model. An area that has received little attention is the assessment of multiple IRT models used in a mixed-format test. The present study extends the nonparametric approach, proposed by Douglas and Cohen (2001), to assess model fit of three…

  5. Updated Prognostic Model for Predicting Overall Survival in First-Line Chemotherapy for Patients With Metastatic Castration-Resistant Prostate Cancer

    PubMed Central

    Halabi, Susan; Lin, Chen-Yen; Kelly, W. Kevin; Fizazi, Karim S.; Moul, Judd W.; Kaplan, Ellen B.; Morris, Michael J.; Small, Eric J.

    2014-01-01

    Purpose: Prognostic models for overall survival (OS) for patients with metastatic castration-resistant prostate cancer (mCRPC) are dated and do not reflect significant advances in treatment options available for these patients. This work developed and validated an updated prognostic model to predict OS in patients receiving first-line chemotherapy. Methods: Data from a phase III trial of 1,050 patients with mCRPC were used (Cancer and Leukemia Group B CALGB-90401 [Alliance]). The data were randomly split into training and testing sets. A separate phase III trial served as an independent validation set. Adaptive least absolute shrinkage and selection operator selected eight factors prognostic for OS. A predictive score was computed from the regression coefficients and used to classify patients into low- and high-risk groups. The model was assessed for its predictive accuracy using the time-dependent area under the curve (tAUC). Results: The model included Eastern Cooperative Oncology Group performance status, disease site, lactate dehydrogenase, opioid analgesic use, albumin, hemoglobin, prostate-specific antigen, and alkaline phosphatase. Median OS values in the high- and low-risk groups, respectively, in the testing set were 17 and 30 months (hazard ratio [HR], 2.2; P < .001); in the validation set they were 14 and 26 months (HR, 2.9; P < .001). The tAUCs were 0.73 (95% CI, 0.70 to 0.73) and 0.76 (95% CI, 0.72 to 0.76) in the testing and validation sets, respectively. Conclusion: An updated prognostic model for OS in patients with mCRPC receiving first-line chemotherapy was developed and validated on an external set. This model can be used to predict OS, as well as to better select patients to participate in trials on the basis of their prognosis. PMID:24449231
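
    The scoring step described in this record (a predictive score computed from fitted regression coefficients, then dichotomized into low- and high-risk groups) can be sketched as follows; the coefficients and covariates are hypothetical placeholders, not the published model.

```python
import numpy as np

beta = np.array([0.45, 0.30, 0.25, 0.40, -0.20, -0.15, 0.10, 0.35])  # 8 factors
rng = np.random.default_rng(4)
X = rng.normal(size=(1050, 8))          # standardized covariates per patient

score = X @ beta                        # linear predictor (risk score)
high_risk = score > np.median(score)    # dichotomize at the median score
print(f"high-risk patients: {high_risk.sum()} of {len(score)}")
```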

  6. Insight into the structural requirements of proton pump inhibitors based on CoMFA and CoMSIA studies.

    PubMed

    Nayana, M Ravi Shashi; Sekhar, Y Nataraja; Nandyala, Haritha; Muttineni, Ravikumar; Bairy, Santosh Kumar; Singh, Kriti; Mahmood, S K

    2008-10-01

    In the present study, a series of 179 quinoline and quinazoline heterocyclic analogues exhibiting inhibitory activity against gastric (H+/K+)-ATPase were investigated using the comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA) methods. Both models exhibited good correlation between the calculated 3D-QSAR fields and the observed biological activity for the respective training set compounds. The most optimal CoMFA and CoMSIA models yielded significant leave-one-out cross-validation coefficients, q2 of 0.777 and 0.744, and conventional correlation coefficients, r2 of 0.927 and 0.914, respectively. The predictive ability of the generated models was tested on a set of 52 compounds having a broad range of activity. CoMFA and CoMSIA yielded predicted activities for the test set compounds with r2pred of 0.893 and 0.917, respectively. These validation tests not only revealed the robustness of the models but also demonstrated that, for our models, r2pred based on the mean activity of the test set compounds can accurately estimate external predictivity. The factors affecting activity were analyzed carefully according to standard coefficient contour maps of steric, electrostatic, hydrophobic, acceptor and donor fields derived from CoMFA and CoMSIA. These contour plots identified several key features which explain the wide range of activities. The results obtained from the models offer important structural insight into designing novel peptic-ulcer inhibitors prior to their synthesis.
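
    The leave-one-out q2 quoted in this record is defined as 1 - PRESS/SS, where PRESS is the predictive residual sum of squares from leave-one-out cross-validation. A minimal sketch on synthetic descriptor data (not the actual CoMFA/CoMSIA fields):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(5)
X = rng.normal(size=(30, 4))                   # hypothetical field descriptors
y = X @ np.array([1.0, -0.5, 0.3, 0.8]) + rng.normal(scale=0.2, size=30)

pred = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
press = ((y - pred) ** 2).sum()                # predictive residual sum of squares
ss = ((y - y.mean()) ** 2).sum()               # total sum of squares
print(f"q2 = {1.0 - press / ss:.3f}")          # leave-one-out q2
```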

  7. Track/train dynamics test procedure transfer function test

    NASA Technical Reports Server (NTRS)

    Vigil, R. A.

    1975-01-01

    A transfer function vibration test was made on an 80 ton open hopper freight car in an effort to obtain validation data on the car's nonlinear elastic model. Test configuration, handling, test facilities, test operations, and data acquisition/reduction activities necessary to meet the conditions of test requirements are given.

  8. Transfer of Learning.

    ERIC Educational Resources Information Center

    1999

    This document contains four symposium papers on transfer of learning. In "Learning Transfer in a Social Service Agency: Test of an Expectancy Model of Motivation" (Reid A. Bates) structural equation modeling is used to test the validity of a valence-instrumentality-expectancy approach to motivation to transfer learning. "The…

  9. Comparison of free-piston Stirling engine model predictions with RE1000 engine test data

    NASA Technical Reports Server (NTRS)

    Tew, R. C., Jr.

    1984-01-01

    Predictions of a free-piston Stirling engine model are compared with RE1000 engine test data taken at NASA-Lewis Research Center. The model validation and the engine testing are being done under a joint interagency agreement between the Department of Energy's Oak Ridge National Laboratory and NASA-Lewis. A kinematic code developed at Lewis was upgraded to permit simulation of free-piston engine performance; it was further upgraded and modified at Lewis and is currently being validated. The model predicts engine performance by numerical integration of equations for each control volume in the working space. Piston motions are determined by numerical integration of the force balance on each piston or can be specified as Fourier series. In addition, the model Fourier analyzes the various piston forces to permit the construction of phasor force diagrams. The paper compares predicted and experimental values of power and efficiency and shows phasor force diagrams for the RE1000 engine displacer and piston. Further development plans for the model are also discussed.

  10. Development of novel in silico model for developmental toxicity assessment by using naïve Bayes classifier method.

    PubMed

    Zhang, Hui; Ren, Ji-Xia; Kang, Yan-Li; Bo, Peng; Liang, Jun-Yu; Ding, Lan; Kong, Wei-Bao; Zhang, Ji

    2017-08-01

    Toxicological testing associated with developmental toxicity endpoints is very expensive, time consuming and labor intensive. Thus, developing alternative approaches for developmental toxicity testing is an important and urgent task in the drug development field. In this investigation, the naïve Bayes classifier was applied to develop a novel prediction model for developmental toxicity. The established prediction model was evaluated by internal 5-fold cross-validation and an external test set. The overall prediction accuracies for the internal 5-fold cross-validation of the training set and the external test set were 96.6% and 82.8%, respectively. In addition, four simple descriptors and some representative substructures of developmental toxicants were identified. We therefore hope the established in silico prediction model can be used as an alternative method for toxicological assessment. The obtained molecular information could afford a deeper understanding of developmental toxicants and provide guidance for medicinal chemists working in drug discovery and lead optimization. Copyright © 2017 Elsevier Inc. All rights reserved.
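
    The modelling approach in this record (a naïve Bayes classifier over structural descriptors, judged by 5-fold cross-validation) can be sketched with scikit-learn. The binary fingerprints and labels below are synthetic placeholders, not the study's dataset.

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
X = rng.integers(0, 2, size=(300, 50))       # binary substructure fingerprints
y = (X[:, :5].sum(axis=1) > 2).astype(int)   # synthetic toxicity labels

acc = cross_val_score(BernoulliNB(), X, y, cv=5, scoring="accuracy")
print(f"5-fold CV accuracy = {acc.mean():.3f}")
```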

  11. Performance Testing of a Trace Contaminant Control Subassembly for the International Space Station

    NASA Technical Reports Server (NTRS)

    Perry, J. L.; Curtis, R. E.; Alexandre, K. L.; Ruggiero, L. L.; Shtessel, N.

    1998-01-01

    As part of the International Space Station (ISS) Trace Contaminant Control Subassembly (TCCS) development, a performance test has been conducted to provide reference data for flight verification analyses. This test, which used the U.S. Habitation Module (U.S. Hab) TCCS as the test article, was designed to add to the existing database on TCCS performance. Included in this database are results obtained during ISS development testing; testing of functionally similar TCCS prototype units; and bench scale testing of activated charcoal, oxidation catalyst, and granular lithium hydroxide (LiOH). The present database has served as the basis for the development and validation of a computerized TCCS process simulation model. This model serves as the primary means for verifying the ISS TCCS performance. In order to mitigate risk associated with this verification approach, the U.S. Hab TCCS performance test provides an additional set of data which serve to anchor both the process model and previously-obtained development test data to flight hardware performance. The following discussion provides relevant background followed by a summary of the test hardware, objectives, requirements, and facilities. Facility and test article performance during the test is summarized, test results are presented, and the TCCS's performance relative to past test experience is discussed. Performance predictions made with the TCCS process model are compared with the U.S. Hab TCCS test results to demonstrate its validation.

  12. Measurement Noninvariance of Safer Sex Self-Efficacy Between Heterosexual and Sexual Minority Black Youth.

    PubMed

    Gerke, Donald; Budd, Elizabeth L; Plax, Kathryn

    2016-01-01

    Black and lesbian, gay, bisexual, or questioning (LGBQ) youth in the United States are disproportionately affected by HIV and other sexually transmitted diseases (STDs). Although self-efficacy is strongly, positively associated with safer sex behaviors, no studies have examined the validity of a safer sex self-efficacy scale used by many federally funded HIV/STD prevention programs. This study aims to test factor validity of the Sexual Self-Efficacy Scale by using confirmatory factor analysis (CFA) to determine if scale validity varies between heterosexual and LGBQ Black youth. The study uses cross-sectional data collected through baseline surveys with 226 Black youth (15 to 24 years) enrolled in community-based HIV-prevention programs. Participants use a 4-point Likert-type scale to report their confidence in performing 6 healthy sexual behaviors. CFAs are conducted on 2 factor structures of the scale. Using the best-fitting model, the scale is tested for measurement invariance between the 2 groups. A single-factor model with correlated errors of condom-specific items fits the sample well and, when tested with the heterosexual group, the model demonstrates good fit. However, when tested with the LGBQ group, the same model yields poor fit, indicating factorial noninvariance between the groups. The Sexual Self-Efficacy Scale does not perform equally well among Black heterosexual and LGBQ youth. Study findings suggest additional research is needed to inform development of measures for safer sex self-efficacy among Black LGBQ youth to ensure validity of conceptual understanding and to accurately assess effectiveness of HIV/STD prevention interventions among this population.

  13. Validating proposed migration equation and parameters' values as a tool to reproduce and predict 137Cs vertical migration activity in Spanish soils.

    PubMed

    Olondo, C; Legarda, F; Herranz, M; Idoeta, R

    2017-04-01

    This paper shows the procedure performed to validate the migration equation and the migration parameters' values presented in a previous paper (Legarda et al., 2011) regarding the migration of 137Cs in Spanish mainland soils. In this paper, this model validation has been carried out by checking experimentally obtained activity concentration values against those predicted by the model. These experimental data come from the measured vertical activity profiles of 8 new sampling points located in northern Spain. Before testing the predicted values of the model, the uncertainty of those values was assessed with an appropriate uncertainty analysis. Once the uncertainty of the model was established, both sets of activity concentration values, experimental versus model-predicted, were compared. Model validation was performed by analyzing the model's accuracy, studying it as a whole and also at different depth intervals. As a result, this model has been validated as a tool to predict 137Cs behaviour in a Mediterranean environment. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. From theory to experimental design-Quantifying a trait-based theory of predator-prey dynamics.

    PubMed

    Laubmeier, A N; Wootton, Kate; Banks, J E; Bommarco, Riccardo; Curtsdotter, Alva; Jonsson, Tomas; Roslin, Tomas; Banks, H T

    2018-01-01

    Successfully applying theoretical models to natural communities and predicting ecosystem behavior under changing conditions is the backbone of predictive ecology. However, the experiments required to test these models are dictated by practical constraints, and models are often opportunistically validated against data for which they were never intended. Alternatively, we can inform and improve experimental design by an in-depth pre-experimental analysis of the model, generating experiments better targeted at testing the validity of a theory. Here, we describe this process for a specific experiment. Starting from food web ecological theory, we formulate a model and design an experiment to optimally test the validity of the theory, supplementing traditional design considerations with model analysis. The experiment itself will be run and described in a separate paper. The theory we test is that trophic population dynamics are dictated by species traits, and we study this in a community of terrestrial arthropods. We depart from the Allometric Trophic Network (ATN) model and hypothesize that including habitat use, in addition to body mass, is necessary to better model trophic interactions. We therefore formulate new terms which account for micro-habitat use as well as intra- and interspecific interference in the ATN model. We design an experiment and an effective sampling regime to test this model and the underlying assumptions about the traits dominating trophic interactions. We arrive at a detailed sampling protocol to maximize information content in the empirical data obtained from the experiment and, relying on theoretical analysis of the proposed model, explore potential shortcomings of our design. Consequently, since this is a "pre-experimental" exercise aimed at improving the links between hypothesis formulation, model construction, experimental design and data collection, we hasten to publish our findings before analyzing data from the actual experiment, thus setting the stage for strong inference.

  15. On vital aid: the why, what and how of validation

    PubMed Central

    Kleywegt, Gerard J.

    2009-01-01

    Limitations to the data and subjectivity in the structure-determination process may cause errors in macromolecular crystal structures. Appropriate validation techniques may be used to reveal problems in structures, ideally before they are analysed, published or deposited. Additionally, such techniques may be used a posteriori to assess the (relative) merits of a model by potential users. Weak validation methods and statistics assess how well a model reproduces the information that was used in its construction (i.e. experimental data and prior knowledge). Strong methods and statistics, on the other hand, test how well a model predicts data or information that were not used in the structure-determination process. These may be data that were excluded from the process on purpose, general knowledge about macromolecular structure, information about the biological role and biochemical activity of the molecule under study or its mutants or complexes, and predictions that are based on the model and that can be tested experimentally. PMID:19171968

  16. New public QSAR model for carcinogenicity

    PubMed Central

    2010-01-01

    Background: One of the main goals of the new chemical regulation REACH (Registration, Evaluation and Authorization of Chemicals) is to fill the gaps in data concerning properties of chemicals affecting human health. (Q)SAR models are accepted as a suitable source of information. The EU-funded CAESAR project aimed to develop models for prediction of 5 endpoints for regulatory purposes. Carcinogenicity is one of the endpoints under consideration. Results: Models for prediction of carcinogenic potency according to specific requirements of chemical regulation were developed. A dataset of 805 non-congeneric chemicals extracted from the Carcinogenic Potency Database (CPDBAS) was used. The Counter Propagation Artificial Neural Network (CP ANN) algorithm was implemented. In the article two alternative models for predicting carcinogenicity are described. The first model employed eight MDL descriptors (model A) and the second one twelve Dragon descriptors (model B). CAESAR's models have been assessed according to the OECD principles for the validation of QSAR. For model validity we used a wide series of statistical checks. Models A and B yielded accuracies on the training set (644 compounds) of 91% and 89%, respectively; the accuracy on the test set (161 compounds) was 73% and 69%, while the specificity was 69% and 61%, respectively. Sensitivity in both cases was equal to 75%. The accuracy of the leave-20%-out cross-validation for the training set of models A and B was equal to 66% and 62%, respectively. To verify whether the models perform correctly on new compounds, external validation was carried out. The external test set was composed of 738 compounds. We obtained accuracy of external validation equal to 61.4% and 60.0%, sensitivity of 64.0% and 61.8%, and specificity equal to 58.9% and 58.4%, respectively, for models A and B. Conclusion: Carcinogenicity is a particularly important endpoint and it is expected that QSAR models will not replace human experts' opinions and conventional methods. However, we believe that a combination of several methods will provide useful support to the overall evaluation of carcinogenicity. In the present paper, models for classification of carcinogenic compounds using MDL and Dragon descriptors were developed. The models could be used to set priorities among chemicals for further testing. The models at the CAESAR site were implemented in Java and are publicly accessible. PMID:20678182
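
    The performance figures in this record (accuracy, sensitivity, specificity) all derive from the binary confusion matrix. A minimal sketch with toy labels:

```python
import numpy as np

def classification_stats(y_true, y_pred):
    """Accuracy, sensitivity (true-positive rate) and specificity
    (true-negative rate) for binary carcinogenicity calls."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = ((y_true == 1) & (y_pred == 1)).sum()
    tn = ((y_true == 0) & (y_pred == 0)).sum()
    fp = ((y_true == 0) & (y_pred == 1)).sum()
    fn = ((y_true == 1) & (y_pred == 0)).sum()
    return (tp + tn) / len(y_true), tp / (tp + fn), tn / (tn + fp)

acc, sens, spec = classification_stats([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0])
print(f"accuracy={acc:.2f} sensitivity={sens:.2f} specificity={spec:.2f}")
```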

  17. Using the Rasch analysis for the psychometric validation of the Irregular Word Reading Test (TeLPI): A Portuguese test for the assessment of premorbid intelligence.

    PubMed

    Freitas, Sandra; Prieto, Gerardo; Simões, Mário R; Nogueira, Joana; Santana, Isabel; Martins, Cristina; Alves, Lara

    2018-05-03

    The present study aims to analyze the psychometric characteristics of the TeLPI (Irregular Word Reading Test), a Portuguese premorbid intelligence test, using the Rasch model for dichotomous items. The results reveal an overall adequacy and good fit values for both items and persons. A high variability of cognitive performance level and good quality of the measurements were also found. The TeLPI proved to be a unidimensional measure with reduced DIF effects. The present findings contribute to overcoming an important gap in the psychometric validation of this instrument and provide good evidence of the overall psychometric validity of TeLPI results.
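
    The dichotomous Rasch model underlying this analysis gives the probability of a correct response as a logistic function of the difference between person ability and item difficulty. A minimal sketch with hypothetical parameters, not the TeLPI calibration:

```python
import numpy as np

def rasch_prob(theta, b):
    """Dichotomous Rasch model: P(correct) = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

theta = 0.5                                 # hypothetical person ability (logits)
difficulties = np.array([-1.0, 0.0, 1.5])   # hypothetical word difficulties
print(rasch_prob(theta, difficulties))      # easier words -> higher probability
```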

  18. Validation of High-Fidelity CFD/CAA Framework for Launch Vehicle Acoustic Environment Simulation against Scale Model Test Data

    NASA Technical Reports Server (NTRS)

    Liever, Peter A.; West, Jeffrey S.

    2016-01-01

    A hybrid Computational Fluid Dynamics and Computational Aero-Acoustics (CFD/CAA) modeling framework has been developed for launch vehicle liftoff acoustic environment predictions. The framework couples the existing highly-scalable NASA production CFD code, Loci/CHEM, with a high-order accurate discontinuous Galerkin solver developed in the same production framework, Loci/THRUST, to accurately resolve and propagate acoustic physics across the entire launch environment. Time-accurate, Hybrid RANS/LES CFD modeling is applied for predicting the acoustic generation physics at the plume source, and a high-order accurate unstructured discontinuous Galerkin (DG) method is employed to propagate acoustic waves away from the source across large distances using high-order accurate schemes. The DG solver is capable of solving 2nd, 3rd, and 4th order Euler solutions for non-linear, conservative acoustic field propagation. Initial application testing and validation has been carried out against high resolution acoustic data from the Ares Scale Model Acoustic Test (ASMAT) series to evaluate the capabilities and production readiness of the CFD/CAA system to resolve the observed spectrum of acoustic frequency content. This paper presents results from this validation and outlines efforts to mature and improve the computational simulation framework.

  19. Integrated Resilient Aircraft Control Project Full Scale Flight Validation

    NASA Technical Reports Server (NTRS)

    Bosworth, John T.

    2009-01-01

    Objective: Provide validation of adaptive control law concepts through full-scale flight evaluation. Technical Approach: a) Engage failure mode - destabilizing or frozen surface. b) Perform formation flight and air-to-air tracking tasks. Evaluate adaptive algorithm: a) Stability metrics. b) Model-following metrics. Full-scale flight testing provides the ability to validate different adaptive flight control approaches, and it adds credence to NASA's research efforts. A sustained research effort is required to remove the roadblocks and provide adaptive control as a viable design solution for increased aircraft resilience.

  20. Construct Validity of ADHD/ODD Rating Scales: Recommendations for the Evaluation of Forthcoming DSM-V ADHD/ODD Scales

    ERIC Educational Resources Information Center

    Burns, G. Leonard; Walsh, James A.; Servera, Mateu; Lorenzo-Seva, Urbano; Cardo, Esther; Rodriguez-Fornells, Antoni

    2013-01-01

    Exploratory structural equation modeling (SEM) was applied to a multiple indicator (26 individual symptom ratings) by multitrait (ADHD-IN, ADHD-HI and ODD factors) by multiple source (mothers, fathers and teachers) model to test the invariance, convergent and discriminant validity of the Child and Adolescent Disruptive Behavior Inventory with 872…

  1. Simulated training in colonoscopic stenting of colonic strictures: validation of a cadaver model.

    PubMed

    Iordache, F; Bucobo, J C; Devlin, D; You, K; Bergamaschi, R

    2015-07-01

    There are currently no available simulation models for training in colonoscopic stent deployment. The aim of this study was to validate a cadaver model for simulation training in colonoscopy with stent deployment for colonic strictures. This was a prospective study enrolling surgeons at a single institution. Participants performed colonoscopic stenting on a cadaver model. Their performance was assessed by two independent observers. Measurements were performed for quantitative analysis (time to identify stenosis, time for deployment, accuracy) and a weighted score was devised for assessment. The Mann-Whitney U-test and Student's t-test were used for nonparametric and parametric data, respectively. Cohen's kappa coefficient was used for reliability. Twenty participants performed a colonoscopy with deployment of a self-expandable metallic stent in two cadavers (groups A and B) with 20 strictures overall. The median time was 206 s. The model was able to differentiate between experts and novices (P = 0.013). The results showed a good consensus estimate of reliability, with kappa = 0.571 (P < 0.0001). The cadaver model described in this study has content, construct and concurrent validity for simulation training in colonoscopic deployment of self-expandable stents for colonic strictures. Further studies are needed to evaluate the predictive validity of this model in terms of skill transfer to clinical practice. Colorectal Disease © 2014 The Association of Coloproctology of Great Britain and Ireland.
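
    The two statistics used in this record are standard and available in scipy and scikit-learn; the sketch below uses hypothetical task times and ratings, not the study's data.

```python
from scipy.stats import mannwhitneyu
from sklearn.metrics import cohen_kappa_score

experts = [150, 170, 180, 190, 200, 210]      # hypothetical deployment times (s)
novices = [210, 230, 250, 260, 280, 300]
u, p = mannwhitneyu(experts, novices, alternative="two-sided")
print(f"Mann-Whitney U = {u}, p = {p:.3f}")   # can the model separate groups?

rater1 = [3, 4, 4, 5, 2, 4, 3, 5]             # hypothetical observer scores
rater2 = [3, 4, 5, 5, 2, 4, 4, 5]
print(f"kappa = {cohen_kappa_score(rater1, rater2):.3f}")  # inter-observer agreement
```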

  2. Testing the psychometric properties of the Environmental Attitudes Inventory on undergraduate students in the Arab context: A test-retest approach.

    PubMed

    AlMenhali, Entesar Ali; Khalid, Khalizani; Iyanna, Shilpa

    2018-01-01

    The Environmental Attitudes Inventory (EAI) was developed to evaluate the multidimensional nature of environmental attitudes; however, it is based on a dataset from outside the Arab context. This study reinvestigated the construct validity of the EAI with a new dataset and confirmed the feasibility of applying it in the Arab context. One hundred and forty-eight subjects in Study 1 and 130 in Study 2 provided valid responses. An exploratory factor analysis (EFA) was used to extract a new factor structure in Study 1, and confirmatory factor analysis (CFA) was performed in Study 2. Both studies generated a seven-factor model, and model fit was discussed for both studies. Study 2 exhibited satisfactory model fit indices compared to Study 1. Factor loading values of a few items in Study 1 affected the reliability values and average variance extracted values, which demonstrated low discriminant validity. Based on the results of the EFA and CFA, this study showed sufficient model fit and suggested the feasibility of applying the EAI in the Arab context with good construct validity and internal consistency.

  3. Testing the psychometric properties of the Environmental Attitudes Inventory on undergraduate students in the Arab context: A test-retest approach

    PubMed Central

    2018-01-01

    The Environmental Attitudes Inventory (EAI) was developed to evaluate the multidimensional nature of environmental attitudes; however, it is based on a dataset from outside the Arab context. This study reinvestigated the construct validity of the EAI with a new dataset and confirmed the feasibility of applying it in the Arab context. One hundred and forty-eight subjects in Study 1 and 130 in Study 2 provided valid responses. An exploratory factor analysis (EFA) was used to extract a new factor structure in Study 1, and confirmatory factor analysis (CFA) was performed in Study 2. Both studies generated a seven-factor model, and model fit was discussed for both studies. Study 2 exhibited satisfactory model fit indices compared to Study 1. Factor loading values of a few items in Study 1 affected the reliability values and average variance extracted values, which demonstrated low discriminant validity. Based on the results of the EFA and CFA, this study showed sufficient model fit and suggested the feasibility of applying the EAI in the Arab context with good construct validity and internal consistency. PMID:29758021
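
    The EFA step described in the two records above can be sketched with scikit-learn; the seven-factor, varimax-rotated extraction below runs on random stand-in responses, since the actual EAI data are not available.

```python
# Minimal EFA sketch; random data stand in for 7-point Likert EAI responses.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
responses = rng.integers(1, 8, size=(148, 21)).astype(float)  # 148 subjects x 21 items

X = StandardScaler().fit_transform(responses)
efa = FactorAnalysis(n_components=7, rotation="varimax")  # seven-factor model
efa.fit(X)

loadings = efa.components_.T  # items x factors
# With real data one would inspect which items load above ~0.30 per factor.
print("Items loading above 0.30 on factor 1:",
      np.where(np.abs(loadings[:, 0]) > 0.30)[0].tolist())
```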

  4. ExEP yield modeling tool and validation test results

    NASA Astrophysics Data System (ADS)

    Morgan, Rhonda; Turmon, Michael; Delacroix, Christian; Savransky, Dmitry; Garrett, Daniel; Lowrance, Patrick; Liu, Xiang Cate; Nunez, Paul

    2017-09-01

    EXOSIMS is an open-source simulation tool for parametric modeling of the detection yield and characterization of exoplanets. EXOSIMS has been adopted by the Exoplanet Exploration Program's Standards Definition and Evaluation Team (ExSDET) as a common mechanism for comparison of exoplanet mission concept studies. To ensure trustworthiness of the tool, we developed a validation test plan that leverages the Python-language unit-test framework, utilizes integration tests for selected module interactions, and performs end-to-end cross-validation with other yield tools. This paper presents the test methods and results, with the physics-based tests such as photometry and integration time calculation treated in detail and the functional tests treated summarily. The test case utilized a 4-m unobscured telescope with an idealized coronagraph and an exoplanet population from the IPAC radial velocity (RV) exoplanet catalog. The known RV planets were set at quadrature to allow deterministic validation of the calculation of physical parameters, such as working angle, photon counts and integration time. The observing keepout region was tested by generating plots and movies of the targets and the keepout zone over a year. Although the keepout integration test required interpretation by a user, the test revealed problems in the L2 halo orbit and the parameterization of keepout applied to some solar system bodies, which the development team was able to address. The validation testing of EXOSIMS was performed iteratively with the developers of EXOSIMS and resulted in a more robust, stable, and trustworthy tool that the exoplanet community can use to simulate exoplanet direct-detection missions from probe class, to WFIRST, up to large mission concepts such as HabEx and LUVOIR.
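
    A minimal sketch of the deterministic, physics-based unit testing this record describes is shown below; `integration_time` is a hypothetical stand-in for illustration, not the actual EXOSIMS API.

```python
# Hedged sketch: a deterministic check of a photometric calculation against
# an analytic value, in the style of a Python unit-test validation plan.
import unittest

def integration_time(signal_rate, noise_rate, snr_target):
    """Time to reach a target SNR under Poisson statistics (illustrative)."""
    return snr_target**2 * (signal_rate + noise_rate) / signal_rate**2

class TestPhotometry(unittest.TestCase):
    def test_integration_time_matches_analytic_value(self):
        # Signal 10 ph/s, noise 5 ph/s, SNR 5: t = 25 * 15 / 100 = 3.75 s.
        self.assertAlmostEqual(integration_time(10.0, 5.0, 5.0), 3.75, places=12)

if __name__ == "__main__":
    unittest.main()
```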

  5. [Turkish validity and reliability study of fear of pain questionnaire-III].

    PubMed

    Ünver, Seher; Turan, Fatma Nesrin

    2018-01-01

    This study aimed to develop a Turkish version of the Fear of Pain Questionnaire-III developed by McNeil and Rainwater (1998) and examine its validity and reliability indicators. The study was conducted with 459 university students studying in the nursing department. The Turkish translation of the scale was conducted by language experts and the original scale owner. Expert opinions were taken for language validity, and Lawshe's content validity ratio formula was used to calculate the content validity. Exploratory factor analysis was used to assess the construct validity. The factors were rotated using the Varimax rotation (orthogonal) method. For reliability indicators of the questionnaire, the internal consistency coefficient and test-retest reliability were utilized. Exploratory factor analysis yielded a three-factor model (explaining 50.5% of the total variance) in which all item factor loadings were above the limit value of 0.30, indicating that the questionnaire had good construct validity. The Cronbach's alpha value for the total questionnaire was 0.938, and the test-retest value was 0.846 for the total scale. The Turkish version of the Fear of Pain Questionnaire-III had sufficiently high reliability and validity to be used as a tool in evaluating the fear of pain among the young Turkish population.
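
    Cronbach's alpha, the internal-consistency statistic reported above, can be computed from first principles; the sketch below uses random stand-in scores rather than the study's responses.

```python
# Cronbach's alpha from a subjects-by-items score matrix (illustrative data).
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: rows = respondents, columns = items."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

rng = np.random.default_rng(1)
data = rng.integers(1, 6, size=(459, 30)).astype(float)  # 459 students, 30 items
print(f"alpha = {cronbach_alpha(data):.3f}")
```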

  6. Development and Validity Testing of Belief Measurement Model in Buddhism for Junior High School Students at Chiang Rai Buddhist Scripture School: An Application for Multitrait-Multimethod Analysis

    ERIC Educational Resources Information Center

    Chaidi, Thirachai; Damrongpanich, Sunthorapot

    2016-01-01

    The purposes of this study were to develop a model to measure the belief in Buddhism of junior high school students at Chiang Rai Buddhist Scripture School, and to determine construct validity of the model for measuring the belief in Buddhism by using Multitrait-Multimethod analysis. The samples were 590 junior high school students at Buddhist…

  7. Modeling Item-Level and Step-Level Invariance Effects in Polytomous Items Using the Partial Credit Model

    ERIC Educational Resources Information Center

    Gattamorta, Karina A.; Penfield, Randall D.; Myers, Nicholas D.

    2012-01-01

    Measurement invariance is a common consideration in the evaluation of the validity and fairness of test scores when the tested population contains distinct groups of examinees, such as examinees receiving different forms of a translated test. Measurement invariance in polytomous items has traditionally been evaluated at the item-level,…

  8. Grid Modernization Laboratory Consortium - Testing and Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kroposki, Benjamin; Skare, Paul; Pratt, Rob

    This paper highlights some of the unique testing capabilities and projects being performed at several national laboratories as part of the U.S. Department of Energy Grid Modernization Laboratory Consortium. As part of this effort, the Grid Modernization Laboratory Consortium Testing Network is being developed to accelerate grid modernization by enabling access to a comprehensive testing infrastructure and creating a repository of validated models and simulation tools that will be publicly available. This work is key to accelerating the development, validation, standardization, adoption, and deployment of new grid technologies to help meet U.S. energy goals.

  9. Validation of the Arabic Version of the Group Personality Projective Test among university students in Bahrain.

    PubMed

    Al-Musawi, Nu'man M

    2003-04-01

    Using confirmatory factor analytic techniques on data generated from 200 students enrolled at the University of Bahrain, we obtained some construct validity and reliability data for the Arabic Version of the 1961 Group Personality Projective Test by Cassel and Khan. In contrast to the 5-factor model proposed for the Group Personality Projective Test, a 6-factor solution appeared justified for the Arabic Version of this test, suggesting some variance between the cultural groups in the United States and in Bahrain.

  10. The Predictive Validity of Interim Assessment Scores Based on the Full-Information Bifactor Model for the Prediction of End-of-Grade Test Performance

    ERIC Educational Resources Information Center

    Immekus, Jason C.; Atitya, Ben

    2016-01-01

    Interim tests are a central component of district-wide assessment systems, yet their technical quality to guide decisions (e.g., instructional) has been repeatedly questioned. In response, the study purpose was to investigate the validity of a series of English Language Arts (ELA) interim assessments in terms of dimensionality and prediction of…

  11. Early detection of ovarian cancer by serum marker and targeted ultrasound imaging | Division of Cancer Prevention

    Cancer.gov

    We propose to test the validity and specificity of our targeted ultrasound imaging probes in detecting early stage ovarian cancer (OVCA) by transvaginal ultrasound imaging (TVUS). We then test the predictive validity of these probes in a longitudinal study using the laying hen – the only widely available animal model of spontaneous OVCA. OVCA is a fatal gynecological

  12. Developing a Research Method for Testing New Curriculum Ideas: Report No. 1 on Using Marketing Research in a Lifelong Learning Organization.

    ERIC Educational Resources Information Center

    Haskins, Jack B.

    A survey method was developed and used to determine interest in new course topics at the Learning Institute for Elders (LIFE) at the University of Central Florida. In the absence of a known validated method for course concept testing, this approach was modeled after the "message pretesting," or formative, research used and validated in…

  13. Rasch Modeling of Revised Token Test Performance: Validity and Sensitivity to Change

    ERIC Educational Resources Information Center

    Hula, William; Doyle, Patrick J.; McNeil, Malcolm R.; Mikolic, Joseph M.

    2006-01-01

    The purpose of this research was to examine the validity of the 55-item Revised Token Test (RTT) and to compare traditional and Rasch-based scores in their ability to detect group differences and change over time. The 55-item RTT was administered to 108 left- and right-hemisphere stroke survivors, and the data were submitted to Rasch analysis.…

  14. On the Reliability and Validity of a Numerical Reasoning Speed Dimension Derived from Response Times Collected in Computerized Testing

    ERIC Educational Resources Information Center

    Davison, Mark L.; Semmes, Robert; Huang, Lan; Close, Catherine N.

    2012-01-01

    Data from 181 college students were used to assess whether math reasoning item response times in computerized testing can provide valid and reliable measures of a speed dimension. The alternate forms reliability of the speed dimension was .85. A two-dimensional structural equation model suggests that the speed dimension is related to the accuracy…

  15. Modelling elderly patients' perception of the community pharmacist image when providing professional services.

    PubMed

    Sabater-Galindo, Marta; Sabater-Hernández, Daniel; Ruiz de Maya, Salvador; Gastelurrutia, Miguel Angel; Martínez-Martínez, Fernando; Benrimoj, Shalom I

    2017-06-01

    Professional pharmaceutical services may affect patients' health behaviour as well as influence patients' perceptions of the pharmacist image. The Health Belief Model predicts health-related behaviours using patients' beliefs. However, health beliefs (HBs) could transcend predicting health behaviour and may have an impact on patients' perceptions of the pharmacist image. This study's objective was to develop and test a model that relates patients' HBs to their perception of the image of the pharmacist, and to assess whether the provision of pharmacy services (Intervention group, IG) influences this perception compared to usual care (Control group). A qualitative study was undertaken and a questionnaire was created for the development of the model. The content, dimensions, validity and reliability of the questionnaire were pre-tested qualitatively and in a pilot mail survey. The reliability and validity of the proposed model were tested using Confirmatory Factor Analysis (CFA). Structural Equation Modelling (SEM) was used to explain relationships between dimensions of the final model and to analyse differences between groups. As a result, a final model was developed. CFA concluded that the model was valid and reliable (goodness-of-fit indices: χ²(80) = 125.726, p = .001, RMSEA = .04, SRMR = .04, GFI = .997, NFI = .93, CFI = .974). SEM indicated that 'Perceived benefits' was significantly associated with 'Perceived Pharmacist Image' in the whole sample. Differences were found in the IG, where 'Self-efficacy' also significantly influenced 'Perceived Pharmacist Image'. A model of patients' HBs related to their image of the pharmacist was developed and tested. When pharmacists deliver professional services, these services modify some patients' HBs, which in turn influence public perception of the pharmacist.
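
    The fit indices quoted above relate directly to the test statistic; as a hedged illustration, the sketch below recovers an RMSEA near .04 from the reported χ²(80) = 125.726, assuming a sample size of N = 358, which is not stated in the excerpt.

```python
# RMSEA = sqrt(max(chi2 - df, 0) / (df * (N - 1))); N here is an assumption.
import math

def rmsea(chi2: float, df: int, n: int) -> float:
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

print(f"RMSEA = {rmsea(125.726, 80, 358):.3f}")  # ~0.040, matching the abstract
```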

  16. A ferrofluid based energy harvester: Computational modeling, analysis, and experimental validation

    NASA Astrophysics Data System (ADS)

    Liu, Qi; Alazemi, Saad F.; Daqaq, Mohammed F.; Li, Gang

    2018-03-01

    A computational model is described and implemented in this work to analyze the performance of a ferrofluid based electromagnetic energy harvester. The energy harvester converts ambient vibratory energy into an electromotive force through a sloshing motion of a ferrofluid. The computational model solves the coupled Maxwell's equations and Navier-Stokes equations for the dynamic behavior of the magnetic field and fluid motion. The model is validated against experimental results for eight different configurations of the system. The validated model is then employed to study the underlying mechanisms that determine the electromotive force of the energy harvester. Furthermore, computational analysis is performed to test the effect of several modeling aspects, such as the three-dimensional effect, surface tension, and the type of ferrofluid-magnetic field coupling, on the accuracy of the model prediction.

  17. Semi-physiologic model validation and bioequivalence trials simulation to select the best analyte for acetylsalicylic acid.

    PubMed

    Cuesta-Gragera, Ana; Navarro-Fontestad, Carmen; Mangas-Sanjuan, Victor; González-Álvarez, Isabel; García-Arieta, Alfredo; Trocóniz, Iñaki F; Casabó, Vicente G; Bermejo, Marival

    2015-07-10

    The objective of this paper is to apply a previously developed semi-physiologic pharmacokinetic model implemented in NONMEM to simulate bioequivalence (BE) trials of acetylsalicylic acid (ASA) in order to validate the model performance against ASA human experimental data. ASA is a drug with first-pass hepatic and intestinal metabolism following Michaelis-Menten kinetics that leads to the formation of two main metabolites in two generations (first and second generation metabolites). The first aim was to adapt the semi-physiological model for ASA in NONMEM using ASA pharmacokinetic parameters from the literature, reflecting its sequential metabolism. The second aim was to validate this model by comparing the results obtained in NONMEM simulations with published experimental data at a dose of 1000 mg. The validated model was used to simulate bioequivalence trials at 3 dose schemes (100, 1000 and 3000 mg) and with 6 test formulations with decreasing in vivo dissolution rate constants versus the reference formulation (kD = 8 to 0.25 h(-1)). Finally, the third aim was to determine which analyte (parent drug, first-generation or second-generation metabolite) was most sensitive to changes in formulation performance. The validation results showed that the concentration-time curves obtained with the simulations closely reproduced the published experimental data, confirming model performance. The parent drug (ASA) proved to be the analyte most sensitive to the decrease in pharmaceutical quality, with the highest decrease in Cmax and AUC ratios between test and reference formulations. Copyright © 2015 Elsevier B.V. All rights reserved.
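
    The bioequivalence metrics used above (test/reference ratios of Cmax and AUC) can be illustrated with synthetic one-compartment profiles; the curves and constants below are stand-ins, not the NONMEM simulation output.

```python
# Illustrative Cmax and AUC ratios for a slow- vs fast-dissolving product.
import numpy as np

def profile(t, ka, ke, scale):
    """One-compartment oral absorption curve (Bateman function)."""
    return scale * ka / (ka - ke) * (np.exp(-ke * t) - np.exp(-ka * t))

def auc_trapezoid(t, c):
    """Area under the concentration-time curve by the trapezoidal rule."""
    return float(np.sum(0.5 * (c[1:] + c[:-1]) * np.diff(t)))

t = np.linspace(0.0, 24.0, 500)                  # hours
ref = profile(t, ka=8.0, ke=0.3, scale=50.0)     # fast-dissolving reference
test = profile(t, ka=0.25, ke=0.3, scale=50.0)   # slow-dissolving test product

print(f"Cmax ratio = {test.max() / ref.max():.2f}")
print(f"AUC ratio  = {auc_trapezoid(t, test) / auc_trapezoid(t, ref):.2f}")
```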

  18. Comparison of the Incremental Validity of the Old and New MCAT.

    ERIC Educational Resources Information Center

    Wolf, Fredric M.; And Others

    The predictive and incremental validity of both the Old and New Medical College Admission Test (MCAT) was examined and compared with a sample of over 300 medical students. Results of zero order and incremental validity coefficients, as well as prediction models resulting from all possible subsets regression analyses using Mallow's Cp criterion,…

  19. The Model Analyst’s Toolkit: Scientific Model Development, Analysis, and Validation

    DTIC Science & Technology

    2015-02-20

    being integrated within MAT, including Granger causality. Granger causality tests whether a data series helps when predicting future values of another… Granger, C. W. J. (1969). Investigating causal relations by econometric models and cross-spectral methods. Econometrica: Journal of the Econometric Society, 424-438. Granger, C. W. (1980). Testing… …testing dataset. This effort is described in Section 3.2. 3.1. Improvements in Granger Causality User Interface: Various metrics of causality are…

  20. Creating an automated trigger for sepsis clinical decision support at emergency department triage using machine learning

    PubMed Central

    Halpern, Yoni; Jernite, Yacine; Shapiro, Nathan I.; Nathanson, Larry A.

    2017-01-01

    Objective To demonstrate the incremental benefit of using free text data in addition to vital sign and demographic data to identify patients with suspected infection in the emergency department. Methods This was a retrospective, observational cohort study performed at a tertiary academic teaching hospital. All consecutive ED patient visits between 12/17/08 and 2/17/13 were included. No patients were excluded. The primary outcome measure was infection diagnosed in the emergency department, defined as a patient having an infection-related ED ICD-9-CM discharge diagnosis. Patients were randomly allocated to train (64%), validate (20%), and test (16%) data sets. After preprocessing the free text using bigram and negation detection, we built four models to predict infection, incrementally adding vital signs, chief complaint, and free text nursing assessment. We used two different methods to represent free text: a bag-of-words model and a topic model. We then used a support vector machine to build the prediction model. We calculated the area under the receiver operating characteristic curve to compare the discriminatory power of each model. Results A total of 230,936 patient visits were included in the study. Approximately 14% of patients had the primary outcome of diagnosed infection. The area under the ROC curve (AUC) for the vitals model, which used only vital signs and demographic data, was 0.67 for the training data set, 0.67 for the validation data set, and 0.67 (95% CI 0.65–0.69) for the test data set. The AUC for the chief complaint model, which also included demographic and vital sign data, was 0.84 for the training data set, 0.83 for the validation data set, and 0.83 (95% CI 0.81–0.84) for the test data set. The best-performing methods made use of all of the free text. In particular, the AUC for the bag-of-words model was 0.89 for the training data set, 0.86 for the validation data set, and 0.86 (95% CI 0.85–0.87) for the test data set. The AUC for the topic model was 0.86 for the training data set, 0.86 for the validation data set, and 0.85 (95% CI 0.84–0.86) for the test data set. Conclusion Compared to previous work that only used structured data such as vital signs and demographic information, utilizing free text drastically improves the discriminatory ability (increase in AUC from 0.67 to 0.86) of identifying infection. PMID:28384212

  1. Creating an automated trigger for sepsis clinical decision support at emergency department triage using machine learning.

    PubMed

    Horng, Steven; Sontag, David A; Halpern, Yoni; Jernite, Yacine; Shapiro, Nathan I; Nathanson, Larry A

    2017-01-01

    To demonstrate the incremental benefit of using free text data in addition to vital sign and demographic data to identify patients with suspected infection in the emergency department. This was a retrospective, observational cohort study performed at a tertiary academic teaching hospital. All consecutive ED patient visits between 12/17/08 and 2/17/13 were included. No patients were excluded. The primary outcome measure was infection diagnosed in the emergency department, defined as a patient having an infection-related ED ICD-9-CM discharge diagnosis. Patients were randomly allocated to train (64%), validate (20%), and test (16%) data sets. After preprocessing the free text using bigram and negation detection, we built four models to predict infection, incrementally adding vital signs, chief complaint, and free text nursing assessment. We used two different methods to represent free text: a bag-of-words model and a topic model. We then used a support vector machine to build the prediction model. We calculated the area under the receiver operating characteristic curve to compare the discriminatory power of each model. A total of 230,936 patient visits were included in the study. Approximately 14% of patients had the primary outcome of diagnosed infection. The area under the ROC curve (AUC) for the vitals model, which used only vital signs and demographic data, was 0.67 for the training data set, 0.67 for the validation data set, and 0.67 (95% CI 0.65-0.69) for the test data set. The AUC for the chief complaint model, which also included demographic and vital sign data, was 0.84 for the training data set, 0.83 for the validation data set, and 0.83 (95% CI 0.81-0.84) for the test data set. The best-performing methods made use of all of the free text. In particular, the AUC for the bag-of-words model was 0.89 for the training data set, 0.86 for the validation data set, and 0.86 (95% CI 0.85-0.87) for the test data set. The AUC for the topic model was 0.86 for the training data set, 0.86 for the validation data set, and 0.85 (95% CI 0.84-0.86) for the test data set. Compared to previous work that only used structured data such as vital signs and demographic information, utilizing free text drastically improves the discriminatory ability (increase in AUC from 0.67 to 0.86) of identifying infection.
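
    The modeling approach described in these two records maps onto a compact scikit-learn pipeline; the sketch below uses a handful of made-up triage notes in place of the ED dataset.

```python
# Bag-of-words (unigram + bigram) features with a linear SVM, scored by AUC.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

train_notes = ["fever and productive cough", "ankle sprain after fall",
               "dysuria and flank pain", "chest pain radiating to arm"]
train_labels = [1, 0, 1, 0]  # 1 = infection-related discharge diagnosis
test_notes = ["cough with fever", "knee injury playing soccer"]
test_labels = [1, 0]

model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(train_notes, train_labels)
scores = model.decision_function(test_notes)
print(f"AUC = {roc_auc_score(test_labels, scores):.2f}")
```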

  2. Nonparametric model validations for hidden Markov models with applications in financial econometrics.

    PubMed

    Zhao, Zhibiao

    2011-06-01

    We address the nonparametric model validation problem for hidden Markov models with partially observable variables and hidden states. We achieve this goal by constructing a nonparametric simultaneous confidence envelope for transition density function of the observable variables and checking whether the parametric density estimate is contained within such an envelope. Our specification test procedure is motivated by a functional connection between the transition density of the observable variables and the Markov transition kernel of the hidden states. Our approach is applicable for continuous time diffusion models, stochastic volatility models, nonlinear time series models, and models with market microstructure noise.

  3. Development, test-retest reliability and validity of the Pharmacy Value-Added Services Questionnaire (PVASQ).

    PubMed

    Tan, Christine L; Hassali, Mohamed A; Saleem, Fahad; Shafie, Asrul A; Aljadhey, Hisham; Gan, Vincent B

    2015-01-01

    (i) To develop the Pharmacy Value-Added Services Questionnaire (PVASQ) using emerging themes generated from interviews. (ii) To establish the reliability and validity of the questionnaire instrument. Using an extended Theory of Planned Behavior as the theoretical model, face-to-face interviews generated salient beliefs about pharmacy value-added services. The PVASQ was constructed initially in English, incorporating important themes, and later translated into the Malay language with forward and backward translation. Intention (INT) to adopt pharmacy value-added services is predicted by attitudes (ATT), subjective norms (SN), perceived behavioral control (PBC), knowledge and expectations. Using a 7-point Likert-type scale and a dichotomous scale, test-retest reliability (N=25) was assessed by administering the questionnaire instrument twice at an interval of one week. Internal consistency was measured by Cronbach's alpha, and agreement between the two administrations was assessed using the kappa statistic and the intraclass correlation coefficient (ICC). Confirmatory Factor Analysis, CFA (N=410), was conducted to assess the construct validity of the PVASQ. The kappa coefficients indicate a moderate to almost perfect strength of agreement between test and retest. The ICC for all scales tested for intra-rater (test-retest) reliability was good. The overall Cronbach's alpha (N=25) was 0.912 and 0.908 for the two time points. The results of CFA (N=410) showed most items loaded strongly and correctly onto corresponding factors. Only one item was eliminated. This study is the first to develop and establish the reliability and validity of the Pharmacy Value-Added Services Questionnaire instrument using the Theory of Planned Behavior as the theoretical model. The translated Malay-language version of the PVASQ is reliable and valid to predict Malaysian patients' intention to adopt pharmacy value-added services to collect partial medicine supply.

  4. Development and psychometric testing of the Knowledge, Attitudes and Practices (KAP) questionnaire among student Tuberculosis (TB) Patients (STBP-KAPQ) in China.

    PubMed

    Fan, Yahui; Zhang, Shaoru; Li, Yan; Li, Yuelu; Zhang, Tianhua; Liu, Weiping; Jiang, Hualin

    2018-05-08

    TB outbreaks in schools are extremely complex and present a major challenge for public health. Understanding the knowledge, attitudes and practices among student TB patients in such settings is fundamental to decreasing future TB cases. The objective of this study was to develop a Knowledge, Attitudes and Practices Questionnaire among Student Tuberculosis Patients (STBP-KAPQ) and evaluate its psychometric properties. This study was conducted in three stages: item construction, pilot testing in 10 student TB patients, and psychometric testing, including reliability and validity. The item pool for the questionnaire was compiled from a literature review and early individual interviews. The questionnaire items were evaluated by the Delphi method with a panel of 12 experts. Reliability and validity were assessed using student TB patients (n = 416) and healthy students (n = 208). Reliability was examined with internal consistency reliability and test-retest reliability. Content validity was calculated by the content validity index (CVI); construct validity was examined using exploratory factor analysis (EFA) and confirmatory factor analysis (CFA); the Public Tuberculosis Knowledge, Attitudes and Practices Questionnaire (PTB-KAPQ) was applied to evaluate criterion validity; and for discriminant validity, t-tests were performed. The final STBP-KAPQ consisted of three dimensions and 25 items. Cronbach's α coefficient and the intraclass correlation coefficient (ICC) were 0.817 and 0.765, respectively. The CVI was 0.962. Seven common factors were extracted by principal factor analysis and varimax rotation, with a cumulative contribution of 66.253%. The resulting CFA model of the STBP-KAPQ exhibited an appropriate model fit (χ²/df = 1.74, RMSEA = 0.082, CFI = 0.923, NNFI = 0.962). The STBP-KAPQ and PTB-KAPQ had a strong correlation in the knowledge part, with a correlation coefficient of 0.606 (p < 0.05). Discriminant validity was supported through a significant difference between student TB patients and healthy students across all domains (p < 0.05). An instrument, the "Knowledge, Attitudes and Practices Questionnaire among Student Tuberculosis Patients (STBP-KAPQ)", was developed. Psychometric testing indicated that it had adequate validity and reliability for use in KAP research with student TB patients in China. The new tool might help public health researchers evaluate the level of KAP in student TB patients, and it could also be used to examine the effects of TB health education.

  5. Longitudinal train dynamics model for a rail transit simulation system

    DOE PAGES

    Wang, Jinghui; Rakha, Hesham A.

    2018-01-01

    The paper develops a longitudinal train dynamics model in support of microscopic railway transportation simulation. The model can be calibrated without any mechanical data, making it ideal for implementation in transportation simulators. The calibration and validation work is based on data collected from the Portland light rail train fleet. The calibration procedure is mathematically formulated as a constrained non-linear optimization problem. The validity of the model is assessed by comparing instantaneous model predictions against field observations, and also evaluated in the domains of acceleration/deceleration versus speed and acceleration/deceleration versus distance. A test is conducted to investigate the adequacy of the model in simulation implementation. The results demonstrate that the proposed model can adequately capture instantaneous train dynamics, and provides good performance in the simulation test. Thus, the model provides a simple theoretical foundation for microscopic simulators and will significantly support the planning, management and control of railway transportation systems.

  6. Longitudinal train dynamics model for a rail transit simulation system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jinghui; Rakha, Hesham A.

    The paper develops a longitudinal train dynamics model in support of microscopic railway transportation simulation. The model can be calibrated without any mechanical data, making it ideal for implementation in transportation simulators. The calibration and validation work is based on data collected from the Portland light rail train fleet. The calibration procedure is mathematically formulated as a constrained non-linear optimization problem. The validity of the model is assessed by comparing instantaneous model predictions against field observations, and also evaluated in the domains of acceleration/deceleration versus speed and acceleration/deceleration versus distance. A test is conducted to investigate the adequacy of the model in simulation implementation. The results demonstrate that the proposed model can adequately capture instantaneous train dynamics, and provides good performance in the simulation test. Thus, the model provides a simple theoretical foundation for microscopic simulators and will significantly support the planning, management and control of railway transportation systems.
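
    The constrained calibration described above can be sketched on a toy longitudinal dynamics model of the form a(v) = c0 - c1*v - c2*v**2; the coefficients and the synthetic speed/acceleration data below are illustrative, not the Portland fleet data.

```python
# Constrained nonlinear least-squares calibration with scipy.optimize.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
v_obs = rng.uniform(0.0, 20.0, 100)  # speeds, m/s
a_obs = 1.2 - 0.02 * v_obs - 0.001 * v_obs**2 + rng.normal(0, 0.05, 100)

def sse(c):
    """Sum of squared residuals between modeled and observed acceleration."""
    a_model = c[0] - c[1] * v_obs - c[2] * v_obs**2
    return np.sum((a_model - a_obs) ** 2)

# Physically motivated constraints: all coefficients non-negative.
result = minimize(sse, x0=[1.0, 0.01, 0.001],
                  bounds=[(0, None), (0, None), (0, None)])
print("calibrated coefficients:", np.round(result.x, 4))
```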

  7. Quantification of Dynamic Model Validation Metrics Using Uncertainty Propagation from Requirements

    NASA Technical Reports Server (NTRS)

    Brown, Andrew M.; Peck, Jeffrey A.; Stewart, Eric C.

    2018-01-01

    The Space Launch System, NASA's new large launch vehicle for long-range space exploration, is presently in the final design and construction phases, with the first launch scheduled for 2019. A dynamic model of the system has been created and is critical for calculation of interface loads and natural frequencies and mode shapes for guidance, navigation, and control (GNC). Because of program and schedule constraints, a single modal test of the SLS will be performed while it is bolted down to the Mobile Launch Pad just before the first launch. A Monte Carlo and optimization scheme will be used to create thousands of possible models based on given dispersions in model properties and to determine which model best fits the natural frequencies and mode shapes from the modal test. However, the question still remains as to whether this model is acceptable for the loads and GNC requirements. An uncertainty propagation and quantification (UP and UQ) technique has therefore been developed to derive a quantitative set of validation metrics based on the flight requirements, and it is discussed in this paper. There has been considerable research on UQ, UP, and validation in the literature, but very little on propagating the uncertainties from requirements, so most validation metrics are "rules of thumb"; this research seeks to develop more reasoned, requirements-based metrics. One of the main assumptions used to achieve this task is that the uncertainty in the modeling of the fixed boundary condition is accurately characterized, so that same uncertainty can be used in propagating from the fixed test configuration to the free-free flight configuration. The second main technique applied here is the use of the limit-state formulation to quantify the final probabilistic parameters and to compare them with the requirements. These techniques are explored with a simple lumped spring-mass system and a simplified SLS model. When completed, it is anticipated that this requirements-based validation metric will provide a quantified confidence and probability of success for the final SLS dynamics model, which will be critical for a successful launch program, and can be applied in the many other industries where an accurate dynamic model is required.
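
    The Monte Carlo dispersion idea above can be miniaturized to a single-degree-of-freedom spring-mass system; the dispersions and the ±5% frequency requirement below are hypothetical, purely to show the mechanics of the scheme.

```python
# Sample stiffness/mass dispersions and check a frequency requirement.
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
k = rng.normal(1.0e6, 5.0e4, n)   # stiffness, N/m (5% dispersion)
m = rng.normal(250.0, 10.0, n)    # mass, kg (4% dispersion)

f = np.sqrt(k / m) / (2.0 * np.pi)             # natural frequency, Hz
f_nom = np.sqrt(1.0e6 / 250.0) / (2.0 * np.pi)

within = np.abs(f / f_nom - 1.0) <= 0.05       # hypothetical +/-5% requirement
print(f"nominal f1 = {f_nom:.2f} Hz; fraction within 5%: {within.mean():.3f}")
```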

  8. Wind tunnel validation of AeroDyn within LIFES50+ project: imposed Surge and Pitch tests

    NASA Astrophysics Data System (ADS)

    Bayati, I.; Belloli, M.; Bernini, L.; Zasso, A.

    2016-09-01

    This paper presents the first set of results of the steady and unsteady wind tunnel tests, performed at the Politecnico di Milano wind tunnel, on a 1/75 rigid scale model of the DTU 10 MW wind turbine, within the LIFES50+ project. The aim of these tests is the validation of the open-source code AeroDyn developed at NREL. Steady numerical and experimental results are compared in terms of thrust and torque coefficients, showing good agreement; unsteady measurements were gathered with a 2-degree-of-freedom test rig capable of imposing displacements at the base of the model, providing the surge and pitch motion of the floating offshore wind turbine (FOWT) scale model. The measurements from the unsteady test configuration are compared with results from the AeroDyn/Dynin module, which implements the generalized dynamic wake (GDW) model. The numerical and experimental comparison showed similar behaviour in terms of non-linear hysteresis; however, some discrepancies are reported that require further data analysis and interpretation of the aerodynamic integral quantities, with special attention to the physics of the unsteady phenomenon.

  9. Development of rotorcraft interior noise control concepts. Phase 2: Full scale testing, revision 1

    NASA Technical Reports Server (NTRS)

    Yoerkie, C. A.; Gintoli, P. J.; Moore, J. A.

    1986-01-01

    The phase 2 effort consisted of a series of ground and flight test measurements to obtain data for validation of the Statistical Energy Analysis (SEA) model. Included in the ground tests were various transfer function measurements between vibratory and acoustic subsystems, vibration and acoustic decay rate measurements, and coherent source measurements. The bulk of these, the vibration transfer functions, were used for SEA model validation, while the others provided information for characterization of damping and reverberation time of the subsystems. The flight test program included measurements of cabin and cockpit sound pressure level, frame and panel vibration level, and vibration levels at the main transmission attachment locations. Comparisons between measured and predicted subsystem excitation levels from both ground and flight testing were evaluated. The ground test data show good correlation with predictions of vibration levels throughout the cabin overhead for all excitations. The flight test results also indicate excellent correlation of inflight sound pressure measurements to sound pressure levels predicted by the SEA model, where the average aircraft speech interference level is predicted within 0.2 dB.

  10. Morpheus Lander Roll Control System and Wind Modeling

    NASA Technical Reports Server (NTRS)

    Gambone, Elisabeth A.

    2014-01-01

    The Morpheus prototype lander is a testbed capable of vertical takeoff and landing developed by NASA Johnson Space Center to assess advanced space technologies. Morpheus completed a series of flight tests at Kennedy Space Center to demonstrate autonomous landing and hazard avoidance for future exploration missions. As a prototype vehicle being tested in Earth's atmosphere, Morpheus requires a robust roll control system to counteract aerodynamic forces. This paper describes the control algorithm designed that commands jet firing and delay times based on roll orientation. Design, analysis, and testing are supported using a high fidelity, 6 degree-of-freedom simulation of vehicle dynamics. This paper also details the wind profiles generated using historical wind data, which are necessary to validate the roll control system in the simulation environment. In preparation for Morpheus testing, the wind model was expanded to create day-of-flight wind profiles based on data delivered by Kennedy Space Center. After the test campaign, a comparison of flight and simulation performance was completed to provide additional model validation.

  11. Analysis and Ground Testing for Validation of the Inflatable Sunshield in Space (ISIS) Experiment

    NASA Technical Reports Server (NTRS)

    Lienard, Sebastien; Johnston, John; Adams, Mike; Stanley, Diane; Alfano, Jean-Pierre; Romanacci, Paolo

    2000-01-01

    The Next Generation Space Telescope (NGST) design requires a large sunshield to protect the large aperture mirror and instrument module from constant solar exposure at its L2 orbit. The structural dynamics of the sunshield must be modeled in order to predict disturbances to the observatory attitude control system and gauge effects on line-of-sight jitter. Models of large, non-linear membrane systems are not well understood and have not been successfully demonstrated. To answer questions about sunshield dynamic behavior and demonstrate controlled deployment, the NGST project is flying a Pathfinder experiment, the Inflatable Sunshield in Space (ISIS). This paper discusses in detail the modeling and ground-testing efforts performed at the Goddard Space Flight Center to: validate analytical tools for characterizing the dynamic behavior of the deployed sunshield, qualify the experiment for the Space Shuttle, and verify the functionality of the system. Included in the discussion will be test parameters, test setups, problems encountered, and test results.

  12. Validation, Proof-of-Concept, and Postaudit of the Groundwater Flow and Transport Model of the Project Shoal Area

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahmed Hassan

    2004-09-01

    The groundwater flow and radionuclide transport model characterizing the Shoal underground nuclear test has been accepted by the State of Nevada Division of Environmental Protection. According to the Federal Facility Agreement and Consent Order (FFACO) between DOE and the State of Nevada, the next steps in the closure process for the site are model validation (or postaudit), the proof-of-concept, and the long-term monitoring stage. This report addresses the development of the validation strategy for the Shoal model, needed for preparing the subsurface Corrective Action Decision Document-Corrective Action Plan, and the development of the proof-of-concept tools needed during the five-year monitoring/validation period. The approach builds on a previous model, but is adapted and modified to the site-specific conditions and challenges of the Shoal site.

  13. Fracture simulation of restored teeth using a continuum damage mechanics failure model.

    PubMed

    Li, Haiyan; Li, Jianying; Zou, Zhenmin; Fok, Alex Siu-Lun

    2011-07-01

    The aim of this paper is to validate the use of a finite-element (FE) based continuum damage mechanics (CDM) failure model to simulate the debonding and fracture of restored teeth. Fracture testing of plastic model teeth, with or without a standard Class-II MOD (mesial-occlusal-distal) restoration, was carried out to investigate their fracture behavior. In parallel, 2D FE models of the teeth were constructed and analyzed using the commercial FE software ABAQUS. A CDM failure model, implemented into ABAQUS via the user element subroutine (UEL), was used to simulate the debonding and/or final fracture of the model teeth under a compressive load. The material parameters needed for the CDM model to simulate fracture were obtained through separate mechanical tests. The predicted results were then compared with the experimental data of the fracture tests to validate the failure model. The failure processes of the intact and restored model teeth were successfully reproduced by the simulation. However, the fracture parameters obtained from testing small specimens need to be adjusted to account for the size effect. The results indicate that the CDM model is a viable model for the prediction of debonding and fracture in dental restorations. Copyright © 2011 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
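
    The continuum damage mechanics idea can be reduced to one dimension for illustration: stress follows (1 - D)·E·ε, with the damage variable D evolving between a damage-initiation strain and a failure strain. The modulus and strains in the sketch below are generic values, not the paper's calibrated parameters.

```python
# 1D linear-softening damage law: D = 0 below eps_0, D = 1 at eps_f.
import numpy as np

E = 3.0e9      # Young's modulus, Pa (generic polymer-like value)
eps_0 = 0.01   # strain at damage initiation
eps_f = 0.03   # strain at complete failure

def damage(eps):
    """Damage chosen so stress decays linearly from the peak to zero."""
    d = (eps_f / eps) * (eps - eps_0) / (eps_f - eps_0)
    return np.clip(np.where(eps <= eps_0, 0.0, d), 0.0, 1.0)

eps = np.linspace(1e-6, 0.035, 8)
sigma = (1.0 - damage(eps)) * E * eps
for e, s in zip(eps, sigma):
    print(f"strain {e:.4f} -> stress {s / 1e6:7.2f} MPa")
```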

  14. Development and testing of the CALDs and CLES+T scales for international nursing students' clinical learning environments.

    PubMed

    Mikkonen, Kristina; Elo, Satu; Miettunen, Jouko; Saarikoski, Mikko; Kääriäinen, Maria

    2017-08-01

    The purpose of this study was to develop and test the psychometric properties of the new Cultural and Linguistic Diversity scale, which is designed to be used with the newly validated Clinical Learning Environment, Supervision and Nurse Teacher scale for assessing international nursing students' clinical learning environments. In various developed countries, clinical placements are known to present challenges in the professional development of international nursing students. A cross-sectional survey. Data were collected from eight Finnish universities of applied sciences offering nursing degree courses taught in English during 2015-2016. All the relevant students (N = 664) were invited and 50% chose to participate. Of the total data submitted by the participants, 28% were used for scale validation. The construct validity of the two scales was tested by exploratory factor analysis, while their validity with respect to convergence and discriminability was assessed using Spearman's correlation. Construct validation of the Clinical Learning Environment, Supervision and Nurse Teacher scale yielded an eight-factor model with 34 items, while validation of the Cultural and Linguistic Diversity scale yielded a five-factor model with 21 items. A new scale was developed to improve evidence-based mentorship of international nursing students in clinical learning environments. The instrument will be useful to educators seeking to identify factors that affect the learning of international students. © 2017 John Wiley & Sons Ltd.

  15. Development and Validation of the Total HUman Model for Safety (THUMS) Toward Further Understanding of Occupant Injury Mechanisms in Precrash and During Crash.

    PubMed

    Iwamoto, Masami; Nakahira, Yuko; Kimpara, Hideyuki

    2015-01-01

    Active safety devices such as the automatic emergency brake (AEB) and precrash seat belt have the potential to accomplish further reduction in the number of fatalities due to automotive accidents. However, their effectiveness should be investigated by more accurate estimations of their interaction with human bodies. Computational human body models are suitable for investigation, especially considering muscular tone effects on occupant motions and injury outcomes. However, the conventional modeling approaches such as multibody models and detailed finite element (FE) models have advantages and disadvantages in computational costs and injury predictions considering muscular tone effects. The objective of this study is to develop and validate a human body FE model with whole body muscles, which can be used for the detailed investigation of interaction between human bodies and vehicular structures, including safety devices, precrash and during a crash, with relatively low computational costs. In this study, we developed a human body FE model called THUMS (Total HUman Model for Safety) with the body size of a 50th-percentile adult male (AM50) and a sitting posture. The model has anatomical structures of bones, ligaments, muscles, brain, and internal organs. The total number of elements is 281,260, which would realize relatively low computational costs. Deformable material models were assigned to all body parts. The muscle-tendon complexes were modeled by truss elements with Hill-type muscle material and seat belt elements with tension-only material. The THUMS was validated against 35 series of cadaver or volunteer test data on frontal, lateral, and rear impacts. Model validations for 15 series of cadaver test data associated with frontal impacts are presented in this article. The THUMS with a vehicle sled model was applied to investigate effects of muscle activations on occupant kinematics and injury outcomes in specific frontal impact situations with AEB. In the validations using 5 series of cadaver test data, force-time curves predicted by the THUMS were quantitatively evaluated using CORA (CORrelation and Analysis), which showed good or acceptable agreement with cadaver test data in most cases. The investigation of muscular effects showed that muscle activation levels and timing had significant effects on occupant kinematics and injury outcomes. Although further studies on accident injury reconstruction are needed, the THUMS has the potential for predictions of occupant kinematics and injury outcomes considering muscular tone effects with relatively low computational costs.
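
    The Hill-type muscle material mentioned above computes active force as F = a(t)·Fmax·fl(L)·fv(V); the sketch below uses generic textbook curve shapes and constants, not the THUMS material data.

```python
# Hedged Hill-type active force sketch: Gaussian force-length curve and a
# hyperbolic force-velocity curve; all constants are illustrative.
import math

def hill_force(activation, f_max, l_norm, v_norm):
    """activation in [0, 1]; l_norm, v_norm are normalized length/velocity."""
    fl = math.exp(-((l_norm - 1.0) / 0.45) ** 2)   # force-length relation
    if v_norm < 0.0:                               # shortening branch
        fv = (1.0 + v_norm) / (1.0 - v_norm / 0.25)
    else:                                          # lengthening branch
        fv = 1.5 - 0.5 * (1.0 - v_norm) / (1.0 + 7.5 * v_norm)
    return activation * f_max * fl * max(fv, 0.0)

# Isometric contraction at optimal length, 60% activation, Fmax = 1000 N:
print(f"{hill_force(0.6, 1000.0, 1.0, 0.0):.1f} N")  # -> 600.0 N
```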

  16. Resolution of Forces and Strain Measurements from an Acoustic Ground Test

    NASA Technical Reports Server (NTRS)

    Smith, Andrew M.; LaVerde, Bruce T.; Hunt, Ronald; Waldon, James M.

    2013-01-01

    The conservatism in typical vibration tests was demonstrated: component-level vibration testing produced force reactions approximately a factor of 4 (approximately 12 dB) more conservative than the integrated acoustic test in 2 of 3 axes. Reaction forces estimated at the base of equipment using a finite-element-based method were validated: the FEM-based estimate of interface forces may be adequate to guide development of vibration test criteria with less conservatism. Element forces estimated in secondary-structure struts were also validated: the finite element approach provided the best estimate of axial strut forces in the frequency range below 200 Hz, where a rigid lumped-mass assumption for the entire electronics box was valid. Models with enough fidelity to represent the diminishing apparent mass of equipment are better suited for estimating force reactions across the frequency range. Forward work: demonstrate the reduction in conservatism provided by the current force-limited approach and by an FEM-guided approach, and validate the proposed CMS approach to estimate coupled response from uncoupled system characteristics for vibroacoustics.

  17. Fatigue Failure of Space Shuttle Main Engine Turbine Blades

    NASA Technical Reports Server (NTRS)

    Swanson, Gregory R.; Arakere, Nagaraj K.

    2000-01-01

    Experimental validation of finite element modeling of single-crystal turbine blades is presented. Experimental results from uniaxial high cycle fatigue (HCF) test specimens and full-scale Space Shuttle Main Engine test firings with the High Pressure Fuel Turbopump/Alternate Turbopump (HPFTP/AT) provide the data used for the validation. The conclusions show that the crystal orientation within the blade contributes significantly to the resulting life of the component, that the analysis can predict this variation, and that experimental testing demonstrates it.

  18. Assessment of Differential Item Functioning under Cognitive Diagnosis Models: The DINA Model Example

    ERIC Educational Resources Information Center

    Li, Xiaomin; Wang, Wen-Chung

    2015-01-01

    The assessment of differential item functioning (DIF) is routinely conducted to ensure test fairness and validity. Although many DIF assessment methods have been developed in the context of classical test theory and item response theory, they are not applicable for cognitive diagnosis models (CDMs), as the underlying latent attributes of CDMs are…

  19. Structural Validity of CLASS K-3 in Primary Grades: Testing Alternative Models

    ERIC Educational Resources Information Center

    Sandilos, Lia E.; Shervey, Sarah Wollersheim; DiPerna, James C.; Lei, Puiwa; Cheng, Weiyi

    2017-01-01

    This study examined the internal structure of the Classroom Assessment Scoring System (CLASS; K-3 version). The original CLASS K-3 model (Pianta, La Paro, & Hamre, 2008) and 5 alternative models were tested using confirmatory factor analysis with a sample of first- and second-grade classrooms (N = 141). Findings indicated that a slightly…

  20. Computational identification of structural factors affecting the mutagenic potential of aromatic amines: study design and experimental validation.

    PubMed

    Slavov, Svetoslav H; Stoyanova-Slavova, Iva; Mattes, William; Beger, Richard D; Brüschweiler, Beat J

    2018-07-01

    A grid-based, alignment-independent 3D-SDAR (three-dimensional spectral data-activity relationship) approach based on simulated 13C and 15N NMR chemical shifts augmented with through-space interatomic distances was used to model the mutagenicity of 554 primary and 419 secondary aromatic amines. A robust modeling strategy supported by extensive validation, including randomized training/hold-out test set pairs, validation sets, "blind" external test sets, as well as experimental validation, was applied to avoid over-parameterization and build Organization for Economic Cooperation and Development (OECD 2004)-compliant models. Based on an experimental validation set of 23 chemicals tested in a two-strain Salmonella typhimurium Ames assay, 3D-SDAR was able to achieve performance comparable to 5-strain (Ames) predictions by Lhasa Limited's Derek and Sarah Nexus for the same set. Furthermore, mapping of the most frequently occurring bins on the primary and secondary aromatic amine structures allowed the identification of molecular features that were associated either positively or negatively with mutagenicity. Prominent structural features found to enhance the mutagenic potential included: nitrobenzene moieties, conjugated π-systems, nitrothiophene groups, and aromatic hydroxylamine moieties. 3D-SDAR was also able to capture "true" negative contributions that are particularly difficult to detect through alternative methods. These include sulphonamide, acetamide, and other functional groups, which not only lack contributions to the overall mutagenic potential, but are known to actively lower it, if present in the chemical structures of what otherwise would be potential mutagens.

  1. Validating a model that predicts daily growth and feed quality of New Zealand dairy pastures.

    PubMed

    Woodward, S J

    2001-09-01

    The Pasture Quality (PQ) model is a simple, mechanistic, dynamical system model that was designed to capture the essential biological processes in grazed grass-clover pasture, and to be optimised to derive improved grazing strategies for New Zealand dairy farms. While the individual processes represented in the model (photosynthesis, tissue growth, flowering, leaf death, decomposition, worms) were based on experimental data, this did not guarantee that the assembled model would accurately predict the behaviour of the system as a whole (i.e., pasture growth and quality). Validation of the whole model was thus a priority, since any strategy derived from the model could impact a farm business in the order of thousands of dollars per annum if adopted. This paper describes the process of defining performance criteria for the model, obtaining suitable data to test the model, and carrying out the validation analysis. The validation process highlighted a number of weaknesses in the model, which will lead to the model being improved. As a result, the model's utility will be enhanced. Furthermore, validation was found to have an unexpected additional benefit, in that despite the model's poor initial performance, support was generated for the model among field scientists involved in the wider project.

  2. Model Validation of an RSRM Transporter Through Full-scale Operational and Modal Testing

    NASA Technical Reports Server (NTRS)

    Brillhart, Ralph; Davis, Joshua; Allred, Bradley

    2009-01-01

    The Reusable Solid Rocket Motor (RSRM) segments, which are part of the current Space Shuttle system and will provide the first stage of the Ares launch vehicle, must be transported from their manufacturing facility in Promontory, Utah, to a railhead in Corinne, Utah. This approximately 25-mile trip on secondary paved roads is accomplished using a special transporter system which lifts and conveys each individual segment. ATK Launch Systems (ATK) has recently obtained a new set of these transporters from Scheuerle, a company in Germany. The transporter is a 96-wheel, dual-tractor vehicle that supports the payload via a hydraulic suspension. Since this system is a different design than was previously used, computer modeling with validation via test is required to ensure that the environment to which the segment is exposed is not too severe for this space-critical hardware. Accurate prediction of the loads imparted to the rocket motor is essential in order to prevent damage to the segment. To develop and validate a finite element model capable of such accurate predictions, ATA Engineering, Inc., teamed with ATK to perform a modal survey of the transport system, including a forward RSRM segment. A set of electrodynamic shakers was placed around the transporter at locations capable of exciting the transporter vehicle dynamics. Forces from the shakers with varying phase combinations were applied using sinusoidal sweep excitation. The relative phase of the shaker forcing functions was adjusted to match the shape characteristics of each of several target modes, thereby customizing each sweep run for exciting a particular mode. The resulting frequency response functions (FRF) from this series of sine sweeps enabled identification of all target modes and other higher-order modes, allowing good comparison to the finite element model. Furthermore, the survey-derived modal frequencies were correlated with peak frequencies observed during road-going operating tests. This correlation enabled verification of the most significant modes contributing to real-world loading of the motor segment under transport. After traditional model updating, dynamic simulation of the transportation environment was compared to the measured operating data to provide further validation of the analysis model. Keywords: validation, correlation, modal test, rocket motor, transporter.
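
    Underlying a modal survey like this one is frequency response function (FRF) estimation; the sketch below computes the H1 estimator H(f) = Pxy/Pxx for a synthetic single-mode system, since the transporter data are not available.

```python
# H1 FRF estimate from broadband force and simulated response signals.
import numpy as np
from scipy.signal import TransferFunction, csd, lsim, welch

fs = 1024.0
t = np.arange(0.0, 60.0, 1.0 / fs)
rng = np.random.default_rng(4)
force = rng.normal(0.0, 1.0, t.size)  # broadband excitation

# Single-DOF system: 5 Hz natural frequency, 2% damping.
wn, zeta = 2.0 * np.pi * 5.0, 0.02
sys = TransferFunction([1.0], [1.0, 2.0 * zeta * wn, wn**2])
_, response, _ = lsim(sys, force, t)

f, Pxy = csd(force, response, fs=fs, nperseg=4096)
_, Pxx = welch(force, fs=fs, nperseg=4096)
H1 = Pxy / Pxx
print(f"peak FRF magnitude near {f[np.argmax(np.abs(H1))]:.2f} Hz")  # ~5 Hz
```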

  3. Modelling cognitive affective biases in major depressive disorder using rodents

    PubMed Central

    Hales, Claire A; Stuart, Sarah A; Anderson, Michael H; Robinson, Emma S J

    2014-01-01

    Major depressive disorder (MDD) affects more than 10% of the population, although our understanding of the underlying aetiology of the disease and how antidepressant drugs act to remediate symptoms is limited. Major obstacles include the lack of availability of good animal models that replicate aspects of the phenotype and tests to assay depression-like behaviour in non-human species. To date, research in rodents has been dominated by two types of assays designed to test for depression-like behaviour: behavioural despair tests, such as the forced swim test, and measures of anhedonia, such as the sucrose preference test. These tests have shown relatively good predictive validity in terms of antidepressant efficacy, but have limited translational validity. Recent developments in clinical research have revealed that cognitive affective biases (CABs) are a key feature of MDD. Through the development of neuropsychological tests to provide objective measures of CAB in humans, we have the opportunity to use 'reverse translation' to develop and evaluate whether similar methods are suitable for research into MDD using animals. The first example of this approach was reported in 2004, where rodents in a putative negative affective state were shown to exhibit pessimistic choices in a judgement bias task. Subsequent work in both judgement bias tests and a novel affective bias task suggests that these types of assay may provide translational methods for studying MDD using animals. This review considers recent work in this area and the pharmacological and translational validity of these new animal models of CABs. PMID:24467454

  4. A mobile, high-throughput semi-automated system for testing cognition in large non-primate animal models of Huntington disease.

    PubMed

    McBride, Sebastian D; Perentos, Nicholas; Morton, A Jennifer

    2016-05-30

    For reasons of cost and ethical concerns, models of neurodegenerative disorders such as Huntington disease (HD) are currently being developed in farm animals, as an alternative to non-human primates. Developing reliable methods of testing cognitive function is essential to determining the usefulness of such models. Nevertheless, cognitive testing of farm animal species presents a unique set of challenges. The primary aims of this study were to develop and validate a mobile operant system suitable for high throughput cognitive testing of sheep. We designed a semi-automated testing system with the capability of presenting stimuli (visual, auditory) and reward at six spatial locations. Fourteen normal sheep were used to validate the system using a two-choice visual discrimination task (2CVDT). Four stages of training devised to acclimatise animals to the system are also presented. All sheep progressed rapidly through the training stages, over eight sessions. All sheep learned the 2CVDT and performed at least one reversal stage. The mean number of trials the sheep took to reach criterion in the first acquisition learning was 13.9±1.5 and for the reversal learning was 19.1±1.8. This is the first mobile semi-automated operant system developed for testing cognitive function in sheep. We have designed and validated an automated operant behavioural testing system suitable for high throughput cognitive testing in sheep and other medium-sized quadrupeds, such as pigs and dogs. Sheep performance in the two-choice visual discrimination task was very similar to that reported for non-human primates and strongly supports the use of farm animals as pre-clinical models for the study of neurodegenerative diseases. Copyright © 2015 Elsevier B.V. All rights reserved.
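
    As an illustrative aside, the trials-to-criterion measure reported above can be computed from a session's binary outcome sequence. The sliding-window criterion in this sketch is a hypothetical choice for illustration, not the study's published rule.

        import numpy as np

        def trials_to_criterion(correct, window=10, threshold=8):
            # Return the trial index at which performance first reaches the
            # criterion (here: >= 8 correct responses in a window of 10).
            correct = np.asarray(correct, dtype=int)
            for end in range(window, len(correct) + 1):
                if correct[end - window:end].sum() >= threshold:
                    return end
            return None  # criterion never reached

        # Hypothetical session: 1 = correct choice, 0 = incorrect.
        session = [0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1]
        print(trials_to_criterion(session))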

  5. Reliability and validity of general health questionnaire (GHQ-12) for male tannery workers: a study carried out in Kanpur, India.

    PubMed

    Kashyap, Gyan Chandra; Singh, Shri Kant

    2017-03-21

    The purpose of this study was to test the reliability, validity and factor structure of the GHQ-12 questionnaire on male tannery workers of India. We tested three different factor models of the GHQ-12. This paper used primary data obtained from a cross-sectional household study of tannery workers from the Jajmau area of the city of Kanpur in northern India, which was conducted during January-June 2015 as part of a doctoral program. The study covered 286 tannery workers from the study area. An interview schedule containing the GHQ-12 was administered to tannery workers who had completed at least 1 year in their present occupation preceding the survey. Reliability was tested using Cronbach's alpha, and validity was assessed with a convergent test. Confirmatory factor analysis was used to compare three factor structures for the GHQ-12. A total of 286 samples were analyzed in this study. The mean age of the tannery workers in this study was 38 years (SD = 1.42). We found the alpha coefficient to be 0.93 for the complete sample, representing acceptable internal consistency for all the groups. Each item of the scale showed almost the same internal consistency of 0.93 for the male tannery workers. The correlation between factor 1 (Anxiety and Depression) and factor 2 (Social Dysfunction) was 0.92; the correlation between factor 1 (Anxiety and Depression) and factor 3 (Loss of Confidence) was the highest, at 0.98. The comparative fit index (CFI) indicated that model III fitted best, with a CFI of 0.97, and the SRMR was also lowest for model III, at 0.031. The findings suggest that the Hindi version of the GHQ-12 is a reliable and valid tool for measuring psychological distress in male tannery workers of Kanpur city, India. The study found that the model proposed by Graetz was the best-fitting model for the data.
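
    For reference, Cronbach's alpha as used above can be computed directly from an item-score matrix. This is a generic sketch with simulated placeholder scores, not the study's data.

        import numpy as np

        def cronbach_alpha(items):
            # items: (n_respondents, n_items) matrix of item scores.
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return (k / (k - 1)) * (1 - item_vars / total_var)

        # Hypothetical GHQ-12-style data: 286 respondents x 12 items scored 0-3.
        rng = np.random.default_rng(0)
        scores = rng.integers(0, 4, size=(286, 12))
        print(round(cronbach_alpha(scores), 3))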

  6. Aeroservoelastic wind-tunnel investigations using the Active Flexible Wing Model: Status and recent accomplishments

    NASA Technical Reports Server (NTRS)

    Noll, Thomas E.; Perry, Boyd, III; Tiffany, Sherwood H.; Cole, Stanley R.; Buttrill, Carey S.; Adams, William M., Jr.; Houck, Jacob A.; Srinathkumar, S.; Mukhopadhyay, Vivek; Pototzky, Anthony S.

    1989-01-01

    The status of the joint NASA/Rockwell Active Flexible Wing Wind-Tunnel Test Program is described. The objectives are to develop and validate the analysis, design, and test methodologies required to apply multifunction active control technology for improving aircraft performance and stability. Major tasks include designing digital multi-input/multi-output flutter-suppression and rolling-maneuver-load alleviation concepts for a flexible full-span wind-tunnel model, obtaining an experimental data base for the basic model and each control concept, and providing comparisons between experimental and analytical results to validate the methodologies. The opportunity is provided to improve real-time simulation techniques and to gain practical experience with digital control law implementation procedures.

  7. TIE: an ability test of emotional intelligence.

    PubMed

    Śmieja, Magdalena; Orzechowski, Jarosław; Stolarski, Maciej S

    2014-01-01

    The Test of Emotional Intelligence (TIE) is a new ability scale based on a theoretical model that defines emotional intelligence as a set of skills responsible for the processing of emotion-relevant information. Participants are provided with descriptions of emotional problems, and asked to indicate which emotion is most probable in a given situation, or to suggest the most appropriate action. Scoring is based on the judgments of experts: professional psychotherapists, trainers, and HR specialists. The validation study showed that the TIE is a reliable and valid test, suitable for both scientific research and individual assessment. Its internal consistency measures were as high as .88. In line with the theoretical model of emotional intelligence, the results of the TIE shared about 10% of common variance with a general intelligence test, and were independent of major personality dimensions.

  8. Development and Validation of High Precision Thermal, Mechanical, and Optical Models for the Space Interferometry Mission

    NASA Technical Reports Server (NTRS)

    Lindensmith, Chris A.; Briggs, H. Clark; Beregovski, Yuri; Feria, V. Alfonso; Goullioud, Renaud; Gursel, Yekta; Hahn, Inseob; Kinsella, Gary; Orzewalla, Matthew; Phillips, Charles

    2006-01-01

    SIM PlanetQuest (SIM) is a large optical interferometer for making microarcsecond measurements of the positions of stars and for detecting Earth-sized planets around nearby stars. To achieve this precision, SIM requires stability of optical components to tens of picometers per hour. The combination of SIM's large size (9 meter baseline) and the high stability requirement makes it difficult and costly to measure all aspects of system performance on the ground. To reduce risk and cost, and to allow for a design with fewer intermediate testing stages, the SIM project is developing an integrated thermal, mechanical and optical modeling process that will allow predictions of the system performance to be made at the required high precision. This modeling process uses commercial, off-the-shelf tools and has been validated against experimental results at the precision of the SIM performance requirements. This paper presents a description of the model development, some of the models, and their validation in the Thermo-Opto-Mechanical (TOM3) testbed, which includes full-scale brassboard optical components and the metrology to test them at the SIM performance requirement levels.

  9. Development and validation of rear impact computer simulation model of an adult manual transit wheelchair with a seated occupant.

    PubMed

    Salipur, Zdravko; Bertocci, Gina

    2010-01-01

    It has been shown that ANSI WC19 transit wheelchairs that are crashworthy in frontal impact exhibit catastrophic failures in rear impact and may not be able to provide stable seating support and thus occupant protection for the wheelchair occupant. Thus far only limited sled test and computer simulation data have been available to study rear impact wheelchair safety. Computer modeling can be used as an economic and comprehensive tool to gain critical knowledge regarding wheelchair integrity and occupant safety. This study describes the development and validation of a computer model simulating an adult wheelchair-seated occupant subjected to a rear impact event. The model was developed in MADYMO and validated rigorously using the results of three similar sled tests conducted to specifications provided in the draft ISO/TC 173 standard. Outcomes from the model can provide critical wheelchair loading information to wheelchair and tiedown manufacturers, resulting in safer wheelchair designs for rear impact conditions. (c) 2009 IPEM. Published by Elsevier Ltd. All rights reserved.

  10. Target Soil Impact Verification: Experimental Testing and Kayenta Constitutive Modeling.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Broome, Scott Thomas; Flint, Gregory Mark; Dewers, Thomas

    2015-11-01

    This report details experimental testing and constitutive modeling of sandy soil deformation under quasi-static conditions. This is driven by the need to understand the constitutive response of soil to target/component behavior upon impact. An experimental and constitutive modeling program was followed to determine elastic-plastic properties and a compressional failure envelope of dry soil. One hydrostatic, one unconfined compressive stress (UCS), nine axisymmetric compression (ACS), and one uniaxial strain (US) test were conducted at room temperature. Elastic moduli, assuming isotropy, are determined from unload/reload loops and final unloading for all tests pre-failure and increase monotonically with mean stress. Very little modulus degradation was discernible from elastic results, even when exposed to mean stresses above 200 MPa. The failure envelope and initial yield surface were determined from peak stresses and the observed onset of plastic yielding from all test results. Soil elasto-plastic behavior is described using the Brannon et al. (2009) Kayenta constitutive model. As a validation exercise, the ACS-parameterized Kayenta model is used to predict the response of the soil material under uniaxial strain loading. The resulting parameterized and validated Kayenta model is of high quality and suitable for modeling sandy soil deformation under a range of conditions, including that for impact prediction.
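
    As an illustrative sketch of constructing a failure envelope from peak stresses, a linear envelope can be fit in mean-stress versus shear-stress space. The data points and the linear form below are placeholders, not values or the functional form from the report.

        import numpy as np

        # Hypothetical peak states from triaxial tests: mean stress p and
        # equivalent shear stress q at failure (MPa); placeholder values only.
        p = np.array([10., 25., 50., 100., 150., 200.])
        q = np.array([8., 18., 33., 60., 85., 108.])

        # Least-squares fit of a linear envelope q = intercept + slope * p.
        slope, intercept = np.polyfit(p, q, 1)
        print(f"intercept = {intercept:.1f} MPa, slope = {slope:.3f}")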

  11. A Supervised Learning Process to Validate Online Disease Reports for Use in Predictive Models.

    PubMed

    Patching, Helena M M; Hudson, Laurence M; Cooke, Warrick; Garcia, Andres J; Hay, Simon I; Roberts, Mark; Moyes, Catherine L

    2015-12-01

    Pathogen distribution models that predict spatial variation in disease occurrence require data from a large number of geographic locations to generate disease risk maps. Traditionally, this process has used data from public health reporting systems; however, using online reports of new infections could speed up the process dramatically. Data from both public health systems and online sources must be validated before they can be used, but no mechanisms exist to validate data from online media reports. We have developed a supervised learning process to validate geolocated disease outbreak data in a timely manner. The process uses three input features: the data source and two metrics derived from the location of each disease occurrence. The location provides the probability of disease occurrence at that point, based on environmental and socioeconomic factors, and the distance within or outside the current known disease extent. The process also uses validation scores, generated by disease experts who review a subset of the data, to build a training data set. The aim of the supervised learning process is to generate validation scores that can be used as weights going into the pathogen distribution model. After analyzing the three input features and testing the performance of alternative processes, we selected a cascade of ensembles comprising logistic regressors. Parameter values for the training data subset size, number of predictors, and number of layers in the cascade were tested before the process was deployed. The final configuration was tested using data for two contrasting diseases (dengue and cholera), and 66%-79% of data points were assigned a validation score. The remaining data points are scored by the experts, and the results inform the training data set for the next set of predictors, as well as going to the pathogen distribution model. The new supervised learning process has been implemented within our live site and is being used to validate the data that our system uses to produce updated predictive disease maps on a weekly basis.
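
    A minimal sketch of the cascade-of-ensembles idea with scikit-learn follows; the two-layer structure, confidence thresholds, and simulated features are assumptions for illustration, not the published configuration.

        import numpy as np
        from sklearn.ensemble import BaggingClassifier
        from sklearn.linear_model import LogisticRegression

        # Simulated stand-ins for the three features: source reliability,
        # environmental suitability at the location, and distance to the
        # current known disease extent.
        rng = np.random.default_rng(1)
        X = rng.normal(size=(500, 3))
        y = rng.integers(0, 2, size=500)

        # Layer 1: a bagged ensemble of logistic regressors.
        layer1 = BaggingClassifier(LogisticRegression(max_iter=1000),
                                   n_estimators=25).fit(X, y)
        p1 = layer1.predict_proba(X)[:, 1]

        # Confident points receive a validation score; ambiguous ones cascade
        # to the next layer, trained on the harder subset.
        ambiguous = (p1 > 0.2) & (p1 < 0.8)
        layer2 = BaggingClassifier(LogisticRegression(max_iter=1000),
                                   n_estimators=25).fit(X[ambiguous], y[ambiguous])
        scores = np.where(ambiguous, layer2.predict_proba(X)[:, 1], p1)

    Points still ambiguous after the final layer would go to expert review, mirroring the 66%-79% automatic coverage reported above.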

  12. From military to civil loadings: Preliminary numerical-based thorax injury criteria investigations.

    PubMed

    Goumtcha, Aristide Awoukeng; Bodo, Michèle; Taddei, Lorenzo; Roth, Sébastien

    2016-03-01

    Effects of the impact of a mechanical structure on the human body are of great interest for the understanding of body trauma. Experimental tests on PMHS (Post Mortem Human Subjects) have led to first conclusions about the severity of an impact by observing impact forces or displacement time histories. They have provided valuable data for the development and validation of numerical biomechanical models. These models, widely used in the framework of automotive crashworthiness, have led to the development of numerical-based injury criteria and tolerance thresholds. The aim of this process is to improve the safety of mechanical structures in interaction with the body. In a military context, investigations at both the experimental and numerical level are less advanced. For both military and civil frameworks, the literature lists a number of numerical analyses that propose injury mechanisms and tolerance thresholds based on biofidelic finite element (FE) models of different parts of the human body. However, the link between the two frameworks is not obvious, since many parameters differ: large-mass impacts at relatively low velocity for civil loadings (falls, automotive crashworthiness) and low-mass impacts at very high velocity for military loadings (ballistic, blast). In this study, different accident cases were investigated and replicated with a previously developed and validated FE model of the human thorax named the Hermaphrodite Universal Biomechanical YX model (HUBYX model). Previous validations of this model included replications of standard experimental tests often used to validate models in the context of the automotive industry, experimental ballistic tests in high-speed dynamic impact, and numerical replication of blast loading tests, ensuring its biofidelity. In order to extend the use of this model to other frameworks, some real-world accidents were reconstructed, and the consequences of these loadings on the FE model were explored. These numerical replications of accidents from different contexts raise the question of the ability of a single FE model to correctly predict several kinds of trauma, from blast or ballistic impacts to falls, sports, or automotive impacts, in the context of investigations of numerical injury mechanisms and tolerance limits. Copyright © 2015 John Wiley & Sons, Ltd.

  13. Calibration and validation of rockfall models

    NASA Astrophysics Data System (ADS)

    Frattini, Paolo; Valagussa, Andrea; Zenoni, Stefania; Crosta, Giovanni B.

    2013-04-01

    Calibrating and validating landslide models is extremely difficult due to the particular characteristics of landslides: limited recurrence in time, relatively low frequency of the events, short durability of post-event traces, and poor availability of continuous monitoring data, especially for small landslides and rockfalls. For this reason, most of the rockfall models presented in the literature completely lack calibration and validation of the results. In this contribution, we explore different strategies for rockfall model calibration and validation starting from both an historical event and a full-scale field test. The event occurred in 2012 in Courmayeur (Western Alps, Italy) and caused serious damage to quarrying facilities. This event was studied soon after its occurrence through a field campaign aimed at mapping the blocks arrested along the slope, the shape and location of the detachment area, and the traces of scars associated with impacts of blocks on the slope. The full-scale field test was performed by Geovert Ltd in the Christchurch area (New Zealand) after the 2011 earthquake. During the test, a number of large blocks were mobilized from the upper part of the slope and filmed with high-speed cameras from different viewpoints. The movies of each released block were analysed to identify the block shape, the propagation path, the location of impacts, the height of the trajectory, and the velocity of the block along the path. Both calibration and validation of rockfall models should be based on the optimization of the agreement between the actual trajectories or locations of arrested blocks and the simulated ones. A measure that describes this agreement is therefore needed. For calibration purposes, this measure should be simple enough to allow trial-and-error repetitions of the model for parameter optimization. In this contribution we explore four calibration/validation measures: (1) the percentage of simulated blocks arresting within a buffer of the actual blocks; (2) the percentage of trajectories passing through the buffer of the actual rockfall path; (3) the mean distance between the location of arrest of each simulated block and the location of the nearest actual block; and (4) the mean distance between the location of detachment of each simulated block and the location of detachment of the actual block located closest to the arrest position. By applying the four measures to the case studies, we observed that all measures are able to represent model performance for validation purposes. However, the third measure is simpler and more reliable than the others, and seems optimal for model calibration, especially when using parameter estimation and optimization software for automated calibration.
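
    As an illustrative sketch of measure (3), the mean distance from each simulated arrest position to the nearest surveyed block can be computed with a k-d tree. The coordinates below are invented placeholders, not the case-study data.

        import numpy as np
        from scipy.spatial import cKDTree

        # Hypothetical 2D arrest positions (easting, northing) in metres.
        actual = np.array([[10., 5.], [14., 8.], [22., 3.]])
        simulated = np.random.default_rng(2).uniform(0, 25, size=(200, 2))

        tree = cKDTree(actual)
        dist, _ = tree.query(simulated)   # distance to nearest actual block
        score = dist.mean()               # lower = better agreement
        print(f"mean nearest-block distance: {score:.2f} m")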

  14. Non-Nuclear Validation Test Results of a Closed Brayton Cycle Test-Loop

    NASA Astrophysics Data System (ADS)

    Wright, Steven A.

    2007-01-01

    Both NASA and DOE have programs that are investigating advanced power conversion cycles for planetary surface power on the moon or Mars, or for next generation nuclear power plants on earth. Although open Brayton cycles are in use for many applications (combined cycle power plants, aircraft engines), only a few closed Brayton cycles have been tested. Experience with closed Brayton cycles coupled to nuclear reactors is even more limited, and current projections of Brayton cycle performance are based on analytic models. This report describes and compares experimental results with model predictions from a series of non-nuclear tests using a small-scale closed-loop Brayton cycle available at Sandia National Laboratories. A substantial amount of testing has been performed, and the information is being used to help validate models. In this report we summarize the results from three kinds of tests: 1) tests useful for validating the characteristic flow curves of the turbomachinery for various gases, ranging from ideal gases (Ar or Ar/He) to non-ideal gases such as CO2; 2) tests that represent shutdown transients and the decay heat removal capability of Brayton loops after reactor shutdown; and 3) tests that map operating power against shaft speed and turbine inlet temperature over a range of conditions, which are useful for predicting stable operating conditions during both normal and off-normal operating behavior. These tests reveal significant interactions between the reactor and the balance of plant. Specifically, these results predict limited speed-up behavior of the turbomachinery caused by loss of load and the conditions for stable operation, and, for direct-cooled reactors, the tests reveal that coast-down behavior during loss-of-power events can extend for hours provided the ultimate heat sink remains available.
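
    As a hedged sketch of why working-gas properties matter when comparing characteristic flow curves across gases, corrected mass flow and corrected speed can be used to collapse turbomachinery data. Folding the gas constant into the temperature correction is one simple generalization (specific-heat ratio neglected); the gas properties and operating point below are placeholders, not test values.

        import math

        def corrected_point(mdot, N, T_in, p_in, R,
                            T_ref=300.0, p_ref=101325.0, R_ref=287.0):
            # Standard theta/delta corrections, with the gas constant folded
            # in as a simple way to compare different working gases.
            theta = (R * T_in) / (R_ref * T_ref)
            delta = p_in / p_ref
            return mdot * math.sqrt(theta) / delta, N / math.sqrt(theta)

        # Placeholder operating point: CO2 (R = 188.9 J/kg-K) at 400 K, 0.5 MPa.
        print(corrected_point(mdot=1.2, N=60000.0, T_in=400.0, p_in=5.0e5, R=188.9))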

  15. Development of the monitoring system to detect the piping thickness reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, N. Y.; Ryu, K. H.; Oh, Y. J.

    2006-07-01

    As nuclear piping ages, secondary piping that was once considered safe can undergo wall-thickness reduction. After several accidents caused by Flow Accelerated Corrosion (FAC), guidelines and recommendations for thinned-pipe management were issued, and the need for monitoring has increased accordingly. Under thinned-pipe management programs, monitoring activities based on various analyses and on case studies of other plants are also increasing. As the number of monitoring points grows, the time needed to cover the recommended inspection area increases, while the time available to inspect the piping during an overhaul shrinks. The existing Ultrasonic Technique (UT) can cover only a small area in a given time. Moreover, it cannot be applied to complex piping geometries or to certain locations such as welded parts. In this paper, we suggest the Switching Direct Current Potential Drop (S-DCPD) method, by which we can narrow down the FAC-susceptible area. To apply DCPD, we developed both a resistance model and a Finite Element Method (FEM) model to predict DCPD feasibility. We tested an elbow specimen to compare DCPD monitoring results with UT results and to check their consistency. For the validation test, we designed a simulation loop. To determine the test conditions, we analyzed environmental parameters and introduced an applicable wear-rate model. To obtain the model parameters, we developed electrodes and analyzed the velocity profile in the test loop using the CFX code. Based on the prediction model and prototype testing results, we are planning to perform a validation test to identify the applicability of S-DCPD in the NPP environment. The validation test plan is described as future work. (authors)
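
    As an illustrative sketch of the resistance-model idea behind DCPD monitoring, the potential drop across a probe span rises as the wall thins: for a uniform conductor, V = I·ρ·L/(w·t), so at constant current and geometry V is inversely proportional to thickness t. The mapping and values below are assumptions for illustration, not the paper's model.

        def dcpd_thickness(v_meas, v_ref, t_ref):
            # For constant current, probe geometry, and resistivity,
            # V is proportional to 1/t, so t ~ t_ref * (V_ref / V_meas).
            return t_ref * (v_ref / v_meas)

        # Placeholder readings: reference wall 8.0 mm; potential up 12%.
        print(f"estimated thickness: {dcpd_thickness(1.12, 1.00, 8.0):.2f} mm")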

  16. Computational fluid dynamics modeling of laboratory flames and an industrial flare.

    PubMed

    Singh, Kanwar Devesh; Gangadharan, Preeti; Chen, Daniel H; Lou, Helen H; Li, Xianchang; Richmond, Peyton

    2014-11-01

    A computational fluid dynamics (CFD) methodology for simulating the combustion process has been validated with experimental results. Three different types of experimental setups were used to validate the CFD model: an industrial-scale flare setup and two lab-scale flames. The CFD study also involved three different fuels: C3H6/CH/Air/N2, C2H4/O2/Ar, and CH4/Air. In the first setup, flare efficiency data from the Texas Commission on Environmental Quality (TCEQ) 2010 field tests were used to validate the CFD model. In the second setup, a McKenna burner with flat flames was simulated, and temperature and mass fractions of important species were compared with the experimental data. Finally, results of an experimental study done at Sandia National Laboratories to generate a lifted jet flame were used for the purpose of validation. The reduced 50-species mechanism LU 1.1, the realizable k-epsilon turbulence model, and the EDC turbulence-chemistry interaction model were used for this work. Flare efficiency, axial profiles of temperature, and mass fractions of various intermediate species obtained in the simulation were compared with experimental data, and good agreement between the profiles was clearly observed. In particular, the simulation match with the TCEQ 2010 flare tests has been significantly improved (within 5% of the data) compared to the results reported by Singh et al. in 2012. Validation of the speciated flat-flame data supports the view that flares can be a primary source of formaldehyde emission.
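
    As context for the flare-efficiency comparisons above, one common carbon-balance definition of combustion efficiency can be evaluated from plume species fractions. The definition shown and the numbers are illustrative assumptions, not necessarily the paper's exact metric.

        def combustion_efficiency(x_co2, x_co, x_uhc_c):
            # Carbon-based combustion efficiency: fraction of carbon fully
            # oxidized to CO2 among CO2, CO, and unburned-hydrocarbon carbon.
            return x_co2 / (x_co2 + x_co + x_uhc_c)

        # Placeholder plume fractions on a carbon-equivalent basis.
        print(f"{combustion_efficiency(0.95, 0.03, 0.02):.1%}")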

  17. [New questionnaire to assess self-efficacy toward physical activity in children].

    PubMed

    Aedo, Angeles; Avila, Héctor

    2009-10-01

    To design a questionnaire for the assessment of self-efficacy toward physical activity in school children, as well as to measure its construct validity, test-retest reliability, and internal consistency. A four-stage multimethod approach was used: (1) bibliographic research followed by an exploratory study and the formulation of questions and responses based on a dichotomous scale of 14 items; (2) validation of the content by a panel of experts; (3) application of the preliminary version of the questionnaire to a sample of 900 school-aged children in Mexico City; and (4) determination of the construct validity, test-retest reliability, and internal consistency (Cronbach's alpha). Three factors were identified that explain 64.15% of the variance: the search for positive alternatives to physical activity, the ability to deal with possible barriers to exercising, and expectations of skill or competence. The model was validated using a goodness-of-fit test, and the result (65% less than 0.05) indicated that the estimated factor model fit the data. Cronbach's alpha was 0.733; test-retest reliability was 0.867. The scale designed has adequate reliability and validity. These results are a good indicator of self-efficacy toward physical activity in school children, which is important when developing programs intended to promote such behavior in this age group.
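
    As a brief sketch of the test-retest reliability reported above, the summed scores from two administrations can simply be correlated. The data below are simulated placeholders, not the study's scores.

        import numpy as np

        rng = np.random.default_rng(3)
        test = rng.integers(0, 15, size=120).astype(float)  # 14-item sum scores
        retest = test + rng.normal(0, 1.5, size=120)        # second administration

        r = np.corrcoef(test, retest)[0, 1]  # test-retest reliability coefficient
        print(round(r, 3))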

  18. Aeroservoelastic Modeling and Validation of a Thrust-Vectoring F/A-18 Aircraft

    NASA Technical Reports Server (NTRS)

    Brenner, Martin J.

    1996-01-01

    An F/A-18 aircraft was modified to perform flight research at high angles of attack (AOA) using thrust vectoring and advanced control law concepts for agility and performance enhancement and to provide a testbed for the computational fluid dynamics community. Aeroservoelastic (ASE) characteristics had changed considerably from the baseline F/A-18 aircraft because of structural and flight control system modifications, so analyses and flight tests were performed to verify structural stability at high AOA. Detailed actuator models that consider the physical, electrical, and mechanical elements of actuation and its installation on the airframe were employed in the analysis to accurately model the coupled dynamics of the airframe, actuators, and control surfaces. This report describes the ASE modeling procedure, ground test validation, flight test clearance, and test data analysis for the reconfigured F/A-18 aircraft. Multivariable ASE stability margins are calculated from flight data and compared to analytical margins. Because this thrust-vectoring configuration uses exhaust vanes to vector the thrust, the modeling issues are nearly identical for modern multi-axis nozzle configurations. This report correlates analysis results with flight test data and makes observations concerning the application of the linear predictions to thrust-vectoring and high-AOA flight.

  19. The Social Cognitive Model of Job Satisfaction among Teachers: Testing and Validation

    ERIC Educational Resources Information Center

    Badri, Masood A.; Mohaidat, Jihad; Ferrandino, Vincent; El Mourad, Tarek

    2013-01-01

    The study empirically tests an integrative model of work satisfaction in a sample of 5,022 teachers in Abu Dhabi in the United Arab Emirates. The study provided more support for the Lent and Brown (2006) model. Results revealed that this model was a strong fit for the data and accounted for 82% of the variance in work…

  20. Differential Item Functioning Assessment in Cognitive Diagnostic Modeling: Application of the Wald Test to Investigate DIF in the DINA Model

    ERIC Educational Resources Information Center

    Hou, Likun; de la Torre, Jimmy; Nandakumar, Ratna

    2014-01-01

    Analyzing examinees' responses using cognitive diagnostic models (CDMs) has the advantage of providing diagnostic information. To ensure the validity of the results from these models, differential item functioning (DIF) in CDMs needs to be investigated. In this article, the Wald test is proposed to examine DIF in the context of CDMs. This study…
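
    As a generic sketch of a Wald test for DIF between a reference and a focal group (not the authors' DINA-specific derivation), the statistic compares item-parameter estimates across groups using their covariance matrices. The parameter values and covariances below are placeholders.

        import numpy as np
        from scipy.stats import chi2

        # Placeholder item parameters (e.g., guess g and slip s) estimated
        # separately in each group, with their covariance matrices.
        theta_ref = np.array([0.18, 0.10]); cov_ref = np.diag([0.0009, 0.0006])
        theta_foc = np.array([0.25, 0.14]); cov_foc = np.diag([0.0011, 0.0008])

        diff = theta_ref - theta_foc
        W = diff @ np.linalg.inv(cov_ref + cov_foc) @ diff  # Wald statistic
        p = chi2.sf(W, df=diff.size)                        # chi-square test
        print(f"W = {W:.2f}, p = {p:.4f}")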
